This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.2 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Section C.1.3, “Changes in MySQL 5.1.44 (04 February 2010)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
It is now possible to determine, using the
ndb_desc utility or the NDB API, which data
nodes contain replicas of which partitions. For
ndb_desc, a new
--extra-node-info option is
added to cause this information to be included in its output. A
new method is added to the NDB API for obtaining this information.
On Solaris platforms, the MySQL Cluster management server and
NDB API applications now use
as the default clock.
The --with-ndb-port-base option for
configure did not function correctly, and has
been deprecated. Attempting to use this option produces the
warning Ignoring deprecated option --with-ndb-port-base.
Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning
itself is removed, and the
option is simply handled as an unknown and invalid option if you
try to use it.
See also Bug#38502.
Cluster Replication: Important Change:
In a MySQL Cluster acting as a replication slave and having
multiple SQL nodes, only the SQL node receiving events directly
from the master recorded DDL statements in its binary log, and
then only if that SQL node had binary logging enabled; other SQL
nodes in the slave cluster failed to log DDL statements,
regardless of their individual binary logging settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that the relevant logging option is
enabled on all SQL nodes performing binary logging prior
to the upgrade, so that all DDL is captured.
Logging of DML statements was not affected by this issue.
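For the second approach, binary logging can be enabled in my.cnf on each SQL node before the upgrade; a minimal sketch (the server ID and log base name shown are illustrative):

```ini
[mysqld]
ndbcluster
# Enable binary logging so that DDL executed through this SQL node is captured
log-bin=mysql-bin
# Each SQL node needs a unique server ID (value shown is illustrative)
server-id=21
```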
The pkg installer for MySQL Cluster on
Solaris did not perform a complete installation due to an
invalid directory reference in the post-install script.
With NoOfReplicas equal to 1 or 2, if
data nodes from one node group were restarted 256 times and
applications were running traffic such that it would encounter
NDB error 1204
(Temporary failure, distribution
changed), the live node in the node group would
crash, causing the cluster to crash as well. The crash occurred
only when the error was encountered on the 256th restart; having
the error on any previous or subsequent restart did not cause this problem.
If a query on an
NDB table compared
a constant string value to a column, and the length of the
string was greater than that of the column, condition pushdown
did not work correctly. (The string was truncated to fit the
column length before being pushed down.) Now in such cases, the
condition is no longer pushed down.
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug#48861)
Performing intensive inserts and deletes in parallel with a high
scan load could cause data node crashes due to a failure in the
DBACC kernel block. This was because checking
for when to perform bucket splits or merges considered the first
4 scans only.
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug#48604)
In certain cases, performing very large inserts on
NDB tables when using
ndbmtd caused the memory allocations for
ordered or unique indexes (or both) to be exceeded. This could
cause aborted transactions and possibly lead to data node failures.
See also Bug#48113.
When using an NDB native backup to
back up and restore an empty
table that used a non-sequential
AUTO_INCREMENT value, the
AUTO_INCREMENT value was not restored correctly.
Under some circumstances, when a scan encountered an error early
in processing by the
DBTC kernel block (see
DBTC Block), a node
could crash as a result. Such errors could be caused by
applications sending incorrect data, or, more rarely, by a
DROP TABLE operation executed in
parallel with a scan.
When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug#47782)
mysqld allocated an excessively large buffer
for BLOB values due to
overestimating their size. (For each row, enough space was
allocated to accommodate every
TEXT column value in the result
set.) This could adversely affect performance when using tables
with TEXT columns; in a few extreme
cases, this issue could also cause the host system to run out of memory.
When an instance of the
handler was recycled (this can happen due to table definition
cache pressure or to operations such as
FLUSH TABLES or
ALTER TABLE), if the last row
read contained blobs of zero length, the buffer was not freed,
even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');
SELECT a, LENGTH(b) FROM t ORDER BY a;
FLUSH TABLES;
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead a crash of the cluster when the node was later restarted. (Bug#47505)
NDB stores blob column data in a
separate, hidden table that is not accessible from MySQL. If
this table was missing for some reason (such as accidental
deletion of the file corresponding to the hidden table) when
making a MySQL Cluster native backup, ndb_restore crashed when
attempting to restore the backup. Now in such cases, ndb_restore
fails with the error message Table
table_name has blob column
(column_name) with missing parts
table in backup instead.
For very large values of
MaxNoOfAttributes, the calculation for
StringMemory could overflow when creating
large numbers of tables, leading to NDB error 773
(Out of string memory, please modify StringMemory
config parameter), even when
StringMemory was set to
100 (100 percent).
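As a sketch, the two parameters interact in config.ini along these lines (the values shown are illustrative, not recommendations):

```ini
[ndbd default]
# Very large attribute counts drove the string-memory estimate past the
# point where the calculation could overflow (value is illustrative)
MaxNoOfAttributes=500000
# 100 means 100 percent of the calculated string-memory estimate
StringMemory=100
```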
The default value for the
configuration parameter, unlike other MySQL Cluster
configuration parameters, was not set in
Signals from a failed API node could be received after an
API_FAILREQ signal (see
Operations and Signals)
had been received from that node, which could result in invalid
states for processing subsequent signals. Now, all pending
signals from a failing API node are processed before any
API_FAILREQ signal is received.
See also Bug#44607.
Using triggers on
NDB tables caused
to be treated as having the NDB kernel's internal default
value (32) and the value for this variable as set on the
cluster's SQL nodes to be ignored.
Full table scans failed to execute when the cluster contained more than 21 table fragments.
The number of table fragments in the cluster can be calculated
as the number of data nodes, times 8 (that is, times the value
of the internal constant
MAX_FRAG_PER_NODE), divided by the number
of replicas. Thus, with
NoOfReplicas = 1, at
least 3 data nodes were required to trigger this issue, and with
NoOfReplicas = 2, at least 6 data nodes
were required to do so.
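The calculation above can be checked with a short sketch (the constant name is taken from the text; the helper function itself is illustrative):

```python
# MAX_FRAG_PER_NODE is the internal constant named in the text.
MAX_FRAG_PER_NODE = 8

def table_fragments(data_nodes: int, no_of_replicas: int) -> int:
    """Fragments per table: data nodes * 8, divided by NoOfReplicas."""
    return data_nodes * MAX_FRAG_PER_NODE // no_of_replicas

# The bug required more than 21 fragments:
print(table_fragments(3, 1))  # 24 fragments with NoOfReplicas = 1
print(table_fragments(6, 2))  # 24 fragments with NoOfReplicas = 2
```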
Ending a line in the
config.ini file with
an extra semicolon character (;) caused
reading of the file to fail with a parsing error.
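For illustration, a config.ini line of the kind that triggered the parse failure (the parameter shown is an arbitrary example):

```ini
[ndbd default]
# The stray trailing semicolon here caused the whole file to fail parsing:
NoOfReplicas=2;
```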
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug#46069)
Problems could arise when using columns
whose size was greater than 341 characters and which used the
utf8_unicode_ci collation. In some cases,
this combination of conditions could cause certain queries and
OPTIMIZE TABLE statements to fail.
If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug#44607)
When performing auto-discovery of tables on individual SQL
nodes, NDBCLUSTER attempted to overwrite
existing table files and corrupted them.
In the mysql client, create a new table (for example,
t2) with the same definition as the corrupted table
(for example, t1). Use your system shell or file
manager to rename the old
.MYD file to
the new file name (for example, mv t1.MYD
t2.MYD). In the mysql client,
repair the new table, drop the old one, and rename the new
table using the old table name (for example,
RENAME TABLE t2 TO t1).
When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.
This issue is fixed by introducing the
--ndb-wait-setup option for the
MySQL server. This provides a configurable maximum amount of
time that mysqld waits for all
NDB tables to become writeable,
before allowing MySQL clients or the slave SQL thread to
See also Bug#46955.
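A minimal my.cnf sketch using the new option (the timeout value shown is illustrative; consult the option's documentation for its default and units):

```ini
[mysqld]
ndbcluster
# Maximum time mysqld waits for all NDB tables to become writeable
# before letting clients and the slave SQL thread proceed
# (value is illustrative)
ndb-wait-setup=30
```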
When building MySQL Cluster, it was possible to configure the
--with-ndb-port without supplying a
port number. Now in such cases, configure
fails with an error.
See also Bug#47941.
An insert on an
NDB table was not
always flushed properly before performing a scan. One way in
which this issue could manifest was that
LAST_INSERT_ID() sometimes failed
to return correct values when using a trigger on an
See also Bug#34102.
Some joins on large
BLOB columns could cause
mysqld processes to leak memory. The joins
did not need to reference the
BLOB columns directly for this
issue to occur.
When the MySQL server SQL mode included
engine warnings and error codes specific to
NDB were returned when errors occurred,
instead of the MySQL server errors and error codes expected by
some programming APIs (such as Connector/J) and applications.
On Mac OS X 10.5, commands entered in the management client
failed and sometimes caused the client to hang, although
management client commands invoked from the system shell using
the -e option worked correctly.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
However, the same management client command, invoked from the system shell as shown here, worked correctly:
ndb_mgm -e "SHOW"
See also Bug#34438.
When a copying operation exhausted the available space on a data
node while copying large
columns, this could lead to failure of the data node and a
Table is full error on the SQL node which
was executing the operation. Examples of such operations
include an ALTER TABLE statement that
changed an INT column to a
BLOB column, or a bulk insert of
BLOB data that failed due to
running out of space or to a duplicate key error.
Trying to insert more rows than would fit into an
NDB table caused data nodes to crash. Now in
such situations, the insert fails gracefully with error 633
(Table fragment hash index has reached maximum possible size).
The error message text for
error code 410 (REDO log files
overloaded...) was truncated.
When mysqlbinlog --verbose was used to read a
binary log that had been recorded using the row-based format,
the output for events that updated some but not all columns of
tables was not correct.
In some cases, a
statement could cause the replication slave to crash. This issue
was specific to MySQL on Windows or Macintosh platforms.
(Bug#45238, Bug#45242, Bug#45243, Bug#46013, Bug#46014, Bug#46030)
See also Bug#40796.
Disk Data: Inserts of blob column values into a Disk Data table that exhausted the tablespace resulted in misleading error messages about rows not being found in the table rather than the expected error Out of extents, tablespace full. (Bug#48113)
Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug#47832)
See also Bug#41915.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the
as it applied to extents allocated to a fragment, and not to a
If the value set in the
config.ini file for
FileSystemPathUndoFiles was identical to the
value set for
FileSystemPath, that parameter
was ignored when starting the data node with the
--initial option. As a result, the Disk Data
files in the corresponding directory were not removed when
performing an initial start of the affected data node or data nodes.
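To avoid the problem described here, the two parameters can be pointed at distinct locations; a config.ini sketch (the paths shown are illustrative):

```ini
[ndbd default]
# Giving FileSystemPathUndoFiles its own directory, distinct from the
# FileSystemPath value, sidestepped the bug on --initial starts
FileSystemPath=/var/lib/mysql-cluster
FileSystemPathUndoFiles=/var/lib/mysql-cluster/undo
```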
When a crash occurs due to a problem in Disk Data code, the
currently active page list is printed to
stdout (that is, in one or more
files). One of these lists could contain an endless loop; this
caused a printout that was effectively never-ending. Now in such
cases, a maximum of 512 entries is printed from each list.
Cluster Replication: When using multiple active replication channels, it was sometimes possible that a node group would fail on the slave cluster, causing the slave cluster to shut down. (Bug#47935)
When recording a binary log using the
(both enabled by default) and later attempting to apply that
binary log with mysqlbinlog, any operations
that were played back from the log but which updated only some
(but not all) columns caused any columns that were not updated
to be reset to their default values.
When reading blob data with lock mode
LM_SimpleRead, the lock was not upgraded as expected.
When a DML operation failed due to a uniqueness violation on an
NDB table having more than one
unique index, it was difficult to determine which constraint
caused the failure; it was necessary to obtain an
NdbError object, then decode its
details property, which in turn could lead to
memory management issues in application code.
To help solve this problem, a new API method
Ndb::getNdbErrorDetail() is added, providing
a well-formatted string containing more precise information
about the index that caused the unique constraint violation. The
following additional changes are also made in the NDB API:
NdbError.details is now deprecated
in favor of the new method.
has been modified to provide more information.
The NDB API methods
NdbOperation::getErrorLine() formerly had
const and non-const
variants. The non-const versions of these
methods have been removed. In addition, the
NdbOperation::getBlobHandle() method has been
re-implemented in order to provide consistent internal behavior.
In some circumstances, if an API node encountered a data node
failure between the creation of a transaction and the start of a
scan using that transaction, then any subsequent calls to
closeTransaction() could cause the same
transaction to be started and closed repeatedly.
Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug#45282)
Performing multiple operations using the same primary key within
the same call to execute() could lead to a data node crash.
This fix does not change the fact that performing
multiple operations using the same primary key within the same
execute() is not supported; because there
is no way to determine the order of such operations, the
result of such combined operations remains undefined.
See also Bug#44015.
The error handling shown in the example file
ndbapi_scan.cpp included with the MySQL
Cluster distribution was incorrect.
When using blobs, calling
getBlobHandle() requires the full primary key
to have been set, since
getBlobHandle() must access the key for
adding blob table operations. However, if
getBlobHandle() was called without first
setting all parts of the primary key, the application using it
crashed. Now, an appropriate error code is returned instead.
API: The fix for Bug#24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug#42850)
On some Unix/Linux platforms, an error during build from source
could be produced, referring to a missing
LT_INIT macro. This was due to the use of
libtool versions 2.1 and earlier.
1) In rare cases, if a thread was interrupted during a
FLUSH PRIVILEGES operation, a debug assertion occurred later
due to improper diagnostic area setup. 2) A
KILL operation could cause a
console error message referring to a diagnostic area state
without first ensuring that the state existed.
When using the
SHOW TABLE STATUS displayed incorrect