include/rpl_init.inc [topology=1->2,4->3]
include/rpl_connect.inc [creating master]
include/rpl_connect.inc [creating master1]
include/rpl_connect.inc [creating slave]
include/rpl_connect.inc [creating slave1]
include/rpl_start_slaves.inc
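The rpl_connect.inc and rpl_start_slaves.inc includes hide the actual channel setup. On each slave it amounts to something like the following sketch; the host, port, and user values here are placeholders, not values taken from this test:
# Hedged sketch of the per-slave setup performed by the includes
CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=13001, MASTER_USER='root';
START SLAVE;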
Cluster A servers have no epoch replication info
select count(1) from mysql.ndb_apply_status;
count(1)
0
Cluster A servers have no max replicated epoch value
Master (1)
select variable_name, variable_value from information_schema.global_status
where variable_name='Ndb_slave_max_replicated_epoch';
variable_name	variable_value
NDB_SLAVE_MAX_REPLICATED_EPOCH	0
Master1 (3)
select variable_name, variable_value from information_schema.global_status
where variable_name='Ndb_slave_max_replicated_epoch';
variable_name	variable_value
NDB_SLAVE_MAX_REPLICATED_EPOCH	0
Make a change originating at Cluster A
Master (1)
use test;
create table t1 (a int primary key, b varchar(100)) engine=ndb;
insert into t1 values (1, "Venice");
Allow it to propagate to Cluster B
Originate a second unrelated change at Cluster B, to allow us to wait for
reverse propagation in the testcase
Slave1 (4)
insert into t1 values (2, "Death");
Allow it to propagate to Cluster A
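The result does not show how the test waits for propagation; a common mysqltest approach is to poll with include/wait_condition.inc. A sketch, assuming the test waits for the row inserted above to appear on Cluster A:
# Hedged sketch: wait until the Cluster B row is visible on Cluster A
let $wait_condition= select count(*) = 1 from test.t1 where a = 2;
--source include/wait_condition.inc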
Observe new entry in ndb_apply_status on Cluster A
Master (1)
select server_id from mysql.ndb_apply_status order by server_id;
server_id
1
4
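For reference, mysql.ndb_apply_status is an NDB system table keyed on server_id, with one row maintained per applying server. Its layout is approximately the following (an approximation worth verifying against your server version):
# Approximate definition of mysql.ndb_apply_status
CREATE TABLE mysql.ndb_apply_status (
  server_id INT UNSIGNED NOT NULL,
  epoch     BIGINT UNSIGNED NOT NULL,
  log_name  VARCHAR(255) NOT NULL,
  start_pos BIGINT UNSIGNED NOT NULL,
  end_pos   BIGINT UNSIGNED NOT NULL,
  PRIMARY KEY (server_id)
) ENGINE=NDBCLUSTER;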
Non-slave server on Cluster A will have no value for Max Replicated Epoch
select variable_name, variable_value from information_schema.global_status
where variable_name='Ndb_slave_max_replicated_epoch';
variable_name	variable_value
NDB_SLAVE_MAX_REPLICATED_EPOCH	0
Slave server on Cluster A has current value for Max Replicated Epoch
Master1 (3)
Expect count 1
select count(1)
from information_schema.global_status,
     mysql.ndb_apply_status
where server_id = 1
  and variable_name = 'Ndb_slave_max_replicated_epoch'
  and variable_value = epoch;
count(1)
1
Now wait for all replication to quiesce
Now swap replication channels around
include/rpl_stop_slaves.inc
include/rpl_change_topology.inc [new topology=2->1,3->4]
Get current master status on Cluster A new master (next pos in Binlog)
Master1 (3)
Flush logs to ensure any pending update (e.g. reflected apply_status write row)
is skipped over.
flush logs;
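The status capture and the slave setup on the other cluster are done by statements whose output is suppressed in this result. A hedged sketch of the equivalent SQL; the log file name and position are placeholders standing in for the values read from SHOW MASTER STATUS:
# On the new master: read the current binlog file and position
SHOW MASTER STATUS;
# On the other cluster's slave: point the channel at that position
CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000002', MASTER_LOG_POS=4;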
Setup slave on Cluster B to use it
Slave1 (4)
Get current master status on Cluster B new master (next pos in Binlog)
Slave (2)
Flush logs to ensure any pending update (e.g. reflected apply_status write row)
is skipped over.
flush logs;
Setup slave on Cluster A to use it
Master (1)
Show that Cluster A Slave server (old master) has no Max replicated epoch before receiving data
select variable_name, variable_value from information_schema.global_status
where variable_name='Ndb_slave_max_replicated_epoch';
variable_name	variable_value
NDB_SLAVE_MAX_REPLICATED_EPOCH	0
Master1 (3)
Cluster A Master server (old slave) has old Max replicated epoch
select count(1)
from information_schema.global_status,
     mysql.ndb_apply_status
where server_id = 1
  and variable_name = 'Ndb_slave_max_replicated_epoch'
  and variable_value = epoch;
count(1)
1
Now start slaves up
include/rpl_start_slaves.inc
Show that applying something from Cluster B causes the
old Max Rep Epoch to be loaded from ndb_apply_status
There is no new Max Rep Epoch from Cluster A as it has not changed
anything yet
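Conceptually, when the first epoch arrives over the new channel, the slave restores its maximum replicated epoch from ndb_apply_status instead of starting from zero. The restored value corresponds roughly to what this query expresses (a conceptual illustration, not a query the server literally runs; 1 and 3 are Cluster A's server ids):
# Conceptual only: highest epoch previously recorded for the local servers
select max(epoch) from mysql.ndb_apply_status where server_id in (1, 3);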
Slave (2)
insert into test.t1 values (3, "From the Sea");
Allow to propagate to Cluster A
Master (1)
New Slave server on Cluster A has loaded old Max-Replicated-Epoch
select server_id from mysql.ndb_apply_status order by server_id;
server_id
1
2
4
select count(1)
from information_schema.global_status,
     mysql.ndb_apply_status
where server_id = 1
  and variable_name = 'Ndb_slave_max_replicated_epoch'
  and variable_value = epoch;
count(1)
1
Now make a new Cluster A change and see that the Max Replicated Epoch advances
once it has propagated
Master1 (3)
insert into test.t1 values (4, "Brooke");
Propagate to Cluster B
Make change on Cluster B to allow waiting for reverse propagation
Slave (2)
insert into test.t1 values (5, "Rupert");
Wait for propagation back to Cluster A
Master (1)
Show that Cluster A now has entries for all four server_ids in ndb_apply_status
The entry from the new master (server_id 3) has the highest epoch.
select server_id from mysql.ndb_apply_status order by server_id;
server_id
1
2
3
4
select count(1)
from information_schema.global_status,
     mysql.ndb_apply_status
where server_id = 3
  and variable_name = 'Ndb_slave_max_replicated_epoch'
  and variable_value = epoch;
count(1)
1
local_server_with_max_epoch
3
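The local_server_with_max_epoch result above is produced by a query whose text is not echoed in this result file. A hedged reconstruction that is consistent with the output header:
# Guess at the suppressed query; not taken from the test source
select server_id as local_server_with_max_epoch
from information_schema.global_status, mysql.ndb_apply_status
where variable_name = 'Ndb_slave_max_replicated_epoch'
  and variable_value = epoch;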
Done
drop table t1;
include/rpl_stop_slaves.inc
CHANGE MASTER TO IGNORE_SERVER_IDS= ();
CHANGE MASTER TO IGNORE_SERVER_IDS= ();
CHANGE MASTER TO IGNORE_SERVER_IDS= ();
CHANGE MASTER TO IGNORE_SERVER_IDS= ();
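IGNORE_SERVER_IDS= () clears the ignore list on each of the four servers during cleanup. While a circular setup is running, the list is typically non-empty so that a server does not re-apply its own changes; an illustration (not from this test):
# Illustration only: ignore events originating at server ids 1 and 3
CHANGE MASTER TO IGNORE_SERVER_IDS= (1, 3);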
include/rpl_start_slaves.inc
include/rpl_end.inc
