I have a project in which we need to maintain many MySQL databases on many computers. They will all have the same schema.
From time to time, each of those databases must send its contents to a master server, which will collect all the incoming data. The contents must be dumped to a file that can be carried on a flash drive to an internet-connected computer and sent from there.
Key names will be namespaced, so there should not be any conflicts there, but I have not fully designed this part yet. I am thinking of timestamping each row and running `SELECT * FROM [table] WHERE timestamp > last_backup_time` on every table, then dumping the result to a file and bulk-loading it on the master server.
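A minimal sketch of that incremental export, assuming a hypothetical table `readings` with an indexed `updated_at` column and a last-backup timestamp tracked by the caller (all names and paths here are illustrative, not from the question):

```sh
#!/bin/sh
# Incremental export: dump only rows changed since the last backup.
# LAST_BACKUP, DB, and the table name are placeholder assumptions.
LAST_BACKUP='2024-01-01 00:00:00'
DB=fielddb

# mysqldump's --where flag restricts the dump to matching rows;
# --no-create-info emits only INSERT statements, suitable for
# bulk-loading on the master later.
mysqldump --no-create-info \
  --where="updated_at > '$LAST_BACKUP'" \
  "$DB" readings > readings_delta.sql

# On the master server, the delta would then be loaded with:
#   mysql "$DB" < readings_delta.sql
```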
The distributed computers will not have internet access; we are in a very rural part of a third-world country.
Any suggestions?
Your
`SELECT * FROM [table] WHERE timestamp > last_backup_time`
approach will miss deleted rows.
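To see why, suppose a row is deleted on a source machine some time after the last backup (the table name here is invented for illustration):

```sql
-- On the source server, after last_backup_time:
DELETE FROM patients WHERE patient_id = 17;

-- The incremental query only returns rows that still exist:
SELECT * FROM patients WHERE timestamp > last_backup_time;
-- Row 17 is simply absent from the result set, so the master
-- never learns that it was deleted.
```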
What you probably want to do instead is MySQL replication via USB stick. Enable the binlog on your source servers, and make sure the binlog files are not purged automatically. Copy the binlog files to the USB stick, then PURGE MASTER LOGS TO ... to delete them on the source server.
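Enabling the binlog is a my.cnf change on each source server; the paths and values below are illustrative examples, not requirements:

```ini
# /etc/mysql/my.cnf on each source server (example values)
[mysqld]
server-id        = 2          ; must be unique per source server
log-bin          = /var/log/mysql/mysql-bin.log
expire_logs_days = 0          ; 0 = never purge automatically; purge by hand
max_binlog_size  = 100M
```

After copying the files to the stick, purge up to (but not including) the first file you have not yet copied, e.g. `PURGE MASTER LOGS TO 'mysql-bin.000042';` (file name made up for the example).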
On the aggregation server, turn the binlog into an executable SQL script using the mysqlbinlog command, then import the data by running that script.
The aggregation server should have a copy of each source server's database, possibly under a different schema name, as long as your SQL only ever uses unqualified table names (i.e. never refers to tables with the schema.table syntax). Importing the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
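Applying a copied binlog on the aggregation server might look like this sketch, where the schema name and binlog path are placeholder assumptions:

```sh
#!/bin/sh
# Replay one source server's binlog into its own schema copy on the
# aggregation server. SCHEMA and the binlog path are placeholders.
SCHEMA=site_a

# mysqlbinlog converts the binary log back into plain SQL; the
# prefixed USE statement routes the statements into this source's
# schema copy, per the approach described above.
{ echo "USE $SCHEMA;"; mysqlbinlog /mnt/usb/mysql-bin.000042; } \
  | mysql
```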
Aggregation across all the databases can then be done using fully qualified table names (i.e. schema-qualified syntax in JOINs or INSERT ... SELECT statements).
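For example, a cross-site rollup on the aggregation server might look like this, with schema and table names made up purely for illustration:

```sql
-- Hypothetical schemas site_a, site_b, ... each hold one source's copy.
-- Schema-qualified names let one statement combine them into a
-- reporting table.
INSERT INTO reports.all_patients (site, patient_id, registered_at)
SELECT 'site_a', patient_id, registered_at FROM site_a.patients
UNION ALL
SELECT 'site_b', patient_id, registered_at FROM site_b.patients;
```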