<?xml version="1.0" encoding="UTF-8"?>
<migration>
    <databank>
        <source driverType="org.javahispano.dbmt.migrations.MigrationCSVSource"
                driver="delimeter=; codepage=Windows-1251 trim=true"
                url="c:\" />
        <target driverType="org.javahispano.dbmt.migrations.MigrationJDBC"
                driver="com.sap.dbtech.jdbc.DriverSapDB"
                url="jdbc:sapdb://127.0.0.1/DATABASE_TGT"
                username="dbuser"
                password="dbpwd"
                quoted-names="false" />
    </databank>
    <log level="INFO"/>
    <steps>
        <step name="EXAMPLE_STEP"
              skip="false"
              source_table="SOURCE_TABLE.csv"
              target_table="TARGET_TABLE"
              clear_target="false"
              where="INTERNAL is null OR INTERNAL=false"
              log_level="OFF"
              slice_size="1000"
              stop_on_error="true">
            <field from="ID"/>
            <field from="NAME" to="OTHER_NAME"/>
            <field from="substr(DESC,1,2)||'_suffix'" to="NEW_DESC"/>
            <field from="true" to="LOGIC_FIELD"/>
        </step>
    </steps>
</migration>
Format | Java class for source | Java class for target | Requirements |
JDBC | org.javahispano.dbmt.migrations.MigrationJDBC | org.javahispano.dbmt.migrations.MigrationJDBC | JDBC 2.0 |
CSV | org.javahispano.dbmt.migrations.MigrationCSVSource | org.javahispano.dbmt.migrations.MigrationCSVTarget | |
DBF | org.javahispano.dbmt.migrations.MigrationDBFSource | org.javahispano.dbmt.migrations.MigrationDBFTarget | JavaDBF 0.4.0 with the following patch |
XML | org.javahispano.dbmt.migrations.MigrationSaxXMLSource | - | JAXP 1.0 |
TXT | - | org.javahispano.dbmt.migrations.MigrationFreemarkerTarget | Freemarker template engine is used for generating textual files |
In this tag we define the types of the source and the target for the migration, together with the URL and driver parameters to be used. The URL and driver parameters depend on the types of the source and the target.
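For instance, a databank that reads from a DBF file and writes to a JDBC database might look like the following sketch; the JDBC driver class, connection URL, codepage, paths and credentials are placeholders chosen for illustration:

<databank>
    <source driverType="org.javahispano.dbmt.migrations.MigrationDBFSource"
            driver="codepage=Cp866"
            url="c:\data\" />
    <target driverType="org.javahispano.dbmt.migrations.MigrationJDBC"
            driver="org.postgresql.Driver"
            url="jdbc:postgresql://localhost/target_db"
            username="dbuser"
            password="dbpwd"
            quoted-names="false" />
</databank>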
Note: the databanks org.javahispano.dbmt.migrations.MigrationSaxXMLSource and org.javahispano.dbmt.migrations.MigrationFreemarkerTarget are not compatible with each other.
If the 'file' driver parameter is specified, that file will be used for all steps; the full file name is url+file. If this parameter is omitted, each step will use its own file, named after the step.
Format | Attribute | Parameter | Source | Target | Description |
JDBC | url | URL | + | + | URL for the connection |
JDBC | driver | DRIVER | + | + | Name of the JDBC driver |
JDBC | username | DB USERNAME | + | + | Username for the connection |
JDBC | password | DB PASSWD | + | + | Password for the connection |
JDBC | quoted-names | quoted | + | + | Set to true if the RDBMS requires names enclosed in double quotes |
CSV | url | URL | + | + | URL for the CSV file |
CSV | driver | delimeter | + | + | Field delimiter in the CSV file. By default it is a comma |
CSV | driver | codepage | + | + | Codepage of the CSV file. By default it is the Java default codepage |
CSV | driver | trim | + | - | If true then leading and trailing whitespace is removed from each value; otherwise values are left unchanged |
DBF | url | URL | + | + | URL for the DBF file |
DBF | driver | codepage | + | + | Codepage of the DBF file. By default it is the Java default codepage |
XML | url | URL | + | - | URL for the XML file. The jar protocol is supported (jar:http://example.org/archive.jar!/) |
XML | driver | driver | + | - | Path to the row elements in the XML file. By default the path is 'select.row'. The path must contain at least two tags |
TXT | url | URL | - | + | URL for the TXT file |
TXT | driver | file | - | + | File name where the Freemarker output will be stored |
TXT | driver | codepage | + | + | Codepage of the output file. By default it is Freemarker's default codepage |
TXT | driver | template_codepage | + | + | Codepage for templates. By default it is Freemarker's default codepage |
TXT | driver | append | - | + | If true then the file is appended to, otherwise it is recreated. This option is valid only if the 'file' parameter is specified. By default it is 'true' |
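For example, a Freemarker (TXT) target that writes the output of all steps into a single appended file could be declared as follows. This sketch assumes the same space-separated key=value driver string shown in the CSV example above; the file name, path and codepage are invented for illustration:

<target driverType="org.javahispano.dbmt.migrations.MigrationFreemarkerTarget"
        driver="file=report.txt codepage=UTF-8 append=true"
        url="c:\output\" />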
Note that JDBC and DBF drivers are not included in the DBMT distribution, so you must add them to your classpath yourself. This tag is mandatory.
DBMT uses the JDK 1.4 logger 'org.javahispano.dbmt' to report on the migration process.
Log level | Messages |
FINEST | Target's errors |
FINE | Number of rows inserted at commit time |
INFO | Information about the current process: total number of rows inserted, filtered, and skipped because of target errors |
SEVERE | Fatal errors |
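For example, to also see the number of rows inserted at each commit, the log tag from the sample migration above can be raised from INFO to FINE:

<log level="FINE"/>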
Each step also has a 'log_level' attribute. A MigrationTarget can use it to log all insert commands to a file; currently only MigrationJDBC makes use of this attribute.
Steps are the most important part of a migration: each step defines how the fields of a table should be transformed to fit the new schema. You can write as many step tags as there are tables to migrate.
<step name="EXAMPLE_STEP"
      skip="false"
      source_table="SOURCE_TABLE.csv"
      target_table="TARGET_TABLE"
      clear_target="false"
      where="INTERNAL is null OR INTERNAL=false"
      log_level="OFF"
      slice_size="1000"
      stop_on_error="true">
    FIELDS
</step>
Format | Attribute | Description |
ALL | name | Name of the step. Used in logs |
ALL | skip | If true then the step will be skipped. By default it is false |
ALL | where | SQLJEP expression. Variables are the names of columns in SOURCE_TABLE |
ALL | stop_on_error | If false and an exception occurs in the MigrationTarget, the exception will be logged and the migration will continue. By default it is true |
JDBC | source_table | * Select statement. For example 'SELECT f1,f2,f3 from SOURCE_TABLE where id>1000' * Callable statement. For example 'call dbproc(1,2)' * Table name. For example 'SOURCE_TABLE'. In this case the statement 'SELECT * from SOURCE_TABLE' will be used |
JDBC | target_table | Table name |
JDBC | clear_target | * If true then the statement 'DELETE from TARGET_TABLE' will be performed. * Callable statement. For example 'call my_clear'. * If false then do nothing. By default it is false |
JDBC | log_level | * 'FINEST' - all SQL insert statements will be stored in the file 'EXAMPLE_STEP.sql' in the current working directory. * 'FINER' - all execution-time errors of insert statements will be stored in the file 'EXAMPLE_STEP.sql'. This attribute is valid only if stop_on_error="false". By default log_level is OFF |
JDBC | slice_size | Number of rows inserted into the target table between commits. 0 means commit only after all rows |
XML | source_table | MigrationSaxXMLSource can get XML files from JAR archives. In this case this attribute is name of XML file in JAR archive. Otherwise path to XML file is source.url+source_table |
XML | target_table | not implemented |
XML | log_level | not implemented |
XML | slice_size | not implemented |
CSV | source_table | CSV source file. Full path to CSV file is source.url+source_table |
CSV | target_table | CSV target file. Full path to CSV file is target.url+target_table |
CSV | clear_target | If false then CSV file will be appended. By default it is false |
CSV | log_level | not used |
CSV | slice_size | not used |
DBF | source_table | DBF source file. Full path to DBF file is source.url+source_table |
DBF | target_table | DBF target file. Full path to DBF file is target.url+target_table |
DBF | clear_target | If false then DBF file will be appended. By default it is false |
DBF | log_level | not used |
DBF | slice_size | not used |
TXT | source_table | not used |
TXT | target_table | Freemarker template file. The full path to the template file is target_table; URLs are supported. By default Freemarker uses FileTemplateLoader for loading templates; if a URL is specified in the 'target_table' attribute then URLTemplateLoader is used instead. Within the template the source table is available as a collection named 'SOURCE' |
TXT | clear_target | If false then the output file will be appended. By default it is false. This attribute is valid only if there is no 'file' parameter in the 'driver' attribute of the 'target' tag |
TXT | log_level | not used |
TXT | slice_size | Number of rows inserted into the target table between flushes. 0 means flush only after all rows |
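To make these attributes more concrete, here is a sketch of a single JDBC-to-JDBC step; it assumes the databank declares MigrationJDBC as both source and target, and the table and column names are invented for the example:

<step name="ARCHIVE_EMPLOYEES"
      source_table="SELECT ID, LAST_NAME, HIRE_DATE FROM EMPLOYEES WHERE ACTIVE=1"
      target_table="EMPLOYEE_ARCHIVE"
      clear_target="true"
      slice_size="500"
      stop_on_error="false"
      log_level="FINER">
    <field from="ID"/>
    <field from="LAST_NAME" to="SURNAME"/>
    <field from="HIRE_DATE"/>
</step>

With these settings the target table is emptied first, rows are committed in batches of 500, and insert errors do not stop the migration; instead they are logged to the file 'ARCHIVE_EMPLOYEES.sql'.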
Next we define how the fields will be processed.
<field from="ID"/>
<field from="NAME" to="OTHER_NAME"/>
<field from="substr(DESC,1,2)||'_suffix'" to="NEW_DESC"/>
<field from="true" to="LOGIC_FIELD"/>
The 'from' attribute is mandatory. It can contain the name of a field in the source table, or a SQLJEP expression whose variables are the names of columns in the source table.
If 'to' is absent then it is assumed to be the same as 'from'.
Some MigrationTarget objects have no information about field types. For example, when MigrationDBFTarget creates a new DBF file it does not know the types of the fields. Conversely, some MigrationSource objects can convert types: MigrationDBFSource represents dates only as java.util.Date, but it can convert them to java.sql.Time, java.sql.Timestamp or java.sql.Date. For these purposes the 'field' tag has a 'type' attribute.
Mapping of the 'type' attribute to Java classes:
'type' attribute value | Java class |
string | java.lang.String |
short | java.lang.Short |
integer | java.lang.Integer |
long | java.lang.Long |
logical | java.lang.Boolean |
fixed(m,n) or bigdecimal | java.math.BigDecimal |
time | java.sql.Time |
date | java.sql.Date |
timestamp | java.sql.Timestamp |
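For instance, a field that takes a date column from a DBF source and should be written to the target as a SQL timestamp could be declared like this (the column names are made up for illustration):

<field from="CREATED" to="CREATED_AT" type="timestamp"/>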