This guide is based on guides.rubyonrails.org/migrations.html
Migrations make it easy to alter your database's schema in a systematic manner. They make it easier to coordinate with other developers and make sure that all developers are using the same database schema.
Migrations are optional, you don't have to use them. You can always just create the necessary database structure manually using Sequel's schema modification methods or another database tool. However, if you are dealing with other developers, you'll have to send them all of the changes you are making. Even if you aren't dealing with other developers, you generally have to make the schema changes in 3 places (development, testing, and then production), and it's probably easier to use the migrations system to apply the schema changes than it is to keep track of the changes manually and execute them manually at the appropriate time.
Sequel tracks which migrations you have already run, so to apply migrations you generally just need to run Sequel's migrator with bin/sequel -m:
sequel -m path/to/migrations postgres://host/database
Migrations in Sequel use a very simple DSL via the Sequel.migration method, and inside the DSL, use the Sequel::Database schema modification methods such as create_table and alter_table.
Here is a fairly basic Sequel migration:
  Sequel.migration do
    up do
      create_table(:artists) do
        primary_key :id
        String :name, :null=>false
      end
    end

    down do
      drop_table(:artists)
    end
  end
This migration has an up block which adds an artist table with an integer primary key named id, and a varchar or text column (depending on the database) named name that doesn't accept NULL values. Migrations should include both up and down blocks, with the down block reversing the change made by up. However, if you never need to be able to migrate down (i.e. you are one of the people that doesn't make mistakes), you can leave out the down block. In this example, the down block just reverses the changes made by up, dropping the table.
In normal usage, when Sequel's migrator runs, it runs the up blocks for all migrations that have not yet been applied. However, you can use the -M switch to specify the version to which to migrate, and if it is lower than the current version, Sequel will run the down block on the appropriate migrations.
You are not limited to creating tables inside a migration, you can alter existing tables as well as modify data. Let's say your artist database originally only included artists from Sacramento, CA, USA, but now you want to branch out and include artists in any city:
  Sequel.migration do
    up do
      add_column :artists, :location, String
      self[:artists].update(:location=>'Sacramento')
    end

    down do
      drop_column :artists, :location
    end
  end
This migration adds a location column to the artists table, and sets the location column to 'Sacramento' for all existing artists. It doesn't use a default on the column, because future artists should not be assumed to come from Sacramento. In the down block, it just drops the location column from the artists table, reversing the actions of the up block.
Note that when updating the artists table in the update, a plain dataset is used, self[:artists]. This looks a little weird, but you need to be aware that inside an up or down block in a migration, self always refers to the Sequel::Database object that the migration is being applied to. Since Database#[] creates datasets, using self[:artists] inside the up block creates a dataset on the database representing all columns in the artists table, and updates it to set the location column to 'Sacramento'.
It is possible to use model classes inside migrations, as long as they are loaded into the ruby interpreter, but it's a bad habit as changes to your model classes can then break old migrations, and this breakage is often not caught until much later, such as when a new developer joins the team and wants to run all migrations to create their development database.
The migration code is not technically part of the core of Sequel. It's not loaded by default as it is only useful in specific cases. It is one of the built-in extensions, which receive the same level of support as Sequel's core.
If you want to play with Sequel's migration tools without using the bin/sequel tool, you need to load the migration extension manually:
Sequel.extension :migration
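Once the extension is loaded, you can also run the migrator from Ruby yourself instead of via bin/sequel. Here is a minimal sketch, assuming DB is an already-connected Sequel::Database object and db/migrations is your migrations directory:

  Sequel.extension :migration

  DB = Sequel.connect('postgres://host/database')

  # Run all unapplied migrations in db/migrations
  Sequel::Migrator.run(DB, 'db/migrations')

  # Or migrate to a specific target version
  Sequel::Migrator.run(DB, 'db/migrations', :target=>0)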
Migrations themselves do not contain any schema modification methods, but they make it easy to call any of the Sequel::Database modification methods, of which there are many. The main ones are create_table and alter_table, but Sequel also comes with numerous other schema modification methods, most of which are shortcuts for alter_table. All of these methods are described in more detail later.
These methods handle the vast majority of cross database schema modification SQL. If you need to drop down to SQL to execute some database specific code, you can use the run method:
  Sequel.migration do
    up{run 'CREATE TRIGGER ...'}
    down{run 'DROP TRIGGER ...'}
  end
In this case, we are using { and } instead of do and end to define the blocks. Just as before, the run methods inside the blocks are called on the Database object, which just executes the code on the underlying database.
Sequel attempts to run migrations inside a transaction. Some databases do not support schema modifications made in transactions, and on those databases, if the migration raises an error, the previous schema changes made by the migration will not be rolled back. In that case, you will need to update the database by hand.
It's recommended to always run migrations on a test database and ensure they work before running them on any production database.
While you can create migration objects yourself and apply them manually, most of the benefit of using migrations comes from using Sequel's Migrator, which is what the bin/sequel -m switch does. Sequel's Migrator expects that each migration will be in a separate file in a specific directory. The -m switch requires an argument be specified that is the path to the directory containing the migration files. For example:
sequel -m db/migrations postgres://localhost/sequel_test
will look in the db/migrations folder relative to the current directory, and run unapplied migrations on the PostgreSQL database sequel_test running on localhost.
Sequel actually ships with two separate migrators. One is the IntegerMigrator, the other is the TimestampMigrator. Each has its own pluses and minuses, discussed below.
In order for migration files to work with Sequel's migrators, they must be named as follows:
version_name.rb
where version is an integer and name is a string which should be a very brief description of what the migration does. Each migration file should contain one and only one call to Sequel.migration.
These are valid migration names for the IntegerMigrator:
  1_create_artists.rb
  2_add_artist_location.rb
The only problem with this naming format is that if you have more than 9 migrations, the 10th one will look a bit odd:
  1_create_artists.rb
  2_add_artist_location.rb
  ...
  9_do_something.rb
  10_do_something_else.rb
For this reason, it's often best to start with 001 instead of 1, as that means you don't need to worry about that issue until the 1000th migration:
  001_create_artists.rb
  002_add_artist_location.rb
  ...
  009_do_something.rb
  010_do_something_else.rb
It should be fairly obvious, but migrations start at 1, not 0. The migration version number 0 is important though, as it is used to mean that all migrations should be unapplied (i.e. all down blocks run). In Sequel, you can do that with:
sequel -m db/migrations -M 0 postgres://localhost/sequel_test
With the TimestampMigrator, the version integer should represent a timestamp, though this isn't strictly required.
For example, for 5/10/2010 12:00:00pm, you could use any of the following formats:
  # Date
  20100510_create_artists.rb

  # Date and Time
  20100510120000_create_artists.rb

  # Unix Epoch Time Integer
  1273518000_create_artists.rb
The important thing is that all migration files should use the same format, otherwise it'll be difficult to make sure migrations are applied in the correct order, as well as difficult to unapply some of the affected migrations correctly.
The TimestampMigrator will be used if any filename in the migrations directory has a version greater than 20000101. Otherwise, the IntegerMigrator will be used.
Basically, unless you need the features provided by the TimestampMigrator, stick with the IntegerMigrator, as it is simpler and makes it easier to detect possible errors.
For a single developer, the TimestampMigrator has no real benefits, so I would always recommend the IntegerMigrator. When dealing with multiple developers, it depends on the size of the development team, the team's communication level, and the level of overlap between developers.
Let's say Alice works on a new feature that requires a migration at the same time Bob works on a separate feature that requires an unrelated migration. If both developers are committing to their own private repositories, when it comes time to merge, the TimestampMigrator will not require any manual changes. That's because Alice will have a migration such as 20100512_do_this.rb and Bob will have one such as 20100512_do_that.rb.
If the IntegerMigrator were used, Alice would have 34_do_this.rb and Bob would have 34_do_that.rb, and the IntegerMigrator would raise an exception due to the duplicate migration version. The only way to fix it would be to renumber one of the two migrations, and have the affected developer manually modify their database.
So for unrelated migrations, the TimestampMigrator works fine. However, let's say that the migrations are related, in such a way that if Bob's is run first, Alice's will fail. In this case, the TimestampMigrator would not raise an error when Bob merges Alice's changes, since Bob ran his migration first. However, it would raise an error when Alice runs Bob's migration, and could leave the database in an inconsistent state if the database doesn't support transactional schema changes.
With the TimestampMigrator, you are trading reliability for convenience. That's possibly a valid trade, especially if simultaneous related schema changes by separate developers are unlikely, but you should give it some thought before using it.
Just don't do it.
In general, you should not modify any migration that has been run on the database and been committed to the source control repository, unless the migration contains an error that causes data loss. As long as it is possible to undo the migration without losing data, you should just add another migration that undoes the actions of the previous bad migration, and maybe does the correct action afterward.
The main problem with modifying existing migrations is that you will have to manually modify any databases that ran the migration before it was modified. If you are a single developer, that may be an option, but certainly if you have multiple developers, it's a lot more work.
Sequel doesn't come with generators that create migrations for you. However, creating a migration is as simple as creating a file with the appropriate filename in your migrations directory that contains a Sequel.migration call. The minimal do-nothing migration is:
Sequel.migration{}
However, the migrations you write should contain an up block that does something, and a down block that reverses the changes made by the up block:
  Sequel.migration do
    up{...}
    down{...}
  end
Inside your migration's up and down blocks is where you will call the Database schema modification methods. Here's a brief description of the most common schema modification methods:
create_table is the most common schema modification method, and it's used for adding new tables to the schema. You provide it with the name of the table as a symbol, as well as a block:
  create_table(:artists) do
    primary_key :id
    String :name
  end
Note that if you want a primary key for the table, you need to specify it, Sequel does not create one by default.
Most method calls inside the create_table block will create columns, since method_missing calls column. Columns are generally created by specifying the column type as the method name, followed by the column name symbol to use, and after that any options that should be used. If the method is a ruby class name that Sequel recognizes, Sequel will transform it into the appropriate type for the given database. So while you specified String, Sequel will actually use varchar or text depending on the underlying database. Here's a list of all of the ruby classes that Sequel will convert to database types:
  create_table(:columns_types) do         # common database type used
    Integer :a0                           # integer
    String :a1                            # varchar(255)
    String :a2, :size=>50                 # varchar(50)
    String :a3, :fixed=>true              # char(255)
    String :a4, :fixed=>true, :size=>50   # char(50)
    String :a5, :text=>true               # text
    File :b                               # blob
    Fixnum :c                             # integer
    Bignum :d                             # bigint
    Float :e                              # double precision
    BigDecimal :f                         # numeric
    BigDecimal :f2, :size=>10             # numeric(10)
    BigDecimal :f3, :size=>[10, 2]        # numeric(10, 2)
    Date :g                               # date
    DateTime :h                           # timestamp
    Time :i                               # timestamp
    Time :i2, :only_time=>true            # time
    Numeric :j                            # numeric
    TrueClass :k                          # boolean
    FalseClass :l                         # boolean
  end
Note that in addition to the ruby class name, Sequel also pays attention to the column options when determining which database type to use.
Also note that this conversion is only done if you use a supported ruby class name. In all other cases, Sequel uses the type specified verbatim:
  create_table(:columns_types) do   # database type used
    string :a1                      # string
    datetime :a2                    # datetime
    blob :a3                        # blob
    inet :a4                        # inet
  end
In addition to specifying the types as methods, you can use the column method and specify the types as the second argument, either as ruby classes, symbols, or strings:
  create_table(:columns_types) do   # database type used
    column :a1, :string             # string
    column :a2, String              # varchar(255)
    column :a3, 'string'            # string
    column :a4, :datetime           # datetime
    column :a5, DateTime            # timestamp
    column :a6, 'timestamp(6)'      # timestamp(6)
  end
When using the type name as a method, the third argument is an options hash; when using the column method, the fourth argument is the options hash. The following options are supported:
:default :: The default value for the column.
:index :: Create an index on this column.
:null :: Mark the column as allowing NULL values (if true), or not allowing NULL values (if false). If unspecified, will default to whatever the database default is.
:size :: The size of the column, generally used with string columns to specify the maximum number of characters the column will hold. An array of two integers can be provided to set the size and the precision, respectively, of decimal columns.
:unique :: Mark the column as unique, generally has the same effect as creating a unique index on the column.
:unsigned :: Make the column type unsigned, only useful for integer columns.
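For example, here is a sketch combining a few of these options in a single create_table block (the table and column names are purely illustrative):

  create_table(:artists) do
    primary_key :id
    String :name, :null=>false, :index=>true                 # NOT NULL varchar(255) with an index
    String :country, :fixed=>true, :size=>2, :default=>'US'  # char(2) with a default value
    BigDecimal :rating, :size=>[3, 1]                         # numeric(3, 1)
  end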
In addition to the column method and other methods that create columns, there are other methods that can be used:
You've seen primary_key used already. It's used to create an autoincrementing integer primary key column.
create_table(:a0){primary_key :id}
If you want to create a primary key column that doesn't use an autoincrementing integer, you should not use this method. Instead, you should use the :primary_key option to the column method or type method:
  create_table(:a1){Integer :id, :primary_key=>true}  # Non autoincrementing integer primary key
  create_table(:a2){String :name, :primary_key=>true} # varchar(255) primary key
If you want to create a composite primary key, you should call the primary_key method with an array of column symbols:
  create_table(:items) do
    Integer :group_id
    Integer :position
    primary_key [:group_id, :position]
  end
If provided with an array, primary_key does not create a column, it just sets up the primary key constraint.
foreign_key is used to create a foreign key column that references a column in another table (or the same table). It takes the column name as the first argument, the table it references as the second argument, and an options hash as its third argument. A simple example is:
  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    String :name
  end
foreign_key accepts some specific options:
:deferrable :: Makes the foreign key constraint checks deferrable, so they aren't checked until the end of the transaction.
:key :: For foreign key columns, the column in the associated table that this column references. Unnecessary if this column references the primary key of the associated table, at least on most databases.
:on_delete :: Specify the behavior of this foreign key column when the row with the primary key it references is deleted, can be :restrict, :cascade, :set_null, or :set_default.
:on_update :: Specify the behavior of this foreign key column when the row it references has its primary key value modified, can be :restrict, :cascade, :set_null, or :set_default.
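For example, to have album rows deleted automatically when the artist they reference is deleted, you could use the options above like this (an illustrative sketch):

  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists, :on_delete=>:cascade, :on_update=>:cascade
    String :name
  end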
Like primary_key, if you provide foreign_key with an array of symbols, it will not create a column, but create a foreign key constraint:
  create_table(:artists) do
    String :name
    String :location
    primary_key [:name, :location]
  end

  create_table(:albums) do
    String :artist_name
    String :artist_location
    String :name
    foreign_key [:artist_name, :artist_location], :artists
  end
index creates indexes on the table. For single columns, calling index is the same as using the :index option when creating the column:
  create_table(:a){Integer :id, :index=>true}

  # Same as:

  create_table(:a) do
    Integer :id
    index :id
  end
Similar to the primary_key and foreign_key methods, calling index with an array of symbols will create a multiple column index:
  create_table(:albums) do
    primary_key :id
    foreign_key :artist_id, :artists
    Integer :position
    index [:artist_id, :position]
  end
The index method also accepts some options:
:name :: The name of the index (generated based on the table and column names if not provided).
:type :: The type of index to use (only supported by some databases).
:unique :: Make the index unique, so duplicate values are not allowed.
:where :: Create a partial index (only supported by some databases).
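For example (the index name here is purely illustrative):

  create_table(:albums) do
    primary_key :id
    String :name
    index :name, :unique=>true, :name=>:albums_name_index
  end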
The unique method creates a unique constraint on the table. A unique constraint generally operates identically to a unique index, so the following three create_table blocks are pretty much identical:
  create_table(:a){Integer :a, :unique=>true}

  create_table(:a) do
    Integer :a
    index :a, :unique=>true
  end

  create_table(:a) do
    Integer :a
    unique :a
  end
Just like index, unique can set up a multiple column unique constraint, where the combination of the columns must be unique:
  create_table(:a) do
    Integer :a
    Integer :b
    unique [:a, :b]
  end
The full_text_index and spatial_index methods both create specialized index types supported by some databases. They both take the same options as index.
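For example, here is a sketch of a full text index on a text column (this only works on databases with full text search support, and the table and column names are hypothetical):

  create_table(:posts) do
    primary_key :id
    String :body, :text=>true
    full_text_index :body
  end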
constraint creates a named table constraint:
  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_min_length){char_length(name) > 2}
  end
Instead of using a block, you can use arguments that will be handled similarly to Dataset#filter:
  create_table(:artists) do
    primary_key :id
    String :name
    constraint(:name_length_range, :char_length.sql_function(:name)=>3..50)
  end
check operates just like constraint, except that it doesn't take a name and it creates an unnamed constraint:
  create_table(:artists) do
    primary_key :id
    String :name
    check{char_length(name) > 2}
  end
alter_table is used to alter existing tables, changing their columns, indexes, or constraints. It is used just like create_table, accepting a block which is instance_evaled, and providing its own methods:
One of the most common methods, add_column is used to add a column to the table. Its API is similar to that of create_table's column method, where the first argument is the column name, the second is the type, and the third is an options hash:
  alter_table(:albums) do
    add_column :copies_sold, Integer, :default=>0
  end
When adding a column, it's a good idea to provide a default value, unless you want the value for all rows to be set to NULL.
As you may expect, drop_column takes a column name and drops the column. It's often used in the down block of a migration to drop a column added in an up block:
  alter_table(:albums) do
    drop_column :copies_sold
  end
rename_column is used to rename a column. It takes the old column name as the first argument, and the new column name as the second argument:
  alter_table(:albums) do
    rename_column :copies_sold, :total_sales
  end
If you forgot to include a primary key on the table, and want to add one later, you can use add_primary_key. A common use of this is to make many_to_many association join tables into real models:
  alter_table(:albums_artists) do
    add_primary_key :id
  end
Just like create_table's primary_key method, if you provide an array of symbols, Sequel will not add a column, but will add a composite primary key constraint:
  alter_table(:albums_artists) do
    add_primary_key [:album_id, :artist_id]
  end
If you just want to take an existing single column and make it a primary key, call add_primary_key with an array with a single symbol:
  alter_table(:artists) do
    add_primary_key [:id]
  end
add_foreign_key can be used to add a new foreign key column or constraint to a table. Like add_primary_key, if you provide it with a symbol as the first argument, it creates a new column:
  alter_table(:albums) do
    add_foreign_key :artist_id, :artists
  end
If you want to add a new foreign key constraint to an existing column, you provide an array with a single element:
  alter_table(:albums) do
    add_foreign_key [:artist_id], :artists
  end
To set up a multiple column foreign key constraint, use an array with multiple column symbols:
  alter_table(:albums) do
    add_foreign_key [:artist_name, :artist_location], :artists
  end
add_index works just like create_table's index method, creating a new index on the table:
  alter_table(:albums) do
    add_index :artist_id
  end
It accepts the same options as create_table's index method, and you can set up a multiple column index using an array:
  alter_table(:albums_artists) do
    add_index [:album_id, :artist_id], :unique=>true
  end
As you may expect, drop_index drops an existing index:
  alter_table(:albums) do
    drop_index :artist_id
  end
Just like drop_column, it is often used in the down block of a migration.
Corresponding to create_table's full_text_index and spatial_index methods, add_full_text_index and add_spatial_index create new indexes on the table.
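For example (again, only on databases that support such indexes; the table and column are hypothetical):

  alter_table(:posts) do
    add_full_text_index :body
  end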
add_constraint adds a named constraint to the table, similar to create_table's constraint method:
  alter_table(:albums) do
    add_constraint(:name_min_length){char_length(name) > 2}
  end
There is no method to add an unnamed constraint, but you can pass nil as the first argument of add_constraint to do so. However, it's not recommended to do that as it is difficult to drop such a constraint.
add_unique_constraint adds a unique constraint to the table, similar to create_table's unique method. This usually has the same effect as adding a unique index:
  alter_table(:albums) do
    add_unique_constraint [:artist_id, :name]
  end
drop_constraint drops an existing named constraint:
  alter_table(:albums) do
    drop_constraint(:name_min_length)
  end
There is no database independent method to drop an unnamed constraint. Generally, the database will give it a name automatically, and you will have to figure out what it is. For that reason, you should not add unnamed constraints that you ever might need to remove.
set_column_default modifies the default value of a column:
  alter_table(:albums) do
    set_column_default :copies_sold, 0
  end
set_column_type modifies a column's type. Most databases will attempt to convert existing values in the column to the new type:
  alter_table(:albums) do
    set_column_type :copies_sold, Bignum
  end
You can specify the type as a string or symbol, in which case it is used verbatim, or as a supported ruby class, in which case it gets converted to an appropriate database type.
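For example, passing a string type that is sent to the database verbatim (assuming the database understands it):

  alter_table(:albums) do
    set_column_type :name, 'varchar(100)'
  end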
set_column_allow_null changes the NULL or NOT NULL setting of a column:
  alter_table(:albums) do
    set_column_allow_null :artist_id, true    # NULL
    set_column_allow_null :copies_sold, false # NOT NULL
  end
Sequel::Database has many schema modification instance methods, most of which are shortcuts to the same methods in alter_table. Such a Database instance method just calls alter_table with a block that calls the alter_table method of the same name, passing all arguments after the first argument (which is used as the table name).
For example, the following two method calls do the same thing:
  alter_table(:artists){add_column :copies_sold, Integer}

  add_column :artists, :copies_sold, Integer
There are some other schema modification methods that have no alter_table counterpart:
drop_table takes multiple arguments and treats each argument as a table name to drop:
drop_table(:albums_artists, :albums, :artists)
Note that when dropping tables, you may need to drop them in a specific order if you are using foreign keys and the database is enforcing referential integrity. In general, you need to drop the tables containing the foreign keys before the tables containing the primary keys they reference.
You can rename an existing table using rename_table. Like rename_column, the first argument is the current name, and the second is the new name:
rename_table(:artist, :artists)
create_table! with the bang drops the table unconditionally (swallowing any errors) before attempting to create it, so:
  create_table!(:artists) do
    primary_key :id
  end
is the same as:
  drop_table(:artists) rescue nil
  create_table(:artists) do
    primary_key :id
  end
It should not be used inside migrations, as if the table does not exist, it may mess up the migration.
create_table? with a question mark only creates the table if it does not already exist, so:
  create_table?(:artists) do
    primary_key :id
  end
is the same as:
  create_table(:artists) do
    primary_key :id
  end unless table_exists?(:artists)
Like create_table!, it should not be used inside migrations.
These can be used to create views. The difference between them is that create_or_replace_view will unconditionally replace an existing view of the same name, while create_view will probably raise an error. Both methods take the name as the first argument, and either a string or a dataset as the second argument:
  create_view(:gold_albums, DB[:albums].filter{copies_sold > 500000})
  create_or_replace_view(:gold_albums, "SELECT * FROM albums WHERE copies_sold > 500000")
drop_view drops existing views. Just like drop_table, it can accept multiple arguments:
drop_view(:gold_albums, :platinum_albums)
It's usually easy to determine what you should put in your migration's up block, as it's whatever change you want to make to the database. The down block is less obvious. In general, it should reverse the changes made by the up block, which means it should execute the opposite of what the up block does in the reverse order in which the up block does it. Here's an example where you are switching from having a single artist per album to multiple artists per album:
  Sequel.migration do
    up do
      # Create albums_artists table
      create_table(:albums_artists) do
        foreign_key :album_id, :albums
        foreign_key :artist_id, :artists
        index [:album_id, :artist_id], :unique=>true
      end

      # Insert one row in the albums_artists table
      # for each row in the albums table where there
      # is an associated artist
      DB[:albums_artists].insert([:album_id, :artist_id],
       DB[:albums].select(:id, :artist_id).exclude(:artist_id=>nil))

      # Drop the now unnecessary column from the albums table
      drop_column :albums, :artist_id
    end

    down do
      # Add the foreign key column back to the albums table
      alter_table(:albums){add_foreign_key :artist_id, :artists}

      # If possible, associate each album with one of the artists
      # it was associated with. This loses information, but
      # there's no way around that.
      DB[:albums_artists].
       group(:album_id).
       select{[album_id, max(artist_id).as(artist_id)]}.
       having{artist_id > 0}.
       all do |r|
        DB[:albums].
         filter(:id=>r[:album_id]).
         update(:artist_id=>r[:artist_id])
      end

      # Drop the albums_artists table
      drop_table(:albums_artists)
    end
  end
Note that the order in which things were done in the down block is in reverse order to how they were done in the up block. Also note how it isn't always possible to reverse exactly what was done in the up block. You should try to do so as much as possible, but if you can't, you may want to have your down block raise a Sequel::Error exception saying why the migration cannot be reverted.
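For example, if an up block drops a column, the data in that column cannot be recovered, so the down block could simply refuse to run (a sketch; the column name is hypothetical):

  Sequel.migration do
    up do
      drop_column :albums, :legacy_code
    end

    down do
      raise Sequel::Error, "Cannot restore the data that was in the legacy_code column"
    end
  end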
You can run migrations using the sequel command line program that comes with Sequel. If you use the -m switch, sequel will run the migrator instead of giving you an IRB session. The -m switch requires an argument that should be a path to a directory of migration files:
  sequel -m relative/path/to/migrations postgres://host/database
  sequel -m /absolute/path/to/migrations postgres://host/database
If you do not provide a -M switch, sequel will migrate to the latest version in the directory. If you provide a -M switch, it should specify an integer version to which to migrate.
  # Migrate all the way down
  sequel -m db/migrations -M 0 postgres://host/database

  # Migrate to version 10 (IntegerMigrator style migrations)
  sequel -m db/migrations -M 10 postgres://host/database

  # Migrate to version 20100510 (TimestampMigrator migrations using YYYYMMDD)
  sequel -m db/migrations -M 20100510 postgres://host/database
Whether migrations use the up or down block depends on the version to which you are migrating. If you don't provide a -M switch, all unapplied migrations will be migrated up. If you provide a -M switch, it will depend on which migrations have been applied. Applied migrations greater than that version will be migrated down, while unapplied migrations less than or equal to that version will be migrated up.
By default, sequel -m operates as a well behaved command line utility should, printing out nothing if there is no error. If you want to see the SQL being executed during a migration, as well as the amount of time that each migration takes, you can use the -E option to sequel to set up a Database logger that logs to STDOUT. You can also log that same output to a file using the -l option with a log file name.
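For example (the log file name is arbitrary):

  # Log SQL statements and timing information to STDOUT while migrating
  sequel -m db/migrations -E postgres://host/database

  # Log the same output to a file instead
  sequel -m db/migrations -l db/migration.log postgres://host/database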
Just don't do it.
It can be tempting to use models in your migrations, especially since it's easy to load them at the same time using the -L option to sequel. However, this ties your migrations to your models, and makes it so that changes in your models can break old migrations.
With Sequel, it should be easy to use plain datasets to accomplish pretty much anything you would want to accomplish in a migration. Even if you have to copy some code from a model method into a migration itself, it's better than having your migration use models and call model methods.
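For example, instead of calling a model method such as Artist.create inside a migration, use a plain dataset (a sketch; remember that self inside the block is the Database object):

  Sequel.migration do
    up do
      self[:artists].insert(:name=>'Various Artists')
    end

    down do
      self[:artists].filter(:name=>'Various Artists').delete
    end
  end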
Sequel comes with a schema_dumper extension that dumps the current schema of the database as a migration to STDOUT (which you can redirect to a file using >). This is exposed in the sequel command line tool with the -d and -D switches. -d dumps the schema in database independent format, while -D dumps the schema using a non-portable format, useful if you are using nonportable columns such as inet in your database.
Let's say you have an existing database and want to create a migration that would recreate the database's schema:
sequel -d postgres://host/database > db/migrations/001_start.rb
or using a nonportable format:
sequel -D postgres://host/database > db/migrations/001_start.rb
The main difference between the two is that -d will use the type methods with the database independent ruby class types, while -D will use the column method with string types.
Note that Sequel cannot dump constraints other than primary key constraints, so it dumps foreign key columns as plain integers. If you are using any real database features such as foreign keys, constraints, or triggers, you should use your database's dump and restore programs instead of Sequel's schema dumper.
You can take the migration created by the schema dumper to another computer with an empty database, and attempt to recreate the schema using:
sequel -m db/migrations postgres://host/database
Before the Sequel.migration DSL was introduced, Sequel used classes for Migrations:
  Class.new(Sequel::Migration) do
    def up
    end

    def down
    end
  end
or:
  class DoSomething < Sequel::Migration
    def up
    end

    def down
    end
  end
This usage is discouraged in new code, but will continue to be supported indefinitely. It is not recommended to convert old-style migration classes to the Sequel.migration DSL, but it is recommended to use the Sequel.migration DSL for all new migrations.