A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
  my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved
  my_posts.all # records are retrieved
  my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
  posts = DB[:posts]
  davids_posts = posts.filter(:author => 'david')
  old_posts = posts.filter('stamp < ?', Date.today - 7)
  davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
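Since iterating a dataset yields its rows as hashes, the Enumerable methods operate on those row hashes directly. A minimal sketch (the posts table and its title column are assumed for illustration):

  DB[:posts].map{|row| row[:title]}         # array of title values, one per row
  DB[:posts].inject(0){|sum, _row| sum + 1} # row count computed in Ruby (count does this in SQL)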
For more information, see the "Dataset Basics" guide.
EXTENSIONS | = | {} | Hash of extension name symbols to callable objects to load the extension into the Dataset object (usually by extending it with a module defined in the extension). | |
COLUMN_CHANGE_OPTS | = | [:select, :sql, :from, :join].freeze | The dataset options that require the removal of cached columns if changed. | |
NON_SQL_OPTIONS | = | [:server, :defaults, :overrides, :graph, :eager, :eager_graph, :graph_aliases] | Which options don't affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table. | |
CONDITIONED_JOIN_TYPES | = | [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left] | These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call. | |
UNCONDITIONED_JOIN_TYPES | = | [:natural, :natural_left, :natural_right, :natural_full, :cross] | These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to join_table, and they raise an error if called with a block. | |
JOIN_METHODS | = | (CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table] | All methods that return modified datasets with a joined table added. | |
QUERY_METHODS | = | (<<-METHS).split.map(&:to_sym) + JOIN_METHODS add_graph_aliases and distinct except exclude exclude_having exclude_where filter for_update from from_self graph grep group group_and_count group_append group_by having intersect invert limit lock_style naked offset or order order_append order_by order_more order_prepend qualify reverse reverse_order select select_all select_append select_group select_more server set_graph_aliases unfiltered ungraphed ungrouped union unlimited unordered where with with_recursive with_sql METHS | Methods that return modified datasets |
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should either be a Module that the dataset is extended with, or a callable object called with the database object. If mod is not provided, a block can be provided and is treated as the mod object.
If mod is a module, this also registers a Database extension that will extend all of the database's datasets.
  # File lib/sequel/dataset/query.rb, line 55
  def self.register_extension(ext, mod=nil, &block)
    if mod
      raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block
      if mod.is_a?(Module)
        block = proc{|ds| ds.extend(mod)}
        Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
      else
        block = mod
      end
    end
    Sequel.synchronize{EXTENSIONS[ext] = block}
  end
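A hedged sketch of registering a dataset extension (the :my_ext name and MyExt module are hypothetical, not part of Sequel; a real extension would live in a file that Sequel.extension can require):

  # Hypothetical module that adds a helper method to datasets.
  module MyExt
    def selected_column_count
      (opts[:select] || []).length
    end
  end

  # Registering with a module also registers a matching Database extension
  # that extends all of the database's datasets via extend_datasets.
  Sequel::Dataset.register_extension(:my_ext, MyExt)

  # Registering with a block instead of a module:
  Sequel::Dataset.register_extension(:my_ext_block){|ds| ds.extend(MyExt)}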
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.
  # File lib/sequel/dataset/query.rb, line 77
  def clone(opts = nil)
    c = super()
    if opts
      c.instance_variable_set(:@opts, Hash[@opts].merge!(opts))
      c.instance_variable_set(:@columns, nil) if @columns && !opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)}
    else
      c.instance_variable_set(:@opts, Hash[@opts])
    end
    c
  end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.
  DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
  DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
  DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id
  # File lib/sequel/dataset/query.rb, line 98
  def distinct(*args, &block)
    virtual_row_columns(args, block)
    raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on?
    clone(:distinct => args)
  end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
  DB[:items].except(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

  DB[:items].except(DB[:other_items], :all=>true, :from_self=>false)
  # SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

  DB[:items].except(DB[:other_items], :alias=>:i)
  # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
  # File lib/sequel/dataset/query.rb, line 121
  def except(dataset, opts=OPTS)
    raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except?
    raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all?
    compound_clone(:except, dataset, opts)
  end
Performs the inverse of Dataset#where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
  DB[:items].exclude(:category => 'software')
  # SELECT * FROM items WHERE (category != 'software')

  DB[:items].exclude(:category => 'software', :id=>3)
  # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
  # File lib/sequel/dataset/query.rb, line 135
  def exclude(*cond, &block)
    _filter_or_exclude(true, :where, *cond, &block)
  end
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2} # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
  # File lib/sequel/dataset/query.rb, line 143
  def exclude_having(*cond, &block)
    _filter_or_exclude(true, :having, *cond, &block)
  end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.
  DB[:items].from # SQL: SELECT *
  DB[:items].from(:blah) # SQL: SELECT * FROM blah
  DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
  DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)
# File lib/sequel/dataset/query.rb, line 178 178: def from(*source, &block) 179: virtual_row_columns(source, block) 180: table_alias_num = 0 181: ctes = nil 182: source.map! do |s| 183: case s 184: when Dataset 185: if hoist_cte?(s) 186: ctes ||= [] 187: ctes += s.opts[:with] 188: s = s.clone(:with=>nil) 189: end 190: SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) 191: when Symbol 192: sch, table, aliaz = split_symbol(s) 193: if aliaz 194: s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) 195: SQL::AliasedExpression.new(s, aliaz.to_sym) 196: else 197: s 198: end 199: else 200: s 201: end 202: end 203: o = {:from=>source.empty? ? nil : source} 204: o[:with] = (opts[:with] || []) + ctes if ctes 205: o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 206: clone(o) 207: end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the alias of the result.
  ds = DB[:items].order(:name).select(:id, :name)
  # SELECT id,name FROM items ORDER BY name

  ds.from_self
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

  ds.from_self(:alias=>:foo)
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo

  ds.from_self(:alias=>:foo, :column_aliases=>[:c1, :c2])
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
  # File lib/sequel/dataset/query.rb, line 223
  def from_self(opts=OPTS)
    fs = {}
    @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)}
    clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self)
  end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions (which are only supported on MySQL and PostgreSQL). Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns : | All columns must be matched to any of the given patterns. |
:all_patterns : | All patterns must match at least one of the columns. |
:case_insensitive : | Use a case insensitive pattern match (the default is case sensitive if the database supports it). |
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
  dataset.grep(:a, '%test%')
  # SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')

  dataset.grep([:a, :b], %w'%test% foo')
  # SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
  # OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))

  dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
  # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
  # AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

  dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
  # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
  # AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

  dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true)
  # SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
  # AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
# File lib/sequel/dataset/query.rb, line 264 264: def grep(columns, patterns, opts=OPTS) 265: if opts[:all_patterns] 266: conds = Array(patterns).map do |pat| 267: SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) 268: end 269: where(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds)) 270: else 271: conds = Array(columns).map do |c| 272: SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) 273: end 274: where(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds)) 275: end 276: end
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.
  DB[:items].group(:id) # SELECT * FROM items GROUP BY id
  DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
  DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
  # File lib/sequel/dataset/query.rb, line 285
  def group(*columns, &block)
    virtual_row_columns(columns, block)
    clone(:group => (columns.compact.empty? ? nil : columns))
  end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.
Examples:
  DB[:items].group_and_count(:name).all
  # SELECT name, count(*) AS count FROM items GROUP BY name
  # => [{:name=>'a', :count=>1}, ...]

  DB[:items].group_and_count(:first_name, :last_name).all
  # SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
  # => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

  DB[:items].group_and_count(:first_name___name).all
  # SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
  # => [{:name=>'a', :count=>1}, ...]

  DB[:items].group_and_count{substr(first_name, 1, 1).as(initial)}.all
  # SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
  # => [{:initial=>'a', :count=>1}, ...]
  # File lib/sequel/dataset/query.rb, line 316
  def group_and_count(*columns, &block)
    select_group(*columns, &block).select_more(COUNT_OF_ALL_AS_COUNT)
  end
Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.
  DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b
  DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b
  # File lib/sequel/dataset/query.rb, line 326
  def group_append(*columns, &block)
    columns = @opts[:group] + columns if @opts[:group]
    group(*columns, &block)
  end
Adds the appropriate CUBE syntax to GROUP BY.
  # File lib/sequel/dataset/query.rb, line 332
  def group_cube
    raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube?
    clone(:group_options=>:cube)
  end
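For example, on a database where supports_group_cube? is true (illustrative SQL using the standard CUBE syntax; some databases emit a WITH CUBE form instead):

  DB[:items].group(:a, :b).group_cube
  # SELECT * FROM items GROUP BY CUBE(a, b)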
Adds the appropriate ROLLUP syntax to GROUP BY.
  # File lib/sequel/dataset/query.rb, line 338
  def group_rollup
    raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup?
    clone(:group_options=>:rollup)
  end
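Similarly, where supports_group_rollup? is true (illustrative SQL; MySQL-style databases emit GROUP BY a, b WITH ROLLUP instead):

  DB[:items].group(:a, :b).group_rollup
  # SELECT * FROM items GROUP BY ROLLUP(a, b)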
Adds the appropriate GROUPING SETS syntax to GROUP BY.
  # File lib/sequel/dataset/query.rb, line 344
  def grouping_sets
    raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets?
    clone(:group_options=>:"grouping sets")
  end
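A sketch of the resulting SQL where supports_grouping_sets? is true (each group argument becomes one grouping set; illustrative output):

  DB[:items].group([:a, :b], :a, []).grouping_sets
  # SELECT * FROM items GROUP BY GROUPING SETS((a, b), (a), ())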
Returns a copy of the dataset with the HAVING conditions changed. See where for argument types.
DB[:items].group(:sum).having(:sum=>10) # SELECT * FROM items GROUP BY sum HAVING (sum = 10)
  # File lib/sequel/dataset/query.rb, line 353
  def having(*cond, &block)
    _filter(:having, *cond, &block)
  end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
  DB[:items].intersect(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

  DB[:items].intersect(DB[:other_items], :all=>true, :from_self=>false)
  # SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

  DB[:items].intersect(DB[:other_items], :alias=>:i)
  # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
  # File lib/sequel/dataset/query.rb, line 374
  def intersect(dataset, opts=OPTS)
    raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except?
    raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all?
    compound_clone(:intersect, dataset, opts)
  end
Inverts the current WHERE and HAVING clauses. If there is neither a WHERE nor a HAVING clause, adds a WHERE clause that is always false.
  DB[:items].where(:category => 'software').invert
  # SELECT * FROM items WHERE (category != 'software')

  DB[:items].where(:category => 'software', :id=>3).invert
  # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
  # File lib/sequel/dataset/query.rb, line 388
  def invert
    having, where = @opts.values_at(:having, :where)
    if having.nil? && where.nil?
      where(false)
    else
      o = {}
      o[:having] = SQL::BooleanExpression.invert(having) if having
      o[:where] = SQL::BooleanExpression.invert(where) if where
      clone(o)
    end
  end
Alias of inner_join
  # File lib/sequel/dataset/query.rb, line 401
  def join(*args, &block)
    inner_join(*args, &block)
  end
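For example (assuming items and categories tables; condition keys are qualified with the newly joined table and values with the last joined or first table):

  DB[:items].join(:categories, :id=>:category_id)
  # SELECT * FROM items INNER JOIN categories ON (categories.id = items.category_id)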
Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type argument.
Takes the following arguments:
type : | The type of join to do (e.g. :inner) |
table : | the table to join into the current dataset. Generally a symbol or string identifier (optionally aliased), an SQL::AliasedExpression, or a Dataset (joined as a subselect, with an automatically generated alias unless the :table_alias option is given). |
expr : | conditions used when joining, depending on type: nil for no conditions, an array of symbols for a JOIN USING clause, or a condition specifier (hash or array of two-element arrays) for a JOIN ON clause. |
options : | a hash of options, with the following keys supported: :table_alias (the alias to use for the joined table), :implicit_qualifier (the table/alias used to qualify the values of the join conditions, in place of the last joined or first table), :qualify (how the join conditions are qualified: false for none, :deep for deep qualification of expressions, otherwise only symbol keys/values are qualified), and :reset_implicit_qualifier (set to false to keep the existing implicit qualifier for later joins). |
block : | The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first table), and an array of previous SQL::JoinClause. Unlike where, this block is not treated as a virtual row block. |
Examples:
  DB[:a].join_table(:cross, :b)
  # SELECT * FROM a CROSS JOIN b

  DB[:a].join_table(:inner, DB[:b], :c=>:d)
  # SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

  DB[:a].join_table(:left, :b___c, [:d])
  # SELECT * FROM a LEFT JOIN b AS c USING (d)

  DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
    (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
  end
  # SELECT * FROM a NATURAL JOIN b INNER JOIN c
  # ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File lib/sequel/dataset/query.rb, line 464 464: def join_table(type, table, expr=nil, options=OPTS, &block) 465: if hoist_cte?(table) 466: s, ds = hoist_cte(table) 467: return s.join_table(type, ds, expr, options, &block) 468: end 469: 470: using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)} 471: if using_join && !supports_join_using? 472: h = {} 473: expr.each{|e| h[e] = e} 474: return join_table(type, table, h, options) 475: end 476: 477: table_alias = options[:table_alias] 478: last_alias = options[:implicit_qualifier] 479: qualify_type = options[:qualify] 480: 481: if table.is_a?(SQL::AliasedExpression) 482: table_expr = if table_alias 483: SQL::AliasedExpression.new(table.expression, table_alias, table.columns) 484: else 485: table 486: end 487: table = table_expr.expression 488: table_name = table_alias = table_expr.alias 489: elsif table.is_a?(Dataset) 490: if table_alias.nil? 491: table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 492: table_alias = dataset_alias(table_alias_num) 493: end 494: table_name = table_alias 495: table_expr = SQL::AliasedExpression.new(table, table_alias) 496: else 497: table, implicit_table_alias = split_alias(table) 498: table_alias ||= implicit_table_alias 499: table_name = table_alias || table 500: table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table 501: end 502: 503: join = if expr.nil? and !block 504: SQL::JoinClause.new(type, table_expr) 505: elsif using_join 506: raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block 507: SQL::JoinUsingClause.new(expr, type, table_expr) 508: else 509: last_alias ||= @opts[:last_joined_table] || first_source_alias 510: if Sequel.condition_specifier?(expr) 511: expr = expr.collect do |k, v| 512: qualify_type = default_join_table_qualification if qualify_type.nil? 513: case qualify_type 514: when false 515: nil # Do no qualification 516: when :deep 517: k = Sequel::Qualifier.new(self, table_name).transform(k) 518: v = Sequel::Qualifier.new(self, last_alias).transform(v) 519: else 520: k = qualified_column_name(k, table_name) if k.is_a?(Symbol) 521: v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) 522: end 523: [k,v] 524: end 525: expr = SQL::BooleanExpression.from_value_pairs(expr) 526: end 527: if block 528: expr2 = yield(table_name, last_alias, @opts[:join] || []) 529: expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 530: end 531: SQL::JoinOnClause.new(expr, type, table_expr) 532: end 533: 534: opts = {:join => (@opts[:join] || []) + [join]} 535: opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false 536: opts[:num_dataset_sources] = table_alias_num if table_alias_num 537: clone(opts) 538: end
Marks this dataset as a lateral dataset. If used in another dataset's FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:
DB.from(:a, DB[:b].where(:a__c=>:b__d).lateral) # SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))
  # File lib/sequel/dataset/query.rb, line 560
  def lateral
    clone(:lateral=>true)
  end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
  DB[:items].limit(10) # SELECT * FROM items LIMIT 10
  DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
  DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
  DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
  DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb, line 574 574: def limit(l, o = (no_offset = true; nil)) 575: return from_self.limit(l, o) if @opts[:sql] 576: 577: if l.is_a?(Range) 578: no_offset = false 579: o = l.first 580: l = l.last - l.first + (l.exclude_end? ? 0 : 1) 581: end 582: l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) 583: if l.is_a?(Integer) 584: raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 585: end 586: 587: ds = clone(:limit=>l) 588: ds = ds.offset(o) unless no_offset 589: ds 590: end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.
A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. for_update).
  DB[:items].lock_style('FOR SHARE NOWAIT') # SELECT * FROM items FOR SHARE NOWAIT
  DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED') # SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED
  # File lib/sequel/dataset/query.rb, line 604
  def lock_style(style)
    clone(:lock => style)
  end
Returns a cloned dataset without a row_proc.
  ds = DB[:items]
  ds.row_proc = proc(&:invert)
  ds.all # => [{2=>:id}]
  ds.naked.all # => [{:id=>2}]
  # File lib/sequel/dataset/query.rb, line 614
  def naked
    ds = clone
    ds.row_proc = nil
    ds
  end
Returns a copy of the dataset with the given offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you've called offset first.
DB[:items].offset(10) # SELECT * FROM items OFFSET 10
  # File lib/sequel/dataset/query.rb, line 625
  def offset(o)
    o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
    if o.is_a?(Integer)
      raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
    end
    clone(:offset => o)
  end
Adds an alternate filter to an existing WHERE clause using OR. If there is no existing WHERE clause, the given conditions are ignored and an unmodified clone is returned.
DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b
  # File lib/sequel/dataset/query.rb, line 637
  def or(*cond, &block)
    cond = cond.first if cond.size == 1
    v = @opts[:where]
    if v.nil? || (cond.respond_to?(:empty?) && cond.empty? && !block)
      clone
    else
      clone(:where => SQL::BooleanExpression.new(:OR, v, filter_expr(cond, &block)))
    end
  end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.
  DB[:items].order(:name) # SELECT * FROM items ORDER BY name
  DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
  DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
  DB[:items].order(:a + :b) # SELECT * FROM items ORDER BY (a + b)
  DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
  DB[:items].order(Sequel.asc(:name, :nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
  DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
  DB[:items].order(nil) # SELECT * FROM items
  # File lib/sequel/dataset/query.rb, line 661
  def order(*columns, &block)
    virtual_row_columns(columns, block)
    clone(:order => (columns.compact.empty?) ? nil : columns)
  end
Alias of order_more, for naming consistency with order_prepend.
  # File lib/sequel/dataset/query.rb, line 667
  def order_append(*columns, &block)
    order_more(*columns, &block)
  end
Returns a copy of the dataset with the order columns added to the end of the existing order.
  DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
  DB[:items].order(:a).order_more(:b) # SELECT * FROM items ORDER BY a, b
  # File lib/sequel/dataset/query.rb, line 681
  def order_more(*columns, &block)
    columns = @opts[:order] + columns if @opts[:order]
    order(*columns, &block)
  end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
  DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
  DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
  # File lib/sequel/dataset/query.rb, line 691
  def order_prepend(*columns, &block)
    ds = order(*columns, &block)
    @opts[:order] ? ds.order_more(*@opts[:order]) : ds
  end
Qualify to the given table, or first source if no table is given.
  DB[:items].where(:id=>1).qualify
  # SELECT items.* FROM items WHERE (items.id = 1)

  DB[:items].where(:id=>1).qualify(:i)
  # SELECT i.* FROM items WHERE (i.id = 1)
  # File lib/sequel/dataset/query.rb, line 703
  def qualify(table=first_source)
    o = @opts
    return clone if o[:sql]
    h = {}
    (o.keys & QUALIFY_KEYS).each do |k|
      h[k] = qualified_expression(o[k], table)
    end
    h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty?
    clone(h)
  end
Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.
  DB[:items].returning # RETURNING *
  DB[:items].returning(nil) # RETURNING NULL
  DB[:items].returning(:id, :name) # RETURNING id, name
  # File lib/sequel/dataset/query.rb, line 722
  def returning(*values)
    raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert)
    clone(:returning=>values)
  end
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
  DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
  DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
  DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
  DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
  # File lib/sequel/dataset/query.rb, line 734
  def reverse(*order, &block)
    virtual_row_columns(order, block)
    order(*invert_order(order.empty? ? @opts[:order] : order))
  end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.
  DB[:items].select(:a) # SELECT a FROM items
  DB[:items].select(:a, :b) # SELECT a, b FROM items
  DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
  # File lib/sequel/dataset/query.rb, line 751
  def select(*columns, &block)
    virtual_row_columns(columns, block)
    clone(:select => columns)
  end
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
  DB[:items].select(:a).select_all # SELECT * FROM items
  DB[:items].select_all(:items) # SELECT items.* FROM items
  DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
  # File lib/sequel/dataset/query.rb, line 763
  def select_all(*tables)
    if tables.empty?
      clone(:select => nil)
    else
      select(*tables.map{|t| i, a = split_alias(t); a || i}.map{|t| SQL::ColumnAll.new(t)})
    end
  end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
  DB[:items].select(:a).select(:b) # SELECT b FROM items
  DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
  DB[:items].select_append(:b) # SELECT *, b FROM items
  # File lib/sequel/dataset/query.rb, line 778
  def select_append(*columns, &block)
    cur_sel = @opts[:select]
    if !cur_sel || cur_sel.empty?
      unless supports_select_all_and_column?
        return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_more(*columns, &block)
      end
      cur_sel = [WILDCARD]
    end
    select(*(cur_sel + columns), &block)
  end
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.
  DB[:items].select_group(:a, :b)
  # SELECT a, b FROM items GROUP BY a, b

  DB[:items].select_group(:c___a){f(c2)}
  # SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
  # File lib/sequel/dataset/query.rb, line 798
  def select_group(*columns, &block)
    virtual_row_columns(columns, block)
    select(*columns).group(*columns.map{|c| unaliased_identifier(c)})
  end
Alias for select_append.
  # File lib/sequel/dataset/query.rb, line 804
  def select_more(*columns, &block)
    select_append(*columns, &block)
  end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
  DB[:items].all # Uses the :read_only or :default server
  DB[:items].delete # Uses the :default server
  DB[:items].server(:blah).delete # Uses the :blah server
  # File lib/sequel/dataset/query.rb, line 817
  def server(servr)
    clone(:server=>servr)
  end
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
  # File lib/sequel/dataset/query.rb, line 824
  def server?(server)
    if db.sharded? && !opts[:server]
      server(server)
    else
      self
    end
  end
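A sketch of the intended use in code that may or may not run against a sharded Database (the :read_only and :shard1 shard names here are just examples):

  ds = DB[:items]
  ds.server?(:read_only)                 # clone using :read_only if DB is sharded and no server was set
  ds.server(:shard1).server?(:read_only) # returns the receiver, since a server is already set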
Unbind bound variables from this dataset's filter and return an array of two objects. The first object is a modified dataset where the filter has been replaced with one that uses bound variable placeholders. The second object is the hash of unbound variables. You can then prepare and execute (or just call) the dataset with the bound variables to get results.
  ds, bv = DB[:items].where(:a=>1).unbind
  ds # SELECT * FROM items WHERE (a = $a)
  bv # {:a => 1}
  ds.call(:select, bv)
  # File lib/sequel/dataset/query.rb, line 848
  def unbind
    u = Unbinder.new
    ds = clone(:where=>u.transform(opts[:where]), :join=>u.transform(opts[:join]))
    [ds, u.binds]
  end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use UNION ALL instead of UNION, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
  DB[:items].union(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

  DB[:items].union(DB[:other_items], :all=>true, :from_self=>false)
  # SELECT * FROM items UNION ALL SELECT * FROM other_items

  DB[:items].union(DB[:other_items], :alias=>:i)
  # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
  # File lib/sequel/dataset/query.rb, line 886
  def union(dataset, opts=OPTS)
    compound_clone(:union, dataset, opts)
  end
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
Accepts the following argument types:
Hash : | list of equality/inclusion expressions |
Array : | depends: if all members are arrays with exactly two elements, it is treated like a Hash (but allowing duplicate keys); if the first member is a string, it is treated as a placeholder string and the remaining members as its parameters |
String : | taken literally |
Symbol : | taken as a boolean column argument (e.g. WHERE active) |
Sequel::SQL::BooleanExpression : | an existing condition expression, probably created using the Sequel expression filter DSL. |
where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the "Virtual Rows" guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
  DB[:items].where(:id => 3) # SELECT * FROM items WHERE (id = 3)
  DB[:items].where('price < ?', 100) # SELECT * FROM items WHERE price < 100
  DB[:items].where([[:id, [1,2,3]], [:id, 0..10]]) # SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))
  DB[:items].where('price < 100') # SELECT * FROM items WHERE price < 100
  DB[:items].where(:active) # SELECT * FROM items WHERE active
  DB[:items].where{price < 100} # SELECT * FROM items WHERE (price < 100)
Multiple where calls can be chained for scoping:
software = dataset.where(:category => 'software').where{price < 100} # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the "Dataset Filtering" guide for more examples and details.
  # File lib/sequel/dataset/query.rb, line 954
  def where(*cond, &block)
    _filter(:where, *cond, &block)
  end
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
:args : | Specify the arguments/columns for the CTE, should be an array of symbols. |
:recursive : | Specify that this is a recursive CTE |
DB[:items].with(:items, DB[:syx].where(:name.like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
  # File lib/sequel/dataset/query.rb, line 966
  def with(name, dataset, opts=OPTS)
    raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
    if hoist_cte?(dataset)
      s, ds = hoist_cte(dataset)
      s.with(name, ds, opts)
    else
      clone(:with=>(@opts[:with]||[]) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)])
    end
  end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
:args : | Specify the arguments/columns for the CTE, should be an array of symbols. |
:union_all : | Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts. |
  DB[:t].with_recursive(:t,
    DB[:i1].select(:id, :parent_id).where(:parent_id=>nil),
    DB[:i1].join(:t, :id=>:parent_id).select(:i1__id, :i1__parent_id),
    :args=>[:id, :parent_id])
  # WITH RECURSIVE "t"("id", "parent_id") AS (
  #   SELECT "id", "parent_id" FROM "i1" WHERE ("parent_id" IS NULL)
  #   UNION ALL
  #   SELECT "i1"."id", "i1"."parent_id" FROM "i1" INNER JOIN "t" ON ("t"."id" = "i1"."parent_id")
  # ) SELECT * FROM "t"
  # File lib/sequel/dataset/query.rb, line 992
  def with_recursive(name, nonrecursive, recursive, opts=OPTS)
    raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
    if hoist_cte?(nonrecursive)
      s, ds = hoist_cte(nonrecursive)
      s.with_recursive(name, ds, recursive, opts)
    elsif hoist_cte?(recursive)
      s, ds = hoist_cte(recursive)
      s.with_recursive(name, nonrecursive, ds, opts)
    else
      clone(:with=>(@opts[:with]||[]) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))])
    end
  end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, :b=>1) # INSERT INTO items (b) VALUES (1)
  # File lib/sequel/dataset/query.rb, line 1017
  def with_sql(sql, *args)
    if sql.is_a?(Symbol)
      sql = send(sql, *args)
    else
      sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
    end
    clone(:sql=>sql)
  end
Add the dataset to the list of compounds
  # File lib/sequel/dataset/query.rb, line 1029
  def compound_clone(type, dataset, opts)
    if hoist_cte?(dataset)
      s, ds = hoist_cte(dataset)
      return s.compound_clone(type, ds, opts)
    end
    ds = compound_from_self.clone(:compounds=>Array(@opts[:compounds]).map(&:dup) + [[type, dataset.compound_from_self, opts[:all]]])
    opts[:from_self] == false ? ds : ds.from_self(opts)
  end
Return true if the dataset has a non-nil value for any key in opts.
  # File lib/sequel/dataset/query.rb, line 1039
  def options_overlap(opts)
    !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty?
  end
Whether this dataset is a simple select from an underlying table, such as:
  SELECT * FROM table
  SELECT table.* FROM table
# File lib/sequel/dataset/query.rb, line 1047 1047: def simple_select_all? 1048: o = @opts.reject{|k,v| v.nil? || NON_SQL_OPTIONS.include?(k)} 1049: if (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression)) 1050: case o.length 1051: when 1 1052: true 1053: when 2 1054: (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll) 1055: else 1056: false 1057: end 1058: else 1059: false 1060: end 1061: end
These methods all return booleans, with most describing whether or not the dataset supports a feature.
Whether this dataset quotes identifiers.
  # File lib/sequel/dataset/features.rb, line 12
  def quote_identifiers?
    if defined?(@quote_identifiers)
      @quote_identifiers
    else
      @quote_identifiers = db.quote_identifiers?
    end
  end
Whether you must use a column alias list for recursive CTEs (false by default).
  # File lib/sequel/dataset/features.rb, line 29
  def recursive_cte_requires_column_aliases?
    false
  end
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer)
  # File lib/sequel/dataset/features.rb, line 41
  def requires_placeholder_type_specifiers?
    false
  end
Whether the dataset supports common table expressions (the WITH clause). If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
  # File lib/sequel/dataset/features.rb, line 48
  def supports_cte?(type=:select)
    false
  end
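These predicates are typically used to pick a query strategy at runtime. A minimal sketch (the recent_items CTE name is arbitrary):

  ds = DB[:items]
  recent = ds.where(:active=>true)
  query = if ds.supports_cte?
    ds.with(:recent_items, recent).from(:recent_items) # WITH recent_items AS (...) SELECT * FROM recent_items
  else
    ds.from(recent) # fall back to an inline subquery
  end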
Whether the dataset supports common table expressions (the WITH clause) in subqueries. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
  # File lib/sequel/dataset/features.rb, line 55
  def supports_cte_in_subqueries?
    false
  end
Whether the database supports derived column lists (e.g. "table_expr AS table_alias(column_alias1, column_alias2, …)"), true by default.
  # File lib/sequel/dataset/features.rb, line 62
  def supports_derived_column_lists?
    true
  end
Whether the dataset supports the IS TRUE syntax.
  # File lib/sequel/dataset/features.rb, line 103
  def supports_is_true?
    true
  end
Whether the dataset supports the JOIN table USING (column1, …) syntax.
  # File lib/sequel/dataset/features.rb, line 108
  def supports_join_using?
    true
  end
Whether limits are supported in correlated subqueries. True by default.
  # File lib/sequel/dataset/features.rb, line 118
  def supports_limits_in_correlated_subqueries?
    true
  end
Whether modifying joined datasets is supported.
  # File lib/sequel/dataset/features.rb, line 123
  def supports_modifying_joins?
    false
  end
Whether offsets are supported in correlated subqueries, true by default.
  # File lib/sequel/dataset/features.rb, line 134
  def supports_offsets_in_correlated_subqueries?
    true
  end
Whether the dataset supports pattern matching by regular expressions.
  # File lib/sequel/dataset/features.rb, line 145
  def supports_regexp?
    false
  end
Whether the dataset supports REPLACE syntax, false by default.
  # File lib/sequel/dataset/features.rb, line 150
  def supports_replace?
    false
  end
Whether the database supports SELECT *, column FROM table
  # File lib/sequel/dataset/features.rb, line 166
  def supports_select_all_and_column?
    true
  end
Whether the dataset supports timezones in literal timestamps
  # File lib/sequel/dataset/features.rb, line 171
  def supports_timestamp_timezones?
    false
  end
Whether the dataset supports fractional seconds in literal timestamps
  # File lib/sequel/dataset/features.rb, line 176
  def supports_timestamp_usecs?
    true
  end
MUTATION_METHODS | = | QUERY_METHODS - [:naked, :from_self] | All methods that should have a ! method added that modifies the receiver. |
Setup mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
Do not call this method with untrusted input, as that can result in arbitrary code execution.
  # File lib/sequel/dataset/mutation.rb, line 19
  def self.def_mutation_method(*meths)
    options = meths.pop if meths.last.is_a?(Hash)
    mod = options[:module] if options
    mod ||= self
    meths.each do |meth|
      mod.class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__)
    end
  end
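A hedged sketch of how an extension might add a mutation variant for a query method it defines (the paginate method name and MyDatasetMethods module are hypothetical here):

  # Assuming an extension has already defined Sequel::Dataset#paginate, this
  # adds a paginate! method that replaces the receiver's options with those
  # of the dataset returned by paginate.
  Sequel::Dataset.def_mutation_method(:paginate)

  # The :module option places the generated method in a specific module instead:
  module MyDatasetMethods; end
  Sequel::Dataset.def_mutation_method(:paginate, :module=>MyDatasetMethods)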
Load an extension into the receiver. In addition to requiring the extension file, this also modifies the dataset to work with the extension (usually extending it with a module defined in the extension file). If no related extension file exists or the extension does not have specific support for Dataset objects, an Error will be raised. Returns self.
  # File lib/sequel/dataset/mutation.rb, line 40
  def extension!(*exts)
    raise_if_frozen!
    Sequel.extension(*exts)
    exts.each do |ext|
      if pr = Sequel.synchronize{EXTENSIONS[ext]}
        pr.call(self)
      else
        raise(Error, "Extension #{ext} does not have specific support handling individual datasets (try: Sequel.extension #{ext.inspect})")
      end
    end
    self
  end
Avoid self-referential dataset by cloning.
  # File lib/sequel/dataset/mutation.rb, line 54
  def from_self!(*args, &block)
    raise_if_frozen!
    @opts = clone.from_self(*args, &block).opts
    self
  end
Set whether to quote identifiers for this dataset
  # File lib/sequel/dataset/mutation.rb, line 81
  def quote_identifiers=(v)
    raise_if_frozen!
    skip_symbol_cache!
    @quote_identifiers = v
  end
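For example (a minimal sketch; the quoting character depends on the database, double quotes shown here):

  ds = DB[:items]
  ds.quote_identifiers = true
  ds.select(:name).sql  # => SELECT "name" FROM "items"
  ds.quote_identifiers = false
  ds.select(:name).sql  # => SELECT name FROM items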
Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
  # File lib/sequel/dataset/sql.rb, line 14
  def exists
    SQL::PlaceholderLiteralString.new(EXISTS, [self], true)
  end
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(:a=>1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb, line 22 22: def insert_sql(*values) 23: return static_sql(@opts[:sql]) if @opts[:sql] 24: 25: check_modification_allowed! 26: 27: columns = [] 28: 29: case values.size 30: when 0 31: return insert_sql({}) 32: when 1 33: case vals = values.at(0) 34: when Hash 35: values = [] 36: vals.each do |k,v| 37: columns << k 38: values << v 39: end 40: when Dataset, Array, LiteralString 41: values = vals 42: end 43: when 2 44: if (v0 = values.at(0)).is_a?(Array) && ((v1 = values.at(1)).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString)) 45: columns, values = v0, v1 46: raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length 47: end 48: end 49: 50: if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? 51: columns, values = insert_empty_columns_values 52: end 53: clone(:columns=>columns, :values=>values).send(:_insert_sql) 54: end
Append a literal representation of a value to the given SQL string.
If an unsupported object is given, an Error is raised.
# File lib/sequel/dataset/sql.rb, line 59 59: def literal_append(sql, v) 60: case v 61: when Symbol 62: if skip_symbol_cache? 63: literal_symbol_append(sql, v) 64: else 65: unless l = db.literal_symbol(v) 66: l = String.new 67: literal_symbol_append(l, v) 68: db.literal_symbol_set(v, l) 69: end 70: sql << l 71: end 72: when String 73: case v 74: when LiteralString 75: sql << v 76: when SQL::Blob 77: literal_blob_append(sql, v) 78: else 79: literal_string_append(sql, v) 80: end 81: when Integer 82: sql << literal_integer(v) 83: when Hash 84: literal_hash_append(sql, v) 85: when SQL::Expression 86: literal_expression_append(sql, v) 87: when Float 88: sql << literal_float(v) 89: when BigDecimal 90: sql << literal_big_decimal(v) 91: when NilClass 92: sql << literal_nil 93: when TrueClass 94: sql << literal_true 95: when FalseClass 96: sql << literal_false 97: when Array 98: literal_array_append(sql, v) 99: when Time 100: v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v) 101: when DateTime 102: literal_datetime_append(sql, v) 103: when Date 104: sql << literal_date(v) 105: when Dataset 106: literal_dataset_append(sql, v) 107: else 108: literal_other_append(sql, v) 109: end 110: end
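The related Dataset#literal returns the literalization as a new string instead of appending. A short sketch of both (exact output varies with the database and quoting settings):

  ds = DB[:items]
  ds.literal("O'Reilly")   # => "'O''Reilly'" (string, quote escaped)
  ds.literal([1, 2, 3])    # => "(1, 2, 3)"   (array becomes a value list)
  ds.literal(:name)        # => "name", or a quoted identifier if quoting is enabled

  sql = String.new("SELECT ")
  ds.literal_append(sql, 42) # appends "42" to sql in place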
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
# File lib/sequel/dataset/sql.rb, line 118 118: def multi_insert_sql(columns, values) 119: case multi_insert_sql_strategy 120: when :values 121: sql = LiteralString.new('VALUES ') 122: expression_list_append(sql, values.map{|r| Array(r)}) 123: [insert_sql(columns, sql)] 124: when :union 125: c = false 126: sql = LiteralString.new 127: u = UNION_ALL_SELECT 128: f = empty_from_sql 129: values.each do |v| 130: if c 131: sql << u 132: else 133: sql << SELECT << SPACE 134: c = true 135: end 136: expression_list_append(sql, v) 137: sql << f if f 138: end 139: [insert_sql(columns, sql)] 140: else 141: values.map{|r| insert_sql(columns, r)} 142: end 143: end
Same as select_sql, not aliased directly to make subclassing simpler.
  # File lib/sequel/dataset/sql.rb, line 146
  def sql
    select_sql
  end
Returns a TRUNCATE SQL query string. See truncate
DB[:items].truncate_sql # => 'TRUNCATE items'
  # File lib/sequel/dataset/sql.rb, line 153
  def truncate_sql
    if opts[:sql]
      static_sql(opts[:sql])
    else
      check_truncation_allowed!
      raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having]
      t = String.new
      source_list_append(t, opts[:from])
      _truncate_sql(t)
    end
  end
Formats an UPDATE statement using the given values. See update.
  DB[:items].update_sql(:price => 100, :category => 'software')
  # => "UPDATE items SET price = 100, category = 'software'"
Raises an Error if the dataset is grouped or includes more than one table.
  # File lib/sequel/dataset/sql.rb, line 172
  def update_sql(values = OPTS)
    return static_sql(opts[:sql]) if opts[:sql]
    check_modification_allowed!
    clone(:values=>values).send(:_update_sql)
  end
These methods, while public, are not designed to be used directly by the end user.
EMULATED_FUNCTION_MAP | = | {} | Map of emulated function names to native function names. | |
WILDCARD | = | LiteralString.new('*').freeze | ||
ALL | = | ' ALL'.freeze | ||
AND_SEPARATOR | = | " AND ".freeze | ||
APOS | = | "'".freeze | ||
APOS_RE | = | /'/.freeze | ||
ARRAY_EMPTY | = | '(NULL)'.freeze | ||
AS | = | ' AS '.freeze | ||
ASC | = | ' ASC'.freeze | ||
BACKSLASH | = | "\\".freeze | ||
BITCOMP_CLOSE | = | ") - 1)".freeze | ||
BITCOMP_OPEN | = | "((0 - ".freeze | ||
BITWISE_METHOD_MAP | = | {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR} | ||
BOOL_FALSE | = | "'f'".freeze | ||
BOOL_TRUE | = | "'t'".freeze | ||
BRACKET_CLOSE | = | ']'.freeze | ||
BRACKET_OPEN | = | '['.freeze | ||
CASE_ELSE | = | " ELSE ".freeze | ||
CASE_END | = | " END)".freeze | ||
CASE_OPEN | = | '(CASE'.freeze | ||
CASE_THEN | = | " THEN ".freeze | ||
CASE_WHEN | = | " WHEN ".freeze | ||
CAST_OPEN | = | 'CAST('.freeze | ||
COLON | = | ':'.freeze | ||
COLUMN_REF_RE1 | = | Sequel::COLUMN_REF_RE1 | ||
COLUMN_REF_RE2 | = | Sequel::COLUMN_REF_RE2 | ||
COLUMN_REF_RE3 | = | Sequel::COLUMN_REF_RE3 | ||
COMMA | = | ', '.freeze | ||
COMMA_SEPARATOR | = | COMMA | ||
CONDITION_FALSE | = | '(1 = 0)'.freeze | ||
CONDITION_TRUE | = | '(1 = 1)'.freeze | ||
COUNT_FROM_SELF_OPTS | = | [:distinct, :group, :sql, :limit, :offset, :compounds] | ||
COUNT_OF_ALL_AS_COUNT | = | SQL::Function.new(:count, WILDCARD).as(:count) | ||
DATASET_ALIAS_BASE_NAME | = | 't'.freeze | ||
DEFAULT | = | LiteralString.new('DEFAULT').freeze | ||
DEFAULT_VALUES | = | " DEFAULT VALUES".freeze | ||
DELETE | = | 'DELETE'.freeze | ||
DESC | = | ' DESC'.freeze | ||
DISTINCT | = | " DISTINCT".freeze | ||
DOT | = | '.'.freeze | ||
DOUBLE_APOS | = | "''".freeze | ||
DOUBLE_QUOTE | = | '""'.freeze | ||
EQUAL | = | ' = '.freeze | ||
EMPTY_PARENS | = | '()'.freeze | ||
ESCAPE | = | " ESCAPE ".freeze | ||
EXTRACT | = | 'extract('.freeze | ||
EXISTS | = | ['EXISTS '.freeze].freeze | ||
FILTER | = | " FILTER (WHERE ".freeze | ||
FOR_UPDATE | = | ' FOR UPDATE'.freeze | ||
FORMAT_DATE | = | "'%Y-%m-%d'".freeze | ||
FORMAT_DATE_STANDARD | = | "DATE '%Y-%m-%d'".freeze | ||
FORMAT_OFFSET | = | "%+03i%02i".freeze | ||
FORMAT_TIMESTAMP_RE | = | /%[Nz]/.freeze | ||
FORMAT_USEC | = | '%N'.freeze | ||
FRAME_ALL | = | "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING".freeze | ||
FRAME_ROWS | = | "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW".freeze | ||
FROM | = | ' FROM '.freeze | ||
FUNCTION_DISTINCT | = | "DISTINCT ".freeze | ||
GROUP_BY | = | " GROUP BY ".freeze | ||
HAVING | = | " HAVING ".freeze | ||
INSERT | = | "INSERT".freeze | ||
INTO | = | " INTO ".freeze | ||
IS_LITERALS | = | {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze | ||
IS_OPERATORS | = | ::Sequel::SQL::ComplexExpression::IS_OPERATORS | ||
LATERAL | = | 'LATERAL '.freeze | ||
LIKE_OPERATORS | = | ::Sequel::SQL::ComplexExpression::LIKE_OPERATORS | ||
LIMIT | = | " LIMIT ".freeze | ||
N_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS | ||
NOT_SPACE | = | 'NOT '.freeze | ||
NULL | = | "NULL".freeze | ||
NULLS_FIRST | = | " NULLS FIRST".freeze | ||
NULLS_LAST | = | " NULLS LAST".freeze | ||
OFFSET | = | " OFFSET ".freeze | ||
ON | = | ' ON '.freeze | ||
ON_PAREN | = | " ON (".freeze | ||
ORDER_BY | = | " ORDER BY ".freeze | ||
ORDER_BY_NS | = | "ORDER BY ".freeze | ||
OVER | = | ' OVER '.freeze | ||
PAREN_CLOSE | = | ')'.freeze | ||
PAREN_OPEN | = | '('.freeze | ||
PAREN_SPACE_OPEN | = | ' ('.freeze | ||
PARTITION_BY | = | "PARTITION BY ".freeze | ||
QUALIFY_KEYS | = | [:select, :where, :having, :order, :group] | ||
QUESTION_MARK | = | '?'.freeze | ||
QUESTION_MARK_RE | = | /\?/.freeze | ||
QUOTE | = | '"'.freeze | ||
QUOTE_RE | = | /"/.freeze | ||
RETURNING | = | " RETURNING ".freeze | ||
SELECT | = | 'SELECT'.freeze | ||
SET | = | ' SET '.freeze | ||
SPACE | = | ' '.freeze | ||
SQL_WITH | = | "WITH ".freeze | ||
SPACE_WITH | = | " WITH ".freeze | ||
TILDE | = | '~'.freeze | ||
TIMESTAMP_FORMAT | = | "'%Y-%m-%d %H:%M:%S%N%z'".freeze | ||
STANDARD_TIMESTAMP_FORMAT | = | "TIMESTAMP #{TIMESTAMP_FORMAT}".freeze | ||
TWO_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS | ||
REGEXP_OPERATORS | = | ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS | ||
UNDERSCORE | = | '_'.freeze | ||
UPDATE | = | 'UPDATE'.freeze | ||
USING | = | ' USING ('.freeze | ||
UNION_ALL_SELECT | = | ' UNION ALL SELECT '.freeze | ||
VALUES | = | " VALUES ".freeze | ||
WHERE | = | " WHERE ".freeze | ||
WITH_ORDINALITY | = | " WITH ORDINALITY".freeze | ||
WITHIN_GROUP | = | " WITHIN GROUP (ORDER BY ".freeze | ||
DATETIME_SECFRACTION_ARG | = | RUBY_VERSION >= '1.9.0' ? 1000000 : 86400000000 |
Define a dataset literalization method for the given type in the given module, using the given clauses.
Arguments:
mod : | Module in which to define method |
type : | Type of SQL literalization method to create, either :select, :insert, :update, or :delete |
clauses : | array of clauses that make up the SQL query for the type. This can either be a single array of symbols/strings, or it can be an array of pairs, with the first element in each pair being an if/elsif/else code fragment, and the second element in each pair being an array of symbol/strings for the appropriate branch. |
# File lib/sequel/dataset/sql.rb, line 199 199: def self.def_sql_method(mod, type, clauses) 200: priv = type == :update || type == :insert 201: 202: lines = [] 203: lines << 'private' if priv 204: lines << "def #{'_' if priv}#{type}_sql" 205: lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv 206: lines << 'check_modification_allowed!' if type == :delete 207: lines << 'sql = @opts[:append_sql] || sql_string_origin' 208: 209: if clauses.all?{|c| c.is_a?(Array)} 210: clauses.each do |i, cs| 211: lines << i 212: lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) 213: end 214: lines << 'end' 215: else 216: lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"}) 217: end 218: 219: lines << 'sql' 220: lines << 'end' 221: 222: mod.class_eval lines.join("\n"), __FILE__, __LINE__ 223: end
Append literalization of boolean constant to SQL string.
  # File lib/sequel/dataset/sql.rb, line 370
  def boolean_constant_sql_append(sql, constant)
    if (constant == true || constant == false) && !supports_where_true?
      sql << (constant == true ? CONDITION_TRUE : CONDITION_FALSE)
    else
      literal_append(sql, constant)
    end
  end
Append literalization of case expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 379 379: def case_expression_sql_append(sql, ce) 380: sql << CASE_OPEN 381: if ce.expression? 382: sql << SPACE 383: literal_append(sql, ce.expression) 384: end 385: w = CASE_WHEN 386: t = CASE_THEN 387: ce.conditions.each do |c,r| 388: sql << w 389: literal_append(sql, c) 390: sql << t 391: literal_append(sql, r) 392: end 393: sql << CASE_ELSE 394: literal_append(sql, ce.default) 395: sql << CASE_END 396: end
Append literalization of complex expression to SQL string.
# File lib/sequel/dataset/sql.rb, line 412 412: def complex_expression_sql_append(sql, op, args) 413: case op 414: when *IS_OPERATORS 415: r = args.at(1) 416: if r.nil? || supports_is_true? 417: raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] 418: sql << PAREN_OPEN 419: literal_append(sql, args.at(0)) 420: sql << SPACE << op.to_s << SPACE 421: sql << val << PAREN_CLOSE 422: elsif op == :IS 423: complex_expression_sql_append(sql, "=""=", args) 424: else 425: complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new("!=""!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)]) 426: end 427: when :IN, "NOT IN""NOT IN" 428: cols = args.at(0) 429: vals = args.at(1) 430: col_array = true if cols.is_a?(Array) 431: if vals.is_a?(Array) 432: val_array = true 433: empty_val_array = vals == [] 434: end 435: if empty_val_array 436: literal_append(sql, empty_array_value(op, cols)) 437: elsif col_array 438: if !supports_multiple_column_in? 439: if val_array 440: expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) 441: literal_append(sql, op == :IN ? expr : ~expr) 442: else 443: old_vals = vals 444: vals = vals.naked if vals.is_a?(Sequel::Dataset) 445: vals = vals.to_a 446: val_cols = old_vals.columns 447: complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) 448: end 449: else 450: # If the columns and values are both arrays, use array_sql instead of 451: # literal so that if values is an array of two element arrays, it 452: # will be treated as a value list instead of a condition specifier. 453: sql << PAREN_OPEN 454: literal_append(sql, cols) 455: sql << SPACE << op.to_s << SPACE 456: if val_array 457: array_sql_append(sql, vals) 458: else 459: literal_append(sql, vals) 460: end 461: sql << PAREN_CLOSE 462: end 463: else 464: sql << PAREN_OPEN 465: literal_append(sql, cols) 466: sql << SPACE << op.to_s << SPACE 467: literal_append(sql, vals) 468: sql << PAREN_CLOSE 469: end 470: when :LIKE, 'NOT LIKE''NOT LIKE' 471: sql << PAREN_OPEN 472: literal_append(sql, args.at(0)) 473: sql << SPACE << op.to_s << SPACE 474: literal_append(sql, args.at(1)) 475: sql << ESCAPE 476: literal_append(sql, BACKSLASH) 477: sql << PAREN_CLOSE 478: when :ILIKE, 'NOT ILIKE''NOT ILIKE' 479: complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : "NOT LIKE""NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) 480: when :** 481: function_sql_append(sql, Sequel.function(:power, *args)) 482: when *TWO_ARITY_OPERATORS 483: if REGEXP_OPERATORS.include?(op) && !supports_regexp? 
484: raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" 485: end 486: sql << PAREN_OPEN 487: literal_append(sql, args.at(0)) 488: sql << SPACE << op.to_s << SPACE 489: literal_append(sql, args.at(1)) 490: sql << PAREN_CLOSE 491: when *N_ARITY_OPERATORS 492: sql << PAREN_OPEN 493: c = false 494: op_str = " #{op} " 495: args.each do |a| 496: sql << op_str if c 497: literal_append(sql, a) 498: c ||= true 499: end 500: sql << PAREN_CLOSE 501: when :NOT 502: sql << NOT_SPACE 503: literal_append(sql, args.at(0)) 504: when :NOOP 505: literal_append(sql, args.at(0)) 506: when :'B~' 507: sql << TILDE 508: literal_append(sql, args.at(0)) 509: when :extract 510: sql << EXTRACT << args.at(0).to_s << FROM 511: literal_append(sql, args.at(1)) 512: sql << PAREN_CLOSE 513: else 514: raise(InvalidOperation, "invalid operator #{op}") 515: end 516: end
Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.
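For example, a Sequel.delay block is not evaluated until the SQL is literalized, so a cached dataset keeps producing up-to-date conditions (an illustrative sketch; the items table and created_at column are hypothetical):
ds = DB[:items].where{created_at > Sequel.delay{Date.today - 7}} ds.sql # the block runs each time the SQL is generated, so the literalized date always reflects the current day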
# File lib/sequel/dataset/sql.rb, line 525 525: def delayed_evaluation_sql_append(sql, delay) 526: if recorder = @opts[:placeholder_literalizer] 527: recorder.use(sql, lambda{delay.call(self)}, nil) 528: else 529: literal_append(sql, delay.call(self)) 530: end 531: end
Append literalization of function call to SQL string.
# File lib/sequel/dataset/sql.rb, line 534 534: def function_sql_append(sql, f) 535: name = f.name 536: opts = f.opts 537: 538: if opts[:emulate] 539: if emulate_function?(name) 540: emulate_function_sql_append(sql, f) 541: return 542: end 543: 544: name = native_function_name(name) 545: end 546: 547: sql << LATERAL if opts[:lateral] 548: 549: case name 550: when SQL::Identifier 551: if supports_quoted_function_names? && opts[:quoted] != false 552: literal_append(sql, name) 553: else 554: sql << name.value.to_s 555: end 556: when SQL::QualifiedIdentifier 557: if supports_quoted_function_names? && opts[:quoted] != false 558: literal_append(sql, name) 559: else 560: sql << split_qualifiers(name).join(DOT) 561: end 562: else 563: if supports_quoted_function_names? && opts[:quoted] 564: quote_identifier_append(sql, name) 565: else 566: sql << name.to_s 567: end 568: end 569: 570: sql << PAREN_OPEN 571: if opts[:*] 572: sql << WILDCARD 573: else 574: sql << FUNCTION_DISTINCT if opts[:distinct] 575: expression_list_append(sql, f.args) 576: if order = opts[:order] 577: sql << ORDER_BY 578: expression_list_append(sql, order) 579: end 580: end 581: sql << PAREN_CLOSE 582: 583: if group = opts[:within_group] 584: sql << WITHIN_GROUP 585: expression_list_append(sql, group) 586: sql << PAREN_CLOSE 587: end 588: 589: if filter = opts[:filter] 590: sql << FILTER 591: literal_append(sql, filter_expr(filter, &opts[:filter_block])) 592: sql << PAREN_CLOSE 593: end 594: 595: if window = opts[:over] 596: sql << OVER 597: window_sql_append(sql, window.opts) 598: end 599: 600: if opts[:with_ordinality] 601: sql << WITH_ORDINALITY 602: end 603: end
Append literalization of JOIN clause without ON or USING to SQL string.
# File lib/sequel/dataset/sql.rb, line 606 606: def join_clause_sql_append(sql, jc) 607: table = jc.table 608: table_alias = jc.table_alias 609: table_alias = nil if table == table_alias && !jc.column_aliases 610: sql << SPACE << join_type_sql(jc.join_type) << SPACE 611: identifier_append(sql, table) 612: as_sql_append(sql, table_alias, jc.column_aliases) if table_alias 613: end
Append literalization of ordered expression to SQL string.
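For example, NULLS FIRST/LAST ordering can be requested via Sequel.asc/Sequel.desc (illustrative; the items table is hypothetical):
DB[:items].order(Sequel.desc(:price, :nulls=>:last)) # SELECT * FROM items ORDER BY price DESC NULLS LAST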
# File lib/sequel/dataset/sql.rb, line 637 637: def ordered_expression_sql_append(sql, oe) 638: literal_append(sql, oe.expression) 639: sql << (oe.descending ? DESC : ASC) 640: case oe.nulls 641: when :first 642: sql << NULLS_FIRST 643: when :last 644: sql << NULLS_LAST 645: end 646: end
Append literalization of placeholder literal string to SQL string.
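For example, placeholder literal strings are usually built with Sequel.lit and support positional and named placeholders (an illustrative sketch; the items table is hypothetical and parenthesization may vary):
DB[:items].where(Sequel.lit('price > ?', 100)) # SELECT * FROM items WHERE price > 100 DB[:items].where(Sequel.lit('price > :min', :min=>100)) # SELECT * FROM items WHERE price > 100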
# File lib/sequel/dataset/sql.rb, line 649 649: def placeholder_literal_string_sql_append(sql, pls) 650: args = pls.args 651: str = pls.str 652: sql << PAREN_OPEN if pls.parens 653: if args.is_a?(Hash) 654: if args.empty? 655: sql << str 656: else 657: re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ 658: loop do 659: previous, q, str = str.partition(re) 660: sql << previous 661: literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? 662: break if str.empty? 663: end 664: end 665: elsif str.is_a?(Array) 666: len = args.length 667: str.each_with_index do |s, i| 668: sql << s 669: literal_append(sql, args[i]) unless i == len 670: end 671: unless str.length == args.length || str.length == args.length + 1 672: raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" 673: end 674: else 675: i = -1 676: match_len = args.length - 1 677: loop do 678: previous, q, str = str.partition(QUESTION_MARK) 679: sql << previous 680: literal_append(sql, args.at(i+=1)) unless q.empty? 681: if str.empty? 682: unless i == match_len 683: raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder array" 684: end 685: break 686: end 687: end 688: end 689: sql << PAREN_CLOSE if pls.parens 690: end
Append literalization of qualified identifier to SQL string. If 3 arguments are given, the 2nd should be the table/qualifier and the 3rd should be the column/qualified identifier. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
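For example (illustrative; assumes identifier quoting is disabled on this connection):
DB[:items].literal(Sequel.qualify(:items, :price)) # "items.price"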
# File lib/sequel/dataset/sql.rb, line 695 695: def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) 696: identifier_append(sql, table) 697: sql << DOT 698: identifier_append(sql, column) 699: end
Append literalization of unqualified identifier to SQL string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier_append.
# File lib/sequel/dataset/sql.rb, line 705 705: def quote_identifier_append(sql, name) 706: if name.is_a?(LiteralString) 707: sql << name 708: else 709: name = name.value if name.is_a?(SQL::Identifier) 710: name = input_identifier(name) 711: if quote_identifiers? 712: quoted_identifier_append(sql, name) 713: else 714: sql << name 715: end 716: end 717: end
Append literalization of identifier or unqualified identifier to SQL string.
# File lib/sequel/dataset/sql.rb, line 720 720: def quote_schema_table_append(sql, table) 721: schema, table = schema_and_table(table) 722: if schema 723: quote_identifier_append(sql, schema) 724: sql << DOT 725: end 726: quote_identifier_append(sql, table) 727: end
Append literalization of quoted identifier to SQL string. This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting not matching the SQL standard, such as the backtick (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 733 733: def quoted_identifier_append(sql, name) 734: sql << QUOTE << name.to_s.gsub(QUOTE_RE, DOUBLE_QUOTE) << QUOTE 735: end
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
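For example (illustrative; uses this Sequel version's double-underscore notation for qualified symbols):
ds.schema_and_table(:dbo__items) # ['dbo', 'items'] ds.schema_and_table(:items) # [nil, 'items']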
# File lib/sequel/dataset/sql.rb, line 744 744: def schema_and_table(table_name, sch=nil) 745: sch = sch.to_s if sch 746: case table_name 747: when Symbol 748: s, t, _ = split_symbol(table_name) 749: [s||sch, t] 750: when SQL::QualifiedIdentifier 751: [table_name.table.to_s, table_name.column.to_s] 752: when SQL::Identifier 753: [sch, table_name.value.to_s] 754: when String 755: [sch, table_name] 756: else 757: raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' 758: end 759: end
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s'] ds.split_qualifiers(:t__s) # ['t', 's'] ds.split_qualifiers(Sequel.qualify(:d, :t__s)) # ['d', 't', 's'] ds.split_qualifiers(Sequel.qualify(:h__d, :t__s)) # ['h', 'd', 't', 's']
# File lib/sequel/dataset/sql.rb, line 767 767: def split_qualifiers(table_name, *args) 768: case table_name 769: when SQL::QualifiedIdentifier 770: split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) 771: else 772: sch, table = schema_and_table(table_name, *args) 773: sch ? [sch, table] : [table] 774: end 775: end
Append literalization of subscripts (SQL array accesses) to SQL string.
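For example (illustrative; assumes an SQL array column named tags and that identifier quoting is disabled):
DB[:items].literal(Sequel.subscript(:tags, 1)) # "tags[1]" DB[:items].literal(Sequel.subscript(:tags, 1..2)) # "tags[1:2]"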
# File lib/sequel/dataset/sql.rb, line 778 778: def subscript_sql_append(sql, s) 779: literal_append(sql, s.f) 780: sql << BRACKET_OPEN 781: if s.sub.length == 1 && (range = s.sub.first).is_a?(Range) 782: literal_append(sql, range.begin) 783: sql << COLON 784: e = range.end 785: e -= 1 if range.exclude_end? && e.is_a?(Integer) 786: literal_append(sql, e) 787: else 788: expression_list_append(sql, s.sub) 789: end 790: sql << BRACKET_CLOSE 791: end
Append literalization of windows (for window functions) to SQL string.
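For example, a window function passes its window options through this method (a hedged sketch; assumes a Sequel version where SQL::Function#over is available and a hypothetical items table):
DB[:items].select(Sequel.function(:row_number).over(:partition=>:category, :order=>:price).as(:rn)) # SELECT row_number() OVER (PARTITION BY category ORDER BY price) AS rn FROM items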
# File lib/sequel/dataset/sql.rb, line 794 794: def window_sql_append(sql, opts) 795: raise(Error, 'This dataset does not support window functions') unless supports_window_functions? 796: sql << PAREN_OPEN 797: window, part, order, frame = opts.values_at(:window, :partition, :order, :frame) 798: space = false 799: space_s = SPACE 800: if window 801: literal_append(sql, window) 802: space = true 803: end 804: if part 805: sql << space_s if space 806: sql << PARTITION_BY 807: expression_list_append(sql, Array(part)) 808: space = true 809: end 810: if order 811: sql << space_s if space 812: sql << ORDER_BY_NS 813: expression_list_append(sql, Array(order)) 814: space = true 815: end 816: case frame 817: when nil 818: # nothing 819: when :all 820: sql << space_s if space 821: sql << FRAME_ALL 822: when :rows 823: sql << space_s if space 824: sql << FRAME_ROWS 825: when String 826: sql << space_s if space 827: sql << frame 828: else 829: raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil" 830: end 831: sql << PAREN_CLOSE 832: end
On some adapters, these use native prepared statements and bound variables; on others, support is emulated. For details, see the "Prepared Statements/Bound Variables" guide.
PREPARED_ARG_PLACEHOLDER | = | LiteralString.new('?').freeze |
DEFAULT_PREPARED_STATEMENT_MODULE_METHODS | = | %w'execute execute_dui execute_insert'.freeze.each(&:freeze) |
PREPARED_STATEMENT_MODULE_CODE | = | { :bind => "opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze, :prepare => "sql = prepared_statement_name".freeze, :prepare_bind => "sql = prepared_statement_name; opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze }.freeze |
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
DB[:table].filter(:id=>:$id).bind(:id=>1).call(:first) # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1) # => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb, line 261 261: def bind(bind_vars={}) 262: clone(:bind_vars=>@opts[:bind_vars] ? Hash[@opts[:bind_vars]].merge!(bind_vars) : bind_vars) 263: end
For the given type (:select, :first, :insert, :insert_select, :update, or :delete), run the sql with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), which may contain placeholders.
DB[:table].filter(:id=>:$id).call(:first, :id=>1) # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1) # => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb, line 272 272: def call(type, bind_variables={}, *values, &block) 273: prepare(type, nil, *values).call(bind_variables, &block) 274: end
Prepare an SQL statement for later execution. Takes a type similar to call, and the name symbol of the prepared statement. While name defaults to nil, it should always be provided as a symbol for the name of the prepared statement, as some databases require that prepared statements have names.
This returns a clone of the dataset extended with PreparedStatementMethods, which you can call with the hash of bind variables to use. The prepared statement is also stored in the associated database, where it can be called by name. The following usage is identical:
ps = DB[:table].filter(:name=>:$name).prepare(:first, :select_by_name) ps.call(:name=>'Blah') # SELECT * FROM table WHERE name = ? -- ('Blah') # => {:id=>1, :name=>'Blah'} DB.call(:select_by_name, :name=>'Blah') # Same thing
# File lib/sequel/dataset/prepared_statements.rb, line 294 294: def prepare(type, name=nil, *values) 295: ps = to_prepared_statement(type, values) 296: db.set_prepared_statement(name, ps) if name 297: ps 298: end
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the prepared statement type and the modify values.
# File lib/sequel/dataset/prepared_statements.rb, line 304 304: def to_prepared_statement(type, values=nil) 305: ps = bind 306: ps.extend(PreparedStatementMethods) 307: ps.orig_dataset = self 308: ps.prepared_type = type 309: ps.prepared_modify_values = values 310: ps 311: end
Dataset graphing automatically creates unique aliases for columns in joined tables that overlap with already selected column aliases. All of these methods return modified copies of the receiver.
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_more when graphing). See set_graph_aliases.
DB[:table].add_graph_aliases(:some_alias=>[:table, :column]) # SELECT ..., table.column AS some_alias
# File lib/sequel/dataset/graph.rb, line 18 18: def add_graph_aliases(graph_aliases) 19: unless (ga = opts[:graph_aliases]) || (opts[:graph] && (ga = opts[:graph][:column_aliases])) 20: raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases" 21: end 22: columns, graph_aliases = graph_alias_columns(graph_aliases) 23: select_more(*columns).clone(:graph_aliases => Hash[ga].merge!(graph_aliases)) 24: end
Similar to Dataset#join_table, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.
Arguments:
dataset : | Can be a symbol (specifying a table), another dataset, or an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression. |
join_conditions : | Any condition(s) allowed by join_table. |
block : | A block that is passed to join_table. |
Options:
:from_self_alias : | The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a from_self before graphing, and this option determines the alias to use. |
:implicit_qualifier : | The qualifier of implicit conditions, see join_table. |
:join_only : | Only join the tables, do not change the selected columns. |
:join_type : | The type of join to use (passed to join_table). Defaults to :left_outer. |
:qualify: | The type of qualification to do, see join_table. |
:select : | An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use graph instead of join_table. |
:table_alias : | The alias to use for the table. If not specified, doesn't alias the table. You will get an error if the alias (or table) name is used more than once. |
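For example, graphing aliases only the joined columns that clash with ones already selected (an illustrative sketch; assumes hypothetical artists(id, name) and albums(id, artist_id, name) tables):
DB[:artists].graph(:albums, :artist_id=>:id) # SELECT artists.id, artists.name, albums.id AS albums_id, albums.artist_id, albums.name AS albums_name FROM artists LEFT OUTER JOIN albums ON (albums.artist_id = artists.id)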
# File lib/sequel/dataset/graph.rb, line 52 52: def graph(dataset, join_conditions = nil, options = OPTS, &block) 53: # Allow the use of a dataset or symbol as the first argument 54: # Find the table name/dataset based on the argument 55: table_alias = options[:table_alias] 56: table = dataset 57: create_dataset = true 58: 59: case dataset 60: when Symbol 61: # let alias be the same as the table name (sans any optional schema) 62: # unless alias explicitly given in the symbol using ___ notation 63: table_alias ||= split_symbol(table).compact.last 64: when Dataset 65: if dataset.simple_select_all? 66: table = dataset.opts[:from].first 67: table_alias ||= table 68: else 69: table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) 70: end 71: create_dataset = false 72: when SQL::Identifier 73: table_alias ||= table.value 74: when SQL::QualifiedIdentifier 75: table_alias ||= split_qualifiers(table).last 76: when SQL::AliasedExpression 77: return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge!(options), &block) 78: else 79: raise Error, "The dataset argument should be a symbol or dataset" 80: end 81: table_alias = table_alias.to_sym 82: 83: if create_dataset 84: dataset = db.from(table) 85: end 86: 87: # Raise Sequel::Error with explanation that the table alias has been used 88: raise_alias_error = lambda do 89: raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \ 90: "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") 91: end 92: 93: # Only allow table aliases that haven't been used 94: raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) 95: 96: table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table) 97: implicit_qualifier = options[:implicit_qualifier] 98: ds = self 99: 100: # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs) 101: if (@opts[:graph_from_self] != false && !@opts[:graph] && joined_dataset?) 102: from_selfed = true 103: implicit_qualifier = options[:from_self_alias] || first_source 104: ds = ds.from_self(:alias=>implicit_qualifier) 105: end 106: 107: # Join the table early in order to avoid cloning the dataset twice 108: ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>options[:qualify], &block) 109: 110: return ds if options[:join_only] 111: 112: opts = ds.opts 113: 114: # Whether to include the table in the result set 115: add_table = options[:select] == false ? 
false : true 116: # Whether to add the columns to the list of column aliases 117: add_columns = !ds.opts.include?(:graph_aliases) 118: 119: if graph = opts[:graph] 120: opts[:graph] = graph = graph.dup 121: select = opts[:select].dup 122: [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup} 123: else 124: # Setup the initial graph data structure if it doesn't exist 125: qualifier = ds.first_source_alias 126: master = alias_symbol(qualifier) 127: raise_alias_error.call if master == table_alias 128: 129: # Master hash storing all .graph related information 130: graph = opts[:graph] = {} 131: 132: # Associates column aliases back to tables and columns 133: column_aliases = graph[:column_aliases] = {} 134: 135: # Associates table alias (the master is never aliased) 136: table_aliases = graph[:table_aliases] = {master=>self} 137: 138: # Keep track of the alias numbers used 139: ca_num = graph[:column_alias_num] = Hash.new(0) 140: 141: # All columns in the master table are never 142: # aliased, but are not included if set_graph_aliases 143: # has been used. 144: if add_columns 145: if (select = @opts[:select]) && !select.empty? && !(select.length == 1 && (select.first.is_a?(SQL::ColumnAll))) 146: select = select.map do |sel| 147: raise Error, "can't figure out alias to use for graphing for #{sel.inspect}" unless column = _hash_key_symbol(sel) 148: column_aliases[column] = [master, column] 149: if from_selfed 150: # Initial dataset was wrapped in subselect, selected all 151: # columns in the subselect, qualified by the subselect alias. 152: Sequel.qualify(qualifier, Sequel.identifier(column)) 153: else 154: # Initial dataset not wrapped in subselect, just make 155: # sure columns are qualified in some way. 156: qualified_expression(sel, qualifier) 157: end 158: end 159: else 160: select = columns.map do |column| 161: column_aliases[column] = [master, column] 162: SQL::QualifiedIdentifier.new(qualifier, column) 163: end 164: end 165: end 166: end 167: 168: # Add the table alias to the list of aliases 169: # Even if it isn't being used in the result set, 170: # we add a key for it with a nil value so we can check if it 171: # is used more than once 172: table_aliases = graph[:table_aliases] 173: table_aliases[table_alias] = add_table ? dataset : nil 174: 175: # Add the columns to the selection unless we are ignoring them 176: if add_table && add_columns 177: column_aliases = graph[:column_aliases] 178: ca_num = graph[:column_alias_num] 179: # Which columns to add to the result set 180: cols = options[:select] || dataset.columns 181: # If the column hasn't been used yet, don't alias it. 182: # If it has been used, try table_column. 183: # If that has been used, try table_column_N 184: # using the next value of N that we know hasn't been 185: # used 186: cols.each do |column| 187: col_alias, identifier = if column_aliases[column] 188: column_alias = "#{table_alias}_#{column}" 189: if column_aliases[column_alias] 190: column_alias_num = ca_num[column_alias] 191: column_alias = "#{column_alias}_#{column_alias_num}" 192: ca_num[column_alias] += 1 193: end 194: [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)] 195: else 196: ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column) 197: [column, ident] 198: end 199: column_aliases[col_alias] = [table_alias, column] 200: select.push(identifier) 201: end 202: end 203: add_columns ? 
ds.select(*select) : ds 204: end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select for a graphed dataset, and must be used instead of select whenever graphing is used.
graph_aliases : | Should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name. |
DB[:artists].graph(:albums, :artist_id=>:id). set_graph_aliases(:name=>:artists, :album_name=>[:albums, :name], :forty_two=>[:albums, :fourtwo, 42]).first # SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...
# File lib/sequel/dataset/graph.rb, line 229 229: def set_graph_aliases(graph_aliases) 230: columns, graph_aliases = graph_alias_columns(graph_aliases) 231: ds = select(*columns) 232: ds.opts[:graph_aliases] = graph_aliases 233: ds 234: end
These methods all execute the dataset's SQL on the database. They don't return modified datasets, so if used in a method chain they should be the last method called.
ACTION_METHODS | = | (<<-METHS).split.map(&:to_sym) << [] all avg count columns columns! delete each empty? fetch_rows first first! get import insert interval last map max min multi_insert paged_each range select_hash select_hash_groups select_map select_order_map single_record single_record! single_value single_value! sum to_hash to_hash_groups truncate update METHS | Action methods defined by Sequel that execute code on the database. |
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {:id=>0, :name=>'Zero'} << DB[:old_items].select(:id, :name)
# File lib/sequel/dataset/actions.rb, line 25 25: def <<(arg) 26: insert(arg) 27: self 28: end
Returns the first record matching the conditions. Examples:
DB[:table][:id=>1] # SELECT * FROM table WHERE (id = 1) LIMIT 1 # => {:id=>1}
# File lib/sequel/dataset/actions.rb, line 34 34: def [](*conditions) 35: raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 36: first(*conditions) 37: end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table # => [{:id=>1, ...}, {:id=>2, ...}, ...] # Iterate over all rows in the table DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb, line 47 47: def all(&block) 48: _all(block){|a| each{|r| a << r}} 49: end
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1 # => 3 DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1 # => 1
# File lib/sequel/dataset/actions.rb, line 58 58: def avg(column=Sequel.virtual_row(&Proc.new)) 59: aggregate_dataset.get{avg(column).as(:avg)} 60: end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 71 71: def columns 72: return @columns if @columns 73: ds = unfiltered.unordered.naked.clone(:distinct => nil, :limit => 1, :offset=>nil) 74: ds.each{break} 75: @columns = ds.instance_variable_get(:@columns) 76: @columns || [] 77: end
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1 # => 3 DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1 # => 2 DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1 # => 1
# File lib/sequel/dataset/actions.rb, line 100 100: def count(arg=(no_arg=true), &block) 101: if no_arg 102: if block 103: arg = Sequel.virtual_row(&block) 104: aggregate_dataset.get{count(arg).as(:count)} 105: else 106: aggregate_dataset.get{count{}.*.as(:count)}.to_i 107: end 108: elsif block 109: raise Error, 'cannot provide both argument and block to Dataset#count' 110: else 111: aggregate_dataset.get{count(arg).as(:count)} 112: end 113: end
Deletes the records in the dataset. The returned value should be number of records deleted, but that is adapter dependent.
DB[:table].delete # DELETE FROM table # => 3
# File lib/sequel/dataset/actions.rb, line 120 120: def delete(&block) 121: sql = delete_sql 122: if uses_returning?(:delete) 123: returning_fetch_rows(sql, &block) 124: else 125: execute_dui(sql) 126: end 127: end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File lib/sequel/dataset/actions.rb, line 138 138: def each 139: if row_proc = @row_proc 140: fetch_rows(select_sql){|r| yield row_proc.call(r)} 141: else 142: fetch_rows(select_sql){|r| yield r} 143: end 144: self 145: end
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1 # => false
# File lib/sequel/dataset/actions.rb, line 151 151: def empty? 152: ds = @opts[:order] ? unordered : self 153: ds.get(Sequel::SQL::AliasedExpression.new(1, :one)).nil? 154: end
If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If no argument is passed, it returns the first matching record. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.
If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).
Examples:
DB[:table].first # SELECT * FROM table LIMIT 1 # => {:id=>7} DB[:table].first(2) # SELECT * FROM table LIMIT 2 # => [{:id=>6}, {:id=>4}] DB[:table].first(:id=>2) # SELECT * FROM table WHERE (id = 2) LIMIT 1 # => {:id=>2} DB[:table].first("id = 3") # SELECT * FROM table WHERE (id = 3) LIMIT 1 # => {:id=>3} DB[:table].first("id = ?", 4) # SELECT * FROM table WHERE (id = 4) LIMIT 1 # => {:id=>4} DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1 # => {:id=>5} DB[:table].first("id > ?", 4){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1 # => {:id=>5} DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2 # => [{:id=>1}]
# File lib/sequel/dataset/actions.rb, line 191 191: def first(*args, &block) 192: ds = block ? filter(&block) : self 193: 194: if args.empty? 195: ds.single_record 196: else 197: args = (args.size == 1) ? args.first : args 198: if args.is_a?(Integer) 199: ds.limit(args).all 200: else 201: ds.filter(args).single_record 202: end 203: end 204: end
Calls first. If first returns nil (signaling that no row matches), raise a Sequel::NoMatchingRow exception.
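For example (illustrative, mirroring the first examples above):
DB[:table].first!(:id=>1) # SELECT * FROM table WHERE (id = 1) LIMIT 1 # => {:id=>1} DB[:table].first!(:id=>0) # SELECT * FROM table WHERE (id = 0) LIMIT 1 # raises Sequel::NoMatchingRow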
# File lib/sequel/dataset/actions.rb, line 208 208: def first!(*args, &block) 209: first(*args, &block) || raise(Sequel::NoMatchingRow.new(self)) 210: end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and block is given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1 # => 3 ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1 # => 6
You can pass an array of arguments to return multiple arguments, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1 # => [3, 'foo'] DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1 # => [6, 'foo']
# File lib/sequel/dataset/actions.rb, line 230 230: def get(column=(no_arg=true; nil), &block) 231: ds = naked 232: if block 233: raise(Error, ARG_BLOCK_ERROR_MSG) unless no_arg 234: ds = ds.select(&block) 235: column = ds.opts[:select] 236: column = nil if column.is_a?(Array) && column.length < 2 237: else 238: ds = if column.is_a?(Array) 239: ds.select(*column) 240: else 241: ds.select(auto_alias_expression(column)) 242: end 243: end 244: 245: if column.is_a?(Array) 246: if r = ds.single_record 247: r.values_at(*hash_key_symbols(column)) 248: end 249: else 250: ds.single_value 251: end 252: end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]]) # INSERT INTO table (x, y) VALUES (1, 2) # INSERT INTO table (x, y) VALUES (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
:commit_every : | Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records. |
:return : | When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted. |
:server : | Set the server/shard to use for the transaction and insert queries. |
:slice : | Same as :commit_every, :commit_every takes precedence. |
# File lib/sequel/dataset/actions.rb, line 279 279: def import(columns, values, opts=OPTS) 280: return @db.transaction{insert(columns, values)} if values.is_a?(Dataset) 281: 282: return if values.empty? 283: raise(Error, IMPORT_ERROR_MSG) if columns.empty? 284: ds = opts[:server] ? server(opts[:server]) : self 285: 286: if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice)) 287: offset = 0 288: rows = [] 289: while offset < values.length 290: rows << ds._import(columns, values[offset, slice_size], opts) 291: offset += slice_size 292: end 293: rows.flatten 294: else 295: ds._import(columns, values, opts) 296: end 297: end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
insert handles a number of different argument formats:
no arguments or single empty hash : | Uses DEFAULT VALUES |
single hash : | Most common format, treats keys as columns and values as values |
single array : | Treats entries as values, with no columns |
two arrays : | Treats first array as columns, second array as values |
single Dataset : | Treats as an insert based on a selection from the dataset given, with no columns |
array and dataset : | Treats as an insert based on a selection from the dataset given, with the columns given by the array. |
Examples:
DB[:items].insert # INSERT INTO items DEFAULT VALUES DB[:items].insert({}) # INSERT INTO items DEFAULT VALUES DB[:items].insert([1,2,3]) # INSERT INTO items VALUES (1, 2, 3) DB[:items].insert([:a, :b], [1,2]) # INSERT INTO items (a, b) VALUES (1, 2) DB[:items].insert(:a => 1, :b => 2) # INSERT INTO items (a, b) VALUES (1, 2) DB[:items].insert(DB[:old_items]) # INSERT INTO items SELECT * FROM old_items DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb, line 334 334: def insert(*values, &block) 335: sql = insert_sql(*values) 336: if uses_returning?(:insert) 337: returning_fetch_rows(sql, &block) 338: else 339: execute_insert(sql) 340: end 341: end
Returns the interval between minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].interval(:id) # SELECT (max(id) - min(id)) FROM table LIMIT 1 # => 6 DB[:table].interval{function(column)} # SELECT (max(function(column)) - min(function(column))) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb, line 350 350: def interval(column=Sequel.virtual_row(&Proc.new)) 351: aggregate_dataset.get{(max(column) - min(column)).as(:interval)} 352: end
Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1 # => {:id=>10} DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2 # => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb, line 364 364: def last(*args, &block) 365: raise(Error, 'No order specified') unless @opts[:order] 366: reverse.first(*args, &block) 367: end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table # => [1, 2, 3, ...] DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table # => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb, line 383 383: def map(column=nil, &block) 384: if column 385: raise(Error, ARG_BLOCK_ERROR_MSG) if block 386: return naked.map(column) if row_proc 387: if column.is_a?(Array) 388: super(){|r| r.values_at(*column)} 389: else 390: super(){|r| r[column]} 391: end 392: else 393: super(&block) 394: end 395: end
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1 # => 10 DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb, line 404 404: def max(column=Sequel.virtual_row(&Proc.new)) 405: aggregate_dataset.get{max(column).as(:max)} 406: end
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1 # => 1 DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1 # => 0
# File lib/sequel/dataset/actions.rb, line 415 415: def min(column=Sequel.virtual_row(&Proc.new)) 416: aggregate_dataset.get{min(column).as(:min)} 417: end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{:x => 1}, {:x => 2}]) # INSERT INTO table (x) VALUES (1) # INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as import.
# File lib/sequel/dataset/actions.rb, line 431 431: def multi_insert(hashes, opts=OPTS) 432: return if hashes.empty? 433: columns = hashes.first.keys 434: import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) 435: end
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
Because this uses multiple queries internally, in order to remain consistent, it also uses a transaction internally. Additionally, to work correctly, the dataset must have unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or rows being skipped.
Sequel checks that the datasets using this method have an order, but it cannot ensure that the order is unambiguous.
Options:
:rows_per_fetch : | The number of rows to fetch per query. Defaults to 1000. |
:strategy : | The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some Sequel adapters have optimized implementations that will use cursors or streaming regardless of the :strategy option used. |
:filter_values : | If the :strategy=>:filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases. |
Examples:
DB[:table].order(:id).paged_each{|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000 # ... DB[:table].order(:id).paged_each(:rows_per_fetch=>100){|row| } # SELECT * FROM table ORDER BY id LIMIT 100 # SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100 # ... DB[:table].order(:id).paged_each(:strategy=>:filter){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ... DB[:table].order(:table__id).paged_each(:strategy=>:filter, :filter_values=>proc{|row, exprs| [row[:id]]}){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ...
# File lib/sequel/dataset/actions.rb, line 488 488: def paged_each(opts=OPTS) 489: unless @opts[:order] 490: raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" 491: end 492: unless block_given? 493: return enum_for(:paged_each, opts) 494: end 495: 496: total_limit = @opts[:limit] 497: offset = @opts[:offset] 498: if server = @opts[:server] 499: opts = Hash[opts] 500: opts[:server] = server 501: end 502: 503: rows_per_fetch = opts[:rows_per_fetch] || 1000 504: strategy = if offset || total_limit 505: :offset 506: else 507: opts[:strategy] || :offset 508: end 509: 510: db.transaction(opts) do 511: case strategy 512: when :filter 513: filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}} 514: base_ds = ds = limit(rows_per_fetch) 515: while ds 516: last_row = nil 517: ds.each do |row| 518: last_row = row 519: yield row 520: end 521: ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row) 522: end 523: else 524: offset ||= 0 525: num_rows_yielded = rows_per_fetch 526: total_rows = 0 527: 528: while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) 529: if total_limit && total_rows + rows_per_fetch > total_limit 530: rows_per_fetch = total_limit - total_rows 531: end 532: 533: num_rows_yielded = 0 534: limit(rows_per_fetch, offset).each do |row| 535: num_rows_yielded += 1 536: total_rows += 1 if total_limit 537: yield row 538: end 539: 540: offset += rows_per_fetch 541: end 542: end 543: end 544: 545: self 546: end
Returns a Range instance made from the minimum and maximum values for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].range(:id) # SELECT min(id) AS v1, max(id) AS v2 FROM table LIMIT 1 # => 1..10 DB[:table].range{function(column)} # SELECT min(function(column)) AS v1, max(function(column)) AS v2 FROM table LIMIT 1 # => 0..7
# File lib/sequel/dataset/actions.rb, line 555 555: def range(column=Sequel.virtual_row(&Proc.new)) 556: if r = aggregate_dataset.select{[min(column).as(v1), max(column).as(v2)]}.first 557: (r[:v1]..r[:v2]) 558: end 559: end
Returns a hash with key_column values as keys and value_column values as values. Similar to to_hash, but only selects the columns given. Like to_hash, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 578 578: def select_hash(key_column, value_column, opts = OPTS) 579: _select_hash(:to_hash, key_column, value_column, opts) 580: end
Returns a hash with key_column values as keys and an array of value_column values. Similar to to_hash_groups, but only selects the columns given. Like to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 598 598: def select_hash_groups(key_column, value_column, opts = OPTS) 599: _select_hash(:to_hash_groups, key_column, value_column, opts) 600: end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error if called with both an argument and a block.
DB[:table].select_map(:id) # SELECT id FROM table # => [3, 5, 8, 1, ...] DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 622 622: def select_map(column=nil, &block) 623: _select_map(column, false, &block) 624: end
The same as select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # => [1, 2, 3, 4, ...] DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine. Usually you can do this by calling the as method on the expression and providing an alias.
# File lib/sequel/dataset/actions.rb, line 642 642: def select_order_map(column=nil, &block) 643: _select_map(column, true, &block) 644: end
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method. Example:
DB[:test].single_record # SELECT * FROM test LIMIT 1 # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb, line 652 652: def single_record 653: clone(:limit=>1).single_record! 654: end
Returns the first record in dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:
DB[:test].single_record! # SELECT * FROM test # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb, line 664 664: def single_record! 665: with_sql_first(select_sql) 666: end
Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Example:
DB[:test].single_value # SELECT * FROM test LIMIT 1 # => 'value'
# File lib/sequel/dataset/actions.rb, line 674 674: def single_value 675: if r = ungraphed.naked.single_record 676: r.each{|_, v| return v} 677: end 678: end
Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don't return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.
DB[:test].single_value! # SELECT * FROM test # => 'value'
# File lib/sequel/dataset/actions.rb, line 688 688: def single_value! 689: with_sql_single_value(select_sql) 690: end
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # => 55 DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # => 10
# File lib/sequel/dataset/actions.rb, line 699 699: def sum(column=Sequel.virtual_row(&Proc.new)) 700: aggregate_dataset.get{sum(column).as(:sum)} 701: end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash(:id, :name) # SELECT * FROM table # {1=>'Jim', 2=>'Bob', ...} DB[:table].to_hash(:id) # SELECT * FROM table # {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table # {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...} DB[:table].to_hash([:id, :name]) # SELECT * FROM table # {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
Options:
:all : | Use all instead of each to retrieve the objects |
:hash : | The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc. |
# File lib/sequel/dataset/actions.rb, line 728 728: def to_hash(key_column, value_column = nil, opts = OPTS) 729: h = opts[:hash] || {} 730: meth = opts[:all] ? :all : :each 731: if value_column 732: return naked.to_hash(key_column, value_column, opts) if row_proc 733: if value_column.is_a?(Array) 734: if key_column.is_a?(Array) 735: send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} 736: else 737: send(meth){|r| h[r[key_column]] = r.values_at(*value_column)} 738: end 739: else 740: if key_column.is_a?(Array) 741: send(meth){|r| h[r.values_at(*key_column)] = r[value_column]} 742: else 743: send(meth){|r| h[r[key_column]] = r[value_column]} 744: end 745: end 746: elsif key_column.is_a?(Array) 747: send(meth){|r| h[key_column.map{|k| r[k]}] = r} 748: else 749: send(meth){|r| h[r[key_column]] = r} 750: end 751: h 752: end
Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} DB[:table].to_hash_groups(:name) # SELECT * FROM table # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
Options:
:all : | Use all instead of each to retrieve the objects |
:hash : | The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc. |
# File lib/sequel/dataset/actions.rb, line 779 779: def to_hash_groups(key_column, value_column = nil, opts = OPTS) 780: h = opts[:hash] || {} 781: meth = opts[:all] ? :all : :each 782: if value_column 783: return naked.to_hash_groups(key_column, value_column, opts) if row_proc 784: if value_column.is_a?(Array) 785: if key_column.is_a?(Array) 786: send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} 787: else 788: send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} 789: end 790: else 791: if key_column.is_a?(Array) 792: send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} 793: else 794: send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]} 795: end 796: end 797: elsif key_column.is_a?(Array) 798: send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r} 799: else 800: send(meth){|r| (h[r[key_column]] ||= []) << r} 801: end 802: h 803: end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table # => nil
# File lib/sequel/dataset/actions.rb, line 809 809: def truncate 810: execute_ddl(truncate_sql) 811: end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent. values should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(:x=>nil) # UPDATE table SET x = NULL # => 10 DB[:table].update(:x=>Sequel[:x]+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0 # => 10
# File lib/sequel/dataset/actions.rb, line 823 823: def update(values=OPTS, &block) 824: sql = update_sql(values) 825: if uses_returning?(:update) 826: returning_fetch_rows(sql, &block) 827: else 828: execute_dui(sql) 829: end 830: end
Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing with_sql(sql).delete. It's significantly faster as it does not require cloning the current dataset.
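For example (illustrative; the SQL string is written by hand and should be a DELETE statement for the same table):
DB[:items].with_sql_delete("DELETE FROM items WHERE id > 100") # => number of rows deleted, e.g. 3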
# File lib/sequel/dataset/actions.rb, line 841 841: def with_sql_delete(sql) 842: execute_dui(sql) 843: end
Run the given SQL and yield each returned row to the block.
This method should not be called on a shared dataset if the columns selected in the given SQL do not match the columns in the receiver.
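For example (illustrative; the items table is hypothetical):
DB[:items].with_sql_each("SELECT * FROM items WHERE id < 10"){|row| p row}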
# File lib/sequel/dataset/actions.rb, line 850 850: def with_sql_each(sql) 851: if row_proc = @row_proc 852: fetch_rows(sql){|r| yield row_proc.call(r)} 853: else 854: fetch_rows(sql){|r| yield r} 855: end 856: self 857: end
Run the given SQL and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL given should select only a single value. See with_sql_each.
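For example (illustrative; the items table is hypothetical):
DB[:items].with_sql_single_value("SELECT count(*) FROM items") # => 3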
# File lib/sequel/dataset/actions.rb, line 869 869: def with_sql_single_value(sql) 870: if r = with_sql_first(sql) 871: r.each{|_, v| return v} 872: end 873: end
Internals of import. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql and execute each statement it gives separately.
# File lib/sequel/dataset/actions.rb, line 886 886: def _import(columns, values, opts) 887: trans_opts = Hash[opts].merge!(:server=>@opts[:server]) 888: if opts[:return] == :primary_key 889: @db.transaction(trans_opts){values.map{|v| insert(columns, v)}} 890: else 891: stmts = multi_insert_sql(columns, values) 892: @db.transaction(trans_opts){stmts.each{|st| execute_dui(st)}} 893: end 894: end
Return an array of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb, line 897 897: def _select_map_multiple(ret_cols) 898: map{|r| r.values_at(*ret_cols)} 899: end
These methods don't fit cleanly into another section.
NOTIMPL_MSG | = | "This method must be overridden in Sequel adapters".freeze |
ARRAY_ACCESS_ERROR_MSG | = | 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze |
ARG_BLOCK_ERROR_MSG | = | 'Must use either an argument or a block, not both'.freeze |
IMPORT_ERROR_MSG | = | 'Using Sequel::Dataset#import an empty column array is not allowed'.freeze |
DatasetClass | = | self |
PREPARED_ARG_PLACEHOLDER | = | ':'.freeze |
BindArgumentMethods | = | prepared_statements_module(:bind, ArgumentMapper) |
PreparedStatementMethods | = | prepared_statements_module(:prepare, BindArgumentMethods) |
DatasetClass | = | self |
DatasetClass | = | self |
DatasetClass | = | self |
STREAMING_SUPPORTED | = | ::Mysql2::VERSION >= '0.3.12' |
DatasetClass | = | self |
PreparedStatementMethods | = | prepared_statements_module( "sql = self; opts = Hash[opts]; opts[:arguments] = bind_arguments", Sequel::Dataset::UnnumberedArgumentMapper, %w"execute execute_dui execute_insert") |
OPTS | = | Sequel::OPTS |
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Database#dataset method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb, line 30 30: def initialize(db) 31: @db = db 32: @opts = OPTS 33: end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')}
# File lib/sequel/dataset/misc.rb, line 64 64: def each_server 65: db.servers.each{|s| yield server(s)} 66: end
Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:
ds.escape_like("foo\\%_") # 'foo\\\%\_'
# File lib/sequel/dataset/misc.rb, line 73 73: def escape_like(string) 74: string.gsub(/[\\%_]/){|m| "\\#{m}"} 75: end
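A common pairing is with a LIKE search, so that an untrusted string is matched literally rather than as a pattern (sketch; the items table and name column are assumptions):
user_input = "50%_off" # untrusted string
pattern = "%#{DB[:items].escape_like(user_input)}%"
DB[:items].where(Sequel.like(:name, pattern)).all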
Yield a hash for each row in the dataset.
# File lib/sequel/adapters/sqlite.rb, line 321 321: def fetch_rows(sql) 322: execute(sql) do |result| 323: i = -1 324: cps = db.conversion_procs 325: type_procs = result.types.map{|t| cps[base_type_name(t)]} 326: cols = result.columns.map{|c| i+=1; [output_identifier(c), i, type_procs[i]]} 327: self.columns = cols.map(&:first) 328: result.each do |values| 329: row = {} 330: cols.each do |name,id,type_proc| 331: v = values[id] 332: if type_proc && v 333: v = type_proc.call(v) 334: end 335: row[name] = v 336: end 337: yield row 338: end 339: end 340: end
Yield all rows matching this dataset.
# File lib/sequel/adapters/sqlanywhere.rb, line 145 145: def fetch_rows(sql) 146: db = @db 147: cps = db.conversion_procs 148: api = db.api 149: execute(sql) do |rs| 150: convert = (convert_smallint_to_bool and db.convert_smallint_to_bool) 151: col_infos = [] 152: api.sqlany_num_cols(rs).times do |i| 153: _, _, name, _, type = api.sqlany_get_column_info(rs, i) 154: cp = if type == 500 155: cps[500] if convert 156: else 157: cps[type] 158: end 159: col_infos << [i, output_identifier(name), cp] 160: end 161: 162: self.columns = col_infos.map{|a| a[1]} 163: 164: if rs 165: while api.sqlany_fetch_next(rs) == 1 166: h = {} 167: col_infos.each do |i, name, cp| 168: _, v = api.sqlany_get_column(rs, i) 169: h[name] = cp && v ? cp[v] : v 170: end 171: yield h 172: end 173: end 174: end 175: self 176: end
Yield all rows matching this dataset.
# File lib/sequel/adapters/mysql2.rb, line 255 255: def fetch_rows(sql) 256: execute(sql) do |r| 257: self.columns = if identifier_output_method 258: r.fields.map!{|c| output_identifier(c.to_s)} 259: else 260: r.fields 261: end 262: r.each(:cast_booleans=>convert_tinyint_to_bool?){|h| yield h} 263: end 264: self 265: end
Set the columns and yield the hashes to the block.
# File lib/sequel/adapters/swift.rb, line 136 136: def fetch_rows(sql) 137: execute(sql) do |res| 138: col_map = {} 139: self.columns = res.fields.map do |c| 140: col_map[c] = output_identifier(c) 141: end 142: tz = db.timezone if Sequel.application_timezone 143: res.each do |r| 144: h = {} 145: r.each do |k, v| 146: h[col_map[k]] = case v 147: when StringIO 148: SQL::Blob.new(v.read) 149: when DateTime 150: tz ? Sequel.database_to_application_timestamp(Sequel.send(:convert_input_datetime_no_offset, v, tz)) : v 151: else 152: v 153: end 154: end 155: yield h 156: end 157: end 158: self 159: end
Yield all rows matching this dataset. If the dataset is set to split multiple statements, yield arrays of hashes one per statement instead of yielding results for all statements as hashes.
# File lib/sequel/adapters/mysql.rb, line 299 299: def fetch_rows(sql) 300: execute(sql) do |r| 301: i = -1 302: cps = db.conversion_procs 303: cols = r.fetch_fields.map do |f| 304: # Pretend tinyint is another integer type if its length is not 1, to 305: # avoid casting to boolean if Sequel::MySQL.convert_tinyint_to_bool 306: # is set. 307: type_proc = f.type == 1 && cast_tinyint_integer?(f) ? cps[2] : cps[f.type] 308: [output_identifier(f.name), type_proc, i+=1] 309: end 310: self.columns = cols.map(&:first) 311: if opts[:split_multiple_result_sets] 312: s = [] 313: yield_rows(r, cols){|h| s << h} 314: yield s 315: else 316: yield_rows(r, cols){|h| yield h} 317: end 318: end 319: self 320: end
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb, line 89 89: def first_source 90: first_source_alias 91: end
The first source (primary table) for this dataset. If the dataset doesn‘t have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table
DB[:table___t].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb, line 101 101: def first_source_alias 102: source = @opts[:from] 103: if source.nil? || source.empty? 104: raise Error, 'No source specified for query' 105: end 106: case s = source.first 107: when SQL::AliasedExpression 108: s.alias 109: when Symbol 110: _, _, aliaz = split_symbol(s) 111: aliaz ? aliaz.to_sym : s 112: else 113: s 114: end 115: end
The first source (primary table) for this dataset. If the dataset doesn‘t have a table, raises an Error. If the table is aliased, returns the original table, not the alias.
DB[:table].first_source_table # => :table
DB[:table___t].first_source_table # => :table
# File lib/sequel/dataset/misc.rb, line 126 126: def first_source_table 127: source = @opts[:from] 128: if source.nil? || source.empty? 129: raise Error, 'No source specified for query' 130: end 131: case s = source.first 132: when SQL::AliasedExpression 133: s.expression 134: when Symbol 135: sch, table, aliaz = split_symbol(s) 136: aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s 137: else 138: s 139: end 140: end
Sets the frozen flag on the dataset, so you can‘t modify it. Returns the receiver.
# File lib/sequel/dataset/misc.rb, line 78 78: def freeze 79: @frozen = true 80: self 81: end
Whether the object is frozen.
# File lib/sequel/dataset/misc.rb, line 84 84: def frozen? 85: @frozen == true 86: end
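A minimal sketch:
ds = DB[:posts].freeze
ds.frozen? # => true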
Don‘t allow graphing a dataset that splits multiple statements
# File lib/sequel/adapters/mysql.rb, line 323 323: def graph(*) 324: raise(Error, "Can't graph a dataset that splits multiple result sets") if opts[:split_multiple_result_sets] 325: super 326: end
The String instance method to call on identifiers before sending them to the database.
# File lib/sequel/dataset/misc.rb, line 150 150: def identifier_input_method 151: if defined?(@identifier_input_method) 152: @identifier_input_method 153: else 154: @identifier_input_method = db.identifier_input_method 155: end 156: end
The String instance method to call on identifiers returned from the database.
# File lib/sequel/dataset/misc.rb, line 160 160: def identifier_output_method 161: if defined?(@identifier_output_method) 162: @identifier_output_method 163: else 164: @identifier_output_method = db.identifier_output_method 165: end 166: end
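For illustration only; the actual values depend on the Database settings and the adapter in use:
DB[:posts].identifier_input_method  # => e.g. :upcase on databases that fold identifiers to uppercase
DB[:posts].identifier_output_method # => e.g. :downcase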
Create a named prepared statement that is stored in the database (and connection) for reuse.
# File lib/sequel/adapters/mysql2.rb, line 243 243: def prepare(type, name=nil, *values) 244: ps = to_prepared_statement(type, values) 245: ps.extend(PreparedStatementMethods) 246: if name 247: ps.prepared_statement_name = name 248: db.set_prepared_statement(name, ps) 249: end 250: ps 251: end
Prepare the given type of query with the given name and store it in the database. Note that a new native prepared statement is created on each call to this prepared statement.
# File lib/sequel/adapters/sqlite.rb, line 345 345: def prepare(type, name=nil, *values) 346: ps = to_prepared_statement(type, values) 347: ps.extend(PreparedStatementMethods) 348: if name 349: ps.prepared_statement_name = name 350: db.set_prepared_statement(name, ps) 351: end 352: ps 353: end
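A usage sketch of the generic prepared statement API these methods implement; the statement name and the :$i placeholder are examples:
ps = DB[:posts].where(:id=>:$i).prepare(:select, :select_post_by_id)
ps.call(:i=>1)                     # run the prepared statement with i = 1
DB.call(:select_post_by_id, :i=>2) # named statements can also be run through the Database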
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
# File lib/sequel/dataset/misc.rb, line 188 188: def split_alias(c) 189: case c 190: when Symbol 191: c_table, column, aliaz = split_symbol(c) 192: [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] 193: when SQL::AliasedExpression 194: [c.expression, c.alias] 195: when SQL::JoinClause 196: [c.table, c.table_alias] 197: else 198: [c, nil] 199: end 200: end
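Illustrative calls (ds is any dataset):
ds.split_alias(:posts)                # => [:posts, nil]
ds.split_alias(Sequel.as(:posts, :p)) # => [:posts, :p]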
Makes each yield arrays of rows, with each array containing the rows for a given result set, so you can submit SQL with multiple statements and easily determine which statement returned which results. Does not work with graphing.
Modifies the row_proc of the returned dataset so that it still works as expected (running on the hashes instead of on the arrays of hashes). If you modify the row_proc afterward, note that it will receive an array of hashes instead of a hash.
# File lib/sequel/adapters/mysql.rb, line 337 337: def split_multiple_result_sets 338: raise(Error, "Can't split multiple statements on a graphed dataset") if opts[:graph] 339: ds = clone(:split_multiple_result_sets=>true) 340: ds.row_proc = proc{|x| x.map{|h| row_proc.call(h)}} if row_proc 341: ds 342: end
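A sketch, assuming the MySQL connection allows multiple statements per query:
ds = DB["SELECT 1 AS a; SELECT 2 AS b"].split_multiple_result_sets
ds.each do |rows|
  # rows is the array of hashes for one statement, e.g. [{:a=>1}], then [{:b=>2}]
end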
This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression containing one of those. In other cases, this returns nil.
# File lib/sequel/dataset/misc.rb, line 207 207: def unqualified_column_for(v) 208: unless v.is_a?(String) 209: _unqualified_column_for(v) 210: end 211: end
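Illustrative calls (ds is any dataset); the double-underscore form is the implicit-qualifier symbol syntax:
ds.unqualified_column_for(:items__price)                  # => identifier for the unqualified price column
ds.unqualified_column_for(Sequel.qualify(:items, :price)) # => identifier for the unqualified price column
ds.unqualified_column_for('price')                        # => nil (plain strings are not handled)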
Creates a unique table alias that hasn‘t already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with "_N" if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can provide an optional second array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t) # => :t
DB[:table].unused_table_alias(:table) # => :table_0
DB[:table, :table_0].unused_table_alias(:table) # => :table_1
DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # => :table_3
# File lib/sequel/dataset/misc.rb, line 235 235: def unused_table_alias(table_alias, used_aliases = []) 236: table_alias = alias_symbol(table_alias) 237: used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] 238: used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] 239: if used_aliases.include?(table_alias) 240: i = 0 241: loop do 242: ta = :"#{table_alias}_#{i}" 243: return ta unless used_aliases.include?(ta) 244: i += 1 245: end 246: else 247: table_alias 248: end 249: end
These methods all return modified copies of the receiver.