.\" Automatically generated by Pod::Man 4.11 (Pod::Simple 3.35) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . 
\} .\} .rr rF .\" ======================================================================== .\" .IX Title "Bio::DB::GFF::Adaptor::dbi::pg 3pm" .TH Bio::DB::GFF::Adaptor::dbi::pg 3pm "2020-01-13" "perl v5.30.0" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" Bio::DB::GFF::Adaptor::dbi::pg \- Database adaptor for a specific PostgreSQL schema .SH "NOTES" .IX Header "NOTES" \&\s-1SQL\s0 commands that need to be executed before this adaptor will work: .PP .Vb 1 \& CREATE DATABASE ; .Ve .PP Also, select permission needs to be granted for each table in the database to the owner of the httpd process (usually 'nobody', but for some RedHat systems it is 'apache') if this adaptor is to be used with the Generic Genome Browser (gbrowse): .PP .Vb 8 \& CREATE USER nobody; \& GRANT SELECT ON TABLE fmeta TO nobody; \& GRANT SELECT ON TABLE fgroup TO nobody; \& GRANT SELECT ON TABLE fdata TO nobody; \& GRANT SELECT ON TABLE fattribute_to_feature TO nobody; \& GRANT SELECT ON TABLE fdna TO nobody; \& GRANT SELECT ON TABLE fattribute TO nobody; \& GRANT SELECT ON TABLE ftype TO nobody; .Ve .SS "Optimizing the database" .IX Subsection "Optimizing the database" PostgreSQL generally requires some tuning before you get very good performance for large databases. For general information on tuning a PostgreSQL server, see http://www.varlena.com/GeneralBits/Tidbits/perf.html. Of particular importance is executing \s-1VACUUM FULL ANALYZE\s0 whenever you change the database. .PP Additionally, for a \s-1GFF\s0 database, there are a few items you can tune. For each automatic class in your GBrowse conf file, there will be one or two searches done when searching for a feature. If there are lots of features, these searches can take several seconds. To speed these searches, do two things: .IP "1." 
4 Set 'enable_seqscan = false' in your postgresql.conf file (and restart your server). .IP "2." 4 Create 'partial' indexes for each automatic class, as shown here for the example class 'Allele': .Sp .Vb 2 \& CREATE INDEX partial_allele_gclass ON \& fgroup (lower(gname)) WHERE gclass=\*(AqAllele\*(Aq; .Ve .Sp And be sure to run \s-1VACUUM FULL ANALYZE\s0 after creating the indexes. .SH "DESCRIPTION" .IX Header "DESCRIPTION" This adaptor implements a specific PostgreSQL database schema that is compatible with Bio::DB::GFF. It inherits from Bio::DB::GFF::Adaptor::dbi, which itself inherits from Bio::DB::GFF. .PP The schema uses several tables: .IP "fdata" 4 .IX Item "fdata" This is the feature data table. Its columns are: .Sp .Vb 11 \& fid feature ID (integer) \& fref reference sequence name (string) \& fstart start position relative to reference (integer) \& fstop stop position relative to reference (integer) \& ftypeid feature type ID (integer) \& fscore feature score (float); may be null \& fstrand strand; one of "+" or "\-"; may be null \& fphase phase; one of 0, 1 or 2; may be null \& gid group ID (integer) \& ftarget_start for similarity features, the target start position (integer) \& ftarget_stop for similarity features, the target stop position (integer) .Ve .Sp Note that it would be desirable to normalize the reference sequence name, since there are usually many features that share the same reference sequence. However, in the current schema, query performance suffers dramatically when this additional join is added. .IP "fgroup" 4 .IX Item "fgroup" This is the group table. There is one row for each group. Columns: .Sp .Vb 3 \& gid the group ID (integer) \& gclass the class of the group (string) \& gname the name of the group (string) .Ve .Sp The group table serves multiple purposes. As you might expect, it is used to cluster features that logically belong together, such as the multiple exons of the same transcript. 
It is also used to assign a name and class to a singleton feature. Finally, the group table is used to identify the target of a similarity hit. This is consistent with the way in which the group field is used in the \s-1GFF\s0 version 2 format. .Sp The fgroup.gid field joins with the fdata.gid field. .Sp Examples: .Sp .Vb 7 \& sql> select * from fgroup where gname=\*(Aqsjj_2L52.1\*(Aq; \& +\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-+ \& | gid | gclass | gname | \& +\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-+ \& | 69736 | PCR_product | sjj_2L52.1 | \& +\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-+ \& 1 row in set (0.70 sec) \& \& sql> select fref,fstart,fstop from fdata,fgroup \& where gclass=\*(AqPCR_product\*(Aq and gname = \*(Aqsjj_2L52.1\*(Aq \& and fdata.gid=fgroup.gid; \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+ \& | fref | fstart | fstop | \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+ \& | CHROMOSOME_II | 1586 | 2355 | \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+ \& 1 row in set (0.03 sec) .Ve .IP "ftype" 4 .IX Item "ftype" This table contains the feature types, one per row. Columns are: .Sp .Vb 3 \& ftypeid the feature type ID (integer) \& fmethod the feature type method name (string) \& fsource the feature type source name (string) .Ve .Sp The ftype.ftypeid field joins with the fdata.ftypeid field. 
Example: .Sp .Vb 11 \& sql> select fref,fstart,fstop,fmethod,fsource from fdata,fgroup,ftype \& where gclass=\*(AqPCR_product\*(Aq \& and gname = \*(Aqsjj_2L52.1\*(Aq \& and fdata.gid=fgroup.gid \& and fdata.ftypeid=ftype.ftypeid; \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-+ \& | fref | fstart | fstop | fmethod | fsource | \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-+ \& | CHROMOSOME_II | 1586 | 2355 | PCR_product | GenePairs | \& +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-\-\-+\-\-\-\-\-\-\-\-\-\-\-+ \& 1 row in set (0.08 sec) .Ve .IP "fdna" 4 .IX Item "fdna" This table holds the raw \s-1DNA\s0 of the reference sequences. It has three columns: .Sp .Vb 3 \& fref reference sequence name (string) \& foffset offset of this sequence \& fdna the DNA sequence (longblob) .Ve .Sp To overcome problems loading large blobs, \s-1DNA\s0 is automatically fragmented into multiple segments when loading, and the position of each segment is stored in foffset. The fragment size is controlled by the \-clump_size argument during initialization. .IP "fattribute_to_feature" 4 .IX Item "fattribute_to_feature" This table holds \*(L"attributes\*(R", which are tag/value pairs stuffed into the \s-1GFF\s0 line. The first tag/value pair is treated as the group, and anything else is treated as an attribute (weird, huh?). .Sp .Vb 2 \& CHR_I assembly_tag Finished 2032 2036 . + . Note "Right: cTel33B" \& CHR_I assembly_tag Polymorphism 668 668 . + . Note "A\->C in cTel33B" .Ve .Sp The columns of this table are: .Sp .Vb 3 \& fid feature ID (integer) \& fattribute_id ID of the attribute (integer) \& fattribute_value text of the attribute (text) .Ve .Sp The fdata.fid column joins with fattribute_to_feature.fid. .IP "fattribute" 4 .IX Item "fattribute" This table holds the normalized names of the attributes. 
Fields are: .Sp .Vb 2 \& fattribute_id ID of the attribute (integer) \& fattribute_name Name of the attribute (varchar) .Ve .SS "Data Loading Methods" .IX Subsection "Data Loading Methods" In addition to implementing the abstract SQL-generating methods of Bio::DB::GFF::Adaptor::dbi, this module also implements the data loading functionality of Bio::DB::GFF. .SS "new" .IX Subsection "new" .Vb 6 \& Title : new \& Usage : $db = Bio::DB::GFF\->new(@args) \& Function: create a new adaptor \& Returns : a Bio::DB::GFF object \& Args : see below \& Status : Public .Ve .PP The new constructor is identical to the \*(L"dbi\*(R" adaptor's \fBnew()\fR method, except that the prefix \*(L"dbi:Pg\*(R" is added to the database \s-1DSN\s0 identifier automatically if it is not there already. .PP .Vb 2 \& Argument Description \& \-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\- \& \& \-dsn the DBI data source, e.g. \*(Aqdbi:Pg:dbname=ens0040\*(Aq or "ens0040" \& \& \-user username for authentication \& \& \-pass the password for authentication .Ve .SS "schema" .IX Subsection "schema" .Vb 6 \& Title : schema \& Usage : $schema = $db\->schema \& Function: return the CREATE script for the schema \& Returns : a list of CREATE statements \& Args : none \& Status : protected .Ve .PP This method returns a list containing the various \s-1CREATE\s0 statements needed to initialize the database tables. .SS "setup_load" .IX Subsection "setup_load" .Vb 6 \& Title : setup_load \& Usage : $db\->setup_load \& Function: called before load_gff_line() \& Returns : void \& Args : none \& Status : protected .Ve .PP This method performs schema-specific initialization prior to loading a set of \s-1GFF\s0 records. It prepares a set of \s-1DBI\s0 statement handles to be used in loading the data. 
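To make the constructor arguments above concrete, here is a minimal usage sketch. The \-adaptor and \-dsn/\-user/\-pass arguments are standard Bio::DB::GFF options; the database name 'ens0040' and user 'nobody' are taken from the examples in this page and must match your own PostgreSQL setup.

```perl
use Bio::DB::GFF;

# Open a Bio::DB::GFF database through the Postgres adaptor.
# 'ens0040' and 'nobody' are the example values used elsewhere
# in this page; substitute your own database name and role.
my $db = Bio::DB::GFF->new(
    -adaptor => 'dbi::pg',
    -dsn     => 'dbi:Pg:dbname=ens0040',
    -user    => 'nobody',
    -pass    => undef,
);
```

Because the adaptor prepends "dbi:Pg" when it is missing, passing \-dsn => 'ens0040' would work equally well.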
.SS "load_gff_line" .IX Subsection "load_gff_line" .Vb 6 \& Title : load_gff_line \& Usage : $db\->load_gff_line($fields) \& Function: called to load one parsed line of GFF \& Returns : true if successfully inserted \& Args : hashref containing GFF fields \& Status : protected .Ve .PP This method is called once per line of the \s-1GFF\s0 and passed a series of parsed data items that are stored into the hashref \f(CW$fields\fR. The keys are: .PP .Vb 13 \& ref reference sequence \& source annotation source \& method annotation method \& start annotation start \& stop annotation stop \& score annotation score (may be undef) \& strand annotation strand (may be undef) \& phase annotation phase (may be undef) \& group_class class of annotation\*(Aqs group (may be undef) \& group_name ID of annotation\*(Aqs group (may be undef) \& target_start start of target of a similarity hit \& target_stop stop of target of a similarity hit \& attributes array reference of attributes, each of which is a [tag=>value] array ref .Ve .SS "get_table_id" .IX Subsection "get_table_id" .Vb 6 \& Title : get_table_id \& Usage : $integer = $db\->get_table_id($table,@ids) \& Function: get the ID of a group or type \& Returns : an integer ID or undef \& Args : the table name and two string identifiers \& Status : private .Ve .PP This internal method is called by load_gff_line to look up the integer \&\s-1ID\s0 of an existing feature type or group. The arguments are the name of the table, and two string identifiers. For feature types, the identifiers are the method and source. For groups, the identifiers are group name and class. .PP This method requires that a statement handle named \fIlookup_$table\fR has been created previously by \fBsetup_load()\fR. It is here to overcome deficiencies in mysql's \s-1INSERT\s0 syntax. 
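The lookup handles that \fBget_table_id()\fR relies on amount to simple keyed SELECTs against the fgroup and ftype tables documented above. The following DBI sketch is illustrative only; it mirrors what a prepared \fIlookup_fgroup\fR handle does, but the exact SQL that \fBsetup_load()\fR prepares may differ. It assumes an already-connected DBI handle \f(CW$dbh\fR.

```perl
# Illustrative only: fetch the integer gid for a (class, name) pair,
# as the prepared "lookup_fgroup" handle would. The example class
# and name come from the fgroup example earlier in this page.
my $sth = $dbh->prepare(
    'SELECT gid FROM fgroup WHERE gclass = ? AND gname = ?'
);
$sth->execute('PCR_product', 'sjj_2L52.1');
my ($gid) = $sth->fetchrow_array;   # undef if no such group exists
```

The analogous ftype lookup keys on (fmethod, fsource) and returns ftypeid.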
.SS "range_query" .IX Subsection "range_query" .Vb 6 \& Title : range_query \& Usage : $db\->range_query($range_type,$refseq,$refclass,$start,$stop,$types,$order_by_group,$attributes,$binsize) \& Function: create statement handle for range/overlap queries \& Returns : a DBI statement handle \& Args : see below \& Status : Protected .Ve .PP This method constructs the statement handle for this module's central query: given a range and/or a list of feature types, fetch their \s-1GFF\s0 records. It overrides a method in dbi.pm so that the overlap query uses \s-1SQL\s0 optimized for Postgres. Specifically, instead of writing the bin-related section as a set of ORs, each bin piece is placed in a separate \s-1SELECT\s0 and then they are UNIONed together. This subroutine requires several replacements for other subroutines in dbi.pm. In this module, they are named the same as those in dbi.pm but prefixed with \&\*(L"pg_\*(R". .PP The positional arguments are as follows: .PP .Vb 1 \& Argument Description \& \& $isrange A flag indicating that this is a range \& query. Otherwise an overlap query is \& assumed. \& \& $refseq The reference sequence name (undef if no range). \& \& $refclass The reference sequence class (undef if no range). \& \& $start The start of the range (undef if none). \& \& $stop The stop of the range (undef if none). \& \& $types Array ref containing zero or more feature types in the \& format [method,source]. \& \& $order_by_group A flag indicating that the statement handle should group \& the features by group id (handy for iterative fetches) \& \& $attributes A hash containing select attributes. \& \& $binsize A bin size for generating tables of feature density. .Ve .SS "search_notes" .IX Subsection "search_notes" This PostgreSQL adaptor does not implement the search_notes method because it can be very slow (although the code for the method is contained in this module, commented out). 
There is, however, a PostgreSQL adaptor that does implement it in a more efficient way: Bio::DB::GFF::Adaptor::dbi::pg_fts, which inherits from this adaptor and uses the optional PostgreSQL module TSearch2 for full text indexing. See that adaptor's documentation for more information. .PP See also Bio::DB::GFF. .PP .Vb 6 \& Title : search_notes \& Usage : @search_results = $db\->search_notes("full text search string",$limit) \& Function: search the notes for a text string using an ILIKE query \& Returns : array of results \& Args : full text search string, and an optional row limit \& Status : public .Ve .PP This is a replacement for the mysql-specific method. Given a search string, it performs an \s-1ILIKE\s0 search of the notes table and returns an array of results. Each row of the returned array is an arrayref containing the following fields: .PP .Vb 3 \& column 1 A Bio::DB::GFF::Featname object, suitable for passing to segment() \& column 2 The text of the note \& column 3 A relevance score. .Ve .PP Note that for large databases this can be very slow and may result in timeouts or 500 \s-1CGI\s0 errors. If this is happening on a regular basis, you should look into using Bio::DB::GFF::Adaptor::dbi::pg_fts which implements the TSearch2 full text indexing scheme. .SS "make_meta_set_query" .IX Subsection "make_meta_set_query" .Vb 6 \& Title : make_meta_set_query \& Usage : $sql = $db\->make_meta_set_query \& Function: return SQL fragment for setting a meta parameter \& Returns : SQL fragment \& Args : none \& Status : public .Ve .PP By default this does nothing; meta parameters are not stored or retrieved. .SS "make_features_by_name_where_part" .IX Subsection "make_features_by_name_where_part" .Vb 8 \& Title : make_features_by_name_where_part \& Usage : $db\->make_features_by_name_where_part \& Function: Overrides a function in Bio::DB::GFF::Adaptor::dbi to ensure \& that searches will be case insensitive. 
It creates the SQL \& fragment needed to select a feature by its group name & class \& Returns : a SQL fragment and bind arguments \& Args : see below \& Status : Protected .Ve