There are several ways to obtain search results as a text stream, without a graphical interface:
 - By passing option -t to the recoll program, or by calling it as recollq (through a link).
 - By using the actual recollq program.
 - By writing a custom Python program, using the Recoll Python API.
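For the third method, here is a minimal sketch of what a query with the Recoll Python API might look like. It assumes the recoll Python module is installed and that a populated index already exists; the query string is taken from the sample execution below, and the exact attribute names shown on the result documents (url, mtype, title) are common Recoll field names.

```python
# Minimal sketch: run a Recoll query from Python and print one line
# per result. Requires the recoll Python module and an existing index.
from recoll import recoll

db = recoll.connect()  # optionally: recoll.connect(confdir="/path/to/confdir")
query = db.query()
nres = query.execute("ilur -nautique mime:text/html")
print("Result count:", nres)
for i in range(nres):
    doc = query.fetchone()
    print(doc.mtype, doc.url, doc.title)
```

The query string uses the same Recoll query language as recoll -t and recollq, so the command-line examples in this section translate directly.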
The first two methods work in the same way and accept the same arguments (except
for the additional -t for recoll). The query to be executed
is specified as command line arguments.
Depending on the platform, recollq is not always built or installed
by default (recoll -t works the same way). recollq is a very
simple program, and if you can program a little C++, you may find it useful to tailor its
output format to your needs. Apart from being easily customised, recollq is
only really useful on systems where the Qt libraries are not available.
recollq has a man page. The Usage string follows:
Usage: recollq [options] [query elements]
Runs a recoll query and displays result lines.
By default, the argument(s) will be interpreted as a Recoll query language
string. The -q option was kept for compatibility with the GUI and is just
ignored: the query *must* be specified in the non-option arguments.
Query language elements:
* Implicit AND, exclusion, field spec: t1 -t2 title:t3
* OR has priority: t1 OR t2 t3 OR t4 means (t1 OR t2) AND (t3 OR t4)
* Phrase: "t1 t2" (needs additional quoting on cmd line)
Other query modes:
-o Emulate the GUI simple search in ANY TERM mode.
-a Emulate the GUI simple search in ALL TERMS mode.
-f Emulate the GUI simple search in filename mode.
Query and results options:
-c <configdir> : specify configuration directory, overriding $RECOLL_CONFDIR.
-C : collapse duplicates.
-d also dump file contents.
-n [first-]<cnt> define the result slice. The default value for [first] is 0.
Without the option, the default max count is 2000. Use n=0 for no limit.
-b : basic. Just output urls, no mime types or titles.
-Q : no result lines, just the processed query and result count.
-m : dump the whole document meta[] array for each result.
-A : output the document abstracts.
-p <cnt> : show <cnt> snippets, with page numbers instead of the
concatenated abstract.
-g <cnt> : show <cnt> snippets, with line numbers instead of the
concatenated abstract.
-S fld : sort by field <fld>.
-D : sort descending.
-s stemlang : set stemming language to use (must exist in index...).
Use -s "" to turn off stem expansion.
-T <synonyms file>: use the parameter (Thesaurus) for word expansion.
-i <dbdir> : additional index, several can be given.
-e use url encoding (%xx) for urls.
-E use exact result count instead of lower bound estimate.
-F <field name list> : output exactly these fields for each result.
The field values are encoded in base64, output in one line and
separated by one space character. This is the recommended format
for use by other programs. Use a normal query with option -m to
see the field names. Use -F '' to output all fields, but you probably
also want option -N in this case.
-N : with -F, print the (plain text) field names before the field values.
--extract_to <filepath> : extract the first result to filepath, which must not
exist. Use a -n option with an offset to select the appropriate result.
--paths-only: only print results which would have a file:// scheme, and
exclude the scheme.
Other non-query usages:
-P: Show the date span for all the documents present in the index.
Sample execution:
recollq 'ilur -nautique mime:text/html'
Recoll query: ((((ilur:(wqf=11) OR ilurs) AND_NOT (nautique:(wqf=11) OR nautiques OR nautiqu OR nautiquement)) FILTER Ttext/html))
4 results
text/html [file:///Users/dockes/projets/bateaux/ilur/comptes.html] [comptes.html] 18593 bytes
text/html [file:///Users/dockes/projets/nautique/webnautique/articles/ilur1/index.html] [Constructio...
text/html [file:///Users/dockes/projets/pagepers/index.html] [psxtcl/writemime/recoll]...
text/html [file:///Users/dockes/projets/bateaux/ilur/factEtCie/recu-chasse-maree....
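Since the usage text recommends the -F output format for consumption by other programs, here is a small sketch of how such a consumer could decode one output line. Each field value is base64-encoded and the values are separated by single spaces; the sample line below is constructed in place for illustration rather than captured from a real recollq run.

```python
import base64

def decode_fields(line):
    """Decode one 'recollq -F' output line into a list of field values.

    Splitting on a single space (rather than any whitespace) preserves
    empty fields, which appear as empty tokens between two spaces.
    """
    return [base64.b64decode(tok).decode("utf-8") for tok in line.split(" ")]

# Hypothetical sample line, as 'recollq -F "url mtype"' might print it:
sample = " ".join(
    base64.b64encode(v).decode("ascii")
    for v in (b"file:///tmp/doc.html", b"text/html")
)

print(decode_fields(sample))  # ['file:///tmp/doc.html', 'text/html']
```

With -F '' (all fields) you would normally add -N so that the plain-text field names precede the encoded values and the decoder can tell which value is which.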