diff --git a/ABOUT-NLS b/ABOUT-NLS
index 9bff4ce40..cfc796ed6 100644
--- a/ABOUT-NLS
+++ b/ABOUT-NLS
@@ -1,309 +1,310 @@
Invenio NATIVE LANGUAGE SUPPORT
===============================
About
=====
This document describes the Native Language Support (NLS) in Invenio.
Contents
========
1. Native Language Support information for administrators
2. Native Language Support information for translators
3. Native Language Support information for programmers
A. Introducing a new language
B. Integrating translation contributions
1. Native Language Support information for administrators
=========================================================
Invenio is currently available in the following languages:
af = Afrikaans
ar = Arabic
bg = Bulgarian
ca = Catalan
cs = Czech
de = German
el = Greek
en = English
es = Spanish
+ fa = Persian (Farsi)
fr = French
gl = Galician
hr = Croatian
hu = Hungarian
it = Italian
ja = Japanese
ka = Georgian
lt = Lithuanian
no = Norwegian (Bokmål)
pl = Polish
pt = Portuguese
ro = Romanian
ru = Russian
rw = Kinyarwanda
sk = Slovak
sv = Swedish
uk = Ukrainian
zh_CN = Chinese (China)
zh_TW = Chinese (Taiwan)
If you are installing Invenio and you want to enable/disable some
languages, please just follow the standard installation procedure as
described in the INSTALL file. The default language of the
installation as well as the list of all user-seen languages can be
selected in the general invenio.conf file, see variables CFG_SITE_LANG
and CFG_SITE_LANGS.
(Please note that some runtime Invenio daemons -- such as webcoll,
responsible for updating the collection cache, running every hour or
so -- may take twice as long when twice as many user-seen languages
are selected, because webcoll creates collection cache page elements for
every user-seen language. Therefore, if you have defined thousands of
collections and if you find the webcoll speed to be slow in your
setup, you may want to try to limit the list of selected languages.)
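For example, the relevant part of invenio.conf might look like this
(the values shown are illustrative, not defaults of any particular
installation):

```
CFG_SITE_LANG = en
CFG_SITE_LANGS = en,fr,de,it
```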
2. Native Language Support information for translators
======================================================
If you want to contribute a translation to Invenio, then please follow
the procedure below:
- Please check for the existence of a po/LL.po file for your language,
where LL stands for the ISO 639 language code (e.g. `el' for
Greek). If such a file exists, then this language is already
supported, in which case you may want to review the existing
translation (see below). If the file does not exist yet, then you
can create an empty one by copying the invenio.pot template file
into LL.po that you can review as described in the next item.
(Please note that you would have to translate some dynamic
elements that are currently not located in the PO file, see the
appendix A below.)
- Please edit LL.po to review existing translation. The PO file
format is a standard GNU gettext one and so you can take advantage
of dedicated editing modes of programs such as GNU Emacs, KBabel,
or poEdit to edit it. Pay special attention to strings marked as
fuzzy and untranslated. (E.g. in the Emacs PO mode, press `f' and
`u' to find them.) Do not forget to remove fuzzy marks for
reviewed translations. (E.g. in the Emacs PO mode, press `TAB' to
remove fuzzy status of a string.)
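For illustration, a fuzzy entry in a PO file looks roughly like this
(the source reference and the Greek translation below are made up,
not taken from the Invenio sources); reviewing it means checking the
msgstr and then removing the `#, fuzzy' line:

```
#: modules/websearch/lib/websearch_templates.py:123
#, fuzzy
msgid "Search Help"
msgstr "Βοήθεια αναζήτησης"
```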
- After you are done with translations, please validate your file to
make sure it does not contain formatting errors. (E.g. in the
Emacs PO mode, press `V' to validate the file.)
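Outside Emacs, the same validation can be done with GNU gettext's
msgfmt tool, e.g.:

```
$ msgfmt --check --output-file=/dev/null LL.po
```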
- If you have access to a test installation of Invenio, you may want
to see your modified PO file in action:
$ cd po
$ emacs ja.po # edit Japanese translation
$ make update-gmo
$ make install
$ sudo apachectl restart
$ firefox http://your.site/?ln=ja # check it out in context
If you do not have access to a test installation, please
contribute your PO file to the developers team (see the next step)
and we shall install it on a test site and contact you so that you
will be able to check your translation in the global context of
the application.
(Note to developers: the ``make update-gmo'' command may need to be
run before ``make'' if the latter fails, even if you are not
touching translations at all. The reason is that the gmo files
are not stored in CVS, while they are included
in the distribution tarball. So, if you are building from CVS,
and you do not have them in your tree, you may get build errors in
directories like modules/webhelp/web/admin saying things like ``No
rule to make target `index.bg.html'''. The solution is to run
``make update-gmo'' to produce the gmo files before running
``make''. End of note to developers.)
- Please contribute your translation by emailing the file to
<info@invenio-software.org>. Your help is greatly appreciated and
will be properly credited in the THANKS file.
See also the GNU gettext manual, especially the chapters 5, 6 and 11.
<http://www.gnu.org/software/gettext/manual/html_chapter/gettext_toc.html>
3. Native Language Support information for programmers
======================================================
Invenio uses the standard GNU gettext I18N and L10N philosophy.
In Python programs, all output strings should be made translatable via
the _() convention:
    from messages import gettext_set_language
    [...]
    def square(x, ln=CFG_SITE_LANG):
        _ = gettext_set_language(ln)
        print _("Hello there!")
        print _("The square of %s is %s.") % (x, x*x)
In webdoc source files, the convention is _()_:
_(Search Help)_
Here are some tips for writing easily translatable output messages:
- Do not cut big phrases into several pieces, the meaning may be
harder to grasp and to render properly in another language. Leave
them in the context. Do not try to economize and reuse
standalone-translated words as parts of bigger sentences. The
translation could differ due to gender, for example. Rather
define two sentences instead:
not: _("This %s is not available.") % x,
where x is either _("basket") or _("alert")
but: _("This basket is not available.") and
_("This alert is not available.")
- If you print some value in a translatable phrase, you can use an
unnamed %i or %s string replacement placeholder:
yes: _("There are %i baskets.") % nb_baskets
But, as soon as you are printing more than one value, you should
use named string placeholders because in some languages the parts
of the sentence may be reversed when translated:
not: _("There are %i baskets shared by %i groups.") % \
(nb_baskets, nb_groups)
but: _("There are %(x_nb_baskets)s baskets shared by %(x_nb_groups)s groups.") % \
{'x_nb_baskets': nb_baskets, 'x_nb_groups': nb_groups,}
Please use the `x_' prefix for the named placeholder variables to
ease the localization task of the translator.
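As a minimal sketch of the two conventions above (plain Python with
illustrative variable names; real code would wrap the strings in _()
as shown earlier):

```python
nb_baskets = 3
nb_groups = 2

# One value: an unnamed placeholder is fine.
msg_one = "There are %i baskets." % nb_baskets

# Several values: named placeholders (with the `x_' prefix) let a
# translator reorder the sentence parts freely.
msg_two = "There are %(x_nb_baskets)s baskets shared by %(x_nb_groups)s groups." % \
    {'x_nb_baskets': nb_baskets, 'x_nb_groups': nb_groups}

print(msg_one)
print(msg_two)
```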
- Do not mix HTML presentation inside phrases. If you want to
reserve space for HTML markup, please use generic replacement
placeholders as prologue and epilogue:
not: _("This is <b>cold</b>.")
but: _("This is %(x_fmt_open)scold%(x_fmt_close)s.")
Ditto for links:
not: _("This is <a href="%s">homepage</a>.")
but: _("This is %(x_url_open)shomepage%(x_url_close)s.")
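A sketch of how the business logic might then substitute the markup
(the identity function stands in for the real gettext lookup, purely
for illustration):

```python
_ = lambda s: s  # stand-in for the real gettext lookup

# The translatable phrase carries no HTML; the markup is injected
# by the caller via generic placeholders.
msg = _("This is %(x_fmt_open)scold%(x_fmt_close)s.") % \
    {'x_fmt_open': '<b>', 'x_fmt_close': '</b>'}

print(msg)
```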
- Do not leave unnecessary things in short commonly used
translatable expressions, such as extraneous spaces or colons
before or after them. Rather put them in the business logic:
not: _(" subject")
but: " " + _("subject")
not: _("Record %i:")
but: _("Record") + "%i:" % recID
On the other hand, in long sentences when the trailing punctuation
has its meaning as an integral part of the label to be shown on
the interface, you should leave them:
not: _("Nearest terms in any collection are")
but: _("Nearest terms in any collection are:")
- Last but not least: the best is to follow the style of existing
messages as a model, so that the translators are presented with a
homogeneous and consistently presented output phrase set.
Appendix A. Introducing a new language
======================================
If you are introducing a new language for the first time, then please
firstly create and edit the PO file as described above in Section 2.
This will cover the largest portion of the translation work, but it
is not quite enough, because some dynamic elements that are not
located in PO files currently have to be translated as well.
The development team can edit the respective files themselves, if the
translator sends over the following translations by email:
- demo server name, from invenio.conf:
Atlantis Institute of Fictive Science
- demo collection names, from democfgdata.sql:
Preprints
Books
Theses
Reports
Articles
Pictures
CERN Divisions
CERN Experiments
Theoretical Physics (TH)
Experimental Physics (EP)
Articles & Preprints
Books & Reports
Multimedia & Arts
Poetry
- demo right-hand-side portalbox, from democfgdata.sql:
ABOUT THIS SITE
Welcome to the demo site of the Invenio, a free document server
software coming from CERN. Please feel free to explore all the
features of this demo site to the full.
SEE ALSO
The development team will then edit various files (po/LINGUAS, config
files, sql files, plenty of Makefile files, etc) as needed.
The last phase of the initial introduction of the new language would
be to translate some short static HTML pages such as:
- modules/webhelp/web/help-central.webdoc
Thanks for helping us to internationalize Invenio.
Appendix B. Integrating translation contributions
=================================================
This appendix contains some tips on integrating translated phrases
that were prepared for different Invenio releases. It is mostly
of interest to Invenio developers or the release manager.
Imagine that we have a working translation file sk.po and that we have
received a contribution sk-CONTRIB.po that was prepared for a previous
Invenio release, so that the messages do not fully correspond.
Moreover, another person might have worked with the sk.po file in
the meantime. The goal is to integrate the contributions.
Firstly, check whether the contributed file sk-CONTRIB.po was indeed
prepared for a different software release version:
$ msgcmp --use-fuzzy --use-untranslated sk-CONTRIB.po invenio.pot
If yes, then join its translations with the ones in the latest sk.po
file:
$ msgcat sk-CONTRIB.po sk.po > sk-TMP.po
and update the message references:
$ msgmerge sk-TMP.po invenio.pot > sk-NEW.po
This will give the new file sk-NEW.po that should now be msgcmp'rable
to invenio.pot.
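A final comparison could then be run with GNU gettext's msgcmp, as in
the first step:

```
$ msgcmp sk-NEW.po invenio.pot
```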
Lastly, we will have to go manually through sk-NEW.po in order to
resolve potential translation conflicts (marked via ``#-#-#-#-#''
fuzzy translations). If the conflicts are evident and easy to
resolve, for example corrected typos, we can fix them. If the
conflicts are of translational nature and cannot be resolved without
consulting the translators, we should warn them about the conflicts.
After the evident conflicts are resolved and the file validates okay,
we can rename it to sk.po and we are done.
(Note that we could have used the ``--use-first'' option to msgcat if
we were fully sure that the first translation file (sk-CONTRIB) was
to be preferred as far as the quality of translation goes.)
- end of file -
diff --git a/INSTALL b/INSTALL
index 518e162ea..37343e14c 100644
--- a/INSTALL
+++ b/INSTALL
@@ -1,916 +1,914 @@
Invenio INSTALLATION
====================
About
=====
This document specifies how to build, customize, and install Invenio
v1.1.2 for the first time. See RELEASE-NOTES if you are upgrading
from a previous Invenio release.
Contents
========
0. Prerequisites
1. Quick instructions for the impatient Invenio admin
2. Detailed instructions for the patient Invenio admin
0. Prerequisites
================
Here is the software you need to have around before you
start installing Invenio:
a) Unix-like operating system. The main development and
production platforms for Invenio at CERN are GNU/Linux
distributions Debian, Gentoo, Scientific Linux (aka RHEL),
Ubuntu, but we also develop on Mac OS X. Basically any Unix
system supporting the software listed below should do.
If you are using Debian GNU/Linux ``Lenny'' or later, then you
can install most of the below-mentioned prerequisites and
recommendations by running:
$ sudo aptitude install python-dev apache2-mpm-prefork \
mysql-server mysql-client python-mysqldb \
python-4suite-xml python-simplejson python-xml \
python-libxml2 python-libxslt1 gnuplot poppler-utils \
gs-common clisp gettext libapache2-mod-wsgi unzip \
pdftk html2text giflib-tools \
- pstotext netpbm python-chardet
+ pstotext netpbm
You also need to install the following packages from PyPI
by running:
$ sudo pip install -r requirements.txt
$ sudo pip install -r requirements-extras.txt
$ sudo pip install -r requirements-flask.txt
$ sudo pip install -r requirements-flask-ext.txt
You may also want to install some of the following packages,
if they are available on your particular platform:
$ sudo aptitude install sbcl cmucl pylint pychecker pyflakes \
python-profiler python-epydoc libapache2-mod-xsendfile \
openoffice.org python-utidylib python-beautifulsoup
+ (Note that if you use pip to manage your Python dependencies
+ instead of operating system packages, please see the section
+ (d) below on how to use pip instead of aptitude.)
+
Moreover, you should install some Message Transfer Agent (MTA)
such as Postfix so that Invenio can email notification
alerts or registration information to the end users, contact
moderators and reviewers of submitted documents, inform
administrators about various runtime system information, etc:
$ sudo aptitude install postfix
After running the above-quoted aptitude command(s), you can
proceed to configuring your MySQL server instance
(max_allowed_packet in my.cnf, see item 0b below) and then to
installing the Invenio software package in the section 1
below.
If you are using another operating system, then please
continue reading the rest of this prerequisites section, and
please consult our wiki pages for any concrete hints for your
specific operating system.
<https://twiki.cern.ch/twiki/bin/view/CDS/Invenio>
b) MySQL server (may be on a remote machine), and MySQL client
(must be available locally too). MySQL versions 4.1 or 5.0
are supported. Please set the variable "max_allowed_packet"
in your "my.cnf" init file to at least 4M. (For sites such as
INSPIRE, having 1M records with 10M citer-citee pairs in its
citation map, you may need to increase max_allowed_packet to
1G.) You may perhaps also want to run your MySQL server
natively in UTF-8 mode by setting "default-character-set=utf8"
in various parts of your "my.cnf" file, such as in the
"[mysql]" part and elsewhere; but this is not really required.
<http://mysql.com/>
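For instance, a hypothetical my.cnf excerpt implementing the above
advice could read (values and section placement are illustrative):

```
[mysqld]
max_allowed_packet = 4M

[mysql]
default-character-set = utf8
```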
c) Apache 2 server, with support for loading DSO modules, and
optionally with SSL support for HTTPS-secure user
authentication, and mod_xsendfile for off-loading file
downloads away from Invenio processes to Apache.
<http://httpd.apache.org/>
<http://tn123.ath.cx/mod_xsendfile/>
- d) Python v2.4 or above:
+ d) Python v2.6 or above:
<http://python.org/>
as well as the following Python modules:
- (mandatory) MySQLdb (version >= 1.2.1_p2; see below)
<http://sourceforge.net/projects/mysql-python>
- (mandatory) Pyparsing, for document parsing
<http://pyparsing.wikispaces.com/>
- (recommended) python-dateutil, for complex date processing:
<http://labix.org/python-dateutil>
- (recommended) PyXML, for XML processing:
<http://pyxml.sourceforge.net/topics/download.html>
- (recommended) PyRXP, for very fast XML MARC processing:
<http://www.reportlab.org/pyrxp.html>
- (recommended) lxml, for XML/XLST processing:
<http://lxml.de/>
- (recommended) libxml2-python, for XML/XLST processing:
<ftp://xmlsoft.org/libxml2/python/>
- - (recommended) simplejson, for AJAX apps:
- <http://undefined.org/python/#simplejson>
- Note that if you are using Python-2.6, you don't need to
- install simplejson, because the module is already included
- in the main Python distribution.
- (recommended) Gnuplot.Py, for producing graphs:
<http://gnuplot-py.sourceforge.net/>
- (recommended) Snowball Stemmer, for stemming:
<http://snowball.tartarus.org/wrappers/PyStemmer-1.0.1.tar.gz>
- (recommended) py-editdist, for record merging:
<http://www.mindrot.org/projects/py-editdist/>
- (recommended) numpy, for citerank methods:
<http://numpy.scipy.org/>
- (recommended) magic, for full-text file handling:
<http://www.darwinsys.com/file/>
- (optional) chardet, for character encoding detection:
<http://chardet.feedparser.org/>
- (optional) 4suite, slower alternative to PyRXP and
libxml2-python:
<http://4suite.org/>
- (optional) feedparser, for web journal creation:
<http://feedparser.org/>
- (optional) RDFLib, to use RDF ontologies and thesauri:
<http://rdflib.net/>
- (optional) mechanize, to run regression web test suite:
<http://wwwsearch.sourceforge.net/mechanize/>
- (optional) python-mock, mocking library for the test suite:
<http://www.voidspace.org.uk/python/mock/>
- - (optional) hashlib, needed only for Python-2.4 and only
- if you would like to use AWS connectivity:
- <http://pypi.python.org/pypi/hashlib>
- (optional) utidylib, for HTML washing:
<http://utidylib.berlios.de/>
- (optional) Beautiful Soup, for HTML washing:
<http://www.crummy.com/software/BeautifulSoup/>
- (optional) Python Twitter (and its dependencies) if you want
to use the Twitter Fetcher bibtasklet:
<http://code.google.com/p/python-twitter/>
- (optional) Python OpenID if you want to enable OpenID support
for authentication:
<http://pypi.python.org/pypi/python-openid/>
- (optional) Python Rauth if you want to enable OAuth 1.0/2.0
support for authentication (depends on Python-2.6 or later):
<http://packages.python.org/rauth/>
+ - (optional) unidecode, for ASCII representation of Unicode
+ text:
+ <https://pypi.python.org/pypi/Unidecode>
+
+ Note that if you are using pip to install and manage your
+ Python dependencies, then you can run:
+
+ $ sudo pip install -r requirements.txt
+ $ sudo pip install -r requirements-extras.txt
- Note: MySQLdb version 1.2.1_p2 or higher is recommended. If
- you are using an older version of MySQLdb, you may get
- into problems with character encoding.
+ to install all mandatory, recommended, and optional packages
+ mentioned above.
e) mod_wsgi Apache module. Versions 3.x and above are
recommended.
<http://code.google.com/p/modwsgi/>
- Note: if you are using Python 2.4 or earlier, then you should
- also install the wsgiref Python module, available from:
- <http://pypi.python.org/pypi/wsgiref/> (As of Python 2.5
- this module is included in standard Python
- distribution.)
-
f) If you want to be able to extract references from PDF fulltext
files, then you need to install pdftotext version 3 at least.
<http://poppler.freedesktop.org/>
<http://www.foolabs.com/xpdf/home.html>
g) If you want to be able to search for words in the fulltext
files (i.e. to have fulltext indexing) or to stamp submitted
files, then you need as well to install some of the following
tools:
- for Microsoft Office/OpenOffice.org document conversion:
OpenOffice.org
<http://www.openoffice.org/>
- for PDF file stamping: pdftk, pdf2ps
<http://www.accesspdf.com/pdftk/>
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>
- for PDF files: pdftotext or pstotext
<http://poppler.freedesktop.org/>
<http://www.foolabs.com/xpdf/home.html>
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>
- for PostScript files: pstotext or ps2ascii
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>
- for DjVu creation, elaboration: DjVuLibre
<http://djvu.sourceforge.net>
- to perform OCR: OCRopus (tested only with release 0.3.1)
<http://code.google.com/p/ocropus/>
- to perform different image elaborations: ImageMagick
<http://www.imagemagick.org/>
- to generate PDF after OCR: netpbm, ReportLab and pyPdf or pyPdf2
<http://netpbm.sourceforge.net/>
<http://www.reportlab.org/rl_toolkit.html>
<http://pybrary.net/pyPdf/>
<http://knowah.github.io/PyPDF2/>
h) If you have chosen to install fast XML MARC Python processors
in the step d) above, then you have to install the parsers
themselves:
- (optional) 4suite:
<http://4suite.org/>
i) (recommended) Gnuplot, the command-line driven interactive
plotting program. It is used to display download and citation
history graphs on the Detailed record pages on the web
interface. Note that Gnuplot must be compiled with PNG output
support, that is, with the GD library. Note also that Gnuplot
is not required, only recommended.
<http://www.gnuplot.info/>
j) (recommended) A Common Lisp implementation, such as CLISP,
SBCL or CMUCL. It is used for the web server log analysing
tool and the metadata checking program. Note that any of the
three implementations CLISP, SBCL, or CMUCL will do. CMUCL
produces fastest machine code, but it does not support UTF-8
yet. Pick up CLISP if you don't know what to do. Note that a
Common Lisp implementation is not required, only recommended.
<http://clisp.cons.org/>
<http://www.cons.org/cmucl/>
<http://sbcl.sourceforge.net/>
k) GNU gettext, a set of tools that makes it possible to
translate the application in multiple languages.
<http://www.gnu.org/software/gettext/>
This is available by default on many systems.
l) (recommended) xlwt 0.7.2, Library to create spreadsheet files
compatible with MS Excel 97/2000/XP/2003 XLS files, on any
platform, with Python 2.3 to 2.6
<http://pypi.python.org/pypi/xlwt>
m) (recommended) matplotlib 1.0.0 is a python 2D plotting library
which produces publication quality figures in a variety of
hardcopy formats and interactive environments across
platforms. matplotlib can be used in python scripts, the
Python and IPython shells (a la MATLAB® or Mathematica®),
web application servers, and six graphical user interface
toolkits. It is used to generate pie graphs in the custom
summary query (WebStat)
<http://matplotlib.sourceforge.net>
n) (optional) FFmpeg, an open-source collection of tools and libraries
to convert video and audio files. It makes use of both internal
as well as external libraries to generate videos for the web, such
as Theora, WebM and H.264, out of almost any conceivable video input.
FFmpeg is needed to run video related modules and submission workflows
in Invenio. The minimal configuration of ffmpeg for the Invenio demo site
requires a number of external libraries. It is highly recommended
to remove all installed versions and packages that come with
various Linux distributions and to install the latest versions from
sources. Additionally, you will need the MediaInfo library for multimedia
metadata handling.
Minimum libraries for the demo site:
- the ffmpeg multimedia encoder tools
<http://ffmpeg.org/>
- a library for jpeg images needed for thumbnail extraction
<http://www.openjpeg.org/>
- a library for the ogg container format, needed for Vorbis and Theora
<http://www.xiph.org/ogg/>
- the OGG Vorbis audio codec library
<http://www.vorbis.com/>
- the OGG Theora video codec library
<http://www.theora.org/>
- the WebM video codec library
<http://www.webmproject.org/>
- the mediainfo library for multimedia metadata
<http://mediainfo.sourceforge.net/>
Recommended for H.264 video (!be aware of licensing issues!):
- a library for H.264 video encoding
<http://www.videolan.org/developers/x264.html>
- a library for Advanced Audio Coding
<http://www.audiocoding.com/faac.html>
- a library for MP3 encoding
<http://lame.sourceforge.net/>
o) (recommended) RabbitMQ is a message broker used by Celery for running
a distributed task queue <http://www.rabbitmq.com/download.html>.
- Install
sudo aptitude install rabbitmq-server
- Enable web interface
sudo rabbitmq-plugins enable rabbitmq_management
- Add user and vhost
sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl add_vhost myvhost
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
- Allow Web UI login
sudo rabbitmqctl set_user_tags myuser management
- Change default user password
sudo rabbitmqctl change_password guest guest
sudo service rabbitmq-server restart
- Starting Celery worker (after Invenio is installed):
celery worker -A invenio -l info -B -E
- Starting Flower (monitoring web interface, requires Python 2.6):
pip install flower
flower --port=5555
http://localhost:55672 (RabbitMQ web admin)
http://localhost:5555 (Flower UI)
1. Quick instructions for the impatient Invenio admin
=========================================================
1a. Installation
----------------
$ cd $HOME/src/
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz.md5
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz.sig
$ md5sum -c invenio-1.1.2.tar.gz.md5
$ gpg --verify invenio-1.1.2.tar.gz.sig invenio-1.1.2.tar.gz
$ tar xvfz invenio-1.1.2.tar.gz
$ cd invenio-1.1.2
$ ./configure
$ make
$ make install
$ make install-bootstrap
$ make install-hogan-plugin
$ make install-mathjax-plugin ## optional
$ make install-jquery-plugins ## optional
$ make install-jquery-tokeninput ## optional
$ make install-plupload-plugin ## optional
$ make install-ckeditor-plugin ## optional
$ make install-pdfa-helper-files ## optional
$ make install-mediaelement ## optional
$ make install-solrutils ## optional
$ make install-js-test-driver ## optional
1b. Configuration
-----------------
$ sudo chown -R www-data.www-data /opt/invenio
$ sudo -u www-data emacs /opt/invenio/etc/invenio-local.conf
$ sudo -u www-data /opt/invenio/bin/inveniocfg --update-all
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-secret-key
$ sudo -u www-data /opt/invenio/bin/inveniocfg --update-all
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-tables
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-bibfield-conf
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-webstat-conf
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-apache-conf
$ sudo /etc/init.d/apache2 restart
$ sudo -u www-data /opt/invenio/bin/inveniocfg --check-openoffice
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-demo-site
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-demo-records
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-unit-tests
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-regression-tests
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-web-tests
$ sudo -u www-data /opt/invenio/bin/inveniocfg --remove-demo-records
$ sudo -u www-data /opt/invenio/bin/inveniocfg --drop-demo-site
$ firefox http://your.site.com/help/admin/howto-run
2. Detailed instructions for the patient Invenio admin
==========================================================
2a. Installation
----------------
Invenio uses the standard GNU autoconf method to build and
install its files. This means that you proceed as follows:
$ cd $HOME/src/
Change to a directory where we will build the Invenio
sources. (The built files will be installed into different
"target" directories later.)
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz.md5
$ wget http://invenio-software.org/download/invenio-1.1.2.tar.gz.sig
Fetch Invenio source tarball from the distribution server,
together with MD5 checksum and GnuPG cryptographic signature
files useful for verifying the integrity of the tarball.
$ md5sum -c invenio-1.1.2.tar.gz.md5
Verify MD5 checksum.
$ gpg --verify invenio-1.1.2.tar.gz.sig invenio-1.1.2.tar.gz
Verify GnuPG cryptographic signature. Note that you may
first have to import my public key into your keyring, if you
haven't done that already:
$ gpg --keyserver wwwkeys.eu.pgp.net --recv-keys 0xBA5A2B67
The output of the gpg --verify command should then read:
Good signature from "Tibor Simko <tibor@simko.info>"
You can safely ignore any trusted signature certification
warning that may follow after the signature has been
successfully verified.
$ tar xvfz invenio-1.1.2.tar.gz
Untar the distribution tarball.
$ cd invenio-1.1.2
Go to the source directory.
$ ./configure
Configure Invenio software for building on this specific
platform. You can use the following optional parameters:
--prefix=/opt/invenio
Optionally, specify the Invenio general
installation directory (default is /opt/invenio).
It will contain command-line binaries and program
libraries containing the core Invenio
functionality, but also store web pages, runtime log
and cache information, document data files, etc.
Several subdirs like `bin', `etc', `lib', or `var'
will be created inside the prefix directory to this
effect. Note that the prefix directory should be
chosen outside of the Apache htdocs tree, since only
one of its subdirectories (prefix/var/www) is to be
accessible directly via the Web (see below).
Note that Invenio won't install to any other
directory but to the prefix mentioned in this
configuration line.
- --with-python=/opt/python/bin/python2.4
+ --with-python=/opt/python/bin/python2.7
Optionally, specify a path to some specific Python
binary. This is useful if you have more than one
Python installation on your system. If you don't set
this option, then the first Python that will be found
in your PATH will be chosen for running Invenio.
--with-mysql=/opt/mysql/bin/mysql
Optionally, specify a path to some specific MySQL
client binary. This is useful if you have more than
one MySQL installation on your system. If you don't
set this option, then the first MySQL client
executable that will be found in your PATH will be
chosen for running Invenio.
--with-clisp=/opt/clisp/bin/clisp
Optionally, specify a path to CLISP executable. This
is useful if you have more than one CLISP
installation on your system. If you don't set this
option, then the first executable that will be found
in your PATH will be chosen for running Invenio.
--with-cmucl=/opt/cmucl/bin/lisp
Optionally, specify a path to CMUCL executable. This
is useful if you have more than one CMUCL
installation on your system. If you don't set this
option, then the first executable that will be found
in your PATH will be chosen for running Invenio.
--with-sbcl=/opt/sbcl/bin/sbcl
Optionally, specify a path to SBCL executable. This
is useful if you have more than one SBCL
installation on your system. If you don't set this
option, then the first executable that will be found
in your PATH will be chosen for running Invenio.
--with-openoffice-python
Optionally, specify the path to the Python interpreter
embedded with OpenOffice.org. This is normally not
contained in the normal path. If you don't specify this,
it won't be possible to use OpenOffice.org to convert from and
to Microsoft Office and OpenOffice.org documents.
This configuration step is mandatory. Usually, you do this
step only once.
(Note that if you are building Invenio not from a
released tarball, but from the Git sources, then you have to
generate the configure file via autotools:
$ sudo aptitude install automake1.9 autoconf
$ aclocal-1.9
$ automake-1.9 -a
$ autoconf
after which you proceed with the usual configure command.)
$ make
Launch the Invenio build. Since many messages are printed
during the build process, you may want to run it in a
fast-scrolling terminal such as rxvt or in a detached screen
session.
During this step all the pages and scripts will be
pre-created and customized based on the config you have
edited in the previous step.
Note that on systems such as FreeBSD or Mac OS X you have to
use GNU make ("gmake") instead of "make".
$ make install
Install the web pages, scripts, utilities and everything
needed for Invenio runtime into respective installation
directories, as specified earlier by the configure command.
Note that if you are installing Invenio for the first
time, you will be asked to create symbolic link(s) from
Python's site-packages system-wide directory(ies) to the
installation location. This is in order to instruct Python
where to find Invenio's Python files. You will be
hinted as to the exact command to use based on the
parameters you have used in the configure command.
$ make install-bootstrap
This will automatically download and install Twitter
Bootstrap prerequisite.
$ make install-hogan-plugin
This will automatically download and install the Hogan.js
prerequisite.
$ make install-mathjax-plugin ## optional
This will automatically download and install in the proper
place MathJax, a JavaScript library to render LaTeX formulas
in the client browser.
Note that in order to enable the rendering you will have to
set the variable CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS in
invenio-local.conf to a suitable list of output format
codes. For example:
CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS = hd,hb
$ make install-jquery-plugins ## optional
This will automatically download and install in the proper
place jQuery and related plugins. They are used for AJAX
applications such as the record editor.
Note that `unzip' is needed when installing jquery plugins.
$ make install-jquery-tokeninput ## optional
This will automatically download and install the jQuery
Tokeninput prerequisite.
$ make install-plupload-plugin ## optional
This will automatically download and install the Plupload
prerequisite, which is used in the deposition interface
for submitting files.
$ make install-ckeditor-plugin ## optional
This will automatically download and install in the proper
place CKeditor, a WYSIWYG JavaScript-based editor (e.g. for
the WebComment module).
Note that in order to enable the editor you have to set the
CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR variable to True.
$ make install-pdfa-helper-files ## optional
This will automatically download and install in the proper
place the helper files needed to create PDF/A files out of
existing PDF files.
$ make install-mediaelement ## optional
This will automatically download and install the MediaElementJS
HTML5 video player that is needed for videos on the DEMO site.
$ make install-solrutils ## optional
This will automatically download and install a Solr instance
which can be used for full-text searching. See CFG_SOLR_URL
variable in invenio.conf. Note that the administrator
must later arrange for an init.d script to start the
Solr instance automatically.
$ make install-js-test-driver ## optional
This will automatically download and install JsTestDriver
which is needed to run JS unit tests. Recommended for developers.
2b. Configuration
-----------------
Once the basic software installation is done, we proceed to
configuring your Invenio system.
$ sudo chown -R www-data:www-data /opt/invenio
For the sake of simplicity, let us assume that your Invenio
installation will run under the `www-data' user process
identity. The above command changes ownership of installed
files to www-data, so that we shall run everything under
this user identity from now on.
For production purposes, you would typically enable Apache
server to read all files from the installation place but to
write only to the `var' subdirectory of your installation
place. You could achieve this by configuring Unix directory
group permissions, for example.
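As a minimal sketch of such group-based permissions (shown here on a
scratch directory so it can be tried safely; in production you would
substitute /opt/invenio for TARGET, use your Apache group as the
owning group, and run the commands via sudo):

```shell
# Sketch: make the whole tree group-readable but group-writable
# only under var/.  TARGET is a throwaway example directory.
TARGET=/tmp/invenio-perms-demo
mkdir -p "$TARGET/var"
chmod -R u=rwX,g=rX,o= "$TARGET"    # whole tree: group read-only
chmod -R g+w "$TARGET/var"          # var/ subtree: group-writable
```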
$ sudo -u www-data emacs /opt/invenio/etc/invenio-local.conf
Customize your Invenio installation. Please read the
'invenio.conf' file located in the same directory that
contains the vanilla default configuration parameters of
your Invenio installation. If you want to customize some of
these parameters, you should create a file named
'invenio-local.conf' in the same directory where
'invenio.conf' lives and you should write there only the
customizations that you want to be different from the
vanilla defaults.
Here is a realistic, minimalist, yet production-ready
example of what you would typically put there:
$ cat /opt/invenio/etc/invenio-local.conf
[Invenio]
CFG_SITE_NAME = John Doe's Document Server
CFG_SITE_NAME_INTL_fr = Serveur des Documents de John Doe
CFG_SITE_URL = http://your.site.com
CFG_SITE_SECURE_URL = https://your.site.com
CFG_SITE_ADMIN_EMAIL = john.doe@your.site.com
CFG_SITE_SUPPORT_EMAIL = john.doe@your.site.com
CFG_WEBALERT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
CFG_WEBCOMMENT_DEFAULT_MODERATOR = john.doe@your.site.com
CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL = john.doe@your.site.com
CFG_BIBCATALOG_SYSTEM_EMAIL_ADDRESS = john.doe@your.site.com
CFG_DATABASE_HOST = localhost
CFG_DATABASE_NAME = invenio
CFG_DATABASE_USER = invenio
CFG_DATABASE_PASS = my123p$ss
CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE = 1
You should override at least the parameters mentioned above
in order to define some very essential runtime parameters
such as the name of your document server (CFG_SITE_NAME and
CFG_SITE_NAME_INTL_*), the visible URL of your document
server (CFG_SITE_URL and CFG_SITE_SECURE_URL), the email
address of the local Invenio administrator, comment
moderator, and alert engine (CFG_SITE_SUPPORT_EMAIL,
CFG_SITE_ADMIN_EMAIL, etc), and last but not least your
database credentials (CFG_DATABASE_*).
If this is a first installation of Invenio, it is
recommended that you set the
CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE variable to 1.
If this is instead an upgrade of an existing
installation, do not add it until you have run:
$ bibdocfile --fix-bibdocfsinfo-cache .
The Invenio system will then read both the default
invenio.conf file and your customized invenio-local.conf
file and it will override any default options with the ones
you have specified in your local file. This cascading of
configuration parameters will ease your future upgrades.
If you want to have multiple Invenio instances for distributed
video encoding, you need to share the same configuration among
them and make some of the folders of the Invenio installation
available for all nodes.
Configure the allowed tasks for every node:
CFG_BIBSCHED_NODE_TASKS = {
"hostname_machine1" : ["bibindex", "bibupload",
"bibreformat","webcoll", "bibtaskex", "bibrank",
"oaiharvest", "oairepositoryupdater", "inveniogc",
"webstatadmin", "bibclassify", "bibexport",
"dbdump", "batchuploader", "bibauthorid", "bibtasklet"],
"hostname_machine2" : ['bibencode',]
}
Share the following directories among Invenio instances:
/var/tmp-shared
hosts video uploads in a temporary form
/var/tmp-shared/bibencode/jobs
hosts new job files for the video encoding daemon
/var/tmp-shared/bibencode/jobs/done
hosts job files that have been processed by the daemon
/var/data/files
hosts fulltext and media files associated with records
/var/data/submit
hosts files created during submissions
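For illustration, one common way to share these directories is NFS;
hypothetical /etc/fstab entries on a worker node might look like this
(the server name `fileserver' and the mount options are examples
only; the remaining directories would be exported similarly):

```
fileserver:/opt/invenio/var/tmp-shared /opt/invenio/var/tmp-shared nfs rw,hard,intr 0 0
fileserver:/opt/invenio/var/data/files /opt/invenio/var/data/files nfs rw,hard,intr 0 0
```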
$ sudo -u www-data /opt/invenio/bin/inveniocfg --update-all
Make the rest of the Invenio system aware of your
invenio-local.conf changes. This step is mandatory each
time you edit your conf files.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-secret-key
You may need to create a secret key for the Flask
application if you have not done so yet while customising
your `invenio-local.conf'. This command will check the
contents of this file and update it with a randomly
generated secret key value.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --update-all
Make the rest of the Invenio system aware of the secret key
change in invenio-local.conf.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-tables
If you are installing Invenio for the first time, you
have to create database tables.
Note that this step checks for potential problems such as
the database connection rights and may ask you to perform
some more administrative steps in case it detects a problem.
Notably, it may ask you to set up database access
permissions, based on your configure values.
If you are installing Invenio for the first time, you
have to create a dedicated database on your MySQL server
that Invenio can use for its purposes. Please contact
your MySQL administrator and ask them to execute the
commands this step proposes.
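The proposed commands are typically of the following kind, shown
here with the database name, user, and password from the example
invenio-local.conf above (your values will differ):

```sql
-- to be executed by the MySQL root user
CREATE DATABASE invenio DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON invenio.*
    TO 'invenio'@'localhost' IDENTIFIED BY 'my123p$ss';
FLUSH PRIVILEGES;
```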
At this point you should have successfully completed the
"make install" process. We continue by setting up the
Apache web server.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-bibfield-conf
Load the configuration file of the BibField module. It will
create the `bibfield_config.py' file. (FIXME: When
BibField becomes an essential part of Invenio, this step
should be automated so that people do not have to run it
manually.)
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-webstat-conf
Load the configuration file of the webstat module. It
will create the database tables used for registering
custom events, such as basket hits.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-apache-conf
Running this command will generate Apache virtual host
configurations matching your installation. You will be
instructed to check the created files (usually located
under /opt/invenio/etc/apache/) and to edit your
httpd.conf to activate the Invenio virtual hosts.
If you are using Debian GNU/Linux ``Lenny'' or later, then
you can do the following to create your SSL certificate and
to activate your Invenio vhosts:
## make SSL certificate:
$ sudo aptitude install ssl-cert
$ sudo mkdir /etc/apache2/ssl
$ sudo /usr/sbin/make-ssl-cert /usr/share/ssl-cert/ssleay.cnf \
/etc/apache2/ssl/apache.pem
## add Invenio web sites:
$ sudo ln -s /opt/invenio/etc/apache/invenio-apache-vhost.conf \
/etc/apache2/sites-available/invenio
$ sudo ln -s /opt/invenio/etc/apache/invenio-apache-vhost-ssl.conf \
/etc/apache2/sites-available/invenio-ssl
## disable Debian's default web site:
$ sudo /usr/sbin/a2dissite default
## enable Invenio web sites:
$ sudo /usr/sbin/a2ensite invenio
$ sudo /usr/sbin/a2ensite invenio-ssl
## enable SSL module:
$ sudo /usr/sbin/a2enmod ssl
## if you are using xsendfile module, enable it too:
$ sudo /usr/sbin/a2enmod xsendfile
If you are using another operating system, you should do the
equivalent, for example edit your system-wide httpd.conf
and add the following Include statements:
Include /opt/invenio/etc/apache/invenio-apache-vhost.conf
Include /opt/invenio/etc/apache/invenio-apache-vhost-ssl.conf
Note that you may need to adapt generated vhost file
snippets to match your concrete operating system specifics.
For example, the generated configuration snippet will
preload Invenio WSGI daemon application upon Apache start up
for faster site response. The generated configuration
assumes that you are using mod_wsgi version 3 or later. If
you are using the legacy mod_wsgi version 2, then you
would need to comment out the WSGIImportScript directive
from the generated snippet, or else move the WSGI daemon
setup to the top level, outside of the VirtualHost section.
Note also that you may want to tweak the generated Apache
vhost snippet for performance reasons, especially with
respect to WSGIDaemonProcess parameters. For example, you
can increase the number of processes from the default value
`processes=5' if you have lots of RAM and if many concurrent
users may access your site in parallel. However, note that
you must use `threads=1' there, because Invenio WSGI daemon
processes are not fully thread safe yet. This may change in
the future.
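As a sketch, a tuned directive in the generated vhost snippet might
then look like this (the process count is illustrative and the exact
set of options in your generated file may differ; threads=1 must be
kept as explained above):

```apache
WSGIDaemonProcess invenio processes=12 threads=1 \
    display-name=%{GROUP} inactivity-timeout=3600
```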
$ sudo /etc/init.d/apache2 restart
Please ask your webserver administrator to restart the
Apache server after the above "httpd.conf" changes.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --check-openoffice
If you plan to support MS Office or Open Document Format
files in your installation, you should check whether
LibreOffice or OpenOffice.org is well integrated with
Invenio by running the above command. You may be asked to
create a temporary directory for converting office files
with special ownership (typically as user nobody) and
permissions. Note that you can do this step later.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --create-demo-site
This step is recommended to test your local Invenio
installation. It should give you our "Atlantis Institute of
Science" demo installation, exactly as you see it at
<http://invenio-demo.cern.ch/>.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --load-demo-records
Optionally, load some demo records to be able to test
indexing and searching of your local Invenio demo
installation.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-unit-tests
Optionally, you can run the unit test suite to verify the
unit behaviour of your local Invenio installation. Note
that this command should be run only after you have
installed the whole system via `make install'.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-regression-tests
Optionally, you can run the full regression test suite to
verify the functional behaviour of your local Invenio
installation. Note that this command requires the demo
site to have been created and the demo records loaded. Note
also that running the regression test suite may alter the
database content with junk data, so that rebuilding the
demo site is strongly recommended afterwards.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --run-web-tests
Optionally, you can run additional automated web tests
running in a real browser. This requires Firefox with
the Selenium IDE extension installed.
<http://en.www.mozilla.com/en/firefox/>
<http://selenium-ide.openqa.org/>
$ sudo -u www-data /opt/invenio/bin/inveniocfg --remove-demo-records
Optionally, remove the demo records loaded in the previous
step, while otherwise keeping the demo collection,
submission, format, and other configurations that you may
reuse and modify for your own production purposes.
$ sudo -u www-data /opt/invenio/bin/inveniocfg --drop-demo-site
Optionally, also drop all the demo configuration so that
you will end up with a completely blank Invenio system.
However, you may find it more practical not to drop the
demo site configuration but to start customizing from
there.
$ firefox http://your.site.com/help/admin/howto-run
In order to start using your Invenio installation, you
can start indexing, formatting and other daemons as
indicated in the "HOWTO Run" guide on the above URL. You
can also use the Admin Area web interfaces to perform
further runtime configurations such as the definition of
data collections, document types, document formats, word
indexes, etc.
$ sudo ln -s /opt/invenio/etc/bash_completion.d/inveniocfg \
/etc/bash_completion.d/inveniocfg
Optionally, if you are using Bash shell completion, then
you may want to create the above symlink in order to
configure completion for the inveniocfg command.
Good luck, and thanks for choosing Invenio.
- Invenio Development Team
<info@invenio-software.org>
<http://invenio-software.org/>
diff --git a/Makefile.am b/Makefile.am
index b57f6af54..0d8303332 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,583 +1,583 @@
## This file is part of Invenio.
## Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
confignicedir = $(sysconfdir)/build
confignice_SCRIPTS=config.nice
SUBDIRS = po config modules
EXTRA_DIST = UNINSTALL THANKS RELEASE-NOTES configure-tests.py config.nice.in \
config.rpath
# current MathJax version and packages
# See also modules/miscutil/lib/htmlutils.py (get_mathjax_header)
MJV = 2.1
MATHJAX = http://invenio-software.org/download/mathjax/MathJax-v$(MJV).zip
# current CKeditor version
CKV = 3.6.6
CKEDITOR = ckeditor_$(CKV).zip
# current MediaElement.js version
MEV = master
MEDIAELEMENT = http://github.com/johndyer/mediaelement/zipball/$(MEV)
#for solrutils
INVENIO_JAVA_PATH = org/invenio_software/solr
solrdirname = apache-solr-3.1.0
solrdir = $(prefix)/lib/$(solrdirname)
solrutils_dir=$(CURDIR)/modules/miscutil/lib/solrutils
CLASSPATH=.:${solrdir}/dist/solrj-lib/commons-io-1.4.jar:${solrdir}/dist/apache-solr-core-*jar:${solrdir}/contrib/jzlib-1.0.7.jar:${solrdir}/dist/apache-solr-solrj-3.1.0.jar:${solrdir}/dist/solrj-lib/slf4j-api-1.5.5.jar:${solrdir}/dist/*:${solrdir}/contrib/basic-lucene-libs/*:${solrdir}/contrib/analysis-extras/lucene-libs/*:${solrdir}/dist/solrj-lib/*
# git-version-get stuff:
BUILT_SOURCES = $(top_srcdir)/.version
$(top_srcdir)/.version:
echo $(VERSION) > $@-t && mv $@-t $@
dist-hook:
echo $(VERSION) > $(distdir)/.tarball-version
# Bootstrap version
BOOTSTRAPV = 2.2.1
# Hogan.js version
HOGANVER = 2.0.0
check-upgrade:
$(PYTHON) $(top_srcdir)/modules/miscutil/lib/inveniocfg_upgrader.py $(top_srcdir) --upgrade-check
check-custom-templates:
$(PYTHON) $(top_srcdir)/modules/webstyle/lib/template.py --check-custom-templates $(top_srcdir)
kwalitee-check:
@$(PYTHON) $(top_srcdir)/modules/miscutil/lib/kwalitee.py --stats $(top_srcdir)
kwalitee-check-errors-only:
@$(PYTHON) $(top_srcdir)/modules/miscutil/lib/kwalitee.py --check-errors $(top_srcdir)
kwalitee-check-variables:
@$(PYTHON) $(top_srcdir)/modules/miscutil/lib/kwalitee.py --check-variables $(top_srcdir)
kwalitee-check-indentation:
@$(PYTHON) $(top_srcdir)/modules/miscutil/lib/kwalitee.py --check-indentation $(top_srcdir)
kwalitee-check-sql-queries:
@$(PYTHON) $(top_srcdir)/modules/miscutil/lib/kwalitee.py --check-sql $(top_srcdir)
etags:
\rm -f $(top_srcdir)/TAGS
(cd $(top_srcdir) && find $(top_srcdir) -name "*.py" -print | xargs etags)
install-data-local:
for d in / /cache /cache/RTdata /log /tmp /tmp-shared /data /run /tmp-shared/bibencode/jobs/done /tmp-shared/bibedit-cache; do \
mkdir -p $(localstatedir)$$d ; \
done
@echo "************************************************************"
@echo "** Invenio software has been successfully installed! **"
@echo "** **"
@echo "** You may proceed to customizing your installation now. **"
@echo "************************************************************"
install-mathjax-plugin:
@echo "***********************************************************"
@echo "** Installing MathJax plugin, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/invenio-mathjax-plugin
mkdir /tmp/invenio-mathjax-plugin
rm -fr ${prefix}/var/www/MathJax
mkdir -p ${prefix}/var/www/MathJax
(cd /tmp/invenio-mathjax-plugin && \
wget '$(MATHJAX)' -O mathjax.zip && \
unzip -q mathjax.zip && cd mathjax-MathJax-* && cp -r * \
${prefix}/var/www/MathJax)
rm -fr /tmp/invenio-mathjax-plugin
@echo "************************************************************"
@echo "** The MathJax plugin was successfully installed. **"
@echo "** Please do not forget to properly set the option **"
@echo "** CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS and **"
@echo "** CFG_WEBSUBMIT_USE_MATHJAX in invenio.conf. **"
@echo "************************************************************"
uninstall-mathjax-plugin:
@rm -rvf ${prefix}/var/www/MathJax
@echo "***********************************************************"
@echo "** The MathJax plugin was successfully uninstalled. **"
@echo "***********************************************************"
install-jscalendar-plugin:
@echo "***********************************************************"
@echo "** Installing jsCalendar plugin, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/invenio-jscalendar-plugin
mkdir /tmp/invenio-jscalendar-plugin
(cd /tmp/invenio-jscalendar-plugin && \
wget 'http://www.dynarch.com/static/jscalendar-1.0.zip' && \
unzip -u jscalendar-1.0.zip && \
mkdir -p ${prefix}/var/www/jsCalendar && \
cp jscalendar-1.0/img.gif ${prefix}/var/www/jsCalendar/jsCalendar.gif && \
cp jscalendar-1.0/calendar.js ${prefix}/var/www/jsCalendar/ && \
cp jscalendar-1.0/calendar-setup.js ${prefix}/var/www/jsCalendar/ && \
cp jscalendar-1.0/lang/calendar-en.js ${prefix}/var/www/jsCalendar/ && \
cp jscalendar-1.0/calendar-blue.css ${prefix}/var/www/jsCalendar/)
rm -fr /tmp/invenio-jscalendar-plugin
@echo "***********************************************************"
@echo "** The jsCalendar plugin was successfully installed. **"
@echo "***********************************************************"
uninstall-jscalendar-plugin:
@rm -rvf ${prefix}/var/www/jsCalendar
@echo "***********************************************************"
@echo "** The jsCalendar plugin was successfully uninstalled. **"
@echo "***********************************************************"
install-js-test-driver:
@echo "*******************************************************"
@echo "** Installing js-test-driver, please wait... **"
@echo "*******************************************************"
mkdir -p $(prefix)/lib/java/js-test-driver && \
cd $(prefix)/lib/java/js-test-driver && \
wget http://invenio-software.org/download/js-test-driver/JsTestDriver-1.3.5.jar -O JsTestDriver.jar
uninstall-js-test-driver:
@rm -rvf ${prefix}/lib/java/js-test-driver
@echo "*********************************************************"
@echo "** The js-test-driver was successfully uninstalled. **"
@echo "*********************************************************"
install-jquery-plugins:
@echo "***********************************************************"
@echo "** Installing various jQuery plugins, please wait... **"
@echo "***********************************************************"
mkdir -p ${prefix}/var/www/js
mkdir -p $(prefix)/var/www/css
(cd ${prefix}/var/www/js && \
wget http://code.jquery.com/jquery-1.7.1.min.js && \
mv jquery-1.7.1.min.js jquery.min.js && \
wget http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/jquery-ui.min.js && \
wget http://invenio-software.org/download/jquery/v1.5/js/jquery.jeditable.mini.js && \
wget https://raw.github.com/malsup/form/master/jquery.form.js --no-check-certificate && \
wget http://jquery-multifile-plugin.googlecode.com/svn/trunk/jquery.MultiFile.pack.js && \
wget -O jquery.tablesorter.zip http://invenio-software.org/download/jquery/jquery.tablesorter.20111208.zip && \
wget http://invenio-software.org/download/jquery/uploadify-v2.1.4.zip -O uploadify.zip && \
wget http://www.datatables.net/download/build/jquery.dataTables.min.js && \
wget http://invenio-software.org/download/jquery/jquery.bookmark.package-1.4.0.zip && \
unzip jquery.tablesorter.zip -d tablesorter && \
rm jquery.tablesorter.zip && \
rm -rf uploadify && \
unzip -u uploadify.zip -d uploadify && \
- wget http://flot.googlecode.com/files/flot-0.6.zip && \
+ wget http://invenio-software.org/download/jquery/flot-0.6.zip && \
wget -O jquery-ui-timepicker-addon.js http://invenio-software.org/download/jquery/jquery-ui-timepicker-addon-1.0.3.js && \
unzip -u flot-0.6.zip && \
mv flot/jquery.flot.selection.min.js flot/jquery.flot.min.js flot/excanvas.min.js ./ && \
rm flot-0.6.zip && rm -r flot && \
mv uploadify/swfobject.js ./ && \
mv uploadify/cancel.png uploadify/uploadify.css uploadify/uploadify.allglyphs.swf uploadify/uploadify.fla uploadify/uploadify.swf ../img/ && \
mv uploadify/jquery.uploadify.v2.1.4.min.js ./jquery.uploadify.min.js && \
rm uploadify.zip && rm -r uploadify && \
wget --no-check-certificate https://github.com/douglascrockford/JSON-js/raw/master/json2.js && \
wget http://invenio-software.org/download/jquery/jquery.hotkeys-0.8.js -O jquery.hotkeys.js && \
wget http://jquery.bassistance.de/treeview/jquery.treeview.zip && \
unzip jquery.treeview.zip -d jquery-treeview && \
rm jquery.treeview.zip && \
wget http://invenio-software.org/download/jquery/v1.5/js/jquery.ajaxPager.js && \
unzip jquery.bookmark.package-1.4.0.zip && \
rm -f jquery.bookmark.ext.* bookmarks-big.png bookmarkBasic.html jquery.bookmark.js jquery.bookmark.pack.js && \
mv bookmarks.png ../img/ && \
mv jquery.bookmark.css ../css/ && \
rm -f jquery.bookmark.package-1.4.0.zip && \
mkdir -p ${prefix}/var/www/img && \
cd ${prefix}/var/www/img && \
wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/base/ && \
wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/smoothness/ && \
wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/ && \
wget --no-check-certificate -O datatables_jquery-ui.css https://github.com/DataTables/DataTables/raw/master/media/css/demo_table_jui.css && \
wget http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/jquery-ui.css && \
wget http://jquery-ui.googlecode.com/svn/tags/1.8.17/demos/images/calendar.gif && \
wget -r -np -nH --cut-dirs=5 -A "png" http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/images/)
@echo "***********************************************************"
@echo "** The jQuery plugins were successfully installed. **"
@echo "***********************************************************"
uninstall-jquery-plugins:
(cd ${prefix}/var/www/js && \
rm -f jquery.min.js && \
rm -f jquery.MultiFile.pack.js && \
rm -f jquery.jeditable.mini.js && \
rm -f jquery.flot.selection.min.js && \
rm -f jquery.flot.min.js && \
rm -f excanvas.min.js && \
rm -f jquery-ui-timepicker-addon.min.js && \
rm -f json2.js && \
rm -f jquery.uploadify.min.js && \
rm -rf tablesorter && \
rm -rf jquery-treeview && \
rm -f jquery.ajaxPager.js && \
rm -f jquery.form.js && \
rm -f jquery.dataTables.min.js && \
rm -f ui.core.js && \
rm -f jquery.bookmark.min.js && \
rm -f jquery.hotkeys.js && \
rm -f jquery.tablesorter.min.js && \
rm -f jquery-ui-1.7.3.custom.min.js && \
rm -f jquery.metadata.js && \
rm -f jquery-latest.js && \
rm -f jquery-ui.min.js)
(cd ${prefix}/var/www/img && \
rm -f cancel.png uploadify.css uploadify.swf uploadify.allglyphs.swf uploadify.fla && \
rm -f datatables_jquery-ui.css \
rm -f bookmarks.png) && \
(cd ${prefix}/var/www/css && \
rm -f jquery.bookmark.css)
@echo "***********************************************************"
@echo "** The jquery plugins were successfully uninstalled. **"
@echo "***********************************************************"
install-ckeditor-plugin:
@echo "***********************************************************"
@echo "** Installing CKeditor plugin, please wait... **"
@echo "***********************************************************"
rm -rf ${prefix}/lib/python/invenio/ckeditor/
rm -rf /tmp/invenio-ckeditor-plugin
mkdir /tmp/invenio-ckeditor-plugin
(cd /tmp/invenio-ckeditor-plugin && \
wget 'http://invenio-software.org/download/ckeditor/$(CKEDITOR)' && \
unzip -u -d ${prefix}/var/www $(CKEDITOR)) && \
find ${prefix}/var/www/ckeditor/ -depth -name '_*' -exec rm -rf {} \; && \
find ${prefix}/var/www/ckeditor/ckeditor* -maxdepth 0 ! -name "ckeditor.js" -exec rm -r {} \; && \
rm -fr /tmp/invenio-ckeditor-plugin
@echo "* Installing Invenio-specific CKeditor config..."
(cd $(top_srcdir)/modules/webstyle/etc && make install)
@echo "***********************************************************"
@echo "** The CKeditor plugin was successfully installed. **"
@echo "** Please do not forget to properly set the option **"
@echo "** CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR in invenio.conf. **"
@echo "***********************************************************"
uninstall-ckeditor-plugin:
@rm -rvf ${prefix}/var/www/ckeditor
@rm -rvf ${prefix}/lib/python/invenio/ckeditor
@echo "***********************************************************"
@echo "** The CKeditor plugin was successfully uninstalled. **"
@echo "***********************************************************"
install-pdfa-helper-files:
@echo "***********************************************************"
@echo "** Installing PDF/A helper files, please wait... **"
@echo "***********************************************************"
wget 'http://invenio-software.org/download/invenio-demo-site-files/ISOCoatedsb.icc' -O ${prefix}/etc/websubmit/file_converter_templates/ISOCoatedsb.icc
@echo "***********************************************************"
@echo "** The PDF/A helper files were successfully installed. **"
@echo "***********************************************************"
install-mediaelement:
@echo "***********************************************************"
@echo "** Installing MediaElement.js, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/mediaelement
mkdir /tmp/mediaelement
wget 'http://github.com/johndyer/mediaelement/zipball/master' -O '/tmp/mediaelement/mediaelement.zip' --no-check-certificate
unzip -u -d '/tmp/mediaelement' '/tmp/mediaelement/mediaelement.zip'
rm -rf ${prefix}/var/www/mediaelement
mkdir ${prefix}/var/www/mediaelement
mv /tmp/mediaelement/johndyer-mediaelement-*/build/* ${prefix}/var/www/mediaelement
rm -rf /tmp/mediaelement
@echo "***********************************************************"
@echo "** MediaElement.js was successfully installed. **"
@echo "***********************************************************"
install-bootstrap:
@echo "***********************************************************"
@echo "** Installing Twitter Bootstrap, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/invenio-bootstrap
mkdir /tmp/invenio-bootstrap
(cd /tmp/invenio-bootstrap && \
wget -O bootstrap.zip 'http://invenio-software.org/download/bootstrap/bootstrap-${BOOTSTRAPV}.zip' && \
unzip -u bootstrap.zip && \
cp bootstrap/css/bootstrap-responsive.css ${prefix}/var/www/css/bootstrap-responsive.css && \
cp bootstrap/css/bootstrap-responsive.min.css ${prefix}/var/www/css/bootstrap-responsive.min.css && \
cp bootstrap/css/bootstrap.css ${prefix}/var/www/css/bootstrap.css && \
cp bootstrap/css/bootstrap.min.css ${prefix}/var/www/css/bootstrap.min.css && \
cp bootstrap/img/glyphicons-halflings-white.png ${prefix}/var/www/img/glyphicons-halflings-white.png && \
cp bootstrap/img/glyphicons-halflings.png ${prefix}/var/www/img/glyphicons-halflings.png && \
cp bootstrap/js/bootstrap.js ${prefix}/var/www/js/bootstrap.js && \
cp bootstrap/js/bootstrap.min.js ${prefix}/var/www/js/bootstrap.min.js && \
rm -fr /tmp/invenio-bootstrap )
@echo "***********************************************************"
@echo "** The Twitter Bootstrap was successfully installed. **"
@echo "***********************************************************"
uninstall-bootstrap:
rm ${prefix}/var/www/css/bootstrap-responsive.css && \
rm ${prefix}/var/www/css/bootstrap-responsive.min.css && \
rm ${prefix}/var/www/css/bootstrap.css && \
rm ${prefix}/var/www/css/bootstrap.min.css && \
rm ${prefix}/var/www/img/glyphicons-halflings-white.png && \
rm ${prefix}/var/www/img/glyphicons-halflings.png && \
rm ${prefix}/var/www/js/bootstrap.js && \
rm ${prefix}/var/www/js/bootstrap.min.js
@echo "***********************************************************"
@echo "** The Twitter Bootstrap was successfully uninstalled. **"
@echo "***********************************************************"
install-hogan-plugin:
@echo "***********************************************************"
@echo "** Installing Hogan.js, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/hogan
mkdir /tmp/hogan
(cd /tmp/hogan && \
wget -O hogan-${HOGANVER}.js 'http://twitter.github.com/hogan.js/builds/${HOGANVER}/hogan-${HOGANVER}.js' && \
cp hogan-${HOGANVER}.js ${prefix}/var/www/js/hogan.js && \
rm -fr /tmp/hogan )
@echo "***********************************************************"
@echo "** Hogan.js was successfully installed. **"
@echo "***********************************************************"
uninstall-hogan-plugin:
rm ${prefix}/var/www/js/hogan.js
@echo "***********************************************************"
@echo "** Hogan.js was successfully uninstalled. **"
@echo "***********************************************************"
install-jquery-tokeninput:
@echo "***********************************************************"
@echo "** Installing JQuery Tokeninput, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/jquery-tokeninput
mkdir /tmp/jquery-tokeninput
(cd /tmp/jquery-tokeninput && \
wget -O jquery-tokeninput-master.zip 'https://github.com/loopj/jquery-tokeninput/archive/master.zip' --no-check-certificate && \
unzip -u jquery-tokeninput-master.zip && \
cp jquery-tokeninput-master/styles/token-input-facebook.css ${prefix}/var/www/css/token-input-facebook.css && \
cp jquery-tokeninput-master/styles/token-input-mac.css ${prefix}/var/www/css/token-input-mac.css && \
cp jquery-tokeninput-master/styles/token-input.css ${prefix}/var/www/css/token-input.css && \
cp jquery-tokeninput-master/src/jquery.tokeninput.js ${prefix}/var/www/js/jquery.tokeninput.js && \
rm -fr /tmp/jquery-tokeninput )
@echo "***********************************************************"
@echo "** The JQuery Tokeninput was successfully installed. **"
@echo "***********************************************************"
uninstall-jquery-tokeninput:
rm ${prefix}/var/www/css/token-input-facebook.css && \
rm ${prefix}/var/www/css/token-input-mac.css && \
rm ${prefix}/var/www/css/token-input.css && \
rm ${prefix}/var/www/js/jquery.tokeninput.js
@echo "***********************************************************"
@echo "** The JQuery Tokeninput was successfully uninstalled. **"
@echo "***********************************************************"
install-plupload-plugin:
@echo "***********************************************************"
@echo "** Installing Plupload plugin, please wait... **"
@echo "***********************************************************"
rm -rf /tmp/plupload-plugin
mkdir /tmp/plupload-plugin
(cd /tmp/plupload-plugin && \
wget -O plupload-plugin.zip 'http://plupload.com/downloads/plupload_1_5_5.zip' && \
unzip -u plupload-plugin.zip && \
mkdir -p ${prefix}/var/www/js/plupload/i18n/ && \
cp -R plupload/js/jquery.plupload.queue ${prefix}/var/www/js/plupload/ && \
cp -R plupload/js/jquery.ui.plupload ${prefix}/var/www/js/plupload/ && \
cp plupload/js/plupload.browserplus.js ${prefix}/var/www/js/plupload/plupload.browserplus.js && \
cp plupload/js/plupload.flash.js ${prefix}/var/www/js/plupload/plupload.flash.js && \
cp plupload/js/plupload.flash.swf ${prefix}/var/www/js/plupload/plupload.flash.swf && \
cp plupload/js/plupload.full.js ${prefix}/var/www/js/plupload/plupload.full.js && \
cp plupload/js/plupload.gears.js ${prefix}/var/www/js/plupload/plupload.gears.js && \
cp plupload/js/plupload.html4.js ${prefix}/var/www/js/plupload/plupload.html4.js && \
cp plupload/js/plupload.html5.js ${prefix}/var/www/js/plupload/plupload.html5.js && \
cp plupload/js/plupload.js ${prefix}/var/www/js/plupload/plupload.js && \
cp plupload/js/plupload.silverlight.js ${prefix}/var/www/js/plupload/plupload.silverlight.js && \
cp plupload/js/plupload.silverlight.xap ${prefix}/var/www/js/plupload/plupload.silverlight.xap && \
cp plupload/js/i18n/*.js ${prefix}/var/www/js/plupload/i18n/ && \
rm -fr /tmp/plupload-plugin )
@echo "***********************************************************"
@echo "** The Plupload plugin was successfully installed. **"
@echo "***********************************************************"
uninstall-plupload-plugin:
rm -rf ${prefix}/var/www/js/plupload
@echo "***********************************************************"
@echo "** The Plupload plugin was successfully uninstalled. **"
@echo "***********************************************************"
uninstall-pdfa-helper-files:
rm -f ${prefix}/etc/websubmit/file_converter_templates/ISOCoatedsb.icc
@echo "***********************************************************"
@echo "** The PDF/A helper files were successfully uninstalled. **"
@echo "***********************************************************"
#Solrutils allows automatic installation, running and searching of an external Solr index.
install-solrutils:
@echo "***********************************************************"
@echo "** Installing Solrutils and solr, please wait... **"
@echo "***********************************************************"
cd $(prefix)/lib && \
if test -d apache-solr*; then echo A solr directory already exists in `pwd` . \
Please remove it manually, if you are sure it is not needed; exit 2; fi ; \
if test -f apache-solr*; then echo solr tarball already exists in `pwd` . \
Please remove it manually.; exit 2; fi ; \
wget http://archive.apache.org/dist/lucene/solr/3.1.0/apache-solr-3.1.0.tgz && \
tar -xzf apache-solr-3.1.0.tgz && \
rm apache-solr-3.1.0.tgz
cd $(solrdir)/contrib/ ;\
wget http://mirrors.ibiblio.org/pub/mirrors/maven2/com/jcraft/jzlib/1.0.7/jzlib-1.0.7.jar && \
cd $(solrdir)/contrib/ ;\
jar -xf ../example/webapps/solr.war WEB-INF/lib/lucene-core-3.1.0.jar ; \
if test -d basic-lucene-libs; then rm -rf basic-lucene-libs; fi ; \
mv WEB-INF/lib/ basic-lucene-libs ; \
cp $(solrutils_dir)/schema.xml $(solrdir)/example/solr/conf/
cp $(solrutils_dir)/solrconfig.xml $(solrdir)/example/solr/conf/
cd $(solrutils_dir) && \
javac -classpath $(CLASSPATH) -d $(solrdir)/contrib @$(solrutils_dir)/java_sources.txt && \
cd $(solrdir)/contrib/ && \
jar -cf invenio-solr.jar org/invenio_software/solr/*class
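# Hypothetical usage sketch (not part of the build): after running
# 'make install-solrutils', the bundled Solr example server can typically be
# started with the standard Jetty launcher shipped in the Solr tarball.
# Paths below assume the default apache-solr-3.1.0 layout unpacked above:
#   cd ${prefix}/lib/apache-solr-3.1.0/example && java -jar start.jar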
update-v0.99.0-tables:
cat $(top_srcdir)/modules/miscutil/sql/tabcreate.sql | grep -v 'INSERT INTO upgrade' | ${prefix}/bin/dbexec
echo "DROP TABLE IF EXISTS oaiREPOSITORY;" | ${prefix}/bin/dbexec
echo "ALTER TABLE bibdoc ADD COLUMN more_info mediumblob NULL default NULL;" | ${prefix}/bin/dbexec
echo "ALTER TABLE schTASK ADD COLUMN priority tinyint(4) NOT NULL default 0;" | ${prefix}/bin/dbexec
echo "ALTER TABLE schTASK ADD KEY priority (priority);" | ${prefix}/bin/dbexec
echo "ALTER TABLE rnkCITATIONDATA DROP PRIMARY KEY;" | ${prefix}/bin/dbexec
echo "ALTER TABLE rnkCITATIONDATA ADD PRIMARY KEY (id);" | ${prefix}/bin/dbexec
echo "ALTER TABLE rnkCITATIONDATA CHANGE id id mediumint(8) unsigned NOT NULL auto_increment;" | ${prefix}/bin/dbexec
echo "ALTER TABLE rnkCITATIONDATA ADD UNIQUE KEY object_name (object_name);" | ${prefix}/bin/dbexec
echo "ALTER TABLE sbmPARAMETERS CHANGE value value text NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE sbmAPPROVAL ADD note text NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE hstDOCUMENT CHANGE docsize docsize bigint(15) unsigned NOT NULL;" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtACTIONHISTORY CHANGE client_host client_host int(10) unsigned default NULL;" | ${prefix}/bin/dbexec
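# Note on the recipe above: each SQL statement is piped to the dbexec helper,
# which runs it against the configured Invenio database. A hypothetical
# sanity check after upgrading, using the same idiom (assumes dbexec is
# installed under ${prefix}/bin):
#   echo "SHOW COLUMNS FROM schTASK LIKE 'priority';" | ${prefix}/bin/dbexec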
update-v0.99.1-tables:
@echo "Nothing to do; table structure did not change between v0.99.1 and v0.99.2."
update-v0.99.2-tables:
@echo "Nothing to do; table structure did not change between v0.99.2 and v0.99.3."
update-v0.99.3-tables:
@echo "Nothing to do; table structure did not change between v0.99.3 and v0.99.4."
update-v0.99.4-tables:
@echo "Nothing to do; table structure did not change between v0.99.4 and v0.99.5."
update-v0.99.5-tables:
@echo "Nothing to do; table structure did not change between v0.99.5 and v0.99.6."
update-v0.99.6-tables:
@echo "Nothing to do; table structure did not change between v0.99.6 and v0.99.7."
update-v0.99.7-tables:
@echo "Nothing to do; table structure did not change between v0.99.7 and v0.99.8."
update-v0.99.8-tables: # from v0.99.8 to v1.0.0-rc0
echo "RENAME TABLE oaiARCHIVE TO oaiREPOSITORY;" | ${prefix}/bin/dbexec
cat $(top_srcdir)/modules/miscutil/sql/tabcreate.sql | grep -v 'INSERT INTO upgrade' | ${prefix}/bin/dbexec
echo "INSERT INTO knwKB (id,name,description,kbtype) SELECT id,name,description,'' FROM fmtKNOWLEDGEBASES;" | ${prefix}/bin/dbexec
echo "INSERT INTO knwKBRVAL (id,m_key,m_value,id_knwKB) SELECT id,m_key,m_value,id_fmtKNOWLEDGEBASES FROM fmtKNOWLEDGEBASEMAPPINGS;" | ${prefix}/bin/dbexec
echo "ALTER TABLE sbmPARAMETERS CHANGE name name varchar(40) NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bibdoc CHANGE docname docname varchar(250) COLLATE utf8_bin NOT NULL default 'file';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bibdoc CHANGE status status text NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bibdoc ADD COLUMN text_extraction_date datetime NOT NULL default '0000-00-00';" | ${prefix}/bin/dbexec
echo "ALTER TABLE collection DROP COLUMN restricted;" | ${prefix}/bin/dbexec
echo "ALTER TABLE schTASK CHANGE host host varchar(255) NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE hstTASK CHANGE host host varchar(255) NOT NULL default '';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bib85x DROP INDEX kv, ADD INDEX kv (value(100));" | ${prefix}/bin/dbexec
echo "UPDATE clsMETHOD SET location='http://invenio-software.org/download/invenio-demo-site-files/HEP.rdf' WHERE name='HEP' AND location='';" | ${prefix}/bin/dbexec
echo "UPDATE clsMETHOD SET location='http://invenio-software.org/download/invenio-demo-site-files/NASA-subjects.rdf' WHERE name='NASA-subjects' AND location='';" | ${prefix}/bin/dbexec
echo "UPDATE accACTION SET name='runoairepository', description='run oairepositoryupdater task' WHERE name='runoaiarchive';" | ${prefix}/bin/dbexec
echo "UPDATE accACTION SET name='cfgoaiharvest', description='configure OAI Harvest' WHERE name='cfgbibharvest';" | ${prefix}/bin/dbexec
echo "ALTER TABLE accARGUMENT CHANGE value value varchar(255);" | ${prefix}/bin/dbexec
echo "UPDATE accACTION SET allowedkeywords='doctype,act,categ' WHERE name='submit';" | ${prefix}/bin/dbexec
echo "INSERT INTO accARGUMENT(keyword,value) VALUES ('categ','*');" | ${prefix}/bin/dbexec
echo "INSERT INTO accROLE_accACTION_accARGUMENT(id_accROLE,id_accACTION,id_accARGUMENT,argumentlistid) SELECT DISTINCT raa.id_accROLE,raa.id_accACTION,accARGUMENT.id,raa.argumentlistid FROM accROLE_accACTION_accARGUMENT as raa JOIN accACTION on id_accACTION=accACTION.id,accARGUMENT WHERE accACTION.name='submit' and accARGUMENT.keyword='categ' and accARGUMENT.value='*';" | ${prefix}/bin/dbexec
echo "UPDATE accACTION SET allowedkeywords='name,with_editor_rights' WHERE name='cfgwebjournal';" | ${prefix}/bin/dbexec
echo "INSERT INTO accARGUMENT(keyword,value) VALUES ('with_editor_rights','yes');" | ${prefix}/bin/dbexec
echo "INSERT INTO accROLE_accACTION_accARGUMENT(id_accROLE,id_accACTION,id_accARGUMENT,argumentlistid) SELECT DISTINCT raa.id_accROLE,raa.id_accACTION,accARGUMENT.id,raa.argumentlistid FROM accROLE_accACTION_accARGUMENT as raa JOIN accACTION on id_accACTION=accACTION.id,accARGUMENT WHERE accACTION.name='cfgwebjournal' and accARGUMENT.keyword='with_editor_rights' and accARGUMENT.value='yes';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskEXTREC CHANGE id id int(15) unsigned NOT NULL auto_increment;" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskEXTREC ADD external_id int(15) NOT NULL default '0';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskEXTREC ADD collection_id int(15) unsigned NOT NULL default '0';" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskEXTREC ADD original_url text;" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD status char(2) NOT NULL default 'ok';" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD KEY status (status);" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Photos_to_Storage','Attach/edit the pictures uploaded with the \"create_photos_manager_interface()\" function');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFIELDDESC VALUES ('Upload_Photos',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a photos upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Photos_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\n\r\nfrom invenio.websubmit_functions.ParamFile import ParamFromFile\r\nfrom invenio.websubmit_functions.Move_Photos_to_Storage import read_param_file, create_photos_manager_interface, get_session_id\r\n\r\n# Retrieve session id\r\ntry:\r\n # User info is defined only in MBI/MPI actions...\r\n session_id = get_session_id(None, uid, user_info) \r\nexcept:\r\n session_id = get_session_id(req, uid, {})\r\n\r\n# Retrieve context\r\nindir = curdir.split(\'/\')[-3]\r\ndoctype = curdir.split(\'/\')[-2]\r\naccess = curdir.split(\'/\')[-1]\r\n\r\n# Get the record ID, if any\r\nsysno = ParamFromFile(\"%s/%s\" % (curdir,\'SN\')).strip()\r\n\r\n\"\"\"\r\nModify below the configuration of the photos manager interface.\r\nNote: \'can_reorder_photos\' parameter is not yet fully taken into consideration\r\n\r\nDocumentation of the function is available by running:\r\necho -e \'from invenio.websubmit_functions.Move_Photos_to_Storage import create_photos_manager_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext += create_photos_manager_interface(sysno, session_id, uid,\r\n doctype, indir, curdir, access,\r\n can_delete_photos=True,\r\n can_reorder_photos=True,\r\n can_upload_photos=True,\r\n editor_width=700,\r\n editor_height=400,\r\n initial_slider_value=100,\r\n max_slider_value=200,\r\n min_slider_value=80)','0000-00-00','0000-00-00',NULL,NULL,0);" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconsize');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFIELDDESC VALUES ('Upload_Files',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a file upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Uploaded_Files_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\nfrom invenio.websubmit_managedocfiles import create_file_upload_interface\r\nfrom invenio.websubmit_functions.Shared_Functions import ParamFromFile\r\n\r\nindir = ParamFromFile(os.path.join(curdir, \'indir\'))\r\ndoctype = ParamFromFile(os.path.join(curdir, \'doctype\'))\r\naccess = ParamFromFile(os.path.join(curdir, \'access\'))\r\ntry:\r\n sysno = int(ParamFromFile(os.path.join(curdir, \'SN\')).strip())\r\nexcept:\r\n sysno = -1\r\nln = ParamFromFile(os.path.join(curdir, \'ln\'))\r\n\r\n\"\"\"\r\nRun the following to get the list of parameters of function \'create_file_upload_interface\':\r\necho -e \'from invenio.websubmit_managedocfiles import create_file_upload_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext = create_file_upload_interface(recid=sysno,\r\n print_outside_form_tag=False,\r\n include_headers=True,\r\n ln=ln,\r\n doctypes_and_desc=[(\'main\',\'Main document\'),\r\n (\'additional\',\'Figure, schema, etc.\')],\r\n can_revise_doctypes=[\'*\'],\r\n can_describe_doctypes=[\'main\'],\r\n can_delete_doctypes=[\'additional\'],\r\n can_rename_doctypes=[\'main\'],\r\n sbm_indir=indir, sbm_doctype=doctype, sbm_access=access)[1]\r\n','0000-00-00','0000-00-00',NULL,NULL,0);" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','forceFileRevision');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Upload_Files_Interface','Display generic interface to add/revise/delete files. To be used before function \"Move_Uploaded_Files_to_Storage\"');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Uploaded_Files_to_Storage','Attach files uploaded with \"Create_Upload_Files_Interface\"')" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','elementNameToDoctype');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createIconDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createRelatedFormats');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','iconsize');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','keepPreviousVersionDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Revised_Files_to_Storage','Revise files initially uploaded with \"Move_Files_to_Storage\"')" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxsize');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','minsize');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','doctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictions');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDeleteDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canReviseDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDescribeDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canCommentDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canKeepDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canAddFormatDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRestrictDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRenameDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canNameNewFiles');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','createRelatedFormats');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','keepDefault');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','showLinks');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','fileLabel');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','filenameLabel');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','descriptionLabel');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','commentLabel');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictionLabel');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','startDoc');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','endDoc');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','defaultFilenameDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxFilesDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','iconsize');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','createIconDoctypes');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','nblength');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_nb_length');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Get_Recid','record_search_pattern');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_FCKeditor_Files_to_Storage','Transfer files attached to the record with the FCKeditor');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_FCKeditor_Files_to_Storage','input_fields');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','layer');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','layer');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','switch_file');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','switch_file');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_restrictions');" | ${prefix}/bin/dbexec
echo "INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_doctypes');" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD round_name varchar(255) NOT NULL default ''" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD restriction varchar(50) NOT NULL default ''" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD in_reply_to_id_cmtRECORDCOMMENT int(15) unsigned NOT NULL default '0'" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD KEY in_reply_to_id_cmtRECORDCOMMENT (in_reply_to_id_cmtRECORDCOMMENT);" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskRECORDCOMMENT ADD in_reply_to_id_bskRECORDCOMMENT int(15) unsigned NOT NULL default '0'" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskRECORDCOMMENT ADD KEY in_reply_to_id_bskRECORDCOMMENT (in_reply_to_id_bskRECORDCOMMENT);" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD reply_order_cached_data blob NULL default NULL;" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskRECORDCOMMENT ADD reply_order_cached_data blob NULL default NULL;" | ${prefix}/bin/dbexec
echo "ALTER TABLE cmtRECORDCOMMENT ADD INDEX (reply_order_cached_data(40));" | ${prefix}/bin/dbexec
echo "ALTER TABLE bskRECORDCOMMENT ADD INDEX (reply_order_cached_data(40));" | ${prefix}/bin/dbexec
echo -e 'from invenio.webcommentadminlib import migrate_comments_populate_threads_index;\
migrate_comments_populate_threads_index()' | $(PYTHON)
echo -e 'from invenio.access_control_firerole import repair_role_definitions;\
repair_role_definitions()' | $(PYTHON)
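# The two recipes above illustrate the pattern used here for one-off Python
# migrations: a snippet is piped into the interpreter on stdin. A minimal
# hypothetical example of the same idiom (function name assumed from the
# Invenio dbquery module):
#   echo -e 'from invenio.dbquery import run_sql;\
#   print run_sql("SELECT 1")' | $(PYTHON)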
CLEANFILES = *~ *.pyc *.tmp
diff --git a/THANKS b/THANKS
index 83b410ab1..12ace9bcc 100644
--- a/THANKS
+++ b/THANKS
@@ -1,222 +1,225 @@
Invenio THANKS
==============
Several people besides the core Invenio Development Team
contributed to the project:
- Thierry Thomas <thierry@FreeBSD.org>
Patches for compiling old CDSware 0.3.x sources on FreeBSD.
- Guido Pelzer <guido.pelzer@web.de>
Contributions to the German translation. German stopword list.
- Valerio Gracco <valerio.gracco@cern.ch>
Contributions to the Italian translation.
- Tullio Basaglia <tullio.basaglia@cern.ch>
Contributions to the Italian translation.
- Flavio C. Coelho <fccoelho@fiocruz.br>
Contributions to the Portuguese translation.
- Lyuba Vasilevskaya <lyubov.vassilevskaya@cern.ch>
Contributions to the Russian translation.
- Maria Gomez Marti <maria.gomez.marti@cern.ch>
Contributions to the Spanish translation.
- Magaly Bascones Dominguez <magaly.bascones.dominguez@cern.ch>
Contributions to the Spanish translation.
- Urban Andersson <urban.andersson@hb.se>
Contributions to the Swedish translation.
- Eric Grand <eric.grand@rero.ch>
Contributions to the French translation.
- Theodoropoulos Theodoros <theod@lib.auth.gr>
Contributions to the Greek translation, Greek stopword list,
XML RefWorks output format, Google Analytics documentation update,
BibConvert target/source CLI flag description fix.
- Vasyl Ostrovskyi <vo@imath.kiev.ua>
Contributions to the Ukrainian translation.
- Ferran Jorba <Ferran.Jorba@uab.cat>
Contributions to the Catalan and Spanish translations. Cleanup of
the old PHP-based BibFormat Admin Guide. Several minor patches.
- Beatriu Piera <Beatriz.Piera@uab.es>
Translation of the Search Guide into Catalan and Spanish.
- Anonymous contributor (name withheld by request)
Contributions to the Japanese translation.
- Anonymous contributor (name withheld by request)
Contributions to the Spanish translation.
- Alen Vodopijevec <alen@irb.hr>
Contributions to the Croatian translation.
- Jasna Marković <jmarkov@irb.hr>
Contributions to the Croatian translation.
- Kam-ming Ku <kmku@hkusua.hku.hk>
Contributions to the Chinese translations (zh_CN, zh_TW).
- Benedikt Koeppel <be.public@gmail.com>
Contributions to the German translation.
- Toru Tsuboyama <toru.tsuboyama@kek.jp>
Contributions to the Japanese translation.
- Mike Marino <mmarino@gmail.com>
Several minor patches and suggestions.
- Zbigniew Szklarz <zszklarz@student.agh.edu.pl>
Contributions to the Polish translation.
- Iaroslav Gaponenko <adrahil@gmail.com>
Contributions to the Russian translation.
- Yana Osborne <ianna.osborne@cern.ch>
Contributions to the Russian translation.
- Zbigniew Leonowicz <leonowicz@ieee.org>
Contributions to the Polish translation.
- Makiko Matsumoto <maki.matsumoto@gmail.com> and Takao Ishigaki
Contributions to the Japanese translation.
- Eva Papp <Eva.Papp@cern.ch>
Contributions to the Hungarian translation.
- Nino Jejelava <nino.jejelava@gmail.com>
Contributions to the Georgian translation.
- Cristian Bacchi <cristian.bacchi@gmail.com>
Improvements to the browse interface.
- Genis Musulmanbekov <genis@jinr.ru>
Contributions to the Russian translation.
- Andrey Tremba <metandrey@gmail.com>
Contributions to the Russian translation.
- Cornelia Plott <c.plott@fz-juelich.de>
Contributions to the German translation.
- Johnny Mariéthoz <johnny.mariethoz@rero.ch>
Patch to improve BibRecDocs argument checking.
- Alexander Wagner <a.wagner@fz-juelich.de>
Contributions to the German translation.
- Miguel Martín <miguelm@unizar.es>
Patch to fix traceback in get_collection_reclist() occurring for
misspelled collection names in access control rules.
- Stefan Hesselbach <s.hesselbach@gsi.de>
Patch for OAI harvesting via HTTP proxy.
- Bessem Amira <bessem.amira@cnudst.rnrt.tn>
Contributions to the Arabic translation.
- Kevin M. Flannery <flannery@fnal.gov>
Patch for WebSubmit's Move_Photos_to_Storage.
- Thomas McCauley <thomas.mccauley@cern.ch>
Improvements to Invenio Connector.
-The Flask SSLify plugin was adapted from the original version developed
-by Kenneth Reitz <_@kennethreitz.com>.
-<https://github.com/kennethreitz/flask-sslify>
+ - Mehdi Zahedi <mehdizahedin@gmail.com>
+ Contributions to the Persian (Farsi) translation.
The URL handler was inspired by the Quixote Web Framework which is
``Copyright (c) 2004 Corporation for National Research Initiatives;
All Rights Reserved''.
<http://www.quixote.ca/>
The session handler was adapted from the mod_python session implementation.
<http://www.modpython.org/>
Javascript Quicktags scripts from Alex King are used to provide
additional capabilities for editing BibFormat templates through
the web admin interface.
<http://www.alexking.org>
The indexer engine uses the Martin Porter Stemming Algorithm and
Vivake Gupta's free Python implementation of it.
<http://tartarus.org/~martin/PorterStemmer/>
The CSS style for the rounded-corners box used in detailed record pages
was adapted from Francky Lleyneman's liquidcorners CSS.
<http://home.tiscali.nl/developerscorner/liquidcorners/liquidcorners.htm>
The NASA_Subjects.rdf file has been retrieved from the American National
Aeronautics and Space Administration (NASA), which kindly provides it for
free re-use.
<http://nasataxonomy.jpl.nasa.gov/fordevelopers/>
The tiger test picture used in the automated demo picture submission was
converted from Ghostscript's 'tiger.eps'.
<http://www.gnu.org/software/ghostscript/>
Some icon images were taken from (i) the Silk icon set, (ii) the
Function icon set, (iii) the activity indicator icon, and (iv) the
Open Icon Library.
<http://www.famfamfam.com/lab/icons/silk/>
<http://wefunction.com/2008/07/function-free-icon-set/>
<http://www.badeziner.com/2008/05/04/120-free-ajax-activity-indicator-gif-icons/>
<http://openiconlibrary.sourceforge.net/gallery2/>
The unoconv.py script has been adapted from UNOCONV by Dag Wieers.
<http://dag.wieers.com/home-made/unoconv/>
PDFA_def.ps has been adapted from the GPL distribution of GhostScript.
<http://ghostscript.com/>
The ISOCoatedsb.icc ICC profile has been retrieved from the European Color
Initiative.
<http://www.eci.org/>
The PEP8 conformance checking script (pep8.py) was written by
Johann C. Rocholl <johann@browsershots.org>.
The pep8.py version included with Invenio was downloaded from
<http://svn.browsershots.org/trunk/devtools/pep8/pep8.py> on
2009-06-14.
The logicutils library was originally authored by
Peter Norvig <peter.norvig@google.com> and was taken from
<http://code.google.com/p/aima-python/> on 2010-05-20 and later expanded
to fit Invenio's needs.
The git-version-gen script was taken from gnulib 20100704+stable-1.
<http://www.gnu.org/software/gnulib/>
The LaTeX-to-Unicode translation table was compiled from:
FX, <http://stackoverflow.com/questions/4578912/replace-all-accented-characters-by-their-latex-equivalent>
Lea Wiemann <LeWiemann@gmail.com>, <http://docutils.sourceforge.net/docutils/writers/newlatex2e/unicode_map.py>
The scientificchar plugin for the CKEditor was adapted from the
specialchar plugin by Frederico Knabben.
<http://ckeditor.com/>
The oai2.xsl.v1.0 OAI to HTML XSLT Style Sheet was taken from EPrints.
<http://www.eprints.org/software/xslt.php>
The xmlDict.py code is authored by Duncan McGreggor and is licensed
under the PSF license. It was taken from
<http://code.activestate.com/recipes/410469/>.
The original dateutils.strftime function was taken from
<https://github.com/django/django/blob/stable/1.4.x/django/utils/datetime_safe.py>
Author: Russell Keith-Magee <russell@keith-magee.com>
Python 2.4 backport of defaultdict was taken from NLTK
<http://code.google.com/p/nltk/>. collections.defaultdict originally
contributed by Yoav Goldberg <yoav.goldberg@gmail.com>, new version by
Jason Kirtland from Python cookbook.
<http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/523034>
+The Flask SSLify plugin was adapted from the original version developed
+by Kenneth Reitz <_@kennethreitz.com>.
+<https://github.com/kennethreitz/flask-sslify>
+
- end of file -
diff --git a/config/invenio-autotools.conf.in b/config/invenio-autotools.conf.in
index ee06c5ad0..ebae26919 100644
--- a/config/invenio-autotools.conf.in
+++ b/config/invenio-autotools.conf.in
@@ -1,91 +1,91 @@
## This file is part of Invenio.
## Copyright (C) 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
## DO NOT EDIT THIS FILE.
## YOU SHOULD NOT EDIT THESE VALUES. THEY WERE AUTOMATICALLY
## CALCULATED BY AUTOTOOLS DURING THE "CONFIGURE" STAGE.
[Invenio]
## Invenio version:
CFG_VERSION = @VERSION@
## directories detected from 'configure --prefix ...' parameters:
CFG_PREFIX = @prefix@
CFG_BINDIR = @prefix@/bin
CFG_PYLIBDIR = @prefix@/lib/python
CFG_LOGDIR = @localstatedir@/log
CFG_ETCDIR = @prefix@/etc
CFG_LOCALEDIR = @prefix@/share/locale
CFG_TMPDIR = @localstatedir@/tmp
CFG_TMPSHAREDDIR = @localstatedir@/tmp-shared
CFG_CACHEDIR = @localstatedir@/cache
CFG_WEBDIR = @localstatedir@/www
## path to interesting programs:
CFG_PATH_MYSQL = @MYSQL@
CFG_PATH_PHP = @PHP@
CFG_PATH_GZIP = @GZIP@
CFG_PATH_GUNZIP = @GUNZIP@
CFG_PATH_TAR = @TAR@
CFG_PATH_GFILE = @FILE@
CFG_PATH_CONVERT = @CONVERT@
CFG_PATH_PDFTOTEXT = @PDFTOTEXT@
CFG_PATH_PDFTK = @PDFTK@
CFG_PATH_PDFTOPS = @PDFTOPS@
CFG_PATH_PDF2PS = @PDF2PS@
CFG_PATH_PDFINFO = @PDFINFO@
CFG_PATH_PDFTOPPM = @PDFTOPPM@
CFG_PATH_PAMFILE = @PAMFILE@
CFG_PATH_GS = @GS@
CFG_PATH_PS2PDF = @PS2PDF@
CFG_PATH_PDFLATEX = @PDFLATEX@
CFG_PATH_PDFOPT = @PDFOPT@
CFG_PATH_PSTOTEXT = @PSTOTEXT@
CFG_PATH_PSTOASCII = @PSTOASCII@
CFG_PATH_ANY2DJVU = @ANY2DJVU@
CFG_PATH_DJVUPS = @DJVUPS@
CFG_PATH_DJVUTXT = @DJVUTXT@
CFG_PATH_TIFF2PDF = @TIFF2PDF@
CFG_PATH_OCROSCRIPT = @OCROSCRIPT@
CFG_PATH_OPENOFFICE_PYTHON = @OPENOFFICE_PYTHON@
CFG_PATH_WGET = @WGET@
CFG_PATH_MD5SUM = @MD5SUM@
CFG_PATH_FFMPEG = @FFMPEG@
CFG_PATH_FFPROBE = @FFPROBE@
CFG_PATH_MEDIAINFO = @MEDIAINFO@
-## CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE -- path to the stopwords file. You
-## probably don't want to change this path, although you may want to
+## CFG_BIBRANK_PATH_TO_STOPWORDS_FILE -- path to the default stopwords file.
+## You probably don't want to change this path, although you may want to
## change the content of that file. Note that the file is used by the
## rank engine internally, so it should be given even if stopword
## removal in the indexes is not used.
-CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE = @prefix@/etc/bibrank/stopwords.kb
+CFG_BIBRANK_PATH_TO_STOPWORDS_FILE = @prefix@/etc/bibrank/stopwords.kb
## helper style of variables for WebSubmit:
CFG_WEBSUBMIT_COUNTERSDIR = @localstatedir@/data/submit/counters
CFG_WEBSUBMIT_STORAGEDIR = @localstatedir@/data/submit/storage
CFG_WEBSUBMIT_BIBCONVERTCONFIGDIR = @prefix@/etc/bibconvert/config
## helper style of variables for BibDocFile:
CFG_BIBDOCFILE_FILEDIR = @localstatedir@/data/files
## helper style of variables for WebDeposit
CFG_WEBDEPOSIT_UPLOAD_FOLDER = @localstatedir@/tmp/webdeposit_uploads
## - end of file -
diff --git a/config/invenio.conf b/config/invenio.conf
index e92997440..24d6d771a 100644
--- a/config/invenio.conf
+++ b/config/invenio.conf
@@ -1,2378 +1,2369 @@
## This file is part of Invenio.
## Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
###################################################
## About 'invenio.conf' and 'invenio-local.conf' ##
###################################################
## The 'invenio.conf' file contains the vanilla default configuration
## parameters of an Invenio installation, as coming out of the
## distribution. The file should be self-explanatory. Once installed
## in its usual location (usually /opt/invenio/etc), you could in
## principle go ahead and change the values according to your local
## needs, but this is not advised.
##
## If you would like to customize some of these parameters, you should
## rather create a file named 'invenio-local.conf' in the same
## directory where 'invenio.conf' lives and you should write there
## only the customizations that you want to be different from the
## vanilla defaults.
##
## Here is a realistic, minimalist, yet production-ready example of
## what you would typically put there:
##
## $ cat /opt/invenio/etc/invenio-local.conf
## [Invenio]
## CFG_SITE_NAME = John Doe's Document Server
## CFG_SITE_NAME_INTL_fr = Serveur des Documents de John Doe
## CFG_SITE_URL = http://your.site.com
## CFG_SITE_SECURE_URL = https://your.site.com
## CFG_SITE_ADMIN_EMAIL = john.doe@your.site.com
## CFG_SITE_SUPPORT_EMAIL = john.doe@your.site.com
## CFG_WEBALERT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
## CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
## CFG_WEBCOMMENT_DEFAULT_MODERATOR = john.doe@your.site.com
## CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL = john.doe@your.site.com
## CFG_BIBCATALOG_SYSTEM_EMAIL_ADDRESS = john.doe@your.site.com
## CFG_DATABASE_HOST = localhost
## CFG_DATABASE_NAME = invenio
## CFG_DATABASE_USER = invenio
## CFG_DATABASE_PASS = my123p$ss
##
## You should override at least the parameters mentioned above and the
## parameters mentioned in the `Part 1: Essential parameters' below in
## order to define some very essential runtime parameters such as the
## name of your document server (CFG_SITE_NAME and
## CFG_SITE_NAME_INTL_*), the visible URL of your document server
## (CFG_SITE_URL and CFG_SITE_SECURE_URL), the email address of the
## local Invenio administrator, comment moderator, and alert engine
## (CFG_SITE_SUPPORT_EMAIL, CFG_SITE_ADMIN_EMAIL, etc), and last but
## not least your database credentials (CFG_DATABASE_*).
##
## The Invenio system will then read both the default invenio.conf
## file and your customized invenio-local.conf file and it will
## override any default options with the ones you have specified in
## your local file. This cascading of configuration parameters will
## ease your future upgrades.
[Invenio]
###################################
## Part 1: Essential parameters ##
###################################
## This part defines essential Invenio internal parameters that
## everybody should override, like the name of the server or the email
## address of the local Invenio administrator.
## CFG_DATABASE_* - specify which MySQL server to use, the name of the
## database to use, and the database access credentials.
CFG_DATABASE_TYPE = mysql
CFG_DATABASE_HOST = localhost
CFG_DATABASE_PORT = 3306
CFG_DATABASE_NAME = invenio
CFG_DATABASE_USER = invenio
CFG_DATABASE_PASS = my123p$ss
## CFG_DATABASE_SLAVE - if you use DB replication, then specify the DB
## slave address credentials. (Assuming the same access rights to the
## DB slave as to the DB master.) If you don't use DB replication,
## then leave this option blank.
CFG_DATABASE_SLAVE =
## CFG_SITE_URL - specify URL under which your installation will be
## visible. For example, use "http://your.site.com". Do not leave
## trailing slash.
CFG_SITE_URL = http://localhost
## CFG_SITE_SECURE_URL - specify secure URL under which your
## installation secure pages such as login or registration will be
## visible. For example, use "https://your.site.com". Do not leave
## trailing slash. If you don't plan on using HTTPS, then you may
## leave this empty.
CFG_SITE_SECURE_URL = https://localhost
## CFG_SITE_NAME -- the visible name of your Invenio installation.
CFG_SITE_NAME = Atlantis Institute of Fictive Science
## CFG_SITE_NAME_INTL -- the international versions of CFG_SITE_NAME
## in various languages. (See also CFG_SITE_LANGS below.)
CFG_SITE_NAME_INTL_en = Atlantis Institute of Fictive Science
CFG_SITE_NAME_INTL_fr = Atlantis Institut des Sciences Fictives
CFG_SITE_NAME_INTL_de = Atlantis Institut der fiktiven Wissenschaft
-CFG_SITE_NAME_INTL_es = Atlantis Instituto de la Ciencia Fictive
+CFG_SITE_NAME_INTL_es = Instituto de Ciencia Ficticia Atlantis
CFG_SITE_NAME_INTL_ca = Institut Atlantis de Ciència Fictícia
CFG_SITE_NAME_INTL_pt = Instituto Atlantis de Ciência Fictícia
CFG_SITE_NAME_INTL_it = Atlantis Istituto di Scienza Fittizia
CFG_SITE_NAME_INTL_ru = Институт Фиктивных Наук Атлантиды
CFG_SITE_NAME_INTL_sk = Atlantis Inštitút Fiktívnych Vied
CFG_SITE_NAME_INTL_cs = Atlantis Institut Fiktivních Věd
CFG_SITE_NAME_INTL_no = Atlantis Institutt for Fiktiv Vitenskap
CFG_SITE_NAME_INTL_sv = Atlantis Institut för Fiktiv Vetenskap
CFG_SITE_NAME_INTL_el = Ινστιτούτο Φανταστικών Επιστημών Ατλαντίδος
CFG_SITE_NAME_INTL_uk = Інститут вигаданих наук в Атлантісі
CFG_SITE_NAME_INTL_ja = Fictive 科学のAtlantis の協会
CFG_SITE_NAME_INTL_pl = Instytut Fikcyjnej Nauki Atlantis
CFG_SITE_NAME_INTL_bg = Институт за фиктивни науки Атлантис
CFG_SITE_NAME_INTL_hr = Institut Fiktivnih Znanosti Atlantis
CFG_SITE_NAME_INTL_zh_CN = 阿特兰提斯虚拟科学学院
CFG_SITE_NAME_INTL_zh_TW = 阿特蘭提斯虛擬科學學院
CFG_SITE_NAME_INTL_hu = Kitalált Tudományok Atlantiszi Intézete
CFG_SITE_NAME_INTL_af = Atlantis Instituut van Fiktiewe Wetenskap
CFG_SITE_NAME_INTL_gl = Instituto Atlantis de Ciencia Fictive
CFG_SITE_NAME_INTL_ro = Institutul Atlantis al Ştiinţelor Fictive
CFG_SITE_NAME_INTL_rw = Atlantis Ishuri Rikuru Ry'ubuhanga
CFG_SITE_NAME_INTL_ka = ატლანტიდის ფიქტიური მეცნიერების ინსტიტუტი
CFG_SITE_NAME_INTL_lt = Fiktyvių Mokslų Institutas Atlantis
CFG_SITE_NAME_INTL_ar = معهد أطلنطيس للعلوم الافتراضية
+CFG_SITE_NAME_INTL_fa = موسسه علوم تخیلی آتلانتیس
## CFG_SITE_LANG -- the default language of the interface.
CFG_SITE_LANG = en
## CFG_SITE_LANGS -- list of all languages the user interface should
## be available in, separated by commas. The order specified below
## will be respected on the interface pages. A good default would be
## to use the alphabetical order. Currently supported languages
-## include Afrikaans, Arabic, Bulgarian, Catalan, Czech, German, Georgian,
-## Greek, English, Spanish, French, Croatian, Hungarian, Galician,
-## Italian, Japanese, Kinyarwanda, Lithuanian, Norwegian, Polish,
-## Portuguese, Romanian, Russian, Slovak, Swedish, Ukrainian, Chinese
-## (China), Chinese (Taiwan), so that the eventual maximum you can
-## currently select is
-## "af,ar,bg,ca,cs,de,el,en,es,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW".
-CFG_SITE_LANGS = af,ar,bg,ca,cs,de,el,en,es,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW
+## include Afrikaans, Arabic, Bulgarian, Catalan, Czech, German,
+## Georgian, Greek, English, Spanish, Persian (Farsi), French,
+## Croatian, Hungarian, Galician, Italian, Japanese, Kinyarwanda,
+## Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian,
+## Slovak, Swedish, Ukrainian, Chinese (China), Chinese (Taiwan), so
+## that the eventual maximum you can currently select is
+## "af,ar,bg,ca,cs,de,el,en,es,fa,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW".
+CFG_SITE_LANGS = af,ar,bg,ca,cs,de,el,en,es,fa,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW
## CFG_EMAIL_BACKEND -- the backend to use for sending emails. Defaults to
## 'flask.ext.email.backends.smtp.Mail' if CFG_MISCUTIL_SMTP_HOST and
## CFG_MISCUTIL_SMTP_PORT are set. Possible values are:
## - flask.ext.email.backends.console.Mail
## - flask.ext.email.backends.dummy.Mail
## - flask.ext.email.backends.filebased.Mail
## - flask.ext.email.backends.locmem.Mail
## - flask.ext.email.backends.smtp.Mail
## - invenio.mailutils_backend_adminonly.ConsoleMail
## - invenio.mailutils_backend_adminonly.SMTPMail
## * sends email only to the CFG_SITE_ADMIN_EMAIL address using SMTP
CFG_EMAIL_BACKEND = flask.ext.email.backends.smtp.Mail
## CFG_SITE_SUPPORT_EMAIL -- the email address of the support team for
## this installation:
CFG_SITE_SUPPORT_EMAIL = info@invenio-software.org
## CFG_SITE_ADMIN_EMAIL -- the email address of the 'superuser' for
## this installation. Enter your email address below and login with
## this address when using Invenio administration modules. You
## will then be automatically recognized as superuser of the system.
CFG_SITE_ADMIN_EMAIL = info@invenio-software.org
## CFG_SITE_EMERGENCY_EMAIL_ADDRESSES -- list of email addresses to
## which an email should be sent in case of emergency (e.g. bibsched
## queue has been stopped because of an error). Configuration
## dictionary allows for different recipients based on weekday and
## time-of-day. Example:
##
## CFG_SITE_EMERGENCY_EMAIL_ADDRESSES = {
## 'Sunday 22:00-06:00': '0041761111111@email2sms.foo.com',
## '06:00-18:00': 'team-in-europe@foo.com,0041762222222@email2sms.foo.com',
## '18:00-06:00': 'team-in-usa@foo.com',
## '*': 'john.doe.phone@foo.com'}
##
## If you want the emergency email notifications to always go to the
## same address, just use the wildcard line in the above example.
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES = {}
## CFG_SITE_ADMIN_EMAIL_EXCEPTIONS -- set this to 0 if you do not want
## to receive any captured exception via email to CFG_SITE_ADMIN_EMAIL
## address. Captured exceptions will still be available in
## var/log/invenio.err file. Set this to 1 if you want to receive
## some of the captured exceptions (this depends on the actual place
## where the exception is captured). Set this to 2 if you want to
## receive all captured exceptions.
CFG_SITE_ADMIN_EMAIL_EXCEPTIONS = 1
## CFG_SITE_RECORD -- what is the URI part representing detailed
## record pages? We recommend leaving the default value `record'
## unchanged.
CFG_SITE_RECORD = record
## CFG_SITE_SECRET_KEY --- which secret key should we use? This should be set
## to a random value per Invenio installation and must be kept secret, as it is
## used to protect against e.g. cross-site request forgery and is the
## basis of other security measures in Invenio. A random value can be generated
## using the following command:
## python -c "import os;import re;print re.escape(os.urandom(24).__repr__()[1:-1])"
CFG_SITE_SECRET_KEY =
## CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER -- set this to
## the number of seconds after which to reset the exception notification
## counter. A given repetitive exception is notified via email with a
## logarithmic strategy: the first time it is seen it is sent via email,
## then the second time, then the fourth, then the eighth and so forth.
## If the number of seconds elapsed since the last time it was notified
## is greater than CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER
## then the internal counter is reset in order not to have exception
## notification become more and more rare.
CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER = 14400
## CFG_CERN_SITE -- do we want to enable CERN-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_CERN_SITE = 0
## CFG_INSPIRE_SITE -- do we want to enable INSPIRE-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_INSPIRE_SITE = 0
## CFG_ADS_SITE -- do we want to enable ADS-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_ADS_SITE = 0
## CFG_OPENAIRE_SITE -- do we want to enable OpenAIRE-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_OPENAIRE_SITE = 0
## CFG_FLASK_CACHE_TYPE -- do we want to enable any cache engine?
## 'null', 'redis' or your own e.g. 'invenio.cache.my_cache_engine'
## NOTE: If you disable the cache engine, it WILL affect some
## functionality such as 'search facets'.
CFG_FLASK_CACHE_TYPE = null
## CFG_FLASK_DISABLED_BLUEPRINTS -- do we want to prevent certain blueprints from
## being loaded?
CFG_FLASK_DISABLED_BLUEPRINTS =
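## E.g., assuming blueprint names are given as a comma-separated list
## like the other list-valued variables in this file (the blueprint
## names below are purely illustrative):
## CFG_FLASK_DISABLED_BLUEPRINTS = webmessage,webcomment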
## CFG_FLASK_SERVE_STATIC_FILES -- do we want Flask to serve static files?
## Normally Apache serves static files, but during development and if you are
## using the Werkzeug standalone development server, you can set this flag to
## "1", to enable static file serving.
CFG_FLASK_SERVE_STATIC_FILES = 0
## Now you can tune whether to integrate with external authentication providers
## through the OpenID and OAuth protocols.
## The following variables let you fine-tune which authentication
## providers you want to authorize. You can override here most of
## the variables in lib/invenio/access_control_config.py.
## In particular you can put in these variables the consumer_key and
## consumer_secret of the desired services.
## Note: some providers don't supply an email address.
## If you choose them, the users will be registered with a temporary email address.
## CFG_OPENID_PROVIDERS -- Comma-separated list of providers you want to enable
## through the OpenID protocol.
## E.g.: CFG_OPENID_PROVIDERS = google,yahoo,aol,wordpress,myvidoop,openid,verisign,myopenid,myspace,livejournal,blogger
CFG_OPENID_PROVIDERS =
## CFG_OAUTH1_PROVIDERS -- Comma-separated list of providers you want to enable
## through the OAuth1 protocol.
## Note: OAuth1 is in general deprecated in favour of OAuth2.
## E.g.: CFG_OAUTH1_PROVIDERS = twitter,linkedin,flickr
CFG_OAUTH1_PROVIDERS =
## CFG_OAUTH2_PROVIDERS -- Comma-separated list of providers you want to enable
## through the OAuth2 protocol.
## Note: if you enable the "orcid" provider the full profile of the user
## in Orcid will be imported.
## E.g.: CFG_OAUTH2_PROVIDERS = facebook,yammer,foursquare,googleoauth2,instagram,orcid
CFG_OAUTH2_PROVIDERS =
## CFG_OPENID_CONFIGURATIONS -- Mapping of special parameters to configure the
## desired OpenID providers. Use this variable to override out-of-the-box
## parameters already set in lib/python/invenio/access_control_config.py.
## E.g.: CFG_OPENID_CONFIGURATIONS = {'google': {
## 'identifier': 'https://www.google.com/accounts/o8/id',
## 'trust_email': True}}
CFG_OPENID_CONFIGURATIONS = {}
## CFG_OAUTH1_CONFIGURATIONS -- Mapping of special parameters to configure the
## desired OAuth1 providers. Use this variable to override out-of-the-box
## parameters already set in lib/python/invenio/access_control_config.py.
## E.g.: CFG_OAUTH1_CONFIGURATIONS = {'linkedin': {
## 'consumer_key' : 'MY_LINKEDIN_CONSUMER_KEY',
## 'consumer_secret' : 'MY_LINKEDIN_CONSUMER_SECRET'}}
CFG_OAUTH1_CONFIGURATIONS = {}
## CFG_OAUTH2_CONFIGURATIONS -- Mapping of special parameters to configure the
## desired OAuth2 providers. Use this variable to override out-of-the-box
## parameters already set in lib/python/invenio/access_control_config.py.
## E.g.: CFG_OAUTH2_CONFIGURATIONS = {'orcid': {
## 'consumer_key' : 'MY_ORCID_CONSUMER_KEY',
## 'consumer_secret' : 'MY_ORCID_CONSUMER_SECRET'}}
CFG_OAUTH2_CONFIGURATIONS = {}
################################
## Part 2: Web page style ##
################################
## The variables affecting the page style. The most important one is
## the 'template skin' you would like to use and the obfuscation mode
## for your email addresses. Please refer to the WebStyle Admin Guide
## for more explanation. The other variables are listed here mostly
## for backwards compatibility purposes only.
## CFG_WEBSTYLE_TEMPLATE_SKIN -- what template skin do you want to
## use?
CFG_WEBSTYLE_TEMPLATE_SKIN = default
## CFG_WEBSTYLE_EMAIL_ADDRESSES_OBFUSCATION_MODE. How do we "protect"
## email addresses from undesired automated email harvesters? This
## setting will not affect 'support' and 'admin' emails.
## NOTE: there is no ultimate solution to protect against email
## harvesting. All have drawbacks and can more or less be
## circumvented. Choose your preferred mode ([t] means "transparent"
## for the user):
## -1: hide all emails.
## [t] 0 : no protection, email returned as is.
## foo@example.com => foo@example.com
## 1 : basic email munging: replaces @ by [at] and . by [dot]
## foo@example.com => foo [at] example [dot] com
## [t] 2 : transparent name mangling: characters are replaced by
## equivalent HTML entities.
## foo@example.com => &#102;&#111;&#111;&#64;&#101;&#120;&#97;&#109;&#112;&#108;&#101;&#46;&#99;&#111;&#109;
## [t] 3 : javascript insertion. Requires Javascript enabled on client
## side.
## 4 : replaces @ and . characters by gif equivalents.
## foo@example.com => foo<img src="at.gif" alt=" [at] ">example<img src="dot.gif" alt=" [dot] ">com
CFG_WEBSTYLE_EMAIL_ADDRESSES_OBFUSCATION_MODE = 2
## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXLEFTTOP -- eventual global HTML
## left top box:
CFG_WEBSTYLE_CDSPAGEBOXLEFTTOP =
## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXLEFTBOTTOM -- eventual global
## HTML left bottom box:
CFG_WEBSTYLE_CDSPAGEBOXLEFTBOTTOM =
## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXRIGHTTOP -- eventual global
## HTML right top box:
CFG_WEBSTYLE_CDSPAGEBOXRIGHTTOP =
## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXRIGHTBOTTOM -- eventual global
## HTML right bottom box:
CFG_WEBSTYLE_CDSPAGEBOXRIGHTBOTTOM =
## CFG_WEBSTYLE_HTTP_STATUS_ALERT_LIST -- when certain HTTP status
## codes are raised to the WSGI handler, the corresponding exceptions
## and error messages can be sent to the system administrator for
## inspection. This is useful to detect and correct errors. The
## variable represents a comma-separated list of HTTP statuses that
## should alert the admin. Wildcards are possible. If the status is
## followed by an "r", it means that a referer is required to exist
## (useful to distinguish broken known links from URL typos when 404
## errors are raised).
CFG_WEBSTYLE_HTTP_STATUS_ALERT_LIST = 404r,400,5*,41*
## CFG_WEBSTYLE_HTTP_USE_COMPRESSION -- whether to enable deflate
## compression of your HTTP/HTTPS connections. This will affect the Apache
## configuration snippets created by inveniocfg --create-apache-conf and
## the OAI-PMH Identify response.
CFG_WEBSTYLE_HTTP_USE_COMPRESSION = 0
## CFG_WEBSTYLE_REVERSE_PROXY_IPS -- if you are setting a multinode
## environment where an HTTP proxy such as mod_proxy is sitting in
## front of the Invenio web application and is forwarding requests to
## worker nodes, set here the list of IP addresses of the allowed
## HTTP proxies. This is needed in order to avoid IP address spoofing
## when worker nodes are also available on the public Internet and
## might receive forged HTTP requests. Only HTTP requests coming from
## the specified IP addresses will be considered as forwarded from a
## reverse proxy. E.g. set this to '123.123.123.123'.
CFG_WEBSTYLE_REVERSE_PROXY_IPS =
##################################
## Part 3: WebSearch parameters ##
##################################
## This section contains some configuration parameters for WebSearch
## module. Please note that WebSearch is mostly configured on
## run-time via its WebSearch Admin web interface. The parameters
## below are the ones that you probably do not want to modify very
## often during the runtime. (Note that you may modify them
## afterwards too, though.)
## CFG_WEBSEARCH_SEARCH_CACHE_SIZE -- do you want to enable search
## caching in global search cache engine (e.g. Redis)? This cache is
## used mainly for "next/previous page" functionality, but it caches
## "popular" user queries too if more than one user happen to search
## for the same thing. Note that if you disable the search caching
## features like "facets" will not work. We recommend a value to be
## kept at CFG_WEBSEARCH_SEARCH_CACHE_SIZE = 1.
CFG_WEBSEARCH_SEARCH_CACHE_SIZE = 1
## CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT -- how long should we keep a
## search result in the cache, in seconds. The value should be
## greater than 0. [600 s = 10 minutes]
CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT = 600
## CFG_WEBSEARCH_FIELDS_CONVERT -- if you migrate from an older
## system, you may want to map field codes of your old system (such as
## 'ti') to Invenio/MySQL ("title"). Use Python dictionary syntax
## for the translation table, e.g. {'wau':'author', 'wti':'title'}.
## Usually you don't want to do that, and you would use empty dict {}.
CFG_WEBSEARCH_FIELDS_CONVERT = {}
## CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH -- width of the
## search pattern window in the light search interface, in
## characters.
CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH = 60
## CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH -- width of the search
## pattern window in the simple search interface, in characters.
CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH = 40
## CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH -- width of the
## search pattern window in the advanced search interface, in
## characters.
CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH = 30
## CFG_WEBSEARCH_NB_RECORDS_TO_SORT -- how many records do we still
## want to sort? For higher numbers we print only a warning and won't
## perform any sorting other than default 'latest records first', as
## sorting would be very time consuming then. We recommend a value of
## not more than a couple of thousand.
CFG_WEBSEARCH_NB_RECORDS_TO_SORT = 1000
## CFG_WEBSEARCH_CALL_BIBFORMAT -- if a record is being displayed but
## it was not preformatted in the "HTML brief" format, do we want to
## call BibFormatting on the fly? Put "1" for "yes" and "0" for "no".
## Note that "1" will display the record exactly as if it were fully
## preformatted, but it may be slow due to on-the-fly processing; "0"
## will display a default format very fast, but it may not have all
## the fields as in the fully preformatted HTML brief format. Note
## also that this option is active only for old (PHP) formats; the new
## (Python) formats are called on the fly by default anyway, since
## they are much faster. When unsure, please set "0" here.
CFG_WEBSEARCH_CALL_BIBFORMAT = 0
## CFG_WEBSEARCH_USE_ALEPH_SYSNOS -- do we want to make old SYSNOs
## visible rather than MySQL's record IDs? You may use this if you
## migrate from a different e-doc system, and you store your old
## system numbers into 970__a. Put "1" for "yes" and "0" for
## "no". Usually you don't want to do that, though.
CFG_WEBSEARCH_USE_ALEPH_SYSNOS = 0
## CFG_WEBSEARCH_I18N_LATEST_ADDITIONS -- Put "1" if you want the
## "Latest Additions" in the web collection pages to show
## internationalized records. Useful only if your brief BibFormat
## templates contain internationalized strings. Otherwise put "0" in
## order not to slow down the creation of latest additions by WebColl.
CFG_WEBSEARCH_I18N_LATEST_ADDITIONS = 0
## CFG_WEBSEARCH_INSTANT_BROWSE -- the number of records to display
## under 'Latest Additions' in the web collection pages.
CFG_WEBSEARCH_INSTANT_BROWSE = 10
## CFG_WEBSEARCH_INSTANT_BROWSE_RSS -- the number of records to
## display in the RSS feed.
CFG_WEBSEARCH_INSTANT_BROWSE_RSS = 25
## CFG_WEBSEARCH_RSS_I18N_COLLECTIONS -- comma-separated list of
## collections that feature an internationalized RSS feed on their
## main search interface page created by webcoll. Other collections
## will have RSS feed using CFG_SITE_LANG.
CFG_WEBSEARCH_RSS_I18N_COLLECTIONS =
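## E.g., for two hypothetical collection names:
## CFG_WEBSEARCH_RSS_I18N_COLLECTIONS = Articles,Preprints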
## CFG_WEBSEARCH_RSS_TTL -- number of minutes that indicates how long
## a feed cache is valid.
CFG_WEBSEARCH_RSS_TTL = 360
## CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS -- maximum number of requests kept
## in the cache. Once the cache is full, subsequent requests are not cached.
CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS = 1000
## CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD -- up to how many author names
## to print explicitly; for more, print "et al". Note that this is
## used in default formatting that is seldom used, as usually
## BibFormat defines all the formatting. The value below is only used
## when BibFormat fails, for example.
CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD = 3
## CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS -- whether or not to show
## collection grandsons in Narrow Search boxes (sons are shown by
## default, grandsons are configurable here). Use 0 for no and 1 for
## yes.
CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS = 1
## CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX -- shall we
## create help links for Ellis, Nick or Ellis, Nicholas and friends
## when Ellis, N was searched for? Useful if you have one author
## stored in the database under several name formats, namely surname
## comma firstname and surname comma initial cataloging policy. Use 0
## for no and 1 for yes.
CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX = 1
## CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS -- MathJax is a JavaScript
## library that renders (La)TeX mathematical formulas in the client
## browser. This parameter must contain a comma-separated list of
## output formats for which to apply the MathJax rendering, for example
## "hb,hd". If the list is empty, MathJax is disabled.
CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS =
## CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_TIMEOUT -- when searching
## external collections (e.g. SPIRES, CiteSeer, etc), how many seconds
## do we wait for a reply before abandoning?
CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_TIMEOUT = 5
## CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_MAXRESULTS -- how many
## results do we fetch?
CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_MAXRESULTS = 10
## CFG_WEBSEARCH_SPLIT_BY_COLLECTION -- do we want to split the search
## results by collection or not? Use 0 for no, 1 for yes.
CFG_WEBSEARCH_SPLIT_BY_COLLECTION = 1
## CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS -- the default number of
## records to display per page in the search results pages.
CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS = 10
## CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS -- in order to limit denial-of-service
## attacks, the total number of records per group displayed as a
## result of a search query will be limited to this number. Only
## superuser queries are not affected by this limit.
CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS = 200
## CFG_WEBSEARCH_SHOW_COMMENT_COUNT -- do we want to show the 'N comments'
## links on the search engine pages? (useful only when you have allowed
## commenting)
CFG_WEBSEARCH_SHOW_COMMENT_COUNT = 1
## CFG_WEBSEARCH_SHOW_REVIEW_COUNT -- do we want to show the 'N reviews'
## links on the search engine pages? (useful only when you have allowed
## reviewing)
CFG_WEBSEARCH_SHOW_REVIEW_COUNT = 1
## CFG_WEBSEARCH_FULLTEXT_SNIPPETS_GENERATOR -- how do we want to generate
## full-text snippets? They can be generated by 'native' Invenio code or by 'SOLR'.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS_GENERATOR = native
## CFG_WEBSEARCH_FULLTEXT_SNIPPETS -- how many full-text snippets do
## we want to display for full-text searches? If you want to specify
## different values for different document status types, please add
## more items into this dictionary. (Unless specified, the empty
## value will be used as default.) This is useful if you have
## restricted files of different types with various restrictions on
## what we can show.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS = {
'': 4,
}
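## For example, to show fewer snippets for a hypothetical "restricted"
## document status type while keeping 4 as the default for all others:
## CFG_WEBSEARCH_FULLTEXT_SNIPPETS = {
##     '': 4,
##     'restricted': 2,
## }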
## CFG_WEBSEARCH_FULLTEXT_SNIPPETS_CHARS -- what is the maximum size
## of a snippet to display around the pattern found in the full-text?
## If you want to specify different values for different document
## status types, please add more items into this dictionary. (Unless
## specified, the empty value will be used as default.) This is
## useful if you have restricted files of different types with various
## restrictions on what we can show.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS_CHARS = {
'': 100,
}
## CFG_WEBSEARCH_WILDCARD_LIMIT -- some of the queries, wildcard
## queries in particular (ex: cern*, a*), but also regular expressions
## (ex: [a-z]+), may take a long time to respond due to the high
## number of hits. You can limit the number of terms matched by a
## wildcard by setting this variable. A negative value or zero means
## that none of the queries will be limited (which may be desired, but
## is also prone to denial-of-service attacks).
CFG_WEBSEARCH_WILDCARD_LIMIT = 50000
## CFG_WEBSEARCH_SYNONYM_KBRS -- defines which knowledge bases are to
## be used for which index in order to provide runtime synonym lookup
## of user-supplied terms, and what massaging function should be used
## upon search pattern before performing the KB lookup. (Can be one
## of `exact', `leading_to_comma', `leading_to_number'.)
CFG_WEBSEARCH_SYNONYM_KBRS = {
'journal': ['SEARCH-SYNONYM-JOURNAL', 'leading_to_number'],
}
## CFG_SOLR_URL -- optionally, you may use Solr to serve full-text
## queries and ranking. If so, please specify the URL of your Solr instance.
## Example: http://localhost:8983/solr (default solr port)
CFG_SOLR_URL =
## CFG_XAPIAN_ENABLED -- optionally, you may use Xapian to serve full-text
## queries and ranking. If so, please enable it: 1 = enabled
CFG_XAPIAN_ENABLED =
## CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT -- specify the limit up to which
## the previous/next/back hit links are to be displayed on detailed record pages.
## In order to speed up list manipulations, if a search returns more hits
## than this limit, then do not lose time calculating next/previous/back
## hits at all, but display the page directly without them.
## Note also that Invenio installations that do not want the
## next/previous hit link functionality can set this variable to zero
## to disable it entirely.
CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT = 1000
## CFG_WEBSEARCH_PREV_NEXT_HIT_FOR_GUESTS -- Set this to 0 if you want
## to disable the previous/next/back hit link functionality for guest
## users.
## Since the previous/next/back hit link functionality causes the allocation
## of a user session in the database even for guest users, it might be useful
## to be able to disable it e.g. when your site is bombarded by web requests
## (a.k.a. the Slashdot effect).
CFG_WEBSEARCH_PREV_NEXT_HIT_FOR_GUESTS = 1
## CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY -- when a record belongs to more than one
## restricted collection, if the viewrestrcoll policy is set to "ALL" (default)
## then the user must be authorized to all the restricted collections in
## order to be granted access to the specific record. If the policy is set to
## "ANY", then the user needs to be authorized to only one of the collections
## in order to be granted access to the specific record.
CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY = ANY
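The difference between the two policies can be sketched with set operations. This is an illustrative sketch, not Invenio's actual authorization code; the function name and collection names are hypothetical:

```python
def may_view_record(user_colls, record_colls, policy="ALL"):
    """Sketch of the viewrestrcoll policy check.

    user_colls: restricted collections the user is authorized for.
    record_colls: restricted collections the record belongs to.
    """
    record = set(record_colls)
    user = set(user_colls)
    if policy == "ALL":
        # the user must be authorized for every restricted collection
        return record <= user
    # "ANY": authorization for a single collection suffices
    return bool(record & user)
```

For example, for a record in both 'Theses' and 'Reports', a user authorized only for 'Theses' would be denied under "ALL" but granted access under "ANY".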
## CFG_WEBSEARCH_SPIRES_SYNTAX -- variable to configure the use of the
## SPIRES query syntax in searches. Values: 0 = SPIRES syntax is
## switched off; 1 = leading 'find' is required; 9 = leading 'find' is
## not required (leading SPIRES operator, space-operator-space, etc
## are also accepted).
CFG_WEBSEARCH_SPIRES_SYNTAX = 1
## CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS -- when a user search does not
## return any direct results, what do we want to display? Set to 0 in
## order to display a generic message about search returning no hits.
## Set to 1 in order to display list of nearest terms from the indexes
## that may match user query. Note: this functionality may be slow,
## so you may want to disable it on bigger sites.
CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS = 1
## CFG_WEBSEARCH_DETAILED_META_FORMAT -- the output format to use for
## detailed meta tags containing metadata as configured in the tag
## table. The default output format is 'hdm'. This
## format will be included in the header of /record/ pages. For
## efficiency this format should be pre-cached with BibReformat. See
## also CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR and
## CFG_WEBSEARCH_ENABLE_OPENGRAPH.
CFG_WEBSEARCH_DETAILED_META_FORMAT = hdm
## CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR -- decides if meta tags for
## Google Scholar shall be included in the detailed record page
## header, when using the standard formatting templates/elements. See
## also CFG_WEBSEARCH_DETAILED_META_FORMAT and
## CFG_WEBSEARCH_ENABLE_OPENGRAPH. When this variable is changed and
## output format defined in CFG_WEBSEARCH_DETAILED_META_FORMAT is
## cached, a bibreformat must be run for the cached records.
CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR = True
## CFG_WEBSEARCH_ENABLE_OPENGRAPH -- decides if meta tags for the Open
## Graph protocol shall be included in the detailed record page
## header, when using the standard formatting templates/elements. See
## also CFG_WEBSEARCH_DETAILED_META_FORMAT and
## CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR. When this variable is changed
## and output format defined in CFG_WEBSEARCH_DETAILED_META_FORMAT is
## cached, a bibreformat must be run for the cached records. Note that
## enabling Open Graph produces invalid XHTML/HTML5 markup.
CFG_WEBSEARCH_ENABLE_OPENGRAPH = False
## CFG_WEBSEARCH_CITESUMMARY_SELFCITES_THRESHOLD -- switches off the
## self-citations computation if the number of records in the citesummary
## is above this threshold.
CFG_WEBSEARCH_CITESUMMARY_SELFCITES_THRESHOLD = 2000
#######################################
## Part 4: BibHarvest OAI parameters ##
#######################################
## This part defines parameters for the Invenio OAI gateway.
## Useful if you are running Invenio as OAI data provider.
## CFG_OAI_ID_FIELD -- OAI identifier MARC field:
CFG_OAI_ID_FIELD = 909COo
## CFG_OAI_SET_FIELD -- OAI set MARC field:
CFG_OAI_SET_FIELD = 909COp
## CFG_OAI_PREVIOUS_SET_FIELD -- previous OAI set MARC field:
CFG_OAI_PREVIOUS_SET_FIELD = 909COq
## CFG_OAI_DELETED_POLICY -- OAI deleted records policy
## (no/transient/persistent):
CFG_OAI_DELETED_POLICY = persistent
## CFG_OAI_ID_PREFIX -- OAI identifier prefix:
CFG_OAI_ID_PREFIX = atlantis.cern.ch
## CFG_OAI_SAMPLE_IDENTIFIER -- OAI sample identifier:
CFG_OAI_SAMPLE_IDENTIFIER = oai:atlantis.cern.ch:123
## CFG_OAI_IDENTIFY_DESCRIPTION -- description for the OAI Identify verb:
CFG_OAI_IDENTIFY_DESCRIPTION = <description>
<eprints xmlns="http://www.openarchives.org/OAI/1.1/eprints"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.openarchives.org/OAI/1.1/eprints
http://www.openarchives.org/OAI/1.1/eprints.xsd">
<content>
<URL>%(CFG_SITE_URL)s</URL>
</content>
<metadataPolicy>
<text>Free and unlimited use by anybody with obligation to refer to original record</text>
</metadataPolicy>
<dataPolicy>
<text>Full content, i.e. preprints may not be harvested by robots</text>
</dataPolicy>
<submissionPolicy>
<text>Submission restricted. Submitted documents are subject to approval by the OAI repository admins.</text>
</submissionPolicy>
</eprints>
</description>
## CFG_OAI_LOAD -- OAI number of records in a response:
CFG_OAI_LOAD = 500
## CFG_OAI_EXPIRE -- OAI resumptionToken expiration time:
CFG_OAI_EXPIRE = 90000
## CFG_OAI_SLEEP -- the service is reported unavailable when two consecutive
## requests arrive less than CFG_OAI_SLEEP seconds apart:
CFG_OAI_SLEEP = 2
## CFG_OAI_METADATA_FORMATS -- mapping between accepted metadataPrefixes and
## the corresponding output format to use, its schema and its metadataNamespace.
CFG_OAI_METADATA_FORMATS = {
'marcxml': ('XOAIMARC', 'http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd', 'http://www.loc.gov/MARC21/slim'),
'oai_dc': ('XOAIDC', 'http://www.openarchives.org/OAI/1.1/dc.xsd', 'http://purl.org/dc/elements/1.1/'),
}
## CFG_OAI_FRIENDS -- list of OAI baseURLs of friend repositories. See:
## <http://www.openarchives.org/OAI/2.0/guidelines-friends.htm>
CFG_OAI_FRIENDS = http://cds.cern.ch/oai2d,http://openaire.cern.ch/oai2d,http://export.arxiv.org/oai2
## The following subfields are a complement to
## CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG. If CFG_OAI_PROVENANCE_BASEURL_SUBFIELD is
## set for a record, then the corresponding field is considered as having been
## harvested via OAI-PMH.
## CFG_OAI_PROVENANCE_BASEURL_SUBFIELD -- baseURL of the originDescription of a
## record
CFG_OAI_PROVENANCE_BASEURL_SUBFIELD = u
## CFG_OAI_PROVENANCE_DATESTAMP_SUBFIELD -- datestamp of the originDescription
## of a record
CFG_OAI_PROVENANCE_DATESTAMP_SUBFIELD = d
## CFG_OAI_PROVENANCE_METADATANAMESPACE_SUBFIELD -- metadataNamespace of the
## originDescription of a record
CFG_OAI_PROVENANCE_METADATANAMESPACE_SUBFIELD = m
## CFG_OAI_PROVENANCE_ORIGINDESCRIPTION_SUBFIELD -- originDescription of the
## originDescription of a record
CFG_OAI_PROVENANCE_ORIGINDESCRIPTION_SUBFIELD = d
## CFG_OAI_PROVENANCE_HARVESTDATE_SUBFIELD -- harvestDate of the
## originDescription of a record
CFG_OAI_PROVENANCE_HARVESTDATE_SUBFIELD = h
## CFG_OAI_PROVENANCE_ALTERED_SUBFIELD -- altered flag of the
## originDescription of a record
CFG_OAI_PROVENANCE_ALTERED_SUBFIELD = t
## CFG_OAI_FAILED_HARVESTING_STOP_QUEUE -- when harvesting OAI sources
## fails, shall we report an error with the task and stop the BibSched
## queue, or simply wait for the next run of the task? A value of 0
## will stop the task upon errors, 1 will let the queue run if the
## next run of the oaiharvest task can safely recover from the failure
## (this means that the queue will stop if the task is not set to run
## periodically).
CFG_OAI_FAILED_HARVESTING_STOP_QUEUE = 1
## CFG_OAI_FAILED_HARVESTING_EMAILS_ADMIN -- when
## CFG_OAI_FAILED_HARVESTING_STOP_QUEUE is set to leave the queue
## running upon errors, shall we send an email to admin to notify
## about the failure?
CFG_OAI_FAILED_HARVESTING_EMAILS_ADMIN = True
## NOTE: the following parameters are experimental
## -----------------------------------------------------------------------------
## CFG_OAI_RIGHTS_FIELD -- MARC field dedicated to storing Copyright information
CFG_OAI_RIGHTS_FIELD = 542__
## CFG_OAI_RIGHTS_HOLDER_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright holder information
CFG_OAI_RIGHTS_HOLDER_SUBFIELD = d
## CFG_OAI_RIGHTS_DATE_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright date information
CFG_OAI_RIGHTS_DATE_SUBFIELD = g
## CFG_OAI_RIGHTS_URI_SUBFIELD -- MARC subfield dedicated to storing the URI
## (URL or URN, more detailed statement about copyright status) information
CFG_OAI_RIGHTS_URI_SUBFIELD = u
## CFG_OAI_RIGHTS_CONTACT_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright holder contact information
CFG_OAI_RIGHTS_CONTACT_SUBFIELD = e
## CFG_OAI_RIGHTS_STATEMENT_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright statement as presented on the resource
CFG_OAI_RIGHTS_STATEMENT_SUBFIELD = f
## CFG_OAI_LICENSE_FIELD -- MARC field dedicated to storing terms governing
## use and reproduction (license)
CFG_OAI_LICENSE_FIELD = 540__
## CFG_OAI_LICENSE_TERMS_SUBFIELD -- MARC subfield dedicated to storing the
## Terms governing use and reproduction, e.g. CC License
CFG_OAI_LICENSE_TERMS_SUBFIELD = a
## CFG_OAI_LICENSE_PUBLISHER_SUBFIELD -- MARC subfield dedicated to storing the
## person or institution imposing the license (author, publisher)
CFG_OAI_LICENSE_PUBLISHER_SUBFIELD = b
## CFG_OAI_LICENSE_URI_SUBFIELD -- MARC subfield dedicated to storing the
## license URI
CFG_OAI_LICENSE_URI_SUBFIELD = u
##------------------------------------------------------------------------------
###################################
## Part 5: BibDocFile parameters ##
###################################
## This section contains some configuration parameters for the BibDocFile
## module.
## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_DOCTYPES -- this is the list of
## doctypes (like 'Main' or 'Additional') and their description that admins
## can choose from when adding new files via the Document File Manager
## admin interface.
## - When no value is provided, admins cannot add new
## files (they can only revise/delete/add formats)
## - When a single value is given, it is used as the
## default doctype for all new documents
##
## Order is relevant.
## Eg:
## [('main', 'Main document'), ('additional', 'Figure, schema, etc.')]
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_DOCTYPES = [
('Main', 'Main document'),
('LaTeX', 'LaTeX'),
('Source', 'Source'),
('Additional', 'Additional File'),
('Audio', 'Audio file'),
('Video', 'Video file'),
('Script', 'Script'),
('Data', 'Data'),
('Figure', 'Figure'),
('Schema', 'Schema'),
('Graph', 'Graph'),
('Image', 'Image'),
('Drawing', 'Drawing'),
('Slides', 'Slides')]
## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_RESTRICTIONS -- this is the
## list of restrictions (like 'Restricted' or 'No Restriction') and their
## description that admins can choose from when adding or revising files.
## Restrictions can then be configured at the level of WebAccess.
## - When no value is provided, no restriction is
## applied
## - When a single value is given, it is used as the
## default restriction for all documents.
## - The first value of the list is used as the default
## restriction if the user is not given the
## choice of the restriction. Order is relevant.
##
## Eg:
## [('', 'No restriction'), ('restr', 'Restricted')]
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_RESTRICTIONS = [
('', 'Public'),
('restricted', 'Restricted')]
## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_MISC -- set here the other
## default flags and attributes to tune the Document File Manager admin
## interface.
## See the docstring of bibdocfile_managedocfiles.create_file_upload_interface
## to have a description of the available parameters and their syntax.
## In general you will rarely need to change this variable.
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_MISC = {
'can_revise_doctypes': ['*'],
'can_comment_doctypes': ['*'],
'can_describe_doctypes': ['*'],
'can_delete_doctypes': ['*'],
'can_keep_doctypes': ['*'],
'can_rename_doctypes': ['*'],
'can_add_format_to_doctypes': ['*'],
'can_restrict_doctypes': ['*'],
}
## CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT -- the fulltext
## documents are stored under "/opt/invenio/var/data/files/gX/Y"
## directories where X is 0,1,... and Y stands for the bibdoc ID. Thus
## documents Y are grouped into directories X, and this variable
## indicates the maximum number of documents Y stored in each
## directory X. This limit is imposed solely for filesystem
## performance reasons, in order not to have too many subdirectories in
## a given directory.
CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT = 5000
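The grouping rule described above can be sketched as follows. This is a hypothetical helper assuming X is simply the integer division of the bibdoc ID by the group limit; Invenio's actual layout logic may differ:

```python
def bibdoc_path(bibdoc_id, group_limit=5000, root="/opt/invenio/var/data/files"):
    # Group directory gX holds at most `group_limit` bibdoc directories Y,
    # so X grows by one for every `group_limit` documents (an assumption).
    group = bibdoc_id // group_limit
    return "%s/g%d/%d" % (root, group, bibdoc_id)
```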
## CFG_BIBDOCFILE_ADDITIONAL_KNOWN_FILE_EXTENSIONS -- a comma-separated
## list of document extensions not listed in the Python standard mimetypes
## library that should be recognized by Invenio.
CFG_BIBDOCFILE_ADDITIONAL_KNOWN_FILE_EXTENSIONS = hpg,link,lis,llb,mat,mpp,msg,docx,docm,xlsx,xlsm,xlsb,pptx,pptm,ppsx,ppsm
## CFG_BIBDOCFILE_ADDITIONAL_KNOWN_MIMETYPES -- a mapping of additional
## mimetypes that could be served or have to be recognized by this instance
## of Invenio (this is useful in order to patch old versions of the
## mimetypes Python module).
CFG_BIBDOCFILE_ADDITIONAL_KNOWN_MIMETYPES = {
"application/xml-dtd": ".dtd",
}
## CFG_BIBDOCFILE_DESIRED_CONVERSIONS -- a dictionary having as keys
## a format and as values the corresponding list of desired converted
## formats.
CFG_BIBDOCFILE_DESIRED_CONVERSIONS = {
'pdf' : ('pdf;pdfa', ),
'ps.gz' : ('pdf;pdfa', ),
'djvu' : ('pdf', ),
'sxw': ('doc', 'odt', 'pdf;pdfa', ),
'docx' : ('doc', 'odt', 'pdf;pdfa', ),
'doc' : ('odt', 'pdf;pdfa', 'docx'),
'rtf' : ('pdf;pdfa', 'odt', ),
'odt' : ('pdf;pdfa', 'doc', ),
'pptx' : ('ppt', 'odp', 'pdf;pdfa', ),
'ppt' : ('odp', 'pdf;pdfa', 'pptx'),
'sxi': ('odp', 'pdf;pdfa', ),
'odp' : ('pdf;pdfa', 'ppt', ),
'xlsx' : ('xls', 'ods', 'csv'),
'xls' : ('ods', 'csv'),
'ods' : ('xls', 'xlsx', 'csv'),
'sxc': ('xls', 'xlsx', 'csv'),
'tiff' : ('pdf;pdfa', ),
'tif' : ('pdf;pdfa', ),}
## CFG_BIBDOCFILE_USE_XSENDFILE -- if your web server supports the
## X-Sendfile header, you may want to enable this feature in order for
## Invenio to tell the web server to stream files for download (after
## proper authorization checks) by the web server's own means. This helps
## to liberate Invenio worker processes from being busy with sending big
## files to clients. The web server will take care of that. Note:
## this feature is still somewhat experimental. Note: when enabled
## (set to 1), you also have to regenerate the Apache vhost conf
## snippets (inveniocfg --update-config-py --create-apache-conf).
CFG_BIBDOCFILE_USE_XSENDFILE = 0
## CFG_BIBDOCFILE_MD5_CHECK_PROBABILITY -- a number between 0 and
## 1 that indicates the probability with which the MD5 checksum will be
## verified when streaming bibdocfile-managed files. (0.1 will cause
## the check to be performed once for every 10 downloads.)
CFG_BIBDOCFILE_MD5_CHECK_PROBABILITY = 0.1
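The probabilistic check amounts to a Bernoulli trial per download, roughly like this sketch. The helper is hypothetical, not Invenio's actual code; the `rng` parameter exists only to make the behaviour deterministic for testing:

```python
import random

def should_verify_md5(probability, rng=random.random):
    # With probability `probability`, verify the checksum before streaming;
    # e.g. 0.1 verifies roughly one download in ten.
    return rng() < probability
```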
## CFG_BIBDOCFILE_BEST_FORMATS_TO_EXTRACT_TEXT_FROM -- a comma-separated
## list of document extensions in decreasing order of preference,
## suggesting what is considered the best format to extract text from.
CFG_BIBDOCFILE_BEST_FORMATS_TO_EXTRACT_TEXT_FROM = ('txt', 'html', 'xml', 'odt', 'doc', 'docx', 'djvu', 'pdf', 'ps', 'ps.gz')
## CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE -- whether to use the
## database table bibdocfsinfo as reference for filesystem
## information. The default is 0. Switch this to 1
## after you have run bibdocfile --fix-bibdocfsinfo-cache
## or on an empty system.
CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE = 0
## CFG_OPENOFFICE_SERVER_HOST -- the host where an OpenOffice server is
## listening. If set to localhost, an OpenOffice server will be started
## automatically if it is not already running.
## Note: if you set this to an empty value, the usage of OpenOffice for
## converting documents will be disabled.
## If you set this to something different than localhost, you'll have to take
## care to have an OpenOffice server running on the corresponding host and
## to install the same OpenOffice release both on the client and on the server
## side.
## In order to launch an OpenOffice server on a remote machine, just start
## the usual 'soffice' executable in this way:
## $> soffice -headless -nologo -nodefault -norestore -nofirststartwizard \
## .. -accept=socket,host=HOST,port=PORT;urp;StarOffice.ComponentContext
CFG_OPENOFFICE_SERVER_HOST = localhost
## CFG_OPENOFFICE_SERVER_PORT -- the port where an OpenOffice server is
## listening.
CFG_OPENOFFICE_SERVER_PORT = 2002
## CFG_OPENOFFICE_USER -- the user that will be used to launch the OpenOffice
## client. It is recommended to set this to a user who doesn't own any files,
## e.g. 'nobody'. You should also authorize your Apache server user to be
## able to become this user, e.g. by adding to your /etc/sudoers the following
## line:
## "apache ALL=(nobody) NOPASSWD: ALL"
## provided that apache is the username corresponding to the Apache user.
## On some machines this might be apache2 or www-data.
CFG_OPENOFFICE_USER = nobody
#################################
## Part 6: BibIndex parameters ##
#################################
## This section contains some configuration parameters for the BibIndex
## module. Please note that BibIndex is mostly configured at run-time
## via its BibIndex Admin web interface. The parameters below are the
## ones that you probably do not want to modify very often during
## runtime.
## CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY -- when fulltext indexing, do
## you want to index locally stored files only, or also external URLs?
## Use "0" to say "no" and "1" to say "yes".
CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY = 1
-## CFG_BIBINDEX_REMOVE_STOPWORDS -- when indexing, do we want to remove
-## stopwords? Use "0" to say "no" and "1" to say "yes".
-CFG_BIBINDEX_REMOVE_STOPWORDS = 0
+## (deprecated) CFG_BIBINDEX_REMOVE_STOPWORDS -- configuration moved to
+## DB, variable kept here just for backwards compatibility purposes.
+CFG_BIBINDEX_REMOVE_STOPWORDS =
## CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS -- characters considered as
## alphanumeric separators of word-blocks inside words. You probably
## don't want to change this.
CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS = \!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~
## CFG_BIBINDEX_CHARS_PUNCTUATION -- characters considered as punctuation
## between word-blocks inside words. You probably don't want to
## change this.
CFG_BIBINDEX_CHARS_PUNCTUATION = \.\,\:\;\?\!\"
-## CFG_BIBINDEX_REMOVE_HTML_MARKUP -- should we attempt to remove HTML markup
-## before indexing? Use 1 if you have HTML markup inside metadata
-## (e.g. in abstracts), use 0 otherwise.
+## (deprecated) CFG_BIBINDEX_REMOVE_HTML_MARKUP -- now in database
CFG_BIBINDEX_REMOVE_HTML_MARKUP = 0
-## CFG_BIBINDEX_REMOVE_LATEX_MARKUP -- should we attempt to remove LATEX markup
-## before indexing? Use 1 if you have LATEX markup inside metadata
-## (e.g. in abstracts), use 0 otherwise.
+## (deprecated) CFG_BIBINDEX_REMOVE_LATEX_MARKUP -- now in database
CFG_BIBINDEX_REMOVE_LATEX_MARKUP = 0
## CFG_BIBINDEX_MIN_WORD_LENGTH -- minimum word length allowed to be added to
## the index. Terms shorter than this length will be discarded.
## Useful to keep the database clean; however, you can safely leave
## this value at 0 for up to 1,000,000 documents.
CFG_BIBINDEX_MIN_WORD_LENGTH = 0
## CFG_BIBINDEX_URLOPENER_USERNAME and CFG_BIBINDEX_URLOPENER_PASSWORD --
## access credentials to access restricted URLs, interesting only if
## you are fulltext-indexing files located on a remote server that is
## only available via username/password. But it's probably better to
## handle this case via IP or some convention; the current scheme is
## mostly there for demo only.
CFG_BIBINDEX_URLOPENER_USERNAME = mysuperuser
CFG_BIBINDEX_URLOPENER_PASSWORD = mysuperpass
## CFG_INTBITSET_ENABLE_SANITY_CHECKS --
## Enable sanity checks for integers passed to the intbitset data
## structures. It is good to enable this during debugging
## and to disable it for speed improvements.
CFG_INTBITSET_ENABLE_SANITY_CHECKS = False
## CFG_BIBINDEX_PERFORM_OCR_ON_DOCNAMES -- regular expression that matches
## docnames for which OCR is desired (set this to .* in order to enable
## OCR in general, set this to empty in order to disable it.)
CFG_BIBINDEX_PERFORM_OCR_ON_DOCNAMES = scan-.*
## CFG_BIBINDEX_SPLASH_PAGES -- key-value mapping where the key corresponds
## to a regular expression that matches the URLs of the splash pages of
## a given service and the value is a regular expression of the set of URLs
## referenced via <a> tags in the HTML content of the splash pages that are
## referring to documents that need to be indexed.
## NOTE: for backward compatibility reasons you can set this to a simple
## regular expression that will directly be used as the unique key of the
## map, with corresponding value set to ".*" (in order to match any URL)
CFG_BIBINDEX_SPLASH_PAGES = {
"http://documents\.cern\.ch/setlink\?.*": ".*",
"http://ilcagenda\.linearcollider\.org/subContributionDisplay\.py\?.*|http://ilcagenda\.linearcollider\.org/contributionDisplay\.py\?.*": "http://ilcagenda\.linearcollider\.org/getFile\.py/access\?.*|http://ilcagenda\.linearcollider\.org/materialDisplay\.py\?.*",
}
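The key/value matching can be illustrated like this. This is a sketch under the assumption that keys are matched against the splash-page URL and values against the `<a href>` targets found on that page; the function name is hypothetical and this is not Invenio's actual indexing code:

```python
import re

def harvestable_links(splash_url, hrefs, rules):
    """Return the hrefs from a splash page that should be indexed,
    according to a CFG_BIBINDEX_SPLASH_PAGES-style mapping."""
    for page_re, link_re in rules.items():
        if re.match(page_re, splash_url):
            # keep only the referenced URLs that match the value pattern
            return [h for h in hrefs if re.match(link_re, h)]
    return []
```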
## CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES -- do we want
## the author word index to exclude first names to keep only last
## names? If set to True, then for the author `Bernard, Denis', only
## `Bernard' will be indexed in the word index, not `Denis'. Note
## that if you change this variable, you have to re-index the author
## index via `bibindex -w author -R'.
CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES = False
-## CFG_BIBINDEX_SYNONYM_KBRS -- defines which knowledge bases are to
-## be used for which index in order to provide index-time synonym
-## lookup, and what massaging function should be used upon search
-## pattern before performing the KB lookup. (Can be one of `exact',
-## 'leading_to_comma', `leading_to_number'.)
-CFG_BIBINDEX_SYNONYM_KBRS = {
- 'global': ['INDEX-SYNONYM-TITLE', 'exact'],
- 'title': ['INDEX-SYNONYM-TITLE', 'exact'],
- }
+## (deprecated) CFG_BIBINDEX_SYNONYM_KBRS -- configuration moved to
+## DB, variable kept here just for backwards compatibility purposes.
+CFG_BIBINDEX_SYNONYM_KBRS = {}
#######################################
## Part 7: Access control parameters ##
#######################################
## This section contains some configuration parameters for the access
## control system. Please note that WebAccess is mostly configured on
## run-time via its WebAccess Admin web interface. The parameters
## below are the ones that you probably do not want to modify very
## often during runtime. (If you do want to modify them during
## runtime, for example to deny access temporarily because of backups,
## you can edit access_control_config.py directly, no need to get back
## here and no need to redo the make process.)
## CFG_ACCESS_CONTROL_LEVEL_SITE -- defines how open this site is.
## Use 0 for normal operation of the site, 1 for read-only site (all
## write operations temporarily closed), 2 for site fully closed,
## 3 for also disabling any database connection.
## Useful for site maintenance.
CFG_ACCESS_CONTROL_LEVEL_SITE = 0
## CFG_ACCESS_CONTROL_LEVEL_GUESTS -- guest users access policy. Use
## 0 to allow guest users, 1 not to allow them (all users must login).
CFG_ACCESS_CONTROL_LEVEL_GUESTS = 0
## CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS -- account registration and
## activation policy. When 0, users can register and accounts are
## automatically activated. When 1, users can register but admin must
## activate the accounts. When 2, users cannot register nor update
## their email address, only admin can register accounts. When 3,
## users cannot register nor update email address nor password, only
## admin can register accounts. When 4, the same as 3 applies, plus the
## user cannot change his login method. When 5, the same as 4
## applies, plus info about how to get an account is hidden from the
## login page.
CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS = 0
## CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN -- limit account
## registration to certain email addresses? If wanted, give domain
## name below, e.g. "cern.ch". If not wanted, leave it empty.
CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN =
## CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS -- send a
## notification email to the administrator when a new account is
## created? Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS = 0
## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT -- send a
## notification email to the user when a new account is created, in order
## to verify the validity of the provided email address? Use
## 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT = 1
## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_ACTIVATION -- send a
## notification email to the user when a new account is activated?
## Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_ACTIVATION = 0
## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_DELETION -- send a
## notification email to the user when an account is deleted or an
## account request is rejected? Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_DELETION = 0
## CFG_APACHE_PASSWORD_FILE -- the file where Apache user credentials
## are stored. Must be an absolute pathname. If the value does not
## start with a slash, it is considered to be the filename of a file
## located under the prefix/var/tmp directory. This is useful for the
## demo site testing purposes. For the production site, if you plan
## to restrict access to some collections based on the Apache user
## authentication mechanism, you should put here an absolute path to
## your Apache password file.
CFG_APACHE_PASSWORD_FILE = demo-site-apache-user-passwords
## CFG_APACHE_GROUP_FILE -- the file where Apache user groups are
## defined. See the documentation of the preceding config variable.
CFG_APACHE_GROUP_FILE = demo-site-apache-user-groups
###################################
## Part 8: WebSession parameters ##
###################################
## This section contains some configuration parameters for tweaking
## session handling.
## CFG_WEBSESSION_EXPIRY_LIMIT_DEFAULT -- number of days after which a session
## and the corresponding cookie is considered expired.
CFG_WEBSESSION_EXPIRY_LIMIT_DEFAULT = 2
## CFG_WEBSESSION_EXPIRY_LIMIT_REMEMBER -- number of days after which a session
## and the corresponding cookie is considered expired, when the user has
## requested to permanently stay logged in.
CFG_WEBSESSION_EXPIRY_LIMIT_REMEMBER = 365
## CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS -- when a user has requested
## a password reset, for how many days is the URL valid?
CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS = 3
## CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS -- when an account
## activation email was sent, for how many days is the URL valid?
CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS = 3
## CFG_WEBSESSION_NOT_CONFIRMED_EMAIL_ADDRESS_EXPIRE_IN_DAYS -- when a
## user does not confirm his email address and complete the
## registration, after how many days does it expire?
CFG_WEBSESSION_NOT_CONFIRMED_EMAIL_ADDRESS_EXPIRE_IN_DAYS = 10
## CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS -- when set to 1, the session
## system allocates the same uid=0 to all guest users regardless of where they
## come from. When set to 0, a unique uid is allocated to each guest.
CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS = 0
## CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS -- to prevent session cookie
## stealing, Invenio checks that the IP address of a connection is the
## same as that of the connection which created the initial session.
## This variable lets you decide how many bits should be skipped during
## this check. Set this to 0 in order to enable full IP address
## checking. Set this to 32 in order to disable IP address checking.
## Intermediate values (say 8) give you some degree of security, so
## that you can trust your local network only, while helping to solve
## issues related to outside clients that configured their browser to
## use a web proxy for HTTP connections but not for HTTPS, thus
## potentially having two different IP addresses. In general, if you use
## HTTPS in order to serve authenticated content, you can safely set
## CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS to 32.
CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS = 0
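Skipping N bits means comparing the two addresses after masking off their N least-significant bits. The sketch below is illustrative and IPv4-only; Invenio's actual session check may differ:

```python
import ipaddress

def same_client(ip_a, ip_b, skip_bits):
    # Ignore the `skip_bits` least-significant bits of both addresses
    # before comparing them.
    mask = (0xFFFFFFFF << skip_bits) & 0xFFFFFFFF
    a = int(ipaddress.IPv4Address(ip_a)) & mask
    b = int(ipaddress.IPv4Address(ip_b)) & mask
    return a == b
```

With skip_bits=8, all addresses of the same /24 network count as the same client; with skip_bits=32, any two addresses match.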
################################
## Part 9: BibRank parameters ##
################################
## This section contains some configuration parameters for the ranking
## system.
## CFG_BIBRANK_SHOW_READING_STATS -- do we want to show reading
## similarity stats? ('People who viewed this page also viewed')
CFG_BIBRANK_SHOW_READING_STATS = 1
## CFG_BIBRANK_SHOW_DOWNLOAD_STATS -- do we want to show the download
## similarity stats? ('People who downloaded this document also
## downloaded')
CFG_BIBRANK_SHOW_DOWNLOAD_STATS = 1
## CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS -- do we want to show download
## history graph? (0=no | 1=classic/gnuplot | 2=flot)
CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS = 1
## CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS_CLIENT_IP_DISTRIBUTION -- do we
## want to show a graph representing the distribution of client IPs
## downloading a given document? (0=no | 1=classic/gnuplot | 2=flot)
CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS_CLIENT_IP_DISTRIBUTION = 0
## CFG_BIBRANK_SHOW_CITATION_LINKS -- do we want to show the 'Cited
## by' links? (useful only when you have citations in the metadata)
CFG_BIBRANK_SHOW_CITATION_LINKS = 1
## CFG_BIBRANK_SHOW_CITATION_STATS -- do we want to show citation
## stats? ('Cited by M records', 'Co-cited with N records')
## CFG_BIBRANK_SHOW_CITATION_GRAPHS -- do we want to show citation
## history graph? (0=no | 1=classic/gnuplot | 2=flot)
CFG_BIBRANK_SHOW_CITATION_GRAPHS = 1
## CFG_BIBRANK_SELFCITES_USE_BIBAUTHORID -- use author IDs for computing
## self-citations; falls back to hashing the author string when
## they are unavailable.
CFG_BIBRANK_SELFCITES_USE_BIBAUTHORID = 0
## CFG_BIBRANK_SELFCITES_PRECOMPUTE -- use precomputed self-citations
## when displaying the citesummary. Precomputing self-citations allows
## us to speed things up.
CFG_BIBRANK_SELFCITES_PRECOMPUTE = 0
####################################
## Part 10: WebComment parameters ##
####################################
## This section contains some configuration parameters for the
## commenting and reviewing facilities.
## CFG_WEBCOMMENT_ALLOW_COMMENTS -- do we want to allow users to write
## public comments on records?
CFG_WEBCOMMENT_ALLOW_COMMENTS = 1
## CFG_WEBCOMMENT_ALLOW_REVIEWS -- do we want to allow users to write
## public reviews of records?
CFG_WEBCOMMENT_ALLOW_REVIEWS = 1
## CFG_WEBCOMMENT_ALLOW_SHORT_REVIEWS -- do we want to allow short
## reviews, that is just the attribution of stars without submitting
## detailed review text?
CFG_WEBCOMMENT_ALLOW_SHORT_REVIEWS = 0
## CFG_WEBCOMMENT_NB_REPORTS_BEFORE_SEND_EMAIL_TO_ADMIN -- if users
## report a comment as abusive, how many reports are needed before the
## site admin is alerted?
CFG_WEBCOMMENT_NB_REPORTS_BEFORE_SEND_EMAIL_TO_ADMIN = 5
## CFG_WEBCOMMENT_NB_COMMENTS_IN_DETAILED_VIEW -- how many comments do
## we display in the detailed record page upon welcome?
CFG_WEBCOMMENT_NB_COMMENTS_IN_DETAILED_VIEW = 1
## CFG_WEBCOMMENT_NB_REVIEWS_IN_DETAILED_VIEW -- how many reviews do
## we display in the detailed record page upon welcome?
CFG_WEBCOMMENT_NB_REVIEWS_IN_DETAILED_VIEW = 1
## CFG_WEBCOMMENT_ADMIN_NOTIFICATION_LEVEL -- do we notify the site
## admin after every comment?
CFG_WEBCOMMENT_ADMIN_NOTIFICATION_LEVEL = 1
## CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_COMMENTS_IN_SECONDS -- how many
## elapsed seconds do we consider enough when checking for possible
## multiple comment submissions by a user?
CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_COMMENTS_IN_SECONDS = 20
## CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_REVIEWS_IN_SECONDS -- how many
## elapsed seconds do we consider enough when checking for possible
## multiple review submissions by a user?
CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_REVIEWS_IN_SECONDS = 20
## CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR -- enable the WYSIWYG
## JavaScript-based editor when users edit comments?
CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR = False
## CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL -- the email address from which the
## alert emails will appear to be sent:
CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL = info@invenio-software.org
## CFG_WEBCOMMENT_DEFAULT_MODERATOR -- if no rules are
## specified to indicate who the comment moderator of
## a collection is, this person will be used as the default.
CFG_WEBCOMMENT_DEFAULT_MODERATOR = info@invenio-software.org
## CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS -- do we want to allow the use
## of the MathJax plugin to render LaTeX input in comments?
CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS = 1
## CFG_WEBCOMMENT_AUTHOR_DELETE_COMMENT_OPTION -- allow comment authors
## to delete their own comments?
CFG_WEBCOMMENT_AUTHOR_DELETE_COMMENT_OPTION = 1
# CFG_WEBCOMMENT_EMAIL_REPLIES_TO -- which fields of the record define
# the email addresses that should be notified of newly submitted
# comments, and for which collection. Use collection names as keys,
# and lists of tags as values.
CFG_WEBCOMMENT_EMAIL_REPLIES_TO = {
'Articles': ['506__d', '506__m'],
}
# CFG_WEBCOMMENT_RESTRICTION_DATAFIELD -- which field of the record
# defines the restriction (must be linked to the WebAccess
# 'viewrestrcomment' action) to apply to newly submitted comments, and
# for which collection. Use collection names as keys, and a single tag
# as the value.
CFG_WEBCOMMENT_RESTRICTION_DATAFIELD = {
'Articles': '5061_a',
'Pictures': '5061_a',
'Theses': '5061_a',
}
# CFG_WEBCOMMENT_ROUND_DATAFIELD -- which field of the record defines
# the current comment round, and for which collection. Use collection
# names as keys, and a single tag as the value.
CFG_WEBCOMMENT_ROUND_DATAFIELD = {
'Articles': '562__c',
'Pictures': '562__c',
}
# CFG_WEBCOMMENT_MAX_ATTACHMENT_SIZE -- maximum file size per attached
# file, in bytes (5242880 = 5 MB). Use 0 if you don't want to limit
# the size.
CFG_WEBCOMMENT_MAX_ATTACHMENT_SIZE = 5242880
# CFG_WEBCOMMENT_MAX_ATTACHED_FILES -- maximum number of files that
# can be attached per comment. Use 0 if you don't want to limit the
# number of files. File uploads can be restricted with the action
# "attachcommentfile".
CFG_WEBCOMMENT_MAX_ATTACHED_FILES = 5
# CFG_WEBCOMMENT_MAX_COMMENT_THREAD_DEPTH -- how many levels of
# indentation discussions can have. This can be used to ensure that
# discussions do not reach deep levels of nesting if users don't
# understand the difference between "reply to comment" and "add
# comment". When the depth is reached, any "reply to comment" is
# conceptually converted to a "reply to thread" (i.e. a reply to the
# parent's comment). Use -1 for no limit, 0 for unthreaded (flat)
# discussions.
CFG_WEBCOMMENT_MAX_COMMENT_THREAD_DEPTH = 1
##################################
## Part 11: BibSched parameters ##
##################################
## This section contains some configuration parameters for the
## bibliographic task scheduler.
## CFG_BIBSCHED_REFRESHTIME -- how often do we want to refresh
## bibsched monitor? (in seconds)
CFG_BIBSCHED_REFRESHTIME = 5
## CFG_BIBSCHED_LOG_PAGER -- what pager to use to view bibsched task
## logs?
CFG_BIBSCHED_LOG_PAGER = /usr/bin/less
## CFG_BIBSCHED_EDITOR -- what editor to use to edit the marcxml
## code of the locked records
CFG_BIBSCHED_EDITOR = /usr/bin/vim
## CFG_BIBSCHED_GC_TASKS_OLDER_THAN -- after how many days to run the
## garbage collector on the BibSched queue (i.e. removing/moving tasks
## to the archive).
CFG_BIBSCHED_GC_TASKS_OLDER_THAN = 30
## CFG_BIBSCHED_GC_TASKS_TO_REMOVE -- list of BibTasks that can be safely
## removed from the BibSched queue once they are DONE.
CFG_BIBSCHED_GC_TASKS_TO_REMOVE = bibindex,bibreformat,webcoll,bibrank,inveniogc
## CFG_BIBSCHED_GC_TASKS_TO_ARCHIVE -- list of BibTasks that should be safely
## archived out of the BibSched queue once they are DONE.
CFG_BIBSCHED_GC_TASKS_TO_ARCHIVE = bibupload,oairepositoryupdater
## CFG_BIBSCHED_MAX_NUMBER_CONCURRENT_TASKS -- maximum number of BibTasks
## that can run concurrently.
## NOTE: concurrent tasks are still considered an experimental
## feature. Please keep this value set to 1 in production environments.
CFG_BIBSCHED_MAX_NUMBER_CONCURRENT_TASKS = 1
## CFG_BIBSCHED_PROCESS_USER -- bibsched and bibtask processes must
## usually run under the same identity as the Apache web server
## process in order to share proper file read/write privileges. If
## you want to force some other bibsched/bibtask user, e.g. because
## you are using a local `invenio' user that belongs to your
## `www-data' Apache user group and so shares writing rights with your
## Apache web server process in this way, then please set its username
## identity here. Otherwise we shall check whether your
## bibsched/bibtask processes are run under the same identity as your
## Apache web server process (in which case you can leave the default
## empty value here).
CFG_BIBSCHED_PROCESS_USER =
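## For example, if your Apache web server process runs as the user
## 'www-data' (the username here is illustrative; check your own
## Apache configuration), you could set:
## CFG_BIBSCHED_PROCESS_USER = www-data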
## CFG_BIBSCHED_NODE_TASKS -- specific nodes may be configured to
## run only specific tasks; if you want this, then this variable is a
## dictionary of the form {'hostname1': ['task1', 'task2']}. The
## default is that any node can run any task.
CFG_BIBSCHED_NODE_TASKS = {}
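## For example, to dedicate one node to indexing tasks and another to
## uploads (the hostnames here are illustrative), you could set:
## CFG_BIBSCHED_NODE_TASKS = {'indexer.example.org': ['bibindex', 'webcoll'],
##                            'uploader.example.org': ['bibupload']}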
## CFG_BIBSCHED_MAX_ARCHIVED_ROWS_DISPLAY -- maximum number of
## archived tasks displayed in the BibSched monitor.
CFG_BIBSCHED_MAX_ARCHIVED_ROWS_DISPLAY = 500
###################################
## Part 12: WebBasket parameters ##
###################################
## CFG_WEBBASKET_MAX_NUMBER_OF_DISPLAYED_BASKETS -- a safety limit on
## the maximum number of displayed baskets
CFG_WEBBASKET_MAX_NUMBER_OF_DISPLAYED_BASKETS = 20
## CFG_WEBBASKET_USE_RICH_TEXT_EDITOR -- enable the WYSIWYG
## JavaScript-based editor when users edit comments in WebBasket?
CFG_WEBBASKET_USE_RICH_TEXT_EDITOR = False
##################################
## Part 13: WebAlert parameters ##
##################################
## This section contains some configuration parameters for the
## automatic email notification alert system.
## CFG_WEBALERT_ALERT_ENGINE_EMAIL -- the email address from which the
## alert emails will appear to be sent:
CFG_WEBALERT_ALERT_ENGINE_EMAIL = info@invenio-software.org
## CFG_WEBALERT_MAX_NUM_OF_RECORDS_IN_ALERT_EMAIL -- how many records
## at most do we send in an outgoing alert email?
CFG_WEBALERT_MAX_NUM_OF_RECORDS_IN_ALERT_EMAIL = 20
## CFG_WEBALERT_MAX_NUM_OF_CHARS_PER_LINE_IN_ALERT_EMAIL -- how many
## characters per line do we allow in an outgoing alert email?
CFG_WEBALERT_MAX_NUM_OF_CHARS_PER_LINE_IN_ALERT_EMAIL = 72
## CFG_WEBALERT_SEND_EMAIL_NUMBER_OF_TRIES -- when sending alert
## emails fails, how many times do we retry?
CFG_WEBALERT_SEND_EMAIL_NUMBER_OF_TRIES = 3
## CFG_WEBALERT_SEND_EMAIL_SLEEPTIME_BETWEEN_TRIES -- when sending
## alert emails fails, how long do we sleep between tries? (in
## seconds)
CFG_WEBALERT_SEND_EMAIL_SLEEPTIME_BETWEEN_TRIES = 300
####################################
## Part 14: WebMessage parameters ##
####################################
## CFG_WEBMESSAGE_MAX_SIZE_OF_MESSAGE -- what maximum size do we allow
## for web messages?
CFG_WEBMESSAGE_MAX_SIZE_OF_MESSAGE = 20000
## CFG_WEBMESSAGE_MAX_NB_OF_MESSAGES -- how many messages do we allow
## in a regular user's inbox?
CFG_WEBMESSAGE_MAX_NB_OF_MESSAGES = 30
## CFG_WEBMESSAGE_DAYS_BEFORE_DELETE_ORPHANS -- how many days before
## we delete orphaned messages?
CFG_WEBMESSAGE_DAYS_BEFORE_DELETE_ORPHANS = 60
##################################
## Part 15: MiscUtil parameters ##
##################################
## CFG_MISCUTIL_SQL_USE_SQLALCHEMY -- whether to use SQLAlchemy.pool
## in the DB engine of Invenio. It is okay to enable this flag
## even if you have not installed SQLAlchemy. Note that Invenio will
## lose some performance if this option is enabled.
CFG_MISCUTIL_SQL_USE_SQLALCHEMY = False
## CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT -- how many queries can we run
## inside run_sql_many() in one SQL statement? The limit value
## depends on MySQL's max_allowed_packet configuration.
CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT = 10000
## CFG_MISCUTIL_SMTP_HOST -- which server to use as outgoing mail server to
## send outgoing emails generated by the system, for example concerning
## submissions or email notification alerts.
CFG_MISCUTIL_SMTP_HOST = localhost
## CFG_MISCUTIL_SMTP_PORT -- which port to use on the outgoing mail server
## defined in the previous step.
CFG_MISCUTIL_SMTP_PORT = 25
## CFG_MISCUTIL_SMTP_USER -- which username to use on the outgoing mail server
## defined in CFG_MISCUTIL_SMTP_HOST. If either CFG_MISCUTIL_SMTP_USER or
## CFG_MISCUTIL_SMTP_PASS are empty Invenio won't attempt authentication.
CFG_MISCUTIL_SMTP_USER =
## CFG_MISCUTIL_SMTP_PASS -- which password to use on the outgoing mail
## server defined in CFG_MISCUTIL_SMTP_HOST. If either CFG_MISCUTIL_SMTP_USER
## or CFG_MISCUTIL_SMTP_PASS are empty Invenio won't attempt authentication.
CFG_MISCUTIL_SMTP_PASS =
## CFG_MISCUTIL_SMTP_TLS -- whether to use a TLS (secure) connection when
## talking to the SMTP server defined in CFG_MISCUTIL_SMTP_HOST.
CFG_MISCUTIL_SMTP_TLS = False
## CFG_MISCUTIL_DEFAULT_PROCESS_TIMEOUT -- the default number of seconds
## after which a process launched through shellutils.run_process_with_timeout
## will be killed. This is useful to catch runaway processes.
CFG_MISCUTIL_DEFAULT_PROCESS_TIMEOUT = 300
## CFG_MATHJAX_HOSTING -- if you plan to use MathJax to display TeX
## formulas on HTML web pages, you can specify whether you wish to use
## 'local' hosting or 'cdn' hosting of MathJax libraries. (If set to
## 'local', you have to run 'make install-mathjax-plugin' as described
## in the INSTALL guide.) If set to 'local', users will use your site
## to download MathJax sources. If set to 'cdn', users will use
## centralized MathJax CDN servers instead. Please note that using
## CDN is suitable only for small institutions or for MathJax
## sponsors; see the MathJax website for more details. (Also, please
## note that if you plan to use MathJax on your site, you have to
## adapt CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS and
## CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS configuration variables
## elsewhere in this file.)
CFG_MATHJAX_HOSTING = local
#################################
## Part 16: BibEdit parameters ##
#################################
## CFG_BIBEDIT_TIMEOUT -- when a user edits a record, the record is
## locked to prevent other users from editing it at the same time.
## After how many seconds of inactivity does the locked record become
## free again for other people to edit?
CFG_BIBEDIT_TIMEOUT = 3600
## CFG_BIBEDIT_LOCKLEVEL -- when a user tries to edit a record for which
## there is a pending bibupload task in the queue, this shouldn't be
## permitted. The lock level determines how thoroughly the queue should
## be investigated to determine whether this is the case.
## Level 0 - always permits editing, doesn't look at the queue
## (unsafe, use only if you know what you are doing)
## Level 1 - permits editing if there are no queued bibedit tasks for this record
## (safe with respect to bibedit, but not for other bibupload maintenance jobs)
## Level 2 - permits editing if there are no queued bibupload tasks of any sort
## (safe, but may lock more than necessary if many cataloguers are around)
## Level 3 - permits editing if no queued bibupload task concerns given record
## (safe, most precise locking, but slow,
## checks for 001/EXTERNAL_SYSNO_TAG/EXTERNAL_OAIID_TAG)
## The recommended level is 3 (default) or 2 (if you use maintenance jobs often).
CFG_BIBEDIT_LOCKLEVEL = 3
## CFG_BIBEDIT_PROTECTED_FIELDS -- a comma-separated list of fields that BibEdit
## will not allow to be added, edited or deleted. Wildcards are not supported,
## but conceptually a wildcard is added at the end of every field specification.
## Examples:
## 500A - protect all MARC fields with tag 500 and first indicator A
## 5 - protect all MARC fields in the 500-series.
## 909C_a - protect subfield a in tag 909 with first indicator C and empty
## second indicator
## Note that 001 is protected by default, but if protection of other
## identifiers or automated fields is a requirement, they should be added to
## this list.
CFG_BIBEDIT_PROTECTED_FIELDS =
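## For example, to protect the fields from the examples above (the
## values here are illustrative), you could set:
## CFG_BIBEDIT_PROTECTED_FIELDS = 500A,909C_a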
## CFG_BIBEDIT_QUEUE_CHECK_METHOD -- how do we want to check for
## possible queue locking situations to prevent cataloguers from
## editing a record that may be waiting in the queue? Use 'bibrecord'
## for exact checking (always works, but may be slow), use 'regexp'
## for regular expression based checking (very fast, but may be
## inaccurate). When unsure, use 'bibrecord'.
CFG_BIBEDIT_QUEUE_CHECK_METHOD = bibrecord
## CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE -- a dictionary
## containing which collections will be extended with a given template
## while being displayed in BibEdit UI. The collection corresponds with
## the value written in field 980
CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE = { 'POETRY' : 'record_poem'}
## CFG_BIBEDIT_KB_SUBJECTS - Name of the KB used in the field 65017a
## to automatically convert codes into their extended version, e.g.
## a - Astrophysics
CFG_BIBEDIT_KB_SUBJECTS = Subjects
## CFG_BIBEDIT_KB_INSTITUTIONS - Name of the KB used for institution
## autocomplete. To be applied in fields defined in
## CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS
CFG_BIBEDIT_KB_INSTITUTIONS = InstitutionsCollection
## CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS - list of fields to
## be autocompleted with the KB CFG_BIBEDIT_KB_INSTITUTIONS
CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS = 100__u,700__u,701__u,502__c
## CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING -- maximum number of records
## that can be modified instantly using the multi-record editor. Above
## this limit, modifications will only be executed in limited hours.
CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING = 2000
## CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING -- maximum number of records
## that can be sent for modification without having the superadmin role.
## If the number of records is between CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING
## and this number, the modifications will take place only in limited hours.
CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING = 20000
## CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING_TIME -- the time window in
## which modifications are executed when the number of records exceeds
## CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING.
CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING_TIME = 22:00-05:00
###################################
## Part 17: BibUpload parameters ##
###################################
## CFG_BIBUPLOAD_REFERENCE_TAG -- where do we store references?
CFG_BIBUPLOAD_REFERENCE_TAG = 999
## CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG -- where do we store external
## system numbers? Useful for matching when our records come from an
## external digital library system.
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG = 970__a
## CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG -- where do we store OAI ID tags
## of harvested records? Useful for matching when we harvest stuff
## via OAI that we do not want to reexport via Invenio OAI; so records
## may have only the source OAI ID stored in this tag (kind of like
## external system number too).
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG = 035__a
## CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG -- where do we store OAI SRC
## tags of harvested records? Useful for matching when we harvest stuff
## via OAI that we do not want to reexport via Invenio OAI; so records
## may have only the source OAI SRC stored in this tag (kind of like
## external system number too). Note that the field should be the same
## as CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG.
CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG = 035__9
## CFG_BIBUPLOAD_STRONG_TAGS -- a comma-separated list of tags that
## are strong enough to resist the replace mode. Useful for tags that
## might be created from an external non-metadata-like source,
## e.g. the information about the number of copies left.
CFG_BIBUPLOAD_STRONG_TAGS = 964
## CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS -- a comma-separated list
## of tags that contain provenance information that should be checked
## in the bibupload correct mode via matching provenance codes. (Only
## field instances of the same provenance information would be acted
## upon.) Please specify the whole tag info up to subfield codes.
CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS = 6531_9
## CFG_BIBUPLOAD_FFT_ALLOWED_LOCAL_PATHS -- a comma-separated list of system
## paths from which it is allowed to take fulltext files that will be uploaded
## via FFT (CFG_TMPDIR is included by default).
CFG_BIBUPLOAD_FFT_ALLOWED_LOCAL_PATHS = /tmp,/home
## CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS -- a dictionary containing
## external URLs that can be accessed by Invenio and specific HTTP
## headers that will be used for each URL. The keys of the dictionary
## are regular expressions matching a set of URLs, the values are
## dictionaries of headers as consumed by urllib2.Request. If a
## regular expression matching all URLs is created at the end of the
## list, it means that Invenio will download all URLs. Otherwise
## Invenio will just download authorized URLs. Note: by default, a
## User-Agent built from the current Invenio version, site name, and
## site URL will be used. The values of the header dictionary can
## also contain a call to a python function, in the form of a
## dictionary with two entries: the name of the function to be called
## as a value for the 'fnc' key, and the arguments to this function,
## as a value for the 'args' key (in the form of a dictionary).
## CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS = [
## ('http://myurl.com/.*', {'User-Agent': 'Me'}),
## ('http://yoururl.com/.*', {'User-Agent': 'You', 'Accept': 'text/plain'}),
## ('http://thisurl.com/.*', {'Cookie': {'fnc':'read_cookie', 'args':{'cookiefile':'/tmp/cookies.txt'}}})
## ('http://.*', {'User-Agent': 'Invenio'}),
## ]
CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS = [
('http(s)?://.*', {}),
]
## CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE -- do we want to serialize
## internal representation of records (Pythonic record structure) into
## the database? This can improve internal processing speed of some
## operations at the price of somewhat bigger disk space usage.
## If you change this value after some records have already been added
## to your installation, you may want to run:
## $ /opt/invenio/bin/inveniocfg --reset-recstruct-cache
## in order to either erase the cache thus freeing database space,
## or to fill the cache for all records that have not been cached yet.
CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE = 1
## CFG_BIBUPLOAD_DELETE_FORMATS -- which formats do we want bibupload
## to delete when a record is ingested? Enter a comma-separated list of
## formats. For example, 'hb,hd' will delete the pre-formatted HTML brief
## and detailed formats from the cache, so that the search engine will
## generate them on-the-fly. Useful to always present the latest data of
## records upon record display, until the periodical bibreformat job
## runs next and updates the cache.
CFG_BIBUPLOAD_DELETE_FORMATS = hb
## CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS -- set to 1 if keeping
## history of record revisions is not necessary (e.g. because records
## and corresponding modifications are coming always from the same
## external system which already keeps revision history).
CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS = 0
## CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE -- Set the name of
## the BibCatalog ticket queue to be used when BibUpload can't
## automatically resolve a revision conflict and has therefore to put
## requested modifications in the holding pen.
CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE =
## CFG_BATCHUPLOADER_FILENAME_MATCHING_POLICY -- a comma-separated list
## indicating which fields match the file names of the documents to be
## uploaded.
## The matching will be done in the same order as the list provided.
CFG_BATCHUPLOADER_FILENAME_MATCHING_POLICY = reportnumber,recid
## CFG_BATCHUPLOADER_DAEMON_DIR -- directory where the batchuploader daemon
## will look for the subfolders 'metadata' and 'document' by default.
## If the path is relative, CFG_PREFIX will be prepended.
CFG_BATCHUPLOADER_DAEMON_DIR = var/batchupload
## CFG_BATCHUPLOADER_WEB_ROBOT_AGENTS -- regular expression specifying the
## agents permitted when calling the batch uploader web interface
## cds.cern.ch/batchuploader/robotupload
## e.g. when using curl: curl xxx -A invenio
CFG_BATCHUPLOADER_WEB_ROBOT_AGENTS = invenio_webupload|Invenio-.*
## CFG_BATCHUPLOADER_WEB_ROBOT_RIGHTS -- access list specifying, for each
## IP address, which collections are allowed to use the batch uploader
## robot interface.
CFG_BATCHUPLOADER_WEB_ROBOT_RIGHTS = {
'127.0.0.1': ['*'], # useful for testing
'127.0.1.1': ['*'], # useful for testing
'10.0.0.1': ['BOOK', 'REPORT'], # Example 1
'10.0.0.2': ['POETRY', 'PREPRINT'], # Example 2
}
####################################
## Part 18: BibCatalog parameters ##
####################################
## CFG_BIBCATALOG_SYSTEM -- set the desired catalog system (RT or EMAIL).
CFG_BIBCATALOG_SYSTEM = EMAIL
## Email backend configuration:
CFG_BIBCATALOG_SYSTEM_EMAIL_ADDRESS = info@invenio-software.org
## RT backend configuration:
## CFG_BIBCATALOG_SYSTEM_RT_CLI -- path to the RT CLI client
CFG_BIBCATALOG_SYSTEM_RT_CLI = /usr/bin/rt
## CFG_BIBCATALOG_SYSTEM_RT_URL -- Base URL of the remote RT system
CFG_BIBCATALOG_SYSTEM_RT_URL = http://localhost/rt3
## CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_USER -- Set the username for a default RT account
## on remote system, with limited privileges, in order to only create and modify own tickets.
CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_USER =
## CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_PWD -- Set the password for the default RT account
## on remote system.
CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_PWD =
####################################
## Part 19: BibFormat parameters ##
####################################
## CFG_BIBFORMAT_HIDDEN_TAGS -- comma-separated list of MARC tags that
## are not shown to users not having cataloging authorizations.
CFG_BIBFORMAT_HIDDEN_TAGS = 595
## CFG_BIBFORMAT_HIDDEN_FILE_FORMATS -- comma-separated list of file formats
## that are not shown explicitly to users not having cataloging authorizations.
## e.g. pdf;pdfa,xml
CFG_BIBFORMAT_HIDDEN_FILE_FORMATS =
## CFG_BIBFORMAT_ADDTHIS_ID -- if you want to use the AddThis service from
## <http://www.addthis.com/>, set this value to the pubid parameter as
## provided by the service (e.g. ra-4ff80aae118f4dad), and add a call to
## <BFE_ADDTHIS /> formatting element in your formats, for example
## Default_HTML_detailed.bft.
CFG_BIBFORMAT_ADDTHIS_ID =
## CFG_BIBFORMAT_DISABLE_I18N_FOR_CACHED_FORMATS -- for each output
## format, BibReformat currently creates a cache for only one language
## (CFG_SITE_LANG) per record. This means that visitors having set a
## different language than CFG_SITE_LANG will be served an on-the-fly
## output using the language of their choice. You can disable this
## behaviour by specifying below for which output formats you would
## like to force the cache to be used whatever language is
## requested. If your format templates do not provide
## internationalization, you can optimize your site by setting
## e.g. hb,hd to always serve the precached output (if it exists) in
## CFG_SITE_LANG.
CFG_BIBFORMAT_DISABLE_I18N_FOR_CACHED_FORMATS =
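## For example, to always serve the precached CFG_SITE_LANG output for
## the HTML brief and detailed formats, you could set:
## CFG_BIBFORMAT_DISABLE_I18N_FOR_CACHED_FORMATS = hb,hd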
####################################
## Part 20: BibMatch parameters ##
####################################
## CFG_BIBMATCH_LOCAL_SLEEPTIME -- determines the number of seconds to sleep
## between search queries on the LOCAL system.
CFG_BIBMATCH_LOCAL_SLEEPTIME = 0.0
## CFG_BIBMATCH_REMOTE_SLEEPTIME -- determines the number of seconds to sleep
## between search queries on REMOTE systems.
CFG_BIBMATCH_REMOTE_SLEEPTIME = 2.0
## CFG_BIBMATCH_FUZZY_WORDLIMITS -- determines the number of words to extract
## from a certain field's value during fuzzy matching mode. Add/change the
## field and an appropriate number in the dictionary to configure this.
CFG_BIBMATCH_FUZZY_WORDLIMITS = {
'100__a': 2,
'245__a': 4
}
## CFG_BIBMATCH_FUZZY_EMPTY_RESULT_LIMIT -- determines the number of empty
## results to accept during fuzzy matching mode.
CFG_BIBMATCH_FUZZY_EMPTY_RESULT_LIMIT = 1
## CFG_BIBMATCH_QUERY_TEMPLATES -- Here you can set the various predefined querystrings
## used to standardize common matching queries. By default the following templates
## are given:
## title - standard title search. Taken from 245__a (default)
## title-author - title and author search (i.e. title AND author).
## Taken from 245__a and 100__a
## reportnumber - report number search (i.e. reportnumber:REP-NO-123).
CFG_BIBMATCH_QUERY_TEMPLATES = {
'title' : '[title]',
'title-author' : '[title] [author]',
'reportnumber' : 'reportnumber:[reportnumber]'
}
## CFG_BIBMATCH_MATCH_VALIDATION_RULESETS -- Here you can define the various rulesets for
## validating search results done by BibMatch. Each ruleset contains a certain pattern mapped
## to a tuple defining a "matching-strategy".
##
## The rule-definitions must come in two parts:
##
## * The first part is a string containing a regular expression
## that is matched against the textmarc representation of each record.
## If a match is found, the final rule-set is updated with
## the given "sub rule-set", where identical tag rules are replaced.
##
## * The second item is a list of key->value mappings (dict) that indicates specific
## strategy parameters with corresponding validation rules.
##
## This strategy consists of five items:
##
## * MARC TAGS:
## These MARC tags represent the fields taken from the original record and
## any records from the search results. When several MARC tags are specified
## with a given match-strategy, all the fields associated with these tags are
## matched together (i.e. with the key "100__a,700__a", all 100__a and 700__a
## fields are matched together, which is useful when the first author can
## vary for certain records on different systems).
##
## * COMPARISON THRESHOLD:
## a value between 0.0 and 1.0 specifying the threshold for string matches
## to determine if it is a match or not (using normalized string-distance).
## Normally 0.8 (80% match) is considered to be a close match.
##
## * COMPARISON MODE:
## the comparison mode decides how the record datafields are compared:
## - 'strict' : all (sub-)fields are compared, and all must match. Order is significant.
## - 'normal' : all (sub-)fields are compared, and all must match. Order is ignored.
## - 'lazy' : all (sub-)fields are compared with each other and at least one must match
## - 'ignored': the tag is ignored in the match. Used to disable previously defined rules.
##
## * MATCHING MODE:
## the matching mode decides how the field values are matched:
## - 'title' : uses a method specialized for comparing titles, e.g. looking for subtitles
## - 'author' : uses a special authorname comparison. Will take initials into account.
## - 'identifier' : special matching for identifiers, stripping away punctuation
## - 'date': matches dates by extracting and comparing the year
## - 'normal': normal string comparison.
## Note: fields are considered matching when all their subfields or values match.
##
## * RESULT MODE:
## the result mode decides how the results from the comparisons are handled further:
## - 'normal' : a failed match will cause the validation to immediately exit as a failure.
## a successful match will cause the validation to continue on other rules (if any)
## - 'final' : a failed match will cause the validation to immediately exit as a failure.
## a successful match will cause validation to immediately exit as a success.
## - 'joker' : a failed match will cause the validation to continue on other rules (if any).
## a successful match will cause validation to immediately exit as a success.
##
## You can add your own rulesets in the dictionary below. The 'default' ruleset is always applied,
## and should therefore NOT be removed, but can be changed. The tag-rules can also be overwritten
## by other rulesets.
##
## WARNING: beware that the validation quality is only as good as the given
## rules, so matching results are never guaranteed to be accurate, as this is
## very content-specific.
CFG_BIBMATCH_MATCH_VALIDATION_RULESETS = [('default', [{ 'tags' : '245__%,242__%',
'threshold' : 0.8,
'compare_mode' : 'lazy',
'match_mode' : 'title',
'result_mode' : 'normal' },
{ 'tags' : '037__a,088__a',
'threshold' : 1.0,
'compare_mode' : 'lazy',
'match_mode' : 'identifier',
'result_mode' : 'final' },
{ 'tags' : '100__a,700__a',
'threshold' : 0.8,
'compare_mode' : 'normal',
'match_mode' : 'author',
'result_mode' : 'normal' },
{ 'tags' : '773__a',
'threshold' : 1.0,
'compare_mode' : 'lazy',
'match_mode' : 'title',
'result_mode' : 'normal' }]),
('980__ \$\$a(THESIS|Thesis)', [{ 'tags' : '100__a',
'threshold' : 0.8,
'compare_mode' : 'strict',
'match_mode' : 'author',
'result_mode' : 'normal' },
{ 'tags' : '700__a,701__a',
'threshold' : 1.0,
'compare_mode' : 'lazy',
'match_mode' : 'author',
'result_mode' : 'normal' },
{ 'tags' : '100__a,700__a',
'threshold' : 0.8,
'compare_mode' : 'ignored',
'match_mode' : 'author',
'result_mode' : 'normal' }]),
('260__', [{ 'tags' : '260__c',
'threshold' : 0.8,
'compare_mode' : 'lazy',
'match_mode' : 'date',
'result_mode' : 'normal' }]),
('0247_', [{ 'tags' : '0247_a',
'threshold' : 1.0,
'compare_mode' : 'lazy',
'match_mode' : 'identifier',
'result_mode' : 'final' }]),
('020__', [{ 'tags' : '020__a',
'threshold' : 1.0,
'compare_mode' : 'lazy',
'match_mode' : 'identifier',
'result_mode' : 'joker' }])
]
## CFG_BIBMATCH_FUZZY_MATCH_VALIDATION_LIMIT -- determines the minimum
## percentage of rules that must be positively matched when comparing two
## records. Should the number of matches be lower than the required matches
## but equal to or above this limit, the match will be considered fuzzy.
CFG_BIBMATCH_FUZZY_MATCH_VALIDATION_LIMIT = 0.65
## CFG_BIBMATCH_SEARCH_RESULT_MATCH_LIMIT -- determines the maximum number of
## search results a single search can return before being treated as a non-match.
CFG_BIBMATCH_SEARCH_RESULT_MATCH_LIMIT = 15
######################################
## Part 21: BibAuthorID parameters ##
######################################
# CFG_BIBAUTHORID_MAX_PROCESSES is the max number of processes
# that may be spawned by the disambiguation algorithm
CFG_BIBAUTHORID_MAX_PROCESSES = 12
# CFG_BIBAUTHORID_PERSONID_SQL_MAX_THREADS is the max number of threads
# to parallelize SQL queries during personID table updates
CFG_BIBAUTHORID_PERSONID_SQL_MAX_THREADS = 12
# CFG_BIBAUTHORID_EXTERNAL_CLAIMED_RECORDS_KEY defines the user info
# keys for externally claimed records in a remote-login scenario,
# e.g. "external_arxivids" for arXiv SSO
CFG_BIBAUTHORID_EXTERNAL_CLAIMED_RECORDS_KEY =
# CFG_BIBAUTHORID_ENABLED
# Globally enable the AuthorID interfaces.
# If False: no guest, user or operator will have access to the system.
CFG_BIBAUTHORID_ENABLED = True
# CFG_BIBAUTHORID_ON_AUTHORPAGES
# Enable AuthorID information on the author pages.
CFG_BIBAUTHORID_ON_AUTHORPAGES = True
# CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL defines the email address
# to which all ticket requests concerning authors will be sent.
CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL = info@invenio-software.org
# CFG_BIBAUTHORID_UI_SKIP_ARXIV_STUB_PAGE defines whether the optional arXiv stub page is skipped
CFG_BIBAUTHORID_UI_SKIP_ARXIV_STUB_PAGE = False
#########################################
## Part 22: BibCirculation parameters ##
#########################################
## CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL -- comma-separated list of statuses
# Example: missing, order delayed, not published
# You can always add a new status here, but you may want to run a script
# to update the database if you remove some statuses.
CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL =
## Here you can edit the text of the statuses that have specific roles.
# You should run a script to update the database if you change them after having
# used the module for some time.
## Item statuses
# The book is on loan
CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN = on loan
# Available for loan
CFG_BIBCIRCULATION_ITEM_STATUS_ON_SHELF = on shelf
# The book is being processed by the library (cataloguing, etc.)
CFG_BIBCIRCULATION_ITEM_STATUS_IN_PROCESS = in process
# The book has been ordered (bought)
CFG_BIBCIRCULATION_ITEM_STATUS_ON_ORDER = on order
# The order of the book has been cancelled
CFG_BIBCIRCULATION_ITEM_STATUS_CANCELLED = cancelled
# The order of the book has not arrived yet
CFG_BIBCIRCULATION_ITEM_STATUS_NOT_ARRIVED = not arrived
# The order of the book has not arrived yet and has been claimed
CFG_BIBCIRCULATION_ITEM_STATUS_CLAIMED = claimed
# The book has been proposed for acquisition and is under review.
CFG_BIBCIRCULATION_ITEM_STATUS_UNDER_REVIEW = under review
## Loan statuses
# This status should not be confused with CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN.
# If the item status is CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN, then there is
# a loan with status CFG_BIBCIRCULATION_LOAN_STATUS_ON_LOAN or
# CFG_BIBCIRCULATION_LOAN_STATUS_EXPIRED.
# For each copy, there can only be one active loan ('on loan' or 'expired') at
# a time, but there can be many 'returned' loans for the same copy.
CFG_BIBCIRCULATION_LOAN_STATUS_ON_LOAN = on loan
# The due date has come and the item has not been returned
CFG_BIBCIRCULATION_LOAN_STATUS_EXPIRED = expired
# The item has been returned.
CFG_BIBCIRCULATION_LOAN_STATUS_RETURNED = returned
## Request statuses
# There is at least one copy available, and this is the oldest request.
CFG_BIBCIRCULATION_REQUEST_STATUS_PENDING = pending
# There are no copies available, or there is another request with higher priority.
CFG_BIBCIRCULATION_REQUEST_STATUS_WAITING = waiting
# The request has become a loan
CFG_BIBCIRCULATION_REQUEST_STATUS_DONE = done
# The request has been cancelled
CFG_BIBCIRCULATION_REQUEST_STATUS_CANCELLED = cancelled
# The request has been generated for a proposed book
CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED = proposed
# ILL request statuses
CFG_BIBCIRCULATION_ILL_STATUS_NEW = new
CFG_BIBCIRCULATION_ILL_STATUS_REQUESTED = requested
CFG_BIBCIRCULATION_ILL_STATUS_ON_LOAN = on loan
CFG_BIBCIRCULATION_ILL_STATUS_RETURNED = returned
CFG_BIBCIRCULATION_ILL_STATUS_CANCELLED = cancelled
CFG_BIBCIRCULATION_ILL_STATUS_RECEIVED = received
#Book proposal statuses
CFG_BIBCIRCULATION_PROPOSAL_STATUS_NEW = proposal-new
CFG_BIBCIRCULATION_PROPOSAL_STATUS_ON_ORDER = proposal-on order
CFG_BIBCIRCULATION_PROPOSAL_STATUS_PUT_ASIDE = proposal-put aside
CFG_BIBCIRCULATION_PROPOSAL_STATUS_RECEIVED = proposal-received
# Purchase statuses
CFG_BIBCIRCULATION_ACQ_STATUS_NEW = new
CFG_BIBCIRCULATION_ACQ_STATUS_ON_ORDER = on order
CFG_BIBCIRCULATION_ACQ_STATUS_PARTIAL_RECEIPT = partial receipt
CFG_BIBCIRCULATION_ACQ_STATUS_RECEIVED = received
CFG_BIBCIRCULATION_ACQ_STATUS_CANCELLED = cancelled
## Library types
# Normal library where you have your books. It can also be a depot.
CFG_BIBCIRCULATION_LIBRARY_TYPE_INTERNAL = internal
# external libraries for ILL.
CFG_BIBCIRCULATION_LIBRARY_TYPE_EXTERNAL = external
# The main library is also an internal library.
# Since you may have several depots or small sites, you can tag one of them as
# the main site.
CFG_BIBCIRCULATION_LIBRARY_TYPE_MAIN = main
# It is also an internal library. The copies in this type of library will NOT
# be displayed to borrowers. Use this for depots.
CFG_BIBCIRCULATION_LIBRARY_TYPE_HIDDEN = hidden
## Amazon access key. You will need your own key.
# Example: 1T6P5M3ZDMW9AWJ212R2
CFG_BIBCIRCULATION_AMAZON_ACCESS_KEY =
######################################
## Part 22: BibClassify parameters ##
######################################
# CFG_BIBCLASSIFY_WEB_MAXKW -- maximum number of keywords to display
# in the Keywords tab web page.
CFG_BIBCLASSIFY_WEB_MAXKW = 100
########################################
## Part 23: Plotextractor parameters ##
########################################
## CFG_PLOTEXTRACTOR_SOURCE_BASE_URL -- for acquiring source tarballs for plot
## extraction, where should we look? If nothing is set, we'll just go
## to arXiv, but this can be a filesystem location, too
CFG_PLOTEXTRACTOR_SOURCE_BASE_URL = http://arxiv.org/
## CFG_PLOTEXTRACTOR_SOURCE_TARBALL_FOLDER -- for acquiring source tarballs for plot
## extraction, subfolder where the tarballs sit
CFG_PLOTEXTRACTOR_SOURCE_TARBALL_FOLDER = e-print/
## CFG_PLOTEXTRACTOR_SOURCE_PDF_FOLDER -- for acquiring source PDFs for plot
## extraction, subfolder where the PDFs sit
CFG_PLOTEXTRACTOR_SOURCE_PDF_FOLDER = pdf/
## CFG_PLOTEXTRACTOR_DOWNLOAD_TIMEOUT -- a float representing the number of seconds
## to wait between each download of pdf and/or tarball from source URL.
CFG_PLOTEXTRACTOR_DOWNLOAD_TIMEOUT = 2.0
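## The "seconds between downloads" semantics can be sketched as follows.
## This is an illustrative simplification, not the actual plotextractor
## code; `throttled_fetch` and its `fetch` callback are hypothetical names.

```python
import time

def throttled_fetch(urls, timeout=2.0, fetch=lambda url: url):
    """Fetch each URL in turn, sleeping `timeout` seconds between
    consecutive downloads (mirrors CFG_PLOTEXTRACTOR_DOWNLOAD_TIMEOUT).
    `fetch` is a stand-in for the real download function."""
    results = []
    for i, url in enumerate(urls):
        if i:  # no sleep before the first download
            time.sleep(timeout)
        results.append(fetch(url))
    return results
```

## The sleep happens between requests, not per request, so a batch of N
## tarballs incurs roughly (N - 1) * timeout seconds of waiting.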
## CFG_PLOTEXTRACTOR_CONTEXT_EXTRACT_LIMIT -- when extracting context of plots
## from TeX sources, this is the maximum number of characters to extract in
## each direction. Default 750.
CFG_PLOTEXTRACTOR_CONTEXT_EXTRACT_LIMIT = 750
## CFG_PLOTEXTRACTOR_DISALLOWED_TEX -- when extracting context of plots from TeX
## sources, this is the list of TeX tags that will trigger 'end of context'.
CFG_PLOTEXTRACTOR_DISALLOWED_TEX = begin,end,section,includegraphics,caption,acknowledgements
## CFG_PLOTEXTRACTOR_CONTEXT_WORD_LIMIT -- when extracting context of plots from
## TeX sources, this is the maximum number of words in each direction. Default 75.
CFG_PLOTEXTRACTOR_CONTEXT_WORD_LIMIT = 75
## CFG_PLOTEXTRACTOR_CONTEXT_SENTENCE_LIMIT -- when extracting context of plots from
## TeX sources, this is the maximum number of sentences in each direction. Default 2.
CFG_PLOTEXTRACTOR_CONTEXT_SENTENCE_LIMIT = 2
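## The "limit in each direction" idea can be sketched as a symmetric window
## around a plot reference, capped first by characters and then by words.
## This is a hypothetical illustration of the general technique, not the
## actual plotextractor implementation.

```python
CHAR_LIMIT = 750  # mirrors CFG_PLOTEXTRACTOR_CONTEXT_EXTRACT_LIMIT
WORD_LIMIT = 75   # mirrors CFG_PLOTEXTRACTOR_CONTEXT_WORD_LIMIT

def extract_context(text, index, char_limit=CHAR_LIMIT, word_limit=WORD_LIMIT):
    """Return up to char_limit characters and word_limit words of context
    on each side of position `index` in `text`."""
    before = text[max(0, index - char_limit):index]
    after = text[index:index + char_limit]
    # Apply the word limit in each direction as a second cap.
    before = ' '.join(before.split()[-word_limit:])
    after = ' '.join(after.split()[:word_limit])
    return before, after
```

## In the real extractor, the disallowed TeX tags listed above additionally
## cut the window short when encountered.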
######################################
## Part 24: WebStat parameters ##
######################################
# CFG_WEBSTAT_BIBCIRCULATION_START_YEAR defines the start date of the BibCirculation
# statistics. Value should have the format 'yyyy'. If empty, take all existing data.
CFG_WEBSTAT_BIBCIRCULATION_START_YEAR =
######################################
## Part 25: Web API Key parameters ##
######################################
# CFG_WEB_API_KEY_ALLOWED_URL defines the web apps that are going to use the web
# API key. Each entry has three values: the web app URL pattern, the lifetime
# in seconds of the secure URL, and whether a timestamp is needed.
#CFG_WEB_API_KEY_ALLOWED_URL = [('search/\?', 3600, True),
# ('rss', 0, False)]
CFG_WEB_API_KEY_ALLOWED_URL = []
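# As a sketch of how such three-tuple entries could be interpreted (the
# helper `find_rule` is hypothetical, not part of Invenio; the sample list
# mirrors the commented-out example above):

```python
import re

# Each entry: (URL regexp, lifetime in seconds of the secure URL, timestamp needed?)
ALLOWED_URL = [('search/\\?', 3600, True),
               ('rss', 0, False)]

def find_rule(url, allowed=ALLOWED_URL):
    """Return the (lifetime, needs_timestamp) pair of the first entry
    whose pattern matches URL, or None if no entry matches."""
    for pattern, lifetime, needs_timestamp in allowed:
        if re.search(pattern, url):
            return lifetime, needs_timestamp
    return None
```

# E.g. a search URL would match the first rule (1 hour lifetime, timestamp
# required), while an RSS URL would match the second (no expiry).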
##########################################
## Part 26: WebAuthorProfile parameters ##
##########################################
#CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_LIVE: number of days after which a
#cached element is considered expired when loading an author page, thus
#recomputing the content live.
CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_LIVE = 7
#CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_BIBSCHED: number of days after which
#a cached element is considered expired and recomputed by the bibsched daemon.
CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_BIBSCHED = 5
#CFG_WEBAUTHORPROFILE_MAX_COLLAB_LIST: limit collaboration list.
#Set to 0 to disable limit.
CFG_WEBAUTHORPROFILE_MAX_COLLAB_LIST = 100
#CFG_WEBAUTHORPROFILE_MAX_KEYWORD_LIST: limit keywords list
#Set to 0 to disable limit.
CFG_WEBAUTHORPROFILE_MAX_KEYWORD_LIST = 100
#CFG_WEBAUTHORPROFILE_MAX_AFF_LIST: limit affiliations list
#Set to 0 to disable limit.
CFG_WEBAUTHORPROFILE_MAX_AFF_LIST = 100
#CFG_WEBAUTHORPROFILE_MAX_COAUTHOR_LIST: limit coauthors list
#Set to 0 to disable limit.
CFG_WEBAUTHORPROFILE_MAX_COAUTHOR_LIST = 100
#CFG_WEBAUTHORPROFILE_MAX_HEP_CHOICES: limit HepRecords choices
#Set to 0 to disable limit.
CFG_WEBAUTHORPROFILE_MAX_HEP_CHOICES = 10
#CFG_WEBAUTHORPROFILE_USE_BIBAUTHORID: use bibauthorid or exactauthor
CFG_WEBAUTHORPROFILE_USE_BIBAUTHORID = False
####################################
## Part 27: BibSort parameters ##
####################################
## CFG_BIBSORT_BUCKETS -- the number of buckets bibsort should use.
## If 0, then no buckets will be used (bibsort will be inactive).
## If different from 0, bibsort will be used for sorting the records.
## The number of buckets should be set with regards to the size
## of the repository; having a larger number of buckets will increase
## the sorting performance for the top results but will decrease
## the performance for sorting the middle results.
## We recommend using 1 if you have fewer than about
## 1,000,000 records.
## When modifying this variable, re-run rebalancing for all the bibsort
## methods to keep the database in sync.
CFG_BIBSORT_BUCKETS = 1
####################################
## Part 26: Developer options ##
####################################
## CFG_DEVEL_SITE -- is this a development site? If it is, you might
## prefer that it does not do certain things. For example, you might
## not want WebSubmit to send certain emails or trigger certain
## processes on a development site. Put "0" for "no" (meaning we are
## on production site), put "1" for "yes" (meaning we are on
## development site), or put "9" for "maximum debugging info" (which
## will be displayed to *all* users using Flask DebugToolbar, so
## please beware).
## If you do *NOT* want to send emails to their original recipients,
## set CFG_EMAIL_BACKEND to a corresponding value (e.g. dummy, locmem).
CFG_DEVEL_SITE = 0
## CFG_DEVEL_TEST_DATABASE_ENGINES -- do we want to enable different testing
## database engines for testing Flask and SQLAlchemy? This setting
## will allow `*_flask_tests.py` to run on databases defined below.
## It uses `CFG_DATABASE_*` config variables as defaults for every
## specified engine. Put the following keys in the testing database
## configuration dictionary in order to override the default values:
## * `engine`: SQLAlchemy engine + driver
## * `username`: The user name.
## * `password`: The database password.
## * `host`: The name of the host.
## * `port`: The port number.
## * `database`: The database name.
## EXAMPLE:
## CFG_DEVEL_TEST_DATABASE_ENGINES = {
## 'PostgreSQL': {'engine': 'postgresql'},
## 'SQLite': {'engine': 'sqlite+pysqlite', 'username': None,
## 'password': None, 'host': None, 'database': None}
## }
CFG_DEVEL_TEST_DATABASE_ENGINES = {}
## CFG_DEVEL_TOOLS -- list of development tools to enable or disable.
## Currently supported tools are:
## * debug-toolbar: Flask Debug Toolbar
## * werkzeug-debugger: Werkzeug Debugger (for Apache)
## * sql-logger: Logging of run_sql SQL queries
## * inspect-templates: Template inspection (formerly CFG_WEBSTYLE_INSPECT_TEMPLATES)
## * no-https-redirect: Do not redirect HTTP to HTTPS
## * assets-debug: Jinja2 assets debugging (i.e. do not merge JavaScript files)
## * intercept-redirects: Intercept redirects (requires debug-toolbar enabled).
## * winpdb-local: Embedded WinPDB Debugger (default password is Change1Me)
## * winpdb-remote: Remote WinPDB Debugger (default password is Change1Me)
## * pydev: PyDev Remote Debugger
##
## IMPORTANT: For werkzeug-debugger, winpdb and pydev to work with Apache you
## must set WSGIDaemonProcess processes=1 threads=1 in invenio-apache-vhost.conf.
CFG_DEVEL_TOOLS =
########################################
## Part 28: JsTestDriver parameters ##
########################################
## CFG_JSTESTDRIVER_PORT -- server port where JS tests will be run.
CFG_JSTESTDRIVER_PORT = 9876
############################
## Part 29: RefExtract ##
############################
## Refextract can automatically submit tickets (after extracting references)
## to the ticket queue defined by CFG_REFEXTRACT_TICKET_QUEUE, if it is set.
CFG_REFEXTRACT_TICKET_QUEUE = None
## Override the refextract knowledge base (kb) locations
CFG_REFEXTRACT_KBS_OVERRIDE = {}
##################################
## Part 30: CrossRef parameters ##
##################################
## CFG_CROSSREF_USERNAME -- the username used when sending requests
## to the Crossref site.
CFG_CROSSREF_USERNAME =
## CFG_CROSSREF_PASSWORD -- the password used when sending requests
## to the Crossref site.
CFG_CROSSREF_PASSWORD =
#####################################
## Part 31: WebLinkback parameters ##
#####################################
## CFG_WEBLINKBACK_TRACKBACK_ENABLED -- whether to enable trackback support
## 1 to enable, 0 to disable it
CFG_WEBLINKBACK_TRACKBACK_ENABLED = 0
####################################
## Part 33: WebSubmit parameters ##
####################################
## CFG_WEBSUBMIT_USE_MATHJAX -- whether to use MathJax and math
## preview panel within submissions (1) or not (0). Customize your
## websubmit_template.tmpl_mathpreview_header() to enable it for
## specific fields.
## See also CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS
CFG_WEBSUBMIT_USE_MATHJAX = 0
############################
## Part 34: BibWorkflow ##
############################
## Set the worker that will be used to execute workflows.
## Allowed options: Celery
CFG_BIBWORKFLOW_WORKER = worker_celery
## Message broker for the worker
## RabbitMQ - amqp://guest@localhost//
## Redis - redis://localhost:6379/0
CFG_BROKER_URL = amqp://guest@localhost:5672//
## Result backend for the worker
## RabbitMQ - amqp
## Redis - redis://localhost:6379/0
CFG_CELERY_RESULT_BACKEND = amqp
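## For illustration, the broker URL above decomposes into scheme, host, and
## port as a standard URL; this standard-library sketch only shows how a
## worker would read the setting (the actual wiring happens inside the
## Celery worker code, which is not reproduced here).

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

# Mirrors CFG_BROKER_URL above.
parts = urlparse('amqp://guest@localhost:5672//')
scheme, host, port = parts.scheme, parts.hostname, parts.port
```

## A Redis broker such as redis://localhost:6379/0 parses the same way, with
## the database number carried in the URL path.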
####################################
## Part 35: BibField parameters ##
####################################
## CFG_BIBFIELD_MASTER_FORMATS -- the name of all the allowed master formats
## that BibField will work with.
CFG_BIBFIELD_MASTER_FORMATS = marc
##########################
## THAT's ALL, FOLKS! ##
##########################
diff --git a/configure-tests.py b/configure-tests.py
index 8870e886b..e0432f11d 100644
--- a/configure-tests.py
+++ b/configure-tests.py
@@ -1,545 +1,545 @@
## This file is part of Invenio.
-## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
+## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Test the suitability of Python core and the availability of various
Python modules for running Invenio. Warn the user about any
potential troubles. Exit status: 0 if okay, 1 if not okay. Useful for
running from configure.ac.
"""
## minimally recommended/required versions:
-CFG_MIN_PYTHON_VERSION = (2, 4)
+CFG_MIN_PYTHON_VERSION = (2, 6)
CFG_MAX_PYTHON_VERSION = (2, 9, 9999)
CFG_MIN_MYSQLDB_VERSION = "1.2.1_p2"
## 0) import modules needed for this testing:
import string
import sys
import getpass
import subprocess
import re
error_messages = []
warning_messages = []
def wait_for_user(msg):
"""Print MSG and prompt user for confirmation."""
try:
raw_input(msg)
except KeyboardInterrupt:
print "\n\nInstallation aborted."
sys.exit(1)
except EOFError:
print " (continuing in batch mode)"
return
## 1) check Python version:
if sys.version_info < CFG_MIN_PYTHON_VERSION:
error_messages.append(
"""
*******************************************************
** ERROR: TOO OLD PYTHON DETECTED: %s
*******************************************************
** You seem to be using a too old version of Python. **
** You must use at least Python %s. **
** **
** Note that if you have more than one Python **
** installed on your system, you can specify the **
** --with-python configuration option to choose **
** a specific (e.g. non system wide) Python binary. **
** **
** Please upgrade your Python before continuing. **
*******************************************************
""" % (string.replace(sys.version, "\n", ""),
'.'.join(map(str, CFG_MIN_PYTHON_VERSION)))
)
if sys.version_info > CFG_MAX_PYTHON_VERSION:
error_messages.append(
"""
*******************************************************
** ERROR: TOO NEW PYTHON DETECTED: %s
*******************************************************
** You seem to be using a too new version of Python. **
** You must use at most Python %s. **
** **
** Perhaps you have downloaded and are installing an **
** old Invenio version? Please look for more recent **
** Invenio version or please contact the development **
** team at <info@invenio-software.org> about this **
** problem. **
** **
** Installation aborted. **
*******************************************************
""" % (string.replace(sys.version, "\n", ""),
'.'.join(map(str, CFG_MAX_PYTHON_VERSION)))
)
## 2) check for required modules:
try:
import MySQLdb
import base64
import cPickle
import cStringIO
import cgi
import copy
import fileinput
import getopt
import sys
if sys.hexversion < 0x2060000:
import md5
else:
import hashlib
import marshal
import os
import pyparsing
import signal
import tempfile
import time
import traceback
import unicodedata
import urllib
import zlib
import wsgiref
import sqlalchemy
import werkzeug
import jinja2
import flask
import fixture
import flask.ext.assets
import flask.ext.cache
import flask.ext.sqlalchemy
import flask.ext.testing
import wtforms
import flask.ext.wtf
import flask.ext.admin
## Check Werkzeug version
werkzeug_ver = werkzeug.__version__.split(".")
if werkzeug_ver[0] == "0" and int(werkzeug_ver[1]) < 8:
error_messages.append(
"""
*****************************************************
** Werkzeug version %s detected
*****************************************************
** You are using an outdated version of Werkzeug **
** with known problems. Please upgrade Werkzeug to **
** at least v0.8 by running e.g.: **
** pip install Werkzeug --upgrade **
*****************************************************
""" % werkzeug.__version__
)
except ImportError, msg:
error_messages.append("""
*************************************************
** IMPORT ERROR %s
*************************************************
** Perhaps you forgot to install some of the **
** prerequisite Python modules? Please look **
** at our INSTALL file for more details and **
** fix the problem before continuing! **
*************************************************
""" % msg
)
## 3) check for recommended modules:
try:
import rdflib
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that rdflib is needed only if you plan **
** to work with the automatic classification of **
** documents based on RDF-based taxonomies. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import pyRXP
except ImportError, msg:
warning_messages.append("""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that PyRXP is not really required but **
** we recommend it for fast XML MARC parsing. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import dateutil
except ImportError, msg:
warning_messages.append("""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that dateutil is not really required but **
** we recommend it for user-friendly date **
** parsing. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import libxml2
except ImportError, msg:
warning_messages.append("""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that libxml2 is not really required but **
** we recommend it for XML metadata conversions **
** and for fast XML parsing. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import libxslt
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that libxslt is not really required but **
** we recommend it for XML metadata conversions. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import Gnuplot
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that Gnuplot.py is not really required but **
** we recommend it in order to have nice download **
** and citation history graphs on Detailed record **
** pages. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import rauth
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that python-rauth is not really required **
** but we recommend it in order to enable oauth **
** based authentication. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import openid
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that python-openid is not really required **
** but we recommend it in order to enable OpenID **
** based authentication. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
import magic
if not hasattr(magic, "open"):
raise StandardError
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that magic module is not really required **
** but we recommend it in order to have detailed **
** content information about fulltext files. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
except StandardError:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING python-magic
*****************************************************
** The python-magic package you installed is not **
** the one supported by Invenio. Please refer to **
** the INSTALL file for more details. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
"""
)
try:
import reportlab
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that the reportlab module is not really **
** required, but we recommend it if you want to **
** enrich PDFs with OCR information. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
try:
try:
import PyPDF2
except ImportError:
import pyPdf
except ImportError, msg:
warning_messages.append(
"""
*****************************************************
** IMPORT WARNING %s
*****************************************************
** Note that the PyPDF2 or pyPdf module is not **
** really required, but we recommend it if you **
** want to enrich PDFs with OCR information. **
** **
** You can safely continue installing Invenio **
** now, and add this module anytime later. (I.e. **
** even after your Invenio installation is put **
** into production.) **
*****************************************************
""" % msg
)
## 4) check for versions of some important modules:
if MySQLdb.__version__ < CFG_MIN_MYSQLDB_VERSION:
error_messages.append(
"""
*****************************************************
** ERROR: PYTHON MODULE MYSQLDB %s DETECTED
*****************************************************
** You have to upgrade your MySQLdb to at least **
** version %s. You must fix this problem **
** before continuing. Please see the INSTALL file **
** for more details. **
*****************************************************
""" % (MySQLdb.__version__, CFG_MIN_MYSQLDB_VERSION)
)
try:
import Stemmer
try:
from Stemmer import algorithms
except ImportError, msg:
error_messages.append(
"""
*****************************************************
** ERROR: STEMMER MODULE PROBLEM %s
*****************************************************
** Perhaps you are using an old Stemmer version? **
** You must either remove your old Stemmer or else **
** upgrade to Snowball Stemmer
** <http://snowball.tartarus.org/wrappers/PyStemmer-1.0.1.tar.gz>
** before continuing. Please see the INSTALL file **
** for more details. **
*****************************************************
""" % (msg)
)
except ImportError:
pass # no prob, Stemmer is optional
## 5) check for Python.h (needed for intbitset):
try:
from distutils.sysconfig import get_python_inc
path_to_python_h = get_python_inc() + os.sep + 'Python.h'
if not os.path.exists(path_to_python_h):
raise StandardError, "Cannot find %s" % path_to_python_h
except StandardError, msg:
error_messages.append(
"""
*****************************************************
** ERROR: PYTHON HEADER FILE ERROR %s
*****************************************************
** You do not seem to have Python developer files **
** installed (such as Python.h). Some operating **
** systems provide these in a separate Python **
** package called python-dev or python-devel. **
** You must install such a package before **
** continuing the installation process. **
*****************************************************
""" % (msg)
)
## 6) Check if ffmpeg is installed and if so, with the minimum configuration for bibencode
try:
try:
process = subprocess.Popen('ffprobe', stderr=subprocess.PIPE, stdout=subprocess.PIPE)
except OSError:
raise StandardError, "FFMPEG/FFPROBE does not seem to be installed!"
returncode = process.wait()
output = process.communicate()[1]
RE_CONFIGURATION = re.compile("(--enable-[a-z0-9\-]*)")
CONFIGURATION_REQUIRED = (
'--enable-gpl',
'--enable-version3',
'--enable-nonfree',
'--enable-libtheora',
'--enable-libvorbis',
'--enable-libvpx',
'--enable-libopenjpeg'
)
options = RE_CONFIGURATION.findall(output)
if sys.version_info < (2, 6):
import sets
s = sets.Set(CONFIGURATION_REQUIRED)
if not s.issubset(options):
raise StandardError, s.difference(options)
else:
if not set(CONFIGURATION_REQUIRED).issubset(options):
raise StandardError, set(CONFIGURATION_REQUIRED).difference(options)
except StandardError, msg:
warning_messages.append(
"""
*****************************************************
** WARNING: FFMPEG CONFIGURATION MISSING %s
*****************************************************
** You do not seem to have FFmpeg configured with **
** the minimum video codecs to run the demo site. **
** Please install the necessary libraries and **
** re-install FFmpeg according to the Invenio **
** installation manual (INSTALL). **
*****************************************************
""" % (msg)
)
if warning_messages:
print """
******************************************************
** WARNING MESSAGES **
******************************************************
"""
for warning in warning_messages:
print warning
if error_messages:
print """
******************************************************
** ERROR MESSAGES **
******************************************************
"""
for error in error_messages:
print error
if warning_messages and error_messages:
print """
There were %(n_err)s error(s) found that you need to solve.
Please see above, solve them, and re-run configure.
Note that there are also %(n_wrn)s warnings you may want
to look into. Aborting the installation.
""" % {'n_wrn': len(warning_messages),
'n_err': len(error_messages)}
sys.exit(1)
elif error_messages:
print """
There were %(n_err)s error(s) found that you need to solve.
Please see above, solve them, and re-run configure.
Aborting the installation.
""" % {'n_err': len(error_messages)}
sys.exit(1)
elif warning_messages:
print """
There were %(n_wrn)s warnings found that you may want to
look into, solve, and re-run configure before you
continue the installation. However, you can also continue
the installation now and solve these issues later, if you wish.
""" % {'n_wrn': len(warning_messages)}
wait_for_user("Press ENTER to continue the installation...")
diff --git a/configure.ac b/configure.ac
index d3e0fdce8..50763ca8f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1,981 +1,989 @@
## This file is part of Invenio.
## Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
## This is Invenio main configure.ac file. If you change this
## file, then please run "autoreconf" to regenerate the "configure"
## script.
## Initialize autoconf and automake:
AC_INIT([invenio],[m4_esyscmd(./git-version-gen .tarball-version)],[info@invenio-software.org])
AM_INIT_AUTOMAKE([tar-ustar])
## By default we shall install into /opt/invenio. (Do not use
## AC_PREFIX_DEFAULT for this, because it would not work well with
## the localstatedir hack below.)
test "${prefix}" = NONE && prefix=/opt/invenio
## Remove eventual trailing slashes from the prefix value:
test "${prefix%/}" != "" && prefix=${prefix%/}
## Check for install:
AC_PROG_INSTALL
## Check for gettext support:
AM_GNU_GETTEXT(external)
AM_GNU_GETTEXT_VERSION(0.14.4)
## Check for MySQL client:
AC_MSG_CHECKING(for mysql)
AC_ARG_WITH(mysql, AS_HELP_STRING([--with-mysql],[path to a specific MySQL binary (optional)]), MYSQL=${withval})
if test -n "$MYSQL"; then
AC_MSG_RESULT($MYSQL)
else
AC_PATH_PROG(MYSQL, mysql)
if test -z "$MYSQL"; then
AC_MSG_ERROR([
MySQL command-line client was not found in your PATH.
Please install it first.
Available from <http://mysql.com/>.])
fi
fi
## Check for Python:
AC_MSG_CHECKING(for python)
AC_ARG_WITH(python, AS_HELP_STRING([--with-python],[path to a specific Python binary (optional)]), PYTHON=${withval})
if test -n "$PYTHON"; then
AC_MSG_RESULT($PYTHON)
else
AC_PATH_PROG(PYTHON, python)
if test -z "$PYTHON"; then
AC_MSG_ERROR([
Python was not found in your PATH. Please either install it
in your PATH or specify --with-python configure option.
Python is available from <http://python.org/>.])
fi
fi
## Check for OpenOffice.org Python binary:
AC_MSG_CHECKING(for OpenOffice.org Python binary)
AC_ARG_WITH(openoffice-python, AS_HELP_STRING([--with-openoffice-python],[path to a specific OpenOffice.org Python binary (optional)]), OPENOFFICE_PYTHON=`which ${withval}`)
if test -z "$OPENOFFICE_PYTHON"; then
OPENOFFICE_PYTHON=`locate -l 1 -r "o.*office/program/python$"`
OPENOFFICE_PYTHON="$PYTHON $OPENOFFICE_PYTHON"
if test -n "$OPENOFFICE_PYTHON" && ($OPENOFFICE_PYTHON -c "import uno" 2> /dev/null); then
AC_MSG_RESULT($OPENOFFICE_PYTHON)
else
AC_MSG_WARN([
You have not specified the path to the OpenOffice.org Python binary.
OpenOffice.org and Microsoft Office document conversion and fulltext indexing
will not be available. We recommend you to install OpenOffice.org first
and to rerun the configure script. OpenOffice.org is available from
<http://www.openoffice.org/>.])
fi
elif ($OPENOFFICE_PYTHON -c "import uno" 2> /dev/null); then
AC_MSG_RESULT($OPENOFFICE_PYTHON)
else
AC_MSG_ERROR([
The specified OpenOffice.org Python binary is not correctly configured.
Please specify the correct path to the specific OpenOffice Python binary
(OpenOffice.org is available from <http://www.openoffice.org/>).])
fi
## Check for Python version and modules:
AC_MSG_CHECKING(for required Python modules)
$PYTHON ${srcdir}/configure-tests.py
if test $? -ne 0; then
AC_MSG_ERROR([Please fix the above Python problem before continuing.])
fi
AC_MSG_RESULT(found)
## Check for PHP:
AC_PATH_PROG(PHP, php)
## Check for gzip:
AC_PATH_PROG(GZIP, gzip)
if test -z "$GZIP"; then
AC_MSG_WARN([
Gzip was not found in your PATH. It is used in
the WebSubmit module to compress submitted data into an archive.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. Gzip is available from
<http://www.gzip.org/>.])
fi
## Check for gunzip:
AC_PATH_PROG(GUNZIP, gunzip)
if test -z "$GUNZIP"; then
AC_MSG_WARN([
Gunzip was not found in your PATH. It is used in
the WebSubmit module to correctly deal with submitted compressed
files.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. Gunzip is available from
<http://www.gzip.org/>.])
fi
## Check for tar:
AC_PATH_PROG(TAR, tar)
if test -z "$TAR"; then
AC_MSG_WARN([
Tar was not found in your PATH. It is used in
the WebSubmit module to pack the submitted data into an archive.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. Tar is available from
<ftp://prep.ai.mit.edu/pub/gnu/tar/>.])
fi
## Check for wget:
AC_PATH_PROG(WGET, wget)
if test -z "$WGET"; then
AC_MSG_WARN([
wget was not found in your PATH. It is used for the fulltext file
retrieval.
You can continue without it but we recommend you to install it first
and to rerun the configure script. wget is available from
<http://www.gnu.org/software/wget/>.])
fi
## Check for md5sum:
AC_PATH_PROG(MD5SUM, md5sum)
if test -z "$MD5SUM"; then
AC_MSG_WARN([
md5sum was not found in your PATH. It is used for the fulltext file
checksum verification.
You can continue without it but we recommend you to install it first
and to rerun the configure script. md5sum is available from
<http://www.gnu.org/software/coreutils/>.])
fi
## Check for ps2pdf:
AC_PATH_PROG(PS2PDF, ps2pdf)
if test -z "$PS2PDF"; then
AC_MSG_WARN([
ps2pdf was not found in your PATH. It is used in
the WebSubmit module to convert submitted PostScripts into PDF.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. ps2pdf is available from
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>.])
fi
## Check for pdflatex:
AC_PATH_PROG(PDFLATEX, pdflatex)
if test -z "$PDFLATEX"; then
AC_MSG_WARN([
pdflatex was not found in your PATH. It is used in
the WebSubmit module to stamp PDF files.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script.])
fi
## Check for tiff2pdf:
AC_PATH_PROG(TIFF2PDF, tiff2pdf)
if test -z "$TIFF2PDF"; then
AC_MSG_WARN([
tiff2pdf was not found in your PATH. It is used in
the WebSubmit module to convert submitted TIFF files into PDF.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. tiff2pdf is available from
<http://www.remotesensing.org/libtiff/>.])
fi
## Check for gs:
AC_PATH_PROG(GS, gs)
if test -z "$GS"; then
AC_MSG_WARN([
gs was not found in your PATH. It is used in
the WebSubmit module to convert submitted PostScripts into PDF.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. gs is available from
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>.])
fi
## Check for pdftotext:
AC_PATH_PROG(PDFTOTEXT, pdftotext)
if test -z "$PDFTOTEXT"; then
AC_MSG_WARN([
pdftotext was not found in your PATH. It is used for the fulltext indexation
of PDF files.
You can continue without it but you may miss fulltext searching capability
of Invenio. We recommend you to install it first and to rerun the configure
script. pdftotext is available from <http://www.foolabs.com/xpdf/home.html>.
])
fi
## Check for pdfinfo:
AC_PATH_PROG(PDFINFO, pdfinfo)
if test -z "$PDFINFO"; then
AC_MSG_WARN([
pdfinfo was not found in your PATH. It is used for gathering information on
PDF files.
You can continue without it but you may miss this feature of Invenio.
We recommend you to install it first and to rerun the configure
script. pdfinfo is available from <http://www.foolabs.com/xpdf/home.html>.
])
fi
## Check for pdftk:
AC_PATH_PROG(PDFTK, pdftk)
if test -z "$PDFTK"; then
AC_MSG_WARN([
pdftk was not found in your PATH. It is used for the fulltext file stamping.
You can continue without it but you may miss this feature of Invenio.
We recommend you to install it first and to rerun the configure
script. pdftk is available from <http://www.accesspdf.com/pdftk/>.
])
fi
## Check for pdf2ps:
AC_PATH_PROG(PDF2PS, pdf2ps)
if test -z "$PDF2PS"; then
AC_MSG_WARN([
pdf2ps was not found in your PATH. It is used in
the WebSubmit module to convert submitted PDFs into PostScript.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. pdf2ps is available from
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>.])
fi
## Check for pdftops:
AC_PATH_PROG(PDFTOPS, pdftops)
if test -z "$PDFTOPS"; then
AC_MSG_WARN([
pdftops was not found in your PATH. It is used in
the WebSubmit module to convert submitted PDFs into PostScript.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. pdftops is available from
<http://poppler.freedesktop.org/>.])
fi
## Check for pdfopt:
AC_PATH_PROG(PDFOPT, pdfopt)
if test -z "$PDFOPT"; then
AC_MSG_WARN([
pdfopt was not found in your PATH. It is used in
the WebSubmit module to linearize submitted PDFs.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. pdfopt is available from
<http://www.cs.wisc.edu/~ghost/doc/AFPL/>.])
fi
## Check for pdftoppm:
AC_PATH_PROG(PDFTOPPM, pdftoppm)
if test -z "$PDFTOPPM"; then
AC_MSG_WARN([
pdftoppm was not found in your PATH. It is used in
the WebSubmit module to extract images from PDFs for OCR.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. pdftoppm is available from
<http://poppler.freedesktop.org/>.])
fi
## Check for pamfile:
AC_PATH_PROG(PAMFILE, pamfile)
if test -z "$PAMFILE"; then
AC_MSG_WARN([
pamfile was not found in your PATH. It is used in
the WebSubmit module to retrieve the size of images extracted from PDFs
for OCR.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. pamfile is available as part of the netpbm utilities
from:
<http://netpbm.sourceforge.net/>.])
fi
## Check for ocroscript:
AC_PATH_PROG(OCROSCRIPT, ocroscript)
if test -z "$OCROSCRIPT"; then
AC_MSG_WARN([
If you plan to run OCR on your PDFs, then please install
ocroscript now. Otherwise you can safely continue. You also have
the option to install ocroscript later and edit invenio-local.conf to let
Invenio know the path to ocroscript.
ocroscript is available as part of OCRopus from
<http://code.google.com/p/ocropus/>.
NOTE: Since OCRopus is being actively developed and its API is continuously
changing, please install release 0.3.1.])
fi
## Check for pstotext:
AC_PATH_PROG(PSTOTEXT, pstotext)
if test -z "$PSTOTEXT"; then
AC_MSG_WARN([
pstotext was not found in your PATH. It is used for the fulltext indexation
of PDF and PostScript files.
Please install pstotext. Otherwise you can safely continue. You also have
the option to install pstotext later and edit invenio-local.conf to let
Invenio know the path to pstotext.
pstotext is available from <http://www.cs.wisc.edu/~ghost/doc/AFPL/>.
])
fi
## Check for ps2ascii:
AC_PATH_PROG(PSTOASCII, ps2ascii)
if test -z "$PSTOASCII"; then
AC_MSG_WARN([
ps2ascii was not found in your PATH. It is used for the fulltext indexation
of PostScript files.
Please install ps2ascii. Otherwise you can safely continue. You also have
the option to install ps2ascii later and edit invenio-local.conf to let
Invenio know the path to ps2ascii.
ps2ascii is available from <http://www.cs.wisc.edu/~ghost/doc/AFPL/>.
])
fi
## Check for any2djvu:
AC_PATH_PROG(ANY2DJVU, any2djvu)
if test -z "$ANY2DJVU"; then
AC_MSG_WARN([
any2djvu was not found in your PATH. It is used in
the WebSubmit module to convert documents to DJVU.
Please install any2djvu. Otherwise you can safely continue. You also have
the option to install any2djvu later and edit invenio-local.conf to let
Invenio know the path to any2djvu.
any2djvu is available from
<http://djvu.sourceforge.net/>.])
fi
## Check for DJVUPS:
AC_PATH_PROG(DJVUPS, djvups)
if test -z "$DJVUPS"; then
AC_MSG_WARN([
djvups was not found in your PATH. It is used in
the WebSubmit module to convert documents from DJVU.
Please install djvups. Otherwise you can safely continue. You also have
the option to install djvups later and edit invenio-local.conf to let
Invenio know the path to djvups.
djvups is available from
<http://djvu.sourceforge.net/>.])
fi
## Check for DJVUTXT:
AC_PATH_PROG(DJVUTXT, djvutxt)
if test -z "$DJVUTXT"; then
AC_MSG_WARN([
djvutxt was not found in your PATH. It is used in
the WebSubmit module to extract text from DJVU documents.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. djvutxt is available from
<http://djvu.sourceforge.net/>.])
fi
## Check for file:
AC_PATH_PROG(FILE, file)
if test -z "$FILE"; then
AC_MSG_WARN([
File was not found in your PATH. It is used in
the WebSubmit module to check the validity of the submitted files.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. File is available from
<ftp://ftp.astron.com/pub/file/>.])
fi
## Check for convert:
AC_PATH_PROG(CONVERT, convert)
if test -z "$CONVERT"; then
AC_MSG_WARN([
Convert was not found in your PATH. It is used in
the WebSubmit module to create an icon from a submitted picture.
You can continue without it but you will miss some Invenio
functionality. We recommend you to install it first and to rerun
the configure script. Convert is available from
<http://www.imagemagick.org/>.])
fi
## Check for CLISP:
AC_MSG_CHECKING(for clisp)
AC_ARG_WITH(clisp, AS_HELP_STRING([--with-clisp],[path to a specific CLISP binary (optional)]), CLISP=${withval})
if test -n "$CLISP"; then
AC_MSG_RESULT($CLISP)
else
AC_PATH_PROG(CLISP, clisp)
if test -z "$CLISP"; then
AC_MSG_WARN([
GNU CLISP was not found in your PATH. It is used by the WebStat
module to produce statistics about Invenio usage. (Alternatively,
SBCL or CMUCL can be used instead of CLISP.)
You can continue without it but you will miss this feature. We
recommend you to install it first (if you have neither CMUCL
nor SBCL) and to rerun the configure script.
GNU CLISP is available from <http://clisp.cons.org/>.])
fi
fi
## Check for CMUCL:
AC_MSG_CHECKING(for cmucl)
AC_ARG_WITH(cmucl, AS_HELP_STRING([--with-cmucl],[path to a specific CMUCL binary (optional)]), CMUCL=${withval})
if test -n "$CMUCL"; then
AC_MSG_RESULT($CMUCL)
else
AC_PATH_PROG(CMUCL, cmucl)
if test -z "$CMUCL"; then
AC_MSG_CHECKING(for lisp) # CMUCL can also be installed under `lisp' exec name
AC_PATH_PROG(CMUCL, lisp)
fi
if test -z "$CMUCL"; then
AC_MSG_WARN([
CMUCL was not found in your PATH. It is used by the WebStat
module to produce statistics about Invenio usage. (Alternatively,
CLISP or SBCL can be used instead of CMUCL.)
You can continue without it but you will miss this feature. We
recommend you to install it first (if you have neither CLISP
nor SBCL) and to rerun the configure script.
CMUCL is available from <http://www.cons.org/cmucl/>.])
fi
fi
## Check for SBCL:
AC_MSG_CHECKING(for sbcl)
AC_ARG_WITH(sbcl, AS_HELP_STRING([--with-sbcl],[path to a specific SBCL binary (optional)]), SBCL=${withval})
if test -n "$SBCL"; then
AC_MSG_RESULT($SBCL)
else
AC_PATH_PROG(SBCL, sbcl)
if test -z "$SBCL"; then
AC_MSG_WARN([
SBCL was not found in your PATH. It is used by the WebStat
module to produce statistics about Invenio usage. (Alternatively,
CLISP or CMUCL can be used instead of SBCL.)
You can continue without it but you will miss this feature. We
recommend you to install it first (if you have neither CLISP
nor CMUCL) and to rerun the configure script.
SBCL is available from <http://sbcl.sourceforge.net/>.])
fi
fi
## Check for gnuplot:
AC_PATH_PROG(GNUPLOT, gnuplot)
if test -z "$GNUPLOT"; then
AC_MSG_WARN([
Gnuplot was not found in your PATH. It is used by the BibRank
module to produce graphs about download and citation history.
You can continue without it but you will miss these graphs. We
recommend you to install it first and to rerun the configure script.
Gnuplot is available from <http://www.gnuplot.info/>.])
fi
## Check for ffmpeg:
AC_PATH_PROG(FFMPEG, ffmpeg)
AC_PATH_PROG(FFPROBE, ffprobe)
if test -z "$FFMPEG"; then
AC_MSG_WARN([
FFmpeg was not found in your PATH. It is used by the BibEncode
module for video encoding.
You can continue without it, but you will not be able to use BibEncode
and no video submission workflows will be possible.
We recommend you to install it first if you would like to support video
submissions and to rerun the configure script.
FFmpeg is available from <http://www.ffmpeg.org/>.])
fi
## Check for mediainfo:
AC_PATH_PROG(MEDIAINFO, mediainfo)
if test -z "$MEDIAINFO"; then
AC_MSG_WARN([
Mediainfo was not found in your PATH. It is used by the BibEncode
module for video encoding and media metadata handling.
You can continue without it, but you will not be able to use BibEncode
and no video submission workflows will be possible.
We recommend you to install it first if you would like to support video
submissions and to rerun the configure script.
Mediainfo is available from <http://mediainfo.sourceforge.net/>.])
fi
## Substitute variables:
AC_SUBST(VERSION)
AC_SUBST(OPENOFFICE_PYTHON)
AC_SUBST(MYSQL)
AC_SUBST(PYTHON)
AC_SUBST(GZIP)
AC_SUBST(GUNZIP)
AC_SUBST(TAR)
AC_SUBST(WGET)
AC_SUBST(MD5SUM)
AC_SUBST(PS2PDF)
AC_SUBST(GS)
AC_SUBST(PDFTOTEXT)
AC_SUBST(PDFTK)
AC_SUBST(PDF2PS)
AC_SUBST(PDFTOPS)
AC_SUBST(PDFOPT)
AC_SUBST(PDFTOPPM)
AC_SUBST(OCROSCRIPT)
AC_SUBST(PSTOTEXT)
AC_SUBST(PSTOASCII)
AC_SUBST(ANY2DJVU)
AC_SUBST(DJVUPS)
AC_SUBST(DJVUTXT)
AC_SUBST(FILE)
AC_SUBST(CONVERT)
AC_SUBST(GNUPLOT)
AC_SUBST(CLISP)
AC_SUBST(CMUCL)
AC_SUBST(SBCL)
AC_SUBST(CACHEDIR)
AC_SUBST(FFMPEG)
AC_SUBST(MEDIAINFO)
AC_SUBST(FFPROBE)
AC_SUBST(localstatedir, `eval echo "${localstatedir}"`)
## Define output files:
AC_CONFIG_FILES([config.nice \
Makefile \
po/Makefile.in \
config/Makefile \
config/invenio-autotools.conf \
modules/Makefile \
modules/webauthorprofile/Makefile \
modules/webauthorprofile/lib/Makefile \
modules/webauthorprofile/bin/Makefile \
modules/webauthorprofile/bin/webauthorprofile \
modules/bibauthorid/Makefile \
modules/bibauthorid/bin/Makefile \
modules/bibauthorid/bin/bibauthorid \
modules/bibauthorid/doc/Makefile \
modules/bibauthorid/doc/admin/Makefile \
modules/bibauthorid/doc/hacking/Makefile \
modules/bibauthorid/lib/Makefile \
modules/bibauthorid/etc/Makefile \
modules/bibauthorid/etc/name_authority_files/Makefile \
modules/bibauthorid/web/Makefile \
+ modules/bibauthority/Makefile \
+ modules/bibauthority/bin/Makefile \
+ modules/bibauthority/doc/Makefile \
+ modules/bibauthority/doc/admin/Makefile \
+ modules/bibauthority/doc/hacking/Makefile \
+ modules/bibauthority/lib/Makefile \
+ modules/bibauthority/web/Makefile \
modules/bibcatalog/Makefile \
modules/bibcatalog/doc/Makefile \
modules/bibcatalog/doc/admin/Makefile \
modules/bibcatalog/doc/hacking/Makefile \
modules/bibcatalog/lib/Makefile \
modules/bibcheck/Makefile \
modules/bibcheck/doc/Makefile \
modules/bibcheck/doc/admin/Makefile \
modules/bibcheck/doc/hacking/Makefile \
modules/bibcheck/etc/Makefile \
modules/bibcheck/web/Makefile \
modules/bibcheck/web/admin/Makefile \
modules/bibcirculation/Makefile \
modules/bibcirculation/bin/Makefile \
modules/bibcirculation/bin/bibcircd \
modules/bibcirculation/doc/Makefile \
modules/bibcirculation/doc/admin/Makefile \
modules/bibcirculation/doc/hacking/Makefile \
modules/bibcirculation/lib/Makefile \
modules/bibcirculation/web/Makefile \
modules/bibcirculation/web/admin/Makefile \
modules/bibclassify/Makefile \
modules/bibclassify/bin/Makefile \
modules/bibclassify/bin/bibclassify \
modules/bibclassify/doc/Makefile \
modules/bibclassify/doc/admin/Makefile \
modules/bibclassify/doc/hacking/Makefile \
modules/bibclassify/etc/Makefile \
modules/bibclassify/lib/Makefile \
modules/bibconvert/Makefile \
modules/bibconvert/bin/Makefile \
modules/bibconvert/bin/bibconvert \
modules/bibconvert/doc/Makefile \
modules/bibconvert/doc/admin/Makefile \
modules/bibconvert/doc/hacking/Makefile \
modules/bibconvert/etc/Makefile \
modules/bibconvert/lib/Makefile \
modules/bibdocfile/Makefile \
modules/bibdocfile/bin/bibdocfile \
modules/bibdocfile/bin/Makefile \
modules/bibdocfile/doc/Makefile \
modules/bibdocfile/doc/hacking/Makefile \
modules/bibdocfile/lib/Makefile \
modules/bibrecord/Makefile \
modules/bibrecord/bin/Makefile \
modules/bibrecord/bin/xmlmarc2textmarc \
modules/bibrecord/bin/textmarc2xmlmarc \
modules/bibrecord/bin/xmlmarclint \
modules/bibrecord/doc/Makefile \
modules/bibrecord/doc/admin/Makefile \
modules/bibrecord/doc/hacking/Makefile \
modules/bibrecord/etc/Makefile \
modules/bibrecord/lib/Makefile \
modules/bibedit/Makefile \
modules/bibedit/bin/Makefile \
modules/bibedit/bin/bibedit \
modules/bibedit/doc/Makefile \
modules/bibedit/doc/admin/Makefile \
modules/bibedit/doc/hacking/Makefile \
modules/bibedit/etc/Makefile \
modules/bibedit/lib/Makefile \
modules/bibedit/web/Makefile \
modules/bibencode/Makefile \
modules/bibencode/bin/Makefile \
modules/bibencode/bin/bibencode \
modules/bibencode/lib/Makefile \
modules/bibencode/etc/Makefile \
modules/bibencode/www/Makefile \
modules/bibexport/Makefile \
modules/bibexport/bin/Makefile \
modules/bibexport/bin/bibexport \
modules/bibexport/doc/Makefile \
modules/bibexport/doc/admin/Makefile \
modules/bibexport/doc/hacking/Makefile \
modules/bibexport/etc/Makefile \
modules/bibexport/lib/Makefile \
modules/bibexport/web/Makefile \
modules/bibexport/web/admin/Makefile \
modules/bibfield/Makefile \
modules/bibfield/lib/Makefile \
modules/bibfield/lib/functions/Makefile \
modules/bibfield/etc/Makefile \
modules/bibformat/Makefile \
modules/bibformat/bin/Makefile \
modules/bibformat/bin/bibreformat \
modules/bibformat/doc/Makefile \
modules/bibformat/doc/admin/Makefile \
modules/bibformat/doc/hacking/Makefile \
modules/bibformat/etc/Makefile \
modules/bibformat/etc/templates/Makefile \
modules/bibformat/etc/templates/format_templates/Makefile \
modules/bibformat/etc/format_templates/Makefile \
modules/bibformat/etc/output_formats/Makefile \
modules/bibformat/lib/Makefile \
modules/bibformat/lib/elements/Makefile \
modules/bibformat/lib/template_context_functions/Makefile \
modules/bibformat/web/Makefile \
modules/bibformat/web/admin/Makefile \
modules/oaiharvest/Makefile \
modules/oaiharvest/bin/Makefile \
modules/oaiharvest/bin/oaiharvest \
modules/oaiharvest/doc/Makefile \
modules/oaiharvest/doc/admin/Makefile \
modules/oaiharvest/doc/hacking/Makefile \
modules/oaiharvest/lib/Makefile \
modules/oaiharvest/web/Makefile \
modules/oaiharvest/web/admin/Makefile \
modules/oairepository/Makefile \
modules/oairepository/bin/Makefile \
modules/oairepository/bin/oairepositoryupdater \
modules/oairepository/doc/Makefile \
modules/oairepository/doc/admin/Makefile \
modules/oairepository/doc/hacking/Makefile \
modules/oairepository/etc/Makefile \
modules/oairepository/lib/Makefile \
modules/oairepository/web/Makefile \
modules/oairepository/web/admin/Makefile \
modules/bibindex/Makefile \
modules/bibindex/bin/Makefile \
modules/bibindex/bin/bibindex \
modules/bibindex/bin/bibstat \
modules/bibindex/doc/Makefile \
modules/bibindex/doc/admin/Makefile \
modules/bibindex/doc/hacking/Makefile \
modules/bibindex/lib/Makefile \
+ modules/bibindex/lib/tokenizers/Makefile \
modules/bibindex/web/Makefile \
modules/bibindex/web/admin/Makefile \
modules/bibknowledge/Makefile \
modules/bibknowledge/lib/Makefile \
modules/bibknowledge/doc/Makefile \
modules/bibknowledge/doc/admin/Makefile \
modules/bibknowledge/doc/hacking/Makefile \
modules/bibmatch/Makefile \
modules/bibmatch/bin/Makefile \
modules/bibmatch/bin/bibmatch \
modules/bibmatch/doc/Makefile \
modules/bibmatch/doc/admin/Makefile \
modules/bibmatch/doc/hacking/Makefile \
modules/bibmatch/etc/Makefile \
modules/bibmatch/lib/Makefile \
modules/bibmerge/Makefile \
modules/bibmerge/bin/Makefile \
modules/bibmerge/doc/Makefile \
modules/bibmerge/doc/admin/Makefile \
modules/bibmerge/doc/hacking/Makefile \
modules/bibmerge/lib/Makefile \
modules/bibmerge/web/Makefile \
modules/bibmerge/web/admin/Makefile \
modules/bibrank/Makefile \
modules/bibrank/bin/Makefile \
modules/bibrank/bin/bibrank \
modules/bibrank/bin/bibrankgkb \
modules/bibrank/doc/Makefile \
modules/bibrank/doc/admin/Makefile \
modules/bibrank/doc/hacking/Makefile \
modules/bibrank/etc/Makefile \
modules/bibrank/etc/bibrankgkb.cfg \
modules/bibrank/etc/demo_jif.cfg \
modules/bibrank/etc/template_single_tag_rank_method.cfg \
modules/bibrank/lib/Makefile \
modules/bibrank/web/Makefile \
modules/bibrank/web/admin/Makefile \
modules/bibsched/Makefile \
modules/bibsched/bin/Makefile \
modules/bibsched/bin/bibsched \
modules/bibsched/bin/bibtaskex \
modules/bibsched/bin/bibtasklet \
modules/bibsched/doc/Makefile \
modules/bibsched/doc/admin/Makefile \
modules/bibsched/doc/hacking/Makefile \
modules/bibsched/lib/Makefile \
modules/bibsched/lib/tasklets/Makefile \
modules/bibupload/Makefile \
modules/bibsort/Makefile \
modules/bibsort/bin/Makefile \
modules/bibsort/bin/bibsort \
modules/bibsort/lib/Makefile \
modules/bibsort/etc/Makefile \
modules/bibsort/doc/Makefile \
modules/bibsort/doc/admin/Makefile \
modules/bibsort/doc/hacking/Makefile \
modules/bibsort/web/Makefile \
modules/bibsort/web/admin/Makefile \
modules/bibsword/Makefile \
modules/bibsword/bin/Makefile \
modules/bibsword/bin/bibsword \
modules/bibsword/doc/Makefile \
modules/bibsword/doc/admin/Makefile \
modules/bibsword/doc/hacking/Makefile \
modules/bibsword/lib/Makefile \
modules/bibsword/etc/Makefile \
modules/bibupload/bin/Makefile \
modules/bibupload/bin/bibupload \
modules/bibupload/bin/batchuploader \
modules/bibupload/doc/Makefile \
modules/bibupload/doc/admin/Makefile \
modules/bibupload/doc/hacking/Makefile \
modules/bibupload/lib/Makefile \
modules/elmsubmit/Makefile \
modules/elmsubmit/bin/Makefile \
modules/elmsubmit/bin/elmsubmit \
modules/elmsubmit/doc/Makefile \
modules/elmsubmit/doc/admin/Makefile \
modules/elmsubmit/doc/hacking/Makefile \
modules/elmsubmit/etc/Makefile \
modules/elmsubmit/etc/elmsubmit.cfg \
modules/elmsubmit/lib/Makefile \
modules/miscutil/Makefile \
modules/miscutil/bin/Makefile \
modules/miscutil/bin/dbdump \
modules/miscutil/bin/dbexec \
modules/miscutil/bin/inveniocfg \
modules/miscutil/bin/inveniomanage \
modules/miscutil/bin/plotextractor \
modules/miscutil/demo/Makefile \
modules/miscutil/doc/Makefile \
modules/miscutil/doc/hacking/Makefile \
modules/miscutil/etc/Makefile \
modules/miscutil/etc/bash_completion.d/Makefile \
modules/miscutil/etc/bash_completion.d/inveniocfg \
modules/miscutil/etc/ckeditor_scientificchar/Makefile \
modules/miscutil/etc/ckeditor_scientificchar/dialogs/Makefile \
modules/miscutil/etc/ckeditor_scientificchar/lang/Makefile \
modules/miscutil/etc/templates/Makefile \
modules/miscutil/lib/Makefile \
modules/miscutil/lib/upgrades/Makefile \
modules/miscutil/sql/Makefile \
modules/miscutil/web/Makefile \
modules/webaccess/Makefile \
modules/webaccess/bin/Makefile \
modules/webaccess/bin/authaction \
modules/webaccess/bin/webaccessadmin \
modules/webaccess/etc/Makefile \
modules/webaccess/etc/templates/Makefile \
modules/webaccess/doc/Makefile \
modules/webaccess/doc/admin/Makefile \
modules/webaccess/doc/hacking/Makefile \
modules/webaccess/lib/Makefile \
modules/webaccess/web/Makefile \
modules/webaccess/web/admin/Makefile \
modules/webalert/Makefile \
modules/webalert/bin/Makefile \
modules/webalert/bin/alertengine \
modules/webalert/doc/Makefile \
modules/webalert/doc/admin/Makefile \
modules/webalert/doc/hacking/Makefile \
modules/webalert/lib/Makefile \
modules/webalert/web/Makefile \
modules/webbasket/Makefile \
modules/webbasket/doc/Makefile \
modules/webbasket/doc/admin/Makefile \
modules/webbasket/doc/hacking/Makefile \
modules/webbasket/lib/Makefile \
modules/webbasket/web/Makefile \
modules/webcomment/Makefile \
modules/webcomment/doc/Makefile \
modules/webcomment/doc/admin/Makefile \
modules/webcomment/doc/hacking/Makefile \
modules/webcomment/etc/Makefile \
modules/webcomment/etc/templates/Makefile \
modules/webcomment/lib/Makefile \
modules/webcomment/web/Makefile \
modules/webcomment/web/admin/Makefile \
modules/webhelp/Makefile \
modules/webhelp/web/Makefile \
modules/webhelp/web/admin/Makefile \
modules/webhelp/web/admin/howto/Makefile \
modules/webhelp/web/hacking/Makefile \
modules/webjournal/Makefile \
modules/webjournal/etc/Makefile \
modules/webjournal/doc/Makefile \
modules/webjournal/doc/admin/Makefile \
modules/webjournal/doc/hacking/Makefile \
modules/webjournal/lib/Makefile \
modules/webjournal/lib/elements/Makefile \
modules/webjournal/lib/widgets/Makefile \
modules/webjournal/web/Makefile \
modules/webjournal/web/admin/Makefile \
modules/weblinkback/Makefile \
modules/weblinkback/etc/Makefile \
modules/weblinkback/etc/templates/Makefile \
modules/weblinkback/lib/Makefile \
modules/weblinkback/web/Makefile \
modules/weblinkback/web/admin/Makefile \
modules/webmessage/Makefile \
modules/webmessage/bin/Makefile \
modules/webmessage/bin/webmessageadmin \
modules/webmessage/doc/Makefile \
modules/webmessage/doc/admin/Makefile \
modules/webmessage/doc/hacking/Makefile \
modules/webmessage/etc/Makefile \
modules/webmessage/etc/templates/Makefile \
modules/webmessage/lib/Makefile \
modules/webmessage/web/Makefile \
modules/websearch/Makefile \
modules/websearch/bin/Makefile \
modules/websearch/bin/webcoll \
modules/websearch/doc/Makefile \
modules/websearch/doc/admin/Makefile \
modules/websearch/doc/hacking/Makefile \
modules/websearch/etc/Makefile \
modules/websearch/etc/templates/Makefile \
modules/websearch/lib/Makefile \
modules/websearch/lib/facets/Makefile \
modules/websearch/lib/template_context_functions/Makefile \
modules/websearch/web/Makefile \
modules/websearch/web/admin/Makefile \
modules/websession/Makefile \
modules/websession/bin/Makefile \
modules/websession/bin/inveniogc \
modules/websession/etc/Makefile \
modules/websession/etc/templates/Makefile \
modules/websession/doc/Makefile \
modules/websession/doc/admin/Makefile \
modules/websession/doc/hacking/Makefile \
modules/websession/lib/Makefile \
modules/websession/web/Makefile \
modules/webstat/Makefile \
modules/webstat/bin/Makefile \
modules/webstat/bin/webstat \
modules/webstat/bin/webstatadmin \
modules/webstat/doc/Makefile \
modules/webstat/doc/admin/Makefile \
modules/webstat/doc/hacking/Makefile \
modules/webstat/etc/Makefile \
modules/webstat/lib/Makefile \
modules/webstyle/Makefile \
modules/webstyle/bin/Makefile \
modules/webstyle/bin/gotoadmin \
modules/webstyle/bin/webdoc \
modules/webstyle/css/Makefile \
modules/webstyle/doc/Makefile \
modules/webstyle/doc/admin/Makefile \
modules/webstyle/doc/hacking/Makefile \
modules/webstyle/etc/Makefile \
modules/webstyle/etc/templates/Makefile \
modules/webstyle/img/Makefile \
modules/webstyle/lib/Makefile \
modules/webstyle/lib/goto_plugins/Makefile \
modules/websubmit/Makefile \
modules/websubmit/bin/Makefile \
modules/websubmit/bin/inveniounoconv \
modules/websubmit/bin/websubmitadmin \
modules/websubmit/doc/Makefile \
modules/websubmit/doc/admin/Makefile \
modules/websubmit/doc/hacking/Makefile \
modules/websubmit/etc/Makefile \
modules/websubmit/lib/Makefile \
modules/websubmit/lib/functions/Makefile \
modules/websubmit/web/Makefile \
modules/websubmit/web/admin/Makefile \
modules/docextract/Makefile \
modules/docextract/bin/Makefile \
modules/docextract/bin/docextract \
modules/docextract/bin/refextract \
modules/docextract/doc/Makefile \
modules/docextract/lib/Makefile \
modules/docextract/etc/Makefile \
modules/docextract/doc/admin/Makefile \
modules/docextract/doc/hacking/Makefile \
modules/webdeposit/Makefile \
modules/webdeposit/etc/Makefile \
modules/webdeposit/etc/templates/Makefile \
modules/webdeposit/lib/Makefile \
modules/webdeposit/lib/deposition_fields/Makefile \
modules/webdeposit/lib/deposition_forms/Makefile \
modules/webdeposit/lib/deposition_types/Makefile \
modules/bibworkflow/Makefile \
modules/bibworkflow/bin/Makefile \
modules/bibworkflow/doc/Makefile \
modules/bibworkflow/etc/Makefile \
modules/bibworkflow/etc/workflows/Makefile \
modules/bibworkflow/etc/templates/Makefile \
modules/bibworkflow/etc/tasks/Makefile \
modules/bibworkflow/lib/Makefile \
modules/bibworkflow/lib/workers/Makefile \
modules/bibworkflow/web/Makefile \
modules/webtag/Makefile \
modules/webtag/bin/Makefile \
modules/webtag/doc/Makefile \
modules/webtag/etc/Makefile \
modules/webtag/etc/templates/Makefile \
modules/webtag/lib/Makefile \
modules/webtag/lib/template_context_functions/Makefile \
modules/webtag/web/Makefile \
])
## Finally, write output files:
AC_OUTPUT
## Write help:
AC_MSG_RESULT([****************************************************************************])
AC_MSG_RESULT([** Your Invenio installation is now ready for building. **])
AC_MSG_RESULT([** You have entered the following parameters: **])
AC_MSG_RESULT([** - Invenio main install directory: ${prefix}])
AC_MSG_RESULT([** - Python executable: $PYTHON])
AC_MSG_RESULT([** - MySQL client executable: $MYSQL])
AC_MSG_RESULT([** - CLISP executable: $CLISP])
AC_MSG_RESULT([** - CMUCL executable: $CMUCL])
AC_MSG_RESULT([** - SBCL executable: $SBCL])
AC_MSG_RESULT([** Here are the steps to continue the building process: **])
AC_MSG_RESULT([** 1) Type 'make' to build your Invenio system. **])
AC_MSG_RESULT([** 2) Type 'make install' to install your Invenio system. **])
AC_MSG_RESULT([** After that you can start customizing your installation as documented **])
AC_MSG_RESULT([** in the INSTALL file (i.e. edit invenio.conf, run inveniocfg, etc). **])
AC_MSG_RESULT([** Good luck, and thanks for choosing Invenio. **])
AC_MSG_RESULT([** -- Invenio Development Team <info@invenio-software.org> **])
AC_MSG_RESULT([****************************************************************************])
## end of file
diff --git a/modules/Makefile.am b/modules/Makefile.am
index fe1b29a16..09fb012ab 100644
--- a/modules/Makefile.am
+++ b/modules/Makefile.am
@@ -1,63 +1,64 @@
## This file is part of Invenio.
## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
SUBDIRS = bibauthorid \
+ bibauthority \
bibcatalog \
bibcheck \
bibcirculation \
bibclassify \
bibconvert \
bibdocfile \
bibedit \
bibencode \
bibexport \
bibfield \
bibformat \
bibindex \
bibknowledge \
bibmatch \
bibmerge \
bibrank \
bibrecord \
bibsched \
bibsort \
bibsword \
bibupload \
bibworkflow \
elmsubmit \
miscutil \
oaiharvest \
oairepository \
webaccess \
webalert \
webauthorprofile \
webbasket \
webcomment \
webdeposit \
webhelp \
webjournal \
weblinkback \
webmessage \
websearch \
websession \
webstat \
webstyle \
websubmit \
webtag \
docextract
CLEANFILES = *~
diff --git a/po/LINGUAS b/modules/bibauthority/Makefile.am
old mode 100644
new mode 100755
similarity index 77%
copy from po/LINGUAS
copy to modules/bibauthority/Makefile.am
index 8f21a0517..cdaf33b34
--- a/po/LINGUAS
+++ b/modules/bibauthority/Makefile.am
@@ -1,46 +1,21 @@
+##
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+SUBDIRS = bin doc lib web
+
+CLEANFILES = *~
diff --git a/po/LINGUAS b/modules/bibauthority/bin/Makefile.am
old mode 100644
new mode 100755
similarity index 77%
copy from po/LINGUAS
copy to modules/bibauthority/bin/Makefile.am
index 8f21a0517..cb5b94841
--- a/po/LINGUAS
+++ b/modules/bibauthority/bin/Makefile.am
@@ -1,46 +1,19 @@
+##
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+CLEANFILES = *~ *.tmp
diff --git a/po/LINGUAS b/modules/bibauthority/doc/Makefile.am
old mode 100644
new mode 100755
similarity index 69%
copy from po/LINGUAS
copy to modules/bibauthority/doc/Makefile.am
index 8f21a0517..442210bc0
--- a/po/LINGUAS
+++ b/modules/bibauthority/doc/Makefile.am
@@ -1,46 +1,31 @@
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+SUBDIRS = admin hacking
+
+#imgdir = $(localstatedir)/www/img/admin
+
+#img_DATA = authority-records-1.png \
+# authority-records-2.png
+
+webdoclibdir = $(libdir)/webdoc/invenio/help
+
+#webdoclib_DATA = authority-records.webdoc
+
+#EXTRA_DIST = $(img_DATA) $(webdoclib_DATA)
+
+CLEANFILES = *~ *.tmp
diff --git a/po/LINGUAS b/modules/bibauthority/doc/admin/Makefile.am
old mode 100644
new mode 100755
similarity index 77%
copy from po/LINGUAS
copy to modules/bibauthority/doc/admin/Makefile.am
index 8f21a0517..abc95460c
--- a/po/LINGUAS
+++ b/modules/bibauthority/doc/admin/Makefile.am
@@ -1,46 +1,24 @@
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+webdoclibdir = $(libdir)/webdoc/invenio/admin
+
+webdoclib_DATA = bibauthority-admin-guide.webdoc
+
+EXTRA_DIST = $(webdoclib_DATA)
+
+CLEANFILES = *~ *.tmp
diff --git a/modules/bibauthority/doc/admin/bibauthority-admin-guide.webdoc b/modules/bibauthority/doc/admin/bibauthority-admin-guide.webdoc
new file mode 100755
index 000000000..627bf2d85
--- /dev/null
+++ b/modules/bibauthority/doc/admin/bibauthority-admin-guide.webdoc
@@ -0,0 +1,171 @@
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+<!-- WebDoc-Page-Title: _(BibAuthority Admin Guide)_ -->
+<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/admin<lang:link/>">Admin Area</a> -->
+<!-- WebDoc-Page-Revision: $Id$ -->
+
+<h2>Introduction</h2>
+<p>The INVENIO admin can configure the various ways in which authority control works for INVENIO by means of the <code>bibauthority_config.py</code> file. The location and full contents of this configuration file with a commented example configuration are shown at the bottom of this page. Their functionality is explained in the following paragraphs.</p>
+<p><i>For examples of how Authority Control works in Invenio from a user's perspective, cf. <a href="howto-authority">_(HOWTO Manage Authority Records)_</a>.</i></p>
+
+<h2>Enforcing types of authority records</h2>
+<p>INVENIO is originally agnostic about the types of authority records it contains. Everything it needs to know about authority records comes, on the one hand, from the authority record types contained within the '980__a' fields and, on the other hand, from the configurations related to these types. Whereas the '980__a' values are usually edited by the librarians, the INVENIO configuration is the responsibility of the administrator. It is important for librarians and administrators to communicate the exact authority record types as well as the desired functionality relative to the types for the various INVENIO modules.</p>
+
+<h2>BibEdit</h2>
+<p>As admin of an INVENIO instance, you have the possibility of configuring which fields are under authority control. In the “Configuration File Overview” at the end of this page you will find an example of a configuration which will enable the auto-complete functionality for the '100__a', '100__u', '110__a', '130__a', '150__a', '700__a' and '700__u' fields of a bibliographic record in BibEdit. The keys of the “CFG BIBAUTHORITY CONTROLLED FIELDS” dictionary indicate which bibliographic fields are under authority control. If the user types Ctrl-Shift-A while typing within one of these fields, BibEdit will propose an auto-complete drop-down list. The user still has the option to enter values manually without using the drop-down list. The value associated with each key of the dictionary indicates which kind of authority record is to be associated with this field. In the example given, the '100__a' field is associated with the authority record type 'AUTHOR'.</p>
+<p>The “CFG BIBAUTHORITY AUTOSUGGEST OPTIONS” dictionary gives us the remaining configurations, specific only to the auto-suggest functionality. The value for the 'index' key determines which index type will be used to find the authority records that will populate the drop-down with a list of suggestions (cf. the following paragraph on configuring the BibIndex for authority records). The value of the 'insert_here_field' key determines which authority record field contains the value that should be used both for constructing the strings of the entries in the drop-down list and as the value to be inserted directly into the edited subfield if the user clicks on one of the drop-down entries. <!-- The value of the 'sort' key tells INVENIO how to sort the entries in the drop-down list. If a popularity sort is chosen, the drop-down entries will be sorted according to how often the associated authority record is referenced in this particular INVENIO instance. -->Finally, the value for the 'disambiguation_fields' key is an ordered list of authority record fields that are used, in the order in which they appear in the list, to disambiguate between authority records with exactly the same value in their 'insert_here_field'.</p>
+
+<h2>BibIndex</h2>
+<p>As an admin of INVENIO, you have the possibility of configuring how indexing works with regard to authority records that are referenced by bibliographic records. When a bibliographic record is indexed for a particular index type, and that index type contains MARC fields which are under authority control in this particular INVENIO instance (as configured by the “CFG BIBAUTHORITY CONTROLLED FIELDS” dictionary in the bibauthority_config.py configuration file, mentioned above), then the indexer will include authority record data from specific MARC fields of these authority records in the same index. Which authority record fields are used to enrich the indexes for bibliographic records can be configured by the “CFG BIBAUTHORITY AUTHORITY SUBFIELDS TO INDEX” dictionary. In the example below, each of the 4 authority record types ('AUTHOR', 'INSTITUTION', 'JOURNAL' and 'SUBJECT') is given a list of authority record MARC fields which are to be scanned for data to be included in the indexed terms of the dependent bibliographic records. For the 'AUTHOR' authority records, the example specifies that the values of the fields '100__a', '100__d', '100__q', '400__a', '400__d', and '400__q' (i.e. name, alternative names, and year of birth) should all be included in the data to be indexed for any bibliographic records referencing these authority records in their authority-controlled subfields.</p>
+
+<h2>Configuration File Overview</h2>
+<p>The configuration file for the BibAuthority module can be found at <code>invenio/lib/python/invenio/bibauthority_config.py</code>. Below is a commented example configuration to show how one would typically configure the parameters for BibAuthority. The details of how this works were explained in the paragraphs above.</p>
+<pre>
+&#35; CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
+&#35; the authority record field containing the authority record control number
+CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD = '035__a'
+
+&#35; Separator to be used in control numbers to separate the authority type
+&#35; PREFIX (e.g. "INSTITUTION") from the control_no (e.g. "(CERN)abc123")
+CFG_BIBAUTHORITY_PREFIX_SEP = '|'
+
+&#35; the ('980__a') string that identifies an authority record
+CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER = 'AUTHORITY'
+
+&#35; the name of the authority collection.
+&#35; This is needed for searching within the authority record collection.
+CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME = 'Authority Records'
+
+&#35; used in log file and regression tests
+CFG_BIBAUTHORITY_BIBINDEX_UPDATE_MESSAGE = \
+ "Indexing records dependent on modified authority records"
+
+&#35; CFG_BIBAUTHORITY_TYPE_NAMES
+&#35; Some administrators may want to be able to change the names used for the
+&#35; authority types. Although the keys of this dictionary are hard-coded into
+&#35; Invenio, the values are not and can therefore be changed to match whatever
+&#35; values are to be used in the MARC records.
+&#35; WARNING: These values shouldn't be changed on a running INVENIO installation
+&#35; ... since the same values are hard coded into the MARC data,
+&#35; ... including the 980__a subfields of all authority records
+&#35; ... and the $0 subfields of the bibliographic fields under authority control
+CFG_BIBAUTHORITY_TYPE_NAMES = {
+ 'INSTITUTION': 'INSTITUTION',
+ 'AUTHOR': 'AUTHOR',
+ 'JOURNAL': 'JOURNAL',
+ 'SUBJECT': 'SUBJECT',
+}
+
+&#35; CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC
+&#35; 1. tells us which bibliographic subfields are under authority control
+&#35; 2. tells us which bibliographic subfields refer to which type of
+&#35; ... authority record (must conform to the keys of CFG_BIBAUTHORITY_TYPE_NAMES)
+CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC = {
+ '100__a': 'AUTHOR',
+ '100__u': 'INSTITUTION',
+ '110__a': 'INSTITUTION',
+ '130__a': 'JOURNAL',
+ '150__a': 'SUBJECT',
+ '260__b': 'INSTITUTION',
+ '700__a': 'AUTHOR',
+ '700__u': 'INSTITUTION',
+}
+
+&#35; CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY
+&#35; Tells us which authority record subfields are under authority control
+&#35; used by autosuggest feature in BibEdit
+&#35; authority record subfields use the $4 field for the control_no (not $0)
+CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY = {
+ '500__a': 'AUTHOR',
+ '510__a': 'INSTITUTION',
+ '530__a': 'JOURNAL',
+ '550__a': 'SUBJECT',
+ '909C1u': 'INSTITUTION', # used in bfe_affiliation
+ '920__v': 'INSTITUTION', # used by FZ Juelich demo data
+}
+
+&#35; constants for CFG_BIBEDIT_AUTOSUGGEST_TAGS
+&#35; CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA for alphabetical sorting
+&#35; ... of drop-down suggestions
+&#35; CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR for sorting of drop-down
+&#35; ... suggestions according to a popularity ranking
+CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA = 'alphabetical'
+CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR = 'by popularity'
+
+&#35; CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG
+&#35; some additional configuration for auto-suggest drop-down
+&#35; 'field' : which logical or MARC field field to use for this
+&#35; ... auto-suggest type
+&#35; 'insert_here_field' : which authority record field to use
+&#35; ... for insertion into the auto-completed bibedit field
+&#35; 'disambiguation_fields': an ordered list of fields to use
+&#35; ... in case multiple suggestions have the same 'insert_here_field' values
+&#35; TODO: 'sort_by'. This has not been implemented yet !
+CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG = {
+ 'AUTHOR': {
+ 'field': 'authorityauthor',
+ 'insert_here_field': '100__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
+ 'disambiguation_fields': ['100__d', '270__m'],
+ },
+ 'INSTITUTION':{
+ 'field': 'authorityinstitution',
+ 'insert_here_field': '110__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
+ 'disambiguation_fields': ['270__b'],
+ },
+ 'JOURNAL':{
+ 'field': 'authorityjournal',
+ 'insert_here_field': '130__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
+ },
+ 'SUBJECT':{
+ 'field': 'authoritysubject',
+ 'insert_here_field': '150__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
+ },
+}
+
+&#35; list of authority record fields to index for each authority record type
+&#35; R stands for 'repeatable'
+&#35; NR stands for 'non-repeatable'
+CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX = {
+ 'AUTHOR': [
+ '100__a', #Personal Name (NR, NR)
+ '100__d', #Year of birth or other dates (NR, NR)
+ '100__q', #Fuller form of name (NR, NR)
+ '400__a', #(See From Tracing) (R, NR)
+ '400__d', #(See From Tracing) (R, NR)
+ '400__q', #(See From Tracing) (R, NR)
+ ],
+ 'INSTITUTION': [
+ '110__a', #(NR, NR)
+ '410__a', #(R, NR)
+ ],
+ 'JOURNAL': [
+ '130__a', #(NR, NR)
+ '130__f', #(NR, NR)
+ '130__l', #(NR, NR)
+ '430__a', #(R, NR)
+ ],
+ 'SUBJECT': [
+ '150__a', #(NR, NR)
+ '450__a', #(R, NR)
+ ],
+}
+</pre>
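The prefixed control numbers that CFG_BIBAUTHORITY_PREFIX_SEP describes can be illustrated with a minimal Python sketch. This is not Invenio's actual implementation; the helper name `split_control_no` is hypothetical and shown only to clarify the '<TYPE>|<control_no>' convention.

```python
# Minimal sketch (not Invenio's actual code): splitting a prefixed
# authority control number of the kind stored in $0 subfields.
CFG_BIBAUTHORITY_PREFIX_SEP = '|'

def split_control_no(control_no, sep=CFG_BIBAUTHORITY_PREFIX_SEP):
    """Split e.g. 'INSTITUTION|(CERN)abc123' into its type prefix and
    the bare control number; return (None, value) when unprefixed."""
    prefix, _, rest = control_no.partition(sep)
    if rest:
        return prefix, rest
    return None, control_no

print(split_control_no('INSTITUTION|(CERN)abc123'))
# -> ('INSTITUTION', '(CERN)abc123')
```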
diff --git a/po/LINGUAS b/modules/bibauthority/doc/hacking/Makefile.am
old mode 100644
new mode 100755
similarity index 77%
copy from po/LINGUAS
copy to modules/bibauthority/doc/hacking/Makefile.am
index 8f21a0517..6adf37de5
--- a/po/LINGUAS
+++ b/modules/bibauthority/doc/hacking/Makefile.am
@@ -1,46 +1,24 @@
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+webdoclibdir = $(libdir)/webdoc/invenio/hacking
+
+webdoclib_DATA = bibauthority-internals.webdoc
+
+EXTRA_DIST = $(webdoclib_DATA)
+
+CLEANFILES = *~ *.tmp
diff --git a/modules/bibauthority/doc/hacking/bibauthority-internals.webdoc b/modules/bibauthority/doc/hacking/bibauthority-internals.webdoc
new file mode 100755
index 000000000..5e1cfdf53
--- /dev/null
+++ b/modules/bibauthority/doc/hacking/bibauthority-internals.webdoc
@@ -0,0 +1,254 @@
+## -*- mode: html; coding: utf-8; -*-
+
+## This file is part of Invenio.
+## Copyright (C) 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+<!-- WebDoc-Page-Title: BibAuthority Internals -->
+<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/hacking">Hacking Invenio</a> -->
+<!-- WebDoc-Page-Revision: $Id$ -->
+
+Here you will find a few explanations of the inner workings of BibAuthority.
+
+<H2 CLASS="western">Indexing</H2>
+
+<H3 CLASS="western">Introduction</H3>
+<P>There are two cases that need special attention when indexing bibliographic
+data that contains references to authority records.
+The first case is relatively simple and requires the
+enriching of bibliographic data with data from authority records
+whenever a bibliographic record is being indexed. The second is a bit
+more complex, for it requires detecting which bibliographic records
+should be re-indexed, based on referenced authority records having
+been updated within a given date range.</P>
+
+<H3 CLASS="western">Indexing by record ID, by modification date
+or by index type</H3>
+<P>First of all, we need to say something about how INVENIO lets the
+admin index the data. INVENIO's indexer (BibIndex) is always run as a
+task that is executed by INVENIO's scheduler (BibSched). Typically,
+this is done either by scheduling a bibindex task from the command
+line (manually), or it is part of a periodic task (BibTask) run
+directly from BibSched, typically every 5 minutes. In case it is run
+manually, the user has the option of specifying certain record IDs to
+be re-indexed, e.g. by specifying ranges of IDs or collections to be
+re-indexed. In this case, the selected records are re-indexed whether
+or not there were any modifications to the data. Alternatively, the
+user can specify a date range, in which case the indexer will search
+all the record IDs that have been modified in the selected date range
+(by default, the date range would specify all IDs modified since the
+last time the indexer was run) and update the index only for those
+records. As a third option, the user can specify specific types of
+indexes. INVENIO lets you search by different criteria (e.g. 'any
+field', 'title', 'author', 'abstract', 'keyword', 'journal', 'year',
+'fulltext', …), and each of these criteria corresponds to a
+separate index, indexing only the data from the relevant MARC
+subfields. Normally, the indexer would update all index types for any
+given record ID, but with this third option, the user can limit the
+re-indexing to only specific types of indexes if desired.
+</P>
+<P>Note: In reality, INVENIO creates not only 1 but 6 different
+indexes per index type. 3 are forward indexes (mapping words, pairs
+or phrases to record IDs), 3 are reverse indexes (mapping record IDs
+to words, pairs or phrases). The word, pair and phrase indexes are
+used for optimizing the searching speed depending on whether the user
+searches for words, sub-phrases or entire phrases. These details are
+however not relevant for BibAuthority. It simply finds the values to
+be indexed and passes them on to the indexer which indexes them as if
+it were data coming directly from the bibliographic record.</P>
+
+<H3 CLASS="western">Enriching the index data – simple case</H3>
+<P>Once the indexer knows which record ID (and optionally, which
+index type) to re-index, including authority data is simply a
+question of checking whether the MARC subfields currently being
+indexed are under authority control (as specified in the BibAuthority
+configuration file). If they are, the indexer must follow the
+following (pseudo-)algorithm which will fetch the necessary data from
+the referenced authority records:</P>
+<P STYLE="margin-left: 1.25cm"><FONT COLOR="#000000"><B>For</B> each
+subfield and each record ID currently being re-indexed:</FONT></P>
+<P STYLE="margin-left: 2.5cm"><FONT COLOR="#000000"><B>If</B> the
+subfield is under authority control (→ config file):</FONT></P>
+<P STYLE="margin-left: 3.75cm"><FONT COLOR="#000000"><B>Get</B> the
+<I>type</I> of referenced authority record expected for this field</FONT></P>
+<P STYLE="margin-left: 3.75cm"><FONT COLOR="#000000"><B>For</B> each
+authority record control number found in the corresponding 'XXX__0'
+subfields and matching the expected authority record <I>type</I>
+(control number prefix):</FONT></P>
+<P STYLE="margin-left: 5cm"><FONT COLOR="#000000"><B>Find</B> the
+authority record ID (MARC field '001' control number) corresponding
+to the authority record control number (as contained in MARC field
+'035' of the authority record)</FONT></P>
+<P STYLE="margin-left: 5cm"><FONT COLOR="#000000"><B>For</B> each
+authority record subfield marked as index relevant for the given
+$type (→ config file)</FONT></P>
+<P STYLE="margin-left: 6.25cm"><FONT COLOR="#000000"><B>Add</B> the
+values of these subfields to the list of values to be returned and
+used for enriching the indexed strings.</FONT></P>
+<P>The strings collected with this algorithm are simply added to the
+strings already found by the indexer in the regular bibliographic
+record MARC data. Once all the strings are collected, the indexer
+goes on with the usual operation, parsing them 3 different times,
+once for phrases, once for word-pairs, once for words, which are used
+to populate the 6 forward and reverse index tables in the database.</P>
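The enrichment pseudo-algorithm above can be sketched in Python. This is an illustrative stand-in, not Invenio's API: the two config dicts mirror bibauthority_config.py, while `enrich_terms` and the `lookup_authority` callable are hypothetical helpers.

```python
# Illustrative sketch of the enrichment pseudo-algorithm above; the
# config dicts mirror bibauthority_config.py, but enrich_terms and
# lookup_authority are hypothetical stand-ins for Invenio internals.
CONTROLLED_FIELDS = {'100__a': 'AUTHOR'}
SUBFIELDS_TO_INDEX = {'AUTHOR': ['100__a', '100__d', '400__a']}

def enrich_terms(record, subfield, lookup_authority):
    """Collect extra index terms from authority records referenced in
    the XXX__0 subfields accompanying an authority-controlled field."""
    terms = []
    expected_type = CONTROLLED_FIELDS.get(subfield)
    if expected_type is None:
        return terms
    for control_no in record.get(subfield[:5] + '0', []):
        prefix, _, bare = control_no.partition('|')
        if prefix != expected_type:
            continue  # wrong authority type for this field
        authority = lookup_authority(bare)  # resolve 035 -> record
        for auth_field in SUBFIELDS_TO_INDEX[expected_type]:
            terms.extend(authority.get(auth_field, []))
    return terms

# toy data standing in for MARC records
bib = {'100__a': ['Ellis, J.'], '100__0': ['AUTHOR|(CERN)aaa111']}
auth = {'100__a': ['Ellis, John'], '400__a': ['Ellis, Jonathan R.']}
print(enrich_terms(bib, '100__a', lambda cn: auth))
# -> ['Ellis, John', 'Ellis, Jonathan R.']
```

The strings returned this way would simply be appended to those the indexer already collected from the bibliographic record itself.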
+
+<H3 CLASS="western">Updating the index by date range</H3>
+<P>When a bibindex task is created by date range, we are presented
+with a more tricky situation which requires a more complex treatment
+for it to work properly. As long as the bibindex task is configured
+to index by record ID, the simple algorithm described above is enough
+to properly index the authority data along with the data from
+bibliographic records. This is true also if we use the third option
+described above, specifying the particular index type to re-index
+with the bibindex task. However, if we launch a bibindex task based
+on a date range (by default the date range covers the time since the
+last time the bibindex task was run for each of the index types),
+bibindex would have no way to know that it must update the index for
+a specific bibliographic record if one of the authority records it
+references was modified in the specified date range. This would lead
+to incomplete indexes.</P>
+<P>A first idea was to modify the time-stamp for any bibliographic
+records as soon as an authority record is modified. Every MARC record
+in INVENIO has a 'modification_date' time-stamp which indicates to
+the indexer when this record was last modified. If we search for
+dependent bibliographic records every time we modify an authority
+record, and if we then update the 'modification_date' time-stamp for
+each of these dependent bibliographic records, then we can be sure
+that the indexer would find and re-index these bibliographic records
+as well when indexing by a specified date-range. The problem with
+this is a performance problem. If we update the time-stamp for the
+bibliographic record, this record will be re-indexed for all of the
+mentioned index-types ('author', 'abstract', 'fulltext', etc.), even
+though many of them may not cover MARC subfields that are under
+authority control, and hence re-indexing them because of a change in
+an authority record would be quite useless. In an INVENIO
+installation there would typically be 15-30 index-types. Imagine if
+you make a change to a 'journal' authority record and only 1 out of
+the 20+ index-types is for 'journal'. INVENIO would be re-indexing
+20+ index types instead of only the 1 index type which is relevant
+to the type of the changed authority record.</P>
+<P>There are two approaches that could solve this problem equally
+well. The first approach would require checking – for each
+authority record ID which is to be re-indexed – whether there are
+any dependent bibliographic records that need to be re-indexed as
+well. If done in the right manner, this approach would only re-index
+the necessary index types that can contain information from
+referenced authority records, and the user could specify the index
+type to be re-indexed and the right bibliographic records would still
+be found. The second approach works the other way around. Instead of
+waiting until we find a recently modified authority record, and then
+looking for dependent bibliographic records, we directly launch a
+search for bibliographic records containing links to recently updated
+authority records and add the record IDs found in this way to the
+list of record IDs that need to be re-indexed.
+</P>
+<P>Of the two approaches, the second one was chosen based solely upon
+considerations of integration into existing INVENIO code. As indexing
+in INVENIO currently works, it is more natural and easily readable to
+apply the second method than the first.</P>
+<P>According to the second method, the pseudo-algorithm for finding
+the bibliographic record IDs that need to be updated based upon
+recently modified authority records in a given date range looks like
+this:</P>
+<P STYLE="margin-left: 1.25cm"><B>For</B> each index-type to
+re-index:</P>
+<P STYLE="margin-left: 2.5cm"><B>For</B> each <I>subfield</I>
+concerned by the index-type:</P>
+<P STYLE="margin-left: 3.75cm"><B>If</B> the <I>subfield</I> is under
+authority control (→ config file):</P>
+<P STYLE="margin-left: 5cm"><B>Get </B>the <I>type</I> of authority
+record associated with this field</P>
+<P STYLE="margin-left: 5cm"><B>Get </B> all of the record IDs for
+authority records updated in the specified date range.</P>
+<P STYLE="margin-left: 5cm"><B>For</B> each record ID</P>
+<P STYLE="margin-left: 6.25cm"><B>Get </B> the authority record
+control numbers of this record ID</P>
+<P STYLE="margin-left: 6.25cm"><B>For</B> each authority record
+control number</P>
+<P STYLE="margin-left: 7.5cm"><B>Search </B>for and <B>add</B> the
+<SPAN STYLE="font-style: normal">record IDs</SPAN> of bibliographic
+records containing this control number (with <I>type</I> in the
+prefix) in the 'XXX__0' field of the current <I>subfield</I><SPAN STYLE="font-style: normal">
+to the list of record IDs to be returned to the caller to be marked
+as needing re-indexing</SPAN>.</P>
+<P>The record IDs returned in this way are added to the record IDs
+that need to be re-indexed (by date range) and then the rest of the
+indexing can run as usual.</P>
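The nested loops above can be sketched in Python. The following is a self-contained illustration using plain dictionaries in place of Invenio's bibliographic database and search engine; all helper names and sample data here are hypothetical, not actual Invenio APIs.

```python
# Sketch of the date-range pseudo-algorithm above, under the assumption
# that the type prefix and control number are joined with ':' as in the
# examples in this document. All data structures are illustrative stand-ins
# for Invenio's database and search engine, not real Invenio APIs.

# subfields under authority control, and the authority type each refers to
CONTROLLED_SUBFIELDS = {'100__a': 'AUTHOR', '700__u': 'INSTITUTION'}

# authority records modified in the date range of interest:
# record ID -> control numbers found in its '035__a' field
RECENTLY_MODIFIED = {118: ['(SzGeCERN)abc1234']}

# bibliographic records: record ID -> {subfield tag: value}
BIB_RECORDS = {
    5: {'100__a': 'Ellis, John', '100__0': 'AUTHOR:(SzGeCERN)abc1234'},
    6: {'100__a': 'Smith, Jane'},
}

def recids_to_reindex(index_subfields):
    """Return bibliographic record IDs that must be re-indexed because
    an authority record they reference was recently modified."""
    recids = set()
    for subfield in index_subfields:
        auth_type = CONTROLLED_SUBFIELDS.get(subfield)
        if auth_type is None:        # subfield not under authority control
            continue
        for control_nos in RECENTLY_MODIFIED.values():
            for control_no in control_nos:
                wanted = auth_type + ':' + control_no  # 'AUTHOR:(SzGeCERN)abc1234'
                ref_tag = subfield[:-1] + '0'          # '100__a' -> '100__0'
                for recid, fields in BIB_RECORDS.items():
                    if fields.get(ref_tag) == wanted:
                        recids.add(recid)
    return recids

print(sorted(recids_to_reindex(['100__a'])))  # → [5]
```

Only record 5 carries a '100__0' reference to the recently modified authority record, so only its ID is returned for re-indexing.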
+
+<H3 CLASS="western">Implementation specifics</H3>
+<P>The pseudo-algorithms described above were used as described in
+this document, but were not each implemented in a single function. In
+order for parts of them to be reusable and also for the various parts
+to be properly integrated into existing python modules with similar
+functionality (e.g. auxiliary search functions were added to INVENIO's
+search_engine.py code), the pseudo-algorithms were split up into
+multiple nested function calls and integrated where it seemed to best
+fit the existing code base of INVENIO. In the case of the
+pseudo-algorithm described in “Updating the index by date range”,
+the very choice of the algorithm had already depended on how to best
+integrate it into the existing code for date-range related indexing.</P>
+
+<H2 CLASS="western">Cross-referencing between MARC records</H2>
+<P>In order to reference authority records, we use alphanumeric strings
+stored in the $0 subfields of fields that contain other, authority-controlled
+subfields as well. The format of these alphanumeric strings for INVENIO is
+in part determined by the MARC standard itself, which states that:</P>
+<blockquote>
+<i>
+<P CLASS="code-western">Subfield $0 contains the system control
+number of the related authority record, or a standard identifier such
+as an International Standard Name Identifier (ISNI). The control
+number or identifier is preceded by the appropriate MARC Organization
+code (for a related authority record) or the Standard Identifier
+source code (for a standard identifier scheme), enclosed in
+parentheses. See MARC Code List for Organizations for a listing of
+organization codes and Standard Identifier Source Codes for code
+systems for standard identifiers. Subfield $0 is repeatable for
+different control numbers or identifiers.</P>
+</i>
+</blockquote>
+<P STYLE="margin-bottom: 0cm">An example of such a string could be
+“(SzGeCERN)abc1234”, where “SzGeCERN” would be the MARC
+organization code,
+and abc1234 would be the unique identifier for this authority record
+within the given organization.</P>
+<P>Since it is possible for a single field (e.g. field '100') to have
+multiple $0 subfields for the same field entry, we need a way to
+specify which $0 subfield reference is associated with which other
+subfield of the same field entry.</P>
+<P>For example, imagine that in bibliographic records both '700__a'
+('other author' <I>name</I>) as well as '700__u' ('other author'
+<I>affiliation</I>) are under authority control. In this case we
+would have two '700__0' subfields. One of them would reference the
+<I>author</I> authority record (for the <I>name</I><SPAN STYLE="font-style: normal">)</SPAN>,
+the other one would reference an <I>institution</I> authority record
+(for the <I>affiliation</I><SPAN STYLE="font-style: normal">).
+INVENIO needs some way to know which $0 subfield is associated with
+the $a subfield and which one with the $u subfield.</SPAN></P>
+<P STYLE="font-style: normal">We have chosen to solve this in the
+following way. Every $0 subfield value will not only contain the
+authority record control number, but in addition will be prefixed by
+the type of authority record (e.g. 'AUTHOR', 'INSTITUTION', 'JOURNAL'
+or 'SUBJECT'), separated from the control number by a separator, e.g. ':' (configurable). A possible
+$0 subfield value could therefore be: “author:(SzGeCERN)abc1234”.
+This will allow INVENIO to know that the $0 subfield containing
+“author:(SzGeCERN)abc1234” is associated with the $a subfield
+(author's name), containing e.g. “Ellis, John”, whereas the $0
+subfield containing “institution:(SzGeCERN)xyz4321” is associated
+with the $u subfield (author's affiliation/institution) of the same
+field entry, containing e.g. “CERN”.</P>
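The prefixed $0 convention above can be sketched as a small parsing helper. This is an illustration only, assuming the ':' separator used in the examples (the separator is configurable); it is not the actual Invenio implementation.

```python
# Split a $0 subfield value of the form 'type:(ORG)identifier' into its
# authority-type prefix and control number. The ':' separator is an
# assumption matching the examples in this document; Invenio makes it
# configurable.

def split_control_no(subfield_0):
    """Split e.g. 'author:(SzGeCERN)abc1234' into
    ('author', '(SzGeCERN)abc1234')."""
    sep = ':'
    prefix, _, control_no = subfield_0.partition(sep)
    if not control_no.startswith('('):
        # no recognisable type prefix; treat the whole value as the control number
        return '', subfield_0
    return prefix, control_no

print(split_control_no('author:(SzGeCERN)abc1234'))
```

With such a helper, Invenio can match the 'author:'-prefixed $0 value to the $a subfield (author's name) and the 'institution:'-prefixed one to the $u subfield (affiliation) of the same field entry.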
diff --git a/po/LINGUAS b/modules/bibauthority/lib/Makefile.am
old mode 100644
new mode 100755
similarity index 73%
copy from po/LINGUAS
copy to modules/bibauthority/lib/Makefile.am
index 8f21a0517..3045b6ab3
--- a/po/LINGUAS
+++ b/modules/bibauthority/lib/Makefile.am
@@ -1,46 +1,29 @@
+##
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+pylibdir = $(libdir)/python/invenio
+
+pylib_DATA = \
+ bibauthority_config.py \
+ bibauthority_engine.py \
+ bibauthority_tests.py \
+ bibauthority_regression_tests.py
+
+EXTRA_DIST = $(pylib_DATA)
+
+CLEANFILES = *~ *.tmp *.pyc
diff --git a/modules/bibauthority/lib/bibauthority_config.py b/modules/bibauthority/lib/bibauthority_config.py
new file mode 100755
index 000000000..a7af7012f
--- /dev/null
+++ b/modules/bibauthority/lib/bibauthority_config.py
@@ -0,0 +1,147 @@
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
+# the authority record field containing the authority record control number
+CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD = '035__a'
+
+# Separator to be used in control numbers to separate the authority type
+# PREFIX (e.g. "INSTITUTION") from the control_no (e.g. "(CERN)abc123")
+CFG_BIBAUTHORITY_PREFIX_SEP = '|'
+
+# the ('980__a') string that identifies an authority record
+CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER = 'AUTHORITY'
+
+# the name of the authority collection.
+# This is needed for searching within the authority record collection.
+CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME = 'Authority Records'
+
+# CFG_BIBAUTHORITY_TYPE_NAMES
+# Some administrators may want to be able to change the names used for the
+# authority types. Although the keys of this dictionary are hard-coded into
+# Invenio, the values are not and can therefore be changed to match whatever
+# values are to be used in the MARC records.
+# WARNING: These values shouldn't be changed on a running INVENIO installation
+# ... since the same values are hard coded into the MARC data,
+# ... including the 980__a subfields of all authority records
+# ... and the $0 subfields of the bibliographic fields under authority control
+CFG_BIBAUTHORITY_TYPE_NAMES = {
+ 'INSTITUTION': 'INSTITUTION',
+ 'AUTHOR': 'AUTHOR',
+ 'JOURNAL': 'JOURNAL',
+ 'SUBJECT': 'SUBJECT',
+}
+
+# CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC
+# 1. tells us which bibliographic subfields are under authority control
+# 2. tells us which bibliographic subfields refer to which type of
+# ... authority record (must conform to the keys of CFG_BIBAUTHORITY_TYPE_NAMES)
+# Note: if you want to add new tag here you should also append appropriate tag
+# to the miscellaneous index on the BibIndex Admin Site
+CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC = {
+ '100__a': 'AUTHOR',
+ '100__u': 'INSTITUTION',
+ '110__a': 'INSTITUTION',
+ '130__a': 'JOURNAL',
+ '150__a': 'SUBJECT',
+ '260__b': 'INSTITUTION',
+ '700__a': 'AUTHOR',
+ '700__u': 'INSTITUTION',
+}
+
+# CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY
+# Tells us which authority record subfields are under authority control
+# used by autosuggest feature in BibEdit
+# authority record subfields use the $4 field for the control_no (not $0)
+CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY = {
+ '500__a': 'AUTHOR',
+ '510__a': 'INSTITUTION',
+ '530__a': 'JOURNAL',
+ '550__a': 'SUBJECT',
+ '909C1u': 'INSTITUTION', # used in bfe_affiliation
+ '920__v': 'INSTITUTION', # used by FZ Juelich demo data
+}
+
+# constants for CFG_BIBEDIT_AUTOSUGGEST_TAGS
+# CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA for alphabetical sorting
+# ... of drop-down suggestions
+# CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR for sorting of drop-down
+# ... suggestions according to a popularity ranking
+CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA = 'alphabetical'
+CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR = 'by popularity'
+
+# CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG
+# some additional configuration for auto-suggest drop-down
+# 'field' : which logical or MARC field to use for this
+# ... auto-suggest type
+# 'insert_here_field' : which authority record field to use
+# ... for insertion into the auto-completed bibedit field
+# 'disambiguation_fields': an ordered list of fields to use
+# ... in case multiple suggestions have the same 'insert_here_field' values
+# TODO: 'sort_by'. This has not been implemented yet !
+CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG = {
+ 'AUTHOR': {
+ 'field': 'authorityauthor',
+ 'insert_here_field': '100__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
+ 'disambiguation_fields': ['100__d', '270__m'],
+ },
+ 'INSTITUTION':{
+ 'field': 'authorityinstitution',
+ 'insert_here_field': '110__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
+ 'disambiguation_fields': ['270__b'],
+ },
+ 'JOURNAL':{
+ 'field': 'authorityjournal',
+ 'insert_here_field': '130__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
+ },
+ 'SUBJECT':{
+ 'field': 'authoritysubject',
+ 'insert_here_field': '150__a',
+ 'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
+ },
+}
+
+# list of authority record fields to index for each authority record type
+# R stands for 'repeatable'
+# NR stands for 'non-repeatable'
+CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX = {
+ 'AUTHOR': [
+ '100__a', #Personal Name (NR, NR)
+ '100__d', #Year of birth or other dates (NR, NR)
+ '100__q', #Fuller form of name (NR, NR)
+ '400__a', #(See From Tracing) (R, NR)
+ '400__d', #(See From Tracing) (R, NR)
+ '400__q', #(See From Tracing) (R, NR)
+ ],
+ 'INSTITUTION': [
+ '110__a', #(NR, NR)
+ '410__a', #(R, NR)
+ ],
+ 'JOURNAL': [
+ '130__a', #(NR, NR)
+ '130__f', #(NR, NR)
+ '130__l', #(NR, NR)
+ '430__a', #(R, NR)
+ ],
+ 'SUBJECT': [
+ '150__a', #(NR, NR)
+ '450__a', #(R, NR)
+ ],
+}
diff --git a/modules/bibauthority/lib/bibauthority_engine.py b/modules/bibauthority/lib/bibauthority_engine.py
new file mode 100755
index 000000000..ecf77a879
--- /dev/null
+++ b/modules/bibauthority/lib/bibauthority_engine.py
@@ -0,0 +1,289 @@
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# pylint: disable=C0103
+"""Invenio BibAuthority Engine."""
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD, \
+ CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX,\
+ CFG_BIBAUTHORITY_PREFIX_SEP
+
+import re
+from invenio.errorlib import register_exception
+from invenio.search_engine import search_pattern, \
+ record_exists
+from invenio.search_engine_utils import get_fieldvalues
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER
+
+def is_authority_record(recID):
+ """
+ returns whether recID is an authority record
+
+ @param recID: the record id to check
+ @type recID: int
+
+ @return: True or False
+ """
+ # low-level: don't use possibly indexed logical fields !
+ return recID in search_pattern(p='980__a:AUTHORITY')
+
+def get_dependent_records_for_control_no(control_no):
+ """
+ returns a list of recIDs that refer to an authority record containing
+ the given control_no.
+ E.g. if an authority record has the control number
+ "AUTHOR:(CERN)aaa0005" in its '035__a' subfield, then this function will return all
+ recIDs of records that contain any 'XXX__0' subfield
+ containing "AUTHOR:(CERN)aaa0005"
+
+ @param control_no: the control number for an authority record
+ @type control_no: string
+
+ @return: list of recIDs
+ """
+ # We don't want to return the recID whose control number is control_no
+ myRecIDs = _get_low_level_recIDs_intbitset_from_control_no(control_no)
+ # Use search_pattern, since we want to find records from both bibliographic
+ # as well as authority record collections
+ return list(search_pattern(p='"' + control_no+'"') - myRecIDs)
+
+def get_dependent_records_for_recID(recID):
+ """
+ returns a list of recIDs that refer to an authority record containing
+ the given record ID.
+
+ @param recID: the record ID for the authority record
+ @type recID: int
+
+ @return: list of recIDs
+ """
+ recIDs = []
+
+ # get the control numbers
+ control_nos = get_control_nos_from_recID(recID)
+ for control_no in control_nos:
+ recIDs.extend(get_dependent_records_for_control_no(control_no))
+
+ return recIDs
+
+def guess_authority_types(recID):
+ """
+ guesses the type(s) (e.g. AUTHOR, INSTITUTION, etc.)
+ of an authority record (should only have one value)
+
+ @param recID: the record ID of the authority record
+ @type recID: int
+
+ @return: list of strings
+ """
+ types = get_fieldvalues(recID,
+ '980__a',
+ repetitive_values=False) # remove possible duplicates !
+
+ #filter out unwanted information
+ while CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER in types:
+ types.remove(CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER)
+ types = [_type for _type in types if _type.isalpha()]
+
+ return types
+
+def get_low_level_recIDs_from_control_no(control_no):
+ """
+ returns the list of EXISTING record ID(s) of the authority records
+ corresponding to the given (INVENIO) MARC control_no
+ (e.g. 'AUTHOR:(XYZ)abc123')
+ (NB: the list should normally contain exactly 1 element)
+
+ @param control_no: an (INVENIO) MARC internal control number of an authority record
+ @type control_no: string
+
+ @return: list containing the record ID(s) of the referenced authority record
+ (should be only one)
+ """
+ # values returned
+# recIDs = []
+ #check for correct format for control_no
+# control_no = ""
+# if CFG_BIBAUTHORITY_PREFIX_SEP in control_no:
+# auth_prefix, control_no = control_no.split(CFG_BIBAUTHORITY_PREFIX_SEP);
+# #enforce expected enforced_type if present
+# if (enforced_type is None) or (auth_prefix == enforced_type):
+# #low-level search needed e.g. for bibindex
+# hitlist = search_pattern(p='980__a:' + auth_prefix)
+# hitlist &= _get_low_level_recIDs_intbitset_from_control_no(control_no)
+# recIDs = list(hitlist)
+
+ recIDs = list(_get_low_level_recIDs_intbitset_from_control_no(control_no))
+
+ # filter out "DELETED" recIDs
+ recIDs = [recID for recID in recIDs if record_exists(recID) > 0]
+
+ # normally there should be exactly 1 authority record per control_number
+ _assert_unique_control_no(recIDs, control_no)
+
+ # return
+ return recIDs
+
+#def get_low_level_recIDs_from_control_no(control_no):
+# """
+# Wrapper function for _get_low_level_recIDs_intbitset_from_control_no()
+# Returns a list of EXISTING record IDs with control_no
+#
+# @param control_no: an (INVENIO) MARC internal control number to an authority record
+# @type control_no: string
+#
+# @return: list (instead of an intbitset)
+# """
+# #low-level search needed e.g. for bibindex
+# recIDs = list(_get_low_level_recIDs_intbitset_from_control_no(control_no))
+#
+# # filter out "DELETED" recIDs
+# recIDs = [recID for recID in recIDs if record_exists(recID) > 0]
+#
+# # normally there should be exactly 1 authority record per control_number
+# _assert_unique_control_no(recIDs, control_no)
+#
+# # return
+# return recIDs
+
+def _get_low_level_recIDs_intbitset_from_control_no(control_no):
+ """
+ returns the intbitset hitlist of ALL record ID(s) of the authority records
+ corresponding to the given (INVENIO) MARC control number
+ (e.g. '(XYZ)abc123'), (e.g. from the 035 field) of the authority record.
+
+ Note: This function does not filter out DELETED records! The caller
+ of this function must do this filtering itself.
+
+ @param control_no: an (INVENIO) MARC internal control number to an authority record
+ @type control_no: string
+
+ @return: intbitset containing the record ID(s) of the referenced authority record
+ (should be only one)
+ """
+ #low-level search needed e.g. for bibindex
+ hitlist = search_pattern(
+ p=CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD + ":" +
+ '"' + control_no + '"')
+
+ # return
+ return hitlist
+
+def _assert_unique_control_no(recIDs, control_no):
+ """
+ If there are more than one EXISTING recIDs with control_no, log a warning
+
+ @param recIDs: list of record IDs with control_no
+ @type recIDs: list of int
+
+ @param control_no: the control number of the authority record in question
+ @type control_no: string
+ """
+
+ if len(recIDs) > 1:
+ error_message = \
+ "DB inconsistency: multiple rec_ids " + \
+ "(" + ", ".join([str(recID) for recID in recIDs]) + ") " + \
+ "found for authority record control number: " + control_no
+ try:
+ raise Exception
+ except Exception:
+ register_exception(prefix=error_message,
+ alert_admin=True,
+ subject=error_message)
+
+def get_control_nos_from_recID(recID):
+ """
+ get a list of control numbers from the record ID
+
+ @param recID: record ID
+ @type recID: int
+
+ @return: authority record control number
+ """
+ return get_fieldvalues(recID, CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD,
+ repetitive_values=False)
+
+def get_type_from_control_no(control_no):
+ """simply returns the authority record TYPE prefix contained in
+ control_no or else an empty string.
+
+ @param control_no: e.g. "AUTHOR:(CERN)abc123"
+ @type control_no: string
+
+ @return: e.g. "AUTHOR" or ""
+ """
+
+ # pattern: any string, followed by the prefix, followed by a parenthesis
+ pattern = \
+ r'.*' + \
+ r'(?=' + re.escape(CFG_BIBAUTHORITY_PREFIX_SEP) + re.escape('(') + r')'
+ m = re.match(pattern, control_no)
+ return m and m.group(0) or ''
+
+def guess_main_name_from_authority_recID(recID):
+ """
+ get the main name of the authority record
+
+ @param recID: the record ID of authority record
+ @type recID: int
+
+ @return: the main name of this authority record (string)
+ """
+ #tags where the main authority record name can be found
+ main_name_tags = ['100__a', '110__a', '130__a', '150__a']
+ main_name = ''
+ # look for first match only
+ for tag in main_name_tags:
+ fieldvalues = get_fieldvalues(recID, tag, repetitive_values=False)
+ if len(fieldvalues):
+ main_name = fieldvalues[0]
+ break
+ # return first match, if found
+ return main_name
+
+def get_index_strings_by_control_no(control_no):
+ """extracts the index-relevant strings from the authority record referenced by
+ the 'control_no' parameter and returns it as a list of strings
+
+ @param control_no: an (INVENIO) MARC internal control number of an authority record
+ @type control_no: string (e.g. 'author:(ABC)1234')
+
+ @return: list of index-relevant strings from the referenced authority record
+
+ """
+
+ from invenio.bibindex_engine import list_union
+
+ #return value
+ string_list = []
+ #1. get recID and authority type corresponding to control_no
+ rec_IDs = get_low_level_recIDs_from_control_no(control_no)
+ #2. concatenate and return all the info from the interesting fields for this record
+ for rec_id in rec_IDs: # in case we get multiple authority records
+ for tag in CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX.get(get_type_from_control_no(control_no)):
+ new_strings = get_fieldvalues(rec_id, tag)
+ string_list = list_union(new_strings, string_list)
+ #return
+ return string_list
+
diff --git a/modules/bibauthority/lib/bibauthority_regression_tests.py b/modules/bibauthority/lib/bibauthority_regression_tests.py
new file mode 100755
index 000000000..1747b0b2a
--- /dev/null
+++ b/modules/bibauthority/lib/bibauthority_regression_tests.py
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2010, 2011, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibAuthority Regression Test Suite."""
+
+__revision__ = "$Id$"
+
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD, \
+ CFG_BIBAUTHORITY_TYPE_NAMES, \
+ CFG_BIBAUTHORITY_PREFIX_SEP
+
+from invenio.testutils import make_test_suite, run_test_suite, \
+ InvenioTestCase
+from invenio.importutils import lazy_import
+is_authority_record = lazy_import('invenio.bibauthority_engine:is_authority_record')
+get_dependent_records_for_control_no = lazy_import('invenio.bibauthority_engine:get_dependent_records_for_control_no')
+get_dependent_records_for_recID = lazy_import('invenio.bibauthority_engine:get_dependent_records_for_recID')
+guess_authority_types = lazy_import('invenio.bibauthority_engine:guess_authority_types')
+get_low_level_recIDs_from_control_no = lazy_import('invenio.bibauthority_engine:get_low_level_recIDs_from_control_no')
+get_control_nos_from_recID = lazy_import('invenio.bibauthority_engine:get_control_nos_from_recID')
+get_index_strings_by_control_no = lazy_import('invenio.bibauthority_engine:get_index_strings_by_control_no')
+guess_main_name_from_authority_recID = lazy_import('invenio.bibauthority_engine:guess_main_name_from_authority_recID')
+get_fieldvalues = lazy_import('invenio.search_engine_utils:get_fieldvalues')
+
+class BibAuthorityEngineTest(InvenioTestCase):
+ """Regression tests for the BibAuthority engine."""
+
+ def test_bibauthority_is_authority_record(self):
+ """bibauthority - test is_authority_record()"""
+ self.assertFalse(is_authority_record(1))
+ self.assertTrue(is_authority_record(118))
+
+ def test_bibauthority_get_dependent_records_for_control_no(self):
+ """bibauthority - test get_dependent_records_for_control_no()"""
+ control_no_field = CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
+ control_nos = get_fieldvalues(118, control_no_field)
+ count = 0
+ for control_no in control_nos:
+ count += len(get_dependent_records_for_control_no(control_no))
+ self.assertTrue(count)
+
+ def test_bibauthority_get_dependent_records_for_recID(self):
+ """bibauthority - test get_dependent_records_for_recID()"""
+ self.assertTrue(len(get_dependent_records_for_recID(118)))
+
+ def test_bibauthority_guess_authority_types(self):
+ """bibauthority - test guess_authority_types()"""
+ _type = CFG_BIBAUTHORITY_TYPE_NAMES['AUTHOR']
+ self.assertEqual(guess_authority_types(118), [_type])
+
+ def test_bibauthority_get_low_level_recIDs(self):
+ """bibauthority - test get_low_level_recIDs_from_control_no()"""
+ _type = CFG_BIBAUTHORITY_TYPE_NAMES['INSTITUTION']
+ control_no = _type + CFG_BIBAUTHORITY_PREFIX_SEP + "(SzGeCERN)iii0002"
+ recIDs = [121]
+ self.assertEqual(get_low_level_recIDs_from_control_no(control_no),
+ recIDs)
+
+ def test_bibauthority_get_control_nos_from_recID(self):
+ """bibauthority - test get_control_nos_from_recID()"""
+ self.assertTrue(len(get_control_nos_from_recID(118)))
+
+ def test_bibauthority_guess_main_name(self):
+ """bibauthority - test guess_main_name_from_authority_recID()"""
+ recID = 118
+ main_name = 'Ellis, John'
+ self.assertEqual(guess_main_name_from_authority_recID(recID),
+ main_name)
+
+ def test_authority_record_string_by_control_no(self):
+ """bibauthority - simple test of get_index_strings_by_control_no()"""
+ # vars
+ _type = CFG_BIBAUTHORITY_TYPE_NAMES['AUTHOR']
+ control_no = _type + CFG_BIBAUTHORITY_PREFIX_SEP + '(SzGeCERN)aaa0005'
+ string = 'Ellis, Jonathan Richard'
+ # run test
+ self.assertTrue(string in get_index_strings_by_control_no(control_no))
+
+TEST_SUITE = make_test_suite(
+ BibAuthorityEngineTest,
+)
+
+if __name__ == "__main__":
+ run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibauthority/lib/bibauthority_tests.py b/modules/bibauthority/lib/bibauthority_tests.py
new file mode 100755
index 000000000..8088c115c
--- /dev/null
+++ b/modules/bibauthority/lib/bibauthority_tests.py
@@ -0,0 +1,41 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""Unit Tests for BibAuthority"""
+
+from invenio.bibauthority_config import CFG_BIBAUTHORITY_PREFIX_SEP
+
+import unittest
+from invenio.testutils import make_test_suite, run_test_suite
+from invenio.bibauthority_engine import get_type_from_control_no
+
+class TestBibAuthorityEngine(unittest.TestCase):
+ """Unit tests for bibauthority_engine"""
+
+ def test_get_type_from_control_no(self):
+ """bibauthority - test get_type_from_control_no()"""
+ prefix = "JOURNAL"
+ control_no = "(CERN)abcd1234" # must start with a '('
+ self.assertEqual(get_type_from_control_no(
+ prefix + CFG_BIBAUTHORITY_PREFIX_SEP + control_no),
+ prefix)
+
+TEST_SUITE = make_test_suite(TestBibAuthorityEngine)
+
+if __name__ == "__main__":
+ run_test_suite(TEST_SUITE)
diff --git a/po/LINGUAS b/modules/bibauthority/web/Makefile.am
old mode 100644
new mode 100755
similarity index 71%
copy from po/LINGUAS
copy to modules/bibauthority/web/Makefile.am
index 8f21a0517..deeeb4732
--- a/po/LINGUAS
+++ b/modules/bibauthority/web/Makefile.am
@@ -1,46 +1,17 @@
+##
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
-## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
\ No newline at end of file
diff --git a/modules/bibcirculation/lib/bibcirculation.py b/modules/bibcirculation/lib/bibcirculation.py
index 8f926afb2..b5d7dab2e 100644
--- a/modules/bibcirculation/lib/bibcirculation.py
+++ b/modules/bibcirculation/lib/bibcirculation.py
@@ -1,799 +1,799 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Invenio Bibcirculation User.
When applicable, methods should be renamed, refactored and
appropriate documentation added.
"""
__revision__ = "$Id$"
import datetime, time
# Invenio imports
from invenio.config import \
CFG_SITE_LANG, \
CFG_CERN_SITE, \
CFG_SITE_URL
from invenio.webuser import collect_user_info
from invenio.mailutils import send_email
from invenio.messages import gettext_set_language
from invenio.bibrecord import record_get_field_value
from invenio.search_engine import get_record
# Bibcirculation imports
import invenio.bibcirculation_dblayer as db
from invenio.bibcirculationadminlib import load_template
from invenio.bibcirculation_utils import book_title_from_MARC, \
book_information_from_MARC, \
create_ill_record, \
tag_all_requests_as_done, \
generate_tmp_barcode, \
generate_new_due_date, \
update_requests_statuses, \
search_user
from invenio.bibcirculation_cern_ldap import get_user_info_from_ldap
from invenio.bibcirculation_config import CFG_BIBCIRCULATION_LIBRARIAN_EMAIL, \
CFG_BIBCIRCULATION_LOANS_EMAIL, \
CFG_BIBCIRCULATION_ITEM_STATUS_UNDER_REVIEW, \
CFG_BIBCIRCULATION_REQUEST_STATUS_PENDING, \
CFG_BIBCIRCULATION_REQUEST_STATUS_WAITING, \
CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED, \
CFG_BIBCIRCULATION_ILL_STATUS_NEW, \
CFG_BIBCIRCULATION_PROPOSAL_STATUS_NEW, \
AMZ_BOOK_PUBLICATION_DATE_TAG, \
CFG_BIBCIRCULATION_DEFAULT_LIBRARY_ID
import invenio.template
bc_templates = invenio.template.load('bibcirculation')
def perform_borrower_loans(uid, barcode, borrower_id,
request_id, action, ln=CFG_SITE_LANG):
"""
Display all the loans and the requests of a given borrower.
@param barcode: identify the item. Primary key of crcITEM.
@type barcode: string
@param borrower_id: identify the borrower. Primary key of crcBORROWER.
@type borrower_id: int
@param request_id: identify the request. Primary key of crcLOANREQUEST
@type request_id: int
@return body(html)
"""
_ = gettext_set_language(ln)
infos = []
borrower_id = db.get_borrower_id_by_email(db.get_invenio_user_email(uid))
new_due_date = generate_new_due_date(30)
#renew loan
if action == 'renew':
recid = db.get_id_bibrec(barcode)
item_description = db.get_item_description(barcode)
queue = db.get_queue_request(recid, item_description)
if len(queue) != 0 and queue[0][0] != borrower_id:
message = _("It is not possible to renew your loan for %(x_strong_tag_open)s%(x_title)s%(x_strong_tag_close)s") % {'x_title': book_title_from_MARC(recid), 'x_strong_tag_open': '<strong>', 'x_strong_tag_close': '</strong>'}
message += ' ' + _("Another user is waiting for this book.")
infos.append(message)
else:
loan_id = db.get_current_loan_id(barcode)
db.renew_loan(loan_id, new_due_date)
#update_status_if_expired(loan_id)
tag_all_requests_as_done(barcode, borrower_id)
- infos.append(_("Your loan has been renewed with sucess."))
+ infos.append(_("Your loan has been renewed with success."))
#cancel request
elif action == 'cancel':
db.cancel_request(request_id)
barcode_requested = db.get_requested_barcode(request_id)
update_requests_statuses(barcode_requested)
#renew all loans
elif action == 'renew_all':
list_of_barcodes = db.get_borrower_loans_barcodes(borrower_id)
for bc in list_of_barcodes:
bc_recid = db.get_id_bibrec(bc)
item_description = db.get_item_description(bc)
queue = db.get_queue_request(bc_recid, item_description)
#check if there are requests
if len(queue) != 0 and queue[0][0] != borrower_id:
message = _("It is not possible to renew your loan for %(x_strong_tag_open)s%(x_title)s%(x_strong_tag_close)s") % {'x_title': book_title_from_MARC(bc_recid), 'x_strong_tag_open': '<strong>', 'x_strong_tag_close': '</strong>'}
message += ' ' + _("Another user is waiting for this book.")
infos.append(message)
else:
loan_id = db.get_current_loan_id(bc)
db.renew_loan(loan_id, new_due_date)
#update_status_if_expired(loan_id)
tag_all_requests_as_done(bc, borrower_id)
if infos == []:
infos.append(_("All loans have been renewed with success."))
loans = db.get_borrower_loans(borrower_id)
requests = db.get_borrower_requests(borrower_id)
proposals = db.get_borrower_proposals(borrower_id)
body = bc_templates.tmpl_yourloans(loans=loans, requests=requests, proposals=proposals,
borrower_id=borrower_id, infos=infos, ln=ln)
return body
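# The two renewal branches above apply the same rule twice: a loan may be
# renewed only when the request queue for the record is empty or the first
# request in it belongs to the renewing borrower. A minimal sketch of that
# rule as a standalone predicate (the function name and queue shape are
# illustrative assumptions, mirroring db.get_queue_request() rows where
# entry[0] is the requesting borrower's id):

```python
def can_renew(queue, borrower_id):
    """Return True when a loan may be renewed: either nobody is waiting
    for the item, or the first request in the queue belongs to the
    borrower asking for the renewal."""
    return len(queue) == 0 or queue[0][0] == borrower_id

# Empty queue: renewal allowed.
print(can_renew([], 42))        # True
# Someone else is first in the queue: renewal refused.
print(can_renew([(7,)], 42))    # False
# The borrower himself is first: renewal allowed.
print(can_renew([(42,)], 42))   # True
```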
def perform_loanshistoricaloverview(uid, ln=CFG_SITE_LANG):
"""
Display Loans historical overview for user uid.
@param uid: user id
@param ln: language of the page
@return body(html)
"""
invenio_user_email = db.get_invenio_user_email(uid)
borrower_id = db.get_borrower_id_by_email(invenio_user_email)
result = db.get_historical_overview(borrower_id)
body = bc_templates.tmpl_loanshistoricaloverview(result=result, ln=ln)
return body
def perform_get_holdings_information(recid, req, action="borrowal", ln=CFG_SITE_LANG):
"""
Display all the copies of an item. If the parameter action is 'proposal', display
appropriate information to the user.
@param recid: identify the record. Primary key of bibrec.
@type recid: int
@param action: specifies whether the current record is put up to solicit
acquisition proposals ("proposal") or not ("borrowal").
@type action: string
@return body(html)
"""
_ = gettext_set_language(ln)
if action == "proposal":
tag = AMZ_BOOK_PUBLICATION_DATE_TAG
publication_date = record_get_field_value(get_record(recid), tag[:3],
ind1=tag[3], ind2=tag[4],
code=tag[5])
msg = ''
if publication_date:
cur_date = datetime.date.today()
try:
pub_date = time.strptime(publication_date, '%d %b %Y')
pub_date = datetime.date(pub_date[0], pub_date[1], pub_date[2])
if cur_date < pub_date:
msg += _("The publication date of this book is %s.") % (publication_date)
msg += "<br /><br />"
else:
msg += _("This book has no copies in the library. ")
except ValueError:
msg += _("This book has no copies in the library. ")
msg += _("If you think this book is interesting, suggest it and tell us why you consider this \
book important. The library will consider your opinion and if we decide to buy the \
book, we will issue a loan for you as soon as it arrives and send it by internal mail.")
msg += "<br /><br />"
msg += _("In case we decide not to buy the book, we will offer you an interlibrary loan.")
body = bc_templates.tmpl_book_proposal_information(recid, msg, ln=ln)
else:
holdings_information = db.get_holdings_information(recid, False)
body = bc_templates.tmpl_holdings_information(recid=recid,
req=req,
holdings_info=holdings_information,
ln=ln)
return body
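# The proposal branch above unpacks a six-character MARC tag string into a
# field tag, two indicators, and a subfield code by index. A small sketch of
# that slicing; the helper name and the sample tag value are illustrative
# assumptions (the real value comes from AMZ_BOOK_PUBLICATION_DATE_TAG in
# bibcirculation_config):

```python
def split_marc_tag(tag):
    """Split a 6-character MARC tag string the way
    perform_get_holdings_information slices it: tag[:3] is the field
    tag, tag[3]/tag[4] the indicators, tag[5] the subfield code."""
    return {'field': tag[:3], 'ind1': tag[3], 'ind2': tag[4], 'code': tag[5]}

parts = split_marc_tag('269__c')  # hypothetical publication-date tag
print(parts['field'], parts['ind1'], parts['ind2'], parts['code'])
```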
def perform_new_request(recid, barcode, action="borrowal", ln=CFG_SITE_LANG):
"""
Display form to be filled by the user.
@param recid: identify the record. Primary key of bibrec.
@type recid: int
@param barcode: identify the item. Primary key of crcITEM.
@type barcode: string
@return request form
"""
body = bc_templates.tmpl_new_request(recid=recid, barcode=barcode, action=action, ln=ln)
return body
def perform_book_proposal_send(uid, recid, period_from, period_to,
remarks, ln=CFG_SITE_LANG):
"""
The subfield containing the information about the source of importation
of the record acts as the marker for the records put up for acquisition
proposals.
Register the user's book proposal, the period of interest and any remarks
in the 'crcILLREQUEST' table. Add a new 'dummy' copy for the proposed book.
Create a loan(hold) request on behalf of the user for that copy and send
a confirmation e-mail to her/him.
"""
_ = gettext_set_language(ln)
user = collect_user_info(uid)
if CFG_CERN_SITE:
try:
borrower = search_user('ccid', user['external_personid'])
except Exception:
borrower = ()
else:
borrower = search_user('email', user['email'])
if borrower != ():
if not db.has_copies(recid):
tmp_barcode = generate_tmp_barcode()
ill_register_request_with_recid(recid, uid, period_from, period_to, remarks,
conditions='register_acquisition_suggestion',
only_edition='False', barcode=tmp_barcode, ln=CFG_SITE_LANG)
db.add_new_copy(tmp_barcode, recid, library_id=CFG_BIBCIRCULATION_DEFAULT_LIBRARY_ID,
collection='', location='',
description=_("This book was suggested for acquisition"), loan_period='',
status=CFG_BIBCIRCULATION_ITEM_STATUS_UNDER_REVIEW, expected_arrival_date='')
db.delete_brief_format_cache(recid)
return perform_new_request_send_message(uid, recid, period_from, period_to, tmp_barcode,
status=CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED,
mail_subject='Acquisition Suggestion',
mail_template='proposal_notification',
mail_remarks=remarks, ln=CFG_SITE_LANG)
return _("This item already has copies.")
else:
if CFG_CERN_SITE:
message = bc_templates.tmpl_message_request_send_fail_cern("Borrower ID not found.")
else:
message = bc_templates.tmpl_message_request_send_fail_other("Borrower ID not found.")
body = bc_templates.tmpl_new_request_send(message=message, ln=ln)
return body
def perform_new_request_send(uid, recid, period_from, period_to,
barcode, ln=CFG_SITE_LANG):
"""
@param recid: recID - Invenio record identifier
@param barcode: identify the item. Primary key of crcITEM.
@param ln: language of the page
"""
nb_requests = 0
all_copies_on_loan = True
description = db.get_item_description(barcode)
copies = db.get_barcodes(recid, description)
for bc in copies:
nb_requests += db.get_number_requests_per_copy(bc)
if db.is_item_on_loan(bc) is None:
all_copies_on_loan = False
if nb_requests == 0:
if all_copies_on_loan:
status = CFG_BIBCIRCULATION_REQUEST_STATUS_WAITING
else:
status = CFG_BIBCIRCULATION_REQUEST_STATUS_PENDING
else:
status = CFG_BIBCIRCULATION_REQUEST_STATUS_WAITING
return perform_new_request_send_message(uid, recid, period_from, period_to, barcode,
status, mail_subject='New request',
mail_template='notification',
mail_remarks='', ln=ln)
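# The status decision above can be read as a pure function: a new request is
# PENDING only when it is the first request and at least one copy is on the
# shelf; in every other case it is WAITING. A sketch under that reading (the
# constant values are stand-in assumptions for the
# CFG_BIBCIRCULATION_REQUEST_STATUS_* configuration values):

```python
STATUS_PENDING = 'pending'   # assumed stand-in value
STATUS_WAITING = 'waiting'   # assumed stand-in value

def new_request_status(nb_requests, all_copies_on_loan):
    """Mirror perform_new_request_send's decision: pending only if this
    is the first request and a copy is available to serve it."""
    if nb_requests == 0 and not all_copies_on_loan:
        return STATUS_PENDING
    return STATUS_WAITING

print(new_request_status(0, False))  # first request, copy on shelf
print(new_request_status(0, True))   # all copies out
print(new_request_status(3, False))  # other requests queued first
```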
def perform_new_request_send_message(uid, recid, period_from, period_to, barcode,
status, mail_subject, mail_template,
mail_remarks='', ln=CFG_SITE_LANG):
user = collect_user_info(uid)
if CFG_CERN_SITE:
try:
borrower = search_user('ccid', user['external_personid'])
except Exception:
borrower = ()
else:
borrower = search_user('email', user['email'])
if borrower != ():
borrower_id = borrower[0][0]
if db.is_doc_already_requested(recid, barcode, borrower_id):
message = bc_templates.tmpl_message_send_already_requested()
return bc_templates.tmpl_new_request_send(message=message, ln=ln)
borrower_details = db.get_borrower_details(borrower_id)
(_id, ccid, name, email, _phone, address, mailbox) = borrower_details
(title, year, author,
isbn, publisher) = book_information_from_MARC(recid)
req_id = db.new_hold_request(borrower_id, recid, barcode,
period_from, period_to, status)
location = '-'
library = ''
request_date = ''
if status != CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED:
details = db.get_loan_request_details(req_id)
if details:
library = details[3]
location = details[4]
request_date = details[7]
message_template = load_template(mail_template)
# A message to be sent to the user detailing his loan request
# or his new book proposal.
if status == CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED:
message_for_user = message_template % (title)
else:
link_to_holdings_details = CFG_SITE_URL + \
'/record/%s/holdings' % str(recid)
message_for_user = message_template % (name, ccid, email, address,
mailbox, title, author, publisher,
year, isbn, location, library,
link_to_holdings_details, request_date)
send_email(fromaddr = CFG_BIBCIRCULATION_LOANS_EMAIL,
toaddr = email,
subject = mail_subject,
content = message_for_user,
header = '',
footer = '',
attempt_times=1,
attempt_sleeptime=10
)
if status == CFG_BIBCIRCULATION_REQUEST_STATUS_PENDING:
# A message to be sent to the librarian about the pending status.
link_to_item_request_details = CFG_SITE_URL + \
"/admin2/bibcirculation/get_item_requests_details?ln=%s&recid=%s" \
% (ln, str(recid))
message_for_librarian = message_template % (name, ccid, email, address,
mailbox, title, author, publisher,
year, isbn, location, library,
link_to_item_request_details,
request_date)
send_email(fromaddr = CFG_BIBCIRCULATION_LIBRARIAN_EMAIL,
toaddr = CFG_BIBCIRCULATION_LOANS_EMAIL,
subject = mail_subject,
content = message_for_librarian,
header = '',
footer = '',
attempt_times=1,
attempt_sleeptime=10
)
if CFG_CERN_SITE:
if status == CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED:
message = bc_templates.tmpl_message_proposal_send_ok_cern()
else:
message = bc_templates.tmpl_message_request_send_ok_cern()
else:
if status == CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED:
message = bc_templates.tmpl_message_proposal_send_ok_other()
else:
message = bc_templates.tmpl_message_request_send_ok_other()
else:
if CFG_CERN_SITE:
message = bc_templates.tmpl_message_request_send_fail_cern("Borrower ID not found")
else:
message = bc_templates.tmpl_message_request_send_fail_other("Borrower ID not found")
body = bc_templates.tmpl_new_request_send(message=message, ln=ln)
return body
def display_ill_form(ln=CFG_SITE_LANG):
"""
Display the ILL form.
@param ln: language of the page
@type ln: string
"""
body = bc_templates.tmpl_display_ill_form(infos=[], ln=ln)
return body
def ill_request_with_recid(recid, ln=CFG_SITE_LANG):
"""
Display ILL form.
@param recid: identify the record. Primary key of bibrec.
@type recid: int
"""
body = bc_templates.tmpl_ill_request_with_recid(recid=recid,
infos=[],
ln=ln)
return body
def ill_register_request_with_recid(recid, uid, period_of_interest_from,
period_of_interest_to, additional_comments,
conditions, only_edition, barcode='',
ln=CFG_SITE_LANG):
"""
Register a new ILL request.
@param recid: identify the record. Primary key of bibrec.
@type recid: int
@param uid: user id
@type: int
@param period_of_interest_from: period of interest - from(date)
@type period_of_interest_from: string
@param period_of_interest_to: period of interest - to(date)
@type period_of_interest_to: string
"""
_ = gettext_set_language(ln)
# Build a string representation of the book information dictionary.
book_info = "{'recid': " + str(recid) + "}"
user = collect_user_info(uid)
borrower_id = db.get_borrower_id_by_email(user['email'])
if borrower_id is None:
if CFG_CERN_SITE == 1:
result = get_user_info_from_ldap(email=user['email'])
try:
name = result['cn'][0]
except KeyError:
name = None
try:
email = result['mail'][0]
except KeyError:
email = None
try:
phone = result['telephoneNumber'][0]
except KeyError:
phone = None
try:
address = result['physicalDeliveryOfficeName'][0]
except KeyError:
address = None
try:
mailbox = result['postOfficeBox'][0]
except KeyError:
mailbox = None
try:
ccid = result['employeeID'][0]
except KeyError:
ccid = ''
if address is not None:
db.new_borrower(ccid, name, email, phone, address, mailbox, '')
else:
message = bc_templates.tmpl_message_request_send_fail_cern("Office address not available.")
else:
message = bc_templates.tmpl_message_request_send_fail_other("Office address not available.")
return bc_templates.tmpl_ill_register_request_with_recid(
message=message,
ln=ln)
address = db.get_borrower_address(user['email'])
if not address:
if CFG_CERN_SITE == 1:
email = user['email']
result = get_user_info_from_ldap(email)
try:
address = result['physicalDeliveryOfficeName'][0]
except KeyError:
address = None
if address is not None:
db.add_borrower_address(address, email)
else:
message = bc_templates.tmpl_message_request_send_fail_cern("Office address not available.")
else:
message = bc_templates.tmpl_message_request_send_fail_other("Office address not available.")
return bc_templates.tmpl_ill_register_request_with_recid(
message=message,
ln=ln)
if not conditions:
infos = []
infos.append(_("You didn't accept the ILL conditions."))
return bc_templates.tmpl_ill_request_with_recid(recid,
infos=infos,
ln=ln)
elif conditions == 'register_acquisition_suggestion':
# This ILL request entry is a book proposal.
db.ill_register_request(book_info, borrower_id,
period_of_interest_from, period_of_interest_to,
CFG_BIBCIRCULATION_PROPOSAL_STATUS_NEW,
additional_comments,
only_edition or 'False','proposal-book', barcode=barcode)
else:
db.ill_register_request(book_info, borrower_id,
period_of_interest_from, period_of_interest_to,
CFG_BIBCIRCULATION_ILL_STATUS_NEW,
additional_comments,
only_edition or 'False','book', barcode=barcode)
if CFG_CERN_SITE == 1:
message = bc_templates.tmpl_message_request_send_ok_cern()
else:
message = bc_templates.tmpl_message_request_send_ok_other()
#Notify librarian about new ILL request.
send_email(fromaddr=CFG_BIBCIRCULATION_LIBRARIAN_EMAIL,
toaddr=CFG_BIBCIRCULATION_LOANS_EMAIL,
subject='ILL request for books confirmation',
content='',
#hold_request_mail(recid=recid, borrower_id=borrower_id),
attempt_times=1,
attempt_sleeptime=10)
return bc_templates.tmpl_ill_register_request_with_recid(
message=message,
ln=ln)
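# The function above repeats a try/except KeyError for every LDAP attribute
# it extracts. The same lookups can be expressed with one small helper; the
# function name and the sample result are illustrative assumptions, not part
# of the module:

```python
def ldap_first(result, key, default=None):
    """Return the first value stored under `key` in an LDAP result
    dictionary (attribute values are lists), or `default` when the
    attribute is absent or empty."""
    values = result.get(key)
    if values:
        return values[0]
    return default

# Hypothetical LDAP result with a missing phone attribute.
result = {'cn': ['Jane Doe'], 'mail': ['jane.doe@example.org']}
print(ldap_first(result, 'cn'))                      # Jane Doe
print(ldap_first(result, 'telephoneNumber'))         # None
print(ldap_first(result, 'employeeID', default=''))  # empty string
```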
def ill_register_request(uid, title, authors, place, publisher, year, edition,
isbn, period_of_interest_from, period_of_interest_to,
additional_comments, conditions, only_edition, request_type,
barcode='', ln=CFG_SITE_LANG):
"""
Register new ILL request. Create new record (collection: ILL Books)
@param uid: user id
@type: int
@param authors: book's authors
@type authors: string
@param place: place of publication
@type place: string
@param publisher: book's publisher
@type publisher: string
@param year: year of publication
@type year: string
@param edition: book's edition
@type edition: string
@param isbn: book's isbn
@type isbn: string
@param period_of_interest_from: period of interest - from(date)
@type period_of_interest_from: string
@param period_of_interest_to: period of interest - to(date)
@type period_of_interest_to: string
@param additional_comments: comments given by the user
@type additional_comments: string
@param conditions: ILL conditions
@type conditions: boolean
@param only_edition: borrower wants only the given edition
@type only_edition: boolean
"""
_ = gettext_set_language(ln)
item_info = (title, authors, place, publisher, year, edition, isbn)
create_ill_record(item_info)
book_info = {'title': title, 'authors': authors, 'place': place,
'publisher': publisher, 'year': year, 'edition': edition,
'isbn': isbn}
user = collect_user_info(uid)
borrower_id = db.get_borrower_id_by_email(user['email'])
#Check if borrower is on DB.
if borrower_id != 0:
address = db.get_borrower_address(user['email'])
#Check if borrower has an address.
if address != 0:
#Check if borrower has accepted ILL conditions.
if conditions:
#Register ILL request on crcILLREQUEST.
db.ill_register_request(book_info, borrower_id,
period_of_interest_from,
period_of_interest_to,
CFG_BIBCIRCULATION_ILL_STATUS_NEW,
additional_comments,
only_edition or 'False', request_type,
budget_code='', barcode=barcode)
#Display confirmation message.
message = _("Your ILL request has been registered and the " \
"document will be sent to you via internal mail.")
#Notify librarian about new ILL request.
send_email(fromaddr=CFG_BIBCIRCULATION_LIBRARIAN_EMAIL,
toaddr=CFG_BIBCIRCULATION_LOANS_EMAIL,
subject=_('ILL request for books confirmation'),
content="",
attempt_times=1,
attempt_sleeptime=10
)
#Borrower did not accept ILL conditions.
else:
infos = []
infos.append(_("You didn't accept the ILL conditions."))
body = bc_templates.tmpl_display_ill_form(infos=infos, ln=ln)
#Borrower doesn't have an address.
else:
#If BibCirculation at CERN, use LDAP.
if CFG_CERN_SITE == 1:
email = user['email']
result = get_user_info_from_ldap(email)
try:
ldap_address = result['physicalDeliveryOfficeName'][0]
except KeyError:
ldap_address = None
# verify address
if ldap_address is not None:
db.add_borrower_address(ldap_address, email)
db.ill_register_request(book_info, borrower_id,
period_of_interest_from,
period_of_interest_to,
CFG_BIBCIRCULATION_ILL_STATUS_NEW,
additional_comments,
only_edition or 'False',
request_type, budget_code='', barcode=barcode)
message = _("Your ILL request has been registered and" \
" the document will be sent to you via" \
" internal mail.")
send_email(fromaddr=CFG_BIBCIRCULATION_LIBRARIAN_EMAIL,
toaddr=CFG_BIBCIRCULATION_LOANS_EMAIL,
subject=_('ILL request for books confirmation'),
content="",
attempt_times=1,
attempt_sleeptime=10
)
else:
message = _("It is not possible to validate your request.")
message += ' ' + _("Your office address is not available.")
message += ' ' + _("Please contact %(contact_email)s") % \
{'contact_email': CFG_BIBCIRCULATION_LIBRARIAN_EMAIL}
else:
# Get information from CERN LDAP
if CFG_CERN_SITE == 1:
result = get_user_info_from_ldap(email=user['email'])
try:
name = result['cn'][0]
except KeyError:
name = None
try:
email = result['mail'][0]
except KeyError:
email = None
try:
phone = result['telephoneNumber'][0]
except KeyError:
phone = None
try:
address = result['physicalDeliveryOfficeName'][0]
except KeyError:
address = None
try:
mailbox = result['postOfficeBox'][0]
except KeyError:
mailbox = None
try:
ccid = result['employeeID'][0]
except KeyError:
ccid = ''
# verify address
if address is not None:
db.new_borrower(ccid, name, email, phone, address, mailbox, '')
borrower_id = db.get_borrower_id_by_email(email)
db.ill_register_request(book_info, borrower_id,
period_of_interest_from,
period_of_interest_to,
CFG_BIBCIRCULATION_ILL_STATUS_NEW,
additional_comments,
only_edition or 'False',
request_type, budget_code='', barcode=barcode)
message = _("Your ILL request has been registered and" \
" the document will be sent to you via" \
" internal mail.")
send_email(fromaddr=CFG_BIBCIRCULATION_LIBRARIAN_EMAIL,
toaddr=CFG_BIBCIRCULATION_LOANS_EMAIL,
subject='ILL request for books confirmation',
content="",
attempt_times=1,
attempt_sleeptime=10
)
else:
message = _("It is not possible to validate your request.")
message += ' ' + _("Your office address is not available.")
message += ' ' + _("Please contact %(contact_email)s") % \
{'contact_email': CFG_BIBCIRCULATION_LIBRARIAN_EMAIL}
body = bc_templates.tmpl_ill_register_request_with_recid(message=message,
ln=ln)
return body
diff --git a/modules/bibcirculation/lib/bibcirculation_model.py b/modules/bibcirculation/lib/bibcirculation_model.py
index bfcb56f48..342eb350f 100644
--- a/modules/bibcirculation/lib/bibcirculation_model.py
+++ b/modules/bibcirculation/lib/bibcirculation_model.py
@@ -1,284 +1,285 @@
# -*- coding: utf-8 -*-
#
## This file is part of Invenio.
## Copyright (C) 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
bibcirculation database models.
"""
# General imports.
from invenio.sqlalchemyutils import db
# Create your models here.
from invenio.bibedit_model import Bibrec
from invenio.bibcirculation_receivers import \
post_handler_demosite_populate
from invenio.demosite_manager import populate as demosite_populate
from invenio.signalutils import post_command
post_command.connect(post_handler_demosite_populate, sender=demosite_populate)
class CrcBORROWER(db.Model):
"""Represents a CrcBORROWER record."""
def __init__(self):
pass
__tablename__ = 'crcBORROWER'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
ccid = db.Column(db.Integer(15, unsigned=True), nullable=True,
unique=True, server_default=None)
name = db.Column(db.String(255), nullable=False,
server_default='', index=True)
email = db.Column(db.String(255), nullable=False,
server_default='', index=True)
phone = db.Column(db.String(60), nullable=True)
address = db.Column(db.String(60), nullable=True)
mailbox = db.Column(db.String(30), nullable=True)
borrower_since = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
borrower_until = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
notes = db.Column(db.Text, nullable=True)
class CrcLIBRARY(db.Model):
"""Represents a CrcLIBRARY record."""
def __init__(self):
pass
__tablename__ = 'crcLIBRARY'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
name = db.Column(db.String(80), nullable=False,
server_default='')
address = db.Column(db.String(255), nullable=False,
server_default='')
email = db.Column(db.String(255), nullable=False,
server_default='')
phone = db.Column(db.String(30), nullable=False,
server_default='')
- type = db.Column(db.String(30), nullable=True)
+ type = db.Column(db.String(30), nullable=False,
+ server_default='main')
notes = db.Column(db.Text, nullable=True)
class CrcITEM(db.Model):
"""Represents a CrcITEM record."""
def __init__(self):
pass
__tablename__ = 'crcITEM'
barcode = db.Column(db.String(30), nullable=False,
server_default='',
primary_key=True)
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id), nullable=False,
server_default='0')
id_crcLIBRARY = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcLIBRARY.id), nullable=False,
server_default='0')
collection = db.Column(db.String(60), nullable=True)
location = db.Column(db.String(60), nullable=True)
description = db.Column(db.String(60), nullable=True)
loan_period = db.Column(db.String(30), nullable=False,
server_default='')
status = db.Column(db.String(20), nullable=False,
server_default='')
expected_arrival_date = db.Column(db.String(60), nullable=False,
server_default='')
creation_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
modification_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
number_of_requests = db.Column(db.Integer(3, unsigned=True),
nullable=False,server_default='0')
class CrcILLREQUEST(db.Model):
"""Represents a CrcILLREQUEST record."""
def __init__(self):
pass
__tablename__ = 'crcILLREQUEST'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
id_crcBORROWER = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcBORROWER.id),
nullable=False,
server_default='0')
barcode = db.Column(db.String(30), db.ForeignKey(CrcITEM.barcode),
nullable=False,
server_default='')
period_of_interest_from = db.Column(db.DateTime,
nullable=False,
server_default='1900-01-01 00:00:00')
period_of_interest_to = db.Column(db.DateTime,
nullable=False,
server_default='1900-01-01 00:00:00')
id_crcLIBRARY = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcLIBRARY.id), nullable=False,
server_default='0')
request_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
expected_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
arrival_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
due_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
return_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
status = db.Column(db.String(20), nullable=False,
server_default='')
cost = db.Column(db.String(30), nullable=False,
server_default='')
budget_code = db.Column(db.String(60), nullable=False,
server_default='')
item_info = db.Column(db.Text, nullable=True)
request_type = db.Column(db.Text, nullable=True)
borrower_comments = db.Column(db.Text, nullable=True)
only_this_edition = db.Column(db.String(10), nullable=False,
server_default='')
library_notes = db.Column(db.Text, nullable=True)
overdue_letter_number = db.Column(db.Integer(3, unsigned=True),
nullable=False, server_default='0')
overdue_letter_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
borrower = db.relationship(CrcBORROWER, backref='illrequests')
item = db.relationship(CrcITEM, backref='illrequests')
library = db.relationship(CrcLIBRARY, backref='illrequests')
class CrcLOAN(db.Model):
"""Represents a CrcLOAN record."""
def __init__(self):
pass
__tablename__ = 'crcLOAN'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
id_crcBORROWER = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcBORROWER.id), nullable=False, server_default='0')
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, server_default='0')
barcode = db.Column(db.String(30), db.ForeignKey(CrcITEM.barcode), nullable=False,
server_default='')
loaned_on = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
returned_on = db.Column(db.Date, nullable=False,
server_default='0000-00-00')
due_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
number_of_renewals = db.Column(db.Integer(3, unsigned=True), nullable=False,
server_default='0')
overdue_letter_number = db.Column(db.Integer(3, unsigned=True), nullable=False,
server_default='0')
overdue_letter_date = db.Column(db.DateTime,
nullable=False,
server_default='1900-01-01 00:00:00')
status = db.Column(db.String(20), nullable=False,
server_default='')
type = db.Column(db.String(20), nullable=False,
server_default='')
notes = db.Column(db.Text, nullable=True)
borrower = db.relationship(CrcBORROWER, backref='loans')
bibrec = db.relationship(Bibrec, backref='loans')
item = db.relationship(CrcITEM, backref='loans')
class CrcLOANREQUEST(db.Model):
"""Represents a CrcLOANREQUEST record."""
def __init__(self):
pass
__tablename__ = 'crcLOANREQUEST'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
id_crcBORROWER = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcBORROWER.id), nullable=False, server_default='0')
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, server_default='0')
barcode = db.Column(db.String(30), db.ForeignKey(CrcITEM.barcode), nullable=False,
server_default='')
period_of_interest_from = db.Column(db.DateTime,
nullable=False,
server_default='1900-01-01 00:00:00')
period_of_interest_to = db.Column(db.DateTime,
nullable=False,
server_default='1900-01-01 00:00:00')
status = db.Column(db.String(20), nullable=False,
server_default='')
notes = db.Column(db.Text, nullable=True)
request_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
borrower = db.relationship(CrcBORROWER, backref='loanrequests')
bibrec = db.relationship(Bibrec, backref='loanrequests')
item = db.relationship(CrcITEM, backref='loanrequests')
class CrcVENDOR(db.Model):
"""Represents a CrcVENDOR record."""
def __init__(self):
pass
__tablename__ = 'crcVENDOR'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
name = db.Column(db.String(80), nullable=False,
server_default='')
address = db.Column(db.String(255), nullable=False,
server_default='')
email = db.Column(db.String(255), nullable=False,
server_default='')
phone = db.Column(db.String(30), nullable=False,
server_default='')
notes = db.Column(db.Text, nullable=True)
class CrcPURCHASE(db.Model):
"""Represents a CrcPURCHASE record."""
def __init__(self):
pass
__tablename__ = 'crcPURCHASE'
id = db.Column(db.Integer(15, unsigned=True), nullable=False,
primary_key=True,
autoincrement=True)
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, server_default='0')
id_crcVENDOR = db.Column(db.Integer(15, unsigned=True),
db.ForeignKey(CrcVENDOR.id), nullable=False, server_default='0')
ordered_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
expected_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
price = db.Column(db.String(20), nullable=False,
server_default='0')
status = db.Column(db.String(20), nullable=False,
server_default='')
notes = db.Column(db.Text, nullable=True)
bibrec = db.relationship(Bibrec, backref='purchases')
vendor = db.relationship(CrcVENDOR, backref='purchases')
__all__ = ['CrcBORROWER',
'CrcLIBRARY',
'CrcITEM',
'CrcILLREQUEST',
'CrcLOAN',
'CrcLOANREQUEST',
'CrcVENDOR',
'CrcPURCHASE']
diff --git a/modules/bibedit/lib/bibedit_engine.js b/modules/bibedit/lib/bibedit_engine.js
index 39e27d18f..c2a196c58 100644
--- a/modules/bibedit/lib/bibedit_engine.js
+++ b/modules/bibedit/lib/bibedit_engine.js
@@ -1,6009 +1,6009 @@
/*
* This file is part of Invenio.
* Copyright (C) 2009, 2010, 2011, 2012, 2013 CERN.
*
* Invenio is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* Invenio is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Invenio; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
*/
/*
* This is the main BibEdit Javascript.
*/
/* ************************* Table of contents ********************************
*
* 1. Global variables
*
* 2. Initialization
* - $()
* - initJeditable
* - initMisc
*
* 3. Ajax
* - initAjax
* - createReq
* - onAjaxError
* - onAjaxSuccess
* - queue_request
* - save_changes
*
* 4. Hash management
* - initStateFromHash
* - deserializeHash
* - changeAndSerializeHash
*
* 5. Data logic
* - getTagsSorted
* - getFieldPositionInTag
* - getPreviousTag
* - deleteFieldFromTag
* - cmpFields
* - fieldIsProtected
* - containsProtected
* - getMARC
* - getFieldTag
* - getSubfieldTag
* - validMARC
*
* 6. Record UI
* - onNewRecordClick
* - getRecord
* - onGetRecordSuccess
* - onSubmitClick
* - onPreviewClick
* - onCancelClick
* - onCloneRecordClick
* - onDeleteRecordClick
* - onMergeClick
* - bindNewRecordHandlers
* - cleanUp
* - addHandler_autocompleteAffiliations
*
* 7. Editor UI
* - colorFields
* - reColorFields
* - onMARCTagsClick
* - onHumanTagsClick
* - updateTags
* - onFieldBoxClick
* - onSubfieldBoxClick
* - onAddFieldClick
* - onAddFieldControlfieldClick
* - onAddFieldChange
* - onAddFieldSave
* - onAddSubfieldsClick
* - onAddSubfieldsChange
* - onAddSubfieldsSave
* - onDoubleClick
* - onContentChange
* - onMoveSubfieldClick
* - onDeleteClick
*/
/*
* **************************** 1. Global variables ****************************
*/
// Record data
var gRecID = null;
var gRecIDLoading = null;
var gRecRev = null;
var gRecRevAuthor = null;
var gRecLatestRev = null;
var gRecord = null;
// Search results (record IDs)
var gResultSet = null;
// Current index in the result set
var gResultSetIndex = null;
// Tag format.
var gTagFormat = null;
// Has the record been modified?
var gRecordDirty = false;
// Last recorded cache modification time
var gCacheMTime = null;
// Are we navigating a set of records?
var gNavigatingRecordSet = false;
// The current hash (fragment part of the URL).
var gHash;
// The current hash deserialized to an object.
var gHashParsed;
// Hash check timer ID.
var gHashCheckTimerID;
// The previous and current state (this is not exactly the same as the state
// parameter, but an internal state control mechanism).
var gPrevState;
var gState;
// A current status
var gCurrentStatus;
// Submission mode. Possible values are default and textmarc
var gSubmitMode = 'default';
// A global array of visible changes associated with the currently viewed record.
// It is always cleared when a new change set is applied and is then used for
// redrawing the changed fields. The index in this array is used when referring
// to a particular change (e.g. for finding the corresponding box).
var gHoldingPenChanges = [];
// A global cache used to avoid retrieving the same Holding Pen changes twice:
// a dictionary indexed by Holding Pen entry identifiers, containing the
// JavaScript objects that represent the records. Thanks to this cache,
// applying previously previewed changes, as well as previewing the same
// change a second time, is much faster.
var gHoldingPenLoadedChanges = {};
// Changes that have already been processed and should no longer be displayed
var gDisabledHpEntries = {};
// Is the read-only mode enabled?
var gReadOnlyMode = false;
// revisions history
var gRecRevisionHistory = [];
var gUndoList = []; // list of possible undo operations
var gRedoList = []; // list of possible redo operations
// number of bibcirculation copies from the retrieval time
var gPhysCopiesNum = 0;
var gBibCircUrl = null;
var gDisplayBibCircPanel = false;
// KB related variables
var gKBSubject = null;
var gKBInstitution = null;
// Does the record have a PDF attached?
var gRecordHasPDF = false;
// queue with all requests to be sent to server
var gReqQueue = [];
// count number of requests since last save
var gReqCounter = 0;
/*
* **************************** 2. Initialization ******************************
*/
window.onload = function(){
if (typeof(jQuery) == 'undefined'){
alert('ERROR: jQuery not found!\n\n' +
'The Record Editor requires jQuery, which does not appear to be ' +
'installed on this server. Please alert your system ' +
'administrator.\n\nInstructions on how to install jQuery and other ' +
"required plug-ins can be found in Invenio's INSTALL file.");
var imgError = document.createElement('img');
imgError.setAttribute('src', '/img/circle_red.png');
var txtError = document.createTextNode('jQuery missing');
var cellIndicator = document.getElementById('cellIndicator');
cellIndicator.replaceChild(imgError, cellIndicator.firstChild);
var cellStatus = document.getElementById('cellStatus');
cellStatus.replaceChild(txtError, cellStatus.firstChild);
}
};
function resize_content() {
/*
* Resize the content table so that it always fits in the available screen
* and does not produce two different scroll bars
*/
var bibedit_table_top = $("#bibEditContentTable").offset().top;
var bibedit_table_height = Math.round(.93 * ($(window).height() - bibedit_table_top));
bibedit_table_height = parseInt(bibedit_table_height, 10) + 'px';
$("#bibEditContentTable").css('height', bibedit_table_height);
}
function init_bibedit() {
/*
* Initialize all components.
*/
initMenu();
initDialogs();
initJeditable();
initAjax();
initMisc();
createTopToolbar();
initStateFromHash();
gHashCheckTimerID = setInterval(initStateFromHash, gHASH_CHECK_INTERVAL);
initHotkeys();
initClipboardLibrary();
initClipboard();
bindFocusHandlers();
// Modify BibEdit content table height dynamically to avoid going over the
// viewport
resize_content();
$(window).bind('resize', resize_content);
};
function failInReadOnly(){
/** Check whether the current BibEdit mode is read-only. If so, a warning
dialog is displayed and true is returned.
If BibEdit is in read-write mode, false is returned.
*/
if (gReadOnlyMode === true){
alert("This operation cannot be performed in read-only mode. Please switch to read-write mode and try again.");
return true;
}
else{
return false;
}
}
function initClipboard(){
// attaching the events -> handlers are stored in bibedit_engine.js file
$(document).bind("copy", onPerformCopy);
$(document).bind("paste", onPerformPaste);
}
function initDialogs(){
/*
* Overrides _makeDraggable from jQuery UI dialog code in order to allow
* the dialog go off the viewport
*
*/
if (!$.ui.dialog.prototype._makeDraggableBase) {
$.ui.dialog.prototype._makeDraggableBase = $.ui.dialog.prototype._makeDraggable;
$.ui.dialog.prototype._makeDraggable = function() {
this._makeDraggableBase();
this.uiDialog.draggable("option", "containment", false);
};
}
}
/**
* Error handler when deleting cache of the record
*/
function onDeleteRecordCacheError(XHR, textStatus, errorThrown) {
console.log("Cannot delete record cache file");
updateStatus('ready');
}
function initMisc(){
/*
* Miscellaneous initialization operations.
*/
// CERN allows for capital MARC indicators.
if (gCERN_SITE){
validMARC.reIndicator1 = /[\dA-Za-z]{1}/;
validMARC.reIndicator2 = /[\dA-Za-z]{1}/;
}
// Warn user if BibEdit is being closed while a record is open.
window.onbeforeunload = function() {
if (gRecID && gRecordDirty){
return '******************** WARNING ********************\n' +
' You have unsubmitted changes.\n\n' +
'You should go back to the page and click either:\n' +
' * Submit (to save your changes permanently)\n or\n' +
' * Cancel (to discard your changes)';
}
else {
createReq({recID: gRecID, requestType: 'deleteRecordCache'},
function() {}, false, undefined, onDeleteRecordCacheError);
}
};
// Initialize the BibCirculation integration plugin
$("#bibEditBibCirculationBtn").bind("click", onBibCirculationBtnClicked);
}
function initJeditable(){
/*
* Overwrite Jeditable plugin function to add the autocomplete handler
* to textboxes corresponding to fields in gTagsToAutocomplete
*/
$.editable.types['textarea'].element = function(settings, original) {
var form = this;
var textarea = $('<textarea />');
if (settings.rows) {
textarea.attr('rows', settings.rows);
} else if (settings.height != "none") {
textarea.height(settings.height);
}
if (settings.cols) {
textarea.attr('cols', settings.cols);
} else if (settings.width != "none") {
textarea.width(settings.width - 6);
}
$(this).append(textarea);
/* original variable is the cell that contains the textbox */
var cell_id_split = $(original).attr('id').split('_');
/* Set max amount of characters for the textarea */
var max_char;
switch (cell_id_split[0]) {
case 'fieldTag':
max_char = "5";
textarea.attr('maxlength', max_char);
break;
case 'subfieldTag':
max_char = "1";
textarea.attr('maxlength', max_char);
break;
default:
max_char = "";
}
/* create subfield id corresponding to original cell */
cell_id_split[0] = 'subfieldTag';
var subfield_id = cell_id_split.join('_');
/* Add autocomplete handler to fields in gTagsToAutocomplete */
var fieldInfo = $(original).parents("tr").siblings().eq(0).children().eq(1).html();
if ($.inArray(fieldInfo + $(original).siblings('#' + subfield_id).text(), gTagsToAutocomplete) != -1) {
addHandler_autocompleteAffiliations(textarea);
}
initInputHotkeys(textarea);
return(textarea);
};
$.editable.addInputType('textarea_custom', {
element : $.editable.types.textarea.element,
plugin : function(settings, original) {
$('textarea', this).bind('click', function(e) {
e.stopPropagation();
});
$('textarea', this).bind('keydown', function(e) {
var TABKEY = 9;
var RETURNKEY = 13;
switch (e.keyCode) {
case RETURNKEY:
// Just save field content
e.stopPropagation();
$(this).blur();
break;
case TABKEY:
// Move between fields
e.preventDefault();
$(this).blur();
var currentElementIndex = $(".tabSwitch").index($(original));
var step = e.shiftKey ? -1 : 1;
$(".tabSwitch").eq(currentElementIndex + step).click();
break;
}
});
$('textarea', this).keyup(function() {
// Keep the limit of max chars for fields/subfields
var max = parseInt($(this).attr('maxlength'), 10);
if( $(this).val().length > max ) {
$(this).val($(this).val().substr(0, $(this).attr('maxlength')));
}
});
}
});
}
/*
* **************************** 3. Ajax ****************************************
*/
function initAjax(){
/*
* Initialize Ajax.
*/
$.ajaxSetup(
{ cache: false,
dataType: 'json',
error: onAjaxError,
type: 'POST',
url: '/'+ gSITE_RECORD +'/edit/'
}
);
}
function createReq(data, onSuccess, asynchronous, deferred, onError) {
/*
* Create Ajax request.
*/
if (typeof asynchronous === "undefined") {
asynchronous = true;
}
if (typeof onError === "undefined") {
onError = onAjaxError;
}
// Include and increment transaction ID.
var tID = createReq.transactionID++;
createReq.transactions[tID] = data['requestType'];
data.ID = tID;
// Include cache modification time if we have it.
if (gCacheMTime) {
data.cacheMTime = gCacheMTime;
}
// Send the request.
$.ajax({data: {jsondata: JSON.stringify(data)},
success: function(json){
onAjaxSuccess(json, onSuccess);
if (deferred !== undefined) {
deferred.resolve(json);
}
},
error: onError,
async: asynchronous})
.done(function(){
createReqAjaxDone(data);
});
}
// Transactions data.
createReq.transactionID = 0;
createReq.transactions = [];
function createReqAjaxDone(data){
/*
* Executed after the Ajax request created in createReq has finished.
* data: the data parameter that was sent with the Ajax request
*/
// If the request was from holding pen, trigger the event to apply holding pen changes
if (data['requestType'] == 'getHoldingPenUpdates') {
$.event.trigger('HoldingPenPageLoaded');
}
}
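The transaction-ID bookkeeping used by createReq (a counter and an ID-to-requestType table stored as properties on the function itself, so onAjaxSuccess can map a response back to its request type) can be sketched in isolation. This is a minimal, self-contained sketch, not the production code; `tagRequest` is an illustrative stand-in name:

```javascript
// Minimal sketch of createReq's transaction bookkeeping: each request
// gets a unique ID, and the ID -> requestType mapping is kept on the
// function object so it can be consulted when the response arrives.
function tagRequest(data) {
  var tID = tagRequest.transactionID++;
  tagRequest.transactions[tID] = data.requestType;
  data.ID = tID;
  return data;
}
tagRequest.transactionID = 0;
tagRequest.transactions = [];

var first = tagRequest({requestType: 'getRecord'});
var second = tagRequest({requestType: 'submit'});
// first.ID is 0, second.ID is 1,
// tagRequest.transactions[second.ID] is 'submit'
```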
/**
* Error handler for AJAX bulk requests
* @param {object} data - object describing all operations to be done
* @return {function} function to be used as error handler
*/
function onBulkReqError(data) {
return function (XHR, textStatus, errorThrown) {
console.log("Error while processing:");
console.log(data);
updateStatus("ready");
}
}
function createBulkReq(reqsData, onSuccess, optArgs){
/* optArgs is a dictionary containing the optional arguments;
possible keys include:
asynchronous : whether the request should be asynchronous
undoRedo : handler for the undo operation
*/
// creating a bulk request ... the cache timestamp is not saved
var data = {'requestType' : 'applyBulkUpdates',
'requestsData' : reqsData,
'recID' : gRecID};
if (optArgs.undoRedo != undefined){
data.undoRedo = optArgs.undoRedo;
}
var errorCallback = onBulkReqError(data);
createReq(data, onSuccess, optArgs.asynchronous, undefined, errorCallback);
}
function onAjaxError(XHR, textStatus, errorThrown){
/*
* Handle Ajax request errors.
*/
console.log('Request completed with status ' + textStatus +
'\nResult: ' + XHR.responseText +
'\nError: ' + errorThrown);
updateStatus('ready');
}
function onAjaxSuccess(json, onSuccess){
/*
* Handle server response to Ajax requests, in particular error situations.
* See BibEdit config for result codes.
* If a function onSuccess is specified this will be called in the end,
* if no error was encountered.
*/
var resCode = json['resultCode'];
var recID = json['recID'];
if (resCode == 100){
// User's session has timed out.
gRecID = null;
gRecIDLoading = null;
window.location = recID ? gSITE_URL + '/'+ gSITE_RECORD +'/' + recID + '/edit/'
: gSITE_URL + '/'+ gSITE_RECORD +'/edit/';
return;
}
else if ($.inArray(resCode, [101, 102, 104, 105, 106, 107, 108, 109]) != -1) {
cleanUp(!gNavigatingRecordSet, null, null, true, true);
var args = [];
if (resCode == 104) {
args = json["locked_details"];
}
displayMessage(resCode, false, args);
if (resCode == 107) {
//return;
$('#lnkGetRecord').bind('click', function(event){
getRecord(recID);
event.preventDefault();
});
}
updateStatus('error', gRESULT_CODES[resCode]);
}
else if ($.inArray(resCode, [110, 113]) != -1){
displayMessage(resCode, true, [json['errors'].toString()]);
/* Warn the user leaving toolbar active */
updateStatus('error', gRESULT_CODES[resCode], true);
}
else {
var cacheOutdated = json['cacheOutdated'];
var requestType = createReq.transactions[json['ID']];
if (cacheOutdated && requestType == 'submit') {
// User wants to submit, but cache is outdated. Outdated means that the
// DB version of the record has changed after the cache was created.
displayCacheOutdatedScreen(requestType);
$('#lnkMergeCache').bind('click', onMergeClick);
$('#lnkForceSubmit').bind('click', function(event){
onSubmitClick.force = true;
onSubmitClick();
event.preventDefault();
});
$('#lnkDiscardChanges').bind('click', function(event){
onCancelClick();
event.preventDefault();
});
updateStatus('error', 'Error: Record cache is outdated');
}
else {
if (requestType != 'getRecord') {
// On getRecord requests the below actions will be performed in
// onGetRecordSuccess (after cleanup).
var cacheMTime = json['cacheMTime'];
if (cacheMTime)
// Store new cache modification time.
gCacheMTime = cacheMTime;
var cacheDirty = json['cacheDirty'];
if (cacheDirty){
// Cache is dirty. Enable submit button.
gRecordDirty = cacheDirty;
activateSubmitButton();
}
}
if (onSuccess) {
// No critical errors; call onSuccess function.
onSuccess(json);
}
}
}
}
function queue_request(data) {
/* Adds the request data to the global request queue for later
execution */
if ($('#btnSubmit').is(":disabled")) {
activateSubmitButton();
}
/* Create a deep copy of the data to avoid being manipulated
by other requests */
gReqQueue.push(jQuery.extend(true, {}, data));
/* Update counter of requests to save */
gReqCounter++;
if (gReqCounter === gREQUESTS_UNTIL_SAVE) {
save_changes();
gReqCounter = 0;
}
}
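The batching policy of queue_request (deep-copy each request into a queue, flush the whole queue after a fixed number of pushes) can be sketched without the globals. `makeQueue` and `flush` are illustrative names; the real code uses gReqQueue, gReqCounter and gREQUESTS_UNTIL_SAVE, and deep-copies with jQuery.extend rather than the JSON round-trip used here:

```javascript
// Sketch of the batching policy: a closure holds the queue and counter,
// and every `limit` pushes the accumulated requests are flushed in bulk.
function makeQueue(limit, flush) {
  var queue = [];
  var counter = 0;
  return function push(request) {
    // deep copy so later mutation of `request` cannot affect the queue
    queue.push(JSON.parse(JSON.stringify(request)));
    counter++;
    if (counter === limit) {
      flush(queue.splice(0, queue.length));
      counter = 0;
    }
  };
}

var flushed = [];
var push = makeQueue(2, function (batch) { flushed.push(batch); });
push({requestType: 'a'});
push({requestType: 'b'});
push({requestType: 'c'});
// flushed is [[{requestType:'a'}, {requestType:'b'}]]; 'c' is still queued
```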
function save_changes() {
/* Sends all pending requests in bulk to the server
Returns deferred object to be able to notify when saving is done
*/
var optArgs = {};
var saveChangesPromise = new $.Deferred();
if (gReqQueue.length > 0) {
updateStatus('saving');
createBulkReq(gReqQueue, function(json){
updateStatus('report', gRESULT_CODES[json['resultCode']]);
updateStatus('ready');
saveChangesPromise.resolve();
}, optArgs);
gReqQueue = [];
}
else {
saveChangesPromise.resolve();
}
return saveChangesPromise;
}
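save_changes always resolves its deferred, whether or not anything was queued, so callers can unconditionally chain on the returned object. A Promise-based sketch of that contract (the real code uses jQuery's $.Deferred and createBulkReq; `sendBulk` here is an illustrative callback):

```javascript
// Sketch: flush the queue through sendBulk if non-empty, otherwise
// resolve immediately, so the caller can always wait on the result.
function saveChanges(queue, sendBulk) {
  return new Promise(function (resolve) {
    if (queue.length > 0) {
      sendBulk(queue.splice(0, queue.length), resolve);
    } else {
      resolve();
    }
  });
}

var queue = [{requestType: 'a'}];
var sent = [];
saveChanges(queue, function (requests, done) {
  sent = requests;
  done();
});
// sent now holds the flushed request and queue is empty
```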
function resetBibeditState(){
/* A function clearing the state of the bibEdit (all the panels content)
*/
gHoldingPenLoadedChanges = {};
gHoldingPenChanges = [];
gDisabledHpEntries = {};
gReadOnlyMode = false;
gRecRevisionHistory = [];
gUndoList = [];
gRedoList = [];
gPhysCopiesNum = 0;
gBibCircUrl = null;
clearWarnings();
updateRevisionsHistory();
updateUrView();
updateBibCirculationPanel();
holdingPenPanelRemoveEntries();
}
/*
* **************************** 4. Hash management *****************************
*/
function initStateFromHash(){
/*
* Initialize or update page state from hash.
* Any program functions changing the hash should use changeAndSerializeHash()
* which circumvents this function, meaning this function should only run on
* page load and when browser navigation buttons (ie. Back and Forward) are
* clicked. Any invalid hashes entered by the user will be ignored.
*/
if (window.location.hash == gHash)
// Hash is the same as last time we checked, do nothing.
return;
gHash = window.location.hash;
gHashParsed = deserializeHash(gHash);
gPrevState = gState;
var tmpState = gHashParsed.state;
var tmpRecID = gHashParsed.recid;
var tmpRecRev = gHashParsed.recrev;
var tmpReadOnlyMode = gHashParsed.romode;
// Find out which internal state the new hash leaves us with
if (tmpState && tmpRecID){
// We have both state and record ID.
if ($.inArray(tmpState, ['edit', 'submit', 'cancel', 'deleteRecord', 'hpapply']) != -1)
gState = tmpState;
else
// Invalid state, fail...
return;
}
else if (tmpState){
// We only have state.
if (tmpState == 'edit')
gState = 'startPage';
else if (tmpState == 'newRecord')
gState = 'newRecord';
else
// Invalid state, fail... (all states but 'edit' and 'newRecord' are
// illegal without record ID).
return;
}
else
// Invalid hash, fail...
return;
if (gState != gPrevState || (gState == 'edit' && parseInt(tmpRecID, 10) != gRecID) || // different record number
(tmpRecRev != undefined && tmpRecRev != gRecRev) || // different revision
(tmpRecRev == undefined && gRecRev != gRecLatestRev) || // latest revision requested but another open
(tmpReadOnlyMode != gReadOnlyMode)){ // switched between read-only and read-write modes
// We have an actual and legal change of state. Clean up and update the
// page.
updateStatus('updating');
if (gRecID && !gRecordDirty && !tmpReadOnlyMode)
// If the record is unchanged, delete the cache.
createReq({recID: gRecID, requestType: 'deleteRecordCache'}, function() {},
true, undefined, onDeleteRecordCacheError);
switch (gState){
case 'startPage':
cleanUp(true, '', 'recID', true, true);
updateStatus('ready');
break;
case 'edit':
var recID = parseInt(tmpRecID, 10);
if (isNaN(recID)){
// Invalid record ID.
cleanUp(true, tmpRecID, 'recID', true);
displayMessage(102);
updateStatus('error', gRESULT_CODES[102]);
}
else{
cleanUp(true, recID, 'recID');
gReadOnlyMode = tmpReadOnlyMode;
if (tmpRecRev != undefined && tmpRecRev != 0){
getRecord(recID, tmpRecRev);
} else {
getRecord(recID);
}
}
break;
case 'hpapply':
var hpID = parseInt(gHashParsed.hpid, 10);
var recID = parseInt(tmpRecID, 10);
if (isNaN(recID) || isNaN(hpID)){
// Invalid record ID or HoldingPen ID.
cleanUp(true, tmpRecID, 'recID', true);
displayMessage(102);
updateStatus('error', gRESULT_CODES[102]);
}
else {
cleanUp(true, recID, 'recID');
gReadOnlyMode = tmpReadOnlyMode;
var hpButton = '#bibeditHPApplyChange' + hpID;
// after the record is created and all the data on the page is loaded
// trigger the click on holdingPen button
$(document).one('HoldingPenPageLoaded', function () {
$(hpButton).click();
});
getRecord(recID);
}
break;
case 'newRecord':
cleanUp(true, '', null, null, true);
displayNewRecordScreen();
bindNewRecordHandlers();
updateStatus('ready');
break;
case 'submit':
cleanUp(true, '', null, true);
displayMessage(4);
updateStatus('ready');
break;
case 'cancel':
cleanUp(true, '', null, true, true);
updateStatus('ready');
break;
case 'deleteRecord':
cleanUp(true, '', null, true);
displayMessage(10);
updateStatus('ready');
break;
}
}
else
// What changed was not of interest, continue as if nothing happened.
return;
}
function deserializeHash(aHash){
/*
* Deserializes a string (given as parameter or taken from the window object)
* into the hash object.
*/
if (aHash == undefined){
aHash = window.location.hash;
}
var hash = {};
var args = aHash.slice(1).split('&');
var tmpArray;
for (var i=0, n=args.length; i<n; i++){
tmpArray = args[i].split('=');
if (tmpArray.length == 2)
hash[tmpArray[0]] = tmpArray[1];
}
return hash;
}
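deserializeHash is essentially pure, so its behaviour is easy to pin down. A standalone copy (dropping only the fallback to window.location.hash, which does not exist outside the browser):

```javascript
// Standalone copy of deserializeHash: '#a=1&b=2' becomes {a: '1', b: '2'};
// fragments without exactly one '=' are silently skipped.
function deserializeHash(aHash) {
  var hash = {};
  var args = aHash.slice(1).split('&');
  var tmpArray;
  for (var i = 0, n = args.length; i < n; i++) {
    tmpArray = args[i].split('=');
    if (tmpArray.length == 2)
      hash[tmpArray[0]] = tmpArray[1];
  }
  return hash;
}

var parsed = deserializeHash('#state=edit&recid=42&junk');
// parsed.state is 'edit', parsed.recid is '42', 'junk' is ignored
```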
function changeAndSerializeHash(updateData){
/*
* Change the hash object to use the data from the object given as parameter.
* Then update the hash accordingly, WITHOUT invoking initStateFromHash().
*/
clearTimeout(gHashCheckTimerID);
gHashParsed = {};
for (var key in updateData){
gHashParsed[key.toString()] = updateData[key].toString();
}
gHash = '#';
for (key in gHashParsed){
gHash += key + '=' + gHashParsed[key] + '&';
}
gHash = gHash.slice(0, -1);
gState = gHashParsed.state;
window.location.hash = gHash;
gHashCheckTimerID = setInterval(initStateFromHash, gHASH_CHECK_INTERVAL);
}
/*
* **************************** 5. Data logic **********************************
*/
function getTagsSorted(){
/*
* Return field tags in sorted order.
*/
var tags = [];
for (var tag in gRecord){
tags.push(tag);
}
return tags.sort();
}
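getTagsSorted relies on string comparison, which for fixed-width MARC tags ('001' through '999') coincides with numeric order, so a plain sort() is enough. A sketch that takes the record as a parameter instead of reading the gRecord global:

```javascript
// Collect the tag keys of a record object and sort them lexicographically;
// for three-character MARC tags this matches numeric order.
function tagsSorted(record) {
  var tags = [];
  for (var tag in record) {
    tags.push(tag);
  }
  return tags.sort();
}

tagsSorted({'245': [], '001': [], '100': []}); // ['001', '100', '245']
```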
function getFieldPositionInTag(tag, field){
/*
* Determine the local (in tag) position of a new field.
*/
var fields = gRecord[tag];
if (fields){
var fieldLength = fields.length, i = 0;
while (i < fieldLength && cmpFields(field, fields[i]) != -1)
i++;
return i;
}
else
return 0;
}
function getPreviousTag(tag){
/*
* Determine the previous tag in the record (if the given tag is the first
* tag, 0 will be returned).
*/
var tags = getTagsSorted();
var tagPos = $.inArray(tag, tags);
if (tagPos == -1){
tags.push(tag);
tags.sort();
tagPos = $.inArray(tag, tags);
}
if (tagPos > 0)
return tags[tagPos-1];
return 0;
}
function deleteFieldFromTag(tag, fieldPosition){
/*
* Delete a specified field.
*/
var field = gRecord[tag][fieldPosition];
var fields = gRecord[tag];
fields.splice($.inArray(field, fields), 1);
// If last field, delete tag.
if (fields.length == 0){
delete gRecord[tag];
}
}
function cmpFields(field1, field2){
/*
* Compare fields by indicators (tag assumed equal).
*/
if (field1[1].toLowerCase() > field2[1].toLowerCase())
return 1;
else if (field1[1].toLowerCase() < field2[1].toLowerCase())
return -1;
else if (field1[2].toLowerCase() > field2[2].toLowerCase())
return 1;
else if (field1[2].toLowerCase() < field2[2].toLowerCase())
return -1;
return 0;
}
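cmpFields orders fields of the same tag by their two indicators, case-insensitively; a field is an array whose elements 1 and 2 are the indicators. A standalone copy with sample comparisons:

```javascript
// Compare two fields (tag assumed equal) by first indicator, then second,
// both lower-cased, returning -1/0/1 in the usual comparator convention.
function cmpFields(field1, field2) {
  if (field1[1].toLowerCase() > field2[1].toLowerCase())
    return 1;
  else if (field1[1].toLowerCase() < field2[1].toLowerCase())
    return -1;
  else if (field1[2].toLowerCase() > field2[2].toLowerCase())
    return 1;
  else if (field1[2].toLowerCase() < field2[2].toLowerCase())
    return -1;
  return 0;
}

cmpFields([[], '1', ' '], [[], '2', ' ']); // -1: first indicator decides
cmpFields([[], '1', 'a'], [[], '1', 'A']); //  0: comparison is case-insensitive
```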
function insertFieldToRecord(record, fieldId, ind1, ind2, subFields){
/** Insert a new field on the client side and return the position of the
newly created field */
var newField = [subFields, ind1, ind2, '', 0];
if (record[fieldId] == undefined){
record[fieldId] = [newField];
return 0;
} else {
record[fieldId].push(newField);
return (record[fieldId].length-1);
}
}
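insertFieldToRecord appends a field under its tag, creating the tag's list on first use, and returns the new field's position. A standalone copy (renamed `insertField` to avoid clashing with the original) with a usage example:

```javascript
// Insert a field [subfields, ind1, ind2, '', 0] under fieldId,
// returning its index within that tag's field list.
function insertField(record, fieldId, ind1, ind2, subFields) {
  var newField = [subFields, ind1, ind2, '', 0];
  if (record[fieldId] === undefined) {
    record[fieldId] = [newField];
    return 0;
  }
  record[fieldId].push(newField);
  return record[fieldId].length - 1;
}

var rec = {};
var p0 = insertField(rec, '100', ' ', ' ', [['a', 'Doe, John']]);
var p1 = insertField(rec, '100', ' ', ' ', [['a', 'Roe, Jane']]);
// p0 is 0, p1 is 1, rec['100'] now holds two fields
```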
function transformRecord(record){
/** Transform a bibrecord into a form that is easier to compare: a dictionary
* field identifier -> indicators -> fields list -> [subfields list, position in the record]
*
* The data is enriched with the position of each field within its tag,
* which is computed and stored alongside the subfields.
*/
var result = {};
for (var fieldId in record){
result[fieldId] = {};
var indicesList = []; // a list of all the indicators ... utilised later when determining the positions
for (var fieldIndex in record[fieldId]){
var indices = "";
if (record[fieldId][fieldIndex][1] == ' '){
indices += "_";
}else{
indices += record[fieldId][fieldIndex][1];
}
if (record[fieldId][fieldIndex][2] == ' '){
indices += "_";
}else{
indices += record[fieldId][fieldIndex][2];
}
if (result[fieldId][indices] == undefined){
result[fieldId][indices] = []; // a future list of fields sharing the same indicators
indicesList.push(indices);
}
result[fieldId][indices].push([record[fieldId][fieldIndex][0], 0]);
}
// now calculate the positions within a field identifier (utilised on the website)
var position = 0;
var sortedIndices = indicesList.sort();
for (var i in sortedIndices){
for (var fieldInd in result[fieldId][sortedIndices[i]]){
result[fieldId][sortedIndices[i]][fieldInd][1] = position;
position++;
}
}
}
return result;
}
function filterChanges(changeset){
/* Filter the changes list: remove the changes related to fields
* that should never be modified */
var unchangeableTags = {"001" : true}; // a dictionary of the fields that should not be modified
var result = [];
for (var changeInd in changeset){
var change = changeset[changeInd];
if ((change.tag == undefined) || (!(change.tag in unchangeableTags))){
result.push(change);
}
}
return result;
}
///// Functions generating easy to display changes list
function compareFields(fieldId, indicators, fieldPos, field1, field2){
var result = [];
for (sfPos in field2){
if (field1[sfPos] == undefined){
// adding the subfield at the end of the record can be treated in a more graceful manner
result.push(
{"change_type" : "subfield_added",
"tag" : fieldId,
"indicators" : indicators,
"field_position" : fieldPos,
"subfield_code" : field2[sfPos][0],
"subfield_content" : field2[sfPos][1]});
}
else
{
// the subfield exists in both the records
if (field1[sfPos][0] != field2[sfPos][0]){
// a structural change ... we replace the entire field
return [{"change_type" : "field_changed",
"tag" : fieldId,
"indicators" : indicators,
"field_position" : fieldPos,
"field_content" : field2}];
} else
{
if (field1[sfPos][1] != field2[sfPos][1]){
result.push({"change_type" : "subfield_changed",
"tag" : fieldId,
"indicators" : indicators,
"field_position" : fieldPos,
"subfield_position" : sfPos,
"subfield_code" : field2[sfPos][0],
"subfield_content" : field2[sfPos][1]});
}
}
}
}
for (sfPos in field1){
if (field2[sfPos] == undefined){
result.push({"change_type" : "subfield_removed",
"tag" : fieldId,
"indicators" : indicators,
"field_position" : fieldPos,
"subfield_position" : sfPos});
}
}
return result;
}
function compareIndicators(fieldId, indicators, fields1, fields2){
/* A helper function comparing the fields that share one pair of indicators;
* split out of compareRecords for clarity */
var result = [];
for (fieldPos in fields2){
if (fields1[fieldPos] == undefined){
result.push({"change_type" : "field_added",
"tag" : fieldId,
"indicators" : indicators,
"field_content" : fields2[fieldPos][0]});
} else { // comparing the content of the subfields
result = result.concat(compareFields(fieldId, indicators, fields1[fieldPos][1], fields1[fieldPos][0], fields2[fieldPos][0]));
}
}
for (fieldPos in fields1){
if (fields2[fieldPos] == undefined){
fieldPosition = fields1[fieldPos][1];
result.push({"change_type" : "field_removed",
"tag" : fieldId,
"indicators" : indicators,
"field_position" : fieldPosition});
}
}
return result;
}
function compareRecords(record1, record2){
/* Compare two bibrecords, producing a list of atomic changes that can be
* displayed to the user, e.g. when applying a Holding Pen change */
// It is more convenient to compare the records in a transformed structure
var r1 = transformRecord(record1);
var r2 = transformRecord(record2);
var result = [];
for (fieldId in r2){
if (r1[fieldId] == undefined){
for (indicators in r2[fieldId]){
for (field in r2[fieldId][indicators]){
result.push({"change_type" : "field_added",
"tag" : fieldId,
"indicators" : indicators,
"field_content" : r2[fieldId][indicators][field][0]});
}
}
}
else
{
for (indicators in r2[fieldId]){
if (r1[fieldId][indicators] == undefined){
for (field in r2[fieldId][indicators]){
result.push({"change_type" : "field_added",
"tag" : fieldId,
"indicators" : indicators,
"field_content" : r2[fieldId][indicators][field][0]});
}
}
else{
result = result.concat(compareIndicators(fieldId, indicators,
r1[fieldId][indicators], r2[fieldId][indicators]));
}
}
for (indicators in r1[fieldId]){
if (r2[fieldId][indicators] == undefined){
for (fieldInd in r1[fieldId][indicators]){
fieldPosition = r1[fieldId][indicators][fieldInd][1];
result.push({"change_type" : "field_removed",
"tag" : fieldId,
"field_position" : fieldPosition});
}
}
}
}
}
for (fieldId in r1){
if (r2[fieldId] == undefined){
for (indicators in r1[fieldId]){
for (field in r1[fieldId][indicators])
{
// field position has to be calculated here !!!
fieldPosition = r1[fieldId][indicators][field][1]; // field position inside the mark
result.push({"change_type" : "field_removed",
"tag" : fieldId,
"field_position" : fieldPosition});
}
}
}
}
return result;
}
function fieldIsProtected(MARC){
/*
* Determine if a MARC field is protected or part of a protected group of
* fields.
*/
var i = MARC.length - 1;
do{
if ($.inArray(MARC, gPROTECTED_FIELDS) != -1)
return true;
MARC = MARC.substr(0, i);
i--;
} while (i >= 1);
return false;
}
function containsProtectedField(fieldData){
/*
* Determine if a field data structure contains protected elements (useful
* when checking if a deletion command is valid).
* The data structure must be an object with the following levels
* - Tag
* - Field position
* - Subfield index
*/
var fieldPositions, subfieldIndexes, MARC;
for (var tag in fieldData){
fieldPositions = fieldData[tag];
for (var fieldPosition in fieldPositions){
subfieldIndexes = fieldPositions[fieldPosition];
if (subfieldIndexes.length == 0){
MARC = getMARC(tag, fieldPosition);
if (fieldIsProtected(MARC))
return MARC;
}
else{
for (var i=0, n=subfieldIndexes.length; i<n; i++){
MARC = getMARC(tag, fieldPosition, subfieldIndexes[i]);
if (fieldIsProtected(MARC))
return MARC;
}
}
}
}
return false;
}
function getMARC(tag, fieldPosition, subfieldIndex){
/*
* Return the MARC representation of a field or a subfield.
*/
var field = gRecord[tag][fieldPosition];
var ind1, ind2;
if (validMARC.reControlTag.test(tag))
ind1 = '', ind2 = '';
else{
ind1 = (field[1] == ' ' || !field[1]) ? '_' : field[1];
ind2 = (field[2] == ' ' || !field[2]) ? '_' : field[2];
}
if (subfieldIndex == undefined)
return tag + ind1 + ind2;
else
return tag + ind1 + ind2 + field[0][subfieldIndex][0];
}
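getMARC renders blank indicators as '_' and appends the subfield code when one is requested; control tags (001-009) carry no indicators at all. A standalone sketch that takes the pieces directly instead of reading the gRecord global; `marcLabel` is an illustrative name, and the control-tag regex is an anchored copy of validMARC.reControlTag:

```javascript
// Build the MARC display label: tag + indicators ('_' for blank) +
// optional subfield code; control tags skip the indicators entirely.
function marcLabel(tag, ind1, ind2, subfieldCode) {
  var isControlTag = /^00[1-9A-Za-z]$/.test(tag);
  var i1 = isControlTag ? '' : ((ind1 === ' ' || !ind1) ? '_' : ind1);
  var i2 = isControlTag ? '' : ((ind2 === ' ' || !ind2) ? '_' : ind2);
  return tag + i1 + i2 + (subfieldCode || '');
}

marcLabel('100', ' ', '1', 'a'); // '100_1a'
marcLabel('001');                // '001' (control tags carry no indicators)
```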
function getFieldTag(MARC){
/*
* Get the tag name of a field in format as specified by gTagFormat.
*/
MARC = MARC.substr(0, 5);
if (gTagFormat == 'human'){
var tagName = gTAG_NAMES[MARC];
if (tagName != undefined)
// Direct hit. Return it.
return tagName;
else{
// Start looking for wildcard hits.
if (MARC.length == 3){
// Controlfield
tagName = gTAG_NAMES[MARC.substr(0, 2) + '%'];
if (tagName != undefined && tagName != MARC + 'x')
return tagName;
}
else{
// Regular field, try finding wildcard hit by shortening expression
// gradually. Ignores wildcards which gives values like '27x'.
var term = MARC + '%', i = 5;
do{
tagName = gTAG_NAMES[term];
if (tagName != undefined){
if (tagName != MARC.substr(0, i) + 'x')
return tagName;
break;
}
i--;
term = MARC.substr(0, i) + '%';
}
while (i >= 3);
}
}
}
return MARC;
}
function getSubfieldTag(MARC){
/*
* Get the tag name of a subfield in format as specified by gTagFormat.
*/
if (gTagFormat == 'human'){
var subfieldName = gTAG_NAMES[MARC];
if (subfieldName != undefined)
return subfieldName;
}
return MARC.charAt(5);
}
function validMARC(datatype, value){
/*
* Validate a value of a given datatype according to the MARC standard. The
* value should be restricted/extended to its expected size before being
* passed to this function.
* Datatype can be 'ControlTag', 'Tag', 'Indicator' or 'SubfieldCode'.
* Returns a boolean.
*/
return validMARC['re' + datatype].test(value);
}
// MARC validation REs
validMARC.reControlTag = /00[1-9A-Za-z]{1}/;
validMARC.reTag = /(0([1-9A-Z][0-9A-Z])|0([1-9a-z][0-9a-z]))|(([1-9A-Z][0-9A-Z]{2})|([1-9a-z][0-9a-z]{2}))/;
validMARC.reIndicator1 = /[\da-zA-Z]{1}/;
validMARC.reIndicator2 = /[\da-zA-Z]{1}/;
//validMARC.reSubfieldCode = /[\da-z!"#$%&'()*+,-./:;<=>?{}_^`~\[\]\\]{1}/;
validMARC.reSubfieldCode = /[\da-z!"#$%&'()*+,-.\/:;<=>?{}_^`~\[\]\\]{1}/;
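These validation expressions are applied with RegExp.test. Note that the originals are unanchored, so callers must first cut the value to the expected size; the anchored copies below make the intended full-string match explicit and are a sketch, not the production regexes:

```javascript
// Anchored copies of two of the validation patterns, for illustration:
var reControlTag = /^00[1-9A-Za-z]$/; // control tags: 001-009, 00a, 00A, ...
var reIndicator = /^[\da-zA-Z]$/;     // one digit or letter

reControlTag.test('001'); // true
reControlTag.test('010'); // false: the first two characters must be '00'
reIndicator.test('7');    // true
```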
/*
* **************************** 6. Record UI ***********************************
*/
function onNewRecordClick(event){
/*
* Handle 'New' button (new record).
*/
updateStatus('updating');
if (gRecordDirty){
if (!displayAlert('confirmLeavingChangedRecord')){
updateStatus('ready');
event.preventDefault();
return;
}
}
else
// If the record is unchanged, erase the cache.
if (gReadOnlyMode == false){
createReq({recID: gRecID, requestType: 'deleteRecordCache'}, function() {},
true, undefined, onDeleteRecordCacheError);
}
changeAndSerializeHash({state: 'newRecord'});
cleanUp(true, '');
displayNewRecordScreen();
bindNewRecordHandlers();
updateStatus('ready');
updateToolbar(false);
event.preventDefault();
}
function onTemplateRecordClick(event){
/* Handle 'Template management' button */
var template_window = window.open('/record/edit/templates', '', 'resizable,scrollbars');
template_window.document.close(); // needed for chrome and safari
}
/**
* Error handler when opening a record
*/
function onGetRecordError() {
var msg = "<em>Error</em>: record cannot be opened. <br /><br /> \
If the problem persists, contact the site admin."
displayMessage(undefined, false, [msg]);
updateStatus("ready");
}
function getRecord(recID, recRev, onSuccess){
/* A function retrieving the bibliographic record, using an AJAX request.
*
* recID : the identifier of a record to be retrieved from the server
* recRev : the revision of the record to be retrieved (0 or undefined
* means retrieving the newest version )
* onSuccess : The callback to be executed upon retrieval. The default
* callback loads the retrieved record into the bibEdit user
* interface
*/
/* Make sure the record revision exists, otherwise default to current */
if ($.inArray(recRev, gRecRevisionHistory) === -1) {
recRev = 0;
}
var getRecordPromise = new $.Deferred();
if (onSuccess == undefined)
onSuccess = onGetRecordSuccess;
if (recRev != undefined && recRev != 0){
changeAndSerializeHash({state: 'edit', recid: recID, recrev: recRev});
}
else{
changeAndSerializeHash({state: 'edit', recid: recID});
}
gRecIDLoading = recID;
reqData = {recID: recID,
requestType: 'getRecord',
deleteRecordCache:
getRecord.deleteRecordCache,
clonedRecord: getRecord.clonedRecord,
inReadOnlyMode: gReadOnlyMode};
if (recRev != undefined && recRev != 0){
reqData.recordRevision = recRev;
reqData.inReadOnlyMode = true;
}
resetBibeditState();
createReq(reqData, function(json) {
onSuccess(json);
// reloading the Holding Pen toolbar
onHoldingPenPanelRecordIdChanged(recID);
}, true, undefined, onGetRecordError);
getRecord.deleteRecordCache = false;
getRecord.clonedRecord = false;
}
// Enable this flag to delete any existing cache before fetching next record.
getRecord.deleteRecordCache = false;
// Enable this flag to tell that we are fetching a record that has just been
// cloned (enables proper feedback, highlighting).
getRecord.clonedRecord = false;
function onGetRecordSuccess(json){
/*
* Handle successful 'getRecord' requests.
*/
cleanUp(!gNavigatingRecordSet);
// Store record data.
gRecID = json['recID'];
gRecIDLoading = null;
gRecRev = json['recordRevision'];
gRecRevAuthor = json['revisionAuthor'];
gPhysCopiesNum = json['numberOfCopies'];
gBibCircUrl = json['bibCirculationUrl'];
gDisplayBibCircPanel = json['canRecordHavePhysicalCopies'];
gRecordHasPDF = json['record_has_pdf'];
// Get KB information
gKBSubject = json['KBSubject'];
gKBInstitution = json['KBInstitution'];
var revDt = formatDateTime(getRevisionDate(gRecRev));
var recordRevInfo = "record revision: " + revDt;
var revAuthorString = gRecRevAuthor;
$('.revisionLine').html(recordRevInfo + ' ' + revAuthorString);
gRecord = json['record'];
gTagFormat = json['tagFormat'];
gRecordDirty = json['cacheDirty'];
gCacheMTime = json['cacheMTime'];
if (json['cacheOutdated']){
// User had an existing outdated cache.
displayCacheOutdatedScreen('getRecord');
$('#lnkMergeCache').bind('click', onMergeClick);
$('#lnkDiscardChanges').bind('click', function(event){
getRecord.deleteRecordCache = true;
getRecord(gRecID);
event.preventDefault();
});
$('#lnkRemoveMsg').bind('click', function(event){
$('#bibEditMessage').html('');
event.preventDefault();
});
}
gHoldingPenChanges = json['pendingHpChanges'];
gDisabledHpEntries = json['disabledHpChanges'];
gHoldingPenLoadedChanges = {};
adjustHPChangesetsActivity();
updateBibCirculationPanel();
// updating the undo/redo lists
gUndoList = json['undoList'];
gRedoList = json['redoList'];
updateUrView();
displayRecord();
// Activate menu record controls.
activateRecordMenu();
// the current mode is indicated by the result from the server
gReadOnlyMode = (json['inReadOnlyMode'] != undefined) ? json['inReadOnlyMode'] : false;
gRecLatestRev = (json['lastRevision'] != undefined) ? json['lastRevision'] : null;
gRecRevisionHistory = (json['revisionsHistory'] != undefined) ? json['revisionsHistory'] : null;
if (json["resultCode"] === 103) {
gReadOnlyMode = true;
displayMessage(json["resultCode"], true);
}
updateInterfaceAccordingToMode();
if (gRecordDirty){
activateSubmitButton();
}
if (gTagFormat == 'MARC')
$('#btnHumanTags').bind('click', onHumanTagsClick).removeAttr('disabled');
else
$('#btnMARCTags').bind('click', onMARCTagsClick).removeAttr('disabled');
// Unfocus record selection field (to facilitate hotkeys).
$('#txtSearchPattern').blur();
if (json['resultCode'] == 9)
$('#spnRecID').effect('highlight', {color: gCLONED_RECORD_COLOR},
gCLONED_RECORD_COLOR_FADE_DURATION);
updateStatus('report', gRESULT_CODES[json['resultCode']]);
updateRevisionsHistory();
adjustGeneralHPControlsVisibility();
createReq({recID: gRecID, requestType: 'getTickets'}, onGetTicketsSuccess);
// Refresh top toolbar
updateToolbar(false);
updateToolbar(true);
}
function onGetTemplateSuccess(json) {
onGetRecordSuccess(json);
}
function onSubmitPreviewSuccess(dialogPreview, html_preview){
/*
* Confirm whether to submit the record
*
* dialog: object containing the different parts of the modal dialog
* html_preview: a formatted preview of the record content
*/
updateStatus('ready');
addContentToDialog(dialogPreview, html_preview, "Do you want to submit the record?");
dialogPreview.dialogDiv.dialog({
title: "Confirm submit",
close: function() { updateStatus('ready'); },
buttons: {
"Submit changes": function() {
var reqData = {
recID: gRecID,
force: onSubmitClick.force,
requestType: 'submit'
};
if (gSubmitMode == "textmarc") {
reqData.requestType = 'submittextmarc';
reqData.textmarc = $('#textmarc_textbox').val();
}
createReq(reqData, function(json) {
var resCode = json['resultCode'];
if (resCode == 115) {
// There was a textmarc parsing error
displayMessage(resCode, true, json["parse_error"]);
updateStatus('ready');
}
else {
// Submission was successful.
changeAndSerializeHash({state: 'submit', recid: gRecID});
updateStatus('report', gRESULT_CODES[resCode]);
cleanUp(!gNavigatingRecordSet, '', null, true, false);
updateToolbar(false);
resetBibeditState();
displayMessage(resCode, false, [json['recID'], json["new_cnum"]]);
updateStatus('ready');
}
});
$( this ).remove();
},
Cancel: function() {
updateStatus('ready');
$( this ).remove();
}
}});
// Focus on the submit button
$(dialogPreview.dialogDiv).parent().find('button:nth-child(1)').focus();
}
function saveOpenedFields() {
/* Performs the following tasks:
* - Remove volatile content from field templates
* - Save opened content from field templates
* - Save opened textareas
* returns: promise with state of all tasks to perform
*/
function removeVolatileContentFieldTemplates(removingVolatilePromise) {
/* Deletes volatile fields from field templates */
$(".bibEditVolatileSubfield:input").each(function() {
var deleteButtonSelector = $(this).parent().parent().find('img[id^="btnAddFieldRemove_"]');
if (deleteButtonSelector.length === 0) {
/* It is the first element on the field template */
$(this).parent().parent().find(".bibEditCellAddSubfieldCode").remove();
$(this).remove();
}
else {
deleteButtonSelector.click();
}
});
removingVolatilePromise.resolve();
}
function removeEmptyFieldTemplates(removingEmptyFieldTemplatePromise) {
var addFieldInterfaceSelector = $("tbody[id^=rowGroupAddField_]");
addFieldInterfaceSelector.each(function() {
var addSubfieldSelector = $(this).find(".bibEditCellAddSubfieldCode");
if (addSubfieldSelector.length === 0) {
/* All inputs have been previously removed */
$(this).remove();
}
});
removingEmptyFieldTemplatePromise.resolve();
}
function saveFieldTemplatesContent(savingFieldTemplatesPromise) {
/* Triggers click event on all open field templates */
$(".bibEditTxtValue:input:not(.bibEditVolatileSubfield)").trigger($.Event( 'keyup', {which:$.ui.keyCode.ENTER, keyCode:$.ui.keyCode.ENTER}));
savingFieldTemplatesPromise.resolve();
}
function saveOpenedTextareas(savingOpenedTextareasPromise) {
/* Saves textareas if they are opened */
$(".edit_area textarea").trigger($.Event( 'keydown', {which:$.ui.keyCode.ENTER, keyCode:$.ui.keyCode.ENTER}));
savingOpenedTextareasPromise.resolve();
}
var removingVolatilePromise = new $.Deferred();
var removingEmptyFieldTemplatePromise = new $.Deferred();
var savingFieldTemplatesPromise = new $.Deferred();
var savingOpenedTextareasPromise = new $.Deferred();
removeVolatileContentFieldTemplates(removingVolatilePromise);
removeEmptyFieldTemplates(removingEmptyFieldTemplatePromise);
saveFieldTemplatesContent(savingFieldTemplatesPromise);
saveOpenedTextareas(savingOpenedTextareasPromise);
var savingContent = $.when(removingVolatilePromise,
removingEmptyFieldTemplatePromise,
savingFieldTemplatesPromise,
savingOpenedTextareasPromise);
return savingContent;
}
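The aggregation pattern above (each helper resolves its own deferred, and `$.when` joins them) can be sketched without jQuery. `Deferred` and `whenAll` below are hypothetical minimal stand-ins for `$.Deferred` and `$.when`; like the helpers in `saveOpenedFields`, everything resolves synchronously:

```javascript
// Minimal synchronous sketch of the $.Deferred / $.when aggregation used by
// saveOpenedFields. Not a full promise implementation.
function Deferred() {
  this.resolved = false;
  this.callbacks = [];
}
Deferred.prototype.resolve = function () {
  this.resolved = true;
  this.callbacks.forEach(function (cb) { cb(); });
  this.callbacks = [];
};
Deferred.prototype.done = function (cb) {
  // Fire immediately if already resolved, otherwise queue the callback.
  if (this.resolved) { cb(); } else { this.callbacks.push(cb); }
  return this;
};
function whenAll(deferreds) {
  // Resolves the combined deferred once every input deferred has resolved.
  var combined = new Deferred();
  var remaining = deferreds.length;
  deferreds.forEach(function (d) {
    d.done(function () {
      remaining--;
      if (remaining === 0) { combined.resolve(); }
    });
  });
  return combined;
}
```

Callbacks registered via `done` on the combined deferred fire only after the last of the input deferreds has been resolved.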
function onSubmitClick() {
/*
* Handle 'Submit' button (submit record).
*/
save_changes().done(function() {
updateStatus('updating');
/* Save all opened fields before submitting */
var savingOpenedFields = saveOpenedFields();
savingOpenedFields.done(function() {
var dialogPreview = createDialog("Loading...", "Retrieving preview...", 750, 700, true);
// Get preview of the record and let the user confirm submit
getPreview(dialogPreview, onSubmitPreviewSuccess);
});
});
}
// Enable this flag to force the next submission even if cache is outdated.
onSubmitClick.force = false;
function onPreviewClick() {
/*
* Handle 'Preview' button (preview record).
*/
clearWarnings();
var reqData = {
'new_window': true,
recID: gRecID,
submitMode: gSubmitMode,
requestType: 'preview'
};
if (gSubmitMode == "textmarc") {
reqData.textmarc = $("#textmarc_textbox").val();
}
save_changes().done(function() {
var dialogPreview = createDialog("Loading...", "Retrieving preview...", 750, 700, true);
createReq(reqData, function(json) {
// Preview was successful.
$(dialogPreview.dialogDiv).remove();
var resCode = json['resultCode'];
if (resCode == 115) {
// There was a textmarc parsing error
displayMessage(resCode, true, json["parse_error"]);
updateStatus('ready');
return;
}
var html_preview = json['html_preview'];
var preview_window = openCenteredPopup('', 'Record preview', 768, 768);
if ( preview_window === null ) {
var msg = "<strong> The preview window cannot be opened.</strong><br />\
Your browser might be blocking popups. Check the options and\
enable popups for this page.";
displayMessage(undefined, true, [msg]);
return;
}
preview_window.document.write(html_preview);
preview_window.document.close(); // needed for chrome and safari
});
});
}
function onPrintClick() {
/*
* Print page, makes use of special css rules @media print
*/
// If we are in textarea view, copy the contents to the helper div
$('#print_helper').text($('#textmarc_textbox').val());
$("#bibEditContentTable").css('height', "100%");
window.print();
resize_content();
}
function onTextMarcBoxKeyUp() {
/* Handler for keyup event inside the textmarc editing area */
gRecordDirty = true;
activateSubmitButton();
// Disable keyup event on textarea
$(this).off("keyup");
}
function onTextMarcClick() {
/*
* 1) Send request to server that will return textmarc from the cache content
* 2) Remove editor table and display content in textbox
* 3) Activate flag to know we are in text marc mode (for submission)
*/
/* Save the content in all textareas that are currently opened before changing
view mode
*/
save_changes().done(function() {
$(".edit_area textarea").trigger($.Event( 'keydown', {which:$.ui.keyCode.ENTER, keyCode:$.ui.keyCode.ENTER}));
createReq({recID: gRecID, requestType: 'getTextMarc'
}, function(json) {
// Request was successful.
$("#bibEditMessage").empty();
var textmarc_box = $('<textarea>');
textmarc_box.attr('id', 'textmarc_textbox');
textmarc_box.addClass("bibedit_input");
textmarc_box.html(json['textmarc']);
$('#bibEditTable').remove();
$('#bibEditContentTable').append(textmarc_box);
// Avoids having two different scrollbars
$('#bibEditContentTable').css('overflow', 'visible');
// Create an extra div to store the textarea content whenever printing
var print_helper = $('<div>');
print_helper.attr('id', 'print_helper');
$('#bibEditContentTable').append(print_helper);
// Bind keyup event to textarea to detect when changes have been
// introduced
textmarc_box.on("keyup", onTextMarcBoxKeyUp);
// Disable menu buttons
deactivateRecordMenu();
if (gRecordDirty) {
activateSubmitButton();
}
// Disable reference extraction in textmarc mode
$('#img_run_refextract, #img_extract_free_text').off('click').removeClass(
'bibEditImgCtrlEnabled').addClass('bibEditImgCtrlDisabled');
// Empty undo/redo handlers
gUndoList = []; // list of possible undo operations
gRedoList = []; // list of possible redo operations
updateUrView();
// Disable read/only mode button
$("#btnSwitchReadOnly").prop('disabled', true);
// Activate textmarc flag
gSubmitMode = 'textmarc';
// Change icon to table view
$("#img_textmarc").attr('src', '/img/bibedit_tableview.png');
$("#img_textmarc").attr('id', 'img_tableview');
$("#img_tableview").off("click").on("click", onTableViewClick);
});
});
}
function onTableViewClick() {
/*
* 1) Send request to validate textmarc and create a cache file with its
* content
* 2) Get the record from the cache and display it in the table
*/
createReq({recID: gRecID, textmarc: $('#textmarc_textbox').val(),
requestType: 'getTableView', recordDirty: gRecordDirty
}, function(json) {
var resCode = json['resultCode'];
if (resCode == 115) {
// There was a textmarc parsing error
displayMessage(resCode, true, json["parse_error"]);
updateStatus('ready');
}
else if (resCode == 116) {
// Change to table view was successful
getRecord(gRecID);
// Change icon to textmarc view
$("#img_tableview").attr('src', '/img/bibedit_textmarc.png');
$("#img_tableview").attr('id', 'img_textmarc');
$("#img_textmarc").off("click").on("click", onTextMarcClick);
// Re-enable the read-only mode button
$("#btnSwitchReadOnly").prop('disabled', false);
// Activate default submission flag
gSubmitMode = 'default';
}
});
}
function onOpenPDFClick() {
/*
* Create request to retrieve PDF from record and open it in new window
*/
createReq({recID: gRecID, requestType: 'get_pdf_url'
}, function(json){
// Preview was successful.
var pdf_url = json['pdf_url'];
var preview_window = openCenteredPopup(pdf_url);
if ( preview_window === null ) {
var msg = "<strong> The preview window cannot be opened.</strong><br />\
Your browser might be blocking popups. Check the options and\
enable popups for this page.";
displayMessage(undefined, true, [msg]);
return;
}
preview_window.document.close(); // needed for chrome and safari
});
}
function getPreview(dialog, onSuccess) {
/*
* Get preview to be added to the dialog before submission
*/
clearWarnings();
var html_preview;
var reqData = {
'new_window': false,
recID: gRecID,
submitMode: gSubmitMode,
requestType: 'preview'
};
if (gSubmitMode == "textmarc") {
reqData.textmarc = $("#textmarc_textbox").val();
}
createReq(reqData, function(json){
// Preview was successful.
html_preview = json['html_preview'];
var resCode = json['resultCode'];
if (resCode == 115) {
// There was a parsing error
displayMessage(resCode, true, json["parse_error"]);
updateStatus('ready');
$(dialog.dialogDiv).remove();
return;
}
onSuccess(dialog, html_preview);
});
}
function onCancelClick(){
/*
* Handle 'Cancel' button (cancel editing).
*/
updateStatus('updating');
if (!gRecordDirty || displayAlert('confirmCancel')) {
createReq({
recID: gRecID,
requestType: 'cancel'
}, function(json){
// Cancellation was successful.
changeAndSerializeHash({
state: 'cancel',
recid: gRecID
});
cleanUp(!gNavigatingRecordSet, '', null, true, true);
updateStatus('report', gRESULT_CODES[json['resultCode']]);
}, false);
holdingPenPanelRemoveEntries();
gUndoList = [];
gRedoList = [];
gReadOnlyMode = false;
gRecRevisionHistory = [];
gHoldingPenLoadedChanges = {};
gHoldingPenChanges = [];
gPhysCopiesNum = 0;
gBibCircUrl = null;
// making the changes visible
updateBibCirculationPanel();
updateRevisionsHistory();
updateUrView();
updateToolbar(false);
}
else {
updateStatus('ready');
}
}
function onCloneRecordClick(){
/*
* Handle 'Clone' button (clone record).
*/
updateStatus('updating');
if (!displayAlert('confirmClone')){
updateStatus('ready');
return;
}
else if (!gRecordDirty) {
// If the record is unchanged, erase the cache.
createReq({recID: gRecID, requestType: 'deleteRecordCache'}, function() {},
true, undefined, onDeleteRecordCacheError);
}
createReq({requestType: 'newRecord', newType: 'clone', recID: gRecID},
function(json){
var newRecID = json['newRecID'];
$('#txtSearchPattern').val(newRecID);
getRecord.clonedRecord = true;
getRecord(newRecID);
}, false);
}
function onDeleteRecordClick(){
/*
* Handle 'Delete record' button.
*/
if (gPhysCopiesNum > 0){
displayAlert('errorPhysicalCopiesExist');
return;
}
if (displayAlert('confirmDeleteRecord')){
updateStatus('updating');
createReq({recID: gRecID, requestType: 'deleteRecord'}, function(json){
// Record deletion was successful.
changeAndSerializeHash({state: 'deleteRecord', recid: gRecID});
cleanUp(!gNavigatingRecordSet, '', null, true);
var resCode = json['resultCode'];
// now cleaning the interface - removing holding pen entries and record history
resetBibeditState();
updateStatus('report', gRESULT_CODES[resCode]);
displayMessage(resCode);
updateToolbar(false);
}, false);
}
}
function onMergeClick(event){
/*
* Handle click on 'Merge' link (to merge outdated cache with current DB
* version of record).
*/
notImplemented(event);
updateStatus('updating');
createReq({recID: gRecID, requestType: 'prepareRecordMerge'}, function(json){
// Null gRecID to avoid warning when leaving page.
gRecID = null;
var recID = json['recID'];
window.location = gSITE_URL + '/'+ gSITE_RECORD +'/merge/#recid1=' + recID + '&recid2=' +
'tmp';
});
event.preventDefault();
}
function bindNewRecordHandlers(){
/*
* Bind event handlers to links on 'Create new record' page.
*/
$('#lnkNewEmptyRecord').bind('click', function(event){
updateStatus('updating');
createReq({requestType: 'newRecord', newType: 'empty'}, function(json){
getRecord(json['newRecID']);
}, false);
event.preventDefault();
});
for (var i=0, n=gRECORD_TEMPLATES.length; i<n; i++)
$('#lnkNewTemplateRecord_' + i).bind('click', function(event){
updateStatus('updating');
var templateNo = this.id.split('_')[1];
createReq({requestType: 'newRecord', newType: 'template',
templateFilename: gRECORD_TEMPLATES[templateNo][0]}, function(json){
getRecord(json['newRecID'], 0, onGetTemplateSuccess); // recRev = 0 -> current revision
}, false);
event.preventDefault();
});
//binding import function
$('#lnkNewTemplateRecordImport_crossref').bind('click', function(event){
var doiElement = $('#doi_crossref');
if (!doiElement.val()) {
//if no DOI specified
errorDoi(117, doiElement);
} else {
updateStatus('updating');
createReq({requestType: 'newRecord', newType: 'import', doi: doiElement.val()},function(json){
if (json['resultCode'] == 7) {
getRecord(json['newRecID'], 0, onGetTemplateSuccess); // recRev = 0 -> current revision
} else {
errorDoi(json['resultCode'], doiElement);
updateStatus('error', 'Error!');
}
}, false);
}
event.preventDefault();
});
// bind enter key with "crossref" link clicked
$('#doi_crossref').bind('keyup', function (e){
if (e.which == 13){
$('#lnkNewTemplateRecordImport_crossref').click();
}
});
}
function errorDoi(code, element){
/*
* Displays a warning message in the import from crossref textbox
*/
var msg;
switch(code) {
case 117:
msg = "Please input the DOI";
break;
case 118:
msg = "Record with given DOI was not found";
break;
case 119:
msg = "This is not a correct DOI, please correct it";
break;
case 120:
msg = "Crossref account is not set up. Contact the site admin.";
break;
default:
msg = "Error while importing data";
}
var warning = '<span class="doiWarning" style="padding-left: 5px; color: #ff0000;">' + msg + '</span>';
$(".doiWarning").remove();
element.after(warning);
}
function cleanUp(disableRecBrowser, searchPattern, searchType,
focusOnSearchBox, resetHeadline){
/*
* Clean up display and data.
*/
// Deactivate controls.
deactivateRecordMenu();
if (disableRecBrowser){
disableRecordBrowser();
gResultSet = null;
gResultSetIndex = null;
gNavigatingRecordSet = false;
}
// Clear main content area.
$('#bibEditContentTable').empty();
$('#bibEditMessage').empty();
// Clear search area.
if (typeof(searchPattern) == 'string' || typeof(searchPattern) == 'number')
$('#txtSearchPattern').val(searchPattern);
if ($.inArray(searchType, ['recID', 'reportnumber', 'anywhere']) != -1)
$('#sctSearchType').val(searchType);
if (focusOnSearchBox)
$('#txtSearchPattern').focus();
// Clear tickets.
$('#tickets').empty();
// Clear data.
gRecID = null;
gRecord = null;
gTagFormat = null;
gRecordDirty = false;
gCacheMTime = null;
gSelectionMode = false;
gReadOnlyMode = false;
$('#btnSwitchReadOnly').html("Read-only");
gHoldingPenLoadedChanges = null;
gHoldingPenChanges = null;
gUndoList = [];
gRedoList = [];
gBibCircUrl = null;
gPhysCopiesNum = 0;
gSubmitMode = "default";
}
function addHandler_autocompleteAffiliations(tg) {
/*
* Add autocomplete handler to a given cell
*/
/* If gKBInstitution is not defined in the system, do nothing */
if ($.inArray(gKBInstitution,gAVAILABLE_KBS) == -1)
return;
$(tg).autocomplete({
source: function( request, response ) {
$.getJSON("/kb/export",
{ kbname: gKBInstitution, format: 'jquery', term: request.term},
response);
},
search: function() {
var term = this.value;
if (term.length < 3) {
return false;
}
return true;
}
});
}
/*
* **************************** 7. Editor UI ***********************************
*/
function colorFields(){
/*
* Color every other field (rowgroup) gray to increase readability.
*/
$('#bibEditTable tbody:even').each(function(){
$(this).addClass('bibEditFieldColored');
});
}
function reColorFields(){
/*
* Update coloring by removing existing, then recolor.
*/
$('#bibEditTable tbody').each(function(){
$(this).removeClass('bibEditFieldColored');
});
colorFields();
}
function onMARCTagsClick(event){
/*
* Handle 'MARC' link (MARC tags).
*/
$(this).unbind('click').attr('disabled', 'disabled');
createReq({recID: gRecID, requestType: 'changeTagFormat', tagFormat: 'MARC'});
gTagFormat = 'MARC';
updateTags();
$('#btnHumanTags').bind('click', onHumanTagsClick).removeAttr('disabled');
event.preventDefault();
}
function onHumanTagsClick(event){
/*
* Handle 'Human' link (Human tags).
*/
$(this).unbind('click').attr('disabled', 'disabled');
createReq({recID: gRecID, requestType: 'changeTagFormat',
tagFormat: 'human'});
gTagFormat = 'human';
updateTags();
$('#btnMARCTags').bind('click', onMARCTagsClick).removeAttr('disabled');
event.preventDefault();
}
function onLnkSpecialSymbolsClick(){
var special_char_list = ['&#192;','&#193;','&#194;','&#195;','&#196;','&#197;',
'&#198;','&#199;','&#200;','&#201;','&#202;','&#203;',
'&#204;','&#205;','&#206;','&#207;','&#208;','&#209;',
'&#210;','&#211;','&#212;','&#213;','&#214;','&#215;',
'&#216;','&#217;','&#218;','&#219;','&#220;','&#221;',
'&#222;','&#223;','&#224;','&#225;','&#226;','&#227;',
'&#228;','&#229;','&#230;','&#231;','&#232;','&#233;',
'&#234;','&#235;','&#236;','&#237;','&#238;','&#239;',
'&#240;','&#241;','&#242;','&#243;','&#244;','&#245;',
'&#246;','&#247;','&#248;','&#249;','&#250;','&#251;',
'&#252;','&#253;','&#254;','&#255;'];
var html_content;
html_content = '<html><head><title>Special Symbols</title>';
html_content += '<style type="text/css">';
html_content += '#char_table_div { padding: 20px 0px 0px 20px; }';
html_content += '#symbol_table { border: 1px solid black; border-collapse:collapse;}';
html_content += 'td { border: 1px solid black; padding: 5px 5px 5px 5px;}';
html_content += '</style>';
html_content += '</head><body>';
html_content += '<div id="char_table_div"><table id="symbol_table"><tr>';
var char_list_length = special_char_list.length;
for (var i=0; i<char_list_length; i++) {
html_content += '<td>' + special_char_list[i] + '</td>';
if ((i+1)%10 == 0) {
html_content += '</tr><tr>';
}
}
html_content += '</tr></table></div></body></html>';
var special_char_window = window.open('', '', 'width=310,height=310,resizable,scrollbars');
special_char_window.document.write(html_content);
special_char_window.document.close(); // needed for chrome and safari
}
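The row-building loop above wraps every tenth cell in a new `<tr>`. A hypothetical standalone helper makes the chunking easier to see; unlike the loop above, it also avoids emitting an empty trailing row when the cell count is an exact multiple of the row width:

```javascript
// Sketch of the special-symbols table layout: wrap cells into rows of
// perRow columns. chunkIntoRows is a hypothetical helper, not used above.
function chunkIntoRows(cells, perRow) {
  var html = '<tr>';
  for (var i = 0; i < cells.length; i++) {
    html += '<td>' + cells[i] + '</td>';
    // Close the row after every perRow-th cell, unless it is the last one.
    if ((i + 1) % perRow === 0 && i + 1 < cells.length) {
      html += '</tr><tr>';
    }
  }
  return html + '</tr>';
}
```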
function updateTags(){
/*
* Check and update all tags (also subfield codes) against the currently
* selected tag format.
*/
$('.bibEditCellFieldTag').each(function(){
var currentTag = $(this).text();
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2];
var newTag = getFieldTag(getMARC(tag, fieldPosition));
if (newTag != currentTag)
$(this).text(newTag);
});
$('.bibEditCellSubfieldTag').each(function(){
var currentTag = $(this).text();
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
var newTag = getSubfieldTag(getMARC(tag, fieldPosition, subfieldIndex));
if (newTag != currentTag)
$(this).text(newTag);
});
}
function onFieldBoxClick(box){
/*
* Handle field select boxes.
*/
// Check/uncheck all subfield boxes, add/remove selected class.
var rowGroup = $('#rowGroup_' + box.id.slice(box.id.indexOf('_')+1));
if (box.checked){
$(rowGroup).find('td[id^=content]').andSelf().addClass('bibEditSelected');
if (gReadOnlyMode == false){
$('#btnDeleteSelected').removeAttr('disabled');
}
}
else{
$(rowGroup).find('td[id^=content]').andSelf().removeClass(
'bibEditSelected');
if (!$('.bibEditSelected').length)
// Nothing is selected, disable "Delete selected"-button.
$('#btnDeleteSelected').attr('disabled', 'disabled');
}
$(rowGroup).find('input[type="checkbox"]').attr('checked', box.checked);
}
function onSubfieldBoxClick(box){
/*
* Handle subfield select boxes.
*/
var tmpArray = box.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
var fieldID = tag + '_' + fieldPosition;
var subfieldID = fieldID + '_' + subfieldIndex;
// If uncheck, uncheck field box and remove selected class.
if (!box.checked){
$('#content_' + subfieldID).removeClass('bibEditSelected');
$('#boxField_' + fieldID).attr('checked', false);
$('#rowGroup_' + fieldID).removeClass('bibEditSelected');
if (!$('.bibEditSelected').length)
// Nothing is selected, disable "Delete selected"-button.
$('#btnDeleteSelected').attr('disabled', 'disabled');
}
// If check and all other subfield boxes checked, check field box, add
// selected class.
else{
$('#content_' + subfieldID).addClass('bibEditSelected');
var field = gRecord[tag][fieldPosition];
if (field[0].length == $(
'#rowGroup_' + fieldID + ' input[type=checkbox]' +
'[class=bibEditBoxSubfield]:checked').length){
$('#boxField_' + fieldID).attr('checked', true);
$('#rowGroup_' + fieldID).addClass('bibEditSelected');
}
$('#btnDeleteSelected').removeAttr('disabled');
}
}
function addFieldGatherInformations(fieldTmpNo){
/**
* Purpose: Gather the information about a current form
* Called when adding x similar fields
*
* Input(s): int:fieldTmpNo - temporary number to identify the field being
* added
*
* Returns: [template_num, data]
* where data is in the same format as the templates data.
*
*/
var templateNum = $('#selectAddFieldTemplate_' + fieldTmpNo).attr("value");
var tag = $("#txtAddFieldTag_" + fieldTmpNo).attr("value");
// Check whether this is a controlfield: the ind1 box is invisible for controlfields
if ($("#txtAddFieldInd1_" + fieldTmpNo + ":visible").length == 1){
var ind1 = $("#txtAddFieldInd1_" + fieldTmpNo).attr("value");
var ind2 = $("#txtAddFieldInd2_" + fieldTmpNo).attr("value");
var subfieldTmpNo = $('#rowGroupAddField_' + fieldTmpNo).data('freeSubfieldTmpNo');
var subfields = [];
for (var i=0; i < subfieldTmpNo; i++){
var subfieldCode = $('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_' + i).attr("value");
var subfieldValueSelector = $('#txtAddFieldValue_' + fieldTmpNo + '_' + i);
var subfieldValue = subfieldValueSelector.attr("value");
if (subfieldValueSelector.hasClass("bibEditVolatileSubfield")) {
subfieldValue = "VOLATILE:" + subfieldValue;
}
subfields.push([subfieldCode, subfieldValue]);
}
var data = {
"name": "nonexisting template - values taken from the field",
"description": "The description of a template",
"tag" : tag,
"ind1" : ind1,
"ind2" : ind2,
"subfields" : subfields,
"isControlfield" : false
};
} else {
var cfValue = $("#txtAddFieldValue_" + fieldTmpNo + "_0").attr("value");
var data = {
"name": "nonexisting template - values taken from the field",
"description": "The description of a template",
"tag" : tag,
"value" : cfValue,
"isControlfield" : true
};
}
return [templateNum, data];
}
function addFieldAddSubfieldEditor(jQRowGroupID, fieldTmpNo, defaultCode, defaultValue){
/**
Adding a subfield input control into the editor
optional parameters:
defaultCode - the subfield code that will be displayed
defaultValue - the value that will be displayed by default in the editor
*/
var subfieldTmpNo = $(jQRowGroupID).data('freeSubfieldTmpNo');
$(jQRowGroupID).data('freeSubfieldTmpNo', subfieldTmpNo+1);
var addFieldRows = $(jQRowGroupID + ' tr');
$(addFieldRows).eq(addFieldRows.length-1).before(createAddFieldRow(
fieldTmpNo, subfieldTmpNo, defaultCode, defaultValue));
$('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_' + subfieldTmpNo).bind(
'keyup', onAddFieldChange);
$('#btnAddFieldRemove_' + fieldTmpNo + '_' + subfieldTmpNo).bind('click', function(){
$('#rowAddField_' + this.id.slice(this.id.indexOf('_')+1)).remove();
});
$('#txtAddFieldValue_' + fieldTmpNo + '_' + subfieldTmpNo).on(
'focus', function(e){
if ($(this).hasClass('bibEditVolatileSubfield')){
$(this).select();
$(this).removeClass("bibEditVolatileSubfield");
}
}
).on("mouseup", function(e) { e.preventDefault(); });
var contentEditorId = '#txtAddFieldValue_' + fieldTmpNo + '_' + subfieldTmpNo;
$(contentEditorId).bind('keyup', function(e){
onAddFieldValueKeyPressed(e, jQRowGroupID, fieldTmpNo, subfieldTmpNo);
});
}
function onAddFieldJumpToNextSubfield(jQRowGroupID, fieldTmpNo, subfieldTmpNo){
/* Gets all the open text boxes for the current field and submits the changes
* if it is the last one.
*/
var fieldOpenInputs = $('input[id^="txtAddFieldValue_' + fieldTmpNo + '"]');
var currentInputSelector = "#txtAddFieldValue_" + fieldTmpNo + "_" + subfieldTmpNo;
var currentInput = $(currentInputSelector);
var currentInputIndex = fieldOpenInputs.index(currentInput);
if (currentInputIndex === fieldOpenInputs.length-1) {
addFieldSave(fieldTmpNo);
}
else {
fieldOpenInputs[currentInputIndex+1].focus();
}
}
function applyFieldTemplate(jQRowGroupID, formData, fieldTmpNo){
/** A function that applies a field template.
fieldTmpNo is the number of the addfield form being treated at the moment
formData is the data of the field template
*/
// first cleaning the existing fields
$(jQRowGroupID).data('isControlfield', formData.isControlfield);
if (formData.isControlfield){
changeFieldToControlfield(fieldTmpNo);
$("#txtAddFieldTag_" + fieldTmpNo).attr("value", formData.tag);
$("#txtAddFieldInd1_" + fieldTmpNo).attr("value", '');
$("#txtAddFieldInd2_" + fieldTmpNo).attr("value", '');
$("#txtAddFieldValue_" + fieldTmpNo + "_0").attr("value", formData.value);
}
else
{
changeFieldToDatafield(fieldTmpNo);
var subfieldTmpNo = $(jQRowGroupID).data('freeSubfieldTmpNo');
$(jQRowGroupID).data('freeSubfieldTmpNo', 0);
for (var i=subfieldTmpNo-1; i>=0; i--){
$('#rowAddField_' + fieldTmpNo + '_' + i).remove();
}
for (var subfieldInd in formData.subfields){
var subfield = formData.subfields[subfieldInd];
addFieldAddSubfieldEditor(jQRowGroupID, fieldTmpNo, subfield[0], subfield[1]);
}
// now changing the main field properties
$("#txtAddFieldTag_" + fieldTmpNo).attr("value", formData.tag);
$("#txtAddFieldInd1_" + fieldTmpNo).attr("value", formData.ind1);
$("#txtAddFieldInd2_" + fieldTmpNo).attr("value", formData.ind2);
}
}
function createAddFieldInterface(initialContent, initialTemplateNo){
/* Create form to add a new field. If only one field is selected, the
* new field will be inserted below it. Otherwise, the new field will
* be inserted in the 3rd position
*/
// Check if we are in the use case of adding in a specific position
var selected_fields = getSelectedFields();
var insert_below_selected = false;
if (selected_fields != undefined) {
var count_fields = 0;
var selected_local_field_pos;
var selected_tag, selected_ind1, selected_ind2;
for (var tag in selected_fields.fields) {
for (var localFieldPos in selected_fields.fields[tag]) {
count_fields++;
selected_local_field_pos = localFieldPos;
}
selected_tag = tag;
selected_ind1 = selected_fields.fields[tag][localFieldPos][1];
selected_ind2 = selected_fields.fields[tag][localFieldPos][2];
}
if (count_fields === 1)
insert_below_selected = true;
}
var fieldTmpNo = onAddFieldClick.addFieldFreeTmpNo++;
var jQRowGroupID = '#rowGroupAddField_' + fieldTmpNo;
$('#bibEditColFieldTag').css('width', '90px');
var tbodyElements = $('#bibEditTable tbody');
// If only one field selected, add below the selected field
if (insert_below_selected === true) {
$('#rowGroup' + '_' + selected_tag + '_' + selected_local_field_pos).after(
createAddFieldForm(fieldTmpNo, initialTemplateNo, selected_tag, selected_ind1, selected_ind2));
$(jQRowGroupID).data('insertionPoint', parseInt(selected_local_field_pos) + 1);
$(jQRowGroupID).data('selected_tag', selected_tag);
}
else {
var insertionPoint = (tbodyElements.length >= 4) ? 3 : tbodyElements.length-1;
$('#bibEditTable tbody').eq(insertionPoint).after(
createAddFieldForm(fieldTmpNo, initialTemplateNo));
}
$(jQRowGroupID).data('freeSubfieldTmpNo', 1);
// Bind event handlers.
$('#btnAddFieldAddSubfield_' + fieldTmpNo).bind('click', function(){
addFieldAddSubfieldEditor(jQRowGroupID, fieldTmpNo, "", "");
});
$('#txtAddFieldTag_' + fieldTmpNo).bind('keyup', onAddFieldChange);
initInputHotkeys('#txtAddFieldTag_' + fieldTmpNo);
$('#txtAddFieldInd1_' + fieldTmpNo).bind('keyup', onAddFieldChange);
initInputHotkeys('#txtAddFieldInd1_' + fieldTmpNo);
$('#txtAddFieldInd2_' + fieldTmpNo).bind('keyup', onAddFieldChange);
initInputHotkeys('#txtAddFieldInd2_' + fieldTmpNo);
$('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_0').bind('keyup', onAddFieldChange);
initInputHotkeys('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_0');
$('#txtAddFieldValue_' + fieldTmpNo + '_0').bind('keyup', function (e){
onAddFieldValueKeyPressed(e, jQRowGroupID, fieldTmpNo, 0);
});
initInputHotkeys('#txtAddFieldValue_' + fieldTmpNo + '_0');
$('#selectAddFieldTemplate_' + fieldTmpNo).bind('change', function(e){
value = $('#selectAddFieldTemplate_' + fieldTmpNo).attr("value");
applyFieldTemplate(jQRowGroupID, fieldTemplates[value], fieldTmpNo);
});
$('#selectAddSimilarFields_' + fieldTmpNo).bind('click', function(e){
var data = addFieldGatherInformations(fieldTmpNo);
var numRepetitions = parseInt($('#selectAddFieldTemplateTimes_' + fieldTmpNo).attr('value'));
for (var i=0; i< numRepetitions; i++){
createAddFieldInterface(data[1], data[0]);
}
});
if (initialContent != undefined){
applyFieldTemplate(jQRowGroupID, initialContent , fieldTmpNo);
}else{
$(jQRowGroupID).data('isControlfield', false);
}
reColorFields();
if (insert_below_selected === true) {
$('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_0').focus();
}
else {
$('#txtAddFieldTag_' + fieldTmpNo).focus();
}
// Color the new form for a short period.
$(jQRowGroupID).effect('highlight', {color: gNEW_ADD_FIELD_FORM_COLOR},
gNEW_ADD_FIELD_FORM_COLOR_FADE_DURATION);
}
function onAddSubfieldValueKeyPressed(e, tag, fieldPosition, subfieldPosition){
if (e.which == 13){
// enter key pressed.
var subfieldsNum = $('#rowGroup_' + tag + '_' + fieldPosition + ' .bibEditTxtSubfieldCode').length;
if (subfieldPosition < (subfieldsNum - 1)){
//jump to the next field
$('#txtAddSubfieldsCode_' + tag + '_' + fieldPosition + '_' + (subfieldPosition + 1))[0].focus();
} else {
onAddSubfieldsSave(e, tag, fieldPosition);
}
}
if (e.which == 27){
// escape key pressed
$('#rowAddSubfields_' + tag + '_' + fieldPosition + '_' + 0).nextAll().andSelf().remove();
}
}
function onAddFieldValueKeyPressed(e, jQRowGroupID, fieldTmpNo, subfieldInd){
if (e.which == 13){
// enter key pressed
onAddFieldJumpToNextSubfield(jQRowGroupID, fieldTmpNo, subfieldInd);
}
if (e.which == 27){
// escape key pressed
$(jQRowGroupID).remove();
if (!$('#bibEditTable > [id^=rowGroupAddField]').length)
$('#bibEditColFieldTag').css('width', '48px');
reColorFields();
}
}
function onAddFieldClick(){
/*
* Handle 'Add field' button.
*/
if (failInReadOnly())
return;
activateSubmitButton();
createAddFieldInterface();
}
// Incrementing temporary field numbers.
onAddFieldClick.addFieldFreeTmpNo = 100000;
function changeFieldToControlfield(fieldTmpNo){
/**
Switching the field to be a control field
*/
// removing additional entries
var addFieldRows = $('#rowGroupAddField_' + fieldTmpNo + ' tr');
$(addFieldRows).slice(2, addFieldRows.length-1).remove();
// Clear all fields.
var addFieldTextInput = $('#rowGroupAddField_' + fieldTmpNo +
' input[type=text]');
$(addFieldTextInput).val('').removeClass('bibEditInputError');
// Toggle hidden fields.
var elems = $('#txtAddFieldInd1_' + fieldTmpNo + ', #txtAddFieldInd2_' +
fieldTmpNo + ', #txtAddFieldSubfieldCode_' + fieldTmpNo + '_0,' +
'#btnAddFieldAddSubfield_' + fieldTmpNo).hide();
$('#txtAddFieldTag_' + fieldTmpNo).focus();
}
function changeFieldToDatafield(fieldTmpNo){
/**
Switching the field to be a datafield
*/
// making the elements visible
var elems = $('#txtAddFieldInd1_' + fieldTmpNo + ', #txtAddFieldInd2_' +
fieldTmpNo + ', #txtAddFieldSubfieldCode_' + fieldTmpNo + '_0,' +
'#btnAddFieldAddSubfield_' + fieldTmpNo).show();
$('#txtAddFieldTag_' + fieldTmpNo).focus();
}
function onAddFieldChange(event) {
/*
* Validate MARC and add or remove error class.
*/
// first handling the case of the escape key, which is handled a little differently than the others
var fieldTmpNo = this.id.split('_')[1];
if (event.which == 27){
// escape key pressed
var jQRowGroupID = "#rowGroupAddField_" + fieldTmpNo;
$(jQRowGroupID).remove();
if (!$('#bibEditTable > [id^=rowGroupAddField]').length)
$('#bibEditColFieldTag').css('width', '48px');
reColorFields();
}
else if (this.value.length == this.maxLength){
var fieldType;
if (this.id.indexOf('Tag') != -1){
var jQRowGroupID = "#rowGroupAddField_" + fieldTmpNo;
fieldType = ($(jQRowGroupID).data('isControlfield')) ? 'ControlTag' : 'Tag';
}
else if (this.id.indexOf('Ind1') != -1)
fieldType = 'Indicator1';
else if (this.id.indexOf('Ind2') != -1)
fieldType = 'Indicator2';
else
fieldType = 'SubfieldCode';
var valid = (((fieldType == 'Indicator1' || fieldType == 'Indicator2')
&& (this.value == '_' || this.value == ' '))
|| validMARC(fieldType, this.value));
if (!valid && !$(this).hasClass('bibEditInputError'))
$(this).addClass('bibEditInputError');
else if (valid){
if ($(this).hasClass('bibEditInputError'))
$(this).removeClass('bibEditInputError');
if (event.keyCode != 9 && event.keyCode != 16){
switch(fieldType){
case 'ControlTag':
$(this).parent().nextAll().eq(3).children('input').focus();
break;
case 'Tag':
case 'Indicator1':
$(this).next().focus();
break;
case 'Indicator2':
// if an indicator is present, this cannot be a control field, so we can safely jump to the subfield code input
$('#txtAddFieldSubfieldCode_' + fieldTmpNo + '_0')[0].focus();
break;
case 'SubfieldCode':
/* Generate ID of the field tag input */
var fieldTagID = ('#' + $(this).attr('id').replace('SubfieldCode', 'Tag')).split('_');
fieldTagID.pop();
fieldTagID = fieldTagID.join('_');
var fieldTag = $('body').find("#txtAddFieldTag_" + fieldTmpNo).val(),
fieldInd1 = $('body').find("#txtAddFieldInd1_" + fieldTmpNo).val(),
fieldInd2 = $('body').find("#txtAddFieldInd2_" + fieldTmpNo).val();
if (fieldInd1 == '') {
fieldInd1 = '_';
}
if (fieldInd2 == '') {
fieldInd2 = '_';
}
if ($.inArray(fieldTag + fieldInd1 + fieldInd2 + this.value, gTagsToAutocomplete) != -1) {
addHandler_autocompleteAffiliations($(this).parent().next().children('input'));
}
$(this).parent().next().children('input').focus();
break;
default:
;
}
}
}
}
else if ($(this).hasClass('bibEditInputError'))
$(this).removeClass('bibEditInputError');
}
function onAddFieldSave(event){
var fieldTmpNo = this.id.split('_')[1];
addFieldSave(fieldTmpNo);
}
function addFieldSave(fieldTmpNo)
{
/*
* Handle 'Save' button in add field form.
*/
var jQRowGroupID = "#rowGroupAddField_" + fieldTmpNo;
var controlfield = $(jQRowGroupID).data('isControlfield');
var tag = $('#txtAddFieldTag_' + fieldTmpNo).val();
var value = $('#txtAddFieldValue_' + fieldTmpNo + '_0').val();
var subfields = [], ind1 = ' ', ind2 = ' ';
// variables used when we are adding a field in a specific position
var insertPosition = $(jQRowGroupID).data('insertionPoint');
var selected_tag = $(jQRowGroupID).data('selected_tag');
if (controlfield) {
// Controlfield. Validate and prepare to update.
if (fieldIsProtected(tag)){
displayAlert('alertAddProtectedField', [tag]);
updateStatus('ready');
return;
}
if (!validMARC('ControlTag', tag) || value == '') {
displayAlert('alertCriticalInput');
updateStatus('ready');
return;
}
var field = [[], ' ', ' ', value, 0];
var fieldPosition = getFieldPositionInTag(tag, field);
}
else {
// Regular field. Validate and prepare to update.
ind1 = $('#txtAddFieldInd1_' + fieldTmpNo).val();
ind1 = (ind1 == '' || ind1 == '_') ? ' ' : ind1;
ind2 = $('#txtAddFieldInd2_' + fieldTmpNo).val();
ind2 = (ind2 == '' || ind2 == '_') ? ' ' : ind2;
var MARC = tag + ind1 + ind2;
if (fieldIsProtected(MARC)) {
displayAlert('alertAddProtectedField', [MARC]);
updateStatus('ready');
return;
}
var validInd1 = (ind1 == ' ' || validMARC('Indicator1', ind1));
var validInd2 = (ind2 == ' ' || validMARC('Indicator2', ind2));
if (!validMARC('Tag', tag) || !validInd1 || !validInd2) {
displayAlert('alertCriticalInput');
updateStatus('ready');
return;
}
// Collect valid subfields in an array.
var invalidOrEmptySubfields = false;
$('#rowGroupAddField_' + fieldTmpNo + ' .bibEditTxtSubfieldCode'
).each(function() {
var subfieldTmpNo = this.id.slice(this.id.lastIndexOf('_')+1);
var txtValue = $('#txtAddFieldValue_' + fieldTmpNo + '_' +
subfieldTmpNo);
var value = $(txtValue).val();
value = value.replace(/^\s+|\s+$/g,""); // Remove whitespace from the ends of strings
if (isSubjectSubfield(MARC, this.value)) {
value = check_subjects_KB(value);
}
var isStillVolatile = txtValue.hasClass('bibEditVolatileSubfield');
if (!$(this).hasClass('bibEditInputError')
&& this.value != ''
&& !$(txtValue).hasClass('bibEditInputError')
&& value != ''){
if (!isStillVolatile){
subfields.push([this.value, value]);
}
}
else
invalidOrEmptySubfields = true;
});
if (invalidOrEmptySubfields){
if (!subfields.length){
// No valid subfields.
displayAlert('alertCriticalInput');
updateStatus('ready');
return;
}
else if (!displayAlert('confirmInvalidOrEmptyInput')){
updateStatus('ready');
return;
}
}
if (subfields[0] == undefined){
displayAlert('alertEmptySubfieldsList');
return;
}
var field = [subfields, ind1, ind2, '', 0];
var fieldPosition;
if ((insertPosition != undefined) && (tag == selected_tag)) {
fieldPosition = $(jQRowGroupID).data('insertionPoint');
}
else {
fieldPosition = getFieldPositionInTag(tag, field);
}
}
var subfieldsExtended = [];
/* Loop through all subfields to look for new subfields in the format
* $$aContent$$bMore content and split them accordingly */
for (var i=0, n=subfields.length; i<n ;i++) {
if (valueContainsSubfields(subfields[i][1])) {
var subfieldsToAdd = new Array(), subfieldCode = subfields[i][0];
splitContentSubfields(subfields[i][1], subfieldCode, subfieldsToAdd);
subfieldsExtended.push.apply(subfieldsExtended,subfieldsToAdd);
}
else{
subfieldsExtended.push(subfields[i]);
}
}
if (typeof subfieldsExtended[0] != 'undefined') {
/* We have split some subfields */
for (var i=0, n=subfieldsExtended.length; i < n; i++) {
subfields[i] = subfieldsExtended[i];
}
}
/* If adding a reference, add $$9 CURATOR */
if (tag == '999') {
subfields[subfields.length] = new Array('9', 'CURATOR');
}
// adding an undo handler
var undoHandler = prepareUndoHandlerAddField(tag,
ind1,
ind2,
fieldPosition,
subfields,
controlfield,
value);
addUndoOperation(undoHandler);
// Create Ajax request.
var data = {
recID: gRecID,
requestType: 'addField',
controlfield: controlfield,
fieldPosition: fieldPosition,
tag: tag,
ind1: ind1,
ind2: ind2,
subfields: subfields,
value: value,
undoRedo: undoHandler
};
queue_request(data);
// Continue local updating.
var fields = gRecord[tag];
// New field?
if (!fields) {
gRecord[tag] = [field];
}
else{
fields.splice(fieldPosition, 0, field);
}
// Remove form.
$('#rowGroupAddField_' + fieldTmpNo).remove();
if (!$('#bibEditTable > [id^=rowGroupAddField]').length)
$('#bibEditColFieldTag').css('width', '48px');
// Redraw all fields with the same tag and recolor the full table.
redrawFields(tag);
reColorFields();
// Scroll and color the new field for a short period.
var rowGroup = $('#rowGroup_' + tag + '_' + fieldPosition);
var newContent = $('#fieldTag_' + tag + '_' + fieldPosition);
if (insertPosition === undefined) {
$(newContent).focus();
}
$(rowGroup).effect('highlight', {color: gNEW_CONTENT_COLOR},
gNEW_CONTENT_COLOR_FADE_DURATION);
}
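The `$$` splitting loop in addFieldSave() relies on valueContainsSubfields() and splitContentSubfields(), which are defined elsewhere in this file. A minimal standalone sketch of the splitting convention (hypothetical helper name, not the editor's actual implementation) might look like:

```javascript
function demoSplitContentSubfields(value, defaultCode) {
// Splits "Content$$bMore content" into subfield pairs:
// [[defaultCode, "Content"], ["b", "More content"]]
var result = [];
var chunks = value.split('$$');
// Text before the first "$$" keeps the code of the input box it was typed into.
if (chunks[0] !== '') {
result.push([defaultCode, chunks[0]]);
}
for (var i = 1; i < chunks.length; i++) {
if (chunks[i].length > 1) {
// First character is the subfield code, the rest is the value.
result.push([chunks[i].charAt(0), chunks[i].slice(1)]);
}
}
return result;
}
```

A value with no `$$` markers simply stays a single pair under the original subfield code.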
function onAddSubfieldsClick(img){
/*
* Handle 'Add subfield' buttons.
*/
var fieldID = img.id.slice(img.id.indexOf('_')+1);
addSubfield(fieldID);
}
function onDOISearchClick(button){
/*
* Handle 'Search for DOI' button.
*/
// gets the DOI from the appropriate cell
var doi = $(button).parent().prev().text();
createReq({doi: doi, requestType: 'DOISearch'}, function(json)
{
if (json['doi_url'] !== undefined) {
window.open(json['doi_url']);
} else {
alert("DOI not found!");
}
}, false);
}
function addSubfield(fieldID, defSubCode, defValue) {
/* add a subfield based on fieldID, where the first 3 digits are
* the main tag, followed by _ and the position of the field.
* defSubCode = the default value for subfield code
*/
var jQRowGroupID = '#rowGroup_' + fieldID;
var tmpArray = fieldID.split('_');
var tag = tmpArray[0];
var fieldPosition = tmpArray[1];
if ($('#rowAddSubfieldsControls_' + fieldID).length == 0){
// The 'Add subfields' form does not exist for this field.
$(jQRowGroupID).append(createAddSubfieldsForm(fieldID, defSubCode, defValue));
$(jQRowGroupID).data('freeSubfieldTmpNo', 1);
$('#txtAddSubfieldsCode_' + fieldID + '_' + 0).bind('keyup',
onAddSubfieldsChange);
$('#txtAddSubfieldsValue_' + fieldID + '_0').bind('keyup', function (e){
onAddSubfieldValueKeyPressed(e, tag, fieldPosition, 0);
});
$('#txtAddSubfieldsCode_' + fieldID + '_' + 0).focus();
}
else{
// The 'Add subfields' form already exists for this field. Just add another row.
var subfieldTmpNo = $(jQRowGroupID).data('freeSubfieldTmpNo');
$(jQRowGroupID).data('freeSubfieldTmpNo', subfieldTmpNo+1);
var subfieldTmpID = fieldID + '_' + subfieldTmpNo;
$('#rowAddSubfieldsControls_' + fieldID).before(
createAddSubfieldsRow(fieldID, subfieldTmpNo));
$('#txtAddSubfieldsCode_' + subfieldTmpID).bind('keyup',
onAddSubfieldsChange);
$('#btnAddSubfieldsRemove_' + subfieldTmpID).bind('click', function(){
$('#rowAddSubfields_' + subfieldTmpID).remove();
});
$('#txtAddSubfieldsValue_' + subfieldTmpID).bind('keyup', function (e){
onAddSubfieldValueKeyPressed(e, tag, fieldPosition, subfieldTmpNo);
});
}
}
function onAddSubfieldsChange(event){
/*
* Validate subfield code and add or remove error class.
*/
if (this.value.length == 1){
var valid = validMARC('SubfieldCode', this.value);
if (!valid && !$(this).hasClass('bibEditInputError')){
$(this).addClass('bibEditInputError');
}
else if (valid){
if ($(this).hasClass('bibEditInputError')) {
$(this).removeClass('bibEditInputError');
}
if (event.keyCode != 9 && event.keyCode != 16){
/* If we are creating a new field present in gTagsToAutocomplete, add autocomplete handler */
var fieldInfo = $(this).parents("tr").siblings().eq(0).children().eq(1).html();
if ($.inArray(fieldInfo + this.value, gTagsToAutocomplete) != -1) {
addHandler_autocompleteAffiliations($(this).parent().next().children('input'));
}
$(this).parent().next().children('input').focus();
}
}
}
else if ($(this).hasClass('bibEditInputError')){
$(this).removeClass('bibEditInputError');
}
}
function onAddSubfieldsSave(event, tag, fieldPosition) {
/*
* Handle 'Save' button in add subfields form.
*/
var field = gRecord[tag][fieldPosition];
var fieldID = tag + '_' + fieldPosition;
var tag_ind = tag + field[1] + field[2];
var subfields = [];
var protectedSubfield = false, invalidOrEmptySubfields = false;
// Collect valid fields in an array.
$('#rowGroup_' + fieldID + ' .bibEditTxtSubfieldCode'
).each(function(){
var MARC = getMARC(tag, fieldPosition) + this.value;
if ($.inArray(MARC, gPROTECTED_FIELDS) != -1) {
protectedSubfield = MARC;
return false;
}
var subfieldTmpNo = this.id.slice(this.id.lastIndexOf('_') + 1);
var txtValue = $('#txtAddSubfieldsValue_' + fieldID + '_' +
subfieldTmpNo);
var value = $(txtValue).val();
/* Check if we need to transform automatically the value (in the case
of a subject subfield)
*/
if (isSubjectSubfield(tag_ind, this.value)) {
value = check_subjects_KB(value);
}
if (!$(this).hasClass('bibEditInputError') && this.value != ''
&& !$(txtValue).hasClass('bibEditInputError') && value != '')
subfields.push([this.value, value]);
else
invalidOrEmptySubfields = true;
});
// Report problems, like protected, empty or invalid fields.
if (protectedSubfield){
displayAlert('alertAddProtectedSubfield');
updateStatus('ready');
return;
}
if (invalidOrEmptySubfields && !displayAlert('confirmInvalidOrEmptyInput')){
updateStatus('ready');
return;
}
/* Check if $$9 CURATOR is present */
var iscurated = false;
if (tag === "999") {
current_field_subfields = field[0];
for (var i = 0, j = current_field_subfields.length; i < j; i++) {
if (current_field_subfields[i][0] == "9" && current_field_subfields[i][1] == "CURATOR") {
iscurated = true;
}
}
}
if (subfields.length > 0) {
/* Loop through all subfields to look for new subfields in the format
* $$aContent$$bMore content and split them accordingly */
var subfieldsExtended = [];
for (var i=0, j=subfields.length; i < j; i++) {
if (valueContainsSubfields(subfields[i][1])) {
var subfieldsToAdd = new Array(), subfieldCode = subfields[i][0];
splitContentSubfields(subfields[i][1], subfieldCode, subfieldsToAdd);
subfieldsExtended.push.apply(subfieldsExtended,subfieldsToAdd);
}
else{
subfieldsExtended.push(subfields[i]);
}
}
if (typeof subfieldsExtended[0] != 'undefined') {
/* We have split some subfields */
for (var i=0, j=subfieldsExtended.length; i < j; i++) {
subfields[i] = subfieldsExtended[i];
}
}
if (tag === "999" && !iscurated) {
subfields.push(new Array('9', 'CURATOR'));
}
// creating the undo/redo handler
var urHandler = prepareUndoHandlerAddSubfields(tag, fieldPosition, subfields);
addUndoOperation(urHandler);
// Create Ajax request
var data = {
recID: gRecID,
requestType: 'addSubfields',
tag: tag,
fieldPosition: fieldPosition,
subfields: subfields,
undoRedo: urHandler
};
queue_request(data);
// Continue local updating
field[0] = field[0].concat(subfields);
var rowGroup = $('#rowGroup_' + fieldID);
var coloredRowGroup = $(rowGroup).hasClass('bibEditFieldColored');
$(rowGroup).replaceWith(createField(tag, field, fieldPosition));
if (coloredRowGroup)
$('#rowGroup_' + fieldID).addClass('bibEditFieldColored');
// Color the new fields for a short period.
var rows = $('#rowGroup_' + fieldID + ' tr');
$(rows).slice(rows.length - subfields.length).effect('highlight', {
color: gNEW_CONTENT_COLOR}, gNEW_CONTENT_COLOR_FADE_DURATION);
} else {
// No valid fields were submitted.
$('#rowAddSubfields_' + fieldID + '_' + 0).nextAll().andSelf().remove();
updateStatus('ready');
}
}
function convertFieldIntoEditable(cell, shouldSelect){
// checking if the clicked field is still present in the DOM structure ... if not, we have just removed the element
if ($(cell).parent().parent().parent()[0] == undefined){
return;
}
// first we have to detach all existing editables ... which means detaching the event
editEvent = 'customclick';
$(cell).unbind(editEvent);
/*
This binding allows to wait if other textarea is opened before
opening the new one. In this way we can jump from one field to the
other without the new one being closed.
*/
$(cell).unbind('click').bind('click', function(event) {
var self = this;
function trigger_click() {
$(self).trigger('customclick');
}
if ($(".edit_area textarea").length > 0) {
$(".edit_area textarea").parent().submit(function() {
setTimeout(trigger_click, 30);
});
} else {
trigger_click();
}
});
$(cell).editable(
/* function to send edited content to */
function(value) {
var newVal = onEditableCellChange(value, this);
if (typeof newVal === "undefined") {
/* content could not be changed, keep old value */
var tmpArray = this.id.split('_');
var tag = tmpArray[1],
fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
var field = gRecord[tag][fieldPosition];
return field[0][subfieldIndex][1];
}
if (newVal.substring(0,9) == "VOLATILE:"){
$(cell).addClass("bibEditVolatileSubfield");
newVal = newVal.substring(9);
if (!shouldSelect) {
// the field should start selecting all the content upon the click
// because it is VOLATILE
convertFieldIntoEditable(cell, true);
}
}
else{
$(cell).removeClass("bibEditVolatileSubfield");
if (shouldSelect){
// this is not a volatile field any more - clicking should not
// select all the content inside.
convertFieldIntoEditable(cell, false);
}
}
return newVal;
},
/* start of jEditable options */
{
type: 'textarea_custom',
callback: function(data, settings){
/* Function to run after submitting edited content */
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
for (var changeNum in gHoldingPenChanges){
var change = gHoldingPenChanges[changeNum];
if (change.tag == tag &&
change.field_position == fieldPosition &&
change.subfield_position != undefined &&
change.subfield_position == subfieldIndex){
addChangeControl(changeNum, true);
}
}
},
event: editEvent,
data: function() {
// Get the real content from the record structure (instead of
// from the view, where HTML entities are escaped).
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
var field = gRecord[tag][fieldPosition];
var tmpResult = "";
if (tmpArray[0] == 'fieldTag'){
var ind1 = (field[1] == " ") ? "_" : field[1];
var ind2 = (field[2] == " ") ? "_" : field[2];
tmpResult = tag + ind1 + ind2;
}
else if (subfieldIndex == undefined){
// Controlfield
tmpResult = field[3];
}
else if (tmpArray[0] == 'subfieldTag'){
tmpResult = field[0][subfieldIndex][0];
}
else {
tmpResult = field[0][subfieldIndex][1];
}
if (tmpResult.substring(0,9) == "VOLATILE:"){
tmpResult = tmpResult.substring(9);
}
return tmpResult;
},
placeholder: '',
onblur: 'submit',
select: shouldSelect
});
}
function onContentClick(cell) {
/*
* Handle click on editable content fields.
*/
function open_field() {
/*
Converts <td> element into editable object the first time click
is triggered
*/
var shouldSelect = false;
// Check if subfield is volatile subfield from a template
if ( $(cell).hasClass('bibEditVolatileSubfield') ) {
shouldSelect = true;
}
if (!$(cell).hasClass('edit_area')) {
$(cell).addClass('edit_area').removeAttr('onclick');
convertFieldIntoEditable(cell, shouldSelect);
$(cell).trigger('click');
}
}
if ($(".edit_area textarea").length > 0) {
/* There is another textarea open, wait for it to close */
$(".edit_area textarea").parent().submit(function() {
setTimeout(open_field, 30);
});
} else {
open_field();
}
}
function getUpdateSubfieldValueRequestData(tag, fieldPosition, subfieldIndex,
subfieldCode, value, changeNo, undoDescriptor, modifySubfieldCode){
var requestType;
if (modifySubfieldCode == true) {
requestType = 'modifySubfieldTag';
}
else {
requestType = 'modifyContent';
}
var data = {
recID: gRecID,
requestType: requestType,
tag: tag,
fieldPosition: fieldPosition,
subfieldIndex: subfieldIndex,
subfieldCode: subfieldCode,
value: value
};
if (changeNo != undefined && changeNo != -1){
data.hpChanges = {toDisable: [changeNo]};
}
if (undoDescriptor != undefined && undoDescriptor != null){
data.undoRedo = undoDescriptor;
}
return data;
}
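getUpdateSubfieldValueRequestData() above only attaches hpChanges and undoRedo when they carry information, so the server side can treat their absence as "nothing to disable, nothing to undo". A hedged standalone illustration of that optional-key pattern (hypothetical helper, not part of the editor):

```javascript
function demoBuildRequest(base, changeNo, undoDescriptor) {
// Shallow-copy the mandatory request fields.
var data = {};
for (var k in base) { data[k] = base[k]; }
// Only attach the Holding Pen change number when one was consumed.
if (changeNo !== undefined && changeNo !== -1) {
data.hpChanges = {toDisable: [changeNo]};
}
// Only attach the undo descriptor when one was prepared.
if (undoDescriptor !== undefined && undoDescriptor !== null) {
data.undoRedo = undoDescriptor;
}
return data;
}
```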
function updateSubfieldValue(tag, fieldPosition, subfieldIndex, subfieldCode,
value, consumedChange, undoDescriptor,
modifySubfieldCode){
// Create Ajax request for simple updating the subfield value
if (consumedChange == undefined || consumedChange == null){
consumedChange = -1;
}
var data = getUpdateSubfieldValueRequestData(tag,
fieldPosition,
subfieldIndex,
subfieldCode,
value,
consumedChange,
undoDescriptor,
modifySubfieldCode);
queue_request(data);
}
function getBulkUpdateSubfieldContentRequestData(tag, fieldPosition,
subfieldIndex, subfieldCode,
value, consumedChange,
undoDescriptor, subfieldsToAdd) {
/*
*Purpose: prepare data to be included in the request for a bulk update
* of the subfield content
*
*Return: object: Array of changes to be applied
*
*/
var changesAdd = [];
var data = getUpdateSubfieldValueRequestData(tag,
fieldPosition,
subfieldIndex,
subfieldCode,
value,
consumedChange,
null,
false);
changesAdd.push(data);
data = {
recID: gRecID,
requestType: 'addSubfields',
tag: tag,
fieldPosition: fieldPosition,
subfields: subfieldsToAdd
};
changesAdd.push(data);
return changesAdd;
}
function bulkUpdateSubfieldContent(tag, fieldPosition, subfieldIndex, subfieldCode,
value, consumedChange, undoDescriptor, subfieldsToAdd, subfields_offset) {
/*
*Purpose: perform request for a bulk update as the user introduced in the content
* field multiple subfields to be added in the form $$aTest$$bAnother
*
*Input(s): string:tag - Field tag to be updated
* int:fieldPosition - position of the field with regard to the rest
* of fields with the same tag
* int:subfieldIndex - position of the subfield with regard to the
* other subfields in that field instance
* string:subfieldCode - Code of the subfield that is being modified
* string:value - old value present in the subfield
* consumedChange - undefined behaviour
* object:undoDescriptor - undo operations relative to the update
* action
* object:subfieldsToAdd - array containing subfields to add)
*
*/
if (consumedChange == undefined || consumedChange == null){
consumedChange = -1;
}
var data = getBulkUpdateSubfieldContentRequestData(tag,
fieldPosition,
subfieldIndex,
subfieldCode,
value,
consumedChange,
undoDescriptor,
subfieldsToAdd,
subfields_offset);
var bulk_data = {'requestType' : 'applyBulkUpdates',
'requestsData' : data,
'recID' : gRecID};
bulk_data.undoRedo = undoDescriptor;
queue_request(bulk_data);
}
function updateFieldTag(oldTag, newTag, oldInd1, oldInd2, ind1, ind2, fieldPosition,
consumedChange, undoDescriptor){
// Create Ajax request for simple updating the subfield value
if (consumedChange == undefined || consumedChange == null){
consumedChange = -1;
}
var data = getUpdateFieldTagRequestData(oldTag,
oldInd1,
oldInd2,
newTag,
ind1,
ind2,
fieldPosition,
consumedChange,
undoDescriptor);
queue_request(data);
}
function getUpdateFieldTagRequestData(oldTag, oldInd1, oldInd2, newTag, ind1, ind2,
fieldPosition, changeNo, undoDescriptor){
var data = {
recID: gRecID,
requestType: "modifyFieldTag",
fieldPosition: fieldPosition,
oldTag: oldTag,
newTag: newTag,
ind1: ind1,
ind2: ind2,
oldInd1: oldInd1,
oldInd2: oldInd2
};
if (changeNo != undefined && changeNo != -1){
data.hpChanges = {toDisable: [changeNo]};
}
if (undoDescriptor != undefined && undoDescriptor != null){
data.undoRedo = undoDescriptor;
}
return data;
}
/*call autosuggest, get the values, suggest them to the user*/
/*this is typically called when the autosuggest key is pressed*/
function onAutosuggest(event) {
var mytarget = event.target;
if (event.srcElement) mytarget = event.srcElement;/*fix for IE*/
var myparent = mytarget.parentNode;
var mygrandparent = myparent.parentNode;
var parentid = myparent.id;
var value = mytarget.value;
var mylen = value.length;
var replacement = ""; //used by autocomplete
var tmpArray = mygrandparent.id.split('_');
/*ids for autosuggest/autocomplete html elements*/
var content_id = 'content_'+tmpArray[1]+'_'+tmpArray[2]+'_'+tmpArray[3];
var autosuggest_id = 'autosuggest_'+tmpArray[1]+'_'+tmpArray[2]+'_'+tmpArray[3];
var select_id = 'select_'+tmpArray[1]+'_'+tmpArray[2]+'_'+tmpArray[3];
var maintag = tmpArray[1], fieldPosition = tmpArray[2],
subfieldIndex = tmpArray[3];
var field = gRecord[maintag][fieldPosition];
var subfieldcode = field[0][subfieldIndex][0];
var subtag1 = field[1];
var subtag2 = field[2];
//check if this an autosuggest or autocomplete field.
var fullcode = getMARC(maintag, fieldPosition, subfieldIndex);
var reqtype = ""; //autosuggest or autocomplete, according to tag..
for (var i = 0; i < gAUTOSUGGEST_TAGS.length; i++) {
if (fullcode == gAUTOSUGGEST_TAGS[i]) { reqtype = "autosuggest"; }
}
for (var i = 0; i < gAUTOCOMPLETE_TAGS.length; i++) {
if (fullcode == gAUTOCOMPLETE_TAGS[i]) { reqtype = "autocomplete"; }
}
if (fullcode == gKEYWORD_TAG) { reqtype = "autokeyword"; }
if (reqtype == "") {
return;
}
// Create Ajax request.
var data = {
recID: gRecID,
maintag: maintag,
subtag1: subtag1,
subtag2: subtag2,
subfieldcode: subfieldcode,
requestType: reqtype,
value: value
}; //reqtype is autosuggest, autocomplete or autokeyword
createReq(data, function(json){
updateStatus('report', gRESULT_CODES[json['resultCode']]);
var suggestions = json[reqtype];
if (reqtype == 'autocomplete') {
if ((suggestions != null) && (suggestions.length > 0)) {
//put the first one "here"
replacement = suggestions[0];
var myelement = document.getElementById(mygrandparent.id);
if (myelement != null) {
//put it in the gRecord
gRecord[maintag][fieldPosition][0][subfieldIndex][1] = replacement;
mytarget.value = replacement;
}
//for the rest, create new subfields
for (var i=1, n=suggestions.length; i < n; i++) {
var valuein = suggestions[i];
var addhereID = maintag+"_"+fieldPosition; //an id to indicate where the new subfield goes
addSubfield(addhereID, subfieldcode, valuein);
}
} else { //autocomplete, nothing found
alert("No suggestions for your search term "+value);
}
} //autocomplete
if ((reqtype == 'autosuggest') || (reqtype == 'autokeyword')) {
if ((suggestions != null) && (suggestions.length > 0)) {
/*put the suggestions in the div autosuggest_xxxx*/
//make a nice box..
var mysel = '<table width="400" border="0"><tr><td><span class="bibeditscrollArea"><ul>';
//create the select items..
for (var i=0, n=suggestions.length; i < n; i++) {
var tmpid = select_id+"-"+suggestions[i];
mysel = mysel +'<li onClick="onAutosuggestSelect(\''+tmpid+'\');">'+suggestions[i]+"</li>";
}
mysel = mysel + "</ul></span></td>";
//add a stylish close link in case the user does not find
//the value among the suggestions
mysel = mysel + "<td><form><input type='button' value='close' onClick='onAutosuggestSelect(\"" + select_id + "-" + "\");'></form></td>";
mysel = mysel + "</tr></table>";
var autosugg_in = document.getElementById(autosuggest_id);
if (autosugg_in != null) {autosugg_in.innerHTML = mysel;}
} else { //there were no suggestions
alert("No suggestions for your search term "+value);
}
} //autosuggest
}, false); /*NB! This function is called synchronously.*/
} //onAutoSuggest
/*put the content of the autosuggest select into the field where autosuggest was launched*/
function onAutosuggestSelect(selectidandselval) {
/*first take the selectid. It is the string before the first hyphen*/
var tmpArray = selectidandselval.split('-');
var selectid = tmpArray[0];
var selval = tmpArray.slice(1).join('-'); //keep hyphens that are part of the selected value
/*generate the content element id and autosuggest element id from the selectid*/
var tmpArray = selectid.split('_');
var content_id = 'content_'+tmpArray[1]+'_'+tmpArray[2]+'_'+tmpArray[3];
var autosuggest_id = 'autosuggest_'+tmpArray[1]+'_'+tmpArray[2]+'_'+tmpArray[3];
var content_t = document.getElementById(content_id); //table
var content = null; //the actual text
//this is interesting: if the user is browsing the list of selections by mouse,
//the autogrown form has disappeared and only the table is left, so check..
if (content_t.innerHTML.indexOf("<form>") ==0) {
var content_f = null; //form
var content_ta = null; //textarea
if (content_t) {
content_f = content_t.firstChild; //form is the sub-elem of table
}
if (content_f) {
content_ta = content_f.firstChild; //textarea is the sub-elem of form
}
if (!(content_ta)) {return;}
content = content_ta;
} else {
content = content_t;
}
/*put value in place*/
if (selval) {
content.innerHTML = selval;
content.value = selval;
}
/*remove autosuggest box*/
var autosugg_in = document.getElementById(autosuggest_id);
autosugg_in.innerHTML = "";
}
function check_subjects_KB(value) {
/*
* Query Subjects KB to look for a match
*/
/* If KB is not defined in the system, just return value*/
if ($.inArray(gKBSubject,gAVAILABLE_KBS) == -1)
return value;
var response='';
$.ajaxSetup({async:false});
$.getJSON("/kb/export",
{ kbname: gKBSubject, format: 'json', searchkey: value},
function(data) {if (data[0]) {response = data[0].label;}}
);
$.ajaxSetup({async:true});
if (response) {
return response;
}
return value;
}
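/* Illustrative sketch (response shape inferred from the success callback
 * above, not from any /kb/export documentation): the JSON export is
 * expected to be a list of mapping entries such as
 *   [{"label": "Physics"}, ...]
 * so check_subjects_KB("phys") would return "Physics" on a match and the
 * original value otherwise.
 */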
function onMoveSubfieldClick(type, tag, fieldPosition, subfieldIndex){
/*
* Handle subfield moving arrows.
*/
if (failInReadOnly()){
return;
}
// Check if moving is possible
if (type == 'up') {
if ( (parseInt(subfieldIndex) - 1 )< 0) {
updateStatus('ready', '');
return;
}
}
else {
if ((parseInt(subfieldIndex) + 1) >= gRecord[tag][fieldPosition][0].length) {
updateStatus('ready', '');
return;
}
}
// creating the undoRedo handlers
var undoHandler = prepareUndoHandlerMoveSubfields(tag, parseInt(fieldPosition), parseInt(subfieldIndex), type);
addUndoOperation(undoHandler);
var ajaxData = performMoveSubfield(tag, fieldPosition, subfieldIndex, type, undoHandler);
queue_request(ajaxData);
}
function onDeleteClick(event){
/*
* Handle 'Delete selected' button or delete hotkeys.
*/
if (failInReadOnly()){
return;
}
var toDelete = getSelectedFields();
// Assert that no protected fields are scheduled for deletion.
var protectedField = containsProtectedField(toDelete);
if (protectedField){
displayAlert('alertDeleteProtectedField', [protectedField]);
updateStatus('ready');
return;
}
// register the undo Handler
var urHandler = prepareUndoHandlerDeleteFields(toDelete);
addUndoOperation(urHandler);
var ajaxData = deleteFields(toDelete, urHandler);
// Disable the delete button
$('#btnDeleteSelected').attr('disabled', 'disabled');
queue_request(ajaxData);
}
function onMoveFieldUp(tag, fieldPosition) {
if (failInReadOnly()){
return;
}
fieldPosition = parseInt(fieldPosition);
var thisField = gRecord[tag][fieldPosition];
if (fieldPosition > 0) {
var prevField = gRecord[tag][fieldPosition-1];
// check if the previous field has the same indicators
if ( cmpFields(thisField, prevField) == 0 ) {
var undoHandler = prepareUndoHandlerMoveField(tag, fieldPosition, "up");
addUndoOperation(undoHandler);
var ajaxData = performMoveField(tag, fieldPosition, "up", undoHandler);
queue_request(ajaxData);
}
}
}
function onMoveFieldDown(tag, fieldPosition) {
if (failInReadOnly()){
return;
}
fieldPosition = parseInt(fieldPosition);
var thisField = gRecord[tag][fieldPosition];
if (fieldPosition < gRecord[tag].length-1) {
var nextField = gRecord[tag][fieldPosition+1];
// check if the next field has the same indicators
if ( cmpFields(thisField, nextField) == 0 ) {
var undoHandler = prepareUndoHandlerMoveField(tag, fieldPosition, "down");
addUndoOperation(undoHandler);
var ajaxData = performMoveField(tag, fieldPosition, "down", undoHandler);
queue_request(ajaxData);
}
}
}
function updateInterfaceAccordingToMode(){
/* updates the user interface (in particular the activity of menu buttons)
according to the current operation mode of BibEdit.
*/
// updating the switch button caption
if (gReadOnlyMode){
deactivateRecordMenu();
$('#btnSwitchReadOnly').html("R/W");
} else {
activateRecordMenu();
$('#btnSwitchReadOnly').html("Read-only");
}
}
function switchToReadOnlyMode(){
// Moving to the read only mode with BibEdit
if (gRecordDirty == true){
alert("Please submit the record or cancel your changes before switching to read-only mode");
return false;
}
gReadOnlyMode = true;
createReq({recID: gRecID, requestType: 'deleteRecordCache'}, function() {},
true, undefined, onDeleteRecordCacheError);
gCacheMTime = 0;
updateInterfaceAccordingToMode();
}
function canSwitchToReadWriteMode(){
/* Determines whether it is currently possible to switch to the read/write mode. */
// Only the latest revision can be edited.
return (gRecRev === gRecLatestRev);
}
function switchToReadWriteMode(){
// switching to the normal editing mode of BibEdit
if (!canSwitchToReadWriteMode()){
alert("Only the latest revision can be edited");
return false;
}
gReadOnlyMode = false;
// reading the record as if it was just opened
getRecord(gRecID);
updateInterfaceAccordingToMode();
}
function onSwitchReadOnlyMode(){
// event handler executed when the user clicks the switch-to-read-only-mode button
if (gReadOnlyMode){
switchToReadWriteMode();
} else {
switchToReadOnlyMode();
}
}
// functions handling the revisions history
function getCompareClickedHandler(revisionId){
return function(e){
//document.location = "/"+ gSITE_RECORD +"/merge/#recid1=" + gRecID + "&recid2=" + gRecID + "." + revisionId;
var comparisonUrl = "/"+ gSITE_RECORD +"/edit/compare_revisions?recid=" +
gRecID + "&rev1=" + gRecRev + "&rev2=" + revisionId;
var newWindow = window.open(comparisonUrl);
newWindow.focus();
return false;
};
}
function onRevertClick(revisionId){
/*
* Handle 'Revert' button (submit record).
*/
updateStatus('updating');
if (displayAlert('confirmRevert')){
createReq({recID: gRecID, revId: revisionId, lastRevId: gRecLatestRev, requestType: 'revert',
force: onSubmitClick.force}, function(json){
// Submission was successful.
changeAndSerializeHash({state: 'submit', recid: gRecID});
var resCode = json['resultCode'];
cleanUp(!gNavigatingRecordSet, '', null, true);
// clear the list of record revisions
resetBibeditState();
displayMessage(resCode, false, [json['recID']]);
updateToolbar(false);
updateStatus('report', gRESULT_CODES[resCode]);
});
onSubmitClick.force = false;
}
else
updateStatus('ready');
holdingPenPanelRemoveEntries(); // clearing the holding pen entries list
}
function getRevertClickedHandler(revisionId){
return function(e){
onRevertClick(revisionId);
return false;
};
}
function updateRevisionsHistory(){
if (gRecRevisionHistory == null){
return;
}
var result = "";
var results = [];
for (revInd in gRecRevisionHistory){
tmpResult = displayRevisionHistoryEntry(gRecID, gRecRevisionHistory[revInd]);
tmpResult["revisionID"] = gRecRevisionHistory[revInd];
results.push(tmpResult);
result += tmpResult["HTML"];
}
$("#bibEditRevisionsHistory").html(result);
$(".bibEditRevHistoryEntryContent").bind("click", function(evt){
var revision = $(this)[0].id.split("_")[1];
updateStatus('updating');
getRecord(gRecID, revision);
});
/*Attaching the actions on user interface*/
for (resultInd in results){
result = results[resultInd];
$('#' + result['compareImgId']).bind("click", getCompareClickedHandler(result["revisionID"]));
$('#' + result['revertImgId']).bind("click", getRevertClickedHandler(result["revisionID"]));
}
}
function encodeXml(str){
var resultString = "";
for (var i=0, n=str.length; i<n; i++){
var c = str.charAt(i);
switch (c){
case '<':
resultString += "&lt;";
break;
case '>':
resultString += "&gt;";
break;
case '&':
resultString += "&amp;";
break;
case '"':
resultString += "&quot;";
break;
case "'":
resultString += "&apos;";
break;
default:
resultString += c;
}
}
return resultString;
}
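/* Usage example: encodeXml escapes exactly the five XML special
 * characters and leaves everything else untouched, e.g.
 *   encodeXml('a < b & "c"')  // -> 'a &lt; b &amp; &quot;c&quot;'
 */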
function getSelectionMarcXml(){
/*Gets the MARC XML of the current editor selection*/
var checkedFieldBoxes = $('input[class="bibEditBoxField"]:checked'); // interesting only for the controlfields
// where no subfields are
var checkedSubfieldBoxes = $('input[class="bibEditBoxSubfield"]:checked');
// now constructing the interesting data
var selectionNormal = {}; // a dictionary of identifiers that have appeared already
var selectionControlFields = [];
var selectedFields = []; // a list of fields already selected
var currentField = null; // the currently edited field
// Collect subfields to be deleted in toDelete.
var normalFieldsXml = "";
var controlFieldsXml = "";
$(checkedSubfieldBoxes).each(function(){
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2], subfieldIndex = tmpArray[3];
if (currentField == null || currentField.tag != tag || currentField.position != fieldPosition){
if (currentField != null){
var newPos = selectedFields.length;
selectedFields[newPos] = currentField;
normalFieldsXml += "</datafield>"
}
// creating an empty field
currentField={};
currentField.subfields = [];
currentField.tag = tag;
currentField.position = fieldPosition;
currentField.ind1 = gRecord[tag][fieldPosition][1];
currentField.ind2 = gRecord[tag][fieldPosition][2];
currentField.isControlField = false;
selectionNormal[tag] = true;
normalFieldsXml += "<datafield tag=\"" + currentField.tag + "\" ind1=\"" +
currentField.ind1 + "\" ind2=\"" + currentField.ind2 + "\">";
}
// appending a current subfield
var newPos = currentField.subfields.length;
subfield = gRecord[tag][fieldPosition][0][subfieldIndex];
currentField.subfields[newPos] = subfield;
normalFieldsXml += "<subfield code=\"" + subfield[0] + "\">" + encodeXml(subfield[1]) + "</subfield>";
});
if (currentField != null){
var newPos = selectedFields.length;
selectedFields[newPos] = currentField;
normalFieldsXml += "</datafield>";
}
// now extending by the control fields (they did not appear earlier)
$(checkedFieldBoxes).each(function(){
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2];
if (selectionNormal[tag] == undefined){
// we have a control field! Otherwise, the field has already been used
currentField = {};
currentField.tag = tag;
currentField.value = gRecord[tag][fieldPosition][3]
var newPos = selectionControlFields.length;
selectionControlFields[newPos] = currentField;
controlFieldsXml += "<controlfield tag=\"" + currentField.tag + "\">" + currentField.value+ "</controlfield>";
}
});
return "<record>" + controlFieldsXml + normalFieldsXml + "</record>";
}
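/* Illustrative output (tags and values are hypothetical): selecting a 001
 * controlfield and one subfield of a 100 field would serialize as
 *   <record><controlfield tag="001">12345</controlfield>
 *   <datafield tag="100" ind1=" " ind2=" ">
 *   <subfield code="a">Doe, John</subfield></datafield></record>
 */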
function onPerformCopy(){
/** The handler performing the copy operation
*/
if (document.activeElement.type == "textarea" || document.activeElement.type == "text"){
/*we do not want to perform this in case we are in an ordinary text area*/
return;
}
var valueToCopy = getSelectionMarcXml();
clipboardCopyValue(valueToCopy);
}
function onPerformPaste(){
/* Perform the paste operation: retrieve the MARC XML from the clipboard,
   decode it and apply the result to the record.
   By default, the fields are appended after the last field of the same tag.
*/
if (!gRecord) {
return;
}
if (document.activeElement.type == "textarea" || document.activeElement.type == "text"){
/*we do not want to perform this in case we are in an ordinary text area*/
return;
}
var clipboardContent = clipboardPasteValue();
if (!clipboardContent) {
return;
}
var record = null;
try {
record = decodeMarcXMLRecord(clipboardContent);
} catch (err) {
alert("Error occurred when parsing XML: " + err.message);
}
var changesAdd = []; // the ajax requests for all the fields
var undoHandlers = [];
for (tag in record){
if (gRecord[tag] == undefined){
gRecord[tag] = [];
}
// now appending the fields
for (fieldInd in record[tag]){
newPos = gRecord[tag].length;
gRecord[tag][newPos] = record[tag][fieldInd];
// enqueue ajax add field request
isControlfield = record[tag][fieldInd][0].length == 0;
ind1 = record[tag][fieldInd][1];
ind2 = record[tag][fieldInd][2];
subfields = record[tag][fieldInd][0];
value = record[tag][fieldInd][3]; // in case of a control field
changesAdd.push({
recID: gRecID,
requestType: "addField",
controlfield : isControlfield,
fieldPosition : newPos,
tag: tag,
ind1: record[tag][fieldInd][1],
ind2: record[tag][fieldInd][2],
subfields: record[tag][fieldInd][0],
value: record[tag][fieldInd][3]
});
undoHandler = prepareUndoHandlerAddField(
tag, ind1, ind2, newPos, subfields, isControlfield, value);
undoHandlers.push(undoHandler);
}
}
undoHandlers.reverse();
var undoHandler = prepareUndoHandlerBulkOperation(undoHandlers, "paste");
addUndoOperation(undoHandler);
// now sending the Ajax Request
var optArgs = {
undoRedo: undoHandler
};
createBulkReq(changesAdd, function(json){
updateStatus('report', gRESULT_CODES[json['resultCode']])}, optArgs);
// tags have to be redrawn in the increasing order
tags = [];
for (tag in record){
tags.push(tag);
}
tags.sort();
for (tagInd in tags){
redrawFields(tags[tagInd]);
}
reColorFields();
}
function addUndoOperation(operation){
gUndoList.push(operation);
invalidateRedo();
updateUrView();
}
function invalidateRedo(){
/** Invalidates the redo list - after some modification*/
gRedoList = [];
}
function adjustUndoRedoBtnsActivity(){
/** Making the undo/redo buttons active/inactive according to the needs
*/
if (gUndoList.length > 0){
$("#btnUndo").removeAttr("disabled");
}
else{
$("#btnUndo").attr("disabled", "disabled");
}
if (gRedoList.length > 0){
$("#btnRedo").removeAttr("disabled");
}
else{
$("#btnRedo").attr("disabled", "disabled");
}
}
function undoMany(number){
/** A function undoing many operations from the undo list
Arguments:
number: number of operations to undo
*/
var undoOperations = [];
for (var i = 0; i < number; i++){
undoOperations.push(getUndoOperation());
}
performUndoOperations(undoOperations);
updateUrView();
}
function prepareUndoHandlerEmpty(){
/** Creating an empty undo/redo handler - might be useful in some cases
when undo operation is required but should not be registered
*/
return {
operation_type: "no_operation"
};
}
function prepareUndoHandlerAddField(tag, ind1, ind2, fieldPosition, subfields,
isControlField, value ){
/** A function creating an undo handler for the operation of adding a new
field
Arguments:
tag: tag of the field
ind1: first indicator (a single character string)
ind2: second indicator (a single character string)
fieldPosition: a position of the field among other fields with the same
tag and possibly different indicators)
subfields: a list of the field's subfields; each subfield is described by
a pair: [code, value]
isControlField: a boolean value indicating if the field is a control field
value: the value of a control field (only meaningful when isControlField
is true)
*/
var result = {};
result.operation_type = "add_field";
result.newSubfields = subfields;
result.tag = tag;
result.ind1 = ind1;
result.ind2 = ind2;
result.fieldPosition = fieldPosition;
result.isControlField = isControlField;
if (isControlField){
// a control field carries a single value rather than subfields
result.value = value;
} else{
result.subfields = subfields;
}
return result;
}
function prepareUndoHandlerVisualizeChangeset(changesetNumber, changesListBefore, changesListAfter){
var result = {};
result.operation_type = "visualize_hp_changeset";
result.changesetNumber = changesetNumber;
result.oldChangesList = changesListBefore;
result.newChangesList = changesListAfter;
return result;
}
function prepareUndoHandlerApplyHPChange(changeHandler, changeNo){
/** changeHandler - handler to the original undo/redo handler associated with the action
*/
var result = {};
result.operation_type = "apply_hp_change";
result.handler = changeHandler;
result.changeNo = changeNo;
result.changeType = gHoldingPenChanges[changeNo].change_type;
return result;
}
function prepareUndoHandlerApplyHPChanges(changeHandlers, changesBefore){
/** Producing the undo/redo handler associated with application of
more than one HoldingPen change
Arguments:
changeHandlers - a list of undo/redo handlers associated with subsequent changes.
changesBefore - a list of Holding Pen changes before the operation
*/
var result = {};
result.operation_type = "apply_hp_changes";
result.handlers = changeHandlers;
result.changesBefore = changesBefore;
return result;
}
function prepareUndoHandlerRemoveAllHPChanges(hpChanges){
/** A function preparing the undo handler associated with the
removal of all the Holding Pen changes present in the interface */
var result = {};
result.operation_type = "remove_all_hp_changes";
result.old_changes_list = hpChanges;
return result;
}
function prepareUndoHandlerBulkOperation(undoHandlers, handlerTitle){
/*
Preparing an undo/redo handler for bulk operations
(for example, pasting fields)
arguments:
undoHandlers : handlers of separate operations from the bulk
handlerTitle : a message to be displayed in the undo menu
*/
var result = {};
result.operation_type = "bulk_operation";
result.handlers = undoHandlers;
result.title = handlerTitle;
return result;
}
function urPerformAddSubfields(tag, fieldPosition, subfields, isUndo){
var ajaxData = {
recID: gRecID,
requestType: 'addSubfields',
tag: tag,
fieldPosition: fieldPosition,
subfields: subfields,
undoRedo: (isUndo ? "undo": "redo")
};
gRecord[tag][fieldPosition][0] = gRecord[tag][fieldPosition][0].concat(subfields);
redrawFields(tag);
reColorFields();
return ajaxData;
}
function performModifyHPChanges(changesList, isUndo){
/** Undoing or redoing the operation of modifying the changeset
*/
// first local updates
gHoldingPenChanges = changesList;
refreshChangesControls();
var result = prepareOtherUpdateRequest(isUndo);
result.undoRedo = isUndo ? "undo" : "redo";
result.hpChanges = {toOverride: changesList};
return result;
}
function hideUndoPreview(){
$("#undoOperationVisualisationField").addClass("bibEditHiddenElement");
// clearing the selection !
$(".bibEditURDescEntrySelected").removeClass("bibEditURDescEntrySelected");
}
function getRedoOperation(){
// getting the operation to be redone
currentElement = gRedoList[0];
gRedoList.splice(0, 1);
gUndoList.push(currentElement);
return currentElement;
}
function getUndoOperation(){
// getting the operation to be undone
currentElement = gUndoList[gUndoList.length - 1];
gUndoList.splice(gUndoList.length - 1, 1);
gRedoList.splice(0, 0, currentElement);
return currentElement;
}
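/* Note: getUndoOperation and getRedoOperation rotate entries between the
 * two stacks: undo pops the newest entry off gUndoList and prepends it to
 * gRedoList; redo does the inverse. E.g. with gUndoList == [a, b], one
 * undo leaves gUndoList == [a] and gRedoList starting with [b].
 */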
function setAllUnselected(){
// make all the fields and subfields deselected
setSelectionStatusAll(false);
}
function setSelectionStatusAll(status){
// Changing the selection status for all the fields
subfieldBoxes = $('.bibEditBoxSubfield');
subfieldBoxes.each(function(e){
if (subfieldBoxes[e].checked != status){
subfieldBoxes[e].click();
}
});
}
function prepareApplyAllHPChangesHandler(){
// a container for many undo/redo operations in the same time
throw 'To implement';
}
/*** Handlers for specific operations*/
function renderURList(list, idPrefix, isInverted){
// rendering the undo/redo list into human-readable HTML
// list -> an undo or redo list
// idPrefix -> the prefix of the DOM identifier
var result = "";
var isPair = false;
var helperCnt = 0;
var iterationBeginning = list.length - 1;
var iterationJump = -1;
var iterationEnd = -1;
if (isInverted === true){
iterationBeginning = 0;
iterationJump = 1;
iterationEnd = list.length;
}
for (entryInd = iterationBeginning ; entryInd != iterationEnd ; entryInd += iterationJump){
result += "<div class=\"" + (isPair ? "bibEditURPairRow" : "bibEditUROddRow" )+ " bibEditURDescEntry\" id=\"" + idPrefix + "_" + helperCnt + "\">";
result += getHumanReadableUREntry(list[entryInd]);
result += "</div>";
isPair = ! isPair;
helperCnt += 1;
}
result += "";
return result;
}
function prepareApplyHPChangeHandler(){
// A handler for HoldingPen change application/rejection
throw 'to implement';
}
function processURUntil(entry){
// Executing the bulk undo/redo
var idParts = $(entry).attr("id").split("_");
var index = parseInt(idParts[1]);
if (idParts[0] == "undo"){
undoMany(index+1);
}
else{
redoMany(index+1);
}
}
function prepareUndoHandlerChangeSubfield(tag, fieldPos, subfieldPos, oldVal,
newVal, oldCode, newCode, operation_type){
var result = {};
result.operation_type = operation_type;
result.tag = tag;
result.oldVal = oldVal;
result.newVal = newVal;
result.oldCode = oldCode;
result.newCode = newCode;
result.fieldPos = fieldPos;
result.subfieldPos = subfieldPos;
return result;
}
function prepareUndoHandlerChangeFieldCode(oldTag, oldInd1, oldInd2, newTag, newInd1,
newInd2, fieldPos, operation_type){
var result = {};
result.operation_type = operation_type;
result.oldTag = oldTag;
result.oldInd1 = oldInd1;
result.oldInd2 = oldInd2;
result.newTag = newTag;
result.ind1 = newInd1;
result.ind2 = newInd2;
result.fieldPos = fieldPos;
if (gRecord[newTag] == undefined) {
result.newFieldPos = 0;
}
else {
result.newFieldPos = gRecord[newTag].length;
}
return result;
}
function setAllSelected(){
// make all the fields and subfields selected
setSelectionStatusAll(true);
}
function showUndoPreview(){
$("#undoOperationVisualisationField").removeClass("bibEditHiddenElement");
}
function prepareUndoHandlerMoveSubfields(tag, fieldPosition, subfieldPosition, direction){
var result = {};
result.operation_type = "move_subfield";
result.tag = tag;
result.field_position = fieldPosition;
result.subfield_position = subfieldPosition;
result.direction = direction;
return result;
}
// Handlers to implement:
function setFieldUnselected(tag, fieldPos){
// unselect a given field
setSelectionStatusField(tag, fieldPos, false);
}
function urPerformRemoveField(tag, position, isUndo){
var toDeleteData = {};
var toDeleteTmp = {};
toDeleteTmp[position] = [];
toDeleteData[tag] = toDeleteTmp;
// first preparing the data of Ajax request
var ajaxData = {
recID: gRecID,
requestType: 'deleteFields',
toDelete: toDeleteData,
undoRedo: (isUndo ? "undo": "redo")
};
// updating the local model
gRecord[tag].splice(position,1);
if (gRecord[tag].length == 0){
gRecord[tag] = undefined;
}
redrawFields(tag);
reColorFields();
return ajaxData;
}
function prepareOtherUpdateRequest(isUndo){
return {
requestType : 'otherUpdateRequest',
recID : gRecID,
undoRedo: ((isUndo === true) ? "undo" : "redo"),
hpChanges: {}
};
}
function performUndoApplyHpChanges(subRequests, oldChanges){
/**
Arguments:
subRequests - subrequests performing the appropriate undo operations
*/
// removing all the undo/redo information, as it is passed globally
for (ind in subRequests){
subRequests[ind].undoRedo = undefined;
}
// var gHoldingPenChanges
return {
requestType: 'applyBulkUpdates',
undoRedo: "undo",
requestsData: subRequests,
hpChanges: {toOverride: oldChanges}
};
}
function performBulkOperation(subHandlers, isUndo){
/**
return the bulk operation
Arguments:
subReqs : requests performing the sub-operations
isUndo - is current request undo or redo ?
*/
var subReqs = [];
if (isUndo === true){
subReqs = preparePerformUndoOperations(subHandlers);
} else {
// We cannot simply assign and reverse, as that would modify the original
var handlers = [];
for (handlerInd = subHandlers.length -1; handlerInd >= 0; handlerInd--){
handlers.push(subHandlers[handlerInd]);
}
subReqs = preparePerformRedoOperations(handlers);
}
for (ind in subReqs){
subReqs[ind].undoRedo = undefined;
}
return {
requestType: 'applyBulkUpdates',
undoRedo: (isUndo === true ? "undo" : "redo"),
requestsData: subReqs,
hpChanges: {}
};
}
function preparePerformRedoOperations(operations){
/** Prepares the Ajax requests redoing the operations passed as an argument */
var ajaxRequestsData = [];
for (operationInd in operations){
var operation = operations[operationInd];
var ajaxData = {};
var isMultiple = false; // is the current description a list of descriptors?
switch (operation.operation_type){
case "no_operation":
ajaxData = prepareOtherUpdateRequest(false);
break;
case "change_content":
ajaxData = urPerformChangeSubfieldContent(operation.tag,
operation.fieldPos,
operation.subfieldPos,
operation.newCode,
operation.newVal,
false);
break;
case "change_subfield_code":
ajaxData = urPerformChangeSubfieldCode(operation.tag,
operation.fieldPos,
operation.subfieldPos,
operation.newCode,
operation.newVal,
false);
break;
case "change_field_code":
ajaxData = urPerformChangeFieldCode(operation.newTag,
operation.ind1,
operation.ind2,
operation.oldTag,
operation.oldInd1,
operation.oldInd2,
operation.fieldPos,
false);
break;
case "add_field":
ajaxData = urPerformAddField(operation.isControlField,
operation.fieldPosition,
operation.tag,
operation.ind1,
operation.ind2,
operation.subfields,
operation.value,
false);
break;
case "add_subfields":
ajaxData = urPerformAddSubfields(operation.tag,
operation.fieldPosition,
operation.newSubfields,
false);
break;
case "delete_fields":
ajaxData = urPerformDeletePositionedFieldsSubfields(operation.toDelete, false);
break;
case "move_field":
ajaxData = performMoveField(operation.tag, operation.field_position, operation.direction , false);
break;
case "move_subfield":
ajaxData = performMoveSubfield(operation.tag, operation.field_position, operation.subfield_position, operation.direction, false);
break;
case "bulk_operation":
ajaxData = performBulkOperation(operation.handlers, false);
break;
case "apply_hp_change":
removeViewedChange(operation.changeNo); // we redo the change application so the change itself gets removed
ajaxData = preparePerformRedoOperations([operation.handler]);
ajaxData[0].hpChange = {};
ajaxData[0].hpChange.toDisable = [operation.changeNo]; // disable this change again
isMultiple = true;
break;
case "apply_hp_changes":
// in this case many changes are applied at once and the list of changes is completely overridden
ajaxData = performUndoApplyHpChanges();
break;
case "change_field":
ajaxData = urPerformChangeField(operation.tag, operation.fieldPos,
operation.newInd1, operation.newInd2,
operation.newSubfields,
operation.newIsControlField,
operation.oldValue , false);
break;
case "visualize_hp_changeset":
ajaxData = prepareVisualizeChangeset(operation.changesetNumber,
operation.newChangesList, "redo");
break;
case "remove_all_hp_changes":
ajaxData = performModifyHPChanges([], false);
break;
default:
alert("Error: wrong operation to redo");
break;
}
// now dealing with the results
if (isMultiple){
// in this case we have to merge lists rather than include inside
for (elInd in ajaxData){
ajaxRequestsData.push(ajaxData[elInd]);
}
}
else{
ajaxRequestsData.push(ajaxData);
}
}
return ajaxRequestsData;
}
function performRedoOperations(operations){
ajaxRequestsData = preparePerformRedoOperations(operations);
// now submitting the bulk request
var optArgs = {
// undoRedo: "redo"
};
var bulk_data = {'requestType' : 'applyBulkUpdates',
'requestsData' : ajaxRequestsData,
'recID' : gRecID};
queue_request(bulk_data);
}
function prepareUndoHandlerDeleteFields(toDelete){
/*Creating Undo/Redo handler for the operation of removal of fields and/or subfields
Arguments: toDelete - indicates fields and subfields scheduled to be deleted.
this argument should have a following structure:
{
"fields" : { tag: {fieldsPosition: field_structure_similar_to_on_from_gRecord}}
"subfields" : {tag: { fieldPosition: { subfieldPosition: [code, value]}}}
}
*/
var result = {};
result.operation_type = "delete_fields";
result.toDelete = toDelete;
return result;
}
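/* Illustrative toDelete value (tag, positions and codes are hypothetical):
 * deleting a whole 500 field at position 0 plus subfield 1 of a 700 field
 * would look like
 *   { "fields": {"500": {"0": [[["a", "A note"]], " ", " ", "", 0]}},
 *     "subfields": {"700": {"0": {"1": ["u", "CERN"]}}} }
 */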
function setSubfieldUnselected(tag, fieldPos, subfieldPos){
// unselecting a subfield
setSelectionStatusSubfield(tag, fieldPos, subfieldPos, false);
}
function prepareUndoHandlerAddSubfields(tag, fieldPosition, subfields){
/**
tag : tag of the field inside which the fields should be added
fieldPosition: position of the field
subfields: new subfields to be added. This argument should be a list
of two-element lists, each representing a single subfield as
[subfield_code, subfield_value]
*/
var result = {};
result.operation_type = "add_subfields";
result.tag = tag;
result.fieldPosition = fieldPosition;
result.newSubfields = subfields;
return result;
}
function setFieldSelected(tag, fieldPos){
// select a given field
setSelectionStatusField(tag, fieldPos, true);
}
function redoMany(number){
// redoing an indicated number of operations
var redoOperations = [];
for (i=0;i<number;i++){
redoOperations.push(getRedoOperation());
}
performRedoOperations(redoOperations);
updateUrView();
}
function urPerformAddField(controlfield, fieldPosition, tag, ind1, ind2, subfields, value, isUndo){
var ajaxData = {
recID: gRecID,
requestType: 'addField',
controlfield: controlfield,
fieldPosition: fieldPosition,
tag: tag,
ind1: ind1,
ind2: ind2,
subfields: subfields,
value: value,
undoRedo: (isUndo? "undo": "redo")
};
// updating the local situation
if (gRecord[tag] == undefined){
gRecord[tag] = [];
}
var newField = [(controlfield ? [] : subfields), ind1, ind2,
(controlfield ? value: ""), 0];
gRecord[tag].splice(fieldPosition, 0, newField);
redrawFields(tag);
reColorFields();
return ajaxData;
}
function urPerformRemoveSubfields(tag, fieldPosition, subfields, isUndo){
var toDelete = {};
toDelete[tag] = {};
toDelete[tag][fieldPosition] = []
var startingPosition = gRecord[tag][fieldPosition][0].length - subfields.length;
for (var i=startingPosition, n=gRecord[tag][fieldPosition][0].length; i<n ; i++){
toDelete[tag][fieldPosition].push(i);
}
var ajaxData = {
recID: gRecID,
requestType: 'deleteFields',
toDelete: toDelete,
undoRedo: (isUndo ? "undo": "redo")
};
// modifying the client-side interface
gRecord[tag][fieldPosition][0].splice( gRecord[tag][fieldPosition][0].length - subfields.length, subfields.length);
redrawFields(tag);
reColorFields();
return ajaxData;
}
function updateUrView(){
/*Updating the information box in the bibEdit menu
(showing the current undo/redo handlers)*/
$('#undoOperationVisualisationFieldContent')[0].innerHTML = (gUndoList.length == 0) ? "(empty)" :
renderURList(gUndoList, "undo");
// gUndoList[gUndoList.length - 1].operation_type;
$('#redoOperationVisualisationFieldContent')[0].innerHTML = (gRedoList.length == 0) ? "(empty)" :
renderURList(gRedoList, "redo", true);
// now attaching the events ... the function is uniform for all the elements present inside the document
var urEntries = $('.bibEditURDescEntry');
urEntries.each(function(index){
$(urEntries[index]).bind("mouseover", function (e){
$(urEntries[index]).find(".bibEditURDescEntryDetails").removeClass("bibEditHiddenElement");
urMarkSelectedUntil(urEntries[index]);
});
$(urEntries[index]).bind("mouseout", function(e){
$(urEntries[index]).find(".bibEditURDescEntryDetails").addClass("bibEditHiddenElement");
});
$(urEntries[index]).bind("click", function(e){
processURUntil(urEntries[index]);
});
});
}
function performMoveSubfield(tag, fieldPosition, subfieldIndex, direction, undoRedo){
var newSubfieldIndex = parseInt(subfieldIndex) + (direction == "up" ? -1 : 1);
var fieldID = tag + '_' + fieldPosition;
var field = gRecord[tag][fieldPosition];
var subfields = field[0];
// Create Ajax request.
var ajaxData = {
recID: gRecID,
requestType: 'moveSubfield',
tag: tag,
fieldPosition: fieldPosition,
subfieldIndex: subfieldIndex,
newSubfieldIndex: newSubfieldIndex,
undoRedo: (undoRedo == true) ? "undo" : ((undoRedo == false) ? "redo" : undoRedo)
};
// Continue local updating.
var subfieldToSwap = subfields[newSubfieldIndex];
subfields[newSubfieldIndex] = subfields[subfieldIndex];
subfields[subfieldIndex] = subfieldToSwap;
var rowGroup = $('#rowGroup_' + fieldID);
var coloredRowGroup = $(rowGroup).hasClass('bibEditFieldColored');
$(rowGroup).replaceWith(createField(tag, field, fieldPosition));
if (coloredRowGroup)
$('#rowGroup_' + fieldID).addClass('bibEditFieldColored');
// taking care of having only the new subfield position selected
setAllUnselected();
setSubfieldSelected(tag, fieldPosition, newSubfieldIndex);
return ajaxData;
}
function onRedo(evt){
if (gRedoList.length <= 0){
alert("No Redo operations to process");
return;
}
redoMany(1);
}
// functions related to the automatic field selection/unselection
function hideRedoPreview(){
$("#redoOperationVisualisationField").addClass("bibEditHiddenElement");
// clearing the selection !
$(".bibEditURDescEntrySelected").removeClass("bibEditURDescEntrySelected");
}
function urPerformAddPositionedFieldsSubfields(toAdd, isUndo){
return createFields(toAdd, isUndo);
}
function setSubfieldSelected(tag, fieldPos, subfieldPos){
// selecting a subfield
setSelectionStatusSubfield(tag, fieldPos, subfieldPos, true);
}
function getHumanReadableUREntry(handler){
// rendering a human readable description of an undo/redo operation
// handler : the u/r handler to render
var operationDescription;
switch (handler.operation_type){
case "move_field":
operationDescription = "move field";
break;
case "change_field":
operationDescription = "change field";
break;
case "move_subfield":
operationDescription = "move subfield";
break;
case "change_content":
operationDescription = "edit subfield";
break;
case "change_subfield_code":
operationDescription = "edit subfield code";
break;
case "change_field_code":
operationDescription = "edit field code";
break;
case "add_field":
operationDescription = "add field";
break;
case "add_subfields":
operationDescription = "add subfields";
break;
case "delete_fields":
operationDescription = "delete";
break;
case "bulk_operation":
operationDescription = handler.title;
break;
case "apply_hp_change":
operationDescription = "holding pen";
break;
case "visualize_hp_changeset":
operationDescription = "show changes";
break;
case "remove_all_hp_changes":
operationDescription = "remove changes";
break;
default:
operationDescription = "unknown operation";
break;
}
// now rendering parameters of the handler
var readableDescriptors = {
'tag' : 'tag',
'operation_type' : false,
'field_position' : false,
'subfield_position' : false,
'subfieldPos' : false,
'newVal' : 'new value',
'oldVal' : 'old value',
'fieldPos' : false,
'toDelete' : false,
'handlers' : false,
'newFieldPos' : false
};
var handlerDetails = '<table>';
for (characteristic in handler){
if (readableDescriptors[characteristic] != false){
var characteristicString = characteristic;
if (readableDescriptors[characteristic] != undefined){
characteristicString = readableDescriptors[characteristic];
}
handlerDetails += '<tr><td class="bibEditURDescChar">'
+ characteristicString + ':</td><td>' + handler[characteristic] + '</td></tr>';
}
}
handlerDetails += '</table>';
// now generating the final result
return '<div class="bibEditURDescHeader">'
+ operationDescription + '</div><div class="bibEditURDescEntryDetails bibEditHiddenElement">'
+ handlerDetails + '</div>';
}
function urMarkSelectedUntil(entry){
// marking all the detailed entries up to a given one as selected
// these entries share the same prefix but have a smaller number
var identifierParts = $(entry).attr("id").split("_");
var position = parseInt(identifierParts[1]);
var potentialElements = $(".bibEditURDescEntry");
potentialElements.each(function(index){
var curIdentifierParts = $(potentialElements[index]).attr("id").split("_");
if ((curIdentifierParts[0] == identifierParts[0]) && (parseInt(curIdentifierParts[1]) <= position)){
$(potentialElements[index]).addClass("bibEditURDescEntrySelected");
}
else {
$(potentialElements[index]).removeClass("bibEditURDescEntrySelected");
}
});
}
function onUndo(evt){
if (gUndoList.length <= 0){
alert("No Undo operations to process");
return;
}
undoMany(1);
}
function preparePerformUndoOperations(operations){
/** Undoes the operations passed as an argument */
var ajaxRequestsData = [];
for (operationInd in operations){
var operation = operations[operationInd];
var action = null;
var actionData = null;
var ajaxData = {};
var isMultiple = false; // is the current operation handler a list
// of operations rather than a single op ?
switch (operation.operation_type){
case "no_operation":
ajaxData = prepareOtherUpdateRequest(true);
break;
case "change_content":
ajaxData = urPerformChangeSubfieldContent(operation.tag,
operation.fieldPos,
operation.subfieldPos,
operation.oldCode,
operation.oldVal,
true);
break;
case "change_subfield_code":
ajaxData = urPerformChangeSubfieldCode(operation.tag,
operation.fieldPos,
operation.subfieldPos,
operation.oldCode,
operation.oldVal,
true);
break;
case "change_field_code":
ajaxData = urPerformChangeFieldCode(operation.oldTag,
operation.oldInd1,
operation.oldInd2,
operation.newTag,
operation.ind1,
operation.ind2,
operation.newFieldPos,
true);
break;
case "add_field":
ajaxData = urPerformRemoveField(operation.tag,
operation.fieldPosition,
true);
break;
case "add_subfields":
ajaxData = urPerformRemoveSubfields(operation.tag,
operation.fieldPosition,
operation.newSubfields,
true);
break;
case "delete_fields":
ajaxData = urPerformAddPositionedFieldsSubfields(operation.toDelete, true);
break;
case "move_field":
var newDirection = "up";
var newPosition = operation.field_position + 1;
if (operation.direction == "up"){
newDirection = "down";
newPosition = operation.field_position - 1;
}
ajaxData = performMoveField(operation.tag, newPosition, newDirection, true);
break;
case "move_subfield":
var newDirection = "up";
var newPosition = operation.subfield_position + 1;
if (operation.direction == "up"){
newDirection = "down";
newPosition = operation.subfield_position - 1;
}
ajaxData = performMoveSubfield(operation.tag, operation.field_position,
newPosition, newDirection, true);
break;
case "bulk_operation":
ajaxData = performBulkOperation(operation.handlers, true);
break;
case "apply_hp_change":
ajaxData = preparePerformUndoOperations([operation.handler]);
ajaxData[0]["hpChange"] = {};
ajaxData[0]["hpChange"]["toEnable"] = [operation.changeNo]; // reactivate
isMultiple = true;
revertViewedChange(operation.changeNo);
break;
case "visualize_hp_changeset":
ajaxData = prepareUndoVisualizeChangeset(operation.changesetNumber,
operation.oldChangesList);
break;
case "change_field":
ajaxData = urPerformChangeField(operation.tag, operation.fieldPos,
operation.oldInd1, operation.oldInd2,
operation.oldSubfields,
operation.oldIsControlField,
operation.oldValue , true);
break;
case "remove_all_hp_changes":
ajaxData = performModifyHPChanges(operation.old_changes_list, true);
break;
default:
alert("Error: wrong operation to undo");
}
if (isMultiple){
// in this case we have to merge the lists rather than nest one inside the other
for (elInd in ajaxData){
ajaxRequestsData.push(ajaxData[elInd]);
}
}
else{
ajaxRequestsData.push(ajaxData);
}
}
return ajaxRequestsData;
}
function performUndoOperations(operations){
var ajaxRequestsData = preparePerformUndoOperations(operations);
// now submitting the ajax request
var optArgs={
// undoRedo: "undo"
};
var bulk_data = {'requestType' : 'applyBulkUpdates',
'requestsData' : ajaxRequestsData,
'recID' : gRecID};
queue_request(bulk_data);
}
function prepareUndoHandlerMoveField(tag, fieldPosition, direction){
var result = {};
result.tag = tag;
result.operation_type = "move_field";
result.field_position = fieldPosition;
result.direction = direction;
return result;
}
function prepareUndoHandlerChangeField(tag, fieldPos,
oldInd1, oldInd2, oldSubfields, oldIsControlField, oldValue,
newInd1, newInd2, newSubfields, newIsControlField, newValue){
/** Function building a handler allowing to undo the operation of
changing the field structure.
Changing can happen only if the tag and position remain the same;
otherwise we deal with the removal and addition of a field
Arguments:
tag - tag of a field
fieldPos - position of a field
oldInd1, oldInd2 - indices of the old field
oldSubfields - subfields present in the old structure
oldIsControlField - a boolean value indicating if the field
is a control field
oldValue - the value before the change in case the field is a control field.
if the field is a normal field, this should equal ""
newInd1, newInd2, newSubfields, newIsControlField, newValue -
Similar parameters describing new structure of a field
*/
var result = {};
result.operation_type = "change_field";
result.tag = tag;
result.fieldPos = fieldPos;
result.oldInd1 = oldInd1;
result.oldInd2 = oldInd2;
result.oldSubfields = oldSubfields;
result.oldIsControlField = oldIsControlField;
result.oldValue = oldValue;
result.newInd1 = newInd1;
result.newInd2 = newInd2;
result.newSubfields = newSubfields;
result.newIsControlField = newIsControlField;
result.newValue = newValue;
return result;
}
function showRedoPreview(){
$("#redoOperationVisualisationField").removeClass("bibEditHiddenElement");
}
function deleteFields(toDeleteStruct, undoRedo){
// a function deleting the specified fields on both client and server sides
//
// toDeleteStruct : a structure describing fields and subfields to delete
// this structure is the same as for the function createFields
var toDelete = {};
// first we convert the data into a different format, losing the information about
// subfields of entirely removed fields
// first the entirely deleted fields
for (tag in toDeleteStruct.fields){
if (toDelete[tag] == undefined){
toDelete[tag] = {};
}
for (fieldPos in toDeleteStruct.fields[tag]){
toDelete[tag][fieldPos] = [];
}
}
for (tag in toDeleteStruct.subfields){
if (toDelete[tag] == undefined){
toDelete[tag] = {};
}
for (fieldPos in toDeleteStruct.subfields[tag]){
toDelete[tag][fieldPos] = [];
for (subfieldPos in toDeleteStruct.subfields[tag][fieldPos]){
toDelete[tag][fieldPos].push(subfieldPos);
}
}
}
var tagsToRedraw = [];
// reColorTable is true if any fields are completely deleted.
var reColorTable = false;
// first we have to encode all the data in a single dictionary
// Create Ajax request.
var ajaxData = {
recID: gRecID,
requestType: 'deleteFields',
toDelete: toDelete,
undoRedo: (undoRedo == true) ? "undo" : ((undoRedo == false) ? "redo" : undoRedo)
};
// Continue local updating.
// Parse data structure and delete accordingly in record.
var fieldsToDelete, subfieldIndexesToDelete, field, subfields, subfieldIndex;
for (var tag in toDelete) {
tagsToRedraw.push(tag);
fieldsToDelete = toDelete[tag];
// The fields should be treated in decreasing order (during the removal, indices may change)
var traversingOrder = [];
for (fieldPosition in fieldsToDelete) {
traversingOrder.push(fieldPosition);
}
// the default sort would order these lexicographically (wrong for positions >= 10);
// the comparator provided below sorts in decreasing numerical order
var traversingOrder = traversingOrder.sort(function(a, b){
return b - a;
});
for (var fieldInd in traversingOrder) {
var fieldPosition = traversingOrder[fieldInd];
var fieldID = tag + '_' + fieldPosition;
subfieldIndexesToDelete = fieldsToDelete[fieldPosition];
if (subfieldIndexesToDelete.length == 0)
deleteFieldFromTag(tag, fieldPosition);
else {
// the default sort would be lexicographical (wrong for more than 10 subfields); sort numerically instead
subfieldIndexesToDelete.sort(function(a, b){
return a - b;
});
field = gRecord[tag][fieldPosition];
subfields = field[0];
for (var j = subfieldIndexesToDelete.length - 1; j >= 0; j--){
subfields.splice(subfieldIndexesToDelete[j], 1);
}
}
}
}
// If entire fields have been deleted, redraw all fields with the same tag
// and recolor the full table.
for (tag in tagsToRedraw) {
redrawFields(tagsToRedraw[tag]);
}
reColorFields();
return ajaxData;
}
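// The numeric comparators used in deleteFields above matter because
// JavaScript's default Array.prototype.sort compares elements as strings.
// A minimal standalone sketch (sample positions are illustrative only):

```javascript
// Default sort() compares elements as strings, so numeric field
// positions such as 2 and 11 come out in the wrong order.
var positions = [2, 11, 1, 20];
var lexicographic = positions.slice().sort();
// A comparator returning b - a yields decreasing numerical order,
// which deleteFields needs so that earlier removals do not shift
// the indices of entries still waiting to be removed.
var decreasing = positions.slice().sort(function (a, b) {
  return b - a;
});
```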
function getSelectedFields(){
/** Function returning a list of selected fields
Returns all the fields and subfields that are selected.
The structure of the result is the following:
{
"fields" : { tag: {fieldPosition: field_structure_similar_to_one_from_gRecord}}
"subfields" : {tag: { fieldPosition: { subfieldPosition: [code, value]}}}
}
*/
var selectedFields = {};
var selectedSubfields = {};
var checkedFieldBoxes = $('input[class="bibEditBoxField"]:checked');
var checkedSubfieldBoxes = $('input[class="bibEditBoxSubfield"]:checked');
if (!checkedFieldBoxes.length && !checkedSubfieldBoxes.length)
// No fields selected
return;
// Collect fields to be deleted in toDelete.
$(checkedFieldBoxes).each(function(){
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2];
if (!selectedFields[tag]) {
selectedFields[tag] = {};
}
selectedFields[tag][fieldPosition] = gRecord[tag][fieldPosition];
});
// Collect subfields to be deleted in toDelete.
$(checkedSubfieldBoxes).each(function(){
var tmpArray = this.id.split('_');
var tag = tmpArray[1], fieldPosition = tmpArray[2], subfieldIndex = tmpArray[3];
if (selectedFields[tag] == undefined || selectedFields[tag][fieldPosition] == undefined){
// this field has not been selected entirely, we can proceed with processing the subfield selection
if (!selectedSubfields[tag]) {
selectedSubfields[tag] = {};
selectedSubfields[tag][fieldPosition] = {};
selectedSubfields[tag][fieldPosition][subfieldIndex] =
gRecord[tag][fieldPosition][0][subfieldIndex];
}
else {
if (!selectedSubfields[tag][fieldPosition])
selectedSubfields[tag][fieldPosition] = {};
selectedSubfields[tag][fieldPosition][subfieldIndex] =
gRecord[tag][fieldPosition][0][subfieldIndex];
}
} else {
// this subfield is part of an entirely selected field... its information is already included above
}
});
var result={};
result.fields = selectedFields;
result.subfields = selectedSubfields;
return result;
}
function urPerformChangeSubfieldContent(tag, fieldPos, subfieldPos, code, val, isUndo){
// changing the server side model
var ajaxData = {
recID: gRecID,
requestType: 'modifyContent',
tag: tag,
fieldPosition: fieldPos,
subfieldIndex: subfieldPos,
subfieldCode: code,
value: val,
undoRedo: (isUndo ? "undo": "redo")
};
// changing the model
gRecord[tag][fieldPos][0][subfieldPos][0] = code;
gRecord[tag][fieldPos][0][subfieldPos][1] = val;
// changing the display .... what if being edited right now ?
redrawFields(tag);
reColorFields();
return ajaxData;
}
function urPerformChangeSubfieldCode(tag, fieldPos, subfieldPos, code, val, isUndo){
// changing the server side model
var ajaxData = {
recID: gRecID,
requestType: 'modifySubfieldTag',
tag: tag,
fieldPosition: fieldPos,
subfieldIndex: subfieldPos,
subfieldCode: code,
value: val,
undoRedo: (isUndo ? "undo": "redo")
};
gRecord[tag][fieldPos][0][subfieldPos][0] = code;
gRecord[tag][fieldPos][0][subfieldPos][1] = val;
// changing the display .... what if being edited right now ?
redrawFields(tag);
reColorFields();
return ajaxData;
}
function urPerformChangeFieldCode(oldTag, oldInd1, oldInd2, newTag, ind1, ind2,
fieldPos, isUndo){
// changing the server side model
var ajaxData = {
recID: gRecID,
requestType: 'modifyFieldTag',
oldTag: newTag,
oldInd1: ind1,
oldInd2: ind2,
newTag: oldTag,
fieldPosition: fieldPos,
ind1: oldInd1,
ind2: oldInd2,
undoRedo: (isUndo ? "undo": "redo")
};
// updating the local model
var currentField = gRecord[newTag][fieldPos];
currentField[1] = oldInd1;
currentField[2] = oldInd2;
gRecord[newTag].splice(fieldPos,1);
if (gRecord[newTag].length == 0){
delete gRecord[newTag];
}
var fieldNewPos;
if (gRecord[oldTag] == undefined) {
fieldNewPos = 0;
gRecord[oldTag] = [];
gRecord[oldTag][fieldNewPos] = currentField;
}
else {
fieldNewPos = gRecord[oldTag].length;
gRecord[oldTag].splice(fieldNewPos, 0, currentField);
}
// changing the display .... what if being edited right now ?
redrawFields(newTag);
redrawFields(oldTag);
reColorFields();
return ajaxData;
}
function performChangeField(tag, fieldPos, ind1, ind2, subFields, isControlfield,
value, undoRedo){
/** Function changing the field structure and generating an appropriate AJAX
request handler
Arguments:
tag, fieldPos, ind1, ind2, subFields, isControlfield, value - standard
values describing a field. tag, fieldPos are used to locate the field
instance (which has to exist) and its content is modified accordingly.
undoRedo - an undoRedo handler or one of the strings "undo"/"redo"
*/
var ajaxData = {
recID: gRecID,
requestType: "modifyField",
controlfield : isControlfield,
fieldPosition : fieldPos,
ind1: ind1,
ind2: ind2,
tag: tag,
subFields: subFields,
undoRedo : undoRedo,
hpChanges: {}
}
// local changes
gRecord[tag][fieldPos][0] = subFields;
gRecord[tag][fieldPos][1] = ind1;
gRecord[tag][fieldPos][2] = ind2;
gRecord[tag][fieldPos][3] = value;
redrawFields(tag);
reColorFields();
return ajaxData;
}
function urPerformChangeField(tag, fieldPos, ind1, ind2, subFields,
isControlfield, value, isUndo){
/** A thin wrapper around performChangeField translating the
isUndo flag into the "undo"/"redo" marker */
return performChangeField(tag, fieldPos, ind1, ind2, subFields,
isControlfield, value, (isUndo ? "undo" : "redo"));
}
function performMoveField(tag, oldFieldPosition, direction, undoRedo){
var newFieldPosition = oldFieldPosition + (direction == "up" ? -1 : 1);
// Create Ajax request.
var ajaxData = {
recID: gRecID,
requestType: 'moveField',
tag: tag,
fieldPosition: oldFieldPosition,
direction: direction,
undoRedo: (undoRedo == true) ? "undo" : ((undoRedo == false) ? "redo" : undoRedo)
};
//continue updating locally
var currentField = gRecord[tag][oldFieldPosition];
gRecord[tag][oldFieldPosition] = gRecord[tag][newFieldPosition];
gRecord[tag][newFieldPosition] = currentField;
$('tbody#rowGroup_'+tag+'_'+(newFieldPosition)).replaceWith(
createField(tag, gRecord[tag][newFieldPosition], newFieldPosition));
$('tbody#rowGroup_'+tag+'_'+oldFieldPosition).replaceWith(
createField(tag, gRecord[tag][oldFieldPosition], oldFieldPosition));
reColorFields();
// Now taking care of having the new field selected and the rest unselected
setAllUnselected();
setFieldSelected(tag, newFieldPosition);
//$('#boxField_'+tag+'_'+(newFieldPosition)).click();
return ajaxData;
}
function setSelectionStatusField(tag, fieldPos, status){
var fieldCheckbox = $('#boxField_' + tag + '_' + fieldPos);
var subfieldCheckboxes = $('#rowGroup_' + tag + '_' + fieldPos + ' .bibEditBoxSubfield');
fieldCheckbox.each(function(ind){
if (fieldCheckbox[ind].checked != status)
{
fieldCheckbox[ind].click();
}
});
}
function urPerformDeletePositionedFieldsSubfields(toDelete, isUndo){
return deleteFields(toDelete, isUndo);
}
/** General Undo/Redo treatment lists */
function setSelectionStatusSubfield(tag, fieldPos, subfieldPos, status){
var subfieldCheckbox = $('#boxSubfield_' + tag + '_' + fieldPos + '_' + subfieldPos);
if (subfieldCheckbox[0].checked != status)
{
subfieldCheckbox[0].click();
}
}
function createFields(toCreateFields, isUndo){
// a function adding fields.
// toCreateFields : a structure describing fields and subfields to create
// this structure is the same as for the function deleteFields
// 1) Preparing the AJAX request
var tagsToRedraw = {};
var ajaxData = {
recID: gRecID,
requestType: 'addFieldsSubfieldsOnPositions',
fieldsToAdd: toCreateFields.fields,
subfieldsToAdd: toCreateFields.subfields
};
if (isUndo != undefined){
ajaxData['undoRedo'] = (isUndo ? "undo": "redo");
}
// 2) local processing -> creating the fields locally
// - first creating the missing fields so all the subsequent field indices are correct
for (tag in toCreateFields.fields){
if (gRecord[tag] == undefined){
gRecord[tag] = [];
}
tagsToRedraw[tag] = true;
var fieldIndices = [];
for (fieldPos in toCreateFields.fields[tag]){
fieldIndices.push(fieldPos);
}
fieldIndices.sort(function(a, b){ return a - b; }); // we have to add fields in increasing numerical order
for (indInd in fieldIndices){
var fieldIndexToAdd = fieldIndices[indInd]; // position at which the new field has to be inserted
var newField = toCreateFields.fields[tag][fieldIndexToAdd];
gRecord[tag].splice(fieldIndexToAdd, 0, newField);
}
}
// - now appending the remaining subfields
for (tag in toCreateFields.subfields){
tagsToRedraw[tag] = true;
for (fieldPos in toCreateFields.subfields[tag]){
var subfieldPositions = [];
for (subfieldPos in toCreateFields.subfields[tag][fieldPos]){
subfieldPositions.push(subfieldPos);
}
subfieldPositions.sort(function(a, b){ return a - b; }); // add subfields in increasing numerical order
for (subfieldInd in subfieldPositions){
var subfieldPosition = subfieldPositions[subfieldInd];
gRecord[tag][fieldPos][0].splice(
subfieldPosition, 0,
toCreateFields.subfields[tag][fieldPos][subfieldPosition]);
}
}
}
// - redrawing the affected tags
for (tag in tagsToRedraw){
redrawFields(tag);
}
reColorFields();
return ajaxData;
}
/* Bibcirculation Panel functions */
function isBibCirculationPanelNecessary(){
/** A function checking if the BibCirculation connectivity panel should
be displayed. This information is derived from the state of the record.
Returns true or false
*/
if (gRecID === null){
return false;
}
// only if the record is saved and exists in the database and belongs
// to a particular collection
return gDisplayBibCircPanel;
}
function updateBibCirculationPanel(){
/** Updates the BibCirculation panel contents and visibility
*/
if (gDisplayBibCircPanel === false){
// if the panel is present, it should be hidden
$("#bibEditBibCircConnection").addClass("bibEditHiddenElement");
}
else {
// the panel must be present - we have to show it
$(".bibEditBibCircConnection").removeClass("bibEditHiddenElement");
}
var interfaceElement = $("#bibEditBibCircConnection");
if (isBibCirculationPanelNecessary()){
interfaceElement.removeClass("bibEditHiddenElement");
} else {
interfaceElement.addClass("bibEditHiddenElement");
}
// updating the content
var copiesCountElement = $('#bibEditBibCirculationCopies');
copiesCountElement.html(gPhysCopiesNum);
}
function bibCircIntGetEditCopyUrl(recId){
/** A function returning the URL under which a given
record can be edited
*/
// return "/admin/bibcirculation/bibcirculationadmin.py/get_item_details?recid=" + recId;
return gBibCircUrl;
}
function onBibCirculationBtnClicked(e){
/** A function redirecting the user to the BibCirculation web interface
*/
var link = bibCircIntGetEditCopyUrl(gRecID);
window.open(link);
}
/* ---- Helper functions for adding subfields into the subfield content ---- */
function valueContainsSubfields(value) {
/*
* Purpose: Check if value has subfields inside. E.g. test$$xAnother test
*
* Input(s): string:value - value introduced into the subfield
*
* Returns: boolean - true (subfields inside), false (no subfields)
*/
var regExp = new RegExp(".*\\$\\$[0-9a-zA-Z].*");
return regExp.test(value);
}
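// The regular expression above recognizes "$$" followed by an alphanumeric
// subfield code. A standalone check of the same pattern (the sample values
// below are illustrative, not taken from real records):

```javascript
// Same pattern as valueContainsSubfields: "$$" must be followed
// by an alphanumeric subfield code somewhere in the value.
var regExp = new RegExp(".*\\$\\$[0-9a-zA-Z].*");
var withSubfields = regExp.test("test$$xAnother test");  // "$$x" present
var plainValue = regExp.test("just a plain value");      // no "$$" at all
var bareMarker = regExp.test("trailing marker $$");      // "$$" with no code after it
```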
function splitContentSubfields(value, subfieldCode, subfieldsToAdd, isSubject) {
/*
* Purpose: split content into pairs subfield index - subfield value
*
* Input(s): string:value - value introduced into the subfield
* string:subfieldCode - code of the subfield being edited
* Array:subfieldsToAdd - will contain all subfields extracted
* boolean:isSubject - if true, subfield values are mapped through the subjects KB
*
*/
var splitValue = value.split('$$');
subfieldsToAdd.push(new Array(subfieldCode, splitValue[0]));
for (var i=1, n=splitValue.length; i<n; i++) {
var subfieldValue = splitValue[i].substring(1);
if (isSubject) {
subfieldValue = check_subjects_KB(subfieldValue);
}
subfieldsToAdd.push(new Array(splitValue[i][0], subfieldValue));
}
}
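// The split logic above can be exercised in isolation. The sketch below
// (splitSketch is a hypothetical standalone name, with the check_subjects_KB
// lookup left out since it only applies to subject fields):

```javascript
// Standalone sketch of splitContentSubfields: "$$" starts a new
// subfield, whose first character is the subfield code and whose
// remainder is the subfield value.
function splitSketch(value, subfieldCode) {
  var subfieldsToAdd = [];
  var splitValue = value.split('$$');
  // everything before the first "$$" keeps the original code
  subfieldsToAdd.push([subfieldCode, splitValue[0]]);
  for (var i = 1, n = splitValue.length; i < n; i++) {
    subfieldsToAdd.push([splitValue[i][0], splitValue[i].substring(1)]);
  }
  return subfieldsToAdd;
}
var parts = splitSketch("This a test$$hThis is a second subfield", "m");
```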
/* ---- All functions related to change in an editable area ---- */
function is_reference_manually_curated(field) {
/*
* Checks if the given field has a subfield with code 9 and content
* CURATOR. Used to check if a reference is manually curated
*/
for (var i=0, n=field[0].length; i < n; i++) {
if (field[0][i][0] == '9' && field[0][i][1] == "CURATOR")
return true;
}
return false;
}
/**
* Checks if the field being edited is a subject field
* @param {String} tag_ind
* @param {String} subfield_code
* @return {Boolean}
*/
function isSubjectSubfield(tag_ind, subfield_code) {
return (tag_ind === "65017" && subfield_code === "a")
}
/**
* Helper function to clean the content inputed by a user in a cell
* @param {String} value
* @return {String}
*/
function sanitize_value(value) {
value = value.replace(/\n/g, ' '); // Replace newlines with spaces.
value = value.replace(/^\s+|\s+$/g,""); // Remove whitespace from the ends of strings
return escapeHTML(value);
}
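// The cleanup steps in sanitize_value can be traced with a small sketch.
// escapeHTMLSketch and sanitizeSketch are illustrative stand-ins; the real
// escapeHTML helper lives elsewhere in BibEdit and may escape more characters.

```javascript
// Minimal stand-in for the escapeHTML helper (assumption: it escapes
// at least &, < and >).
function escapeHTMLSketch(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}
// Same three steps as sanitize_value: newlines to spaces, trim, escape.
function sanitizeSketch(value) {
  value = value.replace(/\n/g, ' ');        // newlines become spaces
  value = value.replace(/^\s+|\s+$/g, ''); // trim leading/trailing whitespace
  return escapeHTMLSketch(value);
}
var cleaned = sanitizeSketch("  a <b>\nvalue  ");
```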
/**
* Function called when a field tag is changed
* @param {String} value
* @param {Object} cell
* @return {String}
*/
function onFieldTagChange(value, cell) {
function updateModel() {
var currentField = gRecord[oldTag][cell.fieldPosition];
currentField[1] = newInd1;
currentField[2] = newInd2;
gRecord[oldTag].splice(cell.fieldPosition,1);
if (gRecord[oldTag].length == 0){
delete gRecord[oldTag];
}
var fieldNewPos;
if (gRecord[newTag] == undefined) {
fieldNewPos = 0;
gRecord[newTag] = [];
gRecord[newTag][fieldNewPos] = currentField;
}
else {
fieldNewPos = gRecord[newTag].length;
gRecord[newTag].splice(fieldNewPos, 0, currentField);
}
}
function redrawTags() {
redrawFields(oldTag);
redrawFields(newTag);
reColorFields();
}
var old_value = cell.tag_ind;
if (old_value.replace(/_/g, " ") === value.replace(/_/g, " ")) {
return value;
}
/* Create undo/redo handler */
var oldTag = old_value.substring(0,3),
oldInd1 = old_value.substring(3,4),
oldInd2 = old_value.substring(4,5),
newTag = value.substring(0,3),
newInd1 = value.substring(3,4),
newInd2 = value.substring(4,5),
operation_type = "change_field_code";
urHandler = prepareUndoHandlerChangeFieldCode(oldTag,
oldInd1,
oldInd2,
newTag,
newInd1,
newInd2,
cell.fieldPosition,
operation_type);
addUndoOperation(urHandler);
/* Send AJAX request */
updateFieldTag(oldTag, newTag, oldInd1, oldInd2, newInd1, newInd2,
cell.fieldPosition, null, urHandler);
/* Update client side model */
updateModel();
redrawTags();
return value;
}
/**
* Function called when a subfield code is changed
* @param {String} value
* @param {Object} cell
* @return {Object}
*/
function onSubfieldCodeChange(value, cell) {
function updateModel() {
subfield_instance[0] = value;
}
var field_instance = gRecord[cell.tag][cell.fieldPosition];
var subfield_instance = field_instance[0][cell.subfieldIndex];
if (subfield_instance[0] == value) {
return value;
}
var old_subfield_code = subfield_instance[0]; // get old subfield code from gRecord
var operation_type = "change_subfield_code";
urHandler = prepareUndoHandlerChangeSubfield(cell.tag,
cell.fieldPosition,
cell.subfieldIndex,
subfield_instance[1],
subfield_instance[1],
old_subfield_code,
value,
operation_type);
addUndoOperation(urHandler);
updateSubfieldValue(cell.tag, cell.fieldPosition, cell.subfieldIndex, value,
subfield_instance[1], null, urHandler, true);
updateModel();
return value;
}
/**
* Function called when a subfield value is changed
* @param {String} value
* @param {Object} cell
* @return {String}
*/
function onContentChange(value, cell) {
function redrawTags() {
redrawFieldPosition(cell.tag, cell.fieldPosition);
reColorFields();
}
function updateModel() {
subfield_instance[1] = value;
field_instance[0].push.apply(field_instance[0], subfieldsToAdd);
}
/* Get field instance to be updated from global variable gRecord */
var field_instance = gRecord[cell.tag][cell.fieldPosition];
var subfield_instance = field_instance[0][cell.subfieldIndex];
/* Nothing has changed, return */
if (subfield_instance[1] === value) {
return value;
}
var isSubject = isSubjectSubfield(cell.tag_ind, subfield_instance[0]);
var subfieldsToAdd = [],
bulkOperation = false;
var old_subfield_code = subfield_instance[0];
var old_subfield_value = subfield_instance[1];
/* Check if there are subfields inside of the content value
* e.g 999C5 $$mThis a test$$hThis is a second subfield */
if (valueContainsSubfields(value)) {
bulkOperation = true;
splitContentSubfields(value, old_subfield_code, subfieldsToAdd, isSubject);
value = subfieldsToAdd[0][1];
subfieldsToAdd = subfieldsToAdd.slice(1);
}
else {
/* If editing subject field, check KB */
if (isSubject) {
value = check_subjects_KB(value);
}
}
/* If editing a reference, add curator subfield */
if (cell.tag_ind == '999C5' && !is_reference_manually_curated(field_instance)) {
bulkOperation = true;
subfieldsToAdd.push(new Array('9', 'CURATOR'));
}
if (bulkOperation) {
/* Prepare undo handlers to modify subfield content and to
* add new subfields */
var undoHandlers = [];
undoHandlers.push(prepareUndoHandlerChangeSubfield(cell.tag,
cell.fieldPosition,
cell.subfieldIndex,
old_subfield_value,
value,
subfield_instance[0],
subfield_instance[0],
"change_content"));
undoHandlers.push(prepareUndoHandlerAddSubfields(cell.tag,
cell.fieldPosition,
subfieldsToAdd));
urHandler = prepareUndoHandlerBulkOperation(undoHandlers, "addSubfields");
addUndoOperation(urHandler);
bulkUpdateSubfieldContent(cell.tag, cell.fieldPosition, cell.subfieldIndex, subfield_instance[0], value, null,
urHandler, subfieldsToAdd);
updateModel();
redrawTags();
}
else {
operation_type = "change_content";
urHandler = prepareUndoHandlerChangeSubfield(cell.tag,
cell.fieldPosition,
cell.subfieldIndex,
old_subfield_value,
value,
old_subfield_code,
old_subfield_code,
operation_type);
addUndoOperation(urHandler);
updateSubfieldValue(cell.tag, cell.fieldPosition, cell.subfieldIndex, old_subfield_code,
value, null, urHandler);
updateModel();
}
return value;
}
/**
* Extracts all the relevant info from the cell object
* @param {Object} th
* @return {Object}
*/
function get_cell_info(th) {
var cell = {};
var tmpArray = th.id.split('_');
cell.type = tmpArray[0];
cell.tag = tmpArray[1];
cell.fieldPosition = tmpArray[2];
cell.subfieldIndex = tmpArray[3];
var field_instance = gRecord[cell.tag][cell.fieldPosition];
cell.tag_ind = cell.tag + field_instance[1] + field_instance[2];
return cell;
}
/**
* Highlights the content changed on BibEdit's edit table
* @param {Object} cell
* @param {String} value
*/
function highlight_change(cell, value) {
var selector;
switch (cell.type) {
case 'subfieldTag':
selector = '#subfieldTag_' + cell.tag + '_' + cell.fieldPosition +
'_' + cell.subfieldIndex;
break;
case 'fieldTag':
var newTag = value.substring(0,3);
var newFieldPos;
if (gRecord[newTag].length === 1) {
newFieldPos = 0;
}
else {
newFieldPos = gRecord[newTag].length - 1;
}
selector = '#fieldTag_' + newTag + '_' + newFieldPos;
$(selector).focus();
break;
case 'content':
selector = '#content_' + cell.tag + '_' + cell.fieldPosition +
'_' + cell.subfieldIndex;
break;
default:
return;
}
setTimeout('$("' + selector + '").effect("highlight", {color: gNEW_CONTENT_COLOR}, ' +
'gNEW_CONTENT_COLOR_FADE_DURATION)', gNEW_CONTENT_HIGHLIGHT_DELAY);
}
/**
* Function called when an editable cell (using jEditable plugin) changes value
* @param {String} value
* @param {Object} th
* @return {String}
*/
function onEditableCellChange(value, th) {
if (failInReadOnly()) {
return;
}
value = sanitize_value(value);
/* return an object with all the info we need */
var cell = get_cell_info(th);
switch (cell.type) {
case 'subfieldTag':
/* A subfield code has been changed */
value = onSubfieldCodeChange(value, cell);
break;
case 'fieldTag':
/* A field tag has been changed */
value = onFieldTagChange(value, cell);
break;
case 'content':
/* A subfield value has been changed */
value = onContentChange(value, cell);
break;
default:
// something unwanted happened, do nothing
return value;
}
highlight_change(cell, value);
return value;
}
/* Functions specific to display modes */
function onfocusreference() {
if ($("#focuson_references").prop("checked") === true) {
$.each(gDisplayReferenceTags, function() {
$("tbody[id^='rowGroup_" + this + "']").show();
});
}
else {
$.each(gDisplayReferenceTags, function() {
$("tbody[id^='rowGroup_" + this + "']").hide();
});
}
}
function onfocusauthor() {
if ($("#focuson_authors").prop("checked") === true) {
$.each(gDisplayAuthorTags, function() {
$("tbody[id^='rowGroup_" + this + "']").show();
});
}
else {
$.each(gDisplayAuthorTags, function() {
$("tbody[id^='rowGroup_" + this + "']").hide();
});
}
}
function onfocusother() {
var tags = [];
tags = tags.concat(gDisplayReferenceTags, gDisplayAuthorTags);
var myselector = $();
$.each(tags, function() {
myselector = myselector.add("tbody[id^='rowGroup_" + this + "']");
});
if ($("#focuson_others").prop("checked") === true) {
$("tbody[id^='rowGroup_']").not(myselector).show();
}
else {
$("tbody[id^='rowGroup_']").not(myselector).hide();
}
}
function displayAllTags() {
$("#focuson_references").prop("checked", true);
$("#focuson_authors").prop("checked", true);
$("#focuson_others").prop("checked", true);
}
function getUnmarkedTags() {
return $("#focuson_list input:checkbox:not(:checked)");
}
function setUnmarkedTags(tags) {
$.each(tags, function() {
this.click();
})
}
function bindFocusHandlers() {
$("#focuson_references").on("click", onfocusreference);
$("#focuson_authors").on("click", onfocusauthor);
$("#focuson_others").on("click", onfocusother);
}
diff --git a/modules/bibedit/lib/bibedit_engine.py b/modules/bibedit/lib/bibedit_engine.py
index e1e274235..be2a39d47 100644
--- a/modules/bibedit/lib/bibedit_engine.py
+++ b/modules/bibedit/lib/bibedit_engine.py
@@ -1,1675 +1,1679 @@
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0103
"""Invenio BibEdit Engine."""
__revision__ = "$Id$"
from datetime import datetime
import re
import difflib
import zlib
import copy
import urllib
import urllib2
import cookielib
from invenio import bibformat
from invenio.jsonutils import json, CFG_JSON_AVAILABLE
from invenio.urlutils import auto_version_url
from invenio.xmlmarc2textmarc import create_marc_record
from invenio.bibedit_config import CFG_BIBEDIT_AJAX_RESULT_CODES, \
CFG_BIBEDIT_JS_CHECK_SCROLL_INTERVAL, CFG_BIBEDIT_JS_HASH_CHECK_INTERVAL, \
CFG_BIBEDIT_JS_CLONED_RECORD_COLOR, \
CFG_BIBEDIT_JS_CLONED_RECORD_COLOR_FADE_DURATION, \
CFG_BIBEDIT_JS_NEW_ADD_FIELD_FORM_COLOR, \
CFG_BIBEDIT_JS_NEW_ADD_FIELD_FORM_COLOR_FADE_DURATION, \
CFG_BIBEDIT_JS_NEW_CONTENT_COLOR, \
CFG_BIBEDIT_JS_NEW_CONTENT_COLOR_FADE_DURATION, \
CFG_BIBEDIT_JS_NEW_CONTENT_HIGHLIGHT_DELAY, \
CFG_BIBEDIT_JS_STATUS_ERROR_TIME, CFG_BIBEDIT_JS_STATUS_INFO_TIME, \
CFG_BIBEDIT_JS_TICKET_REFRESH_DELAY, CFG_BIBEDIT_MAX_SEARCH_RESULTS, \
CFG_BIBEDIT_TAG_FORMAT, CFG_BIBEDIT_AJAX_RESULT_CODES_REV, \
CFG_BIBEDIT_AUTOSUGGEST_TAGS, CFG_BIBEDIT_AUTOCOMPLETE_TAGS_KBS,\
CFG_BIBEDIT_KEYWORD_TAXONOMY, CFG_BIBEDIT_KEYWORD_TAG, \
CFG_BIBEDIT_KEYWORD_RDFLABEL, CFG_BIBEDIT_REQUESTS_UNTIL_SAVE, \
CFG_BIBEDIT_DOI_LOOKUP_FIELD, CFG_DOI_USER_AGENT, \
CFG_BIBEDIT_DISPLAY_REFERENCE_TAGS, CFG_BIBEDIT_DISPLAY_AUTHOR_TAGS
from invenio.config import CFG_SITE_LANG, CFG_DEVEL_SITE
from invenio.bibedit_dblayer import get_name_tags_all, reserve_record_id, \
get_related_hp_changesets, get_hp_update_xml, delete_hp_change, \
get_record_last_modification_date, get_record_revision_author, \
get_marcxml_of_record_revision, delete_related_holdingpen_changes, \
get_record_revisions
from invenio.bibedit_utils import cache_exists, cache_expired, \
create_cache_file, delete_cache_file, get_bibrecord, \
get_cache_file_contents, get_cache_mtime, get_record_templates, \
get_record_template, latest_record_revision, record_locked_by_other_user, \
record_locked_by_queue, save_xml_record, touch_cache_file, \
update_cache_file_contents, get_field_templates, get_marcxml_of_revision, \
revision_to_timestamp, timestamp_to_revision, \
get_record_revision_timestamps, record_revision_exists, \
can_record_have_physical_copies, extend_record_with_template, \
replace_references, merge_record_with_template, record_xml_output, \
record_is_conference, add_record_cnum, get_xml_from_textmarc, \
- record_locked_by_user_details, crossref_process_template
+ record_locked_by_user_details, crossref_process_template, \
+ modify_record_timestamp
from invenio.bibrecord import create_record, print_rec, record_add_field, \
record_add_subfield_into, record_delete_field, \
record_delete_subfield_from, \
record_modify_subfield, record_move_subfield, \
create_field, record_replace_field, record_move_fields, \
record_modify_controlfield, record_get_field_values, \
record_get_subfields, record_get_field_instances, record_add_fields, \
record_strip_empty_fields, record_strip_empty_volatile_subfields, \
record_strip_controlfields, record_order_subfields, field_xml_output
from invenio.config import CFG_BIBEDIT_PROTECTED_FIELDS, CFG_CERN_SITE, \
CFG_SITE_URL, CFG_SITE_RECORD, CFG_BIBEDIT_KB_SUBJECTS, \
CFG_BIBEDIT_KB_INSTITUTIONS, CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS, \
CFG_INSPIRE_SITE
from invenio.search_engine import record_exists, perform_request_search
from invenio.webuser import session_param_get, session_param_set
from invenio.bibcatalog import bibcatalog_system
from invenio.webpage import page
from invenio.htmlutils import get_mathjax_header
from invenio.textutils import wash_for_xml, show_diff
from invenio.bibknowledge import get_kbd_values_for_bibedit, get_kbr_values, \
get_kbt_items_for_bibedit, kb_exists
from invenio.batchuploader_engine import perform_upload_check
from invenio.bibcirculation_dblayer import get_number_copies, has_copies
from invenio.bibcirculation_utils import create_item_details_url
from invenio.refextract_api import FullTextNotAvailable
from invenio import xmlmarc2textmarc as xmlmarc2textmarc
from invenio.bibdocfile import BibRecDocs, InvenioBibDocFileError
from invenio.crossrefutils import get_marcxml_for_doi, CrossrefError
import invenio.template
bibedit_templates = invenio.template.load('bibedit')
re_revdate_split = re.compile('^(\d\d\d\d)(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)')
def get_empty_fields_templates():
"""
Return the templates of empty fields::
- an empty data field
- an empty control field
"""
return [{
"name": "Empty field",
"description": "A data field not containing any " + \
"information filled in",
"tag" : "",
"ind1" : "",
"ind2" : "",
"subfields" : [("","")],
"isControlfield" : False
},{
"name" : "Empty control field",
"description" : "A control field not containing any " + \
"data or tag description",
"isControlfield" : True,
"tag" : "",
"value" : ""
}]
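The two descriptors differ by the `isControlfield` flag: a data field carries indicators and a subfield list, while a control field carries a flat `value`. A minimal sketch of how a consumer might branch on that flag (the helper name is illustrative, not part of the module; the sample dicts mirror the structure returned above):

```python
def summarize_template(tpl):
    # Control fields have no indicators or subfields, only a value.
    if tpl["isControlfield"]:
        return "controlfield tag=%r value=%r" % (tpl["tag"], tpl["value"])
    return "datafield tag=%r subfields=%d" % (tpl["tag"], len(tpl["subfields"]))

# Sample descriptors shaped like the ones returned by
# get_empty_fields_templates().
empty_data = {"tag": "", "ind1": "", "ind2": "",
              "subfields": [("", "")], "isControlfield": False}
empty_ctrl = {"tag": "", "value": "", "isControlfield": True}
```

The same branching shows up again in `get_available_fields_templates()` below, which detects control fields by an empty subfield list.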
def get_available_fields_templates():
"""
Return all the available field templates.
The result is a list of descriptors. Each descriptor has
the same structure as a full field descriptor inside the
record.
"""
templates = get_field_templates()
result = get_empty_fields_templates()
for template in templates:
tplTag = template[3].keys()[0]
field = template[3][tplTag][0]
if (field[0] == []):
# if the field is a controlField, add different structure
result.append({
"name" : template[1],
"description" : template[2],
"isControlfield" : True,
"tag" : tplTag,
"value" : field[3]
})
else:
result.append({
"name": template[1],
"description": template[2],
"tag" : tplTag,
"ind1" : field[1],
"ind2" : field[2],
"subfields" : field[0],
"isControlfield" : False
})
return result
def perform_request_init(uid, ln, req, lastupdated):
"""Handle the initial request by adding menu and JavaScript to the page."""
errors = []
warnings = []
body = ''
# Add script data.
record_templates = get_record_templates()
record_templates.sort()
tag_names = get_name_tags_all()
protected_fields = ['001']
protected_fields.extend(CFG_BIBEDIT_PROTECTED_FIELDS.split(','))
cern_site = 'false'
if not CFG_JSON_AVAILABLE:
title = 'Record Editor'
body = '''Sorry, the record editor cannot operate when the
`simplejson' module is not installed. Please see the INSTALL
file.'''
return page(title = title,
body = body,
errors = [],
warnings = [],
uid = uid,
language = ln,
navtrail = "",
lastupdated = lastupdated,
req = req)
body += '<link rel="stylesheet" type="text/css" href="/img/jquery-ui.css" />'
body += '<link rel="stylesheet" type="text/css" href="%s/%s" />' % (CFG_SITE_URL,
auto_version_url("img/" + 'bibedit.css'))
if CFG_CERN_SITE:
cern_site = 'true'
data = {'gRECORD_TEMPLATES': record_templates,
'gTAG_NAMES': tag_names,
'gPROTECTED_FIELDS': protected_fields,
'gSITE_URL': '"' + CFG_SITE_URL + '"',
'gSITE_RECORD': '"' + CFG_SITE_RECORD + '"',
'gCERN_SITE': cern_site,
'gHASH_CHECK_INTERVAL': CFG_BIBEDIT_JS_HASH_CHECK_INTERVAL,
'gCHECK_SCROLL_INTERVAL': CFG_BIBEDIT_JS_CHECK_SCROLL_INTERVAL,
'gSTATUS_ERROR_TIME': CFG_BIBEDIT_JS_STATUS_ERROR_TIME,
'gSTATUS_INFO_TIME': CFG_BIBEDIT_JS_STATUS_INFO_TIME,
'gCLONED_RECORD_COLOR':
'"' + CFG_BIBEDIT_JS_CLONED_RECORD_COLOR + '"',
'gCLONED_RECORD_COLOR_FADE_DURATION':
CFG_BIBEDIT_JS_CLONED_RECORD_COLOR_FADE_DURATION,
'gNEW_ADD_FIELD_FORM_COLOR':
'"' + CFG_BIBEDIT_JS_NEW_ADD_FIELD_FORM_COLOR + '"',
'gNEW_ADD_FIELD_FORM_COLOR_FADE_DURATION':
CFG_BIBEDIT_JS_NEW_ADD_FIELD_FORM_COLOR_FADE_DURATION,
'gNEW_CONTENT_COLOR': '"' + CFG_BIBEDIT_JS_NEW_CONTENT_COLOR + '"',
'gNEW_CONTENT_COLOR_FADE_DURATION':
CFG_BIBEDIT_JS_NEW_CONTENT_COLOR_FADE_DURATION,
'gNEW_CONTENT_HIGHLIGHT_DELAY':
CFG_BIBEDIT_JS_NEW_CONTENT_HIGHLIGHT_DELAY,
'gTICKET_REFRESH_DELAY': CFG_BIBEDIT_JS_TICKET_REFRESH_DELAY,
'gRESULT_CODES': CFG_BIBEDIT_AJAX_RESULT_CODES,
'gAUTOSUGGEST_TAGS' : CFG_BIBEDIT_AUTOSUGGEST_TAGS,
'gAUTOCOMPLETE_TAGS' : CFG_BIBEDIT_AUTOCOMPLETE_TAGS_KBS.keys(),
'gKEYWORD_TAG' : '"' + CFG_BIBEDIT_KEYWORD_TAG + '"',
'gREQUESTS_UNTIL_SAVE' : CFG_BIBEDIT_REQUESTS_UNTIL_SAVE,
'gAVAILABLE_KBS': get_available_kbs(),
'gTagsToAutocomplete': CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS,
'gDOILookupField': '"' + CFG_BIBEDIT_DOI_LOOKUP_FIELD + '"',
'gDisplayReferenceTags': CFG_BIBEDIT_DISPLAY_REFERENCE_TAGS,
'gDisplayAuthorTags': CFG_BIBEDIT_DISPLAY_AUTHOR_TAGS
}
body += '<script type="text/javascript">\n'
for key in data:
body += ' var %s = %s;\n' % (key, data[key])
body += ' </script>\n'
# Adding the information about field templates
fieldTemplates = get_available_fields_templates()
body += "<script>\n" + \
" var fieldTemplates = %s\n" % (json.dumps(fieldTemplates), ) + \
"</script>\n"
# Add scripts (the ordering is important).
scripts = ['jquery-ui.min.js', 'jquery.jeditable.mini.js', 'jquery.hotkeys.js',
'json2.js', 'bibedit_refextract.js', 'bibedit_display.js', 'bibedit_engine.js', 'bibedit_keys.js',
'bibedit_menu.js', 'bibedit_holdingpen.js', 'marcxml.js',
'bibedit_clipboard.js']
for script in scripts:
body += ' <script type="text/javascript" src="%s/%s">' \
'</script>\n' % (CFG_SITE_URL, auto_version_url("js/" + script))
# Init BibEdit
body += '<script>$(init_bibedit);</script>'
# Build page structure and menu.
# rec = create_record(format_record(235, "xm"))[0]
#oaiId = record_extract_oai_id(rec)
body += bibedit_templates.menu()
body += bibedit_templates.focuson()
body += """<div id="bibEditContent">
<div class="revisionLine"></div>
<div id="Toptoolbar"></div>
<div id="bibEditMessage"></div>
<div id="bibEditContentTable"></div>
</div>"""
return body, errors, warnings
def get_available_kbs():
"""
Return list of KBs that are available in the system to be used with
BibEdit
"""
kb_list = [CFG_BIBEDIT_KB_INSTITUTIONS, CFG_BIBEDIT_KB_SUBJECTS]
available_kbs = [kb for kb in kb_list if kb_exists(kb)]
return available_kbs
def record_has_pdf(recid):
""" Check if record has a pdf attached
"""
rec_info = BibRecDocs(recid)
docs = rec_info.list_bibdocs()
return bool(docs)
def get_marcxml_of_revision_id(recid, revid):
"""
Return the MARCXML string corresponding to revision REVID
(=RECID.REVDATE) of a record. Return None if the revision
does not exist.
"""
job_date = "%s-%s-%s %s:%s:%s" % re_revdate_split.search(revid).groups()
tmp_res = get_marcxml_of_record_revision(recid, job_date)
if tmp_res:
for row in tmp_res:
xml = zlib.decompress(row[0]) + "\n"
# xml contains marcxml of record
# now we create a record object from this xml and sort fields and subfields
# and return marcxml
rec = create_record(xml)[0]
record_order_subfields(rec)
marcxml = record_xml_output(rec, order_fn="_order_by_tags")
return marcxml
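A revision identifier ends in a 14-digit timestamp, which the module-level `re_revdate_split` regex slices into a SQL-style datetime before the revision lookup. A minimal standalone sketch of that conversion (the helper name is illustrative, not part of the module):

```python
import re

# Same pattern as the module-level re_revdate_split above.
REVDATE_RE = re.compile(r'^(\d\d\d\d)(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)')

def revdate_to_job_date(revdate):
    """Turn a 14-digit revision date such as '20130102030405'
    into the 'YYYY-MM-DD HH:MM:SS' form used for revision lookups."""
    return "%s-%s-%s %s:%s:%s" % REVDATE_RE.search(revdate).groups()
```

For example, `revdate_to_job_date("20130102030405")` yields `"2013-01-02 03:04:05"`, the same `job_date` format built inline in `get_marcxml_of_revision_id()` and `perform_request_compare()`.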
def perform_request_compare(ln, recid, rev1, rev2):
"""Handle a request for comparing two records"""
body = ""
errors = []
warnings = []
if (not record_revision_exists(recid, rev1)) or \
(not record_revision_exists(recid, rev2)):
body = "The requested record revision does not exist!"
else:
xml1 = get_marcxml_of_revision_id(recid, rev1)
xml2 = get_marcxml_of_revision_id(recid, rev2)
# Create MARC representations of the records
marc1 = create_marc_record(create_record(xml1)[0], '', {"text-marc": 1, "aleph-marc": 0})
marc2 = create_marc_record(create_record(xml2)[0], '', {"text-marc": 1, "aleph-marc": 0})
comparison = show_diff(marc1, marc2)
job_date1 = "%s-%s-%s %s:%s:%s" % re_revdate_split.search(rev1).groups()
job_date2 = "%s-%s-%s %s:%s:%s" % re_revdate_split.search(rev2).groups()
body += bibedit_templates.history_comparebox(ln, job_date1,
job_date2, comparison)
return body, errors, warnings
def perform_request_newticket(recid, uid):
"""Create a new ticket with this record's number.
@param recid: record id
@param uid: user id
@return: (error_msg, url)
"""
t_url = ""
errmsg = ""
if bibcatalog_system is not None:
t_id = bibcatalog_system.ticket_submit(uid, "", recid, "")
if t_id:
#get the ticket's URL
t_url = bibcatalog_system.ticket_get_attribute(uid, t_id, 'url_modify')
else:
errmsg = "ticket_submit failed"
else:
errmsg = "No ticket system configured"
return (errmsg, t_url)
def perform_request_ajax(req, recid, uid, data, isBulk = False):
"""Handle Ajax requests by redirecting to appropriate function."""
response = {}
request_type = data['requestType']
undo_redo = None
if data.has_key("undoRedo"):
undo_redo = data["undoRedo"]
# Call function based on request type.
if request_type == 'searchForRecord':
# Search request.
response.update(perform_request_bibedit_search(data, req))
elif request_type in ['changeTagFormat']:
# User related requests.
response.update(perform_request_user(req, request_type, recid, data))
elif request_type in ('getRecord', 'submit', 'cancel', 'newRecord',
'deleteRecord', 'deleteRecordCache', 'prepareRecordMerge', 'revert',
'updateCacheRef', 'submittextmarc'):
# 'Major' record related requests.
response.update(perform_request_record(req, request_type, recid, uid,
data))
elif request_type in ('addField', 'addSubfields', \
'addFieldsSubfieldsOnPositions', 'modifyContent', \
'modifySubfieldTag', 'modifyFieldTag', \
'moveSubfield', 'deleteFields', 'moveField', \
'modifyField', 'otherUpdateRequest', \
'disableHpChange', 'deactivateHoldingPenChangeset'):
# Record updates.
cacheMTime = data['cacheMTime']
if data.has_key('hpChanges'):
hpChanges = data['hpChanges']
else:
hpChanges = {}
response.update(perform_request_update_record(request_type, recid, \
uid, cacheMTime, data, \
hpChanges, undo_redo, \
isBulk))
elif request_type in ('autosuggest', 'autocomplete', 'autokeyword'):
response.update(perform_request_autocomplete(request_type, recid, uid, \
data))
elif request_type in ('getTickets', ):
# BibCatalog requests.
response.update(perform_request_bibcatalog(request_type, recid, uid))
elif request_type in ('getHoldingPenUpdates', ):
response.update(perform_request_holdingpen(request_type, recid))
elif request_type in ('getHoldingPenUpdateDetails', \
'deleteHoldingPenChangeset'):
updateId = data['changesetNumber']
response.update(perform_request_holdingpen(request_type, recid, \
updateId))
elif request_type in ('applyBulkUpdates', ):
# a general version of a bulk request
changes = data['requestsData']
cacheMTime = data['cacheMTime']
response.update(perform_bulk_request_ajax(req, recid, uid, changes, \
undo_redo, cacheMTime))
elif request_type in ('preview', ):
response.update(perform_request_preview_record(request_type, recid, uid, data))
elif request_type in ('get_pdf_url', ):
response.update(perform_request_get_pdf_url(recid))
elif request_type in ('refextract', ):
txt = None
if data.has_key('txt'):
txt = data["txt"]
response.update(perform_request_ref_extract(recid, uid, txt))
elif request_type in ('refextracturl', ):
response.update(perform_request_ref_extract_url(recid, uid, data['url']))
elif request_type == 'getTextMarc':
response.update(perform_request_get_textmarc(recid, uid))
elif request_type == "getTableView":
response.update(perform_request_get_tableview(recid, uid, data))
elif request_type == "DOISearch":
response.update(perform_doi_search(data['doi']))
return response
def perform_bulk_request_ajax(req, recid, uid, reqsData, undoRedo, cacheMTime):
""" An AJAX handler used when treating bulk updates """
lastResult = {}
lastTime = cacheMTime
isFirst = True
for data in reqsData:
assert data != None
data['cacheMTime'] = lastTime
if isFirst and undoRedo != None:
# we add the undo/redo handler to the first operation in order to
# save the handler on the server side !
data['undoRedo'] = undoRedo
isFirst = False
lastResult = perform_request_ajax(req, recid, uid, data, isBulk=True)
# now we have to update the cacheMTime in the next request
# if lastResult.has_key('cacheMTime'):
try:
lastTime = lastResult['cacheMTime']
except:
raise Exception(str(lastResult))
return lastResult
def perform_request_bibedit_search(data, req):
"""Handle search requests."""
response = {}
searchType = data['searchType']
if searchType is None:
searchType = "anywhere"
searchPattern = data['searchPattern']
if searchType == 'anywhere':
pattern = searchPattern
else:
pattern = searchType + ':' + searchPattern
result_set = list(perform_request_search(req=req, p=pattern))
response['resultCode'] = 1
response['resultSet'] = result_set[0:CFG_BIBEDIT_MAX_SEARCH_RESULTS]
return response
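The handler above turns the user's choice into an Invenio search pattern: an 'anywhere' search passes the pattern through untouched, while any other search type becomes a field-prefixed query. A minimal sketch of that construction (hypothetical helper, not part of the module):

```python
def build_search_pattern(search_type, search_pattern):
    # 'anywhere' (or a missing type) searches all indexes, so the
    # pattern is used as-is; otherwise the field name is prefixed,
    # producing a query such as 'title:higgs'.
    if search_type is None or search_type == 'anywhere':
        return search_pattern
    return search_type + ':' + search_pattern
```

The resulting pattern is what `perform_request_bibedit_search()` hands to `perform_request_search(req=req, p=pattern)`, truncated to `CFG_BIBEDIT_MAX_SEARCH_RESULTS` hits.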
def perform_request_user(req, request_type, recid, data):
"""Handle user related requests."""
response = {}
if request_type == 'changeTagFormat':
tagformat_settings = session_param_get(req, 'bibedit_tagformat', {})
tagformat_settings[recid] = data['tagFormat']
session_param_set(req, 'bibedit_tagformat', tagformat_settings)
response['resultCode'] = 2
return response
def perform_request_holdingpen(request_type, recId, changeId=None):
"""
A method performing the Holding Pen AJAX request. The following types of
requests can be made::
-getHoldingPenUpdates: retrieving the holding pen updates pending
for a given record
"""
response = {}
if request_type == 'getHoldingPenUpdates':
changeSet = get_related_hp_changesets(recId)
changes = []
for change in changeSet:
changes.append((str(change[0]), str(change[1])))
response["changes"] = changes
elif request_type == 'getHoldingPenUpdateDetails':
# returning the list of changes related to the holding pen update;
# the format is based on what the record difference tool returns
assert(changeId != None)
hpContent = get_hp_update_xml(changeId)
holdingPenRecord = create_record(hpContent[0], "xm")[0]
# order subfields alphabetically
record_order_subfields(holdingPenRecord)
# databaseRecord = get_record(hpContent[1])
response['record'] = holdingPenRecord
response['changeset_number'] = changeId
elif request_type == 'deleteHoldingPenChangeset':
assert(changeId != None)
delete_hp_change(changeId)
return response
def perform_request_record(req, request_type, recid, uid, data, ln=CFG_SITE_LANG):
"""Handle 'major' record related requests like fetching, submitting or
deleting a record, cancel editing or preparing a record for merging.
"""
response = {}
if request_type == 'newRecord':
# Create a new record.
new_recid = reserve_record_id()
new_type = data['newType']
if new_type == 'empty':
# Create a new empty record.
create_cache_file(new_recid, uid)
response['resultCode'], response['newRecID'] = 6, new_recid
elif new_type == 'template':
# Create a new record from XML record template.
template_filename = data['templateFilename']
template = get_record_template(template_filename)
if not template:
response['resultCode'] = 108
else:
record = create_record(template)[0]
if not record:
response['resultCode'] = 109
else:
record_add_field(record, '001',
controlfield_value=str(new_recid))
create_cache_file(new_recid, uid, record, True)
response['resultCode'], response['newRecID'] = 7, new_recid
elif new_type == 'import':
# Import data from external source, using DOI
doi = data['doi']
if not doi:
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['error_no_doi_specified']
else:
try:
marcxml_template = get_marcxml_for_doi(doi)
except CrossrefError, inst:
response['resultCode'] = \
CFG_BIBEDIT_AJAX_RESULT_CODES_REV[inst.code]
except:
response['resultCode'] = 0
else:
record = crossref_process_template(marcxml_template, CFG_INSPIRE_SITE)
if not record:
response['resultCode'] = 109
else:
record_add_field(record, '001',
controlfield_value=str(new_recid))
create_cache_file(new_recid, uid, record, True)
response['resultCode'], response['newRecID'] = 7, new_recid
elif new_type == 'clone':
# Clone an existing record (from the user's cache).
existing_cache = cache_exists(recid, uid)
if existing_cache:
try:
record = get_cache_file_contents(recid, uid)[2]
except:
# if, for example, the cache format was wrong (outdated)
record = get_bibrecord(recid)
else:
# Cache missing. Fall back to using original version.
record = get_bibrecord(recid)
record_delete_field(record, '001')
record_add_field(record, '001', controlfield_value=str(new_recid))
create_cache_file(new_recid, uid, record, True)
response['resultCode'], response['newRecID'] = 8, new_recid
elif request_type == 'getRecord':
# Fetch the record. Possible error situations:
# - Non-existing record
# - Deleted record
# - Record locked by other user
# - Record locked by queue
# A cache file will be created if it does not exist.
# If the cache is outdated (i.e., not based on the latest DB revision),
# cacheOutdated will be set to True in the response.
record_status = record_exists(recid)
existing_cache = cache_exists(recid, uid)
read_only_mode = False
if data.has_key("inReadOnlyMode"):
read_only_mode = data['inReadOnlyMode']
if record_status == 0:
response['resultCode'] = 102
elif not read_only_mode and not existing_cache and \
record_locked_by_other_user(recid, uid):
name, email, locked_since = record_locked_by_user_details(recid, uid)
response['locked_details'] = {'name': name,
'email': email,
'locked_since': locked_since}
response['resultCode'] = 104
elif not read_only_mode and existing_cache and \
cache_expired(recid, uid) and \
record_locked_by_other_user(recid, uid):
response['resultCode'] = 104
elif not read_only_mode and record_locked_by_queue(recid):
response['resultCode'] = 105
else:
if data.get('deleteRecordCache'):
delete_cache_file(recid, uid)
existing_cache = False
pending_changes = []
disabled_hp_changes = {}
if read_only_mode:
if data.has_key('recordRevision') and data['recordRevision'] != 'sampleValue':
record_revision_ts = data['recordRevision']
record_xml = get_marcxml_of_revision(recid, \
record_revision_ts)
record = create_record(record_xml)[0]
record_revision = timestamp_to_revision(record_revision_ts)
pending_changes = []
disabled_hp_changes = {}
else:
# a normal cacheless retrieval of a record
record = get_bibrecord(recid)
record_revision = get_record_last_modification_date(recid)
if record_revision == None:
record_revision = datetime.now().timetuple()
pending_changes = []
disabled_hp_changes = {}
cache_dirty = False
mtime = 0
undo_list = []
redo_list = []
elif not existing_cache:
record_revision, record = create_cache_file(recid, uid)
mtime = get_cache_mtime(recid, uid)
pending_changes = []
disabled_hp_changes = {}
undo_list = []
redo_list = []
cache_dirty = False
else:
#TODO: This try except should be replaced with something nicer,
# like an argument indicating if a new cache file is to
# be created
try:
cache_dirty, record_revision, record, pending_changes, \
disabled_hp_changes, undo_list, redo_list = \
get_cache_file_contents(recid, uid)
touch_cache_file(recid, uid)
mtime = get_cache_mtime(recid, uid)
if not latest_record_revision(recid, record_revision) and \
get_record_revisions(recid) != ():
# This should prevent the use of an old cache when
# viewing an old version. If there are no revisions,
# we skip this step because this is a new record
response['cacheOutdated'] = True
except:
record_revision, record = create_cache_file(recid, uid)
mtime = get_cache_mtime(recid, uid)
pending_changes = []
disabled_hp_changes = {}
cache_dirty = False
undo_list = []
redo_list = []
if data.get('clonedRecord',''):
response['resultCode'] = 9
else:
response['resultCode'] = 3
revision_author = get_record_revision_author(recid, record_revision)
latest_revision = get_record_last_modification_date(recid)
if latest_revision == None:
latest_revision = datetime.now().timetuple()
last_revision_ts = revision_to_timestamp(latest_revision)
revisions_history = get_record_revision_timestamps(recid)
number_of_physical_copies = get_number_copies(recid)
bibcirc_details_URL = create_item_details_url(recid, ln)
can_have_copies = can_record_have_physical_copies(recid)
# For some collections, merge template with record
template_to_merge = extend_record_with_template(recid)
if template_to_merge:
merged_record = merge_record_with_template(record, template_to_merge)
if merged_record:
record = merged_record
create_cache_file(recid, uid, record, True)
if record_status == -1:
# The record was deleted
response['resultCode'] = 103
response['record_has_pdf'] = record_has_pdf(recid)
# order subfields alphabetically
record_order_subfields(record)
response['cacheDirty'], response['record'], \
response['cacheMTime'], response['recordRevision'], \
response['revisionAuthor'], response['lastRevision'], \
response['revisionsHistory'], response['inReadOnlyMode'], \
response['pendingHpChanges'], response['disabledHpChanges'], \
response['undoList'], response['redoList'] = cache_dirty, \
record, mtime, revision_to_timestamp(record_revision), \
revision_author, last_revision_ts, revisions_history, \
read_only_mode, pending_changes, disabled_hp_changes, \
undo_list, redo_list
response['numberOfCopies'] = number_of_physical_copies
response['bibCirculationUrl'] = bibcirc_details_URL
response['canRecordHavePhysicalCopies'] = can_have_copies
# Set tag format from user's session settings.
tagformat_settings = session_param_get(req, 'bibedit_tagformat')
tagformat = (tagformat_settings is not None) and tagformat_settings.get(recid, CFG_BIBEDIT_TAG_FORMAT) or CFG_BIBEDIT_TAG_FORMAT
response['tagFormat'] = tagformat
# KB information
response['KBSubject'] = CFG_BIBEDIT_KB_SUBJECTS
response['KBInstitution'] = CFG_BIBEDIT_KB_INSTITUTIONS
elif request_type == 'submit':
# Submit the record. Possible error situations:
# - Missing cache file
# - Cache file modified in other editor
# - Record locked by other user
# - Record locked by queue
# If the cache is outdated cacheOutdated will be set to True in the
# response.
if not cache_exists(recid, uid):
response['resultCode'] = 106
elif not get_cache_mtime(recid, uid) == data['cacheMTime']:
response['resultCode'] = 107
elif cache_expired(recid, uid) and \
record_locked_by_other_user(recid, uid):
response['resultCode'] = 104
elif record_locked_by_queue(recid):
response['resultCode'] = 105
else:
try:
tmp_result = get_cache_file_contents(recid, uid)
record_revision = tmp_result[1]
record = tmp_result[2]
pending_changes = tmp_result[3]
# disabled_changes = tmp_result[4]
xml_record = wash_for_xml(print_rec(record))
record, status_code, list_of_errors = create_record(xml_record)
# Simulate upload to catch errors
errors_upload = perform_upload_check(xml_record, '--replace')
if errors_upload:
response['resultCode'], response['errors'] = 113, \
errors_upload
return response
elif status_code == 0:
response['resultCode'], response['errors'] = 110, \
list_of_errors
if not data['force'] and not latest_record_revision(recid, record_revision):
response['cacheOutdated'] = True
else:
if record_is_conference(record):
new_cnum = add_record_cnum(recid, uid)
if new_cnum:
response["new_cnum"] = new_cnum
save_xml_record(recid, uid)
response['resultCode'] = 4
except Exception, e:
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV[ \
'error_wrong_cache_file_format']
if CFG_DEVEL_SITE: # return debug information in the request
response['exception_message'] = e.__str__()
elif request_type == 'revert':
revId = data['revId']
job_date = "%s-%s-%s %s:%s:%s" % re_revdate_split.search(revId).groups()
revision_xml = get_marcxml_of_revision(recid, job_date)
+ # Modify the 005 tag in order to merge with the latest version of record
+ last_revision_ts = data['lastRevId'] + ".0"
+ revision_xml = modify_record_timestamp(revision_xml, last_revision_ts)
save_xml_record(recid, uid, revision_xml)
if (cache_exists(recid, uid)):
delete_cache_file(recid, uid)
response['resultCode'] = 4
elif request_type == 'cancel':
# Cancel editing by deleting the cache file. Possible error situations:
# - Cache file modified in other editor
if cache_exists(recid, uid):
if get_cache_mtime(recid, uid) == data['cacheMTime']:
delete_cache_file(recid, uid)
response['resultCode'] = 5
else:
response['resultCode'] = 107
else:
response['resultCode'] = 5
elif request_type == 'deleteRecord':
# Submit the record. Possible error situations:
# - Record locked by other user
# - Record locked by queue
# As the user is requesting deletion we proceed even if the cache file
# is missing and we don't check if the cache is outdated or has
# been modified in another editor.
existing_cache = cache_exists(recid, uid)
pending_changes = []
if has_copies(recid):
response['resultCode'] = \
CFG_BIBEDIT_AJAX_RESULT_CODES_REV['error_physical_copies_exist']
elif existing_cache and cache_expired(recid, uid) and \
record_locked_by_other_user(recid, uid):
response['resultCode'] = \
CFG_BIBEDIT_AJAX_RESULT_CODES_REV['error_rec_locked_by_user']
elif record_locked_by_queue(recid):
response['resultCode'] = \
CFG_BIBEDIT_AJAX_RESULT_CODES_REV['error_rec_locked_by_queue']
else:
if not existing_cache:
record_revision, record, pending_changes, \
deactivated_hp_changes, undo_list, redo_list = \
create_cache_file(recid, uid)
else:
try:
record_revision, record, pending_changes, \
deactivated_hp_changes, undo_list, redo_list = \
get_cache_file_contents(recid, uid)[1:]
except:
record_revision, record, pending_changes, \
deactivated_hp_changes = create_cache_file(recid, uid)
record_add_field(record, '980', ' ', ' ', '', [('c', 'DELETED')])
undo_list = []
redo_list = []
update_cache_file_contents(recid, uid, record_revision, record, \
pending_changes, \
deactivated_hp_changes, undo_list, \
redo_list)
save_xml_record(recid, uid)
delete_related_holdingpen_changes(recid) # we don't need any changes
# related to a deleted record
response['resultCode'] = 10
elif request_type == 'deleteRecordCache':
# Delete the cache file. Ignore the request if the cache has been
# modified in another editor.
if data.has_key('cacheMTime'):
if cache_exists(recid, uid) and get_cache_mtime(recid, uid) == \
data['cacheMTime']:
delete_cache_file(recid, uid)
response['resultCode'] = 11
elif request_type == 'updateCacheRef':
# Update cache with the contents coming from BibEdit JS interface
# Used when updating references using ref extractor
record_revision, record, pending_changes, \
deactivated_hp_changes, undo_list, redo_list = \
get_cache_file_contents(recid, uid)[1:]
record = create_record(data['recXML'])[0]
response['cacheMTime'], response['cacheDirty'] = update_cache_file_contents(recid, uid, record_revision, record, \
pending_changes, \
deactivated_hp_changes, undo_list, \
redo_list), True
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['cache_updated_with_references']
elif request_type == 'prepareRecordMerge':
# We want to merge the cache with the current DB version of the record,
# so prepare an XML file from the file cache, to be used by BibMerge.
# Possible error situations:
# - Missing cache file
# - Record locked by other user
# - Record locked by queue
# We don't check if cache is outdated (a likely scenario for this
# request) or if it has been modified in another editor.
if not cache_exists(recid, uid):
response['resultCode'] = 106
elif cache_expired(recid, uid) and \
record_locked_by_other_user(recid, uid):
response['resultCode'] = 104
elif record_locked_by_queue(recid):
response['resultCode'] = 105
else:
save_xml_record(recid, uid, to_upload=False, to_merge=True)
response['resultCode'] = 12
elif request_type == 'submittextmarc':
# Textmarc content coming from the user
textmarc_record = data['textmarc']
xml_conversion_status = get_xml_from_textmarc(recid, textmarc_record)
if xml_conversion_status['resultMsg'] == "textmarc_parsing_error":
response.update(xml_conversion_status)
return response
# Simulate upload to catch errors
errors_upload = perform_upload_check(xml_conversion_status['resultXML'], '--replace')
if errors_upload:
response['resultCode'], response['errors'] = 113, \
errors_upload
return response
response.update(xml_conversion_status)
if xml_conversion_status['resultMsg'] == 'textmarc_parsing_success':
create_cache_file(recid, uid,
create_record(response['resultXML'])[0])
save_xml_record(recid, uid)
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV["record_submitted"]
return response
def perform_request_update_record(request_type, recid, uid, cacheMTime, data, \
hpChanges, undoRedoOp, isBulk=False):
"""
Handle record update requests like adding, modifying, moving or deleting
of fields or subfields. Possible common error situations::
- Missing cache file
- Cache file modified in other editor
@param undoRedoOp: Indicates whether an "undo"/"redo"/undo_descriptor
operation is performed by the current request.
"""
response = {}
if not cache_exists(recid, uid):
response['resultCode'] = 106
elif get_cache_mtime(recid, uid) != cacheMTime and not isBulk:
# In case of a bulk request, the changes are deliberately performed
# immediately one after another
response['resultCode'] = 107
else:
try:
record_revision, record, pending_changes, deactivated_hp_changes, \
undo_list, redo_list = get_cache_file_contents(recid, uid)[1:]
except:
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV[ \
'error_wrong_cache_file_format']
return response
# process all the Holding Pen change operations, regardless of
# the request type
if hpChanges.has_key("toDisable"):
for changeId in hpChanges["toDisable"]:
pending_changes[changeId]["applied_change"] = True
if hpChanges.has_key("toEnable"):
for changeId in hpChanges["toEnable"]:
pending_changes[changeId]["applied_change"] = False
if hpChanges.has_key("toOverride"):
pending_changes = hpChanges["toOverride"]
if hpChanges.has_key("changesetsToDeactivate"):
for changesetId in hpChanges["changesetsToDeactivate"]:
deactivated_hp_changes[changesetId] = True
if hpChanges.has_key("changesetsToActivate"):
for changesetId in hpChanges["changesetsToActivate"]:
deactivated_hp_changes[changesetId] = False
# processing the undo/redo entries
if undoRedoOp == "undo":
try:
redo_list = [undo_list[-1]] + redo_list
undo_list = undo_list[:-1]
except:
raise Exception("An exception occurred when undoing the previous" + \
" operation. Undo list: " + str(undo_list) + \
" Redo list: " + str(redo_list))
elif undoRedoOp == "redo":
try:
undo_list = undo_list + [redo_list[0]]
redo_list = redo_list[1:]
except:
raise Exception("An exception occurred when redoing the previous" + \
" operation. Undo list: " + str(undo_list) + \
" Redo list: " + str(redo_list))
else:
# This is a genuine operation - we have to add a new descriptor
# to the undo list and cancel the redo unless the operation is
# a bulk operation
if undoRedoOp != None:
undo_list = undo_list + [undoRedoOp]
redo_list = []
else:
assert isBulk == True
field_position_local = data.get('fieldPosition')
if field_position_local is not None:
field_position_local = int(field_position_local)
if request_type == 'otherUpdateRequest':
# An empty request. Might be useful if we want to perform
# operations that require only the actions performed globally,
# like modifying the holdingPen changes list
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV[ \
'editor_modifications_changed']
elif request_type == 'deactivateHoldingPenChangeset':
# The changeset has been marked as processed (the user applied it in
# the editor). Mark it as used in the cache file.
# CAUTION: This function has been implemented here because logically
# it fits with the modifications made to the cache file.
# No changes are made to the Holding Pen physically. The
# changesets are related to the cache because we want to
# cancel the removal every time the cache disappears for
# any reason
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV[ \
'disabled_hp_changeset']
elif request_type == 'addField':
if data['controlfield']:
record_add_field(record, data['tag'],
controlfield_value=data['value'])
response['resultCode'] = 20
else:
record_add_field(record, data['tag'], data['ind1'],
data['ind2'], subfields=data['subfields'],
field_position_local=field_position_local)
response['resultCode'] = 21
elif request_type == 'addSubfields':
subfields = data['subfields']
for subfield in subfields:
record_add_subfield_into(record, data['tag'], subfield[0],
subfield[1], subfield_position=None,
field_position_local=field_position_local)
if len(subfields) == 1:
response['resultCode'] = 22
else:
response['resultCode'] = 23
elif request_type == 'addFieldsSubfieldsOnPositions':
#1) Sorting the fields by their identifiers
fieldsToAdd = data['fieldsToAdd']
subfieldsToAdd = data['subfieldsToAdd']
for tag in fieldsToAdd.keys():
positions = fieldsToAdd[tag].keys()
positions.sort()
for position in positions:
# now adding fields at a position
isControlfield = (len(fieldsToAdd[tag][position][0]) == 0)
# if there are no subfields, this is a control field
if isControlfield:
controlfieldValue = fieldsToAdd[tag][position][3]
record_add_field(record, tag, field_position_local = \
int(position), \
controlfield_value = \
controlfieldValue)
else:
subfields = fieldsToAdd[tag][position][0]
ind1 = fieldsToAdd[tag][position][1]
ind2 = fieldsToAdd[tag][position][2]
record_add_field(record, tag, ind1, ind2, subfields = \
subfields, field_position_local = \
int(position))
# now adding the subfields
for tag in subfieldsToAdd.keys():
for fieldPosition in subfieldsToAdd[tag].keys(): #now the fields
#order not important !
subfieldsPositions = subfieldsToAdd[tag][fieldPosition]. \
keys()
subfieldsPositions.sort()
for subfieldPosition in subfieldsPositions:
subfield = subfieldsToAdd[tag][fieldPosition]\
[subfieldPosition]
record_add_subfield_into(record, tag, subfield[0], \
subfield[1], \
subfield_position = \
int(subfieldPosition), \
field_position_local = \
int(fieldPosition))
response['resultCode'] = \
CFG_BIBEDIT_AJAX_RESULT_CODES_REV['added_positioned_subfields']
elif request_type == 'modifyField': # changing the field structure
# first remove subfields and then add new... change the indices
subfields = data['subFields'] # parse the JSON representation of
# the subfields here
new_field = create_field(subfields, data['ind1'], data['ind2'])
record_replace_field(record, data['tag'], new_field, \
field_position_local = data['fieldPosition'])
response['resultCode'] = 26
elif request_type == 'modifyContent':
if data['subfieldIndex'] != None:
record_modify_subfield(record, data['tag'],
data['subfieldCode'], data['value'],
int(data['subfieldIndex']),
field_position_local=field_position_local)
else:
record_modify_controlfield(record, data['tag'], data["value"],
field_position_local=field_position_local)
response['resultCode'] = 24
elif request_type == 'modifySubfieldTag':
record_add_subfield_into(record, data['tag'], data['subfieldCode'],
data["value"], subfield_position= int(data['subfieldIndex']),
field_position_local=field_position_local)
record_delete_subfield_from(record, data['tag'], int(data['subfieldIndex']) + 1,
field_position_local=field_position_local)
response['resultCode'] = 24
elif request_type == 'modifyFieldTag':
subfields = record_get_subfields(record, data['oldTag'],
field_position_local=field_position_local)
record_add_field(record, data['newTag'], data['ind1'],
data['ind2'] , subfields=subfields)
record_delete_field(record, data['oldTag'], ind1=data['oldInd1'], \
ind2=data['oldInd2'], field_position_local=field_position_local)
response['resultCode'] = 32
elif request_type == 'moveSubfield':
record_move_subfield(record, data['tag'],
int(data['subfieldIndex']), int(data['newSubfieldIndex']),
field_position_local=field_position_local)
response['resultCode'] = 25
elif request_type == 'moveField':
if data['direction'] == 'up':
final_position_local = field_position_local-1
else: # direction is 'down'
final_position_local = field_position_local+1
record_move_fields(record, data['tag'], [field_position_local],
final_position_local)
response['resultCode'] = 32
elif request_type == 'deleteFields':
to_delete = data['toDelete']
deleted_fields = 0
deleted_subfields = 0
for tag in to_delete:
# Sort the fields in decreasing order of local position.
fieldsOrder = to_delete[tag].keys()
fieldsOrder.sort(lambda a, b: int(b) - int(a))
for field_position_local in fieldsOrder:
if not to_delete[tag][field_position_local]:
# No subfields specified - delete entire field.
record_delete_field(record, tag,
field_position_local=int(field_position_local))
deleted_fields += 1
else:
for subfield_position in \
to_delete[tag][field_position_local][::-1]:
# Delete subfields in reverse order (to keep the
# indexing correct).
record_delete_subfield_from(record, tag,
int(subfield_position),
field_position_local=int(field_position_local))
deleted_subfields += 1
if deleted_fields == 1 and deleted_subfields == 0:
response['resultCode'] = 26
elif deleted_fields and deleted_subfields == 0:
response['resultCode'] = 27
elif deleted_subfields == 1 and deleted_fields == 0:
response['resultCode'] = 28
elif deleted_subfields and deleted_fields == 0:
response['resultCode'] = 29
else:
response['resultCode'] = 30
response['cacheMTime'], response['cacheDirty'] = \
update_cache_file_contents(recid, uid, record_revision,
record, \
pending_changes, \
deactivated_hp_changes, \
undo_list, redo_list), \
True
return response
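The undo/redo bookkeeping above treats `undo_list` and `redo_list` as a pair of stacks: "undo" moves the newest descriptor to the head of the redo list, "redo" moves it back, and any genuine new operation appends a descriptor and clears the redo history. A stand-alone sketch of that protocol (the descriptors are plain strings here purely for illustration):

```python
def apply_undo_redo(undo_list, redo_list, op, descriptor=None):
    """Return updated (undo_list, redo_list) after an 'undo', a 'redo',
    or a genuine new operation described by `descriptor`."""
    if op == 'undo':
        # Move the most recent descriptor to the head of the redo list.
        redo_list = [undo_list[-1]] + redo_list
        undo_list = undo_list[:-1]
    elif op == 'redo':
        # Move the first redo descriptor back onto the undo stack.
        undo_list = undo_list + [redo_list[0]]
        redo_list = redo_list[1:]
    else:
        # A genuine edit invalidates the redo history.
        undo_list = undo_list + [descriptor]
        redo_list = []
    return undo_list, redo_list

undo, redo = apply_undo_redo(['addField'], [], None, 'moveField')
undo, redo = apply_undo_redo(undo, redo, 'undo')
# undo == ['addField'], redo == ['moveField']
```

As in the handler above, a bulk sub-request passes no descriptor at all, which is why the genuine-operation branch only runs when one is supplied.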
def perform_request_autocomplete(request_type, recid, uid, data):
"""
Perform an AJAX request associated with the retrieval of autocomplete
data.
@param request_type: Type of the currently served request
@param recid: the identifier of the record
@param uid: The identifier of the currently logged-in user
@param data: The request data, possibly containing important additional
arguments
"""
response = {}
# get the values based on which one needs to search
searchby = data['value']
#we check if the data is properly defined
fulltag = ''
if data.has_key('maintag') and data.has_key('subtag1') and \
data.has_key('subtag2') and data.has_key('subfieldcode'):
maintag = data['maintag']
subtag1 = data['subtag1']
subtag2 = data['subtag2']
u_subtag1 = subtag1
u_subtag2 = subtag2
if (not subtag1) or (subtag1 == ' '):
u_subtag1 = '_'
if (not subtag2) or (subtag2 == ' '):
u_subtag2 = '_'
subfieldcode = data['subfieldcode']
fulltag = maintag+u_subtag1+u_subtag2+subfieldcode
if (request_type == 'autokeyword'):
#call the keyword-form-ontology function
if fulltag and searchby:
items = get_kbt_items_for_bibedit(CFG_BIBEDIT_KEYWORD_TAXONOMY, \
CFG_BIBEDIT_KEYWORD_RDFLABEL, \
searchby)
response['autokeyword'] = items
if (request_type == 'autosuggest'):
#call knowledge base function to put the suggestions in an array..
if fulltag and searchby and len(searchby) > 3:
#add trailing '*' wildcard for 'search_unit_in_bibxxx()' if not already present
suggest_values = get_kbd_values_for_bibedit(fulltag, "", searchby+"*")
#keep only suggestions that start with the typed prefix
new_suggest_vals = []
for sugg in suggest_values:
if sugg.startswith(searchby):
new_suggest_vals.append(sugg)
response['autosuggest'] = new_suggest_vals
if (request_type == 'autocomplete'):
#call the values function with the correct kb_name
if CFG_BIBEDIT_AUTOCOMPLETE_TAGS_KBS.has_key(fulltag):
kbname = CFG_BIBEDIT_AUTOCOMPLETE_TAGS_KBS[fulltag]
#check if the searchby field has semicolons; take all
#the semicolon-separated items
items = []
vals = []
if searchby:
if ';' in searchby:
items = searchby.split(';')
else:
items = [searchby.strip()]
for item in items:
item = item.strip()
kbrvals = get_kbr_values(kbname, item, '', 'e') #we want an exact match
if kbrvals and kbrvals[0]: #add the found val into vals
vals.append(kbrvals[0])
#check that the values are not already contained in other
#instances of this field
record = get_cache_file_contents(recid, uid)[2]
xml_rec = wash_for_xml(print_rec(record))
record, status_code, dummy_errors = create_record(xml_rec)
existing_values = []
if (status_code != 0):
existing_values = record_get_field_values(record,
maintag,
subtag1,
subtag2,
subfieldcode)
#get the new values, i.e. vals not already present in the field
new_vals = [val for val in vals if val not in existing_values]
response['autocomplete'] = new_vals
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['autosuggestion_scanned']
return response
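The final filtering step of the 'autocomplete' branch only needs to keep the knowledge-base values that are not already present in other instances of the field; note that it must not remove items from the very list it iterates over. A minimal sketch of the intended filter (the function name is illustrative, not part of the BibEdit API):

```python
def filter_new_values(candidate_vals, existing_values):
    """Keep only the candidate values not already present in the record."""
    return [val for val in candidate_vals if val not in existing_values]

filter_new_values(['CERN', 'DESY', 'SLAC'], ['DESY'])
# ['CERN', 'SLAC']
```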
def perform_request_bibcatalog(request_type, recid, uid):
"""Handle request to BibCatalog (RT).
"""
response = {}
if request_type == 'getTickets':
# Insert the ticket data in the response, if possible
if bibcatalog_system is None:
response['tickets'] = "<!--No ticket system configured-->"
elif bibcatalog_system and uid:
bibcat_resp = bibcatalog_system.check_system(uid)
if bibcat_resp == "":
tickets_found = bibcatalog_system.ticket_search(uid, \
status=['new', 'open'], recordid=recid)
t_url_str = '' #put ticket urls here, formatted for HTML display
for t_id in tickets_found:
#t_url = bibcatalog_system.ticket_get_attribute(uid, \
# t_id, 'url_display')
ticket_info = bibcatalog_system.ticket_get_info( \
uid, t_id, ['url_display', 'url_close'])
t_url = ticket_info['url_display']
t_close_url = ticket_info['url_close']
#format..
t_url_str += "#" + str(t_id) + '<a href="' + t_url + \
'">[read]</a> <a href="' + t_close_url + \
'">[close]</a><br/>'
#put ticket header and tickets links in the box
t_url_str = "<strong>Tickets</strong><br/>" + t_url_str + \
"<br/>" + '<a href="new_ticket?recid=' + str(recid) + \
'">[new ticket]</a>'
response['tickets'] = t_url_str
#add a new ticket link
else:
#put something in the tickets container, for debug
response['tickets'] = "<!--"+bibcat_resp+"-->"
response['resultCode'] = 31
return response
def _add_curated_references_to_record(recid, uid, bibrec):
"""
Adds references from the cache that have been curated (contain $$9CURATOR)
to the bibrecord object
@param recid: record id, used to retrieve cache
@param uid: id of the current user, used to retrieve cache
@param bibrec: bibrecord object to add references to
"""
dummy1, dummy2, record, dummy3, dummy4, dummy5, dummy6 = get_cache_file_contents(recid, uid)
for field_instance in record_get_field_instances(record, "999", "C", "5"):
for subfield_instance in field_instance[0]:
if subfield_instance[0] == '9' and subfield_instance[1] == 'CURATOR':
# Add reference field on top of references, removing first $$o
field_instance = ([subfield for subfield in field_instance[0]
if subfield[0] != 'o'], field_instance[1],
field_instance[2], field_instance[3],
field_instance[4])
record_add_fields(bibrec, '999', [field_instance],
field_position_local=0)
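In bibrecord a field instance is a tuple `(subfields, ind1, ind2, controlfield_value, global_position)`, with `subfields` a list of `(code, value)` pairs, so stripping the `$$o` subfields amounts to rebuilding the tuple with those pairs filtered out, as the loop above does. A self-contained sketch of that rebuild (the sample data and helper name are illustrative):

```python
def strip_subfield_code(field_instance, code):
    """Return a copy of a bibrecord-style field instance tuple with
    every subfield carrying the given code removed."""
    subfields, ind1, ind2, value, position = field_instance
    kept = [sf for sf in subfields if sf[0] != code]
    return (kept, ind1, ind2, value, position)

field = ([('o', '12'), ('9', 'CURATOR'), ('m', 'Some reference')],
         'C', '5', '', 7)
strip_subfield_code(field, 'o')
# ([('9', 'CURATOR'), ('m', 'Some reference')], 'C', '5', '', 7)
```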
def _xml_to_textmarc_references(bibrec):
"""
Convert XML record to textmarc and return the lines related to references
@param bibrec: bibrecord object to be converted
@return: textmarc lines with references
@rtype: string
"""
sysno = ""
options = {"aleph-marc":0, "correct-mode":1, "append-mode":0,
"delete-mode":0, "insert-mode":0, "replace-mode":0,
"text-marc":1}
# Using deepcopy as function create_marc_record() modifies the record passed
textmarc_references = [ line.strip() for line
in xmlmarc2textmarc.create_marc_record(copy.deepcopy(bibrec),
sysno, options).split('\n')
if '999C5' in line ]
return textmarc_references
def perform_request_ref_extract_url(recid, uid, url):
"""
Making use of the refextractor API, extract references from the url
received from the client
@param recid: opened record id
@param uid: active user id
@param url: URL to extract references from
@return response to be returned to the client code
"""
response = {}
try:
recordExtended = replace_references(recid, uid, url=url)
except FullTextNotAvailable:
response['ref_xmlrecord'] = False
response['ref_msg'] = "File not found. Server returned code 404"
return response
except:
response['ref_xmlrecord'] = False
response['ref_msg'] = """Error while fetching PDF. Bad URL or file could
not be retrieved """
return response
if not recordExtended:
response['ref_msg'] = """No references were found in the given PDF """
return response
ref_bibrecord = create_record(recordExtended)[0]
_add_curated_references_to_record(recid, uid, ref_bibrecord)
response['ref_bibrecord'] = ref_bibrecord
response['ref_xmlrecord'] = record_xml_output(ref_bibrecord)
textmarc_references = _xml_to_textmarc_references(ref_bibrecord)
response['ref_textmarc'] = '<div class="refextracted">' + '<br />'.join(textmarc_references) + "</div>"
return response
def perform_request_ref_extract(recid, uid, txt=None):
""" Handle request to extract references in the given record
@param recid: record id from which the references should be extracted
@type recid: str
@param txt: string containing references
@type txt: str
@param uid: user id
@type uid: int
@return: xml record with references extracted
@rtype: dictionary
"""
text_no_references_found_msg = """ No references extracted. The automatic
extraction did not recognize any reference in the
pasted text.<br /><br />If you want to add the references
manually, an easily recognizable format is:<br/><br/>
&nbsp;&nbsp;&nbsp;&nbsp;[1] Phys. Rev A71 (2005) 42<br />
&nbsp;&nbsp;&nbsp;&nbsp;[2] ATLAS-CMS-2007-333
"""
pdf_no_references_found_msg = """ No references were found in the attached
PDF.
"""
response = {}
response['ref_xmlrecord'] = False
recordExtended = None
try:
if txt:
recordExtended = replace_references(recid, uid,
txt=txt.decode('utf-8'))
if not recordExtended:
response['ref_msg'] = text_no_references_found_msg
else:
recordExtended = replace_references(recid, uid)
if not recordExtended:
response['ref_msg'] = pdf_no_references_found_msg
except FullTextNotAvailable:
response['ref_msg'] = """ The fulltext is not available.
"""
except:
response['ref_msg'] = """ An error occurred while extracting references.
"""
if not recordExtended:
return response
ref_bibrecord = create_record(recordExtended)[0]
_add_curated_references_to_record(recid, uid, ref_bibrecord)
response['ref_bibrecord'] = ref_bibrecord
response['ref_xmlrecord'] = record_xml_output(ref_bibrecord)
textmarc_references = _xml_to_textmarc_references(ref_bibrecord)
response['ref_textmarc'] = '<div class="refextracted">' + '<br />'.join(textmarc_references) + "</div>"
return response
def perform_request_preview_record(request_type, recid, uid, data):
""" Handle request to preview record with formatting
"""
response = {}
if request_type == "preview":
if data["submitMode"] == "textmarc":
textmarc_record = data['textmarc']
xml_conversion_status = get_xml_from_textmarc(recid, textmarc_record)
if xml_conversion_status['resultMsg'] == 'textmarc_parsing_error':
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['textmarc_parsing_error']
response.update(xml_conversion_status)
return response
record = create_record(xml_conversion_status["resultXML"])[0]
elif cache_exists(recid, uid):
dummy1, dummy2, record, dummy3, dummy4, dummy5, dummy6 = get_cache_file_contents(recid, uid)
else:
record = get_bibrecord(recid)
# clean the record from unfilled volatile fields
record_strip_empty_volatile_subfields(record)
record_strip_empty_fields(record)
response['html_preview'] = _get_formated_record(record, data['new_window'])
return response
def perform_request_get_pdf_url(recid):
""" Handle request to get the URL of the attached PDF
"""
response = {}
rec_info = BibRecDocs(recid)
docs = rec_info.list_bibdocs()
doc_pdf_url = ""
for doc in docs:
try:
doc_pdf_url = doc.get_file('pdf').get_url()
except InvenioBibDocFileError:
continue
if doc_pdf_url:
response['pdf_url'] = doc_pdf_url
break
if not doc_pdf_url:
response['pdf_url'] = ""
return response
def perform_request_get_textmarc(recid, uid):
""" Get record content from cache, convert it to textmarc and return it
"""
textmarc_options = {"aleph-marc":0, "correct-mode":1, "append-mode":0,
"delete-mode":0, "insert-mode":0, "replace-mode":0,
"text-marc":1}
bibrecord = get_cache_file_contents(recid, uid)[2]
record_strip_empty_fields(bibrecord)
record_strip_controlfields(bibrecord)
textmarc = xmlmarc2textmarc.create_marc_record(
copy.deepcopy(bibrecord), sysno="", options=textmarc_options)
return {'textmarc': textmarc}
def perform_request_get_tableview(recid, uid, data):
""" Convert textmarc input by the user to MARCXML and, if there are no
parsing errors, create a cache file
"""
response = {}
textmarc_record = data['textmarc']
xml_conversion_status = get_xml_from_textmarc(recid, textmarc_record)
response.update(xml_conversion_status)
if xml_conversion_status['resultMsg'] == 'textmarc_parsing_error':
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['textmarc_parsing_error']
else:
create_cache_file(recid, uid,
create_record(xml_conversion_status['resultXML'])[0], data['recordDirty'])
response['resultCode'] = CFG_BIBEDIT_AJAX_RESULT_CODES_REV['tableview_change_success']
return response
def _get_formated_record(record, new_window):
"""Returns a record in a given format
@param record: BibRecord object
@param new_window: Boolean, indicates whether all the page headers need
to be added (used when clicking the Preview button)
"""
from invenio.config import CFG_WEBSTYLE_TEMPLATE_SKIN
xml_record = wash_for_xml(record_xml_output(record))
result = ''
if new_window:
result = """ <html><head><title>Record preview</title>
<script type="text/javascript" src="%(site_url)s/js/jquery.min.js"></script>
<link rel="stylesheet" href="%(site_url)s/img/invenio%(cssskin)s.css" type="text/css"></head>
"""%{'site_url': CFG_SITE_URL,
'cssskin': CFG_WEBSTYLE_TEMPLATE_SKIN != 'default' and '_' + CFG_WEBSTYLE_TEMPLATE_SKIN or ''
}
result += get_mathjax_header(True) + '<body>'
result += "<h2> Brief format preview </h2><br />"
result += bibformat.format_record(recID=None,
of="hb",
xml_record=xml_record) + "<br />"
result += "<br /><h2> Detailed format preview </h2><br />"
result += bibformat.format_record(recID=None,
of="hd",
xml_record=xml_record)
#Preview references
result += "<br /><h2> References </h2><br />"
result += bibformat.format_record(0,
'hdref',
xml_record=xml_record)
result += """<script>
$('#referenceinp_link').hide();
$('#referenceinp_link_span').hide();
</script>
"""
if new_window:
result += "</body></html>"
return result
########### Functions related to templates web interface #############
def perform_request_init_template_interface():
"""Handle a request to manage templates"""
errors = []
warnings = []
body = ''
# Add script data.
record_templates = get_record_templates()
record_templates.sort()
data = {'gRECORD_TEMPLATES': record_templates,
'gSITE_RECORD': '"' + CFG_SITE_RECORD + '"',
'gSITE_URL': '"' + CFG_SITE_URL + '"'}
body += '<script type="text/javascript">\n'
for key in data:
body += ' var %s = %s;\n' % (key, data[key])
body += ' </script>\n'
# Add scripts (the ordering is important).
scripts = ['jquery-ui.min.js',
'json2.js', 'bibedit_display.js',
'bibedit_template_interface.js']
for script in scripts:
body += ' <script type="text/javascript" src="%s/js/%s">' \
'</script>\n' % (CFG_SITE_URL, script)
body += ' <div id="bibEditTemplateList"></div>\n'
body += ' <div id="bibEditTemplateEdit"></div>\n'
return body, errors, warnings
def perform_request_ajax_template_interface(data):
"""Handle Ajax requests by redirecting to appropriate function."""
response = {}
request_type = data['requestType']
if request_type == 'editTemplate':
# Edit a template request.
response.update(perform_request_edit_template(data))
return response
def perform_request_edit_template(data):
""" Handle request to edit a template """
response = {}
template_filename = data['templateFilename']
template = get_record_template(template_filename)
if not template:
response['resultCode'] = 1
else:
response['templateMARCXML'] = template
return response
def perform_doi_search(doi):
"""Search for DOI on the dx.doi.org page
@return: the url returned by this page"""
response = {}
url = "http://dx.doi.org/"
val = {'hdl': doi}
url_data = urllib.urlencode(val)
cj = cookielib.CookieJar()
header = [('User-Agent', CFG_DOI_USER_AGENT)]
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.addheaders = header
try:
resp = opener.open(url, url_data)
except:
return response
else:
response['doi_url'] = resp.geturl()
return response
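`perform_doi_search` resolves a DOI by sending it as the `hdl` form parameter to dx.doi.org and reading back the URL the resolver redirects to. The encoding step can be shown without any network access (Python 3 `urllib.parse` names are used here, whereas the module itself relies on the Python 2 `urllib`/`urllib2` modules; the DOI is a made-up example):

```python
from urllib.parse import urlencode

# The DOI travels as the 'hdl' form parameter; the slash in the DOI
# must be percent-encoded for the POST body.
query = urlencode({'hdl': '10.1000/182'})
# query == 'hdl=10.1000%2F182'
```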
diff --git a/modules/bibedit/lib/bibedit_model.py b/modules/bibedit/lib/bibedit_model.py
index a15525bca..70b2c2ea5 100644
--- a/modules/bibedit/lib/bibedit_model.py
+++ b/modules/bibedit/lib/bibedit_model.py
@@ -1,3563 +1,3565 @@
# -*- coding: utf-8 -*-
#
## This file is part of Invenio.
## Copyright (C) 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibEdit database models.
"""
# General imports.
from invenio.sqlalchemyutils import db
from invenio.search_engine_utils import get_fieldvalues
from werkzeug import cached_property
from invenio.config import \
CFG_CERN_SITE
# Create your models here.
class Bibrec(db.Model):
"""Represents a Bibrec record."""
def __init__(self):
pass
__tablename__ = 'bibrec'
id = db.Column(db.MediumInteger(8, unsigned=True), primary_key=True,
nullable=False,
autoincrement=True)
creation_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00',
index=True)
modification_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00',
index=True)
master_format = db.Column(db.String(16), nullable=False,
server_default='marc')
@property
def deleted(self):
"""
Return True if record is marked as deleted.
"""
# record exists; now check whether it isn't marked as deleted:
dbcollids = get_fieldvalues(self.id, "980__%")
return ("DELETED" in dbcollids) or \
(CFG_CERN_SITE and "DUMMY" in dbcollids)
@staticmethod
def _next_merged_recid(recid):
""" Return the ID of the record merged with the record with ID = recid """
merged_recid = None
for val in get_fieldvalues(recid, "970__d"):
try:
merged_recid = int(val)
break
except ValueError:
pass
if not merged_recid:
return None
else:
return merged_recid
@cached_property
def merged_recid(self):
""" Return the ID of the record with
which this record has been merged.
@return: merged record recID
@rtype: int or None
"""
return Bibrec._next_merged_recid(self.id)
@property
def merged_recid_final(self):
""" Returns the last record from hierarchy of
records merged with this one """
cur_id = self.id
next_id = Bibrec._next_merged_recid(cur_id)
while next_id:
cur_id = next_id
next_id = Bibrec._next_merged_recid(cur_id)
return cur_id
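`merged_recid_final` follows the `970__d` merge links one hop at a time until it reaches a record with no successor. The traversal can be sketched independently of the database by representing the chain as a mapping (this assumes, as curation practice should guarantee, that merge chains do not cycle):

```python
def final_merged_id(start_id, next_link):
    """Follow a merge chain given as {recid: merged-into recid}
    and return the id of the last record in the chain."""
    cur_id = start_id
    next_id = next_link.get(cur_id)
    while next_id:
        cur_id = next_id
        next_id = next_link.get(cur_id)
    return cur_id

final_merged_id(1, {1: 4, 4: 9})
# 9
```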
@cached_property
def is_restricted(self):
"""Return True if the record is restricted."""
from invenio.search_engine import get_restricted_collections_for_recid
if get_restricted_collections_for_recid(self.id,
recreate_cache_if_needed=False):
return True
elif self.is_processed:
return True
return False
@cached_property
def is_processed(self):
"""Return True if the record is processed (not in any collection)."""
from invenio.search_engine import is_record_in_any_collection
return not is_record_in_any_collection(self.id,
recreate_cache_if_needed=False)
class Bibfmt(db.Model):
"""Represents a Bibfmt record."""
def __init__(self):
pass
__tablename__ = 'bibfmt'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id), nullable=False, server_default='0',
primary_key=True, autoincrement=False)
format = db.Column(db.String(10), nullable=False,
server_default='', primary_key=True, index=True)
last_updated = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00', index=True)
value = db.Column(db.iLargeBinary)
bibrec = db.relationship(Bibrec, backref='bibfmt')
class BibHOLDINGPEN(db.Model):
"""Represents a BibHOLDINGPEN record."""
def __init__(self):
pass
__tablename__ = 'bibHOLDINGPEN'
changeset_id = db.Column(db.Integer(11), primary_key=True,
autoincrement=True)
changeset_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00', index=True)
changeset_xml = db.Column(db.Text, nullable=False)
oai_id = db.Column(db.String(40), nullable=False,
server_default='')
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id), nullable=False, server_default='0')
bibrec = db.relationship(Bibrec, backref='holdingpen')
class Bibdoc(db.Model):
"""Represents a Bibdoc record."""
__tablename__ = 'bibdoc'
id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True,
nullable=False, autoincrement=True)
status = db.Column(db.Text, nullable=False)
docname = db.Column(db.String(250), nullable=True, # collation='utf8_bin'
server_default='file', index=True)
creation_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00', index=True)
modification_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00', index=True)
text_extraction_date = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
doctype = db.Column(db.String(255))
class BibdocBibdoc(db.Model):
"""Represents a BibdocBibdoc record."""
__tablename__ = 'bibdoc_bibdoc'
id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True,
nullable=False, autoincrement=True)
id_bibdoc1 = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(Bibdoc.id), nullable=True)
version1 = db.Column(db.TinyInteger(4, unsigned=True))
format1 = db.Column(db.String(50))
id_bibdoc2 = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(Bibdoc.id), nullable=True)
version2 = db.Column(db.TinyInteger(4, unsigned=True))
format2 = db.Column(db.String(50))
rel_type = db.Column(db.String(255), nullable=True)
bibdoc1 = db.relationship(Bibdoc, backref='bibdoc2s',
primaryjoin=Bibdoc.id == id_bibdoc1)
bibdoc2 = db.relationship(Bibdoc, backref='bibdoc1s',
primaryjoin=Bibdoc.id == id_bibdoc2)
class BibrecBibdoc(db.Model):
"""Represents a BibrecBibdoc record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bibdoc'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id), nullable=False,
server_default='0', primary_key=True)
id_bibdoc = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(Bibdoc.id), nullable=False,
server_default='0', primary_key=True)
docname = db.Column(db.String(250), nullable=False, # collation='utf8_bin'
server_default='file', index=True)
type = db.Column(db.String(255), nullable=True)
bibrec = db.relationship(Bibrec, backref='bibdocs')
bibdoc = db.relationship(Bibdoc, backref='bibrecs')
class HstDOCUMENT(db.Model):
"""Represents a HstDOCUMENT record."""
def __init__(self):
pass
__tablename__ = 'hstDOCUMENT'
id = db.Column(db.Integer(15, unsigned=True), primary_key=True,
nullable=False, autoincrement=True)
id_bibdoc = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(Bibdoc.id), primary_key=True, nullable=False,
autoincrement=False)
docname = db.Column(db.String(250), nullable=False, index=True)
docformat = db.Column(db.String(50), nullable=False, index=True)
docversion = db.Column(db.TinyInteger(4, unsigned=True),
nullable=False)
docsize = db.Column(db.BigInteger(15, unsigned=True),
nullable=False)
docchecksum = db.Column(db.Char(32), nullable=False)
doctimestamp = db.Column(db.DateTime, nullable=False, index=True)
action = db.Column(db.String(50), nullable=False, index=True)
job_id = db.Column(db.MediumInteger(15, unsigned=True),
nullable=True, index=True)
job_name = db.Column(db.String(255), nullable=True, index=True)
job_person = db.Column(db.String(255), nullable=True, index=True)
job_date = db.Column(db.DateTime, nullable=True, index=True)
job_details = db.Column(db.iBinary, nullable=True)
class HstRECORD(db.Model):
"""Represents a HstRECORD record."""
def __init__(self):
pass
__tablename__ = 'hstRECORD'
id = db.Column(db.Integer(15, unsigned=True), primary_key=True,
nullable=False, autoincrement=True)
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id), autoincrement=False,
nullable=False, primary_key=True)
marcxml = db.Column(db.iBinary, nullable=False)
job_id = db.Column(db.MediumInteger(15, unsigned=True),
nullable=False, index=True)
job_name = db.Column(db.String(255), nullable=False, index=True)
job_person = db.Column(db.String(255), nullable=False, index=True)
job_date = db.Column(db.DateTime, nullable=False, index=True)
job_details = db.Column(db.iBinary, nullable=False)
affected_fields = db.Column(db.Text(), nullable=False,
server_default='')
# GENERATED
class Bib00x(db.Model):
"""Represents a Bib00x record."""
def __init__(self):
pass
__tablename__ = 'bib00x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib00x(db.Model):
"""Represents a BibrecBib00x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib00x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib00x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib00xs')
bibxxx = db.relationship(Bib00x, backref='bibrecs')
class Bib01x(db.Model):
"""Represents a Bib01x record."""
def __init__(self):
pass
__tablename__ = 'bib01x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib01x(db.Model):
"""Represents a BibrecBib01x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib01x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib01x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib01xs')
bibxxx = db.relationship(Bib01x, backref='bibrecs')
class Bib02x(db.Model):
"""Represents a Bib02x record."""
def __init__(self):
pass
__tablename__ = 'bib02x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib02x(db.Model):
"""Represents a BibrecBib02x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib02x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib02x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib02xs')
bibxxx = db.relationship(Bib02x, backref='bibrecs')
class Bib03x(db.Model):
"""Represents a Bib03x record."""
def __init__(self):
pass
__tablename__ = 'bib03x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib03x(db.Model):
"""Represents a BibrecBib03x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib03x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib03x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib03xs')
bibxxx = db.relationship(Bib03x, backref='bibrecs')
class Bib04x(db.Model):
"""Represents a Bib04x record."""
def __init__(self):
pass
__tablename__ = 'bib04x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib04x(db.Model):
"""Represents a BibrecBib04x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib04x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib04x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib04xs')
bibxxx = db.relationship(Bib04x, backref='bibrecs')
class Bib05x(db.Model):
"""Represents a Bib05x record."""
def __init__(self):
pass
__tablename__ = 'bib05x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib05x(db.Model):
"""Represents a BibrecBib05x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib05x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib05x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib05xs')
bibxxx = db.relationship(Bib05x, backref='bibrecs')
class Bib06x(db.Model):
"""Represents a Bib06x record."""
def __init__(self):
pass
__tablename__ = 'bib06x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib06x(db.Model):
"""Represents a BibrecBib06x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib06x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib06x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib06xs')
bibxxx = db.relationship(Bib06x, backref='bibrecs')
class Bib07x(db.Model):
"""Represents a Bib07x record."""
def __init__(self):
pass
__tablename__ = 'bib07x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib07x(db.Model):
"""Represents a BibrecBib07x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib07x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib07x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib07xs')
bibxxx = db.relationship(Bib07x, backref='bibrecs')
class Bib08x(db.Model):
"""Represents a Bib08x record."""
def __init__(self):
pass
__tablename__ = 'bib08x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib08x(db.Model):
"""Represents a BibrecBib08x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib08x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib08x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib08xs')
bibxxx = db.relationship(Bib08x, backref='bibrecs')
class Bib09x(db.Model):
"""Represents a Bib09x record."""
def __init__(self):
pass
__tablename__ = 'bib09x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib09x(db.Model):
"""Represents a BibrecBib09x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib09x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib09x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib09xs')
bibxxx = db.relationship(Bib09x, backref='bibrecs')
class Bib10x(db.Model):
"""Represents a Bib10x record."""
def __init__(self):
pass
__tablename__ = 'bib10x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib10x(db.Model):
"""Represents a BibrecBib10x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib10x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib10x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib10xs')
bibxxx = db.relationship(Bib10x, backref='bibrecs')
class Bib11x(db.Model):
"""Represents a Bib11x record."""
def __init__(self):
pass
__tablename__ = 'bib11x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib11x(db.Model):
"""Represents a BibrecBib11x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib11x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib11x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib11xs')
bibxxx = db.relationship(Bib11x, backref='bibrecs')
class Bib12x(db.Model):
"""Represents a Bib12x record."""
def __init__(self):
pass
__tablename__ = 'bib12x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib12x(db.Model):
"""Represents a BibrecBib12x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib12x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib12x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib12xs')
bibxxx = db.relationship(Bib12x, backref='bibrecs')
class Bib13x(db.Model):
"""Represents a Bib13x record."""
def __init__(self):
pass
__tablename__ = 'bib13x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib13x(db.Model):
"""Represents a BibrecBib13x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib13x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib13x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib13xs')
bibxxx = db.relationship(Bib13x, backref='bibrecs')
class Bib14x(db.Model):
"""Represents a Bib14x record."""
def __init__(self):
pass
__tablename__ = 'bib14x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib14x(db.Model):
"""Represents a BibrecBib14x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib14x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib14x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib14xs')
bibxxx = db.relationship(Bib14x, backref='bibrecs')
class Bib15x(db.Model):
"""Represents a Bib15x record."""
def __init__(self):
pass
__tablename__ = 'bib15x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib15x(db.Model):
"""Represents a BibrecBib15x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib15x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib15x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib15xs')
bibxxx = db.relationship(Bib15x, backref='bibrecs')
class Bib16x(db.Model):
"""Represents a Bib16x record."""
def __init__(self):
pass
__tablename__ = 'bib16x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib16x(db.Model):
"""Represents a BibrecBib16x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib16x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib16x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib16xs')
bibxxx = db.relationship(Bib16x, backref='bibrecs')
class Bib17x(db.Model):
"""Represents a Bib17x record."""
def __init__(self):
pass
__tablename__ = 'bib17x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib17x(db.Model):
"""Represents a BibrecBib17x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib17x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib17x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib17xs')
bibxxx = db.relationship(Bib17x, backref='bibrecs')
class Bib18x(db.Model):
"""Represents a Bib18x record."""
def __init__(self):
pass
__tablename__ = 'bib18x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib18x(db.Model):
"""Represents a BibrecBib18x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib18x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib18x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib18xs')
bibxxx = db.relationship(Bib18x, backref='bibrecs')
class Bib19x(db.Model):
"""Represents a Bib19x record."""
def __init__(self):
pass
__tablename__ = 'bib19x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib19x(db.Model):
"""Represents a BibrecBib19x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib19x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib19x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib19xs')
bibxxx = db.relationship(Bib19x, backref='bibrecs')
class Bib20x(db.Model):
"""Represents a Bib20x record."""
def __init__(self):
pass
__tablename__ = 'bib20x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib20x(db.Model):
"""Represents a BibrecBib20x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib20x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib20x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib20xs')
bibxxx = db.relationship(Bib20x, backref='bibrecs')
class Bib21x(db.Model):
"""Represents a Bib21x record."""
def __init__(self):
pass
__tablename__ = 'bib21x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib21x(db.Model):
"""Represents a BibrecBib21x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib21x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib21x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib21xs')
bibxxx = db.relationship(Bib21x, backref='bibrecs')
class Bib22x(db.Model):
"""Represents a Bib22x record."""
def __init__(self):
pass
__tablename__ = 'bib22x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib22x(db.Model):
"""Represents a BibrecBib22x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib22x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib22x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib22xs')
bibxxx = db.relationship(Bib22x, backref='bibrecs')
class Bib23x(db.Model):
"""Represents a Bib23x record."""
def __init__(self):
pass
__tablename__ = 'bib23x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib23x(db.Model):
"""Represents a BibrecBib23x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib23x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib23x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib23xs')
bibxxx = db.relationship(Bib23x, backref='bibrecs')
class Bib24x(db.Model):
"""Represents a Bib24x record."""
def __init__(self):
pass
__tablename__ = 'bib24x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib24x(db.Model):
"""Represents a BibrecBib24x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib24x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib24x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib24xs')
bibxxx = db.relationship(Bib24x, backref='bibrecs')
class Bib25x(db.Model):
"""Represents a Bib25x record."""
def __init__(self):
pass
__tablename__ = 'bib25x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib25x(db.Model):
"""Represents a BibrecBib25x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib25x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib25x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib25xs')
bibxxx = db.relationship(Bib25x, backref='bibrecs')
class Bib26x(db.Model):
"""Represents a Bib26x record."""
def __init__(self):
pass
__tablename__ = 'bib26x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib26x(db.Model):
"""Represents a BibrecBib26x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib26x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib26x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib26xs')
bibxxx = db.relationship(Bib26x, backref='bibrecs')
class Bib27x(db.Model):
"""Represents a Bib27x record."""
def __init__(self):
pass
__tablename__ = 'bib27x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib27x(db.Model):
"""Represents a BibrecBib27x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib27x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib27x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib27xs')
bibxxx = db.relationship(Bib27x, backref='bibrecs')
class Bib28x(db.Model):
"""Represents a Bib28x record."""
def __init__(self):
pass
__tablename__ = 'bib28x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib28x(db.Model):
"""Represents a BibrecBib28x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib28x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib28x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib28xs')
bibxxx = db.relationship(Bib28x, backref='bibrecs')
class Bib29x(db.Model):
"""Represents a Bib29x record."""
def __init__(self):
pass
__tablename__ = 'bib29x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib29x(db.Model):
"""Represents a BibrecBib29x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib29x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib29x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib29xs')
bibxxx = db.relationship(Bib29x, backref='bibrecs')
class Bib30x(db.Model):
"""Represents a Bib30x record."""
def __init__(self):
pass
__tablename__ = 'bib30x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib30x(db.Model):
"""Represents a BibrecBib30x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib30x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib30x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib30xs')
bibxxx = db.relationship(Bib30x, backref='bibrecs')
class Bib31x(db.Model):
"""Represents a Bib31x record."""
def __init__(self):
pass
__tablename__ = 'bib31x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib31x(db.Model):
"""Represents a BibrecBib31x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib31x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib31x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib31xs')
bibxxx = db.relationship(Bib31x, backref='bibrecs')
class Bib32x(db.Model):
    """Represents a Bib32x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib32x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib32x(db.Model):
    """Represents a BibrecBib32x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib32x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib32x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib32xs')
    bibxxx = db.relationship(Bib32x, backref='bibrecs')


class Bib33x(db.Model):
    """Represents a Bib33x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib33x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib33x(db.Model):
    """Represents a BibrecBib33x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib33x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib33x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib33xs')
    bibxxx = db.relationship(Bib33x, backref='bibrecs')


class Bib34x(db.Model):
    """Represents a Bib34x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib34x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib34x(db.Model):
    """Represents a BibrecBib34x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib34x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib34x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib34xs')
    bibxxx = db.relationship(Bib34x, backref='bibrecs')


class Bib35x(db.Model):
    """Represents a Bib35x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib35x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib35x(db.Model):
    """Represents a BibrecBib35x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib35x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib35x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib35xs')
    bibxxx = db.relationship(Bib35x, backref='bibrecs')


class Bib36x(db.Model):
    """Represents a Bib36x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib36x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib36x(db.Model):
    """Represents a BibrecBib36x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib36x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib36x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib36xs')
    bibxxx = db.relationship(Bib36x, backref='bibrecs')


class Bib37x(db.Model):
    """Represents a Bib37x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib37x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib37x(db.Model):
    """Represents a BibrecBib37x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib37x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib37x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib37xs')
    bibxxx = db.relationship(Bib37x, backref='bibrecs')


class Bib38x(db.Model):
    """Represents a Bib38x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib38x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib38x(db.Model):
    """Represents a BibrecBib38x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib38x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib38x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib38xs')
    bibxxx = db.relationship(Bib38x, backref='bibrecs')


class Bib39x(db.Model):
    """Represents a Bib39x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib39x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib39x(db.Model):
    """Represents a BibrecBib39x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib39x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib39x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib39xs')
    bibxxx = db.relationship(Bib39x, backref='bibrecs')


class Bib40x(db.Model):
    """Represents a Bib40x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib40x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib40x(db.Model):
    """Represents a BibrecBib40x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib40x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib40x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib40xs')
    bibxxx = db.relationship(Bib40x, backref='bibrecs')


class Bib41x(db.Model):
    """Represents a Bib41x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib41x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib41x(db.Model):
    """Represents a BibrecBib41x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib41x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib41x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib41xs')
    bibxxx = db.relationship(Bib41x, backref='bibrecs')


class Bib42x(db.Model):
    """Represents a Bib42x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib42x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib42x(db.Model):
    """Represents a BibrecBib42x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib42x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib42x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib42xs')
    bibxxx = db.relationship(Bib42x, backref='bibrecs')


class Bib43x(db.Model):
    """Represents a Bib43x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib43x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib43x(db.Model):
    """Represents a BibrecBib43x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib43x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib43x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib43xs')
    bibxxx = db.relationship(Bib43x, backref='bibrecs')


class Bib44x(db.Model):
    """Represents a Bib44x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib44x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib44x(db.Model):
    """Represents a BibrecBib44x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib44x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib44x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib44xs')
    bibxxx = db.relationship(Bib44x, backref='bibrecs')


class Bib45x(db.Model):
    """Represents a Bib45x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib45x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib45x(db.Model):
    """Represents a BibrecBib45x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib45x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib45x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib45xs')
    bibxxx = db.relationship(Bib45x, backref='bibrecs')


class Bib46x(db.Model):
    """Represents a Bib46x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib46x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib46x(db.Model):
    """Represents a BibrecBib46x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib46x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib46x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib46xs')
    bibxxx = db.relationship(Bib46x, backref='bibrecs')


class Bib47x(db.Model):
    """Represents a Bib47x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib47x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib47x(db.Model):
    """Represents a BibrecBib47x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib47x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib47x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib47xs')
    bibxxx = db.relationship(Bib47x, backref='bibrecs')


class Bib48x(db.Model):
    """Represents a Bib48x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib48x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib48x(db.Model):
    """Represents a BibrecBib48x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib48x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib48x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib48xs')
    bibxxx = db.relationship(Bib48x, backref='bibrecs')


class Bib49x(db.Model):
    """Represents a Bib49x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib49x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib49x(db.Model):
    """Represents a BibrecBib49x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib49x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib49x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib49xs')
    bibxxx = db.relationship(Bib49x, backref='bibrecs')


class Bib50x(db.Model):
    """Represents a Bib50x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib50x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib50x(db.Model):
    """Represents a BibrecBib50x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib50x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib50x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib50xs')
    bibxxx = db.relationship(Bib50x, backref='bibrecs')


class Bib51x(db.Model):
    """Represents a Bib51x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib51x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib51x(db.Model):
    """Represents a BibrecBib51x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib51x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib51x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib51xs')
    bibxxx = db.relationship(Bib51x, backref='bibrecs')


class Bib52x(db.Model):
    """Represents a Bib52x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib52x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib52x(db.Model):
    """Represents a BibrecBib52x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib52x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib52x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib52xs')
    bibxxx = db.relationship(Bib52x, backref='bibrecs')


class Bib53x(db.Model):
    """Represents a Bib53x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib53x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib53x(db.Model):
    """Represents a BibrecBib53x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib53x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib53x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib53xs')
    bibxxx = db.relationship(Bib53x, backref='bibrecs')


class Bib54x(db.Model):
    """Represents a Bib54x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib54x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib54x(db.Model):
    """Represents a BibrecBib54x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib54x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib54x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib54xs')
    bibxxx = db.relationship(Bib54x, backref='bibrecs')


class Bib55x(db.Model):
    """Represents a Bib55x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib55x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib55x(db.Model):
    """Represents a BibrecBib55x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib55x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib55x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib55xs')
    bibxxx = db.relationship(Bib55x, backref='bibrecs')


class Bib56x(db.Model):
    """Represents a Bib56x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib56x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib56x(db.Model):
    """Represents a BibrecBib56x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib56x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib56x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib56xs')
    bibxxx = db.relationship(Bib56x, backref='bibrecs')


class Bib57x(db.Model):
    """Represents a Bib57x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib57x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib57x(db.Model):
    """Represents a BibrecBib57x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib57x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib57x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib57xs')
    bibxxx = db.relationship(Bib57x, backref='bibrecs')


class Bib58x(db.Model):
    """Represents a Bib58x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib58x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib58x(db.Model):
    """Represents a BibrecBib58x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib58x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib58x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib58xs')
    bibxxx = db.relationship(Bib58x, backref='bibrecs')


class Bib59x(db.Model):
    """Represents a Bib59x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib59x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib59x(db.Model):
    """Represents a BibrecBib59x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib59x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib59x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib59xs')
    bibxxx = db.relationship(Bib59x, backref='bibrecs')


class Bib60x(db.Model):
    """Represents a Bib60x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib60x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib60x(db.Model):
    """Represents a BibrecBib60x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib60x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib60x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib60xs')
    bibxxx = db.relationship(Bib60x, backref='bibrecs')


class Bib61x(db.Model):
    """Represents a Bib61x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib61x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib61x(db.Model):
    """Represents a BibrecBib61x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib61x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib61x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib61xs')
    bibxxx = db.relationship(Bib61x, backref='bibrecs')


class Bib62x(db.Model):
    """Represents a Bib62x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib62x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib62x(db.Model):
    """Represents a BibrecBib62x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib62x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib62x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib62xs')
    bibxxx = db.relationship(Bib62x, backref='bibrecs')


class Bib63x(db.Model):
    """Represents a Bib63x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib63x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib63x(db.Model):
    """Represents a BibrecBib63x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib63x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib63x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib63xs')
    bibxxx = db.relationship(Bib63x, backref='bibrecs')


class Bib64x(db.Model):
    """Represents a Bib64x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib64x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib64x(db.Model):
    """Represents a BibrecBib64x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib64x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib64x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib64xs')
    bibxxx = db.relationship(Bib64x, backref='bibrecs')


class Bib65x(db.Model):
    """Represents a Bib65x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib65x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib65x(db.Model):
    """Represents a BibrecBib65x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib65x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib65x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib65xs')
    bibxxx = db.relationship(Bib65x, backref='bibrecs')


class Bib66x(db.Model):
    """Represents a Bib66x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib66x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib66x(db.Model):
    """Represents a BibrecBib66x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib66x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib66x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib66xs')
    bibxxx = db.relationship(Bib66x, backref='bibrecs')


class Bib67x(db.Model):
    """Represents a Bib67x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib67x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib67x(db.Model):
    """Represents a BibrecBib67x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib67x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib67x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib67xs')
    bibxxx = db.relationship(Bib67x, backref='bibrecs')


class Bib68x(db.Model):
    """Represents a Bib68x record."""
    def __init__(self):
        pass
    __tablename__ = 'bib68x'
    id = db.Column(db.MediumInteger(8, unsigned=True),
                   primary_key=True,
                   autoincrement=True)
    tag = db.Column(db.String(6), nullable=False, index=True,
                    server_default='')
    value = db.Column(db.Text(35), nullable=False,
                      index=True)


class BibrecBib68x(db.Model):
    """Represents a BibrecBib68x record."""
    def __init__(self):
        pass
    __tablename__ = 'bibrec_bib68x'
    id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bibrec.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
                          db.ForeignKey(Bib68x.id),
                          nullable=False, primary_key=True, index=True,
                          server_default='0')
    field_number = db.Column(db.SmallInteger(5, unsigned=True),
                             primary_key=True)
    bibrec = db.relationship(Bibrec, backref='bib68xs')
    bibxxx = db.relationship(Bib68x, backref='bibrecs')
class Bib69x(db.Model):
"""Represents a Bib69x record."""
def __init__(self):
pass
__tablename__ = 'bib69x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib69x(db.Model):
"""Represents a BibrecBib69x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib69x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib69x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib69xs')
bibxxx = db.relationship(Bib69x, backref='bibrecs')
class Bib70x(db.Model):
"""Represents a Bib70x record."""
def __init__(self):
pass
__tablename__ = 'bib70x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib70x(db.Model):
"""Represents a BibrecBib70x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib70x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib70x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib70xs')
bibxxx = db.relationship(Bib70x, backref='bibrecs')
class Bib71x(db.Model):
"""Represents a Bib71x record."""
def __init__(self):
pass
__tablename__ = 'bib71x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib71x(db.Model):
"""Represents a BibrecBib71x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib71x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib71x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib71xs')
bibxxx = db.relationship(Bib71x, backref='bibrecs')
class Bib72x(db.Model):
"""Represents a Bib72x record."""
def __init__(self):
pass
__tablename__ = 'bib72x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib72x(db.Model):
"""Represents a BibrecBib72x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib72x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib72x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib72xs')
bibxxx = db.relationship(Bib72x, backref='bibrecs')
class Bib73x(db.Model):
"""Represents a Bib73x record."""
def __init__(self):
pass
__tablename__ = 'bib73x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib73x(db.Model):
"""Represents a BibrecBib73x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib73x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib73x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib73xs')
bibxxx = db.relationship(Bib73x, backref='bibrecs')
class Bib74x(db.Model):
"""Represents a Bib74x record."""
def __init__(self):
pass
__tablename__ = 'bib74x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib74x(db.Model):
"""Represents a BibrecBib74x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib74x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib74x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib74xs')
bibxxx = db.relationship(Bib74x, backref='bibrecs')
class Bib75x(db.Model):
"""Represents a Bib75x record."""
def __init__(self):
pass
__tablename__ = 'bib75x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib75x(db.Model):
"""Represents a BibrecBib75x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib75x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib75x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib75xs')
bibxxx = db.relationship(Bib75x, backref='bibrecs')
class Bib76x(db.Model):
"""Represents a Bib76x record."""
def __init__(self):
pass
__tablename__ = 'bib76x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib76x(db.Model):
"""Represents a BibrecBib76x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib76x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib76x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib76xs')
bibxxx = db.relationship(Bib76x, backref='bibrecs')
class Bib77x(db.Model):
"""Represents a Bib77x record."""
def __init__(self):
pass
__tablename__ = 'bib77x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib77x(db.Model):
"""Represents a BibrecBib77x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib77x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib77x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib77xs')
bibxxx = db.relationship(Bib77x, backref='bibrecs')
class Bib78x(db.Model):
"""Represents a Bib78x record."""
def __init__(self):
pass
__tablename__ = 'bib78x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib78x(db.Model):
"""Represents a BibrecBib78x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib78x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib78x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib78xs')
bibxxx = db.relationship(Bib78x, backref='bibrecs')
class Bib79x(db.Model):
"""Represents a Bib79x record."""
def __init__(self):
pass
__tablename__ = 'bib79x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib79x(db.Model):
"""Represents a BibrecBib79x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib79x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib79x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib79xs')
bibxxx = db.relationship(Bib79x, backref='bibrecs')
class Bib80x(db.Model):
"""Represents a Bib80x record."""
def __init__(self):
pass
__tablename__ = 'bib80x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib80x(db.Model):
"""Represents a BibrecBib80x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib80x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib80x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib80xs')
bibxxx = db.relationship(Bib80x, backref='bibrecs')
class Bib81x(db.Model):
"""Represents a Bib81x record."""
def __init__(self):
pass
__tablename__ = 'bib81x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib81x(db.Model):
"""Represents a BibrecBib81x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib81x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib81x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib81xs')
bibxxx = db.relationship(Bib81x, backref='bibrecs')
class Bib82x(db.Model):
"""Represents a Bib82x record."""
def __init__(self):
pass
__tablename__ = 'bib82x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib82x(db.Model):
"""Represents a BibrecBib82x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib82x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib82x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib82xs')
bibxxx = db.relationship(Bib82x, backref='bibrecs')
class Bib83x(db.Model):
"""Represents a Bib83x record."""
def __init__(self):
pass
__tablename__ = 'bib83x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib83x(db.Model):
"""Represents a BibrecBib83x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib83x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib83x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib83xs')
bibxxx = db.relationship(Bib83x, backref='bibrecs')
class Bib84x(db.Model):
"""Represents a Bib84x record."""
def __init__(self):
pass
__tablename__ = 'bib84x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib84x(db.Model):
"""Represents a BibrecBib84x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib84x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib84x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib84xs')
bibxxx = db.relationship(Bib84x, backref='bibrecs')
class Bib85x(db.Model):
"""Represents a Bib85x record."""
def __init__(self):
pass
__tablename__ = 'bib85x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib85x(db.Model):
"""Represents a BibrecBib85x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib85x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib85x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib85xs')
bibxxx = db.relationship(Bib85x, backref='bibrecs')
class Bib86x(db.Model):
"""Represents a Bib86x record."""
def __init__(self):
pass
__tablename__ = 'bib86x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib86x(db.Model):
"""Represents a BibrecBib86x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib86x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib86x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib86xs')
bibxxx = db.relationship(Bib86x, backref='bibrecs')
class Bib87x(db.Model):
"""Represents a Bib87x record."""
def __init__(self):
pass
__tablename__ = 'bib87x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib87x(db.Model):
"""Represents a BibrecBib87x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib87x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib87x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib87xs')
bibxxx = db.relationship(Bib87x, backref='bibrecs')
class Bib88x(db.Model):
"""Represents a Bib88x record."""
def __init__(self):
pass
__tablename__ = 'bib88x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib88x(db.Model):
"""Represents a BibrecBib88x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib88x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib88x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib88xs')
bibxxx = db.relationship(Bib88x, backref='bibrecs')
class Bib89x(db.Model):
"""Represents a Bib89x record."""
def __init__(self):
pass
__tablename__ = 'bib89x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib89x(db.Model):
"""Represents a BibrecBib89x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib89x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib89x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib89xs')
bibxxx = db.relationship(Bib89x, backref='bibrecs')
class Bib90x(db.Model):
"""Represents a Bib90x record."""
def __init__(self):
pass
__tablename__ = 'bib90x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib90x(db.Model):
"""Represents a BibrecBib90x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib90x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib90x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib90xs')
bibxxx = db.relationship(Bib90x, backref='bibrecs')
class Bib91x(db.Model):
"""Represents a Bib91x record."""
def __init__(self):
pass
__tablename__ = 'bib91x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib91x(db.Model):
"""Represents a BibrecBib91x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib91x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib91x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib91xs')
bibxxx = db.relationship(Bib91x, backref='bibrecs')
class Bib92x(db.Model):
"""Represents a Bib92x record."""
def __init__(self):
pass
__tablename__ = 'bib92x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib92x(db.Model):
"""Represents a BibrecBib92x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib92x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib92x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib92xs')
bibxxx = db.relationship(Bib92x, backref='bibrecs')
class Bib93x(db.Model):
"""Represents a Bib93x record."""
def __init__(self):
pass
__tablename__ = 'bib93x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib93x(db.Model):
"""Represents a BibrecBib93x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib93x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib93x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib93xs')
bibxxx = db.relationship(Bib93x, backref='bibrecs')
class Bib94x(db.Model):
"""Represents a Bib94x record."""
def __init__(self):
pass
__tablename__ = 'bib94x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib94x(db.Model):
"""Represents a BibrecBib94x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib94x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib94x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib94xs')
bibxxx = db.relationship(Bib94x, backref='bibrecs')
class Bib95x(db.Model):
"""Represents a Bib95x record."""
def __init__(self):
pass
__tablename__ = 'bib95x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib95x(db.Model):
"""Represents a BibrecBib95x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib95x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib95x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib95xs')
bibxxx = db.relationship(Bib95x, backref='bibrecs')
class Bib96x(db.Model):
"""Represents a Bib96x record."""
def __init__(self):
pass
__tablename__ = 'bib96x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib96x(db.Model):
"""Represents a BibrecBib96x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib96x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib96x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib96xs')
bibxxx = db.relationship(Bib96x, backref='bibrecs')
class Bib97x(db.Model):
"""Represents a Bib97x record."""
def __init__(self):
pass
__tablename__ = 'bib97x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib97x(db.Model):
"""Represents a BibrecBib97x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib97x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib97x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib97xs')
bibxxx = db.relationship(Bib97x, backref='bibrecs')
class Bib98x(db.Model):
"""Represents a Bib98x record."""
def __init__(self):
pass
__tablename__ = 'bib98x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib98x(db.Model):
"""Represents a BibrecBib98x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib98x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib98x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib98xs')
bibxxx = db.relationship(Bib98x, backref='bibrecs')
class Bib99x(db.Model):
"""Represents a Bib99x record."""
def __init__(self):
pass
__tablename__ = 'bib99x'
id = db.Column(db.MediumInteger(8, unsigned=True),
primary_key=True,
autoincrement=True)
tag = db.Column(db.String(6), nullable=False, index=True,
server_default='')
value = db.Column(db.Text(35), nullable=False,
index=True)
class BibrecBib99x(db.Model):
"""Represents a BibrecBib99x record."""
def __init__(self):
pass
__tablename__ = 'bibrec_bib99x'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
nullable=False, primary_key=True, index=True,
server_default='0')
id_bibxxx = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bib99x.id),
nullable=False, primary_key=True, index=True,
server_default='0')
field_number = db.Column(db.SmallInteger(5, unsigned=True),
primary_key=True)
bibrec = db.relationship(Bibrec, backref='bib99xs')
bibxxx = db.relationship(Bib99x, backref='bibrecs')
__all__ = ['Bibrec',
'Bibfmt',
'BibHOLDINGPEN',
'Bibdoc',
'BibdocBibdoc',
'BibrecBibdoc',
'HstDOCUMENT',
'HstRECORD',
'Bib00x',
'BibrecBib00x',
'Bib01x',
'BibrecBib01x',
'Bib02x',
'BibrecBib02x',
'Bib03x',
'BibrecBib03x',
'Bib04x',
'BibrecBib04x',
'Bib05x',
'BibrecBib05x',
'Bib06x',
'BibrecBib06x',
'Bib07x',
'BibrecBib07x',
'Bib08x',
'BibrecBib08x',
'Bib09x',
'BibrecBib09x',
'Bib10x',
'BibrecBib10x',
'Bib11x',
'BibrecBib11x',
'Bib12x',
'BibrecBib12x',
'Bib13x',
'BibrecBib13x',
'Bib14x',
'BibrecBib14x',
'Bib15x',
'BibrecBib15x',
'Bib16x',
'BibrecBib16x',
'Bib17x',
'BibrecBib17x',
'Bib18x',
'BibrecBib18x',
'Bib19x',
'BibrecBib19x',
'Bib20x',
'BibrecBib20x',
'Bib21x',
'BibrecBib21x',
'Bib22x',
'BibrecBib22x',
'Bib23x',
'BibrecBib23x',
'Bib24x',
'BibrecBib24x',
'Bib25x',
'BibrecBib25x',
'Bib26x',
'BibrecBib26x',
'Bib27x',
'BibrecBib27x',
'Bib28x',
'BibrecBib28x',
'Bib29x',
'BibrecBib29x',
'Bib30x',
'BibrecBib30x',
'Bib31x',
'BibrecBib31x',
'Bib32x',
'BibrecBib32x',
'Bib33x',
'BibrecBib33x',
'Bib34x',
'BibrecBib34x',
'Bib35x',
'BibrecBib35x',
'Bib36x',
'BibrecBib36x',
'Bib37x',
'BibrecBib37x',
'Bib38x',
'BibrecBib38x',
'Bib39x',
'BibrecBib39x',
'Bib40x',
'BibrecBib40x',
'Bib41x',
'BibrecBib41x',
'Bib42x',
'BibrecBib42x',
'Bib43x',
'BibrecBib43x',
'Bib44x',
'BibrecBib44x',
'Bib45x',
'BibrecBib45x',
'Bib46x',
'BibrecBib46x',
'Bib47x',
'BibrecBib47x',
'Bib48x',
'BibrecBib48x',
'Bib49x',
'BibrecBib49x',
'Bib50x',
'BibrecBib50x',
'Bib51x',
'BibrecBib51x',
'Bib52x',
'BibrecBib52x',
'Bib53x',
'BibrecBib53x',
'Bib54x',
'BibrecBib54x',
'Bib55x',
'BibrecBib55x',
'Bib56x',
'BibrecBib56x',
'Bib57x',
'BibrecBib57x',
'Bib58x',
'BibrecBib58x',
'Bib59x',
'BibrecBib59x',
'Bib60x',
'BibrecBib60x',
'Bib61x',
'BibrecBib61x',
'Bib62x',
'BibrecBib62x',
'Bib63x',
'BibrecBib63x',
'Bib64x',
'BibrecBib64x',
'Bib65x',
'BibrecBib65x',
'Bib66x',
'BibrecBib66x',
'Bib67x',
'BibrecBib67x',
'Bib68x',
'BibrecBib68x',
'Bib69x',
'BibrecBib69x',
'Bib70x',
'BibrecBib70x',
'Bib71x',
'BibrecBib71x',
'Bib72x',
'BibrecBib72x',
'Bib73x',
'BibrecBib73x',
'Bib74x',
'BibrecBib74x',
'Bib75x',
'BibrecBib75x',
'Bib76x',
'BibrecBib76x',
'Bib77x',
'BibrecBib77x',
'Bib78x',
'BibrecBib78x',
'Bib79x',
'BibrecBib79x',
'Bib80x',
'BibrecBib80x',
'Bib81x',
'BibrecBib81x',
'Bib82x',
'BibrecBib82x',
'Bib83x',
'BibrecBib83x',
'Bib84x',
'BibrecBib84x',
'Bib85x',
'BibrecBib85x',
'Bib86x',
'BibrecBib86x',
'Bib87x',
'BibrecBib87x',
'Bib88x',
'BibrecBib88x',
'Bib89x',
'BibrecBib89x',
'Bib90x',
'BibrecBib90x',
'Bib91x',
'BibrecBib91x',
'Bib92x',
'BibrecBib92x',
'Bib93x',
'BibrecBib93x',
'Bib94x',
'BibrecBib94x',
'Bib95x',
'BibrecBib95x',
'Bib96x',
'BibrecBib96x',
'Bib97x',
'BibrecBib97x',
'Bib98x',
'BibrecBib98x',
'Bib99x',
'BibrecBib99x']
diff --git a/modules/bibedit/lib/bibedit_utils.py b/modules/bibedit/lib/bibedit_utils.py
index 26c7e727e..ab7e1b5ce 100644
--- a/modules/bibedit/lib/bibedit_utils.py
+++ b/modules/bibedit/lib/bibedit_utils.py
@@ -1,1021 +1,1037 @@
## This file is part of Invenio.
## Copyright (C) 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0103
"""BibEdit Utilities.
This module contains support functions (i.e., those that are not called directly
by the web interface) that might be imported by other modules or that are
called by both the web and CLI interfaces.
"""
__revision__ = "$Id$"
import cPickle
import difflib
import fnmatch
import marshal
import os
import re
import time
import zlib
import tempfile
import sys
from datetime import datetime
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
from invenio.bibedit_config import CFG_BIBEDIT_FILENAME, \
CFG_BIBEDIT_RECORD_TEMPLATES_PATH, CFG_BIBEDIT_TO_MERGE_SUFFIX, \
CFG_BIBEDIT_FIELD_TEMPLATES_PATH, CFG_BIBEDIT_AJAX_RESULT_CODES_REV, \
CFG_BIBEDIT_CACHEDIR
from invenio.bibedit_dblayer import get_record_last_modification_date, \
delete_hp_change
from invenio.bibrecord import create_record, create_records, \
record_get_field_value, record_has_field, record_xml_output, \
record_strip_empty_fields, record_strip_empty_volatile_subfields, \
record_order_subfields, record_get_field_instances, \
record_add_field, field_get_subfield_codes, field_add_subfield, \
field_get_subfield_values, record_delete_fields, record_add_fields, \
- record_get_field_values, print_rec, record_modify_subfield
+ record_get_field_values, print_rec, record_modify_subfield, \
+ record_modify_controlfield
from invenio.bibtask import task_low_level_submission
from invenio.config import CFG_BIBEDIT_LOCKLEVEL, \
CFG_BIBEDIT_TIMEOUT, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG as OAIID_TAG, \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG as SYSNO_TAG, \
CFG_BIBEDIT_QUEUE_CHECK_METHOD, \
CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE, CFG_INSPIRE_SITE
from invenio.dateutils import convert_datetext_to_dategui
from invenio.textutils import wash_for_xml
from invenio.bibedit_dblayer import get_bibupload_task_opts, \
get_marcxml_of_record_revision, get_record_revisions, \
get_info_of_record_revision
from invenio.search_engine import print_record, record_exists, get_colID, \
guess_primary_collection_of_a_record, get_record, \
get_all_collections_of_a_record
from invenio.search_engine_utils import get_fieldvalues
from invenio.webuser import get_user_info, getUid, get_email
from invenio.dbquery import run_sql
from invenio.websearchadminlib import get_detailed_page_tabs
from invenio.access_control_engine import acc_authorize_action
from invenio.refextract_api import extract_references_from_record_xml, \
extract_references_from_string_xml, \
extract_references_from_url_xml
from invenio.textmarc2xmlmarc import transform_file, ParseError
from invenio.bibauthorid_name_utils import split_name_parts, \
create_normalized_name
from invenio.bibknowledge import get_kbr_values
# Precompile regexp:
re_file_option = re.compile(r'^%s' % CFG_BIBEDIT_CACHEDIR)
re_xmlfilename_suffix = re.compile(r'_(\d+)_\d+\.xml$')
re_revid_split = re.compile(r'^(\d+)\.(\d{14})$')
re_revdate_split = re.compile(r'^(\d\d\d\d)(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)')
re_taskid = re.compile(r'ID="(\d+)"')
re_tmpl_name = re.compile('<!-- BibEdit-Template-Name: (.*) -->')
re_tmpl_description = re.compile('<!-- BibEdit-Template-Description: (.*) -->')
re_ftmpl_name = re.compile('<!-- BibEdit-Field-Template-Name: (.*) -->')
re_ftmpl_description = re.compile('<!-- BibEdit-Field-Template-Description: (.*) -->')
VOLATILE_PREFIX = "VOLATILE:"
# Authorization
def user_can_edit_record_collection(req, recid):
""" Check if user has authorization to modify a collection
the recid belongs to
"""
def remove_volatile(field_value):
""" Remove volatile keyword from field value """
if field_value.startswith(VOLATILE_PREFIX):
field_value = field_value[len(VOLATILE_PREFIX):]
return field_value
# Get the collections the record belongs to
record_collections = get_all_collections_of_a_record(recid)
uid = getUid(req)
# In case we are creating a new record
if cache_exists(recid, uid):
dummy1, dummy2, record, dummy3, dummy4, dummy5, dummy6 = get_cache_file_contents(recid, uid)
values = record_get_field_values(record, '980', code="a")
record_collections.extend([remove_volatile(v) for v in values])
normalized_collections = []
for collection in record_collections:
# Get the normalized collection name present in the action table
res = run_sql("""SELECT value FROM accARGUMENT
WHERE keyword='collection'
AND value=%s;""", (collection,))
if res:
normalized_collections.append(res[0][0])
if not normalized_collections:
# Check if user has access to all collections
auth_code, auth_message = acc_authorize_action(req, 'runbibedit',
collection='')
if auth_code == 0:
return True
else:
for collection in normalized_collections:
auth_code, auth_message = acc_authorize_action(req, 'runbibedit',
collection=collection)
if auth_code == 0:
return True
return False
# Helper functions
def assert_undo_redo_lists_correctness(undo_list, redo_list):
    for undoItem in undo_list:
        assert undoItem is not None
    for redoItem in redo_list:
        assert redoItem is not None
def record_find_matching_fields(key, rec, tag="", ind1=" ", ind2=" ", \
exact_match=False):
"""
    Look for any field values containing (or, if an exact match is wanted,
    equal to) the given keyword string. The found fields are returned as a
    list of field instances per tag. The fields to search can be narrowed
    down to tag/indicator level.
@param key: keyword to search for
@type key: string
@param rec: a record structure as returned by bibrecord.create_record()
@type rec: dict
@param tag: a 3 characters long string
@type tag: string
@param ind1: a 1 character long string
@type ind1: string
@param ind2: a 1 character long string
@type ind2: string
@return: a list of found fields in a tuple per tag: (tag, field_instances) where
field_instances is a list of (Subfields, ind1, ind2, value, field_position_global)
and subfields is list of (code, value)
@rtype: list
"""
if not tag:
all_field_instances = rec.items()
else:
all_field_instances = [(tag, record_get_field_instances(rec, tag, ind1, ind2))]
matching_field_instances = []
for current_tag, field_instances in all_field_instances:
found_fields = []
for field_instance in field_instances:
# Get values to match: controlfield_value + subfield values
values_to_match = [field_instance[3]] + \
[val for code, val in field_instance[0]]
if exact_match and key in values_to_match:
found_fields.append(field_instance)
else:
for value in values_to_match:
if value.find(key) > -1:
found_fields.append(field_instance)
break
if len(found_fields) > 0:
matching_field_instances.append((current_tag, found_fields))
return matching_field_instances
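The matching logic above (a substring search across the controlfield value plus all subfield values, with an optional exact-match mode) can be sketched in standalone form. The toy `rec` structure below is a hypothetical illustration of the bibrecord layout, not real data:

```python
def find_matching_fields(key, rec, exact_match=False):
    # rec maps a tag to field instances of the form
    # (subfields, ind1, ind2, controlfield_value, position),
    # where subfields is a list of (code, value) pairs.
    matches = []
    for tag, instances in rec.items():
        found = []
        for inst in instances:
            values = [inst[3]] + [val for _code, val in inst[0]]
            if exact_match:
                hit = key in values                   # whole-value equality
            else:
                hit = any(key in v for v in values)   # substring match
            if hit:
                found.append(inst)
        if found:
            matches.append((tag, found))
    return matches

rec = {'245': [([('a', 'A study of neutrinos')], ' ', ' ', '', 1)]}
print(find_matching_fields('neutrino', rec))                    # one match in tag 245
print(find_matching_fields('neutrino', rec, exact_match=True))  # no match
```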
# Operations on the BibEdit cache file
def cache_exists(recid, uid):
"""Check if the BibEdit cache file exists."""
return os.path.isfile('%s.tmp' % _get_file_path(recid, uid))
def get_cache_mtime(recid, uid):
"""Get the last modified time of the BibEdit cache file. Check that the
cache exists before calling this function.
"""
try:
return int(os.path.getmtime('%s.tmp' % _get_file_path(recid, uid)))
except OSError:
pass
def cache_expired(recid, uid):
"""Has it been longer than the number of seconds given by
CFG_BIBEDIT_TIMEOUT since last cache update? Check that the
cache exists before calling this function.
"""
return get_cache_mtime(recid, uid) < int(time.time()) - CFG_BIBEDIT_TIMEOUT
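The expiry test above reduces to comparing a file's mtime against a timeout. A minimal self-contained sketch (the timeout value is an arbitrary stand-in for CFG_BIBEDIT_TIMEOUT):

```python
import os
import time
import tempfile

TIMEOUT = 3600  # stand-in for CFG_BIBEDIT_TIMEOUT

def cache_expired(path, timeout=TIMEOUT):
    """True if PATH was last modified more than TIMEOUT seconds ago."""
    return int(os.path.getmtime(path)) < int(time.time()) - timeout

fd, path = tempfile.mkstemp()
os.close(fd)
print(cache_expired(path))   # a fresh file is not expired: False
# Backdate the mtime by two hours to simulate an abandoned cache:
old = time.time() - 7200
os.utime(path, (old, old))
print(cache_expired(path))   # True
os.remove(path)
```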
def create_cache_file(recid, uid, record='', cache_dirty=False, pending_changes=[], disabled_hp_changes = {}, undo_list = [], redo_list=[]):
"""Create a BibEdit cache file, and return revision and record. This will
overwrite any existing cache the user has for this record.
datetime.
"""
if not record:
record = get_bibrecord(recid)
if not record:
return
file_path = '%s.tmp' % _get_file_path(recid, uid)
record_revision = get_record_last_modification_date(recid)
    if record_revision is None:
record_revision = datetime.now().timetuple()
cache_file = open(file_path, 'w')
assert_undo_redo_lists_correctness(undo_list, redo_list)
# Order subfields alphabetically after loading the record
record_order_subfields(record)
cPickle.dump([cache_dirty, record_revision, record, pending_changes, disabled_hp_changes, undo_list, redo_list], cache_file)
cache_file.close()
return record_revision, record
def touch_cache_file(recid, uid):
"""Touch a BibEdit cache file. This should be used to indicate that the
user has again accessed the record, so that locking will work correctly.
"""
    if cache_exists(recid, uid):
        os.utime('%s.tmp' % _get_file_path(recid, uid), None)
def get_bibrecord(recid):
"""Return record in BibRecord wrapping."""
if record_exists(recid):
return create_record(print_record(recid, 'xm'))[0]
def get_cache_file_contents(recid, uid):
"""Return the contents of a BibEdit cache file."""
cache_file = _get_cache_file(recid, uid, 'r')
if cache_file:
cache_dirty, record_revision, record, pending_changes, disabled_hp_changes, undo_list, redo_list = cPickle.load(cache_file)
cache_file.close()
assert_undo_redo_lists_correctness(undo_list, redo_list)
return cache_dirty, record_revision, record, pending_changes, disabled_hp_changes, undo_list, redo_list
def update_cache_file_contents(recid, uid, record_revision, record, pending_changes, disabled_hp_changes, undo_list, redo_list):
"""Save updates to the record in BibEdit cache. Return file modificaton
time.
"""
cache_file = _get_cache_file(recid, uid, 'w')
if cache_file:
assert_undo_redo_lists_correctness(undo_list, redo_list)
cPickle.dump([True, record_revision, record, pending_changes, disabled_hp_changes, undo_list, redo_list], cache_file)
cache_file.close()
return get_cache_mtime(recid, uid)
def delete_cache_file(recid, uid):
"""Delete a BibEdit cache file."""
try:
os.remove('%s.tmp' % _get_file_path(recid, uid))
except OSError:
# File was probably already removed
pass
def delete_disabled_changes(used_changes):
for change_id in used_changes:
delete_hp_change(change_id)
def save_xml_record(recid, uid, xml_record='', to_upload=True, to_merge=False):
"""Write XML record to file. Default behaviour is to read the record from
a BibEdit cache file, filter out the unchanged volatile subfields,
write it back to an XML file and then pass this file to BibUpload.
    @param xml_record: give XML as string instead of reading cache file
@param to_upload: pass the XML file to BibUpload
@param to_merge: prepare an XML file for BibMerge to use
"""
if not xml_record:
# Read record from cache file.
cache = get_cache_file_contents(recid, uid)
if cache:
record = cache[2]
used_changes = cache[4]
xml_record = record_xml_output(record)
delete_cache_file(recid, uid)
delete_disabled_changes(used_changes)
else:
record = create_record(xml_record)[0]
# clean the record from unfilled volatile fields
record_strip_empty_volatile_subfields(record)
record_strip_empty_fields(record)
# order subfields alphabetically before saving the record
record_order_subfields(record)
xml_to_write = wash_for_xml(record_xml_output(record))
# Write XML file.
if not to_merge:
file_path = '%s.xml' % _get_file_path(recid, uid)
else:
file_path = '%s_%s.xml' % (_get_file_path(recid, uid),
CFG_BIBEDIT_TO_MERGE_SUFFIX)
xml_file = open(file_path, 'w')
xml_file.write(xml_to_write)
xml_file.close()
user_name = get_user_info(uid)[1]
if to_upload:
# Pass XML file to BibUpload.
task_low_level_submission('bibupload', 'bibedit', '-P', '5', '-r',
file_path, '-u', user_name)
return True
# Security: Locking and integrity
def latest_record_revision(recid, revision_time):
"""Check if timetuple REVISION_TIME matches latest modification date."""
latest = get_record_last_modification_date(recid)
# this can be none if the record is new
return (latest == None) or (revision_time == latest)
def record_locked_by_other_user(recid, uid):
"""Return true if any other user than UID has active caches for record
RECID.
"""
active_uids = _uids_with_active_caches(recid)
try:
active_uids.remove(uid)
except ValueError:
pass
return bool(active_uids)
def get_record_locked_since(recid, uid):
""" Get modification time for the given recid and uid
"""
filename = "%s_%s_%s.tmp" % (CFG_BIBEDIT_FILENAME,
recid,
uid)
locked_since = ""
try:
locked_since = time.ctime(os.path.getmtime('%s%s%s' % (
CFG_BIBEDIT_CACHEDIR, os.sep, filename)))
except OSError:
pass
return locked_since
def record_locked_by_user_details(recid, uid):
""" Get the details about the user that has locked a record and the
time the record has been locked.
@return: user details and time when record was locked
@rtype: tuple
"""
active_uids = _uids_with_active_caches(recid)
try:
active_uids.remove(uid)
except ValueError:
pass
record_blocked_by_nickname = record_blocked_by_email = locked_since = ""
if active_uids:
record_blocked_by_uid = active_uids[0]
record_blocked_by_nickname = get_user_info(record_blocked_by_uid)[1]
record_blocked_by_email = get_email(record_blocked_by_uid)
locked_since = get_record_locked_since(recid, record_blocked_by_uid)
return record_blocked_by_nickname, record_blocked_by_email, locked_since
def record_locked_by_queue(recid):
"""Check if record should be locked for editing because of the current state
of the BibUpload queue. The level of checking is based on
CFG_BIBEDIT_LOCKLEVEL.
"""
# Check for *any* scheduled bibupload tasks.
if CFG_BIBEDIT_LOCKLEVEL == 2:
return _get_bibupload_task_ids()
filenames = _get_bibupload_filenames()
# Check for match between name of XML-files and record.
# Assumes that filename ends with _<recid>.xml.
if CFG_BIBEDIT_LOCKLEVEL == 1:
recids = []
for filename in filenames:
filename_suffix = re_xmlfilename_suffix.search(filename)
if filename_suffix:
recids.append(int(filename_suffix.group(1)))
return recid in recids
# Check for match between content of files and record.
if CFG_BIBEDIT_LOCKLEVEL == 3:
while True:
lock = _record_in_files_p(recid, filenames)
# Check if any new files were added while we were searching
if not lock:
filenames_updated = _get_bibupload_filenames()
for filename in filenames_updated:
if not filename in filenames:
break
else:
return lock
else:
return lock
# History/revisions
def revision_to_timestamp(td):
"""
    Convert a revision date (a time.struct_time) to a 14-digit timestamp string
"""
return "%04i%02i%02i%02i%02i%02i" % (td.tm_year, td.tm_mon, td.tm_mday, \
td.tm_hour, td.tm_min, td.tm_sec)
def timestamp_to_revision(timestamp):
"""
    Convert a 14-digit timestamp string back to a revision date (a time.struct_time)
"""
year = int(timestamp[0:4])
month = int(timestamp[4:6])
day = int(timestamp[6:8])
hour = int(timestamp[8:10])
minute = int(timestamp[10:12])
second = int(timestamp[12:14])
return datetime(year, month, day, hour, minute, second).timetuple()
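The two conversions above are inverses of each other; a condensed sketch of the round trip, using a made-up timestamp:

```python
from datetime import datetime

def revision_to_timestamp(td):
    # td is a time.struct_time, as stored for record revisions
    return "%04i%02i%02i%02i%02i%02i" % (td.tm_year, td.tm_mon, td.tm_mday,
                                         td.tm_hour, td.tm_min, td.tm_sec)

def timestamp_to_revision(ts):
    return datetime(int(ts[0:4]), int(ts[4:6]), int(ts[6:8]),
                    int(ts[8:10]), int(ts[10:12]), int(ts[12:14])).timetuple()

ts = "20130415103000"
assert revision_to_timestamp(timestamp_to_revision(ts)) == ts
print(timestamp_to_revision(ts).tm_year)  # 2013
```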
def get_record_revision_timestamps(recid):
"""return list of timestamps describing teh revisions of a given record"""
rev_ids = get_record_revision_ids(recid)
result = []
for rev_id in rev_ids:
result.append(rev_id.split(".")[1])
return result
def get_record_revision_ids(recid):
"""Return list of all record revision IDs.
Return revision IDs in chronologically decreasing order (latest first).
"""
res = []
tmp_res = get_record_revisions(recid)
for row in tmp_res:
res.append('%s.%s' % (row[0], row[1]))
return res
def get_marcxml_of_revision(recid, revid):
"""Return MARCXML string of revision.
Return empty string if revision does not exist. REVID should be a string.
"""
res = ''
tmp_res = get_marcxml_of_record_revision(recid, revid)
if tmp_res:
for row in tmp_res:
res += zlib.decompress(row[0]) + '\n'
    return res
def get_marcxml_of_revision_id(revid):
"""Return MARCXML string of revision.
Return empty string if revision does not exist. REVID should be a string.
"""
recid, job_date = split_revid(revid, 'datetext')
    return get_marcxml_of_revision(recid, job_date)
def get_info_of_revision_id(revid):
"""Return info string regarding revision.
Return empty string if revision does not exist. REVID should be a string.
"""
recid, job_date = split_revid(revid, 'datetext')
res = ''
tmp_res = get_info_of_record_revision(recid, job_date)
if tmp_res:
task_id = str(tmp_res[0][0])
author = tmp_res[0][1]
if not author:
author = 'N/A'
res += '%s %s %s' % (revid.ljust(22), task_id.ljust(15), author.ljust(15))
job_details = tmp_res[0][2].split()
upload_mode = job_details[0] + job_details[1][:-1]
upload_file = job_details[2] + job_details[3][:-1]
res += '%s %s' % (upload_mode, upload_file)
return res
def revision_format_valid_p(revid):
"""Test validity of revision ID format (=RECID.REVDATE)."""
if re_revid_split.match(revid):
return True
return False
def record_revision_exists(recid, revid):
results = get_record_revisions(recid)
for res in results:
if res[1] == revid:
return True
return False
def split_revid(revid, dateformat=''):
"""Split revid and return tuple (recid, revdate).
Optional dateformat can be datetext or dategui.
"""
recid, revdate = re_revid_split.search(revid).groups()
if dateformat:
datetext = '%s-%s-%s %s:%s:%s' % re_revdate_split.search(
revdate).groups()
if dateformat == 'datetext':
revdate = datetext
elif dateformat == 'dategui':
revdate = convert_datetext_to_dategui(datetext, secs=True)
return recid, revdate
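A revision ID is the record ID joined to a 14-digit timestamp by a dot; the parsing done by split_revid can be illustrated on its own (the sample revid is made up):

```python
import re

re_revid_split = re.compile(r'^(\d+)\.(\d{14})$')
re_revdate_split = re.compile(r'^(\d\d\d\d)(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)')

def split_revid(revid):
    # -> (recid, 'YYYY-MM-DD hh:mm:ss')
    recid, revdate = re_revid_split.search(revid).groups()
    datetext = '%s-%s-%s %s:%s:%s' % re_revdate_split.search(revdate).groups()
    return recid, datetext

print(split_revid('1234.20130415103000'))  # ('1234', '2013-04-15 10:30:00')
```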
+def modify_record_timestamp(revision_xml, last_revision_ts):
+ """ Modify tag 005 to add the revision passed as parameter.
+ @param revision_xml: marcxml representation of the record to modify
+ @type revision_xml: string
+ @param last_revision_ts: timestamp to add to 005 tag
+ @type last_revision_ts: string
+
+ @return: marcxml with 005 tag modified
+ """
+ recstruct = create_record(revision_xml)[0]
+ record_modify_controlfield(recstruct, "005", last_revision_ts,
+ field_position_local=0)
+ return record_xml_output(recstruct)
+
+
def get_xml_comparison(header1, header2, xml1, xml2):
"""Return diff of two MARCXML records."""
return ''.join(difflib.unified_diff(xml1.splitlines(1),
xml2.splitlines(1), header1, header2))
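get_xml_comparison is a thin wrapper around difflib.unified_diff from the standard library; the call shape can be tried with any two strings (the XML snippets below are made up):

```python
import difflib

xml1 = '<record>\n  <controlfield tag="001">1</controlfield>\n</record>\n'
xml2 = '<record>\n  <controlfield tag="001">2</controlfield>\n</record>\n'

# splitlines(1) keeps the trailing newlines, which unified_diff expects
diff = ''.join(difflib.unified_diff(xml1.splitlines(1), xml2.splitlines(1),
                                    'rev1', 'rev2'))
print(diff)
```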
#Templates
def get_templates(templatesDir, tmpl_name, tmpl_description, extractContent = False):
"""Return list of templates [filename, name, description, content*]
the extractContent variable indicated if the parsed content should
be included"""
template_fnames = fnmatch.filter(os.listdir(
templatesDir), '*.xml')
templates = []
for fname in template_fnames:
filepath = '%s%s%s' % (templatesDir, os.sep, fname)
template_file = open(filepath,'r')
template = template_file.read()
template_file.close()
fname_stripped = os.path.splitext(fname)[0]
mo_name = tmpl_name.search(template)
mo_description = tmpl_description.search(template)
date_modified = time.ctime(os.path.getmtime(filepath))
if mo_name:
name = mo_name.group(1)
else:
name = fname_stripped
if mo_description:
description = mo_description.group(1)
else:
description = ''
        if extractContent:
            parsedTemplate = create_record(template)[0]
            if parsedTemplate is not None:
                # If the template was correct
                templates.append([fname_stripped, name, description, parsedTemplate])
            else:
                raise ValueError("Problem when parsing the template %s" % (fname, ))
else:
templates.append([fname_stripped, name, description, date_modified])
return templates
# Field templates
def get_field_templates():
"""Returns list of field templates [filename, name, description, content]"""
return get_templates(CFG_BIBEDIT_FIELD_TEMPLATES_PATH, re_ftmpl_name, re_ftmpl_description, True)
# Record templates
def get_record_templates():
"""Return list of record template [filename, name, description] ."""
return get_templates(CFG_BIBEDIT_RECORD_TEMPLATES_PATH, re_tmpl_name, re_tmpl_description, False)
def get_record_template(name):
"""Return an XML record template."""
filepath = '%s%s%s.xml' % (CFG_BIBEDIT_RECORD_TEMPLATES_PATH, os.sep, name)
if os.path.isfile(filepath):
template_file = open(filepath, 'r')
template = template_file.read()
template_file.close()
return template
# Private functions
def _get_cache_file(recid, uid, mode):
"""Return a BibEdit cache file object."""
if cache_exists(recid, uid):
return open('%s.tmp' % _get_file_path(recid, uid), mode)
def _get_file_path(recid, uid, filename=''):
"""Return the file path to a BibEdit file (excluding suffix).
If filename is specified this replaces the config default.
"""
if not filename:
return '%s%s%s_%s_%s' % (CFG_BIBEDIT_CACHEDIR, os.sep, CFG_BIBEDIT_FILENAME,
recid, uid)
else:
return '%s%s%s_%s_%s' % (CFG_BIBEDIT_CACHEDIR, os.sep, filename, recid, uid)
def _uids_with_active_caches(recid):
"""Return list of uids with active caches for record RECID. Active caches
are caches that have been modified a number of seconds ago that is less than
the one given by CFG_BIBEDIT_TIMEOUT.
"""
    re_tmpfilename = re.compile(r'%s_%s_(\d+)\.tmp' % (CFG_BIBEDIT_FILENAME,
                                                       recid))
tmpfiles = fnmatch.filter(os.listdir(CFG_BIBEDIT_CACHEDIR), '%s*.tmp' %
CFG_BIBEDIT_FILENAME)
expire_time = int(time.time()) - CFG_BIBEDIT_TIMEOUT
active_uids = []
for tmpfile in tmpfiles:
mo = re_tmpfilename.match(tmpfile)
if mo and int(os.path.getmtime('%s%s%s' % (
CFG_BIBEDIT_CACHEDIR, os.sep, tmpfile))) > expire_time:
active_uids.append(int(mo.group(1)))
return active_uids
def _get_bibupload_task_ids():
"""Return list of all BibUpload task IDs.
Ignore tasks submitted by user bibreformat.
"""
res = run_sql('''SELECT id FROM schTASK WHERE proc LIKE "bibupload%" AND user <> "bibreformat" AND status IN ("WAITING", "SCHEDULED", "RUNNING", "CONTINUING", "ABOUT TO STOP", "ABOUT TO SLEEP", "SLEEPING")''')
return [row[0] for row in res]
def _get_bibupload_filenames():
"""Return paths to all files scheduled for upload."""
task_ids = _get_bibupload_task_ids()
filenames = []
tasks_opts = get_bibupload_task_opts(task_ids)
for task_opts in tasks_opts:
if task_opts:
record_options = marshal.loads(task_opts[0][0])
for option in record_options[1:]:
if re_file_option.search(option):
filenames.append(option)
return filenames
def _record_in_files_p(recid, filenames):
"""Search XML files for given record."""
# Get id tags of record in question
rec_oaiid = rec_sysno = -1
rec_oaiid_tag = get_fieldvalues(recid, OAIID_TAG)
if rec_oaiid_tag:
rec_oaiid = rec_oaiid_tag[0]
rec_sysno_tag = get_fieldvalues(recid, SYSNO_TAG)
if rec_sysno_tag:
rec_sysno = rec_sysno_tag[0]
# For each record in each file, compare ids and abort if match is found
for filename in filenames:
try:
if CFG_BIBEDIT_QUEUE_CHECK_METHOD == 'regexp':
# check via regexp: this is fast, but may not be precise
re_match_001 = re.compile('<controlfield tag="001">%s</controlfield>' % (recid))
re_match_oaiid = re.compile('<datafield tag="%s" ind1=" " ind2=" ">(\s*<subfield code="a">\s*|\s*<subfield code="9">\s*.*\s*</subfield>\s*<subfield code="a">\s*)%s' % (OAIID_TAG[0:3],rec_oaiid))
re_match_sysno = re.compile('<datafield tag="%s" ind1=" " ind2=" ">(\s*<subfield code="a">\s*|\s*<subfield code="9">\s*.*\s*</subfield>\s*<subfield code="a">\s*)%s' % (SYSNO_TAG[0:3],rec_sysno))
file_content = open(filename).read()
if re_match_001.search(file_content):
return True
if rec_oaiid_tag:
if re_match_oaiid.search(file_content):
return True
if rec_sysno_tag:
if re_match_sysno.search(file_content):
return True
else:
# by default, check via bibrecord: this is accurate, but may be slow
file_ = open(filename)
records = create_records(file_.read(), 0, 0)
for i in range(0, len(records)):
record, all_good = records[i][:2]
if record and all_good:
if _record_has_id_p(record, recid, rec_oaiid, rec_sysno):
return True
file_.close()
except IOError:
continue
return False
def _record_has_id_p(record, recid, rec_oaiid, rec_sysno):
"""Check if record matches any of the given IDs."""
if record_has_field(record, '001'):
if (record_get_field_value(record, '001', '%', '%')
== str(recid)):
return True
if record_has_field(record, OAIID_TAG[0:3]):
if (record_get_field_value(
record, OAIID_TAG[0:3], OAIID_TAG[3],
OAIID_TAG[4], OAIID_TAG[5]) == rec_oaiid):
return True
if record_has_field(record, SYSNO_TAG[0:3]):
if (record_get_field_value(
record, SYSNO_TAG[0:3], SYSNO_TAG[3],
SYSNO_TAG[4], SYSNO_TAG[5]) == rec_sysno):
return True
return False
def can_record_have_physical_copies(recid):
"""Determine if the record can have physical copies
(addable through the bibCirculation module).
    The information is derived from the tabs displayed for the given record.
    Only records already saved within a collection may have physical copies.
@return: True or False
"""
if get_record(recid) == None:
return False
col_id = get_colID(guess_primary_collection_of_a_record(recid))
collections = get_detailed_page_tabs(col_id, recid)
if (not collections.has_key("holdings")) or \
(not collections["holdings"].has_key("visible")):
return False
return collections["holdings"]["visible"] == True
def get_record_collections(recid):
""" Returns all collections of a record, field 980
@param recid: record id to get collections from
@type: string
@return: list of collections
@rtype: list
"""
recstruct = get_record(recid)
    return record_get_field_values(recstruct, tag="980", ind1=" ",
                                   ind2=" ", code="a")
def extend_record_with_template(recid):
""" Determine if the record has to be extended with the content
of a template as defined in CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE
@return: template name to be applied to record or False if no template
has to be applied
"""
rec_collections = get_record_collections(recid)
for collection in rec_collections:
if collection in CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE:
return CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE[collection]
return False
def merge_record_with_template(rec, template_name):
""" Extend the record rec with the contents of the template and return it"""
template = get_record_template(template_name)
if not template:
return
template_bibrec = create_record(template)[0]
for field_tag in template_bibrec:
if not record_has_field(rec, field_tag):
for field_instance in template_bibrec[field_tag]:
record_add_field(rec, field_tag, field_instance[1],
field_instance[2], subfields=field_instance[0])
else:
for template_field_instance in template_bibrec[field_tag]:
subfield_codes_template = field_get_subfield_codes(template_field_instance)
for field_instance in rec[field_tag]:
subfield_codes = field_get_subfield_codes(field_instance)
for code in subfield_codes_template:
if code not in subfield_codes:
field_add_subfield(field_instance, code,
field_get_subfield_values(template_field_instance,
code)[0])
return rec
#################### Reference extraction ####################
def replace_references(recid, uid=None, txt=None, url=None):
"""Replace references for a record
The record itself is not updated, the marc xml of the document with updated
references is returned
Parameters:
    * recid: the id of the record
    * uid: the id of the user (used to read the BibEdit cache)
    * txt: references in text mode
    * url: URL of a document to extract references from
"""
# Parse references
if txt is not None:
references_xml = extract_references_from_string_xml(txt, is_only_references=True)
elif url is not None:
references_xml = extract_references_from_url_xml(url)
else:
references_xml = extract_references_from_record_xml(recid)
references = create_record(references_xml.encode('utf-8'))
dummy1, dummy2, record, dummy3, dummy4, dummy5, dummy6 = get_cache_file_contents(recid, uid)
out_xml = None
references_to_add = record_get_field_instances(references[0],
tag='999',
ind1='C',
ind2='5')
refextract_status = record_get_field_instances(references[0],
tag='999',
ind1='C',
ind2='6')
if references_to_add:
# Replace 999 fields
record_delete_fields(record, '999')
record_add_fields(record, '999', references_to_add)
record_add_fields(record, '999', refextract_status)
# Update record references
out_xml = record_xml_output(record)
return out_xml
#################### cnum generation ####################
def record_is_conference(record):
"""
    Determine if the record is a conference, based on the values present
    in field 980.
@param record: record to be checked
@type record: bibrecord object
@return: True if record is a conference, False otherwise
@rtype: boolean
"""
# Get collection field content (tag 980)
tag_980_content = record_get_field_values(record, "980", " ", " ", "a")
if "CONFERENCES" in tag_980_content:
return True
return False
def add_record_cnum(recid, uid):
"""
    Check if the record already has a cnum. If not, generate a new one
    and return the result.
@param recid: recid of the record under check. Used to retrieve cache file
@type recid: int
@param uid: id of the user. Used to retrieve cache file
@type uid: int
@return: None if cnum already present, new cnum otherwise
@rtype: None or string
"""
# Import placed here to avoid circular dependency
from invenio.sequtils_cnum import CnumSeq, ConferenceNoStartDateError
record_revision, record, pending_changes, deactivated_hp_changes, \
undo_list, redo_list = get_cache_file_contents(recid, uid)[1:]
record_strip_empty_volatile_subfields(record)
# Check if record already has a cnum
tag_111__g_content = record_get_field_value(record, "111", " ", " ", "g")
if tag_111__g_content:
return
else:
cnum_seq = CnumSeq()
try:
new_cnum = cnum_seq.next_value(xml_record=wash_for_xml(print_rec(record)))
except ConferenceNoStartDateError:
return None
field_add_subfield(record['111'][0], 'g', new_cnum)
update_cache_file_contents(recid, uid, record_revision,
record, \
pending_changes, \
deactivated_hp_changes, \
undo_list, redo_list)
return new_cnum
def get_xml_from_textmarc(recid, textmarc_record):
"""
Convert textmarc to marcxml and return the result of the conversion
@param recid: id of the record that is being converted
@type recid: int
@param textmarc_record: record content in textmarc format
@type textmarc_record: string
@return: dictionary with the following keys:
* resultMsg: message describing conversion status
* resultXML: xml resulting from conversion
* parse_error: in case of error, a description of it
@rtype: dict
"""
response = {}
# Let's remove empty lines
textmarc_record = os.linesep.join([s for s in textmarc_record.splitlines() if s])
# Create temp file with textmarc to be converted by textmarc2xmlmarc
(file_descriptor, file_name) = tempfile.mkstemp()
f = os.fdopen(file_descriptor, "w")
# Write content appending sysno at beginning
for line in textmarc_record.splitlines():
f.write("%09d %s\n" % (recid, re.sub("\s+", " ", line.strip())))
f.close()
old_stdout = sys.stdout
try:
# Redirect output, transform, restore old references
new_stdout = StringIO()
sys.stdout = new_stdout
try:
transform_file(file_name)
response['resultMsg'] = 'textmarc_parsing_success'
response['resultXML'] = new_stdout.getvalue()
except ParseError, e:
# Something went wrong, notify user
response['resultXML'] = ""
response['resultMsg'] = 'textmarc_parsing_error'
response['parse_error'] = [e.lineno, " ".join(e.linecontent.split()[1:]), e.message]
finally:
sys.stdout = old_stdout
return response
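The stdout redirection around transform_file() above is easy to get subtly wrong; in Python 3 the same capture can be packaged as a reusable context manager. A minimal sketch (capture_stdout is an illustrative name, not part of Invenio):

```python
import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def capture_stdout():
    """Temporarily point sys.stdout at a StringIO buffer."""
    old_stdout = sys.stdout
    buf = StringIO()
    sys.stdout = buf
    try:
        yield buf
    finally:
        # Always restore the real stdout, even if the body raises.
        sys.stdout = old_stdout

# transform_file() prints its MARCXML to stdout; any print-based
# producer can be captured the same way:
with capture_stdout() as out:
    print("<record>...</record>")
captured = out.getvalue()
```

This mirrors the try/finally in get_xml_from_textmarc while guaranteeing restoration in one place.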
#################### crossref utils ####################
def crossref_process_template(template, change=False):
"""
Creates record from template based on xml template
@param change: if set to True, makes changes to the record (translating the
title, unifying author names, etc.); if not, returns the record without
any changes
@return: record
"""
record = create_record(template)[0]
if change:
crossref_translate_title(record)
crossref_normalize_name(record)
return record
def crossref_translate_title(record):
"""
Convert the record's title to the Inspire specific abbreviation
of the title (using JOURNALS knowledge base)
@return: changed record
"""
# probably there is only one 773 field
# but just in case let's treat it as a list
for field in record_get_field_instances(record, '773'):
title = field[0][0][1]
new_title = get_kbr_values("JOURNALS", title, searchtype='e')
if new_title:
# returned value is a list, and we need only the first value
new_title = new_title[0][0]
position = field[4]
record_modify_subfield(rec=record, tag='773', subfield_code='p', \
value=new_title, subfield_position=0, field_position_global=position)
def crossref_normalize_name(record):
"""
Changes the format of author's name (often with initials) to the proper,
unified one, using bibauthor_name_utils tools
@return: changed record
"""
# pattern for removing the spaces between two initials
pattern_initials = '([A-Z]\\.)\\s([A-Z]\\.)'
# first, change the main author
for field in record_get_field_instances(record, '100'):
main_author = field[0][0][1]
new_author = create_normalized_name(split_name_parts(main_author))
# remove spaces between initials
# two iterations are required
for _ in range(2):
new_author = re.sub(pattern_initials, '\g<1>\g<2>', new_author)
position = field[4]
record_modify_subfield(rec=record, tag='100', subfield_code='a', \
value=new_author, subfield_position=0, field_position_global=position)
# then, change additional authors
for field in record_get_field_instances(record, '700'):
author = field[0][0][1]
new_author = create_normalized_name(split_name_parts(author))
for _ in range(2):
new_author = re.sub(pattern_initials, '\g<1>\g<2>',new_author)
position = field[4]
record_modify_subfield(rec=record, tag='700', subfield_code='a', \
value=new_author, subfield_position=0, field_position_global=position)
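The two-iteration loop in crossref_normalize_name exists because re.sub does not revisit text consumed by a previous match: in a run of three initials the middle initial belongs to the first match, so one gap survives a single pass. A quick illustration:

```python
import re

# Same pattern as in crossref_normalize_name: two single-letter
# initials separated by whitespace.
pattern_initials = r'([A-Z]\.)\s([A-Z]\.)'

name = 'Smith, J. R. K.'
one_pass = re.sub(pattern_initials, r'\g<1>\g<2>', name)
# 'J. R.' was matched and collapsed, consuming 'R.', so the
# 'R. K.' gap was never examined during the first pass.
two_pass = re.sub(pattern_initials, r'\g<1>\g<2>', one_pass)
```

Two passes are always enough here, since each pass halves the number of remaining gaps between adjacent initials.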
diff --git a/modules/bibencode/lib/bibencode_batch_engine.py b/modules/bibencode/lib/bibencode_batch_engine.py
index 5037bcc1c..391102828 100644
--- a/modules/bibencode/lib/bibencode_batch_engine.py
+++ b/modules/bibencode/lib/bibencode_batch_engine.py
@@ -1,760 +1,760 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Bibencode batch processing submodule"""
from string import Template
from pprint import pprint
import os
import shutil
import uuid
from pprint import pformat
from invenio.bibtask import (
task_update_progress,
write_message,
task_low_level_submission
)
from invenio.bibdocfile import BibRecDocs, compose_file, compose_format, decompose_file
from invenio.search_engine import (
record_exists,
get_collection_reclist,
search_pattern,
get_fieldvalues
)
from invenio.bibencode_encode import encode_video, assure_quality
from invenio.bibencode_extract import extract_frames
from invenio.bibencode_profiles import (
get_encoding_profile,
get_extract_profile
)
from invenio.bibdocfilecli import cli_fix_marc
from invenio.bibencode_metadata import (
pbcore_metadata
)
from invenio.bibencode_utils import getval, chose2, generate_timestamp
from invenio.bibencode_config import (
CFG_BIBENCODE_DAEMON_DIR_NEWJOBS,
CFG_BIBENCODE_PBCORE_MARC_XSLT,
CFG_BIBENCODE_ASPECT_RATIO_MARC_FIELD
)
from invenio.mailutils import send_email
from invenio.messages import gettext_set_language
from invenio.webuser import emailUnique, get_user_preferences
from invenio.bibformat_xslt_engine import format
from invenio.jsonutils import json, json_decode_file
import invenio.config
## Stored messages for email notifications
global _BATCH_STEP, _BATCH_STEPS
_BATCH_STEP = 1
_BATCH_STEPS = 1
global _MSG_HISTORY, _UPD_HISTORY
_MSG_HISTORY = []
_UPD_HISTORY = []
def _notify_error_admin(batch_job,
email_admin=invenio.config.CFG_SITE_ADMIN_EMAIL):
"""Sends a notification email to the specified address, containing
admin-only information. Is called by process_batch_job() if an error
occured during the processing.
@param email_admin: email address of the admin
@type email_admin: string
"""
if not email_admin:
return
template = ("BibEncode batch processing has reported an error during the"
"execution of a job within the batch description <br/><br/>"
"This is the batch description: <br/><br/>"
"%(batch_description)s <br/><br/>"
"This is the message log: <br/><br/>"
"%(message_log)s")
html_text = template % {"batch_description": pformat(batch_job).replace("\n", "<br/>"),
"message_log": "\n".join(_MSG_HISTORY)}
text = html_text.replace("<br/>", "\n")
send_email(fromaddr=invenio.config.CFG_SITE_ADMIN_EMAIL,
toaddr=email_admin,
subject="Error during BibEncode batch processing",
content=text,
html_content=html_text)
def _notify_error_user(email_user, original_filename, recid, submission_title, ln=invenio.config.CFG_SITE_LANG):
"""Sends an error notification to the specified addres of the user.
Is called by process_batch_job() if an error occured during the processing.
@param email_user: email address of the user
@type email_user: string
@param email_admin: email address of the admin
@type email_admin: string
"""
if not email_user:
return
uid = emailUnique(email_user)
if uid != -1 and uid != 0:
language = getval(get_user_preferences(uid), "language")
if language:
ln = language
_ = gettext_set_language(ln)
rec_url = invenio.config.CFG_SITE_URL + "/record/" + str(recid)
template = ("<br/>" +
_("We are sorry, a problem has occured during the processing of"
" your video upload%(submission_title)s.") +
"<br/><br/>" +
_("The file you uploaded was %(input_filename)s.") +
"<br/><br/>" +
_("Your video might not be fully available until intervention.") +
"<br/>" +
_("You can check the status of your video here: %(record_url)s.") +
"<br/>" +
_("You might want to take a look at "
" %(guidelines_url)s"
" and modify or redo your submission."))
text = template % {"input_filename": "%s" % original_filename,
"submission_title": " %s" % submission_title,
"record_url": "%s" % rec_url,
"guidelines_url": "localhost"}
text = text.replace("<br/>", "\n")
html_text = template % {"input_filename": "<strong>%s</strong>" % original_filename,
"submission_title": " <strong>%s</strong>" % submission_title,
"record_url": "<a href=\"%s\">%s</a>" % (rec_url, rec_url),
"guidelines_url": "<a href=\"locahost\">%s</a>" % _("the video guidelines")}
send_email(fromaddr=invenio.config.CFG_SITE_ADMIN_EMAIL,
toaddr=email_user,
subject="Problem during the processing of your video",
content=text,
html_content=html_text
)
def _notify_success_user(email_user, original_filename, recid, submission_title, ln=invenio.config.CFG_SITE_LANG):
"""Sends an success notification to the specified addres of the user.
Is called by process_batch_job() if the processing was successfull.
@param email_user: email address of the user
@type email_user: string
@param email_admin: email address of the admin
@type email_admin: string
"""
uid = emailUnique(email_user)
if uid != -1 and uid != 0:
language = getval(get_user_preferences(uid), "language")
if language:
ln = language
_ = gettext_set_language(ln)
rec_url = invenio.config.CFG_SITE_URL + "/record/" + str(recid)
template = ("<br/>" +
_("Your video submission%(submission_title)s was successfully processed.") +
"<br/><br/>" +
_("The file you uploaded was %(input_filename)s.") +
"<br/><br/>" +
_("Your video is now available here: %(record_url)s.") +
"<br/>" +
_("If the videos quality is not as expected, you might want to take "
"a look at %(guidelines_url)s"
" and modify or redo your submission."))
text = template % {"input_filename": "%s" % original_filename,
"submission_title": " %s" % submission_title,
"record_url": "%s" % rec_url,
"guidelines_url": "localhost"}
text = text.replace("<br/>", "\n")
html_text = template % {"input_filename": "<strong>%s</strong>" % original_filename,
"submission_title": " <strong>%s</strong>" % submission_title,
"record_url": "<a href=\"%s\">%s</a>" % (rec_url, rec_url),
"guidelines_url": "<a href=\"locahost\">%s</a>" % _("the video guidelines")}
send_email(fromaddr=invenio.config.CFG_SITE_ADMIN_EMAIL,
toaddr=email_user,
subject="Your video submission is now complete",
content=text,
html_content=html_text
)
def _task_update_overall_status(message):
""" Generates an overall update message for the BibEncode task.
Stores the messages in a global list for notifications
@param message: the message that should be printed as task status
@type message: string
"""
message = "[%d/%d]%s" % (_BATCH_STEP, _BATCH_STEPS, message)
task_update_progress(message)
global _UPD_HISTORY
_UPD_HISTORY.append(message)
def _task_write_message(message):
""" Stores the messages in a global list for notifications
@param message: the message that should be printed as task status
@type message: string
"""
write_message(message)
global _MSG_HISTORY
_MSG_HISTORY.append(message)
def clean_job_for_quality(batch_job_dict, fallback=True):
"""
Removes jobs from the batch description that are not suitable for the master
video's quality. Applies only to encoding jobs!
@param batch_job_dict: the dict containing the batch description
@type batch_job_dict: dict
@param fallback: whether fallback jobs may be kept if no other job survives
@type fallback: bool
@return: the cleaned dict
@rtype: dict
"""
survived_jobs = []
fallback_jobs = []
other_jobs = []
for job in batch_job_dict['jobs']:
if job['mode'] == 'encode':
if getval(job, 'fallback') and fallback:
fallback_jobs.append(job)
if getval(job, 'enforce'):
survived_jobs.append(job)
else:
profile = None
if getval(job, 'profile'):
profile = get_encoding_profile(job['profile'])
if assure_quality(input_file=batch_job_dict['input'],
aspect=chose2('aspect', job, profile),
target_width=chose2('width', job, profile),
target_height=chose2('height', job, profile),
target_bitrate=chose2('videobitrate', job, profile)):
survived_jobs.append(job)
else:
other_jobs.append(job)
if survived_jobs:
survived_jobs.extend(other_jobs)
new_jobs = survived_jobs
else:
fallback_jobs.extend(other_jobs)
new_jobs = fallback_jobs
pprint(locals())
batch_job_dict['jobs'] = new_jobs
return batch_job_dict
def create_update_jobs_by_collection(
batch_template_file,
collection,
job_directory=CFG_BIBENCODE_DAEMON_DIR_NEWJOBS):
""" Creates the job description files to update a whole collection
@param batch_template_file: fullpath to the template for the update
@type batch_template_file: string
@param collection: name of the collection that should be updated
@type collection: string
@param job_directory: fullpath to the directory storing the job files
@type job_directory: string
"""
recids = get_collection_reclist(collection)
return create_update_jobs_by_recids(recids, batch_template_file,
job_directory)
def create_update_jobs_by_search(pattern,
batch_template_file,
job_directory=CFG_BIBENCODE_DAEMON_DIR_NEWJOBS
):
""" Creates the job description files to update all records that fit a
search pattern. Be aware of the search limitations!
@param pattern: The pattern to search for
@type pattern: string
@param batch_template_file: fullpath to the template for the update
@type batch_template_file: string
@param job_directory: fullpath to the directory storing the job files
@type job_directory: string
"""
recids = search_pattern(p=pattern)
return create_update_jobs_by_recids(recids, batch_template_file,
job_directory)
def create_update_jobs_by_recids(recids,
batch_template_file,
job_directory=CFG_BIBENCODE_DAEMON_DIR_NEWJOBS
):
""" Creates the job description files to update all given recids
@param recids: Iterable set of recids
@type recids: iterable
@param batch_template_file: fullpath to the template for the update
@type batch_template_file: string
@param job_directory: fullpath to the directory storing the job files
@type job_directory: string
"""
batch_template = json_decode_file(batch_template_file)
for recid in recids:
task_update_progress("Creating Update Job for %d" % recid)
write_message("Creating Update Job for %d" % recid)
job = dict(batch_template)
job['recid'] = recid
timestamp = generate_timestamp()
job_filename = "update_%d_%s.job" % (recid, timestamp)
create_job_from_dictionary(job, job_filename, job_directory)
return 1
def create_job_from_dictionary(
job_dict,
job_filename=None,
job_directory=CFG_BIBENCODE_DAEMON_DIR_NEWJOBS
):
""" Creates a job from a given dictionary
@param job_dict: Dictionary that contains the job description
@type job_dict: dict
@param job_filename: Filename for the job
@type job_filename: string
@param job_directory: fullpath to the directory storing the job files
@type job_directory: string
"""
if not job_filename:
job_filename = str(uuid.uuid4())
if not job_filename.endswith(".job"):
job_filename += ".job"
job_fullpath = os.path.join(job_directory, job_filename)
job_string = json.dumps(job_dict, sort_keys=False, indent=4)
job_file = open(job_fullpath, "w")
job_file.write(job_string)
job_file.close()
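create_job_from_dictionary amounts to serialising the dictionary as pretty-printed JSON under a *.job filename. A self-contained sketch of the same mechanics (write_job is an illustrative name, not Invenio API, and the tmp directory stands in for CFG_BIBENCODE_DAEMON_DIR_NEWJOBS):

```python
import json
import os
import tempfile
import uuid

def write_job(job_dict, job_directory, job_filename=None):
    # Default to a UUID-based name and enforce the .job extension,
    # mirroring create_job_from_dictionary.
    if not job_filename:
        job_filename = str(uuid.uuid4())
    if not job_filename.endswith(".job"):
        job_filename += ".job"
    job_fullpath = os.path.join(job_directory, job_filename)
    with open(job_fullpath, "w") as job_file:
        job_file.write(json.dumps(job_dict, sort_keys=False, indent=4))
    return job_fullpath

job_dir = tempfile.mkdtemp()
path = write_job({"recid": 42, "jobs": []}, job_dir, "update_42")
```

The daemon then only has to poll the job directory for new *.job files and decode them with json.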
def sanitise_batch_job(batch_job):
""" Checks the correctness of the batch job dictionary and additionally
sanitises some values.
@param batch_job: The batch description dictionary
@type batch_job: dictionary
"""
def san_bitrate(bitrate):
    """ Sanitises bitrates: converts strings like '1500k' to 1500000
    """
    if isinstance(bitrate, basestring):
        if bitrate.endswith('k'):
            try:
                return int(bitrate[:-1]) * 1000
            except ValueError:
                raise Exception("Could not parse bitrate")
        raise Exception("Could not parse bitrate")
    elif isinstance(bitrate, int):
        return bitrate
    else:
        raise Exception("Could not parse bitrate")
if not getval(batch_job, 'update_from_master'):
if not getval(batch_job, 'input'):
raise Exception("No input file in batch description")
if not getval(batch_job, 'recid'):
raise Exception("No recid in batch description")
if not getval(batch_job, 'jobs'):
raise Exception("No job list in batch description")
if getval(batch_job, 'update_from_master'):
if (not getval(batch_job, 'bibdoc_master_comment') and
not getval(batch_job, 'bibdoc_master_description') and
not getval(batch_job, 'bibdoc_master_subformat')):
raise Exception("If update_from_master ist set, a comment or"
" description or subformat for matching must be given")
if getval(batch_job, 'marc_snippet'):
if not os.path.exists(getval(batch_job, 'marc_snippet')):
raise Exception("The marc snipped file %s was not found" %
getval(batch_job, 'marc_snippet'))
for job in batch_job['jobs']:
if job['mode'] == 'encode':
if getval(job, 'videobitrate'):
job['videobitrate'] = san_bitrate(getval(job, 'videobitrate'))
if getval(job, 'audiobitrate'):
job['audiobitrate'] = san_bitrate(getval(job, 'audiobitrate'))
return batch_job
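The contract of the nested san_bitrate helper (integers pass through; strings must carry a 'k' suffix and are scaled to bits per second) can be pinned down with a standalone version. A sketch only; the in-tree helper raises a plain Exception rather than ValueError:

```python
def san_bitrate(bitrate):
    """Normalise a bitrate to an int in bits per second.

    Integers pass through unchanged; strings must end in 'k'
    (kilobits) and are scaled by 1000.
    """
    if isinstance(bitrate, str):
        if bitrate.endswith('k'):
            try:
                return int(bitrate[:-1]) * 1000
            except ValueError:
                raise ValueError("Could not parse bitrate: %r" % (bitrate,))
        raise ValueError("Could not parse bitrate: %r" % (bitrate,))
    if isinstance(bitrate, int):
        return bitrate
    raise ValueError("Could not parse bitrate: %r" % (bitrate,))
```

For example, '1500k' becomes 1500000, while a bare integer such as 96000 is returned as-is.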
def process_batch_job(batch_job_file):
""" Processes a batch job description dictionary
@param batch_job_file: a fullpath to a batch job file
@type batch_job_file: string
@return: 1 if the process was successful, 0 if not
@rtype: int
"""
def upload_marcxml_file(marcxml):
""" Creates a temporary marcxml file and sends it to bibupload
"""
xml_filename = 'bibencode_'+ str(batch_job['recid']) + '_' + str(uuid.uuid4()) + '.xml'
xml_filename = os.path.join(invenio.config.CFG_TMPSHAREDDIR, xml_filename)
xml_file = file(xml_filename, 'w')
xml_file.write(marcxml)
xml_file.close()
targs = ['-c', xml_filename]
task_low_level_submission('bibupload', 'bibencode', *targs)
#---------#
# GENERAL #
#---------#
_task_write_message("----------- Handling Master -----------")
## Check the validity of the batch file here
batch_job = json_decode_file(batch_job_file)
## Sanitise batch description and raise errors
batch_job = sanitise_batch_job(batch_job)
## Check if the record exists
if record_exists(batch_job['recid']) < 1:
raise Exception("Record not found")
recdoc = BibRecDocs(batch_job['recid'])
#--------------------#
# UPDATE FROM MASTER #
#--------------------#
## We want to add new stuff to the video's record, using the master as input
if getval(batch_job, 'update_from_master'):
found_master = False
bibdocs = recdoc.list_bibdocs()
for bibdoc in bibdocs:
bibdocfiles = bibdoc.list_all_files()
for bibdocfile in bibdocfiles:
comment = bibdocfile.get_comment()
description = bibdocfile.get_description()
subformat = bibdocfile.get_subformat()
m_comment = getval(batch_job, 'bibdoc_master_comment', comment)
m_description = getval(batch_job, 'bibdoc_master_description', description)
m_subformat = getval(batch_job, 'bibdoc_master_subformat', subformat)
if (comment == m_comment and
description == m_description and
subformat == m_subformat):
found_master = True
batch_job['input'] = bibdocfile.get_full_path()
## Get the aspect ratio from the record
try:
## Assumes pbcore metadata mapping
batch_job['aspect'] = get_fieldvalues(batch_job['recid'], CFG_BIBENCODE_ASPECT_RATIO_MARC_FIELD)[0]
except IndexError:
pass
break
if found_master:
break
if not found_master:
_task_write_message("Video master for record %d not found"
% batch_job['recid'])
task_update_progress("Video master for record %d not found"
% batch_job['recid'])
## Maybe send an email?
return 1
## Clean the job to do no upscaling etc
if getval(batch_job, 'assure_quality'):
batch_job = clean_job_for_quality(batch_job)
global _BATCH_STEPS
_BATCH_STEPS = len(batch_job['jobs'])
## Generate the docname from the input filename's name or given name
bibdoc_video_docname, bibdoc_video_extension = decompose_file(batch_job['input'])[1:]
if not bibdoc_video_extension or getval(batch_job, 'bibdoc_master_extension'):
bibdoc_video_extension = getval(batch_job, 'bibdoc_master_extension')
if getval(batch_job, 'bibdoc_master_docname'):
bibdoc_video_docname = getval(batch_job, 'bibdoc_master_docname')
write_message("Creating BibDoc for %s" % bibdoc_video_docname)
## If the bibdoc exists, receive it
if bibdoc_video_docname in recdoc.get_bibdoc_names():
bibdoc_video = recdoc.get_bibdoc(bibdoc_video_docname)
## Create a new bibdoc if it does not exist
else:
bibdoc_video = recdoc.add_bibdoc(docname=bibdoc_video_docname)
## Get the directory of the newly created bibdoc to copy stuff there
bibdoc_video_directory = bibdoc_video.get_base_dir()
#--------#
# MASTER #
#--------#
if not getval(batch_job, 'update_from_master'):
if getval(batch_job, 'add_master'):
## Generate the right name for the master
## The master should be hidden first and then renamed
## when it is really available
## !!! FIX !!!
_task_write_message("Adding %s master to the BibDoc"
% bibdoc_video_docname)
master_format = compose_format(
bibdoc_video_extension,
getval(batch_job, 'bibdoc_master_subformat', 'master')
)
## If a file of the same format is there, something is wrong, remove it!
## it might be caused by a previous corrupted submission etc.
if bibdoc_video.format_already_exists_p(master_format):
bibdoc_video.delete_file(master_format, 1)
bibdoc_video.add_file_new_format(
batch_job['input'],
version=1,
description=getval(batch_job, 'bibdoc_master_description'),
comment=getval(batch_job, 'bibdoc_master_comment'),
docformat=master_format
)
#-----------#
# JOBS LOOP #
#-----------#
return_code = 1
global _BATCH_STEP
for job in batch_job['jobs']:
_task_write_message("----------- Job %s of %s -----------"
% (_BATCH_STEP, _BATCH_STEPS))
## Try to substitute docname with master docname
if getval(job, 'bibdoc_docname'):
job['bibdoc_docname'] = Template(job['bibdoc_docname']).safe_substitute({'bibdoc_master_docname': bibdoc_video_docname})
#-------------#
# TRANSCODING #
#-------------#
if job['mode'] == 'encode':
## Skip the job if assure_quality is not set and marked as fallback
if not getval(batch_job, 'assure_quality') and getval(job, 'fallback'):
continue
if getval(job, 'profile'):
profile = get_encoding_profile(job['profile'])
else:
profile = None
## We need an extension defined for the video container
bibdoc_video_extension = getval(job, 'extension',
getval(profile, 'extension'))
if not bibdoc_video_extension:
raise Exception("No container/extension defined")
## Get the docname and subformat
bibdoc_video_subformat = getval(job, 'bibdoc_subformat')
bibdoc_slave_video_docname = getval(job, 'bibdoc_docname', bibdoc_video_docname)
## The subformat is incompatible with ffmpeg's naming convention
## We do the encoding without it and rename the file afterwards
bibdoc_video_fullpath = compose_file(
bibdoc_video_directory,
bibdoc_slave_video_docname,
bibdoc_video_extension
)
_task_write_message("Transcoding %s to %s;%s" % (bibdoc_slave_video_docname,
bibdoc_video_extension,
bibdoc_video_subformat))
## We encode now directly into the bibdocs directory
encoding_result = encode_video(
input_file=batch_job['input'],
output_file=bibdoc_video_fullpath,
acodec=getval(job, 'audiocodec'),
vcodec=getval(job, 'videocodec'),
abitrate=getval(job, 'audiobitrate'),
vbitrate=getval(job, 'videobitrate'),
resolution=getval(job, 'resolution'),
passes=getval(job, 'passes', 1),
special=getval(job, 'special'),
specialfirst=getval(job, 'specialfirst'),
specialsecond=getval(job, 'specialsecond'),
metadata=getval(job, 'metadata'),
width=getval(job, 'width'),
height=getval(job, 'height'),
aspect=getval(batch_job, 'aspect'), # Aspect for every job
profile=getval(job, 'profile'),
update_fnc=_task_update_overall_status,
message_fnc=_task_write_message
)
return_code &= encoding_result
## only on success
if encoding_result:
## Rename it, adding the subformat
os.rename(bibdoc_video_fullpath,
compose_file(bibdoc_video_directory,
- bibdoc_slave_video_docname,
bibdoc_video_extension,
bibdoc_video_subformat,
- 1)
+ 1,
+ bibdoc_slave_video_docname)
)
bibdoc_video._build_file_list()
bibdoc_video_format = compose_format(bibdoc_video_extension,
bibdoc_video_subformat)
if getval(job, 'bibdoc_comment'):
bibdoc_video.set_comment(getval(job, 'bibdoc_comment'),
bibdoc_video_format)
if getval(job, 'bibdoc_description'):
bibdoc_video.set_description(getval(job, 'bibdoc_description'),
bibdoc_video_format)
#------------#
# EXTRACTION #
#------------#
# if there are multiple extraction jobs, all the produced files
# with the same name will be in the same bibdoc! Make sure that
# you use different subformats or docname templates to avoid
# conflicts.
if job['mode'] == 'extract':
if getval(job, 'profile'):
profile = get_extract_profile(job['profile'])
else:
profile = {}
bibdoc_frame_subformat = getval(job, 'bibdoc_subformat')
_task_write_message("Extracting frames to temporary directory")
tmpdir = invenio.config.CFG_TMPDIR + "/" + str(uuid.uuid4())
os.mkdir(tmpdir)
#Move this to the batch description
bibdoc_frame_docname = getval(job, 'bibdoc_docname', bibdoc_video_docname)
tmpfname = (tmpdir + "/" + bibdoc_frame_docname + '.'
+ getval(profile, 'extension',
getval(job, 'extension', 'jpg')))
extraction_result = extract_frames(input_file=batch_job['input'],
output_file=tmpfname,
size=getval(job, 'size'),
positions=getval(job, 'positions'),
numberof=getval(job, 'numberof'),
width=getval(job, 'width'),
height=getval(job, 'height'),
aspect=getval(batch_job, 'aspect'),
profile=getval(job, 'profile'),
update_fnc=_task_update_overall_status,
)
return_code &= extraction_result
## only on success:
if extraction_result:
## for every filename in the directory, create a bibdoc that contains
## all sizes of the extracted frame
files = os.listdir(tmpdir)
for filename in files:
## The docname was altered by BibEncode extract through substitution
## Retrieve it from the filename again
bibdoc_frame_docname, bibdoc_frame_extension = os.path.splitext(filename)
_task_write_message("Creating new bibdoc for %s" % bibdoc_frame_docname)
## If the bibdoc exists, receive it
if bibdoc_frame_docname in recdoc.get_bibdoc_names():
bibdoc_frame = recdoc.get_bibdoc(bibdoc_frame_docname)
## Create a new bibdoc if it does not exist
else:
bibdoc_frame = recdoc.add_bibdoc(docname=bibdoc_frame_docname)
## The filename including path from tmpdir
fname = os.path.join(tmpdir, filename)
bibdoc_frame_format = compose_format(bibdoc_frame_extension, bibdoc_frame_subformat)
## Same as with the master: if the format already exists,
## override it, because something went wrong before
if bibdoc_frame.format_already_exists_p(bibdoc_frame_format):
bibdoc_frame.delete_file(bibdoc_frame_format, 1)
_task_write_message("Adding %s jpg;%s to BibDoc"
% (bibdoc_frame_docname,
getval(job, 'bibdoc_subformat')))
bibdoc_frame.add_file_new_format(
fname,
version=1,
description=getval(job, 'bibdoc_description'),
comment=getval(job, 'bibdoc_comment'),
docformat=bibdoc_frame_format)
## Remove the temporary folders
_task_write_message("Removing temporary directory")
shutil.rmtree(tmpdir)
_BATCH_STEP = _BATCH_STEP + 1
#-----------------#
# FIX BIBDOC/MARC #
#-----------------#
_task_write_message("----------- Handling MARCXML -----------")
## Fix the BibDoc for all the videos previously created
_task_write_message("Updating BibDoc of %s" % bibdoc_video_docname)
bibdoc_video._build_file_list()
## Fix the MARC
_task_write_message("Fixing MARC")
cli_fix_marc({}, [batch_job['recid']], False)
if getval(batch_job, 'collection'):
## Make the record visible by moving in from the collection
marcxml = ("<record><controlfield tag=\"001\">%d</controlfield>"
"<datafield tag=\"980\" ind1=\" \" ind2=\" \">"
"<subfield code=\"a\">%s</subfield></datafield></record>"
) % (batch_job['recid'], batch_job['collection'])
upload_marcxml_file(marcxml)
#---------------------#
# ADD MASTER METADATA #
#---------------------#
if getval(batch_job, 'add_master_metadata'):
_task_write_message("Adding master metadata")
pbcore = pbcore_metadata(input_file = getval(batch_job, 'input'),
pbcoreIdentifier = batch_job['recid'],
aspect_override = getval(batch_job, 'aspect'))
marcxml = format(pbcore, CFG_BIBENCODE_PBCORE_MARC_XSLT)
upload_marcxml_file(marcxml)
#------------------#
# ADD MARC SNIPPET #
#------------------#
if getval(batch_job, 'marc_snippet'):
marc_snippet = open(getval(batch_job, 'marc_snippet'))
marcxml = marc_snippet.read()
marc_snippet.close()
upload_marcxml_file(marcxml)
#--------------#
# DELETE INPUT #
#--------------#
if getval(batch_job, 'delete_input'):
_task_write_message("Deleting input file")
# only if successful
if return_code:
# only if input matches pattern
if getval(batch_job, 'delete_input_pattern', '') in getval(batch_job, 'input'):
try:
os.remove(getval(batch_job, 'input'))
except OSError:
pass
#--------------#
# NOTIFICATION #
#--------------#
## Send Notification emails on errors
if not return_code:
if getval(batch_job, 'notify_user'):
_notify_error_user(getval(batch_job, 'notify_user'),
getval(batch_job, 'submission_filename', batch_job['input']),
getval(batch_job, 'recid'),
getval(batch_job, 'submission_title', ""))
_task_write_message("Notify user because of an error")
if getval(batch_job, 'notify_admin'):
_task_write_message("Notify admin because of an error")
if type(getval(batch_job, 'notify_admin')) == type(str()):
_notify_error_admin(batch_job,
getval(batch_job, 'notify_admin'))
else:
_notify_error_admin(batch_job)
else:
if getval(batch_job, 'notify_user'):
_task_write_message("Notify user because of success")
_notify_success_user(getval(batch_job, 'notify_user'),
getval(batch_job, 'submission_filename', batch_job['input']),
getval(batch_job, 'recid'),
getval(batch_job, 'submission_title', ""))
return 1
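For reference, a batch description that satisfies the checks in sanitise_batch_job would look roughly like the following; the recid, paths, addresses and profile names are all illustrative:

```python
import json

# One encoding job plus one frame-extraction job, using the keys that
# sanitise_batch_job() and process_batch_job() read.
batch_description = json.loads("""
{
    "input": "/tmp/master_video.mp4",
    "recid": 42,
    "assure_quality": true,
    "notify_user": "uploader@example.org",
    "jobs": [
        {"mode": "encode", "profile": "mp4_hd", "videobitrate": "1500k"},
        {"mode": "extract", "numberof": 5, "bibdoc_subformat": "frame"}
    ]
}
""")
```

Such a description, dropped as a *.job file into the daemon's new-jobs directory, carries everything process_batch_job needs: the master file, the target record, and the per-job encoding or extraction parameters.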
diff --git a/modules/bibencode/lib/bibencode_config.py b/modules/bibencode/lib/bibencode_config.py
index 1ab6c4cca..670b7aefc 100644
--- a/modules/bibencode/lib/bibencode_config.py
+++ b/modules/bibencode/lib/bibencode_config.py
@@ -1,234 +1,235 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Bibencode configuration submodule"""
__revision__ = "$Id$"
import invenio.config
import re
#-----------------------#
# General Configuration #
#-----------------------#
## The command for probing with FFMPEG
CFG_BIBENCODE_FFMPEG_PROBE_COMMAND = invenio.config.CFG_PATH_FFPROBE + " %s -loglevel verbose -show_format -show_streams"
## The command for probing with MEDIAINFO
CFG_BIBENCODE_MEDIAINFO_COMMAND = invenio.config.CFG_PATH_MEDIAINFO + " %s -f --Output=XML"
## Image extraction base command
CFG_BIBENCODE_FFMPEG_EXTRACT_COMMAND = invenio.config.CFG_PATH_FFMPEG + " -ss %.2f -i %s -r 1 -vframes 1 -f image2 -s %s %s"
## Commands for multipass encoding
## In the first pass, you can dump the output to /dev/null and ignore audio
## CFG_BIBENCODE_FFMPEG_COMMAND_PASS_1 = "ffmpeg -i %s -y -loglevel verbose -vcodec %s -pass 1 -passlogfile %s -an -f rawvideo -b %s -s %s %s /dev/null"
## CFG_BIBENCODE_FFMPEG_COMMAND_PASS_2 = "ffmpeg -i %s -y -loglevel verbose -vcodec %s -pass 2 -passlogfile %s -acodec %s -b %s -ab %s -s %s %s %s"
CFG_BIBENCODE_FFMPEG_PASSLOGFILE_PREFIX = invenio.config.CFG_LOGDIR + "/bibencode2pass-%s-%s"
## Path to the encoding logfiles
## Filenames will later be substituted with process specific information
CFG_BIBENCODE_FFMPEG_ENCODING_LOG = invenio.config.CFG_LOGDIR + "/bibencode_%s.log"
## Path to probing logfiles
CFG_BIBENCODE_FFMPEG_PROBE_LOG = invenio.config.CFG_LOGDIR + "/bibencode_probe_%s.log"
## The pattern for the encoding status specific string in the FFmpeg output
CFG_BIBENCODE_FFMPEG_ENCODE_TIME = re.compile("^.+time=(\d\d:\d\d:\d\d.\d\d).+$")
## The pattern for the configuration string with information about compiling options
CFD_BIBENCODE_FFMPEG_OUT_RE_CONFIGURATION = re.compile("(--enable-[a-z0-9\-]*)")
## The minimum ffmpeg compile options for BibEncode to work correctly
CFG_BIBENCODE_FFMPEG_CONFIGURATION_REQUIRED = (
'--enable-gpl',
'--enable-version3',
'--enable-nonfree',
'--enable-libfaac',
+ ## '--enable-libfdk-aac',
'--enable-libtheora',
'--enable-libvorbis',
'--enable-libvpx',
'--enable-libx264',
## '--enable-funky'
)
## Path to the directory for transcoded files
CFG_BIBENCODE_TARGET_DIRECTORY = invenio.config.CFG_TMPDIR + "/"
#------------------------#
# Metadata Configuration #
#------------------------#
## Template for key-value pairs that can be used with FFMPEG to set metadata.
## Not all keys are represented in every video container format.
## FFMPEG will try to write any given key-value pair. If the container
## format does not support some pairs, there won't be an error.
## You might like to verify that the attributes were really written
## by using FFPROBE.
## The FFMPEG argument structure is:
## -metadata key1="value1" -metadata key2="value2 ...
CFG_BIBENCODE_FFMPEG_METADATA_TEMPLATE = {
'title': None,
'author': None,
'album_artist': None,
'album': None,
'grouping': None,
'composer': None,
'year': None,
'track': None,
'comment': None,
'genre': None,
'copyright': None,
'description': None,
'synopsis': None,
'show': None,
'episode_id': None,
"network": None,
'lyrics': None
}
# Duration: 00:02:28.58, start: 0.000000, bitrate: 9439 kb/s
# timecode start? bitrate
CFG_BIBENCODE_FFMPEG_RE_VIDEOINFO_DURATION = re.compile("^\s*Duration: (.*?), start: (\d+\.\d+), bitrate: (\d+?) kb\/s$")
# Stream #0.0(eng): Video: h264 (Main), yuv420p, 1920x1056, 9338 kb/s, 23.98 fps, 23.98 tbr, 2997 tbn, 5994 tbc
# Stream #0.1(eng): Video: wmv3, yuv420p, 1440x1080, 9500 kb/s, 25 tbr, 1k tbn, 1k tbc
# number language codec color resolution bitrate fps tbr tbn tbc
CFG_BIBENCODE_FFMPEG_RE_VIDEOINFO_VSTREAM = re.compile("^\s*Stream #(\d+.\d+)\(?(\w+)?\)?: Video: ([a-zA-Z0-9\(\) ]*), (\w+), (\d+x\d+), (\d+) kb\/s, (.+) fps, (.+) tbr, (.+) tbn, (.+) tbc$")
# Stream #0.0(eng): Audio: wmav2, 44100 Hz, 2 channels, s16, 320 kb/s
# Stream #0.1(eng): Audio: aac, 44100 Hz, stereo, s16, 97 kb/s
# number language codec samplerate channels bit-depth bitrate
CFG_BIBENCODE_FFMPEG_RE_VIDEOINFO_ASTREAM = re.compile("^\s*Stream #(\d+.\d+)\(?(\w+)?\)?: Audio: ([a-zA-Z0-9\(\) ]*), (\d+) Hz, ([a-zA-Z0-9 ]+), (\w+), (\d+) kb\/s$")
## FFMPEG command for setting metadata
## This will create a copy of the master and write the metadata there
CFG_BIBENCODE_FFMPEG_METADATA_SET_COMMAND = "ffmpeg -y -i %s -acodec copy -vcodec copy %s"
## FFMPEG metadata argument template
## had to remove '-metadata ' in front because of issues with command splitting
CFG_BIBENCODE_FFMPEG_METADATA_ARGUMENT = "%s=\"%s\""
## File containing mappings from ffprobe and mediainfo to pbcore
CFG_BIBENCODE_PBCORE_MAPPINGS = invenio.config.CFG_ETCDIR + "/bibencode/pbcore_mappings.json"
## XSLT Template from PBCORE to MARCXML
CFG_BIBENCODE_PBCORE_MARC_XSLT = invenio.config.CFG_ETCDIR + "/bibencode/pbcore_to_marc_nons.xsl"
CFG_BIBENCODE_ASPECT_RATIO_MARC_FIELD = "951__x"
## Metadata Patterns for parsing
def create_metadata_re_dict():
""" Creates a dictionary with Regex patterns from the metadata template dictionary
"""
metadata_re_dictionary = {}
for key, value in CFG_BIBENCODE_FFMPEG_METADATA_TEMPLATE.iteritems():
metadata_re_dictionary[key] = re.compile("^\s*%s\s*:\s(((\S*)\s*(\S*))*)$" % key)
return metadata_re_dictionary
CFG_BIBENCODE_FFMPEG_METADATA_RE_DICT = create_metadata_re_dict()
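To illustrate what create_metadata_re_dict() produces, here is the per-key pattern built for the 'title' key, applied to an assumed "key : value" line of probe output:

```python
import re

# The per-key expression create_metadata_re_dict() builds, here for 'title'.
title_re = re.compile(r"^\s*%s\s*:\s(((\S*)\s*(\S*))*)$" % "title")

# Assumed sample line in the "key : value" shape the parser expects.
match = title_re.match("  title : My Video")
value = match.group(1) if match else None
print(value)  # My Video
```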
#----------------------#
# Parameter Validation #
#----------------------#
CFG_BIBENCODE_VALID_MODES = ['encode', 'extract', 'meta', 'batch', 'daemon', 'cdsmedia']
CFG_BIBENCODE_FFMPEG_VALID_SIZES = [
'sqcif', 'qcif', 'cif', '4cif', '16cif', 'qqvga', 'qvga', 'vga', 'svga',
'xga', 'uxga', 'qxga', 'sxga', 'qsxga', 'hsxga', 'wvga', 'wxga', 'wsxga',
'wuxga', 'woxga', 'wqsxga', 'wquxga', 'whsxga', 'cga', 'ega',
'hd480', 'hd720', 'hd1080'
]
CFG_BIBENCODE_RESOLUTIONS = {
"ntsc": "720x480",
"pal": "720x576",
"qntsc": "352x240",
"qpal": "352x288",
"sntsc": "640x480",
"spal": "768x576",
"film": "352x240",
"ntsc-film": "352x240",
"sqcif": "128x96",
"qcif": "176x144",
"cif": "352x288",
"4cif": "704x576",
"16cif": "1408x1152",
"qqvga": "160x120",
"qvga": "320x240",
"vga": "640x480",
"svga": "800x600",
"xga": "1024x768",
"uxga": "1600x1200",
"qxga": "2048x1536",
"sxga": "1280x1024",
"qsxga": "2560x2048",
"hsxga": "5120x4096",
"wvga": "852x480",
"wxga": "1366x768",
"wsxga": "1600x1024",
"wuxga": "1920x1200",
"woxga": "2560x1600",
"wqsxga": "3200x2048",
"wquxga": "3840x2400",
"whsxga": "6400x4096",
"whuxga": "7680x4800",
"cga": "320x200",
"ega": "640x350",
"hd480": "852x480",
"hd720": "1280x720",
"hd1080": "1920x1080"
}
CFG_BIBENCODE_FFMPEG_RE_VALID_SIZE = re.compile("^\d+x\d+$")
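A small sketch of how the preset table and the size regex above can be combined to normalize a user-supplied size; normalize_size is a hypothetical helper, not part of BibEncode:

```python
import re

# Excerpt of CFG_BIBENCODE_RESOLUTIONS and the size expression above.
RESOLUTIONS = {"vga": "640x480", "hd720": "1280x720", "hd1080": "1920x1080"}
VALID_SIZE_RE = re.compile(r"^\d+x\d+$")

def normalize_size(size):
    """Return an explicit WxH string for a preset name or a literal size."""
    if size in RESOLUTIONS:
        return RESOLUTIONS[size]
    if VALID_SIZE_RE.match(size):
        return size
    raise ValueError("not a valid FFmpeg size: %r" % size)

print(normalize_size("hd720"))    # 1280x720
print(normalize_size("854x480"))  # 854x480
```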
CFG_BIBENCODE_FFMPEG_VALID_VCODECS = [
'libx264', 'libvpx', 'libtheora', 'mpeg4', 'wmv2', 'wmv1', 'flv'
]
CFG_BIBENCODE_FFMPEG_VALID_ACODECS = [
'libmp3lame', 'libvorbis', 'wma1', 'wma2', 'libfaac'
]
#------------------------#
# Profiles Configuration #
#------------------------#
CFG_BIBENCODE_PROFILES_ENCODING = invenio.config.CFG_ETCDIR + "/bibencode/encoding_profiles.json"
CFG_BIBENCODE_PROFILES_ENCODING_LOCAL = invenio.config.CFG_ETCDIR + "/bibencode/encoding_profiles_local.json"
CFG_BIBENCODE_PROFILES_EXTRACT = invenio.config.CFG_ETCDIR + "/bibencode/extract_profiles.json"
CFG_BIBENCODE_PROFILES_EXTRACT_LOCAL = invenio.config.CFG_ETCDIR + "/bibencode/extract_profiles_local.json"
CFG_BIBENCODE_TEMPLATE_BATCH_SUBMISSION = invenio.config.CFG_ETCDIR + "/bibencode/batch_template_submission.json"
#----------------------#
# Daemon Configuration #
#----------------------#
CFG_BIBENCODE_DAEMON_DIR_NEWJOBS = invenio.config.CFG_TMPSHAREDDIR + '/bibencode/jobs'
CFG_BIBENCODE_DAEMON_DIR_OLDJOBS = invenio.config.CFG_TMPSHAREDDIR + '/bibencode/jobs/done'
#-------------------#
# WebSubmit Support #
#-------------------#
CFG_BIBENCODE_WEBSUBMIT_ASPECT_SAMPLE_FNAME = 'aspect_sample_.jpg'
CFG_BIBENCODE_WEBSUBMIT_ASPECT_SAMPLE_DIR = 'aspect_samples'
diff --git a/modules/bibencode/lib/bibencode_encode.py b/modules/bibencode/lib/bibencode_encode.py
index 14145f942..f4a2b32ab 100644
--- a/modules/bibencode/lib/bibencode_encode.py
+++ b/modules/bibencode/lib/bibencode_encode.py
@@ -1,614 +1,614 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibEncode encoding submodule"""
from invenio.bibtask import (
write_message,
task_update_progress,
)
from invenio.bibencode_config import (
CFG_BIBENCODE_FFMPEG_ENCODING_LOG,
CFG_BIBENCODE_FFMPEG_PASSLOGFILE_PREFIX,
CFG_BIBENCODE_FFMPEG_METADATA_ARGUMENT,
CFG_BIBENCODE_FFMPEG_ENCODE_TIME
)
from invenio.bibencode_utils import (
timecode_to_seconds,
generate_timestamp,
chose,
getval,
aspect_string_to_float
)
from invenio.bibencode_profiles import get_encoding_profile
from invenio.bibencode_metadata import (
ffprobe_metadata,
mediainfo_metadata
)
from invenio.config import CFG_PATH_FFMPEG
import time
import os
import subprocess
import uuid
def _filename_log(output_filename, nofpass=1):
""" Constructs the filename including path for the encoding err file
@param output_filename: name of the video file to be created
@type output_filename: string
@param nofpass: number of encoding passes
@type nofpass: int
@return: the constructed log filename
@rtype: string
"""
fname = os.path.split(output_filename)[1]
fname = os.path.splitext(fname)[0]
return CFG_BIBENCODE_FFMPEG_ENCODING_LOG % (generate_timestamp() +
"_" + fname + "_%d" % nofpass)
def determine_aspect(input_file):
""" Checks video metadata to find the display aspect ratio.
Returns None if the DAR is not stored in the video container.
@param input_file: full path of the video
@type input_file: string
"""
videoinfo = ffprobe_metadata(input_file)
if not videoinfo:
return None
for stream in videoinfo['streams']:
if stream['codec_type'] == 'video':
fwidth = int(stream['width'])
fheight = int(stream['height'])
if 'display_aspect_ratio' in stream:
return (stream['display_aspect_ratio'], fwidth, fheight)
return (None, fwidth, fheight)
def determine_resolution_preserving_aspect(input_file, width=None,
height=None, aspect=None):
""" Determines the right resolution for a given width or height while
preserving the aspect ratio.
@param input_file: full path of the video
@type input_file: string
@param width: The proposed width for the new size.
@type width: int
@param height: The proposed height for the new size
@type height: int
@param aspect: Override aspect ratio determined from the input file
@type aspect: float or "4:3" like string
@return: An FFMPEG compatible size string '640x480'
@rtype: string
"""
def _make_even(number):
""" Resolutions need to be even numbers for some video encoders.
We simply increase the resolution by one pixel if it is not even.
"""
if number % 2 != 0:
return number+1
else:
return number
if aspect:
if type(aspect) == type(str()):
aspect_ratio = aspect_string_to_float(aspect)
elif type(aspect) == type(float()):
aspect_ratio = aspect
else:
raise ValueError
else:
aspect_ratio_tuple = determine_aspect(input_file)
if aspect_ratio_tuple[0] is None:
aspect_ratio = float(aspect_ratio_tuple[1]) / float(aspect_ratio_tuple[2])
else:
aspect_ratio = aspect_string_to_float(aspect_ratio_tuple[0])
nresolution = None
if width and not height:
## The resolution has to fit exactly the width
nheight = int(width / aspect_ratio)
nheight = _make_even(nheight)
nresolution = "%dx%d" % (width, nheight)
elif height and not width:
## The resolution has to fit exactly the height
nwidth = int(height * aspect_ratio)
nwidth = _make_even(nwidth)
nresolution = "%dx%d" % (nwidth, height)
elif width and height:
## The resolution has to be within both parameters, seen as a maximum
nwidth = width
nheight = height
new_aspect_ratio = float(width) / float(height)
if aspect_ratio > new_aspect_ratio:
nheight = int(width / aspect_ratio)
else:
nwidth = int(height * aspect_ratio)
nheight = _make_even(nheight)
nwidth = _make_even(nwidth)
nresolution = "%dx%d" % (nwidth, nheight)
else:
## Return the original size in square pixels
## original height * aspect_ratio
nwidth = aspect_ratio_tuple[2] * aspect_ratio
nwidth = _make_even(nwidth)
nresolution = "%dx%d" % (nwidth, aspect_ratio_tuple[2])
return nresolution
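The width-only branch above can be sketched in isolation; fit_width is a hypothetical stand-alone helper mirroring the height calculation and the even-number correction:

```python
def fit_width(width, aspect_ratio):
    """Compute an even height for a target width, as in the width-only
    branch of determine_resolution_preserving_aspect()."""
    height = int(width / aspect_ratio)
    if height % 2 != 0:  # some encoders require even frame dimensions
        height += 1
    return "%dx%d" % (width, height)

print(fit_width(640, 16.0 / 9.0))  # 640x360
print(fit_width(854, 16.0 / 9.0))  # 854x480
```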
def assure_quality(input_file, aspect=None, target_width=None,
target_height=None, target_bitrate=None,
accept_anamorphic=True, tolerance=0.95):
"""
Checks if the original video material would support the target resolution
and/or bitrate.
@param input_file: full path of the video to check
@type input_file: string
@param aspect: the aspect ratio as override
@type aspect: float
@param target_width: width of the new video
@type target_width: int
@param target_height: height of the new video
@type target_height: int
@param target_bitrate: bitrate of the new video in bit/s.
@type target_bitrate: int
@param accept_anamorphic: calculate the real width for anamorphic videos
@type accept_anamorphic: bool
@param tolerance: tolerance factor applied to the target values
@type tolerance: float
@return: True if the video supports the quality, False if not
@rtype: bool
"""
if target_bitrate:
target_bitrate = int(target_bitrate * tolerance)
if target_height:
target_height = int(target_height * tolerance)
if target_width:
target_width = int(target_width * tolerance)
## First get the size and aspect using ffprobe
## ffprobe is more reliable in this case than mediainfo
aspect_ratio_tuple = determine_aspect(input_file)
fwidth = aspect_ratio_tuple[1]
fheight = aspect_ratio_tuple[2]
if not aspect:
aspect = aspect_ratio_tuple[0]
## Get the bitrate with mediainfo now, because it is more reliable
## than ffprobe in this case
fbitrate = None
videoinfo = mediainfo_metadata(input_file)
for track in videoinfo:
if track['kind_of_stream'] == 'Video':
fbitrate = getval(track, 'bit_rate')
break
if fbitrate:
fbitrate = int(fbitrate)
# This adapts anamorphic videos.
# If it is stored anamorphic, calculate the real width from the height
# we can use our determine_resolution function for this
if accept_anamorphic and aspect:
fwidth = determine_resolution_preserving_aspect(
input_file=input_file,
width=None,
height=fheight,
aspect=aspect).split('x')[0]
fwidth = int(fwidth)
if target_height and target_width:
if target_width > fwidth or target_height > fheight:
return False
elif target_height:
if target_height > fheight:
return False
elif target_width:
if target_width > fwidth:
return False
if target_bitrate:
## If the video bitrate is unreadable, assume it is ok and our library
## has problems reading it out
if fbitrate and target_bitrate > fbitrate:
return False
return True
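The upscaling guard in assure_quality() shrinks the target by the tolerance factor before comparing it to the source; a minimal sketch of the height check (supports_quality is a hypothetical name):

```python
def supports_quality(source_height, target_height, tolerance=0.95):
    """Mirror of the height check in assure_quality(): shrink the target
    by the tolerance factor, then refuse any remaining upscaling."""
    return int(target_height * tolerance) <= source_height

print(supports_quality(720, 720))  # True: 684 <= 720
print(supports_quality(480, 720))  # False: 684 > 480
```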
def encode_video(input_file, output_file,
acodec=None, vcodec=None,
abitrate=None, vbitrate=None,
resolution=None,
passes=1,
special=None, specialfirst=None, specialsecond=None,
metadata=None,
width=None, height=None, aspect=None,
profile=None,
update_fnc=task_update_progress,
message_fnc=write_message
):
""" Starts an ffmpeg encoding process based on the given parameters.
The encoding is run as a subprocess. The progress of the subprocess is
continuously written to the given messaging functions. In a normal case,
these should be the ones of BibTask.
@param input_file: Path to the input video.
@type input_file: string
@param output_file: Path to the output file. If no parameters other than
the input and output files are given, FFmpeg tries to auto-discover the right codecs
for the given file extension. In this case, every other aspect like
resolution and bitrates will be the same as in the input video.
@type output_file: string
@param acodec: The audio codec to use. This must be an available codec of
libavcodec within FFmpeg.
@type acodec: string
@param vcodec: The video codec to use. This must be an available codec of
libavcodec within FFmpeg.
@type vcodec: string
@param abitrate: Bitrate of the audio stream. In bit/s.
@type abitrate: int
@param vbitrate: Bitrate of the video stream. In bit/s.
@type vbitrate: int
@param resolution: Fixed size of the frames in the transcoded video.
FFmpeg notation: 'WxH' or preset like 'vga'. See also 'width'
@param passes: Number of encoding passes. Either 1 or 2.
@type passes: int
@param special: Additional FFmpeg parameters.
@type special: string
@param specialfirst: Additional FFmpeg parameters for the first pass.
The 'special' parameter is ignored if this is not 'None'
@type specialfirst: string
@param specialsecond: Additional FFmpeg parameters for the second pass.
The 'special' parameter is ignored if this is not 'None'
@type specialsecond: string
@param metadata: Metadata that should be added to the transcoded video.
This must be a dictionary. As with all metadata in FFmpeg, there is no
guarantee that the metadata specified in the dictionary will really be added
to the file, because it will largely depend on the container format and its
supported fields.
@type metadata: dict
@param width: Instead of giving a fixed resolution, you can use width and
height as dimensional constraints. The algorithm will try to preserve the
original aspect and fit the new frame size into the given dimensions.
@type width: int
@param height: see 'width'
@type height: int
@param aspect: A float representing the aspect ratio of the video:
4:3 equals 1.33 and 16:9 equals 1.77.
This is a fallback in case the algorithm fails to determine the real aspect
ratio from the video. See also 'width'
@type aspect: float or "4:3" like string
@param profile: A profile to use. The priority is on the parameters
directly given to the function.
@type profile: string
@param update_fnc: A function called to display or log the encoding
status. This function must accept a string.
@type update_fnc: function
@param message_fnc: A function to log important messages or errors.
This function must accept a string.
@type message_fnc: function
@return: True if the encoding was successful, False if not
@rtype: boolean
"""
def encode():
""" Subfunction to run the acutal encoding
"""
## Start process
process = subprocess.Popen(command,
stderr=log_file_handle,
close_fds=True)
## While the process is running
time.sleep(1)
while process.poll() is None:
# Update the status in bibsched
update_status()
time.sleep(4)
## If the process was terminated
if process.poll() == -15:
# Encoding was terminated by system
message_fnc("FFMPEG was terminated")
update_fnc(" FFMPEG was terminated")
return 0
## If there was an error during encoding
if process.poll() == 1:
update_fnc(" An FFMPEG error has appeared, see log")
message_fnc("An FFMPEG error has appeared encoding %s" % output_file)
message_fnc("Command was: %s" % ' '.join(command))
message_fnc("Last lines of the FFmpeg log:")
## open the logfile again and retrieve the size
log_file_handle2 = open(log_file_name, 'rb')
size = os.fstat(log_file_handle2.fileno())[6]
## Read the last lines
log_file_handle2.seek(-min(size, 10000), 2)
lastlines = log_file_handle2.read().splitlines()[-5:]
for line in lastlines:
message_fnc(line)
return 0
## If everything went fine
if process.poll() == 0:
message_fnc("Encoding of %s done" % output_file)
update_fnc("Encoding of %s done" % output_file)
return 1
def build_command(nofpass=1):
""" Builds the ffmpeg command according to the function params
"""
def insert(key, value):
""" Shortcut for inserting parameters into the arg list
"""
base_args.insert(-1, key)
base_args.insert(-1, value)
## Determine base command arguments from the pass to run
base_args = None
if passes == 1:
base_args = [CFG_PATH_FFMPEG, '-y', '-i', input_file, output_file]
elif passes == 2:
if nofpass == 1:
base_args = [CFG_PATH_FFMPEG, '-y', '-i', input_file,
'-pass', '1', '-passlogfile', pass_log_file,
'-an', '-f', 'rawvideo', '/dev/null']
elif nofpass == 2:
base_args = [CFG_PATH_FFMPEG, '-y', '-i', input_file,
'-pass', '2', '-passlogfile',
pass_log_file, output_file]
## Insert additional arguments
if acodec is not None:
insert('-acodec', acodec)
if vcodec is not None:
insert('-vcodec', vcodec)
if abitrate is not None:
- insert('-ab', str(abitrate))
+ insert('-b:a', str(abitrate))
if vbitrate is not None:
- insert('-b', str(vbitrate))
+ insert('-b:v', str(vbitrate))
## If a resolution is given
if resolution:
insert('-s', resolution)
## If not, you can give width and height and generate the resolution
else:
## Use our new function to get the size of the input
nresolution = determine_resolution_preserving_aspect(input_file,
width,
height,
aspect)
insert('-s', nresolution)
## Metadata additions
if type(metadata) is type(dict()):
## build metadata arguments for ffmpeg
for key, value in metadata.iteritems():
if value is not None:
meta_arg = (
CFG_BIBENCODE_FFMPEG_METADATA_ARGUMENT % (key, value)
)
insert("-metadata", meta_arg)
## Special argument additions
if passes == 1:
if special is not None:
for val in special.split():
base_args.insert(-1, val)
elif passes == 2:
if nofpass == 1:
if specialfirst is not None:
for val in specialfirst.split():
base_args.insert(-1, val)
if nofpass == 2:
if specialsecond is not None:
for val in specialsecond.split():
base_args.insert(-1, val)
return base_args
def update_status():
""" Parses the encoding status and updates the task in bibsched
"""
def graphical(value):
""" Converts a percentage value to a nice graphical representation
"""
## If the given value is a valid percentage
if value >= 0 and value <= 100:
## This is to get nice, aligned output in bibsched
oval = str(value).zfill(3)
return (
"[" + "#"*(value/10) + " "*(10-(value/10)) +
"][%d/%d] %s%%" % (nofpass, passes, oval)
)
else:
## Sometimes the parsed values from FFMPEG are totally off.
## Or maybe needed values are not available for the given video.
## In this case there is no estimate.
return "[ no est. ][%d/%d] " % (nofpass, passes)
## init variables
time_string = '0.0'
percentage_done = -1
## try to read the encoding log
try:
filehandle = open(log_file_name, 'rb')
except IOError:
message_fnc("Error opening %s" % log_file_name)
update_fnc("Could not open encoding log")
return
## Check the size of the file before reading from the end
size = os.path.getsize(log_file_name)
if not size:
return
## Go to the end of the log
filehandle.seek(-min(10000, size), 2)
chunk = filehandle.read()
lines = chunk.splitlines()
## try to parse the status
for line in reversed(lines):
if CFG_BIBENCODE_FFMPEG_ENCODE_TIME.match(line):
time_string = (
CFG_BIBENCODE_FFMPEG_ENCODE_TIME.match(line).groups()
)[0]
break
filehandle.close()
try:
percentage_done = int(timecode_to_seconds(time_string) / total_seconds * 100)
except:
percentage_done = -1
## Now update the bibsched progress
opath, ofile = os.path.split(output_file)
if len(opath) > 8:
opath = "..." + opath[-8:]
ohint = opath + '/' + ofile
update_fnc(graphical(percentage_done) + " > " + ohint)
#------------------#
# PROFILE HANDLING #
#------------------#
if profile:
profile = get_encoding_profile(profile)
acodec = chose(acodec, 'audiocodec', profile)
vcodec = chose(vcodec, 'videocodec', profile)
abitrate = chose(abitrate, 'audiobitrate', profile)
vbitrate = chose(vbitrate, 'videobitrate', profile)
resolution = chose(resolution, 'resolution', profile)
passes = getval(profile, 'passes', 1)
special = chose(special, 'special', profile)
specialfirst = chose(specialfirst, 'special_firstpass', profile)
specialsecond = chose(specialsecond, 'special_secondpass', profile)
metadata = chose(metadata, 'metadata', profile)
width = chose(width, 'width', profile)
height = chose(height, 'height', profile)
aspect = chose(aspect, 'aspect', profile)
#----------#
# ENCODING #
#----------#
## Mark Task as stoppable
# task_sleep_now_if_required()
tech_metadata = ffprobe_metadata(input_file)
try:
total_seconds = float(tech_metadata['format']['duration'])
except:
total_seconds = 0.0
## Run the encoding
pass_log_file = CFG_BIBENCODE_FFMPEG_PASSLOGFILE_PREFIX % (
os.path.splitext(os.path.split(input_file)[1])[0],
str(uuid.uuid4()))
no_error = True
## For every encoding pass to do
for apass in range(0, passes):
nofpass = apass + 1
if no_error:
## Create Logfiles
log_file_name = _filename_log(output_file, nofpass)
try:
log_file_handle = open(log_file_name, 'w')
except IOError:
message_fnc("Error creating %s" % log_file_name)
update_fnc("Error creating logfile")
return 0
## Build command for FFMPEG
command = build_command(nofpass)
## Start encoding, result will define to continue or not to
no_error = encode()
## !!! Status Update
return no_error
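The progress rendering used by update_status() can be reproduced stand-alone; this sketch uses // so it also runs under Python 3, where the original's value/10 relies on Python 2 integer division:

```python
def graphical(value, nofpass=1, passes=2):
    """Stand-alone sketch of the progress bar built in update_status()."""
    if 0 <= value <= 100:
        oval = str(value).zfill(3)
        return ("[" + "#" * (value // 10) + " " * (10 - value // 10) +
                "][%d/%d] %s%%" % (nofpass, passes, oval))
    # Values outside 0..100 mean the estimate could not be parsed.
    return "[ no est. ][%d/%d] " % (nofpass, passes)

print(graphical(42))  # [####      ][1/2] 042%
print(graphical(-1))  # [ no est. ][1/2]
```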
def propose_resolutions(video_file, display_aspect=None, res_16_9=['1920x1080', '1280x720', '854x480', '640x360'], res_4_3=['640x480'], lq_fallback=True):
""" Returns a list of possible resolutions that would work with the given
video, based on its own resolution and aspect ratio.
@param display_aspect: Sets the display aspect ratio for videos where
this might not be detectable
@param res_16_9: Possible resolutions to select from for 16:9 videos
@param res_4_3: Possible resolutions to select from for 4:3 videos
@param lq_fallback: Return the video's own resolution if none of the given ones fits
"""
def eq(a,b):
if abs(a-b) < 0.01:
return 1
else:
return 0
def get_smaler_or_equal_res(height, avail_res):
smaler_res = []
for res in avail_res:
vres = int(res.split('x')[1])
if vres <= height:
smaler_res.append(res)
return smaler_res
def get_res_for_weird_aspect(width, aspect, avail_res):
smaler_res = []
for res in avail_res:
hres, vres = res.split('x')
hres = int(hres)
vres = int(vres)
if hres <= width:
height = hres * (1.0 / aspect)
if height % 2 != 0:
height = height-1
smaler_res.append(str(hres) + 'x' + str(int(height)))
return smaler_res
meta_dict = ffprobe_metadata(video_file)
for stream in meta_dict['streams']:
if stream['codec_type'] == 'video':
width = int(stream['width'])
height = int(stream['height'])
# If the display aspect ratio is in the meta, we can even override
# the ratio that was given to the function as a fallback
# But the information in the file could be wrong ...
# Which information is trustworthy?
if 'display_aspect_ratio' in stream:
display_aspect = stream['display_aspect_ratio']
break
# Calculate the aspect factors
if display_aspect == None:
# Assume square pixels
display_aspect = float(width) / float(height)
else:
asp_w, asp_h = display_aspect.split(':')
display_aspect = float(asp_w) / float(asp_h)
# Check if 16:9
if eq(display_aspect, 1.77):
possible_res = get_smaler_or_equal_res(height, res_16_9)
# Check if 4:3
elif eq(display_aspect, 1.33):
possible_res = get_smaler_or_equal_res(height, res_4_3)
# Weird aspect
else:
possible_res = get_res_for_weird_aspect(width, display_aspect, res_16_9)
# If the video is crap
if not possible_res and lq_fallback:
return [str(width) + 'x' + str(height)]
else:
return possible_res
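The 16:9 selection step of propose_resolutions() can be sketched as a one-line filter; smaller_or_equal_res is a hypothetical stand-alone version of the inner helper:

```python
def smaller_or_equal_res(height, avail_res):
    """Keep the candidate resolutions whose vertical size does not
    exceed the source height, as in propose_resolutions()."""
    return [res for res in avail_res
            if int(res.split('x')[1]) <= height]

candidates = ['1920x1080', '1280x720', '854x480', '640x360']
print(smaller_or_equal_res(720, candidates))
# ['1280x720', '854x480', '640x360']
```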
diff --git a/modules/bibfield/etc/atlantis.cfg b/modules/bibfield/etc/atlantis.cfg
index fce9a420f..a2083ea85 100644
--- a/modules/bibfield/etc/atlantis.cfg
+++ b/modules/bibfield/etc/atlantis.cfg
@@ -1,777 +1,1150 @@
###############################################################################
########## ##########
########## Invenio Atlantis Site Bibfield Configuration File ##########
########## ##########
###############################################################################
abstract:
creator:
- @legacy(("520__a", "abstract", "summary"),
+ @legacy((("520", "520__", "520__%"), "abstract", ""),
+ ("520__a", "abstract", "summary"),
("520__b", "expansion"),
("520__9", "number"))
marc, "520__", {'summary':value['a'], 'expansion':value['b'], 'number':value['9']}
+ producer:
+ json_for_marc, {"520__a": "summary", "520__b": "expansion", "520__9": "number"}
+ json_for_dc, {"dc:description":"summary"}
abstract_french:
creator:
- @legacy(("590__a", "summary"),
- ("590__b", "expansion"),)
+ @legacy((("590", "590__", "590__%"), ""),
+ ("590__a", "summary"),
+ ("590__b", "expansion"))
marc, "590__", {'summary':value['a'], 'expansion':value['b']}
+ producer:
+ json_for_marc, {"590__a": "sumary", "590__b": "expansion"}
accelerator_experiment:
creator:
- @legacy(("693__a", "accelerator"),
+ @legacy((("693", "693__", "693__%"), ""),
+ ("693__a", "accelerator"),
("693__e", "experiment"),
- ("693__f", "facility"),)
+ ("693__f", "facility"))
marc, "693__", {'accelerator':value['a'], 'experiment':value['e'], 'facility':value['f']}
+ producer:
+ json_for_marc, {"693__a": "accelerator", "693__b": "experiment", "693__f": "facility"}
action_note:
creator:
- @legacy(("583__a", "action"),
+ @legacy((("583", "583__", "583__%"), ""),
+ ("583__a", "action"),
("583__c", "time"),
("583__i", "email"),
- ("583__z", "note"),)
+ ("583__z", "note"))
marc, "583__", {'action':value['a'], 'time':value['c'], 'email':value['i'], 'note':value['z']}
+ producer:
+ json_for_marc, {"583__a": "action", "583__c": "time", "583__i": "email", "583__z": "note"}
address:
creator:
- @legacy(("270__a", "address"),
+ @legacy((("270", "270__", "270__%"), ""),
+ ("270__a", "address"),
("270__b", "city"),
("270__d", "country"),
("270__e", "pc"),
("270__k", "telephone"),
("270__l", "fax"),
("270__m", "email"),
("270__p", "contact"),
("270__s", "suffix"),
- ("270__9", "telex"),)
+ ("270__9", "telex"))
marc, "270__", {'address':value['a'], 'city':value['b'], 'country':value['d'], 'pc':value['e'], 'telephone':value['k'], 'fax':value['l'], 'email':value['m'], 'contact':value['p'], 'suffix':value['s'], 'telex':value['9']}
+ producer:
+ json_for_marc, {"270__a":"address", "270__b":"city", "270__d":"country", "270__e":"pc", "270__k":"telephone", "270__l":"fax", "270__m":"email", "270__p":"contact", "270__s":"suffix", "270__9":"telex"}
affiliation:
creator:
- @legacy(("901__u", ""),)
+ @legacy((("901", "901__", "901__%"), ""),
+ ("901__u", ""))
marc, "901__", value['u']
+ producer:
+ json_for_marc, {"901__u": ""}
agency_code:
creator:
- @legacy(("003", "agency_code"),)
+ @legacy(("003", "agency_code"), )
marc, "003", value
documentation:
"It contains the code for the agency whose system control number is present in field recid"
+ producer:
+ json_for_marc, {"003": ""}
aleph_linking_page:
creator:
- @legacy(("962__a", "type"),
+ @legacy((("962", "962__", "962__%"), ""),
+ ("962__a", "type"),
("962__b", "sysno"),
("962__l", "library"),
("962__n", "down_link"),
("962__m", "up_link"),
("962__y", "volume_link"),
("962__p", "part_link"),
("962__i", "issue_link"),
("962__k", "pages"),
("962__t", "base"))
marc, "962__", {'type':value['a'], 'sysno':value['b'], 'library':value['l'], 'down_link':value['n'], 'up_link':value['n'], 'volume_link':value['y'], 'part_link':value['p'], 'issue_link':value['i'], 'pages':value['k'], 'base':value['t']}
+ producer:
+ json_for_marc, {"962__a":"type", "962__b":"sysno", "962__l":"library", "962__n":"down_link", "962__m":"up_link", "962__y":"volume_link", "962__p":"part_link", "962__i":"issue_link", "962__k":"pages", "962__t":"base"}
authors[0], creator:
creator:
- @legacy(("100__a", "first author name", "full_name"),
+ @legacy((("100", "100__", "100__%"), ""),
+ ("100__a", "first author name", "full_name"),
("100__e", "relator_name"),
("100__h", "CCID"),
("100__i", "INSPIRE_number"),
("100__u", "first author affiliation", "affiliation"))
marc, "100__", { 'full_name':value['a'], 'first_name':util_split(value['a'],',',1), 'last_name':util_split(value['a'],',',0), 'relator_name':value['e'], 'CCID':value['h'], 'INSPIRE_number':value['i'], 'affiliation':value['u'] }
checker:
check_field_existence(0,1)
check_field_type('str')
documentation:
"Main Author"
- @subfield fn: "First name"
- @subfield ln: "Last name"
+ producer:
+ json_for_marc, {"100__a": "full_name", "100__e": "relator_name", "100__h": "CCID", "100__i": "INSPIRE_number", "100__u": "affiliation"}
+ json_for_dc, {"dc:creator": "full_name"}
authors[n], contributor:
creator:
- @legacy(("700__a", "additional author name", "full_name"),
+ @legacy((("700", "700__", "700__%"), ""),
+ ("700__a", "additional author name", "full_name"),
("700__u", "additional author affiliation", "affiliation"))
marc, "700__", {'full_name': value['a'], 'first_name':util_split(value['a'],',',1), 'last_name':util_split(value['a'],',',0), 'relator_name':value['e'], 'CCID':value['h'], 'INSPIRE_number':value['i'], 'affiliation':value['u'] }
checker:
check_field_existence(0,'n')
check_field_type('str')
documentation:
"Authors"
+ producer:
+ json_for_marc, {"700__a": "full_name", "700__e": "relator_name", "700__h": "CCID", "700__i": "INSPIRE_number", "700__u": "affiliation"}
+ json_for_dc, {"dc:contributor": "full_name"}
author_archive:
creator:
- @legacy(("720__a", ""),)
+ @legacy((("720", "720__", "720__%"), ""),
+ ("720__a", ""))
marc, "720__", value['a']
+ producer:
+ json_for_marc, {"720__a": ""}
base:
creator:
- @legacy(("960__a", ""),)
+ @legacy((("960", "960__", "960__%"), ""),
+ ("960__a", ""))
marc, "960__", value['a']
+ producer:
+ json_for_marc, {"960__a": ""}
+
cataloguer_info:
creator:
- @legacy(("961__a", "cataloguer"),
+ @legacy((("961", "961__", "961__%"), ""),
+ ("961__a", "cataloguer"),
("961__b", "level"),
("961__c", "modification_date"),
("961__l", "library"),
("961__h", "hour"),
("961__x", "creation_date"))
marc, "961__", {'cataloguer':value['a'], 'level':value['b'], 'modification_date':value['c'], 'library':value['l'], 'hour':value['h'], 'creation_date':value['x']}
+ producer:
+ json_for_marc, {"961__a": "cataloguer", "961__b": "level", "961__c": "modification_date", "961__l": "library", "961__h": "hour", "961__x": "creation_date"}
+
classification_terms:
creator:
- @legacy(("694__a", "term"),
+ @legacy((("694", "694__", "694__%"), ""),
+ ("694__a", "term"),
("694__9", "institute"))
marc, "694__", {'term':value['a'], 'institute':value['9']}
+ producer:
+ json_for_marc, {"694__a": "term", "694__9": "institute"}
cern_bookshop_statistics:
creator:
- @legacy(("599__a", "number_of_books_bought"),
+ @legacy((("599", "599__", "599__%"), ""),
+ ("599__a", "number_of_books_bought"),
("599__b", "number_of_books_sold"),
("599__c", "relation"))
marc, "599__", {'number_of_books_bought':value['a'], 'number_of_books_sold':value['b'], 'relation':value['c']}
+ producer:
+ json_for_marc, {"599__a":"number_of_books_bought", "599__b":"number_of_books_sold", "599__c":"relation"}
code_designation:
creator:
- @legacy(("030__a", "coden", "coden"),
+    @legacy((("030", "030__", "030__%"), ""),
+ ("030__a", "coden", "coden"),
("030__9", "source"))
marc, "030__", {'coden':value['a'], 'source':value['9']}
+ producer:
+ json_for_marc, {"030__a":"coden", "030__9":"source"}
collection:
creator:
- @legacy(("980__%", "collection identifier", ""),
+ @legacy((("980", "980__", "980__%"), ""),
+ ("980__%", "collection identifier", ""),
("980__a", "primary"),
("980__b", "secondary"),
("980__c", "deleted"))
marc, "980__", { 'primary':value['a'], 'secondary':value['b'], 'deleted':value['c'] }
+ producer:
+ json_for_marc, {"980__a":"primary", "980__b":"secondary", "980__c":"deleted"}
comment:
creator:
- @legacy(("500__a", "comment", ""),)
+ @legacy((("500", "500__", "500__%"), ""),
+ ("500__a", "comment", ""))
marc, "500__", value['a']
+ producer:
+ json_for_marc, {"500__a": ""}
+
content_type:
creator:
- @legacy(("336__a", ""),)
+ @legacy((("336", "336__", "336__%"), ""),
+ ("336__a", ""))
marc, "336__", value['a']
documentation:
"Note: use for SLIDES"
+ producer:
+ json_for_marc, {"336__a": ""}
+
copyright:
creator:
- @legacy(("598__a", ""),)
+ @legacy((("598", "598__", "598__%"), ""),
+ ("598__a", ""))
marc, "598__", value['a']
+ producer:
+        json_for_marc, {"598__a": ""}
+
corporate_name[0]:
creator:
- @legacy(("110__a", "name"),
+ @legacy((("110", "110__", "110__%"), ""),
+ ("110__a", "name"),
("110__b", "subordinate_unit"),
("110__g", "collaboration"))
marc, "110__", {'name':value['a'], 'subordinate_unit':value['b'], 'collaboration':value['g']}
checker:
check_field_existence(0,1)
+ producer:
+        json_for_marc, {"110__a":"name", "110__b":"subordinate_unit", "110__g":"collaboration"}
+
corporate_name[n]:
creator:
- @legacy(("710__a", "name"),
+ @legacy((("710", "710__", "710__%"), ""),
+ ("710__a", "name"),
("710__b", "subordinate_unit"),
("710__g", "collaboration", "collaboration"))
marc, "710__", {'name':value['a'], 'subordinate_unit':value['b'], 'collaboration':value['g']}
checker:
check_field_existence(0,'n')
+ producer:
+        json_for_marc, {"710__a":"name", "710__b":"subordinate_unit", "710__g":"collaboration"}
cumulative_index:
creator:
- @legacy(("555__a", ""),)
+ @legacy((("555", "555__", "555__%"), ""),
+ ("555__a", ""))
marc, "555__", value['a']
+ producer:
+ json_for_marc, {"555__a": ""}
current_publication_frequency:
creator:
- @legacy(("310__a", ""),)
+ @legacy((("310", "310__", "310__%"), ""),
+ ("310__a", ""))
marc, "310__", value['a']
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {"310__a": ""}
publishing_country:
creator:
- @legacy(("044__a", ""),)
+ @legacy((("044", "044__", "044__%"), ""),
+ ("044__a", ""))
marc, "044__", value['a']
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {"044__a": ""}
coyright:
creator:
- @legacy(("542__d", "holder"),
+ @legacy((("542", "542__", "542__%"), ""),
+ ("542__d", "holder"),
("542__g", "date"),
("542__u", "url"),
("542__e", "holder_contact"),
("542__f", "statement"),
("542__3", "materials"),)
marc, "542__", {'holder':value['d'], 'date':value['g'], 'url':value['u'], 'holder_contact':value['e'], 'statement':value['f'], 'materials':value['3']}
+ producer:
+ json_for_marc, {"542__d": "holder", "542__g": "date", "542__u": "url", "542__e": "holder_contact", "542__f": "statement", "542__3": "materials"}
+
dewey_decimal_classification_number:
creator:
- @legacy(("082__a", ""))
+ @legacy((("082", "082__", "082__%"), ""),
+ ("082__a", ""))
marc, "082__", value['a']
+ producer:
+ json_for_marc, {"082__a": ""}
dissertation_note:
creator:
- @legacy(("502__a","diploma"),
+ @legacy((("502", "502__", "502__%"), ""),
+ ("502__a","diploma"),
("502__b","university"),
("502__c","defense_date"))
marc, "502__", {'diploma':value['a'], 'university':value['b'], 'defense_date':value['c']}
+ producer:
+        json_for_marc, {"502__a": "diploma", "502__b": "university", "502__c": "defense_date"}
@persistent_identifier(3)
doi:
creator:
- @legacy (("0247_2", ""),)
+ @legacy((("024", "0247_", "0247_%"), ""),
+ ("0247_a", ""))
marc, "0247_", get_doi(value)
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {'0247_2': 'str("DOI")', '0247_a': ''}
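The `doi` creator above calls `get_doi(value)`, whose body is not part of this file. Since `0247_2` names the identifier scheme and `0247_a` carries the identifier (the producer emits `str("DOI")` into `0247_2`), a plausible sketch of the helper is:

```python
# Hypothetical sketch of get_doi(): return 0247_a only when the 0247_2
# scheme marks the identifier as a DOI. Not the actual Invenio helper.
def get_doi(value):
    if value.get('2', '').upper() == 'DOI':
        return value.get('a')
    return None
```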
edition_statement:
creator:
- @legacy(("250__a", ""),)
+ @legacy((("250", "250__", "250__%"), ""),
+ ("250__a", ""))
marc, "250__", value['a']
documentation:
"Information relating to the edition of a work as determined by applicable cataloging rules."
+ producer:
+ json_for_marc, {"250__a": ""}
email:
creator:
- @legacy(("8560_f", "email"),)
+ @legacy((("856", "8560_", "8560_%"), ""),
+ ("8560_f", "email"))
marc, "8560_", value['f']
+ producer:
+ json_for_marc, {"8560_f": ""}
email_message:
creator:
- @legacy(("859__a","contact"),
+ @legacy((("859", "859__", "859__%"), ""),
+ ("859__a","contact"),
("859__f","address"),
("859__x","date"))
marc, "859__", {'contact':value['a'], 'address':value['f'], 'date':value['x']}
+ producer:
+        json_for_marc, {"859__a": "contact", "859__f": "address", "859__x": "date"}
-fft:
+fft[n]:
creator:
@legacy(("FFT__a", "path"),
("FFT__d", "description"),
- ("FFT__f", "format"),
+ ("FFT__f", "eformat"),
+ ("FFT__i", "temporary_id"),
("FFT__m", "new_name"),
("FFT__o", "flag"),
("FFT__r", "restriction"),
("FFT__s", "timestamp"),
("FFT__t", "docfile_type"),
("FFT__v", "version"),
("FFT__x", "icon_path"),
- ("FFT__z", "comment"))
- marc, "FFT__", {'path': value['a'], 'description': value['d'],
- 'format': value['f'], 'new_name': value['m'],
- 'flag': value['o'], 'restriction': value['r'],
- 'timestamp': value['s'], 'docfile_type': value['t'],
- 'version': value['v'], 'icon_path': value['x'],
- 'comment': value['s']}
+ ("FFT__z", "comment"),
+ ("FFT__w", "document_moreinfo"),
+ ("FFT__p", "version_moreinfo"),
+ ("FFT__b", "version_format_moreinfo"),
+            ("FFT__u", "format_moreinfo"))
+ marc, "FFT__", {'path': value['a'],
+ 'description': value['d'],
+ 'eformat': value['f'],
+ 'temporary_id': value['i'],
+ 'new_name': value['m'],
+ 'flag': value['o'],
+ 'restriction': value['r'],
+ 'timestamp': value['s'],
+ 'docfile_type': value['t'],
+ 'version': value['v'],
+ 'icon_path': value['x'],
+ 'comment': value['z'],
+ 'document_moreinfo': value['w'],
+ 'version_moreinfo': value['p'],
+ 'version_format_moreinfo': value['b'],
+ 'format_moreinfo': value['u']
+ }
+ @only_if_value((is_local_url(value['u']), ))
+    marc, "8564_", {'host_name': value['a'],
+ 'access_number': value['b'],
+ 'compression_information': value['c'],
+ 'path':value['d'],
+ 'electronic_name': value['f'],
+ 'request_processor': value['h'],
+ 'institution': value['i'],
+                    'eformat': value['q'],
+ 'settings': value['r'],
+ 'file_size': value['s'],
+ 'url': value['u'],
+ 'subformat':value['x'],
+ 'description':value['y'],
+ 'comment':value['z']}
+ producer:
+        json_for_marc, {"FFT__a": "path", "FFT__d": "description", "FFT__f": "eformat", "FFT__i": "temporary_id", "FFT__m": "new_name", "FFT__o": "flag", "FFT__r": "restriction", "FFT__s": "timestamp", "FFT__t": "docfile_type", "FFT__v": "version", "FFT__x": "icon_path", "FFT__z": "comment", "FFT__w": "document_moreinfo", "FFT__p": "version_moreinfo", "FFT__b": "version_format_moreinfo", "FFT__u": "format_moreinfo"}
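The `@only_if_value((is_local_url(value['u']), ))` guard above (and its negation in the `url` field further down) decides whether an 8564_ entry points at this installation. The real predicate lives elsewhere in Invenio; a sketch assuming it compares scheme and host against the site URL (`CFG_SITE_URL` is a stand-in name here) could be:

```python
from urllib.parse import urlparse

# Hypothetical sketch of is_local_url(): a URL is "local" when its scheme
# and host match the installation's base URL. Illustration only.
CFG_SITE_URL = "http://localhost:4000"

def is_local_url(url):
    if not url:
        return False
    local = urlparse(CFG_SITE_URL)
    target = urlparse(url)
    return (target.scheme, target.netloc) == (local.scheme, local.netloc)
```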
funding_info:
creator:
- @legacy(("536__a", "agency"),
+ @legacy((("536", "536__", "536__%"), ""),
+ ("536__a", "agency"),
("536__c", "grant_number"),
("536__f", "project_number"),
("536__r", "access_info"))
marc, "536__", {'agency':value['a'], 'grant_number':value['c'], 'project_number':value['f'], 'access_info':value['r']}
documentation:
- @subfield access_info: "Note: used for Open Access tag in OpenAIRE"
+        @subfield access_info: "note: used for open access tag in OpenAIRE"
+ producer:
+ json_for_marc, {"536__a": "agency", "536__c": "grant_number", "536__f": "project_number", "536__r": "access_info"}
+
imprint:
creator:
- @legacy(("260__a", "place"),
+ @legacy((("260", "260__", "260__%"), ""),
+ ("260__a", "place"),
("260__b", "publisher_name"),
("260__c", "date"),
("260__g", "reprinted_editions"))
marc, "260__", {'place':value['a'], 'publisher_name':value['b'], 'date':value['c'], 'reprinted_editions':value['g']}
+ producer:
+ json_for_marc, {"260__a": "place", "260__b": "publisher_name", "260__c": "date", "260__g": "reprinted_editions"}
internal_notes:
creator:
- @legacy(("595__a", "internal notes", "internal_note"),
+ @legacy((("595", "595__", "595__%"), ""),
+ ("595__a", "internal notes", "internal_note"),
("595__d", "control_field"),
- ("595__i", "INSPEC_number"),
+ ("595__i", "inspec_number"),
("595__s", "subject"))
- marc, "595__", {'internal_note':value['a'], 'control_field':value['d'], 'INSPEC_number':value['i'], 'subject':value['s']}
+ marc, "595__", {'internal_note':value['a'], 'control_field':value['d'], 'inspec_number':value['i'], 'subject':value['s']}
+ producer:
+ json_for_marc, {"595__a": "internal_note", "595__d": "control_field","595__i": "inspec_number", "595__s": "subject"}
isbn:
creator:
- @legacy(("020__a", "isbn", "isbn"),
+ @legacy((("020", "020__", "020__%"), ""),
+ ("020__a", "isbn", "isbn"),
("020__u", "medium"))
marc, "020__", {'isbn':value['a'], 'medium':value['u']}
checker:
check_field_type('isbn', 'isbn')
+ producer:
+ json_for_marc, {"020__a": "isbn", "020__u": "medium"}
isn:
creator:
- @legacy(("021__a", ""),)
+ @legacy((("021", "021__", "021__%"), ""),
+ ("021__a", ""))
marc, "021__", value['a']
+ producer:
+ json_for_marc, {"021__a": ""}
issn:
creator:
- @legacy(("022__a", "issn", ""),)
+ @legacy((("022", "022__", "022__%"), ""),
+ ("022__a", "issn", ""))
marc, "022__", value['a']
checker:
check_field_type('issn')
+ producer:
+ json_for_marc, {"022__a": ""}
item:
creator:
- @legacy(("964__a", ""),)
+ @legacy((("964", "964__", "964__%"), ""),
+ ("964__a", ""))
marc, "964__", value['a']
+ producer:
+ json_for_marc, {"964__a": ""}
+
+journal_info:
+ creator:
+ @legacy((("909", "909C4", "909C4%"), "journal", ""),
+ ("909C4a", "doi", "doi"),
+ ("909C4c", "journal page", "pagination"),
+ ("909C4d", "date"),
+ ("909C4e", "recid"),
+ ("909C4f", "note"),
+ ("909C4p", "journal title", "title"),
+ ("909C4u", "url"),
+ ("909C4v", "journal volume", "volume"),
+ ("909C4y", "journal year", "year"),
+ ("909C4t", "talk"),
+ ("909C4w", "cnum"),
+ ("909C4x", "reference"))
+ marc, "909C4", {'doi':value['a'], 'pagination':value['c'], 'date':value['d'], 'recid':value['e'], 'note':value['f'], 'title':value['p'], 'url':value['u'], 'volume':value['v'], 'year':value['y'], 'talk':value['t'], 'cnum':value['w'], 'reference':value['x']}
+ producer:
+ json_for_marc, {"909C4a": "doi","909C4c": "pagination", "909C4d": "date", "909C4e": "recid", "909C4f": "note", "909C4p": "title", "909C4u": "url","909C4v": "volume", "909C4y": "year", "909C4t": "talk", "909C4w": "cnum", "909C4x": "reference"}
+
keywords[n]:
creator:
- @legacy(("6531_a", "keyword", "term"),
+ @legacy((("653", "6531_", "6531_%"), ""),
+ ("6531_a", "keyword", "term"),
("6531_9", "institute"))
marc, "6531_", { 'term': value['a'], 'institute': value['9'] }
checker:
check_field_existence(0,'n')
check_field_type('str')
+ producer:
+ json_for_marc, {"6531_a": "term", "6531_9": "institute"}
language:
creator:
- @legacy(("041__a", ""),)
+ @legacy((("041", "041__", "041__%"), ""),
+ ("041__a", ""))
marc, "041__", value['a']
+ producer:
+ json_for_marc, {"041__a": ""}
+ json_for_dc, {"dc:language": ""}
language_note:
creator:
- @legacy(("546__a", "language_note"),
+ @legacy((("546", "546__", "546__%"), ""),
+ ("546__a", "language_note"),
("546__g", "target_language"))
marc, "546__", {'language_note':value['a'], 'target_language':value['g']}
+ producer:
+ json_for_marc, {"546__a": "language_note", "546__g": "target_language"}
library_of_congress_call_number:
creator:
- @legacy(("050__a", "classification_number"),
+ @legacy((("050", "050__", "050__%"), ""),
+ ("050__a", "classification_number"),
("050__b", "item_number"))
marc, "050__", {'classification_number':value['a'], 'item_number':value['b']}
+ producer:
+ json_for_marc, {"050__a": "classification_number", "050__b": "item_number"}
license:
creator:
- @legacy(("540__a", "license"),
+ @legacy((("540", "540__", "540__%"), ""),
+ ("540__a", "license"),
("540__b", "imposing"),
("540__u", "url"),
("540__3", "material"))
marc, "540__", {'license':value['a'], 'imposing':value['b'], 'url':value['u'], 'material':value['3'],}
+ producer:
+ json_for_marc, {"540__a": "license", "540__b": "imposing", "540__u": "url", "540__3": "material"}
location:
creator:
- @legacy(("852__a", ""),)
+ @legacy((("852", "852__", "852__%"), ""),
+ ("852__a", ""))
marc, "852__", value['a']
+ producer:
+ json_for_marc, {"852__a": ""}
medium:
creator:
- @legacy(("340__a", "material"),
+ @legacy((("340", "340__", "340__%"), ""),
+ ("340__a", "material"),
+            ("340__c", "surface"),
            ("340__d", "recording_technique"),
-            ("340__d", "CD-ROM"))
-    marc, "340__", {'material':value['a'], 'surface':value['c'], 'recording_technique':value['d'], 'CD-ROM':value['9']}
+            ("340__9", "cd-rom"))
+    marc, "340__", {'material':value['a'], 'surface':value['c'], 'recording_technique':value['d'], 'cd-rom':value['9']}
+    producer:
+        json_for_marc, {"340__a": "material", "340__c": "surface", "340__d": "recording_technique", "340__9": "cd-rom"}
meeting_name[0]:
creator:
- @legacy(("111__a", "meeting"),
+ @legacy((("111", "111__", "111__%"), ""),
+ ("111__a", "meeting"),
("111__c", "location"),
("111__d", "date"),
("111__f", "year"),
            ("111__g", "conference_code"),
            ("111__n", "number_of_parts"),
            ("111__w", "country"),
            ("111__z", "closing_date"),
            ("111__9", "opening_date"))
    marc, "111__", {'meeting':value['a'], 'location':value['c'], 'date':value['d'], 'year':value['f'], 'conference_code':value['g'], 'number_of_parts':value['n'], 'country':value['w'], 'closing_date':value['z'], 'opening_date':value['9']}
    checker:
        check_field_existence(0,1)
+    producer:
+        json_for_marc, {"111__a": "meeting", "111__c": "location", "111__d": "date", "111__f": "year", "111__g": "conference_code", "111__n": "number_of_parts", "111__w": "country", "111__z": "closing_date", "111__9": "opening_date"}
meeting_name[n]:
creator:
- @legacy(("711__a", "meeting"),
+ @legacy((("711", "711__", "711__%"), ""),
+ ("711__a", "meeting"),
("711__c", "location"),
("711__d", "date"),
("711__f", "work_date"),
            ("711__g", "conference_code"),
            ("711__n", "number_of_parts"),
            ("711__9", "opening_date"))
    marc, "711__", {'meeting':value['a'], 'location':value['c'], 'date':value['d'], 'work_date':value['f'], 'conference_code':value['g'], 'number_of_parts':value['n'], 'opening_date':value['9']}
    checker:
        check_field_existence(0,'n')
-
-modification_date:
-    creator:
-        @legacy(("005", ""),)
-        marc, "005", datetime.datetime(*(time.strptime(value, '%Y%m%d%H%M%S.0')[0:6]))
-    checker:
-        check_field_existence(1)
-        check_field_type('datetime.datetime')
+    producer:
+        json_for_marc, {"711__a": "meeting", "711__c": "location", "711__d": "date", "711__f": "work_date", "711__g": "conference_code", "711__n": "number_of_parts", "711__9": "opening_date"}
@persistent_identifier(4)
oai:
creator:
- @legacy(("0248_a", "oai"),
+ @legacy((("024", "0248_", "0248_%"), ""),
+ ("0248_a", "oai"),
("0248_p", "indicator"))
marc, "0248_", {'value': value['a'], 'indicator': value['p']}
+ producer:
+ json_for_marc, {"0248_a": "oai", "0248_p": "indicator"}
observation:
creator:
- @legacy(("691__a", ""),)
+ @legacy((("691", "691__", "691__%"), ""),
+ ("691__a", ""))
marc, "691__", value['a']
+ producer:
+ json_for_marc, {"691__a": ""}
observation_french:
creator:
- @legacy(("597__a", ""),)
+ @legacy((("597", "597__", "597__%"), ""),
+ ("597__a", ""))
marc, "597__", value['a']
+ producer:
+ json_for_marc, {"597__a": ""}
other_report_number:
creator:
- @legacy(("084__a", "clasification_number"),
+ @legacy((("084", "084__", "084__%"), ""),
+            ("084__a", "classification_number"),
            ("084__b", "collection_short"),
            ("084__2", "source_number"))
    marc, "084__", {'classification_number':value['a'], 'collection_short':value['b'], 'source_number':value['2'],}
+    producer:
+        json_for_marc, {"084__a": "classification_number", "084__b": "collection_short", "084__2": "source_number"}
owner:
creator:
- @legacy(("963__a",""),)
+ @legacy((("963", "963__", "963__%"), ""),
+ ("963__a",""))
marc, "963__", value['a']
+ producer:
+ json_for_marc, {"963__a": ""}
prepublication:
creator:
- @legacy(("269__a", "place"),
+ @legacy((("269", "269__", "269__%"), ""),
+ ("269__a", "place"),
("269__b", "publisher_name"),
("269__c", "date"))
marc, "269__", {'place':value['a'], 'publisher_name': value['b'], 'date':value['c']}
documentation:
"""
- NOTE: Don't use the following lines for CER base=14,2n,41-45 !!
- NOTE: Don't use for THESES
+ note: don't use the following lines for cer base=14,2n,41-45 !!
+ note: don't use for theses
"""
+ producer:
+ json_for_marc, {"269__a": "place", "269__b": "publisher_name", "269__c": "date"}
primary_report_number:
creator:
- @legacy(("037__a", "primary report number", ""),)
+ @legacy((("037", "037__", "037__%"), ""),
+ ("037__a", "primary report number", ""), )
marc, "037__", value['a']
+ producer:
+ json_for_marc, {"037__a": ""}
publication_info:
creator:
- @legacy(("773__a", "DOI"),
+ @legacy((("773", "773__", "773__%"), ""),
+ ("773__a", "doi"),
("773__c", "pagination"),
("773__d", "date"),
("773__e", "recid"),
("773__f", "note"),
("773__p", "title"),
("773__u", "url"),
("773__v", "volume"),
("773__y", "year"),
("773__t", "talk"),
- ("773__w", "CNUM"),
+ ("773__w", "cnum"),
("773__x", "reference"))
- marc, "773__", {'DOI':value['a'], 'pagination':value['c'], 'date':value['d'], 'recid':value['e'], 'note':value['f'], 'title':value['p'], 'url':value['u'], 'volume':value['v'], 'year':value['y'], 'talk':value['t'], 'CNUM':value['w'], 'reference':value['x']}
+ marc, "773__", {'doi':value['a'], 'pagination':value['c'], 'date':value['d'], 'recid':value['e'], 'note':value['f'], 'title':value['p'], 'url':value['u'], 'volume':value['v'], 'year':value['y'], 'talk':value['t'], 'cnum':value['w'], 'reference':value['x']}
documentation:
- "NOTE: publication_info.DOI not to be used, used instead DOI"
+        "note: publication_info.doi is not to be used; use doi instead"
+ producer:
+ json_for_marc, {"773__a": "doi", "773__c": "pagination", "773__d": "date", "773__e": "recid", "773__f": "note", "773__p": "title", "773__u": "url", "773__v": "volume", "773__y": "year", "773__t": "talk", "773__w": "cnum", "773__x": "reference"}
physical_description:
creator:
- @legacy(("300__a", "pagination"),
+    @legacy((("300", "300__", "300__%"), ""),
+ ("300__a", "pagination"),
("300__b", "details"))
marc, "300__", {'pagination':value['a'], 'details':value['b']}
+ producer:
+ json_for_marc, {"300__a": "pagination", "300__b": "details"}
@persistent_identifier(0)
recid:
creator:
- @legacy(("001", "record ID", "recid"),)
+ @legacy(("001", "record id", ""), )
marc, "001", value
checker:
check_field_existence(1)
check_field_type('num')
documentation:
"""
- This is the main persistent identifier of a record and will be used
+ this is the main persistent identifier of a record and will be used
internally as this.
- Important: This is a mandatory field and it shouldn't be remove neither from this
+        important: this is a mandatory field and it should be removed neither from this
configuration file nor from the persistent identifier list
"""
+ producer:
+ json_for_marc, {"001": ""}
reference:
creator:
- @legacy(("999C5", "reference", ""),
+ @legacy((("999", "999C5", "999C5%"), ""),
+ ("999C5", "reference", ""),
("999C5a", "doi"),
("999C5h", "authors"),
("999C5m", "misc"),
("999C5n", "issue_number"),
("999C5o", "order_number"),
("999C5p", "page"),
("999C5r", "report_number"),
("999C5s", "title"),
("999C5u", "url"),
("999C5v", "volume"),
("999C5y", "year"),)
marc, "999C5", {'doi':value['a'], 'authors':value['h'], 'misc':value['m'], 'issue_number':value['n'], 'order_number':value['o'], 'page':value['p'], 'report_number':value['r'], 'title':value['s'], 'url':value['u'], 'volume':value['v'], 'year':value['y'],}
+ producer:
+ json_for_marc, {"999C5a": "doi", "999C5h": "authors", "999C5m": "misc", "999C5n": "issue_number", "999C5o":"order_number", "999C5p":"page", "999C5r":"report_number", "999C5s":"title", "999C5u":"url", "999C5v":"volume", "999C5y": "year"}
restriction_access:
creator:
- @legacy(("506__a", "terms"),
+ @legacy((("506", "506__", "506__%"), ""),
+ ("506__a", "terms"),
("506__9", "local_info"))
marc, "506__", {'terms':value['a'], 'local_info':value['9']}
+ producer:
+ json_for_marc, {"506__a": "terms", "506__9": "local_info"}
report_number:
creator:
- @legacy(("088_a", "additional report number", "report_number"),
- ("088_9", "internal"))
+ @legacy((("088", "088__", "088__%"), ""),
+ ("088__a", "additional report number", "report_number"),
+ ("088__9", "internal"))
marc, "088__", {'report_number':value['a'], 'internal':value['9']}
+ producer:
+ json_for_marc, {"088__a": "report_number", "088__9": "internal"}
series:
creator:
- @legacy(("490__a", "statement"),
+ @legacy((("490", "490__", "490__%"), ""),
+ ("490__a", "statement"),
("490__v", "volume"))
marc, "490__", {'statement':value['a'], 'volume':value['v']}
+ producer:
+ json_for_marc, {"490__a": "statement", "490__v": "volume"}
slac_note:
creator:
- @legacy(("596__a", "slac_note"),)
+ @legacy((("596", "596__", "596__%"), ""),
+ ("596__a", "slac_note", ""), )
marc, "596__", value['a']
+ producer:
+ json_for_marc, {"596__a": ""}
source_of_acquisition:
creator:
- @legacy(("541__a","source_of_acquisition"),
+ @legacy((("541", "541__", "541__%"), ""),
+ ("541__a","source_of_acquisition"),
("541__d","date"),
("541__e","accession_number"),
            ("541__f","owner"),
            ("541__h","price_paid"),
            ("541__9","price_user"))
    marc, "541__", {'source_of_acquisition':value['a'], 'date':value['d'], 'accession_number':value['e'], 'owner':value['f'], 'price_paid':value['h'], 'price_user':value['9']}
+    producer:
+        json_for_marc, {"541__a": "source_of_acquisition", "541__d": "date", "541__e": "accession_number", "541__f": "owner", "541__h": "price_paid", "541__9": "price_user"}
status_week:
creator:
- @legacy(("916__a","acquistion_proceedings"),
+ @legacy((("916", "916__", "916__%"), ""),
+            ("916__a","acquisition_proceedings"),
            ("916__d","display_period"),
            ("916__e","copies_bought"),
            ("916__s","status"),
            ("916__w","status_week"),
            ("916__y","year"))
    marc, "916__", {'acquisition_proceedings':value['a'], 'display_period':value['d'], 'copies_bought':value['e'], 'status':value['s'], 'status_week':value['w'], 'year':value['y']}
+    producer:
+        json_for_marc, {"916__a": "acquisition_proceedings", "916__d": "display_period", "916__e": "copies_bought", "916__s": "status", "916__w": "status_week", "916__y": "year"}
subject:
creator:
- @legacy(("65017a", "main subject", "term"),
+ @legacy((("650", "65017", "65017%"), ""),
+ ("65017a", "main subject", "term"),
("650172", "source"),
("65017e", "relator"))
marc, "65017", {'term':value['a'], 'source':value['2'], 'relator':value['e']}
documentation:
- @subfield term: "Topical term or geographic name"
- @subfield source: "Source of heading or term"
- @subfield relator: "Specifies the relationship between the topical heading and the described materials"
+ @subfield term: "topical term or geographic name"
+ @subfield source: "source of heading or term"
+ @subfield relator: "specifies the relationship between the topical heading and the described materials"
+ producer:
+ json_for_marc, {"65017a": "term", "650172": "source", "65017e": "relator"}
+ json_for_dc, {"dc:subject": "term"}
subject_additional:
creator:
- @legacy(("65027a", "additional subject", "term"),
+ @legacy((("650", "65027", "65027%"), ""),
+ ("65027a", "additional subject", "term"),
("650272", "source"),
("65027e", "relator"),
("65027p", "percentage"))
marc, "65027", {'term':value['a'], 'source':value['2'], 'relator':value['e'], 'percentage':value['p']}
documentation:
- @subfield term: "Topical term or geographic name"
- @subfield source: "Source of heading or term"
- @subfield relator: "Specifies the relationship between the topical heading and the described materials"
- @subfield perentage: "Percentage (relevance of topic, used for INTC)"
+ @subfield term: "topical term or geographic name"
+ @subfield source: "source of heading or term"
+ @subfield relator: "specifies the relationship between the topical heading and the described materials"
+        @subfield percentage: "percentage (relevance of topic, used for INTC)"
+ producer:
+ json_for_marc, {"65027a": "term", "650272": "source", "65027e": "relator", "65027p": "percentage"}
subject_indicator:
creator:
- @legacy(("690C_a", ""),)
- marc, "690C_", value['a']
+    @legacy((("690", "690C_", "690C_%"), ""),
+            ("690C_a", ""))
+    marc, "690C_", value['a']
+    producer:
+        json_for_marc, {"690C_a": ""}
@persistent_identifier(2)
system_control_number:
creator:
- @legacy(("035__a", "system_control_number"),
+ @legacy((("035", "035__", "035__%"), ""),
+ ("035__a", "system_control_number"),
("035__9", "institute"))
marc, "035__", {'value': value['a'], 'canceled':value['z'], 'linkpage':value['6'], 'institute':value['9']}
documentation:
@subfield institute: "inspire {record with other subject than Particle Physics to import into INSPIRE}"
+ producer:
+ json_for_marc, {"035__a": "system_control_number", "035__9": "institute"}
@persistent_identifier(1)
system_number:
creator:
- @legacy(("970__a", "sysno"),
+ @legacy((("970", "970__", "970__%"), ""),
+ ("970__a", "sysno"),
("970__d", "recid"))
marc, "970__", {'value':value['a'], 'recid':value['d']}
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {"970__a": "sysno", "970__d": "recid"}
thesaurus_terms:
creator:
- @legacy(("695__a", "term"),
+ @legacy((("695", "695__", "695__%"), ""),
+ ("695__a", "term"),
("695__9", "institute"))
marc, "695__", {'term':value['a'], 'institute':value['9']}
+ producer:
+ json_for_marc, {"695__a": "term", "695__9": "institute"}
time_and_place_of_event_note:
creator:
- @legacy(("518__d", "date"),
+ @legacy((("518", "518__", "518__%"), ""),
+ ("518__d", "date"),
("518__g", "conference_identification"),
("518__h", "starting_time"),
("518__l", "speech_length"),
("518__r", "meeting"))
    marc, "518__", {'date':value['d'], 'conference_identification':value['g'], 'starting_time':value['h'], 'speech_length':value['l'], 'meeting':value['r']}
+ producer:
+ json_for_marc, {"518__d": "date", "518__g": "conference_identification", "518__h": "starting_time", "518__l": "speech_length", "518__r": "meeting"}
abbreviated_title:
creator:
- @legacy(("210__a", ""))
+ @legacy((("210", "210__", "210__%"), ""),
+ ("210__a", ""))
marc, "210__", value['a']
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {"210__a": ""}
main_title_statement:
creator:
- @legacy(("145__a", "title"),
+ @legacy((("145", "145__", "145__%"), ""),
+ ("145__a", "title"),
("145__b", "subtitle"),)
marc, "145__", {'title':value['a'], 'subtitle':value['b']}
checker:
check_field_existence(0,1)
+ producer:
+ json_for_marc, {"145__a": "title", "145__b": "subtitle"}
-title_aditional:
+title_additional:
creator:
- @legacy(("246__%", "additional title", ""),
+ @legacy((("246", "246__", "246__%"), ""),
+ ("246__%", "additional title", ""),
("246__a", "title"),
("246__b", "subtitle"),
("246__g", "misc"),
("246__i", "text"),
("246__n", "part_number"),
("246__p", "part_name"))
marc, "246__", { 'title':value['a'], 'subtitle':value['b'], 'misc':value['g'], 'text':value['i'], 'part_number':value['n'], 'part_name':value['p']}
+ producer:
+ json_for_marc, {"246__a": "title", "246__b": "subtitle", "246__g": "misc", "246__i": "text", "246__n": "part_number", "246__p": "part_name"}
title:
creator:
- @legacy(("245__%", "main title", ""),
+ @legacy((("245", "245__", "245__%"), ""),
+ ("245__%", "main title", ""),
("245__a", "title", "title"),
("245__b", "subtitle"),
("245__n", "volume"),
("245__k", "form"))
marc, "245__", { 'title':value['a'], 'subtitle':value['b'], 'volume': value['n'], 'form':value['k'] }
checker:
- check_field_existence(0,1)
+ check_field_existence(0, 1, continuable=False)
check_field_type('str')
documentation:
- "Title"
+ "title"
+ producer:
+        json_for_marc, {"245__a": "title", "245__b": "subtitle", "245__n": "volume", "245__k": "form"}
+ json_for_dc, {"dc:title": "title"}
title_key:
creator:
- @legacy(("022__a", ""))
- marc, "022__", value['a']
+ @legacy((("222", "222__", "222__%"), ""),
+ ("222__a", ""))
+ marc, "222__", value['a']
+ producer:
+ json_for_marc, {"222__a": ""}
title_other:
creator:
- @legacy(("246_3a", "title"),
+ @legacy((("246", "246_3", "246_3%"), ""),
+ ("246_3a", "title"),
("246_3i", "text"),
("246_39", "sigle"))
marc, "246_3", { 'title':value['a'], 'text':value['i'], 'sigle':value['9']}
+ producer:
+ json_for_marc, {"246_3a": "title", "246_3i": "text", "246_39": "sigle"}
title_parallel:
creator:
- @legacy(("246_1a", "title"),
+ @legacy((("246", "246_1", "246_1%"), ""),
+ ("246_1a", "title"),
("246_1i", "text"))
marc, "246_1", { 'title':value['a'], 'text':value['i']}
+ producer:
+ json_for_marc, {"246_1a": "title", "246_1i": "text"}
title_translation:
creator:
- @legacy(("242__a", "title"),
+ @legacy((("242", "242__", "242__%"), ""),
+ ("242__a", "title"),
("242__b", "subtitle"),
("242__y", "language"))
marc, "242__", {'title':value['a'], 'subtitle':value['b'], 'language':value['y']}
documentation:
- @subfield language: "Language code of translated title"
+ @subfield language: "language code of translated title"
+ producer:
+ json_for_marc, {"242__a": "title", "242__b": "subtitle", "242__y": "language"}
type:
creator:
- @legacy(("594__a", ""),)
+ @legacy((("594", "594__", "594__%"), ""),
+ ("594__a", ""))
marc, "594__", value['a']
+ producer:
+ json_for_marc, {"594__a": ""}
udc:
creator:
- @legacy(("080__a", ""),)
+ @legacy((("080", "080__", "080__%"), ""),
+ ("080__a", ""))
marc, "080__", value['a']
documentation:
- "Universal Decimal Classification number"
+ "universal decimal classification number"
+ producer:
+ json_for_marc, {"080__a": ""}
url:
creator:
- @legacy(("8564_p", "path"),
+ @legacy((("856", "8564_", "8564_%"), ""),
+ ("8564_a", "host_name"),
+ ("8564_b", "access_number"),
+ ("8564_c", "compression_information"),
+ ("8564_d", "path"),
+ ("8564_f", "electronic_name"),
+ ("8564_h", "request_processor"),
+ ("8564_i", "institution"),
("8564_q", "eformat"),
+ ("8564_r", "settings"),
("8564_s", "file_size"),
("8564_u", "url", "url"),
- ("8564_x", "nonpublic_note"),
- ("8564_y", "caption", "link"),
- ("8564_z", "public_note"))
- marc, "8564_", {'path':value['d'], 'eformart':value['q'], 'file_size':value['s'], 'url':value['u'], 'nonpublic_note':value['x'], 'link':value['y'], 'public_note':value['z']}
+ ("8564_x", "subformat"),
+ ("8564_y", "caption", "description"),
+ ("8564_z", "comment"))
+ @only_if_value((not is_local_url(value['u']), ))
+ marc, "8564_", {'host_name': value['a'],
+ 'access_number': value['b'],
+ 'compression_information': value['c'],
+ 'path':value['d'],
+ 'electronic_name': value['f'],
+ 'request_processor': value['h'],
+ 'institution': value['i'],
+                    'eformat': value['q'],
+ 'settings': value['r'],
+ 'size': value['s'],
+ 'url': value['u'],
+ 'subformat':value['x'],
+ 'description':value['y'],
+ 'comment':value['z']}
documentation:
@subfield url: "used for URL and URN, repeatable for URN. repeat 856 for several url's"
@subfield public_note: "Stamped by WebSubmit: DATE"
+ producer:
+ json_for_marc, {"8564_a": "host_name", "8564_b": "access_number", "8564_c": "compression_information", "8564_d": "path", "8564_f": "electronic_name", "8564_h": "request_processor", "8564_i": "institution", "8564_q": "eformat", "8564_r": "settings", "8564_s": "file_size", "8564_u": "url", "8564_x": "subformat", "8564_y": "description", "8564_z": "comment"}
+ json_for_dc, {"dc:identifier": "url"}
+
+version_id:
+ creator:
+ @legacy(("005", ""),)
+ marc, "005", datetime.datetime(*(time.strptime(value, '%Y%m%d%H%M%S.0')[0:6]))
+ checker:
+ check_field_existence(1)
+ check_field_type('datetime.datetime')
+ producer:
+ json_for_marc, {"005": "self.get('version_id').strftime('%Y%m%d%H%M%S.0')"}
+ json_for_dc, {"dc:date": "self.get('version_id').strftime('%Y-%m-%dT%H:%M:%SZ')"}
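The `version_id` rules above parse a MARC 005 timestamp into a Python datetime and format it back out for the MARC and Dublin Core producers. A quick stdlib illustration of that round-trip (the sample value is made up):

```python
import datetime
import time

value = "20130102123456.0"  # a MARC 005-style timestamp (sample value)

# Parse as in the creator rule: strptime, keep the first six fields, build a datetime
version_id = datetime.datetime(*(time.strptime(value, '%Y%m%d%H%M%S.0')[0:6]))

# Format back out, as the two producer rules do
print(version_id.strftime('%Y%m%d%H%M%S.0'))      # json_for_marc: 20130102123456.0
print(version_id.strftime('%Y-%m-%dT%H:%M:%SZ'))  # json_for_dc: 2013-01-02T12:34:56Z
```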
###############################################################################
########## ##########
########## Derived and Calculated Fields Definitions ##########
########## ##########
###############################################################################
_persistent_identifiers_keys:
calculated:
@parse_first(('system_control_number', 'recid', 'doi', 'oai', 'system_number'))
get_persistent_identifiers_keys(self.keys())
documentation:
"""
This field will tell you which fields among all are considered
persistent identifiers (decorated with @persistent_identifier).
If a new persistent identifier field is added, the cached version of
this field must be rebuilt.
Note: if a new persistent identifier is added, the list of fields to
parse before this one should be updated.
"""
+_files:
+ calculated:
+ @legacy(("8564_z", "comment"),
+ ("8564_y", "caption", "description"),
+ ("8564_q", "eformat"),
+ ("8564_f", "name"),
+ ("8564_s", "size"),
+ ("8564_u", "url", "url")
+ )
+ @parse_first(('recid', ))
+ get_files_from_bibdoc(self.get('recid', -1))
+ documentation:
+ """
+ Retrieves all the files related to the recid that were passed to the
+ system using the FFT field described above.
+
+ Note: this is a mandatory field and it should not be removed from this
+ configuration file. On the other hand, the function that retrieves the
+ metadata from BibDoc could be enriched.
+ """
+ producer:
+ json_for_marc, {"8564_z": "comment", "8564_y": "description", "8564_q": "eformat", "8564_f": "name", "8564_s": "size", "8564_u": "url"}
+ json_for_dc, {"dc:identifier": "url"}
+
+_bibdocs:
+ calculated:
+ @do_not_cache
+ get_bibdoc(self.get('recid', -1))
number_of_authors:
derived:
@parse_first(('authors',))
@depends_on(('authors',))
len(self['authors'])
checker:
check_field_existence(0, 1)
check_field_type('num')
documentation:
"Number of authors"
creation_date:
derived:
@parse_first(('recid', ))
@depends_on(('recid', ))
get_creation_date(self['recid'])
documentation:
"Creation date"
+filetypes:
+ derived:
+ @parse_first(('recid',))
+ @depends_on(('recid',))
+ get_filetypes(self['recid'])
+ documentation:
+ "Filetypes of all files attached to the record"
+
_number_of_copies:
calculated:
@parse_first(('recid', 'collection'))
@depends_on(('recid', 'collection.primary'))
- @only_if(('BOOK' in self['collection.primary'],))
+ @only_if(('BOOK' in self.get('collection.primary', []),))
get_number_of_copies(self['recid'])
checker:
check_field_existence(0, 1)
check_field_type('num')
documentation:
"Number of copies"
_number_of_reviews:
calculated:
@parse_first(('recid', ))
get_number_of_reviews(self.get('recid'))
documentation:
"Number of reviews"
_number_of_comments:
calculated:
@parse_first(('recid', ))
get_number_of_comments(self.get('recid'))
documentation:
"Number of comments"
_cited_by_count:
calculated:
@parse_first(('recid', ))
get_cited_by_count(self.get('recid'))
documentation:
"How many records cite given record"
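The `producer` entries above pair a MARC tag plus subfield code (e.g. `8564_u`) with the JSON sub-key whose value should be serialised into it. As a rough illustration of that mapping, here is a minimal, hypothetical sketch; the function name and the tag-splitting convention are assumptions for illustration, not part of the Invenio API:

```python
def produce_marc(json_value, producer_rule):
    """Map one JSON field value onto MARC (tag, subfield) -> value pairs."""
    out = {}
    for marc_key, json_subkey in producer_rule.items():
        tag, subfield = marc_key[:5], marc_key[5:]   # e.g. "8564_" + "u"
        # An empty sub-key means the whole JSON value is emitted as-is
        value = json_value if not json_subkey else json_value.get(json_subkey)
        if value is not None:
            out[(tag, subfield)] = value
    return out

rule = {"8564_u": "url", "8564_s": "file_size"}
record_field = {"url": "http://example.org/doc.pdf", "file_size": "1024"}
print(produce_marc(record_field, rule))
# {('8564_', 'u'): 'http://example.org/doc.pdf', ('8564_', 's'): '1024'}
```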
diff --git a/modules/bibfield/lib/bibfield.py b/modules/bibfield/lib/bibfield.py
index b766bb7b2..f25176527 100644
--- a/modules/bibfield/lib/bibfield.py
+++ b/modules/bibfield/lib/bibfield.py
@@ -1,137 +1,163 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField engine
"""
__revision__ = "$Id$"
import os
try:
import cPickle as pickle
except:
import pickle
from pprint import pformat
from werkzeug import import_string
from invenio.config import CFG_PYLIBDIR, CFG_LOGDIR
from invenio.datastructures import LaziestDict
from invenio.dbquery import run_sql
from invenio.errorlib import register_exception
from invenio.signalutils import record_after_update
from invenio.bibfield_jsonreader import JsonReader
-from invenio.bibfield_utils import BlobWrapper
+from invenio.bibfield_utils import BlobWrapper, BibFieldDict
# Lazy loader of bibfield readers
def reader_discover(key):
try:
candidate = import_string('invenio.bibfield_%sreader:readers' % (key, ))
if issubclass(candidate, JsonReader):
return candidate
except:
register_exception()
raise KeyError(key)
CFG_BIBFIELD_READERS = LaziestDict(reader_discover)
@record_after_update.connect
def delete_record_cache(sender, recid=None, **kwargs):
get_record(recid, reset_cache=True)
-def create_record(blob, master_format='marc', verbose=0, **aditional_info):
+def create_record(blob, master_format='marc', verbose=0, **additional_info):
"""
Creates a record object from the blob description using the appropriate
reader for it.
@return Record object
"""
- blob_wrapper = BlobWrapper(blob=blob, master_format=master_format, **aditional_info)
+ blob_wrapper = BlobWrapper(blob=blob, master_format=master_format, **additional_info)
- return CFG_BIBFIELD_READERS[master_format](blob_wrapper)
+ return CFG_BIBFIELD_READERS[master_format](blob_wrapper, check=True)
-def create_records(blob, master_format='marc', verbose=0, **aditional_info):
+def create_records(blob, master_format='marc', verbose=0, **additional_info):
"""
Creates a list of records from the blob descriptions using the split_blob
function to divide them.
@see create_record()
@return List of record objects initiated by the functions create_record()
"""
- record_blods = CFG_BIBFIELD_READERS[master_format].split_blob(blob)
+ record_blobs = CFG_BIBFIELD_READERS[master_format].split_blob(blob, additional_info.get('schema', None))
- return [create_record(record_blob, master_format, verbose=verbose, **aditional_info) for record_blob in record_blods]
+ return [create_record(record_blob, master_format, verbose=verbose, **additional_info) for record_blob in record_blobs]
-def get_record(recid, reset_cache=False):
+def get_record(recid, reset_cache=False, fields=()):
"""
Record factory: it retrieves the record from the bibfmt table if it is
there; if not, or if reset_cache is set to True, it searches for the
appropriate reader to create the representation of the record.
@return: Bibfield object representing the record or None if the recid is not
present in the system
"""
+ record = None
#Search for recjson
if not reset_cache:
res = run_sql("SELECT value FROM bibfmt WHERE id_bibrec=%s AND format='recjson'",
(recid,))
if res:
- return JsonReader(BlobWrapper(pickle.loads(res[0][0])))
+ record = JsonReader(BlobWrapper(pickle.loads(res[0][0])))
#There is no version cached or we want to renew it
#Then retrieve information and blob
+ if not record or reset_cache:
+ blob_wrapper = _build_wrapper(recid)
+ if not blob_wrapper:
+ return None
+ record = CFG_BIBFIELD_READERS[blob_wrapper.master_format](blob_wrapper)
+
+ #Update bibfmt for future uses
+ run_sql("REPLACE INTO bibfmt(id_bibrec, format, last_updated, value) VALUES (%s, 'recjson', NOW(), %s)",
+ (recid, pickle.dumps((record.rec_json))))
+
+ if fields:
+ chunk = BibFieldDict()
+ for key in fields:
+ chunk[key] = record.get(key)
+ record = chunk
+ return record
- blob_wrapper = _build_wrapper(recid)
- if not blob_wrapper:
- return None
- record = CFG_BIBFIELD_READERS[blob_wrapper.master_format](blob_wrapper)
-
- #Update bibfmt for future uses
- run_sql("REPLACE INTO bibfmt(id_bibrec, format, last_updated, value) VALUES (%s, 'recjson', NOW(), %s)",
- (recid, pickle.dumps((record.rec_json))))
+def guess_legacy_field_names(fields, master_format='marc'):
+ """
+ Using the legacy rules written in the config file (@legacy), this function
+ tries to find the equivalent json field for one or more legacy fields.
- return record
+ >>> guess_legacy_field_names(('100__a', '245'), 'marc')
+ {'100__a':['authors[0].full_name'], '245':['title']}
+ """
+ from invenio.bibfield_config import legacy_rules
+
+ res = {}
+ if isinstance(fields, basestring):
+ fields = (fields, )
+ for field in fields:
+ try:
+ res[field] = legacy_rules[master_format].get(field, [])
+ except:
+ res[field] = []
+ return res
def _build_wrapper(recid):
#TODO: update to look inside mongoDB for the parameters and the blob
# For now this only works for marc and recstruct
try:
master_format = run_sql("SELECT master_format FROM bibrec WHERE id=%s", (recid,))[0][0]
except:
return None
schema = 'recstruct'
if master_format == 'marc':
from invenio.search_engine import get_record as se_get_record
blob = se_get_record(recid)
else:
return None
return BlobWrapper(blob, master_format=master_format, schema=schema)
diff --git a/modules/bibfield/lib/bibfield_config_engine.py b/modules/bibfield/lib/bibfield_config_engine.py
index 62a6b3231..10b6f6713 100644
--- a/modules/bibfield/lib/bibfield_config_engine.py
+++ b/modules/bibfield/lib/bibfield_config_engine.py
@@ -1,471 +1,561 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField configuration loader
This module uses pyparsing to read from the configuration file all the rules.
http://pyparsing.wikispaces.com/
"""
import os
import re
from invenio.config import CFG_BIBFIELD_MASTER_FORMATS, CFG_ETCDIR, CFG_PYLIBDIR
from invenio.bibfield_utils import BibFieldDict
from pyparsing import ParseException, FollowedBy, Suppress, OneOrMore, Literal, \
LineEnd, ZeroOrMore, Optional, Forward, Word, QuotedString, alphas, \
alphanums, originalTextFor, oneOf, nestedExpr, quotedString, removeQuotes, \
lineEnd, empty, col, restOfLine, delimitedList, nums
def _create_config_parser():
"""
Creates a parser using pyparsing that works with bibfield rule definitions
BNF like grammar:
- rule ::= ([persitent_identifier] json_id ["[0]" | "[n]"] "," aliases":" INDENT body UNDENT) | include
+ rule ::= ([persistent_identifier] json_id ["[0]" | "[n]"] "," aliases":" INDENT body UNDENT) | include | python_comment
include ::= "include(" PATH ")"
- body ::= [inherit_from] (creator | derived | calculated) [checker] [documentation]
+ body ::= [inherit_from] (creator | derived | calculated) [checker] [documentation] [producer]
aliases ::= json_id ["[0]" | "[n]"] ["," aliases]
+
creator ::= "creator:" INDENT creator_body+ UNDENT
- creator_body ::= [parse_first] [legacy] source_format "," source_tag "," python_allowed_expr
+ creator_body ::= [decorators] source_format "," source_tag "," python_allowed_expr
source_format ::= MASTER_FORMATS
source_tag ::= QUOTED_STRING
derived ::= "derived" INDENT derived_calculated_body UNDENT
calculated ::= "calculated:" INDENT derived_calculated_body UNDENT
- derived_calculated_body ::= [parse_first] [depends_on] [only_if] [do_not_cache] "," python_allowed_exp
-
+ derived_calculated_body ::= [decorators] "," python_allowed_exp
+ decorators ::= (persistent_identifier | legacy | do_not_cache | parse_first | depends_on | only_if | only_if_master_value)*
persistent_identifier ::= @persistent_identifier( level )
- inherit_from ::= "@inherit_from()"
legacy ::= "@legacy(" correspondences+ ")"
- do_not_cache ::= "@do_not_cache"
correspondences ::= "(" source_tag [ "," tag_name ] "," json_id ")"
parse_first ::= "@parse_first(" jsonid+ ")"
depends_on ::= "@depends_on(" json_id+ ")"
only_if ::= "@only_if(" python_condition+ ")"
+ only_if_master_value ::= "@only_if_master_value(" python_condition+ ")"
+
+ inherit_from ::= "@inherit_from()"
+ do_not_cache ::= "@do_not_cache"
python_allowed_exp ::= ident | list_def | dict_def | list_access | dict_access | function_call
checker ::= "checker:" INDENT checker_function+ UNDENT
documentation ::= INDENT doc_string subfield* UNDENT
doc_string ::= QUOTED_STRING
subfield ::= "@subfield" json_id["."json_id*] ":" docstring
+
+ producer ::= "producer:" INDENT producer_body UNDENT
+ producer_body ::= producer_code "," python_dictionary
+ producer_code ::= ident
"""
indent_stack = [1]
def check_sub_indent(str, location, tokens):
cur_col = col(location, str)
if cur_col > indent_stack[-1]:
indent_stack.append(cur_col)
else:
raise ParseException(str, location, "not a subentry")
def check_unindent(str, location, tokens):
if location >= len(str):
return
cur_col = col(location, str)
if not(cur_col < indent_stack[-1] and cur_col <= indent_stack[-2]):
raise ParseException(str, location, "not an unindent")
def do_unindent():
indent_stack.pop()
INDENT = lineEnd.suppress() + empty + empty.copy().setParseAction(check_sub_indent)
UNDENT = FollowedBy(empty).setParseAction(check_unindent)
UNDENT.setParseAction(do_unindent)
- json_id = (Word(alphanums + "_") + Optional(oneOf("[0] [n]")))\
+ json_id = (Word(alphas + "_", alphanums + "_") + Optional(oneOf("[0] [n]")))\
.setResultsName("json_id", listAllMatches=True)\
.setParseAction(lambda tokens: "".join(tokens))
aliases = delimitedList((Word(alphanums + "_") + Optional(oneOf("[0] [n]")))
.setParseAction(lambda tokens: "".join(tokens)))\
.setResultsName("aliases")
python_allowed_expr = Forward()
ident = Word(alphas + "_", alphanums + "_")
dict_def = originalTextFor(nestedExpr('{', '}'))
list_def = originalTextFor(nestedExpr('[', ']'))
dict_access = list_access = originalTextFor(ident + nestedExpr('[', ']'))
function_call = originalTextFor(ZeroOrMore(ident + ".") + ident + nestedExpr('(', ')'))
- python_allowed_expr << (ident ^ dict_def ^ list_def ^ dict_access ^ list_access ^ function_call)\
+ python_allowed_expr << (ident ^ dict_def ^ list_def ^ dict_access ^ list_access ^ function_call ^ restOfLine)\
.setResultsName("value", listAllMatches=True)
persistent_identifier = (Suppress("@persistent_identifier") + nestedExpr("(", ")"))\
.setResultsName("persistent_identifier")
- inherit_from = (Suppress("@inherit_from") + originalTextFor(nestedExpr("(", ")")))\
- .setResultsName("inherit_from")
legacy = (Suppress("@legacy") + originalTextFor(nestedExpr("(", ")")))\
.setResultsName("legacy", listAllMatches=True)
only_if = (Suppress("@only_if") + originalTextFor(nestedExpr("(", ")")))\
.setResultsName("only_if")
+ only_if_master_value = (Suppress("@only_if_value") + originalTextFor(nestedExpr("(", ")")))\
+ .setResultsName("only_if_master_value")
depends_on = (Suppress("@depends_on") + originalTextFor(nestedExpr("(", ")")))\
.setResultsName("depends_on")
parse_first = (Suppress("@parse_first") + originalTextFor(nestedExpr("(", ")")))\
.setResultsName("parse_first")
do_not_cache = (Suppress("@") + "do_not_cache")\
.setResultsName("do_not_cache")
+ field_decorator = parse_first ^ depends_on ^ only_if ^ only_if_master_value ^ do_not_cache ^ legacy
+
+ #Independent decorators
+ inherit_from = (Suppress("@inherit_from") + originalTextFor(nestedExpr("(", ")")))\
+ .setResultsName("inherit_from")
+
master_format = (Suppress("@master_format") + originalTextFor(nestedExpr("(", ")")))\
.setResultsName("master_format")
- derived_calculated_body = Optional(parse_first) + Optional(depends_on) + Optional(only_if) + Optional(do_not_cache) + python_allowed_expr
+ derived_calculated_body = ZeroOrMore(field_decorator) + python_allowed_expr
derived = "derived" + Suppress(":") + INDENT + derived_calculated_body + UNDENT
calculated = "calculated" + Suppress(":") + INDENT + derived_calculated_body + UNDENT
source_tag = quotedString\
- .setParseAction(removeQuotes)\
- .setResultsName("source_tag", listAllMatches=True)
+ .setParseAction(removeQuotes)\
+ .setResultsName("source_tag", listAllMatches=True)
source_format = oneOf(CFG_BIBFIELD_MASTER_FORMATS)\
.setResultsName("source_format", listAllMatches=True)
- creator_body = (Optional(parse_first) + Optional(depends_on) + Optional(only_if) + Optional(legacy) + source_format + Suppress(",") + source_tag + Suppress(",") + python_allowed_expr)\
- .setResultsName("creator_def", listAllMatches=True)
+ creator_body = (ZeroOrMore(field_decorator) + source_format + Suppress(",") + source_tag + Suppress(",") + python_allowed_expr)\
+ .setResultsName("creator_def", listAllMatches=True)
creator = "creator" + Suppress(":") + INDENT + OneOrMore(creator_body) + UNDENT
checker_function = (Optional(master_format) + ZeroOrMore(ident + ".") + ident + originalTextFor(nestedExpr('(', ')')))\
.setResultsName("checker_function", listAllMatches=True)
checker = ("checker" + Suppress(":") + INDENT + OneOrMore(checker_function) + UNDENT)
doc_string = QuotedString(quoteChar='"""', multiline=True) | quotedString.setParseAction(removeQuotes)
subfield = (Suppress("@subfield") + Word(alphanums + "_" + '.') + Suppress(":") + Optional(doc_string))\
.setResultsName("subfields", listAllMatches=True)
documentation = ("documentation" + Suppress(":") + INDENT + Optional(doc_string).setResultsName("main_doc") + ZeroOrMore(subfield) + UNDENT)\
.setResultsName("documentation")
+ producer_code = Word(alphas + "_", alphanums + "_")\
+ .setResultsName("producer_code", listAllMatches=True)
+ producer_body = (producer_code + Suppress(",") + python_allowed_expr)\
+ .setResultsName("producer_def", listAllMatches=True)
+ producer = "producer" + Suppress(":") + INDENT + OneOrMore(producer_body) + UNDENT
+
field_def = (creator | derived | calculated)\
.setResultsName("type_field", listAllMatches=True)
- body = Optional(inherit_from) + Optional(field_def) + Optional(checker) + Optional(documentation)
+ body = Optional(inherit_from) + Optional(field_def) + Optional(checker) + Optional(documentation) + Optional(producer)
comment = Literal("#") + restOfLine + LineEnd()
include = (Suppress("include") + quotedString)\
.setResultsName("includes", listAllMatches=True)
rule = (Optional(persistent_identifier) + json_id + Optional(Suppress(",") + aliases) + Suppress(":") + INDENT + body + UNDENT)\
.setResultsName("rules", listAllMatches=True)
return OneOrMore(rule | include | comment.suppress())
class BibFieldParserException(Exception):
"""
Exception raised when some error happens when parsing doctype and rule
documents
"""
pass
class BibFieldParser(object):
"""
BibField rule parser
"""
def __init__(self,
base_dir=CFG_ETCDIR + '/bibfield',
main_config_file='bibfield.cfg'):
"""
Creates the parsers for the rules and parses all the
documents inside base_dir
@param base_dir: Full path where the configuration files are placed
@param main_config_file: Name of the main file that contains the rules
to perform the translation
"""
self.base_dir = base_dir
self.main_config_file = main_config_file
self.config_rules = {}
+ self.legacy_rules = {}
+
self._unresolved_inheritence = []
self._create_config_rules()
def write_to_file(self, file_name=CFG_PYLIBDIR + '/invenio/bibfield_config.py'):
"""
Writes config_rules and legacy_rules into file_name so that they can
be accessed afterwards from the readers
"""
fd = open(file_name, 'w')
- fd.write('config_rules=%s' % (str(self.config_rules,)))
+ fd.write('config_rules=%s' % (repr(self.config_rules), ))
+ fd.write('\n')
+ fd.write('legacy_rules=%s' % (repr(self.legacy_rules), ))
fd.write('\n')
fd.close()
def _create_config_rules(self):
"""
Fills up config_rules dictionary with the rules defined inside the
configuration file.
It also resolves the includes present inside the main configuration file
and, recursively, the ones in the other files.
It uses @see: _create_creator_rule() and @see: _create_derived_calculated_rule()
to fill up config_rules
"""
parser = _create_config_parser()
main_rules = parser \
.parseFile(self.base_dir + '/' + self.main_config_file,
parseAll=True)
rules = main_rules.rules
includes = main_rules.includes
already_includes = [self.main_config_file]
#Resolve includes
for include in includes:
if include[0] in already_includes:
continue
already_includes.append(include[0])
if os.path.exists(include[0]):
tmp = parser.parseFile(include[0], parseAll=True)
else:
#CHECK: This will raise an IOError if the file doesn't exist
tmp = parser.parseFile(self.base_dir + '/' + include[0],
parseAll=True)
if rules and tmp.rules:
rules += tmp.rules
else:
rules = tmp.rules
if includes and tmp.includes:
includes += tmp.includes
else:
includes = tmp.includes
#Create config rules
for rule in rules:
if rule.inherit_from or rule.type_field[0] == "creator":
self._create_creator_rule(rule)
elif rule.type_field[0] == "derived" or rule.type_field[0] == "calculated":
self._create_derived_calculated_rule(rule)
else:
assert False, 'Type creator, derived or calculated expected or inherit field'
#Resolve inheritance
for i in xrange(len(self._unresolved_inheritence) - 1, -1, -1):
self._resolve_inheritance(self._unresolved_inheritence[i])
del self._unresolved_inheritence[i]
def _create_creator_rule(self, rule):
"""
Creates the config_rule entries for the creator rules.
The result looks like this:
- {'title':{'rules': { 'inherit_from' : (inherit_from_list),
- 'source_format' : [translation_rules],
- 'legacy': (legacy_rules)}
- 'checker': [(function_name, arguments), ...]
- 'documentation' : {'doc_string' : '...',
- 'subfields' : .....},
- 'type' : 'real'
- 'aliases' : [list_of_aliases_ids]
- },
+ {'json_id':{'rules': { 'inherit_from' : (inherit_from_list),
+ 'source_format' : [translation_rules],
+ 'parse_first' : (parse_first_json_ids),
+ 'depends_on' : (depends_on_json_id),
+ 'only_if' : (only_if_boolean_expressions),
+ 'only_if_master_value': (only_if_master_value_boolean_expressions),
+ },
+ 'checker': [(function_name, arguments), ...]
+ 'documentation' : {'doc_string': '...',
+ 'subfields' : .....},
+ 'type' : 'real'
+ 'aliases' : [list_of_aliases_ids]
+ },
....
}
"""
json_id = rule.json_id[0]
#Workaround to keep clean doctype files
#Just creates a dict entry with the main json field name and points it to
#the full one i.e.: 'authors' : ['authors[0]', 'authors[n]']
if '[0]' in json_id or '[n]' in json_id:
- main_json_id = re.sub('[\[n\] \[0\]]', '', json_id)
+ main_json_id = re.sub('(\[n\]|\[0\])', '', json_id)
if not main_json_id in self.config_rules:
self.config_rules[main_json_id] = []
self.config_rules[main_json_id].append(json_id)
aliases = []
if rule.aliases:
aliases = rule.aliases.asList()
persistent_id = None
if rule.persistent_identifier:
persistent_id = int(rule.persistent_identifier[0][0])
inherit_from = None
if rule.inherit_from:
self._unresolved_inheritence.append(json_id)
inherit_from = eval(rule.inherit_from[0])
rules = {}
for creator in rule.creator_def:
source_format = creator.source_format[0]
if source_format not in rules:
#Allow several tags to point to the same json id
rules[source_format] = []
- legacy = ()
- if creator.legacy:
- legacy = eval(creator.legacy[0][0])
-
- depends_on = only_if = parse_first = None
- if creator.parse_first:
- parse_first = creator.parse_first[0]
- if rule.depends_on:
- depends_on = rule.depends_on[0]
- if rule.only_if:
- only_if = rule.only_if[0]
-
- rules[source_format].append({'source_tag' : creator.source_tag[0].split(),
- 'value' : creator.value[0],
- 'legacy' : legacy,
- 'parse_first': parse_first,
- 'depends_on' : depends_on,
- 'only_if' : only_if})
+ (depends_on, only_if, only_if_master_value, parse_first) = self._create_decorators_content(creator)
+ self._create_legacy_rules(creator.legacy, json_id, source_format)
+
+ rules[source_format].append({'source_tag' : creator.source_tag[0].split(),
+ 'value' : creator.value[0],
+ 'depends_on' : depends_on,
+ 'only_if' : only_if,
+ 'only_if_master_value' : only_if_master_value,
+ 'parse_first' : parse_first})
#Check duplicate names to overwrite configuration
if not json_id in self.config_rules:
self.config_rules[json_id] = {'inherit_from' : inherit_from,
'rules' : rules,
'checker' : [],
'documentation' : BibFieldDict(),
+ 'producer' : {},
'type' : 'real',
'aliases' : aliases,
'persistent_identifier': persistent_id,
'overwrite' : False}
else:
self.config_rules[json_id]['overwrite'] = True
self.config_rules[json_id]['rules'].update(rules)
self.config_rules[json_id]['aliases'] = \
aliases or self.config_rules[json_id]['aliases']
self.config_rules[json_id]['persistent_identifier'] = \
persistent_id or self.config_rules[json_id]['persistent_identifier']
self.config_rules[json_id]['inherit_from'] = \
inherit_from or self.config_rules[json_id]['inherit_from']
self._create_checkers(rule)
self._create_documentation(rule)
+ self._create_producer(rule)
def _create_derived_calculated_rule(self, rule):
"""
Creates the config_rules entries for the virtual fields
The result is similar to the one of real fields but in this case there is
only one rule.
"""
json_id = rule.json_id[0]
#Check duplicate names
if json_id in self.config_rules:
raise BibFieldParserException("Name error: '%s' field name already defined"
% (rule.json_id[0],))
- depends_on = only_if = parse_first = persistent_id = None
-
aliases = []
if rule.aliases:
aliases = rule.aliases.asList()
if re.search('^_[a-zA-Z0-9]', json_id):
aliases.append(json_id[1:])
do_not_cache = False
- if rule.depends_on:
- depends_on = rule.depends_on[0]
- if rule.only_if:
- only_if = rule.only_if[0]
- if rule.parse_first:
- parse_first = rule.parse_first[0]
if rule.do_not_cache:
do_not_cache = True
+
+ persistent_id = None
if rule.persistent_identifier:
persistent_id = int(rule.persistent_identifier[0][0])
+ (depends_on, only_if, only_if_master_value, parse_first) = self._create_decorators_content(rule)
+ self._create_legacy_rules(rule.legacy, json_id)
+
self.config_rules[json_id] = {'rules' : {},
'checker' : [],
'documentation': BibFieldDict(),
+ 'producer' : {},
'aliases' : aliases,
'type' : rule.type_field[0],
'persistent_identifier' : persistent_id,
'overwrite' : False}
- self.config_rules[json_id]['rules'] = {'value' : rule.value[0],
- 'parse_first' : parse_first,
- 'depends_on' : depends_on,
- 'only_if' : only_if,
- 'do_not_cache': do_not_cache}
+ self.config_rules[json_id]['rules'] = {'value' : rule.value[0],
+ 'depends_on' : depends_on,
+ 'only_if' : only_if,
+ 'only_if_master_value': only_if_master_value,
+ 'parse_first' : parse_first,
+ 'do_not_cache' : do_not_cache}
self._create_checkers(rule)
self._create_documentation(rule)
+ self._create_producer(rule)
+
+ def _create_decorators_content(self, rule):
+ """
+ Extracts from the rule all the possible decorators.
+ """
+ depends_on = only_if = only_if_master_value = parse_first = None
+
+ if rule.depends_on:
+ depends_on = rule.depends_on[0]
+ if rule.only_if:
+ only_if = rule.only_if[0]
+ if rule.only_if_master_value:
+ only_if_master_value = rule.only_if_master_value[0]
+ if rule.parse_first:
+ parse_first = rule.parse_first[0]
+
+ return (depends_on, only_if, only_if_master_value, parse_first)
+
+ def _create_legacy_rules(self, legacy_rules, json_id, source_format=None):
+ """
+ Creates the legacy rules dictionary:
+
+ {'100' : ['authors[0]'],
+ '100__' : ['authors[0]'],
+ '100__%': ['authors[0]'],
+ '100__a': ['authors[0].full_name'],
+ .......
+ }
+ """
+ if not legacy_rules:
+ return
+ for legacy_rule in legacy_rules:
+ legacy_rule = eval(legacy_rule[0])
+
+ if source_format is None:
+ inner_source_format = legacy_rule[0]
+ legacy_rule = legacy_rule[1]
+ else:
+ inner_source_format = source_format
+
+ if not inner_source_format in self.legacy_rules:
+ self.legacy_rules[inner_source_format] = {}
+
+ for field_legacy_rule in legacy_rule:
+ #Allow string and tuple in the config file
+ legacy_fields = isinstance(field_legacy_rule[0], basestring) and (field_legacy_rule[0], ) or field_legacy_rule[0]
+ json_field = json_id
+ if field_legacy_rule[-1]:
+ json_field = '.'.join((json_field, field_legacy_rule[-1]))
+ for legacy_field in legacy_fields:
+ if not legacy_field in self.legacy_rules[inner_source_format]:
+ self.legacy_rules[inner_source_format][legacy_field] = []
+ self.legacy_rules[inner_source_format][legacy_field].append(json_field)
def _resolve_inheritance(self, json_id):
"""docstring for _resolve_inheritance"""
inherit_from_list = self.config_rules[json_id]['inherit_from']
rule = self.config_rules[json_id]
for inherit_json_id in inherit_from_list:
#Check if everything is fine
if inherit_json_id == json_id:
raise BibFieldParserException("Inheritance from itself")
if inherit_json_id not in self.config_rules:
raise BibFieldParserException("Unable to solve %s inheritance" % (inherit_json_id,))
if inherit_json_id in self._unresolved_inheritence:
self._resolve_inheritance(inherit_json_id)
self._unresolved_inheritence.remove(inherit_json_id)
inherit_rule = self.config_rules[inherit_json_id]
for format in inherit_rule['rules']:
if not format in rule['rules']:
rule['rules'][format] = []
rule['rules'][format].extend(inherit_rule['rules'][format])
rule['checker'].extend(inherit_rule['checker'])
def _create_checkers(self, rule):
"""
Creates the list of checker functions and arguments for the given rule
"""
json_id = rule.json_id[0]
assert json_id in self.config_rules
if rule.checker_function:
if self.config_rules[json_id]['overwrite']:
self.config_rules[json_id]['checker'] = []
for checker in rule.checker_function:
if checker.master_format:
master_format = eval(rule.master_format[0])
checker_function_name = checker[1]
arguments = checker[2][1:-1]
else:
master_format = ('all',)
checker_function_name = checker[0]
arguments = checker[1][1:-1]
#json_id : (master_format, checker_name, parameters)
self.config_rules[json_id]['checker'].append((master_format,
checker_function_name,
arguments))
def _create_documentation(self, rule):
"""
Creates the documentation dictionary for the given rule
"""
json_id = rule.json_id[0]
assert json_id in self.config_rules
if rule.documentation:
if self.config_rules[json_id]['overwrite']:
self.config_rules[json_id]['documentation'] = BibFieldDict()
config_doc = self.config_rules[json_id]['documentation']
config_doc['doc_string'] = rule.documentation.main_doc
config_doc['subfields'] = None
if rule.documentation.subfields:
for subfield in rule.documentation.subfields:
key = "%s.%s" % ('subfields', subfield[0].replace('.', '.subfields.'))
config_doc[key] = {'doc_string': subfield[1],
'subfields' : None}
+
+ def _create_producer(self, rule):
+ """
+ Creates the dictionary of possible producer formats for the given rule
+ """
+ json_id = rule.json_id[0]
+ assert json_id in self.config_rules
+
+ if rule.producer_def:
+ if self.config_rules[json_id]['overwrite']:
+ self.config_rules[json_id]['producer'] = {}
+ for producer in rule.producer_def:
+ producer_code = producer.producer_code[0]
+ rule = producer.value[0]
+ if not producer_code in self.config_rules[json_id]['producer']:
+ self.config_rules[json_id]['producer'][producer_code] = []
+ self.config_rules[json_id]['producer'][producer_code].append(eval(rule))
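The `_create_legacy_rules()` method above flattens each `@legacy` declaration into a lookup table from legacy MARC tags to JSON paths, matching the dictionary shown in its docstring. A pure-Python sketch of that flattening step, with illustrative field names:

```python
def build_legacy_rules(json_id, legacy_rule):
    """Flatten one @legacy declaration into {legacy_tag: [json_path, ...]}."""
    rules = {}
    for entry in legacy_rule:
        # The first element may be a single tag or a tuple of tags
        tags = (entry[0],) if isinstance(entry[0], str) else entry[0]
        # A non-empty last element names the JSON sub-field
        json_field = json_id if not entry[-1] else '%s.%s' % (json_id, entry[-1])
        for tag in tags:
            rules.setdefault(tag, []).append(json_field)
    return rules

rule = ((("100", "100__", "100__%"), ""), ("100__a", "full_name"))
print(build_legacy_rules("authors[0]", rule))
# {'100': ['authors[0]'], '100__': ['authors[0]'], '100__%': ['authors[0]'],
#  '100__a': ['authors[0].full_name']}
```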
diff --git a/modules/bibfield/lib/bibfield_config_engine_unit_tests.py b/modules/bibfield/lib/bibfield_config_engine_unit_tests.py
index 59b6d1138..882615922 100644
--- a/modules/bibfield/lib/bibfield_config_engine_unit_tests.py
+++ b/modules/bibfield/lib/bibfield_config_engine_unit_tests.py
@@ -1,76 +1,76 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibFieldParser Unit tests.
"""
from invenio.testutils import make_test_suite, run_test_suite, InvenioTestCase
class BibFieldParserUnitTests(InvenioTestCase):
"""
Test to verify the correct creation of bibfield_config.py from the rules and
doctypes files.
"""
def setUp(self):
"""Loads bibfield configuration test files"""
super(BibFieldParserUnitTests, self).setUp()
from invenio.bibfield_config_engine import BibFieldParser
parser = BibFieldParser(main_config_file="test_bibfield.cfg")
self.config_rules = parser.config_rules
def test_bibfield_rules_parser(self):
- """Checks if the configuration rules are well built"""
+ """BibField - configuration rules building process"""
self.assertTrue(len(self.config_rules) >= 20)
#Check imports
self.assertTrue('authors' in self.config_rules)
self.assertTrue('title' in self.config_rules)
#Check the workaround for [n] and [0]
self.assertTrue(len(self.config_rules['authors']) == 2)
self.assertEqual(self.config_rules['authors'], ['authors[0]', 'authors[n]'])
self.assertTrue('authors[0]' in self.config_rules)
self.assertTrue('authors[n]' in self.config_rules)
self.assertTrue(self.config_rules['doi']['persistent_identifier'])
#Check if derived and calculated fields are well parsed
self.assertTrue('dummy' in self.config_rules)
self.assertTrue(self.config_rules['dummy']['type'] == 'derived')
self.assertTrue(self.config_rules['dummy']['persistent_identifier'])
self.assertTrue(self.config_rules['_number_of_copies']['type'] == 'calculated')
self.assertTrue(self.config_rules['authors[0]']['type'] == 'real')
self.assertTrue(self.config_rules['_random']['rules']['do_not_cache'])
self.assertFalse(self.config_rules['_number_of_copies']['rules']['do_not_cache'])
#Check inheritance
self.assertTrue('main_author' in self.config_rules)
self.assertEqual(self.config_rules['main_author']['rules'],
self.config_rules['authors[0]']['rules'])
def test_bibfield_doctypes_parser(self):
#TODO: to be added in the next iteration
pass
def test_writing_bibfield_config_file(self):
#TODO: tests
pass
TEST_SUITE = make_test_suite(BibFieldParserUnitTests)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
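The tests above exercise the `[0]`/`[n]` workaround: a field like `authors` maps to a list of concrete rule ids (`authors[0]`, `authors[n]`), each of which maps to its own rule dict. A sketch of resolving that indirection, assuming a toy `config_rules` dict (names are illustrative, not the real engine's API):

```python
def resolve_rule(config_rules, json_id):
    """Return the list of concrete rule dicts behind a json_id."""
    entry = config_rules[json_id]
    if isinstance(entry, list):  # undo the [0]/[n] workaround
        return [config_rules[concrete_id] for concrete_id in entry]
    return [entry]
```

So `resolve_rule(rules, 'authors')` yields both concrete rules, while a plain field like `title` yields a one-element list.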
diff --git a/modules/bibfield/lib/bibfield_jsonreader.py b/modules/bibfield/lib/bibfield_jsonreader.py
index b56a95a3f..56e784b47 100644
--- a/modules/bibfield/lib/bibfield_jsonreader.py
+++ b/modules/bibfield/lib/bibfield_jsonreader.py
@@ -1,334 +1,362 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField Json Reader
"""
__revision__ = "$Id$"
+import os
import re
import sys
if sys.version_info < (2,5):
def all(list):
for element in list:
if not element:
return False
return True
def any(list):
for element in list:
if element:
return True
return False
-from invenio.bibfield_utils import BibFieldDict, BibFieldCheckerException
+from invenio.config import CFG_PYLIBDIR
+from invenio.pluginutils import PluginContainer
+
+from invenio.bibfield_utils import BibFieldDict, \
+ InvenioBibFieldContinuableError, \
+ InvenioBibFieldError
from invenio.bibfield_config import config_rules
class JsonReader(BibFieldDict):
"""
Base class inside the hierarchy that contains several method implementations
that will be shared, eventually, by all the *Reader classes.
In this particular case this class is expecting that the base format is json,
so no conversion is needed.
"""
- def __init__(self, blob_wrapper=None):
+ def __init__(self, blob_wrapper=None, check = False):
"""
blob -> _prepare_blob(...) -> rec_tree -> _translate(...) -> rec_json -> check_record(...)
"""
super(JsonReader, self).__init__()
self.blob_wrapper = blob_wrapper
self.rec_tree = None # all record information represented as a tree (intermediate structure)
- self._missing_cfg = []
- self._warning_messages = []
-
self.__parsed = []
if self.blob_wrapper:
try:
self['__master_format'] = self.blob_wrapper.master_format
except AttributeError:
pass # We are retrieving the cached version from the data base containing __master_format
self._prepare_blob()
self._translate()
self._post_process_json()
- self.check_record()
+ if check:
+ self.check_record()
self.is_init_phase = False
else:
self['__master_format'] = 'json'
@staticmethod
- def split_blob(blob):
+ def split_blob(blob, schema):
"""
In case of several records inside the blob, this method specifies how to split
them so they can be worked on one by one afterwards
"""
raise NotImplementedError("This method must be implemented by each reader")
def get_persistent_identifiers(self):
"""
Using the _persistent_identifiers_keys calculated field, gets the subset
of the record containing all persistent identifiers
"""
return dict((key, self[key]) for key in self.get('_persistent_identifiers_keys', reset_cache=True))
def is_empty(self):
"""
A record is empty if there is nothing stored inside rec_json or there is
- only '__master_format'
+ only '__key'
"""
- if not self.rec_json:
- return True
- if self.keys() == ['__master_format']:
+ if self.rec_json is None or len(self.rec_json.keys()) == 0:
return True
if all(key.startswith('_') for key in self.keys()):
return True
return False
- def check_record(self):
+ def check_record(self, reset=True):
"""
Using the checking rules defined inside the bibfield configuration files, checks
whether the record is well built. If not, it stores the problems inside
- self._warning_messages
+ self['__error_messages'], splitting them into continuable errors and fatal/non-continuable
+ errors
"""
def check_rules(checker_functions, key):
"""docstring for check_rule"""
for checker_function in checker_functions:
if 'all' in checker_function[0] or self['__master_format'] in checker_function[0]:
try:
self._try_to_eval("%s(self,'%s',%s)" % (checker_function[1], key, checker_function[2]))
- except BibFieldCheckerException, err:
- self._warning_messages.append(str(err))
+ except InvenioBibFieldContinuableError, err:
+ self['__error_messages']['cerror'].append('Checking CError - ' + str(err))
+ except InvenioBibFieldError, err:
+ self['__error_messages']['error'].append('Checking Error - ' + str(err))
+
+ if reset or '__error_messages.error' not in self or '__error_messages.cerror' not in self:
+ self.rec_json['__error_messages'] = {'error': [], 'cerror': []}
+
for key in self.keys():
try:
check_rules(config_rules[key]['checker'], key)
except TypeError:
for kkey in config_rules[key]:
check_rules(config_rules[kkey]['checker'], kkey)
except KeyError:
continue
+ @property
+ def fatal_errors(self):
+ """@return All the fatal/non-continuable errors that check_record has found"""
+ return self.get('__error_messages.error', [])
+
+ @property
+ def continuable_errors(self):
+ """@return All the continuable errors that check_record has found"""
+ return self.get('__error_messages.cerror', [])
+
def legacy_export_as_marc(self):
"""
It creates a valid marcxml using the legacy rules defined in the config
file
"""
+ from collections import Iterable
def encode_for_marcxml(value):
from invenio.textutils import encode_for_xml
+ if isinstance(value, unicode):
+ value = value.encode('utf8')
return encode_for_xml(str(value))
- formatstring_controlfield = '<controlfield tag="{tag}">{content}</controlfield>'
- formatstring_datafield = '<datafield tag="{tag}" ind1="{ind1}" ind2="{ind2}">{content}</datafield>'
-
- def create_marc_representation(key, value, legacy_rules):
- """
- Helper function to create the marc representation of one field
-
- #FIXME: refactor this spaghetti code
- """
- output = ''
+ export = '<record>'
+ marc_dicts = self.produce_json_for_marc()
+ for marc_dict in marc_dicts:
content = ''
tag = ''
ind1 = ''
ind2 = ''
-
- if not value:
- return ''
-
- for legacy_rule in legacy_rules:
- if not '%' in legacy_rule[0]:
- if len(legacy_rule[0]) == 3 and legacy_rule[0].startswith('00'):
+ for key, value in marc_dict.iteritems():
+ if isinstance(value, basestring) or not isinstance(value, Iterable):
+ value = [value]
+ for v in value:
+ if v is None:
+ continue
+ if key.startswith('00') and len(key) == 3:
# Control Field (No indicators no subfields)
- formatstring = None
- if legacy_rule[0] == '005':
- #Especial format for date only for 005 tag
- formatstring = "%Y%m%d%H%M%S.0"
- output += '<controlfield tag="%s">%s</controlfield>' % (legacy_rule[0],
- self.get(key,
- default='',
- formatstring=formatstring,
- formatfunction=encode_for_marcxml)
- )
- elif len(legacy_rule[0]) == 6:
- #Data Field
- if not (tag == legacy_rule[0][:3] and ind1 == legacy_rule[0][3].replace('_', '') and ind2 == legacy_rule[0][4].replace('_', '')):
- tag = legacy_rule[0][:3]
- ind1 = legacy_rule[0][3].replace('_', '')
- ind2 = legacy_rule[0][4].replace('_', '')
+ export += '<controlfield tag="%s">%s</controlfield>\n' % (key, encode_for_marcxml(v))
+ elif len(key) == 6:
+ if not (tag == key[:3] and ind1 == key[3].replace('_', '') and ind2 == key[4].replace('_', '')):
+ tag = key[:3]
+ ind1 = key[3].replace('_', '')
+ ind2 = key[4].replace('_', '')
if content:
- output += '<datafield tag="%s" ind1="%s" ind2="%s">%s</datafield>' % (tag, ind1, ind2, content)
+ export += '<datafield tag="%s" ind1="%s" ind2="%s">%s</datafield>\n' % (tag, ind1, ind2, content)
content = ''
- try:
- tmp = value.get(legacy_rule[-1])
- if tmp:
- tmp = encode_for_marcxml(tmp)
- else:
- continue
- except AttributeError:
- tmp = encode_for_marcxml(value)
+ content += '<subfield code="%s">%s</subfield>' % (key[5], encode_for_marcxml(v))
+ else:
+ pass
- content += '<subfield code="%s">%s</subfield>' % (legacy_rule[0][5], tmp)
if content:
- output += '<datafield tag="%s" ind1="%s" ind2="%s">%s</datafield>' % (tag, ind1, ind2, content)
- return output
-
- export = '<record>'
-
- for key in [k for k in config_rules.iterkeys() if k in self]:
- values = self[key.replace('[n]', '[1:]')]
- if not isinstance(values, list):
- values = [values]
- for value in values:
- try:
- export += create_marc_representation(key, value, sum([rule['legacy'] for rule in config_rules[key]['rules']['marc']], ()))
- except (TypeError, KeyError):
- break
+ export += '<datafield tag="%s" ind1="%s" ind2="%s">%s</datafield>\n' % (tag, ind1, ind2, content)
export += '</record>'
return export
def get_legacy_recstruct(self):
"""
It creates the recstruct representation using the legacy rules defined in
the configuration file
#CHECK: it might be a bit of an overkill
"""
from invenio.bibrecord import create_record
return create_record(self.legacy_export_as_marc())[0]
def _prepare_blob(self):
"""
This method might be overridden by the *Reader and should take care of
transforming the blob into a homogeneous format that the _translate()
method understands.
Overriding this method is optional if there is no need to transform
the blob into something that _translate understands.
"""
#In this case no translation needed
self.rec_tree = self.rec_json = self.blob_wrapper.blob
def _translate(self):
"""
Using the intermediate structure (self.rec_tree) that _prepare_blob has
created, it transforms the record into a JSON-like structure using the rules
present in the bibfield configuration file.
To apply these rules it takes into account the type of the reader (which
in effect means the type of source format) and the doctype.
"""
if self.__class__.__name__ == 'JsonReader':
#No translation needed for json
pass
else:
#TODO: allow a list of doctypes and get the union of them
# fields = doctype_definition[blob.doctype]['fields']
# Now just getting all the possible field from config_rules
fields = dict(zip(config_rules.keys(), config_rules.keys()))
for json_id, field_name in fields.iteritems():
self._unpack_rule(json_id, field_name)
def _get_elements_from_rec_tree(self, regex_rules):
- """docstring for _get_elements_from_rec_tree"""
for regex_rule in regex_rules:
for element in self.rec_tree[re.compile(regex_rule)]:
yield element
def _unpack_rule(self, json_id, field_name=None):
if not field_name:
field_name = json_id
rule_def = config_rules[json_id]
if isinstance(rule_def, list): # Undo the workaround for [0] and [n]
return all([self._unpack_rule(json_id_rule) for json_id_rule in rule_def])
if (json_id, field_name) in self.__parsed:
return field_name in self
self.__parsed.append((json_id, field_name))
if rule_def['type'] == 'real':
try:
rules = rule_def['rules'][self['__master_format']]
except KeyError:
return False
return all(self._apply_rule(field_name, rule_def['aliases'], rule) for rule in rules)
else:
return self._apply_virtual_rule(field_name, rule_def['aliases'], rule_def['rules'], rule_def['type'])
def _apply_rule(self, field_name, aliases, rule):
- """docstring for _apply_rule"""
if 'entire_record' in rule['source_tag'] or any(key in self.rec_tree for key in rule['source_tag']):
if rule['parse_first']:
for json_id in self._try_to_eval(rule['parse_first']):
self._unpack_rule(json_id)
if rule['depends_on'] and not all(k in self for k in self._try_to_eval(rule['depends_on'])):
return False
if rule['only_if'] and not all(self._try_to_eval(rule['only_if'])):
return False
if 'entire_record' in rule['source_tag']:
self[field_name] = self._try_to_eval(rule['value'], value=self.rec_tree)
else:
for elements in self._get_elements_from_rec_tree(rule['source_tag']):
if isinstance(elements, list):
+ returned_value = False
for element in elements:
- self[field_name] = self._try_to_eval(rule['value'], value=element)
+ if rule['only_if_master_value'] and not all(self._try_to_eval(rule['only_if_master_value'], value=element)):
+ returned_value = returned_value or False
+ else:
+ try:
+ self[field_name] = self._try_to_eval(rule['value'], value=element)
+ returned_value = returned_value or True
+ except Exception, e:
+ self['__error_messages.error[n]'] = 'Rule Error - Unable to apply rule for field %s - %s' % (field_name, str(e))
+ returned_value = returned_value or False
else:
- self[field_name] = self._try_to_eval(rule['value'], value=elements)
+ if rule['only_if_master_value'] and not all(self._try_to_eval(rule['only_if_master_value'], value=elements)):
+ return False
+ else:
+ try:
+ self[field_name] = self._try_to_eval(rule['value'], value=elements)
+ except Exception, e:
+ self['__error_messages.error[n]'] = 'Rule Error - Unable to apply rule for field %s - %s' % (field_name, str(e))
+ return False
+
for alias in aliases:
self['__aliases'][alias] = field_name
return True
else:
return False
def _apply_virtual_rule(self, field_name, aliases, rule, rule_type):
if rule['parse_first']:
for json_id in self._try_to_eval(rule['parse_first']):
self._unpack_rule(json_id)
if rule['depends_on'] and not all(k in self for k in self._try_to_eval(rule['depends_on'])):
return False
if rule['only_if'] and not all(self._try_to_eval(rule['only_if'])):
return False
#Apply rule
if rule_type == 'derived':
- self[field_name] = self._try_to_eval(rule['value'])
+ try:
+ self[field_name] = self._try_to_eval(rule['value'])
+ except Exception, e:
+ self['__error_messages.cerror[n]'] = 'Virtual Rule CError - Unable to evaluate %s - %s' % (field_name, str(e))
else:
- self[field_name] = [self._try_to_eval(rule['value']), rule['value']]
-
- if rule['do_not_cache']:
- self['__do_not_cache'].append(field_name)
+ self['__calculated_functions'][field_name] = rule['value']
+ if rule['do_not_cache']:
+ self['__do_not_cache'].append(field_name)
+ self[field_name] = None
+ else:
+ try:
+ self[field_name] = self._try_to_eval(rule['value'])
+ except Exception, e:
+ self['__error_messages.cerror[n]'] = 'Virtual Rule CError - Unable to evaluate %s - %s' % (field_name, str(e))
for alias in aliases:
self['__aliases'][alias] = field_name
return True
def _post_process_json(self):
"""
If needed this method will post process the json structure, e.g. pruning
the json to delete None values
"""
- pass
+ def remove_none_values(d):
+ if d is None or not isinstance(d, dict):
+ return
+ for key, value in d.items():
+ if value is None:
+ del d[key]
+ if isinstance(value, dict):
+ remove_none_values(value)
+ if isinstance(value, list):
+ value[:] = [element for element in value if element is not None]
+ for element in value:
+ remove_none_values(element)
+ remove_none_values(self.rec_json)
+
+
+
+for key, value in PluginContainer(os.path.join(CFG_PYLIBDIR, 'invenio', 'bibfield_functions', 'produce_json_for_*.py')).iteritems():
+ setattr(JsonReader, key, value)
## Compulsory plugin interface
readers = JsonReader
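The `_post_process_json` hunk above prunes `None` values from the record tree in place. A self-contained sketch of the same pruning, written copy-style so no container is mutated while being iterated (function name is illustrative, not part of the reader's API):

```python
def prune_none(obj):
    """Return a copy of obj with None values removed from dicts and lists."""
    if isinstance(obj, dict):
        # Drop None-valued keys, recurse into the surviving values.
        return {k: prune_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        # Drop None elements, recurse into the rest.
        return [prune_none(v) for v in obj if v is not None]
    return obj
```

Returning a fresh structure sidesteps the classic remove-while-iterating pitfall that in-place pruning has to handle explicitly.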
diff --git a/modules/bibfield/lib/bibfield_marcreader.py b/modules/bibfield/lib/bibfield_marcreader.py
index d902a4e11..d1a20f17d 100644
--- a/modules/bibfield/lib/bibfield_marcreader.py
+++ b/modules/bibfield/lib/bibfield_marcreader.py
@@ -1,87 +1,87 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
"""
__revision__ = "$Id$"
from invenio.bibfield_jsonreader import JsonReader
from invenio.bibfield_utils import CoolDict, CoolList
class MarcReader(JsonReader):
"""
Reader class that understands MARC21 as base format
"""
@staticmethod
- def split_blob(blob):
+ def split_blob(blob, schema):
"""
Splits the blob using <record.*?>.*?</record> as pattern.
Note 1: Taken from invenio.bibrecord:create_records
Note 2: Use the DOTALL flag to include newlines.
"""
import re
regex = re.compile('<record.*?>.*?</record>', re.DOTALL)
return regex.findall(blob)
def _prepare_blob(self):
"""
Transforms the blob into the rec_tree structure to use it in the standard
translation phase inside C{JsonReader}
"""
self.rec_tree = CoolDict()
try:
if self.blob_wrapper.schema.lower().startswith('file:'):
self.blob_wrapper.blob = open(self.blob_wrapper.blob_file_name, 'r').read()
if self.blob_wrapper.schema.lower() in ['recstruct']:
self.__create_rectree_from_recstruct()
elif self.blob_wrapper.schema.lower() in ['xml', 'file:xml']:
#TODO: Implement translation directly from xml
from invenio.bibrecord import create_record
self.blob_wrapper.blob = create_record(self.blob_wrapper.blob)[0]
self.__create_rectree_from_recstruct()
except AttributeError:
#Assume marcxml
from invenio.bibrecord import create_record
self.blob_wrapper.blob = create_record(self.blob_wrapper.blob)[0]
self.__create_rectree_from_recstruct()
def __create_rectree_from_recstruct(self):
"""
Using recstruct as base format, it creates the intermediate structure that
_translate will use.
"""
for key, values in self.blob_wrapper.blob.iteritems():
if key < '010' and key.isdigit():
#Control field, it assumes controlfields are numeric only
self.rec_tree[key] = CoolList([value[3] for value in values])
else:
for value in values:
field = CoolDict()
for subfield in value[0]:
field.extend(subfield[0], subfield[1])
self.rec_tree.extend((key + value[1] + value[2]).replace(' ', '_'), field)
## Compulsory plugin interface
readers = MarcReader
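`MarcReader.split_blob` above splits a multi-record MARCXML blob with a non-greedy `<record.*?>.*?</record>` pattern under `re.DOTALL`. A minimal standalone version of that splitting (the helper name is illustrative):

```python
import re

def split_marcxml(blob):
    """Split a MARCXML blob into one string per <record> element.

    DOTALL lets '.' cross newlines, so records spanning several lines match;
    the non-greedy quantifiers stop each match at the first </record>.
    """
    return re.findall(r'<record.*?>.*?</record>', blob, re.DOTALL)
```

Each returned string can then be fed to the reader one record at a time.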
diff --git a/modules/bibfield/lib/bibfield_marcreader_regression_tests.py b/modules/bibfield/lib/bibfield_marcreader_regression_tests.py
index 6f45ceabd..a327ff076 100644
--- a/modules/bibfield/lib/bibfield_marcreader_regression_tests.py
+++ b/modules/bibfield/lib/bibfield_marcreader_regression_tests.py
@@ -1,556 +1,626 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
+## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField Marc21 reader regression tests.
"""
__revision__ = "$Id$"
from invenio.bibfield_utils import BlobWrapper
from invenio.bibfield_marcreader import MarcReader
from invenio.testutils import InvenioTestCase, make_test_suite, run_test_suite
class BibFieldMarcReaderMarcXML(InvenioTestCase):
"""
"""
def test_marcxml_to_cool_struct_preparation(self):
- """docstring for test_marcxml_to_cool_struct_preparation"""
+ """Bibfield - intermediate structure from marc xml"""
#First record from demobibcfg.xml
xml = """
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EX-0106015</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">ALEPH experiment: Candidate of Higgs boson production</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Expérience ALEPH: Candidat de la production d'un boson Higgs</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 06 2000</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">FILM</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Candidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Experiments and Tracks</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LEP</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0106015_01.jpg</subfield>
<subfield code="r">restricted_picture</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0106015_01.gif</subfield>
<subfield code="f">.gif;icon</subfield>
<subfield code="r">restricted_picture</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0003717PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-06-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-08-27</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 2</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
"""
blob = BlobWrapper(blob=xml, master_format='marc', schema='xml')
r = MarcReader(blob)
self.assertTrue(r.rec_tree)
self.assertTrue(len(r.rec_tree.keys()) >= 14)
self.assertTrue('100__' in r.rec_tree)
def test_rec_json_creation_from_marcxml(self):
- """docstring for test_rec_json_creation_from_marcxml"""
+ """BibField - recjson from marcxml"""
xml = """
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">astro-ph/9812226</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Efstathiou, G P</subfield>
<subfield code="u">Cambridge University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Constraints on $\Omega_{\Lambda}$ and $\Omega_{m}$from Distant Type 1a Supernovae and Cosmic Microwave Background Anisotropies</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 Dec 1998</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">6 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We perform a combined likelihood analysis of the latest cosmic microwave background anisotropy data and distant Type 1a Supernova data of Perlmutter etal (1998a). Our analysis is restricted tocosmological models where structure forms from adiabatic initial fluctuations characterised by a power-law spectrum with negligible tensor component. Marginalizing over other parameters, our bestfit solution gives Omega_m = 0.25 (+0.18, -0.12) and Omega_Lambda = 0.63 (+0.17, -0.23) (95 % confidence errors) for the cosmic densities contributed by matter and a cosmological constantrespectively. The results therefore strongly favour a nearly spatially flat Universe with a non-zero cosmological constant.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Astrophysics and Astronomy</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lasenby, A N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hobson, M P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, R S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bridle, S L</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">George Efstathiou &lt;gpe@ast.cam.ac.uk&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig1.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig3.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig5.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig6.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig7.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1998-12-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-04-07</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="p">Mon. Not. R. Astron. Soc.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4162242</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond, J.R. 1996, Theory and Observations of the Cosmic Background Radiation, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier SciencePress, Amsterdam, p469</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond J.R., Efstathiou G., Tegmark M., 1997</subfield>
<subfield code="p">L33</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">291</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 291 (1997) L33</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond, J.R., Jaffe, A. 1997, in Proc. XXXI Rencontre de Moriond, ed. F. Bouchet, Edition Fronti eres, in press</subfield>
<subfield code="r">astro-ph/9610091</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond J.R., Jaffe A.H. and Knox L.E., 1998</subfield>
<subfield code="r">astro-ph/9808264</subfield>
<subfield code="s">Astrophys.J. 533 (2000) 19</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Burles S., Tytler D., 1998a, to appear in the Proceedings of the Second Oak Ridge Symposium on Atomic &amp; Nuclear Astrophysics, ed. A. Mezzacappa, Institute of Physics, Bristol</subfield>
<subfield code="r">astro-ph/9803071</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Burles S., Tytler D., 1998b, Astrophys. J.in press</subfield>
<subfield code="r">astro-ph/9712109</subfield>
<subfield code="s">Astrophys.J. 507 (1998) 732</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Caldwell, R.R., Dave, R., Steinhardt P.J., 1998</subfield>
<subfield code="p">1582</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">80</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 80 (1998) 1582</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Carroll S.M., Press W.H., Turner E.L., 1992, Ann. Rev. Astr. Astrophys., 30, 499. Chaboyer B., 1998</subfield>
<subfield code="r">astro-ph/9808200</subfield>
<subfield code="s">Phys.Rept. 307 (1998) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Devlin M.J., De Oliveira-Costa A., Herbig T., Miller A.D., Netterfield C.B., Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808043</subfield>
<subfield code="s">Astrophys. J. 509 (1998) L69-72</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Efstathiou G. 1996, Observations of Large-Scale Structure in the Universe, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier SciencePress, Amsterdam, p135. Efstathiou G., Bond J.R., Mon. Not. R. Astron. Soc.in press</subfield>
<subfield code="r">astro-ph/9807130</subfield>
<subfield code="s">Astrophys. J. 518 (1999) 2-23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Evrard G., 1998, submitted to Mon. Not. R. Astron. Soc</subfield>
<subfield code="r">astro-ph/9701148</subfield>
<subfield code="s">Mon.Not.Roy.Astron.Soc. 292 (1997) 289</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Freedman J.B., Mould J.R., Kennicutt R.C., Madore B.F., 1998</subfield>
<subfield code="r">astro-ph/9801090</subfield>
<subfield code="s">Astrophys. J. 480 (1997) 705</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Garnavich P.M. et al. 1998</subfield>
<subfield code="r">astro-ph/9806396</subfield>
<subfield code="s">Astrophys.J. 509 (1998) 74-79</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Goobar A., Perlmutter S., 1995</subfield>
<subfield code="p">14</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">450</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Astrophys. J. 450 (1995) 14</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hamuy M., Phillips M.M., Maza J., Suntzeff N.B., Schommer R.A., Aviles R. 1996</subfield>
<subfield code="p">2391</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">112</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 112 (1996) 2391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hancock S., Gutierrez C.M., Davies R.D., Lasenby A.N., Rocha G., Rebolo R., Watson R.A., Tegmark M., 1997</subfield>
<subfield code="p">505</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">298</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 298 (1997) 505</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hancock S., Rocha G., Lasenby A.N., Gutierrez C.M., 1998</subfield>
<subfield code="p">L1</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">294</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 294 (1998) L1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Herbig T., De Oliveira-Costa A., Devlin M.J., Miller A.D., Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808044</subfield>
<subfield code="s">Astrophys.J. 509 (1998) L73-76</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lineweaver C.H., 1998. Astrophys. J.505, L69. Lineweaver, C.H., Barbosa D., 1998a</subfield>
<subfield code="p">624</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">446</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 446 (1998) 624</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lineweaver, C.H., Barbosa D., 1998b</subfield>
<subfield code="p">799</subfield>
<subfield code="t">Astron. Astrophys.</subfield>
<subfield code="v">329</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astron. Astrophys. 329 (1998) 799</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">De Oliveira-Costa A., Devlin M.J., Herbig T., Miller A.D., Netterfield C.B. Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808045</subfield>
<subfield code="s">Astrophys. J. 509 (1998) L77-80</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Ostriker J.P., Steinhardt P.J., 1995</subfield>
<subfield code="p">600</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">377</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nature 377 (1995) 600</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Peebles P.J.E., 1993, Principles of Physical Cosmology, Princeton University Press, Princeton, New Jersey. Perlmutter S, et al., 1995, In Presentations at the NATO ASI in Aiguablava, Spain, LBL-38400; also published in Thermonuclear Supernova, P. Ruiz-Lapuente, R. Cana and J. Isern (eds), Dordrecht, Kluwer, 1997, p749. Perlmutter S, et al., 1997</subfield>
<subfield code="p">565</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">483</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Astrophys. J. 483 (1997) 565</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Perlmutter S. et al., 1998a, Astrophys. J.in press. (P98)</subfield>
<subfield code="r">astro-ph/9812133</subfield>
<subfield code="s">Astrophys. J. 517 (1999) 565-586</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Perlmutter S. et al., 1998b, In Presentation at the January 1988 Meeting of the American Astronomical Society, Washington D.C., LBL-42230, available at www-supernova.lbl.gov; B.A.A.S., volume : 29 (1997) 1351Perlmutter S, et al., 1998c</subfield>
<subfield code="p">51</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">391</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nature 391 (1998) 51</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Ratra B., Peebles P.J.E., 1988</subfield>
<subfield code="p">3406</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">37</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Rev. D 37 (1988) 3406</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Riess A. et al. 1998, Astrophys. J.in press</subfield>
<subfield code="r">astro-ph/9805201</subfield>
<subfield code="s">Astron. J. 116 (1998) 1009-1038</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Seljak U., Zaldarriaga M. 1996</subfield>
<subfield code="p">437</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">469</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 469 (1996) 437</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Seljak U. &amp; Zaldarriaga M., 1998</subfield>
<subfield code="r">astro-ph/9811123</subfield>
<subfield code="s">Phys. Rev. D60 (1999) 043504</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark M., 1997</subfield>
<subfield code="p">3806</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 79 (1997) 3806</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark M. 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9809201</subfield>
<subfield code="s">Astrophys. J. 514 (1999) L69-L72</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark, M., Eisenstein D.J., Hu W., Kron R.G., 1998</subfield>
<subfield code="r">astro-ph/9805117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Wambsganss J., Cen R., Ostriker J.P., 1998</subfield>
<subfield code="p">29</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">494</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 494 (1998) 29</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Webster M., Bridle S.L., Hobson M.P., Lasenby A.N., Lahav O., Rocha, G., 1998, Astrophys. J.in press</subfield>
<subfield code="r">astro-ph/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">White M., 1998, Astrophys. J.in press</subfield>
<subfield code="r">astro-ph/9802295</subfield>
<subfield code="s">Astrophys. J. 506 (1998) 495</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Zaldarriaga, M., Spergel D.N., Seljak U., 1997</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">488</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Astrophys. J. 488 (1997) 1</subfield>
</datafield>
</record>
"""
blob = BlobWrapper(blob=xml, master_format='marc', schema="xml")
r = MarcReader(blob)
self.assertTrue(r.rec_json)
self.assertTrue(r['__master_format'] == 'marc')
self.assertTrue('authors' in r)
self.assertTrue(r['authors[0].full_name'] == "Efstathiou, G P")
self.assertTrue(len(r['authors']) == 5)
self.assertTrue('title.title' in r)
self.assertTrue(r['title.title'] == "Constraints on $\Omega_{\Lambda}$ and $\Omega_{m}$from Distant Type 1a Supernovae and Cosmic Microwave Background Anisotropies")
self.assertTrue('abstract.summary' in r)
self.assertTrue(r['abstract.summary'] == "We perform a combined likelihood analysis of the latest cosmic microwave background anisotropy data and distant Type 1a Supernova data of Perlmutter etal (1998a). Our analysis is restricted tocosmological models where structure forms from adiabatic initial fluctuations characterised by a power-law spectrum with negligible tensor component. Marginalizing over other parameters, our bestfit solution gives Omega_m = 0.25 (+0.18, -0.12) and Omega_Lambda = 0.63 (+0.17, -0.23) (95 % confidence errors) for the cosmic densities contributed by matter and a cosmological constantrespectively. The results therefore strongly favour a nearly spatially flat Universe with a non-zero cosmological constant.")
self.assertTrue('reference' in r)
self.assertTrue(len(r['reference']) == 36)
def test_rec_json_creation_from_marcxml_file(self):
- """docstring for test_rec_json_creation_from_marcxml_file"""
+ """BibField - recjson from marcxml file"""
import os
import tempfile
from invenio.config import CFG_TMPDIR
xml = """
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.38</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Hughes, Robert James</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Introduction to electronics</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">London</subfield>
<subfield code="b">English Univ. Press</subfield>
<subfield code="c">1962</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">432 p</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pipe, Peter</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1962</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
"""
fd, name = tempfile.mkstemp(suffix='.xml', dir=CFG_TMPDIR)
os.write(fd, xml)
os.close(fd)
blob = BlobWrapper(blob='', master_format='marc', schema='file:xml', blob_file_name=name)
r = MarcReader(blob)
self.assertTrue(r.rec_json)
self.assertTrue('authors' in r)
self.assertTrue(r['authors[0].full_name'] == "Hughes, Robert James")
class BibFieldMarcReaderRecstruct(InvenioTestCase):
"""
"""
def test_rectruct_to_cool_struct_preparation(self):
- """docstring for test_rectruct_to_cool_struct_preparation"""
+ """BibField - intermediate structure from recstruct"""
from invenio.search_engine import get_record as search_engine_get_record
bibrecord = search_engine_get_record(13)
blob = BlobWrapper(blob=bibrecord, master_format='marc', schema='recstruct')
r = MarcReader(blob)
self.assertTrue(r.rec_tree)
self.assertTrue(len(r.rec_tree.keys()) >= 14)
self.assertTrue('100__' in r.rec_tree)
def test_recjson_creation_from_recstruct(self):
- """docstring for test_recjson_creation_from_recstruc"""
+ """BibField - recjson from recstruct"""
from invenio.search_engine import get_record as search_engine_get_record
bibrecord = search_engine_get_record(7)
blob = BlobWrapper(blob=bibrecord, master_format='marc', schema='recstruct')
r = MarcReader(blob)
self.assertTrue(r.rec_json)
self.assertTrue(r['__master_format'] == 'marc')
self.assertTrue('title' in r)
self.assertTrue(r['title.title'] == 'Tim Berners-Lee')
self.assertTrue('collection.primary' in r)
self.assertTrue(r['collection.primary'] == 'PICTURE')
-TEST_SUITE = make_test_suite(BibFieldMarcReaderMarcXML, BibFieldMarcReaderRecstruct)
+
+class BibFieldCheckRecord(InvenioTestCase):
+ """
+ Check error reporting when reading records with check=True
+ """
+
+ def test_check_error_reporting(self):
+ """BibField - check error reporting"""
+ xml = """
+ <record>
+ <datafield tag="020" ind1=" " ind2=" ">
+ <subfield code="a">2225350574</subfield>
+ </datafield>
+ <datafield tag="041" ind1=" " ind2=" ">
+ <subfield code="a">fre</subfield>
+ </datafield>
+ <datafield tag="080" ind1=" " ind2=" ">
+ <subfield code="a">518.5:62.01</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Dasse, Michel</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Analyse informatique</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="n">t.1</subfield>
+ <subfield code="p">Les preliminaires</subfield>
+ </datafield>
+ <datafield tag="260" ind1=" " ind2=" ">
+ <subfield code="a">Paris</subfield>
+ <subfield code="b">Masson</subfield>
+ <subfield code="c">1972</subfield>
+ </datafield>
+ <datafield tag="490" ind1=" " ind2=" ">
+ <subfield code="a">Informatique</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="0">
+ <subfield code="y">1972</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="0">
+ <subfield code="b">21</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="1">
+ <subfield code="c">1990-01-27</subfield>
+ <subfield code="l">00</subfield>
+ <subfield code="m">2002-04-12</subfield>
+ <subfield code="o">BATCH</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="S">
+ <subfield code="s">m</subfield>
+ <subfield code="w">198604</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">BOOK</subfield>
+ </datafield>
+ </record>
+ """
+ blob = BlobWrapper(blob=xml, master_format='marc', schema="xml")
+ r = MarcReader(blob, check=True)
+
+ self.assertTrue('title' in r)
+ self.assertEquals(len(r['title']), 2)
+ self.assertEquals(len(r.fatal_errors), 1)
+
+ r.rec_json['title'] = r.rec_json['title'][0]
+ r.check_record(reset=True)
+
+TEST_SUITE = make_test_suite(BibFieldMarcReaderMarcXML,
+ BibFieldMarcReaderRecstruct,
+ BibFieldCheckRecord)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
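The dotted-key lookups exercised throughout these tests (e.g. `r['authors[0].full_name']`, `r['title.title']`, `r['a[1:]']`) follow BibField's name convention for reaching into nested record structures. A minimal standalone resolver for that convention can be sketched as follows; this is a hypothetical illustration written for this note, not Invenio's actual implementation:

```python
import re

def get_path(data, key):
    """Resolve a BibField-style key such as 'authors[0].full_name',
    'title.title' or 'a[1:]' against nested dicts and lists.
    Hypothetical helper for illustration; not Invenio's actual code."""
    for part in key.split('.'):
        m = re.match(r'^(\w+)(?:\[(\d+)(?::(\d*))?\])?$', part)
        if m is None:
            raise KeyError(part)
        name, start, stop = m.groups()
        if isinstance(data, list):
            # Applying a plain name to a list maps it over every element,
            # so 'a.b' on [{'b': 1}, {'b': 2}] yields [1, 2].
            data = [item[name] for item in data]
        else:
            data = data[name]
        if start is not None:
            if stop is None:
                data = data[int(start)]            # single index: a[0]
            elif stop == '':
                data = data[int(start):]           # open slice: a[1:]
            else:
                data = data[int(start):int(stop)]  # bounded slice: a[0:2]
    return data
```

Each dot-separated segment either indexes a dict by name or maps the name over a list of dicts, with an optional trailing `[i]` or `[i:j]` applied afterwards; the real reader also caches calculated fields, which this sketch omits.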
diff --git a/modules/bibfield/lib/bibfield_regression_tests.py b/modules/bibfield/lib/bibfield_regression_tests.py
index efece87d7..5ea9b0f02 100644
--- a/modules/bibfield/lib/bibfield_regression_tests.py
+++ b/modules/bibfield/lib/bibfield_regression_tests.py
@@ -1,162 +1,323 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
+## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField module regression tests.
"""
__revision__ = "$Id$"
from invenio.config import CFG_TMPDIR
from invenio.bibfield import get_record, create_record, create_records
from invenio.dbquery import run_sql
from invenio.testutils import InvenioTestCase, make_test_suite, run_test_suite
class BibFieldRecordFieldValuesTest(InvenioTestCase):
"""
Check values returned by BibField for record fields are consistent or not
"""
def test_normal_fields_availability_and_values(self):
"""bibfield - access to normal fields"""
record = get_record(12)
self.assertTrue(record.get('asdas') is None)
self.assertEqual('12', record['recid'])
self.assertTrue('recid' in record.get_persistent_identifiers())
self.assertEqual(record['recid'], record.get('recid'))
self.assertEqual('Physics at the front-end of a neutrino factory : a quantitative appraisal', record['title.title'])
self.assertEqual('Physics at the front-end of a neutrino factory : a quantitative appraisal', record['title']['title'])
- self.assertEqual(None, record['title.subtitle'])
+ self.assertFalse('title.subtitle' in record)
self.assertEqual('Physics at the front-end of a neutrino factory : a quantitative appraisal', record.get('title.title'))
self.assertEqual('Mangano', record['authors[0].last_name'])
self.assertEqual('M L', record['authors[0].first_name'])
self.assertEqual(19, len(record['authors']))
self.assertEqual(19, len(record['authors.last_name']))
def test_compare_field_values_with_bibrecord_values(self):
"""bibfield - same value as in bibrecord"""
from invenio.bibrecord import record_get_field_values
from invenio.search_engine import get_record as search_engine_get_record
record = get_record(1)
bibrecord_value = record_get_field_values(search_engine_get_record(1), '245', ' ', ' ', 'a')[0]
self.assertEqual(bibrecord_value, record['title.title'])
def test_derived_fields_availability_and_values(self):
"""bibfield - values of derived fields"""
record = get_record(12)
self.assertEqual(19, record['number_of_authors'])
def test_calculated_fields_availability_and_values(self):
"""bibfield - values of calculated fields"""
record = get_record(31)
self.assertEqual(2, record['_number_of_copies'])
run_sql("insert into crcITEM(barcode, id_bibrec) VALUES('test',31)")
self.assertEqual(2, record['_number_of_copies'])
self.assertEqual(3, record.get('_number_of_copies', reset_cache=True))
run_sql("delete from crcITEM WHERE barcode='test'")
record.update_field_cache('_number_of_copies')
self.assertEqual(2, record['_number_of_copies'])
self.assertEqual(2, record['number_of_copies'])
def test_get_using_format_string(self):
"""
bibfield - format values using format string
"""
#Only python 2.5 or higher
#record = get_record(97)
#self.assertEqual('Treusch, R', record.get('authors[0]', formatstring="{0[last_name]}, {0[first_name]}"))
def test_get_using_formating_function(self):
"""bibfield - format values using formating function"""
def dummy(s):
return s.upper()
record = get_record(1)
self.assertEqual('ALEPH EXPERIMENT: CANDIDATE OF HIGGS BOSON PRODUCTION',
record.get('title.title', formatfunction=dummy))
+ def test_get_record_using_field_filter(self):
+ """bibfield - get record filtering fields"""
+ authors = get_record(12, fields=('authors',))
+ self.assertEquals(len(authors['authors']), 19)
+ mainauthor_title = get_record(12, fields=('authors[0]', 'title'))
+ self.assertTrue('authors[0].full_name' in mainauthor_title)
+ self.assertTrue('title' in mainauthor_title)
+
class BibFieldCreateRecordTests(InvenioTestCase):
"""
Bibfield - demo file parsing test
"""
def setUp(self):
"""Initialize stuff"""
f = open(CFG_TMPDIR + '/demobibdata.xml', 'r')
blob = f.read()
f.close()
self.recs = [rec for rec in create_records(blob, master_format='marc', schema='xml')]
def test_records_created(self):
""" bibfield - demo file how many records are created """
- self.assertEqual(113, len(self.recs))
+ self.assertEqual(141, len(self.recs))
def test_create_record_with_collection_tag(self):
""" bibfield - create_record() for single record in collection"""
blob = """
<collection>
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
</record>
</collection>
"""
record = create_record(blob, master_format='marc', schema='xml')
record1 = create_records(blob, master_format='marc', schema='xml')[0]
self.assertEqual(record1, record)
def test_empty_collection(self):
"""bibfield - empty collection"""
blob_error0 = """<collection></collection>"""
rec = create_record(blob_error0, master_format='marc', schema='xml')
self.assertTrue(rec.is_empty())
records = create_records(blob_error0)
self.assertEqual(len(records), 0)
+ def test_fft_url_tags(self):
+ """bibfield - FFT versus URL"""
+ marc_blob = """
+ <record>
+ <datafield tag="037" ind1=" " ind2=" ">
+ <subfield code="a">CERN-HI-6206002</subfield>
+ </datafield>
+ <datafield tag="041" ind1=" " ind2=" ">
+ <subfield code="a">eng</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">At CERN in 1962</subfield>
+ <subfield code="s">eight Nobel prizewinners</subfield>
+ </datafield>
+ <datafield tag="260" ind1=" " ind2=" ">
+ <subfield code="c">1962</subfield>
+ </datafield>
+ <datafield tag="506" ind1="1" ind2=" ">
+ <subfield code="a">jekyll_only</subfield>
+ </datafield>
+ <datafield tag="521" ind1=" " ind2=" ">
+ <subfield code="a">In 1962, CERN hosted the 11th International Conference on High Energy Physics. Among the distinguished visitors were eight Nobel prizewinners.Left to right: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang and Robert Hofstadter.</subfield>
+ </datafield>
+ <datafield tag="590" ind1=" " ind2=" ">
+ <subfield code="a">En 1962, le CERN est l'hote de la onzieme Conference Internationale de Physique des Hautes Energies. Parmi les visiteurs eminents se trouvaient huit laureats du prix Nobel.De gauche a droite: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang et Robert Hofstadter.</subfield>
+ </datafield>
+ <datafield tag="595" ind1=" " ind2=" ">
+ <subfield code="a">Press</subfield>
+ </datafield>
+ <datafield tag="650" ind1="1" ind2="7">
+ <subfield code="2">SzGeCERN</subfield>
+ <subfield code="a">Personalities and History of CERN</subfield>
+ </datafield>
+ <datafield tag="653" ind1="1" ind2=" ">
+ <subfield code="a">Nobel laureate</subfield>
+ </datafield>
+ <datafield tag="FFT" ind1=" " ind2=" ">
+ <subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/6206002.jpg</subfield>
+ <subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/6206002.gif</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="0">
+ <subfield code="o">0000736PHOPHO</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="0">
+ <subfield code="y">1962</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="0">
+ <subfield code="b">81</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="1">
+ <subfield code="c">1998-07-23</subfield>
+ <subfield code="l">50</subfield>
+ <subfield code="m">2002-07-15</subfield>
+ <subfield code="o">CM</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1950/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1950 : Cecil Frank Powell</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1944/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1944 : Isidor Isaac Rabi</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1932/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1932 : Werner Karl Heisenberg</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/chemistry/laureates/1951/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Chemistry 1951 : Edwin Mattison McMillan</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1959/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1959 : Emilio Gino Segre</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1957/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1957 : Chen Ning Yang and Tsung-Dao Lee</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.nobel.se/physics/laureates/1961/index.html</subfield>
+ <subfield code="y">The Nobel Prize in Physics 1961 : Robert Hofstadter</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="P">
+ <subfield code="s">6206002 (1962)</subfield>
+ </datafield>
+ <datafield tag="909" ind1="C" ind2="S">
+ <subfield code="s">n</subfield>
+ <subfield code="w">199830</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">PICTURE</subfield>
+ </datafield>
+ </record>"""
+ rec = create_record(marc_blob, master_format='marc', schema='xml')
+ self.assertTrue('fft' in rec)
+ self.assertTrue(len(rec['fft']) == 1)
+ self.assertTrue(rec['fft[0].path'] == "http://invenio-software.org/download/invenio-demo-site-files/6206002.jpg")
+ self.assertTrue('url' in rec)
+ self.assertTrue(len(rec['url']) == 7)
+ self.assertTrue(rec['url[0].url'] == "http://www.nobel.se/physics/laureates/1950/index.html")
+
+ def test_bibdoc_integration(self):
+ """bibfield - bibdoc integration"""
+ rec = get_record(7)
+
+ self.assertTrue('_files' in rec)
+ self.assertEquals(len(rec['files']), 2)
+ image = rec['files'][1]
+ self.assertEquals(image['eformat'], '.jpeg')
+ self.assertEquals(image['name'], '9806033')
+
+ bibdoc = rec['bibdocs'].list_latest_files()[1]
+ self.assertEquals(image['name'], bibdoc.name)
+
class BibFieldLegacyTests(InvenioTestCase):
"""
Legacy functionality tests
"""
def test_legacy_export_as_marc(self):
"""docstring for test_legacy_export_as_marc"""
pass
def test_get_legacy_recstruct(self):
"""bibfield - legacy functions"""
from invenio.search_engine import get_record as search_engine_get_record
+ from invenio.bibrecord import record_get_field_value
+
bibfield_recstruct = get_record(8).get_legacy_recstruct()
bibrecord = search_engine_get_record(8)
- self.assertEqual(bibfield_recstruct['100'][0][0], bibrecord['100'][0][0])
+ self.assertEqual(record_get_field_value(bibfield_recstruct, '100', code='a'),
+ record_get_field_value(bibrecord, '100', code='a'))
self.assertEqual(len(bibfield_recstruct['999']), len(bibrecord['999']))
+ def test_guess_legacy_field_names(self):
+ """bibfield - guess legacy fields"""
+ from invenio.bibfield import guess_legacy_field_names
+
+ legacy_fields = guess_legacy_field_names(('100__a', '245'))
+ self.assertEqual(legacy_fields['100__a'][0], 'authors[0].full_name')
+ self.assertEqual(legacy_fields['245'][0], 'title')
+
+ legacy_fields = guess_legacy_field_names('001', 'marc')
+ self.assertEqual(legacy_fields['001'][0], 'recid')
+
+ self.assertEquals(guess_legacy_field_names('foo', 'marc'), {'foo': []})
+ self.assertEquals(guess_legacy_field_names('foo', 'bar'), {'foo': []})
+
+
+class BibFieldProducerTests(InvenioTestCase):
+ """
+ Low level output tests
+ """
+
+ def test_produce_json_for_marc(self):
+ """bibfield - produce json marc"""
+ record = get_record(1)
+ produced_marc = record.produce_json_for_marc()
+
+ self.assertTrue({'001': '1'} in produced_marc)
+
+ def test_produce_json_for_dublin_core(self):
+ """bibfield - produce json dublin core"""
+ record = get_record(1)
+ date = record.get('version_id').strftime('%Y-%m-%dT%H:%M:%SZ')
+ produced_dc = record.produce_json_for_dc()
+
+ self.assertTrue({'dc:date': date} in produced_dc)
+
TEST_SUITE = make_test_suite(BibFieldRecordFieldValuesTest,
BibFieldCreateRecordTests,
BibFieldLegacyTests
)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
diff --git a/modules/bibfield/lib/bibfield_utils.py b/modules/bibfield/lib/bibfield_utils.py
index f3551f734..b94d4821e 100644
--- a/modules/bibfield/lib/bibfield_utils.py
+++ b/modules/bibfield/lib/bibfield_utils.py
@@ -1,628 +1,690 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibField Utils
Helper classes and functions to work with BibField
"""
import re
__revision__ = "$Id$"
import os
import datetime
from werkzeug.utils import import_string
from invenio.config import CFG_PYLIBDIR
from invenio.datastructures import LaziestDict
CFG_BIBFIELD_FUNCTIONS = LaziestDict(lambda key: import_string('invenio.bibfield_functions.%s:%s' % (key, key)))
class BibFieldException(Exception):
"""
General exception to use within BibField
"""
pass
-class BibFieldCheckerException(Exception):
- """
- Exception raised when some error happens during checking
- """
+class InvenioBibFieldContinuableError(Exception):
+ """BibField continuable error"""
pass
+class InvenioBibFieldError(Exception):
+ """BibField fatal error, @see CFG_BIBUPLOAD_BIBFIELD_STOP_ERROR_POLICY"""
+
+
class BibFieldDict(object):
"""
This class implements a I{dict} mostly and uses special key naming for
accessing as describe in __getitem__
>>> #Creating a dictionary
>>> d = BibFieldDict()
>>> #Filling up the dictionary
>>> d['foo'] = {'a': 'world', 'b':'hello'}
>>> d['a'] = [ {'b':1}, {'b':2}, {'b':3} ]
- >>> d['_c'] = "random.randint(1,100)"
+ >>> d['_c'] = random.randint(1,100)
+ >>> d['__calculated_functions']['_c'] = "random.randint(1,100)"
>>> #Accessing data inside the dictionary
>>> d['a']
>>> d['a[0]']
>>> d['a.b']
>>> d['a[1:]']
>>> d['_c'] #this value will be calculated on the fly
"""
def __init__(self):
self.rec_json = {}
self.rec_json['__aliases'] = {}
self.rec_json['__do_not_cache'] = []
self.is_init_phase = True
+ self.rec_json['__calculated_functions'] = {}
def __getitem__(self, key):
"""
As in C{dict.__getitem__} but using BibField name convention.
@param key: String containing the name of the field and subfield.
E.g. let's work with:
- {'a': [ {'b':1}, {'b':2}, {'b':3} ], '_c': [42, random.randint(1,100)"] }
+ {'a': [ {'b':1}, {'b':2}, {'b':3} ], '_c': 42 }
- 'a' -> All the 'a' field info
[{'b': 1}, {'b': 2}, {'b': 3}]
- 'a[0]' -> All the info of the first element inside 'a'
{'b': 1}
- 'a[0].b' -> Field 'b' for the first element in 'a'
1
- 'a[1:]' -> All the 'a' field info but the first
[{'b': 2}, {'b': 3}]
- 'a.b' -> All the 'b' inside 'a'
[1, 2, 3]
- '_c' -> will give us the random number that is cached
42
- ... any other combination ...
- ... as deep as the dictionary is ...
NOTE: accessing one value in a normal way, meaning d['a'], is almost as
fast as accessing a regular dictionary. But using the special name
convention is a bit slower than using the regular access.
d['a[0].b'] -> 10000 loops, best of 3: 18.4 us per loop
d['a'][0]['b'] -> 1000000 loops, best of 3: 591 ns per loop
@return: The value of the field, this might be, a dictionary, a list,
a string, or any combination of the three depending on the value of
field
"""
+ if not self.is_cacheable(key):
+ dict_part = self._recalculate_field_value(key)
+ else:
+ dict_part = self.rec_json
+
try:
if '.' not in key and '[' not in key:
- dict_part = self.rec_json[key]
+ dict_part = dict_part[key]
else:
- dict_part = self.rec_json
for group in prepare_field_keys(key):
dict_part = self._get_intermediate_value(dict_part, group)
except KeyError, err:
return self[key.replace(err.args[0], self.rec_json['__aliases'][err.args[0]].replace('[n]', '[1:]'), 1)]
- if re.search('^_[a-zA-Z0-9]', key):
- if key in self.rec_json['__do_not_cache']:
- self.update_field_cache(key)
- dict_part = dict_part[0]
-
return dict_part
def __setitem__(self, key, value):
"""
As in C{dict.__setitem__} but using BibField name convention.
@note: at creation time dict['a[-1]'] = 'something' will mean
dict['a'].append('something') and if the field already exists and is
not a list, then this method will create a list with the existing value
and append the new one,
dict['a'] = 'first value' -> {'a':'first value'}
dict['a'] = 'second value' -> {'a':['first value', 'second value']}
The instance variable self.is_init_phase controls this behaviour.
@param key: String containing the name of the field and subfield.
@param value: The new value
"""
if self.is_init_phase:
if '.' not in key and '[' not in key:
if not key in self.rec_json:
self.rec_json[key] = value
return
tmp = self.rec_json[key]
if tmp is None:
self.rec_json[key] = value
else:
if not isinstance(tmp, list):
self.rec_json[key] = [tmp]
self.rec_json[key].append(value)
else:
try:
dict_part = eval("self.rec_json%s" % (''.join(prepare_field_keys(key)),)) # kwalitee: disable=eval
except:
build_data_structure(self.rec_json, key)
dict_part = eval("self.rec_json%s" % (''.join(prepare_field_keys(key)),))
if dict_part:
exec("self.rec_json%s.append(value)" % (''.join(prepare_field_keys(key, write=True)[:-1]),))
else:
exec("self.rec_json%s = value" % (''.join(prepare_field_keys(key)),))
else:
if '.' not in key and '[' not in key:
self.rec_json[key] = value
else:
try:
exec("self.rec_json%s = value" % (''.join(prepare_field_keys(key)),))
except:
build_data_structure(self.rec_json, key)
exec("self.rec_json%s = value" % (''.join(prepare_field_keys(key)),))
def __delitem__(self, key):
"""
As in C{dict.__delitem__}.
@note: It only works with first keys
"""
del self.rec_json[key]
def __contains__(self, key):
"""
As in C{dict.__contains__} but using BibField name convention.
@param key: Name of the key
@return: True if the dictionary contains the special key
"""
if '.' not in key and '[' not in key:
return key in self.rec_json
try:
self[key]
except:
return False
return True
def __eq__(self, other):
"""@see C{dict.__eq__}"""
+ if not self.keys() == other.keys():
+ return False
try:
- return dict.__eq__(self.rec_json, other.rec_json)
+ for key in [k for k in self.keys() if not k in self['__do_not_cache']]:
+ if not self.get(key) == other.get(key):
+ return False
except:
return False
+ return True
def __repr__(self):
- """@see C{dict.__repr__}"""
- return repr(self.rec_json)
+ """
+ Hides the '__keys' from the dictionary representation, if those keys
+ are needed record.rec_json could be accessed.
+ @see C{dict.__repr__}
+ """
+ info = dict((key, value) for key, value in self.rec_json.iteritems() if not re.search('^__[a-zA-Z0-9]', key))
+ if not info:
+ info = {}
+ return repr(info)
def __iter__(self):
"""@see C{dict.__iter__}"""
return iter(self.rec_json)
def __len__(self):
"""@see C{dict.__len__}"""
return len(self.rec_json)
def keys(self):
"""@see C{dict.keys}"""
return self.rec_json.keys()
def iteritems(self):
"""@see C{dict.iteritems}"""
return self.rec_json.iteritems()
def iterkeys(self):
"""@see C{dict.iterkeys}"""
return self.rec_json.iterkeys()
def itervalues(self):
"""@see C{dict.itervalues}"""
return self.rec_json.itervalues()
def has_key(self, key):
"""
As in C{dict.has_key} but using BibField name convention.
@see __contains__(self, key)
"""
return self.__contains__(key)
def get(self, field=None, default=None, reset_cache=False, formatstring=None, formatfunction=None):
"""
As in C{dict.get}, it retrieves the value of field from the json structure,
but using the BibField name convention, and also applies some formatting if
requested.
@see __getitem__(self, key)
@param field: Name of the field/s to retrieve. If it is None then it
will return the entire dictionary.
@param default: in case of error this value will be returned
@param formatstring: Optional parameter to format the output value.
This could be a format string, like this example:
>>> d['foo'] = {'a': 'world', 'b':'hello'}
>>> get('foo', formatstring="{0[b]} {0[a]}!")
>>> 'hello world!'
Note: Use this parameter only if you are running python 2.5 or higher.
@param formatfunction: Optional parameter to format the output value.
This parameter must be function and must handle all the possible
- parameter types (strin, dict or list)
+ parameter types (str, dict or list)
@return: The value of the field, this might be, a dictionary, a list,
a string, or any combination of the three depending on the value of
field. If any formatting parameter is present, then the return value
will be the formatted value.
"""
- if re.search('^_[a-zA-Z0-9]', field) and reset_cache:
+ if reset_cache:
self.update_field_cache(field)
value = self.rec_json
if field:
try:
value = self.__getitem__(field)
except:
return default
if not value:
return default
if formatstring:
value = self._apply_formatstring(value, formatstring)
if formatfunction:
value = formatfunction(value)
return value
+ def is_cacheable(self, field):
+ """
+ Check whether a field can be served from the cache
+
+ @return True if the field is not listed in __do_not_cache
+ """
+ return not get_main_field(field) in self.rec_json['__do_not_cache']
+
+
def update_field_cache(self, field):
"""
Updates the value of the cache for the given calculated field
"""
- calculated_field = self.rec_json.get(field)
-
- if calculated_field and re.search('^_[a-zA-Z0-9]', field):
- calculated_field[0] = self._try_to_eval(calculated_field[1])
+ field = get_main_field(field)
+ if re.search('^_[a-zA-Z0-9]', field) and not field in self.rec_json['__do_not_cache']:
+ self.rec_json[field] = self._recalculate_field_value(field)[field]
def update_all_fields_cache(self):
"""
Update the cache of all the calculated fields
@see: update_field_cache()
"""
for field in [key for key in self.keys() if re.search('^_[a-zA-Z0-9]', key)]:
self.update_field_cache(field)
+ def _recalculate_field_value(self, field):
+ """
+ Obtains the new value of the field by evaluating its function from __calculated_functions
+ """
+ field = get_main_field(field)
+ return {field: self._try_to_eval(self['__calculated_functions'][field])}
+
def _try_to_eval(self, string, bibfield_functions_only=False, **context):
"""
This method takes care of evaluating the python expression, and, if an
exception happens, it tries to import the needed module from bibfield_functions
or from the python path using plugin utils
@param string: String to evaluate
@param context: Context needed, in some cases, to evaluate the string
@return: The value of the expression inside string
"""
if not string:
return None
res = None
imports = []
while (True):
try:
res = eval(string, globals().update(context), locals()) # kwalitee: disable=eval
except NameError, err:
import_name = str(err).split("'")[1]
if not import_name in imports:
if import_name in CFG_BIBFIELD_FUNCTIONS:
globals()[import_name] = CFG_BIBFIELD_FUNCTIONS[import_name]
elif not bibfield_functions_only:
globals()[import_name] = __import__(import_name)
imports.append(import_name)
continue
assert False, 'Error not expected when trying to import bibfield function module'
return res
def _apply_formatstring(self, value, formatstring):
"""
Helper function that simply formats the result of get() using a
format string
If the value is of type datetime it tries to apply the format using
strftime(formatstring).
@see: get(self, field=None, formatstring=None, formatfunction=None)
@param value: String, dict or list to apply the format string
@param formatstring: formatstring
@return: Formatted value of "value"
"""
if not value:
return ''
if isinstance(value, datetime.datetime):
if formatstring == value.strftime(formatstring):
value = value.isoformat()
else:
return value.strftime(formatstring)
if isinstance(value, list):
tmp = ''
for element in value:
tmp += self._apply_formatstring(element, formatstring)
return tmp
elif isinstance(value, dict) or isinstance(value, basestring):
return formatstring.format(value)
else:
assert False, 'String, Dictionary or List expected'
def _get_intermediate_value(self, dict_part, field):
"""
Helper function that fetches the value of some field from dict_part
@see: get(self, field=None, formatstring=None, formatfunction=None)
@param dict_part: Dictionary or list from which this method will fetch
the value of field.
@param field: Name or index of the field to fetch from dict_part
@return: The value of the field, this might be, a dictionary, a list,
a string, or any combination of the three depending on the value of
field
"""
if isinstance(dict_part, dict):
return eval('dict_part%s' % field) # kwalitee: disable=eval
elif isinstance(dict_part, list):
tmp = []
for element in dict_part:
tmp.append(self._get_intermediate_value(element, field))
return tmp
else:
assert False, 'Dictionary or List expected, got %s' % (type(dict_part),)
class BlobWrapper(object):
"""
Wrapper class to work easily with the blob and the information related to it
inside the *Reader
"""
def __init__(self, blob, **kw):
self.__info = kw
self.blob = blob
def __getattr__(self, name):
"""Trick to access the information inside self.__info using dot syntax"""
try:
return self.__info[name]
except KeyError:
raise AttributeError("%r object has no attribute %r" % (type(self).__name__, name))
class CoolDict(dict):
"""
C{dict} that also keeps track of which elements have been consumed/accessed
and which have not
"""
def __init__(self, *args, **kwargs):
dict.__init__(self, *args, **kwargs)
self._consumed = {}
if self:
for key, value in dict.iteritems(self):
self[key] = value
def __getitem__(self, key):
"""
As in C{dict} but in this case the key could be a compiled regular expression.
Also update the consumed list in case the item is not a list or other
dictionary.
@return: Like in C{dict.__getitem__} or, if a regular expression is used,
a list containing all the items inside the dictionary which key matches
the regular expression ([] if none)
"""
try:
keys = filter(key.match, self.keys())
values = []
for key in keys:
value = dict.get(self, key)
values.append(value)
if not isinstance(value, dict) and not isinstance(value, list):
self._consumed[key] = True
return values
except AttributeError:
try:
value = dict.get(self, key)
if not isinstance(value, dict) and not isinstance(value, list):
self._consumed[key] = True
return value
except:
return None
def __setitem__(self, key, value):
"""
As in C{dict} but in this case it takes care of updating the consumed
value for each element inside value depending on its type.
"""
if isinstance(value, dict):
dict.__setitem__(self, key, CoolDict(value))
self._consumed[key] = self[key]._consumed
elif isinstance(value, list):
dict.__setitem__(self, key, CoolList(value))
self._consumed[key] = self[key]._consumed
else:
dict.__setitem__(self, key, value)
self._consumed[key] = False
def extend(self, key, value):
"""
If the key is already present in the dictionary, it wraps the current value
in a list (if it is not one already) and appends the new value to it,
almost as in C{list.extend}
"""
if key in self:
current_value = dict.get(self, key)
if not isinstance(current_value, list):
current_value = CoolList([current_value])
current_value.append(value)
value = current_value
self[key] = value
def iteritems(self):
""" As in C{dict} but it updates the consumed value if needed"""
for key, value in dict.iteritems(self):
if not isinstance(value, dict) and not isinstance(value, list):
self._consumed[key] = True
yield key, value
raise StopIteration
@property
def consumed(self):
for key, value in self._consumed.iteritems():
if not isinstance(value, dict) and not isinstance(value, list):
if not value:
return False
elif not dict.get(self, key).consumed:
return False
return True
class CoolList(list):
"""
C{list} that also keeps track of which elements have been consumed/accessed
and which have not
"""
def __init__(self, *args, **kwargs):
list.__init__(self, *args, **kwargs)
self._consumed = []
if self:
for i, value in enumerate(list.__iter__(self)):
self._consumed.append(None)
self[i] = value
def __getitem__(self, index):
"""As in C{list}, also update the consumed list in case the item is not
a dictionary or other list.
@return: Like in C{list.__getitem__}
"""
value = list.__getitem__(self, index)
if not isinstance(value, dict) and not isinstance(value, list):
self._consumed[index] = True
return value
def __setitem__(self, index, value):
"""
As in C{list} but in this case it takes care of updating the consumed
value for each element inside value depending on its type
"""
if isinstance(value, dict):
list.__setitem__(self, index, CoolDict(value))
self._consumed[index] = self[index]._consumed
elif isinstance(value, list):
list.__setitem__(self, index, CoolList(value))
self._consumed[index] = self[index]._consumed
else:
list.__setitem__(self, index, value)
self._consumed[index] = False
def __iter__(self, *args, **kwargs):
""" As in C{dict} but it updates the consumed value if needed"""
for index, value in enumerate(list.__iter__(self)):
if not isinstance(value, dict) and not isinstance(value, list):
self._consumed[index] = True
yield value
raise StopIteration
def append(self, element):
"""@see __setitem__() """
self += [None]
self._consumed += [None]
self[len(self) - 1] = element
@property
def consumed(self):
for index, value in enumerate(self._consumed):
if not isinstance(value, dict) and not isinstance(value, list):
if not value:
return False
elif not list.__getitem__(self, index).consumed:
return False
return True
def prepare_field_keys(field, write=False):
"""
Helper function that splits the field names and indexes in a form
suitable for consumption by the eval function
@see: bibfield.get()
@param field: String containing all the names and indexes
@param write: If the fields are used to write inside the record then the
granularity is lower for convenience
@return: List of string that can be evaluated by eval function
"""
parts = field.split('.')
keys = []
for part in parts:
if '[' in part:
if write:
keys.append('["%s"]' % (part[:part.find('[')]))
keys.append(part[part.find('['):].replace('n', '-1'))
else:
keys.append('["%s"]%s' % (part[:part.find('[')], part[part.find('['):].replace('n', '-1')))
else:
keys.append('["%s"]' % part)
return keys
def build_data_structure(record, field):
"""
Helper function that builds the record structure
@param record: Existing data structure
@param field: New field to add to the structure
"""
eval_string = ''
for key in prepare_field_keys(field, write=True):
if key == '[-1]':
try:
eval("record%s.append(None)" % (eval_string,)) # kwalitee: disable=eval
except AttributeError:
exec("record%s=[None]" % (eval_string,))
elif key == '[0]':
try:
eval("record%s" % (eval_string + key,)) # kwalitee: disable=eval
rec_part = eval("record%s" % (eval_string,)) # kwalitee: disable=eval
if not isinstance(rec_part, list):
pass
rec_part.insert(0, None)
except TypeError:
exec("record%s=list([None])" % (eval_string,))
else:
try:
eval("record%s" % (eval_string + key,)) # kwalitee: disable=eval
except KeyError:
exec("record%s=None" % (eval_string + key,))
except TypeError:
exec("record%s={}" % (eval_string,))
exec("record%s=None" % (eval_string + key,))
eval_string += key
+
+
+def get_main_field(field):
+ """
+ From a given field it gets the outer field of the tree.
+
+ e.g.: 'a[0].b.c' returns 'a'
+ """
+ if '.' in field:
+ field = field.split('.')[0]
+ if '[' in field:
+ field = field.split('[')[0]
+ return field
+
+
+def get_producer_rules(field, code):
+ """docstring for get_producer_rules"""
+ from invenio.bibfield_config import config_rules
+
+ rule = config_rules[field]
+ if isinstance(rule, list):
+ if len(rule) == 1:
+ # case field[n]
+ return [(rule[0].replace('[n]', ''), config_rules[rule[0]]['producer'].get(code, {}))]
+ else:
+ # case field[1], field[n]
+ rules = []
+ for new_field in rule:
+ rules.append((new_field.replace('[n]', '[1:]'), config_rules[new_field]['producer'].get(code, {})))
+ return rules
+ else:
+ return [(field, rule['producer'].get(code, {}))]
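The special key convention that BibFieldDict.__getitem__ documents (dotted subfields plus '[index]'/'[start:]' suffixes) can be sketched without eval in a few lines. get_by_path below is a hypothetical stand-alone helper written for illustration, not the Invenio API; it skips aliases and cached fields:

```python
import re

def get_by_path(data, path):
    """Resolve a BibField-style key such as 'a[0].b' against nested
    dicts and lists (minimal sketch of the name convention).

    Each dotted part may carry one '[index]' or '[start:]' suffix;
    looking a field up on a list of dicts maps over the elements.
    """
    current = data
    for part in path.split('.'):
        match = re.match(r'^(\w+)(?:\[(-?\d+)(:)?\])?$', part)
        if not match:
            raise KeyError(part)
        name, index, is_slice = match.groups()
        # A lookup on a list of dicts returns the list of sub-values,
        # mirroring d['a.b'] -> [1, 2, 3] from the docstring above.
        current = ([elem[name] for elem in current]
                   if isinstance(current, list) else current[name])
        if index is not None:
            current = current[int(index):] if is_slice else current[int(index)]
    return current
```

With rec = {'a': [{'b': 1}, {'b': 2}, {'b': 3}]}, get_by_path(rec, 'a[0].b') gives 1 and get_by_path(rec, 'a.b') gives [1, 2, 3], matching the behaviour documented in __getitem__.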
diff --git a/modules/bibfield/lib/bibfield_utils_unit_tests.py b/modules/bibfield/lib/bibfield_utils_unit_tests.py
index 2a1eaeede..7bc601ae4 100644
--- a/modules/bibfield/lib/bibfield_utils_unit_tests.py
+++ b/modules/bibfield/lib/bibfield_utils_unit_tests.py
@@ -1,187 +1,190 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibFieldUtils Unit tests.
"""
from invenio.testutils import make_test_suite, run_test_suite, InvenioTestCase
class BibFieldCoolListDictUnitTests(InvenioTestCase):
"""
Test class to verify the correct behaviour of the classes involved into the
intermediate structure
"""
def test_cool_list(self):
"""Bibfield Utils, CoolList - Unit tests"""
from invenio.bibfield_utils import CoolList
ll = CoolList()
ll.append(1)
ll.append(2)
ll.append(3)
self.assertFalse(ll.consumed)
ll[1]
self.assertEqual(ll._consumed, [False, True, False])
self.assertFalse(ll.consumed)
[i for i in ll]
self.assertTrue(ll.consumed)
ll[1] = [4, 5, 6]
self.assertFalse(ll.consumed)
self.assertEqual(ll._consumed, [True, [False, False, False], True])
[i for i in ll]
self.assertFalse(ll.consumed)
self.assertEqual(ll._consumed, [True, [False, False, False], True])
ll[1]
self.assertFalse(ll.consumed)
[i for i in ll[1]]
self.assertTrue(ll.consumed)
def test_cool_dict(self):
"""Bibfield Utils, CoolDict - Unit tests"""
from invenio.bibfield_utils import CoolDict, CoolList
d = CoolDict()
d['a'] = 1
d['b'] = 2
d['c'] = 3
self.assertFalse(d.consumed)
d['a']
self.assertFalse(d.consumed)
[v for dummy_k, v in d.iteritems()]
self.assertTrue(d.consumed)
d['b'] = {'d': 1}
self.assertFalse(d.consumed)
d['b']
self.assertFalse(d.consumed)
[v for dummy_k, v in d['b'].iteritems()]
self.assertTrue(d.consumed)
d.extend('a', 11)
self.assertFalse(d.consumed)
self.assertTrue(isinstance(d['a'], CoolList))
[i for i in d['a']]
self.assertTrue(d.consumed)
def test_cool_list_and_dict(self):
"""Bibfield Utils, CoolList and CoolDict - Unit tests"""
from invenio.bibfield_utils import CoolDict, CoolList
d = CoolDict()
l = CoolList()
d['a'] = l
self.assertTrue(d.consumed)
l.append(1)
l.append(2)
d['a'] = l
self.assertFalse(d.consumed)
d['b'] = CoolList([{'a': 1}, {'a': 2}])
self.assertFalse(d.consumed)
[v for dummy_k, v in d.iteritems()]
self.assertFalse(d.consumed)
[i for i in d['a']]
[v for i in d['b'] for dummy_k, v in i.iteritems()]
self.assertTrue(d.consumed)
class BibFieldUtilsUnitTests(InvenioTestCase):
"""
Test class for bibfield utilities
"""
def test_prepare_field_keys(self):
"""BibField Utils, prepare_field_keys - Unit Test"""
from invenio.bibfield_utils import prepare_field_keys
key = 'authors'
self.assertEqual(prepare_field_keys(key), ['["authors"]'])
self.assertEqual(prepare_field_keys(key, write=True), ['["authors"]'])
key = 'authors[0]'
self.assertEqual(prepare_field_keys(key), ['["authors"][0]'])
self.assertEqual(prepare_field_keys(key, True), ['["authors"]', '[0]'])
key = 'authors[n]'
self.assertEqual(prepare_field_keys(key), ['["authors"][-1]'])
self.assertEqual(prepare_field_keys(key, True), ['["authors"]', '[-1]'])
key = 'authors.ln'
self.assertEqual(prepare_field_keys(key), ['["authors"]', '["ln"]'])
self.assertEqual(prepare_field_keys(key, True), ['["authors"]', '["ln"]'])
key = 'a[1].b[0].c.d[n]'
self.assertEqual(prepare_field_keys(key), ['["a"][1]', '["b"][0]', '["c"]', '["d"][-1]'])
self.assertEqual(prepare_field_keys(key, True), ['["a"]', '[1]', '["b"]', '[0]', '["c"]', '["d"]', '[-1]'])
def test_build_data_structure(self):
"""BibField Utils, build_data_structure - Unit Test"""
from invenio.bibfield_utils import build_data_structure
d = dict()
build_data_structure(d, 'authors')
self.assertEqual(d, {'authors': None})
build_data_structure(d, 'authors[0]')
self.assertEqual(d, {'authors': [None]})
build_data_structure(d, 'authors[n]')
self.assertEqual(d, {'authors': [None, None]})
d = dict()
build_data_structure(d, 'a[0].b[n].c.d[n]')
self.assertEqual(d, {'a': [{'b': [{'c': {'d': [None]}}]}]})
class BibFieldDictUnitTest(InvenioTestCase):
"""
Test class for bibfield base dictionary
"""
def test_bibfielddict(self):
"""BibFieldDict - Unit Test"""
+ import random
from invenio.bibfield_utils import BibFieldDict
d = BibFieldDict()
d['foo'] = {'a': 'world', 'b': 'hello'}
d['a'] = [{'b': 1}, {'b': 2}, {'b': 3}]
- d['_c'] = [1, "random.random()"]
- d['_cc'] = [1, "random.random()"]
+ d['_c'] = 1
+ d['_cc'] = random.random()
d['__do_not_cache'].append('_cc')
+ d['__calculated_functions']['_c'] = "random.random()"
+ d['__calculated_functions']['_cc'] = "random.random()"
d['__aliases']['aa'] = 'a'
- self.assertTrue(len(d.keys()) == 6)
+ self.assertTrue(len(d.keys()) == 7)
self.assertTrue('foo' in d)
self.assertTrue('a.b' in d)
self.assertEqual(d['foo'], {'a': 'world', 'b': 'hello'})
self.assertEqual(d['a'], d.get('a'))
self.assertEqual(d['a[-1].b'], 3)
self.assertEqual(d['a'], d['aa'])
self.assertEqual(d['a[1].b'], d['aa[1].b'])
self.assertEqual(d['_c'], 1)
self.assertNotEqual(d['_c'], d.get('_c', reset_cache=True))
self.assertNotEqual(d['_cc'], 1)
self.assertNotEqual(d['_cc'], d.get('_cc'))
#Python 2.5 or higher
#self.assertEqual('hello world!', d.get('foo', formatstring="{0[b]} {0[a]}!"))
def dummy(s):
return s.upper()
self.assertEqual('HELLO', d.get('foo.b', formatfunction=dummy))
TEST_SUITE = make_test_suite(BibFieldCoolListDictUnitTests, BibFieldUtilsUnitTests, BibFieldDictUnitTest)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
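The consumed-tracking idea behind CoolList/CoolDict that these tests exercise can be illustrated with a much smaller sketch. TrackedList is a hypothetical name; unlike CoolList it only handles integer indexes and flat lists of scalars:

```python
class TrackedList(list):
    """Minimal sketch of CoolList's bookkeeping: every element read
    through __getitem__ is marked as seen, and `consumed` reports
    whether the whole list has been visited."""

    def __init__(self, *args):
        super().__init__(*args)
        self._seen = [False] * len(self)

    def __getitem__(self, index):
        # Only mark plain integer lookups; slices are passed through.
        if isinstance(index, int):
            self._seen[index] = True
        return super().__getitem__(index)

    @property
    def consumed(self):
        return all(self._seen)
```

This mirrors the assertions in test_cool_list above: the list reports consumed only once every element has been read at least once.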
diff --git a/modules/bibfield/lib/functions/check_field_existence.py b/modules/bibfield/lib/functions/check_field_existence.py
index c17f97633..ec2aa7d22 100644
--- a/modules/bibfield/lib/functions/check_field_existence.py
+++ b/modules/bibfield/lib/functions/check_field_existence.py
@@ -1,78 +1,81 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-def check_field_existence(record, field, min_value, max_value=None, subfield=None):
+def check_field_existence(record, field, min_value, max_value=None, subfield=None, continuable=True):
"""
Checks field.subfield existence inside the record according to max and min values
@param record: BibFieldDict where the record is stored
@param field: Main json ID or field name to make test on
@param min_value: Minimum number of occurrences of field.
If max_value is not present then min_value represents the fixed number of times that
field should be present.
@param max_value: Maximum number of occurrences of a field; this might be a fixed number
or "n".
@param subfield: If this parameter is present, instead of applying the checker
to the field, it is applied to record['field.subfield']
@note: This checker also modifies the record if the field is not repeatable,
meaning that min_value=1 or min_value=0, max_value=1
"""
- from invenio.bibfield_utils import BibFieldCheckerException
+ from invenio.bibfield_utils import InvenioBibFieldContinuableError, \
+ InvenioBibFieldError
+
+ error = continuable and InvenioBibFieldContinuableError or InvenioBibFieldError
field = '[n]' in field and field[:-3] or field
key = subfield and "%s.%s" % (field, subfield) or field
if min_value == 0: # (0,1), (0,'n'), (0,n)
if not max_value:
- raise BibFieldCheckerException("Minimun value = 0 and no max value for '%s'" % (key,))
+ raise error("Minimun value = 0 and no max value for '%s'" % (key,))
if key in record:
value = record[key]
if max_value == 1 and isinstance(value, list) and len(value) != 1:
- raise BibFieldCheckerException("Field '%s' is not repeatable" % (key,))
+ raise error("Field '%s' is not repeatable" % (key,))
elif max_value != 'n':
if isinstance(value, list) and len(value) > max_value:
- raise BibFieldCheckerException("Field '%s' is repeatable only %s times" % (key, max_value))
+ raise error("Field '%s' is repeatable only %s times" % (key, max_value))
elif min_value == 1: # (1,-) (1,'n'), (1, n)
if not key in record:
- raise BibFieldCheckerException("Field '%s' is mandatory" % (key,))
+ raise error("Field '%s' is mandatory" % (key,))
value = record[key]
if not value:
- raise BibFieldCheckerException("Field '%s' is mandatory" % (key,))
+ raise error("Field '%s' is mandatory" % (key,))
if not max_value:
if isinstance(value, list) and len(value) != 1:
- raise BibFieldCheckerException("Field '%s' is mandatory and not repeatable" % (key,))
+ raise error("Field '%s' is mandatory and not repeatable" % (key,))
elif max_value != 'n':
if isinstance(value, list) and len(value) > max_value:
- raise BibFieldCheckerException("Field '%s' is mandatory and repeatable only %s times" % (key, max_value))
+ raise error("Field '%s' is mandatory and repeatable only %s times" % (key, max_value))
else:
if not key in record:
- raise BibFieldCheckerException("Field '%s' must be present inside the record %s times" % (key, min_value))
+ raise error("Field '%s' must be present inside the record %s times" % (key, min_value))
value = record[key]
if not value:
- raise BibFieldCheckerException("Field '%s' must be present inside the record %s times" % (key, min_value))
+ raise error("Field '%s' must be present inside the record %s times" % (key, min_value))
if not max_value:
if not isinstance(value, list) or len(value) != min_value:
- raise BibFieldCheckerException("Field '%s' must be present inside the record %s times" % (key, min_value))
+ raise error("Field '%s' must be present inside the record %s times" % (key, min_value))
else:
if max_value != 'n' and (not isinstance(value, list) or len(value) < min_value or len(value) > max_value):
- raise BibFieldCheckerException("Field '%s' must be present inside the record between %s and %s times" % (key, min_value, max_value))
+ raise error("Field '%s' must be present inside the record between %s and %s times" % (key, min_value, max_value))
elif not isinstance(value, list) or len(value) < min_value:
- raise BibFieldCheckerException("Field '%s' must be present inside the record between %s and 'n' times" % (key, min_value))
+ raise error("Field '%s' must be present inside the record between %s and 'n' times" % (key, min_value))
diff --git a/modules/bibfield/lib/functions/check_field_type.py b/modules/bibfield/lib/functions/check_field_type.py
index 138ecb7fa..7f05fea9a 100644
--- a/modules/bibfield/lib/functions/check_field_type.py
+++ b/modules/bibfield/lib/functions/check_field_type.py
@@ -1,75 +1,79 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
from werkzeug.utils import import_string
from invenio.datastructures import LaziestDict
CFG_BIBFIELD_TYPES = LaziestDict(lambda key: import_string('invenio.bibfield_functions.%s:%s' % (key, key)))
-def check_field_type(record, field, field_type, subfield=None):
+def check_field_type(record, field, field_type, subfield=None, continuable=True):
"""
Checks if record[field.subfield] is of type "field_type"
@note: If record[field.subfield] is a list or a dictionary then it checks
every single element inside is type is a "system type"
@param record: BibFieldDict where the record is stored
@param field: Main json ID or field name to make test on
@param field_type: Field_Type defined by the user inside bibfield_types or a system type
i.e.: "datetime.datetime"
@param subfield: If this parameter is present, instead of applying the checker
to the field, it is applied to record['field.subfield']
"""
field = '[n]' in field and field[:-3] or field
key = subfield and "%s.%s" % (field, subfield) or field
if not key in record:
return
- from invenio.bibfield_utils import BibFieldCheckerException
+ from invenio.bibfield_utils import InvenioBibFieldContinuableError, \
+ InvenioBibFieldError
+
+ error = continuable and InvenioBibFieldContinuableError or InvenioBibFieldError
+
new_type = 'is_type_%s' % (field_type, )
if new_type in CFG_BIBFIELD_TYPES:
globals()[new_type] = CFG_BIBFIELD_TYPES[new_type]
if not eval('%s(record[key])' % (new_type,)):
- raise BibFieldCheckerException("Field %s should be of type '%s'" % (key, field_type))
+ raise error("Field %s should be of type '%s'" % (key, field_type))
else:
if not check_field_sys_type(record[key], field_type):
- raise BibFieldCheckerException("Field %s should be of type '%s'" % (key, field_type))
+ raise error("Field %s should be of type '%s'" % (key, field_type))
def check_field_sys_type(value, field_type):
"""
Helper function to check if value is of field_type
"""
if isinstance(value, list):
for element in value:
if not check_field_sys_type(element, field_type):
return False
elif isinstance(value, dict):
for element in value.itervalues():
if not check_field_sys_type(element, field_type):
return False
elif value:
new_type = field_type.split('.')[0]
globals()[new_type] = __import__(new_type)
if not isinstance(value, eval(field_type)):
return False
return True
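
The recursive system-type check above walks lists and dictionaries and resolves a dotted type name at runtime. A minimal standalone sketch of the same idea (Python 3 syntax, names chosen for illustration; it assumes dotted type names such as "datetime.datetime", as in the docstring's example):

```python
import datetime

def check_sys_type(value, field_type):
    """Recursively verify that value (or every element inside it)
    is an instance of the dotted system type, e.g. 'datetime.datetime'."""
    if isinstance(value, list):
        return all(check_sys_type(v, field_type) for v in value)
    if isinstance(value, dict):
        return all(check_sys_type(v, field_type) for v in value.values())
    if value:
        # resolve 'datetime.datetime' into the class object
        module_name, _, attr = field_type.partition('.')
        cls = getattr(__import__(module_name), attr)
        return isinstance(value, cls)
    return True

print(check_sys_type([datetime.datetime.now()], 'datetime.datetime'))  # True
```

Empty or falsy values pass the check, mirroring the `elif value:` guard in the patched helper.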
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/bibfield/lib/functions/get_bibdoc.py
similarity index 66%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/bibfield/lib/functions/get_bibdoc.py
index 3a5f78261..b7f5ac012 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/bibfield/lib/functions/get_bibdoc.py
@@ -1,33 +1,35 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
+def get_bibdoc(recid):
"""
- Prints the publisher name
+ Retrieves the BibDoc object holding the files attached to a given record
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
- """
+ @param recid: the record ID
- publisher = bfo.field('260__b')
+ @return: BibDoc of the given record, or None on error
+ """
+ if not recid or recid < 0:
+ return None
- if publisher != "sine nomine":
- return publisher
+ from invenio.bibdocfile import BibDoc, InvenioBibDocFileError
+ try:
+ return BibDoc(int(recid))
+ except InvenioBibDocFileError:
+ return None
diff --git a/modules/bibfield/lib/functions/get_files_from_bibdoc.py b/modules/bibfield/lib/functions/get_files_from_bibdoc.py
new file mode 100644
index 000000000..02e347a1b
--- /dev/null
+++ b/modules/bibfield/lib/functions/get_files_from_bibdoc.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+def get_files_from_bibdoc(recid):
+ """
+ Retrieves, using BibDoc, all the files attached to a given record
+
+ @param recid: the record ID
+
+ @return List of dictionaries containing all the information stored
+ inside BibDoc if the current record has files attached, the
+ empty list otherwise
+ """
+ if not recid or recid < 0:
+ return []
+
+ from invenio.bibdocfile import BibRecDocs, InvenioBibDocFileError
+ files = []
+ try:
+ bibrecdocs = BibRecDocs(int(recid))
+ except InvenioBibDocFileError:
+ return []
+ latest_files = bibrecdocs.list_latest_files()
+ for afile in latest_files:
+ file_dict = {}
+ file_dict['comment'] = afile.get_comment()
+ file_dict['description'] = afile.get_description()
+ file_dict['eformat'] = afile.get_format()
+ file_dict['full_name'] = afile.get_full_name()
+ file_dict['full_path'] = afile.get_full_path()
+ file_dict['magic'] = afile.get_magic()
+ file_dict['name'] = afile.get_name()
+ file_dict['path'] = afile.get_path()
+ file_dict['size'] = afile.get_size()
+ file_dict['status'] = afile.get_status()
+ file_dict['subformat'] = afile.get_subformat()
+ file_dict['superformat'] = afile.get_superformat()
+ file_dict['type'] = afile.get_type()
+ file_dict['url'] = afile.get_url()
+ file_dict['version'] = afile.get_version()
+ files.append(file_dict)
+ return files
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/bibfield/lib/functions/get_filetypes.py
similarity index 61%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/bibfield/lib/functions/get_filetypes.py
index 3a5f78261..5486bf781 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/bibfield/lib/functions/get_filetypes.py
@@ -1,33 +1,36 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
+from invenio.bibdocfile import BibRecDocs
+
+
+def get_filetypes(recid):
"""
- Prints the publisher name
+ Returns the file type extensions associated with a given record.
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
+ Takes as a parameter the recid of a record.
+ @param recid: recid of the record
"""
+ docs = BibRecDocs(recid)
+ return [_get_filetype(d.format) for d in docs.list_latest_files()]
- publisher = bfo.field('260__b')
- if publisher != "sine nomine":
- return publisher
+def _get_filetype(pre_ext):
+ ext = pre_ext.split(";")[0]
+ return ext[1:]
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/bibfield/lib/functions/is_local_url.py
similarity index 67%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/bibfield/lib/functions/is_local_url.py
index 3a5f78261..9475efb8a 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/bibfield/lib/functions/is_local_url.py
@@ -1,33 +1,31 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
- """
- Prints the publisher name
+def is_local_url(url):
+ """Checks if the current url is local using CFG_SITE_URL"""
+ import re
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
- """
+ from invenio.config import CFG_SITE_URL, CFG_SITE_SECURE_URL
+ try:
+ if re.match(CFG_SITE_URL, url) or re.match(CFG_SITE_SECURE_URL, url):
+ return True
+ except (TypeError, re.error):
+ pass
+ return False
- publisher = bfo.field('260__b')
-
- if publisher != "sine nomine":
- return publisher
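
The check above treats the configured site URL as a regular expression anchored at the start of the candidate URL, so it is effectively a prefix test. A standalone sketch of the same behaviour, with placeholder site URLs standing in for CFG_SITE_URL and CFG_SITE_SECURE_URL:

```python
import re

def is_local_url(url, site_url="http://example.org",
                 secure_url="https://example.org"):
    # site_url / secure_url stand in for CFG_SITE_URL / CFG_SITE_SECURE_URL;
    # re.match anchors at the start of the string, so this is a prefix test
    try:
        return bool(re.match(site_url, url) or re.match(secure_url, url))
    except TypeError:  # e.g. url is None
        return False

assert is_local_url("http://example.org/record/1")
assert not is_local_url("http://elsewhere.org/record/1")
```

Note that dots in the site URL are regex wildcards here; escaping with `re.escape` would make the prefix test strictly literal, but the patched code matches the pattern as-is.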
diff --git a/modules/bibfield/lib/functions/is_type_isbn.py b/modules/bibfield/lib/functions/is_type_isbn.py
index 146d70ff7..67df41ef1 100644
--- a/modules/bibfield/lib/functions/is_type_isbn.py
+++ b/modules/bibfield/lib/functions/is_type_isbn.py
@@ -1,57 +1,60 @@
## This file is part of Invenio.
## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
def _convert_x_to_10(x):
if x != 'X':
return int(x)
else:
return 10
def is_type_isbn10(val):
"""
Test if argument is an ISBN-10 number
Courtesy Wikipedia:
http://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
val = val.replace("-", "").replace(" ", "")
if len(val) != 10:
return False
r = sum([(10 - i) * (_convert_x_to_10(x)) for i, x in enumerate(val)])
return not (r % 11)
def is_type_isbn13(val):
"""
Test if argument is an ISBN-13 number
Courtesy Wikipedia:
http://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
val = val.replace("-", "").replace(" ", "")
if len(val) != 13:
return False
total = sum([int(num) * weight for num, weight in zip(val, (1, 3) * 6)])
ck = (10 - total) % 10
return ck == int(val[-1])
def is_type_isbn(val):
""" Test if argument is an ISBN-10 or ISBN-13 number """
- return is_type_isbn10(val) or is_type_isbn13(val)
+ try:
+ return is_type_isbn10(val) or is_type_isbn13(val)
+ except (AttributeError, TypeError, ValueError):
+ return False
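
The guard above keeps malformed input (for instance a non-string value) from raising out of the checker. The checksum arithmetic itself can be verified against well-known published ISBN examples; the numbers below are standard textbook values, not taken from the patch:

```python
def _convert_x_to_10(x):
    return 10 if x == 'X' else int(x)

def is_type_isbn10(val):
    val = val.replace("-", "").replace(" ", "")
    if len(val) != 10:
        return False
    # weighted sum with weights 10..1 must be divisible by 11
    r = sum((10 - i) * _convert_x_to_10(x) for i, x in enumerate(val))
    return not r % 11

def is_type_isbn13(val):
    val = val.replace("-", "").replace(" ", "")
    if len(val) != 13:
        return False
    # alternating weights 1,3 over the first 12 digits; compare check digit
    total = sum(int(num) * weight for num, weight in zip(val, (1, 3) * 6))
    return (10 - total) % 10 == int(val[-1])

assert is_type_isbn10("0-306-40615-2")      # classic ISBN-10 example
assert is_type_isbn13("978-0-306-40615-7")  # same book as ISBN-13
assert not is_type_isbn10("0-306-40615-3")  # corrupted check digit
```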
diff --git a/modules/bibfield/lib/functions/produce_json_for_dc.py b/modules/bibfield/lib/functions/produce_json_for_dc.py
new file mode 100644
index 000000000..1aa61f5b0
--- /dev/null
+++ b/modules/bibfield/lib/functions/produce_json_for_dc.py
@@ -0,0 +1,63 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+def produce_json_for_dc(self, fields=None):
+ """
+ Export the record in Dublin Core format.
+
+ @param fields: list of fields to include in the output; if None or
+ empty list, all available fields will be included.
+ """
+ from invenio.bibfield_utils import get_producer_rules
+
+ if not fields:
+ fields = self.keys()
+
+ out = []
+
+ for field in fields:
+ if field.startswith('__'):
+ continue
+ try:
+ dc_rules = get_producer_rules(field, 'json_for_dc')
+ for rule in dc_rules:
+ field = self.get(rule[0], None)
+ if field is None:
+ continue
+ if not isinstance(field, list):
+ field = [field, ]
+ for f in field:
+ for r in rule[1]:
+ tmp_dict = {}
+ for key, subfield in r.iteritems():
+ if not subfield:
+ tmp_dict[key] = f
+ else:
+ try:
+ tmp_dict[key] = f[subfield]
+ except:
+ try:
+ tmp_dict[key] = self._try_to_eval(subfield, value=f)
+ except Exception, e:
+ self['__error_messages.cerror[n]'] = 'Producer CError - Unable to produce %s - %s' % (field, str(e))
+ if tmp_dict:
+ out.append(tmp_dict)
+ except KeyError:
+ self['__error_messages.cerror[n]'] = 'Producer CError - No producer rule for field %s' % field
+ return out
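
The loop above maps each matched field through its producer rules, where a rule pairs a json ID with a list of output templates. With a hypothetical rule of the same shape, `(json_id, [{output_key: subfield_or_empty}])`, the inner transformation behaves roughly like this (record contents and rule names are illustrative only, not from the real rule files):

```python
record = {'title': {'title': 'A sample title', 'subtitle': 'and more'}}
rule = ('title', [{'a': 'title', 'b': 'subtitle'}])

out = []
field = record.get(rule[0])
# single values are wrapped in a list, as in the patched loop
for f in (field if isinstance(field, list) else [field]):
    for r in rule[1]:
        tmp = {}
        for key, subfield in r.items():
            # empty subfield means "use the whole field value"
            tmp[key] = f[subfield] if subfield else f
        if tmp:
            out.append(tmp)

assert out == [{'a': 'A sample title', 'b': 'and more'}]
```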
diff --git a/modules/bibfield/lib/functions/produce_json_for_marc.py b/modules/bibfield/lib/functions/produce_json_for_marc.py
new file mode 100644
index 000000000..8fb7abcf0
--- /dev/null
+++ b/modules/bibfield/lib/functions/produce_json_for_marc.py
@@ -0,0 +1,64 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+def produce_json_for_marc(self, fields=None):
+ """
+ Export the record in MARC format.
+
+ @param fields: list of fields to include in the output; if None or
+ empty list, all available fields will be included.
+ """
+ from invenio.bibfield_utils import get_producer_rules
+
+ if not fields:
+ fields = self.keys()
+
+ out = []
+
+ for field in fields:
+ if field.startswith('__'):
+ continue
+ try:
+ marc_rules = get_producer_rules(field, 'json_for_marc')
+ for rule in marc_rules:
+ field = self.get(rule[0], None)
+ if field is None:
+ continue
+ if not isinstance(field, list):
+ field = [field, ]
+ for f in field:
+ for r in rule[1]:
+ tmp_dict = {}
+ for key, subfield in r.iteritems():
+ if not subfield:
+ tmp_dict[key] = f
+ else:
+ try:
+ tmp_dict[key] = f[subfield]
+ except:
+ try:
+ tmp_dict[key] = self._try_to_eval(subfield, value=f)
+ except Exception, e:
+ self['__error_messages.cerror[n]'] = 'Producer CError - Unable to produce %s - %s' % (field, str(e))
+ if tmp_dict:
+ out.append(tmp_dict)
+ except KeyError:
+ self['__error_messages.cerror[n]'] = 'Producer CError - No producer rule for field %s' % field
+ return out
diff --git a/modules/bibformat/etc/format_templates/Authority_HTML_brief.bft b/modules/bibformat/etc/format_templates/Authority_HTML_brief.bft
new file mode 100755
index 000000000..578f80105
--- /dev/null
+++ b/modules/bibformat/etc/format_templates/Authority_HTML_brief.bft
@@ -0,0 +1,7 @@
+<name>Default HTML brief</name>
+<description>Brief Authority HTML format.</description>
+
+<BFE_AUTHORITY_AUTHOR detail="no"/>
+<BFE_AUTHORITY_INSTITUTION detail="no"/>
+<BFE_AUTHORITY_JOURNAL detail="no"/>
+<BFE_AUTHORITY_SUBJECT detail="no"/>
\ No newline at end of file
diff --git a/modules/bibformat/etc/format_templates/Authority_HTML_detailed.bft b/modules/bibformat/etc/format_templates/Authority_HTML_detailed.bft
new file mode 100755
index 000000000..944ce7396
--- /dev/null
+++ b/modules/bibformat/etc/format_templates/Authority_HTML_detailed.bft
@@ -0,0 +1,15 @@
+<name>Authority HTML detailed</name>
+<description>Detailed Authority HTML format.</description>
+
+<h1>Authority Record</h1>
+
+<div>
+
+<BFE_AUTHORITY_CONTROL_NO/>
+
+<BFE_AUTHORITY_AUTHOR detail="yes"/>
+<BFE_AUTHORITY_INSTITUTION detail="yes"/>
+<BFE_AUTHORITY_JOURNAL detail="yes"/>
+<BFE_AUTHORITY_SUBJECT detail="yes"/>
+
+</div>
\ No newline at end of file
diff --git a/modules/bibformat/etc/format_templates/Default_HTML_detailed.bft b/modules/bibformat/etc/format_templates/Default_HTML_detailed.bft
index 52ed96823..314a598a4 100644
--- a/modules/bibformat/etc/format_templates/Default_HTML_detailed.bft
+++ b/modules/bibformat/etc/format_templates/Default_HTML_detailed.bft
@@ -1,46 +1,48 @@
<name>Default HTML detailed</name>
<description>This is the default HTML detailed format.</description>
<BFE_TOPBANNER
prefix='<div style="padding-left:10px;padding-right:10px">'
suffix='</div><hr/>'/>
<div style="padding-left:10px;padding-right:10px">
<BFE_TITLE prefix="<center><big><big><strong>" separator="<br /><br />" suffix="</strong></big></big></center>" />
<p align="center">
<BFE_AUTHORS suffix="<br />" limit="25" interactive="yes" print_affiliations="yes" affiliation_prefix="<small> (" affiliation_suffix=")</small>"/>
<BFE_ADDRESSES />
<BFE_AFFILIATION />
<BFE_DATE prefix="<br />" suffix="<br />"/>
<BFE_PUBLISHER prefix="<small>" suffix="</small>"/>
<BFE_PLACE prefix="<small>" suffix="</small>"/>
<BFE_ISBN prefix="<br />ISBN: " />
</p>
<p style="margin-left: 15%; width: 70%">
<BFE_ABSTRACT
prefix_en="<small><strong>Abstract: </strong>"
prefix_fr="<small><strong>Résumé: </strong>"
suffix_en="</small><br />"
suffix_fr="</small><br />"
/>
<BFE_KEYWORDS
prefix="<br /><small><strong>Keyword(s): </strong></small>"
keyword_prefix="<small>"
keyword_suffix="</small>" />
<BFE_NOTES
note_prefix="<br /><small><strong>Note: </strong>"
note_suffix=" </small>"
suffix="<br />" />
<BFE_PUBLI_INFO prefix="<br /><br /><strong>Published in: </strong>"/><br />
<BFE_DOI prefix="<small><strong>DOI: </strong>" suffix=" </small><br />" />
<BFE_PLOTS width="200px" caption="no"/>
+</p>
+
</div>
<BFE_APPEARS_IN_COLLECTIONS prefix="<p style='margin-left: 10px;'><em>The record appears in these collections:</em><br />" suffix="</p>">
diff --git a/modules/bibformat/etc/format_templates/Makefile.am b/modules/bibformat/etc/format_templates/Makefile.am
index 5364db2ac..925d3b0e3 100644
--- a/modules/bibformat/etc/format_templates/Makefile.am
+++ b/modules/bibformat/etc/format_templates/Makefile.am
@@ -1,43 +1,45 @@
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
etcdir = $(sysconfdir)/bibformat/format_templates
etc_DATA = Default_HTML_captions.bft Picture_HTML_brief.bft \
Default_HTML_detailed.bft Default_HTML_portfolio.bft \
Picture_HTML_detailed.bft Default_HTML_brief.bft \
BibTeX.bft MARCXML.bft Excel.bft \
Default_HTML_similarity.bft NLM.xsl \
OAI_DC.xsl OAI_MARC.bft DC.xsl EndNote.xsl \
RSS.xsl RefWorks.xsl MODS.xsl \
Default_HTML_references.bft Default_HTML_files.bft \
Default_HTML_actions.bft Journal_HTML_detailed.bft \
Journal_HTML_brief.bft \
Poetry_HTML_brief.bft Poetry_HTML_detailed.bft \
AID_HTML_very_brief.bft Podcast.xsl \
Video_HTML_brief.bft Video_HTML_detailed.bft \
Basket_Search_Result.bft Default_HTML_meta.bft \
WebAuthorProfile_affiliations_helper.bft DataCite.xsl \
- Default_Mobile_brief.bft Default_Mobile_detailed.bft
+ Default_Mobile_brief.bft Default_Mobile_detailed.bft \
+ Authority_HTML_brief.bft Authority_HTML_detailed.bft
+
tmpdir = $(prefix)/var/tmp
tmp_DATA = Test1.bft Test3.bft Test_2.bft Test_no_template.test
EXTRA_DIST = $(etc_DATA) $(tmp_DATA)
CLEANFILES = *.tmp
diff --git a/modules/bibformat/etc/output_formats/HB.bfo b/modules/bibformat/etc/output_formats/HB.bfo
index b9b263161..d4809d100 100644
--- a/modules/bibformat/etc/output_formats/HB.bfo
+++ b/modules/bibformat/etc/output_formats/HB.bfo
@@ -1,8 +1,9 @@
tag 980.a:
PICTURE --- Picture_HTML_brief.tpl
POETRY --- Poetry_HTML_brief.tpl
+AUTHORITY --- Authority_HTML_brief.bft
tag 773.t:
Atlantis Times --- Journal_HTML_brief.tpl
tag 980.a:
VIDEO --- Video_HTML_brief.tpl
-default: Default_HTML_brief.tpl
\ No newline at end of file
+default: Default_HTML_brief.tpl
diff --git a/modules/bibformat/etc/output_formats/HD.bfo b/modules/bibformat/etc/output_formats/HD.bfo
index 8446f6913..16a7c1322 100644
--- a/modules/bibformat/etc/output_formats/HD.bfo
+++ b/modules/bibformat/etc/output_formats/HD.bfo
@@ -1,8 +1,9 @@
tag 980.a:
PICTURE --- Picture_HTML_detailed.tpl
POETRY --- Poetry_HTML_detailed.tpl
+AUTHORITY --- Authority_HTML_detailed.bft
tag 773.t:
Atlantis Times --- Journal_HTML_detailed.bft
tag 980.a:
VIDEO --- Video_HTML_detailed.tpl
default: Default_HTML_detailed.tpl
diff --git a/modules/bibformat/lib/bibformat_regression_tests.py b/modules/bibformat/lib/bibformat_regression_tests.py
index d89bcac58..41f2d4a3c 100644
--- a/modules/bibformat/lib/bibformat_regression_tests.py
+++ b/modules/bibformat/lib/bibformat_regression_tests.py
@@ -1,480 +1,614 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2007, 2008, 2010, 2011, 2012 CERN.
+## Copyright (C) 2007, 2008, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat module regression tests."""
__revision__ = "$Id$"
-from invenio.config import CFG_SITE_URL, CFG_SITE_LANG, CFG_SITE_RECORD
-from invenio.importutils import lazy_import
-from invenio.testutils import InvenioTestCase, make_test_suite, \
- run_test_suite, test_web_page_content
+import unittest
+import re
+from invenio.config import CFG_SITE_URL, \
+ CFG_SITE_LANG, \
+ CFG_SITE_RECORD, \
+ CFG_SITE_NAME
+from invenio.importutils import lazy_import
+from invenio.testutils import InvenioTestCase, \
+ make_test_suite, \
+ run_test_suite, \
+ test_web_page_content, \
+ get_authenticated_mechanize_browser, \
+ make_url
format_record = lazy_import('invenio.bibformat:format_record')
BibFormatObject = lazy_import('invenio.bibformat_engine:BibFormatObject')
+bfe_authority_author = lazy_import('invenio.bibformat_elements.bfe_authority_author')
+
class BibFormatAPITest(InvenioTestCase):
"""Check BibFormat API"""
def test_basic_formatting(self):
"""bibformat - Checking BibFormat API"""
result = format_record(recID=73,
of='hx',
ln=CFG_SITE_LANG,
verbose=0,
search_pattern=[],
xml_record=None,
user_info=None,
on_the_fly=True)
pageurl = CFG_SITE_URL + '/%s/73?of=hx' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=result)
class BibFormatObjectAPITest(InvenioTestCase):
"""Check BibFormatObject (bfo) APIs"""
def test_knowledge_base(self):
"""bibformat - Checking BibFormatObject KB bridge"""
self.bfo_test_1 = BibFormatObject(12)
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', 'THESIS'),
'phdthesis')
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', 'THESIS', 'bar'),
'phdthesis')
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', 'foo'),
'')
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', 'foo', 'bar'),
'bar')
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', ''),
'')
self.assertEqual(self.bfo_test_1.kb('DBCOLLID2BIBTEX', '', 'bar'),
'bar')
class BibFormatBibTeXTest(InvenioTestCase):
"""Check output produced by BibFormat for BibTeX output for
various records"""
def setUp(self):
"""Prepare some ideal outputs"""
self.record_74_hx = '''<pre>
@article{Wang:74,
- author = "Wang, B and Lin, C Y and Abdalla, E",
- title = "Quasinormal modes of Reissner-Nordstrom Anti-de Sitter
- Black Holes",
- journal = "Phys. Lett., B",
- number = "hep-th/0003295",
- volume = "481",
- pages = "79-88",
- year = "2000",
+ author = "Wang, B and Lin, C Y and Abdalla, E",
+ title = "{Quasinormal modes of Reissner-Nordstrom Anti-de Sitter
+ Black Holes}",
+ journal = "Phys. Lett., B",
+ number = "hep-th/0003295",
+ volume = "481",
+ pages = "79-88",
+ year = "2000",
}
</pre>'''
def test_bibtex_output(self):
"""bibformat - BibTeX output"""
pageurl = CFG_SITE_URL + '/%s/74?of=hx' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=self.record_74_hx)
self.assertEqual([], result)
class BibFormatDetailedHTMLTest(InvenioTestCase):
"""Check output produced by BibFormat for detailed HTML ouput for
various records"""
def setUp(self):
"""Prepare some ideal outputs"""
# Record 7 (Article)
self.record_74_hd_header = '''<table border="0" width="100%">
<tr>
<td>Published Article<small> / Particle Physics - Theory</small></td>
<td><small><strong></strong></small></td>
<td align="right"><strong>hep-th/0003295</strong></td>
</tr>
</table>'''
self.record_74_hd_title = '''<center><big><big><strong>Quasinormal modes of Reissner-Nordstrom Anti-de Sitter Black Holes</strong></big></big></center>'''
self.record_74_hd_authors = '''<a href="%(siteurl)s/search?f=author&amp;p=Wang%%2C%%20B&amp;ln=%(lang)s">Wang, B</a><small> (Fudan University)</small> ; <a href="%(siteurl)s/search?f=author&amp;p=Lin%%2C%%20C%%20Y&amp;ln=%(lang)s">Lin, C Y</a> ; <a href="%(siteurl)s/search?f=author&amp;p=Abdalla%%2C%%20E&amp;ln=%(lang)s">Abdalla, E</a><br />'''% \
{'siteurl' : CFG_SITE_URL,
'lang': CFG_SITE_LANG}
self.record_74_hd_abstract = '''<small><strong>Abstract: </strong>Complex frequencies associated with quasinormal modes for large Reissner-Nordstr$\ddot{o}$m Anti-de Sitter black holes have been computed. These frequencies have close relation to the black hole charge and do not linearly scale withthe black hole temperature as in Schwarzschild Anti-de Sitter case. In terms of AdS/CFT correspondence, we found that the bigger the black hole charge is, the quicker for the approach to thermal equilibrium in the CFT. The propertiesof quasinormal modes for $l&gt;0$ have also been studied.</small><br />'''
self.record_74_hd_pubinfo = '''<strong>Published in: </strong><a href="https://cds.cern.ch/ejournals.py?publication=Phys.%20Lett.%2C%20B&amp;volume=481&amp;year=2000&amp;page=79">Phys. Lett., B :481 2000 79-88</a>'''
self.record_74_hd_fulltext = '''0003295.pdf"><img style="border:none"'''
self.record_74_hd_citations = '''<strong>Cited by:</strong> try citation search for <a href="%(siteurl)s/search?f=reference&amp;p=hep-th/0003295&amp;ln=%(lang)s">hep-th/0003295</a>'''% \
{'siteurl' : CFG_SITE_URL,
'lang': CFG_SITE_LANG}
self.record_74_hd_references = '''<li><small>[17]</small> <small>A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev., D60: 104026 (1999) 5070 90 110 130 150 r+ 130 230 330 50 70 90 110 130 150 r+</small> </li>'''
# Record 7 (Picture)
self.record_7_hd_header = '''<table border="0" width="100%">
<tr>
<td>Pictures<small> / Life at CERN</small></td>
<td><small><strong></strong></small></td>
<td align="right"><strong>CERN-GE-9806033</strong></td>
</tr>
</table>'''
self.record_7_hd_title = '''<center><big><big><strong>Tim Berners-Lee</strong></big></big></center>'''
self.record_7_hd_date = '''<center>28 Jun 1998</center>'''
self.record_7_hd_abstract = '''<p><span class="blocknote">
Caption</span><br /> <small>Conference "Internet, Web, What's next?" on 26 June 1998 at CERN : Tim Berners-Lee, inventor of the World-Wide Web and Director of the W3C, explains how the Web came to be and give his views on the future.</small></p><p><span class="blocknote">
Légende</span><br /><small>Conference "Internet, Web, What's next?" le 26 juin 1998 au CERN: Tim Berners-Lee, inventeur du World-Wide Web et directeur du W3C, explique comment le Web est ne, et donne ses opinions sur l'avenir.</small></p>'''
self.record_7_hd_resource = '''<img src="%s/%s/7/files/9806033.gif?subformat=icon" alt="9806033" style="max-width:250px;_width:250px;" />''' % (CFG_SITE_URL, CFG_SITE_RECORD)
self.record_7_hd_resource_link = '%s/%s/7/files/9806033.jpeg' % (CFG_SITE_URL, CFG_SITE_RECORD)
def test_detailed_html_output(self):
"""bibformat - Detailed HTML output"""
# Test record 74 (Article)
pageurl = CFG_SITE_URL + '/%s/74?of=hd' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=[self.record_74_hd_header,
self.record_74_hd_title,
self.record_74_hd_authors,
self.record_74_hd_abstract,
self.record_74_hd_pubinfo,
self.record_74_hd_fulltext,
#self.record_74_hd_citations,
#self.record_74_hd_references
])
self.assertEqual([], result)
# Test record 7 (Picture)
pageurl = CFG_SITE_URL + '/%s/7?of=hd' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=[self.record_7_hd_header,
self.record_7_hd_title,
self.record_7_hd_date,
self.record_7_hd_abstract,
self.record_7_hd_resource,
self.record_7_hd_resource_link])
self.assertEqual([], result)
def test_detailed_html_edit_record(self):
"""bibformat - Detailed HTML output edit record link presence"""
pageurl = CFG_SITE_URL + '/%s/74?of=hd' % CFG_SITE_RECORD
result = test_web_page_content(pageurl, username='admin',
expected_text="Edit This Record")
self.assertEqual([], result)
def test_detailed_html_no_error_message(self):
"""bibformat - Detailed HTML output without error message"""
# No error message should be displayed in the web interface, whatever happens
pageurl = CFG_SITE_URL + '/%s/74?of=hd' % CFG_SITE_RECORD
result = test_web_page_content(pageurl, username='admin',
expected_text=["Exception",
"Could not"])
self.assertNotEqual([], result)
pageurl = CFG_SITE_URL + '/%s/7?of=hd' % CFG_SITE_RECORD
result = test_web_page_content(pageurl, username='admin',
expected_text=["Exception",
"Could not"])
self.assertNotEqual([], result)
class BibFormatNLMTest(InvenioTestCase):
"""Check output produced by BibFormat for NLM output for various
records"""
def setUp(self):
"""Prepare some ideal outputs"""
self.record_70_xn = '''<?xml version="1.0" encoding="UTF-8"?>
<articles>
<article xmlns:xlink="http://www.w3.org/1999/xlink/">
<front>
<journal-meta>
<journal-title>J. High Energy Phys.</journal-title>
<abbrev-journal-title>J. High Energy Phys.</abbrev-journal-title>
<issn>1126-6708</issn>
</journal-meta>
<article-meta>
<title-group>
<article-title>AdS/CFT For Non-Boundary Manifolds</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>McInnes</surname>
<given-names>B</given-names>
</name>
<aff>
<institution>National University of Singapore</institution>
</aff>
</contrib>
</contrib-group>
<pub-date pub-type="pub">
<year>2000</year>
</pub-date>
<volume>05</volume>
<fpage/>
<lpage/>
<self-uri xlink:href="%(siteurl)s/%(CFG_SITE_RECORD)s/70"/>
<self-uri xlink:href="%(siteurl)s/%(CFG_SITE_RECORD)s/70/files/0003291.pdf"/>
<self-uri xlink:href="%(siteurl)s/%(CFG_SITE_RECORD)s/70/files/0003291.ps.gz"/>
</article-meta>
<abstract>In its Euclidean formulation, the AdS/CFT correspondence begins as a study of Yang-Mills conformal field theories on the sphere, S^4. It has been successfully extended, however, to S^1 X S^3 and to the torus T^4. It is natural tohope that it can be made to work for any manifold on which it is possible to define a stable Yang-Mills conformal field theory. We consider a possible classification of such manifolds, and show how to deal with the most obviousobjection : the existence of manifolds which cannot be represented as boundaries. We confirm Witten's suggestion that this can be done with the help of a brane in the bulk.</abstract>
</front>
<article-type>research-article</article-type>
<ref/>
</article>
</articles>''' % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
def test_nlm_output(self):
"""bibformat - NLM output"""
pageurl = CFG_SITE_URL + '/%s/70?of=xn' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=self.record_70_xn)
try:
self.assertEqual([], result)
except AssertionError:
result = test_web_page_content(pageurl,
expected_text=self.record_70_xn.replace('<fpage/>', '<fpage></fpage>').replace('<lpage/>', '<lpage></lpage>'))
self.assertEqual([], result)
class BibFormatBriefHTMLTest(InvenioTestCase):
"""Check output produced by BibFormat for brief HTML output for
various records"""
def setUp(self):
"""Prepare some ideal outputs"""
self.record_76_hb_title = '''Ιθάκη'''
self.record_76_hb_author = '''Καβάφης, Κ Π'''
self.record_76_hb_body = '''\
Σα βγεις στον πηγαιμό για την Ιθάκη, <br />
να εύχεσαι νάναι μακρύς ο δρόμος, <br />
γεμάτος περιπέτειες, γεμάτος γνώσεις'''
def test_brief_html_output(self):
"""bibformat - Brief HTML output"""
pageurl = CFG_SITE_URL + '/%s/76?of=HB' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=[self.record_76_hb_title,
self.record_76_hb_author,
self.record_76_hb_body])
self.assertEqual([], result)
class BibFormatMARCXMLTest(InvenioTestCase):
"""Check output produced by BibFormat for MARCXML output for various records"""
def setUp(self):
"""Prepare some ideal outputs"""
self.record_9_xm_beg = '''<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">9</controlfield>
<controlfield tag="005">'''
self.record_9_xm_end = '''\
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PRE-25553</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">RL-82-024</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ellis, J</subfield>
+ <subfield code="0">AUTHOR|(SzGeCERN)aaa0005</subfield>
<subfield code="u">University of Oxford</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Grand unification with large supersymmetry breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">Mar 1982</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ibanez, L E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ross, G G</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1982</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Oxford Univ.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Univ. Auton. Madrid</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Rutherford Lab.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">1982n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
</collection>'''
def test_marcxml_output(self):
"""bibformat - MARCXML output"""
pageurl = CFG_SITE_URL + '/%s/9?of=xm' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=[self.record_9_xm_beg,
self.record_9_xm_end])
self.assertEqual([], result)
class BibFormatMARCTest(InvenioTestCase):
"""Check output produced by BibFormat for MARC output for various
records"""
def setUp(self):
"""Prepare some ideal outputs"""
self.record_29_hm_beg = '''000000029 001__ 29
000000029 005__ '''
self.record_29_hm_end = '''\
000000029 020__ $$a0720421039
000000029 041__ $$aeng
000000029 080__ $$a517.11
000000029 100__ $$aKleene, Stephen Cole$$uUniversity of Wisconsin
000000029 245__ $$aIntroduction to metamathematics
000000029 260__ $$aAmsterdam$$bNorth-Holland$$c1952 (repr.1964.)
000000029 300__ $$a560 p
000000029 490__ $$aBibl. Matematica$$v1
000000029 909C0 $$y1952
000000029 909C0 $$b21
000000029 909C1 $$c1990-01-27$$l00$$m2002-04-12$$oBATCH
000000029 909CS $$sm$$w198606
000000029 980__ $$aBOOK'''
def test_marc_output(self):
"""bibformat - MARC output"""
pageurl = CFG_SITE_URL + '/%s/29?of=hm' % CFG_SITE_RECORD
result = test_web_page_content(pageurl,
expected_text=[self.record_29_hm_beg,
self.record_29_hm_end])
self.assertEqual([], result)
class BibFormatTitleFormattingTest(InvenioTestCase):
"""Check title formatting produced by BibFormat."""
def test_subtitle_in_html_brief(self):
"""bibformat - title subtitle in HTML brief formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=statistics+computer',
expected_text="Statistics: a computer approach"))
def test_subtitle_in_html_detailed(self):
"""bibformat - title subtitle in HTML detailed formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=statistics+computer&of=HD',
expected_text="Statistics: a computer approach"))
def test_title_edition_in_html_brief(self):
"""bibformat - title edition in HTML brief formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=2nd',
expected_text="Introductory statistics: a decision map; 2nd ed"))
def test_title_edition_in_html_detailed(self):
"""bibformat - title edition in HTML detailed formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=2nd&of=HD',
expected_text="Introductory statistics: a decision map; 2nd ed"))
def test_title_part_in_html_brief(self):
"""bibformat - title part in HTML brief formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=analyse+informatique',
expected_text="Analyse informatique, t.2"))
def test_title_part_in_html_detailed(self):
"""bibformat - title part in HTML detailed formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=analyse+informatique&of=HD',
expected_text="Analyse informatique, t.2: L'accomplissement"))
class BibFormatISBNFormattingTest(InvenioTestCase):
"""Check ISBN formatting produced by BibFormat."""
def test_isbn_in_html_detailed(self):
"""bibformat - ISBN in HTML detailed formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=analyse+informatique&of=HD',
expected_text="ISBN: 2225350574"))
class BibFormatPublInfoFormattingTest(InvenioTestCase):
"""Check publication reference info formatting produced by BibFormat."""
def test_publinfo_in_html_brief(self):
"""bibformat - publication reference info in HTML brief formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A84',
expected_text="Nucl. Phys. B: 656 (2003) pp. 23-36"))
def test_publinfo_in_html_detailed(self):
"""bibformat - publication reference info in HTML detailed formats"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/%s/84' % CFG_SITE_RECORD,
expected_text="Nucl. Phys. B: 656 (2003) pp. 23-36"))
+
+class BibFormatAuthorityRecordsTest(unittest.TestCase):
+ """Check authority record related functions"""
+
+ def test_brief_output(self):
+ """bibformat - brief authority record format outputs something"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?cc=Authority+Records&rg=100',
+ expected_text="Ellis, John, 1946-"))
+
+ def test_detailed_output(self):
+ """bibformat - detailed authority record format outputs some basic information"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/record/118',
+ expected_text=["Ellis, Jonathan Richard, 1946-", "Control Number"]))
+
+ def test_empty_string(self):
+ """bibformat - no empty strings output for variant (4xx) fields"""
+ class BFO:
+ lang = 'en'
+ def fields(self, afield):
+ if '400' in afield: return [{'a':'A'},{},{'a':'B'}]
+ else: return []
+
+ bfo = BFO()
+ self.assertTrue("Variant" in bfe_authority_author.format_element(bfo, detail='yes'))
+ self.assertTrue(", , " not in bfe_authority_author.format_element(bfo, detail='yes'))
+
+
+class BibFormatAuthorityRecordsBrowsingTest(unittest.TestCase):
+ """Tests authority record browsing of predecessors and successors"""
+
+ def setUp(self):
+ self.re_institution = re.compile(r"Werkstoffsynthese und Herstellverfahren")
+ self.re_non_compact = re.compile(r"Non-compact supergravity")
+ self.re_cern_control_number = re.compile(r"INSTITUTION\|\(SzGeCERN\)")
+ self.re_institution_energy = re.compile(r"Institut für Energieforschung")
+
+ def test_format_authority_browsing_pre_and_successor(self):
+ """bibformat - test format authority browsing predecessor and successor"""
+ base = "/record/140/"
+ parameters = {}
+ url = make_url(base, **parameters)
+
+ error_messages = []
+ browser = get_authenticated_mechanize_browser("admin", "")
+ browser.open(url)
+ link = browser.find_link(text_regex=re.compile("2 dependent records"))
+ resp = browser.follow_link(link)
+ link = browser.find_link(text_regex=re.compile("Detailed record"), nr=1)
+ resp = browser.follow_link(link)
+ found = self.re_institution.search(resp.read())
+ if not found:
+ error_messages.append("There is no 'Werkstoffsynthese und Herstellverfahren' in html response.")
+ link = browser.find_link(text_regex=re.compile("1 dependent record"))
+ resp = browser.follow_link(link)
+ found = self.re_institution.search(resp.read())
+ if not found:
+ error_messages.append("There is no 'Werkstoffsynthese und Herstellverfahren' in html response.")
+ self.assertEqual([], error_messages)
+
+
+ def test_format_authority_browsing_ellis(self):
+ """bibformat - test format authority browsing Ellis authority record"""
+ base = "/record/12/"
+ parameters = {}
+ url = make_url(base, **parameters)
+
+ error_messages = []
+ browser = get_authenticated_mechanize_browser("admin", "")
+ browser.open(url)
+ link = browser.find_link(text_regex=re.compile("Ellis, J"))
+ resp = browser.follow_link(link)
+ link = browser.find_link(text_regex=re.compile("Detailed record"), nr=0)
+ resp = browser.follow_link(link)
+ link = browser.find_link(text_regex=re.compile("4 dependent records"))
+ resp = browser.follow_link(link)
+ found = self.re_non_compact.search(resp.read())
+ if not found:
+ error_messages.append("There is no 'Non-compact supergravity' in html response.")
+ self.assertEqual([], error_messages)
+
+
+ def test_format_authority_browsing_cern(self):
+ """bibformat - test format authority browsing cern authority record"""
+ base = "/record/12/"
+ parameters = {}
+ url = make_url(base, **parameters)
+
+ error_messages = []
+ browser = get_authenticated_mechanize_browser("admin", "")
+ browser.open(url)
+ link = browser.find_link(text_regex=re.compile("CERN"))
+ resp = browser.follow_link(link)
+ found = self.re_cern_control_number.search(resp.read())
+ if not found:
+ error_messages.append("There is no CERN control number in html response.")
+ self.assertEqual([], error_messages)
+
+ def test_format_authority_browsing_parent_child(self):
+ """bibformat - test format authority browsing parent child"""
+ base = "/record/129/"
+ parameters = {}
+ url = make_url(base, **parameters)
+
+ error_messages = []
+ browser = get_authenticated_mechanize_browser("admin", "")
+ browser.open(url)
+ link = browser.find_link(text_regex=re.compile("Institut für Kernphysik"))
+ resp = browser.follow_link(link)
+ link = browser.find_link(text_regex=re.compile("Forschungszentrum Jülich"))
+ resp = browser.follow_link(link)
+ link = browser.find_link(text_regex=re.compile("6 dependent records"))
+ resp = browser.follow_link(link)
+ found = self.re_institution_energy.search(resp.read())
+ if not found:
+ error_messages.append("There is no 'Institut für Energieforschung' in html response.")
+ self.assertEqual([], error_messages)
+
+
TEST_SUITE = make_test_suite(BibFormatBibTeXTest,
BibFormatDetailedHTMLTest,
BibFormatBriefHTMLTest,
BibFormatNLMTest,
BibFormatMARCTest,
BibFormatMARCXMLTest,
BibFormatAPITest,
BibFormatObjectAPITest,
BibFormatTitleFormattingTest,
BibFormatISBNFormattingTest,
- BibFormatPublInfoFormattingTest)
+ BibFormatPublInfoFormattingTest,
+ BibFormatAuthorityRecordsTest,
+ BibFormatAuthorityRecordsBrowsingTest)
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
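One subtlety worth noting in the authority browsing tests above: control numbers such as `INSTITUTION|(SzGeCERN)` contain regex metacharacters, so compiling one verbatim with `re.compile` silently turns the `|` into alternation. A minimal standalone sketch (plain `re`, no Invenio code):

```python
import re

# An authority control number containing regex metacharacters.
control_no = "INSTITUTION|(SzGeCERN)"

# Compiled verbatim, "|" means alternation: this pattern matches
# "INSTITUTION" OR "SzGeCERN", not the literal control number.
naive = re.compile(control_no)

# re.escape() makes the pattern match the literal string only.
literal = re.compile(re.escape(control_no))

assert naive.search("page mentions INSTITUTION somewhere")       # false positive
assert not literal.search("page mentions INSTITUTION somewhere")
assert literal.search("id: INSTITUTION|(SzGeCERN)")
```

When the test only needs a literal substring match, escaping (or a plain `in` check on the response body) avoids this class of false positive.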
diff --git a/modules/bibformat/lib/bibformat_web_tests.py b/modules/bibformat/lib/bibformat_web_tests.py
index 14a509b90..e3f8ce8df 100644
--- a/modules/bibformat/lib/bibformat_web_tests.py
+++ b/modules/bibformat/lib/bibformat_web_tests.py
@@ -1,50 +1,52 @@
# -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat module web tests."""
from invenio.config import CFG_SITE_SECURE_URL
from invenio.testutils import make_test_suite, \
run_test_suite, \
InvenioWebTestCase
class InvenioBibFormatWebTest(InvenioWebTestCase):
"""BibFormat web tests."""
-
+
def test_format_many_authors(self):
"""bibformat - web test format many authors"""
self.browser.get(CFG_SITE_SECURE_URL)
self.fill_textbox(textbox_name="p", text="recid:10")
self.find_element_by_name_with_timeout("action_search")
self.browser.find_element_by_name("action_search").click()
self.handle_popup_dialog()
self.page_source_test(expected_text='Bruneliere, R')
self.find_element_by_link_text_with_timeout("Detailed record")
self.browser.find_element_by_link_text("Detailed record").click()
self.page_source_test(expected_text='Show all 315 authors')
self.find_element_by_link_text_with_timeout("Show all 315 authors")
self.browser.find_element_by_link_text("Show all 315 authors").click()
- self.page_source_test(expected_text=['Zobernig, G', 'Hide'])
+ self.page_source_test(expected_text=['Zobernig, G', 'Hide'])
+
+
TEST_SUITE = make_test_suite(InvenioBibFormatWebTest, )
if __name__ == '__main__':
run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibformat/lib/elements/Makefile.am b/modules/bibformat/lib/elements/Makefile.am
index 049676d59..47423b9e1 100644
--- a/modules/bibformat/lib/elements/Makefile.am
+++ b/modules/bibformat/lib/elements/Makefile.am
@@ -1,49 +1,53 @@
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
pylibdir=$(libdir)/python/invenio/bibformat_elements
pylib_DATA = bfe_field.py bfe_title.py bfe_authors.py bfe_abstract.py bfe_affiliation.py \
bfe_imprint.py bfe_fulltext.py bfe_place.py bfe_publisher.py bfe_topbanner.py \
bfe_date_rec.py bfe_keywords.py bfe_notes.py bfe_reprints.py bfe_publi_info.py \
bfe_cited_by.py bfe_references.py bfe_title_brief.py \
bfe_report_numbers.py bfe_additional_report_numbers.py bfe_url.py \
bfe_addresses.py bfe_contact.py bfe_photo_resources_brief.py \
bfe_collection.py bfe_editors.py bfe_bibtex.py bfe_edit_record.py \
bfe_date.py bfe_xml_record.py bfe_external_publications.py __init__.py \
bfe_bfx_engine.py bfe_creation_date.py bfe_server_info.py bfe_issn.py \
bfe_client_info.py bfe_language.py bfe_record_id.py bfe_comments.py \
bfe_pagination.py bfe_fulltext_mini.py bfe_year.py bfe_isbn.py \
bfe_appears_in_collections.py bfe_photos.py bfe_record_stats.py \
bfe_edit_files.py bfe_plots.py bfe_plots_thumb.py bfe_sword_push.py \
bfe_video_sources.py bfe_video_bigthumb.py \
bfe_aid_authors.py bfe_doi.py bfe_addthis.py \
bfe_duration.py bfe_record_url.py bfe_video_selector.py \
bfe_video_platform_downloads.py bfe_video_platform_suggestions.py \
bfe_video_platform_sources.py bfe_sciencewise.py bfe_bookmark.py \
bfe_oai_marcxml.py bfe_copyright.py bfe_meta.py bfe_meta_opengraph_image.py \
bfe_meta_opengraph_video.py bfe_webauthorpage_affiliations.py \
- bfe_qrcode.py
+ bfe_qrcode.py \
+ bfe_authority_author.py bfe_authority_institution.py \
+ bfe_authority_journal.py bfe_authority_subject.py \
+ bfe_authority_control_no.py
+
tmpdir = $(prefix)/var/tmp/tests_bibformat_elements
tmp_DATA = test_1.py bfe_test_2.py bfe_test_4.py test3.py test_5.py \
test_no_element.test __init__.py
EXTRA_DIST = $(pylib_DATA) $(tmp_DATA)
CLEANFILES = *~ *.tmp *.pyc
diff --git a/modules/bibformat/lib/elements/bfe_affiliation.py b/modules/bibformat/lib/elements/bfe_affiliation.py
index ee0c8bd12..f97fd9694 100644
--- a/modules/bibformat/lib/elements/bfe_affiliation.py
+++ b/modules/bibformat/lib/elements/bfe_affiliation.py
@@ -1,41 +1,71 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat element - Prints affiliation
"""
__revision__ = "$Id$"
import cgi
+from invenio.config import \
+ CFG_SITE_URL, CFG_SITE_NAME
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME
+
+from invenio.bibauthority_engine import \
+ get_low_level_recIDs_from_control_no
def format_element(bfo):
"""
HTML Affiliation display
"""
- affiliations = bfo.fields('909C1u')
- if len(affiliations) > 0:
- out = "<br/>"
- for affiliation in affiliations:
- out += cgi.escape(affiliation) +" "
- return out
+ affiliations = bfo.fields('909C1', repeatable_subfields_p=True)
+ out = ""
+ for affiliation_dict in affiliations:
+ if 'u' in affiliation_dict:
+ recIDs = []
+ affiliation = affiliation_dict['u'][0]
+ control_nos = affiliation_dict.get('0')
+ for control_no in control_nos or []:
+ recIDs.extend(get_low_level_recIDs_from_control_no(control_no))
+ affiliation = cgi.escape(affiliation)
+ if len(recIDs) == 1:
+ affiliation = '<a href="' + CFG_SITE_URL + \
+ '/record/' + str(recIDs[0]) + \
+ '?ln=' + bfo.lang + \
+ '">' + affiliation + '</a>'
+ elif len(recIDs) > 1:
+ affiliation = '<a href="' + CFG_SITE_URL + \
+ '/search?' + \
+ 'p=recid:' + " or recid:".join([str(_id) for _id in recIDs]) + \
+ '&amp;c=' + CFG_SITE_NAME + \
+ '&amp;c=' + CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME + \
+ '&amp;ln=' + bfo.lang + \
+ '">' + affiliation + '</a>'
+
+ out += affiliation + " "
+
+ if out:
+ return "<br/>" + out
+
def escape_values(bfo):
"""
Called by BibFormat in order to check if output of this element
should be escaped.
"""
return 0
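The rewritten `bfe_affiliation` element links an affiliation either directly to a single record or, for several matches, through a `recid:` OR search query. A minimal sketch of that query-building convention (the helper name `recid_query` is ours, not Invenio's):

```python
def recid_query(recids):
    """Build a search pattern in the style used by the element above:
    "recid:1 or recid:2 or recid:3"."""
    return "recid:" + " or recid:".join(str(r) for r in recids)

assert recid_query([118]) == "recid:118"
assert recid_query([1, 2, 3]) == "recid:1 or recid:2 or recid:3"
```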
diff --git a/modules/bibformat/lib/elements/bfe_authority_author.py b/modules/bibformat/lib/elements/bfe_authority_author.py
new file mode 100755
index 000000000..8c820b593
--- /dev/null
+++ b/modules/bibformat/lib/elements/bfe_authority_author.py
@@ -0,0 +1,73 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibFormat element - Prints author data from an Authority Record.
+"""
+
+import re
+
+__revision__ = "$Id$"
+
+def format_element(bfo, detail='no'):
+ """ Prints the data of an author authority record in HTML. By default prints
+ brief version.
+
+ @param detail: whether the 'detailed' rather than the 'brief' format
+ @type detail: 'yes' or 'no'
+ """
+
+ from invenio.messages import gettext_set_language
+ _ = gettext_set_language(bfo.lang) # load the right message language
+ # return value
+ out = ""
+ # local function
+ def stringify_dict(d):
+ """ return a string composed of the values in d """
+ _str = ""
+ if 'a' in d:
+ _str += d['a']
+ if 'd' in d:
+ _str += ", " + d['d']
+ return _str or ''
+ # brief
+ main_dicts = bfo.fields('100%%')
+ if len(main_dicts):
+ main_dict = main_dicts[0]
+ main = stringify_dict(main_dict)
+ out += "<p>" + "<strong>" + _("Main %s name") % _("author") + "</strong>" + ": " + main + "</p>"
+ # detail
+ if detail.lower() == "yes":
+ sees = [stringify_dict(see_dict) for see_dict in bfo.fields('400%%')]
+ sees = filter(None, sees) # fastest way to remove empty ""s
+ sees = [re.sub(",{2,}",",", x) for x in sees] # prevent ",,"
+ if len(sees):
+ out += "<p>" + "<strong>" + _("Variant(s)") + "</strong>" + ": " + ", ".join(sees) + "</p>"
+ see_alsos = [stringify_dict(see_also_dict) for see_also_dict in bfo.fields('500%%')]
+ see_alsos = filter(None, see_alsos) # fastest way to remove empty ""s
+ see_alsos = [re.sub(",{2,}",",", x) for x in see_alsos] # prevent ",,"
+ if len(see_alsos):
+ out += "<p>" + "<strong>" + _("See also") + "</strong>" + ": " + ", ".join(see_alsos) + "</p>"
+ # return
+ return out
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
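`bfe_authority_author` guards against empty 400/500 subfields producing runs of commas in the variant list. The same cleanup can be exercised standalone (the helper name is illustrative):

```python
import re

def collapse_commas(name):
    """Collapse runs of commas left behind by empty subfields,
    e.g. "Ellis,, J" -> "Ellis, J" (the same re.sub the element uses)."""
    return re.sub(",{2,}", ",", name)

assert collapse_commas("Ellis,, J") == "Ellis, J"
assert collapse_commas("Ellis, J") == "Ellis, J"
```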
diff --git a/modules/bibformat/lib/elements/bfe_authority_control_no.py b/modules/bibformat/lib/elements/bfe_authority_control_no.py
new file mode 100755
index 000000000..dbc79fd95
--- /dev/null
+++ b/modules/bibformat/lib/elements/bfe_authority_control_no.py
@@ -0,0 +1,119 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibFormat element - Prints the control number of an Authority Record.
+"""
+
+from invenio.config import CFG_SITE_URL, CFG_SITE_NAME
+
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME
+from invenio.bibauthority_engine import \
+ get_low_level_recIDs_from_control_no, \
+ get_dependent_records_for_control_no
+
+__revision__ = "$Id$"
+
+def format_element(bfo):
+ """ Prints the control number(s) of an authority record in HTML,
+ together with the number of dependent records for each
+ control number.
+ """
+
+ from invenio.messages import gettext_set_language
+ _ = gettext_set_language(bfo.lang) # load the right message language
+
+ control_nos = [d['a'] for d in bfo.fields('035__')]
+ control_nos = filter(None, control_nos) # fastest way to remove empty ""s
+
+ control_nos_formatted = []
+ for control_no in control_nos:
+# recIDs = []
+# types = guess_authority_types(bfo.recID)
+# # control_no example: AUTHOR:(CERN)aaa0005"
+# control_nos = [(type + CFG_BIBAUTHORITY_PREFIX_SEP + control_no) for type in types]
+# for control_no in control_nos:
+# recIDs.extend(list(search_pattern(p='"' + control_no + '"')))
+ recIDs = get_dependent_records_for_control_no(control_no)
+ count = len(recIDs)
+ count_string = str(count) + " dependent record" + ("s" if count != 1 else "")
+
+ # if we have dependent records, provide a link to them
+ if count:
+ prefix_pattern = "<a href='" + CFG_SITE_URL + "%s" + "'>"
+ postfix = "</a>"
+ url_str = ''
+ # we have multiple dependent records
+ if count > 1:
+ # joining control_nos might be more helpful for the user
+ # than joining recIDs... or maybe not...
+# p_val = '"' + '" or "'.join(control_nos) + '"' # more understandable for the user
+ p_val = "recid:" + ' or recid:'.join([str(recID) for recID in recIDs]) # more efficient
+ # include "&c=" parameter for bibliographic records
+ # and one "&c=" parameter for authority records
+ url_str = \
+ "/search" + \
+ "?p=" + p_val + \
+ "&c=" + CFG_SITE_NAME + \
+ "&c=" + CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME + \
+ "&sc=1" + \
+ "&ln=" + bfo.lang
+ # we have exactly one dependent record
+ elif count == 1:
+ url_str = "/record/" + str(recIDs[0])
+
+ prefix = prefix_pattern % (url_str)
+ count_string = prefix + count_string + postfix
+ #assemble the html and append to list
+ html_str = control_no + " (" + count_string + ")"
+
+ # check whether more than one authority record shares the same
+ # control number. If so, warn the user about this inconsistency.
+ # TODO: hide this warning from unauthorized users
+ my_recIDs = get_low_level_recIDs_from_control_no(control_no)
+ if len(my_recIDs) > 1:
+ url_str = \
+ "/search" + \
+ "?p=" + "recid:" + ' or recid:'.join([str(_id) for _id in my_recIDs]) + \
+ "&c=" + CFG_SITE_NAME + \
+ "&c=" + CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME + \
+ "&sc=1" + \
+ "&ln=" + bfo.lang
+ html_str += \
+ ' <span style="color:red">' + \
+ '(Warning, there is currently ' + \
+ '<a href="' + url_str + '">more than one authority record</a> ' + \
+ 'with this Control Number)' + \
+ '</span>'
+
+ control_nos_formatted.append(html_str)
+
+ title = "<strong>" + _("Control Number(s)") + "</strong>"
+ content = ", ".join(control_nos_formatted) \
+ or "<strong style='color:red'>Missing!</strong>"
+
+ return "<p>" + title + ": " + content + "</p>"
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
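`bfe_authority_control_no` renders a per-control-number count such as "2 dependent records", which the browsing tests then locate by link text. A hedged sketch of a pluralization helper (hypothetical, not part of the module):

```python
def count_phrase(count, noun):
    # Hypothetical helper: "1 dependent record" vs. "2 dependent records".
    return "%d %s%s" % (count, noun, "" if count == 1 else "s")

assert count_phrase(1, "dependent record") == "1 dependent record"
assert count_phrase(2, "dependent record") == "2 dependent records"
```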
diff --git a/modules/bibformat/lib/elements/bfe_authority_institution.py b/modules/bibformat/lib/elements/bfe_authority_institution.py
new file mode 100755
index 000000000..94ced00cc
--- /dev/null
+++ b/modules/bibformat/lib/elements/bfe_authority_institution.py
@@ -0,0 +1,205 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibFormat element - Prints institution data from an Authority Record.
+"""
+
+__revision__ = "$Id$"
+
+from invenio.config import CFG_SITE_URL
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD, \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME
+
+from invenio.bibauthority_engine import \
+ get_control_nos_from_recID, \
+ guess_main_name_from_authority_recID
+from invenio.search_engine import \
+ perform_request_search, \
+ get_record
+
+def format_element(bfo, detail='no'):
+ """ Prints the data of an institution authority record in HTML. By default prints
+ brief version.
+
+ @param detail: whether the 'detailed' rather than the 'brief' format
+ @type detail: 'yes' or 'no'
+ """
+ from invenio.messages import gettext_set_language
+ _ = gettext_set_language(bfo.lang) # load the right message language
+
+ # return value
+ out = ""
+ # brief
+ main_dicts = bfo.fields('110%%')
+ if len(main_dicts):
+ main = main_dicts[0].get('a') or ""
+ out += "<p>" + "<strong>" + _("Main %s name") % _("institution") + "</strong>" + ": " + main + "</p>"
+ # detail
+ if detail.lower() == "yes":
+ sees = [see_dict['a'] for see_dict in bfo.fields('410%%') if 'a' in see_dict]
+ sees = filter(None, sees) # fastest way to remove empty ""s
+ if len(sees):
+ out += "<p>" + "<strong>" + _("Variant(s)") + "</strong>" + ": " + ", ".join(sees) + "</p>"
+ see_also_dicts = bfo.fields('510%%')
+ cc_val = CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME
+ c_val = "Authority Institution"
+ record_url_pattern = "/record/" + "%s"
+ search_url_pattern = "/search?" + \
+ "cc=" + "%s" + \
+ "&c=" + "%s" + \
+ "&p=" + "%s" + \
+ "&sc=" + "%s"
+ link_pattern = "<a href='" + CFG_SITE_URL + '%s' + "'>" + '%s' + "</a>"
+ # populate the first 3 lists
+ parent_htmls, predecessor_htmls, successor_htmls = \
+ get_main_htmls(see_also_dicts, cc_val, c_val, record_url_pattern,
+ search_url_pattern, link_pattern)
+ # populate the list of children
+ child_htmls = \
+ get_child_htmls(bfo.recID, cc_val, c_val, record_url_pattern,
+ link_pattern)
+ # put it all together
+ if len(parent_htmls):
+ out += "<p>" + "<strong>" + _("Parent") + "</strong>" + ": " + ", ".join(parent_htmls) + "</p>"
+ if len(child_htmls):
+ out += "<p>" + "<strong>" + _("Children") + "</strong>" + ": " + ", ".join(child_htmls) + "</p>"
+ if len(predecessor_htmls):
+ out += "<p>" + "<strong>" + _("Predecessor") + "</strong>" + ": " + ", ".join(predecessor_htmls) + "</p>"
+ if len(successor_htmls):
+ out += "<p>" + "<strong>" + _("Successor") + "</strong>" + ": " + ", ".join(successor_htmls) + "</p>"
+ # return
+ return out
+
+def get_main_htmls(see_also_dicts, cc_val, c_val, record_url_pattern,
+ search_url_pattern, link_pattern):
+ """parent_htmls, predecessor_htmls, successor_htmls can all be deduced
+ directly from the metadata of the record"""
+ # reusable vars
+ f_val = CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
+ sc_val = "1"
+ parent_htmls = []
+ predecessor_htmls = []
+ successor_htmls = []
+
+ # start processing
+ for see_also_dict in see_also_dicts:
+ if 'w' in see_also_dict:
+ # $w contains 'a' for predecessor, 'b' for successor, etc.
+ w_subfield = see_also_dict.get('w')
+ # $4 contains control_no of linked authority record
+ _4_subfield = see_also_dict.get('4')
+ # $a contains the name of the linked institution
+ out_string = see_also_dict.get('a') or _4_subfield
+ # if we have something to display
+ if out_string:
+ url = ''
+ # if we have a control number
+ if _4_subfield:
+ p_val = _4_subfield
+# if CFG_BIBAUTHORITY_PREFIX_SEP in _4_subfield:
+# unused, p_val = _4_subfield.split(CFG_BIBAUTHORITY_PREFIX_SEP);
+ recIDs = perform_request_search(cc=cc_val,
+ c=c_val,
+ p=p_val,
+ f=f_val)
+ if len(recIDs) == 1:
+ url = record_url_pattern % (recIDs[0])
+ elif len(recIDs) > 1:
+ p_val = "recid:" + \
+ " or recid:".join([str(r) for r in recIDs])
+ url = search_url_pattern % (cc_val,
+ c_val,
+ p_val,
+ sc_val)
+ # if we found one or multiple records for the control_no,
+ # make the out_string a clickable url towards those records
+ if url:
+ out_string = link_pattern % (url, out_string)
+ # add the out_string to the appropriate list
+ if w_subfield == 't':
+ parent_htmls.append(out_string)
+ elif w_subfield == 'a':
+ predecessor_htmls.append(out_string)
+ elif w_subfield == 'b':
+ successor_htmls.append(out_string)
+ # return
+ return parent_htmls, predecessor_htmls, successor_htmls
+
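The `$w` dispatch above routes each 510 see-also field to the parent, predecessor, or successor list depending on its relationship code. A compact standalone sketch of the same routing (the helper name and dict shape are illustrative, not part of the Invenio API):

```python
# MARC 510 $w relationship codes used above:
# 'a' = predecessor, 'b' = successor, 't' = parent (broader unit).
W_CODE_TO_RELATION = {'t': 'parent', 'a': 'predecessor', 'b': 'successor'}

def route_see_also(see_also_dicts):
    """Group the display strings of 510 fields by their $w code."""
    buckets = {'parent': [], 'predecessor': [], 'successor': []}
    for d in see_also_dicts:
        relation = W_CODE_TO_RELATION.get(d.get('w'))
        # prefer the institution name ($a), fall back to the control no ($4)
        name = d.get('a') or d.get('4')
        if relation and name:
            buckets[relation].append(name)
    return buckets
```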
+def get_child_htmls(this_recID, cc_val, c_val, record_url_pattern,
+ link_pattern):
+ """children aren't referenced by their parents, so we need special treatment to find
+ them"""
+ control_nos = get_control_nos_from_recID(this_recID)
+ for control_no in control_nos:
+ url = ''
+ p_val = '510%4:"' + control_no + '" and 510%w:t'
+ # find a first, fuzzy result set
+ # narrowing down on a few possible recIDs
+ recIDs = perform_request_search(cc=cc_val,
+ c=c_val,
+ p=p_val)
+ # now filter to find the ones where the subfield conditions of p_val
+ # are both true within the exact same field
+ sf_req = [('w', 't'), ('4', control_no)]
+ recIDs = filter(lambda x:
+ match_all_subfields_for_tag(x, '510', sf_req),
+ recIDs)
+ # proceed with assembling the html link
+ child_htmls = []
+ for recID in recIDs:
+ url = record_url_pattern % str(recID)
+ display = guess_main_name_from_authority_recID(recID) or str(recID)
+ out_html = link_pattern % (url, display)
+ child_htmls.append(out_html)
+ return child_htmls
+
+def match_all_subfields_for_tag(recID, field_tag, subfields_required=[]):
+ """
+ Tests whether the record with recID has at least one field with 'field_tag'
+ where all of the required subfields in subfields_required match a subfield
+ in the given field both in code and value
+
+ @param recID: record ID
+ @type recID: int
+
+ @param field_tag: a 3 digit code for the field tag code
+ @type field_tag: string
+
+ @param subfields_required: a list of subfield code/value tuples
+ @type subfields_required: list of tuples of strings.
+ same format as in get_record():
+ e.g. [('w', 't'),
+ ('4', 'XYZ123')]
+
+ @return: boolean
+ """
+ rec = get_record(recID)
+ for field in rec[field_tag]:
+ subfields_present = field[0]
+ intersection = set(subfields_present) & set(subfields_required)
+ if set(subfields_required) == intersection:
+ return True
+ return False
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
\ No newline at end of file
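The subfield check in `match_all_subfields_for_tag()` above boils down to a set-intersection test over `(code, value)` tuples. A minimal sketch of that test in isolation (the helper name is illustrative):

```python
def match_all_subfields(field_subfields, subfields_required):
    """True when every required (code, value) pair occurs among the
    field's own (code, value) pairs -- the same intersection test used
    in match_all_subfields_for_tag()."""
    required = set(subfields_required)
    return set(field_subfields) & required == required
```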
diff --git a/modules/bibformat/lib/elements/bfe_authority_journal.py b/modules/bibformat/lib/elements/bfe_authority_journal.py
new file mode 100755
index 000000000..ae8f920b4
--- /dev/null
+++ b/modules/bibformat/lib/elements/bfe_authority_journal.py
@@ -0,0 +1,70 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibFormat element - Prints journal data from an Authority Record.
+"""
+
+import re
+
+__revision__ = "$Id$"
+
+def format_element(bfo, detail='no'):
+ """ Prints the data of a journal authority record in HTML. By default prints
+ brief version.
+
+ @param detail: whether to print the 'detailed' rather than the 'brief' format
+ @type detail: 'yes' or 'no'
+ """
+ from invenio.messages import gettext_set_language
+ _ = gettext_set_language(bfo.lang) # load the right message language
+ # return value
+ out = ""
+ # local function
+ def stringify_dict(d):
+ """ return a string composed of the values in d """
+ _str = ""
+ if 'a' in d:
+ _str += d['a']
+ return _str or ''
+ # brief
+ main_dicts = bfo.fields('130%%')
+ if len(main_dicts):
+ main_dict = main_dicts[0]
+ main = stringify_dict(main_dict)
+ out += "<p>" + "<strong>" + _("Main %s name") % _("journal") + "</strong>" + ": " + main + "</p>"
+ # detail
+ if detail.lower() == "yes":
+ sees = [stringify_dict(see_dict) for see_dict in bfo.fields('430%%')]
+ sees = filter(None, sees) # fastest way to remove empty ""s
+ sees = [re.sub(",{2,}",",", x) for x in sees] # prevent ",,"
+ if len(sees):
+ out += "<p>" + "<strong>" + _("Variant(s)") + "</strong>" + ": " + ", ".join(sees) + "</p>"
+ see_alsos = [stringify_dict(see_also_dict) for see_also_dict in bfo.fields('530%%')]
+ see_alsos = filter(None, see_alsos) # fastest way to remove empty ""s
+ see_alsos = [re.sub(",{2,}",",", x) for x in see_alsos] # prevent ",,"
+ if len(see_alsos):
+ out += "<p>" + "<strong>" + _("See also") + "</strong>" + ": " + ", ".join(see_alsos) + "</p>"
+ # return
+ return out
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
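The variant/see-also cleanup used in this element (and in the subject element below) pairs a `filter(None, ...)` pass to drop empty strings with a regex pass to collapse comma runs. The same two steps as a Python 3-style sketch (the element itself is Python 2, where `filter()` returns a list):

```python
import re

def clean_variants(values):
    """Drop empty strings, then collapse runs of commas into one,
    mirroring the 'Variant(s)' / 'See also' cleanup above."""
    values = [v for v in values if v]                 # remove ""s
    return [re.sub(",{2,}", ",", v) for v in values]  # prevent ",,"
```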
diff --git a/modules/bibformat/lib/elements/bfe_authority_subject.py b/modules/bibformat/lib/elements/bfe_authority_subject.py
new file mode 100755
index 000000000..3e9cc6b19
--- /dev/null
+++ b/modules/bibformat/lib/elements/bfe_authority_subject.py
@@ -0,0 +1,70 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibFormat element - Prints subject data from an Authority Record.
+"""
+
+import re
+
+__revision__ = "$Id$"
+
+def format_element(bfo, detail='no'):
+ """ Prints the data of a subject authority record in HTML. By default prints
+ brief version.
+
+ @param detail: whether to print the 'detailed' rather than the 'brief' format
+ @type detail: 'yes' or 'no'
+ """
+ from invenio.messages import gettext_set_language
+ _ = gettext_set_language(bfo.lang) # load the right message language
+ # return value
+ out = ""
+ # local function
+ def stringify_dict(d):
+ """ return a string composed of the values in d """
+ _str = ""
+ if 'a' in d:
+ _str += d['a']
+ return _str or ''
+ # brief
+ main_dicts = bfo.fields('150%%')
+ if len(main_dicts):
+ main_dict = main_dicts[0]
+ main = stringify_dict(main_dict)
+ out += "<p>" + "<strong>" + _("Main %s name") % _("subject") + "</strong>" + ": " + main + "</p>"
+ # detail
+ if detail.lower() == "yes":
+ sees = [stringify_dict(see_dict) for see_dict in bfo.fields('450%%')]
+ sees = filter(None, sees) # fastest way to remove empty ""s
+ sees = [re.sub(",{2,}",",", x) for x in sees] # prevent ",,"
+ if len(sees):
+ out += "<p>" + "<strong>" + _("Variant(s)") + "</strong>" + ": " + ", ".join(sees) + "</p>"
+ see_alsos = [stringify_dict(see_also_dict) for see_also_dict in bfo.fields('550%%')]
+ see_alsos = filter(None, see_alsos) # fastest way to remove empty ""s
+ see_alsos = [re.sub(",{2,}",",", x) for x in see_alsos] # prevent ",,"
+ if len(see_alsos):
+ out += "<p>" + "<strong>" + _("See also") + "</strong>" + ": " + ", ".join(see_alsos) + "</p>"
+ # return
+ return out
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
diff --git a/modules/bibformat/lib/elements/bfe_authors.py b/modules/bibformat/lib/elements/bfe_authors.py
index c0e309f91..7389c781a 100644
--- a/modules/bibformat/lib/elements/bfe_authors.py
+++ b/modules/bibformat/lib/elements/bfe_authors.py
@@ -1,163 +1,201 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat element - Prints authors
"""
__revision__ = "$Id$"
import re
from urllib import quote
from cgi import escape
from invenio.config import CFG_SITE_URL
from invenio.messages import gettext_set_language
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME, \
+ CFG_BIBAUTHORITY_TYPE_NAMES, \
+ CFG_BIBAUTHORITY_PREFIX_SEP
+from invenio.bibauthority_engine import \
+ get_low_level_recIDs_from_control_no
def format_element(bfo, limit, separator=' ; ',
extension='[...]',
print_links="yes",
print_affiliations='no',
affiliation_prefix=' (',
affiliation_suffix=')',
interactive="no",
highlight="no",
link_author_pages="no",
link_mobile_pages="no",
relator_code_pattern=None):
"""
Prints the list of authors of a record.
@param limit: the maximum number of authors to display
@param separator: the separator between authors.
@param extension: a text printed if more authors than 'limit' exist
@param print_links: if yes, prints the authors as HTML link to their publications
@param print_affiliations: if yes, make each author name followed by its affiliation
@param affiliation_prefix: prefix printed before each affiliation
@param affiliation_suffix: suffix printed after each affiliation
@param interactive: if yes, enable user to show/hide authors when there are too many (html + javascript)
@param highlight: highlights authors corresponding to search query if set to 'yes'
@param link_author_pages: should we link to author pages if print_links in on?
@param link_mobile_pages: should we link to mobile app pages if print_links in on?
@param relator_code_pattern: a regular expression to filter authors based on subfield $4 (relator code)
"""
_ = gettext_set_language(bfo.lang) # load the right message language
authors = []
- authors_1 = bfo.fields('100__')
- authors_2 = bfo.fields('700__')
+ authors_1 = bfo.fields('100__', repeatable_subfields_p=True)
+ authors_2 = bfo.fields('700__', repeatable_subfields_p=True)
authors.extend(authors_1)
authors.extend(authors_2)
+ # make unique string per key
+ for author in authors:
+ if 'a' in author:
+ author['a'] = author['a'][0]
+ if 'u' in author:
+ author['u'] = author['u'][0]
+ pattern = '%s' + CFG_BIBAUTHORITY_PREFIX_SEP + "("
+ for control_no in author.get('0', []):
+ if pattern % (CFG_BIBAUTHORITY_TYPE_NAMES["INSTITUTION"]) in control_no:
+ author['u0'] = control_no # overwrite if multiples
+ elif pattern % (CFG_BIBAUTHORITY_TYPE_NAMES["AUTHOR"]) in control_no:
+ author['a0'] = control_no # overwrite if multiples
+
+
if relator_code_pattern:
p = re.compile(relator_code_pattern)
authors = filter(lambda x: p.match(x.get('4', '')), authors)
nb_authors = len(authors)
bibrec_id = bfo.control_field("001")
# Process authors to add link, highlight and format affiliation
for author in authors:
if author.has_key('a'):
if highlight == 'yes':
from invenio import bibformat_utils
author['a'] = bibformat_utils.highlight(author['a'],
bfo.search_pattern)
if print_links.lower() == "yes":
if link_author_pages == "yes":
author['a'] = '<a rel="author" href="' + CFG_SITE_URL + \
'/author/' + quote(author['a']) + \
'?recid=' + bibrec_id + \
'&ln=' + bfo.lang + \
'">' + escape(author['a']) + '</a>'
elif link_mobile_pages == 'yes':
author['a'] = '<a rel="external" href="#page=search' + \
'&amp;f=author&amp;p=' + quote(author['a']) + \
'">' + escape(author['a']) + '</a>'
else:
+ auth_coll_param = ''
+ if 'a0' in author:
+ recIDs = get_low_level_recIDs_from_control_no(author['a0'])
+ if len(recIDs):
+ auth_coll_param = '&amp;c=' + \
+ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME
author['a'] = '<a href="' + CFG_SITE_URL + \
'/search?f=author&amp;p=' + quote(author['a']) + \
+ auth_coll_param + \
'&amp;ln=' + bfo.lang + \
'">' + escape(author['a']) + '</a>'
if author.has_key('u'):
if print_affiliations == "yes":
+ if 'u0' in author:
+ recIDs = get_low_level_recIDs_from_control_no(author['u0'])
+ # if there is more than 1 recID, clicking on link and
+ # thus displaying the authority record's page should
+ # contain a warning that there are multiple authority
+ # records with the same control number
+ if len(recIDs):
+ author['u'] = '<a href="' + CFG_SITE_URL + '/record/' + \
+ str(recIDs[0]) + \
+ '?ln=' + bfo.lang + \
+ '">' + author['u'] + '</a>'
author['u'] = affiliation_prefix + author['u'] + \
affiliation_suffix
# Flatten author instances
if print_affiliations == 'yes':
authors = [author.get('a', '') + author.get('u', '')
for author in authors]
else:
authors = [author.get('a', '')
for author in authors]
if limit.isdigit() and nb_authors > int(limit) and interactive != "yes":
return separator.join(authors[:int(limit)]) + extension
elif limit.isdigit() and nb_authors > int(limit) and interactive == "yes":
out = '<a name="show_hide" />'
out += separator.join(authors[:int(limit)])
out += '<span id="more_%s" style="">' % bibrec_id + separator + \
separator.join(authors[int(limit):]) + '</span>'
out += ' <span id="extension_%s"></span>' % bibrec_id
out += ' <small><i><a id="link_%s" href="#" style="color:rgb(204,0,0);"></a></i></small>' % bibrec_id
out += '''
<script type="text/javascript">
$('#link_%(recid)s').click(function(event) {
event.preventDefault();
var more = document.getElementById('more_%(recid)s');
var link = document.getElementById('link_%(recid)s');
var extension = document.getElementById('extension_%(recid)s');
if (more.style.display=='none'){
more.style.display = '';
extension.style.display = 'none';
link.innerHTML = "%(show_less)s"
} else {
more.style.display = 'none';
extension.style.display = '';
link.innerHTML = "%(show_more)s"
}
link.style.color = "rgb(204,0,0);"
});
function set_up_%(recid)s(){
var extension = document.getElementById('extension_%(recid)s');
extension.innerHTML = "%(extension)s";
$('#link_%(recid)s').click();
}
</script>
''' % {'show_less':_("Hide"),
'show_more':_("Show all %i authors") % nb_authors,
'extension':extension,
'recid': bibrec_id}
out += '<script type="text/javascript">set_up_%s()</script>' % bibrec_id
return out
elif nb_authors > 0:
return separator.join(authors)
def escape_values(bfo):
"""
Called by BibFormat in order to check if output of this element
should be escaped.
"""
return 0
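The non-interactive truncation branch of `format_element()` above caps the rendered author list at `limit` and appends the extension marker. A self-contained sketch of just that branch (note that `limit` arrives as a string, since BibFormat passes template parameters as text):

```python
def truncate_authors(authors, limit, separator=' ; ', extension='[...]'):
    """Show at most `limit` authors, then the extension marker,
    as in the non-interactive branch of bfe_authors.format_element()."""
    if limit.isdigit() and len(authors) > int(limit):
        return separator.join(authors[:int(limit)]) + extension
    return separator.join(authors)
```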
diff --git a/modules/bibformat/lib/elements/bfe_bibtex.py b/modules/bibformat/lib/elements/bfe_bibtex.py
index bafd97957..f6160aa44 100644
--- a/modules/bibformat/lib/elements/bfe_bibtex.py
+++ b/modules/bibformat/lib/elements/bfe_bibtex.py
@@ -1,496 +1,506 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat element - Prints BibTeX meta-data
"""
__revision__ = "$Id$"
from invenio.config import CFG_SITE_LANG
def format_element(bfo, width="50"):
"""
Prints a full BibTeX record.
'width' must be bigger than or equal to 30.
This format element is an example of large element, which does
all the formatting by itself
@param width: the width (in number of characters) of the record
"""
out = "@"
width = int(width)
if width < 30:
width = 30
- name_width = 19
+ name_width = 20
value_width = width-name_width
recID = bfo.control_field('001')
#Print entry type
import invenio.bibformat_elements.bfe_collection as bfe_collection
collection = bfe_collection.format_element(bfo=bfo, kb="DBCOLLID2BIBTEX")
if collection == "":
out += "article"
else:
out += collection
out += "{"
#Print BibTeX key
#
#Try to have: author_name:recID
#If author_name cannot be found, use primary_report_number
#If primary_report_number cannot be found, use additional_report_number
#If additional_report_number cannot be found, use title:recID
#If title cannot be found, use only recID
#
#The construction of this key is inherited from old BibTeX format
#written in EL, in old BibFormat.
key = recID
author = bfo.field("100__a")
if author != "":
key = get_name(author)+":"+recID
else:
author = bfo.field("700__a")
if author != "":
key = get_name(author)+":"+recID
else:
primary_report_number = bfo.field("037__a")
if primary_report_number != "":
key = primary_report_number
else:
additional_report_number = bfo.field("088__a")
if additional_report_number != "":
key = additional_report_number
else:
title = bfo.field("245__a")
if title != "":
key = get_name(title)+":"+recID
out += key +","
#Print authors
#If author cannot be found, print a field key=recID
import invenio.bibformat_elements.bfe_authors as bfe_authors
authors = bfe_authors.format_element(bfo=bfo,
limit="",
separator=" and ",
extension="",
print_links="no")
if authors == "":
out += format_bibtex_field("key",
recID,
name_width,
value_width)
else:
out += format_bibtex_field("author",
authors,
name_width,
value_width)
#Print editors
import invenio.bibformat_elements.bfe_editors as bfe_editors
editors = bfe_editors.format_element(bfo=bfo, limit="",
separator=" and ",
extension="",
print_links="no")
out += format_bibtex_field("editor",
editors,
name_width,
value_width)
#Print title
import invenio.bibformat_elements.bfe_title as bfe_title
title = bfe_title.format_element(bfo=bfo, separator = ". ")
out += format_bibtex_field("title",
- title,
+ '{' + title + '}',
name_width,
value_width)
#Print institution
if collection == "techreport":
publication_name = bfo.field("269__b")
out += format_bibtex_field("institution",
publication_name,
name_width, value_width)
#Print organization
if collection == "inproceedings" or collection == "proceedings":
organization = []
organization_1 = bfo.field("260__b")
if organization_1 != "":
organization.append(organization_1)
organization_2 = bfo.field("269__b")
if organization_2 != "":
organization.append(organization_2)
out += format_bibtex_field("organization",
". ".join(organization),
name_width,
value_width)
#Print publisher
if collection == "book" or \
collection == "inproceedings" \
or collection == "proceedings":
publishers = []
import invenio.bibformat_elements.bfe_publisher as bfe_publisher
publisher = bfe_publisher.format_element(bfo=bfo)
if publisher != "":
publishers.append(publisher)
publication_name = bfo.field("269__b")
if publication_name != "":
publishers.append(publication_name)
imprint_publisher_name = bfo.field("933__b")
if imprint_publisher_name != "":
publishers.append(imprint_publisher_name)
imprint_e_journal__publisher_name = bfo.field("934__b")
if imprint_e_journal__publisher_name != "":
publishers.append(imprint_e_journal__publisher_name)
out += format_bibtex_field("publisher",
". ".join(publishers),
name_width,
value_width)
#Print journal
if collection == "article":
journals = []
host_title = bfo.field("773__p")
if host_title != "":
journals.append(host_title)
journal = bfo.field("909C4p")
if journal != "":
journals.append(journal)
out += format_bibtex_field("journal",
". ".join(journals),
name_width,
value_width)
#Print school
if collection == "phdthesis":
university = bfo.field("502__b")
out += format_bibtex_field("school",
university,
name_width,
value_width)
+ # Collaboration
+ collaborations = []
+ for collaboration in bfo.fields("710__g"):
+ if collaboration not in collaborations:
+ collaborations.append(collaboration)
+ out += format_bibtex_field("collaboration",
+ ", ".join(collaborations),
+ name_width,
+ value_width)
+
#Print address
if collection == "book" or \
collection == "inproceedings" or \
collection == "proceedings" or \
collection == "phdthesis" or \
collection == "techreport":
addresses = []
publication_place = bfo.field("260__a")
if publication_place != "":
addresses.append(publication_place)
publication_place_2 = bfo.field("269__a")
if publication_place_2 != "":
addresses.append(publication_place_2)
imprint_publisher_place = bfo.field("933__a")
if imprint_publisher_place != "":
addresses.append(imprint_publisher_place)
imprint_e_journal__publisher_place = bfo.field("934__a")
if imprint_e_journal__publisher_place != "":
addresses.append(imprint_e_journal__publisher_place)
out += format_bibtex_field("address",
". ".join(addresses),
name_width,
value_width)
#Print number
if collection == "techreport" or \
collection == "article":
numbers = []
primary_report_number = bfo.field("037__a")
if primary_report_number != "":
numbers.append(primary_report_number)
additional_report_numbers = bfo.fields("088__a")
additional_report_numbers = ". ".join(additional_report_numbers)
if additional_report_numbers != "":
numbers.append(additional_report_numbers)
host_number = bfo.field("773__n")
if host_number != "":
numbers.append(host_number)
number = bfo.field("909C4n")
if number != "":
numbers.append(number)
out += format_bibtex_field("number",
". ".join(numbers),
name_width,
value_width)
#Print volume
if collection == "article" or \
collection == "book":
volumes = []
host_volume = bfo.field("773__v")
if host_volume != "":
volumes.append(host_volume)
volume = bfo.field("909C4v")
if volume != "":
volumes.append(volume)
out += format_bibtex_field("volume",
". ".join(volumes),
name_width,
value_width)
#Print series
if collection == "book":
series = bfo.field("490__a")
out += format_bibtex_field("series",
series,
name_width,
value_width)
#Print pages
if collection == "article" or \
collection == "inproceedings":
pages = []
host_pages = bfo.field("773c")
if host_pages != "":
pages.append(host_pages)
nb_pages = bfo.field("909C4c")
if nb_pages != "":
pages.append(nb_pages)
phys_pagination = bfo.field("300__a")
if phys_pagination != "":
pages.append(phys_pagination)
out += format_bibtex_field("pages",
". ".join(pages),
name_width,
value_width)
#Print month
month = get_month(bfo.field("269__c"))
if month == "":
month = get_month(bfo.field("260__c"))
if month == "":
month = get_month(bfo.field("502__c"))
out += format_bibtex_field("month",
month,
name_width,
value_width)
#Print year
year = get_year(bfo.field("269__c"))
if year == "":
year = get_year(bfo.field("260__c"))
if year == "":
year = get_year(bfo.field("502__c"))
if year == "":
year = get_year(bfo.field("909C0y"))
out += format_bibtex_field("year",
year,
name_width,
value_width)
#Print note
note = bfo.field("500__a")
out += format_bibtex_field("note",
note,
name_width,
value_width)
out +="\n}"
return out
def format_bibtex_field(name, value, name_width=20, value_width=40):
"""
Formats a name and value to display as BibTeX field.
'name_width' is the width of the name of the field (everything before " = " on first line)
'value_width' is the width of everything after " = ".
6 empty chars are printed before the name, then the name and then it is filled with spaces to meet
the required width. Therefore name_width must be > 6 + len(name)
Then " = " is printed (notice spaces).
So the total width will be::
name_width + value_width + len(" = ")
(3)
if value is empty string, then return empty string.
For example format_bibtex_field('author', 'a long value for this record', 13, 15) will
return :
>>
>> name = "a long value
>> for this record",
"""
if name_width < 6 + len(name):
name_width = 6 + len(name)
if value_width < 2:
value_width = 2
if value is None or value == "":
return ""
#format name
name = "\n "+name
name = name.ljust(name_width)
#format value
value = '"'+value+'"' #Add quotes to value
value_lines = []
last_cut = 0
cursor = value_width -1 #First line is smaller because of quote
increase = False
while cursor < len(value):
if cursor == last_cut: #Case where word is bigger than the max
#number of chars per line
increase = True
cursor = last_cut+value_width-1
if value[cursor] != " " and not increase:
cursor -= 1
elif value[cursor] != " " and increase:
cursor += 1
else:
value_lines.append(value[last_cut:cursor])
last_cut = cursor
cursor += value_width
increase = False
#Take rest of string
last_line = value[last_cut:]
if last_line != "":
value_lines.append(last_line)
tabs = "".ljust(name_width + 2)
value = ("\n"+tabs).join(value_lines)
return name + ' = ' + value + ","
def get_name(string):
"""
Tries to return the last name contained in a string.
In fact returns the text before any comma in 'string', with
spaces removed. If no comma is found, returns the longest word in 'string'.
Behaviour inherited from old GET_NAME function defined as UFD in
old BibFormat. We need to return the same value, to keep back
compatibility with already generated BibTeX records.
Eg: get_name("Östlund, Øvind B") returns "Östlund".
"""
names = string.split(',')
if len(names) == 1:
#Comma not found.
#Split around any space
longest_name = ""
words = string.split()
for word in words:
if len(word) > len(longest_name):
longest_name = word
return longest_name
else:
return names[0].replace(" ", "")
def get_year(date, default=""):
"""
Returns the year from a textual date retrieved from a record
The returned value is a 4-digit string.
If the year cannot be found, returns 'default'.
Returns first value found.
@param date: the textual date to retrieve the year from
@param default: a default value to return if the year is not found
"""
import re
year_pattern = re.compile(r'\d\d\d\d')
result = year_pattern.search(date)
if result is not None:
return result.group()
return default
def get_month(date, ln=CFG_SITE_LANG, default=""):
"""
Returns the month from a textual date retrieved from a record
The returned value is the 3-letter short month name in language 'ln'
If the month cannot be found, returns 'default'
@param date: the textual date to retrieve the month from
@param default: a default value to return if the month is not found
"""
import re
from invenio.dateutils import get_i18n_month_name
from invenio.messages import language_list_long
#Look for textual month like "Jan" or "sep" or "November" or "novem"
#Limit to CFG_SITE_LANG as language first (most probable date)
#Look for short months. Also matches for long months
short_months = [get_i18n_month_name(month).lower()
for month in range(1, 13)] # ["jan","feb","mar",...]
short_months_pattern = re.compile(r'('+r'|'.join(short_months)+r')',
re.IGNORECASE) # (jan|feb|mar|...)
result = short_months_pattern.search(date)
if result is not None:
try:
month_nb = short_months.index(result.group().lower()) + 1
return get_i18n_month_name(month_nb, "short", ln)
except:
pass
#Look for month specified as number in the form 2004/03/08 or 17 02 2004
#(always take second group of 2 or 1 digits separated by spaces or - etc.)
month_pattern = re.compile(r'\d([\s]|[-/.,])+(?P<month>(\d){1,2})([\s]|[-/.,])')
result = month_pattern.search(date)
if result is not None:
try:
month_nb = int(result.group("month"))
return get_i18n_month_name(month_nb, "short", ln)
except:
pass
#Look for textual month like "Jan" or "sep" or "November" or "novem"
#Look for the month in each language
#Retrieve ['en', 'fr', 'de', ...]
language_list_short = [x[0]
for x in language_list_long()]
for lang in language_list_short: #For each language
#Look for short months. Also matches for long months
short_months = [get_i18n_month_name(month, "short", lang).lower()
for month in range(1, 13)] # ["jan","feb","mar",...]
short_months_pattern = re.compile(r'('+r'|'.join(short_months)+r')',
re.IGNORECASE) # (jan|feb|mar|...)
result = short_months_pattern.search(date)
if result is not None:
try:
month_nb = short_months.index(result.group().lower()) + 1
return get_i18n_month_name(month_nb, "short", ln)
except:
pass
return default
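The numeric fallback in `get_month()` above always takes the *second* run of one or two digits, on the assumption that dates like `2004/03/08` or `17 02 2004` put the month in the middle. A sketch of that heuristic with the same regular expression (the wrapper name is illustrative):

```python
import re

# Second group of 1-2 digits, delimited by whitespace or -/., as in get_month()
MONTH_NB_PATTERN = re.compile(r'\d([\s]|[-/.,])+(?P<month>(\d){1,2})([\s]|[-/.,])')

def guess_month_number(date):
    """Return the month number found in a numeric date string, or None."""
    result = MONTH_NB_PATTERN.search(date)
    if result is not None:
        return int(result.group("month"))
    return None
```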
diff --git a/modules/bibformat/lib/elements/bfe_meta.py b/modules/bibformat/lib/elements/bfe_meta.py
index 0ca0f144c..ab9ea6f46 100644
--- a/modules/bibformat/lib/elements/bfe_meta.py
+++ b/modules/bibformat/lib/elements/bfe_meta.py
@@ -1,117 +1,117 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat element - meta"""
__revision__ = "$Id$"
import cgi
from invenio.bibformat_elements.bfe_server_info import format_element as server_info
from invenio.bibformat_elements.bfe_client_info import format_element as client_info
from invenio.htmlutils import create_tag
-from invenio.bibindex_engine import get_field_tags
+from invenio.bibindex_engine_utils import get_field_tags
from invenio.config import CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR, CFG_WEBSEARCH_ENABLE_OPENGRAPH
def format_element(bfo, name, tag_name='', tag='', kb='', kb_default_output='', var='', protocol='googlescholar'):
"""Prints a custom field in a way suitable to be used in HTML META
tags. In particular conforms to Google Scholar harvesting protocol as
defined http://scholar.google.com/intl/en/scholar/inclusion.html and
Open Graph http://ogp.me/
@param tag_name: the name, from tag table, of the field to be exported
looks initially for names prefixed by "meta-"<tag_name>
then looks for exact name, then falls through to "tag"
@param tag: the MARC tag to be exported (only if not defined by tag_name)
@param name: name to be displayed in the meta headers, labelling this value.
@param kb: a knowledge base through which to process the retrieved value if necessary.
@param kb_default_output: when a '<code>kb</code>' is specified and no match for the value is found, what shall we
return? Either return the given parameter or specify "{value}" to return the retrieved
value before processing through kb.
@param var: the name of a variable to output instead of field from metadata.
Allowed values are those supported by bfe_server_info and
bfe_client_info. Overrides <code>name</code> and <code>tag_name</code>
@param protocol: the protocol this tag is aimed at. Can be used to switch on/off support for a given "protocol". Can take values among 'googlescholar', 'opengraph'
@see: bfe_server_info.py, bfe_client_info.py
"""
if protocol == 'googlescholar' and not CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR:
return ""
elif protocol == 'opengraph' and not CFG_WEBSEARCH_ENABLE_OPENGRAPH:
return ""
tags = []
if var:
# delegate to bfe_server_info or bfe_client_info:
value = server_info(bfo, var)
if value.startswith("Unknown variable: "):
# Oops variable was not defined there
value = client_info(bfo, var)
return not value.startswith("Unknown variable: ") and \
create_metatag(name=name, content=cgi.escape(value, True)) \
or ""
elif tag_name:
# First check for special meta named tags
tags = get_field_tags("meta-" + tag_name)
if not tags:
# then check for regular tags
tags = get_field_tags(tag_name)
if not tags and tag:
# fall back to explicit marc tag
tags = [tag]
if not tags:
return ''
out = []
values = [bfo.fields(marctag, escape=9) for marctag in tags]
for value in values:
if isinstance(value, list):
for val in value:
if isinstance(val, dict):
out.extend(val.values())
else:
out.append(val)
elif isinstance(value, dict):
out.extend(value.values())
else:
out.append(value)
out = dict(zip(out, len(out)*[''])).keys() # Remove duplicates
if name == 'citation_date':
for idx in range(len(out)):
out[idx] = out[idx].replace('-', '/')
if kb:
if kb_default_output == "{value}":
out = [bfo.kb(kb, value, value) for value in out]
else:
out = [bfo.kb(kb, value, kb_default_output) for value in out]
return '\n'.join([create_metatag(name=name, content=value) for value in out])
def create_metatag(name, content):
"""
Wraps create_tag
"""
if name.startswith("og:"):
return create_tag('meta', property=name, content=content)
else:
return create_tag('meta', name=name, content=content)
def escape_values(bfo):
"""
Called by BibFormat in order to check if output of this element
should be escaped.
"""
return 0
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/bibformat/lib/elements/bfe_publisher.py
index 3a5f78261..e0a50f712 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/bibformat/lib/elements/bfe_publisher.py
@@ -1,33 +1,54 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibFormat element - Prints publisher name
"""
__revision__ = "$Id$"
+from invenio.config import CFG_SITE_URL
+
+from invenio.bibauthority_engine import get_low_level_recIDs_from_control_no
+
def format_element(bfo):
"""
Prints the publisher name
@see: place.py, date.py, reprints.py, imprint.py, pagination.py
"""
publisher = bfo.field('260__b')
+ control_no = bfo.field('260__0')
if publisher != "sine nomine":
+ if control_no:
+ recIDs = get_low_level_recIDs_from_control_no(control_no)
+ if len(recIDs):
+ publisher = '<a href="' + CFG_SITE_URL + '/record/' + \
+ str(recIDs[0]) + \
+ '?ln=' + bfo.lang + \
+ '">' + publisher + '</a>'
return publisher
+
+
+def escape_values(bfo):
+ """
+ Called by BibFormat in order to check if output of this element
+ should be escaped.
+ """
+ return 0
+
diff --git a/modules/bibindex/bin/bibstat.in b/modules/bibindex/bin/bibstat.in
index 6bdb9226d..87fbe7f2c 100644
--- a/modules/bibindex/bin/bibstat.in
+++ b/modules/bibindex/bin/bibstat.in
@@ -1,213 +1,211 @@
#!@PYTHON@
## -*- mode: python; coding: utf-8; -*-
##
## This file is part of Invenio.
-## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibStat reports some interesting numbers on the bibliographic record set.
"""
__revision__ = "$Id$"
## import interesting modules:
try:
import sys
from invenio.flaskshell import *
from invenio.dbquery import run_sql, get_table_status_info
- from invenio.dbquery import CFG_DATABASE_HOST,
+ from invenio.dbquery import CFG_DATABASE_HOST, \
CFG_DATABASE_PORT, \
CFG_DATABASE_NAME
import getopt
import time
except ImportError, e:
print "Error: %s" % e
import sys
sys.exit(1)
def report_table_status(tablename):
"""Report stats for the table TABLENAME. If TABLENAME does not
exist, return an empty string.
"""
out = ""
table_info = get_table_status_info(tablename)
if table_info:
out = "%14s %17d %17d %17d" % (table_info['Name'],
table_info['Rows'],
table_info['Data_length'],
table_info['Max_data_length']
)
return out
def report_definitions_of_physical_tags():
"""
Report definitions of physical MARC tags.
"""
print "### 1 - PHYSICAL TAG DEFINITIONS"
print
print "# MARC tag ... description"
res = run_sql('SELECT id,value,name FROM tag ORDER BY value')
for row in res:
(dummytagid, tagvalue, tagname) = row
print "%s ... %s" % (tagvalue, tagname,)
def report_definitions_of_logical_fields():
"""
Report definitions of logical fields.
"""
print
print "### 2 - LOGICAL FIELD DEFINITIONS"
print
print "# logical field: associated physical tags",
res = run_sql('SELECT id,name,code FROM field ORDER BY code')
for row in res:
(fieldid, dummyfieldname, fieldcode) = row
print
print "%s:" % (fieldcode,),
res2 = run_sql("""SELECT value FROM tag, field_tag
WHERE id_field=%s AND id_tag=id
""", (fieldid,))
for row2 in res2:
tag = row2[0]
print tag,
print
def report_definitions_of_indexes():
"""
Report definitions of indexes.
"""
print
print "### 3 - INDEX DEFINITIONS"
print
print "# index (stemming): associated logical fields",
res = run_sql("""SELECT id,name,stemming_language FROM idxINDEX
ORDER BY name""")
for row in res:
(indexid, indexname, indexstem) = row
if indexstem:
indexname += ' (%s)' % indexstem
print
print "%s:" % (indexname,),
res2 = run_sql("""SELECT code FROM field, idxINDEX_field
WHERE id_idxINDEX=%s AND id_field=id
""", (indexid,))
for row2 in res2:
code = row2[0]
print code,
print
def report_on_all_bibliographic_tables():
"""Report stats for all the interesting bibliographic tables."""
print
print "### 4 - TABLE SPACE AND SIZE INFO"
print ''
print "# %12s %17s %17s %17s" % ("TABLE", "ROWS", "DATA SIZE", "INDEX SIZE")
for i in range(0, 10):
for j in range(0, 10):
print report_table_status("bib%1d%1dx" % (i, j))
print report_table_status("bibrec_bib%1d%1dx" % (i, j))
for i in range(0, 11):
print report_table_status("idxWORD%02dF" % i)
print report_table_status("idxWORD%02dR" % i)
for i in range(0, 11):
print report_table_status("idxPHRASE%02dF" % i)
print report_table_status("idxPHRASE%02dR" % i)
return
def report_tag_usage():
"""Analyze bibxxx tables and report info on usage of various tags."""
print ''
print "### 5 - TAG USAGE INFO"
print ''
print "# TAG NB_RECORDS\t# recID1 recID2 ... recID9 (example records)"
for i in range(0, 10):
for j in range(0, 10):
bibxxx = "bib%1d%1dx" % (i, j)
bibrec_bibxxx = 'bibrec_' + bibxxx
# detect all the various tags in use:
res = run_sql("SELECT DISTINCT(tag) FROM %s" % (bibxxx,))
for row in res:
tag = row[0]
# detect how many records have this tag in use:
res_usage = run_sql("""SELECT DISTINCT(b.id) FROM bibrec AS b,
%s AS bb, %s AS bx
WHERE b.id=bb.id_bibrec
AND bb.id_bibxxx=bx.id
AND bx.tag=%%s
""" % (bibrec_bibxxx, bibxxx),
(tag,))
# print results
print tag, (8-len(tag))*' ', len(res_usage), \
'\t\t', '#', " ".join([str(row[0]) for row in
res_usage[:9]])
def report_header():
"""
Start reporting.
"""
print '### BIBSTAT REPORT FOR DB %s:%s.%s RUN AT %s' % (CFG_DATABASE_HOST,
CFG_DATABASE_PORT,
CFG_DATABASE_NAME,
time.asctime())
print ''
def report_footer():
"""
Stop reporting.
"""
print
print
print '### END OF BIBSTAT REPORT'
def usage(exitcode=1, msg=""):
"""Prints usage info."""
if msg:
sys.stderr.write("Error: %s.\n" % msg)
sys.stderr.write("Usage: %s [options]\n" % sys.argv[0])
sys.stderr.write("General options:\n")
sys.stderr.write(" -h, --help \t\t Print this help.\n")
sys.stderr.write(" -V, --version \t\t Print version information.\n")
sys.exit(exitcode)
def main():
"""Report stats on the Invenio bibliographic tables."""
try:
opts, dummyargs = getopt.getopt(sys.argv[1:], "hV", ["help", "version"])
except getopt.GetoptError, err:
usage(1, err)
if opts:
for opt in opts:
if opt[0] in ["-h", "--help"]:
usage(0)
elif opt[0] in ["-V", "--version"]:
print __revision__
sys.exit(0)
else:
usage(1)
else:
report_header()
report_definitions_of_physical_tags()
report_definitions_of_logical_fields()
report_definitions_of_indexes()
report_on_all_bibliographic_tables()
report_tag_usage()
report_footer()
if __name__ == "__main__":
main()
-
-
diff --git a/modules/bibindex/doc/admin/bibindex-admin-guide.webdoc b/modules/bibindex/doc/admin/bibindex-admin-guide.webdoc
index 138bbf342..91e4d3515 100644
--- a/modules/bibindex/doc/admin/bibindex-admin-guide.webdoc
+++ b/modules/bibindex/doc/admin/bibindex-admin-guide.webdoc
@@ -1,332 +1,332 @@
## -*- mode: html; coding: utf-8; -*-
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
<!-- WebDoc-Page-Title: BibIndex Admin Guide -->
<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/admin<lang:link/>">_(Admin Area)_</a> -->
<!-- WebDoc-Page-Revision: $Id$ -->
<p><table class="errorbox">
<thead>
<tr>
<th class="errorboxheader">
WARNING: BIBINDEX ADMIN GUIDE IS UNDER DEVELOPMENT
</th>
</tr>
</thead>
<tbody>
<tr>
<td class="errorboxbody">
BibIndex Admin Guide is not yet completed. Most of admin-level
functionality for BibIndex exists only in commandline mode. We
are in the process of developing both the guide as well as the
web admin interface. If you are interested in seeing some
specific things implemented with high priority, please contact us
at <CFG_SITE_SUPPORT_EMAIL>. Thanks for your interest!
</td>
</tr>
</tbody>
</table></p>
<h2>Contents</h2>
<strong>1.<a href="#1">Overview</a></strong><br/>
<strong>2. <a href="#2">Configure Metadata Tags and Fields</a></strong><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.1 <a href="#2.1">Configure Physical MARC Tags</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2.2 <a href="#2.2">Configure Logical Fields</a><br/>
<strong>3. <a href="#3">Configure Word/Phrase Indexes</a></strong><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.1 <a href="#3.1">Define New Index</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.2 <a href="#3.2">Configure Word-Breaking Procedure</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.3 <a href="#3.3">Configure Stopwords List</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.4 <a href="#3.4">Configure Stemming</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.5 <a href="#3.5">Configure Word Length</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.6 <a href="#3.6">Configure Removal of HTML Code</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.7 <a href="#3.7">Configure Accent Stripping</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.8 <a href="#3.8">Configure Fulltext Indexing</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3.8.1 <a href="#3.8.1">Configure Solr Fulltext Indexing</a><br/>
<strong>4. <a href="#4">Run BibIndex Daemon</a></strong><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.1 <a href="#4.1">Run BibIndex daemon</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.2 <a href="#4.2">Checking and repairing indexes</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.3 <a href="#4.3">Reindexing</a><br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4.4 <a href="#4.4">Solr fulltext indexing</a><br/>
<a name="1"></a><h2>1. Overview</h2>
<a name="2"></a><h2>2. Configure Metadata Tags and Fields</h2>
<a name="2.1"></a><h3>2.1 Configure Physical MARC Tags</h3>
<a name="2.2"></a><h3>2.2 Configure Logical Fields</h3>
<a name="3"></a><h2>3. Configure Word/Phrase Indexes</h2>
<a name="3.1"></a><h3>3.1 Define New Index</h3>
<p>To define a new index you must first give the index an internal
name. An empty index is then created by preparing the database tables.</p>
<p>Before the index can be used for searching, the fields that should be
included in the index must be selected.</p>
<p>When you want to fill the index based on the selected fields, you can
schedule the update by running <b>bibindex -w indexname</b> together
with any other desired parameters.</p>
<a name="3.2"></a><h3>3.2 Configure Word-Breaking Procedure</h3>
<p>The word-breaking procedure can be configured by changing
<b>CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS</b> and
<b>CFG_BIBINDEX_CHARS_PUNCTUATION</b> in the general config file.</p>
<p>How the words are broken up defines what is added to the index. Should
only "director-general" be added, or should "director", "general" and
"director-general" be added? The index can vary between 300 000 and 3
000 000 terms based the policy for breaking words.</p>
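A toy illustration of the choice described above. The separator set here is hypothetical; Invenio takes the real one from CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS in the general config file:

```python
import re

# Hypothetical separator set for demonstration only; the real one comes
# from CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS in invenio.conf.
re_separators = re.compile(r'[\s\-/]+')

def break_words(phrase):
    """Index both the broken-up parts and the unbroken compound term."""
    parts = [t for t in re_separators.split(phrase) if t]
    return parts + [phrase]
```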
<a name="3.3"></a><h3>3.3 Configure Stopwords List</h3>
<p>BibIndex supports stopword removal: words that appear in a given
stopword list are not added to the index. Stopword removal makes the index
smaller by removing frequently used words.</p>
<p>The stopword list to use can be configured in the
general config file by changing the value of the variable
CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE. If no stopword list should be
used, set the value to 0.</p>
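<p>For example, to disable stopword removal entirely, one would set in the
general config file (a minimal illustration):</p>
<blockquote>
<pre>
CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE = 0
</pre>
</blockquote>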
<a name="3.4"></a><h3>3.4 Configure stemming</h3>
<p>The BibIndex indexer supports stemming, i.e. removing the endings of
words, thus creating a smaller index. For example, using English, the
word "information" will be stemmed to "inform"; or "looking", "looks",
and "looked" will all be stemmed to "look", thus giving more hits to each
word.</p>
<p>Currently you can configure the stemming language on a per-index basis. All
searches referring to a stemmed index will also be stemmed based on the same
language.</p>
<a name="3.5"></a><h3>3.5 Configure Word Length</h3>
<p>By setting the value of <b>CFG_BIBINDEX_MIN_WORD_LENGTH</b> in the
general config file to a value higher than 0, only words longer than this
number of characters will be added to the index.</p>
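<p>For example, to index only words of four characters or more (the value
here is illustrative):</p>
<blockquote>
<pre>
CFG_BIBINDEX_MIN_WORD_LENGTH = 3
</pre>
</blockquote>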
<a name="3.6"></a><h3>3.6 Configure Removal of HTML and LaTeX Code</h3>
-<p>By setting the value of <b>CFG_BIBINDEX_REMOVE_HTML_MARKUP</b> in
-the general config file, the indexer may try to remove all HTML code
+<p>If you set the <b>Remove HTML Markup</b> parameter in the admin interface
+to 'Yes' the indexer will try to remove all HTML code
from documents before indexing, and index only the text left. (HTML
code is defined as everything between '&lt;' and '>' in a text.)</p>
-<p>By setting the value of <b>CFG_BIBINDEX_REMOVE_LATEX_MARKUP</b> in
-the general config file, the indexer may try to remove all LaTeX code
+<p>If you set the <b>Remove LATEX Markup</b> parameter in the admin interface
+to 'Yes', the indexer will try to remove all LaTeX code
from documents before indexing, and index only the text left. (LaTeX
code is defined as everything between '\command{' and '}' in a text, or
'{\command ' and '}').</p>
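A rough, simplified sketch of the stripping rule just described. This is not Invenio's actual washer code; the patterns and the function name are illustrative only:

```python
import re

# Illustrative patterns only: drop \command{...} and {\command ...}
# wrappers while keeping the wrapped text, per the definition above.
re_latex_cmd = re.compile(r'\\[a-zA-Z]+\{([^}]*)\}')
re_latex_grp = re.compile(r'\{\\[a-zA-Z]+\s+([^}]*)\}')

def strip_latex_markup(text):
    """Remove simple LaTeX markup, keeping the enclosed text."""
    text = re_latex_cmd.sub(r'\1', text)
    return re_latex_grp.sub(r'\1', text)
```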
<a name="3.7"></a><h3>3.7 Configure Accent Stripping</h3>
<a name="3.8"></a><h3>3.8 Configure Fulltext Indexing</h3>
<p>The metadata tags are usually indexed by their content. There are
special cases however, such as fulltext indexing. In this case
the tag contains a URL to the fulltext material; we would like to
fetch this material and index the words found in it rather than
in the metadata itself. This is possible via a special tag assignment
through the <code>tagToWordsFunctions</code> variable.</p>
<p>The default setup is configured so that if the indexer sees
that it has to index tag <code>8564_u</code>, it switches into the
fulltext indexing mode described above. It can index locally stored
files or even fetch them from external URLs, depending on the value of
the <b>CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY</b> configuration
variable. When fetching files from remote URLs and landing on a
splash page (an intermediate page before the fulltext file itself),
it can find and follow further links to fulltext files.</p>
<p>The default setup also differentiates between metadata and fulltext
indexing, so that the <code>any field</code> index processes only
metadata, not fulltext. If you want to have the fulltext indexed
together with the metadata, so that both are searched by default, you
can go to BibIndex Admin interface and in the Manage Logical Fields
explicitly add the tag <code>8564_u</code> under <code>any
field</code> field.</p>
<a name="3.8.1"></a><h3>3.8.1 Configure Solr Fulltext Indexing</h3>
<p>Solr can be used to index fulltext and to serve fulltext queries. To use it, the following steps are necessary:</p>
<p>First, Solr is installed:</p>
<blockquote>
<pre>
$ cd &lt;invenio source tree&gt;
$ sudo make install-solrutils
</pre>
</blockquote>
<p>Second, <code>invenio-local.conf</code> is amended:</p>
<blockquote>
<pre>
CFG_SOLR_URL = http://localhost:8983/solr
</pre>
</blockquote>
<p>Third, Solr is set to index fulltext:</p>
<blockquote>
<pre>
UPDATE idxINDEX SET indexer='SOLR' WHERE name='fulltext'
</pre>
</blockquote>
<p>Fourth, Solr is started:</p>
<blockquote>
<pre>
&lt;invenio installation&gt;/lib/apache-solr-3.1.0/example$ sudo -u www-data java -jar start.jar
</pre>
</blockquote>
<a name="4"></a><h2>4. Run BibIndex Daemon</h2>
<a name="4.1"></a><h3>4.1 Run BibIndex daemon</h3>
<p>To index your newly created or modified documents, bibindex must be
run periodically via bibsched. This is achieved by the sleep option
(-s) to bibindex. For more information please see <a
href="howto-run">HOWTO Run</a> admin guide.</p>
<a name="4.2"></a><h3>4.2 Checking and repairing indexes</h3>
<p>Upon each indexing run, bibindex checks and reports any
inconsistencies in the indexes. You can also manually check for the
index corruption yourself by using the check (-k) option to bibindex.</p>
<p>If a problem is found during the check, bibindex suggests running the
repair option (-r). During repair, bibindex tries to correct problems
automatically on its own, and usually succeeds.</p>
<p>When the automatic repairing does not succeed, manual
intervention is required. The easiest way to get the indexes back
into shape is to run commands like the following (assuming the problem
is with index ID 1):
<blockquote>
<pre>
$ echo "DELETE FROM idxWORD01R WHERE type='TEMPORARY' or type='FUTURE';" | \
/opt/invenio/bin/dbexec
</pre>
</blockquote>
to leave only the 'CURRENT' reverse index. After that you can rerun
the index checking procedure (-k) and, if successful, continue with
the normal web site operation. However, a full reindexing should be
scheduled for the forthcoming night or weekend.</p>
<a name="4.3"></a><h3>4.3 Reindexing</h3>
<p>Reindexing takes place in the live indexes that are also used for
searching. Therefore end users will immediately notice any change in
the indexes. If you need to reindex your
records from scratch, then the best procedure is the following: reindex
the collection index only (fast operation), recreate collection cache,
and only after that reindex all the other indexes (slow operation).
This will ensure that the records in your system will be at least
browsable while the indexes are being rebuilt. The steps to perform are:</p>
<p>First we reindex the collection index:
<blockquote>
<pre>
$ bibindex --reindex -f50000 -wcollection # reindex the collection index (fast)
$ echo "UPDATE collection SET reclist=NULL;" | \
/opt/invenio/bin/dbexec # clean collection cache
$ webcoll -f # recreate the collection cache
$ bibsched # run the two above-submitted tasks
$ sudo apachectl restart
</pre>
</blockquote></p>
<p>Then we launch (slower) reindexing of the remaining indexes:
<blockquote>
<pre>
$ bibindex --reindex -f50000 # reindex other indexes (slow)
$ webcoll -f
$ bibsched # run the two above-submitted tasks, and put the queue back in auto mode
$ sudo apachectl restart
</pre>
</blockquote></p>
<p>You may optionally want to reindex the word ranking tables:
<blockquote>
<pre>
$ bibsched # wait for all active tasks to finish, and put the queue into manual mode
$ cd invenio-0.92.1 # source dir
$ grep rnkWORD ./modules/miscutil/sql/tabbibclean.sql | \
/opt/invenio/bin/dbexec # truncate rank indexes
$ echo "UPDATE rnkMETHOD SET last_updated='0000-00-00 00:00:00';" | \
/opt/invenio/bin/dbexec # rewind the last ranking time
</pre>
</blockquote></p>
<p>Additionally, if you have been using custom ranking methods with
their own rnkWORD* tables (most probably you have not), you would have to
truncate them too:
<blockquote>
<pre>
&nbsp; # find out which custom ranking indexes were added:
&nbsp; $ echo "SELECT id FROM rnkMETHOD" | /opt/invenio/bin/dbexec
&nbsp; id
&nbsp; 66
&nbsp; 67
&nbsp; [...]
&nbsp;
&nbsp; # for every ranking index id, truncate corresponding ranking tables:
&nbsp; $ echo "TRUNCATE rnkWORD66F" | /opt/invenio/bin/dbexec
&nbsp; $ echo "TRUNCATE rnkWORD66R" | /opt/invenio/bin/dbexec
&nbsp; $ echo "TRUNCATE rnkWORD67F" | /opt/invenio/bin/dbexec
&nbsp; $ echo "TRUNCATE rnkWORD67R" | /opt/invenio/bin/dbexec
</pre>
</blockquote></p>
<p>At last, we launch reindexing of the ranking indexes:
<blockquote>
<pre>
$ bibrank -f50000
$ bibsched # run the three above-submitted tasks, and put the queue back in auto mode
$ sudo apachectl restart
</pre>
</blockquote>
and we are done.</p>
<p>In the future Invenio should ideally run indexing in
invisible tables that would be swapped with the production ones
once the indexing process has successfully finished. For the time being,
if reindexing takes several hours in your installation (e.g. if you
have 1,000,000 records), you may want to mysqlhotcopy your tables and
run reindexing on those copies yourself.</p>
<a name="4.4"></a><h3>4.4 Solr fulltext indexing</h3>
<p>If Solr is used for both fulltext and ranking, only the <code>BibRank</code>
daemon shall run. Since Solr documents can only be overridden and not updated, the
-<code>BibRank</code> daemon also indexes fulltext.</p>
\ No newline at end of file
+<code>BibRank</code> daemon also indexes fulltext.</p>
diff --git a/modules/bibindex/lib/Makefile.am b/modules/bibindex/lib/Makefile.am
index 30663b8f7..905a12f40 100644
--- a/modules/bibindex/lib/Makefile.am
+++ b/modules/bibindex/lib/Makefile.am
@@ -1,29 +1,31 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+SUBDIRS = tokenizers
+
pylibdir = $(libdir)/python/invenio
pylib_DATA = bibindex_engine.py bibindex_engine_config.py bibindex_engine_unit_tests.py \
bibindex_fixtures.py \
bibindex_model.py \
bibindexadminlib.py bibindex_engine_stemmer.py bibindex_engine_stopwords.py \
bibindex_engine_stemmer_unit_tests.py bibindex_engine_stemmer_greek.py \
- bibindex_engine_tokenizer.py bibindex_engine_tokenizer_unit_tests.py \
- bibindexadmin_regression_tests.py bibindex_engine_washer.py
-
+ bibindex_engine_tokenizer_unit_tests.py \
+ bibindexadmin_regression_tests.py bibindex_engine_washer.py \
+ bibindex_regression_tests.py bibindex_engine_utils.py
EXTRA_DIST = $(pylib_DATA)
CLEANFILES = *~ *.tmp *.pyc
diff --git a/modules/bibindex/lib/bibindex_engine.py b/modules/bibindex/lib/bibindex_engine.py
index 715f9822a..233e98638 100644
--- a/modules/bibindex/lib/bibindex_engine.py
+++ b/modules/bibindex/lib/bibindex_engine.py
@@ -1,1733 +1,1955 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibIndex indexing engine implementation. See bibindex executable for entry point.
"""
__revision__ = "$Id$"
-import os
import re
import sys
import time
-import urllib2
-import logging
-
-from invenio.config import \
- CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS, \
- CFG_BIBINDEX_CHARS_PUNCTUATION, \
- CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY, \
- CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES, \
- CFG_BIBINDEX_SYNONYM_KBRS, \
- CFG_CERN_SITE, CFG_INSPIRE_SITE, \
- CFG_BIBINDEX_SPLASH_PAGES, \
- CFG_SOLR_URL, \
- CFG_XAPIAN_ENABLED
+import fnmatch
+from datetime import datetime
+from time import strptime
+
+from invenio.config import CFG_SOLR_URL
from invenio.bibindex_engine_config import CFG_MAX_MYSQL_THREADS, \
- CFG_MYSQL_THREAD_TIMEOUT, \
- CFG_CHECK_MYSQL_THREADS
-from invenio.bibindex_engine_tokenizer import \
- BibIndexFuzzyNameTokenizer, BibIndexExactNameTokenizer, \
- BibIndexPairTokenizer, BibIndexWordTokenizer, \
- BibIndexPhraseTokenizer
-from invenio.bibindexadminlib import get_idx_indexer
-from invenio.bibdocfile import bibdocfile_url_p, \
- bibdocfile_url_to_bibdoc, normalize_format, \
- download_url, guess_format_from_url, BibRecDocs, \
- decompose_bibdocfile_url
-from invenio.websubmit_file_converter import convert_file, get_file_converter_logger
+ CFG_MYSQL_THREAD_TIMEOUT, \
+ CFG_CHECK_MYSQL_THREADS, \
+ CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR, \
+ CFG_BIBINDEX_INDEX_TABLE_TYPE, \
+ CFG_BIBINDEX_ADDING_RECORDS_STARTED_STR, \
+ CFG_BIBINDEX_UPDATE_MESSAGE
+from invenio.bibauthority_config import \
+ CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC, \
+ CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
+from invenio.bibauthority_engine import get_index_strings_by_control_no,\
+ get_control_nos_from_recID
+from invenio.bibindexadminlib import get_idx_remove_html_markup, \
+ get_idx_remove_latex_markup, \
+ get_idx_remove_stopwords
+from invenio.bibdocfile import BibRecDocs
from invenio.search_engine import perform_request_search, \
get_index_stemming_language, \
- get_synonym_terms
+ get_synonym_terms, \
+ search_pattern, \
+ search_unit_in_bibrec
from invenio.dbquery import run_sql, DatabaseError, serialize_via_marshal, \
deserialize_via_marshal, wash_table_column_name
from invenio.bibindex_engine_washer import wash_index_term
from invenio.bibtask import task_init, write_message, get_datetime, \
task_set_option, task_get_option, task_get_task_param, \
task_update_progress, task_sleep_now_if_required
from invenio.intbitset import intbitset
from invenio.errorlib import register_exception
-from invenio.htmlutils import get_links_in_html_page
-from invenio.search_engine_utils import get_fieldvalues
-from invenio.solrutils_bibindex_indexer import solr_add_fulltext, solr_commit
-from invenio.xapianutils_bibindex_indexer import xapian_add
from invenio.bibrankadminlib import get_def_name
+from invenio.solrutils_bibindex_indexer import solr_commit
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import \
+ CFG_JOURNAL_TAG, \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM, \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK
+from invenio.bibindex_engine_utils import load_tokenizers, \
+ get_all_index_names_and_column_values, \
+ get_idx_indexer, \
+ get_index_tags, \
+ get_field_tags, \
+ get_tag_indexes, \
+ get_all_indexes, \
+ get_all_virtual_indexes, \
+ get_index_virtual_indexes, \
+ is_index_virtual, \
+ get_virtual_index_building_blocks, \
+ get_index_id_from_index_name, \
+ get_index_name_from_index_id, \
+ run_sql_drop_silently
+from invenio.search_engine_utils import get_fieldvalues
+from invenio.bibfield import get_record
+from invenio.memoiseutils import Memoise
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
-# FIXME: journal tag and journal pubinfo standard format are defined here:
-if CFG_CERN_SITE:
- CFG_JOURNAL_TAG = '773__%'
- CFG_JOURNAL_PUBINFO_STANDARD_FORM = "773__p 773__v (773__y) 773__c"
- CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*\s\w.*\s\(\d+\)\s\w.*$'
-elif CFG_INSPIRE_SITE:
- CFG_JOURNAL_TAG = '773__%'
- CFG_JOURNAL_PUBINFO_STANDARD_FORM = "773__p,773__v,773__c"
- CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*,\w.*,\w.*$'
-else:
- CFG_JOURNAL_TAG = '909C4%'
- CFG_JOURNAL_PUBINFO_STANDARD_FORM = "909C4p 909C4v (909C4y) 909C4c"
- CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*\s\w.*\s\(\d+\)\s\w.*$'
## precompile some often-used regexp for speed reasons:
re_subfields = re.compile('\$\$\w')
-re_block_punctuation_begin = re.compile(r"^" + CFG_BIBINDEX_CHARS_PUNCTUATION + "+")
-re_block_punctuation_end = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION + "+$")
-re_punctuation = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION)
-re_separators = re.compile(CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS)
re_datetime_shift = re.compile("([-\+]{0,1})([\d]+)([dhms])")
-re_arxiv = re.compile(r'^arxiv:\d\d\d\d\.\d\d\d\d')
+
nb_char_in_line = 50 # for verbose pretty printing
chunksize = 1000 # default size of chunks that the records will be treated by
base_process_size = 4500 # process base size
_last_word_table = None
-fulltext_added = intbitset() # stores ids of records whose fulltexts have been added
+
+_TOKENIZERS = load_tokenizers()
def list_union(list1, list2):
"Returns union of the two lists."
union_dict = {}
for e in list1:
union_dict[e] = 1
for e in list2:
union_dict[e] = 1
return union_dict.keys()
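The dict-as-set idiom used by `list_union` above (kept for pre-2.4 Python compatibility) can be exercised standalone; a minimal sketch:

```python
def list_union(list1, list2):
    # Merge two lists, deduplicating via dictionary keys; the
    # result order is unspecified, as in the original helper.
    union_dict = {}
    for e in list1:
        union_dict[e] = 1
    for e in list2:
        union_dict[e] = 1
    return list(union_dict.keys())
```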
+def list_unique(_list):
+ """Returns a _list with duplicates removed."""
+ _dict = {}
+ for e in _list:
+ _dict[e] = 1
+ return _dict.keys()
+
## safety function for killing slow DB threads:
def kill_sleepy_mysql_threads(max_threads=CFG_MAX_MYSQL_THREADS, thread_timeout=CFG_MYSQL_THREAD_TIMEOUT):
"""Check the number of DB threads and if there are more than
MAX_THREADS of them, kill all threads that are in a sleeping
state for more than THREAD_TIMEOUT seconds. (This is useful
for working around the max_connection problem that appears
during indexation in some not-yet-understood cases.) If some
threads are to be killed, write info into the log file.
"""
res = run_sql("SHOW FULL PROCESSLIST")
if len(res) > max_threads:
for row in res:
r_id, dummy, dummy, dummy, r_command, r_time, dummy, dummy = row
if r_command == "Sleep" and int(r_time) > thread_timeout:
run_sql("KILL %s", (r_id,))
write_message("WARNING: too many DB threads, killing thread %s" % r_id, verbose=1)
return
def get_associated_subfield_value(recID, tag, value, associated_subfield_code):
"""Return list of ASSOCIATED_SUBFIELD_CODE, if exists, for record
RECID and TAG of value VALUE. Used by fulltext indexer only.
Note: TAG must be 6 characters long (tag+ind1+ind2+sfcode),
otherwise an empty string is returned.
FIXME: what if many tag values have the same value but different
associated_subfield_code? Better use bibrecord library for this.
"""
out = ""
if len(tag) != 6:
return out
bibXXx = "bib" + tag[0] + tag[1] + "x"
bibrec_bibXXx = "bibrec_" + bibXXx
query = """SELECT bb.field_number, b.tag, b.value FROM %s AS b, %s AS bb
WHERE bb.id_bibrec=%%s AND bb.id_bibxxx=b.id AND tag LIKE
%%s%%""" % (bibXXx, bibrec_bibXXx)
res = run_sql(query, (recID, tag[:-1]))
field_number = -1
for row in res:
if row[1] == tag and row[2] == value:
field_number = row[0]
if field_number > 0:
for row in res:
if row[0] == field_number and row[1] == tag[:-1] + associated_subfield_code:
out = row[2]
break
return out
-def get_field_tags(field):
- """Returns a list of MARC tags for the field code 'field'.
- Returns empty list in case of error.
- Example: field='author', output=['100__%','700__%']."""
- out = []
- query = """SELECT t.value FROM tag AS t, field_tag AS ft, field AS f
- WHERE f.code=%s AND ft.id_field=f.id AND t.id=ft.id_tag
- ORDER BY ft.score DESC"""
- res = run_sql(query, (field,))
- return [row[0] for row in res]
-
-def get_words_from_journal_tag(recID, tag):
- """
- Special procedure to extract words from journal tags. Joins
- title/volume/year/page into a standard form that is also used for
- citations.
- """
-
- # get all journal tags/subfields:
- bibXXx = "bib" + tag[0] + tag[1] + "x"
- bibrec_bibXXx = "bibrec_" + bibXXx
- query = """SELECT bb.field_number,b.tag,b.value FROM %s AS b, %s AS bb
- WHERE bb.id_bibrec=%%s
- AND bb.id_bibxxx=b.id AND tag LIKE %%s""" % (bibXXx, bibrec_bibXXx)
- res = run_sql(query, (recID, tag))
- # construct journal pubinfo:
- dpubinfos = {}
- for row in res:
- nb_instance, subfield, value = row
- if subfield.endswith("c"):
- # delete pageend if value is pagestart-pageend
- # FIXME: pages may not be in 'c' subfield
- value = value.split('-', 1)[0]
- if dpubinfos.has_key(nb_instance):
- dpubinfos[nb_instance][subfield] = value
- else:
- dpubinfos[nb_instance] = {subfield: value}
- # construct standard format:
- lwords = []
- for dpubinfo in dpubinfos.values():
- # index all journal subfields separately
- for tag, val in dpubinfo.items():
- lwords.append(val)
- # index journal standard format:
- pubinfo = CFG_JOURNAL_PUBINFO_STANDARD_FORM
- for tag, val in dpubinfo.items():
- pubinfo = pubinfo.replace(tag, val)
- if CFG_JOURNAL_TAG[:-1] in pubinfo:
- # some subfield was missing, do nothing
- pass
- else:
- lwords.append(pubinfo)
- # return list of words and pubinfos:
- return lwords
-
-def get_field_count(recID, tags):
- """
- Return number of field instances having TAGS in record RECID.
-
- @param recID: record ID
- @type recID: int
- @param tags: list of tags to count, e.g. ['100__a', '700__a']
- @type tags: list
- @return: number of tags present in record
- @rtype: int
- @note: Works internally via getting field values, which may not be
- very efficient. Could use counts only, or else retrieve stored
- recstruct format of the record and walk through it.
- """
- out = 0
- for tag in tags:
- out += len(get_fieldvalues(recID, tag))
- return out
def get_author_canonical_ids_for_recid(recID):
"""
Return list of author canonical IDs (e.g. `J.Ellis.1') for the
given record. Done by consulting BibAuthorID module.
"""
from invenio.bibauthorid_dbinterface import get_persons_from_recids
lwords = []
res = get_persons_from_recids([recID])
if res is None:
## BibAuthorID is not enabled
return lwords
else:
dpersons, dpersoninfos = res
for aid in dpersoninfos.keys():
author_canonical_id = dpersoninfos[aid].get('canonical_id', '')
if author_canonical_id:
lwords.append(author_canonical_id)
return lwords
-def get_words_from_date_tag(datestring, stemming_language=None):
- """
- Special procedure to index words from tags storing date-like
- information in format YYYY or YYYY-MM or YYYY-MM-DD. Namely, we
- are indexing word-terms YYYY, YYYY-MM, YYYY-MM-DD, but never
- standalone MM or DD.
- """
- out = []
- for dateword in datestring.split():
- # maybe there are whitespaces, so break these too
- out.append(dateword)
- parts = dateword.split('-')
- for nb in range(1, len(parts)):
- out.append("-".join(parts[:nb]))
- return out
-
-def get_words_from_fulltext(url_direct_or_indirect, stemming_language=None):
- """Returns all the words contained in the document specified by
- URL_DIRECT_OR_INDIRECT with the words being split by various
- SRE_SEPARATORS regexp set earlier. If FORCE_FILE_EXTENSION is
- set (e.g. to "pdf", then treat URL_DIRECT_OR_INDIRECT as a PDF
- file. (This is interesting to index Indico for example.) Note
- also that URL_DIRECT_OR_INDIRECT may be either a direct URL to
- the fulltext file or an URL to a setlink-like page body that
- presents the links to be indexed. In the latter case the
- URL_DIRECT_OR_INDIRECT is parsed to extract actual direct URLs
- to fulltext documents, for all knows file extensions as
- specified by global CONV_PROGRAMS config variable.
- """
- write_message("... reading fulltext files from %s started" % url_direct_or_indirect, verbose=2)
- try:
- if bibdocfile_url_p(url_direct_or_indirect):
- write_message("... %s is an internal document" % url_direct_or_indirect, verbose=2)
- bibdoc = bibdocfile_url_to_bibdoc(url_direct_or_indirect)
- indexer = get_idx_indexer('fulltext')
- if indexer != 'native':
- # A document might belong to multiple records
- for rec_link in bibdoc.bibrec_links:
- recid = rec_link["recid"]
- # Adds fulltexts of all files once per records
- if not recid in fulltext_added:
- bibrecdocs = BibRecDocs(recid)
- text = bibrecdocs.get_text()
- if indexer == 'SOLR' and CFG_SOLR_URL:
- solr_add_fulltext(recid, text)
- elif indexer == 'XAPIAN' and CFG_XAPIAN_ENABLED:
- xapian_add(recid, 'fulltext', text)
-
- fulltext_added.add(recid)
- # we are relying on an external information retrieval system
- # to provide full-text indexing, so dispatch text to it and
- # return nothing here:
- return []
- else:
- text = ""
- if hasattr(bibdoc, "get_text"):
- text = bibdoc.get_text()
- return get_words_from_phrase(text, stemming_language)
- else:
- if CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY:
- write_message("... %s is external URL but indexing only local files" % url_direct_or_indirect, verbose=2)
- return []
- write_message("... %s is an external URL" % url_direct_or_indirect, verbose=2)
- urls_to_index = set()
- for splash_re, url_re in CFG_BIBINDEX_SPLASH_PAGES.iteritems():
- if re.match(splash_re, url_direct_or_indirect):
- write_message("... %s is a splash page (%s)" % (url_direct_or_indirect, splash_re), verbose=2)
- html = urllib2.urlopen(url_direct_or_indirect).read()
- urls = get_links_in_html_page(html)
- write_message("... found these URLs in %s splash page: %s" % (url_direct_or_indirect, ", ".join(urls)), verbose=3)
- for url in urls:
- if re.match(url_re, url):
- write_message("... will index %s (matched by %s)" % (url, url_re), verbose=2)
- urls_to_index.add(url)
- if not urls_to_index:
- urls_to_index.add(url_direct_or_indirect)
- write_message("... will extract words from %s" % ', '.join(urls_to_index), verbose=2)
- words = {}
- for url in urls_to_index:
- tmpdoc = download_url(url)
- file_converter_logger = get_file_converter_logger()
- old_logging_level = file_converter_logger.getEffectiveLevel()
- if task_get_task_param("verbose") > 3:
- file_converter_logger.setLevel(logging.DEBUG)
- try:
- try:
- tmptext = convert_file(tmpdoc, output_format='.txt')
- text = open(tmptext).read()
- os.remove(tmptext)
-
- indexer = get_idx_indexer('fulltext')
- if indexer != 'native':
- if indexer == 'SOLR' and CFG_SOLR_URL:
- solr_add_fulltext(None, text) # FIXME: use real record ID
- if indexer == 'XAPIAN' and CFG_XAPIAN_ENABLED:
- #xapian_add(None, 'fulltext', text) # FIXME: use real record ID
- pass
- # we are relying on an external information retrieval system
- # to provide full-text indexing, so dispatch text to it and
- # return nothing here:
- tmpwords = []
- else:
- tmpwords = get_words_from_phrase(text, stemming_language)
- words.update(dict(map(lambda x: (x, 1), tmpwords)))
- except Exception, e:
- message = 'ERROR: it\'s impossible to correctly extract words from %s referenced by %s: %s' % (url, url_direct_or_indirect, e)
- register_exception(prefix=message, alert_admin=True)
- write_message(message, stream=sys.stderr)
- finally:
- os.remove(tmpdoc)
- if task_get_task_param("verbose") > 3:
- file_converter_logger.setLevel(old_logging_level)
- return words.keys()
- except Exception, e:
- message = 'ERROR: it\'s impossible to correctly extract words from %s: %s' % (url_direct_or_indirect, e)
- register_exception(prefix=message, alert_admin=True)
- write_message(message, stream=sys.stderr)
- return []
-
-
-def get_nothing_from_phrase(phrase, stemming_language=None):
- """ A dump implementation of get_words_from_phrase to be used when
- when a tag should not be indexed (such as when trying to extract phrases from
- 8564_u)."""
- return []
def swap_temporary_reindex_tables(index_id, reindex_prefix="tmp_"):
"""Atomically swap reindexed temporary table with the original one.
Delete the now-old one."""
- write_message("Putting new tmp index tables for id %s into production" % index_id)
- run_sql(
- "RENAME TABLE " +
- "idxWORD%02dR TO old_idxWORD%02dR," % (index_id, index_id) +
- "%sidxWORD%02dR TO idxWORD%02dR," % (reindex_prefix, index_id, index_id) +
- "idxWORD%02dF TO old_idxWORD%02dF," % (index_id, index_id) +
- "%sidxWORD%02dF TO idxWORD%02dF," % (reindex_prefix, index_id, index_id) +
- "idxPAIR%02dR TO old_idxPAIR%02dR," % (index_id, index_id) +
- "%sidxPAIR%02dR TO idxPAIR%02dR," % (reindex_prefix, index_id, index_id) +
- "idxPAIR%02dF TO old_idxPAIR%02dF," % (index_id, index_id) +
- "%sidxPAIR%02dF TO idxPAIR%02dF," % (reindex_prefix, index_id, index_id) +
- "idxPHRASE%02dR TO old_idxPHRASE%02dR," % (index_id, index_id) +
- "%sidxPHRASE%02dR TO idxPHRASE%02dR," % (reindex_prefix, index_id, index_id) +
- "idxPHRASE%02dF TO old_idxPHRASE%02dF," % (index_id, index_id) +
- "%sidxPHRASE%02dF TO idxPHRASE%02dF;" % (reindex_prefix, index_id, index_id)
- )
- write_message("Dropping old index tables for id %s" % index_id)
- run_sql("DROP TABLE old_idxWORD%02dR, old_idxWORD%02dF, old_idxPAIR%02dR, old_idxPAIR%02dF, old_idxPHRASE%02dR, old_idxPHRASE%02dF" % (index_id, index_id, index_id, index_id, index_id, index_id)) # kwalitee: disable=sql
+ is_virtual = is_index_virtual(index_id)
+ if is_virtual:
+ write_message("Removing %s index tables for id %s" % (reindex_prefix, index_id))
+ query = """DROP TABLE IF EXISTS %%sidxWORD%02dR, %%sidxWORD%02dF,
+ %%sidxPAIR%02dR, %%sidxPAIR%02dF,
+ %%sidxPHRASE%02dR, %%sidxPHRASE%02dF
+ """ % ((index_id,)*6)
+ query = query % ((reindex_prefix,)*6)
+ run_sql(query)
+ else:
+ write_message("Putting new tmp index tables for id %s into production" % index_id)
+ run_sql(
+ "RENAME TABLE " +
+ "idxWORD%02dR TO old_idxWORD%02dR," % (index_id, index_id) +
+ "%sidxWORD%02dR TO idxWORD%02dR," % (reindex_prefix, index_id, index_id) +
+ "idxWORD%02dF TO old_idxWORD%02dF," % (index_id, index_id) +
+ "%sidxWORD%02dF TO idxWORD%02dF," % (reindex_prefix, index_id, index_id) +
+ "idxPAIR%02dR TO old_idxPAIR%02dR," % (index_id, index_id) +
+ "%sidxPAIR%02dR TO idxPAIR%02dR," % (reindex_prefix, index_id, index_id) +
+ "idxPAIR%02dF TO old_idxPAIR%02dF," % (index_id, index_id) +
+ "%sidxPAIR%02dF TO idxPAIR%02dF," % (reindex_prefix, index_id, index_id) +
+ "idxPHRASE%02dR TO old_idxPHRASE%02dR," % (index_id, index_id) +
+ "%sidxPHRASE%02dR TO idxPHRASE%02dR," % (reindex_prefix, index_id, index_id) +
+ "idxPHRASE%02dF TO old_idxPHRASE%02dF," % (index_id, index_id) +
+ "%sidxPHRASE%02dF TO idxPHRASE%02dF;" % (reindex_prefix, index_id, index_id)
+ )
+ write_message("Dropping old index tables for id %s" % index_id)
+ run_sql_drop_silently("DROP TABLE old_idxWORD%02dR, old_idxWORD%02dF, old_idxPAIR%02dR, old_idxPAIR%02dF, old_idxPHRASE%02dR, old_idxPHRASE%02dF" % (index_id, index_id, index_id, index_id, index_id, index_id)) # kwalitee: disable=sql
+
def init_temporary_reindex_tables(index_id, reindex_prefix="tmp_"):
"""Create reindexing temporary tables."""
write_message("Creating new tmp index tables for id %s" % index_id)
- run_sql("""DROP TABLE IF EXISTS %sidxWORD%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxWORD%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxWORD%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("""DROP TABLE IF EXISTS %sidxWORD%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxWORD%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxWORD%02dR (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("""DROP TABLE IF EXISTS %sidxPAIR%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxPAIR%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxPAIR%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("""DROP TABLE IF EXISTS %sidxPAIR%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxPAIR%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxPAIR%02dR (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("""DROP TABLE IF EXISTS %sidxPHRASE%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxPHRASE%02dF""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxPHRASE%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("""DROP TABLE IF EXISTS %sidxPHRASE%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
+ run_sql_drop_silently("""DROP TABLE IF EXISTS %sidxPHRASE%02dR""" % (wash_table_column_name(reindex_prefix), index_id)) # kwalitee: disable=sql
run_sql("""CREATE TABLE %sidxPHRASE%02dR (
id_bibrec mediumint(9) unsigned NOT NULL default '0',
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % (reindex_prefix, index_id))
- run_sql("UPDATE idxINDEX SET last_updated='0000-00-00 00:00:00' WHERE id=%s", (index_id,))
-def get_fuzzy_authors_from_phrase(phrase, stemming_language=None):
- """
- Return list of fuzzy phrase-tokens suitable for storing into
- author phrase index.
- """
- author_tokenizer = BibIndexFuzzyNameTokenizer()
- return author_tokenizer.tokenize(phrase)
-def get_exact_authors_from_phrase(phrase, stemming_language=None):
- """
- Return list of exact phrase-tokens suitable for storing into
- exact author phrase index.
- """
- author_tokenizer = BibIndexExactNameTokenizer()
- return author_tokenizer.tokenize(phrase)
+def remove_subfields(s):
+ "Removes subfields from string, e.g. 'foo $$c bar' becomes 'foo bar'."
+ return re_subfields.sub(' ', s)
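The `re_subfields` pattern compiled at module level strips MARC `$$x` subfield markers; a self-contained sketch of the behaviour:

```python
import re

# Same pattern as the module-level re_subfields above.
re_subfields = re.compile(r'\$\$\w')

def remove_subfields(s):
    # Replace each subfield marker (e.g. "$$c") with a space; the
    # downstream word tokenizer collapses the extra whitespace.
    return re_subfields.sub(' ', s)
```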
-def get_author_family_name_words_from_phrase(phrase, stemming_language=None):
- """
- Return list of words from author family names, not his/her first
- names. The phrase is assumed to be the full author name. This is
- useful for CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES.
- """
- d_family_names = {}
- # first, treat everything before first comma as surname:
- if ',' in phrase:
- d_family_names[phrase.split(',', 1)[0]] = 1
- # second, try fuzzy author tokenizer to find surname variants:
- for name in get_fuzzy_authors_from_phrase(phrase, stemming_language):
- if ',' in name:
- d_family_names[name.split(',', 1)[0]] = 1
- # now extract words from these surnames:
- d_family_names_words = {}
- for family_name in d_family_names.keys():
- for word in get_words_from_phrase(family_name, stemming_language):
- d_family_names_words[word] = 1
- return d_family_names_words.keys()
-
-def get_words_from_phrase(phrase, stemming_language=None):
- """
- Return a list of words extracted from phrase.
- """
- words_tokenizer = BibIndexWordTokenizer(stemming_language)
- return words_tokenizer.tokenize(phrase)
-def get_phrases_from_phrase(phrase, stemming_language=None):
- """Return list of phrases found in PHRASE. Note that the phrase is
- split into groups depending on the alphanumeric characters and
- punctuation characters definition present in the config file.
- """
- phrase_tokenizer = BibIndexPhraseTokenizer(stemming_language)
- return phrase_tokenizer.tokenize(phrase)
+def get_field_indexes(field):
+ """Returns indexes names and ids corresponding to the given field"""
+ if field[0:3].isdigit():
+ #field is actually a tag
+ return get_tag_indexes(field, virtual=False)
+ else:
+ #future implementation for fields
+ return []
-def get_pairs_from_phrase(phrase, stemming_language=None):
- """
- Return list of oairs extracted from phrase.
- """
- pairs_tokenizer = BibIndexPairTokenizer(stemming_language)
- return pairs_tokenizer.tokenize(phrase)
+get_field_indexes_memoised = Memoise(get_field_indexes)
-def remove_subfields(s):
- "Removes subfields from string, e.g. 'foo $$c bar' becomes 'foo bar'."
- return re_subfields.sub(' ', s)
-def get_index_id_from_index_name(index_name):
- """Returns the words/phrase index id for INDEXNAME.
- Returns empty string in case there is no words table for this index.
- Example: field='author', output=4."""
- out = 0
- query = """SELECT w.id FROM idxINDEX AS w
- WHERE w.name=%s LIMIT 1"""
- res = run_sql(query, (index_name,), 1)
- if res:
- out = res[0][0]
+def get_all_synonym_knowledge_bases():
+ """Returns a dictionary mapping each index name to a (knowledge base
+ name, match type) tuple, for all defined word indexes that have
+ knowledge base information. Returns an empty dictionary if no index has such information.
+ Example: output['global'] = ('INDEX-SYNONYM-TITLE', 'exact'), output['title'] = ('INDEX-SYNONYM-TITLE', 'exact')."""
+ res = get_all_index_names_and_column_values("synonym_kbrs")
+ out = {}
+ for row in res:
+ kb_data = row[1]
+ # ignore empty strings
+ if len(kb_data):
+ out[row[0]] = tuple(kb_data.split(CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR))
return out
-def get_index_name_from_index_id(index_id):
- """Returns the words/phrase index name for INDEXID.
- Returns '' in case there is no words table for this indexid.
- Example: field=9, output='fulltext'."""
- res = run_sql("SELECT name FROM idxINDEX WHERE id=%s", (index_id,))
- if res:
- return res[0][0]
- return ''
-
-def get_index_tags(indexname):
- """Returns the list of tags that are indexed inside INDEXNAME.
- Returns empty list in case there are no tags indexed in this index.
- Note: uses get_field_tags() defined before.
- Example: field='author', output=['100__%', '700__%']."""
- out = []
- query = """SELECT f.code FROM idxINDEX AS w, idxINDEX_field AS wf,
- field AS f WHERE w.name=%s AND w.id=wf.id_idxINDEX
- AND f.id=wf.id_field"""
- res = run_sql(query, (indexname,))
- for row in res:
- out.extend(get_field_tags(row[0]))
+
+def get_index_remove_stopwords(index_id):
+ """Returns value of a remove_stopword field from idxINDEX database table
+ if it's not 'No'. If it's 'No' returns False.
+ Just for consistency with WordTable.
+ @param index_id: id of the index
+ """
+ result = get_idx_remove_stopwords(index_id)
+ if isinstance(result, tuple):
+ return False
+ if result == 'No' or result == '':
+ return False
+ return result
+
+
+def get_index_remove_html_markup(index_id):
+ """ Gets remove_html_markup parameter from database ('Yes' or 'No') and
+ changes it to True, False.
+ Just for consistency with WordTable."""
+ result = get_idx_remove_html_markup(index_id)
+ if result == 'Yes':
+ return True
+ return False
+
+
+def get_index_remove_latex_markup(index_id):
+ """ Gets remove_latex_markup parameter from database ('Yes' or 'No') and
+ changes it to True, False.
+ Just for consistency with WordTable."""
+ result = get_idx_remove_latex_markup(index_id)
+ if result == 'Yes':
+ return True
+ return False
+
+
+def get_index_tokenizer(index_id):
+ """Returns value of a tokenizer field from idxINDEX database table
+ @param index_id: id of the index
+ """
+ query = "SELECT tokenizer FROM idxINDEX WHERE id=%s" % index_id
+ out = None
+ try:
+ res = run_sql(query)
+ if res:
+ out = _TOKENIZERS[res[0][0]]
+ except DatabaseError:
+ write_message("Exception caught for SQL statement: %s; column tokenizer might not exist" % query, sys.stderr)
+ except KeyError:
+ write_message("Exception caught: there is no such tokenizer")
+ out = None
return out
-def get_all_indexes():
- """Returns the list of the names of all defined words indexes.
- Returns empty list in case there are no tags indexed in this index.
- Example: output=['global', 'author']."""
- out = []
- query = """SELECT name FROM idxINDEX"""
+
+def get_last_updated_all_indexes():
+ """Returns last modification date for all defined indexes"""
+ query = """SELECT name, last_updated FROM idxINDEX"""
res = run_sql(query)
- for row in res:
- out.append(row[0])
- return out
+ return res
+
def split_ranges(parse_string):
"""Parse a string and return the list of ranges."""
recIDs = []
ranges = parse_string.split(",")
for arange in ranges:
tmp_recIDs = arange.split("-")
if len(tmp_recIDs) == 1:
recIDs.append([int(tmp_recIDs[0]), int(tmp_recIDs[0])])
else:
if int(tmp_recIDs[0]) > int(tmp_recIDs[1]): # sanity check
tmp = tmp_recIDs[0]
tmp_recIDs[0] = tmp_recIDs[1]
tmp_recIDs[1] = tmp
recIDs.append([int(tmp_recIDs[0]), int(tmp_recIDs[1])])
return recIDs
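`split_ranges` accepts comma-separated values and dash ranges, swapping reversed bounds; an equivalent standalone sketch:

```python
def split_ranges(parse_string):
    # "1-5,7,10-8" -> [[1, 5], [7, 7], [8, 10]]: single IDs become
    # degenerate ranges and reversed bounds are swapped.
    recIDs = []
    for arange in parse_string.split(","):
        parts = arange.split("-")
        if len(parts) == 1:
            recIDs.append([int(parts[0]), int(parts[0])])
        else:
            low, high = int(parts[0]), int(parts[1])
            if low > high:
                low, high = high, low
            recIDs.append([low, high])
    return recIDs
```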
def get_word_tables(tables):
""" Given a list of table names it return a list of tuples
(index_id, index_name, index_tags).
- If tables is empty it returns the whole list."""
+ """
wordTables = []
if tables:
- indexes = tables.split(",")
- for index in indexes:
+ for index in tables:
index_id = get_index_id_from_index_name(index)
if index_id:
wordTables.append((index_id, index, get_index_tags(index)))
else:
write_message("Error: There is no %s words table." % index, sys.stderr)
- else:
- for index in get_all_indexes():
- index_id = get_index_id_from_index_name(index)
- wordTables.append((index_id, index, get_index_tags(index)))
return wordTables
def get_date_range(var):
"Returns the two dates contained as a low,high tuple"
limits = var.split(",")
if len(limits) == 1:
low = get_datetime(limits[0])
return low, None
if len(limits) == 2:
low = get_datetime(limits[0])
high = get_datetime(limits[1])
return low, high
return None, None
def create_range_list(res):
"""Creates a range list from a recID select query result contained
in res. The result is expected to have ascending numerical order."""
if not res:
return []
row = res[0]
if not row:
return []
else:
range_list = [[row, row]]
for row in res[1:]:
row_id = row
if row_id == range_list[-1][1] + 1:
range_list[-1][1] = row_id
else:
range_list.append([row_id, row_id])
return range_list
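`create_range_list` compresses an ascending recID sequence into inclusive ranges; a standalone sketch:

```python
def create_range_list(res):
    # [1, 2, 3, 5] -> [[1, 3], [5, 5]]: consecutive IDs extend the
    # current range, gaps start a new one. Input must be ascending.
    if not res:
        return []
    range_list = [[res[0], res[0]]]
    for row_id in res[1:]:
        if row_id == range_list[-1][1] + 1:
            range_list[-1][1] = row_id
        else:
            range_list.append([row_id, row_id])
    return range_list
```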
def beautify_range_list(range_list):
"""Returns a non overlapping, maximal range list"""
ret_list = []
for new in range_list:
found = 0
for old in ret_list:
if new[0] <= old[0] <= new[1] + 1 or new[0] - 1 <= old[1] <= new[1]:
old[0] = min(old[0], new[0])
old[1] = max(old[1], new[1])
found = 1
break
if not found:
ret_list.append(new)
return ret_list
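`beautify_range_list` merges overlapping or adjacent ranges into a maximal non-overlapping list; the same logic as a standalone sketch:

```python
def beautify_range_list(range_list):
    # Merge each incoming [low, high] pair into the first existing
    # range it overlaps or touches; otherwise keep it as a new range.
    ret_list = []
    for new in range_list:
        for old in ret_list:
            if new[0] <= old[0] <= new[1] + 1 or new[0] - 1 <= old[1] <= new[1]:
                old[0] = min(old[0], new[0])
                old[1] = max(old[1], new[1])
                break
        else:
            ret_list.append(new)
    return ret_list
```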
+
def truncate_index_table(index_name):
"""Properly truncate the given index."""
index_id = get_index_id_from_index_name(index_name)
if index_id:
write_message('Truncating %s index table in order to reindex.' % index_name, verbose=2)
run_sql("UPDATE idxINDEX SET last_updated='0000-00-00 00:00:00' WHERE id=%s", (index_id,))
run_sql("TRUNCATE idxWORD%02dF" % index_id) # kwalitee: disable=sql
run_sql("TRUNCATE idxWORD%02dR" % index_id) # kwalitee: disable=sql
run_sql("TRUNCATE idxPHRASE%02dF" % index_id) # kwalitee: disable=sql
run_sql("TRUNCATE idxPHRASE%02dR" % index_id) # kwalitee: disable=sql
def update_index_last_updated(index_id, starting_time=None):
"""Update last_updated column of the index table in the database.
Puts starting time there so that if the task was interrupted for record download,
the records will be reindexed next time."""
if starting_time is None:
return None
write_message("updating last_updated to %s..." % starting_time, verbose=9)
return run_sql("UPDATE idxINDEX SET last_updated=%s WHERE id=%s",
(starting_time, index_id,))
+
def get_percentage_completed(num_done, num_total):
""" Return a string containing the approx. percentage completed """
percentage_remaining = 100.0 * float(num_done) / float(num_total)
if percentage_remaining:
percentage_display = "(%.1f%%)" % (percentage_remaining,)
else:
percentage_display = ""
return percentage_display
+def _fill_dict_of_indexes_with_empty_sets():
+ """find_affected_records internal function.
+ Creates dict: {'index_name1':set([]), ...}
+ """
+ index_dict = {}
+ tmp_all_indexes = get_all_indexes(virtual=False)
+ for index in tmp_all_indexes:
+ index_dict[index] = set([])
+ return index_dict
+
+def find_affected_records_for_index(indexes=[], recIDs=[], force_all_indexes=False):
+ """
+ Function checks which records need to be changed/reindexed
+ for given index/indexes.
+ Makes use of hstRECORD table where different revisions of record
+ are kept.
+ If parameter force_all_indexes is set function will assign all recIDs to all indexes.
+ @param indexes: names of indexes for reindexation separated by comma
+ @param recIDs: recIDs for reindexation in form: [[range1_down, range1_up],[range2_down, range2_up]..]
+ @param force_all_indexes: should we index all indexes?
+ """
+
+ tmp_dates = dict(get_last_updated_all_indexes())
+ modification_dates = dict([(date, tmp_dates[date] or datetime(1000,1,1,1,1,1)) for date in tmp_dates])
+ tmp_all_indexes = get_all_indexes(virtual=False)
+
+ if not indexes:
+ indexes = tmp_all_indexes
+ else:
+ indexes = indexes.split(",")
+
+ def _should_reindex_for_revision(index_name, revision_date):
+ try:
+ if modification_dates[index_name] < revision_date and index_name in indexes:
+ return True
+ return False
+ except KeyError:
+ return False
+
+ if force_all_indexes:
+ records_for_indexes = {}
+ all_recIDs = []
+ for recIDs_range in recIDs:
+ all_recIDs.extend(range(recIDs_range[0], recIDs_range[1]+1))
+ for index in indexes:
+ records_for_indexes[index] = all_recIDs
+ return records_for_indexes
+
+ indexes_to_change = _fill_dict_of_indexes_with_empty_sets()
+ recIDs_info = []
+ for recIDs_range in recIDs:
+ query = """SELECT id_bibrec,job_date,affected_fields FROM hstRECORD WHERE
+ id_bibrec BETWEEN %s AND %s""" % (recIDs_range[0], recIDs_range[1])
+ res = run_sql(query)
+ if res:
+ recIDs_info.extend(res)
+
+ for recID_info in recIDs_info:
+ recID, revision, affected_fields = recID_info
+ affected_fields = affected_fields.split(",")
+ indexes_for_recID = set()
+ for field in affected_fields:
+ if field:
+ field_indexes = get_field_indexes_memoised(field) or []
+ indexes_names = set([idx[1] for idx in field_indexes])
+ indexes_for_recID |= indexes_names
+ else:
+ #record was inserted, all fields were changed, no specific affected fields
+ indexes_for_recID |= set(tmp_all_indexes)
+ indexes_for_recID_filtered = [ind for ind in indexes_for_recID if _should_reindex_for_revision(ind, revision)]
+ for index in indexes_for_recID_filtered:
+ indexes_to_change[index].add(recID)
+
+ indexes_to_change = dict((k, list(sorted(v))) for k, v in indexes_to_change.iteritems() if v)
+
+ return indexes_to_change
+
+
#def update_text_extraction_date(first_recid, last_recid):
#"""for all the bibdoc connected to the specified recid, set
#the text_extraction_date to the task_starting_time."""
#run_sql("UPDATE bibdoc JOIN bibrec_bibdoc ON id=id_bibdoc SET text_extraction_date=%s WHERE id_bibrec BETWEEN %s AND %s", (task_get_task_param('task_starting_time'), first_recid, last_recid))
class WordTable:
"A class to hold the words table."
- def __init__(self, index_name, index_id, fields_to_index, table_name_pattern, default_get_words_fnc, tag_to_words_fnc_map, wash_index_terms=50, is_fulltext_index=False):
+ def __init__(self, index_name, index_id, fields_to_index, table_name_pattern, wordtable_type, tag_to_tokenizer_map, wash_index_terms=50):
"""Creates words table instance.
@param index_name: the index name
@param index_id: the index integer identifier
@param fields_to_index: a list of fields to index
@param table_name_pattern: i.e. idxWORD%02dF or idxPHRASE%02dF
- @parm default_get_words_fnc: the default function called to extract words from a metadata
- @param tag_to_words_fnc_map: a mapping to specify particular function to
+ @param wordtable_type: type of the wordtable: Words, Pairs, Phrases
+ @param tag_to_tokenizer_map: a mapping to specify particular tokenizer to
extract words from particular metadata (such as 8564_u)
@param wash_index_terms: do we wash index terms, and if yes (when >0),
how many characters do we keep in the index terms; see
max_char_length parameter of wash_index_term()
"""
self.index_name = index_name
self.index_id = index_id
self.tablename = table_name_pattern % index_id
+ self.virtual_tablename_pattern = table_name_pattern[table_name_pattern.find('idx'):-1]
self.humanname = get_def_name('%s' % (str(index_id),), "idxINDEX")[0][1]
self.recIDs_in_mem = []
self.fields_to_index = fields_to_index
self.value = {}
- self.stemming_language = get_index_stemming_language(index_id)
- self.is_fulltext_index = is_fulltext_index
+ try:
+ self.stemming_language = get_index_stemming_language(index_id)
+ except KeyError:
+ self.stemming_language = ''
+ self.remove_stopwords = get_index_remove_stopwords(index_id)
+ self.remove_html_markup = get_index_remove_html_markup(index_id)
+ self.remove_latex_markup = get_index_remove_latex_markup(index_id)
+ self.tokenizer = get_index_tokenizer(index_id)(self.stemming_language,
+ self.remove_stopwords,
+ self.remove_html_markup,
+ self.remove_latex_markup)
+ self.default_tokenizer_function = self.tokenizer.get_tokenizing_function(wordtable_type)
self.wash_index_terms = wash_index_terms
-
- # tagToFunctions mapping. It offers an indirection level necessary for
- # indexing fulltext. The default is get_words_from_phrase
- self.tag_to_words_fnc_map = tag_to_words_fnc_map
- self.default_get_words_fnc = default_get_words_fnc
+ self.is_virtual = is_index_virtual(self.index_id)
+ self.virtual_indexes = get_index_virtual_indexes(self.index_id)
+
+ # tagToTokenizer mapping. It offers an indirection level necessary for
+ # indexing fulltext.
+ self.tag_to_words_fnc_map = {}
+ for k in tag_to_tokenizer_map.keys():
+ special_tokenizer_for_tag = _TOKENIZERS[tag_to_tokenizer_map[k]](self.stemming_language,
+ self.remove_stopwords,
+ self.remove_html_markup,
+ self.remove_latex_markup)
+ special_tokenizer_function = special_tokenizer_for_tag.get_tokenizing_function(wordtable_type)
+ self.tag_to_words_fnc_map[k] = special_tokenizer_function
if self.stemming_language and self.tablename.startswith('idxWORD'):
write_message('%s has stemming enabled, language %s' % (self.tablename, self.stemming_language))
+
+ def turn_off_virtual_indexes(self):
+ self.virtual_indexes = []
+
+ def turn_on_virtual_indexes(self):
+ self.virtual_indexes = get_index_virtual_indexes(self.index_id)
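The table-name arithmetic in `__init__` above is compact; a worked sketch of how the forward, reverse, and virtual-index table names are derived from a pattern like `idxWORD%02dF` (the index ids here are hypothetical):

```python
# Pattern as passed to WordTable for word tables:
table_name_pattern = "idxWORD%02dF"
index_id = 5        # hypothetical physical index id
virtual_id = 3      # hypothetical virtual index id

# Forward table for this index:
tablename = table_name_pattern % index_id
# Reverse table: strip the trailing 'F', append 'R':
reverse_tablename = tablename[:-1] + "R"
# Pattern reused for virtual indexes (everything from 'idx' up to the 'F'):
virtual_tablename_pattern = table_name_pattern[table_name_pattern.find('idx'):-1]
virtual_reverse = virtual_tablename_pattern % virtual_id + "R"
```

So index 5 writes to idxWORD05F/idxWORD05R, and its virtual index 3 is reached through idxWORD03R.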
+
def get_field(self, recID, tag):
"""Returns list of values of the MARC-21 'tag' fields for the
record 'recID'."""
out = []
bibXXx = "bib" + tag[0] + tag[1] + "x"
bibrec_bibXXx = "bibrec_" + bibXXx
query = """SELECT value FROM %s AS b, %s AS bb
WHERE bb.id_bibrec=%%s AND bb.id_bibxxx=b.id
AND tag LIKE %%s""" % (bibXXx, bibrec_bibXXx)
res = run_sql(query, (recID, tag))
for row in res:
out.append(row[0])
return out
def clean(self):
"Cleans the words table."
self.value = {}
def put_into_db(self, mode="normal"):
"""Updates the current words table in the corresponding DB
idxFOO table. Mode 'normal' means normal execution,
mode 'emergency' means words index reverting to old state.
"""
write_message("%s %s wordtable flush started" % (self.tablename, mode))
write_message('...updating %d words into %s started' % \
(len(self.value), self.tablename))
task_update_progress("(%s:%s) flushed %d/%d words" % (self.tablename, self.humanname, 0, len(self.value)))
self.recIDs_in_mem = beautify_range_list(self.recIDs_in_mem)
- if mode == "normal":
- for group in self.recIDs_in_mem:
- query = """UPDATE %sR SET type='TEMPORARY' WHERE id_bibrec
- BETWEEN %%s AND %%s AND type='CURRENT'""" % self.tablename[:-1]
- write_message(query % (group[0], group[1]), verbose=9)
- run_sql(query, (group[0], group[1]))
-
- nb_words_total = len(self.value)
- nb_words_report = int(nb_words_total / 10.0)
- nb_words_done = 0
- for word in self.value.keys():
- self.put_word_into_db(word)
- nb_words_done += 1
- if nb_words_report != 0 and ((nb_words_done % nb_words_report) == 0):
- write_message('......processed %d/%d words' % (nb_words_done, nb_words_total))
- percentage_display = get_percentage_completed(nb_words_done, nb_words_total)
- task_update_progress("(%s:%s) flushed %d/%d words %s" % (self.tablename, self.humanname, nb_words_done, nb_words_total, percentage_display))
- write_message('...updating %d words into %s ended' % \
- (nb_words_total, self.tablename))
-
- write_message('...updating reverse table %sR started' % self.tablename[:-1])
- if mode == "normal":
- for group in self.recIDs_in_mem:
- query = """UPDATE %sR SET type='CURRENT' WHERE id_bibrec
- BETWEEN %%s AND %%s AND type='FUTURE'""" % self.tablename[:-1]
- write_message(query % (group[0], group[1]), verbose=9)
- run_sql(query, (group[0], group[1]))
- query = """DELETE FROM %sR WHERE id_bibrec
- BETWEEN %%s AND %%s AND type='TEMPORARY'""" % self.tablename[:-1]
- write_message(query % (group[0], group[1]), verbose=9)
- run_sql(query, (group[0], group[1]))
- #if self.is_fulltext_index:
- #update_text_extraction_date(group[0], group[1])
- write_message('End of updating wordTable into %s' % self.tablename, verbose=9)
- elif mode == "emergency":
- for group in self.recIDs_in_mem:
- query = """UPDATE %sR SET type='CURRENT' WHERE id_bibrec
- BETWEEN %%s AND %%s AND type='TEMPORARY'""" % self.tablename[:-1]
- write_message(query % (group[0], group[1]), verbose=9)
- run_sql(query, (group[0], group[1]))
- query = """DELETE FROM %sR WHERE id_bibrec
- BETWEEN %%s AND %%s AND type='FUTURE'""" % self.tablename[:-1]
- write_message(query % (group[0], group[1]), verbose=9)
- run_sql(query, (group[0], group[1]))
- write_message('End of emergency flushing wordTable into %s' % self.tablename, verbose=9)
- write_message('...updating reverse table %sR ended' % self.tablename[:-1])
+ all_indexes = [(self.index_id, self.humanname)]
+ if self.virtual_indexes:
+ all_indexes.extend(self.virtual_indexes)
+ for ind_id, ind_name in all_indexes:
+ tab_name = self.tablename[:-1] + "R"
+ if ind_id != self.index_id:
+ tab_name = self.virtual_tablename_pattern % ind_id + "R"
+ if mode == "normal":
+ for group in self.recIDs_in_mem:
+ query = """UPDATE %s SET type='TEMPORARY' WHERE id_bibrec
+ BETWEEN %%s AND %%s AND type='CURRENT'""" % tab_name
+ write_message(query % (group[0], group[1]), verbose=9)
+ run_sql(query, (group[0], group[1]))
+
+ nb_words_total = len(self.value)
+ nb_words_report = int(nb_words_total / 10.0)
+ nb_words_done = 0
+ for word in self.value.keys():
+ self.put_word_into_db(word, ind_id)
+ nb_words_done += 1
+ if nb_words_report != 0 and ((nb_words_done % nb_words_report) == 0):
+ write_message('......processed %d/%d words' % (nb_words_done, nb_words_total))
+ percentage_display = get_percentage_completed(nb_words_done, nb_words_total)
+ task_update_progress("(%s:%s) flushed %d/%d words %s" % (tab_name, ind_name, nb_words_done, nb_words_total, percentage_display))
+ write_message('...updating %d words into %s ended' % \
+ (nb_words_total, tab_name))
+
+ write_message('...updating reverse table %s started' % tab_name)
+ if mode == "normal":
+ for group in self.recIDs_in_mem:
+ query = """UPDATE %s SET type='CURRENT' WHERE id_bibrec
+ BETWEEN %%s AND %%s AND type='FUTURE'""" % tab_name
+ write_message(query % (group[0], group[1]), verbose=9)
+ run_sql(query, (group[0], group[1]))
+ query = """DELETE FROM %s WHERE id_bibrec
+ BETWEEN %%s AND %%s AND type='TEMPORARY'""" % tab_name
+ write_message(query % (group[0], group[1]), verbose=9)
+ run_sql(query, (group[0], group[1]))
+ #if self.is_fulltext_index:
+ #update_text_extraction_date(group[0], group[1])
+ write_message('End of updating wordTable into %s' % tab_name, verbose=9)
+ elif mode == "emergency":
+ for group in self.recIDs_in_mem:
+ query = """UPDATE %s SET type='CURRENT' WHERE id_bibrec
+ BETWEEN %%s AND %%s AND type='TEMPORARY'""" % tab_name
+ write_message(query % (group[0], group[1]), verbose=9)
+ run_sql(query, (group[0], group[1]))
+ query = """DELETE FROM %s WHERE id_bibrec
+ BETWEEN %%s AND %%s AND type='FUTURE'""" % tab_name
+ write_message(query % (group[0], group[1]), verbose=9)
+ run_sql(query, (group[0], group[1]))
+ write_message('End of emergency flushing wordTable into %s' % tab_name, verbose=9)
+ write_message('...updating reverse table %s ended' % tab_name)
self.clean()
self.recIDs_in_mem = []
write_message("%s %s wordtable flush ended" % (self.tablename, mode))
task_update_progress("(%s:%s) flush ended" % (self.tablename, self.humanname))
- def load_old_recIDs(self, word):
+ def load_old_recIDs(self, word, index_id=None):
"""Load existing hitlist for the word from the database index files."""
- query = "SELECT hitlist FROM %s WHERE term=%%s" % self.tablename
+ tab_name = self.tablename
+ if index_id != self.index_id:
+ tab_name = self.virtual_tablename_pattern % index_id + "F"
+ query = "SELECT hitlist FROM %s WHERE term=%%s" % tab_name
res = run_sql(query, (word,))
if res:
return intbitset(res[0][0])
else:
return None
def merge_with_old_recIDs(self, word, set):
"""Merge the system numbers stored in memory (hash of recIDs with value +1 or -1
according to whether to add/delete them) with those stored in the database index
and received in SET, the universe of recIDs for the given word.
Return False in case no change was done to SET, return True in case SET
was changed.
"""
oldset = intbitset(set)
set.update_with_signs(self.value[word])
return set != oldset
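`merge_with_old_recIDs` relies on intbitset's `update_with_signs`; a plain-`set` sketch of the same add/delete-by-sign merge (intbitset itself is Invenio's C-backed bitset and is not modelled here):

```python
def update_with_signs(recids, signs):
    """Apply a {recID: +1/-1} dict to a set of recIDs:
    +1 inserts the recID, -1 removes it."""
    for recid, sign in signs.items():
        if sign > 0:
            recids.add(recid)
        else:
            recids.discard(recid)

def merge_with_old_recids(old_recids, signs):
    """Return (merged set, whether anything changed),
    mirroring merge_with_old_recIDs above."""
    merged = set(old_recids)
    update_with_signs(merged, signs)
    return merged, merged != old_recids
```

For example, merging {1, 2, 3} with {2: -1, 4: +1} yields {1, 3, 4} and reports a change, so the caller knows the hitlist must be flushed.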
- def put_word_into_db(self, word):
+ def put_word_into_db(self, word, index_id):
"""Flush a single word to the database and delete it from memory"""
-
- set = self.load_old_recIDs(word)
+ tab_name = self.tablename
+ if index_id != self.index_id:
+ tab_name = self.virtual_tablename_pattern % index_id + "F"
+ set = self.load_old_recIDs(word, index_id)
if set is not None: # merge the word recIDs found in memory:
if not self.merge_with_old_recIDs(word, set):
# nothing to update:
write_message("......... unchanged hitlist for ``%s''" % word, verbose=9)
pass
else:
# yes there were some new words:
write_message("......... updating hitlist for ``%s''" % word, verbose=9)
- run_sql("UPDATE %s SET hitlist=%%s WHERE term=%%s" % wash_table_column_name(self.tablename), (set.fastdump(), word)) # kwalitee: disable=sql
+ run_sql("UPDATE %s SET hitlist=%%s WHERE term=%%s" % wash_table_column_name(tab_name), (set.fastdump(), word)) # kwalitee: disable=sql
else: # the word is new, will create new set:
write_message("......... inserting hitlist for ``%s''" % word, verbose=9)
set = intbitset(self.value[word].keys())
try:
- run_sql("INSERT INTO %s (term, hitlist) VALUES (%%s, %%s)" % wash_table_column_name(self.tablename), (word, set.fastdump())) # kwalitee: disable=sql
+ run_sql("INSERT INTO %s (term, hitlist) VALUES (%%s, %%s)" % wash_table_column_name(tab_name), (word, set.fastdump())) # kwalitee: disable=sql
except Exception, e:
## We send this exception to the admin only when we are not
## already repairing the problem.
register_exception(prefix="Error when putting the term '%s' into db (hitlist=%s): %s\n" % (repr(word), set, e), alert_admin=(task_get_option('cmd') != 'repair'))
if not set: # never store empty words
- run_sql("DELETE FROM %s WHERE term=%%s" % wash_table_column_name(self.tablename), (word,)) # kwalitee: disable=sql
+ run_sql("DELETE FROM %s WHERE term=%%s" % wash_table_column_name(tab_name), (word,)) # kwalitee: disable=sql
- del self.value[word]
def display(self):
"Displays the word table."
keys = self.value.keys()
keys.sort()
for k in keys:
write_message("%s: %s" % (k, self.value[k]))
def count(self):
"Returns the number of words in the table."
return len(self.value)
def info(self):
"Prints some information on the words table."
write_message("The words table contains %d words." % self.count())
def lookup_words(self, word=""):
"Lookup word from the words table."
if not word:
done = 0
while not done:
try:
word = raw_input("Enter word: ")
done = 1
except (EOFError, KeyboardInterrupt):
return
if self.value.has_key(word):
write_message("The word '%s' is found %d times." \
% (word, len(self.value[word])))
else:
write_message("The word '%s' does not exist in the word file."\
% word)
def add_recIDs(self, recIDs, opt_flush):
"""Fetches records whose id is in the recIDs range list and adds
them to the wordTable. The recIDs range list is of the form:
[[i1_low,i1_high],[i2_low,i2_high], ..., [iN_low,iN_high]].
"""
+ if self.is_virtual:
+ return
global chunksize, _last_word_table
flush_count = 0
records_done = 0
records_to_go = 0
for arange in recIDs:
records_to_go = records_to_go + arange[1] - arange[0] + 1
time_started = time.time() # will measure profile time
for arange in recIDs:
i_low = arange[0]
chunksize_count = 0
while i_low <= arange[1]:
task_sleep_now_if_required()
# calculate chunk group of recIDs and treat it:
i_high = min(i_low + opt_flush - flush_count - 1, arange[1])
i_high = min(i_low + chunksize - chunksize_count - 1, i_high)
try:
self.chk_recID_range(i_low, i_high)
except StandardError:
if self.index_name == 'fulltext' and CFG_SOLR_URL:
solr_commit()
raise
- write_message("%s adding records #%d-#%d started" % \
+ write_message(CFG_BIBINDEX_ADDING_RECORDS_STARTED_STR % \
(self.tablename, i_low, i_high))
if CFG_CHECK_MYSQL_THREADS:
kill_sleepy_mysql_threads()
percentage_display = get_percentage_completed(records_done, records_to_go)
task_update_progress("(%s:%s) adding recs %d-%d %s" % (self.tablename, self.humanname, i_low, i_high, percentage_display))
self.del_recID_range(i_low, i_high)
just_processed = self.add_recID_range(i_low, i_high)
flush_count = flush_count + i_high - i_low + 1
chunksize_count = chunksize_count + i_high - i_low + 1
records_done = records_done + just_processed
- write_message("%s adding records #%d-#%d ended " % \
+ write_message("%s adding records #%d-#%d ended" % \
(self.tablename, i_low, i_high))
-
if chunksize_count >= chunksize:
chunksize_count = 0
# flush if necessary:
if flush_count >= opt_flush:
self.put_into_db()
self.clean()
if self.index_name == 'fulltext' and CFG_SOLR_URL:
solr_commit()
write_message("%s backing up" % (self.tablename))
flush_count = 0
self.log_progress(time_started, records_done, records_to_go)
# iterate:
i_low = i_high + 1
if flush_count > 0:
self.put_into_db()
if self.index_name == 'fulltext' and CFG_SOLR_URL:
solr_commit()
self.log_progress(time_started, records_done, records_to_go)
- def add_recIDs_by_date(self, dates, opt_flush):
- """Add records that were modified between DATES[0] and DATES[1].
- If DATES is not set, then add records that were modified since
- the last update of the index.
- """
- if not dates:
- table_id = self.tablename[-3:-1]
- query = """SELECT last_updated FROM idxINDEX WHERE id=%s"""
- res = run_sql(query, (table_id,))
- if not res:
- return
- if not res[0][0]:
- dates = ("0000-00-00", None)
- else:
- dates = (res[0][0], None)
- if dates[1] is None:
- res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b
- WHERE b.modification_date >= %s""",
- (dates[0],)))
- if self.is_fulltext_index:
- res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id WHERE text_extraction_date <= modification_date AND modification_date >= %s AND status<>'DELETED'""", (dates[0],)))
- elif dates[0] is None:
- res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b
- WHERE b.modification_date <= %s""",
- (dates[1],)))
- if self.is_fulltext_index:
- res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id WHERE text_extraction_date <= modification_date AND modification_date <= %s AND status<>'DELETED'""", (dates[1],)))
- else:
- res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b
- WHERE b.modification_date >= %s AND
- b.modification_date <= %s""",
- (dates[0], dates[1])))
- if self.is_fulltext_index:
- res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id WHERE text_extraction_date <= modification_date AND modification_date >= %s AND modification_date <= %s AND status<>'DELETED'""", (dates[0], dates[1],)))
- alist = create_range_list(list(res))
- if not alist:
- write_message("No new records added. %s is up to date" % self.tablename)
- else:
- self.add_recIDs(alist, opt_flush)
- # special case of author indexes where we need to re-index
- # those records that were affected by changed BibAuthorID
- # attributions:
- if self.index_name in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor'):
- from invenio.bibauthorid_personid_maintenance import get_recids_affected_since
- # dates[1] is ignored, since BibAuthorID API does not offer upper limit search
- alist = create_range_list(get_recids_affected_since(dates[0]))
- if not alist:
- write_message("No new records added by author canonical IDs. %s is up to date" % self.tablename)
- else:
- self.add_recIDs(alist, opt_flush)
-
def add_recID_range(self, recID1, recID2):
"""Add records from RECID1 to RECID2."""
wlist = {}
self.recIDs_in_mem.append([recID1, recID2])
# special case of author indexes where we also add author
# canonical IDs:
if self.index_name in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor'):
for recID in range(recID1, recID2 + 1):
if not wlist.has_key(recID):
wlist[recID] = []
wlist[recID] = list_union(get_author_canonical_ids_for_recid(recID),
wlist[recID])
- # special case of journal index:
- if self.fields_to_index == [CFG_JOURNAL_TAG]:
- # FIXME: quick hack for the journal index; a special
- # treatment where we need to associate more than one
- # subfield into indexed term
- for recID in range(recID1, recID2 + 1):
- new_words = get_words_from_journal_tag(recID, self.fields_to_index[0])
- if not wlist.has_key(recID):
- wlist[recID] = []
- wlist[recID] = list_union(new_words, wlist[recID])
- elif self.index_name in ('authorcount',):
- # FIXME: quick hack for the authorcount index; we have to
- # count the number of author fields only
+
+ if len(self.fields_to_index) == 0:
+ #'no tag' style of indexing - use bibfield instead of directly consulting bibrec
+ tokenizing_function = self.default_tokenizer_function
for recID in range(recID1, recID2 + 1):
- new_words = [str(get_field_count(recID, self.fields_to_index)),]
- if not wlist.has_key(recID):
- wlist[recID] = []
- wlist[recID] = list_union(new_words, wlist[recID])
+ record = get_record(recID)
+ if record:
+ new_words = tokenizing_function(record)
+ if not wlist.has_key(recID):
+ wlist[recID] = []
+ wlist[recID] = list_union(new_words, wlist[recID])
+ # case of special indexes:
+ elif self.index_name in ('authorcount', 'journal'):
+ for tag in self.fields_to_index:
+ tokenizing_function = self.tag_to_words_fnc_map.get(tag, self.default_tokenizer_function)
+ for recID in range(recID1, recID2 + 1):
+ new_words = tokenizing_function(recID)
+ if not wlist.has_key(recID):
+ wlist[recID] = []
+ wlist[recID] = list_union(new_words, wlist[recID])
+ # usual tag-by-tag indexing for the rest:
else:
- # usual tag-by-tag indexing:
for tag in self.fields_to_index:
- get_words_function = self.tag_to_words_fnc_map.get(tag, self.default_get_words_fnc)
- bibXXx = "bib" + tag[0] + tag[1] + "x"
- bibrec_bibXXx = "bibrec_" + bibXXx
- query = """SELECT bb.id_bibrec,b.value FROM %s AS b, %s AS bb
- WHERE bb.id_bibrec BETWEEN %%s AND %%s
- AND bb.id_bibxxx=b.id AND tag LIKE %%s""" % (bibXXx, bibrec_bibXXx)
- res = run_sql(query, (recID1, recID2, tag))
- if tag == '8564_u':
- ## FIXME: Quick hack to be sure that hidden files are
- ## actually indexed.
- res = set(res)
- for recid in xrange(int(recID1), int(recID2) + 1):
- for bibdocfile in BibRecDocs(recid).list_latest_files():
- res.add((recid, bibdocfile.get_url()))
- for row in sorted(res):
+ tokenizing_function = self.tag_to_words_fnc_map.get(tag, self.default_tokenizer_function)
+ phrases = self.get_phrases_for_tokenizing(tag, recID1, recID2)
+ for row in sorted(phrases):
recID, phrase = row
if not wlist.has_key(recID):
wlist[recID] = []
- new_words = get_words_function(phrase, stemming_language=self.stemming_language) # ,self.separators
+ new_words = tokenizing_function(phrase)
wlist[recID] = list_union(new_words, wlist[recID])
+
# lookup index-time synonyms:
- if CFG_BIBINDEX_SYNONYM_KBRS.has_key(self.index_name):
+ synonym_kbrs = get_all_synonym_knowledge_bases()
+ if synonym_kbrs.has_key(self.index_name):
if len(wlist) == 0: return 0
recIDs = wlist.keys()
for recID in recIDs:
for word in wlist[recID]:
word_synonyms = get_synonym_terms(word,
- CFG_BIBINDEX_SYNONYM_KBRS[self.index_name][0],
- CFG_BIBINDEX_SYNONYM_KBRS[self.index_name][1],
+ synonym_kbrs[self.index_name][0],
+ synonym_kbrs[self.index_name][1],
use_memoise=True)
+
if word_synonyms:
wlist[recID] = list_union(word_synonyms, wlist[recID])
# were there some words for these recIDs found?
- if len(wlist) == 0: return 0
recIDs = wlist.keys()
for recID in recIDs:
# was this record marked as deleted?
if "DELETED" in self.get_field(recID, "980__c"):
wlist[recID] = []
write_message("... record %d was declared deleted, removing its word list" % recID, verbose=9)
write_message("... record %d, termlist: %s" % (recID, wlist[recID]), verbose=9)
+ self.index_virtual_indexes_reversed(wlist, recID1, recID2)
+
+ if len(wlist) == 0: return 0
# put words into reverse index table with FUTURE status:
for recID in recIDs:
run_sql("INSERT INTO %sR (id_bibrec,termlist,type) VALUES (%%s,%%s,'FUTURE')" % wash_table_column_name(self.tablename[:-1]), (recID, serialize_via_marshal(wlist[recID]))) # kwalitee: disable=sql
# ... and, for new records, enter the CURRENT status as empty:
try:
run_sql("INSERT INTO %sR (id_bibrec,termlist,type) VALUES (%%s,%%s,'CURRENT')" % wash_table_column_name(self.tablename[:-1]), (recID, serialize_via_marshal([]))) # kwalitee: disable=sql
except DatabaseError:
# okay, it's an already existing record, no problem
pass
# put words into memory word list:
put = self.put
for recID in recIDs:
for w in wlist[recID]:
put(recID, w, 1)
-
return len(recIDs)
+
+ def get_phrases_for_tokenizing(self, tag, first_recID, last_recID):
+ """Gets phrases for later tokenization for a range of records and
+ specific tag.
+ @param tag: MARC tag
+ @param first_recID: first recID from the range of recIDs to index
+ @param last_recID: last recID from the range of recIDs to index
+ """
+ bibXXx = "bib" + tag[0] + tag[1] + "x"
+ bibrec_bibXXx = "bibrec_" + bibXXx
+ query = """SELECT bb.id_bibrec,b.value FROM %s AS b, %s AS bb
+ WHERE bb.id_bibrec BETWEEN %%s AND %%s
+ AND bb.id_bibxxx=b.id AND tag LIKE %%s""" % (bibXXx, bibrec_bibXXx)
+ phrases = run_sql(query, (first_recID, last_recID, tag))
+ if tag == '8564_u':
+ ## FIXME: Quick hack to be sure that hidden files are
+ ## actually indexed.
+ phrases = set(phrases)
+ for recid in xrange(int(first_recID), int(last_recID) + 1):
+ for bibdocfile in BibRecDocs(recid).list_latest_files():
+ phrases.add((recid, bibdocfile.get_url()))
+ #authority records
+ pattern = tag.replace('%', '*')
+ matches = fnmatch.filter(CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC.keys(), pattern)
+ if not len(matches):
+ return phrases
+ phrases = set(phrases)
+ for tag_match in matches:
+ authority_tag = tag_match[0:3] + "__0"
+ for recID in xrange(int(first_recID), int(last_recID) + 1):
+ control_nos = get_fieldvalues(recID, authority_tag)
+ for control_no in control_nos:
+ new_strings = get_index_strings_by_control_no(control_no)
+ for string_value in new_strings:
+ phrases.add((recID, string_value))
+ return phrases
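The authority-record lookup above translates the SQL-LIKE wildcard '%' used in MARC tag masks into fnmatch's '*'; a small sketch with a hypothetical list of controlled bibliographic fields standing in for the keys of CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC:

```python
import fnmatch

# Hypothetical controlled fields (the real set comes from configuration):
controlled_fields = ["100__a", "700__a", "710__a"]

def matching_controlled_tags(tag):
    """Translate the SQL-style '%' wildcard to fnmatch's '*' and
    filter the controlled fields, as done above."""
    pattern = tag.replace('%', '*')
    return fnmatch.filter(controlled_fields, pattern)

def authority_tag_for(tag_match):
    """The control-number subfield for a matched tag:
    first three digits of the tag plus '__0'."""
    return tag_match[0:3] + "__0"
```

A mask like '7%' thus matches both 700__a and 710__a, and each match is turned into its __0 control-number subfield before the extra authority phrases are fetched.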
+
+
+ def index_virtual_indexes_reversed(self, wlist, recID1, recID2):
+ """Inserts indexed words into all virtual indexes connected to
+ this index"""
+ #first: need to take old values from given index to remove
+ #them from virtual indexes
+ query = """SELECT id_bibrec, termlist FROM %sR WHERE id_bibrec
+ BETWEEN %%s AND %%s""" % wash_table_column_name(self.tablename[:-1])
+ old_index_values = run_sql(query, (recID1, recID2))
+ if old_index_values:
+ zipped = zip(*old_index_values)
+ old_index_values = dict(zip(zipped[0], map(deserialize_via_marshal, zipped[1])))
+ else:
+ old_index_values = dict()
+ recIDs = wlist.keys()
+
+ for vindex_id, vindex_name in self.virtual_indexes:
+ #second: need to take old values from virtual index
+ #to have a list of words from which we can remove old values from given index
+ tab_name = self.virtual_tablename_pattern % vindex_id + "R"
+ query = """SELECT id_bibrec, termlist FROM %s WHERE type='CURRENT' AND id_bibrec
+ BETWEEN %%s AND %%s""" % tab_name
+ old_virtual_index_values = run_sql(query, (recID1, recID2))
+ if old_virtual_index_values:
+ zipped = zip(*old_virtual_index_values)
+ old_virtual_index_values = dict(zip(zipped[0], map(deserialize_via_marshal, zipped[1])))
+ else:
+ old_virtual_index_values = dict()
+ for recID in recIDs:
+ to_serialize = list((set(old_virtual_index_values.get(recID) or []) - set(old_index_values.get(recID) or [])) | set(wlist[recID]))
+ run_sql("INSERT INTO %s (id_bibrec,termlist,type) VALUES (%%s,%%s,'FUTURE')" % wash_table_column_name(tab_name), (recID, serialize_via_marshal(to_serialize))) # kwalitee: disable=sql
+ try:
+ run_sql("INSERT INTO %s (id_bibrec,termlist,type) VALUES (%%s,%%s,'CURRENT')" % wash_table_column_name(tab_name), (recID, serialize_via_marshal([]))) # kwalitee: disable=sql
+ except DatabaseError:
+ pass
+ if len(recIDs) != (recID2 - recID1 + 1):
+ #for records in range(recID1, recID2) which weren't updated:
+ #need to prevent them from being deleted by function: 'put_into_db'
+ #which deletes all records with 'CURRENT' status
+ query = """INSERT INTO %s (id_bibrec, termlist, type)
+ SELECT id_bibrec, termlist, 'FUTURE' FROM %s
+ WHERE id_bibrec BETWEEN %%s AND %%s
+ AND type='CURRENT'
+ AND id_bibrec IN (
+ SELECT id_bibrec FROM %s
+ WHERE id_bibrec BETWEEN %%s AND %%s
+ GROUP BY id_bibrec HAVING COUNT(id_bibrec) = 1
+ )
+ """ % ((wash_table_column_name(tab_name),)*3)
+ run_sql(query, (recID1, recID2, recID1, recID2))
+
+
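The per-record set arithmetic in `index_virtual_indexes_reversed` above, `(old_virtual - old_index) | new`, can be shown in isolation; a sketch with hypothetical termlists:

```python
def merge_virtual_termlist(old_virtual_terms, old_index_terms, new_terms):
    """Keep the virtual-index terms contributed by other physical indexes
    (old_virtual - old_index), then add this index's fresh terms."""
    return sorted((set(old_virtual_terms) - set(old_index_terms))
                  | set(new_terms))
```

Here a term contributed only by another physical index survives, while terms that came solely from this index are replaced by its fresh tokenization.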
def log_progress(self, start, done, todo):
"""Calculate progress and store it.
start: start time,
done: records processed,
todo: total number of records"""
time_elapsed = time.time() - start
# consistency check
if time_elapsed == 0 or done > todo:
return
time_recs_per_min = done / (time_elapsed / 60.0)
write_message("%d records took %.1f seconds to complete. (%.1f recs/min)"\
% (done, time_elapsed, time_recs_per_min))
if time_recs_per_min:
write_message("Estimated runtime: %.1f minutes" % \
((todo - done) / time_recs_per_min))
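`log_progress` computes throughput and a remaining-time estimate; the same arithmetic as a standalone helper that returns values instead of writing messages:

```python
def estimate_progress(time_elapsed, done, todo):
    """Records per minute and estimated remaining minutes, with the same
    consistency guard as log_progress above (None when not computable)."""
    if time_elapsed == 0 or done > todo:
        return None
    recs_per_min = done / (time_elapsed / 60.0)
    eta_minutes = (todo - done) / recs_per_min if recs_per_min else None
    return recs_per_min, eta_minutes
```

For instance, 100 records done out of 300 in 60 seconds gives 100 recs/min and an estimated 2 minutes remaining.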
def put(self, recID, word, sign):
"""Adds/deletes a word to the word list."""
try:
if self.wash_index_terms:
word = wash_index_term(word, self.wash_index_terms)
if self.value.has_key(word):
# the word 'word' exist already: update sign
self.value[word][recID] = sign
else:
self.value[word] = {recID: sign}
except:
write_message("Error: Cannot put word %s with sign %d for recID %s." % (word, sign, recID))
def del_recIDs(self, recIDs):
"""Fetches records whose id is in the recIDs range list and deletes
them from the wordTable. The recIDs range list is of the form:
[[i1_low,i1_high],[i2_low,i2_high], ..., [iN_low,iN_high]].
"""
count = 0
for arange in recIDs:
task_sleep_now_if_required()
self.del_recID_range(arange[0], arange[1])
count = count + arange[1] - arange[0]
self.put_into_db()
if self.index_name == 'fulltext' and CFG_SOLR_URL:
solr_commit()
def del_recID_range(self, low, high):
"""Deletes records with 'recID' system number between low
and high from memory words index table."""
write_message("%s fetching existing words for records #%d-#%d started" % \
(self.tablename, low, high), verbose=3)
self.recIDs_in_mem.append([low, high])
query = """SELECT id_bibrec,termlist FROM %sR as bb WHERE bb.id_bibrec
BETWEEN %%s AND %%s""" % (self.tablename[:-1])
recID_rows = run_sql(query, (low, high))
for recID_row in recID_rows:
recID = recID_row[0]
wlist = deserialize_via_marshal(recID_row[1])
for word in wlist:
self.put(recID, word, -1)
write_message("%s fetching existing words for records #%d-#%d ended" % \
(self.tablename, low, high), verbose=3)
+
def report_on_table_consistency(self):
"""Check reverse words index tables (e.g. idxWORD01R) for
interesting states such as 'TEMPORARY' state.
Prints small report (no of words, no of bad words).
"""
# find number of words:
query = """SELECT COUNT(*) FROM %s""" % (self.tablename)
res = run_sql(query, None, 1)
if res:
nb_words = res[0][0]
else:
nb_words = 0
# find number of records:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR""" % (self.tablename[:-1])
res = run_sql(query, None, 1)
if res:
nb_records = res[0][0]
else:
nb_records = 0
# report stats:
write_message("%s contains %d words from %d records" % (self.tablename, nb_words, nb_records))
# find possible bad states in reverse tables:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR WHERE type <> 'CURRENT'""" % (self.tablename[:-1])
res = run_sql(query)
if res:
nb_bad_records = res[0][0]
else:
nb_bad_records = 999999999
if nb_bad_records:
write_message("EMERGENCY: %s needs to repair %d of %d index records" % \
(self.tablename, nb_bad_records, nb_records))
else:
write_message("%s is in consistent state" % (self.tablename))
return nb_bad_records
def repair(self, opt_flush):
"""Repair the whole table"""
# find possible bad states in reverse tables:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR WHERE type <> 'CURRENT'""" % (self.tablename[:-1])
res = run_sql(query, None, 1)
if res:
nb_bad_records = res[0][0]
else:
nb_bad_records = 0
if nb_bad_records == 0:
return
query = """SELECT id_bibrec FROM %sR WHERE type <> 'CURRENT'""" \
% (self.tablename[:-1])
res = intbitset(run_sql(query))
recIDs = create_range_list(list(res))
flush_count = 0
records_done = 0
records_to_go = 0
for arange in recIDs:
records_to_go = records_to_go + arange[1] - arange[0] + 1
time_started = time.time() # will measure profile time
for arange in recIDs:
i_low = arange[0]
chunksize_count = 0
while i_low <= arange[1]:
task_sleep_now_if_required()
# calculate chunk group of recIDs and treat it:
i_high = min(i_low + opt_flush - flush_count - 1, arange[1])
i_high = min(i_low + chunksize - chunksize_count - 1, i_high)
self.fix_recID_range(i_low, i_high)
flush_count = flush_count + i_high - i_low + 1
chunksize_count = chunksize_count + i_high - i_low + 1
records_done = records_done + i_high - i_low + 1
if chunksize_count >= chunksize:
chunksize_count = 0
# flush if necessary:
if flush_count >= opt_flush:
self.put_into_db("emergency")
self.clean()
flush_count = 0
self.log_progress(time_started, records_done, records_to_go)
# iterate:
i_low = i_high + 1
if flush_count > 0:
self.put_into_db("emergency")
self.log_progress(time_started, records_done, records_to_go)
write_message("%s inconsistencies repaired." % self.tablename)
def chk_recID_range(self, low, high):
"""Check if the reverse index table is in proper state"""
## check db
query = """SELECT COUNT(*) FROM %sR WHERE type <> 'CURRENT'
AND id_bibrec BETWEEN %%s AND %%s""" % self.tablename[:-1]
res = run_sql(query, (low, high), 1)
if res[0][0] == 0:
write_message("%s for %d-%d is in consistent state" % (self.tablename, low, high))
return # okay, words table is consistent
## inconsistency detected!
write_message("EMERGENCY: %s inconsistencies detected..." % self.tablename)
error_message = "Errors found. You should check consistency of the " \
"%s - %sR tables.\nRunning 'bibindex --repair' is " \
"recommended." % (self.tablename, self.tablename[:-1])
write_message("EMERGENCY: " + error_message, stream=sys.stderr)
raise StandardError(error_message)
def fix_recID_range(self, low, high):
"""Try to fix reverse index database consistency (e.g. table idxWORD01R) in the low,high doc-id range.
Possible states for a recID follow:
CUR TMP FUT: very bad things have happened: warn!
CUR TMP : very bad things have happened: warn!
CUR FUT: delete FUT (crash before flushing)
CUR : database is ok
TMP FUT: add TMP to memory and del FUT from memory
flush (revert to old state)
TMP : very bad things have happened: warn!
FUT: very bad things have happened: warn!
"""
state = {}
query = "SELECT id_bibrec,type FROM %sR WHERE id_bibrec BETWEEN %%s AND %%s"\
% self.tablename[:-1]
res = run_sql(query, (low, high))
for row in res:
if not state.has_key(row[0]):
state[row[0]] = []
state[row[0]].append(row[1])
ok = 1 # will hold info on whether we will be able to repair
for recID in state.keys():
if not 'TEMPORARY' in state[recID]:
if 'FUTURE' in state[recID]:
if 'CURRENT' not in state[recID]:
write_message("EMERGENCY: Index record %d is in inconsistent state. Can't repair it." % recID)
ok = 0
else:
write_message("EMERGENCY: Inconsistency in index record %d detected" % recID)
query = """DELETE FROM %sR
WHERE id_bibrec=%%s""" % self.tablename[:-1]
run_sql(query, (recID,))
write_message("EMERGENCY: Inconsistency in record %d repaired." % recID)
else:
if 'FUTURE' in state[recID] and not 'CURRENT' in state[recID]:
self.recIDs_in_mem.append([recID, recID])
# Get the words file
query = """SELECT type,termlist FROM %sR
WHERE id_bibrec=%%s""" % self.tablename[:-1]
write_message(query, verbose=9)
res = run_sql(query, (recID,))
for row in res:
wlist = deserialize_via_marshal(row[1])
write_message("Words are %s " % wlist, verbose=9)
if row[0] == 'TEMPORARY':
sign = 1
else:
sign = -1
for word in wlist:
self.put(recID, word, sign)
else:
write_message("EMERGENCY: %s for %d is in inconsistent "
"state. Couldn't repair it." % (self.tablename,
recID), stream=sys.stderr)
ok = 0
if not ok:
error_message = "Unrepairable errors found. You should check " \
"consistency of the %s - %sR tables. Deleting affected " \
"TEMPORARY and FUTURE entries from these tables is " \
"recommended; see the BibIndex Admin Guide." % \
(self.tablename, self.tablename[:-1])
write_message("EMERGENCY: " + error_message, stream=sys.stderr)
raise StandardError(error_message)
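The state table in the fix_recID_range() docstring can be condensed into a small classifier; the sketch below is hypothetical (plain Python, no database access) and only mirrors the decision logic described above:

```python
def classify_recid_state(types):
    """Map the set of row types present for one recID in a reverse table
    to the repair action named in fix_recID_range's docstring.
    Hypothetical illustration, not an Invenio function."""
    s = set(types)
    if s == {'CURRENT'}:
        return 'ok'                 # database is consistent
    if s == {'CURRENT', 'FUTURE'}:
        return 'delete-future'      # crash happened before flushing
    if s == {'TEMPORARY', 'FUTURE'}:
        return 'revert'             # replay TMP in memory, drop FUT
    return 'unrepairable'           # CUR+TMP(+FUT), TMP alone, FUT alone
```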
+
+ def remove_dependent_index(self, id_dependent):
+ """Removes terms found in the dependent index from the virtual index.
+ The function finds the words to remove and then removes them from the
+ forward and reverse tables term by term.
+ @param id_dependent: id of an index which we want to remove from this
+ virtual index
+ """
+ if not self.is_virtual:
+ write_message("Index is not virtual...")
+ return
+
+ global chunksize
+ terms_current_counter = 0
+ terms_done = 0
+ terms_to_go = 0
+
+ for_full_removal, for_partial_removal = self.get_words_to_remove(id_dependent, misc_lookup=False)
+ query = """SELECT t.term, m.hitlist FROM %s%02dF as t INNER JOIN %s%02dF as m
+ ON t.term=m.term""" % (self.tablename[:-3], self.index_id, self.tablename[:-3], id_dependent)
+ terms_and_hitlists = dict(run_sql(query))
+ terms_to_go = len(for_full_removal) + len(for_partial_removal)
+ task_sleep_now_if_required()
+ #full removal
+ for term in for_full_removal:
+ terms_current_counter += 1
+ hitlist = intbitset(terms_and_hitlists[term])
+ for recID in hitlist:
+ self.remove_single_word_reversed_table(term, recID)
+ self.remove_single_word_forward_table(term)
+ if terms_current_counter % chunksize == 0:
+ terms_done += terms_current_counter
+ terms_current_counter = 0
+ write_message("removed %s/%s terms..." % (terms_done, terms_to_go))
+ task_sleep_now_if_required()
+ terms_done += terms_current_counter
+ terms_current_counter = 0
+ #partial removal
+ for term, indexes in for_partial_removal.iteritems():
+ self.value = {}
+ terms_current_counter += 1
+ hitlist = intbitset(terms_and_hitlists[term])
+ if len(indexes) > 0:
+ hitlist -= self._find_common_hitlist(term, id_dependent, indexes)
+ for recID in hitlist:
+ self.remove_single_word_reversed_table(term, recID)
+ if self.value.has_key(term):
+ self.value[term][recID] = -1
+ else:
+ self.value[term] = {recID: -1}
+ if self.value:
+ self.put_word_into_db(term, self.index_id)
+ if terms_current_counter % chunksize == 0:
+ terms_done += terms_current_counter
+ terms_current_counter = 0
+ write_message("removed %s/%s terms..." % (terms_done, terms_to_go))
+ task_sleep_now_if_required()
+
+
+ def remove_single_word_forward_table(self, word):
+ """Immediately and irreversibly removes a word from forward table"""
+ run_sql("""DELETE FROM %s WHERE term=%%s""" % self.tablename, (word, )) # kwalitee: disable=sql
+
+ def remove_single_word_reversed_table(self, word, recID):
+ """Removes a single word from the termlist for the given recID"""
+ old_set = run_sql("""SELECT termlist FROM %sR WHERE id_bibrec=%%s""" % \
+ wash_table_column_name(self.tablename[:-1]), (recID, ))
+ new_set = []
+ if old_set:
+ new_set = deserialize_via_marshal(old_set[0][0])
+ if word in new_set:
+ new_set.remove(word)
+ if new_set:
+ run_sql("""UPDATE %sR SET termlist=%%s
+ WHERE id_bibrec=%%s AND
+ type='CURRENT'""" % \
+ wash_table_column_name(self.tablename[:-1]), (serialize_via_marshal(new_set), recID))
+
+ def _find_common_hitlist(self, term, id_dependent, indexes):
+ """Checks 'indexes' for records that have 'term' indexed
+ and returns the union of their hitlists for that term,
+ provided the term also exists in the index
+ given by the id_dependent parameter"""
+ query = """SELECT m.hitlist FROM idxWORD%02dF as t INNER JOIN idxWORD%02dF as m
+ ON t.term=m.term WHERE t.term=%%s"""
+ common_hitlist = intbitset([])
+ for _id in indexes:
+ res = run_sql(query % (id_dependent, _id), (term, ))
+ if res:
+ common_hitlist |= intbitset(res[0][0])
+ return common_hitlist
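_find_common_hitlist() above realizes, in SQL, a union of the sibling indexes' hitlists for a term that the dependent index also carries. An in-memory sketch using plain sets instead of intbitset and database tables (all names here are hypothetical):

```python
def common_hitlist(term, postings, id_dependent, other_ids):
    """postings maps index_id -> {term: set(recIDs)}.  Returns the union of
    the sibling indexes' hitlists for `term`, provided the term also exists
    in the dependent index -- mirroring the SQL join above.
    Hypothetical in-memory sketch."""
    common = set()
    if term not in postings.get(id_dependent, {}):
        return common
    for _id in other_ids:
        common |= postings.get(_id, {}).get(term, set())
    return common
```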
+
+ def get_words_to_remove(self, id_dependent, misc_lookup=False):
+ """Finds words in dependent index which should be removed from virtual index.
+ Example:
+ Virtual index 'A' consists of 'B' and 'C' dependent indexes and we want to
+ remove 'B' from virtual index 'A'.
+ First we need to check if 'B' and 'C' have common words. If they have
+ we need to be careful not to remove common words from 'A', because we want
+ to remove only words from 'B'.
+ Then we need to check common words for 'A' and 'B'. These are potential words
+ for removal. We need to subtract common words for 'B' and 'C' from common words
+ for 'A' and 'B' to be sure that correct words are removed.
+ @return: (list, dict), list contains terms/words for full removal, dict
+ contains words for partial removal together with ids of indexes in which
+ given term/word also exists
+ """
+
+ query = """SELECT t.term FROM %s%02dF as t INNER JOIN %s%02dF as m
+ ON t.term=m.term"""
+ dependent_indexes = get_virtual_index_building_blocks(self.index_id)
+ other_ids = list(dependent_indexes and zip(*dependent_indexes)[0] or [])
+ if id_dependent in other_ids:
+ other_ids.remove(id_dependent)
+ if not misc_lookup:
+ misc_id = get_index_id_from_index_name('miscellaneous')
+ if misc_id in other_ids:
+ other_ids.remove(misc_id)
+
+ #intersections between dependent indexes
+ left_in_other_indexes = {}
+ for _id in other_ids:
+ intersection = zip(*run_sql(query % (self.tablename[:-3], id_dependent, self.tablename[:-3], _id))) # kwalitee: disable=sql
+ terms = bool(intersection) and intersection[0] or []
+ for term in terms:
+ if left_in_other_indexes.has_key(term):
+ left_in_other_indexes[term].append(_id)
+ else:
+ left_in_other_indexes[term] = [_id]
+
+ #intersection between virtual index and index we want to remove
+ main_intersection = zip(*run_sql(query % (self.tablename[:-3], self.index_id, self.tablename[:-3], id_dependent))) # kwalitee: disable=sql
+ terms_main = set(bool(main_intersection) and main_intersection[0] or [])
+ return list(terms_main - set(left_in_other_indexes.keys())), left_in_other_indexes
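The set arithmetic described in the docstring boils down to one set difference: terms shared by A and B but not shared by B with any sibling index go to full removal, the rest to partial removal. A standalone sketch under that reading (names hypothetical, plain sets instead of SQL intersections):

```python
def split_removal(terms_a_and_b, b_shared_with_siblings):
    """terms_a_and_b: terms common to virtual index A and dependent index B.
    b_shared_with_siblings: {term: [other index ids]} for terms B shares
    with its sibling indexes.  Returns (for_full_removal,
    for_partial_removal), as get_words_to_remove() does.
    Hypothetical sketch."""
    for_full_removal = sorted(set(terms_a_and_b) - set(b_shared_with_siblings))
    return for_full_removal, b_shared_with_siblings
```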
+
+
def main():
"""Main function that constructs the bibtask."""
task_init(authorization_action='runbibindex',
authorization_msg="BibIndex Task Submission",
description="""Examples:
\t%s -a -i 234-250,293,300-500 -u admin@localhost
\t%s -a -w author,fulltext -M 8192 -v3
\t%s -d -m +4d -A on --flush=10000\n""" % ((sys.argv[0],) * 3), help_specific_usage=""" Indexing options:
-a, --add\t\tadd or update words for selected records
-d, --del\t\tdelete words for selected records
-i, --id=low[-high]\t\tselect according to doc recID
-m, --modified=from[,to]\tselect according to modification date
-c, --collection=c1[,c2]\tselect according to collection
-R, --reindex\treindex the selected indexes from scratch
Repairing options:
-k, --check\t\tcheck consistency for all records in the table(s)
-r, --repair\t\ttry to repair all records in the table(s)
Specific options:
-w, --windex=w1[,w2]\tword/phrase indexes to consider (all)
-M, --maxmem=XXX\tmaximum memory usage in kB (no limit)
-f, --flush=NNN\t\tfull consistent table flush after NNN records (10000)
+ -o, --force\tforce indexing of all records for the provided indexes
+ -Z, --remove-dependent-index=w\tname of an index to remove from the virtual index
""",
version=__revision__,
- specific_params=("adi:m:c:w:krRM:f:", [
+ specific_params=("adi:m:c:w:krRM:f:oZ:", [
"add",
"del",
"id=",
"modified=",
"collection=",
"windex=",
"check",
"repair",
"reindex",
"maxmem=",
"flush=",
+ "force",
+ "remove-dependent-index="
]),
task_stop_helper_fnc=task_stop_table_close_fnc,
task_submit_elaborate_specific_parameter_fnc=task_submit_elaborate_specific_parameter,
task_run_fnc=task_run_core,
task_submit_check_options_fnc=task_submit_check_options)
def task_submit_check_options():
"""Check for options compatibility."""
if task_get_option("reindex"):
if task_get_option("cmd") != "add" or task_get_option('id') or task_get_option('collection'):
print >> sys.stderr, "ERROR: You can use --reindex only when adding modified records."
return False
return True
def task_submit_elaborate_specific_parameter(key, value, opts, args):
""" Given the string key, checks its meaning, possibly using the
value. Usually it fills some key in the options dict.
It must return True if it has elaborated the key, False if it doesn't
know that key.
eg:
if key in ['-n', '--number']:
self.options['number'] = value
return True
return False
"""
if key in ("-a", "--add"):
task_set_option("cmd", "add")
if ("-x", "") in opts or ("--del", "") in opts:
raise StandardError("Can not have --add and --del at the same time!")
elif key in ("-k", "--check"):
task_set_option("cmd", "check")
elif key in ("-r", "--repair"):
task_set_option("cmd", "repair")
elif key in ("-d", "--del"):
task_set_option("cmd", "del")
elif key in ("-i", "--id"):
task_set_option('id', task_get_option('id') + split_ranges(value))
elif key in ("-m", "--modified"):
task_set_option("modified", get_date_range(value))
elif key in ("-c", "--collection"):
task_set_option("collection", value)
elif key in ("-R", "--reindex"):
task_set_option("reindex", True)
elif key in ("-w", "--windex"):
task_set_option("windex", value)
elif key in ("-M", "--maxmem"):
task_set_option("maxmem", int(value))
if task_get_option("maxmem") < base_process_size + 1000:
raise StandardError("Memory usage should be higher than %d kB" % \
(base_process_size + 1000))
elif key in ("-f", "--flush"):
task_set_option("flush", int(value))
+ elif key in ("-o", "--force"):
+ task_set_option("force", True)
+ elif key in ("-Z", "--remove-dependent-index",):
+ task_set_option("remove-dependent-index", value)
else:
return False
return True
def task_stop_table_close_fnc():
""" Close tables to STOP. """
global _last_word_table
if _last_word_table:
_last_word_table.put_into_db()
+
+def get_recIDs_by_date_bibliographic(dates, index_name, force_all=False):
+ """ Finds records that were modified between DATES[0] and DATES[1]
+ for given index.
+ If DATES is not set, then finds records that were modified since
+ the last update of the index.
+ @param dates: tuple (date_from, date_to); either bound may be None
+ @param index_name: name of the index to check
+ """
+ index_id = get_index_id_from_index_name(index_name)
+ if not dates:
+ query = """SELECT last_updated FROM idxINDEX WHERE id=%s"""
+ res = run_sql(query, (index_id,))
+ if not res:
+ return set([])
+ if not res[0][0] or force_all:
+ dates = ("0000-00-00", None)
+ else:
+ dates = (res[0][0], None)
+ if dates[1] is None:
+ res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b WHERE b.modification_date >= %s""",
+ (dates[0],)))
+ if index_name == 'fulltext':
+ res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id
+ WHERE text_extraction_date <= modification_date AND
+ modification_date >= %s
+ AND status<>'DELETED'""",
+ (dates[0],)))
+ elif dates[0] is None:
+ res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b WHERE b.modification_date <= %s""",
+ (dates[1],)))
+ if index_name == 'fulltext':
+ res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id
+ WHERE text_extraction_date <= modification_date
+ AND modification_date <= %s
+ AND status<>'DELETED'""",
+ (dates[1],)))
+ else:
+ res = intbitset(run_sql("""SELECT b.id FROM bibrec AS b
+ WHERE b.modification_date >= %s AND
+ b.modification_date <= %s""",
+ (dates[0], dates[1])))
+ if index_name == 'fulltext':
+ res |= intbitset(run_sql("""SELECT id_bibrec FROM bibrec_bibdoc JOIN bibdoc ON id_bibdoc=id
+ WHERE text_extraction_date <= modification_date AND
+ modification_date >= %s AND
+ modification_date <= %s AND
+ status<>'DELETED'""",
+ (dates[0], dates[1],)))
+ # special case of author indexes where we need to re-index
+ # those records that were affected by changed BibAuthorID attributions:
+ if index_name in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor'):
+ from invenio.bibauthorid_personid_maintenance import get_recids_affected_since
+ # dates[1] is ignored, since BibAuthorID API does not offer upper limit search
+ rec_list_author = intbitset(get_recids_affected_since(dates[0]))
+ res = res | rec_list_author
+ return set(res)
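The date-window resolution at the top of this function (idxINDEX.last_updated versus force_all) is worth isolating; a hypothetical helper mirroring just that branch:

```python
def resolve_update_window(last_updated, force_all=False):
    """Decide which modification-date window to index, mirroring the
    `dates` logic of get_recIDs_by_date_bibliographic().  last_updated is
    the idxINDEX.last_updated value (or None).  Hypothetical helper."""
    if not last_updated or force_all:
        return ("0000-00-00", None)   # re-take everything
    return (last_updated, None)       # everything since the last run
```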
+
+
+def get_recIDs_by_date_authority(dates, index_name, force_all=False):
+ """ Finds records that were modified between DATES[0] and DATES[1]
+ for given index.
+ If DATES is not set, then finds records that were modified since
+ the last update of the index.
+ Searches for bibliographic records connected to authority records
+ that have been changed.
+ """
+ index_id = get_index_id_from_index_name(index_name)
+ index_tags = get_index_tags(index_name)
+ if not dates:
+ query = """SELECT last_updated FROM idxINDEX WHERE id=%s"""
+ res = run_sql(query, (index_id,))
+ if not res:
+ return set([])
+ if not res[0][0] or force_all:
+ dates = ("0000-00-00", None)
+ else:
+ dates = (res[0][0], None)
+ res = intbitset()
+ for tag in index_tags:
+ pattern = tag.replace('%', '*')
+ matches = fnmatch.filter(CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC.keys(), pattern)
+ if not len(matches):
+ continue
+ for tag_match in matches:
+ # get the type of authority record associated with this field
+ auth_type = CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC.get(tag_match)
+ # find updated authority records of this type
+ # dates[1] is ignored, needs dates[0] to find res
+ now = datetime.now()
+ auth_recIDs = search_pattern(p='980__a:' + auth_type) \
+ & search_unit_in_bibrec(str(dates[0]), str(now), type='m')
+ # now find dependent bibliographic records
+ for auth_recID in auth_recIDs:
+ # get the control-number identifiers of this authority record
+ control_nos = get_control_nos_from_recID(auth_recID)
+ # there may be multiple control number entries! (the '035' field is repeatable!)
+ for control_no in control_nos:
+ # get the bibrec IDs that refer to AUTHORITY_ID in TAG
+ tag_0 = tag_match[:5] + '0' # possibly do the same for '4' subfields ?
+ fieldvalue = '"' + control_no + '"'
+ res |= search_pattern(p=tag_0 + ':' + fieldvalue)
+ return set(res)
+
+
+def get_not_updated_recIDs(modified_dates, indexes, force_all=False):
+ """Finds not updated recIDs in database for indexes.
+ @param modified_dates: dates between which we look for modified records
+ @type modified_dates: [date_old, date_new]
+ @param indexes: list of indexes
+ @type indexes: list of strings
+ @param force_all: if True, all records will be taken
+ """
+ found_recIDs = set()
+ write_message(CFG_BIBINDEX_UPDATE_MESSAGE)
+ for index in indexes:
+ found_recIDs |= get_recIDs_by_date_bibliographic(modified_dates, index, force_all)
+ found_recIDs |= get_recIDs_by_date_authority(modified_dates, index, force_all)
+ return list(sorted(found_recIDs))
+
+
+def get_recIDs_from_cli():
+ """
+ Gets recID ranges from the CLI for indexing when the user specified
+ the 'id' or 'collection' option, or searches for modified recIDs
+ when neither is specified.
+ """
+ indexes = task_get_option("windex")
+ if not indexes:
+ indexes = get_all_indexes()
+ else:
+ indexes = indexes.split(",")
+ # need to first update idxINDEX table to find proper recIDs for reindexing
+ if task_get_option("reindex"):
+ for index_name in indexes:
+ run_sql("""UPDATE idxINDEX SET last_updated='0000-00-00 00:00:00'
+ WHERE name=%s""", (index_name,))
+
+ if task_get_option("id"):
+ return task_get_option("id")
+ elif task_get_option("collection"):
+ l_of_colls = task_get_option("collection").split(",")
+ recIDs = perform_request_search(c=l_of_colls)
+ recIDs_range = []
+ for recID in recIDs:
+ recIDs_range.append([recID, recID])
+ return recIDs_range
+ elif task_get_option("cmd") == "add":
+ recs = get_not_updated_recIDs(task_get_option("modified"),
+ indexes,
+ task_get_option("force"))
+ recIDs_range = beautify_range_list(create_range_list(recs))
+ return recIDs_range
+ return []
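get_recIDs_from_cli() hands results around as [low, high] range lists; the collapsing that create_range_list() plus beautify_range_list() perform can be sketched as follows (hypothetical standalone version, not the Invenio helpers themselves):

```python
def to_range_list(recids):
    """Collapse an iterable of recIDs into sorted inclusive [low, high]
    ranges, approximating create_range_list() + beautify_range_list().
    Hypothetical sketch of the helpers used above."""
    ranges = []
    for rid in sorted(set(recids)):
        if ranges and rid == ranges[-1][1] + 1:
            ranges[-1][1] = rid        # extend the current run
        else:
            ranges.append([rid, rid])  # start a new run
    return ranges
```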
+
+
+def remove_dependent_index(virtual_indexes, dependent_index):
+ """
+ Removes dependent index from virtual indexes.
+ @param virtual_indexes: names of virtual_indexes separated by comma
+ @type virtual_indexes: string
+ @param dependent_index: name of dependent index
+ @type dependent_index: string
+ """
+ if not virtual_indexes:
+ write_message("You should specify a name of a virtual index...")
+ else:
+ virtual_indexes = virtual_indexes.split(",")
+ id_dependent = get_index_id_from_index_name(dependent_index)
+ wordTables = get_word_tables(virtual_indexes)
+ for index_id, index_name, index_tags in wordTables:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.remove_dependent_index(id_dependent)
+
+ wordTable.report_on_table_consistency()
+ task_sleep_now_if_required()
+
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxPAIR%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Pairs"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.remove_dependent_index(id_dependent)
+
+ wordTable.report_on_table_consistency()
+ task_sleep_now_if_required()
+
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxPHRASE%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Phrases"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.remove_dependent_index(id_dependent)
+
+ wordTable.report_on_table_consistency()
+
+ query = """DELETE FROM idxINDEX_idxINDEX WHERE id_virtual=%s AND id_normal=%s"""
+ run_sql(query, (index_id, id_dependent))
+
+
def task_run_core():
"""Runs the task by fetching arguments from the BibSched task queue. This is
what BibSched will be invoking via daemon call.
Return 1 in case of success and 0 in case of failure."""
global _last_word_table
if task_get_option("cmd") == "check":
- wordTables = get_word_tables(task_get_option("windex"))
+ indexes = task_get_option("windex") and task_get_option("windex").split(",") or get_all_indexes()
+ wordTables = get_word_tables(indexes)
for index_id, index_name, index_tags in wordTables:
- if index_name == 'year' and CFG_INSPIRE_SITE:
- fnc_get_words_from_phrase = get_words_from_date_tag
- elif index_name in ('author', 'firstauthor') and \
- CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES:
- fnc_get_words_from_phrase = get_author_family_name_words_from_phrase
- else:
- fnc_get_words_from_phrase = get_words_from_phrase
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern='idxWORD%02dF',
- default_get_words_fnc=fnc_get_words_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_words_from_fulltext},
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexFulltextTokenizer"},
wash_index_terms=50)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
- if index_name in ('author', 'firstauthor') and \
- CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES:
- fnc_get_pairs_from_phrase = get_pairs_from_phrase # FIXME
- else:
- fnc_get_pairs_from_phrase = get_pairs_from_phrase
+
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern='idxPAIR%02dF',
- default_get_words_fnc=fnc_get_pairs_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_nothing_from_phrase},
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Pairs"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
wash_index_terms=100)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
- if index_name in ('author', 'firstauthor'):
- fnc_get_phrases_from_phrase = get_fuzzy_authors_from_phrase
- elif index_name in ('exactauthor', 'exactfirstauthor'):
- fnc_get_phrases_from_phrase = get_exact_authors_from_phrase
- else:
- fnc_get_phrases_from_phrase = get_phrases_from_phrase
+
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern='idxPHRASE%02dF',
- default_get_words_fnc=fnc_get_phrases_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_nothing_from_phrase},
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Phrases"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
wash_index_terms=0)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
_last_word_table = None
return True
+ #virtual index: remove dependent index
+ if task_get_option("remove-dependent-index"):
+ remove_dependent_index(task_get_option("windex"),
+ task_get_option("remove-dependent-index"))
+ return True
+
+ #initialization for Words,Pairs,Phrases
+ recIDs_range = get_recIDs_from_cli()
+ recIDs_for_index = find_affected_records_for_index(task_get_option("windex"),
+ recIDs_range,
+ (task_get_option("force") or \
+ task_get_option("reindex") or \
+ task_get_option("cmd") == "del"))
+
+ wordTables = get_word_tables(recIDs_for_index.keys())
+ if not wordTables:
+ write_message("Selected indexes/recIDs are up to date.")
# Let's work on single words!
- wordTables = get_word_tables(task_get_option("windex"))
for index_id, index_name, index_tags in wordTables:
- is_fulltext_index = index_name == 'fulltext'
reindex_prefix = ""
if task_get_option("reindex"):
reindex_prefix = "tmp_"
init_temporary_reindex_tables(index_id, reindex_prefix)
- if index_name == 'year' and CFG_INSPIRE_SITE:
- fnc_get_words_from_phrase = get_words_from_date_tag
- elif index_name in ('author', 'firstauthor') and \
- CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES:
- fnc_get_words_from_phrase = get_author_family_name_words_from_phrase
- else:
- fnc_get_words_from_phrase = get_words_from_phrase
+
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern=reindex_prefix + 'idxWORD%02dF',
- default_get_words_fnc=fnc_get_words_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_words_from_fulltext},
- is_fulltext_index=is_fulltext_index,
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexFulltextTokenizer"},
wash_index_terms=50)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
try:
if task_get_option("cmd") == "del":
- if task_get_option("id"):
- wordTable.del_recIDs(task_get_option("id"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
+ if task_get_option("id") or task_get_option("collection"):
wordTable.del_recIDs(recIDs_range)
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Missing IDs of records to delete from " \
"index %s." % wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
elif task_get_option("cmd") == "add":
- if task_get_option("id"):
- wordTable.add_recIDs(task_get_option("id"), task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
- wordTable.add_recIDs(recIDs_range, task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- else:
- wordTable.add_recIDs_by_date(task_get_option("modified"), task_get_option("flush"))
- ## here we used to update last_updated info, if run via automatic mode;
- ## but do not update here anymore, since idxPHRASE will be acted upon later
- task_sleep_now_if_required(can_stop_too=True)
+ final_recIDs = beautify_range_list(create_range_list(recIDs_for_index[index_name]))
+ wordTable.add_recIDs(final_recIDs, task_get_option("flush"))
+ task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "repair":
wordTable.repair(task_get_option("flush"))
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Invalid command found processing %s" % \
wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
except StandardError, e:
write_message("Exception caught: %s" % e, sys.stderr)
register_exception(alert_admin=True)
if _last_word_table:
_last_word_table.put_into_db()
raise
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
# Let's work on pairs now
- if index_name in ('author', 'firstauthor') and \
- CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES:
- fnc_get_pairs_from_phrase = get_pairs_from_phrase # FIXME
- else:
- fnc_get_pairs_from_phrase = get_pairs_from_phrase
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern=reindex_prefix + 'idxPAIR%02dF',
- default_get_words_fnc=fnc_get_pairs_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_nothing_from_phrase},
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Pairs"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
wash_index_terms=100)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
try:
if task_get_option("cmd") == "del":
- if task_get_option("id"):
- wordTable.del_recIDs(task_get_option("id"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
+ if task_get_option("id") or task_get_option("collection"):
wordTable.del_recIDs(recIDs_range)
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Missing IDs of records to delete from " \
"index %s." % wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
elif task_get_option("cmd") == "add":
- if task_get_option("id"):
- wordTable.add_recIDs(task_get_option("id"), task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
- wordTable.add_recIDs(recIDs_range, task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- else:
- wordTable.add_recIDs_by_date(task_get_option("modified"), task_get_option("flush"))
- # let us update last_updated timestamp info, if run via automatic mode:
- task_sleep_now_if_required(can_stop_too=True)
+ final_recIDs = beautify_range_list(create_range_list(recIDs_for_index[index_name]))
+ wordTable.add_recIDs(final_recIDs, task_get_option("flush"))
+ task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "repair":
wordTable.repair(task_get_option("flush"))
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Invalid command found processing %s" % \
wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
except StandardError, e:
write_message("Exception caught: %s" % e, sys.stderr)
register_exception()
if _last_word_table:
_last_word_table.put_into_db()
raise
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
# Let's work on phrases now
- if index_name in ('author', 'firstauthor'):
- fnc_get_phrases_from_phrase = get_fuzzy_authors_from_phrase
- elif index_name in ('exactauthor', 'exactfirstauthor'):
- fnc_get_phrases_from_phrase = get_exact_authors_from_phrase
- else:
- fnc_get_phrases_from_phrase = get_phrases_from_phrase
wordTable = WordTable(index_name=index_name,
index_id=index_id,
fields_to_index=index_tags,
table_name_pattern=reindex_prefix + 'idxPHRASE%02dF',
- default_get_words_fnc=fnc_get_phrases_from_phrase,
- tag_to_words_fnc_map={'8564_u': get_nothing_from_phrase},
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Phrases"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
wash_index_terms=0)
_last_word_table = wordTable
wordTable.report_on_table_consistency()
try:
if task_get_option("cmd") == "del":
- if task_get_option("id"):
- wordTable.del_recIDs(task_get_option("id"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
+ if task_get_option("id") or task_get_option("collection"):
wordTable.del_recIDs(recIDs_range)
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Missing IDs of records to delete from " \
"index %s." % wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
elif task_get_option("cmd") == "add":
- if task_get_option("id"):
- wordTable.add_recIDs(task_get_option("id"), task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- elif task_get_option("collection"):
- l_of_colls = task_get_option("collection").split(",")
- recIDs = perform_request_search(c=l_of_colls)
- recIDs_range = []
- for recID in recIDs:
- recIDs_range.append([recID, recID])
- wordTable.add_recIDs(recIDs_range, task_get_option("flush"))
- task_sleep_now_if_required(can_stop_too=True)
- else:
- wordTable.add_recIDs_by_date(task_get_option("modified"), task_get_option("flush"))
+ final_recIDs = beautify_range_list(create_range_list(recIDs_for_index[index_name]))
+ wordTable.add_recIDs(final_recIDs, task_get_option("flush"))
+ if not task_get_option("id") and not task_get_option("collection"):
# let us update last_updated timestamp info, if run via automatic mode:
update_index_last_updated(index_id, task_get_task_param('task_starting_time'))
- task_sleep_now_if_required(can_stop_too=True)
+ task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "repair":
wordTable.repair(task_get_option("flush"))
task_sleep_now_if_required(can_stop_too=True)
else:
error_message = "Invalid command found processing %s" % \
wordTable.tablename
write_message(error_message, stream=sys.stderr)
raise StandardError(error_message)
except StandardError, e:
write_message("Exception caught: %s" % e, sys.stderr)
register_exception()
if _last_word_table:
_last_word_table.put_into_db()
raise
wordTable.report_on_table_consistency()
task_sleep_now_if_required(can_stop_too=True)
if task_get_option("reindex"):
swap_temporary_reindex_tables(index_id, reindex_prefix)
update_index_last_updated(index_id, task_get_task_param('task_starting_time'))
task_sleep_now_if_required(can_stop_too=True)
_last_word_table = None
return True
### okay, here we go:
if __name__ == '__main__':
main()
diff --git a/modules/bibindex/lib/bibindex_engine_config.py b/modules/bibindex/lib/bibindex_engine_config.py
index ba247c776..eae7a1233 100644
--- a/modules/bibindex/lib/bibindex_engine_config.py
+++ b/modules/bibindex/lib/bibindex_engine_config.py
@@ -1,37 +1,56 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibIndex indexing engine configuration parameters.
"""
__revision__ = \
"$Id$"
+import os
## configuration parameters read from the general config file:
-from invenio.config import CFG_VERSION
+from invenio.config import CFG_VERSION, CFG_PYLIBDIR
## version number:
BIBINDEX_ENGINE_VERSION = "Invenio/%s bibindex/%s" % (CFG_VERSION, CFG_VERSION)
## safety parameters concerning DB thread-multiplication problem:
CFG_CHECK_MYSQL_THREADS = 0 # to check or not to check the problem?
CFG_MAX_MYSQL_THREADS = 50 # how many threads (connections) we
# consider as still safe
CFG_MYSQL_THREAD_TIMEOUT = 20 # we'll kill threads that were sleeping
# for more than X seconds
+
+
+CFG_BIBINDEX_SYNONYM_MATCH_TYPE = { 'None': '-None-',
+ 'exact': 'exact',
+ 'leading_to_comma': 'leading_to_comma',
+ 'leading_to_number': 'leading_to_number'}
+CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR = ","
+CFG_BIBINDEX_INDEX_TABLE_TYPE = { 'Words': 'Words',
+ 'Pairs': 'Pairs',
+ 'Phrases': 'Phrases' }
+
+CFG_BIBINDEX_TOKENIZERS_PATH = os.path.join(CFG_PYLIBDIR, 'invenio', 'bibindex_tokenizers')
+
+CFG_BIBINDEX_ADDING_RECORDS_STARTED_STR = "%s adding records #%d-#%d started"
+
+CFG_BIBINDEX_UPDATE_MESSAGE = "Searching for records which should be reindexed..."
+
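The new CFG_BIBINDEX_TOKENIZERS_PATH points at a directory of tokenizer modules, but how `load_tokenizers()` discovers them is outside this hunk. Below is a minimal sketch of the kind of directory scan it could perform; `list_tokenizer_names` and the throwaway demo directory are hypothetical illustrations, not Invenio API:

```python
import os
import tempfile

def list_tokenizer_names(path):
    """Return candidate tokenizer module names found in PATH, sorted.
    A stand-in for scanning CFG_BIBINDEX_TOKENIZERS_PATH for
    BibIndex*Tokenizer.py modules."""
    return sorted(
        fname[:-3] for fname in os.listdir(path)
        if fname.startswith('BibIndex') and fname.endswith('Tokenizer.py')
    )

# demo: a throwaway directory standing in for the real tokenizers path
demo_dir = tempfile.mkdtemp()
for fname in ('BibIndexAuthorTokenizer.py',
              'BibIndexEmptyTokenizer.py',
              'README'):
    open(os.path.join(demo_dir, fname), 'w').close()

print(list_tokenizer_names(demo_dir))
# prints ['BibIndexAuthorTokenizer', 'BibIndexEmptyTokenizer']
```

Keying the registry on class names like "BibIndexEmptyTokenizer" is what lets `tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"}` above refer to a tokenizer by string rather than by import.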
diff --git a/modules/bibindex/lib/bibindex_engine_stopwords.py b/modules/bibindex/lib/bibindex_engine_stopwords.py
index 8a217ce2e..f7e487a12 100644
--- a/modules/bibindex/lib/bibindex_engine_stopwords.py
+++ b/modules/bibindex/lib/bibindex_engine_stopwords.py
@@ -1,52 +1,74 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibIndex engine stopwords facility."""
__revision__ = "$Id$"
-from invenio.config import CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE, \
- CFG_BIBINDEX_REMOVE_STOPWORDS
+from invenio.config import CFG_BIBRANK_PATH_TO_STOPWORDS_FILE, \
+ CFG_ETCDIR
+from invenio.bibindex_engine_utils import get_all_index_names_and_column_values
-def create_stopwords(filename=CFG_BIBINDEX_PATH_TO_STOPWORDS_FILE):
+def create_stopwords(filename=CFG_BIBRANK_PATH_TO_STOPWORDS_FILE):
"""Create stopword dictionary out of FILENAME."""
try:
file_descriptor = open(filename, 'r')
except IOError:
return {}
lines = file_descriptor.readlines()
file_descriptor.close()
stopdict = {}
for line in lines:
stopdict[line.rstrip()] = 1
return stopdict
-stopwords = create_stopwords()
-def is_stopword(word, force_check=0):
- """Return true if WORD is found among stopwords, false otherwise.
- Also, return false if BibIndex wasn't configured to use
- stopwords. However, if FORCE_CHECK is set to 1, then do not
- pay attention to whether the admin disabled stopwords
- functionality, but look up the word anyway. This mode is
- useful for ranking.
+def map_stopwords_paths_to_stopwords_kb():
+ """
+ Map paths of stopwords files to stopwords dicts,
+ ensuring that each stopwords dict is built only once.
+ Here is an example of an entry:
+ "/opt/invenio/etc/bibrank/stopwords.kb" : {... ,'of':1, ... }
+ It will always map the default stopwords knowledge base given by
+ CFG_BIBRANK_PATH_TO_STOPWORDS_FILE. It is useful for bibrank module.
+ """
+ stopwords_kb_map = {}
+ stopwords_kb_map[CFG_BIBRANK_PATH_TO_STOPWORDS_FILE] = create_stopwords()
+ index_stopwords = get_all_index_names_and_column_values("remove_stopwords")
+ for index, stopwords in index_stopwords:
+ if stopwords and stopwords != 'No':
+ stopwords_path = CFG_ETCDIR + "/bibrank/" + stopwords
+ if not stopwords_kb_map.has_key(stopwords_path):
+ stopwords_kb_map[stopwords_path] = create_stopwords(stopwords_path)
+ return stopwords_kb_map
+
+
+stopwords_kb = map_stopwords_paths_to_stopwords_kb()
+
+def is_stopword(word, stopwords_path = CFG_BIBRANK_PATH_TO_STOPWORDS_FILE):
+ """Return true if WORD is found among stopwords for given index, false otherwise.
+ It searches in the default stopwords knowledge base if stopwords_path is not specified
+ which is useful for bibrank module. If one wants to search in diffrent stopwords knowledge base
+ he must specify the path to stopwords file.
+ @param word: word we want to check if it's stopword or not
+ @param index: path to stopwords knowledge base we want to search in
"""
# note: input word is assumed to be in lowercase
- if (CFG_BIBINDEX_REMOVE_STOPWORDS or force_check) and \
- stopwords.has_key(word):
- return True
+ if stopwords_kb.has_key(stopwords_path):
+ if stopwords_kb[stopwords_path].has_key(word):
+ return True
return False
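The patched module replaces the single global stopword dict with one dict per knowledge-base path. A minimal sketch of that per-path lookup, with the file contents and the stopwords-fr.kb path invented purely for the demo:

```python
def create_stopwords_from_lines(lines):
    """Build a stopword dict from one-word-per-line input, mirroring
    create_stopwords() without touching the filesystem."""
    return dict((line.rstrip(), 1) for line in lines if line.strip())

# one dict per knowledge-base path, as map_stopwords_paths_to_stopwords_kb()
# builds (paths and word lists below are illustrative)
stopwords_kb = {
    '/opt/invenio/etc/bibrank/stopwords.kb':
        create_stopwords_from_lines(['of\n', 'the\n', 'and\n']),
    '/opt/invenio/etc/bibrank/stopwords-fr.kb':
        create_stopwords_from_lines(['le\n', 'la\n', 'de\n']),
}

def is_stopword(word, stopwords_path='/opt/invenio/etc/bibrank/stopwords.kb'):
    """Look WORD up in the stopword dict selected by STOPWORDS_PATH,
    defaulting to the bibrank knowledge base as in the patched module."""
    kb = stopwords_kb.get(stopwords_path)
    return bool(kb and word in kb)

print(is_stopword('of'))                                               # True
print(is_stopword('of', '/opt/invenio/etc/bibrank/stopwords-fr.kb'))   # False
```

An unknown path simply yields False, matching the patched `is_stopword`, which returns False whenever the path is absent from the map.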
diff --git a/modules/bibindex/lib/bibindex_engine_tokenizer.py b/modules/bibindex/lib/bibindex_engine_tokenizer.py
deleted file mode 100644
index a68d6c113..000000000
--- a/modules/bibindex/lib/bibindex_engine_tokenizer.py
+++ /dev/null
@@ -1,532 +0,0 @@
-# -*- coding:utf-8 -*-
-##
-## This file is part of Invenio.
-## Copyright (C) 2010, 2011, 2012 CERN.
-##
-## Invenio is free software; you can redistribute it and/or
-## modify it under the terms of the GNU General Public License as
-## published by the Free Software Foundation; either version 2 of the
-## License, or (at your option) any later version.
-##
-## Invenio is distributed in the hope that it will be useful, but
-## WITHOUT ANY WARRANTY; without even the implied warranty of
-## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-## General Public License for more details.
-##
-## You should have received a copy of the GNU General Public License
-## along with Invenio; if not, write to the Free Software Foundation, Inc.,
-## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""bibindex_engine_tokenizer: a set of classes implementing index tokenization
-
-The idea is that Tokenizer classes provide a method, tokenize(), which turns
-input strings into lists of strings. The output strings are calculated based
-on the input string as tokens suitable for word or phrase indexing.
-"""
-
-import re
-
-from invenio.config import \
- CFG_BIBINDEX_REMOVE_HTML_MARKUP, \
- CFG_BIBINDEX_REMOVE_LATEX_MARKUP, \
- CFG_BIBINDEX_CHARS_PUNCTUATION, \
- CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS
-from invenio.htmlutils import remove_html_markup
-from invenio.textutils import wash_for_utf8, strip_accents
-
-from invenio.bibindex_engine_washer import \
- lower_index_term, remove_latex_markup, \
- apply_stemming_and_stopwords_and_length_check, \
- wash_author_name
-
-latex_formula_re = re.compile(r'\$.*?\$|\\\[.*?\\\]')
-phrase_delimiter_re = re.compile(r'[\.:;\?\!]')
-space_cleaner_re = re.compile(r'\s+')
-re_block_punctuation_begin = re.compile(r"^" + CFG_BIBINDEX_CHARS_PUNCTUATION + "+")
-re_block_punctuation_end = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION + "+$")
-re_punctuation = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION)
-re_separators = re.compile(CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS)
-re_arxiv = re.compile(r'^arxiv:\d\d\d\d\.\d\d\d\d')
-
-re_pattern_fuzzy_author_trigger = re.compile(r'[\s\,\.]')
-# FIXME: re_pattern_fuzzy_author_trigger could be removed and an
-# BibAuthorID API function could be called instead after we
-# double-check that there are no circular imports.
-re_pattern_author_canonical_id = re.compile(r'\.[0-9]+$')
-
-def author_name_requires_phrase_search(p):
- """
- Detect whether author query pattern p requires phrase search.
- Notably, look for presence of spaces and commas.
- """
- if re_pattern_fuzzy_author_trigger.search(p):
- return True
- return False
-
-class BibIndexTokenizer(object):
- """Base class for the tokenizers
-
- Tokenizers act as filters which turn input strings into lists of strings
- which represent the idexable components of that string.
- """
-
- def scan_string(self, s):
- """Return an intermediate representation of the tokens in s.
-
- Every tokenizer should have a scan_string function, which scans the
- input string and lexically tags its components. These units are
- grouped together sequentially. The output of scan_string is usually
- something like:
- {
- 'TOKEN_TAG_LIST' : a list of valid keys in this output set,
- 'key1' : [val1, val2, val3] - where key describes the in some
- meaningful way
- }
-
- @param s: the input to be lexically tagged
- @type s: string
-
- @return: dict of lexically tagged input items
- In a sample Tokenizer where scan_string simply splits s on
- space, scan_string might output the following for
- "Assam and Darjeeling":
- {
- 'TOKEN_TAG_LIST' : 'word_list',
- 'word_list' : ['Assam', 'and', 'Darjeeling']
- }
- @rtype: dict
- """
- raise NotImplementedError
-
- def parse_scanned(self, o):
- """Calculate the token list from the intermediate representation o.
-
- While this should be an interesting computation over the intermediate
- representation generated by scan_string, obviously in the split-on-
- space example we need only return o['word_list'].
-
- @param t: a dictionary with a 'word_list' key
- @type t: dict
-
- @return: the token items from 'word_list'
- @rtype: list of string
- """
- raise NotImplementedError
-
- def tokenize(self, s):
- """Main entry point. Return token list from input string s.
-
- Simply composes the functionality above.
-
- @param s: the input to be lexically tagged
- @type s: string
-
- @return: the token items derived from s
- @rtype: list of string
- """
- raise NotImplementedError
-
-
-class BibIndexPhraseTokenizer(BibIndexTokenizer):
- """The original phrase is returned"""
-
- def __init__(self, stemming_language = None):
- self.stemming_language = stemming_language
-
- def tokenize(self, phrase):
- """Return list of phrases found in PHRASE. Note that the phrase is
- split into groups depending on the alphanumeric characters and
- punctuation characters definition present in the config file.
- """
- phrase = wash_for_utf8(phrase)
- return [phrase]
- ## Note that we don't break phrases, they are used for exact style
- ## of searching.
- words = {}
- phrase = strip_accents(phrase)
- # 1st split phrase into blocks according to whitespace
- for block1 in phrase_delimiter_re.split(strip_accents(phrase)):
- block1 = block1.strip()
- if block1 and self.stemming_language:
- new_words = []
- for block2 in re_punctuation.split(block1):
- block2 = block2.strip()
- if block2:
- for block3 in block2.split():
- block3 = block3.strip()
- if block3:
- # Note that we don't stem phrases, they
- # are used for exact style of searching.
- new_words.append(block3)
- block1 = ' '.join(new_words)
- if block1:
- words[block1] = 1
- return words.keys()
-
-
-class BibIndexWordTokenizer(BibIndexTokenizer):
- """A phrase is split into words"""
-
- def __init__(self, stemming_language = None):
- self.stemming_language = stemming_language
-
- def tokenize(self, phrase):
- """Return list of words found in PHRASE. Note that the phrase is
- split into groups depending on the alphanumeric characters and
- punctuation characters definition present in the config file.
- """
- words = {}
- formulas = []
- if CFG_BIBINDEX_REMOVE_HTML_MARKUP and phrase.find("</") > -1:
- phrase = remove_html_markup(phrase)
- if CFG_BIBINDEX_REMOVE_LATEX_MARKUP:
- formulas = latex_formula_re.findall(phrase)
- phrase = remove_latex_markup(phrase)
- phrase = latex_formula_re.sub(' ', phrase)
- phrase = wash_for_utf8(phrase)
- phrase = lower_index_term(phrase)
- # 1st split phrase into blocks according to whitespace
- for block in strip_accents(phrase).split():
- # 2nd remove leading/trailing punctuation and add block:
- block = re_block_punctuation_begin.sub("", block)
- block = re_block_punctuation_end.sub("", block)
- if block:
- stemmed_block = apply_stemming_and_stopwords_and_length_check(block, self.stemming_language)
- if stemmed_block:
- words[stemmed_block] = 1
- if re_arxiv.match(block):
- # special case for blocks like `arXiv:1007.5048' where
- # we would like to index the part after the colon
- # regardless of dot or other punctuation characters:
- words[block.split(':', 1)[1]] = 1
- # 3rd break each block into subblocks according to punctuation and add subblocks:
- for subblock in re_punctuation.split(block):
- stemmed_subblock = apply_stemming_and_stopwords_and_length_check(subblock, self.stemming_language)
- if stemmed_subblock:
- words[stemmed_subblock] = 1
- # 4th break each subblock into alphanumeric groups and add groups:
- for alphanumeric_group in re_separators.split(subblock):
- stemmed_alphanumeric_group = apply_stemming_and_stopwords_and_length_check(alphanumeric_group, self.stemming_language)
- if stemmed_alphanumeric_group:
- words[stemmed_alphanumeric_group] = 1
- for block in formulas:
- words[block] = 1
- return words.keys()
-
-class BibIndexPairTokenizer(BibIndexTokenizer):
- """A phrase is split into pairs of words"""
-
- def __init__(self, stemming_language = None):
- self.stemming_language = stemming_language
-
- def tokenize(self, phrase):
- """Return list of words found in PHRASE. Note that the phrase is
- split into groups depending on the alphanumeric characters and
- punctuation characters definition present in the config file.
- """
- words = {}
- if CFG_BIBINDEX_REMOVE_HTML_MARKUP and phrase.find("</") > -1:
- phrase = remove_html_markup(phrase)
- if CFG_BIBINDEX_REMOVE_LATEX_MARKUP:
- phrase = remove_latex_markup(phrase)
- phrase = latex_formula_re.sub(' ', phrase)
- phrase = wash_for_utf8(phrase)
- phrase = lower_index_term(phrase)
- # 1st split phrase into blocks according to whitespace
- last_word = ''
- for block in strip_accents(phrase).split():
- # 2nd remove leading/trailing punctuation and add block:
- block = re_block_punctuation_begin.sub("", block)
- block = re_block_punctuation_end.sub("", block)
- if block:
- if self.stemming_language:
- block = apply_stemming_and_stopwords_and_length_check(block, self.stemming_language)
- # 3rd break each block into subblocks according to punctuation and add subblocks:
- for subblock in re_punctuation.split(block):
- if self.stemming_language:
- subblock = apply_stemming_and_stopwords_and_length_check(subblock, self.stemming_language)
- if subblock:
- # 4th break each subblock into alphanumeric groups and add groups:
- for alphanumeric_group in re_separators.split(subblock):
- if self.stemming_language:
- alphanumeric_group = apply_stemming_and_stopwords_and_length_check(alphanumeric_group, self.stemming_language)
- if alphanumeric_group:
- if last_word:
- words['%s %s' % (last_word, alphanumeric_group)] = 1
- last_word = alphanumeric_group
- return words.keys()
-
-class BibIndexExactNameTokenizer(BibIndexTokenizer):
- """
- Human name exact tokenizer.
- """
-
- def tokenize(self, s):
- """
- Main API.
- """
- return [wash_author_name(s)]
-
-class BibIndexFuzzyNameTokenizer(BibIndexTokenizer):
- """Human name tokenizer.
-
- Human names are divided into three classes of tokens:
- 'lastnames', i.e., family, tribal or group identifiers,
- 'nonlastnames', i.e., personal names distinguishing individuals,
- 'titles', both incidental and permanent, e.g., 'VIII', '(ed.)', 'Msc'
- """
-
- def __init__(self):
- self.single_initial_re = re.compile('^\w\.$')
- self.split_on_re = re.compile('[\.\s-]')
- # lastname_stopwords describes terms which should not be used for indexing,
- # in multiple-word last names. These are purely conjunctions, serving the
- # same function as the American hyphen, but using linguistic constructs.
- self.lastname_stopwords = set(['y', 'of', 'and', 'de'])
-
- def scan(self, s):
- """Scan a name string and output an object representing its structure.
-
- @param s: the input to be lexically tagged
- @type s: string
-
- @return: dict of lexically tagged input items.
-
- Sample output for the name 'Jingleheimer Schmitt, John Jacob, XVI.' is:
- {
- 'TOKEN_TAG_LIST' : ['lastnames', 'nonlastnames', 'titles', 'raw'],
- 'lastnames' : ['Jingleheimer', 'Schmitt'],
- 'nonlastnames' : ['John', 'Jacob'],
- 'titles' : ['XVI.'],
- 'raw' : 'Jingleheimer Schmitt, John Jacob, XVI.'
- }
- @rtype: dict
- """
- retval = {'TOKEN_TAG_LIST' : ['lastnames', 'nonlastnames', 'titles', 'raw'],
- 'lastnames' : [],
- 'nonlastnames' : [],
- 'titles' : [],
- 'raw' : s}
- l = s.split(',')
- if len(l) < 2:
- # No commas means a simple name
- new = s.strip()
- new = s.split(' ')
- if len(new) == 1:
- retval['lastnames'] = new # rare single-name case
- else:
- retval['lastnames'] = new[-1:]
- retval['nonlastnames'] = new[:-1]
- for tag in ['lastnames', 'nonlastnames']:
- retval[tag] = [x.strip() for x in retval[tag]]
- retval[tag] = [re.split(self.split_on_re, x) for x in retval[tag]]
- # flatten sublists
- retval[tag] = [item for sublist in retval[tag] for item in sublist]
- retval[tag] = [x for x in retval[tag] if x != '']
- else:
- # Handle lastname-first multiple-names case
- retval['titles'] = l[2:] # no titles? no problem
- retval['nonlastnames'] = l[1]
- retval['lastnames'] = l[0]
- for tag in ['lastnames', 'nonlastnames']:
- retval[tag] = retval[tag].strip()
- retval[tag] = re.split(self.split_on_re, retval[tag])
- # filter empty strings
- retval[tag] = [x for x in retval[tag] if x != '']
- retval['titles'] = [x.strip() for x in retval['titles'] if x != '']
-
- return retval
-
- def parse_scanned(self, scanned):
- """Return all the indexable variations for a tagged token dictionary.
-
- Does this via the combinatoric expansion of the following rules:
- - Expands first names as name, first initial with period, first initial
- without period.
- - Expands compound last names as each of their non-stopword subparts.
- - Titles are treated literally, but applied serially.
-
- Please note that titles will be applied to complete last names only.
- So for example, if there is a compound last name of the form,
- "Ibanez y Gracia", with the title, "(ed.)", then only the combination
- of those two strings will do, not "Ibanez" and not "Gracia".
-
- @param scanned: lexically tagged input items in the form of the output
- from scan()
- @type scanned: dict
-
- @return: combinatorically expanded list of strings for indexing
- @rtype: list of string
- """
-
- def _fully_expanded_last_name(first, lastlist, title = None):
- """Return a list of all of the first / last / title combinations.
-
- @param first: one possible non-last name
- @type first: string
-
- @param lastlist: the strings of the tokens in the (possibly compound) last name
- @type lastlist: list of string
-
- @param title: one possible title
- @type title: string
- """
- retval = []
- title_word = ''
- if title != None:
- title_word = ', ' + title
-
- last = ' '.join(lastlist)
- retval.append(first + ' ' + last + title_word)
- retval.append(last + ', ' + first + title_word)
- for last in lastlist:
- if last in self.lastname_stopwords:
- continue
- retval.append(first + ' ' + last + title_word)
- retval.append(last + ', ' + first + title_word)
-
- return retval
-
- last_parts = scanned['lastnames']
- first_parts = scanned['nonlastnames']
- titles = scanned['titles']
- raw = scanned['raw']
-
- if len(first_parts) == 0: # rare single-name case
- return scanned['lastnames']
-
- expanded = []
- for exp in self.__expand_nonlastnames(first_parts):
- expanded.extend(_fully_expanded_last_name(exp, last_parts, None))
- for title in titles:
- # Drop titles which are parenthesized. This eliminates (ed.) from the index, but
- # leaves XI, for example. This gets rid of the surprising behavior that searching
- # for 'author:ed' retrieves people who have been editors, but whose names aren't
- # Ed.
- # TODO: Make editorship and other special statuses a MARC field.
- if title.find('(') != -1:
- continue
- # XXX: remember to document that titles can only be applied to complete last names
- expanded.extend(_fully_expanded_last_name(exp, [' '.join(last_parts)], title))
-
- return sorted(list(set(expanded)))
-
- def __expand_nonlastnames(self, namelist):
- """Generate every expansion of a series of human non-last names.
-
- Example:
- "Michael Edward" -> "Michael Edward", "Michael E.", "Michael E", "M. Edward", "M Edward",
- "M. E.", "M. E", "M E.", "M E", "M.E."
- ...but never:
- "ME"
-
- @param namelist: a collection of names
- @type namelist: list of string
-
- @return: a greatly expanded collection of names
- @rtype: list of string
- """
-
- def _expand_name(name):
- """Lists [name, initial, empty]"""
- if name == None:
- return []
- return [name, name[0]]
-
- def _pair_items(head, tail):
- """Lists every combination of head with each and all of tail"""
- if len(tail) == 0:
- return [head]
- l = []
- l.extend([head + ' ' + tail[0]])
- #l.extend([head + '-' + tail[0]])
- l.extend(_pair_items(head, tail[1:]))
- return l
-
- def _collect(head, tail):
- """Brings together combinations of things"""
-
- def _cons(a, l):
- l2 = l[:]
- l2.insert(0, a)
- return l2
-
- if len(tail) == 0:
- return [head]
- l = []
- l.extend(_pair_items(head, _expand_name(tail[0])))
- l.extend([' '.join(_cons(head, tail)).strip()])
- #l.extend(['-'.join(_cons(head, tail)).strip()])
- l.extend(_collect(head, tail[1:]))
- return l
-
- def _expand_contract(namelist):
- """Runs collect with every head in namelist and its tail"""
- val = []
- for i in range(len(namelist)):
- name = namelist[i]
- for expansion in _expand_name(name):
- val.extend(_collect(expansion, namelist[i+1:]))
- return val
-
- def _add_squashed(namelist):
- """Finds cases like 'M. E.' and adds 'M.E.'"""
- val = namelist
-
- def __check_parts(parts):
- if len(parts) < 2:
- return False
- for part in parts:
- if not self.single_initial_re.match(part):
- return False
- return True
-
- for name in namelist:
- parts = name.split(' ')
- if not __check_parts(parts):
- continue
- val.extend([''.join(parts)])
-
- return val
-
- return _add_squashed(_expand_contract(namelist))
-
- def tokenize(self, s):
- """Main entry point. Output the list of strings expanding s.
-
- Does this via the combinatoric expansion of the following rules:
- - Expands first names as name, first initial with period, first initial
- without period.
- - Expands compound last names as each of their non-stopword subparts.
- - Titles are treated literally, but applied serially.
-
- Please note that titles will be applied to complete last names only.
- So for example, if there is a compound last name of the form,
- "Ibanez y Gracia", with the title, "(ed.)", then only the combination
- of those two strings will do, not "Ibanez" and not "Gracia".
-
- @param s: the input to be lexically tagged
- @type s: string
-
- @return: combinatorically expanded list of strings for indexing
- @rtype: list of string
-
- @note: A simple wrapper around scan and parse_scanned.
- """
- return self.parse_scanned(self.scan(s))
-
-
-if __name__ == "__main__":
- """Trivial manual test framework"""
- import sys
- args = sys.argv[1:]
-
- test_str = ''
- if len(args) == 0:
- test_str = "Michael Peskin"
- elif len(args) == 1:
- test_str = args[0]
- else:
- test_str = ' '.join(args)
-
- tokenizer = BibIndexFuzzyNameTokenizer()
- print "Tokenizes as:", tokenizer.tokenize(test_str)
diff --git a/modules/bibindex/lib/bibindex_engine_tokenizer_unit_tests.py b/modules/bibindex/lib/bibindex_engine_tokenizer_unit_tests.py
index 1d15e0a3f..e2c5a8b1f 100644
--- a/modules/bibindex/lib/bibindex_engine_tokenizer_unit_tests.py
+++ b/modules/bibindex/lib/bibindex_engine_tokenizer_unit_tests.py
@@ -1,319 +1,371 @@
# -*- coding:utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2010, 2011, 2012 CERN.
+## Copyright (C) 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""bibindex_engine_tokenizer_tests - unit tests for bibindex_engine_tokenizer
+"""bibindex_engine_tokenizer_tests - unit tests for tokenizers
There should always be at least one test class for each class in b_e_t.
"""
+from invenio.importutils import lazy_import
from invenio.testutils import make_test_suite, run_test_suite, InvenioTestCase
-from invenio import bibindex_engine_tokenizer as tokenizer_lib
+load_tokenizers = lazy_import('invenio.bibindex_engine_utils:load_tokenizers')
+_TOKENIZERS = None
-class TestFuzzyNameTokenizerScanning(InvenioTestCase):
+
+
+class TestAuthorTokenizerScanning(InvenioTestCase):
"""Test BibIndex name tokenization"""
def setUp(self):
- self.tokenizer = tokenizer_lib.BibIndexFuzzyNameTokenizer()
- self.scan = self.tokenizer.scan
+ _TOKENIZERS = load_tokenizers()
+ self.tokenizer = _TOKENIZERS["BibIndexAuthorTokenizer"]()
+ self.scan = self.tokenizer.scan_string_for_phrases
def test_bifnt_scan_single(self):
- """BibIndexFuzzyNameTokenizer - scanning single names like 'Dido'"""
+ """BibIndexAuthorTokenizer - scanning single names like 'Dido'"""
teststr = "Dido"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Dido'], 'nonlastnames': [], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_simple_western_forward(self):
- """BibIndexFuzzyNameTokenizer - scanning simple Western-style: first last"""
+ """BibIndexAuthorTokenizer - scanning simple Western-style: first last"""
teststr = "Ringo Starr"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_simple_western_reverse(self):
- """BibIndexFuzzyNameTokenizer - scanning simple Western-style: last, first"""
+ """BibIndexAuthorTokenizer - scanning simple Western-style: last, first"""
teststr = "Starr, Ringo"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_forward(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: first middle last"""
+ """BibIndexAuthorTokenizer - scanning multiword: first middle last"""
teststr = "Michael Edward Peskin"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['Michael', 'Edward'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dotcrammed(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: f.m. last"""
+ """BibIndexAuthorTokenizer - scanning multiword: f.m. last"""
teststr = "M.E. Peskin"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['M', 'E'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dotcrammed_reversed(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: last, f.m."""
+ """BibIndexAuthorTokenizer - scanning multiword: last, f.m."""
teststr = "Peskin, M.E."
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['M', 'E'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dashcrammed(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: first-middle last"""
+ """BibIndexAuthorTokenizer - scanning multiword: first-middle last"""
teststr = "Jean-Luc Picard"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Picard'], 'nonlastnames': ['Jean', 'Luc'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dashcrammed_reversed(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: last, first-middle"""
+ """BibIndexAuthorTokenizer - scanning multiword: last, first-middle"""
teststr = "Picard, Jean-Luc"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Picard'], 'nonlastnames': ['Jean', 'Luc'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_dashes(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: first middle last-last"""
+ """BibIndexAuthorTokenizer - scanning multiword: first middle last-last"""
teststr = "Cantina Octavia Jones-Smith"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Jones', 'Smith'], 'nonlastnames': ['Cantina', 'Octavia'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_dashes_reverse(self):
- """BibIndexFuzzyNameTokenizer - scanning multiword: last-last, first middle"""
+ """BibIndexAuthorTokenizer - scanning multiword: last-last, first middle"""
teststr = "Jones-Smith, Cantina Octavia"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Jones', 'Smith'], 'nonlastnames': ['Cantina', 'Octavia'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_reverse(self):
- """BibIndexFuzzyNameTokenizer - scanning compound last: last last, first"""
+ """BibIndexAuthorTokenizer - scanning compound last: last last, first"""
teststr = "Alvarez Gaume, Joachim"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Alvarez', 'Gaume'], 'nonlastnames': ['Joachim'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_titled(self):
- """BibIndexFuzzyNameTokenizer - scanning title-bearing: last, first, title"""
+ """BibIndexAuthorTokenizer - scanning title-bearing: last, first, title"""
teststr = "Epstein, Brian, The Fifth Beatle"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Epstein'], 'nonlastnames': ['Brian'], 'titles': ['The Fifth Beatle'], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_wildly_interesting(self):
- """BibIndexFuzzyNameTokenizer - scanning last last last, first first, title, title"""
+ """BibIndexAuthorTokenizer - scanning last last last, first first, title, title"""
teststr = "Ibanez y Gracia, Maria Luisa, II., ed."
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ibanez', 'y', 'Gracia'], 'nonlastnames': ['Maria', 'Luisa'], 'titles': ['II.', 'ed.'], 'raw' : teststr}
self.assertEqual(output, anticipated)
-class TestFuzzyNameTokenizerTokens(InvenioTestCase):
+class TestAuthorTokenizerTokens(InvenioTestCase):
"""Test BibIndex name variant token generation from scanned and tagged sets"""
def setUp(self):
- self.tokenizer = tokenizer_lib.BibIndexFuzzyNameTokenizer()
- self.get_index_tokens = self.tokenizer.parse_scanned
+ _TOKENIZERS = load_tokenizers()
+ self.tokenizer = _TOKENIZERS["BibIndexAuthorTokenizer"]()
+ self.get_index_tokens = self.tokenizer.parse_scanned_for_phrases
def test_bifnt_tokenize_single(self):
- """BibIndexFuzzyNameTokenizer - tokens for single-word name
+ """BibIndexAuthorTokenizer - tokens for single-word name
Ronaldo
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ronaldo'], 'nonlastnames': [], 'titles': [], 'raw' : 'Ronaldo'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Ronaldo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_simple_forward(self):
- """BibIndexFuzzyNameTokenizer - tokens for first last
+ """BibIndexAuthorTokenizer - tokens for first last
Ringo Starr
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : 'Ringo Starr'}
output = self.get_index_tokens(tagged_data)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_simple_reverse(self):
- """BibIndexFuzzyNameTokenizer - tokens for last, first
+ """BibIndexAuthorTokenizer - tokens for last, first
Starr, Ringo
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : 'Starr, Ringo'}
output = self.get_index_tokens(tagged_data)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_twoname_forward(self):
- """BibIndexFuzzyNameTokenizer - tokens for first middle last
+ """BibIndexAuthorTokenizer - tokens for first middle last
Michael Edward Peskin
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['Michael', 'Edward'], 'titles': [], 'raw' : 'Michael Edward Peskin'}
output = self.get_index_tokens(tagged_data)
anticipated = ['E Peskin', 'Edward Peskin', 'M E Peskin', 'M Edward Peskin', 'M Peskin',
'Michael E Peskin', 'Michael Edward Peskin', 'Michael Peskin',
'Peskin, E', 'Peskin, Edward', 'Peskin, M',
'Peskin, M E', 'Peskin, M Edward', 'Peskin, Michael',
'Peskin, Michael E', 'Peskin, Michael Edward']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_compound_last(self):
- """BibIndexFuzzyNameTokenizer - tokens for last last, first
+ """BibIndexAuthorTokenizer - tokens for last last, first
Alvarez Gaume, Joachim
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Alvarez', 'Gaume'], 'nonlastnames': ['Joachim'], 'titles': [], 'raw' : 'Alvarez Gaume, Joachim'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Alvarez Gaume, J', 'Alvarez Gaume, Joachim', 'Alvarez, J', 'Alvarez, Joachim', 'Gaume, J',
'Gaume, Joachim', 'J Alvarez', 'J Alvarez Gaume', 'J Gaume', 'Joachim Alvarez',
'Joachim Alvarez Gaume', 'Joachim Gaume']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_titled(self):
- """BibIndexFuzzyNameTokenizer - tokens for last, first, title
+ """BibIndexAuthorTokenizer - tokens for last, first, title
Epstein, Brian, The Fifth Beatle
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Epstein'], 'nonlastnames': ['Brian'], 'titles': ['The Fifth Beatle'], 'raw' : 'Epstein, Brian, The Fifth Beatle'}
output = self.get_index_tokens(tagged_data)
anticipated = ['B Epstein', 'B Epstein, The Fifth Beatle', 'Brian Epstein',
'Brian Epstein, The Fifth Beatle', 'Epstein, B', 'Epstein, B, The Fifth Beatle',
'Epstein, Brian', 'Epstein, Brian, The Fifth Beatle']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_wildly_interesting(self):
- """BibIndexFuzzyNameTokenizer - tokens for last last last, first first, title, title
+ """BibIndexAuthorTokenizer - tokens for last last last, first first, title, title
Ibanez y Gracia, Maria Luisa, II, (ed.)
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ibanez', 'y', 'Gracia'], 'nonlastnames': ['Maria', 'Luisa'], 'titles': ['II', '(ed.)'], 'raw' : 'Ibanez y Gracia, Maria Luisa, II, (ed.)'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Gracia, L', 'Gracia, Luisa', 'Gracia, M', 'Gracia, M L', 'Gracia, M Luisa',
'Gracia, Maria', 'Gracia, Maria L', 'Gracia, Maria Luisa',
'Ibanez y Gracia, L', 'Ibanez y Gracia, L, II',
'Ibanez y Gracia, Luisa', 'Ibanez y Gracia, Luisa, II',
'Ibanez y Gracia, M', 'Ibanez y Gracia, M L', 'Ibanez y Gracia, M L, II',
'Ibanez y Gracia, M Luisa', 'Ibanez y Gracia, M Luisa, II',
'Ibanez y Gracia, M, II',
'Ibanez y Gracia, Maria',
'Ibanez y Gracia, Maria L', 'Ibanez y Gracia, Maria L, II',
'Ibanez y Gracia, Maria Luisa', 'Ibanez y Gracia, Maria Luisa, II',
'Ibanez y Gracia, Maria, II',
'Ibanez, L', 'Ibanez, Luisa',
'Ibanez, M', 'Ibanez, M L', 'Ibanez, M Luisa', 'Ibanez, Maria',
'Ibanez, Maria L', 'Ibanez, Maria Luisa', 'L Gracia', 'L Ibanez',
'L Ibanez y Gracia', 'L Ibanez y Gracia, II', 'Luisa Gracia', 'Luisa Ibanez',
'Luisa Ibanez y Gracia', 'Luisa Ibanez y Gracia, II', 'M Gracia',
'M Ibanez', 'M Ibanez y Gracia', 'M Ibanez y Gracia, II', 'M L Gracia',
'M L Ibanez', 'M L Ibanez y Gracia', 'M L Ibanez y Gracia, II',
'M Luisa Gracia', 'M Luisa Ibanez', 'M Luisa Ibanez y Gracia', 'M Luisa Ibanez y Gracia, II',
'Maria Gracia',
'Maria Ibanez', 'Maria Ibanez y Gracia', 'Maria Ibanez y Gracia, II',
'Maria L Gracia', 'Maria L Ibanez', 'Maria L Ibanez y Gracia', 'Maria L Ibanez y Gracia, II',
'Maria Luisa Gracia', 'Maria Luisa Ibanez', 'Maria Luisa Ibanez y Gracia',
'Maria Luisa Ibanez y Gracia, II']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_multimiddle_forward(self):
- """BibIndexFuzzyNameTokenizer - tokens for first middle middle last
+ """BibIndexAuthorTokenizer - tokens for first middle middle last
W K H Panofsky
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Panofsky'], 'nonlastnames': ['W', 'K', 'H'], 'titles': [], 'raw' : 'W K H Panofsky'}
output = self.get_index_tokens(tagged_data)
anticipated = ['H Panofsky', 'K H Panofsky', 'K Panofsky', 'Panofsky, H', 'Panofsky, K',
'Panofsky, K H', 'Panofsky, W', 'Panofsky, W H', 'Panofsky, W K',
'Panofsky, W K H', 'W H Panofsky',
'W K H Panofsky', 'W K Panofsky', 'W Panofsky']
self.assertEqual(output, anticipated)
def test_tokenize(self):
- """BibIndexFuzzyNameTokenizer - check tokenize()
+ """BibIndexAuthorTokenizer - check tokenize_for_phrases()
Ringo Starr
"""
teststr = "Ringo Starr"
- output = self.tokenizer.tokenize(teststr)
+ output = self.tokenizer.tokenize_for_phrases(teststr)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
-class TestExactNameTokenizer(InvenioTestCase):
+class TestExactAuthorTokenizer(InvenioTestCase):
"""Test exact author name tokenizer."""
def setUp(self):
"""setup"""
- self.tokenizer = tokenizer_lib.BibIndexExactNameTokenizer()
+ _TOKENIZERS = load_tokenizers()
+ self.tokenizer = _TOKENIZERS["BibIndexExactAuthorTokenizer"]()
+ self.tokenize = self.tokenizer.tokenize_for_phrases
def test_exact_author_name_tokenizer_bare(self):
"""BibIndexExactNameTokenizer - bare name"""
- self.assertEqual(self.tokenizer.tokenize('John Doe'),
+ self.assertEqual(self.tokenize('John Doe'),
['John Doe'])
def test_exact_author_name_tokenizer_dots(self):
"""BibIndexExactNameTokenizer - name with dots"""
- self.assertEqual(self.tokenizer.tokenize('J. Doe'),
+ self.assertEqual(self.tokenize('J. Doe'),
['J Doe'])
- self.assertEqual(self.tokenizer.tokenize('J.R. Doe'),
+ self.assertEqual(self.tokenize('J.R. Doe'),
['J R Doe'])
- self.assertEqual(self.tokenizer.tokenize('J. R. Doe'),
+ self.assertEqual(self.tokenize('J. R. Doe'),
['J R Doe'])
def test_exact_author_name_tokenizer_trailing_dots(self):
"""BibIndexExactNameTokenizer - name with trailing dots"""
- self.assertEqual(self.tokenizer.tokenize('Doe, J'),
+ self.assertEqual(self.tokenize('Doe, J'),
['Doe, J'])
- self.assertEqual(self.tokenizer.tokenize('Doe, J.'),
+ self.assertEqual(self.tokenize('Doe, J.'),
['Doe, J'])
def test_exact_author_name_tokenizer_hyphens(self):
"""BibIndexExactNameTokenizer - name with hyphens"""
- self.assertEqual(self.tokenizer.tokenize('Doe, Jean-Pierre'),
+ self.assertEqual(self.tokenize('Doe, Jean-Pierre'),
['Doe, Jean Pierre'])
-TEST_SUITE = make_test_suite(TestFuzzyNameTokenizerScanning,
- TestFuzzyNameTokenizerTokens,
- TestExactNameTokenizer,)
+
+class TestCJKTokenizer(InvenioTestCase):
+ """Tests for the CJK tokenizer, which splits CJK words into characters
+ and treats every single character as a word."""
+
+
+ def setUp(self):
+ _TOKENIZERS = load_tokenizers()
+ self.tokenizer = _TOKENIZERS["BibIndexCJKTokenizer"]()
+
+ def test_tokenize_for_words_phrase_galaxy(self):
+ """tokenizing phrase: galaxy s4据信"""
+ phrase = "galaxy s4据信"
+ result = self.tokenizer.tokenize_for_words(phrase)
+ self.assertEqual(sorted(['galaxy','s4','据','信']), sorted(result))
+
+ def test_tokenize_for_words_phrase_with_special_punctuation(self):
+ """tokenizing phrase: 马英九:台湾民"""
+ phrase = u"马英九:台湾民"
+ result = self.tokenizer.tokenize_for_words(phrase)
+ self.assertEqual(sorted(['马','英','九','台','湾','民']), sorted(result))
+
+ def test_tokenize_for_words_phrase_with_special_punctuation_two(self):
+ """tokenizing phrase: 色的“刀子嘴”"""
+ phrase = u"色的“刀子嘴”"
+ result = self.tokenizer.tokenize_for_words(phrase)
+ self.assertEqual(sorted(['色','的','刀','子','嘴']), sorted(result))
+
+ def test_tokenize_for_words_simple_phrase(self):
+ """tokenizing phrase: 春眠暁覚"""
+ self.assertEqual(sorted(self.tokenizer.tokenize_for_words(u'春眠暁覚')), sorted(['春', '眠', '暁', '覚']))
+
+ def test_tokenize_for_words_mixed_phrase(self):
+ """tokenizing phrase: 春眠暁ABC覚"""
+ self.assertEqual(sorted(self.tokenizer.tokenize_for_words(u'春眠暁ABC覚')), sorted(['春', '眠', '暁', 'abc', '覚']))
+
+ def test_tokenize_for_words_phrase_with_comma(self):
+ """tokenizing phrase: 春眠暁, 暁"""
+ phrase = u"春眠暁, 暁"
+ self.assertEqual(sorted(self.tokenizer.tokenize_for_words(phrase)), sorted(['春','眠','暁']))
+
+
+TEST_SUITE = make_test_suite(TestAuthorTokenizerScanning,
+ TestAuthorTokenizerTokens,
+ TestExactAuthorTokenizer,
+ TestCJKTokenizer)
if __name__ == '__main__':
#unittest.main()
run_test_suite(TEST_SUITE)
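The refactoring in the test file above replaces direct tokenizer instantiation with a registry lookup: `load_tokenizers()` returns a mapping from tokenizer class names to classes, and each `setUp()` instantiates from it. A minimal self-contained sketch of that pattern follows; the toy tokenizer only mimics the exact-author expectations visible in the tests and is not Invenio's implementation (in Invenio the registry is built by `PluginContainer`).

```python
# Sketch of the name -> class registry pattern used in the setUp()
# methods above.  A plain dict stands in for Invenio's PluginContainer.

class BibIndexExactAuthorTokenizer(object):
    """Toy stand-in: normalize an author name for exact-phrase matching."""
    def tokenize_for_phrases(self, name):
        # Turn dots and hyphens into spaces and collapse whitespace,
        # mirroring the expected test values ('J.R. Doe' -> 'J R Doe').
        cleaned = name.replace('.', ' ').replace('-', ' ')
        return [' '.join(cleaned.split())]

def load_tokenizers():
    """Toy loader returning the tokenizer registry."""
    return {'BibIndexExactAuthorTokenizer': BibIndexExactAuthorTokenizer}

_TOKENIZERS = load_tokenizers()
tokenizer = _TOKENIZERS['BibIndexExactAuthorTokenizer']()
print(tokenizer.tokenize_for_phrases('Doe, Jean-Pierre'))
```

The indirection lets an index pick its tokenizer by name (e.g. from a database column) instead of importing tokenizer classes at module load time.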
diff --git a/modules/bibindex/lib/bibindex_engine_unit_tests.py b/modules/bibindex/lib/bibindex_engine_unit_tests.py
index 98c1af25b..48960eff8 100644
--- a/modules/bibindex/lib/bibindex_engine_unit_tests.py
+++ b/modules/bibindex/lib/bibindex_engine_unit_tests.py
@@ -1,159 +1,227 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
+## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Unit tests for the indexing engine."""
__revision__ = \
"$Id$"
from invenio.importutils import lazy_import
from invenio.testutils import make_test_suite, run_test_suite, InvenioTestCase
bibindex_engine = lazy_import('invenio.bibindex_engine')
-
+load_tokenizers = lazy_import('invenio.bibindex_engine_utils.load_tokenizers')
class TestListSetOperations(InvenioTestCase):
"""Tests for list set operations."""
def test_list_union(self):
"""bibindex engine - list union"""
self.assertEqual([1, 2, 3, 4],
bibindex_engine.list_union([1, 2, 3],
[1, 3, 4]))
+ def test_list_unique(self):
+ """bibindex engine - list unique"""
+ self.assertEqual([1, 2, 3],
+ bibindex_engine.list_unique([1, 2, 3, 3, 1, 2]))
+
+
class TestWashIndexTerm(InvenioTestCase):
"""Tests for washing index terms, useful for both searching and indexing."""
def test_wash_index_term_short(self):
"""bibindex engine - wash index term, short word"""
self.assertEqual("ellis",
bibindex_engine.wash_index_term("ellis"))
def test_wash_index_term_long(self):
"""bibindex engine - wash index term, long word"""
self.assertEqual(50*"e",
bibindex_engine.wash_index_term(1234*"e"))
def test_wash_index_term_case(self):
"""bibindex engine - wash index term, lower the case"""
self.assertEqual("ellis",
bibindex_engine.wash_index_term("Ellis"))
def test_wash_index_term_unicode(self):
"""bibindex engine - wash index term, unicode"""
self.assertEqual("ελληνικό αλφάβητο",
bibindex_engine.wash_index_term("Ελληνικό αλφάβητο"))
class TestGetWordsFromPhrase(InvenioTestCase):
"""Tests for getting words from phrase."""
+ def setUp(self):
+ self._TOKENIZERS = load_tokenizers()
+
def test_easy_phrase(self):
"""bibindex engine - getting words from `word1 word2' phrase"""
test_phrase = 'word1 word2'
l_words_expected = ['word1', 'word2']
- l_words_obtained = bibindex_engine.get_words_from_phrase(test_phrase)
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]()
+ l_words_obtained = tokenizer.tokenize_for_words(test_phrase)
l_words_obtained.sort()
self.assertEqual(l_words_obtained, l_words_expected)
def test_stemming_phrase(self):
"""bibindex engine - getting stemmed words from l'anthropologie"""
test_phrase = "l'anthropologie"
l_words_not_expected = ['anthropolog', 'l', "l'anthropolog", "l'anthropologi"]
l_words_expected = ['anthropologi', 'l', "l'anthropologi"]
- l_words_obtained = bibindex_engine.get_words_from_phrase(test_phrase, 'en')
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]('en')
+ l_words_obtained = tokenizer.tokenize_for_words(test_phrase)
l_words_obtained.sort()
self.assertNotEqual(l_words_obtained, l_words_not_expected)
self.assertEqual(l_words_obtained, l_words_expected)
+ def test_remove_stopwords_phrase(self):
+ """bibindex engine - test for removing stopwords from 'theory of' """
+ test_phrase = 'theory of'
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"](remove_stopwords='stopwords.kb')
+ words_obtained = tokenizer.tokenize_for_words(test_phrase)
+ words_expected = ['theory']
+ self.assertEqual(words_expected, words_obtained)
+
+ def test_stemming_and_remove_stopwords_phrase(self):
+ """bibindex engine - test for removing stopwords and stemming from 'beams of photons' """
+ test_phrase = 'beams of photons'
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]('en', remove_stopwords='stopwords.kb')
+ words_obtained = tokenizer.tokenize_for_words(test_phrase)
+ words_expected = ['beam','photon']
+ self.assertEqual(words_expected, words_obtained)
+
def test_dashed_phrase(self):
"""bibindex engine - getting words from `word1-word2' phrase"""
test_phrase = 'word1-word2'
l_words_expected = ['word1', 'word1-word2', 'word2']
- l_words_obtained = bibindex_engine.get_words_from_phrase(test_phrase)
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]()
+ l_words_obtained = tokenizer.tokenize_for_words(test_phrase)
l_words_obtained.sort()
self.assertEqual(l_words_obtained, l_words_expected)
def test_arXiv_good(self):
"""bibindex engine - getting words from `arXiv:1007.5048' phrase"""
test_phrase = 'arXiv:1007.5048'
l_words_expected = ['1007', '1007.5048', '5048', 'arxiv', 'arxiv:1007.5048']
- l_words_obtained = bibindex_engine.get_words_from_phrase(test_phrase)
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]()
+ l_words_obtained = tokenizer.tokenize_for_words(test_phrase)
l_words_obtained.sort()
self.assertEqual(l_words_obtained, l_words_expected)
def test_arXiv_bad(self):
"""bibindex engine - getting words from `arXiv:1xy7.5z48' phrase"""
test_phrase = 'arXiv:1xy7.5z48'
l_words_expected = ['1xy7', '5z48', 'arxiv', 'arxiv:1xy7.5z48']
- l_words_obtained = bibindex_engine.get_words_from_phrase(test_phrase)
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"]()
+ l_words_obtained = tokenizer.tokenize_for_words(test_phrase)
l_words_obtained.sort()
self.assertEqual(l_words_obtained, l_words_expected)
+
+class TestGetPairsFromPhrase(InvenioTestCase):
+ """Tests for getting pairs from phrase."""
+
+ def setUp(self):
+ self._TOKENIZERS = load_tokenizers()
+
+ def test_remove_stopwords_phrase_first(self):
+ """bibindex engine - getting pairs from phrase with stopwords removed first"""
+ test_phrase = 'Matrices on a point as the theory of everything'
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"](remove_stopwords='stopwords.kb')
+ pairs_obtained = tokenizer.tokenize_for_pairs(test_phrase)
+ pairs_expected = ['matrices theory']
+ self.assertEqual(pairs_expected, pairs_obtained)
+
+ def test_remove_stopwords_phrase_second(self):
+ """bibindex engine - getting pairs from phrase with stopwords removed second"""
+ test_phrase = 'Nonlocal action for long-distance'
+ tokenizer = self._TOKENIZERS["BibIndexDefaultTokenizer"](remove_stopwords='stopwords.kb')
+ pairs_obtained = tokenizer.tokenize_for_pairs(test_phrase)
+ pairs_expected = ['nonlocal action', 'long distance', 'action long']
+ self.assertEqual(pairs_expected, pairs_obtained)
+
+
class TestGetWordsFromDateTag(InvenioTestCase):
"""Tests for getting words for date-like tag."""
+ def setUp(self):
+ self._TOKENIZERS = load_tokenizers()
+
def test_dateindex_yyyy(self):
"""bibindex engine - index date-like tag, yyyy"""
+ tokenizer = self._TOKENIZERS["BibIndexYearTokenizer"]()
self.assertEqual(["2010"],
- bibindex_engine.get_words_from_date_tag("2010"))
+ tokenizer.get_words_from_date_tag("2010"))
def test_dateindex_yyyy_mm(self):
"""bibindex engine - index date-like tag, yyyy-mm"""
+ tokenizer = self._TOKENIZERS["BibIndexYearTokenizer"]()
self.assertEqual(["2010-03", "2010"],
- bibindex_engine.get_words_from_date_tag("2010-03"))
+ tokenizer.get_words_from_date_tag("2010-03"))
def test_dateindex_yyyy_mm_dd(self):
"""bibindex engine - index date-like tag, yyyy-mm-dd"""
+ tokenizer = self._TOKENIZERS["BibIndexYearTokenizer"]()
self.assertEqual(["2010-03-08", "2010", "2010-03", ],
- bibindex_engine.get_words_from_date_tag("2010-03-08"))
+ tokenizer.get_words_from_date_tag("2010-03-08"))
def test_dateindex_freetext(self):
"""bibindex engine - index date-like tag, yyyy-mm-dd"""
+ tokenizer = self._TOKENIZERS["BibIndexYearTokenizer"]()
self.assertEqual(["dd", "mon", "yyyy"],
- bibindex_engine.get_words_from_date_tag("dd mon yyyy"))
+ tokenizer.get_words_from_date_tag("dd mon yyyy"))
class TestGetAuthorFamilyNameWords(InvenioTestCase):
"""Tests for getting family name words from author names."""
+ def setUp(self):
+ self._TOKENIZERS = load_tokenizers()
+
def test_authornames_john_doe(self):
"""bibindex engine - get author family name words for John Doe"""
+ tokenizer = self._TOKENIZERS["BibIndexAuthorTokenizer"]()
self.assertEqual(['doe',],
- bibindex_engine.get_author_family_name_words_from_phrase('John Doe'))
+ tokenizer.get_author_family_name_words_from_phrase('John Doe'))
def test_authornames_doe_john(self):
"""bibindex engine - get author family name words for Doe, John"""
+ tokenizer = self._TOKENIZERS["BibIndexAuthorTokenizer"]()
self.assertEqual(['doe',],
- bibindex_engine.get_author_family_name_words_from_phrase('Doe, John'))
+ tokenizer.get_author_family_name_words_from_phrase('Doe, John'))
def test_authornames_campbell_wilson(self):
"""bibindex engine - get author family name words for Campbell-Wilson, D"""
+ tokenizer = self._TOKENIZERS["BibIndexAuthorTokenizer"]()
self.assertEqual(['campbell', 'wilson', 'campbell-wilson'],
- bibindex_engine.get_author_family_name_words_from_phrase('Campbell-Wilson, D'))
+ tokenizer.get_author_family_name_words_from_phrase('Campbell-Wilson, D'))
TEST_SUITE = make_test_suite(TestListSetOperations,
TestWashIndexTerm,
TestGetWordsFromPhrase,
+ TestGetPairsFromPhrase,
TestGetWordsFromDateTag,
- TestGetAuthorFamilyNameWords)
+ TestGetAuthorFamilyNameWords,)
if __name__ == "__main__":
run_test_suite(TEST_SUITE)
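The new `TestGetPairsFromPhrase` tests above expect word pairs to be formed from adjacent surviving words after hyphen splitting and stopword removal. A rough standalone sketch of that behaviour (with a toy stopword set standing in for the `stopwords.kb` knowledge base, and my own function name rather than Invenio's tokenizer API):

```python
import re

# Toy stopword set standing in for the stopwords.kb knowledge base.
STOPWORDS = {'for', 'of', 'the', 'a', 'on', 'as'}

def tokenize_for_pairs(phrase, stopwords=STOPWORDS):
    """Sketch: lowercase, split on non-alphanumerics (so 'long-distance'
    yields two words), drop stopwords, emit each adjacent word pair."""
    words = [w for w in re.split(r'[^a-z0-9]+', phrase.lower()) if w]
    words = [w for w in words if w not in stopwords]
    return ['%s %s' % (words[i], words[i + 1]) for i in range(len(words) - 1)]

pairs = tokenize_for_pairs('Nonlocal action for long-distance')
print(sorted(pairs))
```

This reproduces the pair set expected by `test_remove_stopwords_phrase_second`, though not necessarily in Invenio's ordering.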
diff --git a/modules/bibindex/lib/bibindex_engine_utils.py b/modules/bibindex/lib/bibindex_engine_utils.py
new file mode 100644
index 000000000..20028e522
--- /dev/null
+++ b/modules/bibindex/lib/bibindex_engine_utils.py
@@ -0,0 +1,307 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+"""bibindex_engine_utils: some useful regular expressions for tokenizers
+ and several helper functions.
+"""
+
+
+import re
+import sys
+import os
+
+from invenio.dbquery import run_sql, \
+ DatabaseError
+from invenio.bibtask import write_message
+from invenio.search_engine_utils import get_fieldvalues
+from invenio.config import \
+ CFG_BIBINDEX_CHARS_PUNCTUATION, \
+ CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS
+from invenio.pluginutils import PluginContainer
+from invenio.bibindex_engine_config import CFG_BIBINDEX_TOKENIZERS_PATH
+
+
+latex_formula_re = re.compile(r'\$.*?\$|\\\[.*?\\\]')
+phrase_delimiter_re = re.compile(r'[\.:;\?\!]')
+space_cleaner_re = re.compile(r'\s+')
+re_block_punctuation_begin = re.compile(r"^" + CFG_BIBINDEX_CHARS_PUNCTUATION + "+")
+re_block_punctuation_end = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION + "+$")
+re_punctuation = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION)
+re_separators = re.compile(CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS)
+re_arxiv = re.compile(r'^arxiv:\d\d\d\d\.\d\d\d\d')
+
+re_pattern_fuzzy_author_trigger = re.compile(r'[\s\,\.]')
+# FIXME: re_pattern_fuzzy_author_trigger could be removed and an
+# BibAuthorID API function could be called instead after we
+# double-check that there are no circular imports.
+
+
+
+def load_tokenizers():
+ """
+ Load all the bibindex tokenizers and return them.
+ """
+ return PluginContainer(os.path.join(CFG_BIBINDEX_TOKENIZERS_PATH, 'BibIndex*.py'))
+
+
+
+def get_all_index_names_and_column_values(column_name):
+ """Returns a list of (name, column_value) tuples for all defined word indexes.
+ Returns an empty list if no indexes are defined or if the given
+ column name does not exist.
+ Example: output=[('global', something), ('title', something)]."""
+ out = []
+ query = """SELECT name, %s FROM idxINDEX""" % column_name
+ try:
+ res = run_sql(query)
+ for row in res:
+ out.append((row[0], row[1]))
+ except DatabaseError:
+ write_message("Exception caught for SQL statement: %s; column %s might not exist" % (query, column_name), sys.stderr)
+ return out
+
+
+
+def author_name_requires_phrase_search(p):
+ """
+ Detect whether author query pattern p requires phrase search.
+ Notably, look for presence of spaces and commas.
+ """
+ if re_pattern_fuzzy_author_trigger.search(p):
+ return True
+ return False
+
+
+def get_field_count(recID, tags):
+ """
+ Return number of field instances having TAGS in record RECID.
+
+ @param recID: record ID
+ @type recID: int
+ @param tags: list of tags to count, e.g. ['100__a', '700__a']
+ @type tags: list
+ @return: number of tags present in record
+ @rtype: int
+ @note: Works internally via getting field values, which may not be
+ very efficient. Could use counts only, or else retrieve stored
+ recstruct format of the record and walk through it.
+ """
+ out = 0
+ for tag in tags:
+ out += len(get_fieldvalues(recID, tag))
+ return out
+
+
+def run_sql_drop_silently(query):
+ """
+ An SQL DROP statement with an IF EXISTS clause generates a
+ warning when the table does not exist. To mute the warning,
+ we remove IF EXISTS and instead catch the SQL exception that
+ tells us the table does not exist.
+ """
+ try:
+ query = query.replace(" IF EXISTS", "")
+ run_sql(query)
+ except Exception, e:
+ if str(e).find("Unknown table") > -1:
+ pass
+ else:
+ raise e
+
+
+def get_idx_indexer(name):
+ """Returns the indexer field value"""
+ try:
+ return run_sql("SELECT indexer FROM idxINDEX WHERE NAME=%s", (name, ))[0][0]
+ except StandardError, e:
+ return (0, e)
+
+
+def get_all_indexes(virtual=True, with_ids=False):
+ """Returns the list of the names of all defined word indexes.
+ Returns an empty list if no indexes are defined.
+ @param virtual: if True, the function will also return virtual indexes
+ @param with_ids: if True, the function will also return the IDs of the found indexes
+ Example: output=['global', 'author']."""
+ out = []
+ if virtual:
+ query = """SELECT %s name FROM idxINDEX"""
+ query = query % (with_ids and "id," or "")
+ else:
+ query = """SELECT %s w.name FROM idxINDEX AS w
+ WHERE w.id NOT IN (SELECT DISTINCT id_virtual FROM idxINDEX_idxINDEX)"""
+ query = query % (with_ids and "w.id," or "")
+ res = run_sql(query)
+ if with_ids:
+ out = [row for row in res]
+ else:
+ out = [row[0] for row in res]
+ return out
+
+
+def get_all_virtual_indexes():
+ """ Returns all defined 'virtual' indexes. """
+ query = """SELECT DISTINCT v.id_virtual, w.name FROM idxINDEX_idxINDEX AS v,
+ idxINDEX AS w
+ WHERE v.id_virtual=w.id"""
+ res = run_sql(query)
+ return res
+
+
+def get_index_virtual_indexes(index_id):
+ """Returns 'virtual' indexes that should be indexed together with
+ given index."""
+ query = """SELECT v.id_virtual, w.name FROM idxINDEX_idxINDEX AS v,
+ idxINDEX AS w
+ WHERE v.id_virtual=w.id AND
+ v.id_normal=%s"""
+ res = run_sql(query, (index_id,))
+ return res
+
+
+def is_index_virtual(index_id):
+ """Checks if index is virtual"""
+ query = """SELECT id_virtual FROM idxINDEX_idxINDEX
+ WHERE id_virtual=%s"""
+ res = run_sql(query, (index_id,))
+ if res:
+ return True
+ return False
+
+
+def get_virtual_index_building_blocks(index_id):
+ """Returns the indexes that make up the virtual index with the given index_id.
+ If index_id refers to a normal (non-virtual) index, returns an
+ empty tuple.
+ """
+ query = """SELECT v.id_normal, w.name FROM idxINDEX_idxINDEX AS v,
+ idxINDEX AS w
+ WHERE v.id_normal=w.id AND
+ v.id_virtual=%s"""
+ res = run_sql(query, (index_id,))
+ return res
+
+
+def get_index_id_from_index_name(index_name):
+ """Returns the words/phrase index id for INDEXNAME.
+ Returns 0 in case there is no words table for this index.
+ Example: index_name='author', output=4."""
+ out = 0
+ query = """SELECT w.id FROM idxINDEX AS w
+ WHERE w.name=%s LIMIT 1"""
+ res = run_sql(query, (index_name,), 1)
+ if res:
+ out = res[0][0]
+ return out
+
+
+def get_index_name_from_index_id(index_id):
+ """Returns the words/phrase index name for INDEXID.
+ Returns '' in case there is no words table for this index_id.
+ Example: index_id=9, output='fulltext'."""
+ res = run_sql("SELECT name FROM idxINDEX WHERE id=%s", (index_id,))
+ if res:
+ return res[0][0]
+ return ''
+
+
+def get_field_tags(field):
+ """Returns a list of MARC tags for the field code 'field'.
+ Returns empty list in case of error.
+ Example: field='author', output=['100__%','700__%']."""
+ out = []
+ query = """SELECT t.value FROM tag AS t, field_tag AS ft, field AS f
+ WHERE f.code=%s AND ft.id_field=f.id AND t.id=ft.id_tag
+ ORDER BY ft.score DESC"""
+ res = run_sql(query, (field,))
+ return [row[0] for row in res]
+
+
+def get_tag_indexes(tag, virtual=True):
+ """Returns the index names and ids corresponding to the given tag
+ @param tag: MARC tag in one of the forms:
+ 'xx%', 'xxx', 'xxx__a', 'xxx__%'
+ @param virtual: if True function will also return virtual indexes"""
+ tag2 = tag[0:2] + "%" #for tags in the form: 10%
+ tag3 = tag[:-1] + "%" #for tags in the form: 100__%
+ query = """SELECT DISTINCT w.id,w.name FROM idxINDEX AS w,
+ idxINDEX_field AS wf,
+ field_tag AS ft,
+ tag as t
+ WHERE (t.value=%%s OR
+ t.value=%%s OR
+ %s) AND
+ t.id=ft.id_tag AND
+ ft.id_field=wf.id_field AND
+ wf.id_idxINDEX=w.id"""
+ if tag[-1] == "%":
+ missing_piece = "t.value LIKE %s"
+ elif tag[-1] != "%" and len(tag) == 3:
+ missing_piece = "t.value LIKE %s"
+ tag3 = tag + "%" #for all tags which start from 'tag'
+ else:
+ missing_piece = "t.value=%s"
+ query = query % missing_piece
+ res = run_sql(query, (tag, tag2, tag3))
+ if res:
+ if virtual:
+ response = list(res)
+ index_ids = map(str, zip(*res)[0])
+ query = """SELECT DISTINCT v.id_virtual,w.name FROM idxINDEX_idxINDEX AS v,
+ idxINDEX as w
+ WHERE v.id_virtual=w.id AND
+ v.id_normal IN ("""
+ query = query + ", ".join(index_ids) + ")"
+ response.extend(run_sql(query))
+ return tuple(response)
+ return res
+ return None
+
+
+def get_index_tags(indexname, virtual=True):
+ """Returns the list of tags that are indexed inside INDEXNAME.
+ Returns empty list in case there are no tags indexed in this index.
+ Note: uses get_field_tags() defined before.
+ Example: field='author', output=['100__%', '700__%']."""
+ out = []
+ query = """SELECT f.code FROM idxINDEX AS w, idxINDEX_field AS wf,
+ field AS f WHERE w.name=%s AND w.id=wf.id_idxINDEX
+ AND f.id=wf.id_field"""
+ res = run_sql(query, (indexname,))
+ for row in res:
+ out.extend(get_field_tags(row[0]))
+ if not out and virtual:
+ index_id = get_index_id_from_index_name(indexname)
+ try:
+ dependent_indexes = map(str, zip(*get_virtual_index_building_blocks(index_id))[0])
+ except IndexError:
+ return out
+ tags = set()
+ query = """SELECT DISTINCT f.code FROM idxINDEX AS w, idxINDEX_field AS wf, field AS f
+ WHERE w.id=wf.id_idxINDEX AND
+ f.id=wf.id_field AND
+ w.id IN ("""
+ query = query + ", ".join(dependent_indexes) + ")"
+ res = run_sql(query)
+ for row in res:
+ tags |= set(get_field_tags(row[0]))
+ return list(tags)
+ return out
+
+
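The branch logic in get_tag_indexes above derives two auxiliary patterns from the input tag before hitting the database. A minimal Python sketch of just that derivation, with the DB lookup omitted (the helper name tag_match_patterns is hypothetical, introduced only for illustration):

```python
def tag_match_patterns(tag):
    """Return the SQL comparison and the three candidate patterns
    used by get_tag_indexes for a MARC tag.

    Mirrors the branches above: 'xx%'/'xxx__%' keep a LIKE
    comparison, a bare 'xxx' matches every tag with that prefix,
    and a full tag such as 'xxx__a' is matched exactly.
    """
    tag2 = tag[0:2] + "%"    # for tags in the form: 10%
    tag3 = tag[:-1] + "%"    # for tags in the form: 100__%
    if tag[-1] == "%":
        comparison = "t.value LIKE %s"
    elif len(tag) == 3:
        comparison = "t.value LIKE %s"
        tag3 = tag + "%"     # for all tags which start from 'tag'
    else:
        comparison = "t.value=%s"
    return comparison, (tag, tag2, tag3)
```

For example, '100__a' yields an exact comparison, while '100' widens to the prefix pattern '100%'.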
diff --git a/modules/bibindex/lib/bibindex_engine_washer.py b/modules/bibindex/lib/bibindex_engine_washer.py
index 95181f68e..f40242cbe 100644
--- a/modules/bibindex/lib/bibindex_engine_washer.py
+++ b/modules/bibindex/lib/bibindex_engine_washer.py
@@ -1,143 +1,170 @@
# -*- coding:utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
import re
from invenio.bibindex_engine_stemmer import stem
from invenio.bibindex_engine_stopwords import is_stopword
-from invenio.config import CFG_BIBINDEX_MIN_WORD_LENGTH
+from invenio.config import CFG_BIBINDEX_MIN_WORD_LENGTH, \
+ CFG_ETCDIR
re_pattern_fuzzy_author_dots = re.compile(r'[\.\-]+')
re_pattern_fuzzy_author_spaces = re.compile(r'\s+')
re_pattern_author_canonical_id = re.compile(r'\.[0-9]+$')
+
re_unicode_lowercase_a = re.compile(unicode(r"(?u)[áàäâãå]", "utf-8"))
re_unicode_lowercase_ae = re.compile(unicode(r"(?u)[æ]", "utf-8"))
re_unicode_lowercase_e = re.compile(unicode(r"(?u)[éèëê]", "utf-8"))
re_unicode_lowercase_i = re.compile(unicode(r"(?u)[íìïî]", "utf-8"))
re_unicode_lowercase_o = re.compile(unicode(r"(?u)[óòöôõø]", "utf-8"))
re_unicode_lowercase_u = re.compile(unicode(r"(?u)[úùüû]", "utf-8"))
re_unicode_lowercase_y = re.compile(unicode(r"(?u)[ýÿ]", "utf-8"))
re_unicode_lowercase_c = re.compile(unicode(r"(?u)[çć]", "utf-8"))
re_unicode_lowercase_n = re.compile(unicode(r"(?u)[ñ]", "utf-8"))
re_unicode_uppercase_a = re.compile(unicode(r"(?u)[ÁÀÄÂÃÅ]", "utf-8"))
re_unicode_uppercase_ae = re.compile(unicode(r"(?u)[Æ]", "utf-8"))
re_unicode_uppercase_e = re.compile(unicode(r"(?u)[ÉÈËÊ]", "utf-8"))
re_unicode_uppercase_i = re.compile(unicode(r"(?u)[ÍÌÏÎ]", "utf-8"))
re_unicode_uppercase_o = re.compile(unicode(r"(?u)[ÓÒÖÔÕØ]", "utf-8"))
re_unicode_uppercase_u = re.compile(unicode(r"(?u)[ÚÙÜÛ]", "utf-8"))
re_unicode_uppercase_y = re.compile(unicode(r"(?u)[Ý]", "utf-8"))
re_unicode_uppercase_c = re.compile(unicode(r"(?u)[ÇĆ]", "utf-8"))
re_unicode_uppercase_n = re.compile(unicode(r"(?u)[Ñ]", "utf-8"))
re_latex_lowercase_a = re.compile("\\\\[\"H'`~^vu=k]\{?a\}?")
re_latex_lowercase_ae = re.compile("\\\\ae\\{\\}?")
re_latex_lowercase_e = re.compile("\\\\[\"H'`~^vu=k]\\{?e\\}?")
re_latex_lowercase_i = re.compile("\\\\[\"H'`~^vu=k]\\{?i\\}?")
re_latex_lowercase_o = re.compile("\\\\[\"H'`~^vu=k]\\{?o\\}?")
re_latex_lowercase_u = re.compile("\\\\[\"H'`~^vu=k]\\{?u\\}?")
re_latex_lowercase_y = re.compile("\\\\[\"']\\{?y\\}?")
re_latex_lowercase_c = re.compile("\\\\['uc]\\{?c\\}?")
re_latex_lowercase_n = re.compile("\\\\[c'~^vu]\\{?n\\}?")
re_latex_uppercase_a = re.compile("\\\\[\"H'`~^vu=k]\\{?A\\}?")
re_latex_uppercase_ae = re.compile("\\\\AE\\{?\\}?")
re_latex_uppercase_e = re.compile("\\\\[\"H'`~^vu=k]\\{?E\\}?")
re_latex_uppercase_i = re.compile("\\\\[\"H'`~^vu=k]\\{?I\\}?")
re_latex_uppercase_o = re.compile("\\\\[\"H'`~^vu=k]\\{?O\\}?")
re_latex_uppercase_u = re.compile("\\\\[\"H'`~^vu=k]\\{?U\\}?")
re_latex_uppercase_y = re.compile("\\\\[\"']\\{?Y\\}?")
re_latex_uppercase_c = re.compile("\\\\['uc]\\{?C\\}?")
re_latex_uppercase_n = re.compile("\\\\[c'~^vu]\\{?N\\}?")
def lower_index_term(term):
"""
Return safely lowered index term TERM. This is done by converting
to UTF-8 first, because standard Python lower() function is not
UTF-8 safe. To be called by both the search engine and the
indexer when appropriate (e.g. before stemming).
In case of problems with UTF-8 compliance, this function raises
UnicodeDecodeError, so the client code may want to catch it.
"""
return unicode(term, 'utf-8').lower().encode('utf-8')
latex_markup_re = re.compile(r"\\begin(\[.+?\])?\{.+?\}|\\end\{.+?}|\\\w+(\[.+?\])?\{(?P<inside1>.*?)\}|\{\\\w+ (?P<inside2>.*?)\}")
def remove_latex_markup(phrase):
ret_phrase = ''
index = 0
for match in latex_markup_re.finditer(phrase):
ret_phrase += phrase[index:match.start()]
ret_phrase += match.group('inside1') or match.group('inside2') or ''
index = match.end()
ret_phrase += phrase[index:]
return ret_phrase
-def apply_stemming_and_stopwords_and_length_check(word, stemming_language):
- """Return WORD after applying stemming and stopword and length checks.
- See the config file in order to influence these.
+
+def apply_stemming(word, stemming_language):
+ """Returns word after applying stemming (if stemming language is set).
+ The stemming language can be changed in the database.
+
+ @param word: word to be checked
+ @type word: str
+ @param stemming_language: abbreviation of language or None
+ @type stemming_language: str
"""
- # now check against stopwords:
- if is_stopword(word):
- return ""
- # finally check the word length:
- if len(word) < CFG_BIBINDEX_MIN_WORD_LENGTH:
- return ""
- # stem word, when configured so:
if stemming_language:
word = stem(word, stemming_language)
return word
+
+def remove_stopwords(word, stopwords_kb=False):
+ """Returns word after stopword check.
+ One must specify the name of the knowledge base.
+
+ @param word: word to be checked
+ @type word: str
+ @param stopwords_kb: name of the stopwords knowledge base
+ @type stopwords_kb: str
+ """
+ if stopwords_kb:
+ stopwords_path = CFG_ETCDIR + "/bibrank/" + stopwords_kb
+ if is_stopword(word, stopwords_path):
+ return ""
+ return word
+
+def length_check(word):
+ """Returns word after length check.
+
+ @param word: word to be checked
+ @type word: str
+ """
+ if len(word) < CFG_BIBINDEX_MIN_WORD_LENGTH:
+ return ""
+ return word
+
def wash_index_term(term, max_char_length=50, lower_term=True):
"""
Return washed form of the index term TERM that would be suitable
for storing into idxWORD* tables. I.e., lower the TERM if
LOWER_TERM is True, and truncate it safely to MAX_CHAR_LENGTH
UTF-8 characters (meaning, in principle, 4*MAX_CHAR_LENGTH bytes).
The function works by an internal conversion of TERM, when needed,
from its input Python UTF-8 binary string format into Python
Unicode format, and then truncating it safely to the given number
of UTF-8 characters, without possible mis-truncation in the middle
of a multi-byte UTF-8 character that could otherwise happen if we
worked with the UTF-8 binary representation directly.
Note that MAX_CHAR_LENGTH corresponds to the length of the term
column in idxINDEX* tables.
"""
if lower_term:
washed_term = unicode(term, 'utf-8').lower()
else:
washed_term = unicode(term, 'utf-8')
if len(washed_term) <= max_char_length:
# no need to truncate the term, because it will fit
# nicely even if it uses four-byte UTF-8 characters
return washed_term.encode('utf-8')
else:
# truncate the term in a safe position:
return washed_term[:max_char_length].encode('utf-8')
def wash_author_name(p):
"""
Wash author name suitable for author searching. Notably, replace
dots and hyphens with spaces, and collapse spaces.
"""
if re_pattern_author_canonical_id.search(p):
# we have canonical author ID form, so ignore all washing
return p
out = re_pattern_fuzzy_author_dots.sub(" ", p)
out = re_pattern_fuzzy_author_spaces.sub(" ", out)
return out.strip()
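The washer helpers above are pure string transforms, so their behaviour is easy to check in isolation. A sketch of remove_latex_markup and wash_author_name with the regexes copied from this hunk (ported to Python 3 for the sketch, i.e. the Python 2 byte-string handling is dropped):

```python
import re

latex_markup_re = re.compile(
    r"\\begin(\[.+?\])?\{.+?\}|\\end\{.+?}"
    r"|\\\w+(\[.+?\])?\{(?P<inside1>.*?)\}|\{\\\w+ (?P<inside2>.*?)\}")
re_fuzzy_author_dots = re.compile(r'[\.\-]+')
re_fuzzy_author_spaces = re.compile(r'\s+')
re_author_canonical_id = re.compile(r'\.[0-9]+$')

def remove_latex_markup(phrase):
    """Strip LaTeX commands, keeping the text inside their braces."""
    ret_phrase = ''
    index = 0
    for match in latex_markup_re.finditer(phrase):
        ret_phrase += phrase[index:match.start()]
        ret_phrase += match.group('inside1') or match.group('inside2') or ''
        index = match.end()
    return ret_phrase + phrase[index:]

def wash_author_name(p):
    """Replace dots/hyphens with spaces and collapse whitespace,
    leaving canonical author IDs (ending in '.<digits>') untouched."""
    if re_author_canonical_id.search(p):
        return p
    out = re_fuzzy_author_dots.sub(" ", p)
    return re_fuzzy_author_spaces.sub(" ", out).strip()
```

So washing "Ellis, J.-R." gives "Ellis, J R", while a canonical ID like "J.Ellis.1" passes through unchanged.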
diff --git a/modules/bibindex/lib/bibindex_fixtures.py b/modules/bibindex/lib/bibindex_fixtures.py
index 3c9310518..9fc953110 100644
--- a/modules/bibindex/lib/bibindex_fixtures.py
+++ b/modules/bibindex/lib/bibindex_fixtures.py
@@ -1,293 +1,568 @@
# -*- coding: utf-8 -*-
#
## This file is part of Invenio.
## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
from fixture import DataSet
from invenio.websearch_fixtures import FieldData
class IdxINDEXData(DataSet):
class IdxINDEX_1:
last_updated = None
description = u'This index contains words/phrases from global fields.'
stemming_language = u''
id = 1
indexer = u'native'
name = u'global'
+ synonym_kbrs = u'INDEX-SYNONYM-TITLE,exact'
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_2:
last_updated = None
description = u'This index contains words/phrases from collection identifiers fields.'
stemming_language = u''
id = 2
indexer = u'native'
name = u'collection'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_3:
last_updated = None
description = u'This index contains words/phrases from abstract fields.'
stemming_language = u''
id = 3
indexer = u'native'
name = u'abstract'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_4:
last_updated = None
description = u'This index contains fuzzy words/phrases from author fields.'
stemming_language = u''
id = 4
indexer = u'native'
name = u'author'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexAuthorTokenizer'
class IdxINDEX_5:
last_updated = None
description = u'This index contains words/phrases from keyword fields.'
stemming_language = u''
id = 5
indexer = u'native'
name = u'keyword'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_6:
last_updated = None
description = u'This index contains words/phrases from references fields.'
stemming_language = u''
id = 6
indexer = u'native'
name = u'reference'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_7:
last_updated = None
description = u'This index contains words/phrases from report numbers fields.'
stemming_language = u''
id = 7
indexer = u'native'
name = u'reportnumber'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_8:
last_updated = None
description = u'This index contains words/phrases from title fields.'
stemming_language = u''
id = 8
indexer = u'native'
name = u'title'
+ synonym_kbrs = u'INDEX-SYNONYM-TITLE,exact'
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_9:
last_updated = None
description = u'This index contains words/phrases from fulltext fields.'
stemming_language = u''
id = 9
indexer = u'native'
name = u'fulltext'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexFulltextTokenizer'
class IdxINDEX_10:
last_updated = None
description = u'This index contains words/phrases from year fields.'
stemming_language = u''
id = 10
indexer = u'native'
name = u'year'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexYearTokenizer'
class IdxINDEX_11:
last_updated = None
description = u'This index contains words/phrases from journal publication information fields.'
stemming_language = u''
id = 11
indexer = u'native'
name = u'journal'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexJournalTokenizer'
class IdxINDEX_12:
last_updated = None
description = u'This index contains words/phrases from collaboration name fields.'
stemming_language = u''
id = 12
indexer = u'native'
name = u'collaboration'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_13:
last_updated = None
description = u'This index contains words/phrases from institutional affiliation fields.'
stemming_language = u''
id = 13
indexer = u'native'
name = u'affiliation'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_14:
last_updated = None
description = u'This index contains exact words/phrases from author fields.'
stemming_language = u''
id = 14
indexer = u'native'
name = u'exactauthor'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_15:
last_updated = None
description = u'This index contains exact words/phrases from figure captions.'
stemming_language = u''
id = 15
indexer = u'native'
name = u'caption'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEX_16:
last_updated = None
description = u'This index contains fuzzy words/phrases from first author field.'
stemming_language = u''
id = 16
indexer = u'native'
name = u'firstauthor'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexAuthorTokenizer'
class IdxINDEX_17:
last_updated = None
description = u'This index contains exact words/phrases from first author field.'
stemming_language = u''
id = 17
indexer = u'native'
name = u'exactfirstauthor'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexExactAuthorTokenizer'
class IdxINDEX_18:
last_updated = None
description = u'This index contains number of authors of the record.'
stemming_language = u''
id = 18
indexer = u'native'
name = u'authorcount'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexAuthorCountTokenizer'
class IdxINDEX_19:
last_updated = None
description = u'This index contains exact words/phrases from title fields.'
stemming_language = u''
id = 19
indexer = u'native'
name = u'exacttitle'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
+
+ class IdxINDEX_20:
+ last_updated = None
+ description = u'This index contains words/phrases from author authority records.'
+ stemming_language = u''
+ id = 20
+ indexer = u'native'
+ name = u'authorityauthor'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexAuthorTokenizer'
+
+ class IdxINDEX_21:
+ last_updated = None
+ description = u'This index contains words/phrases from institution authority records.'
+ stemming_language = u''
+ id = 21
+ indexer = u'native'
+ name = u'authorityinstitution'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
+
+ class IdxINDEX_22:
+ last_updated = None
+ description = u'This index contains words/phrases from journal authority records.'
+ stemming_language = u''
+ id = 22
+ indexer = u'native'
+ name = u'authorityjournal'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
+
+ class IdxINDEX_23:
+ last_updated = None
+ description = u'This index contains words/phrases from subject authority records.'
+ stemming_language = u''
+ id = 23
+ indexer = u'native'
+ name = u'authoritysubject'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
+
+ class IdxINDEX_24:
+ last_updated = None
+ description = u'This index contains number of copies of items in the library.'
+ stemming_language = u''
+ id = 24
+ indexer = u'native'
+ name = u'itemcount'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexItemCountTokenizer'
+
+ class IdxINDEX_25:
+ last_updated = None
+ description = u'This index contains extensions of files connected to records.'
+ stemming_language = u''
+ id = 25
+ indexer = u'native'
+ name = u'filetype'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexFiletypeTokenizer'
+
+ class IdxINDEX_26:
+ last_updated = None
+ description = u'This index contains words/phrases from miscellaneous fields.'
+ stemming_language = u''
+ id = 26
+ indexer = u'native'
+ name = u'miscellaneous'
+ synonym_kbrs = u''
+ remove_stopwords = u'No'
+ remove_html_markup = u'No'
+ remove_latex_markup = u'No'
+ tokenizer = u'BibIndexDefaultTokenizer'
class IdxINDEXFieldData(DataSet):
class IdxINDEXField_10_12:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_10.ref('id')
id_field = FieldData.Field_12.ref('id')
class IdxINDEXField_11_19:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_11.ref('id')
id_field = FieldData.Field_19.ref('id')
class IdxINDEXField_12_20:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_12.ref('id')
id_field = FieldData.Field_20.ref('id')
class IdxINDEXField_13_21:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_13.ref('id')
id_field = FieldData.Field_21.ref('id')
class IdxINDEXField_14_22:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_14.ref('id')
id_field = FieldData.Field_22.ref('id')
class IdxINDEXField_15_27:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_15.ref('id')
id_field = FieldData.Field_27.ref('id')
class IdxINDEXField_16_28:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_16.ref('id')
id_field = FieldData.Field_28.ref('id')
class IdxINDEXField_17_29:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_17.ref('id')
id_field = FieldData.Field_29.ref('id')
class IdxINDEXField_18_30:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_18.ref('id')
id_field = FieldData.Field_30.ref('id')
class IdxINDEXField_19_32:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_19.ref('id')
id_field = FieldData.Field_32.ref('id')
class IdxINDEXField_1_1:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_1.ref('id')
id_field = FieldData.Field_1.ref('id')
class IdxINDEXField_2_10:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_2.ref('id')
id_field = FieldData.Field_10.ref('id')
class IdxINDEXField_3_4:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_3.ref('id')
id_field = FieldData.Field_4.ref('id')
class IdxINDEXField_4_3:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_4.ref('id')
id_field = FieldData.Field_3.ref('id')
class IdxINDEXField_5_5:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_5.ref('id')
id_field = FieldData.Field_5.ref('id')
class IdxINDEXField_6_8:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_6.ref('id')
id_field = FieldData.Field_8.ref('id')
class IdxINDEXField_7_6:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_7.ref('id')
id_field = FieldData.Field_6.ref('id')
class IdxINDEXField_8_2:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_8.ref('id')
id_field = FieldData.Field_2.ref('id')
class IdxINDEXField_9_9:
regexp_alphanumeric_separators = u''
regexp_punctuation = u'[.,:;?!"]'
id_idxINDEX = IdxINDEXData.IdxINDEX_9.ref('id')
- id_field = FieldData.Field_9.ref('id')
\ No newline at end of file
+ id_field = FieldData.Field_9.ref('id')
+
+ class IdxINDEXField_20_33:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_20.ref('id')
+ id_field = FieldData.Field_33.ref('id')
+
+ class IdxINDEXField_21_34:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_21.ref('id')
+ id_field = FieldData.Field_34.ref('id')
+
+ class IdxINDEXField_22_35:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_22.ref('id')
+ id_field = FieldData.Field_35.ref('id')
+
+ class IdxINDEXField_23_36:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_23.ref('id')
+ id_field = FieldData.Field_36.ref('id')
+
+ class IdxINDEXField_24_37:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_24.ref('id')
+ id_field = FieldData.Field_37.ref('id')
+
+ class IdxINDEXField_25_38:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_25.ref('id')
+ id_field = FieldData.Field_38.ref('id')
+
+ class IdxINDEXField_26_39:
+ regexp_alphanumeric_separators = u''
+ regexp_punctuation = u'[.,:;?!"]'
+ id_idxINDEX = IdxINDEXData.IdxINDEX_26.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+
+class IdxINDEXIdxINDEXData(DataSet):
+
+ class IdxINDEXIdxINDEX_1_2:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_2.ref('id')
+
+ class IdxINDEXIdxINDEX_1_3:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_3.ref('id')
+
+ class IdxINDEXIdxINDEX_1_5:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_5.ref('id')
+
+ class IdxINDEXIdxINDEX_1_7:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_7.ref('id')
+
+ class IdxINDEXIdxINDEX_1_8:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_8.ref('id')
+
+ class IdxINDEXIdxINDEX_1_10:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_10.ref('id')
+
+ class IdxINDEXIdxINDEX_1_11:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_11.ref('id')
+
+ class IdxINDEXIdxINDEX_1_12:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_12.ref('id')
+
+ class IdxINDEXIdxINDEX_1_13:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_13.ref('id')
+
+ class IdxINDEXIdxINDEX_1_19:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_19.ref('id')
+
+ class IdxINDEXIdxINDEX_1_26:
+ id_virtual = IdxINDEXData.IdxINDEX_1.ref('id')
+ id_normal = IdxINDEXData.IdxINDEX_26.ref('id')
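The IdxINDEXIdxINDEXData fixtures above encode which normal indexes feed the virtual 'global' index (id 1). A minimal sketch of querying such a mapping in plain Python (the pair list is transcribed from the fixtures; the helper name building_blocks is hypothetical):

```python
# (id_virtual, id_normal) pairs transcribed from IdxINDEXIdxINDEXData
VIRTUAL_INDEX_PAIRS = [
    (1, 2), (1, 3), (1, 5), (1, 7), (1, 8),
    (1, 10), (1, 11), (1, 12), (1, 13), (1, 19), (1, 26),
]

def building_blocks(virtual_id, pairs=VIRTUAL_INDEX_PAIRS):
    """Return the normal-index ids aggregated by a virtual index."""
    return [normal for virtual, normal in pairs if virtual == virtual_id]
```

This mirrors what get_virtual_index_building_blocks resolves through the idxINDEX_idxINDEX table at run time.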
diff --git a/modules/bibindex/lib/bibindex_model.py b/modules/bibindex/lib/bibindex_model.py
index 53273ca0f..5f8defc5d 100644
--- a/modules/bibindex/lib/bibindex_model.py
+++ b/modules/bibindex/lib/bibindex_model.py
@@ -1,1685 +1,2290 @@
# -*- coding: utf-8 -*-
#
## This file is part of Invenio.
-## Copyright (C) 2011, 2012 CERN.
+## Copyright (C) 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
bibindex database models.
"""
# General imports.
from invenio.sqlalchemyutils import db
# Create your models here.
from invenio.bibedit_model import Bibrec
from invenio.websearch_model import Field
class IdxINDEX(db.Model):
"""Represents an IdxINDEX record."""
def __init__(self):
pass
__tablename__ = 'idxINDEX'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True)
name = db.Column(db.String(50), unique=True, nullable=False,
server_default='')
description = db.Column(db.String(255), nullable=False,
server_default='')
last_updated = db.Column(db.DateTime, nullable=False,
server_default='1900-01-01 00:00:00')
stemming_language = db.Column(db.String(10), nullable=False,
server_default='')
indexer = db.Column(db.String(10), nullable=False, server_default='native')
+ synonym_kbrs = db.Column(db.String(255), nullable=False, server_default='')
+ remove_stopwords = db.Column(db.String(255), nullable=False, server_default='')
+ remove_html_markup = db.Column(db.String(10), nullable=False, server_default='')
+ remove_latex_markup = db.Column(db.String(10), nullable=False, server_default='')
+ tokenizer = db.Column(db.String(50), nullable=False, server_default='')
+class IdxINDEXIdxINDEX(db.Model):
+ """Represents an IdxINDEXIdxINDEX record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxINDEX_idxINDEX'
+ id_virtual = db.Column(db.MediumInteger(9, unsigned=True),
+ db.ForeignKey(IdxINDEX.id), nullable=False,
+ server_default='0', primary_key=True)
+ id_normal = db.Column(db.MediumInteger(9, unsigned=True),
+ db.ForeignKey(IdxINDEX.id), nullable=False,
+ server_default='0', primary_key=True)
class IdxINDEXNAME(db.Model):
"""Represents an IdxINDEXNAME record."""
def __init__(self):
pass
__tablename__ = 'idxINDEXNAME'
id_idxINDEX = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(IdxINDEX.id), primary_key=True)
ln = db.Column(db.Char(5), primary_key=True,
server_default='')
type = db.Column(db.Char(3), primary_key=True,
server_default='sn')
value = db.Column(db.String(255), nullable=False)
idxINDEX = db.relationship(IdxINDEX, backref='names')
class IdxINDEXField(db.Model):
"""Represents an IdxINDEXField record."""
def __init__(self):
pass
__tablename__ = 'idxINDEX_field'
id_idxINDEX = db.Column(db.MediumInteger(9, unsigned=True),
db.ForeignKey(IdxINDEX.id), primary_key=True)
id_field = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Field.id),
primary_key=True)
regexp_punctuation = db.Column(db.String(255),
nullable=False,
server_default='[.,:;?!"]')
regexp_alphanumeric_separators = db.Column(db.String(255),
nullable=False) #FIX ME ,
#server_default='[!"#$\\%&''()*+,-./:;<=>?@[\\]^\\_`{|}~]')
idxINDEX = db.relationship(IdxINDEX, backref='fields')
field = db.relationship(Field, backref='idxINDEXes')
#GENERATED
class IdxPAIR01F(db.Model):
"""Represents a IdxPAIR01F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR01F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR01R(db.Model):
"""Represents a IdxPAIR01R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR01R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR02F(db.Model):
"""Represents a IdxPAIR02F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR02F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR02R(db.Model):
"""Represents a IdxPAIR02R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR02R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR03F(db.Model):
"""Represents a IdxPAIR03F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR03F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR03R(db.Model):
"""Represents a IdxPAIR03R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR03R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR04F(db.Model):
"""Represents a IdxPAIR04F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR04F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR04R(db.Model):
"""Represents a IdxPAIR04R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR04R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR05F(db.Model):
"""Represents a IdxPAIR05F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR05F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR05R(db.Model):
"""Represents a IdxPAIR05R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR05R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR06F(db.Model):
"""Represents a IdxPAIR06F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR06F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR06R(db.Model):
"""Represents a IdxPAIR06R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR06R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR07F(db.Model):
"""Represents a IdxPAIR07F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR07F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR07R(db.Model):
"""Represents a IdxPAIR07R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR07R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR08F(db.Model):
"""Represents a IdxPAIR08F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR08F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR08R(db.Model):
"""Represents a IdxPAIR08R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR08R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR09F(db.Model):
"""Represents a IdxPAIR09F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR09F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR09R(db.Model):
"""Represents a IdxPAIR09R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR09R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR10F(db.Model):
"""Represents a IdxPAIR10F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR10F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR10R(db.Model):
"""Represents a IdxPAIR10R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR10R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR11F(db.Model):
"""Represents a IdxPAIR11F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR11F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR11R(db.Model):
"""Represents a IdxPAIR11R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR11R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR12F(db.Model):
"""Represents a IdxPAIR12F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR12F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR12R(db.Model):
"""Represents a IdxPAIR12R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR12R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR13F(db.Model):
"""Represents a IdxPAIR13F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR13F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR13R(db.Model):
"""Represents a IdxPAIR13R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR13R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR14F(db.Model):
"""Represents a IdxPAIR14F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR14F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR14R(db.Model):
"""Represents a IdxPAIR14R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR14R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR15F(db.Model):
"""Represents a IdxPAIR15F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR15F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR15R(db.Model):
"""Represents a IdxPAIR15R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR15R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR16F(db.Model):
"""Represents a IdxPAIR16F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR16F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR16R(db.Model):
"""Represents a IdxPAIR16R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR16R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR17F(db.Model):
"""Represents a IdxPAIR17F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR17F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR17R(db.Model):
"""Represents a IdxPAIR17R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR17R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR18F(db.Model):
"""Represents a IdxPAIR18F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR18F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR18R(db.Model):
"""Represents a IdxPAIR18R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR18R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPAIR19F(db.Model):
"""Represents a IdxPAIR19F record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR19F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPAIR19R(db.Model):
"""Represents a IdxPAIR19R record."""
def __init__(self):
pass
__tablename__ = 'idxPAIR19R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
+class IdxPAIR20F(db.Model):
+ """Represents a IdxPAIR20F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR20F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR20R(db.Model):
+ """Represents a IdxPAIR20R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR20R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR21F(db.Model):
+ """Represents a IdxPAIR21F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR21F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR21R(db.Model):
+ """Represents a IdxPAIR21R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR21R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR22F(db.Model):
+ """Represents a IdxPAIR22F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR22F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR22R(db.Model):
+ """Represents a IdxPAIR22R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR22R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR23F(db.Model):
+ """Represents a IdxPAIR23F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR23F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR23R(db.Model):
+ """Represents a IdxPAIR23R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR23R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR24F(db.Model):
+ """Represents a IdxPAIR24F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR24F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR24R(db.Model):
+ """Represents a IdxPAIR24R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR24R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR25F(db.Model):
+ """Represents a IdxPAIR25F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR25F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR25R(db.Model):
+ """Represents a IdxPAIR25R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR25R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPAIR26F(db.Model):
+ """Represents a IdxPAIR26F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR26F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPAIR26R(db.Model):
+ """Represents a IdxPAIR26R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPAIR26R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
class IdxPHRASE01F(db.Model):
"""Represents a IdxPHRASE01F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE01F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE01R(db.Model):
"""Represents a IdxPHRASE01R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE01R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE02F(db.Model):
"""Represents a IdxPHRASE02F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE02F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE02R(db.Model):
"""Represents a IdxPHRASE02R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE02R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE03F(db.Model):
"""Represents a IdxPHRASE03F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE03F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE03R(db.Model):
"""Represents a IdxPHRASE03R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE03R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE04F(db.Model):
"""Represents a IdxPHRASE04F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE04F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE04R(db.Model):
"""Represents a IdxPHRASE04R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE04R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE05F(db.Model):
"""Represents a IdxPHRASE05F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE05F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE05R(db.Model):
"""Represents a IdxPHRASE05R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE05R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE06F(db.Model):
"""Represents a IdxPHRASE06F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE06F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE06R(db.Model):
"""Represents a IdxPHRASE06R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE06R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE07F(db.Model):
"""Represents a IdxPHRASE07F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE07F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE07R(db.Model):
"""Represents a IdxPHRASE07R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE07R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE08F(db.Model):
"""Represents a IdxPHRASE08F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE08F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE08R(db.Model):
"""Represents a IdxPHRASE08R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE08R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE09F(db.Model):
"""Represents a IdxPHRASE09F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE09F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE09R(db.Model):
"""Represents a IdxPHRASE09R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE09R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE10F(db.Model):
"""Represents a IdxPHRASE10F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE10F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE10R(db.Model):
"""Represents a IdxPHRASE10R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE10R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE11F(db.Model):
"""Represents a IdxPHRASE11F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE11F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE11R(db.Model):
"""Represents a IdxPHRASE11R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE11R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE12F(db.Model):
"""Represents a IdxPHRASE12F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE12F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE12R(db.Model):
"""Represents a IdxPHRASE12R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE12R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE13F(db.Model):
"""Represents a IdxPHRASE13F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE13F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE13R(db.Model):
"""Represents a IdxPHRASE13R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE13R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE14F(db.Model):
"""Represents a IdxPHRASE14F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE14F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE14R(db.Model):
"""Represents a IdxPHRASE14R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE14R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE15F(db.Model):
"""Represents a IdxPHRASE15F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE15F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE15R(db.Model):
"""Represents a IdxPHRASE15R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE15R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE16F(db.Model):
"""Represents a IdxPHRASE16F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE16F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE16R(db.Model):
"""Represents a IdxPHRASE16R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE16R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE17F(db.Model):
"""Represents a IdxPHRASE17F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE17F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE17R(db.Model):
"""Represents a IdxPHRASE17R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE17R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE18F(db.Model):
"""Represents a IdxPHRASE18F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE18F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE18R(db.Model):
"""Represents a IdxPHRASE18R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE18R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxPHRASE19F(db.Model):
"""Represents a IdxPHRASE19F record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE19F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(100), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxPHRASE19R(db.Model):
"""Represents a IdxPHRASE19R record."""
def __init__(self):
pass
__tablename__ = 'idxPHRASE19R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
+class IdxPHRASE20F(db.Model):
+ """Represents a IdxPHRASE20F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE20F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE20R(db.Model):
+ """Represents a IdxPHRASE20R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE20R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE21F(db.Model):
+ """Represents a IdxPHRASE21F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE21F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE21R(db.Model):
+ """Represents a IdxPHRASE21R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE21R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE22F(db.Model):
+ """Represents a IdxPHRASE22F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE22F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE22R(db.Model):
+ """Represents a IdxPHRASE22R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE22R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE23F(db.Model):
+ """Represents a IdxPHRASE23F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE23F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE23R(db.Model):
+ """Represents a IdxPHRASE23R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE23R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE24F(db.Model):
+ """Represents a IdxPHRASE24F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE24F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE24R(db.Model):
+ """Represents a IdxPHRASE24R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE24R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE25F(db.Model):
+ """Represents a IdxPHRASE25F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE25F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE25R(db.Model):
+ """Represents a IdxPHRASE25R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE25R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxPHRASE26F(db.Model):
+ """Represents a IdxPHRASE26F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE26F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(100), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxPHRASE26R(db.Model):
+ """Represents a IdxPHRASE26R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxPHRASE26R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
class IdxWORD01F(db.Model):
"""Represents a IdxWORD01F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD01F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD01R(db.Model):
"""Represents a IdxWORD01R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD01R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD02F(db.Model):
"""Represents a IdxWORD02F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD02F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD02R(db.Model):
"""Represents a IdxWORD02R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD02R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD03F(db.Model):
"""Represents a IdxWORD03F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD03F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD03R(db.Model):
"""Represents a IdxWORD03R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD03R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD04F(db.Model):
"""Represents a IdxWORD04F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD04F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD04R(db.Model):
"""Represents a IdxWORD04R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD04R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD05F(db.Model):
"""Represents a IdxWORD05F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD05F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD05R(db.Model):
"""Represents a IdxWORD05R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD05R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD06F(db.Model):
"""Represents a IdxWORD06F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD06F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD06R(db.Model):
"""Represents a IdxWORD06R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD06R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD07F(db.Model):
"""Represents a IdxWORD07F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD07F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD07R(db.Model):
"""Represents a IdxWORD07R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD07R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD08F(db.Model):
"""Represents a IdxWORD08F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD08F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD08R(db.Model):
"""Represents a IdxWORD08R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD08R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD09F(db.Model):
"""Represents a IdxWORD09F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD09F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD09R(db.Model):
"""Represents a IdxWORD09R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD09R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD10F(db.Model):
"""Represents a IdxWORD10F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD10F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD10R(db.Model):
"""Represents a IdxWORD10R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD10R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD11F(db.Model):
"""Represents a IdxWORD11F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD11F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD11R(db.Model):
"""Represents a IdxWORD11R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD11R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD12F(db.Model):
"""Represents a IdxWORD12F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD12F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD12R(db.Model):
"""Represents a IdxWORD12R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD12R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD13F(db.Model):
"""Represents a IdxWORD13F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD13F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD13R(db.Model):
"""Represents a IdxWORD13R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD13R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD14F(db.Model):
"""Represents a IdxWORD14F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD14F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD14R(db.Model):
"""Represents a IdxWORD14R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD14R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD15F(db.Model):
"""Represents a IdxWORD15F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD15F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD15R(db.Model):
"""Represents a IdxWORD15R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD15R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD16F(db.Model):
"""Represents a IdxWORD16F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD16F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD16R(db.Model):
"""Represents a IdxWORD16R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD16R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD17F(db.Model):
"""Represents a IdxWORD17F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD17F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD17R(db.Model):
"""Represents a IdxWORD17R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD17R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD18F(db.Model):
"""Represents a IdxWORD18F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD18F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD18R(db.Model):
"""Represents a IdxWORD18R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD18R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
class IdxWORD19F(db.Model):
"""Represents a IdxWORD19F record."""
def __init__(self):
pass
__tablename__ = 'idxWORD19F'
id = db.Column(db.MediumInteger(9, unsigned=True),
primary_key=True,
autoincrement=True)
term = db.Column(db.String(50), nullable=True,
unique=True)
hitlist = db.Column(db.iLargeBinary, nullable=True)
class IdxWORD19R(db.Model):
"""Represents a IdxWORD19R record."""
def __init__(self):
pass
__tablename__ = 'idxWORD19R'
id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
db.ForeignKey(Bibrec.id),
primary_key=True)
termlist = db.Column(db.iLargeBinary, nullable=True)
type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
nullable=False,
server_default='CURRENT',
primary_key=True)
+class IdxWORD20F(db.Model):
+ """Represents a IdxWORD20F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD20F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD20R(db.Model):
+ """Represents a IdxWORD20R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD20R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD21F(db.Model):
+ """Represents a IdxWORD21F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD21F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD21R(db.Model):
+ """Represents a IdxWORD21R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD21R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD22F(db.Model):
+ """Represents a IdxWORD22F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD22F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD22R(db.Model):
+ """Represents a IdxWORD22R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD22R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD23F(db.Model):
+ """Represents a IdxWORD23F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD23F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD23R(db.Model):
+ """Represents a IdxWORD23R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD23R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD24F(db.Model):
+ """Represents a IdxWORD24F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD24F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD24R(db.Model):
+ """Represents a IdxWORD24R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD24R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD25F(db.Model):
+ """Represents a IdxWORD25F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD25F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD25R(db.Model):
+ """Represents a IdxWORD25R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD25R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
+class IdxWORD26F(db.Model):
+ """Represents a IdxWORD26F record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD26F'
+ id = db.Column(db.MediumInteger(9, unsigned=True),
+ primary_key=True,
+ autoincrement=True)
+ term = db.Column(db.String(50), nullable=True,
+ unique=True)
+ hitlist = db.Column(db.iLargeBinary, nullable=True)
+
+class IdxWORD26R(db.Model):
+ """Represents a IdxWORD26R record."""
+ def __init__(self):
+ pass
+ __tablename__ = 'idxWORD26R'
+ id_bibrec = db.Column(db.MediumInteger(8, unsigned=True),
+ db.ForeignKey(Bibrec.id),
+ primary_key=True)
+ termlist = db.Column(db.iLargeBinary, nullable=True)
+ type = db.Column(db.Enum('CURRENT', 'FUTURE', 'TEMPORARY'),
+ nullable=False,
+ server_default='CURRENT',
+ primary_key=True)
+
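Each index in the diff above comes as a pair of tables: a forward table (`…F`) mapping a term to a serialized hitlist of record ids, and a reverse table (`…R`) mapping a record id back to its list of terms. A minimal plain-Python sketch of that duality (dictionary stand-ins for the tables, not Invenio code; the real columns hold serialized blobs):

```python
# Plain-Python stand-ins for one forward/reverse index pair.
# In Invenio the hitlist/termlist columns are serialized binaries;
# here plain sets and lists keep the sketch readable.

def build_index(records):
    """records: dict of recid -> list of terms."""
    forward = {}   # term -> set of recids (the ...F table)
    reverse = {}   # recid -> sorted terms (the ...R table)
    for recid, terms in records.items():
        reverse[recid] = sorted(set(terms))
        for term in terms:
            forward.setdefault(term, set()).add(recid)
    return forward, reverse

forward, reverse = build_index({
    1: ["higgs", "boson"],
    2: ["boson", "decay"],
})
print(sorted(forward["boson"]))  # [1, 2] -- records containing "boson"
print(reverse[2])                # ['boson', 'decay'] -- terms of record 2
```

Answering "which records contain term X" reads the forward table; re-indexing or deleting a record reads the reverse table to know which hitlists to touch, which is why both directions are stored.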
__all__ = ['IdxINDEX',
+ 'IdxINDEXIdxINDEX',
'IdxINDEXNAME',
'IdxINDEXField',
'IdxPAIR01F',
'IdxPAIR01R',
'IdxPAIR02F',
'IdxPAIR02R',
'IdxPAIR03F',
'IdxPAIR03R',
'IdxPAIR04F',
'IdxPAIR04R',
'IdxPAIR05F',
'IdxPAIR05R',
'IdxPAIR06F',
'IdxPAIR06R',
'IdxPAIR07F',
'IdxPAIR07R',
'IdxPAIR08F',
'IdxPAIR08R',
'IdxPAIR09F',
'IdxPAIR09R',
'IdxPAIR10F',
'IdxPAIR10R',
'IdxPAIR11F',
'IdxPAIR11R',
'IdxPAIR12F',
'IdxPAIR12R',
'IdxPAIR13F',
'IdxPAIR13R',
'IdxPAIR14F',
'IdxPAIR14R',
'IdxPAIR15F',
'IdxPAIR15R',
'IdxPAIR16F',
'IdxPAIR16R',
'IdxPAIR17F',
'IdxPAIR17R',
'IdxPAIR18F',
'IdxPAIR18R',
'IdxPAIR19F',
'IdxPAIR19R',
+ 'IdxPAIR20F',
+ 'IdxPAIR20R',
+ 'IdxPAIR21F',
+ 'IdxPAIR21R',
+ 'IdxPAIR22F',
+ 'IdxPAIR22R',
+ 'IdxPAIR23F',
+ 'IdxPAIR23R',
+ 'IdxPAIR24F',
+ 'IdxPAIR24R',
+ 'IdxPAIR25F',
+ 'IdxPAIR25R',
+ 'IdxPAIR26F',
+ 'IdxPAIR26R',
'IdxPHRASE01F',
'IdxPHRASE01R',
'IdxPHRASE02F',
'IdxPHRASE02R',
'IdxPHRASE03F',
'IdxPHRASE03R',
'IdxPHRASE04F',
'IdxPHRASE04R',
'IdxPHRASE05F',
'IdxPHRASE05R',
'IdxPHRASE06F',
'IdxPHRASE06R',
'IdxPHRASE07F',
'IdxPHRASE07R',
'IdxPHRASE08F',
'IdxPHRASE08R',
'IdxPHRASE09F',
'IdxPHRASE09R',
'IdxPHRASE10F',
'IdxPHRASE10R',
'IdxPHRASE11F',
'IdxPHRASE11R',
'IdxPHRASE12F',
'IdxPHRASE12R',
'IdxPHRASE13F',
'IdxPHRASE13R',
'IdxPHRASE14F',
'IdxPHRASE14R',
'IdxPHRASE15F',
'IdxPHRASE15R',
'IdxPHRASE16F',
'IdxPHRASE16R',
'IdxPHRASE17F',
'IdxPHRASE17R',
'IdxPHRASE18F',
'IdxPHRASE18R',
'IdxPHRASE19F',
'IdxPHRASE19R',
+ 'IdxPHRASE20F',
+ 'IdxPHRASE20R',
+ 'IdxPHRASE21F',
+ 'IdxPHRASE21R',
+ 'IdxPHRASE22F',
+ 'IdxPHRASE22R',
+ 'IdxPHRASE23F',
+ 'IdxPHRASE23R',
+ 'IdxPHRASE24F',
+ 'IdxPHRASE24R',
+ 'IdxPHRASE25F',
+ 'IdxPHRASE25R',
+ 'IdxPHRASE26F',
+ 'IdxPHRASE26R',
'IdxWORD01F',
'IdxWORD01R',
'IdxWORD02F',
'IdxWORD02R',
'IdxWORD03F',
'IdxWORD03R',
'IdxWORD04F',
'IdxWORD04R',
'IdxWORD05F',
'IdxWORD05R',
'IdxWORD06F',
'IdxWORD06R',
'IdxWORD07F',
'IdxWORD07R',
'IdxWORD08F',
'IdxWORD08R',
'IdxWORD09F',
'IdxWORD09R',
'IdxWORD10F',
'IdxWORD10R',
'IdxWORD11F',
'IdxWORD11R',
'IdxWORD12F',
'IdxWORD12R',
'IdxWORD13F',
'IdxWORD13R',
'IdxWORD14F',
'IdxWORD14R',
'IdxWORD15F',
'IdxWORD15R',
'IdxWORD16F',
'IdxWORD16R',
'IdxWORD17F',
'IdxWORD17R',
'IdxWORD18F',
'IdxWORD18R',
'IdxWORD19F',
- 'IdxWORD19R']
+ 'IdxWORD19R',
+ 'IdxWORD20F',
+ 'IdxWORD20R',
+ 'IdxWORD21F',
+ 'IdxWORD21R',
+ 'IdxWORD22F',
+ 'IdxWORD22R',
+ 'IdxWORD23F',
+ 'IdxWORD23R',
+ 'IdxWORD24F',
+ 'IdxWORD24R',
+ 'IdxWORD25F',
+ 'IdxWORD25R',
+ 'IdxWORD26F',
+ 'IdxWORD26R']
diff --git a/modules/bibindex/lib/bibindex_regression_tests.py b/modules/bibindex/lib/bibindex_regression_tests.py
new file mode 100644
index 000000000..d0d28ed67
--- /dev/null
+++ b/modules/bibindex/lib/bibindex_regression_tests.py
@@ -0,0 +1,1425 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2006, 2007, 2008, 2010, 2011, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndex Regression Test Suite."""
+
+__revision__ = "$Id$"
+
+import unittest
+import os
+import re
+from datetime import timedelta
+
+from invenio.testutils import make_test_suite, run_test_suite, nottest, InvenioTestCase
+from invenio.importutils import lazy_import
+
+WordTable = lazy_import('invenio.bibindex_engine:WordTable')
+get_word_tables = lazy_import('invenio.bibindex_engine:get_word_tables')
+find_affected_records_for_index = lazy_import('invenio.bibindex_engine:find_affected_records_for_index')
+get_recIDs_by_date_authority = lazy_import('invenio.bibindex_engine:get_recIDs_by_date_authority')
+get_recIDs_by_date_bibliographic = lazy_import('invenio.bibindex_engine:get_recIDs_by_date_bibliographic')
+create_range_list = lazy_import('invenio.bibindex_engine:create_range_list')
+beautify_range_list = lazy_import('invenio.bibindex_engine:beautify_range_list')
+get_last_updated_all_indexes = lazy_import('invenio.bibindex_engine:get_last_updated_all_indexes')
+
+get_index_id_from_index_name = lazy_import('invenio.bibindex_engine_utils:get_index_id_from_index_name')
+get_index_tags = lazy_import('invenio.bibindex_engine_utils:get_index_tags')
+get_tag_indexes = lazy_import('invenio.bibindex_engine_utils:get_tag_indexes')
+get_all_indexes = lazy_import('invenio.bibindex_engine_utils:get_all_indexes')
+
+from invenio.bibindex_engine_config import CFG_BIBINDEX_ADDING_RECORDS_STARTED_STR, \
+ CFG_BIBINDEX_INDEX_TABLE_TYPE, \
+ CFG_BIBINDEX_UPDATE_MESSAGE
+
+task_low_level_submission = lazy_import('invenio.bibtask:task_low_level_submission')
+from invenio.config import CFG_BINDIR, CFG_LOGDIR
+
+from invenio.dbquery import run_sql, deserialize_via_marshal
+from invenio.intbitset import intbitset
+get_record = lazy_import('invenio.search_engine:get_record')
+get_fieldvalues = lazy_import('invenio.search_engine_utils:get_fieldvalues')
+
+get_index_strings_by_control_no = lazy_import('invenio.bibauthority_engine:get_index_strings_by_control_no')
+get_control_nos_from_recID = lazy_import('invenio.bibauthority_engine:get_control_nos_from_recID')
+
+run_sql_drop_silently = lazy_import('invenio.bibindex_engine_utils:run_sql_drop_silently')
+
+bibupload = lazy_import('invenio.bibupload:bibupload')
+xml_marc_to_records = lazy_import('invenio.bibupload:xml_marc_to_records')
+
+wipe_out_record_from_all_tables = lazy_import('invenio.bibupload_regression_tests:wipe_out_record_from_all_tables')
+record_get_field_value = lazy_import('invenio.bibrecord:record_get_field_value')
+get_max_recid = lazy_import('invenio.bibsort_engine:get_max_recid')
+
+
+def reindex_for_type_with_bibsched(index_name, force_all=False):
+ """Runs bibindex for the specified index and returns the task_id.
+ @param index_name: name of the index to reindex
+ @param force_all: if True, the function will reindex all records,
+ not just the affected ones
+ """
+ program = os.path.join(CFG_BINDIR, 'bibindex')
+ args = ['bibindex', 'bibindex_regression_tests', '-w', index_name, '-u', 'admin']
+ if force_all:
+ args.append("--force")
+ task_id = task_low_level_submission(*args)
+ COMMAND = "%s %s > /dev/null 2> /dev/null" % (program, str(task_id))
+ os.system(COMMAND)
+ return task_id
+
+
+def prepare_for_index_update(index_id, parameters={}):
+ """Prepares an SQL query for an update of an index in the idxINDEX table.
+ Takes into account remove_stopwords, remove_html_markup, remove_latex_markup,
+ tokenizer and last_updated as parameters to change.
+ remove_html_markup and remove_latex_markup accept these values:
+ '' to leave it unchanged
+ 'Yes' to change it to 'Yes'
+ 'No' to change it to 'No'.
+ For remove_stopwords, instead of 'Yes', give the name of a file (for example: 'stopwords.kb')
+ from the CFG_ETCDIR/bibrank/ directory pointing at a stopwords knowledge base.
+ For tokenizer, specify the name of the tokenizer.
+ For last_updated, provide a date in the format: '2013-01-31 00:00:00'
+ @param index_id: id of the index to change
+ @param parameters: dict with names of parameters and their new values
+ """
+ if len(parameters) == 0:
+ return ''
+
+ parameter_set = False
+ query_update = "UPDATE idxINDEX SET "
+ for key in parameters:
+ if parameters[key]:
+ query_update += parameter_set and ", " or ""
+ query_update += "%s='%s'" % (key, parameters[key])
+ parameter_set = True
+ query_update += " WHERE id=%s" % index_id
+ return query_update
+
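The helper above builds the SET clause by joining the non-empty entries of the parameters dict. A standalone sketch of the same string-building pattern (an illustration, not the function from the diff; unlike the original, it also returns '' when every value is empty rather than emitting a clause-less UPDATE):

```python
def build_update_query(index_id, parameters=None):
    """Join non-empty dict entries into "col='value'" pairs for an UPDATE."""
    if not parameters:
        return ''
    # Keep only parameters whose new value is non-empty, like the original loop.
    assignments = ["%s='%s'" % (key, value)
                   for key, value in parameters.items() if value]
    if not assignments:
        return ''
    return ("UPDATE idxINDEX SET " + ", ".join(assignments)
            + " WHERE id=%s" % index_id)

print(build_update_query(5, {'remove_stopwords': 'stopwords.kb',
                             'tokenizer': ''}))
# UPDATE idxINDEX SET remove_stopwords='stopwords.kb' WHERE id=5
```

Interpolating values directly into SQL is acceptable only in test fixtures like this one; production queries in Invenio go through `run_sql` with bound parameters.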
+
+@nottest
+def reindex_word_tables_into_testtables(index_name, recids = None, prefix = 'test', parameters={}, turn_off_virtual_indexes=True):
+ """Sets up a test environment. Reindexes the index with the given name into a
+ new temporary table with the given prefix, changing some parameters of the
+ chosen index during the reindexing. Useful for tests of the reindexing process.
+ Reindexes only idxWORDxxx tables.
+ @param index_name: name of the index we want to reindex
+ @param recids: None means reindexing all records; pass record ids to update only those records
+ @param prefix: prefix for the new tables; if set to boolean False, the function reindexes into the original table
+ @param parameters: dict with parameters and their new values; for a more detailed
+ description, see the 'prepare_for_index_update' function.
+ @param turn_off_virtual_indexes: if True, only the specified index is reindexed,
+ without the connected virtual indexes
+ """
+ index_id = get_index_id_from_index_name(index_name)
+ query_update = prepare_for_index_update(index_id, parameters)
+ last_updated = run_sql("""SELECT last_updated FROM idxINDEX WHERE id=%s""" % index_id)[0][0]
+
+ test_tablename = "%s_idxWORD%02d" % (prefix, index_id)
+ query_drop_forward_index_table = """DROP TABLE IF EXISTS %sF""" % test_tablename
+ query_drop_reversed_index_table = """DROP TABLE IF EXISTS %sR""" % test_tablename
+
+ query_create_forward_index_table = """CREATE TABLE %sF (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM""" % test_tablename
+ query_create_reversed_index_table = """CREATE TABLE %sR (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM""" % test_tablename
+
+ run_sql_drop_silently(query_drop_forward_index_table)
+ run_sql_drop_silently(query_drop_reversed_index_table)
+ run_sql(query_create_forward_index_table)
+ run_sql(query_create_reversed_index_table)
+ if query_update:
+ run_sql(query_update)
+
+ pattern = 'idxWORD'
+ if prefix:
+ pattern = '%s_idxWORD' % prefix
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=get_index_tags(index_name),
+ table_name_pattern= pattern + '%02dF',
+ wordtable_type = CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ if turn_off_virtual_indexes:
+ wordTable.turn_off_virtual_indexes()
+ if recids:
+ wordTable.add_recIDs(recids, 10000)
+ else:
+ recIDs_for_index = find_affected_records_for_index(index_name,
+ [[1, get_max_recid()]],
+ True)
+ bib_recIDs = get_recIDs_by_date_bibliographic([], index_name)
+ auth_recIDs = get_recIDs_by_date_authority([], index_name)
+ final_recIDs = bib_recIDs | auth_recIDs
+ final_recIDs = set(final_recIDs) & set(recIDs_for_index[index_name])
+ final_recIDs = beautify_range_list(create_range_list(list(final_recIDs)))
+ wordTable.add_recIDs(final_recIDs, 10000)
+ return last_updated
+
+
+@nottest
+def remove_reindexed_word_testtables(index_name, prefix = 'test'):
+ """
+ Removes prefix_idxWORDxxx tables created during tests.
+ @param index_name: name of the index
+ @param prefix: prefix for the tables
+ """
+ index_id = get_index_id_from_index_name(index_name)
+ test_tablename = "%s_idxWORD%02d" % (prefix, index_id)
+ query_drop_forward_index_table = """DROP TABLE IF EXISTS %sF""" % test_tablename
+ query_drop_reversed_index_table = """DROP TABLE IF EXISTS %sR""" % test_tablename
+ run_sql(query_drop_forward_index_table)
+ run_sql(query_drop_reversed_index_table)
+
+
+class BibIndexRemoveStopwordsTest(InvenioTestCase):
+ """Tests remove_stopwords parameter of an index. Changes it in the database
+ and reindexes from scratch into a new table to see the diffrence which is brought
+ by change. Uses 'title' index.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'title',
+ parameters = {'remove_stopwords':'stopwords.kb',
+ 'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 4:
+ remove_reindexed_word_testtables('title')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('title'),
+ parameters = {'remove_stopwords':'No',
+ 'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+ def test_check_occurrences_of_stopwords_in_testable_word_of(self):
+ """Tests if term 'of' is in the new reindexed table"""
+
+ query = "SELECT hitlist FROM test_idxWORD08F WHERE term='of'"
+ res = run_sql(query)
+ self.assertEqual(0, len(res))
+
+ def test_check_occurrences_of_stopwords_in_testable_word_everything(self):
+ """Tests if term 'everything' is in the new reindexed table"""
+
+ query = "SELECT hitlist FROM test_idxWORD08F WHERE term='everything'"
+ res = run_sql(query)
+ self.assertEqual(0, len(res))
+
+ def test_compare_non_stopwords_occurrences_in_original_and_test_tables_word_theory(self):
+ """Checks if stopwords removing has no influence on indexation of word 'theory' """
+
+ word = "theori" #theori not theory, because of default stemming for title index
+ query = "SELECT hitlist FROM test_idxWORD08F WHERE term='%s'" % word
+ iset_removed = "iset_removed"
+ iset_original = "iset_original"
+ res = run_sql(query)
+ if res:
+ iset_removed = intbitset(res[0][0])
+ query = "SELECT hitlist FROM idxWORD08F WHERE term='%s'" % word
+ res = run_sql(query)
+ if res:
+ iset_original = intbitset(res[0][0])
+ self.assertEqual(len(iset_removed), len(iset_original))
+
+ def test_compare_non_stopwords_occurrences_in_original_and_test_tables_word_on(self):
+ """Checks if stopwords removing has no influence on indexation of word 'o(n)' """
+
+ word = "o(n)"
+ query = "SELECT hitlist FROM test_idxWORD08F WHERE term='%s'" % word
+ iset_removed = "iset_removed"
+ iset_original = "iset_original"
+ res = run_sql(query)
+ if res:
+ iset_removed = intbitset(res[0][0])
+ query = "SELECT hitlist FROM idxWORD08F WHERE term='%s'" % word
+ res = run_sql(query)
+ if res:
+ iset_original = intbitset(res[0][0])
+ self.assertEqual(len(iset_removed), len(iset_original))
+
+
+class BibIndexRemoveLatexTest(InvenioTestCase):
+ """Tests remove_latex_markup parameter of an index. Changes it in the database
+ and reindexes from scratch into a new table to see the diffrence which is brought
+ by change. Uses 'abstract' index.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'abstract',
+ parameters = {'remove_latex_markup':'Yes',
+ 'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 4:
+ remove_reindexed_word_testtables('abstract')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('abstract'),
+ parameters = {'remove_latex_markup':'No',
+ 'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_check_occurrences_after_latex_removal_word_u1(self):
+ """Tests how many times experssion 'u(1)' occures"""
+
+ word = "u(1)"
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ iset = "iset_change"
+ if res:
+ iset = intbitset(res[0][0])
+ self.assertEqual(3, len(iset))
+
+ def test_check_exact_occurrences_after_latex_removal_word_theta(self):
+ """Tests where experssion 'theta' occures"""
+
+ word = "theta"
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([12], ilist)
+
+ def test_compare_occurrences_after_and_before_latex_removal_math_expression(self):
+ """Checks if latex removal has no influence on indexation of expression 's(u(n_1)*u(n_2))' """
+
+ word = 's(u(n_1)*u(n_2))'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist_test = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist_test = iset.tolist()
+ word = 's(u(n_1)*u(n_2))'
+ query = "SELECT hitlist FROM idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist = ["default_not_equal"]
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual(ilist, ilist_test)
+
+ def test_check_occurrences_latex_expression_with_u1(self):
+ """Tests influence of latex removal on record 80"""
+
+ word = '%over u(1)%'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term LIKE '%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([80], ilist)
+
+
+class BibIndexRemoveHtmlTest(InvenioTestCase):
+ """Tests remove_html_markup parameter of an index. Changes it in the database
+ and reindexes from scratch into a new table to see the diffrence which is brought
+ by change. Uses 'abstract' index.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'abstract',
+ parameters = {'remove_html_markup':'Yes',
+ 'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 2:
+ remove_reindexed_word_testtables('abstract')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('abstract'),
+ parameters = {'remove_html_markup':'No',
+ 'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_check_occurrences_after_html_removal_tag_p(self):
+ """Tests if expression 'water-hog</p>' is not indexed after html markup removal"""
+
+ word = 'water-hog</p>'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual(0, len(ilist))
+
+
+ def test_check_occurrences_after_and_before_html_removal_word_style(self):
+ """Tests html markup removal influence on expression 'style="width' """
+
+ word = 'style="width'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist_test = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist_test = iset.tolist()
+ query = "SELECT hitlist FROM idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('abstract'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertNotEqual(ilist, ilist_test)
+
+
+class BibIndexYearIndexTest(InvenioTestCase):
+ """
+ Checks the year index. These tests differ from those inside the WebSearch module because
+ they only test content and reindexing, not the search itself.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'year',
+ parameters = {'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 3:
+ remove_reindexed_word_testtables('year')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('year'),
+ parameters = {'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_occurrences_in_year_index_1973(self):
+ """checks content of year index for year 1973"""
+ word = '1973'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('year'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([34], ilist)
+
+
+ def test_occurrences_in_year_index_2001(self):
+ """checks content of year index for year 2001"""
+ word = '2001'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('year'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([2, 11, 12, 15], ilist)
+
+
+ def test_comparison_for_number_of_items(self):
+ """checks the reindexation of year index"""
+ query_test = "SELECT count(*) FROM test_idxWORD%02dF" % get_index_id_from_index_name('year')
+ query_orig = "SELECT count(*) FROM idxWORD%02dF" % get_index_id_from_index_name('year')
+ num_orig = 0
+ num_test = 1
+ res = run_sql(query_test)
+ if res:
+ num_test = res[0][0]
+ res = run_sql(query_orig)
+ if res:
+ num_orig = res[0][0]
+ self.assertEqual(num_orig, num_test)
+
+
+
+class BibIndexAuthorCountIndexTest(InvenioTestCase):
+ """
+ Checks the author count index. These tests differ from those inside the WebSearch module because
+ they only test content and reindexing, not the search itself.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'authorcount',
+ parameters = {'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 2:
+ remove_reindexed_word_testtables('authorcount')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('authorcount'),
+ parameters = {'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_occurrences_in_authorcount_index(self):
+ """checks content of authorcount index for papers with 4 authors"""
+ word = '4'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('authorcount'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([51, 54, 59, 66, 92, 96], ilist)
+
+
+ def test_comparison_for_number_of_items(self):
+ """checks the reindexation of authorcount index"""
+ query_test = "SELECT count(*) FROM test_idxWORD%02dF" % get_index_id_from_index_name('authorcount')
+ query_orig = "SELECT count(*) FROM idxWORD%02dF" % get_index_id_from_index_name('authorcount')
+ num_orig = 0
+ num_test = 1
+ res = run_sql(query_test)
+ if res:
+ num_test = res[0][0]
+ res = run_sql(query_orig)
+ if res:
+ num_orig = res[0][0]
+ self.assertEqual(num_orig, num_test)
+
+
+class BibIndexItemCountIndexTest(InvenioTestCase):
+ """
+ Checks the itemcount index: the number of copies of a book for given records,
+ as well as occurrences of a particular number of copies in the test data.
+ """
+
+ def test_occurrences_in_itemcount_index_two_copies(self):
+ """checks content of itemcount index for records with two copies of a book"""
+ word = '2'
+ query = "SELECT hitlist FROM idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('itemcount'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([31, 34], ilist)
+
+ def test_records_for_number_of_copies_record1(self):
+ """checks content of itemcount index for record: 1"""
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=1" \
+ % get_index_id_from_index_name('itemcount')
+ res = run_sql(query)
+ self.assertEqual(deserialize_via_marshal(res[0][0]),['0'])
+
+ def test_records_for_number_of_copies_record30(self):
+ """checks content of itemcount index for record: 30"""
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=30" \
+ % get_index_id_from_index_name('itemcount')
+ res = run_sql(query)
+ self.assertEqual(deserialize_via_marshal(res[0][0]),['1'])
+
+ def test_records_for_number_of_copies_record32(self):
+ """checks content of itemcount index for record: 32"""
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=32" \
+ % get_index_id_from_index_name('itemcount')
+ res = run_sql(query)
+ self.assertEqual(deserialize_via_marshal(res[0][0]),['3'])
+
+
+class BibIndexFiletypeIndexTest(InvenioTestCase):
+ """
+ Checks the filetype index. These tests differ from those inside the WebSearch module because
+ they only test content and indexing, not the search itself.
+ """
+
+ def test_occurrences_of_tif_filetype(self):
+ """tests which records have a file with the 'tif' extension"""
+ query = "SELECT hitlist FROM idxWORD%02dF where term='tif'" \
+ % get_index_id_from_index_name('filetype')
+ res = run_sql(query)
+ value = []
+ if res:
+ iset = intbitset(res[0][0])
+ value = iset.tolist()
+ self.assertEqual(sorted(value), [66, 71])
+
+ def test_filetypes_of_records(self):
+ """tests files extensions of record 1 and 77"""
+ query1 = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=1" \
+ % get_index_id_from_index_name('filetype')
+ query2 = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=77" \
+ % get_index_id_from_index_name('filetype')
+ res1 = run_sql(query1)
+ res2 = run_sql(query2)
+ set1 = deserialize_via_marshal(res1[0][0])
+ set2 = deserialize_via_marshal(res2[0][0])
+ self.assertEqual(set1, ['gif', 'jpg'])
+ self.assertEqual(set2, ['pdf', 'ps.gz'])
+
+
+class BibIndexJournalIndexTest(InvenioTestCase):
+ """
+ Checks the journal index. These tests differ from those inside the WebSearch module because
+ they only test content and reindexing, not the search itself.
+ """
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'journal',
+ parameters = {'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 2:
+ remove_reindexed_word_testtables('journal')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('journal'),
+ parameters = {'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_occurrences_in_journal_index(self):
+ """checks content of journal index for phrase: 'prog. theor. phys.' """
+ word = 'prog. theor. phys.'
+ query = "SELECT hitlist FROM test_idxWORD%02dF WHERE term='%s'" % (get_index_id_from_index_name('journal'), word)
+ res = run_sql(query)
+ ilist = []
+ if res:
+ iset = intbitset(res[0][0])
+ ilist = iset.tolist()
+ self.assertEqual([86], ilist)
+
+
+ def test_comparison_for_number_of_items(self):
+ """checks the reindexation of journal index"""
+ query_test = "SELECT count(*) FROM test_idxWORD%02dF" % get_index_id_from_index_name('journal')
+ query_orig = "SELECT count(*) FROM idxWORD%02dF" % get_index_id_from_index_name('journal')
+ num_orig = 0
+ num_test = 1
+ res = run_sql(query_test)
+ if res:
+ num_test = res[0][0]
+ res = run_sql(query_orig)
+ if res:
+ num_orig = res[0][0]
+ self.assertEqual(num_orig, num_test)
+
+
+class BibIndexCJKTokenizerTitleIndexTest(InvenioTestCase):
+ """
+ Checks CJK tokenization on title index.
+ """
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ """reindexation to new table"""
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables(
+ 'title',
+ parameters = {'tokenizer':'BibIndexCJKTokenizer',
+ 'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ """cleaning up"""
+ self.test_counter += 1
+ if self.test_counter == 2:
+ remove_reindexed_word_testtables('title')
+ reverse_changes = prepare_for_index_update(
+ get_index_id_from_index_name('title'),
+ parameters = {'tokenizer':'BibIndexDefaultTokenizer',
+ 'last_updated':self.last_updated})
+ run_sql(reverse_changes)
+
+
+ def test_splitting_and_indexing_CJK_characters_forward_table(self):
+ """CJK Tokenizer - searching for a CJK term in the title index, forward table"""
+ query = "SELECT * from test_idxWORD%02dF where term='\xe6\x95\xac'" % get_index_id_from_index_name('title')
+ res = run_sql(query)
+ iset = []
+ if res:
+ iset = intbitset(res[0][2])
+ iset = iset.tolist()
+ self.assertEqual(iset, [104])
+
+ def test_splitting_and_indexing_CJK_characters_reversed_table(self):
+ """CJK Tokenizer - comparing terms for the record with Chinese poetry in the title index, reversed table"""
+ query = "SELECT * from test_idxWORD%02dR where id_bibrec='104'" % get_index_id_from_index_name('title')
+ res = run_sql(query)
+ iset = []
+ if res:
+ iset = deserialize_via_marshal(res[0][1])
+ self.assertEqual(iset, ['\xe6\x95\xac', '\xe7\x8d\xa8', '\xe4\xba\xad', '\xe5\x9d\x90'])
+
+
+class BibIndexAuthorityRecordTest(InvenioTestCase):
+ """Test if BibIndex correctly knows when to update the index for a
+ bibliographic record if it is dependent upon an authority record changed
+ within the given date range"""
+
+ def test_authority_record_recently_updated(self):
+ """bibindex - reindexing after recently changed authority record"""
+
+ authRecID = 118
+ bibRecID = 9
+ index_name = 'author'
+ table = "idxWORD%02dF" % get_index_id_from_index_name(index_name)
+ reindex_for_type_with_bibsched(index_name)
+ run_sql("UPDATE bibrec SET modification_date = now() WHERE id = %s", (authRecID,))
+ # run bibindex again
+ task_id = reindex_for_type_with_bibsched(index_name, force_all=True)
+
+ filename = os.path.join(CFG_LOGDIR, 'bibsched_task_' + str(task_id) + '.log')
+ _file = open(filename)
+ text = _file.read() # small file
+ _file.close()
+ self.assertTrue(text.find(CFG_BIBINDEX_UPDATE_MESSAGE) >= 0)
+ self.assertTrue(text.find(CFG_BIBINDEX_ADDING_RECORDS_STARTED_STR % (table, 1, get_max_recid())) >= 0)
+
+ def test_authority_record_enriched_index(self):
+ """bibindex - test whether reverse index for bibliographic record
+ contains words from referenced authority records"""
+ bibRecID = 9
+ authority_string = 'jonathan'
+ index_name = 'author'
+ table = "idxWORD%02dR" % get_index_id_from_index_name(index_name)
+
+ reindex_for_type_with_bibsched(index_name, force_all=True)
+ self.assertTrue(
+ authority_string in deserialize_via_marshal(
+ run_sql("SELECT termlist FROM %s WHERE id_bibrec = %s" % (table, bibRecID))[0][0]
+ )
+ )
+
+ def test_indexing_of_deleted_authority_record(self):
+ """bibindex - no info for indexing from deleted authority record"""
+ recID = 119 # deleted record
+ control_nos = get_control_nos_from_recID(recID)
+ info = get_index_strings_by_control_no(control_nos[0])
+ self.assertEqual([], info)
+
+ def test_authority_record_get_values_by_bibrecID_from_tag(self):
+ """bibindex - find authors in authority records for given bibrecID"""
+ tags = ['100__a']
+ bibRecID = 9
+ values = []
+ for tag in tags:
+ authority_tag = tag[0:3] + "__0"
+ control_nos = get_fieldvalues(bibRecID, authority_tag)
+ for control_no in control_nos:
+ new_strings = get_index_strings_by_control_no(control_no)
+ values.extend(new_strings)
+ self.assertTrue('Ellis, Jonathan Richard' in values)
+
+
+def insert_record_one_and_second_revision():
+ """Inserts test record no. 1 and a second revision for that record"""
+
+ rev1 = """<record>
+ <controlfield tag="001">123456789</controlfield>
+ <controlfield tag="005">20110101000000.0</controlfield>
+ <datafield tag ="100" ind1=" " ind2=" ">
+ <subfield code="a">Close, John</subfield>
+ <subfield code="u">DESY</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Particles world</subfield>
+ </datafield>
+ </record>"""
+ rev1_final = rev1.replace('<controlfield tag="001">123456789</controlfield>','')
+ rev1_final = rev1_final.replace('<controlfield tag="005">20110101000000.0</controlfield>','')
+
+ rev2 = rev1.replace('<subfield code="a">Close, John</subfield>', '<subfield code="a">Dawkins, Richard</subfield>')
+ rev2 = rev2.replace('Particles world', 'Particles universe')
+
+ rec1 = xml_marc_to_records(rev1_final)
+ res = bibupload(rec1[0], opt_mode='insert')
+ _id = res[1]
+ rec = get_record(_id)
+ _rev = record_get_field_value(rec, '005', '', '')
+
+ #need to index for the first time
+ indexes = get_all_indexes(virtual=False)
+ wtabs = get_word_tables(indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type = CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.add_recIDs([[_id, _id]], 10000)
+
+ #upload the second revision, but don't index
+ rev2_final = rev2.replace('123456789', str(_id))
+ rev2_final = rev2_final.replace('20110101000000.0', _rev)
+ rec2 = xml_marc_to_records(rev2_final)
+ res = bibupload(rec2[0], opt_mode='correct')
+
+ return _id
+
+
+def insert_record_two_and_second_revision():
+ """Inserts test record no. 2 and a revision for that record"""
+
+ rev1 = """<record>
+ <controlfield tag="001">123456789</controlfield>
+ <controlfield tag="005">20110101000000.0</controlfield>
+ <datafield tag ="100" ind1=" " ind2=" ">
+ <subfield code="a">Locke, John</subfield>
+ <subfield code="u">UNITRA</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Collision course</subfield>
+ </datafield>
+ </record>"""
+ rev1_final = rev1.replace('<controlfield tag="001">123456789</controlfield>','')
+ rev1_final = rev1_final.replace('<controlfield tag="005">20110101000000.0</controlfield>','')
+
+ rev2 = rev1.replace('Collision course', 'Course of collision')
+
+ rec1 = xml_marc_to_records(rev1_final)
+ res = bibupload(rec1[0], opt_mode='insert')
+ id_bibrec = res[1]
+ rec = get_record(id_bibrec)
+ _rev = record_get_field_value(rec, '005', '', '')
+
+ #need to index for the first time
+ indexes = get_all_indexes(virtual=False)
+ wtabs = get_word_tables(indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type = CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.add_recIDs([[id_bibrec, id_bibrec]], 10000)
+
+ #upload the second revision, but don't index
+ rev2_final = rev2.replace('123456789', str(id_bibrec))
+ rev2_final = rev2_final.replace('20110101000000.0', _rev)
+ rec2 = xml_marc_to_records(rev2_final)
+ res = bibupload(rec2[0], opt_mode='correct')
+
+ return id_bibrec
+
+
+def create_index_tables(index_id):
+ query_create = """CREATE TABLE IF NOT EXISTS idxWORD%02dF (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM"""
+
+ query_create_r = """CREATE TABLE IF NOT EXISTS idxWORD%02dR (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM"""
+ run_sql(query_create % index_id)
+ run_sql(query_create_r % index_id)
+
+
+def drop_index_tables(index_id):
+ query_drop = """DROP TABLE IF EXISTS idxWORD%02d%s"""
+ run_sql(query_drop % (index_id, "F"))
+ run_sql(query_drop % (index_id, "R"))
+
+
+def create_virtual_index(index_id, dependent_indexes):
+ """creates new virtual index and binds it to specific dependent indexes"""
+ query = """INSERT INTO idxINDEX (id, name, tokenizer) VALUES (%s, 'testindex', 'BibIndexDefaultTokenizer')"""
+ run_sql(query % index_id)
+ query = """INSERT INTO idxINDEX_idxINDEX VALUES (%s, %s)"""
+ for index in dependent_indexes:
+ run_sql(query % (index_id, get_index_id_from_index_name(index)))
+ create_index_tables(index_id)
+
+
+def remove_virtual_index(index_id):
+ """removes tables and other traces after virtual index"""
+ drop_index_tables(index_id)
+ query = """DELETE FROM idxINDEX WHERE id=%s""" % index_id
+ run_sql(query)
+ query = """DELETE FROM idxINDEX_idxINDEX WHERE id_virtual=%s"""
+ run_sql(query % index_id)
+
+
+class BibIndexFindingAffectedIndexes(InvenioTestCase):
+ """
+ Checks if function 'find_affected_records_for_index'
+ works correctly.
+ """
+
+ counter = 0
+ indexes = ['global', 'fulltext', 'caption', 'journal', 'miscellaneous', 'reportnumber', 'year']
+
+ @classmethod
+ def setUp(self):
+ if self.counter == 0:
+ self.last_updated = dict(get_last_updated_all_indexes())
+ res = run_sql("SELECT job_date FROM hstRECORD WHERE id_bibrec=10 AND affected_fields<>''")
+ self.hst_date = res[0][0]
+ date_to_set = self.hst_date - timedelta(seconds=1)
+ for index in self.indexes:
+ run_sql("""UPDATE idxINDEX SET last_updated=%s
+ WHERE name=%s""", (str(date_to_set), index))
+
+ @classmethod
+ def tearDown(self):
+ self.counter += 1
+ if self.counter >= 8:
+ for index in self.indexes:
+ run_sql("""UPDATE idxINDEX SET last_updated=%s
+ WHERE name=%s""", (self.last_updated[index], index))
+
+ def test_find_proper_indexes(self):
+ """bibindex - checks if affected indexes are found correctly"""
+ records_for_indexes = find_affected_records_for_index([], [[1,20]])
+ self.assertEqual(sorted(['miscellaneous', 'fulltext', 'caption', 'journal', 'reportnumber', 'year']),
+ sorted(records_for_indexes.keys()))
+
+ def test_find_proper_records_for_miscellaneous_index(self):
+ """bibindex - checks if affected recids are found correctly for miscellaneous index"""
+ records_for_indexes = find_affected_records_for_index([], [[1,20]])
+ self.assertEqual(records_for_indexes['miscellaneous'], [10,12])
+
+ def test_find_proper_records_for_year_index(self):
+ """bibindex - checks if affected recids are found correctly for year index"""
+ records_for_indexes = find_affected_records_for_index("", [[1,20]])
+ self.assertEqual(records_for_indexes['year'], [10,12])
+
+ def test_find_proper_records_for_caption_index(self):
+ """bibindex - checks if affected recids are found correctly for caption index"""
+ records_for_indexes = find_affected_records_for_index("", [[1,100]])
+ self.assertEqual(records_for_indexes['caption'], [10,12, 55, 98])
+
+ def test_find_proper_records_for_journal_index(self):
+ """bibindex - checks if affected recids are found correctly for journal index"""
+ records_for_indexes = find_affected_records_for_index("", [[1,100]])
+ self.assertEqual(records_for_indexes['journal'], [10])
+
+ def test_find_proper_records_specified_only_year(self):
+ """bibindex - checks if affected recids are found correctly for year index if we specify only year index as input"""
+ records_for_indexes = find_affected_records_for_index("year", [[1, 100]])
+ self.assertEqual(records_for_indexes["year"], [10, 12, 55])
+
+ def test_find_proper_records_force_all(self):
+ """bibindex - checks if all recids will be assigned to all specified indexes"""
+ records_for_indexes = find_affected_records_for_index("year,title", [[10, 15]], True)
+ self.assertEqual(records_for_indexes["year"], records_for_indexes["title"])
+ self.assertEqual(records_for_indexes["year"], [10, 11, 12, 13, 14, 15])
+
+ def test_find_proper_records_nothing_for_title_index(self):
+ """bibindex - checks if nothing was found for title index in range of records: 1 - 20"""
+ records_for_indexes = find_affected_records_for_index("title", [[1, 20]])
+ self.assertRaises(KeyError, lambda :records_for_indexes["title"])
+
+
+
+
+class BibIndexIndexingAffectedIndexes(InvenioTestCase):
+
+ started = False
+ records = []
+ counter = 0
+
+ @classmethod
+ def setUp(self):
+ self.counter += 1
+ if not self.started:
+ self.records.append(insert_record_one_and_second_revision())
+ self.records.append(insert_record_two_and_second_revision())
+ records_for_indexes = find_affected_records_for_index([], [self.records])
+ wtabs = get_word_tables(records_for_indexes.keys())
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type = CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.add_recIDs([self.records], 10000)
+ self.started = True
+
+ @classmethod
+ def tearDown(self):
+ if self.counter == 3:
+ for rec in self.records:
+ wipe_out_record_from_all_tables(rec)
+ indexes = get_all_indexes(virtual=False)
+ wtabs = get_word_tables(indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type = CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.del_recIDs([self.records])
+
+
+ def test_proper_content_in_title_index(self):
+ """bibindex - checks reindexation of title index for test records.."""
+ index_id = get_index_id_from_index_name('title')
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec IN (""" % (index_id,)
+ query = query + ", ".join(map(str, self.records)) + ")"
+ resp = run_sql(query)
+ title_rec1 = deserialize_via_marshal(resp[0][0])
+ title_rec2 = deserialize_via_marshal(resp[1][0])
+ self.assertEqual(['univers', 'particl'], title_rec1)
+ self.assertEqual(['of', 'cours', 'collis'], title_rec2)
+
+
+ def test_proper_content_in_author_index(self):
+ """bibindex - checks reindexing of the author index for test records."""
+ index_id = get_index_id_from_index_name('author')
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec IN (""" % (index_id,)
+ query = query + ", ".join(map(str, self.records)) + ")"
+ resp = run_sql(query)
+ author_rec1 = deserialize_via_marshal(resp[0][0])
+ author_rec2 = deserialize_via_marshal(resp[1][0])
+ self.assertEqual(['dawkins', 'richard', ], author_rec1)
+ self.assertEqual(['john', 'locke'], author_rec2)
+
+
+ def test_proper_content_in_global_index(self):
+ """bibindex - checks reindexing of the global index for test records."""
+ index_id = get_index_id_from_index_name('global')
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec IN (""" % (index_id,)
+ query = query + ", ".join(map(str, self.records)) + ")"
+ resp = run_sql(query)
+ global_rec1 = deserialize_via_marshal(resp[0][0])
+ global_rec2 = deserialize_via_marshal(resp[1][0])
+ self.assertEqual(True, 'dawkin' in global_rec1)
+ self.assertEqual(False, 'close' in global_rec1)
+ self.assertEqual(True, 'univers' in global_rec1)
+ self.assertEqual(True, 'john' in global_rec2)
+ self.assertEqual(False, 'john' in global_rec1)
+
+
+class BibIndexFindingIndexesForTags(InvenioTestCase):
+ """ Tests function 'get_tag_indexes' """
+
+ def test_fulltext_tag_virtual_indexes_on(self):
+ """bibindex - checks if 'get_tag_indexes' for tag 8564_u finds only the 'fulltext' index"""
+ self.assertEqual(('fulltext',), zip(*get_tag_indexes('8564_u'))[1])
+
+ def test_title_tag_virtual_indexes_on(self):
+ """bibindex - checks if 'get_tag_indexes' for tag 245__% also finds the 'global' index"""
+ self.assertEqual(('title', 'exacttitle', 'global'), zip(*get_tag_indexes('245__%'))[1])
+
+ def test_title_tag_virtual_indexes_off(self):
+ """bibindex - checks if 'get_tag_indexes' for tag 245__% won't find the 'global' index (with virtual=False)"""
+ self.assertEqual(('title', 'exacttitle'), zip(*get_tag_indexes('245__%', virtual=False))[1])
+
+ def test_author_tag_virtual_indexes_on(self):
+ """bibindex - checks 'get_tag_indexes' for tag '100'"""
+ self.assertEqual(('author', 'affiliation', 'exactauthor', 'firstauthor',
+ 'exactfirstauthor', 'authorcount', 'authorityauthor',
+ 'miscellaneous', 'global'),
+ zip(*get_tag_indexes('100'))[1])
+
+ def test_author_exact_tag_virtual_indexes_off(self):
+ """bibindex - checks 'get_tag_indexes' for tag '100__a'"""
+ self.assertEqual(('author', 'exactauthor', 'firstauthor',
+ 'exactfirstauthor', 'authorcount',
+ 'authorityauthor', 'miscellaneous'),
+ zip(*get_tag_indexes('100__a', virtual=False))[1])
+
+ def test_wide_tag_virtual_indexes_off(self):
+ """bibindex - checks 'get_tag_indexes' for tag like '86%'"""
+ self.assertEqual(('miscellaneous',), zip(*get_tag_indexes('86%', virtual=False))[1])
+
+ def test_909_tags_in_misc_index(self):
+ """bibindex - checks connection between misc index and tags: 909C1%, 909C4%"""
+ self.assertEqual(('miscellaneous',), zip(*get_tag_indexes('909C1%', virtual=False))[1])
+ self.assertEqual('miscellaneous' in zip(*get_tag_indexes('909C4%', virtual=False))[1], False)
+
+ def test_year_tag_virtual_indexes_on(self):
+ """bibindex - checks 'get_tag_indexes' for tag 909C0y"""
+ self.assertEqual(('year', 'global'), zip(*get_tag_indexes('909C0y'))[1])
+
+ def test_wide_tag_authority_index_virtual_indexes_off(self):
+ """bibindex - checks 'get_tag_indexes' for tag like '15%'"""
+ self.assertEqual(('authoritysubject', 'miscellaneous'), zip(*get_tag_indexes('15%',virtual=False))[1])
+
+
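The `zip(*...)[1]` expression used throughout the assertions above unzips the `(index_id, index_name)` pairs returned by `get_tag_indexes` into a tuple of index names. A minimal standalone illustration (the pair values are invented; note that under Python 3 `zip` is lazy, so the result must be materialized before indexing):

```python
# (index_id, index_name) pairs, shaped like get_tag_indexes output (values invented)
pairs = [(18, 'fulltext'), (1, 'global')]

# zip(*pairs) transposes the list of pairs into (ids, names)
ids, names = zip(*pairs)
print(ids)    # (18, 1)
print(names)  # ('fulltext', 'global')

# Python 2 allows zip(*pairs)[1] directly; Python 3 needs tuple(...) first
assert tuple(zip(*pairs))[1] == ('fulltext', 'global')
```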
+class BibIndexFindingTagsForIndexes(InvenioTestCase):
+ """ Tests function 'get_index_tags' """
+
+
+ def test_tags_for_author_index(self):
+ """bibindex - checks if 'get_index_tags' finds the proper tags for the 'author' index"""
+ self.assertEqual(get_index_tags('author'), ['100__a', '700__a'])
+
+ def test_tags_for_global_index_virtual_indexes_off(self):
+ """bibindex - checks if 'get_index_tags' finds the proper tags for the 'global' index (with virtual=False)"""
+ self.assertEqual(get_index_tags('global', virtual=False),[])
+
+ def test_tags_for_global_index_virtual_indexes_on(self):
+ """bibindex - checks if 'get_index_tags' finds the proper tags for the 'global' index (with virtual indexes on)"""
+ tags = get_index_tags('global')
+ self.assertEqual('86%' in tags, True)
+ self.assertEqual('100__a' in tags, True)
+ self.assertEqual('245__%' in tags, True)
+
+
+class BibIndexGlobalIndexContentTest(InvenioTestCase):
+ """ Tests if virtual global index is correctly indexed"""
+
+ def is_part_of(self, container, content):
+ """checks whether every element of content is also in container"""
+ ctr = set(container)
+ cont = set(content)
+ return cont.issubset(ctr)
+
+ def test_title_index_compatibility_reversed_table(self):
+ """bibindex - checks if the same words are in title and global index, reversed table"""
+ global_id = get_index_id_from_index_name('global')
+ title_id = get_index_id_from_index_name('title')
+ for rec in range(1, 4):
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (title_id, rec)
+ res = run_sql(query)
+ termlist_title = deserialize_via_marshal(res[0][0])
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (global_id, rec)
+ glob = run_sql(query)
+ termlist_global = deserialize_via_marshal(glob[0][0])
+ self.assertEqual(self.is_part_of(termlist_global, termlist_title), True)
+
+ def test_abstract_index_compatibility_reversed_table(self):
+ """bibindex - checks if the same words are in abstract and global index, reversed table"""
+ global_id = get_index_id_from_index_name('global')
+ abstract_id = get_index_id_from_index_name('abstract')
+ for rec in range(6, 9):
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (abstract_id, rec)
+ res = run_sql(query)
+ termlist_abstract = deserialize_via_marshal(res[0][0])
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (global_id, rec)
+ glob = run_sql(query)
+ termlist_global = deserialize_via_marshal(glob[0][0])
+ self.assertEqual(self.is_part_of(termlist_global, termlist_abstract), True)
+
+ def test_misc_index_compatibility_reversed_table(self):
+ """bibindex - checks if the same words are in misc and global index, reversed table"""
+ global_id = get_index_id_from_index_name('global')
+ misc_id = get_index_id_from_index_name('miscellaneous')
+ for rec in range(10, 14):
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (misc_id, rec)
+ res = run_sql(query)
+ termlist_misc = deserialize_via_marshal(res[0][0])
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s""" % (global_id, rec)
+ glob = run_sql(query)
+ termlist_global = deserialize_via_marshal(glob[0][0])
+ self.assertEqual(self.is_part_of(termlist_global, termlist_misc), True)
+
+ def test_journal_index_compatibility_forward_table(self):
+ """bibindex - checks if the same words are in journal and global index, forward table"""
+ global_id = get_index_id_from_index_name('global')
+ journal_id = get_index_id_from_index_name('journal')
+ query = """SELECT term FROM idxWORD%02dF""" % journal_id
+ res = zip(*run_sql(query))[0]
+ query = """SELECT term FROM idxWORD%02dF""" % global_id
+ glob = zip(*run_sql(query))[0]
+ self.assertEqual(self.is_part_of(glob, res), True)
+
+ def test_keyword_index_compatibility_forward_table(self):
+ """bibindex - checks if the same pairs are in keyword and global index, forward table"""
+ global_id = get_index_id_from_index_name('global')
+ keyword_id = get_index_id_from_index_name('keyword')
+ query = """SELECT term FROM idxPAIR%02dF""" % keyword_id
+ res = zip(*run_sql(query))[0]
+ query = """SELECT term FROM idxPAIR%02dF""" % global_id
+ glob = zip(*run_sql(query))[0]
+ self.assertEqual(self.is_part_of(glob, res), True)
+
+ def test_affiliation_index_compatibility_forward_table(self):
+ """bibindex - checks if the same phrases are in affiliation and global index, forward table"""
+ global_id = get_index_id_from_index_name('global')
+ affiliation_id = get_index_id_from_index_name('affiliation')
+ query = """SELECT term FROM idxPHRASE%02dF""" % affiliation_id
+ res = zip(*run_sql(query))[0]
+ query = """SELECT term FROM idxPHRASE%02dF""" % global_id
+ glob = zip(*run_sql(query))[0]
+ self.assertEqual(self.is_part_of(glob, res), True)
+
+
+class BibIndexVirtualIndexAlsoChangesTest(InvenioTestCase):
+ """ Tests if virtual index changes after changes in dependent index"""
+
+ counter = 0
+ indexes = ["title"]
+ _id = 39
+
+ @classmethod
+ def prepare_virtual_index(self):
+ """creates new virtual index and binds it to specific normal index"""
+ create_virtual_index(self._id, self.indexes)
+ wtabs = get_word_tables(self.indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ wordTable.add_recIDs([[1, 10]], 1000)
+
+ @classmethod
+ def reindex_virtual_index(self, special_tokenizer=False):
+ """reindexes virtual and dependent indexes with different tokenizer"""
+ def tokenize_for_words(phrase):
+ return phrase.split(" ")
+
+ wtabs = get_word_tables(self.indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexEmptyTokenizer"},
+ wash_index_terms=50)
+ if special_tokenizer:
+ wordTable.default_tokenizer_function = tokenize_for_words
+ wordTable.add_recIDs([[1, 10]], 1000)
+
+ @classmethod
+ def setUp(self):
+ self.counter += 1
+ if self.counter == 1:
+ self.prepare_virtual_index()
+ elif self.counter == 2:
+ self.reindex_virtual_index(special_tokenizer=True)
+
+ @classmethod
+ def tearDown(self):
+ if self.counter == 3:
+ self.reindex_virtual_index()
+ elif self.counter == 4:
+ remove_virtual_index(self._id)
+
+ def test_virtual_index_1_has_10_records(self):
+ """bibindex - checks if virtual index was filled with only ten records from title index"""
+ query = "SELECT count(*) FROM idxWORD%02dR" % self._id
+ self.assertEqual(10, run_sql(query)[0][0])
+
+ def test_virtual_index_2_correct_content_record_1(self):
+ """bibindex - after reindexing with different tokenizer virtual index also changes - record 1"""
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s" % (self._id, 1)
+ self.assertEqual('Higgs' in deserialize_via_marshal(run_sql(query)[0][0]), True)
+
+ def test_virtual_index_3_correct_content_record_3(self):
+ """bibindex - after reindexing with different tokenizer virtual index also changes - record 3"""
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s" % (self._id, 3)
+ self.assertEqual(['Conference', 'Biology', 'Molecular', 'European'],
+ deserialize_via_marshal(run_sql(query)[0][0]))
+
+ def test_virtual_index_4_cleaned_up(self):
+ """bibindex - after reindexing with normal title tokenizer everything is back to normal"""
+ # This version of the test is for installations with the PyStemmer package;
+ # without this package the word 'biology' is stemmed differently.
+ query = "SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=%s" % (self._id, 3)
+ self.assertEqual(['biolog', 'molecular', 'confer', 'european'],
+ deserialize_via_marshal(run_sql(query)[0][0]))
+
+
+class BibIndexVirtualIndexRemovalTest(InvenioTestCase):
+
+ counter = 0
+ indexes = ["authorcount", "journal", "year"]
+ _id = 40
+
+ @classmethod
+ def setUp(self):
+ self.counter += 1
+ if self.counter == 1:
+ create_virtual_index(self._id, self.indexes)
+ wtabs = get_word_tables(self.indexes)
+ for index_id, index_name, index_tags in wtabs:
+ wordTable = WordTable(index_name=index_name,
+ index_id=index_id,
+ fields_to_index=index_tags,
+ table_name_pattern='idxWORD%02dF',
+ wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"],
+ tag_to_tokenizer_map={'8564_u': "BibIndexFulltextTokenizer"},
+ wash_index_terms=50)
+ wordTable.add_recIDs([[1, 113]], 1000)
+ #removal part
+ w = WordTable("testindex", self._id, [], "idxWORD%02dF", CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"], {}, 50)
+ w.remove_dependent_index(int(get_index_id_from_index_name("authorcount")))
+
+
+ @classmethod
+ def tearDown(self):
+ if self.counter == 9:
+ remove_virtual_index(self._id)
+
+
+ def test_authorcount_removal_number_of_items(self):
+ """bibindex - checks virtual index after authorcount index removal - number of items"""
+ query = """SELECT count(*) FROM idxWORD%02dF"""
+ res = run_sql(query % self._id)
+ self.assertEqual(157, res[0][0])
+
+ def test_authorcount_removal_common_terms_intact(self):
+ """bibindex - checks virtual index after authorcount index removal - common terms"""
+ query = """SELECT term FROM idxWORD%02dF WHERE term IN ('10', '2', '4', '7')"""
+ res = run_sql(query % self._id)
+ self.assertEqual(4, len(res))
+
+ def test_authorcount_removal_no_315_term(self):
+ """bibindex - checks virtual index after authorcount index removal - no '315' term in virtual index"""
+ query = """SELECT term FROM idxWORD%02dF WHERE term='315'"""
+ res = run_sql(query % self._id)
+ self.assertEqual(0, len(res))
+
+ def test_authorcount_removal_term_10_hitlist(self):
+ """bibindex - checks virtual index after authorcount index removal - hitlist for '10' term"""
+ query = """SELECT hitlist FROM idxWORD%02dF WHERE term='10'"""
+ res = run_sql(query % self._id)
+ self.assertEqual([80, 92], intbitset(res[0][0]).tolist())
+
+ def test_authorcount_removal_term_1985_hitlist(self):
+ """bibindex - checks virtual index after authorcount index removal - hitlist for '1985' term"""
+ query = """SELECT hitlist FROM idxWORD%02dF WHERE term='1985'"""
+ res = run_sql(query % self._id)
+ self.assertEqual([16, 18], intbitset(res[0][0]).tolist())
+
+ def test_authorcount_removal_record_16_hitlist(self):
+ """bibindex - checks virtual index after authorcount index removal - termlist for record 16"""
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=16"""
+ res = run_sql(query % self._id)
+ self.assertEqual(['1985'], deserialize_via_marshal(res[0][0]))
+
+ def test_authorcount_removal_record_10_hitlist(self):
+ """bibindex - checks virtual index after authorcount index removal - termlist for record 10"""
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=10"""
+ res = run_sql(query % self._id)
+ self.assertEqual(['2002', 'Eur. Phys. J., C'], deserialize_via_marshal(res[0][0]))
+
+ def test_year_removal_number_of_items(self):
+ """bibindex - checks virtual index after year removal - number of items"""
+ #must be run after: tearDown
+ w = WordTable("testindex", self._id, [], "idxWORD%02dF", CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"], {}, 50)
+ w.remove_dependent_index(int(get_index_id_from_index_name("year")))
+ query = """SELECT count(*) FROM idxWORD%02dF"""
+ res = run_sql(query % self._id)
+ self.assertEqual(134, res[0][0])
+
+ def test_year_removal_record_18_hitlist(self):
+ """bibindex - checks virtual index after year removal - termlist for record 18"""
+ #must be run after: tearDown, test_year_removal_number_of_items
+ query = """SELECT termlist FROM idxWORD%02dR WHERE id_bibrec=18"""
+ res = run_sql(query % self._id)
+ self.assertEqual(['151', '357','1985', 'Phys. Lett., B 151 (1985) 357', 'Phys. Lett., B'],
+ deserialize_via_marshal(res[0][0]))
+
+
+TEST_SUITE = make_test_suite(BibIndexRemoveStopwordsTest,
+ BibIndexRemoveLatexTest,
+ BibIndexRemoveHtmlTest,
+ BibIndexYearIndexTest,
+ BibIndexAuthorCountIndexTest,
+ BibIndexItemCountIndexTest,
+ BibIndexFiletypeIndexTest,
+ BibIndexJournalIndexTest,
+ BibIndexCJKTokenizerTitleIndexTest,
+ BibIndexAuthorityRecordTest,
+ BibIndexFindingAffectedIndexes,
+ BibIndexIndexingAffectedIndexes,
+ BibIndexFindingIndexesForTags,
+ BibIndexFindingTagsForIndexes,
+ BibIndexGlobalIndexContentTest,
+ BibIndexVirtualIndexAlsoChangesTest,
+ BibIndexVirtualIndexRemovalTest)
+
+if __name__ == "__main__":
+ run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibindex/lib/bibindexadmin_regression_tests.py b/modules/bibindex/lib/bibindexadmin_regression_tests.py
index c81d59fa6..df4f5adfa 100644
--- a/modules/bibindex/lib/bibindexadmin_regression_tests.py
+++ b/modules/bibindex/lib/bibindexadmin_regression_tests.py
@@ -1,76 +1,320 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibIndex Admin Regression Test Suite."""
__revision__ = "$Id$"
import unittest
+import re
from invenio.config import CFG_SITE_URL
from invenio.testutils import make_test_suite, run_test_suite, \
- test_web_page_content, merge_error_messages
+ test_web_page_content, merge_error_messages, \
+ get_authenticated_mechanize_browser, make_url
+
class BibIndexAdminWebPagesAvailabilityTest(unittest.TestCase):
"""Check BibIndex Admin web pages whether they are up or not."""
def test_bibindex_admin_interface_pages_availability(self):
"""bibindexadmin - availability of BibIndex Admin interface pages"""
baseurl = CFG_SITE_URL + '/admin/bibindex/bibindexadmin.py/'
_exports = ['',
'index',
'index?mtype=perform_showindexoverview',
+ 'index?mtype=perform_showvirtualindexoverview',
'index?mtype=perform_editindexes',
'index?mtype=perform_addindex',
+ 'index?mtype=perform_editvirtualindex',
+ 'index?mtype=perform_addvirtualindex',
'field',
'field?mtype=perform_showfieldoverview',
'field?mtype=perform_editfields',
'field?mtype=perform_addfield',
+ 'editindex?mtype=perform_modifysynonymkb',
+ 'editindex?mtype=perform_modifystopwords',
+ 'editindex?mtype=perform_modifyremovehtml',
+ 'editindex?mtype=perform_modifyremovelatex',
+ 'editindex?mtype=perform_modifytokenizer'
]
error_messages = []
for url in [baseurl + page for page in _exports]:
# first try as guest:
error_messages.extend(test_web_page_content(url,
username='guest',
expected_text=
'Authorization failure'))
# then try as admin:
error_messages.extend(test_web_page_content(url,
username='admin'))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_bibindex_admin_guide_availability(self):
"""bibindexadmin - availability of BibIndex Admin guide pages"""
url = CFG_SITE_URL + '/help/admin/bibindex-admin-guide'
error_messages = test_web_page_content(url,
expected_text="BibIndex Admin Guide")
if error_messages:
self.fail(merge_error_messages(error_messages))
return
-TEST_SUITE = make_test_suite(BibIndexAdminWebPagesAvailabilityTest)
+
+def check_admin_forms_with_dropdown_list(url, fields):
+ """Logs in as 'admin', opens the given URL, fills the dropdown
+ lists of the first form with the given values, submits it together
+ with the confirmation form, and returns the HTML body of the response.
+ @param url: url of the forms to test
+ @param fields: a dict of the form fields and their values (only dropdown lists)
+ @return: html body
+ """
+ browser = get_authenticated_mechanize_browser("admin","")
+ browser.open(url)
+ browser.select_form(nr=0)
+ form = browser.form
+ for key in fields:
+ form[key] = [fields[key]]
+ resp = browser.submit()
+ #second page - confirmation
+ browser.select_form(nr=1)
+ resp = browser.submit()
+ return resp.read()
+
+def check_admin_forms_with_input_text(url, fields):
+ """Logs in as 'admin', opens the given URL, fills the text
+ inputs of the first form with the given values, submits it together
+ with the confirmation form, and returns the HTML body of the response.
+ @param url: url of the forms to test
+ @param fields: a dict of the form fields and their values
+ @return: html body
+ """
+ browser = get_authenticated_mechanize_browser("admin","")
+ browser.open(url)
+ browser.select_form(nr=0)
+ form = browser.form
+ for key in fields:
+ form[key] = fields[key]
+ resp = browser.submit()
+ #second page - confirmation
+ browser.select_form(nr=1)
+ resp = browser.submit()
+ return resp.read()
+
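The two helpers above drive the admin forms through mechanize: open the URL, fill the first form, submit, then submit the confirmation form. The fill step can be sketched with only the standard library; the form HTML and field names below are invented for illustration and are not taken from the actual admin pages:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormFieldParser(HTMLParser):
    """Collects default values of <input> and <select> fields in a form."""

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._select = None   # name of the <select> currently being parsed

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and "name" in attrs:
            self.fields[attrs["name"]] = attrs.get("value", "")
        elif tag == "select":
            self._select = attrs.get("name")
        elif tag == "option" and self._select and "selected" in attrs:
            self.fields[self._select] = attrs.get("value", "")

    def handle_endtag(self, tag):
        if tag == "select":
            self._select = None

def fill_form(html, overrides):
    """Parses the form's default fields, applies overrides, returns a POST body."""
    parser = FormFieldParser()
    parser.feed(html)
    fields = dict(parser.fields)
    fields.update(overrides)
    return urlencode(fields)

# Illustrative form, loosely shaped like an editindex page
FORM = ('<form action="editindex" method="post">'
        '<input type="hidden" name="idxID" value="8">'
        '<select name="idxHTML">'
        '<option value="Yes">Yes</option>'
        '<option value="No" selected>No</option>'
        '</select></form>')

print(fill_form(FORM, {"idxHTML": "Yes"}))   # idxID=8&idxHTML=Yes
```

mechanize performs the same field collection behind `select_form`/`submit`, plus the actual HTTP round trips and cookie handling that these regression tests rely on.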
+class BibIndexAdminSynonymKnowledgeBaseTest(unittest.TestCase):
+ """Tests BibIndexAdmin's ability to change knowledge base details for indexes"""
+
+ def setUp(self):
+ self.re_operation_successfull = re.compile(r"Operation successfully completed")
+
+ def test_change_title_index_knowledge_base(self):
+ """tests if information about title index's knowledge base can be changed properly"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifysynonymkb'}
+ url = make_url(base,**parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxKB":"INDEX-SYNONYM-TITLE",
+ "idxMATCH":"leading_to_comma"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+ def test_change_title_index_knowledge_base_back(self):
+ """tests if information about title index's knowledge base can be changed back"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifysynonymkb'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxKB":"INDEX-SYNONYM-TITLE",
+ "idxMATCH":"exact"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+class BibIndexAdminRemoveStopwordsTest(unittest.TestCase):
+ """Tests BibIndexAdmin's ability to change stopwords configuration details for indexes.
+ Tests change the database entries in the idxINDEX table, but do not reindex the information contained in idxWORDXXF/R.
+ """
+
+ def setUp(self):
+ self.re_operation_successfull = re.compile(r"Operation successfully completed")
+ self.re_stopwords_not_changed = re.compile(r"Stopwords have not been changed")
+
+ def test_change_title_index_remove_stopword_configuration(self):
+ """tests if index's remove stopwords configuration can be changed"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifystopwords'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_input_text(url, {"idxSTOPWORDS":"stopwords.kb"})
+
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+ def test_change_title_index_remove_stopword_configuration_back(self):
+ """tests if index's remove stopwords configuration can be changed back"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifystopwords'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_input_text(url, {"idxSTOPWORDS":"No"})
+
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+class BibIndexAdminRemoveHTMLTest(unittest.TestCase):
+ """Tests BibIndexAdmin's ability to change 'remove html' configuration details for indexes.
+ Tests change the database entries in the idxINDEX table, but do not reindex the information contained in idxWORDXXF/R.
+ """
+
+ def setUp(self):
+ self.re_operation_successfull = re.compile(r"Operation successfully completed")
+ self.re_removehtml_not_changed = re.compile(r"Remove HTML markup parameter has not been changed")
+
+ def test_change_title_index_remove_html_configuration(self):
+ """tests if index's 'remove html' configuration can be changed"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifyremovehtml'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxHTML":"Yes"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+ def test_change_title_index_remove_html_configuration_back(self):
+ """tests if index's 'remove html' configuration can be changed back"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifyremovehtml'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxHTML":"No"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+
+class BibIndexAdminRemoveLatexTest(unittest.TestCase):
+ """Tests BibIndexAdmin's ability to change 'remove latex' configuration details for indexes.
+ Tests change the database entries in the idxINDEX table, but do not reindex the information contained in idxWORDXXF/R.
+ """
+
+ def setUp(self):
+ self.re_operation_successfull = re.compile(r"Operation successfully completed")
+ self.re_removelatex_not_changed = re.compile(r"Remove latex markup parameter has not been changed")
+
+ def test_change_title_index_remove_latex_configuration(self):
+ """tests if index's 'remove latex' configuration can be changed"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifyremovelatex'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxLATEX":"Yes"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+ def test_change_title_index_remove_latex_configuration_back(self):
+ """tests if index's 'remove latex' configuration can be changed back"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifyremovelatex'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxLATEX":"No"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+class BibIndexAdminTokenizerTest(unittest.TestCase):
+ """Tests BibIndexAdmin's ability to change tokenizer configuration details for indexes.
+ Tests change the database entries in the idxINDEX table, but do not reindex the information contained in idxWORDXXF/R.
+ """
+
+ def setUp(self):
+ self.re_operation_successfull = re.compile(r"Operation successfully completed")
+
+
+ def test_change_title_index_tokenizer_configuration(self):
+ """tests if index's tokenizer configuration can be changed"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifytokenizer'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxTOK":"BibIndexEmptyTokenizer"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+ def test_change_title_index_tokenizer_configuration_back(self):
+ """tests if index's tokenizer configuration can be changed back"""
+
+ base = "/admin/bibindex/bibindexadmin.py/editindex"
+ parameters = {'idxID':'8', 'ln':'en', 'mtype':'perform_modifytokenizer'}
+ url = make_url(base, **parameters)
+
+ html_response = check_admin_forms_with_dropdown_list(url, {"idxTOK":"BibIndexDefaultTokenizer"})
+ success = self.re_operation_successfull.search(html_response)
+ if not success:
+ error_messages = """There is no "Operation successfully completed" in html response."""
+ self.fail(merge_error_messages(error_messages))
+
+
+
+TEST_SUITE = make_test_suite(BibIndexAdminWebPagesAvailabilityTest,
+ BibIndexAdminSynonymKnowledgeBaseTest,
+ BibIndexAdminRemoveStopwordsTest,
+ BibIndexAdminRemoveHTMLTest,
+ BibIndexAdminRemoveLatexTest,
+ BibIndexAdminTokenizerTest)
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibindex/lib/bibindexadminlib.py b/modules/bibindex/lib/bibindexadminlib.py
index 333fdf9a7..93b3313ef 100644
--- a/modules/bibindex/lib/bibindexadminlib.py
+++ b/modules/bibindex/lib/bibindexadminlib.py
@@ -1,1759 +1,2768 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Invenio BibIndex Administrator Interface."""
__revision__ = "$Id$"
import random
from invenio.config import \
CFG_SITE_LANG, \
CFG_SITE_URL, \
CFG_BINDIR
from invenio.bibrankadminlib import write_outcome, modify_translations, \
get_def_name, get_name, get_languages, addadminbox, tupletotable, \
createhiddenform
from invenio.access_control_engine import acc_authorize_action
from invenio.dbquery import run_sql, get_table_status_info, wash_table_column_name
from invenio.bibindex_engine_stemmer import get_stemming_language_map
import invenio.template
+from invenio.bibindex_engine_config import CFG_BIBINDEX_SYNONYM_MATCH_TYPE, \
+ CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR
+from invenio.bibknowledge_dblayer import get_all_kb_names
+from invenio.bibindex_engine_utils import load_tokenizers, \
+ get_idx_indexer, \
+ get_all_indexes, \
+ get_all_virtual_indexes, \
+ get_virtual_index_building_blocks, \
+ get_index_name_from_index_id, \
+ get_all_index_names_and_column_values, \
+ is_index_virtual
+
+
+_TOKENIZERS = load_tokenizers()
+
+
websearch_templates = invenio.template.load('websearch')
def getnavtrail(previous = ''):
"""Get the navtrail"""
navtrail = """<a class="navtrail" href="%s/help/admin">Admin Area</a> """ % (CFG_SITE_URL,)
navtrail = navtrail + previous
return navtrail
-def perform_index(ln=CFG_SITE_LANG, mtype='', content=''):
+def perform_index(ln=CFG_SITE_LANG, mtype='', content='', **params):
"""start area for modifying indexes
mtype - the method that called this method.
content - the output from that method."""
fin_output = """
<table>
<tr>
<td>0.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s">Show all</a></small></td>
<td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_showindexoverview#1">Overview of indexes</a></small></td>
- <td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_editindexes#2">Edit index</a></small></td>
- <td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_addindex#3">Add new index</a></small></td>
- <td>4.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s">Manage logical fields</a></small></td>
- <td>5.&nbsp;<small><a href="%s/help/admin/bibindex-admin-guide">Guide</a></small></td>
+ <td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_showvirtualindexoverview#2">Overview of virtual indexes</a></small></td>
+ <td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_editindexes#3">Edit index</a></small></td>
+ <td>4.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_addindex#3">Add new index</a></small></td>
+ <td>5.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s">Manage logical fields</a></small></td>
+ <td>6.&nbsp;<small><a href="%s/help/admin/bibindex-admin-guide">Guide</a></small></td>
</tr>
</table>
- """ % (CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL)
+ """ % (CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL)
if mtype == "perform_showindexoverview" and content:
fin_output += content
elif mtype == "perform_showindexoverview" or not mtype:
- fin_output += perform_showindexoverview(ln, callback='')
+ fin_output += perform_showindexoverview(ln, callback='', **params)
+
+ if mtype == "perform_showvirtualindexoverview" and content:
+ fin_output += content
+ elif mtype == "perform_showvirtualindexoverview" or not mtype:
+ fin_output += perform_showvirtualindexoverview(ln, callback='', **params)
if mtype == "perform_editindexes" and content:
fin_output += content
elif mtype == "perform_editindexes" or not mtype:
- fin_output += perform_editindexes(ln, callback='')
+ fin_output += perform_editindexes(ln, callback='', **params)
if mtype == "perform_addindex" and content:
fin_output += content
elif mtype == "perform_addindex" or not mtype:
- fin_output += perform_addindex(ln, callback='')
+ fin_output += perform_addindex(ln, callback='', **params)
+
+ if mtype == "perform_editvirtualindexes" and content:
+ fin_output += content
+ elif mtype == "perform_editvirtualindexes":
+ #not visible in 'show all' view of 'Manage Indexes'
+ fin_output += perform_editvirtualindexes(ln, callback='', **params)
+
+ if mtype == "perform_addvirtualindex" and content:
+ fin_output += content
+ elif mtype == "perform_addvirtualindex":
+ #not visible in 'show all' view of 'Manage Indexes'
+ fin_output += perform_addvirtualindex(ln, callback='', **params)
+
+ if mtype == "perform_deletevirtualindex" and content:
+ fin_output += content
+ elif mtype == "perform_deletevirtualindex":
+ #not visible in 'show all' view of 'Manage Indexes'
+ fin_output += perform_deletevirtualindex(ln, callback='', **params)
return addadminbox("<b>Menu</b>", [fin_output])
def perform_field(ln=CFG_SITE_LANG, mtype='', content=''):
"""Start area for modifying fields
mtype - the method that called this method.
content - the output from that method."""
fin_output = """
<table>
<tr>
<td>0.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s">Show all</a></small></td>
<td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s&amp;mtype=perform_showfieldoverview#1">Overview of logical fields</a></small></td>
<td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s&amp;mtype=perform_editfields#2">Edit logical field</a></small></td>
<td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s&amp;mtype=perform_addfield#3">Add new logical field</a></small></td>
<td>4.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s">Manage Indexes</a></small></td>
<td>5.&nbsp;<small><a href="%s/help/admin/bibindex-admin-guide">Guide</a></small></td>
</tr>
</table>
""" % (CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL)
if mtype == "perform_showfieldoverview" and content:
fin_output += content
elif mtype == "perform_showfieldoverview" or not mtype:
fin_output += perform_showfieldoverview(ln, callback='')
if mtype == "perform_editfields" and content:
fin_output += content
elif mtype == "perform_editfields" or not mtype:
fin_output += perform_editfields(ln, callback='')
if mtype == "perform_addfield" and content:
fin_output += content
elif mtype == "perform_addfield" or not mtype:
fin_output += perform_addfield(ln, callback='')
return addadminbox("<b>Menu</b>", [fin_output])
def perform_editfield(fldID, ln=CFG_SITE_LANG, mtype='', content='', callback='yes', confirm=-1):
"""form to modify a field. this method is calling other methods which again is calling this and sending back the output of the method.
if callback, the method will call perform_editcollection, if not, it will just return its output.
fldID - id of the field
mtype - the method that called this method.
content - the output from that method."""
fld_dict = dict(get_def_name('', "field"))
if fldID in [-1, "-1"]:
return addadminbox("Edit logical field", ["""<b><span class="info">Please go back and select a logical field</span></b>"""])
fin_output = """
<table>
<tr>
<td><b>Menu</b></td>
</tr>
<tr>
<td>0.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s">Show all</a></small></td>
<td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s&amp;mtype=perform_modifyfield">Modify field code</a></small></td>
<td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s&amp;mtype=perform_modifyfieldtranslations">Modify translations</a></small></td>
<td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s&amp;mtype=perform_modifyfieldtags">Modify MARC tags</a></small></td>
<td>4.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s&amp;mtype=perform_deletefield">Delete field</a></small></td>
</tr><tr>
<td>5.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&amp;ln=%s&amp;mtype=perform_showdetailsfield">Show field usage</a></small></td>
</tr>
</table>
""" % (CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln)
if mtype == "perform_modifyfield" and content:
fin_output += content
elif mtype == "perform_modifyfield" or not mtype:
fin_output += perform_modifyfield(fldID, ln, callback='')
if mtype == "perform_modifyfieldtranslations" and content:
fin_output += content
elif mtype == "perform_modifyfieldtranslations" or not mtype:
fin_output += perform_modifyfieldtranslations(fldID, ln, callback='')
if mtype == "perform_modifyfieldtags" and content:
fin_output += content
elif mtype == "perform_modifyfieldtags" or not mtype:
fin_output += perform_modifyfieldtags(fldID, ln, callback='')
if mtype == "perform_deletefield" and content:
fin_output += content
elif mtype == "perform_deletefield" or not mtype:
fin_output += perform_deletefield(fldID, ln, callback='')
return addadminbox("Edit logical field '%s'" % fld_dict[int(fldID)], [fin_output])
def perform_editindex(idxID, ln=CFG_SITE_LANG, mtype='', content='', callback='yes', confirm=-1):
"""form to modify a index. this method is calling other methods which again is calling this and sending back the output of the method.
idxID - id of the index
mtype - the method that called this method.
content - the output from that method."""
if idxID in [-1, "-1"]:
return addadminbox("Edit index", ["""<b><span class="info">Please go back and select a index</span></b>"""])
fin_output = """
<table>
<tr>
<td><b>Menu</b></td>
</tr>
<tr>
<td>0.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s">Show all</a></small></td>
<td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyindex">Modify index name / descriptor</a></small></td>
<td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyindextranslations">Modify translations</a></small></td>
<td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyindexfields">Modify index fields</a></small></td>
<td>4.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyindexstemming">Modify index stemming language</a></small></td>
- <td>5.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_deleteindex">Delete index</a></small></td>
+ <td>5.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifysynonymkb">Modify synonym knowledge base</a></small></td>
+ <td>6.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifystopwords">Modify remove stopwords</a></small></td>
+ <td>7.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyremovehtml">Modify remove HTML markup</a></small></td>
+ <td>8.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyremovelatex">Modify remove latex markup</a></small></td>
+ <td>9.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifytokenizer">Modify tokenizer</a></small></td>
+ <td>10.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifyindexer">Modify indexer</a></small></td>
+ <td>11.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s&amp;mtype=perform_deleteindex">Delete index</a></small></td>
</tr>
</table>
- """ % (CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln)
+ """ % (CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln)
+
if mtype == "perform_modifyindex" and content:
fin_output += content
elif mtype == "perform_modifyindex" or not mtype:
fin_output += perform_modifyindex(idxID, ln, callback='')
if mtype == "perform_modifyindextranslations" and content:
fin_output += content
elif mtype == "perform_modifyindextranslations" or not mtype:
fin_output += perform_modifyindextranslations(idxID, ln, callback='')
if mtype == "perform_modifyindexfields" and content:
fin_output += content
elif mtype == "perform_modifyindexfields" or not mtype:
fin_output += perform_modifyindexfields(idxID, ln, callback='')
if mtype == "perform_modifyindexstemming" and content:
fin_output += content
elif mtype == "perform_modifyindexstemming" or not mtype:
fin_output += perform_modifyindexstemming(idxID, ln, callback='')
+ if mtype == "perform_modifysynonymkb" and content:
+ fin_output += content
+ elif mtype == "perform_modifysynonymkb" or not mtype:
+ fin_output += perform_modifysynonymkb(idxID, ln, callback='')
+
+ if mtype == "perform_modifystopwords" and content:
+ fin_output += content
+ elif mtype == "perform_modifystopwords" or not mtype:
+ fin_output += perform_modifystopwords(idxID, ln, callback='')
+
+ if mtype == "perform_modifyremovehtml" and content:
+ fin_output += content
+ elif mtype == "perform_modifyremovehtml" or not mtype:
+ fin_output += perform_modifyremovehtml(idxID, ln, callback='')
+
+ if mtype == "perform_modifyremovelatex" and content:
+ fin_output += content
+ elif mtype == "perform_modifyremovelatex" or not mtype:
+ fin_output += perform_modifyremovelatex(idxID, ln, callback='')
+
+ if mtype == "perform_modifytokenizer" and content:
+ fin_output += content
+ elif mtype == "perform_modifytokenizer" or not mtype:
+ fin_output += perform_modifytokenizer(idxID, ln, callback='')
+
+ if mtype == "perform_modifyindexer" and content:
+ fin_output += content
+ elif mtype == "perform_modifyindexer" or not mtype:
+ fin_output += perform_modifyindexer(idxID, ln, callback='')
+
if mtype == "perform_deleteindex" and content:
fin_output += content
elif mtype == "perform_deleteindex" or not mtype:
fin_output += perform_deleteindex(idxID, ln, callback='')
return addadminbox("Edit index", [fin_output])
+
+def perform_editvirtualindex(idxID, ln=CFG_SITE_LANG, mtype='', content='', callback='yes', confirm=-1):
+
+ if idxID in [-1, "-1"]:
+ return addadminbox("Edit virtual index", ["""<b><span class="info">Please go back and select an index</span></b>"""])
+
+ fin_output = """
+ <table>
+ <tr>
+ <td><b>Menu</b></td>
+ </tr>
+ <tr>
+ <td>0.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editvirtualindex?idxID=%s&amp;ln=%s">Show all</a></small></td>
+ <td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/editvirtualindex?idxID=%s&amp;ln=%s&amp;mtype=perform_modifydependentindexes">Modify dependent indexes</a></small></td>
+ <td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_showvirtualindexoverview#2">Overview of virtual indexes</a></small></td>
+ </tr>
+ </table>
+ """ % (CFG_SITE_URL, idxID, ln, CFG_SITE_URL, idxID, ln, CFG_SITE_URL, ln)
+
+ if mtype == "perform_modifydependentindexes" and content:
+ fin_output += content
+ elif mtype == "perform_modifydependentindexes" or not mtype:
+ fin_output += perform_modifydependentindexes(idxID, ln, callback='')
+
+ index_name = "( %s )" % get_index_name_from_index_id(idxID)
+
+ return addadminbox("Edit virtual index %s" % index_name, [fin_output])
+
+
def perform_showindexoverview(ln=CFG_SITE_LANG, callback='', confirm=0):
- subtitle = """<a name="1"></a>1. Overview of indexes"""
+ subtitle = """<a name="1"></a>1. Overview of all indexes"""
output = """<table cellpadding="3" border="1">"""
- output += """<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>""" % ("ID", "Name", "Fwd.Idx Size", "Rev.Idx Size", "Fwd.Idx Words", "Rev.Idx Records", "Last updated", "Fields", "Translations", "Stemming Language")
+ output += """<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></tr>""" % ("ID", "Name", "Fwd.Idx Size", "Rev.Idx Size", "Fwd.Idx Words", "Rev.Idx Records", "Last updated", "Fields", "Translations", "Stemming Language", "Synonym knowledge base", "Remove stopwords", "Remove HTML markup", "Remove Latex markup", "Tokenizer", "Indexer type")
+
idx = get_idx()
idx_dict = dict(get_def_name('', "idxINDEX"))
stemming_language_map = get_stemming_language_map()
stemming_language_map_reversed = dict([(elem[1], elem[0]) for elem in stemming_language_map.iteritems()])
- for idxID, idxNAME, idxDESC, idxUPD, idxSTEM in idx:
+ virtual_indexes = dict(get_all_virtual_indexes())
+
+ for idxID, idxNAME, idxDESC, idxUPD, idxSTEM, idxSYNKB, idxSTOPWORDS, idxHTML, idxLATEX, idxTOK in idx:
forward_table_status_info = get_table_status_info('idxWORD%sF' % (idxID < 10 and '0%s' % idxID or idxID))
reverse_table_status_info = get_table_status_info('idxWORD%sR' % (idxID < 10 and '0%s' % idxID or idxID))
if str(idxUPD)[-3:] == ".00":
idxUPD = str(idxUPD)[0:-3]
lang = get_lang_list("idxINDEXNAME", "id_idxINDEX", idxID)
idx_fld = get_idx_fld(idxID)
fld = ""
for row in idx_fld:
fld += row[3] + ", "
if fld.endswith(", "):
fld = fld[:-2]
if len(fld) == 0:
fld = """<strong><span class="info">None</span></strong>"""
date = (idxUPD and idxUPD or """<strong><span class="info">Not updated</span></strong>""")
stemming_lang = stemming_language_map_reversed.get(idxSTEM, None)
if not stemming_lang:
stemming_lang = """<strong><span class="info">None</span></strong>"""
+ synonym_kb = get_idx_synonym_kb(idxID)
+ if not synonym_kb:
+ synonym_kb = """<strong><span class="info">None</span></strong>"""
+
+ remove_stopwords = get_idx_remove_stopwords(idxID)
+ if not remove_stopwords:
+ remove_stopwords = """<strong><span class="info">None</span></strong>"""
+
+ remove_html_markup = get_idx_remove_html_markup(idxID)
+ if not remove_html_markup:
+ remove_html_markup = """<strong><span class="info">None</span></strong>"""
+
+ remove_latex_markup = get_idx_remove_latex_markup(idxID)
+ if not remove_latex_markup:
+ remove_latex_markup = """<strong><span class="info">None</span></strong>"""
+
+ tokenizer = get_idx_tokenizer(idxID)
+ if not tokenizer:
+ tokenizer = """<strong><span class="info">None</span></strong>"""
+
+ type_of_indexer = virtual_indexes.get(idxID) and "virtual" or get_idx_indexer(idxNAME)
+
if forward_table_status_info and reverse_table_status_info:
- output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
+ output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
(idxID,
"""<a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s" title="%s">%s</a>""" % (CFG_SITE_URL, idxID, ln, idxDESC, idx_dict.get(idxID, idxNAME)),
"%s MB" % websearch_templates.tmpl_nice_number(forward_table_status_info['Data_length'] / 1048576.0, max_ndigits_after_dot=3),
"%s MB" % websearch_templates.tmpl_nice_number(reverse_table_status_info['Data_length'] / 1048576.0, max_ndigits_after_dot=3),
websearch_templates.tmpl_nice_number(forward_table_status_info['Rows']),
websearch_templates.tmpl_nice_number(reverse_table_status_info['Rows'], max_ndigits_after_dot=3),
date,
fld,
lang,
- stemming_lang)
+ stemming_lang,
+ synonym_kb,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup,
+ tokenizer,
+ type_of_indexer)
elif not forward_table_status_info:
- output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
+ output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
(idxID,
"""<a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s">%s</a>""" % (CFG_SITE_URL, idxID, ln, idx_dict.get(idxID, idxNAME)),
"Error", "%s MB" % websearch_templates.tmpl_nice_number(reverse_table_status_info['Data_length'] / 1048576.0, max_ndigits_after_dot=3),
"Error",
websearch_templates.tmpl_nice_number(reverse_table_status_info['Rows'], max_ndigits_after_dot=3),
date,
"",
- lang)
+ lang,
+ synonym_kb,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup,
+ tokenizer,
+ type_of_indexer)
elif not reverse_table_status_info:
- output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
+ output += """<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
(idxID,
"""<a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&amp;ln=%s">%s</a>""" % (CFG_SITE_URL, idxID, ln, idx_dict.get(idxID, idxNAME)),
"%s MB" % websearch_templates.tmpl_nice_number(forward_table_status_info['Data_length'] / 1048576.0, max_ndigits_after_dot=3),
"Error", websearch_templates.tmpl_nice_number(forward_table_status_info['Rows'], max_ndigits_after_dot=3),
"Error",
date,
"",
- lang)
+ lang,
+ synonym_kb,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup,
+ tokenizer,
+ type_of_indexer)
output += "</table>"
body = [output]
if callback:
return perform_index(ln, "perform_showindexoverview", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
+
+def perform_showvirtualindexoverview(ln=CFG_SITE_LANG, callback='', confirm=0):
+ subtitle = """<a name="1"></a>2. Overview of virtual indexes"""
+ output = """
+ <table>
+ <tr>
+ <td>1.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_editvirtualindexes#1">Edit virtual index</a></small></td>
+ <td>2.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_addvirtualindex#2">Add new virtual index</a></small></td>
+ <td>3.&nbsp;<small><a href="%s/admin/bibindex/bibindexadmin.py/index?ln=%s&amp;mtype=perform_deletevirtualindex#3">Delete virtual index</a></small></td>
+ </tr>
+ </table>
+ """ % (CFG_SITE_URL, ln, CFG_SITE_URL, ln, CFG_SITE_URL, ln)
+ output += """<table cellpadding="3" border="1">"""
+ output += """<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>""" % ("ID", "Virtual index", "Dependent indexes")
+ idx = get_all_virtual_indexes()
+ for idxID, idxNAME in idx:
+ normal_indexes = zip(*get_virtual_index_building_blocks(idxID))[1]
+ output += """<tr><td>%s</td><td>%s</td><td>%s</td></tr>""" % \
+ (idxID,
+ """<a href="%s/admin/bibindex/bibindexadmin.py/editvirtualindex?idxID=%s&amp;ln=%s">%s</a>""" % (CFG_SITE_URL, idxID, ln, idxNAME),
+ ", ".join(normal_indexes))
+ output += "</table>"
+
+ body = [output]
+ if callback:
+ return perform_index(ln, "perform_showvirtualindexoverview", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
def perform_editindexes(ln=CFG_SITE_LANG, callback='yes', content='', confirm=-1):
"""show a list of indexes that can be edited."""
- subtitle = """<a name="2"></a>2. Edit index&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (CFG_SITE_URL)
+ subtitle = """<a name="3"></a>3. Edit index&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (CFG_SITE_URL)
fin_output = ''
idx = get_idx()
output = ""
if len(idx) > 0:
text = """
<span class="adminlabel">Index name</span>
<select name="idxID" class="admin_w200">
<option value="-1">- Select a index -</option>
"""
- for (idxID, idxNAME, idxDESC, idxUPD, idxSTEM) in idx:
+ for (idxID, idxNAME, idxDESC, idxUPD, idxSTEM, idxSYNKB, idxSTOPWORDS, idxHTML, idxLATEX, idxTOK) in idx:
text += """<option value="%s">%s</option>""" % (idxID, idxNAME)
text += """</select>"""
output += createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/editindex" % CFG_SITE_URL,
text=text,
button="Edit",
ln=ln,
confirm=1)
else:
output += """No indexes exists"""
body = [output]
if callback:
return perform_index(ln, "perform_editindexes", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
+
+def perform_editvirtualindexes(ln=CFG_SITE_LANG, callback='yes', content='', confirm=-1):
+ """show a list of indexes that can be edited."""
+
+ subtitle = """<a name="2"></a>1. Edit virtual index&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (CFG_SITE_URL)
+
+ idx = get_all_virtual_indexes()
+ output = ""
+ if len(idx) > 0:
+ text = """
+ <span class="adminlabel">Virtual index name</span>
+ <select name="idxID" class="admin_w200">
+ <option value="-1">- Select an index -</option>
+ """
+ for (idxID, idxNAME) in idx:
+ text += """<option value="%s">%s</option>""" % (idxID, idxNAME)
+ text += """</select>"""
+
+ output += createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/editvirtualindex" % CFG_SITE_URL,
+ text=text,
+ button="Edit",
+ ln=ln,
+ confirm=1)
+ else:
+ output += """No indexes exist"""
+
+ body = [output]
+
+ if callback:
+ return perform_index(ln, "perform_editvirtualindexes", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
def perform_editfields(ln=CFG_SITE_LANG, callback='yes', content='', confirm=-1):
"""show a list of all logical fields that can be edited."""
- subtitle = """<a name="2"></a>2. Edit logical field&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (CFG_SITE_URL)
+ subtitle = """<a name="4"></a>4. Edit logical field&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (CFG_SITE_URL)
fin_output = ''
res = get_fld()
output = ""
if len(res) > 0:
text = """
<span class="adminlabel">Field name</span>
<select name="fldID" class="admin_w200">
<option value="-1">- Select a field -</option>
"""
for (fldID, name, code) in res:
text += """<option value="%s">%s</option>""" % (fldID, name)
text += """</select>"""
output += createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/editfield" % CFG_SITE_URL,
text=text,
button="Edit",
ln=ln,
confirm=1)
else:
output += """No logical fields exists"""
body = [output]
if callback:
return perform_field(ln, "perform_editfields", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addindex(ln=CFG_SITE_LANG, idxNAME='', callback="yes", confirm=-1):
"""form to add a new index.
idxNAME - the name of the new index"""
output = ""
subtitle = """<a name="3"></a>3. Add new index"""
text = """
<span class="adminlabel">Index name</span>
<input class="admin_w200" type="text" name="idxNAME" value="%s" /><br />
""" % idxNAME
output = createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/addindex" % CFG_SITE_URL,
text=text,
ln=ln,
button="Add index",
confirm=1)
if idxNAME and confirm in ["1", 1]:
res = add_idx(idxNAME)
output += write_outcome(res) + """<br /><a href="%s/admin/bibindex/bibindexadmin.py/editindex?idxID=%s&ln=%s">Configure this index</a>.""" % (CFG_SITE_URL, res[1], ln)
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please give the index a name.</span></b>
"""
body = [output]
if callback:
return perform_index(ln, "perform_addindex", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
+
+def perform_addvirtualindex(ln=CFG_SITE_LANG, idxNEWVID='', idxNEWPID='', callback="yes", confirm=-1):
+ """form to add a new virtual index from the set of physical indexes.
+ idxID - the name of the new virtual index"""
+ idx = get_all_indexes(virtual=False, with_ids=True)
+
+ output = ""
+ subtitle = """<a name="3"></a>2. Add new virtual index"""
+
+ if len(idx) > 0:
+ text = """
+ <span class="adminlabel">Choose new virtual index</span>
+ <select name="idxNEWVID" class="admin_w200">
+ <option value="-1">- Select an index -</option>
+ """
+
+ for (idxID, idxNAME) in idx:
+ checked = str(idxNEWVID) == str(idxID) and 'selected="selected"' or ''
+ text += """<option value="%s" %s>%s</option>
+ """ % (idxID, checked, idxNAME)
+ text += """</select>"""
+
+ text += """&nbsp;&nbsp;
+ <span class="adminlabel">Add physical index</span>
+ <select name="idxNEWPID" class="admin_w200">
+ <option value="-1">- Select an index -</option>
+ """
+ for (idxID, idxNAME) in idx:
+ text += """<option value="%s">%s</option>""" % (idxID, idxNAME)
+ text += """</select>"""
+
+ output += createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/addvirtualindex" % CFG_SITE_URL,
+ text=text,
+ button="Add index",
+ ln=ln,
+ confirm=1)
+ else:
+ output += """No index exists"""
+
+ if idxNEWVID not in ['', "-1", -1] and idxNEWPID not in ['', "-1", -1] and confirm in ["1", 1]:
+ res = add_virtual_idx(idxNEWVID, idxNEWPID)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, dict(idx)[int(idxNEWPID)])
+ elif confirm not in ["-1", -1] or idxNEWVID in ["-1", -1] or idxNEWPID in ["-1", -1]:
+ output += """<b><span class="info">Please specify the index.</span></b>"""
+
+ body = [output]
+
+ if callback:
+ return perform_index(ln, "perform_addvirtualindex", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
def perform_modifyindextranslations(idxID, ln=CFG_SITE_LANG, sel_type='', trans=[], confirm=-1, callback='yes'):
"""Modify the translations of a index
sel_type - the nametype to modify
trans - the translations in the same order as the languages from get_languages()"""
output = ''
subtitle = ''
langs = get_languages()
if confirm in ["2", 2] and idxID:
finresult = modify_translations(idxID, langs, sel_type, trans, "idxINDEX")
idx_dict = dict(get_def_name('', "idxINDEX"))
if idxID and idx_dict.has_key(int(idxID)):
idxID = int(idxID)
subtitle = """<a name="2"></a>2. Modify translations for index.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
if type(trans) is str:
trans = [trans]
if sel_type == '':
sel_type = get_idx_nametypes()[0][0]
header = ['Language', 'Translation']
actions = []
types = get_idx_nametypes()
if len(types) > 1:
text = """
<span class="adminlabel">Name type</span>
<select name="sel_type" class="admin_w200">
"""
for (key, value) in types:
text += """<option value="%s" %s>%s""" % (key, key == sel_type and 'selected="selected"' or '', value)
trans_names = get_name(idxID, ln, key, "field")
if trans_names and trans_names[0][0]:
text += ": %s" % trans_names[0][0]
text += "</option>"
text += """</select>"""
output += createhiddenform(action="modifyindextranslations#2",
text=text,
button="Select",
idxID=idxID,
ln=ln,
confirm=0)
if confirm in [-1, "-1", 0, "0"]:
trans = []
for (key, value) in langs:
try:
trans_names = get_name(idxID, key, sel_type, "idxINDEX")
trans.append(trans_names[0][0])
except StandardError, e:
trans.append('')
for nr in range(0,len(langs)):
actions.append(["%s" % (langs[nr][1],)])
actions[-1].append('<input type="text" name="trans" size="30" value="%s"/>' % trans[nr])
text = tupletotable(header=header, tuple=actions)
output += createhiddenform(action="modifyindextranslations#2",
text=text,
button="Modify",
idxID=idxID,
sel_type=sel_type,
ln=ln,
confirm=2)
if sel_type and len(trans):
if confirm in ["2", 2]:
output += write_outcome(finresult)
body = [output]
if callback:
return perform_editindex(idxID, ln, "perform_modifyindextranslations", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyfieldtranslations(fldID, ln=CFG_SITE_LANG, sel_type='', trans=[], confirm=-1, callback='yes'):
"""Modify the translations of a field
sel_type - the nametype to modify
trans - the translations in the same order as the languages from get_languages()"""
output = ''
subtitle = ''
langs = get_languages()
if confirm in ["2", 2] and fldID:
finresult = modify_translations(fldID, langs, sel_type, trans, "field")
fld_dict = dict(get_def_name('', "field"))
if fldID and fld_dict.has_key(int(fldID)):
fldID = int(fldID)
subtitle = """<a name="3"></a>3. Modify translations for logical field '%s'&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (fld_dict[fldID], CFG_SITE_URL)
if type(trans) is str:
trans = [trans]
if sel_type == '':
sel_type = get_fld_nametypes()[0][0]
header = ['Language', 'Translation']
actions = []
types = get_fld_nametypes()
if len(types) > 1:
text = """
<span class="adminlabel">Name type</span>
<select name="sel_type" class="admin_w200">
"""
for (key, value) in types:
text += """<option value="%s" %s>%s""" % (key, key == sel_type and 'selected="selected"' or '', value)
trans_names = get_name(fldID, ln, key, "field")
if trans_names and trans_names[0][0]:
text += ": %s" % trans_names[0][0]
text += "</option>"
text += """</select>"""
output += createhiddenform(action="modifyfieldtranslations#3",
text=text,
button="Select",
fldID=fldID,
ln=ln,
confirm=0)
if confirm in [-1, "-1", 0, "0"]:
trans = []
for (key, value) in langs:
try:
trans_names = get_name(fldID, key, sel_type, "field")
trans.append(trans_names[0][0])
except StandardError, e:
trans.append('')
for nr in range(0,len(langs)):
actions.append(["%s" % (langs[nr][1],)])
actions[-1].append('<input type="text" name="trans" size="30" value="%s"/>' % trans[nr])
text = tupletotable(header=header, tuple=actions)
output += createhiddenform(action="modifyfieldtranslations#3",
text=text,
button="Modify",
fldID=fldID,
sel_type=sel_type,
ln=ln,
confirm=2)
if sel_type and len(trans):
if confirm in ["2", 2]:
output += write_outcome(finresult)
body = [output]
if callback:
return perform_editfield(fldID, ln, "perform_modifytranslations", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_showdetailsfieldtag(fldID, tagID, ln=CFG_SITE_LANG, callback="yes", confirm=-1):
"""form to add a new field.
fldNAME - the name of the new field
code - the field code"""
fld_dict = dict(get_def_name('', "field"))
fldID = int(fldID)
tagname = run_sql("SELECT name from tag where id=%s", (tagID, ))[0][0]
output = ""
subtitle = """<a name="4.1"></a>Showing details for MARC tag '%s'""" % tagname
output += "<br /><b>This MARC tag is used directly in these logical fields:</b>&nbsp;"
fld_tag = get_fld_tags('', tagID)
exist = {}
for (id_field,id_tag, tname, tvalue, score) in fld_tag:
output += "%s, " % fld_dict[int(id_field)]
exist[id_field] = 1
output += "<br /><b>This MARC tag is used indirectly in these logical fields:</b>&nbsp;"
tag = run_sql("SELECT value from tag where id=%s", (id_tag, ))
tag = tag[0][0]
for i in range(0, len(tag) - 1):
res = run_sql("SELECT id_field,id_tag FROM field_tag,tag WHERE tag.id=field_tag.id_tag AND tag.value=%s", ('%' + tag[0:i] + '%',))
for (id_field, id_tag) in res:
output += "%s, " % fld_dict[int(id_field)]
exist[id_field] = 1
res = run_sql("SELECT id_field,id_tag FROM field_tag,tag WHERE tag.id=field_tag.id_tag AND tag.value like %s", (tag, ))
for (id_field, id_tag) in res:
if not exist.has_key(id_field):
output += "%s, " % fld_dict[int(id_field)]
body = [output]
if callback:
return perform_modifyfieldtags(fldID, ln, "perform_showdetailsfieldtag", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_showdetailsfield(fldID, ln=CFG_SITE_LANG, callback="yes", confirm=-1):
"""form to add a new field.
fldNAME - the name of the new field
code - the field code"""
fld_dict = dict(get_def_name('', "field"))
col_dict = dict(get_def_name('', "collection"))
fldID = int(fldID)
col_fld = get_col_fld('', '', fldID)
sort_types = dict(get_sort_nametypes())
fin_output = ""
subtitle = """<a name="1"></a>5. Show usage for logical field '%s'""" % fld_dict[fldID]
output = "This logical field is used in these collections:<br />"
ltype = ''
exist = {}
for (id_collection, id_field, id_fieldvalue, ftype, score, score_fieldvalue) in col_fld:
if ltype != ftype:
output += "<br /><b>%s:&nbsp;</b>" % sort_types[ftype]
ltype = ftype
exist = {}
if not exist.has_key(id_collection):
output += "%s, " % col_dict[int(id_collection)]
exist[id_collection] = 1
if not col_fld:
output = "This field is not used by any collections."
fin_output = addadminbox('Collections', [output])
body = [fin_output]
if callback:
return perform_editfield(fldID, ln, "perform_showdetailsfield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addfield(ln=CFG_SITE_LANG, fldNAME='', code='', callback="yes", confirm=-1):
"""form to add a new field.
fldNAME - the name of the new field
code - the field code"""
output = ""
subtitle = """<a name="3"></a>3. Add new logical field"""
code = code.replace(' ', '')
text = """
<span class="adminlabel">Field name</span>
<input class="admin_w200" type="text" name="fldNAME" value="%s" /><br />
<span class="adminlabel">Field code</span>
<input class="admin_w200" type="text" name="code" value="%s" /><br />
""" % (fldNAME, code)
output = createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/addfield" % CFG_SITE_URL,
text=text,
ln=ln,
button="Add field",
confirm=1)
if fldNAME and code and confirm in ["1", 1]:
res = add_fld(fldNAME, code)
output += write_outcome(res)
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please give the logical field a name and code.</span></b>
"""
body = [output]
if callback:
return perform_field(ln, "perform_addfield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_deletefield(fldID, ln=CFG_SITE_LANG, callback='yes', confirm=0):
"""form to remove a field.
fldID - the field id from table field.
"""
fld_dict = dict(get_def_name('', "field"))
if not fld_dict.has_key(int(fldID)):
return """<b><span class="info">Field does not exist</span></b>"""
subtitle = """<a name="4"></a>4. Delete the logical field '%s'&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (fld_dict[int(fldID)], CFG_SITE_URL)
output = ""
if fldID:
fldID = int(fldID)
if confirm in ["0", 0]:
check = run_sql("SELECT id_field from idxINDEX_field where id_field=%s", (fldID, ))
text = ""
if check:
text += """<b><span class="info">This field is used in an index, deletion may cause problems.</span></b><br />"""
text += """Do you want to delete the logical field '%s' and all its relations and definitions.""" % (fld_dict[fldID])
output += createhiddenform(action="deletefield#4",
text=text,
button="Confirm",
fldID=fldID,
confirm=1)
elif confirm in ["1", 1]:
res = delete_fld(fldID)
if res[0] == 1:
return """<br /><b><span class="info">Field deleted.</span></b>""" + write_outcome(res)
else:
output += write_outcome(res)
body = [output]
if callback:
return perform_editfield(fldID, ln, "perform_deletefield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_deleteindex(idxID, ln=CFG_SITE_LANG, callback='yes', confirm=0):
"""form to delete an index.
idxID - the index id from table idxINDEX.
"""
if idxID:
- subtitle = """<a name="5"></a>5. Delete the index.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ subtitle = """<a name="5"></a>11. Delete the index.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
output = ""
if confirm in ["0", 0]:
idx = get_idx(idxID)
if idx:
text = ""
text += """<b><span class="info">By deleting an index, you may also loose any indexed data in the forward and reverse table for this index.</span></b><br />"""
text += """Do you want to delete the index '%s' and all its relations and definitions.""" % (idx[0][1])
output += createhiddenform(action="deleteindex#5",
text=text,
button="Confirm",
idxID=idxID,
confirm=1)
else:
return """<br /><b><span class="info">Index specified does not exist.</span></b>"""
elif confirm in ["1", 1]:
res = delete_idx(idxID)
if res[0] == 1:
return """<br /><b><span class="info">Index deleted.</span></b>""" + write_outcome(res)
else:
output += write_outcome(res)
body = [output]
if callback:
return perform_editindex(idxID, ln, "perform_deleteindex", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
+
+def perform_deletevirtualindex(ln=CFG_SITE_LANG, idxID='', callback='yes', confirm=-1):
+ """form to delete a virtual index.
+ idxID - the index id from table idxINDEX.
+ """
+ output = ""
+ subtitle = """<a name="3"></a>3. Delete virtual index"""
+
+ idx = get_all_virtual_indexes()
+ if len(idx) > 0:
+ text = """<span class="adminlabel">Choose a virtual index</span>
+ <select name="idxID" class="admin_w200">
+ <option value="-1">- Select an index -</option>
+ """
+ for idx_id, idx_name in idx:
+ selected = str(idxID) == str(idx_id) and 'selected="selected"' or ''
+ text += """<option value="%s" %s>%s</option>""" % (idx_id, selected, idx_name)
+ text += """</select>"""
+
+ output += createhiddenform(action="deletevirtualindex#3",
+ text=text,
+ button="Confirm",
+ confirm=1)
+ else:
+ output = "No index specified"
+
+ if confirm in ["1", 1] and idxID not in ['', "-1", -1]:
+ res = delete_virtual_idx(int(idxID))
+ if res[0] == 1:
+ output += """<br /><b><span class="info">Virtual index deleted.</span></b><br />"""
+ output += write_outcome(res)
+ else:
+ output += write_outcome(res)
+ elif idxID in ["-1", -1]:
+ output += """<b><span class="info">Please specify the index.</span></b>"""
+
+ body = [output]
+
+ if callback:
+ return perform_index(ln, "perform_deletevirtualindex", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+
+def perform_modifydependentindexes(idxID, ln=CFG_SITE_LANG, newIDs=[], callback='yes', confirm=-1):
+ """page on which dependent indexes for specific virtual index
+ can be chosen"""
+ subtitle = ""
+ output = ""
+
+ non_virtual_indexes = dict(get_all_indexes(virtual=False, with_ids=True)) #[(id1, name1), (id2, name2)..]
+
+ already_dependent = dict(get_virtual_index_building_blocks(idxID))
+
+ if not already_dependent:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="1"></a>1. Modify dependent indexes.&nbsp;&nbsp;&nbsp;
+ <small>
+ [<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]
+ </small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ newIDs = []
+ if not newIDs:
+ newIDs = []
+
+ tick_list = ""
+ checked_values = already_dependent.values()
+ if confirm > -1:
+ checked_values = newIDs
+ for index_name in non_virtual_indexes.values():
+ checked = index_name in checked_values and 'checked="checked"' or ''
+ tick_list += """<input type="checkbox" name='newIDs' value="%s" %s >%s </br>""" % \
+ (index_name, checked, index_name)
+
+ output += createhiddenform(action="modifydependentindexes#1",
+ text=tick_list,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+ if confirm in [0, "0"] and newIDs == []:
+ output += "</br>"
+ text = """
+ <span class="important">Removing all dependent indexes
+ means removing virtual index.</span>
+ <br /> <strong>Are you sure you want to do this?</strong>"""
+ output += createhiddenform(action="modifydependentindexes#1",
+ text=text,
+ button="Confirm",
+ idxID=idxID,
+ newIDs=newIDs,
+ ln=ln,
+ confirm=1)
+
+ elif confirm in [0, "0"]:
+ output += "</br>"
+ text = """
+ <span class="important">You are about to change dependent indexes</span>.<br /> <strong>Are you sure you want to do this?</strong>"""
+ output += createhiddenform(action="modifydependentindexes#1",
+ text=text,
+ button="Confirm",
+ idxID=idxID,
+ newIDs=newIDs,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ output += "</br>"
+ to_add, to_remove = find_dependent_indexes_to_change(idxID, newIDs)
+ res = modify_dependent_indexes(idxID, to_add, to_remove)
+ output += write_outcome(res)
+ if len(to_remove) + len(to_add) > 0:
+ output += """<br /><span class="info">Please note you should run as soon as possible:"""
+ for index in to_add:
+ output += """<pre>$> %s/bibindex --reindex -w %s</pre>
+ """ % (CFG_BINDIR, index)
+ for index in to_remove:
+ output += """<pre>$> %s/bibindex -w %s --remove-dependent-index %s</pre>
+ """ % (CFG_BINDIR, get_index_name_from_index_id(idxID), index)
+ if len(to_remove) + len(to_add) > 0:
+ output += "</span>"
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
+ else:
+ output = """It seems that this index is not virtual."""
+
+ body = [output]
+
+ if callback:
+ return perform_editvirtualindex(idxID, ln, "perform_modifydependentindexes", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+def find_dependent_indexes_to_change(idxID, new_indexes):
+ """From new set of dependent indexes finds out
+ which indexes should be added and which should be removed
+ from database (idxINDEX_idxINDEX table)
+ @param idxID: id of the virtual index
+ @param new_indexes: future set of dependent indexes
+ """
+ if not type(new_indexes) is list:
+ new_indexes = [new_indexes]
+ dependent_indexes = dict(get_virtual_index_building_blocks(idxID)).values()
+ to_add = set(new_indexes) - set(dependent_indexes)
+ to_remove = set(dependent_indexes) - set(new_indexes)
+ return list(to_add), list(to_remove)
+
+
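
The set arithmetic in `find_dependent_indexes_to_change` above can be exercised in isolation; a minimal sketch (the `split_changes` helper and the index names are hypothetical, for illustration only, not part of the Invenio API):

```python
# Standalone sketch of the add/remove computation used by
# find_dependent_indexes_to_change: compare the currently dependent
# indexes with the newly selected set via set differences.
def split_changes(current, selected):
    # Selected now but not dependent yet -> to be added.
    to_add = sorted(set(selected) - set(current))
    # Dependent now but no longer selected -> to be removed.
    to_remove = sorted(set(current) - set(selected))
    return to_add, to_remove

print(split_changes(["author", "title"], ["title", "year"]))
# -> (['year'], ['author'])
```

Sorting the results only makes the output deterministic; the production code returns the differences as plain lists.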
def perform_showfieldoverview(ln=CFG_SITE_LANG, callback='', confirm=0):
subtitle = """<a name="1"></a>1. Logical fields overview"""
output = """<table cellpadding="3" border="1">"""
output += """<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>""" % ("Field", "MARC Tags", "Translations")
query = "SELECT id,name FROM field"
res = run_sql(query)
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
for field_id,field_name in res:
query = "SELECT tag.value FROM tag, field_tag WHERE tag.id=field_tag.id_tag AND field_tag.id_field=%s ORDER BY field_tag.score DESC,tag.value ASC"
res = run_sql(query, (field_id, ))
field_tags = ""
for row in res:
field_tags = field_tags + row[0] + ", "
if field_tags.endswith(", "):
field_tags = field_tags[:-2]
if not field_tags:
field_tags = """<b><span class="info">None</span></b>"""
lang = get_lang_list("fieldname", "id_field", field_id)
output += """<tr><td>%s</td><td>%s</td><td>%s</td></tr>""" % ("""<a href="%s/admin/bibindex/bibindexadmin.py/editfield?fldID=%s&ln=%s">%s</a>""" % (CFG_SITE_URL, field_id, ln, fld_dict[field_id]), field_tags, lang)
output += "</table>"
body = [output]
if callback:
return perform_field(ln, "perform_showfieldoverview", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyindex(idxID, ln=CFG_SITE_LANG, idxNAME='', idxDESC='', callback='yes', confirm=-1):
"""form to modify an index name.
idxID - the index name to change.
idxNAME - new name of index
idxDESC - description of index content"""
subtitle = ""
output = ""
idx = get_idx(idxID)
if not idx:
idxID = -1
if idxID not in [-1, "-1"]:
subtitle = """<a name="2"></a>1. Modify index name.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
if confirm in [-1, "-1"]:
idxNAME = idx[0][1]
idxDESC = idx[0][2]
text = """
<span class="adminlabel">Index name</span>
<input class="admin_w200" type="text" name="idxNAME" value="%s" /><br />
<span class="adminlabel">Index description</span>
<textarea class="admin_w200" name="idxDESC">%s</textarea><br />
""" % (idxNAME, idxDESC)
output += createhiddenform(action="modifyindex#1",
text=text,
button="Modify",
idxID=idxID,
ln=ln,
confirm=1)
if idxID > -1 and idxNAME and confirm in [1, "1"]:
res = modify_idx(idxID, idxNAME, idxDESC)
output += write_outcome(res)
elif confirm in [1, "1"]:
output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
else:
output = """No index to modify."""
body = [output]
if callback:
return perform_editindex(idxID, ln, "perform_modifyindex", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyindexstemming(idxID, ln=CFG_SITE_LANG, idxSTEM='', callback='yes', confirm=-1):
"""form to modify an index name.
idxID - the index name to change.
idxSTEM - new stemming language code"""
subtitle = ""
output = ""
stemming_language_map = get_stemming_language_map()
stemming_language_map['None'] = ''
idx = get_idx(idxID)
if not idx:
idxID = -1
if idxID not in [-1, "-1"]:
subtitle = """<a name="4"></a>4. Modify index stemming language.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
if confirm in [-1, "-1"]:
idxSTEM = idx[0][4]
if not idxSTEM:
idxSTEM = ''
language_html_element = """<select name="idxSTEM" class="admin_w200">"""
languages = stemming_language_map.keys()
languages.sort()
for language in languages:
if stemming_language_map[language] == idxSTEM:
selected = 'selected="selected"'
else:
selected = ""
language_html_element += """<option value="%s" %s>%s</option>""" % (stemming_language_map[language], selected, language)
language_html_element += """</select>"""
text = """
<span class="adminlabel">Index stemming language</span>
""" + language_html_element
output += createhiddenform(action="modifyindexstemming#4",
text=text,
button="Modify",
idxID=idxID,
ln=ln,
confirm=0)
if confirm in [0, "0"] and get_idx(idxID)[0][4] == idxSTEM:
output += """<span class="info">Stemming language has not been changed</span>"""
elif confirm in [0, "0"]:
text = """
<span class="important">You are about to either disable or change the stemming language setting for this index. Please note that it is not recommended to enable stemming for structured-data indexes like "report number", "year", "author" or "collection". On the contrary, it is advisable to enable stemming for indexes like "fulltext", "abstract", "title", etc. since this would overall improve the retrieval quality. <br /> Beware, however, that after disabling or changing the stemming language setting of an index you will have to reindex it. It is a good idea to change the stemming language and to reindex during low usage hours of your service, since searching results will be potentially affected by the discrepancy between search terms now being (not) stemmed and indexes still using the previous settings until the reindexing is completed</span>.<br /> <strong>Are you sure you want to disable/change the stemming language setting of this index?</strong>"""
output += createhiddenform(action="modifyindexstemming#4",
text=text,
button="Modify",
idxID=idxID,
idxSTEM=idxSTEM,
ln=ln,
confirm=1)
elif idxID > -1 and confirm in [1, "1"]:
res = modify_idx_stemming(idxID, idxSTEM)
output += write_outcome(res)
output += """<br /><span class="info">Please note you must run as soon as possible:
<pre>$> %s/bibindex --reindex -w %s</pre></span>
""" % (CFG_BINDIR, get_idx(idxID)[0][1])
elif confirm in [1, "1"]:
output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
else:
output = """No index to modify."""
body = [output]
if callback:
return perform_editindex(idxID, ln, "perform_modifyindexstemming", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
+def perform_modifyindexer(idxID, ln=CFG_SITE_LANG, indexer='', callback='yes', confirm=-1):
+ """form to modify an indexer.
+ idxID - the index name to change.
+ indexer - indexer type: native/SOLR/XAPIAN/virtual"""
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if idx:
+ current_indexer = is_index_virtual(idx[0][0]) and "virtual" or get_idx_indexer(idx[0][1])
+ subtitle = """<a name="4"></a>5. Modify indexer.&nbsp;&nbsp;&nbsp;
+ <small>
+ [<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]
+ </small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ indexer = current_indexer or ''
+ items = ["native"]
+ if idx[0][1] == "fulltext":
+ items.extend(["SOLR", "XAPIAN"])
+ else:
+ items.extend(["virtual"])
+
+ html_element = """<select name="indexer" class="admin_w200">"""
+ for item in items:
+ selected = indexer==item and 'selected="selected"' or ''
+ html_element += """<option value="%s" %s>%s</option>""" % (item, selected, item)
+ html_element += """</select>"""
+
+ text = """<span class="adminlabel">Indexer type</span>""" + html_element
+ output += createhiddenform(action="modifyindexer#5",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=1)
+
+ if confirm in [1, "1"] and idx[0][1]=="fulltext":
+ res = modify_idx_indexer(idxID, indexer)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you should run:
+ <pre>$> %s/bibindex --reindex -w fulltext</pre></span>""" % CFG_BINDIR
+ elif confirm in [1, "1"]:
+ if indexer=="virtual" and current_indexer == "native":
+ params = {'idxNEWVID': idxID}
+ return perform_index(ln, "perform_addvirtualindex", "", **params)
+ elif indexer=="native" and current_indexer == "virtual":
+ params = {'idxID':idxID}
+ return perform_index(ln, "perform_deletevirtualindex", "", **params)
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifyindexer", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+def perform_modifysynonymkb(idxID, ln=CFG_SITE_LANG, idxKB='', idxMATCH='', callback='yes', confirm=-1):
+ """form to modify the knowledge base for the synonym lookup.
+ idxID - the index name to change.
+ idxKB - new knowledge base name
+ idxMATCH - new match type
+ """
+
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if not idx:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="4"></a>5. Modify knowledge base for synonym lookup.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ field_value = get_idx_synonym_kb(idxID)
+ if CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR in field_value:
+ idxKB, idxMATCH = field_value.split(CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR)
+ if not idxKB:
+ idxKB = ''
+ idxMATCH = ''
+
+ kb_html_element = """<select name="idxKB" class="admin_w200">"""
+ knowledge_base_names = get_all_kb_names()
+ knowledge_base_names.append(CFG_BIBINDEX_SYNONYM_MATCH_TYPE["None"])
+ knowledge_base_names.sort()
+ for knowledge_base_name in knowledge_base_names:
+ if knowledge_base_name == idxKB:
+ selected = 'selected="selected"'
+ else:
+ selected = ""
+ kb_html_element += """<option value="%s" %s>%s</option>""" % (knowledge_base_name, selected, knowledge_base_name)
+ kb_html_element += """</select>"""
+
+ match_html_element = """<select name="idxMATCH" class="admin_w200">"""
+ match_names = CFG_BIBINDEX_SYNONYM_MATCH_TYPE.values()
+ match_names.sort()
+ for match_name in match_names:
+ if match_name == idxMATCH:
+ selected = 'selected="selected"'
+ else:
+ selected = ""
+ match_html_element += """<option value="%s" %s>%s</option>""" % (match_name, selected, match_name)
+ match_html_element += """</select>"""
+
+ text = """<span class="adminlabel">Knowledge base name and match type</span>""" + kb_html_element + match_html_element
+
+ output += createhiddenform(action="modifysynonymkb#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+ if confirm in [0, "0"] and get_idx(idxID)[0][5] == idxKB + CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR + idxMATCH:
+ output += """<span class="info">Knowledge base has not been changed</span>"""
+ elif confirm in [0, "0"]:
+ text = """
+ <span class="important">You are going to change the knowledge base for this index.<br /> <strong>Are you sure you want
+ to change the knowledge base of this index?</strong>"""
+ output += createhiddenform(action="modifysynonymkb#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ idxKB=idxKB,
+ idxMATCH=idxMATCH,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ res = modify_idx_synonym_kb(idxID, idxKB, idxMATCH)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note that you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, get_idx(idxID)[0][1])
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifysynonymkb", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+
+def perform_modifystopwords(idxID, ln=CFG_SITE_LANG, idxSTOPWORDS='', callback='yes', confirm=-1):
+ """Form to modify the stopwords configuration
+ @param idxID: id of the index on which modification will be performed.
+ @param idxSTOPWORDS: remove stopwords or not ('Yes' or 'No')
+ """
+
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if not idx:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="4"></a>6. Modify remove stopwords.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ idxSTOPWORDS = get_idx_remove_stopwords(idxID)
+ if not idxSTOPWORDS:
+ idxSTOPWORDS = ''
+ if isinstance(idxSTOPWORDS, tuple):
+ idxSTOPWORDS = ''
+
+ stopwords_html_element = """<input class="admin_w200" type="text" name="idxSTOPWORDS" value="%s" /><br />""" % idxSTOPWORDS
+
+ text = """<span class="adminlabel">Remove stopwords</span><br />""" + stopwords_html_element
+
+ output += createhiddenform(action="modifystopwords#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+ if confirm in [0, "0"] and get_idx(idxID)[0][6] == idxSTOPWORDS:
+ output += """<span class="info">Stopwords have not been changed</span>"""
+ elif confirm in [0, "0"] and idxSTOPWORDS == '':
+ output += """<span class="info">You need to provide a name of the file with stopwords</span>"""
+ elif confirm in [0, "0"]:
+ text = """<span class="important">You are going to change the stopwords configuration for this index.<br />
+ <strong>Are you sure you want to do this?</strong>"""
+ output += createhiddenform(action="modifystopwords#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ idxSTOPWORDS=idxSTOPWORDS,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ res = modify_idx_stopwords(idxID, idxSTOPWORDS)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, get_idx(idxID)[0][1])
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifystopwords", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+def perform_modifyremovehtml(idxID, ln=CFG_SITE_LANG, idxHTML='', callback='yes', confirm=-1):
+ """Form to modify the 'remove html' configuration.
+ @param idxID: id of the index on which modification will be performed.
+ @param idxHTML: remove html markup or not ('Yes' or 'No')"""
+
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if not idx:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="4"></a>7. Modify remove HTML markup.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ idxHTML = get_idx_remove_html_markup(idxID)
+ if not idxHTML:
+ idxHTML = ''
+
+ remove_html_element = """<select name="idxHTML" class="admin_w200">"""
+ if idxHTML == 'Yes':
+ remove_html_element += """<option value="Yes" selected ="selected">Yes</option>"""
+ remove_html_element += """<option value="No">No</option>"""
+ elif idxHTML == 'No':
+ remove_html_element += """<option value="Yes">Yes</option>"""
+ remove_html_element += """<option value="No" selected ="selected">No</option>"""
+ else:
+ remove_html_element += """<option value="Yes">Yes</option>"""
+ remove_html_element += """<option value="No">No</option>"""
+ remove_html_element += """</select>"""
+
+
+ text = """<span class="adminlabel">Remove HTML markup</span>""" + remove_html_element
+ output += createhiddenform(action="modifyremovehtml#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+ if confirm in [0, "0"] and get_idx_remove_html_markup(idxID) == idxHTML:
+ output += """<span class="info">Remove HTML markup parameter has not been changed</span>"""
+ elif confirm in [0, "0"]:
+ text = """<span class="important">You are going to change the remove HTML markup for this index.<br />
+ <strong>Are you sure you want to change the remove HTML markup of this index?</strong>"""
+ output += createhiddenform(action="modifyremovehtml#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ idxHTML=idxHTML,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ res = modify_idx_html_markup(idxID, idxHTML)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, get_idx(idxID)[0][1])
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifyremovehtml", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+def perform_modifyremovelatex(idxID, ln=CFG_SITE_LANG, idxLATEX='', callback='yes', confirm=-1):
+ """Form to modify the 'remove latex' configuration.
+ @param idxID: id of the index on which modification will be performed.
+ @param idxLATEX: remove latex markup or not ('Yes' or 'No')"""
+
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if not idx:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="4"></a>8. Modify remove latex markup.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ idxLATEX = get_idx_remove_latex_markup(idxID)
+ if not idxLATEX:
+ idxLATEX = ''
+
+ remove_latex_element = """<select name="idxLATEX" class="admin_w200">"""
+ if idxLATEX == 'Yes':
+ remove_latex_element += """<option value="Yes" selected ="selected">Yes</option>"""
+ remove_latex_element += """<option value="No">No</option>"""
+ elif idxLATEX == 'No':
+ remove_latex_element += """<option value="Yes">Yes</option>"""
+ remove_latex_element += """<option value="No" selected ="selected">No</option>"""
+ else:
+ remove_latex_element += """<option value="Yes">Yes</option>"""
+ remove_latex_element += """<option value="No">No</option>"""
+ remove_latex_element += """</select>"""
+
+
+ text = """<span class="adminlabel">Remove latex markup</span>""" + remove_latex_element
+ output += createhiddenform(action="modifyremovelatex#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+ if confirm in [0, "0"] and get_idx_remove_latex_markup(idxID) == idxLATEX:
+ output += """<span class="info">Remove latex markup parameter has not been changed</span>"""
+ elif confirm in [0, "0"]:
+ text = """<span class="important">You are going to change the remove latex markup for this index.<br />
+ <strong>Are you sure you want to change the remove latex markup of this index?</strong>"""
+ output += createhiddenform(action="modifyremovelatex#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ idxLATEX=idxLATEX,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ res = modify_idx_latex_markup(idxID, idxLATEX)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, get_idx(idxID)[0][1])
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please give a name for the index.</span></b>"""
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifyremovelatex", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+def perform_modifytokenizer(idxID, ln=CFG_SITE_LANG, idxTOK='', callback='yes', confirm=-1):
+ """Form to modify the 'tokenizer' configuration.
+ @param idxID: id of the index on which modification will be performed.
+ @param idxTOK: tokenizer name"""
+
+ subtitle = ""
+ output = ""
+
+ idx = get_idx(idxID)
+ if not idx:
+ idxID = -1
+ if idxID not in [-1, "-1"]:
+ subtitle = """<a name="4"></a>9. Modify tokenizer.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
+ if confirm in [-1, "-1"]:
+ idxTOK = get_idx_tokenizer(idxID)
+ if not idxTOK:
+ idxTOK = ''
+
+
+ tokenizer_element = """<select name="idxTOK" class="admin_w200">"""
+ for key in _TOKENIZERS:
+ if key == idxTOK:
+ tokenizer_element += """<option value="%s" selected="selected">%s</option>""" % (key, key)
+ else:
+ tokenizer_element += """<option value="%s">%s</option>""" % (key, key)
+ tokenizer_element += """</select>"""
+
+ text = """<span class="adminlabel">Tokenizer</span>""" + tokenizer_element
+ output += createhiddenform(action="modifytokenizer#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ ln=ln,
+ confirm=0)
+
+
+ if confirm in [0, "0"] and get_idx_tokenizer(idxID) == idxTOK:
+ output += """<span class="info">Tokenizer has not been changed</span>"""
+ elif confirm in [0, "0"]:
+ text = """<span class="important">You are going to change a tokenizer for this index.<br />
+ <strong>Are you sure you want to do this?</strong></span>"""
+ output += createhiddenform(action="modifytokenizer#4",
+ text=text,
+ button="Modify",
+ idxID=idxID,
+ idxTOK=idxTOK,
+ ln=ln,
+ confirm=1)
+ elif idxID > -1 and confirm in [1, "1"]:
+ res = modify_idx_tokenizer(idxID, idxTOK)
+ output += write_outcome(res)
+ output += """<br /><span class="info">Please note you must run as soon as possible:
+ <pre>$> %s/bibindex --reindex -w %s</pre></span>""" % (CFG_BINDIR, get_idx(idxID)[0][1])
+ elif confirm in [1, "1"]:
+ output += """<br /><b><span class="info">Please select a tokenizer.</span></b>"""
+
+ else:
+ output = """No index to modify."""
+
+ body = [output]
+
+ if callback:
+ return perform_editindex(idxID, ln, "perform_modifytokenizer", addadminbox(subtitle, body))
+ else:
+ return addadminbox(subtitle, body)
+
+
+
def perform_modifyfield(fldID, ln=CFG_SITE_LANG, code='', callback='yes', confirm=-1):
"""form to modify a field.
fldID - the field to change."""
subtitle = ""
output = ""
fld_dict = dict(get_def_name('', "field"))
if fldID not in [-1, "-1"]:
if confirm in [-1, "-1"]:
res = get_fld(fldID)
code = res[0][2]
else:
code = ("%s" % code).replace(" ", "")
fldID = int(fldID)
subtitle = """<a name="2"></a>1. Modify field code for logical field '%s'&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (fld_dict[int(fldID)], CFG_SITE_URL)
text = """
<span class="adminlabel">Field code</span>
<input class="admin_w200" type="text" name="code" value="%s" /><br />
""" % code
output += createhiddenform(action="modifyfield#2",
text=text,
button="Modify",
fldID=fldID,
ln=ln,
confirm=1)
if fldID > -1 and confirm in [1, "1"]:
fldID = int(fldID)
res = modify_fld(fldID, code)
output += write_outcome(res)
else:
output = """No field to modify.
"""
body = [output]
if callback:
return perform_editfield(fldID, ln, "perform_modifyfield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyindexfields(idxID, ln=CFG_SITE_LANG, callback='yes', content='', confirm=-1):
"""Modify which logical fields to use in this index.."""
output = ''
subtitle = """<a name="3"></a>3. Modify index fields.&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % CFG_SITE_URL
output = """<dl>
<dt>Menu</dt>
<dd><a href="%s/admin/bibindex/bibindexadmin.py/addindexfield?idxID=%s&amp;ln=%s#3.1">Add field to index</a></dd>
<dd><a href="%s/admin/bibindex/bibindexadmin.py/field?ln=%s">Manage fields</a></dd>
</dl>
""" % (CFG_SITE_URL, idxID, ln, CFG_SITE_URL, ln)
header = ['Field', '']
actions = []
idx_fld = get_idx_fld(idxID)
if len(idx_fld) > 0:
for (idxID, idxNAME,fldID, fldNAME, regexp_punct, regexp_alpha_sep) in idx_fld:
actions.append([fldNAME])
for col in [(('Remove','removeindexfield'),)]:
actions[-1].append('<a href="%s/admin/bibindex/bibindexadmin.py/%s?idxID=%s&amp;fldID=%s&amp;ln=%s#3.1">%s</a>' % (CFG_SITE_URL, col[0][1], idxID, fldID, ln, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/bibindex/bibindexadmin.py/%s?idxID=%s&amp;fldID=%s&amp;ln=%s#3.1">%s</a>' % (CFG_SITE_URL, function, idxID, fldID, ln, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No index fields exists"""
output += content
body = [output]
if callback:
return perform_editindex(idxID, ln, "perform_modifyindexfields", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyfieldtags(fldID, ln=CFG_SITE_LANG, callback='yes', content='', confirm=-1):
"""show the sort fields of this collection.."""
output = ''
fld_dict = dict(get_def_name('', "field"))
fld_type = get_fld_nametypes()
fldID = int(fldID)
subtitle = """<a name="4"></a>3. Modify MARC tags for the logical field '%s'&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/bibindex-admin-guide">?</a>]</small>""" % (fld_dict[int(fldID)], CFG_SITE_URL)
output = """<dl>
<dt>Menu</dt>
<dd><a href="%s/admin/bibindex/bibindexadmin.py/addtag?fldID=%s&amp;ln=%s#4.1">Add MARC tag</a></dd>
<dd><a href="%s/admin/bibindex/bibindexadmin.py/deletetag?fldID=%s&amp;ln=%s#4.1">Delete unused MARC tags</a></dd>
</dl>
""" % (CFG_SITE_URL, fldID, ln, CFG_SITE_URL, fldID, ln)
header = ['', 'Value', 'Comment', 'Actions']
actions = []
res = get_fld_tags(fldID)
if len(res) > 0:
i = 0
for (fldID, tagID, tname, tvalue, score) in res:
move = ""
if i != 0:
move += """<a href="%s/admin/bibindex/bibindexadmin.py/switchtagscore?fldID=%s&amp;id_1=%s&amp;id_2=%s&amp;ln=%s&amp=rand=%s#4"><img border="0" src="%s/img/smallup.gif" title="Move tag up"></a>""" % (CFG_SITE_URL, fldID, tagID, res[i - 1][1], ln, random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
i += 1
if i != len(res):
move += '<a href="%s/admin/bibindex/bibindexadmin.py/switchtagscore?fldID=%s&amp;id_1=%s&amp;id_2=%s&amp;ln=%s&amp;rand=%s#4"><img border="0" src="%s/img/smalldown.gif" title="Move tag down"></a>' % (CFG_SITE_URL, fldID, tagID, res[i][1], ln, random.randint(0, 1000), CFG_SITE_URL)
actions.append([move, tvalue, tname])
for col in [(('Details','showdetailsfieldtag'), ('Modify','modifytag'),('Remove','removefieldtag'),)]:
actions[-1].append('<a href="%s/admin/bibindex/bibindexadmin.py/%s?fldID=%s&amp;tagID=%s&amp;ln=%s#4.1">%s</a>' % (CFG_SITE_URL, col[0][1], fldID, tagID, ln, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/bibindex/bibindexadmin.py/%s?fldID=%s&amp;tagID=%s&amp;ln=%s#4.1">%s</a>' % (CFG_SITE_URL, function, fldID, tagID, ln, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No fields exists"""
output += content
body = [output]
if callback:
return perform_editfield(fldID, ln, "perform_modifyfieldtags", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addtag(fldID, ln=CFG_SITE_LANG, value=['',-1], name='', callback="yes", confirm=-1):
"""form to add a new field.
fldNAME - the name of the new field
code - the field code"""
output = ""
subtitle = """<a name="4.1"></a>Add MARC tag to logical field"""
text = """
Add new tag:<br />
<span class="adminlabel">Tag value</span>
<input class="admin_w200" maxlength="6" type="text" name="value" value="%s" /><br />
<span class="adminlabel">Tag comment</span>
<input class="admin_w200" type="text" name="name" value="%s" /><br />
""" % ((name=='' and value[0] or name), value[0])
text += """Or existing tag:<br />
<span class="adminlabel">Tag</span>
<select name="value" class="admin_w200">
<option value="-1">- Select a tag -</option>
"""
fld_tags = get_fld_tags(fldID)
tags = get_tags()
fld_tags = dict(map(lambda x: (x[1], x[0]), fld_tags))
for (id_tag, tname, tvalue) in tags:
if not fld_tags.has_key(id_tag):
text += """<option value="%s" %s>%s</option>""" % (tvalue, (tvalue==value[1] and 'selected="selected"' or ''), "%s - %s" % (tvalue, tname))
text += """</select>"""
output = createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/addtag" % CFG_SITE_URL,
text=text,
fldID=fldID,
ln=ln,
button="Add tag",
confirm=1)
if (value[0] and value[1] in [-1, "-1"]) or (not value[0] and value[1] not in [-1, "-1"]):
if confirm in ["1", 1]:
res = add_fld_tag(fldID, name, (value[0] !='' and value[0] or value[1]))
output += write_outcome(res)
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please choose to add either a new or an existing MARC tag, but not both.</span></b>
"""
body = [output]
if callback:
return perform_modifyfieldtags(fldID, ln, "perform_addtag", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifytag(fldID, tagID, ln=CFG_SITE_LANG, name='', value='', callback='yes', confirm=-1):
"""form to modify a field.
fldID - the field to change."""
subtitle = ""
output = ""
fld_dict = dict(get_def_name('', "field"))
fldID = int(fldID)
tagID = int(tagID)
tag = get_tags(tagID)
if confirm in [-1, "-1"] and not value and not name:
name = tag[0][1]
value = tag[0][2]
subtitle = """<a name="3.1"></a>Modify MARC tag"""
text = """
Any modifications will apply to all logical fields using this tag.<br />
<span class="adminlabel">Tag value</span>
<input class="admin_w200" type="text" name="value" value="%s" /><br />
<span class="adminlabel">Comment</span>
<input class="admin_w200" type="text" name="name" value="%s" /><br />
""" % (value, name)
output += createhiddenform(action="modifytag#4.1",
text=text,
button="Modify",
fldID=fldID,
tagID=tagID,
ln=ln,
confirm=1)
if name and value and confirm in [1, "1"]:
res = modify_tag(tagID, name, value)
output += write_outcome(res)
body = [output]
if callback:
return perform_modifyfieldtags(fldID, ln, "perform_modifytag", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_removefieldtag(fldID, tagID, ln=CFG_SITE_LANG, callback='yes', confirm=0):
"""form to remove a tag from a field.
fldID - the current field, remove the tag from this field.
tagID - remove the tag with this id"""
subtitle = """<a name="4.1"></a>Remove MARC tag from logical field"""
output = ""
fld_dict = dict(get_def_name('', "field"))
if fldID and tagID:
fldID = int(fldID)
tagID = int(tagID)
tag = get_fld_tags(fldID, tagID)
if confirm not in ["1", 1]:
text = """Do you want to remove the tag '%s - %s ' from the field '%s'.""" % (tag[0][3], tag[0][2], fld_dict[fldID])
output += createhiddenform(action="removefieldtag#4.1",
text=text,
button="Confirm",
fldID=fldID,
tagID=tagID,
confirm=1)
elif confirm in ["1", 1]:
res = remove_fldtag(fldID, tagID)
output += write_outcome(res)
body = [output]
if callback:
return perform_modifyfieldtags(fldID, ln, "perform_removefieldtag", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addindexfield(idxID, ln=CFG_SITE_LANG, fldID='', callback="yes", confirm=-1):
"""form to add a new field.
fldNAME - the name of the new field
code - the field code"""
output = ""
subtitle = """<a name="4.1"></a>Add logical field to index"""
text = """
<span class="adminlabel">Field name</span>
<select name="fldID" class="admin_w200">
<option value="-1">- Select a field -</option>
"""
fld = get_fld()
for (fldID2, fldNAME, fldCODE) in fld:
text += """<option value="%s" %s>%s</option>""" % (fldID2, (fldID==fldID2 and 'selected="selected"' or ''), fldNAME)
text += """</select>"""
output = createhiddenform(action="%s/admin/bibindex/bibindexadmin.py/addindexfield" % CFG_SITE_URL,
text=text,
idxID=idxID,
ln=ln,
button="Add field",
confirm=1)
if fldID and not fldID in [-1, "-1"] and confirm in ["1", 1]:
res = add_idx_fld(idxID, fldID)
output += write_outcome(res)
elif confirm in ["1", 1]:
output += """<b><span class="info">Please select a field to add.</span></b>"""
body = [output]
if callback:
return perform_modifyindexfields(idxID, ln, "perform_addindexfield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_removeindexfield(idxID, fldID, ln=CFG_SITE_LANG, callback='yes', confirm=0):
"""form to remove a field from an index.
idxID - the current index, remove the field from this index.
fldID - remove the field with this id"""
subtitle = """<a name="3.1"></a>Remove field from index"""
output = ""
if fldID and idxID:
fldID = int(fldID)
idxID = int(idxID)
fld = get_fld(fldID)
idx = get_idx(idxID)
if fld and idx and confirm not in ["1", 1]:
text = """Do you want to remove the field '%s' from the index '%s'.""" % (fld[0][1], idx[0][1])
output += createhiddenform(action="removeindexfield#3.1",
text=text,
button="Confirm",
idxID=idxID,
fldID=fldID,
confirm=1)
elif confirm in ["1", 1]:
res = remove_idxfld(idxID, fldID)
output += write_outcome(res)
body = [output]
if callback:
return perform_modifyindexfields(idxID, ln, "perform_removeindexfield", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_switchtagscore(fldID, id_1, id_2, ln=CFG_SITE_LANG):
"""Switch the score of id_1 and id_2 in the table type.
colID - the current collection
id_1/id_2 - the id's to change the score for.
type - like "format" """
output = ""
name_1 = run_sql("select name from tag where id=%s", (id_1, ))[0][0]
name_2 = run_sql("select name from tag where id=%s", (id_2, ))[0][0]
res = switch_score(fldID, id_1, id_2)
output += write_outcome(res)
return perform_modifyfieldtags(fldID, ln, content=output)
def perform_deletetag(fldID, ln=CFG_SITE_LANG, tagID=-1, callback='yes', confirm=-1):
"""form to delete an MARC tag not in use.
fldID - the collection id of the current collection.
fmtID - the format id to delete."""
subtitle = """<a name="10.3"></a>Delete an unused MARC tag"""
output = """
<dl>
<dd>Deleting a MARC tag will also delete its associated translations.</dd>
</dl>
"""
fldID = int(fldID)
if tagID not in [-1," -1"] and confirm in [1, "1"]:
ares = delete_tag(tagID)
fld_tag = get_fld_tags()
fld_tag = dict(map(lambda x: (x[1], x[0]), fld_tag))
tags = get_tags()
text = """
<span class="adminlabel">MARC tag</span>
<select name="tagID" class="admin_w200">
"""
text += """<option value="-1">- Select MARC tag -"""
i = 0
for (id, name, value) in tags:
if not fld_tag.has_key(id):
text += """<option value="%s" %s>%s</option>""" % (id, id == int(tagID) and 'selected="selected"' or '', "%s - %s" % (value, name))
i += 1
text += """</select><br />"""
if i == 0:
output += """<b><span class="info">No unused MARC tags</span></b><br />"""
else:
output += createhiddenform(action="deletetag#4.1",
text=text,
button="Delete",
fldID=fldID,
ln=ln,
confirm=0)
if tagID not in [-1,"-1"]:
tagID = int(tagID)
tags = get_tags(tagID)
if confirm in [0, "0"]:
text = """<b>Do you want to delete the MARC tag '%s'.</b>""" % tags[0][2]
output += createhiddenform(action="deletetag#4.1",
text=text,
button="Confirm",
fldID=fldID,
tagID=tagID,
ln=ln,
confirm=1)
elif confirm in [1, "1"]:
output += write_outcome(ares)
elif confirm not in [-1, "-1"]:
output += """<b><span class="info">Choose a MARC tag to delete.</span></b>"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfieldtags(fldID, ln, content=output)
def compare_on_val(first, second):
"""Compare the two values"""
return cmp(first[1], second[1])
def get_col_fld(colID=-1, type = '', id_field=''):
"""Returns either all portalboxes associated with a collection, or based on either colID or language or both.
colID - collection id
ln - language id"""
sql = "SELECT id_collection,id_field,id_fieldvalue,type,score,score_fieldvalue FROM collection_field_fieldvalue, field WHERE id_field=field.id"
params = []
try:
if id_field:
sql += " AND id_field=%s"
params.append(id_field)
sql += " ORDER BY type, score desc, score_fieldvalue desc"
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
def get_idx(idxID=''):
- sql = "SELECT id,name,description,last_updated,stemming_language FROM idxINDEX"
+ sql = "SELECT id,name,description,last_updated,stemming_language,synonym_kbrs,remove_stopwords,remove_html_markup,remove_latex_markup,tokenizer FROM idxINDEX"
params = []
try:
if idxID:
sql += " WHERE id=%s"
params.append(idxID)
sql += " ORDER BY id asc"
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
-def get_idx_indexer(name):
- """Returns the indexer field value"""
+def get_idx_synonym_kb(idxID):
+ """Returns a synonym knowledge base field value"""
try:
- return run_sql("SELECT indexer FROM idxINDEX WHERE NAME=%s", (name, ))[0][0]
+ return run_sql("SELECT synonym_kbrs FROM idxINDEX WHERE ID=%s", (idxID, ))[0][0]
+ except StandardError, e:
+ return e.__str__()
+
+
+def get_idx_remove_stopwords(idxID):
+ """Returns the remove_stopwords field value"""
+
+ try:
+ return run_sql("SELECT remove_stopwords FROM idxINDEX WHERE ID=%s", (idxID, ))[0][0]
+ except StandardError, e:
+ return (0, e)
+
+
+def get_idx_remove_html_markup(idxID):
+ """Returns the remove_html_markup field value"""
+
+ try:
+ return run_sql("SELECT remove_html_markup FROM idxINDEX WHERE ID=%s", (idxID, ))[0][0]
+ except StandardError, e:
+ return (0, e)
+
+
+def get_idx_remove_latex_markup(idxID):
+ """Returns the remove_latex_markup field value"""
+
+ try:
+ return run_sql("SELECT remove_latex_markup FROM idxINDEX WHERE ID=%s", (idxID, ))[0][0]
+ except StandardError, e:
+ return (0, e)
+
+def get_idx_tokenizer(idxID):
+ """Returns a tokenizer field value"""
+
+ try:
+ return run_sql("SELECT tokenizer FROM idxINDEX WHERE ID=%s", (idxID, ))[0][0]
except StandardError, e:
return (0, e)
def get_fld_tags(fldID='', tagID=''):
"""Returns tags associated with a field.
fldID - field id
tagID - tag id"""
sql = "SELECT id_field,id_tag, tag.name, tag.value, score FROM field_tag,tag WHERE tag.id=field_tag.id_tag"
params = []
try:
if fldID:
sql += " AND id_field=%s"
params.append(fldID)
if tagID:
sql += " AND id_tag=%s"
params.append(tagID)
sql += " ORDER BY score desc, tag.value, tag.name"
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
def get_tags(tagID=''):
"""Returns all or a given tag.
tagID - tag id
ln - language id"""
sql = "SELECT id, name, value FROM tag"
params = []
try:
if tagID:
sql += " WHERE id=%s"
params.append(tagID)
sql += " ORDER BY name, value"
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
def get_fld(fldID=''):
"""Returns all fields or only the given field"""
try:
if not fldID:
res = run_sql("SELECT id, name, code FROM field ORDER by name, code")
else:
res = run_sql("SELECT id, name, code FROM field WHERE id=%s ORDER by name, code", (fldID, ))
return res
except StandardError, e:
return ""
def get_fld_id(fld_name=''):
"""Returns field id for a field name"""
try:
res = run_sql('SELECT id FROM field WHERE name=%s', (fld_name,))
return res[0][0]
except StandardError, e:
return ''
def get_fld_value(fldvID = ''):
"""Returns fieldvalue"""
try:
sql = "SELECT id, name, value FROM fieldvalue"
params = []
if fldvID:
sql += " WHERE id=%s"
params.append(fldvID)
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
def get_idx_fld(idxID=''):
"""Return a list of fields associated with one or all indexes"""
try:
sql = "SELECT id_idxINDEX, idxINDEX.name, id_field, field.name, regexp_punctuation, regexp_alphanumeric_separators FROM idxINDEX, field, idxINDEX_field WHERE idxINDEX.id = idxINDEX_field.id_idxINDEX AND field.id = idxINDEX_field.id_field"
params = []
if idxID:
sql += " AND id_idxINDEX=%s"
params.append(idxID)
sql += " ORDER BY id_idxINDEX asc"
res = run_sql(sql, tuple(params))
return res
except StandardError, e:
return ""
def get_col_nametypes():
"""Return a list of the various translationnames for the fields"""
type = []
type.append(('ln', 'Long name'))
return type
def get_fld_nametypes():
"""Return a list of the various translationnames for the fields"""
type = []
type.append(('ln', 'Long name'))
return type
def get_idx_nametypes():
"""Return a list of the various translationnames for the index"""
type = []
type.append(('ln', 'Long name'))
return type
def get_sort_nametypes():
"""Return a list of the various translationnames for the fields"""
type = {}
type['soo'] = 'Sort options'
type['seo'] = 'Search options'
type['sew'] = 'Search within'
return type
def remove_fld(colID,fldID, fldvID=''):
"""Removes a field from the collection given.
colID - the collection the format is connected to
fldID - the field which should be removed from the collection."""
try:
sql = "DELETE FROM collection_field_fieldvalue WHERE id_collection=%s AND id_field=%s"
params = [colID, fldID]
if fldvID:
sql += " AND id_fieldvalue=%s"
params.append(fldvID)
res = run_sql(sql, tuple(params))
return (1, "")
except StandardError, e:
return (0, e)
def remove_idxfld(idxID, fldID):
"""Remove a field from a index in table idxINDEX_field
idxID - index id from idxINDEX
fldID - field id from field table"""
try:
sql = "DELETE FROM idxINDEX_field WHERE id_field=%s and id_idxINDEX=%s"
res = run_sql(sql, (fldID, idxID))
return (1, "")
except StandardError, e:
return (0, e)
def remove_fldtag(fldID,tagID):
"""Removes a tag from the field given.
fldID - the field the tag is connected to
tagID - the tag which should be removed from the field."""
try:
sql = "DELETE FROM field_tag WHERE id_field=%s AND id_tag=%s"
res = run_sql(sql, (fldID, tagID))
return (1, "")
except StandardError, e:
return (0, e)
def delete_tag(tagID):
"""Deletes all data for the given field
fldID - delete all data in the tables associated with field and this id """
try:
res = run_sql("DELETE FROM tag where id=%s", (tagID, ))
return (1, "")
except StandardError, e:
return (0, e)
+
def delete_idx(idxID):
"""Deletes all data for the given index together with the idxWORDXXR and idxWORDXXF tables"""
try:
idxID = int(idxID)
res = run_sql("DELETE FROM idxINDEX WHERE id=%s", (idxID, ))
res = run_sql("DELETE FROM idxINDEXNAME WHERE id_idxINDEX=%s", (idxID, ))
res = run_sql("DELETE FROM idxINDEX_field WHERE id_idxINDEX=%s", (idxID, ))
res = run_sql("DROP TABLE idxWORD%02dF" % idxID) # kwalitee: disable=sql
res = run_sql("DROP TABLE idxWORD%02dR" % idxID) # kwalitee: disable=sql
res = run_sql("DROP TABLE idxPAIR%02dF" % idxID) # kwalitee: disable=sql
res = run_sql("DROP TABLE idxPAIR%02dR" % idxID) # kwalitee: disable=sql
res = run_sql("DROP TABLE idxPHRASE%02dF" % idxID) # kwalitee: disable=sql
res = run_sql("DROP TABLE idxPHRASE%02dR" % idxID) # kwalitee: disable=sql
return (1, "")
except StandardError, e:
return (0, e)
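The delete_idx function above drops the six per-index tables whose names are built with zero-padded ids ("idxWORD%02dF" and friends). A minimal sketch of that naming convention, separate from any database access (the helper name is hypothetical, for illustration only):

```python
# Illustrative helper (not part of the module above): shows the zero-padded
# table-name scheme used by add_idx/delete_idx for the word, pair and phrase
# forward (F) and reverse (R) tables.
def index_table_names(idxID):
    """Return the six table names for a given index id, e.g. idxWORD03F."""
    names = []
    for prefix in ("idxWORD", "idxPAIR", "idxPHRASE"):
        for suffix in ("F", "R"):
            names.append("%s%02d%s" % (prefix, idxID, suffix))
    return names
```

For example, index_table_names(3) starts with "idxWORD03F", matching the tables dropped by delete_idx(3).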
+def delete_virtual_idx(idxID):
+ """Deletes a virtual index, i.e. changes the type of the index
+ from 'virtual' back to 'normal'.
+ @param idxID: id of the virtual index to delete/change into a normal index
+ """
+ try:
+ run_sql("""UPDATE idxINDEX SET indexer='native'
+ WHERE id=%s""", (idxID, ))
+ run_sql("""DELETE FROM idxINDEX_idxINDEX
+ WHERE id_virtual=%s""", (idxID, ))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
def delete_fld(fldID):
"""Deletes all data for the given field
fldID - delete all data in the tables associated with field and this id """
try:
res = run_sql("DELETE FROM collection_field_fieldvalue WHERE id_field=%s", (fldID, ))
res = run_sql("DELETE FROM field_tag WHERE id_field=%s", (fldID, ))
res = run_sql("DELETE FROM idxINDEX_field WHERE id_field=%s", (fldID, ))
res = run_sql("DELETE FROM field WHERE id=%s", (fldID, ))
return (1, "")
except StandardError, e:
return (0, e)
def add_idx(idxNAME):
"""Add a new index. returns the id of the new index.
idxID - the id for the index, number
idxNAME - the default name for the default language of the format."""
try:
idxID = 0
res = run_sql("SELECT id from idxINDEX WHERE name=%s", (idxNAME,))
if res:
return (0, (0, "A index with the given name already exists."))
for i in xrange(1, 100):
res = run_sql("SELECT id from idxINDEX WHERE id=%s", (i, ))
res2 = get_table_status_info("idxWORD%02d%%" % i)
if not res and not res2:
idxID = i
break
if idxID == 0:
return (0, (0, "Not possible to create new indexes, delete an index and try again."))
res = run_sql("INSERT INTO idxINDEX (id, name) VALUES (%s,%s)", (idxID, idxNAME))
type = get_idx_nametypes()[0][0]
res = run_sql("INSERT INTO idxINDEXNAME (id_idxINDEX, ln, type, value) VALUES (%s,%s,%s,%s)",
(idxID, CFG_SITE_LANG, type, idxNAME))
res = run_sql("""CREATE TABLE IF NOT EXISTS idxWORD%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM""" % idxID)
res = run_sql("""CREATE TABLE IF NOT EXISTS idxWORD%02dR (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % idxID)
res = run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM""" % idxID)
res = run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR%02dR (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % idxID)
res = run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE%02dF (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM""" % idxID)
res = run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE%02dR (
id_bibrec mediumint(9) unsigned NOT NULL default '0',
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM""" % idxID)
res = run_sql("SELECT id from idxINDEX WHERE id=%s", (idxID, ))
res2 = get_table_status_info("idxWORD%02dF" % idxID)
res3 = get_table_status_info("idxWORD%02dR" % idxID)
if res and res2 and res3:
return (1, res[0][0])
elif not res:
return (0, (0, "Could not add the new index to idxINDEX"))
elif not res2:
return (0, (0, "Forward table not created for unknown reason."))
elif not res3:
return (0, (0, "Reverse table not created for unknown reason."))
except StandardError, e:
return (0, e)
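add_idx above scans ids 1 through 99 for the first slot that is free both in idxINDEX and among the idxWORDxx tables. The slot-search logic in isolation (a sketch with a hypothetical helper name; the real code checks the database at each step):

```python
# Illustrative only: the first-free-id scan performed by add_idx, with the
# database lookups replaced by a set of ids already in use.
def first_free_index_id(used_ids, max_id=99):
    """Return the first id in 1..max_id not in used_ids, or 0 if none is free."""
    for i in range(1, max_id + 1):
        if i not in used_ids:
            return i
    return 0  # mirrors add_idx's "Not possible to create new indexes" outcome
```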
+
+def add_virtual_idx(id_virtual, id_normal):
+ """Adds a new virtual index and its first dependent index.
+ The index's own settings are left unchanged, although they
+ are no longer used once the index becomes virtual.
+ Delegates to add_dependent_index, because the query is the
+ same in both cases.
+ """
+ try:
+ run_sql("""UPDATE idxINDEX SET indexer='virtual'
+ WHERE id=%s""", (id_virtual, ))
+ return add_dependent_index(id_virtual, id_normal)
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_dependent_indexes(idxID, indexes_to_add, indexes_to_remove):
+ """Adds and removes dependent indexes"""
+ all_indexes = dict(get_all_index_names_and_column_values("id"))
+ for index_name in indexes_to_add:
+ res = add_dependent_index(idxID, all_indexes[index_name])
+ if res[0] == 0:
+ return res
+ for index_name in indexes_to_remove:
+ res = remove_dependent_index(idxID, all_indexes[index_name])
+ if res[0] == 0:
+ return res
+ return (1, "")
+
+
+def add_dependent_index(id_virtual, id_normal):
+ """Adds dependent index to specific virtual index"""
+ try:
+ query = """INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal)
+ VALUES (%s, %s)"""
+ res = run_sql(query, (id_virtual, id_normal))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def remove_dependent_index(id_virtual, id_normal):
+ """Removes a dependent index from a specific virtual index"""
+ try:
+ query = """DELETE FROM idxINDEX_idxINDEX
+ WHERE id_virtual=%s AND
+ id_normal=%s
+ """
+ res = run_sql(query, (id_virtual, id_normal))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+
+
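The virtual-index helpers above boil down to simple bookkeeping over idxINDEX.indexer and the idxINDEX_idxINDEX pair table. An in-memory sketch of that behaviour (hypothetical stand-in structures, not the run_sql-backed code), keeping the same (1, "") / (0, error) outcome convention:

```python
# Hypothetical in-memory model: indexer_type stands in for idxINDEX.indexer,
# dependencies for the (id_virtual, id_normal) rows of idxINDEX_idxINDEX.
indexer_type = {}
dependencies = set()

def add_dependent_index(id_virtual, id_normal):
    """Attach a normal index to a virtual one."""
    dependencies.add((id_virtual, id_normal))
    return (1, "")

def add_virtual_idx(id_virtual, id_normal):
    """Mark an index as virtual and record its first dependent index."""
    indexer_type[id_virtual] = 'virtual'
    return add_dependent_index(id_virtual, id_normal)

def remove_dependent_index(id_virtual, id_normal):
    """Detach a normal index from a virtual one."""
    dependencies.discard((id_virtual, id_normal))
    return (1, "")

def delete_virtual_idx(id_virtual):
    """Turn a virtual index back into a normal one and drop its dependents."""
    indexer_type[id_virtual] = 'native'
    for pair in [p for p in dependencies if p[0] == id_virtual]:
        dependencies.discard(pair)
    return (1, "")
```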
def add_fld(name, code):
"""Add a new logical field. Returns the id of the field.
code - the code for the field,
name - the default name for the default language of the field."""
try:
type = get_fld_nametypes()[0][0]
res = run_sql("INSERT INTO field (name, code) VALUES (%s,%s)", (name, code))
fldID = run_sql("SELECT id FROM field WHERE code=%s", (code,))
res = run_sql("INSERT INTO fieldname (id_field, type, ln, value) VALUES (%s,%s,%s,%s)", (fldID[0][0], type, CFG_SITE_LANG, name))
if fldID:
return (1, fldID[0][0])
else:
raise StandardError
except StandardError, e:
return (0, e)
def add_fld_tag(fldID, name, value):
"""Add a sort/search/field to the collection.
colID - the id of the collection involved
fmtID - the id of the format.
score - the score of the format, decides sorting, if not given, place the format on top"""
try:
res = run_sql("SELECT score FROM field_tag WHERE id_field=%s ORDER BY score desc", (fldID, ))
if res:
score = int(res[0][0]) + 1
else:
score = 0
res = run_sql("SELECT id FROM tag WHERE value=%s", (value,))
if not res:
if name == '':
name = value
res = run_sql("INSERT INTO tag (name, value) VALUES (%s,%s)", (name, value))
res = run_sql("SELECT id FROM tag WHERE value=%s", (value,))
res = run_sql("INSERT INTO field_tag(id_field, id_tag, score) values(%s, %s, %s)", (fldID, res[0][0], score))
return (1, "")
except StandardError, e:
return (0, e)
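add_fld_tag above gives a new tag a score one above the current maximum for the field, so new tags land on top. The scoring rule on its own (a hypothetical helper, for illustration):

```python
# Illustrative only: the score chosen by add_fld_tag, with the
# SELECT ... ORDER BY score desc replaced by a list of existing scores.
def next_score(existing_scores):
    """New tags get max(existing) + 1, or 0 when the field has no tags yet."""
    if existing_scores:
        return max(existing_scores) + 1
    return 0
```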
def add_idx_fld(idxID, fldID):
"""Add a field to an index"""
try:
sql = "SELECT id_idxINDEX FROM idxINDEX_field WHERE id_idxINDEX=%s and id_field=%s"
res = run_sql(sql, (idxID, fldID))
if res:
return (0, (0, "The field selected already exists for this index"))
sql = "INSERT INTO idxINDEX_field(id_idxINDEX, id_field) values (%s, %s)"
res = run_sql(sql, (idxID, fldID))
return (1, "")
except StandardError, e:
return (0, e)
def modify_idx(idxID, idxNAME, idxDESC):
"""Modify index name or index description in idxINDEX table"""
try:
res = run_sql("UPDATE idxINDEX SET name=%s WHERE id=%s", (idxNAME, idxID))
res = run_sql("UPDATE idxINDEX SET description=%s WHERE ID=%s", (idxDESC, idxID))
return (1, "")
except StandardError, e:
return (0, e)
def modify_idx_stemming(idxID, idxSTEM):
"""Modify the index stemming language in idxINDEX table"""
try:
- res = run_sql("UPDATE idxINDEX SET stemming_language=%s WHERE ID=%s", (idxSTEM, idxID))
+ run_sql("UPDATE idxINDEX SET stemming_language=%s WHERE ID=%s", (idxSTEM, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_idx_indexer(idxID, indexer):
+ """Modify an indexer type in idxINDEX table"""
+ try:
+ res = run_sql("UPDATE idxINDEX SET indexer=%s WHERE ID=%s", (indexer, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_idx_synonym_kb(idxID, idxKB, idxMATCH):
+ """Modify the knowledge base for the synonym lookup in idxINDEX table
+ @param idxID: id of the index in idxINDEX table
+ @param idxKB: name of the knowledge base (for example: INDEX-SYNONYM-TITLE)
+ @param idxMATCH: type of match in the knowledge base: exact, leading-to-comma, leading-to-number
+ """
+ try:
+ field_value = ""
+ if idxKB != CFG_BIBINDEX_SYNONYM_MATCH_TYPE["None"] and idxMATCH != CFG_BIBINDEX_SYNONYM_MATCH_TYPE["None"]:
+ field_value = idxKB + CFG_BIBINDEX_COLUMN_VALUE_SEPARATOR + idxMATCH
+ run_sql("UPDATE idxINDEX SET synonym_kbrs=%s WHERE ID=%s", (field_value, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_idx_stopwords(idxID, idxSTOPWORDS):
+ """Modify the stopwords in idxINDEX table
+ @param idxID: id of the index which we modify
+ @param idxSTOPWORDS: tells if stopwords should be removed ('Yes' or 'No')
+ """
+
+ try:
+ run_sql("UPDATE idxINDEX SET remove_stopwords=%s WHERE ID=%s", (idxSTOPWORDS, idxID))
return (1, "")
except StandardError, e:
return (0, e)
+def modify_idx_html_markup(idxID, idxHTML):
+ """Modify the 'remove html markup' setting for an index in the idxINDEX table"""
+
+ try:
+ run_sql("UPDATE idxINDEX SET remove_html_markup=%s WHERE ID=%s", (idxHTML, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_idx_latex_markup(idxID, idxLATEX):
+ """Modify the 'remove latex markup' setting for an index in the idxINDEX table"""
+
+ try:
+ run_sql("UPDATE idxINDEX SET remove_latex_markup=%s WHERE ID=%s", (idxLATEX, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
+
+
+def modify_idx_tokenizer(idxID, idxTOK):
+ """Modify the tokenizer in the idxINDEX table for the given index"""
+
+ try:
+ run_sql("UPDATE idxINDEX SET tokenizer=%s WHERE ID=%s", (idxTOK, idxID))
+ return (1, "")
+ except StandardError, e:
+ return (0, e)
def modify_fld(fldID, code):
"""Modify the code of field
fldID - the id of the field to modify
code - the new code"""
try:
sql = "UPDATE field SET code=%s"
sql += " WHERE id=%s"
res = run_sql(sql, (code, fldID))
return (1, "")
except StandardError, e:
return (0, e)
def modify_tag(tagID, name, value):
"""Modify the name and value of a tag.
tagID - the id of the tag to modify
name - the new name of the tag
value - the new value of the tag"""
try:
sql = "UPDATE tag SET name=%s WHERE id=%s"
res = run_sql(sql, (name, tagID))
sql = "UPDATE tag SET value=%s WHERE id=%s"
res = run_sql(sql, (value, tagID))
return (1, "")
except StandardError, e:
return (0, e)
def switch_score(fldID, id_1, id_2):
"""Switch the scores of id_1 and id_2 in the field_tag table.
fldID - the id of the field both tags are attached to
id_1/id_2 - the ids of the tags whose scores should be swapped"""
try:
res1 = run_sql("SELECT score FROM field_tag WHERE id_field=%s and id_tag=%s", (fldID, id_1))
res2 = run_sql("SELECT score FROM field_tag WHERE id_field=%s and id_tag=%s", (fldID, id_2))
res = run_sql("UPDATE field_tag SET score=%s WHERE id_field=%s and id_tag=%s", (res2[0][0], fldID, id_1))
res = run_sql("UPDATE field_tag SET score=%s WHERE id_field=%s and id_tag=%s", (res1[0][0], fldID, id_2))
return (1, "")
except StandardError, e:
return (0, e)
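The swap in switch_score reads both scores first and then writes them back crosswise. Below is a minimal standalone sketch of that logic using an in-memory sqlite3 table in place of Invenio's run_sql/MySQL; the table and column names follow the code above, everything else is illustrative.

```python
import sqlite3

# In-memory stand-in for the Invenio field_tag table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE field_tag (id_field INT, id_tag INT, score INT)")
conn.executemany("INSERT INTO field_tag VALUES (?, ?, ?)",
                 [(1, 10, 5), (1, 20, 7)])

def switch_score(conn, fldID, id_1, id_2):
    """Swap the scores of two tags attached to the same field:
    two SELECTs read the scores, two UPDATEs write them back crosswise."""
    cur = conn.cursor()
    score1 = cur.execute("SELECT score FROM field_tag WHERE id_field=? AND id_tag=?",
                         (fldID, id_1)).fetchone()[0]
    score2 = cur.execute("SELECT score FROM field_tag WHERE id_field=? AND id_tag=?",
                         (fldID, id_2)).fetchone()[0]
    cur.execute("UPDATE field_tag SET score=? WHERE id_field=? AND id_tag=?",
                (score2, fldID, id_1))
    cur.execute("UPDATE field_tag SET score=? WHERE id_field=? AND id_tag=?",
                (score1, fldID, id_2))
    conn.commit()

switch_score(conn, 1, 10, 20)
scores = dict((id_tag, score) for id_tag, score in
              conn.execute("SELECT id_tag, score FROM field_tag"))
```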
def get_lang_list(table, field, id):
langs = run_sql("SELECT ln FROM %s WHERE %s=%%s" % (wash_table_column_name(table), wash_table_column_name(field)), (id, )) # kwalitee: disable=sql
exists = {}
lang = ''
for lng in langs:
if not exists.has_key(lng[0]):
lang += lng[0] + ", "
exists[lng[0]] = 1
if lang.endswith(", "):
lang = lang [:-2]
if len(exists) == 0:
lang = """<b><span class="info">None</span></b>"""
return lang
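get_lang_list boils down to an order-preserving deduplication followed by a comma-join. A small sketch of just that string-building step, with the SQL lookup omitted:

```python
def join_unique_langs(langs):
    """Deduplicate language codes preserving first-seen order and
    join them with commas, as get_lang_list does."""
    seen = set()
    out = []
    for ln in langs:
        if ln not in seen:
            seen.add(ln)
            out.append(ln)
    return ", ".join(out)
```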
def check_user(req, role, adminarea=2, authorized=0):
# FIXME: Add doctype.
# This function is similar to the one found in
# oairepository/lib/oai_repository_admin.py, bibrank/lib/bibrankadminlib.py and
# websubmit/lib/websubmitadmin_engine.py.
auth_code, auth_message = acc_authorize_action(req, role)
if not authorized and auth_code != 0:
return ("false", auth_message)
return ("", auth_message)
diff --git a/modules/bibindex/lib/tokenizers/BibIndexAuthorCountTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexAuthorCountTokenizer.py
new file mode 100644
index 000000000..6b420f6b8
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexAuthorCountTokenizer.py
@@ -0,0 +1,46 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexAuthorCountTokenizer: counts number of authors for any publication
+ given by recID. Will look at tags: '100_a' and '700_a' which are:
+ 'first author name' and 'additional author name'.
+"""
+
+
+from invenio.bibindex_engine_utils import get_field_count
+from invenio.bibindex_tokenizers.BibIndexEmptyTokenizer import BibIndexEmptyTokenizer
+
+
+
+class BibIndexAuthorCountTokenizer(BibIndexEmptyTokenizer):
+ """
+ Returns the number of authors of the publication with the given recID in the database.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ self.tags = ['100__a', '700__a']
+
+
+ def tokenize(self, recID):
+ """Uses get_field_count from bibindex_engine_utils
+ to find the number of authors of a publication and returns it in a list"""
+ return [str(get_field_count(recID, self.tags)),]
+
+
+ def get_tokenizing_function(self, wordtable_type):
+ return self.tokenize
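The tokenizer above delegates the counting to get_field_count, which queries the record's MARC fields in the database. A toy stand-in that counts the same tags over a plain {tag: [values]} dictionary; the record layout here is an assumption for illustration only.

```python
def count_authors(record):
    """Count values of the author tags '100__a' (first author) and
    '700__a' (additional authors) in a simple {tag: [values]} record."""
    tags = ['100__a', '700__a']
    return sum(len(record.get(tag, [])) for tag in tags)

# Hypothetical record with one first author and two additional authors.
record = {'100__a': ['Cleese, John'],
          '700__a': ['Chapman, Graham', 'Idle, Eric']}
```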
diff --git a/modules/bibindex/lib/tokenizers/BibIndexAuthorTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexAuthorTokenizer.py
new file mode 100644
index 000000000..c1d03f64c
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexAuthorTokenizer.py
@@ -0,0 +1,336 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexAuthorTokenizer: tokenizer introduced for author index.
+ It tokenizes an author name in a fuzzy way, creating different variants of the name.
+ For example: 'John Cleese' will be tokenized into: 'John Cleese', 'Cleese, John', 'J Cleese', 'Cleese, J'
+"""
+
+
+import re
+
+from invenio.config import CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+
+
+
+class BibIndexAuthorTokenizer(BibIndexDefaultTokenizer):
+ """Human name tokenizer.
+
+ Human names are divided into three classes of tokens:
+ 'lastnames', i.e., family, tribal or group identifiers,
+ 'nonlastnames', i.e., personal names distinguishing individuals,
+ 'titles', both incidental and permanent, e.g., 'VIII', '(ed.)', 'Msc'
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ BibIndexDefaultTokenizer.__init__(self, stemming_language,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup)
+ self.single_initial_re = re.compile(r'^\w\.$')
+ self.split_on_re = re.compile(r'[\.\s-]')
+ # lastname_stopwords describes terms which should not be used for indexing,
+ # in multiple-word last names. These are purely conjunctions, serving the
+ # same function as the American hyphen, but using linguistic constructs.
+ self.lastname_stopwords = set(['y', 'of', 'and', 'de'])
+
+ def scan_string_for_phrases(self, s):
+ """Scan a name string and output an object representing its structure.
+
+ @param s: the input to be lexically tagged
+ @type s: string
+
+ @return: dict of lexically tagged input items.
+
+ Sample output for the name 'Jingleheimer Schmitt, John Jacob, XVI.' is:
+ {
+ 'TOKEN_TAG_LIST' : ['lastnames', 'nonlastnames', 'titles', 'raw'],
+ 'lastnames' : ['Jingleheimer', 'Schmitt'],
+ 'nonlastnames' : ['John', 'Jacob'],
+ 'titles' : ['XVI.'],
+ 'raw' : 'Jingleheimer Schmitt, John Jacob, XVI.'
+ }
+ @rtype: dict
+ """
+ retval = {'TOKEN_TAG_LIST' : ['lastnames', 'nonlastnames', 'titles', 'raw'],
+ 'lastnames' : [],
+ 'nonlastnames' : [],
+ 'titles' : [],
+ 'raw' : s}
+ l = s.split(',')
+ if len(l) < 2:
+ # No commas means a simple name
+ new = s.strip()
+ new = new.split(' ')
+ if len(new) == 1:
+ retval['lastnames'] = new # rare single-name case
+ else:
+ retval['lastnames'] = new[-1:]
+ retval['nonlastnames'] = new[:-1]
+ for tag in ['lastnames', 'nonlastnames']:
+ retval[tag] = [x.strip() for x in retval[tag]]
+ retval[tag] = [re.split(self.split_on_re, x) for x in retval[tag]]
+ # flatten sublists
+ retval[tag] = [item for sublist in retval[tag] for item in sublist]
+ retval[tag] = [x for x in retval[tag] if x != '']
+ else:
+ # Handle lastname-first multiple-names case
+ retval['titles'] = l[2:] # no titles? no problem
+ retval['nonlastnames'] = l[1]
+ retval['lastnames'] = l[0]
+ for tag in ['lastnames', 'nonlastnames']:
+ retval[tag] = retval[tag].strip()
+ retval[tag] = re.split(self.split_on_re, retval[tag])
+ # filter empty strings
+ retval[tag] = [x for x in retval[tag] if x != '']
+ retval['titles'] = [x.strip() for x in retval['titles'] if x != '']
+
+ return retval
+
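For the lastname-first branch, the scanning above can be condensed into a few lines. This sketch reproduces only the comma-separated case with the same split regex; the simple-name branch and the raw-input washing are omitted.

```python
import re

# Same separator set as self.split_on_re above: dot, whitespace, hyphen.
split_on_re = re.compile(r'[.\s-]')

def scan_name(s):
    """Condensed lastname-first case of scan_string_for_phrases:
    'Last, First, Title' -> dict of tagged name parts."""
    parts = s.split(',')
    retval = {'lastnames': [], 'nonlastnames': [], 'titles': [], 'raw': s}
    if len(parts) >= 2:
        retval['titles'] = [t.strip() for t in parts[2:] if t.strip()]
        retval['lastnames'] = [x for x in split_on_re.split(parts[0].strip()) if x]
        retval['nonlastnames'] = [x for x in split_on_re.split(parts[1].strip()) if x]
    return retval
```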
+ def parse_scanned_for_phrases(self, scanned):
+ """Return all the indexable variations for a tagged token dictionary.
+
+ Does this via the combinatoric expansion of the following rules:
+ - Expands first names as name, first initial with period, first initial
+ without period.
+ - Expands compound last names as each of their non-stopword subparts.
+ - Titles are treated literally, but applied serially.
+
+ Please note that titles will be applied to complete last names only.
+ So for example, if there is a compound last name of the form,
+ "Ibanez y Gracia", with the title, "(ed.)", then only the combination
+ of those two strings will do, not "Ibanez" and not "Gracia".
+
+ @param scanned: lexically tagged input items in the form of the output
+ from scan()
+ @type scanned: dict
+
+ @return: combinatorically expanded list of strings for indexing
+ @rtype: list of string
+ """
+
+ def _fully_expanded_last_name(first, lastlist, title = None):
+ """Return a list of all of the first / last / title combinations.
+
+ @param first: one possible non-last name
+ @type first: string
+
+ @param lastlist: the strings of the tokens in the (possibly compound) last name
+ @type lastlist: list of string
+
+ @param title: one possible title
+ @type title: string
+ """
+ retval = []
+ title_word = ''
+ if title != None:
+ title_word = ', ' + title
+
+ last = ' '.join(lastlist)
+ retval.append(first + ' ' + last + title_word)
+ retval.append(last + ', ' + first + title_word)
+ for last in lastlist:
+ if last in self.lastname_stopwords:
+ continue
+ retval.append(first + ' ' + last + title_word)
+ retval.append(last + ', ' + first + title_word)
+
+ return retval
+
+ last_parts = scanned['lastnames']
+ first_parts = scanned['nonlastnames']
+ titles = scanned['titles']
+ raw = scanned['raw']
+
+ if len(first_parts) == 0: # rare single-name case
+ return scanned['lastnames']
+
+ expanded = []
+ for exp in self.__expand_nonlastnames(first_parts):
+ expanded.extend(_fully_expanded_last_name(exp, last_parts, None))
+ for title in titles:
+ # Drop titles which are parenthesized. This eliminates (ed.) from the index, but
+ # leaves XI, for example. This gets rid of the surprising behavior that searching
+ # for 'author:ed' retrieves people who have been editors, but whose names aren't
+ # Ed.
+ # TODO: Make editorship and other special statuses a MARC field.
+ if title.find('(') != -1:
+ continue
+ # XXX: remember to document that titles can only be applied to complete last names
+ expanded.extend(_fully_expanded_last_name(exp, [' '.join(last_parts)], title))
+
+ return sorted(list(set(expanded)))
+
+ def __expand_nonlastnames(self, namelist):
+ """Generate every expansion of a series of human non-last names.
+
+ Example:
+ "Michael Edward" -> "Michael Edward", "Michael E.", "Michael E", "M. Edward", "M Edward",
+ "M. E.", "M. E", "M E.", "M E", "M.E."
+ ...but never:
+ "ME"
+
+ @param namelist: a collection of names
+ @type namelist: list of string
+
+ @return: a greatly expanded collection of names
+ @rtype: list of string
+ """
+
+ def _expand_name(name):
+ """Return [name, first initial], or [] when name is None"""
+ if name is None:
+ return []
+ return [name, name[0]]
+
+ def _pair_items(head, tail):
+ """Lists every combination of head with each and all of tail"""
+ if len(tail) == 0:
+ return [head]
+ l = []
+ l.extend([head + ' ' + tail[0]])
+ #l.extend([head + '-' + tail[0]])
+ l.extend(_pair_items(head, tail[1:]))
+ return l
+
+ def _collect(head, tail):
+ """Brings together combinations of things"""
+
+ def _cons(a, l):
+ l2 = l[:]
+ l2.insert(0, a)
+ return l2
+
+ if len(tail) == 0:
+ return [head]
+ l = []
+ l.extend(_pair_items(head, _expand_name(tail[0])))
+ l.extend([' '.join(_cons(head, tail)).strip()])
+ #l.extend(['-'.join(_cons(head, tail)).strip()])
+ l.extend(_collect(head, tail[1:]))
+ return l
+
+ def _expand_contract(namelist):
+ """Runs collect with every head in namelist and its tail"""
+ val = []
+ for i in range(len(namelist)):
+ name = namelist[i]
+ for expansion in _expand_name(name):
+ val.extend(_collect(expansion, namelist[i+1:]))
+ return val
+
+ def _add_squashed(namelist):
+ """Finds cases like 'M. E.' and adds 'M.E.'"""
+ val = namelist
+
+ def __check_parts(parts):
+ if len(parts) < 2:
+ return False
+ for part in parts:
+ if not self.single_initial_re.match(part):
+ return False
+ return True
+
+ for name in namelist:
+ parts = name.split(' ')
+ if not __check_parts(parts):
+ continue
+ val.extend([''.join(parts)])
+
+ return val
+
+ return _add_squashed(_expand_contract(namelist))
+
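For a single first name, the expansion machinery above reduces to pairing the name and its bare initial with the last name in both orders. A toy illustration of the resulting index terms; note that _expand_name yields the initial without a trailing period.

```python
def expand_author(first, last):
    """Toy single-first-name expansion: the full first name and its
    bare initial, each in 'First Last' and 'Last, First' order."""
    variants = set()
    for f in (first, first[0]):
        variants.add('%s %s' % (f, last))
        variants.add('%s, %s' % (last, f))
    return sorted(variants)
```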
+
+ def tokenize_for_fuzzy_authors(self, phrase):
+ """Output the list of strings expanding phrase.
+
+ Does this via the combinatoric expansion of the following rules:
+ - Expands first names as name, first initial with period, first initial
+ without period.
+ - Expands compound last names as each of their non-stopword subparts.
+ - Titles are treated literally, but applied serially.
+
+ Please note that titles will be applied to complete last names only.
+ So for example, if there is a compound last name of the form,
+ "Ibanez y Gracia", with the title, "(ed.)", then only the combination
+ of those two strings will do, not "Ibanez" and not "Gracia".
+
+ Old: BibIndexFuzzyAuthorTokenizer
+
+ @param phrase: the input to be lexically tagged
+ @type phrase: string
+
+ @return: combinatorically expanded list of strings for indexing
+ @rtype: list of string
+
+ @note: A simple wrapper around scan and parse_scanned.
+ """
+ return self.parse_scanned_for_phrases(self.scan_string_for_phrases(phrase))
+
+
+ def tokenize_for_phrases(self, phrase):
+ """
+ Another name for tokenize_for_fuzzy_authors, kept for backwards compatibility.
+ See: tokenize_for_fuzzy_authors
+ """
+ return self.tokenize_for_fuzzy_authors(phrase)
+
+
+ def tokenize_for_words_default(self, phrase):
+ """Default tokenize_for_words inherited from default tokenizer"""
+ return super(BibIndexAuthorTokenizer, self).tokenize_for_words(phrase)
+
+
+ def get_author_family_name_words_from_phrase(self, phrase):
+ """Return the list of words from the author's family names, not from the first names.
+
+ The phrase is assumed to be the full author name. This is
+ useful for CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES.
+
+ @param phrase: phrase to get family name from
+ """
+ d_family_names = {}
+ # first, treat everything before first comma as surname:
+ if ',' in phrase:
+ d_family_names[phrase.split(',', 1)[0]] = 1
+ # second, try fuzzy author tokenizer to find surname variants:
+ for name in self.tokenize_for_phrases(phrase):
+ if ',' in name:
+ d_family_names[name.split(',', 1)[0]] = 1
+ # now extract words from these surnames:
+ d_family_names_words = {}
+ for family_name in d_family_names.keys():
+ for word in self.tokenize_for_words_default(family_name):
+ d_family_names_words[word] = 1
+ return d_family_names_words.keys()
+
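The heart of the family-name extraction above is treating everything before the first comma as the surname and then splitting the surname into words. A stripped-down sketch without the fuzzy-variant pass:

```python
def family_name_words(phrase):
    """Take everything before the first comma as the family name and
    split it into lowercase words; no comma means no surname found."""
    if ',' not in phrase:
        return []
    surname = phrase.split(',', 1)[0]
    return [w.lower() for w in surname.split() if w]
```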
+
+ def tokenize_for_words(self, phrase):
+ """
+ If CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES is 1, we tokenize only family names.
+ Otherwise we perform the standard word tokenization.
+ """
+ if CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES:
+ return self.get_author_family_name_words_from_phrase(phrase)
+ else:
+ return self.tokenize_for_words_default(phrase)
+
+
diff --git a/modules/bibindex/lib/tokenizers/BibIndexCJKTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexCJKTokenizer.py
new file mode 100644
index 000000000..751f3399b
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexCJKTokenizer.py
@@ -0,0 +1,133 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexCJKTokenizer: makes searching in collections of CJK papers and publications more reliable.
+ If a phrase contains characters from the CJK set, the tokenizer treats it differently than a phrase without them:
+ it splits CJK words into single characters (adding a space between every two CJK characters).
+"""
+
+import re
+
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+
+is_character_from_CJK_set = re.compile(u'[\u3400-\u4DBF\u4E00-\u9FFF]')
+special_CJK_punctuation = re.compile(u'[\uff1a,\uff0c,\u3001,\u3002,\u201c,\u201d]')
+
+
+def is_from_CJK_set_single_character_match(char):
+ if not isinstance(char, unicode):
+ char = char.decode("utf8")
+ res = is_character_from_CJK_set.match(char)
+ if res:
+ return True
+ return False
+
+
+def is_from_CJK_set_full_match(text):
+ if not isinstance(text, unicode):
+ text = text.decode("utf8")
+ res = is_character_from_CJK_set.findall(text)
+ if len(res) == len(text):
+ return True
+ return False
+
+
+def is_there_any_CJK_character_in_text(text):
+ if not isinstance(text, unicode):
+ text = text.decode("utf8")
+ res = is_character_from_CJK_set.search(text)
+ if res is not None:
+ return True
+ return False
+
+
+def is_non_CJK_expression(word):
+ return not is_there_any_CJK_character_in_text(word)
+
+
+class BibIndexCJKTokenizer(BibIndexDefaultTokenizer):
+ """A phrase is split into CJK characters.
+ CJK is the unified Chinese, Japanese and Korean character set.
+ It means that for example, phrase: '据信,新手机更轻'
+ will be split into: ['据', '信', '新', '手', '机', '更', '轻']"""
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ """Initialisation"""
+ BibIndexDefaultTokenizer.__init__(self, stemming_language,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup)
+
+
+ def tokenize_for_words_default(self, phrase):
+ """Default tokenize_for_words inherited from default tokenizer"""
+ return super(BibIndexCJKTokenizer, self).tokenize_for_words(phrase)
+
+
+ def tokenize_for_words(self, phrase):
+ """
+ Splits the phrase into words, with additional spaces
+ between CJK characters to enhance search for CJK papers.
+ If there is not a single CJK character in the whole phrase, it behaves the standard way:
+ it splits the phrase into words with BibIndexDefaultTokenizer's tokenize_for_words.
+
+ @param phrase: CJK phrase to be tokenized
+ @type phrase: string
+
+ @return: list of CJK characters and non-CJK words
+ @rtype: list of string
+ """
+ if is_there_any_CJK_character_in_text(phrase):
+ #remove special CJK punctuation
+ phrase = special_CJK_punctuation.sub("", phrase)
+ #first, we split our phrase with default word tokenizer to make it easier later
+ pre_tokenized = self.tokenize_for_words_default(phrase)
+ #list for keeping CJK chars and non-CJK words
+ chars = []
+ #every CJK word splits into a set of single characters
+ #for example: "春眠暁覚" into ['春','眠','暁','覚']
+ words = [ word.decode("utf8") for word in pre_tokenized]
+ for word in words:
+ if is_from_CJK_set_full_match(word):
+ chars.extend(word)
+ else:
+ non_chinese = u""
+ for char in word:
+ if is_from_CJK_set_single_character_match(char):
+ if non_chinese:
+ chars.append(non_chinese)
+ non_chinese = u""
+ chars.append(char)
+ else:
+ non_chinese = non_chinese + char
+ if non_chinese:
+ chars.append(non_chinese)
+ clean_dict = {}
+ for c in chars:
+ clean_dict[c] = 1
+ chars = [c.encode("utf8") for c in clean_dict.keys()]
+ return chars
+ else:
+ return self.tokenize_for_words_default(phrase)
+
+
+ def tokenize_for_pairs(self, phrase):
+ return []
+
+ def tokenize_for_phrases(self, phrase):
+ return []
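The per-character walk inside tokenize_for_words can be isolated as below: CJK characters become single tokens while runs of non-CJK characters stay together. The sketch uses the same Unicode ranges as above; deduplication, punctuation stripping and the default pre-tokenization are omitted.

```python
import re

# CJK Unified Ideographs Extension A and CJK Unified Ideographs blocks.
cjk_char = re.compile(u'[\u3400-\u4DBF\u4E00-\u9FFF]')

def split_cjk(word):
    """Split a token into single CJK characters, keeping runs of
    non-CJK characters together."""
    tokens = []
    non_cjk = ''
    for ch in word:
        if cjk_char.match(ch):
            if non_cjk:
                tokens.append(non_cjk)
                non_cjk = ''
            tokens.append(ch)
        else:
            non_cjk += ch
    if non_cjk:
        tokens.append(non_cjk)
    return tokens
```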
diff --git a/modules/bibindex/lib/tokenizers/BibIndexDefaultTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexDefaultTokenizer.py
new file mode 100644
index 000000000..08def5e95
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexDefaultTokenizer.py
@@ -0,0 +1,165 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexDefaultTokenizer: suitable for almost all indexes.
+ It performs standard tokenization: it splits phrases into words or pairs (or doesn't split at all), strips accents,
+ and optionally removes non-alphanumeric characters and HTML and LaTeX markup. It can also stem words for a given language.
+"""
+
+from invenio.bibindex_engine_config import \
+ CFG_BIBINDEX_INDEX_TABLE_TYPE
+from invenio.htmlutils import remove_html_markup
+from invenio.textutils import wash_for_utf8, strip_accents
+from invenio.bibindex_engine_washer import \
+ lower_index_term, remove_latex_markup, \
+ apply_stemming, remove_stopwords, length_check
+from invenio.bibindex_engine_utils import latex_formula_re, \
+ re_block_punctuation_begin, \
+ re_block_punctuation_end, \
+ re_punctuation, \
+ re_separators, \
+ re_arxiv
+from invenio.bibindex_tokenizers.BibIndexTokenizer import BibIndexTokenizer
+
+
+
+class BibIndexDefaultTokenizer(BibIndexTokenizer):
+ """
+ The standard tokenizer, useful for most indexes.
+ Its behaviour depends on the stemming, remove-stopwords, remove-HTML-markup and remove-LaTeX-markup parameters.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ """initialization"""
+ self.stemming_language = stemming_language
+ self.remove_stopwords = remove_stopwords
+ self.remove_html_markup = remove_html_markup
+ self.remove_latex_markup = remove_latex_markup
+
+
+ def get_tokenizing_function(self, wordtable_type):
+ """Picks correct tokenize_for_xxx function depending on type of tokenization (wordtable_type)"""
+ if wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"]:
+ return self.tokenize_for_words
+ elif wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Pairs"]:
+ return self.tokenize_for_pairs
+ elif wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Phrases"]:
+ return self.tokenize_for_phrases
+
+
+
+ def tokenize_for_words(self, phrase):
+ """Return list of words found in PHRASE. Note that the phrase is
+ split into groups depending on the alphanumeric characters and
+ punctuation characters definition present in the config file.
+ """
+
+ words = {}
+ formulas = []
+ if self.remove_html_markup and phrase.find("</") > -1:
+ phrase = remove_html_markup(phrase)
+ if self.remove_latex_markup:
+ formulas = latex_formula_re.findall(phrase)
+ phrase = remove_latex_markup(phrase)
+ phrase = latex_formula_re.sub(' ', phrase)
+ phrase = wash_for_utf8(phrase)
+ phrase = lower_index_term(phrase)
+ # 1st split phrase into blocks according to whitespace
+ for block in strip_accents(phrase).split():
+ # 2nd remove leading/trailing punctuation and add block:
+ block = re_block_punctuation_begin.sub("", block)
+ block = re_block_punctuation_end.sub("", block)
+ if block:
+ stemmed_block = remove_stopwords(block, self.remove_stopwords)
+ stemmed_block = length_check(stemmed_block)
+ stemmed_block = apply_stemming(stemmed_block, self.stemming_language)
+ if stemmed_block:
+ words[stemmed_block] = 1
+ if re_arxiv.match(block):
+ # special case for blocks like `arXiv:1007.5048' where
+ # we would like to index the part after the colon
+ # regardless of dot or other punctuation characters:
+ words[block.split(':', 1)[1]] = 1
+ # 3rd break each block into subblocks according to punctuation and add subblocks:
+ for subblock in re_punctuation.split(block):
+ stemmed_subblock = remove_stopwords(subblock, self.remove_stopwords)
+ stemmed_subblock = length_check(stemmed_subblock)
+ stemmed_subblock = apply_stemming(stemmed_subblock, self.stemming_language)
+ if stemmed_subblock:
+ words[stemmed_subblock] = 1
+ # 4th break each subblock into alphanumeric groups and add groups:
+ for alphanumeric_group in re_separators.split(subblock):
+ stemmed_alphanumeric_group = remove_stopwords(alphanumeric_group, self.remove_stopwords)
+ stemmed_alphanumeric_group = length_check(stemmed_alphanumeric_group)
+ stemmed_alphanumeric_group = apply_stemming(stemmed_alphanumeric_group, self.stemming_language)
+ if stemmed_alphanumeric_group:
+ words[stemmed_alphanumeric_group] = 1
+ for block in formulas:
+ words[block] = 1
+ return words.keys()
+
+
+ def tokenize_for_pairs(self, phrase):
+ """Return list of word pairs found in PHRASE. Note that the phrase is
+ split into groups depending on the alphanumeric characters and
+ punctuation characters definition present in the config file.
+ """
+
+ words = {}
+ if self.remove_html_markup and phrase.find("</") > -1:
+ phrase = remove_html_markup(phrase)
+ if self.remove_latex_markup:
+ phrase = remove_latex_markup(phrase)
+ phrase = latex_formula_re.sub(' ', phrase)
+ phrase = wash_for_utf8(phrase)
+ phrase = lower_index_term(phrase)
+ # 1st split phrase into blocks according to whitespace
+ last_word = ''
+ for block in strip_accents(phrase).split():
+ # 2nd remove leading/trailing punctuation and add block:
+ block = re_block_punctuation_begin.sub("", block)
+ block = re_block_punctuation_end.sub("", block)
+ if block:
+ block = remove_stopwords(block, self.remove_stopwords)
+ block = length_check(block)
+ block = apply_stemming(block, self.stemming_language)
+ # 3rd break each block into subblocks according to punctuation and add subblocks:
+ for subblock in re_punctuation.split(block):
+ subblock = remove_stopwords(subblock, self.remove_stopwords)
+ subblock = length_check(subblock)
+ subblock = apply_stemming(subblock, self.stemming_language)
+ if subblock:
+ # 4th break each subblock into alphanumeric groups and add groups:
+ for alphanumeric_group in re_separators.split(subblock):
+ alphanumeric_group = remove_stopwords(alphanumeric_group, self.remove_stopwords)
+ alphanumeric_group = length_check(alphanumeric_group)
+ alphanumeric_group = apply_stemming(alphanumeric_group, self.stemming_language)
+ if alphanumeric_group:
+ if last_word:
+ words['%s %s' % (last_word, alphanumeric_group)] = 1
+ last_word = alphanumeric_group
+ return words.keys()
+
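Pair indexing keeps a last_word carry so that consecutive tokens form bigrams across the phrase. A stripped-down sketch of that carry logic, again without the washing and stemming steps:

```python
def tokenize_for_pairs_sketch(phrase):
    """Emit successive word pairs: each token is paired with the
    previous one via the last_word carry."""
    pairs = {}
    last_word = ''
    for word in phrase.lower().split():
        word = word.strip('.,:;?!"')
        if word:
            if last_word:
                pairs['%s %s' % (last_word, word)] = 1
            last_word = word
    return sorted(pairs)
```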
+
+ def tokenize_for_phrases(self, phrase):
+ """Return list of phrases found in PHRASE. Note that the phrase is
+ split into groups depending on the alphanumeric characters and
+ punctuation characters definition present in the config file.
+ """
+ phrase = wash_for_utf8(phrase)
+ return [phrase]
diff --git a/modules/bibindex/lib/tokenizers/BibIndexEmptyTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexEmptyTokenizer.py
new file mode 100644
index 000000000..e47f9a6d3
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexEmptyTokenizer.py
@@ -0,0 +1,61 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexEmptyTokenizer: useful when we want to apply a tokenizer
+ for consistency but do not want any results from it.
+"""
+
+
+from invenio.bibindex_engine_config import CFG_BIBINDEX_INDEX_TABLE_TYPE
+from invenio.bibindex_tokenizers.BibIndexTokenizer import BibIndexTokenizer
+
+
+
+class BibIndexEmptyTokenizer(BibIndexTokenizer):
+ """The empty tokenizer does nothing:
+ it returns empty lists from the tokenize_for_words, tokenize_for_pairs and tokenize_for_phrases methods.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ """@param stemming_language: dummy
+ @param remove_stopwords: dummy
+ @param remove_html_markup: dummy
+ @param remove_latex_markup: dummy
+ """
+ pass
+
+
+ def get_tokenizing_function(self, wordtable_type):
+ """Picks correct tokenize_for_xxx function depending on type of tokenization (wordtable_type)"""
+ if wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"]:
+ return self.tokenize_for_words
+ elif wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Pairs"]:
+ return self.tokenize_for_pairs
+ elif wordtable_type == CFG_BIBINDEX_INDEX_TABLE_TYPE["Phrases"]:
+ return self.tokenize_for_phrases
+
+
+ def tokenize_for_words(self, phrase):
+ return []
+
+ def tokenize_for_pairs(self, phrase):
+ return []
+
+ def tokenize_for_phrases(self, phrase):
+ return []
+
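The get_tokenizing_function dispatch used by this and the other tokenizers maps a word-table type to the matching tokenize_for_* method. A self-contained sketch; the table-type constants here are assumptions standing in for CFG_BIBINDEX_INDEX_TABLE_TYPE.

```python
TABLE_TYPE = {"Words": "CURRENT", "Pairs": "PAIR", "Phrases": "PHRASE"}  # assumed values

class EmptyTokenizerSketch:
    """Mirrors BibIndexEmptyTokenizer: every tokenize_for_* method
    returns an empty list; the dispatch picks one by table type."""
    def get_tokenizing_function(self, wordtable_type):
        return {TABLE_TYPE["Words"]: self.tokenize_for_words,
                TABLE_TYPE["Pairs"]: self.tokenize_for_pairs,
                TABLE_TYPE["Phrases"]: self.tokenize_for_phrases}[wordtable_type]
    def tokenize_for_words(self, phrase):
        return []
    def tokenize_for_pairs(self, phrase):
        return []
    def tokenize_for_phrases(self, phrase):
        return []

tok = EmptyTokenizerSketch()
```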
diff --git a/modules/bibindex/lib/tokenizers/BibIndexExactAuthorTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexExactAuthorTokenizer.py
new file mode 100644
index 000000000..220df1042
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexExactAuthorTokenizer.py
@@ -0,0 +1,44 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexExactAuthorTokenizer: performs only washing on the author name and
+ otherwise leaves it unchanged.
+"""
+
+from invenio.bibindex_engine_washer import wash_author_name
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+
+
+
+
+class BibIndexExactAuthorTokenizer(BibIndexDefaultTokenizer):
+ """
+ Human name exact tokenizer.
+ Old: BibIndexExactNameTokenizer
+ """
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ BibIndexDefaultTokenizer.__init__(self, stemming_language,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup)
+
+ def tokenize_for_phrases(self, s):
+ """
+        Returns the washed author name.
+ """
+ return [wash_author_name(s)]
diff --git a/modules/bibindex/lib/tokenizers/BibIndexFiletypeTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexFiletypeTokenizer.py
new file mode 100644
index 000000000..476a47c64
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexFiletypeTokenizer.py
@@ -0,0 +1,63 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexFiletypeTokenizer: 'tokenizes' for file extensions.
+ Tokenizer is adapted to work with bibfield and its get_record function.
+"""
+
+
+from invenio.bibindex_tokenizers.BibIndexEmptyTokenizer import BibIndexEmptyTokenizer
+
+
+class BibIndexFiletypeTokenizer(BibIndexEmptyTokenizer):
+ """
+ Tokenizes for file extensions.
+ Tokenizer is adapted to work with bibfield and its get_record function.
+
+ It accepts as an input a record created by a get_record function:
+
+ from bibfield import get_record
+ record16 = get_record(16)
+ tokenizer = BibIndexFiletypeTokenizer()
+ new_words = tokenizer.tokenize(record16)
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ pass
+
+
+ def tokenize(self, record):
+ """'record' is a recjson record from bibfield.
+
+ Function uses derived field 'filetypes'
+ from the record.
+
+        @param record: recjson record
+ """
+ values = []
+ try:
+ if record.has_key('filetypes'):
+ values = record['filetypes']
+ except KeyError:
+ pass
+ except TypeError:
+ return []
+ return values
+
+ def get_tokenizing_function(self, wordtable_type):
+ return self.tokenize
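The defensive field access in `tokenize` above can be exercised against a plain dict standing in for a bibfield recjson record. `tokenize_filetypes` and the sample record below are hypothetical illustrations of the same pattern, not the Invenio API:

```python
def tokenize_filetypes(record):
    """Return the derived 'filetypes' list from a recjson-like mapping,
    tolerating records that lack the field or are not mappings at all."""
    try:
        return list(record.get('filetypes', []))
    except (AttributeError, TypeError):
        # record was None or not a mapping: index nothing
        return []

print(tokenize_filetypes({'filetypes': ['pdf', 'txt']}))  # ['pdf', 'txt']
```

The broad fallback mirrors the tokenizer's own `except KeyError` / `except TypeError` handling: a malformed record yields an empty token list instead of aborting indexing.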
diff --git a/modules/bibindex/lib/tokenizers/BibIndexFulltextTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexFulltextTokenizer.py
new file mode 100644
index 000000000..2d7cc5387
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexFulltextTokenizer.py
@@ -0,0 +1,181 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexFulltextTokenizer: extracts words from a given document.
+ Document is given by its URL.
+"""
+
+import os
+import sys
+import logging
+import urllib2
+import re
+
+
+from invenio.config import \
+ CFG_SOLR_URL, \
+ CFG_XAPIAN_ENABLED, \
+ CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY, \
+ CFG_BIBINDEX_SPLASH_PAGES
+from invenio.htmlutils import get_links_in_html_page
+from invenio.websubmit_file_converter import convert_file, get_file_converter_logger
+from invenio.solrutils_bibindex_indexer import solr_add_fulltext
+from invenio.xapianutils_bibindex_indexer import xapian_add
+from invenio.bibdocfile import bibdocfile_url_p, \
+ bibdocfile_url_to_bibdoc, download_url, \
+ BibRecDocs
+from invenio.bibindex_engine_utils import get_idx_indexer
+from invenio.bibtask import write_message
+from invenio.errorlib import register_exception
+from invenio.intbitset import intbitset
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+
+
+fulltext_added = intbitset() # stores ids of records whose fulltexts have been added
+
+
+
+
+class BibIndexFulltextTokenizer(BibIndexDefaultTokenizer):
+ """
+    Extracts all the words contained in the document specified by its URL.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ self.verbose = 3
+ BibIndexDefaultTokenizer.__init__(self, stemming_language,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup)
+
+ def set_verbose(self, verbose):
+        """Allows changing the verbosity level during indexing."""
+ self.verbose = verbose
+
+ def tokenize_for_words_default(self, phrase):
+ """Default tokenize_for_words inherited from default tokenizer"""
+ return super(BibIndexFulltextTokenizer, self).tokenize_for_words(phrase)
+
+
+ def get_words_from_fulltext(self, url_direct_or_indirect):
+        """Returns all the words contained in the document specified by
+        URL_DIRECT_OR_INDIRECT, with the words being split by the various
+        SRE_SEPARATORS regexps set earlier.  If FORCE_FILE_EXTENSION is
+        set (e.g. to "pdf"), then treat URL_DIRECT_OR_INDIRECT as a PDF
+        file.  (This is interesting for indexing Indico, for example.)  Note
+        also that URL_DIRECT_OR_INDIRECT may be either a direct URL to
+        the fulltext file or a URL to a setlink-like page body that
+        presents the links to be indexed.  In the latter case,
+        URL_DIRECT_OR_INDIRECT is parsed to extract the actual direct URLs
+        to the fulltext documents, for all known file extensions as
+        specified by the global CONV_PROGRAMS config variable.
+        """
+ write_message("... reading fulltext files from %s started" % url_direct_or_indirect, verbose=2)
+ try:
+ if bibdocfile_url_p(url_direct_or_indirect):
+ write_message("... %s is an internal document" % url_direct_or_indirect, verbose=2)
+ bibdoc = bibdocfile_url_to_bibdoc(url_direct_or_indirect)
+ indexer = get_idx_indexer('fulltext')
+ if indexer != 'native':
+ # A document might belong to multiple records
+ for rec_link in bibdoc.bibrec_links:
+ recid = rec_link["recid"]
+                    # Adds the fulltext of all files once per record
+ if not recid in fulltext_added:
+ bibrecdocs = BibRecDocs(recid)
+ text = bibrecdocs.get_text()
+ if indexer == 'SOLR' and CFG_SOLR_URL:
+ solr_add_fulltext(recid, text)
+ elif indexer == 'XAPIAN' and CFG_XAPIAN_ENABLED:
+ xapian_add(recid, 'fulltext', text)
+
+ fulltext_added.add(recid)
+ # we are relying on an external information retrieval system
+ # to provide full-text indexing, so dispatch text to it and
+ # return nothing here:
+ return []
+ else:
+ text = ""
+ if hasattr(bibdoc, "get_text"):
+ text = bibdoc.get_text()
+ return self.tokenize_for_words_default(text)
+ else:
+ if CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY:
+ write_message("... %s is external URL but indexing only local files" % url_direct_or_indirect, verbose=2)
+ return []
+ write_message("... %s is an external URL" % url_direct_or_indirect, verbose=2)
+ urls_to_index = set()
+ for splash_re, url_re in CFG_BIBINDEX_SPLASH_PAGES.iteritems():
+ if re.match(splash_re, url_direct_or_indirect):
+ write_message("... %s is a splash page (%s)" % (url_direct_or_indirect, splash_re), verbose=2)
+ html = urllib2.urlopen(url_direct_or_indirect).read()
+ urls = get_links_in_html_page(html)
+ write_message("... found these URLs in %s splash page: %s" % (url_direct_or_indirect, ", ".join(urls)), verbose=3)
+ for url in urls:
+ if re.match(url_re, url):
+ write_message("... will index %s (matched by %s)" % (url, url_re), verbose=2)
+ urls_to_index.add(url)
+ if not urls_to_index:
+ urls_to_index.add(url_direct_or_indirect)
+ write_message("... will extract words from %s" % ', '.join(urls_to_index), verbose=2)
+ words = {}
+ for url in urls_to_index:
+ tmpdoc = download_url(url)
+ file_converter_logger = get_file_converter_logger()
+ old_logging_level = file_converter_logger.getEffectiveLevel()
+ if self.verbose > 3:
+ file_converter_logger.setLevel(logging.DEBUG)
+ try:
+ try:
+ tmptext = convert_file(tmpdoc, output_format='.txt')
+ text = open(tmptext).read()
+ os.remove(tmptext)
+
+ indexer = get_idx_indexer('fulltext')
+ if indexer != 'native':
+ if indexer == 'SOLR' and CFG_SOLR_URL:
+ solr_add_fulltext(None, text) # FIXME: use real record ID
+ if indexer == 'XAPIAN' and CFG_XAPIAN_ENABLED:
+ #xapian_add(None, 'fulltext', text) # FIXME: use real record ID
+ pass
+ # we are relying on an external information retrieval system
+ # to provide full-text indexing, so dispatch text to it and
+ # return nothing here:
+ tmpwords = []
+ else:
+ tmpwords = self.tokenize_for_words_default(text)
+ words.update(dict(map(lambda x: (x, 1), tmpwords)))
+ except Exception, e:
+ message = 'ERROR: it\'s impossible to correctly extract words from %s referenced by %s: %s' % (url, url_direct_or_indirect, e)
+ register_exception(prefix=message, alert_admin=True)
+ write_message(message, stream=sys.stderr)
+ finally:
+ os.remove(tmpdoc)
+ if self.verbose > 3:
+ file_converter_logger.setLevel(old_logging_level)
+ return words.keys()
+ except Exception, e:
+ message = 'ERROR: it\'s impossible to correctly extract words from %s: %s' % (url_direct_or_indirect, e)
+ register_exception(prefix=message, alert_admin=True)
+ write_message(message, stream=sys.stderr)
+ return []
+
+
+ def tokenize_for_words(self, phrase):
+ return self.get_words_from_fulltext(phrase)
+
diff --git a/modules/bibindex/lib/tokenizers/BibIndexItemCountTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexItemCountTokenizer.py
new file mode 100644
index 000000000..83d28f923
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexItemCountTokenizer.py
@@ -0,0 +1,49 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexItemCountTokenizer: counts the number of copies of a book
+   owned by the library in the physical world.
+"""
+
+from invenio.bibindex_tokenizers.BibIndexEmptyTokenizer import BibIndexEmptyTokenizer
+
+
+
+class BibIndexItemCountTokenizer(BibIndexEmptyTokenizer):
+ """
+ Returns a number of copies of a book which is owned by the library.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ pass
+
+
+ def tokenize(self, record):
+ """Tokenizes for number of copies of a book in the 'real' library"""
+ count = 0
+ try:
+ count = record['_number_of_copies']
+ except KeyError:
+ pass
+ except TypeError:
+ return []
+ return [str(count)]
+
+
+ def get_tokenizing_function(self, wordtable_type):
+ return self.tokenize
diff --git a/modules/bibindex/lib/tokenizers/BibIndexJournalTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexJournalTokenizer.py
new file mode 100644
index 000000000..d5464521f
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexJournalTokenizer.py
@@ -0,0 +1,108 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexJournalTokenizer: useful for the journal index.
+    Aggregates journal information into the standard form given by its
+    journal_pubinfo_standard_form variable.
+ Behaves in the same way for all index table types:
+ - Words
+ - Pairs
+ - Phrases
+"""
+
+from invenio.dbquery import run_sql
+from invenio.bibindex_tokenizers.BibIndexEmptyTokenizer import BibIndexEmptyTokenizer
+from invenio.config import \
+ CFG_CERN_SITE, \
+ CFG_INSPIRE_SITE
+
+
+if CFG_CERN_SITE:
+ CFG_JOURNAL_TAG = '773__%'
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM = "773__p 773__v (773__y) 773__c"
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*\s\w.*\s\(\d+\)\s\w.*$'
+elif CFG_INSPIRE_SITE:
+ CFG_JOURNAL_TAG = '773__%'
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM = "773__p,773__v,773__c"
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*,\w.*,\w.*$'
+else:
+ CFG_JOURNAL_TAG = '909C4%'
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM = "909C4p 909C4v (909C4y) 909C4c"
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = r'^\w.*\s\w.*\s\(\d+\)\s\w.*$'
+
+
+
+class BibIndexJournalTokenizer(BibIndexEmptyTokenizer):
+ """
+    Tokenizer for the journal index. It returns the joined title/volume/year/page as a single word built from the journal tag.
+ (In fact it's an aggregator.)
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ self.tag = CFG_JOURNAL_TAG
+ self.journal_pubinfo_standard_form = CFG_JOURNAL_PUBINFO_STANDARD_FORM
+ self.journal_pubinfo_standard_form_regexp_check = CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK
+
+
+ def tokenize(self, recID):
+ """
+ Special procedure to extract words from journal tags. Joins
+ title/volume/year/page into a standard form that is also used for
+ citations.
+ """
+ # get all journal tags/subfields:
+ bibXXx = "bib" + self.tag[0] + self.tag[1] + "x"
+ bibrec_bibXXx = "bibrec_" + bibXXx
+ query = """SELECT bb.field_number,b.tag,b.value FROM %s AS b, %s AS bb
+ WHERE bb.id_bibrec=%%s
+ AND bb.id_bibxxx=b.id AND tag LIKE %%s""" % (bibXXx, bibrec_bibXXx)
+ res = run_sql(query, (recID, self.tag))
+ # construct journal pubinfo:
+ dpubinfos = {}
+ for row in res:
+ nb_instance, subfield, value = row
+ if subfield.endswith("c"):
+ # delete pageend if value is pagestart-pageend
+ # FIXME: pages may not be in 'c' subfield
+ value = value.split('-', 1)[0]
+ if dpubinfos.has_key(nb_instance):
+ dpubinfos[nb_instance][subfield] = value
+ else:
+ dpubinfos[nb_instance] = {subfield: value}
+
+ # construct standard format:
+ lwords = []
+ for dpubinfo in dpubinfos.values():
+ # index all journal subfields separately
+ for tag, val in dpubinfo.items():
+ lwords.append(val)
+ # index journal standard format:
+ pubinfo = self.journal_pubinfo_standard_form
+ for tag, val in dpubinfo.items():
+ pubinfo = pubinfo.replace(tag, val)
+ if self.tag[:-1] in pubinfo:
+ # some subfield was missing, do nothing
+ pass
+ else:
+ lwords.append(pubinfo)
+
+ # return list of words and pubinfos:
+ return lwords
+
+ def get_tokenizing_function(self, wordtable_type):
+ return self.tokenize
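The pubinfo aggregation in `tokenize` above boils down to template substitution on the standard form. `build_pubinfo` and the sample subfield values below are illustrative, using the default (non-CERN, non-INSPIRE) template from this file:

```python
def build_pubinfo(standard_form, subfields):
    """Substitute each subfield tag occurring in the standard-form
    template with its value, as BibIndexJournalTokenizer.tokenize does."""
    pubinfo = standard_form
    for tag, val in subfields.items():
        pubinfo = pubinfo.replace(tag, val)
    return pubinfo

# Hypothetical record instance with every subfield present:
form = "909C4p 909C4v (909C4y) 909C4c"
subfields = {"909C4p": "Phys. Rev. D",
             "909C4v": "76",
             "909C4y": "2007",
             "909C4c": "013009"}
print(build_pubinfo(form, subfields))  # Phys. Rev. D 76 (2007) 013009
```

If a subfield is missing, an unreplaced tag remains in the result; that is exactly what the tokenizer's `self.tag[:-1] in pubinfo` check detects before deciding whether to index the standard form.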
diff --git a/modules/bibindex/lib/tokenizers/BibIndexTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexTokenizer.py
new file mode 100644
index 000000000..80ecc9558
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexTokenizer.py
@@ -0,0 +1,116 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexTokenizer: generic, unimplemented tokenizer base class for inheritance.
+"""
+
+
+class BibIndexTokenizer(object):
+ """Base class for the tokenizers
+
+ Tokenizers act as filters which turn input strings into lists of strings
+    which represent the indexable components of that string.
+ """
+ #words part
+ def scan_string_for_words(self, s):
+ """Return an intermediate representation of the tokens in s.
+
+ Every tokenizer should have a scan_string function, which scans the
+ input string and lexically tags its components. These units are
+ grouped together sequentially. The output of scan_string is usually
+ something like:
+ {
+ 'TOKEN_TAG_LIST' : a list of valid keys in this output set,
+            'key1' : [val1, val2, val3] - where the key describes the
+                     values in some meaningful way
+ }
+
+ @param s: the input to be lexically tagged
+ @type s: string
+
+ @return: dict of lexically tagged input items
+ In a sample Tokenizer where scan_string simply splits s on
+ space, scan_string might output the following for
+ "Assam and Darjeeling":
+ {
+ 'TOKEN_TAG_LIST' : 'word_list',
+ 'word_list' : ['Assam', 'and', 'Darjeeling']
+ }
+ @rtype: dict
+ """
+ raise NotImplementedError
+
+ def parse_scanned_for_words(self, o):
+ """Calculate the token list from the intermediate representation o.
+
+ While this should be an interesting computation over the intermediate
+ representation generated by scan_string, obviously in the split-on-
+ space example we need only return o['word_list'].
+
+        @param o: a dictionary with a 'word_list' key
+        @type o: dict
+
+ @return: the token items from 'word_list'
+ @rtype: list of string
+ """
+ raise NotImplementedError
+
+ def tokenize_for_words(self, s):
+ """Main entry point. Return token list from input string s.
+
+ Simply composes the functionality above.
+
+ @param s: the input to be lexically tagged
+ @type s: string
+
+ @return: the token items derived from s
+ @rtype: list of string
+ """
+ raise NotImplementedError
+
+ #pairs part
+ def scan_string_for_pairs(self, s):
+ """ See: scan_string_for_words """
+ raise NotImplementedError
+
+ def parse_scanned_for_pairs(self, o):
+ """ See: parse_scanned_for_words """
+ raise NotImplementedError
+
+ def tokenize_for_pairs(self, s):
+ """ See: tokenize_for_words """
+ raise NotImplementedError
+
+ #phrases part
+ def scan_string_for_phrases(self, s):
+ """ See: scan_string_for_words """
+ raise NotImplementedError
+
+ def parse_scanned_for_phrases(self, o):
+ """ See: parse_scanned_for_words """
+ raise NotImplementedError
+
+ def tokenize_for_phrases(self, s):
+ """ See: tokenize_for_words """
+ raise NotImplementedError
+
+
+ def get_tokenizing_function(self, wordtable_type):
+ """Chooses tokenize_for_words, tokenize_for_phrases or tokenize_for_pairs
+ depending on type of tokenization we want to perform."""
+ raise NotImplementedError
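The base class above is a pure interface; a concrete subclass only overrides the methods it supports. A minimal split-on-space subclass, mirroring the "Assam and Darjeeling" example from the docstrings (the class names here are illustrative, with an abbreviated local copy of the interface so the sketch is self-contained):

```python
class BibIndexTokenizerSketch(object):
    """Abbreviated stand-in for the BibIndexTokenizer interface."""
    def tokenize_for_words(self, s):
        raise NotImplementedError

class SplitOnSpaceTokenizer(BibIndexTokenizerSketch):
    """Concrete tokenizer implementing the scan/parse example."""
    def scan_string_for_words(self, s):
        # lexically tag the input: here, just one 'word_list' group
        return {"TOKEN_TAG_LIST": ["word_list"], "word_list": s.split()}

    def parse_scanned_for_words(self, o):
        # derive the token list from the intermediate representation
        return o["word_list"]

    def tokenize_for_words(self, s):
        # main entry point: compose scan and parse
        return self.parse_scanned_for_words(self.scan_string_for_words(s))

print(SplitOnSpaceTokenizer().tokenize_for_words("Assam and Darjeeling"))
# ['Assam', 'and', 'Darjeeling']
```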
diff --git a/modules/bibindex/lib/tokenizers/BibIndexYearTokenizer.py b/modules/bibindex/lib/tokenizers/BibIndexYearTokenizer.py
new file mode 100644
index 000000000..68c379768
--- /dev/null
+++ b/modules/bibindex/lib/tokenizers/BibIndexYearTokenizer.py
@@ -0,0 +1,71 @@
+# -*- coding:utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2010, 2011, 2012 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+"""BibIndexYearTokenizer: useful for the year index. Extracts words (years) from date tags.
+"""
+
+from invenio.config import \
+ CFG_INSPIRE_SITE
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+
+
+
+class BibIndexYearTokenizer(BibIndexDefaultTokenizer):
+ """
+ Year tokenizer. It tokenizes words from date tags or uses default word tokenizer.
+ """
+
+ def __init__(self, stemming_language = None, remove_stopwords = False, remove_html_markup = False, remove_latex_markup = False):
+ BibIndexDefaultTokenizer.__init__(self, stemming_language,
+ remove_stopwords,
+ remove_html_markup,
+ remove_latex_markup)
+
+
+ def get_words_from_date_tag(self, datestring):
+ """
+ Special procedure to index words from tags storing date-like
+ information in format YYYY or YYYY-MM or YYYY-MM-DD. Namely, we
+ are indexing word-terms YYYY, YYYY-MM, YYYY-MM-DD, but never
+ standalone MM or DD.
+ """
+ out = []
+ for dateword in datestring.split():
+ # maybe there are whitespaces, so break these too
+ out.append(dateword)
+ parts = dateword.split('-')
+ for nb in range(1, len(parts)):
+ out.append("-".join(parts[:nb]))
+ return out
+
+
+ def tokenize_for_words_default(self, phrase):
+ """Default tokenize_for_words inherited from default tokenizer"""
+ return super(BibIndexYearTokenizer, self).tokenize_for_words(phrase)
+
+
+ def tokenize_for_words(self, phrase):
+ """
+        If CFG_INSPIRE_SITE is 1 we perform special tokenization which
+        relies on getting words from the date tag.
+        Otherwise we perform the default tokenization.
+ """
+ if CFG_INSPIRE_SITE:
+ return self.get_words_from_date_tag(phrase)
+ else:
+ return self.tokenize_for_words_default(phrase)
+
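The date-word expansion in `get_words_from_date_tag` can be exercised standalone; `words_from_date` below is a self-contained copy of that logic (the function name is ours):

```python
def words_from_date(datestring):
    """Index YYYY, YYYY-MM and YYYY-MM-DD terms, but never bare MM or DD,
    mirroring BibIndexYearTokenizer.get_words_from_date_tag."""
    out = []
    for dateword in datestring.split():
        # keep the full date word itself
        out.append(dateword)
        # then add every dash-separated prefix (year, year-month, ...)
        parts = dateword.split('-')
        for nb in range(1, len(parts)):
            out.append("-".join(parts[:nb]))
    return out

print(words_from_date("2012-06-28"))
# ['2012-06-28', '2012', '2012-06']
```

Indexing the prefixes rather than the individual components is what lets a search for `2012` or `2012-06` match a record dated `2012-06-28` without ever matching a standalone `06` or `28`.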
diff --git a/po/LINGUAS b/modules/bibindex/lib/tokenizers/Makefile.am
similarity index 77%
copy from po/LINGUAS
copy to modules/bibindex/lib/tokenizers/Makefile.am
index 8f21a0517..394bd73c2 100644
--- a/po/LINGUAS
+++ b/modules/bibindex/lib/tokenizers/Makefile.am
@@ -1,46 +1,24 @@
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-##
-## This is the list of all languages supported by Invenio:
-af
-ar
-bg
-ca
-cs
-de
-el
-en
-es
-fr
-hr
-hu
-gl
-it
-ka
-lt
-ja
-no
-pl
-pt
-ro
-ru
-rw
-sk
-sv
-uk
-zh_CN
-zh_TW
+
+pylibdir=$(libdir)/python/invenio/bibindex_tokenizers
+
+pylib_DATA = *.py
+
+EXTRA_DIST = $(pylib_DATA)
+
+CLEANFILES = *~ *.tmp *.pyc
diff --git a/modules/bibindex/lib/tokenizers/__init__.py b/modules/bibindex/lib/tokenizers/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/modules/bibindex/web/admin/bibindexadmin.py b/modules/bibindex/web/admin/bibindexadmin.py
index ecf294f22..7d6f13e2b 100644
--- a/modules/bibindex/web/admin/bibindexadmin.py
+++ b/modules/bibindex/web/admin/bibindexadmin.py
@@ -1,648 +1,902 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Invenio BibIndex Administrator Interface."""
__revision__ = "$Id$"
__lastupdated__ = """$Date$"""
import invenio.bibindexadminlib as bic
from invenio.webpage import page, error_page
from invenio.config import CFG_SITE_URL, CFG_SITE_LANG, CFG_SITE_NAME
from invenio.webuser import getUid, page_not_authorized
def deletetag(req, fldID, ln=CFG_SITE_LANG, tagID=-1, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_deletetag(fldID=fldID,
ln=ln,
tagID=tagID,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def addtag(req, fldID, ln=CFG_SITE_LANG, value=['',-1], name='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_addtag(fldID=fldID,
ln=ln,
value=value,
name=name,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyfieldtags(req, fldID, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_modifyfieldtags(fldID=fldID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def addindexfield(req, idxID, ln=CFG_SITE_LANG, fldID='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_addindexfield(idxID=idxID,
ln=ln,
fldID=fldID,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyindexfields(req, idxID, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_modifyindexfields(idxID=idxID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def showdetailsfieldtag(req, fldID, tagID, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_showdetailsfieldtag(fldID=fldID,
tagID=tagID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def showdetailsfield(req, fldID, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_showdetailsfield(fldID=fldID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyfield(req, fldID, ln=CFG_SITE_LANG, code='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_modifyfield(fldID=fldID,
ln=ln,
code=code,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyindex(req, idxID, ln=CFG_SITE_LANG, idxNAME='', idxDESC='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_modifyindex(idxID=idxID,
ln=ln,
idxNAME=idxNAME,
idxDESC=idxDESC,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyindexstemming(req, idxID, ln=CFG_SITE_LANG, idxSTEM='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_modifyindexstemming(idxID=idxID,
ln=ln,
idxSTEM=idxSTEM,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+def modifyindexer(req, idxID, ln=CFG_SITE_LANG, indexer='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifyindexer(idxID=idxID,
+ ln=ln,
+ indexer=indexer,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifydependentindexes(req, idxID, ln=CFG_SITE_LANG, newIDs=None, callback='yes', confirm=-1):
+    # Use None instead of a mutable [] default so the list is not shared across calls.
+    if newIDs is None:
+        newIDs = []
+ navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Virtual Index",
+ body=bic.perform_modifydependentindexes(idxID=idxID,
+ ln=ln,
+ newIDs=newIDs,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifysynonymkb(req, idxID, ln=CFG_SITE_LANG, idxKB='', idxMATCH='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail()
+ navtrail_previous_links += """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a>""" % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifysynonymkb(idxID=idxID,
+ ln=ln,
+ idxKB=idxKB,
+ idxMATCH=idxMATCH,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifystopwords(req, idxID, ln=CFG_SITE_LANG, idxSTOPWORDS='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail()
+ navtrail_previous_links += """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifystopwords(idxID=idxID,
+ ln=ln,
+ idxSTOPWORDS=idxSTOPWORDS,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifyremovehtml(req, idxID, ln=CFG_SITE_LANG, idxHTML='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail()
+ navtrail_previous_links += """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifyremovehtml(idxID=idxID,
+ ln=ln,
+ idxHTML=idxHTML,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifyremovelatex(req, idxID, ln=CFG_SITE_LANG, idxLATEX='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail()
+ navtrail_previous_links += """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifyremovelatex(idxID=idxID,
+ ln=ln,
+ idxLATEX=idxLATEX,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
+def modifytokenizer(req, idxID, ln=CFG_SITE_LANG, idxTOK='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail()
+ navtrail_previous_links += """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Index",
+ body=bic.perform_modifytokenizer(idxID=idxID,
+ ln=ln,
+ idxTOK=idxTOK,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
def modifytag(req, fldID, tagID, ln=CFG_SITE_LANG, name='', value='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_modifytag(fldID=fldID,
tagID=tagID,
ln=ln,
name=name,
value=value,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def deletefield(req, fldID, ln=CFG_SITE_LANG, confirm=0):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_deletefield(fldID=fldID,
ln=ln,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def deleteindex(req, idxID, ln=CFG_SITE_LANG, confirm=0):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_deleteindex(idxID=idxID,
ln=ln,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+def deletevirtualindex(req, idxID, ln=CFG_SITE_LANG, confirm=0):
+ navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req, 'cfgbibindex')
+ if not auth[0]:
+ return page(title="Manage Indexes",
+ body=bic.perform_deletevirtualindex(idxID=idxID,
+ ln=ln,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
def showfieldoverview(req, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage logical fields",
body=bic.perform_showfieldoverview(ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def editfields(req, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage logical fields",
body=bic.perform_editfields(ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def editfield(req, fldID, ln=CFG_SITE_LANG, mtype='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_editfield(fldID=fldID,
ln=ln,
mtype=mtype,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def editindex(req, idxID, ln=CFG_SITE_LANG, mtype='', callback='yes', confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_editindex(idxID=idxID,
ln=ln,
mtype=mtype,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+def editvirtualindex(req, idxID, ln=CFG_SITE_LANG, mtype='', callback='yes', confirm=-1):
+ navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Edit Virtual Index",
+ body=bic.perform_editvirtualindex(idxID=idxID,
+ ln=ln,
+ mtype=mtype,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ req=req,
+ navtrail = navtrail_previous_links,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+
def modifyindextranslations(req, idxID, ln=CFG_SITE_LANG, sel_type='', trans = [], confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_modifyindextranslations(idxID=idxID,
ln=ln,
sel_type=sel_type,
trans=trans,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def modifyfieldtranslations(req, fldID, ln=CFG_SITE_LANG, sel_type='', trans = [], confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_modifyfieldtranslations(fldID=fldID,
ln=ln,
sel_type=sel_type,
trans=trans,
confirm=confirm),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def addfield(req, ln=CFG_SITE_LANG, fldNAME='', code='', callback="yes", confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage logical fields",
body=bic.perform_addfield(ln=ln,
fldNAME=fldNAME,
code=code,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
navtrail = navtrail_previous_links,
req=req,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def addindex(req, ln=CFG_SITE_LANG, idxNAME='', callback="yes", confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage Indexes",
body=bic.perform_addindex(ln=ln,
idxNAME=idxNAME,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
navtrail = navtrail_previous_links,
req=req,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
+def addvirtualindex(req, ln=CFG_SITE_LANG, idxNEWVID='', idxNEWPID='', callback="yes", confirm=-1):
+ navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
+
+ try:
+ uid = getUid(req)
+ except:
+ return error_page('Error', req)
+
+ auth = bic.check_user(req,'cfgbibindex')
+ if not auth[0]:
+ return page(title="Manage Indexes",
+ body=bic.perform_addvirtualindex(ln=ln,
+ idxNEWVID=idxNEWVID,
+ idxNEWPID=idxNEWPID,
+ callback=callback,
+ confirm=confirm),
+ uid=uid,
+ language=ln,
+ navtrail = navtrail_previous_links,
+ req=req,
+ lastupdated=__lastupdated__)
+ else:
+ return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
+
def switchtagscore(req, fldID, id_1, id_2, ln=CFG_SITE_LANG):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_switchtagscore(fldID=fldID,
id_1=id_1,
id_2=id_2,
ln=ln),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def removeindexfield(req, idxID, fldID, ln=CFG_SITE_LANG, callback="yes", confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/index">Manage Indexes</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Index",
body=bic.perform_removeindexfield(idxID=idxID,
fldID=fldID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
navtrail = navtrail_previous_links,
req=req,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def removefieldtag(req, fldID, tagID, ln=CFG_SITE_LANG, callback="yes", confirm=-1):
navtrail_previous_links = bic.getnavtrail() + """&gt; <a class="navtrail" href="%s/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a> """ % (CFG_SITE_URL)
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Edit Logical Field",
body=bic.perform_removefieldtag(fldID=fldID,
tagID=tagID,
ln=ln,
callback=callback,
confirm=confirm),
uid=uid,
language=ln,
navtrail = navtrail_previous_links,
req=req,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def index(req, ln=CFG_SITE_LANG, mtype='', content=''):
navtrail_previous_links = bic.getnavtrail()
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage Indexes",
body=bic.perform_index(ln=ln,
mtype=mtype,
content=content),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
def field(req, ln=CFG_SITE_LANG, mtype='', content=''):
navtrail_previous_links = bic.getnavtrail()
try:
uid = getUid(req)
except:
return error_page('Error', req)
auth = bic.check_user(req,'cfgbibindex')
if not auth[0]:
return page(title="Manage logical fields",
body=bic.perform_field(ln=ln,
mtype=mtype,
content=content),
uid=uid,
language=ln,
req=req,
navtrail = navtrail_previous_links,
lastupdated=__lastupdated__)
else:
return page_not_authorized(req=req, text=auth[1], navtrail=navtrail_previous_links)
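Every handler in the file above repeats the same scaffolding: look up the user, call `bic.check_user(req, 'cfgbibindex')`, and render either the page or the not-authorized view. A sketch of how that pattern could be factored into a decorator follows; `check_user`, `render_page` and `render_denied` are injected stand-ins for the real Invenio helpers (`bic.check_user`, `page`, `page_not_authorized`) and are assumptions of this sketch, not Invenio's actual API.

```python
# Sketch: factoring the repeated auth-check scaffolding out of the handlers.
# The collaborator callables are injected so the sketch is self-contained.

def admin_page(title, authorization='cfgbibindex'):
    """Decorator: run the wrapped body-builder only when the user is authorized."""
    def decorator(body_builder):
        def handler(req, check_user, render_page, render_denied, **kwargs):
            auth = check_user(req, authorization)
            if not auth[0]:  # auth[0] falsy means "authorized", as in the handlers above
                return render_page(title, body_builder(**kwargs))
            return render_denied(auth[1])
        return handler
    return decorator

@admin_page("Edit Index")
def modifyindex_body(idxID=None, ln='en'):
    # stands in for bic.perform_modifyindex(...)
    return "form for index %s (%s)" % (idxID, ln)
```

With this shape each handler shrinks to its body-builder, and the authorization action string is stated once per page instead of once per function.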
diff --git a/modules/bibknowledge/lib/bibknowledge_dblayer.py b/modules/bibknowledge/lib/bibknowledge_dblayer.py
index d64160b2e..6d53ab2df 100644
--- a/modules/bibknowledge/lib/bibknowledge_dblayer.py
+++ b/modules/bibknowledge/lib/bibknowledge_dblayer.py
@@ -1,437 +1,450 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Database access related functions for BibKnowledge.
"""
__revision__ = "$Id$"
from invenio.dbquery import run_sql
from invenio.memoiseutils import Memoise
def get_kbs_info(kbtypeparam="", searchkbname=""):
"""Returns all kbs as a list of dictionaries {id, name, description, kbtype}.
If a KB is dynamic, its dynamic configuration keys are added to the dict.
"""
out = []
query = "SELECT id, name, description, kbtype FROM knwKB ORDER BY name"
res = run_sql(query)
for row in res:
doappend = 1 # by default
kbid = row[0]
name = row[1]
description = row[2]
kbtype = row[3]
dynres = {}
if kbtype == 'd':
#get the dynamic config
dynres = get_kb_dyn_config(kbid)
if kbtypeparam:
doappend = 0
if (kbtype == kbtypeparam):
doappend = 1
if searchkbname:
doappend = 0
if (name == searchkbname):
doappend = 1
if doappend:
mydict = {'id':kbid, 'name':name,
'description':description,
'kbtype':kbtype}
mydict.update(dynres)
out.append(mydict)
return out
+
+def get_all_kb_names():
+ """Returns all knowledge base names
+ @return list of names
+ """
+ out = []
+ res = run_sql("""SELECT name FROM knwKB""")
+ for row in res:
+ out.append(row[0])
+ return out
+
+
def get_kb_id(kb_name):
"""Returns the id of the kb with given name"""
res = run_sql("""SELECT id FROM knwKB WHERE name LIKE %s""",
(kb_name,))
if len(res) > 0:
return res[0][0]
else:
return None
get_kb_id_memoised = Memoise(get_kb_id)
def get_kb_name(kb_id):
"""Returns the name of the kb with given id
@param kb_id the id
@return string
"""
res = run_sql("""SELECT name FROM knwKB WHERE id=%s""",
(kb_id,))
if len(res) > 0:
return res[0][0]
else:
return None
def get_kb_type(kb_id):
"""Returns the type of the kb with given id
@param kb_id knowledge base id
@return kb_type
"""
res = run_sql("""SELECT kbtype FROM knwKB WHERE id=%s""",
(kb_id,))
if len(res) > 0:
return res[0][0]
else:
return None
def get_kb_mappings(kb_name="", sortby="to", keylike="", valuelike="", match_type="s"):
"""Returns a list of all mappings from the given kb, ordered by key
@param kb_name knowledge base name; if "", return mappings from all KBs
@param sortby the sorting criterion ('from' or 'to')
@param keylike return only entries whose key matches this
@param valuelike return only entries whose value matches this
@param match_type "s" treats keylike/valuelike as substrings
"""
out = []
k_id = get_kb_id(kb_name)
if len(keylike) > 0:
if match_type == "s":
keylike = "%"+keylike+"%"
else:
keylike = '%'
if len(valuelike) > 0:
if match_type == "s":
valuelike = "%"+valuelike+"%"
else:
valuelike = '%'
if not kb_name:
res = run_sql("""SELECT m.id, m.m_key, m.m_value, m.id_knwKB,
k.name
FROM knwKBRVAL m, knwKB k
where m_key like %s
and m_value like %s
and m.id_knwKB = k.id""", (keylike, valuelike))
else:
res = run_sql("""SELECT m.id, m.m_key, m.m_value, m.id_knwKB,
k.name
FROM knwKBRVAL m, knwKB k
WHERE id_knwKB=%s
and m.id_knwKB = k.id
and m_key like %s
and m_value like %s""", (k_id, keylike, valuelike))
#sort res
lres = list(res)
if sortby == "from":
    lres.sort(key=lambda x: x[1])
else:
    lres.sort(key=lambda x: x[2])
for row in lres:
out.append({'id':row[0], 'key':row[1],
'value': row[2],
'kbid': row[3], 'kbname': row[4]})
return out
def get_kb_dyn_config(kb_id):
"""
Returns a dictionary {'field': ..., 'expression': ...} for a knowledge
base of type 'd'. It may also contain 'coll_id' and 'collection'.
@param kb_id the id
@return dict
"""
res = run_sql("""SELECT output_tag, search_expression, id_collection
FROM knwKBDDEF where
id_knwKB = %s""", (kb_id, ))
mydict = {}
for row in res:
mydict['field'] = row[0]
mydict['expression'] = row[1]
mydict['coll_id'] = row[2]
#put a collection field if collection exists..
if 'coll_id' in mydict:
c_id = mydict['coll_id']
res = run_sql("""SELECT name from collection where id = %s""", (c_id,))
if res:
mydict['collection'] = res[0][0]
return mydict
def save_kb_dyn_config(kb_id, field, expression, collection=""):
"""Saves a dynamic knowledge base configuration
@param kb_id the id
@param field the field from which values are extracted
@param expression the search expression used for the extraction
@param collection restrict extraction to this collection (default: all)
"""
#check that collection exists
coll_id = None
if collection:
res = run_sql("""SELECT id from collection where name = %s""", (collection,))
if res:
coll_id = res[0][0]
run_sql("""DELETE FROM knwKBDDEF where id_knwKB = %s""", (kb_id, ))
run_sql("""INSERT INTO knwKBDDEF (id_knwKB, output_tag, search_expression, id_collection)
VALUES (%s,%s,%s,%s)""", (kb_id, field, expression, coll_id))
return ""
def get_kb_description(kb_name):
"""Returns the description of the given kb
@param kb_name the name of the knowledge base
@return string
"""
k_id = get_kb_id(kb_name)
res = run_sql("""SELECT description FROM knwKB WHERE id=%s""", (k_id,))
return res[0][0]
def add_kb(kb_name, kb_description, kb_type=None):
"""
Adds a new kb with given name and description. Returns the id of
the kb.
If the name already exists, the old entry is replaced.
@param kb_name the name of the kb to create
@param kb_description a description for the kb
@return the id of the newly created kb
"""
kb_db = 'w' # default: the typical written_as -> change_to type
if kb_type == 'taxonomy':
    kb_db = 't'
elif kb_type == 'dynamic':
    kb_db = 'd'
run_sql("""REPLACE INTO knwKB (name, description, kbtype)
VALUES (%s,%s,%s)""", (kb_name, kb_description, kb_db))
return get_kb_id(kb_name)
def delete_kb(kb_name):
"""Deletes the given kb"""
k_id = get_kb_id(kb_name)
run_sql("""DELETE FROM knwKBRVAL WHERE id_knwKB = %s""", (k_id,))
run_sql("""DELETE FROM knwKB WHERE id = %s""", (k_id,))
#finally, delete from COLL table
run_sql("""DELETE FROM knwKBDDEF where id_knwKB = %s""", (k_id,))
return True
def kb_exists(kb_name):
"""Returns True if a kb with the given name exists"""
rows = run_sql("""SELECT id FROM knwKB WHERE name = %s""",
(kb_name,))
if len(rows) > 0:
return True
else:
return False
def update_kb(kb_name, new_name, new_description=''):
"""Updates given kb with new name and (optionally) new description"""
k_id = get_kb_id(kb_name)
run_sql("""UPDATE knwKB
SET name = %s , description = %s
WHERE id = %s""", (new_name, new_description, k_id))
return True
def add_kb_mapping(kb_name, key, value):
"""Adds new mapping key->value in given kb"""
k_id = get_kb_id(kb_name)
run_sql("""REPLACE INTO knwKBRVAL (m_key, m_value, id_knwKB)
VALUES (%s, %s, %s)""", (key, value, k_id))
return True
def remove_kb_mapping(kb_name, key):
"""Removes mapping with given key from given kb"""
k_id = get_kb_id(kb_name)
run_sql("""DELETE FROM knwKBRVAL
WHERE m_key = %s AND id_knwKB = %s""",
(key, k_id))
return True
def kb_mapping_exists(kb_name, key):
"""Returns true if the mapping with given key exists in the given kb"""
if kb_exists(kb_name):
k_id = get_kb_id(kb_name)
rows = run_sql("""SELECT id FROM knwKBRVAL
WHERE m_key = %s
AND id_knwKB = %s""", (key, k_id))
if len(rows) > 0:
return True
return False
def kb_key_rules(key):
"""Returns a list of 4-tuples that have a key->value mapping in some KB
The format of the tuples is [kb_id, kb_name,key,value] """
res = run_sql("""SELECT f.id, f.name, m.m_key, m.m_value
from knwKBRVAL as m JOIN
knwKB as f on
m.id_knwKB=f.id WHERE
m.m_key = %s""", (key, ))
return res
def kb_value_rules(value):
"""Returns a list of 4-tuples that have a key->value mapping in some KB
The format of the tuples is [kb_id, kb_name,key,value] """
res = run_sql("""SELECT f.id, f.name, m.m_key, m.m_value from
knwKBRVAL as m JOIN
knwKB as f on
m.id_knwKB=f.id WHERE
m.m_value = %s""", (value, ))
return res
def get_kb_mapping_value(kb_name, key):
"""
Returns a value of the given key from the given kb.
If the mapping is not found, returns None.
@param kb_name the name of a knowledge base
@param key the key to look for
"""
k_id = get_kb_id(kb_name)
res = run_sql("""SELECT m_value FROM knwKBRVAL
WHERE m_key LIKE %s
AND id_knwKB = %s LIMIT 1""",
(key, k_id))
if len(res) > 0:
return res[0][0]
else:
return None # default
def update_kb_mapping(kb_name, key, new_key, new_value):
"""Updates the mapping given by key with new key and value"""
k_id = get_kb_id(kb_name)
run_sql("""UPDATE knwKBRVAL
SET m_key = %s , m_value = %s
WHERE m_key = %s AND id_knwKB = %s""",
(new_key, new_value, key, k_id))
return True
#the following functions should be used by a higher level API
def get_kba_values(kb_name, searchname="", searchtype="s"):
"""Returns the "authority file" type of list of values for a
given knowledge base.
@param kb_name the name of the knowledge base
@param searchname restrict results to values matching this
@param searchtype s=substring, e=exact, sw=startswith
"""
k_id = get_kb_id(kb_name)
if searchtype == 's' and searchname:
searchname = '%'+searchname+'%'
if searchtype == 'sw' and searchname: #startswith
searchname = searchname+'%'
if not searchname:
searchname = '%'
res = run_sql("""SELECT m_value FROM knwKBRVAL
WHERE m_value LIKE %s
AND id_knwKB = %s""",
(searchname, k_id))
return res
def get_kbr_keys(kb_name, searchkey="", searchvalue="", searchtype='s'):
"""Returns keys from a knowledge base
@param kb_name the name of the knowledge base
@param searchkey search using this key
@param searchvalue search using this value
@param searchtype s=substring, e=exact, sw=startswith
"""
k_id = get_kb_id(kb_name)
if searchtype == 's' and searchkey:
searchkey = '%'+searchkey+'%'
if searchtype == 's' and searchvalue:
searchvalue = '%'+searchvalue+'%'
if searchtype == 'sw' and searchvalue: #startswith
searchvalue = searchvalue+'%'
if not searchvalue:
searchvalue = '%'
if not searchkey:
searchkey = '%'
return run_sql("""SELECT m_key FROM knwKBRVAL
WHERE m_value LIKE %s
AND m_key LIKE %s
AND id_knwKB = %s""",
(searchvalue, searchkey, k_id))
def get_kbr_values(kb_name, searchkey="%", searchvalue="", searchtype='s', use_memoise=False):
"""Returns values from a knowledge base
Note the intentional asymmetry between searchkey and searchvalue:
If searchkey is unspecified or empty for substring, it matches anything,
but if it is empty for exact, it matches nothing.
If searchvalue is unspecified or empty, it matches anything in all cases.
@param kb_name the name of the knowledge base
@param searchkey search using this key
@param searchvalue search using this value
@param searchtype s=substring, e=exact, sw=startswith
@param use_memoise: can we memoise while doing lookups?
@type use_memoise: bool
@return a list of values
"""
if use_memoise:
k_id = get_kb_id_memoised(kb_name)
else:
k_id = get_kb_id(kb_name)
if searchtype == 's':
searchkey = '%'+searchkey+'%'
if searchtype == 's' and searchvalue:
searchvalue = '%'+searchvalue+'%'
if searchtype == 'sw' and searchvalue: #startswith
searchvalue = searchvalue+'%'
if not searchvalue:
searchvalue = '%'
return run_sql("""SELECT m_value FROM knwKBRVAL
WHERE m_value LIKE %s
AND m_key LIKE %s
AND id_knwKB = %s""",
(searchvalue, searchkey, k_id))
get_kbr_values_memoised = Memoise(get_kbr_values)
def get_kbr_items(kb_name, searchkey="", searchvalue="", searchtype='s'):
"""Returns dicts of 'key' and 'value' from a knowledge base
@param kb_name the name of the knowledge base
@param searchkey search using this key
@param searchvalue search using this value
@param searchtype s=substring, e=exact, sw=startswith
@return a list of dictionaries [{'key'=>x, 'value'=>y},..]
"""
k_id = get_kb_id(kb_name)
if searchtype == 's' and searchkey:
searchkey = '%'+searchkey+'%'
if searchtype == 's' and searchvalue:
searchvalue = '%'+searchvalue+'%'
if searchtype == 'sw' and searchvalue: #startswith
searchvalue = searchvalue+'%'
if not searchvalue:
searchvalue = '%'
if not searchkey:
searchkey = '%'
res = []
rows = run_sql("""SELECT m_key, m_value FROM knwKBRVAL
WHERE m_value LIKE %s
AND m_key LIKE %s
AND id_knwKB = %s""",
(searchvalue, searchkey, k_id))
for row in rows:
mdict = {}
m_key = row[0]
m_value = row[1]
mdict['key'] = m_key
mdict['value'] = m_value
res.append(mdict)
return res
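The wildcard handling in `get_kbr_values()` above, including the key/value asymmetry its docstring points out, can be illustrated standalone. The sketch below mirrors the pattern-building logic and approximates SQL `LIKE` with `fnmatch`; the helper names (`build_like_patterns`, `like_match`) are ours, not part of Invenio.

```python
# Standalone illustration of the wildcard-pattern handling in get_kbr_values(),
# using fnmatch in place of SQL LIKE.
import fnmatch

def build_like_patterns(searchkey="%", searchvalue="", searchtype="s"):
    """Mirror get_kbr_values: 's' wraps the key (always) and the value (if
    non-empty) in '%'; 'sw' appends '%' to a non-empty value; an empty value
    always falls back to '%' (match anything), while an empty key matches
    anything only under substring search -- the documented asymmetry."""
    if searchtype == 's':
        searchkey = '%' + searchkey + '%'
        if searchvalue:
            searchvalue = '%' + searchvalue + '%'
    if searchtype == 'sw' and searchvalue:
        searchvalue = searchvalue + '%'
    if not searchvalue:
        searchvalue = '%'
    return searchkey, searchvalue

def like_match(text, pattern):
    """Approximate SQL LIKE for patterns whose only wildcard is '%'."""
    return fnmatch.fnmatchcase(text, pattern.replace('%', '*'))
```

For example, a substring search for key `"title"` with no value yields patterns `('%title%', '%')`, while an exact search with an empty key yields a pattern that matches no key at all.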
diff --git a/modules/bibrank/lib/bibrank_citation_indexer.py b/modules/bibrank/lib/bibrank_citation_indexer.py
index 77496703b..c28770c61 100644
--- a/modules/bibrank/lib/bibrank_citation_indexer.py
+++ b/modules/bibrank/lib/bibrank_citation_indexer.py
@@ -1,1016 +1,1017 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
import re
import time
import os
import sys
import ConfigParser
from itertools import islice
from datetime import datetime
from invenio.dbquery import run_sql, serialize_via_marshal, \
deserialize_via_marshal
-from invenio.bibindex_engine import CFG_JOURNAL_PUBINFO_STANDARD_FORM
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM, \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK
from invenio.search_engine import search_pattern, search_unit
from invenio.search_engine_utils import get_fieldvalues
from invenio.bibformat_utils import parse_tag
from invenio.bibknowledge import get_kb_mappings
from invenio.bibtask import write_message, task_get_option, \
task_update_progress, task_sleep_now_if_required, \
task_get_task_param
from invenio.errorlib import register_exception
-from invenio.bibindex_engine import get_field_tags
-from invenio.bibindex_engine import CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK
+from invenio.bibindex_engine_utils import get_field_tags
INTBITSET_OF_DELETED_RECORDS = search_unit(p='DELETED', f='980', m='a')
re_CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK = re.compile(CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK)
def get_recids_matching_query(p, f, m='e'):
"""Return set of recIDs matching query for pattern p in field f."""
return search_pattern(p=p, f=f, m=m) - INTBITSET_OF_DELETED_RECORDS
def get_citation_weight(rank_method_code, config, chunk_size=20000):
"""return a dictionary which is used by bibrank daemon for generating
the index of sorted research results by citation information
"""
begin_time = time.time()
quick = task_get_option("quick") != "no"
# id option forces re-indexing a certain range
# even if there are no new recs
if task_get_option("id"):
# construct a range of records to index
updated_recids = []
for first, last in task_get_option("id"):
updated_recids += range(first, last+1)
if len(updated_recids) > 10000:
str_updated_recids = str(updated_recids[:10]) + ' ... ' + str(updated_recids[-10:])
else:
str_updated_recids = str(updated_recids)
write_message('Records to process: %s' % str_updated_recids)
index_update_time = None
else:
bibrank_update_time = get_bibrankmethod_lastupdate(rank_method_code)
if not quick:
bibrank_update_time = "0000-00-00 00:00:00"
write_message("bibrank: %s" % bibrank_update_time)
index_update_time = get_bibindex_update_time()
write_message("bibindex: %s" % index_update_time)
if index_update_time > datetime.now().strftime("%Y-%m-%d %H:%M:%S"):
index_update_time = "0000-00-00 00:00:00"
updated_recids = get_modified_recs(bibrank_update_time,
index_update_time)
if len(updated_recids) > 10000:
str_updated_recids = str(updated_recids[:10]) + ' ... ' + str(updated_recids[-10:])
else:
str_updated_recids = str(updated_recids)
write_message("%s records to update" % str_updated_recids)
if updated_recids:
# result_intermediate should be guaranteed to exist!
# but if the user entered a "-R" (do all) option, we need to
# make an empty start set
if quick:
dicts = {
'cites_weight': last_updated_result(rank_method_code),
'cites': get_cit_dict("citationdict"),
'refs': get_cit_dict("reversedict"),
'selfcites': get_cit_dict("selfcitdict"),
'selfrefs': get_cit_dict("selfcitedbydict"),
'authorcites': get_initial_author_dict(),
}
else:
dicts = {
'cites_weight': {},
'cites': {},
'refs': {},
'selfcites': {},
'selfrefs': {},
'authorcites': {},
}
# Process fully the updated records
process_and_store(updated_recids, config, dicts, chunk_size, quick)
end_time = time.time()
write_message("Total time of get_citation_weight(): %.2f sec" % \
(end_time - begin_time))
task_update_progress("citation analysis done")
cites_weight = dicts['cites_weight']
else:
cites_weight = {}
write_message("No new records added since last time this " \
"rank method was executed")
return cites_weight, index_update_time
def process_and_store(recids, config, dicts, chunk_size, quick):
# Process recent records first
# The older records were most likely added by the above steps
# to be reprocessed so they only have minor changes
recids_iter = iter(sorted(recids, reverse=True))
# Split records to process into chunks so that we do not
# fill up too much memory
while True:
task_sleep_now_if_required()
chunk = list(islice(recids_iter, chunk_size))
if not chunk:
if not quick:
store_dicts(dicts)
break
write_message("Processing chunk #%s to #%s" % (chunk[0], chunk[-1]))
# dicts are modified in-place
process_chunk(chunk, config, dicts)
if quick:
# Store partial result as it is just an update and not
# a creation from scratch
store_dicts(dicts)
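The chunking above can be sketched in isolation: islice drains a shared iterator, so successive calls yield consecutive, non-overlapping chunks, newest records first (the helper name iter_chunks is illustrative):

```python
from itertools import islice

def iter_chunks(ids, chunk_size):
    # Newest records first, mirroring process_and_store
    it = iter(sorted(ids, reverse=True))
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            break
        yield chunk
```

For example, iter_chunks([1, 2, 3, 4, 5], 2) yields [5, 4], [3, 2], [1] in turn, so memory usage stays bounded by chunk_size regardless of how many records need reprocessing.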
def process_chunk(recids, config, dicts):
cites_weight = dicts['cites_weight']
cites = dicts['cites']
refs = dicts['refs']
old_refs = {}
for recid in recids:
old_refs[recid] = set(refs.get(recid, []))
old_cites = {}
for recid in recids:
old_cites[recid] = set(cites.get(recid, []))
process_inner(recids, config, dicts)
# Records cited by updated_recid_list
# They can only lose references as added references
# are already added to the dicts at this point
for somerecid in recids:
for recid in set(old_cites[somerecid]) - set(cites.get(somerecid, [])):
refs[recid] = list(set(refs.get(recid, [])) - set([somerecid]))
if not refs[recid]:
del refs[recid]
# Records referenced by updated_recid_list
# They can only lose citations as added citations
# are already added to the dicts at this point
for somerecid in recids:
for recid in set(old_refs[somerecid]) - set(refs.get(somerecid, [])):
cites[recid] = list(set(cites.get(recid, [])) - set([somerecid]))
cites_weight[recid] = len(cites[recid])
if not cites[recid]:
del cites[recid]
del cites_weight[recid]
def process_inner(recids, config, dicts, do_catchup=True):
tags = get_tags_config(config)
# call the procedure that does the hard work by reading fields of
# citations and references in the updated_recid's (but nothing else)!
write_message("Entering get_citation_informations", verbose=9)
citation_informations = get_citation_informations(recids, tags,
fetch_catchup_info=do_catchup)
write_message("Entering ref_analyzer", verbose=9)
# call the analyser that uses the citation_informations to actually
# search for x-cites-y in the collection
return ref_analyzer(citation_informations,
dicts,
recids,
tags,
do_catchup=do_catchup)
def get_bibrankmethod_lastupdate(rank_method_code):
"""return the last excution date of bibrank method
"""
query = """SELECT DATE_FORMAT(last_updated, '%%Y-%%m-%%d %%H:%%i:%%s')
FROM rnkMETHOD WHERE name =%s"""
last_update_time = run_sql(query, [rank_method_code])
try:
r = last_update_time[0][0]
except IndexError:
r = "0000-00-00 00:00:00"
return r
def get_bibindex_update_time():
try:
# check indexing times of the `journal` and `reportnumber`
# indexes, and only fetch records which have been indexed
sql = "SELECT DATE_FORMAT(MIN(last_updated), " \
"'%%Y-%%m-%%d %%H:%%i:%%s') FROM idxINDEX WHERE name IN (%s,%s)"
index_update_time = run_sql(sql, ('journal', 'reportnumber'), 1)[0][0]
except IndexError:
write_message("Not running citation indexer since journal/reportnumber"
" indexes are not created yet.")
index_update_time = "0000-00-00 00:00:00"
return index_update_time
def get_modified_recs(bibrank_method_lastupdate, indexes_lastupdate):
"""Get records to be updated by bibrank indexing
Return the list of records which have been modified between the last
execution of bibrank method and the latest journal/report index updates.
The result is expected to have ascending id order.
"""
query = """SELECT id FROM bibrec
WHERE modification_date >= %s
AND modification_date < %s
ORDER BY id ASC"""
records = run_sql(query, (bibrank_method_lastupdate, indexes_lastupdate))
return [r[0] for r in records]
def last_updated_result(rank_method_code):
""" return the last value of dictionary in rnkMETHODDATA table if it
exists and initialize the value of last updated records by zero,
otherwise an initial dictionary with zero as value for all recids
"""
query = """SELECT relevance_data FROM rnkMETHOD, rnkMETHODDATA WHERE
rnkMETHOD.id = rnkMETHODDATA.id_rnkMETHOD
AND rnkMETHOD.Name = '%s'""" % rank_method_code
try:
rdict = run_sql(query)[0][0]
except IndexError:
dic = {}
else:
dic = deserialize_via_marshal(rdict)
return dic
def format_journal(format_string, mappings):
"""format the publ infostring according to the format"""
def replace(char, data):
return data.get(char, char)
return ''.join(replace(c, mappings) for c in format_string)
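format_journal substitutes each single character of the format string through the mappings dict, leaving unmapped characters (spaces, parentheses) untouched; a standalone sketch of the same per-character replacement, assuming a standard form like 'p v (y) c':

```python
def format_journal(format_string, mappings):
    """format the publication info string according to the format string"""
    # every character is looked up in mappings; unknown chars pass through
    def replace(char, data):
        return data.get(char, char)
    return ''.join(replace(c, mappings) for c in format_string)
```

For example, format_journal('p v (y) c', {'p': 'Phys.Lett.', 'v': 'B482', 'y': '2000', 'c': '417'}) produces 'Phys.Lett. B482 (2000) 417'.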
def get_tags_config(config):
"""Fetch needs config from our config file"""
# Probably "citation" unless this file gets renamed
function = config.get("rank_method", "function")
write_message("config function %s" % function, verbose=9)
tags = {}
# 037a: contains (often) the "hep-ph/0501084" tag of THIS record
try:
tag = config.get(function, "primary_report_number")
except ConfigParser.NoOptionError:
tags['record_pri_number'] = None
else:
tags['record_pri_number'] = tagify(parse_tag(tag))
# 088a: additional short identifier for the record
try:
tag = config.get(function, "additional_report_number")
except ConfigParser.NoOptionError:
tags['record_add_number'] = None
else:
tags['record_add_number'] = tagify(parse_tag(tag))
# 999C5r. this is in the reference list, refers to other records.
# Looks like: hep-ph/0408002
try:
tag = config.get(function, "reference_via_report_number")
except ConfigParser.NoOptionError:
tags['refs_report_number'] = None
else:
tags['refs_report_number'] = tagify(parse_tag(tag))
# 999C5s. this is in the reference list, refers to other records.
# Looks like: Phys.Rev.,A21,78
try:
tag = config.get(function, "reference_via_pubinfo")
except ConfigParser.NoOptionError:
tags['refs_journal'] = None
else:
tags['refs_journal'] = tagify(parse_tag(tag))
# 999C5a. this is in the reference list, refers to other records.
# Looks like: 10.1007/BF03170733
try:
tag = config.get(function, "reference_via_doi")
except ConfigParser.NoOptionError:
tags['refs_doi'] = None
else:
tags['refs_doi'] = tagify(parse_tag(tag))
# Fields needed to construct the journals for this record
try:
tag = {
'pages': config.get(function, "pubinfo_journal_page"),
'year': config.get(function, "pubinfo_journal_year"),
'journal': config.get(function, "pubinfo_journal_title"),
'volume': config.get(function, "pubinfo_journal_volume"),
}
except ConfigParser.NoOptionError:
tags['publication'] = None
else:
tags['publication'] = {
'pages': tagify(parse_tag(tag['pages'])),
'year': tagify(parse_tag(tag['year'])),
'journal': tagify(parse_tag(tag['journal'])),
'volume': tagify(parse_tag(tag['volume'])),
}
# Fields needed to lookup the DOIs
tags['doi'] = get_field_tags('doi')
# 999C5s. A standardized way of writing a reference in the reference list.
# Like: Nucl. Phys. B 710 (2000) 371
try:
tags['publication_format'] = config.get(function,
"pubinfo_journal_format")
except ConfigParser.NoOptionError:
tags['publication_format'] = CFG_JOURNAL_PUBINFO_STANDARD_FORM
# Print values of tags for debugging
write_message("tag values: %r" % [tags], verbose=9)
return tags
def get_journal_info(recid, tags):
record_info = []
# TODO: handle records with multiple journals
tagsvalues = {} # we store the tags and their values here
# like c->444 y->1999 p->"journal of foo",
# v->20
tmp = get_fieldvalues(recid, tags['publication']['journal'])
if tmp:
tagsvalues["p"] = tmp[0]
tmp = get_fieldvalues(recid, tags['publication']['volume'])
if tmp:
tagsvalues["v"] = tmp[0]
tmp = get_fieldvalues(recid, tags['publication']['year'])
if tmp:
tagsvalues["y"] = tmp[0]
tmp = get_fieldvalues(recid, tags['publication']['pages'])
if tmp:
# if the page numbers have "x-y" take just x
pages = tmp[0]
hpos = pages.find("-")
if hpos > 0:
pages = pages[:hpos]
tagsvalues["c"] = pages
# check if we have the required data
ok = True
for c in tags['publication_format']:
if c in ('p', 'v', 'y', 'c'):
if c not in tagsvalues:
ok = False
if ok:
publ = format_journal(tags['publication_format'], tagsvalues)
record_info += [publ]
alt_volume = get_alt_volume(tagsvalues['v'])
if alt_volume:
tagsvalues2 = tagsvalues.copy()
tagsvalues2['v'] = alt_volume
publ = format_journal(tags['publication_format'], tagsvalues2)
record_info += [publ]
# Add codens
for coden in get_kb_mappings('CODENS',
value=tagsvalues['p']):
tagsvalues2 = tagsvalues.copy()
tagsvalues2['p'] = coden['key']
publ = format_journal(tags['publication_format'], tagsvalues2)
record_info += [publ]
return record_info
def get_alt_volume(volume):
alt_volume = None
if re.match(ur'[a-zA-Z]\d+', volume, re.U|re.I):
alt_volume = volume[1:] + volume[0]
elif re.match(ur'\d+[a-zA-Z]', volume, re.U|re.I):
alt_volume = volume[-1] + volume[:-1]
return alt_volume
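The two branches simply rotate a leading or trailing series letter to the other end, so both conventions for writing a journal volume get tried in the lookup; a standalone sketch under an illustrative name (swap_volume_letter is not part of the module):

```python
import re

def swap_volume_letter(volume):
    # 'B21' -> '21B', '21B' -> 'B21'; a purely numeric '21' -> None
    if re.match(r'[a-zA-Z]\d+', volume, re.U | re.I):
        return volume[1:] + volume[0]
    if re.match(r'\d+[a-zA-Z]', volume, re.U | re.I):
        return volume[-1] + volume[:-1]
    return None
```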
def get_citation_informations(recid_list, tags, fetch_catchup_info=True):
"""scans the collections searching references (999C5x -fields) and
citations for items in the recid_list
returns a 4 list of dictionaries that contains the citation information
of cds records
examples: [ {} {} {} {} ]
[ {5: 'SUT-DP-92-70-5'},
{ 93: ['astro-ph/9812088']},
{ 93: ['Phys. Rev. Lett. 96 (2006) 081301'] }, {} ]
NB: stuff here is for analysing new or changed records.
see "ref_analyzer" for more.
"""
begin_time = os.times()[4]
records_info = {
'report-numbers': {},
'journals': {},
'doi': {},
}
references_info = {
'report-numbers': {},
'journals': {},
'doi': {},
}
# perform quick check to see if there are some records with
# reference tags, because otherwise get.cit.inf would be slow even
# if there is nothing to index:
if run_sql("SELECT value FROM bib%sx WHERE tag=%%s LIMIT 1" % tags['refs_journal'][0:2],
(tags['refs_journal'], )) or \
run_sql("SELECT value FROM bib%sx WHERE tag=%%s LIMIT 1" % tags['refs_report_number'][0:2],
(tags['refs_report_number'], )):
done = 0 # for status reporting
for recid in recid_list:
if done % 10 == 0:
task_sleep_now_if_required()
# in fact we can sleep any time here
if done % 1000 == 0:
mesg = "get cit.inf done %s of %s" % (done, len(recid_list))
write_message(mesg)
task_update_progress(mesg)
done += 1
if recid in INTBITSET_OF_DELETED_RECORDS:
# do not treat this record since it was deleted; we
# skip it like this in case it was only soft-deleted
# e.g. via bibedit (i.e. when collection tag 980 is
# DELETED but other tags like report number or journal
# publication info remained the same, so the calls to
# get_fieldvalues() below would return old values)
continue
if tags['refs_report_number']:
references_info['report-numbers'][recid] \
= get_fieldvalues(recid,
tags['refs_report_number'],
sort=False)
msg = "references_info['report-numbers'][%s] = %r" \
% (recid, references_info['report-numbers'][recid])
write_message(msg, verbose=9)
if tags['refs_journal']:
references_info['journals'][recid] = []
for ref in get_fieldvalues(recid,
tags['refs_journal'],
sort=False):
try:
# Inspire specific parsing
journal, volume, page = ref.split(',')
except ValueError:
pass
else:
alt_volume = get_alt_volume(volume)
if alt_volume:
alt_ref = ','.join([journal, alt_volume, page])
references_info['journals'][recid] += [alt_ref]
references_info['journals'][recid] += [ref]
msg = "references_info['journals'][%s] = %r" \
% (recid, references_info['journals'][recid])
write_message(msg, verbose=9)
if tags['refs_doi']:
references_info['doi'][recid] \
= get_fieldvalues(recid, tags['refs_doi'], sort=False)
msg = "references_info['doi'][%s] = %r" \
% (recid, references_info['doi'][recid])
write_message(msg, verbose=9)
if not fetch_catchup_info:
# We do not need the extra info
continue
if tags['record_pri_number'] or tags['record_add_number']:
records_info['report-numbers'][recid] = []
if tags['record_pri_number']:
records_info['report-numbers'][recid] \
+= get_fieldvalues(recid,
tags['record_pri_number'],
sort=False)
if tags['record_add_number']:
records_info['report-numbers'][recid] \
+= get_fieldvalues(recid,
tags['record_add_number'],
sort=False)
msg = "records_info[%s]['report-numbers'] = %r" \
% (recid, records_info['report-numbers'][recid])
write_message(msg, verbose=9)
if tags['doi']:
records_info['doi'][recid] = []
for tag in tags['doi']:
records_info['doi'][recid] += get_fieldvalues(recid,
tag,
sort=False)
msg = "records_info[%s]['doi'] = %r" \
% (recid, records_info['doi'][recid])
write_message(msg, verbose=9)
# get a combination of
# journal vol (year) pages
if tags['publication']:
records_info['journals'][recid] = get_journal_info(recid, tags)
msg = "records_info[%s]['journals'] = %r" \
% (recid, records_info['journals'][recid])
write_message(msg, verbose=9)
else:
mesg = "Warning: there are no records with tag values for " \
"%s or %s. Nothing to do." % \
(tags['refs_journal'], tags['refs_report_number'])
write_message(mesg)
mesg = "get cit.inf done fully"
write_message(mesg)
task_update_progress(mesg)
end_time = os.times()[4]
write_message("Execution time for generating citation info "
"from record: %.2f sec" % (end_time - begin_time))
return records_info, references_info
def standardize_report_number(report_number):
# Remove category for arxiv papers
# NB: pass the flags by keyword; the 4th positional argument of
# re.sub is `count`, not `flags`
report_number = re.sub(ur'(?:arXiv:)?(\d{4}\.\d{4}) \[[a-zA-Z\.-]+\]',
ur'arXiv:\g<1>',
report_number,
flags=re.I | re.U)
return report_number
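The normalization removes the trailing category of new-style arXiv identifiers and forces the 'arXiv:' prefix, so both spellings of the same report number collapse to one search key; a standalone sketch with the pattern precompiled:

```python
import re

# same pattern as above, compiled once with the flags attached
ARXIV_RE = re.compile(r'(?:arXiv:)?(\d{4}\.\d{4}) \[[a-zA-Z.-]+\]',
                      re.I | re.U)

def standardize_report_number(report_number):
    # '1201.1234 [hep-ph]' and 'arXiv:1201.1234 [hep-ph]' both become
    # 'arXiv:1201.1234'; anything else passes through unchanged
    return ARXIV_RE.sub(r'arXiv:\g<1>', report_number)
```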
def ref_analyzer(citation_informations, dicts,
updated_recids, tags, do_catchup=True):
"""Analyze the citation informations and calculate the citation weight
and cited by list dictionary.
"""
citations_weight = dicts['cites_weight']
citations = dicts['cites']
references = dicts['refs']
selfcites = dicts['selfcites']
selfrefs = dicts['selfrefs']
authorcites = dicts['authorcites']
def step(msg_prefix, recid, done, total):
if done % 30 == 0:
task_sleep_now_if_required()
if done % 1000 == 0:
mesg = "%s done %s of %s" % (msg_prefix, done, total)
write_message(mesg)
task_update_progress(mesg)
write_message("Processing: %s" % recid, verbose=9)
def add_to_dicts(citer, cited):
# Make sure we don't add ourselves
# Workaround till we know why we are adding ourselves.
if citer == cited:
return
if cited not in citations_weight:
citations_weight[cited] = 0
# Citations and citations weight
if citer not in citations.setdefault(cited, []):
citations[cited].append(citer)
citations_weight[cited] += 1
# References
if cited not in references.setdefault(citer, []):
references[citer].append(cited)
# dict of recid -> institute_give_publ_id
records_info, references_info = citation_informations
t1 = os.times()[4]
write_message("Phase 0: temporarily remove changed records from " \
"citation dictionaries; they will be filled later")
if do_catchup:
for somerecid in updated_recids:
try:
del citations[somerecid]
except KeyError:
pass
for somerecid in updated_recids:
try:
del references[somerecid]
except KeyError:
pass
# Try to find references based on 999C5r
# e.g 8 -> ([astro-ph/9889],[hep-ph/768])
# meaning: rec 8 contains these in bibliography
write_message("Phase 1: Report numbers references")
done = 0
for thisrecid, refnumbers in references_info['report-numbers'].iteritems():
step("Report numbers references", thisrecid, done,
len(references_info['report-numbers']))
done += 1
for refnumber in (r for r in refnumbers if r):
field = 'reportnumber'
refnumber = standardize_report_number(refnumber)
# Search for "hep-th/5644654 or such" in existing records
recids = get_recids_matching_query(p=refnumber, f=field)
write_message("These match searching %s in %s: %s" % \
(refnumber, field, list(recids)), verbose=9)
if not recids:
insert_into_missing(thisrecid, refnumber)
else:
remove_from_missing(refnumber)
if len(recids) > 1:
store_citation_warning('multiple-matches', refnumber)
msg = "Whoops: record '%d' report number value '%s' " \
"matches many records; taking only the first one. %s" % \
(thisrecid, refnumber, repr(recids))
write_message(msg, stream=sys.stderr)
for recid in list(recids)[:1]: # take only the first one
add_to_dicts(thisrecid, recid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
t2 = os.times()[4]
# Try to find references based on 999C5s
# e.g. Phys.Rev.Lett. 53 (1986) 2285
write_message("Phase 2: Journal references")
done = 0
for thisrecid, refs in references_info['journals'].iteritems():
step("Journal references", thisrecid, done,
len(references_info['journals']))
done += 1
for reference in (r for r in refs if r):
p = reference
field = 'journal'
# check reference value to see whether it is well formed:
if not re_CFG_JOURNAL_PUBINFO_STANDARD_FORM_REGEXP_CHECK.match(p):
store_citation_warning('not-well-formed', p)
msg = "Whoops, record '%d' reference value '%s' " \
"is not well formed; skipping it." % (thisrecid, p)
write_message(msg, stream=sys.stderr)
continue # skip this ill-formed value
recids = search_unit(p, field) - INTBITSET_OF_DELETED_RECORDS
write_message("These match searching %s in %s: %s" \
% (reference, field, list(recids)), verbose=9)
if not recids:
insert_into_missing(thisrecid, p)
else:
remove_from_missing(p)
if len(recids) > 1:
store_citation_warning('multiple-matches', p)
msg = "Whoops: record '%d' reference value '%s' " \
"matches many records; taking only the first one. %s" % \
(thisrecid, p, repr(recids))
write_message(msg, stream=sys.stderr)
for recid in list(recids)[:1]: # take only the first one
add_to_dicts(thisrecid, recid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
t3 = os.times()[4]
# Try to find references based on 999C5a
# e.g. 10.1007/BF03170733
write_message("Phase 3: DOI references")
done = 0
for thisrecid, refs in references_info['doi'].iteritems():
step("DOI references", thisrecid, done, len(references_info['doi']))
done += 1
for reference in (r for r in refs if r):
p = reference
field = 'doi'
recids = get_recids_matching_query(p, field)
write_message("These match searching %s in %s: %s" \
% (reference, field, list(recids)), verbose=9)
if not recids:
insert_into_missing(thisrecid, p)
else:
remove_from_missing(p)
if len(recids) > 1:
store_citation_warning('multiple-matches', p)
msg = "Whoops: record '%d' DOI value '%s' " \
"matches many records; taking only the first one. %s" % \
(thisrecid, p, repr(recids))
write_message(msg, stream=sys.stderr)
for recid in list(recids)[:1]: # take only the first one
add_to_dicts(thisrecid, recid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
t4 = os.times()[4]
# Search for stuff like CERN-TH-4859/87 in list of refs
write_message("Phase 4: report numbers catchup")
done = 0
for thisrecid, reportcodes in records_info['report-numbers'].iteritems():
step("Report numbers catchup", thisrecid, done,
len(records_info['report-numbers']))
done += 1
for reportcode in (r for r in reportcodes if r):
if reportcode.startswith('arXiv'):
std_reportcode = standardize_report_number(reportcode)
report_pattern = r'^%s( *\[[a-zA-Z.-]*\])?' % \
re.escape(std_reportcode)
recids = get_recids_matching_query(report_pattern,
tags['refs_report_number'],
'r')
else:
recids = get_recids_matching_query(reportcode,
tags['refs_report_number'],
'e')
for recid in recids:
add_to_dicts(recid, thisrecid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
# Find this record's pubinfo in other records' bibliography
write_message("Phase 5: journals catchup")
done = 0
t5 = os.times()[4]
for thisrecid, rec_journals in records_info['journals'].iteritems():
step("Journals catchup", thisrecid, done,
len(records_info['journals']))
done += 1
for journal in rec_journals:
journal = journal.replace("\"", "")
# Search the publication string like
# Phys. Lett., B 482 (2000) 417 in 999C5s
recids = search_unit(p=journal, f=tags['refs_journal'], m='a') \
- INTBITSET_OF_DELETED_RECORDS
write_message("These records match %s in %s: %s" \
% (journal, tags['refs_journal'], list(recids)), verbose=9)
for recid in recids:
add_to_dicts(recid, thisrecid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
write_message("Phase 6: DOI catchup")
done = 0
t6 = os.times()[4]
for thisrecid, dois in records_info['doi'].iteritems():
step("DOI catchup", thisrecid, done, len(records_info['doi']))
done += 1
for doi in dois:
# Search the publication string like
# Phys. Lett., B 482 (2000) 417 in 999C5a
recids = search_unit(p=doi, f=tags['refs_doi'], m='a') \
- INTBITSET_OF_DELETED_RECORDS
write_message("These records match %s in %s: %s" \
% (doi, tags['refs_doi'], list(recids)), verbose=9)
for recid in recids:
add_to_dicts(recid, thisrecid)
mesg = "done fully"
write_message(mesg)
task_update_progress(mesg)
write_message("Phase 7: remove empty lists from dicts")
# Remove empty lists in citation and reference
keys = citations.keys()
for k in keys:
if not citations[k]:
del citations[k]
keys = references.keys()
for k in keys:
if not references[k]:
del references[k]
if task_get_task_param('verbose') >= 3:
# Print only X first to prevent flood
write_message("citation_list (x is cited by y):")
write_message(dict(islice(citations.iteritems(), 10)))
write_message("size: %s" % len(citations))
write_message("reference_list (x cites y):")
write_message(dict(islice(references.iteritems(), 10)))
write_message("size: %s" % len(references))
write_message("selfcitedbydic (x is cited by y and one of the " \
"authors of x same as y's):")
write_message(dict(islice(selfcites.iteritems(), 10)))
write_message("size: %s" % len(selfcites))
write_message("selfdic (x cites y and one of the authors of x " \
"same as y's):")
write_message(dict(islice(selfrefs.iteritems(), 10)))
write_message("size: %s" % len(selfrefs))
write_message("authorcitdic (author is cited in recs):")
write_message(dict(islice(authorcites.iteritems(), 10)))
write_message("size: %s" % len(authorcites))
t7 = os.times()[4]
write_message("Execution time for analyzing the citation information " \
"generating the dictionary:")
write_message("... checking ref report numbers: %.2f sec" % (t2-t1))
write_message("... checking ref journals: %.2f sec" % (t3-t2))
write_message("... checking ref DOI: %.2f sec" % (t4-t3))
write_message("... checking rec report numbers: %.2f sec" % (t5-t4))
write_message("... checking rec journals: %.2f sec" % (t6-t5))
write_message("... checking rec DOI: %.2f sec" % (t7-t6))
write_message("... total time of ref_analyze: %.2f sec" % (t7-t1))
return citations_weight, citations, references, selfcites, \
selfrefs, authorcites
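The core invariant maintained by add_to_dicts in all the phases above is that the citation and reference dictionaries stay mirror images of each other, with the weight equal to the length of the citation list; a standalone sketch (add_citation and its parameters are illustrative names):

```python
def add_citation(citer, cited, citations, references, weights):
    # Keep citations[x] (who cites x) and references[x] (whom x cites)
    # as mirror images; skip self-citations and duplicates
    if citer == cited:
        return
    if citer not in citations.setdefault(cited, []):
        citations[cited].append(citer)
        weights[cited] = weights.get(cited, 0) + 1
    if cited not in references.setdefault(citer, []):
        references[citer].append(cited)
```

For example, feeding the pairs (8, 5), (9, 5), (8, 5) and (5, 5) leaves citations as {5: [8, 9]}, references as {8: [5], 9: [5]} and weights as {5: 2}: the duplicate and the self-citation are ignored.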
def store_dicts(dicts):
"""Insert the reference and citation list into the database"""
insert_into_cit_db(dicts['refs'], "reversedict")
insert_into_cit_db(dicts['cites'], "citationdict")
insert_into_cit_db(dicts['selfcites'], "selfcitedbydict")
insert_into_cit_db(dicts['selfrefs'], "selfcitdict")
def insert_into_cit_db(dic, name):
"""Stores citation dictionary in the database"""
ndate = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
s = serialize_via_marshal(dic)
write_message("size of %s %s" % (name, len(s)))
# check that this column really exists
run_sql("""REPLACE INTO rnkCITATIONDATA(object_name, object_value,
last_updated) VALUES (%s, %s, %s)""", (name, s, ndate))
def get_cit_dict(name):
"""get a named citation dict from the db"""
cdict = run_sql("""SELECT object_value FROM rnkCITATIONDATA
WHERE object_name = %s""", (name, ))
if cdict and cdict[0] and cdict[0][0]:
dict_from_db = deserialize_via_marshal(cdict[0][0])
else:
dict_from_db = {}
return dict_from_db
def get_initial_author_dict():
"""read author->citedinlist dict from the db"""
adict = {}
try:
ah = run_sql("SELECT aterm,hitlist FROM rnkAUTHORDATA")
for (a, h) in ah:
adict[a] = deserialize_via_marshal(h)
return adict
except:
register_exception(prefix="could not read rnkAUTHORDATA",
alert_admin=True)
return {}
def insert_into_missing(recid, report):
"""put the referingrecordnum-publicationstring into
the "we are missing these" table"""
if len(report) >= 255:
# Invalid report, it is too long
# and does not fit in the database column
# (currently varchar 255)
return
wasalready = run_sql("""SELECT id_bibrec
FROM rnkCITATIONDATAEXT
WHERE id_bibrec = %s
AND extcitepubinfo = %s""",
(recid, report))
if not wasalready:
run_sql("""INSERT INTO rnkCITATIONDATAEXT(id_bibrec, extcitepubinfo)
VALUES (%s,%s)""", (recid, report))
def remove_from_missing(report):
"""remove the recid-ref -pairs from the "missing" table for report x: prob
in the case ref got in our library collection"""
run_sql("""DELETE FROM rnkCITATIONDATAEXT
WHERE extcitepubinfo = %s""", (report,))
def create_analysis_tables():
"""temporary simple table + index"""
sql1 = "CREATE TABLE IF NOT EXISTS tmpcit (citer mediumint(10), \
cited mediumint(10)) TYPE=MyISAM"
sql2 = "CREATE UNIQUE INDEX citercited ON tmpcit(citer, cited)"
sql3 = "CREATE INDEX citer ON tmpcit(citer)"
sql4 = "CREATE INDEX cited ON tmpcit(cited)"
run_sql(sql1)
run_sql(sql2)
run_sql(sql3)
run_sql(sql4)
def write_citer_cited(citer, cited):
"""write an entry to tmp table"""
run_sql("INSERT INTO tmpcit(citer, cited) VALUES (%s,%s)", (citer, cited))
def print_missing(num):
"""
Print the contents of rnkCITATIONDATAEXT table containing external
records that were cited by NUM or more internal records.
NUM is by default taken from the -E command line option.
"""
if not num:
num = task_get_option("print-extcites")
write_message("Listing external papers cited by %i or more \
internal records:" % num)
res = run_sql("""SELECT COUNT(id_bibrec), extcitepubinfo
FROM rnkCITATIONDATAEXT
GROUP BY extcitepubinfo HAVING COUNT(id_bibrec) >= %s
ORDER BY COUNT(id_bibrec) DESC""", (num,))
for (cnt, brec) in res:
print str(cnt)+"\t"+brec
write_message("Listing done.")
def tagify(parsedtag):
"""aux auf to make '100__a' out of ['100','','','a']"""
tag = ""
for t in parsedtag:
if t == '':
t = '_'
tag += t
return tag
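The same transformation can be written as an equivalent one-liner: empty components become the '_' padding MARC tags use (a sketch, not a drop-in replacement for the function above):

```python
def tagify(parsedtag):
    """make '100__a' out of ['100', '', '', 'a'] (empty parts -> '_')"""
    return ''.join(t if t else '_' for t in parsedtag)
```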
def store_citation_warning(warning_type, cit_info):
r = run_sql("""SELECT 1 FROM rnkCITATIONDATAERR
WHERE type = %s
AND citinfo = %s""", (warning_type, cit_info))
if not r:
run_sql("""INSERT INTO rnkCITATIONDATAERR (type, citinfo)
VALUES (%s, %s)""", (warning_type, cit_info))
diff --git a/modules/bibrank/lib/bibrank_citerank_indexer.py b/modules/bibrank/lib/bibrank_citerank_indexer.py
index c06e4fac6..08d096ad3 100644
--- a/modules/bibrank/lib/bibrank_citerank_indexer.py
+++ b/modules/bibrank/lib/bibrank_citerank_indexer.py
@@ -1,891 +1,891 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""Implementation of different ranking methods based on
the citation graph:
- citation count/ time decayed citation count
- pagerank / pagerank with external citations
- time decayed pagerank
"""
# pylint: disable=E0611
import ConfigParser
from math import exp
import datetime
import time
import re
import sys
try:
from numpy import array, ones, zeros, int32, float32, sqrt, dot
import_numpy = 1
except ImportError:
import_numpy = 0
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
from invenio.dbquery import run_sql, serialize_via_marshal, \
deserialize_via_marshal
from invenio.bibtask import write_message
from invenio.config import CFG_ETCDIR
def get_citations_from_file(filename):
"""gets the citation data (who cites who) from a file and returns
- a dictionary of type x:{x1,x2..},
where x is cited by x1,x2..
- a dictionary of type a:{b},
where recid 'a' is associated with an index 'b' """
cit = {}
dict_of_ids = {}
count = 0
try:
citation_file = open(filename, "r")
except StandardError:
write_message("Cannot find file: %s" % filename, sys.stderr)
raise StandardError
for line in citation_file:
tokens = line.strip().split()
recid_cites = int(tokens[0])
recid_cited = int(tokens[1])
if recid_cited not in cit:
cit[recid_cited] = []
#without this, duplicates might be introduced
if recid_cites not in cit[recid_cited] and recid_cites != recid_cited:
cit[recid_cited].append(recid_cites)
if recid_cites not in dict_of_ids:
dict_of_ids[recid_cites] = count
count += 1
if recid_cited not in dict_of_ids:
dict_of_ids[recid_cited] = count
count += 1
citation_file.close()
write_message("Citation data collected from file: %s" %filename, verbose=2)
write_message("Ids and recids correspondence: %s" \
%str(dict_of_ids), verbose=9)
write_message("Citations: %s" % str(cit), verbose=9)
return cit, dict_of_ids
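The parsing loop above can be exercised without a file. This is a hypothetical in-memory analogue of `get_citations_from_file`, assuming the same "citing cited" line format; it shows how duplicates and self-citations are filtered and how recids are mapped to dense indices:

```python
def parse_citation_lines(lines):
    # Same structure as get_citations_from_file, but reading from an
    # iterable of "citing cited" pairs instead of an open file.
    cit = {}
    dict_of_ids = {}
    count = 0
    for line in lines:
        recid_cites, recid_cited = [int(t) for t in line.strip().split()]
        cit.setdefault(recid_cited, [])
        # skip duplicates and self-citations, as the original loop does
        if recid_cites not in cit[recid_cited] and recid_cites != recid_cited:
            cit[recid_cited].append(recid_cites)
        for recid in (recid_cites, recid_cited):
            if recid not in dict_of_ids:
                dict_of_ids[recid] = count
                count += 1
    return cit, dict_of_ids

cit, ids = parse_citation_lines(["1 2", "3 2", "1 2"])
# cit == {2: [1, 3]}  (the duplicate "1 2" line is ignored)
```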
def get_citations_from_db():
"""gets the citation data (who cites who) from the rnkCITATIONDATA table,
and returns:
-a dictionary of type x:{x1,x2..}, where x is cited by x1,x2..
-a dict of type a:{b} where recid 'a' is associated with an index 'b'"""
dict_of_ids = {}
count = 0
query = "select object_value from rnkCITATIONDATA \
where object_name = 'citationdict'"
cit_compressed = run_sql(query)
cit = []
if cit_compressed and cit_compressed[0] and cit_compressed[0][0]:
cit = deserialize_via_marshal(cit_compressed[0][0])
if cit:
for item in cit:
#check for duplicates in citation dictionary
cit[item] = set(cit[item])
if item in cit[item]:
cit[item].remove(item)
if item not in dict_of_ids:
dict_of_ids[item] = count
count += 1
for value in cit[item]:
if value not in dict_of_ids:
dict_of_ids[value] = count
count += 1
write_message("Citation data collected\
from rnkCITATIONDATA", verbose=2)
write_message("Ids and recids correspondence: %s" \
% str(dict_of_ids), verbose=9)
write_message("Citations: %s" % str(cit), verbose=9)
return cit, dict_of_ids
else:
write_message("Error while extracting citation data \
from rnkCITATIONDATA table", verbose=1)
else:
write_message("Error while extracting citation data \
from rnkCITATIONDATA table", verbose=1)
return {}, {}
def construct_ref_array(cit, dict_of_ids, len_):
"""returns an array with the number of references that each recid has """
ref = array((), int32)
ref = zeros(len_, int32)
for key in cit:
for value in cit[key]:
ref[dict_of_ids[value]] += 1
write_message("Number of references: %s" %str(ref), verbose=9)
write_message("Finished computing total number \
of references for each paper.", verbose=5)
return ref
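Because `cit` maps each cited paper to its citers, counting how often a recid appears as a citer yields its number of outgoing references. A plain-list sketch of the same counting (numpy-free, for illustration):

```python
def count_references(cit, dict_of_ids):
    # Plain-list analogue of construct_ref_array: ref[i] is the number
    # of papers that paper i cites (its outgoing references).
    ref = [0] * len(dict_of_ids)
    for cited in cit:
        for citer in cit[cited]:
            ref[dict_of_ids[citer]] += 1
    return ref

# Paper 1 cites both 2 and 3, so its reference count is 2.
ref = count_references({2: [1], 3: [1]}, {1: 0, 2: 1, 3: 2})
# ref == [2, 0, 0]
```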
def get_external_links_from_file(filename, ref, dict_of_ids):
"""returns a dictionary containing the number of
external links for each recid
external link=citation that is not in our database """
ext_links = {}
#format: ext_links[dict_of_ids[recid]]=number of total external links
try:
external_file = open(filename, "r")
except StandardError:
write_message("Cannot find file: %s" % filename, sys.stderr)
raise StandardError
for line in external_file:
tokens = line.strip().split()
recid = int(tokens[0])
nr_of_external = int(tokens[1])
ext_links[dict_of_ids[recid]] = nr_of_external - ref[dict_of_ids[recid]]
if ext_links[dict_of_ids[recid]] < 0:
ext_links[dict_of_ids[recid]] = 0
external_file.close()
write_message("External link information extracted", verbose=2)
return ext_links
def get_external_links_from_db_old(ref, dict_of_ids, reference_indicator):
"""returns a dictionary containing the number of
external links for each recid
external link=citation that is not in our database """
ext_links = {}
reference_tag_regex = reference_indicator + "[a-z]"
for recid in dict_of_ids:
query = "select COUNT(DISTINCT field_number) from bibrec_bib99x \
where id_bibrec='%s' and id_bibxxx in \
(select id from bib99x where tag RLIKE '%s');" \
% (str(recid), reference_tag_regex)
result_set = run_sql(query)
if result_set:
total_links = int(result_set[0][0])
internal_links = ref[dict_of_ids[recid]]
ext_links[dict_of_ids[recid]] = total_links - internal_links
if ext_links[dict_of_ids[recid]] < 0:
ext_links[dict_of_ids[recid]] = 0
else:
ext_links[dict_of_ids[recid]] = 0
write_message("External link information extracted", verbose=2)
write_message("External links: %s" % str(ext_links), verbose=9)
return ext_links
def get_external_links_from_db(ref, dict_of_ids, reference_indicator):
"""returns a dictionary containing the number of
external links for each recid
external link=citation that is not in our database """
ext_links = {}
dict_all_ref = {}
for recid in dict_of_ids:
dict_all_ref[recid] = 0
ext_links[dict_of_ids[recid]] = 0
reference_db_id = reference_indicator[0:2]
reference_tag_regex = reference_indicator + "[a-z]"
tag_list = run_sql("select id from bib" + reference_db_id + \
"x where tag RLIKE %s", (reference_tag_regex, ))
tag_set = set()
for tag in tag_list:
tag_set.add(tag[0])
ref_list = run_sql("select id_bibrec, id_bibxxx, field_number from \
bibrec_bib" + reference_db_id + "x group by \
id_bibrec, field_number")
for item in ref_list:
recid = int(item[0])
id_bib = int(item[1])
if recid in dict_of_ids and id_bib in tag_set:
dict_all_ref[recid] += 1
for recid in dict_of_ids:
total_links = dict_all_ref[recid]
internal_links = ref[dict_of_ids[recid]]
ext_links[dict_of_ids[recid]] = total_links - internal_links
if ext_links[dict_of_ids[recid]] < 0:
ext_links[dict_of_ids[recid]] = 0
write_message("External link information extracted", verbose=2)
write_message("External links: %s" % str(ext_links), verbose=9)
return ext_links
def avg_ext_links_with_0(ext_links):
"""returns the average number of external links per paper
including in the counting the papers with 0 external links"""
total = 0.0
for item in ext_links:
total += ext_links[item]
avg_ext = total/len(ext_links)
write_message("The average number of external links per paper (including \
papers with 0 external links) is: %s" % str(avg_ext), verbose=3)
return avg_ext
def avg_ext_links_without_0(ext_links):
"""returns the average number of external links per paper
excluding in the counting the papers with 0 external links"""
count = 0.0
total = 0.0
for item in ext_links:
if ext_links[item] != 0:
count += 1
total += ext_links[item]
avg_ext = total/count
write_message("The average number of external links per paper (excluding \
papers with 0 external links) is: %s" % str(avg_ext), verbose=3)
return avg_ext
def leaves(ref):
"""returns the number of papers that do not cite any other paper"""
nr_of_leaves = 0
for i in ref:
if i == 0:
nr_of_leaves += 1
write_message("The number of papers that do not cite \
any other papers: %s" % str(nr_of_leaves), verbose=3)
return nr_of_leaves
def get_dates_from_file(filename, dict_of_ids):
"""Returns the year of the publication for each paper.
In case the year is not in the db, the year of the submission is taken"""
dates = {}
# the format is: dates[dict_of_ids[recid]] = year
try:
dates_file = open(filename, "r")
except StandardError:
write_message("Cannot find file: %s" % filename, sys.stderr)
raise StandardError
for line in dates_file:
tokens = line.strip().split()
recid = int(tokens[0])
year = int(tokens[1])
dates[dict_of_ids[recid]] = year
dates_file.close()
write_message("Dates extracted", verbose=2)
write_message("Dates dictionary %s" % str(dates), verbose=9)
return dates
def get_dates_from_db(dict_of_ids, publication_year_tag, creation_date_tag):
"""Returns the year of the publication for each paper.
In case the year is not in the db, the year of the submission is taken"""
current_year = int(datetime.datetime.now().strftime("%Y"))
publication_year_db_id = publication_year_tag[0:2]
creation_date_db_id = creation_date_tag[0:2]
total = 0
count = 0
dict_of_dates = {}
for recid in dict_of_ids:
dict_of_dates[recid] = 0
date_list = run_sql("select id, tag, value from bib" + \
publication_year_db_id + "x where tag=%s", \
(publication_year_tag, ))
date_dict = {}
for item in date_list:
date_dict[int(item[0])] = item[2]
pattern = re.compile('.*(\d{4}).*')
date_list = run_sql("select id_bibrec, id_bibxxx, field_number \
from bibrec_bib" + publication_year_db_id +"x")
for item in date_list:
recid = int(item[0])
id_ = int(item[1])
if id_ in date_dict and recid in dict_of_dates:
reg = pattern.match(date_dict[id_])
if reg:
date = int(reg.group(1))
if date > 1000 and date <= current_year:
dict_of_dates[recid] = date
total += date
count += 1
not_covered = []
for recid in dict_of_dates:
if dict_of_dates[recid] == 0:
not_covered.append(recid)
date_list = run_sql("select id, tag, value from bib" + \
creation_date_db_id + "x where tag=%s", \
(creation_date_tag, ))
date_dict = {}
for item in date_list:
date_dict[int(item[0])] = item[2]
date_list = run_sql("select id_bibrec, id_bibxxx, field_number \
from bibrec_bib" + creation_date_db_id + "x")
for item in date_list:
recid = int(item[0])
id_ = int(item[1])
if id_ in date_dict and recid in not_covered:
date = int(str(date_dict[id_])[0:4])
if date > 1000 and date <= current_year:
dict_of_dates[recid] = date
total += date
count += 1
dates = {}
med = total/count
for recid in dict_of_dates:
if dict_of_dates[recid] == 0:
dates[dict_of_ids[recid]] = med
else:
dates[dict_of_ids[recid]] = dict_of_dates[recid]
write_message("Dates extracted", verbose=2)
write_message("Dates dictionary %s" % str(dates), verbose=9)
return dates
def construct_sparse_matrix(cit, ref, dict_of_ids, len_, damping_factor):
"""returns several structures needed in the calculation
of the PAGERANK method; using these structures, we don't need
to keep the full matrix in memory"""
sparse = {}
for item in cit:
for value in cit[item]:
sparse[(dict_of_ids[item], dict_of_ids[value])] = \
damping_factor * 1.0/ref[dict_of_ids[value]]
semi_sparse = []
for j in range(len_):
if ref[j] == 0:
semi_sparse.append(j)
semi_sparse_coeficient = damping_factor/len_
#zero_coeficient = (1-damping_factor)/len_
write_message("Sparse information calculated", verbose=3)
return sparse, semi_sparse, semi_sparse_coeficient
def construct_sparse_matrix_ext(cit, ref, ext_links, dict_of_ids, alpha, beta):
"""if x doesn't cite anyone: cites everyone : 1/len_ -- should be used!
returns several structures needed in the calculation
of the PAGERANK_EXT method"""
len_ = len(dict_of_ids)
sparse = {}
semi_sparse = {}
sparse[0, 0] = 1.0 - alpha
for j in range(len_):
sparse[j+1, 0] = alpha/(len_)
if j not in ext_links:
sparse[0, j+1] = beta/(len_ + beta)
else:
if ext_links[j] == 0:
sparse[0, j+1] = beta/(len_ + beta)
else:
aux = beta * ext_links[j]
if ref[j] == 0:
sparse[0, j+1] = aux/(aux + len_)
else:
sparse[0, j+1] = aux/(aux + ref[j])
if ref[j] == 0:
semi_sparse[j+1] = (1.0 - sparse[0, j + 1])/len_
for item in cit:
for value in cit[item]:
sparse[(dict_of_ids[item] + 1, dict_of_ids[value] + 1)] = \
(1.0 - sparse[0, dict_of_ids[value] + 1])/ref[dict_of_ids[value]]
#for i in range(len_ + 1):
# a = ""
# for j in range (len_ + 1):
# if (i,j) in sparse:
# a += str(sparse[(i,j)]) + "\t"
# else:
# a += "0\t"
# print a
#print semi_sparse
write_message("Sparse information calculated", verbose=3)
return sparse, semi_sparse
def construct_sparse_matrix_time(cit, ref, dict_of_ids, \
damping_factor, date_coef):
"""returns several structures needed in the calculation of the PAGERANK_time
method; using these structures,
we don't need to keep the full matrix in memory"""
len_ = len(dict_of_ids)
sparse = {}
for item in cit:
for value in cit[item]:
sparse[(dict_of_ids[item], dict_of_ids[value])] = damping_factor * \
date_coef[dict_of_ids[value]]/ref[dict_of_ids[value]]
semi_sparse = []
for j in range(len_):
if ref[j] == 0:
semi_sparse.append(j)
semi_sparse_coeficient = damping_factor/len_
#zero_coeficient = (1-damping_factor)/len_
write_message("Sparse information calculated", verbose=3)
return sparse, semi_sparse, semi_sparse_coeficient
def statistics_on_sparse(sparse):
"""returns the number of papers that cite themselves"""
count_diag = 0
for (i, j) in sparse.keys():
if i == j:
count_diag += 1
write_message("The number of papers that cite themselves: %s" % \
str(count_diag), verbose=3)
return count_diag
def pagerank(conv_threshold, check_point, len_, sparse, \
semi_sparse, semi_sparse_coef):
"""the core function of the PAGERANK method
returns an array with the ranks corresponding to each recid"""
weights_old = ones((len_), float32) # initial weights
weights_new = array((), float32)
converged = False
nr_of_check_points = 0
difference = len_
while not converged:
nr_of_check_points += 1
for step in (range(check_point)):
weights_new = zeros((len_), float32)
for (i, j) in sparse.keys():
weights_new[i] += sparse[(i, j)]*weights_old[j]
semi_total = 0.0
for j in semi_sparse:
semi_total += weights_old[j]
weights_new = weights_new + semi_sparse_coef * semi_total + \
(1.0/len_ - semi_sparse_coef) * sum(weights_old)
if step == check_point - 1:
diff = weights_new - weights_old
difference = sqrt(dot(diff, diff))/len_
write_message("Finished step: %s, %s " \
%(str(check_point*(nr_of_check_points-1) + step), \
str(difference)), verbose=5)
weights_old = weights_new.copy()
converged = (difference < conv_threshold)
write_message("PageRank calculated for all recids finished in %s steps. \
The threshold was %s" % (str(nr_of_check_points), str(difference)),\
verbose=2)
return weights_old
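The loop above is a power iteration on the damped citation matrix: `sparse[(i, j)] = d / outdegree(j)`, dangling papers (no outgoing links) spread their weight uniformly, and `(1-d)/n` of the total mass teleports everywhere. A compact standalone sketch of the same fixed point, without check-pointing and in plain Python (an illustration, not the module's implementation):

```python
def simple_pagerank(sparse, dangling, n, d=0.85, tol=1e-10):
    # sparse[(i, j)] = d / outdegree(j); dangling lists nodes with no
    # outgoing links, whose weight is spread uniformly over all nodes.
    w = [1.0] * n
    while True:
        new = [0.0] * n
        for (i, j), v in sparse.items():
            new[i] += v * w[j]
        dangling_mass = d * sum(w[j] for j in dangling) / n
        teleport = (1.0 - d) * sum(w) / n
        new = [x + dangling_mass + teleport for x in new]
        diff = sum((a - b) ** 2 for a, b in zip(new, w)) ** 0.5 / n
        w = new
        if diff < tol:
            return w

# Two papers citing each other: the graph is symmetric, so equal ranks.
ranks = simple_pagerank({(0, 1): 0.85, (1, 0): 0.85}, [], 2)
```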
def pagerank_ext(conv_threshold, check_point, len_, sparse, semi_sparse):
"""the core function of the PAGERANK_EXT method
returns an array with the ranks corresponding to each recid"""
weights_old = array((), float32)
weights_old = ones((len_), float32)
weights_new = array((), float32)
converged = False
nr_of_check_points = 0
difference = len_
while not converged:
nr_of_check_points += 1
for step in (range(check_point)):
weights_new = zeros((len_), float32)
for (i, j) in sparse.keys():
weights_new[i] += sparse[(i, j)]*weights_old[j]
total_sum = 0.0
for j in semi_sparse:
total_sum += semi_sparse[j]*weights_old[j]
weights_new[1:len_] = weights_new[1:len_] + total_sum
if step == check_point - 1:
diff = weights_new - weights_old
difference = sqrt(dot(diff, diff))/len_
write_message("Finished step: %s, %s " \
% (str(check_point*(nr_of_check_points-1) + step), \
str(difference)), verbose=5)
weights_old = weights_new.copy()
converged = (difference < conv_threshold)
write_message("PageRank calculated for all recids finished in %s steps. \
The threshold was %s" % (str(nr_of_check_points), \
str(difference)), verbose=2)
#return weights_old[1:len_]/(len_ - weights_old[0])
return weights_old[1:len_]
def pagerank_time(conv_threshold, check_point, len_, \
sparse, semi_sparse, semi_sparse_coeficient, date_coef):
"""the core function of the PAGERANK_TIME method: pageRank + time decay
returns an array with the ranks corresponding to each recid"""
weights_old = array((), float32)
weights_old = ones((len_), float32) # initial weights
weights_new = array((), float32)
converged = False
nr_of_check_points = 0
difference = len_
while not converged:
nr_of_check_points += 1
for step in (range(check_point)):
weights_new = zeros((len_), float32)
for (i, j) in sparse.keys():
weights_new[i] += sparse[(i, j)]*weights_old[j]
semi_total = 0.0
for j in semi_sparse:
semi_total += weights_old[j]*date_coef[j]
zero_total = 0.0
for i in range(len_):
zero_total += weights_old[i]*date_coef[i]
#dates = array(date_coef.keys())
#zero_total = dot(weights_old, dates)
weights_new = weights_new + semi_sparse_coeficient * semi_total + \
(1.0/len_ - semi_sparse_coeficient) * zero_total
if step == check_point - 1:
diff = weights_new - weights_old
difference = sqrt(dot(diff, diff))/len_
write_message("Finished step: %s, %s " \
% (str(check_point*(nr_of_check_points-1) + step), \
str(difference)), verbose=5)
weights_old = weights_new.copy()
converged = (difference < conv_threshold)
write_message("PageRank calculated for all recids finished in %s steps.\
The threshold was %s" % (str(nr_of_check_points), \
str(difference)), verbose=2)
return weights_old
def citation_rank_time(cit, dict_of_ids, date_coef, dates, decimals):
"""returns a dictionary recid:weight based on the total number of
citations as function of time"""
dict_of_ranks = {}
for key in dict_of_ids:
if key in cit:
dict_of_ranks[key] = 0
for recid in cit[key]:
dict_of_ranks[key] += date_coef[dict_of_ids[recid]]
dict_of_ranks[key] = round(dict_of_ranks[key], decimals) \
+ dates[dict_of_ids[key]]* pow(10, 0-4-decimals)
else:
dict_of_ranks[key] = dates[dict_of_ids[key]]* pow(10, 0-4-decimals)
write_message("Citation rank calculated", verbose=2)
return dict_of_ranks
def get_ranks(weights, dict_of_ids, mult, dates, decimals):
"""returns a dictionary recid:value, where value is the weight of the
recid paper; the second order is the reverse time order,
from recent to past"""
dict_of_ranks = {}
for item in dict_of_ids:
dict_of_ranks[item] = round(weights[dict_of_ids[item]]* mult, decimals)\
+ dates[dict_of_ids[item]]* pow(10, 0-4-decimals)
#dict_of_ranks[item] = weights[dict_of_ids[item]]
return dict_of_ranks
def sort_weights(dict_of_ranks):
"""sorts the recids based on weights(first order)
and on dates(second order)"""
ranks_by_citations = sorted(dict_of_ranks.keys(), lambda x, y: \
cmp(dict_of_ranks[y], dict_of_ranks[x]))
return ranks_by_citations
def normalize_weights(dict_of_ranks):
"""the weights should be normalized to 100, so they won't be
different from the weights from other ranking methods"""
max_weight = 0.0
for recid in dict_of_ranks:
weight = dict_of_ranks[recid]
if weight > max_weight:
max_weight = weight
for recid in dict_of_ranks:
dict_of_ranks[recid] = round(dict_of_ranks[recid] * 100.0/max_weight, 3)
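The in-place scaling is simple: divide by the maximum weight and multiply by 100, rounding to three decimals. A standalone sketch for illustration:

```python
def normalize_to_100(ranks):
    # Same scaling as normalize_weights: the maximum weight becomes 100
    # and all other weights are rescaled proportionally, in place.
    max_weight = max(ranks.values())
    for recid in ranks:
        ranks[recid] = round(ranks[recid] * 100.0 / max_weight, 3)

ranks = {1: 0.5, 2: 0.25, 3: 0.125}
normalize_to_100(ranks)
# ranks == {1: 100.0, 2: 50.0, 3: 25.0}
```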
def write_first_ranks_to_file(ranks_by_citations, dict_of_ranks, \
nr_of_ranks, filename):
"""Writes the first n results of the ranking method into a file"""
try:
ranks_file = open(filename, "w")
except StandardError:
write_message("Problems with file: %s" % filename, sys.stderr)
raise StandardError
for i in range(nr_of_ranks):
ranks_file.write(str(i+1) + "\t" + str(ranks_by_citations[i]) + \
"\t" + str(dict_of_ranks[ranks_by_citations[i]]) + "\n")
ranks_file.close()
write_message("The first %s pairs recid:rank in the ranking order \
are written into this file: %s" % (nr_of_ranks, filename), verbose=2)
def del_rank_method_data(rank_method_code):
"""Delete the data for a rank method from rnkMETHODDATA table"""
id_ = run_sql("SELECT id from rnkMETHOD where name=%s", (rank_method_code, ))
run_sql("DELETE FROM rnkMETHODDATA WHERE id_rnkMETHOD=%s", (id_[0][0], ))
def into_db(dict_of_ranks, rank_method_code):
"""Writes into the rnkMETHODDATA table the ranking results"""
method_id = run_sql("SELECT id from rnkMETHOD where name=%s", \
(rank_method_code, ))
del_rank_method_data(rank_method_code)
serialized_data = serialize_via_marshal(dict_of_ranks)
method_id_str = str(method_id[0][0])
run_sql("INSERT INTO rnkMETHODDATA(id_rnkMETHOD, relevance_data) \
VALUES(%s, %s) ", (method_id_str, serialized_data, ))
date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
run_sql("UPDATE rnkMETHOD SET last_updated=%s WHERE name=%s", \
(date, rank_method_code))
write_message("Finished writing the ranks into rnkMETHOD table", verbose=5)
def run_pagerank(cit, dict_of_ids, len_, ref, damping_factor, \
conv_threshold, check_point, dates):
"""returns the final form of the ranks when using pagerank method"""
write_message("Running the PageRank method", verbose=5)
sparse, semi_sparse, semi_sparse_coeficient = \
construct_sparse_matrix(cit, ref, dict_of_ids, len_, damping_factor)
weights = pagerank(conv_threshold, check_point, len_, \
sparse, semi_sparse, semi_sparse_coeficient)
dict_of_ranks = get_ranks(weights, dict_of_ids, 1, dates, 2)
return dict_of_ranks
def run_pagerank_ext(cit, dict_of_ids, ref, ext_links, \
conv_threshold, check_point, alpha, beta, dates):
"""returns the final form of the ranks when using pagerank_ext method"""
write_message("Running the PageRank with external links method", verbose=5)
len_ = len(dict_of_ids)
sparse, semi_sparse = construct_sparse_matrix_ext(cit, ref, \
ext_links, dict_of_ids, alpha, beta)
weights = pagerank_ext(conv_threshold, check_point, \
len_ + 1, sparse, semi_sparse)
dict_of_ranks = get_ranks(weights, dict_of_ids, 1, dates, 2)
return dict_of_ranks
def run_pagerank_time(cit, dict_of_ids, len_, ref, damping_factor, \
conv_threshold, check_point, date_coef, dates):
"""returns the final form of the ranks when using
pagerank + time decay method"""
write_message("Running the PageRank_time method", verbose=5)
sparse, semi_sparse, semi_sparse_coeficient = \
construct_sparse_matrix_time(cit, ref, dict_of_ids, \
damping_factor, date_coef)
weights = pagerank_time(conv_threshold, check_point, len_, \
sparse, semi_sparse, semi_sparse_coeficient, date_coef)
dict_of_ranks = get_ranks(weights, dict_of_ids, 100000, dates, 2)
return dict_of_ranks
def run_citation_rank_time(cit, dict_of_ids, date_coef, dates):
"""returns the final form of the ranks when using citation count
as function of time method"""
write_message("Running the citation rank with time decay method", verbose=5)
dict_of_ranks = citation_rank_time(cit, dict_of_ids, date_coef, dates, 2)
return dict_of_ranks
def spearman_rank_correlation_coef(rank1, rank2, len_):
"""rank1 and rank2 are arrays containing the recids in the ranking order
returns the correlation coefficient (-1 <= c <= 1) between 2 rankings;
the closer c is to 1, the more correlated the two ranking methods are"""
total = 0
for i in range(len_):
rank_value = rank2.index(rank1[i])
total += (i - rank_value)*(i - rank_value)
return 1 - (6.0 * total) / (len_*(len_*len_ - 1))
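This is the classic Spearman formula `1 - 6 * sum(d_i^2) / (n * (n^2 - 1))`, where `d_i` is the difference between a recid's position in the two rankings. A standalone sketch with the two extreme cases:

```python
def spearman(rank1, rank2):
    # Spearman's rho from two orderings of the same recids, matching
    # spearman_rank_correlation_coef above (quadratic rank2.index lookup).
    n = len(rank1)
    total = sum((i - rank2.index(recid)) ** 2
                for i, recid in enumerate(rank1))
    return 1 - (6.0 * total) / (n * (n * n - 1))

# Identical orderings give +1; a fully reversed ordering gives -1.
same = spearman([10, 20, 30], [10, 20, 30])
reversed_ = spearman([10, 20, 30], [30, 20, 10])
```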
def remove_loops(cit, dates, dict_of_ids):
"""when using time decay, new papers that are part of a loop
are accumulating a lot of fake weight"""
new_cit = {}
for recid in cit:
new_cit[recid] = []
for cited_by in cit[recid]:
if dates[dict_of_ids[cited_by]] >= dates[dict_of_ids[recid]]:
if cited_by in cit:
if recid not in cit[cited_by]:
new_cit[recid].append(cited_by)
else:
write_message("Loop removed: %s <-> %s" \
%(cited_by, recid), verbose=9)
else:
new_cit[recid].append(cited_by)
else:
write_message("Loop removed: %s <-> %s" \
%(cited_by, recid), verbose=9)
write_message("Simple loops removed", verbose=5)
return new_cit
def calculate_time_weights(len_, time_decay, dates):
"""calculates the time coeficients for each paper"""
current_year = int(datetime.datetime.now().strftime("%Y"))
date_coef = {}
for j in range(len_):
date_coef[j] = exp(time_decay*(dates[j] - current_year))
write_message("Time weights calculated", verbose=5)
write_message("Time weights: %s" % str(date_coef), verbose=9)
return date_coef
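The coefficient is `exp(time_decay * (year - current_year))`: 1.0 for a paper from the current year, decaying exponentially with age. A standalone sketch, assuming `dates` maps dense paper indices to publication years as in the module:

```python
from math import exp
import datetime

def time_weights(dates, time_decay):
    # exp(time_decay * (year - current_year)) gives 1.0 for the current
    # year and exponentially smaller weights for older papers, as in
    # calculate_time_weights.
    current_year = datetime.datetime.now().year
    return dict((j, exp(time_decay * (dates[j] - current_year)))
                for j in dates)

# A paper from the current year keeps full weight; one 10 years old
# with time_decay=0.2 is weighted exp(-2.0).
current = datetime.datetime.now().year
coef = time_weights({0: current, 1: current - 10}, 0.2)
```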
def get_dates(function, config, dict_of_ids):
"""returns a dictionary containing the year of
publishing for each paper"""
try:
file_for_dates = config.get(function, "file_with_dates")
dates = get_dates_from_file(file_for_dates, dict_of_ids)
except (ConfigParser.NoOptionError, StandardError), err:
write_message("If you want to read the dates from file, set up the \
'file_with_dates' variable in the config file [%s]" % err, verbose=3)
try:
publication_year_tag = config.get(function, "publication_year_tag")
dummy = int(publication_year_tag[0:3])
except (ConfigParser.NoOptionError, StandardError):
write_message("You need to set up correctly the publication_year_tag \
in the cfg file", sys.stderr)
raise Exception
try:
creation_date_tag = config.get(function, "creation_date_tag")
dummy = int(creation_date_tag[0:3])
except (ConfigParser.NoOptionError, StandardError):
write_message("You need to set up correctly the creation_date_tag \
in the cfg file", sys.stderr)
raise Exception
dates = get_dates_from_db(dict_of_ids, publication_year_tag, \
creation_date_tag)
return dates
def citerank(rank_method_code):
"""new ranking method based on the citation graph"""
write_message("Running rank method: %s" % rank_method_code, verbose=0)
if not import_numpy:
write_message('The numpy package could not be imported. \
This package is compulsory for running the citerank methods.')
return
try:
file_ = CFG_ETCDIR + "/bibrank/" + rank_method_code + ".cfg"
config = ConfigParser.ConfigParser()
config.readfp(open(file_))
except StandardError:
write_message("Cannot find configuration file: %s" % file_, sys.stderr)
raise StandardError
# the file for citations needs to have the following format:
#each line needs to be x[tab]y, where x cites y; x,y are recids
function = config.get("rank_method", "function")
try:
file_for_citations = config.get(function, "file_with_citations")
cit, dict_of_ids = get_citations_from_file(file_for_citations)
except (ConfigParser.NoOptionError, StandardError), err:
write_message("If you want to read the citation data from file, set up \
the file_with_citations parameter in the config file [%s]" % err, verbose=2)
cit, dict_of_ids = get_citations_from_db()
len_ = len(dict_of_ids.keys())
write_message("Number of nodes(papers) to rank : %s" % str(len_), verbose=3)
if len_ == 0:
- write_message("Error: No citations to read!", sys.stderr)
- raise Exception
+ write_message("No citation data found, nothing to be done.")
+ return
try:
method = config.get(function, "citerank_method")
except ConfigParser.NoOptionError, err:
write_message("Exception: %s " %err, sys.stderr)
raise Exception
write_message("Running %s method." % method, verbose=2)
dates = get_dates(function, config, dict_of_ids)
if method == "citation_time":
try:
time_decay = float(config.get(function, "time_decay"))
except (ConfigParser.NoOptionError, ValueError), err:
write_message("Exception: %s" % err, sys.stderr)
raise Exception
date_coef = calculate_time_weights(len_, time_decay, dates)
#cit = remove_loops(cit, dates, dict_of_ids)
dict_of_ranks = \
run_citation_rank_time(cit, dict_of_ids, date_coef, dates)
else:
try:
conv_threshold = float(config.get(function, "conv_threshold"))
check_point = int(config.get(function, "check_point"))
damping_factor = float(config.get(function, "damping_factor"))
write_message("Parameters: d = %s, conv_threshold = %s, \
check_point = %s" %(str(damping_factor), \
str(conv_threshold), str(check_point)), verbose=5)
except (ConfigParser.NoOptionError, StandardError), err:
write_message("Exception: %s" % err, sys.stderr)
raise Exception
if method == "pagerank_classic":
ref = construct_ref_array(cit, dict_of_ids, len_)
use_ext_cit = ""
try:
use_ext_cit = config.get(function, "use_external_citations")
write_message("Pagerank will use external citations: %s" \
%str(use_ext_cit), verbose=5)
except (ConfigParser.NoOptionError, StandardError), err:
write_message("%s" % err, verbose=2)
if use_ext_cit == "yes":
try:
ext_citation_file = config.get(function, "ext_citation_file")
ext_links = get_external_links_from_file(ext_citation_file,
ref, dict_of_ids)
except (ConfigParser.NoOptionError, StandardError):
write_message("If you want to read the external citation \
data from file set up the ext_citation_file parameter in the config. file", \
verbose=3)
try:
reference_tag = config.get(function, "ext_reference_tag")
dummy = int(reference_tag[0:3])
except (ConfigParser.NoOptionError, StandardError):
write_message("You need to set up correctly the \
reference_tag in the cfg file", sys.stderr)
raise Exception
ext_links = get_external_links_from_db(ref, \
dict_of_ids, reference_tag)
avg = avg_ext_links_with_0(ext_links)
if avg < 1:
write_message("This method can't be run. There is not \
enough information about the external citations. Hint: check the reference tag", \
sys.stderr)
raise Exception
avg_ext_links_without_0(ext_links)
try:
alpha = float(config.get(function, "ext_alpha"))
beta = float(config.get(function, "ext_beta"))
except (ConfigParser.NoOptionError, StandardError), err:
write_message("Exception: %s" % err, sys.stderr)
raise Exception
dict_of_ranks = run_pagerank_ext(cit, dict_of_ids, ref, \
ext_links, conv_threshold, check_point, alpha, beta, dates)
else:
dict_of_ranks = run_pagerank(cit, dict_of_ids, len_, ref, \
damping_factor, conv_threshold, check_point, dates)
elif method == "pagerank_time":
try:
time_decay = float(config.get(function, "time_decay"))
write_message("Parameter: time_decay = %s" \
%str(time_decay), verbose=5)
except (ConfigParser.NoOptionError, StandardError), err:
write_message("Exception: %s" % err, sys.stderr)
raise Exception
date_coef = calculate_time_weights(len_, time_decay, dates)
cit = remove_loops(cit, dates, dict_of_ids)
ref = construct_ref_array(cit, dict_of_ids, len_)
dict_of_ranks = run_pagerank_time(cit, dict_of_ids, len_, ref, \
damping_factor, conv_threshold, check_point, date_coef, dates)
else:
write_message("Error: Unknown ranking method. \
Please check the citerank_method parameter in the config file.", sys.stderr)
raise Exception
try:
filename_ranks = config.get(function, "output_ranks_to_filename")
max_ranks = config.get(function, "output_rank_limit")
if not max_ranks.isdigit():
max_ranks = len_
else:
max_ranks = int(max_ranks)
if max_ranks > len_:
max_ranks = len_
ranks = sort_weights(dict_of_ranks)
write_message("Ranks: %s" % str(ranks), verbose=9)
write_first_ranks_to_file(ranks, dict_of_ranks, \
max_ranks, filename_ranks)
except (ConfigParser.NoOptionError, StandardError):
write_message("If you want the ranks to be printed in a file you have \
to set output_ranks_to_filename and output_rank_limit \
parameters in the configuration file", verbose=3)
normalize_weights(dict_of_ranks)
into_db(dict_of_ranks, rank_method_code)
diff --git a/modules/bibrank/lib/bibrank_record_sorter.py b/modules/bibrank/lib/bibrank_record_sorter.py
index 87c14eea4..b3a2b3f19 100644
--- a/modules/bibrank/lib/bibrank_record_sorter.py
+++ b/modules/bibrank/lib/bibrank_record_sorter.py
@@ -1,442 +1,442 @@
# -*- coding: utf-8 -*-
## Ranking of records using different parameters and methods on the fly.
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
import string
import time
import math
import re
import ConfigParser
import copy
from invenio.config import \
CFG_SITE_LANG, \
CFG_ETCDIR, \
CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS
from invenio.dbquery import run_sql, deserialize_via_marshal, wash_table_column_name
from invenio.errorlib import register_exception
from invenio.webpage import adderrorbox
from invenio.bibindex_engine_stemmer import stem
from invenio.bibindex_engine_stopwords import is_stopword
from invenio.bibrank_citation_searcher import get_cited_by, get_cited_by_weight
from invenio.intbitset import intbitset
from invenio.bibrank_word_searcher import find_similar
# Do not remove these lines, it is necessary for func_object = globals().get(function)
from invenio.bibrank_word_searcher import word_similarity
from invenio.solrutils_bibrank_searcher import word_similarity_solr
from invenio.xapianutils_bibrank_searcher import word_similarity_xapian
def compare_on_val(first, second):
return cmp(second[1], first[1])
def check_term(term, col_size, term_rec, max_occ, min_occ, termlength):
"""Check if the tem is valid for use
term - the term to check
col_size - the number of records in database
term_rec - the number of records which contains this term
max_occ - max frequency of the term allowed
min_occ - min frequency of the term allowed
termlength - the minimum length of the terms allowed"""
try:
- if is_stopword(term, 1) or (len(term) <= termlength) or ((float(term_rec) / float(col_size)) >= max_occ) or ((float(term_rec) / float(col_size)) <= min_occ):
+ if is_stopword(term) or (len(term) <= termlength) or ((float(term_rec) / float(col_size)) >= max_occ) or ((float(term_rec) / float(col_size)) <= min_occ):
return ""
if int(term):
return ""
except StandardError, e:
pass
return "true"
def create_external_ranking_settings(rank_method_code, config):
methods[rank_method_code]['fields'] = dict()
sections = config.sections()
field_pattern = re.compile('field[0-9]+')
for section in sections:
if field_pattern.search(section):
field_name = config.get(section, 'name')
methods[rank_method_code]['fields'][field_name] = dict()
for option in config.options(section):
if option != 'name':
create_external_ranking_option(section, option, methods[rank_method_code]['fields'][field_name], config)
elif section == 'find_similar_to_recid':
methods[rank_method_code][section] = dict()
for option in config.options(section):
create_external_ranking_option(section, option, methods[rank_method_code][section], config)
elif section == 'field_settings':
for option in config.options(section):
create_external_ranking_option(section, option, methods[rank_method_code], config)
def create_external_ranking_option(section, option, dictionary, config):
value = config.get(section, option)
if value.isdigit():
value = int(value)
dictionary[option] = value
def create_rnkmethod_cache():
"""Create cache with vital information for each rank method."""
global methods
bibrank_meths = run_sql("SELECT name from rnkMETHOD")
methods = {}
global voutput
voutput = ""
for (rank_method_code,) in bibrank_meths:
try:
file = CFG_ETCDIR + "/bibrank/" + rank_method_code + ".cfg"
config = ConfigParser.ConfigParser()
config.readfp(open(file))
except StandardError, e:
pass
cfg_function = config.get("rank_method", "function")
if config.has_section(cfg_function):
methods[rank_method_code] = {}
methods[rank_method_code]["function"] = cfg_function
methods[rank_method_code]["prefix"] = config.get(cfg_function, "relevance_number_output_prologue")
methods[rank_method_code]["postfix"] = config.get(cfg_function, "relevance_number_output_epilogue")
methods[rank_method_code]["chars_alphanumericseparators"] = r"[1234567890\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~]"
else:
raise Exception("Error in configuration file: %s" % (CFG_ETCDIR + "/bibrank/" + rank_method_code + ".cfg"))
i8n_names = run_sql("""SELECT ln,value from rnkMETHODNAME,rnkMETHOD where id_rnkMETHOD=rnkMETHOD.id and rnkMETHOD.name=%s""", (rank_method_code,))
for (ln, value) in i8n_names:
methods[rank_method_code][ln] = value
if config.has_option(cfg_function, "table"):
methods[rank_method_code]["rnkWORD_table"] = config.get(cfg_function, "table")
query = "SELECT count(*) FROM %sR" % wash_table_column_name(methods[rank_method_code]["rnkWORD_table"][:-1])
methods[rank_method_code]["col_size"] = run_sql(query)[0][0]
if config.has_option(cfg_function, "stemming") and config.get(cfg_function, "stemming"):
try:
methods[rank_method_code]["stemmer"] = config.get(cfg_function, "stemming")
except Exception,e:
pass
if config.has_option(cfg_function, "stopword"):
methods[rank_method_code]["stopwords"] = config.get(cfg_function, "stopword")
if config.has_section("find_similar"):
methods[rank_method_code]["max_word_occurence"] = float(config.get("find_similar", "max_word_occurence"))
methods[rank_method_code]["min_word_occurence"] = float(config.get("find_similar", "min_word_occurence"))
methods[rank_method_code]["min_word_length"] = int(config.get("find_similar", "min_word_length"))
methods[rank_method_code]["min_nr_words_docs"] = int(config.get("find_similar", "min_nr_words_docs"))
methods[rank_method_code]["max_nr_words_upper"] = int(config.get("find_similar", "max_nr_words_upper"))
methods[rank_method_code]["max_nr_words_lower"] = int(config.get("find_similar", "max_nr_words_lower"))
methods[rank_method_code]["default_min_relevance"] = int(config.get("find_similar", "default_min_relevance"))
if cfg_function in ('word_similarity_solr', 'word_similarity_xapian'):
create_external_ranking_settings(rank_method_code, config)
if config.has_section("combine_method"):
i = 1
methods[rank_method_code]["combine_method"] = []
while config.has_option("combine_method", "method%s" % i):
methods[rank_method_code]["combine_method"].append(string.split(config.get("combine_method", "method%s" % i), ","))
i += 1
def is_method_valid(colID, rank_method_code):
"""
Check if RANK_METHOD_CODE method is valid for the collection given.
If colID is None, then check for existence regardless of collection.
"""
if colID is None:
return run_sql("SELECT COUNT(*) FROM rnkMETHOD WHERE name=%s", (rank_method_code,))[0][0]
enabled_colls = dict(run_sql("SELECT id_collection, score from collection_rnkMETHOD,rnkMETHOD WHERE id_rnkMETHOD=rnkMETHOD.id AND name=%s", (rank_method_code,)))
try:
colID = int(colID)
except TypeError:
return 0
if enabled_colls.has_key(colID):
return 1
else:
while colID:
colID = run_sql("SELECT id_dad FROM collection_collection WHERE id_son=%s", (colID,))
if colID and enabled_colls.has_key(colID[0][0]):
return 1
elif colID:
colID = colID[0][0]
return 0
def get_bibrank_methods(colID, ln=CFG_SITE_LANG):
"""
Return a list of rank methods enabled for collection colID and the
name of them in the language defined by the ln parameter.
"""
if not globals().has_key('methods'):
create_rnkmethod_cache()
avail_methods = []
for (rank_method_code, options) in methods.iteritems():
if options.has_key("function") and is_method_valid(colID, rank_method_code):
if options.has_key(ln):
avail_methods.append((rank_method_code, options[ln]))
elif options.has_key(CFG_SITE_LANG):
avail_methods.append((rank_method_code, options[CFG_SITE_LANG]))
else:
avail_methods.append((rank_method_code, rank_method_code))
return avail_methods
def rank_records(rank_method_code, rank_limit_relevance, hitset_global, pattern=[], verbose=0, field='', rg=None, jrec=None):
"""rank_method_code, e.g. `jif' or `sbr' (word frequency vector model)
rank_limit_relevance, e.g. `23' for `nbc' (number of citations) or `0.10' for `vec'
hitset, search engine hits;
pattern, search engine query or record ID (you check the type)
verbose, verbose level
output:
list of records
list of rank values
prefix
postfix
verbose_output"""
voutput = ""
configcreated = ""
starttime = time.time()
afterfind = starttime - time.time()
aftermap = starttime - time.time()
try:
hitset = copy.deepcopy(hitset_global) #we are receiving a global hitset
if not globals().has_key('methods'):
create_rnkmethod_cache()
function = methods[rank_method_code]["function"]
#we get 'citation' method correctly here
func_object = globals().get(function)
if verbose > 0:
voutput += "function: %s <br/> " % function
voutput += "pattern: %s <br/>" % str(pattern)
if func_object and pattern and pattern[0][0:6] == "recid:" and function == "word_similarity":
result = find_similar(rank_method_code, pattern[0][6:], hitset, rank_limit_relevance, verbose, methods)
elif rank_method_code == "citation":
#we get rank_method_code correctly here. pattern[0] is the search word - not used by find_cit
p = ""
if pattern and pattern[0]:
p = pattern[0][6:]
result = find_citations(rank_method_code, p, hitset, verbose)
elif func_object:
if function == "word_similarity":
result = func_object(rank_method_code, pattern, hitset, rank_limit_relevance, verbose, methods)
elif function in ("word_similarity_solr", "word_similarity_xapian"):
if not rg:
rg = CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS
if not jrec:
jrec = 0
ranked_result_amount = rg + jrec
if verbose > 0:
voutput += "Ranked result amount: %s<br/><br/>" % ranked_result_amount
if verbose > 0:
voutput += "field: %s<br/>" % field
if function == "word_similarity_solr":
if verbose > 0:
voutput += "In Solr part:<br/>"
result = word_similarity_solr(pattern, hitset, methods[rank_method_code], verbose, field, ranked_result_amount)
if function == "word_similarity_xapian":
if verbose > 0:
voutput += "In Xapian part:<br/>"
result = word_similarity_xapian(pattern, hitset, methods[rank_method_code], verbose, field, ranked_result_amount)
else:
result = func_object(rank_method_code, pattern, hitset, rank_limit_relevance, verbose)
else:
result = rank_by_method(rank_method_code, pattern, hitset, rank_limit_relevance, verbose)
except Exception, e:
register_exception()
result = (None, "", adderrorbox("An error occured when trying to rank the search result "+rank_method_code, ["Unexpected error: %s<br />" % (e,)]), voutput)
afterfind = time.time() - starttime
if result[0] and result[1]: #split into two lists for search_engine
results_similar_recIDs = map(lambda x: x[0], result[0])
results_similar_relevances = map(lambda x: x[1], result[0])
result = (results_similar_recIDs, results_similar_relevances, result[1], result[2], "%s" % configcreated + result[3])
aftermap = time.time() - starttime
else:
result = (None, None, result[1], result[2], result[3])
#add stuff from here into voutput from result
tmp = voutput+result[4]
if verbose > 0:
tmp += "<br/>Elapsed time after finding: "+str(afterfind)+"\nElapsed after mapping: "+str(aftermap)
result = (result[0],result[1],result[2],result[3],tmp)
#dbg = string.join(map(str,methods[rank_method_code].items()))
#result = (None, "", adderrorbox("Debug ",rank_method_code+" "+dbg),"",voutput);
return result
def combine_method(rank_method_code, pattern, hitset, rank_limit_relevance,verbose):
"""combining several methods into one based on methods/percentage in config file"""
global voutput
result = {}
try:
for (method, percent) in methods[rank_method_code]["combine_method"]:
function = methods[method]["function"]
func_object = globals().get(function)
percent = int(percent)
if func_object:
this_result = func_object(method, pattern, hitset, rank_limit_relevance, verbose)[0]
else:
this_result = rank_by_method(method, pattern, hitset, rank_limit_relevance, verbose)[0]
for i in range(0, len(this_result)):
(recID, value) = this_result[i]
if value > 0:
result[recID] = result.get(recID, 0) + int((float(i) / len(this_result)) * float(percent))
result = result.items()
result.sort(lambda x, y: cmp(x[1], y[1]))
return (result, "(", ")", voutput)
except Exception, e:
return (None, "Warning: %s method cannot be used for ranking your query." % rank_method_code, "", voutput)
def rank_by_method(rank_method_code, lwords, hitset, rank_limit_relevance,verbose):
"""Ranking of records based on predetermined values.
input:
rank_method_code - the code of the method, from the name field in rnkMETHOD, used to get predetermined values from
rnkMETHODDATA
lwords - a list of words from the query
hitset - a list of hits for the query found by search_engine
rank_limit_relevance - show only records with a rank value above this
verbose - verbose value
output:
reclist - a list of sorted records, with unsorted added to the end: [[23,34], [344,24], [1,01]]
prefix - what to show before the rank value
postfix - what to show after the rank value
voutput - contains extra information, content dependent on verbose value"""
global voutput
voutput = ""
rnkdict = run_sql("SELECT relevance_data FROM rnkMETHODDATA,rnkMETHOD where rnkMETHOD.id=id_rnkMETHOD and rnkMETHOD.name=%s", (rank_method_code,))
if not rnkdict:
return (None, "Warning: Could not load ranking data for method %s." % rank_method_code, "", voutput)
max_recid = 0
res = run_sql("SELECT max(id) FROM bibrec")
if res and res[0][0]:
max_recid = int(res[0][0])
lwords_hitset = None
for j in range(0, len(lwords)): #find which docs to search based on ranges; should be done in search_engine
if lwords[j] and lwords[j][:6] == "recid:":
if not lwords_hitset:
lwords_hitset = intbitset()
lword = lwords[j][6:]
if string.find(lword, "->") > -1:
lword = string.split(lword, "->")
if int(lword[0]) >= max_recid or int(lword[1]) >= max_recid + 1:
return (None, "Warning: Given record IDs are out of range.", "", voutput)
for i in range(int(lword[0]), int(lword[1])):
lwords_hitset.add(int(i))
elif lword < max_recid + 1:
lwords_hitset.add(int(lword))
else:
return (None, "Warning: Given record IDs are out of range.", "", voutput)
rnkdict = deserialize_via_marshal(rnkdict[0][0])
if verbose > 0:
voutput += "<br />Running rank method: %s, using rank_by_method function in bibrank_record_sorter<br />" % rank_method_code
voutput += "Ranking data loaded, size of structure: %s<br />" % len(rnkdict)
lrecIDs = list(hitset)
if verbose > 0:
voutput += "Number of records to rank: %s<br />" % len(lrecIDs)
reclist = []
reclist_addend = []
if not lwords_hitset: #rank all docs; could this be sped up with something other than a for loop?
for recID in lrecIDs:
if rnkdict.has_key(recID):
reclist.append((recID, rnkdict[recID]))
del rnkdict[recID]
else:
reclist_addend.append((recID, 0))
else: #rank docs in hitset; could this be sped up with something other than a for loop?
for recID in lwords_hitset:
if rnkdict.has_key(recID) and recID in hitset:
reclist.append((recID, rnkdict[recID]))
del rnkdict[recID]
elif recID in hitset:
reclist_addend.append((recID, 0))
if verbose > 0:
voutput += "Number of records ranked: %s<br />" % len(reclist)
voutput += "Number of records not ranked: %s<br />" % len(reclist_addend)
reclist.sort(lambda x, y: cmp(x[1], y[1]))
return (reclist_addend + reclist, methods[rank_method_code]["prefix"], methods[rank_method_code]["postfix"], voutput)
def find_citations(rank_method_code, recID, hitset, verbose):
"""Rank by the amount of citations."""
#calculate the cited-by values for all the members of the hitset
#returns: ((recordid,weight),prefix,postfix,message)
global voutput
voutput = ""
#If the recID is numeric, return only stuff that cites it. Otherwise return
#stuff that cites hitset
#try to convert to int
recisint = True
recidint = 0
try:
recidint = int(recID)
except:
recisint = False
ret = []
if recisint:
myrecords = get_cited_by(recidint) #this is a simple list
ret = get_cited_by_weight(myrecords)
else:
ret = get_cited_by_weight(hitset)
ret.sort(lambda x,y:cmp(x[1],y[1])) #ascending by the second member of the tuples
if verbose > 0:
voutput = voutput+"\nrecID "+str(recID)+" is int: "+str(recisint)+" hitset "+str(hitset)+"\n"+"find_citations retlist "+str(ret)
#voutput = voutput + str(ret)
if ret:
return (ret,"(", ")", "")
else:
return ((),"", "", "")
diff --git a/modules/bibrank/lib/bibrank_regression_tests.py b/modules/bibrank/lib/bibrank_regression_tests.py
index 5a6e6cda4..ae6c4068a 100644
--- a/modules/bibrank/lib/bibrank_regression_tests.py
+++ b/modules/bibrank/lib/bibrank_regression_tests.py
@@ -1,189 +1,189 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""BibRank Regression Test Suite."""
__revision__ = "$Id$"
from invenio.config import CFG_SITE_URL, CFG_SITE_RECORD
from invenio.dbquery import run_sql
from invenio.testutils import make_test_suite, run_test_suite, \
test_web_page_content, merge_error_messages, \
InvenioTestCase, nottest
class BibRankWebPagesAvailabilityTest(InvenioTestCase):
"""Check BibRank web pages whether they are up or not."""
def test_rank_by_word_similarity_pages_availability(self):
"""bibrank - availability of ranking search results pages"""
baseurl = CFG_SITE_URL + '/search'
_exports = ['?p=ellis&r=wrd']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_similar_records_pages_availability(self):
"""bibrank - availability of similar records results pages"""
baseurl = CFG_SITE_URL + '/search'
_exports = ['?p=recid%3A18&rm=wrd']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
class BibRankIntlMethodNames(InvenioTestCase):
"""Check BibRank I18N ranking method names."""
def test_i18n_ranking_method_names(self):
"""bibrank - I18N ranking method names"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/collection/Articles%20%26%20Preprints?as=1',
expected_text="times cited"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/collection/Articles%20%26%20Preprints?as=1',
expected_text="journal impact factor"))
class BibRankWordSimilarityRankingTest(InvenioTestCase):
"""Check BibRank word similarity ranking tools."""
def test_search_results_ranked_by_similarity(self):
"""bibrank - search results ranked by word similarity"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis&rm=wrd&of=id',
- expected_text="[8, 10, 17, 11, 12, 13, 47, 16, 9, 14, 18, 15]"))
+ expected_text="[8, 10, 11, 12, 47, 17, 13, 16, 9, 14, 18, 15]"))
def test_similar_records_link(self):
"""bibrank - 'Similar records' link"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A77&rm=wrd&of=id',
- expected_text="[84, 96, 95, 85, 77]"))
+ expected_text="[96, 95, 85, 77]"))
class BibRankCitationRankingTest(InvenioTestCase):
"""Check BibRank citation ranking tools."""
def test_search_results_ranked_by_citations(self):
"""bibrank - search results ranked by number of citations"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?cc=Articles+%26+Preprints&p=Klebanov&rm=citation&of=id',
username="admin",
expected_text="[85, 77, 84]"))
@nottest
def test_search_results_ranked_by_citations_verbose(self):
"""bibrank - search results ranked by number of citations, verbose output"""
#FIXME verbose is not supported in jinja2 templates
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?cc=Articles+%26+Preprints&p=Klebanov&rm=citation&verbose=2',
username="admin",
expected_text="find_citations retlist [[85, 0], [77, 2], [84, 3]]"))
def test_detailed_record_citations_tab(self):
"""bibrank - detailed record, citations tab"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/'+ CFG_SITE_RECORD +'/79/citations',
expected_text=["Cited by: 1 records",
"Co-cited with: 2 records"]))
class BibRankExtCitesTest(InvenioTestCase):
"""Check BibRank citation ranking tools with respect to the external cites."""
def _detect_extcite_info(self, extcitepubinfo):
"""
Helper function to return list of recIDs citing given
extcitepubinfo. Could be moved to the business logic if
interesting for other callers.
"""
res = run_sql("""SELECT id_bibrec FROM rnkCITATIONDATAEXT
WHERE extcitepubinfo=%s""",
(extcitepubinfo,))
return [int(x[0]) for x in res]
def test_extcite_via_report_number(self):
"""bibrank - external cites, via report number"""
# The external paper hep-th/0112258 is cited by 9 demo
# records: you can find out via 999:"hep-th/0112258", and we
could eventually automate this query, but it is perhaps
# safer to leave it manual in case queries fail for some
# reason.
test_case_repno = "hep-th/0112258"
test_case_repno_cited_by = [77, 78, 81, 82, 85, 86, 88, 90, 91]
self.assertEqual(self._detect_extcite_info(test_case_repno),
test_case_repno_cited_by)
def test_extcite_via_publication_reference(self):
"""bibrank - external cites, via publication reference"""
# The external paper "J. Math. Phys. 4 (1963) 915" does not
# have any report number, and is cited by 1 demo record.
test_case_pubinfo = "J. Math. Phys. 4 (1963) 915"
test_case_pubinfo_cited_by = [90]
self.assertEqual(self._detect_extcite_info(test_case_pubinfo),
test_case_pubinfo_cited_by)
def test_intcite_via_report_number(self):
"""bibrank - external cites, no internal papers via report number"""
# The internal paper hep-th/9809057 is cited by 2 demo
# records, but it also exists as a demo record, so it should
# not be found in the extcite table.
test_case_repno = "hep-th/9809057"
test_case_repno_cited_by = []
self.assertEqual(self._detect_extcite_info(test_case_repno),
test_case_repno_cited_by)
def test_intcite_via_publication_reference(self):
"""bibrank - external cites, no internal papers via publication reference"""
# The internal paper #18 has only pubinfo, no repno, and is
# cited by internal paper #96 via its pubinfo, so should not
# be present in the extcite list:
test_case_repno = "Phys. Lett., B 151 (1985) 357"
test_case_repno_cited_by = []
self.assertEqual(self._detect_extcite_info(test_case_repno),
test_case_repno_cited_by)
TESTS = [BibRankWebPagesAvailabilityTest,
BibRankIntlMethodNames,
BibRankCitationRankingTest,
BibRankExtCitesTest]
from invenio.webinterface_handler_flask import with_app_context
@with_app_context()
def create_external_word_similarity_ranker_tests():
from invenio.bibrank_bridge_utils import get_external_word_similarity_ranker
if not get_external_word_similarity_ranker():
TESTS.append(BibRankWordSimilarityRankingTest)
create_external_word_similarity_ranker_tests()
TEST_SUITE = make_test_suite(*TESTS)
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibrank/lib/bibrank_word_indexer.py b/modules/bibrank/lib/bibrank_word_indexer.py
index e31a82c30..931745174 100644
--- a/modules/bibrank/lib/bibrank_word_indexer.py
+++ b/modules/bibrank/lib/bibrank_word_indexer.py
@@ -1,1195 +1,1195 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
import sys
import time
import urllib
import math
import re
import ConfigParser
from invenio.config import \
CFG_SITE_LANG, \
CFG_ETCDIR
from invenio.search_engine import perform_request_search, wash_index_term
from invenio.dbquery import run_sql, DatabaseError, serialize_via_marshal, deserialize_via_marshal
from invenio.bibindex_engine_stemmer import is_stemmer_available_for_language, stem
from invenio.bibindex_engine_stopwords import is_stopword
from invenio.bibindex_engine import beautify_range_list, \
kill_sleepy_mysql_threads, create_range_list
from invenio.bibtask import write_message, task_get_option, task_update_progress, \
task_update_status, task_sleep_now_if_required
from invenio.intbitset import intbitset
from invenio.errorlib import register_exception
from invenio.textutils import strip_accents
options = {} # global variable to hold task options
## safety parameters concerning DB thread-multiplication problem:
CFG_CHECK_MYSQL_THREADS = 0 # to check or not to check the problem?
CFG_MAX_MYSQL_THREADS = 50 # how many threads (connections) we consider as still safe
CFG_MYSQL_THREAD_TIMEOUT = 20 # we'll kill threads that were sleeping for more than X seconds
## override urllib's default password-asking behaviour:
class MyFancyURLopener(urllib.FancyURLopener):
def prompt_user_passwd(self, host, realm):
# supply some dummy credentials by default
return ("mysuperuser", "mysuperpass")
def http_error_401(self, url, fp, errcode, errmsg, headers):
# do not bother with protected pages
raise IOError, (999, 'unauthorized access')
return None
#urllib._urlopener = MyFancyURLopener()
nb_char_in_line = 50 # for verbose pretty printing
chunksize = 1000 # default size of chunks that the records will be treated by
base_process_size = 4500 # process base size
## Dictionary merging functions
def dict_union(list1, list2):
"Returns union of the two dictionaries."
union_dict = {}
for (e, count) in list1.iteritems():
union_dict[e] = count
for (e, count) in list2.iteritems():
if not union_dict.has_key(e):
union_dict[e] = count
else:
union_dict[e] = (union_dict[e][0] + count[0], count[1])
#for (e, count) in list2.iteritems():
# list1[e] = (list1.get(e, (0, 0))[0] + count[0], count[1])
#return list1
return union_dict
# tagToFunctions mapping. It offers an indirection level necessary for
# indexing fulltext. The default is get_words_from_phrase
tagToWordsFunctions = {}
def get_words_from_phrase(phrase, weight, lang="",
chars_punctuation=r"[\.\,\:\;\?\!\"]",
chars_alphanumericseparators=r"[1234567890\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~]",
split=str.split):
"Returns list of words from phrase 'phrase'."
words = {}
phrase = strip_accents(phrase)
phrase = phrase.lower()
#Getting rid of strange characters
phrase = re.sub("&eacute;", 'e', phrase)
phrase = re.sub("&egrave;", 'e', phrase)
phrase = re.sub("&agrave;", 'a', phrase)
phrase = re.sub("&nbsp;", ' ', phrase)
phrase = re.sub("&laquo;", ' ', phrase)
phrase = re.sub("&raquo;", ' ', phrase)
phrase = re.sub("&ecirc;", ' ', phrase)
phrase = re.sub("&amp;", ' ', phrase)
if phrase.find("</") > -1:
#Most likely html, remove html code
phrase = re.sub("(?s)<[^>]*>|&#?\w+;", ' ', phrase)
#removes http links
phrase = re.sub("(?s)http://[^( )]*", '', phrase)
phrase = re.sub(chars_punctuation, ' ', phrase)
#This way, characters standing alone (like c, a, b) are not added to the index, but combined forms like c++ or c$ are.
for word in split(phrase):
- if options["remove_stopword"] == "True" and not is_stopword(word, 1) and check_term(word, 0):
+ if options["remove_stopword"] == "True" and not is_stopword(word) and check_term(word, 0):
if lang and lang !="none" and options["use_stemming"]:
word = stem(word, lang)
if not words.has_key(word):
words[word] = (0, 0)
else:
if not words.has_key(word):
words[word] = (0, 0)
words[word] = (words[word][0] + weight, 0)
- elif options["remove_stopword"] == "True" and not is_stopword(word, 1):
+ elif options["remove_stopword"] == "True" and not is_stopword(word):
phrase = re.sub(chars_alphanumericseparators, ' ', word)
for word_ in split(phrase):
if lang and lang !="none" and options["use_stemming"]:
word_ = stem(word_, lang)
if word_:
if not words.has_key(word_):
words[word_] = (0,0)
words[word_] = (words[word_][0] + weight, 0)
return words
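The normalisation pipeline at the start of `get_words_from_phrase` can be reduced to a self-contained sketch (modern Python; stemming, stopword removal and weighting are omitted, and only a few of the entity substitutions are shown):

```python
import re

# A few of the HTML-entity substitutions applied by get_words_from_phrase
ENTITIES = {"&eacute;": "e", "&egrave;": "e", "&agrave;": "a", "&nbsp;": " "}
CHARS_PUNCTUATION = r"[\.\,\:\;\?\!\"]"  # same class as the function's default

def simple_words(phrase):
    """Lowercase, substitute entities, strip punctuation, split into words."""
    phrase = phrase.lower()
    for entity, replacement in ENTITIES.items():
        phrase = phrase.replace(entity, replacement)
    phrase = re.sub(CHARS_PUNCTUATION, " ", phrase)
    return phrase.split()

print(simple_words("Caf&eacute; physics: a review!"))  # → ['cafe', 'physics', 'a', 'review']
```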
class WordTable:
"A class to hold the words table."
def __init__(self, tablename, fields_to_index, separators="[^\s]"):
"Creates words table instance."
self.tablename = tablename
self.recIDs_in_mem = []
self.fields_to_index = fields_to_index
self.separators = separators
self.value = {}
def get_field(self, recID, tag):
"""Returns list of values of the MARC-21 'tag' fields for the
record 'recID'."""
out = []
bibXXx = "bib" + tag[0] + tag[1] + "x"
bibrec_bibXXx = "bibrec_" + bibXXx
query = """SELECT value FROM %s AS b, %s AS bb
WHERE bb.id_bibrec=%s AND bb.id_bibxxx=b.id
AND tag LIKE '%s'""" % (bibXXx, bibrec_bibXXx, recID, tag);
res = run_sql(query)
for row in res:
out.append(row[0])
return out
def clean(self):
"Cleans the words table."
self.value={}
def put_into_db(self, mode="normal"):
"""Updates the current words table in the corresponding DB
rnkWORD table. Mode 'normal' means normal execution,
mode 'emergency' means the words index is reverted to its old state.
"""
write_message("%s %s wordtable flush started" % (self.tablename,mode))
write_message('...updating %d words into %sR started' % \
(len(self.value), self.tablename[:-1]))
task_update_progress("%s flushed %d/%d words" % (self.tablename, 0, len(self.value)))
self.recIDs_in_mem = beautify_range_list(self.recIDs_in_mem)
if mode == "normal":
for group in self.recIDs_in_mem:
query = """UPDATE %sR SET type='TEMPORARY' WHERE id_bibrec
BETWEEN '%d' AND '%d' AND type='CURRENT'""" % \
(self.tablename[:-1], group[0], group[1])
write_message(query, verbose=9)
run_sql(query)
nb_words_total = len(self.value)
nb_words_report = int(nb_words_total/10)
nb_words_done = 0
for word in self.value.keys():
self.put_word_into_db(word, self.value[word])
nb_words_done += 1
if nb_words_report!=0 and ((nb_words_done % nb_words_report) == 0):
write_message('......processed %d/%d words' % (nb_words_done, nb_words_total))
task_update_progress("%s flushed %d/%d words" % (self.tablename, nb_words_done, nb_words_total))
write_message('...updating %d words into %s ended' % \
(nb_words_total, self.tablename), verbose=9)
#if options["verbose"]:
# write_message('...updating reverse table %sR started' % self.tablename[:-1])
if mode == "normal":
for group in self.recIDs_in_mem:
query = """UPDATE %sR SET type='CURRENT' WHERE id_bibrec
BETWEEN '%d' AND '%d' AND type='FUTURE'""" % \
(self.tablename[:-1], group[0], group[1])
write_message(query, verbose=9)
run_sql(query)
query = """DELETE FROM %sR WHERE id_bibrec
BETWEEN '%d' AND '%d' AND type='TEMPORARY'""" % \
(self.tablename[:-1], group[0], group[1])
write_message(query, verbose=9)
run_sql(query)
write_message('End of updating wordTable into %s' % self.tablename, verbose=9)
elif mode == "emergency":
write_message("emergency")
for group in self.recIDs_in_mem:
query = """UPDATE %sR SET type='CURRENT' WHERE id_bibrec
BETWEEN '%d' AND '%d' AND type='TEMPORARY'""" % \
(self.tablename[:-1], group[0], group[1])
write_message(query, verbose=9)
run_sql(query)
query = """DELETE FROM %sR WHERE id_bibrec
BETWEEN '%d' AND '%d' AND type='FUTURE'""" % \
(self.tablename[:-1], group[0], group[1])
write_message(query, verbose=9)
run_sql(query)
write_message('End of emergency flushing wordTable into %s' % self.tablename, verbose=9)
#if options["verbose"]:
# write_message('...updating reverse table %sR ended' % self.tablename[:-1])
self.clean()
self.recIDs_in_mem = []
write_message("%s %s wordtable flush ended" % (self.tablename, mode))
task_update_progress("%s flush ended" % (self.tablename))
def load_old_recIDs(self,word):
"""Load existing hitlist for the word from the database index files."""
query = "SELECT hitlist FROM %s WHERE term=%%s" % self.tablename
res = run_sql(query, (word,))
if res:
return deserialize_via_marshal(res[0][0])
else:
return None
def merge_with_old_recIDs(self,word,recIDs, set):
"""Merge the system numbers stored in memory (hash of recIDs with value[0] > 0 or -1
according to whether to add/delete them) with those stored in the database index
and received in 'set', the universe of recIDs for the given word.
Return 0 in case no change was done to SET, return 1 in case SET was changed.
"""
set_changed_p = 0
for recID,sign in recIDs.iteritems():
if sign[0] == -1 and set.has_key(recID):
# delete recID if existent in set and if marked as to be deleted
del set[recID]
set_changed_p = 1
elif sign[0] > -1 and not set.has_key(recID):
# add recID if not existent in set and if marked as to be added
set[recID] = sign
set_changed_p = 1
elif sign[0] > -1 and sign[0] != set[recID][0]:
set[recID] = sign
set_changed_p = 1
return set_changed_p
def put_word_into_db(self, word, recIDs, split=str.split):
"""Flush a single word to the database and delete it from memory"""
set = self.load_old_recIDs(word)
#write_message("%s %s" % (word, self.value[word]))
if set is not None: # merge the word recIDs found in memory:
options["modified_words"][word] = 1
if not self.merge_with_old_recIDs(word, recIDs, set):
# nothing to update:
write_message("......... unchanged hitlist for ``%s''" % word, verbose=9)
pass
else:
# yes there were some new words:
write_message("......... updating hitlist for ``%s''" % word, verbose=9)
run_sql("UPDATE %s SET hitlist=%%s WHERE term=%%s" % self.tablename,
(serialize_via_marshal(set), word))
else: # the word is new, will create new set:
write_message("......... inserting hitlist for ``%s''" % word, verbose=9)
set = self.value[word]
if len(set) > 0:
#new word, add to list
options["modified_words"][word] = 1
try:
run_sql("INSERT INTO %s (term, hitlist) VALUES (%%s, %%s)" % self.tablename,
(word, serialize_via_marshal(set)))
except Exception, e:
## FIXME: This is for debugging encoding errors
register_exception(prefix="Error when putting the term '%s' into db (hitlist=%s): %s\n" % (repr(word), set, e), alert_admin=True)
if not set: # never store empty words
run_sql("DELETE from %s WHERE term=%%s" % self.tablename,
(word,))
del self.value[word]
def display(self):
"Displays the word table."
keys = self.value.keys()
keys.sort()
for k in keys:
write_message("%s: %s" % (k, self.value[k]))
def count(self):
"Returns the number of words in the table."
return len(self.value)
def info(self):
"Prints some information on the words table."
write_message("The words table contains %d words." % self.count())
def lookup_words(self, word=""):
"Lookup word from the words table."
if not word:
done = 0
while not done:
try:
word = raw_input("Enter word: ")
done = 1
except (EOFError, KeyboardInterrupt):
return
if self.value.has_key(word):
write_message("The word '%s' is found %d times." \
% (word, len(self.value[word])))
else:
write_message("The word '%s' does not exist in the word file."\
% word)
def update_last_updated(self, rank_method_code, starting_time=None):
"""Update last_updated column of the index table in the database.
Puts starting time there so that if the task was interrupted for record download,
the records will be reindexed next time."""
if starting_time is None:
return None
write_message("updating last_updated to %s..." % starting_time, verbose=9)
return run_sql("UPDATE rnkMETHOD SET last_updated=%s WHERE name=%s",
(starting_time, rank_method_code,))
def add_recIDs(self, recIDs):
"""Fetches records which id in the recIDs arange list and adds
them to the wordTable. The recIDs arange list is of the form:
[[i1_low,i1_high],[i2_low,i2_high], ..., [iN_low,iN_high]].
"""
global chunksize
flush_count = 0
records_done = 0
records_to_go = 0
for arange in recIDs:
records_to_go = records_to_go + arange[1] - arange[0] + 1
time_started = time.time() # will measure profile time
for arange in recIDs:
i_low = arange[0]
chunksize_count = 0
while i_low <= arange[1]:
# calculate chunk group of recIDs and treat it:
i_high = min(i_low+task_get_option("flush")-flush_count-1,arange[1])
i_high = min(i_low+chunksize-chunksize_count-1, i_high)
try:
self.chk_recID_range(i_low, i_high)
except StandardError, e:
write_message("Exception caught: %s" % e, sys.stderr)
register_exception()
task_update_status("ERROR")
sys.exit(1)
write_message("%s adding records #%d-#%d started" % \
(self.tablename, i_low, i_high))
if CFG_CHECK_MYSQL_THREADS:
kill_sleepy_mysql_threads()
task_update_progress("%s adding recs %d-%d" % (self.tablename, i_low, i_high))
self.del_recID_range(i_low, i_high)
just_processed = self.add_recID_range(i_low, i_high)
flush_count = flush_count + i_high - i_low + 1
chunksize_count = chunksize_count + i_high - i_low + 1
records_done = records_done + just_processed
write_message("%s adding records #%d-#%d ended " % \
(self.tablename, i_low, i_high))
if chunksize_count >= chunksize:
chunksize_count = 0
# flush if necessary:
if flush_count >= task_get_option("flush"):
self.put_into_db()
self.clean()
write_message("%s backing up" % (self.tablename))
flush_count = 0
self.log_progress(time_started,records_done,records_to_go)
# iterate:
i_low = i_high + 1
if flush_count > 0:
self.put_into_db()
self.log_progress(time_started,records_done,records_to_go)
def add_recIDs_by_date(self, dates=""):
"""Add recIDs modified between DATES[0] and DATES[1].
If DATES is not set, then add records modified since the last run of
the ranking method.
"""
if not dates:
write_message("Using the last update time for the rank method")
query = """SELECT last_updated FROM rnkMETHOD WHERE name='%s'
""" % options["current_run"]
res = run_sql(query)
if not res:
return
if not res[0][0]:
dates = ("0000-00-00",'')
else:
dates = (res[0][0],'')
query = """SELECT b.id FROM bibrec AS b WHERE b.modification_date >=
'%s'""" % dates[0]
if dates[1]:
query += "and b.modification_date <= '%s'" % dates[1]
query += " ORDER BY b.id ASC"""
res = run_sql(query)
alist = create_range_list([row[0] for row in res])
if not alist:
write_message( "No new records added. %s is up to date" % self.tablename)
else:
self.add_recIDs(alist)
return alist
def add_recID_range(self, recID1, recID2):
"""Add records from RECID1 to RECID2."""
wlist = {}
normalize = {}
self.recIDs_in_mem.append([recID1,recID2])
# secondly fetch all needed tags:
for (tag, weight, lang) in self.fields_to_index:
if tag in tagToWordsFunctions.keys():
get_words_function = tagToWordsFunctions[tag]
else:
get_words_function = get_words_from_phrase
bibXXx = "bib" + tag[0] + tag[1] + "x"
bibrec_bibXXx = "bibrec_" + bibXXx
query = """SELECT bb.id_bibrec,b.value FROM %s AS b, %s AS bb
WHERE bb.id_bibrec BETWEEN %d AND %d
AND bb.id_bibxxx=b.id AND tag LIKE '%s'""" % (bibXXx, bibrec_bibXXx, recID1, recID2, tag)
res = run_sql(query)
nb_total_to_read = len(res)
verbose_idx = 0 # for verbose pretty printing
for row in res:
recID, phrase = row
if recID in options["validset"]:
if not wlist.has_key(recID): wlist[recID] = {}
new_words = get_words_function(phrase, weight, lang) # ,self.separators
wlist[recID] = dict_union(new_words,wlist[recID])
# were there some words for these recIDs found?
if len(wlist) == 0: return 0
recIDs = wlist.keys()
for recID in recIDs:
# was this record marked as deleted?
if "DELETED" in self.get_field(recID, "980__c"):
wlist[recID] = {}
write_message("... record %d was declared deleted, removing its word list" % recID, verbose=9)
write_message("... record %d, termlist: %s" % (recID, wlist[recID]), verbose=9)
# put words into reverse index table with FUTURE status:
for recID in recIDs:
run_sql("INSERT INTO %sR (id_bibrec,termlist,type) VALUES (%%s,%%s,'FUTURE')" % self.tablename[:-1],
(recID, serialize_via_marshal(wlist[recID])))
# ... and, for new records, enter the CURRENT status as empty:
try:
run_sql("INSERT INTO %sR (id_bibrec,termlist,type) VALUES (%%s,%%s,'CURRENT')" % self.tablename[:-1],
(recID, serialize_via_marshal([])))
except DatabaseError:
# okay, it's an already existing record, no problem
pass
# put words into memory word list:
put = self.put
for recID in recIDs:
for (w, count) in wlist[recID].iteritems():
put(recID, w, count)
return len(recIDs)
def log_progress(self, start, done, todo):
"""Calculate progress and store it.
start: start time,
done: records processed,
todo: total number of records"""
time_elapsed = time.time() - start
# consistency check
if time_elapsed == 0 or done > todo:
return
time_recs_per_min = done/(time_elapsed/60.0)
write_message("%d records took %.1f seconds to complete.(%1.f recs/min)"\
% (done, time_elapsed, time_recs_per_min))
if time_recs_per_min:
write_message("Estimated runtime: %.1f minutes" % \
((todo-done)/time_recs_per_min))
def put(self, recID, word, sign):
"Adds/deletes a word to the word list."
try:
word = wash_index_term(word)
if self.value.has_key(word):
# the word 'word' exists already: update its sign
self.value[word][recID] = sign
# PROBLEM ?
else:
self.value[word] = {recID: sign}
except:
write_message("Error: Cannot put word %s with sign %d for recID %s." % (word, sign, recID))
def del_recIDs(self, recIDs):
"""Fetches records which id in the recIDs range list and adds
them to the wordTable. The recIDs range list is of the form:
[[i1_low,i1_high],[i2_low,i2_high], ..., [iN_low,iN_high]].
"""
count = 0
for range in recIDs:
self.del_recID_range(range[0],range[1])
count = count + range[1] - range[0] + 1
self.put_into_db()
def del_recID_range(self, low, high):
"""Deletes records with 'recID' system number between low
and high from memory words index table."""
write_message("%s fetching existing words for records #%d-#%d started" % \
(self.tablename, low, high), verbose=3)
self.recIDs_in_mem.append([low,high])
query = """SELECT id_bibrec,termlist FROM %sR as bb WHERE bb.id_bibrec
BETWEEN '%d' AND '%d'""" % (self.tablename[:-1], low, high)
recID_rows = run_sql(query)
for recID_row in recID_rows:
recID = recID_row[0]
wlist = deserialize_via_marshal(recID_row[1])
for word in wlist:
self.put(recID, word, (-1, 0))
write_message("%s fetching existing words for records #%d-#%d ended" % \
(self.tablename, low, high), verbose=3)
def report_on_table_consistency(self):
"""Check reverse words index tables (e.g. rnkWORD01R) for
interesting states such as 'TEMPORARY' state.
Prints a small report (number of words, number of bad words).
"""
# find number of words:
query = """SELECT COUNT(*) FROM %s""" % (self.tablename)
res = run_sql(query, None, 1)
if res:
nb_words = res[0][0]
else:
nb_words = 0
# find number of records:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR""" % (self.tablename[:-1])
res = run_sql(query, None, 1)
if res:
nb_records = res[0][0]
else:
nb_records = 0
# report stats:
write_message("%s contains %d words from %d records" % (self.tablename, nb_words, nb_records))
# find possible bad states in reverse tables:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR WHERE type <> 'CURRENT'""" % (self.tablename[:-1])
res = run_sql(query)
if res:
nb_bad_records = res[0][0]
else:
nb_bad_records = 999999999
if nb_bad_records:
write_message("EMERGENCY: %s needs to repair %d of %d index records" % \
(self.tablename, nb_bad_records, nb_records))
else:
write_message("%s is in consistent state" % (self.tablename))
return nb_bad_records
def repair(self):
"""Repair the whole table"""
# find possible bad states in reverse tables:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR WHERE type <> 'CURRENT'""" % (self.tablename[:-1])
res = run_sql(query, None, 1)
if res:
nb_bad_records = res[0][0]
else:
nb_bad_records = 0
# find number of records:
query = """SELECT COUNT(DISTINCT(id_bibrec)) FROM %sR""" % (self.tablename[:-1])
res = run_sql(query)
if res:
nb_records = res[0][0]
else:
nb_records = 0
if nb_bad_records == 0:
return
query = """SELECT id_bibrec FROM %sR WHERE type <> 'CURRENT' ORDER BY id_bibrec""" \
% (self.tablename[:-1])
res = run_sql(query)
recIDs = create_range_list([row[0] for row in res])
flush_count = 0
records_done = 0
records_to_go = 0
for range in recIDs:
records_to_go = records_to_go + range[1] - range[0] + 1
time_started = time.time() # will measure profile time
for range in recIDs:
i_low = range[0]
chunksize_count = 0
while i_low <= range[1]:
# calculate chunk group of recIDs and treat it:
i_high = min(i_low+task_get_option("flush")-flush_count-1,range[1])
i_high = min(i_low+chunksize-chunksize_count-1, i_high)
try:
self.fix_recID_range(i_low, i_high)
except StandardError, e:
write_message("Exception caught: %s" % e, sys.stderr)
register_exception()
task_update_status("ERROR")
sys.exit(1)
flush_count = flush_count + i_high - i_low + 1
chunksize_count = chunksize_count + i_high - i_low + 1
records_done = records_done + i_high - i_low + 1
if chunksize_count >= chunksize:
chunksize_count = 0
# flush if necessary:
if flush_count >= task_get_option("flush"):
self.put_into_db("emergency")
self.clean()
flush_count = 0
self.log_progress(time_started,records_done,records_to_go)
# iterate:
i_low = i_high + 1
if flush_count > 0:
self.put_into_db("emergency")
self.log_progress(time_started,records_done,records_to_go)
write_message("%s inconsistencies repaired." % self.tablename)
def chk_recID_range(self, low, high):
"""Check if the reverse index table is in proper state"""
## check db
query = """SELECT COUNT(*) FROM %sR WHERE type <> 'CURRENT'
AND id_bibrec BETWEEN '%d' AND '%d'""" % (self.tablename[:-1], low, high)
res = run_sql(query, None, 1)
if res[0][0]==0:
write_message("%s for %d-%d is in consistent state"%(self.tablename,low,high))
return # okay, words table is consistent
## inconsistency detected!
write_message("EMERGENCY: %s inconsistencies detected..." % self.tablename)
write_message("""EMERGENCY: Errors found. You should check consistency of the %s - %sR tables.\nRunning 'bibrank --repair' is recommended.""" \
% (self.tablename, self.tablename[:-1]))
raise StandardError
def fix_recID_range(self, low, high):
"""Try to fix reverse index database consistency (e.g. table rnkWORD01R) in the low,high doc-id range.
Possible states for a recID follow:
CUR TMP FUT: very bad things have happened: warn!
CUR TMP : very bad things have happened: warn!
CUR FUT: delete FUT (crash before flushing)
CUR : database is ok
TMP FUT: add TMP to memory and del FUT from memory
flush (revert to old state)
TMP : very bad things have happened: warn!
FUT: very bad things have happened: warn!
"""
state = {}
query = "SELECT id_bibrec,type FROM %sR WHERE id_bibrec BETWEEN '%d' AND '%d'"\
% (self.tablename[:-1], low, high)
res = run_sql(query)
for row in res:
if not state.has_key(row[0]):
state[row[0]]=[]
state[row[0]].append(row[1])
ok = 1 # will hold info on whether we will be able to repair
for recID in state.keys():
if not 'TEMPORARY' in state[recID]:
if 'FUTURE' in state[recID]:
if 'CURRENT' not in state[recID]:
write_message("EMERGENCY: Index record %d is in inconsistent state. Can't repair it" % recID)
ok = 0
else:
write_message("EMERGENCY: Inconsistency in index record %d detected" % recID)
query = """DELETE FROM %sR
WHERE id_bibrec='%d'""" % (self.tablename[:-1], recID)
run_sql(query)
write_message("EMERGENCY: Inconsistency in index record %d repaired." % recID)
else:
if 'FUTURE' in state[recID] and not 'CURRENT' in state[recID]:
self.recIDs_in_mem.append([recID,recID])
# Get the words file
query = """SELECT type,termlist FROM %sR
WHERE id_bibrec='%d'""" % (self.tablename[:-1], recID)
write_message(query, verbose=9)
res = run_sql(query)
for row in res:
wlist = deserialize_via_marshal(row[1])
write_message("Words are %s " % wlist, verbose=9)
if row[0] == 'TEMPORARY':
sign = 1
else:
sign = -1
for word in wlist:
self.put(recID, word, wlist[word])
else:
write_message("EMERGENCY: %s for %d is in inconsistent state. Couldn't repair it." % (self.tablename, recID))
ok = 0
if not ok:
write_message("""EMERGENCY: Unrepairable errors found. You should check consistency
of the %s - %sR tables. Deleting affected TEMPORARY and FUTURE entries
from these tables is recommended; see the BibIndex Admin Guide.
(The repairing procedure is similar for bibrank word indexes.)""" % (self.tablename, self.tablename[:-1]))
raise StandardError
def word_index(run):
"""Run the indexing task. The row argument is the BibSched task
queue row, containing if, arguments, etc.
Return 1 in case of success and 0 in case of failure.
"""
global languages
max_recid = 0
res = run_sql("SELECT max(id) FROM bibrec")
if res and res[0][0]:
max_recid = int(res[0][0])
options["run"] = []
options["run"].append(run)
for rank_method_code in options["run"]:
task_sleep_now_if_required(can_stop_too=True)
method_starting_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
write_message("Running rank method: %s" % getName(rank_method_code))
try:
file = CFG_ETCDIR + "/bibrank/" + rank_method_code + ".cfg"
config = ConfigParser.ConfigParser()
config.readfp(open(file))
except StandardError, e:
write_message("Cannot find configurationfile: %s" % file, sys.stderr)
raise StandardError
options["current_run"] = rank_method_code
options["modified_words"] = {}
options["table"] = config.get(config.get("rank_method", "function"), "table")
options["use_stemming"] = config.get(config.get("rank_method","function"),"stemming")
options["remove_stopword"] = config.get(config.get("rank_method","function"),"stopword")
tags = get_tags(config) #get the tags to include
options["validset"] = get_valid_range(rank_method_code) #get the records from the collections the method is enabled for
function = config.get("rank_method","function")
wordTable = WordTable(options["table"], tags)
wordTable.report_on_table_consistency()
try:
if task_get_option("cmd") == "del":
if task_get_option("id"):
wordTable.del_recIDs(task_get_option("id"))
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("collection"):
l_of_colls = task_get_option("collection").split(",")
recIDs = perform_request_search(c=l_of_colls)
recIDs_range = []
for recID in recIDs:
recIDs_range.append([recID,recID])
wordTable.del_recIDs(recIDs_range)
task_sleep_now_if_required(can_stop_too=True)
else:
write_message("Missing IDs of records to delete from index %s.", wordTable.tablename,
sys.stderr)
raise StandardError
elif task_get_option("cmd") == "add":
if task_get_option("id"):
wordTable.add_recIDs(task_get_option("id"))
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("collection"):
l_of_colls = task_get_option("collection").split(",")
recIDs = perform_request_search(c=l_of_colls)
recIDs_range = []
for recID in recIDs:
recIDs_range.append([recID,recID])
wordTable.add_recIDs(recIDs_range)
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("last_updated"):
wordTable.add_recIDs_by_date("")
# only update last_updated if run via automatic mode:
wordTable.update_last_updated(rank_method_code, method_starting_time)
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("modified"):
wordTable.add_recIDs_by_date(task_get_option("modified"))
task_sleep_now_if_required(can_stop_too=True)
else:
wordTable.add_recIDs([[0,max_recid]])
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "repair":
wordTable.repair()
check_rnkWORD(options["table"])
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "check":
check_rnkWORD(options["table"])
options["modified_words"] = {}
task_sleep_now_if_required(can_stop_too=True)
elif task_get_option("cmd") == "stat":
rank_method_code_statistics(options["table"])
task_sleep_now_if_required(can_stop_too=True)
else:
write_message("Invalid command found processing %s" % \
wordTable.tablename, sys.stderr)
raise StandardError
update_rnkWORD(options["table"], options["modified_words"])
task_sleep_now_if_required(can_stop_too=True)
except StandardError, e:
register_exception(alert_admin=True)
write_message("Exception caught: %s" % e, sys.stderr)
sys.exit(1)
wordTable.report_on_table_consistency()
# We are done. State it in the database, close and quit
return 1
def get_tags(config):
"""Get the tags that should be used creating the index and each tag's parameter"""
tags = []
function = config.get("rank_method","function")
i = 1
shown_error = 0
#try:
if 1:
while config.has_option(function,"tag%s"% i):
tag = config.get(function, "tag%s" % i)
tag = tag.split(",")
tag[1] = int(tag[1].strip())
tag[2] = tag[2].strip()
#check if stemmer for language is available
if config.get(function, "stemming") and stem("information", "en") != "inform":
if shown_error == 0:
write_message("Warning: Stemming not working. Please check it out!")
shown_error = 1
elif tag[2] and tag[2] != "none" and config.get(function,"stemming") and not is_stemmer_available_for_language(tag[2]):
write_message("Warning: Stemming not available for language '%s'." % tag[2])
tags.append(tag)
i += 1
#except Exception:
# write_message("Could not read data from configuration file, please check for errors")
# raise StandardError
return tags
def get_valid_range(rank_method_code):
"""Returns which records are valid for this rank method, according to which collections it is enabled for."""
#if options["verbose"] >=9:
# write_message("Getting records from collections enabled for rank method.")
#res = run_sql("SELECT collection.name FROM collection,collection_rnkMETHOD,rnkMETHOD WHERE collection.id=id_collection and id_rnkMETHOD=rnkMETHOD.id and rnkMETHOD.name='%s'" % rank_method_code)
#l_of_colls = []
#for coll in res:
# l_of_colls.append(coll[0])
#if len(l_of_colls) > 0:
# recIDs = perform_request_search(c=l_of_colls)
#else:
# recIDs = []
valid = intbitset(trailing_bits=1)
valid.discard(0)
#valid.addlist(recIDs)
return valid
def check_term(term, termlength):
"""Check if term contains not allowed characters, or for any other reasons for not using this term."""
try:
if len(term) <= termlength:
return False
reg = re.compile(r"[1234567890\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~]")
if re.search(reg, term):
return False
term = str.replace(term, "-", "")
term = str.replace(term, ".", "")
term = str.replace(term, ",", "")
if int(term):
return False
except StandardError, e:
pass
return True
def check_rnkWORD(table):
"""Checks for any problems in rnkWORD tables."""
i = 0
errors = {}
termslist = run_sql("SELECT term FROM %s" % table)
N = run_sql("select max(id_bibrec) from %sR" % table[:-1])[0][0]
write_message("Checking integrity of rank values in %s" % table)
terms = map(lambda x: x[0], termslist)
while i < len(terms):
query_params = ()
for j in range(i, ((i+5000)< len(terms) and (i+5000) or len(terms))):
query_params += (terms[j],)
terms_docs = run_sql("SELECT term, hitlist FROM %s WHERE term IN (%s)" % (table, (len(query_params)*"%s,")[:-1]),
query_params)
for (t, hitlist) in terms_docs:
term_docs = deserialize_via_marshal(hitlist)
if (term_docs.has_key("Gi") and term_docs["Gi"][1] == 0) or not term_docs.has_key("Gi"):
write_message("ERROR: Missing value for term: %s (%s) in %s: %s" % (t, repr(t), table, len(term_docs)))
errors[t] = 1
i += 5000
write_message("Checking integrity of rank values in %sR" % table[:-1])
i = 0
while i < N:
docs_terms = run_sql("SELECT id_bibrec, termlist FROM %sR WHERE id_bibrec>=%s and id_bibrec<=%s" % (table[:-1], i, i+5000))
for (j, termlist) in docs_terms:
termlist = deserialize_via_marshal(termlist)
for (t, tf) in termlist.iteritems():
if tf[1] == 0 and not errors.has_key(t):
errors[t] = 1
write_message("ERROR: Gi missing for record %s and term: %s (%s) in %s" % (j,t,repr(t), table))
terms_docs = run_sql("SELECT term, hitlist FROM %s WHERE term=%%s" % table, (t,))
termlist = deserialize_via_marshal(terms_docs[0][1])
i += 5000
if len(errors) == 0:
write_message("No direct errors found, but nonconsistent data may exist.")
else:
write_message("%s errors found during integrity check, repair and rebalancing recommended." % len(errors))
options["modified_words"] = errors
def rank_method_code_statistics(table):
"""Shows some statistics about this rank method."""
maxID = run_sql("select max(id) from %s" % table)
maxID = maxID[0][0]
terms = {}
Gi = {}
write_message("Showing statistics of terms in index:")
write_message("Important: For the 'Least used terms', the number of terms is shown first, and the number of occurences second.")
write_message("Least used terms---Most important terms---Least important terms")
i = 0
while i < maxID:
terms_docs=run_sql("SELECT term, hitlist FROM %s WHERE id>= %s and id < %s" % (table, i, i + 10000))
for (t, hitlist) in terms_docs:
term_docs=deserialize_via_marshal(hitlist)
terms[len(term_docs)] = terms.get(len(term_docs), 0) + 1
if term_docs.has_key("Gi"):
Gi[t] = term_docs["Gi"]
i=i + 10000
terms=terms.items()
terms.sort(lambda x, y: cmp(y[1], x[1]))
Gi=Gi.items()
Gi.sort(lambda x, y: cmp(y[1], x[1]))
for i in range(0, 20):
write_message("%s/%s---%s---%s" % (terms[i][0],terms[i][1], Gi[i][0],Gi[len(Gi) - i - 1][0]))
def update_rnkWORD(table, terms):
"""Updates rnkWORDF and rnkWORDR with Gi and Nj values. For each term in rnkWORDF, a Gi value for the term is added. And for each term in each document, the Nj value for that document is added. In rnkWORDR, the Gi value for each term in each document is added. For description on how things are computed, look in the hacking docs.
table - name of forward index to update
terms - modified terms"""
stime = time.time()
Gi = {}
Nj = {}
N = run_sql("select count(id_bibrec) from %sR" % table[:-1])[0][0]
if len(terms) == 0 and task_get_option("quick") == "yes":
write_message("No terms to process, ending...")
return ""
elif task_get_option("quick") == "yes": #not used -R option, fast calculation (not accurate)
write_message("Beginning post-processing of %s terms" % len(terms))
#Locating all documents related to the modified/new/deleted terms; if fast update,
#only take into account new/modified occurrences
write_message("Phase 1: Finding records containing modified terms")
terms = terms.keys()
i = 0
while i < len(terms):
terms_docs = get_from_forward_index(terms, i, (i+5000), table)
for (t, hitlist) in terms_docs:
term_docs = deserialize_via_marshal(hitlist)
if term_docs.has_key("Gi"):
del term_docs["Gi"]
for (j, tf) in term_docs.iteritems():
if (task_get_option("quick") == "yes" and tf[1] == 0) or task_get_option("quick") == "no":
Nj[j] = 0
write_message("Phase 1: ......processed %s/%s terms" % ((i+5000>len(terms) and len(terms) or (i+5000)), len(terms)))
i += 5000
write_message("Phase 1: Finished finding records containing modified terms")
#Find all terms in the records found in last phase
write_message("Phase 2: Finding all terms in affected records")
records = Nj.keys()
i = 0
while i < len(records):
docs_terms = get_from_reverse_index(records, i, (i + 5000), table)
for (j, termlist) in docs_terms:
doc_terms = deserialize_via_marshal(termlist)
for (t, tf) in doc_terms.iteritems():
Gi[t] = 0
write_message("Phase 2: ......processed %s/%s records " % ((i+5000>len(records) and len(records) or (i+5000)), len(records)))
i += 5000
write_message("Phase 2: Finished finding all terms in affected records")
else: #recalculate
max_id = run_sql("SELECT MAX(id) FROM %s" % table)
max_id = max_id[0][0]
write_message("Beginning recalculation of %s terms" % max_id)
terms = []
i = 0
while i < max_id:
terms_docs = get_from_forward_index_with_id(i, (i+5000), table)
for (t, hitlist) in terms_docs:
Gi[t] = 0
term_docs = deserialize_via_marshal(hitlist)
if term_docs.has_key("Gi"):
del term_docs["Gi"]
for (j, tf) in term_docs.iteritems():
Nj[j] = 0
write_message("Phase 1: ......processed %s/%s terms" % ((i+5000)>max_id and max_id or (i+5000), max_id))
i += 5000
write_message("Phase 1: Finished finding which records contains which terms")
write_message("Phase 2: Jumping over..already done in phase 1 because of -R option")
terms = Gi.keys()
Gi = {}
i = 0
if task_get_option("quick") == "no":
#Calculating Fi and Gi value for each term
write_message("Phase 3: Calculating importance of all affected terms")
while i < len(terms):
terms_docs = get_from_forward_index(terms, i, (i+5000), table)
for (t, hitlist) in terms_docs:
term_docs = deserialize_via_marshal(hitlist)
if term_docs.has_key("Gi"):
del term_docs["Gi"]
Fi = 0
Gi[t] = 1
for (j, tf) in term_docs.iteritems():
Fi += tf[0]
for (j, tf) in term_docs.iteritems():
if tf[0] != Fi:
Gi[t] = Gi[t] + ((float(tf[0]) / Fi) * math.log(float(tf[0]) / Fi) / math.log(2)) / math.log(N)
write_message("Phase 3: ......processed %s/%s terms" % ((i+5000>len(terms) and len(terms) or (i+5000)), len(terms)))
i += 5000
write_message("Phase 3: Finished calculating importance of all affected terms")
else:
#Using the existing Gi value instead of calculating a new one. Loses some accuracy.
write_message("Phase 3: Getting approximate importance of all affected terms")
while i < len(terms):
terms_docs = get_from_forward_index(terms, i, (i+5000), table)
for (t, hitlist) in terms_docs:
term_docs = deserialize_via_marshal(hitlist)
if term_docs.has_key("Gi"):
Gi[t] = term_docs["Gi"][1]
elif len(term_docs) == 1:
Gi[t] = 1
else:
Fi = 0
Gi[t] = 1
for (j, tf) in term_docs.iteritems():
Fi += tf[0]
for (j, tf) in term_docs.iteritems():
if tf[0] != Fi:
Gi[t] = Gi[t] + ((float(tf[0]) / Fi) * math.log(float(tf[0]) / Fi) / math.log(2)) / math.log(N)
write_message("Phase 3: ......processed %s/%s terms" % ((i+5000>len(terms) and len(terms) or (i+5000)), len(terms)))
i += 5000
write_message("Phase 3: Finished getting approximate importance of all affected terms")
write_message("Phase 4: Calculating normalization value for all affected records and updating %sR" % table[:-1])
records = Nj.keys()
i = 0
while i < len(records):
#Calculating the normalization value for each document, and adding the Gi value to each term in each document.
docs_terms = get_from_reverse_index(records, i, (i + 5000), table)
for (j, termlist) in docs_terms:
doc_terms = deserialize_via_marshal(termlist)
try:
for (t, tf) in doc_terms.iteritems():
if Gi.has_key(t):
Nj[j] = Nj.get(j, 0) + math.pow(Gi[t] * (1 + math.log(tf[0])), 2)
Git = int(math.floor(Gi[t]*100))
if Git >= 0:
Git += 1
doc_terms[t] = (tf[0], Git)
else:
Nj[j] = Nj.get(j, 0) + math.pow(tf[1] * (1 + math.log(tf[0])), 2)
Nj[j] = 1.0 / math.sqrt(Nj[j])
Nj[j] = int(Nj[j] * 100)
if Nj[j] >= 0:
Nj[j] += 1
run_sql("UPDATE %sR SET termlist=%%s WHERE id_bibrec=%%s" % table[:-1],
(serialize_via_marshal(doc_terms), j))
except (ZeroDivisionError, OverflowError), e:
## This is to try to isolate division by zero errors.
register_exception(prefix="Error when analysing the record %s (%s): %s\n" % (j, repr(docs_terms), e), alert_admin=True)
write_message("Phase 4: ......processed %s/%s records" % ((i+5000>len(records) and len(records) or (i+5000)), len(records)))
i += 5000
write_message("Phase 4: Finished calculating normalization value for all affected records and updating %sR" % table[:-1])
write_message("Phase 5: Updating %s with new normalization values" % table)
i = 0
terms = Gi.keys()
while i < len(terms):
#Adding the Gi value to each term, and adding the normalization value to each term in each document.
terms_docs = get_from_forward_index(terms, i, (i+5000), table)
for (t, hitlist) in terms_docs:
try:
term_docs = deserialize_via_marshal(hitlist)
if term_docs.has_key("Gi"):
del term_docs["Gi"]
for (j, tf) in term_docs.iteritems():
if Nj.has_key(j):
term_docs[j] = (tf[0], Nj[j])
Git = int(math.floor(Gi[t]*100))
if Git >= 0:
Git += 1
term_docs["Gi"] = (0, Git)
run_sql("UPDATE %s SET hitlist=%%s WHERE term=%%s" % table,
(serialize_via_marshal(term_docs), t))
except (ZeroDivisionError, OverflowError), e:
register_exception(prefix="Error when analysing the term %s (%s): %s\n" % (t, repr(terms_docs), e), alert_admin=True)
write_message("Phase 5: ......processed %s/%s terms" % ((i+5000>len(terms) and len(terms) or (i+5000)), len(terms)))
i += 5000
write_message("Phase 5: Finished updating %s with new normalization values" % table)
write_message("Time used for post-processing: %.1fmin" % ((time.time() - stime) / 60))
write_message("Finished post-processing")
def get_from_forward_index(terms, start, stop, table):
terms_docs = ()
for j in range(start, (stop < len(terms) and stop or len(terms))):
terms_docs += run_sql("SELECT term, hitlist FROM %s WHERE term=%%s" % table,
(terms[j],))
return terms_docs
def get_from_forward_index_with_id(start, stop, table):
terms_docs = run_sql("SELECT term, hitlist FROM %s WHERE id BETWEEN %s AND %s" % (table, start, stop))
return terms_docs
def get_from_reverse_index(records, start, stop, table):
current_recs = "%s" % records[start:stop]
current_recs = current_recs[1:-1]
docs_terms = run_sql("SELECT id_bibrec, termlist FROM %sR WHERE id_bibrec IN (%s)" % (table[:-1], current_recs))
return docs_terms
#def test_word_separators(phrase="hep-th/0101001"):
#"""Tests word separating policy on various input."""
#print "%s:" % phrase
#gwfp = get_words_from_phrase(phrase)
#for (word, count) in gwfp.iteritems():
#print "\t-> %s - %s" % (word, count)
def getName(methname, ln=CFG_SITE_LANG, type='ln'):
"""Returns the name of the rank method, either in default language or given language.
methname = short name of the method
ln - the language to get the name in
type - which name "type" to get."""
try:
rnkid = run_sql("SELECT id FROM rnkMETHOD where name='%s'" % methname)
if rnkid:
rnkid = str(rnkid[0][0])
res = run_sql("SELECT value FROM rnkMETHODNAME where type='%s' and ln='%s' and id_rnkMETHOD=%s" % (type, ln, rnkid))
if not res:
res = run_sql("SELECT value FROM rnkMETHODNAME WHERE ln='%s' and id_rnkMETHOD=%s and type='%s'" % (CFG_SITE_LANG, rnkid, type))
if not res:
return methname
return res[0][0]
else:
raise Exception
except Exception, e:
write_message("Cannot run rank method, either given code for method is wrong, or it has not been added using the webinterface.")
raise Exception
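# getName() above resolves a display name in three steps: the requested
# language, then the site default language, then the raw method name.
# A minimal in-memory sketch of the same fallback order (the `names`
# dict stands in for the rnkMETHODNAME table):

```python
CFG_SITE_LANG = "en"  # stand-in for the Invenio config default

def get_method_name(names, methname, ln, default_ln=CFG_SITE_LANG):
    """Resolve a display name with the same fallback order as
    getName(): requested language -> default language -> method name."""
    translations = names.get(methname, {})
    return translations.get(ln) or translations.get(default_ln) or methname

names = {"wrd": {"en": "word similarity", "de": "Wortaehnlichkeit"}}
```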
def word_similarity(run):
"""Call correct method"""
return word_index(run)
diff --git a/modules/bibrank/lib/bibrank_word_searcher.py b/modules/bibrank/lib/bibrank_word_searcher.py
index f579ec1f2..534c68770 100644
--- a/modules/bibrank/lib/bibrank_word_searcher.py
+++ b/modules/bibrank/lib/bibrank_word_searcher.py
@@ -1,333 +1,333 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
import string
import time
import math
import re
from invenio.dbquery import run_sql, deserialize_via_marshal
from invenio.bibindex_engine_stemmer import stem
from invenio.bibindex_engine_stopwords import is_stopword
def find_similar(rank_method_code, recID, hitset, rank_limit_relevance,verbose, methods):
"""Finding terms to use for calculating similarity. Terms are taken from the recid given, returns a list of recids's and relevance,
input:
rank_method_code - the code of the method, from the name field in rnkMETHOD
recID - records to use for find similar
hitset - a list of hits for the query found by search_engine
rank_limit_relevance - show only records with a rank value above this
verbose - verbose value
output:
reclist - a list of sorted records: [[23,34], [344,24], [1,01]]
prefix - what to show before the rank value
postfix - what to show after the rank value
voutput - contains extra information, content dependent on verbose value"""
startCreate = time.time()
global voutput
voutput = ""
if verbose > 0:
voutput += "<br />Running rank method: %s, using find_similar/word_frequency in bibrank_record_sorter<br />" % rank_method_code
rank_limit_relevance = methods[rank_method_code]["default_min_relevance"]
try:
recID = int(recID)
except Exception, e:
return (None, "Warning: Error in record ID, please check that a number is given.", "", voutput)
rec_terms = run_sql("""SELECT termlist FROM %sR WHERE id_bibrec=%%s""" % methods[rank_method_code]["rnkWORD_table"][:-1], (recID,))
if not rec_terms:
return (None, "Warning: Requested record does not seem to exist.", "", voutput)
rec_terms = deserialize_via_marshal(rec_terms[0][0])
#Get all documents using terms from the selected documents
if len(rec_terms) == 0:
return (None, "Warning: Record specified has no content indexed for use with this method.", "", voutput)
else:
terms = "%s" % rec_terms.keys()
terms_recs = dict(run_sql("""SELECT term, hitlist FROM %s WHERE term IN (%s)""" % (methods[rank_method_code]["rnkWORD_table"], terms[1:len(terms) - 1])))
tf_values = {}
#Calculate all term frequencies
for (term, tf) in rec_terms.iteritems():
if len(term) >= methods[rank_method_code]["min_word_length"] and terms_recs.has_key(term) and tf[1] != 0:
tf_values[term] = int((1 + math.log(tf[0])) * tf[1]) #calculate term weight
tf_values = tf_values.items()
tf_values.sort(lambda x, y: cmp(y[1], x[1])) #sort based on weight
lwords = []
stime = time.time()
(recdict, rec_termcount) = ({}, {})
for (t, tf) in tf_values: #t=term, tf=term frequency
term_recs = deserialize_via_marshal(terms_recs[t])
if len(tf_values) <= methods[rank_method_code]["max_nr_words_lower"] or (len(term_recs) >= methods[rank_method_code]["min_nr_words_docs"] and (((float(len(term_recs)) / float(methods[rank_method_code]["col_size"])) <= methods[rank_method_code]["max_word_occurence"]) and ((float(len(term_recs)) / float(methods[rank_method_code]["col_size"])) >= methods[rank_method_code]["min_word_occurence"]))): #too complicated...something must be done
lwords.append((t, methods[rank_method_code]["rnkWORD_table"])) #list of terms used
(recdict, rec_termcount) = calculate_record_relevance_findsimilar((t, round(tf, 4)) , term_recs, hitset, recdict, rec_termcount, verbose, "true") #true tells the function to not calculate all unimportant terms
if len(tf_values) > methods[rank_method_code]["max_nr_words_lower"] and (len(lwords) == methods[rank_method_code]["max_nr_words_upper"] or tf < 0):
break
if len(recdict) == 0 or len(lwords) == 0:
return (None, "Could not find similar documents for this query.", "", voutput)
else: #sort if we got something to sort
(reclist, hitset) = sort_record_relevance_findsimilar(recdict, rec_termcount, hitset, rank_limit_relevance, verbose)
if verbose > 0:
voutput += "<br />Number of terms: %s<br />" % run_sql("SELECT count(id) FROM %s" % methods[rank_method_code]["rnkWORD_table"])[0][0]
voutput += "Number of terms to use for query: %s<br />" % len(lwords)
voutput += "Terms: %s<br />" % lwords
voutput += "Current number of recIDs: %s<br />" % (methods[rank_method_code]["col_size"])
voutput += "Prepare time: %s<br />" % (str(time.time() - startCreate))
voutput += "Total time used: %s<br />" % (str(time.time() - startCreate))
rank_method_stat(rank_method_code, reclist, lwords)
return (reclist[:len(reclist)], methods[rank_method_code]["prefix"], methods[rank_method_code]["postfix"], voutput)
def calculate_record_relevance_findsimilar(term, invidx, hitset, recdict, rec_termcount, verbose, quick=None):
"""Calculating the relevance of the documents based on the input, calculates only one word
term - (term, query term factor) the term and its importance in the overall search
invidx - {recid: tf, Gi: norm value} The Gi value is used as an idf value
hitset - a hitset with records that are allowed to be ranked
recdict - contains currently ranked records, is returned with new values
rec_termcount - {recid: count} the number of terms in this record that matches the query
verbose - verbose value
quick - if quick=yes, only terms with a positive qtf are used, to limit the number of records to sort"""
(t, qtf) = term
if invidx.has_key("Gi"): #Gi = weigth for this term, created by bibrank_word_indexer
Gi = invidx["Gi"][1]
del invidx["Gi"]
else: #if not existing, bibrank should be run with -R
return (recdict, rec_termcount)
if not quick or (qtf >= 0 or (qtf < 0 and len(recdict) == 0)):
#Only accept records existing in the hitset received from the search engine
for (j, tf) in invidx.iteritems():
if j in hitset: #only include docs found by search_engine based on query
#calculate rank value
recdict[j] = recdict.get(j, 0) + int((1 + math.log(tf[0])) * Gi * tf[1] * qtf)
rec_termcount[j] = rec_termcount.get(j, 0) + 1 #number of terms from query in document
elif quick: #much used term, do not include all records, only use already existing ones
for (j, tf) in recdict.iteritems(): #i.e: if doc contains important term, also count unimportant
if invidx.has_key(j):
tf = invidx[j]
recdict[j] = recdict[j] + int((1 + math.log(tf[0])) * Gi * tf[1] * qtf)
rec_termcount[j] = rec_termcount.get(j, 0) + 1 #number of terms from query in document
return (recdict, rec_termcount)
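# The rank value above combines the term frequency tf, the precomputed
# Gi (idf-like) weight, a per-record normalization factor and the query
# term factor qtf. A simplified pure-Python sketch of the accumulation
# step (names are ours; `invidx` maps recid -> (tf, nf)):

```python
import math

def score_term(recdict, termcount, invidx, hitset, gi, qtf):
    """Accumulate int((1 + log(tf)) * Gi * nf * qtf) per record, as in
    calculate_record_relevance_findsimilar() above; records outside
    `hitset` are skipped."""
    for recid, (tf, nf) in invidx.items():
        if recid in hitset:
            recdict[recid] = recdict.get(recid, 0) + int((1 + math.log(tf)) * gi * nf * qtf)
            termcount[recid] = termcount.get(recid, 0) + 1
    return recdict, termcount

recdict, termcount = score_term({}, {}, {1: (2, 1.0), 2: (5, 1.0)}, {1}, gi=3, qtf=2)
```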
def sort_record_relevance_findsimilar(recdict, rec_termcount, hitset, rank_limit_relevance, verbose):
"""Sorts the dictionary and returns records with a relevance higher than the given value.
recdict - {recid: value} unsorted
rank_limit_relevance - a value > 0 usually
verbose - verbose value"""
startCreate = time.time()
voutput = ""
reclist = []
#Multiply each score by the number of query terms found in that record
for j in recdict.keys():
if recdict[j] > 0 and rec_termcount[j] > 1:
recdict[j] = math.log((recdict[j] * rec_termcount[j]))
else:
recdict[j] = 0
hitset -= recdict.keys()
#gives each record a score between 0-100
divideby = max(recdict.values())
for (j, w) in recdict.iteritems():
w = int(w * 100 / divideby)
if w >= rank_limit_relevance:
reclist.append((j, w))
#sort scores
reclist.sort(lambda x, y: cmp(x[1], y[1]))
if verbose > 0:
voutput += "Number of records sorted: %s<br />" % len(reclist)
voutput += "Sort time: %s<br />" % (str(time.time() - startCreate))
return (reclist, hitset)
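# Both sorting functions rescale raw scores to 0-100 relative to the
# best record and drop anything below rank_limit_relevance. A compact
# sketch of that normalization step (helper name is ours):

```python
def normalize_scores(recdict, rank_limit_relevance):
    """Rescale raw scores to 0-100 of the maximum and keep only records
    at or above the relevance threshold, sorted ascending by score
    (the same order the functions above return)."""
    divideby = max(recdict.values())
    reclist = [(recid, int(w * 100 / divideby))
               for recid, w in recdict.items()
               if int(w * 100 / divideby) >= rank_limit_relevance]
    reclist.sort(key=lambda item: item[1])
    return reclist

ranked = normalize_scores({1: 50, 2: 200, 3: 10}, rank_limit_relevance=20)
```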
def word_similarity(rank_method_code, lwords, hitset, rank_limit_relevance, verbose, methods):
"""Ranking a records containing specified words and returns a sorted list.
input:
rank_method_code - the code of the method, from the name field in rnkMETHOD
lwords - a list of words from the query
hitset - a list of hits for the query found by search_engine
rank_limit_relevance - show only records with a rank value above this
verbose - verbose value
output:
reclist - a list of sorted records: [[23,34], [344,24], [1,01]]
prefix - what to show before the rank value
postfix - what to show after the rank value
voutput - contains extra information, content dependent on verbose value"""
voutput = ""
startCreate = time.time()
if verbose > 0:
voutput += "<br />Running rank method: %s, using word_frequency function in bibrank_record_sorter<br />" % rank_method_code
lwords_old = lwords
lwords = []
#Check terms, remove non-alphanumeric characters. Use both the unstemmed and stemmed versions of all terms.
for i in range(0, len(lwords_old)):
term = string.lower(lwords_old[i])
- if not methods[rank_method_code]["stopwords"] == "True" or methods[rank_method_code]["stopwords"] and not is_stopword(term, 1):
+ if not methods[rank_method_code]["stopwords"] == "True" or methods[rank_method_code]["stopwords"] and not is_stopword(term):
lwords.append((term, methods[rank_method_code]["rnkWORD_table"]))
terms = string.split(string.lower(re.sub(methods[rank_method_code]["chars_alphanumericseparators"], ' ', term)))
for term in terms:
if methods[rank_method_code].has_key("stemmer"): # stem word
term = stem(string.replace(term, ' ', ''), methods[rank_method_code]["stemmer"])
if lwords_old[i] != term: #add if stemmed word is different than original word
lwords.append((term, methods[rank_method_code]["rnkWORD_table"]))
(recdict, rec_termcount, lrecIDs_remove) = ({}, {}, {})
#For each term, if accepted, get a list of the records using the term
#calculate then relevance for each term before sorting the list of records
for (term, table) in lwords:
term_recs = run_sql("""SELECT term, hitlist FROM %s WHERE term=%%s""" % methods[rank_method_code]["rnkWORD_table"], (term,))
if term_recs: #if term exists in database, use for ranking
term_recs = deserialize_via_marshal(term_recs[0][1])
(recdict, rec_termcount) = calculate_record_relevance((term, int(term_recs["Gi"][1])) , term_recs, hitset, recdict, rec_termcount, verbose, quick=None)
del term_recs
if len(recdict) == 0 or (len(lwords) == 1 and lwords[0] == ""):
return (None, "Records not ranked. The query is not detailed enough, or not enough records found, for ranking to be possible.", "", voutput)
else: #sort if we got something to sort
(reclist, hitset) = sort_record_relevance(recdict, rec_termcount, hitset, rank_limit_relevance, verbose)
#Add any documents not ranked to the end of the list
if hitset:
lrecIDs = list(hitset) #using 2-3mb
reclist = zip(lrecIDs, [0] * len(lrecIDs)) + reclist #using 6mb
if verbose > 0:
voutput += "<br />Current number of recIDs: %s<br />" % (methods[rank_method_code]["col_size"])
voutput += "Number of terms: %s<br />" % run_sql("SELECT count(id) FROM %s" % methods[rank_method_code]["rnkWORD_table"])[0][0]
voutput += "Terms: %s<br />" % lwords
voutput += "Prepare and pre calculate time: %s<br />" % (str(time.time() - startCreate))
voutput += "Total time used: %s<br />" % (str(time.time() - startCreate))
voutput += str(reclist) + "<br />"
rank_method_stat(rank_method_code, reclist, lwords)
return (reclist, methods[rank_method_code]["prefix"], methods[rank_method_code]["postfix"], voutput)
def calculate_record_relevance(term, invidx, hitset, recdict, rec_termcount, verbose, quick=None):
"""Calculating the relevance of the documents based on the input, calculates only one word
term - (term, query term factor) the term and its importance in the overall search
invidx - {recid: tf, Gi: norm value} The Gi value is used as an idf value
hitset - a hitset with records that are allowed to be ranked
recdict - contains currently ranked records, is returned with new values
rec_termcount - {recid: count} the number of terms in this record that matches the query
verbose - verbose value
quick - if quick=yes, only terms with a positive qtf are used, to limit the number of records to sort"""
(t, qtf) = term
if invidx.has_key("Gi"):#Gi = weigth for this term, created by bibrank_word_indexer
Gi = invidx["Gi"][1]
del invidx["Gi"]
else: #if not existing, bibrank should be run with -R
return (recdict, rec_termcount)
if not quick or (qtf >= 0 or (qtf < 0 and len(recdict) == 0)):
#Only accept records existing in the hitset received from the search engine
for (j, tf) in invidx.iteritems():
if j in hitset: #only include docs found by search_engine based on query
try: #calculates rank value
recdict[j] = recdict.get(j, 0) + int(math.log(tf[0] * Gi * tf[1] * qtf))
except:
return (recdict, rec_termcount)
rec_termcount[j] = rec_termcount.get(j, 0) + 1 #number of terms from query in document
elif quick: #much used term, do not include all records, only use already existing ones
for (j, tf) in recdict.iteritems(): #i.e: if doc contains important term, also count unimportant
if invidx.has_key(j):
tf = invidx[j]
recdict[j] = recdict.get(j, 0) + int(math.log(tf[0] * Gi * tf[1] * qtf))
rec_termcount[j] = rec_termcount.get(j, 0) + 1 #number of terms from query in document
return (recdict, rec_termcount)
def sort_record_relevance(recdict, rec_termcount, hitset, rank_limit_relevance, verbose):
"""Sorts the dictionary and returns records with a relevance higher than the given value.
recdict - {recid: value} unsorted
rank_limit_relevance - a value > 0 usually
verbose - verbose value"""
startCreate = time.time()
voutput = ""
reclist = []
#remove all ranked documents so that unranked can be added to the end
hitset -= recdict.keys()
#gives each record a score between 0-100
divideby = max(recdict.values())
for (j, w) in recdict.iteritems():
w = int(w * 100 / divideby)
if w >= rank_limit_relevance:
reclist.append((j, w))
#sort scores
reclist.sort(lambda x, y: cmp(x[1], y[1]))
if verbose > 0:
voutput += "Number of records sorted: %s<br />" % len(reclist)
voutput += "Sort time: %s<br />" % (str(time.time() - startCreate))
return (reclist, hitset)
def rank_method_stat(rank_method_code, reclist, lwords):
"""Shows some statistics about the searchresult.
rank_method_code - name field from rnkMETHOD
reclist - a list of sorted and ranked records
lwords - the words in the query"""
voutput = ""
if len(reclist) > 20:
j = 20
else:
j = len(reclist)
voutput += "<br />Rank statistics:<br />"
for i in range(1, j + 1):
voutput += "%s,Recid:%s,Score:%s<br />" % (i,reclist[len(reclist) - i][0],reclist[len(reclist) - i][1])
for (term, table) in lwords:
term_recs = run_sql("""SELECT hitlist FROM %s WHERE term=%%s""" % table, (term,))
if term_recs:
term_recs = deserialize_via_marshal(term_recs[0][0])
if term_recs.has_key(reclist[len(reclist) - i][0]):
voutput += "%s-%s / " % (term, term_recs[reclist[len(reclist) - i][0]])
voutput += "<br />"
voutput += "<br />Score variation:<br />"
count = {}
for i in range(0, len(reclist)):
count[reclist[i][1]] = count.get(reclist[i][1], 0) + 1
i = 100
while i >= 0:
if count.has_key(i):
voutput += "%s-%s<br />" % (i, count[i])
i -= 1
#TODO: use Cython instead of Psyco
diff --git a/modules/bibrecord/lib/bibrecord_unit_tests.py b/modules/bibrecord/lib/bibrecord_unit_tests.py
index 54890a844..02b50ada4 100644
--- a/modules/bibrecord/lib/bibrecord_unit_tests.py
+++ b/modules/bibrecord/lib/bibrecord_unit_tests.py
@@ -1,1675 +1,1675 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
The BibRecord test suite.
"""
from invenio.config import CFG_TMPDIR, \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG
from invenio import bibrecord, bibrecord_config
from invenio.testutils import make_test_suite, run_test_suite, InvenioTestCase
try:
import pyRXP
parser_pyrxp_available = True
except ImportError:
parser_pyrxp_available = False
try:
from lxml import etree
parser_lxml_available = True
except ImportError:
parser_lxml_available = False
try:
import Ft.Xml.Domlette
parser_4suite_available = True
except ImportError:
parser_4suite_available = False
try:
import xml.dom.minidom
import xml.parsers.expat
parser_minidom_available = True
except ImportError:
parser_minidom_available = False
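# The try/except ImportError chain above probes each optional XML
# parser and records a boolean flag, which the test classes then use to
# define test methods conditionally. The pattern, distilled (module
# names here are illustrative):

```python
def probe(module_name):
    """Return True if `module_name` can be imported, mirroring the
    parser_*_available flags set above."""
    try:
        __import__(module_name)
        return True
    except ImportError:
        return False

has_json = probe("json")                     # stdlib: expected present
has_missing = probe("no_such_parser_module") # expected absent
```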
class BibRecordSuccessTest(InvenioTestCase):
""" bibrecord - demo file parsing test """
def setUp(self):
"""Initialize stuff"""
f = open(CFG_TMPDIR + '/demobibdata.xml', 'r')
xmltext = f.read()
f.close()
self.recs = [rec[0] for rec in bibrecord.create_records(xmltext)]
def test_records_created(self):
""" bibrecord - demo file how many records are created """
- self.assertEqual(113, len(self.recs))
+ self.assertEqual(141, len(self.recs))
def test_tags_created(self):
""" bibrecord - demo file which tags are created """
## check if the tags are correct
tags = ['003', '005', '020', '024', '035', '037', '041', '080', '084', '088',
- '100', '110', '242', '245', '246', '250', '260', '269', '270',
- '300', '340', '490', '500', '502', '506', '520', '542',
- '590', '595', '650', '653', '690', '691', '693', '694', '695', '697',
- '700', '710', '711', '720', '773', '852', '856', '859', '901', '909',
- '916', '960', '961', '962', '963', '964', '970', '980',
- '999', 'FFT']
+ '100', '110', '148', '150', '242', '245', '246', '250', '260', '269',
+ '270', '300', '340', '371', '372', '400', '410', '430', '440', '450',
+ '490', '500', '502', '506', '510', '520', '542', '550', '588', '590',
+ '595', '643', '650', '653', '670', '678', '680', '690', '691', '693',
+ '694', '695', '697', '700', '710', '711', '720', '773', '852', '856',
+ '859', '901', '909', '913', '914', '916', '920', '960', '961', '962',
+ '963', '964', '970', '980', '999', 'FFT']
t = []
for rec in self.recs:
t.extend(rec.keys())
t.sort()
#eliminate the elements repeated
tt = []
for x in t:
if not x in tt:
tt.append(x)
-
self.assertEqual(tags, tt)
def test_fields_created(self):
"""bibrecord - demo file how many fields are created"""
## check if the number of fields for each record is correct
fields = [14, 14, 8, 11, 11, 13, 11, 15, 10, 18, 15, 16,
10, 9, 15, 10, 11, 11, 11, 9, 11, 11, 10, 9, 9, 9,
10, 9, 10, 10, 8, 9, 8, 9, 14, 13, 14, 14, 15, 12,
13, 12, 15, 15, 13, 16, 16, 15, 15, 14, 16, 15, 15,
15, 16, 15, 16, 15, 15, 16, 15, 15, 14, 15, 12, 13,
11, 15, 8, 11, 14, 13, 12, 13, 6, 6, 25, 24, 27, 26,
26, 24, 26, 26, 25, 28, 24, 23, 27, 25, 25, 26, 26,
25, 20, 26, 25, 22, 9, 8, 9, 9, 8, 7, 19, 21, 27, 23,
- 23, 22, 9, 8, 16]
-
+ 23, 22, 9, 8, 16, 7, 7, 9, 5, 5, 3, 9, 12, 6,
+ 8, 8, 8, 13, 20, 20, 5, 8, 7, 7, 7, 7, 7, 8, 7, 8, 7, 7, 8]
cr = []
ret = []
for rec in self.recs:
cr.append(len(rec.values()))
ret.append(rec)
self.assertEqual(fields, cr, "\n%s\n!=\n%s" % (fields, cr))
def test_create_record_with_collection_tag(self):
""" bibrecord - create_record() for single record in collection"""
xmltext = """
<collection>
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
</record>
</collection>
"""
record = bibrecord.create_record(xmltext)
record1 = bibrecord.create_records(xmltext)[0]
self.assertEqual(record1, record)
class BibRecordParsersTest(InvenioTestCase):
""" bibrecord - testing the creation of records with different parsers"""
def setUp(self):
"""Initialize stuff"""
self.xmltext = """
<!-- A first comment -->
<collection>
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<!-- A second comment -->
<subfield code="a">eng</subfield>
</datafield>
</record>
</collection>
"""
self.expected_record = {
'001': [([], ' ', ' ', '33', 1)],
'041': [([('a', 'eng')], ' ', ' ', '', 2)]
}
if parser_pyrxp_available:
def test_pyRXP(self):
""" bibrecord - create_record() with pyRXP """
record = bibrecord._create_record_rxp(self.xmltext)
self.assertEqual(record, self.expected_record)
if parser_lxml_available:
def test_lxml(self):
""" bibrecord - create_record() with lxml"""
record = bibrecord._create_record_lxml(self.xmltext)
self.assertEqual(record, self.expected_record)
if parser_4suite_available:
def test_4suite(self):
""" bibrecord - create_record() with 4suite """
record = bibrecord._create_record_4suite(self.xmltext)
self.assertEqual(record, self.expected_record)
if parser_minidom_available:
def test_minidom(self):
""" bibrecord - create_record() with minidom """
record = bibrecord._create_record_minidom(self.xmltext)
self.assertEqual(record, self.expected_record)
class BibRecordBadInputTreatmentTest(InvenioTestCase):
""" bibrecord - testing for bad input treatment """
def test_empty_collection(self):
"""bibrecord - empty collection"""
xml_error0 = """<collection></collection>"""
rec = bibrecord.create_record(xml_error0)[0]
self.assertEqual(rec, {})
records = bibrecord.create_records(xml_error0)
self.assertEqual(len(records), 0)
def test_wrong_attribute(self):
"""bibrecord - bad input subfield \'cde\' instead of \'code\'"""
ws = bibrecord.CFG_BIBRECORD_WARNING_MSGS
xml_error1 = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe, John</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield cde="a">On the foo and bar</subfield>
</datafield>
</record>
"""
e = bibrecord.create_record(xml_error1, 1, 1)[2]
ee =''
for i in e:
if type(i).__name__ == 'str':
if i.count(ws[3])>0:
ee = i
self.assertEqual(bibrecord._warning((3, '(field number: 4)')), ee)
def test_missing_attribute(self):
""" bibrecord - bad input missing \"tag\" """
ws = bibrecord.CFG_BIBRECORD_WARNING_MSGS
xml_error2 = """
<record>
<controlfield tag="001">33</controlfield>
<datafield ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe, John</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the foo and bar</subfield>
</datafield>
</record>
"""
e = bibrecord.create_record(xml_error2, 1, 1)[2]
ee = ''
for i in e:
if type(i).__name__ == 'str':
if i.count(ws[1])>0:
ee = i
self.assertEqual(bibrecord._warning((1, '(field number(s): [2])')), ee)
def test_empty_datafield(self):
""" bibrecord - bad input no subfield """
ws = bibrecord.CFG_BIBRECORD_WARNING_MSGS
xml_error3 = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe, John</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the foo and bar</subfield>
</datafield>
</record>
"""
e = bibrecord.create_record(xml_error3, 1, 1)[2]
ee = ''
for i in e:
if type(i).__name__ == 'str':
if i.count(ws[8])>0:
ee = i
self.assertEqual(bibrecord._warning((8, '(field number: 2)')), ee)
def test_missing_tag(self):
"""bibrecord - bad input missing end \"tag\" """
ws = bibrecord.CFG_BIBRECORD_WARNING_MSGS
xml_error4 = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe, John</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the foo and bar</subfield>
</record>
"""
e = bibrecord.create_record(xml_error4, 1, 1)[2]
ee = ''
for i in e:
if type(i).__name__ == 'str':
if i.count(ws[99])>0:
ee = i
self.assertEqual(bibrecord._warning((99, '(Tagname : datafield)')), ee)
class BibRecordAccentedUnicodeLettersTest(InvenioTestCase):
""" bibrecord - testing accented UTF-8 letters """
def setUp(self):
"""Initialize stuff"""
self.xml_example_record = """<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Döè1, John</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, J>ohn</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">Пушкин</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>"""
self.rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
def test_accented_unicode_characters(self):
"""bibrecord - accented Unicode letters"""
self.assertEqual(self.xml_example_record,
bibrecord.record_xml_output(self.rec))
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "100", " ", " "),
[([('a', 'Döè1, John')], " ", " ", "", 3), ([('a', 'Doe2, J>ohn'), ('b', 'editor')], " ", " ", "", 4)])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "245", " ", "1"),
[([('a', 'Пушкин')], " ", '1', "", 5)])
class BibRecordGettingFieldValuesTest(InvenioTestCase):
""" bibrecord - testing for getting field/subfield values """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe1, John</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2="2">
<subfield code="a">Penrose, Roger</subfield>
<subfield code="u">University College London</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2="2">
<subfield code="a">Messi, Lionel</subfield>
<subfield code="u">FC Barcelona</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_get_field_instances(self):
"""bibrecord - getting field instances"""
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "100", " ", " "),
[([('a', 'Doe1, John')], " ", " ", "", 3), ([('a', 'Doe2, John'), ('b', 'editor')], " ", " ", "", 4)])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "", " ", " "),
[('245', [([('a', 'On the foo and bar1')], " ", '1', "", 5), ([('a', 'On the foo and bar2')], " ", '2', "", 6)]), ('001', [([], " ", " ", '33', 1)]), ('700', [([('a', 'Penrose, Roger'), ('u', "University College London")], ' ', '2', '', 7), ([('a', 'Messi, Lionel'), ('u', 'FC Barcelona')], ' ', '2', '', 8)]), ('100', [([('a', 'Doe1, John')], " ", " ", "", 3), ([('a', 'Doe2, John'), ('b', 'editor')], " ", " ", "", 4)]), ('041', [([('a', 'eng')], " ", " ", "", 2)]),])
def test_get_field_values(self):
"""bibrecord - getting field values"""
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "a"),
['Doe1, John', 'Doe2, John'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "b"),
['editor'])
def test_get_field_value(self):
"""bibrecord - getting first field value"""
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", " ", "a"),
'Doe1, John')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", " ", "b"),
'editor')
def test_get_subfield_values(self):
"""bibrecord - getting subfield values"""
fi1, fi2 = bibrecord.record_get_field_instances(self.rec, "100", " ", " ")
self.assertEqual(bibrecord.field_get_subfield_values(fi1, "b"), [])
self.assertEqual(bibrecord.field_get_subfield_values(fi2, "b"), ["editor"])
def test_filter_field(self):
"""bibrecord - filter field instances"""
field_instances = bibrecord.record_get_field_instances(self.rec, "700", "%", "%")
out = bibrecord.filter_field_instances(field_instances, "u", "University College London", 'e')
self.assertEqual(out, [([('a', 'Penrose, Roger'), ('u', "University College London")], ' ', '2', '', 7)])
out = bibrecord.filter_field_instances(field_instances, "u", "Bar", "s")
self.assertEqual(out, [([('a', 'Messi, Lionel'), ('u', 'FC Barcelona')], ' ', '2', '', 8)])
out = bibrecord.filter_field_instances(field_instances, "u", "on", "s")
self.assertEqual(out, [([('a', 'Penrose, Roger'), ('u', "University College London")], ' ', '2', '', 7),
([('a', 'Messi, Lionel'), ('u', 'FC Barcelona')], ' ', '2', '', 8)])
out = bibrecord.filter_field_instances(field_instances, "u", r".*\scoll", "r")
self.assertEqual(out,[])
out = bibrecord.filter_field_instances(field_instances, "u", r"[FC]{2}\s.*", "r")
self.assertEqual(out, [([('a', 'Messi, Lionel'), ('u', 'FC Barcelona')], ' ', '2', '', 8)])
class BibRecordGettingFieldValuesViaWildcardsTest(InvenioTestCase):
""" bibrecord - testing for getting field/subfield values via wildcards """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1="C" ind2="5">
<subfield code="a">val1</subfield>
</datafield>
<datafield tag="555" ind1="A" ind2="B">
<subfield code="a">val2</subfield>
</datafield>
<datafield tag="555" ind1="A" ind2=" ">
<subfield code="a">val3</subfield>
</datafield>
<datafield tag="555" ind1=" " ind2=" ">
<subfield code="a">val4a</subfield>
<subfield code="b">val4b</subfield>
</datafield>
<datafield tag="555" ind1=" " ind2="B">
<subfield code="a">val5</subfield>
</datafield>
<datafield tag="556" ind1="A" ind2="C">
<subfield code="a">val6</subfield>
</datafield>
<datafield tag="556" ind1="A" ind2=" ">
<subfield code="a">val7a</subfield>
<subfield code="b">val7b</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_get_field_instances_via_wildcard(self):
"""bibrecord - getting field instances via wildcards"""
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "100", " ", " "),
[])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "100", "%", " "),
[])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "100", "%", "%"),
[([('a', 'val1')], 'C', '5', "", 2)])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "55%", "A", "%"),
[([('a', 'val2')], 'A', 'B', "", 3),
([('a', 'val3')], 'A', " ", "", 4),
([('a', 'val6')], 'A', 'C', "", 7),
([('a', 'val7a'), ('b', 'val7b')], 'A', " ", "", 8)])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "55%", "A", " "),
[([('a', 'val3')], 'A', " ", "", 4),
([('a', 'val7a'), ('b', 'val7b')], 'A', " ", "", 8)])
self.assertEqual(bibrecord.record_get_field_instances(self.rec, "556", "A", " "),
[([('a', 'val7a'), ('b', 'val7b')], 'A', " ", "", 8)])
def test_get_field_values_via_wildcard(self):
"""bibrecord - getting field values via wildcards"""
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", " ", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", "%", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", "%", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", "%", "z"),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "%"),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "a"),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", " ", "a"),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", "%", "a"),
['val1'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", "%", "%", "%"),
['val1'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", "%", "a"),
['val2', 'val3', 'val6', 'val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", " ", "a"),
['val3', 'val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "556", "A", " ", "a"),
['val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "555", " ", " ", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "555", " ", " ", "z"),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "555", " ", " ", "%"),
['val4a', 'val4b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", " ", " ", "b"),
['val4b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "b"),
['val4b', 'val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", " ", "b"),
['val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", "%", "b"),
['val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", " ", "a"),
['val3', 'val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "A", "%", "a"),
['val2', 'val3', 'val6', 'val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "a"),
['val2', 'val3', 'val4a', 'val5', 'val6', 'val7a'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", " ", " ", "a"),
['val4a'])
def test_get_field_values_filtering_exact(self):
"""bibrecord - getting field values and exact filtering"""
self.assertEqual(bibrecord.record_get_field_values(self.rec, "556", "%", "%", "%", 'a', 'val7a'),
['val7a', 'val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "556", "%", "%", "a", 'a', 'val7a'),
['val7a'])
def test_get_field_values_filtering_substring(self):
"""bibrecord - getting field values and substring filtering"""
self.assertEqual(bibrecord.record_get_field_values(self.rec, "556", "%", "%", "%", 'a', '7a', 's'),
['val7a', 'val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "556", "%", "%", "b", 'a', '7a', 's'),
['val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "%", 'b', 'val', 's'),
['val4a', 'val4b', 'val7a', 'val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", " ", 'b', 'val', 's'),
[])
def test_get_field_values_filtering_regexp(self):
"""bibrecord - getting field values and regexp filtering"""
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "%", 'b', r'al', 'r'),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "%", 'a', r'.*al[6,7]', 'r'),
['val6', 'val7a', 'val7b'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "55%", "%", "%", "a", 'a', r'.*al[6,7]', 'r'),
['val6', 'val7a'])
def test_get_field_value_via_wildcard(self):
"""bibrecord - getting first field value via wildcards"""
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", " ", " "),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", "%", " ", " "),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", "%", " "),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", "%", "%", " "),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", " ", "%"),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", " ", " ", "a"),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", "%", " ", "a"),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", "%", "%", "a"),
'val1')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "100", "%", "%", "%"),
'val1')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", "%", "a"),
'val2')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", " ", "a"),
'val3')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "556", "A", " ", "a"),
'val7a')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "555", " ", " ", " "),
'')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "555", " ", " ", "%"),
'val4a')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", " ", " ", "b"),
'val4b')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "%", "%", "b"),
'val4b')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", " ", "b"),
'val7b')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", "%", "b"),
'val7b')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", " ", "a"),
'val3')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "A", "%", "a"),
'val2')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", "%", "%", "a"),
'val2')
self.assertEqual(bibrecord.record_get_field_value(self.rec, "55%", " ", " ", "a"),
'val4a')
class BibRecordAddFieldTest(InvenioTestCase):
""" bibrecord - testing adding field """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe1, John</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_add_controlfield(self):
"""bibrecord - adding controlfield"""
field_position_global_1 = bibrecord.record_add_field(self.rec, "003",
controlfield_value="SzGeCERN")
field_position_global_2 = bibrecord.record_add_field(self.rec, "004",
controlfield_value="Test")
self.assertEqual(field_position_global_1, 2)
self.assertEqual(field_position_global_2, 3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "003", " ", " ", ""),
['SzGeCERN'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "004", " ", " ", ""),
['Test'])
def test_add_datafield(self):
"""bibrecord - adding datafield"""
field_position_global_1 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe3, John')])
field_position_global_2 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe4, John'), ('b', 'editor')])
self.assertEqual(field_position_global_1, 5)
self.assertEqual(field_position_global_2, 6)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "a"),
['Doe1, John', 'Doe2, John', 'Doe3, John', 'Doe4, John'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "b"),
['editor', 'editor'])
def test_add_controlfield_on_desired_position(self):
"""bibrecord - adding controlfield on desired position"""
field_position_global_1 = bibrecord.record_add_field(self.rec, "005",
controlfield_value="Foo",
field_position_global=0)
field_position_global_2 = bibrecord.record_add_field(self.rec, "006",
controlfield_value="Bar",
field_position_global=0)
self.assertEqual(field_position_global_1, 7)
self.assertEqual(field_position_global_2, 8)
def test_add_datafield_on_desired_position_field_position_global(self):
"""bibrecord - adding datafield on desired global field position"""
field_position_global_1 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe3, John')], field_position_global=0)
field_position_global_2 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe4, John'), ('b', 'editor')], field_position_global=0)
self.assertEqual(field_position_global_1, 3)
self.assertEqual(field_position_global_2, 3)
def test_add_datafield_on_desired_position_field_position_local(self):
"""bibrecord - adding datafield on desired local field position"""
field_position_global_1 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe3, John')], field_position_local=0)
field_position_global_2 = bibrecord.record_add_field(self.rec, "100",
subfields=[('a', 'Doe4, John'), ('b', 'editor')],
field_position_local=2)
self.assertEqual(field_position_global_1, 3)
self.assertEqual(field_position_global_2, 5)
class BibRecordManageMultipleFieldsTest(InvenioTestCase):
""" bibrecord - testing the management of multiple fields """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">subfield1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">subfield2</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">subfield3</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">subfield4</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_delete_multiple_datafields(self):
"""bibrecord - deleting multiple datafields"""
self.fields = bibrecord.record_delete_fields(self.rec, '245', [1, 2])
self.assertEqual(self.fields[0],
([('a', 'subfield2')], ' ', ' ', '', 3))
self.assertEqual(self.fields[1],
([('a', 'subfield3')], ' ', ' ', '', 4))
def test_add_multiple_datafields_default_index(self):
"""bibrecord - adding multiple fields with the default index"""
fields = [([('a', 'subfield5')], ' ', ' ', '', 4),
([('a', 'subfield6')], ' ', ' ', '', 19)]
index = bibrecord.record_add_fields(self.rec, '245', fields)
self.assertEqual(index, None)
self.assertEqual(self.rec['245'][-2],
([('a', 'subfield5')], ' ', ' ', '', 6))
self.assertEqual(self.rec['245'][-1],
([('a', 'subfield6')], ' ', ' ', '', 7))
def test_add_multiple_datafields_with_index(self):
"""bibrecord - adding multiple fields with an index"""
fields = [([('a', 'subfield5')], ' ', ' ', '', 4),
([('a', 'subfield6')], ' ', ' ', '', 19)]
index = bibrecord.record_add_fields(self.rec, '245', fields,
field_position_local=0)
self.assertEqual(index, 0)
self.assertEqual(self.rec['245'][0],
([('a', 'subfield5')], ' ', ' ', '', 2))
self.assertEqual(self.rec['245'][1],
([('a', 'subfield6')], ' ', ' ', '', 3))
self.assertEqual(self.rec['245'][2],
([('a', 'subfield1')], ' ', ' ', '', 4))
def test_move_multiple_fields(self):
"""bibrecord - move multiple fields"""
bibrecord.record_move_fields(self.rec, '245', [1, 3])
self.assertEqual(self.rec['245'][0],
([('a', 'subfield1')], ' ', ' ', '', 2))
self.assertEqual(self.rec['245'][1],
([('a', 'subfield3')], ' ', ' ', '', 4))
self.assertEqual(self.rec['245'][2],
([('a', 'subfield2')], ' ', ' ', '', 5))
self.assertEqual(self.rec['245'][3],
([('a', 'subfield4')], ' ', ' ', '', 6))
class BibRecordDeleteFieldTest(InvenioTestCase):
""" bibrecord - testing field deletion """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe1, John</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
xml_example_record_empty = """
<record>
</record>
"""
self.rec_empty = bibrecord.create_record(xml_example_record_empty, 1, 1)[0]
def test_delete_controlfield(self):
"""bibrecord - deleting controlfield"""
bibrecord.record_delete_field(self.rec, "001", " ", " ")
self.assertEqual(bibrecord.record_get_field_values(self.rec, "001", " ", " ", " "),
[])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "b"),
['editor'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "2", "a"),
['On the foo and bar2'])
def test_delete_datafield(self):
"""bibrecord - deleting datafield"""
bibrecord.record_delete_field(self.rec, "100", " ", " ")
self.assertEqual(bibrecord.record_get_field_values(self.rec, "001", " ", " ", ""),
['33'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "b"),
[])
bibrecord.record_delete_field(self.rec, "245", " ", " ")
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "1", "a"),
['On the foo and bar1'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "2", "a"),
['On the foo and bar2'])
bibrecord.record_delete_field(self.rec, "245", " ", "2")
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "1", "a"),
['On the foo and bar1'])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "2", "a"),
[])
def test_add_delete_add_field_to_empty_record(self):
"""bibrecord - adding, deleting, and adding back a field to an empty record"""
field_position_global_1 = bibrecord.record_add_field(self.rec_empty, "003",
controlfield_value="SzGeCERN")
self.assertEqual(field_position_global_1, 1)
self.assertEqual(bibrecord.record_get_field_values(self.rec_empty, "003", " ", " ", ""),
['SzGeCERN'])
bibrecord.record_delete_field(self.rec_empty, "003", " ", " ")
self.assertEqual(bibrecord.record_get_field_values(self.rec_empty, "003", " ", " ", ""),
[])
field_position_global_1 = bibrecord.record_add_field(self.rec_empty, "003",
controlfield_value="SzGeCERN2")
self.assertEqual(field_position_global_1, 1)
self.assertEqual(bibrecord.record_get_field_values(self.rec_empty, "003", " ", " ", ""),
['SzGeCERN2'])
class BibRecordDeleteFieldFromTest(InvenioTestCase):
""" bibrecord - testing field deletion from position"""
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe1, John</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_delete_field_from(self):
"""bibrecord - deleting field from position"""
bibrecord.record_delete_field(self.rec, "100", field_position_global=4)
self.assertEqual(self.rec['100'], [([('a', 'Doe1, John')], ' ', ' ', '', 3)])
bibrecord.record_delete_field(self.rec, "100", field_position_global=3)
self.assertFalse('100' in self.rec)
bibrecord.record_delete_field(self.rec, "001", field_position_global=1)
bibrecord.record_delete_field(self.rec, "245", field_position_global=6)
self.assertFalse('001' in self.rec)
self.assertEqual(self.rec['245'], [([('a', 'On the foo and bar1')], ' ', '1', '', 5)])
# Some crash tests
bibrecord.record_delete_field(self.rec, '999', field_position_global=1)
bibrecord.record_delete_field(self.rec, '245', field_position_global=999)
class BibRecordAddSubfieldIntoTest(InvenioTestCase):
""" bibrecord - testing subfield addition """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_add_subfield_into(self):
"""bibrecord - adding subfield into position"""
bibrecord.record_add_subfield_into(self.rec, "100", "b", "Samekniv",
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "b"),
['editor', 'Samekniv'])
bibrecord.record_add_subfield_into(self.rec, "245", "x", "Elgokse",
field_position_global=4)
bibrecord.record_add_subfield_into(self.rec, "245", "x", "Fiskeflue",
subfield_position=0, field_position_global=4)
bibrecord.record_add_subfield_into(self.rec, "245", "z", "Ulriken",
subfield_position=2, field_position_global=4)
bibrecord.record_add_subfield_into(self.rec, "245", "z",
"Stortinget", subfield_position=999, field_position_global=4)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "1", "%"),
['Fiskeflue', 'On the foo and bar1', 'Ulriken', 'Elgokse', 'Stortinget'])
# Some crash tests
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_add_subfield_into, self.rec, "187", "x", "Crash",
field_position_global=1)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_add_subfield_into, self.rec, "245", "x", "Crash",
field_position_global=999)
class BibRecordModifyControlfieldTest(InvenioTestCase):
""" bibrecord - testing controlfield modification """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<controlfield tag="005">A Foo's Tale</controlfield>
<controlfield tag="008">Skeech Skeech</controlfield>
<controlfield tag="008">Whoop Whoop</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_modify_controlfield(self):
"""bibrecord - modify controlfield"""
bibrecord.record_modify_controlfield(self.rec, "001", "34",
field_position_global=1)
bibrecord.record_modify_controlfield(self.rec, "008", "Foo Foo",
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "001"), ["34"])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "005"), ["A Foo's Tale"])
self.assertEqual(bibrecord.record_get_field_values(self.rec, "008"), ["Foo Foo", "Whoop Whoop"])
# Some crash tests
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_controlfield, self.rec, "187", "Crash",
field_position_global=1)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_controlfield, self.rec, "008", "Test",
field_position_global=10)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_controlfield, self.rec, "245", "Burn",
field_position_global=5)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "2", "%"),
["On the foo and bar2"])
class BibRecordModifySubfieldTest(InvenioTestCase):
""" bibrecord - testing subfield modification """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
<subfield code="b">On writing unit tests</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_modify_subfield(self):
"""bibrecord - modify subfield"""
bibrecord.record_modify_subfield(self.rec, "245", "a", "Holmenkollen",
0, field_position_global=4)
bibrecord.record_modify_subfield(self.rec, "245", "x", "Brann", 1,
field_position_global=4)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "245", " ", "1", "%"),
['Holmenkollen', 'Brann'])
# Some crash tests
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_subfield, self.rec, "187", "x", "Crash", 0,
field_position_global=1)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_subfield, self.rec, "245", "x", "Burn", 1,
field_position_global=999)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_modify_subfield, self.rec, "245", "a", "Burn",
999, field_position_global=4)
class BibRecordDeleteSubfieldFromTest(InvenioTestCase):
""" bibrecord - testing subfield deletion """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
<subfield code="z">Skal vi danse?</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_delete_subfield_from(self):
"""bibrecord - delete subfield from position"""
bibrecord.record_delete_subfield_from(self.rec, "100", 2,
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "z"),
[])
bibrecord.record_delete_subfield_from(self.rec, "100", 0,
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "%"),
['editor'])
bibrecord.record_delete_subfield_from(self.rec, "100", 0,
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "%"),
[])
# Some crash tests
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_delete_subfield_from, self.rec, "187", 0,
field_position_global=1)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_delete_subfield_from, self.rec, "245", 0,
field_position_global=999)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_delete_subfield_from, self.rec, "245", 999,
field_position_global=4)
class BibRecordDeleteSubfieldTest(InvenioTestCase):
""" bibrecord - testing subfield deletion """
def setUp(self):
"""Initialize stuff"""
self.xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
<subfield code="z">Skal vi danse?</subfield>
<subfield code="a">Doe3, Zbigniew</subfield>
<subfield code="d">Doe4, Joachim</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="2">
<subfield code="a">On the foo and bar2</subfield>
</datafield>
<datafield tag="246" ind1="1" ind2="2">
<subfield code="c">On the foo and bar1</subfield>
</datafield>
<datafield tag="246" ind1="1" ind2="2">
<subfield code="c">On the foo and bar2</subfield>
</datafield>
</record>
"""
def test_simple_removals(self):
""" bibrecord - delete subfield by its code"""
# testing simple removals where all the matching subfields are removed
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
bibrecord.record_delete_subfield(rec, "041", "b") # nothing should change
self.assertEqual(rec["041"][0][0], [("a", "eng")])
bibrecord.record_delete_subfield(rec, "041", "a")
self.assertEqual(rec["041"][0][0], [])
def test_indices_important(self):
""" bibrecord - delete subfield where indices are important"""
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
bibrecord.record_delete_subfield(rec, "245", "a", " ", "1")
self.assertEqual(rec["245"][0][0], [])
self.assertEqual(rec["245"][1][0], [("a", "On the foo and bar2")])
bibrecord.record_delete_subfield(rec, "245", "a", " ", "2")
self.assertEqual(rec["245"][1][0], [])
def test_remove_some(self):
""" bibrecord - delete subfield when some should be preserved and some removed"""
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
bibrecord.record_delete_subfield(rec, "100", "a", " ", " ")
self.assertEqual(rec["100"][0][0], [("b", "editor"), ("z", "Skal vi danse?"), ("d", "Doe4, Joachim")])
def test_more_fields(self):
""" bibrecord - delete subfield where more fits criteria"""
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
bibrecord.record_delete_subfield(rec, "246", "c", "1", "2")
self.assertEqual(rec["246"][1][0], [])
self.assertEqual(rec["246"][0][0], [])
def test_nonexisting_removals(self):
""" bibrecord - delete subfield that does not exist """
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
# further preparation
bibrecord.record_delete_subfield(rec, "100", "a", " ", " ")
self.assertEqual(rec["100"][0][0], [("b", "editor"), ("z", "Skal vi danse?"), ("d", "Doe4, Joachim")])
# the real tests begin
# 1) removing the subfield from an empty list of subfields
bibrecord.record_delete_subfield(rec, "246", "c", "1", "2")
self.assertEqual(rec["246"][1][0], [])
self.assertEqual(rec["246"][0][0], [])
bibrecord.record_delete_subfield(rec, "246", "8", "1", "2")
self.assertEqual(rec["246"][1][0], [])
self.assertEqual(rec["246"][0][0], [])
# 2) removing a subfield from a field that has some subfields but none has an appropriate code
bibrecord.record_delete_subfield(rec, "100", "a", " ", " ")
self.assertEqual(rec["100"][0][0], [("b", "editor"), ("z", "Skal vi danse?"), ("d", "Doe4, Joachim")])
bibrecord.record_delete_subfield(rec, "100", "e", " ", " ")
self.assertEqual(rec["100"][0][0], [("b", "editor"), ("z", "Skal vi danse?"), ("d", "Doe4, Joachim")])
class BibRecordMoveSubfieldTest(InvenioTestCase):
""" bibrecord - testing subfield moving """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Doe2, John</subfield>
<subfield code="b">editor</subfield>
<subfield code="c">fisk</subfield>
<subfield code="d">eple</subfield>
<subfield code="e">hammer</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2="1">
<subfield code="a">On the foo and bar1</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_move_subfield(self):
"""bibrecord - move subfields"""
bibrecord.record_move_subfield(self.rec, "100", 2, 4,
field_position_global=3)
bibrecord.record_move_subfield(self.rec, "100", 1, 0,
field_position_global=3)
bibrecord.record_move_subfield(self.rec, "100", 2, 999,
field_position_global=3)
self.assertEqual(bibrecord.record_get_field_values(self.rec, "100", " ", " ", "%"),
['editor', 'Doe2, John', 'hammer', 'fisk', 'eple'])
# Some crash tests
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_move_subfield, self.rec, "187", 0, 1,
field_position_global=3)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_move_subfield, self.rec, "100", 1, 0,
field_position_global=999)
self.assertRaises(bibrecord.InvenioBibRecordFieldError,
bibrecord.record_move_subfield, self.rec, "100", 999, 0,
field_position_global=3)
class BibRecordSpecialTagParsingTest(InvenioTestCase):
""" bibrecord - parsing special tags (FMT, FFT)"""
def setUp(self):
"""setting up example records"""
self.xml_example_record_with_fmt = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="FMT" ind1=" " ind2=" ">
<subfield code="f">HB</subfield>
<subfield code="g">Let us see if this gets inserted well.</subfield>
</datafield>
</record>
"""
self.xml_example_record_with_fft = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">file:///foo.pdf</subfield>
<subfield code="a">http://bar.com/baz.ps.gz</subfield>
</datafield>
</record>
"""
self.xml_example_record_with_xyz = """
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="XYZ" ind1=" " ind2=" ">
<subfield code="f">HB</subfield>
<subfield code="g">Let us see if this gets inserted well.</subfield>
</datafield>
</record>
"""
def test_parsing_file_containing_fmt_special_tag_with_correcting(self):
"""bibrecord - parsing special FMT tag, correcting on"""
rec = bibrecord.create_record(self.xml_example_record_with_fmt, 1, 1)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'FMT': [([('f', 'HB'), ('g', 'Let us see if this gets inserted well.')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FMT", " ", " ", "f"),
['HB'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FMT", " ", " ", "g"),
['Let us see if this gets inserted well.'])
def test_parsing_file_containing_fmt_special_tag_without_correcting(self):
"""bibrecord - parsing special FMT tag, correcting off"""
rec = bibrecord.create_record(self.xml_example_record_with_fmt, 1, 0)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'FMT': [([('f', 'HB'), ('g', 'Let us see if this gets inserted well.')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FMT", " ", " ", "f"),
['HB'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FMT", " ", " ", "g"),
['Let us see if this gets inserted well.'])
def test_parsing_file_containing_fft_special_tag_with_correcting(self):
"""bibrecord - parsing special FFT tag, correcting on"""
rec = bibrecord.create_record(self.xml_example_record_with_fft, 1, 1)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'FFT': [([('a', 'file:///foo.pdf'), ('a', 'http://bar.com/baz.ps.gz')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FFT", " ", " ", "a"),
['file:///foo.pdf', 'http://bar.com/baz.ps.gz'])
def test_parsing_file_containing_fft_special_tag_without_correcting(self):
"""bibrecord - parsing special FFT tag, correcting off"""
rec = bibrecord.create_record(self.xml_example_record_with_fft, 1, 0)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'FFT': [([('a', 'file:///foo.pdf'), ('a', 'http://bar.com/baz.ps.gz')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "FFT", " ", " ", "a"),
['file:///foo.pdf', 'http://bar.com/baz.ps.gz'])
def test_parsing_file_containing_xyz_special_tag_with_correcting(self):
"""bibrecord - parsing unrecognized special XYZ tag, correcting on"""
# XYZ should not get accepted when correcting is on; should get changed to 000
rec = bibrecord.create_record(self.xml_example_record_with_xyz, 1, 1)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'000': [([('f', 'HB'), ('g', 'Let us see if this gets inserted well.')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "XYZ", " ", " ", "f"),
[])
self.assertEqual(bibrecord.record_get_field_values(rec, "XYZ", " ", " ", "g"),
[])
self.assertEqual(bibrecord.record_get_field_values(rec, "000", " ", " ", "f"),
['HB'])
self.assertEqual(bibrecord.record_get_field_values(rec, "000", " ", " ", "g"),
['Let us see if this gets inserted well.'])
def test_parsing_file_containing_xyz_special_tag_without_correcting(self):
"""bibrecord - parsing unrecognized special XYZ tag, correcting off"""
# XYZ should get accepted without correcting
rec = bibrecord.create_record(self.xml_example_record_with_xyz, 1, 0)[0]
self.assertEqual(rec,
{u'001': [([], " ", " ", '33', 1)],
'XYZ': [([('f', 'HB'), ('g', 'Let us see if this gets inserted well.')], " ", " ", "", 3)],
'041': [([('a', 'eng')], " ", " ", "", 2)]})
self.assertEqual(bibrecord.record_get_field_values(rec, "041", " ", " ", "a"),
['eng'])
self.assertEqual(bibrecord.record_get_field_values(rec, "XYZ", " ", " ", "f"),
['HB'])
self.assertEqual(bibrecord.record_get_field_values(rec, "XYZ", " ", " ", "g"),
['Let us see if this gets inserted well.'])
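The four tests above pin down bibrecord's special-tag handling: with correcting on, recognized special tags (FMT, FFT) are accepted as-is while an unrecognized tag such as XYZ is renamed to '000'; with correcting off, every tag is kept. A minimal standalone sketch of just that renaming rule (not Invenio's implementation; `CFG_SPECIAL_TAGS` is an assumed stand-in for bibrecord's configured list):

```python
# Illustrative sketch of the special-tag correction rule exercised above.
# Not bibrecord's code: the real parser also handles indicators, subfields
# and field positions, which are out of scope here.
CFG_SPECIAL_TAGS = ('FMT', 'FFT')  # assumed stand-in for the configured list

def correct_tag(tag, correcting):
    """Return the tag under which a field would be stored.

    Numeric MARC tags pass through unchanged. Recognized special tags
    are kept. With correcting on, any other alphabetic tag collapses
    to '000'; with correcting off, it is kept verbatim.
    """
    if tag.isdigit() or tag in CFG_SPECIAL_TAGS:
        return tag
    return '000' if correcting else tag
```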
class BibRecordPrintingTest(InvenioTestCase):
""" bibrecord - testing for printing record """
def setUp(self):
"""Initialize stuff"""
self.xml_example_record = """
<record>
<controlfield tag="001">81</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">TEST-ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Test ti</subfield>
</datafield>
</record>"""
self.xml_example_record_short = """
<record>
<controlfield tag="001">81</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">TEST-ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">ARTICLE-2006-001</subfield>
</datafield>
</record>"""
self.xml_example_multi_records = """
<record>
<controlfield tag="001">81</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">TEST-ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Test ti</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">82</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Author, t</subfield>
</datafield>
</record>"""
self.xml_example_multi_records_short = """
<record>
<controlfield tag="001">81</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">TEST-ARTICLE-2006-001</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">ARTICLE-2006-001</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">82</controlfield>
</record>"""
def test_record_xml_output(self):
"""bibrecord - xml output"""
rec = bibrecord.create_record(self.xml_example_record, 1, 1)[0]
rec_short = bibrecord.create_record(self.xml_example_record_short, 1, 1)[0]
self.assertEqual(bibrecord.create_record(bibrecord.record_xml_output(rec, tags=[]), 1, 1)[0], rec)
self.assertEqual(bibrecord.create_record(bibrecord.record_xml_output(rec, tags=["001", "037"]), 1, 1)[0], rec_short)
self.assertEqual(bibrecord.create_record(bibrecord.record_xml_output(rec, tags=["037"]), 1, 1)[0], rec_short)
class BibRecordCreateFieldTest(InvenioTestCase):
""" bibrecord - testing for creating field """
def test_create_valid_field(self):
"""bibrecord - create and check a valid field"""
bibrecord.create_field()
bibrecord.create_field([('a', 'testa'), ('b', 'testb')], '2', 'n',
'controlfield', 15)
def test_invalid_field_raises_exception(self):
"""bibrecord - exception raised when creating an invalid field"""
# Invalid subfields.
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, 'subfields', '1', '2', 'controlfield', 10)
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, ('1', 'value'), '1', '2', 'controlfield', 10)
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [('value')], '1', '2', 'controlfield', 10)
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [('1', 'value', '2')], '1', '2', 'controlfield', 10)
# Invalid indicators.
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [], 1, '2', 'controlfield', 10)
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [], '1', 2, 'controlfield', 10)
# Invalid controlfield value
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [], '1', '2', 13, 10)
# Invalid global position
self.assertRaises(bibrecord_config.InvenioBibRecordFieldError,
bibrecord.create_field, [], '1', '2', 'controlfield', 'position')
def test_compare_fields(self):
"""bibrecord - compare fields"""
# Identical
field0 = ([('a', 'test')], '1', '2', '', 0)
field1 = ([('a', 'test')], '1', '2', '', 3)
self.assertEqual(True,
bibrecord._compare_fields(field0, field1, strict=True))
self.assertEqual(True,
bibrecord._compare_fields(field0, field1, strict=False))
# Order of the subfields changed.
field0 = ([('a', 'testa'), ('b', 'testb')], '1', '2', '', 0)
field1 = ([('b', 'testb'), ('a', 'testa')], '1', '2', '', 3)
self.assertEqual(False,
bibrecord._compare_fields(field0, field1, strict=True))
self.assertEqual(True,
bibrecord._compare_fields(field0, field1, strict=False))
# Different
field0 = ([], '3', '2', '', 0)
field1 = ([], '1', '2', '', 3)
self.assertEqual(False,
bibrecord._compare_fields(field0, field1, strict=True))
self.assertEqual(False,
bibrecord._compare_fields(field0, field1, strict=False))
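The comparison semantics tested above can be restated in a few lines: a field is a `(subfields, ind1, ind2, controlfield_value, global_position)` tuple, the global position is ignored in both modes, strict mode requires identical subfield order, and loose mode treats subfields as an unordered collection. A standalone sketch of that rule (illustrative, not bibrecord's `_compare_fields`):

```python
# Sketch of the strict/loose field comparison the tests above rely on.
# A field is (subfields, ind1, ind2, controlfield_value, global_position);
# the global position (last element) is ignored in both modes.
def compare_fields(field1, field2, strict=True):
    if field1[1:4] != field2[1:4]:  # indicators + controlfield value
        return False
    if strict:
        return field1[0] == field2[0]  # subfield order matters
    return sorted(field1[0]) == sorted(field2[0])  # order ignored
```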
class BibRecordFindFieldTest(InvenioTestCase):
""" bibrecord - testing for finding field """
def setUp(self):
"""Initialize stuff"""
xml = """
<record>
<controlfield tag="001">81</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">TEST-ARTICLE-2006-001</subfield>
<subfield code="b">ARTICLE-2007-001</subfield>
</datafield>
</record>
"""
self.rec = bibrecord.create_record(xml)[0]
self.field0 = self.rec['001'][0]
self.field1 = self.rec['037'][0]
self.field2 = (
[self.field1[0][1], self.field1[0][0]],
self.field1[1],
self.field1[2],
self.field1[3],
self.field1[4],
)
def test_finding_field_strict(self):
"""bibrecord - test finding field strict"""
self.assertEqual((1, 0),
bibrecord.record_find_field(self.rec, '001', self.field0,
strict=True))
self.assertEqual((2, 0),
bibrecord.record_find_field(self.rec, '037', self.field1,
strict=True))
self.assertEqual((None, None),
bibrecord.record_find_field(self.rec, '037', self.field2,
strict=True))
def test_finding_field_loose(self):
"""bibrecord - test finding field loose"""
self.assertEqual((1, 0),
bibrecord.record_find_field(self.rec, '001', self.field0,
strict=False))
self.assertEqual((2, 0),
bibrecord.record_find_field(self.rec, '037', self.field1,
strict=False))
self.assertEqual((2, 0),
bibrecord.record_find_field(self.rec, '037', self.field2,
strict=False))
class BibRecordSingletonTest(InvenioTestCase):
""" bibrecord - testing singleton removal """
def setUp(self):
"""Initialize stuff"""
self.xml = """<collection>
<record>
<controlfield tag="001">33</controlfield>
<controlfield tag="002" />
<datafield tag="99" ind1=" " ind2=" "/>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a" />
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Some value</subfield>
</datafield>
<tagname />
</record>
<record />
</collection>"""
self.rec_expected = {
'001': [([], ' ', ' ', '33', 1)],
'100': [([('a', 'Some value')], ' ', ' ', '', 2)],
}
if parser_minidom_available:
def test_singleton_removal_minidom(self):
"""bibrecord - enforcing singleton removal with minidom"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='minidom',
keep_singletons=False)[0][0]
self.assertEqual(rec, self.rec_expected)
if parser_4suite_available:
def test_singleton_removal_4suite(self):
"""bibrecord - enforcing singleton removal with 4suite"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='4suite',
keep_singletons=False)[0][0]
self.assertEqual(rec, self.rec_expected)
if parser_pyrxp_available:
def test_singleton_removal_pyrxp(self):
"""bibrecord - enforcing singleton removal with pyrxp"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='pyrxp',
keep_singletons=False)[0][0]
self.assertEqual(rec, self.rec_expected)
if parser_lxml_available:
def test_singleton_removal_lxml(self):
"""bibrecord - enforcing singleton removal with lxml"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='lxml',
keep_singletons=False)[0][0]
self.assertEqual(rec, self.rec_expected)
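The `keep_singletons=False` behaviour verified above (empty subfields are dropped, and fields or records left with no content disappear) can be approximated with the standard library alone. A rough stdlib-only sketch, not bibrecord's parser; it omits indicators and global field positions that the real record structure carries:

```python
from xml.dom.minidom import parseString

def strip_singletons(xml):
    """Keep only subfields with text; drop fields and records left empty.

    Rough stdlib-only approximation of create_records(...,
    keep_singletons=False); indicators and positions are omitted.
    """
    records = []
    for rec in parseString(xml).getElementsByTagName('record'):
        fields = {}
        for cf in rec.getElementsByTagName('controlfield'):
            value = ''.join(n.data for n in cf.childNodes
                            if n.nodeType == n.TEXT_NODE).strip()
            if value:
                fields.setdefault(cf.getAttribute('tag'), []).append(value)
        for df in rec.getElementsByTagName('datafield'):
            subs = []
            for sf in df.getElementsByTagName('subfield'):
                text = ''.join(n.data for n in sf.childNodes
                               if n.nodeType == n.TEXT_NODE).strip()
                if text:
                    subs.append((sf.getAttribute('code'), text))
            if subs:
                fields.setdefault(df.getAttribute('tag'), []).append(subs)
        if fields:  # an empty <record/> is itself a singleton
            records.append(fields)
    return records
```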
class BibRecordNumCharRefTest(InvenioTestCase):
""" bibrecord - testing numerical character reference expansion"""
def setUp(self):
"""Initialize stuff"""
self.xml = """<?xml version="1.0" encoding="UTF-8"?>
<record>
<controlfield tag="001">33</controlfield>
<datafield tag="123" ind1=" " ind2=" ">
<subfield code="a">Σ &amp; &#931;</subfield>
<subfield code="a">use &amp;amp; in XML</subfield>
</datafield>
</record>"""
self.rec_expected = {
'001': [([], ' ', ' ', '33', 1)],
'123': [([('a', '\xce\xa3 & \xce\xa3'), ('a', 'use &amp; in XML'),], ' ', ' ', '', 2)],
}
if parser_minidom_available:
def test_numcharref_expansion_minidom(self):
"""bibrecord - numcharref expansion with minidom"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='minidom')[0][0]
self.assertEqual(rec, self.rec_expected)
if parser_4suite_available:
def test_numcharref_expansion_4suite(self):
"""bibrecord - numcharref expansion with 4suite"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='4suite')[0][0]
self.assertEqual(rec, self.rec_expected)
if parser_pyrxp_available:
def test_numcharref_expansion_pyrxp(self):
"""bibrecord - but *no* numcharref expansion with pyrxp (see notes)
FIXME: pyRXP does not seem to like num char ref entities,
so this test is mostly left here in a TDD style in order
to remind us of this fact. If we want to fix this
situation, then we should probably use pyRXPU that uses
Unicode strings internally, hence it is num char ref
friendly. Maybe we should use pyRXPU by default, if
performance is acceptable, or maybe we should introduce a
flag to govern this behaviour.
"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='pyrxp')[0][0]
#self.assertEqual(rec, self.rec_expected)
self.assertEqual(rec, None)
if parser_lxml_available:
def test_numcharref_expansion_lxml(self):
"""bibrecord - numcharref expansion with lxml"""
rec = bibrecord.create_records(self.xml, verbose=1,
correct=1, parser='lxml')[0][0]
self.assertEqual(rec, self.rec_expected)
class BibRecordExtractIdentifiersTest(InvenioTestCase):
""" bibrecord - testing for getting identifiers from record """
def setUp(self):
"""Initialize stuff"""
xml_example_record = """
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1="C" ind2="5">
<subfield code="a">val1</subfield>
</datafield>
<datafield tag="024" ind1="7" ind2=" ">
<subfield code="2">doi</subfield>
<subfield code="a">5555/TEST1</subfield>
</datafield>
<datafield tag="024" ind1="7" ind2=" ">
<subfield code="2">DOI</subfield>
<subfield code="a">5555/TEST2</subfield>
</datafield>
<datafield tag="024" ind1="7" ind2=" ">
<subfield code="2">nondoi</subfield>
<subfield code="a">5555/TEST3</subfield>
</datafield>
<datafield tag="024" ind1="8" ind2=" ">
<subfield code="2">doi</subfield>
<subfield code="a">5555/TEST4</subfield>
</datafield>
<datafield tag="%(oai_tag)s" ind1="%(oai_ind1)s" ind2="%(oai_ind2)s">
<subfield code="%(oai_subcode)s">oai:atlantis:1</subfield>
</datafield>
</record>
""" % {'oai_tag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'oai_ind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3],
'oai_ind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4],
'oai_subcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5],
}
self.rec = bibrecord.create_record(xml_example_record, 1, 1)[0]
def test_extract_doi(self):
"""bibrecord - getting DOI identifier(s) from record"""
self.assertEqual(bibrecord.record_extract_dois(self.rec),
['5555/TEST1', '5555/TEST2'])
def test_extract_oai_id(self):
"""bibrecord - getting OAI identifier(s) from record"""
self.assertEqual(bibrecord.record_extract_oai_id(self.rec),
'oai:atlantis:1')
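The DOI expectations above encode a selection rule: take 024 fields whose first indicator is '7' and whose $2 source reads 'doi' case-insensitively, and collect their $a values (so the `nondoi` and indicator-'8' fields are skipped). A standalone sketch of that rule using the `(subfields, ind1, ind2, value, position)` field convention (illustrative, not `record_extract_dois` itself):

```python
# Sketch of the DOI-extraction rule the tests above verify; not Invenio's
# implementation, which works on full bibrecord structures.
def extract_dois(rec):
    dois = []
    for subfields, ind1, _ind2, _value, _pos in rec.get('024', []):
        sources = [v for c, v in subfields if c == '2']
        if ind1 == '7' and any(s.lower() == 'doi' for s in sources):
            dois.extend(v for c, v in subfields if c == 'a')
    return dois
```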
TEST_SUITE = make_test_suite(
BibRecordSuccessTest,
BibRecordParsersTest,
BibRecordBadInputTreatmentTest,
BibRecordGettingFieldValuesTest,
BibRecordGettingFieldValuesViaWildcardsTest,
BibRecordAddFieldTest,
BibRecordDeleteFieldTest,
BibRecordManageMultipleFieldsTest,
BibRecordDeleteFieldFromTest,
BibRecordAddSubfieldIntoTest,
BibRecordModifyControlfieldTest,
BibRecordModifySubfieldTest,
BibRecordDeleteSubfieldFromTest,
BibRecordMoveSubfieldTest,
BibRecordAccentedUnicodeLettersTest,
BibRecordSpecialTagParsingTest,
BibRecordPrintingTest,
BibRecordCreateFieldTest,
BibRecordFindFieldTest,
BibRecordDeleteSubfieldTest,
BibRecordSingletonTest,
BibRecordNumCharRefTest,
BibRecordExtractIdentifiersTest,
)
if __name__ == '__main__':
run_test_suite(TEST_SUITE)
diff --git a/modules/bibsort/bin/bibsort.in b/modules/bibsort/bin/bibsort.in
old mode 100755
new mode 100644
diff --git a/modules/bibsword/lib/bibsword_config.py b/modules/bibsword/lib/bibsword_config.py
index 5d90b7b44..e0301a09c 100644
--- a/modules/bibsword/lib/bibsword_config.py
+++ b/modules/bibsword/lib/bibsword_config.py
@@ -1,143 +1,140 @@
## This file is part of Invenio.
-## Copyright (C) 2010, 2011 CERN.
+## Copyright (C) 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
'''
Forward to ArXiv.org source code
'''
from invenio.bibformat_dblayer import get_tag_from_name
#Maximum time (in sec) to keep the stored XML Service document before reloading it
CFG_BIBSWORD_SERVICEDOCUMENT_UPDATE_TIME = 3600
#Default submission status
CFG_SUBMISSION_STATUS_SUBMITTED = "submitted"
CFG_SUBMISSION_STATUS_PUBLISHED = "published"
CFG_SUBMISSION_STATUS_ONHOLD = "onhold"
CFG_SUBMISSION_STATUS_REMOVED = "removed"
CFG_SUBMIT_ARXIV_INFO_MESSAGE = "Submitted from Invenio to arXiv by %s, on %s, as %s"
CFG_DOCTYPE_UPLOAD_COLLECTION = 'PUSHED_TO_ARXIV'
# report number:
marc_tag_main_report_number = get_tag_from_name('primary report number')
if marc_tag_main_report_number:
CFG_MARC_REPORT_NUMBER = marc_tag_main_report_number
else:
CFG_MARC_REPORT_NUMBER = '037__a'
# title:
marc_tag_title = get_tag_from_name('title')
if marc_tag_title:
CFG_MARC_TITLE = marc_tag_title
else:
CFG_MARC_TITLE = '245__a'
# author name:
marc_tag_author = get_tag_from_name('first author name')
if marc_tag_author:
CFG_MARC_AUTHOR_NAME = marc_tag_author
else:
CFG_MARC_AUTHOR_NAME = '100__a'
# author affiliation
marc_tag_author_affiliation = get_tag_from_name('first author affiliation')
if marc_tag_author_affiliation:
CFG_MARC_AUTHOR_AFFILIATION = marc_tag_author_affiliation
else:
CFG_MARC_AUTHOR_AFFILIATION = '100__u'
# contributor name:
marc_tag_contributor_name = get_tag_from_name('additional author name')
if marc_tag_contributor_name:
CFG_MARC_CONTRIBUTOR_NAME = marc_tag_contributor_name
else:
- CFG_MARC_CONTRIBUTOR_NAME = '700_a'
+ CFG_MARC_CONTRIBUTOR_NAME = '700__a'
# contributor affiliation:
marc_tag_contributor_affiliation = get_tag_from_name('additional author affiliation')
if marc_tag_contributor_affiliation:
CFG_MARC_CONTRIBUTOR_AFFILIATION = marc_tag_contributor_affiliation
else:
- CFG_MARC_CONTRIBUTOR_AFFILIATION = '700_u'
+ CFG_MARC_CONTRIBUTOR_AFFILIATION = '700__u'
# abstract:
marc_tag_abstract = get_tag_from_name('main abstract')
if marc_tag_abstract:
CFG_MARC_ABSTRACT = marc_tag_abstract
else:
CFG_MARC_ABSTRACT = '520__a'
# additional report number
marc_tag_additional_report_number = get_tag_from_name('additional report number')
if marc_tag_additional_report_number:
CFG_MARC_ADDITIONAL_REPORT_NUMBER = marc_tag_additional_report_number
else:
CFG_MARC_ADDITIONAL_REPORT_NUMBER = '088__a'
# doi
marc_tag_doi = get_tag_from_name('doi')
if marc_tag_doi:
CFG_MARC_DOI = marc_tag_doi
else:
CFG_MARC_DOI = '909C4a'
# journal code
marc_tag_journal_ref_code = get_tag_from_name('journal code')
if marc_tag_journal_ref_code:
CFG_MARC_JOURNAL_REF_CODE = marc_tag_journal_ref_code
else:
CFG_MARC_JOURNAL_REF_CODE = '909C4c'
# journal reference title
marc_tag_journal_ref_title = get_tag_from_name('journal title')
if marc_tag_journal_ref_title:
CFG_MARC_JOURNAL_REF_TITLE = marc_tag_journal_ref_title
else:
CFG_MARC_JOURNAL_REF_TITLE = '909C4p'
# journal reference page
marc_tag_journal_ref_page = get_tag_from_name('journal page')
if marc_tag_journal_ref_page:
CFG_MARC_JOURNAL_REF_PAGE = marc_tag_journal_ref_page
else:
CFG_MARC_JOURNAL_REF_PAGE = '909C4v'
# journal reference year
marc_tag_journal_ref_year = get_tag_from_name('journal year')
if marc_tag_journal_ref_year:
CFG_MARC_JOURNAL_REF_YEAR = marc_tag_journal_ref_year
else:
CFG_MARC_JOURNAL_REF_YEAR = '909C4y'
# comment
marc_tag_comment = get_tag_from_name('comment')
if marc_tag_comment:
CFG_MARC_COMMENT = marc_tag_comment
else:
CFG_MARC_COMMENT = '500__a'
# internal note field
marc_tag_internal_note = get_tag_from_name('internal notes')
if marc_tag_internal_note:
CFG_MARC_RECORD_SUBMIT_INFO = marc_tag_internal_note
else:
CFG_MARC_RECORD_SUBMIT_INFO = '595__a'
-
-
-
diff --git a/modules/bibupload/lib/bibupload.py b/modules/bibupload/lib/bibupload.py
index 7898cde4f..9de785a09 100644
--- a/modules/bibupload/lib/bibupload.py
+++ b/modules/bibupload/lib/bibupload.py
@@ -1,2915 +1,2937 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
BibUpload: Receive MARC XML file and update the appropriate database
tables according to options.
"""
__revision__ = "$Id$"
import os
import re
import sys
import time
from datetime import datetime
from zlib import compress
import socket
import marshal
import copy
import tempfile
import urlparse
import urllib2
import urllib
from invenio.config import CFG_OAI_ID_FIELD, \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG, \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG, \
CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG, \
CFG_BIBUPLOAD_STRONG_TAGS, \
CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS, \
CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE, \
CFG_BIBUPLOAD_DELETE_FORMATS, \
CFG_SITE_URL, CFG_SITE_SECURE_URL, CFG_SITE_RECORD, \
CFG_OAI_PROVENANCE_ALTERED_SUBFIELD, \
CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS, \
CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE
from invenio.jsonutils import json, CFG_JSON_AVAILABLE
from invenio.bibupload_config import CFG_BIBUPLOAD_CONTROLFIELD_TAGS, \
CFG_BIBUPLOAD_SPECIAL_TAGS, \
CFG_BIBUPLOAD_DELETE_CODE, \
CFG_BIBUPLOAD_DELETE_VALUE, \
CFG_BIBUPLOAD_OPT_MODES
from invenio.dbquery import run_sql, \
Error
from invenio.bibrecord import create_records, \
record_add_field, \
record_delete_field, \
record_xml_output, \
record_get_field_instances, \
record_get_field_value, \
record_get_field_values, \
field_get_subfield_values, \
field_get_subfield_instances, \
record_modify_subfield, \
record_delete_subfield_from, \
record_delete_fields, \
record_add_subfield_into, \
record_find_field, \
record_extract_oai_id, \
record_extract_dois, \
record_has_field,\
records_identical
from invenio.search_engine import get_record
from invenio.dateutils import convert_datestruct_to_datetext
from invenio.errorlib import register_exception
from invenio.bibcatalog import bibcatalog_system
from invenio.intbitset import intbitset
from invenio.urlutils import make_user_agent_string
from invenio.config import CFG_BIBDOCFILE_FILEDIR
from invenio.bibtask import task_init, write_message, \
task_set_option, task_get_option, task_get_task_param, task_update_status, \
task_update_progress, task_sleep_now_if_required, fix_argv_paths
from invenio.bibdocfile import BibRecDocs, file_strip_ext, normalize_format, \
get_docname_from_url, check_valid_url, download_url, \
KEEP_OLD_VALUE, decompose_bibdocfile_url, InvenioBibDocFileError, \
bibdocfile_url_p, CFG_BIBDOCFILE_AVAILABLE_FLAGS, guess_format_from_url, \
BibRelation, MoreInfo
from invenio.search_engine import search_pattern
from invenio.bibupload_revisionverifier import RevisionVerifier, \
InvenioBibUploadConflictingRevisionsError, \
InvenioBibUploadInvalidRevisionError, \
InvenioBibUploadMissing005Error, \
InvenioBibUploadUnchangedRecordError
#Statistic variables
stat = {}
stat['nb_records_to_upload'] = 0
stat['nb_records_updated'] = 0
stat['nb_records_inserted'] = 0
stat['nb_errors'] = 0
stat['nb_holdingpen'] = 0
stat['exectime'] = time.localtime()
_WRITING_RIGHTS = None
CFG_BIBUPLOAD_ALLOWED_SPECIAL_TREATMENTS = ('oracle', )
CFG_HAS_BIBCATALOG = "UNKNOWN"
def check_bibcatalog():
"""
Return True if bibcatalog is available.
"""
global CFG_HAS_BIBCATALOG # pylint: disable=W0603
if CFG_HAS_BIBCATALOG != "UNKNOWN":
return CFG_HAS_BIBCATALOG
CFG_HAS_BIBCATALOG = True
if bibcatalog_system is not None:
bibcatalog_response = bibcatalog_system.check_system()
else:
bibcatalog_response = "No ticket system configured"
if bibcatalog_response != "":
write_message("BibCatalog error: %s\n" % (bibcatalog_response,))
CFG_HAS_BIBCATALOG = False
return CFG_HAS_BIBCATALOG
## Let's set a reasonable timeout for URL request (e.g. FFT)
socket.setdefaulttimeout(40)
def parse_identifier(identifier):
"""Parse the identifier and determine if it is temporary or fixed"""
id_str = str(identifier)
if not id_str.startswith("TMP:"):
return (False, identifier)
else:
return (True, id_str[4:])
def resolve_identifier(tmps, identifier):
"""Resolves an identifier. If the identifier is not temporary, this
function is an identity on the second argument. Otherwise, a resolved
value is returned or an exception raised"""
is_tmp, tmp_id = parse_identifier(identifier)
if is_tmp:
if not tmp_id in tmps:
raise StandardError("Temporary identifier %s not present in the dictionary" % (tmp_id, ))
if tmps[tmp_id] == -1:
# the identifier has been signalled but never assigned a value - probably an error during processing
raise StandardError("Temporary identifier %s has been declared, but never assigned a value. Probably an error happened during processing of the corresponding FFT. Please see the log" % (tmp_id, ))
return int(tmps[tmp_id])
else:
return int(identifier)
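The two helpers above implement a small convention: identifiers prefixed with "TMP:" are looked up in a dictionary of temporary assignments, everything else is taken at face value. Restated standalone below for illustration (exception types simplified here; the toy `tmps` mapping in the usage is made up):

```python
# Standalone restatement of the TMP: identifier convention above.
# Exception types are simplified; the real code raises StandardError.
def parse_identifier(identifier):
    """Split an identifier into (is_temporary, value)."""
    id_str = str(identifier)
    if not id_str.startswith("TMP:"):
        return (False, identifier)
    return (True, id_str[4:])

def resolve_identifier(tmps, identifier):
    """Return the integer value behind an identifier, resolving TMP: ones."""
    is_tmp, tmp_id = parse_identifier(identifier)
    if not is_tmp:
        return int(identifier)
    if tmp_id not in tmps:
        raise KeyError("Temporary identifier %s not present" % tmp_id)
    if tmps[tmp_id] == -1:
        raise ValueError("Temporary identifier %s never assigned a value" % tmp_id)
    return int(tmps[tmp_id])
```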
_re_find_001 = re.compile('<controlfield\\s+tag=("001"|\'001\')\\s*>\\s*(\\d*)\\s*</controlfield>', re.S)
def bibupload_pending_recids():
"""This function embed a bit of A.I. and is more a hack than an elegant
algorithm. It should be updated in case bibupload/bibsched are modified
in incompatible ways.
This function return the intbitset of all the records that are being
(or are scheduled to be) touched by other bibuploads.
"""
options = run_sql("""SELECT arguments FROM schTASK WHERE status<>'DONE' AND
proc='bibupload' AND (status='RUNNING' OR status='CONTINUING' OR
status='WAITING' OR status='SCHEDULED' OR status='ABOUT TO STOP' OR
status='ABOUT TO SLEEP')""")
ret = intbitset()
xmls = []
if options:
for arguments in options:
arguments = marshal.loads(arguments[0])
for argument in arguments[1:]:
if argument.startswith('/'):
# XMLs files are recognizable because they're absolute
# files...
xmls.append(argument)
for xmlfile in xmls:
# Let's grep for the 001
try:
xml = open(xmlfile).read()
ret += [int(group[1]) for group in _re_find_001.findall(xml)]
except:
continue
return ret
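For reference, the `_re_find_001` grep used above behaves like this standalone snippet (the input MARCXML is made up):

```python
import re

# The 001-grepping regex from bibupload, applied to a made-up snippet.
_re_find_001 = re.compile(
    '<controlfield\\s+tag=("001"|\'001\')\\s*>\\s*(\\d*)\\s*</controlfield>', re.S)

xml = '<record><controlfield tag="001"> 42 </controlfield></record>'
# findall returns one tuple per match; group[1] is the captured record id.
recids = [int(group[1]) for group in _re_find_001.findall(xml)]  # [42]
```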
### bibupload engine functions:
def bibupload(record, opt_mode=None, opt_notimechange=0, oai_rec_id="", pretend=False,
tmp_ids=None, tmp_vers=None):
"""Main function: process a record and fit it in the tables
bibfmt, bibrec, bibrec_bibxxx, bibxxx with proper record
metadata.
Return (error_code, recID) of the processed record.
"""
if tmp_ids is None:
tmp_ids = {}
if tmp_vers is None:
tmp_vers = {}
if opt_mode == 'reference':
## NOTE: reference mode has been deprecated in favour of 'correct'
opt_mode = 'correct'
assert(opt_mode in CFG_BIBUPLOAD_OPT_MODES)
error = None
affected_tags = {}
original_record = {}
rec_old = {}
now = datetime.now() # will hold record creation/modification date
record_had_altered_bit = False
is_opt_mode_delete = False
# Extraction of the Record Id from 001, SYSNO or OAIID or DOI tags:
rec_id = retrieve_rec_id(record, opt_mode, pretend=pretend)
if rec_id == -1:
msg = " Failed: either the record already exists and insert was " \
"requested or the record does not exists and " \
"replace/correct/append has been used"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, -1, msg)
elif rec_id > 0:
write_message(" -Retrieve record ID (found %s): DONE." % rec_id, verbose=2)
(unique_p, msg) = check_record_doi_is_unique(rec_id, record)
if not unique_p:
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
if not record.has_key('001'):
# Found record ID by means of SYSNO or OAIID or DOI, and the
# input MARCXML buffer does not have this 001 tag, so we
# should add it now:
error = record_add_field(record, '001', controlfield_value=rec_id)
if error is None:
msg = " Failed: Error during adding the 001 controlfield " \
"to the record"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
else:
error = None
write_message(" -Added tag 001: DONE.", verbose=2)
write_message(" -Check if the xml marc file is already in the database: DONE" , verbose=2)
record_deleted_p = False
if opt_mode == 'insert' or \
(opt_mode == 'replace_or_insert') and rec_id is None:
insert_mode_p = True
# Insert the record into the bibrec databases to have a recordId
rec_id = create_new_record(pretend=pretend)
write_message(" -Creation of a new record id (%d): DONE" % rec_id, verbose=2)
# we add the record Id control field to the record
error = record_add_field(record, '001', controlfield_value=rec_id)
if error is None:
msg = " Failed: Error during adding the 001 controlfield " \
"to the record"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
else:
error = None
- error = record_add_field(record, '005', controlfield_value=now.strftime("%Y%m%d%H%M%S.0"))
- if error is None:
- msg = " Failed: Error during adding to 005 controlfield to record"
- write_message(msg, verbose=1, stream=sys.stderr)
- return (1, int(rec_id), msg)
+ if '005' not in record:
+ error = record_add_field(record, '005', controlfield_value=now.strftime("%Y%m%d%H%M%S.0"))
+ if error is None:
+ msg = " Failed: Error during adding the 005 controlfield to the record"
+ write_message(msg, verbose=1, stream=sys.stderr)
+ return (1, int(rec_id), msg)
+ else:
+ error = None
else:
- error=None
+ write_message(" Note: 005 already existing upon inserting of new record. Keeping it.", verbose=2)
elif opt_mode != 'insert':
insert_mode_p = False
# Update Mode
# Retrieve the old record to update
rec_old = get_record(rec_id)
record_had_altered_bit = record_get_field_values(rec_old, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4], CFG_OAI_PROVENANCE_ALTERED_SUBFIELD)
# Also save a copy to restore previous situation in case of errors
original_record = get_record(rec_id)
if rec_old is None:
msg = " Failed during the creation of the old record!"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
else:
write_message(" -Retrieve the old record to update: DONE", verbose=2)
# flag to check whether the revisions have been verified and patch generated.
# If revision verification failed, then we need to manually identify the affected tags
# and process them
revision_verified = False
rev_verifier = RevisionVerifier()
#check for revision conflicts before updating record
if record_has_field(record, '005') and not CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS:
write_message(" -Upload Record has 005. Verifying Revision", verbose=2)
try:
rev_res = rev_verifier.verify_revision(record, original_record, opt_mode)
if rev_res:
opt_mode = rev_res[0]
record = rev_res[1]
affected_tags = rev_res[2]
revision_verified = True
write_message(lambda: " -Patch record generated. Changing opt_mode to correct.\nPatch:\n%s " % record_xml_output(record), verbose=2)
else:
write_message(" -No Patch Record.", verbose=2)
except InvenioBibUploadUnchangedRecordError, err:
msg = " -ISSUE: %s" % err
write_message(msg, verbose=1, stream=sys.stderr)
write_message(msg + " Continuing anyway in case there are FFT or other tags")
except InvenioBibUploadConflictingRevisionsError, err:
msg = " -ERROR: Conflicting Revisions - %s" % err
write_message(msg, verbose=1, stream=sys.stderr)
submit_ticket_for_holding_pen(rec_id, err, "Conflicting Revisions. Inserting record into holding pen.")
insert_record_into_holding_pen(record, str(rec_id))
return (2, int(rec_id), msg)
except InvenioBibUploadInvalidRevisionError, err:
msg = " -ERROR: Invalid Revision - %s" % err
write_message(msg)
submit_ticket_for_holding_pen(rec_id, err, "Invalid Revisions. Inserting record into holding pen.")
insert_record_into_holding_pen(record, str(rec_id))
return (2, int(rec_id), msg)
except InvenioBibUploadMissing005Error, err:
msg = " -ERROR: Missing 005 - %s" % err
write_message(msg)
submit_ticket_for_holding_pen(rec_id, err, "Missing 005. Inserting record into holding pen.")
insert_record_into_holding_pen(record, str(rec_id))
return (2, int(rec_id), msg)
else:
write_message(" - No 005 Tag Present. Resuming normal flow.", verbose=2)
# dictionaries to temporarily hold original recs tag-fields
existing_tags = {}
retained_tags = {}
# in case of delete operation affected tags should be deleted in delete_bibrec_bibxxx
# but should not be updated again in STAGE 4
# utilising the below flag
is_opt_mode_delete = False
if not revision_verified:
# either 005 was not present or opt_mode was not correct/replace
# in this case we still need to find out affected tags to process
write_message(" - Missing 005 or opt_mode != replace/correct. Revision Verifier not called.", verbose=2)
# Identify affected tags
if opt_mode == 'correct' or opt_mode == 'replace' or opt_mode == 'replace_or_insert':
rec_diff = rev_verifier.compare_records(record, original_record, opt_mode)
affected_tags = rev_verifier.retrieve_affected_tags_with_ind(rec_diff)
elif opt_mode == 'delete':
# populate an intermediate dictionary
# used in upcoming step related to 'delete' mode
is_opt_mode_delete = True
for tag, fields in original_record.iteritems():
existing_tags[tag] = [tag + (field[1] != ' ' and field[1] or '_') + (field[2] != ' ' and field[2] or '_') for field in fields]
elif opt_mode == 'append':
for tag, fields in record.iteritems():
if tag not in CFG_BIBUPLOAD_CONTROLFIELD_TAGS:
affected_tags[tag]=[(field[1], field[2]) for field in fields]
# In Replace mode, take over old strong tags if applicable:
if opt_mode == 'replace' or \
opt_mode == 'replace_or_insert':
copy_strong_tags_from_old_record(record, rec_old)
# Delete tags to correct in the record
if opt_mode == 'correct':
delete_tags_to_correct(record, rec_old)
write_message(" -Delete the old tags to correct in the old record: DONE",
verbose=2)
# Delete tags specified if in delete mode
if opt_mode == 'delete':
record = delete_tags(record, rec_old)
for tag, fields in record.iteritems():
retained_tags[tag] = [tag + (field[1] != ' ' and field[1] or '_') + (field[2] != ' ' and field[2] or '_') for field in fields]
#identify the tags that have been deleted
for tag in existing_tags.keys():
if tag not in retained_tags:
for item in existing_tags[tag]:
tag_to_add = item[0:3]
ind1, ind2 = item[3], item[4]
if tag_to_add in affected_tags and (ind1, ind2) not in affected_tags[tag_to_add]:
affected_tags[tag_to_add].append((ind1, ind2))
else:
affected_tags[tag_to_add] = [(ind1, ind2)]
else:
deleted = list(set(existing_tags[tag]) - set(retained_tags[tag]))
for item in deleted:
tag_to_add = item[0:3]
ind1, ind2 = item[3], item[4]
if tag_to_add in affected_tags and (ind1, ind2) not in affected_tags[tag_to_add]:
affected_tags[tag_to_add].append((ind1, ind2))
else:
affected_tags[tag_to_add] = [(ind1, ind2)]
write_message(" -Delete specified tags in the old record: DONE", verbose=2)
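The delete-mode bookkeeping above encodes each field as `tag + ind1 + ind2`, substituting `_` for blank indicators, so that set arithmetic over those keys identifies which (tag, indicator-pair) combinations were removed. A minimal standalone sketch of that encoding and the set difference (function names are illustrative, not Invenio API):

```python
def encode_field_key(tag, ind1, ind2):
    """Encode a MARC field as tag+ind1+ind2, using '_' for blank indicators."""
    return tag + (ind1 if ind1 != ' ' else '_') + (ind2 if ind2 != ' ' else '_')

def deleted_indicator_pairs(existing, retained):
    """Map each key present in `existing` but not `retained` back to
    {tag: [(ind1, ind2), ...]}, decoding '_' back to a blank indicator."""
    affected = {}
    for key in set(existing) - set(retained):
        tag = key[0:3]
        ind1 = ' ' if key[3] == '_' else key[3]
        ind2 = ' ' if key[4] == '_' else key[4]
        affected.setdefault(tag, []).append((ind1, ind2))
    return affected
```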
# Append new tag to the old record and update the new record with the old_record modified
if opt_mode == 'append' or opt_mode == 'correct':
record = append_new_tag_to_old_record(record, rec_old)
write_message(" -Append new tags to the old record: DONE", verbose=2)
write_message(" -Affected Tags found after comparing upload and original records: %s"%(str(affected_tags)), verbose=2)
# A 005 tag should be added every time the record is modified
# If an existing record is modified, its 005 tag should be overwritten with a new revision value
if record.has_key('005'):
record_delete_field(record, '005')
write_message(" Deleted the existing 005 tag.", verbose=2)
last_revision = run_sql("SELECT MAX(job_date) FROM hstRECORD WHERE id_bibrec=%s", (rec_id, ))[0][0]
if last_revision and last_revision.strftime("%Y%m%d%H%M%S.0") == now.strftime("%Y%m%d%H%M%S.0"):
## We are updating the same record within the same second! It's less than
## the minimal granularity. Let's pause for 1 more second to take a breath :-)
time.sleep(1)
now = datetime.now()
error = record_add_field(record, '005', controlfield_value=now.strftime("%Y%m%d%H%M%S.0"))
if error is None:
msg = " Failed: error while adding the 005 controlfield to the record"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
else:
error = None
write_message(lambda: " -Added tag 005: DONE. "+ str(record_get_field_value(record, '005', '', '')), verbose=2)
# adding 005 to affected tags will delete the existing 005 entry
# and update with the latest timestamp.
if '005' not in affected_tags:
affected_tags['005'] = [(' ', ' ')]
write_message(" -Stage COMPLETED", verbose=2)
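The 005 revision value above has second-level granularity (`YYYYMMDDhhmmss.0`), which is why the code sleeps for one second when two uploads of the same record collide within the same second. A self-contained sketch of that timestamp scheme (helper names are illustrative):

```python
from datetime import datetime
import time

def make_005_value(now=None):
    """Format a datetime as a MARC 005 controlfield value (second granularity)."""
    return (now or datetime.now()).strftime("%Y%m%d%H%M%S.0")

def next_005_value(last_value):
    """Return a 005 value strictly newer than last_value, sleeping out the
    current second if the two revisions would otherwise collide."""
    value = make_005_value()
    while value == last_value:
        time.sleep(1)  # below the minimal granularity: wait for the next second
        value = make_005_value()
    return value
```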
record_deleted_p = False
try:
if not record_is_valid(record):
msg = "ERROR: record is not valid"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, -1, msg)
# Have a look if we have FFT tags
write_message("Stage 2: Start (Process FFT tags if exist).", verbose=2)
record_had_FFT = False
if extract_tag_from_record(record, 'FFT') is not None:
record_had_FFT = True
if not writing_rights_p():
write_message(" Stage 2 failed: Error no rights to write fulltext files",
verbose=1, stream=sys.stderr)
task_update_status("ERROR")
sys.exit(1)
try:
record = elaborate_fft_tags(record, rec_id, opt_mode,
pretend=pretend, tmp_ids=tmp_ids,
tmp_vers=tmp_vers)
except Exception, e:
register_exception()
msg = " Stage 2 failed: Error while elaborating FFT tags: %s" % e
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
if record is None:
msg = " Stage 2 failed: Error while elaborating FFT tags"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
write_message(" -Stage COMPLETED", verbose=2)
else:
write_message(" -Stage NOT NEEDED", verbose=2)
# Have a look if we have 856 tags or if FFT tags were processed
write_message("Stage 2B: Start (Synchronize 8564 tags).", verbose=2)
if record_had_FFT or extract_tag_from_record(record, '856') is not None:
try:
record = synchronize_8564(rec_id, record, record_had_FFT, pretend=pretend)
# in case FFT is in the affected list, make appropriate changes
- if ('4', ' ') not in affected_tags.get('856', []):
- if '856' not in affected_tags:
- affected_tags['856'] = [('4', ' ')]
- elif ('4', ' ') not in affected_tags['856']:
- affected_tags['856'].append(('4', ' '))
- write_message(" -Modified field list updated with FFT details: %s" % str(affected_tags), verbose=2)
+ if opt_mode != 'insert': # because for insert, all tags are affected
+ if ('4', ' ') not in affected_tags.get('856', []):
+ if '856' not in affected_tags:
+ affected_tags['856'] = [('4', ' ')]
+ elif ('4', ' ') not in affected_tags['856']:
+ affected_tags['856'].append(('4', ' '))
+ write_message(" -Modified field list updated with FFT details: %s" % str(affected_tags), verbose=2)
except Exception, e:
register_exception(alert_admin=True)
msg = " Stage 2B failed: Error while synchronizing 8564 tags: %s" % e
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
if record is None:
msg = " Stage 2B failed: Error while synchronizing 8564 tags"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
write_message(" -Stage COMPLETED", verbose=2)
else:
write_message(" -Stage NOT NEEDED", verbose=2)
write_message("Stage 3: Start (Apply fields deletion requests).", verbose=2)
write_message(lambda: " Record before deletion:\n%s" % record_xml_output(record), verbose=9)
# remove fields with __DELETE_FIELDS__
# NOTE: iterate over a temporary deep copy of the record to avoid a
# RuntimeError caused by changing the dictionary size during iteration
tmp_rec = copy.deepcopy(record)
for tag in tmp_rec:
for data_tuple in record[tag]:
if (CFG_BIBUPLOAD_DELETE_CODE, CFG_BIBUPLOAD_DELETE_VALUE) in data_tuple[0]:
# delete the tag with particular indicator pairs from original record
record_delete_field(record, tag, data_tuple[1], data_tuple[2])
write_message(lambda: " Record after cleaning up fields to be deleted:\n%s" % record_xml_output(record), verbose=9)
# Update of the BibFmt
write_message("Stage 4: Start (Update bibfmt).", verbose=2)
updates_exist = not records_identical(record, original_record)
if updates_exist:
# if record_had_altered_bit, this must be set to true, since the
# record has been altered.
if record_had_altered_bit:
oai_provenance_fields = record_get_field_instances(record, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4])
for oai_provenance_field in oai_provenance_fields:
for i, (code, dummy_value) in enumerate(oai_provenance_field[0]):
if code == CFG_OAI_PROVENANCE_ALTERED_SUBFIELD:
oai_provenance_field[0][i] = (code, 'true')
tmp_indicators = (CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4])
if tmp_indicators not in affected_tags.get(CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3], []):
if CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3] not in affected_tags:
affected_tags[CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3]] = [tmp_indicators]
else:
affected_tags[CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:3]].append(tmp_indicators)
write_message(lambda: " Updates exist:\n%s\n!=\n%s" % (record, original_record), verbose=9)
# format the single record as xml
rec_xml_new = record_xml_output(record)
# Update bibfmt with the format xm of this record
modification_date = time.strftime('%Y-%m-%d %H:%M:%S', time.strptime(record_get_field_value(record, '005'), '%Y%m%d%H%M%S.0'))
error = update_bibfmt_format(rec_id, rec_xml_new, 'xm', modification_date, pretend=pretend)
if error == 1:
msg = " Failed: error during update_bibfmt_format 'xm'"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
if CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE:
error = update_bibfmt_format(rec_id, marshal.dumps(record), 'recstruct', modification_date, pretend=pretend)
if error == 1:
msg = " Failed: error during update_bibfmt_format 'recstruct'"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
if not CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS:
# archive MARCXML format of this record for version history purposes:
- error = archive_marcxml_for_history(rec_id, pretend=pretend)
+ error = archive_marcxml_for_history(rec_id, affected_fields=affected_tags, pretend=pretend)
if error == 1:
msg = " Failed to archive MARCXML for history"
write_message(msg, verbose=1, stream=sys.stderr)
return (1, int(rec_id), msg)
else:
write_message(" -Archived MARCXML for history: DONE", verbose=2)
# delete some formats like HB upon record change:
if updates_exist or record_had_FFT:
for format_to_delete in CFG_BIBUPLOAD_DELETE_FORMATS:
try:
delete_bibfmt_format(rec_id, format_to_delete, pretend=pretend)
except:
# OK, some formats like HB could not have been deleted, no big deal
pass
write_message(" -Stage COMPLETED", verbose=2)
+ ## Assert that exactly one 005 tag exists at this stage.
+ assert len(record['005']) == 1
+
# Update the database MetaData
write_message("Stage 5: Start (Update the database with the metadata).",
verbose=2)
if insert_mode_p:
update_database_with_metadata(record, rec_id, oai_rec_id, pretend=pretend)
elif opt_mode in ('replace', 'replace_or_insert',
'append', 'correct', 'delete') and updates_exist:
# now we clear all the rows from bibrec_bibxxx from the old
record_deleted_p = True
delete_bibrec_bibxxx(rec_old, rec_id, affected_tags, pretend=pretend)
# metadata update will insert tags that are available in affected_tags.
# but for delete, once the tags have been deleted from bibrec_bibxxx, they don't have to be inserted
# except for 005.
if is_opt_mode_delete:
tmp_affected_tags = copy.deepcopy(affected_tags)
for tag in tmp_affected_tags:
if tag != '005':
affected_tags.pop(tag)
write_message(" -Clean bibrec_bibxxx: DONE", verbose=2)
update_database_with_metadata(record, rec_id, oai_rec_id, affected_tags, pretend=pretend)
else:
write_message(" -Stage NOT NEEDED in mode %s" % opt_mode,
verbose=2)
write_message(" -Stage COMPLETED", verbose=2)
record_deleted_p = False
# Finally we update the bibrec table with the current date
write_message("Stage 6: Start (Update bibrec table with current date).",
verbose=2)
if opt_notimechange == 0 and (updates_exist or record_had_FFT):
bibrec_now = convert_datestruct_to_datetext(time.localtime())
write_message(" -Retrieved current localtime: DONE", verbose=2)
update_bibrec_date(bibrec_now, rec_id, insert_mode_p, pretend=pretend)
write_message(" -Stage COMPLETED", verbose=2)
else:
write_message(" -Stage NOT NEEDED", verbose=2)
# Increase statistics
if insert_mode_p:
stat['nb_records_inserted'] += 1
else:
stat['nb_records_updated'] += 1
# Upload of this record finished
write_message("Record "+str(rec_id)+" DONE", verbose=1)
return (0, int(rec_id), "")
finally:
if record_deleted_p:
## BibUpload has failed, leaving the record deleted. We should
## restore the original record then.
update_database_with_metadata(original_record, rec_id, oai_rec_id, pretend=pretend)
write_message(" Restored original record", verbose=1, stream=sys.stderr)
def record_is_valid(record):
"""
Check if the record is valid. Currently this simply checks if the record
has exactly one rec_id.
@param record: the record
@type record: recstruct
@return: True if the record is valid
@rtype: bool
"""
rec_ids = record_get_field_values(record, tag="001")
if len(rec_ids) != 1:
write_message(" The record is not valid: it does not have exactly one rec_id: %s" % (rec_ids), stream=sys.stderr)
return False
return True
def find_record_ids_by_oai_id(oaiId):
"""
Find the record identifiers for a given OAI identifier.
Return an intbitset of record ids matching the OAI identifier.
"""
# Is this record already in invenio (matching by oaiid)
if oaiId:
recids = search_pattern(p=oaiId, f=CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG, m='e')
# Is this record already in invenio (matching by reportnumber, i.e.
# particularly 037. Idea: to avoid double insertions)
repnumber = oaiId.split(":")[-1]
if repnumber:
recids |= search_pattern(p = repnumber,
f = "reportnumber",
m = 'e' )
# Is this record already in invenio (matching by reportnumber i.e.
# particularly 037. Idea: to avoid double insertions)
repnumber = "arXiv:" + oaiId.split(":")[-1]
recids |= search_pattern(p = repnumber,
f = "reportnumber",
m = 'e' )
return recids
else:
return intbitset()
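find_record_ids_by_oai_id widens the match beyond the OAI identifier itself by also trying the trailing colon-separated component as a report number, both bare and with an "arXiv:" prefix. A hypothetical helper sketching just that candidate derivation (not the Invenio API, which also runs the actual `search_pattern` queries):

```python
def reportnumber_candidates(oai_id):
    """Derive report-number search candidates from an OAI identifier,
    e.g. 'oai:arXiv.org:1234.5678' -> ['1234.5678', 'arXiv:1234.5678']."""
    if not oai_id:
        return []
    # the trailing component of the OAI id often doubles as a report number
    repnumber = oai_id.split(":")[-1]
    if not repnumber:
        return []
    return [repnumber, "arXiv:" + repnumber]
```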
def bibupload_post_phase(record, mode=None, rec_id="", pretend=False,
tmp_ids=None, tmp_vers=None):
def _elaborate_tag(record, tag, fun):
if extract_tag_from_record(record, tag) is not None:
try:
record = fun()
except Exception, e:
register_exception()
write_message(" Stage failed: Error while elaborating %s tags: %s" % (tag, e),
verbose=1, stream=sys.stderr)
return (1, int(rec_id)) # TODO: ?
if record is None:
write_message(" Stage failed: Error while elaborating %s tags" % (tag, ),
verbose=1, stream=sys.stderr)
return (1, int(rec_id))
write_message(" -Stage COMPLETED", verbose=2)
else:
write_message(" -Stage NOT NEEDED", verbose=2)
if tmp_ids is None:
tmp_ids = {}
if tmp_vers is None:
tmp_vers = {}
_elaborate_tag(record, "BDR", lambda: elaborate_brt_tags(record, rec_id = rec_id,
mode = mode,
pretend = pretend,
tmp_ids = tmp_ids,
tmp_vers = tmp_vers))
_elaborate_tag(record, "BDM", lambda: elaborate_mit_tags(record, rec_id = rec_id,
mode = mode,
pretend = pretend,
tmp_ids = tmp_ids,
tmp_vers = tmp_vers))
def submit_ticket_for_holding_pen(rec_id, err, msg):
"""
Submit a ticket via BibCatalog to report about a record that has been put
into the Holding Pen.
@param rec_id: the affected record
@param err: the corresponding Exception
@param msg: verbose message
"""
from invenio import bibtask
from invenio.webuser import get_email_from_username, get_uid_from_email
user = task_get_task_param("user")
uid = None
if user:
try:
uid = get_uid_from_email(get_email_from_username(user))
except Exception, err:
write_message("WARNING: can't reliably retrieve uid for user %s: %s" % (user, err), stream=sys.stderr)
if check_bibcatalog():
text = """
%(msg)s found for record %(rec_id)s: %(err)s
See: <%(siteurl)s/record/edit/#state=edit&recid=%(rec_id)s>
BibUpload task information:
task_id: %(task_id)s
task_specific_name: %(task_specific_name)s
user: %(user)s
task_params: %(task_params)s
task_options: %(task_options)s""" % {
"msg": msg,
"rec_id": rec_id,
"err": err,
"siteurl": CFG_SITE_SECURE_URL,
"task_id": task_get_task_param("task_id"),
"task_specific_name": task_get_task_param("task_specific_name"),
"user": user,
"task_params": bibtask._TASK_PARAMS,
"task_options": bibtask._OPTIONS}
bibcatalog_system.ticket_submit(subject="%s: %s by %s" % (msg, rec_id, user), recordid=rec_id, text=text, queue=CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE, owner=uid)
def insert_record_into_holding_pen(record, oai_id, pretend=False):
query = "INSERT INTO bibHOLDINGPEN (oai_id, changeset_date, changeset_xml, id_bibrec) VALUES (%s, NOW(), %s, %s)"
xml_record = record_xml_output(record)
bibrec_ids = find_record_ids_by_oai_id(oai_id) # here determining the identifier of the record
if len(bibrec_ids) > 0:
bibrec_id = bibrec_ids.pop()
else:
# id not found by using the oai_id, let's use a wider search based
# on any information we might have.
bibrec_id = retrieve_rec_id(record, 'holdingpen', pretend=pretend)
if bibrec_id is None:
bibrec_id = 0
if not pretend:
run_sql(query, (oai_id, xml_record, bibrec_id))
# record_id is logged as 0! ( We are not inserting into the main database)
log_record_uploading(oai_id, task_get_task_param('task_id', 0), 0, 'H', pretend=pretend)
stat['nb_holdingpen'] += 1
def print_out_bibupload_statistics():
"""Print the statistics of the process"""
out = "Task stats: %(nb_input)d input records, %(nb_updated)d updated, " \
"%(nb_inserted)d inserted, %(nb_errors)d errors, %(nb_holdingpen)d inserted to holding pen. " \
"Time %(nb_sec).2f sec." % { \
'nb_input': stat['nb_records_to_upload'],
'nb_updated': stat['nb_records_updated'],
'nb_inserted': stat['nb_records_inserted'],
'nb_errors': stat['nb_errors'],
'nb_holdingpen': stat['nb_holdingpen'],
'nb_sec': time.time() - time.mktime(stat['exectime']) }
write_message(out)
def open_marc_file(path):
"""Open a file and return the data"""
try:
# open the file containing the marc document
marc_file = open(path, 'r')
marc = marc_file.read()
marc_file.close()
except IOError, erro:
write_message("Error: %s" % erro, verbose=1, stream=sys.stderr)
write_message("Exiting.", stream=sys.stderr)
if erro.errno == 2:
# No such file or directory
# Not scary
task_update_status("CERROR")
else:
task_update_status("ERROR")
sys.exit(1)
return marc
def xml_marc_to_records(xml_marc):
"""create the records"""
# Creation of the records from the xml Marc in argument
recs = create_records(xml_marc, 1, 1)
if recs == []:
write_message("Error: Cannot parse MARCXML file.", verbose=1, stream=sys.stderr)
write_message("Exiting.", stream=sys.stderr)
task_update_status("ERROR")
sys.exit(1)
elif recs[0][0] is None:
write_message("Error: MARCXML file has wrong format: %s" % recs,
verbose=1, stream=sys.stderr)
write_message("Exiting.", stream=sys.stderr)
task_update_status("CERROR")
sys.exit(1)
else:
recs = map((lambda x:x[0]), recs)
return recs
def find_record_format(rec_id, bibformat):
"""Look whether record REC_ID is formatted in FORMAT,
i.e. whether FORMAT exists in the bibfmt table for this record.
Return the number of times it is formatted: 0 if not, 1 if yes,
2 if found more than once (should never occur).
"""
out = 0
query = """SELECT COUNT(*) FROM bibfmt WHERE id_bibrec=%s AND format=%s"""
params = (rec_id, bibformat)
res = []
res = run_sql(query, params)
out = res[0][0]
return out
def find_record_from_recid(rec_id):
"""
Try to find record in the database from the REC_ID number.
Return record ID if found, None otherwise.
"""
res = run_sql("SELECT id FROM bibrec WHERE id=%s",
(rec_id,))
if res:
return res[0][0]
else:
return None
def find_record_from_sysno(sysno):
"""
Try to find record in the database from the external SYSNO number.
Return record ID if found, None otherwise.
"""
bibxxx = 'bib'+CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:2]+'x'
bibrec_bibxxx = 'bibrec_' + bibxxx
res = run_sql("""SELECT bb.id_bibrec FROM %(bibrec_bibxxx)s AS bb,
%(bibxxx)s AS b WHERE b.tag=%%s AND b.value=%%s
AND bb.id_bibxxx=b.id""" % \
{'bibxxx': bibxxx,
'bibrec_bibxxx': bibrec_bibxxx},
(CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG, sysno,))
if res:
return res[0][0]
else:
return None
def find_records_from_extoaiid(extoaiid, extoaisrc=None):
"""
Try to find records in the database from the external EXTOAIID number.
Return list of record ID if found, None otherwise.
"""
assert(CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:5] == CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[:5])
bibxxx = 'bib'+CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:2]+'x'
bibrec_bibxxx = 'bibrec_' + bibxxx
write_message(' Looking for extoaiid="%s" with extoaisrc="%s"' % (extoaiid, extoaisrc), verbose=9)
id_bibrecs = intbitset(run_sql("""SELECT bb.id_bibrec FROM %(bibrec_bibxxx)s AS bb,
%(bibxxx)s AS b WHERE b.tag=%%s AND b.value=%%s
AND bb.id_bibxxx=b.id""" % \
{'bibxxx': bibxxx,
'bibrec_bibxxx': bibrec_bibxxx},
(CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG, extoaiid,)))
write_message(' Partially found %s for extoaiid="%s"' % (id_bibrecs, extoaiid), verbose=9)
ret = intbitset()
for id_bibrec in id_bibrecs:
record = get_record(id_bibrec)
instances = record_get_field_instances(record, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3], CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4])
write_message(' recid %s -> instances "%s"' % (id_bibrec, instances), verbose=9)
for instance in instances:
this_extoaisrc = field_get_subfield_values(instance, CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5])
this_extoaisrc = this_extoaisrc and this_extoaisrc[0] or None
this_extoaiid = field_get_subfield_values(instance, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5])
this_extoaiid = this_extoaiid and this_extoaiid[0] or None
write_message(" this_extoaisrc -> %s, this_extoaiid -> %s" % (this_extoaisrc, this_extoaiid), verbose=9)
if this_extoaiid == extoaiid:
write_message(' recid %s -> provenance "%s"' % (id_bibrec, this_extoaisrc), verbose=9)
if this_extoaisrc == extoaisrc:
write_message('Found recid %s for extoaiid="%s" with provenance="%s"' % (id_bibrec, extoaiid, extoaisrc), verbose=9)
ret.add(id_bibrec)
break
if this_extoaisrc is None:
write_message('WARNING: Found recid %s for extoaiid="%s" that doesn\'t specify any provenance, while input record does.' % (id_bibrec, extoaiid), stream=sys.stderr)
if extoaisrc is None:
write_message('WARNING: Found recid %s for extoaiid="%s" that specify a provenance (%s), while input record does not have a provenance.' % (id_bibrec, extoaiid, this_extoaisrc), stream=sys.stderr)
return ret
def find_record_from_oaiid(oaiid):
"""
Try to find record in the database from the OAI ID number and OAI SRC.
Return record ID if found, None otherwise.
"""
bibxxx = 'bib'+CFG_OAI_ID_FIELD[0:2]+'x'
bibrec_bibxxx = 'bibrec_' + bibxxx
res = run_sql("""SELECT bb.id_bibrec FROM %(bibrec_bibxxx)s AS bb,
%(bibxxx)s AS b WHERE b.tag=%%s AND b.value=%%s
AND bb.id_bibxxx=b.id""" % \
{'bibxxx': bibxxx,
'bibrec_bibxxx': bibrec_bibxxx},
(CFG_OAI_ID_FIELD, oaiid,))
if res:
return res[0][0]
else:
return None
def find_record_from_doi(doi):
"""
Try to find record in the database from the given DOI.
Return record ID if found, None otherwise.
"""
bibxxx = 'bib02x'
bibrec_bibxxx = 'bibrec_' + bibxxx
res = run_sql("""SELECT bb.id_bibrec, bb.field_number
FROM %(bibrec_bibxxx)s AS bb, %(bibxxx)s AS b
WHERE b.tag=%%s AND b.value=%%s
AND bb.id_bibxxx=b.id""" % \
{'bibxxx': bibxxx,
'bibrec_bibxxx': bibrec_bibxxx},
('0247_a', doi,))
# For each of the result, make sure that it is really tagged as doi
for (id_bibrec, field_number) in res:
res = run_sql("""SELECT bb.id_bibrec
FROM %(bibrec_bibxxx)s AS bb, %(bibxxx)s AS b
WHERE b.tag=%%s AND b.value=%%s
AND bb.id_bibxxx=b.id and bb.field_number=%%s and bb.id_bibrec=%%s""" % \
{'bibxxx': bibxxx,
'bibrec_bibxxx': bibrec_bibxxx},
('0247_2', "doi", field_number, id_bibrec))
if res and res[0][0] == id_bibrec:
return res[0][0]
return None
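The DOI lookup above runs two queries: the first finds field instances whose 0247_a subfield equals the DOI, and the second confirms that the same field instance (same field_number on the same record) also carries 0247_2 = "doi". The join structure can be emulated with an in-memory database (schema flattened into a single table for brevity; names are illustrative, not the real Invenio schema):

```python
import sqlite3

def find_record_from_doi_demo(conn, doi):
    """Return the record id carrying `doi` in 0247_a whose same field instance
    also has 0247_2='doi', or None."""
    rows = conn.execute(
        "SELECT id_bibrec, field_number FROM bib02x WHERE tag='0247_a' AND value=?",
        (doi,)).fetchall()
    for id_bibrec, field_number in rows:
        # double-check: is this field instance really tagged as a DOI?
        ok = conn.execute(
            "SELECT 1 FROM bib02x WHERE tag='0247_2' AND value='doi'"
            " AND field_number=? AND id_bibrec=?",
            (field_number, id_bibrec)).fetchone()
        if ok:
            return id_bibrec
    return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bib02x (id_bibrec INT, field_number INT, tag TEXT, value TEXT)")
conn.executemany("INSERT INTO bib02x VALUES (?, ?, ?, ?)", [
    (7, 1, '0247_a', '10.1234/abc'),   # identifier value ...
    (7, 1, '0247_2', 'doi'),           # ... tagged as a DOI: should match
    (8, 1, '0247_a', '10.1234/xyz'),   # value present but not tagged 'doi'
])
```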
def extract_tag_from_record(record, tag_number):
""" Extract the fields with tag_number from the record, or None."""
if record:
return record.get(tag_number, None)
return None
def retrieve_rec_id(record, opt_mode, pretend=False, post_phase = False):
"""Retrieve the record id from a record by using tag 001 or SYSNO or OAI ID or DOI
tag. opt_mode is the desired mode.
@param post_phase: tells if we are calling this method in the post-processing phase. If True, we accept the presence of 001 fields even in insert mode
@type post_phase: boolean
"""
rec_id = None
# 1st step: we look for the tag 001
tag_001 = extract_tag_from_record(record, '001')
if tag_001 is not None:
# We extract the record ID from the tag
rec_id = tag_001[0][3]
# if we are in insert mode => error
if opt_mode == 'insert' and not post_phase:
write_message(" Failed: tag 001 found in the xml" \
" submitted, you should use the option replace," \
" correct or append to replace an existing" \
" record. (-h for help)",
verbose=1, stream=sys.stderr)
return -1
else:
# we found the rec id and we are not in insert mode => continue
# we try to match rec_id against the database:
if find_record_from_recid(rec_id) is not None:
# okay, 001 corresponds to some known record
return int(rec_id)
elif opt_mode in ('replace', 'replace_or_insert'):
if task_get_option('force'):
# we found the rec_id but it's not in the system and we are
# requested to replace records. Therefore we create on the fly
# an empty record allocating the recid.
write_message(" Warning: tag 001 found in the xml with"
" value %(rec_id)s, but rec_id %(rec_id)s does"
" not exist. Since the mode replace was"
" requested the rec_id %(rec_id)s is allocated"
" on-the-fly." % {"rec_id": rec_id},
stream=sys.stderr)
return create_new_record(rec_id=rec_id, pretend=pretend)
else:
# Since --force was not used we are going to raise an error
write_message(" Failed: tag 001 found in the xml"
" submitted with value %(rec_id)s. The"
" corresponding record however does not"
" exist. If you want to really create"
" such record, please use the --force"
" parameter when calling bibupload." % {
"rec_id": rec_id}, stream=sys.stderr)
return -1
else:
# The record doesn't exist yet. We shall try to check
# the SYSNO or OAI or DOI id later.
write_message(" -Tag 001 value not found in database.",
verbose=9)
rec_id = None
else:
write_message(" -Tag 001 not found in the xml marc file.", verbose=9)
if rec_id is None:
# 2nd step we look for the SYSNO
sysnos = record_get_field_values(record,
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] or "",
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] or "",
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6])
if sysnos:
sysno = sysnos[0] # there should be only one external SYSNO
write_message(" -Checking if SYSNO " + sysno + \
" exists in the database", verbose=9)
# try to find the corresponding rec id from the database
rec_id = find_record_from_sysno(sysno)
if rec_id is not None:
# rec_id found
pass
else:
# The record doesn't exist yet. We will try to check
# external and internal OAI ids later.
write_message(" -Tag SYSNO value not found in database.",
verbose=9)
rec_id = None
else:
write_message(" -Tag SYSNO not found in the xml marc file.",
verbose=9)
if rec_id is None:
# 3rd step we look for the external OAIID
extoai_fields = record_get_field_instances(record,
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or "",
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or "")
if extoai_fields:
for field in extoai_fields:
extoaiid = field_get_subfield_values(field, CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6])
extoaisrc = field_get_subfield_values(field, CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6])
if extoaiid:
extoaiid = extoaiid[0]
if extoaisrc:
extoaisrc = extoaisrc[0]
else:
extoaisrc = None
write_message(" -Checking if EXTOAIID %s (%s) exists in the database" % (extoaiid, extoaisrc), verbose=9)
# try to find the corresponding rec id from the database
rec_ids = find_records_from_extoaiid(extoaiid, extoaisrc)
if rec_ids:
# rec_id found
rec_id = rec_ids.pop()
break
else:
# The record doesn't exist yet. We will try to check
# OAI id later.
write_message(" -Tag EXTOAIID value not found in database.",
verbose=9)
rec_id = None
else:
write_message(" -Tag EXTOAIID not found in the xml marc file.", verbose=9)
if rec_id is None:
# 4th step we look for the OAI ID
oaiidvalues = record_get_field_values(record,
CFG_OAI_ID_FIELD[0:3],
CFG_OAI_ID_FIELD[3:4] != "_" and \
CFG_OAI_ID_FIELD[3:4] or "",
CFG_OAI_ID_FIELD[4:5] != "_" and \
CFG_OAI_ID_FIELD[4:5] or "",
CFG_OAI_ID_FIELD[5:6])
if oaiidvalues:
oaiid = oaiidvalues[0] # there should be only one OAI ID
write_message(" -Checking if local OAI ID " + oaiid + \
" exists in the database", verbose=9)
# try to find the corresponding rec id from the database
rec_id = find_record_from_oaiid(oaiid)
if rec_id is not None:
# rec_id found
pass
else:
write_message(" -Tag OAI ID value not found in database.",
verbose=9)
rec_id = None
else:
write_message(" -Tag OAI ID not found in the xml marc file.",
verbose=9)
if rec_id is None:
# 5th step we look for the DOI.
record_dois = record_extract_dois(record)
matching_recids = set()
if record_dois:
# try to find the corresponding rec id from the database
for record_doi in record_dois:
possible_recid = find_record_from_doi(record_doi)
if possible_recid:
matching_recids.add(possible_recid)
if len(matching_recids) > 1:
# Oops, this record refers to DOI existing in multiple records.
# Dunno which one to choose.
write_message(" Failed: Multiple records found in the" \
" database %s that match the DOI(s) in the input" \
" MARCXML %s" % (repr(matching_recids), repr(record_dois)),
verbose=1, stream=sys.stderr)
return -1
elif len(matching_recids) == 1:
rec_id = matching_recids.pop()
if opt_mode == 'insert':
write_message(" Failed: DOI tag matching record #%s found in the xml" \
" submitted, you should use the option replace," \
" correct or append to replace an existing" \
" record. (-h for help)" % rec_id,
verbose=1, stream=sys.stderr)
return -1
else:
write_message(" - Tag DOI value not found in database.",
verbose=9)
rec_id = None
else:
write_message(" -Tag DOI not found in the xml marc file.",
verbose=9)
# Now we should have detected rec_id from the 001, SYSNO, OAIID
# or DOI tags. (None otherwise.)
if rec_id:
if opt_mode == 'insert':
write_message(" Failed: Record found in the database," \
" you should use the option replace," \
" correct or append to replace an existing" \
" record. (-h for help)",
verbose=1, stream=sys.stderr)
return -1
else:
if opt_mode != 'insert' and \
opt_mode != 'replace_or_insert':
write_message(" Failed: Record not found in the database."\
" Please insert the file before updating it."\
" (-h for help)", verbose=1, stream=sys.stderr)
return -1
return rec_id and int(rec_id) or None
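retrieve_rec_id is essentially a fallback chain: 001, then SYSNO, then external OAI id, then local OAI id, then DOI, stopping at the first identifier that resolves to a known record. The control flow reduces to this pattern (the extractor/finder pairs are stand-ins, not the Invenio API):

```python
def resolve_first(record, lookups):
    """Try each (extractor, finder) pair in order; return the first resolved id.

    `extractor(record)` pulls an identifier out of the record (or None);
    `finder(identifier)` resolves it against a catalogue (or None).
    """
    for extract, find in lookups:
        identifier = extract(record)
        if identifier is None:
            continue  # this kind of identifier is absent from the record
        rec_id = find(identifier)
        if rec_id is not None:
            return rec_id  # first successful resolution wins
    return None
```

Usage with stub lookups mirroring the 001 -> SYSNO ordering:

```python
rec = {'sysno': 'S1'}
lookups = [
    (lambda r: r.get('001'), lambda i: None),                      # no 001 tag
    (lambda r: r.get('sysno'), lambda i: 42 if i == 'S1' else None),
]
```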
def check_record_doi_is_unique(rec_id, record):
"""
Check that DOI found in 'record' does not exist in any other
record than 'recid'.
Return (boolean, msg) where 'boolean' would be True if the DOI is
unique.
"""
record_dois = record_extract_dois(record)
if record_dois:
matching_recids = set()
for record_doi in record_dois:
possible_recid = find_record_from_doi(record_doi)
if possible_recid:
matching_recids.add(possible_recid)
if len(matching_recids) > 1:
# Oops, this record refers to DOI existing in multiple records.
msg = " Failed: Multiple records found in the" \
" database %s that match the DOI(s) in the input" \
" MARCXML %s" % (repr(matching_recids), repr(record_dois))
return (False, msg)
elif len(matching_recids) == 1:
matching_recid = matching_recids.pop()
if str(matching_recid) != str(rec_id):
# Oops, this record refers to DOI existing in a different record.
msg = " Failed: DOI(s) %s found in this record (#%s)" \
" already exist(s) in another record (#%s)" % \
(repr(record_dois), rec_id, matching_recid)
return (False, msg)
return (True, "")
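# The DOI-uniqueness logic above can be sketched independently of the
# database layer (a minimal sketch; `find_record` is a hypothetical
# stand-in for `find_record_from_doi`):

```python
def check_dois_unique(rec_id, record_dois, find_record):
    """Return (ok, msg); ok is False when any DOI in record_dois
    resolves to a record other than rec_id."""
    matching = set()
    for doi in record_dois:
        recid = find_record(doi)
        if recid:
            matching.add(recid)
    if len(matching) > 1:
        # Several different records already carry these DOIs.
        return (False, "multiple records %r match the DOI(s) %r"
                % (sorted(matching), record_dois))
    if len(matching) == 1:
        other = matching.pop()
        if str(other) != str(rec_id):
            # A single, but different, record carries the DOI.
            return (False, "DOI(s) already exist in record #%s" % other)
    return (True, "")
```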
### Insert functions
def create_new_record(rec_id=None, pretend=False):
"""
Create new record in the database
@param rec_id: if specified the new record will have this rec_id.
@type rec_id: int
@return: the allocated rec_id
@rtype: int
@note: in case of errors will be returned None
"""
"""
if rec_id is not None:
try:
rec_id = int(rec_id)
except (ValueError, TypeError), error:
write_message(" Error in create_new_record: %s "
% error, verbose=1, stream=sys.stderr)
return None
if run_sql("SELECT id FROM bibrec WHERE id=%s", (rec_id, )):
write_message(" Error in create_new_record: the requested rec_id %s already exists." % rec_id)
return None
if pretend:
if rec_id:
return rec_id
else:
return run_sql("SELECT max(id)+1 FROM bibrec")[0][0]
if rec_id is not None:
return run_sql("INSERT INTO bibrec (id, creation_date, modification_date) VALUES (%s, NOW(), NOW())", (rec_id, ))
else:
return run_sql("INSERT INTO bibrec (creation_date, modification_date) VALUES (NOW(), NOW())")
def insert_bibfmt(id_bibrec, marc, bibformat, modification_date='1970-01-01 00:00:00', pretend=False):
"""Insert the format in the table bibfmt"""
# compress the marc value
pickled_marc = compress(marc)
try:
time.strptime(modification_date, "%Y-%m-%d %H:%M:%S")
except ValueError:
modification_date = '1970-01-01 00:00:00'
query = """INSERT LOW_PRIORITY INTO bibfmt (id_bibrec, format, last_updated, value)
VALUES (%s, %s, %s, %s)"""
if not pretend:
row_id = run_sql(query, (id_bibrec, bibformat, modification_date, pickled_marc))
return row_id
else:
return 1
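# The timestamp validation used above (fall back to the epoch when the
# modification date is malformed) can be isolated as a small helper
# (a sketch, not part of the original module):

```python
import time

def normalize_modification_date(date_str, fallback='1970-01-01 00:00:00'):
    # Keep the string only if it parses as "YYYY-MM-DD HH:MM:SS";
    # otherwise fall back to the epoch, as insert_bibfmt does.
    try:
        time.strptime(date_str, "%Y-%m-%d %H:%M:%S")
        return date_str
    except ValueError:
        return fallback
```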
def insert_record_bibxxx(tag, value, pretend=False):
"""Insert the record into bibxxx"""
# determine into which table one should insert the record
table_name = 'bib'+tag[0:2]+'x'
# check if the tag, value combination exists in the table
query = """SELECT id,value FROM %s """ % table_name
query += """ WHERE tag=%s AND value=%s"""
params = (tag, value)
res = None
res = run_sql(query, params)
# Note: compare now the found values one by one and look for
# string binary equality (e.g. to respect lowercase/uppercase
# match), regardless of the charset etc settings. Ideally we
# could use a BINARY operator in the above SELECT statement, but
# we would have to check compatibility on various MySQLdb versions
# etc; this approach checks all matched values in Python, not in
# MySQL, which is less cool, but more conservative, so it should
# work better on most setups.
if res:
for row in res:
row_id = row[0]
row_value = row[1]
if row_value == value:
return (table_name, row_id)
# We got here only when the tag, value combination was not found,
# so it is now necessary to insert the tag, value combination into
# bibxxx table as new.
query = """INSERT INTO %s """ % table_name
query += """ (tag, value) values (%s , %s)"""
params = (tag, value)
if not pretend:
row_id = run_sql(query, params)
else:
return (table_name, 1)
return (table_name, row_id)
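# The bibxxx table routing above follows a simple naming rule: the first
# two digits of the MARC tag select the table. A minimal sketch of that
# rule in isolation:

```python
def bibxxx_table_name(tag):
    # MARC tag '245__a' is stored in table 'bib24x':
    # 'bib' + first two digits of the tag + 'x'
    return 'bib' + tag[0:2] + 'x'
```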
def insert_record_bibrec_bibxxx(table_name, id_bibxxx,
field_number, id_bibrec, pretend=False):
"""Insert the record into bibrec_bibxxx"""
# determine into which table one should insert the record
full_table_name = 'bibrec_'+ table_name
# insert the proper row into the table
query = """INSERT INTO %s """ % full_table_name
query += """(id_bibrec,id_bibxxx, field_number) values (%s , %s, %s)"""
params = (id_bibrec, id_bibxxx, field_number)
if not pretend:
res = run_sql(query, params)
else:
return 1
return res
def synchronize_8564(rec_id, record, record_had_FFT, pretend=False):
"""
Synchronize 8564_ tags and BibDocFile tables.
This function directly manipulates the record parameter.
@type rec_id: positive integer
@param rec_id: the record identifier.
@param record: the record structure as created by bibrecord.create_record
@type record_had_FFT: boolean
@param record_had_FFT: True if the incoming uploaded record used FFT tags
@return: the manipulated record (which is also modified as a side effect)
"""
def merge_marc_into_bibdocfile(field, pretend=False):
"""
Internal function that reads a single field and stores its content
in BibDocFile tables.
@param field: the 8564_ field containing a BibDocFile URL.
"""
write_message('Merging field: %s' % (field, ), verbose=9)
url = field_get_subfield_values(field, 'u')[:1] or field_get_subfield_values(field, 'q')[:1]
description = field_get_subfield_values(field, 'y')[:1]
comment = field_get_subfield_values(field, 'z')[:1]
if url:
recid, docname, docformat = decompose_bibdocfile_url(url[0])
if recid != rec_id:
write_message("INFO: URL %s is not pointing to a fulltext owned by this record (%s)" % (url, recid), stream=sys.stderr)
else:
try:
bibdoc = BibRecDocs(recid).get_bibdoc(docname)
if description and not pretend:
bibdoc.set_description(description[0], docformat)
if comment and not pretend:
bibdoc.set_comment(comment[0], docformat)
except InvenioBibDocFileError:
## Apparently the referenced docname doesn't exist anymore.
## Too bad. Let's skip it.
write_message("WARNING: docname %s does not seem to exist for record %s. Has it been renamed outside FFT?" % (docname, recid), stream=sys.stderr)
def merge_bibdocfile_into_marc(field, subfields):
"""
Internal function that reads BibDocFile table entries referenced by
the URL in the given 8564_ field and integrates the given information
directly with the provided subfields.
@param field: the 8564_ field containing a BibDocFile URL.
@param subfields: the subfields corresponding to the BibDocFile URL
generated from the BibDocFile tables.
"""
write_message('Merging subfields %s into field %s' % (subfields, field), verbose=9)
subfields = dict(subfields) ## We make a copy not to have side-effects
subfield_to_delete = []
for subfield_position, (code, value) in enumerate(field_get_subfield_instances(field)):
## For each subfield instance already existing...
if code in subfields:
## ...We substitute it with what is in BibDocFile tables
record_modify_subfield(record, '856', code, subfields[code],
subfield_position, field_position_global=field[4])
del subfields[code]
else:
## ...We delete it otherwise
subfield_to_delete.append(subfield_position)
subfield_to_delete.sort()
for counter, position in enumerate(subfield_to_delete):
## FIXME: Very hackish algorithm. Since deleting a subfield
## alters the positions of the following subfields, we
## keep count of the deletions and adjust each further
## position by using a counter.
record_delete_subfield_from(record, '856', position - counter,
field_position_global=field[4])
subfields = subfields.items()
subfields.sort()
for code, value in subfields:
## Let's add non-previously existing subfields
record_add_subfield_into(record, '856', code, value,
field_position_global=field[4])
def get_bibdocfile_managed_info():
"""
Internal function, returns a dictionary of
BibDocFile URL -> wanna-be subfields.
This information is retrieved from internal BibDoc
structures rather than from input MARC XML files
@rtype: mapping
@return: BibDocFile URL -> wanna-be subfields dictionary
"""
ret = {}
bibrecdocs = BibRecDocs(rec_id)
latest_files = bibrecdocs.list_latest_files(list_hidden=False)
for afile in latest_files:
url = afile.get_url()
ret[url] = {'u': url}
description = afile.get_description()
comment = afile.get_comment()
subformat = afile.get_subformat()
if description:
ret[url]['y'] = description
if comment:
ret[url]['z'] = comment
if subformat:
ret[url]['x'] = subformat
return ret
write_message("Synchronizing MARC of recid '%s' with:\n%s" % (rec_id, record), verbose=9)
tags856s = record_get_field_instances(record, '856', '%', '%')
write_message("Original 856%% instances: %s" % tags856s, verbose=9)
tags8564s_to_add = get_bibdocfile_managed_info()
write_message("BibDocFile instances: %s" % tags8564s_to_add, verbose=9)
positions_tags8564s_to_remove = []
for local_position, field in enumerate(tags856s):
if field[1] == '4' and field[2] == ' ':
write_message('Analysing %s' % (field, ), verbose=9)
for url in field_get_subfield_values(field, 'u') + field_get_subfield_values(field, 'q'):
if url in tags8564s_to_add:
# there exists a link in the MARC of the record and the connection exists in BibDoc tables
if record_had_FFT:
merge_bibdocfile_into_marc(field, tags8564s_to_add[url])
else:
merge_marc_into_bibdocfile(field, pretend=pretend)
del tags8564s_to_add[url]
break
elif bibdocfile_url_p(url) and decompose_bibdocfile_url(url)[0] == rec_id:
# The link exists and looks like a correct link to a document;
# moreover, it refers to the current record id... but it does not exist in
# internal BibDoc structures. This can happen when a document was renamed
# or removed. In both cases we have to remove the link; a new one will be created
positions_tags8564s_to_remove.append(local_position)
write_message("%s to be deleted and re-synchronized" % (field, ), verbose=9)
break
record_delete_fields(record, '856', positions_tags8564s_to_remove)
tags8564s_to_add = tags8564s_to_add.values()
tags8564s_to_add.sort()
for subfields in tags8564s_to_add:
subfields = subfields.items()
subfields.sort()
record_add_field(record, '856', '4', ' ', subfields=subfields)
write_message('Final record: %s' % record, verbose=9)
return record
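# The position-adjustment trick noted in the FIXME above (each deletion
# shifts the indices of the following subfields down by one) can be
# illustrated in isolation with a hypothetical helper:

```python
def delete_at_positions(items, positions):
    # Deleting by index shifts every later index down by one,
    # so compensate with a running counter over the sorted positions.
    out = list(items)
    for counter, position in enumerate(sorted(positions)):
        del out[position - counter]
    return out
```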
def _get_subfield_value(field, subfield_code, default=None):
res = field_get_subfield_values(field, subfield_code)
if res != [] and res is not None:
return res[0]
else:
return default
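# A minimal standalone sketch of the helper above, assuming the bibrecord
# field layout (a tuple whose first element is a list of (code, value)
# subfield pairs); the real module delegates to field_get_subfield_values:

```python
def get_subfield_value(field, subfield_code, default=None):
    # field[0] is assumed to be a list of (code, value) subfield pairs,
    # as in bibrecord field tuples.
    values = [value for (code, value) in field[0] if code == subfield_code]
    return values[0] if values else default
```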
def elaborate_mit_tags(record, rec_id, mode, pretend = False, tmp_ids = {},
tmp_vers = {}):
"""
Uploading MoreInfo -> BDM tags
"""
tuple_list = extract_tag_from_record(record, 'BDM')
# Now gathering information from BDM tags - to be processed later
write_message("Processing BDM entries of the record ")
recordDocs = BibRecDocs(rec_id)
if tuple_list:
for mit in record_get_field_instances(record, 'BDM', ' ', ' '):
relation_id = _get_subfield_value(mit, "r")
bibdoc_id = _get_subfield_value(mit, "i")
# checking for a possibly temporary ID
if not (bibdoc_id is None):
bibdoc_id = resolve_identifier(tmp_ids, bibdoc_id)
bibdoc_ver = _get_subfield_value(mit, "v")
if not (bibdoc_ver is None):
bibdoc_ver = resolve_identifier(tmp_vers, bibdoc_ver)
bibdoc_name = _get_subfield_value(mit, "n")
bibdoc_fmt = _get_subfield_value(mit, "f")
moreinfo_str = _get_subfield_value(mit, "m")
if bibdoc_id == None:
if bibdoc_name == None:
raise StandardError("Incorrect relation. Neither the name nor the identifier of the document has been specified")
else:
# retrieving the ID based on the document name (inside current record)
# The document is attached to current record.
try:
bibdoc_id = recordDocs.get_docid(bibdoc_name)
except:
raise StandardError("BibDoc with name %s does not exist within the record" % (bibdoc_name, ))
else:
if bibdoc_name != None:
write_message("Warning: both the name and the id of the document have been specified. Ignoring the name")
if (moreinfo_str is None or mode in ("replace", "correct")) and (not pretend):
MoreInfo(docid=bibdoc_id , version = bibdoc_ver,
docformat = bibdoc_fmt, relation = relation_id).delete()
if (not moreinfo_str is None) and (not pretend):
MoreInfo.create_from_serialised(moreinfo_str,
docid=bibdoc_id,
version = bibdoc_ver,
docformat = bibdoc_fmt,
relation = relation_id)
return record
def elaborate_brt_tags(record, rec_id, mode, pretend=False, tmp_ids = {}, tmp_vers = {}):
"""
Process BDR tags describing relations between existing objects
"""
tuple_list = extract_tag_from_record(record, 'BDR')
# Now gathering information from BDR tags - to be processed later
relations_to_create = []
write_message("Processing BDR entries of the record ")
recordDocs = BibRecDocs(rec_id) #TODO: check what happens if there is no record yet ! Will the class represent an empty set?
if tuple_list:
for brt in record_get_field_instances(record, 'BDR', ' ', ' '):
relation_id = _get_subfield_value(brt, "r")
bibdoc1_id = None
bibdoc1_name = None
bibdoc1_ver = None
bibdoc1_fmt = None
bibdoc2_id = None
bibdoc2_name = None
bibdoc2_ver = None
bibdoc2_fmt = None
if not relation_id:
bibdoc1_id = _get_subfield_value(brt, "i")
bibdoc1_name = _get_subfield_value(brt, "n")
if bibdoc1_id == None:
if bibdoc1_name == None:
raise StandardError("Incorrect relation. Neither name nor identifier of the first object has been specified")
else:
# retrieving the ID based on the document name (inside current record)
# The document is attached to current record.
try:
bibdoc1_id = recordDocs.get_docid(bibdoc1_name)
except:
raise StandardError("BibDoc with name %s does not exist within the record" % \
(bibdoc1_name, ))
else:
# resolving temporary identifier
bibdoc1_id = resolve_identifier(tmp_ids, bibdoc1_id)
if bibdoc1_name != None:
write_message("Warning: both name and id of the first document of a relation have been specified. Ignoring the name")
bibdoc1_ver = _get_subfield_value(brt, "v")
if not (bibdoc1_ver is None):
bibdoc1_ver = resolve_identifier(tmp_vers, bibdoc1_ver)
bibdoc1_fmt = _get_subfield_value(brt, "f")
bibdoc2_id = _get_subfield_value(brt, "j")
bibdoc2_name = _get_subfield_value(brt, "o")
if bibdoc2_id == None:
if bibdoc2_name == None:
raise StandardError("Incorrect relation. Neither name nor identifier of the second object has been specified")
else:
# retrieving the ID based on the document name (inside current record)
# The document is attached to current record.
try:
bibdoc2_id = recordDocs.get_docid(bibdoc2_name)
except:
raise StandardError("BibDoc with name %s does not exist within the record" % (bibdoc2_name, ))
else:
bibdoc2_id = resolve_identifier(tmp_ids, bibdoc2_id)
if bibdoc2_name != None:
write_message("Warning: both name and id of the second document of a relation have been specified. Ignoring the name")
bibdoc2_ver = _get_subfield_value(brt, "w")
if not (bibdoc2_ver is None):
bibdoc2_ver = resolve_identifier(tmp_vers, bibdoc2_ver)
bibdoc2_fmt = _get_subfield_value(brt, "g")
control_command = _get_subfield_value(brt, "d")
relation_type = _get_subfield_value(brt, "t")
if not relation_type and not relation_id:
raise StandardError("The relation type must be specified")
more_info = _get_subfield_value(brt, "m")
# the relation id might be specified in the case of updating
# MoreInfo table instead of other fields
rel_obj = None
if not relation_id:
rels = BibRelation.get_relations(rel_type = relation_type,
bibdoc1_id = bibdoc1_id,
bibdoc2_id = bibdoc2_id,
bibdoc1_ver = bibdoc1_ver,
bibdoc2_ver = bibdoc2_ver,
bibdoc1_fmt = bibdoc1_fmt,
bibdoc2_fmt = bibdoc2_fmt)
if len(rels) > 0:
rel_obj = rels[0]
relation_id = rel_obj.id
else:
rel_obj = BibRelation(rel_id=relation_id)
relations_to_create.append((relation_id, bibdoc1_id, bibdoc1_ver,
bibdoc1_fmt, bibdoc2_id, bibdoc2_ver,
bibdoc2_fmt, relation_type, more_info,
rel_obj, control_command))
record_delete_field(record, 'BDR', ' ', ' ')
if mode in ("insert", "replace_or_insert", "append", "correct", "replace"):
# now creating relations between objects based on the data
if not pretend:
for (relation_id, bibdoc1_id, bibdoc1_ver, bibdoc1_fmt,
bibdoc2_id, bibdoc2_ver, bibdoc2_fmt, rel_type,
more_info, rel_obj, control_command) in relations_to_create:
if rel_obj == None:
rel_obj = BibRelation.create(bibdoc1_id = bibdoc1_id,
bibdoc1_ver = bibdoc1_ver,
bibdoc1_fmt = bibdoc1_fmt,
bibdoc2_id = bibdoc2_id,
bibdoc2_ver = bibdoc2_ver,
bibdoc2_fmt = bibdoc2_fmt,
rel_type = rel_type)
relation_id = rel_obj.id
if mode == "replace":
# Clearing existing MoreInfo content
rel_obj.get_more_info().delete()
if more_info:
MoreInfo.create_from_serialised(more_info, relation = relation_id)
if control_command == "DELETE":
rel_obj.delete()
else:
write_message("BDR tag is not processed in the %s mode" % (mode, ))
return record
def elaborate_fft_tags(record, rec_id, mode, pretend=False,
tmp_ids = {}, tmp_vers = {}):
"""
Process FFT tags that should contain $a with file paths or URLs
to get the fulltext from. This function enriches record with
proper 8564 URL tags, downloads fulltext files and stores them
into var/data structure where appropriate.
CFG_BIBUPLOAD_WGET_SLEEP_TIME defines time to sleep in seconds in
between URL downloads.
Note: if an FFT tag contains multiple $a subfields, we upload them
into different 856 URL tags in the metadata. See regression test
case test_multiple_fft_insert_via_http().
"""
# Let's define some handy sub procedure.
def _add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, modification_date, pretend=False):
"""Adds a new format for a given bibdoc. Returns True when everything's fine."""
write_message('Add new format to %s url: %s, format: %s, docname: %s, doctype: %s, newname: %s, description: %s, comment: %s, flags: %s, modification_date: %s' % (repr(bibdoc), url, docformat, docname, doctype, newname, description, comment, flags, modification_date), verbose=9)
try:
if not url: # Not requesting a new url. Just updating comment & description
return _update_description_and_comment(bibdoc, docname, docformat, description, comment, flags, pretend=pretend)
try:
if not pretend:
bibdoc.add_file_new_format(url, description=description, comment=comment, flags=flags, modification_date=modification_date)
except StandardError, e:
write_message("('%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s') not inserted because format already exists (%s)." % (url, docformat, docname, doctype, newname, description, comment, flags, modification_date, e), stream=sys.stderr)
raise
except Exception, e:
write_message("Error in adding '%s' as a new format because of: %s" % (url, e), stream=sys.stderr)
raise
return True
def _add_new_version(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, modification_date, pretend=False):
"""Adds a new version for a given bibdoc. Returns True when everything's fine."""
write_message('Add new version to %s url: %s, format: %s, docname: %s, doctype: %s, newname: %s, description: %s, comment: %s, flags: %s' % (repr(bibdoc), url, docformat, docname, doctype, newname, description, comment, flags))
try:
if not url:
return _update_description_and_comment(bibdoc, docname, docformat, description, comment, flags, pretend=pretend)
try:
if not pretend:
bibdoc.add_file_new_version(url, description=description, comment=comment, flags=flags, modification_date=modification_date)
except StandardError, e:
write_message("('%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s') not inserted because '%s'." % (url, docformat, docname, doctype, newname, description, comment, flags, modification_date, e), stream=sys.stderr)
raise
except Exception, e:
write_message("Error in adding '%s' as a new version because of: %s" % (url, e), stream=sys.stderr)
raise
return True
def _update_description_and_comment(bibdoc, docname, docformat, description, comment, flags, pretend=False):
"""Directly update comments and descriptions."""
write_message('Just updating description and comment for %s with format %s with description %s, comment %s and flags %s' % (docname, docformat, description, comment, flags), verbose=9)
try:
if not pretend:
bibdoc.set_description(description, docformat)
bibdoc.set_comment(comment, docformat)
for flag in CFG_BIBDOCFILE_AVAILABLE_FLAGS:
if flag in flags:
bibdoc.set_flag(flag, docformat)
else:
bibdoc.unset_flag(flag, docformat)
except StandardError, e:
write_message("('%s', '%s', '%s', '%s', '%s') description and comment not updated because '%s'." % (docname, docformat, description, comment, flags, e))
raise
return True
def _process_document_moreinfos(more_infos, docname, version, docformat, mode):
if mode not in ('correct', 'append', 'replace_or_insert', 'replace', 'insert'):
print "exited because the mode is incorrect"
return
brd = BibRecDocs(rec_id)
docid = None
try:
docid = brd.get_docid(docname)
except:
raise StandardError("MoreInfo: no document with the given name is associated with the record")
if not version:
# We have to retrieve the most recent version ...
version = brd.get_bibdoc(docname).get_latest_version()
doc_moreinfo_s, version_moreinfo_s, version_format_moreinfo_s, format_moreinfo_s = more_infos
if mode in ("replace", "replace_or_insert"):
if doc_moreinfo_s: #only if specified, otherwise do not touch
MoreInfo(docid = docid).delete()
if format_moreinfo_s: #only if specified... otherwise do not touch
MoreInfo(docid = docid, docformat = docformat).delete()
if not doc_moreinfo_s is None:
MoreInfo.create_from_serialised(ser_str = doc_moreinfo_s, docid = docid)
if not version_moreinfo_s is None:
MoreInfo.create_from_serialised(ser_str = version_moreinfo_s,
docid = docid, version = version)
if not version_format_moreinfo_s is None:
MoreInfo.create_from_serialised(ser_str = version_format_moreinfo_s,
docid = docid, version = version,
docformat = docformat)
if not format_moreinfo_s is None:
MoreInfo.create_from_serialised(ser_str = format_moreinfo_s,
docid = docid, docformat = docformat)
if mode == 'delete':
raise StandardError('FFT tag specified but bibupload executed in --delete mode')
tuple_list = extract_tag_from_record(record, 'FFT')
if tuple_list: # FFT Tags analysis
write_message("FFTs: "+str(tuple_list), verbose=9)
docs = {} # docnames and their data
for fft in record_get_field_instances(record, 'FFT', ' ', ' '):
# First, we retrieve the potentially temporary identifiers...
# even if the rest fails, we should include them in the dictionary
version = _get_subfield_value(fft, 'v', '')
# checking if version is temporary... if so, filling a different variable
is_tmp_ver, bibdoc_tmpver = parse_identifier(version)
if is_tmp_ver:
version = None
else:
bibdoc_tmpver = None
if not version: #treating cases of empty string etc...
version = None
bibdoc_tmpid = field_get_subfield_values(fft, 'i')
if bibdoc_tmpid:
bibdoc_tmpid = bibdoc_tmpid[0]
else:
bibdoc_tmpid = None
is_tmp_id, bibdoc_tmpid = parse_identifier(bibdoc_tmpid)
if not is_tmp_id:
bibdoc_tmpid = None
# In the case of temporary ids, we don't resolve them yet but signal that they have been used
# value -1 means that the identifier has been declared but not assigned a value yet
if bibdoc_tmpid:
if bibdoc_tmpid in tmp_ids:
write_message("WARNING: the temporary identifier %s has been declared more than once. Ignoring the second occurrence" % (bibdoc_tmpid, ))
else:
tmp_ids[bibdoc_tmpid] = -1
if bibdoc_tmpver:
if bibdoc_tmpver in tmp_vers:
write_message("WARNING: the temporary version identifier %s has been declared more than once. Ignoring the second occurrence" % (bibdoc_tmpver, ))
else:
tmp_vers[bibdoc_tmpver] = -1
# Let's discover the type of the document
# This is a legacy field and no particular check is
# enforced on it.
doctype = _get_subfield_value(fft, 't', 'Main') #Default is Main
# Let's discover the url.
url = field_get_subfield_values(fft, 'a')
if url:
url = url[0]
try:
check_valid_url(url)
except StandardError, e:
raise StandardError, "fft '%s' specifies in $a a location ('%s') with problems: %s" % (fft, url, e)
else:
url = ''
#TODO: a lot of this code could be compacted using similar syntax ... it should be more readable in the long run
# maybe the right-hand expressions look a bit cryptic, but the elaborate_fft function would be much clearer
if mode == 'correct' and doctype != 'FIX-MARC':
arg2 = ""
else:
arg2 = KEEP_OLD_VALUE
description = _get_subfield_value(fft, 'd', arg2)
# Let's discover the description
# description = field_get_subfield_values(fft, 'd')
# if description != []:
# description = description[0]
# else:
# if mode == 'correct' and doctype != 'FIX-MARC':
## If the user requires a correction and does not specify
## a description, this means she really wants to
## modify the description.
# description = ''
# else:
# description = KEEP_OLD_VALUE
# Let's discover the desired docname to be created/altered
name = field_get_subfield_values(fft, 'n')
if name:
## Let's remove undesired extensions
name = file_strip_ext(name[0] + '.pdf')
else:
if url:
name = get_docname_from_url(url)
elif mode != 'correct' and doctype != 'FIX-MARC':
raise StandardError, "Warning: fft '%s' specifies neither a location in $a nor a docname in $n" % str(fft)
else:
continue
# Let's discover the desired new docname in case we want to change it
newname = field_get_subfield_values(fft, 'm')
if newname:
newname = file_strip_ext(newname[0] + '.pdf')
else:
newname = name
# Let's discover the desired format
docformat = field_get_subfield_values(fft, 'f')
if docformat:
docformat = normalize_format(docformat[0])
else:
if url:
docformat = guess_format_from_url(url)
else:
docformat = ""
# Let's discover the icon
icon = field_get_subfield_values(fft, 'x')
if icon != []:
icon = icon[0]
if icon != KEEP_OLD_VALUE:
try:
check_valid_url(icon)
except StandardError, e:
raise StandardError, "fft '%s' specifies in $x an icon ('%s') with problems: %s" % (fft, icon, e)
else:
icon = ''
# Let's discover the comment
comment = field_get_subfield_values(fft, 'z')
if comment != []:
comment = comment[0]
else:
if mode == 'correct' and doctype != 'FIX-MARC':
## See comment on description
comment = ''
else:
comment = KEEP_OLD_VALUE
# Let's discover the restriction
restriction = field_get_subfield_values(fft, 'r')
if restriction != []:
restriction = restriction[0]
else:
if mode == 'correct' and doctype != 'FIX-MARC':
## See comment on description
restriction = ''
else:
restriction = KEEP_OLD_VALUE
document_moreinfo = _get_subfield_value(fft, 'w')
version_moreinfo = _get_subfield_value(fft, 'p')
version_format_moreinfo = _get_subfield_value(fft, 'b')
format_moreinfo = _get_subfield_value(fft, 'u')
# Let's discover the timestamp of the file (if any)
timestamp = field_get_subfield_values(fft, 's')
if timestamp:
try:
timestamp = datetime(*(time.strptime(timestamp[0], "%Y-%m-%d %H:%M:%S")[:6]))
except ValueError:
write_message('Warning: The timestamp is not in a valid format and will be ignored. The format should be YYYY-MM-DD HH:MM:SS')
timestamp = ''
else:
timestamp = ''
flags = field_get_subfield_values(fft, 'o')
for flag in flags:
if flag not in CFG_BIBDOCFILE_AVAILABLE_FLAGS:
raise StandardError, "fft '%s' specifies a non available flag: %s" % (fft, flag)
if docs.has_key(name): # new format considered
(doctype2, newname2, restriction2, version2, urls, dummybibdoc_moreinfos2, dummybibdoc_tmpid2, dummybibdoc_tmpver2 ) = docs[name]
if doctype2 != doctype:
raise StandardError, "fft '%s' specifies a different doctype from previous fft with docname '%s'" % (str(fft), name)
if newname2 != newname:
raise StandardError, "fft '%s' specifies a different newname from previous fft with docname '%s'" % (str(fft), name)
if restriction2 != restriction:
raise StandardError, "fft '%s' specifies a different restriction from previous fft with docname '%s'" % (str(fft), name)
if version2 != version:
raise StandardError, "fft '%s' specifies a different version than the previous fft with docname '%s'" % (str(fft), name)
for (dummyurl2, format2, dummydescription2, dummycomment2, dummyflags2, dummytimestamp2) in urls:
if docformat == format2:
raise StandardError, "fft '%s' specifies a second file '%s' with the same format '%s' from previous fft with docname '%s'" % (str(fft), url, docformat, name)
if url or docformat:
urls.append((url, docformat, description, comment, flags, timestamp))
if icon:
urls.append((icon, icon[len(file_strip_ext(icon)):] + ';icon', description, comment, flags, timestamp))
else:
if url or docformat:
docs[name] = (doctype, newname, restriction, version, [(url, docformat, description, comment, flags, timestamp)], [document_moreinfo, version_moreinfo, version_format_moreinfo, format_moreinfo], bibdoc_tmpid, bibdoc_tmpver)
if icon:
docs[name][4].append((icon, icon[len(file_strip_ext(icon)):] + ';icon', description, comment, flags, timestamp))
elif icon:
docs[name] = (doctype, newname, restriction, version, [(icon, icon[len(file_strip_ext(icon)):] + ';icon', description, comment, flags, timestamp)], [document_moreinfo, version_moreinfo, version_format_moreinfo, format_moreinfo], bibdoc_tmpid, bibdoc_tmpver)
else:
docs[name] = (doctype, newname, restriction, version, [], [document_moreinfo, version_moreinfo, version_format_moreinfo, format_moreinfo], bibdoc_tmpid, bibdoc_tmpver)
write_message('Result of FFT analysis:\n\tDocs: %s' % (docs,), verbose=9)
# Let's remove all FFT tags
record_delete_field(record, 'FFT', ' ', ' ')
# Preprocessed data elaboration
bibrecdocs = BibRecDocs(rec_id)
## Let's pre-download all the URLs to see if, in case of mode 'correct' or 'append'
## we can avoid creating a new revision.
for docname, (doctype, newname, restriction, version, urls, more_infos, bibdoc_tmpid, bibdoc_tmpver ) in docs.items():
downloaded_urls = []
try:
bibdoc = bibrecdocs.get_bibdoc(docname)
except InvenioBibDocFileError:
## A bibdoc with the given docname does not exist.
## So there is no chance we are going to revise an existing
## format with an identical file :-)
bibdoc = None
new_revision_needed = False
for url, docformat, description, comment, flags, timestamp in urls:
if url:
try:
downloaded_url = download_url(url, docformat)
write_message("%s saved into %s" % (url, downloaded_url), verbose=9)
except Exception, err:
write_message("Error in downloading '%s' because of: %s" % (url, err), stream=sys.stderr)
raise
if mode == 'correct' and bibdoc is not None and not new_revision_needed:
downloaded_urls.append((downloaded_url, docformat, description, comment, flags, timestamp))
if not bibrecdocs.check_file_exists(downloaded_url, docformat):
new_revision_needed = True
else:
write_message("WARNING: %s is already attached to bibdoc %s for recid %s" % (url, docname, rec_id), stream=sys.stderr)
elif mode == 'append' and bibdoc is not None:
if not bibrecdocs.check_file_exists(downloaded_url, docformat):
downloaded_urls.append((downloaded_url, docformat, description, comment, flags, timestamp))
else:
write_message("WARNING: %s is already attached to bibdoc %s for recid %s" % (url, docname, rec_id), stream=sys.stderr)
else:
downloaded_urls.append((downloaded_url, docformat, description, comment, flags, timestamp))
else:
downloaded_urls.append(('', docformat, description, comment, flags, timestamp))
if mode == 'correct' and bibdoc is not None and not new_revision_needed:
## Since we don't need a new revision (because all the files
## being uploaded are identical to the already existing ones)
## we can simply remove the urls but keep the other information
write_message("No need to add a new revision for docname %s for recid %s" % (docname, rec_id), verbose=2)
docs[docname] = (doctype, newname, restriction, version, [('', docformat, description, comment, flags, timestamp) for (dummy, docformat, description, comment, flags, timestamp) in downloaded_urls], more_infos, bibdoc_tmpid, bibdoc_tmpver)
for downloaded_url, dummy, dummy, dummy, dummy, dummy in downloaded_urls:
## Let's free up some space :-)
if downloaded_url and os.path.exists(downloaded_url):
os.remove(downloaded_url)
else:
if downloaded_urls or mode != 'append':
docs[docname] = (doctype, newname, restriction, version, downloaded_urls, more_infos, bibdoc_tmpid, bibdoc_tmpver)
else:
## In case we are in append mode and there are no urls to append
## we discard the whole FFT
del docs[docname]
if mode == 'replace': # First we erase previous bibdocs
if not pretend:
for bibdoc in bibrecdocs.list_bibdocs():
bibdoc.delete()
bibrecdocs.build_bibdoc_list()
for docname, (doctype, newname, restriction, version, urls, more_infos, bibdoc_tmpid, bibdoc_tmpver) in docs.iteritems():
write_message("Elaborating olddocname: '%s', newdocname: '%s', doctype: '%s', restriction: '%s', urls: '%s', mode: '%s'" % (docname, newname, doctype, restriction, urls, mode), verbose=9)
if mode in ('insert', 'replace'): # new bibdocs, new docnames, new marc
if newname in bibrecdocs.get_bibdoc_names():
write_message("('%s', '%s') not inserted because docname already exists." % (newname, urls), stream=sys.stderr)
raise StandardError("('%s', '%s') not inserted because docname already exists." % (newname, urls))
try:
if not pretend:
bibdoc = bibrecdocs.add_bibdoc(doctype, newname)
bibdoc.set_status(restriction)
else:
bibdoc = None
except Exception, e:
write_message("('%s', '%s', '%s') not inserted because: '%s'." % (doctype, newname, urls, e), stream=sys.stderr)
raise e
for (url, docformat, description, comment, flags, timestamp) in urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp, pretend=pretend))
elif mode == 'replace_or_insert': # to be thought as correct_or_insert
for bibdoc in bibrecdocs.list_bibdocs():
brd = BibRecDocs(rec_id)
dn = brd.get_docname(bibdoc.id)
if dn == docname:
if doctype not in ('PURGE', 'DELETE', 'EXPUNGE', 'REVERT', 'FIX-ALL', 'FIX-MARC', 'DELETE-FILE'):
if newname != docname:
try:
if not pretend:
bibrecdocs.change_name(newname = newname, docid = bibdoc.id)
## Let's refresh the list of bibdocs.
bibrecdocs.build_bibdoc_list()
except StandardError, e:
write_message(e, stream=sys.stderr)
raise
found_bibdoc = False
for bibdoc in bibrecdocs.list_bibdocs():
brd = BibRecDocs(rec_id)
dn = brd.get_docname(bibdoc.id)
if dn == newname:
found_bibdoc = True
if doctype == 'PURGE':
if not pretend:
bibdoc.purge()
elif doctype == 'DELETE':
if not pretend:
bibdoc.delete()
elif doctype == 'EXPUNGE':
if not pretend:
bibdoc.expunge()
elif doctype == 'FIX-ALL':
if not pretend:
bibrecdocs.fix(docname)
elif doctype == 'FIX-MARC':
pass
elif doctype == 'DELETE-FILE':
if urls:
for (url, docformat, description, comment, flags, timestamp) in urls:
if not pretend:
bibdoc.delete_file(docformat, version)
elif doctype == 'REVERT':
try:
if not pretend:
bibdoc.revert(version)
except Exception, e:
write_message('(%s, %s) not correctly reverted: %s' % (newname, version, e), stream=sys.stderr)
raise
else:
if restriction != KEEP_OLD_VALUE:
if not pretend:
bibdoc.set_status(restriction)
# Since the docname already existed we have to first
# bump the version by pushing the first new file
# then pushing the other files.
if urls:
(first_url, first_format, first_description, first_comment, first_flags, first_timestamp) = urls[0]
other_urls = urls[1:]
assert(_add_new_version(bibdoc, first_url, first_format, docname, doctype, newname, first_description, first_comment, first_flags, first_timestamp, pretend=pretend))
for (url, docformat, description, comment, flags, timestamp) in other_urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp, pretend=pretend))
## Let's refresh the list of bibdocs.
bibrecdocs.build_bibdoc_list()
if not found_bibdoc:
if not pretend:
bibdoc = bibrecdocs.add_bibdoc(doctype, newname)
bibdoc.set_status(restriction)
for (url, docformat, description, comment, flags, timestamp) in urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp))
elif mode == 'correct':
for bibdoc in bibrecdocs.list_bibdocs():
brd = BibRecDocs(rec_id)
dn = brd.get_docname(bibdoc.id)
if dn == docname:
if doctype not in ('PURGE', 'DELETE', 'EXPUNGE', 'REVERT', 'FIX-ALL', 'FIX-MARC', 'DELETE-FILE'):
if newname != docname:
try:
if not pretend:
bibrecdocs.change_name(docid = bibdoc.id, newname=newname)
## Let's refresh the list of bibdocs.
bibrecdocs.build_bibdoc_list()
except StandardError, e:
write_message('Error in renaming %s to %s: %s' % (docname, newname, e), stream=sys.stderr)
raise
found_bibdoc = False
for bibdoc in bibrecdocs.list_bibdocs():
brd = BibRecDocs(rec_id)
dn = brd.get_docname(bibdoc.id)
if dn == newname:
found_bibdoc = True
if doctype == 'PURGE':
if not pretend:
bibdoc.purge()
elif doctype == 'DELETE':
if not pretend:
bibdoc.delete()
elif doctype == 'EXPUNGE':
if not pretend:
bibdoc.expunge()
elif doctype == 'FIX-ALL':
if not pretend:
bibrecdocs.fix(newname)
elif doctype == 'FIX-MARC':
pass
elif doctype == 'DELETE-FILE':
if urls:
for (url, docformat, description, comment, flags, timestamp) in urls:
if not pretend:
bibdoc.delete_file(docformat, version)
elif doctype == 'REVERT':
try:
if not pretend:
bibdoc.revert(version)
except Exception, e:
write_message('(%s, %s) not correctly reverted: %s' % (newname, version, e), stream=sys.stderr)
raise
else:
if restriction != KEEP_OLD_VALUE:
if not pretend:
bibdoc.set_status(restriction)
if doctype and doctype != KEEP_OLD_VALUE:
if not pretend:
bibdoc.change_doctype(doctype)
if urls:
(first_url, first_format, first_description, first_comment, first_flags, first_timestamp) = urls[0]
other_urls = urls[1:]
assert(_add_new_version(bibdoc, first_url, first_format, docname, doctype, newname, first_description, first_comment, first_flags, first_timestamp, pretend=pretend))
for (url, docformat, description, comment, flags, timestamp) in other_urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp, pretend=pretend))
## Let's refresh the list of bibdocs.
bibrecdocs.build_bibdoc_list()
if not found_bibdoc:
if doctype in ('PURGE', 'DELETE', 'EXPUNGE', 'FIX-ALL', 'FIX-MARC', 'DELETE-FILE', 'REVERT'):
write_message("('%s', '%s', '%s') not performed because '%s' docname did not exist." % (doctype, newname, urls, docname), stream=sys.stderr)
raise StandardError
else:
if not pretend:
bibdoc = bibrecdocs.add_bibdoc(doctype, newname)
bibdoc.set_status(restriction)
for (url, docformat, description, comment, flags, timestamp) in urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp))
elif mode == 'append':
try:
found_bibdoc = False
for bibdoc in bibrecdocs.list_bibdocs():
brd = BibRecDocs(rec_id)
dn = brd.get_docname(bibdoc.id)
if dn == docname:
found_bibdoc = True
for (url, docformat, description, comment, flags, timestamp) in urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp, pretend=pretend))
if not found_bibdoc:
try:
if not pretend:
bibdoc = bibrecdocs.add_bibdoc(doctype, docname)
bibdoc.set_status(restriction)
for (url, docformat, description, comment, flags, timestamp) in urls:
assert(_add_new_format(bibdoc, url, docformat, docname, doctype, newname, description, comment, flags, timestamp))
except Exception, e:
register_exception()
write_message("('%s', '%s', '%s') not appended because: '%s'." % (doctype, newname, urls, e), stream=sys.stderr)
raise
except:
register_exception()
raise
if not pretend:
_process_document_moreinfos(more_infos, newname, version, urls and urls[0][1], mode)
# resolving temporary version and identifier
brd = BibRecDocs(rec_id)
if bibdoc_tmpid:
if bibdoc_tmpid in tmp_ids and tmp_ids[bibdoc_tmpid] != -1:
write_message("WARNING: the temporary identifier %s has been declared more than once. Ignoring the second occurrence" % (bibdoc_tmpid, ))
else:
tmp_ids[bibdoc_tmpid] = brd.get_docid(docname)
if bibdoc_tmpver:
if bibdoc_tmpver in tmp_vers and tmp_vers[bibdoc_tmpver] != -1:
write_message("WARNING: the temporary version identifier %s has been declared more than once. Ignoring the second occurrence" % (bibdoc_tmpver, ))
else:
if version is None:
tmp_vers[bibdoc_tmpver] = brd.get_bibdoc(docname).get_latest_version()
else:
tmp_vers[bibdoc_tmpver] = version
return record
### Update functions
def update_bibrec_date(now, bibrec_id, insert_mode_p, pretend=False):
"""Update the date of the record in bibrec table """
if insert_mode_p:
query = """UPDATE bibrec SET creation_date=%s, modification_date=%s WHERE id=%s"""
params = (now, now, bibrec_id)
else:
query = """UPDATE bibrec SET modification_date=%s WHERE id=%s"""
params = (now, bibrec_id)
if not pretend:
run_sql(query, params)
write_message(" -Update record creation/modification date: DONE" , verbose=2)
def update_bibfmt_format(id_bibrec, format_value, format_name, modification_date=None, pretend=False):
"""Update the format in the table bibfmt"""
if modification_date is None:
modification_date = time.strftime('%Y-%m-%d %H:%M:%S')
else:
try:
time.strptime(modification_date, "%Y-%m-%d %H:%M:%S")
except ValueError:
modification_date = '1970-01-01 00:00:00'
# We check if the format is already in bibFmt
nb_found = find_record_format(id_bibrec, format_name)
if nb_found == 1:
# we are going to update the format
# compress the format_value value
pickled_format_value = compress(format_value)
# update the format:
query = """UPDATE LOW_PRIORITY bibfmt SET last_updated=%s, value=%s WHERE id_bibrec=%s AND format=%s"""
params = (modification_date, pickled_format_value, id_bibrec, format_name)
if not pretend:
row_id = run_sql(query, params)
if not pretend and row_id is None:
write_message(" Failed: Error during update_bibfmt_format function", verbose=1, stream=sys.stderr)
return 1
else:
write_message(" -Update the format %s in bibfmt: DONE" % format_name , verbose=2)
return 0
elif nb_found > 1:
write_message(" Failed: Same format %s found several times in bibfmt for the same record." % format_name, verbose=1, stream=sys.stderr)
return 1
else:
# Insert the format information in BibFMT
res = insert_bibfmt(id_bibrec, format_value, format_name, modification_date, pretend=pretend)
if res is None:
write_message(" Failed: Error during insert_bibfmt", verbose=1, stream=sys.stderr)
return 1
else:
write_message(" -Insert the format %s in bibfmt: DONE" % format_name , verbose=2)
return 0
def delete_bibfmt_format(id_bibrec, format_name, pretend=False):
"""
Delete format FORMAT_NAME from bibfmt table for record ID_BIBREC.
"""
if not pretend:
run_sql("DELETE LOW_PRIORITY FROM bibfmt WHERE id_bibrec=%s and format=%s", (id_bibrec, format_name))
return 0
-def archive_marcxml_for_history(recID, pretend=False):
+
+def archive_marcxml_for_history(recID, affected_fields, pretend=False):
"""
Archive current MARCXML format of record RECID from BIBFMT table
into hstRECORD table. Useful to keep MARCXML history of records.
Return 0 if everything went fine. Return 1 otherwise.
"""
res = run_sql("SELECT id_bibrec, value, last_updated FROM bibfmt WHERE format='xm' AND id_bibrec=%s",
(recID,))
+
+ db_affected_fields = ""
+ if affected_fields:
+ tmp_affected_fields = {}
+ for field in affected_fields:
+ if field.isdigit(): #hack for tags from RevisionVerifier
+ for ind in affected_fields[field]:
+ tmp_affected_fields[(field + ind[0] + ind[1] + "%").replace(" ", "_")] = 1
+ else:
+ pass #future implementation for fields
+ tmp_affected_fields = tmp_affected_fields.keys()
+ tmp_affected_fields.sort()
+ db_affected_fields = ",".join(tmp_affected_fields)
if res and not pretend:
- run_sql("""INSERT INTO hstRECORD (id_bibrec, marcxml, job_id, job_name, job_person, job_date, job_details)
- VALUES (%s,%s,%s,%s,%s,%s,%s)""",
+ run_sql("""INSERT INTO hstRECORD (id_bibrec, marcxml, job_id, job_name, job_person, job_date, job_details, affected_fields)
+ VALUES (%s,%s,%s,%s,%s,%s,%s,%s)""",
(res[0][0], res[0][1], task_get_task_param('task_id', 0), 'bibupload', task_get_task_param('user', 'UNKNOWN'), res[0][2],
- 'mode: ' + task_get_option('mode', 'UNKNOWN') + '; file: ' + task_get_option('file_path', 'UNKNOWN') + '.'))
+ 'mode: ' + task_get_option('mode', 'UNKNOWN') + '; file: ' + task_get_option('file_path', 'UNKNOWN') + '.',
+ db_affected_fields))
return 0
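The `affected_fields` serialization added in the hunk above turns tag + indicator pairs into patterns such as `100_1%` before storing them in `hstRECORD.affected_fields`. A standalone sketch of that serialization (hypothetical function name, same replacement rules as the code above):

```python
def format_affected_fields(affected_fields):
    # affected_fields maps a tag to its affected (ind1, ind2) pairs,
    # e.g. {'100': [(' ', '1')]}; blank indicators become '_' and a
    # trailing '%' is appended, as in archive_marcxml_for_history
    patterns = {}
    for field, ind_pairs in affected_fields.items():
        if field.isdigit():  # hack for tags from RevisionVerifier
            for ind in ind_pairs:
                patterns[(field + ind[0] + ind[1] + "%").replace(" ", "_")] = 1
    return ",".join(sorted(patterns))
```

Sorting before joining makes the stored string deterministic, which keeps the history table diff-friendly.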
def update_database_with_metadata(record, rec_id, oai_rec_id="oai", affected_tags=None, pretend=False):
"""Update the database tables with the record and the record id given in parameter"""
# extract only those tags that have been affected.
# check happens at subfield level. This is to prevent the overhead
# associated with inserting an already existing field with a given ind pair
write_message("update_database_with_metadata: record=%s, rec_id=%s, oai_rec_id=%s, affected_tags=%s" % (record, rec_id, oai_rec_id, affected_tags), verbose=9)
tmp_record = {}
if affected_tags:
for tag in record.keys():
if tag in affected_tags.keys():
write_message(" -Tag %s found to be modified. Setting up for update" % tag, verbose=9)
# initialize new list to hold affected field
new_data_tuple_list = []
for data_tuple in record[tag]:
ind1 = data_tuple[1]
ind2 = data_tuple[2]
if (ind1, ind2) in affected_tags[tag]:
write_message(" -Indicator pair (%s, %s) added to update list" % (ind1, ind2), verbose=9)
new_data_tuple_list.append(data_tuple)
tmp_record[tag] = new_data_tuple_list
write_message(lambda: " -Modified fields: \n%s" % record_xml_output(tmp_record), verbose=2)
else:
tmp_record = record
for tag in tmp_record.keys():
# check if tag is not a special one:
if tag not in CFG_BIBUPLOAD_SPECIAL_TAGS:
# for each tag there is a list of tuples representing datafields
tuple_list = tmp_record[tag]
# this list should contain the elements of a full tag [tag, ind1, ind2, subfield_code]
tag_list = []
tag_list.append(tag)
for single_tuple in tuple_list:
# these are the contents of a single tuple
subfield_list = single_tuple[0]
ind1 = single_tuple[1]
ind2 = single_tuple[2]
# append the ind's to the full tag
if ind1 == '' or ind1 == ' ':
tag_list.append('_')
else:
tag_list.append(ind1)
if ind2 == '' or ind2 == ' ':
tag_list.append('_')
else:
tag_list.append(ind2)
datafield_number = single_tuple[4]
if tag in CFG_BIBUPLOAD_SPECIAL_TAGS:
# nothing to do for special tags (FFT, BDR, BDM)
pass
elif tag in CFG_BIBUPLOAD_CONTROLFIELD_TAGS and tag != "001":
value = single_tuple[3]
# get the full tag
full_tag = ''.join(tag_list)
# update the tables
write_message(" insertion of the tag "+full_tag+" with the value "+value, verbose=9)
# insert the tag and value into bibxxx
(table_name, bibxxx_row_id) = insert_record_bibxxx(full_tag, value, pretend=pretend)
if table_name is None or bibxxx_row_id is None:
write_message(" Failed: during insert_record_bibxxx", verbose=1, stream=sys.stderr)
# connect bibxxx and bibrec with the table bibrec_bibxxx
res = insert_record_bibrec_bibxxx(table_name, bibxxx_row_id, datafield_number, rec_id, pretend=pretend)
if res is None:
write_message(" Failed: during insert_record_bibrec_bibxxx", verbose=1, stream=sys.stderr)
else:
# get the tag and value from the content of each subfield
for subfield in subfield_list:
subtag = subfield[0]
value = subfield[1]
tag_list.append(subtag)
# get the full tag
full_tag = ''.join(tag_list)
# update the tables
write_message(" insertion of the tag "+full_tag+" with the value "+value, verbose=9)
# insert the tag and value into bibxxx
(table_name, bibxxx_row_id) = insert_record_bibxxx(full_tag, value, pretend=pretend)
if table_name is None or bibxxx_row_id is None:
write_message(" Failed: during insert_record_bibxxx", verbose=1, stream=sys.stderr)
# connect bibxxx and bibrec with the table bibrec_bibxxx
res = insert_record_bibrec_bibxxx(table_name, bibxxx_row_id, datafield_number, rec_id, pretend=pretend)
if res is None:
write_message(" Failed: during insert_record_bibrec_bibxxx", verbose=1, stream=sys.stderr)
# remove the subtag from the list
tag_list.pop()
tag_list.pop()
tag_list.pop()
tag_list.pop()
write_message(" -Update the database with metadata: DONE", verbose=2)
log_record_uploading(oai_rec_id, task_get_task_param('task_id', 0), rec_id, 'P', pretend=pretend)
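`update_database_with_metadata` assembles the full bibxxx tag out of the MARC tag, the two indicators (blanks stored as `_`) and, for datafields, the subfield code. That assembly can be sketched on its own; `build_full_tag` is a hypothetical helper name:

```python
def build_full_tag(tag, ind1, ind2, subfield_code=''):
    # blank or empty indicators are stored as '_' in the bibxxx tables
    ind1 = '_' if ind1 in ('', ' ') else ind1
    ind2 = '_' if ind2 in ('', ' ') else ind2
    return tag + ind1 + ind2 + subfield_code
```

For a controlfield the subfield code is simply left empty, matching the branch for `CFG_BIBUPLOAD_CONTROLFIELD_TAGS` above.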
def append_new_tag_to_old_record(record, rec_old):
"""Append new tags to a old record"""
def _append_tag(tag):
if tag in CFG_BIBUPLOAD_CONTROLFIELD_TAGS:
if tag == '001':
pass
else:
# if it is a controlfield, just access the value
for single_tuple in record[tag]:
controlfield_value = single_tuple[3]
# add the field to the old record
newfield_number = record_add_field(rec_old, tag,
controlfield_value=controlfield_value)
if newfield_number is None:
write_message(" Error when adding the field"+tag, verbose=1, stream=sys.stderr)
else:
# For each tag there is a list of tuples representing datafields
for single_tuple in record[tag]:
# We retrieve the information of the tag
subfield_list = single_tuple[0]
ind1 = single_tuple[1]
ind2 = single_tuple[2]
if '%s%s%s' % (tag, ind1 == ' ' and '_' or ind1, ind2 == ' ' and '_' or ind2) in (CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[:5], CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[:5]):
## We don't want to append the external identifier
## if it is already existing.
if record_find_field(rec_old, tag, single_tuple)[0] is not None:
write_message(" Not adding tag: %s ind1=%s ind2=%s subfields=%s: it's already there" % (tag, ind1, ind2, subfield_list), verbose=9)
continue
# We add the datafield to the old record
write_message(" Adding tag: %s ind1=%s ind2=%s subfields=%s" % (tag, ind1, ind2, subfield_list), verbose=9)
newfield_number = record_add_field(rec_old, tag, ind1,
ind2, subfields=subfield_list)
if newfield_number is None:
write_message(" Error when adding the field"+tag, verbose=1, stream=sys.stderr)
# Go through each tag in the appended record
for tag in record:
_append_tag(tag)
return rec_old
def copy_strong_tags_from_old_record(record, rec_old):
"""
Look for strong tags in RECORD and REC_OLD. If no strong tags are
found in RECORD, then copy them over from REC_OLD. This function
modifies RECORD structure on the spot.
"""
for strong_tag in CFG_BIBUPLOAD_STRONG_TAGS:
if not record_get_field_instances(record, strong_tag, strong_tag[3:4] or '%', strong_tag[4:5] or '%'):
strong_tag_old_field_instances = record_get_field_instances(rec_old, strong_tag)
if strong_tag_old_field_instances:
for strong_tag_old_field_instance in strong_tag_old_field_instances:
sf_vals, fi_ind1, fi_ind2, controlfield, dummy = strong_tag_old_field_instance
record_add_field(record, strong_tag, fi_ind1, fi_ind2, controlfield, sf_vals)
return
### Delete functions
def delete_tags(record, rec_old):
"""
Returns a record structure with all the fields in rec_old minus the
fields in record.
@param record: The record containing tags to delete.
@type record: record structure
@param rec_old: The original record.
@type rec_old: record structure
@return: The modified record.
@rtype: record structure
"""
returned_record = copy.deepcopy(rec_old)
for tag, fields in record.iteritems():
if tag in ('001', ):
continue
for field in fields:
local_position = record_find_field(returned_record, tag, field)[1]
if local_position is not None:
record_delete_field(returned_record, tag, field_position_local=local_position)
return returned_record
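The deletion logic above (deep-copy the old record, then drop every field that also appears in the incoming record, always preserving 001) can be sketched over the plain record structure, where a record maps a tag to a list of field tuples. This sketch compares whole field tuples directly, whereas the real code delegates matching to `record_find_field`:

```python
import copy

def delete_matching_fields(record, rec_old):
    # record: the fields to delete; rec_old: the original record
    result = copy.deepcopy(rec_old)
    for tag, fields in record.items():
        if tag == '001' or tag not in result:
            continue  # never touch the record id; skip unknown tags
        result[tag] = [f for f in result[tag] if f not in fields]
    return result
```

As in `delete_tags`, the original `rec_old` is left untouched and a modified copy is returned.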
def delete_tags_to_correct(record, rec_old):
"""
Delete tags from REC_OLD which are also existing in RECORD. When
deleting, pay attention not only to tags, but also to indicators,
so that fields with the same tags but different indicators are not
deleted.
"""
## Some fields are controlled via provenance information.
## We should re-add saved fields at the end.
fields_to_readd = {}
for tag in CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS:
if tag[:3] in record:
tmp_field_instances = record_get_field_instances(record, tag[:3], tag[3], tag[4]) ## Let's discover the provenance that will be updated
provenances_to_update = []
for instance in tmp_field_instances:
for code, value in instance[0]:
if code == tag[5]:
if value not in provenances_to_update:
provenances_to_update.append(value)
break
else:
## The provenance is not specified.
## let's add the special empty provenance.
if '' not in provenances_to_update:
provenances_to_update.append('')
potential_fields_to_readd = record_get_field_instances(rec_old, tag[:3], tag[3], tag[4]) ## Let's take all the field corresponding to tag
## Let's save apart all the fields that should be updated, but
## since they have a different provenance not mentioned in record
## they should be preserved.
fields = []
for sf_vals, ind1, ind2, dummy_cf, dummy_line in potential_fields_to_readd:
for code, value in sf_vals:
if code == tag[5]:
if value not in provenances_to_update:
fields.append(sf_vals)
break
else:
if '' not in provenances_to_update:
## Empty provenance, let's protect in any case
fields.append(sf_vals)
fields_to_readd[tag] = fields
# browse through all the tags from the MARCXML file:
for tag in record:
# check if the tag exists in the old record too:
if tag in rec_old and tag != '001':
# the tag does exist, so delete all record's tag+ind1+ind2 combinations from rec_old
for dummy_sf_vals, ind1, ind2, dummy_cf, dummyfield_number in record[tag]:
write_message(" Delete tag: " + tag + " ind1=" + ind1 + " ind2=" + ind2, verbose=9)
record_delete_field(rec_old, tag, ind1, ind2)
## Ok, we readd necessary fields!
for tag, fields in fields_to_readd.iteritems():
for sf_vals in fields:
write_message(" Adding tag: " + tag[:3] + " ind1=" + tag[3] + " ind2=" + tag[4] + " code=" + str(sf_vals), verbose=9)
record_add_field(rec_old, tag[:3], tag[3], tag[4], subfields=sf_vals)
def delete_bibrec_bibxxx(record, id_bibrec, affected_tags={}, pretend=False):
"""Delete the database record from the table bibxxx given in parameters"""
# we clear all the rows from bibrec_bibxxx from the old record
# clearing only those tags that have been modified.
write_message(lambda: "delete_bibrec_bibxxx(record=%s, id_bibrec=%s, affected_tags=%s)" % (record, id_bibrec, affected_tags), verbose=9)
for tag in affected_tags:
# sanity check with record keys just to make sure it's fine.
if tag not in CFG_BIBUPLOAD_SPECIAL_TAGS:
write_message("%s found in record"%tag, verbose=2)
# for each name construct the bibrec_bibxxx table name
table_name = 'bib'+tag[0:2]+'x'
bibrec_table = 'bibrec_'+table_name
# delete all the records with proper id_bibrec. Indicators matter for individual affected tags
tmp_ind_1 = ''
tmp_ind_2 = ''
# construct exact tag value using indicators
for ind_pair in affected_tags[tag]:
if ind_pair[0] == ' ':
tmp_ind_1 = '_'
else:
tmp_ind_1 = ind_pair[0]
if ind_pair[1] == ' ':
tmp_ind_2 = '_'
else:
tmp_ind_2 = ind_pair[1]
# need to escape in case of underscore so that MySQL treats it as a literal character
tag_val = tag+"\\"+tmp_ind_1+"\\"+tmp_ind_2 + '%'
query = """DELETE br.* FROM `%s` br,`%s` b where br.id_bibrec=%%s and br.id_bibxxx=b.id and b.tag like %%s""" % (bibrec_table, table_name)
params = (id_bibrec, tag_val)
write_message(query % params, verbose=9)
if not pretend:
run_sql(query, params)
else:
write_message("%s not found"%tag, verbose=2)
def main():
"""Main that construct all the bibtask."""
task_init(authorization_action='runbibupload',
authorization_msg="BibUpload Task Submission",
description="""Receive MARC XML file and update appropriate database
tables according to options.
Examples:
$ bibupload -i input.xml
""",
help_specific_usage=""" -a, --append\t\tnew fields are appended to the existing record
-c, --correct\t\tfields are replaced by the new ones in the existing record, except
\t\t\twhen overridden by CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS
-i, --insert\t\tinsert the new record in the database
-r, --replace\t\tthe existing record is entirely replaced by the new one,
\t\t\texcept for fields in CFG_BIBUPLOAD_STRONG_TAGS
-d, --delete\t\tspecified fields are deleted in existing record
-n, --notimechange\tdo not change record last modification date when updating
-o, --holdingpen\tInsert record into holding pen instead of the normal database
--pretend\t\tdo not really insert/append/correct/replace the input file
--force\t\twhen --replace, use provided 001 tag values, even if the matching
\t\t\trecord does not exist (thus allocating it on-the-fly)
--callback-url\tSend via a POST request a JSON-serialized answer (see admin guide), in
\t\t\torder to provide a feedback to an external service about the outcome of the operation.
--nonce\t\twhen used together with --callback add the nonce value in the JSON message.
--special-treatment=MODE\tif "oracle" is specified, when used together with --callback_url,
\t\t\tPOST an application/x-www-form-urlencoded request where the JSON message is encoded
\t\t\tinside a form field called "results".
""",
version=__revision__,
specific_params=("ircazdnoS:",
[
"insert",
"replace",
"correct",
"append",
"reference",
"delete",
"notimechange",
"holdingpen",
"pretend",
"force",
"callback-url=",
"nonce=",
"special-treatment=",
"stage=",
]),
task_submit_elaborate_specific_parameter_fnc=task_submit_elaborate_specific_parameter,
task_run_fnc=task_run_core)
def task_submit_elaborate_specific_parameter(key, value, opts, args): # pylint: disable=W0613
""" Given the string key it checks it's meaning, eventually using the
value. Usually it fills some key in the options dict.
It must return True if it has elaborated the key, False, if it doesn't
know that key.
eg:
if key in ['-n', '--number']:
task_get_option(\1) = value
return True
return False
"""
# No time change option
if key in ("-n", "--notimechange"):
task_set_option('notimechange', 1)
# Insert mode option
elif key in ("-i", "--insert"):
if task_get_option('mode') == 'replace':
# if also replace found, then set to replace_or_insert
task_set_option('mode', 'replace_or_insert')
else:
task_set_option('mode', 'insert')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
# Replace mode option
elif key in ("-r", "--replace"):
if task_get_option('mode') == 'insert':
# if also insert found, then set to replace_or_insert
task_set_option('mode', 'replace_or_insert')
else:
task_set_option('mode', 'replace')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
# Holding pen mode option
elif key in ("-o", "--holdingpen"):
write_message("Holding pen mode", verbose=3)
task_set_option('mode', 'holdingpen')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
# Correct mode option
elif key in ("-c", "--correct"):
task_set_option('mode', 'correct')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
# Append mode option
elif key in ("-a", "--append"):
task_set_option('mode', 'append')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
# Deprecated reference mode option (now correct)
elif key in ("-z", "--reference"):
task_set_option('mode', 'correct')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
elif key in ("-d", "--delete"):
task_set_option('mode', 'delete')
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
elif key in ("--pretend",):
task_set_option('pretend', True)
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
elif key in ("--force",):
task_set_option('force', True)
fix_argv_paths([args[0]])
task_set_option('file_path', os.path.abspath(args[0]))
elif key in ("--callback-url", ):
task_set_option('callback_url', value)
elif key in ("--nonce", ):
task_set_option('nonce', value)
elif key in ("--special-treatment", ):
if value.lower() in CFG_BIBUPLOAD_ALLOWED_SPECIAL_TREATMENTS:
if value.lower() == 'oracle':
task_set_option('oracle_friendly', True)
else:
print >> sys.stderr, """The specified value is not in the list of allowed special treatments codes: %s""" % CFG_BIBUPLOAD_ALLOWED_SPECIAL_TREATMENTS
return False
elif key in ("-S", "--stage"):
print >> sys.stderr, """WARNING: the --stage parameter is deprecated and ignored."""
else:
return False
return True
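In the `-i`/`-r` branches above, passing both flags upgrades the mode to `replace_or_insert`. That merging rule can be isolated as a hypothetical helper:

```python
def combine_mode(current_mode, requested_mode):
    # -i after -r (or -r after -i) upgrades the mode to replace_or_insert;
    # otherwise the newly requested mode simply wins
    if set([current_mode, requested_mode]) == set(['insert', 'replace']):
        return 'replace_or_insert'
    return requested_mode
```

This mirrors the symmetric checks in the `--insert` and `--replace` handlers, where each looks for the other's mode already being set.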
def task_submit_check_options():
""" Reimplement this method for having the possibility to check options
before submitting the task, in order for example to provide default
values. It must return False if there are errors in the options.
"""
if task_get_option('mode') is None:
write_message("Please specify at least one update/insert mode!")
return False
if task_get_option('file_path') is None:
write_message("Missing filename! -h for help.")
return False
return True
def writing_rights_p():
"""Return True in case bibupload has the proper rights to write in the
fulltext file folder."""
if _WRITING_RIGHTS is not None:
return _WRITING_RIGHTS
try:
if not os.path.exists(CFG_BIBDOCFILE_FILEDIR):
os.makedirs(CFG_BIBDOCFILE_FILEDIR)
fd, filename = tempfile.mkstemp(suffix='.txt', prefix='test', dir=CFG_BIBDOCFILE_FILEDIR)
test = os.fdopen(fd, 'w')
test.write('TEST')
test.close()
if open(filename).read() != 'TEST':
raise IOError("Can not successfully write and readback %s" % filename)
os.remove(filename)
except:
register_exception(alert_admin=True)
return False
return True
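`writing_rights_p` probes `CFG_BIBDOCFILE_FILEDIR` by writing a scratch file and reading it back. A self-contained version of that probe for an arbitrary directory (hypothetical name, without the module-level caching and `register_exception` call):

```python
import os
import tempfile

def can_write_to(directory):
    # probe by writing a temp file, reading it back, and cleaning up
    try:
        if not os.path.exists(directory):
            os.makedirs(directory)
        fd, filename = tempfile.mkstemp(suffix='.txt', prefix='test', dir=directory)
        handle = os.fdopen(fd, 'w')
        handle.write('TEST')
        handle.close()
        ok = open(filename).read() == 'TEST'
        os.remove(filename)
        return ok
    except (IOError, OSError):
        return False
```

The read-back (rather than a bare permission-bit check) also catches full disks and misbehaving network filesystems, which is why the original does it this way.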
def post_results_to_callback_url(results, callback_url):
write_message("Sending feedback to %s" % callback_url)
if not CFG_JSON_AVAILABLE:
from warnings import warn
warn("--callback-url used but simplejson/json not available")
return
json_results = json.dumps(results)
write_message("Message to send: %s" % json_results, verbose=9)
## <scheme>://<netloc>/<path>?<query>#<fragment>
scheme, dummynetloc, dummypath, dummyquery, dummyfragment = urlparse.urlsplit(callback_url)
## See: http://stackoverflow.com/questions/111945/is-there-any-way-to-do-http-put-in-python
if scheme == 'http':
opener = urllib2.build_opener(urllib2.HTTPHandler)
elif scheme == 'https':
opener = urllib2.build_opener(urllib2.HTTPSHandler)
else:
raise ValueError("Scheme not handled %s for callback_url %s" % (scheme, callback_url))
if task_get_option('oracle_friendly'):
write_message("Oracle friendly mode requested", verbose=9)
request = urllib2.Request(callback_url, data=urllib.urlencode({'results': json_results}))
request.add_header('Content-Type', 'application/x-www-form-urlencoded')
else:
request = urllib2.Request(callback_url, data=json_results)
request.add_header('Content-Type', 'application/json')
request.add_header('User-Agent', make_user_agent_string('BibUpload'))
write_message("Headers about to be sent: %s" % request.headers, verbose=9)
write_message("Data about to be sent: %s" % request.data, verbose=9)
res = opener.open(request)
msg = res.read()
write_message("Result of posting the feedback: %s %s" % (res.code, res.msg), verbose=9)
write_message("Returned message is: %s" % msg, verbose=9)
return res
def bibupload_records(records, opt_mode=None, opt_notimechange=0,
pretend=False, callback_url=None, results_for_callback=None):
"""perform the task of uploading a set of records
returns list of (error_code, recid) tuples for separate records
"""
#Dictionaries maintaining temporary identifiers
# Structure: identifier -> number
tmp_ids = {}
tmp_vers = {}
results = []
# The first phase -> assigning meaning to temporary identifiers
if opt_mode == 'reference':
## NOTE: reference mode has been deprecated in favour of 'correct'
opt_mode = 'correct'
record = None
for record in records:
record_id = record_extract_oai_id(record)
task_sleep_now_if_required(can_stop_too=True)
if opt_mode == "holdingpen":
#inserting into the holding pen
write_message("Inserting into holding pen", verbose=3)
- insert_record_into_holding_pen(record, record_id)
+ insert_record_into_holding_pen(record, record_id, pretend=pretend)
else:
write_message("Inserting into main database", verbose=3)
error = bibupload(
record,
opt_mode = opt_mode,
opt_notimechange = opt_notimechange,
oai_rec_id = record_id,
pretend = pretend,
tmp_ids = tmp_ids,
tmp_vers = tmp_vers)
results.append(error)
if error[0] == 1:
if record:
write_message(lambda: record_xml_output(record),
stream=sys.stderr)
else:
write_message("Record could not have been parsed",
stream=sys.stderr)
stat['nb_errors'] += 1
if callback_url:
results_for_callback['results'].append({'recid': error[1], 'success': False, 'error_message': error[2]})
elif error[0] == 2:
if record:
write_message(lambda: record_xml_output(record),
stream=sys.stderr)
else:
write_message("Record could not have been parsed",
stream=sys.stderr)
stat['nb_holdingpen'] += 1
if callback_url:
results_for_callback['results'].append({'recid': error[1], 'success': False, 'error_message': error[2]})
elif error[0] == 0:
if callback_url:
from invenio.search_engine import print_record
results_for_callback['results'].append({'recid': error[1], 'success': True, "marcxml": print_record(error[1], 'xm'), 'url': "%s/%s/%s" % (CFG_SITE_URL, CFG_SITE_RECORD, error[1])})
else:
if callback_url:
results_for_callback['results'].append({'recid': error[1], 'success': False, 'error_message': error[2]})
# stat is a global variable
task_update_progress("Done %d out of %d." % \
(stat['nb_records_inserted'] + \
stat['nb_records_updated'],
stat['nb_records_to_upload']))
# Second phase -> Now we can process all entries where temporary identifiers might appear (BDR, BDM)
write_message("Identifiers table after processing: %s versions: %s" % (str(tmp_ids), str(tmp_vers)))
write_message("Uploading BDR and BDM fields")
if opt_mode != "holdingpen":
for record in records:
record_id = retrieve_rec_id(record, opt_mode, pretend=pretend, post_phase = True)
bibupload_post_phase(record,
rec_id = record_id,
mode = opt_mode,
pretend = pretend,
tmp_ids = tmp_ids,
tmp_vers = tmp_vers)
return results
def task_run_core():
""" Reimplement to add the body of the task."""
write_message("Input file '%s', input mode '%s'." %
(task_get_option('file_path'), task_get_option('mode')))
write_message("STAGE 0:", verbose=2)
if task_get_option('file_path') is not None:
write_message("start preocessing", verbose=3)
task_update_progress("Reading XML input")
recs = xml_marc_to_records(open_marc_file(task_get_option('file_path')))
stat['nb_records_to_upload'] = len(recs)
write_message(" -Open XML marc: DONE", verbose=2)
task_sleep_now_if_required(can_stop_too=True)
write_message("Entering records loop", verbose=3)
callback_url = task_get_option('callback_url')
results_for_callback = {'results': []}
if recs is not None:
# We process the records one by one
bibupload_records(records=recs, opt_mode=task_get_option('mode'),
opt_notimechange=task_get_option('notimechange'),
pretend=task_get_option('pretend'),
callback_url=callback_url,
results_for_callback=results_for_callback)
else:
write_message(" Error bibupload failed: No record found",
verbose=1, stream=sys.stderr)
callback_url = task_get_option("callback_url")
if callback_url:
nonce = task_get_option("nonce")
if nonce:
results_for_callback["nonce"] = nonce
post_results_to_callback_url(results_for_callback, callback_url)
if task_get_task_param('verbose') >= 1:
# Print out the statistics
print_out_bibupload_statistics()
# Check if there were errors
return stat['nb_errors'] == 0
def log_record_uploading(oai_rec_id, task_id, bibrec_id, insertion_db, pretend=False):
if oai_rec_id != "" and oai_rec_id != None:
query = """UPDATE oaiHARVESTLOG SET date_inserted=NOW(), inserted_to_db=%s, id_bibrec=%s WHERE oai_id = %s AND bibupload_task_id = %s ORDER BY date_harvested LIMIT 1"""
if not pretend:
run_sql(query, (str(insertion_db), str(bibrec_id), str(oai_rec_id), str(task_id), ))
if __name__ == "__main__":
main()
diff --git a/modules/bibupload/lib/bibupload_regression_tests.py b/modules/bibupload/lib/bibupload_regression_tests.py
index b50d90e6b..70856bdd4 100644
--- a/modules/bibupload/lib/bibupload_regression_tests.py
+++ b/modules/bibupload/lib/bibupload_regression_tests.py
@@ -1,5992 +1,5991 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
"""Regression tests for the BibUpload."""
__revision__ = "$Id$"
import base64
import cPickle
import re
import os
import pprint
import sys
import time
from marshal import loads
from zlib import decompress
from urllib import urlencode
from urllib2 import urlopen
from invenio.config import CFG_OAI_ID_FIELD, CFG_PREFIX, CFG_SITE_URL, CFG_TMPDIR, \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG, \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG, \
CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG, \
CFG_BINDIR, \
CFG_SITE_RECORD, \
CFG_DEVEL_SITE, \
CFG_BIBUPLOAD_REFERENCE_TAG, \
CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE
from invenio.jsonutils import json
from invenio.dbquery import run_sql, get_table_status_info
from invenio.testutils import InvenioTestCase, make_test_suite, run_test_suite, test_web_page_content
from invenio.importutils import lazy_import
from invenio.hashutils import md5
from invenio.shellutils import run_shell_command
BibRecDocs = lazy_import('invenio.bibdocfile:BibRecDocs')
BibRelation = lazy_import('invenio.bibdocfile:BibRelation')
MoreInfo = lazy_import('invenio.bibdocfile:MoreInfo')
bibupload = lazy_import('invenio.bibupload')
print_record = lazy_import('invenio.search_engine:print_record')
get_record = lazy_import('invenio.search_engine:get_record')
create_record = lazy_import('invenio.bibrecord:create_record')
records_identical = lazy_import('invenio.bibrecord:records_identical')
encode_for_xml = lazy_import('invenio.textutils:encode_for_xml')
# helper functions:
RE_005 = re.compile(re.escape('tag="005"'))
def get_record_from_bibxxx(recid):
"""Return a recstruct built from bibxxx tables"""
record = "<record>"
record += """ <controlfield tag="001">%s</controlfield>\n""" % recid
# controlfields
query = "SELECT b.tag,b.value,bb.field_number FROM bib00x AS b, bibrec_bib00x AS bb "\
"WHERE bb.id_bibrec=%s AND b.id=bb.id_bibxxx AND b.tag LIKE '00%%' "\
"ORDER BY bb.field_number, b.tag ASC"
res = run_sql(query, (recid, ))
for row in res:
field, value = row[0], row[1]
value = encode_for_xml(value)
record += """ <controlfield tag="%s">%s</controlfield>\n""" % \
(encode_for_xml(field[0:3]), value)
# datafields
i = 1 # Do not process bib00x and bibrec_bib00x, as
# they hold controlfields. So start at bib01x and
# bibrec_bib01x (and set i = 0 at the end of the
# first digit1 iteration)
for digit1 in range(0, 10):
for digit2 in range(i, 10):
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
query = "SELECT b.tag,b.value,bb.field_number FROM %s AS b, %s AS bb "\
"WHERE bb.id_bibrec=%%s AND b.id=bb.id_bibxxx AND b.tag LIKE %%s"\
"ORDER BY bb.field_number, b.tag ASC" % (bx, bibx)
res = run_sql(query, (recid, str(digit1)+str(digit2)+'%'))
field_number_old = -999
field_old = ""
for row in res:
field, value, field_number = row[0], row[1], row[2]
ind1, ind2 = field[3], field[4]
if ind1 == "_" or ind1 == "":
ind1 = " "
if ind2 == "_" or ind2 == "":
ind2 = " "
if field_number != field_number_old or field[:-1] != field_old[:-1]:
if field_number_old != -999:
record += """ </datafield>\n"""
record += """ <datafield tag="%s" ind1="%s" ind2="%s">\n""" % \
(encode_for_xml(field[0:3]), encode_for_xml(ind1), encode_for_xml(ind2))
field_number_old = field_number
field_old = field
# print subfield value
value = encode_for_xml(value)
record += """ <subfield code="%s">%s</subfield>\n""" % \
(encode_for_xml(field[-1:]), value)
# all fields/subfields printed in this run, so close the tag:
if field_number_old != -999:
record += """ </datafield>\n"""
i = 0 # Subsequent digit1 iterations start at digit2 = 0, i.e. at bibX0x and bibrec_bibX0x
# we are at the end of printing the record:
record += " </record>\n"
return record
def remove_tag_001_from_xmbuffer(xmbuffer):
"""Remove tag 001 from MARCXML buffer. Useful for testing two
MARCXML buffers without paying attention to recIDs attributed
during the bibupload.
"""
return re.sub(r'<controlfield tag="001">.*</controlfield>', '', xmbuffer)
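# A minimal, standalone illustration of the tag-001 stripping performed by
# remove_tag_001_from_xmbuffer above (the recid "123" is a made-up example;
# `re` is repeated here so the sketch is self-contained):
import re
_demo_xm = '<record>\n <controlfield tag="001">123</controlfield>\n <datafield tag="245"></datafield>\n</record>'
_demo_out = re.sub(r'<controlfield tag="001">.*</controlfield>', '', _demo_xm)
# tag 001 is gone, while the rest of the buffer is left untouched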
def compare_xmbuffers(xmbuffer1, xmbuffer2):
"""Compare two XM (XML MARC) buffers by removing whitespaces and version
numbers in tags 005 before testing.
"""
def remove_blanks_from_xmbuffer(xmbuffer):
"""Remove \n and blanks from XMBUFFER."""
out = xmbuffer.replace("\n", "")
out = out.replace(" ", "")
return out
# remove 005 revision numbers:
xmbuffer1 = re.sub(r'<controlfield tag="005">.*?</controlfield>', '', xmbuffer1)
xmbuffer2 = re.sub(r'<controlfield tag="005">.*?</controlfield>', '', xmbuffer2)
# remove whitespace:
xmbuffer1 = remove_blanks_from_xmbuffer(xmbuffer1)
xmbuffer2 = remove_blanks_from_xmbuffer(xmbuffer2)
if len(RE_005.findall(xmbuffer1)) > 1:
return "More than one 005 tag found in the first XM: %s" % xmbuffer1
if len(RE_005.findall(xmbuffer2)) > 1:
return "More than one 005 tag found in the second XM: %s" % xmbuffer2
if xmbuffer1 != xmbuffer2:
return "\n=" + xmbuffer1 + "=\n" + '!=' + "\n=" + xmbuffer2 + "=\n"
return ''
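# Standalone sketch of how compare_xmbuffers normalises its inputs: the
# volatile 005 revision datestamps are stripped, then all whitespace, before
# the buffers are compared (the buffer contents below are made-up examples):
import re
_a = '<record><controlfield tag="005">20130101000000.0</controlfield>\n <datafield tag="100"><subfield code="a">X</subfield></datafield></record>'
_b = '<record><datafield tag="100"> <subfield code="a">X</subfield></datafield></record>'
_norm = lambda buf: re.sub(r'<controlfield tag="005">.*?</controlfield>', '', buf).replace("\n", "").replace(" ", "")
# after normalisation the two buffers compare equal despite the 005 tag
assert _norm(_a) == _norm(_b)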
def remove_tag_001_from_hmbuffer(hmbuffer):
"""Remove tag 001 from HTML MARC buffer. Useful for testing two
HTML MARC buffers without paying attention to recIDs attributed
during the bibupload.
"""
return re.sub(r'(^|\n)(<pre>)?[0-9]{9}\s001__\s\d+($|\n)', '', hmbuffer)
def compare_hmbuffers(hmbuffer1, hmbuffer2):
"""Compare two HM (HTML MARC) buffers by removing whitespaces
before testing.
"""
hmbuffer1 = hmbuffer1.strip()
hmbuffer2 = hmbuffer2.strip()
# remove any <pre>...</pre> formatting:
hmbuffer1 = re.sub(r'^<pre>', '', hmbuffer1)
hmbuffer2 = re.sub(r'^<pre>', '', hmbuffer2)
hmbuffer1 = re.sub(r'</pre>$', '', hmbuffer1)
hmbuffer2 = re.sub(r'</pre>$', '', hmbuffer2)
# remove 005 revision numbers:
hmbuffer1 = re.sub(r'(^|\n)[0-9]{9}\s005.*($|\n)', '\n', hmbuffer1)
hmbuffer2 = re.sub(r'(^|\n)[0-9]{9}\s005.*($|\n)', '\n', hmbuffer2)
hmbuffer1 = hmbuffer1.strip()
hmbuffer2 = hmbuffer2.strip()
# remove leading recid, leaving only field values:
hmbuffer1 = re.sub(r'(^|\n)[0-9]{9}\s', '', hmbuffer1)
hmbuffer2 = re.sub(r'(^|\n)[0-9]{9}\s', '', hmbuffer2)
# remove leading whitespace:
hmbuffer1 = re.sub(r'(^|\n)\s+', '', hmbuffer1)
hmbuffer2 = re.sub(r'(^|\n)\s+', '', hmbuffer2)
compared_hmbuffers = hmbuffer1 == hmbuffer2
if not compared_hmbuffers:
return "\n=" + hmbuffer1 + "=\n" + '!=' + "\n=" + hmbuffer2 + "=\n"
return ''
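# Standalone sketch of one normalisation step used by compare_hmbuffers above:
# the leading nine-digit recid is stripped so that only the field values are
# compared (the recid 000000123 is a made-up example):
import re
_hm = '000000123 003__ SzGeCERN'
assert re.sub(r'(^|\n)[0-9]{9}\s', '', _hm) == '003__ SzGeCERN'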
def wipe_out_record_from_all_tables(recid):
"""
Wipe out the record RECID and all its traces from the database
(bibrec, bibrec_bibxxx, bibxxx, bibfmt). Useful for test cases.
"""
# delete all the linked bibdocs
try:
for bibdoc in BibRecDocs(recid).list_bibdocs():
bibdoc.expunge()
# delete from bibrec:
run_sql("DELETE FROM bibrec WHERE id=%s", (recid,))
# delete from bibrec_bibxxx:
for i in range(0, 10):
for j in range(0, 10):
run_sql("DELETE FROM %(bibrec_bibxxx)s WHERE id_bibrec=%%s" % # kwalitee: disable=sql
{'bibrec_bibxxx': "bibrec_bib%i%ix" % (i, j)},
(recid,))
# delete all unused bibxxx values:
for i in range(0, 10):
for j in range(0, 10):
run_sql("DELETE %(bibxxx)s FROM %(bibxxx)s " \
" LEFT JOIN %(bibrec_bibxxx)s " \
" ON %(bibxxx)s.id=%(bibrec_bibxxx)s.id_bibxxx " \
" WHERE %(bibrec_bibxxx)s.id_bibrec IS NULL" % \
{'bibxxx': "bib%i%ix" % (i, j),
'bibrec_bibxxx': "bibrec_bib%i%ix" % (i, j)})
# delete from bibfmt:
run_sql("DELETE FROM bibfmt WHERE id_bibrec=%s", (recid,))
# delete from bibrec_bibdoc:
run_sql("DELETE FROM bibrec_bibdoc WHERE id_bibrec=%s", (recid,))
# delete from holdingpen
run_sql("DELETE FROM bibHOLDINGPEN WHERE id_bibrec=%s", (recid,))
# delete from hstRECORD
run_sql("DELETE FROM hstRECORD WHERE id_bibrec=%s", (recid,))
except Exception, err:
print >> sys.stderr, "Exception captured while wiping records: %s" % err
def try_url_download(url):
"""Try to download a given URL"""
try:
open_url = urlopen(url)
open_url.read()
except Exception, e:
raise StandardError("Cannot download %s: %s"
% (url, str(e)))
return True
def force_webcoll(recid):
+ from invenio.bibindex_engine_config import CFG_BIBINDEX_INDEX_TABLE_TYPE
from invenio import bibindex_engine
reload(bibindex_engine)
from invenio import websearch_webcoll
reload(websearch_webcoll)
- index_id, index_name, index_tags = bibindex_engine.get_word_tables("collection")[0]
- bibindex_engine.WordTable(index_name, index_id, index_tags, "idxWORD%02dF", default_get_words_fnc=bibindex_engine.get_words_from_phrase, tag_to_words_fnc_map={'8564_u': bibindex_engine.get_words_from_fulltext}).add_recIDs([[recid, recid]], 1)
+ index_id, index_name, index_tags = bibindex_engine.get_word_tables(["collection"])[0]
+ bibindex_engine.WordTable(index_name, index_id, index_tags, "idxWORD%02dF", wordtable_type=CFG_BIBINDEX_INDEX_TABLE_TYPE["Words"], tag_to_tokenizer_map={'8564_u': "BibIndexFulltextTokenizer"}).add_recIDs([[recid, recid]], 1)
# sleep 1s to make sure all tables are ready
time.sleep(1)
c = websearch_webcoll.Collection()
c.calculate_reclist()
c.update_reclist()
class GenericBibUploadTest(InvenioTestCase):
"""Generic BibUpload testing class with predefined
setUp and tearDown methods.
"""
def setUp(self):
from invenio.bibtask import task_set_task_param, setup_loggers
self.verbose = 0
setup_loggers()
task_set_task_param('verbose', self.verbose)
self.last_recid = run_sql("SELECT MAX(id) FROM bibrec")[0][0]
def tearDown(self):
for recid in run_sql("SELECT id FROM bibrec WHERE id>%s", (self.last_recid,)):
wipe_out_record_from_all_tables(recid[0])
def check_record_consistency(self, recid):
rec_in_history = create_record(decompress(run_sql("SELECT marcxml FROM hstRECORD WHERE id_bibrec=%s ORDER BY job_date DESC LIMIT 1", (recid, ))[0][0]))[0]
rec_in_xm = create_record(decompress(run_sql("SELECT value FROM bibfmt WHERE id_bibrec=%s AND format='xm'", (recid, ))[0][0]))[0]
rec_in_bibxxx = create_record(get_record_from_bibxxx(recid))[0]
self.failUnless(records_identical(rec_in_xm, rec_in_history, skip_005=False), "\n%s\n!=\n%s\n" % (rec_in_xm, rec_in_history))
self.failUnless(records_identical(rec_in_xm, rec_in_bibxxx, skip_005=False, ignore_duplicate_subfields=True, ignore_duplicate_controlfields=True), "\n%s\n!=\n%s\n" % (rec_in_xm, rec_in_bibxxx))
if CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE:
rec_in_recstruct = loads(decompress(run_sql("SELECT value FROM bibfmt WHERE id_bibrec=%s AND format='recstruct'", (recid, ))[0][0]))
self.failUnless(records_identical(rec_in_xm, rec_in_recstruct, skip_005=False, ignore_subfield_order=True), "\n%s\n!=\n%s\n" % (rec_in_xm, rec_in_recstruct))
class BibUploadRealCaseRemovalDOIViaBibEdit(GenericBibUploadTest):
def test_removal_of_doi_via_bibedit(self):
test = """<record>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">HEP</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Fiore, Gaetano</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On quantum mechanics with a magnetic field on R**n and on a torus T**n, and their relation</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="p">Int.J.Theor.Phys.</subfield>
<subfield code="v">52</subfield>
<subfield code="c">877-896</subfield>
<subfield code="y">2013</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">INSPIRE</subfield>
<subfield code="a">General Physics</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">Published</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">20</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">2013</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">author</subfield>
<subfield code="a">Bloch theory with magnetic field</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">author</subfield>
<subfield code="a">Fiber bundles</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">author</subfield>
<subfield code="a">Gauge symmetry</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">author</subfield>
<subfield code="a">Quantization on manifolds</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="9">Springer</subfield>
<subfield code="a">We show in elementary terms the equivalence in a general gauge of a U(1)-gauge theory of a scalar charged particle on a torus to the analogous theory on ℝ( )n( ) constrained by quasiperiodicity under translations in the lattice Λ. The latter theory provides a global description of the former: the quasiperiodic wavefunctions ψ defined on ℝ( )n( ) play the role of sections of the associated hermitean line bundle E on , since also E admits a global description as a quotient. The components of the covariant derivatives corresponding to a constant (necessarily integral) magnetic field B=dA generate a Lie algebra g ( )Q( ) and together with the periodic functions the algebra of observables . The non-abelian part of g ( )Q( ) is a Heisenberg Lie algebra with the electric charge operator Q as the central generator, the corresponding Lie group G ( )Q( ) acts on the Hilbert space as the translation group up to phase factors. Also the space of sections of E is mapped into itself by g∈G ( )Q( ). We identify the socalled magnetic translation group as a subgroup of the observables’ group Y ( )Q( ). We determine the unitary irreducible representations of corresponding to integer charges and for each of them an associated orthonormal basis explicitly in configuration space. We also clarify how in the n=2m case a holomorphic structure and Theta functions arise on the associated complex torus.</subfield>
</datafield>
<datafield tag="024" ind1="7" ind2=" ">
<subfield code="2">DOI</subfield>
<subfield code="a">10.1007/s10773-012-1396-z</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">Fiore:2013nua</subfield>
<subfield code="9">INSPIRETeX</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">Published</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">Citeable</subfield>
</datafield>
</record>
"""
recs = create_record(test)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
new_rec = get_record(recid)
del new_rec['024'] ## let's delete DOI
_, recid2, _ = bibupload.bibupload(new_rec, opt_mode='replace')
self.assertEqual(recid, recid2)
self.check_record_consistency(recid2)
class BibUploadTypicalBibEditSessionTest(GenericBibUploadTest):
"""Testing a typical BibEdit session"""
def setUp(self):
GenericBibUploadTest.setUp(self)
self.test = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
recs = bibupload.xml_marc_to_records(self.test)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(self.recid)
# We retrieve the inserted xml
inserted_xm = print_record(self.recid, 'xm')
# Compare if the two MARCXML are the same
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(inserted_xm),
self.test), '')
self.history = run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, )) # kwalitee: disable=sql
self.timestamp = run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,))
self.tag005 = get_record(self.recid)['005'][0][3]
def test_simple_replace(self):
"""BibUpload - test a simple replace as in BibEdit"""
marc_to_replace1 = """
<record>
<controlfield tag="001">%(recid)s</controlfield>
<controlfield tag="005">%(tag005)s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Foo</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">bla bla bla</subfield>
</datafield>
</record>
""" % {'recid': self.recid, 'tag005': self.tag005}
recs = bibupload.xml_marc_to_records(marc_to_replace1)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(self.recid)
## The change should have been applied!
self.failUnless(records_identical(recs[0], get_record(self.recid)), "\n%s\n!=\n%s\n" % (recs[0], get_record(self.recid)))
marc_to_replace2 = """
<record>
<controlfield tag="001">%(recid)s</controlfield>
<controlfield tag="005">%(tag005)s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Queen Elisabeth</subfield>
<subfield code="u">Great Britain</subfield>
</datafield>
</record>
""" % {'recid': self.recid, 'tag005': self.tag005}
expected_marc = """
<record>
<controlfield tag="001">%(recid)s</controlfield>
<controlfield tag="005">%(tag005)s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Foo</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">bla bla bla</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Queen Elisabeth</subfield>
<subfield code="u">Great Britain</subfield>
</datafield>
</record>
""" % {'recid': self.recid, 'tag005': self.tag005}
recs = bibupload.xml_marc_to_records(marc_to_replace2)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(self.recid)
## The change should have been merged with the previous without conflict
self.failUnless(records_identical(bibupload.xml_marc_to_records(expected_marc)[0], get_record(self.recid)))
def test_replace_with_conflict(self):
"""BibUpload - test a replace as in BibEdit that leads to conflicts"""
marc_to_replace1 = """
<record>
<controlfield tag="001">%(recid)s</controlfield>
<controlfield tag="005">%(tag005)s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Foo</subfield>
<subfield code="u">Test Institute2</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">bla bla bla</subfield>
</datafield>
</record>
""" % {'recid': self.recid, 'tag005': self.tag005}
recs = bibupload.xml_marc_to_records(marc_to_replace1)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(self.recid)
## The change should have been applied!
self.failUnless(records_identical(recs[0], get_record(self.recid)), "\n%s\n!=\n%s" % (recs[0], get_record(self.recid)))
marc_to_replace2 = """
<record>
<controlfield tag="001">%(recid)s</controlfield>
<controlfield tag="005">%(tag005)s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Queen Elisabeth</subfield>
<subfield code="u">Great Britain</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">No more Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">bla bla bla</subfield>
</datafield>
</record>
""" % {'recid': self.recid, 'tag005': self.tag005}
recs = bibupload.xml_marc_to_records(marc_to_replace2)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(self.recid)
## The second change conflicts, so it should NOT have been applied; it goes to the holding pen instead
self.failUnless(records_identical(bibupload.xml_marc_to_records(marc_to_replace1)[0], get_record(self.recid)), "%s != %s" % (bibupload.xml_marc_to_records(marc_to_replace1)[0], get_record(self.recid)))
self.failUnless(records_identical(bibupload.xml_marc_to_records(marc_to_replace2)[0], bibupload.xml_marc_to_records(run_sql("SELECT changeset_xml FROM bibHOLDINGPEN WHERE id_bibrec=%s", (self.recid,))[0][0])[0]))
class BibUploadNoUselessHistoryTest(GenericBibUploadTest):
"""Testing generation of history only when necessary"""
def setUp(self):
GenericBibUploadTest.setUp(self)
self.test = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
recs = bibupload.xml_marc_to_records(self.test)
# We call the main function with the record as a parameter
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(self.recid)
# We retrieve the inserted xml
inserted_xm = print_record(self.recid, 'xm')
# Compare if the two MARCXML are the same
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(inserted_xm),
self.test), '')
self.history = run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, )) # kwalitee: disable=sql
self.timestamp = run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,))
def test_replace_identical_record(self):
"""bibupload - replace with identical record does not touch history"""
xml_to_upload = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
""" % self.recid
recs = bibupload.xml_marc_to_records(xml_to_upload)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(recid)
self.assertEqual(self.recid, recid)
self.assertEqual(self.history, run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) # kwalitee: disable=sql
self.assertEqual(self.timestamp, run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,)))
def test_correct_identical_correction(self):
"""bibupload - correct with identical correction does not touch history"""
xml_to_upload = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
</record>
""" % self.recid
recs = bibupload.xml_marc_to_records(xml_to_upload)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
self.assertEqual(self.recid, recid)
self.maxDiff = None
self.assertEqual(self.history, run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) # kwalitee: disable=sql
self.assertEqual(self.timestamp, run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,)))
def test_replace_different_record(self):
"""bibupload - replace with different records does indeed touch history"""
xml_to_upload = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
""" % self.recid
recs = bibupload.xml_marc_to_records(xml_to_upload)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(recid)
self.assertEqual(self.recid, recid)
self.assertNotEqual(self.history, run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) # kwalitee: disable=sql
self.failUnless(len(self.history) == 1 and len(run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) == 2) # kwalitee: disable=sql
self.assertNotEqual(self.timestamp, run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,)))
def test_correct_different_correction(self):
"""bibupload - correct with different correction does indeed touch history"""
xml_to_upload = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">FooBar</controlfield>
</record>
""" % self.recid
recs = bibupload.xml_marc_to_records(xml_to_upload)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
self.assertEqual(self.recid, recid)
self.assertNotEqual(self.history, run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) # kwalitee: disable=sql
self.failUnless(len(self.history) == 1 and len(run_sql("SELECT * FROM hstRECORD WHERE id_bibrec=%s", (self.recid, ))) == 2) # kwalitee: disable=sql
self.assertNotEqual(self.timestamp, run_sql("SELECT modification_date FROM bibrec WHERE id=%s", (self.recid,)))
class BibUploadCallbackURLTest(GenericBibUploadTest):
"""Testing usage of CLI callback_url"""
def setUp(self):
GenericBibUploadTest.setUp(self)
self.test = """<record>
<datafield tag ="245" ind1=" " ind2=" ">
<subfield code="a">something</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, J Y</subfield>
<subfield code="u">MIT</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, K J</subfield>
<subfield code="u">CERN2</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, G</subfield>
<subfield code="u">CERN3</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test11</subfield>
<subfield code="c">test31</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test12</subfield>
<subfield code="c">test32</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test13</subfield>
<subfield code="c">test33</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="b">test21</subfield>
<subfield code="d">test41</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="b">test22</subfield>
<subfield code="d">test42</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test14</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="e">test51</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="e">test52</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">CERN</subfield>
</datafield>
</record>"""
self.testfile_path = os.path.join(CFG_TMPDIR, 'bibupload_regression_test_input.xml')
with open(self.testfile_path, "w") as testfile:
testfile.write(self.test)
self.resultfile_path = os.path.join(CFG_TMPDIR, 'bibupload_regression_test_result.json')
if CFG_DEVEL_SITE:
def test_simple_insert_callback_url(self):
"""bibupload - --callback-url with simple insert"""
from invenio.bibtask import task_low_level_submission
taskid = task_low_level_submission('bibupload', 'test', '-i', self.testfile_path, '--callback-url', CFG_SITE_URL + '/httptest/post2?%s' % urlencode({"save": self.resultfile_path}), '-v0')
run_shell_command(CFG_BINDIR + '/bibupload %s', [str(taskid)])
with open(self.resultfile_path) as resultfile:
results = json.load(resultfile)
self.failUnless('results' in results)
self.assertEqual(len(results['results']), 1)
self.failUnless(results['results'][0]['success'])
self.failUnless(results['results'][0]['recid'] > 0)
self.failUnless("""<subfield code="a">Tester, J Y</subfield>""" in results['results'][0]['marcxml'], results['results'][0]['marcxml'])
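The assertions in the test above read specific keys out of the JSON document saved by the --callback-url endpoint. As a hedged sketch (only the key names come from the assertions above; the values here are purely illustrative), the payload can be probed like this:

```python
import json

# Illustrative callback payload; the key names ("results", "success", "recid",
# "marcxml") mirror what the assertions above read, the values are made up.
payload = json.loads(
    '{"results": [{"success": true, "recid": 42, '
    '"marcxml": "<subfield code=\\"a\\">Tester, J Y</subfield>"}]}'
)
assert 'results' in payload
assert payload['results'][0]['success']
assert payload['results'][0]['recid'] > 0
```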
class BibUploadBibRelationsTest(GenericBibUploadTest):
def setUp(self):
GenericBibUploadTest.setUp(self)
self.upload_xml = """<record>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">A very wise author</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(url_site)s/img/user-icon-1-20x20.gif</subfield>
<subfield code="t">Main</subfield>
<subfield code="n">docname</subfield>
<subfield code="i">TMP:id_identifier1</subfield>
<subfield code="v">TMP:ver_identifier1</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(url_site)s/record/8/files/9812226.pdf?version=1</subfield>
<subfield code="t">Main</subfield>
<subfield code="n">docname2</subfield>
<subfield code="i">TMP:id_identifier2</subfield>
<subfield code="v">TMP:ver_identifier2</subfield>
</datafield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="i">TMP:id_identifier1</subfield>
<subfield code="v">TMP:ver_identifier1</subfield>
<subfield code="j">TMP:id_identifier2</subfield>
<subfield code="w">TMP:ver_identifier2</subfield>
<subfield code="t">is_extracted_from</subfield>
</datafield>
</record>""" % {'url_site' : CFG_SITE_URL}
def test_upload_with_tmpids(self):
"""bibupload - upload a relation between two new documents, then delete it"""
recs = bibupload.xml_marc_to_records(self.upload_xml)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
# retrieve document numbers and check that a relation exists between them
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of documents attached to a record")
rels = docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from")
self.assertEqual(1, len(rels), "Incorrect number of relations retrieved from the first document")
rels = docs[1].get_incoming_relations("is_extracted_from") + docs[1].get_outgoing_relations("is_extracted_from")
self.assertEqual(1, len(rels), "Incorrect number of relations retrieved from the second document")
created_relation_id = rels[0].id
rels = docs[0].get_incoming_relations("different_type_of_relation") + docs[0].get_outgoing_relations("different_type_of_relation")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
upload_xml_2 = """<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="r">%(rel_id)s</subfield>
<subfield code="d">DELETE</subfield>
</datafield>
</record>""" % {'rel_id' : created_relation_id, 'rec_id' : recid}
recs = bibupload.xml_marc_to_records(upload_xml_2)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of documents attached to a record")
rels = docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
rels = docs[1].get_incoming_relations("is_extracted_from") + docs[1].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the second document")
rels = docs[0].get_incoming_relations("different_type_of_relation") + docs[0].get_outgoing_relations("different_type_of_relation")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
def test_delete_by_docids(self):
"""bibupload - delete a relation entry by the docid inside the currently modified record
Upload a sample relation, then modify it by referring to parameters other than
the relation number"""
recs = bibupload.xml_marc_to_records(self.upload_xml)
dummyerr, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of attached documents")
rel = (docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from"))[0]
upload_xml_2 = """<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="i">%(first_docid)s</subfield>
<subfield code="v">%(first_docver)s</subfield>
<subfield code="j">%(second_docid)s</subfield>
<subfield code="w">%(second_docver)s</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="d">DELETE</subfield>
</datafield>
</record>""" % { 'rec_id' : recid,
'first_docid': rel.bibdoc1_id,
'first_docver' : rel.bibdoc1_ver,
'second_docid': rel.bibdoc2_id,
'second_docver' : rel.bibdoc2_ver}
recs = bibupload.xml_marc_to_records(upload_xml_2)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of documents attached to a record")
rels = docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
rels = docs[1].get_incoming_relations("is_extracted_from") + docs[1].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the second document")
rels = docs[0].get_incoming_relations("different_type_of_relation") + docs[0].get_outgoing_relations("different_type_of_relation")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
def test_remove_by_name(self):
"""bibupload - trying to remove a relation by providing bibdoc names rather than relation numbers"""
recs = bibupload.xml_marc_to_records(self.upload_xml)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of attached documents")
rel = (docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from"))[0]
upload_xml_2 = """<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="n">docname</subfield>
<subfield code="v">%(first_docver)s</subfield>
<subfield code="o">docname2</subfield>
<subfield code="w">%(second_docver)s</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="d">DELETE</subfield>
</datafield>
</record>""" % {'rec_id' : recid,
'first_docver' : rel.bibdoc1_ver,
'second_docver' : rel.bibdoc2_ver}
# the above refers to the documents by their correct names: we assert that the relation has been removed
recs = bibupload.xml_marc_to_records(upload_xml_2)
_ = bibupload.bibupload_records(recs, opt_mode='correct')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of documents attached to a record")
rels = docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
rels = docs[1].get_incoming_relations("is_extracted_from") + docs[1].get_outgoing_relations("is_extracted_from")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the second document")
rels = docs[0].get_incoming_relations("different_type_of_relation") + docs[0].get_outgoing_relations("different_type_of_relation")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
def test_remove_by_name_incorrect(self):
"""bibupload - trying to remove a relation by providing bibdoc names rather than relation numbers, but with an incorrect name"""
recs = bibupload.xml_marc_to_records(self.upload_xml)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of attached documents")
rel = (docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from"))[0]
upload_xml_2 = """<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="n">docname1</subfield>
<subfield code="v">%(first_docver)s</subfield>
<subfield code="o">docname2</subfield>
<subfield code="w">%(second_docver)s</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="d">DELETE</subfield>
</datafield>
</record>""" % { 'rec_id' : recid,
'first_docver' : rel.bibdoc1_ver,
'second_docver' : rel.bibdoc2_ver}
# the above name is incorrect! we assert that nothing has been removed
recs = bibupload.xml_marc_to_records(upload_xml_2)
_ = bibupload.bibupload_records(recs, opt_mode='correct')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of documents attached to a record")
rels = docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from")
self.assertEqual(1, len(rels), "Incorrect number of relations retrieved from the first document")
rels = docs[1].get_incoming_relations("is_extracted_from") + docs[1].get_outgoing_relations("is_extracted_from")
self.assertEqual(1, len(rels), "Incorrect number of relations retrieved from the second document")
rels = docs[0].get_incoming_relations("different_type_of_relation") + docs[0].get_outgoing_relations("different_type_of_relation")
self.assertEqual(0, len(rels), "Incorrect number of relations retrieved from the first document")
def _upload_initial_moreinfo_key(self):
"""Prepare a MoreInfo with sample keys and check that it has been correctly uploaded.
Uploaded dict: {"ns1" : {"k1":"val1", "k2":[1,2,3,"something"], "k3" : (1,3,2,"something else"), "k4" : {"a":"b", 1:2}}}
... after encoding gives KGRwMQpTJ25zMScKcDIKKGRwMwpTJ2szJwpwNAooSTEKSTMKSTIKUydzb21ldGhpbmcgZWxzZScKdHA1CnNTJ2syJwpwNgoobHA3CkkxCmFJMgphSTMKYVMnc29tZXRoaW5nJwpwOAphc1MnazEnCnA5ClMndmFsMScKcDEwCnNTJ2s0JwpwMTEKKGRwMTIKUydhJwpTJ2InCnNJMQpJMgpzc3Mu
"""
moreinfo_str = "KGRwMQpTJ25zMScKcDIKKGRwMwpTJ2szJwpwNAooSTEKSTMKSTIKUydzb21ldGhpbmcgZWxzZScKdHA1CnNTJ2syJwpwNgoobHA3CkkxCmFJMgphSTMKYVMnc29tZXRoaW5nJwpwOAphc1MnazEnCnA5ClMndmFsMScKcDEwCnNTJ2s0JwpwMTEKKGRwMTIKUydhJwpTJ2InCnNJMQpJMgpzc3Mu"
xml_to_upload = """<record>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">A very wise author</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(url_site)s/img/user-icon-1-20x20.gif</subfield>
<subfield code="t">Main</subfield>
<subfield code="n">docname</subfield>
<subfield code="i">TMP:id_identifier1</subfield>
<subfield code="v">TMP:ver_identifier1</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(url_site)s/record/8/files/9812226.pdf?version=1</subfield>
<subfield code="t">Main</subfield>
<subfield code="n">docname2</subfield>
<subfield code="i">TMP:id_identifier2</subfield>
<subfield code="v">TMP:ver_identifier2</subfield>
</datafield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="i">TMP:id_identifier1</subfield>
<subfield code="v">TMP:ver_identifier1</subfield>
<subfield code="j">TMP:id_identifier2</subfield>
<subfield code="w">TMP:ver_identifier2</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="m">%(moreinfo_str)s</subfield>
</datafield>
</record>""" % {'url_site' : CFG_SITE_URL, 'moreinfo_str' : moreinfo_str}
recs = bibupload.xml_marc_to_records(xml_to_upload)
dummyerr, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
brd = BibRecDocs(recid)
docs = brd.list_bibdocs()
self.assertEqual(2, len(docs), "Incorrect number of attached documents")
return ((docs[0].get_incoming_relations("is_extracted_from") + docs[0].get_outgoing_relations("is_extracted_from"))[0], recid)
def test_add_relation_moreinfo_key(self):
"""bibupload - upload new MoreInfo key into the dictionary related to a relation"""
rel, _ = self._upload_initial_moreinfo_key()
# asserting correctness of data
self.assertEqual(rel.more_info.get_data("ns1", "k1"), "val1", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k1)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[0], 1, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[2], 3, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[3], "something", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k3"), (1,3,2,"something else") , "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k3)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")["a"], "b", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
def test_modify_relation_moreinfo_key(self):
"""bibupload - modify an existing MoreInfo key"""
# the update: {"ns1": {"k1": "different value"}}
rel, recid = self._upload_initial_moreinfo_key()
moreinfo_str = "KGRwMQpTJ25zMScKcDIKKGRwMwpTJ2sxJwpwNApTJ2RpZmZlcmVudCB2YWx1ZScKcDUKc3Mu"
upload_xml = """
<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="n">docname</subfield>
<subfield code="o">docname2</subfield>
<subfield code="v">1</subfield>
<subfield code="w">1</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="m">%(moreinfo_str)s</subfield>
</datafield>
</record>""" % {"rec_id" : recid, "moreinfo_str": moreinfo_str}
recs = bibupload.xml_marc_to_records(upload_xml)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
rel = BibRelation(rel_id = rel.id)
self.assertEqual(rel.more_info.get_data("ns1", "k1"), "different value", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k1)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[0], 1, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[2], 3, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[3], "something", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k3"), (1,3,2,"something else") , "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k3)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")["a"], "b", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
self.assertEqual(rel.more_info.get_data("ns2", "k4"), None, "Retrieved a non-None value for a nonexistent namespace!")
def test_remove_relation_moreinfo_key(self):
"""bibupload - remove an existing MoreInfo key"""
# the update: {"ns1": {"k3": None}}
rel, recid = self._upload_initial_moreinfo_key()
moreinfo_str = "KGRwMQpTJ25zMScKcDIKKGRwMwpTJ2szJwpwNApOc3Mu"
upload_xml = """
<record>
<controlfield tag="001">%(rec_id)s</controlfield>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="n">docname</subfield>
<subfield code="o">docname2</subfield>
<subfield code="v">1</subfield>
<subfield code="w">1</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="m">%(moreinfo_str)s</subfield>
</datafield>
</record>""" % {"rec_id" : recid, "moreinfo_str": moreinfo_str}
recs = bibupload.xml_marc_to_records(upload_xml)
bibupload.bibupload_records(recs, opt_mode='correct')
rel = BibRelation(rel_id = rel.id)
self.assertEqual(rel.more_info.get_data("ns1", "k1"), "val1", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k1)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[0], 1, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[2], 3, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k2")[3], "something", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k2)")
self.assertEqual(rel.more_info.get_data("ns1", "k3"), None , "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k3)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")[1], 2, "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
self.assertEqual(rel.more_info.get_data("ns1", "k4")["a"], "b", "Retrieved incorrect data from the MoreInfo Dictionary (namespace : ns1 key: k4)")
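The "m" subfields in the tests above carry base64-encoded pickles of nested dictionaries. The tests build them with cPickle on Python 2; this sketch uses Python 3's pickle, so the exact bytes differ from the literals above and only the round-trip shape is the same:

```python
import base64
import pickle  # the tests use cPickle (Python 2); pickle is its Python 3 successor

def encode_moreinfo(data):
    """Serialise a nested dict the way the tests build the "m" subfield value."""
    return base64.b64encode(pickle.dumps(data, protocol=0)).decode('ascii')

def decode_moreinfo(serialised):
    """Invert encode_moreinfo: base64-decode, then unpickle."""
    return pickle.loads(base64.b64decode(serialised))

sample = {"ns1": {"k1": "val1", "k2": [1, 2, 3, "something"]}}
assert decode_moreinfo(encode_moreinfo(sample)) == sample
```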
class BibUploadMoreInfoTest(GenericBibUploadTest):
"""bibupload - Testing upload of different types of MoreInfo """
def _dict_checker(self, dic, more_info, equal = True):
""" Check that the more_info conforms to the dictionary
@param equal: the mode of conformity. True means that the dictionary
has to be equal to the MoreInfo; False means that the dictionary
has to be contained in the MoreInfo
"""
for namespace in dic:
for key in dic[namespace]:
self.assertEqual(cPickle.dumps(dic[namespace][key]),
cPickle.dumps(more_info.get_data(namespace, key)),
"Different values for key %s in namespace %s inside the MoreInfo object" % \
(key, namespace))
if equal:
for namespace in more_info.get_namespaces():
for key in more_info.get_keys(namespace):
self.assertTrue(namespace in dic,
"namespace %s present in the MoreInfo, but not present in the dictionary" % \
(namespace, ))
self.assertTrue(key in dic[namespace],
"key %s present in the namespace %s of the MoreInfo but not present in the dictionary" % \
(key, namespace))
self.assertEqual(cPickle.dumps(more_info.get_data(namespace, key)),
cPickle.dumps(dic[namespace][key]),
"Value for namespace '%s' and key '%s' varies between MoreInfo and the dictionary. moreinfo value: '%s' dictionary value: '%s'" % \
(namespace, key, repr(more_info.get_data(namespace, key)), repr(dic[namespace][key])))
def test_relation_moreinfo_insert(self):
"""bibupload - Testing the upload of BibRelation and corresponding MoreInfo field"""
# Cleaning existing data
rels = BibRelation.get_relations(bibdoc1_id = 70, bibdoc2_id = 71, rel_type = "is_extracted_from")
for rel in rels:
rel.delete()
# Uploading
relation_upload_template = """
<record>
<datafield tag="BDR" ind1=" " ind2=" ">
<subfield code="i">70</subfield>
<subfield code="j">71</subfield>
<subfield code="t">is_extracted_from</subfield>
<subfield code="m">%s</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Some author</subfield>
</datafield>
</record>"""
data_to_insert = {"first namespace": {"k1" : "val1", "k2" : "val2"},
"second" : {"k1" : "#@$#$@###!!!", "k123": {1:2, 9: (6,2,7)}}}
serialised = base64.b64encode(cPickle.dumps(data_to_insert))
recs = bibupload.xml_marc_to_records(relation_upload_template % (serialised, ))
bibupload.bibupload_records(recs, opt_mode='insert')[0]
# Verifying the correctness of the uploaded data
rels = BibRelation.get_relations(bibdoc1_id = 70, bibdoc2_id = 71, rel_type = "is_extracted_from")
self.assertEqual(len(rels), 1)
rel = rels[0]
self.assertEqual(rel.bibdoc1_id, 70)
self.assertEqual(rel.bibdoc2_id, 71)
self.assertEqual(rel.get_data("first namespace", "k1"), "val1")
self.assertEqual(rel.get_data("first namespace", "k2"), "val2")
self.assertEqual(rel.get_data("second", "k1"), "#@$#$@###!!!")
self.assertEqual(rel.get_data("second", "k123")[1], 2)
self.assertEqual(rel.get_data("second", "k123")[9], (6,2,7))
self._dict_checker(data_to_insert, rel.more_info)
# Cleaning up after the upload ... just in case we have selected more relations
for rel in rels:
rel.delete()
def _serialise_data(self, data):
return base64.b64encode(cPickle.dumps(data))
# Subfield tags used to upload particular types of MoreInfo
_mi_bibdoc = "w"
_mi_bibdoc_version = "p"
_mi_bibdoc_version_format = "b"
_mi_bibdoc_format = "u"
def _generate_moreinfo_tag(self, mi_type, data):
"""Serialise data and wrap it in a MoreInfo subfield of the given type"""
serialised = self._serialise_data(data)
return """<subfield code="%s">%s</subfield>""" % (mi_type, serialised)
def test_document_moreinfo_insert(self):
"""bibupload - Inserting new MoreInfo to the document
1) Inserting new MoreInfo to new document
2) Inserting new MoreInfo keys into an existing document version
3) Removing keys from MoreInfo
4) Removing the document and asserting that the MoreInfo gets removed as well
5) Overriding MoreInfo keys
"""
moreinfo_upload_template = """
<record>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="n">0106015_01</subfield>
<subfield code="f">.jpg</subfield>
<subfield code="r">restricted_picture</subfield>
%%(additional_content)s
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Some author</subfield>
</datafield>
</record>""" % {"siteurl": CFG_SITE_URL}
sfs = []
sfs.append(self._generate_moreinfo_tag(BibUploadMoreInfoTest._mi_bibdoc,
{"first namespace" :
{"type": "document moreinfo"}}))
sfs.append(self._generate_moreinfo_tag(BibUploadMoreInfoTest._mi_bibdoc_version,
{"first namespace" :
{"type": "Bibdoc - version moreinfo"}}))
sfs.append(self._generate_moreinfo_tag(BibUploadMoreInfoTest._mi_bibdoc_version_format,
{"first namespace" :
{"type": "Bibdoc - version, format moreinfo"}}))
sfs.append(self._generate_moreinfo_tag(BibUploadMoreInfoTest._mi_bibdoc_format,
{"first namespace" :
{"type": "Bibdoc - format moreinfo"}}))
marcxml_1 = moreinfo_upload_template % {"additional_content" : "\n".join(sfs)}
recs = bibupload.xml_marc_to_records(marcxml_1)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
# now checking if all the data has been uploaded correctly
bdr = BibRecDocs(recid)
doc = bdr.list_bibdocs()[0]
docid = doc.get_id()
mi_doc = MoreInfo(docid = docid)
mi_doc_ver = MoreInfo(docid = docid, version = 1)
mi_doc_ver_fmt = MoreInfo(docid = docid, version = 1, docformat=".jpg")
mi_doc_fmt = MoreInfo(docid = docid, docformat=".jpg")
self._dict_checker({"first namespace" : {"type": "document moreinfo"}},
mi_doc, equal=False) # in case of the document only inclusive check
self._dict_checker({"first namespace" : {"type": "Bibdoc - version moreinfo"}},
mi_doc_ver)
self._dict_checker({"first namespace" : {
"type": "Bibdoc - version, format moreinfo"}},
mi_doc_ver_fmt)
self._dict_checker({"first namespace" : {"type": "Bibdoc - format moreinfo"}},
mi_doc_fmt)
# now appending to a particular version of MoreInfo:
# upload a new key into the existing dictionary of a version
def _get_mit_template(recid, bibdocid=None, bibdocname=None,
version=None, docformat=None, relation=None, data=None):
if data is None:
ser = None
else:
ser = base64.b64encode(cPickle.dumps(data))
subfields = []
for s_code, val in (("r", relation), ("i", bibdocid),
("n", bibdocname), ("v", version),
("f", docformat) , ("m", ser)):
if val is not None:
subfields.append("""<subfield code="%s">%s</subfield>""" % \
(s_code, val))
return """<record>
<controlfield tag="001">%s</controlfield>
<datafield tag="BDM" ind1=" " ind2=" ">
%s
</datafield>
</record>""" % (str(recid), ("\n".join(subfields)))
marcxml_2 = _get_mit_template(recid, version = 1, bibdocid = docid,
data= {"first namespace" :
{"new key": {1:2, 987:678}}})
recs = bibupload.xml_marc_to_records(marcxml_2)
bibupload.bibupload_records(recs, opt_mode='append')
mi = MoreInfo(docid = docid, version = 1)
self._dict_checker({
"first namespace" : {"type": "Bibdoc - version moreinfo",
"new key": {1:2, 987:678}
}
}, mi)
# removing the entire old content of the MoreInfo and uploading new data
data = {"ns1" : {"nk1": 12, "mk1": "this is new content"},
"namespace two" : {"ddd" : "bbb"}}
marcxml_3 = _get_mit_template(recid, version = 1, bibdocid = docid,
data= data)
recs = bibupload.xml_marc_to_records(marcxml_3)
bibupload.bibupload_records(recs, opt_mode='correct')
mi = MoreInfo(docid = docid, version = 1)
self._dict_checker(data, mi)
# removing a particular key
marcxml_4 = _get_mit_template(recid, version = 1, bibdocid = docid,
data= {"ns1": {"nk1" : None}})
recs = bibupload.xml_marc_to_records(marcxml_4)
bibupload.bibupload_records(recs, opt_mode='append')
mi = MoreInfo(docid = docid, version = 1)
self._dict_checker( {"ns1" : { "mk1": "this is new content"},
"namespace two" : {"ddd" : "bbb"}}, mi)
# adding new key
marcxml_5 = _get_mit_template(recid, version = 1, bibdocid = docid,
data= {"ns1": {"newkey" : "newvalue"}})
recs = bibupload.xml_marc_to_records(marcxml_5)
bibupload.bibupload_records(recs, opt_mode='append')
mi = MoreInfo(docid = docid, version = 1)
self._dict_checker( {"ns1" : { "mk1": "this is new content", "newkey" : "newvalue"},
"namespace two" : {"ddd" : "bbb"}}, mi)
class BibUploadInsertModeTest(GenericBibUploadTest):
"""Testing insert mode."""
def setUp(self):
# pylint: disable=C0103
"""Initialise the MARCXML variable"""
GenericBibUploadTest.setUp(self)
self.test = """<record>
<datafield tag ="245" ind1=" " ind2=" ">
<subfield code="a">something</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, J Y</subfield>
<subfield code="u">MIT</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, K J</subfield>
<subfield code="u">CERN2</subfield>
</datafield>
<datafield tag ="700" ind1=" " ind2=" ">
<subfield code="a">Tester, G</subfield>
<subfield code="u">CERN3</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test11</subfield>
<subfield code="c">test31</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test12</subfield>
<subfield code="c">test32</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test13</subfield>
<subfield code="c">test33</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="b">test21</subfield>
<subfield code="d">test41</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="b">test22</subfield>
<subfield code="d">test42</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="a">test14</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="e">test51</subfield>
</datafield>
<datafield tag ="111" ind1=" " ind2=" ">
<subfield code="e">test52</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">CERN</subfield>
</datafield>
</record>"""
self.test_hm = """
100__ $$aTester, T$$uCERN
111__ $$atest11$$ctest31
111__ $$atest12$$ctest32
111__ $$atest13$$ctest33
111__ $$btest21$$dtest41
111__ $$btest22$$dtest42
111__ $$atest14
111__ $$etest51
111__ $$etest52
245__ $$asomething
700__ $$aTester, J Y$$uMIT
700__ $$aTester, K J$$uCERN2
700__ $$aTester, G$$uCERN3
"""
def test_create_record_id(self):
"""bibupload - insert mode, trying to create a new record ID in the database"""
rec_id = bibupload.create_new_record()
self.assertNotEqual(None, rec_id)
def test_create_specific_record_id(self):
"""bibupload - insert mode, trying to create a new specific record ID in the database"""
expected_rec_id = run_sql("SELECT MAX(id) FROM bibrec")[0][0] + 1
rec_id = bibupload.create_new_record(expected_rec_id)
self.assertEqual(rec_id, expected_rec_id)
def test_no_retrieve_record_id(self):
"""bibupload - insert mode, detection of record ID in the input file"""
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test)
# We call the function which should retrieve the record id
rec_id = bibupload.retrieve_rec_id(recs[0], 'insert')
# We compare the value found with None
self.assertEqual(None, rec_id)
def test_insert_complete_xmlmarc(self):
"""bibupload - insert mode, trying to insert complete MARCXML file"""
# Initialize the global variable
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# We retrieve the inserted xml
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
# Compare if the two MARCXML are the same
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(inserted_xm),
self.test), '')
self.assertEqual(compare_hmbuffers(remove_tag_001_from_hmbuffer(inserted_hm),
self.test_hm), '')
def test_retrieve_005_tag(self):
"""bibupload - insert mode, verifying insertion of 005 control field for record """
# Convert marc xml into record structure
from invenio.bibrecord import record_has_field, record_get_field_value
recs = bibupload.xml_marc_to_records(self.test)
dummy, recid, dummy = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
# Retrieve the inserted record based on the record id
rec = get_record(recid)
# We retrieve the creation date from the database
query = """SELECT DATE_FORMAT(last_updated,'%%Y%%m%%d%%H%%i%%s') FROM bibfmt where id_bibrec=%s AND format='xm'"""
res = run_sql(query, (recid, ))
self.assertEqual(record_has_field(rec, '005'), True)
self.assertEqual(str(res[0][0]) + '.0', record_get_field_value(rec, '005', '', ''))
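The 005 value checked in the test above is MySQL's DATE_FORMAT pattern '%Y%m%d%H%i%s' with '.0' appended. A minimal sketch of the same formatting with the standard library (the helper name is ours, not Invenio's):

```python
from datetime import datetime

def format_005(last_updated):
    """Render a timestamp the way the test above expects the 005 control field:
    MySQL DATE_FORMAT(last_updated,'%Y%m%d%H%i%s') plus a '.0' suffix."""
    return last_updated.strftime('%Y%m%d%H%M%S') + '.0'

assert format_005(datetime(2012, 3, 14, 15, 9, 26)) == '20120314150926.0'
```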
class BibUploadAppendModeTest(GenericBibUploadTest):
"""Testing append mode."""
def setUp(self):
# pylint: disable=C0103
"""Initialize the MARCXML variable"""
GenericBibUploadTest.setUp(self)
self.test_existing = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
self.test_to_append = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, U</subfield>
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
self.test_expected_xm = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, U</subfield>
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
self.test_expected_hm = """
001__ 123456789
100__ $$aTester, T$$uDESY
100__ $$aTester, U$$uCERN
970__ $$a0003719PHOPHO
"""
# insert test record:
test_to_upload = self.test_existing.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_to_upload)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
self.test_recid = recid
# replace test buffers with real recid of inserted test record:
self.test_existing = self.test_existing.replace('123456789',
str(self.test_recid))
self.test_to_append = self.test_to_append.replace('123456789',
str(self.test_recid))
self.test_expected_xm = self.test_expected_xm.replace('123456789',
str(self.test_recid))
self.test_expected_hm = self.test_expected_hm.replace('123456789',
str(self.test_recid))
def test_retrieve_record_id(self):
"""bibupload - append mode, the input file should contain a record ID"""
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test_to_append)
# We call the function which should retrieve the record id
rec_id = bibupload.retrieve_rec_id(recs[0], 'append')
# We compare the value found with the test record id
self.assertEqual(self.test_recid, rec_id)
def test_update_modification_record_date(self):
"""bibupload - append mode, checking the update of the modification date"""
from invenio.dateutils import convert_datestruct_to_datetext
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test_existing)
# We call the function which should retrieve the record id
rec_id = bibupload.retrieve_rec_id(recs[0], opt_mode='append')
# Retrieve current localtime
now = time.localtime()
# We update the modification date
bibupload.update_bibrec_date(convert_datestruct_to_datetext(now), rec_id, False)
# We retrieve the modification date from the database
query = """SELECT DATE_FORMAT(modification_date,'%%Y-%%m-%%d %%H:%%i:%%s') FROM bibrec where id = %s"""
res = run_sql(query, (str(rec_id), ))
# We compare the two results
self.assertEqual(res[0][0], convert_datestruct_to_datetext(now))
def test_append_complete_xml_marc(self):
"""bibupload - append mode, appending complete MARCXML file"""
# Now we append a datafield
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test_to_append)
# We call the main function with the record as a parameter
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='append')[0]
self.check_record_consistency(recid)
# We retrieve the resulting xm
after_append_xm = print_record(recid, 'xm')
after_append_hm = print_record(recid, 'hm')
# Check that the two MARCXML are the same
self.assertEqual(compare_xmbuffers(after_append_xm, self.test_expected_xm), '')
self.assertEqual(compare_hmbuffers(after_append_hm, self.test_expected_hm), '')
def test_retrieve_updated_005_tag(self):
"""bibupload - append mode, updating 005 control tag after modifiction """
from invenio.bibrecord import record_get_field_value
recs = bibupload.xml_marc_to_records(self.test_to_append)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='append')
self.check_record_consistency(recid)
rec = get_record(recid)
query = """SELECT DATE_FORMAT(MAX(job_date),'%%Y%%m%%d%%H%%i%%s') FROM hstRECORD where id_bibrec = %s"""
res = run_sql(query, (str(recid), ))
self.assertEqual(str(res[0][0]) + '.0', record_get_field_value(rec, '005', '', ''))
class BibUploadCorrectModeTest(GenericBibUploadTest):
"""
Testing correction of a record containing similar tags (identical
tag, different indicators). Currently Invenio replaces only those
fields whose indicators match as well, unlike ALEPH500, which pays
no attention to indicators and corrects all fields with the same
tag, regardless of the indicator values.
"""
def setUp(self):
"""Initialize the MARCXML test record."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
self.testrec1_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
10047 $$aTest, John$$uTest University
10048 $$aCool
10047 $$aTest, Jim$$uTest Laboratory
"""
self.testrec1_xm_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test2, Joseph</subfield>
<subfield code="u">Test2 Academy</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test2, Joseph</subfield>
<subfield code="u">Test2 Academy</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
10048 $$aCool
10047 $$aTest, Joseph$$uTest Academy
10047 $$aTest2, Joseph$$uTest2 Academy
"""
# insert test record:
test_record_xm = self.testrec1_xm.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_record_xm)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
self.testrec1_xm = self.testrec1_xm.replace('123456789', str(recid))
self.testrec1_hm = self.testrec1_hm.replace('123456789', str(recid))
self.testrec1_xm_to_correct = self.testrec1_xm_to_correct.replace('123456789', str(recid))
self.testrec1_corrected_xm = self.testrec1_corrected_xm.replace('123456789', str(recid))
self.testrec1_corrected_hm = self.testrec1_corrected_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm, self.testrec1_hm), '')
def test_record_correction(self):
"""bibupload - correct mode, similar MARCXML tags/indicators"""
# correct some tags:
recs = bibupload.xml_marc_to_records(self.testrec1_xm_to_correct)
_, self.recid, _ = bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(self.recid)
corrected_xm = print_record(self.recid, 'xm')
corrected_hm = print_record(self.recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(corrected_xm, self.testrec1_corrected_xm), '')
self.assertEqual(compare_hmbuffers(corrected_hm, self.testrec1_corrected_hm), '')
class BibUploadDeleteModeTest(GenericBibUploadTest):
"""
Testing deletion of specific tags from a record while keeping everything
else untouched. Currently Invenio deletes only those fields whose
indicators match as well, unlike ALEPH500, which pays no attention to
indicators and deletes all fields with the same tag, regardless of the
indicator values.
"""
def setUp(self):
"""Initialize the MARCXML test record."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>
"""
self.testrec1_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
10047 $$aTest, John$$uTest University
10048 $$aCool
10047 $$aTest, Jim$$uTest Laboratory
888__ $$adumb text
"""
self.testrec1_xm_to_delete = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Johnson</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_hm = """
001__ 123456789
003__ SzGeCERN
10047 $$aTest, John$$uTest University
10047 $$aTest, Jim$$uTest Laboratory
"""
# insert test record:
test_record_xm = self.testrec1_xm.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_record_xm)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
self.testrec1_xm = self.testrec1_xm.replace('123456789', str(recid))
self.testrec1_hm = self.testrec1_hm.replace('123456789', str(recid))
self.testrec1_xm_to_delete = self.testrec1_xm_to_delete.replace('123456789', str(recid))
self.testrec1_corrected_xm = self.testrec1_corrected_xm.replace('123456789', str(recid))
self.testrec1_corrected_hm = self.testrec1_corrected_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm, self.testrec1_hm), '')
# Check that the dumb text is in bibxxx
self.failUnless(run_sql("SELECT id_bibrec FROM bibrec_bib88x WHERE id_bibrec=%s", (recid, )))
def test_record_tags_deletion(self):
"""bibupload - delete mode, deleting specific tags"""
# correct some tags:
recs = bibupload.xml_marc_to_records(self.testrec1_xm_to_delete)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='delete')[0]
self.check_record_consistency(recid)
corrected_xm = print_record(recid, 'xm')
corrected_hm = print_record(recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(corrected_xm, self.testrec1_corrected_xm), '')
self.assertEqual(compare_hmbuffers(corrected_hm, self.testrec1_corrected_hm), '')
# Check that the dumb text is no longer in bibxxx
self.failIf(run_sql("SELECT id_bibrec FROM bibrec_bib88x WHERE id_bibrec=%s", (recid, )))
class BibUploadReplaceModeTest(GenericBibUploadTest):
"""Testing replace mode."""
def test_record_replace(self):
"""bibupload - replace mode, similar MARCXML tags/indicators"""
# replace some tags:
testrec1_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
testrec1_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
10047 $$aTest, John$$uTest University
10048 $$aCool
10047 $$aTest, Jim$$uTest Laboratory
"""
testrec1_xm_to_replace = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test2, Joseph</subfield>
<subfield code="u">Test2 Academy</subfield>
</datafield>
</record>
"""
testrec1_replaced_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test2, Joseph</subfield>
<subfield code="u">Test2 Academy</subfield>
</datafield>
</record>
"""
testrec1_replaced_hm = """
001__ 123456789
10047 $$aTest, Joseph$$uTest Academy
10047 $$aTest2, Joseph$$uTest2 Academy
"""
# insert test record:
test_record_xm = testrec1_xm.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_record_xm)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
testrec1_xm = testrec1_xm.replace('123456789', str(recid))
testrec1_hm = testrec1_hm.replace('123456789', str(recid))
testrec1_xm_to_replace = testrec1_xm_to_replace.replace('123456789', str(recid))
testrec1_replaced_xm = testrec1_replaced_xm.replace('123456789', str(recid))
testrec1_replaced_hm = testrec1_replaced_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, testrec1_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm, testrec1_hm), '')
recs = bibupload.xml_marc_to_records(testrec1_xm_to_replace)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(recid)
replaced_xm = print_record(recid, 'xm')
replaced_hm = print_record(recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(replaced_xm, testrec1_replaced_xm), '')
self.assertEqual(compare_hmbuffers(replaced_hm, testrec1_replaced_hm), '')
def test_record_replace_force_non_existing(self):
"""bibupload - replace mode, force non existing recid"""
from invenio.bibtask import task_set_option
# replace some tags:
the_recid = self.last_recid + 1
testrec1_xm = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
""" % the_recid
testrec1_hm = """
001__ %s
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
10047 $$aTest, John$$uTest University
10048 $$aCool
10047 $$aTest, Jim$$uTest Laboratory
""" % the_recid
recs = bibupload.xml_marc_to_records(testrec1_xm)
task_set_option('force', True)
try:
err, recid, msg = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(recid)
finally:
task_set_option('force', False)
replaced_xm = print_record(recid, 'xm')
replaced_hm = print_record(recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(replaced_xm, testrec1_xm), '')
self.assertEqual(compare_hmbuffers(replaced_hm, testrec1_hm), '')
self.assertEqual(recid, the_recid)
def test_record_replace_non_existing(self):
"""bibupload - replace mode, non existing recid"""
# replace some tags:
the_recid = self.last_recid + 1
testrec1_xm = """
<record>
<controlfield tag="001">%s</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
""" % the_recid
recs = bibupload.xml_marc_to_records(testrec1_xm)
err, recid, _ = bibupload.bibupload(recs[0], opt_mode='replace')
self.assertEqual((err, recid), (1, -1))
def test_record_replace_two_recids(self):
"""bibupload - replace mode, two recids"""
# replace some tags:
testrec1_xm = """
<record>
<controlfield tag="001">300</controlfield>
<controlfield tag="001">305</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="8">
<subfield code="a">Cool</subfield>
</datafield>
<datafield tag="100" ind1="4" ind2="7">
<subfield code="a">Test, Jim</subfield>
<subfield code="u">Test Laboratory</subfield>
</datafield>
</record>
"""
recs = bibupload.xml_marc_to_records(testrec1_xm)
err, recid, _ = bibupload.bibupload(recs[0], opt_mode='replace')
# did it work?
self.assertEqual((err, recid), (1, -1))
class BibUploadReferencesModeTest(GenericBibUploadTest):
"""Testing references mode.
NOTE: in the past this was done by calling bibupload --reference|-z,
which now simply implies bibupload --correct.
"""
def setUp(self):
"""Initialize the MARCXML variable"""
GenericBibUploadTest.setUp(self)
self.test_insert = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">CERN</subfield>
</datafield>
</record>"""
self.test_reference = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag =\"""" + CFG_BIBUPLOAD_REFERENCE_TAG + """\" ind1="C" ind2="5">
<subfield code="m">M. Lüscher and P. Weisz, String excitation energies in SU(N) gauge theories beyond the free-string approximation,</subfield>
<subfield code="s">J. High Energy Phys. 07 (2004) 014</subfield>
</datafield>
</record>"""
self.test_reference_expected_xm = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag =\"""" + CFG_BIBUPLOAD_REFERENCE_TAG + """\" ind1="C" ind2="5">
<subfield code="m">M. Lüscher and P. Weisz, String excitation energies in SU(N) gauge theories beyond the free-string approximation,</subfield>
<subfield code="s">J. High Energy Phys. 07 (2004) 014</subfield>
</datafield>
</record>"""
self.test_insert_hm = """
001__ 123456789
100__ $$aTester, T$$uCERN
"""
self.test_reference_expected_hm = """
001__ 123456789
100__ $$aTester, T$$uCERN
%(reference_tag)sC5 $$mM. Lüscher and P. Weisz, String excitation energies in SU(N) gauge theories beyond the free-string approximation,$$sJ. High Energy Phys. 07 (2004) 014
""" % {'reference_tag': CFG_BIBUPLOAD_REFERENCE_TAG}
# insert test record:
test_insert = self.test_insert.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_insert)
_, recid, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
self.test_insert = self.test_insert.replace('123456789', str(recid))
self.test_insert_hm = self.test_insert_hm.replace('123456789', str(recid))
self.test_reference = self.test_reference.replace('123456789', str(recid))
self.test_reference_expected_xm = self.test_reference_expected_xm.replace('123456789', str(recid))
self.test_reference_expected_hm = self.test_reference_expected_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, self.test_insert), '')
self.assertEqual(compare_hmbuffers(inserted_hm, self.test_insert_hm), '')
self.test_recid = recid
def test_reference_complete_xml_marc(self):
"""bibupload - reference mode, inserting references MARCXML file"""
# We create the record out of the XML MARC
recs = bibupload.xml_marc_to_records(self.test_reference)
# We call the main function with the record as a parameter
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='reference')[0]
self.check_record_consistency(recid)
# We retrieve the inserted xml
reference_xm = print_record(recid, 'xm')
reference_hm = print_record(recid, 'hm')
# Check that the two MARCXML are the same
self.assertEqual(compare_xmbuffers(reference_xm, self.test_reference_expected_xm), '')
self.assertEqual(compare_hmbuffers(reference_hm, self.test_reference_expected_hm), '')
class BibUploadRecordsWithSYSNOTest(GenericBibUploadTest):
"""Testing uploading of records that have external SYSNO present."""
def setUp(self):
# pylint: disable=C0103
"""Initialize the MARCXML test records."""
GenericBibUploadTest.setUp(self)
# Note that SYSNO fields are repeated but with different
# subfields; this is to test that bibupload does not
# mistakenly pick up the wrong values.
self.xm_testrec1 = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="%(sysnosubfieldcode)s">sysno1</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="0">sysno2</subfield>
</datafield>
</record>
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] or " ",
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] or " ",
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.hm_testrec1 = """
001__ 123456789
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$%(sysnosubfieldcode)ssysno1
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$0sysno2
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4],
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5],
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.xm_testrec1_to_update = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="%(sysnosubfieldcode)s">sysno1</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="0">sysno2</subfield>
</datafield>
</record>
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] or " ",
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] or " ",
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.xm_testrec1_updated = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="%(sysnosubfieldcode)s">sysno1</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="0">sysno2</subfield>
</datafield>
</record>
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] or " ",
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] or " ",
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.hm_testrec1_updated = """
001__ 123456789
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1 Updated
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$%(sysnosubfieldcode)ssysno1
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$0sysno2
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4],
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5],
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.xm_testrec2 = """
<record>
<controlfield tag="001">987654321</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 2</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="%(sysnosubfieldcode)s">sysno2</subfield>
</datafield>
<datafield tag="%(sysnotag)s" ind1="%(sysnoind1)s" ind2="%(sysnoind2)s">
<subfield code="0">sysno1</subfield>
</datafield>
</record>
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4] or " ",
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5] or " ",
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
self.hm_testrec2 = """
001__ 987654321
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 2
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$%(sysnosubfieldcode)ssysno2
%(sysnotag)s%(sysnoind1)s%(sysnoind2)s $$0sysno1
""" % {'sysnotag': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[0:3],
'sysnoind1': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[3:4],
'sysnoind2': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[4:5],
'sysnosubfieldcode': CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG[5:6],
}
def test_insert_the_same_sysno_record(self):
"""bibupload - SYSNO tag, refuse to insert the same SYSNO record"""
# initialize bibupload mode:
if self.verbose:
print "test_insert_the_same_sysno_record() started"
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# insert record 2 first time:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr2, recid2, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid2)
inserted_xm = print_record(recid2, 'xm')
inserted_hm = print_record(recid2, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec2 = self.xm_testrec2.replace('987654321', str(recid2))
self.hm_testrec2 = self.hm_testrec2.replace('987654321', str(recid2))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec2), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec2), '')
# try to insert updated record 1, it should fail:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.assertEqual(-1, recid1_updated)
if self.verbose:
print "test_insert_the_same_sysno_record() finished"
def test_insert_or_replace_the_same_sysno_record(self):
"""bibupload - SYSNO tag, allow to insert or replace the same SYSNO record"""
# initialize bibupload mode:
if self.verbose:
print "test_insert_or_replace_the_same_sysno_record() started"
# insert/replace record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# try to insert/replace updated record 1, it should be okay:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs,
opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1_updated)
inserted_xm = print_record(recid1_updated, 'xm')
inserted_hm = print_record(recid1_updated, 'hm')
self.assertEqual(recid1, recid1_updated)
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1_updated = self.xm_testrec1_updated.replace('123456789', str(recid1))
self.hm_testrec1_updated = self.hm_testrec1_updated.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1_updated), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1_updated), '')
if self.verbose:
print "test_insert_or_replace_the_same_sysno_record() finished"
def test_replace_nonexisting_sysno_record(self):
"""bibupload - SYSNO tag, refuse to replace non-existing SYSNO record"""
# initialize bibupload mode:
if self.verbose:
print "test_replace_nonexisting_sysno_record() started"
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummy, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
        # try to replace record 2, it should fail:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummy, recid2, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.assertEqual(-1, recid2)
if self.verbose:
print "test_replace_nonexisting_sysno_record() finished"
class BibUploadRecordsWithEXTOAIIDTest(GenericBibUploadTest):
"""Testing uploading of records that have external EXTOAIID present."""
def setUp(self):
# pylint: disable=C0103
"""Initialize the MARCXML test records."""
GenericBibUploadTest.setUp(self)
        # Note that EXTOAIID fields are repeated but with different
        # subfields; this tests that bibupload does not mistakenly
        # pick up the wrong values.
self.xm_testrec1 = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="%(extoaiidsubfieldcode)s">extoaiid1</subfield>
<subfield code="%(extoaisrcsubfieldcode)s">extoaisrc1</subfield>
</datafield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="0">extoaiid2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1</subfield>
</datafield>
</record>
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or " ",
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or " ",
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.hm_testrec1 = """
001__ 123456789
003__ SzGeCERN
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$%(extoaisrcsubfieldcode)sextoaisrc1$$%(extoaiidsubfieldcode)sextoaiid1
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$0extoaiid2
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4],
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5],
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.xm_testrec1_to_update = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="%(extoaiidsubfieldcode)s">extoaiid1</subfield>
<subfield code="%(extoaisrcsubfieldcode)s">extoaisrc1</subfield>
</datafield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="0">extoaiid2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
</record>
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or " ",
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or " ",
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.xm_testrec1_updated = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="%(extoaiidsubfieldcode)s">extoaiid1</subfield>
<subfield code="%(extoaisrcsubfieldcode)s">extoaisrc1</subfield>
</datafield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="0">extoaiid2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
</record>
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or " ",
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or " ",
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.hm_testrec1_updated = """
001__ 123456789
003__ SzGeCERN
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$%(extoaisrcsubfieldcode)sextoaisrc1$$%(extoaiidsubfieldcode)sextoaiid1
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$0extoaiid2
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1 Updated
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4],
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5],
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.xm_testrec2 = """
<record>
<controlfield tag="001">987654321</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="%(extoaiidsubfieldcode)s">extoaiid2</subfield>
<subfield code="%(extoaisrcsubfieldcode)s">extoaisrc1</subfield>
</datafield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="0">extoaiid1</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 2</subfield>
</datafield>
</record>
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or " ",
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or " ",
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}
self.hm_testrec2 = """
001__ 987654321
003__ SzGeCERN
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$%(extoaisrcsubfieldcode)sextoaisrc1$$%(extoaiidsubfieldcode)sextoaiid2
%(extoaiidtag)s%(extoaiidind1)s%(extoaiidind2)s $$0extoaiid1
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 2
""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4],
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5],
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'extoaisrcsubfieldcode' : CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG[5:6],
}

def test_insert_the_same_extoaiid_record(self):
"""bibupload - EXTOAIID tag, refuse to insert the same EXTOAIID record"""
# initialize bibupload mode:
if self.verbose:
print "test_insert_the_same_extoaiid_record() started"
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# insert record 2 first time:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr2, recid2, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid2)
inserted_xm = print_record(recid2, 'xm')
inserted_hm = print_record(recid2, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec2 = self.xm_testrec2.replace('987654321', str(recid2))
self.hm_testrec2 = self.hm_testrec2.replace('987654321', str(recid2))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec2), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec2), '')
# try to insert updated record 1, it should fail:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.assertEqual(-1, recid1_updated)
if self.verbose:
print "test_insert_the_same_extoaiid_record() finished"
def test_insert_or_replace_the_same_extoaiid_record(self):
"""bibupload - EXTOAIID tag, allow to insert or replace the same EXTOAIID record"""
# initialize bibupload mode:
if self.verbose:
print "test_insert_or_replace_the_same_extoaiid_record() started"
# insert/replace record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# try to insert/replace updated record 1, it should be okay:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1_updated)
inserted_xm = print_record(recid1_updated, 'xm')
inserted_hm = print_record(recid1_updated, 'hm')
self.assertEqual(recid1, recid1_updated)
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1_updated = self.xm_testrec1_updated.replace('123456789', str(recid1))
self.hm_testrec1_updated = self.hm_testrec1_updated.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1_updated), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1_updated), '')
if self.verbose:
print "test_insert_or_replace_the_same_extoaiid_record() finished"
def test_replace_nonexisting_extoaiid_record(self):
"""bibupload - EXTOAIID tag, refuse to replace non-existing EXTOAIID record"""
# initialize bibupload mode:
if self.verbose:
print "test_replace_nonexisting_extoaiid_record() started"
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
        # try to replace record 2, it should fail:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr2, recid2, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.assertEqual(-1, recid2)
if self.verbose:
print "test_replace_nonexisting_extoaiid_record() finished"
class BibUploadRecordsWithOAIIDTest(GenericBibUploadTest):
"""Testing uploading of records that have OAI ID present."""
def setUp(self):
"""Initialize the MARCXML test records."""
GenericBibUploadTest.setUp(self)
        # Note that OAI fields are repeated but with different
        # subfields; this tests that bibupload does not mistakenly
        # pick up the wrong values.
self.xm_testrec1 = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="%(oaisubfieldcode)s">oai:foo:1</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="0">oai:foo:2</subfield>
</datafield>
</record>
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4] != "_" and \
CFG_OAI_ID_FIELD[3:4] or " ",
'oaiind2': CFG_OAI_ID_FIELD[4:5] != "_" and \
CFG_OAI_ID_FIELD[4:5] or " ",
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.hm_testrec1 = """
001__ 123456789
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1
%(oaitag)s%(oaiind1)s%(oaiind2)s $$%(oaisubfieldcode)soai:foo:1
%(oaitag)s%(oaiind1)s%(oaiind2)s $$0oai:foo:2
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4],
'oaiind2': CFG_OAI_ID_FIELD[4:5],
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.xm_testrec1_to_update = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="%(oaisubfieldcode)s">oai:foo:1</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="0">oai:foo:2</subfield>
</datafield>
</record>
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4] != "_" and \
CFG_OAI_ID_FIELD[3:4] or " ",
'oaiind2': CFG_OAI_ID_FIELD[4:5] != "_" and \
CFG_OAI_ID_FIELD[4:5] or " ",
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.xm_testrec1_updated = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="%(oaisubfieldcode)s">oai:foo:1</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="0">oai:foo:2</subfield>
</datafield>
</record>
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4] != "_" and \
CFG_OAI_ID_FIELD[3:4] or " ",
'oaiind2': CFG_OAI_ID_FIELD[4:5] != "_" and \
CFG_OAI_ID_FIELD[4:5] or " ",
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.hm_testrec1_updated = """
001__ 123456789
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1 Updated
%(oaitag)s%(oaiind1)s%(oaiind2)s $$%(oaisubfieldcode)soai:foo:1
%(oaitag)s%(oaiind1)s%(oaiind2)s $$0oai:foo:2
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4],
'oaiind2': CFG_OAI_ID_FIELD[4:5],
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.xm_testrec2 = """
<record>
<controlfield tag="001">987654321</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 2</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="%(oaisubfieldcode)s">oai:foo:2</subfield>
</datafield>
<datafield tag="%(oaitag)s" ind1="%(oaiind1)s" ind2="%(oaiind2)s">
<subfield code="0">oai:foo:1</subfield>
</datafield>
</record>
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4] != "_" and \
CFG_OAI_ID_FIELD[3:4] or " ",
'oaiind2': CFG_OAI_ID_FIELD[4:5] != "_" and \
CFG_OAI_ID_FIELD[4:5] or " ",
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}
self.hm_testrec2 = """
001__ 987654321
003__ SzGeCERN
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 2
%(oaitag)s%(oaiind1)s%(oaiind2)s $$%(oaisubfieldcode)soai:foo:2
%(oaitag)s%(oaiind1)s%(oaiind2)s $$0oai:foo:1
""" % {'oaitag': CFG_OAI_ID_FIELD[0:3],
'oaiind1': CFG_OAI_ID_FIELD[3:4],
'oaiind2': CFG_OAI_ID_FIELD[4:5],
'oaisubfieldcode': CFG_OAI_ID_FIELD[5:6],
}

def test_insert_the_same_oai_record(self):
"""bibupload - OAIID tag, refuse to insert the same OAI record"""
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# insert record 2 first time:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr2, recid2, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid2)
inserted_xm = print_record(recid2, 'xm')
inserted_hm = print_record(recid2, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec2 = self.xm_testrec2.replace('987654321', str(recid2))
self.hm_testrec2 = self.hm_testrec2.replace('987654321', str(recid2))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec2), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec2), '')
# try to insert updated record 1, it should fail:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.assertEqual(-1, recid1_updated)

def test_insert_or_replace_the_same_oai_record(self):
"""bibupload - OAIID tag, allow to insert or replace the same OAI record"""
# insert/replace record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# try to insert/replace updated record 1, it should be okay:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
dummyerr1_updated, recid1_updated, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1_updated)
inserted_xm = print_record(recid1_updated, 'xm')
inserted_hm = print_record(recid1_updated, 'hm')
self.assertEqual(recid1, recid1_updated)
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1_updated = self.xm_testrec1_updated.replace('123456789', str(recid1))
self.hm_testrec1_updated = self.hm_testrec1_updated.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1_updated), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1_updated), '')

def test_replace_nonexisting_oai_record(self):
"""bibupload - OAIID tag, refuse to replace non-existing OAI record"""
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='replace_or_insert')[0]
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
        # try to replace record 2, it should fail:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
dummyerr2, recid2, _ = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.assertEqual(-1, recid2)

class BibUploadRecordsWithDOITest(GenericBibUploadTest):
"""Testing uploading of records with DOI."""
def setUp(self):
"""Initialize the MARCXML test records."""
GenericBibUploadTest.setUp(self)
self.xm_testrec1 = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789</subfield>
</datafield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">nondoi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789-0</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.hm_testrec1 = """
001__ 123456789
003__ SzGeCERN
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)sdoi$$%(doisubfieldcodevalue)s10.5170/123-456-789
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)snondoi$$%(doisubfieldcodevalue)s10.5170/123-456-789-0
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': '_',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec1_to_update = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789</subfield>
</datafield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">nondoi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789-0</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec1_updated = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789</subfield>
</datafield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">nondoi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789-0</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 1 Updated</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.hm_testrec1_updated = """
001__ 123456789
003__ SzGeCERN
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)sdoi$$%(doisubfieldcodevalue)s10.5170/123-456-789
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)snondoi$$%(doisubfieldcodevalue)s10.5170/123-456-789-0
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 1 Updated
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': '_',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec2 = """
<record>
<controlfield tag="001">987654321</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/987-654-321</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 2</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.hm_testrec2 = """
001__ 987654321
003__ SzGeCERN
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)sdoi$$%(doisubfieldcodevalue)s10.5170/987-654-321
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 2
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': '_',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec2_to_update = """
<record>
<controlfield tag="001">987654321</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec3 = """
<record>
<controlfield tag="001">192837645</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789-0</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 4</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.hm_testrec3 = """
001__ 192837645
003__ SzGeCERN
%(doitag)s%(doiind1)s%(doiind2)s $$%(doisubfieldcodesource)sdoi$$%(doisubfieldcodevalue)s10.5170/123-456-789-0
100__ $$aBar, Baz$$uFoo
245__ $$aOn the quux and huux 4
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': '_',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec4 = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789-non-existing</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 5</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
self.xm_testrec5 = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/123-456-789</subfield>
</datafield>
<datafield tag="%(doitag)s" ind1="%(doiind1)s" ind2="%(doiind2)s">
<subfield code="%(doisubfieldcodesource)s">doi</subfield>
<subfield code="%(doisubfieldcodevalue)s">10.5170/987-654-321</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bar, Baz</subfield>
<subfield code="u">Foo</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quux and huux 6</subfield>
</datafield>
</record>
""" % {'doitag': '024',
'doiind1': '7',
'doiind2': ' ',
'doisubfieldcodevalue': 'a',
'doisubfieldcodesource': '2'
}
def test_insert_the_same_doi_matching_on_doi(self):
"""bibupload - DOI tag, refuse to "insert" twice same DOI (matching on DOI)"""
# insert record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# insert record 2 first time:
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err2, recid2, msg2 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid2)
inserted_xm = print_record(recid2, 'xm')
inserted_hm = print_record(recid2, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec2 = self.xm_testrec2.replace('987654321', str(recid2))
self.hm_testrec2 = self.hm_testrec2.replace('987654321', str(recid2))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec2), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec2), '')
# try to insert again record 1 (without recid, matching on DOI)
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='insert')
self.assertEqual(-1, recid1_updated)
# if we try to correct, append or replace, the same record is matched
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='correct')
self.check_record_consistency(recid1_updated)
self.assertEqual(recid1, recid1_updated)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='append')
self.check_record_consistency(recid1_updated)
self.assertEqual(recid1, recid1_updated)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(recid1_updated)
self.assertEqual(recid1, recid1_updated)
def test_insert_the_same_doi_matching_on_recid(self):
"""bibupload - DOI tag, refuse to "insert" twice same DOI (matching on recid)"""
# First upload 2 test records
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid1)
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err2, recid2, msg2 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid2)
# try to update record 2 with DOI already in record 1. It must fail:
testrec_to_update = self.xm_testrec2_to_update.replace('<controlfield tag="001">987654321</controlfield>',
'<controlfield tag="001">%s</controlfield>' % recid2)
recs = bibupload.xml_marc_to_records(testrec_to_update)
err, recid, msg = bibupload.bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(recid)
self.assertEqual(1, err)
# Ditto in correct and append mode
recs = bibupload.xml_marc_to_records(testrec_to_update)
err, recid, msg = bibupload.bibupload(recs[0], opt_mode='correct')
self.check_record_consistency(recid)
self.assertEqual(1, err)
recs = bibupload.xml_marc_to_records(testrec_to_update)
err, recid, msg = bibupload.bibupload(recs[0], opt_mode='append')
self.check_record_consistency(recid)
self.assertEqual(1, err)
def test_insert_or_replace_the_same_doi_record(self):
"""bibupload - DOI tag, allow to insert or replace matching on DOI"""
# insert/replace record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='replace_or_insert')
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# try to insert/replace updated record 1, it should be okay:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='replace_or_insert')
self.check_record_consistency(recid1_updated)
inserted_xm = print_record(recid1_updated, 'xm')
inserted_hm = print_record(recid1_updated, 'hm')
self.assertEqual(recid1, recid1_updated)
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1_updated = self.xm_testrec1_updated.replace('123456789', str(recid1))
self.hm_testrec1_updated = self.hm_testrec1_updated.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1_updated), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1_updated), '')
def test_correct_the_same_doi_record(self):
"""bibupload - DOI tag, allow to correct matching on DOI"""
# insert/replace record 1 first time:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='replace_or_insert')
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# try to correct updated record 1, it should be okay:
recs = bibupload.xml_marc_to_records(self.xm_testrec1_to_update)
err1_updated, recid1_updated, msg1_updated = bibupload.bibupload(recs[0], opt_mode='correct')
self.check_record_consistency(recid1_updated)
inserted_xm = print_record(recid1_updated, 'xm')
inserted_hm = print_record(recid1_updated, 'hm')
self.assertEqual(recid1, recid1_updated)
# use real recID in test buffers when comparing whether it worked:
self.xm_testrec1_updated = self.xm_testrec1_updated.replace('123456789', str(recid1))
self.hm_testrec1_updated = self.hm_testrec1_updated.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1_updated), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1_updated), '')
def test_replace_nonexisting_doi_record(self):
"""bibupload - DOI tag, refuse to replace non-existing DOI record (matching on DOI)"""
testrec_to_insert_first = self.xm_testrec4
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err4, recid4, msg4 = bibupload.bibupload(recs[0], opt_mode='replace')
self.assertEqual(-1, recid4)
def test_matching_on_doi_source_field(self):
"""bibupload - DOI tag, test matching records using DOI value AND source field ($2)"""
# insert test record 1, with a "fake" doi (not "doi" in source field):
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid1)
inserted_xm = print_record(recid1, 'xm')
inserted_hm = print_record(recid1, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec1 = self.xm_testrec1.replace('123456789', str(recid1))
self.hm_testrec1 = self.hm_testrec1.replace('123456789', str(recid1))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec1), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec1), '')
# insert record 3, which matches record 1's "fake" DOI, so it
# should work.
testrec_to_insert_first = self.xm_testrec3.replace('<controlfield tag="001">192837645</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err3, recid3, msg3 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid3)
inserted_xm = print_record(recid3, 'xm')
inserted_hm = print_record(recid3, 'hm')
# use real recID when comparing whether it worked:
self.xm_testrec3 = self.xm_testrec3.replace('192837645', str(recid3))
self.hm_testrec3 = self.hm_testrec3.replace('192837645', str(recid3))
self.assertEqual(compare_xmbuffers(inserted_xm,
self.xm_testrec3), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
self.hm_testrec3), '')
def test_replace_or_update_record__with_ambiguous_doi(self):
"""bibupload - DOI tag, refuse to replace/correct/append on the basis of ambiguous DOI"""
# First upload 2 test records with two different DOIs:
testrec_to_insert_first = self.xm_testrec1.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err1, recid1, msg1 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid1)
self.assertEqual(0, err1)
testrec_to_insert_first = self.xm_testrec2.replace('<controlfield tag="001">987654321</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec_to_insert_first)
err2, recid2, msg2 = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid2)
self.assertEqual(0, err2)
# Now try to insert a record whose DOIs match the records
# previously uploaded. It must fail.
testrec = self.xm_testrec5.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='insert')
self.assertEqual(1, err5)
# Ditto for other modes:
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='replace_or_insert')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='replace')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='correct')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='append')
self.assertEqual(1, err5)
# The same is true if a recid exists in the input MARCXML (as
# long as DOIs are ambiguous):
testrec = self.xm_testrec5.replace('<controlfield tag="001">123456789</controlfield>',
'<controlfield tag="001">%s</controlfield>' % recid1)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='replace_or_insert')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='replace')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='correct')
self.assertEqual(1, err5)
recs = bibupload.xml_marc_to_records(testrec)
err5, recid5, msg5 = bibupload.bibupload(recs[0], opt_mode='append')
self.assertEqual(1, err5)
class BibUploadIndicatorsTest(GenericBibUploadTest):
"""
Testing uploading of a MARCXML record with indicators having
either a blank space (as per the MARC schema) or an empty string
value (old behaviour).
"""
def setUp(self):
"""Initialize the MARCXML test record."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
</record>
"""
self.testrec1_hm = """
003__ SzGeCERN
100__ $$aTest, John$$uTest University
"""
self.testrec2_xm = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1="" ind2="">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
</record>
"""
self.testrec2_hm = """
003__ SzGeCERN
100__ $$aTest, John$$uTest University
"""
def test_record_with_spaces_in_indicators(self):
"""bibupload - inserting MARCXML with spaces in indicators"""
recs = bibupload.xml_marc_to_records(self.testrec1_xm)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(inserted_xm),
self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(remove_tag_001_from_hmbuffer(inserted_hm),
self.testrec1_hm), '')
def test_record_with_no_spaces_in_indicators(self):
"""bibupload - inserting MARCXML with no spaces in indicators"""
recs = bibupload.xml_marc_to_records(self.testrec2_xm)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(inserted_xm),
self.testrec2_xm), '')
self.assertEqual(compare_hmbuffers(remove_tag_001_from_hmbuffer(inserted_hm),
self.testrec2_hm), '')
class BibUploadUpperLowerCaseTest(GenericBibUploadTest):
"""
Testing treatment of similar records with only upper and lower
case value differences in the bibxxx table.
"""
def setUp(self):
"""Initialize the MARCXML test records."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
</record>
"""
self.testrec1_hm = """
003__ SzGeCERN
100__ $$aTest, John$$uTest University
"""
self.testrec2_xm = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1="" ind2="">
<subfield code="a">TeSt, JoHn</subfield>
<subfield code="u">Test UniVeRsity</subfield>
</datafield>
</record>
"""
self.testrec2_hm = """
003__ SzGeCERN
100__ $$aTeSt, JoHn$$uTest UniVeRsity
"""
def test_record_with_upper_lower_case_letters(self):
"""bibupload - inserting similar MARCXML records with upper/lower case"""
# insert test record #1:
recs = bibupload.xml_marc_to_records(self.testrec1_xm)
dummyerr1, recid1, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid1)
recid1_inserted_xm = print_record(recid1, 'xm')
recid1_inserted_hm = print_record(recid1, 'hm')
# insert test record #2:
recs = bibupload.xml_marc_to_records(self.testrec2_xm)
dummyerr1, recid2, _ = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid2)
recid2_inserted_xm = print_record(recid2, 'xm')
recid2_inserted_hm = print_record(recid2, 'hm')
# compare the inserted records against the test buffers:
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(recid1_inserted_xm),
self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(remove_tag_001_from_hmbuffer(recid1_inserted_hm),
self.testrec1_hm), '')
self.assertEqual(compare_xmbuffers(remove_tag_001_from_xmbuffer(recid2_inserted_xm),
self.testrec2_xm), '')
self.assertEqual(compare_hmbuffers(remove_tag_001_from_hmbuffer(recid2_inserted_hm),
self.testrec2_hm), '')
class BibUploadControlledProvenanceTest(GenericBibUploadTest):
"""Testing treatment of tags under controlled provenance in the correct mode."""
def setUp(self):
"""Initialize the MARCXML test record."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Test title</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">blabla</subfield>
<subfield code="9">sam</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">blublu</subfield>
<subfield code="9">sim</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">human</subfield>
</datafield>
</record>
"""
self.testrec1_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
245__ $$aTest title
6531_ $$9sam$$ablabla
6531_ $$9sim$$ablublu
6531_ $$ahuman
"""
self.testrec1_xm_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bleble</subfield>
<subfield code="9">sim</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bloblo</subfield>
<subfield code="9">som</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Test title</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">blabla</subfield>
<subfield code="9">sam</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">human</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bleble</subfield>
<subfield code="9">sim</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bloblo</subfield>
<subfield code="9">som</subfield>
</datafield>
</record>
"""
self.testrec1_corrected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
245__ $$aTest title
6531_ $$9sam$$ablabla
6531_ $$ahuman
6531_ $$9sim$$ableble
6531_ $$9som$$abloblo
"""
# insert test record:
test_record_xm = self.testrec1_xm.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_record_xm)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
self.testrec1_xm = self.testrec1_xm.replace('123456789', str(recid))
self.testrec1_hm = self.testrec1_hm.replace('123456789', str(recid))
self.testrec1_xm_to_correct = self.testrec1_xm_to_correct.replace('123456789', str(recid))
self.testrec1_corrected_xm = self.testrec1_corrected_xm.replace('123456789', str(recid))
self.testrec1_corrected_hm = self.testrec1_corrected_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm, self.testrec1_hm), '')
def test_controlled_provenance_persistence(self):
"""bibupload - correct mode, tags with controlled provenance"""
# correct metadata tags; will the protected tags be kept?
recs = bibupload.xml_marc_to_records(self.testrec1_xm_to_correct)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
corrected_xm = print_record(recid, 'xm')
corrected_hm = print_record(recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(corrected_xm, self.testrec1_corrected_xm), '')
self.assertEqual(compare_hmbuffers(corrected_hm, self.testrec1_corrected_hm), '')
class BibUploadStrongTagsTest(GenericBibUploadTest):
"""Testing treatment of strong tags and the replace mode."""
def setUp(self):
"""Initialize the MARCXML test record."""
GenericBibUploadTest.setUp(self)
self.testrec1_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Jane</subfield>
<subfield code="u">Test Institute</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Test title</subfield>
</datafield>
<datafield tag="%(strong_tag)s" ind1=" " ind2=" ">
<subfield code="a">A value</subfield>
<subfield code="b">Another value</subfield>
</datafield>
</record>
""" % {'strong_tag': bibupload.CFG_BIBUPLOAD_STRONG_TAGS[0]}
self.testrec1_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, Jane$$uTest Institute
245__ $$aTest title
%(strong_tag)s__ $$aA value$$bAnother value
""" % {'strong_tag': bibupload.CFG_BIBUPLOAD_STRONG_TAGS[0]}
self.testrec1_xm_to_replace = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
</record>
"""
self.testrec1_replaced_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, Joseph</subfield>
<subfield code="u">Test Academy</subfield>
</datafield>
<datafield tag="%(strong_tag)s" ind1=" " ind2=" ">
<subfield code="a">A value</subfield>
<subfield code="b">Another value</subfield>
</datafield>
</record>
""" % {'strong_tag': bibupload.CFG_BIBUPLOAD_STRONG_TAGS[0]}
self.testrec1_replaced_hm = """
001__ 123456789
100__ $$aTest, Joseph$$uTest Academy
%(strong_tag)s__ $$aA value$$bAnother value
""" % {'strong_tag': bibupload.CFG_BIBUPLOAD_STRONG_TAGS[0]}
# insert test record:
test_record_xm = self.testrec1_xm.replace('<controlfield tag="001">123456789</controlfield>',
'')
recs = bibupload.xml_marc_to_records(test_record_xm)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recID:
self.testrec1_xm = self.testrec1_xm.replace('123456789', str(recid))
self.testrec1_hm = self.testrec1_hm.replace('123456789', str(recid))
self.testrec1_xm_to_replace = self.testrec1_xm_to_replace.replace('123456789', str(recid))
self.testrec1_replaced_xm = self.testrec1_replaced_xm.replace('123456789', str(recid))
self.testrec1_replaced_hm = self.testrec1_replaced_hm.replace('123456789', str(recid))
# test of the inserted record:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm, self.testrec1_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm, self.testrec1_hm), '')
def test_strong_tags_persistence(self):
"""bibupload - strong tags, persistence in replace mode"""
# replace all metadata tags; will the strong tags be kept?
recs = bibupload.xml_marc_to_records(self.testrec1_xm_to_replace)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(recid)
replaced_xm = print_record(recid, 'xm')
replaced_hm = print_record(recid, 'hm')
# did it work?
self.assertEqual(compare_xmbuffers(replaced_xm, self.testrec1_replaced_xm), '')
self.assertEqual(compare_hmbuffers(replaced_hm, self.testrec1_replaced_hm), '')
class BibUploadPretendTest(GenericBibUploadTest):
"""
Testing bibupload --pretend correctness.
"""
def setUp(self):
from invenio.bibtask import task_set_task_param
GenericBibUploadTest.setUp(self)
self.demo_data = bibupload.xml_marc_to_records(open(os.path.join(CFG_TMPDIR, 'demobibdata.xml')).read())[0]
self.before = self._get_tables_fingerprint()
task_set_task_param('pretend', True)
def tearDown(self):
from invenio.bibtask import task_set_task_param
GenericBibUploadTest.tearDown(self)
task_set_task_param('pretend', False)
+ @staticmethod
def _get_tables_fingerprint():
"""
Take the length and last modification time of all the tables that
might be touched by bibupload and return them in a nice structure.
"""
fingerprint = {}
- tables = ['bibrec', 'bibdoc', 'bibrec_bibdoc', 'bibdoc_bibdoc', 'bibfmt', 'hstDOCUMENT', 'hstRECORD']
+ tables = ['bibrec', 'bibdoc', 'bibrec_bibdoc', 'bibdoc_bibdoc', 'bibfmt', 'hstDOCUMENT', 'hstRECORD', 'bibHOLDINGPEN', 'bibdocmoreinfo', 'bibdocfsinfo']
for i in xrange(100):
tables.append('bib%02dx' % i)
tables.append('bibrec_bib%02dx' % i)
for table in tables:
fingerprint[table] = get_table_status_info(table)
return fingerprint
- _get_tables_fingerprint = staticmethod(_get_tables_fingerprint)
+ @staticmethod
def _checks_tables_fingerprints(before, after):
"""
Check the two table fingerprints for differences, raising an error if any table was modified.
"""
- err = True
for table in before.keys():
if before[table] != after[table]:
- print >> sys.stderr, "Table %s has been modified: before was [%s], after was [%s]" % (table, pprint.pformat(before[table]), pprint.pformat(after[table]))
- err = False
- return err
- _checks_tables_fingerprints = staticmethod(_checks_tables_fingerprints)
+ raise StandardError("Table %s has been modified: before was [%s], after was [%s]" % (table, pprint.pformat(before[table]), pprint.pformat(after[table])))
+ return True
def test_pretend_insert(self):
"""bibupload - pretend insert"""
bibupload.bibupload_records([self.demo_data], opt_mode='insert', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_correct(self):
"""bibupload - pretend correct"""
bibupload.bibupload_records([self.demo_data], opt_mode='correct', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_replace(self):
"""bibupload - pretend replace"""
bibupload.bibupload_records([self.demo_data], opt_mode='replace', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_append(self):
"""bibupload - pretend append"""
bibupload.bibupload_records([self.demo_data], opt_mode='append', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_replace_or_insert(self):
"""bibupload - pretend replace or insert"""
bibupload.bibupload_records([self.demo_data], opt_mode='replace_or_insert', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_holdingpen(self):
"""bibupload - pretend holdingpen"""
bibupload.bibupload_records([self.demo_data], opt_mode='holdingpen', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_delete(self):
"""bibupload - pretend delete"""
bibupload.bibupload_records([self.demo_data], opt_mode='delete', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
def test_pretend_reference(self):
"""bibupload - pretend reference"""
bibupload.bibupload_records([self.demo_data], opt_mode='reference', pretend=True)
self.failUnless(self._checks_tables_fingerprints(self.before, self._get_tables_fingerprint()))
class BibUploadHoldingPenTest(GenericBibUploadTest):
"""
Testing the Holding Pen usage.
"""
def setUp(self):
from invenio.bibtask import task_set_task_param, setup_loggers
GenericBibUploadTest.setUp(self)
self.verbose = 9
setup_loggers()
task_set_task_param('verbose', self.verbose)
self.recid = 10
self.oai_id = "oai:cds.cern.ch:CERN-EP-2001-094"
def test_holding_pen_upload_with_recid(self):
"""bibupload - holding pen upload with recid"""
test_to_upload = """<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">%s</controlfield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kleefeld, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Newcomer, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rupp, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Scadron, M D</subfield>
</datafield>
</record>
</collection>""" % self.recid
recs = bibupload.xml_marc_to_records(test_to_upload)
bibupload.insert_record_into_holding_pen(recs[0], "")
res = run_sql("SELECT changeset_xml FROM bibHOLDINGPEN WHERE id_bibrec=%s", (self.recid, ))
self.failUnless("Rupp, G" in res[0][0])
def test_holding_pen_upload_with_oai_id(self):
"""bibupload - holding pen upload with oai_id"""
test_to_upload = """<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kleefeld, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Newcomer, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rupp, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Scadron, M D</subfield>
</datafield>
<datafield tag="%(extoaiidtag)s" ind1="%(extoaiidind1)s" ind2="%(extoaiidind2)s">
<subfield code="%(extoaiidsubfieldcode)s">%(value)s</subfield>
</datafield>
</record>
</collection>""" % {'extoaiidtag': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[0:3],
'extoaiidind1': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[3:4] or " ",
'extoaiidind2': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] != "_" and \
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[4:5] or " ",
'extoaiidsubfieldcode': CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG[5:6],
'value': self.oai_id
}
recs = bibupload.xml_marc_to_records(test_to_upload)
bibupload.insert_record_into_holding_pen(recs[0], self.oai_id)
res = run_sql("SELECT changeset_xml FROM bibHOLDINGPEN WHERE id_bibrec=%s AND oai_id=%s", (self.recid, self.oai_id))
self.failUnless("Rupp, G" in res[0][0])
def tearDown(self):
GenericBibUploadTest.tearDown(self)
run_sql("DELETE FROM bibHOLDINGPEN WHERE id_bibrec=%s", (self.recid, ))
class BibUploadFFTModeTest(GenericBibUploadTest):
"""
Testing treatment of fulltext file transfer import mode.
"""
def _test_bibdoc_status(self, recid, docname, status):
res = run_sql('SELECT bd.status FROM bibrec_bibdoc as bb JOIN bibdoc as bd ON bb.id_bibdoc = bd.id WHERE bb.id_bibrec = %s AND bb.docname = %s', (recid, docname))
self.failUnless(res)
self.assertEqual(status, res[0][0])
def test_writing_rights(self):
"""bibupload - FFT has writing rights"""
self.failUnless(bibupload.writing_rights_p())
def test_simple_fft_insert(self):
"""bibupload - simple FFT insert"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.failUnless(try_url_download(testrec_expected_url))
def test_fft_insert_with_valid_embargo(self):
"""bibupload - FFT insert with valid embargo"""
# define the test case:
future_date = time.strftime('%Y-%m-%d', time.gmtime(time.time() + 24 * 3600 * 2))
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="r">firerole: deny until '%(future_date)s'
allow any</subfield>
</datafield>
</record>
""" % {
'future_date': future_date,
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
result = urlopen(testrec_expected_url).read()
self.failUnless("This file is restricted." in result, result)
def test_fft_insert_with_expired_embargo(self):
"""bibupload - FFT insert with expired embargo"""
# define the test case:
past_date = time.strftime('%Y-%m-%d', time.gmtime(time.time() - 24 * 3600 * 2))
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="r">firerole: deny until '%(past_date)s'
allow any</subfield>
</datafield>
</record>
""" % {
'past_date': past_date,
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
980__ $$aARTICLE
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
result = urlopen(testrec_expected_url).read()
self.failIf("If you already have an account, please login using the form below." in result, result)
self.assertEqual(test_web_page_content(testrec_expected_url, 'hyde', 'h123yde', expected_text='Authorization failure'), [])
force_webcoll(recid)
self.assertEqual(test_web_page_content(testrec_expected_url, 'hyde', 'h123yde', expected_text=urlopen("%(siteurl)s/img/site_logo.gif" % {
'siteurl': CFG_SITE_URL
}).read()), [])
def test_exotic_format_fft_append(self):
"""bibupload - exotic format FFT append"""
# define the test case:
from invenio.access_control_config import CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS
testfile = os.path.join(CFG_TMPDIR, 'test.ps.Z')
with open(testfile, 'w') as f: f.write('TEST')
email_tag = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][0:3]
email_ind1 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][3]
email_ind2 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][4]
email_code = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][5]
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
</record>
""" % {
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_to_append = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%s</subfield>
</datafield>
</record>
""" % testfile
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/test.ps.Z</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
%(email_tag)s%(email_ind1)s%(email_ind2)s $$%(email_code)sjekyll@cds.cern.ch
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/test.ps.Z
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == ' ' and '_' or email_ind1,
'email_ind2': email_ind2 == ' ' and '_' or email_ind2,
'email_code': email_code}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/test.ps.Z" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url2 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/test?format=ps.Z" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_to_append = testrec_to_append.replace('123456789',
str(recid))
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
testrec_expected_url2 = testrec_expected_url2.replace('123456789',
str(recid))
recs = bibupload.xml_marc_to_records(testrec_to_append)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='append')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.assertEqual(test_web_page_content(testrec_expected_url, 'jekyll', 'j123ekyll', expected_text='TEST'), [])
self.assertEqual(test_web_page_content(testrec_expected_url2, 'jekyll', 'j123ekyll', expected_text='TEST'), [])
def test_fft_check_md5_through_bibrecdoc_str(self):
"""bibupload - simple FFT insert, check md5 through BibRecDocs.str()"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%s/img/head.gif</subfield>
</datafield>
</record>
""" % CFG_SITE_URL
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
original_md5 = md5(urlopen('%s/img/head.gif' % CFG_SITE_URL).read()).hexdigest()
bibrec_str = str(BibRecDocs(int(recid)))
md5_found = False
for row in bibrec_str.split('\n'):
if 'checksum' in row:
if original_md5 in row:
md5_found = True
self.failUnless(md5_found)
def test_detailed_fft_insert(self):
"""bibupload - detailed FFT insert"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="t">SuperMain</subfield>
<subfield code="d">This is a description</subfield>
<subfield code="z">This is a comment</subfield>
<subfield code="n">CIDIESSE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/rss.png</subfield>
<subfield code="t">SuperMain</subfield>
<subfield code="f">.jpeg</subfield>
<subfield code="d">This is a description</subfield>
<subfield code="z">This is a second comment</subfield>
<subfield code="n">CIDIESSE</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.gif</subfield>
<subfield code="y">This is a description</subfield>
<subfield code="z">This is a comment</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.jpeg</subfield>
<subfield code="y">This is a description</subfield>
<subfield code="z">This is a second comment</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.gif$$yThis is a description$$zThis is a comment
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.jpeg$$yThis is a description$$zThis is a second comment
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url1 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.gif" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url2 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/CIDIESSE.jpeg" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url1 = testrec_expected_url1.replace('123456789',
str(recid))
testrec_expected_url2 = testrec_expected_url2.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.failUnless(try_url_download(testrec_expected_url1))
self.failUnless(try_url_download(testrec_expected_url2))
def test_simple_fft_insert_with_restriction(self):
"""bibupload - simple FFT insert with restriction"""
# define the test case:
from invenio.access_control_config import CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS
email_tag = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][0:3]
email_ind1 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][3]
email_ind2 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][4]
email_code = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][5]
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="r">thesis</subfield>
<subfield code="x">%(siteurl)s/img/sb.gif</subfield>
</datafield>
</record>
""" % {'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code,
'siteurl': CFG_SITE_URL}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon</subfield>
<subfield code="x">icon</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
%(email_tag)s%(email_ind1)s%(email_ind2)s $$%(email_code)sjekyll@cds.cern.ch
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon$$xicon
980__ $$aARTICLE
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == ' ' and '_' or email_ind1,
'email_ind2': email_ind2 == ' ' and '_' or email_ind2,
'email_code': email_code}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_icon = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
testrec_expected_icon = testrec_expected_icon.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.assertEqual(test_web_page_content(testrec_expected_icon, 'jekyll', 'j123ekyll', expected_text=urlopen('%(siteurl)s/img/sb.gif' % {
'siteurl': CFG_SITE_URL
}).read()), [])
self.assertEqual(test_web_page_content(testrec_expected_icon, 'hyde', 'h123yde', expected_text='Authorization failure'), [])
force_webcoll(recid)
self.assertEqual(test_web_page_content(testrec_expected_icon, 'hyde', 'h123yde', expected_text=urlopen('%(siteurl)s/img/restricted.gif' % {'siteurl': CFG_SITE_URL}).read()), [])
self.failUnless("HTTP Error 401: Unauthorized" in test_web_page_content(testrec_expected_url, 'hyde', 'h123yde')[0])
self.failUnless("This file is restricted." in urlopen(testrec_expected_url).read())
def test_simple_fft_insert_with_icon(self):
"""bibupload - simple FFT insert with icon"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="x">%(siteurl)s/img/sb.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon</subfield>
<subfield code="x">icon</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon$$xicon
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_icon = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
testrec_expected_icon = testrec_expected_icon.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.failUnless(try_url_download(testrec_expected_url))
self.failUnless(try_url_download(testrec_expected_icon))
def test_multiple_fft_insert(self):
"""bibupload - multiple FFT insert"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/head.gif</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/%(CFG_SITE_RECORD)s/95/files/9809057.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(prefix)s/var/tmp/demobibdata.xml</subfield>
</datafield>
</record>
""" % {
'prefix': CFG_PREFIX,
'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/9809057.pdf</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/demobibdata.xml</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/9809057.pdf
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/demobibdata.xml
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_urls = []
for files in ('site_logo.gif', 'head.gif', '9809057.pdf', 'demobibdata.xml'):
testrec_expected_urls.append('%(siteurl)s/%(CFG_SITE_RECORD)s/%(recid)s/files/%(files)s' % {'siteurl' : CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD, 'files' : files, 'recid' : recid})
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
for url in testrec_expected_urls:
self.failUnless(try_url_download(url))
self._test_bibdoc_status(recid, 'head', '')
self._test_bibdoc_status(recid, '9809057', '')
self._test_bibdoc_status(recid, 'site_logo', '')
self._test_bibdoc_status(recid, 'demobibdata', '')
def test_simple_fft_correct(self):
"""bibupload - simple FFT correct"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/sb.gif</subfield>
<subfield code="n">site_logo</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'site_logo', '')
def test_fft_correct_already_exists(self):
"""bibupload - FFT correct with already identical existing file"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">a description</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/help.png</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="d">another description</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/rss.png</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/line.gif</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/merge.png</subfield>
<subfield code="n">line</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">a second description</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/help.png</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="d">another second description</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/refresh.png</subfield>
<subfield code="n">rss</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/line.gif</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/merge-small.png</subfield>
<subfield code="n">line</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.png</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/rss.png</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
<subfield code="y">a second description</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png</subfield>
<subfield code="y">another second description</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.gif
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.png
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/rss.png
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif$$ya second description
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png$$yanother second description
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url2 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/rss.png" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url3 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url4 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.png" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url5 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/line.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
testrec_expected_url2 = testrec_expected_url2.replace('123456789',
str(recid))
testrec_expected_url3 = testrec_expected_url3.replace('123456789',
str(recid))
testrec_expected_url4 = testrec_expected_url4.replace('123456789',
str(recid))
testrec_expected_url5 = testrec_expected_url5.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload(recs[0], opt_mode='correct')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.failUnless(try_url_download(testrec_expected_url2))
self.failUnless(try_url_download(testrec_expected_url3))
self.failUnless(try_url_download(testrec_expected_url4))
self.failUnless(try_url_download(testrec_expected_url5))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
bibrecdocs = BibRecDocs(recid)
        self.assertEqual(bibrecdocs.get_bibdoc('rss').list_versions(), [1, 2])
        self.assertEqual(bibrecdocs.get_bibdoc('site_logo').list_versions(), [1])
        self.assertEqual(bibrecdocs.get_bibdoc('line').list_versions(), [1, 2])
def test_fft_correct_modify_doctype(self):
"""bibupload - FFT correct with different doctype"""
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">a description</subfield>
<subfield code="t">TEST1</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="n">site_logo</subfield>
<subfield code="t">TEST2</subfield>
</datafield>
</record>
"""
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
<subfield code="y">a description</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='insert')
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
bibrecdocs = BibRecDocs(recid)
        self.assertEqual(bibrecdocs.get_bibdoc('site_logo').doctype, 'TEST1')
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload(recs[0], opt_mode='correct')
# compare expected results:
inserted_xm = print_record(recid, 'xm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
bibrecdocs = BibRecDocs(recid)
        self.assertEqual(bibrecdocs.get_bibdoc('site_logo').doctype, 'TEST2')
def test_fft_append_already_exists(self):
"""bibupload - FFT append with already identical existing file"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">a description</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_append = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">a second description</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/help.png</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="d">another second description</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
<subfield code="y">a description</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png</subfield>
<subfield code="y">another second description</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif$$ya description
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png$$yanother second description
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url2 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.png" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
_, recid, _ = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_append = test_to_append.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_append)
err, recid, msg = bibupload.bibupload(recs[0], opt_mode='append')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.failUnless(try_url_download(testrec_expected_url2))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
def test_fft_implicit_fix_marc(self):
"""bibupload - FFT implicit FIX-MARC"""
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">foo@bar.com</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="f">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">foo@bar.com</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">foo@bar.com</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8560_ $$ffoo@bar.com
8564_ $$u%(siteurl)s/img/site_logo.gif
""" % {
'siteurl': CFG_SITE_URL
}
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
test_to_correct = test_to_correct.replace('123456789',
str(recid))
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
# correct test record with implicit FIX-MARC:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
def test_fft_vs_bibedit(self):
"""bibupload - FFT Vs. BibEdit compatibility"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_replace = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">http://www.google.com/</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="z">BibEdit Comment</subfield>
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
<subfield code="y">BibEdit Description</subfield>
<subfield code="x">01</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">http://cern.ch/</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_xm = str(test_to_replace)
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$uhttp://www.google.com/
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif$$x01$$yBibEdit Description$$zBibEdit Comment
8564_ $$uhttp://cern.ch/
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_replace = test_to_replace.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_replace)
bibupload.bibupload_records(recs, opt_mode='replace')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'site_logo', '')
bibrecdocs = BibRecDocs(recid)
bibdoc = bibrecdocs.get_bibdoc('site_logo')
self.assertEqual(bibdoc.get_description('.gif'), 'BibEdit Description')
def test_detailed_fft_correct(self):
"""bibupload - detailed FFT correct
"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">Try</subfield>
<subfield code="z">Comment</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/head.gif</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="m">patata</subfield>
<subfield code="d">Next Try</subfield>
<subfield code="z">KEEP-OLD-VALUE</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif</subfield>
<subfield code="y">Next Try</subfield>
<subfield code="z">Comment</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif$$yNext Try$$zComment
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
        self.assertEqual(compare_xmbuffers(inserted_xm,
            testrec_expected_xm), '', "buffers not equal: %s and %s" % (inserted_xm, testrec_expected_xm))
        self.assertEqual(compare_hmbuffers(inserted_hm,
            testrec_expected_hm), '', "buffers not equal: %s and %s" % (inserted_hm, testrec_expected_hm))
self._test_bibdoc_status(recid, 'patata', '')
def test_no_url_fft_correct(self):
"""bibupload - no_url FFT correct"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">Try</subfield>
<subfield code="z">Comment</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="n">site_logo</subfield>
<subfield code="m">patata</subfield>
<subfield code="f">.gif</subfield>
<subfield code="d">KEEP-OLD-VALUE</subfield>
<subfield code="z">Next Comment</subfield>
</datafield>
</record>
"""
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif</subfield>
<subfield code="y">Try</subfield>
<subfield code="z">Next Comment</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif$$yTry$$zNext Comment
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'patata', '')
def test_new_icon_fft_append(self):
"""bibupload - new icon FFT append"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
</record>
"""
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="n">site_logo</subfield>
<subfield code="x">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon</subfield>
<subfield code="x">icon</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon$$xicon
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif?subformat=icon" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='append')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'site_logo', '')
def test_multiple_fft_correct(self):
"""bibupload - multiple FFT correct"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="d">Try</subfield>
<subfield code="z">Comment</subfield>
<subfield code="r">Restricted</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/okay.gif</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="f">.jpeg</subfield>
<subfield code="d">Try jpeg</subfield>
<subfield code="z">Comment jpeg</subfield>
<subfield code="r">Restricted</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/loading.gif</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="m">patata</subfield>
<subfield code="f">.gif</subfield>
<subfield code="r">New restricted</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/patata.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless("This file is restricted." in urlopen(testrec_expected_url).read())
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'patata', 'New restricted')
def test_purge_fft_correct(self):
"""bibupload - purge FFT correct"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/head.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_purge = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="t">PURGE</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
test_to_purge = test_to_purge.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')[0]
self.check_record_consistency(recid)
# purge test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_purge)
bibupload.bibupload_records(recs, opt_mode='correct')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'site_logo', '')
self._test_bibdoc_status(recid, 'head', '')
def test_revert_fft_correct(self):
"""bibupload - revert FFT correct"""
# define the test case:
from invenio.access_control_config import CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS
email_tag = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][0:3]
email_ind1 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][3]
email_ind2 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][4]
email_code = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][5]
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/iconpen.gif</subfield>
<subfield code="n">site_logo</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%s/img/head.gif</subfield>
<subfield code="n">site_logo</subfield>
</datafield>
</record>
""" % CFG_SITE_URL
test_to_revert = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="n">site_logo</subfield>
<subfield code="t">REVERT</subfield>
<subfield code="v">1</subfield>
</datafield>
</record>
"""
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
%(email_tag)s%(email_ind1)s%(email_ind2)s $$%(email_code)sjekyll@cds.cern.ch
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
""" % {'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == ' ' and '_' or email_ind1,
'email_ind2': email_ind2 == ' ' and '_' or email_ind2,
'email_code': email_code}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
test_to_revert = test_to_revert.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
bibupload.bibupload_records(recs, opt_mode='correct')
self.check_record_consistency(recid)
# revert test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_revert)
bibupload.bibupload_records(recs, opt_mode='correct')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self._test_bibdoc_status(recid, 'site_logo', '')
expected_content_version1 = urlopen('%s/img/iconpen.gif' % CFG_SITE_URL).read()
expected_content_version2 = urlopen('%s/img/head.gif' % CFG_SITE_URL).read()
expected_content_version3 = expected_content_version1
self.assertEqual(test_web_page_content('%s/%s/%s/files/site_logo.gif?version=1' % (CFG_SITE_URL, CFG_SITE_RECORD, recid), 'jekyll', 'j123ekyll', expected_content_version1), [])
self.assertEqual(test_web_page_content('%s/%s/%s/files/site_logo.gif?version=2' % (CFG_SITE_URL, CFG_SITE_RECORD, recid), 'jekyll', 'j123ekyll', expected_content_version2), [])
self.assertEqual(test_web_page_content('%s/%s/%s/files/site_logo.gif?version=3' % (CFG_SITE_URL, CFG_SITE_RECORD, recid), 'jekyll', 'j123ekyll', expected_content_version3), [])
def test_simple_fft_replace(self):
"""bibupload - simple FFT replace"""
# define the test case:
from invenio.access_control_config import CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS
email_tag = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][0:3]
email_ind1 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][3]
email_ind2 = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][4]
email_code = CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS[0][5]
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/iconpen.gif</subfield>
<subfield code="n">site_logo</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
test_to_replace = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/head.gif</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="%(email_tag)s" ind1="%(email_ind1)s" ind2="%(email_ind2)s">
<subfield code="%(email_code)s">jekyll@cds.cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == '_' and ' ' or email_ind1,
'email_ind2': email_ind2 == '_' and ' ' or email_ind2,
'email_code': email_code}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
%(email_tag)s%(email_ind1)s%(email_ind2)s $$%(email_code)sjekyll@cds.cern.ch
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif
""" % {
'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
'email_tag': email_tag,
'email_ind1': email_ind1 == ' ' and '_' or email_ind1,
'email_ind2': email_ind2 == ' ' and '_' or email_ind2,
'email_code': email_code}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/head.gif" % { 'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_replace = test_to_replace.replace('123456789',
str(recid))
# replace test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_replace)
bibupload.bibupload_records(recs, opt_mode='replace')
self.check_record_consistency(recid)
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.failUnless(try_url_download(testrec_expected_url))
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
expected_content_version = urlopen('%s/img/head.gif' % CFG_SITE_URL).read()
self.assertEqual(test_web_page_content(testrec_expected_url, 'hyde', 'h123yde', expected_text='Authorization failure'), [])
self.assertEqual(test_web_page_content(testrec_expected_url, 'jekyll', 'j123ekyll', expected_text=expected_content_version), [])
def test_simple_fft_insert_with_modification_time(self):
"""bibupload - simple FFT insert with modification time"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="s">2006-05-04 03:02:01</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_xm = """
<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_hm = """
001__ 123456789
003__ SzGeCERN
100__ $$aTest, John$$uTest University
8564_ $$u%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif
980__ $$aARTICLE
""" % {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/site_logo.gif" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
testrec_expected_url2 = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload_records(recs, opt_mode='insert')[0]
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_xm = testrec_expected_xm.replace('123456789',
str(recid))
testrec_expected_hm = testrec_expected_hm.replace('123456789',
str(recid))
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
testrec_expected_url2 = testrec_expected_url2.replace('123456789',
str(recid))
# compare expected results:
inserted_xm = print_record(recid, 'xm')
inserted_hm = print_record(recid, 'hm')
self.assertEqual(compare_xmbuffers(inserted_xm,
testrec_expected_xm), '')
self.assertEqual(compare_hmbuffers(inserted_hm,
testrec_expected_hm), '')
self.failUnless(try_url_download(testrec_expected_url))
force_webcoll(recid)
self.assertEqual(test_web_page_content(testrec_expected_url2, expected_text='<em>04 May 2006, 03:02</em>'), [])
def test_multiple_fft_insert_with_modification_time(self):
"""bibupload - multiple FFT insert with modification time"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="s">2006-05-04 03:02:01</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/head.gif</subfield>
<subfield code="s">2007-05-04 03:02:01</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/%(CFG_SITE_RECORD)s/95/files/9809057.pdf</subfield>
<subfield code="s">2008-05-04 03:02:01</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(prefix)s/var/tmp/demobibdata.xml</subfield>
<subfield code="s">2009-05-04 03:02:01</subfield>
</datafield>
</record>
""" % {
'prefix': CFG_PREFIX,
'siteurl': CFG_SITE_URL,
'CFG_SITE_RECORD': CFG_SITE_RECORD,
}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
force_webcoll(recid)
self.assertEqual(test_web_page_content(testrec_expected_url, expected_text=['<em>04 May 2006, 03:02</em>', '<em>04 May 2007, 03:02</em>', '<em>04 May 2008, 03:02</em>', '<em>04 May 2009, 03:02</em>']), [])
def test_simple_fft_correct_with_modification_time(self):
"""bibupload - simple FFT correct with modification time"""
# define the test case:
test_to_upload = """
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Test, John</subfield>
<subfield code="u">Test University</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/site_logo.gif</subfield>
<subfield code="s">2007-05-04 03:02:01</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
test_to_correct = """
<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">%(siteurl)s/img/sb.gif</subfield>
<subfield code="n">site_logo</subfield>
<subfield code="s">2008-05-04 03:02:01</subfield>
</datafield>
</record>
""" % {
'siteurl': CFG_SITE_URL
}
testrec_expected_url = "%(siteurl)s/%(CFG_SITE_RECORD)s/123456789/files/" \
% {'siteurl': CFG_SITE_URL, 'CFG_SITE_RECORD': CFG_SITE_RECORD}
# insert test record:
recs = bibupload.xml_marc_to_records(test_to_upload)
dummy, recid, dummy = bibupload.bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(recid)
# replace test buffers with real recid of inserted test record:
testrec_expected_url = testrec_expected_url.replace('123456789',
str(recid))
test_to_correct = test_to_correct.replace('123456789',
str(recid))
# correct test record with new FFT:
recs = bibupload.xml_marc_to_records(test_to_correct)
err, recid, msg = bibupload.bibupload(recs[0], opt_mode='correct')
self.check_record_consistency(recid)
force_webcoll(recid)
self.assertEqual(test_web_page_content(testrec_expected_url, expected_text=['<em>04 May 2008, 03:02</em>']), [])
TEST_SUITE = make_test_suite(BibUploadNoUselessHistoryTest,
BibUploadHoldingPenTest,
BibUploadInsertModeTest,
BibUploadAppendModeTest,
BibUploadCorrectModeTest,
BibUploadDeleteModeTest,
BibUploadReplaceModeTest,
BibUploadReferencesModeTest,
BibUploadRecordsWithSYSNOTest,
BibUploadRecordsWithEXTOAIIDTest,
BibUploadRecordsWithOAIIDTest,
BibUploadIndicatorsTest,
BibUploadUpperLowerCaseTest,
BibUploadControlledProvenanceTest,
BibUploadStrongTagsTest,
BibUploadFFTModeTest,
BibUploadPretendTest,
BibUploadCallbackURLTest,
BibUploadMoreInfoTest,
BibUploadBibRelationsTest,
BibUploadRecordsWithDOITest,
BibUploadTypicalBibEditSessionTest,
BibUploadRealCaseRemovalDOIViaBibEdit,
)
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/modules/bibupload/lib/bibupload_revisionverifier.py b/modules/bibupload/lib/bibupload_revisionverifier.py
index e09d08d64..325bef863 100644
--- a/modules/bibupload/lib/bibupload_revisionverifier.py
+++ b/modules/bibupload/lib/bibupload_revisionverifier.py
@@ -1,457 +1,456 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
RevisionVerifier : Compares the Revision of the Record to be uploaded
with the archived Revision and the latest Revision (if any) and
generates a record patch for the modified fields alone. This avoids
replacing the whole record where changes are minimal.
"""
__revision__ = "$Id$"
import zlib
import copy
from invenio.bibrecord import record_get_field_value, \
record_get_field_instances, \
record_add_field, \
record_delete_field, \
create_record
from invenio.bibupload_config import CFG_BIBUPLOAD_CONTROLFIELD_TAGS, \
CFG_BIBUPLOAD_DELETE_CODE, \
CFG_BIBUPLOAD_DELETE_VALUE
from invenio.bibedit_dblayer import get_marcxml_of_record_revision, \
get_record_revisions
class RevisionVerifier:
"""
Class RevisionVerifier contains methods for Revision comparison
for the given record.
"""
def __init__(self):
self.rec_id = ''
def group_tag_values_by_indicator(self, tag_value_list):
"""
Groups the field instances of a tag based on indicator pairs.
Returns a dictionary of the form {(ind1, ind2): [subfield_tuple1, ..., subfield_tupleN]}
"""
curr_tag_indicator = {}
for data_tuple in tag_value_list:
ind1 = data_tuple[1]
ind2 = data_tuple[2]
if (ind1, ind2) not in curr_tag_indicator:
curr_tag_indicator[(ind1, ind2)] = [data_tuple]
else:
curr_tag_indicator[(ind1, ind2)].append(data_tuple)
return curr_tag_indicator
def compare_tags_by_ind(self, rec1_tag_val, rec2_tag_val):
"""
Groups the fields (of the given tag) based on the indicator pairs.
Returns a tuple of dictionaries, each denoting common/record-specific indicator pairs.
"""
# temporary dictionary to hold fields from record2
tmp = copy.deepcopy(rec2_tag_val)
common_ind_pair = {}
ind_pair_in_rec1_tag = {}
ind_pair_in_rec2_tag = {}
for ind_pair in rec1_tag_val:
# if indicator pair is common
if ind_pair in tmp:
# copying values from record1 tag as this could help
# at next stage in case of any subfield level modifications
# this could be directly used.
common_ind_pair[ind_pair] = rec1_tag_val[ind_pair]
del tmp[ind_pair]
else:
# indicator pair is present only in current tag field
ind_pair_in_rec1_tag[ind_pair] = rec1_tag_val[ind_pair]
for ind_pair in rec2_tag_val:
# indicator pair present only in record2 tag field
if ind_pair in tmp:
ind_pair_in_rec2_tag[ind_pair] = rec2_tag_val[ind_pair]
return (common_ind_pair, ind_pair_in_rec1_tag, ind_pair_in_rec2_tag)
def find_modified_tags(self, common_tags, record1, record2):
"""
For each tag common to Record1 and Record2, checks for modifications
at field-level, indicator-level and subfield-level.
Returns a dictionary of tags and corresponding fields from Record1
that have been found to be modified.
"""
result = {}
for tag in common_tags:
# retrieve tag instances of record1 and record2
rec1_tag_val = record_get_field_instances(record1, tag, '%', '%')
rec2_tag_val = record_get_field_instances(record2, tag, '%', '%')
if rec1_tag_val:
rec1_ind = self.group_tag_values_by_indicator(rec1_tag_val)
if rec2_tag_val:
rec2_ind = self.group_tag_values_by_indicator(rec2_tag_val)
# NOTE: At this point rec1_ind and rec2_ind will be dictionaries
# Key ==> (ind1, ind2) tuple
# Val ==> list of data_tuple => [dt1,dt2]
# dt(n) => ([sfl],ind1,ind2,ctrlfield,fn)
# Generating 3 different dictionaries
# common/added/deleted ind pairs in record1 based on record2
(com_ind, add_ind, del_ind) = self.compare_tags_by_ind(rec1_ind, rec2_ind)
if add_ind:
for ind_pair in add_ind:
for data_tuple in add_ind[ind_pair]:
subfield_list = data_tuple[0]
record_add_field(result, tag, ind_pair[0], ind_pair[1], '', subfields=subfield_list)
# Indicators that are deleted from record1 w.r.t record2 will be added with special code
if del_ind:
for ind_pair in del_ind:
record_add_field(result, tag, ind_pair[0], ind_pair[1], '', [(CFG_BIBUPLOAD_DELETE_CODE, CFG_BIBUPLOAD_DELETE_VALUE)])
# Common modified fields. Identifying changes at subfield level
if com_ind:
for ind_pair in com_ind:
# NOTE: sf_rec1 and sf_rec2 are list of list of subfields
# A simple list comparison is sufficient in this scenario
# Any change in the order of fields or changes in subfields
# will cause the entire list of data_tuple for that ind_pair
# to be copied from record1(upload) to result.
if tag in CFG_BIBUPLOAD_CONTROLFIELD_TAGS:
cf_rec1 = [data_tuple[3] for data_tuple in rec1_ind[ind_pair]]
cf_rec2 = [data_tuple[3] for data_tuple in rec2_ind[ind_pair]]
if cf_rec1 != cf_rec2:
for data_tuple in com_ind[ind_pair]:
record_add_field(result, tag, controlfield_value=data_tuple[3])
else:
sf_rec1 = [data_tuple[0] for data_tuple in rec1_ind[ind_pair]]
sf_rec2 = [data_tuple[0] for data_tuple in rec2_ind[ind_pair]]
if sf_rec1 != sf_rec2:
# change at subfield level / re-ordered fields
for data_tuple in com_ind[ind_pair]:
# com_ind will have data_tuples of record1(upload) and not record2
subfield_list = data_tuple[0]
record_add_field(result, tag, ind_pair[0], ind_pair[1], '', subfields=subfield_list)
return result
def compare_records(self, record1, record2, opt_mode=None):
"""
Compares two records to identify added/modified/deleted tags.
The records are any of the upload record, the existing record, or
the archived record.
Returns a tuple of dictionaries (for modified/added/deleted tags).
"""
def remove_control_tag(tag_list):
"""
Returns the list of keys without any control tags
"""
cleaned_list = [item for item in tag_list
if item not in CFG_BIBUPLOAD_CONTROLFIELD_TAGS]
return cleaned_list
def group_record_tags():
"""
Groups all the tags in a Record as Common/Added/Deleted tags.
Returns a Tuple of 3 lists for each category mentioned above.
"""
rec1_keys = record1.keys()
rec2_keys = record2.keys()
com_tag_lst = [key for key in rec1_keys if key in rec2_keys]
# tags in record2 not present in record1
del_tag_lst = [key for key in rec2_keys if key not in rec1_keys]
# additional tags in record1
add_tag_lst = [key for key in rec1_keys if key not in rec2_keys]
return (com_tag_lst, add_tag_lst, del_tag_lst)
# declaring dictionaries to hold the identified patch
mod_patch = {}
add_patch = {}
del_patch = {}
result = {}
(common_tags, added_tags, deleted_tags) = group_record_tags()
-
if common_tags:
mod_patch = self.find_modified_tags(common_tags, record1, record2)
if added_tags:
for tag in added_tags:
add_patch[tag] = record1[tag]
# if the record comes via correct mode, it should already have fields
# marked with the delete code; if not, the deleted tag list is
# patched here for replace/delete modes
if deleted_tags and \
(opt_mode == 'replace' or opt_mode == 'delete'):
for tag in deleted_tags:
del_patch[tag] = record2[tag]
# returning back a result dictionary with all available patches
if mod_patch:
result['MOD'] = mod_patch
if add_patch:
result['ADD'] = add_patch
if del_patch:
# for a tag that has been deleted in the upload record in replace
# mode, loop through all the fields of the tag and add an additional
# subfield with code '0' and value '__DELETE_FIELDS__'
# NOTE: Indicators are taken into consideration while deleting fields
for tag in del_patch:
for data_tuple in del_patch[tag]:
ind1 = data_tuple[1]
ind2 = data_tuple[2]
record_delete_field(del_patch, tag, ind1, ind2)
record_add_field(del_patch, tag, ind1, ind2, "", [(CFG_BIBUPLOAD_DELETE_CODE, CFG_BIBUPLOAD_DELETE_VALUE)])
result['DEL'] = del_patch
return result
def detect_conflict(self, up_patch, up_date, orig_patch, orig_date):
"""
Compares the generated patches for Upload and Original Records for any common tags.
Raises Conflict Error in case of any common tags.
Returns the upload record patch in case of no conflicts.
"""
conflict_tags = []
# if tag is modified in upload rec but modified/deleted in current rec
if 'MOD' in up_patch:
for tag in up_patch['MOD']:
if 'MOD' in orig_patch and tag in orig_patch['MOD'] \
or 'DEL' in orig_patch and tag in orig_patch['DEL']:
conflict_tags.append(tag)
# if tag is added in upload rec but added in current revision
if 'ADD' in up_patch:
for tag in up_patch['ADD']:
if 'ADD' in orig_patch and tag in orig_patch['ADD']:
conflict_tags.append(tag)
# if tag is deleted in upload rec but modified/deleted in current rec
if 'DEL' in up_patch:
for tag in up_patch['DEL']:
if 'MOD' in orig_patch and tag in orig_patch['MOD'] \
or 'DEL' in orig_patch and tag in orig_patch['DEL']:
conflict_tags.append(tag)
if conflict_tags:
raise InvenioBibUploadConflictingRevisionsError(self.rec_id, \
conflict_tags, \
up_date, \
orig_date)
return up_patch
def generate_final_patch(self, patch_dict, recid):
"""
Generates the final patch by merging the modified, added and deleted patches.
Returns the merged patch containing the modified, added and deleted fields.
"""
def _add_to_record(record, patch):
for tag in patch:
for data_tuple in patch[tag]:
record_add_field(record, tag, data_tuple[1], data_tuple[2], '', subfields=data_tuple[0])
return record
final_patch = {}
#tag_list = []
# merge processed and added fields into one patch
if 'MOD' in patch_dict:
# tag_list = tag_list + patch_dict['MOD'].items()
final_patch = _add_to_record(final_patch, patch_dict['MOD'])
if 'ADD' in patch_dict:
#tag_list = tag_list + patch_dict['ADD'].items()
final_patch = _add_to_record(final_patch, patch_dict['ADD'])
if 'DEL' in patch_dict:
#tag_list = tag_list + patch_dict['DEL'].items()
final_patch = _add_to_record(final_patch, patch_dict['DEL'])
record_add_field(final_patch, '001', ' ', ' ', recid)
return final_patch
def retrieve_affected_tags_with_ind(self, patch):
"""
Generates a dictionary of all the tags added/modified/removed from
record1 w.r.t record2 (record1 is upload record and record2 the existing one)
Returns dictionary containing tag and corresponding ind pairs
"""
affected_tags = {}
# ==> Key will be either MOD/ADD/DEL and values will hold another dictionary
# containing tags and corresponding fields
for key in patch:
item = patch[key]
for tag in item:
#each tag will have LIST of TUPLES (data)
affected_tags[tag] = [(data_tuple[1], data_tuple[2]) for data_tuple in item[tag]]
return affected_tags
def verify_revision(self, verify_record, original_record, opt_mode=None):
"""
Compares the upload record with the archived record having the same 005 tag.
Once the changes are identified, the latest revision of the record is fetched
from the system and the identified changes are applied on top of it.
Returns the record patch in case of non-conflicting additions/modifications/deletions.
Conflicting records raise an error and stop the bibupload process.
"""
upload_rev = ''
original_rev = ''
r_date = ''
record_patch = {}
# No need for revision check for other operations
if opt_mode not in ['replace', 'correct']:
return
if '001' in verify_record:
self.rec_id = record_get_field_value(verify_record, '001')
# Retrieving Revision tags for comparison
if '005' in verify_record:
upload_rev = record_get_field_value(verify_record, '005')
r_date = upload_rev.split('.')[0]
if r_date not in [k[1] for k in get_record_revisions(self.rec_id)]:
raise InvenioBibUploadInvalidRevisionError(self.rec_id, r_date)
else:
raise InvenioBibUploadMissing005Error(self.rec_id)
if '005' in original_record:
original_rev = record_get_field_value(original_record, '005')
else:
raise InvenioBibUploadMissing005Error(self.rec_id)
# Retrieving the archived version
marc_xml = get_marcxml_of_record_revision(self.rec_id, r_date)
res = create_record(zlib.decompress(marc_xml[0][0]))
archived_record = res[0]
# Comparing Upload and Archive record
curr_patch = self.compare_records(verify_record, archived_record, opt_mode)
# No changes in Upload Record compared to Archived Revision
# Raising Error to skip the bibupload for the record
if not curr_patch:
raise InvenioBibUploadUnchangedRecordError(self.rec_id, upload_rev)
if original_rev == upload_rev:
# Upload, Archive and Original Records have same Revisions.
affected_tags = self.retrieve_affected_tags_with_ind(curr_patch)
return ('correct', self.generate_final_patch(curr_patch, self.rec_id), affected_tags)
# Comparing Original and Archive record
orig_patch = self.compare_records(original_record, archived_record, opt_mode)
# Checking for conflicts
# If no original patch - Original Record same as Archived Record
if orig_patch:
curr_patch = self.detect_conflict(curr_patch, upload_rev, \
orig_patch, original_rev)
record_patch = self.generate_final_patch(curr_patch, self.rec_id)
affected_tags = self.retrieve_affected_tags_with_ind(curr_patch)
# Returning patch in case of no conflicting fields
return ('correct', record_patch, affected_tags)
class InvenioBibUploadUnchangedRecordError(Exception):
"""
Exception for unchanged upload records.
"""
def __init__(self, recid, current_rev):
self.cur_rev = current_rev
self.recid = recid
def __str__(self):
msg = 'UNCHANGED RECORD : Upload Record %s same as Rev-%s'
return repr(msg%(self.recid, self.cur_rev))
class InvenioBibUploadConflictingRevisionsError(Exception):
"""
Exception for conflicting records.
"""
def __init__(self, recid, tag_list, upload_rev, current_rev):
self.up_rev = upload_rev
self.cur_rev = current_rev
self.tags = tag_list
self.recid = recid
def __str__(self):
msg = 'CONFLICT : In Record %s between Rev-%s and Rev-%s for Tags : %s'
return repr(msg%(self.recid, self.up_rev, self.cur_rev, str(self.tags)))
class InvenioBibUploadInvalidRevisionError(Exception):
"""
Exception for incorrect revision of the upload records.
"""
def __init__(self, recid, upload_rev):
self.upload_rev = upload_rev
self.recid = recid
def __str__(self):
msg = 'INVALID REVISION : %s for Record %s not in Archive.'
return repr(msg%(self.upload_rev, self.recid))
class InvenioBibUploadMissing005Error(Exception):
"""
Exception for missing Revision field in Upload/Original records.
"""
def __init__(self, recid):
self.recid = recid
diff --git a/modules/bibupload/lib/bibupload_revisionverifier_regression_tests.py b/modules/bibupload/lib/bibupload_revisionverifier_regression_tests.py
index 9edecfc59..4fa2a0678 100644
--- a/modules/bibupload/lib/bibupload_revisionverifier_regression_tests.py
+++ b/modules/bibupload/lib/bibupload_revisionverifier_regression_tests.py
@@ -1,1078 +1,1118 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Contains test cases for the Revision Verifier module used along with BibUpload.
"""
from invenio.importutils import lazy_import
from invenio.testutils import make_test_suite, run_test_suite, nottest
get_record = lazy_import('invenio.search_engine:get_record')
print_record = lazy_import('invenio.search_engine:print_record')
bibupload = lazy_import('invenio.bibupload:bibupload')
xml_marc_to_records = lazy_import('invenio.bibupload:xml_marc_to_records')
record_get_field_value = lazy_import('invenio.bibrecord:record_get_field_value')
record_xml_output = lazy_import('invenio.bibrecord:record_xml_output')
from invenio.bibupload_revisionverifier \
import RevisionVerifier, \
InvenioBibUploadConflictingRevisionsError, \
InvenioBibUploadMissing005Error, \
InvenioBibUploadUnchangedRecordError, \
InvenioBibUploadInvalidRevisionError
from invenio.bibupload_regression_tests import GenericBibUploadTest, \
compare_xmbuffers
+from invenio.dbquery import run_sql
+
+
@nottest
def init_test_records():
"""
Initializes test records for revision verifying scenarios.
Inserts the 1st version and then appends a new field every second
to create the 2nd and 3rd versions of the record.
Returns a dict of the following format:
{'id':recid,
'rev1':(rev1_rec, rev1_005),
'rev2':(rev2_rec, rev2_005tag),
'rev3':(rev3_rec, rev3_005tag)}
"""
# Rev 1 -- tag 100
rev1 = """ <record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
</record>"""
# Append 970 to Rev1
rev1_append = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
# Rev 2 -- Rev 1 + tag 970
rev2 = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
# Append 888 to Rev2
rev2_append = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
# Rev 3 -- Rev 2 + tag 888
rev3 = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
init_details = {}
insert_record = rev1.replace(
'<controlfield tag="001">123456789</controlfield>', '')
insert_record = insert_record.replace(
'<controlfield tag="005">20110101000000.0</controlfield>', '')
recs = xml_marc_to_records(insert_record)
# --> Revision 1 submitted
res = bibupload(recs[0], opt_mode='insert')
recid = res[1]
init_details['id'] = (str(recid), )
rec = get_record(recid)
rev_tag = record_get_field_value(rec, '005', '', '')
# update the test data
rev1 = rev1.replace('123456789', str(recid))
rev1 = rev1.replace('20110101000000.0', rev_tag)
rev1_append = rev1_append.replace('123456789', str(recid))
rev2 = rev2.replace('123456789', str(recid))
rev2 = rev2.replace('20110101000000.0', rev_tag)
rev2_append = rev2_append.replace('123456789', str(recid))
rev3 = rev3.replace('123456789', str(recid))
init_details['rev1'] = (rev1, rev_tag)
old_rev_tag = rev_tag
# --> Revision 2 submitted
recs = xml_marc_to_records(rev1_append)
res = bibupload(recs[0], opt_mode='append')
rec = get_record(recid)
rev_tag = record_get_field_value(rec, '005')
rev2 = rev2.replace(old_rev_tag, rev_tag)
rev3 = rev3.replace('20110101000000.0', rev_tag)
init_details['rev2'] = (rev2, rev_tag)
old_rev_tag = rev_tag
# --> Revision 3 submitted
recs = xml_marc_to_records(rev2_append)
res = bibupload(recs[0], opt_mode='append')
rec = get_record(recid)
rev_tag = record_get_field_value(rec, '005')
rev3 = rev3.replace(old_rev_tag, rev_tag)
init_details['rev3'] = (rev3, rev_tag)
return init_details
class RevisionVerifierForCorrectAddition(GenericBibUploadTest):
"""
Test Cases for Patch generation when fields added in Upload Record.
Scenarios:
* Field added in Upload Record and not added in Original Record
* Another instance of existing Field added in Upload Record and
not added in Original Record
"""
def setUp(self):
""" Sets Up sample Records for Adding Field Scenario."""
GenericBibUploadTest.setUp(self)
self.data = init_test_records()
# Rev 2 Update -- Rev2 + tag 300
self.rev2_add_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">100P</subfield>
</datafield>
</record>"""
#Rev 2 Update -- Rev2 + tag 100*
self.rev2_add_sim_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="100" ind1="C" ind2="0">
<subfield code="a">Devel, D</subfield>
<subfield code="u">FUZZY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
# Record Patch -- Output for a New Field
self.patch = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">100P</subfield>
</datafield>
</record>"""
# Record Patch -- Output for a New Identical Field
self.patch_identical_field = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1="C" ind2="0">
<subfield code="a">Devel, D</subfield>
<subfield code="u">FUZZY</subfield>
</datafield>
</record>"""
self.rev2_add_field = self.rev2_add_field.replace(
'123456789', self.data['id'][0])
self.rev2_add_field = self.rev2_add_field.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.rev2_add_sim_field = self.rev2_add_sim_field.replace(
'123456789', self.data['id'][0])
self.rev2_add_sim_field = self.rev2_add_sim_field.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.patch = self.patch.replace('123456789', self.data['id'][0])
self.patch_identical_field = self.patch_identical_field.replace(
'123456789', \
self.data['id'][0])
def test_add_new_field(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Added 300 to Rev2(100/970), Patch Generated for 300"""
upload_recs = xml_marc_to_records(self.rev2_add_field)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(upload_recs[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
self.assertEqual(compare_xmbuffers(record_xml_output(patch), self.patch), '')
def test_add_identical_field(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Added 100 to Rev2(100/970), Patch Generated for 100"""
upload_identical_rec = xml_marc_to_records(self.rev2_add_sim_field)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(upload_identical_rec[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
self.assertEqual(compare_xmbuffers(record_xml_output(patch), self.patch_identical_field), '')
class RevisionVerifierForConflictingAddition(GenericBibUploadTest):
"""
Test Cases for Conflicts when fields added in Upload Record.
Scenarios:
* Field added in Upload Record but also added in Original Record
* Field added in Upload Record but similar field modified in Original
"""
def setUp(self):
""" Sets Up sample Records for Adding Field Scenario."""
GenericBibUploadTest.setUp(self)
self.data = init_test_records()
# Rev 2 Update -- Rev2 + tag 888
self.rev2_add_conf_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
#Rev 2 Update -- Rev2 + tag 100*
self.rev2_add_sim_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="100" ind1="C" ind2="0">
<subfield code="a">Devel, D</subfield>
<subfield code="u">FUZZY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
</record>"""
#Rev 3 -- Rev2 + tag 100* +tag 888
self.rev3_add_sim_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="100" ind1="C" ind2="1">
<subfield code="a">Devel, D</subfield>
<subfield code="z">FUZZY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
# Rev 3 -- tag 100 updated from Rev 2 + Tag 888
self.rev3_mod = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="z">DEVEL, U</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
self.rev2_add_conf_field = self.rev2_add_conf_field.replace(
'123456789', self.data['id'][0])
self.rev2_add_conf_field = self.rev2_add_conf_field.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.rev2_add_sim_field = self.rev2_add_sim_field.replace(
'123456789', self.data['id'][0])
self.rev2_add_sim_field = self.rev2_add_sim_field.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.rev3_mod = self.rev3_mod.replace('123456789', self.data['id'][0])
self.rev3_mod = self.rev3_mod.replace('20110101000000.0', \
self.data['rev3'][1])
self.rev3_add_sim_field = self.rev3_add_sim_field.replace(
'123456789', \
self.data['id'][0])
self.rev3_add_sim_field = self.rev3_add_sim_field.replace(
'20110101000000.0', \
self.data['rev3'][1])
def test_add_conflict_field(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Added 888 to Rev2(100/970), Conflict Expected"""
upload_conf_rec = xml_marc_to_records(self.rev2_add_conf_field)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
self.assertRaises(InvenioBibUploadConflictingRevisionsError, \
rev_verifier.verify_revision, \
upload_conf_rec[0], \
orig_recs[0], \
'replace')
def test_conflicting_similarfield(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Added 100 to Rev2(100/970), 100 added to Rev3, Conflict Expected"""
upload_identical_rec = xml_marc_to_records(self.rev2_add_sim_field)
orig_recs = xml_marc_to_records(self.rev3_add_sim_field)
rev_verifier = RevisionVerifier()
self.assertRaises(InvenioBibUploadConflictingRevisionsError, \
rev_verifier.verify_revision, \
upload_identical_rec[0], \
orig_recs[0], \
'replace')
def test_conflicting_modfield(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Added 100 to Rev2(100/970), Rev3 100 modified, Conflict Expected"""
upload_identical_rec = xml_marc_to_records(self.rev2_add_sim_field)
orig_recs = xml_marc_to_records(self.rev3_mod)
rev_verifier = RevisionVerifier()
self.assertRaises(InvenioBibUploadConflictingRevisionsError, \
rev_verifier.verify_revision, \
upload_identical_rec[0], \
orig_recs[0], \
'replace')
class RevisionVerifierForCorrectModification(GenericBibUploadTest):
"""
Test Cases for Patch generation when fields are modified.
Scenarios:
* Fields modified in Upload Record but not modified in Original Record
"""
def setUp(self):
""" Sets up sample records for Modified Fields Scenarios."""
GenericBibUploadTest.setUp(self)
self.data = init_test_records()
# Rev 2 Update -- Rev2 ~ tag 970 Modified
self.rev2_mod_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PZOOPZOO</subfield>
</datafield>
</record>"""
# Modify Record Patch Output
self.patch = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PZOOPZOO</subfield>
</datafield>
</record>"""
# Scenario 2 - 970CP added to existing record
# Rev 2 Update -- Rev2 ~ tag 970CP Added
self.rev2_mod_field_diff_ind = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHOPHO</subfield>
</datafield>
<datafield tag ="970" ind1="C" ind2="P">
<subfield code="a">0003719XYZOXYZO</subfield>
</datafield>
</record>"""
# Modify Record Patch Output
self.patch_diff_ind = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="970" ind1="C" ind2="P">
<subfield code="a">0003719XYZOXYZO</subfield>
</datafield>
</record>"""
# Scenario 3 - 970__ deleted and 970CP added to existing record
# Rev 2 Update -- Rev2 ~ tag 970CP Added
self.rev2_mod_del_one_add_one = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1="C" ind2="P">
<subfield code="a">0003719XYZOXYZO</subfield>
</datafield>
</record>"""
# Modify Record Patch Output - 1st possibility
self.patch_del_one_add_one = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="0">__DELETED_FIELDS__</subfield>
</datafield>
<datafield tag ="970" ind1="C" ind2="P">
<subfield code="a">0003719XYZOXYZO</subfield>
</datafield>
</record>"""
# Modify Record Patch Output - 2nd possibility
self.patch_del_one_add_one_2 = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="970" ind1="C" ind2="P">
<subfield code="a">0003719XYZOXYZO</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="0">__DELETED_FIELDS__</subfield>
</datafield>
</record>"""
self.rev2_mod_field = self.rev2_mod_field.replace(
'123456789', \
self.data['id'][0])
self.rev2_mod_field = self.rev2_mod_field.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.patch = self.patch.replace('123456789', self.data['id'][0])
self.rev2_mod_field_diff_ind = self.rev2_mod_field_diff_ind.replace(
'123456789', \
self.data['id'][0])
self.rev2_mod_field_diff_ind = self.rev2_mod_field_diff_ind.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.patch_diff_ind = self.patch_diff_ind.replace('123456789', self.data['id'][0])
self.rev2_mod_del_one_add_one = self.rev2_mod_del_one_add_one.replace(
'123456789', \
self.data['id'][0])
self.rev2_mod_del_one_add_one = self.rev2_mod_del_one_add_one.replace(
'20110101000000.0', \
self.data['rev2'][1])
self.patch_del_one_add_one = self.patch_del_one_add_one.replace('123456789', self.data['id'][0])
self.patch_del_one_add_one_2 = self.patch_del_one_add_one_2.replace('123456789', self.data['id'][0])
def test_modified_fields(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Modified 970 in Rev2(100/970), Patch Generated for 970"""
upload_recs = xml_marc_to_records(self.rev2_mod_field)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(
upload_recs[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
self.assertEqual(compare_xmbuffers(record_xml_output(patch), self.patch), '')
def test_correcting_added_field_with_diff_ind(self):
""" BibUpload Revision Verifier - Rev3-100/970__/888, Added 970CP in Rev2(100/970__), Patch Generated for 970CP"""
upload_recs = xml_marc_to_records(self.rev2_mod_field_diff_ind)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(
upload_recs[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
self.assertEqual(compare_xmbuffers(record_xml_output(patch), self.patch_diff_ind), '')
def test_correcting_del_field_add_field_diff_ind(self):
""" BibUpload Revision Verifier - Rev3-100/970__/888, Deleted 970__ and Added 970CP in Rev2(100/970__), Patch Generated for 970__/970CP"""
upload_recs = xml_marc_to_records(self.rev2_mod_del_one_add_one)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(
upload_recs[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
# NOTE: the generated patch dictionary is unordered, so the output XML may
# list the fields in either order; accept whichever expected buffer matches
# (compare_xmbuffers returns '' when the two buffers are equivalent).
self.assertTrue((compare_xmbuffers(record_xml_output(patch), self.patch_del_one_add_one) == '') \
or (compare_xmbuffers(record_xml_output(patch), self.patch_del_one_add_one_2) == ''))
class RevisionVerifierForConflictingModification(GenericBibUploadTest):
"""
Test Cases for Revision Verifier when fields modified are conflicting.
Scenarios:
* Fields modified in both Upload Record and Original Record
* Fields modified in Upload record but deleted from Original Record
"""
def setUp(self):
""" Sets up sample records for Modified Fields Scenarios."""
GenericBibUploadTest.setUp(self)
self.data = init_test_records()
# Rev 2 Update -- Rev2 ~ tag 970 Modified
self.rev2_mod_field = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PZOOPZOO</subfield>
</datafield>
</record>"""
# Rev 3 Modified -- Rev3 ~ tag 970 modified - Conflict with Rev2-Update
self.rev3_mod = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHYPHY</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
# Rev 3 Modified -- Rev3 ~ tag 970 deleted - Conflict with Rev2-Update
self.rev3_deleted = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
self.rev2_mod_field = self.rev2_mod_field.replace('123456789', \
self.data['id'][0])
self.rev2_mod_field = self.rev2_mod_field.replace('20110101000000.0', \
self.data['rev2'][1])
self.rev3_mod = self.rev3_mod.replace('123456789', self.data['id'][0])
self.rev3_mod = self.rev3_mod.replace('20110101000000.0', \
self.data['rev3'][1])
self.rev3_deleted = self.rev3_deleted.replace('123456789', \
self.data['id'][0])
self.rev3_deleted = self.rev3_deleted.replace('20110101000000.0', \
self.data['rev3'][1])
def test_conflicting_modified_field(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Modified 970 in Rev2(100/970), 970 modified in Rev3, Conflict Expected"""
upload_conf_recs = xml_marc_to_records(self.rev2_mod_field)
orig_recs = xml_marc_to_records(self.rev3_mod)
rev_verifier = RevisionVerifier()
self.assertRaises(
InvenioBibUploadConflictingRevisionsError, \
rev_verifier.verify_revision, \
upload_conf_recs[0], \
orig_recs[0], \
'replace')
def test_conflicting_deleted_field(self):
""" BibUpload Revision Verifier - Rev3-100/970/888, Modified 970 in Rev2(100/970), 970 removed in Rev3, Conflict Expected"""
upload_conf_recs = xml_marc_to_records(self.rev2_mod_field)
orig_recs = xml_marc_to_records(self.rev3_deleted)
rev_verifier = RevisionVerifier()
self.assertRaises(
InvenioBibUploadConflictingRevisionsError, \
rev_verifier.verify_revision, \
upload_conf_recs[0], \
orig_recs[0], \
'replace')
class RevisionVerifierForDeletingFields(GenericBibUploadTest):
"""
Test Cases for Revision Verifier when fields are to be deleted from upload record.
Scenarios:
* Fields present in the Original Record but removed from the Upload Record,
  so a special DELETE-FIELDS patch is generated
"""
def setUp(self):
""" Sets up sample records for Modified Fields Scenarios."""
GenericBibUploadTest.setUp(self)
# Rev 1
self.rev1 = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="300" ind1=" " ind2=" ">
<subfield code="a">Test, Field-1</subfield>
</datafield>
<datafield tag ="300" ind1=" " ind2=" ">
<subfield code="a">Test, Field-2</subfield>
</datafield>
<datafield tag ="300" ind1="C" ind2="P">
<subfield code="a">Test, Field-3</subfield>
</datafield>
</record>"""
# Rev 1 -- To Replace
self.rev1_mod = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
</record>"""
# Patch with SPECIAL DELETE FIELD-1
self.patch_1 = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="300" ind1=" " ind2=" ">
<subfield code="0">__DELETE_FIELDS__</subfield>
</datafield>
<datafield tag ="300" ind1="C" ind2="P">
<subfield code="0">__DELETE_FIELDS__</subfield>
</datafield>
</record>"""
# Patch with SPECIAL DELETE FIELD-2
self.patch_2 = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="300" ind1="C" ind2="P">
<subfield code="0">__DELETE_FIELDS__</subfield>
</datafield>
<datafield tag ="300" ind1=" " ind2=" ">
<subfield code="0">__DELETE_FIELDS__</subfield>
</datafield>
</record>"""
self.rev_to_insert = self.rev1.replace('<controlfield tag="001">123456789</controlfield>', '')
self.rev_to_insert = self.rev_to_insert.replace('<controlfield tag="005">20110101000000.0</controlfield>','')
rec = xml_marc_to_records(self.rev_to_insert)
dummy_error, self.recid, dummy_msg = bibupload(rec[0], opt_mode='insert')
self.check_record_consistency(self.recid)
self.rev1 = self.rev1.replace('123456789', str(self.recid))
self.rev1_mod = self.rev1_mod.replace('123456789', str(self.recid))
self.patch_1 = self.patch_1.replace('123456789', str(self.recid))
self.patch_2 = self.patch_2.replace('123456789', str(self.recid))
record = get_record(self.recid)
rev = record_get_field_value(record, '005')
self.rev1 = self.rev1.replace('20110101000000.0', rev)
self.rev1_mod = self.rev1_mod.replace('20110101000000.0', rev)
def test_for_special_delete_field(self):
""" BibUpload Revision Verifier - Rev1-100/300, Modified 100 in Rev1-Mod, Deleted 300 in Rev1-Mod (100/300), Patch for DELETE generated"""
upload_rec = xml_marc_to_records(self.rev1_mod)
orig_rec = xml_marc_to_records(self.rev1)
rev_verifier = RevisionVerifier()
(opt_mode, final_patch, dummy_affected_tags) = rev_verifier.verify_revision(upload_rec[0], \
orig_rec[0], \
'replace')
self.assertEqual('correct', opt_mode)
# compare_xmbuffers returns '' on a match; the patch fields may appear in
# either order, so accept whichever expected buffer matches.
self.assertTrue((compare_xmbuffers(self.patch_1, record_xml_output(final_patch)) == '') or \
(compare_xmbuffers(self.patch_2, record_xml_output(final_patch)) == ''))
class RevisionVerifierForInterchangedFields(GenericBibUploadTest):
"""
Contains Test Cases for Re-ordered Fields.
Scenarios include:
* Same set of fields but in different order
"""
def setUp(self):
""" Sets up sample records for Modified Fields Scenarios."""
GenericBibUploadTest.setUp(self)
# Rev 1 -- 100-1/100-2/100-3
self.rev1 = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester1, T</subfield>
<subfield code="u">DESY1</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester2, T</subfield>
<subfield code="u">DESY2</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester3, T</subfield>
<subfield code="u">DESY3</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHYPHY</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
# Rev 1 Modified -- 100-2/100-3/100-1
self.rev1_mod = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester2, T</subfield>
<subfield code="u">DESY2</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester3, T</subfield>
<subfield code="u">DESY3</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester1, T</subfield>
<subfield code="u">DESY1</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PHYPHY</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
self.patch = """<record>
<controlfield tag="001">123456789</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester2, T</subfield>
<subfield code="u">DESY2</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester3, T</subfield>
<subfield code="u">DESY3</subfield>
</datafield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester1, T</subfield>
<subfield code="u">DESY1</subfield>
</datafield>
</record>"""
insert_record = self.rev1.replace(
'<controlfield tag="001">123456789</controlfield>', '')
insert_record = insert_record.replace(
'<controlfield tag="005">20110101000000.0</controlfield>', '')
recs = xml_marc_to_records(insert_record)
# --> Revision 1 submitted
res = bibupload(recs[0], opt_mode='insert')
self.recid = res[1]
self.check_record_consistency(self.recid)
rec = get_record(self.recid)
rev_tag = record_get_field_value(rec, '005', '', '')
# update the test data
self.rev1 = self.rev1.replace('123456789', str(self.recid))
self.rev1 = self.rev1.replace('20110101000000.0', rev_tag)
self.rev1_mod = self.rev1_mod.replace('123456789', str(self.recid))
self.rev1_mod = self.rev1_mod.replace('20110101000000.0', rev_tag)
self.patch = self.patch.replace('123456789', str(self.recid))
def test_interchanged_fields(self):
""" BibUpload Revision Verifier - Rev1--100-1/100-2/100-3/970/888, Rev1-Up--100-2/100-3/100-1/970/888, Patch Generated for 100"""
upload_recs = xml_marc_to_records(self.rev1_mod)
orig_recs = xml_marc_to_records(self.rev1)
rev_verifier = RevisionVerifier()
(opt_mode, patch, dummy_affected_tags) = rev_verifier.verify_revision(
upload_recs[0], \
orig_recs[0], \
'replace')
self.assertEqual('correct', opt_mode)
self.assertEqual(compare_xmbuffers(record_xml_output(patch), self.patch), '')
class RevisionVerifierForCommonCases(GenericBibUploadTest):
"""
Contains Test Cases for Common Scenarios.
Scenarios include:
* Invalid Revision
* Invalid opt_mode value
* Missing Revision in Upload Record
"""
def setUp(self):
""" Set up all the sample records required for Test Cases."""
GenericBibUploadTest.setUp(self)
self.data = init_test_records()
# Rev 2 Update -- Rev2 ~ tag 970 Modified
self.rev2_modified = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester, T</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="970" ind1=" " ind2=" ">
<subfield code="a">0003719PZOOPZOO</subfield>
</datafield>
</record>"""
self.rev2_modified = self.rev2_modified.replace('123456789', \
self.data['id'][0])
def test_unchanged_record_upload(self):
""" BibUpload Revision Verifier - Uploading Unchanged Record, Raise UnchangedRecordError"""
upload_recs = xml_marc_to_records(self.data['rev3'][0])
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
self.assertRaises(InvenioBibUploadUnchangedRecordError, \
rev_verifier.verify_revision, \
upload_recs[0], \
orig_recs[0], \
'replace')
def test_missing_revision(self):
""" BibUpload Revision Verifier - Missing 005 Tag scenario, Raise Missing005Error."""
self.rev2_modified = self.rev2_modified.replace(
'<controlfield tag="005">20110101000000.0</controlfield>', \
'')
upload_recs = xml_marc_to_records(self.rev2_modified)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
self.assertRaises(InvenioBibUploadMissing005Error, \
rev_verifier.verify_revision, \
upload_recs[0], \
orig_recs[0], \
'replace')
def test_invalid_operation(self):
""" BibUpload Revision Verifier - Incorrect opt_mode parameter."""
upload_recs = xml_marc_to_records(self.rev2_modified)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
rev_verifier = RevisionVerifier()
for item in ['append', 'format', 'insert', 'delete', 'reference']:
self.assertEqual(rev_verifier.verify_revision(
upload_recs[0], \
orig_recs[0], \
item), None)
def test_invalid_revision(self):
""" BibUpload Revision Verifier - Wrong Revision in the Upload Record, Raise InvalidRevisionError"""
self.rev2_modified = self.rev2_modified.replace(
'<controlfield tag="005">20110101000000.0</controlfield>', \
'<controlfield tag="005">20110101020304.0</controlfield>')
rev_verifier = RevisionVerifier()
upload_recs = xml_marc_to_records(self.rev2_modified)
orig_recs = xml_marc_to_records(self.data['rev3'][0])
self.assertRaises(InvenioBibUploadInvalidRevisionError, \
rev_verifier.verify_revision, \
upload_recs[0], \
orig_recs[0], \
'replace')
class RevisionVerifierFromBibUpload(GenericBibUploadTest):
""" Test Case for End-to-End Bibupload with Revision Verifier module Enabled """
def setUp(self):
""" Set up all the sample records required for Test Cases."""
GenericBibUploadTest.setUp(self)
# Rev 1 -- To Insert
self.rev1 = """<record>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="870" ind1=" " ind2=" ">
<subfield code="a">3719PZOOPZOO</subfield>
</datafield>
</record>"""
# Rev 1 Modified -- To Replace
self.rev1_modified = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="870" ind1=" " ind2=" ">
<subfield code="a">3719PZOOPZOO_modified</subfield>
</datafield>
</record>"""
# Rev 2 Update -- Rev2
self.rev2 = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="870" ind1=" " ind2=" ">
<subfield code="a">3719PZOOPZOO</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
# Rev 2 Modified -- Rev2 - 870 modified
self.rev2_modified = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="870" ind1=" " ind2=" ">
<subfield code="a">3719PZOOPZOO_another modification</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
self.final_xm = """<record>
<controlfield tag="001">123456789</controlfield>
<controlfield tag="005">20110101000000.0</controlfield>
<datafield tag ="100" ind1=" " ind2=" ">
<subfield code="a">Tester</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag ="870" ind1=" " ind2=" ">
<subfield code="a">3719PZOOPZOO_modified</subfield>
</datafield>
<datafield tag="888" ind1=" " ind2=" ">
<subfield code="a">dumb text</subfield>
</datafield>
</record>"""
def test_BibUpload_revision_verifier(self):
""" BibUpload Revision Verifier - Called from BibUpload Operation - Patch & Conflict Scenarios"""
recs = xml_marc_to_records(self.rev1)
# --> Revision 1 submitted
error, self.recid, dummy_msg = bibupload(recs[0], opt_mode='insert')
self.check_record_consistency(self.recid)
record = get_record(self.recid)
rev = record_get_field_value(record, '005', '', '')
recs = xml_marc_to_records(self.rev1)
self.rev2 = self.rev2.replace('123456789', str(self.recid))
self.rev2 = self.rev2.replace('20110101000000.0', rev)
self.rev1_modified = self.rev1_modified.replace('123456789', str(self.recid))
self.rev1_modified = self.rev1_modified.replace('20110101000000.0', rev)
self.final_xm = self.final_xm.replace('123456789', str(self.recid))
recs = xml_marc_to_records(self.rev1)
recs = xml_marc_to_records(self.rev2)
# --> Revision 2 submitted
error, self.recid, dummy_msg = bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(self.recid)
record = get_record(self.recid)
self.rev2 = self.rev2.replace(rev, record_get_field_value(record, '005', '', ''))
self.rev2_modified = self.rev2_modified.replace('123456789', str(self.recid))
self.rev2_modified = self.rev2_modified.replace('20110101000000.0', record_get_field_value(record, '005', '', ''))
# --> Revision 1 modified submitted
recs = xml_marc_to_records(self.rev1_modified)
error, self.recid, dummy_msg = bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(self.recid)
record = get_record(self.recid)
rev = record_get_field_value(record, '005', '', '')
self.final_xm = self.final_xm.replace('20110101000000.0', rev)
self.assertEqual(compare_xmbuffers(self.final_xm, print_record(self.recid, 'xm')), '')
# --> Revision 2 modified submitted
recs = xml_marc_to_records(self.rev2_modified)
error, self.recid, dummy_msg = bibupload(recs[0], opt_mode='replace')
self.check_record_consistency(self.recid)
self.assertEqual(error, 2)
+
+
+class RevisionVerifierHistoryOfAffectedFields(GenericBibUploadTest):
+ """Checks if column 'affected fields' from hstRECORD table
+ is filled correctly"""
+
+ def setUp(self):
+ GenericBibUploadTest.setUp(self)
+ self.data = init_test_records()
+
+ def test_inserted_record_with_no_affected_tags_in_hst(self):
+ """Checks if inserted record has affected fields in hstRECORD table"""
+ query = "SELECT affected_fields from hstRECORD where id_bibrec=5 ORDER BY job_date DESC"
+ res = run_sql(query)
+ self.assertEqual(res[0][0], "")
+
+ def test_corrected_record_affected_tags(self):
+ """Checks if corrected record has affected fields in hstRECORD table"""
+ query = "SELECT affected_fields from hstRECORD where id_bibrec=12 ORDER BY job_date DESC"
+ res = run_sql(query)
+ self.assertEqual(res[0][0], "005__%,8564_%,909C0%,909C1%,909C5%,909CO%,909CS%")
+
+
+ def test_append_to_record_affected_tags(self):
+ """Checks if record with appended parts has proper affected fields in hstRECORD table"""
+ query = """SELECT affected_fields from hstRECORD where id_bibrec=%s
+ ORDER BY job_date DESC""" % self.data["id"][0]
+ res = run_sql(query)
+ self.assertEqual(res[0][0], '005__%,888__%')
+ self.assertEqual(res[1][0], '005__%,970__%')
+ self.assertEqual(res[2][0], '')
+
+
TEST_SUITE = make_test_suite(RevisionVerifierForCorrectAddition,
RevisionVerifierForCorrectModification,
RevisionVerifierForInterchangedFields,
RevisionVerifierForDeletingFields,
RevisionVerifierForConflictingAddition,
RevisionVerifierForConflictingModification,
- RevisionVerifierForCommonCases)
+ RevisionVerifierForCommonCases,
+ RevisionVerifierHistoryOfAffectedFields)
+
if __name__ == '__main__':
run_test_suite(TEST_SUITE, warn_user=True)
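The `affected_fields` values asserted in the history tests above follow a simple convention: MARC tag, then the two indicators (`_` for a blank indicator), then a `%` wildcard, comma-separated per field. A minimal sketch of composing such a string (the helper name is hypothetical, for illustration only):

```python
def affected_fields_str(fields):
    """Join (tag, ind1, ind2) triples into the hstRECORD affected_fields
    format: tag + indicators ('_' for a blank indicator) + '%' wildcard,
    comma-separated."""
    return ','.join('%s%s%s%%' % (tag, ind1 or '_', ind2 or '_')
                    for tag, ind1, ind2 in fields)

print(affected_fields_str([('005', '', ''), ('888', '', '')]))
# -> 005__%,888__%
```

An empty list yields an empty string, which is what the insert test expects for a freshly inserted record.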
diff --git a/modules/docextract/lib/refextract_api.py b/modules/docextract/lib/refextract_api.py
index e1908f395..50840f8f9 100644
--- a/modules/docextract/lib/refextract_api.py
+++ b/modules/docextract/lib/refextract_api.py
@@ -1,272 +1,273 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""This is where all the public API calls are accessible
This is the only file containing public calls; everything that is
not present here can be considered private to the invenio modules.
"""
import os
from urllib import urlretrieve
from tempfile import mkstemp
from invenio.refextract_engine import parse_references, \
get_plaintext_document_body, \
parse_reference_line, \
get_kbs
from invenio.refextract_text import extract_references_from_fulltext
from invenio.search_engine_utils import get_fieldvalues
-from invenio.bibindex_engine import CFG_JOURNAL_PUBINFO_STANDARD_FORM, \
- CFG_JOURNAL_TAG
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM, \
+ CFG_JOURNAL_TAG
from invenio.bibdocfile import BibRecDocs, InvenioBibDocFileError
from invenio.search_engine import get_record
from invenio.bibtask import task_low_level_submission
from invenio.bibrecord import record_delete_fields, record_xml_output, \
create_record, record_get_field_instances, record_add_fields, \
record_has_field
from invenio.refextract_find import get_reference_section_beginning, \
find_numeration_in_body
from invenio.refextract_text import rebuild_reference_lines
from invenio.refextract_config import CFG_REFEXTRACT_FILENAME
from invenio.config import CFG_TMPSHAREDDIR
class FullTextNotAvailable(Exception):
"""Raised when we cannot access the document text"""
class RecordHasReferences(Exception):
"""Raised when
* we were asked to update references for a record
* we explicitly asked not to overwrite references for this record
(via the appropriate function argument)
* the record already has references, thus we cannot update them
"""
def extract_references_from_url_xml(url):
"""Extract references from the pdf specified in the url
The single parameter is the URL of the pdf.
It raises FullTextNotAvailable if the url gives a 404
The result is given in marcxml.
"""
filename, dummy = urlretrieve(url)
try:
try:
marcxml = extract_references_from_file_xml(filename)
except IOError, err:
if err.code == 404:
raise FullTextNotAvailable()
else:
raise
finally:
os.remove(filename)
return marcxml
def extract_references_from_file_xml(path, recid=1):
"""Extract references from a local pdf file
The single parameter is the path to the file
It raises FullTextNotAvailable if the file does not exist
The result is given in marcxml.
"""
if not os.path.isfile(path):
raise FullTextNotAvailable()
docbody, dummy = get_plaintext_document_body(path)
reflines, dummy, dummy = extract_references_from_fulltext(docbody)
if not len(reflines):
docbody, dummy = get_plaintext_document_body(path, keep_layout=True)
reflines, dummy, dummy = extract_references_from_fulltext(docbody)
return parse_references(reflines, recid=recid)
def extract_references_from_string_xml(source, is_only_references=True):
"""Extract references from a string
The single parameter is the document
The result is given in marcxml.
"""
docbody = source.split('\n')
if not is_only_references:
reflines, dummy, dummy = extract_references_from_fulltext(docbody)
else:
refs_info = get_reference_section_beginning(docbody)
if not refs_info:
refs_info, dummy = find_numeration_in_body(docbody)
refs_info['start_line'] = 0
refs_info['end_line'] = len(docbody) - 1
reflines = rebuild_reference_lines(docbody, refs_info['marker_pattern'])
return parse_references(reflines)
def extract_references_from_record_xml(recid):
"""Extract references from a record id
The single parameter is the record ID
The result is given in marcxml.
"""
path = look_for_fulltext(recid)
if not path:
raise FullTextNotAvailable()
return extract_references_from_file_xml(path, recid=recid)
def replace_references(recid):
"""Replace references for a record
The record itself is not updated, the marc xml of the document with updated
references is returned
Parameters:
* recid: the id of the record
"""
# Parse references
references_xml = extract_references_from_record_xml(recid)
references = create_record(references_xml.encode('utf-8'))
# Record marc xml
record = get_record(recid)
if references[0]:
fields_to_add = record_get_field_instances(references[0],
tag='999',
ind1='%',
ind2='%')
# Replace 999 fields
record_delete_fields(record, '999')
record_add_fields(record, '999', fields_to_add)
# Update record references
out_xml = record_xml_output(record)
else:
out_xml = None
return out_xml
def update_references(recid, overwrite=True):
"""Update references for a record
First, we extract references from a record.
Then, instead of updating the record directly, we submit a bibupload
task in -c mode which takes care of updating the record.
Parameters:
* recid: the id of the record
"""
if not overwrite:
# Check for references in record
record = get_record(recid)
if record and record_has_field(record, '999'):
raise RecordHasReferences('Record has references and overwrite ' \
'mode is disabled: %s' % recid)
if get_fieldvalues(recid, '999C59'):
raise RecordHasReferences('Record has been curated: %s' % recid)
# Parse references
references_xml = extract_references_from_record_xml(recid)
# Save new record to file
(temp_fd, temp_path) = mkstemp(prefix=CFG_REFEXTRACT_FILENAME,
dir=CFG_TMPSHAREDDIR)
temp_file = os.fdopen(temp_fd, 'w')
temp_file.write(references_xml.encode('utf-8'))
temp_file.close()
# Update record
task_low_level_submission('bibupload', 'refextract', '-P', '5',
'-c', temp_path)
def list_pdfs(recid):
rec_info = BibRecDocs(recid)
docs = rec_info.list_bibdocs()
for doc in docs:
for ext in ('pdf', 'pdfa', 'PDF'):
try:
yield doc.get_file(ext)
except InvenioBibDocFileError:
pass
def get_pdf_doc(recid):
try:
doc = list_pdfs(recid).next()
except StopIteration:
doc = None
return doc
def look_for_fulltext(recid):
doc = get_pdf_doc(recid)
path = None
if doc:
path = doc.get_full_path()
return path
def record_has_fulltext(recid):
"""Checks if we can access the fulltext for the given recid"""
path = look_for_fulltext(recid)
return path is not None
def search_from_reference(text):
"""Convert a raw reference to a search query
Called by the search engine to convert a raw reference:
find rawref John, JINST 4 (1994) 45
is converted to
journal:"JINST,4,45"
"""
field = ''
pattern = ''
kbs = get_kbs()
references, dummy_m, dummy_c, dummy_co = parse_reference_line(text, kbs)
for elements in references:
for el in elements:
if el['type'] == 'JOURNAL':
field = 'journal'
pattern = CFG_JOURNAL_PUBINFO_STANDARD_FORM \
.replace(CFG_JOURNAL_TAG.replace('%', 'p'), el['title']) \
.replace(CFG_JOURNAL_TAG.replace('%', 'v'), el['volume']) \
.replace(CFG_JOURNAL_TAG.replace('%', 'c'), el['page']) \
.replace(CFG_JOURNAL_TAG.replace('%', 'y'), el['year'])
break
elif el['type'] == 'REPORTNUMBER':
field = 'report'
pattern = el['report_num']
break
return field, pattern.encode('utf-8')
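The journal pattern built by search_from_reference above is plain string substitution on the configured standard publication-info form. A minimal sketch with assumed configuration values (the real CFG_JOURNAL_TAG and CFG_JOURNAL_PUBINFO_STANDARD_FORM come from the BibIndex journal tokenizer and may differ):

```python
# Assumed configuration values, for illustration only.
CFG_JOURNAL_TAG = '773__%'
CFG_JOURNAL_PUBINFO_STANDARD_FORM = '773__p,773__v,773__c'

def build_journal_pattern(el):
    """Substitute title/volume/page into the standard publication-info
    form, mirroring the replace() chain in search_from_reference."""
    return (CFG_JOURNAL_PUBINFO_STANDARD_FORM
            .replace(CFG_JOURNAL_TAG.replace('%', 'p'), el['title'])
            .replace(CFG_JOURNAL_TAG.replace('%', 'v'), el['volume'])
            .replace(CFG_JOURNAL_TAG.replace('%', 'c'), el['page']))

print(build_journal_pattern({'title': 'JINST', 'volume': '4', 'page': '45'}))
# -> JINST,4,45
```

This matches the example in the docstring: a raw reference like "JINST 4 (1994) 45" ends up as the search pattern JINST,4,45 in the journal field.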
diff --git a/modules/docextract/lib/refextract_linker.py b/modules/docextract/lib/refextract_linker.py
index f8f65c781..05327c618 100644
--- a/modules/docextract/lib/refextract_linker.py
+++ b/modules/docextract/lib/refextract_linker.py
@@ -1,74 +1,75 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
from invenio.bibrank_citation_indexer import INTBITSET_OF_DELETED_RECORDS
-from invenio.bibindex_engine import CFG_JOURNAL_PUBINFO_STANDARD_FORM
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import \
+ CFG_JOURNAL_PUBINFO_STANDARD_FORM
from invenio.search_engine import search_pattern
def get_recids_matching_query(pvalue, fvalue):
"""Return list of recIDs matching query for PVALUE and FVALUE."""
recids = search_pattern(p=pvalue, f=fvalue, m='e')
recids -= INTBITSET_OF_DELETED_RECORDS
return list(recids)
def format_journal(format_string, mappings):
"""format the publ infostring according to the format"""
def replace(char, data):
return data.get(char, char)
return ''.join(replace(c, mappings) for c in format_string)
def find_journal(citation_element):
tags_values = {
'773__p': citation_element['title'],
'773__v': citation_element['volume'],
'773__c': citation_element['page'],
'773__y': citation_element['year'],
}
journal_string \
= format_journal(CFG_JOURNAL_PUBINFO_STANDARD_FORM, tags_values)
return get_recids_matching_query(journal_string, 'journal')
def find_reportnumber(citation_element):
reportnumber_string = citation_element['report_num']
return get_recids_matching_query(reportnumber_string, 'reportnumber')
def find_doi(citation_element):
doi_string = citation_element['doi_string']
return get_recids_matching_query(doi_string, 'doi')
def find_referenced_recid(citation_element):
el_type = citation_element['type']
if el_type in FINDERS:
return FINDERS[el_type](citation_element)
return []
FINDERS = {
'JOURNAL': find_journal,
'REPORTNUMBER': find_reportnumber,
'DOI': find_doi,
}
diff --git a/modules/miscutil/demo/demobibdata.xml b/modules/miscutil/demo/demobibdata.xml
index d2998f51c..828223339 100644
--- a/modules/miscutil/demo/demobibdata.xml
+++ b/modules/miscutil/demo/demobibdata.xml
@@ -1,24578 +1,26285 @@
+<!--
+Layout of this file:
+
+1. BIBLIOGRAPHIC records (containing only a few references to authority records)
+2. AUTHORITY records
+- AUTHOR Authority Records
+- INSTITUTION Authority Records
+- SUBJECT Authority Records
+3. Linked BIBLIOGRAPHIC and AUTHORITY data from Juelich
+-->
+
+<!-- ========================== -->
+<!-- 1. BIBLIOGRAPHIC records -->
+<!-- ========================== -->
+
+
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EX-0106015</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">ALEPH experiment: Candidate of Higgs boson production</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Expérience ALEPH: Candidat de la production d'un boson Higgs</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 06 2000</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">FILM</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Candidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Experiments and Tracks</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LEP</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0106015_01.jpg</subfield>
<subfield code="r">restricted_picture</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0106015_01.gif</subfield>
<subfield code="f">.gif;icon</subfield>
<subfield code="r">restricted_picture</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0003717PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-06-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-08-27</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 2</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EX-0104007</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Patrice Loïez</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">The first CERN-built module of the barrel section of ATLAS's electromagnetic calorimeter</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Premier module du tonneau du calorimètre electromagnétique d'ATLAS</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">10 Apr 2001</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">DIGITAL</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Behind the module, left to right Ralf Huber, Andreas Bies and Jorgen Beck Hansen. In front of the module, left to right: Philippe Lançon and Edward Wood.</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="a">Derrière le module, de gauche à droite: Ralf Huber, Andreas Bies, Jorgen Beck Hansen. Devant le module, de gauche à droite : Philippe Lançon et Edward Wood.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Experiments and Tracks</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">marie.noelle.pages.ribeiro@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0104007_02.jpeg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/0104007_02.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0003601PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2001</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-04-23</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-06-18</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">0020699</subfield>
<subfield code="l">ADMBUL</subfield>
<subfield code="t">CERN Bulletin 18/2001 : 30 April 2001 (English)</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">0020700</subfield>
<subfield code="l">ADMBUL</subfield>
<subfield code="t">CERN Bulletin 18/2001 : 30 avril 2001 (French)</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 184</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Fassnacht, P</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-HI-6902127</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">European Molecular Biology Conference</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">Jul 1969</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">In February, the Agreement establishing the European Molecular Biology Conference was signed at CERN. Willy Spuhler is signing for Switzerland.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Personalities and History of CERN</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/6902127.jpeg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/6902127.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0002690PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1969</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-06-13</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2000-06-13</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="s">127-2-69</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200024</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-DI-9906028</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">J.L. Caron</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">The Twenty Member States of CERN (with dates of accession) on 1 June 1999</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">Jun 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">CERN Member States.</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="a">Les Etats membres du CERN.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Diagrams and Charts</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9906028_01.jpeg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/9906028_01.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0001754PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1999-06-17</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2000-10-30</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">199924</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-DI-9905005</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">High energy cosmic rays striking atoms at the top of the atmosphere give the rise to showers of particles striking the Earth's surface</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Des rayons cosmiques de haute energie heurtent des atomes dans la haute atmosphere et donnent ainsi naissance a des gerbes de particules projetees sur la surface terrestre</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">10 May 1999</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">DIGITAL</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Diagrams and Charts</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9905005_01.jpeg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/9905005_01.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0001626PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1999-05-10</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2000-09-12</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 60</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-HI-6206002</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">At CERN in 1962</subfield>
<subfield code="s">eight Nobel prizewinners</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1962</subfield>
</datafield>
<datafield tag="506" ind1="1" ind2=" ">
<subfield code="a">jekyll_only</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">In 1962, CERN hosted the 11th International Conference on High Energy Physics. Among the distinguished visitors were eight Nobel prizewinners.Left to right: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang and Robert Hofstadter.</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="a">En 1962, le CERN est l'hote de la onzieme Conference Internationale de Physique des Hautes Energies. Parmi les visiteurs eminents se trouvaient huit laureats du prix Nobel.De gauche a droite: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang et Robert Hofstadter.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Personalities and History of CERN</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">Nobel laureate</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/6206002.jpg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/6206002.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0000736PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1962</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1998-07-23</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-07-15</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1950/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1950 : Cecil Frank Powell</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1944/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1944 : Isidor Isaac Rabi</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1932/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1932 : Werner Karl Heisenberg</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/chemistry/laureates/1951/index.html</subfield>
<subfield code="p">The Nobel Prize in Chemistry 1951 : Edwin Mattison McMillan</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1959/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1959 : Emilio Gino Segre</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1957/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1957 : Chen Ning Yang and Tsung-Dao Lee</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.nobel.se/physics/laureates/1961/index.html</subfield>
<subfield code="p">The Nobel Prize in Physics 1961 : Robert Hofstadter</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="s">6206002 (1962)</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">199830</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-GE-9806033</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Tim Berners-Lee</subfield>
<subfield code="s">World-Wide Web inventor</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">28 Jun 1998</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Conference "Internet, Web, What's next?" on 26 June 1998 at CERN : Tim Berners-Lee, inventor of the World-Wide Web and Director of the W3C, explains how the Web came to be and gives his views on the future.</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="a">Conference "Internet, Web, What's next?" le 26 juin 1998 au CERN: Tim Berners-Lee, inventeur du World-Wide Web et directeur du W3C, explique comment le Web est né, et donne ses opinions sur l'avenir.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Life at CERN</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9806033.jpeg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/9806033.gif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0000655PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1998-07-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-07-10</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.cern.ch/CERN/Announcements/1998/WebNext.html</subfield>
<subfield code="p">"Internet, Web, What's next?" 26 June 1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://Bulletin.cern.ch/9828/art2/Text_E.html</subfield>
<subfield code="p">CERN Bulletin no 28/98 (6 July 1998) (English)</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://Bulletin.cern.ch/9828/art2/Text_F.html</subfield>
<subfield code="p">CERN Bulletin no 28/98 (6 juillet 1998) (French)</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="d">http://www.w3.org/People/Berners-Lee/</subfield>
<subfield code="p">Biography</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">0000990</subfield>
<subfield code="l">PRSPRS</subfield>
<subfield code="t">Le Pays Gessien : 3 Jul 1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">0001037</subfield>
<subfield code="l">PRSPRS</subfield>
<subfield code="t">Le Temps : 27 Jun 1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">0000809</subfield>
<subfield code="l">PRSPRS</subfield>
<subfield code="t">La Tribune de Geneve : 27 Jun 1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 60</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">199827</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">astro-ph/9812226</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Efstathiou, G P</subfield>
<subfield code="u">Cambridge University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Constraints on $\Omega_{\Lambda}$ and $\Omega_{m}$ from Distant Type 1a Supernovae and Cosmic Microwave Background Anisotropies</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 Dec 1998</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">6 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We perform a combined likelihood analysis of the latest cosmic microwave background anisotropy data and distant Type 1a Supernova data of Perlmutter et al. (1998a). Our analysis is restricted to cosmological models where structure forms from adiabatic initial fluctuations characterised by a power-law spectrum with negligible tensor component. Marginalizing over other parameters, our best-fit solution gives Omega_m = 0.25 (+0.18, -0.12) and Omega_Lambda = 0.63 (+0.17, -0.23) (95 % confidence errors) for the cosmic densities contributed by matter and a cosmological constant, respectively. The results therefore strongly favour a nearly spatially flat Universe with a non-zero cosmological constant.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Astrophysics and Astronomy</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lasenby, A N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hobson, M P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, R S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bridle, S L</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">George Efstathiou &lt;gpe@ast.cam.ac.uk&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig1.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig3.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig5.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig6.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9812226.fig7.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1998</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1998-12-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-04-07</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="p">Mon. Not. R. Astron. Soc.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4162242</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond, J.R. 1996, Theory and Observations of the Cosmic Background Radiation, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier Science Press, Amsterdam, p469</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond J.R., Efstathiou G., Tegmark M., 1997</subfield>
<subfield code="p">L33</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">291</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 291 (1997) L33</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond, J.R., Jaffe, A. 1997, in Proc. XXXI Rencontre de Moriond, ed. F. Bouchet, Editions Frontieres, in press</subfield>
<subfield code="r">astro-ph/9610091</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Bond J.R., Jaffe A.H. and Knox L.E., 1998</subfield>
<subfield code="r">astro-ph/9808264</subfield>
<subfield code="s">Astrophys.J. 533 (2000) 19</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Burles S., Tytler D., 1998a, to appear in the Proceedings of the Second Oak Ridge Symposium on Atomic &amp; Nuclear Astrophysics, ed. A. Mezzacappa, Institute of Physics, Bristol</subfield>
<subfield code="r">astro-ph/9803071</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Burles S., Tytler D., 1998b, Astrophys. J., in press</subfield>
<subfield code="r">astro-ph/9712109</subfield>
<subfield code="s">Astrophys.J. 507 (1998) 732</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Caldwell, R.R., Dave, R., Steinhardt P.J., 1998</subfield>
<subfield code="p">1582</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">80</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 80 (1998) 1582</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Carroll S.M., Press W.H., Turner E.L., 1992, Ann. Rev. Astr. Astrophys., 30, 499. Chaboyer B., 1998</subfield>
<subfield code="r">astro-ph/9808200</subfield>
<subfield code="s">Phys.Rept. 307 (1998) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Devlin M.J., De Oliveira-Costa A., Herbig T., Miller A.D., Netterfield C.B., Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808043</subfield>
<subfield code="s">Astrophys. J. 509 (1998) L69-72</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Efstathiou G. 1996, Observations of Large-Scale Structure in the Universe, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier Science Press, Amsterdam, p135. Efstathiou G., Bond J.R., Mon. Not. R. Astron. Soc., in press</subfield>
<subfield code="r">astro-ph/9807130</subfield>
<subfield code="s">Astrophys. J. 518 (1999) 2-23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Evrard G., 1998, submitted to Mon. Not. R. Astron. Soc</subfield>
<subfield code="r">astro-ph/9701148</subfield>
<subfield code="s">Mon.Not.Roy.Astron.Soc. 292 (1997) 289</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Freedman J.B., Mould J.R., Kennicutt R.C., Madore B.F., 1998</subfield>
<subfield code="r">astro-ph/9801090</subfield>
<subfield code="s">Astrophys. J. 480 (1997) 705</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Garnavich P.M. et al. 1998</subfield>
<subfield code="r">astro-ph/9806396</subfield>
<subfield code="s">Astrophys.J. 509 (1998) 74-79</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Goobar A., Perlmutter S., 1995</subfield>
<subfield code="p">14</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">450</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Astrophys. J. 450 (1995) 14</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hamuy M., Phillips M.M., Maza J., Suntzeff N.B., Schommer R.A., Aviles R. 1996</subfield>
<subfield code="p">2391</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">112</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 112 (1996) 2391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hancock S., Gutierrez C.M., Davies R.D., Lasenby A.N., Rocha G., Rebolo R., Watson R.A., Tegmark M., 1997</subfield>
<subfield code="p">505</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">298</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 298 (1997) 505</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Hancock S., Rocha G., Lasenby A.N., Gutierrez C.M., 1998</subfield>
<subfield code="p">L1</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">294</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 294 (1998) L1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Herbig T., De Oliveira-Costa A., Devlin M.J., Miller A.D., Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808044</subfield>
<subfield code="s">Astrophys.J. 509 (1998) L73-76</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lineweaver C.H., 1998, Astrophys. J. 505, L69. Lineweaver, C.H., Barbosa D., 1998a</subfield>
<subfield code="p">624</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">446</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 446 (1998) 624</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lineweaver, C.H., Barbosa D., 1998b</subfield>
<subfield code="p">799</subfield>
<subfield code="t">Astron. Astrophys.</subfield>
<subfield code="v">329</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astron. Astrophys. 329 (1998) 799</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">De Oliveira-Costa A., Devlin M.J., Herbig T., Miller A.D., Netterfield C.B., Page L., Tegmark M., 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9808045</subfield>
<subfield code="s">Astrophys. J. 509 (1998) L77-80</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Ostriker J.P., Steinhardt P.J., 1995</subfield>
<subfield code="p">600</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">377</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nature 377 (1995) 600</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Peebles P.J.E., 1993, Principles of Physical Cosmology, Princeton University Press, Princeton, New Jersey. Perlmutter S. et al., 1995, In Presentations at the NATO ASI in Aiguablava, Spain, LBL-38400; also published in Thermonuclear Supernovae, P. Ruiz-Lapuente, R. Canal and J. Isern (eds), Dordrecht, Kluwer, 1997, p749. Perlmutter S. et al., 1997</subfield>
<subfield code="p">565</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">483</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Astrophys. J. 483 (1997) 565</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Perlmutter S. et al., 1998a, Astrophys. J., in press (P98)</subfield>
<subfield code="r">astro-ph/9812133</subfield>
<subfield code="s">Astrophys. J. 517 (1999) 565-586</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Perlmutter S. et al., 1998b, In Presentation at the January 1998 Meeting of the American Astronomical Society, Washington D.C., LBL-42230, available at www-supernova.lbl.gov; B.A.A.S. 29 (1997) 1351. Perlmutter S. et al., 1998c</subfield>
<subfield code="p">51</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">391</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nature 391 (1998) 51</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Ratra B., Peebles P.J.E., 1988</subfield>
<subfield code="p">3406</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">37</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Rev. D 37 (1988) 3406</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Riess A. et al. 1998, Astrophys. J., in press</subfield>
<subfield code="r">astro-ph/9805201</subfield>
<subfield code="s">Astron. J. 116 (1998) 1009-1038</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Seljak U., Zaldarriaga M. 1996</subfield>
<subfield code="p">437</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">469</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 469 (1996) 437</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Seljak U. &amp; Zaldarriaga M., 1998</subfield>
<subfield code="r">astro-ph/9811123</subfield>
<subfield code="s">Phys. Rev. D60 (1999) 043504</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark M., 1997</subfield>
<subfield code="p">3806</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 79 (1997) 3806</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark M. 1998, submitted to Astrophys. J</subfield>
<subfield code="r">astro-ph/9809201</subfield>
<subfield code="s">Astrophys. J. 514 (1999) L69-L72</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Tegmark, M., Eisenstein D.J., Hu W., Kron R.G., 1998</subfield>
<subfield code="r">astro-ph/9805117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Wambsganss J., Cen R., Ostriker J.P., 1998</subfield>
<subfield code="p">29</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">494</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 494 (1998) 29</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Webster M., Bridle S.L., Hobson M.P., Lasenby A.N., Lahav O., Rocha, G., 1998, Astrophys. J., in press</subfield>
<subfield code="r">astro-ph/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">White M., 1998, Astrophys. J., in press</subfield>
<subfield code="r">astro-ph/9802295</subfield>
<subfield code="s">Astrophys. J. 506 (1998) 495</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Zaldarriaga, M., Spergel D.N., Seljak U., 1997</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">488</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Astrophys. J. 488 (1997) 1</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PRE-25553</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">RL-82-024</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ellis, J</subfield>
<subfield code="0">AUTHOR|(SzGeCERN)aaa0005</subfield>
<subfield code="u">University of Oxford</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Grand unification with large supersymmetry breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">Mar 1982</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ibanez, L E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ross, G G</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1982</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Oxford Univ.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Univ. Auton. Madrid</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Rutherford Lab.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">1982n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ex/0201013</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-EP-2001-094</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Heister, A</subfield>
<subfield code="u">Aachen, Tech. Hochsch.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Search for R-Parity Violating Production of Single Sneutrinos in $e^{+}e^{-}$ Collisions at $\sqrt{s}$ = 189-209 GeV</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">17 Dec 2001</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">22 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">ALEPH Papers</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">A search for single sneutrino production under the assumption that $R$-parity is violated via a single dominant $LL\bar{E}$ coupling is presented. This search considers the process ${\rm e} \gamma\;{\smash{\mathop{\rightarrow}}}\;\tilde{\nu}\ell$ and is performed using the data collected by the ALEPH detector at centre-of-mass energies from 189\,GeV up to 209\,GeV corresponding to an integrated luminosity of 637.1\,$\mathrm{pb}^{-1}$. The numbers of observed candidate events are in agreement with Standard Model expectations and 95\% confidence level upper limits on five of the $LL\bar{E}$ couplings are given as a function of the assumed sneutrino mass.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">20011220SLAC</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">giva</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experimental Results</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schael, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Barate, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bruneliere, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">De Bonis, I</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Decamp, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Goy, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jezequel, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lees, J P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Martin, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Merle, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Minard, M N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pietrzyk, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Trocme, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Boix, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bravo, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Casado, M P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Chmeissani, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Crespo, J M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fernandez, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fernandez-Bosman, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Garrido, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Grauges, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lopez, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Martinez, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Merino, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Miquel, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mir, L M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pacheco, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Paneque, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ruiz, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Colaleo, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Creanza, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">De Filippis, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">De Palma, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Iaselli, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Maggi, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Maggi, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nuzzo, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ranieri, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Raso, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ruggieri, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Selvaggi, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Silvestris, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tempesta, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tricomi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zito, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Huang, X</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lin, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ouyang, Q</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wang, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Xie, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Xu, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Xue, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zhang, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zhang, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zhao, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Abbaneo, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Azzurri, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Barklow, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Buchmuller, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cattaneo, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cerutti, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Clerbaux, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Drevermann, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Forty, R W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Frank, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gianotti, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Greening, T C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hansen, J B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Harvey, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hutchcroft, D E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Janot, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jost, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kado, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Maley, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mato, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Moutoussi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ranjard, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rolandi, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schlatter, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sguazzoni, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tejessy, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Teubert, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Valassi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Videau, I</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ward, J J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Badaud, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dessagne, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Falvard, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fayolle, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gay, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jousset, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Michel, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Monteil, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pallin, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pascolo, J M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Perret, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hansen, J D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hansen, J R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hansen, P H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nilsson, B S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Waananen, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kyriakis, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Markou, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Simopoulou, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Vayaki, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zachariadou, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Blondel, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Brient, J C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Machefert, F P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rouge, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Swynghedauw, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tanaka, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Videau, H L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ciulli, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Focardi, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Parrini, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Antonelli, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Antonelli, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bencivenni, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bologna, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bossi, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Campana, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Capon, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Chiarella, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Laurelli, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mannocchi, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Murtas, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Murtas, G P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Passalacqua, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pepe-Altarelli, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Spagnolo, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kennedy, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lynch, J G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Negus, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">O'Shea, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Smith, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Thompson, A S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wasserbaech, S R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cavanaugh, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dhamotharan, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Geweniger, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hanke, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hepp, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kluge, E E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Leibenguth, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Putzer, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Stenzel, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tittel, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Werner, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wunsch, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Beuselinck, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Binnie, D M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cameron, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Davies, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dornan, P J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Girone, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hill, R D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Marinelli, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nowell, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Przysiezniak, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rutherford, S A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sedgbeer, J K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Thompson, J C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">White, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ghete, V M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Girtler, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kneringer, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kuhn, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rudolph, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bouhova-Thacker, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bowdery, C K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Clarke, D P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Finch, A J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Foster, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hughes, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jones, R W L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pearson, M R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Robertson, N A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Smizanska, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lemaître, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Blumenschein, U</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Holldorfer, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jakobs, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kayser, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kleinknecht, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Muller, A S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Quast, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Renk, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sander, H G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schmeling, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wachsmuth, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zeitnitz, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ziegler, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bonissent, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Carr, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Coyle, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Curtil, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ealet, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fouchez, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Leroy, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kachelhoffer, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Payre, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rousseau, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tilquin, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ragusa, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">David, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dietl, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ganis, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Huttmann, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lutjens, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mannert, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Manner, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Moser, H G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Settles, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wolf, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Boucrot, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Callot, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Davier, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Duflot, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Grivaz, J F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Heusse, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jacholkowska, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Loomis, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Serin, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Veillet, J J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">De Vivie de Regie, J B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Yuan, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bagliesi, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Boccali, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Foà, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Giammanco, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Giassi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ligabue, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Messineo, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Palla, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sanguinetti, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sciaba, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tenchini, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Venturi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Verdini, P G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Awunor, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Blair, G A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Coles, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cowan, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">García-Bellido, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Green, M G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jones, L T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Medcalf, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Misiejuk, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Strong, J A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Teixeira-Dias, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Clifft, R W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Edgecock, T R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Norton, P R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tomalin, I R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bloch-Devaux, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Boumediene, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Colas, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fabbro, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lancon, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lemaire, M C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Locci, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Perez, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rander, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Renardy, J F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rosowsky, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Seager, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Trabelsi, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tuchming, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Vallage, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Konstantinidis, N P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Litke, A M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Taylor, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Booth, C N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cartwright, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Combley, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hodgson, P N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lehto, M H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Thompson, L F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Affholderbach, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bohrer, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Brandt, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Grupen, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hess, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ngac, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Prange, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sieler, U</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Borean, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Giannini, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">He, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Putz, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rothberg, J E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Armstrong, S R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Berkelman, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cranmer, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ferguson, D P S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gao, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gonzalez, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hayes, O J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hu, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jin, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kile, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">McNamara, P A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nielsen, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pan, Y B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Von Wimmersperg-Toller, J H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wiedenmann, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wu, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wu, S L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wu, X</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zobernig, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dissertori, G</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">ALEPH Collaboration</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">valerie.brunner@cern.ch</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/ep-2001-094.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/ep-2001-094.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">EP</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="a">CERN LEP</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-12-19</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-02-19</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="p">Eur. Phys. J., C</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4823672</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="o">oai:cds.cern.ch:CERN-EP-2001-094</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ALEPHPAPER</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">For reviews, see for example: H.P. Nilles</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">110</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Rep. 110 (1984) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H.E. Haber and G. L. Kane</subfield>
<subfield code="p">75</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">117</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Rep. 117 (1985) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Farrar and P. Fayet</subfield>
<subfield code="p">575</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">76</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Lett. B 76 (1978) 575</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Weinberg</subfield>
<subfield code="p">287</subfield>
<subfield code="t">Phys. Rev., B</subfield>
<subfield code="v">26</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rev. B 26 (1982) 287</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Sakai and T. Yanagida</subfield>
<subfield code="p">83</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">197</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Nucl. Phys. B 197 (1982) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Dimopoulos, S. Raby and F. Wilczek</subfield>
<subfield code="p">133</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">212</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Lett. B 212 (1982) 133</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.C. Allanach, H. Dreiner, P. Morawitz and M.D. Williams, "Single Sneutrino/Slepton Production at LEP2 and the NLC"</subfield>
<subfield code="p">307</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">420</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 420 (1998) 307</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">ALEPH Collaboration, "Search for R-Parity Violating Decays of Supersymmetric Particles in e+e- Collisions at Centre-of-Mass Energies between 189-202 GeV"</subfield>
<subfield code="p">415</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">19</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Eur. Phys. J. C 19 (2001) 415</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">ALEPH Collaboration, "ALEPH: a detector for electron-positron annihilations at LEP", Nucl. Instrum. and Methods A 294 (1990) 121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Catani, Yu. L. Dokshitzer, M. Olsson, G. Turnock and B.R. Webber, "New clustering algorithm for multijet cross sections in e+e- annihilation"</subfield>
<subfield code="p">432</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">269</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Lett. B 269 (1991) 432</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">ALEPH Collaboration, "Performance of the ALEPH detector at LEP", Nucl. Instrum. and Methods A 360 (1995) 481</subfield>
<subfield code="s">Nucl. Instrum. and Methods A 360 (1995) 481</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Katsanevas and P. Morawitz, "SUSYGEN 2.2 - A Monte Carlo Event Generator for MSSM Sparticle Production at e+e- Colliders"</subfield>
<subfield code="p">227</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">112</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Comput. Phys. Commun. 112 (1998) 227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Barberio, B. van Eijk and Z. Wąs</subfield>
<subfield code="p">115</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Comput. Phys. Commun. 66 (1991) 115</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Jadach, Z. Wąs, R. Decker and J.H. Kühn, "The decay library TAUOLA"</subfield>
<subfield code="p">361</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">76</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Comput. Phys. Commun. 76 (1993) 361</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Sjöstrand, "High-Energy Physics Event Generation with PYTHIA 5.7 and JETSET 7.4"</subfield>
<subfield code="p">74</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Comput. Phys. Commun. 82 (1994) 74</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Jadach et al</subfield>
<subfield code="p">276</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Comput. Phys. Commun. 66 (1991) 276</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">11</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Skrzypek, S. Jadach, W. Placzek and Z. Was</subfield>
<subfield code="p">216</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">94</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Comput. Phys. Commun. 94 (1996) 216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Jadach et al</subfield>
<subfield code="p">298</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">390</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 390 (1997) 298</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.A.M. Vermaseren, in Proceedings of the IVth International Workshop on Gamma Gamma Interactions, Eds. G. Cochard and P. Kessler, Springer Verlag, 1980</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.-F. Grivaz and F. Le Diberder, "Complementary analyses and acceptance optimization in new particle searches", LAL preprint # 92-37 (1992)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">ALEPH Collaboration, "Search for Supersymmetry with a dominant R-Parity Violating LLĒ Coupling in e+e- Collisions at Centre-of-Mass Energies of 130 GeV to 172 GeV"</subfield>
<subfield code="p">433</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">4</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 4 (1998) 433</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">For reviews see for example: H. Dreiner, "An Introduction to Explicit R-parity Violation"</subfield>
<subfield code="r">hep-ph/9707435</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">published in Perspectives on Supersymmetry, ed. G.L. Kane, World Scientific, Singapore (1998); G. Bhattacharyya</subfield>
<subfield code="p">83</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">52</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B Proc. Suppl. 52 (1997) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">12</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">astro-ph/0101431</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Gray, M E</subfield>
<subfield code="u">Cambridge University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Infrared constraints on the dark mass concentration observed in the cluster Abell 1942</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">24 Jan 2001</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We present a deep H-band image of the region in the vicinity of the cluster Abell 1942 containing the puzzling dark matter concentration detected in an optical weak lensing study by Erben et al. (2000). We demonstrate that our limiting magnitude, H=22, would be sufficient to detect clusters of appropriate mass out to redshifts comparable with the mean redshift of the background sources. Despite this, our infrared image reveals no obvious overdensity of sources at the location of the lensing mass peak, nor an excess of sources in the I-H vs. H colour-magnitude diagram. We use this to further constrain the luminosity and mass-to-light ratio of the putative dark clump as a function of its redshift. We find that for spatially-flat cosmologies, background lensing clusters with reasonable mass-to-light ratios lying in the redshift range 0&lt;z&lt;1 are strongly excluded, leaving open the possibility that the mass concentration is a new type of truly dark object.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Astrophysics and Astronomy</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, R S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lewis, J R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">McMahon, R G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Firth, A E</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Meghan Gray &lt;meg@ast.cam.ac.uk&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.ps.gz</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig1.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig2.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig3.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig4.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig5a.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig5b.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig6.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0101431.fig7.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2001</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Caltech</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">IoA, Cambridge</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-01-25</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2001-11-02</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Gray, Meghan E.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Ellis, Richard S.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Lewis, James R.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">McMahon, Richard G.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Firth, Andrew E.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="p">Mon. Not. R. Astron. Soc.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Allen S.W.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 296 (1998) 392</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Bacon D., Refregier A., Ellis R.S.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 318 (2000) 625</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Beckett M., Mackay C., McMahon R., Parry I., Ellis R.S., Chan S.J., Hoenig M.,</subfield>
<subfield code="s">Proc. SPIE 3354 (1998) 431</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Bertin E., Arnouts S., 1996, Astron. Astrophys. Suppl. Ser., 117, 393 Bonnet H., Mellier Y., Fort B.,</subfield>
<subfield code="s">Astrophys. J. 427 (1994) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Bower R.G., Lucey J.R., Ellis R.S.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 254 (1992) 589</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Bower R.G., Lucey J.R., Ellis R.S.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 254 (1992) 601</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Clowe D., Luppino G.A., Kaiser N., Henry J.P., Gioia I.M., 1998, Astrophys. J., 497L Couch W.J., Ellis R.S., Malin D.F., MacLaren I.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 249 (1991) 606</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">da Costa L., et al.,</subfield>
<subfield code="s">Astron. Astrophys. 343 (1999) 29</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Erben T., van Waerbeke L., Mellier Y., Schneider P., Cuillandre J.-C., Castander F.J., Dantel-Fort M.,</subfield>
<subfield code="s">Astron. Astrophys. 355 (2000) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Fahlman G., Kaiser N., Squires G., Woods D.,</subfield>
<subfield code="s">Astrophys. J. 437 (1994) 56</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Firth, A., 2000, Clustering at High Redshift, Astropart. Phys. Conference Series, Vol. 200, p. 404 Fischer P.,</subfield>
<subfield code="s">Astron. J. 117 (1999) 2024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Gladders M.D., Yee H.K.C.,</subfield>
<subfield code="s">Astron. J. 120 (2000) 2148</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Gray M.E., Ellis R.S., Refregier A., Bézecourt J., McMahon R.G., Hoenig M.D : 318 (2000) 573 Hradecky V., Jones C., Donnelly R.H., Djorgovski S.G., Gal R.R., Odeqahn S.C.,</subfield>
<subfield code="s">Astrophys. J. 543 (2000) 521</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Kaiser N., Wilson G., Luppino G.A., 2000,</subfield>
<subfield code="r">astro-ph/0003338</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Kaiser N., Squires G.,</subfield>
<subfield code="s">Astrophys. J. 404 (1993) 441</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Kneib J.P., Ellis R.S., Smail I., Couch W.J., Sharples R.M.,</subfield>
<subfield code="s">Astrophys. J. 471 (1996) 643</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Marzke R., McCarthy P.J., Persson E., et al., 1999, Photometric Redshifts and the Detection of High Redshift Galaxies, Astropart. Phys. Conference Series, Vol. 191, p. 148 Menanteau F., Ellis R.S., Abraham R.G., Barger Astron. J., Cowie L.L.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 309 (1999) 208</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Metzler C.A., White M., Loken C., 2000, Astrophys. J., submitted</subfield>
<subfield code="r">astro-ph/0005442</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Miralda-Escude J., Babul A.,</subfield>
<subfield code="s">Astrophys. J. 449 (1995) 18</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Persson S.E., Murphy D.C., Krzeminsky W., Rother M., Rieke M.J., 1998, Astron. J., 116 Refregier A., Heavens A., Heymans C.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 319 (2000) 649</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Schneider P.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 283 (1996) 837</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Smail I., Ellis R.S., Fitchett M.J., Edge A.C.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 273 (1995) 277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Smail I., Dressler A., Kneib J.-P., Ellis R.S., Couch W.J., Sharples R.M., Oemler Astron. J.,</subfield>
<subfield code="s">Astrophys. J. 469 (1996) 508</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Squires G., Neumann D.M., Kaiser N., Arnaud M., Babul A., Boehringer H., Fahlman G., Woods D.,</subfield>
<subfield code="s">Astrophys. J. 482 (1997) 648</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">van Kampen E., Katgert P.,</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 289 (1997) 327</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">van Waerbeke L, et al.,</subfield>
<subfield code="s">Astron. Astrophys. 538 (2000) 30</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o"> </subfield>
<subfield code="m">Wittman D.M., Tyson J.A., Kirkman D., Dell’Antonio I., Bernstein G.,</subfield>
<subfield code="s">Nature 405 (2000) 143</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0105155</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-TH-2001-131</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Mangano, M L</subfield>
<subfield code="u">CERN</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Physics at the front-end of a neutrino factory : a quantitative appraisal</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">16 May 2001</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">1 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We present a quantitative appraisal of the physics potential for neutrino experiments at the front-end of a muon storage ring. We estimate the foreseeable accuracy in the determination of several interesting observables, and explore the consequences of these measurements. We discuss the extraction of individual quark and antiquark densities from polarized and unpolarized deep-inelastic scattering. In particular we study the implications for the understanding of the nucleon spin structure. We assess the determination of alpha_s from scaling violation of structure functions, and from sum rules, and the determination of sin^2(theta_W) from elastic nu-e and deep-inelastic nu-p scattering. We then consider the production of charmed hadrons, and the measurement of their absolute branching ratios. We study the polarization of Lambda baryons produced in the current and target fragmentation regions. Finally, we discuss the sensitivity to physics beyond the Standard Model.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Alekhin, S I</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Anselmino, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ball, R D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Boglione, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">D'Alesio, U</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Davidson, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">De Lellis, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, J</subfield>
+ <subfield code="0">AUTHOR|(SzGeCERN)aaa0005</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Forte, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gambino, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gehrmann, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kataev, A L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kotzinian, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kulagin, S A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lehmann-Dronke, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Migliozzi, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Murgia, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ridolfi, G</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Michelangelo MANGANO &lt;Michelangelo.Mangano@cern.ch&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0105155.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0105155.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2001</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">TH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">CERN</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">nuDIS Working group of the ECFA-CERN Neutrino-Factory Study Group</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-05-17</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-05-25</subfield>
<subfield code="o">MH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4628020</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Geer</subfield>
<subfield code="p">6989</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 6989</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9712290</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 6989-6997</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">039903</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 039903</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">The Muon Collider Collab., µ+µ- Collider: a feasibility study, Report BNL-52503, Fermilab-Conf-96/092, LBNL-38946 (1996); B. Autin, A. Blondel and J. Ellis (eds.), Prospective study of muon storage rings at CERN, Report CERN 99-02, ECFA 99-197 (Geneva, 1999)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Bigi et al., The potential for neutrino physics at muon colliders and dedicated high current muon storage rings, Report BNL-67404</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Albright et al</subfield>
<subfield code="r">hep-ex/0008064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.D. Ball, D.A. Harris and K.S. McFarland</subfield>
<subfield code="r">hep-ph/0009223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">submitted to the Proceedings of the Nufact '00 Workshop, June 2000, Monterey</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H.L. Lai et al</subfield>
<subfield code="p">1280</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 1280</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9606399</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 1280-1296</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Barone, C. Pascaud and F. Zomer</subfield>
<subfield code="p">243</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">12</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 12 (2000) 243</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9907512</subfield>
<subfield code="s">Eur. Phys. J. C 12 (2000) 243-262</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. I. Alekhin</subfield>
<subfield code="p">094022</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 094022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0011002</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 094022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">65</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Ridolfi</subfield>
<subfield code="p">278</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">666</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. A 666 (2000) 278</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.D. Ball and H.A.M. Tallini</subfield>
<subfield code="p">1327</subfield>
<subfield code="t">J. Phys., G</subfield>
<subfield code="v">25</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">J. Phys. G 25 (1999) 1327</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Forte</subfield>
<subfield code="r">hep-ph/9409416</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="r">hep-ph/9610238</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Forte, M.L. Mangano and G. Ridolfi</subfield>
<subfield code="r">hep-ph/0101192</subfield>
<subfield code="s">Nucl. Phys. B 602 (2001) 585-621</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">to appear in Nucl. Phys., B</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Blümlein and N. Kochelev</subfield>
<subfield code="p">296</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">381</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 381 (1996) 296</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">285</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">498</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 498 (1997) 285</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.A. Dicus</subfield>
<subfield code="p">1637</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">5</subfield>
<subfield code="y">1972</subfield>
<subfield code="s">Phys. Rev. D 5 (1972) 1637</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Anselmino, P. Gambino and J. Kalinowski</subfield>
<subfield code="p">267</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">64</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Z. Phys. C 64 (1994) 267</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Maul et al</subfield>
<subfield code="p">443</subfield>
<subfield code="t">Z. Phys., A</subfield>
<subfield code="v">356</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Z. Phys. A 356 (1997) 443</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Blümlein and N. Kochelev</subfield>
<subfield code="p">285</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">498</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 498 (1997) 285</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Ravishankar</subfield>
<subfield code="p">309</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">374</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 374 (1992) 309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Ehrnsperger and A. Schäfer</subfield>
<subfield code="p">619</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">348</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 348 (1995) 619</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Lichtenstadt and H.J. Lipkin</subfield>
<subfield code="p">119</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">353</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 353 (1995) 119</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Dai et al</subfield>
<subfield code="p">273</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 273</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.G. Ratcliffe</subfield>
<subfield code="p">383</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">365</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 365 (1996) 383</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N.W. Park, J. Schechter and H. Weigel</subfield>
<subfield code="p">420</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">228</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. B 228 (1989) 420</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.O. Bazarko et al</subfield>
<subfield code="p">189</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Z. Phys. C 65 (1989) 189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Mertig and W.L. van Neerven</subfield>
<subfield code="p">637</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">70</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Z. Phys. C 70 (1996) 637</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Vogelsang</subfield>
<subfield code="p">2023</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 2023</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. de Florian and R. Sassot</subfield>
<subfield code="p">6052</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">51</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 6052</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.D. Ball, S. Forte and G. Ridolfi</subfield>
<subfield code="p">255</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">378</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 378 (1996) 255</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, S. Forte and G. Ridolfi</subfield>
<subfield code="p">277</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">534</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 534 (1998) 277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">138</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">74</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B Proc. Suppl. 74 (1999) 138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, R.D. Ball, S. Forte and G. Ridolfi</subfield>
<subfield code="p">1145</subfield>
<subfield code="t">Acta Phys. Pol., B</subfield>
<subfield code="v">29</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Acta Phys. Pol. B 29 (1998) 1145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9803237</subfield>
<subfield code="s">Acta Phys. Pol. B 29 (1998) 1145-1173</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H.L. Lai et al. (CTEQ Collab.)</subfield>
<subfield code="p">375</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">12</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 12 (2000) 375</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9903282</subfield>
<subfield code="s">Eur. Phys. J. C 12 (2000) 375-392</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, R.D. Ball, S. Forte and G. Ridolfi</subfield>
<subfield code="p">337</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">496</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 496 (1997) 337</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">1145</subfield>
<subfield code="t">Acta Phys. Pol., B</subfield>
<subfield code="v">29</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Acta Phys. Pol. B 29 (1998) 1145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli and G.G. Ross</subfield>
<subfield code="p">391</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">212</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Lett. B 212 (1988) 391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. V. Efremov and O. V. Teryaev, JINR-E2-88-287, in Proceedings of Symposium on Hadron Interactions - Theory and Phenomenology, Bechyne, June 26 - July 1, 1988; ed. by J. Fischer et al. (Czech. Acad. Science, Inst. Phys., 1988) p. 432; R.D. Carlitz, J.C. Collins and A.H. Mueller</subfield>
<subfield code="p">229</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">214</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Lett. B 214 (1988) 229</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli and B. Lampe</subfield>
<subfield code="p">315</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">47</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Z. Phys. C 47 (1990) 315</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Vogelsang</subfield>
<subfield code="p">275</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Z. Phys. C 50 (1991) 275</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.M. Shore and G. Veneziano</subfield>
<subfield code="p">75</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">244</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Lett. B 244 (1990) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">381</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 381 (1992) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">see also G. M. Shore</subfield>
<subfield code="r">hep-ph/9812355</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Forte</subfield>
<subfield code="p">189</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">224</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. B 224 (1989) 189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">331</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Nucl. Phys. B 331 (1990) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Forte and E.V. Shuryak</subfield>
<subfield code="p">153</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">357</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Nucl. Phys. B 357 (1991) 153</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.J. Brodsky and B.-Q. Ma</subfield>
<subfield code="p">317</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">381</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 381 (1996) 317</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.J. Brodsky, J. Ellis and M. Karliner</subfield>
<subfield code="p">309</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">206</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Lett. B 206 (1988) 309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Ellis and M. Karliner</subfield>
<subfield code="r">hep-ph/9601280</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Glück et al.</subfield>
<subfield code="r">hep-ph/0011215</subfield>
<subfield code="s">Phys.Rev. D63 (2001) 094005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Adams et al. (Spin Muon Collab.)</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Nucl. Instrum. Methods Phys. Res., A</subfield>
<subfield code="v">437</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Instrum. Methods Phys. Res. A 437 (1999) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Adeva et al. (SMC Collab.)</subfield>
<subfield code="p">112001</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 112001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. L. Anthony et al. (E155 Collab.)</subfield>
<subfield code="p">19</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">493</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 493 (2000) 19</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.M. Barnett</subfield>
<subfield code="p">1163</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">36</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Phys. Rev. Lett. 36 (1976) 1163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.A. Aivazis, J.C. Collins, F.I. Olness and W. Tung</subfield>
<subfield code="p">3102</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 3102</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Gehrmann and W.J. Stirling</subfield>
<subfield code="p">6100</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 6100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Glück, E. Reya, M. Stratmann and W. Vogelsang</subfield>
<subfield code="p">4775</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 4775</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.J. Gross and C.H. Llewellyn Smith</subfield>
<subfield code="p">337</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">14</subfield>
<subfield code="y">1969</subfield>
<subfield code="s">Nucl. Phys. B 14 (1969) 337</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. D. Ball and S. Forte</subfield>
<subfield code="p">365</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">358</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 358 (1995) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9506233</subfield>
<subfield code="s">Phys.Lett. B358 (1995) 365-378</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="r">hep-ph/9607289</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Santiago and F.J. Yndurain</subfield>
<subfield code="p">45</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">563</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 563 (1999) 45</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9904344</subfield>
<subfield code="s">Nucl.Phys. B563 (1999) 45-62</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.S. Fadin and L.N. Lipatov</subfield>
<subfield code="p">127</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">429</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 429 (1998) 127</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Ciafaloni, D. Colferai and G. Salam</subfield>
<subfield code="p">114036</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 114036</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, R.D. Ball and S. Forte</subfield>
<subfield code="r">hep-ph/0011270</subfield>
<subfield code="s">Nucl.Phys. B599 (2001) 383-423</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.G. Seligman et al.</subfield>
<subfield code="p">1213</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 79 (1997) 1213</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. L. Kataev, G. Parente and A.V. Sidorov</subfield>
<subfield code="p">405</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">573</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 573 (2000) 405</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9905310</subfield>
<subfield code="s">Nucl.Phys. B573 (2000) 405-433</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.L. Kataev, G. Parente and A.V. Sidorov, preprint CERN-TH/2000-343</subfield>
<subfield code="r">hep-ph/0012014</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and work in progress</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.I. Alekhin and A.L. Kataev</subfield>
<subfield code="p">402</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">452</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 452 (1999) 402</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9812348</subfield>
<subfield code="s">Phys.Lett. B452 (1999) 402-408</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Bethke</subfield>
<subfield code="p">R27</subfield>
<subfield code="t">J. Phys., G</subfield>
<subfield code="v">26</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">J. Phys. G 26 (2000) R27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/0004021</subfield>
<subfield code="s">J.Phys. G26 (2000) R27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Hinchliffe and A.V. Manohar</subfield>
<subfield code="p">643</subfield>
<subfield code="t">Annu. Rev. Nucl. Part. Sci.</subfield>
<subfield code="v">50</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Annu. Rev. Nucl. Part. Sci. 50 (2000) 643</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0004186</subfield>
<subfield code="s">Ann.Rev.Nucl.Part.Sci. 50 (2000) 643-678</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[45]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Georgi and H.D. Politzer</subfield>
<subfield code="p">1829</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">14</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Phys. Rev. D 14 (1976) 1829</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[46]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E.B. Zijlstra and W.L. van Neerven</subfield>
<subfield code="p">377</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">297</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Lett. B 297 (1992) 377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[47]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.L. van Neerven and A. Vogt</subfield>
<subfield code="p">263</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">568</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 568 (2000) 263</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9907472</subfield>
<subfield code="s">Nucl.Phys. B568 (2000) 263-286</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="r">hep-ph/0103123</subfield>
<subfield code="s">Nucl.Phys. B603 (2001) 42-68</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[48]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.L. van Neerven and A. Vogt</subfield>
<subfield code="p">111</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">490</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 490 (2000) 111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0007362</subfield>
<subfield code="s">Phys.Lett. B490 (2000) 111-118</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[49]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Larin, T. van Ritbergen and J.A. Vermaseren</subfield>
<subfield code="p">41</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">427</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 427 (1994) 41</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[50]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Larin, P. Nogueira, T. van Ritbergen and J.A. Vermaseren</subfield>
<subfield code="p">338</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">492</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 492 (1997) 338</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9605317</subfield>
<subfield code="s">Nucl.Phys. B492 (1997) 338-378</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[51]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Retey and J.A. Vermaseren, preprint TTP00-13, NIKHEF-2000-018</subfield>
<subfield code="r">hep-ph/0007294</subfield>
<subfield code="s">Nucl.Phys. B604 (2001) 281-311</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[52]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.A. Gracey</subfield>
<subfield code="p">141</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">322</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 322 (1994) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9401214</subfield>
<subfield code="s">Phys.Lett. B322 (1994) 141-146</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[53]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Blümlein and A. Vogt</subfield>
<subfield code="p">149</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">370</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 370 (1996) 149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9510410</subfield>
<subfield code="s">Phys.Lett. B370 (1996) 149-155</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[54]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Catani et al., preprint CERN-TH/2000-131</subfield>
<subfield code="r">hep-ph/0005025</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">in Standard model physics (and more) at the LHC, eds. G. Altarelli and M. Mangano, Report CERN 2000-004 (Geneva, 2000)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[55]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.L. Kataev, A.V. Kotikov, G. Parente and A.V. Sidorov</subfield>
<subfield code="p">374</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">417</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 417 (1998) 374</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9706534</subfield>
<subfield code="s">Phys.Lett. B417 (1998) 374-384</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[56]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Beneke</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">317</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rep. 317 (1999) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9807443</subfield>
<subfield code="s">Phys.Rept. 317 (1999) 1-142</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[57]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Beneke and V.M. Braun</subfield>
<subfield code="r">hep-ph/0010208</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[58]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dasgupta and B.R. Webber</subfield>
<subfield code="p">273</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">382</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 382 (1996) 273</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9604388</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[59]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Maul, E. Stein, A. Schafer and L. Mankiewicz</subfield>
<subfield code="p">100</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">401</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 401 (1997) 100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9612300</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[60]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.V. Sidorov et al. (IHEP-JINR Neutrino Detector Collab.)</subfield>
<subfield code="p">405</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Eur. Phys. J. C 10 (1999) 405</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/9905038</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[61]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.I. Alekhin et al. (IHEP-JINR Neutrino Detector Collab.), preprint IHEP-01-18 (2001)</subfield>
<subfield code="r">hep-ex/0104013</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[62]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Adloff et al. (H1 Collab.)</subfield>
<subfield code="r">hep-ex/0012052</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[63]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne</subfield>
<subfield code="p">117</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">18</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 18 (2000) 117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0007099</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[64]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E.B. Zijlstra and W.L. van Neerven</subfield>
<subfield code="p">525</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">383</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 383 (1992) 525</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[65]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.G. Gorishny and S.A. Larin</subfield>
<subfield code="p">109</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">172</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Lett. B 172 (1986) 109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Larin and J.A.M. Vermaseren</subfield>
<subfield code="p">345</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">259</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Lett. B 259 (1991) 345</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[66]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.L. Kataev and A.V. Sidorov, preprint CERN-TH/7235-94</subfield>
<subfield code="r">hep-ph/9405254</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">in Proceedings of Rencontre de Moriond - Hadronic session of 'QCD and high energy hadronic interactions', Méribel-les-Allues, 1994, ed. J. Trân Thanh Vân (Editions Frontières, Gif-sur-Yvette, 1995), p. 189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[67]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.H. Kim et al.</subfield>
<subfield code="p">3595</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 81 (1998) 3595</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/9808015</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[68]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Chyla and A.L. Kataev</subfield>
<subfield code="p">385</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">297</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Lett. B 297 (1992) 385</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9209213</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[69]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.L. Kataev and A.V. Sidorov</subfield>
<subfield code="p">179</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">331</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 331 (1994) 179</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9402342</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[70]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Blümlein and W. L. van Neerven</subfield>
<subfield code="p">417</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">450</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 450 (1999) 417</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9811351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[71]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.L. Kataev and V.V. Starshenko</subfield>
<subfield code="p">235</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 235</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9502348</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.A. Samuel, J. Ellis and M. Karliner</subfield>
<subfield code="p">4380</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">74</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. Lett. 74 (1995) 4380</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9503411</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[72]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Bernreuther and W. Wetzel</subfield>
<subfield code="p">228</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">197</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Nucl. Phys. B 197 (1982) 228</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">758</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">513</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 513 (1998) 758</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]; S.A. Larin, T. van Ritbergen and J.A. Vermaseren</subfield>
<subfield code="p">278</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">438</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 438 (1995) 278</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9411260</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.G. Chetyrkin, B.A. Kniehl and M. Steinhauser</subfield>
<subfield code="p">2184</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 79 (1997) 2184</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9706430</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[73]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E.V. Shuryak and A.I. Vainshtein</subfield>
<subfield code="p">451</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">199</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Nucl. Phys. B 199 (1982) 451</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[74]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.A. Shifman, A.I. Vainshtein and V.I. Zakharov</subfield>
<subfield code="p">385</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">147</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Nucl. Phys. B 147 (1979) 385</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[75]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.M. Braun and A.V. Kolesnichenko</subfield>
<subfield code="p">723</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">283</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Nucl. Phys. B 283 (1987) 723</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[76]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.G. Ross and R.G. Roberts</subfield>
<subfield code="p">425</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">322</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 322 (1994) 425</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9312237</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[77]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Balla, M.V. Polyakov and C. Weiss</subfield>
<subfield code="p">327</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">510</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 510 (1998) 327</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9707515</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[78]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.G. Oldeman (CHORUS Collab.)</subfield>
<subfield code="p">96</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 79 (1999) 96</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.G. Oldeman, PhD Thesis, Amsterdam University, June 2000 (unpublished)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[79]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U.K. Yang et al. (CCFR-NuTeV Collab.)</subfield>
<subfield code="r">hep-ex/0010001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[80]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.D. Bjorken</subfield>
<subfield code="p">1767</subfield>
<subfield code="t">Phys. Rev.</subfield>
<subfield code="v">163</subfield>
<subfield code="y">1967</subfield>
<subfield code="s">Phys. Rev. 163 (1967) 1767</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[81]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.A. Bardeen, A.J. Buras, D.W. Duke and T. Muta</subfield>
<subfield code="p">3998</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">18</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Rev. D 18 (1978) 3998</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, R.K. Ellis and G. Martinelli</subfield>
<subfield code="p">521</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">143</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Nucl. Phys. B 143 (1978) 521</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[82]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.G. Chetyrkin, S.G. Gorishny, S.A. Larin and F.V. Tkachov</subfield>
<subfield code="p">230</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">137</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Lett. B 137 (1984) 230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[83]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Larin, F.V. Tkachov and J.A. Vermaseren</subfield>
<subfield code="p">862</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Rev. Lett. 66 (1991) 862</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[84]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Arneodo</subfield>
<subfield code="p">301</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">240</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rep. 240 (1994) 301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[85]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Piller and W. Weise</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">330</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rep. 330 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9908230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[86]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Amaudruz et al.</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">441</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 441 (1995) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Arneodo et al.</subfield>
<subfield code="p">12</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">441</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 441 (1995) 12</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[87]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.C. Benvenuti et al. (BCDMS Collab.)</subfield>
<subfield code="p">483</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">189</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 483</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[88]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Gomez et al.</subfield>
<subfield code="p">4348</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 4348</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[89]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.R. Adams et al. (E665 Collab.)</subfield>
<subfield code="p">403</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">67</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Z. Phys. C 67 (1995) 403</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/9505006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[90]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.L. Adler</subfield>
<subfield code="p">B963</subfield>
<subfield code="t">Phys. Rev.</subfield>
<subfield code="v">135</subfield>
<subfield code="y">1964</subfield>
<subfield code="s">Phys. Rev. 135 (1964) B963</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[91]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.S. Bell</subfield>
<subfield code="p">57</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">13</subfield>
<subfield code="y">1964</subfield>
<subfield code="s">Phys. Rev. Lett. 13 (1964) 57</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[92]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.A. Piketty and L. Stodolsky</subfield>
<subfield code="p">571</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">15</subfield>
<subfield code="y">1970</subfield>
<subfield code="s">Nucl. Phys. B 15 (1970) 571</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[93]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.Z. Kopeliovich and P. Marage</subfield>
<subfield code="p">1513</subfield>
<subfield code="t">Int. J. Mod. Phys., A</subfield>
<subfield code="v">8</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Int. J. Mod. Phys. A 8 (1993) 1513</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[94]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.P. Allport et al. (BEBC WA59 Collab.)</subfield>
<subfield code="p">417</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">232</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. B 232 (1989) 417</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[95]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Boros, J.T. Londergan and A.W. Thomas</subfield>
<subfield code="p">114030</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 114030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9804410</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[96]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U.K. Yang et al. (CCFR-NuTeV Collab.)</subfield>
<subfield code="r">hep-ex/0009041</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[97]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.A. Aivazis, F.I. Olness and W. Tung</subfield>
<subfield code="p">2339</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 2339</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Barone, M. Genovese, N.N. Nikolaev, E. Predazzi and B. Zakharov</subfield>
<subfield code="p">279</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">268</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Lett. B 268 (1991) 279</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">83</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">70</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Z. Phys. C 70 (1996) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9505343</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[98]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.S. Thorne and R.G. Roberts</subfield>
<subfield code="p">303</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">421</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 421 (1998) 303</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9711223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[99]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne</subfield>
<subfield code="p">463</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">4</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 4 (1998) 463</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9803445</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[100]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L.L. Frankfurt, M.I. Strikman and S. Liuti</subfield>
<subfield code="p">1725</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 1725</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[101]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Kobayashi, S. Kumano and M. Miyama</subfield>
<subfield code="p">465</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">354</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 354 (1995) 465</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9501313</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[102]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.J. Eskola, V.J. Kolhinen and P.V. Ruuskanen</subfield>
<subfield code="p">351</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">535</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 535 (1998) 351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9802350</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[103]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Kulagin</subfield>
<subfield code="r">hep-ph/9812532</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[104]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.V. Landshoff, J.C. Polkinghorne and R.D. Short</subfield>
<subfield code="p">225</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">28</subfield>
<subfield code="y">1971</subfield>
<subfield code="s">Nucl. Phys. B 28 (1971) 225</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[105]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Kulagin, G. Piller and W. Weise</subfield>
<subfield code="p">1154</subfield>
<subfield code="t">Phys. Rev., C</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. C 50 (1994) 1154</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">nucl-th/9402015</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[106]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.V. Akulinichev, S.A. Kulagin and G.M. Vagradov</subfield>
<subfield code="p">485</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">158</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Lett. B 158 (1985) 485</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[107]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Kulagin</subfield>
<subfield code="p">653</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">500</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Nucl. Phys. A 500 (1989) 653</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[108]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.B. West, Ann. Phys. (NY) 74 (1972) 464</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[109]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Kulagin and A.V. Sidorov</subfield>
<subfield code="p">261</subfield>
<subfield code="t">Eur. Phys. J., A</subfield>
<subfield code="v">9</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. A 9 (2000) 261</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0009150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[110]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.C. Benvenuti et al. (BCDMS Collab.)</subfield>
<subfield code="p">29</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">63</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Z. Phys. C 63 (1994) 29</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[111]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Vakili et al. (CCFR Collab.)</subfield>
<subfield code="p">052003</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 052003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/9905052</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[112]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Kulagin</subfield>
<subfield code="p">435</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">640</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. A 640 (1998) 435</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">nucl-th/9801039</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[113]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I.R. Afnan, F. Bissey, J. Gomez, A.T. Katramatou, W. Melnitchouk, G.G. Petratos and A.W. Thomas</subfield>
<subfield code="r">nucl-th/0006003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[114]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Guzey et al.</subfield>
<subfield code="r">hep-ph/0102133</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[115]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Sarantakos, A. Sirlin and W.J. Marciano</subfield>
<subfield code="p">84</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">217</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Nucl. Phys. B 217 (1983) 84</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.Y. Bardin and V.A. Dokuchaeva</subfield>
<subfield code="p">975</subfield>
<subfield code="t">Sov. J. Nucl. Phys.</subfield>
<subfield code="v">43</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Sov. J. Nucl. Phys. 43 (1986) 975</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.Y. Bardin and V.A. Dokuchaeva</subfield>
<subfield code="p">839</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">287</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Nucl. Phys. B 287 (1987) 839</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[116]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Degrassi et al., Phys. Lett. B 350 (1995) 75; G. Degrassi and P. Gambino</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">567</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 567 (2000) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[117]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.N. Bahcall, M. Kamionkowski and A. Sirlin</subfield>
<subfield code="p">6146</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">51</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 6146</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9502003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[118]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">See F. Jegerlehner</subfield>
<subfield code="r">hep-ph/9901386</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and references therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[119]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.S. McFarland et al. (NuTeV Collab.)</subfield>
<subfield code="r">hep-ex/9806013</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">in Proceedings 33rd Rencontres de Moriond on Electroweak Interactions and Unified Theories, Les Arcs, 1998</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[120]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. E. Peskin and T. Takeuchi</subfield>
<subfield code="p">381</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">46</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. D 46 (1992) 381</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. J. Marciano and J. L. Rosner</subfield>
<subfield code="p">2963</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 2963</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">2963</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">68</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. Lett. 68 (1992) 2963</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[121]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli and R. Barbieri</subfield>
<subfield code="p">161</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">253</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Lett. B 253 (1991) 161</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. C. Kennedy and P. Langacker</subfield>
<subfield code="p">2967</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 2967</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">2967</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Rev. Lett. 66 (1991) 2967</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[122]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.E. Groom et al. (Particle Data Group)</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[123]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Migliozzi et al.</subfield>
<subfield code="p">217</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">462</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 462 (1999) 217</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[124]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Finjord and F. Ravndal</subfield>
<subfield code="p">61</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Phys. Lett. B 58 (1975) 61</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[125]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.E. Shrock and B.W. Lee</subfield>
<subfield code="p">2539</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">13</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Phys. Rev. D 13 (1976) 2539</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[126]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Avilez et al.</subfield>
<subfield code="p">149</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Phys. Lett. B 66 (1977) 149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[127]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Avilez and T. Kobayashi</subfield>
<subfield code="p">3448</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">19</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Phys. Rev. D 19 (1979) 3448</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[128]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Avilez et al.</subfield>
<subfield code="p">709</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">17</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Rev. D 17 (1978) 709</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[129]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Amer et al.</subfield>
<subfield code="p">48</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Phys. Lett. B 81 (1979) 48</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[130]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.G. Kovalenko</subfield>
<subfield code="p">934</subfield>
<subfield code="t">Sov. J. Nucl. Phys.</subfield>
<subfield code="v">52</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Sov. J. Nucl. Phys. 52 (1990) 934</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[131]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.T. Jones et al. (WA21 Collab.)</subfield>
<subfield code="p">593</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">36</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Z. Phys. C 36 (1987) 593</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[132]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.V. Ammosov et al.</subfield>
<subfield code="p">247</subfield>
<subfield code="t">JETP Lett.</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">JETP Lett. 58 (1993) 247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[133]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Son et al.</subfield>
<subfield code="p">2129</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">28</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Phys. Rev. D 28 (1983) 2129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[134]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Ushida et al. (E531 Collab.)</subfield>
<subfield code="p">375</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">206</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Lett. B 206 (1988) 375</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[135]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Armenise et al.</subfield>
<subfield code="p">409</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">104</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Phys. Lett. B 104 (1981) 409</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[136]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. De Lellis, P. Migliozzi and P. Zucchelli</subfield>
<subfield code="p">7</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">507</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 507 (2001) 7</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0104066</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[137]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Corcella et al.</subfield>
<subfield code="r">hep-ph/0011363</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[138]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Sjöstrand, report LU-TP-95-20</subfield>
<subfield code="r">hep-ph/9508391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[139]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Ingelman et al.</subfield>
<subfield code="p">108</subfield>
<subfield code="t">Comput. Phys. Commun.</subfield>
<subfield code="v">101</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Comput. Phys. Commun. 101 (1997) 108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[140]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Bolton</subfield>
<subfield code="r">hep-ex/9708014</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[141]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Annis et al. (CHORUS Collab.)</subfield>
<subfield code="p">458</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">435</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 435 (1998) 458</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[142]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Conrad et al.</subfield>
<subfield code="p">1341</subfield>
<subfield code="t">Rev. Mod. Phys.</subfield>
<subfield code="v">70</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Rev. Mod. Phys. 70 (1998) 1341</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[143]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Adams et al. (NuTeV Collab.)</subfield>
<subfield code="p">092001</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 092001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[144]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.E. Asratian et al. (BBCN Collab.)</subfield>
<subfield code="p">55</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Z. Phys. C 58 (1993) 55</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[145]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.D. Richman and P.R. Burchat</subfield>
<subfield code="p">893</subfield>
<subfield code="t">Rev. Mod. Phys.</subfield>
<subfield code="v">67</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Rev. Mod. Phys. 67 (1995) 893</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[146]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Collins, L. Frankfurt and M. Strikman</subfield>
<subfield code="p">2982</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 2982</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[147]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.V. Radyushkin</subfield>
<subfield code="p">5524</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 5524</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[148]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.J. Brodsky, L. Frankfurt, J.F. Gunion, A.H. Mueller and M. Strikman</subfield>
<subfield code="p">3134</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 3134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. V. Radyushkin</subfield>
<subfield code="p">333</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">385</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 385 (1996) 333</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Mankiewicz, G. Piller and T. Weigl</subfield>
<subfield code="p">119</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">5</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 5 (1998) 119</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">017501</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 017501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Vanderhaeghen, P.A.M. Guichon and M. Guidal</subfield>
<subfield code="p">5064</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">80</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 80 (1998) 5064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[149]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Lehmann-Dronke, P.V. Pobylitsa, M.V. Polyakov, A. Schäfer and K. Goeke</subfield>
<subfield code="p">147</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">475</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 475 (2000) 147</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Lehmann-Dronke, M.V. Polyakov, A. Schäfer and K. Goeke</subfield>
<subfield code="p">114001</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 114001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0012108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[150]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Wirbel, B. Stech and M. Bauer</subfield>
<subfield code="p">637</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">29</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Z. Phys. C 29 (1985) 637</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Bauer and M. Wirbel</subfield>
<subfield code="p">671</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">42</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Z. Phys. C 42 (1989) 671</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[151]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H-n. Li and B. Melić</subfield>
<subfield code="p">695</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">11</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Eur. Phys. J. C 11 (1999) 695</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[152]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Abada et al</subfield>
<subfield code="p">268</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 83 (2000) 268</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Becirevic et al</subfield>
<subfield code="r">hep-lat/0002025</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Ali Khan et al</subfield>
<subfield code="r">hep-lat/0010009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. S. Kronfeld</subfield>
<subfield code="r">hep-ph/0010074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Lellouch and C.J.D. Lin (UKQCD Collab.)</subfield>
<subfield code="r">hep-ph/0011086</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[153]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.V. Radyushkin</subfield>
<subfield code="p">014030</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 014030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[154]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.D. Martin, R.G. Roberts and W.J. Stirling</subfield>
<subfield code="p">155</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">354</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 354 (1995) 155</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[155]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.T. Jones et al. (WA21 Collab.)</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">28</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Z. Phys. C 28 (1987) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[156]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Willocq et al. (WA59 Collab.)</subfield>
<subfield code="p">207</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Z. Phys. C 53 (1992) 207</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[157]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. DeProspo et al. (E632 Collab.)</subfield>
<subfield code="p">6691</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 6691</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[158]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Astier et al. (NOMAD Collab.)</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">588</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 588 (2000) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[159]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Trentadue and G. Veneziano</subfield>
<subfield code="p">201</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">323</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 323 (1994) 201</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[160]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Anselmino, M. Boglione, J. Hansson, and F. Murgia</subfield>
<subfield code="p">828</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 828</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[161]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.L. Jaffe</subfield>
<subfield code="p">6581</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 6581</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[162]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Ellis, D.E. Kharzeev and A. Kotzinian</subfield>
<subfield code="p">467</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">69</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Z. Phys. C 69 (1996) 467</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[163]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. de Florian, M. Stratmann, and W. Vogelsang</subfield>
<subfield code="p">5811</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 5811</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[164]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Kotzinian, A. Bravar and D. von Harrach</subfield>
<subfield code="p">329</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 2 (1998) 329</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[165]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Kotzinian</subfield>
<subfield code="r">hep-ph/9709259</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[166]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.L. Belostotski</subfield>
<subfield code="p">526</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 79 (1999) 526</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[167]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Boer, R. Jakob, and P.J. Mulders</subfield>
<subfield code="p">471</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">564</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 564 (2000) 471</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[168]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Boros, J.T. Londergan and A.W. Thomas</subfield>
<subfield code="p">014007</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 014007</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and D : 62 (2000) 014021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[169]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Ashery and H.J. Lipkin</subfield>
<subfield code="p">263</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">469</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 469 (1999) 263</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[170]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B-Q. Ma, I. Schmidt, J. Soffer, and J-Y. Yang</subfield>
<subfield code="p">657</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">16</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 16 (2000) 657</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">114009</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 114009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[171]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Anselmino, M. Boglione, and F. Murgia</subfield>
<subfield code="p">253</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">481</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 481 (2000) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[172]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Anselmino, D. Boer, U. D'Alesio, and F. Murgia</subfield>
<subfield code="p">054029</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 054029</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[173]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Indumathi, H.S. Mani and A. Rastogi</subfield>
<subfield code="p">094014</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 094014</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[174]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Burkardt and R.L. Jaffe</subfield>
<subfield code="p">2537</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">70</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. Lett. 70 (1993) 2537</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[175]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I.I. Bigi</subfield>
<subfield code="p">43</subfield>
<subfield code="t">Nuovo Cimento</subfield>
<subfield code="v">41</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Nuovo Cimento 41 (1977) 43</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and 581</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[176]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Melnitchouk and A.W. Thomas</subfield>
<subfield code="p">311</subfield>
<subfield code="t">Z. Phys., A</subfield>
<subfield code="v">353</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Z. Phys. A 353 (1996) 311</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[177]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Ellis, M. Karliner, D.E. Kharzeev and M.G. Sapozhnikov</subfield>
<subfield code="p">256</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">673</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. A 673 (2000) 256</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[178]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Carlitz and M. Kislinger</subfield>
<subfield code="p">336</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1970</subfield>
<subfield code="s">Phys. Rev. D 2 (1970) 336</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[179]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Naumov</subfield>
<subfield code="r">hep-ph/0101355</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[180]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Migliozzi et al</subfield>
<subfield code="p">19</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">494</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 494 (2000) 19</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[181]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Alton et al</subfield>
<subfield code="r">hep-ex/0008068</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[182]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Grossman</subfield>
<subfield code="p">141</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">359</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 359 (1995) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[183]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Langacker, M. Luo and A. Mann</subfield>
<subfield code="p">87</subfield>
<subfield code="t">Rev. Mod. Phys.</subfield>
<subfield code="v">64</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Rev. Mod. Phys. 64 (1992) 87</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[184]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. Cuypers and S. Davidson</subfield>
<subfield code="p">503</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 2 (1998) 503</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Davidson, D. Bailey and B.A. Campbell</subfield>
<subfield code="p">613</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Z. Phys. C 61 (1994) 613</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[185]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Leike</subfield>
<subfield code="p">143</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">317</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rep. 317 (1999) 143</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[186]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Datta, R. Gandhi, B. Mukhopadhyaya and P. Mehta</subfield>
<subfield code="r">hep-ph/0011375</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[187]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Giudice et al., Report of the Stopped-Muon Working Group, to appear.</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">BNL-40718</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">FERMILAB-Pub-87-222-T</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Nason, P</subfield>
<subfield code="u">Brookhaven Nat. Lab.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">The total cross section for the production of heavy quarks in hadronic collisions</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Upton, NY</subfield>
<subfield code="b">Brookhaven Nat. Lab.</subfield>
<subfield code="c">23 Dec 1987</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">42 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dawson, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, R K</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1987</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">1773607</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198804n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-PRE-82-006</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ellis, J</subfield>
+ <subfield code="0">AUTHOR|(SzGeCERN)aaa0005</subfield>
<subfield code="u">CERN</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">From the standard model to grand unification</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">1982</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">mult. p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1982</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">TH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-09-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="2">
<subfield code="f">820332</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="o">oai:cds.cern.ch:CERN-PRE-82-006</subfield>
<subfield code="p">cern:theory</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">1982n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">astro-ph/0104076</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dev, A</subfield>
<subfield code="u">Delhi University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Cosmic equation of state, Gravitational Lensing Statistics and Merging of Galaxies</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">4 Apr 2001</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">28 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">In this paper we investigate observational constraints on the cosmic equation of state of dark energy ($p = w \rho$) using gravitational lensing statistics. We carry out likelihood analysis of the lens surveys to constrain the cosmological parameters $\Omega_{m}$ and $w$. We start by constraining $\Omega_{m}$ and $w$ in the no-evolution model of galaxies where the comoving number density of galaxies is constant. We extend our study to evolutionary models of galaxies - Volmerange $&amp;$ Guiderdoni Model and Fast-Merging Model (of Broadhurst, Ellis $&amp;$ Glazebrook). For the no-evolution model we get $w \leq -0.24$ and $\Omega_{m}\leq 0.48$ at $1\sigma$ (68% confidence level). For the Volmerange $&amp;$ Guiderdoni Model we have $w \leq -0.2$ and $\Omega_{m} \leq 0.58$ at $1 \sigma$, and for the Fast Merging Model we get $w \leq -0.02$ and $\Omega_{m} \leq 0.93$ at $1\sigma$. For the case of constant $\Lambda$ ($w=-1$), all the models permit $\Omega_{m} = 0.3$ with 68% CL. We observe that the constraints on $w$ and $\Omega_{m}$ (and on $\Omega_{m}$ in the case of $w = -1$) obtained in the case of evolutionary models are weaker than those obtained in the case of the no-evolution model.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Astrophysics and Astronomy</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jain, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Panchapakesan, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mahajan, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bhatia, V B</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Deepak Jain &lt;deepak@physics.du.ac.in&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0104076.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0104076.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2001</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">10</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Delhi University</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-04-05</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2001-04-10</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Dev, Abha</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Jain, Deepak</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Perlmutter et al</subfield>
<subfield code="p">565</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">517</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Astrophys. J. 517 (1999) 565</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Perlmutter et al., Phys. Rev. Lett.: 83 (1999) 670</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. G. Riess et al</subfield>
<subfield code="p">1009</subfield>
<subfield code="t">Astron. J.</subfield>
<subfield code="v">116</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astron. J. 116 (1998) 1009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. de Bernardis et al</subfield>
<subfield code="p">955</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">404</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nature 404 (2000) 955</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. J. Ostriker &amp; P. J. Steinhardt</subfield>
<subfield code="p">600</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">377</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nature 377 (1995) 600</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Sahni &amp; Alexei Starobinsky, IJMP, D : 9 (2000) 373</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. F. Bloomfield Torres &amp; I. Waga</subfield>
<subfield code="p">712</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">279</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 279 (1996) 712</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Silveira &amp; I. Waga</subfield>
<subfield code="p">4890</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 4890</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Silveira &amp; I. Waga</subfield>
<subfield code="p">4625</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 4625</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Waga &amp; Ana P. M. R. Miceli, Phys. Rev. D : 59 (1999) 103507</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. S. Turner &amp; M. White, Phys. Rev. D : 56 (1997) 4439</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Huterer &amp; M. S. Turner, Phys. Rev. D : 60 (1999) 081301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Chiba, N. Sugiyama &amp; T. Nakamura, Mon. Not. R. As-tron. Soc.: 289 (1997) L5</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Chiba, N. Sugiyama &amp; T. Nakamura, Mon. Not. R. As-tron. Soc.: 301 (1998) 72</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. J. E. Peebles</subfield>
<subfield code="p">439</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">284</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Astrophys. J. 284 (1984) 439</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Ratra &amp; P. J. E. Peebles, Phys. Rev. D 37 (1988) 3406</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. R. Caldwell, R. Dave &amp; P. J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Efstathiou</subfield>
<subfield code="r">astro-ph/9904356</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(1999)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. A. S. Lima &amp; J. S. Alcaniz</subfield>
<subfield code="p">893</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">317</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 317 (2000) 893</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Wang et al</subfield>
<subfield code="p">17</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">530</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Astrophys. J. 530 (2000) 17</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. W. Rix, D. Maoz, E. Turner &amp; M. Fukugita</subfield>
<subfield code="p">49</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">435</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Astrophys. J. 435 (1994) 49</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Mao &amp; C. S. Kochanek</subfield>
<subfield code="p">569</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">268</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 268 (1994) 569</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Jain, N. Panchapakesan, S. Mahajan &amp; V. B. Bhatia, MPLA : 15 (2000) 41</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Broadhurst, R. Ellis &amp; K. Glazebrook</subfield>
<subfield code="p">55</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">355</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nature 355 (1992) 55</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[BEG]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Rocca-Volmerange &amp; B. Guiderdoni, Mon. Not. R. Astron. Soc. 247 (1990) 166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Toomre, in The Evolution of Galaxies and Stellar Populations, eds: B. M. Tinsley &amp; R. B. Larson (Yale Univ. Observatory), p. 401 (1977)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. Schweizer</subfield>
<subfield code="p">109</subfield>
<subfield code="t">Astron. J.</subfield>
<subfield code="v">111</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astron. J. 111 (1996) 109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">O. J. Eggen, D. Lynden-Bell &amp; A. R. Sandage</subfield>
<subfield code="p">748</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">136</subfield>
<subfield code="y">1962</subfield>
<subfield code="s">Astrophys. J. 136 (1962) 748</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. B. Partridge &amp; P. J. E. Peebles</subfield>
<subfield code="p">868</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">147</subfield>
<subfield code="y">1967</subfield>
<subfield code="s">Astrophys. J. 147 (1967) 868</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. P. Driver et al</subfield>
<subfield code="p">L23</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">449</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Astrophys. J. 449 (1995) L23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. M. Burkey et al</subfield>
<subfield code="p">L13</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">429</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Astrophys. J. 429 (1994) L13</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. M. Faber et al</subfield>
<subfield code="p">668</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">204</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Astrophys. J. 204 (1976) 668</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. G. Carlberg et al</subfield>
<subfield code="p">540</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">435</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Astrophys. J. 435 (1994) 540</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. E. Zepf</subfield>
<subfield code="p">377</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">390</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nature 390 (1997) 377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Glazebrook et al</subfield>
<subfield code="p">157</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">273</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 273 (1995) 157</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. J. Lilly et al</subfield>
<subfield code="p">108</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">455</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Astrophys. J. 455 (1995) 108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. S. Ellis et al</subfield>
<subfield code="p">235</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">280</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 280 (1996) 235</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. S. Ellis, Ann. Rev</subfield>
<subfield code="p">389</subfield>
<subfield code="t">Astron. Astrophys.</subfield>
<subfield code="v">35</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Astron. Astrophys. 35 (1997) 389</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Guiderdoni &amp; B. Rocca-Volmerange</subfield>
<subfield code="p">435</subfield>
<subfield code="t">Astron. Astrophys.</subfield>
<subfield code="v">252</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Astron. Astrophys. 252 (1991) 435</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. E. Zepf, &amp; D. C. Koo</subfield>
<subfield code="p">34</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">337</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Astrophys. J. 337 (1989) 34</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. K. C. Yee &amp; E. Ellingson</subfield>
<subfield code="p">37</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">445</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Astrophys. J. 445 (1995) 37</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Cole et al</subfield>
<subfield code="p">781</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">271</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 271 (1994) 781</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. M. Baugh, S. Cole &amp; C. S. Frenk</subfield>
<subfield code="p">L27</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">282</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 282 (1996) L27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. M. Baugh, S. Cole &amp; C. S. Frenk</subfield>
<subfield code="p">1361</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">283</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 283 (1996) 1361</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[45]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. M. Baugh et al</subfield>
<subfield code="p">504</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">498</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 498 (1998) 504</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[46]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Schechter</subfield>
<subfield code="p">297</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">203</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Astrophys. J. 203 (1976) 297</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[47]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. H. Press &amp; P. Schechter</subfield>
<subfield code="p">487</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">187</subfield>
<subfield code="y">1974</subfield>
<subfield code="s">Astrophys. J. 187 (1974) 487</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[48]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. E. Gunn &amp; J. R. Gott</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">176</subfield>
<subfield code="y">1972</subfield>
<subfield code="s">Astrophys. J. 176 (1972) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[49]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Loveday, B. A. Peterson, G. Efstathiou &amp; S. J. Maddox</subfield>
<subfield code="p">338</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">390</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Astrophys. J. 390 (1994) 338</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[50]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. L. Turner, J. P. Ostriker &amp; J. R. Gott III</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">284</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Astrophys. J. 284 (1984) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[51]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. L. Turner</subfield>
<subfield code="p">L43</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">365</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Astrophys. J. 365 (1990) L43</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[52]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Fukugita &amp; E. L. Turner</subfield>
<subfield code="p">99</subfield>
<subfield code="t">Mon. Not. R. Astron. Soc.</subfield>
<subfield code="v">253</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Mon. Not. R. Astron. Soc. 253 (1991) 99</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[53]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Fukugita, T. Futamase, M. Kasai &amp; E. L. Turner, Astrophys. J. 393 (1992) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[54]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. S. Kochanek</subfield>
<subfield code="p">12</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">419</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Astrophys. J. 419 (1993) 12</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[55]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. S. Kochanek</subfield>
<subfield code="p">638</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">466</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 466 (1996) 638</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[56]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. D. A. Hartwick &amp; D. Schade, Ann. Rev. Astron. Astrophys. 28 (1990) 437</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[57]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Yu-Chung N. Cheng &amp; L. M. Krauss</subfield>
<subfield code="p">697</subfield>
<subfield code="t">Int. J. Mod. Phys., A</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Int. J. Mod. Phys. A 15 (2000) 697</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[58]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. N. Bahcall et al</subfield>
<subfield code="p">56</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">387</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Astrophys. J. 387 (1992) 56</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[59]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. C. Hewett et al., Astron. J. 109 (1995) 1498 (LBQS)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[60]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Maoz et al., Astrophys. J. 409 (1993) 28 (Snapshot)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[61]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Crampton, R. D. McClure &amp; J. M. Fletcher</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">392</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Astrophys. J. 392 (1992) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[62]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. K. C. Yee, A. V. Filippenko &amp; D. Tang</subfield>
<subfield code="p">7</subfield>
<subfield code="t">Astron. J.</subfield>
<subfield code="v">105</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Astron. J. 105 (1993) 7</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[63]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Surdej et al</subfield>
<subfield code="p">2064</subfield>
<subfield code="t">Astron. J.</subfield>
<subfield code="v">105</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Astron. J. 105 (1993) 2064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[64]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Jain, N. Panchapakesan, S. Mahajan &amp; V. B. Bhatia</subfield>
<subfield code="r">astro-ph/9807129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(1998)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[65]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Jain, N. Panchapakesan, S. Mahajan &amp; V. B. Bhatia, IJMP, A : 13 (1998) 4227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[66]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Lampton, B. Margon &amp; S. Bowyer</subfield>
<subfield code="p">177</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">208</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Astrophys. J. 208 (1976) 177</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DOE-ER-40048-24-P-4</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Abbott, R B</subfield>
<subfield code="u">Washington U. Seattle</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Cosmological perturbations in Kaluza-Klein models</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Washington, DC</subfield>
<subfield code="b">US. Dept. Energy. Office Adm. Serv.</subfield>
<subfield code="c">Nov 1985</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">26 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bednarz, B F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, S D</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1985</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198608n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-PPE-92-085</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">HEPHY-PUB-568</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Albajar, C</subfield>
<subfield code="u">CERN</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Multifractal analysis of minimum bias events in $\sqrt{s}$ = 630 GeV $\overline{p}$p collisions</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">1 Jun 1992</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">27 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experimental Results</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Allkofer, O C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Apsimon, R J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bartha, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bezaguet, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bohrer, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Buschbeck, B</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cennini, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Cittolin, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Clayton, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Coughlan, J A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dau, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Della Negra, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Demoulin, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dibon, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Dowell, J D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Eggert, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Eisenhandler, E F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Faissner, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Fensome, I F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ferrando, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Garvey, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Geiser, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Givernaud, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gonidec, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jank, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jorat, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Josa-Mutuberria, I</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kalmus, P I P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Karimaki, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kenyon, I R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kinnunen, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Krammer, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lammel, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Landon, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Levegrun, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lipa, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Markou, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Markytan, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Maurin, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">McMahon, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Meyer, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Moers, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Morsch, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Moulin, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Naumann, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Neumeister, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Norton, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pancheri, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pauss, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pietarinen, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pimia, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Placci, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porte, J P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Priem, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Prosi, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Radermacher, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rauschkolb, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Reithler, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Revol, J P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Robinson, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rubbia, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Salicio, J M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Samyn, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schinzel, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schleichert, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Seez, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Shah, T P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sphicas, P</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sumorok, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Szoncso, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tan, C H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Taurok, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Taylor, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tether, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Teykal, H F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Thompson, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Torrente-Lujan, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tuchscherer, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tuominiemi, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Virdee, T S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">von Schlippe, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Vuillemin, V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wacker, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wagner, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Walzel, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Weselka, D</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wulz, C E</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">AACHEN - BIRMINGHAM - CERN - HELSINKI - KIEL - IMP. COLL. LONDON - QUEEN MARY COLL. LONDON - MADRID CIEMAT - MIT - RUTHERFORD APPLETON LAB. - VIENNA Collaboration</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1992</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="e">UA1</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">PPE</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="f">P00003707</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="a">CERN SPS</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1992-06-16</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">37-46</subfield>
<subfield code="p">Z. Phys., C</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1992</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">2576562</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="o">oai:cds.cern.ch:CERN-PPE-92-085</subfield>
<subfield code="p">cern:experiment</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">199226</subfield>
<subfield code="y">a1992</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-TH-4036</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ellis, J</subfield>
<subfield code="0">AUTHOR|(SzGeCERN)aaa0005</subfield>
<subfield code="u">CERN</subfield>
<subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Non-compact supergravity solves problems</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">Oct 1984</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">15 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">Kahler</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">manifolds</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">gravitinos</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">axions</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">constraints</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">noscale</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Enqvist, K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nanopoulos, D V</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1985</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">TH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-09-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<!--the datafield below is used in citation search testing-->
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">357-362</subfield>
<subfield code="p">Phys. Lett., B</subfield>
<subfield code="v">151</subfield>
<subfield code="y">1985</subfield>
</datafield>
<!--the datafield above is used in citation search testing-->
<datafield tag="909" ind1="C" ind2="O">
<subfield code="o">oai:cds.cern.ch:CERN-TH-4036</subfield>
<subfield code="p">cern:theory</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198451</subfield>
<subfield code="y">a1985</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">STAN-CS-81-898-MF</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Whang, K</subfield>
<subfield code="u">Stanford University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Separability as a physical database design methodology</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Stanford, CA</subfield>
<subfield code="b">Stanford Univ. Comput. Sci. Dept.</subfield>
<subfield code="c">Oct 1981</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">60 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Ordered for J Blake/DD</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Wiederhold, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sagalowicz, D</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1981</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">19</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Stanford Univ.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">198238n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">REPORT</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">JYFL-RR-82-7</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Arje, J</subfield>
<subfield code="u">University of Jyvaskyla</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Charge creation and reset mechanisms in an ion guide isotope separator (IGIS)</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Jyvaskyla</subfield>
<subfield code="b">Finland Univ. Dept. Phys.</subfield>
<subfield code="c">Jul 1982</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Detectors and Experimental Techniques</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1982</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">19</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Jyväskylä Univ.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-01-04</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">198238n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">REPORT</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0898710022</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">519.2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Lindley, Dennis Victor</subfield>
<subfield code="u">University College London</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Bayesian statistics</subfield>
<subfield code="b">a review</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Philadelphia, PA</subfield>
<subfield code="b">SIAM</subfield>
<subfield code="c">1972</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">88 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">CBMS-NSF Reg. Conf. Ser. Appl. Math.</subfield>
<subfield code="v">2</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="a">Society for Industrial and Applied Mathematics. Philadelphia</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1972</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0844621951</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.396.615</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.385.3</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Hamilton, Donald R</subfield>
<subfield code="u">MIT</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Klystrons and microwave triodes</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">New York, NY</subfield>
<subfield code="b">McGraw-Hill</subfield>
<subfield code="c">1948</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">547 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">M.I.T. Radiat. Lab.</subfield>
<subfield code="v">7</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Knipp, Julian K</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kuper, J B Horner</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1948</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.313</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.382.333.33</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Draper, Alec</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Electrical machines</subfield>
</datafield>
<datafield tag="250" ind1=" " ind2=" ">
<subfield code="a">2nd ed</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">London</subfield>
<subfield code="b">Longmans</subfield>
<subfield code="c">1967</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">404 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">Electrical engineering series</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1967</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">1563964554</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">539.1.078</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">539.143.44</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.384.8</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Quadrupole mass spectrometry and its applications</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Amsterdam</subfield>
<subfield code="b">North-Holland</subfield>
<subfield code="c">1976</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">ed.</subfield>
<subfield code="p">Dawson, Peter H</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">368 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1976</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">2225350574</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">fre</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">518.5:62.01</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dasse, Michel</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Analyse informatique</subfield>
<subfield code="n">t.1</subfield>
<subfield code="p">Les preliminaires</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Paris</subfield>
<subfield code="b">Masson</subfield>
<subfield code="c">1972</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">Informatique</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1972</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">2225350574</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">fre</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">518.5:62.01</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dasse, Michel</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Analyse informatique</subfield>
<subfield code="n">t.2</subfield>
<subfield code="p">L'accomplissement</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Paris</subfield>
<subfield code="b">Masson</subfield>
<subfield code="c">1972</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">Informatique</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1972</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0023506709</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">519.2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Harshbarger, Thad R</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Introductory statistics</subfield>
<subfield code="b">a decision map</subfield>
</datafield>
<datafield tag="250" ind1=" " ind2=" ">
<subfield code="a">2nd ed</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">New York, NY</subfield>
<subfield code="b">Macmillan</subfield>
<subfield code="c">1977</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">597 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1977</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198604</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">519.2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Fry, Thornton C</subfield>
<subfield code="u">Bell Teleph Labs</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Probability</subfield>
<subfield code="b">and its engineering uses</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Princeton, NJ</subfield>
<subfield code="b">Van Nostrand</subfield>
<subfield code="c">1928</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">490 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">Bell Teleph Lab. Ser.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1928</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0720421039</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">517.11</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Kleene, Stephen Cole</subfield>
<subfield code="u">University of Wisconsin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Introduction to metamathematics</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Amsterdam</subfield>
<subfield code="b">North-Holland</subfield>
<subfield code="c">1952 (repr.1964.)</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">560 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">Bibl. Matematica</subfield>
<subfield code="v">1</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1952</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621.38</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Hughes, Robert James</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Introduction to electronics</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">London</subfield>
<subfield code="b">English Univ. Press</subfield>
<subfield code="c">1962</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">432 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">65/0938, Blair, W/PE, pp</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pipe, Peter</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1962</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">519.2</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">518.5:519.2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Burford, Roger L</subfield>
<subfield code="u">Indiana University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Statistics</subfield>
<subfield code="b">a computer approach</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Columbus, OH</subfield>
<subfield code="b">Merrill</subfield>
<subfield code="c">1968</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">814 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1968</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0471155039</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">539.1.075</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Chiang, Hai Hung</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Basic nuclear electronics</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">New York, NY</subfield>
<subfield code="b">Wiley</subfield>
<subfield code="c">1969</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">354 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1969</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">621-5</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dransfield, Peter</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Engineering systems and automatic control</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Englewood Cliffs, N.J.</subfield>
<subfield code="b">Prentice-Hall</subfield>
<subfield code="c">1968</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">432 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1968</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0387940758</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="080" ind1=" " ind2=" ">
<subfield code="a">537.52</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Electrical breakdown in gases</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">London</subfield>
<subfield code="b">Macmillan</subfield>
<subfield code="c">1973</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">ed.</subfield>
<subfield code="p">Rees, J A</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">303 p</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1973</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">21</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">m</subfield>
<subfield code="w">198606</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">BOOK</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Tavanapong, W</subfield>
<subfield code="u">University of Central Florida</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A High-performance Video Browsing System</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Orlando, FL</subfield>
<subfield code="b">Central Florida Univ.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">dir.</subfield>
<subfield code="p">Hua, K A</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">172 p</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Not held by the library</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Ph.D. : Univ. Central Florida : 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Recent advances in multimedia processing technologies, internetworking technologies, and the World Wide Web phenomenon have resulted in a vast creation and use of digital videos in all kinds of applications ranging from entertainment, business solutions, to education. Designing efficient techniques for searching and retrieving videos over the networks becomes increasingly more important as future applications will include a huge volume of multimedia content. One practical approach to search for a video segment is as follows. Step 1: Apply an initial search to determine the set of candidate videos. Step 2: Browse the candidates to identify the relevant videos. Step 3: Search within the relevant videos for interesting video segments. In practice, a user might have to iterate through these steps multiple times in order to locate the desired video segments. Independently, database researchers have been investigating techniques for the initial search in Step 1. Multimedia researchers have proposed several techniques for video browsing in Step 2. Computer communications researchers have been investigating video delivery techniques. I identify that searching for video data is an interactive process which involves the transmission of video data. Developing techniques for each step independently could result in a system with less performance. In this dissertation, I present a unified approach taking into account all fundamental characteristics of multimedia data. I evaluated the proposed techniques through both simulation and system implementation. The resulting system is less expensive and offers better performance. The simulation results demonstrate that the proposed technique can offer video browsing and search operations with little delay and with minimum storage overhead at the server. Client machines can handle their search operations without involving the server, making the design more scalable, which is vital for large systems deployed over the Internet. The implemented system shows that the visual quality of the browsing and search operations is excellent.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">PROQUEST200009</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">notheld</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-09-22</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-02-22</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">9923724</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">DAI-B60/03p1177Sep1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200034</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Teshome, D</subfield>
<subfield code="u">California State Univ</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Neural Networks For Speech Recognition Of A Phonetic Language</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Long Beach, CA</subfield>
<subfield code="b">Calif. State Univ.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">55 p</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Not held by the library</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Ms : California State Univ. : 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The goal of this thesis is to explore the possibility of a viable alternative or replacement for the Amharic typewriter. Amharic is the national language of Ethiopia. It is one of the oldest languages in the world. The root language of Amharic, called Geez, is a descendant of Sabean, which is the direct ancestor of all Semitic languages, including English. A phonetic language with 276 phonemes/characters, Amharic has posed quite a challenge to those who, like the author of this thesis, have attempted to design an easy-to-use word processor that interfaces with the conventional keyboard. With current Amharic word processing software, each character requires an average of three keystrokes, thus making typing Amharic literature quite a task. This thesis researches the feasibility of developing a PC-based speech recognition system to recognize the spoken phonemes of the Amharic language. Artificial Neural Networks are used for the recognition of spoken alphabets that form Amharic words. A neural network with feed-forward architecture is trained with a series of alphabets and is evaluated on its ability to recognize subsequent test data. The neural network used in this project is a static classification network; that is, it focuses on the frequency domain of speech while making no attempt to process temporal information. The network training procedure uses the generalized Delta Rule. The recognition system developed in this project is an Isolated Speech Recognition System. The approach taken is to recognize the spoken word character by character. This approach is expected to work well due to the phonetic nature of Amharic.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">PROQUEST200009</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">notheld</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-09-22</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-02-22</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">1397120</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">MAI38/02p448Apr2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200034</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Topcuoglu, H R</subfield>
<subfield code="u">Syracuse Univ.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Scheduling Task Graphs In Heterogeneous Computing Environments</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Syracuse, NY</subfield>
<subfield code="b">Syracuse Univ.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">dir.</subfield>
<subfield code="p">Hariri, S</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">126 p</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Not held by the library</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Ph.D. : Syracuse Univ. : 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. An application is represented by a directed acyclic graph (DAG) whose nodes represent tasks and whose edges represent communication messages and precedence constraints among the tasks. The general task-scheduling problem maps the tasks of an application onto processors and orders their execution so that task precedence requirements are satisfied and a minimum schedule length is obtained. The task-scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Although a large number of scheduling heuristics are presented in the literature, most of them target homogeneous processors. Existing algorithms for heterogeneous processors are not generally efficient because of their high complexity and the quality of their results. This thesis studies the scheduling of DAG-structured application tasks on heterogeneous domains. We develop two novel low-complexity and efficient scheduling algorithms for a bounded number of heterogeneous processors, the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The experimental work presented in this thesis shows that these algorithms significantly surpass previous approaches in terms of performance (schedule length ratio, speed-up, and frequency of best results) and cost (running time and time complexity). Our experimental work includes randomly generated graphs and graphs derived from real applications. As part of the comparison study, a parametric graph generator is introduced to generate graphs with various characteristics. We also present a further optimization of the HEFT algorithm by introducing alternative methods for the task prioritizing and processor selection phases.
A novel processor selection policy based on the earliest finish time of the critical child task improves the performance of the HEFT algorithm. Several strategies for selecting the critical child task of a given task are presented. This thesis also addresses embedding the task scheduling algorithms into an application-development environment for distributed resources. An analytical model is introduced for setting the computation costs of tasks and the communication costs of edges of a graph. As part of the design framework of our application development environment, a novel, two-phase, distributed scheduling algorithm is presented for scheduling an application over wide-area distributed resources.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">PROQUEST200009</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">notheld</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-09-22</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-02-08</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">9946509</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">DAI-B60/09p4718Mar2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200034</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">spa</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Trespalacios-Mantilla, J H</subfield>
<subfield code="u">Puerto Rico Univ.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Software De Apoyo Educativo Al Concepto De Funcion En Precalculo I (spanish Text)</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Rio Piedras</subfield>
<subfield code="b">Puerto Rico Univ.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">dir.</subfield>
<subfield code="p">Monroy, H</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">64 p</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Not held by the library</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Ms : Univ. Puerto Rico : 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">This thesis reports on the evaluation of the use of educational software designed to improve students' learning of the concept of mathematical function. The students in the study were registered in Precalculus I at the University of Puerto Rico, Mayaguez Campus. The educational software allows practice in changing the representation of a function among tabular, analytic, and graphical representations. To carry out the evaluation, 59 students were selected and divided into two groups: control and experimental. Both groups received the 'traditional' classroom lectures on the topic. The experimental group, in addition, was allowed to practice with the educational software. To measure their performance and the effect of the educational software, two tests were given: a pre-test and a post-test. The results of this study show that the experimental group improved significantly more than the control group, thus demonstrating the validity of the educational software in the learning of the concept of mathematical function.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">PROQUEST200009</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">notheld</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-09-22</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-02-08</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">1395476</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">MAI37/06p1890Dec1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200034</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="020" ind1=" " ind2=" ">
<subfield code="a">0612382052</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">fre</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Troudi, N</subfield>
<subfield code="u">Laval Univ.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Systeme Multiagent Pour Les Environnements Riches En Informations (french Text)</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Laval</subfield>
<subfield code="b">Laval Univ.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="270" ind1=" " ind2=" ">
<subfield code="g">dir.</subfield>
<subfield code="p">Chaib-Draa, B</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">101 p</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Not held by the library</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Msc : Universite Laval : 1999</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">La croissance du Web est spectaculaire, puisqu'on estime aujourd'hui a plus de 50 millions le nombre de pages sur le Web qui ne demandent qu'a etre consultees. Un simple calcul montre qu'en consacrant ne serait-ce qu'une minute par page, il faudrait environ 95 ans pour consulter toutes ces pages. L'utilisation d'une strategie de recherche est donc vitale. Dans ce cadre, de nombreux outils de recherche ont ete proposes. Ces outils, appeles souvent moteurs de recherche, se sont averes aujourd'hui incapables de fournir de l'aide aux utilisateurs. Les raisons principales a cela sont les suivantes: (1) La nature ouverte de l'Internet: aucune supervision centrale ne s'applique quant au developpement d'Internet, puisque toute personne qui desire l'utiliser et/ou offrir des informations est libre de le faire; (2) La nature dynamique des informations: les informations qui ne sont pas disponibles aujourd'hui peuvent l'etre demain et inversement; (3) La nature heterogene de l'information: l'information est offerte sous plusieurs formats et de plusieurs facons, compliquant ainsi la recherche automatique de l'information. Devant ce constat, il semble important de chercher de nouvelles solutions pour aider l'utilisateur dans sa recherche d'informations. (Abstract shortened by UMI.)</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">PROQUEST200009</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Computing and Computers</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">notheld</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-09-22</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-02-08</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">MQ38205</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">PROQUEST</subfield>
<subfield code="s">MAI37/06p1890Dec1999</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200034</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">LBL-22304</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Manes, J L</subfield>
<subfield code="u">Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Anomalies in quantum field theory and differential geometry</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">Apr 1986</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">76 p</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Thesis : Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bibliography</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">REPORT</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1986</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-03-22</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">1594192</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198650n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">LBL-21916</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ingermanson, R</subfield>
<subfield code="u">Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Accelerating the loop expansion</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">Jul 1986</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">96 p</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Thesis : Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="506" ind1="1" ind2=" ">
<subfield code="a">montague_only</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Theoretical Physics</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bibliography</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">REPORT</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1986</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-03-22</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">1594184</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198650n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">LBL-28106</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bertsche, K J</subfield>
<subfield code="u">Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A small low energy cyclotron for radioisotope measurements</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">Nov 1989</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">155 p</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">Thesis : Calif. Univ. Berkeley</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Accelerators and Storage Rings</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">bibliography</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">REPORT</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">14</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1989</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-02-28</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-03-22</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">199010n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">gr-qc/0204045</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Khalatnikov, I M</subfield>
<subfield code="u">L D Landau Institute for Theoretical Physics of Russian Academy of Sciences</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Comment about quasiisotropic solution of Einstein equations near cosmological singularity</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">7 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We generalize, for the case of arbitrary hydrodynamical matter, the quasiisotropic solution of Einstein equations near the cosmological singularity, found by Lifshitz and Khalatnikov in 1960 for the case of a radiation-dominated universe. It is shown that this solution always exists, but the dependence of terms in the quasiisotropic expansion acquires a more complicated form.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Relativity and Cosmology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kamenshchik, A Y</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Alexander Kamenshchik &lt;sasha.kamenshchik@centrovolta.it&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204045.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204045.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lifshitz E M and Khalatnikov I M 1960</subfield>
<subfield code="p">149</subfield>
<subfield code="t">Zh. Eksp. Teor. Fiz.</subfield>
<subfield code="v">39</subfield>
<subfield code="y">1960</subfield>
<subfield code="s">Zh. Eksp. Teor. Fiz. 39 (1960) 149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Lifshitz E M and Khalatnikov I M 1964 Sov. Phys. Uspekhi 6 495</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Landau L D and Lifshitz E M 1979 The Classical Theory of Fields (Pergamon Press)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Starobinsky A A 1986 Stochastic De Sitter (inflationary stage) in the early universe in Field Theory, Quantum Gravity and Strings, (Eds. H.J. De Vega and N. Sanchez, Springer-Verlag, Berlin) 107; Linde A D 1990 Particle Physics and Inflationary Cosmology (Harward Academic Publishers, New York)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Banks T and Fischler W 2001 M theory observables for cosmological space-times</subfield>
<subfield code="r">hep-th/0102077</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">An Holographic Cosmology</subfield>
<subfield code="r">hep-th/0111142</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Perlmutter S J et al 1999</subfield>
<subfield code="p">565</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">517</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Astrophys. J. 517 (1999) 565</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Riess A et al 1998</subfield>
<subfield code="p">1009</subfield>
<subfield code="t">Astron. J.</subfield>
<subfield code="v">116</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astron. J. 116 (1998) 1009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Sahni V and Starobinsky A A 2000</subfield>
<subfield code="p">373</subfield>
<subfield code="t">Int. J. Mod. Phys., D</subfield>
<subfield code="v">9</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Int. J. Mod. Phys. D 9 (2000) 373</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">gr-qc/0204046</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bento, M C</subfield>
<subfield code="u">CERN</subfield>
+ <subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Supergravity Inflation on the Brane</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">5 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study N=1 Supergravity inflation in the context of the braneworld scenario. Particular attention is paid to the problem of the onset of inflation at sub-Planckian field values and the ensuing inflationary observables. We find that the so-called $\eta$-problem encountered in supergravity-inspired inflationary models can be solved in the context of the braneworld scenario for some range of the parameters involved. Furthermore, we obtain an upper bound on the scale of the fifth dimension, $M_5 \lesssim 10^{-3} M_P$, in case the inflationary potential is quadratic in the inflaton field, $\phi$. If the inflationary potential is cubic in $\phi$, consistency with observational data requires that $M_5 \simeq 9.2 \times 10^{-4} M_P$.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">General Relativity and Cosmology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bertolami, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sen, A A</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Maria da Conceicao Bento &lt;bento@sirius.ist.utl.pt&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204046.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204046.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">R. Maartens, D. Wands, B.A. Bassett, I.P.C. Heard,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 041301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">M.C. Bento, O. Bertolami,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 063513</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">O. Bertolami, G.G. Ross,</subfield>
<subfield code="s">Phys. Lett. B 183 (1987) 163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">J. McDonald, "F-term Hybrid Inflation, η-Problem and Extra Dimensions",</subfield>
<subfield code="r">hep-ph/0201016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">G. Dvali, Q. Shafi, R. Schaefer,</subfield>
<subfield code="s">Phys. Rev. Lett. 73 (1994) 1886</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A.D. Linde,</subfield>
<subfield code="s">Phys. Lett. B 259 (1991) 38</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">M.C. Bento, O. Bertolami, P.M. Sá,</subfield>
<subfield code="s">Phys. Lett. B 262 (1991) 11</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="s">Mod. Phys. Lett. A 7 (1992) 911</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A.D. Linde,</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 748</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E.J. Copeland, A.R. Liddle, D.H. Lyth, E.D. Stewart, D. Wands,</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 6410</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">L.E. Mendes, A.R. Liddle,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 103511</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">J.A. Adams, G.G. Ross, S. Sarkar,</subfield>
<subfield code="s">Phys. Lett. B 391 (1997) 271</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">T. Shiromizu, K. Maeda, M. Sasaki,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 024012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">P. Binétruy, C. Deffayet, U. Ellwanger, D. Langlois,</subfield>
<subfield code="s">Phys. Lett. B 477 (2000) 285</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">E.E. Flanagan, S.H. Tye, I. Wasserman,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 044039</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">D. Langlois, R. Maartens, D. Wands,</subfield>
<subfield code="s">Phys. Lett. B 489 (2000) 259</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">C.B. Netterfield, et al., "A Measurement by BOOMERANG of multiple peaks in the angular power spectrum of the cosmic microwave background", astro-ph/0104460.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">A.T. Lee, et al., "A High Spatial Resolution Analysis of the MAXIMA-1 Cosmic Microwave Background Anisotropy Data", astro-ph/0104459.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">C. Pryke, et al., "Cosmological Parameter Extraction from the First Season of Observations with DASI", astro-ph/0104490.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">M.C. Bento, O. Bertolami,</subfield>
<subfield code="s">Phys. Lett. B 384 (1996) 98</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">G.G. Ross, S. Sarkar,</subfield>
<subfield code="s">Nucl. Phys. B 461 (1995) 597</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0204098</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Alhaidari, A D</subfield>
<subfield code="u">King Fahd University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Reply to 'Comment on "Solution of the Relativistic Dirac-Morse Problem"'</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">11 Apr 2002</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">This combines a reply to the Comment [hep-th/0203067 v1] by A. N. Vaidya and R. de L. Rodrigues with an erratum to our Letter [Phys. Rev. Lett. 87, 210405 (2001)]</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">A. D. Alhaidari &lt;haidari@kfupm.edu.sa&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204098.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204098.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A. D. Alhaidari,</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 210405</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">A. N. Vaidya and R. de L. Rodrigues,</subfield>
<subfield code="r">hep-th/0203067</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">A. D. Alhaidari,</subfield>
<subfield code="s">J. Phys. A 34 (2001) 9827</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="s">J. Phys. A 35 (2002) 3143</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">See for example G. A. Natanzon,</subfield>
<subfield code="s">Teor. Mat. Fiz. 38 (1979) 146</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">L. E. Gendenshtein,</subfield>
<subfield code="s">JETP Lett. 38 (1983) 356</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">F. Cooper, J. N. Ginocchi, and A. Khare,</subfield>
<subfield code="s">Phys. Rev. D 36 (1987) 2438</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">R. Dutt, A. Khare, and U. P. Sukhatme,</subfield>
<subfield code="s">Am. J. Phys. 56 (1988) 163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="s">Am. J. Phys. 59 (1991) 723</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">G. Lévai,</subfield>
<subfield code="s">J. Phys. A 22 (1989) 689</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="s">J. Phys. A 27 (1994) 3809</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">R. De, R. Dutt, and U. Sukhatme,</subfield>
<subfield code="s">J. Phys. A 25 (1992) L843</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">See for example M. F. Manning,</subfield>
<subfield code="s">Phys. Rev. 48 (1935) 161</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">A. Bhattacharjie and E. C. G. Sudarshan,</subfield>
<subfield code="s">Nuovo Cimento 25 (1962) 864</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">N. K. Pak and I. Sökmen,</subfield>
<subfield code="s">Phys. Lett. A 103 (1984) 298</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">H. G. Goldstein, Classical Mechanics (Addison-Wesley, Reading-MA 1986); R. Montemayer,</subfield>
<subfield code="s">Phys. Rev. A 36 (1987) 1562</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">G. Junker,</subfield>
<subfield code="s">J. Phys. A 23 (1990) L881</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A. D. Alhaidari,</subfield>
<subfield code="s">Phys. Rev. A 65 (2002) 042109</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0204099</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CU-TP-1043</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Easther, R</subfield>
<subfield code="u">Columbia Univ.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Cosmological String Gas on Orbifolds</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Irvington-on-Hudson, NY</subfield>
<subfield code="b">Columbia Univ. Dept. Phys.</subfield>
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">14 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">It has long been known that strings wound around incontractible cycles can play a vital role in cosmology. In particular, in a spacetime with toroidal spatial hypersurfaces, the dynamics of the winding modes may help yield three large spatial dimensions. However, toroidal compactifications are phenomenologically unrealistic. In this paper we therefore take a first step toward extending these cosmological considerations to $D$-dimensional toroidal orbifolds. We use numerical simulation to study the timescales over which "pseudo-wound" strings unwind on these orbifolds with trivial fundamental group. We show that pseudo-wound strings can persist for many "Hubble times" in some of these spaces, suggesting that they may affect the dynamics in the same way as genuinely wound strings. We also outline some possible extensions that include higher-dimensional wrapped branes.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Greene, B R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jackson, M G</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">M. G. Jackson &lt;markj@phys.columbia.edu&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204099.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204099.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Easther, Richard</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Greene, Brian R.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Jackson, Mark G.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Brandenberger and C. Vafa</subfield>
<subfield code="p">391</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">316</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Nucl. Phys. B 316 (1989) 391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. A. Tseytlin and C. Vafa</subfield>
<subfield code="p">443</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">372</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 372 (1992) 443</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9109048</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Sakellariadou</subfield>
<subfield code="p">319</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">468</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 468 (1996) 319</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9511075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. G. Smith and A. Vilenkin</subfield>
<subfield code="p">990</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">36</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Rev. D 36 (1987) 990</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Brandenberger, D. A. Easson and D. Kimberly</subfield>
<subfield code="p">421</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">623</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Nucl. Phys. B 623 (2002) 421</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0109165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. R. Greene, A. D. Shapere, C. Vafa, and S. T. Yau</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">337</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Nucl. Phys. B 337 (1990) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Dixon, J. Harvey, C. Vafa and E. Witten</subfield>
<subfield code="p">678</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">261</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Nucl. Phys. B 261 (1985) 678</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Dixon, J. Harvey, C. Vafa and E. Witten</subfield>
<subfield code="p">285</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">274</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Nucl. Phys. B 274 (1986) 285</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Sakellariadou and A. Vilenkin</subfield>
<subfield code="p">885</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">37</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Rev. D 37 (1988) 885</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. J. Atick and E. Witten</subfield>
<subfield code="p">291</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">310</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Nucl. Phys. B 310 (1988) 291</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Mitchell and N. Turok</subfield>
<subfield code="p">1577</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Rev. Lett. 58 (1987) 1577</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Imperial College report, 1987 (unpublished)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Alexander, R. Brandenberger, and D. Easson</subfield>
<subfield code="p">103509</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 103509</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0005212</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Easson</subfield>
<subfield code="r">hep-th/0110225</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204132</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">NUC-MINN-02-3-T</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Shovkovy, I A</subfield>
<subfield code="u">Minnesota Univ.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Thermal conductivity of dense quark matter and cooling of stars</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Minneapolis, MN</subfield>
<subfield code="b">Minnesota Univ.</subfield>
<subfield code="c">11 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">9 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The thermal conductivity of the color-flavor locked phase of dense quark matter is calculated. The dominant contribution to the conductivity comes from photons and Nambu-Goldstone bosons associated with breaking of baryon number which are trapped in the quark core. Because of their very large mean free path the conductivity is also very large. The cooling of the quark core arises mostly from the heat flux across the surface of direct contact with the nuclear matter. As the thermal conductivity of the neighboring layer is also high, the whole interior of the star should be nearly isothermal. Our results imply that the cooling time of compact stars with color-flavor locked quark cores is similar to that of ordinary neutron stars.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ellis, P J</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Igor Shovkovy &lt;shovkovy@physics.umn.edu&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204132.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204132.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Shovkovy, Igor A.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Ellis, Paul J.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.C. Collins and M.J. Perry</subfield>
<subfield code="p">1353</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">34</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Phys. Rev. Lett. 34 (1975) 1353</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. C. Barrois</subfield>
<subfield code="p">390</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">129</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Nucl. Phys. B 129 (1977) 390</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. C. Frautschi, in "Hadronic matter at extreme energy density", edited by N. Cabibbo and L. Sertorio (Plenum Press, 1980); D. Bailin and A. Love</subfield>
<subfield code="p">325</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">107</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Rep. 107 (1984) 325</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. G. Alford, K. Rajagopal and F. Wilczek</subfield>
<subfield code="p">247</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">422</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 422 (1998) 247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Rapp, T. Schäfer, E. V. Shuryak and M. Velkovsky</subfield>
<subfield code="p">53</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 81 (1998) 53</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. T. Son</subfield>
<subfield code="p">094019</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 094019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. D. Pisarski and D. H. Rischke</subfield>
<subfield code="p">37</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 37</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Schafer and F. Wilczek</subfield>
<subfield code="p">114033</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 114033</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. K. Hong, V. A. Miransky, I. A. Shovkovy and L. C. R. Wijewardhana</subfield>
<subfield code="p">056001</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 056001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">erratum</subfield>
<subfield code="p">059903</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 059903</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. D. Pisarski and D. H. Rischke</subfield>
<subfield code="p">051501</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 051501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. D. Hsu and M. Schwetz</subfield>
<subfield code="p">211</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">572</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 572 (2000) 211</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. E. Brown, J. T. Liu and H. C. Ren</subfield>
<subfield code="p">114012</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 114012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. A. Shovkovy and L. C. R. Wijewardhana</subfield>
<subfield code="p">189</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">470</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 470 (1999) 189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Schäfer</subfield>
<subfield code="p">269</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">575</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 575 (2000) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Rajagopal and F. Wilczek</subfield>
<subfield code="r">hep-ph/0011333</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. G. Alford</subfield>
<subfield code="p">131</subfield>
<subfield code="t">Annu. Rev. Nucl. Part. Sci.</subfield>
<subfield code="v">51</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Annu. Rev. Nucl. Part. Sci. 51 (2001) 131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Alford and K. Rajagopal</subfield>
<subfield code="r">hep-ph/0204001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Alford, K. Rajagopal and F. Wilczek</subfield>
<subfield code="p">443</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">537</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 537 (1999) 443</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Casalbuoni and R. Gatto</subfield>
<subfield code="p">111</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">464</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 464 (1999) 111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. T. Son and M. A. Stephanov</subfield>
<subfield code="p">074012</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 074012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">erratum</subfield>
<subfield code="p">059902</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 059902</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. F. Bedaque and T. Schäfer</subfield>
<subfield code="p">802</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">697</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Nucl. Phys. A 697 (2002) 802</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. A. Miransky and I. A. Shovkovy</subfield>
<subfield code="p">111601</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">88</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. Lett. 88 (2002) 111601</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0108178</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Schafer, D. T. Son, M. A. Stephanov, D. Toublan and J. J. Verbaarschot</subfield>
<subfield code="p">67</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">522</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 522 (2001) 67</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0108210</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. T. Son</subfield>
<subfield code="r">hep-ph/0108260</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Jaikumar, M. Prakash and T. Schäfer, arXiv</subfield>
<subfield code="r">astro-ph/0203088</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. J. Ferrer, V. P. Gusynin and V. de la Incera, arXiv: cond-mat/0203217</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series and Products (Academic, New York, 1965) 3.252.9</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. J. Freeman and A. C. Anderson</subfield>
<subfield code="p">5684</subfield>
<subfield code="t">Phys. Rev., B</subfield>
<subfield code="v">34</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Rev. B 34 (1986) 5684</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Kittel, Introduction to Solid State Physics (John Wiley &amp; Sons, Inc., 1960) p. 139</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. P. Gusynin and I. A. Shovkovy</subfield>
<subfield code="p">577</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">700</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Nucl. Phys. A 700 (2002) 577</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. M. Khalatnikov, An introduction to the theory of superfluidity, (Addison-Wesley Pub. Co., 1989)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. A. Sturrock, Plasma Physics, (Cambridge University Press, 1994)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. M. Lattimer, K. A. Van Riper, M. Prakash and M. Prakash</subfield>
<subfield code="p">802</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">425</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Astrophys. J. 425 (1994) 802</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. G. Alford, K. Rajagopal, S. Reddy and F. Wilczek</subfield>
<subfield code="p">074017</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 074017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. W. Carter and S. Reddy</subfield>
<subfield code="p">103002</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 103002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. W. Steiner, M. Prakash and J. M. Lattimer</subfield>
<subfield code="p">10</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">509</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 509 (2001) 10</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">astro-ph/0101566</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Reddy, M. Sadzikowski and M. Tachibana, arXiv</subfield>
<subfield code="r">nucl-th/0203011</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Prakash, J. M. Lattimer, J. A. Pons, A. W. Steiner and S. Reddy</subfield>
<subfield code="p">364</subfield>
<subfield code="t">Lect. Notes Phys.</subfield>
<subfield code="v">578</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Lect. Notes Phys. 578 (2001) 364</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. L. Shapiro and S. A. Teukolsky, Black holes, white dwarfs, and neutron stars: the physics of compact objects, (John Wiley &amp; Sons, 1983)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Blaschke, H. Grigorian and D. N. Voskresensky, Astron. Astrophys. 368 (2001) 561</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Page, M. Prakash, J. M. Lattimer and A. W. Steiner</subfield>
<subfield code="p">2048</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 2048</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Rajagopal and F. Wilczek</subfield>
<subfield code="p">3492</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 3492</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. I. Kapusta, Finite-temperature field theory, (Cambridge University Press, 1989)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. M. Johns, P. J. Ellis and J. M. Lattimer</subfield>
<subfield code="p">1020</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">473</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Astrophys. J. 473 (1996) 1020</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204133</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Gomez, M E</subfield>
<subfield code="u">CFIF</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Lepton-Flavour Violation in SUSY with and without R-parity</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">11 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study whether the individual violation of the lepton numbers L_{e,mu,tau} in the charged sector can lead to measurable rates for BR(mu-&gt;e gamma) and BR(tau-&gt;mu gamma). We consider three different scenarios, the first one corresponds to the Minimal Supersymmetric Standard Model with non-universal soft terms. In the other two cases the violation of flavor in the leptonic charged sector is associated to the neutrino problem in models with a see-saw mechanism and with R-parity violation respectively.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Carvalho, D F</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Mario E. Gomez &lt;mgomez@gtae3.ist.utl.pt&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204133.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204133.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">TALK GIVEN BY M E G AT THE CORFU SUMMER INSTITUTE ON ELEMENTARY PARTICLE PHYSICS CORFU 2001 11 PAGES 5 FIGURES</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Fukuda et al., Super-Kamiokande collaboration</subfield>
<subfield code="p">9</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">433</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 433 (1998) 9</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">33</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">436</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 436 (1998) 33</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">1562</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 81 (1998) 1562</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Apollonio et al., Chooz collaboration</subfield>
<subfield code="p">397</subfield>
<subfield code="t">Phys. Lett. B</subfield>
<subfield code="v">420</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 420 (1998) 397</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. N. Brown et al. [Muon g-2 Collaboration]</subfield>
<subfield code="p">2227</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 2227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ex/0102017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Review of Particle Physics, D. E. Groom et al</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. F. Carvalho, M. E. Gomez and S. Khalil</subfield>
<subfield code="p">001</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">0107</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">J. High Energy Phys. 0107 (2001) 001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0104292</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. F. Carvalho, J. R. Ellis, M. Gomez and S. Lola</subfield>
<subfield code="p">323</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">515</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 515 (2001) 323</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. F. Carvalho, M. E. Gomez and J. C. Romao</subfield>
<subfield code="r">hep-ph/0202054</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(to appear in Phys. Rev. D)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. R. Ellis, M. E. Gomez, G. K. Leontaris, S. Lola and D. V. Nanopoulos</subfield>
<subfield code="p">319</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">14</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 14 (2000) 319</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Belyaev et al</subfield>
<subfield code="p">715</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">22</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Eur. Phys. J. C 22 (2002) 715</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Belyaev et al. [Kaon Physics Working Group Collaboration]</subfield>
<subfield code="r">hep-ph/0107046</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Hisano, T. Moroi, K. Tobe and M. Yamaguchi</subfield>
<subfield code="p">2442</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 2442</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Hisano and D. Nomura</subfield>
<subfield code="p">116005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 116005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Barbieri and L.J. Hall</subfield>
<subfield code="p">212</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">338</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 338 (1994) 212</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Barbieri et al</subfield>
<subfield code="p">219</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">445</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 445 (1995) 219</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Nima Arkani-Hamed, Hsin-Chia Cheng and L.J. Hall</subfield>
<subfield code="p">413</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 413</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Ciafaloni, A. Romanino and A. Strumia</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">458</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 458 (1996) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. E. Gomez and H. Goldberg</subfield>
<subfield code="p">5244</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 5244</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Hisano, D. Nomura, Y. Okada, Y. Shimizu and M. Tanaka</subfield>
<subfield code="p">116010</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 116010</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. E. Gomez, G. K. Leontaris, S. Lola and J. D. Vergados</subfield>
<subfield code="p">116009</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 116009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.K. Leontaris and N.D. Tracas</subfield>
<subfield code="p">90</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">431</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 431 (1998) 90</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Buchmuller, D. Delepine and F. Vissani</subfield>
<subfield code="p">171</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">459</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 459 (1999) 171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Buchmuller, D. Delepine and L. T. Handoko</subfield>
<subfield code="p">445</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">576</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 576 (2000) 445</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Q. Shafi and Z. Tavartkiladze</subfield>
<subfield code="p">145</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">473</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 473 (2000) 145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. L. Feng, Y. Nir and Y. Shadmi</subfield>
<subfield code="p">113005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 113005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Brignole, L. E. Ibáñez and C. Muñoz</subfield>
<subfield code="p">125</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">422</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 422 (1994) 125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">747</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">436</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 436 (1994) 747</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Ibáñez and G.G. Ross</subfield>
<subfield code="p">100</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">332</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 332 (1994) 100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.K. Leontaris, S. Lola and G.G. Ross</subfield>
<subfield code="p">25</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">454</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 454 (1995) 25</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Lola and G.G. Ross</subfield>
<subfield code="p">81</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">553</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 553 (1999) 81</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Gell-Mann, P. Ramond and R. Slansky, Proceedings of the Stony Brook Super-gravity Workshop, New York, 1979, eds. P. Van Nieuwenhuizen and D. Freedman (North-Holland, Amsterdam)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. A. Casas and A. Ibarra</subfield>
<subfield code="p">171</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">618</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 618 (2001) 171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Lavignac, I. Masina and C. A. Savoy</subfield>
<subfield code="p">269</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">520</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 520 (2001) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="r">hep-ph/0202086</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. R. Ellis, D. V. Nanopoulos and K. A. Olive</subfield>
<subfield code="p">65</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">508</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 508 (2001) 65</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. C. Romao, M. A. Diaz, M. Hirsch, W. Porod and J. W. Valle</subfield>
<subfield code="p">071703</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 071703</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">113008</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 113008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. E. Gomez and K. Tamvakis</subfield>
<subfield code="p">057701</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 057701</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Hirsch, W. Porod, J. W. F. Valle and J. C. Romão</subfield>
<subfield code="r">hep-ph/0202149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. M. Barkov et al., Research Proposal to PSI, 1999</subfield>
<subfield code="u">http://www.icepp.s.u-tokyo.ac.jp/meg</subfield>
<subfield code="z">http://www.icepp.s.u-tokyo.ac.jp/meg</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">The homepage of the PRISM project</subfield>
<subfield code="u">http://www-prism.kek.jp/</subfield>
<subfield code="z">http://www-prism.kek.jp/</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Kuno, Lepton Flavor Violation Experiments at KEK/JAERI Joint Project of High Intensity Proton Machine, in Proceedings of Workshop of "LOWNU/NOON 2000", Tokyo, December 4-8, 2000</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Porod, M. Hirsch, J. Romão and J. W. Valle</subfield>
<subfield code="p">115004</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 115004</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204134</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dzuba, V A</subfield>
<subfield code="u">University of New South Wales</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Precise calculation of parity nonconservation in cesium and test of the standard model</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">24 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We have calculated the 6s-7s parity nonconserving (PNC) E1 transition amplitude, E_{PNC}, in cesium. We have used an improved all-order technique in the calculation of the correlations and have included all significant contributions to E_{PNC}. Our final value E_{PNC} = 0.904 (1 +/- 0.5 %) \times 10^{-11} i e a_{B}(-Q_{W}/N) has half the uncertainty claimed in old calculations used for the interpretation of Cs PNC experiments. The resulting nuclear weak charge Q_{W} for Cs deviates by about 2 standard deviations from the value predicted by the standard model.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Flambaum, V V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ginges, J S M</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">"Jacinda S.M. GINGES" &lt;ginges@phys.unsw.edu.au&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204134.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204134.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I.B. Khriplovich, Parity Nonconservation in Atomic Phenomena (Gordon and Breach, Philadelphia, 1991)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.-A. Bouchiat and C. Bouchiat</subfield>
<subfield code="p">1351</subfield>
<subfield code="t">Rep. Prog. Phys.</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Rep. Prog. Phys. 60 (1997) 1351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.S. Wood et al</subfield>
<subfield code="p">1759</subfield>
<subfield code="t">Science</subfield>
<subfield code="v">275</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Science 275 (1997) 1759</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov</subfield>
<subfield code="p">147</subfield>
<subfield code="t">Phys. Lett., A</subfield>
<subfield code="v">141</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. A 141 (1989) 147</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Blundell, W.R. Johnson, and J. Sapirstein</subfield>
<subfield code="p">1411</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 1411</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.A. Blundell, J. Sapirstein, and W.R. Johnson</subfield>
<subfield code="p">1602</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">45</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. D 45 (1992) 1602</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.J. Rafac and C.E. Tanner, Phys. Rev., A58 1087 (1998); R.J. Rafac, C.E. Tanner, A.E. Livingston, and H.G. Berry, Phys. Rev., A60 3648 (1999)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.C. Bennett, J.L. Roberts, and C.E. Wieman</subfield>
<subfield code="p">R16</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. A 59 (1999) R16</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.C. Bennett and C.E. Wieman</subfield>
<subfield code="p">2484</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 2484</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">82, 4153(E) (1999); 83, 889(E) (1999)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Casalbuoni, S. De Curtis, D. Dominici, and R. Gatto</subfield>
<subfield code="p">135</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">460</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 460 (1999) 135</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. L. Rosner</subfield>
<subfield code="p">016006</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 61 (1999) 016006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Erler and P. Langacker</subfield>
<subfield code="p">212</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">84</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 84 (2000) 212</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Derevianko</subfield>
<subfield code="p">1618</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 1618</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, C. Harabati, W.R. Johnson, and M.S. Safronova</subfield>
<subfield code="p">044103</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. A 63 (2001) 044103</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.G. Kozlov, S.G. Porsev, and I.I. Tupitsyn</subfield>
<subfield code="p">3260</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 3260</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.J. Marciano and A. Sirlin</subfield>
<subfield code="p">552</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">27</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Phys. Rev. D 27 (1983) 552</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.J. Marciano and J.L. Rosner</subfield>
<subfield code="p">2963</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">65</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. Lett. 65 (1990) 2963</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.W. Lynn and P.G.H. Sandars</subfield>
<subfield code="p">1469</subfield>
<subfield code="t">J. Phys., B</subfield>
<subfield code="v">27</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">J. Phys. B 27 (1994) 1469</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Bednyakov et al</subfield>
<subfield code="p">012103</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. A 61 (1999) 012103</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.I. Milstein and O.P. Sushkov, e-print</subfield>
<subfield code="r">hep-ph/0109257</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W.R. Johnson, I. Bednyakov, and G. Soff</subfield>
<subfield code="p">233001</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 233001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Derevianko</subfield>
<subfield code="p">012106</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">65</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. A 65 (2002) 012106</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba and V.V. Flambaum</subfield>
<subfield code="p">052101</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. A 62 (2000) 052101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov</subfield>
<subfield code="p">R4357</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. A 56 (1997) R4357</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.E. Groom et al., Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, V.V. Flambaum, P.G. Silvestrov, and O.P. Sushkov</subfield>
<subfield code="p">1399</subfield>
<subfield code="t">J. Phys., B</subfield>
<subfield code="v">20</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">J. Phys. B 20 (1987) 1399</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov</subfield>
<subfield code="p">493</subfield>
<subfield code="t">Phys. Lett., A</subfield>
<subfield code="v">140</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. A 140 (1989) 493</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Dzuba, V.V. Flambaum, A.Ya. Kraftmakher, and O.P. Sushkov</subfield>
<subfield code="p">373</subfield>
<subfield code="t">Phys. Lett., A</subfield>
<subfield code="v">142</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Lett. A 142 (1989) 373</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Fricke et al</subfield>
<subfield code="p">177</subfield>
<subfield code="t">At. Data Nucl. Data Tables</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">At. Data Nucl. Data Tables 60 (1995) 177</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Trzcińska et al</subfield>
<subfield code="p">082501</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 082501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.B. Berestetskii, E.M. Lifshitz, and L.P. Pitaevskii, Relativistic Quantum Theory (Pergamon Press, Oxford, 1982)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.J. Mohr and Y.-K. Kim</subfield>
<subfield code="p">2727</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">45</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. A 45 (1992) 2727</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.J. Mohr</subfield>
<subfield code="p">4421</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">46</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. A 46 (1992) 4421</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L.W. Fullerton and G.A. Rinker, Jr</subfield>
<subfield code="p">1283</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">13</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Phys. Rev. A 13 (1976) 1283</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E.H. Wichmann and N.M. Kroll</subfield>
<subfield code="p">343</subfield>
<subfield code="t">Phys. Rev.</subfield>
<subfield code="v">101</subfield>
<subfield code="y">1956</subfield>
<subfield code="s">Phys. Rev. 101 (1956) 343</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.I. Milstein and V.M. Strakhovenko</subfield>
<subfield code="p">1247</subfield>
<subfield code="t">Zh. Eksp. Teor. Fiz.</subfield>
<subfield code="v">84</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Zh. Eksp. Teor. Fiz. 84 (1983) 1247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.V. Flambaum and V.G. Zelevinsky</subfield>
<subfield code="p">3108</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 3108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.E. Moore, Natl. Stand. Ref. Data Ser. (U.S., Natl. Bur. Stand.), 3 (1971)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.J. Rafac, C.E. Tanner, A.E. Livingston, and H.G. Berry</subfield>
<subfield code="p">3648</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. A 60 (1999) 3648</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.-A. Bouchiat, J. Guéna, and L. Pottier, J. Phys. (France) Lett. 45 (1984) L523</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Arimondo, M. Inguscio, and P. Violino</subfield>
<subfield code="p">31</subfield>
<subfield code="t">Rev. Mod. Phys.</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Rev. Mod. Phys. 49 (1977) 31</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.L. Gilbert, R.N. Watts, and C.E. Wieman</subfield>
<subfield code="p">581</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">27</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Phys. Rev. A 27 (1983) 581</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.J. Rafac and C.E. Tanner</subfield>
<subfield code="p">1027</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. A 56 (1997) 1027</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.-A. Bouchiat and J. Guéna, J. Phys. (France) 49 (1988) 2037</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Cho et al</subfield>
<subfield code="p">1007</subfield>
<subfield code="t">Phys. Rev., A</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. A 55 (1997) 1007</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.A. Vasilyev, I.M. Savukov, M.S. Safronova, and H.G. Berry, e-print</subfield>
<subfield code="r">physics/0112071</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204135</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bertin, V</subfield>
<subfield code="u">Université Blaise Pascal</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Neutrino Indirect Detection of Neutralino Dark Matter in the CMSSM</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">16 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study potential signals of neutralino dark matter indirect detection by neutrino telescopes in a wide range of CMSSM parameters. We also compare with direct detection potential signals taking into account in both cases present and future experiment sensitivities. Only models with neutralino annihilation into gauge bosons can satisfy cosmological constraints and current neutrino indirect detection sensitivities. For both direct and indirect detection, only next generation experiments will be able to really test this kind of models.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nezri, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Orloff, J</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Jean Orloff &lt;orloff@in2p3.fr&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204135.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204135.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204136</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">FISIST-14-2001-CFIF</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">IPPP-01-58</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DCPT-01-114</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Branco, G C</subfield>
<subfield code="u">FCIF</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Supersymmetry and a rationale for small CP violating phases</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">28 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We analyse the CP problem in the context of a supersymmetric extension of the standard model with universal strength of Yukawa couplings. A salient feature of these models is that the CP phases are constrained to be very small by the hierarchy of the quark masses, and the pattern of CKM mixing angles. This leads to a small amount of CP violation from the usual KM mechanism and a significant contribution from supersymmetry is required. Due to the large generation mixing in some of the supersymmetric interactions, the electric dipole moments impose severe constraints on the parameter space, forcing the trilinear couplings to be factorizable in matrix form. We find that the LL mass insertions give the dominant gluino contribution to saturate epsilon_K. The chargino contributions to epsilon'/epsilon are significant and can accommodate the experimental results. In this framework, the standard model gives a negligible contribution to the CP asymmetry in B-meson decay, a_{J/\psi K_s}. However, due to supersymmetric contributions to B_d-\bar{B}_d mixing, the recent large value of a_{J/\psi K_s} can be accommodated.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gomez, M E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Khalil, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Teixeira, A M</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Shaaban Khalil &lt;shaaban.khalil@durham.ac.uk&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204136.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204136.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.G. Cohen, D.B. Kaplan and A.E. Nelson</subfield>
<subfield code="p">27</subfield>
<subfield code="t">Annu. Rev. Nucl. Part. Sci.</subfield>
<subfield code="v">43</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Annu. Rev. Nucl. Part. Sci. 43 (1993) 27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.B. Gavela, P. Hernandez, J. Orloff, O. Pène and C. Quimbay</subfield>
<subfield code="p">345</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">430</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 430 (1994) 345</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">382</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">430</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 430 (1994) 382</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.D. Dolgov</subfield>
<subfield code="r">hep-ph/9707419</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.A. Rubakov and M.E. Shaposhnikov, Usp. Fiz. Nauk : 166 (1996) 493[</subfield>
<subfield code="p">461</subfield>
<subfield code="t">Phys. Usp.</subfield>
<subfield code="v">39</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Usp. 39 (1996) 461</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Abel, S. Khalil and O. Lebedev</subfield>
<subfield code="p">151</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">606</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 606 (2001) 151</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Pokorski, J. Rosiek and C. A. Savoy</subfield>
<subfield code="p">81</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">570</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 570 (2000) 81</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Recent Developments in Gauge Theories, Proceedings of NATO Advanced Study Institute (Cargèse, 1979), edited by G. 't Hooft et al., Plenum, New York (1980)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. C. Branco, J. I. Silva-Marcos and M. N. Rebelo</subfield>
<subfield code="p">446</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">237</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Lett. B 237 (1990) 446</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. C. Branco, D. Emmanuel-Costa and J. I. Silva-Marcos</subfield>
<subfield code="p">107</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 107</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. M. Fishbane and P. Q. Hung</subfield>
<subfield code="p">2743</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 2743</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Q. Hung and M. Seco</subfield>
<subfield code="r">hep-ph/0111013</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. C. Branco and J. I. Silva-Marcos</subfield>
<subfield code="p">166</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">359</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 359 (1995) 166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. V. Romalis, W. C. Griffith and E. N. Fortson</subfield>
<subfield code="p">2505</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 2505</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. P. Jacobs et al</subfield>
<subfield code="p">3782</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">71</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. Lett. 71 (1993) 3782</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">BABAR Collaboration, B. Aubert et al</subfield>
<subfield code="p">091801</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 091801</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">BELLE Collaboration, K. Abe et al</subfield>
<subfield code="p">091802</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 091802</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Eyal and Y. Nir</subfield>
<subfield code="p">21</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">528</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 528 (1998) 21</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and references therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. C. Branco, F. Cagarrinho and F. Krüger</subfield>
<subfield code="p">224</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">459</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 459 (1999) 224</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Fritzsch and J. Plankl</subfield>
<subfield code="p">584</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 584</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Fritzsch and P. Minkowski</subfield>
<subfield code="p">393</subfield>
<subfield code="t">Nuovo Cimento</subfield>
<subfield code="v">30</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Nuovo Cimento 30 (1975) 393</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Fritzsch and D. Jackson</subfield>
<subfield code="p">365</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">66</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Phys. Lett. B 66 (1977) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Kaus and S. Meshkov</subfield>
<subfield code="p">1863</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">42</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. D 42 (1990) 1863</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Fusaoka and Y. Koide</subfield>
<subfield code="p">3986</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 3986</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">See for example, V. Barger, M. S. Berger and P. Ohmann</subfield>
<subfield code="p">1093</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">47</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. D 47 (1993) 1093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">4908</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 4908</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. C. Branco and J. I. Silva-Marcos</subfield>
<subfield code="p">390</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">331</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 331 (1994) 390</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Particle Data Group</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Jarlskog</subfield>
<subfield code="p">1039</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Rev. Lett. 55 (1985) 1039</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">491</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">29</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Z. Phys. C 29 (1985) 491</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dugan, B. Grinstein and L. J. Hall</subfield>
<subfield code="p">413</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">255</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Nucl. Phys. B 255 (1985) 413</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. A. Demir, A. Masiero and O. Vives</subfield>
<subfield code="p">230</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">479</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 479 (2000) 230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. M. Barr and S. Khalil</subfield>
<subfield code="p">035005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 035005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. A. Abel and J. M. Fr ere</subfield>
<subfield code="p">1632</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 1632</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Khalil, T. Kobayashi and A. Masiero</subfield>
<subfield code="p">075003</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 075003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Khalil and T. Kobayashi</subfield>
<subfield code="p">341</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">460</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 460 (1999) 341</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Khalil, T. Kobayashi and O. Vives</subfield>
<subfield code="p">275</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">580</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 580 (2000) 275</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Kobayashi and O. Vives</subfield>
<subfield code="p">323</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">406</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 406 (2001) 323</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Abel, D. Bailin, S. Khalil and O. Lebedev</subfield>
<subfield code="p">241</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">504</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 504 (2001) 241</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Masiero, M. Piai, A. Romanino and L. Silvestrini</subfield>
<subfield code="p">075005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 075005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and references therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. G. Harris et al</subfield>
<subfield code="p">904</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 904</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Ciuchini et al</subfield>
<subfield code="p">008</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 10 (1998) 008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Khalil and O. Lebedev</subfield>
<subfield code="p">387</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">515</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 515 (2001) 387</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. J. Buras</subfield>
<subfield code="r">hep-ph/0101336</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">See, for example, G. C. Branco, L. Lavoura and J. P. Silva, CP Violation, Interna-tional Series of Monographs on Physics (103), Oxford University Press, Clarendon (1999)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. Gabbiani, E. Gabrielli, A. Masiero and L. Silverstrini</subfield>
<subfield code="p">321</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">477</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 477 (1996) 321</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Fanti et al</subfield>
<subfield code="p">335</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">465</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 465 (1999) 335</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Gershon (NA48)</subfield>
<subfield code="r">hep-ex/0101034</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. J. Buras, M. Jamin and M. E. Lautenbacher</subfield>
<subfield code="p">209</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">408</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Nucl. Phys. B 408 (1993) 209</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Bertolini, M. Fabbrichesi and E. Gabrielli</subfield>
<subfield code="p">136</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">327</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. B 327 (1994) 136</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Colangelo and G. Isidori</subfield>
<subfield code="p">009</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">09</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 09 (1998) 009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Buras, G. Colan-gelo, G. Isidori, A. Romanino and L. Silvestrini</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">566</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 566 (2000) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">OPAL Collaboration, K. Ackerstaff et al</subfield>
<subfield code="p">379</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">5</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 5 (1998) 379</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">CDF Collaboration, T. Affolder et al</subfield>
<subfield code="p">072005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 072005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">CDF Collaboration, C. A. Blocker, Proceedings of 3rd Workshop on Physics and Detectors for DAPHNE (DAPHNE 99), Frascati, Italy, 16-19 Nov 1999; ALEPH Collaboration, R. Barate et al</subfield>
<subfield code="p">259</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">492</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 492 (2000) 259</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Bertolini, F. Borzumati, A. Masiero and G. Ridolfi</subfield>
<subfield code="p">591</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">353</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Nucl. Phys. B 353 (1991) 591</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">CLEO Collaboration, S. Ahmed et al, CLEO-CONF-99-10</subfield>
<subfield code="r">hep-ex/9908022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Gabrielli, S. Khalil and E. Torrente­Lujan</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">594</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 594 (2001) 3</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204137</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DO-TH-02-05</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Paschos, E A</subfield>
<subfield code="u">Univ. Dortmund</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Leptogenesis with Majorana neutrinos</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Dortmund</subfield>
<subfield code="b">Dortmund Univ. Inst. Phys.</subfield>
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">6 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">I review the origin of the lepton asymmetry which is converted to a baryon excess at the electroweak scale. This scenario becomes more attractive if we can relate it to other physical phenomena. For this reason I elaborate on theconditions of the early universe which lead to a sizable lepton asymmetry. Then I describe methods and models which relate the low energy parameters of neutrinos to the high energy (cosmological) CP-violation and to neutrinolessdouble beta-decay.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Emmanuel A. Paschos &lt;paschos@hal1.physik.uni-dortmund.de&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204137.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204137.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">CONTRIBUTED TO 1ST WORKSHOP ON NEUTRINO - NUCLEUS INTERACTIONS IN THE FEW GEV REGION (NUINT01) TSUKUBA JAPAN 13-16 DEC 2001 6 PAGES 6 FIGURES</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">1.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Fukugida and Yanagida</subfield>
<subfield code="p">45</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">174</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Lett. B 174 (1986) 45</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">2.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Flanz, E.A. Paschos and U. Sarkar</subfield>
<subfield code="p">248</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">345</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 345 (1995) 248</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">3.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Luty</subfield>
<subfield code="p">445</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">45</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rev. D 45 (1992) 445</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">4.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Flanz, E.A. Paschos, U. Sarkar and J. Weiss</subfield>
<subfield code="p">693</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">389</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 389 (1996) 693</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Flanz and E.A. Paschos</subfield>
<subfield code="p">113009</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 113009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">5.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Pilaftsis</subfield>
<subfield code="p">5431</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 5431</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">6.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Buchmüller and M. Plümacher</subfield>
<subfield code="p">354</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">431</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 431 (1998) 354</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">7.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Covi, E. Roulet and F. Vissani</subfield>
<subfield code="p">169</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">384</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 384 (1996) 169</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E.K. Akhmedov, V.A. Rubakov and A.Y. Smirnov</subfield>
<subfield code="p">1359</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 81 (1998) 1359</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">9.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.Y. Khlepnikov and M.E. Shaposhnikov</subfield>
<subfield code="p">885</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">308</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Nucl. Phys. B 308 (1988) 885</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and references therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">10.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Ellis, S. Lola and D.V. Nanopoulos</subfield>
<subfield code="p">87</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">452</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 452 (1999) 87</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">11.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Lazarides and N. Vlachos</subfield>
<subfield code="p">482</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">459</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 459 (1999) 482</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">12.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.S. Berger and B. Brahmachari</subfield>
<subfield code="p">073009</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 073009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">13.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Kang, S.K. Kang and U. Sarkar</subfield>
<subfield code="p">391</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">486</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 486 (2000) 391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">14.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Goldberg</subfield>
<subfield code="p">389</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">474</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 474 (2000) 389</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">15.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Hirsch and S.F. King</subfield>
<subfield code="p">113005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 113005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">16.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Nielsen and Y. Takanishi</subfield>
<subfield code="p">241</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">507</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 507 (2001) 241</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">17.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Buchmüller and D. Wyler</subfield>
<subfield code="p">291</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">521</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 521 (2001) 291</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">18.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Falcone and Tramontano</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">506</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 506 (2001) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. Buccella et al</subfield>
<subfield code="p">241</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">524</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Lett. B 524 (2002) 241</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">19.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G.C. Branco et al., Nucl. Phys., B617 (2001)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">475</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">20.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.S. Joshipura, E.A. Paschos and W. Rodejo-hann</subfield>
<subfield code="p">227</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">611</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 611 (2001) 227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">29</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">0108</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 29</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">21.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">mentioned by H.V. Klapdor­Kleingrothaus, in</subfield>
<subfield code="r">hep-ph/0103062</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204138</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DO-TH-02-06</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Paschos, E A</subfield>
<subfield code="u">Univ. Dortmund</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Neutrino Interactions at Low and Medium Energies</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Dortmund</subfield>
<subfield code="b">Dortmund Univ. Inst. Phys.</subfield>
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">9 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We discuss the calculations for several neutrino induced reactions from low energies to the GeV region. Special attention is paid to nuclear corrections when the targets are medium or heavy nuclei. Finally, we present several ratiosof neutral to charged current reactions whose values on isoscalar targets can be estimated accurately. The ratios are useful for investigating neutrino oscillations in Long Baseline experiments.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Emmanuel A. Paschos &lt;paschos@hal1.physik.uni-dortmund.de&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204138.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204138.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">CONTRIBUTED TO 1ST WORKSHOP ON NEUTRINO - NUCLEUS INTERACTIONS IN THE FEW GEV REGION (NUINT01) TSUKUBA JAPAN 13-16 DEC 2001 9 PAGES 15 FIGURES</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">1.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. A. Paschos, L. Pasquali and J. Y. Yu</subfield>
<subfield code="p">263</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">588</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 588 (2000) 263</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">2.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. A. Paschos and J. Y. Yu</subfield>
<subfield code="p">033002</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">65</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 033002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">3.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Albright and C. Jarlskog</subfield>
<subfield code="p">467</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">84</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Nucl. Phys. B 84 (1975) 467</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">4.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. J. Baker et al</subfield>
<subfield code="p">617</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">25</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rev. D 25 (1982) 617</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">5.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Hirai, S. Kumano and M. Miyama</subfield>
<subfield code="p">034003</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 034003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">6.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. J. Eskola, V. J. Kolhinen and P. V. Ru-uskanen</subfield>
<subfield code="p">351</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">535</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 535 (1998) 351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. J. Eskola, V. J. Kolhinen, P. V. Ruuskanen and C. A. Salgado</subfield>
<subfield code="p">645</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">661</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. A 661 (1999) 645</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">7.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">See Figure 1 in Ref. [2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. A. Schreiner and F. V. von Hippel</subfield>
<subfield code="p">333</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1973</subfield>
<subfield code="s">Nucl. Phys. B 58 (1973) 333</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">9.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. L. Adler, S. Nussinov. and E. A. Paschos</subfield>
<subfield code="p">2125</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">9</subfield>
<subfield code="y">1974</subfield>
<subfield code="s">Phys. Rev. D 9 (1974) 2125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">10.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. L. Adler</subfield>
<subfield code="p">2644</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">12</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Phys. Rev. D 12 (1975) 2644</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">11.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Musset and J. P. Vialle</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">39</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Rep. 39 (1978) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">12.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Kluttig, J. G. Morfin and W. Van Do-minick</subfield>
<subfield code="p">446</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">71</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Phys. Lett. B 71 (1977) 446</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">13.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Merenyi et al</subfield>
<subfield code="p">743</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">45</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rev. D 45 (1982) 743</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">14.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. K. Singh, M. T. Vicente-Vacas and E. Oset</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">416</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 416 (1998) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">15.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. A. Paschos and L. Wolfenstein</subfield>
<subfield code="p">91</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">7</subfield>
<subfield code="y">1973</subfield>
<subfield code="s">Phys. Rev. D 7 (1973) 91</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">see equation (15)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">16.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. A. Paschos, Precise Ratios for Neutrino-Nucleon and Neutrino-Nucleus Interactions, Dortmund preprint­DO­TH 02/02; hep-ph 0204090</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">17.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Gounaris, E. A. Paschos and P. Porfyriadis</subfield>
<subfield code="p">63</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">525</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Lett. B 525 (2002) 63</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">18.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Bouchez and I. Giomataris, CEA/Saclay internal note, DAPNIA/01-07, June 2001</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204139</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Van Beveren, E</subfield>
<subfield code="u">University of Coimbra</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Remarks on the f_0(400-1200) scalar meson as the dynamically generated chiral partner of the pion</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">15 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The quark-level linear sigma model is revisited, in particular concerning the identification of the f_0(400-1200) (or \sigma(600)) scalar meson as the chiral partner of the pion. We demonstrate the predictive power of the linear sigma model through the pi-pi and pi-N s-wave scattering lengths, as well as several electromagnetic, weak, and strong decays of pseudoscalar and vector mesons. The ease with which the data for these observables are reproduced in the linear sigma model lends credit to the necessity to include the sigma as a fundamental q\bar{q} degree of freedom, to be contrasted with approaches like chiral perturbation theory or the confining NJL model of Shakin and Wang.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kleefeld, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rupp, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Scadron, M D</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">George Rupp &lt;george@ajax.ist.utl.pt&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204139.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204139.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Beveren, Eef van</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Kleefeld, Frieder</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Rupp, George</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Scadron, Michael D.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. E. Groom et al. [Particle Data Group Collaboration]</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Isgur and J. Speth</subfield>
<subfield code="p">2332</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">77</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 77 (1996) 2332</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. A. Törnqvist and M. Roos</subfield>
<subfield code="p">1575</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">76</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 76 (1996) 1575</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9511210</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Harada, F. Sannino, and J. Schechter</subfield>
<subfield code="p">1603</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">78</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 78 (1997) 1603</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9609428</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. van Beveren, T. A. Rijken, K. Metzger, C. Dullemond, G. Rupp, and J. E. Ribeiro</subfield>
<subfield code="p">615</subfield>
<subfield code="t">Z. Phys., C</subfield>
<subfield code="v">30</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Z. Phys. C 30 (1986) 615</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Eef van Beveren and George Rupp</subfield>
<subfield code="p">469</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Eur. Phys. J. C 10 (1999) 469</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9806246</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Boglione and M. R. Pennington</subfield>
<subfield code="r">hep-ph/0203149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Gell-Mann and M. Lévy</subfield>
<subfield code="p">705</subfield>
<subfield code="t">Nuovo Cimento</subfield>
<subfield code="v">16</subfield>
<subfield code="y">1960</subfield>
<subfield code="s">Nuovo Cimento 16 (1960) 705</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">also see V. de Alfaro, S. Fubini, G. Furlan, and C. Rossetti, in Currents in Hadron Physics, North-Holland Publ., Amsterdam, Chap. 5 (1973)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Delbourgo and M. D. Scadron</subfield>
<subfield code="p">251</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 251</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9910242</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">657</subfield>
<subfield code="t">Int. J. Mod. Phys., A</subfield>
<subfield code="v">13</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Int. J. Mod. Phys. A 13 (1998) 657</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9807504</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Nambu and G. Jona-Lasinio</subfield>
<subfield code="p">345</subfield>
<subfield code="t">Phys. Rev.</subfield>
<subfield code="v">122</subfield>
<subfield code="y">1961</subfield>
<subfield code="s">Phys. Rev. 122 (1961) 345</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. M. Shakin and Huangsheng Wang</subfield>
<subfield code="p">094020</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 094020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. M. Shakin and Huangsheng Wang</subfield>
<subfield code="p">014019</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">63</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 63 (2000) 014019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Rupp, E. van Beveren, and M. D. Scadron</subfield>
<subfield code="p">078501</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">65</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 078501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0104087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Eef van Beveren, George Rupp, and Michael D. Scadron</subfield>
<subfield code="p">300</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">495</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 495 (2000) 300</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">365</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">509</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 509 (2000) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
<subfield code="r">hep-ph/0009265</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Frieder Kleefeld, Eef van Beveren, George Rupp, and Michael D. Scadron</subfield>
<subfield code="r">hep-ph/0109158</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. D. Scadron</subfield>
<subfield code="p">239</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">26</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rev. D 26 (1982) 239</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">2076</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">29</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Rev. D 29 (1984) 2076</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">669</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">7</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Mod. Phys. Lett. A 7 (1992) 669</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Lévy</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Nuovo Cimento, A</subfield>
<subfield code="v">52</subfield>
<subfield code="y">1967</subfield>
<subfield code="s">Nuovo Cimento A 52 (1967) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Gasiorowicz and D. A. Geffen</subfield>
<subfield code="p">531</subfield>
<subfield code="t">Rev. Mod. Phys.</subfield>
<subfield code="v">41</subfield>
<subfield code="y">1969</subfield>
<subfield code="s">Rev. Mod. Phys. 41 (1969) 531</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Schechter and Y. Ueda</subfield>
<subfield code="p">2874</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">3</subfield>
<subfield code="y">1971</subfield>
<subfield code="s">Phys. Rev. D 3 (1971) 2874</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">987</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">8</subfield>
<subfield code="y">1973</subfield>
<subfield code="s">Phys. Rev. D 8 (1973) 987</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Eguchi</subfield>
<subfield code="p">2755</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">14</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Phys. Rev. D 14 (1976) 2755</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Eguchi</subfield>
<subfield code="p">611</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">17</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Rev. D 17 (1978) 611</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">The once-subtracted dispersion-relation result</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204140</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-TH-2002-069</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">RM3-TH-02-4</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Aglietti, U</subfield>
<subfield code="u">CERN</subfield>
<subfield code="0">INSTITUTION|(SzGeCERN)iii0002</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A new model-independent way of extracting |V_ub/V_cb|</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">20 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The ratio between the photon spectrum in B -&gt; X_s gamma and the differential semileptonic rate with respect to the hadronic variable M_X/E_X is a short-distance quantity calculable in perturbation theory and independent of the Fermi motion of the b quark in the B meson. We present a NLO analysis of this ratio and show how it can be used to determine |V_ub/V_cb| independently of any model for the shape function. We also discuss how this relation can be used to test the validity of the shape-function theory on the data.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ciuchini, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gambino, P</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Paolo Gambino &lt;paolo.gambino@cern.ch&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204140.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204140.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">TH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Aglietti, Ugo</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Ciuchini, Marco</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Gambino, Paolo</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. I. Bigi, M. A. Shifman, N. G. Uraltsev and A. I. Vainshtein</subfield>
<subfield code="p">496</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">71</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. Lett. 71 (1993) 496</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9304225</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">2467</subfield>
<subfield code="t">Int. J. Mod. Phys., A</subfield>
<subfield code="v">9</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Int. J. Mod. Phys. A 9 (1994) 2467</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9312359</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Neubert</subfield>
<subfield code="p">4623</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 4623</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9312311</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Akhoury and I. Z. Rothstein</subfield>
<subfield code="p">2349</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 2349</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9512303</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. K. Leibovich and I. Z. Rothstein</subfield>
<subfield code="p">074006</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 074006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9907391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. K. Leibovich, I. Low and I. Z. Rothstein</subfield>
<subfield code="p">053006</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 053006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9909404</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. K. Leibovich, I. Low and I. Z. Rothstein</subfield>
<subfield code="p">86</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">486</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 486 (2000) 86</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0005124</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Neubert</subfield>
<subfield code="p">88</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">513</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 513 (2001) 88</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0104280</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. K. Leibovich, I. Low and I. Z. Rothstein</subfield>
<subfield code="p">83</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">513</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 513 (2001) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0105066</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. D. Barger, C. S. Kim and R. J. Phillips</subfield>
<subfield code="p">629</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">251</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Lett. B 251 (1990) 629</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. F. Falk, Z. Ligeti and M. B. Wise</subfield>
<subfield code="p">225</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">406</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 406 (1997) 225</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9705235</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. I. Bigi, R. D. Dikeman and N. Uraltsev</subfield>
<subfield code="p">453</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">4</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Eur. Phys. J. C 4 (1998) 453</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9706520</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Barate et al. (ALEPH Coll.)</subfield>
<subfield code="p">555</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">6</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Eur. Phys. J. C 6 (1999) 555</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Acciarri et al. (L3 Coll.), Phys. Lett., B436 (1998); P. Abreu et al. (DELPHI Coll.)</subfield>
<subfield code="p">14</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">478</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 478 (2000) 14</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Abbiendi et al. (OPAL Coll.)</subfield>
<subfield code="p">399</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">21</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Eur. Phys. J. C 21 (2001) 399</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Bornheim [CLEO Coll.], arXiv</subfield>
<subfield code="r">hep-ex/0202019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. W. Bauer, Z. Ligeti and M. E. Luke</subfield>
<subfield code="p">395</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">479</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 479 (2000) 395</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0002161</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. W. Bauer, Z. Ligeti and M. E. Luke</subfield>
<subfield code="p">113004</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 113004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0107074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U. Aglietti, arXiv</subfield>
<subfield code="r">hep-ph/0010251</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U. Aglietti</subfield>
<subfield code="p">308</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">515</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Lett. B 515 (2001) 308</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0103002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U. Aglietti</subfield>
<subfield code="p">293</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">610</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 610 (2001) 293</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0104020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Ali and E. Pietarinen</subfield>
<subfield code="p">519</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">154</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Nucl. Phys. B 154 (1979) 519</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Altarelli, N. Cabibbo, G. Corbò, L. Maiani and G. Martinelli</subfield>
<subfield code="p">365</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">208</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Nucl. Phys. B 208 (1982) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. L. Jaffe and L. Randall</subfield>
<subfield code="p">79</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">412</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 412 (1994) 79</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9306201</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Neubert</subfield>
<subfield code="p">3392</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">49</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 3392</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9311325</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U. Aglietti, M. Ciuchini, G. Corbò, E. Franco, G. Martinelli and L. Silvestrini</subfield>
<subfield code="p">411</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">432</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 432 (1998) 411</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9804416</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Catani, L. Trentadue, G. Turnock and B. R. Webber</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">407</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Nucl. Phys. B 407 (1993) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Lubicz</subfield>
<subfield code="p">116</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">94</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 94 (2001) 116</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-lat/0012003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. J. Buras, M. Jamin, M. E. Lautenbacher and P. H. Weisz</subfield>
<subfield code="p">37</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">400</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Nucl. Phys. B 400 (1993) 37</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9211304</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Ciuchini, E. Franco, G. Martinelli and L. Reina</subfield>
<subfield code="p">403</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">415</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. B 415 (1994) 403</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9304257</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Chetyrkin, M. Misiak and M. Munz</subfield>
<subfield code="p">206</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">400</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 400 (1997) 206</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[Erratum</subfield>
<subfield code="p">414</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">425</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 425 (1997) 414</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">] [arXiv</subfield>
<subfield code="r">hep-ph/9612313</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and refs. therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Gambino and M. Misiak</subfield>
<subfield code="p">338</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">611</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 611 (2001) 338</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0104034</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.B. Voloshin</subfield>
<subfield code="p">275</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">397</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 397 (1997) 275</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Khodjamirian et al</subfield>
<subfield code="p">167</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">402</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 402 (1997) 167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Z. Ligeti, L. Randall and M.B. Wise</subfield>
<subfield code="p">178</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">402</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 402 (1997) 178</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.K. Grant, A.G. Morgan, S. Nussinov and R.D. Peccei</subfield>
<subfield code="p">3151</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 3151</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Buchalla, G. Isidori and S.J. Rey</subfield>
<subfield code="p">594</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">511</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 511 (1998) 594</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Gambino and U. Haisch</subfield>
<subfield code="p">020</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">0110</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">J. High Energy Phys. 0110 (2001) 020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0109058</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and</subfield>
<subfield code="p">001</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">0009</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">J. High Energy Phys. 0009 (2000) 001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0007259</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. De Fazio and M. Neubert</subfield>
<subfield code="p">017</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">9906</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">J. High Energy Phys. 9906 (1999) 017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9905351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">U. Aglietti, arXiv</subfield>
<subfield code="r">hep-ph/0105168</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">to appear in the Proceedings of "XIII Convegno sulla Fisica al LEP (LEPTRE 2001)", Rome (Italy), 18-20 April 2001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. van Ritbergen</subfield>
<subfield code="p">353</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">454</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 454 (1999) 353</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. W. Bauer, M. E. Luke and T. Mannel, arXiv</subfield>
<subfield code="r">hep-ph/0102089</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">The Particle Data Group</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Eur. Phys. J., C</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Ciuchini et al</subfield>
<subfield code="p">013</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">0107</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">J. High Energy Phys. 0107 (2001) 013</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0012308</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Chen et al., CLEO Coll</subfield>
<subfield code="p">251807</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 251807</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Pott</subfield>
<subfield code="p">938</subfield>
<subfield code="t">Phys. Rev. D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev., D 54 (1996) 938</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9512252</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Greub, T. Hurth and D. Wyler</subfield>
<subfield code="p">3350</subfield>
<subfield code="t">Phys. Rev. D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev., D 54 (1996) 3350</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/9603404</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. J. Buras, A. Czarnecki, M. Misiak and J. Urban</subfield>
<subfield code="p">488</subfield>
<subfield code="t">Nucl. Phys. B</subfield>
<subfield code="v">611</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys., B 611 (2001) 488</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">[arXiv</subfield>
<subfield code="r">hep-ph/0105160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. J. Buras, A. Czarnecki, M. Misiak and J. Urban, arXiv</subfield>
<subfield code="r">hep-ph/0203135</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204141</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Appelquist, T</subfield>
<subfield code="u">Yale University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Neutrino Masses in Theories with Dynamical Electroweak Symmetry Breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">4 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We address the problem of accounting for light neutrino masses in theories with dynamical electroweak symmetry breaking. As a possible solution, we embed (extended) technicolor in a theory in which a $|\Delta L|=2$ neutrinocondensate forms at a scale $\Lambda_N \gsim 10^{11}$ GeV, and produces acceptably small (Majorana) neutrino masses. We present an explicit model illustrating this mechanism.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Shrock, R</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Robert Shrock &lt;shrock@insti.physics.sunysb.edu&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204141.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204141.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Appelquist, Thomas</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Shrock, Robert</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Fukuda, et al</subfield>
<subfield code="p">5651</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 5651</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Fukuda et al. ibid, 5656 (2001) (SuperK) and Q.R. Ahmad et al</subfield>
<subfield code="p">071301</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 071301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(SNO). Other data is from the Homestake, Kamiokande, GALLEX, and SAGE experiments</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Fukuda et al</subfield>
<subfield code="p">9</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">433</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 433 (1998) 9</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">1562</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">81</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. Lett. 81 (1998) 1562</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">2644</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 2644</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">185</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">467</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 467 (1999) 185</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(SuperK) and data from Kamiokande, IMB, Soudan-2, and MACRO experiments. The data, which is consistent with results from K2K, indicates that |m( 3)2 - m( 2)2| |m( 3)2 - m( 1)2| 2.5 × 10-3 eV2. With a hierarchical mass assumption, one infers m( 3) m232 0.05 eV</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Weinberg</subfield>
<subfield code="p">1277</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">19</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Phys. Rev. D 19 (1979) 1277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Susskind</subfield>
<subfield code="p">2619</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">20</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Phys. Rev. D 20 (1979) 2619</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Eichten and K. Lane</subfield>
<subfield code="p">125</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">90</subfield>
<subfield code="y">1980</subfield>
<subfield code="s">Phys. Lett. B 90 (1980) 125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Sikivie, L. Susskind, M. Voloshin, and V. Zakharov</subfield>
<subfield code="p">189</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">173</subfield>
<subfield code="y">1980</subfield>
<subfield code="s">Nucl. Phys. B 173 (1980) 189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Holdom</subfield>
<subfield code="p">301</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">150</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Lett. B 150 (1985) 301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K Yamawaki, M. Bando, and K. Matumoto</subfield>
<subfield code="p">1335</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">56</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Rev. Lett. 56 (1986) 1335</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist, D. Karabali, and L. Wijeward-hana</subfield>
<subfield code="p">957</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Rev. Lett. 57 (1986) 957</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist and L.C.R. Wijewardhana</subfield>
<subfield code="p">774</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">35</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Rev. D 35 (1987) 774</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">568</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">36</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Rev. D 36 (1987) 568</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Holdom</subfield>
<subfield code="p">1637</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">23</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Phys. Rev. D 23 (1981) 1637</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">169</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">246</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Lett. B 246 (1990) 169</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist and J. Terning</subfield>
<subfield code="p">139</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">315</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Lett. B 315 (1993) 139</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist, J. Terning, and L. Wijewardhana</subfield>
<subfield code="p">1214</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">77</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 77 (1996) 1214</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">2767</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">79</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. Lett. 79 (1997) 2767</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist, N. Evans, S. Selipsky</subfield>
<subfield code="p">145</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">374</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 374 (1996) 145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist and S. Selipsky</subfield>
<subfield code="p">364</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">400</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 400 (1997) 364</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist, J. Terning</subfield>
<subfield code="p">2116</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 2116</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Recent reviews include R. Chivukula</subfield>
<subfield code="r">hep-ph/0011264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Lane</subfield>
<subfield code="r">hep-ph/0202255</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Hill E. Simmons</subfield>
<subfield code="r">hep-ph/0203079</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Gell-Mann, P. Ramond, R. Slansky, in Supergrav-ity (North Holland, Amsterdam, 1979), p. 315; T. Yanagida in proceedings of Workshop on Unified Theory and Baryon Number in the Universe, KEK, 1979</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Although we require our model to yield a small S, a re-analysis of precision electroweak data is called for in view of the value of sin2 W reported in G. Zeller et al</subfield>
<subfield code="p">091802</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">88</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. Lett. 88 (2002) 091802</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">For a vectorial SU(N) theory with Nf fermions in the fundamental representation, an IRFP occurs if Nf &gt; Nf,min,IR, where, perturbatively, Nf,min,IR 34N3/(13N2 -3). At this IRFP, using the criticality con-dition [13], the theory is expected to exist in a confining phase with S SB if Nf,min,IR &lt; Nf &lt; Nf,con, where Nf,con (2/5)N(50N2 - 33)/(5N2 - 3) and in a confor-mal phase if Nf,con &lt; Nf &lt; 11N/2. For N = 2 we have Nf,min,IR 5 and Nf,con 8, respectively. For attempts at lattice measurements, see R. Mawhinney</subfield>
<subfield code="p">57</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 83 (2000) 57</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">In the approximation of a single-gauge-boson exchange, the critical coupling for the condensation of fermion rep-resentations R1 × R2 Rc is 3 2 C2 = 1, where C2 = [C2(R1) + C2(R2) - C2(Rc)], and C2(R) is the quadratic Casimir invariant. Instanton contributions are also important [7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Gasser, H. Leutwyler</subfield>
<subfield code="p">77</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rep. 87 (1982) 77</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Leutwyler, in</subfield>
<subfield code="p">108</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">94</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 94 (2001) 108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Ali Khan et al</subfield>
<subfield code="p">4674</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 4674</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Wingate et al., Int. J. Mod. Phys., A16 S B1 (2001) 585</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Here a = exp[ETC,a fF (dµ/µ) ( (µ))], and in walking TC theories the anomalous dimension 1 so a ETC,a/fF</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">By convention, we write SM-singlet neutrinos as right-handed fields j,R. These are assigned lepton number 1. Thus, in writing SU(4)PS SU(3)c × U(1), the U(1) is not U(1)B-L since some neutrinos in the model are SU(4)PS-singlet states</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Z. Maki, M. Nakagawa, S. Sakata</subfield>
<subfield code="p">870</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">28</subfield>
<subfield code="y">1962</subfield>
<subfield code="s">Prog. Theor. Phys. 28 (1962) 870</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(2 × 2 matrix); B. W. Lee, S. Pakvasa, R. Shrock, and H. Sugawara</subfield>
<subfield code="p">937</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">38</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Phys. Rev. Lett. 38 (1977) 937</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">(3 × 3 matrix)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist and R. Shrock, to appear</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Dienes, E. Dudas, T. Gherghetta</subfield>
<subfield code="p">25</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">557</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 557 (1999) 25</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos, G. Dvali, and J. March-Russell</subfield>
<subfield code="r">hep-ph/9811448</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Appelquist, B. Dobrescu, E. Ponton, and H.-U. Yee</subfield>
<subfield code="r">hep-ph/0201131</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0204100</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">LBNL-50097</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">UCB-PTH-02-14</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Gaillard, M K</subfield>
<subfield code="u">University of California, Berkeley</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Modular Invariant Anomalous U(1) Breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">11 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">19 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We describe the effective supergravity theory present below the scale of spontaneous gauge symmetry breaking due to an anomalous U(1), obtained by integrating out tree-level interactions of massive modes. A simple case is examined insome detail. We find that the effective theory can be expressed in the linear multiplet formulation, with some interesting consequences. Among them, the modified linearity conditions lead to new interactions not present in the theorywithout an anomalous U(1). These additional interactions are compactly expressed through a superfield functional.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Giedt, J</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Mary K Gaillard &lt;gaillard@thsrv.lbl.gov&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204100.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204100.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Gaillard, Mary K.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Giedt, Joel</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Giedt</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Ann. Phys. (N.Y.)</subfield>
<subfield code="v">297</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Ann. Phys. (N.Y.) 297 (2002) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0108244</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine, N. Seiberg and E. Witten</subfield>
<subfield code="p">585</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">289</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Nucl. Phys. B 289 (1987) 585</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. J. Atick, L. Dixon and A. Sen</subfield>
<subfield code="p">109</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">292</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Nucl. Phys. B 292 (1987) 109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine, I. Ichinose and N. Seiberg</subfield>
<subfield code="p">253</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">293</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Nucl. Phys. B 293 (1987) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. B. Green and J. H. Schwarz</subfield>
<subfield code="p">117</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">149</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Lett. B 149 (1984) 117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, G. Girardi and R. Grimm</subfield>
<subfield code="p">111</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">265</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Lett. B 265 (1991) 111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Müller</subfield>
<subfield code="p">292</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">264</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Nucl. Phys. B 264 (1986) 292</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, G. Girardi, R. Grimm and M. Müller</subfield>
<subfield code="p">389</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">189</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 389</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, G. Girardi and R. Grimm</subfield>
<subfield code="p">255</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">343</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rep. 343 (2001) 255</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Girardi and R. Grimm</subfield>
<subfield code="p">49</subfield>
<subfield code="t">Ann. Phys. (N.Y.)</subfield>
<subfield code="v">272</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Ann. Phys. (N.Y.) 272 (1999) 49</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, M. K. Gaillard and Y.-Y. Wu</subfield>
<subfield code="p">109</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">481</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 481 (1996) 109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, M. K. Gaillard and Y.-Y. Wu</subfield>
<subfield code="p">27</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">493</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 493 (1997) 27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, M. K. Gaillard and Y.-Y. Wu</subfield>
<subfield code="p">288</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">412</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 412 (1997) 288</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. K. Gaillard, B. Nelson and Y.-Y. Wu</subfield>
<subfield code="p">549</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">459</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 459 (1999) 549</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. K. Gaillard and B. Nelson</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">571</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 571 (2000) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Ferrara, C. Kounnas and M. Porrati</subfield>
<subfield code="p">263</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">181</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Lett. B 181 (1986) 263</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Cvetič, J. Louis and B. A. Ovrut</subfield>
<subfield code="p">227</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">206</subfield>
<subfield code="y">1988</subfield>
<subfield code="s">Phys. Lett. B 206 (1988) 227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. E. Ibáñez and D. Lüst</subfield>
<subfield code="p">305</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">382</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 382 (1992) 305</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. K. Gaillard</subfield>
<subfield code="p">125</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">342</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 342 (1995) 125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">105027</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 105027</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Phys. Rev. D 61 (2000) 084028</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Witten</subfield>
<subfield code="p">151</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">155</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Lett. B 155 (1985) 151</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. J. Dixon, V. S. Kaplunovsky and J. Louis</subfield>
<subfield code="p">27</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">329</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Nucl. Phys. B 329 (1990) 27</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.J. Gates, M. Grisaru, M. Roček and W. Siegel, Superspace (Benjamin/Cummings, 1983)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.K. Gaillard and T.R. Taylor</subfield>
<subfield code="p">577</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">381</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Nucl. Phys. B 381 (1992) 577</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Wess and J. Bagger, Supersymmetry and supergravity (Princeton, 1992)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Binétruy, C. Deffayet and P. Peter</subfield>
<subfield code="p">163</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">441</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 441 (1998) 163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. K. Gaillard and J. Giedt, in progress</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204142</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Chacko, Z</subfield>
<subfield code="u">University of California, Berkeley</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Fine Structure Constant Variation from a Late Phase Transition</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">9 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Recent experimental data indicates that the fine structure constant alpha may be varying on cosmological time scales. We consider the possibility that such a variation could be induced by a second order phase transition which occurs at late times (z ~ 1 - 3) and involves a change in the vacuum expectation value (vev) of a scalar with milli-eV mass. Such light scalars are natural in supersymmetric theories with low SUSY breaking scale. If the vev of this scalar contributes to masses of electrically charged fields, the low-energy value of alpha changes during the phase transition. The observational predictions of this scenario include isotope-dependent deviations from Newtonian gravity at sub-millimeter distances, and (if the phase transition is a sharp event on cosmological time scales) the presence of a well-defined step-like feature in the alpha(z) plot. The relation between the fractional changes in alpha and the QCD confinement scale is highly model dependent, and even in grand unified theories the change in alpha does not need to be accompanied by a large shift in nucleon masses.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Grojean, C</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Perelstein, M</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Maxim Perelstein &lt;meperelstein@lbl.gov&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204142.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204142.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. K. Webb, M. T. Murphy, V. V. Flambaum, V. A. Dzuba, J. D. Barrow, C. W. Churchill, J. X. Prochaska and A. M. Wolfe</subfield>
<subfield code="p">091301</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 091301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/0012539</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">see also J. K. Webb, V. V. Flambaum, C. W. Churchill, M. J. Drinkwater and J. D. Barrow</subfield>
<subfield code="p">884</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 884</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9803165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. A. Dzuba, V. V. Flambaum, and J. K. Webb</subfield>
<subfield code="p">888</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 888</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. A. Dirac</subfield>
<subfield code="p">323</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">139</subfield>
<subfield code="y">1937</subfield>
<subfield code="s">Nature 139 (1937) 323</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">for a historical perspective, see F. Dyson, "The fundamental constants and their time variation", in Aspects of Quantum Theory, eds A. Salam and E. Wigner</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Damour</subfield>
<subfield code="r">gr-qc/0109063</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. D. Bekenstein</subfield>
<subfield code="p">1527</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">25</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Phys. Rev. D 25 (1982) 1527</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. R. Dvali and M. Zaldarriaga</subfield>
<subfield code="p">091303</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">88</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. Lett. 88 (2002) 091303</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0108217</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. A. Olive and M. Pospelov</subfield>
<subfield code="p">085044</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">65</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 085044</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0110377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Banks, M. Dine and M. R. Douglas</subfield>
<subfield code="p">131301</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">88</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Rev. Lett. 88 (2002) 131301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0112059</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Langacker, G. Segrè and M. J. Strassler</subfield>
<subfield code="p">121</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">528</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Phys. Lett. B 528 (2002) 121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0112233</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Y. Potekhin, A. V. Ivanchik, D. A. Varshalovich, K. M. Lanzetta, J. A. Baldwin, G. M. Williger and R. F. Carswell</subfield>
<subfield code="p">523</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">505</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Astrophys. J. 505 (1998) 523</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9804116</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Weinberg</subfield>
<subfield code="p">3357</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">9</subfield>
<subfield code="y">1974</subfield>
<subfield code="s">Phys. Rev. D 9 (1974) 3357</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Dolan and R. Jackiw</subfield>
<subfield code="p">3320</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">9</subfield>
<subfield code="y">1974</subfield>
<subfield code="s">Phys. Rev. D 9 (1974) 3320</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Arkani-Hamed, L. J. Hall, C. Kolda and H. Murayama</subfield>
<subfield code="p">4434</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 4434</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/0005111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine, W. Fischler and M. Srednicki</subfield>
<subfield code="p">575</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">189</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Nucl. Phys. B 189 (1981) 575</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Dimopoulos and S. Raby</subfield>
<subfield code="p">353</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">192</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Nucl. Phys. B 192 (1981) 353</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Alvarez-Gaumé, M. Claudson and M. B. Wise</subfield>
<subfield code="p">96</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">207</subfield>
<subfield code="y">1982</subfield>
<subfield code="s">Nucl. Phys. B 207 (1982) 96</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine and A. E. Nelson</subfield>
<subfield code="p">1277</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">48</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. D 48 (1993) 1277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9303230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine, A. E. Nelson and Y. Shirman</subfield>
<subfield code="p">1362</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">51</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 1362</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9408384</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine, A. E. Nelson, Y. Nir and Y. Shirman</subfield>
<subfield code="p">2658</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">53</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. D 53 (1996) 2658</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9507378</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum</subfield>
<subfield code="p">193</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">480</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Lett. B 480 (2000) 193</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0001197</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Kachru, M. Schulz and E. Silverstein</subfield>
<subfield code="p">045021</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 045021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0001206</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Csáki, J. Erlich and C. Grojean</subfield>
<subfield code="p">312</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">604</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. B 604 (2001) 312</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0012143</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">X. Calmet and H. Fritzsch</subfield>
<subfield code="r">hep-ph/0112110</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Fritzsch</subfield>
<subfield code="r">hep-ph/0201198</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. R. Dvali and S. Pokorski</subfield>
<subfield code="p">126</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">379</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 379 (1996) 126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9601358</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Z. Chacko and R. N. Mohapatra</subfield>
<subfield code="p">2836</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 2836</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9810315</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. P. Turneaure, C. M. Will, B. F. Farrell, E. M. Mattison and R. F. C. Vessot</subfield>
<subfield code="p">1705</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">27</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Phys. Rev. D 27 (1983) 1705</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. D. Prestage, R. L. Tjoelker and L. Maleki</subfield>
<subfield code="p">3511</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">74</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. Lett. 74 (1995) 3511</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. I. Shlyakhter</subfield>
<subfield code="p">340</subfield>
<subfield code="t">Nature</subfield>
<subfield code="v">264</subfield>
<subfield code="y">1976</subfield>
<subfield code="s">Nature 264 (1976) 340</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Damour and F. Dyson</subfield>
<subfield code="p">37</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">480</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 480 (1996) 37</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9606486</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Fujii, A. Iwamoto, T. Fukahori, T. Ohnuki, M. Nakagawa, H. Hidaka, Y. Oura, P. Möller</subfield>
<subfield code="p">377</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">573</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 573 (2000) 377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9809549</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. W. Kolb, M. J. Perry and T. P. Walker</subfield>
<subfield code="p">869</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">33</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Phys. Rev. D 33 (1986) 869</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. A. Campbell and K. A. Olive</subfield>
<subfield code="p">429</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">345</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. B 345 (1995) 429</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/9411272</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Bergström, S. Iguri and H. Rubinstein</subfield>
<subfield code="p">045005</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 045005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9902157</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. P. Avelino et al.</subfield>
<subfield code="p">103505</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">64</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 103505</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/0102144</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Hannestad</subfield>
<subfield code="p">023515</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 023515</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9810102</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Kaplinghat, R. J. Scherrer and M. S. Turner</subfield>
<subfield code="p">023516</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 023516</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/9810133</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. P. Avelino, C. J. Martins, G. Rocha and P. Viana</subfield>
<subfield code="p">123508</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 123508</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">astro-ph/0008446</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. D. Hoyle, U. Schmidt, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, D. J. Kapner and H. E. Swanson</subfield>
<subfield code="p">1418</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 1418</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-ph/0011014</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. G. Adelberger [EOT-WASH Group Collaboration]</subfield>
<subfield code="r">hep-ex/0202008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Coleman, Aspects of Symmetry (Cambridge Univ. Press, 1985)</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0204143</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Domin, P</subfield>
<subfield code="u">Comenius University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Phenomenological Study of Solar-Neutrino Induced Double Beta Decay of Mo100</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The detection of solar-neutrinos of different origin via the induced beta beta process of Mo100 is investigated. The particular counting rates and energy distributions of emitted electrons are presented. A discussion with respect to a solar-neutrino detector consisting of 10 tons of Mo100 is included. Both the cases of the standard solar model and neutrino oscillation scenarios are analyzed. Moreover, new beta^- beta^+ and beta^-/EC channels of the double-beta process are introduced and possibilities of their experimental observation are addressed.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Simkovic, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Semenov, S V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Gaponov, Y V</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Pavol Domin &lt;domin@chavena.dnp.fmph.uniba.sk&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204143.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204143.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Gaponov, Yu. V.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">8 PAGES LATEX 2 POSTSCRIPT FIGURES TALK PRESENTED BY P DOMIN ON THE WORKSHOP MEDEX'01 (PRAGUE JUNE 2001) TO APPEAR IN CZECH J PHYS 52 (2002)</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. M. Bilenky, C. Giunti and W. Grimus</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Prog. Part. Nucl. Phys.</subfield>
<subfield code="v">45</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Prog. Part. Nucl. Phys. 45 (1999) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. N. Bahcall, S. Basu and M. H. Pinsonneault</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">433</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 433 (1998) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Davis Jr</subfield>
<subfield code="p">13</subfield>
<subfield code="t">Prog. Part. Nucl. Phys.</subfield>
<subfield code="v">32</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Prog. Part. Nucl. Phys. 32 (1994) 13</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Kamiokande Collaboration, Y Fukuda et al</subfield>
<subfield code="p">1683</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">77</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 77 (1996) 1683</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">SAGE collaboration, A. I. Abazov et al</subfield>
<subfield code="p">3332</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">67</subfield>
<subfield code="y">1991</subfield>
<subfield code="s">Phys. Rev. Lett. 67 (1991) 3332</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. N. Abdurashitov et al</subfield>
<subfield code="p">4708</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">77</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 77 (1996) 4708</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">GALLEX collaboration, P. Anselmann et al</subfield>
<subfield code="p">376</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">285</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Lett. B 285 (1992) 376</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Hampel et al</subfield>
<subfield code="p">384</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">388</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 388 (1996) 384</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Super-Kamiokande Coll., S. Fukuda et al</subfield>
<subfield code="p">5651</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 5651</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">SNO Collaboration, Q.R. Ahmad et. al</subfield>
<subfield code="p">071301</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">87</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 87 (2001) 071301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Ejiri et al., Phys. Rev. Lett.85 2917 (2000); H. Ejiri</subfield>
<subfield code="p">265</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">338</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rep. 338 (2000) 265</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. V. Semenov, Yu. V. Gaponov and R. U. Khafizov</subfield>
<subfield code="p">1379</subfield>
<subfield code="t">Yad. Fiz.</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Yad. Fiz. 61 (1998) 1379</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. V. Inzhechik, Yu. V. Gaponov and S. V. Semenov</subfield>
<subfield code="p">1384</subfield>
<subfield code="t">Yad. Fiz.</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Yad. Fiz. 61 (1998) 1384</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="u">http://www.sns.ias.edu/~jnb.</subfield>
<subfield code="z">http://www.sns.ias.edu/~jnb</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Singh et al</subfield>
<subfield code="p">478</subfield>
<subfield code="t">Nucl. Data Sheets</subfield>
<subfield code="v">84</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Data Sheets 84 (1998) 478</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Akimune et al</subfield>
<subfield code="p">23</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">394</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 394 (1997) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. N. Bahcall, P. I. Krastev, and A. Yu. Smirnov</subfield>
<subfield code="p">096016</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 096016</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0204101</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CSULB-PA-02-2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Nishino, H</subfield>
<subfield code="u">California State University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Axisymmetric Gravitational Solutions as Possible Classical Backgrounds around Closed String Mass Distributions</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">15 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">By studying singularities in stationary axisymmetric Kerr and Tomimatsu-Sato solutions with distortion parameter \d = 2, 3, ... in general relativity, we conclude that these singularities can be regarded as nothing other than closedstring-like circular mass distributions. We use two different regularizations to identify \d-function type singularities in the energy-momentum tensor for these solutions, realizing a regulator independent result. This result givessupporting evidence that these axisymmetric exact solutions may well be the classical solutions around closed string-like mass distributions, just like Schwarzschild solution corresponding to a point mass distribution. In otherwords, these axisymmetric exact solutions may well provide the classical backgrounds around closed strings.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Rajpoot, S</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Hitoshi Nishino &lt;hnishino@csulb.edu&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204101.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204101.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Nishino, Hitoshi</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Rajpoot, Subhash</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Schwarzschild, Sitzungsberichte Preuss. Akad. Wiss., 424 (1916)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Green, J.H. Schwarz and E. Witten, `Superstring Theory', Vols. I and II, Cambridge University Press (1987)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Chazy, Bull. Soc. Math. France: 52 (1924) 17H.E.J. Curzon, Proc. London Math. Soc. : 23 (1924) 477</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Ho rava and E. Witten</subfield>
<subfield code="p">506</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">460</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 460 (1996) 506</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">94</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">475</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 475 (1996) 94</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos and G. Dvali</subfield>
<subfield code="p">263</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">429</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. 429 (1998) 263</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Anto-niadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali</subfield>
<subfield code="p">257</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">436</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. 436 (1998) 257</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Randall and R. Sundrum</subfield>
<subfield code="p">3370</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 3370</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">4690</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">83</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 4690</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.P. Kerr</subfield>
<subfield code="p">237</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">11</subfield>
<subfield code="y">1963</subfield>
<subfield code="s">Phys. Rev. Lett. 11 (1963) 237</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Ya Burinskii</subfield>
<subfield code="p">441</subfield>
<subfield code="t">Phys. Lett., A</subfield>
<subfield code="v">185</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Lett. A 185 (1994) 441</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">`Complex String as Source of Kerr Ge-ometry'</subfield>
<subfield code="r">hep-th/9503094</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">2392</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 2392</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">`Structure of Spinning Parti-cle Suggested by Gravity, Supergravity &amp; Low-Energy String Theory'</subfield>
<subfield code="r">hep-th/9910045</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Czech. J. Phys.50S : 1 (2000) 201</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">See, e.g., A. Sen</subfield>
<subfield code="p">2081</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 2081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.H. Frampton and T.W. Kephart</subfield>
<subfield code="p">2571</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 2571</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Strominger and C. Vafa</subfield>
<subfield code="p">99</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">379</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. 379 (1996) 99</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Behrndt</subfield>
<subfield code="p">188</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">455</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. B 455 (1995) 188</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.C. Breckenridge, D.A. Lowe, R.C. Myers, A.W. Peet, A. Strominger and C. Vafa</subfield>
<subfield code="p">423</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">381</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. B 381 (1996) 423</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Callan and J. Maldacena</subfield>
<subfield code="p">591</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">472</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 472 (1996) 591</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Horowitz and A. Stro-minger</subfield>
<subfield code="p">2368</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">77</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Rev. Lett. 77 (1996) 2368</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.M. Maldacena, `Black Holes in String Theory', Ph.D. Thesis</subfield>
<subfield code="r">hep-th/9607235</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Dabholkar and J.A. Harvey</subfield>
<subfield code="p">478</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">63</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Phys. Rev. Lett. 63 (1989) 478</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Dabholkar, G.W. Gibbons, J.A. Harvey and F. Ruiz Ruiz</subfield>
<subfield code="p">33</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">340</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Nucl. Phys. B 340 (1990) 33</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.G. Callan, Jr., J.M. Maldacena, A.W. Peet</subfield>
<subfield code="p">645</subfield>
<subfield code="t">Nucl. Phys. B</subfield>
<subfield code="v">475</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. B 475 (1996) 645</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Tomimatu and H. Sato</subfield>
<subfield code="p">95</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1973</subfield>
<subfield code="s">Prog. Theor. Phys. 50 (1973) 95</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Yamazaki and S. Hori</subfield>
<subfield code="p">696</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1977</subfield>
<subfield code="s">Prog. Theor. Phys. 57 (1977) 696</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">erratum</subfield>
<subfield code="p">1248</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Prog. Theor. Phys. 60 (1978) 1248</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Hori</subfield>
<subfield code="p">1870</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Prog. Theor. Phys. 59 (1978) 1870</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">erratum</subfield>
<subfield code="p">365</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">61</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Prog. Theor. Phys. 61 (1979) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Nishino</subfield>
<subfield code="p">77</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">359</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Lett. 359 (1995) 77</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Weyl, Ann. de Phys. : 54 (1917) 117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.M. Bardeen, Astrophys. Jour. : 162 (1970) 71</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. Kramer, H. Stephani, E. Herlt and M. MacCallum, `Exact Solutions of Einstein's Field Equations', Cambridge University Press (1980)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Arnowitt, S. Deser and C. Misner, in `Gravitation': `An Introduction to Current Re-search', ed. L. Witten (New York, Wiley, 1962)</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0204102</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bo-Yu, H</subfield>
<subfield code="u">Northwest University, China</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Soliton on Noncommutative Orbifold $ T^2/Z_k $</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">13 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Following the construction of the projection operators on $ T^2 $ presented by Gopakumar, Headrick and Spradin, we construct the projection operators on the integral noncommutative orbifold $ T^2/G (G=Z_k,k=2, 3, 4, 6)$. Suchoperators are expressed by a function on this orbifold. So it provides a complete set of projection operators upon the moduli space $T^2 \times K/Z_k$. All these operators has the same trace 1/A ($A$ is an integer). Since theprojection operators correspond to solitons in noncommutative string field theory, we obtained the explicit expression of all the soliton solutions on $ T^2/Z_k $.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kangjie, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zhan-ying, Y</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Zhanying Yang &lt;yzy@phy.nwu.edu.cn&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204102.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204102.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Bo-yu, Hou</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Kangjie, Shi</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Zhan-ying, Yang</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Connes, Non-commutative Geometry, Academic Press, 1994</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Landi," An introduction to non-commutative space and their geometry"</subfield>
<subfield code="r">hep-th/9701078</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Varilly, "An introduction to non-commutative Geometry"</subfield>
<subfield code="r">physics/9709045</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Madore, "An introduction to non-commutative Differential Geometry and its physical Applications", Cambridge University press 2nd edition, 1999</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Connes, M. Douglas, A. Schwartz, Matrix theory compactification on Tori</subfield>
<subfield code="p">003</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">9802</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 9802 (1998) 003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9711162</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. dougals, C. Hull</subfield>
<subfield code="p">008</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">9802</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 9802 (1998) 008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9711165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Nathan. Seiberg and Edward. Witten," String theory and non-commutative geometry"</subfield>
<subfield code="p">032</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">9909</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">J. High Energy Phys. 9909 (1999) 032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9908142</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V. Schomerus," D-branes and Deformation Quan-tization"</subfield>
<subfield code="p">030</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">9906</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">J. High Energy Phys. 9906 (1999) 030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Witten, "Noncommutative Geometry and String Field Theory"</subfield>
<subfield code="p">253</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">268</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">Nucl. Phys. B 268 (1986) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. B. Laughlin, "The quantum Hall Effect", edited by R. Prange and S. Girvin, p233</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Susskind</subfield>
<subfield code="r">hep-th/0101029</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. P. Hu and S. C. Zhang</subfield>
<subfield code="r">cond-mat/0112432</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Gopakumar, M. Headrick, M. Spradin, "on Noncommutative Multi-solitons"</subfield>
<subfield code="r">hep-th/0103256</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. J. Martinec and G. Moore, "Noncommutative Solitons on Orbifolds"</subfield>
<subfield code="r">hep-th/0101199</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D. J. Gross and N. A. Nekrasov, " Solitons in noncommutative Gauge Theory"</subfield>
<subfield code="r">hep-th/0010090</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. R. Douglas and N. A. Nekrasov, "Noncommutative Field Theory"</subfield>
<subfield code="r">hep-th/0106048</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Gopakumar, S. Minwalla and A. Strominger, " Noncommutative Soliton"</subfield>
<subfield code="p">048</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">005</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">J. High Energy Phys. 005 (2000) 048</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/0003160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Harvey, " Komaba Lectures on Noncommutative Solitons and D-branes</subfield>
<subfield code="r">hep-th/0102076</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. A. Harvey, P. Kraus and F.Larsen, J. High Energy Phys.0012 (200) 024</subfield>
<subfield code="r">hep-th/0010060</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Konechny and A. Schwarz, "Compactification of M(atrix) theory on noncommutative toroidal orbifolds"</subfield>
<subfield code="p">667</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">591</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Nucl. Phys. B 591 (2000) 667</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9912185</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">"Moduli spaces of maximally supersymmetric solutions on noncommutative tori and noncommutative orbifolds", J. High Energy Phys. 0009 (2000) 005</subfield>
<subfield code="r">hep-th/0005167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Walters, "Projective modules over noncommutative sphere", J. London Math. Soc. : 51 (1995) 589; "Chern characters of Fourier modules", Can. J. Math. : 52 (2000) 633</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Rieffel, Pacific J. Math. : 93 (1981) 415</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. P. Boca</subfield>
<subfield code="p">325</subfield>
<subfield code="t">Commun. Math. Phys.</subfield>
<subfield code="v">202</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Commun. Math. Phys. 202 (1999) 325</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Bacry, A. Grossman and J. Zak</subfield>
<subfield code="p">1118</subfield>
<subfield code="t">Phys. Rev., B</subfield>
<subfield code="v">12</subfield>
<subfield code="y">1975</subfield>
<subfield code="s">Phys. Rev. B 12 (1975) 1118</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Zak, in Solid State Physics, edited by H. Ehrenreich, F. Seitz and D. Turnbull (Academic, New York, 1972), Vol. 27</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">nucl-th/0204031</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">LA-UR-02-2040</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Page, P R</subfield>
<subfield code="u">Los Alamos Sci. Lab.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Hybrid Baryons</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Los Alamos, NM</subfield>
<subfield code="b">Los Alamos Sci. Lab.</subfield>
<subfield code="c">11 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">12 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We review the status of hybrid baryons. The only known way to study hybrids rigorously is via excited adiabatic potentials. Hybrids can be modelled by both the bag and flux-tube models. The low-lying hybrid baryon is N 1/2^+ with a mass of 1.5-1.8 GeV. Hybrid baryons can be produced in the glue-rich processes of diffractive gamma N and pi N production, Psi decays and p pbar annihilation.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Nuclear Physics</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">"Philip R. page" &lt;prp@t16prp.lanl.gov&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204031.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204031.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Page, Philip R.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="K">
<subfield code="b">INVITED PLENARY TALK PRESENTED AT THE ``9TH INTERNATIONAL CONFERENCE ON THE STRUCTURE OF BARYONS'' (BARYONS 2002) 3-8 MARCH NEWPORT NEWS VA USA 12 PAGES 7 ENCAPSULATED POSTSCRIPT FIGURES LATEX</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">1.</subfield>
<subfield code="m">T. Barnes, contribution at the COSY Workshop on Baryon Excitations (May 2000, Jülich, Germany),</subfield>
<subfield code="r">nucl-th/0009011</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">2.</subfield>
<subfield code="m">E.I. Ivanov et al.,</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 3977</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">3.</subfield>
<subfield code="m">T.T. Takahashi, H. Matsufuru, Y. Nemoto and H. Suganuma,</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 18</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">3.</subfield>
<subfield code="m">; ibid., Proc. of "Int. Symp. on Hadron and Nuclei" (February 2001, Seoul, Korea), published by Institute of Physics and Applied Physics (2001), ed. Dr. T.K. Choi, p. 341; ibid., T. Umeda, Nucl. Phys. Proc. S. : 94 (2001) 554.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">4.</subfield>
<subfield code="m">Yu. A. Simonov, these proceedings; D.S. Kuzmenko and Yu. A. Simonov,</subfield>
<subfield code="r">hep-ph/0202277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">5.</subfield>
<subfield code="m">S. Capstick and N. Isgur,</subfield>
<subfield code="s">Phys. Rev. D 34 (1986) 2809</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">6.</subfield>
<subfield code="m">C. Alexandrou, Ph. de Forcrand and A. Tsapalis,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 054503</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">7.</subfield>
<subfield code="m">C.E. Carlson and N.C. Mukhopadhyay,</subfield>
<subfield code="s">Phys. Rev. Lett. 67 (1991) 3745</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
<subfield code="m">C.-K. Chow, D. Pirjol and T.-M. Yan,</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 056002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">9.</subfield>
<subfield code="m">T. Barnes, Ph. D. thesis, California Institute of Technology, 1977; T. Barnes and F.E. Close,</subfield>
<subfield code="s">Phys. Lett. B 123 (1983) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">10.</subfield>
<subfield code="m">E. Golowich, E. Haqq and G. Karl,</subfield>
<subfield code="s">Phys. Rev. D 28 (1983) 160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">11.</subfield>
<subfield code="m">C.E. Carlson, Proc. of the 7th Int. Conf. on the Structure of Baryons (October 1995, Santa Fe, NM), p. 461, eds. B. F. Gibson et al. (World Scientific, Singapore, 1996).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">12.</subfield>
<subfield code="m">C.E. Carlson and T.H. Hansson,</subfield>
<subfield code="s">Phys. Lett. B 128 (1983) 95</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">13.</subfield>
<subfield code="m">I. Duck and E. Umland,</subfield>
<subfield code="s">Phys. Lett. B 128 (1983) 221</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">14.</subfield>
<subfield code="m">P.R. Page, Proc. of "The Physics of Excited Nucleons" (NSTAR2000) (February 2000, Newport News, VA).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">15.</subfield>
<subfield code="m">J. Merlin and J. Paton,</subfield>
<subfield code="s">J. Phys. G 11 (1985) 439</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">16.</subfield>
<subfield code="m">K.J. Juge, J. Kuti and C.J. Morningstar, Nucl. Phys. Proc. S. : 63 (1998) 543.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">17.</subfield>
<subfield code="m">E.S. Swanson and A.P. Szczepaniak,</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 014035</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">18.</subfield>
<subfield code="m">T.J. Allen, M.G. Olsson and S. Veseli,</subfield>
<subfield code="s">Phys. Lett. B 434 (1998) 110</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">19.</subfield>
<subfield code="m">S. Capstick and P.R. Page,</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 111501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">20.</subfield>
<subfield code="m">L.S. Kisslinger et al.,</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 5986</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">20.</subfield>
<subfield code="s">Nucl. Phys. A 629 (1998) 30c</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">20.</subfield>
<subfield code="m">A.P. Martynenko,</subfield>
<subfield code="s">Sov. J. Nucl. Phys. 54 (1991) 488</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">21.</subfield>
<subfield code="m">S.M. Gerasyuta and V.I. Kochkin,</subfield>
<subfield code="r">hep-ph/0203104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">22.</subfield>
<subfield code="m">T.D. Cohen and L.Ya. Glozman,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 016006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">23.</subfield>
<subfield code="m">E. Klempt, these proceedings.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">24.</subfield>
<subfield code="m">A.M. Zaitsev (VES Collab.), Proc. of ICHEP’96 (Warsaw, 1996).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">25.</subfield>
<subfield code="m">D.E. Groom et al. (Particle Data Group),</subfield>
<subfield code="s">Eur. Phys. J. C 15 (2000) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">26.</subfield>
<subfield code="m">H. Li (BES Collab.),</subfield>
<subfield code="s">Nucl. Phys. A 675 (2000) 189c</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">26.</subfield>
<subfield code="m">B.-S. Zou et al.,</subfield>
<subfield code="r">hep-ph/9909204</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">27.</subfield>
<subfield code="m">L.S. Kisslinger and Z.-P. Li,</subfield>
<subfield code="s">Phys. Lett. B 445 (1999) 271</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">28.</subfield>
<subfield code="m">N. Isgur,</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 114016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">29.</subfield>
<subfield code="m">Z.-P. Li et al.,</subfield>
<subfield code="s">Phys. Rev. D 44 (1991) 2841</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">29.</subfield>
<subfield code="s">Phys. Rev. D 46 (1992) 70</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">30.</subfield>
<subfield code="m">T. Barnes and F.E. Close,</subfield>
<subfield code="s">Phys. Lett. B 128 (1983) 277</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">31.</subfield>
<subfield code="m">O. Kittel and G.F. Farrar,</subfield>
<subfield code="r">hep-ph/0010186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">32.</subfield>
<subfield code="m">P.R. Page, Proc. of "3rd Int. Conf. on Quark Confinement and Hadron Spectrum" (Confinement III), (June 1998, Newport News, VA).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">33.</subfield>
<subfield code="m">K.J. Juge, J. Kuti and C.J. Morningstar, Nucl. Phys. Proc. S. : 63 (1998) 326.</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">nucl-th/0204032</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Amos, K</subfield>
<subfield code="u">The University of Melbourne</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A simple functional form for proton-nucleus total reaction cross sections</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">13 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">A simple functional form has been found that gives a good representation of the total reaction cross sections for the scattering of protons from (15) nuclei spanning the mass range ${}^{9}$Be to ${}^{238}$U and for proton energies ranging from 20 to 300 MeV.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Nuclear Physics</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Deb, P K</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Ken Amos &lt;amos@physics.unimelb.edu.au&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204032.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204032.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">nucl-th/0204033</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Oyamatsu, K</subfield>
<subfield code="u">Aichi Shukutoku Univ</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Saturation of nuclear matter and radii of unstable nuclei</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">26 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We examine relations among the parameters characterizing the phenomenological equation of state (EOS) of nearly symmetric, uniform nuclear matter near the saturation density by comparing macroscopic calculations of radii and masses of stable nuclei with the experimental data. The EOS parameters of interest here are the symmetry energy S_0, the symmetry energy density-derivative coefficient L and the incompressibility K_0 at the normal nuclear density. We find a constraint on the relation between K_0 and L from the empirically allowed values of the slope of the saturation line (the line joining the saturation points of nuclear matter at finite neutron excess), together with a strong correlation between S_0 and L. In the light of the uncertainties in the values of K_0 and L, we macroscopically calculate radii of unstable nuclei as expected to be produced in future facilities. We find that the matter radii depend strongly on L while being almost independent of K_0, a feature that will help to determine the L value via systematic measurements of nuclear size.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Nuclear Physics</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Iida, K</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Kei Iida &lt;keiiida@postman.riken.go.jp&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204033.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204033.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Oyamatsu, Kazuhiro</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Iida, Kei</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.M. Blatt and V.F. Weisskopf, Theoretical Nuclear Physics, Wiley, New York, 1952</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. Heiselberg, V.R. Pandharipande</subfield>
<subfield code="p">481</subfield>
<subfield code="t">Annu. Rev. Nucl. Part. Sci.</subfield>
<subfield code="v">50</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Annu. Rev. Nucl. Part. Sci. 50 (2000) 481</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Oyamatsu, I. Tanihata, Y. Sugahara, K. Sumiyoshi, H. Toki</subfield>
<subfield code="p">3</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">634</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. A 634 (1998) 3</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.A. Brown</subfield>
<subfield code="p">5296</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 5296</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.C. Chung, C.S. Wang, A.J. Santiago</subfield>
<subfield code="r">nucl-th/0102017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.A. Li</subfield>
<subfield code="p">4221</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">85</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. Lett. 85 (2000) 4221</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Sturm et al</subfield>
<subfield code="p">39</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 39</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. Fuchs, A. Faessler, E. Zabrodin, Y.M. Zheng</subfield>
<subfield code="p">1974</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">86</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Phys. Rev. Lett. 86 (2001) 1974</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Danielewicz, in: Proc. Int. Symp. on Non-Equilibrium and Nonlinear Dynamics in Nuclear and Other Finite Systems, Beijing, 2001</subfield>
<subfield code="r">nucl-th/0112006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.H. Youngblood, H.L. Clark, Y.-W. Lui</subfield>
<subfield code="p">691</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">82</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. Lett. 82 (1999) 691</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.A. Pons, F.M. Walter, J.M. Lattimer, M. Prakash, R. Neuhaeuser, P. An</subfield>
<subfield code="p">981</subfield>
<subfield code="t">Astrophys. J.</subfield>
<subfield code="v">564</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Astrophys. J. 564 (2002) 981</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.M. Lattimer</subfield>
<subfield code="p">337</subfield>
<subfield code="t">Annu. Rev. Nucl. Part. Sci.</subfield>
<subfield code="v">31</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Annu. Rev. Nucl. Part. Sci. 31 (1981) 337</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Oyamatsu</subfield>
<subfield code="p">431</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">561</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Nucl. Phys. A 561 (1993) 431</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L.R.B. Elton, A. Swift</subfield>
<subfield code="p">52</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">94</subfield>
<subfield code="y">1967</subfield>
<subfield code="s">Nucl. Phys. A 94 (1967) 52</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Yamada</subfield>
<subfield code="p">512</subfield>
<subfield code="t">Prog. Theor. Phys.</subfield>
<subfield code="v">32</subfield>
<subfield code="y">1964</subfield>
<subfield code="s">Prog. Theor. Phys. 32 (1964) 512</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">H. de Vries, C.W. de Jager, C. de Vries</subfield>
<subfield code="p">495</subfield>
<subfield code="t">At. Data Nucl. Data Tables</subfield>
<subfield code="v">36</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">At. Data Nucl. Data Tables 36 (1987) 495</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Audi, A.H. Wapstra</subfield>
<subfield code="p">409</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">595</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. A 595 (1995) 409</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Goriely, F. Tondeur, J.M. Pearson</subfield>
<subfield code="p">311</subfield>
<subfield code="t">At. Data Nucl. Data Tables</subfield>
<subfield code="v">77</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">At. Data Nucl. Data Tables 77 (2001) 311</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Samyn, S. Goriely, P.-H. Heenen, J.M. Pearson, F. Tondeur</subfield>
<subfield code="p">142</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">700</subfield>
<subfield code="y">2002</subfield>
<subfield code="s">Nucl. Phys. A 700 (2002) 142</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Chabanat, P. Bonche, P. Haensel, J. Meyer, R. Schaeffer</subfield>
<subfield code="p">231</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">635</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. A 635 (1998) 231</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Sugahara, H. Toki</subfield>
<subfield code="p">557</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">579</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Nucl. Phys. A 579 (1994) 557</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Ozawa, T. Suzuki, I. Tanihata</subfield>
<subfield code="p">32</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">693</subfield>
<subfield code="y">2001</subfield>
<subfield code="s">Nucl. Phys. A 693 (2001) 32</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.J. Batty, E. Friedman, H.J. Gils, H. Rebel</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Adv. Nucl. Phys.</subfield>
<subfield code="v">19</subfield>
<subfield code="y">1989</subfield>
<subfield code="s">Adv. Nucl. Phys. 19 (1989) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Fricke, C. Bernhardt, K. Heilig, L.A. Schaller, L. Schellenberg, E.B. Shera, C.W. de Jager</subfield>
<subfield code="p">177</subfield>
<subfield code="t">At. Data Nucl. Data Tables</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">At. Data Nucl. Data Tables 60 (1995) 177</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Huber et al</subfield>
<subfield code="p">2342</subfield>
<subfield code="t">Phys. Rev., C</subfield>
<subfield code="v">18</subfield>
<subfield code="y">1978</subfield>
<subfield code="s">Phys. Rev. C 18 (1978) 2342</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Ray, G.W. Hoffmann, W.R. Coker</subfield>
<subfield code="p">223</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">212</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Rep. 212 (1992) 223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Yoshida, H. Sagawa, N. Takigawa</subfield>
<subfield code="p">2796</subfield>
<subfield code="t">Phys. Rev., C</subfield>
<subfield code="v">58</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. C 58 (1998) 2796</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C.J. Pethick, D.G. Ravenhall</subfield>
<subfield code="p">173</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">606</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Nucl. Phys. A 606 (1996) 173</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Iida, K. Oyamatsu, unpublished</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.P. Blaizot, J.F. Berger, J. Dechargé, M. Girod</subfield>
<subfield code="p">435</subfield>
<subfield code="t">Nucl. Phys., A</subfield>
<subfield code="v">591</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Nucl. Phys. A 591 (1995) 435</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">nucl-th/0204034</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bozek, P</subfield>
<subfield code="u">Institute of Nuclear Physics, Cracow, Poland</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Nuclear matter with off-shell propagation</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">12 Apr 2002</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Symmetric nuclear matter is studied within the conserving, self-consistent T-matrix approximation. This approach involves off-shell propagation of nucleons in the ladder diagrams. The binding energy receives contributions from the background part of the spectral function, away from the quasiparticle peak. The Fermi energy at the saturation point fulfills the Hugenholtz-Van Hove relation. In comparison to the Brueckner-Hartree-Fock approach, the binding energy is reduced and the equation of state is harder.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Nuclear Physics</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Bozek &lt;bozek@sothis.ifj.edu.pl&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204034.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0204034.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">11</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2002-04-15</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-15</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200216</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">SCAN-9605068</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">McGILL-96-15</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Contogouris, A P</subfield>
<subfield code="u">University of Athens</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">One loop corrections for certain reactions initiated by 5-parton subprocesses via helicity amplitudes</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Montreal</subfield>
<subfield code="b">McGill Univ. Phys. Dept.</subfield>
<subfield code="c">Apr 1996?</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">28 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">UNC9808</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Merebashvili, Z V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lebessis, F</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Veropoulos, G</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-9605068.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/SCAN-9605068.tif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1996-05-08</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-12-14</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">4234-4243</subfield>
<subfield code="n">7</subfield>
<subfield code="p">Phys. Rev., D</subfield>
<subfield code="v">54</subfield>
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">199620</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">TRI-PP-86-73</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bryman, D A</subfield>
<subfield code="u">University of British Columbia</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Exotic muon decay mu --&gt; e + x</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Burnaby, BC</subfield>
<subfield code="b">TRIUMF</subfield>
<subfield code="c">Aug 1986</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">jv200203</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experimental Results</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Clifford, E T H</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1986</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-29</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2002-03-26</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">2787-88</subfield>
<subfield code="n">22</subfield>
<subfield code="p">Phys. Rev. Lett.</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1986</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">1594699</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">198648n</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0003289</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PUPT-1926</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Costa, M S</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A Test of the AdS/CFT Duality on the Coulomb Branch</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Princeton, NJ</subfield>
<subfield code="b">Princeton Univ. Joseph-Henry Lab. Phys.</subfield>
<subfield code="c">31 Mar 2000</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">11 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We consider the N=4 SU(N) Super Yang Mills theory on the Coulomb branch with gauge symmetry broken to S(U(N_1)*U(N_2)). By integrating the W particles, the effective action near the IR SU(N_i) conformal fixed points is seen to be a deformation of the Super Yang Mills theory by a non-renormalized, irrelevant, dimension 8 operator. The correction to the two-point function of the dilaton field dual operator near the IR is related to a three-point function of chiral primary operators at the conformal fixed points and agrees with the classical gravity prediction, including the numerical factor.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANLPUBL200104</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Miguel S Costa &lt;miguel@feynman.princeton.edu&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003289.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003289.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-04-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-11-09</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Costa, Miguel S.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">287-292</subfield>
<subfield code="p">Phys. Lett., B</subfield>
<subfield code="v">482</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4356110</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200014</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.M. Maldacena, The Large N Limit of Superconformal Field Theories and Supergravity</subfield>
<subfield code="p">231</subfield>
<subfield code="t">Adv. Theor. Math. Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge Theory Correlators from Non-Critical String Theory</subfield>
<subfield code="p">105</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">428</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Witten, Anti De Sitter Space And Holography</subfield>
<subfield code="p">253</subfield>
<subfield code="t">Adv. Theor. Math. Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, Large N Field Theories, String Theory and Gravity</subfield>
<subfield code="p">183</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">323</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rep. 323 (2000) 183</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9905111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.A. Minahan and N.P. Warner, Quark Potentials in the Higgs Phase of Large N Supersymmetric Yang-Mills Theories</subfield>
<subfield code="p">005</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">06</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 06 (1998) 005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9805104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.R. Douglas and W. Taylor, Branes in the bulk of Anti-de Sitter space</subfield>
<subfield code="r">hep-th/9807225</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.A. Tseytlin and S. Yankielowicz, Free energy of N=4 super Yang-Mills in Higgs phase and non-extremal D3-brane interactions</subfield>
<subfield code="p">145</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">541</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9809032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y. Wu, A Note on AdS/SYM Correspondence on the Coulomb Branch</subfield>
<subfield code="r">hep-th/9809055</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. Kraus, F. Larsen, S. Trivedi, The Coulomb Branch of Gauge Theory from Rotating Branes</subfield>
<subfield code="p">003</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">03</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">J. High Energy Phys. 03 (1999) 003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9811120</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I.R. Klebanov and E. Witten, AdS/CFT Correspondence and Symmetry Breaking</subfield>
<subfield code="p">89</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">556</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">D.Z. Freedman, S.S. Gubser, K. Pilch and N.P. Warner, Continuous distributions of D3-branes and gauged supergravity</subfield>
<subfield code="r">hep-th/9906194</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Brandhuber and K. Sfetsos, Wilson loops from multicentre and rotating branes, mass gaps and phase structure in gauge theories</subfield>
<subfield code="r">hep-th/9906201</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Chepelev and R. Roiban, A note on correlation functions in AdS5/SYM4 correspondence on the Coulomb branch</subfield>
<subfield code="p">74</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">462</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 462 (1999) 74</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9906224</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.B. Giddings and S.F. Ross, D3-brane shells to black branes on the Coulomb branch</subfield>
<subfield code="p">024036</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">61</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Phys. Rev. D 61 (2000) 024036</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9907204</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Cvetic, S.S. Gubser, H. Lu and C.N. Pope, Symmetric Potentials of Gauged Supergravities in Diverse Dimensions and Coulomb Branch of Gauge Theories</subfield>
<subfield code="r">hep-th/9909121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.C. Rashkov and K.S. Viswanathan, Correlation functions in the Coulomb branch of N=4 SYM from AdS/CFT correspondence</subfield>
<subfield code="r">hep-th/9911160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.S. Costa, Absorption by Double-Centered D3-Branes and the Coulomb Branch of N = 4 SYM Theory</subfield>
<subfield code="r">hep-th/9912073</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Y.S. Myung, G. Kang and H.W. Lee, Greybody factor for D3-branes in B field</subfield>
<subfield code="r">hep-th/9911193</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S-wave absorption of scalars by noncommutative D3-branes</subfield>
<subfield code="r">hep-th/9912288</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Manvelyan, H.J.W. Mueller-Kirsten, J.-Q. Liang, Y. Zhang, Absorption Cross Section of Scalar Field in Supergravity Background</subfield>
<subfield code="r">hep-th/0001179</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.S. Gubser and I.R. Klebanov, Absorption by Branes and Schwinger Terms in the World Volume Theory</subfield>
<subfield code="p">41</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">413</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 413 (1997) 41</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9708005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Intriligator, Maximally Supersymmetric RG Flows and AdS Duality</subfield>
<subfield code="r">hep-th/9909082</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.S. Gubser, A. Hashimoto, I.R. Klebanov and M. Krasnitz, Scalar Absorption and the Breaking of the World Volume Conformal Invariance</subfield>
<subfield code="p">393</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">526</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 526 (1998) 393</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9803023</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Lee, S. Minwalla, M. Rangamani and N. Seiberg, Three-Point Functions of Chiral Operators in D=4, N = 4 SYM at Large N</subfield>
<subfield code="p">697</subfield>
<subfield code="t">Adv. Theor. Math. Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 697</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9806074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. D'Hoker, D.Z. Freedman and W. Skiba, Field Theory Tests for Correlators in the AdS/CFT Correspondence</subfield>
<subfield code="p">045008</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">59</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 045008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9807098</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">F. Gonzalez-Rey, B. Kulik and I.Y. Park, Non-renormalization of two and three Point Correlators of N=4 SYM in N=1 Superspace</subfield>
<subfield code="p">164</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">455</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 455 (1999) 164</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9903094</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Intriligator, Bonus Symmetries of N=4 Super-Yang-Mills Correlation Functions via AdS Duality</subfield>
<subfield code="p">575</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">551</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 551 (1999) 575</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9811047</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. Intriligator and W. Skiba, Bonus Symmetry and the Operator Product Expansion of N=4 Super-Yang-Mills</subfield>
<subfield code="p">165</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">559</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 559 (1999) 165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9905020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Eden, P.S. Howe and P.C. West, Nilpotent invariants in N=4 SYM</subfield>
<subfield code="p">19</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">463</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Lett. B 463 (1999) 19</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9905085</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.S. Howe, C. Schubert, E. Sokatchev and P.C. West, Explicit construction of nilpotent covariants in N=4 SYM</subfield>
<subfield code="r">hep-th/9910011</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Petkou and K. Skenderis, A non-renormalization theorem for conformal anomalies</subfield>
<subfield code="p">100</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">561</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Nucl. Phys. B 561 (1999) 100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9906030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.R. Douglas, D. Kabat, P. Pouliot and S.H. Shenker, D-branes and Short Distances in String Theory</subfield>
<subfield code="p">85</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">485</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 485 (1997) 85</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9608024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. Lifschytz and S.D. Mathur, Supersymmetry and Membrane Interactions in M(atrix) Theory</subfield>
<subfield code="p">621</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">507</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 507 (1997) 621</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9612087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. Maldacena, Probing Near Extremal Black Holes with D-branes</subfield>
<subfield code="p">3736</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">57</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 3736</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9705053</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Branes probing black holes</subfield>
<subfield code="p">17</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">68</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 68 (1998) 17</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9709099</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">I. Chepelev and A.A. Tseytlin, Interactions of type IIB D-branes from D-instanton matrix model</subfield>
<subfield code="p">629</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">511</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 511 (1998) 629</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9705120</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">Long-distance interactions of branes: correspondence between supergravity and super Yang-Mills descriptions</subfield>
<subfield code="p">73</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">515</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B 515 (1998) 73</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9709087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.A. Tseytlin, Interactions Between Branes and Matrix Theories</subfield>
<subfield code="p">99</subfield>
<subfield code="t">Nucl. Phys. B, Proc. Suppl.</subfield>
<subfield code="v">68</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 68 (1998) 99</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9709123</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M. Dine and N. Seiberg, Comments on Higher Derivative Operators in Some SUSY Field Theories</subfield>
<subfield code="p">239</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">409</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Lett. B 409 (1997) 239</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9705057</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.A. Tseytlin, On non-abelian generalisation of Born-Infeld action in string theory</subfield>
<subfield code="p">41</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">501</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Nucl. Phys. B 501 (1997) 41</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9701125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.S. Gubser and A. Hashimoto, Exact absorption probabilities for the D3-brane, Commun. Math. Phys. 203 (1999) 325</subfield>
<subfield code="r">hep-th/9805140</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.S. Gubser, Non-conformal examples of AdS/CFT</subfield>
<subfield code="p">1081</subfield>
<subfield code="t">Class. Quantum Gravity</subfield>
<subfield code="v">17</subfield>
<subfield code="y">2000</subfield>
<subfield code="s">Class. Quantum Gravity 17 (2000) 1081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9910117</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Bollen, G</subfield>
<subfield code="u">Institut für Physik, Universität Mainz</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">ISOLTRAP: a tandem Penning trap system for accurate on-line mass determination of short-lived isotopes</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Detectors and Experimental Techniques</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Becker, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kluge, H J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Konig, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Moore, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Otto, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Raimbault-Hartmann, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Savard, G</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Schweikhard, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Stolzenberg, H</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">ISOLDE Collaboration</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="e">IS302</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="e">ISOLDE</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="p">PPE</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="a">CERN PS</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1996-05-08</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-12-14</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">675-697</subfield>
<subfield code="p">Nucl. Instrum. Methods Phys. Res., A</subfield>
<subfield code="v">368</subfield>
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">199600</subfield>
<subfield code="y">a1996</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ISOLDEPAPER</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0003291</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">McInnes, B</subfield>
<subfield code="u">National University of Singapore</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">AdS/CFT For Non-Boundary Manifolds</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">In its Euclidean formulation, the AdS/CFT correspondence begins as a study of Yang-Mills conformal field theories on the sphere, S^4. It has been successfully extended, however, to S^1 X S^3 and to the torus T^4. It is natural to hope that it can be made to work for any manifold on which it is possible to define a stable Yang-Mills conformal field theory. We consider a possible classification of such manifolds, and show how to deal with the most obvious objection: the existence of manifolds which cannot be represented as boundaries. We confirm Witten's suggestion that this can be done with the help of a brane in the bulk.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Brett McInnes &lt;matmcinn@nus.edu.sg&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003291.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003291.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Innes, Brett Mc</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-04-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-11-09</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">025</subfield>
<subfield code="p">J. High Energy Phys.</subfield>
<subfield code="v">05</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4356136</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200014</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">SCAN-9605071</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">KEK-Preprint-95-196</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">TUAT-HEP-96-1</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DPNU-96-04</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Emi, K</subfield>
<subfield code="u">KEK</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Study of a dE/dx measurement and the gas-gain saturation by a prototype drift chamber for the BELLE-CDC</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Tsukuba</subfield>
<subfield code="b">KEK</subfield>
<subfield code="c">Jan 1996</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">20 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">UNC9806</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Detectors and Experimental Techniques</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tsukamoto, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hirano, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mamada, H</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sakai, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Uno, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Itami, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Kajikawa, R</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nitoh, O</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Ohishi, N</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sugiyama, A</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Suzuki, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Takahashi, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tamagawa, Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Tomoto, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Yamaki, T</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">library@kekvax.kek.jp</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-9605071.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/SCAN-9605071.tif</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1996-05-08</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-12-14</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">225</subfield>
<subfield code="n">2</subfield>
<subfield code="p">Nucl. Instrum. Methods Phys. Res., A</subfield>
<subfield code="v">379</subfield>
<subfield code="y">1996</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">3328660</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">h</subfield>
<subfield code="w">199620</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0003293</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Smailagic, A</subfield>
<subfield code="u">University of Osijek</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Higher Dimensional Schwinger-like Anomalous Effective Action</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We construct the explicit form of the anomalous effective action, in arbitrary even dimension, for Abelian vector and axial gauge fields coupled to Dirac fermions. It turns out to be a surprisingly simple extension of the 2D Schwinger model effective action.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANLPUBL200104</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Spallucci, E</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">spallucci@ts.infn.it</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003293.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003293.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-04-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-11-09</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">045010</subfield>
<subfield code="p">Phys. Rev., D</subfield>
<subfield code="v">62</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4356152</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.L. Adler</subfield>
<subfield code="p">2426</subfield>
<subfield code="t">Phys. Rev.</subfield>
<subfield code="v">177</subfield>
<subfield code="y">1969</subfield>
<subfield code="s">Phys. Rev. 177 (1969) 2426</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S.E. Treiman, R. Jackiw, D.J. Gross, "Lectures on Current Algebra and its Applications", Princeton UP, Princeton NJ, (1972)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">T. Berger, "Fermions in two (1+1)-dimensional anomalous gauge theories: the chiral Schwinger model and the chiral quantum gravity", Hamburg U</subfield>
<subfield code="r">DESY-90-084</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">July 1990</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Rosenberg, Phys. Rev. 129 (1963) 2786</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. Jackiw, "Topological Investigations of Quantized Gauge Theories", in Relativity, Groups and Topology, eds. B. deWitt and R. Stora (Elsevier, Amsterdam 1984)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.T.Grisaru, N.K.Nielsen, W.Siegel, D.Zanon</subfield>
<subfield code="p">157</subfield>
<subfield code="t">Nucl. Phys., B</subfield>
<subfield code="v">247</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Nucl. Phys. B 247 (1984) 157</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.M. Polyakov</subfield>
<subfield code="p">207</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">103</subfield>
<subfield code="y">1981</subfield>
<subfield code="s">Phys. Lett. B 103 (1981) 207</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.M. Polyakov</subfield>
<subfield code="p">893</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Mod. Phys. Lett. A 2 (1987) 893</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R.J. Riegert</subfield>
<subfield code="p">56</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">134</subfield>
<subfield code="y">1984</subfield>
<subfield code="s">Phys. Lett. 134 (1984) 56</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K.Fujikawa</subfield>
<subfield code="p">1195</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">42</subfield>
<subfield code="y">1979</subfield>
<subfield code="s">Phys. Rev. Lett. 42 (1979) 1195</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B.deWitt, Relativity, Groups and Topology, Paris (1963); A.O.Barvinsky, G.A.Vilkovisky</subfield>
<subfield code="p">1</subfield>
<subfield code="t">Phys. Rep.</subfield>
<subfield code="v">119</subfield>
<subfield code="y">1985</subfield>
<subfield code="s">Phys. Rep. 119 (1985) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P.H.Frampton, T.W.Kephart</subfield>
<subfield code="p">1343</subfield>
<subfield code="t">Phys. Rev. Lett.</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Phys. Rev. Lett. 50 (1983) 1343</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">L. Alvarez-Gaume, E.Witten</subfield>
<subfield code="p">269</subfield>
<subfield code="t">Nucl. Phys.</subfield>
<subfield code="v">234</subfield>
<subfield code="y">1983</subfield>
<subfield code="s">Nucl. Phys. 234 (1983) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.Smailagic, R.E.Gamboa-Saravi</subfield>
<subfield code="p">145</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">192</subfield>
<subfield code="y">1987</subfield>
<subfield code="s">Phys. Lett. 192 (1987) 145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A.Smailagic, E.Spallucci</subfield>
<subfield code="p">17</subfield>
<subfield code="t">Phys. Lett.</subfield>
<subfield code="v">284</subfield>
<subfield code="y">1992</subfield>
<subfield code="s">Phys. Lett. 284 (1992) 17</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0003294</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Matsubara, K</subfield>
<subfield code="u">Uppsala University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Restrictions on Gauge Groups in Noncommutative Gauge Theory</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We show that the gauge groups SU(N), SO(N) and Sp(N) cannot be realized on a flat noncommutative manifold, while it is possible for U(N).</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANLPUBL200104</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Keizo Matsubara &lt;keizo.matsubara@teorfys.uu.se&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003294.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003294.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Matsubara, Keizo</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-04-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-11-09</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">417-419</subfield>
<subfield code="p">Phys. Lett., B</subfield>
<subfield code="v">482</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4356160</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J.Polchinski, TASI Lectures on D-branes</subfield>
<subfield code="r">hep-th/9611050</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">M.R. Douglas and C. Hull, D-branes and the Noncommutative torus</subfield>
<subfield code="p">8</subfield>
<subfield code="t">J. High Energy Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">J. High Energy Phys. 2 (1998) 8</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="r">hep-th/9711165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">V.Schomerus, D-branes and Deformation Quantization</subfield>
<subfield code="r">hep-th/9903205</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">N. Seiberg and E. Witten, String Theory and Noncommutative Geometry</subfield>
<subfield code="r">hep-th/9908142</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">2</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0003295</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Wang, B</subfield>
<subfield code="u">Fudan University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Quasinormal modes of Reissner-Nordstrom Anti-de Sitter Black Holes</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Complex frequencies associated with quasinormal modes for large Reissner-Nordstr$\ddot{o}$m Anti-de Sitter black holes have been computed. These frequencies are closely related to the black hole charge and do not scale linearly with the black hole temperature as in the Schwarzschild Anti-de Sitter case. In terms of the AdS/CFT correspondence, we found that the bigger the black hole charge is, the quicker the approach to thermal equilibrium in the CFT. The properties of quasinormal modes for $l&gt;0$ have also been studied.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANLPUBL200104</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lin, C Y</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Abdalla, E</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">Elcio Abdalla &lt;eabdalla@fma.if.usp.br&gt;</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.ps.gz</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig1.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig2.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig3.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig4.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig5.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig6a.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig6b.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0003295.fig7.ps.gz</subfield>
<subfield code="t">Additional</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="5">
<subfield code="b">CER</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">13</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Wang, Bin</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Lin, Chi-Yong</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="y">Abdalla, Elcio</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2000-04-03</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-11-09</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">79-88</subfield>
<subfield code="p">Phys. Lett., B</subfield>
<subfield code="v">481</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="i">SLAC</subfield>
<subfield code="s">4356179</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">K. D. Kokkotas, B. G. Schmidt</subfield>
<subfield code="r">gr-qc/9909058</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">and references therein</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">W. Krivan</subfield>
<subfield code="p">101501</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 101501</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. Hod</subfield>
<subfield code="r">gr-qc/9902072</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. R. Brady, C. M. Chambers, W. G. Laarakkers and E. Poisson</subfield>
<subfield code="p">064003</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 064003</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">P. R. Brady, C. M. Chambers, W. Krivan and P. Laguna</subfield>
<subfield code="p">7538</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1997</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 7538</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. T. Horowitz and V. E. Hubeny</subfield>
<subfield code="r">hep-th/9909056</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">G. T. Horowitz</subfield>
<subfield code="r">hep-th/9910082</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. S. C. Ching, P. T. Leung, W. M. Suen and K. Young</subfield>
<subfield code="p">2118</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">52</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 52 (1995) 2118</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">J. M. Maldacena</subfield>
<subfield code="p">231</subfield>
<subfield code="t">Adv. Theor. Math. Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. Witten</subfield>
<subfield code="p">253</subfield>
<subfield code="t">Adv. Theor. Math. Phys.</subfield>
<subfield code="v">2</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">S. S. Gubser, I. R. Klebanov and A. M. Polyakov</subfield>
<subfield code="p">105</subfield>
<subfield code="t">Phys. Lett., B</subfield>
<subfield code="v">428</subfield>
<subfield code="y">1998</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers</subfield>
<subfield code="p">064018</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">60</subfield>
<subfield code="y">1999</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 064018</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. W. Leaver</subfield>
<subfield code="p">1238</subfield>
<subfield code="t">J. Math. Phys.</subfield>
<subfield code="v">27</subfield>
<subfield code="y">1986</subfield>
<subfield code="s">J. Math. Phys. 27 (1986) 1238</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">E. W. Leaver</subfield>
<subfield code="p">2986</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">41</subfield>
<subfield code="y">1990</subfield>
<subfield code="s">Phys. Rev. D 41 (1990) 2986</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">C. O. Lousto</subfield>
<subfield code="p">1733</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">51</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 1733</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">O. Kaburaki</subfield>
<subfield code="p">316</subfield>
<subfield code="t">Phys. Lett., A</subfield>
<subfield code="v">217</subfield>
<subfield code="y">1996</subfield>
<subfield code="s">Phys. Lett. A 217 (1996) 316</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">R. K. Su, R. G. Cai and P. K. N. Yu</subfield>
<subfield code="p">2932</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">50</subfield>
<subfield code="y">1994</subfield>
<subfield code="s">Phys. Rev. D 50 (1994) 2932</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">3473</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">48</subfield>
<subfield code="y">1993</subfield>
<subfield code="s">Phys. Rev. D 48 (1993) 3473</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="p">6186</subfield>
<subfield code="t">Phys. Rev., D</subfield>
<subfield code="v">52</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Phys. Rev. D 52 (1995) 6186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">B. Wang, J. M. Zhu</subfield>
<subfield code="p">1269</subfield>
<subfield code="t">Mod. Phys. Lett., A</subfield>
<subfield code="v">10</subfield>
<subfield code="y">1995</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 1269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="m">A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev. D 60 (1999) 104026</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">rus</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Пушкин, А С</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Медный всадник</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">
&lt;!--HTML--&gt;На берегу пустынных волн, &lt;br /&gt;
Стоял он, дум великих полн, &lt;br /&gt;
И вдаль глядел. Пред ним широко&lt;br /&gt;
Река неслася; бедный чёлн&lt;br /&gt;
По ней стремился одиноко. &lt;br /&gt;
По мшистым, топким берегам&lt;br /&gt;
Чернели избы здесь и там, &lt;br /&gt;
Приют убогого чухонца; &lt;br /&gt;
И лес, неведомый лучам&lt;br /&gt;
В тумане спрятанного солнца, &lt;br /&gt;
Кругом шумел.
</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1833</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">1990-01-27</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2002-04-12</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">POETRY</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">gre</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Καβάφης, Κ Π</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Ιθάκη</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">
&lt;!--HTML--&gt;Σα βγεις στον πηγαιμό για την Ιθάκη, &lt;br /&gt;
να εύχεσαι νάναι μακρύς ο δρόμος, &lt;br /&gt;
γεμάτος περιπέτειες, γεμάτος γνώσεις. &lt;br/&gt;
Τους Λαιστρυγόνας και τους Κύκλωπας, &lt;br /&gt;
τον θυμωμένο Ποσειδώνα μη φοβάσαι, &lt;br /&gt;
τέτοια στον δρόμο σου ποτέ σου δεν θα βρείς, &lt;br /&gt;
αν μέν' η σκέψις σου υψηλή, αν εκλεκτή&lt;br /&gt;
συγκίνησις το πνεύμα και το σώμα σου αγγίζει. &lt;br /&gt;
Τους Λαιστρυγόνας και τους Κύκλωπας, &lt;br /&gt;
τον άγριο Ποσειδώνα δεν θα συναντήσεις, &lt;br /&gt;
αν δεν τους κουβανείς μες στην ψυχή σου, &lt;br /&gt;
αν η ψυχή σου δεν τους στήνει εμπρός σου. &lt;br /&gt;
&lt;br&gt;
Να εύχεσαι νάναι μακρύς ο δρόμος. &lt;br /&gt;
Πολλά τα καλοκαιρινά πρωϊά να είναι&lt;br /&gt;
που με τι ευχαρίστησι, με τι χαρά&lt;br /&gt;
θα μπαίνεις σε λιμένας πρωτοειδωμένους· &lt;br /&gt;
να σταματήσεις σ' εμπορεία Φοινικικά, &lt;br /&gt;
και τες καλές πραγμάτειες ν' αποκτήσεις, &lt;br /&gt;
σεντέφια και κοράλλια, κεχριμπάρια κ' έβενους, &lt;br /&gt;
και ηδονικά μυρωδικά κάθε λογής, &lt;br /&gt;
όσο μπορείς πιο άφθονα ηδονικά μυρωδικά· &lt;br /&gt;
σε πόλεις Αιγυπτιακές πολλές να πας, &lt;br /&gt;
να μάθεις και να μάθεις απ' τους σπουδασμένους. &lt;br /&gt;
&lt;br /&gt;
Πάντα στον νου σου νάχεις την Ιθάκη. &lt;br/&gt;
Το φθάσιμον εκεί είν' ο προορισμός σου. &lt;br /&gt;
Αλλά μη βιάζεις το ταξίδι διόλου. &lt;br /&gt;
Καλλίτερα χρόνια πολλά να διαρκέσει· &lt;br /&gt;
και γέρος πια ν' αράξεις στο νησί, &lt;br /&gt;
πλούσιος με όσα κέρδισες στον δρόμο, &lt;br /&gt;
μη προσδοκώντας πλούτη να σε δώσει η Ιθάκη. &lt;br /&gt;
&lt;br /&gt;
Η Ιθάκη σ' έδωσε το ωραίο ταξίδι. &lt;br /&gt;
Χωρίς αυτήν δεν θάβγαινες στον δρόμο. &lt;br /&gt;
Αλλο δεν έχει να σε δώσει πια. &lt;br /&gt;
&lt;br /&gt;
Κι αν πτωχική την βρεις, η Ιθάκη δεν σε γέλασε. &lt;br /&gt;
Ετσι σοφός που έγινες, με τόση πείρα, &lt;br /&gt;
ήδη θα το κατάλαβες η Ιθάκες τι σημαίνουν.
</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">1911</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2005-03-02</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2005-03-02</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">POETRY</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2345180CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5278333</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0210114</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Klebanov, Igor R</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">AdS Dual of the Critical O(N) Vector Model</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2002</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">11 Oct 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">11 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We suggest a general relation between theories of infinite number of higher-spin massless gauge fields in $AdS_{d+1}$ and large $N$ conformal theories in $d$ dimensions containing $N$-component vector fields. In particular, we propose that the singlet sector of the well-known critical 3-d O(N) model with the $(\phi^a \phi^a)^2$ interaction is dual, in the large $N$ limit, to the minimal bosonic theory in $AdS_4$ containing massless gauge fields of even spin.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2003</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2003 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Polyakov, A M</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">213-219</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">550</subfield>
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0210114.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0210114.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">klebanov@feynman.princeton.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200242</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060826</subfield>
<subfield code="h">0012</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021014</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002345180CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">DRAFT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">G. ’t Hooft, "A planar diagram theory for strong interactions,"</subfield>
<subfield code="s">Nucl. Phys. B 72 (1974) 461</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">A.M. Polyakov, "String theory and quark confinement,"</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 68 (1998) 1</subfield>
<subfield code="r">hep-th/9711002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">"The wall of the cave,"</subfield>
<subfield code="r">hep-th/9809057</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">J. Maldacena, "The large N limit of superconformal field theories and supergravity,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, "Gauge theory correlators from non-critical string theory,"</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">E. Witten, "Anti-de Sitter space and holography,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">E. Brezin, D.J. Wallace,</subfield>
<subfield code="s">Phys. Rev. B 7 (1973) 1976</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">K.G. Wilson and J. Kogut, "The Renormalization Group and the Epsilon Expansion,"</subfield>
<subfield code="s">Phys. Rep. 12 (1974) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">C. Fronsdal,</subfield>
<subfield code="s">Phys. Rev. D 18 (1978) 3624</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">E. Fradkin and M. Vasiliev,</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="s">Nucl. Phys. B 291 (1987) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">M.A. Vasiliev, "Higher Spin Gauge Theories Star Product and AdS Space,"</subfield>
<subfield code="r">hep-th/9910096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">A. M. Polyakov, "Gauge fields and space-time,"</subfield>
<subfield code="r">hep-th/0110196</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">P. Haggi-Mani and B. Sundborg, "Free Large N Supersymmetric Yang-Mills Theory as a String Theory,"</subfield>
<subfield code="r">hep-th/0002189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">B. Sundborg, "Stringy Gravity, Interacting Tensionless Strings and Massless Higher Spins,"</subfield>
<subfield code="r">hep-th/0103247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">E. Witten, Talk at the John Schwarz 60th Birthday Symposium,</subfield>
<subfield code="u">http://theory.caltech.edu/jhs60/witten/1.html</subfield>
<subfield code="z">http://theory.caltech.edu/jhs60/witten/1.html</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">E. Sezgin and P. Sundell, "Doubletons and 5D Higher Spin Gauge Theory,"</subfield>
<subfield code="r">hep-th/0105001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">A. Mikhailov, "Notes On Higher Spin Symmetries,"</subfield>
<subfield code="r">hep-th/0201019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">E. Sezgin and P. Sundell, "Analysis of Higher Spin Field Equations in Four Dimensions,"</subfield>
<subfield code="r">hep-th/0205132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">J. Engquist, E. Sezgin, P. Sundell, "On N=1,2,4 Higher Spin Gauge Theories in Four Dimensions,"</subfield>
<subfield code="r">hep-th/0207101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">M. Vasiliev, "Higher Spin Gauge Theories in Four, Three and Two Dimensions,"</subfield>
<subfield code="s">Int. J. Mod. Phys. D 5 (1996) 763</subfield>
<subfield code="r">hep-th/9611024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">I.R. Klebanov and E. Witten, "AdS/CFT correspondence and Symmetry Breaking,"</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">O. Aharony, M. Berkooz, E. Silverstein, "Multiple Trace Operators and Nonlocal String Theories,"</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">E. Witten, "Multi-Trace Operators, Boundary Conditions, And AdS/CFT Correspondence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double Trace Deformations, Boundary Conditions and Space-time Singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">S.S. Gubser and I. Mitra, "Double-Trace Operators and One-Loop Vacuum Energy in AdS/CFT,"</subfield>
<subfield code="r">hep-th/0210093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">I.R. Klebanov, "Touching Random Surfaces and Liouville Gravity,"</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 1836</subfield>
<subfield code="r">hep-th/9407167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">I.R. Klebanov and A. Hashimoto, "Non-Perturbative Solution of Matrix Models Modified by Trace-Squared Terms,"</subfield>
<subfield code="s">Nucl. Phys. B 434 (1995) 264</subfield>
<subfield code="r">hep-th/9409064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">A.M. Polyakov, "Non-Hamiltonian Approach to Quantum Field Theory at Small Distances,"</subfield>
<subfield code="s">Zh. Eksp. Teor. Fiz. 66 (1974) 23</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">E. D’Hoker, D. Z. Freedman, S. Mathur, A. Matusis and L. Rastelli, "Graviton exchange and complete 4-point functions in the AdS/CFT correspondence,"</subfield>
<subfield code="r">hep-th/9903196</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">For a review with a comprehensive set of references, see E. D’Hoker and D. Z. Freedman, "Supersymmetric Gauge Theories and the AdS/CFT Correspondence,"</subfield>
<subfield code="r">hep-th/0201253</subfield>
</datafield>
</record>
<record>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2292727CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">4828445</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">UNCOVER</subfield>
<subfield code="a">1021768628</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0201100</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DSF-2002-2</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Mück, W</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Napoli</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">An improved correspondence formula for AdS/CFT with multi-trace operators</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2002</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Napoli</subfield>
<subfield code="b">Napoli Univ.</subfield>
<subfield code="c">15 Jan 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">6 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">An improved correspondence formula is proposed for the calculation of correlation functions of a conformal field theory perturbed by multi-trace operators from the analysis of the dynamics of the dual field theory in Anti-de Sitter space. The formula reduces to the usual AdS/CFT correspondence formula in the case of single-trace perturbations.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS ING2002</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2002 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">301-304</subfield>
<subfield code="n">3-4</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">531</subfield>
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0201100.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0201100.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">wolfgang.mueck@na.infn.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200204</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060826</subfield>
<subfield code="h">0008</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20020128</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002292727CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">S. S. Gubser, I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">P. Breitenlohner and D. Z. Freedman,</subfield>
<subfield code="s">Ann. Phys. (San Diego) 144 (1982) 249</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">W. Mück and K. S. Viswanathan,</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 081901</subfield>
<subfield code="r">hep-th/9906155</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">W. Mück,</subfield>
<subfield code="s">Nucl. Phys. B 620 (2002) 477</subfield>
<subfield code="r">hep-th/0105270</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Bianchi, D. Z. Freedman and K. Skenderis,</subfield>
<subfield code="s">J. High Energy Phys. 08 (2001) 041</subfield>
<subfield code="r">hep-th/0105276</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2307939CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">4923022</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0205061</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DSF-2002-11</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">QMUL-PH-2002-11</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Martelli, D</subfield>
<subfield code="u">University of London</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Holographic Renormalization and Ward Identities with the Hamilton-Jacobi Method</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Napoli</subfield>
<subfield code="b">Napoli Univ.</subfield>
<subfield code="c">7 May 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">31 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">A systematic procedure for performing holographic renormalization, which makes use of the Hamilton-Jacobi method, is proposed and applied to a bulk theory of gravity interacting with a scalar field and a U(1) gauge field in the Stueckelberg formalism. We describe how the power divergences are obtained as solutions of a set of "descent equations" stemming from the radial Hamiltonian constraint of the theory. In addition, we isolate the logarithmic divergences, which are closely related to anomalies. The method allows to determine also the exact one-point functions of the dual field theory. Using the other Hamiltonian constraints of the bulk theory, we derive the Ward identities for diffeomorphisms and gauge invariance. In particular, we demonstrate the breaking of U(1)_R current conservation, recovering the holographic chiral anomaly recently discussed in hep-th/0112119 and hep-th/0202056.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Mück, W</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Martelli, Dario</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Mueck, Wolfgang</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">248-276</subfield>
<subfield code="p">Nucl. Phys. B</subfield>
<subfield code="v">654</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0205061.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0205061.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">d.martelli@qmul.ac.uk</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200219</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0005</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20020508</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002307939CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. M. Maldacena,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">S. S. Gubser, I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">E. D’Hoker and D. Z. Freedman,</subfield>
<subfield code="r">hep-th/0201253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">W. Mück and K. S. Viswanathan,</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 041901</subfield>
<subfield code="r">hep-th/9804035</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">D. Z. Freedman, S. D. Mathur, A. Matusis and L. Rastelli,</subfield>
<subfield code="s">Nucl. Phys. B 546 (1998) 96</subfield>
<subfield code="r">hep-th/9812032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">H. Liu and A. A. Tseytlin,</subfield>
<subfield code="s">Nucl. Phys. B 533 (1998) 88</subfield>
<subfield code="r">hep-th/9804083</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Henningson and K. Skenderis,</subfield>
<subfield code="s">J. High Energy Phys. 07 (1998) 023</subfield>
<subfield code="r">hep-th/9806087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">J. D. Brown and J. W. York,</subfield>
<subfield code="s">Phys. Rev. D 47 (1993) 1407</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">V. Balasubramanian and P. Kraus,</subfield>
<subfield code="s">Commun. Math. Phys. 208 (1999) 413</subfield>
<subfield code="r">hep-th/9902121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">R. C. Myers,</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 046002</subfield>
<subfield code="r">hep-th/9903203</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev. D 60 (1999) 104001,</subfield>
<subfield code="r">hep-th/9903238</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">S. de Haro, K. Skenderis and S. N. Solodukhin,</subfield>
<subfield code="s">Commun. Math. Phys. 217 (2000) 595</subfield>
<subfield code="r">hep-th/0002230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">M. Bianchi, D. Z. Freedman and K. Skenderis,</subfield>
<subfield code="r">hep-th/0112119</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">M. Bianchi, D. Z. Freedman and K. Skenderis,</subfield>
<subfield code="s">J. High Energy Phys. 08 (2001) 041</subfield>
<subfield code="r">hep-th/0105276</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">J. de Boer, E. Verlinde and H. Verlinde,</subfield>
<subfield code="s">J. High Energy Phys. 08 (2000) 003</subfield>
<subfield code="r">hep-th/9912012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">J. Kalkkinen, D. Martelli and W. Mück,</subfield>
<subfield code="s">J. High Energy Phys. 04 (2001) 036</subfield>
<subfield code="r">hep-th/0103111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">S. Corley,</subfield>
<subfield code="s">Phys. Lett. B 484 (2000) 141</subfield>
<subfield code="r">hep-th/0004030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">J. Kalkkinen and D. Martelli,</subfield>
<subfield code="s">Nucl. Phys. B 596 (2001) 415</subfield>
<subfield code="r">hep-th/0007234</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">M. Bianchi, O. DeWolfe, D. Z. Freedman and K. Pilch,</subfield>
<subfield code="s">J. High Energy Phys. 01 (2001) 021</subfield>
<subfield code="r">hep-th/0009156</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">I. R. Klebanov, P. Ouyang and E. Witten,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 105007</subfield>
<subfield code="r">hep-th/0202056</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">C. Fefferman and C. R. Graham, in Élie Cartan et les Mathématiques d’aujourd’hui, Astérisque, p. 95 (1985).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">D. Martelli and A. Miemiec,</subfield>
<subfield code="s">J. High Energy Phys. 04 (2002) 027</subfield>
<subfield code="r">hep-th/0112150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">S. Ferrara and A. Zaffaroni,</subfield>
<subfield code="r">hep-th/9908163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">J. Parry, D. S. Salopek and J. M. Stewart,</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 2872</subfield>
<subfield code="r">gr-qc/9310020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">B. Darian,</subfield>
<subfield code="s">Class. Quantum Gravity 15 (1998) 143</subfield>
<subfield code="r">gr-qc/9707046</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">V. L. Campos, G. Ferretti, H. Larsson, D. Martelli and B. E. W. Nilsson,</subfield>
<subfield code="s">J. High Energy Phys. 0006 (2000) 023</subfield>
<subfield code="r">hep-th/0003151</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni,</subfield>
<subfield code="s">Nucl. Phys. B 569 (2000) 451</subfield>
<subfield code="r">hep-th/9909047</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">W. Mück,</subfield>
<subfield code="r">hep-th/0201100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">W. Mück and K. S. Viswanathan,</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 106006</subfield>
<subfield code="r">hep-th/9805145</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">M. M. Taylor-Robinson,</subfield>
<subfield code="r">hep-th/0002125</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, Freeman, San Francisco (1973).</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2327507CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5004500</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0207111</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">BROWN-HEP-1309</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Ramgoolam, S</subfield>
<subfield code="u">Brown University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Higher dimensional geometries related to fuzzy odd-dimensional spheres</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2002</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Providence, RI</subfield>
<subfield code="b">Brown Univ.</subfield>
<subfield code="c">11 Jul 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">32 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study $SO(m)$ covariant Matrix realizations of $ \sum_{i=1}^{m} X_i^2 = 1 $ for even $m$ as candidate fuzzy odd spheres following hep-th/0101001. As for the fuzzy four sphere, these Matrix algebras contain more degrees of freedom than the sphere itself and the full set of variables has a geometrical description in terms of a higher dimensional coset. The fuzzy $S^{2k-1} $ is related to a higher dimensional coset $ {SO(2k) \over U(1) \times U(k-1)}$. These cosets are bundles where base and fibre are hermitian symmetric spaces. The detailed form of the generators and relations for the Matrix algebras related to the fuzzy three-spheres suggests Matrix actions which admit the fuzzy spheres as solutions. These Matrix actions are compared with the BFSS, IKKT and BMN Matrix models as well as some others. The geometry and combinatorics of fuzzy odd spheres lead to some remarks on the transverse five-brane problem of Matrix theories and the exotic scaling of the entropy of 5-branes with the brane number.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2003 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Ramgoolam, Sanjaye</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">064</subfield>
<subfield code="p">J. High Energy Phys.</subfield>
<subfield code="v">10</subfield>
<subfield code="y">2002</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0207111.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0207111.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">ramgosk@het.brown.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200228</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070205</subfield>
<subfield code="h">2036</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20020712</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002327507CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Kabat and W. Taylor, "Spherical membranes in Matrix theory,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 181</subfield>
<subfield code="r">hep-th/9711078</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">J. Castelino, S. Lee and W. Taylor IV, "Longitudinal Five-Branes as Four Spheres in Matrix Theory,"</subfield>
<subfield code="s">Nucl. Phys. B 526 (1998) 334</subfield>
<subfield code="r">hep-th/9712105</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">R. Myers, "Dielectric-Branes,"</subfield>
<subfield code="r">hep-th/9910053</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">N. Constable, R. Myers, O. Tafjord, "Non-abelian Brane Intersections,"</subfield>
<subfield code="s">J. High Energy Phys. 0106 (2001) 023</subfield>
<subfield code="r">hep-th/0102080</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">D. Berenstein, J. Maldacena, H. Nastase, "Strings in flat space and pp waves from N = 4 Super Yang Mills,"</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 013</subfield>
<subfield code="r">hep-th/0202021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">J. Maldacena, A. Strominger, "AdS3 Black Holes and a Stringy Exclusion Principle," hep-th/980408,</subfield>
<subfield code="s">J. High Energy Phys. 9812 (1998) 005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">A. Jevicki, S. Ramgoolam, "Non-commutative gravity from the AdS/CFT correspondence,"</subfield>
<subfield code="s">J. High Energy Phys. 9904 (1999) 032</subfield>
<subfield code="r">hep-th/9902059</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">P.M. Ho, M. Li, "Fuzzy Spheres in AdS/CFT Correspondence and Holography from Noncommutativity,"</subfield>
<subfield code="s">Nucl. Phys. B 596 (2001) 259</subfield>
<subfield code="r">hep-th/0004072</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">M. Berkooz, H. Verlinde, "Matrix Theory, AdS/CFT and Higgs-Coulomb Equivalence,"</subfield>
<subfield code="s">J. High Energy Phys. 9911 (1999) 037</subfield>
<subfield code="r">hep-th/9907100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">Z. Guralnik, S. Ramgoolam, "On the Polarization of Unstable D0-Branes into Non-Commutative Odd Spheres,"</subfield>
<subfield code="s">J. High Energy Phys. 0102 (2001) 032</subfield>
<subfield code="r">hep-th/0101001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">S. Ramgoolam, "On spherical harmonics for fuzzy spheres in diverse dimensions,"</subfield>
<subfield code="s">Nucl. Phys. B 610 (2001) 461</subfield>
<subfield code="r">hep-th/0105006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">P.M. Ho, S. Ramgoolam, "Higher dimensional geometries from matrix brane constructions,"</subfield>
<subfield code="s">Nucl. Phys. B 627 (2002) 266</subfield>
<subfield code="r">hep-th/0111278</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">Y. Kimura, "Noncommutative Gauge Theory on Fuzzy Four-Sphere and Matrix Model,"</subfield>
<subfield code="r">hep-th/0204256</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">S. C. Zhang, J. Hu, "A Four Dimensional Generalization of the Quantum Hall Effect,"</subfield>
<subfield code="s">Science 294 (2001) 823</subfield>
<subfield code="r">cond-mat/0110572</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">M. Fabinger, "Higher-Dimensional Quantum Hall Effect in String Theory,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 037</subfield>
<subfield code="r">hep-th/0201016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">A. P. Balachandran, "Quantum Spacetimes in the Year 1,"</subfield>
<subfield code="r">hep-th/0203259</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">A. Salam, J. Strathdee, "On Kaluza-Klein Theory," Ann. Phys. 141, 1982, 216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">N. R. Wallach, "Harmonic Analysis on homogeneous spaces," M. Dekker Inc., NY, 1973</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">Y. Kazama, H. Suzuki, "New N = 2 superconformal field theories and superstring compactification"</subfield>
<subfield code="s">Nucl. Phys. B 321 (1989) 232</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">M. Kramer, "Some remarks suggesting an interesting theory of harmonic functions on SU(2n + 1)/Sp(n) and SO(2n + 1)/U(n)," Arch. Math. 33 (1979/80), 76-79.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">P.M. Ho, "Fuzzy sphere from Matrix model,"</subfield>
<subfield code="s">J. High Energy Phys. 0012 (2000) 015</subfield>
<subfield code="r">hep-th/0110165</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">T. Banks, W. Fischler, S. Shenker, L. Susskind, "M-Theory as a Matrix model: A conjecture,"</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 5112</subfield>
<subfield code="r">hep-th/9610043</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">N. Ishibashi, H. Kawai, Y. Kitazawa, A. Tsuchiya, "A large-N reduced model as superstring,"</subfield>
<subfield code="s">Nucl. Phys. B 498 (1997) 467</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">V. Periwal, "Matrices on a point as the theory of everything,"</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 1711</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">S. Chaudhuri, "Bosonic Matrix Theory and D-branes,"</subfield>
<subfield code="r">hep-th/0205306</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">M. Bagnoud, L. Carlevaro, A. Bilal, "Supermatrix models for M-theory based on osp(1|32,R),"</subfield>
<subfield code="r">hep-th/0201183</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">L. Smolin, "M theory as a matrix extension of Chern-Simons theory,"</subfield>
<subfield code="s">Nucl. Phys. B 591 (2000) 227</subfield>
<subfield code="r">hep-th/0002009</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">I. Bandos, J. Lukierski, "New superparticle models outside the HLS supersymmetry scheme,"</subfield>
<subfield code="r">hep-th/9812074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">S. Iso, Y. Kimura, K. Tanaka, K. Wakatsuki, "Noncommutative Gauge Theory on Fuzzy Sphere from Matrix Model,"</subfield>
<subfield code="r">hep-th/0101102</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">W. Fulton and G. Harris, "Representation theory," Springer Verlag 1991.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">M. Atiyah and E. Witten, "M-Theory Dynamics On A Manifold Of G2 Holonomy,"</subfield>
<subfield code="r">hep-th/0107177</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">S. Ramgoolam, D. Waldram, "Zero branes on a compact orbifold,"</subfield>
<subfield code="s">J. High Energy Phys. 9807 (1998) 009</subfield>
<subfield code="r">hep-th/9805191</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">Brian R. Greene, C. I. Lazaroiu, Piljin Yi, "D Particles on T4/Z(N) Orbifolds and their resolutions,"</subfield>
<subfield code="s">Nucl. Phys. B 539 (1999) 135</subfield>
<subfield code="r">hep-th/9807040</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">I. Klebanov, A. Tseytlin, "Entropy of Near-Extremal Black p-branes,"</subfield>
<subfield code="s">Nucl. Phys. B 475 (1996) 164</subfield>
<subfield code="r">hep-th/9604089</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2341644CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5208424</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0209226</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PUTP-2002-48</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">SLAC-PUB-9504</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">SU-ITP-2002-36</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Adams, A</subfield>
<subfield code="u">Stanford University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Decapitating Tadpoles</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2002</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Beijing</subfield>
<subfield code="b">Beijing Univ. Dept. Phys.</subfield>
<subfield code="c">26 Sep 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">31 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We argue that perturbative quantum field theory and string theory can be consistently modified in the infrared to eliminate, in a radiatively stable manner, tadpole instabilities that arise after supersymmetry breaking. This is achieved by deforming the propagators of classically massless scalar fields and the graviton so as to cancel the contribution of their zero modes. In string theory, this modification of propagators is accomplished by perturbatively deforming the world-sheet action with bi-local operators similar to those that arise in double-trace deformations of AdS/CFT. This results in a perturbatively finite and unitary S-matrix (in the case of string theory, this claim depends on standard assumptions about unitarity in covariant string diagrammatics). The S-matrix is parameterized by arbitrary scalar VEVs, which exacerbates the vacuum degeneracy problem. However, for generic values of these parameters, quantum effects produce masses for the nonzero modes of the scalars, lifting the fluctuating components of the moduli.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">McGreevy, J</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Silverstein, E</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Adams, Allan</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Greevy, John Mc</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Silverstein, Eva</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0209226.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0209226.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">evas@slac.stanford.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200239</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060218</subfield>
<subfield code="h">0013</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20020927</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002341644CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">W. Fischler and L. Susskind, "Dilaton Tadpoles, String Condensates And Scale Invariance,"</subfield>
<subfield code="s">Phys. Lett. B 171 (1986) 383</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">W. Fischler and L. Susskind, "Dilaton Tadpoles, String Condensates And Scale Invariance. 2,"</subfield>
<subfield code="s">Phys. Lett. B 173 (1986) 262</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">C. G. Callan, C. Lovelace, C. R. Nappi and S. A. Yost, "Loop Corrections To Superstring Equations Of Motion,"</subfield>
<subfield code="s">Nucl. Phys. B 308 (1988) 221</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">H. Ooguri and N. Sakai, "String Multiloop Corrections To Equations Of Motion,"</subfield>
<subfield code="s">Nucl. Phys. B 312 (1989) 435</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">J. Polchinski, "Factorization Of Bosonic String Amplitudes,"</subfield>
<subfield code="s">Nucl. Phys. B 307 (1988) 61</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">H. La and P. Nelson, "Effective Field Equations For Fermionic Strings,"</subfield>
<subfield code="s">Nucl. Phys. B 332 (1990) 83</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace operators and non-local string theories,"</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Non-local string theories on AdS3 × S3 and stable non-supersymmetric backgrounds,"</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 106007</subfield>
<subfield code="r">hep-th/0112178</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos, G. Dvali, G. Gabadadze, to appear.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">E. Witten, "Strong Coupling Expansion Of Calabi-Yau Compactification,"</subfield>
<subfield code="s">Nucl. Phys. B 471 (1996) 135</subfield>
<subfield code="r">hep-th/9602070</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">O. Aharony and T. Banks, "Note on the quantum mechanics of M theory,"</subfield>
<subfield code="s">J. High Energy Phys. 9903 (1999) 016</subfield>
<subfield code="r">hep-th/9812237</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">T. Banks, "On isolated vacua and background independence,"</subfield>
<subfield code="r">hep-th/0011255</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">R. Bousso and J. Polchinski, "Quantization of four-form fluxes and dynamical neutralization of the cosmological constant,"</subfield>
<subfield code="s">J. High Energy Phys. 0006 (2000) 006</subfield>
<subfield code="r">hep-th/0004134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">S. B. Giddings, S. Kachru and J. Polchinski, "Hierarchies from fluxes in string compactifications,"</subfield>
<subfield code="r">hep-th/0105097</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">A. Maloney, E. Silverstein and A. Strominger, "De Sitter space in noncritical string theory,"</subfield>
<subfield code="r">hep-th/0205316</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">S. Kachru and E. Silverstein, "4d conformal theories and strings on orbifolds,"</subfield>
<subfield code="s">Phys. Rev. Lett. 80 (1998) 4855</subfield>
<subfield code="r">hep-th/9802183</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">A. E. Lawrence, N. Nekrasov and C. Vafa, "On conformal field theories in four dimensions,"</subfield>
<subfield code="s">Nucl. Phys. B 533 (1998) 199</subfield>
<subfield code="r">hep-th/9803015</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">M. Bershadsky, Z. Kakushadze and C. Vafa, "String expansion as large N expansion of gauge theories,"</subfield>
<subfield code="s">Nucl. Phys. B 523 (1998) 59</subfield>
<subfield code="r">hep-th/9803076</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">I. R. Klebanov and A. A. Tseytlin, "A non-supersymmetric large N CFT from type 0 string theory,"</subfield>
<subfield code="s">J. High Energy Phys. 9903 (1999) 015</subfield>
<subfield code="r">hep-th/9901101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">A. Adams and E. Silverstein, "Closed string tachyons, AdS/CFT, and large N QCD,"</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 086001</subfield>
<subfield code="r">hep-th/0103220</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">A. A. Tseytlin and K. Zarembo, "Effective potential in non-supersymmetric SU(N) x SU(N) gauge theory and interactions of type 0 D3-branes,"</subfield>
<subfield code="s">Phys. Lett. B 457 (1999) 77</subfield>
<subfield code="r">hep-th/9902095</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">M. Strassler, to appear</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity,"</subfield>
<subfield code="s">Commun. Math. Phys. 208 (1999) 413</subfield>
<subfield code="r">hep-th/9902121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">S. Thomas, in progress.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">O. Aharony, M. Fabinger, G. T. Horowitz and E. Silverstein, "Clean time-dependent string backgrounds from bubble baths,"</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 007</subfield>
<subfield code="r">hep-th/0204158</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Shifman, "Diluting cosmological constant in infinite volume extra dimensions,"</subfield>
<subfield code="r">hep-th/0202174</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">D. Friedan, "A tentative theory of large distance physics,"</subfield>
<subfield code="r">hep-th/0204131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Shifman, "Diluting cosmological constant via large distance modification of gravity,"</subfield>
<subfield code="r">hep-th/0208096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">J. W. Moffat,</subfield>
<subfield code="r">hep-th/0207198</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">A. A. Tseytlin, "On 'Macroscopic String' Approximation In String Theory,"</subfield>
<subfield code="s">Phys. Lett. B 251 (1990) 530</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">B. Zwiebach, "Closed string field theory: Quantum action and the B-V master equation,"</subfield>
<subfield code="s">Nucl. Phys. B 390 (1993) 33</subfield>
<subfield code="r">hep-th/9206084</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">J. Polchinski, "String Theory. Vol. 1: An Introduction To The Bosonic String," Cambridge, UK: Univ. Pr. (1998) 402 p.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">S. Kachru, X. Liu, M. B. Schulz and S. P. Trivedi, "Supersymmetry changing bubbles in string theory,"</subfield>
<subfield code="r">hep-th/0205108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">A. R. Frey and J. Polchinski, "N = 3 warped compactifications,"</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 126009</subfield>
<subfield code="r">hep-th/0201029</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">A. Adams, O. Aharony, J. McGreevy, E. Silverstein,..., work in progress</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2342206CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5224543</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0209257</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Berkooz, M</subfield>
<subfield code="u">The Weizmann Inst. of Science</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Double Trace Deformations, Infinite Extra Dimensions and Supersymmetry Breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2002</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">29 Sep 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">22 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">It was recently shown how to break supersymmetry in certain $AdS_3$ spaces, without destabilizing the background, by using a ``double trace'' deformation which localizes on the boundary of space-time. By viewing spatial sections of $AdS_3$ as a compactification space, one can convert this into a SUSY breaking mechanism which exists uniformly throughout a large 3+1 dimensional space-time, without generating any dangerous tadpoles. This is a generalization of a Visser type infinite extra dimensions compactification. Although the model is not Lorentz invariant, the dispersion relation is relativistic at high enough momenta, and it can be arranged such that at the same kinematical regime the energy difference between former members of a SUSY multiplet is large.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Berkooz, Micha</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0209257.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0209257.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">berkooz@wisemail.weizmann.ac.il</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200240</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060603</subfield>
<subfield code="h">0013</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021001</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002342206CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">T. Banks, "Cosmological Breaking of Supersymmetry? Or Little Lambda Goes Back to the Future 2.",</subfield>
<subfield code="r">hep-th/0007146</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">J. Brown and C. Teitelboim,</subfield>
<subfield code="s">Phys. Lett. B 195 (1987) 177</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="s">Nucl. Phys. B 297 (1988) 787</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">R. Bousso and J. Polchinski, "Quantization of Four-form Fluxes and Dynamical Neutralization of the Cosmological Constant",</subfield>
<subfield code="s">J. High Energy Phys. 0006 (2000) 006</subfield>
<subfield code="r">hep-th/0004134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">J.L. Feng, J. March-Russell, S. Sethi and F. Wilczek, "Saltatory Relaxation of the Cosmological Constant",</subfield>
<subfield code="s">Nucl. Phys. B 602 (2001) 307</subfield>
<subfield code="r">hep-th/0005276</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">E. Witten, "Strong Coupling and the Cosmological Constant",</subfield>
<subfield code="s">Mod. Phys. Lett. A 10 (1995) 2153</subfield>
<subfield code="r">hep-th/9506101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">A. Maloney, E. Silverstein and A. Strominger, "de Sitter Space in Non-Critical String Theory",</subfield>
<subfield code="r">hep-th/0205316</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">, Hawking Festschrift.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">S. Kachru and E. Silverstein, "On Vanishing Two Loop Cosmological Constant in Nonsupersymmetric Strings",</subfield>
<subfield code="s">J. High Energy Phys. 9901 (1999) 004</subfield>
<subfield code="r">hep-th/9810129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">S. Kachru, M. Schulz and E. Silverstein, "Self-tuning flat domain walls in 5d gravity and string theory",</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 045021</subfield>
<subfield code="r">hep-th/0001206</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">V.A. Rubakov and M.E. Shaposhnikov, "Extra Space-time Dimensions Towards a Solution to the Cosmological Constant Problem",</subfield>
<subfield code="s">Phys. Lett. B 125 (1983) 139</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Shifman, "Diluting Cosmological Constant In Infinite Volume Extra Dimensions",</subfield>
<subfield code="r">hep-th/0202174</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">G.W. Moore, "Atkin-Lehner Symmetry",</subfield>
<subfield code="s">Nucl. Phys. B 293 (1987) 139</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">Erratum-</subfield>
<subfield code="s">Nucl. Phys. B 299 (1988) 847</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">K. Akama, "Pregeometry",</subfield>
<subfield code="s">Lect. Notes Phys. 176 (1982) 267</subfield>
<subfield code="r">hep-th/0001113</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m"> Also in *Nara 1982, Proceedings, Gauge Theory and Gravitation*, 267-271.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Visser, "An Exotic Class of Kaluza-Klein Models",</subfield>
<subfield code="s">Phys. Lett. B 159 (1985) 22</subfield>
<subfield code="r">hep-th/9910093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">L. Randall and R. Sundrum, "An Alternative to Compactification",</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 4690</subfield>
<subfield code="r">hep-th/9906064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">A. Adams and E. Silverstein, "Closed String Tachyons, AdS/CFT and Large N QCD",</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 086001</subfield>
<subfield code="r">hep-th/0103220</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Multiple-Trace Operators and Non-Local String Theories",</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Non-local String Theories on AdS3 × S3 and non-supersymmetric backgrounds",</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 106007</subfield>
<subfield code="r">hep-th/0112178</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double-trace Deformations, Boundary Conditions and Space-time Singularities",</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">E. Witten, "Multi-Trace Operators, Boundary Conditions, And AdS/CFT Correspondence",</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">A. Sever and A. Shomer, "A Note on Multi-trace Deformations and AdS/CFT",</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 027</subfield>
<subfield code="r">hep-th/0203168</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">J. Maldacena, "The large N limit of superconformal field theories and supergravity,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="s">Int. J. Theor. Phys. 38 (1998) 1113</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">E. Witten, "Anti-de Sitter space and holography,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge theory correlators from non-critical string theory," hep-th/9802109,</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, "Large N Field Theories, String Theory and Gravity",</subfield>
<subfield code="s">Phys. Rep. 323 (2000) 183</subfield>
<subfield code="r">hep-th/9905111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">A. Giveon, D. Kutasov and N. Seiberg, "Comments on string theory on AdS3," hep-th/9806194,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 733</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">J. Maldacena, J. Michelson and A. Strominger, "Anti-de Sitter Fragmentation",</subfield>
<subfield code="s">J. High Energy Phys. 9902 (1999) 011</subfield>
<subfield code="r">hep-th/9812073</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">N. Seiberg and E. Witten, "The D1/D5 System And Singular CFT",</subfield>
<subfield code="s">J. High Energy Phys. 9904 (1999) 017</subfield>
<subfield code="r">hep-th/9903224</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">J. Maldacena and H. Ooguri, "Strings in AdS3 and the SL(2, R) WZW Model. Part 1 The Spectrum",</subfield>
<subfield code="s">J. Math. Phys. 42 (2001) 2929</subfield>
<subfield code="r">hep-th/0001053</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten, "AdS/CFT correspondence and Symmetry breaking,"</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">R. Kallosh, A.D. Linde, S. Prokushkin and M. Shmakova, "Gauged Supergravities, de Sitter space and Cosmology",</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 105016</subfield>
<subfield code="r">hep-th/0110089</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">R. Kallosh, "Supergravity, M-Theory and Cosmology",</subfield>
<subfield code="r">hep-th/0205315</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">R. Kallosh, A.D. Linde, S. Prokushkin and M. Shmakova, "Supergravity, Dark Energy and the Fate of the Universe",</subfield>
<subfield code="r">hep-th/0208156</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">C.M. Hull and N.P. Warner, "Non-compact Gauging from Higher Dimensions",</subfield>
<subfield code="s">Class. Quantum Gravity 5 (1988) 1517</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">P. Kraus and E.T. Tomboulis, "Photons and Gravitons as Goldstone Bosons, and the Cosmological Constant",</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 045015</subfield>
<subfield code="r">hep-th/0203221</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">M. Berkooz and S.-J. Rey, "Non-Supersymmetric Stable Vacua of M-Theory",</subfield>
<subfield code="s">J. High Energy Phys. 9901 (1999) 014</subfield>
<subfield code="r">hep-th/9807200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">A. Adams, J. McGreevy and E. Silverstein, "Decapitating Tadpoles",</subfield>
<subfield code="r">hep-th/0209226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, "Non-Local Modification of Gravity and the Cosmological Constant Problem",</subfield>
<subfield code="r">hep-th/0209227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">V. Balasubramanian, P. Kraus and A. Lawrence, "Bulk vs. Boundary Dynamics in Anti-de Sitter Space-time",</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 046003</subfield>
<subfield code="r">hep-th/9805171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">H. Verlinde, "Holography and Compactification",</subfield>
<subfield code="s">Nucl. Phys. B 580 (2000) 264</subfield>
<subfield code="r">hep-th/9906182</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">S.B. Giddings, S. Kachru and J. Polchinski, "Hierarchies from Fluxes in String Compactifications",</subfield>
<subfield code="r">hep-th/0105097</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Shifman, "Diluting Cosmological Constant via Large Distance Modification of Gravity"</subfield>
<subfield code="r">hep-th/0208096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">D. Gepner, "Lectures on N=2 String Theory", In Superstrings 89, The Trieste Spring School, 1989.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
<subfield code="m">R.G. Leigh, "Dirac-Born-Infeld Action from Dirichlet Sigma Model",</subfield>
<subfield code="s">Mod. Phys. Lett. A 4 (1989) 2767</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
<subfield code="m">J. Bagger and A. Galperin, "Linear and Non-linear Supersymmetries",</subfield>
<subfield code="r">hep-th/9810109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
<subfield code="m">, *Dubna 1997, Supersymmetries and quantum symmetries* 3-20.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">A. Giveon and M. Rocek, "Supersymmetric String Vacua on AdS3 × N ",</subfield>
<subfield code="r">hep-th/9904024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
<subfield code="m">E.J. Martinec and W. McElgin, "String Theory on AdS Orbifolds"</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 029</subfield>
<subfield code="r">hep-th/0106171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[45]</subfield>
<subfield code="m">E.J. Martinec and W. McElgin, "Exciting AdS Orbifolds",</subfield>
<subfield code="r">hep-th/0206175</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[46]</subfield>
<subfield code="m">V. Balasubramanian, J. de Boer, E. Keski-Vakkuri and S.F. Ross, "Supersymmetric Conical Defects",</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 064011</subfield>
<subfield code="r">hep-th/0011217</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">CERCER</subfield>
<subfield code="a">2344398</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5256739</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0210075</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">SISSA-2002-64-EP</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Borunda, M</subfield>
<subfield code="u">INFN</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">On the quantum stability of IIB orbifolds and orientifolds with Scherk-Schwarz SUSY breaking</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Trieste</subfield>
<subfield code="b">Scuola Int. Sup. Studi Avan.</subfield>
<subfield code="c">8 Oct 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">26 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study the quantum stability of Type IIB orbifold and orientifold string models in various dimensions, including Melvin backgrounds, where supersymmetry (SUSY) is broken {\it \`a la} Scherk-Schwarz (SS) by twisting periodicity conditions along a circle of radius R. In particular, we compute the R-dependence of the one-loop induced vacuum energy density $\rho(R)$, or cosmological constant. For SS twists different from Z2 we always find, for both orbifolds and orientifolds, a monotonic $\rho(R)&lt;0$, eventually driving the system to a tachyonic instability. For Z2 twists, orientifold models can have a different behavior, leading either to a runaway decompactification limit or to a negative minimum at a finite value R_0. The last possibility is obtained for a 4D chiral orientifold model where a more accurate but yet preliminary analysis seems to indicate that $R_0\to \infty$ or towards the tachyonic instability, as the dependence on the other geometric moduli is included.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2003</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2003 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Serone, M</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Trapletti, M</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">85-108</subfield>
<subfield code="p">Nucl. Phys. B</subfield>
<subfield code="v">653</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0210075.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0210075.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">serone@he.sissa.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200241</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0006</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021009</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002344398CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Scherk and J. H. Schwarz,</subfield>
<subfield code="s">Phys. Lett. B 82 (1979) 60</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="s">Nucl. Phys. B 153 (1979) 61</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">R. Rohm,</subfield>
<subfield code="s">Nucl. Phys. B 237 (1984) 553</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">H. Itoyama and T.R. Taylor,</subfield>
<subfield code="s">Phys. Lett. B 186 (1987) 129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">C. Kounnas and M. Porrati,</subfield>
<subfield code="s">Nucl. Phys. B 310 (1988) 355</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">S. Ferrara, C. Kounnas, M. Porrati and F. Zwirner,</subfield>
<subfield code="s">Nucl. Phys. B 318 (1989) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">C. Kounnas and B. Rostand,</subfield>
<subfield code="s">Nucl. Phys. B 341 (1990) 641</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">I. Antoniadis and C. Kounnas,</subfield>
<subfield code="s">Phys. Lett. B 261 (1991) 369</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">E. Kiritsis and C. Kounnas,</subfield>
<subfield code="s">Nucl. Phys. B 503 (1997) 117</subfield>
<subfield code="r">hep-th/9703059</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">I. Antoniadis,</subfield>
<subfield code="s">Phys. Lett. B 246 (1990) 377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">C. A. Scrucca and M. Serone,</subfield>
<subfield code="s">J. High Energy Phys. 0110 (2001) 017</subfield>
<subfield code="r">hep-th/0107159</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">I. Antoniadis, E. Dudas and A. Sagnotti,</subfield>
<subfield code="s">Nucl. Phys. B 544 (1999) 469</subfield>
<subfield code="r">hep-th/9807011</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">I. Antoniadis, G. D’Appollonio, E. Dudas and A. Sagnotti,</subfield>
<subfield code="s">Nucl. Phys. B 553 (1999) 133</subfield>
<subfield code="r">hep-th/9812118</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="s">Nucl. Phys. B 565 (2000) 123</subfield>
<subfield code="r">hep-th/9907184</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">I. Antoniadis, K. Benakli and A. Laugier,</subfield>
<subfield code="r">hep-th/0111209</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">C. A. Scrucca, M. Serone and M. Trapletti,</subfield>
<subfield code="s">Nucl. Phys. B 635 (2002) 33</subfield>
<subfield code="r">hep-th/0203190</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">J. D. Blum and K. R. Dienes,</subfield>
<subfield code="s">Nucl. Phys. B 516 (1998) 83</subfield>
<subfield code="r">hep-th/9707160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Fabinger and P. Horava,</subfield>
<subfield code="s">Nucl. Phys. B 580 (2000) 243</subfield>
<subfield code="r">hep-th/0002073</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">P. Ginsparg and C. Vafa,</subfield>
<subfield code="s">Nucl. Phys. B 289 (1987) 414</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">M. A. Melvin,</subfield>
<subfield code="s">Phys. Lett. 8 (1964) 65</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">G. W. Gibbons and K. i. Maeda,</subfield>
<subfield code="s">Nucl. Phys. B 298 (1988) 741</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">F. Dowker, J. P. Gauntlett, D. A. Kastor and J. Traschen,</subfield>
<subfield code="s">Phys. Rev. D 49 (1994) 2909</subfield>
<subfield code="r">hep-th/9309075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">A. Adams, J. Polchinski and E. Silverstein,</subfield>
<subfield code="s">J. High Energy Phys. 0110 (2001) 029</subfield>
<subfield code="r">hep-th/0108075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">J. R. David, M. Gutperle, M. Headrick and S. Minwalla,</subfield>
<subfield code="s">J. High Energy Phys. 0202 (2002) 041</subfield>
<subfield code="r">hep-th/0111212</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">T. Suyama,</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 015</subfield>
<subfield code="r">hep-th/0110077</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">C. Vafa, arXiv</subfield>
<subfield code="r">hep-th/0111051</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">G. Aldazabal, A. Font, L. E. Ibanez and G. Violero,</subfield>
<subfield code="s">Nucl. Phys. B 536 (1998) 29</subfield>
<subfield code="r">hep-th/9804026</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">K. H. O’Brien and C. I. Tan,</subfield>
<subfield code="s">Phys. Rev. D 36 (1987) 1184</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">J. Polchinski,</subfield>
<subfield code="s">Commun. Math. Phys. 104 (1986) 37</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">D. M. Ghilencea, H. P. Nilles and S. Stieberger, arXiv</subfield>
<subfield code="r">hep-th/0108183</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">P. Mayr and S. Stieberger,</subfield>
<subfield code="s">Nucl. Phys. B 407 (1993) 725</subfield>
<subfield code="r">hep-th/9303017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">E. Alvarez,</subfield>
<subfield code="s">Nucl. Phys. B 269 (1986) 596</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">J. G. Russo and A. A. Tseytlin,</subfield>
<subfield code="s">J. High Energy Phys. 0111 (2001) 065</subfield>
<subfield code="r">hep-th/0110107</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="s">Nucl. Phys. B 611 (2001) 93</subfield>
<subfield code="r">hep-th/0104238</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">A. Dabholkar,</subfield>
<subfield code="s">Nucl. Phys. B 639 (2002) 331</subfield>
<subfield code="r">hep-th/0109019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">M. Gutperle and A. Strominger,</subfield>
<subfield code="s">J. High Energy Phys. 0106 (2001) 035</subfield>
<subfield code="r">hep-th/0104136</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">M. S. Costa and M. Gutperle,</subfield>
<subfield code="s">J. High Energy Phys. 0103 (2001) 027</subfield>
<subfield code="r">hep-th/0012072</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">E. Dudas and J. Mourad,</subfield>
<subfield code="s">Nucl. Phys. B 622 (2002) 46</subfield>
<subfield code="r">hep-th/0110186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">T. Takayanagi and T. Uesugi,</subfield>
<subfield code="s">J. High Energy Phys. 0111 (2001) 036</subfield>
<subfield code="r">hep-th/0110200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="s">Phys. Lett. B 528 (2002) 156</subfield>
<subfield code="r">hep-th/0112199</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">C. Angelantonj, E. Dudas and J. Mourad,</subfield>
<subfield code="s">Nucl. Phys. B 637 (2002) 59</subfield>
<subfield code="r">hep-th/0205096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">M. Trapletti, in preparation.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">A. Adams, J. McGreevy and E. Silverstein, arXiv</subfield>
<subfield code="r">hep-th/0209226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 195 (1982) 481</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2355566CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5419166</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212138</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PUPT-2069</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Gubser, S S</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A universal result on central charges in the presence of double-trace deformations</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Princeton, NJ</subfield>
<subfield code="b">Princeton Univ. Joseph-Henry Lab. Phys.</subfield>
<subfield code="c">12 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">15 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study large N conformal field theories perturbed by relevant double-trace deformations. Using the auxiliary field trick, or Hubbard-Stratonovich transformation, we show that in the infrared the theory flows to another CFT. The generating functionals of planar correlators in the ultraviolet and infrared CFT's are shown to be related by a Legendre transform. Our main result is a universal expression for the difference of the scale anomalies between the ultraviolet and infrared fixed points, which is of order 1 in the large N expansion. Our computations are entirely field theoretic, and the results are shown to agree with predictions from AdS/CFT. We also remark that a certain two-point function can be computed for all energy scales on both sides of the duality, with full agreement between the two and no scheme dependence.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Klebanov, Igor R</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Gubser, Steven S.</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Klebanov, Igor R.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">23-36</subfield>
<subfield code="p">Nucl. Phys. B</subfield>
<subfield code="v">656</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212138.ps.gz</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212138.pdf</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">ssgubser@Princeton.EDU</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200250</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021213</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002355566CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2356302CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5423422</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212181</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">16 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s&gt;2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s&gt;2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">289-293</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">561</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212181.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212181.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">alberto.zaffaroni@mib.infn.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200251</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021217</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002356302CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Francia and A. Sagnotti,</subfield>
<subfield code="s">Phys. Lett. B 543 (2002) 303</subfield>
<subfield code="r">hep-th/0207002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">P. Haggi-Mani and B. Sundborg,</subfield>
<subfield code="s">J. High Energy Phys. 0004 (2000) 031</subfield>
<subfield code="r">hep-th/0002189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">B. Sundborg,</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 102 (2001) 113</subfield>
<subfield code="r">hep-th/0103247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0109 (2001) 036</subfield>
<subfield code="r">hep-th/0105001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A. Mikhailov,</subfield>
<subfield code="r">hep-th/0201019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Nucl. Phys. B 644 (2002) 303</subfield>
<subfield code="r">hep-th/0205131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 055</subfield>
<subfield code="r">hep-th/0205132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Engquist, E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 6175</subfield>
<subfield code="r">hep-th/0207101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="s">Int. J. Mod. Phys. D 5 (1996) 763</subfield>
<subfield code="r">hep-th/9611024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 323</subfield>
<subfield code="r">hep-th/9808004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Class. Quantum Gravity 17 (2000) 1383</subfield>
<subfield code="r">hep-th/9906167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Nucl. Phys. B 291 (1987) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 550 (2002) 213</subfield>
<subfield code="r">hep-th/0210114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="r">hep-th/9910096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">T. Leonhardt, A. Meziane and W. Ruhl,</subfield>
<subfield code="r">hep-th/0211092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein,</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer,</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">S. S. Gubser and I. Mitra,</subfield>
<subfield code="r">hep-th/0210093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">S. S. Gubser and I. R. Klebanov,</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Porrati,</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 058</subfield>
<subfield code="r">hep-th/0112166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">K. G. Wilson and J. B. Kogut,</subfield>
<subfield code="s">Phys. Rep. 12 (1974) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">W. Heidenreich,</subfield>
<subfield code="s">J. Math. Phys. 22 (1981) 1566</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="r">hep-th/0210123</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20041129103619.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2357700CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5435544</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212314</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">KUNS-1817</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">YITP-2002-73</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">TAUP-2719</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Fukuma, M</subfield>
<subfield code="u">Kyoto University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Holographic Renormalization Group</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Kyoto</subfield>
<subfield code="b">Kyoto Univ.</subfield>
<subfield code="c">26 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">90 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">The holographic renormalization group (RG) is reviewed in a self-contained manner. The holographic RG is based on the idea that the radial coordinate of a space-time with asymptotically AdS geometry can be identified with the RG flow parameter of the boundary field theory. After briefly discussing basic aspects of the AdS/CFT correspondence, we explain how the notion of the holographic RG comes out in the AdS/CFT correspondence. We formulate the holographic RG based on the Hamilton-Jacobi equations for bulk systems of gravity and scalar fields, as was introduced by de Boer, Verlinde and Verlinde. We then show that the equations can be solved with a derivative expansion by carefully extracting local counterterms from the generating functional of the boundary field theory. The calculational methods to obtain the Weyl anomaly and scaling dimensions are presented and applied to the RG flow from the N=4 SYM to an N=1 superconformal fixed point discovered by Leigh and Strassler. We further discuss a relation between the holographic RG and the noncritical string theory, and show that the structure of the holographic RG should persist beyond the supergravity approximation as a consequence of the renormalizability of the nonlinear sigma model action of noncritical strings. As a check, we investigate the holographic RG structure of higher-derivative gravity systems, and show that such systems can also be analyzed based on the Hamilton-Jacobi equations, and that the behaviour of bulk fields are determined solely by their boundary values. We also point out that higher-derivative gravity systems give rise to new multicritical points in the parameter space of the boundary field theories.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS INIS2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Matsuura, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Sakai, T</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Fukuma, Masafumi</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Matsuura, So</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Sakai, Tadakatsu</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">489-562</subfield>
<subfield code="p">Prog. Theor. Phys.</subfield>
<subfield code="v">109</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212314.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0212314.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">matsu@yukawa.kyoto-u.ac.jp</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200201</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20051024</subfield>
<subfield code="h">1938</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021230</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002357700CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">Y. Nambu, in Symmetries and quark models, ed. R. Chand (Gordon and Breach 1970), p 269; H. Nielsen, in the 15th International Conference on High Energy Physics (Kiev 1970); L. Susskind,</subfield>
<subfield code="s">Nuovo Cimento A 69 (1970) 457</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">G. ’t Hooft, "A Planar Diagram Theory For Strong Interactions,"</subfield>
<subfield code="s">Nucl. Phys. B 72 (1974) 461</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">K. G. Wilson, "Confinement of Quarks,"</subfield>
<subfield code="s">Phys. Rev. D 10 (1974) 2445</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">R. Gopakumar and C. Vafa, "On the gauge theory/geometry correspondence,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 3 (1999) 1415</subfield>
<subfield code="r">hep-th/9811131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">J. Maldacena, "The large N limit of superconformal field theories and supergravity,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 231</subfield>
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge Theory Correlators from Non-Critical String Theory,"</subfield>
<subfield code="s">Phys. Lett. B 428 (1998) 105</subfield>
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Witten, "Anti De Sitter Space And Holography,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 253</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, "Large N Field Theories, String Theory and Gravity,"</subfield>
<subfield code="r">hep-th/9905111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">, and references therein.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">G. T. Horowitz and A. Strominger, "Black Strings And P-Branes,"</subfield>
<subfield code="s">Nucl. Phys. B 360 (1991) 197</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">L. Susskind and E. Witten, "The holographic bound in anti-de Sitter space,"</subfield>
<subfield code="r">hep-th/9805114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">E. T. Akhmedov, "A remark on the AdS/CFT correspondence and the renormalization group flow,"</subfield>
<subfield code="s">Phys. Lett. B 442 (1998) 152</subfield>
<subfield code="r">hep-th/9806217</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">E. Alvarez and C. Gomez, "Geometric Holography, the Renormalization Group and the c-Theorem,"</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 441</subfield>
<subfield code="r">hep-th/9807226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, "Novel Local CFT and Exact Results on Perturbations of N=4 Super Yang Mills from AdS Dynamics,"</subfield>
<subfield code="s">J. High Energy Phys. 12 (1998) 022</subfield>
<subfield code="r">hep-th/9810126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">M. Porrati and A. Starinets, "RG Fixed Points in Supergravity Duals of 4-d Field Theory and Asymptotically AdS Spaces,"</subfield>
<subfield code="s">Phys. Lett. B 454 (1999) 77</subfield>
<subfield code="r">hep-th/9903085</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">V. Balasubramanian and P. Kraus, "Spacetime and the Holographic Renormalization Group,"</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 3605</subfield>
<subfield code="r">hep-th/9903190</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">D. Z. Freedman, S. S. Gubser, K. Pilch and N. P. Warner, "Renormalization group flows from holography supersymmetry and a c-theorem,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 3 (1999) 363</subfield>
<subfield code="r">hep-th/9904017</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, "The Supergravity Dual of N=1 Super Yang-Mills Theory,"</subfield>
<subfield code="s">Nucl. Phys. B 569 (2000) 451</subfield>
<subfield code="r">hep-th/9909047</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">K. Skenderis and P. K. Townsend, "Gravitational Stability and Renormalization-Group Flow,"</subfield>
<subfield code="s">Phys. Lett. B 468 (1999) 46</subfield>
<subfield code="r">hep-th/9909070</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, "Modeling the fifth dimension with scalars and gravity,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 046008</subfield>
<subfield code="r">hep-th/9909134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">V. Sahakian, "Holography, a covariant c-function and the geometry of the renormalization group,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 126011</subfield>
<subfield code="r">hep-th/9910099</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">E. Alvarez and C. Gomez, "A comment on the holographic renormalization group and the soft dilaton theorem,"</subfield>
<subfield code="s">Phys. Lett. B 476 (2000) 411</subfield>
<subfield code="r">hep-th/0001016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">S. Nojiri, S. D. Odintsov and S. Zerbini, "Quantum (in)stability of dilatonic AdS backgrounds and holographic renormalization group with gravity,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 064006</subfield>
<subfield code="r">hep-th/0001192</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">M. Li, "A note on relation between holographic RG equation and Polchinski’s RG equation,"</subfield>
<subfield code="s">Nucl. Phys. B 579 (2000) 525</subfield>
<subfield code="r">hep-th/0001193</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">V. Sahakian, "Comments on D branes and the renormalization group,"</subfield>
<subfield code="s">J. High Energy Phys. 0005 (2000) 011</subfield>
<subfield code="r">hep-th/0002126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">O. DeWolfe and D. Z. Freedman, "Notes on fluctuations and correlation functions in holographic renormalization group flows,"</subfield>
<subfield code="r">hep-th/0002226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">V. Balasubramanian, E. G. Gimon and D. Minic, "Consistency conditions for holographic duality,"</subfield>
<subfield code="s">J. High Energy Phys. 0005 (2000) 014</subfield>
<subfield code="r">hep-th/0003147</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">C. V. Johnson, K. J. Lovis and D. C. Page, "Probing some N = 1 AdS/CFT RG flows,"</subfield>
<subfield code="s">J. High Energy Phys. 0105 (2001) 036</subfield>
<subfield code="r">hep-th/0011166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">J. Erdmenger, "A field-theoretical interpretation of the holographic renormalization group,"</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 085012</subfield>
<subfield code="r">hep-th/0103219</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">S. Yamaguchi, "Holographic RG flow on the defect and g-theorem,"</subfield>
<subfield code="s">J. High Energy Phys. 0210 (2002) 002</subfield>
<subfield code="r">hep-th/0207171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">J. de Boer, E. Verlinde and H. Verlinde, "On the Holographic Renormalization Group,"</subfield>
<subfield code="r">hep-th/9912012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">M. Henningson and K. Skenderis, "The Holographic Weyl anomaly,"</subfield>
<subfield code="s">J. High Energy Phys. 07 (1998) 023</subfield>
<subfield code="r">hep-th/9806087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity,"</subfield>
<subfield code="s">Commun. Math. Phys. 208 (1999) 413</subfield>
<subfield code="r">hep-th/9902121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">S. de Haro, K. Skenderis and S. Solodukhin, "Holographic Reconstruction of Spacetime and Renormalization in the AdS/CFT Correspondence,"</subfield>
<subfield code="r">hep-th/0002230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">M. J. Duff, "Twenty Years of the Weyl Anomaly,"</subfield>
<subfield code="s">Class. Quantum Gravity 11 (1994) 1387</subfield>
<subfield code="r">hep-th/9308075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">M. Fukuma, S. Matsuura and T. Sakai, "A note on the Weyl anomaly in the holographic renormalization group,"</subfield>
<subfield code="s">Prog. Theor. Phys. 104 (2000) 1089</subfield>
<subfield code="r">hep-th/0007062</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">M. Fukuma and T. Sakai, "Comment on ambiguities in the holographic Weyl anomaly,"</subfield>
<subfield code="s">Mod. Phys. Lett. A 15 (2000) 1703</subfield>
<subfield code="r">hep-th/0007200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">M. Fukuma, S. Matsuura and T. Sakai, "Higher-Derivative Gravity and the AdS/CFT Correspondence,"</subfield>
<subfield code="s">Prog. Theor. Phys. 105 (2001) 1017</subfield>
<subfield code="r">hep-th/0103187</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">M. Fukuma and S. Matsuura, "Holographic renormalization group structure in higher-derivative gravity,"</subfield>
<subfield code="s">Prog. Theor. Phys. 107 (2002) 1085</subfield>
<subfield code="r">hep-th/0112037</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">A. Fayyazuddin and M. Spalinski, "Large N Superconformal Gauge Theories and Supergravity Orientifolds,"</subfield>
<subfield code="s">Nucl. Phys. B 535 (1998) 219</subfield>
<subfield code="r">hep-th/9805096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">O. Aharony, A. Fayyazuddin and J. Maldacena, "The Large N Limit of N = 1, 2 Field Theories from Three Branes in F-theory,"</subfield>
<subfield code="s">J. High Energy Phys. 9807 (1998) 013</subfield>
<subfield code="r">hep-th/9806159</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">M. Blau, K. S. Narain and E. Gava, "On Subleading Contributions to the AdS/CFT Trace Anomaly,"</subfield>
<subfield code="s">J. High Energy Phys. 9909 (1999) 018</subfield>
<subfield code="r">hep-th/9904179</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
<subfield code="m">O. Aharony, J. Pawelczyk, S. Theisen and S. Yankielowicz, "A Note on Anomalies in the AdS/CFT correspondence,"</subfield>
<subfield code="s">Phys. Rev. D 60 (1999) 066001</subfield>
<subfield code="r">hep-th/9901134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
<subfield code="m">S. Corley, "A Note on Holographic Ward Identities,"</subfield>
<subfield code="s">Phys. Lett. B 484 (2000) 141</subfield>
<subfield code="r">hep-th/0004030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">J. Kalkkinen and D. Martelli, "Holographic renormalization group with fermions and form fields,"</subfield>
<subfield code="s">Nucl. Phys. B 596 (2001) 415</subfield>
<subfield code="r">hep-th/0007234</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
<subfield code="m">S. Nojiri, S. D. Odintsov and S. Ogushi, "Scheme-dependence of holographic confor-mal anomaly in d5 gauged supergravity with non-trivial bulk potential,"</subfield>
<subfield code="s">Phys. Lett. B 494 (2000) 318</subfield>
<subfield code="r">hep-th/0009015</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[45]</subfield>
<subfield code="m">N. Hambli, "On the holographic RG-flow and the low-Energy, strong coupling, large N limit,"</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 024001</subfield>
<subfield code="r">hep-th/0010054</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[46]</subfield>
<subfield code="m">S. Nojiri, S. D. Odintsov and S. Ogushi, "Holographic renormalization group and conformal anomaly for AdS(9)/CFT(8) correspondence,"</subfield>
<subfield code="s">Phys. Lett. B 500 (2001) 199</subfield>
<subfield code="r">hep-th/0011182</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[47]</subfield>
<subfield code="m">J. de Boer, "The holographic renormalization group,"</subfield>
<subfield code="s">Fortschr. Phys. 49 (2001) 339</subfield>
<subfield code="r">hep-th/0101026</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[48]</subfield>
<subfield code="m">J. Kalkkinen, D. Martelli and W. Muck, "Holographic renormalisation and anoma-lies,"</subfield>
<subfield code="s">J. High Energy Phys. 0104 (2001) 036</subfield>
<subfield code="r">hep-th/0103111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[49]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "Conformal anomaly from dS/CFT correspondence,"</subfield>
<subfield code="s">Phys. Lett. B 519 (2001) 145</subfield>
<subfield code="r">hep-th/0106191</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[50]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "Asymptotically de Sitter dilatonic space-time, holo-graphic RG flow and conformal anomaly from (dilatonic) dS/CFT correspondence,"</subfield>
<subfield code="s">Phys. Lett. B 531 (2002) 143</subfield>
<subfield code="r">hep-th/0201210</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[51]</subfield>
<subfield code="m">R. G. Leigh and M. J. Strassler, "Exactly marginal operators and duality in four-dimensional N=1 supersymmetric gauge theory,"</subfield>
<subfield code="s">Nucl. Phys. B 447 (1995) 95</subfield>
<subfield code="r">hep-th/9503121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[52]</subfield>
<subfield code="m">S. Ferrara, C. Fronsdal and A. Zaffaroni, "On N = 8 supergravity on AdS(5) and N = 4 superconformal Yang-Mills theory,"</subfield>
<subfield code="s">Nucl. Phys. B 532 (1998) 153</subfield>
<subfield code="r">hep-th/9802203</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[53]</subfield>
<subfield code="m">L. Andrianopoli and S. Ferrara, "K-K excitations on AdS(5) x S(5) Ann. Sci. N = 4 *pri-mary* superfields,"</subfield>
<subfield code="s">Phys. Lett. B 430 (1998) 248</subfield>
<subfield code="r">hep-th/9803171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[54]</subfield>
<subfield code="m">S. Ferrara, M. A. Lledo and A. Zaffaroni, "Born-Infeld corrections to D3 brane action in AdS(5) x S(5) and N = 4, d = 4 primary superfields,"</subfield>
<subfield code="s">Phys. Rev. D 58 (1998) 105029</subfield>
<subfield code="r">hep-th/9805082</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[55]</subfield>
<subfield code="m">M. F. Sohnius, "Introducing Supersymmetry,"</subfield>
<subfield code="s">Phys. Rep. 128 (1985) 39</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[56]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace operators and non-local string theories,"</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[57]</subfield>
<subfield code="m">E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspon-dence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[58]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary condi-tions and spacetime singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[59]</subfield>
<subfield code="m">S. Minwalla, "Restrictions imposed by superconformal invariance on quantum field theories,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 2 (1998) 781</subfield>
<subfield code="r">hep-th/9712074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[60]</subfield>
<subfield code="m">M. Gunaydin, D. Minic, and M. Zagermann, "Novel supermultiplets of SU(2, 2|4) and the AdS5 / CFT4 duality,"</subfield>
<subfield code="r">hep-th/9810226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[61]</subfield>
<subfield code="m">L. Andrianopoli and S. Ferrara, "K-K Excitations on AdS5 ×S5 Ann. Sci. N = 4 ‘Primary’ Superfields,"</subfield>
<subfield code="s">Phys. Lett. B 430 (1998) 248</subfield>
<subfield code="r">hep-th/9803171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[62]</subfield>
<subfield code="m">L. Andrianopoli and S. Ferrara, "Nonchiral’ Primary Superfields in the AdSd+1 / CFTd Correspondence,"</subfield>
<subfield code="s">Lett. Math. Phys. 46 (1998) 265</subfield>
<subfield code="r">hep-th/9807150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[63]</subfield>
<subfield code="m">S. Ferrara and A. Zaffaroni, "Bulk gauge fields in AdS supergravity and supersingle-tons,"</subfield>
<subfield code="r">hep-th/9807090</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[64]</subfield>
<subfield code="m">M. Gunaydin, D. Minic, and M. Zagermann, "4-D doubleton conformal theories, CPT and II B string on AdS5 × S5,"</subfield>
<subfield code="s">Nucl. Phys. B 534 (1998) 96</subfield>
<subfield code="r">hep-th/9806042</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[65]</subfield>
<subfield code="m">L. Andrianopoli and S. Ferrara, "On Short and Long SU(2, 2/4) Multiplets in the AdS/CFT Correspondence,"</subfield>
<subfield code="r">hep-th/9812067</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[66]</subfield>
<subfield code="m">P. S. Howe, K. S. Stelle and P. K. Townsend, "Supercurrents,"</subfield>
<subfield code="s">Nucl. Phys. B 192 (1981) 332</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[67]</subfield>
<subfield code="m">P. S. Howe and P. C. West, "Operator product expansions in four-dimensional super-conformal field theories,"</subfield>
<subfield code="s">Phys. Lett. B 389 (1996) 273</subfield>
<subfield code="r">hep-th/9607060</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[67]</subfield>
<subfield code="m">"Is N = 4 Yang-Mills theory soluble?,"</subfield>
<subfield code="r">hep-th/9611074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[67]</subfield>
<subfield code="m">"Superconformal invariants and extended supersymmetry,"</subfield>
<subfield code="s">Phys. Lett. B 400 (1997) 307</subfield>
<subfield code="r">hep-th/9611075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[68]</subfield>
<subfield code="m">H. J. Kim, L. J. Romans and P. van Nieuwenhuizen, "The Mass Spectrum Of Chiral N=2 D = 10 Supergravity On S**5,"</subfield>
<subfield code="s">Phys. Rev. D 32 (1985) 389</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[69]</subfield>
<subfield code="m">M. Günaydin and N. Marcus, "The Spectrum Of The S**5 Compactification Of The Chiral N=2, D=10 Supergravity And The Unitary Supermultiplets Of U(2, 2/4),"</subfield>
<subfield code="s">Class. Quantum Gravity 2 (1985) L11</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[70]</subfield>
<subfield code="m">V. A. Novikov, M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, "Exact Gell-Mann-Low Function Of Supersymmetric Yang-Mills Theories From Instanton Cal-culus,"</subfield>
<subfield code="s">Nucl. Phys. B 229 (1983) 381</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[71]</subfield>
<subfield code="m">D. Anselmi, D. Z. Freedman, M. T. Grisaru and Astron. Astrophys. Johansen, "Nonperturbative formulas for central functions of supersymmetric gauge theories,"</subfield>
<subfield code="s">Nucl. Phys. B 526 (1998) 543</subfield>
<subfield code="r">hep-th/9708042</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[72]</subfield>
<subfield code="m">A. Khavaev, K. Pilch and N. P. Warner, "New vacua of gauged N = 8 supergravity in five dimensions,"</subfield>
<subfield code="s">Phys. Lett. B 487 (2000) 14</subfield>
<subfield code="r">hep-th/9812035</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[73]</subfield>
<subfield code="m">K. Pilch and N. P. Warner, "N = 1 supersymmetric renormalization group flows from ICFA Instrum. Bull. supergravity,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 4 (2002) 627</subfield>
<subfield code="r">hep-th/0006066</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[74]</subfield>
<subfield code="m">D. Berenstein, J. M. Maldacena and H. Nastase, "Strings in flat space and pp waves from N = 4 super Yang Mills,"</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 013</subfield>
<subfield code="r">hep-th/0202021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[75]</subfield>
<subfield code="m">M. Blau, J. Figueroa-O’Farrill, C. Hull and G. Papadopoulos, "A new maximally supersymmetric background of ICFA Instrum. Bull. superstring theory,"</subfield>
<subfield code="s">J. High Energy Phys. 0201 (2002) 047</subfield>
<subfield code="r">hep-th/0110242</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[75]</subfield>
<subfield code="m">M. Blau, J. Figueroa-O’Farrill, C. Hull and G. Papadopoulos, "Pen-rose limits and maximal supersymmetry,"</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) L87</subfield>
<subfield code="r">hep-th/0201081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[75]</subfield>
<subfield code="m">M. Blau, J. Figueroa-O’Farrill and G. Papadopoulos, "Penrose lim-its, supergravity and brane dynamics,"</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 4753</subfield>
<subfield code="r">hep-th/0202111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[76]</subfield>
<subfield code="m">R. R. Metsaev, "Type ICFA Instrum. Bull. Green-Schwarz superstring in plane wave Ramond-Ramond background,"</subfield>
<subfield code="s">Nucl. Phys. B 625 (2002) 70</subfield>
<subfield code="r">hep-th/0112044</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[77]</subfield>
<subfield code="m">R. Corrado, N. Halmagyi, K. D. Kennaway and N. P. Warner, "Penrose limits of RG fixed points and pp-waves with background fluxes,"</subfield>
<subfield code="r">hep-th/0205314</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[77]</subfield>
<subfield code="m">E. G. Gi-mon, L. A. Pando Zayas and J. Sonnenschein, "Penrose limits and RG flows,"</subfield>
<subfield code="r">hep-th/0206033</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[77]</subfield>
<subfield code="m">D. Brecher, C. V. Johnson, K. J. Lovis and R. C. Myers, "Penrose limits, deformed pp-waves and the string duals of N = 1 large N gauge theory,"</subfield>
<subfield code="s">J. High Energy Phys. 0210 (2002) 008</subfield>
<subfield code="r">hep-th/0206045</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[78]</subfield>
<subfield code="m">Y. Oz and T. Sakai, "Penrose limit and six-dimensional gauge theories,"</subfield>
<subfield code="s">Phys. Lett. B 544 (2002) 321</subfield>
<subfield code="r">hep-th/0207223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[78]</subfield>
<subfield code="m">; etc.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[79]</subfield>
<subfield code="m">A. B. Zamolodchikov, "Irreversibility’ Of The Flux Of The Renormalization Group In A 2-D Field Theory,"</subfield>
<subfield code="s">JETP Lett. 43 (1986) 730</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[79]</subfield>
<subfield code="m">[</subfield>
<subfield code="s">Pis'ma Zh. Eksp. Teor. Fiz. 43 (1986) 565</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[79]</subfield>
<subfield code="m">].</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[80]</subfield>
<subfield code="m">D. Anselmi, "Anomalies, unitarity, and quantum irreversibility,"</subfield>
<subfield code="s">Ann. Phys. 276 (1999) 361</subfield>
<subfield code="r">hep-th/9903059</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[81]</subfield>
<subfield code="m">G. W. Gibbons and S. W. Hawking, "Action Integrals and Partition Functions in Quantum Gravity,"</subfield>
<subfield code="s">Phys. Rev. D 15 (1977) 2752</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[82]</subfield>
<subfield code="m">C. R. Graham and J. M. Lee, "Einstein Metrics with Prescribed Conformal Infinity on the Ball,"</subfield>
<subfield code="s">Adv. Math. 87 (1991) 186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[83]</subfield>
<subfield code="m">M. Green, J. Schwarz and E. Witten, "Superstring Theory," Cambridge University Press, New York, 1987.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[84]</subfield>
<subfield code="m">S. Nojiri and S. Odintsov, "Conformal Anomaly for Dilaton Coupled Theories from AdS/CFT Correspondence,"</subfield>
<subfield code="s">Phys. Lett. B 444 (1998) 92</subfield>
<subfield code="r">hep-th/9810008</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[84]</subfield>
<subfield code="m">S. Nojiri, S. Odintsov and S. Ogushi, "Conformal Anomaly from d5 Gauged Super-gravity and c-function Away from Conformity,"</subfield>
<subfield code="r">hep-th/9912191</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[84]</subfield>
<subfield code="m">"Finite Action in d5 Gauged Supergravity and Dilatonic Conformal Anomaly for Dual Quantum Field Theory,"</subfield>
<subfield code="r">hep-th/0001122</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[85]</subfield>
<subfield code="m">A. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 103 (1981) 207</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[85]</subfield>
<subfield code="m">211; V. Knizhnik, A. Polyakov and A. Zamolodchikov,</subfield>
<subfield code="s">Mod. Phys. Lett. A 3 (1988) 819</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[86]</subfield>
<subfield code="m">F. David,</subfield>
<subfield code="s">Mod. Phys. Lett. A 3 (1988) 1651</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[86]</subfield>
<subfield code="m">J. Distler and H. Kawai,</subfield>
<subfield code="s">Nucl. Phys. B 321 (1989) 509</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[87]</subfield>
<subfield code="m">N. Seiberg, "Notes on quantum Liouville theory and quantum gravity,"</subfield>
<subfield code="s">Prog. Theor. Phys. Suppl. 102 (1990) 319</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[88]</subfield>
<subfield code="m">R. Myer,</subfield>
<subfield code="s">Phys. Lett. B 199 (1987) 371</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[89]</subfield>
<subfield code="m">A. Dhar and S. Wadia, "Noncritical strings, RG flows and holography,"</subfield>
<subfield code="s">Nucl. Phys. B 590 (2000) 261</subfield>
<subfield code="r">hep-th/0006043</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[90]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "Brane World Inflation Induced by Quantum Effects,"</subfield>
<subfield code="s">Phys. Lett. B 484 (2000) 119</subfield>
<subfield code="r">hep-th/0004097</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[91]</subfield>
<subfield code="m">R. C. Myers, "Higher-derivative gravity, surface terms, and string theory,"</subfield>
<subfield code="s">Phys. Rev. D 36 (1987) 392</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[92]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "Brane-World Cosmology in Higher Derivative Gravity or Warped Compactification in the Next-to-leading Order of AdS/CFT Correspon-dence,"</subfield>
<subfield code="s">J. High Energy Phys. 0007 (2000) 049</subfield>
<subfield code="r">hep-th/0006232</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[92]</subfield>
<subfield code="m">S. Nojiri, S. D. Odintsov and S. Ogushi, "Dynamical Branes from Gravitational Dual of N = 2 Sp(N) Superconformal Field Theory,"</subfield>
<subfield code="r">hep-th/0010004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[92]</subfield>
<subfield code="m">"Holographic Europhys. Newstropy and brane FRW-dynamics from AdS black hole in d5 higher derivative gravity,"</subfield>
<subfield code="r">hep-th/0105117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[93]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "On the conformal anomaly from higher derivative grav-ity in AdS/CFT correspondence,"</subfield>
<subfield code="s">Int. J. Mod. Phys. A 15 (2000) 413</subfield>
<subfield code="r">hep-th/9903033</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[93]</subfield>
<subfield code="m">S. Nojiri and S. D. Odintsov, "Finite gravitational action for higher derivative and stringy gravity,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 064018</subfield>
<subfield code="r">hep-th/9911152</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[94]</subfield>
<subfield code="m">J. Polchinski, "String Theory," Vol. II, Cambridge University Press, 1998.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[95]</subfield>
<subfield code="m">S. Kamefuchi, L. O’Raifeartaigh and A. Salam, "Change Of Variables And Equiva-lence Theorems In Quantum Field Theories,"</subfield>
<subfield code="s">Nucl. Phys. 28 (1961) 529</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[96]</subfield>
<subfield code="m">D. J. Gross and E. Witten, "Superstring Modifications Of Einstein’s Equations,"</subfield>
<subfield code="s">Nucl. Phys. B 277 (1986) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[97]</subfield>
<subfield code="m">J. I. Latorre and T. R. Morris, "Exact scheme independence,"</subfield>
<subfield code="s">J. High Energy Phys. 0011 (2000) 004</subfield>
<subfield code="r">hep-th/0008123</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[98]</subfield>
<subfield code="m">M. Fukuma and S. Matsuura, "Comment on field redefinitions in the AdS/CFT cor-respondence,"</subfield>
<subfield code="s">Prog. Theor. Phys. 108 (2002) 375</subfield>
<subfield code="r">hep-th/0204257</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[99]</subfield>
<subfield code="m">I. R. Klebanov and A. M. Polyakov, "AdS dual of the critical O(N) vector model,"</subfield>
<subfield code="s">Phys. Lett. B 550 (2002) 213</subfield>
<subfield code="r">hep-th/0210114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[100]</subfield>
<subfield code="m">A. M. Polyakov, "Gauge Fields Ann. Sci. Rings Of Glue,"</subfield>
<subfield code="s">Nucl. Phys. B 164 (1980) 171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[101]</subfield>
<subfield code="m">Y. Makeenko and Astron. Astrophys. Migdal, "Quantum Chromodynamics Ann. Sci. Dynamics Of Loops,"</subfield>
<subfield code="s">Nucl. Phys. B 188 (1981) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[102]</subfield>
<subfield code="m">A. M. Polyakov, "Confining strings,"</subfield>
<subfield code="s">Nucl. Phys. B 486 (1997) 23</subfield>
<subfield code="r">hep-th/9607049</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[103]</subfield>
<subfield code="m">A. M. Polyakov, "String theory and quark confinement,"</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 68 (1998) 1</subfield>
<subfield code="r">hep-th/9711002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[104]</subfield>
<subfield code="m">A. M. Polyakov, "The wall of the cave,"</subfield>
<subfield code="s">Int. J. Mod. Phys. A 14 (1999) 645</subfield>
<subfield code="r">hep-th/9809057</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[105]</subfield>
<subfield code="m">A. M. Polyakov and V. S. Rychkov, "Gauge fields - strings duality and the loop equation,"</subfield>
<subfield code="s">Nucl. Phys. B 581 (2000) 116</subfield>
<subfield code="r">hep-th/0002106</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[106]</subfield>
<subfield code="m">A. M. Polyakov and V. S. Rychkov, "Loop dynamics and AdS/CFT correspondence,"</subfield>
<subfield code="s">Nucl. Phys. B 594 (2001) 272</subfield>
<subfield code="r">hep-th/0005173</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[107]</subfield>
<subfield code="m">A. M. Polyakov, "String theory Ann. Sci. a universal language,"</subfield>
<subfield code="s">Phys. At. Nucl. 64 (2001) 540</subfield>
<subfield code="r">hep-th/0006132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[108]</subfield>
<subfield code="m">A. M. Polyakov, "Gauge fields and space-time," Int. J. Mod. Phys. A 17S : 1 (2002) 119,</subfield>
<subfield code="r">hep-th/0110196</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[109]</subfield>
<subfield code="m">J. B. Kogut and L. Susskind, "Hamiltonian Formulation Of Wilson’s Lattice Gauge Theories,"</subfield>
<subfield code="s">Phys. Rev. D 11 (1975) 395</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[110]</subfield>
<subfield code="m">A. Santambrogio and D. Zanon, "Exact anomalous dimensions of N = 4 Yang-Mills operators with large R charge,"</subfield>
<subfield code="s">Phys. Lett. B 545 (2002) 425</subfield>
<subfield code="r">hep-th/0206079</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[111]</subfield>
<subfield code="m">Y. Oz and T. Sakai, "Exact anomalous dimensions for N = 2 ADE SCFTs,"</subfield>
<subfield code="r">hep-th/0208078</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[112]</subfield>
<subfield code="m">S. R. Das, C. Gomez and S. J. Rey, "Penrose limit, spontaneous Symmetry break-ing and holography in pp-wave background,"</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 046002</subfield>
<subfield code="r">hep-th/0203164</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[113]</subfield>
<subfield code="m">R. G. Leigh, K. Okuyama and M. Rozali, "PP-waves and holography,"</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 046004</subfield>
<subfield code="r">hep-th/0204026</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[114]</subfield>
<subfield code="m">D. Berenstein and H. Nastase, "On lightcone string field theory from super Yang-Mills and holography,"</subfield>
<subfield code="r">hep-th/0205048</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2373792CERCER</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0304229</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Barvinsky, A O</subfield>
<subfield code="u">Lebedev Physics Institute</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Nonlocal action for long-distance modifications of gravity theory</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">28 Apr 2003</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">9 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We construct the covariant nonlocal action for recently suggested long-distance modifications of gravity theory motivated by the cosmological constant and cosmological acceleration problems. This construction is based on the special nonlocal form of the Einstein-Hilbert action explicitly revealing the fact that this action within the covariant curvature expansion begins with curvature-squared terms.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">109-116</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">572</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0304229.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0304229.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">barvin@lpi.ru</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200318</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060826</subfield>
<subfield code="h">0015</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20030429</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002373792CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">N.Arkani-Hamed, S.Dimopoulos, G.Dvali and G.Gabadadze, Nonlocal modifica-tion of gravity and the cosmological constant problem,</subfield>
<subfield code="r">hep-th/0209227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">S.Weinberg,</subfield>
<subfield code="s">Rev. Mod. Phys. 61 (1989) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">M.K.Parikh and S.N.Solodukhin,</subfield>
<subfield code="s">Phys. Lett. B 503 (2001) 384</subfield>
<subfield code="r">hep-th/0012231</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">A.O.Barvinsky and G.A.Vilkovisky,</subfield>
<subfield code="s">Nucl. Phys. B 282 (1987) 163</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">A.O.Barvinsky and G.A.Vilkovisky,</subfield>
<subfield code="s">Nucl. Phys. B 333 (1990) 471</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A.O.Barvinsky, Yu.V.Gusev, G.A.Vilkovisky and V.V.Zhytnikov,</subfield>
<subfield code="s">J. Math. Phys. 35 (1994) 3525</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="s">J. Math. Phys. 35 (1994) 3543</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">A.Adams, J.McGreevy and E.Silverstein,</subfield>
<subfield code="r">hep-th/0209226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">R.Gregory, V.A.Rubakov and S.M.Sibiryakov,</subfield>
<subfield code="s">Phys. Rev. Lett. 84 (2000) 5928</subfield>
<subfield code="r">hep-th/0002072</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">G.Dvali, G.Gabadadze and M.Porrati, Phys. Rev. Lett. B : 485 (2000) 208,</subfield>
<subfield code="r">hep-th/0005016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">S.L.Dubovsky and V.A.Rubakov,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 104014</subfield>
<subfield code="r">hep-th/0212222</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">A.O.Barvinsky,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 062003</subfield>
<subfield code="r">hep-th/0107244</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">A.O.Barvinsky, A.Yu.Kamenshchik, A.Rathke and C.Kiefer,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 023513</subfield>
<subfield code="r">hep-th/0206188</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">E.S. Fradkin and A.A. Tseytlin,</subfield>
<subfield code="s">Phys. Lett. B 104 (1981) 377</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">A.O.Barvinsky and I.G.Avramidi,</subfield>
<subfield code="s">Phys. Lett. B 159 (1985) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">A.O.Barvinsky, A.Yu.Kamenshchik and I.P.Karmazin,</subfield>
<subfield code="s">Phys. Rev. D 48 (1993) 3677</subfield>
<subfield code="r">gr-qc/9302007</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">E.V.Gorbar and I.L.Shapiro,</subfield>
<subfield code="s">J. High Energy Phys. 0302 (2003) 021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">M.Porrati,</subfield>
<subfield code="s">Phys. Lett. B 534 (2002) 209</subfield>
<subfield code="r">hep-th/0203014</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">H. van Dam and M.J.Veltman, Nucl. Phys. B 22 (1970) 397; V.I.Zakharov,</subfield>
<subfield code="s">JETP Lett. 12 (1970) 312</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">M.Porrati,</subfield>
<subfield code="s">Phys. Lett. B 498 (2001) 92</subfield>
<subfield code="r">hep-th/0011152</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">A.O.Barvinsky, Yu.V.Gusev, V.F.Mukhanov and D.V.Nesterov, Nonperturbative late time asymptotics for heat kernel in gravity theory,</subfield>
<subfield code="r">hep-th/0306052</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">A.Strominger,</subfield>
<subfield code="s">J. High Energy Phys. 0110 (2001) 034</subfield>
<subfield code="r">hep-th/0106113</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="s">J. High Energy Phys. 0111 (2001) 049</subfield>
<subfield code="r">hep-th/0110087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">J.Schwinger,</subfield>
<subfield code="s">J. Math. Phys. 2 (1961) 407</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">J.L.Buchbinder, E.S.Fradkin and D.M.Gitman,</subfield>
<subfield code="s">Fortschr. Phys. 29 (1981) 187</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">R.D.Jordan,</subfield>
<subfield code="s">Phys. Rev. D 33 (1986) 444</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">C.Deffayet, G.Dvali and G.Gabadadze,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 044023</subfield>
<subfield code="r">astro-ph/0105068</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">G.Dvali, A.Gruzinov and M.Zaldarriaga, The accelerated Universe and the Moon,</subfield>
<subfield code="r">hep-ph/0212069</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">M.E.Soussa and R.P.Woodard, A nonlocal metric formulation of MOND,</subfield>
<subfield code="r">astro-ph/0302030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">M.Milgrom,</subfield>
<subfield code="s">Astrophys. J. 270 (1983) 365</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="s">Astrophys. J. 270 (1983) 371</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">J.Bekenstein and M.Milgrom,</subfield>
<subfield code="s">Astrophys. J. 286 (1984) 7</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">L.R.Abramo and R.P.Woodard,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 063516</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">V.K.Onemli and R.P.Woodard,</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 4607</subfield>
<subfield code="r">gr-qc/0204065</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0307041</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Witten, Edward</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">SL(2,Z) Action On Three-Dimensional Conformal Field Theories With Abelian Symmetry</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">3 Jul 2003</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">24 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">On the space of three-dimensional conformal field theories with U(1) symmetry and a chosen coupling to a background gauge field, there is a natural action of the group SL(2,Z). The generator S of SL(2,Z) acts by letting the background gauge field become dynamical, an operation considered recently by Kapustin and Strassler. The other generator T acts by shifting the Chern-Simons coupling of the background field. This SL(2,Z) action in three dimensions is related by the AdS/CFT correspondence to SL(2,Z) duality of low energy U(1) gauge fields in four dimensions.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Witten, Edward</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0307041.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0307041.ps.gz</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">witten@ias.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200327</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20061123</subfield>
<subfield code="h">0917</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20030704</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002385282CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">C. Burgess and B. P. Dolan, "Particle Vortex Duality And The Modular Group: Applications To The Quantum Hall Effect And Other 2-D Systems,"</subfield>
<subfield code="r">hep-th/0010246</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">A. Shapere and F. Wilczek, "Self-Dual Models With Theta Terms,"</subfield>
<subfield code="s">Nucl. Phys. B 320 (1989) 669</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">S. J. Rey and A. Zee, "Self-Duality Of Three-Dimensional Chern-Simons Theory,"</subfield>
<subfield code="s">Nucl. Phys. B 352 (1991) 897</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">C. A. Lutken and G. G. Ross, "Duality In The Quantum Hall System,"</subfield>
<subfield code="s">Phys. Rev. B 45 (1992) 11837</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="s">Phys. Rev. B 48 (1993) 2500</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">D.-H. Lee, S. Kivelson, and S.-C. Zhang,</subfield>
<subfield code="s">Phys. Rev. Lett. 68 (1992) 2386</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="s">Phys. Rev. B 46 (1992) 2223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">C. A. Lutken, "Geometry Of Renormalization Group Flows Constrained By Discrete Global Symmetries,"</subfield>
<subfield code="s">Nucl. Phys. B 396 (1993) 670</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">B. P. Dolan, "Duality And The Modular Group In The Quantum Hall Effect,"</subfield>
<subfield code="s">J. Phys. A 32 (1999) L243</subfield>
<subfield code="r">cond-mat/9805171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">C. P. Burgess, R. Dib, and B. P. Dolan,</subfield>
<subfield code="s">Phys. Rev. B 62 (2000) 15359</subfield>
<subfield code="r">cond-mat/9911476</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">A. Zee, "Quantum Hall Fluids,"</subfield>
<subfield code="r">cond-mat/9501022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">A. Kapustin and M. Strassler, "On Mirror Symmetry In Three Dimensional Abelian Gauge Theories,"</subfield>
<subfield code="r">hep-th/9902033</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">K. Intriligator and N. Seiberg, "Mirror Symmetry In Three-Dimensional Gauge Theories,"</subfield>
<subfield code="s">Phys. Lett. B 387 (1996) 512</subfield>
<subfield code="r">hep-th/9607207</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">J. Cardy and E. Rabinovici, "Phase Structure Of Z(p) Models In The Presence Of A Theta Parameter,"</subfield>
<subfield code="s">Nucl. Phys. B 205 (1982) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">J. Cardy, "Duality And The Theta Parameter In Abelian Lattice Models,"</subfield>
<subfield code="s">Nucl. Phys. B 205 (1982) 17</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">C. Vafa and E. Witten, "A Strong Coupling Test Of S-Duality,"</subfield>
<subfield code="s">Nucl. Phys. B 431 (1994) 3</subfield>
<subfield code="r">hep-th/9408074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">E. Witten, "On S Duality In Abelian Gauge Theory," Selecta Mathematica 1 (1995) 383,</subfield>
<subfield code="r">hep-th/9505186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">S. Deser, R. Jackiw, and S. Templeton, "Topologically Massive Gauge Theories,"</subfield>
<subfield code="s">Ann. Phys. 140 (1982) 372</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">E. Guadagnini, M. Martinelli, and M. Mintchev, "Scale-Invariant Sigma Models On Homogeneous Spaces,"</subfield>
<subfield code="s">Phys. Lett. B 194 (1987) 69</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">K. Bardakci, E. Rabinovici, and B. Saring,</subfield>
<subfield code="s">Nucl. Phys. B 299 (1988) 157</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">D. Karabali, Q.-H. Park, H. J. Schnitzer, and Z. Yang,</subfield>
<subfield code="s">Phys. Lett. B 216 (1989) 307</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">H. J. Schnitzer,</subfield>
<subfield code="s">Nucl. Phys. B 324 (1989) 412</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">D. Karabali and H. J. Schnitzer,</subfield>
<subfield code="s">Nucl. Phys. B 329 (1990) 649</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">T. Appelquist and R. D. Pisarski, "Hot Yang-Mills Theories And Three-Dimensional QCD,"</subfield>
<subfield code="s">Phys. Rev. D 23 (1981) 2305</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">R. Jackiw and S. Templeton, "How Superrenormalizable Interactions Cure Their Infrared Divergences,"</subfield>
<subfield code="s">Phys. Rev. D 23 (1981) 2291</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">S. Templeton, "Summation Of Dominant Coupling Constant Logarithms In QED In Three Dimensions,"</subfield>
<subfield code="s">Phys. Lett. B 103 (1981) 134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">"Summation Of Coupling Constant Logarithms In QED In Three Dimensions,"</subfield>
<subfield code="s">Phys. Rev. D 24 (1981) 3134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">T. Appelquist and U. W. Heinz, "Three-Dimensional O(N) Theories At Large Distances,"</subfield>
<subfield code="s">Phys. Rev. D 24 (1981) 2169</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">D. Anselmi, "Large N Expansion, Conformal Field Theory, And Renormalization Group Flows In Three Dimensions,"</subfield>
<subfield code="s">J. High Energy Phys. 0006 (2000) 042</subfield>
<subfield code="r">hep-th/0005261</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">V. Borokhov, A. Kapustin, and X. Wu, "Topological Disorder Operators In Three-Dimensional Conformal Field Theory,"</subfield>
<subfield code="r">hep-th/0206054</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">V. Borokhov, A. Kapustin, and X. Wu, "Monopole Operators And Mirror Symmetry In Three Dimensions,"</subfield>
<subfield code="s">J. High Energy Phys. 0212 (2002) 044</subfield>
<subfield code="r">hep-th/0207074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">P. Breitenlohner and D. Z. Freedman, "Stability In Gauged Extended Supergravity,"</subfield>
<subfield code="s">Ann. Phys. 144 (1982) 249</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten, "AdS/CFT Correspondence And Symmetry Breaking,"</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">R. Jackiw, "Topological Investigations Of Quantized Gauge Theories," in Current Algebra And Anomalies, ed. S. B. Treiman et al. (World Scientific, 1985).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">A. Schwarz, "The Partition Function Of A Degenerate Functional,"</subfield>
<subfield code="s">Commun. Math. Phys. 67 (1979) 1</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">M. Rocek and E. Verlinde, "Duality, Quotients, and Currents,"</subfield>
<subfield code="s">Nucl. Phys. B 373 (1992) 630</subfield>
<subfield code="r">hep-th/9110053</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">S. Elitzur, G. Moore, A. Schwimmer, and N. Seiberg, "Remarks On The Canonical Quantization Of The Chern-Simons-Witten Theory,"</subfield>
<subfield code="s">Nucl. Phys. B 326 (1989) 108</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">E. Witten, "Quantum Field Theory And The Jones Polynomial,"</subfield>
<subfield code="s">Commun. Math. Phys. 121 (1989) 351</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">A. N. Redlich, "Parity Violation And Gauge Non-Invariance Of The Effective Gauge Field Action In Three Dimensions,"</subfield>
<subfield code="s">Phys. Rev. D 29 (1984) 2366</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">E. Witten, "Multi-Trace Operators, Boundary Conditions, and AdS/CFT Correspondence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">M. Berkooz, A. Sever, and A. Shomer, "Double-trace Deformations, Boundary Conditions, and Spacetime Singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 05 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">P. Minces, "Multi-trace Operators And The Generalized AdS/CFT Prescription,"</subfield>
<subfield code="r">hep-th/0201172</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">O. Aharony, M. Berkooz, and E. Silverstein, "Multiple Trace Operators And Non-Local String Theories,"</subfield>
<subfield code="s">J. High Energy Phys. 08 (2001) 006</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">V. K. Dobrev, "Intertwining Operator Realization Of The AdS/CFT Correspondence,"</subfield>
<subfield code="s">Nucl. Phys. B 553 (1999) 559</subfield>
<subfield code="r">hep-th/9812194</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">I. R. Klebanov, "Touching Random Surfaces And Liouville Theory,"</subfield>
<subfield code="s">Phys. Rev. D 51 (1995) 1836</subfield>
<subfield code="r">hep-th/9407167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">I. R. Klebanov and A. Hashimoto, "Non-perturbative Solution Of Matrix Models Modified By Trace Squared Terms,"</subfield>
<subfield code="s">Nucl. Phys. B 434 (1995) 264</subfield>
<subfield code="r">hep-th/9409064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">S. Gubser and I. Mitra, "Double-trace Operators And One-Loop Vacuum Energy In AdS/CFT,"</subfield>
<subfield code="r">hep-th/0210093</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 064018</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
<subfield code="m">S. Gubser and I. R. Klebanov, "A Universal Result On Central Charges In The Presence Of Double-Trace Deformations,"</subfield>
<subfield code="s">Nucl. Phys. B 656 (2003) 23</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20070403111954.0</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0402130</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">NYU-TH-2004-02-17</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Dvali, G</subfield>
<subfield code="u">New York University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Filtering Gravity: Modification at Large Distances?</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2=" ">
<subfield code="a">Infrared Modification of Gravity</subfield>
<subfield code="i">Preprint title</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2005</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">New York, NY</subfield>
<subfield code="b">New York Univ. Dept. Phys.</subfield>
<subfield code="c">17 Feb 2004</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">In this lecture I address the issue of possible large-distance modification of gravity and its observational consequences. Although for illustrative purposes we focus on a particular simple generally covariant example, our conclusions are rather general and apply to a large class of theories in which, already at the Newtonian level, gravity changes regime at a certain very large crossover distance $r_c$. In such theories the cosmological evolution gets dramatically modified at the crossover scale, usually exhibiting a "self-accelerated" expansion, which can be differentiated from more conventional "dark energy" scenarios by precision cosmology. However, unlike the latter scenarios, theories of modified gravity are extremely constrained (and potentially testable) by precision gravitational measurements at much shorter scales. Despite the presence of extra polarizations of the graviton, the theory is compatible with observations, since the naive perturbative expansion in Newton's constant breaks down at a certain intermediate scale. This happens because the extra polarizations have couplings singular in $1/r_c$. However, the correctly resummed non-linear solutions are regular and exhibit a continuous Einsteinian limit. Contrary to the naive expectation, explicit examples indicate that the resummed solutions remain valid after the ultraviolet completion of the theory, with the loop corrections taken into account.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:200704 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Dvali, Gia</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">92-98</subfield>
<subfield code="p">Phys. Scr. Top. Issues</subfield>
<subfield code="v">T117</subfield>
<subfield code="y">2005</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0402130.pdf</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200408</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070425</subfield>
<subfield code="h">1019</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20040218</subfield>
</datafield>
<datafield tag="962" ind1=" " ind2=" ">
<subfield code="b">002414101</subfield>
<subfield code="k">92-98</subfield>
<subfield code="n">sigtuna20030814</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002426503CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Porrati,</subfield>
<subfield code="s">Phys. Lett. B 485 (2000) 208</subfield>
<subfield code="r">hep-th/0005016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">G. R. Dvali and G. Gabadadze,</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 065007</subfield>
<subfield code="r">hep-th/0008054</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze, M. Kolanovic and F. Nitti,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 024031</subfield>
<subfield code="r">hep-ph/0106058</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze, M. Kolanovic and F. Nitti,</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 084004</subfield>
<subfield code="r">hep-ph/0102216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">C. Deffayet, G. Dvali and G. Gabadadze,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 044023</subfield>
<subfield code="r">astro-ph/0105068</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">C. Deffayet,</subfield>
<subfield code="s">Phys. Lett. B 502 (2001) 199</subfield>
<subfield code="r">hep-th/0010186</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A. G. Riess et al. [Supernova Search Team Collaboration],</subfield>
<subfield code="s">Astron. J. 116 (1998) 1009</subfield>
<subfield code="r">astro-ph/9805201</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">S. Perlmutter et al. [Supernova Cosmology Project Collaboration],</subfield>
<subfield code="s">Astrophys. J. 517 (1999) 565</subfield>
<subfield code="r">astro-ph/9812133</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">G. Dvali and M. Turner,</subfield>
<subfield code="r">astro-ph/0301510</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">H. van Dam and M. Veltman,</subfield>
<subfield code="s">Nucl. Phys. B 22 (1970) 397</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">V. I. Zakharov,</subfield>
<subfield code="s">JETP Lett. 12 (1970) 312</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">A. I. Vainshtein,</subfield>
<subfield code="s">Phys. Lett. B 39 (1972) 393</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">C. Deffayet, G. Dvali, G. Gabadadze and A. I. Vainshtein,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 044026</subfield>
<subfield code="r">hep-th/0106001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">N. Arkani-Hamed, H. Georgi and M.D. Schwartz,</subfield>
<subfield code="s">Ann. Phys. 305 (2003) 96</subfield>
<subfield code="r">hep-th/0210184</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">D. G. Boulware and S. Deser,</subfield>
<subfield code="s">Phys. Rev. D 6 (1972) 3368</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">G. Gabadadze and A. Gruzinov,</subfield>
<subfield code="s">Phys. Rev. D 72 (2005) 124007</subfield>
<subfield code="r">hep-th/0312074</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">M. A. Luty, M. Porrati and R. Rattazzi,</subfield>
<subfield code="s">J. High Energy Phys. 0309 (2003) 029</subfield>
<subfield code="r">hep-th/0303116</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">A. Lue,</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 043509</subfield>
<subfield code="r">hep-th/0111168</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">A. Gruzinov,</subfield>
<subfield code="r">astro-ph/0112246</subfield>
<subfield code="s">New Astron. 10 (2005) 311</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">S. Corley, D.A.Lowe and S. Ramgoolam,</subfield>
<subfield code="s">J. High Energy Phys. 0107 (2001) 030</subfield>
<subfield code="r">hep-th/0106067</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">I. Antoniadis, R. Minasian and P. Vanhove,</subfield>
<subfield code="s">Nucl. Phys. B 648 (2003) 69</subfield>
<subfield code="r">hep-th/0209030</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">R. L. Davis,</subfield>
<subfield code="s">Phys. Rev. D 35 (1987) 3705</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">G. Dvali, A. Gruzinov and M. Zaldarriaga,</subfield>
<subfield code="s">Phys. Rev. D 68 (2003) 024012</subfield>
<subfield code="r">hep-ph/0212069</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">A. Lue and G. Starkman,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 064002</subfield>
<subfield code="r">astro-ph/0212083</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">E. Adelberger (2002). Private communication.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">T. Damour, I. I. Kogan, A. Papazoglou,</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 104025</subfield>
<subfield code="r">hep-th/0206044</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">G. Dvali, G. Gabadadze and M. Shifman,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 044020</subfield>
<subfield code="r">hep-th/0202174</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">A. Adams, J. McGreevy and E. Silverstein,</subfield>
<subfield code="r">hep-th/0209226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze,</subfield>
<subfield code="r">hep-th/0209227</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">S. M. Carroll, V. Duvvuri, M. Trodden and M. S. Turner,</subfield>
<subfield code="r">astro-ph/0306438</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 043528</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">G. Gabadadze and M. Shifman,</subfield>
<subfield code="r">hep-th/0312289</subfield>
<subfield code="s">Phys. Rev. D 69 (2004) 124032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">M. Porrati and G. W. Rombouts,</subfield>
<subfield code="r">hep-th/0401211</subfield>
<subfield code="s">Phys. Rev. D 69 (2004) 122003</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20060124104603.0</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0501145</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Durin, B</subfield>
<subfield code="u">LPTHE</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Closed strings in Misner space: a toy model for a Big Bounce?</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2005</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">19 Jan 2005</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Misner space, also known as the Lorentzian orbifold $R^{1,1}/boost$, is one of the simplest examples of a cosmological singularity in string theory. In this lecture, we review the semi-classical propagation of closed strings in this background, with a particular emphasis on the twisted sectors of the orbifold. Tree-level scattering amplitudes and the one-loop vacuum amplitude are also discussed.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2006</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2006 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Pioline, B</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Durin, Bruno</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Pioline, Boris</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0501145.pdf</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">LPTHE</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">LPTHE, Lptens</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200503</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20061202</subfield>
<subfield code="h">0008</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20050120</subfield>
</datafield>
<datafield tag="962" ind1=" " ind2=" ">
<subfield code="b">002424942</subfield>
<subfield code="k">177</subfield>
<subfield code="n">cargese20040607</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002503681CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">S. Lem, "The Seventh Voyage", in The Star Diaries, Warsaw 1971, English translation New York, 1976.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">A. Borde and A. Vilenkin, "Eternal inflation and the initial singularity,"</subfield>
<subfield code="s">Phys. Rev. Lett. 72 (1994) 3305</subfield>
<subfield code="r">gr-qc/9312022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">C. W. Misner, in Relativity Theory and Astrophysics I: Relativity and Cosmology, edited by J. Ehlers, Lectures in Applied Mathematics, Vol. 8 (American Mathematical Society, Providence, 1967), p. 160.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">M. Berkooz, and B. Pioline, "Strings in an electric field, and the Milne universe,"</subfield>
<subfield code="s">J. Cosmol. Astropart. Phys. 0311 (2003) 007</subfield>
<subfield code="r">hep-th/0307280</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">M. Berkooz, B. Pioline and M. Rozali, "Closed strings in Misner space: Cosmological production of winding strings,"</subfield>
<subfield code="s">J. Cosmol. Astropart. Phys. 07 (2004) 003</subfield>
<subfield code="r">hep-th/0405126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="s">J. Cosmol. Astropart. Phys. 0410 (2004) 002</subfield>
<subfield code="m">M. Berkooz, B. Durin, B. Pioline and D. Reichmann, "Closed strings in Misner space: Stringy fuzziness with a twist,"</subfield>
<subfield code="r">hep-th/0407216</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">G. T. Horowitz and A. R. Steif, "Singular String Solutions With Nonsingular Initial Data,"</subfield>
<subfield code="s">Phys. Lett. B 258 (1991) 91</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">J. Khoury, B. A. Ovrut, N. Seiberg, P. J. Steinhardt and N. Turok, "From big crunch to big bang,"</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 086007</subfield>
<subfield code="r">hep-th/0108187</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="s">Surveys High Energ. Phys. 17 (2002) 115</subfield>
<subfield code="m">N. A. Nekrasov, "Milne universe, tachyons, and quantum group,"</subfield>
<subfield code="r">hep-th/0203112</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">V. Balasubramanian, S. F. Hassan, E. Keski-Vakkuri and A. Naqvi, "A space-time orbifold: A toy model for a cosmological singularity,"</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 026003</subfield>
<subfield code="r">hep-th/0202187</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">R. Biswas, E. Keski-Vakkuri, R. G. Leigh, S. Nowling and E. Sharpe, "The taming of closed time-like curves,"</subfield>
<subfield code="s">J. High Energy Phys. 0401 (2004) 064</subfield>
<subfield code="r">hep-th/0304241</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "Cosmological String Theories And Discrete Inflation,"</subfield>
<subfield code="s">Phys. Lett. B 211 (1988) 393</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "An Expanding Universe In String Theory,"</subfield>
<subfield code="s">Nucl. Phys. B 328 (1989) 117</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "Comments On Cosmological String Solutions,"</subfield>
<subfield code="s">Phys. Lett. B 257 (1991) 278</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">C. R. Nappi and E. Witten, "A Closed, expanding universe in string theory,"</subfield>
<subfield code="s">Phys. Lett. B 293 (1992) 309</subfield>
<subfield code="r">hep-th/9206078</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">C. Kounnas and D. Lust, "Cosmological string backgrounds from gauged WZW models,"</subfield>
<subfield code="s">Phys. Lett. B 289 (1992) 56</subfield>
<subfield code="r">hep-th/9205046</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">E. Kiritsis and C. Kounnas, "Dynamical Topology change in string theory,"</subfield>
<subfield code="s">Phys. Lett. B 331 (1994) 51</subfield>
<subfield code="r">hep-th/9404092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">S. Elitzur, A. Giveon, D. Kutasov and E. Rabinovici, "From big bang to big crunch and beyond,"</subfield>
<subfield code="s">J. High Energy Phys. 0206 (2002) 017</subfield>
<subfield code="r">hep-th/0204189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">S. Elitzur, A. Giveon and E. Rabinovici, "Removing singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0301 (2003) 017</subfield>
<subfield code="r">hep-th/0212242</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">L. Cornalba and M. S. Costa, "A New Cosmological Scenario in String Theory,"</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 066001</subfield>
<subfield code="r">hep-th/0203031</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">L. Cornalba, M. S. Costa and C. Kounnas, "A resolution of the cosmological singularity with orientifolds,"</subfield>
<subfield code="s">Nucl. Phys. B 637 (2002) 378</subfield>
<subfield code="r">hep-th/0204261</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">L. Cornalba and M. S. Costa, "On the classical stability of orientifold cosmologies,"</subfield>
<subfield code="s">Class. Quantum Gravity 20 (2003) 3969</subfield>
<subfield code="r">hep-th/0302137</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">B. Craps, D. Kutasov and G. Rajesh, "String propagation in the presence of cosmological singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0206 (2002) 053</subfield>
<subfield code="r">hep-th/0205101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">B. Craps and B. A. Ovrut, "Global fluctuation spectra in big crunch / big bang string vacua,"</subfield>
<subfield code="s">Phys. Rev. D 69 (2004) 066001</subfield>
<subfield code="r">hep-th/0308057</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">E. Dudas, J. Mourad and C. Timirgaziu, "Time and space dependent backgrounds from nonsupersymmetric strings,"</subfield>
<subfield code="s">Nucl. Phys. B 660 (2003) 3</subfield>
<subfield code="r">hep-th/0209176</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">L. Cornalba and M. S. Costa, "Time-dependent orbifolds and string cosmology,"</subfield>
<subfield code="s">Fortschr. Phys. 52 (2004) 145</subfield>
<subfield code="r">hep-th/0310099</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 126011</subfield>
<subfield code="m">C. V. Johnson and H. G. Svendsen, "An exact string theory model of closed time-like curves and cosmological singularities,"</subfield>
<subfield code="r">hep-th/0405141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">N. Toumbas and J. Troost, "A time-dependent brane in a cosmological background,"</subfield>
<subfield code="s">J. High Energy Phys. 0411 (2004) 032</subfield>
<subfield code="r">hep-th/0410007</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">W. A. Hiscock and D. A. Konkowski, "Quantum Vacuum Energy In Taub - Nut (Newman-Unti-Tamburino) Type Cosmologies,"</subfield>
<subfield code="s">Phys. Rev. D 26 (1982) 1225</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">A. H. Taub, "Empty Space-Times Admitting A Three Parameter Group Of Motions,"</subfield>
<subfield code="s">Ann. Math. 53 (1951) 472</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">E. Newman, L. Tamburino and T. Unti, "Empty Space Generalization Of The Schwarzschild Metric,"</subfield>
<subfield code="s">J. Math. Phys. 4 (1963) 915</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">J. G. Russo, "Cosmological string models from Milne spaces and SL(2,Z) orbifold,"</subfield>
<subfield code="r">hep-th/0305032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="s">Mod. Phys. Lett. A 19 (2004) 421</subfield>
<subfield code="m">J. R. I. Gott, "Closed Timelike Curves Produced By Pairs Of Moving Cosmic Strings: Exact Solutions,"</subfield>
<subfield code="s">Phys. Rev. Lett. 66 (1991) 1126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">J. D. Grant, "Cosmic strings and chronology protection,"</subfield>
<subfield code="s">Phys. Rev. D 47 (1993) 2388</subfield>
<subfield code="r">hep-th/9209102</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">S. W. Hawking, "The Chronology protection conjecture,"</subfield>
<subfield code="s">Phys. Rev. D 46 (1992) 603</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="s">Commun. Math. Phys. 256 (2005) 491</subfield>
<subfield code="m">D. Kutasov, J. Marklof and G. W. Moore, "Melvin Models and Diophantine Approximation,"</subfield>
<subfield code="r">hep-th/0407150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">C. Gabriel and P. Spindel, "Quantum charged fields in Rindler space,"</subfield>
<subfield code="s">Ann. Phys. 284 (2000) 263</subfield>
<subfield code="r">gr-qc/9912016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">N. Turok, M. Perry and P. J. Steinhardt, "M theory model of a big crunch / big bang transition,"</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 106004</subfield>
<subfield code="r">hep-th/0408083</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">C. Bachas and M. Porrati, "Pair Creation Of Open Strings In An Electric Field,"</subfield>
<subfield code="s">Phys. Lett. B 296 (1992) 77</subfield>
<subfield code="r">hep-th/9209032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">J. M. Maldacena, H. Ooguri and J. Son, "Strings in AdS(3) and the SL(2,R) WZW model. II: Euclidean black hole,"</subfield>
<subfield code="s">J. Math. Phys. 42 (2001) 2961</subfield>
<subfield code="r">hep-th/0005183</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">M. Berkooz, B. Craps, D. Kutasov and G. Rajesh, "Comments on cosmological singularities in string theory,"</subfield>
<subfield code="r">hep-th/0212215</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">D. J. Gross and P. F. Mende, "The High-Energy Behavior Of String Scattering Amplitudes,"</subfield>
<subfield code="s">Phys. Lett. B 197 (1987) 129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">H. Liu, G. Moore and N. Seiberg, "Strings in a time-dependent orbifold,"</subfield>
<subfield code="s">J. High Energy Phys. 0206 (2002) 045</subfield>
<subfield code="r">hep-th/0204168</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">H. Liu, G. Moore and N. Seiberg, "Strings in time-dependent orbifolds,"</subfield>
<subfield code="s">J. High Energy Phys. 0210 (2002) 031</subfield>
<subfield code="r">hep-th/0206182</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">D. Amati, M. Ciafaloni and G. Veneziano, "Classical And Quantum Gravity Effects From Planckian Energy Superstring Collisions,"</subfield>
<subfield code="s">Int. J. Mod. Phys. A 3 (1988) 1615</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">G. T. Horowitz and J. Polchinski, "Instability of spacelike and null orbifold singularities,"</subfield>
<subfield code="s">Phys. Rev. D 66 (2002) 103512</subfield>
<subfield code="r">hep-th/0206228</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">C. R. Nappi and E. Witten, "A WZW model based on a non-semisimple group,"</subfield>
<subfield code="s">Phys. Rev. Lett. 71 (1993) 3751</subfield>
<subfield code="r">hep-th/9310112</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">D. I. Olive, E. Rabinovici and A. Schwimmer, "A Class of string backgrounds as a semiclassical limit of WZW models,"</subfield>
<subfield code="s">Phys. Lett. B 321 (1994) 361</subfield>
<subfield code="r">hep-th/9311081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">E. Kiritsis and C. Kounnas, "String Propagation In Gravitational Wave Backgrounds,"</subfield>
<subfield code="s">Phys. Lett. B 320 (1994) 264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">[Addendum]</subfield>
<subfield code="s">Phys. Lett. B 325 (1994) 536</subfield>
<subfield code="r">hep-th/9310202</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">E. Kiritsis, C. Kounnas and D. Lust, "Superstring gravitational wave backgrounds with space-time supersymmetry,"</subfield>
<subfield code="s">Phys. Lett. B 331 (1994) 321</subfield>
<subfield code="r">hep-th/9404114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">E. Kiritsis and B. Pioline, "Strings in homogeneous gravitational waves and null holography,"</subfield>
<subfield code="s">J. High Energy Phys. 0208 (2002) 048</subfield>
<subfield code="r">hep-th/0204004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
<subfield code="s">Nucl. Phys. B 674 (2003) 80</subfield>
<subfield code="m">G. D’Appollonio and E. Kiritsis, "String interactions in gravitational wave backgrounds,"</subfield>
<subfield code="r">hep-th/0305081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
<subfield code="m">Y. K. Cheung, L. Freidel and K. Savvidy, "Strings in gravimagnetic fields,"</subfield>
<subfield code="s">J. High Energy Phys. 0402 (2004) 054</subfield>
<subfield code="r">hep-th/0309005</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace operators and non-local string theories,"</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
<subfield code="m">T. Damour, M. Henneaux and H. Nicolai, "Cosmological billiards,"</subfield>
<subfield code="s">Class. Quantum Gravity 20 (2003) R145</subfield>
<subfield code="r">hep-th/0212256</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20060713170102.0</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0606038</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DESY-06-083</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">DESY-2006-083</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Papadimitriou, I</subfield>
<subfield code="u">DESY</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Non-Supersymmetric Membrane Flows from Fake Supergravity and Multi-Trace Deformations</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2007</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Hamburg</subfield>
<subfield code="b">DESY</subfield>
<subfield code="c">5 Jun 2006</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">45 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We use fake supergravity as a solution generating technique to obtain a continuum of non-supersymmetric asymptotically $AdS_4\times S^7$ domain wall solutions of eleven-dimensional supergravity with non-trivial scalars in the $SL(8,\mathbb{R})/SO(8)$ coset. These solutions are continuously connected to the supersymmetric domain walls describing a uniform sector of the Coulomb branch of the $M2$-brane theory. We also provide a general argument that identifies the fake superpotential with the exact large-N quantum effective potential of the dual theory, thus arriving at a very general description of multi-trace deformations in the AdS/CFT correspondence, which strongly motivates further study of fake supergravity as a solution generating method. This identification allows us to interpret our non-supersymmetric solutions as a family of marginal triple-trace deformations of the Coulomb branch that completely break supersymmetry and to calculate the exact large-N anomalous dimensions of the operators involved. The holographic one- and two-point functions for these solutions are also computed.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS JHEP2007</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:200703 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Papadimitriou, Ioannis</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">008</subfield>
<subfield code="p">J. High Energy Phys.</subfield>
<subfield code="v">02</subfield>
<subfield code="y">2007</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0606038.pdf</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200623</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070307</subfield>
<subfield code="h">2032</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20060607</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002623855CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">M. Cvetic and H. H. Soleng, "Supergravity domain walls,"</subfield>
<subfield code="s">Phys. Rep. 282 (1997) 159</subfield>
<subfield code="r">hep-th/9604090</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">D. Z. Freedman, C. Nunez, M. Schnabl and K. Skenderis, "Fake supergravity and domain wall stability,"</subfield>
<subfield code="s">Phys. Rev. D 69 (2004) 104027</subfield>
<subfield code="r">hep-th/0312055</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">A. Celi, A. Ceresole, G. Dall’Agata, A. Van Proeyen and M. Zagermann, "On the fakeness of fake supergravity,"</subfield>
<subfield code="s">Phys. Rev. D 71 (2005) 045009</subfield>
<subfield code="r">hep-th/0410126</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">K. Skenderis and P. K. Townsend, "Gravitational stability and renormalization-group flow,"</subfield>
<subfield code="s">Phys. Lett. B 468 (1999) 46</subfield>
<subfield code="r">hep-th/9909070</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">I. Bakas, A. Brandhuber and K. Sfetsos, "Domain walls of gauged supergravity, M-branes, and algebraic curves,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 3 (1999) 1657</subfield>
<subfield code="r">hep-th/9912132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">M. Zagermann, "N = 4 fake supergravity,"</subfield>
<subfield code="s">Phys. Rev. D 71 (2005) 125007</subfield>
<subfield code="r">hep-th/0412081</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">K. Skenderis and P. K. Townsend, "Hidden supersymmetry of domain walls and cosmologies,"</subfield>
<subfield code="r">hep-th/0602260</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">K. Skenderis, private communication.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">P. K. Townsend, "Positive Energy And The Scalar Potential In Higher Dimensional (Super)Gravity Theories,"</subfield>
<subfield code="s">Phys. Lett. B 148 (1984) 55</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, "Modeling the fifth dimension with scalars and gravity,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 046008</subfield>
<subfield code="r">hep-th/9909134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">S. S. Gubser, "Curvature singularities: The good, the bad, and the naked,"</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 4 (2002) 679</subfield>
<subfield code="r">hep-th/0002160</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">I. Papadimitriou and K. Skenderis, "AdS / CFT correspondence and geometry,"</subfield>
<subfield code="r">hep-th/0404176</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">V. L. Campos, G. Ferretti, H. Larsson, D. Martelli and B. E. W. Nilsson, "A study of holographic renormalization group flows in d = 6 and d = 3,"</subfield>
<subfield code="s">J. High Energy Phys. 0006 (2000) 023</subfield>
<subfield code="r">hep-th/0003151</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">M. Cvetic, S. S. Gubser, H. Lu and C. N. Pope, "Symmetric potentials of gauged supergravities in diverse dimensions and Coulomb branch of gauge theories,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 086003</subfield>
<subfield code="r">hep-th/9909121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">M. Cvetic, H. Lu, C. N. Pope and A. Sadrzadeh, "Consistency of Kaluza-Klein sphere reductions of symmetric potentials,"</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 046005</subfield>
<subfield code="r">hep-th/0002056</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">P. Kraus, F. Larsen and S. P. Trivedi, "The Coulomb branch of gauge theory from rotating branes,"</subfield>
<subfield code="s">J. High Energy Phys. 9903 (1999) 003</subfield>
<subfield code="r">hep-th/9811120</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">D. Z. Freedman, S. S. Gubser, K. Pilch and N. P. Warner, "Continuous distributions of D3-branes and gauged supergravity,"</subfield>
<subfield code="s">J. High Energy Phys. 0007 (2000) 038</subfield>
<subfield code="r">hep-th/9906194</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">I. Bakas and K. Sfetsos, "States and curves of five-dimensional gauged supergravity,"</subfield>
<subfield code="s">Nucl. Phys. B 573 (2000) 768</subfield>
<subfield code="r">hep-th/9909041</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">C. Martinez, R. Troncoso and J. Zanelli, "Exact black hole solution with a minimally coupled scalar field,"</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 084035</subfield>
<subfield code="r">hep-th/0406111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">B. de Wit and H. Nicolai, "The Consistency Of The S7 Truncation In D = 11 Supergravity,"</subfield>
<subfield code="s">Nucl. Phys. B 281 (1987) 211</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">H. Nastase, D. Vaman and P. van Nieuwenhuizen, "Consistent nonlinear KK reduction of 11d supergravity on AdS7 × S4 and self-duality in odd dimensions,"</subfield>
<subfield code="s">Phys. Lett. B 469 (1999) 96</subfield>
<subfield code="r">hep-th/9905075</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">H. Nastase, D. Vaman and P. van Nieuwenhuizen, "Consistency of the AdS7 × S4 reduction and the origin of self-duality in odd dimensions,"</subfield>
<subfield code="s">Nucl. Phys. B 581 (2000) 179</subfield>
<subfield code="r">hep-th/9911238</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">P. Breitenlohner and D. Z. Freedman, "Stability In Gauged Extended Supergravity,"</subfield>
<subfield code="s">Ann. Phys. 144 (1982) 249</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten, "AdS/CFT correspondence and symmetry breaking,"</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, Chelsea Publishing Company, 1971.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">E. S. Cheb-Terrab and A. D. Roche, "Abel ODEs: Equivalence and Integrable Classes," Comput. Phys. Commun. 130, Issues 1-2 (2000) 204</subfield>
<subfield code="r">math-ph/0001037</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">E. S. Cheb-Terrab and A. D. Roche, "An Abel ordinary differential equation class generalizing known integrable classes,"</subfield>
<subfield code="s">Eur. J. Appl. Math. 14 (2003) 217</subfield>
<subfield code="r">math.GM/0002059</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">V. M. Boyko, "Symmetry, Equivalence and Integrable Classes of Abel’s Equations," Proceedings of the Institute of Mathematics of the NAS of Ukraine 50, Part 1 (2004) 47</subfield>
<subfield code="r">nlin.SI/0404020</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">M. J. Duff and J. T. Liu, "Anti-de Sitter black holes in gauged N = 8 supergravity,"</subfield>
<subfield code="s">Nucl. Phys. B 554 (1999) 237</subfield>
<subfield code="r">hep-th/9901149</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">M. Cvetic et al., "Embedding AdS black holes in ten and eleven dimensions,"</subfield>
<subfield code="s">Nucl. Phys. B 558 (1999) 96</subfield>
<subfield code="r">hep-th/9903214</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">J. de Boer, E. P. Verlinde and H. L. Verlinde, "On the holographic renormalization group,"</subfield>
<subfield code="s">J. High Energy Phys. 0008 (2000) 003</subfield>
<subfield code="r">hep-th/9912012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">M. Bianchi, D. Z. Freedman and K. Skenderis, "How to go with an RG flow,"</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 041</subfield>
<subfield code="r">hep-th/0105276</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">I. Papadimitriou and K. Skenderis, "Correlation functions in holographic RG flows,"</subfield>
<subfield code="s">J. High Energy Phys. 0410 (2004) 075</subfield>
<subfield code="r">hep-th/0407071</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">M. Henningson and K. Skenderis, "The holographic Weyl anomaly,"</subfield>
<subfield code="s">J. High Energy Phys. 9807 (1998) 023</subfield>
<subfield code="r">hep-th/9806087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity,"</subfield>
<subfield code="s">Commun. Math. Phys. 208 (1999) 413</subfield>
<subfield code="r">hep-th/9902121</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">P. Kraus, F. Larsen and R. Siebelink, "The gravitational action in asymptotically AdS and flat spacetimes,"</subfield>
<subfield code="s">Nucl. Phys. B 563 (1999) 259</subfield>
<subfield code="r">hep-th/9906127</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">S. de Haro, S. N. Solodukhin and K. Skenderis, "Holographic reconstruction of spacetime and renormalization in the AdS/CFT correspondence,"</subfield>
<subfield code="s">Commun. Math. Phys. 217 (2001) 595</subfield>
<subfield code="r">hep-th/0002230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">M. Bianchi, D. Z. Freedman and K. Skenderis, "Holographic renormalization,"</subfield>
<subfield code="s">Nucl. Phys. B 631 (2002) 159</subfield>
<subfield code="r">hep-th/0112119</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">D. Martelli and W. Muck, "Holographic renormalization and Ward identities with the Hamilton-Jacobi method,"</subfield>
<subfield code="s">Nucl. Phys. B 654 (2003) 248</subfield>
<subfield code="r">hep-th/0205061</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">K. Skenderis, "Lecture notes on holographic renormalization,"</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 5849</subfield>
<subfield code="r">hep-th/0209067</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">D. Z. Freedman, S. D. Mathur, A. Matusis and L. Rastelli, "Correlation functions in the CFT(d)/AdS(d + 1) correspondence,"</subfield>
<subfield code="s">Nucl. Phys. B 546 (1999) 96</subfield>
<subfield code="r">hep-th/9804058</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">O. DeWolfe and D. Z. Freedman, "Notes on fluctuations and correlation functions in holographic renormalization group flows,"</subfield>
<subfield code="r">hep-th/0002226</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[41]</subfield>
<subfield code="m">W. Muck, "Correlation functions in holographic renormalization group flows,"</subfield>
<subfield code="s">Nucl. Phys. B 620 (2002) 477</subfield>
<subfield code="r">hep-th/0105270</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[42]</subfield>
<subfield code="m">M. Bianchi, M. Prisco and W. Muck, "New results on holographic three-point functions,"</subfield>
<subfield code="s">J. High Energy Phys. 0311 (2003) 052</subfield>
<subfield code="r">hep-th/0310129</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[43]</subfield>
<subfield code="m">E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence,"</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[44]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities,"</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[45]</subfield>
<subfield code="m">W. Muck, "An improved correspondence formula for AdS/CFT with multi-trace operators,"</subfield>
<subfield code="s">Phys. Lett. B 531 (2002) 301</subfield>
<subfield code="r">hep-th/0201100</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[46]</subfield>
<subfield code="m">P. Minces, "Multi-trace operators and the generalized AdS/CFT prescription,"</subfield>
<subfield code="s">Phys. Rev. D 68 (2003) 024027</subfield>
<subfield code="r">hep-th/0201172</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[47]</subfield>
<subfield code="m">A. Sever and A. Shomer, "A note on multi-trace deformations and AdS/CFT,"</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 027</subfield>
<subfield code="r">hep-th/0203168</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[48]</subfield>
<subfield code="m">S. S. Gubser and I. R. Klebanov, "A universal result on central charges in the presence of double-trace deformations,"</subfield>
<subfield code="s">Nucl. Phys. B 656 (2003) 23</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[49]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and B. Katz, "Non-local effects of multi-trace deformations in the AdS/CFT correspondence,"</subfield>
<subfield code="s">J. High Energy Phys. 0510 (2005) 097</subfield>
<subfield code="r">hep-th/0504177</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[50]</subfield>
<subfield code="m">S. Elitzur, A. Giveon, M. Porrati and E. Rabinovici, "Multitrace deformations of vector and adjoint theories and their holographic duals,"</subfield>
<subfield code="s">J. High Energy Phys. 0602 (2006) 006</subfield>
<subfield code="r">hep-th/0511061</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[51]</subfield>
<subfield code="m">R. Corrado, K. Pilch and N. P. Warner, "An N = 2 supersymmetric membrane flow,"</subfield>
<subfield code="s">Nucl. Phys. B 629 (2002) 74</subfield>
<subfield code="r">hep-th/0107220</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[52]</subfield>
<subfield code="m">T. Hertog and K. Maeda, "Black holes with scalar hair and asymptotics in N = 8 supergravity,"</subfield>
<subfield code="s">J. High Energy Phys. 0407 (2004) 051</subfield>
<subfield code="r">hep-th/0404261</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[53]</subfield>
<subfield code="m">T. Hertog and G. T. Horowitz, "Towards a big crunch dual,"</subfield>
<subfield code="s">J. High Energy Phys. 0407 (2004) 073</subfield>
<subfield code="r">hep-th/0406134</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[54]</subfield>
<subfield code="m">T. Hertog and G. T. Horowitz, "Designer gravity and field theory effective potentials,"</subfield>
<subfield code="s">Phys. Rev. Lett. 94 (2005) 221301</subfield>
<subfield code="r">hep-th/0412169</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[55]</subfield>
<subfield code="m">T. Hertog and G. T. Horowitz, "Holographic description of AdS cosmologies,"</subfield>
<subfield code="s">J. High Energy Phys. 0504 (2005) 005</subfield>
<subfield code="r">hep-th/0503071</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[56]</subfield>
<subfield code="m">S. de Haro, I. Papadimitriou and A. C. Petkou, "Conformally coupled scalars, instantons and vacuum instability in AdS(4),"</subfield>
<subfield code="r">hep-th/0611315</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20060616163757.0</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0606096</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">UTHET-2006-05-01</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Koutsoumbas, G</subfield>
<subfield code="u">National Technical University of Athens</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Quasi-normal Modes of Electromagnetic Perturbations of Four-Dimensional Topological Black Holes with Scalar Hair</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2006</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">10 Jun 2006</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">17 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study the perturbative behaviour of topological black holes with scalar hair. We calculate both analytically and numerically the quasi-normal modes of the electromagnetic perturbations. In the case of small black holes we find clear evidence of a second-order phase transition of a topological black hole to a hairy configuration. We also find evidence of a second-order phase transition of the AdS vacuum solution to a topological black hole.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS JHEP2007</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:200702 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Musiri, S</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Papantonopoulos, E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Siopsis, G</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Koutsoumbas, George</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Musiri, Suphot</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Papantonopoulos, Eleftherios</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Siopsis, George</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">006</subfield>
<subfield code="p">J. High Energy Phys.</subfield>
<subfield code="v">10</subfield>
<subfield code="y">2006</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0606096.pdf</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200624</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070425</subfield>
<subfield code="h">1021</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20060613</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002628325CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">K. D. Kokkotas and B. G. Schmidt,</subfield>
<subfield code="s">Living Rev. Relativ. 2 (1999) 2</subfield>
<subfield code="r">gr-qc/9909058</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">H.-P. Nollert,</subfield>
<subfield code="s">Class. Quantum Gravity 16 (1999) R159</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">J. S. F. Chan and R. B. Mann,</subfield>
<subfield code="s">Phys. Rev. D 55 (1997) 7546</subfield>
<subfield code="r">gr-qc/9612026</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 064025</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">G. T. Horowitz and V. E. Hubeny,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 024027</subfield>
<subfield code="r">hep-th/9909056</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">V. Cardoso and J. P. S. Lemos,</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 084017</subfield>
<subfield code="r">gr-qc/0105103</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">B. Wang, C. Y. Lin and E. Abdalla,</subfield>
<subfield code="s">Phys. Lett. B 481 (2000) 79</subfield>
<subfield code="r">hep-th/0003295</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Berti and K. D. Kokkotas,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 064020</subfield>
<subfield code="r">gr-qc/0301052</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">F. Mellor and I. Moss,</subfield>
<subfield code="s">Phys. Rev. D 41 (1990) 403</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">C. Martinez and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 3830</subfield>
<subfield code="r">gr-qc/9604021</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">M. Henneaux, C. Martinez, R. Troncoso and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 104007</subfield>
<subfield code="r">hep-th/0201170</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">C. Martinez, R. Troncoso and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 67 (2003) 024008</subfield>
<subfield code="r">hep-th/0205319</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">N. Bocharova, K. Bronnikov and V. Melnikov,</subfield>
<subfield code="s">Vestn. Mosk. Univ. Fiz. Astron. 6 (1970) 706</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">J. D. Bekenstein,</subfield>
<subfield code="s">Ann. Phys. 82 (1974) 535</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="s">Ann. Phys. 91 (1975) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">T. Torii, K. Maeda and M. Narita,</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 044007</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">E. Winstanley,</subfield>
<subfield code="s">Found. Phys. 33 (2003) 111</subfield>
<subfield code="r">gr-qc/0205092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">T. Hertog and K. Maeda,</subfield>
<subfield code="s">J. High Energy Phys. 0407 (2004) 051</subfield>
<subfield code="r">hep-th/0404261</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">J. P. S. Lemos,</subfield>
<subfield code="s">Phys. Lett. B 353 (1995) 46</subfield>
<subfield code="r">gr-qc/9404041</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">R. B. Mann,</subfield>
<subfield code="s">Class. Quantum Gravity 14 (1997) L109</subfield>
<subfield code="r">gr-qc/9607071</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">R. B. Mann,</subfield>
<subfield code="s">Nucl. Phys. B 516 (1998) 357</subfield>
<subfield code="r">hep-th/9705223</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">L. Vanzo,</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 6475</subfield>
<subfield code="r">gr-qc/9705004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">D. R. Brill, J. Louko and P. Peldan,</subfield>
<subfield code="s">Phys. Rev. D 56 (1997) 3600</subfield>
<subfield code="r">gr-qc/9705012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">D. Birmingham,</subfield>
<subfield code="s">Class. Quantum Gravity 16 (1999) 1197</subfield>
<subfield code="r">hep-th/9808032</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">R. G. Cai and K. S. Soh,</subfield>
<subfield code="s">Phys. Rev. D 59 (1999) 044013</subfield>
<subfield code="r">gr-qc/9808067</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 084006</subfield>
<subfield code="m">B. Wang, E. Abdalla and R. B. Mann,</subfield>
<subfield code="r">hep-th/0107243</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 084006</subfield>
<subfield code="m">R. B. Mann,</subfield>
<subfield code="r">gr-qc/9709039</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">J. Crisostomo, R. Troncoso and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 62 (2000) 084013</subfield>
<subfield code="r">hep-th/0003271</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">R. Aros, R. Troncoso and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 63 (2001) 084015</subfield>
<subfield code="r">hep-th/0011097</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">R. G. Cai, Y. S. Myung and Y. Z. Zhang,</subfield>
<subfield code="s">Phys. Rev. D 65 (2002) 084019</subfield>
<subfield code="r">hep-th/0110234</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">M. H. Dehghani,</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 064019</subfield>
<subfield code="r">hep-th/0405206</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">C. Martinez, R. Troncoso and J. Zanelli,</subfield>
<subfield code="s">Phys. Rev. D 70 (2004) 084035</subfield>
<subfield code="r">hep-th/0406111</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="s">Phys. Rev. D 74 (2006) 044028</subfield>
<subfield code="m">C. Martinez, J. P. Staforelli and R. Troncoso,</subfield>
<subfield code="r">hep-th/0512022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">C. Martinez and R. Troncoso,</subfield>
<subfield code="s">Phys. Rev. D 74 (2006) 064007</subfield>
<subfield code="r">hep-th/0606130</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">E. Winstanley,</subfield>
<subfield code="s">Class. Quantum Gravity 22 (2005) 2233</subfield>
<subfield code="r">gr-qc/0501096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">E. Radu and E. Winstanley,</subfield>
<subfield code="s">Phys. Rev. D 72 (2005) 024017</subfield>
<subfield code="r">gr-qc/0503095</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">A. M. Barlow, D. Doherty and E. Winstanley,</subfield>
<subfield code="s">Phys. Rev. D 72 (2005) 024008</subfield>
<subfield code="r">gr-qc/0504087</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[31]</subfield>
<subfield code="m">I. Papadimitriou,</subfield>
<subfield code="s">J. High Energy Phys. 0702 (2007) 008</subfield>
<subfield code="r">hep-th/0606038</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="m">P. Breitenlohner and D. Z. Freedman,</subfield>
<subfield code="s">Phys. Lett. B 115 (1982) 197</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[32]</subfield>
<subfield code="s">Ann. Phys. 144 (1982) 249</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[33]</subfield>
<subfield code="m">L. Mezincescu and P. K. Townsend,</subfield>
<subfield code="s">Ann. Phys. 160 (1985) 406</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[34]</subfield>
<subfield code="m">V. Cardoso, J. Natario and R. Schiappa,</subfield>
<subfield code="s">J. Math. Phys. 45 (2004) 4698</subfield>
<subfield code="r">hep-th/0403132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[35]</subfield>
<subfield code="m">J. Natario and R. Schiappa,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 8 (2004) 1001</subfield>
<subfield code="r">hep-th/0411267</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[36]</subfield>
<subfield code="m">S. Musiri, S. Ness and G. Siopsis,</subfield>
<subfield code="s">Phys. Rev. D 73 (2006) 064001</subfield>
<subfield code="r">hep-th/0511113</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[37]</subfield>
<subfield code="m">L. Motl and A. Neitzke,</subfield>
<subfield code="s">Adv. Theor. Math. Phys. 7 (2003) 307</subfield>
<subfield code="r">hep-th/0301173</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[38]</subfield>
<subfield code="m">A. J. M. Medved, D. Martin and M. Visser,</subfield>
<subfield code="s">Class. Quantum Gravity 21 (2004) 2393</subfield>
<subfield code="r">gr-qc/0310097</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[39]</subfield>
<subfield code="m">W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes (Cambridge University Press, Cambridge, England, 1992).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[40]</subfield>
<subfield code="m">G. Koutsoumbas, S. Musiri, E. Papantonopoulos and G. Siopsis, in preparation.</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0703265</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">IGPG-07-3-4</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Alexander, S</subfield>
<subfield code="u">The Pennsylvania State University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A new PPN parameter to test Chern-Simons gravity</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2007</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">28 Mar 2007</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">4 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study Chern-Simons (CS) gravity in the parameterized post-Newtonian (PPN) framework through weak-field solutions of the modified field equations for a perfect fluid source. We discover that CS gravity possesses the same PPN parameters as general relativity, except for the inclusion of a new term, proportional both to the CS coupling parameter and the curl of the PPN vector potentials. This new term encodes the key physical effect of CS gravity in the weak-field limit, leading to a modification of frame dragging and, thus, the Lense-Thirring contribution to gyroscopic precession. We provide a physical interpretation for the new term, as well as an estimate of the size of this effect relative to the general relativistic Lense-Thirring prediction. This correction to frame dragging might be used in experiments, such as Gravity Probe B and lunar ranging, to place bounds on the CS coupling parameter, as well as other intrinsic parameters of string theory.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Yunes, N</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Alexander, Stephon</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Yunes, Nicolas</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="p">Phys. Rev. Lett.</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0703265.pdf</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">yunes@gravity.psu.edu&gt; Uploader Engine &lt;uploader@sundh99.cern.ch</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200713</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070417</subfield>
<subfield code="h">2012</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20070330</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002685163CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Polchinski, String Theory. Vol. 2: Superstring Theory and Beyond (Cambridge University Press, Cambridge, UK, 1998).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">S. H. S. Alexander, M. E. Peskin, and M. M. Sheikh-Jabbari,</subfield>
<subfield code="s">Phys. Rev. Lett. 96 (2006) 081301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">A. Lue, L.-M. Wang, and M. Kamionkowski,</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 1506</subfield>
<subfield code="r">astro-ph/9812088</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">C. M. Will, Theory and experiment in gravitational Physics (Cambridge Univ. Press, Cambridge, UK, 1993).</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">C. M. Will,</subfield>
<subfield code="s">Phys. Rev. D 57 (1998) 2061</subfield>
<subfield code="r">gr-qc/9709011</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">C. M. Will and N. Yunes,</subfield>
<subfield code="s">Class. Quantum Gravity 21 (2004) 4367</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Berti, A. Buonanno, and C. M. Will,</subfield>
<subfield code="s">Phys. Rev. D 71 (2005) 084025</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">A discussion of the history, technology and Physics of Gravity Probe B can be found at</subfield>
<subfield code="u">http://einstein.stanford.edu</subfield>
<subfield code="z">http://einstein.stanford.edu</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">T. W. Murphy, Jr., K. Nordtvedt, and S. G. Turyshev,</subfield>
<subfield code="s">Phys. Rev. Lett. 98 (2007) 071102</subfield>
<subfield code="r">gr-qc/0702028</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">R. Jackiw and S. Y. Pi,</subfield>
<subfield code="s">Phys. Rev. D 68 (2003) 104012</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">D. Guarrera and A. J. Hariton (2007),</subfield>
<subfield code="s">Phys. Rev. D 76 (2007) 044011</subfield>
<subfield code="r">gr-qc/0702029</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">S. Alexander and J. Martin,</subfield>
<subfield code="s">Phys. Rev. D 71 (2005) 063526</subfield>
<subfield code="r">hep-th/0410230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">R. J. Gleiser and C. N. Kozameh,</subfield>
<subfield code="s">Phys. Rev. D 64 (2001) 083007</subfield>
<subfield code="r">gr-qc/0102093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">R. H. Brandenberger and C. Vafa,</subfield>
<subfield code="s">Nucl. Phys. B 316 (1989) 391</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">L. Randall and R. Sundrum,</subfield>
<subfield code="s">Phys. Rev. Lett. 83 (1999) 4690</subfield>
<subfield code="r">hep-th/9906064</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">S. Alexander and N. Yunes (2007), in progress.</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">L. Blanchet,</subfield>
<subfield code="s">Living Rev. Relativ. 9 (2006) 4</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">and references therein,</subfield>
<subfield code="r">gr-qc/0202016</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">S. Alexander, L. S. Finn, and N. Yunes, in progress (2007).</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">0237765CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">3455840</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/9611103</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">PUPT-1665</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Periwal, V</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Matrices on a point as the theory of everything</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1997</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Princeton, NJ</subfield>
<subfield code="b">Princeton Univ. Joseph-Henry Lab. Phys.</subfield>
<subfield code="c">14 Nov 1996</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">5 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">It is shown that the world-line can be eliminated in the matrix quantum mechanics conjectured by Banks, Fischler, Shenker and Susskind to describe the light-cone physics of M theory. The resulting matrix model has a form that suggests origins in the reduction to a point of a Yang-Mills theory. The reduction of the Nishino-Sezgin $10+2$ dimensional supersymmetric Yang-Mills theory to a point gives a matrix model with the appropriate features: Lorentz invariance in $9+1$ dimensions, supersymmetry, and the correct number of physical degrees of freedom.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS UNC98</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Periwal, Vipul</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">1711</subfield>
<subfield code="n">4</subfield>
<subfield code="p">Phys. Rev. D</subfield>
<subfield code="v">55</subfield>
<subfield code="y">1997</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9611103.pdf</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">vipul@viper.princeton.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">199648</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070310</subfield>
<subfield code="h">0012</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">19961115</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000237765CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">1.</subfield>
<subfield code="h">T. Banks, W. Fischler, S. Shenker and L. Susskind</subfield>
<subfield code="r">hep-th/9610043</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">2.</subfield>
<subfield code="h">E. Witten</subfield>
<subfield code="s">Nucl. Phys. B 460 (1995) 335</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">3.</subfield>
<subfield code="h">B. de Wit, J. Hoppe and H. Nicolai</subfield>
<subfield code="s">Nucl. Phys. B 305 (1988) 545</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">4.</subfield>
<subfield code="h">M. Berkooz and M. Douglas</subfield>
<subfield code="r">hep-th/9610236</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">5.</subfield>
<subfield code="h">H. Nishino and E. Sezgin</subfield>
<subfield code="r">hep-th/9607185</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">6.</subfield>
<subfield code="h">M. Blencowe and M. Duff</subfield>
<subfield code="s">Nucl. Phys. B 310 (1988) 387</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">7.</subfield>
<subfield code="h">C. Vafa</subfield>
<subfield code="s">Nucl. Phys. B 469 (1996) 403</subfield>
<subfield code="m">9</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">7.</subfield>
<subfield code="h">C. Hull</subfield>
<subfield code="r">hep-th/9512181</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
<subfield code="h">D. Kutasov and E. Martinec</subfield>
<subfield code="r">hep-th/9602049</subfield>
<subfield code="m">10</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
<subfield code="h">I. Bars</subfield>
<subfield code="r">hep-th/9607112</subfield>
<subfield code="m">11. For some background on the choice of 10+2, see e.g</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
<subfield code="h">L. Castellani, P. Fré, F. Giani, K. Pilch and P. van Nieuwenhuizen</subfield>
<subfield code="s">Phys. Rev. D 26 (1982) 1481</subfield>
<subfield code="m">12</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">8.</subfield>
<subfield code="h">A. Connes</subfield>
<subfield code="m">Non-commutative Geometry, Academic Press (San Diego, 1994) 5</subfield>
</datafield>
</record>
<record>
<datafield tag="035" ind1="" ind2="">
<subfield code="a">0289446CERCER</subfield>
</datafield>
<datafield tag="035" ind1="" ind2="">
<subfield code="9">SLAC</subfield>
<subfield code="a">3838510</subfield>
</datafield>
<datafield tag="037" ind1="" ind2="">
<subfield code="a">hep-th/9809057</subfield>
</datafield>
<datafield tag="041" ind1="" ind2="">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1="" ind2="">
<subfield code="a">Polyakov, A M</subfield>
<subfield code="u">Princeton University</subfield>
</datafield>
<datafield tag="245" ind1="" ind2="">
<subfield code="a">The wall of the cave</subfield>
</datafield>
<datafield tag="260" ind1="" ind2="">
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="520" ind1="" ind2="">
<subfield code="a">In this article old and new relations between gauge fields and strings are discussed. We add new arguments that the Yang-Mills theories must be described by the non-critical strings in the five-dimensional curved space. The physical meaning of the fifth dimension is that of the renormalization scale represented by the Liouville field. We analyze the meaning of the zigzag symmetry and show that it is likely to be present if there is a minimal supersymmetry on the world sheet. We also present the new string backgrounds which may be relevant for the description of the ordinary bosonic Yang-Mills theories. The article is written on the occasion of the 40th anniversary of the IHES.</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="a">SIS LANLPUBL2001</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="a">SIS:2001 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2="">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">645-658</subfield>
<subfield code="p">Int. J. Mod. Phys. A</subfield>
<subfield code="v">14</subfield>
<subfield code="y">1999</subfield>
</datafield>
<datafield tag="859" ind1="" ind2="">
<subfield code="f">polyakov@puhep1.princeton.edu</subfield>
</datafield>
<datafield tag="916" ind1="" ind2="">
<subfield code="s">n</subfield>
<subfield code="w">199837</subfield>
</datafield>
<datafield tag="960" ind1="" ind2="">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1="" ind2="">
<subfield code="c">20060916</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">19980910</subfield>
</datafield>
<datafield tag="963" ind1="" ind2="">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1="" ind2="">
<subfield code="a">000289446CER</subfield>
</datafield>
<datafield tag="980" ind1="" ind2="">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9809057.pdf</subfield>
</datafield>
<!--added using refextract-->
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">K. Wilson</subfield>
<subfield code="s">Phys. Rev. D 10 (1974) 2445</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="s">Phys. Lett. B 59 (1975) 82</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="s">Nucl. Phys. B 120 (1977) 429</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">S. Mandelstam</subfield>
<subfield code="s">Phys. Rep., C 23 (1976) 245</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">G. 't Hooft in High Energy Phys., Zichichi editor, Bologna (1976)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="r">hep-th/9711002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">I. Klebanov</subfield>
<subfield code="s">Nucl. Phys. B 496 (1997) 231</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">J. Maldacena</subfield>
<subfield code="r">hep-th/9711200</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">S. Gubser I. Klebanov A. Polyakov</subfield>
<subfield code="r">hep-th/9802109</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">E. Witten</subfield>
<subfield code="r">hep-th/9802150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">L. Brink P. di Vecchia P. Howe</subfield>
<subfield code="s">Phys. Lett. B 63 (1976) 471</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">S. Deser B. Zumino</subfield>
<subfield code="s">Phys. Lett. B 65 (1976) 369</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="s">Phys. Lett. B 103 (1981) 207</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">T. Curtright C. Thorn</subfield>
<subfield code="s">Phys. Rev. Lett. 48 (1982) 1309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">J. Gervais A. Neveu</subfield>
<subfield code="s">Nucl. Phys. B 199 (1982) 59</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[16]</subfield>
<subfield code="m">J. Polchinski</subfield>
<subfield code="s">Nucl. Phys. B 346 (1990) 253</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[17]</subfield>
<subfield code="m">C. Callan E. Martinec M. Perry D. Friedan</subfield>
<subfield code="s">Nucl. Phys. B 262 (1985) 593</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[18]</subfield>
<subfield code="m">A. Polyakov Proceedings of Les Houches (1992)</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[19]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="s">Nucl. Phys. B 164 (1980) 171</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[20]</subfield>
<subfield code="m">Y. Makeenko A. Migdal</subfield>
<subfield code="s">Nucl. Phys. B 188 (1981) 269</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[21]</subfield>
<subfield code="m">H. Verlinde</subfield>
<subfield code="r">hep-th/9705029</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[22]</subfield>
<subfield code="m">A. Migdal</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 41 (1995) 51</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[23]</subfield>
<subfield code="m">J. Maldacena</subfield>
<subfield code="r">hep-th/9803002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[24]</subfield>
<subfield code="m">G. Horowitz A. Strominger</subfield>
<subfield code="s">Nucl. Phys. B 360 (1991) 197</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[25]</subfield>
<subfield code="m">A. Lukas B. Ovrut D. Waldram</subfield>
<subfield code="r">hep-th/9802041</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[26]</subfield>
<subfield code="m">K. Wilson</subfield>
<subfield code="s">Phys. Rev. 179 (1969) 1499</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="m">A. Polyakov</subfield>
<subfield code="s">Zh. Eksp. Teor. Fiz. 59 (1970) 542</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[27]</subfield>
<subfield code="s">Pis'ma Zh. Eksp. Teor. Fiz. 12 (1970) 538</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[28]</subfield>
<subfield code="m">A. Peet J. Polchinski</subfield>
<subfield code="r">hep-th/9809022</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[29]</subfield>
<subfield code="m">E. Fradkin A. Tseytlin</subfield>
<subfield code="s">Phys. Lett. B 178 (1986) 34</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[30]</subfield>
<subfield code="m">S. Gubser I. Klebanov</subfield>
<subfield code="r">hep-th/9708005</subfield>
</datafield>
</record>
<!--added the following record to get an example of finding a reference-->
<!--by 999C5s to 909C4-->
<!--It's probably a good article, anyway-->
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2174811CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">4308492</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-ph/0002060</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">ACT-2000-1</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CTP-TAMU-2000-2</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">OUTP-2000-03-P</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">TPI-MINN-2000-6</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Cleaver, G B</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Non-Abelian Flat Directions in a Minimal Superstring Standard Model</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2000</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Houston, TX</subfield>
<subfield code="b">Houston Univ. Adv. Res. Cent. The Woodlands</subfield>
<subfield code="c">4 Feb 2000</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">14 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Recently, by studying exact flat directions of non-Abelian singlet fields, we demonstrated the existence of free fermionic heterotic-string models in which the SU(3)_C x SU(2)_L x U(1)_Y-charged matter spectrum, just below the string scale, consists solely of the MSSM spectrum. In this paper we generalize the analysis to include VEVs of non-Abelian fields. We find several, MSSM-producing, exact non-Abelian flat directions, which are the first such examples in the literature. We examine the possibility that hidden sector condensates lift the flat directions.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2001</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2001 PR/LKR added</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Phenomenology</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Faraggi, A E</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Nanopoulos, Dimitri V</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Walker, J W</subfield>
</datafield>
<datafield tag="720" ind1=" " ind2=" ">
<subfield code="a">Walker, Joel W.</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="a">10.1142/S0217732300001444</subfield>
<subfield code="c">1191-1202</subfield>
<subfield code="p">Mod. Phys. Lett. A</subfield>
<subfield code="v">15</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">gcleaver@rainbow.physics.tamu.edu</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200006</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070425</subfield>
<subfield code="h">1017</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20000207</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002174811CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A.E. Faraggi and D.V. Nanopoulos and L. Yuan</subfield>
<subfield code="s">Nucl. Phys. B 335 (1990) 347</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">I. Antoniadis and J. Ellis and J. Hagelin and D.V. Nanopoulos</subfield>
<subfield code="s">Phys. Lett. B 213 (1989) 65</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">I. Antoniadis and C. Bachas and C. Kounnas</subfield>
<subfield code="s">Nucl. Phys. B 289 (1987) 87</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">A.E. Faraggi and D.V. Nanopoulos</subfield>
<subfield code="s">Phys. Rev. D 48 (1993) 3288</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">G.B. Cleaver and A.E. Faraggi and D.V. Nanopoulos and L. Yuan</subfield>
<subfield code="s">Phys. Lett. B 455 (1999) 135</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="r">hep-ph/9904301</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="r">hep-ph/9910230</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="s">Phys. Lett. B 256 (1991) 150</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="r">hep-ph/9511426</subfield>
</datafield>
<!--the reference-->
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">J. Ellis, K. Enqvist, D.V. Nanopoulos</subfield>
<subfield code="s">Phys. Lett. B 151 (1985) 357</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">P. Horava</subfield>
<subfield code="s">Phys. Rev. D 54 (1996) 7561</subfield>
</datafield>
<datafield tag="FFT" ind1="" ind2="">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0002060.pdf</subfield>
</datafield>
<datafield tag="FFT" ind1="" ind2="">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/0002060.ps.gz</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20060914104330.0</controlfield>
<datafield tag="035" ind1="" ind2="">
<subfield code="9">INIS</subfield>
<subfield code="a">34038281</subfield>
</datafield>
<datafield tag="035" ind1="" ind2="">
<subfield code="9">UNCOVER</subfield>
<subfield code="a">251,129,189,013</subfield>
</datafield>
<datafield tag="041" ind1="" ind2="">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1="" ind2="">
<subfield code="9">SCAN-0005061</subfield>
</datafield>
<datafield tag="088" ind1="" ind2="">
<subfield code="a">TESLA-FEL-99-07</subfield>
</datafield>
<datafield tag="100" ind1="" ind2="">
<subfield code="a">Treusch, R</subfield>
</datafield>
<datafield tag="245" ind1="" ind2="">
<subfield code="a">Development of photon beam diagnostics for VUV radiation from a SASE FEL</subfield>
</datafield>
<datafield tag="260" ind1="" ind2="">
<subfield code="c">2000</subfield>
</datafield>
<datafield tag="269" ind1="" ind2="">
<subfield code="a">Hamburg</subfield>
<subfield code="b">DESY</subfield>
<subfield code="c">Dec 1999</subfield>
</datafield>
<datafield tag="520" ind1="" ind2="">
<subfield code="a">For the proof-of-principle experiment of self-amplified spontaneous emission (SASE) at short wavelengths on the VUV FEL at DESY a multi-facetted photon beam diagnostics experiment has been developed employing new detection concepts to measure all SASE specific properties on a single pulse basis. The present setup includes instrumentation for the measurement of the energy and the angular and spectral distribution of individual photon pulses. Different types of photon detectors such as PtSi-photodiodes and fast thermoelectric detectors based on YBaCuO-films are used to cover some five orders of magnitude of intensity from the level of spontaneous emission to FEL radiation at saturation. A 1 m normal incidence monochromator in combination with a fast intensified CCD camera allows one to select single photon pulses and to record the full spectrum at high resolution to resolve the fine structure due to the start-up from noise.</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="a">SIS INIS2004</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="a">SIS UNC2002</subfield>
</datafield>
<datafield tag="595" ind1="" ind2="">
<subfield code="d">Development of photon beam diagnostics for VUV radiation from a SASE FEL</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Accelerators and Storage Rings</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="694" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">Particle accelerators</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">ceramics-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">desy-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">far-ultraviolet-radiation</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">free-electron-lasers</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">photodiodes-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">photon-beams</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">superradiance-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">thin-films</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">x-ray-detection</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">x-ray-sources</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">accelerators-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">beams-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">cyclic-accelerators</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">detection-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">electromagnetic-radiation</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">emission-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">energy-level-transitions</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">films-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">lasers-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">photon-emission</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">radiation-detection</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">radiation-sources</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">radiations-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">semiconductor-devices</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">semiconductor-diodes</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">stimulated-emission</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">synchrotrons-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">ultraviolet-radiation</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Lokajczyk, T</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Xu, W</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Jastrow, U</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Hahn, U</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bittner, L</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Feldhaus, J</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">456-462</subfield>
<subfield code="n">1-3</subfield>
<subfield code="p">Nucl. Instrum. Methods Phys. Res., A</subfield>
<subfield code="v">445</subfield>
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200430</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20061230</subfield>
<subfield code="h">0016</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20040727</subfield>
</datafield>
<datafield tag="962" ind1=" " ind2=" ">
<subfield code="b">000289917</subfield>
<subfield code="k">456-462</subfield>
<subfield code="n">hamburg990823</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002471378CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-0005061.pdf</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20070110102840.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">0008580CERCER</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="9">SCAN-9709037</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">UCRL-8417</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Orear, J</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Notes on statistics for physicists</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2=" ">
<subfield code="a">Statistics for physicists</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1958</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Berkeley, CA</subfield>
<subfield code="b">Lawrence Berkeley Nat. Lab.</subfield>
<subfield code="c">13 Aug 1958</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">34 p</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Mathematical Physics and Mathematics</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="O">
<subfield code="a">oai:cds.cern.ch:SCAN-9709037</subfield>
<subfield code="p">cerncds:SCAN</subfield>
<subfield code="p">cerncds:FULLTEXT</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">h</subfield>
<subfield code="w">199700</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20070110</subfield>
<subfield code="h">1028</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">19900127</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">DRAFT</subfield>
</datafield>
<datafield tag="964" ind1=" " ind2=" ">
<subfield code="a">0001</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000008580CER</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/9709037.pdf</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-NEWS-2009-001</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Charles Darwin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A naturalist's voyage around the world</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader">After having been twice driven back by heavy south-western gales, Her Majesty's ship Beagle" a ten-gun brig, under the command of Captain Fitz Roy, R.N., sailed from Devonport on the 27th of December, 1831. The object of the expedition was to complete the survey of Patagonia and Tierra del Fuego, commenced under Captain King in 1826 to 1830--to survey the shores of Chile, Peru, and of some islands in the Pacific--and to carry a chain of chronometrical measurements round the World.&lt;/p> &lt;div class="phwithcaption"> &lt;div class="imageScale">&lt;img alt="" src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_hms_beagle_image.gif" />&lt;/div> &lt;p>H.M.S. Beagle&lt;/p> &lt;/div> &lt;p>On the 6th of January we reached Teneriffe, but were prevented landing, by fears of our bringing the cholera: the next morning we saw the sun rise behind the rugged outline of the Grand Canary Island, and suddenly illumine the Peak of Teneriffe, whilst the lower parts were veiled in fleecy clouds. This was the first of many delightful days never to be forgotten. On the 16th of January 1832 we anchored at Porto Praya, in St. Jago, the chief island of the Cape de Verd archipelago.&lt;/p> &lt;p>The neighbourhood of Porto Praya, viewed from the sea, wears a desolate aspect. The volcanic fires of a past age, and the scorching heat of a tropical sun, have in most places rendered the soil unfit for vegetation. The country rises in successive steps of table-land, interspersed with some truncate conical hills, and the horizon is bounded by an irregular chain of more lofty mountains. The scene, as beheld through the hazy atmosphere of this climate, is one of great interest; if, indeed, a person, fresh from sea, and who has just walked, for the first time, in a grove of cocoa-nut trees, can be a judge of anything but his own happiness. 
The island would generally be considered as very uninteresting, but to any one accustomed only to an English landscape, the novel aspect of an utterly sterile land possesses a grandeur which more vegetation might spoil. A single green leaf can scarcely be discovered over wide tracts of the lava plains; yet flocks of goats, together with a few cows, contrive to exist. It rains very seldom, but during a short portion of the year heavy torrents fall, and immediately afterwards a light vegetation springs out of every crevice. This soon withers; and upon such naturally formed hay the animals live. It had not now rained for an entire year. When the island was discovered, the immediate neighbourhood of Porto Praya was clothed with trees, the reckless destruction of which has caused here, as at St. Helena, and at some of the Canary islands, almost entire sterility. The broad, flat-bottomed valleys, many of which serve during a few days only in the season as watercourses, are clothed with thickets of leafless bushes. Few living creatures inhabit these valleys. The commonest bird is a kingfisher (Dacelo Iagoensis), which tamely sits on the branches of the castor-oil plant, and thence darts on grasshoppers and lizards. It is brightly coloured, but not so beautiful as the European species: in its flight, manners, and place of habitation, which is generally in the driest valley, there is also a wide difference. One day, two of the officers and myself rode to Ribeira Grande, a village a few miles eastward of Porto Praya. Until we reached the valley of St. Martin, the country presented its usual dull brown appearance; but here, a very small rill of water produces a most refreshing margin of luxuriant vegetation. In the course of an hour we arrived at Ribeira Grande, and were surprised at the sight of a large ruined fort and cathedral. 
This little town, before its harbour was filled up, was the principal place in the island: it now presents a melancholy, but very picturesque appearance. Having procured a black Padre for a guide, and a Spaniard who had served in the Peninsular war as an interpreter, we visited a collection of buildings, of which an ancient church formed the principal part. It is here the governors and captain-generals of the islands have been buried. Some of the tombstones recorded dates of the sixteenth century. The heraldic ornaments were the only things in this retired place that reminded us of Europe. The church or chapel formed one side of a quadrangle, in the middle of which a large clump of bananas were growing. On another side was a hospital, containing about a dozen miserable-looking inmates.&lt;/p> &lt;p>We returned to the Vênda to eat our dinners. A considerable number of men, women, and children, all as black as jet, collected to watch us. Our companions were extremely merry; and everything we said or did was followed by their hearty laughter. Before leaving the town we visited the cathedral. It does not appear so rich as the smaller church, but boasts of a little organ, which sent forth singularly inharmonious cries. We presented the black priest with a few shillings, and the Spaniard, patting him on the head, said, with much candour, he thought his colour made no great difference. We then returned, as fast as the ponies would go, to Porto Praya.&lt;/p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 1, By Charles Darwin)</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;br /></subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">3</subfield>
<subfield code="n">02/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">3</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESNEWS</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_hms_beagle_image.gif</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_hms_beagle_image.gif</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-NEWS-2009-002</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Plato</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Atlantis (Critias)</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader">I have before remarked in speaking of the allotments of the gods, that they distributed the whole earth into portions differing in extent, and made for themselves temples and instituted sacrifices. And Poseidon, receiving for his lot the island of Atlantis, begat children by a mortal woman, and settled them in a part of the island, which I will describe.&lt;/p> &lt;p>Looking towards the sea, but in the centre of the whole island, there was a plain which is said to have been the fairest of all plains and very fertile. Near the plain again, and also in the centre of the island at a distance of about fifty stadia, there was a mountain not very high on any side. In this mountain there dwelt one of the earth-born primeval men of that country, whose name was Evenor, and he had a wife named Leucippe, and they had an only daughter who was called Cleito. The maiden had already reached womanhood, when her father and mother died; Poseidon fell in love with her and had intercourse with her, and breaking the ground, inclosed the hill in which she dwelt all round, making alternate zones of sea and land larger and smaller, encircling one another; there were two of land and three of water, which he turned as with a lathe, each having its circumference equidistant every way from the centre, so that no man could get to the island, for ships and voyages were not as yet. He himself, being a god, found no difficulty in making special arrangements for the centre island, bringing up two springs of water from beneath the earth, one of warm water and the other of cold, and making every variety of food to spring up abundantly from the soil. 
He also begat and brought up five pairs of twin male children; and dividing the island of Atlantis into ten portions, he gave to the first-born of the eldest pair his mother's dwelling and the surrounding allotment, which was the largest and best, and made him king over the rest; the others he made princes, and gave them rule over many men, and a large territory. And he named them all; the eldest, who was the first king, he named Atlas, and after him the whole island and the ocean were called Atlantic. To his twin brother, who was born after him, and obtained as his lot the extremity of the island towards the pillars of Heracles, facing the country which is now called the region of Gades in that part of the world, he gave the name which in the Hellenic language is Eumelus, in the language of the country which is named after him, Gadeirus. Of the second pair of twins he called one Ampheres, and the other Evaemon. To the elder of the third pair of twins he gave the name Mneseus, and Autochthon to the one who followed him. Of the fourth pair of twins he called the elder Elasippus, and the younger Mestor. And of the fifth pair he gave to the elder the name of Azaes, and to the younger that of Diaprepes. All these and their descendants for many generations were the inhabitants and rulers of divers islands in the open sea; and also, as has been already said, they held sway in our direction over the country within the pillars as far as Egypt and Tyrrhenia. Now Atlas had a numerous and honourable family, and they retained the kingdom, the eldest son handing it on to his eldest for many generations; and they had such an amount of wealth as was never before possessed by kings and potentates, and is not likely ever to be again, and they were furnished with everything which they needed, both in the city and country. 
For because of the greatness of their empire many things were brought to them from foreign countries, and the island itself provided most of what was required by them for the uses of life. In the first place, they dug out of the earth whatever was to be found there, solid as well as fusile, and that which is now only a name and was then something more than a name, orichalcum, was dug out of the earth in many parts of the island, being more precious in those days than anything except gold. There was an abundance of wood for carpenter's work, and sufficient maintenance for tame and wild animals. Moreover, there were a great number of elephants in the island; for as there was provision for all other sorts of animals, both for those which live in lakes and marshes and rivers, and also for those which live in mountains and on plains, so there was for the animal which is the largest and most voracious of all. Also whatever fragrant things there now are in the earth, whether roots, or herbage, or woods, or essences which distil from fruit and flower, grew and thrived in that land; also the fruit which admits of cultivation, both the dry sort, which is given us for nourishment and any other which we use for food&amp;mdash;we call them all by the common name of pulse, and the fruits having a hard rind, affording drinks and meats and ointments, and good store of chestnuts and the like, which furnish pleasure and amusement, and are fruits which spoil with keeping, and the pleasant kinds of dessert, with which we console ourselves after dinner, when we are tired of eating&amp;mdash;all these that sacred island which then beheld the light of the sun, brought forth fair and wondrous and in infinite abundance. With such blessings the earth freely furnished them; meanwhile they went on constructing their temples and palaces and harbours and docks.&lt;/p> (Excerpt from CRITIAS, By Plato, translated By Jowett, Benjamin)</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;br /></subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">2</subfield>
<subfield code="n">02/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">2</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESNEWS</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-NEWS-2009-003</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Plato</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Atlantis (Timaeus)</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader">This great island lay over against the Pillars of Heracles, in extent greater than Libya and Asia put together, and was the passage to other islands and to a great ocean of which the Mediterranean sea was only the harbour; and within the Pillars the empire of Atlantis reached in Europe to Tyrrhenia and in Libya to Egypt.&lt;/p> &lt;p>This mighty power was arrayed against Egypt and Hellas and all the countries&lt;/p> &lt;div class="phrwithcaption"> &lt;div class="imageScale">&lt;img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_Athanasius_Kircher_Atlantis_image.gif" alt="" />&lt;/div> &lt;p>Representation of Atlantis by Athanasius Kircher (1669)&lt;/p> &lt;/div> bordering on the Mediterranean. Then your city did bravely, and won renown over the whole earth. For at the peril of her own existence, and when the other Hellenes had deserted her, she repelled the invader, and of her own accord gave liberty to all the nations within the Pillars. A little while afterwards there were great earthquakes and floods, and your warrior race all sank into the earth; and the great island of Atlantis also disappeared in the sea. This is the explanation of the shallows which are found in that part of the Atlantic ocean. &lt;p> &lt;/p> (Excerpt from TIMAEUS, By Plato, translated By Jowett, Benjamin)&lt;br /></subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;br /></subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">02/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">04/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESNEWS</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_Athanasius_Kircher_Atlantis_image.gif</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_Athanasius_Kircher_Atlantis_image.gif</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-SCIENCE-2009-001</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Charles Darwin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">The order Rodentia in South America</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p>The order Rodentia is here very numerous in species: of mice alone I obtained no less than eight kinds. &lt;sup>&lt;a name="note1" href="#footnote1">1&lt;/a>&lt;/sup>The largest gnawing animal in the world, the Hydrochærus capybara (the water-hog), is here also common. One which I shot at Monte Video weighed ninety-eight pounds: its length, from the end of the snout to the stump-like tail, was three feet two inches; and its girth three feet eight. These great Rodents occasionally frequent the islands in the mouth of the Plata, where the water is quite salt, but are far more abundant on the borders of fresh-water lakes and rivers. Near Maldonado three or four generally live together. In the daytime they either lie among the aquatic plants, or openly feed on the turf plain.&lt;sup>&lt;a name="note2" href="#footnote2">2&lt;/a>&lt;/sup>&lt;/p> &lt;p> &lt;div class="phlwithcaption"> &lt;div class="imageScale">&lt;img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_water_dog_image.gif" alt="" />&lt;/div> &lt;p>Hydrochærus capybara or Water-hog&lt;/p> &lt;/div> When viewed at a distance, from their manner of walking and colour they resemble pigs: but when seated on their haunches, and attentively watching any object with one eye, they reassume the appearance of their congeners, cavies and rabbits. Both the front and side view of their head has quite a ludicrous aspect, from the great depth of their jaw. These animals, at Maldonado, were very tame; by cautiously walking, I approached within three yards of four old ones. This tameness may probably be accounted for, by the Jaguar having been banished for some years, and by the Gaucho not thinking it worth his while to hunt them. 
As I approached nearer and nearer they frequently made their peculiar noise, which is a low abrupt grunt, not having much actual sound, but rather arising from the sudden expulsion of air: the only noise I know at all like it, is the first hoarse bark of a large dog. Having watched the four from almost within arm's length (and they me) for several minutes, they rushed into the water at full gallop with the greatest impetuosity, and emitted at the same time their bark. After diving a short distance they came again to the surface, but only just showed the upper part of their heads. When the female is swimming in the water, and has young ones, they are said to sit on her back. These animals are easily killed in numbers; but their skins are of trifling value, and the meat is very indifferent. On the islands in the Rio Parana they are exceedingly abundant, and afford the ordinary prey to the Jaguar.&lt;/p> &lt;p>&lt;small>&lt;sup>&lt;a name="footnote1" href="#note1">1&lt;/a>&lt;/sup>. In South America I collected altogether twenty-seven species of mice, and thirteen more are known from the works of Azara and other authors. Those collected by myself have been named and described by Mr. Waterhouse at the meetings of the Zoological Society. I must be allowed to take this opportunity of returning my cordial thanks to Mr. Waterhouse, and to the other gentlemen attached to that Society, for their kind and most liberal assistance on all occasions.&lt;/small>&lt;/p> &lt;p>&lt;small>&lt;sup>&lt;a name="footnote2" href="#note2">2&lt;/a>&lt;/sup>. In the stomach and duodenum of a capybara which I opened, I found a very large quantity of a thin yellowish fluid, in which scarcely a fibre could be distinguished. Mr. Owen informs me that a part of the oesophagus is so constructed that nothing much larger than a crowquill can be passed down. 
Certainly the broad teeth and strong jaws of this animal are well fitted to grind into pulp the aquatic plants on which it feeds.&lt;/small>&lt;/p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 3, By Charles Darwin)</subfield>
</datafield>
<datafield tag="590" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;br />test fr</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">02/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESSCIENCE</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_water_dog_image.gif</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_water_dog_image.gif</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-NEWS-2009-004</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Charles Darwin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Rio Macâe</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader">April 14th, 1832.—Leaving Socêgo, we rode to another estate on the Rio Macâe, which was the last patch of cultivated ground in that direction. The estate was two and a half miles long, and the owner had forgotten how many broad.&lt;/p> &lt;p> &lt;div class="phlwithcaption"> &lt;div class="imageScale">&lt;img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_virgin_forest_image.gif" alt="" />&lt;/div> &lt;p>Virgin Forest&lt;/p> &lt;/div> Only a very small piece had been cleared, yet almost every acre was capable of yielding all the various rich productions of a tropical land. Considering the enormous area of Brazil, the proportion of cultivated ground can scarcely be considered as anything compared to that which is left in the state of nature: at some future age, how vast a population it will support! During the second day's journey we found the road so shut up that it was necessary that a man should go ahead with a sword to cut away the creepers. The forest abounded with beautiful objects; among which the tree ferns, though not large, were, from their bright green foliage, and the elegant curvature of their fronds, most worthy of admiration. In the evening it rained very heavily, and although the thermometer stood at 65°, I felt very cold. As soon as the rain ceased, it was curious to observe the extraordinary evaporation which commenced over the whole extent of the forest. At the height of a hundred feet the hills were buried in a dense white vapour, which rose like columns of smoke from the most thickly-wooded parts, and especially from the valleys. I observed this phenomenon on several occasions: I suppose it is owing to the large surface of foliage, previously heated by the sun's rays.&lt;/p> &lt;p>While staying at this estate, I was very nearly being an eye-witness to one of those atrocious acts which can only take place in a slave country. 
Owing to a quarrel and a lawsuit, the owner was on the point of taking all the women and children from the male slaves, and selling them separately at the public auction at Rio. Interest, and not any feeling of compassion, prevented this act. Indeed, I do not believe the inhumanity of separating thirty families, who had lived together for many years, even occurred to the owner. Yet I will pledge myself, that in humanity and good feeling he was superior to the common run of men. It may be said there exists no limit to the blindness of interest and selfish habit. I may mention one very trifling anecdote, which at the time struck me more forcibly than any story of cruelty. I was crossing a ferry with a negro who was uncommonly stupid. In endeavouring to make him understand, I talked loud, and made signs, in doing which I passed my hand near his face. He, I suppose, thought I was in a passion, and was going to strike him; for instantly, with a frightened look and half-shut eyes, he dropped his hands. I shall never forget my feelings of surprise, disgust, and shame, at seeing a great powerful man afraid even to ward off a blow, directed, as he thought, at his face. This man had been trained to a degradation lower than the slavery of the most helpless animal.&lt;/p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 2, By Charles Darwin)</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESNEWS</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_virgin_forest_image.gif</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_virgin_forest_image.gif</subfield>
</datafield>
</record>
<record>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">zho</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">李白</subfield>
<subfield code="q">Li Bai</subfield>
</datafield>
<datafield tag="242" ind1=" " ind2=" ">
<subfield code="a">Alone Looking at the Mountain</subfield>
<subfield code="y">eng</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">敬亭獨坐</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">
&lt;!--HTML--&gt;眾鳥高飛盡&lt;br /&gt;
孤雲去獨閒&lt;br /&gt;
相看兩不厭&lt;br /&gt;
唯有敬亭山
</subfield>
<subfield code="t">
&lt;!--HTML--&gt;All the birds have flown up and gone;&lt;br /&gt;
A lonely cloud floats leisurely by.&lt;br /&gt;
We never tire of looking at each other -&lt;br /&gt;
Only the mountain and I.
</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">701-762</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2009-09-16</subfield>
<subfield code="l">00</subfield>
<subfield code="m">2009-09-16</subfield>
<subfield code="o">BATCH</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">POETRY</subfield>
</datafield>
</record>
<record>
<controlfield tag="005">20110118111428.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">Inspire</subfield>
<subfield code="a">882629</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SPIRES</subfield>
<subfield code="a">8921016</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-THESIS-99-074</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Goodsir, S M</subfield>
<subfield code="u">Imperial Coll., London</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A W mass measurement with the ALEPH detector</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">London</subfield>
<subfield code="b">Imperial Coll.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">PhD</subfield>
<subfield code="b">London U.</subfield>
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">No fulltext</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Detectors and Experimental Techniques</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ALEPH</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LHC</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">h</subfield>
<subfield code="w">201103</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">14</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20110128</subfield>
<subfield code="h">1717</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20110118</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002943225CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ALEPHTHESIS</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">DRAFT</subfield>
</datafield>
<datafield tag="024" ind1="8" ind2=" ">
<subfield code="a">oai:cds.cern.ch:1322667</subfield>
<subfield code="p">cerncds:CERN</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">CDS</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">33028075</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Hodgson, P</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">A measurement of the di-jet cross-sections in two photon physics at LEP 2</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Sheffield</subfield>
<subfield code="b">Sheffield Univ.</subfield>
<subfield code="c">2001</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">mult. p</subfield>
</datafield>
<datafield tag="502" ind1=" " ind2=" ">
<subfield code="a">PhD</subfield>
<subfield code="b">Sheffield Univ.</subfield>
<subfield code="c">2001</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">This thesis presents a study of di-jet production in untagged two photon events in the ALEPH detector at LEP with √s = 183 GeV. A low background sample of 145146 untagged gamma gamma events is obtained and from this 2346 di-jet events are found. A clustering algorithm, KTCLUS, is used to reconstruct the jet momenta. The data is corrected back to hadron level using the PHOJET Monte Carlo and this sample is compared with two independent NLO QCD calculations. Good agreement is seen except at the lowest jet pT, where the calculations overshoot the data; however, it should be noted that perturbative QCD is less reliable at low pT.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS INIS2004</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">THESIS</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ALEPH</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="694" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">Physics of elementary particles and fields</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">accelerators-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">bosons-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">computer-codes</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">cyclic-accelerators</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">elementary-particles</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">energy-range</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">field-theories</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">gev-range</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">jet-model</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">k-codes</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">lep-storage-rings</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">linear-momentum</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">massless-particles</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">mathematical-models</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">multiple-production</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">particle-discrimination</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">particle-identification</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">particle-models</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">particle-production</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">photons-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">quantum-chromodynamics</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">quantum-field-theory</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">storage-rings</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">synchrotrons-</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">INIS</subfield>
<subfield code="a">transverse-momentum</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200431</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">14</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20100419</subfield>
<subfield code="h">1112</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20040730</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002474361CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ALEPHTHESIS</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20080521084337.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SPIRES</subfield>
<subfield code="a">4066995</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EP-99-060</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="084" ind1=" " ind2=" ">
<subfield code="2">CERN Library</subfield>
<subfield code="a">EP-1999-060</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="9">SCAN-9910048</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-L3-175</subfield>
</datafield>
<datafield tag="110" ind1=" " ind2=" ">
<subfield code="a">CERN. Geneva</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Limits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183$ GeV</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">26 Apr 1999</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">ALEPH Papers</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Preprint not submitted to publication</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">No authors</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN-EP</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">OA</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:200740 PR/LKR not found (from SLAC, INSPEC)</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experiment</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">DELPHI</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">L3</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">OPAL</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">MEDLINE</subfield>
<subfield code="a">searches Higgs bosons</subfield>
</datafield>
<datafield tag="697" ind1="C" ind2=" ">
<subfield code="a">LexiHiggs</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">EP</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">ALEPH Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">DELPHI Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">L3 Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">LEP Working Group for Higgs Boson Searches</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">OPAL Collaboration</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">h</subfield>
<subfield code="w">199941</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000330309CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20090128145544.0</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-ALEPH-ARCH-DATA-2009-004</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Beddall, Andrew</subfield>
<subfield code="u">Gaziantep U.</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Residual Bose-Einstein Correlations and the Söding Model</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2008</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">19 Jan 2009</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Bose--Einstein correlations between identical pions close in phase-space are thought to be responsible for the observed distortion in mass spectra of non-identical pions. For example in the decays \rho 0 \rightarrow \pi + \pi - and \rho \pm \rightarrow \pi \pm \pi 0, such distortions are a residual effect where the pions from the \rho decay interact with other identical pions that are close in phase-space. Such interactions can be significant in, for example, hadronic decays of Z bosons where pion multiplicities are high, and resonances such as \rho mesons decay with a very short lifetime thereby creating pions that are close to prompt pions created directly. We present the Söding model and show that it has been used successfully to model distortions in \pi \pm \pi 0 mass spectra in hadronic Z decays recorded by ALEPH.</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN EDS</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experiment</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Beddall, Ayda</subfield>
<subfield code="u">Gaziantep U.</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Bingül, Ahmet</subfield>
<subfield code="u">Gaziantep U.</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">PH-EP</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">173-180</subfield>
<subfield code="n">1</subfield>
<subfield code="p">Acta Phys. Pol. B</subfield>
<subfield code="v">39</subfield>
<subfield code="y">2008</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">nathalie.grub@cern.ch</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200904</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20100205</subfield>
<subfield code="h">1021</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20090119</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000701647CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ALEPHPAPER</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20080909102446.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">INTINT</subfield>
<subfield code="a">0000990</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">WAI01</subfield>
<subfield code="a">000004764</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-ALEPH-95-089</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-ALEPH-PHYSIC-95-083</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="9">CERN-SL-Note-95-77-BI</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">SL-Note-95-77-BI</subfield>
</datafield>
<datafield tag="110" ind1=" " ind2=" ">
<subfield code="a">CERN. Geneva</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">LEP Center-of-Mass Energies in Presence of Opposite Sign Vertical Dispersion in Bunch-Train Operation</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1995</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">17 Jul 1995</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">14 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS ALEPH2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS SLNOTE2003</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Accelerators and Storage Rings</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">CERN LEP</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">beam-energy</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">bunch-trains</subfield>
</datafield>
<datafield tag="691" ind1=" " ind2=" ">
<subfield code="a">LEP = Large Electron Positron Collider</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="694" ind1=" " ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">PHYSIC (PHYSICs)</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">LEP Energy Working Group</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">SL</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="a">The LEP energy working group</subfield>
</datafield>
<datafield tag="852" ind1=" " ind2=" ">
<subfield code="c">DD506</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200350</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">04</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000415594CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">NOTE</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ALEPHNOTE</subfield>
</datafield>
</record>
<record>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">20040304154728.0</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">0335074CERCER</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-PS-PA-Note-93-04</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-PS-PA-Note-93-04-PPC</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">1993</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">207 p</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">gift: Bouthéon, Marcel</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Accelerators and Storage Rings</subfield>
</datafield>
<datafield tag="650" ind1="2" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Miscellaneous</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">East Hall</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">ISOLDE</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">antiprotons</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">heavy ions</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="9">CERN</subfield>
<subfield code="a">proton beams</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CONFERENCE</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PROCEEDINGS</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Manglunki, Django</subfield>
<subfield code="e">ed.</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">PS</subfield>
</datafield>
<datafield tag="711" ind1=" " ind2=" ">
<subfield code="a">PPD '93</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="a">d</subfield>
<subfield code="s">h</subfield>
<subfield code="w">199936</subfield>
<subfield code="y">y1999</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">42</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20091104</subfield>
<subfield code="h">2203</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">19991117</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PROCEEDINGS</subfield>
</datafield>
<datafield tag="964" ind1=" " ind2=" ">
<subfield code="a">0001</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000335074CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ISOLDENOTE</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-SCIENCE-2009-002</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Charles Darwin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Scissor-beak</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader"> &lt;i>October 15th.&lt;/i>&amp;mdash;We got under way and passed Punta Gorda, where there is a colony of tame Indians from the province of Missiones. We sailed rapidly down the current, but before sunset, from a silly fear of bad weather, we brought-to in a narrow arm of the river. I took the boat and rowed some distance up this creek. It was very narrow, winding, and deep; on each side a wall thirty or forty feet high, formed by trees intwined with creepers, gave to the canal a singularly gloomy appearance. I here saw a very extraordinary bird, called the Scissor-beak (Rhynchops nigra). It has short legs, web feet, extremely long-pointed wings, and is of about the size of a tern.&lt;/p> &lt;div class="phrwithcaption"> &lt;div class="imageScale"> &lt;img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg" />&lt;/div> &lt;/div> &lt;p> The beak is flattened laterally, that is, in a plane at right angles to that of a spoonbill or duck. It is as flat and elastic as an ivory paper-cutter, and the lower mandible, differently from every other bird, is an inch and a half longer than the upper. In a lake near Maldonado, from which the water had been nearly drained, and which, in consequence, swarmed with small fry, I saw several of these birds, generally in small flocks, flying rapidly backwards and forwards close to the surface of the lake. They kept their bills wide open, and the lower mandible half buried in the water. Thus skimming the surface, they ploughed it in their course: the water was quite smooth, and it formed a most curious spectacle to behold a flock, each bird leaving its narrow wake on the mirror-like surface. In their flight they frequently twist about with extreme quickness, and dexterously manage with their projecting lower mandible to plough up small fish, which are secured by the upper and shorter half of their scissor-like bills. 
This fact I repeatedly saw, as, like swallows, they continued to fly backwards and forwards close before me. Occasionally when leaving the surface of the water their flight was wild, irregular, and rapid; they then uttered loud harsh cries. When these birds are fishing, the advantage of the long primary feathers of their wings, in keeping them dry, is very evident. When thus employed, their forms resemble the symbol by which many artists represent marine birds. Their tails are much used in steering their irregular course.&lt;/p> &lt;p> These birds are common far inland along the course of the Rio Parana; it is said that they remain here during the whole year, and breed in the marshes. During the day they rest in flocks on the grassy plains, at some distance from the water. Being at anchor, as I have said, in one of the deep creeks between the islands of the Parana, as the evening drew to a close, one of these scissor-beaks suddenly appeared. The water was quite still, and many little fish were rising. The bird continued for a long time to skim the surface, flying in its wild and irregular manner up and down the narrow canal, now dark with the growing night and the shadows of the overhanging trees. At Monte Video, I observed that some large flocks during the day remained on the mud-banks at the head of the harbour, in the same manner as on the grassy plains near the Parana; and every evening they took flight seaward. From these facts I suspect that the Rhynchops generally fishes by night, at which time many of the lower animals come most abundantly to the surface. M. Lesson states that he has seen these birds opening the shells of the mactr&amp;aelig; buried in the sand-banks on the coast of Chile: from their weak bills, with the lower mandible so much projecting, their short legs and long wings, it is very improbable that this can be a general habit.&lt;/p></subfield>
</datafield>
<datafield tag="691" ind1=" " ind2=" ">
<subfield code="a">DRAFT</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">2</subfield>
<subfield code="n">03/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">2</subfield>
<subfield code="n">04/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESSCIENCEDRAFT</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_scissor_beak.jpg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_scissor_beak.jpg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg</subfield>
<subfield code="n">restricted-journal_scissor_beak.jpg</subfield>
<subfield code="r">restricted</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">BUL-NEWS-2009-005</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Charles Darwin</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Galapagos Archipelago</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="b">&lt;!--HTML-->&lt;p class="articleHeader">&lt;i>September 15th.&lt;/i>&amp;mdash;This archipelago consists of ten principal islands, of which five exceed the others in size. They are situated under the Equator, and between five and six hundred miles westward of the coast of America. They are all formed of volcanic rocks; a few fragments of granite curiously glazed and altered by the heat can hardly be considered as an exception.&lt;/p> &lt;p> Some of the craters surmounting the larger islands are of immense size, and they rise to a height of between three and four thousand feet. Their flanks are studded by innumerable smaller orifices. I scarcely hesitate to affirm that there must be in the whole archipelago at least two thousand craters. These consist either of lava and scori&amp;aelig;, or of finely-stratified, sandstone-like tuff. Most of the latter are beautifully symmetrical; they owe their origin to eruptions of volcanic mud without any lava: it is a remarkable circumstance that every one of the twenty-eight tuff-craters which were examined had their southern sides either much lower than the other sides, or quite broken down and removed. As all these craters apparently have been formed when standing in the sea, and as the waves from the trade wind and the swell from the open Pacific here unite their forces on the southern coasts of all the islands, this singular uniformity in the broken state of the craters, composed of the soft and yielding tuff, is easily explained.&lt;/p> &lt;p style="text-align: center;"> &lt;img alt="" class="ph" src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_galapagos_archipelago.jpg" style="width: 300px; height: 294px;" />&lt;/p> &lt;p> Considering that these islands are placed directly under the equator, the climate is far from being excessively hot; this seems chiefly caused by the singularly low temperature of the surrounding water, brought here by the great southern Polar current. 
Excepting during one short season very little rain falls, and even then it is irregular; but the clouds generally hang low. Hence, whilst the lower parts of the islands are very sterile, the upper parts, at a height of a thousand feet and upwards, possess a damp climate and a tolerably luxuriant vegetation. This is especially the case on the windward sides of the islands, which first receive and condense the moisture from the atmosphere.&lt;/p></subfield>
</datafield>
<datafield tag="691" ind1=" " ind2=" ">
<subfield code="a">DRAFT</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">06/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="773" ind1=" " ind2=" ">
<subfield code="c">1</subfield>
<subfield code="n">07/2009</subfield>
<subfield code="t">Atlantis Times</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ATLANTISTIMESNEWSDRAFT</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/journal_galapagos_archipelago.jpg</subfield>
<subfield code="x">http://invenio-software.org/download/invenio-demo-site-files/icon-journal_galapagos_archipelago.jpg</subfield>
</datafield>
</record>
<record>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-MOVIE-2010-075</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">silent</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">CMS team</subfield>
<subfield code="e">Produced by</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">CMS animation of the high-energy collisions at 7 TeV on 30th March 2010</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2010</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="b">CERN Copyright</subfield>
<subfield code="c">2010-03-30</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">10 sec</subfield>
<subfield code="b">720x576 16/9, 25</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">UNKNOWN PAL</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN EDS</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LHC</subfield>
<subfield code="9">CERN</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">CMS</subfield>
<subfield code="9">CERN</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LHCfirstphysics</subfield>
<subfield code="9">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">publvideomovie</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2010</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">VIDEO</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Run 132440 - Event 2732271</subfield>
</datafield>
<datafield tag="542" ind1=" " ind2=" ">
<subfield code="d">CERN</subfield>
<subfield code="g">2010</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_POSTER.jpg</subfield>
<subfield code="n">CERN-MOVIE-2010-075_POSTER</subfield>
<subfield code="f">jpg</subfield>
<subfield code="d">POSTER</subfield>
<subfield code="z">POSTER</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.mpg;master</subfield>
<subfield code="n">CERN-MOVIE-2010-075</subfield>
<subfield code="f">mpg;master</subfield>
<subfield code="d">MASTER</subfield>
<subfield code="z">MASTER</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.webm;480p</subfield>
<subfield code="n">CERN-MOVIE-2010-075</subfield>
<subfield code="f">webm;480p</subfield>
<subfield code="d">WEBM_480P</subfield>
<subfield code="z">WEBM_480P</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.webm;720p</subfield>
<subfield code="n">CERN-MOVIE-2010-075</subfield>
<subfield code="f">webm;720p</subfield>
<subfield code="d">WEBM_720P</subfield>
<subfield code="z">WEBM_720P</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_01.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_01</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_01.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_01</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_02.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_02</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_02.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_02</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_03.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_03</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_03.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_03</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_04.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_04</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_04.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_04</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_05.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_05</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_05.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_05</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_06.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_06</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_06.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_06</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_07.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_07</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_07.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_07</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_08.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_08</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_08.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_08</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_09.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_09</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_09.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_09</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_10.jpg;big</subfield>
<subfield code="n">CERN-MOVIE-2010-075_10</subfield>
<subfield code="f">jpg;big</subfield>
<subfield code="d">BIGTHUMB</subfield>
<subfield code="z">BIGTHUMB</subfield>
</datafield>
<datafield tag="FFT" ind1=" " ind2=" ">
<subfield code="a">http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_10.jpg;small</subfield>
<subfield code="n">CERN-MOVIE-2010-075_10</subfield>
<subfield code="f">jpg;small</subfield>
<subfield code="d">SMALLTHUMB</subfield>
<subfield code="z">SMALLTHUMB</subfield>
</datafield>
</record>
+
+
+ <!-- ========================== -->
+<!-- 2. AUTHORITY records -->
+<!-- ========================== -->
+
+ <!-- =================================
+ AUTHOR Authority Records
+ ================================== -->
+ <record>
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">19951121053638.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">(OCoLC)oca00230701</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0001</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Mann, Thomas</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Man, Tomas</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Mann, Tomas</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Mān, Tūmās</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Mann, Paul Thomas</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Thomas, Paul</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Mani, Tʿomas</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Man, Tʿomasŭ</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Mann, Tomasz</subfield>
+ <subfield code="d">1875-1955</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">His Königliche Hoheit, 1909.</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Volgina, A.A. Tomas Mann--biobibliogr. ukazatelʹ, 1979:</subfield>
+ <subfield code="b">t.p. (Tomas Mann)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Najīb, N. Qiṣṣat al-ajyāl bayna Tūmās Mān wa-Najīb Maħfūẓ, 1982:</subfield>
+ <subfield code="b">t.p. (Tūmās Mān)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Vaget, H.R. Thomas Mann-Kommentar zu sämtlichen Erzählungen, c1984:</subfield>
+ <subfield code="b">t.p. (Thomas Mann) p. 13, etc. (b. 6-6-1875 in Lübeck as Paul Thomas Mann; used pseud. Paul Thomas as co-editor of student newspaper in 1893; d. 8-12-1955)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Kakabadze, N. Tomas Mann, 1985:</subfield>
+ <subfield code="b">t.p. (Tomas Mann) added t.p. (Tʿomas Mani)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Chʿoe, S.B. Tʿomasŭ Man yŏnʾgu, 1981:</subfield>
+ <subfield code="b">t.p. (Tʿomasŭ Man)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Łukosz, J. Terapia jako duchowa forma życia, 1990:</subfield>
+ <subfield code="b">t.p. (Tomasza Manna)</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+</record>
+<record>
+ <!-- this record is used by INVENIO (regression) tests -->
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">19991204070327.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(OCoLC)oca00955355</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0002</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Es erhub sich ein Streit</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">See how fiercely they fight</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">There uprose a great strife</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Cantatas,</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Cantatas,</subfield>
+ <subfield code="n">no. 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">There arose a great strife</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Kantate am Michaelisfest</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Festo Michaelis</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Kantate zum Michaelistag</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Cantata for Michaelmas Day</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Cantate am Michaelisfeste</subfield>
+ <subfield code="n">BWV 19</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Bach, J.S. BWV 19, Es erhub sich ein Streit [SR] p1988:</subfield>
+ <subfield code="b">label (BWV 19, Es erhub sich ein Streit = There arose a great strife)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Schmieder, 1990</subfield>
+ <subfield code="b">(19. Es erhub sich ein Streit; Kantate am Michaelisfest (Festo Michaelis))</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+ </record>
+ <record>
+ <!-- this record is used by INVENIO (regression) tests -->
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">19850502074119.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0003</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Keyboard music.</subfield>
+ <subfield code="k">Selections (Bach Guild)</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Historical anthology of music.</subfield>
+ <subfield code="n">V,</subfield>
+ <subfield code="p">Baroque (late).</subfield>
+ <subfield code="n">F,</subfield>
+ <subfield code="p">Johann Sebastian Bach.</subfield>
+ <subfield code="n">1,</subfield>
+ <subfield code="p">Works for keyboard</subfield>
+ </datafield>
+ <datafield tag="430" ind1=" " ind2=" ">
+ <subfield code="a">Historical anthology of music.</subfield>
+ <subfield code="n">V,</subfield>
+ <subfield code="p">Baroque (late).</subfield>
+ <subfield code="n">F,</subfield>
+ <subfield code="p">Johann Sebastian Bach.</subfield>
+ <subfield code="n">1,</subfield>
+ <subfield code="p">Works for keyboard</subfield>
+ </datafield>
+ <datafield tag="430" ind1=" " ind2=" ">
+ <subfield code="w">nnaa</subfield>
+ <subfield code="a">Historical anthology of music</subfield>
+ <subfield code="n">period V,</subfield>
+ <subfield code="n">category F,</subfield>
+ <subfield code="n">sub-category 1</subfield>
+ </datafield>
+ <datafield tag="430" ind1=" " ind2=" ">
+ <subfield code="a">Johann Sebastian Bach.</subfield>
+ <subfield code="n">1</subfield>
+ <subfield code="p">Works for keyboard</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Johann Sebastian Bach.</subfield>
+ <subfield code="n">1,</subfield>
+ <subfield code="p">Works for keyboard</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Bach, Johann Sebastian</subfield>
+ <subfield code="d">1685-1750.</subfield>
+ <subfield code="t">Works for keyboard</subfield>
+ </datafield>
+ <datafield tag="643" ind1=" " ind2=" ">
+ <subfield code="a">New York, NY</subfield>
+ <subfield code="b">Bach Guild</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Bach, J.S. Organ works [SR] c1981.</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+ </record>
+ <record>
+ <controlfield tag="005">19960528091722.0</controlfield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Solar energy technology handbook, c1980- (a.e.)</subfield>
+ <subfield code="b">v. 1, t.p. (William C. Dickinson) pub. info sheet (William Clarence Dickinson, b. 3/15/22)</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Dickinson, William C.</subfield>
+ <subfield code="d">1922-</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(DLC)n 80007472</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0004</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+ </record>
+ <record>
+ <!-- this record is used by INVENIO (regression) tests as recID #109 -->
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Europhysics Study Conference on Unification of the Fundamental Particle Interactions, Erice, Italy, 1980. Unification of the fundamental particle interactions, 1980 (a.e.)</subfield>
+ <subfield code="b">t.p. (John Ellis)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Supersymmetry and supergravity, c1986:</subfield>
+ <subfield code="b">CIP t.p. (J. Ellis)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Quantum reflections, 2000:</subfield>
+ <subfield code="b">CIP t.p. (John Ellis) data sht. (b. July 1, 1946) pub. info. (Jonathan Richard Ellis)</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Ellis, J.</subfield>
+ <subfield code="d">1946-</subfield>
+ <subfield code="q">(John),</subfield>
+ </datafield>
+ <datafield tag="400" ind1=" " ind2=" ">
+ <subfield code="a">Ellis, Jonathan Richard</subfield>
+ <subfield code="d">1946-</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">Ellis, John</subfield>
+ <subfield code="d">1946-</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(DLC)n 80141717</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0005</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+ </record>
+ <record>
+ <!-- this 'DELETED' record is used by INVENIO (regression) tests -->
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="a">EllisDeleted, JohnDeleted</subfield>
+ <subfield code="d">1946-</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR|(SzGeCERN)aaa0006</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHOR</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="c">DELETED</subfield>
+ </datafield>
+ </record>
+ <!-- =================================
+ INSTITUTION Authority Records
+ ================================== -->
+ <record>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Stanford Linear Accelerator Center</subfield>
+ <subfield code="u">SLAC</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">SLAC STANFORD</subfield>
+ <subfield code="9">DESY_AFF</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="b">DOE</subfield>
+ <subfield code="b">HEP200</subfield>
+ <subfield code="b">LABLIST</subfield>
+ <subfield code="b">PDGLIST</subfield>
+ <subfield code="b">PPF</subfield>
+ <subfield code="b">SLUO</subfield>
+ <subfield code="b">TOP050</subfield>
+ <subfield code="b">TOP100</subfield>
+ <subfield code="b">TOP200</subfield>
+ <subfield code="b">TOP500</subfield>
+ <subfield code="b">WEB</subfield>
+ </datafield>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.slac.stanford.edu/</subfield>
+ </datafield>
+ <datafield tag="961" ind1=" " ind2=" ">
+ <subfield code="c">2011-01-21</subfield>
+ </datafield>
+ <datafield tag="961" ind1=" " ind2=" ">
+ <subfield code="x">1989-07-18</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">INST-6300</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="g">Accel. Ctr. Stanford Linear Center</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">SLAC Stanford</subfield>
+ <subfield code="9">DESY</subfield>
+ </datafield>
+ <datafield tag="690" ind1="C" ind2=" ">
+ <subfield code="a">CORE</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(SzGeCERN)iii0001</subfield>
+ </datafield>
+ <datafield tag="371" ind1=" " ind2=" ">
+ <subfield code="e">94025 </subfield>
+ <subfield code="g">US</subfield>
+ <subfield code="a">SLAC National Accelerator Laboratory</subfield>
+ <subfield code="f">SLAC National Accelerator Laboratory</subfield>
+ <subfield code="a">2575 Sand Hill Road</subfield>
+ <subfield code="f">2575 Sand Hill Road</subfield>
+ <subfield code="a">Menlo Park, CA 94025-7090</subfield>
+ <subfield code="f">Menlo Park, CA 94025-7090</subfield>
+ <subfield code="a">USA</subfield>
+ <subfield code="f">USA</subfield>
+ <subfield code="c">CA</subfield>
+ <subfield code="b">Menlo Park</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ </record>
+ <record>
+ <datafield tag="856" ind1="4" ind2=" ">
+ <subfield code="u">http://www.cern.ch</subfield>
+ </datafield>
+ <datafield tag="961" ind1=" " ind2=" ">
+ <subfield code="c">2011-01-21</subfield>
+ </datafield>
+ <datafield tag="961" ind1=" " ind2=" ">
+ <subfield code="x">1989-07-16</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">INST-1147</subfield>
+ </datafield>
+ <datafield tag="678" ind1="1" ind2=" ">
+ <subfield code="a">Conseil européen pour la Recherche Nucléaire (1952-1954)</subfield>
+ <subfield code="a">Organisation européenne pour la Recherche nucléaire (1954-now)</subfield>
+ <subfield code="a">Sub title: Laboratoire européen pour la Physique des Particules (1984-now)</subfield>
+ <subfield code="a">Sub title: European Laboratory for Particle Physics (1984-now)</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="b">HEP200</subfield>
+ <subfield code="b">LABLIST</subfield>
+ <subfield code="b">PDGLIST</subfield>
+ <subfield code="b">PPF</subfield>
+ <subfield code="b">SLUO</subfield>
+ <subfield code="b">TOP050</subfield>
+ <subfield code="b">TOP100</subfield>
+ <subfield code="b">TOP200</subfield>
+ <subfield code="b">TOP500</subfield>
+ <subfield code="b">WEB</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="9">DESY</subfield>
+ <subfield code="a">CERN Geneva</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">Centre Européen de Recherches Nucléaires</subfield>
+ <subfield code="g">center</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">European Organization for Nuclear Research</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">CERN</subfield>
+ <subfield code="t">CERN</subfield>
+ <subfield code="u">CERN</subfield>
+ </datafield>
+ <datafield tag="371" ind1=" " ind2=" ">
+ <subfield code="a">CH-1211 Genève 23</subfield>
+ <subfield code="b">Geneva</subfield>
+ <subfield code="d">Switzerland</subfield>
+ <subfield code="e">1211</subfield>
+ <subfield code="g">CH</subfield>
+ </datafield>
+ <datafield tag="372" ind1=" " ind2=" ">
+ <subfield code="a">Research centre</subfield>
+ </datafield>
+ <datafield tag="680" ind1=" " ind2=" ">
+ <subfield code="i">2nd address: Organisation Européenne pour la Recherche Nucléaire (CERN), F-01631 Prévessin Cedex, France</subfield>
+ </datafield>
+ <datafield tag="690" ind1="C" ind2=" ">
+ <subfield code="a">CORE</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(SzGeCERN)iii0002</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ </record>
+ <!------------------------------------
+ SUBJECT Authority Records
+ ------------------------------------->
+ <record>
+ <controlfield tag="005">19891121083347.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT|(DLC)sh 85101653</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT|(SzGeCERN)sss0001</subfield>
+ </datafield>
+ <datafield tag="150" ind1=" " ind2=" ">
+ <subfield code="a">Physics</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Natural philosophy</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Philosophy, Natural</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Physical sciences</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="a">Dynamics</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT</subfield>
+ </datafield>
+ </record>
+ <record>
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">20010904160459.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT|(SzGeCERN)sss0003</subfield>
+ </datafield>
+ <datafield tag="150" ind1=" " ind2=" ">
+ <subfield code="a">Computer crimes</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Computer fraud</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Computers</subfield>
+ <subfield code="x">Law and legislation</subfield>
+ <subfield code="x">Criminal provisions</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Computers and crime</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Cyber crimes</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Cybercrimes</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Electronic crimes (Computer crimes)</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Crime</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="a">Privacy, Right of</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">00351496: Adamski, A. Prawo karne komputerowe, c2000.</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Excite WWW directory of subjects, July 10, 2001</subfield>
+ <subfield code="b">(cybercrimes; subcategory under Criminal, Branches of law, Law, Education)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Electronic crime needs assessment for state and local law enforcement, 2001:</subfield>
+ <subfield code="b">glossary, p. 41 (electronic crime includes but is not limited to fraud, theft, forgery, child pornography or exploitation, stalking, traditional white-collar crimes, privacy violations, illegal drug transactions, espionage, computer intrusions; no synonyms given)</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT</subfield>
+ </datafield>
+ </record>
+ <record>
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">20010904162409.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT|(SzGeCERN)sss0004</subfield>
+ </datafield>
+ <datafield tag="150" ind1=" " ind2=" ">
+ <subfield code="a">Embellishment (Music)</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Diminution (Music)</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Ornamentation (Music)</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Ornaments (Music)</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Music</subfield>
+ <subfield code="x">Performance</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Performance practice (Music)</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="a">Musical notation</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="a">Variation (Music)</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Heim, N.M. Ornamentation for the clarinetist, c1993.</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Massin. De la variation, c2000.</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT</subfield>
+ </datafield>
+ </record>
+ <record>
+ <controlfield tag="003">DLC</controlfield>
+ <controlfield tag="005">20010904162503.0</controlfield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT|(SzGeCERN)sss0005</subfield>
+ </datafield>
+ <datafield tag="150" ind1=" " ind2=" ">
+ <subfield code="a">Embellishment (Vocal music)</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Colorature</subfield>
+ </datafield>
+ <datafield tag="450" ind1=" " ind2=" ">
+ <subfield code="a">Fioriture</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Embellishment (Music)</subfield>
+ </datafield>
+ <datafield tag="550" ind1=" " ind2=" ">
+ <subfield code="w">g</subfield>
+ <subfield code="a">Vocal music</subfield>
+ <subfield code="x">History and criticism</subfield>
+ </datafield>
+ <datafield tag="670" ind1=" " ind2=" ">
+ <subfield code="a">Massin. De la variation, c2000.</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">SUBJECT</subfield>
+ </datafield>
+ </record>
+
+<!------------------------------------------------------------------->
+<!-- 3. Linked BIBLIOGRAPHIC AND AUTHORITY records from Jülich -->
+<!------------------------------------------------------------------->
+
+ <!----------------------------
+ BIBLIOGRAPHIC records
+ ----------------------------->
+ <record>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="0">PER:749</subfield>
+ <subfield code="a">Kilian, K.</subfield>
+ <subfield code="b">0</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Other ways to make polarized antiproton beams</subfield>
+ </datafield>
+ <datafield tag="260" ind1=" " ind2=" ">
+ <subfield code="c">2010</subfield>
+ </datafield>
+ <datafield tag="300" ind1=" " ind2=" ">
+ <subfield code="a">107</subfield>
+ </datafield>
+ <datafield tag="440" ind1=" " ind2="0">
+ <subfield code="0">6697</subfield>
+ <subfield code="a">Verhandlungen der Deutschen Physikalischen Gesellschaft (Reihe 06)</subfield>
+ <subfield code="v">2</subfield>
+ </datafield>
+ <datafield tag="500" ind1=" " ind2=" ">
+ <subfield code="3">Journal Article</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:513</subfield>
+ <subfield code="a">Grzonka, D.</subfield>
+ <subfield code="b">1</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:1182</subfield>
+ <subfield code="a">Oelert, W.</subfield>
+ <subfield code="b">2</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="773" ind1=" " ind2=" ">
+ <subfield code="0">--NOT MATCHED--</subfield>
+ <subfield code="g">Vol. 2</subfield>
+ <subfield code="q">2:&lt;</subfield>
+ <subfield code="v">2</subfield>
+ <subfield code="y">2010</subfield>
+ </datafield>
+ <datafield tag="913" ind1=" " ind2=" ">
+ <subfield code="0">GRANT:413</subfield>
+ <subfield code="k">P53</subfield>
+ <subfield code="s">Struktur der Materie</subfield>
+ <subfield code="v">Physik der Hadronen und Kerne</subfield>
+ </datafield>
+ <datafield tag="914" ind1=" " ind2=" ">
+ <subfield code="y">2010</subfield>
+ </datafield>
+ <datafield tag="920" ind1=" " ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)795</subfield>
+ <subfield code="g">IKP</subfield>
+ <subfield code="k">IKP-1</subfield>
+ <subfield code="v">Experimentelle Hadronstruktur</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">VDB:126525</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">VDB</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">DOI</subfield>
+ <subfield code="a">10.1063/1.2737136</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">VDB</subfield>
+ <subfield code="a">VDB:88636</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">WOS</subfield>
+ <subfield code="a">WOS:000246413400056</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">ISSN</subfield>
+ <subfield code="a">0003-6951</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">inh:9498831</subfield>
+ <subfield code="2">inh</subfield>
+ </datafield>
+ <datafield tag="041" ind1=" " ind2=" ">
+ <subfield code="a">English</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Atom-, molecule-, and ion-surface impact and interactions</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Pulsed laser deposition</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Stoichiometry and homogeneity</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Thin film growth, structure, and epitaxy</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Vacuum deposition</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="0">39744</subfield>
+ <subfield code="a">Heeg, T.</subfield>
+ <subfield code="b">0</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Epitaxially stabilized growth of orthorhombic LuScO3 thin films</subfield>
+ </datafield>
+ <datafield tag="260" ind1=" " ind2=" ">
+ <subfield code="c">2007</subfield>
+ </datafield>
+ <datafield tag="300" ind1=" " ind2=" ">
+ <subfield code="a">192901-1 - 192901-3</subfield>
+ </datafield>
+ <datafield tag="440" ind1=" " ind2="0">
+ <subfield code="0">562</subfield>
+ <subfield code="a">Applied Physics Letters</subfield>
+ <subfield code="v">90</subfield>
+ <subfield code="x">0003-6951</subfield>
+ </datafield>
+ <datafield tag="500" ind1=" " ind2=" ">
+ <subfield code="3">Journal Article</subfield>
+ </datafield>
+ <datafield tag="520" ind1=" " ind2=" ">
+ <subfield code="a">Metastable lutetium scandate (LuScO3) thin films with an orthorhombic perovskite structure have been prepared by molecular-beam epitaxy and pulsed-laser deposition on NdGaO3(110) and DyScO3(110) substrates. Stoichiometry and crystallinity were investigated using Rutherford backscattering spectrometry/channeling, x-ray diffraction, and transmission electron microscopy. The results indicate that LuScO3, which normally only exists as a solid solution of Sc2O3 and Lu2O3 with the cubic bixbyite structure, can be grown in the orthorhombically distorted perovskite structure. Rocking curves as narrow as 0.05° were achieved. A critical film thickness of approximately 200 nm for the epitaxially stabilized perovskite polymorph of LuScO3 on NdGaO3(110) substrates was determined.</subfield>
+ </datafield>
+ <datafield tag="588" ind1=" " ind2=" ">
+ <subfield code="a">Enriched from Web of Science, Inspec</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">DyScO3</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">DyScO3(110) substrates</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Fuel cells</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">LuScO3</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">NdGaO3</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">NdGaO3(110) substrates</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Rutherford backscattering</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Rutherford backscattering channeling</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Rutherford backscattering spectrometry</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">X-ray diffraction</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">critical film thickness</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">crystallinity</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">epitaxial layers</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">epitaxially stabilized growth</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">epitaxially stabilized perovskite polymorph</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">lutetium compounds</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">metastable lutetium scandate thin films</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">molecular beam epitaxial growth</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">molecular beam epitaxy</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">orthorhombically distorted perovskite structure</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">polymorphism</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">pulsed laser deposition</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">rocking curves</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">stoichiometry</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">transmission electron microscopy</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:64142</subfield>
+ <subfield code="a">Roeckerath, M.</subfield>
+ <subfield code="b">1</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:5409</subfield>
+ <subfield code="a">Schubert, J.</subfield>
+ <subfield code="b">2</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:5482</subfield>
+ <subfield code="a">Zander, W.</subfield>
+ <subfield code="b">3</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:14557</subfield>
+ <subfield code="a">Buchal, Ch.</subfield>
+ <subfield code="b">4</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:60616</subfield>
+ <subfield code="a">Chen, H. Y.</subfield>
+ <subfield code="b">5</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:5020</subfield>
+ <subfield code="a">Jia, C. L.</subfield>
+ <subfield code="b">6</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:45799</subfield>
+ <subfield code="a">Jia, Y.</subfield>
+ <subfield code="b">7</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:65921</subfield>
+ <subfield code="a">Adamo, C.</subfield>
+ <subfield code="b">8</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:15220</subfield>
+ <subfield code="a">Schlom, D. G.</subfield>
+ <subfield code="b">9</subfield>
+ </datafield>
+ <datafield tag="773" ind1=" " ind2=" ">
+ <subfield code="0">ZDBID:2265524-4</subfield>
+ <subfield code="a">10.1063/1.2737136</subfield>
+ <subfield code="g">Vol. 90, no. 19, p. 192901</subfield>
+ <subfield code="n">19</subfield>
+ <subfield code="q">90:19&lt;192901</subfield>
+ <subfield code="t">Applied physics letters</subfield>
+ <subfield code="v">90</subfield>
+ <subfield code="x">0003-6951</subfield>
+ <subfield code="y">2007</subfield>
+ </datafield>
+ <datafield tag="856" ind1="7" ind2=" ">
+ <subfield code="u">http://dx.doi.org/10.1063/1.2737136</subfield>
+ </datafield>
+ <datafield tag="913" ind1=" " ind2=" ">
+ <subfield code="0">GRANT:412</subfield>
+ <subfield code="k">P42</subfield>
+ <subfield code="s">Schlüsseltechnologien</subfield>
+ <subfield code="v">Grundlagen für zukünftige Informationstechnologien</subfield>
+ </datafield>
+ <datafield tag="914" ind1=" " ind2=" ">
+ <subfield code="y">2007</subfield>
+ </datafield>
+ <datafield tag="920" ind1=" " ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)381</subfield>
+ <subfield code="d">14.09.2008</subfield>
+ <subfield code="g">CNI</subfield>
+ <subfield code="k">CNI</subfield>
+ <subfield code="v">Center of Nanoelectronic Systems for Information Technology</subfield>
+ <subfield code="z">Zusammenschluss der am FE-Vorhaben I01 beteiligten Institute: IFF-TH-I, IFF-TH-II, IFF-IEM, IFF-IMF, IFF-IEE, ISG-1, ISG-2, ISG-3</subfield>
+ </datafield>
+ <datafield tag="920" ind1=" " ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)788</subfield>
+ <subfield code="g">IFF</subfield>
+ <subfield code="k">IFF-8</subfield>
+ <subfield code="v">Mikrostrukturforschung</subfield>
+ </datafield>
+ <datafield tag="920" ind1=" " ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)799</subfield>
+ <subfield code="g">IBN</subfield>
+ <subfield code="k">IBN-1</subfield>
+ <subfield code="v">Halbleiter-Nanoelektronik</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">VDB:88636</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">VDB</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">ARTICLE</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">DOI</subfield>
+ <subfield code="a">10.4028/www.scientific.net/MSF.638-642.1098</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">VDB</subfield>
+ <subfield code="a">VDB:125298</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="2">ISSN</subfield>
+ <subfield code="a">0255-5476</subfield>
+ </datafield>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">inh:11707323</subfield>
+ <subfield code="2">inh</subfield>
+ </datafield>
+ <datafield tag="041" ind1=" " ind2=" ">
+ <subfield code="a">English</subfield>
+ </datafield>
+ <datafield tag="084" ind1=" " ind2=" ">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">Fuel cells</subfield>
+ </datafield>
+ <datafield tag="100" ind1=" " ind2=" ">
+ <subfield code="0">PER:96536</subfield>
+ <subfield code="a">Menzler, N.H.</subfield>
+ <subfield code="b">0</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="245" ind1=" " ind2=" ">
+ <subfield code="a">Influence of processing parameters on the manufacturing of anode-supported solid oxide fuel cells by different wet chemical routes</subfield>
+ </datafield>
+ <datafield tag="260" ind1=" " ind2=" ">
+ <subfield code="c">2010</subfield>
+ </datafield>
+ <datafield tag="300" ind1=" " ind2=" ">
+ <subfield code="a"></subfield>
+ </datafield>
+ <datafield tag="440" ind1=" " ind2="0">
+ <subfield code="0">4206</subfield>
+ <subfield code="a">Materials Science Forum</subfield>
+ <subfield code="v">638-642</subfield>
+ <subfield code="x">0255-5476</subfield>
+ <subfield code="y">1098 - 1105</subfield>
+ </datafield>
+ <datafield tag="500" ind1=" " ind2=" ">
+ <subfield code="3">Journal Article</subfield>
+ </datafield>
+ <datafield tag="520" ind1=" " ind2=" ">
+ <subfield code="a">Anode-supported solid oxide fuel cells (SOFC) are manufactured at Forschungszentrum Jülich by different wet chemical powder processes and subsequent sintering at high temperatures. Recently, the warm pressing of Coat-Mix powders has been replaced by tape casting as the shaping technology for the NiO/8YSZ-containing substrate in order to decrease the demand for raw materials due to lower substrate thickness and in order to increase reproducibility and fabrication capacities (scalable process). Different processing routes for the substrates require the adjustment of process parameters for further coating with functional layers. Therefore, mainly thermal treatment steps have to be adapted to the properties of the new substrate types in order to obtain high-performance cells with minimum curvature (for stack assembly). In this presentation, the influence of selected process parameters during cell manufacturing will be characterized with respect to the resulting physical parameters such as slurry viscosity, green tape thickness, relative density, substrate strength, electrical conductivity, and shrinkage of the different newly developed substrate types. The influencing factors during manufacturing and the resulting characteristics will be presented and possible applications for the various substrates identified.</subfield>
+ </datafield>
+ <datafield tag="588" ind1=" " ind2=" ">
+ <subfield code="a">Enriched from Inspec</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">anode-supported solid oxide fuel cells</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">coating</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">electrical conductivity</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">green tape thickness</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">powders</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">processing parameters</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">shrinkage</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">sintering</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">slurry viscosity</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">solid oxide fuel cells</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">tape casting</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">thermal treatment</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">viscosity</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">warm pressing</subfield>
+ </datafield>
+ <datafield tag="650" ind1=" " ind2="7">
+ <subfield code="2">Inspec</subfield>
+ <subfield code="a">wet chemical powder processes</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:76694</subfield>
+ <subfield code="a">Schafbauer, W.</subfield>
+ <subfield code="b">1</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="700" ind1=" " ind2=" ">
+ <subfield code="0">PER:96316</subfield>
+ <subfield code="a">Buchkremer, H.P.</subfield>
+ <subfield code="b">2</subfield>
+ <subfield code="g">fzj</subfield>
+ </datafield>
+ <datafield tag="773" ind1=" " ind2=" ">
+ <subfield code="0">ZDBID:2047372-2</subfield>
+ <subfield code="a">10.4028/www.scientific.net/MSF.638-642.1098</subfield>
+ <subfield code="g">Vol. 638-642, p. 1098 - 1105</subfield>
+ <subfield code="q">638-642:&lt;1098 - 1105</subfield>
+ <subfield code="t">Materials science forum</subfield>
+ <subfield code="v">638-642</subfield>
+ <subfield code="x">0255-5476</subfield>
+ <subfield code="y">2010</subfield>
+ </datafield>
+ <datafield tag="856" ind1="7" ind2=" ">
+ <subfield code="u">http://dx.doi.org/10.4028/www.scientific.net/MSF.638-642.1098</subfield>
+ </datafield>
+ <datafield tag="913" ind1=" " ind2=" ">
+ <subfield code="0">GRANT:402</subfield>
+ <subfield code="k">P12</subfield>
+ <subfield code="s">Energie</subfield>
+ <subfield code="v">Rationelle Energieumwandlung</subfield>
+ </datafield>
+ <datafield tag="914" ind1=" " ind2=" ">
+ <subfield code="y">2010</subfield>
+ </datafield>
+ <datafield tag="920" ind1=" " ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)1130</subfield>
+ <subfield code="g">IEK</subfield>
+ <subfield code="k">IEK-1</subfield>
+ <subfield code="v">Werkstoffsynthese und Herstellverfahren</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">VDB:125298</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">VDB</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">ARTICLE</subfield>
+ </datafield>
+ </record>
+
+ <!----------------------------
+ AUTHORITY records
+ ----------------------------->
+
+ <record>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">Energieforschungszentrum Jülich</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">KFA</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">Research Centre Jülich</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">KFA Jülich</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">FZJ</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="w">a</subfield>
+ <subfield code="a">Kernforschungsanlage Jülich</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">INSTITUTION|(DE-Juel1)301</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)301</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Kernphysik</subfield>
+ </datafield>
+ <datafield tag="148" ind1=" " ind2=" ">
+ <subfield code="a">31.12.2000</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IKP</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)301</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IKP</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)301</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)795</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)795</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Experimentelle Hadronstruktur</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IKP-1</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)301</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IKP</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)221</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Institut 1 (Experimentelle Kernphysik I)</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)795</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)241</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)241</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Festkörperforschung</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IFF</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)241</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IFF</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)241</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)1107</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)1107</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Bio- und Nanosysteme</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IBN</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1107</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IBN</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)1107</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)1125</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)1125</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Energie- und Klimaforschung</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IEK</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1125</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IEK</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)1125</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)1115</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)1115</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Energieforschung</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IEF</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1115</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IEF</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)1115</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)381</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)381</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Center of Nanoelectronic Systems for Information Technology</subfield>
+ </datafield>
+ <datafield tag="148" ind1=" " ind2=" ">
+ <subfield code="a">14.09.2008</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">CNI</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5008462-8</subfield>
+ <subfield code="2">GND</subfield>
+ <subfield code="a">Forschungszentrum Jülich</subfield>
+ <subfield code="w">t</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)381</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">CNI</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)381</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)788</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)788</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Mikrostrukturforschung</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IFF-8</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)241</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IFF</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)37</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Mikrostrukturforschung</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)788</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)861</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)861</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Institut für Halbleiterschichten und Bauelemente</subfield>
+ </datafield>
+ <datafield tag="148" ind1=" " ind2=" ">
+ <subfield code="a">30.09.2007</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IBN-1</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1107</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IBN</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)41</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Institut für Halbleiterschichten und Bauelemente</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)799</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Halbleiter-Nanoelektronik</subfield>
+ <subfield code="w">b</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)861</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)799</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)799</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Halbleiter-Nanoelektronik</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IBN-1</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1107</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IBN</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)861</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Institut für Halbleiterschichten und Bauelemente</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)799</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)1130</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)1130</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Werkstoffsynthese und Herstellverfahren</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IEK-1</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1125</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IEK</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)809</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Werkstoffsynthese und Herstellungsverfahren</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)1130</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
+ <record>
+ <datafield tag="024" ind1="7" ind2=" ">
+ <subfield code="0">(DE-Juel1)809</subfield>
+ <subfield code="2">INST</subfield>
+ </datafield>
+ <datafield tag="035" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION|(DE-Juel1)809</subfield>
+ </datafield>
+ <datafield tag="110" ind1=" " ind2=" ">
+ <subfield code="a">Werkstoffsynthese und Herstellungsverfahren</subfield>
+ </datafield>
+ <datafield tag="148" ind1=" " ind2=" ">
+ <subfield code="a">30.09.2010</subfield>
+ </datafield>
+ <datafield tag="410" ind1=" " ind2=" ">
+ <subfield code="a">IEF-1</subfield>
+ <subfield code="w">d</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1115</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">IEF</subfield>
+ <subfield code="w">g</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)5</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Werkstoffsynthese und Herstellungsverfahren</subfield>
+ <subfield code="w">a</subfield>
+ </datafield>
+ <datafield tag="510" ind1=" " ind2=" ">
+ <subfield code="4">INSTITUTION|(DE-Juel1)1130</subfield>
+ <subfield code="2">INST</subfield>
+ <subfield code="a">Werkstoffsynthese und Herstellverfahren</subfield>
+ <subfield code="w">b</subfield>
+ </datafield>
+ <datafield tag="970" ind1=" " ind2=" ">
+ <subfield code="a">(DE-Juel1)809</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">INSTITUTION</subfield>
+ </datafield>
+ <datafield tag="980" ind1=" " ind2=" ">
+ <subfield code="a">AUTHORITY</subfield>
+ </datafield>
+ </record>
+
</collection>
diff --git a/modules/miscutil/demo/democfgdata.sql b/modules/miscutil/demo/democfgdata.sql
index 1fcfe8ef0..65d042c4a 100644
--- a/modules/miscutil/demo/democfgdata.sql
+++ b/modules/miscutil/demo/democfgdata.sql
@@ -1,2528 +1,2582 @@
-- This file is part of Invenio.
-- Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
--
-- Invenio is free software; you can redistribute it and/or
-- modify it under the terms of the GNU General Public License as
-- published by the Free Software Foundation; either version 2 of the
-- License, or (at your option) any later version.
--
-- Invenio is distributed in the hope that it will be useful, but
-- WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- General Public License for more details.
--
-- You should have received a copy of the GNU General Public License
-- along with Invenio; if not, write to the Free Software Foundation, Inc.,
-- 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
INSERT INTO user VALUES (2,'jekyll@cds.cern.ch',AES_ENCRYPT(email,'j123ekyll'),'1',NULL,'jekyll','');
INSERT INTO user VALUES (3,'hyde@cds.cern.ch',AES_ENCRYPT(email,'h123yde'),'1',NULL,'hyde','');
INSERT INTO user VALUES (4,'dorian.gray@cds.cern.ch',AES_ENCRYPT(email,'d123orian'),'1',NULL,'dorian','');
INSERT INTO user VALUES (5,'romeo.montague@cds.cern.ch',AES_ENCRYPT(email,'r123omeo'),'1',NULL,'romeo','');
INSERT INTO user VALUES (6,'juliet.capulet@cds.cern.ch',AES_ENCRYPT(email,'j123uliet'),'1',NULL,'juliet','');
INSERT INTO user VALUES (7,'benvolio.montague@cds.cern.ch',AES_ENCRYPT(email,'b123envolio'),'1',NULL,'benvolio','');
INSERT INTO user VALUES (8,'balthasar.montague@cds.cern.ch',AES_ENCRYPT(email,'b123althasar'),'1',NULL,'balthasar','');
INSERT INTO usergroup VALUES (1,'Theses viewers','Theses viewers internal group','VO','INTERNAL');
INSERT INTO usergroup VALUES (2,'montague-family','The Montague family.','VM','INTERNAL');
INSERT INTO usergroup VALUES (3,'ALEPH viewers','ALEPH viewers internal group','VO','INTERNAL');
INSERT INTO usergroup VALUES (4,'ISOLDE Internal Notes viewers','ISOLDE Internal Notes viewers internal group','VO','INTERNAL');
INSERT INTO user_usergroup VALUES (2,1,'M',NOW());
INSERT INTO user_usergroup VALUES (5,2,'A',NOW());
INSERT INTO user_usergroup VALUES (6,2,'M',NOW());
INSERT INTO user_usergroup VALUES (7,2,'M',NOW());
INSERT INTO collection VALUES (2,'Preprints','980:"PREPRINT"',NULL,NULL);
INSERT INTO collection VALUES (3,'Books','980:"BOOK"',NULL,NULL);
INSERT INTO collection VALUES (4,'Theses','980:"THESIS"',NULL,NULL);
INSERT INTO collection VALUES (5,'Reports','980:"REPORT"',NULL,NULL);
INSERT INTO collection VALUES (6,'Articles','980:"ARTICLE"',NULL,NULL);
INSERT INTO collection VALUES (8,'Pictures','980:"PICTURE"',NULL,NULL);
INSERT INTO collection VALUES (9,'CERN Divisions',NULL,NULL,NULL);
INSERT INTO collection VALUES (10,'CERN Experiments',NULL,NULL,NULL);
INSERT INTO collection VALUES (11,'Theoretical Physics (TH)','division:TH',NULL,NULL);
INSERT INTO collection VALUES (12,'Experimental Physics (EP)','division:EP',NULL,NULL);
INSERT INTO collection VALUES (13,'ISOLDE','',NULL,NULL);
INSERT INTO collection VALUES (14,'ALEPH','',NULL,NULL);
INSERT INTO collection VALUES (15,'Articles & Preprints',NULL,NULL,NULL);
INSERT INTO collection VALUES (16,'Books & Reports',NULL,NULL,NULL);
INSERT INTO collection VALUES (17,'Multimedia & Arts',NULL,NULL,NULL);
INSERT INTO collection VALUES (18,'Poetry','980:"POETRY"',NULL,NULL);
INSERT INTO collection VALUES (19,'Atlantis Times News','980:"ATLANTISTIMESNEWS"',NULL,NULL);
INSERT INTO collection VALUES (20,'Atlantis Times Arts','980:"ATLANTISTIMESARTS"',NULL,NULL);
INSERT INTO collection VALUES (21,'Atlantis Times Science','980:"ATLANTISTIMESSCIENCE"',NULL,NULL);
INSERT INTO collection VALUES (22,'Atlantis Times',NULL,NULL,NULL);
INSERT INTO collection VALUES (23,'Atlantis Institute Books','hostedcollection:',NULL,NULL);
INSERT INTO collection VALUES (24,'Atlantis Institute Articles','hostedcollection:',NULL,NULL);
INSERT INTO collection VALUES (25,'Atlantis Times Drafts','980:"ATLANTISTIMESSCIENCEDRAFT" or 980:"ATLANTISTIMESARTSDRAFT" or 980:"ATLANTISTIMESNEWSDRAFT"',NULL,NULL);
INSERT INTO collection VALUES (26, 'Notes', '980:"NOTE"', NULL, NULL);
INSERT INTO collection VALUES (27, 'ALEPH Papers', '980:"ALEPHPAPER"', NULL, NULL);
INSERT INTO collection VALUES (28, 'ALEPH Internal Notes', '980:"ALEPHNOTE"', NULL, NULL);
INSERT INTO collection VALUES (29, 'ALEPH Theses', '980:"ALEPHTHESIS"', NULL, NULL);
INSERT INTO collection VALUES (30, 'ISOLDE Papers', '980:"ISOLDEPAPER"', NULL, NULL);
INSERT INTO collection VALUES (31, 'ISOLDE Internal Notes', '980:"ISOLDENOTE"', NULL, NULL);
INSERT INTO collection VALUES (32, 'Drafts', '980:"DRAFT"', NULL, NULL);
INSERT INTO collection VALUES (33,'Videos','980:"VIDEO"',NULL,NULL);
+INSERT INTO collection VALUES (34, 'Authority Records', 'collection:AUTHORITY', null, null);
+INSERT INTO collection VALUES (35, 'Authority Author', 'collection:AUTHOR', null, null);
+INSERT INTO collection VALUES (36, 'Authority Institution', 'collection:INSTITUTION', null, null);
+INSERT INTO collection VALUES (37, 'Authority Journal', 'collection:JOURNAL', null, null);
+INSERT INTO collection VALUES (38, 'Authority Subject', 'collection:SUBJECT', null, null);
INSERT INTO collectiondetailedrecordpagetabs VALUES (8, 'usage;comments;metadata');
INSERT INTO collectiondetailedrecordpagetabs VALUES (19, 'usage;comments;metadata');
INSERT INTO collectiondetailedrecordpagetabs VALUES (18, 'usage;comments;metadata');
INSERT INTO collectiondetailedrecordpagetabs VALUES (17, 'usage;comments;metadata');
INSERT INTO clsMETHOD VALUES (1,'HEP','http://invenio-software.org/download/invenio-demo-site-files/HEP.rdf','High Energy Physics Taxonomy','0000-00-00 00:00:00');
INSERT INTO clsMETHOD VALUES (2,'NASA-subjects','http://invenio-software.org/download/invenio-demo-site-files/NASA-subjects.rdf','NASA Subjects','0000-00-00 00:00:00');
INSERT INTO collection_clsMETHOD VALUES (2,1);
INSERT INTO collection_clsMETHOD VALUES (12,2);
INSERT INTO collectionname VALUES (2,'en','ln','Preprints');
INSERT INTO collectionname VALUES (2,'fr','ln','Prétirages');
INSERT INTO collectionname VALUES (2,'de','ln','Preprints');
INSERT INTO collectionname VALUES (2,'es','ln','Preprints');
INSERT INTO collectionname VALUES (2,'ca','ln','Preprints');
INSERT INTO collectionname VALUES (2,'pl','ln','Preprinty');
INSERT INTO collectionname VALUES (2,'pt','ln','Preprints');
INSERT INTO collectionname VALUES (2,'it','ln','Preprint');
INSERT INTO collectionname VALUES (2,'ru','ln','Препринты');
INSERT INTO collectionname VALUES (2,'sk','ln','Preprinty');
INSERT INTO collectionname VALUES (2,'cs','ln','Preprinty');
INSERT INTO collectionname VALUES (2,'no','ln','Førtrykk');
INSERT INTO collectionname VALUES (2,'sv','ln','Preprints');
INSERT INTO collectionname VALUES (2,'el','ln','Προδημοσιεύσεις');
INSERT INTO collectionname VALUES (2,'uk','ln','Препринти');
INSERT INTO collectionname VALUES (2,'ja','ln','プレプリント');
INSERT INTO collectionname VALUES (2,'bg','ln','Препринти');
INSERT INTO collectionname VALUES (2,'hr','ln','Preprinti');
INSERT INTO collectionname VALUES (2,'zh_CN','ln','预印');
INSERT INTO collectionname VALUES (2,'zh_TW','ln','預印');
INSERT INTO collectionname VALUES (2,'hu','ln','Preprintek');
INSERT INTO collectionname VALUES (2,'af','ln','Pre-drukke');
INSERT INTO collectionname VALUES (2,'gl','ln','Preprints');
INSERT INTO collectionname VALUES (2,'ro','ln','Preprinturi');
INSERT INTO collectionname VALUES (2,'rw','ln','Preprints');
INSERT INTO collectionname VALUES (2,'ka','ln','პრეპრინტები');
INSERT INTO collectionname VALUES (2,'lt','ln','Rankraščiai');
INSERT INTO collectionname VALUES (2,'ar','ln','مسودات');
+INSERT INTO collectionname VALUES (2,'fa','ln','پیش چاپ ها');
INSERT INTO collectionname VALUES (3,'en','ln','Books');
INSERT INTO collectionname VALUES (3,'fr','ln','Livres');
INSERT INTO collectionname VALUES (3,'de','ln','Bücher');
INSERT INTO collectionname VALUES (3,'es','ln','Libros');
INSERT INTO collectionname VALUES (3,'ca','ln','Llibres');
INSERT INTO collectionname VALUES (3,'pl','ln','Książki');
INSERT INTO collectionname VALUES (3,'pt','ln','Livros');
INSERT INTO collectionname VALUES (3,'it','ln','Libri');
INSERT INTO collectionname VALUES (3,'ru','ln','Книги');
INSERT INTO collectionname VALUES (3,'sk','ln','Knihy');
INSERT INTO collectionname VALUES (3,'cs','ln','Knihy');
INSERT INTO collectionname VALUES (3,'no','ln','Bøker');
INSERT INTO collectionname VALUES (3,'sv','ln','');
INSERT INTO collectionname VALUES (3,'el','ln','Βιβλία');
INSERT INTO collectionname VALUES (3,'uk','ln','Книги');
INSERT INTO collectionname VALUES (3,'ja','ln','本');
INSERT INTO collectionname VALUES (3,'bg','ln','Книги');
INSERT INTO collectionname VALUES (3,'hr','ln','Knjige');
INSERT INTO collectionname VALUES (3,'zh_CN','ln','书本');
INSERT INTO collectionname VALUES (3,'zh_TW','ln','書本');
INSERT INTO collectionname VALUES (3,'hu','ln','Könyvek');
INSERT INTO collectionname VALUES (3,'af','ln','Boeke');
INSERT INTO collectionname VALUES (3,'gl','ln','Libros');
INSERT INTO collectionname VALUES (3,'ro','ln','Cărţi');
INSERT INTO collectionname VALUES (3,'rw','ln','Ibitabo');
INSERT INTO collectionname VALUES (3,'ka','ln','წიგნები');
INSERT INTO collectionname VALUES (3,'lt','ln','Knygos');
INSERT INTO collectionname VALUES (3,'ar','ln','كتب');
+INSERT INTO collectionname VALUES (3,'fa','ln','کتاب ها');
INSERT INTO collectionname VALUES (4,'en','ln','Theses');
INSERT INTO collectionname VALUES (4,'fr','ln','Thèses');
INSERT INTO collectionname VALUES (4,'de','ln','Dissertationen');
INSERT INTO collectionname VALUES (4,'es','ln','Tesis');
INSERT INTO collectionname VALUES (4,'ca','ln','Tesis');
INSERT INTO collectionname VALUES (4,'pl','ln','Prace naukowe');
INSERT INTO collectionname VALUES (4,'pt','ln','Teses');
INSERT INTO collectionname VALUES (4,'it','ln','Tesi');
INSERT INTO collectionname VALUES (4,'ru','ln','Диссертации');
INSERT INTO collectionname VALUES (4,'sk','ln','Dizertácie');
INSERT INTO collectionname VALUES (4,'cs','ln','Disertace');
INSERT INTO collectionname VALUES (4,'no','ln','Avhandlinger');
INSERT INTO collectionname VALUES (4,'sv','ln','');
INSERT INTO collectionname VALUES (4,'el','ln','Διατριβές');
INSERT INTO collectionname VALUES (4,'uk','ln','Дисертації');
INSERT INTO collectionname VALUES (4,'ja','ln','説');
INSERT INTO collectionname VALUES (4,'bg','ln','Дисертации');
INSERT INTO collectionname VALUES (4,'hr','ln','Disertacije');
INSERT INTO collectionname VALUES (4,'zh_CN','ln','论文');
INSERT INTO collectionname VALUES (4,'zh_TW','ln','論文');
INSERT INTO collectionname VALUES (4,'hu','ln','Disszertációk');
INSERT INTO collectionname VALUES (4,'af','ln','Tesise');
INSERT INTO collectionname VALUES (4,'gl','ln','Teses');
INSERT INTO collectionname VALUES (4,'ro','ln','Teze');
INSERT INTO collectionname VALUES (4,'rw','ln','Igitabo ky\'ubushakashatsi'); -- '
INSERT INTO collectionname VALUES (4,'ka','ln','თეზისები');
INSERT INTO collectionname VALUES (4,'lt','ln','Disertacijos');
INSERT INTO collectionname VALUES (4,'ar','ln','أطروحات');
+INSERT INTO collectionname VALUES (4,'fa','ln','پایان نامه ها');
INSERT INTO collectionname VALUES (5,'en','ln','Reports');
INSERT INTO collectionname VALUES (5,'fr','ln','Rapports');
INSERT INTO collectionname VALUES (5,'de','ln','Reports');
INSERT INTO collectionname VALUES (5,'es','ln','Informes');
INSERT INTO collectionname VALUES (5,'ca','ln','Informes');
INSERT INTO collectionname VALUES (5,'pl','ln','Raporty');
INSERT INTO collectionname VALUES (5,'pt','ln','Relatórios');
INSERT INTO collectionname VALUES (5,'it','ln','Rapporti');
INSERT INTO collectionname VALUES (5,'ru','ln','Рапорты');
INSERT INTO collectionname VALUES (5,'sk','ln','Správy');
INSERT INTO collectionname VALUES (5,'cs','ln','Zprávy');
INSERT INTO collectionname VALUES (5,'no','ln','Rapporter');
INSERT INTO collectionname VALUES (5,'sv','ln','');
INSERT INTO collectionname VALUES (5,'el','ln','Αναφορές');
INSERT INTO collectionname VALUES (5,'uk','ln','Звіти');
INSERT INTO collectionname VALUES (5,'ja','ln','レポート');
INSERT INTO collectionname VALUES (5,'bg','ln','Доклади');
INSERT INTO collectionname VALUES (5,'hr','ln','Izvještaji');
INSERT INTO collectionname VALUES (5,'zh_CN','ln','报告');
INSERT INTO collectionname VALUES (5,'zh_TW','ln','報告');
INSERT INTO collectionname VALUES (5,'hu','ln','Tanulmányok');
INSERT INTO collectionname VALUES (5,'af','ln','Verslae');
INSERT INTO collectionname VALUES (5,'gl','ln','Informes');
INSERT INTO collectionname VALUES (5,'ro','ln','Rapoarte');
INSERT INTO collectionname VALUES (5,'rw','ln','Raporo');
INSERT INTO collectionname VALUES (5,'ka','ln','რეპორტები');
INSERT INTO collectionname VALUES (5,'lt','ln','Pranešimai');
INSERT INTO collectionname VALUES (5,'ar','ln','تقارير');
+INSERT INTO collectionname VALUES (5,'fa','ln','گزارش ها');
INSERT INTO collectionname VALUES (6,'en','ln','Articles');
INSERT INTO collectionname VALUES (6,'fr','ln','Articles');
INSERT INTO collectionname VALUES (6,'de','ln','Artikel');
INSERT INTO collectionname VALUES (6,'es','ln','Articulos');
INSERT INTO collectionname VALUES (6,'ca','ln','Articles');
INSERT INTO collectionname VALUES (6,'pl','ln','Artykuły');
INSERT INTO collectionname VALUES (6,'pt','ln','Artigos');
INSERT INTO collectionname VALUES (6,'it','ln','Articoli');
INSERT INTO collectionname VALUES (6,'ru','ln','Статьи');
INSERT INTO collectionname VALUES (6,'sk','ln','Články');
INSERT INTO collectionname VALUES (6,'cs','ln','Články');
INSERT INTO collectionname VALUES (6,'no','ln','Artikler');
INSERT INTO collectionname VALUES (6,'sv','ln','');
INSERT INTO collectionname VALUES (6,'el','ln','Άρθρα');
INSERT INTO collectionname VALUES (6,'uk','ln','Статті');
INSERT INTO collectionname VALUES (6,'ja','ln','記事');
INSERT INTO collectionname VALUES (6,'bg','ln','Статии');
INSERT INTO collectionname VALUES (6,'hr','ln','Članci');
INSERT INTO collectionname VALUES (6,'zh_CN','ln','文章');
INSERT INTO collectionname VALUES (6,'zh_TW','ln','文章');
INSERT INTO collectionname VALUES (6,'hu','ln','Cikkek');
INSERT INTO collectionname VALUES (6,'af','ln','Artikels');
INSERT INTO collectionname VALUES (6,'gl','ln','Artigos');
INSERT INTO collectionname VALUES (6,'ro','ln','Articole');
INSERT INTO collectionname VALUES (6,'rw','ln','Ikinyamakuru ky\'ubushakashatsi'); -- '
INSERT INTO collectionname VALUES (6,'ka','ln','სტატიები');
INSERT INTO collectionname VALUES (6,'lt','ln','Straipsniai');
INSERT INTO collectionname VALUES (6,'ar','ln','مقالات');
+INSERT INTO collectionname VALUES (6,'fa','ln','مقاله ها');
INSERT INTO collectionname VALUES (8,'en','ln','Pictures');
INSERT INTO collectionname VALUES (8,'fr','ln','Photos');
INSERT INTO collectionname VALUES (8,'de','ln','Fotos');
INSERT INTO collectionname VALUES (8,'es','ln','Imagenes');
INSERT INTO collectionname VALUES (8,'ca','ln','Imatges');
INSERT INTO collectionname VALUES (8,'pl','ln','Obrazy');
INSERT INTO collectionname VALUES (8,'pt','ln','Fotografias');
INSERT INTO collectionname VALUES (8,'it','ln','Foto');
INSERT INTO collectionname VALUES (8,'ru','ln','Фотографии');
INSERT INTO collectionname VALUES (8,'sk','ln','Fotografie');
INSERT INTO collectionname VALUES (8,'cs','ln','Fotografie');
INSERT INTO collectionname VALUES (8,'no','ln','Fotografier');
INSERT INTO collectionname VALUES (8,'sv','ln','');
INSERT INTO collectionname VALUES (8,'el','ln','Εικόνες');
INSERT INTO collectionname VALUES (8,'uk','ln','Зображення');
INSERT INTO collectionname VALUES (8,'ja','ln','映像');
INSERT INTO collectionname VALUES (8,'bg','ln','Снимки');
INSERT INTO collectionname VALUES (8,'hr','ln','Slike');
INSERT INTO collectionname VALUES (8,'zh_CN','ln','图片');
INSERT INTO collectionname VALUES (8,'zh_TW','ln','圖片');
INSERT INTO collectionname VALUES (8,'hu','ln','Képek');
INSERT INTO collectionname VALUES (8,'af','ln','Prente');
INSERT INTO collectionname VALUES (8,'gl','ln','Imaxes');
INSERT INTO collectionname VALUES (8,'ro','ln','Poze');
INSERT INTO collectionname VALUES (8,'rw','ln','Ifoto');
INSERT INTO collectionname VALUES (8,'ka','ln','სურათები');
INSERT INTO collectionname VALUES (8,'lt','ln','Paveikslėliai');
INSERT INTO collectionname VALUES (8,'ar','ln','صور');
+INSERT INTO collectionname VALUES (8,'fa','ln','تصویرها');
INSERT INTO collectionname VALUES (9,'en','ln','CERN Divisions');
INSERT INTO collectionname VALUES (9,'fr','ln','Divisions du CERN');
INSERT INTO collectionname VALUES (9,'de','ln','Abteilungen des CERN');
INSERT INTO collectionname VALUES (9,'es','ln','Divisiones del CERN');
INSERT INTO collectionname VALUES (9,'ca','ln','Divisions del CERN');
INSERT INTO collectionname VALUES (9,'pl','ln','Działy CERN');
INSERT INTO collectionname VALUES (9,'pt','ln','Divisões do CERN');
INSERT INTO collectionname VALUES (9,'it','ln','Divisioni del CERN');
INSERT INTO collectionname VALUES (9,'ru','ln','Разделения CERNа');
INSERT INTO collectionname VALUES (9,'sk','ln','Oddelenia CERNu');
INSERT INTO collectionname VALUES (9,'cs','ln','Oddělení CERNu');
INSERT INTO collectionname VALUES (9,'no','ln','Divisjoner ved CERN');
INSERT INTO collectionname VALUES (9,'sv','ln','');
INSERT INTO collectionname VALUES (9,'el','ln','Τομείς του CERN');
INSERT INTO collectionname VALUES (9,'uk','ln','Підрозділи CERN');
INSERT INTO collectionname VALUES (9,'ja','ln','CERN 部');
INSERT INTO collectionname VALUES (9,'bg','ln','Отдели в CERN');
INSERT INTO collectionname VALUES (9,'hr','ln','Odjeli CERN-a');
INSERT INTO collectionname VALUES (9,'zh_CN','ln','CERN 分类');
INSERT INTO collectionname VALUES (9,'zh_TW','ln','CERN 分類');
INSERT INTO collectionname VALUES (9,'hu','ln','CERN részlegek');
INSERT INTO collectionname VALUES (9,'af','ln','CERN Afdelings');
INSERT INTO collectionname VALUES (9,'gl','ln','Divisións do CERN');
INSERT INTO collectionname VALUES (9,'ro','ln','Divizii CERN');
INSERT INTO collectionname VALUES (9,'rw','ln','Ishami ya CERN');
INSERT INTO collectionname VALUES (9,'ka','ln','ცერნის განყოფილებები');
INSERT INTO collectionname VALUES (9,'lt','ln','CERN Padaliniai');
INSERT INTO collectionname VALUES (9,'ar','ln','أقسام المنظمة الأوربية للبحوث النووية');
+INSERT INTO collectionname VALUES (9,'fa','ln','بخش های سازمان پژوهش های هسته ای اروپا');
INSERT INTO collectionname VALUES (10,'en','ln','CERN Experiments');
INSERT INTO collectionname VALUES (10,'fr','ln','Expériences du CERN');
INSERT INTO collectionname VALUES (10,'de','ln','Experimente des CERN');
INSERT INTO collectionname VALUES (10,'es','ln','Experimentos del CERN');
INSERT INTO collectionname VALUES (10,'ca','ln','Experiments del CERN');
INSERT INTO collectionname VALUES (10,'pl','ln','Eksperymenty CERN');
INSERT INTO collectionname VALUES (10,'pt','ln','Experimentos do CERN');
INSERT INTO collectionname VALUES (10,'it','ln','Esperimenti del CERN');
INSERT INTO collectionname VALUES (10,'ru','ln','Эксперименты CERNа');
INSERT INTO collectionname VALUES (10,'sk','ln','Experimenty CERNu');
INSERT INTO collectionname VALUES (10,'cs','ln','Experimenty CERNu');
INSERT INTO collectionname VALUES (10,'no','ln','Eksperimenter ved CERN');
INSERT INTO collectionname VALUES (10,'sv','ln','');
INSERT INTO collectionname VALUES (10,'el','ln','Πειράματα του CERN');
INSERT INTO collectionname VALUES (10,'uk','ln','Експерименти CERN');
INSERT INTO collectionname VALUES (10,'ja','ln','CERN の実験');
INSERT INTO collectionname VALUES (10,'bg','ln','Експерименти в CERN');
INSERT INTO collectionname VALUES (10,'hr','ln','Eksperimenti CERN-a');
INSERT INTO collectionname VALUES (10,'zh_CN','ln','CERN 实验');
INSERT INTO collectionname VALUES (10,'zh_TW','ln','CERN 實驗');
INSERT INTO collectionname VALUES (10,'hu','ln','CERN kísérletek');
INSERT INTO collectionname VALUES (10,'af','ln','CERN Experimente');
INSERT INTO collectionname VALUES (10,'gl','ln','Experimentos do CERN');
INSERT INTO collectionname VALUES (10,'ro','ln','Experimente CERN');
INSERT INTO collectionname VALUES (10,'rw','ln','Ubushakashatsi bwa CERN');
INSERT INTO collectionname VALUES (10,'ka','ln','ცერნის ექსპერემენტები');
INSERT INTO collectionname VALUES (10,'lt','ln','CERN Eksperimentai');
INSERT INTO collectionname VALUES (10,'ar','ln','تجارب المنظمة الأوربية للبحوث النووية');
+INSERT INTO collectionname VALUES (10,'fa','ln','آزمایش های سازمان پژوهش های هسته ای اروپا');
INSERT INTO collectionname VALUES (11,'en','ln','Theoretical Physics (TH)');
INSERT INTO collectionname VALUES (11,'fr','ln','Physique Théorique (TH)');
INSERT INTO collectionname VALUES (11,'de','ln','Theoretische Physik (TH)');
INSERT INTO collectionname VALUES (11,'es','ln','Física teórica (TH)');
INSERT INTO collectionname VALUES (11,'ca','ln','Física teòrica (TH)');
INSERT INTO collectionname VALUES (11,'pl','ln','Fizyka Teoretyczna (TH)');
INSERT INTO collectionname VALUES (11,'pt','ln','Física Teórica (TH)');
INSERT INTO collectionname VALUES (11,'it','ln','Fisica Teorica (TH)');
INSERT INTO collectionname VALUES (11,'ru','ln','Теоретическая физика (TH)');
INSERT INTO collectionname VALUES (11,'sk','ln','Teoretická fyzika (TH)');
INSERT INTO collectionname VALUES (11,'cs','ln','Teoretická fyzika (TH)');
INSERT INTO collectionname VALUES (11,'no','ln','Teoretisk fysikk (TH)');
INSERT INTO collectionname VALUES (11,'sv','ln','');
INSERT INTO collectionname VALUES (11,'el','ln','Θεωρητική Φυσική (TH)');
INSERT INTO collectionname VALUES (11,'uk','ln','Теоретична фізика (TH)');
INSERT INTO collectionname VALUES (11,'ja','ln','理論的な物理学 (TH)');
INSERT INTO collectionname VALUES (11,'bg','ln','Теоретична физика (TH)');
INSERT INTO collectionname VALUES (11,'hr','ln','Teorijska fizika (TH)');
INSERT INTO collectionname VALUES (11,'zh_CN','ln','理论物理 (TH)');
INSERT INTO collectionname VALUES (11,'zh_TW','ln','理論物理 (TH)');
INSERT INTO collectionname VALUES (11,'hu','ln','Elméleti fizika (TH)');
INSERT INTO collectionname VALUES (11,'af','ln','Teoretiese Fisika (TH)');
INSERT INTO collectionname VALUES (11,'gl','ln','Física Teórica (TH)');
INSERT INTO collectionname VALUES (11,'ro','ln','Fizică Teoretică (TH)');
INSERT INTO collectionname VALUES (11,'rw','ln','Theoretical Physics (TH)');
INSERT INTO collectionname VALUES (11,'ka','ln','თეორიული ფიზიკა (თფ)');
INSERT INTO collectionname VALUES (11,'lt','ln','Teorinė fizika (TH)');
INSERT INTO collectionname VALUES (11,'ar','ln','الفيزياء النظرية');
+INSERT INTO collectionname VALUES (11,'fa','ln','فیزیک نظری');
INSERT INTO collectionname VALUES (12,'en','ln','Experimental Physics (EP)');
INSERT INTO collectionname VALUES (12,'fr','ln','Physique Expérimentale (EP)');
INSERT INTO collectionname VALUES (12,'de','ln','Experimentelle Physik (EP)');
INSERT INTO collectionname VALUES (12,'es','ln','Física experimental (FE)');
INSERT INTO collectionname VALUES (12,'ca','ln','Física experimental (EP)');
INSERT INTO collectionname VALUES (12,'pl','ln','Fizyka Doświadczalna (EP)');
INSERT INTO collectionname VALUES (12,'pt','ln','Física Experimental (EP)');
INSERT INTO collectionname VALUES (12,'it','ln','Fisica Sperimentale (EP)');
INSERT INTO collectionname VALUES (12,'ru','ln','Экспериментальная Физика (EP)');
INSERT INTO collectionname VALUES (12,'sk','ln','Experimentálna fyzika (EP)');
INSERT INTO collectionname VALUES (12,'cs','ln','Experimentální fyzika (EP)');
INSERT INTO collectionname VALUES (12,'no','ln','Eksperimentell fysikk (EP)');
INSERT INTO collectionname VALUES (12,'sv','ln','');
INSERT INTO collectionname VALUES (12,'el','ln','Πειραματική Φυσική (EP)');
INSERT INTO collectionname VALUES (12,'uk','ln','Експериментальна фізика (EP)');
INSERT INTO collectionname VALUES (12,'ja','ln','実験物理学 (EP)');
INSERT INTO collectionname VALUES (12,'bg','ln','Експериментална физика (EP)');
INSERT INTO collectionname VALUES (12,'hr','ln','Eksperimentalna fizika (EP)');
INSERT INTO collectionname VALUES (12,'zh_CN','ln','实验物理 (EP)');
INSERT INTO collectionname VALUES (12,'zh_TW','ln','實驗物理 (EP)');
INSERT INTO collectionname VALUES (12,'hu','ln','Kísérleti fizika (EP)');
INSERT INTO collectionname VALUES (12,'af','ln','Eksperimentele Fisika (EP)');
INSERT INTO collectionname VALUES (12,'gl','ln','Física Experimental (EP)');
INSERT INTO collectionname VALUES (12,'ro','ln','Fizică Experimentală (EP)');
INSERT INTO collectionname VALUES (12,'rw','ln','Experimental Physics (EP)');
INSERT INTO collectionname VALUES (12,'ka','ln','ექსპერიმენტული ფიზიკა (ეფ)');
INSERT INTO collectionname VALUES (12,'lt','ln','Eksperimentinė fizika (EP)');
INSERT INTO collectionname VALUES (12,'ar','ln','الفيزياء التجريبية');
+INSERT INTO collectionname VALUES (12,'fa','ln','فیزیک تجربی');
INSERT INTO collectionname VALUES (13,'en','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'fr','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'de','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'es','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ca','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'pl','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'pt','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'it','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ru','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'sk','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'cs','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'no','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'sv','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'el','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'uk','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ja','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'bg','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'hr','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'zh_CN','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'zh_TW','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'hu','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'af','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'gl','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ro','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'rw','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ka','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'lt','ln','ISOLDE');
INSERT INTO collectionname VALUES (13,'ar','ln','ISOLDE');
+INSERT INTO collectionname VALUES (13,'fa','ln','ISOLDE');
INSERT INTO collectionname VALUES (14,'en','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'fr','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'de','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'es','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ca','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'pl','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'pt','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'it','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ru','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'sk','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'cs','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'no','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'sv','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'el','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'uk','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ja','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'bg','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'hr','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'zh_CN','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'zh_TW','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'hu','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'af','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'gl','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ro','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'rw','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ka','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'lt','ln','ALEPH');
INSERT INTO collectionname VALUES (14,'ar','ln','ALEPH');
+INSERT INTO collectionname VALUES (14,'fa','ln','ALEPH');
INSERT INTO collectionname VALUES (15,'en','ln','Articles & Preprints');
INSERT INTO collectionname VALUES (15,'fr','ln','Articles et Prétirages');
INSERT INTO collectionname VALUES (15,'de','ln','Artikel & Preprints');
INSERT INTO collectionname VALUES (15,'es','ln','Articulos y preprints');
INSERT INTO collectionname VALUES (15,'ca','ln','Articles i preprints');
INSERT INTO collectionname VALUES (15,'pl','ln','Artykuły i Preprinty');
INSERT INTO collectionname VALUES (15,'pt','ln','Artigos e Preprints');
INSERT INTO collectionname VALUES (15,'it','ln','Articoli e Preprint');
INSERT INTO collectionname VALUES (15,'ru','ln','Статьи и Препринты');
INSERT INTO collectionname VALUES (15,'sk','ln','Články a Preprinty');
INSERT INTO collectionname VALUES (15,'cs','ln','Články a Preprinty');
INSERT INTO collectionname VALUES (15,'no','ln','Artikler og Førtrykk');
INSERT INTO collectionname VALUES (15,'sv','ln','');
INSERT INTO collectionname VALUES (15,'el','ln','Άρθρα & Προδημοσιεύσεις');
INSERT INTO collectionname VALUES (15,'uk','ln','Статті та Препринти');
INSERT INTO collectionname VALUES (15,'ja','ln','記事及びプレプリント');
INSERT INTO collectionname VALUES (15,'bg','ln','Статии и Препринти');
INSERT INTO collectionname VALUES (15,'hr','ln','Članci i Preprinti');
INSERT INTO collectionname VALUES (15,'zh_CN','ln','文章和预印');
INSERT INTO collectionname VALUES (15,'zh_TW','ln','文章和預印');
INSERT INTO collectionname VALUES (15,'hu','ln','Cikkek és Preprintek');
INSERT INTO collectionname VALUES (15,'af','ln','Artikels & Pre-drukke');
INSERT INTO collectionname VALUES (15,'gl','ln','Artigos e Preprints');
INSERT INTO collectionname VALUES (15,'ro','ln','Articole şi Preprinturi');
INSERT INTO collectionname VALUES (15,'rw','ln','Ibinyamakuru');
INSERT INTO collectionname VALUES (15,'ka','ln','სტატიები და პრეპრინტები');
INSERT INTO collectionname VALUES (15,'lt','ln','Straipsniai ir Rankraščiai');
INSERT INTO collectionname VALUES (15,'ar','ln','مقالات & مسودات');
+INSERT INTO collectionname VALUES (15,'fa','ln','مقاله ها و پیش چاپ ها');
INSERT INTO collectionname VALUES (16,'en','ln','Books & Reports');
INSERT INTO collectionname VALUES (16,'fr','ln','Livres et Rapports');
INSERT INTO collectionname VALUES (16,'de','ln','Monographien & Reports');
INSERT INTO collectionname VALUES (16,'es','ln','Libros e informes');
INSERT INTO collectionname VALUES (16,'ca','ln','Llibres i informes');
INSERT INTO collectionname VALUES (16,'pl','ln','Książki i Raporty');
INSERT INTO collectionname VALUES (16,'pt','ln','Livros e Relatórios');
INSERT INTO collectionname VALUES (16,'it','ln','Libri e Rapporti');
INSERT INTO collectionname VALUES (16,'ru','ln','Книги и Рапорты');
INSERT INTO collectionname VALUES (16,'sk','ln','Knihy a Správy');
INSERT INTO collectionname VALUES (16,'cs','ln','Knihy a Zprávy');
INSERT INTO collectionname VALUES (16,'no','ln','Bøker og Rapporter');
INSERT INTO collectionname VALUES (16,'sv','ln','');
INSERT INTO collectionname VALUES (16,'el','ln','Βιβλία & Αναφορές');
INSERT INTO collectionname VALUES (16,'uk','ln','Книги та Звіти');
INSERT INTO collectionname VALUES (16,'ja','ln','本及びレポート');
INSERT INTO collectionname VALUES (16,'bg','ln','Книги и Доклади');
INSERT INTO collectionname VALUES (16,'hr','ln','Knjige i Izvještaji');
INSERT INTO collectionname VALUES (16,'zh_CN','ln','书本和报告');
INSERT INTO collectionname VALUES (16,'zh_TW','ln','書本和報告');
INSERT INTO collectionname VALUES (16,'hu','ln','Könyvek és tanulmányok');
INSERT INTO collectionname VALUES (16,'af','ln','Boeke & Verslae');
INSERT INTO collectionname VALUES (16,'gl','ln','Libros e Informes');
INSERT INTO collectionname VALUES (16,'ro','ln','Cărţi şi Rapoarte');
INSERT INTO collectionname VALUES (16,'rw','ln','Ibitabo & Raporo');
INSERT INTO collectionname VALUES (16,'ka','ln','წიგნები და მოხსენებები');
INSERT INTO collectionname VALUES (16,'lt','ln','Knygos ir Pranešimai');
INSERT INTO collectionname VALUES (16,'ar','ln','كتب & تقارير');
+INSERT INTO collectionname VALUES (16,'fa','ln','کتاب ها و گزارش ها');
INSERT INTO collectionname VALUES (17,'en','ln','Multimedia & Arts');
INSERT INTO collectionname VALUES (17,'fr','ln','Multimédia et Arts');
INSERT INTO collectionname VALUES (17,'de','ln','Multimedia & Kunst');
INSERT INTO collectionname VALUES (17,'es','ln','Multimedia y artes');
INSERT INTO collectionname VALUES (17,'ca','ln','Multimèdia i arts');
INSERT INTO collectionname VALUES (17,'pl','ln','Multimedia i Sztuka');
INSERT INTO collectionname VALUES (17,'pt','ln','Multimédia e Artes');
INSERT INTO collectionname VALUES (17,'it','ln','Multimedia e Arti');
INSERT INTO collectionname VALUES (17,'ru','ln','Мультимедиа и Искусство');
INSERT INTO collectionname VALUES (17,'sk','ln','Multimédia a Umenie');
INSERT INTO collectionname VALUES (17,'cs','ln','Multimédia a Umění');
INSERT INTO collectionname VALUES (17,'no','ln','Multimedia og Grafikk');
INSERT INTO collectionname VALUES (17,'sv','ln','');
INSERT INTO collectionname VALUES (17,'el','ln','Πολυμέσα & Τέχνες');
INSERT INTO collectionname VALUES (17,'uk','ln','Мультимедіа та Мистецтво');
INSERT INTO collectionname VALUES (17,'ja','ln','マルチメディア及び芸術');
INSERT INTO collectionname VALUES (17,'bg','ln','Мултимедия и Изкуства');
INSERT INTO collectionname VALUES (17,'hr','ln','Multimedija i Umjetnost');
INSERT INTO collectionname VALUES (17,'zh_CN','ln','多媒体和艺术');
INSERT INTO collectionname VALUES (17,'zh_TW','ln','多媒體和藝術');
INSERT INTO collectionname VALUES (17,'hu','ln','Multimédia és képzőművészet');
INSERT INTO collectionname VALUES (17,'af','ln','Multimedia & Kunste');
INSERT INTO collectionname VALUES (17,'gl','ln','Multimedia e Arte');
INSERT INTO collectionname VALUES (17,'ro','ln','Multimedia şi Arte');
INSERT INTO collectionname VALUES (17,'rw','ln','Multimedia & Arts');
INSERT INTO collectionname VALUES (17,'ka','ln','მულტიმედია და ხელოვნება');
INSERT INTO collectionname VALUES (17,'lt','ln','Multimedija ir Menas');
INSERT INTO collectionname VALUES (17,'ar','ln','وسائط متعددة & فنون');
+INSERT INTO collectionname VALUES (17,'fa','ln','چندرسانه ای و هنرها');
INSERT INTO collectionname VALUES (18,'en','ln','Poetry');
INSERT INTO collectionname VALUES (18,'fr','ln','Poésie');
INSERT INTO collectionname VALUES (18,'de','ln','Poesie');
INSERT INTO collectionname VALUES (18,'es','ln','Poesía');
INSERT INTO collectionname VALUES (18,'ca','ln','Poesia');
INSERT INTO collectionname VALUES (18,'pl','ln','Poezja');
INSERT INTO collectionname VALUES (18,'pt','ln','Poesia');
INSERT INTO collectionname VALUES (18,'it','ln','Poesia');
INSERT INTO collectionname VALUES (18,'ru','ln','Поэзия');
INSERT INTO collectionname VALUES (18,'sk','ln','Poézia');
INSERT INTO collectionname VALUES (18,'cs','ln','Poezie');
INSERT INTO collectionname VALUES (18,'no','ln','Poesi');
INSERT INTO collectionname VALUES (18,'sv','ln','');
INSERT INTO collectionname VALUES (18,'el','ln','Ποίηση');
INSERT INTO collectionname VALUES (18,'uk','ln','Поезія');
INSERT INTO collectionname VALUES (18,'ja','ln','詩歌');
INSERT INTO collectionname VALUES (18,'bg','ln','Поезия');
INSERT INTO collectionname VALUES (18,'hr','ln','Poezija');
INSERT INTO collectionname VALUES (18,'zh_CN','ln','诗歌');
INSERT INTO collectionname VALUES (18,'zh_TW','ln','詩歌');
INSERT INTO collectionname VALUES (18,'hu','ln','Költészet');
INSERT INTO collectionname VALUES (18,'af','ln','Poësie');
INSERT INTO collectionname VALUES (18,'gl','ln','Poesía');
INSERT INTO collectionname VALUES (18,'ro','ln','Poezie');
INSERT INTO collectionname VALUES (18,'rw','ln','Umuvugo');
INSERT INTO collectionname VALUES (18,'ka','ln','პოეზია');
INSERT INTO collectionname VALUES (18,'lt','ln','Poezija');
INSERT INTO collectionname VALUES (18,'ar','ln','شعر');
+INSERT INTO collectionname VALUES (18,'fa','ln','شعر');
INSERT INTO collectionname VALUES (19,'en','ln','Atlantis Times News');
INSERT INTO collectionname VALUES (19,'fr','ln','Atlantis Times Actualités');
INSERT INTO collectionname VALUES (20,'en','ln','Atlantis Times Arts');
INSERT INTO collectionname VALUES (20,'fr','ln','Atlantis Times Arts');
INSERT INTO collectionname VALUES (21,'en','ln','Atlantis Times Science');
INSERT INTO collectionname VALUES (21,'fr','ln','Atlantis Times Science');
INSERT INTO collectionname VALUES (22,'en','ln','Atlantis Times');
INSERT INTO collectionname VALUES (22,'fr','ln','Atlantis Times');
INSERT INTO collectionname VALUES (23,'en','ln','Atlantis Institute Books');
INSERT INTO collectionname VALUES (23,'fr','ln','Atlantis Institute Books');
INSERT INTO collectionname VALUES (24,'en','ln','Atlantis Institute Articles');
INSERT INTO collectionname VALUES (24,'fr','ln','Atlantis Institute Articles');
INSERT INTO collectionname VALUES (25,'en','ln','Atlantis Times Drafts');
INSERT INTO collectionname VALUES (25,'fr','ln','Atlantis Times Ébauches');
INSERT INTO collectionname VALUES (26,'en','ln','Notes');
INSERT INTO collectionname VALUES (27,'en','ln','ALEPH Papers');
INSERT INTO collectionname VALUES (28,'en','ln','ALEPH Internal Notes');
INSERT INTO collectionname VALUES (29,'en','ln','ALEPH Theses');
INSERT INTO collectionname VALUES (30,'en','ln','ISOLDE Papers');
INSERT INTO collectionname VALUES (31,'en','ln','ISOLDE Internal Notes');
INSERT INTO collectionname VALUES (32,'en','ln','Drafts');
INSERT INTO collectionname VALUES (33,'en','ln','Videos');
INSERT INTO collectionname VALUES (33,'fr','ln','Vidéos');
INSERT INTO collectionname VALUES (33,'it','ln','Filmati');
+INSERT INTO collectionname VALUES (34,'en','ln','Authority Records');
+INSERT INTO collectionname VALUES (34,'fr','ln','Notices d''autorité');
+INSERT INTO collectionname VALUES (34,'pl','ln','Rekordy kontrolne');
+INSERT INTO collectionname VALUES (35,'en','ln','Authors');
+INSERT INTO collectionname VALUES (35,'fr','ln','Auteurs');
+INSERT INTO collectionname VALUES (35,'pl','ln','Autorzy');
+
+INSERT INTO collectionname VALUES (36,'en','ln','Institutions');
+INSERT INTO collectionname VALUES (36,'fr','ln','Institutions');
+INSERT INTO collectionname VALUES (36,'pl','ln','Instytucje');
+
+INSERT INTO collectionname VALUES (37,'en','ln','Journals');
+INSERT INTO collectionname VALUES (37,'fr','ln','Journals');
+INSERT INTO collectionname VALUES (37,'pl','ln','Czasopisma');
+
+INSERT INTO collectionname VALUES (38,'en','ln','Subjects');
+INSERT INTO collectionname VALUES (38,'fr','ln','Sujets');
+INSERT INTO collectionname VALUES (38,'pl','ln','Tematy');
INSERT INTO collection_collection VALUES (1,15,'r',60);
INSERT INTO collection_collection VALUES (1,16,'r',40);
INSERT INTO collection_collection VALUES (1,17,'r',30);
-- INSERT INTO collection_collection VALUES (1,23,'r',20);
-- INSERT INTO collection_collection VALUES (1,24,'r',10);
INSERT INTO collection_collection VALUES (15,6,'r',20);
INSERT INTO collection_collection VALUES (15,2,'r',10);
INSERT INTO collection_collection VALUES (15,32,'r',10);
INSERT INTO collection_collection VALUES (15,26,'r',10);
INSERT INTO collection_collection VALUES (16,3,'r',30);
INSERT INTO collection_collection VALUES (16,4,'r',20);
INSERT INTO collection_collection VALUES (16,5,'r',10);
INSERT INTO collection_collection VALUES (17,8,'r',30);
INSERT INTO collection_collection VALUES (17,18,'r',20);
INSERT INTO collection_collection VALUES (17,22,'r',10);
INSERT INTO collection_collection VALUES (17,33,'r',30);
INSERT INTO collection_collection VALUES (22,19,'r',30);
INSERT INTO collection_collection VALUES (22,20,'r',20);
INSERT INTO collection_collection VALUES (22,21,'r',10);
INSERT INTO collection_collection VALUES (1,9,'v',20);
INSERT INTO collection_collection VALUES (1,10,'v',10);
INSERT INTO collection_collection VALUES (9,11,'r',10);
INSERT INTO collection_collection VALUES (9,12,'r',20);
INSERT INTO collection_collection VALUES (10,13,'r',10);
INSERT INTO collection_collection VALUES (10,14,'r',20);
INSERT INTO collection_collection VALUES (13,30,'r',20);
-- INSERT INTO collection_collection VALUES (13,31,'r',10); -- ISOLDE Internal Notes
INSERT INTO collection_collection VALUES (14,27,'r',20);
INSERT INTO collection_collection VALUES (14,28,'r',10);
INSERT INTO collection_collection VALUES (14,29,'r',10);
-
-
+INSERT INTO collection_collection VALUES (1,34,'v',5);
+INSERT INTO collection_collection VALUES (34,35,'r',4);
+INSERT INTO collection_collection VALUES (34,36,'r',3);
+INSERT INTO collection_collection VALUES (34,37,'r',2);
+INSERT INTO collection_collection VALUES (34,38,'r',1);
INSERT INTO collection_example VALUES (1,1,1);
INSERT INTO collection_example VALUES (1,5,2);
INSERT INTO collection_example VALUES (1,8,3);
INSERT INTO collection_example VALUES (1,7,5);
INSERT INTO collection_example VALUES (1,6,4);
INSERT INTO collection_example VALUES (1,4,6);
INSERT INTO collection_example VALUES (1,3,7);
INSERT INTO collection_example VALUES (1,13,50);
INSERT INTO collection_example VALUES (1,2,8);
INSERT INTO collection_example VALUES (2,1,1);
INSERT INTO collection_example VALUES (2,5,2);
INSERT INTO collection_example VALUES (2,8,3);
INSERT INTO collection_example VALUES (2,7,5);
INSERT INTO collection_example VALUES (2,6,4);
INSERT INTO collection_example VALUES (2,4,6);
INSERT INTO collection_example VALUES (2,3,7);
INSERT INTO collection_example VALUES (2,2,8);
INSERT INTO collection_example VALUES (3,6,30);
INSERT INTO collection_example VALUES (3,17,10);
INSERT INTO collection_example VALUES (3,18,20);
INSERT INTO collection_example VALUES (4,1,1);
INSERT INTO collection_example VALUES (4,5,2);
INSERT INTO collection_example VALUES (4,8,3);
INSERT INTO collection_example VALUES (4,7,5);
INSERT INTO collection_example VALUES (4,6,4);
INSERT INTO collection_example VALUES (4,4,6);
INSERT INTO collection_example VALUES (4,3,7);
INSERT INTO collection_example VALUES (4,2,8);
INSERT INTO collection_example VALUES (5,1,1);
INSERT INTO collection_example VALUES (5,5,2);
INSERT INTO collection_example VALUES (5,8,3);
INSERT INTO collection_example VALUES (5,7,5);
INSERT INTO collection_example VALUES (5,6,4);
INSERT INTO collection_example VALUES (5,4,6);
INSERT INTO collection_example VALUES (5,3,7);
INSERT INTO collection_example VALUES (5,2,8);
INSERT INTO collection_example VALUES (6,1,10);
INSERT INTO collection_example VALUES (6,5,20);
INSERT INTO collection_example VALUES (6,8,30);
INSERT INTO collection_example VALUES (6,0,27);
INSERT INTO collection_example VALUES (6,4,40);
INSERT INTO collection_example VALUES (6,3,60);
INSERT INTO collection_example VALUES (6,2,80);
INSERT INTO collection_example VALUES (8,14,10);
INSERT INTO collection_example VALUES (8,15,20);
INSERT INTO collection_example VALUES (8,16,30);
INSERT INTO collection_example VALUES (15,0,27);
INSERT INTO collection_example VALUES (15,1,1);
INSERT INTO collection_example VALUES (15,2,8);
INSERT INTO collection_example VALUES (15,3,60);
INSERT INTO collection_example VALUES (15,4,40);
INSERT INTO collection_example VALUES (15,5,2);
INSERT INTO collection_example VALUES (15,6,4);
INSERT INTO collection_example VALUES (15,7,5);
INSERT INTO collection_example VALUES (15,8,3);
INSERT INTO collection_example VALUES (16,1,1);
INSERT INTO collection_example VALUES (16,2,8);
INSERT INTO collection_example VALUES (16,3,7);
INSERT INTO collection_example VALUES (16,4,6);
INSERT INTO collection_example VALUES (16,5,2);
INSERT INTO collection_example VALUES (16,6,4);
INSERT INTO collection_example VALUES (16,7,5);
INSERT INTO collection_example VALUES (16,8,3);
INSERT INTO collection_example VALUES (17,14,10);
INSERT INTO collection_example VALUES (17,15,20);
INSERT INTO collection_example VALUES (17,16,30);
INSERT INTO collection_example VALUES (1,19,0);
INSERT INTO collection_example VALUES (15,19,0);
INSERT INTO collection_example VALUES (16,19,0);
INSERT INTO collection_field_fieldvalue VALUES (2,7,7,'seo',10,18);
INSERT INTO collection_field_fieldvalue VALUES (2,7,6,'seo',10,19);
INSERT INTO collection_field_fieldvalue VALUES (2,7,5,'seo',10,20);
INSERT INTO collection_field_fieldvalue VALUES (2,7,4,'seo',10,21);
INSERT INTO collection_field_fieldvalue VALUES (6,7,1,'seo',2,24);
INSERT INTO collection_field_fieldvalue VALUES (6,7,2,'seo',2,23);
INSERT INTO collection_field_fieldvalue VALUES (6,7,3,'seo',2,22);
INSERT INTO collection_field_fieldvalue VALUES (6,7,4,'seo',2,21);
INSERT INTO collection_field_fieldvalue VALUES (6,7,5,'seo',2,20);
INSERT INTO collection_field_fieldvalue VALUES (6,7,6,'seo',2,19);
INSERT INTO collection_field_fieldvalue VALUES (6,7,7,'seo',2,18);
INSERT INTO collection_field_fieldvalue VALUES (6,7,8,'seo',2,17);
INSERT INTO collection_field_fieldvalue VALUES (6,7,9,'seo',2,16);
INSERT INTO collection_field_fieldvalue VALUES (6,7,10,'seo',2,15);
INSERT INTO collection_field_fieldvalue VALUES (6,7,11,'seo',2,14);
INSERT INTO collection_field_fieldvalue VALUES (6,7,12,'seo',2,13);
INSERT INTO collection_field_fieldvalue VALUES (6,7,13,'seo',2,12);
INSERT INTO collection_field_fieldvalue VALUES (6,7,14,'seo',2,11);
INSERT INTO collection_field_fieldvalue VALUES (6,7,15,'seo',2,10);
INSERT INTO collection_field_fieldvalue VALUES (6,7,16,'seo',2,9);
INSERT INTO collection_field_fieldvalue VALUES (6,7,17,'seo',2,8);
INSERT INTO collection_field_fieldvalue VALUES (6,7,18,'seo',2,7);
INSERT INTO collection_field_fieldvalue VALUES (6,7,19,'seo',2,6);
INSERT INTO collection_field_fieldvalue VALUES (6,7,20,'seo',2,5);
INSERT INTO collection_field_fieldvalue VALUES (6,7,21,'seo',2,4);
INSERT INTO collection_field_fieldvalue VALUES (6,7,22,'seo',2,3);
INSERT INTO collection_field_fieldvalue VALUES (6,7,23,'seo',2,2);
INSERT INTO collection_field_fieldvalue VALUES (6,7,24,'seo',2,1);
INSERT INTO collection_field_fieldvalue VALUES (2,7,3,'seo',10,22);
INSERT INTO collection_field_fieldvalue VALUES (2,7,2,'seo',10,23);
INSERT INTO collection_field_fieldvalue VALUES (6,8,NULL,'sew',2,0);
INSERT INTO collection_field_fieldvalue VALUES (2,7,1,'seo',10,24);
INSERT INTO collection_field_fieldvalue VALUES (6,4,NULL,'sew',4,70);
INSERT INTO collection_field_fieldvalue VALUES (6,2,NULL,'sew',3,70);
INSERT INTO collection_field_fieldvalue VALUES (6,19,NULL,'sew',3,65);
INSERT INTO collection_field_fieldvalue VALUES (6,5,NULL,'sew',1,70);
INSERT INTO collection_field_fieldvalue VALUES (6,11,25,'seo',1,1);
INSERT INTO collection_field_fieldvalue VALUES (6,11,26,'seo',1,2);
INSERT INTO collection_field_fieldvalue VALUES (8,7,27,'seo',10,3);
INSERT INTO collection_field_fieldvalue VALUES (8,7,28,'seo',10,1);
INSERT INTO collection_field_fieldvalue VALUES (8,7,29,'seo',10,4);
INSERT INTO collection_field_fieldvalue VALUES (8,7,30,'seo',10,2);
INSERT INTO collection_field_fieldvalue VALUES (6,3,NULL,'sew',5,70);
INSERT INTO collection_field_fieldvalue VALUES (2,7,8,'seo',10,17);
INSERT INTO collection_field_fieldvalue VALUES (2,7,9,'seo',10,16);
INSERT INTO collection_field_fieldvalue VALUES (2,7,10,'seo',10,15);
INSERT INTO collection_field_fieldvalue VALUES (2,7,11,'seo',10,14);
INSERT INTO collection_field_fieldvalue VALUES (2,7,12,'seo',10,13);
INSERT INTO collection_field_fieldvalue VALUES (2,7,13,'seo',10,12);
INSERT INTO collection_field_fieldvalue VALUES (2,7,14,'seo',10,11);
INSERT INTO collection_field_fieldvalue VALUES (2,7,15,'seo',10,10);
INSERT INTO collection_field_fieldvalue VALUES (2,7,16,'seo',10,9);
INSERT INTO collection_field_fieldvalue VALUES (2,7,17,'seo',10,8);
INSERT INTO collection_field_fieldvalue VALUES (2,7,18,'seo',10,7);
INSERT INTO collection_field_fieldvalue VALUES (2,7,19,'seo',10,6);
INSERT INTO collection_field_fieldvalue VALUES (2,7,20,'seo',10,5);
INSERT INTO collection_field_fieldvalue VALUES (2,7,21,'seo',10,4);
INSERT INTO collection_field_fieldvalue VALUES (2,7,22,'seo',10,3);
INSERT INTO collection_field_fieldvalue VALUES (2,7,23,'seo',10,2);
INSERT INTO collection_field_fieldvalue VALUES (2,7,24,'seo',10,1);
INSERT INTO collection_field_fieldvalue VALUES (2,8,NULL,'sew',20,0);
INSERT INTO collection_field_fieldvalue VALUES (2,4,NULL,'sew',40,70);
INSERT INTO collection_field_fieldvalue VALUES (2,2,NULL,'sew',60,70);
INSERT INTO collection_field_fieldvalue VALUES (2,5,NULL,'sew',30,70);
INSERT INTO collection_field_fieldvalue VALUES (2,11,26,'seo',5,1);
INSERT INTO collection_field_fieldvalue VALUES (2,3,NULL,'sew',50,70);
INSERT INTO collection_field_fieldvalue VALUES (2,11,25,'seo',5,2);
INSERT INTO collection_field_fieldvalue VALUES (2,11,32,'seo',5,0);
INSERT INTO collection_field_fieldvalue VALUES (3,2,NULL,'sew',10,0);
INSERT INTO collection_field_fieldvalue VALUES (3,3,NULL,'sew',20,0);
INSERT INTO collection_field_fieldvalue VALUES (3,12,NULL,'sew',30,0);
INSERT INTO collection_field_fieldvalue VALUES (4,4,NULL,'sew',30,0);
INSERT INTO collection_field_fieldvalue VALUES (4,3,NULL,'sew',40,0);
INSERT INTO collection_field_fieldvalue VALUES (4,12,NULL,'sew',10,0);
INSERT INTO collection_field_fieldvalue VALUES (4,2,NULL,'sew',50,0);
INSERT INTO collection_field_fieldvalue VALUES (4,6,NULL,'sew',20,0);
INSERT INTO collection_field_fieldvalue VALUES (4,7,NULL,'seo',10,0);
INSERT INTO collection_field_fieldvalue VALUES (4,7,12,'seo',10,2);
INSERT INTO collection_field_fieldvalue VALUES (4,7,8,'seo',10,3);
INSERT INTO collection_field_fieldvalue VALUES (4,7,10,'seo',10,1);
INSERT INTO collection_field_fieldvalue VALUES (5,6,NULL,'sew',20,0);
INSERT INTO collection_field_fieldvalue VALUES (5,12,NULL,'sew',10,0);
INSERT INTO collection_field_fieldvalue VALUES (5,4,NULL,'sew',30,0);
INSERT INTO collection_field_fieldvalue VALUES (5,3,NULL,'sew',40,0);
INSERT INTO collection_field_fieldvalue VALUES (5,2,NULL,'sew',50,0);
INSERT INTO collection_field_fieldvalue VALUES (5,7,NULL,'seo',10,0);
INSERT INTO collection_field_fieldvalue VALUES (5,7,9,'seo',10,3);
INSERT INTO collection_field_fieldvalue VALUES (5,7,12,'seo',10,2);
INSERT INTO collection_field_fieldvalue VALUES (8,6,NULL,'sew',10,0);
INSERT INTO collection_field_fieldvalue VALUES (8,2,NULL,'sew',50,0);
INSERT INTO collection_field_fieldvalue VALUES (8,3,NULL,'sew',40,0);
INSERT INTO collection_field_fieldvalue VALUES (8,5,NULL,'sew',20,0);
INSERT INTO collection_field_fieldvalue VALUES (8,4,NULL,'sew',30,0);
INSERT INTO collection_field_fieldvalue VALUES (1,2,NULL,'soo',40,0);
INSERT INTO collection_field_fieldvalue VALUES (1,3,NULL,'soo',30,0);
INSERT INTO collection_field_fieldvalue VALUES (1,6,NULL,'soo',20,0);
INSERT INTO collection_field_fieldvalue VALUES (1,12,NULL,'soo',10,0);
INSERT INTO collection_field_fieldvalue VALUES (3,2,NULL,'soo',40,0);
INSERT INTO collection_field_fieldvalue VALUES (3,3,NULL,'soo',30,0);
INSERT INTO collection_field_fieldvalue VALUES (3,15,NULL,'soo',20,0);
INSERT INTO collection_field_fieldvalue VALUES (3,12,NULL,'soo',10,0);
+INSERT INTO collection_field_fieldvalue VALUES (34,33,NULL,'sew',4,0);
+INSERT INTO collection_field_fieldvalue VALUES (34,34,NULL,'sew',3,0);
+INSERT INTO collection_field_fieldvalue VALUES (34,35,NULL,'sew',2,0);
+INSERT INTO collection_field_fieldvalue VALUES (34,36,NULL,'sew',1,0);
+INSERT INTO collection_field_fieldvalue VALUES (35,33,NULL,'sew',1,0);
+INSERT INTO collection_field_fieldvalue VALUES (36,34,NULL,'sew',1,0);
+INSERT INTO collection_field_fieldvalue VALUES (37,35,NULL,'sew',1,0);
+INSERT INTO collection_field_fieldvalue VALUES (38,36,NULL,'sew',1,0);
INSERT INTO collection_format VALUES (6,1,100);
INSERT INTO collection_format VALUES (6,2,90);
INSERT INTO collection_format VALUES (6,3,80);
INSERT INTO collection_format VALUES (6,4,70);
INSERT INTO collection_format VALUES (6,5,60);
INSERT INTO collection_format VALUES (2,1,100);
INSERT INTO collection_format VALUES (2,2,90);
INSERT INTO collection_format VALUES (2,3,80);
INSERT INTO collection_format VALUES (2,4,70);
INSERT INTO collection_format VALUES (2,5,60);
INSERT INTO collection_format VALUES (3,1,100);
INSERT INTO collection_format VALUES (3,2,90);
INSERT INTO collection_format VALUES (3,3,80);
INSERT INTO collection_format VALUES (3,4,70);
INSERT INTO collection_format VALUES (3,5,60);
INSERT INTO collection_format VALUES (4,1,100);
INSERT INTO collection_format VALUES (4,2,90);
INSERT INTO collection_format VALUES (4,3,80);
INSERT INTO collection_format VALUES (4,4,70);
INSERT INTO collection_format VALUES (4,5,60);
INSERT INTO collection_format VALUES (5,1,100);
INSERT INTO collection_format VALUES (5,2,90);
INSERT INTO collection_format VALUES (5,3,80);
INSERT INTO collection_format VALUES (5,4,70);
INSERT INTO collection_format VALUES (5,5,60);
INSERT INTO collection_format VALUES (8,1,100);
INSERT INTO collection_format VALUES (8,2,90);
INSERT INTO collection_format VALUES (8,3,80);
INSERT INTO collection_format VALUES (8,4,70);
INSERT INTO collection_format VALUES (8,5,60);
INSERT INTO collection_format VALUES (8,6,96);
INSERT INTO collection_format VALUES (8,7,93);
INSERT INTO collection_format VALUES (1,1,100);
INSERT INTO collection_format VALUES (1,2,90);
INSERT INTO collection_format VALUES (1,3,80);
INSERT INTO collection_format VALUES (1,4,70);
INSERT INTO collection_format VALUES (1,5,60);
INSERT INTO collection_format VALUES (15,1,100);
INSERT INTO collection_format VALUES (15,2,90);
INSERT INTO collection_format VALUES (15,18,85);
INSERT INTO collection_format VALUES (15,3,80);
INSERT INTO collection_format VALUES (15,4,70);
INSERT INTO collection_format VALUES (15,5,60);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,1,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,2,'en','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (6,3,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (6,49,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (6,4,'en','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (2,5,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (2,45,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (2,6,'en','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (3,7,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (3,46,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (3,8,'en','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (4,9,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (4,47,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (4,10,'en','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (5,11,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (5,48,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (8,14,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (8,50,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (9,15,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (10,16,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (11,17,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (12,18,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (13,19,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (14,20,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (15,21,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (15,51,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (16,22,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (16,52,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (17,23,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (17,53,'en','rt',95);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (18,24,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,25,'fr','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,26,'fr','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,27,'sk','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,28,'sk','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,29,'cs','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,30,'cs','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,31,'de','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,32,'de','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,33,'es','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,34,'es','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,35,'it','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,36,'it','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,37,'no','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,38,'no','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,39,'pt','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,40,'pt','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,41,'ru','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,42,'ru','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,43,'sv','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,44,'sv','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,54,'el','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,55,'el','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,56,'uk','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,57,'uk','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,58,'ca','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,59,'ca','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,60,'ja','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,61,'ja','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,62,'pl','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,63,'pl','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,64,'bg','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,65,'bg','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,66,'hr','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,67,'hr','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,68,'zh_CN','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,69,'zh_CN','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,70,'zh_TW','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,71,'zh_TW','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,72,'hu','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,73,'hu','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,74,'af','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,75,'af','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,76,'gl','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,77,'gl','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (19,78,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (20,78,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (21,78,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (22,78,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,79,'ro','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,80,'ro','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,81,'rw','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,82,'rw','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,83,'ka','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,84,'ka','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,85,'lt','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,86,'lt','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,87,'ar','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,88,'ar','rt',90);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (26,89,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (27,90,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (28,91,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (29,92,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (30,93,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (31,94,'en','rt',100);
INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (32,95,'en','rt',100);
+INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,96,'fa','rt',100);
+INSERT INTO collection_portalbox (id_collection,id_portalbox,ln,position,score) VALUES (1,97,'fa','rt',90);
INSERT INTO collectiondetailedrecordpagetabs(id_collection,tabs) VALUES(33,'');
INSERT INTO example VALUES (1,'author search','author:"Ellis, J"');
INSERT INTO example VALUES (2,'word search','quantum');
INSERT INTO example VALUES (3,'wildcard word search','quant*');
INSERT INTO example VALUES (4,'phrase search','title:\'standard model\'');
INSERT INTO example VALUES (5,'boolean search','quark -sigma +dense');
INSERT INTO example VALUES (6,'complex boolean search','author:draper title:electrical');
INSERT INTO example VALUES (7,'complex boolean search','author:ellis -muon* +abstract:\'dense quark matter\'');
INSERT INTO example VALUES (8,'boolean search','ellis muon*');
INSERT INTO example VALUES (13,'reference search','references:"Theor. Math. Phys. 2 (1998) 231"');
INSERT INTO example VALUES (14,'phrase search','abstract:\'Higgs boson\'');
INSERT INTO example VALUES (15,'wildcard word search','cal*');
INSERT INTO example VALUES (16,'keyword search','keyword:Nobel');
INSERT INTO example VALUES (17,'author search','author:Cole');
INSERT INTO example VALUES (18,'phrase search','title:\'nuclear electronics\'');
INSERT INTO example VALUES (19,'combined search','supergravity and author:"Ellis, J" and year:1980->1990');
INSERT INTO fieldvalue VALUES (1,'Particle Physics','Particle Physics');
INSERT INTO fieldvalue VALUES (2,'Particle Physics - Experimental Results','Particle Physics - Experimental Results');
INSERT INTO fieldvalue VALUES (3,'Particle Physics - Phenomenology','Particle Physics - Phenomenology');
INSERT INTO fieldvalue VALUES (4,'Particle Physics - Theory','Particle Physics - Theory');
INSERT INTO fieldvalue VALUES (5,'Particle Physics - Lattice','Particle Physics - Lattice');
INSERT INTO fieldvalue VALUES (6,'Nuclear Physics','Nuclear Physics');
INSERT INTO fieldvalue VALUES (7,'General Relativity and Cosmology','General Relativity and Cosmology');
INSERT INTO fieldvalue VALUES (8,'General Theoretical Physics','General Theoretical Physics');
INSERT INTO fieldvalue VALUES (9,'Detectors and Experimental Techniques','Detectors and Experimental Techniques');
INSERT INTO fieldvalue VALUES (10,'Accelerators and Storage Rings','Accelerators and Storage Rings');
INSERT INTO fieldvalue VALUES (11,'Health Physics and Radiation Effects','Health Physics and Radiation Effects');
INSERT INTO fieldvalue VALUES (12,'Computing and Computers','Computing and Computers');
INSERT INTO fieldvalue VALUES (13,'Mathematical Physics and Mathematics','Mathematical Physics and Mathematics');
INSERT INTO fieldvalue VALUES (14,'Astrophysics and Astronomy','Astrophysics and Astronomy');
INSERT INTO fieldvalue VALUES (15,'Nonlinear Systems','Nonlinear Systems');
INSERT INTO fieldvalue VALUES (16,'Condensed Matter','Condensed Matter');
INSERT INTO fieldvalue VALUES (17,'Other Fields of Physics','Other Fields of Physics');
INSERT INTO fieldvalue VALUES (18,'Chemical Physics and Chemistry','Chemical Physics and Chemistry');
INSERT INTO fieldvalue VALUES (19,'Engineering','Engineering');
INSERT INTO fieldvalue VALUES (20,'Information Transfer and Management','Information Transfer and Management');
INSERT INTO fieldvalue VALUES (21,'Other Aspects of Science','Other Aspects of Science');
INSERT INTO fieldvalue VALUES (22,'Commerce, Economics, Social Science','Commerce, Economics, Social Science');
INSERT INTO fieldvalue VALUES (23,'Biography, Geography, History','Biography, Geography, History');
INSERT INTO fieldvalue VALUES (24,'Other Subjects','Other Subjects');
INSERT INTO fieldvalue VALUES (25,'CERN TH','TH');
INSERT INTO fieldvalue VALUES (26,'CERN PPE','PPE');
INSERT INTO fieldvalue VALUES (27,'Experiments and Tracks','Experiments and Tracks');
INSERT INTO fieldvalue VALUES (28,'Personalities and History of CERN','Personalities and History of CERN');
INSERT INTO fieldvalue VALUES (29,'Diagrams and Charts','Diagrams and Charts');
INSERT INTO fieldvalue VALUES (30,'Life at CERN','Life at CERN');
INSERT INTO fieldvalue VALUES (31,'CERN ETT','ETT');
INSERT INTO fieldvalue VALUES (32,'CERN EP','EP');
INSERT INTO oaiREPOSITORY(id,setName,setSpec,setCollection,setDescription,setDefinition,setRecList,p1,f1,m1,p2,f2,m2,p3,f3,m3,last_updated) VALUES (2,'CERN experimental papers','cern:experiment','','','c=;p1=CERN;f1=reportnumber;m1=a;p2=(EP|PPE);f2=division;m2=r;p3=;f3=;m3=;',NULL,'CERN','reportnumber','a','(EP|PPE)','division','r','','','',NOW());
INSERT INTO oaiREPOSITORY(id,setName,setSpec,setCollection,setDescription,setDefinition,setRecList,p1,f1,m1,p2,f2,m2,p3,f3,m3,last_updated) VALUES (3,'CERN theoretical papers','cern:theory','','','c=;p1=CERN;f1=reportnumber;m1=a;p2=TH;f2=division;m2=e;p3=;f3=;m3=;',NULL,'CERN','reportnumber','a','TH','division','e','','','',NOW());
INSERT INTO portalbox VALUES (1,'ABOUT THIS SITE','Welcome to the demo site of Invenio, free document server software developed at CERN. Please feel free to explore all the features of this demo site to the full.');
INSERT INTO portalbox VALUES (2,'SEE ALSO','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (3,'ABOUT ARTICLES','The Articles collection contains all the papers published in scientific journals by our staff. The collection starts from 1998.');
INSERT INTO portalbox VALUES (4,'SEE ALSO','<a href=\"http://arXiv.org/\">arXiv.org</a><br /><a href=\"http://cds.cern.ch/\">CDS</a><br /><a href=\"http://www.chemweb.com/\">ChemWeb</a><br /><a href=\"http://www.ams.org/mathscinet\">MathSciNet</a>');
INSERT INTO portalbox VALUES (5,'ABOUT PREPRINTS','The Preprints collection contains not-yet-published papers and research results obtained at the institute. The collection starts from 2001.');
INSERT INTO portalbox VALUES (6,'SEE ALSO','<a href=\"http://arXiv.org/\">arXiv.org</a><br /><a href=\"http://cds.cern.ch/\">CDS</a>');
INSERT INTO portalbox VALUES (7,'ABOUT BOOKS','The Books collection contains monographs published by institute staff as well as pointers to interesting online e-books available in fulltext.');
INSERT INTO portalbox VALUES (8,'SEE ALSO','<a href=\"http://etext.lib.virginia.edu/ebooks/ebooklist.html\">UV e-Books</a><br /><a href=\"http://www.gutenberg.org/\">Project Gutenberg</a>');
INSERT INTO portalbox VALUES (9,'ABOUT THESES','The Theses collection contains all students\' theses defended at the institute. The collection starts from 1950.');
INSERT INTO portalbox VALUES (10,'SEE ALSO','<a href=\"http://www.theses.org/\">NDLTD Theses</a><br /><a href=\"http://www.thesis.de/\">Thesis.DE</a>');
INSERT INTO portalbox VALUES (11,'ABOUT REPORTS','The Reports collection contains miscellaneous technical reports, unpublished elsewhere. The collection starts from 1950.');
INSERT INTO portalbox VALUES (12,'TEST portal box','this is a test portal box');
INSERT INTO portalbox VALUES (13,'test','this is a test portal box');
INSERT INTO portalbox VALUES (14,'ABOUT PICTURES','The Pictures collection contains selected photographs and illustrations. Please note that photographs are copyrighted. The collection includes a historical archive that starts from 1950.');
INSERT INTO portalbox VALUES (15,'ABOUT CERN DIVISIONS','These virtual collections present a specific point of view on the database content from the CERN Divisions\' perspective.');
INSERT INTO portalbox VALUES (16,'ABOUT CERN EXPERIMENTS','These virtual collections present a specific point of view on the database content from the CERN Experiments\' perspective.');
INSERT INTO portalbox VALUES (17,'ABOUT TH','This virtual collection groups together all the documents written by authors from CERN TH Division.');
INSERT INTO portalbox VALUES (18,'ABOUT EP','This virtual collection groups together all the documents written by authors from CERN EP Division.');
INSERT INTO portalbox VALUES (19,'ABOUT ISOLDE','This virtual collection groups together all the documents about ISOLDE CERN experiment.');
INSERT INTO portalbox VALUES (20,'ABOUT ALEPH','This virtual collection groups together all the documents about ALEPH CERN experiment.');
INSERT INTO portalbox VALUES (21,'ABOUT ARTICLES AND PREPRINTS','This collection groups together all published and non-published articles, many of which are in electronic fulltext form.');
INSERT INTO portalbox VALUES (22,'ABOUT BOOKS AND REPORTS','This collection groups together all monograph-like publications, be they books, theses, reports, book chapters, proceedings, and so on.');
INSERT INTO portalbox VALUES (23,'ABOUT MULTIMEDIA & OUTREACH','This collection groups together all multimedia- and outreach- oriented material.');
INSERT INTO portalbox VALUES (24,'ABOUT POETRY','This collection presents poetry excerpts, mainly to demonstrate and test the treatment of various languages.<p>Vitrum edere possum; mihi non nocet.<br />Μπορώ να φάω σπασμένα γυαλιά χωρίς να πάθω τίποτα.<br />Pòdi manjar de veire, me nafrariá pas.<br />Ég get etið gler án þess að meiða mig.<br />Ic mæg glæs eotan ond hit ne hearmiað me.<br />ᛁᚳ᛫ᛗᚨᚷ᛫ᚷᛚᚨᛋ᛫ᛖᚩᛏᚪᚾ᛫ᚩᚾᛞ᛫ᚻᛁᛏ᛫ᚾᛖ᛫ᚻᛖᚪᚱᛗᛁᚪᚧ᛫ᛗᛖ᛬<br />⠊⠀⠉⠁⠝⠀⠑⠁⠞⠀⠛⠇⠁⠎⠎⠀⠁⠝⠙⠀⠊⠞⠀⠙⠕⠑⠎⠝⠞⠀⠓⠥⠗⠞⠀⠍⠑<br />Pot să mănânc sticlă și ea nu mă rănește.<br />Meg tudom enni az üveget, nem lesz tőle bajom.<br />Môžem jesť sklo. Nezraní ma.<br /><span dir="rtl" lang="he">אני יכול לאכול זכוכית וזה לא מזיק לי.</span><br /><span dir="rtl" lang="ji">איך קען עסן גלאָז און עס טוט מיר נישט װײ.</span><br /><span dir="RTL" lang=AR>أنا قادر على أكل الزجاج و هذا لا يؤلمني.</span><br />Я могу есть стекло, оно мне не вредит.<br />მინას ვჭამ და არა მტკივა.<br />Կրնամ ապակի ուտել և ինծի անհանգիստ չըներ։<br />मैं काँच खा सकता हूँ, मुझे उस से कोई पीडा नहीं होती.<br />काचं शक्नोम्यत्तुम् । नोपहिनस्ति माम् ॥<br />ฉันกินกระจกได้ แต่มันไม่ทำให้ฉันเจ็บ<br />Tôi có thể ăn thủy tinh mà không hại gì.<br /><span lang="zh">我能吞下玻璃而不伤身体。</span><br /><span lang=ja>私はガラスを食べられます。それは私を傷つけません。</span><br /><span lang=ko>나는 유리를 먹을 수 있어요. 그래도 아프지 않아요</span><br />(<a href="http://www.columbia.edu/kermit/utf8.html">http://www.columbia.edu/kermit/utf8.html</a>)');
INSERT INTO portalbox VALUES (25,'À PROPOS DE CE SITE','Bienvenue sur le site de démonstration de Invenio, un logiciel libre pour des serveurs des documents, venant du CERN. Veuillez explorer les possibilités de ce site de démonstration de tous ses côtés.');
INSERT INTO portalbox VALUES (26,'VOIR AUSSI','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (27,'O TÝCHTO STRÁNKACH','Vitajte na demonštračných stránkach Invenio, voľne dostupného softwaru pre dokumentové servery, pochádzajúceho z CERNu. Prehliadnite si možnosti našeho demonštračného serveru podla ľubovôle.');
INSERT INTO portalbox VALUES (28,'VIĎ TIEŽ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (29,'O TĚCHTO STRÁNKÁCH','Vítejte na demonstračních stránkách Invenio, volně dostupného softwaru pro dokumentové servery, pocházejícího z CERNu. Prohlédněte si možnosti našeho demonstračního serveru podle libosti.');
INSERT INTO portalbox VALUES (30,'VIZ TÉŽ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (31,'ÜBER DIESE SEITEN','Willkommen auf der Demo-Seite von Invenio, einer freien Dokumentenserver-Software aus dem CERN. Hier können Sie das System frei ausprobieren.');
INSERT INTO portalbox VALUES (32,'SEHEN SIE AUCH','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (33,'ACERCA DE ESTAS PÁGINAS','Bienvenidos a las páginas de demostración de Invenio, un software gratuito desarrollado por el CERN que permite crear un servidor de documentos. Le invitamos a explorar a fondo todas las funcionalidades ofrecidas por estas páginas de demostración.');
INSERT INTO portalbox VALUES (34,'VEA TAMBIÉN','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (35,'A PROPOSITO DI QUESTO SITO','Benvenuti nel sito demo di Invenio, un software libero per server di documenti sviluppato al CERN. Vi invitiamo ad esplorare a fondo tutte le caratteristiche di questo sito demo.');
INSERT INTO portalbox VALUES (36,'VEDI ANCHE','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (37,'OM DENNE SIDEN','Velkommen til demosiden for Invenio, en gratis dokumentserver fra CERN. Vennligst føl deg fri til å utforske alle mulighetene i denne demoen til det fulle.');
INSERT INTO portalbox VALUES (38,'SE OGSÅ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (39,'SOBRE ESTE SITE','Bem vindo ao site de demonstração do Invenio, um servidor de documentos livre desenvolvido pelo CERN. Sinta-se à vontade para explorar plenamente todos os recursos deste site demonstração.');
INSERT INTO portalbox VALUES (40,'VEJA TAMBÉM','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (41,'ОБ ЭТОМ САЙТЕ','Добро пожаловать на наш демонстрационный сайт Invenio. Invenio -- свободная программа для серверов документов, разработанная в CERNе. Пожалуйста пользуйтесь свободно этим сайтом.');
INSERT INTO portalbox VALUES (42,'СМОТРИТЕ ТАКЖЕ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (43,'OM DENNA WEBBPLATS','Välkommen till demoinstallationen av Invenio, en fri programvara för hantering av dokument, från CERN. Välkommen att undersöka alla funktioner i denna installation.');
INSERT INTO portalbox VALUES (44,'SE ÄVEN','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (45,'SUBMIT PREPRINT','<a href=\"/submit?doctype=TEXT\">Submit a new preprint</a>');
INSERT INTO portalbox VALUES (46,'SUBMIT BOOK','<a href=\"/submit?doctype=TEXT\">Submit a new book</a>');
INSERT INTO portalbox VALUES (47,'SUBMIT THESIS','<a href=\"/submit?doctype=TEXT\">Submit a new thesis</a>');
INSERT INTO portalbox VALUES (48,'SUBMIT REPORT','<a href=\"/submit?doctype=TEXT\">Submit a new report</a>');
INSERT INTO portalbox VALUES (49,'SUBMIT ARTICLE','<a href=\"/submit?doctype=TEXT\">Submit a new article</a>');
INSERT INTO portalbox VALUES (50,'SUBMIT PICTURE','<a href=\"/submit?doctype=DEMOPIC\">Submit a new picture</a>');
INSERT INTO portalbox VALUES (51,'SUBMIT NEW DOCUMENT','<a href=\"/submit?doctype=TEXT\">Submit a new article</a><br /><a href=\"/submit?doctype=TEXT\">Submit a new preprint</a>');
INSERT INTO portalbox VALUES (52,'SUBMIT NEW DOCUMENT','<a href=\"/submit?doctype=TEXT\">Submit a new book</a><br /><a href=\"/submit?doctype=TEXT\">Submit a new thesis</a><br /><a href=\"/submit?doctype=TEXT\">Submit a new report</a>');
INSERT INTO portalbox VALUES (53,'SUBMIT NEW DOCUMENT','<a href=\"/submit?doctype=DEMOPIC\">Submit a new picture</a>');
INSERT INTO portalbox VALUES (54,'ΣΧΕΤΙΚΑ ΜΕ ΤΗΝ ΣΕΛΙΔΑ','Καλως ήλθατε στον δικτυακό τόπο του Invenio, ενός δωρεάν εξυπηρετητή για έγγραφα προερχόμενο απο το CERN. Είστε ευπρόσδεκτοι να εξερευνήσετε σε βάθος τις δυνατότητες που σας παρέχει ο δικτυακός αυτός τόπος.');
INSERT INTO portalbox VALUES (55,'ΔΕΙΤΕ ΕΠΙΣΗΣ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (56,'ПРО ЦЕЙ САЙТ','Ласкаво просимо до демонстраційного сайту Invenio, вільного програмного забезпечення, розробленого CERN. Випробуйте всі можливості цього демонстраційного сайту в повному обсязі.');
INSERT INTO portalbox VALUES (57,'ДИВИСЬ ТАКОЖ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (58,'SOBRE AQUEST LLOC','Benvinguts al lloc de demo de Invenio, un servidor de documents lliure originat al CERN. Us convidem a explorar a fons totes les funcionalitats ofertes en aquestes pàgines de demostració.');
INSERT INTO portalbox VALUES (59,'VEGEU TAMBÉ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (60,'この場所について','Invenioデモンストレーションの場所への歓迎, CERN から来る自由な文書のサーバーソフトウェア, このデモンストレーションの場所の特徴すべてを探検する自由の感じ');
INSERT INTO portalbox VALUES (61,'また見なさい','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (62,'O TEJ STRONIE','Witamy w wersji demo systemu Invenio, darmowego oprogramowania do obsługi serwera dokumentów, stworzonego w CERN. Zachęcamy do odkrywania wszelkich funkcjonalności oferowanych przez tę stronę.');
INSERT INTO portalbox VALUES (63,'ZOBACZ TAKŻE','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (64,'ЗА САЙТА','Добре дошли на демонстрационния сайт на Invenio, свободен софтуер за документни сървъри изработен в ЦЕРН. Чувствайте се свободни да изследвате всяка една от характеристиките на сайта.');
INSERT INTO portalbox VALUES (65,'ВИЖ СЪЩО','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (66,'O OVOM SITE-u','Dobrodošli na Invenio demo site. Invenio je slobodno dostupan poslužitelj dokumenata razvijen na CERN-u. Slobodno istražite sve mogućnosti ove aplikacije.');
INSERT INTO portalbox VALUES (67,'TAKOĐER POGLEDAJTE','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (68,'关于这个网站','欢迎来到Invenio 的示范网站!Invenio是一个由CERN开发的免费文件服务器软件。 要了解这网站所提供的各项特点, 请立刻行动,尽情探索。');
INSERT INTO portalbox VALUES (69,'参见','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (70,'關於這個網站', '歡迎來到Invenio 的示範網站!Invenio是一個由CERN開發的免費文件伺服器軟體。 要瞭解這網站所提供的各項特點, 請立刻行動,盡情探索。');
INSERT INTO portalbox VALUES (71,'參見','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (72,'IMPRESSZUM', 'Üdvözöljük az Invenio bemutatóoldalain! Ezt a szabad dokumentumkezelő szoftvert a CERN-ben fejlesztették. Fedezze fel bátran a tesztrendszer nyújtotta szolgáltatásokat!');
INSERT INTO portalbox VALUES (73,'LÁSD MÉG','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (74,'OMTRENT HIERDIE TUISTE', 'Welkom by die demo tuiste van Invenio, gratis dokument bediener sagteware wat deur CERN geskryf is. Voel vry om al die eienskappe van die demo te deursoek.');
INSERT INTO portalbox VALUES (75,'SIEN OOK','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (76,'ACERCA DESTE SITIO', 'Benvido ó sitio de demostración do Invenio, un software de servidor de documentos do CERN. Por favor síntete libre de explorar todas as características deste sitio de demostración.');
INSERT INTO portalbox VALUES (77,'VEXA TAMÉN','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (78,'ABOUT ATLANTIS TIMES','The \"Atlantis Times\" collections contain the articles from the <a href=\"/journal/atlantistimes/\">Atlantis Times</a> journal.');
INSERT INTO portalbox VALUES (79,'DESPRE ACEST SITE', 'Bine aţi venit pe site-ul demo al Invenio, un software gratuit pentru servere de documente, creat de CERN. Nu ezitaţi să exploraţi din plin toate caracteristicile acestui site demo.');
INSERT INTO portalbox VALUES (80,'ALTE RESURSE','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (81,'IBYEREKERANYE N\'IYI WEB', 'Murakaza neza kuri web ya Invenio, iyi ni ikoranabuhanga ry\'ubuntu ryakozwe na CERN. Buri muntu afite uburenganzira bwo kuyigerageza no kuyikoresha.');
INSERT INTO portalbox VALUES (82,'REBA N\'IBI','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>'); -- '
INSERT INTO portalbox VALUES (83,'საიტის შესახებ', 'კეთილი იყოს თქვენი მობრძანება Invenio -ის სადემონსტრაციო საიტზე, თავისუფალი დოკუმენტების სერვერი CERN -ისაგან. გთხოვთ სრულად შეისწავლოთ სადემონსტრაციო საიტის შესაძლებლობები.');
INSERT INTO portalbox VALUES (84,'ასევე იხილეთ','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>'); -- '
INSERT INTO portalbox VALUES (85,'APIE PUSLAPĮ', 'Sveiki atvykę į Invenio bandomąjį tinklapį. Invenio yra nemokama programinė įranga dokumentų serveriams, sukurta CERN. Kviečiame išbandyti visas tinklapio galimybes ir funkcijas.');
INSERT INTO portalbox VALUES (86,'TAIP PAT ŽIŪRĖKITE','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>'); -- '
INSERT INTO portalbox VALUES (87,'حول هذا الموقع','مرحبا بكم في الموقع التجريبي لإنفينيو، المحطة الخادمة (الحرة) المبرمجة من طرف المنظمة الأوربية للبحوث النووية. الرجاء عدم التردد للإطلاع على جميع صفحات هذا الموقع التجريبي');
INSERT INTO portalbox VALUES (88,'زوروا أيضا','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO portalbox VALUES (89,'ABOUT Notes','The Notes collection is a temporary collection that is being used for testing.');
INSERT INTO portalbox VALUES (90,'ABOUT ALEPH Papers','The ALEPH Papers collection is a temporary collection that is being used for testing.');
INSERT INTO portalbox VALUES (91,'ABOUT ALEPH Internal Notes','The ALEPH Internal Notes collection is a temporary restricted collection that is being used for testing.');
INSERT INTO portalbox VALUES (92,'ABOUT ALEPH Theses','The ALEPH Theses collection is a temporary restricted collection that is being used for testing.');
INSERT INTO portalbox VALUES (93,'ABOUT ISOLDE Papers','The ISOLDE Papers collection is a temporary collection that is being used for testing.');
INSERT INTO portalbox VALUES (94,'ABOUT ISOLDE Internal Notes','The ISOLDE Internal Notes collection is a temporary restricted and hidden collection that is being used for testing.');
INSERT INTO portalbox VALUES (95,'ABOUT Drafts','The Drafts collection is a temporary restricted collection that is being used for testing.');
INSERT INTO portalbox VALUES (96,'درباره این سایت', 'به سایت نمایشی سی دی اس اینونیو، نرم افزار رایگان سرویس دهنده مدارک ساخته شده در سرن، خوش آمدید. لطفا برای کشف تمام ویژگی های این سایت نمایشی به صورت کامل، راحت باشید.');
INSERT INTO portalbox VALUES (97,'نیز نگاه کنید','<a href=\"http://invenio-software.org/\">Invenio</a><br /><a href=\"http://www.cern.ch/\">CERN</a>');
INSERT INTO sbmCOLLECTION VALUES (36,'Document Types');
INSERT INTO sbmCOLLECTION_sbmCOLLECTION VALUES (0,36,1);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOTHE',1);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOPOE',2);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOPIC',3);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOART',4);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOBOO',5);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOJRN',6);
INSERT INTO sbmCOLLECTION_sbmDOCTYPE VALUES (36,'DEMOVID',7);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOPIC','LIFE','Life at CERN',3);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOPIC','HIST','Personalities and History of CERN',2);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOPIC','EXP','Experiments',1);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOART','ARTICLE','Article',1);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOART','PREPRINT','Preprint',2);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOART','REPORT','Report',3);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOJRN','NEWS','News',2);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOJRN','ARTS','Arts',1);
INSERT INTO sbmCATEGORIES (doctype,sname,lname,score) VALUES ('DEMOJRN','SCIENCE','Science',4);
INSERT INTO sbmDOCTYPE VALUES ('Demo Picture Submission','DEMOPIC','2007-09-13','2007-10-17','<br /><br />\r\nThe Demo Picture submission demonstrates a slightly more detailed submission type.<br />\r\nIt makes use of different categories (which in this case are used in the picture\'s reference number to better describe it) and creates icons for the submitted picture files. Records created with this submission are inserted into the ATLANTIS \"Pictures\" collection.\r\n<br /><br />\r\n');
INSERT INTO sbmDOCTYPE VALUES ('Demo Thesis Submission','DEMOTHE','2008-03-02','2008-03-05','<br />\r\n<br />\r\nThe Demo Thesis submission demonstrates a very simple submission type.<br />\r\nIt has no categories, submits directly into the ATLANTIS \"Theses\" collection and also stamps full-text files.\r\n<br /><br />\r\n');
INSERT INTO sbmDOCTYPE VALUES ('Demo Article Submission','DEMOART','2008-03-06','2008-03-06','<br /><br />The Demo Article submission demonstrates a more complex submission type.<br /><br />\r\nThe submission gives a document a category. This category is used in the document\'s reference number and also serves as a means to classify it into a specific ATLANTIS collection. Documents submitted into the \"Article\" category are inserted into the ATLANTIS \"Articles\" collection, documents categorized as \"Preprint\" are inserted into the ATLANTIS \"Preprints\" collection, and a document categorized as a \"Report\" is inserted into the ATLANTIS \"Reports\" collection.<br /><br />\r\n');
INSERT INTO sbmDOCTYPE VALUES ('Demo Book Submission (Refereed)','DEMOBOO','2008-03-06','2008-03-06','<br /><br />The Demo Book submission demonstrates a refereed submission.<br /><br />\r\nWhen the details of a book are submitted by a user, they must be approved by a referee before the record is integrated into the ATLANTIS repository.<br />\r\nApproved books are integrated into the ATLANTIS \"Books\" collection.<br />\r\n');
INSERT INTO sbmDOCTYPE VALUES ('Demo Poetry Submission','DEMOPOE','2008-03-12','2008-03-12','<br /><br />\r\nThe Demo Poetry submission demonstrates a simple submission type with a submission form split over two pages.<br />\r\nIt does not use categories. Records created with this submission are inserted into the ATLANTIS \"Poetry\" collection.\r\n<br /><br />');
INSERT INTO sbmDOCTYPE VALUES ('Demo Journal Submission','DEMOJRN','2008-09-18','2008-09-18','The Demo Journal submission submits records that will be integrated into the demo "Atlantis Times" journal.<br />\r\nIt makes use of CKEditor to provide WYSIWYG HTML editing of the articles. Install it with <code>make install-ckeditor-plugin</code>.');
INSERT INTO sbmDOCTYPE VALUES ('Demo Video Submission','DEMOVID','2012-02-16','2012-02-16','This is a prototype implementation of a video submission workflow. It will generate all necessary files and video formats from one file uploaded to the system.');
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,1,'DEMOPIC_TITLE','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Submit an ATLANTIS picture:</b><br /><br /><span style=\"color: red;\">*</span>Picture Title:<br />','M','Picture Title','','2007-09-13','2007-10-04',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,2,'DEMOPIC_PHOTOG','<br /><br />Picture Author(s) or Photographer(s): <i>(one per line)</i><br />','O','Photographer(s)','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,3,'DEMOPIC_DATE','<br /><br /><span style=\"color: red;\">*</span>Picture Date: <i>(dd/mm/yyyy)</i>&nbsp;','M','Picture Date','DatCheckNew','2007-09-13','2007-10-04',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,4,'DEMOPIC_KW','<br /><br />Keywords:<br /><i>(one keyword/key-phrase per line)</i><br />','O','Picture Keywords','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,5,'DEMOPIC_DESCR','<br /><br />Picture Description:<br />','O','Picture Description','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,6,'DEMOPIC_ADD_RN','<br /><br />Your picture will be given a reference number automatically.<br /> However, if the picture has other reference numbers, please enter them here:<br /><i>(one per line)</i><br />','O','Additional Reference Numbers','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,7,'DEMOPIC_NOTE','<br /><br />Additional Comments or Notes about the Picture:<br />','O','Picture Notes or Comments','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,8,'Upload_Photos','<br /><br />Select the photo(s) to upload:<br />','O','Picture File','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPIC',1,9,'DEMOPIC_FINISH','<br /><br /></td></tr></table>','O','','','2007-09-13','2007-09-13',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPIC',1,1,'DEMOPIC_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Modify a picture\'s bibliographic information:</b><br /><br /><span style=\'color: red;\'>*</span>Picture Reference Number:&nbsp;&nbsp;','M','Reference Number','','2007-10-04','2007-10-04',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPIC',1,2,'DEMOPIC_CHANGE','<br /><br /><span style=\"color: red;\">*</span>Choose the fields to be modified:<br />','M','Fields to Modify','','2007-10-04','2007-10-04',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPIC',1,3,'DEMOPIC_CONT','<br /><br /></td></tr></table>','O','','','2007-10-04','2007-10-04',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SRVDEMOPIC',1,1,'DEMOPIC_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b> Revise/add pictures:</b><br /><br /><span style=\'color: red;\'>*</span>Picture Reference Number:&nbsp;&nbsp;','M','Reference Number','','2009-04-09','2009-04-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SRVDEMOPIC',1,2,'DEMOPIC_CONT','<br /><br /></td></tr></table>','O','','','2009-04-09','2009-04-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,1,'DEMOTHE_REP','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\"><br /><b>Submit an ATLANTIS Thesis:</b><br /><br />Your thesis will be given a reference number automatically.<br /> However, if it has other reference numbers, please enter them here:<br /><i>(one per line)</i><br />','O','Other Report Numbers','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,2,'DEMOTHE_TITLE','<br /><br /><span style=\"color: red;\">*</span>Thesis Title:<br />','M','Title','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,3,'DEMOTHE_SUBTTL','<br /><br />Thesis Subtitle <i>(if any)</i>:<br />','O','Subtitle','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,4,'DEMOTHE_AU','<br /><br /><table width=\"100%\"><tr><td valign=\"top\"><span style=\"color: red;\">*</span>Author of the Thesis: <i>(one per line)</i><br />','M','Author(s)','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,5,'DEMOTHE_SUPERV','</td></tr><tr><td valign=\"top\"><br />Thesis Supervisor(s): <i>(one per line)</i><br />','O','Thesis Supervisor(s)','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,6,'DEMOTHE_ABS','</td></tr></table><br /><span style=\"color: red;\">*</span>Abstract:<br />','M','Abstract','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,7,'DEMOTHE_NUMP','<br /><br />Number of Pages: ','O','Number of Pages','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,8,'DEMOTHE_LANG','<br /><br /><span style=\"color: red;\">*</span>Language: ','M','Thesis Language','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,9,'DEMOTHE_PUBL','<br /><br /><span style=\"color: red;\">*</span>Thesis Publisher (or Institute):&nbsp;','M','Thesis Publisher/University','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,10,'DEMOTHE_PLDEF','&nbsp;at <span style=\"color: red;\">*</span>Place/Town:&nbsp;','M','Place of Thesis Defence','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,11,'DEMOTHE_DIPL','<br /><br /><span style=\"color: red;\">*</span>Diploma Awarded:&nbsp;','M','Diploma Awarded','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,12,'DEMOTHE_DATE','<br /><br /><span style=\"color: red;\">*</span>Thesis Defence date <i>(dd/mm/yyyy)</i>:&nbsp;','M','Date of Thesis Defence','DatCheckNew','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,13,'DEMOTHE_UNIV','<br /><span style=\"color: red;\">*</span>Awarding University:&nbsp;','M','Awarding University','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,14,'DEMOTHE_PLACE','&nbsp;at <span style=\"color: red;\">*</span>Place/Town:&nbsp;','M','Awarding University town','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,15,'DEMOTHE_FILE','<br /><br /><span style=\"color: red;\">*</span>Enter the full path to the source file to upload:<br />','M','Source File','','2008-03-02','2008-03-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOTHE',1,16,'DEMOTHE_END','<br /><br /></td></tr></table><br />','O','','','2008-03-02','2008-03-02',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOTHE',1,1,'DEMOTHE_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Modify a thesis\' bibliographic information:</b><br /><br /><span style=\'color: red;\'>*</span>Thesis Reference Number:&nbsp;&nbsp;','M','Reference Number','','2008-03-05','2008-03-05',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOTHE',1,2,'DEMOTHE_CHANGE','<br /><br /><span style=\"color: red;\">*</span>Choose the fields to be modified:<br />','M','Fields to Modify','','2008-03-05','2008-03-05',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOTHE',1,3,'DEMOTHE_CONT','<br /><br /></td></tr></table>','O','','','2008-03-05','2008-03-05',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,1,'DEMOART_REP','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\"><br /><b>Submit an ATLANTIS Article:</b><br /><br />Your document will be given a reference number automatically.<br /> However, if it has other reference numbers, please enter them here:<br /><i>(one per line)</i><br />','O','Other Report Numbers','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,2,'DEMOART_TITLE','<br /><br /><span style=\"color: red;\">*</span>Document Title:<br />','M','Title','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,3,'DEMOART_AU','<br /><br /><table width=\"100%\"><tr><td valign=\"top\"><span style=\"color: red;\">*</span>Author of the Document: <i>(one per line)</i><br />','M','Author(s)','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,4,'DEMOART_ABS','</td></tr></table><br /><span style=\"color: red;\">*</span>Abstract:<br />','M','Abstract','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,5,'DEMOART_NUMP','<br /><br />Number of Pages:&nbsp;','O','Number of Pages','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,6,'DEMOART_LANG','<br /><br /><span style=\"color: red;\">*</span>Language:&nbsp;','O','Language','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,7,'DEMOART_DATE','<br /><br /><span style=\"color: red;\">*</span>Date of Document: <i>(dd/mm/yyyy)</i>&nbsp;','M','Date of Document','DatCheckNew','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,8,'DEMOART_KW','<br /><br />Keywords/Key-phrases: <i>(one per line)</i><br />','O','Keywords/Key-phrases','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,9,'DEMOART_NOTE','<br /><br />Additional Notes or Comments:<br />','O','Notes/Comments','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,10,'DEMOART_FILE','<br /><br /><span style=\"color: red;\">*</span>Enter the full path to the source file to upload:<br />','M','Source File','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOART',1,11,'DEMOART_END','<br /><br /></td></tr></table><br />','O','','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOART',1,1,'DEMOART_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Modify an article\'s bibliographic information:</b><br /><br /><span style=\'color: red;\'>*</span>Document Reference Number:&nbsp;&nbsp;','M','Reference Number','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOART',1,2,'DEMOART_CHANGE','<br /><br /><span style=\"color: red;\">*</span>Choose the fields to be modified:<br />','M','Fields to Modify','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOART',1,3,'DEMOART_CONT','<br /><br /></td></tr></table>','O','','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,1,'DEMOBOO_REP','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\"><br /><b>Submit an ATLANTIS Book:</b><br /><br />Your book will be given a reference number automatically.<br /> However, if it has other reference numbers, please enter them here:<br /><i>(one per line)</i><br />','O','Other Report Numbers','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,2,'DEMOBOO_TITLE','<br /><br /><span style=\"color: red;\">*</span>Book Title:<br />','M','Title','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,3,'DEMOBOO_AU','<br /><br /><table width=\"100%\"><tr><td valign=\"top\"><span style=\"color: red;\">*</span>Author of the Book: <i>(one per line)</i><br />','M','Author(s)','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,4,'DEMOBOO_ABS','</td></tr></table><br /><span style=\"color: red;\">*</span>Abstract:<br />','M','Abstract','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,5,'DEMOBOO_NUMP','<br /><br />Number of Pages:&nbsp;','O','Number of Pages','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,6,'DEMOBOO_LANG','<br /><br /><span style=\"color: red;\">*</span>Language:&nbsp;','O','Language','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,7,'DEMOBOO_DATE','<br /><br /><span style=\"color: red;\">*</span>Date of the Book: <i>(dd/mm/yyyy)</i>&nbsp;','M','Date of Document','DatCheckNew','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,8,'DEMOBOO_KW','<br /><br />Keywords/Key-phrases: <i>(one per line)</i><br />','O','Keywords/Key-phrases','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,9,'DEMOBOO_NOTE','<br /><br />Additional Notes or Comments:<br />','O','Notes/Comments','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,10,'DEMOBOO_FILE','<br /><br />Enter the full path to the source file to upload:<br />','O','Source File','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOBOO',1,11,'DEMOBOO_END','<br /><br /></td></tr></table><br />','O','','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOBOO',1,1,'DEMOBOO_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Modify a book\'s bibliographic information:</b><br /><br /><span style=\'color: red;\'>*</span>Book Reference Number:&nbsp;&nbsp;','M','Reference Number','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOBOO',1,2,'DEMOBOO_CHANGE','<br /><br /><span style=\"color: red;\">*</span>Choose the fields to be modified:<br />','M','Fields to Modify','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOBOO',1,3,'DEMOBOO_CONT','<br /><br /></td></tr></table>','O','','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('APPDEMOBOO',1,1,'DEMOBOO_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Approve or reject an ATLANTIS book:</b><br /><br /><span style=\'color: red;\'>*</span>Book Reference Number:&nbsp;&nbsp;','M','Reference Number','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('APPDEMOBOO',1,2,'DEMOBOO_DECSN','<br /><br /><span style=\"color: red;\">*</span>Decision:<br />\r\n','M','Decision','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('APPDEMOBOO',1,3,'DEMOBOO_COMNT','<br /><br />Comments on Decision:<br />\r\n','O','Referee\'s Comments','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('APPDEMOBOO',1,4,'DEMOBOO_REGB','<br /><br /></td></tr></table>','O','','','2008-03-07','2008-03-07',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',1,1,'DEMOPOE_TITLE','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\"><br /><b>Submit an ATLANTIS Poem:</b><br /><br /><span style=\"color: red;\">*</span>Poem Title:<br />','M','Title','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',1,2,'DEMOPOE_AU','<br /><br /><span style=\"color: red;\">*</span>Author(s) of the Poem: <i>(one per line)</i><br />','M','Author(s)','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',1,3,'DEMOPOE_LANG','<br /><br /><table width=\"100%\"><tr><td valign=\"top\"><span style=\"color: red;\">*</span>Poem Language:&nbsp;','M','Language','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',1,4,'DEMOPOE_YEAR','</td><td><span style=\"color: red;\">*</span>Year of the Poem:&nbsp;','M','Poem Year','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',1,5,'DEMOPOE_DUMMY','','O','','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',2,1,'DEMOPOE_ABS','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\"><br /><br /><span style=\"color: red;\">*</span>Poem Text:<br />','M','Abstract','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOPOE',2,2,'DEMOPOE_END','<br /><br /></td></tr></table><br />','O','','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPOE',1,1,'DEMOPOE_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Modify a poem\'s bibliographic information:</b><br /><br /><span style=\'color: red;\'>*</span>Poem Reference Number:&nbsp;&nbsp;','M','Reference Number','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPOE',1,2,'DEMOPOE_CHANGE','<br /><br /><span style=\"color: red;\">*</span>Choose the fields to be modified:<br />','M','Fields to Modify','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOPOE',1,3,'DEMOPOE_CONT','<br /><br /></td></tr></table>','O','','','2008-03-12','2008-03-12',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,13,'DEMOJRN_ENDING','</td></tr></table>','O','','','2009-02-20','2009-02-20',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOJRN',1,3,'DEMOJRN_CONT','<br /><br /></td></tr></table>','O','','','2008-10-06','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOJRN',1,2,'DEMOJRN_CHANGE','','O','','','2009-01-09','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('MBIDEMOJRN',1,1,'DEMOJRN_RN','<table width=\"100%\" bgcolor=\"#D3E3E2\" align=\"center\" cellspacing=\"2\" cellpadding=\"2\" border=\"1\"><tr><td align=\"left\"><br /><b>Update a journal article:</b><br /><br /><span style=\'color: red;\'>*</span>Document Reference Number:&nbsp;&nbsp;','M','','','2008-10-06','2008-10-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,3,'DEMOJRN_ISSUES','</TD><TD align=\"center\"><span style=\"color: red;\">*</span>Order(s) <small><i>(digit)</i></small> and issue(s) <small><i>(xx/YYYY)</i></small> of the article:<br />','O','Order and issue numbers','','2009-02-20','2009-02-20',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,2,'DEMOJRN_TYPE','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\" colspan=\"2\"><br /><b>Submit an Atlantis Times article:</b></TD></TR><TR><TD align=\"center\"><span style=\"color: red;\">*</span>Status:<br />','O','Status:','','2009-02-20','2009-02-20',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,5,'DEMOJRN_EMAIL','</TD><TD><br /><br />E-mail(s) of the author(s): <i>(one per line)</i><br />','O','E-mail of the author(s): <i>(one per line)</i>','','2008-09-26','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,9,'DEMOJRN_ABSF','</td><td><br />French article:<br />','O','French article:','','2008-09-26','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,7,'DEMOJRN_TITLEF','</TD><TD><br /><br />French title:<br />','O','French title:','','2008-09-26','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,4,'DEMOJRN_AU','</td></TR><TR><TD><br /><br />Author(s): <i>(one per line)</i><br />','O','Author(s)','','2008-09-26','2009-02-20',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,6,'DEMOJRN_TITLEE','</TD></TR><TR><TD><br /><br />English title:<br />','O','English title:','','2008-09-26','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,10,'DEMOJRN_IN','','O','Journal Name','','2008-09-26','2009-02-06',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,12,'DEMOJRN_END','</td></tr><tr><td colspan=\"2\">','O','','','2008-09-26','2009-02-20',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,8,'DEMOJRN_ABSE','</td></tr><tr><td><br />English article:<br />','O','English article:','','2008-11-04','2009-01-09',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOJRN',1,14,'DEMOJRN_CATEG','','O','comboDEMOJRN-like for MBI','','2009-10-15','2009-10-15',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOVID',1,1,'DEMOVID_SINGLE','','O','','','2012-02-16','2012-02-16',NULL,NULL);
INSERT INTO sbmFIELD VALUES ('SBIDEMOVID',1,2,'DEMOVID_SUBMIT','','O','','','2012-02-16','2012-02-16',NULL,NULL);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_TITLE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2007-09-13','2007-09-13',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_PHOTOG',NULL,'100__a','T',NULL,6,30,NULL,NULL,NULL,'2007-09-13','2007-09-19','<br /><br />Picture Author(s) or Photographer(s)<br /><i>(optional)(<b>one per line</b>)</i>:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_DATE',NULL,'260__c','I',10,NULL,NULL,NULL,NULL,NULL,'2007-09-13','2007-09-19','<br /><br />Date of the picture (dd/mm/yyyy):&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_KW',NULL,'6531_a','T',NULL,2,50,NULL,NULL,NULL,'2007-09-13','2007-09-13','<br /><br />Keywords<br /><i>(Optional, <b>one keyword/key-phrase per line</b>)</i>:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_DESCR',NULL,'520__a','T',NULL,12,80,NULL,NULL,NULL,'2007-09-13','2007-09-13','<br /><br />Picture Description:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_ADD_RN',NULL,'088__a','T',NULL,4,30,NULL,NULL,NULL,'2007-09-13','2007-09-13','<br /><br />Additional Reference Numbers:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_NOTE',NULL,'500__a','T',NULL,6,60,NULL,NULL,NULL,'2007-09-13','2007-09-13','<br /><br />Additional Comments or Notes about the Picture:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_FILE',NULL,'','F',40,NULL,NULL,NULL,NULL,NULL,'2007-09-13','2007-09-13',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_FINISH',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"finish submission\" onclick=\"finish();\" />\r\n</div>','2007-09-13','2007-09-13',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOPIC_CHANGE[]\" size=\"8\" multiple>\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"DEMOPIC_TITLE\">Title</option>\r\n <option value=\"DEMOPIC_PHOTOG\">Photographer(s)</option>\r\n <option value=\"DEMOPIC_DATE\">Picture Date</option>\r\n <option value=\"DEMOPIC_KW\">Keywords</option>\r\n <option value=\"DEMOPIC_DESCR\">Picture Description</option>\r\n <option value=\"DEMOPIC_ADD_RN\">Picture Reference Numbers</option>\r\n <option value=\"DEMOPIC_NOTE\">Notes or Comments</option>\r\n <option value=\"Upload_Photos\">Pictures</option>\r\n</select>','2007-10-04','2007-10-04',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_RN',NULL,'037__a','I',30,NULL,NULL,NULL,'DEMO-PICTURE-<COMBO>-<YYYY>-???',NULL,'2007-10-04','2007-10-04',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPIC_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2007-10-04','2007-10-04',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_REP',NULL,'088__a','T',NULL,4,30,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Other Report Numbers (one per line):',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_TITLE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_SUBTTL',NULL,'245__b','T',NULL,3,60,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br /><br />Thesis Subtitle (if any):<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_AU',NULL,'100__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Authors:<br />(one per line):<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_SUPERV',NULL,'','T',NULL,6,60,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Thesis Supervisor(s)<br />(one per line):<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_ABS',NULL,'520__a','T',NULL,12,80,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Abstract:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_NUMP',NULL,'300__a','I',5,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-06','<br />Number of Pages:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_LANG',NULL,'041__a','S',NULL,NULL,NULL,NULL,NULL,'<SELECT name=\"DEMOTHE_LANG\">\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"eng\">English</option>\r\n <option value=\"fre\">French</option>\r\n <option value=\"ger\">German</option>\r\n <option value=\"dut\">Dutch</option>\r\n <option value=\"ita\">Italian</option>\r\n <option value=\"spa\">Spanish</option>\r\n <option value=\"por\">Portuguese</option>\r\n <option value=\"gre\">Greek</option>\r\n <option value=\"slo\">Slovak</option>\r\n <option value=\"cze\">Czech</option>\r\n <option value=\"hun\">Hungarian</option>\r\n <option value=\"pol\">Polish</option>\r\n <option value=\"nor\">Norwegian</option>\r\n <option value=\"swe\">Swedish</option>\r\n <option value=\"fin\">Finnish</option>\r\n <option value=\"rus\">Russian</option>\r\n</SELECT>','2008-03-02','2008-03-02','<br /><br />Select the Language:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_PUBL',NULL,'','I',35,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Thesis Publisher (or University):&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_PLDEF',NULL,'','I',20,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br /><br />Place of Thesis Defence:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_DIPL',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOTHE_DIPL\">\r\n <option value=\"\">Select:</option>\r\n <option value=\"Diploma\">Diploma</option>\r\n <option value=\"MSc\">MSc</option>\r\n <option value=\"PhD\">PhD</option>\r\n</select>','2008-03-02','2008-03-02','<br /><br />Diploma Awarded:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_DATE',NULL,'269__c','I',10,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Date:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_UNIV',NULL,'502__b','I',30,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02','<br />Awarding University:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_PLACE',NULL,'','I',20,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_FILE',NULL,'','F',60,NULL,NULL,NULL,NULL,NULL,'2008-03-02','2008-03-02',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_END',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Finish Submission\" onclick=\"finish();\">\r\n</div>','2008-03-02','2008-03-02',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_RN',NULL,'037__a','I',30,NULL,NULL,NULL,'DEMO-THESIS-<YYYY>-???',NULL,'2008-03-05','2008-03-05',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOTHE_CHANGE[]\" size=\"9\" multiple>\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"DEMOTHE_REP\">Other Report Numbers</option>\r\n <option value=\"DEMOTHE_TITLE\">Title</option>\r\n <option value=\"DEMOTHE_SUBTTL\">Subtitle</option>\r\n <option value=\"DEMOTHE_AU\">Author(s)</option>\r\n <option value=\"DEMOTHE_SUPERV\">Supervisor(s)</option>\r\n <option value=\"DEMOTHE_ABS\">Abstract</option>\r\n <option value=\"DEMOTHE_NUMP\">Number of Pages</option>\r\n <option value=\"DEMOTHE_LANG\">Language</option>\r\n</select>','2008-03-05','2008-03-06',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOTHE_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2008-03-05','2008-03-05',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_ABS',NULL,'520__a','T',NULL,12,80,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Abstract:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_AU',NULL,'100__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Authors: <i>(one per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOART_CHANGE[]\" size=\"8\" multiple>\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"DEMOART_REP\">Other Report Numbers</option>\r\n <option value=\"DEMOART_TITLE\">Title</option>\r\n <option value=\"DEMOART_AU\">Author(s)</option>\r\n <option value=\"DEMOART_LANG\">Language</option>\r\n <option value=\"DEMOART_KW\">Keywords</option>\r\n <option value=\"DEMOART_ABS\">Abstract</option>\r\n <option value=\"DEMOART_NUMP\">Number of Pages</option>\r\n</select>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_DATE',NULL,'269__c','I',10,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Date:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_END',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Finish Submission\" onclick=\"finish();\">\r\n</div>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_FILE',NULL,'','F',60,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_KW',NULL,'6531_a','T',NULL,4,50,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br /><br />Keywords:<br /><i>(one keyword/key-phrase per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_LANG',NULL,'041__a','S',NULL,NULL,NULL,NULL,NULL,'<SELECT name=\"DEMOART_LANG\">\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"eng\">English</option>\r\n <option value=\"fre\">French</option>\r\n <option value=\"ger\">German</option>\r\n <option value=\"dut\">Dutch</option>\r\n <option value=\"ita\">Italian</option>\r\n <option value=\"spa\">Spanish</option>\r\n <option value=\"por\">Portuguese</option>\r\n <option value=\"gre\">Greek</option>\r\n <option value=\"slo\">Slovak</option>\r\n <option value=\"cze\">Czech</option>\r\n <option value=\"hun\">Hungarian</option>\r\n <option value=\"pol\">Polish</option>\r\n <option value=\"nor\">Norwegian</option>\r\n <option value=\"swe\">Swedish</option>\r\n <option value=\"fin\">Finnish</option>\r\n <option value=\"rus\">Russian</option>\r\n</SELECT>','2008-03-07','2008-03-07','<br /><br />Select the Language:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_NOTE',NULL,'500__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br /><br />Additional Comments or Notes:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_NUMP',NULL,'300__a','I',5,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Number of Pages:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_REP',NULL,'088__a','T',NULL,4,30,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Other Report Numbers <i>(one per line)</i>:',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_RN',NULL,'037__a','I',35,NULL,NULL,NULL,'DEMO-<COMBO>-<YYYY>-???',NULL,'2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOART_TITLE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_ABS',NULL,'520__a','T',NULL,12,80,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Abstract:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_AU',NULL,'100__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Authors: <i>(one per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOBOO_CHANGE[]\" size=\"9\" multiple>\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"DEMOBOO_REP\">Other Report Numbers</option>\r\n <option value=\"DEMOBOO_TITLE\">Title</option>\r\n <option value=\"DEMOBOO_AU\">Author(s)</option>\r\n <option value=\"DEMOBOO_LANG\">Language</option>\r\n <option value=\"DEMOBOO_KW\">Keywords</option>\r\n <option value=\"DEMOBOO_ABS\">Abstract</option>\r\n <option value=\"DEMOBOO_NUMP\">Number of Pages</option>\r\n <option value=\"DEMOBOO_FILE\">File</option>\r\n</select>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_DATE',NULL,'269__c','I',10,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Date:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_END',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Finish Submission\" onclick=\"finish();\">\r\n</div>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_FILE',NULL,'','F',60,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_KW',NULL,'6531_a','T',NULL,4,50,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br /><br />Keywords:<br /><i>(one keyword/key-phrase per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_LANG',NULL,'041__a','S',NULL,NULL,NULL,NULL,NULL,'<SELECT name=\"DEMOBOO_LANG\">\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"eng\">English</option>\r\n <option value=\"fre\">French</option>\r\n <option value=\"ger\">German</option>\r\n <option value=\"dut\">Dutch</option>\r\n <option value=\"ita\">Italian</option>\r\n <option value=\"spa\">Spanish</option>\r\n <option value=\"por\">Portuguese</option>\r\n <option value=\"gre\">Greek</option>\r\n <option value=\"slo\">Slovak</option>\r\n <option value=\"cze\">Czech</option>\r\n <option value=\"hun\">Hungarian</option>\r\n <option value=\"pol\">Polish</option>\r\n <option value=\"nor\">Norwegian</option>\r\n <option value=\"swe\">Swedish</option>\r\n <option value=\"fin\">Finnish</option>\r\n <option value=\"rus\">Russian</option>\r\n</SELECT>','2008-03-07','2008-03-07','<br /><br />Select the Language:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_NOTE',NULL,'500__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br /><br />Additional Comments or Notes:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_NUMP',NULL,'300__a','I',5,NULL,NULL,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Number of Pages:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_REP',NULL,'088__a','T',NULL,4,30,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Other Report Numbers <i>(one per line)</i>:',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_RN',NULL,'037__a','I',35,NULL,NULL,NULL,'DEMO-BOOK-<YYYY>-???',NULL,'2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_TITLE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2008-03-07','2008-03-07','<br />Title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_COMNT',NULL,'','T',NULL,6,60,NULL,NULL,NULL,'2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_DECSN',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOBOO_DECSN\">\r\n<option value=\"Select:\">Select:</option>\r\n<option value=\"approve\">Approve</option>\r\n<option value=\"reject\">Reject</option>\r\n</select>\r\n','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOBOO_REGB',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Register Decision\" onclick=\"finish();\">\r\n</div>','2008-03-07','2008-03-07',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_ABS',NULL,'520__a','T',NULL,20,80,NULL,NULL,NULL,'2008-03-12','2008-03-12','<br />Abstract:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_AU',NULL,'100__a','T',NULL,6,60,NULL,NULL,NULL,'2008-03-12','2008-03-12','<br />Authors: <i>(one per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOPOE_CHANGE[]\" size=\"6\" multiple>\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"DEMOPOE_TITLE\">Title</option>\r\n <option value=\"DEMOPOE_AU\">Author(s)</option>\r\n <option value=\"DEMOPOE_LANG\">Language</option>\r\n <option value=\"DEMOPOE_YEAR\">Year</option>\r\n <option value=\"DEMOPOE_ABS\">Poem Text</option>\r\n</select>\r\n','2008-03-12','2008-03-12',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2008-03-12','2008-03-12',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_DUMMY',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'</td></tr></table><br /><br /></td></tr></table>','2008-03-12','2008-03-12',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_END',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Finish Submission\" onclick=\"finish();\">\r\n</div>','2008-03-12','2008-03-12',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_LANG',NULL,'041__a','S',NULL,NULL,NULL,NULL,NULL,'<SELECT name=\"DEMOPOE_LANG\">\r\n <option value=\"Select:\">Select:</option>\r\n <option value=\"eng\">English</option>\r\n <option value=\"fre\">French</option>\r\n <option value=\"ger\">German</option>\r\n <option value=\"dut\">Dutch</option>\r\n <option value=\"ita\">Italian</option>\r\n <option value=\"spa\">Spanish</option>\r\n <option value=\"por\">Portuguese</option>\r\n <option value=\"gre\">Greek</option>\r\n <option value=\"slo\">Slovak</option>\r\n <option value=\"cze\">Czech</option>\r\n <option value=\"hun\">Hungarian</option>\r\n <option value=\"pol\">Polish</option>\r\n <option value=\"nor\">Norwegian</option>\r\n <option value=\"swe\">Swedish</option>\r\n <option value=\"fin\">Finnish</option>\r\n <option value=\"rus\">Russian</option>\r\n</SELECT>','2008-03-12','2008-03-12','<br /><br />Select the Language:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_RN',NULL,'037__a','I',35,NULL,NULL,NULL,'DEMO-POETRY-<YYYY>-???',NULL,'2008-03-12','2008-03-12',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_TITLE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2008-03-12','2008-03-12','<br />Title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOPOE_YEAR',NULL,'909C0y','I',4,NULL,NULL,4,NULL,NULL,'2008-03-12','2008-03-12','<br /><br />Year:&nbsp;',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_CHANGE',NULL,'','S',NULL,NULL,NULL,NULL,NULL,'<div id=\"1\" STYLE=\"position:relative;visibility:hidden;\">\r\n<select name=\"DEMOJRN_CHANGE[]\" size=\"2\" multiple>\r\n <option selected value=\"DEMOJRN_TYPE\">3</option>\r\n <option selected value=\"DEMOJRN_ISSUES\">4</option>\r\n <option selected value=\"DEMOJRN_AU\">12</option>\r\n <option selected value=\"DEMOJRN_EMAIL\">13</option>\r\n <option selected value=\"DEMOJRN_TITLEE\">14</option>\r\n <option selected value=\"DEMOJRN_TITLEF\">15</option>\r\n <option selected value=\"DEMOJRN_ABSE\">16</option>\r\n <option selected value=\"DEMOJRN_ABSF\">17</option>\r\n <option selected value=\"DEMOJRN_IN\">18</option>\r\n <option selected value=\"DEMOJRN_ENDING\">19</option>\r\n <option selected value=\"DEMOJRN_CATEG\">20</option>\r\n</select>\r\n</div>','2009-01-09','2009-02-20',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_TYPE',NULL,'691__a','S',NULL,NULL,NULL,NULL,NULL,'<select name=\"DEMOJRN_TYPE\">\r\n<option value=\"Select:\">Select:</option>\r\n\r\n<option value=\"DRAFT\">Offline</option>\r\n<option value=\"ONLINE\">Online</option>\r\n</select><small>[<a target=\"_blank\" href=\"/help/admin/webjournal-editor-guide#offlineVsOnline\">?</a>]</small>','2008-12-04','2009-02-20','<TABLE WIDTH=\"100%\" BGCOLOR=\"#D3E3E2\" ALIGN=\"center\" CELLSPACING=\"2\" CELLPADDING=\"2\" BORDER=\"1\"><TR><TD ALIGN=\"left\" colspan=\"2\"><br /><b>Update an Atlantis Times article:</b></TD></TR><TR><TD align=\"center\"><span style=\"color: red;\">*</span>Status:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_AU',NULL,'100__a','T',NULL,4,60,NULL,NULL,NULL,'2008-09-23','2009-02-20','</TD></TR><TR><TD><br /><br />Author(s): <i>(one per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_EMAIL',NULL,'859__a','T',NULL,4,60,NULL,NULL,NULL,'2008-09-23','2009-02-20','</TD><TD><br /><br />E-mail(s) of the author(s): <i>(one per line)</i><br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_TITLEE',NULL,'245__a','T',NULL,5,60,NULL,NULL,NULL,'2008-09-23','2009-02-20','</TD></TR><TR><TD><br /><br />English title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_TITLEF',NULL,'246_1a','T',NULL,5,60,NULL,NULL,NULL,'2008-09-23','2009-02-20','</TD><TD><br /><br />French title:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_ABSF',NULL,'590__b','R',NULL,100,90,NULL,NULL,'from invenio.htmlutils import get_html_text_editor\r\nfrom invenio.config import CFG_SITE_URL\r\nfrom invenio.search_engine_utils import get_fieldvalues\r\nimport os\r\n\r\nif (\'modify\' in curdir) and not os.path.exists(\"%s/DEMOJRN_ABSF\" % curdir):\r\n try:\r\n content = get_fieldvalues(int(sysno), \'590__b\')[0]\r\n except:\r\n content = \'\'\r\nelif os.path.exists(\"%s/DEMOJRN_ABSE\" % curdir):\r\n content = file(\"%s/DEMOJRN_ABSE\" % curdir).read()\r\nelse:\r\n content = \'\'\r\n\r\ntext = get_html_text_editor(\"DEMOJRN_ABSF\", id=\"BulletinCKEditor1\", content=content, toolbar_set=\"WebJournal\", width=\'522px\', height=\'700px\', file_upload_url=CFG_SITE_URL + \'/submit/attachfile\', custom_configurations_path=\'/ckeditor/journal-editor-config.js\')','2008-09-23','2009-02-23','</td><td><br />French Article:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_CONT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"Continue\" onclick=\"finish();\" />\r\n</div>','2008-10-06','2008-10-06',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_END',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<INPUT TYPE=\"button\" class=\"adminbutton\" name=\"endS\" width=\"400\" height=\"50\" value=\"Finish Submission\" onclick=\"finish();\">\r\n</div>','2008-09-23','2009-02-20','</td></tr><tr><td colspan=\"2\">',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_ISSUES',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'from invenio.search_engine_utils import get_fieldvalues\r\nfrom invenio.webjournal_utils import get_next_journal_issues, get_current_issue, get_journal_issue_grouping\r\nimport os\r\n\r\norders_and_issues = [(\'\',\'\')]*4\r\n\r\nif (\'modify\' in curdir) and not os.path.exists(\"%s/DEMOJRN_ISSUE1\" % curdir):\r\n try:\r\n orders = get_fieldvalues(int(sysno), \'773__c\')\r\n issues = get_fieldvalues(int(sysno), \'773__n\')\r\n orders_and_issues = zip(orders, issues) + orders_and_issues\r\n except:\r\n pass\r\nelif (\'running\' in curdir) and not os.path.exists(\"%s/DEMOJRN_ISSUE1\" % curdir):\r\n try:\r\n journal_name = \'AtlantisTimes\'\r\n current_issue = get_current_issue(\'en\', journal_name)\r\n nb_issues = get_journal_issue_grouping(journal_name)\r\n next_issue_numbers = get_next_journal_issues(current_issue, journal_name, nb_issues)\r\n orders_and_issues = zip([\'\']*4, next_issue_numbers) + orders_and_issues\r\n except:\r\n pass\r\nissues_fields = []\r\nsingle_issue_and_order_tmpl = \'\'\'\r\n<input type=\"text\" name=\"DEMOJRN_ORDER%i\" size=\"2\" value=\"%s\" />\r\n<input type=\"text\" name=\"DEMOJRN_ISSUE%i\" size=\"7\" value=\"%s\" />\'\'\'\r\ni = 1\r\nfor order_and_issue in orders_and_issues[:4]:\r\n order = order_and_issue[0]\r\n issue = order_and_issue[1]\r\n issues_fields.append(single_issue_and_order_tmpl % (i, order, i, issue))\r\n i += 1\r\n\r\ntext = \'<br/>\'.join(issues_fields)\r\n','2009-02-20','2009-02-23','</TD><TD align=\"center\"><span style=\"color: red;\">*</span>Order(s) <small><i>(digit)</i></small> and issue(s) <small><i>(xx/YYYY)</i></small> of the article:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_RN',NULL,'037__a','I',35,NULL,NULL,NULL,'BUL-<COMBO>-<YYYY>-???',NULL,'2008-10-06','2009-02-20',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_IN',NULL,'595__a','H',NULL,NULL,NULL,NULL,'Atlantis Times',NULL,'2008-09-23','2009-02-20',' ',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_ABSE',NULL,'520__b','R',NULL,100,90,NULL,NULL,'from invenio.htmlutils import get_html_text_editor\r\nfrom invenio.config import CFG_SITE_URL\r\nfrom invenio.search_engine_utils import get_fieldvalues\r\nimport os\r\n\r\n\r\nif (\'modify\' in curdir) and not os.path.exists(\"%s/DEMOJRN_ABSE\" % curdir):\r\n try:\r\n content = get_fieldvalues(int(sysno), \'520__b\')[0]\r\n except:\r\n content = \'\'\r\nelif os.path.exists(\"%s/DEMOJRN_ABSE\" % curdir):\r\n content = file(\"%s/DEMOJRN_ABSE\" % curdir).read()\r\nelse:\r\n content = \'\'\r\n\r\ntext = get_html_text_editor(\"DEMOJRN_ABSE\",id=\"BulletinCKEditor2\", content=content, toolbar_set=\"WebJournal\", width=\'522px\', height=\'700px\', file_upload_url=CFG_SITE_URL + \'/submit/attachfile\', custom_configurations_path=\'/ckeditor/journal-editor-config.js\')\r\n\r\n','2008-09-22','2009-02-23','</td></tr><tr><td><br />English Article:<br />',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_ENDING',NULL,'','H',NULL,NULL,NULL,NULL,NULL,NULL,'2009-02-06','2009-02-20','</td></tr></table>',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOJRN_CATEG',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'# Solve usual problem with submit/direct?.. links that bypass \r\n# the comboXXX (category selection) of the submission. Retrieve \r\n# the value, and set it (only in the case of MBI)\r\n\r\nfrom invenio.search_engine_utils import get_fieldvalues\r\n\r\nif \"modify\" in curdir:\r\n try:\r\n comboDEMOJRNfile = file(\"%s/%s\" % (curdir,\'comboDEMOJRN\'), \'w\')\r\n report_number = get_fieldvalues(int(sysno), \'037__a\')[0]\r\n category = report_number.split(\'-\')[1]\r\n comboDEMOJRNfile.write(category)\r\n comboDEMOJRNfile.close()\r\n except:\r\n text = \'\'','2009-10-15','2009-10-15',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_ASPECT',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'from invenio.bibencode_websubmit import websumbit_aspect_ratio_form_element\r\n\r\ntext = websumbit_aspect_ratio_form_element(curdir, doctype, uid, access)','2012-02-16','2012-02-16','Aspect Ratio',NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_AU',NULL,'100__a','T',NULL,6,80,NULL,NULL,NULL,'2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_DESCR',NULL,'520__a','T',NULL,12,80,NULL,NULL,NULL,'2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_FILE',NULL,'','F',NULL,NULL,NULL,NULL,NULL,NULL,'2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_SINGLE',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'from invenio.bibencode_websubmit import (get_session_id, websubmit_singlepage)\r\n\r\n# Retrieve session id\r\ntry:\r\n # User info is defined only in MBI/MPI actions...\r\n session_id = get_session_id(None, uid, user_info) \r\nexcept:\r\n session_id = get_session_id(req, uid, {})\r\n\r\ntext = websubmit_singlepage(curdir, doctype, uid, access, session_id)','2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_SUBMIT',NULL,'','D',NULL,NULL,NULL,NULL,NULL,'<div align=\"center\">\r\n<input type=\"button\" class=\"adminbutton\" width=\"400\" height=\"50\" name=\"endS\" value=\"finish submission\" onclick=\"finish();\" />\r\n</div>','2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_TITLE',NULL,'245__a','I',NULL,NULL,NULL,NULL,NULL,NULL,'2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFIELDDESC VALUES ('DEMOVID_YEAR',NULL,'909C0y','I',4,NULL,NULL,4,NULL,NULL,'2012-02-16','2012-02-16',NULL,NULL,0);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Mail_Submitter',70,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Print_Success',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Move_Photos_to_Storage',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Insert_Record',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Make_Record',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Move_to_Done',80,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPIC','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Create_Modify_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Is_Original_Submitter',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Move_Photos_to_Storage',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Print_Success_MBI',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Send_Modify_Mail',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPIC','Move_to_Done',90,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Create_Upload_Files_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Move_to_Done',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Print_Success',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Mail_Submitter',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Move_Uploaded_Files_to_Storage',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Is_Original_Submitter',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('SRV','DEMOPIC','Get_Recid',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Move_to_Done',90,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Mail_Submitter',80,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Make_Record',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Insert_Record',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Print_Success',70,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Move_Files_to_Storage',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Stamp_Uploaded_Files',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOTHE','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Create_Modify_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Move_to_Done',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Send_Modify_Mail',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Print_Success_MBI',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Is_Original_Submitter',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOTHE','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Print_Success',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Insert_Record',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Make_Record',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOART','Mail_Submitter',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Create_Modify_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Move_to_Done',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Send_Modify_Mail',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Print_Success_MBI',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Is_Original_Submitter',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOART','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Create_Modify_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Move_to_Done',90,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Send_Modify_Mail',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Print_Success_MBI',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Move_Uploaded_Files_to_Storage',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Is_Original_Submitter',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOBOO','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Test_Status',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Is_Referee',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','CaseEDS',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Send_APP_Mail',90,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Print_Success_APP',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Update_Approval_DB',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Insert_Record',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Print_Success_APP',60,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Send_APP_Mail',70,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Move_From_Pending',20,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Recid',30,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Info',40,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Report_Number',10,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Update_Approval_DB',50,3);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Move_to_Done',80,3);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Move_to_Pending',90,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Print_Success',80,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Send_Approval_Request',70,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Update_Approval_DB',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Mail_Submitter',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Move_Files_to_Storage',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Make_Dummy_MARC_XML_Record',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOBOO','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Make_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Info',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Recid',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Move_From_Pending',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('APP','DEMOBOO','Move_to_Done',100,2);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Make_Record',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Insert_Record',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Print_Success',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Mail_Submitter',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOPOE','Move_to_Done',70,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Is_Original_Submitter',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Create_Modify_Interface',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Move_to_Done',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Print_Success_MBI',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Is_Original_Submitter',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOPOE','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Move_to_Done',80,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Mail_Submitter',70,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Print_Success',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Insert_Record',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Make_Record',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Move_CKEditor_Files_to_Storage',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOJRN','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Get_Report_Number',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Get_Recid',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Create_Modify_Interface',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Move_to_Done',90,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Send_Modify_Mail',80,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Print_Success_MBI',70,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Move_Files_to_Storage',60,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Insert_Modify_Record',50,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Make_Modify_Record',40,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Move_CKEditor_Files_to_Storage',30,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Get_Recid',20,2);
INSERT INTO sbmFUNCTIONS VALUES ('MBI','DEMOJRN','Get_Report_Number',10,2);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Create_Recid',10,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Insert_Record',40,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Mail_Submitter',60,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Make_Record',30,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Print_Success',50,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Report_Number_Generation',20,1);
INSERT INTO sbmFUNCTIONS VALUES ('SBI','DEMOVID','Video_Processing',70,1);
INSERT INTO sbmIMPLEMENT VALUES ('DEMOPIC','SBI','Y','SBIDEMOPIC',1,'2007-09-13','2007-10-04',1,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOPIC','MBI','Y','MBIDEMOPIC',1,'2007-10-04','2007-10-04',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOPIC','SRV','Y','SRVDEMOPIC',1,'2009-04-09','2009-04-09',3,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOTHE','SBI','Y','SBIDEMOTHE',1,'2008-03-02','2008-03-05',1,'','1',1,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOTHE','MBI','Y','MBIDEMOTHE',1,'2008-03-05','2008-03-05',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOART','SBI','Y','SBIDEMOART',1,'2008-03-06','2008-03-07',1,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOART','MBI','Y','MBIDEMOART',1,'2008-03-07','2008-03-07',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOBOO','SBI','Y','SBIDEMOBOO',1,'2008-03-06','2008-03-07',1,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOBOO','MBI','Y','MBIDEMOBOO',1,'2008-03-07','2008-03-07',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOBOO','APP','Y','APPDEMOBOO',1,'2002-05-06','2002-05-28',3,'0','0',0,1,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOPOE','SBI','Y','SBIDEMOPOE',2,'2008-03-12','2008-03-12',1,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOPOE','MBI','Y','MBIDEMOPOE',1,'2008-03-12','2008-03-12',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOJRN','SBI','Y','SBIDEMOJRN',1,'2008-09-18','2009-02-23',1,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOJRN','MBI','Y','MBIDEMOJRN',1,'2008-09-18','2009-02-23',2,'','',0,0,'');
INSERT INTO sbmIMPLEMENT VALUES ('DEMOVID','SBI','Y','SBIDEMOVID',1,'2012-02-16','2012-02-16',1,'','',0,0,'');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','authorfile','DEMOPIC_PHOTOG');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','counterpath','lastid_DEMOPIC_<PA>categ</PA>_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','createTemplate','DEMOPICcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','documenttype','picture');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','edsrn','DEMOPIC_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','fieldnameMBI','DEMOPIC_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','iconsize','180>,700>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','modifyTemplate','DEMOPICmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','newrnin','NEWRN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','paths_and_suffixes','{\"DEMOPIC_FILE\":\"\"}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','rename','<PA>file:DEMOPIC_RN</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','rnformat','DEMO-PICTURE-<PA>categ</PA>-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','rnin','comboDEMOPIC');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','sourceDoc','photos');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','sourceTemplate','DEMOPIC.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','titleFile','DEMOPIC_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','doctypes','DEMOPIC_FILE=Picture|Additional=Additional Document');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canReviseDoctypes','*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','maxFilesDoctypes','Additional=1');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canNameNewFiles','1');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canRenameDoctypes','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canCommentDoctypes','*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canAddFormatDoctypes','DEMOPIC_FILE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','showLinks','1');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','keepDefault','1');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','restrictions','=Public|restricted_picture=Private');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canRestrictDoctypes','*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','canDeleteDoctypes','*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','createIconDoctypes','*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPIC','forceFileRevision','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','counterpath','lastid_DEMOTHE_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','edsrn','DEMOTHE_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','rnformat','DEMO-THESIS-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','rnin','comboDEMOTHE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','files_to_be_stamped','DEMOTHE_FILE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','latex_template','demo-stamp-left.tex');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','stamp','first');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','latex_template_vars','{\'REPORTNUMBER\':\'FILE:DEMOTHE_RN\',\'DATE\':\'FILE:DEMOTHE_DATE\'}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','createTemplate','DEMOTHEcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','sourceTemplate','DEMOTHE.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','documenttype','fulltext');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','iconsize','180>,700>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','paths_and_suffixes','{\"DEMOTHE_FILE\":\"\"}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','rename','<PA>file:DEMOTHE_RN</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','newrnin','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','authorfile','DEMOTHE_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','titleFile','DEMOTHE_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','fieldnameMBI','DEMOTHE_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','modifyTemplate','DEMOTHEmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOTHE','sourceDoc','Thesis');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','authorfile','DEMOART_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','counterpath','lastid_DEMOART_<PA>categ</PA>_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','createTemplate','DEMOARTcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','documenttype','fulltext');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','edsrn','DEMOART_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','fieldnameMBI','DEMOART_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','iconsize','180');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','modifyTemplate','DEMOARTmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','newrnin','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','paths_and_suffixes','{\"DEMOART_FILE\":\"\"}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','rename','<PA>file:DEMOART_RN</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','rnformat','DEMO-<PA>categ</PA>-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','rnin','comboDEMOART');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','sourceDoc','Textual Document');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','sourceTemplate','DEMOART.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','titleFile','DEMOART_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOART','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','counterpath','lastid_DEMOBOO_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','edsrn','DEMOBOO_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','rnformat','DEMO-BOOK-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','rnin','comboDEMOBOO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','newrnin','NEWRN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','status','APPROVAL');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','authorfile','DEMOBOO_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','titleFile','DEMOBOO_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','categformatDAM','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','addressesDAM','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','directory','DEMOBOO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','fieldnameMBI','DEMOBOO_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','modifyTemplate','DEMOBOOmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','sourceTemplate','DEMOBOO.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','sourceDoc','BOOK');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','casevalues','approve,reject');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','casesteps','2,3');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','casevariable','DEMOBOO_DECSN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','casedefault','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','categformatAPP','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','addressesAPP','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','createTemplate','DEMOBOOcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','documenttype','fulltext');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','iconsize','180>,700>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','paths_and_suffixes','{\"DEMOBOO_FILE\":\"\"}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','rename','<PA>file:DEMOBOO_RN</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','dummyrec_source_tpl','DEMOBOO.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','dummyrec_create_tpl','DEMOBOOcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','decision_file','DEMOBOO_DECSN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','comments_file','DEMOBOO_COMNT');
INSERT INTO sbmPARAMETERS VALUES ('DEMOBOO','elementNameToDoctype','DEMOBOO_FILE=DEMOBOO_FILE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','authorfile','DEMOPOE_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','counterpath','lastid_DEMOPOE_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','createTemplate','DEMOPOEcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','edsrn','DEMOPOE_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','fieldnameMBI','DEMOPOE_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','modifyTemplate','DEMOPOEmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','newrnin','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','rnformat','DEMO-POETRY-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','rnin','comboDEMOPOE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','sourceDoc','Poem');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','sourceTemplate','DEMOPOE.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','titleFile','DEMOPOE_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOPOE','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','record_search_pattern','collection:ATLANTISTIMES*');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','paths_and_suffixes','{\'image\':\"image\", \'file\':\"file\", \'flash\':\"flash\", \'media\':\'media\'}');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','documenttype','picture');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','rename','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','addressesMBI','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','sourceTemplate','DEMOJRN.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','edsrn','DEMOJRN_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','modifyTemplate','DEMOJRNmodify.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','iconsize','300>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','createTemplate','DEMOJRNcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','newrnin','');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','titleFile','DEMOJRN_TITLEE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','authorfile','DEMOJRN_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','fieldnameMBI','DEMOJRN_CHANGE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','sourceDoc','Textual Document');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','counterpath','lastid_DEMOJRN_<PA>categ</PA>_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','rnformat','BUL-<PA>categ</PA>-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','rnin','comboDEMOJRN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','yeargen','AUTO');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','input_fields','DEMOJRN_ABSE,DEMOJRN_ABSF');
INSERT INTO sbmPARAMETERS VALUES ('DEMOJRN','files','DEMOJRN_ABSE,DEMOJRN_ABSF');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','aspect','DEMOVID_ASPECT');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','authorfile','DEMOVID_AU');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','autorngen','Y');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','counterpath','lastid_DEMOVID_<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','createTemplate','DEMOVIDcreate.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','edsrn','DEMOVID_RN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','emailFile','SuE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','newrnin','NEWRN');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','rnformat','DEMO-VIDEO-<PA>yy</PA>');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','rnin','comboDEMOVID');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','sourceTemplate','DEMOVID.tpl');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','status','ADDED');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','title','DEMOVID_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','titleFile','DEMOVID_TITLE');
INSERT INTO sbmPARAMETERS VALUES ('DEMOVID','yeargen','AUTO');
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (2,'demo_jif','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,2,90);
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (3,'citation','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (1,3,10);
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,3,80);
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (4,'citerank_citation_t','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,4,70);
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (5,'citerank_pagerank_c','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,5,60);
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (6,'citerank_pagerank_t','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,6,50);
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (7,'selfcites','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (1,7,10);
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (15,7,80);
INSERT INTO externalcollection (id, name) VALUES (1, 'Amazon');
INSERT INTO externalcollection (id, name) VALUES (2, 'CERN EDMS');
INSERT INTO externalcollection (id, name) VALUES (3, 'CERN Indico');
INSERT INTO externalcollection (id, name) VALUES (4, 'CERN Intranet');
INSERT INTO externalcollection (id, name) VALUES (5, 'CiteSeer');
INSERT INTO externalcollection (id, name) VALUES (6, 'Google Books');
INSERT INTO externalcollection (id, name) VALUES (7, 'Google Scholar');
INSERT INTO externalcollection (id, name) VALUES (8, 'Google Web');
INSERT INTO externalcollection (id, name) VALUES (9, 'IEC');
INSERT INTO externalcollection (id, name) VALUES (10, 'IHS');
INSERT INTO externalcollection (id, name) VALUES (11, 'INSPEC');
INSERT INTO externalcollection (id, name) VALUES (12, 'ISO');
INSERT INTO externalcollection (id, name) VALUES (13, 'KISS Books/Journals');
INSERT INTO externalcollection (id, name) VALUES (14, 'KISS Preprints');
INSERT INTO externalcollection (id, name) VALUES (15, 'NEBIS');
INSERT INTO externalcollection (id, name) VALUES (16, 'SLAC Library Catalog');
INSERT INTO externalcollection (id, name) VALUES (17, 'INSPIRE');
INSERT INTO externalcollection (id, name) VALUES (18, 'Scirus');
INSERT INTO externalcollection (id, name) VALUES (19, 'Atlantis Institute Books');
INSERT INTO externalcollection (id, name) VALUES (20, 'Atlantis Institute Articles');
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (1,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,3,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,5,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,13,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,14,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,17,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,18,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (2,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (3,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (4,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (5,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,3,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,5,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,13,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,14,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,17,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,18,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (6,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (8,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (9,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (10,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (11,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (12,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (13,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (14,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,3,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,5,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,13,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,14,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,17,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,18,2);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (15,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (16,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (17,20,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,1,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,2,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,3,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,4,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,5,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,6,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,7,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,8,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,9,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,10,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,11,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,12,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,13,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,14,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,15,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,16,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,17,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,18,1);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,19,0);
INSERT INTO collection_externalcollection (id_collection,id_externalcollection,type) VALUES (18,20,0);
INSERT INTO knwKB VALUES ('1','DBCOLLID2COLL','DbCollID to Coll name correspondence.', NULL);
INSERT INTO knwKB VALUES ('2','EJOURNALS','Knowledge base of all known electronic journals. Useful for reference linking.', NULL);
INSERT INTO knwKB VALUES ('3','DBCOLLID2BIBTEX','Mapping between the 980 field and BibTeX entry types.', NULL);
INSERT INTO knwKB VALUES ('4','SEARCH-SYNONYM-JOURNAL','Knowledge base of journal title synonyms. Used during search time.', NULL);
INSERT INTO knwKB VALUES ('5','INDEX-SYNONYM-TITLE','Knowledge base of title word synonyms. Used during indexing time.', NULL);
INSERT INTO knwKB VALUES ('6','DBCOLLID2OPENGRAPHTYPE','Maps collection 980 field to an Open Graph type.', NULL);
INSERT INTO knwKB VALUES ('7','LICENSE2URL','Maps a license name to its URL.', NULL);
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('ARTICLE','Published Article', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('PREPRINT','Preprint', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('THESIS','Thesis', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('BOOK','Book', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('REPORT','Report', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('PICTURE','Pictures', '1');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('AAS Photo Bull.','AAS Photo Bull.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Accredit. Qual. Assur.','Accredit. Qual. Assur.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Acoust. Phys.','Acoust. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Acoust. Res. Lett.','Acoust. Res. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Acta Astron.','Acta Astron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Adv. Comput. Math.','Adv. Comput. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Aequ. Math.','Aequ. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Afr. Skies','Afr. Skies', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Algorithmica','Algorithmica', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Am. J. Phys.','Am. J. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Ann. Phys.','Ann. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Annu. Rev. Astron. Astrophys.','Annu. Rev. Astron. Astrophys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Annu. Rev. Earth Planet. Sci.','Annu. Rev. Earth Planet. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Appl. Phys. Lett.','Appl. Phys. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Appl. Phys., A','Appl. Phys., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Appl. Phys., B','Appl. Phys., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Appl. Radiat. Isot.','Appl. Radiat. Isot.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Appl. Surf. Sci.','Appl. Surf. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Arch. Appl. Mech.','Arch. Appl. Mech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Arch. Envir. Contam. Toxicol.','Arch. Envir. Contam. Toxicol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Arch. Rational Mech. Analys.','Arch. Rational Mech. Analys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Astrophys. Rev.','Astron. Astrophys. Rev.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Astrophys.','Astron. Astrophys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Astrophys., Suppl.','Astron. Astrophys., Suppl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. J.','Astron. J.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Lett.','Astron. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Nachr.','Astron. Nachr.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astron. Rep.','Astron. Rep.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astropart. Phys.','Astropart. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astrophys. J.','Astrophys. J.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Astrophys. Norvegica','Astrophys. Norvegica', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Balt. Astron.','Balt. Astron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bioimaging','Bioimaging', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Biol. Cybern.','Biol. Cybern.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bull. Astron. Belgrade','Bull. Astron. Belgrade', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bull. Astron. Inst. Czech.','Bull. Astron. Inst. Czech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bull. Astron. Soc. India','Bull. Astron. Soc. India', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bull. Eng. Geol. Environ.','Bull. Eng. Geol. Environ.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Bull. Environ. Contam. Toxicol.','Bull. Environ. Contam. Toxicol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Calc. Var. Partial Differ. Equ.','Calc. Var. Partial Differ. Equ.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chaos','Chaos', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chaos Solitons Fractals','Chaos Solitons Fractals', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chem. Phys.','Chem. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chem. Phys. Lett.','Chem. Phys. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chin. Astron. Astrophys.','Chin. Astron. Astrophys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Chin. J. Astron. Astrophys.','Chin. J. Astron. Astrophys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Class. Quantum Gravity','Class. Quantum Gravity', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Clim. Dyn.','Clim. Dyn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Colloid Polym. Sci.','Colloid Polym. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Combinatorica','Combinatorica', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Combust. Theory Model.','Combust. Theory Model.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Commun. Math. Phys.','Commun. Math. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comment. Math. Helv.','Comment. Math. Helv.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comput. Mech.','Comput. Mech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comput. Phys.','Comput. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comput. Phys. Commun.','Comput. Phys. Commun.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comput. Sci. Eng.','Comput. Sci. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Comput. Vis. Sci.','Comput. Vis. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Computing','Computing', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Constr. Approx.','Constr. Approx.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Contin. Mech. Thermodyn.','Contin. Mech. Thermodyn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Contrib. Astron. Obs. Skaln. Pleso','Contrib. Astron. Obs. Skaln. Pleso', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Contrib. Astron. Obs. Skaln. Pleso Suppl.','Contrib. Astron. Obs. Skaln. Pleso Suppl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Cryogenics','Cryogenics', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Crystallogr. Rep.','Crystallogr. Rep.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Curr. Appl. Phys.','Curr. Appl. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Curr. Opin. Solid State Mater. Sci.','Curr. Opin. Solid State Mater. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Discret. Comput. Geom.','Discret. Comput. Geom.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Displays','Displays', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Distrib. Comput.','Distrib. Comput.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Distrib. Syst. Eng.','Distrib. Syst. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Dokl. Phys.','Dokl. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Electrochem. Solid State Lett.','Electrochem. Solid State Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Electron. Lett.','Electron. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Elem. Math.','Elem. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Environ. Geol.','Environ. Geol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Environ. Manage.','Environ. Manage.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Biophys. J. Biophys. Lett.','Eur. Biophys. J. Biophys. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. J. Phys.','Eur. J. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., A','Eur. Phys. J., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., Appl. Phys.','Eur. Phys. J., Appl. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., B','Eur. Phys. J., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., C','Eur. Phys. J., C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., D','Eur. Phys. J., D', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Eur. Phys. J., E','Eur. Phys. J., E', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Europhys. Lett.','Europhys. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Europhys. News','Europhys. News', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Exp. Fluids','Exp. Fluids', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Few-Body Syst.','Few-Body Syst.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Finan. Stoch.','Finan. Stoch.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Fluid Dyn. Res.','Fluid Dyn. Res.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Geom. Funct. Anal.','Geom. Funct. Anal.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Heat Mass Transf.','Heat Mass Transf.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('High Energy Phys. Libr. Webzine','High Energy Phys. Libr. Webzine', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('High Perform. Polym.','High Perform. Polym.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Circ. Devices Syst.','IEE Proc., Circ. Devices Syst.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Commun.','IEE Proc., Commun.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Comput. Digit. Tech.','IEE Proc., Comput. Digit. Tech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Control Theory Appl.','IEE Proc., Control Theory Appl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Electr. Power Appl.','IEE Proc., Electr. Power Appl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Gener. Transm. Distrib.','IEE Proc., Gener. Transm. Distrib.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Microw. Antennas Propag.','IEE Proc., Microw. Antennas Propag.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Optoelectron.','IEE Proc., Optoelectron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Radar, Sonar Navig.','IEE Proc., Radar, Sonar Navig.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Sci. Meas. Technol.','IEE Proc., Sci. Meas. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Softw. Eng.','IEE Proc., Softw. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('IEE Proc., Vis. Image Signal Process.','IEE Proc., Vis. Image Signal Process.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Image Vis. Comput.','Image Vis. Comput.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Inform. Forsch. Entwickl.','Inform. Forsch. Entwickl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Inform. Spektr.','Inform. Spektr.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Infrared Phys. Technol.','Infrared Phys. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Int. J. Digit. Libr.','Int. J. Digit. Libr.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Int. J. Doc. Anal. Recogn.','Int. J. Doc. Anal. Recogn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Int. J. Nonlinear Mech.','Int. J. Nonlinear Mech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Int. J. Softw. Tools Technol. Transf.','Int. J. Softw. Tools Technol. Transf.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Invent. Math.','Invent. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Inverse Probl.','Inverse Probl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Acoust. Soc. Am.','J. Acoust. Soc. Am.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Aerosp. Eng.','J. Aerosp. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Alloys. Compounds','J. Alloys. Compounds', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Am. Assoc. Var. Star Obs.','J. Am. Assoc. Var. Star Obs.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Appl. Mech.','J. Appl. Mech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Appl. Phys.','J. Appl. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Atmos. Solar Terrest. Phys.','J. Atmos. Solar Terrest. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Br. Astron. Assoc.','J. Br. Astron. Assoc.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Chem. Phys.','J. Chem. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Classif.','J. Classif.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Comput. Inf. Sci. Eng.','J. Comput. Inf. Sci. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Constr. Eng. Manage.','J. Constr. Eng. Manage.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Cryptol.','J. Cryptol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Cryst. Growth','J. Cryst. Growth', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Dyn. Syst. Meas. Control','J. Dyn. Syst. Meas. Control', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Electrochem. Soc.','J. Electrochem. Soc.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Electron Spectrosc. Relat. Phen.','J. Electron Spectrosc. Relat. Phen.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Electron. Imaging','J. Electron. Imaging', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Electron. Packag.','J. Electron. Packag.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Energy Eng.','J. Energy Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Energy Resour. Technol.','J. Energy Resour. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Eng. Mater. Technol.','J. Eng. Mater. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Eng. Mech.','J. Eng. Mech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Environ. Eng.','J. Environ. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Exp. Theor. Phys., JETP','J. Exp. Theor. Phys., JETP', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Fluids Eng.','J. Fluids Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Geom. Phys.','J. Geom. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Heat Transf.','J. Heat Transf.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. High Energy Phys.','J. High Energy Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Korean Astron. Soc.','J. Korean Astron. Soc.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Lumin.','J. Lumin.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Magn. Magn. Mater.','J. Magn. Magn. Mater.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Manage. Eng.','J. Manage. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Manuf. Sci. Eng.','J. Manuf. Sci. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Mater. Civ. Eng.','J. Mater. Civ. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Math. Biol.','J. Math. Biol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Math. Phys.','J. Math. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Mech. Des.','J. Mech. Des.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Micromech. Microeng.','J. Micromech. Microeng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Opt.','J. Opt.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys., A','J. Phys., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys., B','J. Phys., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys., Condens. Matter','J. Phys., Condens. Matter', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys., D','J. Phys., D', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys., G','J. Phys., G', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys. I','J. Phys. I', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys. II','J. Phys. II', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys. III','J. Phys. III', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys. Chem. Ref. Data','J. Phys. Chem. Ref. Data', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Phys. Chem. Solids','J. Phys. Chem. Solids', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Quant. Spectrosc. Radiat. Transf.','J. Quant. Spectrosc. Radiat. Transf.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. R. Astron. Soc. Can.','J. R. Astron. Soc. Can.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Radiol. Prot.','J. Radiol. Prot.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Rheol.','J. Rheol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Solar Energy Eng.','J. Solar Energy Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Solid State Electrochem.','J. Solid State Electrochem.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Struct. Eng.','J. Struct. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Surv. Eng.','J. Surv. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Tribol.','J. Tribol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Turbomach.','J. Turbomach.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Vac. Sci. Technol.','J. Vac. Sci. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Vac. Sci. Technol., A','J. Vac. Sci. Technol., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Vac. Sci. Technol., B','J. Vac. Sci. Technol., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('J. Vib. Acoust.','J. Vib. Acoust.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('JETP','JETP', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('JETP Lett.','JETP Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Low Temp. Phys.','Low Temp. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Mach. Vis. Appl.','Mach. Vis. Appl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Mater. Res. Innov.','Mater. Res. Innov.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Mater. Sci. Eng., B','Mater. Sci. Eng., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Math. Ann.','Math. Ann.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Math. Model. Numer. Anal.','Math. Model. Numer. Anal.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Math. Program.','Math. Program.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Math. Z.','Math. Z.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Meas. Sci. Technol.','Meas. Sci. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Med. Phys.','Med. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Meteorit. Planet. Sci.','Meteorit. Planet. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Microelectron. Eng.','Microelectron. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Micron','Micron', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Microsc. Microanal.','Microsc. Microanal.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Microsyst. Technol.','Microsyst. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Mon. Not. R. Astron. Soc.','Mon. Not. R. Astron. Soc.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Multim. Syst.','Multim. Syst.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nanotech.','Nanotech.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Naturwiss.','Naturwiss.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Network','Network', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('New Astron.','New Astron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('New Astron. Rev.','New Astron. Rev.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nonlinearity','Nonlinearity', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nucl. Instrum. Methods Phys. Res., A','Nucl. Instrum. Methods Phys. Res., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nucl. Instrum. Methods Phys. Res., B','Nucl. Instrum. Methods Phys. Res., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nucl. Phys. B, Proc. Suppl.','Nucl. Phys. B, Proc. Suppl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nucl. Phys., A','Nucl. Phys., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nucl. Phys., B','Nucl. Phys., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Num. Math.','Num. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nuovo Cimento, A','Nuovo Cimento, A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nuovo Cimento, B','Nuovo Cimento, B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nuovo Cimento, C','Nuovo Cimento, C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Nuovo Cimento, D','Nuovo Cimento, D', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Obs.','Obs.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Opt. Commun.','Opt. Commun.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Opt. Eng.','Opt. Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Opt. Lasers Eng.','Opt. Lasers Eng.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Opt. Mater.','Opt. Mater.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Opt. Spectrosc.','Opt. Spectrosc.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. At. Nucl.','Phys. At. Nucl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Chem. Miner.','Phys. Chem. Miner.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Educ.','Phys. Educ.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Fluids','Phys. Fluids', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Fluids, A','Phys. Fluids, A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Fluids, B','Phys. Fluids, B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Lett., A','Phys. Lett., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Lett., B','Phys. Lett., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Med. Biol.','Phys. Med. Biol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Part. Nucl.','Phys. Part. Nucl.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Plasmas','Phys. Plasmas', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rep.','Phys. Rep.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., A','Phys. Rev., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., B','Phys. Rev., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., C','Phys. Rev., C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., D','Phys. Rev., D', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., E','Phys. Rev., E', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev., ser. 1','Phys. Rev., ser. 1', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev. Lett.','Phys. Rev. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev. Spec. Top. Accel. Beams','Phys. Rev. Spec. Top. Accel. Beams', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Rev.','Phys. Rev.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys. Solid State','Phys. Solid State', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physica, A','Physica, A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physica, B','Physica, B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physica, C','Physica, C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physica, D','Physica, D', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physica, E','Physica, E', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Physiol. Meas.','Physiol. Meas.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Planet. Space Sci.','Planet. Space Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Plasma Phys. Control. Fusion','Plasma Phys. Control. Fusion', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Plasma Phys. Rep.','Plasma Phys. Rep.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Plasma Sources Sci. Technol.','Plasma Sources Sci. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Polym. Bull.','Polym. Bull.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Powder Diffraction','Powder Diffraction', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Probab. Theory Relat. Fields','Probab. Theory Relat. Fields', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Proc. Astron. Soc. Aust.','Proc. Astron. Soc. Aust.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Proc. Nat. Acad. Sci.','Proc. Nat. Acad. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Prog. Cryst. Growth Charact. Mat.','Prog. Cryst. Growth Charact. Mat.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Prog. Part. Nucl. Phys.','Prog. Part. Nucl. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Prog. Quantum Electron.','Prog. Quantum Electron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Prog. Surf. Sci.','Prog. Surf. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Program','Program', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Publ. Astron. Soc. Aust.','Publ. Astron. Soc. Aust.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Publ. Astron. Soc. Jpn.','Publ. Astron. Soc. Jpn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Publ. Astron. Soc. Pac.','Publ. Astron. Soc. Pac.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Publ. Underst. Sci.','Publ. Underst. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Pure Appl. Opt.: J. Eur. Opt. Soc. P. A','Pure Appl. Opt.: J. Eur. Opt. Soc. P. A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Quantum Semiclass. Opt.: J. Eur. Opt. Soc. P. B','Quantum Semiclass. Opt.: J. Eur. Opt. Soc. P. B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Radiat. Environ. Biophys.','Radiat. Environ. Biophys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Radiat. Meas.','Radiat. Meas.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Radiat. Phys. Chem.','Radiat. Phys. Chem.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Radiologe','Radiologe', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Radioprotection','Radioprotection', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Rep. Math. Phys.','Rep. Math. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Rep. Prog. Phys.','Rep. Prog. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Res. Exp. Med.','Res. Exp. Med.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Rev. Mex. Astron. Astrofis.','Rev. Mex. Astron. Astrofis.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Rev. Mod. Phys.','Rev. Mod. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Rev. Sci. Instrum.','Rev. Sci. Instrum.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Sel. Math., New Ser.','Sel. Math., New Ser.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Semicond.','Semicond.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Semicond. Sci. Technol.','Semicond. Sci. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Shock Waves','Shock Waves', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM J. Appl. Math.','SIAM J. Appl. Math.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM J. Comput.','SIAM J. Comput.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM J. Math. Anal.','SIAM J. Math. Anal.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM J. Numer. Anal.','SIAM J. Numer. Anal.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM J. Optim.','SIAM J. Optim.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('SIAM Rev.','SIAM Rev.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Smart Mat. Struct.','Smart Mat. Struct.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Soft Comput.','Soft Comput.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Softw. Concepts Tools','Softw. Concepts Tools', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Solar Phys.','Solar Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Solid State Commun.','Solid State Commun.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Solid State Electron.','Solid State Electron.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Solid State Ion.','Solid State Ion.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Sov. Astron. Lett.','Sov. Astron. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Superconductor Science and Technology','Superconductor Science and Technology', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Surf. Coatings Techn.','Surf. Coatings Techn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Surf. Sci.','Surf. Sci.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Surf. Sci. Rep.','Surf. Sci. Rep.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Surf. Sci. Spectra','Surf. Sci. Spectra', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Synth. Metals','Synth. Metals', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Syst. Fam.','Syst. Fam.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Tech. Phys.','Tech. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Tech. Phys. Lett.','Tech. Phys. Lett.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Theor. Comput. Fluid Dyn.','Theor. Comput. Fluid Dyn.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Theory Comput. Syst.','Theory Comput. Syst.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Thin Solid Films','Thin Solid Films', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Tribol. Int.','Tribol. Int.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Ultramicroscopy','Ultramicroscopy', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Vacuum','Vacuum', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('VLDB J.','VLDB J.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Virtual J. Nanoscale Sci. Technol.','Virtual J. Nanoscale Sci. Technol.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Virtual J. Biol. Phys. Res.','Virtual J. Biol. Phys. Res.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Vis. Comput.','Vis. Comput.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Wave Motion','Wave Motion', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Waves Random Media','Waves Random Media', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Wear','Wear', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Z. Angew. Math. Phys.','Z. Angew. Math. Phys.', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Z. Phys., A','Z. Phys., A', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Z. Phys., B','Z. Phys., B', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Z. Phys., C','Z. Phys., C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Zphys-e.C','Zphys-e.C', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('ATLAS eNews','ATLAS eNews', '2');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('PICTURE','unpublished', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('PREPRINT','techreport', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('ARTICLE','article', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('REPORT','techreport', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('BOOK','book', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('THESIS','phdthesis', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('POETRY','unpublished', '3');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('PHRVD','Phys. Rev., D', '4');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('Phys.Rev.D','Phys. Rev., D', '4');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('beta','β', '5');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('β','beta', '5');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('ARTICLE','article', '6');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('BOOK','book', '6');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('VIDEO','video.other', '6');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CERN','http://copyright.cern.ch/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-3.0','http://creativecommons.org/licenses/by/3.0/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-SA-3.0','http://creativecommons.org/licenses/by-sa/3.0/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-NC-3.0','http://creativecommons.org/licenses/by-nc/3.0/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-ND-3.0','http://creativecommons.org/licenses/by-nd/3.0/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-NC-SA-3.0','http://creativecommons.org/licenses/by-nc-sa/3.0/', '7');
INSERT INTO knwKBRVAL (m_key,m_value,id_knwKB) VALUES ('CC-BY-NC-ND-3.0','http://creativecommons.org/licenses/by-nc-nd/3.0/', '7');
-- crcLIBRARY demo data:
-INSERT INTO crcLIBRARY (name, address, email, phone, notes) VALUES ('Atlantis Main Library', 'CH-1211 Geneva 23', 'atlantis@cds.cern.ch', '1234567', '');
-INSERT INTO crcLIBRARY (name, address, email, phone, notes) VALUES ('Atlantis HEP Library', 'CH-1211 Geneva 21', 'atlantis.hep@cds.cern.ch', '1234567', '');
+INSERT INTO crcLIBRARY (name, address, email, phone, type, notes) VALUES ('Atlantis Main Library', 'CH-1211 Geneva 23', 'atlantis@cds.cern.ch', '1234567', 'main', '');
+INSERT INTO crcLIBRARY (name, address, email, phone, type, notes) VALUES ('Atlantis HEP Library', 'CH-1211 Geneva 21', 'atlantis.hep@cds.cern.ch', '1234567', 'external', '');
-- crcITEM demo data:
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-34001', '34', '1', '', 'ABC-123', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-34002', '34', '2', '', 'HEP-12A', 'Book', '4 weeks', 'requested', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-33001', '33', '1', '', 'AZ.12-AK', 'Book', '4 weeks', 'on loan', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-32001', '32', '1', 'Reference', 'WDFG-54', 'Book', 'Not for loan', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-32002', '32', '2', '', 'RZ.612-MK', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-32003', '32', '1', '', 'RT-4654-E', 'Book', '4 weeks', 'missing', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-31001', '31', '2', '', '123LSKD', 'Book', '1 week', 'on loan', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-31002', '31', '1', '', 'QSQ452-S', 'Book', '1 week', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-30001', '30', '1', 'Reference', 'QSQS-52-S', 'Book', 'Not for loan', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-29001', '29', '1', '', 'AZD456-465', 'Book', '4 weeks', 'requested', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-28001', '28', '1', '', 'AZD5-456', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-27001', '27', '2', '', 'JLMQ-45-SQ', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-26001', '26', '1', '', 'AZD456-465', 'Book', '1 week', 'missing', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-25001', '25', '2', '', 'AGT-MLL5', 'Book', '4 weeks', 'on loan', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-24001', '24', '2', '', 'J56-475', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-23001', '23', '1', '', 'JHL-465.DS', 'Book', '4 weeks', 'requested', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-22001', '22', '1', '', 'AZD4E-865', 'Book', '1 week', 'requested', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
INSERT INTO crcITEM (barcode, id_bibrec, id_crcLIBRARY, collection, location, description, loan_period, status, creation_date, modification_date, number_of_requests)
VALUES ('bc-21001', '21', '2', '', 'MLL-DS.63', 'Book', '4 weeks', 'available', '2008-07-21 00:00:00', '2008-07-21 00:00:00', '0');
-- crcLOAN demo data:
INSERT INTO crcLOAN (id_crcBORROWER, id_bibrec, barcode, loaned_on, due_date, status, type, notes)
VALUES ('4', '33', 'bc-33001', NOW(), NOW() + INTERVAL 30 DAY, 'on loan', 'normal', '');
INSERT INTO crcLOAN (id_crcBORROWER, id_bibrec, barcode, loaned_on, due_date, status, type, notes)
VALUES ('5', '31', 'bc-31001', NOW(), NOW() + INTERVAL 7 DAY, 'on loan', 'normal', '');
INSERT INTO crcLOAN (id_crcBORROWER, id_bibrec, barcode, loaned_on, due_date, status, type, notes)
VALUES ('5', '25', 'bc-25001', NOW(), NOW() + INTERVAL 30 DAY, 'on loan', 'normal', '');
-- crcLOANREQUEST demo data:
INSERT INTO crcLOANREQUEST (id_crcBORROWER, id_bibrec, barcode, period_of_interest_from, period_of_interest_to, status, notes, request_date)
VALUES ('5', '34', 'bc-34002', NOW(), NOW() + INTERVAL 60 DAY, 'pending', '', NOW());
INSERT INTO crcLOANREQUEST (id_crcBORROWER, id_bibrec, barcode, period_of_interest_from, period_of_interest_to, status, notes, request_date)
VALUES ('6', '29', 'bc-29001', NOW(), NOW() + INTERVAL 45 DAY, 'pending', '', NOW());
INSERT INTO crcLOANREQUEST (id_crcBORROWER, id_bibrec, barcode, period_of_interest_from, period_of_interest_to, status, notes, request_date)
VALUES ('5', '33', 'bc-33001', NOW(), NOW() + INTERVAL 45 DAY, 'waiting', '', NOW());
INSERT INTO crcLOANREQUEST (id_crcBORROWER, id_bibrec, barcode, period_of_interest_from, period_of_interest_to, status, notes, request_date)
VALUES ('7', '22', 'bc-22001', NOW(), NOW() + INTERVAL 90 DAY, 'pending', '', NOW());
-- crcBORROWER demo data:
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Admin', 'admin@cds.cern.ch', '20003', '99-Z-019', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Jekyll', 'jekyll@cds.cern.ch', '01234', '21-Z-019', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Hyde', 'hyde@cds.cern.ch', '01574', '22-Z-119', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Dorian Gray', 'dorian.gray@cds.cern.ch', '33234', '38-Y-819', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Romeo Montague', 'romeo.montague@cds.cern.ch', '93844', '98-W-859', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Juliet Capulet', 'juliet.capulet@cds.cern.ch', '99874', '91-X-098', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Benvolio Montague', 'benvolio.montague@cds.cern.ch', '32354', '93-P-019', '2008-07-21 00:00:00', '');
INSERT INTO crcBORROWER (name, email, phone, address, borrower_since, notes)
VALUES ('Balthasar Montague', 'balthasar.montague@cds.cern.ch', '78644', '20-M-349', '2008-07-21 00:00:00', '');
-- switch on stemming for some indexes:
-UPDATE idxINDEX SET stemming_language='en' WHERE name IN ('global','abstract','keyword','title','fulltext');
+UPDATE idxINDEX SET stemming_language='en' WHERE name IN ('global','abstract','keyword','title','fulltext', 'miscellaneous');
-- exporting demo:
INSERT INTO expJOB (jobname) VALUES ('sitemap');
INSERT INTO expJOB (jobname) VALUES ('googlescholar');
INSERT INTO expJOB (jobname) VALUES ('marcxml');
-- WebJournal demo:
INSERT INTO jrnJOURNAL (id,name) VALUES(1,'AtlantisTimes');
INSERT INTO jrnISSUE (id_jrnJOURNAL,issue_number,issue_display,date_released,date_announced) VALUES (1,'02/2009','02-03/2009','2009-01-09','2009-01-09');
INSERT INTO jrnISSUE (id_jrnJOURNAL,issue_number,issue_display,date_released) VALUES (1,'03/2009','02-03/2009','2009-01-16');
-- BibAuthorID demo person assignment:
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (1,'uid','2');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (2,'uid','1');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (3,'uid','4');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (4,'uid','5');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (5,'uid','6');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (6,'uid','7');
INSERT INTO aidPERSONIDDATA (personid, tag, data) VALUES (7,'uid','8');
-- end of file
diff --git a/modules/miscutil/lib/dbquery.py b/modules/miscutil/lib/dbquery.py
index 748cfdc59..6b797395f 100644
--- a/modules/miscutil/lib/dbquery.py
+++ b/modules/miscutil/lib/dbquery.py
@@ -1,453 +1,458 @@
## This file is part of Invenio.
## Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Invenio utilities to run SQL queries.
The main API functions are:
- run_sql()
- run_sql_many()
- run_sql_with_limit()
but see the others as well.
"""
__revision__ = "$Id$"
# dbquery clients can import these from here:
# pylint: disable=W0611
from MySQLdb import Warning, Error, InterfaceError, DataError, \
DatabaseError, OperationalError, IntegrityError, \
InternalError, NotSupportedError, \
ProgrammingError
import gc
import os
import string
import time
import marshal
import re
import atexit
+
from zlib import compress, decompress
from thread import get_ident
from invenio import config
from invenio.config import CFG_ACCESS_CONTROL_LEVEL_SITE, \
CFG_MISCUTIL_SQL_USE_SQLALCHEMY, \
CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT
if CFG_MISCUTIL_SQL_USE_SQLALCHEMY:
try:
import sqlalchemy.pool as pool
import MySQLdb as mysqldb
mysqldb = pool.manage(mysqldb, use_threadlocal=True)
connect = mysqldb.connect
except ImportError:
CFG_MISCUTIL_SQL_USE_SQLALCHEMY = False
from MySQLdb import connect
else:
from MySQLdb import connect
## DB config variables. These variables are to be set in
## invenio-local.conf by admins and then replaced in situ in this file
## by calling "inveniocfg --update-dbexec".
## Note that they are defined here and not in config.py in order to
## prevent them from being exported accidentally elsewhere, as no-one
## should know DB credentials but this file.
## FIXME: this is more of a blast-from-the-past that should be fixed
## both here and in inveniocfg when the time permits.
try:
from invenio.dbquery_config import CFG_DATABASE_HOST, \
CFG_DATABASE_PORT, \
CFG_DATABASE_NAME, \
CFG_DATABASE_USER, \
CFG_DATABASE_PASS, \
CFG_DATABASE_TYPE, \
CFG_DATABASE_SLAVE
except ImportError:
CFG_DATABASE_HOST = 'localhost'
CFG_DATABASE_PORT = '3306'
CFG_DATABASE_NAME = 'invenio'
CFG_DATABASE_USER = 'invenio'
CFG_DATABASE_PASS = 'my123p$ss'
CFG_DATABASE_TYPE = 'mysql'
CFG_DATABASE_SLAVE = ''
_DB_CONN = {}
_DB_CONN[CFG_DATABASE_HOST] = {}
_DB_CONN[CFG_DATABASE_SLAVE] = {}
def unlock_all():
for dbhost in _DB_CONN.keys():
for db in _DB_CONN[dbhost].values():
try:
cur = db.cursor()
cur.execute("UNLOCK TABLES")
except:
pass
atexit.register(unlock_all)
class InvenioDbQueryWildcardLimitError(Exception):
"""Exception raised when query limit reached."""
def __init__(self, res):
"""Initialization."""
self.res = res
def _db_login(dbhost=CFG_DATABASE_HOST, relogin=0):
"""Login to the database."""
## Note: we are using "use_unicode=False", because we want to
## receive strings from MySQL as Python UTF-8 binary string
## objects, not as Python Unicode string objects, as of yet.
## Note: "charset='utf8'" is needed for recent MySQLdb versions
## (such as 1.2.1_p2 and above). For older MySQLdb versions such
## as 1.2.0, an explicit "init_command='SET NAMES utf8'" parameter
## would constitute an equivalent. But we are not bothering with
## older MySQLdb versions here, since we are recommending to
## upgrade to more recent versions anyway.
if CFG_MISCUTIL_SQL_USE_SQLALCHEMY:
return connect(host=dbhost, port=int(CFG_DATABASE_PORT),
db=CFG_DATABASE_NAME, user=CFG_DATABASE_USER,
passwd=CFG_DATABASE_PASS,
use_unicode=False, charset='utf8')
else:
thread_ident = (os.getpid(), get_ident())
if relogin:
connection = _DB_CONN[dbhost][thread_ident] = connect(host=dbhost,
port=int(CFG_DATABASE_PORT),
db=CFG_DATABASE_NAME,
user=CFG_DATABASE_USER,
passwd=CFG_DATABASE_PASS,
use_unicode=False, charset='utf8')
connection.autocommit(True)
return connection
else:
if _DB_CONN[dbhost].has_key(thread_ident):
return _DB_CONN[dbhost][thread_ident]
else:
connection = _DB_CONN[dbhost][thread_ident] = connect(host=dbhost,
port=int(CFG_DATABASE_PORT),
db=CFG_DATABASE_NAME,
user=CFG_DATABASE_USER,
passwd=CFG_DATABASE_PASS,
use_unicode=False, charset='utf8')
connection.autocommit(True)
return connection
def _db_logout(dbhost=CFG_DATABASE_HOST):
"""Close a connection."""
try:
del _DB_CONN[dbhost][(os.getpid(), get_ident())]
except KeyError:
pass
def close_connection(dbhost=CFG_DATABASE_HOST):
"""
Enforce the closing of a connection
Highly relevant in multi-processing and multi-threaded modules
"""
try:
- _DB_CONN[dbhost][(os.getpid(), get_ident())].close()
- del(_DB_CONN[dbhost][(os.getpid(), get_ident())])
+ db = _DB_CONN[dbhost][(os.getpid(), get_ident())]
+ cur = db.cursor()
+ cur.execute("UNLOCK TABLES")
+ db.close()
+ del _DB_CONN[dbhost][(os.getpid(), get_ident())]
except KeyError:
pass
def run_sql(sql, param=None, n=0, with_desc=False, with_dict=False, run_on_slave=False):
"""Run SQL on the server with PARAM and return result.
@param param: tuple of string params to insert in the query (see
notes below)
@param n: number of tuples in result (0 for unbounded)
@param with_desc: if True, will return a DB API 7-tuple describing
columns in query.
@param with_dict: if True, will return a list of dictionaries
composed of column-value pairs
@return: If SELECT, SHOW, DESCRIBE statements, return tuples of data,
followed by description if parameter with_desc is
provided.
If SELECT and with_dict=True, return a list of dictionaries
composed of column-value pairs, followed by description
if parameter with_desc is provided.
If INSERT, return last row id.
Otherwise return SQL result as provided by database.
@note: When the site is closed for maintenance (as governed by the
config variable CFG_ACCESS_CONTROL_LEVEL_SITE), do not attempt
to run any SQL queries but return empty list immediately.
Useful to be able to have the website up while MySQL database
is down for maintenance, hot copies, table repairs, etc.
@note: In case of problems, exceptions are returned according to
the Python DB API 2.0. The client code can import them from
this file and catch them.
"""
if CFG_ACCESS_CONTROL_LEVEL_SITE == 3:
# do not connect to the database as the site is closed for maintenance:
return []
if param:
param = tuple(param)
dbhost = CFG_DATABASE_HOST
if run_on_slave and CFG_DATABASE_SLAVE:
dbhost = CFG_DATABASE_SLAVE
if 'sql-logger' in getattr(config, 'CFG_DEVEL_TOOLS', []):
log_sql_query(dbhost, sql, param)
try:
db = _db_login(dbhost)
cur = db.cursor()
gc.disable()
rc = cur.execute(sql, param)
gc.enable()
except (OperationalError, InterfaceError): # unexpected disconnect, bad malloc error, etc
# FIXME: now reconnect is always forced, we may perhaps want to ping() first?
try:
db = _db_login(dbhost, relogin=1)
cur = db.cursor()
gc.disable()
rc = cur.execute(sql, param)
gc.enable()
except (OperationalError, InterfaceError): # unexpected disconnect, bad malloc error, etc
raise
if string.upper(string.split(sql)[0]) in ("SELECT", "SHOW", "DESC", "DESCRIBE"):
if n:
recset = cur.fetchmany(n)
else:
recset = cur.fetchall()
if with_dict: # return list of dictionaries
# let's extract column names
keys = [row[0] for row in cur.description]
# let's construct a list of dictionaries
list_dict_results = [dict(zip(*[keys, values])) for values in recset]
if with_desc:
return list_dict_results, cur.description
else:
return list_dict_results
else:
if with_desc:
return recset, cur.description
else:
return recset
else:
if string.upper(string.split(sql)[0]) == "INSERT":
rc = cur.lastrowid
return rc
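The `with_dict` branch above can be illustrated without a database: column names are taken from the first element of each `cursor.description` entry and zipped with every fetched row. A small sketch with simulated cursor data:

```python
# Simulated cursor output: description entries are DB API 7-tuples
# (truncated here to just the column name), recset is the fetched rows.
description = (('id', None), ('name', None))
recset = [(1, 'Atlantis Main Library'), (2, 'Atlantis HEP Library')]

# Same post-processing as run_sql(..., with_dict=True) performs:
keys = [col[0] for col in description]
list_dict_results = [dict(zip(keys, values)) for values in recset]
```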
def run_sql_many(query, params, limit=CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT, run_on_slave=False):
"""Run SQL on the server with PARAM.
This method does executemany and is therefore more efficient than execute
but it makes sense only with queries that affect the state of the database
(INSERT, UPDATE). That is why the result is just the number of affected rows.
@param params: tuple of tuple of string params to insert in the query
@param limit: query will be executed in parts when number of
parameters is greater than limit (each iteration runs at most
`limit' parameters)
@return: SQL result as provided by database
"""
dbhost = CFG_DATABASE_HOST
if run_on_slave and CFG_DATABASE_SLAVE:
dbhost = CFG_DATABASE_SLAVE
i = 0
r = None
while i < len(params):
## make partial query safely (mimicking procedure from run_sql())
try:
db = _db_login(dbhost)
cur = db.cursor()
gc.disable()
rc = cur.executemany(query, params[i:i + limit])
gc.enable()
except (OperationalError, InterfaceError):
try:
db = _db_login(dbhost, relogin=1)
cur = db.cursor()
gc.disable()
rc = cur.executemany(query, params[i:i + limit])
gc.enable()
except (OperationalError, InterfaceError):
raise
## collect its result:
if r is None:
r = rc
else:
r += rc
i += limit
return r
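The chunking loop in `run_sql_many()` can be sketched standalone: parameter tuples are executed in slices of at most `limit` rows and the per-slice affected-row counts are summed, exactly as above. Here `executemany` is stubbed to return the slice length (the stub names are illustrative):

```python
def executemany_stub(query, params):
    # Stand-in for cursor.executemany(): returns the affected-row count.
    return len(params)

def run_many_sketch(query, params, limit):
    i, r = 0, None
    while i < len(params):
        rc = executemany_stub(query, params[i:i + limit])  # one slice
        r = rc if r is None else r + rc                    # sum the counts
        i += limit
    return r
```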
def run_sql_with_limit(query, param=None, n=0, with_desc=False, wildcard_limit=0, run_on_slave=False):
"""This function should be used in some cases, instead of run_sql function, in order
to protect the db from queries that might take a log time to respond
Ex: search queries like [a-z]+ ; cern*; a->z;
The parameters are exactly the ones for run_sql function.
In case the query limit is reached, an InvenioDbQueryWildcardLimitError will be raised.
"""
try:
dummy = int(wildcard_limit)
except ValueError:
raise
if wildcard_limit < 1:  # no limit on the wildcard queries
return run_sql(query, param, n, with_desc, run_on_slave=run_on_slave)
safe_query = query + " LIMIT %s" % wildcard_limit
res = run_sql(safe_query, param, n, with_desc, run_on_slave=run_on_slave)
if len(res) == wildcard_limit:
raise InvenioDbQueryWildcardLimitError(res)
return res
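The guard above appends a `LIMIT` clause and treats a completely full result set as "limit reached". This can be sketched with a stubbed `run_sql` serving a canned 100-row table (the class and stub names are illustrative, not part of the module):

```python
class WildcardLimitErrorSketch(Exception):
    """Raised when the wildcard query fills its LIMIT."""

def run_sql_stub(query):
    rows = [('rec-%d' % i,) for i in range(100)]   # canned 100-row table
    if 'LIMIT' in query:
        rows = rows[:int(query.rsplit(None, 1)[1])]
    return rows

def run_with_limit_sketch(query, wildcard_limit):
    if wildcard_limit < 1:          # no limit on the wildcard queries
        return run_sql_stub(query)
    res = run_sql_stub(query + " LIMIT %s" % wildcard_limit)
    if len(res) == wildcard_limit:  # a full page means the limit was hit
        raise WildcardLimitErrorSketch(res)
    return res
```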
def blob_to_string(ablob):
"""Return string representation of ABLOB. Useful to treat MySQL
BLOBs in the same way for both recent and old MySQLdb versions.
"""
if ablob:
if type(ablob) is str:
# BLOB is already a string in MySQLdb 0.9.2
return ablob
else:
# BLOB is array.array in MySQLdb 1.0.0 and later
return ablob.tostring()
else:
return ablob
def log_sql_query(dbhost, sql, param=None):
"""Log SQL query into prefix/var/log/dbquery.log log file. In order
to enable logging of all SQL queries, please uncomment one line
in run_sql() above. Useful for fine-level debugging only!
"""
from flask import current_app
from invenio.config import CFG_LOGDIR
from invenio.dateutils import convert_datestruct_to_datetext
from invenio.textutils import indent_text
date_of_log = convert_datestruct_to_datetext(time.localtime())
message = date_of_log + '-->\n'
message += indent_text('Host:\n' + indent_text(str(dbhost), 2, wrap=True), 2)
message += indent_text('Query:\n' + indent_text(str(sql), 2, wrap=True), 2)
message += indent_text('Params:\n' + indent_text(str(param), 2, wrap=True), 2)
message += '-----------------------------\n\n'
try:
current_app.logger.info(message)
except:
pass
def get_table_update_time(tablename, run_on_slave=False):
"""Return update time of TABLENAME. TABLENAME can contain
wildcard `%' in which case we return the maximum update time
value.
"""
# Note: in order to work with all of MySQL 4.0, 4.1, 5.0, this
# function uses SHOW TABLE STATUS technique with a dirty column
# position lookup to return the correct value. (Making use of
# Index_Length column that is either of type long (when there are
# some indexes defined) or of type None (when there are no indexes
# defined, e.g. table is empty). When we shall use solely
# MySQL-5.0, we can employ a much cleaner technique of using
# SELECT UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES WHERE
# table_name='collection'.
res = run_sql("SHOW TABLE STATUS LIKE %s", (tablename,),
run_on_slave=run_on_slave)
update_times = [] # store all update times
for row in res:
if type(row[10]) is long or \
row[10] is None:
# MySQL-4.1 and 5.0 have creation_time in 11th position,
# so return next column:
update_times.append(str(row[12]))
else:
# MySQL-4.0 has creation_time in 10th position, which is
# of type datetime.datetime or str (depending on the
# version of MySQLdb), so return next column:
update_times.append(str(row[11]))
return max(update_times)
def get_table_status_info(tablename, run_on_slave=False):
"""Return table status information on TABLENAME. Returned is a
dict with keys like Name, Rows, Data_length, Max_data_length,
etc. If TABLENAME does not exist, return empty dict.
"""
# Note: again a hack so that it works on all MySQL 4.0, 4.1, 5.0
res = run_sql("SHOW TABLE STATUS LIKE %s", (tablename,),
run_on_slave=run_on_slave)
table_status_info = {} # store all update times
for row in res:
if type(row[10]) is long or \
row[10] is None:
# MySQL-4.1 and 5.0 have creation time in 11th position:
table_status_info['Name'] = row[0]
table_status_info['Rows'] = row[4]
table_status_info['Data_length'] = row[6]
table_status_info['Max_data_length'] = row[8]
table_status_info['Create_time'] = row[11]
table_status_info['Update_time'] = row[12]
else:
# MySQL-4.0 has creation_time in 10th position, which is
# of type datetime.datetime or str (depending on the
# version of MySQLdb):
table_status_info['Name'] = row[0]
table_status_info['Rows'] = row[3]
table_status_info['Data_length'] = row[5]
table_status_info['Max_data_length'] = row[7]
table_status_info['Create_time'] = row[10]
table_status_info['Update_time'] = row[11]
return table_status_info
def serialize_via_marshal(obj):
"""Serialize Python object via marshal into a compressed string."""
return compress(marshal.dumps(obj))
def deserialize_via_marshal(astring):
"""Decompress and deserialize string into a Python object via marshal."""
return marshal.loads(decompress(astring))
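A round-trip of the two helpers above; they depend only on the standard library, so this runs as-is. `marshal` handles plain Python containers and `zlib` shrinks the byte string:

```python
import marshal
from zlib import compress, decompress

def serialize_via_marshal(obj):
    """Serialize Python object via marshal into a compressed string."""
    return compress(marshal.dumps(obj))

def deserialize_via_marshal(astring):
    """Decompress and deserialize string into a Python object via marshal."""
    return marshal.loads(decompress(astring))

# Typical use: caching a structure such as a collection cache entry.
cache_entry = {'recids': [1, 2, 3], 'updated': '2008-07-21 00:00:00'}
blob = serialize_via_marshal(cache_entry)
```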
def wash_table_column_name(colname):
"""
Evaluate table-column name to see if it is clean.
This function accepts only names containing [a-zA-Z0-9_].
@param colname: The string to be checked
@type colname: str
@return: colname if test passed
@rtype: str
@raise Exception: Raises an exception if colname is invalid.
"""
if re.search(r'[^\w]', colname):
raise Exception('The table column %s is not valid.' % repr(colname))
return colname
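The washing rule above is a whitelist: any character outside `[a-zA-Z0-9_]` rejects the name, which blocks quoting and injection tricks in dynamically built column names. A self-contained sketch (raising `ValueError` instead of the bare `Exception`; the function name is illustrative):

```python
import re

def wash_colname_sketch(colname):
    # Whitelist check: \w is [a-zA-Z0-9_] for ASCII identifiers.
    if re.search(r'[^\w]', colname):
        raise ValueError('The table column %r is not valid.' % colname)
    return colname
```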
def real_escape_string(unescaped_string, run_on_slave=False):
"""
Escapes special characters in the unescaped string for use in a DB query.
@param unescaped_string: The string to be escaped
@type unescaped_string: str
@return: Returns the escaped string
@rtype: str
"""
dbhost = CFG_DATABASE_HOST
if run_on_slave and CFG_DATABASE_SLAVE:
dbhost = CFG_DATABASE_SLAVE
connection_object = _db_login(dbhost)
escaped_string = connection_object.escape_string(unescaped_string)
return escaped_string
diff --git a/modules/miscutil/lib/errorlib.py b/modules/miscutil/lib/errorlib.py
index 15d652b67..835bd8c76 100644
--- a/modules/miscutil/lib/errorlib.py
+++ b/modules/miscutil/lib/errorlib.py
@@ -1,575 +1,581 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
""" Error handling library """
__revision__ = "$Id$"
import traceback
import os
import sys
import time
import datetime
import re
import inspect
from cStringIO import StringIO
from invenio.config import CFG_SITE_LANG, CFG_LOGDIR, \
CFG_WEBALERT_ALERT_ENGINE_EMAIL, CFG_SITE_ADMIN_EMAIL, \
CFG_SITE_SUPPORT_EMAIL, CFG_SITE_NAME, CFG_SITE_URL, \
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES, \
CFG_SITE_ADMIN_EMAIL_EXCEPTIONS, \
CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER
from invenio.urlutils import wash_url_argument
from invenio.messages import wash_language, gettext_set_language
from invenio.dateutils import convert_datestruct_to_datetext
from invenio.dbquery import run_sql
## Regular expression to match possible password related variable that should
## be disclosed in frame analysis.
RE_PWD = re.compile(r"pwd|pass|p_pw", re.I)
def get_client_info(req):
"""
Returns a dictionary with client information
@param req: mod_python request
"""
try:
return {
'host': req.hostname,
'url': req.unparsed_uri,
'time': convert_datestruct_to_datetext(time.localtime()),
'browser': 'User-Agent' in req.headers_in and \
req.headers_in['User-Agent'] or "N/A",
'client_ip': req.remote_ip}
except:
return {}
def get_pretty_wide_client_info(req):
"""Return in a pretty way all the avilable information about the current
user/client"""
if req:
from invenio.webuser import collect_user_info
user_info = collect_user_info(req)
keys = user_info.keys()
keys.sort()
max_key = max([len(key) for key in keys])
ret = ""
fmt = "%% %is: %%s\n" % max_key
for key in keys:
if RE_PWD.search(key):
continue
if key in ('uri', 'referer'):
ret += fmt % (key, "<%s>" % user_info[key])
else:
ret += fmt % (key, user_info[key])
if ret.endswith('\n'):
return ret[:-1]
else:
return ret
else:
return "No client information available"
def get_tracestack():
"""
If an exception has been caught, return the system tracestack or else
return tracestack of what is currently in the stack
"""
if traceback.format_tb(sys.exc_info()[2]):
delimiter = "\n"
tracestack_pretty = "Traceback: \n%s" % \
delimiter.join(traceback.format_tb(sys.exc_info()[2]))
else:
## force traceback except for this call
tracestack = traceback.extract_stack()[:-1]
tracestack_pretty = "%sForced traceback (most recent call last)" % \
(' '*4, )
for trace_tuple in tracestack:
tracestack_pretty += """
File "%(file)s", line %(line)s, in %(function)s
%(text)s""" % {
'file': trace_tuple[0],
'line': trace_tuple[1],
'function': trace_tuple[2],
'text': trace_tuple[3] is not None and \
str(trace_tuple[3]) or ""}
return tracestack_pretty
def register_emergency(msg, recipients=None):
"""Launch an emergency. This means to send email messages to each
address in 'recipients'. By default recipients will be obtained via
get_emergency_recipients() which loads settings from
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES
"""
from invenio.mailutils import send_email
if not recipients:
recipients = get_emergency_recipients()
recipients = set(recipients)
recipients.add(CFG_SITE_ADMIN_EMAIL)
for address_str in recipients:
send_email(CFG_SITE_SUPPORT_EMAIL, address_str, "Emergency notification", msg)
def get_emergency_recipients(recipient_cfg=CFG_SITE_EMERGENCY_EMAIL_ADDRESSES):
"""Parse a list of appropriate emergency email recipients from
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES, or from a provided dictionary
comprised of 'time constraint' => 'comma separated list of addresses'
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES format example:
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES = {
'Sunday 22:00-06:00': '0041761111111@email2sms.foo.com',
'06:00-18:00': 'team-in-europe@foo.com,0041762222222@email2sms.foo.com',
'18:00-06:00': 'team-in-usa@foo.com',
'*': 'john.doe.phone@foo.com'}
"""
from invenio.dateutils import parse_runtime_limit
recipients = set()
for time_condition, address_str in recipient_cfg.items():
if time_condition and time_condition != '*':
(current_range, future_range) = parse_runtime_limit(time_condition)
if not current_range[0] <= datetime.datetime.now() <= current_range[1]:
continue
recipients.update([address_str])
return list(recipients)
def find_all_values_to_hide(local_variables, analyzed_stack=None):
"""Return all the potential password to hyde."""
## Let's add at least the DB password.
if analyzed_stack is None:
from invenio.dbquery import CFG_DATABASE_PASS
ret = set([CFG_DATABASE_PASS])
analyzed_stack = set()
else:
ret = set()
for key, value in local_variables.iteritems():
if id(value) in analyzed_stack:
## Let's avoid loops
continue
analyzed_stack.add(id(value))
if RE_PWD.search(key):
ret.add(str(value))
if isinstance(value, dict):
ret |= find_all_values_to_hide(value, analyzed_stack)
if '' in ret:
## Let's discard the empty string in case there is an empty password,
## or otherwise anything will be separated by '<*****>' in the output
## :-)
ret.remove('')
return ret
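The walk above can be demonstrated standalone: values of any local variable whose name matches the password pattern are collected, recursing into nested dicts, so they can later be masked in the traceback output. This sketch uses Python 3 `items()` for portability (names are illustrative):

```python
import re

RE_PWD_SKETCH = re.compile(r"pwd|pass|p_pw", re.I)

def find_values_to_hide_sketch(local_variables, seen=None):
    if seen is None:
        seen = set()
    ret = set()
    for key, value in local_variables.items():
        if id(value) in seen:
            continue              # avoid reference loops
        seen.add(id(value))
        if RE_PWD_SKETCH.search(key):
            ret.add(str(value))
        if isinstance(value, dict):
            ret |= find_values_to_hide_sketch(value, seen)
    ret.discard('')               # an empty password would mask everything
    return ret
```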
def get_pretty_traceback(req=None, exc_info=None, skip_frames=0):
"""
Given an optional request object and an optional exc_info,
returns a text string representing many details about an exception.
"""
if exc_info is None:
exc_info = sys.exc_info()
if exc_info[0]:
## We found an exception.
## We want to extract the name of the Exception
exc_name = exc_info[0].__name__
exc_value = str(exc_info[1])
filename, line_no, function_name = _get_filename_and_line(exc_info)
## Let's record when and where and what
www_data = "%(time)s -> %(name)s: %(value)s (%(file)s:%(line)s:%(function)s)" % {
'time': time.strftime("%Y-%m-%d %H:%M:%S"),
'name': exc_name,
'value': exc_value,
'file': filename,
'line': line_no,
'function': function_name }
## Let's retrieve contextual user related info, if any
try:
client_data = get_pretty_wide_client_info(req)
except Exception, err:
client_data = "Error in retrieving " \
"contextual information: %s" % err
## Let's extract the traceback:
tracestack_data_stream = StringIO()
print >> tracestack_data_stream, \
"\n** Traceback details \n"
traceback.print_exc(file=tracestack_data_stream)
stack = [frame[0] for frame in inspect.trace()]
#stack = [frame[0] for frame in inspect.getouterframes(exc_info[2])][skip_frames:]
try:
stack.reverse()
print >> tracestack_data_stream, \
"\n** Stack frame details"
values_to_hide = set()
for frame in stack:
try:
print >> tracestack_data_stream
print >> tracestack_data_stream, \
"Frame %s in %s at line %s" % (
frame.f_code.co_name,
frame.f_code.co_filename,
frame.f_lineno)
## Dereferencing f_locals
## See: http://utcc.utoronto.ca/~cks/space/blog/python/FLocalsAndTraceFunctions
local_values = frame.f_locals
try:
values_to_hide |= find_all_values_to_hide(local_values)
code = open(frame.f_code.co_filename).readlines()
first_line = max(1, frame.f_lineno-3)
last_line = min(len(code), frame.f_lineno+3)
print >> tracestack_data_stream, "-" * 79
for line in xrange(first_line, last_line+1):
code_line = code[line-1].rstrip()
if line == frame.f_lineno:
print >> tracestack_data_stream, \
"----> %4i %s" % (line, code_line)
else:
print >> tracestack_data_stream, \
" %4i %s" % (line, code_line)
print >> tracestack_data_stream, "-" * 79
except:
pass
for key, value in local_values.items():
print >> tracestack_data_stream, "\t%20s = " % key,
try:
value = repr(value)
except Exception, err:
## We shall gracefully accept errors when repr() of
## a value fails (e.g. when we are trying to repr() a
## variable that was not fully initialized as the
## exception was raised during its __init__ call).
value = "ERROR: when representing the value: %s" % (err)
try:
print >> tracestack_data_stream, \
_truncate_dynamic_string(value)
except:
print >> tracestack_data_stream, \
"<ERROR WHILE PRINTING VALUE>"
finally:
del frame
finally:
del stack
tracestack_data = tracestack_data_stream.getvalue()
for to_hide in values_to_hide:
## Let's hide passwords
tracestack_data = tracestack_data.replace(to_hide, '<*****>')
## Okay, start printing:
output = StringIO()
print >> output, "* %s" % www_data
print >> output, "\n** User details"
print >> output, client_data
if tracestack_data:
print >> output, tracestack_data
return output.getvalue()
else:
return ""
def _is_pow_of_2(n):
"""
Return True if n is a power of 2, False otherwise.
"""
if n < 1:
return False
while n > 1:
if n % 2:
return False
n = n / 2
return True
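The helper above feeds the notification throttling in exception_should_be_notified() below: within one reset window only the 1st, 2nd, 4th, 8th, ... occurrence of the same exception triggers an email. A minimal stdlib-only sketch (hypothetical helper names, using the equivalent bit-trick form of the power-of-2 test):

```python
def is_pow_of_2(n):
    """Return True if n is a positive power of 2."""
    # n & (n - 1) clears the lowest set bit; the result is 0 exactly
    # when n has a single bit set, i.e. when n is a power of 2.
    return n >= 1 and (n & (n - 1)) == 0

def notified_occurrences(n_occurrences):
    """Occurrence counters (within one reset window) that would email the admin."""
    return [c for c in range(1, n_occurrences + 1) if is_pow_of_2(c)]
```

An exception firing N times per window thus produces only about log2(N) notifications.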
def exception_should_be_notified(name, filename, line):
"""
Return True if the exception should be notified to the admin.
This depends on several considerations, e.g. whether enough time has
passed since the last time this exception was notified.
"""
try:
exc_log = run_sql("SELECT id,last_notified,counter,total FROM hstEXCEPTION WHERE name=%s AND filename=%s AND line=%s", (name, filename, line))
if exc_log:
exc_id, last_notified, counter, total = exc_log[0]
delta = datetime.datetime.now() - last_notified
counter += 1
total += 1
if (delta.seconds + delta.days * 86400) >= CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER:
run_sql("UPDATE hstEXCEPTION SET last_seen=NOW(), last_notified=NOW(), counter=1, total=%s WHERE id=%s", (total, exc_id))
return True
else:
run_sql("UPDATE hstEXCEPTION SET last_seen=NOW(), counter=%s, total=%s WHERE id=%s", (counter, total, exc_id))
return _is_pow_of_2(counter)
else:
run_sql("INSERT INTO hstEXCEPTION(name, filename, line, last_seen, last_notified, counter, total) VALUES(%s, %s, %s, NOW(), NOW(), 1, 1)", (name, filename, line))
return True
except:
raise
def get_pretty_notification_info(name, filename, line):
"""
Return a sentence describing how many times this exception has already been seen, and when.
"""
exc_log = run_sql("SELECT last_notified,last_seen,total FROM hstEXCEPTION WHERE name=%s AND filename=%s AND line=%s", (name, filename, line))
if exc_log:
last_notified, last_seen, total = exc_log[0]
return "This exception has already been seen %s times\n last time it was seen: %s\n last time it was notified: %s\n" % (total, last_seen.strftime("%Y-%m-%d %H:%M:%S"), last_notified.strftime("%Y-%m-%d %H:%M:%S"))
else:
return "It is the first time this exception has been seen.\n"
def register_exception(stream='error',
req=None,
prefix='',
suffix='',
alert_admin=False,
subject=''):
"""
Log error exception to invenio.err and warning exception to invenio.log.
Errors will be logged together with client information (if req is
given).
Note: For sanity reasons, dynamic params such as PREFIX, SUFFIX and
local stack variables are checked for length, and only the first 500
chars of their values are printed.
@param stream: 'error' or 'warning'
@param req: mod_python request
@param prefix: a message to be printed before the exception in
the log
@param suffix: a message to be printed after the exception in
the log
@param alert_admin: whether to send the exception to the administrator via
email. Note that this parameter is bypassed when
CFG_SITE_ADMIN_EMAIL_EXCEPTIONS is set to a value different from 1
@param subject: overrides the email subject
@return: 1 if successfully wrote to stream, 0 if not
"""
try:
## Let's extract exception information
exc_info = sys.exc_info()
exc_name = exc_info[0].__name__
output = get_pretty_traceback(
req=req, exc_info=exc_info, skip_frames=2)
if output:
## Okay, start printing:
log_stream = StringIO()
email_stream = StringIO()
print >> email_stream, '\n',
## If a prefix was requested let's print it
if prefix:
#prefix = _truncate_dynamic_string(prefix)
print >> log_stream, prefix + '\n'
print >> email_stream, prefix + '\n'
print >> log_stream, output
print >> email_stream, output
## If a suffix was requested let's print it
if suffix:
#suffix = _truncate_dynamic_string(suffix)
print >> log_stream, suffix
print >> email_stream, suffix
log_text = log_stream.getvalue()
email_text = email_stream.getvalue()
if email_text.endswith('\n'):
email_text = email_text[:-1]
## Preparing the exception dump
stream = stream=='error' and 'err' or 'log'
## We now have the whole trace
written_to_log = False
try:
## Let's try to write into the log.
open(os.path.join(CFG_LOGDIR, 'invenio.' + stream), 'a').write(
log_text)
written_to_log = True
except:
written_to_log = False
filename, line_no, function_name = _get_filename_and_line(exc_info)
## let's log the exception and see whether we should report it.
pretty_notification_info = get_pretty_notification_info(exc_name, filename, line_no)
if exception_should_be_notified(exc_name, filename, line_no) and (CFG_SITE_ADMIN_EMAIL_EXCEPTIONS > 1 or
(alert_admin and CFG_SITE_ADMIN_EMAIL_EXCEPTIONS > 0) or
not written_to_log):
## If requested or if it's impossible to write in the log
from invenio.mailutils import send_email
if not subject:
subject = 'Exception (%s:%s:%s)' % (filename, line_no, function_name)
subject = '%s at %s' % (subject, CFG_SITE_URL)
email_text = "\n%s\n%s" % (pretty_notification_info, email_text)
if not written_to_log:
email_text += """\
Note that this email was sent to you because it has been impossible to log
this exception into %s""" % os.path.join(CFG_LOGDIR, 'invenio.' + stream)
send_email(
CFG_SITE_ADMIN_EMAIL,
CFG_SITE_ADMIN_EMAIL,
subject=subject,
content=email_text)
return 1
else:
return 0
except Exception, err:
print >> sys.stderr, "Error in registering exception to '%s': '%s'" % (
CFG_LOGDIR + '/invenio.' + stream, err)
return 0
def raise_exception(exception_type = Exception,
msg = '',
stream='error',
req=None,
prefix='',
suffix='',
alert_admin=False,
subject=''):
"""
Log error exception to invenio.err and warning exception to invenio.log.
Errors will be logged together with client information (if req is
given).
It does not require a previously raised exception.
Note: For sanity reasons, dynamic params such as PREFIX, SUFFIX and
local stack variables are checked for length, and only the first 500
chars of their values are printed.
@param exception_type: exception type to be used internally
@param msg: error message
@param stream: 'error' or 'warning'
@param req: mod_python request
@param prefix: a message to be printed before the exception in
the log
@param suffix: a message to be printed after the exception in
the log
@param alert_admin: whether to send the exception to the administrator via
email. Note that this parameter is bypassed when
CFG_SITE_ADMIN_EMAIL_EXCEPTIONS is set to a value different from 1
@param subject: overrides the email subject
@return: 1 if successfully wrote to stream, 0 if not
"""
try:
raise exception_type(msg)
except:
return register_exception(stream=stream,
req=req,
prefix=prefix,
suffix=suffix,
alert_admin=alert_admin,
subject=subject)
def send_error_report_to_admin(header, url, time_msg,
browser, client, error,
sys_error, traceback_msg):
"""
Sends an email to the admin with client info and traceback
"""
from_addr = '%s Alert Engine <%s>' % (
CFG_SITE_NAME, CFG_WEBALERT_ALERT_ENGINE_EMAIL)
to_addr = CFG_SITE_ADMIN_EMAIL
body = """
The following error was seen by a user and sent to you.
%(contact)s
%(header)s
%(url)s
%(time)s
%(browser)s
%(client)s
%(error)s
%(sys_error)s
%(traceback)s
Please see the %(logdir)s/invenio.err for traceback details.""" % {
'header': header,
'url': url,
'time': time_msg,
'browser': browser,
'client': client,
'error': error,
'sys_error': sys_error,
'traceback': traceback_msg,
'logdir': CFG_LOGDIR,
'contact': "Please contact %s quoting the following information:" %
(CFG_SITE_SUPPORT_EMAIL, )}
from invenio.mailutils import send_email
send_email(from_addr, to_addr, subject="Error notification", content=body)
def _get_filename_and_line(exc_info):
"""
Return the filename, line number and function name where the exception happened.
"""
tb = exc_info[2]
exception_info = traceback.extract_tb(tb)[-1]
filename = os.path.basename(exception_info[0])
line_no = exception_info[1]
function_name = exception_info[2]
return filename, line_no, function_name
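For illustration, the same information can be recovered with the stdlib alone: traceback.extract_tb() yields one (filename, lineno, function, text) entry per frame, and the last entry is the frame where the exception was actually raised (names below are hypothetical, not Invenio code):

```python
import os
import sys
import traceback

def innermost_frame_info(exc_info):
    # The last entry of extract_tb() describes the frame that raised.
    filename, line_no, function_name, _text = traceback.extract_tb(exc_info[2])[-1]
    return os.path.basename(filename), line_no, function_name

def _boom():
    raise ValueError('demo')

try:
    _boom()
except ValueError:
    info = innermost_frame_info(sys.exc_info())
```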
def _truncate_dynamic_string(val, maxlength=500):
"""
Return at most MAXLENGTH characters of VAL. Useful for
sanitizing dynamic variable values in the output.
"""
out = repr(val)
if len(out) > maxlength:
out = out[:maxlength] + ' [...]'
return out
def wrap_warn():
import warnings
+ from functools import wraps
- def wrapper(warn_fun):
- def fun(*args, **kwargs):
- traceback.print_stack()
- return warn_fun(*args, **kwargs)
- return fun
-
- # Ideally we would use @wraps when we drop python 2.4
- wrapper.__name__ = warnings.warn.__name__
- wrapper.__module__ = warnings.warn.__module__
- wrapper.__doc__ = warnings.warn.__doc__
- warnings.warn = wrapper(warnings.warn)
+ def wrapper(showwarning):
+ @wraps(showwarning)
+ def new_showwarning(message=None, category=None, filename=None, lineno=None, file=None, line=None):
+ invenio_err = open(os.path.join(CFG_LOGDIR, 'invenio.err'), "a")
+ print >> invenio_err, "* %(time)s -> WARNING: %(category)s: %(message)s (%(file)s:%(line)s)\n" % {
+ 'time': time.strftime("%Y-%m-%d %H:%M:%S"),
+ 'category': category,
+ 'message': message,
+ 'file': filename,
+ 'line': lineno}
+ print >> invenio_err, "** Traceback details\n"
+ traceback.print_stack(file=invenio_err)
+ print >> invenio_err, "\n"
+ return new_showwarning
+
+ warnings.showwarning = wrapper(warnings.showwarning)
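The monkey-patching pattern used here can be exercised in isolation: replace warnings.showwarning with a recording wrapper, and every warnings.warn() call is routed through it (a stdlib-only sketch, not Invenio code):

```python
import warnings

captured = []

def recording_showwarning(message, category, filename, lineno,
                          file=None, line=None):
    # Same signature warnings.showwarning is invoked with.
    captured.append((category.__name__, str(message)))

warnings.simplefilter("always")       # do not suppress repeated warnings
warnings.showwarning = recording_showwarning
warnings.warn("demo warning", UserWarning)
```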
diff --git a/modules/miscutil/lib/messages.py b/modules/miscutil/lib/messages.py
index 817fafd30..364e8c6fc 100644
--- a/modules/miscutil/lib/messages.py
+++ b/modules/miscutil/lib/messages.py
@@ -1,152 +1,153 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Invenio international messages functions, to be used by all
I18N interfaces. Typical usage in the caller code is:
from messages import gettext_set_language
[...]
def square(x, ln=CFG_SITE_LANG):
_ = gettext_set_language(ln)
print _("Hello there!")
print _("The square of %s is %s.") % (x, x*x)
In the caller code, all output strings should be made translatable via
the _() convention.
For more information, see ABOUT-NLS file.
"""
__revision__ = "$Id$"
import gettext
from invenio.config import CFG_LOCALEDIR, CFG_SITE_LANG, CFG_SITE_LANGS
_LANG_GT_D = {}
for _alang in CFG_SITE_LANGS:
_LANG_GT_D[_alang] = gettext.translation('invenio',
CFG_LOCALEDIR,
languages = [_alang],
fallback = True)
def gettext_set_language(ln, use_unicode=False):
"""
Return the gettext function for language LN, to be assigned to _ in the caller
Usage::
_ = gettext_set_language(ln)
"""
if use_unicode:
def unicode_gettext_wrapper(*args, **kwargs):
return _LANG_GT_D[ln].gettext(*args, **kwargs).decode('utf-8')
return unicode_gettext_wrapper
return _LANG_GT_D[ln].gettext
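A minimal usage sketch (stdlib only, no Invenio config): with fallback=True, gettext.translation() degrades to a NullTranslations object when no compiled catalogue is found, so _() returns its argument unchanged:

```python
import gettext

# No .mo catalogue exists under this (hypothetical) directory, so the
# fallback NullTranslations object is returned instead of raising an error.
catalogue = gettext.translation('invenio', '/nonexistent/locale',
                                languages=['en'], fallback=True)
_ = catalogue.gettext
greeting = _("Hello there!")
```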
def wash_language(ln):
"""Look at language LN and check if it is one of allowed languages
for the interface. Return it in case of success, return the
default language otherwise."""
if not ln:
return CFG_SITE_LANG
if isinstance(ln, list):
ln = ln[0]
ln = ln.replace('-', '_')
if ln in CFG_SITE_LANGS:
return ln
elif ln[:2] in CFG_SITE_LANGS:
return ln[:2]
else:
return CFG_SITE_LANG
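The washing rules above can be exercised stand-alone (the SAMPLE_* values below are hypothetical stand-ins for the CFG_* configuration): an exact match wins, then the two-letter prefix, then the site default:

```python
SAMPLE_SITE_LANGS = ['en', 'fr', 'pt']
SAMPLE_SITE_LANG = 'en'

def sample_wash_language(ln):
    # Mirrors wash_language() with sample configuration values.
    if not ln:
        return SAMPLE_SITE_LANG
    if isinstance(ln, list):
        ln = ln[0]
    ln = ln.replace('-', '_')
    if ln in SAMPLE_SITE_LANGS:
        return ln
    if ln[:2] in SAMPLE_SITE_LANGS:
        return ln[:2]
    return SAMPLE_SITE_LANG
```

Falling back to the two-letter prefix is what maps region variants such as 'pt-BR' onto an enabled base language.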
def wash_languages(lns):
"""Look at list of languages LNS and check if there's at least one
of the allowed languages for the interface. Return it in case
of success, return the default language otherwise."""
for ln in lns:
if ln:
ln = ln.replace('-', '_')
if ln in CFG_SITE_LANGS:
return ln
elif ln[:2] in CFG_SITE_LANGS:
return ln[:2]
return CFG_SITE_LANG
def language_list_long(enabled_langs_only=True):
"""
Return list of [short name, long name] for all enabled languages,
in the same language order as they appear in CFG_SITE_LANGS.
If 'enabled_langs_only' is set to False, then return all possibly
existing Invenio languages, even if they were not enabled on the
site by the local administrator. Useful for recognizing all I18N
translations in webdoc sources or bibformat templates.
"""
all_language_names = {'af': 'Afrikaans',
'ar': 'العربية',
'bg': 'Български',
'ca': 'Català',
'cs': 'Česky',
'de': 'Deutsch',
'el': 'Ελληνικά',
'en': 'English',
'es': 'Español',
+ 'fa': 'فارسی',
'fr': 'Français',
'hr': 'Hrvatski',
'gl': 'Galego',
'it': 'Italiano',
'ka': 'ქართული',
'rw': 'Kinyarwanda',
'lt': 'Lietuvių',
'hu': 'Magyar',
'ja': '日本語',
'no': 'Norsk/Bokmål',
'pl': 'Polski',
'pt': 'Português',
'ro': 'Română',
'ru': 'Русский',
'sk': 'Slovensky',
'sv': 'Svenska',
'uk': 'Українська',
'zh_CN': '中文(简)',
'zh_TW': '中文(繁)',}
if enabled_langs_only:
enabled_lang_list = []
for lang in CFG_SITE_LANGS:
enabled_lang_list.append([lang, all_language_names[lang]])
return enabled_lang_list
else:
return [[lang, lang_long] for lang, lang_long in \
all_language_names.iteritems()]
def is_language_rtl(ln):
"""
Returns True or False depending on whether the language is written
in right-to-left direction.
@param ln: language
@type ln: str
@return: is language right-to-left direction?
@rtype: bool
"""
- if ln in ('ar',):
+ if ln in ('ar', 'fa'):
return True
return False
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_03_20_idxINDEX_synonym_kb.py b/modules/miscutil/lib/upgrades/invenio_2013_03_20_idxINDEX_synonym_kb.py
new file mode 100644
index 000000000..431fb16b5
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_03_20_idxINDEX_synonym_kb.py
@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2012_10_29_idxINDEX_new_indexer_column']
+
+
+def info():
+ return "Introduces new column for idxINDEX table: synonym_kbrs"
+
+
+def do_upgrade():
+ #first step: change tables
+ stmt = run_sql('SHOW CREATE TABLE idxINDEX')[0][1]
+ if '`synonym_kbrs` varchar(255)' not in stmt:
+ run_sql("ALTER TABLE idxINDEX ADD COLUMN synonym_kbrs varchar(255) NOT NULL default '' AFTER indexer")
+ #second step: fill tables
+ run_sql("UPDATE idxINDEX SET synonym_kbrs='INDEX-SYNONYM-TITLE,exact' WHERE name IN ('global','title')")
+ #third step: check invenio.conf
+ from invenio.config import CFG_BIBINDEX_SYNONYM_KBRS
+ from invenio.bibindex_engine_utils import get_index_id_from_index_name
+ if CFG_BIBINDEX_SYNONYM_KBRS:
+ for index in CFG_BIBINDEX_SYNONYM_KBRS:
+ index_id = get_index_id_from_index_name(index)
+ synonym = ",".join(CFG_BIBINDEX_SYNONYM_KBRS[index])
+ run_sql("UPDATE idxINDEX SET synonym_kbrs=%s WHERE id=%s", (synonym, index_id))
+
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please double check your new index synonym settings in BibIndex Admin Interface.'
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_03_21_idxINDEX_stopwords.py b/modules/miscutil/lib/upgrades/invenio_2013_03_21_idxINDEX_stopwords.py
new file mode 100644
index 000000000..5925b6834
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_03_21_idxINDEX_stopwords.py
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_03_20_idxINDEX_synonym_kb']
+
+def info():
+ return "Introduces new column for idxINDEX table: remove_stopwords"
+
+
+def do_upgrade():
+ #first step: change tables
+ stmt = run_sql('SHOW CREATE TABLE idxINDEX')[0][1]
+ if '`remove_stopwords` varchar' not in stmt:
+ run_sql("ALTER TABLE idxINDEX ADD COLUMN remove_stopwords varchar(255) NOT NULL default '' AFTER synonym_kbrs")
+ #second step: fill tables
+ run_sql("UPDATE idxINDEX SET remove_stopwords='No'")
+ #third step: load from invenio.conf if necessary
+ from invenio.config import CFG_BIBINDEX_REMOVE_STOPWORDS
+ if CFG_BIBINDEX_REMOVE_STOPWORDS:
+ if CFG_BIBINDEX_REMOVE_STOPWORDS == 1:
+ run_sql("UPDATE idxINDEX SET remove_stopwords='Yes'")
+
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please double check your new index stopword settings in BibIndex Admin Interface.'
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_03_25_idxINDEX_html_markup.py b/modules/miscutil/lib/upgrades/invenio_2013_03_25_idxINDEX_html_markup.py
new file mode 100644
index 000000000..48e07620e
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_03_25_idxINDEX_html_markup.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_03_21_idxINDEX_stopwords']
+
+def info():
+ return "Introduces new columns for idxINDEX table: remove_html_markup, remove_latex_markup"
+
+
+def do_upgrade():
+ #first step: change tables
+ stmt = run_sql('SHOW CREATE TABLE idxINDEX')[0][1]
+ if '`remove_html_markup` varchar(10)' not in stmt:
+ run_sql("ALTER TABLE idxINDEX ADD COLUMN remove_html_markup varchar(10) NOT NULL default '' AFTER remove_stopwords")
+ if '`remove_latex_markup` varchar(10)' not in stmt:
+ run_sql("ALTER TABLE idxINDEX ADD COLUMN remove_latex_markup varchar(10) NOT NULL default '' AFTER remove_html_markup")
+ #second step: fill tables
+ run_sql("UPDATE idxINDEX SET remove_html_markup='No'")
+ run_sql("UPDATE idxINDEX SET remove_latex_markup='No'")
+ #third step: check invenio.conf and update db if necessary
+ try:
+ from invenio.config import CFG_BIBINDEX_REMOVE_HTML_MARKUP, CFG_BIBINDEX_REMOVE_LATEX_MARKUP
+ if CFG_BIBINDEX_REMOVE_HTML_MARKUP:
+ if CFG_BIBINDEX_REMOVE_HTML_MARKUP == 1:
+ run_sql("UPDATE idxINDEX SET remove_html_markup='Yes'")
+ if CFG_BIBINDEX_REMOVE_LATEX_MARKUP:
+ if CFG_BIBINDEX_REMOVE_LATEX_MARKUP == 1:
+ run_sql("UPDATE idxINDEX SET remove_latex_markup='Yes'")
+ except:
+ pass
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please double check your new HTML/LaTeX processing settings in BibIndex Admin Interface.'
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_03_28_idxINDEX_tokenizer.py b/modules/miscutil/lib/upgrades/invenio_2013_03_28_idxINDEX_tokenizer.py
new file mode 100644
index 000000000..f127b86f6
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_03_28_idxINDEX_tokenizer.py
@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_03_25_idxINDEX_html_markup']
+
+def info():
+ return "Introduces new column for idxINDEX table: tokenizer"
+
+
+def do_upgrade():
+ #first step: change table
+ stmt = run_sql('SHOW CREATE TABLE idxINDEX')[0][1]
+ if '`tokenizer` varchar(50)' not in stmt:
+ run_sql("ALTER TABLE idxINDEX ADD COLUMN tokenizer varchar(50) NOT NULL default '' AFTER remove_latex_markup")
+ #second step: update table
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexDefaultTokenizer' WHERE name IN
+ ('global', 'collection', 'abstract', 'keyword',
+ 'reference', 'reportnumber', 'title', 'collaboration',
+ 'affiliation', 'caption', 'exacttitle')""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexAuthorTokenizer' WHERE name IN
+ ('author', 'firstauthor')""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexExactAuthorTokenizer' WHERE name IN
+ ('exactauthor', 'exactfirstauthor')""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexFulltextTokenizer' WHERE name='fulltext'""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexAuthorCountTokenizer' WHERE name='authorcount'""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexJournalTokenizer' WHERE name='journal'""")
+ run_sql("""UPDATE idxINDEX SET tokenizer='BibIndexYearTokenizer' WHERE name='year'""")
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ pass
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_03_29_idxINDEX_stopwords_update.py b/modules/miscutil/lib/upgrades/invenio_2013_03_29_idxINDEX_stopwords_update.py
new file mode 100644
index 000000000..349b297c6
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_03_29_idxINDEX_stopwords_update.py
@@ -0,0 +1,45 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_03_28_idxINDEX_tokenizer']
+
+def info():
+ return "Updates column remove_stopwords of idxINDEX table with path to default 'stopwords' file if necessary"
+
+
+def do_upgrade():
+ #different stopwords file for every index:
+ #need to update default stopwords path for every index
+ from invenio.config import CFG_BIBINDEX_REMOVE_STOPWORDS
+ if CFG_BIBINDEX_REMOVE_STOPWORDS:
+ if CFG_BIBINDEX_REMOVE_STOPWORDS == 1:
+ run_sql("UPDATE idxINDEX SET remove_stopwords='stopwords.kb'")
+
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please double check your new index stopword settings in BibIndex Admin Interface.'
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_08_20_bibauthority_updates.py b/modules/miscutil/lib/upgrades/invenio_2013_08_20_bibauthority_updates.py
new file mode 100644
index 000000000..9fe3db735
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_08_20_bibauthority_updates.py
@@ -0,0 +1,276 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_03_29_idxINDEX_stopwords_update']
+
+def info():
+ return """Introduces bibauthority module. Adds:
+ -> new indexes:
+ authorityauthor
+ authoritysubject
+ authorityjournal
+ authorityinstitution
+ -> new fields:
+ authorityauthor
+ authoritysubject
+ authorityjournal
+ authorityinstitution
+ -> new tags:
+ authority: main personal name
+ authority: alternative personal name
+ authority: personal name from other record
+ authority: organization main name
+ authority: organization alternative name
+ authority: organization main name from other record
+ authority: uniform title
+ authority: uniform title alternatives
+ authority: uniform title from other record
+ authority: subject from other record
+ authority: subject alternative name
+ authority: subject main name
+ """
+
+
+def do_upgrade():
+ pass
+
+
+def do_upgrade_atlantis():
+ #first step: create tables
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM; """)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+ #second step: fill tables with data
+ run_sql("""INSERT INTO field VALUES (33,'authority author','authorityauthor')""")
+ run_sql("""INSERT INTO field VALUES (34,'authority institution','authorityinstitution')""")
+ run_sql("""INSERT INTO field VALUES (35,'authority journal','authorityjournal')""")
+ run_sql("""INSERT INTO field VALUES (36,'authority subject','authoritysubject')""")
+    run_sql("""INSERT INTO field_tag VALUES (33,145,100)""")
+    run_sql("""INSERT INTO field_tag VALUES (33,146,100)""")
+    run_sql("""INSERT INTO field_tag VALUES (33,147,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (34,148,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (34,149,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (34,150,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (35,151,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (35,152,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (35,153,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (36,154,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (36,155,100)""")
+ run_sql("""INSERT INTO field_tag VALUES (36,156,100)""")
+ run_sql("""INSERT INTO tag VALUES (145,'authority: main personal name','100__a')""")
+ run_sql("""INSERT INTO tag VALUES (146,'authority: alternative personal name','400__a')""")
+ run_sql("""INSERT INTO tag VALUES (147,'authority: personal name from other record','500__a')""")
+ run_sql("""INSERT INTO tag VALUES (148,'authority: organization main name','110__a')""")
+    run_sql("""INSERT INTO tag VALUES (149,'authority: organization alternative name','410__a')""")
+    run_sql("""INSERT INTO tag VALUES (150,'authority: organization main from other record','510__a')""")
+ run_sql("""INSERT INTO tag VALUES (151,'authority: uniform title','130__a')""")
+ run_sql("""INSERT INTO tag VALUES (152,'authority: uniform title alternatives','430__a')""")
+ run_sql("""INSERT INTO tag VALUES (153,'authority: uniform title from other record','530__a')""")
+ run_sql("""INSERT INTO tag VALUES (154,'authority: subject from other record','150__a')""")
+ run_sql("""INSERT INTO tag VALUES (155,'authority: subject alternative name','450__a')""")
+ run_sql("""INSERT INTO tag VALUES (156,'authority: subject main name','550__a')""")
+
+ run_sql("""INSERT INTO idxINDEX VALUES (20,'authorityauthor','This index contains words/phrases from author authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer')""")
+ run_sql("""INSERT INTO idxINDEX VALUES (21,'authorityinstitution','This index contains words/phrases from institution authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer')""")
+ run_sql("""INSERT INTO idxINDEX VALUES (22,'authorityjournal','This index contains words/phrases from journal authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer')""")
+ run_sql("""INSERT INTO idxINDEX VALUES (23,'authoritysubject','This index contains words/phrases from subject authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer')""")
+
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (20,33)""")
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (21,34)""")
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (22,35)""")
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (23,36)""")
+
+
+def estimate():
+ return 1
+
+
+def pre_upgrade():
+ pass
+
+
+def post_upgrade():
+ pass
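Each upgrade module in this diff follows the same recipe contract: `depends_on` plus the `info`/`do_upgrade`/`estimate`/`pre_upgrade`/`post_upgrade` functions. A minimal sketch of that skeleton, with the Invenio-specific `run_sql` import left out so it stands alone:

```python
# Minimal skeleton of an Invenio upgrade recipe module (illustrative;
# a real recipe would also do "from invenio.dbquery import run_sql").

depends_on = []  # names of recipes that must have run first


def info():
    # One-line, human-readable description shown by the upgrade engine.
    return "Describe the upgrade here"


def do_upgrade():
    # Schema or data changes go here, typically via run_sql(...).
    pass


def estimate():
    # Rough running time of the upgrade, in seconds.
    return 1


def pre_upgrade():
    pass  # sanity checks before running


def post_upgrade():
    pass  # sanity checks after running
```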
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_08_22_hstRECORD_affected_fields.py b/modules/miscutil/lib/upgrades/invenio_2013_08_22_hstRECORD_affected_fields.py
new file mode 100644
index 000000000..21192659d
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_08_22_hstRECORD_affected_fields.py
@@ -0,0 +1,44 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2012_11_15_hstRECORD_marcxml_longblob']
+
+def info():
+ return "New column hstRECORD.affected_fields"
+
+def do_upgrade():
+ #first step: change the table
+ create_statement = run_sql('SHOW CREATE TABLE hstRECORD')[0][1]
+ if 'affected_fields' not in create_statement:
+ run_sql("ALTER TABLE hstRECORD ADD COLUMN affected_fields text NOT NULL default '' AFTER job_details")
+    #second step: nothing to do here:
+    #we don't need to fill in the new column, since an empty value
+    #is valid and means that all fields/tags were modified
+
+def estimate():
+ """ Estimate running time of upgrade in seconds (optional). """
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ """Check for potentially invalid revisions"""
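The `do_upgrade` above stays idempotent by inspecting the `SHOW CREATE TABLE` output before issuing the `ALTER TABLE`. The same guard, sketched with a plain string standing in for the `run_sql` call:

```python
def column_missing(create_statement, column_name):
    # True when the column does not yet appear in the table's
    # SHOW CREATE TABLE output, i.e. the ALTER TABLE still needs to run.
    return column_name not in create_statement


# Stand-in for run_sql('SHOW CREATE TABLE hstRECORD')[0][1]:
stmt = "CREATE TABLE hstRECORD (id int, marcxml longblob, job_details text)"
needs_alter = column_missing(stmt, 'affected_fields')  # ALTER would run here
```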
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_08_22_new_index_itemcount.py b/modules/miscutil/lib/upgrades/invenio_2013_08_22_new_index_itemcount.py
new file mode 100644
index 000000000..66b6aaea9
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_08_22_new_index_itemcount.py
@@ -0,0 +1,91 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_08_20_bibauthority_updates']
+
+def info():
+ return """Introduces new index: itemcount"""
+
+
+def do_upgrade():
+ pass
+
+
+def do_upgrade_atlantis():
+ #first step: create tables
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM; """)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+
+
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;""")
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;""")
+ #second step: fill in idxINDEX, idxINDEX_field, field tables
+ run_sql("""INSERT INTO field VALUES (37,'item count','itemcount')""")
+    run_sql("""INSERT INTO idxINDEX VALUES (24,'itemcount','This index contains the number of copies of items in the library.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexItemCountTokenizer')""")
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (24,37)""")
+
+
+def estimate():
+ return 1
+
+
+def pre_upgrade():
+ pass
+
+
+def post_upgrade():
+ pass
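Each recipe creates a forward table (`...F`, term to hitlist) and a reverse table (`...R`, record to termlist) for every index type, with the index id zero-padded to two digits as in the `%02d` formatting used by the next recipe. The naming scheme can be sketched as:

```python
def index_table_names(index_id):
    # Forward (term -> hitlist) and reverse (record -> termlist) table
    # names for one index id, zero-padded to two digits.
    names = []
    for prefix in ('idxWORD', 'idxPAIR', 'idxPHRASE'):
        names.append('%s%02dF' % (prefix, index_id))
        names.append('%s%02dR' % (prefix, index_id))
    return names


index_table_names(24)
# ['idxWORD24F', 'idxWORD24R', 'idxPAIR24F', 'idxPAIR24R',
#  'idxPHRASE24F', 'idxPHRASE24R']
```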
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_09_25_virtual_indexes.py b/modules/miscutil/lib/upgrades/invenio_2013_09_25_virtual_indexes.py
new file mode 100644
index 000000000..b1ac184f3
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_09_25_virtual_indexes.py
@@ -0,0 +1,347 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+from invenio.dbquery import run_sql
+
+depends_on = ['invenio_2013_08_22_new_index_itemcount',
+ 'invenio_2013_08_22_hstRECORD_affected_fields']
+
+def info():
+ return "BibIndex virtual indexes"
+
+def do_upgrade():
+ run_sql("""CREATE TABLE IF NOT EXISTS idxINDEX_idxINDEX (
+ id_virtual mediumint(9) unsigned NOT NULL,
+ id_normal mediumint(9) unsigned NOT NULL,
+ PRIMARY KEY (id_virtual,id_normal)
+ ) ENGINE=MyISAM""")
+
+def do_upgrade_atlantis():
+    #0th step: parametrize the script for quick changes
+ misc_field = 39
+ misc_index = 26
+ #1st step: create tables for miscellaneous index
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD%02dF (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM""" % misc_index)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxWORD%02dR (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM""" % misc_index)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR%02dF (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM""" % misc_index)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPAIR%02dR (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM""" % misc_index)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE%02dF (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM""" % misc_index)
+ run_sql("""CREATE TABLE IF NOT EXISTS idxPHRASE%02dR (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM""" % misc_index)
+
+ #2nd step: add 'miscellaneous' index to idxINDEX table
+ run_sql("""INSERT INTO idxINDEX VALUES (%s,'miscellaneous','This index contains words/phrases from miscellaneous fields','0000-00-00 00:00:00', '', 'native','','No','No','No', 'BibIndexDefaultTokenizer')""" % misc_index)
+
+ #3rd step: add 'miscellaneous' field
+ run_sql("""INSERT INTO field VALUES (%s,'miscellaneous', 'miscellaneous')""" % misc_field)
+
+ #4th step: add idxINDEX_field map
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (%s,%s)""" % (misc_index, misc_field))
+
+ #5th step: add tags
+ run_sql("""INSERT INTO tag VALUES (157,'031x','031%')""")
+ run_sql("""INSERT INTO tag VALUES (158,'032x','032%')""")
+ run_sql("""INSERT INTO tag VALUES (159,'033x','033%')""")
+ run_sql("""INSERT INTO tag VALUES (160,'034x','034%')""")
+ run_sql("""INSERT INTO tag VALUES (161,'035x','035%')""")
+ run_sql("""INSERT INTO tag VALUES (162,'036x','036%')""")
+ run_sql("""INSERT INTO tag VALUES (163,'037x','037%')""")
+ run_sql("""INSERT INTO tag VALUES (164,'038x','038%')""")
+ run_sql("""INSERT INTO tag VALUES (165,'080x','080%')""")
+ run_sql("""INSERT INTO tag VALUES (166,'082x','082%')""")
+ run_sql("""INSERT INTO tag VALUES (167,'083x','083%')""")
+ run_sql("""INSERT INTO tag VALUES (168,'084x','084%')""")
+ run_sql("""INSERT INTO tag VALUES (169,'085x','085%')""")
+ run_sql("""INSERT INTO tag VALUES (170,'086x','086%')""")
+ run_sql("""INSERT INTO tag VALUES (171,'240x','240%')""")
+ run_sql("""INSERT INTO tag VALUES (172,'242x','242%')""")
+ run_sql("""INSERT INTO tag VALUES (173,'243x','243%')""")
+ run_sql("""INSERT INTO tag VALUES (174,'244x','244%')""")
+ run_sql("""INSERT INTO tag VALUES (175,'247x','247%')""")
+ run_sql("""INSERT INTO tag VALUES (176,'521x','521%')""")
+ run_sql("""INSERT INTO tag VALUES (177,'522x','522%')""")
+ run_sql("""INSERT INTO tag VALUES (178,'524x','524%')""")
+ run_sql("""INSERT INTO tag VALUES (179,'525x','525%')""")
+ run_sql("""INSERT INTO tag VALUES (180,'526x','526%')""")
+ run_sql("""INSERT INTO tag VALUES (181,'650x','650%')""")
+ run_sql("""INSERT INTO tag VALUES (182,'651x','651%')""")
+ run_sql("""INSERT INTO tag VALUES (183,'6531_v','6531_v')""")
+ run_sql("""INSERT INTO tag VALUES (184,'6531_y','6531_y')""")
+ run_sql("""INSERT INTO tag VALUES (185,'6531_9','6531_9')""")
+ run_sql("""INSERT INTO tag VALUES (186,'654x','654%')""")
+ run_sql("""INSERT INTO tag VALUES (187,'655x','655%')""")
+ run_sql("""INSERT INTO tag VALUES (188,'656x','656%')""")
+ run_sql("""INSERT INTO tag VALUES (189,'657x','657%')""")
+ run_sql("""INSERT INTO tag VALUES (190,'658x','658%')""")
+ run_sql("""INSERT INTO tag VALUES (191,'711x','711%')""")
+ run_sql("""INSERT INTO tag VALUES (192,'900x','900%')""")
+ run_sql("""INSERT INTO tag VALUES (193,'901x','901%')""")
+ run_sql("""INSERT INTO tag VALUES (194,'902x','902%')""")
+ run_sql("""INSERT INTO tag VALUES (195,'903x','903%')""")
+ run_sql("""INSERT INTO tag VALUES (196,'904x','904%')""")
+ run_sql("""INSERT INTO tag VALUES (197,'905x','905%')""")
+ run_sql("""INSERT INTO tag VALUES (198,'906x','906%')""")
+ run_sql("""INSERT INTO tag VALUES (199,'907x','907%')""")
+ run_sql("""INSERT INTO tag VALUES (200,'908x','908%')""")
+ run_sql("""INSERT INTO tag VALUES (201,'909C1x','909C1%')""")
+ run_sql("""INSERT INTO tag VALUES (202,'909C5x','909C5%')""")
+ run_sql("""INSERT INTO tag VALUES (203,'909CSx','909CS%')""")
+ run_sql("""INSERT INTO tag VALUES (204,'909COx','909CO%')""")
+ run_sql("""INSERT INTO tag VALUES (205,'909CKx','909CK%')""")
+ run_sql("""INSERT INTO tag VALUES (206,'909CPx','909CP%')""")
+ run_sql("""INSERT INTO tag VALUES (207,'981x','981%')""")
+ run_sql("""INSERT INTO tag VALUES (208,'982x','982%')""")
+ run_sql("""INSERT INTO tag VALUES (209,'983x','983%')""")
+ run_sql("""INSERT INTO tag VALUES (210,'984x','984%')""")
+ run_sql("""INSERT INTO tag VALUES (211,'985x','985%')""")
+ run_sql("""INSERT INTO tag VALUES (212,'986x','986%')""")
+ run_sql("""INSERT INTO tag VALUES (213,'987x','987%')""")
+ run_sql("""INSERT INTO tag VALUES (214,'988x','988%')""")
+ run_sql("""INSERT INTO tag VALUES (215,'989x','989%')""")
+ run_sql("""INSERT INTO tag VALUES (216,'author control','100__0')""")
+ run_sql("""INSERT INTO tag VALUES (217,'institution control','110__0')""")
+ run_sql("""INSERT INTO tag VALUES (218,'journal control','130__0')""")
+ run_sql("""INSERT INTO tag VALUES (219,'subject control','150__0')""")
+ run_sql("""INSERT INTO tag VALUES (220,'additional institution control', '260__0')""")
+ run_sql("""INSERT INTO tag VALUES (221,'additional author control', '700__0')""")
+
+ #6th step: add field tag mapping
+ run_sql("""INSERT INTO field_tag VALUES (%s,17,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,18,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,157,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,158,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,159,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,160,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,161,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,162,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,163,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,164,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,20,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,21,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,22,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,23,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,165,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,166,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,167,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,168,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,169,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,170,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,25,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,27,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,28,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,29,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,30,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,31,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,32,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,33,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,34,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,35,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,36,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,37,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,38,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,39,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,171,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,172,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,173,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,174,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,175,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,41,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,42,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,43,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,44,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,45,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,46,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,47,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,48,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,49,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,50,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,51,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,52,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,53,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,54,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,55,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,56,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,57,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,58,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,59,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,60,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,61,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,62,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,63,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,64,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,65,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,66,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,67,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,176,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,177,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,178,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,179,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,180,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,69,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,70,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,71,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,72,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,73,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,74,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,75,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,76,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,77,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,78,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,79,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,80,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,181,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,182,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,183,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,184,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,185,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,186,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,82,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,83,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,84,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,85,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,187,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,88,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,89,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,90,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,91,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,92,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,93,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,94,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,95,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,96,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,97,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,98,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,99,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,100,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,102,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,103,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,104,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,105,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,188,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,189,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,190,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,191,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,192,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,193,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,194,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,195,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,196,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,107,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,108,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,109,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,110,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,111,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,112,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,113,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,197,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,198,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,199,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,200,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,201,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,202,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,203,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,204,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,205,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,206,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,207,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,208,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,209,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,210,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,211,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,212,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,213,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,214,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,215,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,122,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,123,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,124,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,125,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,126,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,127,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,128,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,129,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,130,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,1,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,2,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,216,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,217,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,218,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,219,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,220,10)""" % misc_field)
+ run_sql("""INSERT INTO field_tag VALUES (%s,221,10)""" % misc_field)
+
+ #7th step: remove old unneeded field tag mapping
+ run_sql("""DELETE FROM field_tag WHERE id_field=1""")
+
+ #8th step: add mapping between indexes for global index
+ query = """SELECT name, id FROM idxINDEX"""
+ ids = dict(run_sql(query))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['collection']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['abstract']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['reportnumber']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['title']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['year']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['journal']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['collaboration']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['affiliation']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['exacttitle']))
+ run_sql("""INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (%s, %s)""" % (ids['global'], ids['miscellaneous']))
+
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please double check your index settings in BibIndex Admin Interface; you can make your global index virtual.'
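The run of INSERT statements at the end of `do_upgrade_atlantis` attaches each normal index to the `global` virtual index, looking the ids up in the dict built from `SELECT name, id FROM idxINDEX`. The same mapping can be built as parameter tuples and handed to `run_sql` with placeholder binding rather than `%` string interpolation; a sketch with an illustrative `ids` dict:

```python
# 'ids' stands in for dict(run_sql("SELECT name, id FROM idxINDEX")).
ids = {'global': 1, 'collection': 2, 'abstract': 3, 'title': 4}

# Normal indexes to attach to the 'global' virtual index
# (illustrative subset of the names used above).
normal_indexes = ['collection', 'abstract', 'title']

rows = [(ids['global'], ids[name]) for name in normal_indexes]
# Each tuple would then be bound as parameters:
# run_sql("INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal)"
#         " VALUES (%s, %s)", row)
```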
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/miscutil/lib/upgrades/invenio_2013_09_30_indexer_interface.py
similarity index 61%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/miscutil/lib/upgrades/invenio_2013_09_30_indexer_interface.py
index 3a5f78261..1692f5db0 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/miscutil/lib/upgrades/invenio_2013_09_30_indexer_interface.py
@@ -1,33 +1,39 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
- """
- Prints the publisher name
+from invenio.dbquery import run_sql
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
- """
+depends_on = ['invenio_2013_09_25_virtual_indexes']
- publisher = bfo.field('260__b')
+def info():
+ return "Small compatibility change in idxINDEX table"
- if publisher != "sine nomine":
- return publisher
+def do_upgrade():
+ res = run_sql("SELECT DISTINCT(id_virtual) FROM idxINDEX_idxINDEX")
+ for row in res:
+ run_sql("UPDATE idxINDEX SET indexer='virtual' WHERE id=%s", (row[0],))
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ pass
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/miscutil/lib/upgrades/invenio_2013_10_18_crcLIBRARY_type.py
similarity index 57%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/miscutil/lib/upgrades/invenio_2013_10_18_crcLIBRARY_type.py
index 3a5f78261..5f535bd39 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/miscutil/lib/upgrades/invenio_2013_10_18_crcLIBRARY_type.py
@@ -1,33 +1,40 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
- """
- Prints the publisher name
+from invenio.dbquery import run_sql
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
- """
+depends_on = ['invenio_release_1_1_0']
- publisher = bfo.field('260__b')
+def info():
+ return "crcLIBRARY.type is now mandatory"
- if publisher != "sine nomine":
- return publisher
+def do_upgrade():
+ run_sql("UPDATE crcLIBRARY SET type='main' WHERE type IS NULL")
+ create_statement = run_sql('SHOW CREATE TABLE crcLIBRARY')[0][1]
+ if '`type` varchar(30) NOT NULL' not in create_statement:
+ run_sql("ALTER TABLE crcLIBRARY CHANGE type type varchar(30) NOT NULL default 'main'")
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ pass
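The crcLIBRARY recipe above makes itself re-runnable by inspecting the output of SHOW CREATE TABLE before issuing the ALTER. That check-before-alter idempotency pattern can be factored into a small helper; this is a sketch with a fake run_sql standing in for the real database call (the helper name and the simulated CREATE statement are illustrative, not part of the patch):

```python
def ensure_column_definition(run_sql, table, fragment, alter_statement):
    """Apply alter_statement only if `fragment` is missing from the table's
    CREATE statement, so the upgrade recipe can safely run more than once."""
    create_statement = run_sql("SHOW CREATE TABLE %s" % table)[0][1]
    if fragment not in create_statement:
        run_sql(alter_statement)
        return True
    return False

# Fake run_sql that records queries and simulates MySQL's SHOW CREATE TABLE.
calls = []
def fake_run_sql(query):
    calls.append(query)
    if query.startswith("SHOW CREATE TABLE"):
        return [("crcLIBRARY",
                 "CREATE TABLE crcLIBRARY (`type` varchar(30) default NULL)")]
    return []

changed = ensure_column_definition(
    fake_run_sql, "crcLIBRARY",
    "`type` varchar(30) NOT NULL",
    "ALTER TABLE crcLIBRARY CHANGE type type varchar(30) NOT NULL default 'main'",
)
```

On a schema that already carries the NOT NULL definition the fragment check matches, no ALTER is issued, and the helper returns False.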
diff --git a/modules/miscutil/lib/upgrades/invenio_2013_10_18_new_index_filetype.py b/modules/miscutil/lib/upgrades/invenio_2013_10_18_new_index_filetype.py
new file mode 100644
index 000000000..9a8481ed8
--- /dev/null
+++ b/modules/miscutil/lib/upgrades/invenio_2013_10_18_new_index_filetype.py
@@ -0,0 +1,97 @@
+# -*- coding: utf-8 -*-
+##
+## This file is part of Invenio.
+## Copyright (C) 2012, 2013 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+
+from invenio.dbquery import run_sql
+
+
+depends_on = ['invenio_2013_08_22_new_index_itemcount']
+
+def info():
+ return "New index filetype."
+
+
+def do_upgrade():
+ pass
+
+
+def do_upgrade_atlantis():
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxWORD25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxWORD25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxPAIR25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxPAIR25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxPHRASE25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""
+ CREATE TABLE IF NOT EXISTS idxPHRASE25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+ ) ENGINE=MyISAM;
+ """)
+ run_sql("""INSERT INTO idxINDEX VALUES (25,'filetype','This index contains file extensions of the record.', '0000-00-00 00:00:00', '', 'native', '', 'No', 'No', 'No', 'BibIndexFiletypeTokenizer')""")
+ run_sql("""INSERT INTO field VALUES (38,'file type', 'filetype')""")
+ run_sql("""INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (25,38)""")
+
+def estimate():
+ return 1
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ pass
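All of the upgrade recipes in this patch share the same module-level interface: depends_on plus the info/do_upgrade/estimate/pre_upgrade/post_upgrade hooks, with an optional do_upgrade_atlantis step for the Atlantis demo site. A skeleton of that interface, with a hypothetical dependency name (the hooks mirror the recipes above; this is not itself part of the patch):

```python
# Skeleton of an Invenio upgrade recipe, following the module interface
# used by the recipes in this patch. The dependency name is hypothetical.

depends_on = ['invenio_2013_10_18_new_index_filetype']

def info():
    # One-line description shown to the administrator.
    return "Example no-op upgrade recipe."

def do_upgrade():
    # Main migration step; runs once, after all dependencies have run.
    pass

def do_upgrade_atlantis():
    # Optional extra step applied only to the Atlantis demo site.
    pass

def estimate():
    # Rough run-time estimate, in seconds.
    return 1

def pre_upgrade():
    # Pre-flight checks; raise an exception to abort the upgrade.
    pass

def post_upgrade():
    # Follow-up notes or cleanup printed after the upgrade ran.
    pass
```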
diff --git a/modules/bibformat/lib/elements/bfe_publisher.py b/modules/miscutil/lib/upgrades/invenio_2013_10_25_delete_recjson_cache.py
similarity index 58%
copy from modules/bibformat/lib/elements/bfe_publisher.py
copy to modules/miscutil/lib/upgrades/invenio_2013_10_25_delete_recjson_cache.py
index 3a5f78261..8790a5be0 100644
--- a/modules/bibformat/lib/elements/bfe_publisher.py
+++ b/modules/miscutil/lib/upgrades/invenio_2013_10_25_delete_recjson_cache.py
@@ -1,33 +1,38 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-"""BibFormat element - Prints publisher name
-"""
-__revision__ = "$Id$"
-def format_element(bfo):
- """
- Prints the publisher name
+from invenio.dbquery import run_sql
- @see: place.py, date.py, reprints.py, imprint.py, pagination.py
- """
+depends_on = ['invenio_release_1_1_0']
- publisher = bfo.field('260__b')
+def info():
+ return "Delete recjson cache after bibfield config update"
- if publisher != "sine nomine":
- return publisher
+def do_upgrade():
+ run_sql("DELETE FROM bibfmt WHERE format='recjson'")
+
+def estimate():
+ return 10
+
+def pre_upgrade():
+ pass
+
+def post_upgrade():
+ print 'NOTE: please run inveniocfg --load-bibfield-conf to apply new changes.'
+ print 'NOTE: please consider scheduling inveniocfg --reset-recjson-cache when time permits. (May take a long time.)'
diff --git a/modules/miscutil/sql/tabbibclean.sql b/modules/miscutil/sql/tabbibclean.sql
index cbacba311..e0703c5a1 100644
--- a/modules/miscutil/sql/tabbibclean.sql
+++ b/modules/miscutil/sql/tabbibclean.sql
@@ -1,354 +1,378 @@
TRUNCATE bibrec;
TRUNCATE bib00x;
TRUNCATE bib01x;
TRUNCATE bib02x;
TRUNCATE bib03x;
TRUNCATE bib04x;
TRUNCATE bib05x;
TRUNCATE bib06x;
TRUNCATE bib07x;
TRUNCATE bib08x;
TRUNCATE bib09x;
TRUNCATE bib10x;
TRUNCATE bib11x;
TRUNCATE bib12x;
TRUNCATE bib13x;
TRUNCATE bib14x;
TRUNCATE bib15x;
TRUNCATE bib16x;
TRUNCATE bib17x;
TRUNCATE bib18x;
TRUNCATE bib19x;
TRUNCATE bib20x;
TRUNCATE bib21x;
TRUNCATE bib22x;
TRUNCATE bib23x;
TRUNCATE bib24x;
TRUNCATE bib25x;
TRUNCATE bib26x;
TRUNCATE bib27x;
TRUNCATE bib28x;
TRUNCATE bib29x;
TRUNCATE bib30x;
TRUNCATE bib31x;
TRUNCATE bib32x;
TRUNCATE bib33x;
TRUNCATE bib34x;
TRUNCATE bib35x;
TRUNCATE bib36x;
TRUNCATE bib37x;
TRUNCATE bib38x;
TRUNCATE bib39x;
TRUNCATE bib40x;
TRUNCATE bib41x;
TRUNCATE bib42x;
TRUNCATE bib43x;
TRUNCATE bib44x;
TRUNCATE bib45x;
TRUNCATE bib46x;
TRUNCATE bib47x;
TRUNCATE bib48x;
TRUNCATE bib49x;
TRUNCATE bib50x;
TRUNCATE bib51x;
TRUNCATE bib52x;
TRUNCATE bib53x;
TRUNCATE bib54x;
TRUNCATE bib55x;
TRUNCATE bib56x;
TRUNCATE bib57x;
TRUNCATE bib58x;
TRUNCATE bib59x;
TRUNCATE bib60x;
TRUNCATE bib61x;
TRUNCATE bib62x;
TRUNCATE bib63x;
TRUNCATE bib64x;
TRUNCATE bib65x;
TRUNCATE bib66x;
TRUNCATE bib67x;
TRUNCATE bib68x;
TRUNCATE bib69x;
TRUNCATE bib70x;
TRUNCATE bib71x;
TRUNCATE bib72x;
TRUNCATE bib73x;
TRUNCATE bib74x;
TRUNCATE bib75x;
TRUNCATE bib76x;
TRUNCATE bib77x;
TRUNCATE bib78x;
TRUNCATE bib79x;
TRUNCATE bib80x;
TRUNCATE bib81x;
TRUNCATE bib82x;
TRUNCATE bib83x;
TRUNCATE bib84x;
TRUNCATE bib85x;
TRUNCATE bib86x;
TRUNCATE bib87x;
TRUNCATE bib88x;
TRUNCATE bib89x;
TRUNCATE bib90x;
TRUNCATE bib91x;
TRUNCATE bib92x;
TRUNCATE bib93x;
TRUNCATE bib94x;
TRUNCATE bib95x;
TRUNCATE bib96x;
TRUNCATE bib97x;
TRUNCATE bib98x;
TRUNCATE bib99x;
TRUNCATE bibrec_bib00x;
TRUNCATE bibrec_bib01x;
TRUNCATE bibrec_bib02x;
TRUNCATE bibrec_bib03x;
TRUNCATE bibrec_bib04x;
TRUNCATE bibrec_bib05x;
TRUNCATE bibrec_bib06x;
TRUNCATE bibrec_bib07x;
TRUNCATE bibrec_bib08x;
TRUNCATE bibrec_bib09x;
TRUNCATE bibrec_bib10x;
TRUNCATE bibrec_bib11x;
TRUNCATE bibrec_bib12x;
TRUNCATE bibrec_bib13x;
TRUNCATE bibrec_bib14x;
TRUNCATE bibrec_bib15x;
TRUNCATE bibrec_bib16x;
TRUNCATE bibrec_bib17x;
TRUNCATE bibrec_bib18x;
TRUNCATE bibrec_bib19x;
TRUNCATE bibrec_bib20x;
TRUNCATE bibrec_bib21x;
TRUNCATE bibrec_bib22x;
TRUNCATE bibrec_bib23x;
TRUNCATE bibrec_bib24x;
TRUNCATE bibrec_bib25x;
TRUNCATE bibrec_bib26x;
TRUNCATE bibrec_bib27x;
TRUNCATE bibrec_bib28x;
TRUNCATE bibrec_bib29x;
TRUNCATE bibrec_bib30x;
TRUNCATE bibrec_bib31x;
TRUNCATE bibrec_bib32x;
TRUNCATE bibrec_bib33x;
TRUNCATE bibrec_bib34x;
TRUNCATE bibrec_bib35x;
TRUNCATE bibrec_bib36x;
TRUNCATE bibrec_bib37x;
TRUNCATE bibrec_bib38x;
TRUNCATE bibrec_bib39x;
TRUNCATE bibrec_bib40x;
TRUNCATE bibrec_bib41x;
TRUNCATE bibrec_bib42x;
TRUNCATE bibrec_bib43x;
TRUNCATE bibrec_bib44x;
TRUNCATE bibrec_bib45x;
TRUNCATE bibrec_bib46x;
TRUNCATE bibrec_bib47x;
TRUNCATE bibrec_bib48x;
TRUNCATE bibrec_bib49x;
TRUNCATE bibrec_bib50x;
TRUNCATE bibrec_bib51x;
TRUNCATE bibrec_bib52x;
TRUNCATE bibrec_bib53x;
TRUNCATE bibrec_bib54x;
TRUNCATE bibrec_bib55x;
TRUNCATE bibrec_bib56x;
TRUNCATE bibrec_bib57x;
TRUNCATE bibrec_bib58x;
TRUNCATE bibrec_bib59x;
TRUNCATE bibrec_bib60x;
TRUNCATE bibrec_bib61x;
TRUNCATE bibrec_bib62x;
TRUNCATE bibrec_bib63x;
TRUNCATE bibrec_bib64x;
TRUNCATE bibrec_bib65x;
TRUNCATE bibrec_bib66x;
TRUNCATE bibrec_bib67x;
TRUNCATE bibrec_bib68x;
TRUNCATE bibrec_bib69x;
TRUNCATE bibrec_bib70x;
TRUNCATE bibrec_bib71x;
TRUNCATE bibrec_bib72x;
TRUNCATE bibrec_bib73x;
TRUNCATE bibrec_bib74x;
TRUNCATE bibrec_bib75x;
TRUNCATE bibrec_bib76x;
TRUNCATE bibrec_bib77x;
TRUNCATE bibrec_bib78x;
TRUNCATE bibrec_bib79x;
TRUNCATE bibrec_bib80x;
TRUNCATE bibrec_bib81x;
TRUNCATE bibrec_bib82x;
TRUNCATE bibrec_bib83x;
TRUNCATE bibrec_bib84x;
TRUNCATE bibrec_bib85x;
TRUNCATE bibrec_bib86x;
TRUNCATE bibrec_bib87x;
TRUNCATE bibrec_bib88x;
TRUNCATE bibrec_bib89x;
TRUNCATE bibrec_bib90x;
TRUNCATE bibrec_bib91x;
TRUNCATE bibrec_bib92x;
TRUNCATE bibrec_bib93x;
TRUNCATE bibrec_bib94x;
TRUNCATE bibrec_bib95x;
TRUNCATE bibrec_bib96x;
TRUNCATE bibrec_bib97x;
TRUNCATE bibrec_bib98x;
TRUNCATE bibrec_bib99x;
TRUNCATE bibfmt;
TRUNCATE idxWORD01F;
TRUNCATE idxWORD02F;
TRUNCATE idxWORD03F;
TRUNCATE idxWORD04F;
TRUNCATE idxWORD05F;
TRUNCATE idxWORD06F;
TRUNCATE idxWORD07F;
TRUNCATE idxWORD08F;
TRUNCATE idxWORD09F;
TRUNCATE idxWORD10F;
TRUNCATE idxWORD11F;
TRUNCATE idxWORD12F;
TRUNCATE idxWORD13F;
TRUNCATE idxWORD14F;
TRUNCATE idxWORD15F;
TRUNCATE idxWORD16F;
TRUNCATE idxWORD17F;
TRUNCATE idxWORD18F;
TRUNCATE idxWORD19F;
+TRUNCATE idxWORD20F;
+TRUNCATE idxWORD21F;
+TRUNCATE idxWORD22F;
+TRUNCATE idxWORD23F;
TRUNCATE idxWORD01R;
TRUNCATE idxWORD02R;
TRUNCATE idxWORD03R;
TRUNCATE idxWORD04R;
TRUNCATE idxWORD05R;
TRUNCATE idxWORD06R;
TRUNCATE idxWORD07R;
TRUNCATE idxWORD08R;
TRUNCATE idxWORD09R;
TRUNCATE idxWORD10R;
TRUNCATE idxWORD11R;
TRUNCATE idxWORD12R;
TRUNCATE idxWORD13R;
TRUNCATE idxWORD14R;
TRUNCATE idxWORD15R;
TRUNCATE idxWORD16R;
TRUNCATE idxWORD17R;
TRUNCATE idxWORD18R;
TRUNCATE idxWORD19R;
+TRUNCATE idxWORD20R;
+TRUNCATE idxWORD21R;
+TRUNCATE idxWORD22R;
+TRUNCATE idxWORD23R;
TRUNCATE idxPAIR01F;
TRUNCATE idxPAIR02F;
TRUNCATE idxPAIR03F;
TRUNCATE idxPAIR04F;
TRUNCATE idxPAIR05F;
TRUNCATE idxPAIR06F;
TRUNCATE idxPAIR07F;
TRUNCATE idxPAIR08F;
TRUNCATE idxPAIR09F;
TRUNCATE idxPAIR10F;
TRUNCATE idxPAIR11F;
TRUNCATE idxPAIR12F;
TRUNCATE idxPAIR13F;
TRUNCATE idxPAIR14F;
TRUNCATE idxPAIR15F;
TRUNCATE idxPAIR16F;
TRUNCATE idxPAIR17F;
TRUNCATE idxPAIR18F;
TRUNCATE idxPAIR19F;
+TRUNCATE idxPAIR20F;
+TRUNCATE idxPAIR21F;
+TRUNCATE idxPAIR22F;
+TRUNCATE idxPAIR23F;
TRUNCATE idxPAIR01R;
TRUNCATE idxPAIR02R;
TRUNCATE idxPAIR03R;
TRUNCATE idxPAIR04R;
TRUNCATE idxPAIR05R;
TRUNCATE idxPAIR06R;
TRUNCATE idxPAIR07R;
TRUNCATE idxPAIR08R;
TRUNCATE idxPAIR09R;
TRUNCATE idxPAIR10R;
TRUNCATE idxPAIR11R;
TRUNCATE idxPAIR12R;
TRUNCATE idxPAIR13R;
TRUNCATE idxPAIR14R;
TRUNCATE idxPAIR15R;
TRUNCATE idxPAIR16R;
TRUNCATE idxPAIR17R;
TRUNCATE idxPAIR18R;
TRUNCATE idxPAIR19R;
+TRUNCATE idxPAIR20R;
+TRUNCATE idxPAIR21R;
+TRUNCATE idxPAIR22R;
+TRUNCATE idxPAIR23R;
TRUNCATE idxPHRASE01F;
TRUNCATE idxPHRASE02F;
TRUNCATE idxPHRASE03F;
TRUNCATE idxPHRASE04F;
TRUNCATE idxPHRASE05F;
TRUNCATE idxPHRASE06F;
TRUNCATE idxPHRASE07F;
TRUNCATE idxPHRASE08F;
TRUNCATE idxPHRASE09F;
TRUNCATE idxPHRASE10F;
TRUNCATE idxPHRASE11F;
TRUNCATE idxPHRASE12F;
TRUNCATE idxPHRASE13F;
TRUNCATE idxPHRASE14F;
TRUNCATE idxPHRASE15F;
TRUNCATE idxPHRASE16F;
TRUNCATE idxPHRASE17F;
TRUNCATE idxPHRASE18F;
TRUNCATE idxPHRASE19F;
+TRUNCATE idxPHRASE20F;
+TRUNCATE idxPHRASE21F;
+TRUNCATE idxPHRASE22F;
+TRUNCATE idxPHRASE23F;
TRUNCATE idxPHRASE01R;
TRUNCATE idxPHRASE02R;
TRUNCATE idxPHRASE03R;
TRUNCATE idxPHRASE04R;
TRUNCATE idxPHRASE05R;
TRUNCATE idxPHRASE06R;
TRUNCATE idxPHRASE07R;
TRUNCATE idxPHRASE08R;
TRUNCATE idxPHRASE09R;
TRUNCATE idxPHRASE10R;
TRUNCATE idxPHRASE11R;
TRUNCATE idxPHRASE12R;
TRUNCATE idxPHRASE13R;
TRUNCATE idxPHRASE14R;
TRUNCATE idxPHRASE15R;
TRUNCATE idxPHRASE16R;
TRUNCATE idxPHRASE17R;
TRUNCATE idxPHRASE18R;
TRUNCATE idxPHRASE19R;
+TRUNCATE idxPHRASE20R;
+TRUNCATE idxPHRASE21R;
+TRUNCATE idxPHRASE22R;
+TRUNCATE idxPHRASE23R;
TRUNCATE rnkMETHODDATA;
TRUNCATE rnkCITATIONDATA;
TRUNCATE rnkCITATIONDATAEXT;
TRUNCATE rnkAUTHORDATA;
TRUNCATE rnkRECORDSCACHE;
TRUNCATE rnkEXTENDEDAUTHORS;
TRUNCATE rnkSELFCITES;
TRUNCATE rnkDOWNLOADS;
TRUNCATE rnkPAGEVIEWS;
TRUNCATE rnkWORD01F;
TRUNCATE rnkWORD01R;
TRUNCATE bibdoc;
TRUNCATE bibrec_bibdoc;
TRUNCATE bibdoc_bibdoc;
TRUNCATE bibdocmoreinfo;
TRUNCATE bibdocfsinfo;
TRUNCATE sbmAPPROVAL;
TRUNCATE sbmSUBMISSIONS;
TRUNCATE sbmPUBLICATION;
TRUNCATE sbmPUBLICATIONCOMM;
TRUNCATE sbmPUBLICATIONDATA;
TRUNCATE hstRECORD;
TRUNCATE hstDOCUMENT;
TRUNCATE bibHOLDINGPEN;
TRUNCATE hstEXCEPTION;
TRUNCATE aidPERSONIDDATA;
TRUNCATE aidRESULTS;
TRUNCATE aidCACHE;
TRUNCATE aidPERSONIDPAPERS;
TRUNCATE aidUSERINPUTLOG;
TRUNCATE lnkENTRY;
TRUNCATE lnkENTRYURLTITLE;
TRUNCATE lnkENTRYLOG;
TRUNCATE lnkLOG;
TRUNCATE lnkADMINURL;
TRUNCATE lnkADMINURLLOG;
TRUNCATE wapCACHE;
TRUNCATE goto;
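The long tabbibclean.sql list above follows a regular numbering scheme: bibNNx and bibrec_bibNNx for 00-99, and idx{WORD,PAIR,PHRASE}NNF/NNR for 01-23 once this patch adds indexes 20-23. A short generator (hypothetical, not part of the patch; it deliberately omits the non-numbered tables such as bibfmt, rnk*, sbm* and the history/link tables) makes the pattern explicit:

```python
def truncate_statements():
    """Yield the numbered TRUNCATE statements from tabbibclean.sql."""
    stmts = ["TRUNCATE bibrec;"]
    # bib00x .. bib99x, then bibrec_bib00x .. bibrec_bib99x
    stmts += ["TRUNCATE bib%02dx;" % i for i in range(100)]
    stmts += ["TRUNCATE bibrec_bib%02dx;" % i for i in range(100)]
    # forward (F) and reverse (R) index tables, 01..23 after this patch
    for kind in ("WORD", "PAIR", "PHRASE"):
        for side in ("F", "R"):
            stmts += ["TRUNCATE idx%s%02d%s;" % (kind, i, side)
                      for i in range(1, 24)]
    return stmts

statements = truncate_statements()
```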
diff --git a/modules/miscutil/sql/tabcreate.sql b/modules/miscutil/sql/tabcreate.sql
index c77cff86d..5a1bc4e72 100644
--- a/modules/miscutil/sql/tabcreate.sql
+++ b/modules/miscutil/sql/tabcreate.sql
@@ -1,4346 +1,4689 @@
-- This file is part of Invenio.
-- Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
--
-- Invenio is free software; you can redistribute it and/or
-- modify it under the terms of the GNU General Public License as
-- published by the Free Software Foundation; either version 2 of the
-- License, or (at your option) any later version.
--
-- Invenio is distributed in the hope that it will be useful, but
-- WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- General Public License for more details.
--
-- You should have received a copy of the GNU General Public License
-- along with Invenio; if not, write to the Free Software Foundation, Inc.,
-- 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-- tables for bibliographic records:
CREATE TABLE IF NOT EXISTS bibrec (
id mediumint(8) unsigned NOT NULL auto_increment,
creation_date datetime NOT NULL default '0000-00-00',
modification_date datetime NOT NULL default '0000-00-00',
master_format varchar(16) NOT NULL default 'marc',
PRIMARY KEY (id),
KEY creation_date (creation_date),
KEY modification_date (modification_date)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib00x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib01x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib02x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib03x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib04x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib05x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib06x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib07x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib08x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib09x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib10x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib11x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib12x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib13x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib14x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib15x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib16x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib17x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib18x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib19x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib20x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib21x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib22x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib23x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib24x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib25x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib26x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib27x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib28x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib29x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib30x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib31x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib32x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib33x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib34x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib35x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib36x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib37x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib38x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib39x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib40x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib41x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib42x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib43x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib44x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib45x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib46x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib47x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib48x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib49x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib50x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib51x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib52x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib53x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib54x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib55x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib56x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib57x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib58x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib59x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib60x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib61x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib62x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib63x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib64x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib65x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib66x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib67x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib68x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib69x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib70x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib71x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib72x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib73x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib74x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib75x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib76x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib77x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib78x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib79x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib80x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib81x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib82x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib83x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib84x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib85x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(100)) -- URLs usually need a longer index prefix for speedy lookups
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib86x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib87x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib88x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib89x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib90x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib91x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib92x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib93x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib94x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib95x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib96x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib97x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib98x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bib99x (
id mediumint(8) unsigned NOT NULL auto_increment,
tag varchar(6) NOT NULL default '',
value text NOT NULL,
PRIMARY KEY (id),
KEY kt (tag),
KEY kv (value(35))
) ENGINE=MyISAM;
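-- Note (explanatory comment, not part of the original schema): the
-- bibrec_bibNNx tables below are join tables linking bibliographic records
-- (bibrec.id, stored as id_bibrec) to the metadata values held in the
-- corresponding bibNNx tables above (id_bibxxx); field_number is understood
-- to group entries belonging to the same repeated field instance. Both
-- columns are indexed separately so lookups work in either direction, e.g.:
--
--   SELECT b.tag, b.value
--     FROM bibrec_bib10x rb
--     JOIN bib10x b ON b.id = rb.id_bibxxx
--    WHERE rb.id_bibrec = 1;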
CREATE TABLE IF NOT EXISTS bibrec_bib00x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib01x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib02x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib03x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib04x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib05x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib06x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib07x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib08x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib09x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib10x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib11x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib12x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib13x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib14x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib15x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib16x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib17x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib18x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib19x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib20x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib21x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib22x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib23x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib24x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib25x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib26x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib27x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib28x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib29x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib30x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib31x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib32x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib33x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib34x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib35x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib36x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib37x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib38x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib39x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib40x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib41x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib42x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib43x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib44x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib45x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib46x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib47x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib48x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib49x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib50x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib51x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib52x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib53x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib54x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib55x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib56x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib57x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib58x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib59x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib60x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib61x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib62x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib63x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib64x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib65x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib66x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib67x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib68x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib69x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib70x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib71x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib72x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib73x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib74x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib75x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib76x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib77x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib78x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib79x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib80x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib81x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib82x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib83x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib84x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib85x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib86x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib87x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib88x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib89x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib90x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib91x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib92x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib93x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib94x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib95x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib96x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib97x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib98x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bib99x (
id_bibrec mediumint(8) unsigned NOT NULL default '0',
id_bibxxx mediumint(8) unsigned NOT NULL default '0',
field_number smallint(5) unsigned default NULL,
KEY id_bibxxx (id_bibxxx),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
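-- The bibrec_bibNNx tables above are pure link tables: each row ties a record
-- (id_bibrec) to one metadata value row in the corresponding bibNNx table
-- (id_bibxxx), with field_number preserving repetition order. A minimal
-- runnable sketch of that join, using SQLite as a stand-in for MySQL with
-- simplified column types (the bib85x triple table and the '8564_u' MARC-style
-- tag are assumptions for illustration, not part of this file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Simplified stand-ins for bib85x (id, tag, value) and its link table:
cur.execute("CREATE TABLE bib85x (id INTEGER PRIMARY KEY, tag TEXT, value TEXT)")
cur.execute("""CREATE TABLE bibrec_bib85x (
                 id_bibrec INTEGER NOT NULL DEFAULT 0,
                 id_bibxxx INTEGER NOT NULL DEFAULT 0,
                 field_number INTEGER)""")
# Record 1 has one 85x field value, stored once and linked by the link table:
cur.execute("INSERT INTO bib85x VALUES (1, '8564_u', 'http://example.org/fulltext.pdf')")
cur.execute("INSERT INTO bibrec_bib85x VALUES (1, 1, 1)")
# Fetching all 85x values for record 1 is a two-table join via id_bibxxx:
rows = cur.execute("""SELECT b.tag, b.value
                      FROM bibrec_bib85x AS rb
                      JOIN bib85x AS b ON b.id = rb.id_bibxxx
                      WHERE rb.id_bibrec = 1
                      ORDER BY rb.field_number""").fetchall()
print(rows)  # [('8564_u', 'http://example.org/fulltext.pdf')]
```

-- The two KEYs on id_bibxxx and id_bibrec exist so this join is cheap in
-- both directions: record-to-values and value-to-records.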
-- tables for formatted bibliographic records:
CREATE TABLE IF NOT EXISTS bibfmt (
id_bibrec int(8) unsigned NOT NULL default '0',
format varchar(10) NOT NULL default '',
  last_updated datetime NOT NULL default '0000-00-00 00:00:00',
value longblob,
PRIMARY KEY (id_bibrec, format),
KEY format (format),
KEY last_updated (last_updated)
) ENGINE=MyISAM;
-- tables for index files:
CREATE TABLE IF NOT EXISTS idxINDEX (
id mediumint(9) unsigned NOT NULL,
name varchar(50) NOT NULL default '',
description varchar(255) NOT NULL default '',
last_updated datetime NOT NULL default '0000-00-00 00:00:00',
stemming_language varchar(10) NOT NULL default '',
indexer varchar(10) NOT NULL default 'native',
+ synonym_kbrs varchar(255) NOT NULL default '',
+ remove_stopwords varchar(255) NOT NULL default '',
+ remove_html_markup varchar(10) NOT NULL default '',
+ remove_latex_markup varchar(10) NOT NULL default '',
+ tokenizer varchar(50) NOT NULL default '',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxINDEXNAME (
id_idxINDEX mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_idxINDEX,ln,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxINDEX_field (
id_idxINDEX mediumint(9) unsigned NOT NULL,
id_field mediumint(9) unsigned NOT NULL,
regexp_punctuation varchar(255) NOT NULL default "[\.\,\:\;\?\!\"]",
regexp_alphanumeric_separators varchar(255) NOT NULL default "[\!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~]",
PRIMARY KEY (id_idxINDEX,id_field)
) ENGINE=MyISAM;
-- this comment line here is just to fix the SQL display mode in Emacs '
+CREATE TABLE IF NOT EXISTS idxINDEX_idxINDEX (
+ id_virtual mediumint(9) unsigned NOT NULL,
+ id_normal mediumint(9) unsigned NOT NULL,
+ PRIMARY KEY (id_virtual,id_normal)
+) ENGINE=MyISAM;
+
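-- The new idxINDEX_idxINDEX table maps a "virtual" index to the normal
-- indexes it aggregates; resolving a virtual index is a single lookup on
-- id_virtual. A small sketch of that lookup (SQLite stand-in; the choice of
-- ids 20 and 1..3 is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE idxINDEX_idxINDEX (
                 id_virtual INTEGER NOT NULL,
                 id_normal INTEGER NOT NULL,
                 PRIMARY KEY (id_virtual, id_normal))""")
# Suppose virtual index 20 is composed of normal indexes 1, 2 and 3:
cur.executemany("INSERT INTO idxINDEX_idxINDEX VALUES (?, ?)",
                [(20, 1), (20, 2), (20, 3)])
# A search against the virtual index expands to its member indexes:
normal_ids = [r[0] for r in cur.execute(
    "SELECT id_normal FROM idxINDEX_idxINDEX WHERE id_virtual = 20 "
    "ORDER BY id_normal")]
print(normal_ids)  # [1, 2, 3]
```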
CREATE TABLE IF NOT EXISTS idxWORD01F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD01R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
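-- Each idxWORDnnF/idxWORDnnR pair implements a forward and a reverse index:
-- the F table maps a term to a serialized hitlist of record ids, the R table
-- maps a record id back to its serialized term list (with CURRENT/FUTURE/
-- TEMPORARY rows used during reindexing). A runnable sketch of the pattern,
-- with SQLite standing in for MySQL and JSON standing in for Invenio's binary
-- blob serialization (both substitutions are for readability only):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE idxWORD01F (
                 id INTEGER PRIMARY KEY AUTOINCREMENT,
                 term TEXT UNIQUE,
                 hitlist BLOB)""")
cur.execute("""CREATE TABLE idxWORD01R (
                 id_bibrec INTEGER NOT NULL,
                 termlist BLOB,
                 type TEXT NOT NULL DEFAULT 'CURRENT',
                 PRIMARY KEY (id_bibrec, type))""")
# Term "ellis" occurs in records 5 and 7; record 5 contains "ellis" and "cern":
cur.execute("INSERT INTO idxWORD01F (term, hitlist) VALUES (?, ?)",
            ("ellis", json.dumps([5, 7])))
cur.execute("INSERT INTO idxWORD01R (id_bibrec, termlist) VALUES (?, ?)",
            (5, json.dumps(["ellis", "cern"])))
# Query time uses the forward table: term -> record ids.
hits = json.loads(cur.execute(
    "SELECT hitlist FROM idxWORD01F WHERE term = 'ellis'").fetchone()[0])
print(hits)  # [5, 7]
```

-- The reverse table is what makes incremental reindexing possible: to update
-- one record, its old termlist is read from the R table and only the changed
-- terms' hitlists in the F table are touched.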
CREATE TABLE IF NOT EXISTS idxWORD02F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD02R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD03F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD03R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD04F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD04R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD05F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD05R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD06F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD06R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD07F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD07R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD08F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD08R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD09F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD09R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD10F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD10R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD11F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD11R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD12F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD12R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD13F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD13R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD14F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD14R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD15F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD15R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD16F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD16R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD17F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD17R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD18F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD18R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD19F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxWORD19R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
+CREATE TABLE IF NOT EXISTS idxWORD20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+
+CREATE TABLE IF NOT EXISTS idxWORD21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD26F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(50) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxWORD26R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
CREATE TABLE IF NOT EXISTS idxPAIR01F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR01R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR02F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR02R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR03F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR03R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR04F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR04R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR05F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR05R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR06F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR06R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR07F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR07R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR08F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR08R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR09F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR09R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR10F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR10R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR11F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR11R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR12F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR12R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR13F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR13R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR14F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR14R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR15F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR15R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR16F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR16R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR17F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR17R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR18F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR18R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR19F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(100) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPAIR19R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
+CREATE TABLE IF NOT EXISTS idxPAIR20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR26F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term varchar(100) default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ UNIQUE KEY term (term)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPAIR26R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
CREATE TABLE IF NOT EXISTS idxPHRASE01F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE01R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE02F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE02R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE03F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE03R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE04F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE04R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE05F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE05R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE06F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE06R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE07F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE07R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE08F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE08R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE09F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE09R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE10F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE10R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE11F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE11R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE12F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE12R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE13F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE13R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE14F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE14R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE15F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE15R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE16F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE16R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE17F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE17R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE18F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE18R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE19F (
id mediumint(9) unsigned NOT NULL auto_increment,
term text default NULL,
hitlist longblob,
PRIMARY KEY (id),
KEY term (term(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS idxPHRASE19R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
+CREATE TABLE IF NOT EXISTS idxPHRASE20F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE20R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+
+CREATE TABLE IF NOT EXISTS idxPHRASE21F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE21R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE22F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE22R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE23F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE23R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE24F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE24R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE25F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE25R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE26F (
+ id mediumint(9) unsigned NOT NULL auto_increment,
+ term text default NULL,
+ hitlist longblob,
+ PRIMARY KEY (id),
+ KEY term (term(50))
+) ENGINE=MyISAM;
+
+CREATE TABLE IF NOT EXISTS idxPHRASE26R (
+ id_bibrec mediumint(9) unsigned NOT NULL,
+ termlist longblob,
+ type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
+ PRIMARY KEY (id_bibrec,type)
+) ENGINE=MyISAM;
+
-- tables for ranking:
CREATE TABLE IF NOT EXISTS rnkMETHOD (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(20) NOT NULL default '',
last_updated datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkMETHODNAME (
id_rnkMETHOD mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_rnkMETHOD,ln,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkMETHODDATA (
id_rnkMETHOD mediumint(9) unsigned NOT NULL,
relevance_data longblob,
PRIMARY KEY (id_rnkMETHOD)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS collection_rnkMETHOD (
id_collection mediumint(9) unsigned NOT NULL,
id_rnkMETHOD mediumint(9) unsigned NOT NULL,
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection,id_rnkMETHOD)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkWORD01F (
id mediumint(9) unsigned NOT NULL auto_increment,
term varchar(50) default NULL,
hitlist longblob,
PRIMARY KEY (id),
UNIQUE KEY term (term)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkWORD01R (
id_bibrec mediumint(9) unsigned NOT NULL,
termlist longblob,
type enum('CURRENT','FUTURE','TEMPORARY') NOT NULL default 'CURRENT',
PRIMARY KEY (id_bibrec,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkAUTHORDATA (
aterm varchar(50) default NULL,
hitlist longblob,
UNIQUE KEY aterm (aterm)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkPAGEVIEWS (
id_bibrec mediumint(8) unsigned default NULL,
id_user int(15) unsigned default '0',
client_host int(10) unsigned default NULL,
view_time datetime default '0000-00-00 00:00:00',
KEY view_time (view_time),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS rnkDOWNLOADS (
id_bibrec mediumint(8) unsigned default NULL,
download_time datetime default '0000-00-00 00:00:00',
client_host int(10) unsigned default NULL,
id_user int(15) unsigned default NULL,
id_bibdoc mediumint(9) unsigned default NULL,
file_version smallint(2) unsigned default NULL,
file_format varchar(50) NULL default NULL,
KEY download_time (download_time),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
-- a table for citations. record-cites-record
CREATE TABLE IF NOT EXISTS rnkCITATIONDATA (
id mediumint(8) unsigned NOT NULL auto_increment,
object_name varchar(255) NOT NULL,
object_value longblob,
last_updated datetime NOT NULL default '0000-00-00',
PRIMARY KEY id (id),
UNIQUE KEY object_name (object_name)
) ENGINE=MyISAM;
-- a table for missing citations. This should be scanned by a program
-- occasionally to check if some publication has been cited more than
-- 50 times (or such), and alert cataloguers to create record for that
-- external citation
--
-- id_bibrec is the id of the record. extcitepubinfo is publication info
-- that looks in general like hep-th/0112088
CREATE TABLE IF NOT EXISTS rnkCITATIONDATAEXT (
id_bibrec int(8) unsigned,
extcitepubinfo varchar(255) NOT NULL,
PRIMARY KEY (id_bibrec, extcitepubinfo),
KEY extcitepubinfo (extcitepubinfo)
) ENGINE=MyISAM;
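-- As a sketch of the occasional scan described in the comment above, a
-- maintenance program might run a query along these lines (the threshold of
-- 50 is the example figure from the comment; this query is illustrative
-- only and is not part of the schema):
--
--   SELECT extcitepubinfo, COUNT(id_bibrec) AS nb_citations
--     FROM rnkCITATIONDATAEXT
--    GROUP BY extcitepubinfo
--   HAVING nb_citations > 50
--    ORDER BY nb_citations DESC;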
-- tables for self-citations computation
CREATE TABLE IF NOT EXISTS `rnkRECORDSCACHE` (
`id_bibrec` int(10) unsigned NOT NULL,
`authorid` bigint(10) NOT NULL,
PRIMARY KEY (`id_bibrec`,`authorid`)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS `rnkEXTENDEDAUTHORS` (
`id` int(10) unsigned NOT NULL,
`authorid` bigint(10) NOT NULL,
PRIMARY KEY (`id`,`authorid`)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS `rnkSELFCITES` (
`id_bibrec` int(10) unsigned NOT NULL,
`count` int(10) unsigned NOT NULL,
`references` text NOT NULL,
`last_updated` datetime NOT NULL,
PRIMARY KEY (`id_bibrec`)
) ENGINE=MyISAM;
-- a table for storing invalid or ambiguous references encountered
CREATE TABLE IF NOT EXISTS rnkCITATIONDATAERR (
`type` ENUM('multiple-matches', 'not-well-formed'),
citinfo varchar(255) NOT NULL default '',
PRIMARY KEY (`type`, citinfo)
) ENGINE=MyISAM;
-- tables for collections and collection tree:
CREATE TABLE IF NOT EXISTS collection (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
dbquery text,
nbrecs int(10) unsigned default '0',
reclist longblob,
PRIMARY KEY (id),
UNIQUE KEY name (name),
KEY dbquery (dbquery(50))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS collectionname (
id_collection mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_collection,ln,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS collection_collection (
id_dad mediumint(9) unsigned NOT NULL,
id_son mediumint(9) unsigned NOT NULL,
type char(1) NOT NULL default 'r',
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_dad,id_son)
) ENGINE=MyISAM;
-- tables for OAI sets:
CREATE TABLE IF NOT EXISTS oaiREPOSITORY (
id mediumint(9) unsigned NOT NULL auto_increment,
setName varchar(255) NOT NULL default '',
setSpec varchar(255) NOT NULL default 'GLOBAL_SET',
setCollection varchar(255) NOT NULL default '',
setDescription text NOT NULL default '',
setDefinition text NOT NULL default '',
setRecList longblob,
last_updated datetime NOT NULL default '1970-01-01',
p1 text NOT NULL default '',
f1 text NOT NULL default '',
m1 text NOT NULL default '',
p2 text NOT NULL default '',
f2 text NOT NULL default '',
m2 text NOT NULL default '',
p3 text NOT NULL default '',
f3 text NOT NULL default '',
m3 text NOT NULL default '',
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS oaiHARVEST (
id mediumint(9) unsigned NOT NULL auto_increment,
baseurl varchar(255) NOT NULL default '',
metadataprefix varchar(255) NOT NULL default 'oai_dc',
arguments text,
comment text,
bibconvertcfgfile varchar(255),
name varchar(255) NOT NULL,
lastrun datetime,
frequency mediumint(12) NOT NULL default '0',
postprocess varchar(20) NOT NULL default 'h',
bibfilterprogram varchar(255) NOT NULL default '',
setspecs text NOT NULL default '',
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS oaiHARVESTLOG (
id_oaiHARVEST mediumint(9) unsigned NOT NULL REFERENCES oaiHARVEST, -- source we harvest from
id_bibrec mediumint(8) unsigned NOT NULL default '0', -- internal record id ( filled by bibupload )
bibupload_task_id int NOT NULL default 0, -- bib upload task number
oai_id varchar(40) NOT NULL default '', -- OAI record identifier we harvested
date_harvested datetime NOT NULL default '0000-00-00', -- when we harvested
date_inserted datetime NOT NULL default '0000-00-00', -- when it was inserted
inserted_to_db char(1) NOT NULL default 'P', -- where it was inserted (P=prod, H=holding-pen, etc)
PRIMARY KEY (bibupload_task_id, oai_id, date_harvested)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibHOLDINGPEN (
changeset_id INT NOT NULL AUTO_INCREMENT, -- the identifier of the changeset stored in the holding pen
changeset_date datetime NOT NULL DEFAULT '0000-00-00 00:00:00', -- when was the changeset inserted
changeset_xml TEXT NOT NULL DEFAULT '',
oai_id varchar(40) NOT NULL DEFAULT '', -- OAI identifier of concerned record
id_bibrec mediumint(8) unsigned NOT NULL default '0', -- record ID of concerned record (filled by bibupload)
PRIMARY KEY (changeset_id),
KEY changeset_date (changeset_date),
KEY id_bibrec (id_bibrec)
) ENGINE=MyISAM;
-- tables for portal elements:
CREATE TABLE IF NOT EXISTS collection_portalbox (
id_collection mediumint(9) unsigned NOT NULL,
id_portalbox mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
position char(3) NOT NULL default 'top',
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection,id_portalbox,ln)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS portalbox (
id mediumint(9) unsigned NOT NULL auto_increment,
title text NOT NULL,
body text NOT NULL,
UNIQUE KEY id (id)
) ENGINE=MyISAM;
-- tables for search examples:
CREATE TABLE IF NOT EXISTS collection_example (
id_collection mediumint(9) unsigned NOT NULL,
id_example mediumint(9) unsigned NOT NULL,
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection,id_example)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS example (
id mediumint(9) unsigned NOT NULL auto_increment,
type text NOT NULL default '',
body text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- tables for collection formats:
CREATE TABLE IF NOT EXISTS collection_format (
id_collection mediumint(9) unsigned NOT NULL,
id_format mediumint(9) unsigned NOT NULL,
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection,id_format)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS format (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
code varchar(6) NOT NULL,
description varchar(255) default '',
content_type varchar(255) default '',
visibility tinyint NOT NULL default '1',
last_updated datetime NOT NULL default '0000-00-00',
PRIMARY KEY (id),
UNIQUE KEY code (code)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS formatname (
id_format mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_format,ln,type)
) ENGINE=MyISAM;
-- tables for collection detailed page options
CREATE TABLE IF NOT EXISTS collectiondetailedrecordpagetabs (
id_collection mediumint(9) unsigned NOT NULL,
tabs varchar(255) NOT NULL default '',
PRIMARY KEY (id_collection)
) ENGINE=MyISAM;
-- tables for search options and MARC tags:
CREATE TABLE IF NOT EXISTS collection_field_fieldvalue (
id_collection mediumint(9) unsigned NOT NULL,
id_field mediumint(9) unsigned NOT NULL,
id_fieldvalue mediumint(9) unsigned,
type char(3) NOT NULL default 'src',
score tinyint(4) unsigned NOT NULL default '0',
score_fieldvalue tinyint(4) unsigned NOT NULL default '0',
KEY id_collection (id_collection),
KEY id_field (id_field),
KEY id_fieldvalue (id_fieldvalue)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS field (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
code varchar(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY code (code)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS fieldname (
id_field mediumint(9) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_field,ln,type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS fieldvalue (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
value text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS field_tag (
id_field mediumint(9) unsigned NOT NULL,
id_tag mediumint(9) unsigned NOT NULL,
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_field,id_tag)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS tag (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
value char(6) NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- tables for file management
CREATE TABLE IF NOT EXISTS bibdoc (
id mediumint(9) unsigned NOT NULL auto_increment,
status text NOT NULL default '',
docname varchar(250) COLLATE utf8_bin default NULL, -- now NULL means that this is new version bibdoc
creation_date datetime NOT NULL default '0000-00-00',
modification_date datetime NOT NULL default '0000-00-00',
text_extraction_date datetime NOT NULL default '0000-00-00',
doctype varchar(255),
PRIMARY KEY (id),
KEY docname (docname),
KEY creation_date (creation_date),
KEY modification_date (modification_date)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibrec_bibdoc (
id_bibrec mediumint(9) unsigned NOT NULL default '0',
id_bibdoc mediumint(9) unsigned NOT NULL default '0',
docname varchar(250) COLLATE utf8_bin NOT NULL default 'file',
type varchar(255),
KEY docname (docname),
KEY (id_bibrec),
KEY (id_bibdoc)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibdoc_bibdoc (
id mediumint(9) unsigned NOT NULL auto_increment,
id_bibdoc1 mediumint(9) unsigned DEFAULT NULL,
version1 tinyint(4) unsigned, -- NULL means all versions
format1 varchar(50),
id_bibdoc2 mediumint(9) unsigned DEFAULT NULL,
version2 tinyint(4) unsigned, -- NULL means all versions
format2 varchar(50),
rel_type varchar(255),
KEY (id_bibdoc1),
KEY (id_bibdoc2),
KEY (id)
) ENGINE=MyISAM;
-- Storage of moreInfo fields
CREATE TABLE IF NOT EXISTS bibdocmoreinfo (
id_bibdoc mediumint(9) unsigned DEFAULT NULL,
version tinyint(4) unsigned DEFAULT NULL, -- NULL means all versions
format VARCHAR(50) DEFAULT NULL,
id_rel mediumint(9) unsigned DEFAULT NULL,
namespace VARCHAR(25) DEFAULT NULL, -- namespace in the moreinfo dictionary
data_key VARCHAR(25), -- key in the moreinfo dictionary
data_value MEDIUMBLOB,
KEY (id_bibdoc, version, format, id_rel, namespace, data_key)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bibdocfsinfo (
id_bibdoc mediumint(9) unsigned NOT NULL,
version tinyint(4) unsigned NOT NULL,
format varchar(50) NOT NULL,
last_version boolean NOT NULL,
cd datetime NOT NULL,
md datetime NOT NULL,
checksum char(32) NOT NULL,
filesize bigint(15) unsigned NOT NULL,
mime varchar(100) NOT NULL,
master_format varchar(50) NULL default NULL,
PRIMARY KEY (id_bibdoc, version, format),
KEY (last_version),
KEY (format),
KEY (cd),
KEY (md),
KEY (filesize),
KEY (mime)
) ENGINE=MyISAM;
-- tables for publication requests:
CREATE TABLE IF NOT EXISTS publreq (
id int(11) NOT NULL auto_increment,
host varchar(255) NOT NULL default '',
date varchar(255) NOT NULL default '',
name varchar(255) NOT NULL default '',
email varchar(255) NOT NULL default '',
address text NOT NULL,
publication text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- table for sessions and users:
CREATE TABLE IF NOT EXISTS session (
session_key varchar(32) NOT NULL default '',
session_expiry datetime NOT NULL default '0000-00-00 00:00:00',
session_object longblob,
uid int(15) unsigned NOT NULL,
UNIQUE KEY session_key (session_key),
KEY uid (uid),
KEY session_expiry (session_expiry)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user (
id int(15) unsigned NOT NULL auto_increment,
email varchar(255) NOT NULL default '',
password blob NOT NULL,
note varchar(255) default NULL,
settings blob default NULL,
nickname varchar(255) NOT NULL default '',
last_login datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY id (id),
KEY email (email),
KEY nickname (nickname)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS userEXT (
id varbinary(255) NOT NULL,
method varchar(50) NOT NULL,
id_user int(15) unsigned NOT NULL,
PRIMARY KEY (id, method),
UNIQUE KEY (id_user, method)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS oauth1_storage (
token varchar(255) NOT NULL,
secret varchar(255) NOT NULL,
date_creation datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (token)
) ENGINE=MyISAM;
-- tables for usergroups
CREATE TABLE IF NOT EXISTS usergroup (
id int(15) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
description text default '',
join_policy char(2) NOT NULL default '',
login_method varchar(255) NOT NULL default 'INTERNAL',
PRIMARY KEY (id),
UNIQUE KEY login_method_name (login_method(70), name),
KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_usergroup (
id_user int(15) unsigned NOT NULL default '0',
id_usergroup int(15) unsigned NOT NULL default '0',
user_status char(1) NOT NULL default '',
user_status_date datetime NOT NULL default '0000-00-00 00:00:00',
KEY id_user (id_user),
KEY id_usergroup (id_usergroup)
) ENGINE=MyISAM;
-- tables for access control engine
CREATE TABLE IF NOT EXISTS accROLE (
id int(15) unsigned NOT NULL auto_increment,
name varchar(32),
description varchar(255),
firerole_def_ser blob NULL,
firerole_def_src text NULL,
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_accROLE (
id_user int(15) unsigned NOT NULL,
id_accROLE int(15) unsigned NOT NULL,
expiration datetime NOT NULL default '9999-12-31 23:59:59',
PRIMARY KEY (id_user, id_accROLE)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS accMAILCOOKIE (
id int(15) unsigned NOT NULL auto_increment,
data blob NOT NULL,
expiration datetime NOT NULL default '9999-12-31 23:59:59',
kind varchar(32) NOT NULL,
onetime boolean NOT NULL default 0,
status char(1) NOT NULL default 'W',
PRIMARY KEY (id),
KEY expiration (expiration)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS accACTION (
id int(15) unsigned NOT NULL auto_increment,
name varchar(32),
description varchar(255),
allowedkeywords varchar(255),
optional ENUM ('yes', 'no') NOT NULL default 'no',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS accARGUMENT (
id int(15) unsigned NOT NULL auto_increment,
keyword varchar(32),
value varchar(255),
PRIMARY KEY (id),
KEY KEYVAL (keyword, value)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS accROLE_accACTION_accARGUMENT (
id_accROLE int(15),
id_accACTION int(15),
id_accARGUMENT int(15),
argumentlistid mediumint(8),
KEY id_accROLE (id_accROLE),
KEY id_accACTION (id_accACTION),
KEY id_accARGUMENT (id_accARGUMENT)
) ENGINE=MyISAM;
-- tables for personal/collaborative features (baskets, alerts, searches, messages, usergroups):
CREATE TABLE IF NOT EXISTS user_query (
id_user int(15) unsigned NOT NULL default '0',
id_query int(15) unsigned NOT NULL default '0',
hostname varchar(50) default 'unknown host',
date datetime default NULL,
KEY id_user (id_user,id_query)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS query (
id int(15) unsigned NOT NULL auto_increment,
type char(1) NOT NULL default 'r',
urlargs text NOT NULL,
PRIMARY KEY (id),
KEY urlargs (urlargs(100))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_query_basket (
id_user int(15) unsigned NOT NULL default '0',
id_query int(15) unsigned NOT NULL default '0',
id_basket int(15) unsigned NOT NULL default '0',
frequency varchar(5) NOT NULL default '',
date_creation date default NULL,
date_lastrun date default '0000-00-00',
alert_name varchar(30) NOT NULL default '',
alert_desc text default NULL,
alert_recipient text default NULL,
notification char(1) NOT NULL default 'y',
PRIMARY KEY (id_user,id_query,frequency,id_basket),
KEY alert_name (alert_name)
) ENGINE=MyISAM;
-- baskets
CREATE TABLE IF NOT EXISTS bskBASKET (
id int(15) unsigned NOT NULL auto_increment,
id_owner int(15) unsigned NOT NULL default '0',
name varchar(50) NOT NULL default '',
date_modification datetime NOT NULL default '0000-00-00 00:00:00',
nb_views int(15) NOT NULL default '0',
PRIMARY KEY (id),
KEY id_owner (id_owner),
KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bskREC (
id_bibrec_or_bskEXTREC int(16) NOT NULL default '0',
id_bskBASKET int(15) unsigned NOT NULL default '0',
id_user_who_added_item int(15) NOT NULL default '0',
score int(15) NOT NULL default '0',
date_added datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id_bibrec_or_bskEXTREC,id_bskBASKET),
KEY id_bibrec_or_bskEXTREC (id_bibrec_or_bskEXTREC),
KEY id_bskBASKET (id_bskBASKET),
KEY score (score),
KEY date_added (date_added)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bskEXTREC (
id int(15) unsigned NOT NULL auto_increment,
external_id int(15) NOT NULL default '0',
collection_id int(15) unsigned NOT NULL default '0',
original_url text,
creation_date datetime NOT NULL default '0000-00-00 00:00:00',
modification_date datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bskEXTFMT (
id int(15) unsigned NOT NULL auto_increment,
id_bskEXTREC int(15) unsigned NOT NULL default '0',
format varchar(10) NOT NULL default '',
last_updated datetime NOT NULL default '0000-00-00 00:00:00',
value longblob,
PRIMARY KEY (id),
KEY id_bskEXTREC (id_bskEXTREC),
KEY format (format)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_bskBASKET (
id_user int(15) unsigned NOT NULL default '0',
id_bskBASKET int(15) unsigned NOT NULL default '0',
topic varchar(50) NOT NULL default '',
PRIMARY KEY (id_user,id_bskBASKET),
KEY id_user (id_user),
KEY id_bskBASKET (id_bskBASKET)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS usergroup_bskBASKET (
id_usergroup int(15) unsigned NOT NULL default '0',
id_bskBASKET int(15) unsigned NOT NULL default '0',
topic varchar(50) NOT NULL default '',
date_shared datetime NOT NULL default '0000-00-00 00:00:00',
share_level char(2) NOT NULL default '',
PRIMARY KEY (id_usergroup,id_bskBASKET),
KEY id_usergroup (id_usergroup),
KEY id_bskBASKET (id_bskBASKET)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bskRECORDCOMMENT (
id int(15) unsigned NOT NULL auto_increment,
id_bibrec_or_bskEXTREC int(16) NOT NULL default '0',
id_bskBASKET int(15) unsigned NOT NULL default '0',
id_user int(15) unsigned NOT NULL default '0',
title varchar(255) NOT NULL default '',
body text NOT NULL,
date_creation datetime NOT NULL default '0000-00-00 00:00:00',
priority int(15) NOT NULL default '0',
in_reply_to_id_bskRECORDCOMMENT int(15) unsigned NOT NULL default '0',
reply_order_cached_data blob NULL default NULL,
PRIMARY KEY (id),
KEY id_bskBASKET (id_bskBASKET),
KEY id_bibrec_or_bskEXTREC (id_bibrec_or_bskEXTREC),
KEY date_creation (date_creation),
KEY in_reply_to_id_bskRECORDCOMMENT (in_reply_to_id_bskRECORDCOMMENT),
INDEX (reply_order_cached_data(40))
) ENGINE=MyISAM;
-- tables for messaging system
CREATE TABLE IF NOT EXISTS msgMESSAGE (
id int(15) unsigned NOT NULL auto_increment,
id_user_from int(15) unsigned NOT NULL default '0',
sent_to_user_nicks text NOT NULL default '',
sent_to_group_names text NOT NULL default '',
subject text NOT NULL default '',
body text default NULL,
sent_date datetime NOT NULL default '0000-00-00 00:00:00',
received_date datetime NULL default '0000-00-00 00:00:00',
PRIMARY KEY id (id),
KEY id_user_from (id_user_from)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_msgMESSAGE (
id_user_to int(15) unsigned NOT NULL default '0',
id_msgMESSAGE int(15) unsigned NOT NULL default '0',
status char(1) NOT NULL default 'N',
PRIMARY KEY id (id_user_to, id_msgMESSAGE),
KEY id_user_to (id_user_to),
KEY id_msgMESSAGE (id_msgMESSAGE)
) ENGINE=MyISAM;
-- tables for WebComment
CREATE TABLE IF NOT EXISTS cmtRECORDCOMMENT (
id int(15) unsigned NOT NULL auto_increment,
id_bibrec int(15) unsigned NOT NULL default '0',
id_user int(15) unsigned NOT NULL default '0',
title varchar(255) NOT NULL default '',
body text NOT NULL default '',
date_creation datetime NOT NULL default '0000-00-00 00:00:00',
star_score tinyint(5) unsigned NOT NULL default '0',
nb_votes_yes int(10) NOT NULL default '0',
nb_votes_total int(10) unsigned NOT NULL default '0',
nb_abuse_reports int(10) NOT NULL default '0',
status char(2) NOT NULL default 'ok',
round_name varchar(255) NOT NULL default '',
restriction varchar(50) NOT NULL default '',
in_reply_to_id_cmtRECORDCOMMENT int(15) unsigned NOT NULL default '0',
reply_order_cached_data blob NULL default NULL,
PRIMARY KEY (id),
KEY id_bibrec (id_bibrec),
KEY id_user (id_user),
KEY status (status),
KEY in_reply_to_id_cmtRECORDCOMMENT (in_reply_to_id_cmtRECORDCOMMENT),
INDEX (reply_order_cached_data(40))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS cmtACTIONHISTORY (
id_cmtRECORDCOMMENT int(15) unsigned NULL,
id_bibrec int(15) unsigned NULL,
id_user int(15) unsigned NULL default NULL,
client_host int(10) unsigned default NULL,
action_time datetime NOT NULL default '0000-00-00 00:00:00',
action_code char(1) NOT NULL,
KEY id_cmtRECORDCOMMENT (id_cmtRECORDCOMMENT),
KEY client_host (client_host),
KEY id_user (id_user),
KEY action_code (action_code)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS cmtSUBSCRIPTION (
id_bibrec mediumint(8) unsigned NOT NULL,
id_user int(15) unsigned NOT NULL,
creation_time datetime NOT NULL default '0000-00-00 00:00:00',
KEY id_user (id_bibrec, id_user)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS cmtCOLLAPSED (
id_bibrec int(15) unsigned NOT NULL default '0',
id_cmtRECORDCOMMENT int(15) unsigned NULL,
id_user int(15) unsigned NOT NULL,
PRIMARY KEY (id_user, id_bibrec, id_cmtRECORDCOMMENT)
) ENGINE=MyISAM;
-- tables for BibKnowledge:
CREATE TABLE IF NOT EXISTS knwKB (
id mediumint(8) unsigned NOT NULL auto_increment,
name varchar(255) default '',
description text default '',
kbtype char default NULL,
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS knwKBRVAL (
id mediumint(8) unsigned NOT NULL auto_increment,
m_key varchar(255) NOT NULL default '',
m_value text NOT NULL default '',
id_knwKB mediumint(8) NOT NULL default '0',
PRIMARY KEY (id),
KEY id_knwKB (id_knwKB),
KEY m_key (m_key(30)),
KEY m_value (m_value(30))
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS knwKBDDEF (
id_knwKB mediumint(8) unsigned NOT NULL,
id_collection mediumint(9),
output_tag text default '',
search_expression text default '',
PRIMARY KEY (id_knwKB)
) ENGINE=MyISAM;
-- tables for WebSubmit:
CREATE TABLE IF NOT EXISTS sbmACTION (
lactname text,
sactname char(3) NOT NULL default '',
dir text,
cd date default NULL,
md date default NULL,
actionbutton text,
statustext text,
PRIMARY KEY (sactname)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmALLFUNCDESCR (
function varchar(40) NOT NULL default '',
description tinytext,
PRIMARY KEY (function)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmAPPROVAL (
doctype varchar(10) NOT NULL default '',
categ varchar(50) NOT NULL default '',
rn varchar(50) NOT NULL default '',
status varchar(10) NOT NULL default '',
dFirstReq datetime NOT NULL default '0000-00-00 00:00:00',
dLastReq datetime NOT NULL default '0000-00-00 00:00:00',
dAction datetime NOT NULL default '0000-00-00 00:00:00',
access varchar(20) NOT NULL default '0',
note text NOT NULL default '',
PRIMARY KEY (rn)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCPLXAPPROVAL (
doctype varchar(10) NOT NULL default '',
categ varchar(50) NOT NULL default '',
rn varchar(50) NOT NULL default '',
type varchar(10) NOT NULL,
status varchar(10) NOT NULL,
id_group int(15) unsigned NOT NULL default '0',
id_bskBASKET int(15) unsigned NOT NULL default '0',
id_EdBoardGroup int(15) unsigned NOT NULL default '0',
dFirstReq datetime NOT NULL default '0000-00-00 00:00:00',
dLastReq datetime NOT NULL default '0000-00-00 00:00:00',
dEdBoardSel datetime NOT NULL default '0000-00-00 00:00:00',
dRefereeSel datetime NOT NULL default '0000-00-00 00:00:00',
dRefereeRecom datetime NOT NULL default '0000-00-00 00:00:00',
dEdBoardRecom datetime NOT NULL default '0000-00-00 00:00:00',
dPubComRecom datetime NOT NULL default '0000-00-00 00:00:00',
dProjectLeaderAction datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (rn, type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCOLLECTION (
id int(11) NOT NULL auto_increment,
name varchar(100) NOT NULL default '',
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCOLLECTION_sbmCOLLECTION (
id_father int(11) NOT NULL default '0',
id_son int(11) NOT NULL default '0',
catalogue_order int(11) NOT NULL default '0'
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCOLLECTION_sbmDOCTYPE (
id_father int(11) NOT NULL default '0',
id_son char(10) NOT NULL default '0',
catalogue_order int(11) NOT NULL default '0'
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCATEGORIES (
doctype varchar(10) NOT NULL default '',
sname varchar(75) NOT NULL default '',
lname varchar(75) NOT NULL default '',
score tinyint unsigned NOT NULL default 0,
PRIMARY KEY (doctype, sname),
KEY doctype (doctype),
KEY sname (sname)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCHECKS (
chname varchar(15) NOT NULL default '',
chdesc text,
cd date default NULL,
md date default NULL,
chefi1 text,
chefi2 text,
PRIMARY KEY (chname)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmDOCTYPE (
ldocname text,
sdocname varchar(10) default NULL,
cd date default NULL,
md date default NULL,
description text
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmFIELD (
subname varchar(13) default NULL,
pagenb int(11) default NULL,
fieldnb int(11) default NULL,
fidesc varchar(15) default NULL,
fitext text,
level char(1) default NULL,
sdesc text,
checkn text,
cd date default NULL,
md date default NULL,
fiefi1 text,
fiefi2 text
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmFIELDDESC (
name varchar(15) NOT NULL default '',
alephcode varchar(50) default NULL,
marccode varchar(50) NOT NULL default '',
type char(1) default NULL,
size int(11) default NULL,
rows int(11) default NULL,
cols int(11) default NULL,
maxlength int(11) default NULL,
val text,
fidesc text,
cd date default NULL,
md date default NULL,
modifytext text,
fddfi2 text,
cookie int(11) default '0',
PRIMARY KEY (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmFORMATEXTENSION (
FILE_FORMAT text NOT NULL,
FILE_EXTENSION text NOT NULL
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmFUNCTIONS (
action varchar(10) NOT NULL default '',
doctype varchar(10) NOT NULL default '',
function varchar(40) NOT NULL default '',
score int(11) NOT NULL default '0',
step tinyint(4) NOT NULL default '1'
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmFUNDESC (
function varchar(40) NOT NULL default '',
param varchar(40) default NULL
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmGFILERESULT (
FORMAT text NOT NULL,
RESULT text NOT NULL
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmIMPLEMENT (
docname varchar(10) default NULL,
actname char(3) default NULL,
displayed char(1) default NULL,
subname varchar(13) default NULL,
nbpg int(11) default NULL,
cd date default NULL,
md date default NULL,
buttonorder int(11) default NULL,
statustext text,
level char(1) NOT NULL default '',
score int(11) NOT NULL default '0',
stpage int(11) NOT NULL default '0',
endtxt varchar(100) NOT NULL default ''
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmPARAMETERS (
doctype varchar(10) NOT NULL default '',
name varchar(40) NOT NULL default '',
value text NOT NULL default '',
PRIMARY KEY (doctype,name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmPUBLICATION (
doctype varchar(10) NOT NULL default '',
categ varchar(50) NOT NULL default '',
rn varchar(50) NOT NULL default '',
status varchar(10) NOT NULL default '',
dFirstReq datetime NOT NULL default '0000-00-00 00:00:00',
dLastReq datetime NOT NULL default '0000-00-00 00:00:00',
dAction datetime NOT NULL default '0000-00-00 00:00:00',
accessref varchar(20) NOT NULL default '',
accessedi varchar(20) NOT NULL default '',
access varchar(20) NOT NULL default '',
referees varchar(50) NOT NULL default '',
authoremail varchar(50) NOT NULL default '',
dRefSelection datetime NOT NULL default '0000-00-00 00:00:00',
dRefRec datetime NOT NULL default '0000-00-00 00:00:00',
dEdiRec datetime NOT NULL default '0000-00-00 00:00:00',
accessspo varchar(20) NOT NULL default '',
journal varchar(100) default NULL,
PRIMARY KEY (doctype,categ,rn)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmPUBLICATIONCOMM (
id int(11) NOT NULL auto_increment,
id_parent int(11) default '0',
rn varchar(100) NOT NULL default '',
firstname varchar(100) default NULL,
secondname varchar(100) default NULL,
email varchar(100) default NULL,
date varchar(40) NOT NULL default '',
synopsis varchar(255) NOT NULL default '',
commentfulltext text,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmPUBLICATIONDATA (
doctype varchar(10) NOT NULL default '',
editoboard varchar(250) NOT NULL default '',
base varchar(10) NOT NULL default '',
logicalbase varchar(10) NOT NULL default '',
spokesperson varchar(50) NOT NULL default '',
PRIMARY KEY (doctype)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmREFEREES (
doctype varchar(10) NOT NULL default '',
categ varchar(10) NOT NULL default '',
name varchar(50) NOT NULL default '',
address varchar(50) NOT NULL default '',
rid int(11) NOT NULL auto_increment,
PRIMARY KEY (rid)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmSUBMISSIONS (
email varchar(50) NOT NULL default '',
doctype varchar(10) NOT NULL default '',
action varchar(10) NOT NULL default '',
status varchar(10) NOT NULL default '',
id varchar(30) NOT NULL default '',
reference varchar(40) NOT NULL default '',
cd datetime NOT NULL default '0000-00-00 00:00:00',
md datetime NOT NULL default '0000-00-00 00:00:00'
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS sbmCOOKIES (
id int(15) unsigned NOT NULL auto_increment,
name varchar(100) NOT NULL,
value text,
uid int(15) NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- WebDeposit tables
CREATE TABLE IF NOT EXISTS depDRAFT (
uuid VARCHAR(36) NOT NULL,
step INTEGER(15) UNSIGNED NOT NULL,
form_type VARCHAR(45) NOT NULL,
form_values TEXT NOT NULL,
status INTEGER(15) UNSIGNED NOT NULL,
timestamp DATETIME NOT NULL,
PRIMARY KEY (uuid, step),
FOREIGN KEY(uuid) REFERENCES depWORKFLOW (uuid)
) ENGINE=MyISAM;
-- Scheduler tables
CREATE TABLE IF NOT EXISTS schTASK (
id int(15) unsigned NOT NULL auto_increment,
proc varchar(255) NOT NULL,
host varchar(255) NOT NULL default '',
user varchar(50) NOT NULL,
runtime datetime NOT NULL,
sleeptime varchar(20),
arguments mediumblob,
status varchar(50),
progress varchar(255),
priority tinyint(4) NOT NULL default 0,
sequenceid int(8) NULL default NULL,
PRIMARY KEY (id),
KEY status (status),
KEY runtime (runtime),
KEY priority (priority),
KEY sequenceid (sequenceid)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS hstTASK (
id int(15) unsigned NOT NULL,
proc varchar(255) NOT NULL,
host varchar(255) NOT NULL default '',
user varchar(50) NOT NULL,
runtime datetime NOT NULL,
sleeptime varchar(20),
arguments mediumblob,
status varchar(50),
progress varchar(255),
priority tinyint(4) NOT NULL default 0,
sequenceid int(8) NULL default NULL,
PRIMARY KEY (id),
KEY status (status),
KEY runtime (runtime),
KEY priority (priority),
KEY sequenceid (sequenceid)
) ENGINE=MyISAM;
-- Batch Upload History
CREATE TABLE IF NOT EXISTS hstBATCHUPLOAD (
id int(15) unsigned NOT NULL auto_increment,
user varchar(50) NOT NULL,
submitdate datetime NOT NULL,
filename varchar(255) NOT NULL,
execdate datetime NOT NULL,
id_schTASK int(15) unsigned NOT NULL,
batch_mode varchar(15) NOT NULL,
PRIMARY KEY (id),
KEY user (user)
) ENGINE=MyISAM;
-- External collections
CREATE TABLE IF NOT EXISTS collection_externalcollection (
id_collection mediumint(9) unsigned NOT NULL default '0',
id_externalcollection mediumint(9) unsigned NOT NULL default '0',
type tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection, id_externalcollection)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS externalcollection (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL default '',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
-- WebStat tables:
CREATE TABLE IF NOT EXISTS staEVENT (
id varchar(255) NOT NULL,
number smallint(2) unsigned ZEROFILL NOT NULL auto_increment,
name varchar(255),
creation_time TIMESTAMP DEFAULT NOW(),
cols varchar(255),
PRIMARY KEY (id),
UNIQUE KEY number (number)
) ENGINE=MyISAM;
-- BibClassify tables:
CREATE TABLE IF NOT EXISTS clsMETHOD (
id mediumint(9) unsigned NOT NULL,
name varchar(50) NOT NULL default '',
location varchar(255) NOT NULL default '',
description varchar(255) NOT NULL default '',
last_updated datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS collection_clsMETHOD (
id_collection mediumint(9) unsigned NOT NULL,
id_clsMETHOD mediumint(9) unsigned NOT NULL,
PRIMARY KEY (id_collection, id_clsMETHOD)
) ENGINE=MyISAM;
-- WebJournal tables:
CREATE TABLE IF NOT EXISTS jrnJOURNAL (
id mediumint(9) unsigned NOT NULL auto_increment,
name varchar(50) NOT NULL default '',
PRIMARY KEY (id),
UNIQUE KEY name (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS jrnISSUE (
id_jrnJOURNAL mediumint(9) unsigned NOT NULL,
issue_number varchar(50) NOT NULL default '',
issue_display varchar(50) NOT NULL default '',
date_released datetime NOT NULL default '0000-00-00 00:00:00',
date_announced datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id_jrnJOURNAL,issue_number)
) ENGINE=MyISAM;
-- tables recording the history of records' metadata and fulltext documents:
CREATE TABLE IF NOT EXISTS hstRECORD (
id_bibrec mediumint(8) unsigned NOT NULL,
marcxml longblob NOT NULL,
job_id mediumint(15) unsigned NOT NULL,
job_name varchar(255) NOT NULL,
job_person varchar(255) NOT NULL,
job_date datetime NOT NULL,
job_details blob NOT NULL,
+ affected_fields text NOT NULL default '',
KEY (id_bibrec),
KEY (job_id),
KEY (job_name),
KEY (job_person),
KEY (job_date)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS hstDOCUMENT (
id_bibdoc mediumint(9) unsigned NOT NULL,
docname varchar(250) NOT NULL,
docformat varchar(50) NOT NULL,
docversion tinyint(4) unsigned NOT NULL,
docsize bigint(15) unsigned NOT NULL,
docchecksum char(32) NOT NULL,
doctimestamp datetime NOT NULL,
action varchar(50) NOT NULL,
job_id mediumint(15) unsigned NULL default NULL,
job_name varchar(255) NULL default NULL,
job_person varchar(255) NULL default NULL,
job_date datetime NULL default NULL,
job_details blob NULL default NULL,
KEY (action),
KEY (id_bibdoc),
KEY (docname),
KEY (docformat),
KEY (doctimestamp),
KEY (job_id),
KEY (job_name),
KEY (job_person),
KEY (job_date)
) ENGINE=MyISAM;
-- BibCirculation tables:
CREATE TABLE IF NOT EXISTS crcBORROWER (
id int(15) unsigned NOT NULL auto_increment,
ccid int(15) unsigned NULL default NULL,
name varchar(255) NOT NULL default '',
email varchar(255) NOT NULL default '',
phone varchar(60) default NULL,
address varchar(60) default NULL,
mailbox varchar(30) default NULL,
borrower_since datetime NOT NULL default '0000-00-00 00:00:00',
borrower_until datetime NOT NULL default '0000-00-00 00:00:00',
notes text,
PRIMARY KEY (id),
UNIQUE KEY (ccid),
KEY (name),
KEY (email)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcILLREQUEST (
id int(15) unsigned NOT NULL auto_increment,
id_crcBORROWER int(15) unsigned NOT NULL default '0',
barcode varchar(30) NOT NULL default '',
period_of_interest_from datetime NOT NULL default '0000-00-00 00:00:00',
period_of_interest_to datetime NOT NULL default '0000-00-00 00:00:00',
id_crcLIBRARY int(15) unsigned NOT NULL default '0',
request_date datetime NOT NULL default '0000-00-00 00:00:00',
expected_date datetime NOT NULL default '0000-00-00 00:00:00',
arrival_date datetime NOT NULL default '0000-00-00 00:00:00',
due_date datetime NOT NULL default '0000-00-00 00:00:00',
return_date datetime NOT NULL default '0000-00-00 00:00:00',
status varchar(20) NOT NULL default '',
cost varchar(30) NOT NULL default '',
budget_code varchar(60) NOT NULL default '',
item_info text,
request_type text,
borrower_comments text,
only_this_edition varchar(10) NOT NULL default '',
library_notes text,
overdue_letter_number int(3) unsigned NOT NULL default '0',
overdue_letter_date datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id),
KEY id_crcborrower (id_crcBORROWER),
KEY id_crclibrary (id_crcLIBRARY)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcITEM (
barcode varchar(30) NOT NULL default '',
id_bibrec int(15) unsigned NOT NULL default '0',
id_crcLIBRARY int(15) unsigned NOT NULL default '0',
collection varchar(60) default NULL,
location varchar(60) default NULL,
description varchar(60) default NULL,
loan_period varchar(30) NOT NULL default '',
status varchar(20) NOT NULL default '',
expected_arrival_date varchar(60) NOT NULL default '',
creation_date datetime NOT NULL default '0000-00-00 00:00:00',
modification_date datetime NOT NULL default '0000-00-00 00:00:00',
number_of_requests int(3) unsigned NOT NULL default '0',
PRIMARY KEY (barcode),
KEY id_bibrec (id_bibrec),
KEY id_crclibrary (id_crcLIBRARY)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcLIBRARY (
id int(15) unsigned NOT NULL auto_increment,
name varchar(80) NOT NULL default '',
address varchar(255) NOT NULL default '',
email varchar(255) NOT NULL default '',
phone varchar(30) NOT NULL default '',
- type varchar(30) default NULL,
+ type varchar(30) NOT NULL default 'main',
notes text,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcLOAN (
id int(15) unsigned NOT NULL auto_increment,
id_crcBORROWER int(15) unsigned NOT NULL default '0',
id_bibrec int(15) unsigned NOT NULL default '0',
barcode varchar(30) NOT NULL default '',
loaned_on datetime NOT NULL default '0000-00-00 00:00:00',
returned_on date NOT NULL default '0000-00-00',
due_date datetime NOT NULL default '0000-00-00 00:00:00',
number_of_renewals int(3) unsigned NOT NULL default '0',
overdue_letter_number int(3) unsigned NOT NULL default '0',
overdue_letter_date datetime NOT NULL default '0000-00-00 00:00:00',
status varchar(20) NOT NULL default '',
type varchar(20) NOT NULL default '',
notes text,
PRIMARY KEY (id),
KEY id_crcborrower (id_crcBORROWER),
KEY id_bibrec (id_bibrec),
KEY barcode (barcode)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcLOANREQUEST (
id int(15) unsigned NOT NULL auto_increment,
id_crcBORROWER int(15) unsigned NOT NULL default '0',
id_bibrec int(15) unsigned NOT NULL default '0',
barcode varchar(30) NOT NULL default '',
period_of_interest_from datetime NOT NULL default '0000-00-00 00:00:00',
period_of_interest_to datetime NOT NULL default '0000-00-00 00:00:00',
status varchar(20) NOT NULL default '',
notes text,
request_date datetime NOT NULL default '0000-00-00 00:00:00',
PRIMARY KEY (id),
KEY id_crcborrower (id_crcBORROWER),
KEY id_bibrec (id_bibrec),
KEY barcode (barcode)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcPURCHASE (
id int(15) unsigned NOT NULL auto_increment,
id_bibrec int(15) unsigned NOT NULL default '0',
id_crcVENDOR int(15) unsigned NOT NULL default '0',
ordered_date datetime NOT NULL default '0000-00-00 00:00:00',
expected_date datetime NOT NULL default '0000-00-00 00:00:00',
price varchar(20) NOT NULL default '0',
status varchar(20) NOT NULL default '',
notes text,
PRIMARY KEY (id),
KEY id_bibrec (id_bibrec),
KEY id_crcVENDOR (id_crcVENDOR)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS crcVENDOR (
id int(15) unsigned NOT NULL auto_increment,
name varchar(80) NOT NULL default '',
address varchar(255) NOT NULL default '',
email varchar(255) NOT NULL default '',
phone varchar(30) NOT NULL default '',
notes text,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- BibExport tables:
CREATE TABLE IF NOT EXISTS expJOB (
id int(15) unsigned NOT NULL auto_increment,
jobname varchar(50) NOT NULL default '',
jobfreq mediumint(12) NOT NULL default '0',
output_format mediumint(12) NOT NULL default '0',
deleted mediumint(12) NOT NULL default '0',
lastrun datetime NOT NULL default '0000-00-00 00:00:00',
output_directory text,
PRIMARY KEY (id),
UNIQUE KEY jobname (jobname)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS expQUERY (
id int(15) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
search_criteria text NOT NULL,
output_fields text NOT NULL,
notes text,
deleted mediumint(12) NOT NULL default '0',
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS expJOB_expQUERY (
id_expJOB int(15) NOT NULL,
id_expQUERY int(15) NOT NULL,
PRIMARY KEY (id_expJOB,id_expQUERY),
KEY id_expJOB (id_expJOB),
KEY id_expQUERY (id_expQUERY)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS expQUERYRESULT (
id int(15) unsigned NOT NULL auto_increment,
id_expQUERY int(15) NOT NULL,
result text NOT NULL,
status mediumint(12) NOT NULL default '0',
status_message text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS expJOBRESULT (
id int(15) unsigned NOT NULL auto_increment,
id_expJOB int(15) NOT NULL,
execution_time datetime NOT NULL default '0000-00-00 00:00:00',
status mediumint(12) NOT NULL default '0',
status_message text NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS expJOBRESULT_expQUERYRESULT (
id_expJOBRESULT int(15) NOT NULL,
id_expQUERYRESULT int(15) NOT NULL,
PRIMARY KEY (id_expJOBRESULT, id_expQUERYRESULT),
KEY id_expJOBRESULT (id_expJOBRESULT),
KEY id_expQUERYRESULT (id_expQUERYRESULT)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS user_expJOB (
id_user int(15) NOT NULL,
id_expJOB int(15) NOT NULL,
PRIMARY KEY (id_user, id_expJOB),
KEY id_user (id_user),
KEY id_expJOB (id_expJOB)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS swrREMOTESERVER (
id int(15) unsigned NOT NULL auto_increment,
name varchar(50) unique NOT NULL,
host varchar(50) NOT NULL,
username varchar(50) NOT NULL,
password varchar(50) NOT NULL,
email varchar(50) NOT NULL,
realm varchar(50) NOT NULL,
url_base_record varchar(50) NOT NULL,
url_servicedocument varchar(80) NOT NULL,
xml_servicedocument longblob,
last_update int(15) unsigned NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS swrCLIENTDATA (
id int(15) unsigned NOT NULL auto_increment,
id_swrREMOTESERVER int(15) NOT NULL,
id_record int(15) NOT NULL,
report_no varchar(50) NOT NULL,
id_remote varchar(50) NOT NULL,
id_user int(15) NOT NULL,
user_name varchar(100) NOT NULL,
user_email varchar(100) NOT NULL,
xml_media_deposit longblob NOT NULL,
xml_metadata_submit longblob NOT NULL,
submission_date datetime NOT NULL default '0000-00-00 00:00:00',
publication_date datetime NOT NULL default '0000-00-00 00:00:00',
removal_date datetime NOT NULL default '0000-00-00 00:00:00',
link_medias varchar(150) NOT NULL,
link_metadata varchar(150) NOT NULL,
link_status varchar(150) NOT NULL,
status varchar(150) NOT NULL default 'submitted',
last_update datetime NOT NULL,
PRIMARY KEY (id)
) ENGINE=MyISAM;
-- tables for exception management
-- This table is used to log exceptions.
-- To discover the full details of an exception, either check the emails
-- that are sent to CFG_SITE_ADMIN_EMAIL or look into invenio.err
CREATE TABLE IF NOT EXISTS hstEXCEPTION (
id int(15) unsigned NOT NULL auto_increment,
name varchar(50) NOT NULL, -- name of the exception
filename varchar(255) NULL, -- file where the exception was raised
line int(9) NULL, -- line at which the exception was raised
last_seen datetime NOT NULL default '0000-00-00 00:00:00', -- last time this exception has been seen
last_notified datetime NOT NULL default '0000-00-00 00:00:00', -- last time this exception has been notified
counter int(15) NOT NULL default 0, -- internal counter to decide when to notify this exception
total int(15) NOT NULL default 0, -- total number of times this exception has been seen
PRIMARY KEY (id),
KEY (last_seen),
KEY (last_notified),
KEY (total),
UNIQUE KEY (name(50), filename(255), line)
) ENGINE=MyISAM;
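-- The column comments above say that `counter` decides when to notify while
-- `total` keeps the lifetime count. The actual notification policy lives in
-- Invenio's Python code, not in this schema; the sketch below is a hypothetical
-- in-memory model of one hstEXCEPTION row (the NOTIFY_EVERY threshold is an
-- assumption, not an Invenio constant), shown only to illustrate how the two
-- counters can diverge.

```python
from datetime import datetime

# Hypothetical in-memory model of one hstEXCEPTION row; the real
# notification policy lives in Invenio's Python code, not in this schema.
NOTIFY_EVERY = 10  # assumed threshold, not an Invenio constant

row = {"counter": 0, "total": 0, "last_seen": None, "last_notified": None}

def record_exception(row, now=None):
    """Register one occurrence; return True when a notification is due."""
    now = now or datetime.now()
    row["counter"] += 1
    row["total"] += 1
    row["last_seen"] = now
    if row["counter"] >= NOTIFY_EVERY:
        row["counter"] = 0          # reset the internal counter...
        row["last_notified"] = now  # ...while `total` keeps the lifetime count
        return True
    return False

notified = sum(record_exception(row) for _ in range(25))
print(notified, row["total"])  # 2 notifications over 25 occurrences
```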
-- tables for BibAuthorID module:
CREATE TABLE IF NOT EXISTS `aidPERSONIDPAPERS` (
`personid` BIGINT( 16 ) UNSIGNED NOT NULL ,
`bibref_table` ENUM( '100', '700' ) NOT NULL ,
`bibref_value` MEDIUMINT( 8 ) UNSIGNED NOT NULL ,
`bibrec` MEDIUMINT( 8 ) UNSIGNED NOT NULL ,
`name` VARCHAR( 256 ) NOT NULL ,
`flag` SMALLINT( 2 ) NOT NULL DEFAULT '0' ,
`lcul` SMALLINT( 2 ) NOT NULL DEFAULT '0' ,
`last_updated` TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
INDEX `personid-b` (`personid`) ,
INDEX `reftable-b` (`bibref_table`) ,
INDEX `refvalue-b` (`bibref_value`) ,
INDEX `rec-b` (`bibrec`) ,
INDEX `name-b` (`name`) ,
INDEX `pn-b` (`personid`, `name`) ,
INDEX `timestamp-b` (`last_updated`) ,
INDEX `flag-b` (`flag`) ,
INDEX `ptvrf-b` (`personid`, `bibref_table`, `bibref_value`, `bibrec`, `flag`)
) ENGINE=MYISAM;
CREATE TABLE IF NOT EXISTS `aidRESULTS` (
`personid` VARCHAR( 256 ) NOT NULL ,
`bibref_table` ENUM( '100', '700' ) NOT NULL ,
`bibref_value` MEDIUMINT( 8 ) UNSIGNED NOT NULL ,
`bibrec` MEDIUMINT( 8 ) UNSIGNED NOT NULL ,
INDEX `personid-b` (`personid`) ,
INDEX `reftable-b` (`bibref_table`) ,
INDEX `refvalue-b` (`bibref_value`) ,
INDEX `rec-b` (`bibrec`)
) ENGINE=MYISAM;
CREATE TABLE IF NOT EXISTS `aidPERSONIDDATA` (
`personid` BIGINT( 16 ) UNSIGNED NOT NULL ,
`tag` VARCHAR( 64 ) NOT NULL ,
`data` VARCHAR( 256 ) NOT NULL ,
`opt1` MEDIUMINT( 8 ) NULL DEFAULT NULL ,
`opt2` MEDIUMINT( 8 ) NULL DEFAULT NULL ,
`opt3` VARCHAR( 256 ) NULL DEFAULT NULL ,
INDEX `personid-b` (`personid`) ,
INDEX `tag-b` (`tag`) ,
INDEX `data-b` (`data`) ,
INDEX `opt1` (`opt1`)
) ENGINE=MYISAM;
CREATE TABLE IF NOT EXISTS `aidUSERINPUTLOG` (
`id` bigint(15) NOT NULL AUTO_INCREMENT,
`transactionid` bigint(15) NOT NULL,
`timestamp` datetime NOT NULL,
`userid` int,
`userinfo` varchar(255) NOT NULL,
`personid` bigint(15) NOT NULL,
`action` varchar(50) NOT NULL,
`tag` varchar(50) NOT NULL,
`value` varchar(200) NOT NULL,
`comment` text,
PRIMARY KEY (`id`),
INDEX `transactionid-b` (`transactionid`),
INDEX `timestamp-b` (`timestamp`),
INDEX `userinfo-b` (`userinfo`),
INDEX `userid-b` (`userid`),
INDEX `personid-b` (`personid`),
INDEX `action-b` (`action`),
INDEX `tag-b` (`tag`),
INDEX `value-b` (`value`)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS `aidCACHE` (
`id` int(15) NOT NULL auto_increment,
`object_name` varchar(120) NOT NULL,
`object_key` varchar(120) NOT NULL,
`object_value` text,
`last_updated` datetime NOT NULL,
PRIMARY KEY (`id`),
INDEX `name-b` (`object_name`),
INDEX `key-b` (`object_key`),
INDEX `last_updated-b` (`last_updated`)
) ENGINE=MyISAM;
-- refextract tables:
CREATE TABLE IF NOT EXISTS `xtrJOB` (
`id` tinyint(4) NOT NULL AUTO_INCREMENT,
`name` varchar(30) NOT NULL,
`last_updated` datetime NOT NULL,
`last_recid` mediumint(8) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM;
-- tables for bibsort module
CREATE TABLE IF NOT EXISTS bsrMETHOD (
id mediumint(8) unsigned NOT NULL auto_increment,
name varchar(20) NOT NULL,
definition varchar(255) NOT NULL,
washer varchar(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY (name)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bsrMETHODNAME (
id_bsrMETHOD mediumint(8) unsigned NOT NULL,
ln char(5) NOT NULL default '',
type char(3) NOT NULL default 'sn',
value varchar(255) NOT NULL,
PRIMARY KEY (id_bsrMETHOD, ln, type)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bsrMETHODDATA (
id_bsrMETHOD mediumint(8) unsigned NOT NULL,
data_dict longblob,
data_dict_ordered longblob,
data_list_sorted longblob,
last_updated datetime,
PRIMARY KEY (id_bsrMETHOD)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS bsrMETHODDATABUCKET (
id_bsrMETHOD mediumint(8) unsigned NOT NULL,
bucket_no tinyint(2) NOT NULL,
bucket_data longblob,
bucket_last_value varchar(255),
last_updated datetime,
PRIMARY KEY (id_bsrMETHOD, bucket_no)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS collection_bsrMETHOD (
id_collection mediumint(9) unsigned NOT NULL,
id_bsrMETHOD mediumint(9) unsigned NOT NULL,
score tinyint(4) unsigned NOT NULL default '0',
PRIMARY KEY (id_collection, id_bsrMETHOD)
) ENGINE=MyISAM;
-- tables for sequence storage
CREATE TABLE IF NOT EXISTS seqSTORE (
id int(15) NOT NULL auto_increment,
seq_name varchar(15),
seq_value varchar(20),
PRIMARY KEY (id),
UNIQUE KEY seq_name_value (seq_name, seq_value)
) ENGINE=MyISAM;
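-- The seqSTORE table above relies on the UNIQUE KEY (seq_name, seq_value):
-- inserting an already-claimed value fails, so a sequence generator can retry
-- with the next candidate and two concurrent callers can never claim the same
-- value. A minimal sketch of that pattern, using SQLite in place of
-- MySQL/MyISAM for illustration (the retry loop is an assumption about usage,
-- not Invenio's actual sequence code):

```python
import sqlite3

# Illustrative SQLite stand-in for the seqSTORE schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE seqSTORE (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        seq_name VARCHAR(15),
        seq_value VARCHAR(20),
        UNIQUE (seq_name, seq_value)
    )
""")

def next_value(name, start=1):
    """Claim the next free integer value for the named sequence.

    The UNIQUE (seq_name, seq_value) constraint guarantees that a value
    can be claimed only once: a losing caller gets an IntegrityError
    and retries with the next candidate.
    """
    candidate = start
    while True:
        try:
            conn.execute(
                "INSERT INTO seqSTORE (seq_name, seq_value) VALUES (?, ?)",
                (name, str(candidate)),
            )
            return candidate
        except sqlite3.IntegrityError:
            candidate += 1

print(next_value("report"))  # first call claims 1
print(next_value("report"))  # second call retries past 1 and claims 2
```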
-- tables for linkbacks:
CREATE TABLE IF NOT EXISTS lnkENTRY (
id int(15) NOT NULL auto_increment,
origin_url varchar(100) NOT NULL, -- url of the originating resource
id_bibrec mediumint(8) unsigned NOT NULL, -- bibrecord
additional_properties longblob,
type varchar(30) NOT NULL,
status varchar(30) NOT NULL default 'PENDING',
insert_time datetime default '0000-00-00 00:00:00',
PRIMARY KEY (id),
INDEX (id_bibrec),
INDEX (type),
INDEX (status),
INDEX (insert_time)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS lnkENTRYURLTITLE (
id int(15) unsigned NOT NULL auto_increment,
url varchar(100) NOT NULL,
title varchar(100) NOT NULL,
manual_set boolean NOT NULL default 0,
broken_count int(5) default 0,
broken boolean NOT NULL default 0,
PRIMARY KEY (id),
UNIQUE (url),
INDEX (title)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS lnkENTRYLOG (
id_lnkENTRY int(15) unsigned NOT NULL,
id_lnkLOG int(15) unsigned NOT NULL,
FOREIGN KEY (id_lnkENTRY) REFERENCES lnkENTRY(id),
FOREIGN KEY (id_lnkLOG) REFERENCES lnkLOG(id)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS lnkLOG (
id int(15) unsigned NOT NULL auto_increment,
id_user int(15) unsigned,
action varchar(30) NOT NULL,
log_time datetime default '0000-00-00 00:00:00',
PRIMARY KEY (id),
INDEX (id_user),
INDEX (action),
INDEX (log_time)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS lnkADMINURL (
id int(15) unsigned NOT NULL auto_increment,
url varchar(100) NOT NULL,
list varchar(30) NOT NULL,
PRIMARY KEY (id),
UNIQUE (url),
INDEX (list)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS lnkADMINURLLOG (
id_lnkADMINURL int(15) unsigned NOT NULL,
id_lnkLOG int(15) unsigned NOT NULL,
FOREIGN KEY (id_lnkADMINURL) REFERENCES lnkADMINURL(id),
FOREIGN KEY (id_lnkLOG) REFERENCES lnkLOG(id)
) ENGINE=MyISAM;
-- table for API key
CREATE TABLE IF NOT EXISTS webapikey (
id varchar(150) NOT NULL,
secret varchar(150) NOT NULL,
id_user int(15) NOT NULL,
status varchar(25) NOT NULL default 'OK',
description varchar(255) default NULL,
PRIMARY KEY (id),
KEY (id_user),
KEY (status)
) ENGINE=MyISAM;
CREATE TABLE IF NOT EXISTS `wapCACHE` (
`object_name` varchar(120) NOT NULL,
`object_key` varchar(120) NOT NULL,
`object_value` longtext,
`object_status` varchar(120),
`last_updated` datetime NOT NULL,
PRIMARY KEY (`object_name`,`object_key`),
INDEX `last_updated-b` (`last_updated`),
INDEX `status-b` (`object_status`)
) ENGINE=MyISAM;
-- tables for goto:
CREATE TABLE IF NOT EXISTS goto (
label varchar(150) NOT NULL,
plugin varchar(150) NOT NULL,
parameters text NOT NULL,
creation_date datetime NOT NULL,
modification_date datetime NOT NULL,
PRIMARY KEY (label),
KEY (creation_date),
KEY (modification_date)
) ENGINE=MyISAM;
-- tables for invenio_upgrader
CREATE TABLE IF NOT EXISTS upgrade (
upgrade varchar(255) NOT NULL,
applied DATETIME NOT NULL,
PRIMARY KEY (upgrade)
) ENGINE=MyISAM;
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_release_1_1_0',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_10_31_tablesorter_location',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_01_lower_user_email',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_21_aiduserinputlog_userid_check',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_15_hstRECORD_marcxml_longblob',NOW());
-- master upgrade recipes:
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_10_29_idxINDEX_new_indexer_column',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_04_circulation_and_linkback_updates',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_07_xtrjob_last_recid',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_27_new_selfcite_tables',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_12_11_new_citation_errors_table',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_01_08_new_goto_table',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2012_11_15_bibdocfile_model',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_02_01_oaiREPOSITORY_last_updated',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_07_crcILLREQUEST_overdue_letter',NOW());
INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_01_12_bibrec_master_format',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_06_11_rnkDOWNLOADS_file_format',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_20_idxINDEX_synonym_kb',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_21_idxINDEX_stopwords',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_25_idxINDEX_html_markup',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_28_idxINDEX_tokenizer',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_03_29_idxINDEX_stopwords_update',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_08_20_bibauthority_updates',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_08_22_new_index_itemcount',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_10_18_crcLIBRARY_type',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_10_18_new_index_filetype',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_10_25_delete_recjson_cache',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_08_22_hstRECORD_affected_fields',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_09_25_virtual_indexes',NOW());
+INSERT INTO upgrade (upgrade, applied) VALUES ('invenio_2013_09_30_indexer_interface',NOW());
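-- The upgrade table and the INSERT statements above record which upgrade
-- recipes have already been applied, so the upgrader can skip them on later
-- runs. A minimal sketch of that idempotency check, using SQLite for
-- illustration (the apply_upgrade helper is hypothetical, not the actual
-- invenio_upgrader API):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE upgrade (
        upgrade VARCHAR(255) NOT NULL PRIMARY KEY,
        applied DATETIME NOT NULL
    )
""")

def apply_upgrade(name, recipe):
    """Run `recipe` only if `name` is not yet recorded in the upgrade table."""
    already = conn.execute(
        "SELECT 1 FROM upgrade WHERE upgrade = ?", (name,)
    ).fetchone()
    if already:
        return False  # recipe was applied on an earlier run; skip it
    recipe()
    conn.execute(
        "INSERT INTO upgrade (upgrade, applied) VALUES (?, ?)",
        (name, datetime.now().isoformat(" ")),
    )
    return True

runs = []
apply_upgrade("invenio_release_1_1_0", lambda: runs.append("v1.1.0"))
apply_upgrade("invenio_release_1_1_0", lambda: runs.append("v1.1.0"))
print(runs)  # the recipe ran exactly once despite two calls
```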
-- end of file
diff --git a/modules/miscutil/sql/tabdrop.sql b/modules/miscutil/sql/tabdrop.sql
index a601364ff..ceb30594b 100644
--- a/modules/miscutil/sql/tabdrop.sql
+++ b/modules/miscutil/sql/tabdrop.sql
@@ -1,500 +1,543 @@
-- $Id$
-- This file is part of Invenio.
-- Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
--
-- Invenio is free software; you can redistribute it and/or
-- modify it under the terms of the GNU General Public License as
-- published by the Free Software Foundation; either version 2 of the
-- License, or (at your option) any later version.
--
-- Invenio is distributed in the hope that it will be useful, but
-- WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- General Public License for more details.
--
-- You should have received a copy of the GNU General Public License
-- along with Invenio; if not, write to the Free Software Foundation, Inc.,
-- 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
DROP TABLE IF EXISTS bibrec;
DROP TABLE IF EXISTS bib00x;
DROP TABLE IF EXISTS bib01x;
DROP TABLE IF EXISTS bib02x;
DROP TABLE IF EXISTS bib03x;
DROP TABLE IF EXISTS bib04x;
DROP TABLE IF EXISTS bib05x;
DROP TABLE IF EXISTS bib06x;
DROP TABLE IF EXISTS bib07x;
DROP TABLE IF EXISTS bib08x;
DROP TABLE IF EXISTS bib09x;
DROP TABLE IF EXISTS bib10x;
DROP TABLE IF EXISTS bib11x;
DROP TABLE IF EXISTS bib12x;
DROP TABLE IF EXISTS bib13x;
DROP TABLE IF EXISTS bib14x;
DROP TABLE IF EXISTS bib15x;
DROP TABLE IF EXISTS bib16x;
DROP TABLE IF EXISTS bib17x;
DROP TABLE IF EXISTS bib18x;
DROP TABLE IF EXISTS bib19x;
DROP TABLE IF EXISTS bib20x;
DROP TABLE IF EXISTS bib21x;
DROP TABLE IF EXISTS bib22x;
DROP TABLE IF EXISTS bib23x;
DROP TABLE IF EXISTS bib24x;
DROP TABLE IF EXISTS bib25x;
DROP TABLE IF EXISTS bib26x;
DROP TABLE IF EXISTS bib27x;
DROP TABLE IF EXISTS bib28x;
DROP TABLE IF EXISTS bib29x;
DROP TABLE IF EXISTS bib30x;
DROP TABLE IF EXISTS bib31x;
DROP TABLE IF EXISTS bib32x;
DROP TABLE IF EXISTS bib33x;
DROP TABLE IF EXISTS bib34x;
DROP TABLE IF EXISTS bib35x;
DROP TABLE IF EXISTS bib36x;
DROP TABLE IF EXISTS bib37x;
DROP TABLE IF EXISTS bib38x;
DROP TABLE IF EXISTS bib39x;
DROP TABLE IF EXISTS bib40x;
DROP TABLE IF EXISTS bib41x;
DROP TABLE IF EXISTS bib42x;
DROP TABLE IF EXISTS bib43x;
DROP TABLE IF EXISTS bib44x;
DROP TABLE IF EXISTS bib45x;
DROP TABLE IF EXISTS bib46x;
DROP TABLE IF EXISTS bib47x;
DROP TABLE IF EXISTS bib48x;
DROP TABLE IF EXISTS bib49x;
DROP TABLE IF EXISTS bib50x;
DROP TABLE IF EXISTS bib51x;
DROP TABLE IF EXISTS bib52x;
DROP TABLE IF EXISTS bib53x;
DROP TABLE IF EXISTS bib54x;
DROP TABLE IF EXISTS bib55x;
DROP TABLE IF EXISTS bib56x;
DROP TABLE IF EXISTS bib57x;
DROP TABLE IF EXISTS bib58x;
DROP TABLE IF EXISTS bib59x;
DROP TABLE IF EXISTS bib60x;
DROP TABLE IF EXISTS bib61x;
DROP TABLE IF EXISTS bib62x;
DROP TABLE IF EXISTS bib63x;
DROP TABLE IF EXISTS bib64x;
DROP TABLE IF EXISTS bib65x;
DROP TABLE IF EXISTS bib66x;
DROP TABLE IF EXISTS bib67x;
DROP TABLE IF EXISTS bib68x;
DROP TABLE IF EXISTS bib69x;
DROP TABLE IF EXISTS bib70x;
DROP TABLE IF EXISTS bib71x;
DROP TABLE IF EXISTS bib72x;
DROP TABLE IF EXISTS bib73x;
DROP TABLE IF EXISTS bib74x;
DROP TABLE IF EXISTS bib75x;
DROP TABLE IF EXISTS bib76x;
DROP TABLE IF EXISTS bib77x;
DROP TABLE IF EXISTS bib78x;
DROP TABLE IF EXISTS bib79x;
DROP TABLE IF EXISTS bib80x;
DROP TABLE IF EXISTS bib81x;
DROP TABLE IF EXISTS bib82x;
DROP TABLE IF EXISTS bib83x;
DROP TABLE IF EXISTS bib84x;
DROP TABLE IF EXISTS bib85x;
DROP TABLE IF EXISTS bib86x;
DROP TABLE IF EXISTS bib87x;
DROP TABLE IF EXISTS bib88x;
DROP TABLE IF EXISTS bib89x;
DROP TABLE IF EXISTS bib90x;
DROP TABLE IF EXISTS bib91x;
DROP TABLE IF EXISTS bib92x;
DROP TABLE IF EXISTS bib93x;
DROP TABLE IF EXISTS bib94x;
DROP TABLE IF EXISTS bib95x;
DROP TABLE IF EXISTS bib96x;
DROP TABLE IF EXISTS bib97x;
DROP TABLE IF EXISTS bib98x;
DROP TABLE IF EXISTS bib99x;
DROP TABLE IF EXISTS bibrec_bib00x;
DROP TABLE IF EXISTS bibrec_bib01x;
DROP TABLE IF EXISTS bibrec_bib02x;
DROP TABLE IF EXISTS bibrec_bib03x;
DROP TABLE IF EXISTS bibrec_bib04x;
DROP TABLE IF EXISTS bibrec_bib05x;
DROP TABLE IF EXISTS bibrec_bib06x;
DROP TABLE IF EXISTS bibrec_bib07x;
DROP TABLE IF EXISTS bibrec_bib08x;
DROP TABLE IF EXISTS bibrec_bib09x;
DROP TABLE IF EXISTS bibrec_bib10x;
DROP TABLE IF EXISTS bibrec_bib11x;
DROP TABLE IF EXISTS bibrec_bib12x;
DROP TABLE IF EXISTS bibrec_bib13x;
DROP TABLE IF EXISTS bibrec_bib14x;
DROP TABLE IF EXISTS bibrec_bib15x;
DROP TABLE IF EXISTS bibrec_bib16x;
DROP TABLE IF EXISTS bibrec_bib17x;
DROP TABLE IF EXISTS bibrec_bib18x;
DROP TABLE IF EXISTS bibrec_bib19x;
DROP TABLE IF EXISTS bibrec_bib20x;
DROP TABLE IF EXISTS bibrec_bib21x;
DROP TABLE IF EXISTS bibrec_bib22x;
DROP TABLE IF EXISTS bibrec_bib23x;
DROP TABLE IF EXISTS bibrec_bib24x;
DROP TABLE IF EXISTS bibrec_bib25x;
DROP TABLE IF EXISTS bibrec_bib26x;
DROP TABLE IF EXISTS bibrec_bib27x;
DROP TABLE IF EXISTS bibrec_bib28x;
DROP TABLE IF EXISTS bibrec_bib29x;
DROP TABLE IF EXISTS bibrec_bib30x;
DROP TABLE IF EXISTS bibrec_bib31x;
DROP TABLE IF EXISTS bibrec_bib32x;
DROP TABLE IF EXISTS bibrec_bib33x;
DROP TABLE IF EXISTS bibrec_bib34x;
DROP TABLE IF EXISTS bibrec_bib35x;
DROP TABLE IF EXISTS bibrec_bib36x;
DROP TABLE IF EXISTS bibrec_bib37x;
DROP TABLE IF EXISTS bibrec_bib38x;
DROP TABLE IF EXISTS bibrec_bib39x;
DROP TABLE IF EXISTS bibrec_bib40x;
DROP TABLE IF EXISTS bibrec_bib41x;
DROP TABLE IF EXISTS bibrec_bib42x;
DROP TABLE IF EXISTS bibrec_bib43x;
DROP TABLE IF EXISTS bibrec_bib44x;
DROP TABLE IF EXISTS bibrec_bib45x;
DROP TABLE IF EXISTS bibrec_bib46x;
DROP TABLE IF EXISTS bibrec_bib47x;
DROP TABLE IF EXISTS bibrec_bib48x;
DROP TABLE IF EXISTS bibrec_bib49x;
DROP TABLE IF EXISTS bibrec_bib50x;
DROP TABLE IF EXISTS bibrec_bib51x;
DROP TABLE IF EXISTS bibrec_bib52x;
DROP TABLE IF EXISTS bibrec_bib53x;
DROP TABLE IF EXISTS bibrec_bib54x;
DROP TABLE IF EXISTS bibrec_bib55x;
DROP TABLE IF EXISTS bibrec_bib56x;
DROP TABLE IF EXISTS bibrec_bib57x;
DROP TABLE IF EXISTS bibrec_bib58x;
DROP TABLE IF EXISTS bibrec_bib59x;
DROP TABLE IF EXISTS bibrec_bib60x;
DROP TABLE IF EXISTS bibrec_bib61x;
DROP TABLE IF EXISTS bibrec_bib62x;
DROP TABLE IF EXISTS bibrec_bib63x;
DROP TABLE IF EXISTS bibrec_bib64x;
DROP TABLE IF EXISTS bibrec_bib65x;
DROP TABLE IF EXISTS bibrec_bib66x;
DROP TABLE IF EXISTS bibrec_bib67x;
DROP TABLE IF EXISTS bibrec_bib68x;
DROP TABLE IF EXISTS bibrec_bib69x;
DROP TABLE IF EXISTS bibrec_bib70x;
DROP TABLE IF EXISTS bibrec_bib71x;
DROP TABLE IF EXISTS bibrec_bib72x;
DROP TABLE IF EXISTS bibrec_bib73x;
DROP TABLE IF EXISTS bibrec_bib74x;
DROP TABLE IF EXISTS bibrec_bib75x;
DROP TABLE IF EXISTS bibrec_bib76x;
DROP TABLE IF EXISTS bibrec_bib77x;
DROP TABLE IF EXISTS bibrec_bib78x;
DROP TABLE IF EXISTS bibrec_bib79x;
DROP TABLE IF EXISTS bibrec_bib80x;
DROP TABLE IF EXISTS bibrec_bib81x;
DROP TABLE IF EXISTS bibrec_bib82x;
DROP TABLE IF EXISTS bibrec_bib83x;
DROP TABLE IF EXISTS bibrec_bib84x;
DROP TABLE IF EXISTS bibrec_bib85x;
DROP TABLE IF EXISTS bibrec_bib86x;
DROP TABLE IF EXISTS bibrec_bib87x;
DROP TABLE IF EXISTS bibrec_bib88x;
DROP TABLE IF EXISTS bibrec_bib89x;
DROP TABLE IF EXISTS bibrec_bib90x;
DROP TABLE IF EXISTS bibrec_bib91x;
DROP TABLE IF EXISTS bibrec_bib92x;
DROP TABLE IF EXISTS bibrec_bib93x;
DROP TABLE IF EXISTS bibrec_bib94x;
DROP TABLE IF EXISTS bibrec_bib95x;
DROP TABLE IF EXISTS bibrec_bib96x;
DROP TABLE IF EXISTS bibrec_bib97x;
DROP TABLE IF EXISTS bibrec_bib98x;
DROP TABLE IF EXISTS bibrec_bib99x;
DROP TABLE IF EXISTS bibfmt;
DROP TABLE IF EXISTS idxINDEX;
DROP TABLE IF EXISTS idxINDEXNAME;
DROP TABLE IF EXISTS idxINDEX_field;
+DROP TABLE IF EXISTS idxINDEX_idxINDEX;
DROP TABLE IF EXISTS idxWORD01F;
DROP TABLE IF EXISTS idxWORD02F;
DROP TABLE IF EXISTS idxWORD03F;
DROP TABLE IF EXISTS idxWORD04F;
DROP TABLE IF EXISTS idxWORD05F;
DROP TABLE IF EXISTS idxWORD06F;
DROP TABLE IF EXISTS idxWORD07F;
DROP TABLE IF EXISTS idxWORD08F;
DROP TABLE IF EXISTS idxWORD09F;
DROP TABLE IF EXISTS idxWORD10F;
DROP TABLE IF EXISTS idxWORD11F;
DROP TABLE IF EXISTS idxWORD12F;
DROP TABLE IF EXISTS idxWORD13F;
DROP TABLE IF EXISTS idxWORD14F;
DROP TABLE IF EXISTS idxWORD15F;
DROP TABLE IF EXISTS idxWORD16F;
DROP TABLE IF EXISTS idxWORD17F;
DROP TABLE IF EXISTS idxWORD18F;
DROP TABLE IF EXISTS idxWORD19F;
+DROP TABLE IF EXISTS idxWORD20F;
+DROP TABLE IF EXISTS idxWORD21F;
+DROP TABLE IF EXISTS idxWORD22F;
+DROP TABLE IF EXISTS idxWORD23F;
+DROP TABLE IF EXISTS idxWORD24F;
+DROP TABLE IF EXISTS idxWORD25F;
+DROP TABLE IF EXISTS idxWORD26F;
DROP TABLE IF EXISTS idxWORD01R;
DROP TABLE IF EXISTS idxWORD02R;
DROP TABLE IF EXISTS idxWORD03R;
DROP TABLE IF EXISTS idxWORD04R;
DROP TABLE IF EXISTS idxWORD05R;
DROP TABLE IF EXISTS idxWORD06R;
DROP TABLE IF EXISTS idxWORD07R;
DROP TABLE IF EXISTS idxWORD08R;
DROP TABLE IF EXISTS idxWORD09R;
DROP TABLE IF EXISTS idxWORD10R;
DROP TABLE IF EXISTS idxWORD11R;
DROP TABLE IF EXISTS idxWORD12R;
DROP TABLE IF EXISTS idxWORD13R;
DROP TABLE IF EXISTS idxWORD14R;
DROP TABLE IF EXISTS idxWORD15R;
DROP TABLE IF EXISTS idxWORD16R;
DROP TABLE IF EXISTS idxWORD17R;
DROP TABLE IF EXISTS idxWORD18R;
DROP TABLE IF EXISTS idxWORD19R;
+DROP TABLE IF EXISTS idxWORD20R;
+DROP TABLE IF EXISTS idxWORD21R;
+DROP TABLE IF EXISTS idxWORD22R;
+DROP TABLE IF EXISTS idxWORD23R;
+DROP TABLE IF EXISTS idxWORD24R;
+DROP TABLE IF EXISTS idxWORD25R;
+DROP TABLE IF EXISTS idxWORD26R;
DROP TABLE IF EXISTS idxPAIR01F;
DROP TABLE IF EXISTS idxPAIR02F;
DROP TABLE IF EXISTS idxPAIR03F;
DROP TABLE IF EXISTS idxPAIR04F;
DROP TABLE IF EXISTS idxPAIR05F;
DROP TABLE IF EXISTS idxPAIR06F;
DROP TABLE IF EXISTS idxPAIR07F;
DROP TABLE IF EXISTS idxPAIR08F;
DROP TABLE IF EXISTS idxPAIR09F;
DROP TABLE IF EXISTS idxPAIR10F;
DROP TABLE IF EXISTS idxPAIR11F;
DROP TABLE IF EXISTS idxPAIR12F;
DROP TABLE IF EXISTS idxPAIR13F;
DROP TABLE IF EXISTS idxPAIR14F;
DROP TABLE IF EXISTS idxPAIR15F;
DROP TABLE IF EXISTS idxPAIR16F;
DROP TABLE IF EXISTS idxPAIR17F;
DROP TABLE IF EXISTS idxPAIR18F;
DROP TABLE IF EXISTS idxPAIR19F;
+DROP TABLE IF EXISTS idxPAIR20F;
+DROP TABLE IF EXISTS idxPAIR21F;
+DROP TABLE IF EXISTS idxPAIR22F;
+DROP TABLE IF EXISTS idxPAIR23F;
+DROP TABLE IF EXISTS idxPAIR24F;
+DROP TABLE IF EXISTS idxPAIR25F;
+DROP TABLE IF EXISTS idxPAIR26F;
DROP TABLE IF EXISTS idxPAIR01R;
DROP TABLE IF EXISTS idxPAIR02R;
DROP TABLE IF EXISTS idxPAIR03R;
DROP TABLE IF EXISTS idxPAIR04R;
DROP TABLE IF EXISTS idxPAIR05R;
DROP TABLE IF EXISTS idxPAIR06R;
DROP TABLE IF EXISTS idxPAIR07R;
DROP TABLE IF EXISTS idxPAIR08R;
DROP TABLE IF EXISTS idxPAIR09R;
DROP TABLE IF EXISTS idxPAIR10R;
DROP TABLE IF EXISTS idxPAIR11R;
DROP TABLE IF EXISTS idxPAIR12R;
DROP TABLE IF EXISTS idxPAIR13R;
DROP TABLE IF EXISTS idxPAIR14R;
DROP TABLE IF EXISTS idxPAIR15R;
DROP TABLE IF EXISTS idxPAIR16R;
DROP TABLE IF EXISTS idxPAIR17R;
DROP TABLE IF EXISTS idxPAIR18R;
DROP TABLE IF EXISTS idxPAIR19R;
+DROP TABLE IF EXISTS idxPAIR20R;
+DROP TABLE IF EXISTS idxPAIR21R;
+DROP TABLE IF EXISTS idxPAIR22R;
+DROP TABLE IF EXISTS idxPAIR23R;
+DROP TABLE IF EXISTS idxPAIR24R;
+DROP TABLE IF EXISTS idxPAIR25R;
+DROP TABLE IF EXISTS idxPAIR26R;
DROP TABLE IF EXISTS idxPHRASE01F;
DROP TABLE IF EXISTS idxPHRASE02F;
DROP TABLE IF EXISTS idxPHRASE03F;
DROP TABLE IF EXISTS idxPHRASE04F;
DROP TABLE IF EXISTS idxPHRASE05F;
DROP TABLE IF EXISTS idxPHRASE06F;
DROP TABLE IF EXISTS idxPHRASE07F;
DROP TABLE IF EXISTS idxPHRASE08F;
DROP TABLE IF EXISTS idxPHRASE09F;
DROP TABLE IF EXISTS idxPHRASE10F;
DROP TABLE IF EXISTS idxPHRASE11F;
DROP TABLE IF EXISTS idxPHRASE12F;
DROP TABLE IF EXISTS idxPHRASE13F;
DROP TABLE IF EXISTS idxPHRASE14F;
DROP TABLE IF EXISTS idxPHRASE15F;
DROP TABLE IF EXISTS idxPHRASE16F;
DROP TABLE IF EXISTS idxPHRASE17F;
DROP TABLE IF EXISTS idxPHRASE18F;
DROP TABLE IF EXISTS idxPHRASE19F;
+DROP TABLE IF EXISTS idxPHRASE20F;
+DROP TABLE IF EXISTS idxPHRASE21F;
+DROP TABLE IF EXISTS idxPHRASE22F;
+DROP TABLE IF EXISTS idxPHRASE23F;
+DROP TABLE IF EXISTS idxPHRASE24F;
+DROP TABLE IF EXISTS idxPHRASE25F;
+DROP TABLE IF EXISTS idxPHRASE26F;
DROP TABLE IF EXISTS idxPHRASE01R;
DROP TABLE IF EXISTS idxPHRASE02R;
DROP TABLE IF EXISTS idxPHRASE03R;
DROP TABLE IF EXISTS idxPHRASE04R;
DROP TABLE IF EXISTS idxPHRASE05R;
DROP TABLE IF EXISTS idxPHRASE06R;
DROP TABLE IF EXISTS idxPHRASE07R;
DROP TABLE IF EXISTS idxPHRASE08R;
DROP TABLE IF EXISTS idxPHRASE09R;
DROP TABLE IF EXISTS idxPHRASE10R;
DROP TABLE IF EXISTS idxPHRASE11R;
DROP TABLE IF EXISTS idxPHRASE12R;
DROP TABLE IF EXISTS idxPHRASE13R;
DROP TABLE IF EXISTS idxPHRASE14R;
DROP TABLE IF EXISTS idxPHRASE15R;
DROP TABLE IF EXISTS idxPHRASE16R;
DROP TABLE IF EXISTS idxPHRASE17R;
DROP TABLE IF EXISTS idxPHRASE18R;
DROP TABLE IF EXISTS idxPHRASE19R;
+DROP TABLE IF EXISTS idxPHRASE20R;
+DROP TABLE IF EXISTS idxPHRASE21R;
+DROP TABLE IF EXISTS idxPHRASE22R;
+DROP TABLE IF EXISTS idxPHRASE23R;
+DROP TABLE IF EXISTS idxPHRASE24R;
+DROP TABLE IF EXISTS idxPHRASE25R;
+DROP TABLE IF EXISTS idxPHRASE26R;
DROP TABLE IF EXISTS rnkMETHOD;
DROP TABLE IF EXISTS rnkMETHODNAME;
DROP TABLE IF EXISTS rnkMETHODDATA;
DROP TABLE IF EXISTS rnkWORD01F;
DROP TABLE IF EXISTS rnkWORD01R;
DROP TABLE IF EXISTS rnkPAGEVIEWS;
DROP TABLE IF EXISTS rnkDOWNLOADS;
DROP TABLE IF EXISTS rnkCITATIONDATA;
DROP TABLE IF EXISTS rnkCITATIONDATAEXT;
DROP TABLE IF EXISTS rnkCITATIONDATAERR;
DROP TABLE IF EXISTS rnkAUTHORDATA;
DROP TABLE IF EXISTS rnkRECORDSCACHE;
DROP TABLE IF EXISTS rnkEXTENDEDAUTHORS;
DROP TABLE IF EXISTS rnkSELFCITES;
DROP TABLE IF EXISTS collection_rnkMETHOD;
DROP TABLE IF EXISTS collection;
DROP TABLE IF EXISTS collectionname;
DROP TABLE IF EXISTS oaiREPOSITORY;
DROP TABLE IF EXISTS oaiHARVEST;
DROP TABLE IF EXISTS oaiHARVESTLOG;
DROP TABLE IF EXISTS bibHOLDINGPEN;
DROP TABLE IF EXISTS collection_collection;
DROP TABLE IF EXISTS collection_portalbox;
DROP TABLE IF EXISTS portalbox;
DROP TABLE IF EXISTS collection_example;
DROP TABLE IF EXISTS example;
DROP TABLE IF EXISTS collection_format;
DROP TABLE IF EXISTS format;
DROP TABLE IF EXISTS formatname;
DROP TABLE IF EXISTS collection_field_fieldvalue;
DROP TABLE IF EXISTS field;
DROP TABLE IF EXISTS fieldname;
DROP TABLE IF EXISTS fieldvalue;
DROP TABLE IF EXISTS field_tag;
DROP TABLE IF EXISTS tag;
DROP TABLE IF EXISTS publreq;
DROP TABLE IF EXISTS session;
DROP TABLE IF EXISTS user;
DROP TABLE IF EXISTS userEXT;
DROP TABLE IF EXISTS accROLE;
DROP TABLE IF EXISTS accMAILCOOKIE;
DROP TABLE IF EXISTS user_accROLE;
DROP TABLE IF EXISTS accACTION;
DROP TABLE IF EXISTS accARGUMENT;
DROP TABLE IF EXISTS accROLE_accACTION_accARGUMENT;
DROP TABLE IF EXISTS user_query;
DROP TABLE IF EXISTS query;
DROP TABLE IF EXISTS user_basket;
DROP TABLE IF EXISTS basket;
DROP TABLE IF EXISTS basket_record;
DROP TABLE IF EXISTS record;
DROP TABLE IF EXISTS user_query_basket;
DROP TABLE IF EXISTS cmtRECORDCOMMENT;
DROP TABLE IF EXISTS cmtCOLLAPSED;
DROP TABLE IF EXISTS knwKB;
DROP TABLE IF EXISTS knwKBRVAL;
DROP TABLE IF EXISTS knwKBDDEF;
DROP TABLE IF EXISTS sbmACTION;
DROP TABLE IF EXISTS sbmALLFUNCDESCR;
DROP TABLE IF EXISTS sbmAPPROVAL;
DROP TABLE IF EXISTS sbmCPLXAPPROVAL;
DROP TABLE IF EXISTS sbmCOLLECTION;
DROP TABLE IF EXISTS sbmCOLLECTION_sbmCOLLECTION;
DROP TABLE IF EXISTS sbmCOLLECTION_sbmDOCTYPE;
DROP TABLE IF EXISTS sbmCATEGORIES;
DROP TABLE IF EXISTS sbmCHECKS;
DROP TABLE IF EXISTS sbmCOOKIES;
DROP TABLE IF EXISTS sbmDOCTYPE;
DROP TABLE IF EXISTS sbmFIELD;
DROP TABLE IF EXISTS sbmFIELDDESC;
DROP TABLE IF EXISTS sbmFORMATEXTENSION;
DROP TABLE IF EXISTS sbmFUNCTIONS;
DROP TABLE IF EXISTS sbmFUNDESC;
DROP TABLE IF EXISTS sbmGFILERESULT;
DROP TABLE IF EXISTS sbmIMPLEMENT;
DROP TABLE IF EXISTS sbmPARAMETERS;
DROP TABLE IF EXISTS sbmPUBLICATION;
DROP TABLE IF EXISTS sbmPUBLICATIONCOMM;
DROP TABLE IF EXISTS sbmPUBLICATIONDATA;
DROP TABLE IF EXISTS sbmREFEREES;
DROP TABLE IF EXISTS sbmSUBMISSIONS;
DROP TABLE IF EXISTS schTASK;
DROP TABLE IF EXISTS bibdoc;
DROP TABLE IF EXISTS bibdoc_bibdoc;
DROP TABLE IF EXISTS bibdocmoreinfo;
DROP TABLE IF EXISTS bibrec_bibdoc;
DROP TABLE IF EXISTS bibdocfsinfo;
DROP TABLE IF EXISTS usergroup;
DROP TABLE IF EXISTS user_usergroup;
DROP TABLE IF EXISTS user_basket;
DROP TABLE IF EXISTS msgMESSAGE;
DROP TABLE IF EXISTS user_msgMESSAGE;
DROP TABLE IF EXISTS bskBASKET;
DROP TABLE IF EXISTS bskEXTREC;
DROP TABLE IF EXISTS bskEXTFMT;
DROP TABLE IF EXISTS bskREC;
DROP TABLE IF EXISTS bskRECORDCOMMENT;
DROP TABLE IF EXISTS cmtACTIONHISTORY;
DROP TABLE IF EXISTS cmtSUBSCRIPTION;
DROP TABLE IF EXISTS user_bskBASKET;
DROP TABLE IF EXISTS usergroup_bskBASKET;
DROP TABLE IF EXISTS collection_externalcollection;
DROP TABLE IF EXISTS externalcollection;
DROP TABLE IF EXISTS collectiondetailedrecordpagetabs;
DROP TABLE IF EXISTS staEVENT;
DROP TABLE IF EXISTS clsMETHOD;
DROP TABLE IF EXISTS collection_clsMETHOD;
DROP TABLE IF EXISTS jrnJOURNAL;
DROP TABLE IF EXISTS jrnISSUE;
DROP TABLE IF EXISTS hstRECORD;
DROP TABLE IF EXISTS hstDOCUMENT;
DROP TABLE IF EXISTS hstTASK;
DROP TABLE IF EXISTS hstBATCHUPLOAD;
DROP TABLE IF EXISTS crcBORROWER;
DROP TABLE IF EXISTS crcILLREQUEST;
DROP TABLE IF EXISTS crcITEM;
DROP TABLE IF EXISTS crcLIBRARY;
DROP TABLE IF EXISTS crcLOAN;
DROP TABLE IF EXISTS crcLOANREQUEST;
DROP TABLE IF EXISTS crcPURCHASE;
DROP TABLE IF EXISTS crcVENDOR;
DROP TABLE IF EXISTS expJOB;
DROP TABLE IF EXISTS expQUERY;
DROP TABLE IF EXISTS expJOB_expQUERY;
DROP TABLE IF EXISTS expQUERYRESULT;
DROP TABLE IF EXISTS expJOBRESULT;
DROP TABLE IF EXISTS expJOBRESULT_expQUERYRESULT;
DROP TABLE IF EXISTS user_expJOB;
DROP TABLE IF EXISTS swrREMOTESERVER;
DROP TABLE IF EXISTS swrCLIENTDATA;
DROP TABLE IF EXISTS hstEXCEPTION;
DROP TABLE IF EXISTS aidUSERINPUTLOG;
DROP TABLE IF EXISTS aidCACHE;
DROP TABLE IF EXISTS aidPERSONIDDATA;
DROP TABLE IF EXISTS aidPERSONIDPAPERS;
DROP TABLE IF EXISTS aidRESULTS;
DROP TABLE IF EXISTS xtrJOB;
DROP TABLE IF EXISTS bsrMETHOD;
DROP TABLE IF EXISTS bsrMETHODNAME;
DROP TABLE IF EXISTS bsrMETHODDATA;
DROP TABLE IF EXISTS bsrMETHODDATABUCKET;
DROP TABLE IF EXISTS collection_bsrMETHOD;
DROP TABLE IF EXISTS lnkENTRY;
DROP TABLE IF EXISTS lnkENTRYURLTITLE;
DROP TABLE IF EXISTS lnkENTRYLOG;
DROP TABLE IF EXISTS lnkLOG;
DROP TABLE IF EXISTS lnkADMINURL;
DROP TABLE IF EXISTS lnkADMINURLLOG;
DROP TABLE IF EXISTS webapikey;
DROP TABLE IF EXISTS wapCACHE;
DROP TABLE IF EXISTS seqSTORE;
DROP TABLE IF EXISTS upgrade;
DROP TABLE IF EXISTS goto;
DROP TABLE IF EXISTS depWORKFLOW;
DROP TABLE IF EXISTS depDRAFT;
DROP TABLE IF EXISTS bwlAUDITLOGGING;
DROP TABLE IF EXISTS bwlOBJECT;
DROP TABLE IF EXISTS bwlTASKLOGGING;
DROP TABLE IF EXISTS bwlWORKFLOW;
DROP TABLE IF EXISTS bwlWORKFLOWLOGGING;
-- end of file
diff --git a/modules/miscutil/sql/tabfill.sql b/modules/miscutil/sql/tabfill.sql
index 891d01d5f..df8a26d2f 100644
--- a/modules/miscutil/sql/tabfill.sql
+++ b/modules/miscutil/sql/tabfill.sql
@@ -1,667 +1,858 @@
-- This file is part of Invenio.
-- Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 CERN.
--
-- Invenio is free software; you can redistribute it and/or
-- modify it under the terms of the GNU General Public License as
-- published by the Free Software Foundation; either version 2 of the
-- License, or (at your option) any later version.
--
-- Invenio is distributed in the hope that it will be useful, but
-- WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- General Public License for more details.
--
-- You should have received a copy of the GNU General Public License
-- along with Invenio; if not, write to the Free Software Foundation, Inc.,
-- 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-- Fill Invenio configuration tables with defaults suitable for any site.
INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (1,'wrd','0000-00-00 00:00:00');
INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (1,1,100);
INSERT INTO rnkCITATIONDATA VALUES (1,'citationdict',NULL,'0000-00-00');
INSERT INTO rnkCITATIONDATA VALUES (2,'reversedict',NULL,'0000-00-00');
INSERT INTO rnkCITATIONDATA VALUES (3,'selfcitdict',NULL,'0000-00-00');
INSERT INTO rnkCITATIONDATA VALUES (4,'selfcitedbydict',NULL,'0000-00-00');
INSERT INTO field VALUES (1,'any field','anyfield');
INSERT INTO field VALUES (2,'title','title');
INSERT INTO field VALUES (3,'author','author');
INSERT INTO field VALUES (4,'abstract','abstract');
INSERT INTO field VALUES (5,'keyword','keyword');
INSERT INTO field VALUES (6,'report number','reportnumber');
INSERT INTO field VALUES (7,'subject','subject');
INSERT INTO field VALUES (8,'reference','reference');
INSERT INTO field VALUES (9,'fulltext','fulltext');
INSERT INTO field VALUES (10,'collection','collection');
INSERT INTO field VALUES (11,'division','division');
INSERT INTO field VALUES (12,'year','year');
INSERT INTO field VALUES (13,'experiment','experiment');
INSERT INTO field VALUES (14,'record ID','recid');
INSERT INTO field VALUES (15,'isbn','isbn');
INSERT INTO field VALUES (16,'issn','issn');
INSERT INTO field VALUES (17,'coden','coden');
-- INSERT INTO field VALUES (18,'doi','doi');
INSERT INTO field VALUES (19,'journal','journal');
INSERT INTO field VALUES (20,'collaboration','collaboration');
INSERT INTO field VALUES (21,'affiliation','affiliation');
INSERT INTO field VALUES (22,'exact author','exactauthor');
INSERT INTO field VALUES (23,'date created','datecreated');
INSERT INTO field VALUES (24,'date modified','datemodified');
INSERT INTO field VALUES (25,'refers to','refersto');
INSERT INTO field VALUES (26,'cited by','citedby');
INSERT INTO field VALUES (27,'caption','caption');
INSERT INTO field VALUES (28,'first author','firstauthor');
INSERT INTO field VALUES (29,'exact first author','exactfirstauthor');
INSERT INTO field VALUES (30,'author count','authorcount');
INSERT INTO field VALUES (31,'reference to','rawref');
INSERT INTO field VALUES (32,'exact title','exacttitle');
+INSERT INTO field VALUES (33,'authority author','authorityauthor');
+INSERT INTO field VALUES (34,'authority institution','authorityinstitution');
+INSERT INTO field VALUES (35,'authority journal','authorityjournal');
+INSERT INTO field VALUES (36,'authority subject','authoritysubject');
+INSERT INTO field VALUES (37,'item count','itemcount');
+INSERT INTO field VALUES (38,'file type','filetype');
+INSERT INTO field VALUES (39,'miscellaneous', 'miscellaneous');
+
-INSERT INTO field_tag VALUES (1,100,10);
-INSERT INTO field_tag VALUES (1,102,10);
-INSERT INTO field_tag VALUES (1,103,10);
-INSERT INTO field_tag VALUES (1,104,10);
-INSERT INTO field_tag VALUES (1,105,10);
-INSERT INTO field_tag VALUES (1,106,10);
-INSERT INTO field_tag VALUES (1,107,10);
-INSERT INTO field_tag VALUES (1,108,10);
-INSERT INTO field_tag VALUES (1,109,10);
-INSERT INTO field_tag VALUES (1,110,10);
-INSERT INTO field_tag VALUES (1,111,10);
-INSERT INTO field_tag VALUES (1,112,10);
-INSERT INTO field_tag VALUES (1,113,10);
-INSERT INTO field_tag VALUES (1,114,10);
-INSERT INTO field_tag VALUES (1,16,10);
-INSERT INTO field_tag VALUES (1,17,10);
-INSERT INTO field_tag VALUES (1,18,10);
-INSERT INTO field_tag VALUES (1,19,10);
-INSERT INTO field_tag VALUES (1,20,10);
-INSERT INTO field_tag VALUES (1,21,10);
-INSERT INTO field_tag VALUES (1,22,10);
-INSERT INTO field_tag VALUES (1,23,10);
-INSERT INTO field_tag VALUES (1,24,10);
-INSERT INTO field_tag VALUES (1,25,10);
-INSERT INTO field_tag VALUES (1,26,10);
-INSERT INTO field_tag VALUES (1,27,10);
-INSERT INTO field_tag VALUES (1,28,10);
-INSERT INTO field_tag VALUES (1,29,10);
-INSERT INTO field_tag VALUES (1,30,10);
-INSERT INTO field_tag VALUES (1,31,10);
-INSERT INTO field_tag VALUES (1,32,10);
-INSERT INTO field_tag VALUES (1,33,10);
-INSERT INTO field_tag VALUES (1,34,10);
-INSERT INTO field_tag VALUES (1,35,10);
-INSERT INTO field_tag VALUES (1,36,10);
-INSERT INTO field_tag VALUES (1,37,10);
-INSERT INTO field_tag VALUES (1,38,10);
-INSERT INTO field_tag VALUES (1,39,10);
-INSERT INTO field_tag VALUES (1,40,10);
-INSERT INTO field_tag VALUES (1,41,10);
-INSERT INTO field_tag VALUES (1,42,10);
-INSERT INTO field_tag VALUES (1,43,10);
-INSERT INTO field_tag VALUES (1,44,10);
-INSERT INTO field_tag VALUES (1,45,10);
-INSERT INTO field_tag VALUES (1,46,10);
-INSERT INTO field_tag VALUES (1,47,10);
-INSERT INTO field_tag VALUES (1,48,10);
-INSERT INTO field_tag VALUES (1,49,10);
-INSERT INTO field_tag VALUES (1,50,10);
-INSERT INTO field_tag VALUES (1,51,10);
-INSERT INTO field_tag VALUES (1,52,10);
-INSERT INTO field_tag VALUES (1,53,10);
-INSERT INTO field_tag VALUES (1,54,10);
-INSERT INTO field_tag VALUES (1,55,10);
-INSERT INTO field_tag VALUES (1,56,10);
-INSERT INTO field_tag VALUES (1,57,10);
-INSERT INTO field_tag VALUES (1,58,10);
-INSERT INTO field_tag VALUES (1,59,10);
-INSERT INTO field_tag VALUES (1,60,10);
-INSERT INTO field_tag VALUES (1,61,10);
-INSERT INTO field_tag VALUES (1,62,10);
-INSERT INTO field_tag VALUES (1,63,10);
-INSERT INTO field_tag VALUES (1,64,10);
-INSERT INTO field_tag VALUES (1,65,10);
-INSERT INTO field_tag VALUES (1,66,10);
-INSERT INTO field_tag VALUES (1,67,10);
-INSERT INTO field_tag VALUES (1,68,10);
-INSERT INTO field_tag VALUES (1,69,10);
-INSERT INTO field_tag VALUES (1,70,10);
-INSERT INTO field_tag VALUES (1,71,10);
-INSERT INTO field_tag VALUES (1,72,10);
-INSERT INTO field_tag VALUES (1,73,10);
-INSERT INTO field_tag VALUES (1,74,10);
-INSERT INTO field_tag VALUES (1,75,10);
-INSERT INTO field_tag VALUES (1,76,10);
-INSERT INTO field_tag VALUES (1,77,10);
-INSERT INTO field_tag VALUES (1,78,10);
-INSERT INTO field_tag VALUES (1,79,10);
-INSERT INTO field_tag VALUES (1,80,10);
-INSERT INTO field_tag VALUES (1,81,10);
-INSERT INTO field_tag VALUES (1,82,10);
-INSERT INTO field_tag VALUES (1,83,10);
-INSERT INTO field_tag VALUES (1,84,10);
-INSERT INTO field_tag VALUES (1,85,10);
-INSERT INTO field_tag VALUES (1,86,10);
-INSERT INTO field_tag VALUES (1,87,10);
-INSERT INTO field_tag VALUES (1,88,10);
-INSERT INTO field_tag VALUES (1,89,10);
-INSERT INTO field_tag VALUES (1,90,10);
-INSERT INTO field_tag VALUES (1,91,10);
-INSERT INTO field_tag VALUES (1,92,10);
-INSERT INTO field_tag VALUES (1,93,10);
-INSERT INTO field_tag VALUES (1,94,10);
-INSERT INTO field_tag VALUES (1,95,10);
-INSERT INTO field_tag VALUES (1,96,10);
-INSERT INTO field_tag VALUES (1,97,10);
-INSERT INTO field_tag VALUES (1,98,10);
-INSERT INTO field_tag VALUES (1,99,10);
-INSERT INTO field_tag VALUES (1,122,10);
-INSERT INTO field_tag VALUES (1,123,10);
-INSERT INTO field_tag VALUES (1,124,10);
-INSERT INTO field_tag VALUES (1,125,10);
-INSERT INTO field_tag VALUES (1,126,10);
-INSERT INTO field_tag VALUES (1,127,10);
-INSERT INTO field_tag VALUES (1,128,10);
-INSERT INTO field_tag VALUES (1,129,10);
-INSERT INTO field_tag VALUES (1,130,10);
INSERT INTO field_tag VALUES (10,11,100);
INSERT INTO field_tag VALUES (11,14,100);
INSERT INTO field_tag VALUES (12,15,10);
INSERT INTO field_tag VALUES (13,116,10);
INSERT INTO field_tag VALUES (2,3,100);
INSERT INTO field_tag VALUES (2,4,90);
INSERT INTO field_tag VALUES (3,1,100);
INSERT INTO field_tag VALUES (3,2,90);
INSERT INTO field_tag VALUES (4,5,100);
INSERT INTO field_tag VALUES (5,6,100);
INSERT INTO field_tag VALUES (6,7,30);
INSERT INTO field_tag VALUES (6,8,10);
INSERT INTO field_tag VALUES (6,9,20);
INSERT INTO field_tag VALUES (7,12,100);
INSERT INTO field_tag VALUES (7,13,90);
INSERT INTO field_tag VALUES (8,10,100);
INSERT INTO field_tag VALUES (9,115,100);
INSERT INTO field_tag VALUES (14,117,100);
INSERT INTO field_tag VALUES (15,118,100);
INSERT INTO field_tag VALUES (16,119,100);
INSERT INTO field_tag VALUES (17,120,100);
-- INSERT INTO field_tag VALUES (18,121,100);
INSERT INTO field_tag VALUES (19,131,100);
INSERT INTO field_tag VALUES (20,132,100);
INSERT INTO field_tag VALUES (21,133,100);
INSERT INTO field_tag VALUES (21,134,90);
INSERT INTO field_tag VALUES (22,1,100);
INSERT INTO field_tag VALUES (22,2,90);
INSERT INTO field_tag VALUES (27,135,100);
INSERT INTO field_tag VALUES (28,1,100);
INSERT INTO field_tag VALUES (29,1,100);
INSERT INTO field_tag VALUES (30,1,100);
INSERT INTO field_tag VALUES (30,2,90);
INSERT INTO field_tag VALUES (32,3,100);
INSERT INTO field_tag VALUES (32,4,90);
+-- authority fields
+INSERT INTO field_tag VALUES (33,1,100);
+INSERT INTO field_tag VALUES (33,146,100);
+INSERT INTO field_tag VALUES (33,140,100);
+INSERT INTO field_tag VALUES (34,148,100);
+INSERT INTO field_tag VALUES (34,149,100);
+INSERT INTO field_tag VALUES (34,150,100);
+INSERT INTO field_tag VALUES (35,151,100);
+INSERT INTO field_tag VALUES (35,152,100);
+INSERT INTO field_tag VALUES (35,153,100);
+INSERT INTO field_tag VALUES (36,154,100);
+INSERT INTO field_tag VALUES (36,155,100);
+INSERT INTO field_tag VALUES (36,156,100);
+-- misc fields
+INSERT INTO field_tag VALUES (39,17,10);
+INSERT INTO field_tag VALUES (39,18,10);
+INSERT INTO field_tag VALUES (39,157,10);
+INSERT INTO field_tag VALUES (39,158,10);
+INSERT INTO field_tag VALUES (39,159,10);
+INSERT INTO field_tag VALUES (39,160,10);
+INSERT INTO field_tag VALUES (39,161,10);
+INSERT INTO field_tag VALUES (39,162,10);
+INSERT INTO field_tag VALUES (39,163,10);
+INSERT INTO field_tag VALUES (39,164,10);
+INSERT INTO field_tag VALUES (39,20,10);
+INSERT INTO field_tag VALUES (39,21,10);
+INSERT INTO field_tag VALUES (39,22,10);
+INSERT INTO field_tag VALUES (39,23,10);
+INSERT INTO field_tag VALUES (39,165,10);
+INSERT INTO field_tag VALUES (39,166,10);
+INSERT INTO field_tag VALUES (39,167,10);
+INSERT INTO field_tag VALUES (39,168,10);
+INSERT INTO field_tag VALUES (39,169,10);
+INSERT INTO field_tag VALUES (39,170,10);
+INSERT INTO field_tag VALUES (39,25,10);
+INSERT INTO field_tag VALUES (39,27,10);
+INSERT INTO field_tag VALUES (39,28,10);
+INSERT INTO field_tag VALUES (39,29,10);
+INSERT INTO field_tag VALUES (39,30,10);
+INSERT INTO field_tag VALUES (39,31,10);
+INSERT INTO field_tag VALUES (39,32,10);
+INSERT INTO field_tag VALUES (39,33,10);
+INSERT INTO field_tag VALUES (39,34,10);
+INSERT INTO field_tag VALUES (39,35,10);
+INSERT INTO field_tag VALUES (39,36,10);
+INSERT INTO field_tag VALUES (39,37,10);
+INSERT INTO field_tag VALUES (39,38,10);
+INSERT INTO field_tag VALUES (39,39,10);
+INSERT INTO field_tag VALUES (39,171,10);
+INSERT INTO field_tag VALUES (39,172,10);
+INSERT INTO field_tag VALUES (39,173,10);
+INSERT INTO field_tag VALUES (39,174,10);
+INSERT INTO field_tag VALUES (39,175,10);
+INSERT INTO field_tag VALUES (39,41,10);
+INSERT INTO field_tag VALUES (39,42,10);
+INSERT INTO field_tag VALUES (39,43,10);
+INSERT INTO field_tag VALUES (39,44,10);
+INSERT INTO field_tag VALUES (39,45,10);
+INSERT INTO field_tag VALUES (39,46,10);
+INSERT INTO field_tag VALUES (39,47,10);
+INSERT INTO field_tag VALUES (39,48,10);
+INSERT INTO field_tag VALUES (39,49,10);
+INSERT INTO field_tag VALUES (39,50,10);
+INSERT INTO field_tag VALUES (39,51,10);
+INSERT INTO field_tag VALUES (39,52,10);
+INSERT INTO field_tag VALUES (39,53,10);
+INSERT INTO field_tag VALUES (39,54,10);
+INSERT INTO field_tag VALUES (39,55,10);
+INSERT INTO field_tag VALUES (39,56,10);
+INSERT INTO field_tag VALUES (39,57,10);
+INSERT INTO field_tag VALUES (39,58,10);
+INSERT INTO field_tag VALUES (39,59,10);
+INSERT INTO field_tag VALUES (39,60,10);
+INSERT INTO field_tag VALUES (39,61,10);
+INSERT INTO field_tag VALUES (39,62,10);
+INSERT INTO field_tag VALUES (39,63,10);
+INSERT INTO field_tag VALUES (39,64,10);
+INSERT INTO field_tag VALUES (39,65,10);
+INSERT INTO field_tag VALUES (39,66,10);
+INSERT INTO field_tag VALUES (39,67,10);
+INSERT INTO field_tag VALUES (39,176,10);
+INSERT INTO field_tag VALUES (39,177,10);
+INSERT INTO field_tag VALUES (39,178,10);
+INSERT INTO field_tag VALUES (39,179,10);
+INSERT INTO field_tag VALUES (39,180,10);
+INSERT INTO field_tag VALUES (39,69,10);
+INSERT INTO field_tag VALUES (39,70,10);
+INSERT INTO field_tag VALUES (39,71,10);
+INSERT INTO field_tag VALUES (39,72,10);
+INSERT INTO field_tag VALUES (39,73,10);
+INSERT INTO field_tag VALUES (39,74,10);
+INSERT INTO field_tag VALUES (39,75,10);
+INSERT INTO field_tag VALUES (39,76,10);
+INSERT INTO field_tag VALUES (39,77,10);
+INSERT INTO field_tag VALUES (39,78,10);
+INSERT INTO field_tag VALUES (39,79,10);
+INSERT INTO field_tag VALUES (39,80,10);
+INSERT INTO field_tag VALUES (39,181,10);
+INSERT INTO field_tag VALUES (39,182,10);
+INSERT INTO field_tag VALUES (39,183,10);
+INSERT INTO field_tag VALUES (39,184,10);
+INSERT INTO field_tag VALUES (39,185,10);
+INSERT INTO field_tag VALUES (39,186,10);
+INSERT INTO field_tag VALUES (39,82,10);
+INSERT INTO field_tag VALUES (39,83,10);
+INSERT INTO field_tag VALUES (39,84,10);
+INSERT INTO field_tag VALUES (39,85,10);
+INSERT INTO field_tag VALUES (39,187,10);
+INSERT INTO field_tag VALUES (39,88,10);
+INSERT INTO field_tag VALUES (39,89,10);
+INSERT INTO field_tag VALUES (39,90,10);
+INSERT INTO field_tag VALUES (39,91,10);
+INSERT INTO field_tag VALUES (39,92,10);
+INSERT INTO field_tag VALUES (39,93,10);
+INSERT INTO field_tag VALUES (39,94,10);
+INSERT INTO field_tag VALUES (39,95,10);
+INSERT INTO field_tag VALUES (39,96,10);
+INSERT INTO field_tag VALUES (39,97,10);
+INSERT INTO field_tag VALUES (39,98,10);
+INSERT INTO field_tag VALUES (39,99,10);
+INSERT INTO field_tag VALUES (39,100,10);
+INSERT INTO field_tag VALUES (39,102,10);
+INSERT INTO field_tag VALUES (39,103,10);
+INSERT INTO field_tag VALUES (39,104,10);
+INSERT INTO field_tag VALUES (39,105,10);
+INSERT INTO field_tag VALUES (39,188,10);
+INSERT INTO field_tag VALUES (39,189,10);
+INSERT INTO field_tag VALUES (39,190,10);
+INSERT INTO field_tag VALUES (39,191,10);
+INSERT INTO field_tag VALUES (39,192,10);
+INSERT INTO field_tag VALUES (39,193,10);
+INSERT INTO field_tag VALUES (39,194,10);
+INSERT INTO field_tag VALUES (39,195,10);
+INSERT INTO field_tag VALUES (39,196,10);
+INSERT INTO field_tag VALUES (39,107,10);
+INSERT INTO field_tag VALUES (39,108,10);
+INSERT INTO field_tag VALUES (39,109,10);
+INSERT INTO field_tag VALUES (39,110,10);
+INSERT INTO field_tag VALUES (39,111,10);
+INSERT INTO field_tag VALUES (39,112,10);
+INSERT INTO field_tag VALUES (39,113,10);
+INSERT INTO field_tag VALUES (39,197,10);
+INSERT INTO field_tag VALUES (39,198,10);
+INSERT INTO field_tag VALUES (39,199,10);
+INSERT INTO field_tag VALUES (39,200,10);
+INSERT INTO field_tag VALUES (39,201,10);
+INSERT INTO field_tag VALUES (39,202,10);
+INSERT INTO field_tag VALUES (39,203,10);
+INSERT INTO field_tag VALUES (39,204,10);
+INSERT INTO field_tag VALUES (39,205,10);
+INSERT INTO field_tag VALUES (39,206,10);
+INSERT INTO field_tag VALUES (39,207,10);
+INSERT INTO field_tag VALUES (39,208,10);
+INSERT INTO field_tag VALUES (39,209,10);
+INSERT INTO field_tag VALUES (39,210,10);
+INSERT INTO field_tag VALUES (39,211,10);
+INSERT INTO field_tag VALUES (39,212,10);
+INSERT INTO field_tag VALUES (39,213,10);
+INSERT INTO field_tag VALUES (39,214,10);
+INSERT INTO field_tag VALUES (39,215,10);
+INSERT INTO field_tag VALUES (39,122,10);
+INSERT INTO field_tag VALUES (39,123,10);
+INSERT INTO field_tag VALUES (39,124,10);
+INSERT INTO field_tag VALUES (39,125,10);
+INSERT INTO field_tag VALUES (39,126,10);
+INSERT INTO field_tag VALUES (39,127,10);
+INSERT INTO field_tag VALUES (39,128,10);
+INSERT INTO field_tag VALUES (39,129,10);
+INSERT INTO field_tag VALUES (39,130,10);
+INSERT INTO field_tag VALUES (39,1,10);
+INSERT INTO field_tag VALUES (39,2,10);
+-- misc authority fields
+INSERT INTO field_tag VALUES (39,216,10);
+INSERT INTO field_tag VALUES (39,217,10);
+INSERT INTO field_tag VALUES (39,218,10);
+INSERT INTO field_tag VALUES (39,219,10);
+INSERT INTO field_tag VALUES (39,220,10);
+INSERT INTO field_tag VALUES (39,221,10);
+
+
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (1,'HTML brief','hb', 'HTML brief output format, used for search results pages.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (2,'HTML detailed','hd', 'HTML detailed output format, used for Detailed record pages.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (3,'MARC','hm', 'HTML MARC.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (4,'Dublin Core','xd', 'XML Dublin Core.', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (5,'MARCXML','xm', 'XML MARC.', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (6,'portfolio','hp', 'HTML portfolio-style output format for photos.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (7,'photo captions only','hc', 'HTML caption-only output format for photos.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (8,'BibTeX','hx', 'BibTeX.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (9,'EndNote','xe', 'XML EndNote.', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (10,'NLM','xn', 'XML NLM.', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (11,'Excel','excel', 'Excel csv output', 'application/ms-excel', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (12,'HTML similarity','hs', 'Very short HTML output for similarity box (<i>people also viewed...</i>).', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (13,'RSS','xr', 'RSS.', 'text/xml', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (14,'OAI DC','xoaidc', 'OAI DC.', 'text/xml', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (15,'File mini-panel', 'hdfile', 'Used to show fulltext files in mini-panel of detailed record pages.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (16,'Actions mini-panel', 'hdact', 'Used to display actions in mini-panel of detailed record pages.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (17,'References tab', 'hdref', 'Display record references in References tab.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (18,'HTML citesummary','hcs', 'HTML cite summary format, used for search results pages.', 'text/html', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (19,'RefWorks','xw', 'RefWorks.', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (20,'MODS', 'xo', 'Metadata Object Description Schema', 'application/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (21,'HTML author claiming', 'ha', 'Very brief HTML output format for author/paper claiming facility.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (22,'Podcast', 'xp', 'Sample format suitable for multimedia feeds, such as podcasts', 'application/rss+xml', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (23,'WebAuthorProfile affiliations helper','wapaff', 'cPickled dicts', 'text', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (24,'EndNote (8-X)','xe8x', 'XML EndNote (8-X).', 'text/xml', 1);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (25,'HTML citesummary extended','hcs2', 'HTML cite summary format, including self-citations counts.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (26,'DataCite','dcite', 'DataCite XML format.', 'text/xml', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (27,'Mobile brief','mobb', 'Mobile brief format.', 'text/html', 0);
INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (28,'Mobile detailed','mobd', 'Mobile detailed format.', 'text/html', 0);
INSERT INTO tag VALUES (1,'first author name','100__a');
INSERT INTO tag VALUES (2,'additional author name','700__a');
INSERT INTO tag VALUES (3,'main title','245__%');
INSERT INTO tag VALUES (4,'additional title','246__%');
INSERT INTO tag VALUES (5,'abstract','520__%');
INSERT INTO tag VALUES (6,'keyword','6531_a');
INSERT INTO tag VALUES (7,'primary report number','037__a');
INSERT INTO tag VALUES (8,'additional report number','088__a');
INSERT INTO tag VALUES (9,'added report number','909C0r');
INSERT INTO tag VALUES (10,'reference','999C5%');
INSERT INTO tag VALUES (11,'collection identifier','980__%');
INSERT INTO tag VALUES (12,'main subject','65017a');
INSERT INTO tag VALUES (13,'additional subject','65027a');
INSERT INTO tag VALUES (14,'division','909C0p');
INSERT INTO tag VALUES (15,'year','909C0y');
INSERT INTO tag VALUES (16,'00x','00%');
INSERT INTO tag VALUES (17,'01x','01%');
INSERT INTO tag VALUES (18,'02x','02%');
INSERT INTO tag VALUES (19,'03x','03%');
INSERT INTO tag VALUES (20,'lang','04%');
INSERT INTO tag VALUES (21,'05x','05%');
INSERT INTO tag VALUES (22,'06x','06%');
INSERT INTO tag VALUES (23,'07x','07%');
INSERT INTO tag VALUES (24,'08x','08%');
INSERT INTO tag VALUES (25,'09x','09%');
INSERT INTO tag VALUES (26,'10x','10%');
INSERT INTO tag VALUES (27,'11x','11%');
INSERT INTO tag VALUES (28,'12x','12%');
INSERT INTO tag VALUES (29,'13x','13%');
INSERT INTO tag VALUES (30,'14x','14%');
INSERT INTO tag VALUES (31,'15x','15%');
INSERT INTO tag VALUES (32,'16x','16%');
INSERT INTO tag VALUES (33,'17x','17%');
INSERT INTO tag VALUES (34,'18x','18%');
INSERT INTO tag VALUES (35,'19x','19%');
INSERT INTO tag VALUES (36,'20x','20%');
INSERT INTO tag VALUES (37,'21x','21%');
INSERT INTO tag VALUES (38,'22x','22%');
INSERT INTO tag VALUES (39,'23x','23%');
INSERT INTO tag VALUES (40,'24x','24%');
INSERT INTO tag VALUES (41,'25x','25%');
INSERT INTO tag VALUES (42,'internal','26%');
INSERT INTO tag VALUES (43,'27x','27%');
INSERT INTO tag VALUES (44,'28x','28%');
INSERT INTO tag VALUES (45,'29x','29%');
INSERT INTO tag VALUES (46,'pages','30%');
INSERT INTO tag VALUES (47,'31x','31%');
INSERT INTO tag VALUES (48,'32x','32%');
INSERT INTO tag VALUES (49,'33x','33%');
INSERT INTO tag VALUES (50,'34x','34%');
INSERT INTO tag VALUES (51,'35x','35%');
INSERT INTO tag VALUES (52,'36x','36%');
INSERT INTO tag VALUES (53,'37x','37%');
INSERT INTO tag VALUES (54,'38x','38%');
INSERT INTO tag VALUES (55,'39x','39%');
INSERT INTO tag VALUES (56,'40x','40%');
INSERT INTO tag VALUES (57,'41x','41%');
INSERT INTO tag VALUES (58,'42x','42%');
INSERT INTO tag VALUES (59,'43x','43%');
INSERT INTO tag VALUES (60,'44x','44%');
INSERT INTO tag VALUES (61,'45x','45%');
INSERT INTO tag VALUES (62,'46x','46%');
INSERT INTO tag VALUES (63,'47x','47%');
INSERT INTO tag VALUES (64,'48x','48%');
INSERT INTO tag VALUES (65,'series','49%');
INSERT INTO tag VALUES (66,'50x','50%');
INSERT INTO tag VALUES (67,'51x','51%');
INSERT INTO tag VALUES (68,'52x','52%');
INSERT INTO tag VALUES (69,'53x','53%');
INSERT INTO tag VALUES (70,'54x','54%');
INSERT INTO tag VALUES (71,'55x','55%');
INSERT INTO tag VALUES (72,'56x','56%');
INSERT INTO tag VALUES (73,'57x','57%');
INSERT INTO tag VALUES (74,'58x','58%');
INSERT INTO tag VALUES (75,'summary','59%');
INSERT INTO tag VALUES (76,'60x','60%');
INSERT INTO tag VALUES (77,'61x','61%');
INSERT INTO tag VALUES (78,'62x','62%');
INSERT INTO tag VALUES (79,'63x','63%');
INSERT INTO tag VALUES (80,'64x','64%');
INSERT INTO tag VALUES (81,'65x','65%');
INSERT INTO tag VALUES (82,'66x','66%');
INSERT INTO tag VALUES (83,'67x','67%');
INSERT INTO tag VALUES (84,'68x','68%');
INSERT INTO tag VALUES (85,'subject','69%');
INSERT INTO tag VALUES (86,'70x','70%');
INSERT INTO tag VALUES (87,'71x','71%');
INSERT INTO tag VALUES (88,'author-ad','72%');
INSERT INTO tag VALUES (89,'73x','73%');
INSERT INTO tag VALUES (90,'74x','74%');
INSERT INTO tag VALUES (91,'75x','75%');
INSERT INTO tag VALUES (92,'76x','76%');
INSERT INTO tag VALUES (93,'77x','77%');
INSERT INTO tag VALUES (94,'78x','78%');
INSERT INTO tag VALUES (95,'79x','79%');
INSERT INTO tag VALUES (96,'80x','80%');
INSERT INTO tag VALUES (97,'81x','81%');
INSERT INTO tag VALUES (98,'82x','82%');
INSERT INTO tag VALUES (99,'83x','83%');
INSERT INTO tag VALUES (100,'84x','84%');
INSERT INTO tag VALUES (101,'electr','85%');
INSERT INTO tag VALUES (102,'86x','86%');
INSERT INTO tag VALUES (103,'87x','87%');
INSERT INTO tag VALUES (104,'88x','88%');
INSERT INTO tag VALUES (105,'89x','89%');
INSERT INTO tag VALUES (106,'publication','90%');
INSERT INTO tag VALUES (107,'pub-conf-cit','91%');
INSERT INTO tag VALUES (108,'92x','92%');
INSERT INTO tag VALUES (109,'93x','93%');
INSERT INTO tag VALUES (110,'94x','94%');
INSERT INTO tag VALUES (111,'95x','95%');
INSERT INTO tag VALUES (112,'catinfo','96%');
INSERT INTO tag VALUES (113,'97x','97%');
INSERT INTO tag VALUES (114,'98x','98%');
INSERT INTO tag VALUES (115,'url','8564_u');
INSERT INTO tag VALUES (116,'experiment','909C0e');
INSERT INTO tag VALUES (117,'record ID','001');
INSERT INTO tag VALUES (118,'isbn','020__a');
INSERT INTO tag VALUES (119,'issn','022__a');
INSERT INTO tag VALUES (120,'coden','030__a');
INSERT INTO tag VALUES (121,'doi','909C4a');
INSERT INTO tag VALUES (122,'850x','850%');
INSERT INTO tag VALUES (123,'851x','851%');
INSERT INTO tag VALUES (124,'852x','852%');
INSERT INTO tag VALUES (125,'853x','853%');
INSERT INTO tag VALUES (126,'854x','854%');
INSERT INTO tag VALUES (127,'855x','855%');
INSERT INTO tag VALUES (128,'857x','857%');
INSERT INTO tag VALUES (129,'858x','858%');
INSERT INTO tag VALUES (130,'859x','859%');
INSERT INTO tag VALUES (131,'journal','909C4%');
INSERT INTO tag VALUES (132,'collaboration','710__g');
INSERT INTO tag VALUES (133,'first author affiliation','100__u');
INSERT INTO tag VALUES (134,'additional author affiliation','700__u');
INSERT INTO tag VALUES (135,'caption','8564_y');
INSERT INTO tag VALUES (136,'journal page','909C4c');
INSERT INTO tag VALUES (137,'journal title','909C4p');
INSERT INTO tag VALUES (138,'journal volume','909C4v');
INSERT INTO tag VALUES (139,'journal year','909C4y');
INSERT INTO tag VALUES (140,'comment','500__a');
INSERT INTO tag VALUES (141,'title','245__a');
INSERT INTO tag VALUES (142,'main abstract','245__a');
INSERT INTO tag VALUES (143,'internal notes','595__a');
INSERT INTO tag VALUES (144,'other relationship entry', '787%');
+-- INSERT INTO tag VALUES (145,'authority: main personal name','100__a'); -- already exists under a different name ('first author name')
+INSERT INTO tag VALUES (146,'authority: alternative personal name','400__a');
+-- INSERT INTO tag VALUES (147,'authority: personal name from other record','500__a'); -- already exists under a different name ('comment')
+INSERT INTO tag VALUES (148,'authority: organization main name','110__a');
+INSERT INTO tag VALUES (149,'authority: organization alternative name','410__a');
+INSERT INTO tag VALUES (150,'authority: organization main name from other record','510__a');
+INSERT INTO tag VALUES (151,'authority: uniform title','130__a');
+INSERT INTO tag VALUES (152,'authority: uniform title alternatives','430__a');
+INSERT INTO tag VALUES (153,'authority: uniform title from other record','530__a');
+INSERT INTO tag VALUES (154,'authority: subject from other record','150__a');
+INSERT INTO tag VALUES (155,'authority: subject alternative name','450__a');
+INSERT INTO tag VALUES (156,'authority: subject main name','550__a');
+-- tags for misc index
+INSERT INTO tag VALUES (157,'031x','031%');
+INSERT INTO tag VALUES (158,'032x','032%');
+INSERT INTO tag VALUES (159,'033x','033%');
+INSERT INTO tag VALUES (160,'034x','034%');
+INSERT INTO tag VALUES (161,'035x','035%');
+INSERT INTO tag VALUES (162,'036x','036%');
+INSERT INTO tag VALUES (163,'037x','037%');
+INSERT INTO tag VALUES (164,'038x','038%');
+INSERT INTO tag VALUES (165,'080x','080%');
+INSERT INTO tag VALUES (166,'082x','082%');
+INSERT INTO tag VALUES (167,'083x','083%');
+INSERT INTO tag VALUES (168,'084x','084%');
+INSERT INTO tag VALUES (169,'085x','085%');
+INSERT INTO tag VALUES (170,'086x','086%');
+INSERT INTO tag VALUES (171,'240x','240%');
+INSERT INTO tag VALUES (172,'242x','242%');
+INSERT INTO tag VALUES (173,'243x','243%');
+INSERT INTO tag VALUES (174,'244x','244%');
+INSERT INTO tag VALUES (175,'247x','247%');
+INSERT INTO tag VALUES (176,'521x','521%');
+INSERT INTO tag VALUES (177,'522x','522%');
+INSERT INTO tag VALUES (178,'524x','524%');
+INSERT INTO tag VALUES (179,'525x','525%');
+INSERT INTO tag VALUES (180,'526x','526%');
+INSERT INTO tag VALUES (181,'650x','650%');
+INSERT INTO tag VALUES (182,'651x','651%');
+INSERT INTO tag VALUES (183,'6531_v','6531_v');
+INSERT INTO tag VALUES (184,'6531_y','6531_y');
+INSERT INTO tag VALUES (185,'6531_9','6531_9');
+INSERT INTO tag VALUES (186,'654x','654%');
+INSERT INTO tag VALUES (187,'655x','655%');
+INSERT INTO tag VALUES (188,'656x','656%');
+INSERT INTO tag VALUES (189,'657x','657%');
+INSERT INTO tag VALUES (190,'658x','658%');
+INSERT INTO tag VALUES (191,'711x','711%');
+INSERT INTO tag VALUES (192,'900x','900%');
+INSERT INTO tag VALUES (193,'901x','901%');
+INSERT INTO tag VALUES (194,'902x','902%');
+INSERT INTO tag VALUES (195,'903x','903%');
+INSERT INTO tag VALUES (196,'904x','904%');
+INSERT INTO tag VALUES (197,'905x','905%');
+INSERT INTO tag VALUES (198,'906x','906%');
+INSERT INTO tag VALUES (199,'907x','907%');
+INSERT INTO tag VALUES (200,'908x','908%');
+INSERT INTO tag VALUES (201,'909C1x','909C1%');
+INSERT INTO tag VALUES (202,'909C5x','909C5%');
+INSERT INTO tag VALUES (203,'909CSx','909CS%');
+INSERT INTO tag VALUES (204,'909COx','909CO%');
+INSERT INTO tag VALUES (205,'909CKx','909CK%');
+INSERT INTO tag VALUES (206,'909CPx','909CP%');
+INSERT INTO tag VALUES (207,'981x','981%');
+INSERT INTO tag VALUES (208,'982x','982%');
+INSERT INTO tag VALUES (209,'983x','983%');
+INSERT INTO tag VALUES (210,'984x','984%');
+INSERT INTO tag VALUES (211,'985x','985%');
+INSERT INTO tag VALUES (212,'986x','986%');
+INSERT INTO tag VALUES (213,'987x','987%');
+INSERT INTO tag VALUES (214,'988x','988%');
+INSERT INTO tag VALUES (215,'989x','989%');
+-- authority-controlled tags
+INSERT INTO tag VALUES (216,'author control','100__0');
+INSERT INTO tag VALUES (217,'institution control','110__0');
+INSERT INTO tag VALUES (218,'journal control','130__0');
+INSERT INTO tag VALUES (219,'subject control','150__0');
+INSERT INTO tag VALUES (220,'additional institution control', '260__0');
+INSERT INTO tag VALUES (221,'additional author control', '700__0');
+
+
-INSERT INTO idxINDEX VALUES (1,'global','This index contains words/phrases from global fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (2,'collection','This index contains words/phrases from collection identifiers fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (3,'abstract','This index contains words/phrases from abstract fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (4,'author','This index contains fuzzy words/phrases from author fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (5,'keyword','This index contains words/phrases from keyword fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (6,'reference','This index contains words/phrases from references fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (7,'reportnumber','This index contains words/phrases from report numbers fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (8,'title','This index contains words/phrases from title fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (9,'fulltext','This index contains words/phrases from fulltext fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (10,'year','This index contains words/phrases from year fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (11,'journal','This index contains words/phrases from journal publication information fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (12,'collaboration','This index contains words/phrases from collaboration name fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (13,'affiliation','This index contains words/phrases from institutional affiliation fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (14,'exactauthor','This index contains exact words/phrases from author fields.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (15,'caption','This index contains exact words/phrases from figure captions.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (16,'firstauthor','This index contains fuzzy words/phrases from first author field.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (17,'exactfirstauthor','This index contains exact words/phrases from first author field.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (18,'authorcount','This index contains number of authors of the record.','0000-00-00 00:00:00', '', 'native');
-INSERT INTO idxINDEX VALUES (19,'exacttitle','This index contains exact words/phrases from title fields.','0000-00-00 00:00:00', '', 'native');
+INSERT INTO idxINDEX VALUES (1,'global','This index contains words/phrases from global fields.','0000-00-00 00:00:00', '', 'native', 'INDEX-SYNONYM-TITLE,exact','No','No','No','BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (2,'collection','This index contains words/phrases from collection identifiers fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (3,'abstract','This index contains words/phrases from abstract fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (4,'author','This index contains fuzzy words/phrases from author fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer');
+INSERT INTO idxINDEX VALUES (5,'keyword','This index contains words/phrases from keyword fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (6,'reference','This index contains words/phrases from references fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (7,'reportnumber','This index contains words/phrases from report numbers fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (8,'title','This index contains words/phrases from title fields.','0000-00-00 00:00:00', '', 'native','INDEX-SYNONYM-TITLE,exact','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (9,'fulltext','This index contains words/phrases from fulltext fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexFulltextTokenizer');
+INSERT INTO idxINDEX VALUES (10,'year','This index contains words/phrases from year fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexYearTokenizer');
+INSERT INTO idxINDEX VALUES (11,'journal','This index contains words/phrases from journal publication information fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexJournalTokenizer');
+INSERT INTO idxINDEX VALUES (12,'collaboration','This index contains words/phrases from collaboration name fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (13,'affiliation','This index contains words/phrases from institutional affiliation fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (14,'exactauthor','This index contains exact words/phrases from author fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexExactAuthorTokenizer');
+INSERT INTO idxINDEX VALUES (15,'caption','This index contains exact words/phrases from figure captions.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (16,'firstauthor','This index contains fuzzy words/phrases from first author field.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer');
+INSERT INTO idxINDEX VALUES (17,'exactfirstauthor','This index contains exact words/phrases from first author field.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexExactAuthorTokenizer');
+INSERT INTO idxINDEX VALUES (18,'authorcount','This index contains number of authors of the record.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorCountTokenizer');
+INSERT INTO idxINDEX VALUES (19,'exacttitle','This index contains exact words/phrases from title fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (20,'authorityauthor','This index contains words/phrases from author authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer');
+INSERT INTO idxINDEX VALUES (21,'authorityinstitution','This index contains words/phrases from institution authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (22,'authorityjournal','This index contains words/phrases from journal authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (23,'authoritysubject','This index contains words/phrases from subject authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer');
+INSERT INTO idxINDEX VALUES (24,'itemcount','This index contains number of copies of items in the library.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexItemCountTokenizer');
+INSERT INTO idxINDEX VALUES (25,'filetype','This index contains extensions of files connected to records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexFiletypeTokenizer');
+INSERT INTO idxINDEX VALUES (26,'miscellaneous','This index contains words/phrases from miscellaneous fields.','0000-00-00 00:00:00', '', 'native','','No','No','No', 'BibIndexDefaultTokenizer');
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (1,1);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (2,10);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (3,4);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (4,3);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (5,5);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (6,8);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (7,6);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (8,2);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (9,9);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (10,12);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (11,19);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (12,20);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (13,21);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (14,22);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (15,27);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (16,28);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (17,29);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (18,30);
INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (19,32);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (20,33);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (21,34);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (22,35);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (23,36);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (24,37);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (25,38);
+INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (26,39);
+
+
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 2);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 3);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 5);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 7);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 8);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 10);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 11);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 12);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 13);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 19);
+INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 26);
+
+
INSERT INTO sbmACTION VALUES ('Submit New Record','SBI','running','1998-08-17','2001-08-08','','Submit New Record');
INSERT INTO sbmACTION VALUES ('Modify Record','MBI','modify','1998-08-17','2001-11-07','','Modify Record');
INSERT INTO sbmACTION VALUES ('Submit New File','SRV','revise','0000-00-00','2001-11-07','','Submit New File');
INSERT INTO sbmACTION VALUES ('Approve Record','APP','approve','2001-11-08','2002-06-11','','Approve Record');
INSERT INTO sbmALLFUNCDESCR VALUES ('Ask_For_Record_Details_Confirmation','');
INSERT INTO sbmALLFUNCDESCR VALUES ('CaseEDS','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Modify_Interface',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Recid',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Finish_Submission','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Info','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Recid', 'This function gets the recid for a document with a given report-number (as stored in the global variable rn).');
INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Report_Number',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Sysno',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Insert_Modify_Record','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Insert_Record',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Is_Original_Submitter','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Is_Referee','This function checks whether the logged-in user is a referee for the current document');
INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Approval_Request_to_Referee',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Approval_Withdrawn_to_Referee',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Submitter',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Modify_Record',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Record','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_From_Pending','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_to_Done',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_to_Pending',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_Approval_Request',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_APP','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_DEL','Prepare a message for the user informing them that their record was successfully deleted.');
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_MBI',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_SRV',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Register_Approval_Request',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Register_Referee_Decision',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Withdraw_Approval_Request',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Report_Number_Generation',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Second_Report_Number_Generation','Generate a secondary report number for a document.');
INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Approval_Request',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Send_APP_Mail','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Delete_Mail','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Modify_Mail',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Send_SRV_Mail',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Set_Embargo','Set an embargo on all the documents of a given record.');
INSERT INTO sbmALLFUNCDESCR VALUES ('Stamp_Replace_Single_File_Approval','Stamp a single file when a document is approved.');
INSERT INTO sbmALLFUNCDESCR VALUES ('Stamp_Uploaded_Files','Stamp some of the files that were uploaded during a submission.');
INSERT INTO sbmALLFUNCDESCR VALUES ('Test_Status','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Update_Approval_DB',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('User_is_Record_Owner_or_Curator','Check if user is owner or special editor of a record');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Files_to_Storage','Attach files received from chosen file input element(s)');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Revised_Files_to_Storage','Revise files initially uploaded with "Move_Files_to_Storage"');
INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Dummy_MARC_XML_Record','');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_CKEditor_Files_to_Storage','Transfer files attached to the record with the CKEditor');
INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Upload_Files_Interface','Display generic interface to add/revise/delete files. To be used before function "Move_Uploaded_Files_to_Storage"');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Uploaded_Files_to_Storage','Attach files uploaded with "Create_Upload_Files_Interface"');
INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Photos_to_Storage','Attach/edit the pictures uploaded with the "create_photos_manager_interface()" function');
INSERT INTO sbmALLFUNCDESCR VALUES ('Link_Records','Link two records together via MARC');
INSERT INTO sbmALLFUNCDESCR VALUES ('Video_Processing',NULL);
INSERT INTO sbmALLFUNCDESCR VALUES ('Set_RN_From_Sysno', 'Set the value of global rn variable to the report number identified by sysno (recid)');
INSERT INTO sbmALLFUNCDESCR VALUES ('Notify_URL','Access URL, possibly to post content');
INSERT INTO sbmFIELDDESC VALUES ('Upload_Photos',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a photos upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Photos_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\n\r\nfrom invenio.websubmit_functions.Shared_Functions import ParamFromFile\r\nfrom invenio.websubmit_functions.Move_Photos_to_Storage import \\\r\n read_param_file, \\\r\n create_photos_manager_interface, \\\r\n get_session_id\r\n\r\n# Retrieve session id\r\ntry:\r\n # User info is defined only in MBI/MPI actions...\r\n session_id = get_session_id(None, uid, user_info) \r\nexcept:\r\n session_id = get_session_id(req, uid, {})\r\n\r\n# Retrieve context\r\nindir = curdir.split(\'/\')[-3]\r\ndoctype = curdir.split(\'/\')[-2]\r\naccess = curdir.split(\'/\')[-1]\r\n\r\n# Get the record ID, if any\r\nsysno = ParamFromFile(\"%s/%s\" % (curdir,\'SN\')).strip()\r\n\r\n\"\"\"\r\nModify below the configuration of the photos manager interface.\r\nNote: `can_reorder_photos\' parameter is not yet fully taken into consideration\r\n\r\nDocumentation of the function is available at <http://localhost/admin/websubmit/websubmitadmin.py/functionedit?funcname=Move_Photos_to_Storage>\r\n\"\"\"\r\ntext += create_photos_manager_interface(sysno, session_id, uid,\r\n doctype, indir, curdir, access,\r\n can_delete_photos=True,\r\n can_reorder_photos=True,\r\n can_upload_photos=True,\r\n editor_width=700,\r\n editor_height=400,\r\n initial_slider_value=100,\r\n max_slider_value=200,\r\n min_slider_value=80)','0000-00-00','0000-00-00',NULL,NULL,0);
INSERT INTO sbmCHECKS VALUES ('AUCheck','function AUCheck(txt) {\r\n var res=1;\r\n tmp=txt.indexOf(\"\\015\");\r\n while (tmp != -1) {\r\n left=txt.substring(0,tmp);\r\n right=txt.substring(tmp+2,txt.length);\r\n txt=left + \"\\012\" + right;\r\n tmp=txt.indexOf(\"\\015\");\r\n }\r\n tmp=txt.indexOf(\"\\012\");\r\n if (tmp==-1){\r\n line=txt;\r\n txt=\'\';}\r\n else{\r\n line=txt.substring(0,tmp);\r\n txt=txt.substring(tmp+1,txt.length);}\r\n while (line != \"\"){\r\n coma=line.indexOf(\",\");\r\n left=line.substring(0,coma);\r\n right=line.substring(coma+1,line.length);\r\n coma2=right.indexOf(\",\");\r\n space=right.indexOf(\" \");\r\n if ((coma==-1)||(left==\"\")||(right==\"\")||(space!=0)||(coma2!=-1)){\r\n res=0;\r\n error_log=line;\r\n }\r\n tmp=txt.indexOf(\"\\012\");\r\n if (tmp==-1){\r\n line=txt;\r\n txt=\'\';}\r\n else{\r\n line=txt.substring(0,tmp-1);\r\n txt=txt.substring(tmp+1,txt.length);}\r\n }\r\n if (res == 0){\r\n alert(\"This author name cannot be managed \\: \\012\\012\" + error_log + \" \\012\\012It is not in the required format!\\012Put one author per line and a comma (,) between the name and the firstname initial letters. \\012The name is going first, followed by the firstname initial letters.\\012Do not forget the whitespace after the comma!!!\\012\\012Example \\: Put\\012\\012Le Meur, J Y \\012Baron, T \\012\\012for\\012\\012Le Meur Jean-Yves & Baron Thomas.\");\r\n return 0;\r\n } \r\n return 1; \r\n}','1998-08-18','0000-00-00','','');
INSERT INTO sbmCHECKS VALUES ('DatCheckNew','function DatCheckNew(txt) {\r\n var res=1;\r\n if (txt.length != 10){res=0;}\r\n if (txt.indexOf(\"/\") != 2){res=0;}\r\n if (txt.lastIndexOf(\"/\") != 5){res=0;}\r\n tmp=parseInt(txt.substring(0,2),10);\r\n if ((tmp > 31)||(tmp < 1)||(isNaN(tmp))){res=0;}\r\n tmp=parseInt(txt.substring(3,5),10);\r\n if ((tmp > 12)||(tmp < 1)||(isNaN(tmp))){res=0;}\r\n tmp=parseInt(txt.substring(6,10),10);\r\n if ((tmp < 1)||(isNaN(tmp))){res=0;}\r\n if (txt.length == 0){res=1;}\r\n if (res == 0){\r\n alert(\"Please enter a correct Date \\012Format: dd/mm/yyyy\");\r\n return 0;\r\n }\r\n return 1; \r\n}','0000-00-00','0000-00-00','','');
INSERT INTO sbmFIELDDESC VALUES ('Upload_Files',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a file upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Uploaded_Files_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\nimport os\r\nfrom invenio.bibdocfile_managedocfiles import create_file_upload_interface\r\nfrom invenio.websubmit_functions.Shared_Functions import ParamFromFile\r\n\r\nindir = ParamFromFile(os.path.join(curdir, \'indir\'))\r\ndoctype = ParamFromFile(os.path.join(curdir, \'doctype\'))\r\naccess = ParamFromFile(os.path.join(curdir, \'access\'))\r\ntry:\r\n sysno = int(ParamFromFile(os.path.join(curdir, \'SN\')).strip())\r\nexcept:\r\n sysno = -1\r\nln = ParamFromFile(os.path.join(curdir, \'ln\'))\r\n\r\n\"\"\"\r\nRun the following to get the list of parameters of function \'create_file_upload_interface\':\r\necho -e \'from invenio.bibdocfile_managedocfiles import create_file_upload_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext = create_file_upload_interface(recid=sysno,\r\n print_outside_form_tag=False,\r\n include_headers=True,\r\n ln=ln,\r\n doctypes_and_desc=[(\'main\',\'Main document\'),\r\n (\'additional\',\'Figure, schema, etc.\')],\r\n can_revise_doctypes=[\'*\'],\r\n can_describe_doctypes=[\'main\'],\r\n can_delete_doctypes=[\'additional\'],\r\n can_rename_doctypes=[\'main\'],\r\n sbm_indir=indir, sbm_doctype=doctype, sbm_access=access)[1]\r\n','0000-00-00','0000-00-00',NULL,NULL,0);
INSERT INTO sbmFORMATEXTENSION VALUES ('WORD','.doc');
INSERT INTO sbmFORMATEXTENSION VALUES ('PostScript','.ps');
INSERT INTO sbmFORMATEXTENSION VALUES ('PDF','.pdf');
INSERT INTO sbmFORMATEXTENSION VALUES ('JPEG','.jpg');
INSERT INTO sbmFORMATEXTENSION VALUES ('JPEG','.jpeg');
INSERT INTO sbmFORMATEXTENSION VALUES ('GIF','.gif');
INSERT INTO sbmFORMATEXTENSION VALUES ('PPT','.ppt');
INSERT INTO sbmFORMATEXTENSION VALUES ('HTML','.htm');
INSERT INTO sbmFORMATEXTENSION VALUES ('HTML','.html');
INSERT INTO sbmFORMATEXTENSION VALUES ('Latex','.tex');
INSERT INTO sbmFORMATEXTENSION VALUES ('Compressed PostScript','.ps.gz');
INSERT INTO sbmFORMATEXTENSION VALUES ('Tarred Tex (.tar)','.tar');
INSERT INTO sbmFORMATEXTENSION VALUES ('Text','.txt');
INSERT INTO sbmFUNDESC VALUES ('Get_Recid','record_search_pattern');
INSERT INTO sbmFUNDESC VALUES ('Get_Report_Number','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','addressesMBI');
INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','sourceDoc');
INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','categ_file_appreq');
INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','categ_rnseek_appreq');
INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','note_file_appreq');
INSERT INTO sbmFUNDESC VALUES ('Register_Referee_Decision','decision_file');
INSERT INTO sbmFUNDESC VALUES ('Withdraw_Approval_Request','categ_file_withd');
INSERT INTO sbmFUNDESC VALUES ('Withdraw_Approval_Request','categ_rnseek_withd');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','autorngen');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','rnin');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','counterpath');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','rnformat');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','yeargen');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','nblength');
INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','initialvalue');
INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','categ_file_appreq');
INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','categ_rnseek_appreq');
INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Withdrawn_to_Referee','categ_file_withd');
INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Withdrawn_to_Referee','categ_rnseek_withd');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','authorfile');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','status');
INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','authorfile');
INSERT INTO sbmFUNDESC VALUES ('Create_Modify_Interface','fieldnameMBI');
INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','fieldnameMBI');
INSERT INTO sbmFUNDESC VALUES ('Update_Approval_DB','categformatDAM');
INSERT INTO sbmFUNDESC VALUES ('Update_Approval_DB','decision_file');
INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','categformatDAM');
INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','addressesSRV');
INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','directory');
INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','categformatDAM');
INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','addressesDAM');
INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','titleFile');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','titleFile');
INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','emailFile');
INSERT INTO sbmFUNDESC VALUES ('Get_Info','authorFile');
INSERT INTO sbmFUNDESC VALUES ('Get_Info','emailFile');
INSERT INTO sbmFUNDESC VALUES ('Get_Info','titleFile');
INSERT INTO sbmFUNDESC VALUES ('Make_Modify_Record','modifyTemplate');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','addressesAPP');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','categformatAPP');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','newrnin');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','decision_file');
INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','comments_file');
INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casevariable');
INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casevalues');
INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casesteps');
INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casedefault');
INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','noteFile');
INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','emailFile');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','emailFile');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','newrnin');
INSERT INTO sbmFUNDESC VALUES ('Make_Record','sourceTemplate');
INSERT INTO sbmFUNDESC VALUES ('Make_Record','createTemplate');
INSERT INTO sbmFUNDESC VALUES ('Print_Success','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Print_Success','newrnin');
INSERT INTO sbmFUNDESC VALUES ('Print_Success','status');
INSERT INTO sbmFUNDESC VALUES ('Make_Modify_Record','sourceTemplate');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','documenttype');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','iconsize');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_suffixes');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','rename');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_restrictions');
INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_doctypes');
INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','elementNameToDoctype');
INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createIconDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createRelatedFormats');
INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','iconsize');
INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','keepPreviousVersionDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Set_Embargo','date_file');
INSERT INTO sbmFUNDESC VALUES ('Set_Embargo','date_format');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','files_to_be_stamped');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','latex_template');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','latex_template_vars');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','stamp');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','layer');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','switch_file');
INSERT INTO sbmFUNDESC VALUES ('Make_Dummy_MARC_XML_Record','dummyrec_source_tpl');
INSERT INTO sbmFUNDESC VALUES ('Make_Dummy_MARC_XML_Record','dummyrec_create_tpl');
INSERT INTO sbmFUNDESC VALUES ('Print_Success_APP','decision_file');
INSERT INTO sbmFUNDESC VALUES ('Print_Success_APP','newrnin');
INSERT INTO sbmFUNDESC VALUES ('Send_Delete_Mail','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Send_Delete_Mail','record_managers');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_file');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_format');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_yeargen');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rncateg_file');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_counterpath');
INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_nb_length');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','file_to_be_stamped');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','latex_template');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','latex_template_vars');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','new_file_name');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','stamp');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','layer');
INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','switch_file');
INSERT INTO sbmFUNDESC VALUES ('Move_CKEditor_Files_to_Storage','input_fields');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxsize');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','minsize');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','doctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictions');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDeleteDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canReviseDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDescribeDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canCommentDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canKeepDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canAddFormatDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRestrictDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRenameDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canNameNewFiles');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','createRelatedFormats');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','keepDefault');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','showLinks');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','fileLabel');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','filenameLabel');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','descriptionLabel');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','commentLabel');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictionLabel');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','startDoc');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','endDoc');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','defaultFilenameDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxFilesDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','iconsize');
INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','createIconDoctypes');
INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','forceFileRevision');
INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconsize');
INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconformat');
INSERT INTO sbmFUNDESC VALUES ('User_is_Record_Owner_or_Curator','curator_role');
INSERT INTO sbmFUNDESC VALUES ('User_is_Record_Owner_or_Curator','curator_flag');
INSERT INTO sbmFUNDESC VALUES ('Link_Records','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Link_Records','edsrn2');
INSERT INTO sbmFUNDESC VALUES ('Link_Records','directRelationship');
INSERT INTO sbmFUNDESC VALUES ('Link_Records','reverseRelationship');
INSERT INTO sbmFUNDESC VALUES ('Link_Records','keep_original_edsrn2');
INSERT INTO sbmFUNDESC VALUES ('Video_Processing','aspect');
INSERT INTO sbmFUNDESC VALUES ('Video_Processing','batch_template');
INSERT INTO sbmFUNDESC VALUES ('Video_Processing','title');
INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','edsrn');
INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','rep_tags');
INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','record_search_pattern');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','url');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','data');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','admin_emails');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','content_type');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','attempt_times');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','attempt_sleeptime');
INSERT INTO sbmFUNDESC VALUES ('Notify_URL','user');
INSERT INTO sbmGFILERESULT VALUES ('HTML','HTML document');
INSERT INTO sbmGFILERESULT VALUES ('WORD','data');
INSERT INTO sbmGFILERESULT VALUES ('PDF','PDF document');
INSERT INTO sbmGFILERESULT VALUES ('PostScript','PostScript document');
INSERT INTO sbmGFILERESULT VALUES ('PostScript','data ');
INSERT INTO sbmGFILERESULT VALUES ('PostScript','HP Printer Job Language data');
INSERT INTO sbmGFILERESULT VALUES ('jpg','JPEG image');
INSERT INTO sbmGFILERESULT VALUES ('Compressed PostScript','gzip compressed data');
INSERT INTO sbmGFILERESULT VALUES ('Tarred Tex (.tar)','tar archive');
INSERT INTO sbmGFILERESULT VALUES ('JPEG','JPEG image');
INSERT INTO sbmGFILERESULT VALUES ('GIF','GIF');
INSERT INTO swrREMOTESERVER VALUES (1, 'arXiv', 'arxiv.org', 'CDS_Invenio', 'sword_invenio', 'admin', 'SWORD at arXiv', 'http://arxiv.org/abs', 'https://arxiv.org/sword-app/servicedocument', '', 0);
-- end of file
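The sbmFUNDESC rows above map each WebSubmit function name to the names of its configurable parameters. A minimal sketch of how such a lookup works, using an in-memory SQLite database as a stand-in (Invenio itself stores these tables in MySQL; the table shape and sample rows are taken from the INSERTs above):

```python
# Illustrative sketch: sbmFUNDESC maps each WebSubmit function to its
# parameter names. SQLite in memory stands in for Invenio's MySQL here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sbmFUNDESC (function TEXT, param TEXT)")
rows = [
    ('Report_Number_Generation', 'edsrn'),
    ('Report_Number_Generation', 'rnformat'),
    ('Report_Number_Generation', 'yeargen'),
    ('Notify_URL', 'url'),
    ('Notify_URL', 'data'),
]
conn.executemany("INSERT INTO sbmFUNDESC VALUES (?, ?)", rows)

def function_parameters(conn, funcname):
    """Return the parameter names declared for a WebSubmit function."""
    cur = conn.execute(
        "SELECT param FROM sbmFUNDESC WHERE function = ? ORDER BY param",
        (funcname,))
    return [r[0] for r in cur.fetchall()]

print(function_parameters(conn, 'Report_Number_Generation'))
# ['edsrn', 'rnformat', 'yeargen']
```

Adding a row to sbmFUNDESC is how a new parameter becomes editable for a function in the WebSubmit admin interface.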
diff --git a/modules/oaiharvest/lib/oai_harvest_getter.py b/modules/oaiharvest/lib/oai_harvest_getter.py
index 1b99f8d22..e303b311b 100644
--- a/modules/oaiharvest/lib/oai_harvest_getter.py
+++ b/modules/oaiharvest/lib/oai_harvest_getter.py
@@ -1,368 +1,370 @@
## -*- mode: python; coding: utf-8; -*-
##
## This file is part of Invenio.
## Copyright (C) 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""OAI harvestor - 'wget' records from an OAI repository.
This 'getter' simply retrieve the records from an OAI repository.
"""
__revision__ = "$Id$"
try:
import sys
import httplib
import urllib
import getpass
import socket
import re
import time
import base64
import tempfile
import os
except ImportError, e:
print "Error: %s" % e
sys.exit(1)
try:
from invenio.config import CFG_SITE_ADMIN_EMAIL, CFG_VERSION
except ImportError, e:
print "Error: %s" % e
sys.exit(1)
class InvenioOAIRequestError(Exception):
pass
http_response_status_code = {
"000" : "Unknown",
"100" : "Continue",
"200" : "OK",
"302" : "Redirect",
"401" : "Authentication Required",
"403" : "Forbidden",
"404" : "Not Found",
"500" : "Error",
"503" : "Service Unavailable"
}
def http_param_resume(http_param_dict, resumptionToken):
"Change parameter dictionary for harvest resumption"
http_param = {
'verb' : http_param_dict['verb'],
'resumptionToken' : resumptionToken
}
return http_param
def http_request_parameters(http_param_dict, method="POST"):
"Assembly http request parameters for http method used"
return urllib.urlencode(http_param_dict)
def OAI_Session(server, script, http_param_dict , method="POST", output="",
resume_request_nbr=0, secure=False, user=None, password=None,
cert_file=None, key_file=None):
"""Handle one OAI session (1 request, which might lead
to multiple answers because of resumption tokens)
If an output filepath is given, each answer of the OAI repository is saved
at the corresponding filepath, with a unique number appended at the end.
This number starts at 'resume_request_nbr'.
Returns a tuple containing an int corresponding to the last created 'resume_request_nbr' and
a list of harvested files.
"""
sys.stderr.write("Starting the harvesting session at %s" %
time.strftime("%Y-%m-%d %H:%M:%S --> ", time.localtime()))
sys.stderr.write("%s - %s\n" % (server,
http_request_parameters(http_param_dict)))
output_path, output_name = os.path.split(output)
harvested_files = []
i = resume_request_nbr
while True:
harvested_data = OAI_Request(server, script,
http_request_parameters(http_param_dict, method), method,
secure, user, password, cert_file, key_file)
if output:
# Write results to a file specified by 'output'
if harvested_data.lower().find('<'+http_param_dict['verb'].lower()) > -1:
output_fd, output_filename = tempfile.mkstemp(suffix="_%07d.harvested" % (i,), \
prefix=output_name, dir=output_path)
os.write(output_fd, harvested_data)
os.close(output_fd)
harvested_files.append(output_filename)
else:
# No records in output? Do not create a file. Warn the user.
sys.stderr.write("\n<!--\n*** WARNING: NO RECORDS IN THE HARVESTED DATA: "
+ "\n" + repr(harvested_data) + "\n***\n-->\n")
else:
sys.stdout.write(harvested_data)
rt_obj = re.search('<resumptionToken.*>(.+)</resumptionToken>',
harvested_data, re.DOTALL)
if rt_obj is not None and rt_obj != "":
http_param_dict = http_param_resume(http_param_dict, rt_obj.group(1))
i = i + 1
else:
break
return i, harvested_files
def harvest(server, script, http_param_dict , method="POST", output="",
sets=None, secure=False, user=None, password=None,
cert_file=None, key_file=None):
"""
Handle multiple OAI sessions (multiple requests, which might lead to
multiple answers).
Needed for harvesting multiple sets in one go.
Returns a list of filepaths for harvested files.
Parameters:
server - *str* the server URL to harvest
eg: cds.cern.ch
script - *str* path to the OAI script on the server to harvest
eg: /oai2d
http_param_dict - *dict* the URL parameters to send to the OAI script
eg: {'verb': 'ListRecords', 'from': '2004-04-01'}
EXCLUDING the setSpec parameters. See 'sets'
parameter below.
method - *str* if we harvest using POST or GET
eg: POST
output - *str* the path (and base name) where results are
saved. To handle multiple answers (for eg. triggered
by multiple sets harvesting or OAI resumption
tokens), this base name is suffixed with a sequence
number. Eg output='/tmp/z.xml' ->
'/tmp/z.xml.0000000', '/tmp/z.xml.0000001', etc.
If file at given path already exists, it is
overwritten.
When this parameter is left empty, the results are
returned on the standard output.
sets - *list* the sets to harvest. Since this function
offers harvesting of multiple sets in one go, the OAI
'setSpec' cannot be defined in the 'http_param_dict'
dict where other OAI parameters are.
secure - *bool* whether we should use HTTPS (True) or HTTP (False)
user - *str* username to use to login to the server to
harvest in case it requires Basic authentication.
password - *str* a password (in clear) of the server to harvest
in case it requires Basic authentication.
key_file - *str* a path to a PEM file that contains your private
key to connect to the server in case it requires
certificate-based authentication
(If provided, 'cert_file' must also be provided)
cert_file - *str* a path to a PEM file that contains your public
key in case the server to harvest requires
certificate-based authentication
(If provided, 'key_file' must also be provided)
"""
if sets:
resume_request_nbr = 0
all_harvested_files = []
for set in sets:
http_param_dict['set'] = set
resume_request_nbr, harvested_files = OAI_Session(server, script, http_param_dict, method,
output, resume_request_nbr, secure, user, password,
cert_file, key_file)
resume_request_nbr += 1
all_harvested_files.extend(harvested_files)
return all_harvested_files
else:
dummy, harvested_files = OAI_Session(server, script, http_param_dict, method,
output, secure=secure, user=user,
password=password, cert_file=cert_file,
key_file=key_file)
return harvested_files
def OAI_Request(server, script, params, method="POST", secure=False,
user=None, password=None,
key_file=None, cert_file=None, attempts=10):
"""Handle OAI request. Returns harvested data.
Parameters:
server - *str* the server URL to harvest
eg: cds.cern.ch
script - *str* path to the OAI script on the server to harvest
eg: /oai2d
params - *str* the URL parameters to send to the OAI script
eg: verb=ListRecords&from=2004-04-01
method - *str* if we harvest using POST or GET
eg: POST
secure - *bool* whether we should use HTTPS (True) or HTTP (False)
user - *str* username to use to login to the server to
harvest in case it requires Basic authentication.
password - *str* a password (in clear) of the server to harvest
in case it requires Basic authentication.
key_file - *str* a path to a PEM file that contains your private
key to connect to the server in case it requires
certificate-based authentication
(If provided, 'cert_file' must also be provided)
cert_file - *str* a path to a PEM file that contains your public
key in case the server to harvest requires
certificate-based authentication
(If provided, 'key_file' must also be provided)
attempts - *int* maximum number of attempts
Returns harvested data if harvest is successful.
Note: if the environment variable "http_proxy" is set, the defined
proxy will be used to open the connection; however, no special
treatment is supported for HTTPS.
"""
headers = {"Content-type":"application/x-www-form-urlencoded",
"Accept":"text/xml",
"From": CFG_SITE_ADMIN_EMAIL,
"User-Agent":"Invenio %s" % CFG_VERSION}
proxy = os.getenv('http_proxy')
if proxy:
if proxy.startswith('http://'):
proxy = proxy[7:]
proxy = proxy.strip('/ ')
if len(proxy) > 0:
script = 'http://' + server + script
server = proxy
if password:
# We use basic authentication
headers["Authorization"] = "Basic " + base64.encodestring(user + ":" + password).strip()
i = 0
while i < attempts:
i = i + 1
# Try to establish a connection
try:
if secure and not (key_file and cert_file):
# Basic authentication over HTTPS
conn = httplib.HTTPSConnection(server)
elif secure and key_file and cert_file:
# Certificate-based authentication
conn = httplib.HTTPSConnection(server,
key_file=key_file,
cert_file=cert_file)
else:
# Unsecured connection
conn = httplib.HTTPConnection(server)
except (httplib.HTTPException, socket.error), e:
raise InvenioOAIRequestError("An error occurred when trying to connect to %s: %s" % (server, e))
# Connection established, perform a request
try:
if method == "GET":
conn.request("GET", script + "?" + params, headers=headers)
elif method == "POST":
conn.request("POST", script, params, headers)
- except socket.gaierror, (err, str_e):
+ except socket.gaierror, e:
# We'll retry in a few seconds
nb_seconds_retry = 30
sys.stderr.write("An error occured when trying to request %s: %s\nWill retry in %i seconds\n" % (server, e, nb_seconds_retry))
time.sleep(nb_seconds_retry)
continue
# Request sent, get results
try:
response = conn.getresponse()
except (httplib.HTTPException, socket.error), e:
# We'll retry in a few seconds
nb_seconds_retry = 30
sys.stderr.write("An error occured when trying to read response from %s: %s\nWill retry in %i seconds\n" % (server, e, nb_seconds_retry))
time.sleep(nb_seconds_retry)
continue
status = "%d" % response.status
if http_response_status_code.has_key(status):
sys.stderr.write("%s(%s) : %s : %s\n" % (status,
http_response_status_code[status],
response.reason,
params))
else:
sys.stderr.write("%s(%s) : %s : %s\n" % (status,
http_response_status_code['000'],
response.reason, params))
if response.status == 200:
data = response.read()
conn.close()
return data
elif response.status == 503:
try:
nb_seconds_to_wait = \
int(response.getheader("Retry-After", "%d" % (i*i)))
except ValueError:
nb_seconds_to_wait = 10
sys.stderr.write("Retry in %d seconds...\n" % nb_seconds_to_wait)
time.sleep(nb_seconds_to_wait)
elif response.status == 302:
sys.stderr.write("Redirecting...\n")
server = response.getheader("Location").split("/")[2]
script = "/" + \
"/".join(response.getheader("Location").split("/")[3:])
elif response.status == 401:
if user is not None:
sys.stderr.write("Try again\n")
if not secure:
sys.stderr.write("*WARNING* Your password will be sent in clear!\n")
# getting input from user
sys.stderr.write('User:')
try:
user = raw_input()
password = getpass.getpass()
except EOFError, e:
+ sys.stderr.write(str(e))
sys.stderr.write("\n")
sys.exit(1)
except KeyboardInterrupt, e:
+ sys.stderr.write(str(e))
sys.stderr.write("\n")
sys.exit(1)
headers["Authorization"] = "Basic " + base64.encodestring(user + ":" + password).strip()
else:
sys.stderr.write("Retry in 10 seconds...\n")
time.sleep(10)
raise InvenioOAIRequestError("Harvesting interrupted (after 10 attempts) at %s: %s\n"
% (time.strftime("%Y-%m-%d %H:%M:%S --> ", time.localtime()), params))
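The resumption-token loop in OAI_Session above extracts the token with a regex and then reissues the request carrying only the verb and the token, as http_param_resume does. A standalone Python 3 sketch of just that parsing and parameter swap, with no network I/O (function names here are illustrative, not Invenio API):

```python
# Standalone sketch of the resumption-token handling used by OAI_Session:
# extract the token with the same regex, then rebuild the request
# parameters with only 'verb' and 'resumptionToken'.
import re

def extract_resumption_token(harvested_data):
    """Return the resumptionToken found in an OAI-PMH response, or None."""
    rt_obj = re.search('<resumptionToken.*>(.+)</resumptionToken>',
                       harvested_data, re.DOTALL)
    return rt_obj.group(1) if rt_obj else None

def resume_params(http_param_dict, token):
    """Keep only the verb and add the token: per OAI-PMH, no other
    request parameters may accompany a resumptionToken."""
    return {'verb': http_param_dict['verb'], 'resumptionToken': token}

page = ('<ListRecords><record>...</record>'
        '<resumptionToken cursor="0">tok123</resumptionToken>'
        '</ListRecords>')
token = extract_resumption_token(page)
params = resume_params({'verb': 'ListRecords', 'from': '2004-04-01'}, token)
# params == {'verb': 'ListRecords', 'resumptionToken': 'tok123'}
```

When a response carries no token, extract_resumption_token returns None and the harvest loop terminates, mirroring the `break` in OAI_Session.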
diff --git a/modules/webhelp/web/admin/admin.webdoc b/modules/webhelp/web/admin/admin.webdoc
index be0c7dde3..74b92d796 100644
--- a/modules/webhelp/web/admin/admin.webdoc
+++ b/modules/webhelp/web/admin/admin.webdoc
@@ -1,594 +1,611 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
<!-- WebDoc-Page-Title: _(Admin Area)_ -->
<!-- WebDoc-Page-Revision: $Id$ -->
<p>Welcome to the Admin Area of the <CFG_SITE_NAME>. You'll find here
pointers to the available runtime admin-level interfaces and
admin-level guides on how to configure and run the Invenio
system.</p>
<p>Invenio comes as a suite of several more or less independent
modules. You'll find brief descriptions for each admin module below.
(More background information on each module may be read in the <a
href="<CFG_SITE_URL>/help/hacking/modules-overview">modules overview</a> article.)
</p>
<h3>Admin HOWTO guides</h3>
<p><a href="<CFG_SITE_URL>/help/admin/howto">Admin HOWTO Guides</a> give you
you both short and not-so-short recipes and thoughts on some of the
most frequently encountered administrative tasks. They tend to answer
various admin-level questions of a rather general level. The specific
tasks are better addressed by module-specific guides and interfaces
presented below.
</p>
<h3>Data acquisition related modules</h3>
<p>The metadata input into a running Invenio system can be done in two
ways: <em>(i) admin-oriented batch mode</em>,
i.e. <strong>OAI Harvest</strong> to get data from OAI repositories,
<strong>BibConvert</strong> to convert any input data into XML MARC,
and <strong>BibUpload</strong> to upload XML MARC files into Invenio;
and <em>(ii) author-oriented interactive mode</em>,
i.e. <strong>WebSubmit</strong> to submit documents via Web. Once the
data are uploaded in Invenio, you may want to modify them via
<strong>BibEdit</strong> to edit the metadata.
</p>
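The batch mode described above ends with BibUpload ingesting XML MARC. A toy converter sketch, for illustration only (this is not the real BibConvert tool, just the shape of its output): turn a title and author into a minimal MARCXML record ready for batch upload.

```python
# Toy illustration of the batch-mode pipeline: produce minimal MARCXML
# (one 100__ author field, one 245__ title field) of the kind BibUpload
# ingests. Not the real BibConvert; field choice is illustrative.
def to_marcxml(title, author):
    """Build a minimal MARCXML record string."""
    return (
        '<record>'
        '<datafield tag="100" ind1=" " ind2=" ">'
        '<subfield code="a">%s</subfield></datafield>'
        '<datafield tag="245" ind1=" " ind2=" ">'
        '<subfield code="a">%s</subfield></datafield>'
        '</record>' % (author, title))

record = to_marcxml('On the Electrodynamics of Moving Bodies', 'Einstein, A')
```

A file of such records would then be handed to the bibupload daemon for insertion into the repository.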
<table border="1" cellpadding="2">
<tr>
<th class="searchboxheader">Admin Module</th>
<th class="searchboxheader">Admin Description</th>
<th class="searchboxheader">Admin Interface</th>
<th class="searchboxheader">Admin Guide</th>
</tr>
<tr>
<td>
<strong>OAI Harvest Admin</strong>
</td>
<td>
Enables you to configure the OAI metadata harvester for possible
periodic batch upload of data. For example, you can define from
where to harvest, with what periodicity, how to transform data
before uploading them into Invenio, etc. See also
<a href="<CFG_SITE_URL>/admin/oairepository/oairepositoryadmin.py">OAI Repository Admin</a>
to expose your data to other harvesters.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/oaiharvest/oaiharvestadmin.py">OAI Harvest Admin Interface</a>
</td>
<td>
<a href="oaiharvest-admin-guide">OAI Harvest Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibConvert Admin</strong>
</td>
<td>
Explains how to use the bibliographic data converter. Useful for batch
upload of data. For example, when migrating the metadata from
your old system, or when integrating metadata acquisitions from
non-OAI sources, or just about any line-based
not-so-well-structured metadata.
</td>
<td>
<small class="note">command-line program</small>
</td>
<td>
<a href="bibconvert-admin-guide">BibConvert Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibMatch Admin</strong>
</td>
<td>
Tools for matching XML MARC files against the repository content.
Useful when importing third-party metadata files.
</td>
<td>
<small class="note">command-line program</small>
</td>
<td>
<a href="bibmatch-admin-guide">BibMatch Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibUpload Admin</strong>
</td>
<td>
Enables you to configure any local special operations to be
performed on the data being uploaded.
</td>
<td>
<small class="note">command-line program</small>
</td>
<td>
<a href="bibupload-admin-guide">BibUpload Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebSubmit Admin</strong>
</td>
<td>
Enables you to configure the submission interface and logic for various document types.
For example, you can define which metadata fields should be submitted for each
doctype, what to do with the submitted values before uploading,
a possible peer review and approval strategy, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/websubmit/websubmitadmin.py">WebSubmit Admin Interface</a>
</td>
<td>
<a href="websubmit-admin-guide">WebSubmit Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>ElmSubmit Admin</strong>
</td>
<td>
Enables you to configure the submission of documents by electronic mail.
</td>
<td>
<small class="note">command-line program</small>
</td>
<td>
<a href="elmsubmit-admin-guide">ElmSubmit Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibEdit Admin</strong>
</td>
<td>
Enables you to directly manipulate bibliographic data, edit a single
record, do global replacements, and other cataloguing tasks.
</td>
<td>
<a href="<CFG_SITE_URL>/<CFG_SITE_RECORD>/edit/">BibEdit Admin Interface</a>
</td>
<td>
<a href="bibedit-admin-guide">BibEdit Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibCheck Admin</strong>
</td>
<td>
Enables you to manage BibCheck configuration files. BibCheck is used to verify and
correct records.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/bibcheck/bibcheckadmin.py">BibCheck Admin Interface</a>
</td>
<td>
<a href="bibcheck-admin-guide">BibCheck Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>Publiline Admin</strong>
</td>
<td>
Enables you to approve documents by using a complex approval workflow.
</td>
<td>
None
</td>
<td>
<a href="publiline-admin-guide">Publiline Admin Guide</a>
</td>
</tr>
+
+<tr>
+<td>
+ <strong>BibAuthority Admin</strong>
+</td>
+<td>
+ Enables you to configure Authority Control in Invenio.
+</td>
+<td>
+ None
+</td>
+<td>
+ <a href="bibauthority-admin-guide">Authority Record Admin Guide</a>
+</td>
+</tr>
+
+
</table>
<h3>Data provision related modules</h3>
<p>The metadata output from a running Invenio system to the end-user
is covered by several modules: <strong>BibIndex</strong> to index the
metadata, <strong>BibRank</strong> to optionally rank them,
<strong>BibFormat</strong> to format them for output,
and <strong>WebSearch</strong> to provide search interfaces and a search
engine.
</p>
<table border="1">
<tr>
<th class="searchboxheader">Admin Module</th>
<th class="searchboxheader">Admin Description</th>
<th class="searchboxheader">Admin Interface</th>
<th class="searchboxheader">Admin Guide</th>
</tr>
<tr>
<td>
<strong>BibIndex Admin</strong>
</td>
<td>
Enables you to configure "word files", i.e. to define which
bibliographic fields are indexed into which word indexes. The word
indexes are then used by the search interface. For example, you can
define that the logical author index is constructed from physical
<code>100 $a</code> and <code>700 $a</code> bibliographic tags, you
can force reindexing of the fulltext index, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/bibindex/bibindexadmin.py">Manage indexes</a>
<p>
<a href="<CFG_SITE_URL>/admin/bibindex/bibindexadmin.py/field">Manage logical fields</a>
</td>
<td>
<a href="bibindex-admin-guide">BibIndex Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibRank Admin</strong>
</td>
<td>
Enables you to configure various ranking methods to be used by the search engine.
You can rebalance existing ranking sets, create new ranking methods, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/bibrank/bibrankadmin.py">BibRank Admin Interface</a>
</td>
<td>
<a href="bibrank-admin-guide">BibRank Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibSort Admin</strong>
</td>
<td>
Enables you to configure the sorting methods displayed to the user. You can update
the sorting data for a particular set of records, you can rebalance all the sorting
data, you can remove all the sorting data associated with a method.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/bibsort/bibsortadmin.py">BibSort Admin Interface</a>
</td>
<td>
<a href="bibsort-admin-guide">BibSort Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibClassify Admin</strong>
</td>
<td>
Enables you to automatically classify documents according to
keyword taxonomies and thesauri.
</td>
<td>
<small class="note">command-line configuration</small>
</td>
<td>
<a href="bibclassify-admin-guide">BibClassify Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibFormat Admin</strong>
</td>
<td>
Enables you to specify how the bibliographic data is presented to
the end user in the search interface. You can decide that titles should be
presented in bold font, that for each author an automatic link to the
author's home page should be created according to some recipe, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/bibformat/bibformatadmin.py">BibFormat Admin Interface</a>
</td>
<td>
<a href="bibformat-admin-guide">BibFormat Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>BibSword Client Admin</strong>
</td>
<td>
Enables you to consult and refresh the status of records forwarded
to any remote SWORD server. It also gives information about the
SWORD configuration and credentials of the remote server. Finally,
it allows the admin to forward records from Invenio to any
configured remote server.
</td>
<td>
<a href="<CFG_SITE_URL>/bibsword/">BibSword Client Admin Interface</a>
</td>
<td>
<a href="bibsword-client-admin-guide">BibSword Client Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>OAI Repository Admin</strong>
</td>
<td>
Enables you to define which records should be tagged to be exposed
via the OAI Repository gateway, so that other repositories
can harvest your records. See also
<a href="<CFG_SITE_URL>/admin/oaiharvest/oaiharvestadmin.py">OAI Harvest Admin</a>
to import data into your repository.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/oairepository/oairepositoryadmin.py">OAI Repository Admin Interface</a>
</td>
<td>
<a href="oairepository-admin-guide">OAI Repository Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebSearch Admin</strong>
</td>
<td>
Enables you to configure the search interface for various metadata
collections. You can define new collections and organize them in
the tree, you can define various portalboxes that would appear on
the screen, you can define search options and search fields to
present, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/websearch/websearchadmin.py">WebSearch Admin Interface</a>
</td>
<td>
<a href="websearch-admin-guide">WebSearch Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebStat Admin</strong>
</td>
<td>
Enables you to configure the usage statistics reporting system.
</td>
<td>
<small class="note">command-line configuration</small>
</td>
<td>
<a href="webstat-admin-guide">WebStat Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebJournal Admin</strong>
</td>
<td> Enables you to configure and run an online journal hosted on your
Invenio server, using the same tools and concepts as those you
use to manage bibliographic data. </td>
<td>
<a href="<CFG_SITE_URL>/admin/webjournal/webjournaladmin.py">WebJournal Admin Interface</a>
</td>
<td>
<a href="webjournal-admin-guide">WebJournal Admin Guide</a><br/><br/>
<a href="webjournal-editor-guide">WebJournal Editor Guide</a>
</td>
</tr>
</table>
<h3>Personalization related modules</h3>
<p>The Invenio interface can be personalized to suit the different needs
of different end-users. This functionality is covered by several
modules: <strong>WebSession</strong> to identify users and their
personal configurations, <strong>WebBasket</strong> to provide
personal baskets or document carts, and <strong>WebAlert</strong> to
set up personal email notification alerts.
</p>
<table border="1">
<tr>
<th class="searchboxheader">Admin Module</th>
<th class="searchboxheader">Admin Description</th>
<th class="searchboxheader">Admin Interface</th>
<th class="searchboxheader">Admin Guide</th>
</tr>
<tr>
<td>
<strong>WebSession Admin</strong>
</td>
<td>
Enables you to inspect the status of guest sessions and to expire
them, to see the status and details of registered users, etc.
</td>
<td>
<small class="note">not available, but see the guide for the command-line way</small>
</td>
<td>
<a href="websession-admin-guide">WebSession Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebBasket Admin</strong>
</td>
<td>
Enables you to inspect and manipulate user baskets set up on the
system, to make them public/private, etc.
</td>
<td>
<small class="note">not available, but see the guide for the command-line way</small>
</td>
<td>
<a href="webbasket-admin-guide">WebBasket Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebAlert Admin</strong>
</td>
<td>
Enables you to inspect and manipulate user alerts set up on the
system, to run the alert engine, etc.
</td>
<td>
<small class="note">not available, but see the guide for the command-line way</small>
</td>
<td>
<a href="webalert-admin-guide">WebAlert Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebComment Admin</strong>
</td>
<td>
Enables you to manipulate readers' comments and reviews,
see which ones were reported as abuse/spam, delete them, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/webcomment/webcommentadmin.py">WebComment Admin Interface</a>
</td>
<td>
<a href="webcomment-admin-guide">WebComment Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebMessage Admin</strong>
</td>
<td>
Enables you to configure the messaging system.
</td>
<td>
<small class="note">command-line configuration</small>
</td>
<td>
<a href="webmessage-admin-guide">WebMessage Admin Guide</a>
</td>
</tr>
</table>
<h3>System glue modules</h3>
<p>Modules that provide the necessary glue for those presented above
are: <strong>BibSched</strong> to manage and schedule bibliographic
tasks, <strong>WebAccess</strong> to define role-based access control
system to all Invenio services, and <strong>WebStyle</strong> to
define a common look and feel of Invenio web pages.
</p>
<table border="1">
<tr>
<th class="searchboxheader">Admin Module</th>
<th class="searchboxheader">Admin Description</th>
<th class="searchboxheader">Admin Interface</th>
<th class="searchboxheader">Admin Guide</th>
</tr>
<tr>
<td>
<strong>BibSched Admin</strong>
</td>
<td>
Enables you to inspect the bibliographic task queue, to postpone or
reschedule jobs, to set priorities, to run periodical tasks, etc.
</td>
<td>
<small class="note">command-line program</small>
</td>
<td>
<a href="bibsched-admin-guide">BibSched Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebAccess Admin</strong>
</td>
<td>
Enables you to define who has access or admin rights on various Invenio modules.
For example, you can define that John is the bibliographic
data manager, that Jim can modify the search interface pages, that
Jill is the submission approval editor, etc.
</td>
<td>
<a href="<CFG_SITE_URL>/admin/webaccess/webaccessadmin.py">WebAccess Admin Interface - Full functionality</a>
<br /><small>Full interface. Manage, grant, revoke any right.</small>
<p>
<a href="<CFG_SITE_URL>/admin/webaccess/webaccessadmin.py/delegate_startarea">Delegate Rights - With Restrictions</a>
<br /><small>Delegate your rights for some roles.</small>
<p>
<a href="<CFG_SITE_URL>/admin/webaccess/webaccessadmin.py/manageaccounts">Manage Accounts</a>
<br /><small>Enable, disable, and modify accounts.</small>
</td>
<td>
<a href="webaccess-admin-guide">WebAccess Admin Guide</a>
</td>
</tr>
<tr>
<td>
<strong>WebStyle Admin</strong>
</td>
<td>
Enables you to customize default Invenio page style and the CSS style sheet.
</td>
<td>
<small class="note">not available, but see the guide for the command-line way</small>
</td>
<td>
<a href="webstyle-admin-guide">WebStyle Admin Guide</a>
</td>
</tr>
</table>
diff --git a/modules/webhelp/web/admin/howto/howto-authority-1.png b/modules/webhelp/web/admin/howto/howto-authority-1.png
new file mode 100644
index 000000000..a8b16c8a1
Binary files /dev/null and b/modules/webhelp/web/admin/howto/howto-authority-1.png differ
diff --git a/modules/webhelp/web/admin/howto/howto-authority-2.png b/modules/webhelp/web/admin/howto/howto-authority-2.png
new file mode 100644
index 000000000..2a081126d
Binary files /dev/null and b/modules/webhelp/web/admin/howto/howto-authority-2.png differ
diff --git a/modules/webhelp/web/admin/howto/howto-authority.webdoc b/modules/webhelp/web/admin/howto/howto-authority.webdoc
new file mode 100644
index 000000000..78a2b1491
--- /dev/null
+++ b/modules/webhelp/web/admin/howto/howto-authority.webdoc
@@ -0,0 +1,99 @@
+## -*- mode: html; coding: utf-8; -*-
+
+## This file is part of Invenio.
+## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
+##
+## Invenio is free software; you can redistribute it and/or
+## modify it under the terms of the GNU General Public License as
+## published by the Free Software Foundation; either version 2 of the
+## License, or (at your option) any later version.
+##
+## Invenio is distributed in the hope that it will be useful, but
+## WITHOUT ANY WARRANTY; without even the implied warranty of
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+## General Public License for more details.
+##
+## You should have received a copy of the GNU General Public License
+## along with Invenio; if not, write to the Free Software Foundation, Inc.,
+## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+<!-- WebDoc-Page-Title: _(HOWTO Manage Authority Records)_ -->
+<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/admin<lang:link/>">_(Admin Area)_</a> -->
+<!-- WebDoc-Page-Revision: $Id$ -->
+
+<h2>Introduction</h2>
+<p>This page describes how to use Authority Control in Invenio from a user's perspective.</p>
+<p><i>For an explanation of how to configure Authority Control in Invenio, cf. <a href="bibauthority-admin-guide">_(BibAuthority Admin Guide)_</a>.</i></p>
+
+<h2><a name="howto-marc">How to MARC authority records</a></h2>
+
+<h3>1. The 980 field</h3>
+<p>When adding an authority record to INVENIO, whether by uploading a MARC record manually or by adding a new record in BibEdit, it is important to add two separate '980' fields to the record.
+The first field will contain the value “AUTHORITY” in the $a subfield.
+This is to tell INVENIO that this is an authority record.
+The second '980' field will likewise contain a value in its $a subfield, only this time you must specify what kind of authority record it is.
+Typically an author authority record would contain the term “AUTHOR”, an institution would contain “INSTITUTION” etc.
+It is important to communicate these exact terms to the INVENIO admin who will configure how INVENIO handles each of these authority record types for the individual INVENIO modules.
+</p>
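The two required '980' fields can be sketched programmatically. The helper below builds a MARCXML skeleton for an authority record; the function name and layout are illustrative assumptions for this guide, not Invenio's own API:

```python
import xml.etree.ElementTree as ET

def make_authority_record(record_type):
    """Create a MARCXML skeleton for an authority record of the given type.

    Adds the two 980 fields described above: one with $a AUTHORITY,
    and one with $a set to the record type (e.g. AUTHOR, INSTITUTION).
    Illustrative sketch only -- not Invenio's own API.
    """
    record = ET.Element("record")
    for value in ("AUTHORITY", record_type):
        datafield = ET.SubElement(record, "datafield",
                                  tag="980", ind1=" ", ind2=" ")
        subfield = ET.SubElement(datafield, "subfield", code="a")
        subfield.text = value
    return record

author = make_authority_record("AUTHOR")
```

The same terms used in the second 980 $a (here "AUTHOR") are the ones you must communicate to the Invenio admin.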
+
+<h3>2. The 035 field</h3>
+<p>Further, you must add a unique control number to each authority record. In Invenio, this number must be contained in the 035__ $a subfield of the authority record and consists of the MARC code (enclosed in parentheses) of the organization originating the system control number, followed immediately by the number itself, e.g. "(SzGeCERN)abc123".
+Cf. <a href="http://www.loc.gov/marc/authority/concise/ad035.html" target="_blank">035 - System Control Number</a> from the MARC 21 reference page.
+</p>
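As an illustrative sketch (the regular expression and helper name are assumptions for this guide, not part of Invenio), a control number of this shape can be checked like so:

```python
import re

# "(OrgCode)LocalNumber": MARC organization code in parentheses,
# immediately followed by the local control number.
# NOTE: regex and helper are illustrative, not part of Invenio.
CONTROL_NUMBER_RE = re.compile(r"^\([A-Za-z]+\)\S+$")

def is_valid_control_number(value):
    """Return True if value looks like '(SzGeCERN)abc123'."""
    return bool(CONTROL_NUMBER_RE.match(value))
```

For example, `is_valid_control_number("(SzGeCERN)abc123")` holds, while a bare number without the organization code does not.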
+
+<h3>3. Links between MARC records</h3>
+<p>When creating links between MARC records, we must distinguish two cases: 1) references from bibliographic records towards authority records, and 2) references between authority records.</p>
+
+<h4>3.1 Creating a reference from a bibliographic record</h4>
+<p>Example: You have an article (bibliographic record) with Author "Ellis" in the 100__ $a field and you want to create a reference to the authority record for this author. </p>
+<p>This can be done by inserting the control number of this authority record (as contained in the 035__ $a subfield of the authority record) into the $0 subfield of the same 100__ field of the bibliographic record, prefixed by the type of authority record being referenced and a (configurable) separator.
+</p>
+<p>A 100 field might look like this:</p>
+<pre>
+100__ $a Ellis, J.
+ $0 AUTHOR:(CERN)abc123
+ $u CERN
+ $0 INSTITUTION:(CERN)xyz789
+</pre>
+<p>In this case, since we are referencing an AUTHOR authority record, the 100__ $0 subfield would read, e.g. "AUTHOR:(CERN)abc123". If you want to reference an institution, e.g. SLAC, as affiliation for an author, you would prefix the control number with "INSTITUTION". You would add another 100__ $0 subfield to the same 100 field and add the value "INSTITUTION:(CERN)xyz789".</p>
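The insertion of such $0 references can be sketched in MARCXML. The helper below is an illustrative assumption for this guide, not an Invenio function; it appends a typed reference subfield to every field with a given tag:

```python
import xml.etree.ElementTree as ET

# A single bibliographic 100 field, in MARCXML.
MARCXML = """<record>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Ellis, J.</subfield>
    <subfield code="u">CERN</subfield>
  </datafield>
</record>"""

def add_authority_reference(record, tag, ref_type, control_number):
    """Append a $0 subfield like 'AUTHOR:(CERN)abc123' to every field
    with the given tag. Illustrative helper, not part of Invenio."""
    for field in record.findall(".//datafield[@tag='%s']" % tag):
        sub = ET.SubElement(field, "subfield", code="0")
        sub.text = "%s:%s" % (ref_type, control_number)

record = ET.fromstring(MARCXML)
add_authority_reference(record, "100", "AUTHOR", "(CERN)abc123")
add_authority_reference(record, "100", "INSTITUTION", "(CERN)xyz789")
```

After both calls, the 100 field carries the two $0 subfields shown in the example above.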
+
+<h4>3.2 Creating links between authority records</h4>
+<p>Links between authority records use the 5xx fields. AUTHOR records use the 500 fields, INSTITUTION records the 510 fields and so on, according to the MARC 21 standard.
+</p>
+<p><strong>Subfield codes:</strong></p>
+<pre>
+$a - Corporate name or jurisdiction name as entry element (NR)
+ e.g. "SLAC National Accelerator Laboratory" or "European Organization for Nuclear Research"
+
+$w - Control subfield (NR)
+ 'a' - for predecessor
+ 'b' - for successor
+ 't' - for top / parent
+
+$4 - Relationship code (R)
+ The control number of the referenced authority record,
+ e.g. "(CERN)iii000"
+</pre>
+
+<p>Example: You want to add a predecessor to an INSTITUTION authority record. Let's say "Institution A" has control number "(CERN)iii000" and its successor "Institution B" has control number "(CERN)iii001". In order to designate Institution A as the predecessor of Institution B, we would add a 510 field to Institution B with a $w value of 'a', a $a value of 'Institution A', and a $4 value of 'INSTITUTION:(CERN)iii000', like this:</p>
+<pre>
+510__ $a Institution A
+ $w a
+ $4 INSTITUTION:(CERN)iii000
+</pre>
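The predecessor links above can be followed programmatically. In this sketch the records are reduced to a plain dictionary keyed by control number; the data layout and function name are illustrative assumptions, not Invenio's internal storage:

```python
# Each authority record is reduced here to its 510 links:
# a list of ($w control subfield, referenced control number) pairs.
# Data layout is illustrative, not Invenio's internal storage.
records = {
    "(CERN)iii001": {"name": "Institution B",
                     "links": [("a", "(CERN)iii000")]},  # 'a' = predecessor
    "(CERN)iii000": {"name": "Institution A", "links": []},
}

def predecessors(control_number):
    """Follow $w='a' links and return the chain of predecessor names."""
    chain = []
    for w, ref in records[control_number]["links"]:
        if w == "a" and ref in records:
            chain.append(records[ref]["name"])
            chain.extend(predecessors(ref))
    return chain
```

Calling `predecessors("(CERN)iii001")` yields the single predecessor "Institution A" from the example.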
+
+<h3>4. Other MARC for authority records</h3>
+<p>All other MARC fields should follow the <a href="http://www.loc.gov/marc/authority/" target="_blank">MARC 21 Format for Authority Data</a></p>
+
+<h2>Creating collections of authority records</h2>
+<p>Once the authority records have been given the appropriate '980__a' values (cf. above), creating a collection of authority records is no different from creating any other collection in INVENIO. Simply define a new collection with the usual collection query, e.g. 'collection:AUTHOR' for author authority records, 'collection:INSTITUTION' for institutions, etc.</p>
+<p>The recommended way of creating collections for authority records is to create a “virtual collection” for the main 'collection:AUTHORITY' collection and then add the individual authority record collections as regular children of this collection. This will allow you to browse and search within authority records without making this the default for all INVENIO searches.</p>
+
+<h2>How to use authority control in BibEdit</h2>
+<p>When using BibEdit to modify MARC meta-data of bibliographic records, certain fields may be configured (by the admin of your INVENIO installation) to offer you auto-complete functionality based upon the data contained in authority records for that field. For example, if MARC subfield 100__ $a was configured to be under authority control, then typing the beginning of a word into this subfield will trigger a drop-down list, offering you a choice of values to choose from. When you click on one of the entries in the drop-down list, this will not only populate the immediate subfield you are editing, but it will also insert a reference into a new $0 subfield of the same MARC field you are editing. This reference tells the system that the author you are referring to is the author as contained in the 'author' authority record with the given authority record control number.</p>
+<p>The illustration below demonstrates how this works:</p>
+<img src="<CFG_SITE_URL>/img/admin/howto-authority-1.png" alt="autosuggest dropdown" border="0" />
+<p>Typing “Elli” into the 100__ $a subfield will present you with a list of authors whose names contain a word starting with “Elli”. In case there are multiple authors with similar or identical names (as in the example shown here), you will receive additional information about these authors to help you disambiguate. The fields to be used for disambiguation can be configured by your INVENIO administrator. If such fields have not been configured, or if they are not sufficient for disambiguation, the authority record control number will be used to assure a unique value for each entry in the drop-down list. In the example above, the first author can be uniquely identified by his email address, whereas for the second we have only the authority record control number as a uniquely identifying characteristic.</p>
+<img src="<CFG_SITE_URL>/img/admin/howto-authority-2.png" alt="inserted $0 subfield for authority record" border="0" />
+<p>If, in the example shown, you click on the first author in the list, this author's name is automatically inserted into the 100__ $a subfield you were editing, while the authority type and the authority record control number “author:(SzGeCERN)abc123” are inserted into a new $0 subfield (cf. Illustration 2). This new subfield tells INVENIO that “Ellis, John” is associated with the 'author' authority record bearing the authority record control number “(SzGeCERN)abc123”. In this example you can also see that the author's affiliation was entered in the same way, using the auto-complete option for the 100__ $u subfield. In this case the author's affiliation is the “University of Oxford”, which is associated in this INVENIO installation with the 'institution' authority record bearing the authority record control number “(SzGeCERN)inst0001”.</p>
+<p>If INVENIO has no authority record data matching what you type into an authority-controlled subfield, you can still enter a value manually.</p>
\ No newline at end of file
diff --git a/modules/webhelp/web/admin/howto/howto-marc.webdoc b/modules/webhelp/web/admin/howto/howto-marc.webdoc
index 584165948..a6cb97056 100644
--- a/modules/webhelp/web/admin/howto/howto-marc.webdoc
+++ b/modules/webhelp/web/admin/howto/howto-marc.webdoc
@@ -1,1913 +1,1923 @@
## -*- mode: html; coding: utf-8; -*-
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
<!-- WebDoc-Page-Title: HOWTO MARC Your Bibliographic Data -->
<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/admin<lang:link/>">Admin Area</a> &gt; <a class="navtrail" href="howto">Admin HOWTOs</a> -->
<!-- WebDoc-Page-Revision: $Id$ -->
+<h2>Overview</h2>
+
+<p>This HOWTO guide explains how to use the MARC 21
+format for bibliographic records in Invenio.</p>
+
+<p><i>For an explanation of how to use MARC in authority records,
+please read <a href="howto-authority#howto-marc" target="_blank">How to MARC
+authority records</a></i>
+</p>
+
<h2>Why to MARC at all?</h2>
<p>All the bibliographic data in the Invenio system are
internally represented in the <a
href="http://www.loc.gov/marc/bibliographic/">MARC 21</a> format.
There are several good reasons for this:
<ul>
<li>MARC format is <em>the</em> standard in the library world. It is
well established and has been used since the 1960s.
<li>MARC is flexible enough to represent any metadata structure you
may need now or in the future. Therefore, Invenio can adapt to your
needs without altering its internal data structure.
<li>MARC technology, albeit developed in the punch-card era
(the 1960s!), combines well with recent technologies like XML. In
fact, whenever bibliographic metadata are to be worked with externally
in a file format, Invenio uses the <a
href="http://www.loc.gov/standards/marcxml/">MARCXML</a> format
standardized by the Library of Congress.
</ul>
<h2>Choosing MARC representation of your metadata</h2>
<p>Basically, you are in one of the following three situations:
<ol>
<li><p>You do not care much about the internal MARC metadata structure as
long as you can work with "more meaningful" metadata concepts like
<em>author</em>, <em>abstract</em>, <em>title</em>, etc. In this case
we simply recommend that you stick to the Invenio defaults, which
preset the most commonly used metadata fields (in alphabetical order;
non-exhaustive list):
<blockquote>
<pre>
METADATA CONCEPT PROPOSED MARC 21 REPRESENTATION
------------------------ -------------------------------
Abstract 520 $a
Author, first 100 $a
Author(s), additional 700 $a
Collection identifier 980 $a
Email 8560 $f
Imprint 260 $a,b,c; 300 $a
Keywords 6531 $a
Language 041 $a
OAI identifier 909CO $o
Publication info 909C4 $* [many subfields]
References 999C5 $* [many subfields]
Primary report number 037 $a [unique throughout the system!]
Additional report number(s) 088 $a
Series 490 $a,v
Subject 65017 $a
Title 245 $a
URL (e.g. to fulltext) 8564 $u, $z
</pre>
</blockquote>
<p>The advantage of using these Invenio defaults is that you can use
pre-defined configurations of <a href="bibconvert-admin">BibConvert</a>,
<a href="bibformat-admin">BibFormat</a>, and <a
href="bibindex-admin">BibIndex</a>.
<li><p>You do not care much about internal MARC metadata structure, so
you are using Invenio defaults, but you need to introduce a new
metadata concept. For example, you would also like to store the
document shelf number, and you want to make it separately searchable.
In this case you are free to choose any MARC tag of your own, for
example <code>963 $6</code>. After that you would configure Invenio
as follows:
<ul>
<li>configure <a href="bibindex-admin">BibIndex</a> to create a new
logical field called <em>document shelf</em> and associate it with
<code>963 $6</code> physical MARC tag;
<li>run <a href="bibindex-admin">BibIndex</a> to create word tables for
the new searchable index;
<li>configure <a href="websubmit-admin">WebSubmit</a> to let the
submission interface know of the existence of the new field;
<li>configure <a href="websearch-admin">WebSearch</a> to introduce the
new searchable field into collections of your choice;
<li>configure <a href="bibformat-admin">BibFormat</a> to include
document shelf information in the record display on search results
pages.
</ul>
<p>which should give you the functionality you need.
<li><p>You have some constraints on the MARC level, for example you
would like to use a MARC markup scheme of your own. You are free to
define your own scheme and even invert the meaning of our default
configurations. For each field you would simply follow the
above-mentioned configuration procedure.
<p>However, when designing your own MARC scheme, you need to think of
two Invenio-related restrictions:
<ul>
<li>There should be no clash in the meaning of the same MARC tag in
two different collections. For example, the tag <code>100</code>
should not mean <em>first author</em> in the Preprints collection and
<em>title</em> in the Videotapes collection. The MARC tags are
considered to be chosen globally, in a collection-independent way.
This means that we cannot have several collections reusing the same
MARC code for their own different purposes. (This should never happen
in a well-designed database system anyway, but if you have a merge of
various databases coming from various groups of users, and if you do
not have the liberty to remap their MARC tags, this may be a problem.)
<li>Also, our database design assumes that the order of repetitive
subfields inside the same field instance does not matter. For
example, let us consider the tag <code>100</code> with the value
<code>$a Foo $a Bar $a Baz</code>. Then, the question "what is the
second <code>$a</code> of the tag <code>100</code>?" is invalid within
the Invenio MARC paradigm. Invenio would store a tag like that,
but not the order of repetitive subfields themselves. In our MARC
paradigm, we prefer to code that information either (i) into different
subfields within the same field instance (<code>100 $a Foo $b Bar $c
Baz</code>), or (ii) into the same subfield but inside several field
instances (<code>100 $a Foo</code>; <code>100 $a Bar</code>; <code>100
$a Baz</code>), according to what is more appropriate. (We think that
to rely on the order of repetitive subfields inside the same field
instance is a suspicious database design.)
</ul>
<p>These two restrictions were introduced in order to keep Invenio
bibliographic tables both simple and fast. As explained above, we
believe that any good database design will avoid these techniques
anyway.
</ol>
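As an illustration of the default mapping listed under point 1, a minimal record can be assembled in MARCXML. The builder below is a sketch under the assumption of a bare `<record>` root element; it is not Invenio's own API:

```python
import xml.etree.ElementTree as ET

def make_record(fields):
    """Build a minimal MARCXML record from (tag, ind1, ind2, subfields) tuples.
    Illustrative sketch, not part of Invenio."""
    record = ET.Element("record")
    for tag, ind1, ind2, subfields in fields:
        df = ET.SubElement(record, "datafield", tag=tag, ind1=ind1, ind2=ind2)
        for code, value in subfields:
            sf = ET.SubElement(df, "subfield", code=code)
            sf.text = value
    return record

# First author in 100 $a, title in 245 $a, abstract in 520 $a,
# collection identifier in 980 $a -- following the defaults above.
record = make_record([
    ("100", " ", " ", [("a", "Ellis, J.")]),
    ("245", " ", " ", [("a", "A sample title")]),
    ("520", " ", " ", [("a", "A short abstract.")]),
    ("980", " ", " ", [("a", "PREPRINT")]),
])
```

Each tuple corresponds to one row of the mapping table; adding a field of your own (such as the shelf-number example) is just another tuple.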
<h2>MARC representation in use at CERN</h2>
<p>MARC schema that is used in the CERN library differs in some ways
from the schema found in the Invenio default configuration and in the
demo records, described under point 1 of the preceding section. This
has both an advantage and a disadvantage: (i) the advantage that it
permits us to easily test the behaviour of Invenio in the context of
different metadata schemata used by different Invenio installations in
the world, and (ii) a disadvantage that the default schema may be
prone to different extensions by different Invenio installations in
the world, while these extensions could have possibly been avoided by
building local extensions on top of a richer default. (There will
probably always be local tag schema differences due to various local
cataloguing traditions.)
<p>As an example of an extensive MARC schema that could provide a
possible richer default for the Invenio installations in the world, we
are listing below all MARC tags that are in production in the CERN
Library. Please note that in some cases the metadata choice was
dictated by the local policy of a Swiss library network in which the
CERN Library participates, and that it may deliberately differ from
the Library of Congress recommendations in some places to some extent.
(See especially tags 024, 035, 037, 518, 700, 710, 711, 720, 721, 722,
723, 724, 725, 773, 866.)
<p>If this schema is found interesting, we may also provide the CERN
BibFormat and related configurations that implement it for Invenio.
<pre>
<protect>
GUIDE TO MARC21 TAGS FOR CERN AND CDSWARE
an attempt to present the actual setup of different MARC tags in use in AL500
at CERN
Maja Gracco
1 September 2004
Updated by Jocelyne Jerdelet December 2009
NOTE:
1. The abbreviations "NR" and "R" are MARC 21 standards and stand for
"Not repetitive" and "Repetitive". At CERN the rule is to make a tag repetitive,
when possible [only tags 1xx are not-repetitive] but to make subfields
non-repetitive [there are a couple of exceptions].
2. Subfield codes in AL300 are marked with $$ and subfield codes in AL500
are marked with $.
3. When different indicators are used, the tag will be repeated for each of
them like tag 246
4. [CERN] indicates that some modifications have been done to the tag to
suit the CERN Library needs, like adding one or more subfield code(s) like
tag "773", or slightly changing the content of the field like tag "037"
5. [Invenio/MySQL] indicates that this tag is exclusively used for Invenio;
this technique is mainly used when AL500 proposes an alpha-tag like "BAS"
6. To be able to find previous fields/tags mentioned under the title:
"Additional field(s)/tag(s):", you can use the Find-command on your browser
001 CONTROL NUMBER (NR)
This field has no indicators or subfield codes.
It contains the control number assigned by the organization
creating, using or distributing the record. - [ARC,CER,IEX,MAN,MMD]
{Created automatically at input via GUI/Cataloguing module}
NOTE: In MySQL used for Invenio record ID
003 CONTROL NUMBER IDENTIFIER (NR)
This field has no indicators or subfield codes.
It contains the MARC code for the agency whose system control
number is present in field 001. - [ARC,CER,IEX,MAN,MMD]
005 DATE AND TIME OF LAST TRANSACTION (NR)
This field has no indicators or subfield codes.
It contains 16 characters that specify the date and time
of the last record transaction. - [ARC,CER,IEX,MAN,MMD]
020 INTERNATIONAL STANDARD BOOK NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a International Standard Book Number (NR) - [CER base=2n monograph]
$u International Standard Book Number - Medium (NR) - [CER base=2n monograph]
021 INTERNATIONAL STANDARD NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a International Standard Number (NR) - [CER]
022 INTERNATIONAL STANDARD SERIAL NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a International Standard Serial Number (NR) - [CER base=3n periodical]
$b ISSN Support (NR) - [CER]
024 OPEN ARCHIVES INITIATIVE (R) [CERN]
Indicators
First Unspecified type of standard number or code
8
Second - undefined
Subfield Code(s)
$a OAI - [CER]
$p OAI-set indicator - [CER]
024 DIGITAL OBJECT IDENTIFIER
Indicators
First Source specified in subfield $2
7
Second - undefined
Subfield Code(s)
$a "doi" (CERN prefix: 10.5170)
$2 doi
030 CODEN DESIGNATION (R)
Indicators - Both undefined
Subfield Code(s)
$a CODEN (NR) - [CER base=3n]
$9 CODEN Source - [CER base=3n]
035 SYSTEM CONTROL NUMBER (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a System control number (NR) - [CER,IEX,MAN,MMD,WAI/UDC]
$9 System control number: Inst. (NR) - [CER,"CERN annual report",
"CERN ISOLDE",IEX,MAN,MMD,WAI/UDC]
NOTE:
035 $9 inspire {record with other subject than Particle Physics to import into INSPIRE}
037 SOURCE OF ACQUISITION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Electronically retrievable number (NR) - [CER,MAN,MMD]
041 LANGUAGE CODE (NR)
Indicators - Both undefined
Subfield Code(s)
$a Language code (NR) - [ARC,CER,IEX,MAN,MMD]
044 COUNTRY OF PUBLISHING/PRODUCING ENTITY CODE (NR)
Indicators - Both undefined
Subfield Code(s)
$a Country of publishing/producing entity code (NR) - [CER base=3n]
050 LIBRARY OF CONGRESS CALL NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a Classification number (R) - [CER]
080 UNIVERSAL DECIMAL CLASSIFICATION NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a Universal Decimal Classification number (NR) - [CER,WAI/UDC]
082 DEWEY DECIMAL CLASSIFICATION NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a Classification number (R) - [CER]
084 OTHER CLASSIFICATION NUMBER (R)
Indicators - Both undefined
Subfield Code(s)
$a Classification number (R) - [CER]
$b 980__a + 260__c + 088__a [CER] (this field is useful for sorting the collection by report number - Council documents)
$2 Number source (NR) - [CER]
088 REPORT NUMBER (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Report number (NR) - [ARC,CER,MAN,MMD]
$9 CERN internal number (NR) - [CER,MMD]
CER: $9 not displayed but searchable
not used in Inspire?
100 MAIN ENTRY--PERSONAL NAME (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [CER,MAN,MMD]
$e Relator term (NR) - [CER,MMD]
$h CCID (NR) - [CER]
$i INSPIRE Number (NR) - [CER]
$u Affiliation (R) - [CER]
CER: $e entries: dir., ed., ...
110 MAIN ENTRY--CORPORATE NAME (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Corporate name (NR) - [CER,IEX]
$b Subordinate unit (NR) - [IEX]
$g Acronym (NR) - [IEX]
111 MAIN ENTRY--MEETING NAME (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Meeting: conference, school, workshop (NR) - [CER,MAN]
$c Location of meeting (NR) - [CER]
$d Date of meeting (NR) - [CER]
$f Year of meeting (NR) - [CER]
$g Conference code (NR) - [CER]
$n Number of part/section/meeting (NR) - [CER]
$w Country code (NR) - [CER]
$z Closing date (NR) - [CER]
$9 Opening date (NR) - [CER]
145 MAIN TITLE STATEMENT (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Remainder of title (NR) - [CER]
$b Remainder of subtitle (NR) - [CER]
210 ABBREVIATED TITLE (NR)
Indicators - Both undefined
Subfield Code(s)
$a Abbreviated title (NR) - [CER base=3n]
222 KEY TITLE (R)
Indicators - Both undefined
Subfield Code(s)
$a Key title (NR) - [CER base=3n]
NOTE:
[Created automatically by the system from tag 245 {Not in use in AL300}]
242 TRANSLATION OF TITLE BY CATALOGING AGENCY (R)
Indicators - Both undefined
Subfield Code(s)
$a Title (NR) - [CER bas=17,14]
$b Remainder of title (NR) - [CER bas=17,14]
$y Language code of translated title (NR) - [CER bas=17,14]
245 TITLE STATEMENT (NR)
Indicators - Both undefined
Subfield Code(s)
$a Title (NR) - [ARC,CER,IEX,MAN,MMD]
$b Remainder of title (sub-title) (NR) - [ARC,CER,IEX,MAN,MMD]
$k Form (NR) - [MAN]
246 VARYING FORM OF TITLE:1 (R)
Indicators - Both undefined
Subfield Code(s)
$a Title proper/short title (NR) - [CER not base=3n]
$b Remainder of title (NR) - [CER not base=3n]
$g Miscellaneous information (NR) - [CER not base=3n]
$i Display text (NR) - [CER not base=3n]
$n Number of part/section of a work (R) - [CER not base=3n]
$p Name of part/section of a work (R) - [CER not base=3n]
246 VARYING FORM OF TITLE:2 (R)
Indicators
First - undefined
Second Type of title
1 Parallel title
Subfield Code(s)
$a Title proper/short title (NR) - [CER base=3n,MAN,MMD]
$i Display text (NR) - [CER base=3n]
246 VARYING FORM OF TITLE:3 (R)
Indicators
First - undefined
Second Type of title
3 Other title
Subfield Code(s)
$a Title proper/short title (NR) - [CER base=3n]
$i Display text (NR) {cross reference} - [CER base=3n]
$9 Siglum "sigle" (NR) - [CER base=3n]
250 EDITION STATEMENT (NR)
Indicators - Both undefined
Subfield Code(s)
$a Edition statement (NR) - [CER not base=3n,IEX]
260 PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Place of publication (NR) - [CER base=2n,41-45]
$b Name of publisher (NR) - [CER base=2n,41-45]
$c Date of publication [only year] (NR) - [ARC,CER,IEX,MAN,MMD]
$g Reprinted editions (NR) - [CER base=2n,41-45]
NOTE: This tag is not used for base=3n [use tag 933]
NOTE: This full tag ($a,$b,$c) HAS TO BE USED for base=14 (THESES)
269 PRE-PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Place of publ. (NR) - [ARC,CER NOT for base=14,2n,41-45,IEX,MAN,MMD]
$b Name of publ. (NR) - [ARC,CER NOT for base=14,2n,41-45,IEX,MAN,MMD]
$c Complete date (NR) - [ARC,CER NOT for base=14,2n,41-45,IEX,MAN,MMD]
NOTE: Don't use the following lines for CER base=14,2n,41-45 !!
NOTE: Don't use for THESES
270 ADDRESS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Address or Alternate address (NR) - [CER,IEX]
$b City or Alternate city (NR) - [IEX]
$d Country (NR) - [CER,IEX]
$e Postal code for City or Alt. city (NR)- [IEX]
$k Telephone number (R) - [CER,IEX]
$l Fax number (R) - [CER,IEX]
$m Electronic mail address (NR) - [CER,IEX,MMD]
$p Contact person (NR) - [CER,IEX,MMD]
$s City or Alternate city: Suffix (NR) - [IEX]
$9 Telex (NR) - [CER,IEX]
300 PHYSICAL DESCRIPTION (R)
Indicators - Both undefined
Subfield Code(s)
$a Pagination (NR) - [ARC,CER,MAN,MMD]
$b Other physical details (NR)
310 CURRENT PUBLICATION FREQUENCY (NR)
Indicators - Both undefined
Subfield Code(s)
$a Current publication frequency (NR) - [CER base=3n]
336 - Content Type (R)
Indicators - Both undefined
Subfield Code(s)
$a Content type term (R) - [CER]
NOTE: use for SLIDES
340 PHYSICAL MEDIUM (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Material base and configuration (NR) - [ARC,CER,MAN,MMD]
$c Materials applied to surface (NR) - [ARC]
$d Information recording technique (NR) - [ARC]
$9 CD-ROM [code concatenated]
490 SERIES STATEMENT (R)
Indicators - Both undefined
Subfield Code(s)
$a Series statement (NR) - [CER]
$v Volume/sequential designation (NR) - [CER]
500 GENERAL NOTE (R)
Indicators - Both undefined
Subfield Code(s)
$a General note (NR) - [ARC,CER,IEX,MAN,MMD]
NOTE: Don't use this tag for base=3n use tag 935
502 DISSERTATION NOTE (R)
Indicators - Both undefined
Subfield Code(s)
$a Diploma (NR) - [CER base=14]
$b University (NR) - [CER base=14]
$c Date or year of defense - [CER base=14]
CER:
$a: PhD, Master, Bachelor, Diploma, Habilitation, Laurea, Thesis, internship report
Inspire: $b$c$d (MARC standard)
506 RESTRICTIONS ON ACCESS NOTE (R)
Indicators - Both undefined
Subfield Code(s)
$a Terms governing access (NR) - [ARC,MAN,MMD]
$9 Local information (NR) - [ARC]
518 DATE/TIME AND PLACE OF AN EVENT NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$d Lectures: date (NR) - [CER]
$g Lectures: conference identification (NR) - [CER]
$h Lectures: starting time (NR) - [CER]
$l Lectures: length of speech (NR) - [CER]
$r Lectures: meeting (NR) - [CER]
NOTE: This tag is only in use for CER base=10-13,16,19
520 ENGLISH SUMMARY, ETC. (R)
Indicators - Both undefined
Subfield Code(s)
$a Summary, etc. note (NR) - [ARC,CER,IEX,MAN,MMD/BUL] (Abs. short Eng)
$b Expansion of summary note (NR) - [MMD/BUL] (Abs. long Eng)
$9 number of the abstract (NR) - [ARC,CER,IEX,MAN,MMD/BUL]
536 FUNDING INFORMATION (R)
Indicators - Both undefined
Subfield Code(s)
$a Funding agency/program (NR) - [CER]
$c Grant number
$f Project number
$r Access information
NOTE: $r used for Open Access tag in OpenAIRE
540 TERMS GOVERNING USE AND REPRODUCTION (LICENSE) [CER]
Indicators - Both undefined
Subfield Code(s)
$a Terms governing use and reproduction, e.g. CC License
$b person or institution imposing the license (author, publisher)
$u URI
$3 material (currently not used)
541 IMMEDIATE SOURCE OF ACQUISITION NOTE (R) [CER]
Indicators - Both undefined
Subfield Code(s)
$a Source of acquisition (NR) - [ARC,CER,MAN]
$d Date of acquisition (NR) - [ARC]
$e Accession number (NR) - [MMD]
$f Owner (NR) - [ARC]
$h Price paid by Bookshop [CER]
$9 Price for the user to pay [CER]
542 COPYRIGHT INFORMATION [CER]
Indicators
First Undefined
Second Undefined
Subfield Code(s)
$d Copyright holder
$g Copyright date
$u URI (URL or URN, more detailed statement about copyright status)
$e Copyright holder contact information
$f Copyright statement as presented on the resource
$3 materials (currently not used)
546 LANGUAGE NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Language of source (NR) - [MAN]
$g Target language (NR) - [MAN]
555 CUMULATIVE INDEX/FINDING AIDS NOTE (R)
Indicators - Both undefined
Subfield Code(s)
$a Cumulative index/finding aids note (NR) - [CER base=3n]
583 ACTION NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Action (NR) - [CERN:BOOKSHOP,MAN]
$c Time/date of action (NR) - [CERN:BOOKSHOP,MAN]
$i Mail; Method of action (NR) - [MAN]
$z Note (NR) - [CERN:ALD]
590 FRENCH SUMMARY NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Summary, etc. note in French (NR) - [MMD]
$b Expansion of summary note in French (NR) - [MMD]
594 TYPE OF DOCUMENT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Type of document (NR) - [ARDA]
595 INTERNAL NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Internal note (NR) - [ARC,CER,IEX,MAN,MMD]
$d Control field (NR)
$i INSPEC number
$s Subject note (NR) - [MMD]
NOTE: Don't use this tag for base=3n use tag 937
USAGE
CER:
$a pr/lkr not found: publication not found for "to be published in"
$a no fulltext
596 SLAC NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a SLAC note (NR) - [CER]
597 OBSERVATION IN FRENCH (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Observation in French (NR) - [MMD]
598 COPYRIGHT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Copyright (NR) - [MMD]
599 STATISTICS FOR THE CERN BOOKSHOP (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Total number of books bought by the Bookshop (NR) - [CER]
$b Total number of books sold by the Bookshop (NR) - [CER]
$c The values of $a minus the values of $b (NR) - [CER]
600 SUBJECT ADDED ENTRY--PERSONAL NAME (R)
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [ARC]
$c Titles and other words associated with a name (NR) - [ARC]
650 SUBJECT ADDED ENTRY--TOPICAL TERM:1 (R)
Indicators
First Level of subject
1 Primary
Second Subject heading system/thesaurus
7 Source specified in subfield $2
Subfield Code(s)
$a Topical term or geographic name (NR) - [CER,MAN,MMD]
$2 Source of heading or term (NR) - [CER,MAN,MMD]
$e description to be displayed (default: keyword) (NR) - [CER]
650 SUBJECT ADDED ENTRY--TOPICAL TERM:2 (R)
Indicators
First Level of subject
2 Secondary
Second Subject heading system/thesaurus
7 Source specified in subfield $2
Subfield Code(s)
$a Topical term or geographic name (NR) - [CER,MAN,MMD]
$2 Source of heading or term (NR) - [CER,MAN,MMD]
$e description to be displayed (default: keyword) (NR) - [CER]
$p percentage (relevance of topic, used for INTC) (NR)
653 ENGLISH INDEX TERM--UNCONTROLLED:1 (R) [CERN]
Indicators
First Level of index term
1 Primary
Second Undefined
Subfield Code(s)
$a Uncontrolled term (NR) - [ARC,CER,MAN,MMD,WAI/UDC]
$9 Institute of the uncontrolled term (NR) - [CER]
653 FRENCH INDEX TERM--UNCONTROLLED:2 (R)
Indicators
First Level of index term
2 Secondary [in French]
Second Undefined
Subfield Code(s)
$a Uncontrolled term (NR) - [CER,WAI/UDC]
$9 Institute of the uncontrolled term (NR) - [CER,WAI/UDC]
690 SUBJECT INDICATOR (R) [CERN]
Indicators
First Origin of indicator
C CERN
Second indicator undefined
Subfield Code(s)
$a Term (NR) - [ARC,CER,IEX,MAN,MMD]
691 OBSERVATION (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Observation (NR) - [ARC,MAN]
692 BEAM (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$e Elements (NR) - [CER]
$i Isotope (NR) - [CER]
$m Minimum intensity (NR) - [CER]
693 ACCELERATOR/EXPERIMENT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Accelerator (NR) - [CER,IEX,MMD]
$e Experiment (NR) - [CER,IEX,MAN,MMD]
$f Facility
694 CLASSIFICATION TERMS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Uncontrolled term (NR) - [CER]
$9 Institute of the uncontrolled term (NR) - [CER]
695 THESAURUS TERMS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Uncontrolled term (NR) - [CER]
$9 Institute of the uncontrolled term (NR) - [CER]
699 SUBJECT CATEGORY FOR CERN BOOKSHOP (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Uncontrolled term (NR) - [CER]
$9 Institute of the uncontrolled term (NR) - [CER]
700 ADDED ENTRY--PERSONAL NAME (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [ARC,CER,MMD]
$e Relator term (NR) - [ARC,CER,MMD]
$g CCID (NR) - [CER]
$i INSPIRE Number (NR) - [CER]
$u Affiliation (R) - [CER,IEX,MMD]
CER:
$e entries: dir., ed., ...
710 ADDED ENTRY--CORPORATE NAME (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Corporate name (NR) - [ARC,CER,MAN]
$b Subordinate unit (NR) - [CER,MAN]
$g Collaboration (NR) - [CER]
$5 CERN Paper (NR) - [CER,MAN,MMD]
$9 CERN Work (NR) - [CER]
NOTE: $9 no longer used since 2009?
$5 Department, no longer in use since May 2011
711 ADDED ENTRY--MEETING NAME (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Meeting name (NR) - [CER]
$c Location of meeting (NR) - [CER]
$d Date of meeting (NR) - [CER]
$f Date of a work (NR) - [CER]
$g Conference code (NR) - [CER]
$n Number of part/section/meeting (NR) - [CER]
$9 Conference opening date (NR) - [CER]
720 AUTHOR AS ON DOCUMENT / AUTHOR IN ARCHIVE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [ARC,CER]
721 TRANSLATOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [MAN]
$l Words translated (NR) - [MAN]
722 REVISOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [MAN]
723 RE-READER (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [MAN]
$l Words re-read (NR) - [MAN]
$s Language (NR) - [MAN]
724 "COMPOSER" OF MINUTES (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [MAN]
$l Words composed (NR) - [MAN]
$s Language (NR) - [MAN]
725 "TYPIST" OF MINUTES (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [MAN]
$l Words composed (NR) - [MAN]
770 SUPPLEMENT/SPECIAL ISSUE ENTRY (R)
Indicators - Both undefined
Subfield Code(s)
$a Main entry heading (NR) - [CER base=3n]
$i Display text (NR) - [CER base=3n]
$t Title (NR) - [CER base=3n]
$w Record control number (R) - [CER base=3n]
772 PARENT RECORD ENTRY (R)
Indicators - Both undefined
Subfield Code(s)
$a Main entry heading (NR) - [CER base=3n]
$i Display text (NR) - [CER base=3n] {Supplement to}
$t Title (NR) - [CER base=3n]
$w Record control number (R) - [CER base=3n]
773 HOST ITEM ENTRY (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a "DOI" (NR) - [CER]
$c Pagination (NR) - [ARC,CER,MMD]
$d Complete date (NR) - [CER,MMD]
$e Recid of the linked document record (NR) - [CER]
$f Note regarding a DN (down record) link - (NR) [CER]
$n Number [issue] (NR) - [ARC,CER,MMD]
$p Title (NR) - [ARC,CER,MMD]
$u URL (NR) - [MMD]
$v Volume (NR) - [CER,MMD]
$y Year (NR) - [ARC,CER,MMD]
$t Talk given at - [INS]
$w CNUM - [INS]
$x In *...* p1-p2 [INS] {for book chapters}
NOTE: 773__$$e<cds nr>$$f<conf code>$$c<pages> used for conference contributions from June 2011 to be standard with Inspire Cataloguing
NOTE: 773__a not to be used in the future, DOI only in 024
780 PRECEDING ENTRY (R)
Indicators - Both undefined
Subfield Code(s)
$a Main entry heading (NR) - [CER base=3n]
$i Display text (NR) - [CER base=3n] {Continues}
$t Title (NR) - [CER base=3n]
$w Record control number (R) - [CER base=3n]
785 SUCCEEDING ENTRY (R)
Indicators - Both undefined
Subfield Code(s)
$a Main entry heading (NR) - [CER base=3n]
$i Display text (NR) - [CER base=3n] {Continued by}
$t Title (NR) - [CER base=3n]
$w Record control number (R) - [CER base=3n]
787 NONSPECIFIC RELATIONSHIP ENTRY (R)
Indicators - Both undefined
Subfield Code(s)
$a Main entry heading (NR) - [CER base=3n]
$i Display text (NR) - [CER base=3n] {other forms of relation}
$r Report number
$t Title (NR) - [CER base=3n]
$w Record control number (R) - [CER base=3n]
787 OTHER RELATIONSHIP ENTRY (R)
Indicators
First Note controller
0 - Display note (in $i)
1 - Do not display note
Subfield Code(s)
$i Relationship information (R) - [CER]
$r Report number
$w Record control number (R) - [CER]
NOTE: Used to link Conference papers and Slides records ($i Conference paper - $w CDS recid)
852 LOCATION (R)
Indicators - Both undefined
Subfield Codes
$a Location (NR) - [ARC,CER,MAN,MMD]
$c Shelving location (NR) - [ARC,CER]
856 ELECTRONIC LOCATION AND ACCESS:1 (R)
Indicators
First Access method
0 Email
Second Relationship
^ No information provided
Subfield Code(s)
$f Electronic name (R) - [MAN,MMD]
856 ELECTRONIC LOCATION AND ACCESS:2 (R)
Indicators
First Access method
4 HTTP
Second Relationship
^ No information provided
Subfield Code(s)
$d Path (R) - [MMD]
$q Electronic format type (NR) - [IEX,MMD] {$x EDL;
MMD/PHO: IF .jpeg $x picture} {MMD [bases80-89][.gif] -> $x icon}
$s File size (R) - [INDICO]
$u Uniform Resource Identifier (NR) - [ARC,CER,IEX,MAN,MMD,INS]
$x Nonpublic note (NR) - [CER,MMD]
$y Link text (NR) - [ARC,CER,IEX,MAN,MMD]
$z Public note (R) - [CER,MMD]
NOTE: $u used for URL and URN, repeatable for URN; repeat 856 for several URLs
NOTE: $z Stamped by WebSubmit: DATE
$z Figure
856 ELECTRONIC LOCATION AND ACCESS:3 (R)
Indicators
First Access method
4 HTTP
Second Relationship
1 Version of resource
Subfield Code(s)
$g - Version status (NR) - [CER base=3n]
$m - Contact for access assistance (NR) - [CER base=3n]
$n - Name of location of host (NR) - [CER base=3n]
$u - Uniform Resource Identifier (NR) - [CER base=3n]
$x - Nonpublic note (NR) - [CER base=3n]
$y - Link text (NR) - [CER base=3n]
$z - Public note (NR) - [CER base=3n]
$3 - Materials specified (NR) - [CER base=3n]
856 ELECTRONIC LOCATION AND ACCESS:4 (R)
Indicators
First Access method
4 HTTP
Second Relationship
2 Periodicals [TOC]
Subfield Code(s)
$u Uniform Resource Identifier (NR) - [CER base=3n]
$x Nonpublic note (NR) - [CER base=3n]
$y Link text (NR) - [CER base=3n]
856 ELECTRONIC LOCATION AND ACCESS:5 (R)
Indicators
First Access method
7 Method specified in subfield $2
Second Relationship
^ No information provided
Subfield Code(s)
$2 Access method (NR) - [CER,MMD]
$8 Field link and sequence number (R) - [CER,MMD]
$d Path (R) - [MMD]
$u Uniform Resource Identifier (NR) - [CER,MMD]
$x Nonpublic note (NR) - [CER,MMD]
$y Link text (NR) - [CER,MMD]
$z Public note (R) - [CER,MMD]
859 ELECTRONIC MAIL MESSAGE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Contact (NR) - [BUL]
$f E-mail address (NR) - [CER,IEX,MAN,MMD]
$x Date (NR) - [CER]
866 TEXTUAL HOLDINGS--BASIC BIBLIOGRAPHIC UNIT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Alternative holdings statement (NR) - [CER base=3n]
$b Library (NR) - [CER base=3n]
$c Collection (NR) - [CER base=3n]
$g Subscription status code (NR) - [CER base=3n]
$x Retention code (NR) - [CER base=3n]
$z Public note (NR) - [CER base=3n]
901 AFFILIATION AT CONVERSION AL300/AL500 (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$u Name of institute (NR) - [CER,MAN] {No longer in use
for CER; use 100 $u and 700 $u instead}
902 OTHER INSTITUTES (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Name of other institute (NR) - [CER]
903 "GREY BOOK" (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Approval (NR) - [CER]
$b Beam (NR) - [CER,IEX]
$d Status date (NR) - [IEX]
$s Status (NR) - [CER,IEX]
904 BEAMS PER SHIFT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$s Beams per shift (NR) - [CER]
905 SPOKESMAN (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Address (NR) - [CER]
$k Telephone (NR) - [CER]
$l Fax (NR) - [CER]
$m E-mail (NR) - [CER]
$p Personal name (NR) - [CER,IEX]
$q Private address (NR) - [CER]
906 RESPONSIBLE PERSON / REFEREE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Address (NR) - [CER]
$k Telephone (NR) - [CER]
$l Fax (NR) - [CER]
$m E-mail (NR) - [CER]
$p Personal name (NR) - [CER]
$q Private address (NR) - [CER]
$u Affiliation - [INDICO]
907 INTC: RESOURCE COORDINATOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [CER,IEX]
908 INTC: TECHNICAL COORDINATOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [CER,IEX]
909 DEPUTY SPOKESMAN (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$d Personal name (NR) - [IEX]
910 FSGO (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$f Personal name (NR) - [IEX]
$9 Alternate abbreviated title (NR) - [CER]
911 GLIMOS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$g Personal name (NR) - [IEX]
912 REGISTRATION FOR CONFERENCE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Abstracts deadline (NR) - [CER]
$f Fee (NR) - [CER]
$i "By invitation only" (NR) - [CER]
$n Number of participants (NR) - [CER]
$p Paper deadline (NR) - [CER]
$r Registration deadline (NR) - [CER]
913 CITATION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$c Citation (NR) - [CER]
$p Unformatted references (NR) - [IEX]
$t Title abbreviation (NR) - [CER]
$u Uniform Resource Identifier (NR) - [CER,IEX]
$v Volume (NR) - [CER]
$y Year (NR) - [CER]
914 UNIVERSAL DECIMAL CLASSIFICATION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$u Secondary UDC number (NR) - [WAI/UDC]
$v Library shelving code (NR) - [WAI/UDC]
916 "STATUS WEEK" (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Acquisition of proceedings code (NR) - [CER]
$d Display period for books (NR) - [CER]
$e Number of copies bought by CERN (ebooks) - [CER]
$s Status of record (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$w Status week (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$y Year for Annual list (NR) - [CER]
917 CABLE
Indicators - Both undefined
Subfield Code(s)
$a Cable (NR) - [IEX]
918 DEPARTMENT INDEX (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Department index (NR) - [IEX]
919 ORGANIZATION INDEX (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Organization index (NR) - [IEX]
920 TOWN INDEX (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Town index (NR) - [IEX]
921 MICROCOSM: LOANS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$d Loan date (NR) - [MMD]
$e Exhibition loan (NR) - [MMD]
$i Borrower institute (NR) - [MMD]
$t Loan to (NR) - [MMD]
$x Exhibition name (NR) - [MMD]
922 MICROCOSM: PHYSICAL VALUES (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$d Diameter (NR) - [MMD]
$h Height (NR) - [MMD]
$i Interactive objects (NR) - [MMD]
$l Length (NR) - [MMD]
$p Depth (NR) - [MMD]
$w Weight (NR) - [MMD]
923 PLACE OF PHOTO (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$p Place of photo (NR) - [MMD]
$r Requestor (NR) - [MAN,MMD]
924 PHOTOLAB (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a "Tirage" (NR) - [MMD]
$b ?? - [MMD]
$t ?? - [MMD]
925 DATES (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Opening date/Date received (NR) - [ARC,MAN]
$b Closing date/Date completed (NR) - [ARC,CER,MAN]
926 RECIPIENT (NR) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [ARC,CER,MAN]
927 FILE NUMBER (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a File number (NR) - [ARC,MAN]
928 ADDITIONAL RECIPIENT(S) (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Personal name (NR) - [CER,MAN]
929 RETENTION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Retention (NR) - [ARC,MAN]
$d Retention date (NR) - [MAN]
931 PERI: MAIN CORPORATE AUTHOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Corporate name (NR) - [CER base=3n]
932 PERI: ADDITIONAL CORPORATE AUTHOR (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Corporate name (NR) - [CER base=3n]
933 PERI: IMPRINT (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Place of publisher (NR) - [CER base=3n]
$b Name of publisher (NR) - [CER base=3n]
934 PERI: IMPRINT OF E-JOURNALS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Place of publisher (NR) - [CER base=3n]
$b Name of publisher (NR) - [CER base=3n]
$l Link for publisher (NR) - [CER base=3n]
$x Non-public note (NR) - [CER base=3n]
935 PERI: USER NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a User note (NR) - [CER base=3n]
936 PERI: E-J USER NOTE (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a E-J user note (NR) - [CER base=3n]
937 PERI: INTERNAL NOTE (R) - [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Internal note (NR) - [CER base=3n]
$c Modification date (NR) - [CER base=16,58]
$s Responsible of the modification (NR) - [CER-MMD]
938 PERI: LOCAL INFORMATION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Frequency given as numbers (NR) - [CER base=3n]
$f Impact factor (NR) - [CER base=3n]
$i Index (NR) - [CER base=3n]
$p Title status (NR) - [CER base=3n]
939 PERI: LOCAL INFORMATION (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a GoDirect Algorithm - [CER base=3n]
$d GoDirect URL - [CER base=3n]
$u GoDirect Homepage - [CER base=3n]
$v GoDirect Volume - [CER base=3n]
940 LINK TO COMPANY (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$u URL address (NR) - [CERN:BOOKSHOP,MAN]
$y URL note (NR) - [CERN:BOOKSHOP,MAN]
941 RELATED DOCUMENT NUMBER (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a Related document number (NR) - [MAN]
$t Type of document (NR) - [MMD]
942 INSTITUTE LINKMAN (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a ?? - [MAN]
$g ?? - [MAN]
$p ?? - [MAN]
$u ?? - [MAN]
960 BASE (R) [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a Base number (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
Taken from AL500 BAS
961 CAT (R) [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a Cataloguer (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$b Cataloguer level (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$c Modification date (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$l Library (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$h Hour - (NR) [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$x Creation date (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
Taken from AL500 CAT
962 ALEPH Linking Field (R) [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a - link type
UP link to another BIB type record. A record can have only one
link of this type. "DN" link is automatically built in the
opposite direction.
DN "down" link to another BIB type record. Multiple links
are possible. "UP" link is automatically built in the
opposite direction.
PAR parallel link from BIB record to BIB record.
"PAR" link is automatically built in the opposite direction.
HOL link from HOL record to BIB record. Link is built from
BIB to HOL.
ADM link from ADM record to BIB record. Link is built from
BIB to ADM.
ANA is a link between bibliographic records of different levels.
When an analytic link is created the system generates UP/DN
links between the two records and an item link between the source
record and the item that corresponds to it (according to vol.,
part, year and pages) on the ADM record of the second record.
ITM links are created between a bibliographic record and an ADM
record when there is no relationship between the two BIB records,
for example when two items are bound together. (NR) - [ARC]
{Only used for ARC in AL300}
$b - sysno of the linked document record (NR) - [ARC,CER,MMD]
$l - library where linked record is located (NR) - [ARC,CER,MMD]
$n - note regarding a DN (down record) link - (NR) [ARC,CER]
$m - note regarding an UP (up record) link - [not yet in use at CERN]
$y - analytic link - year link - [not yet in use at CERN]
$v - analytic link - volume link - [not yet in use at CERN]
$p - analytic link - part link - [not yet in use at CERN]
$i - analytic link - issue link - [not yet in use at CERN]
$k - analytic link - pages (NR) - [ARC,CER]
$t - base=3n [for paper version of e-journals]/title - (NR) - [CER,MMD]
Taken from AL500 LKR
963 OWNER (NR) [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a Owner - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
Taken from AL500 OWN
964 ITEM (NR) [Invenio/MySQL]
Indicators - Both undefined
Indicates the number of physical items attached to the record. The field is created as soon as an item is linked to the record.
Subfield Code(s)
$a Owner - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
Taken from AL500 ITM
970 SYSTEM NUMBER (NR) [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a AL500 sysno (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$d recid of surviving record (in deleted record after merging)
Taken from AL500 SYS
980 COLLECTION INDICATOR (R) [CERN] [Invenio/MySQL]
Indicators - Both undefined
Subfield Code(s)
$a Primary indicator (NR) - [ARC,CER,IEX,MAN,MMD]
$b Secondary indicator (NR) - [ARC,CER,IEX,MAN,MMD]
$c Deleted indicator (NR) - [ARC,CER,IEX,MAN,MMD]
Generated and used in MySQL; accepted in AL500
NOTE: There should be only one Primary collection ($a); other, Secondary collections go in ($b)
981 SYSTEM NUMBER OF DELETED DOUBLE RECORDS (R) [CERN]
Indicators - Both undefined
Subfield Code(s)
$a System number (NR) - [ARC,CER,IEX,MAN,MMD,WAI]
999 REFERENCES (R) [CERN] [Invenio/MySQL]
Indicators
First Origin of indicator
C CERN
Second Type
5 References
Subfield Code(s)
$a DOI
$h authors
$m Miscellaneous [contains 1st part of reference] (R)
$n Issue Number (NR)
$o Order number [contains [ ] line number] (NR)
$p Page (NR)
$r Report Number (NR)
$s Journal publication note: title vol (year) page/artid
$t Journal Title abbreviation (NR)
$u Uniform Resource Identifier (NR)
$v Volume (NR)
$y Year (NR)
CER: $s title vol (year) page/artid {letter belongs to title}
INS: $s title, vol, page/artid {letter belongs to vol}
NOTE: Used for the references extracted with refextract
999 REFERENCES (R) [CERN] [Invenio/MySQL]
Indicators
First Origin of indicator
C CERN
Second Type
6 Refextract
Subfield Code(s)
$a Refextract info
NOTE: Used for the references extracted with refextract
BAS BASE
Indicators - Both undefined
Subfield Code(s)
$a Base number (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
For MySQL use tag 960
CAT CAT (R)
Indicators - Both undefined
Subfield Code(s)
$a Cataloguer (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$b Cataloguer level (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$c Modification date (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$l Library (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$h Hour - (NR) [ARC,CER,IEX,MAN,MMD,WAI/UDC]
$x Creation date(NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
For MySQL use tag 961
FFT FILE UPLOAD (R)
Indicators - Both undefined
Subfield Code(s)
$a Uniform Resource Identifier (NR)
$d Description (NR)
$t Document type
NOTE: Used in Bibupload to upload and attach files in CDS
NOTE: Default for $t is MAIN. In contrast to CDS, MAIN will hide a file on INSPIRE. For public files use INSPIRE-PUBLIC
FMT FORMAT
This field has no indicators or subfield codes.
It contains the Scope of material [2 character code]
Not used in MySQL, only in ALEPH
ITM ITEM (NR)
Indicators - Both undefined
Subfield Code(s)
$a Owner - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
For MySQL use tag 964
LKR ALEPH Linking Field (R)
Indicators - Both undefined
Subfield Code(s)
$a - link type
UP link to another BIB type record. A record can have only one
link of this type. "DN" link is automatically built in the
opposite direction.
DN "down" link to another BIB type record. Multiple links
are possible. "UP" link is automatically built in the
opposite direction.
PAR parallel link from BIB record to BIB record.
"PAR" link is automatically built in the opposite direction.
HOL link from HOL record to BIB record. Link is built from
BIB to HOL.
ADM link from ADM record to BIB record. Link is built from
BIB to ADM.
ANA is a link between bibliographic records of different levels.
When an analytic link is created, the system generates UP / DWN
links between the two records and an item link between the source
record and the item that corresponds to it (according to vol.,
part, year and pages) on the ADM record of the second record.
ITM links are created between a bibliographic record and an ADM
record when there is no relationship between the two BIB records,
for example when two items are bound together. [NR] - [ARC]
{Only used for ARC in AL300}
$b - sysno of the linked document record (NR) - [ARC,CER,MMD]
$l - library where linked record is located (NR) - [ARC,CER,MMD]
$n - note regarding a DN (down record) link - (NR) [ARC,CER]{CER base=3n}
$m - note regarding an UP (up record) link - [not yet in use at CERN]
$y - analytic link - year link - [not yet in use at CERN]
$v - analytic link - volume link - [not yet in use at CERN]
$p - analytic link - part link - [not yet in use at CERN]
$i - analytic link - issue link - [not yet in use at CERN]
$k - analytic link - pages (NR) - [ARC,CER]
$t - base=3n [for paper version of e-journals]/title - (NR) - [CER,MMD]
For MySQL use tag 962
NOTE: $b and $n used in conference contribution/book chapters records to make the link to the conference/book record (note that $b is actually the Aleph sysno)
OWN OWNER (NR)
Indicators - Both undefined
Subfield Code(s)
$a Owner - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
For MySQL use tag 963
SYS SYSTEM NUMBER (NR)
Indicators - Both undefined
Subfield Code(s)
$a AL500 sysno (NR) - [ARC,CER,IEX,MAN,MMD,WAI/UDC]
For MySQL use tag 970
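For reference, the ALEPH-to-MySQL tag correspondences stated in the "For MySQL use tag ..." notes above can be collected in one place (a plain summary of the mapping as listed in this document):

```python
# ALEPH working tag -> MySQL (Invenio) tag, as noted field by field above
ALEPH_TO_MYSQL_TAG = {
    'BAS': '960',  # BASE
    'CAT': '961',  # cataloguer information
    'LKR': '962',  # ALEPH linking field
    'OWN': '963',  # owner
    'ITM': '964',  # item
    'SYS': '970',  # system number
}

assert ALEPH_TO_MYSQL_TAG['SYS'] == '970'
```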
</protect>
</pre>
diff --git a/modules/webhelp/web/admin/howto/howto.webdoc b/modules/webhelp/web/admin/howto/howto.webdoc
index 1d914af50..a936ec9d1 100644
--- a/modules/webhelp/web/admin/howto/howto.webdoc
+++ b/modules/webhelp/web/admin/howto/howto.webdoc
@@ -1,52 +1,57 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
<!-- WebDoc-Page-Title: Admin HOWTOs -->
<!-- WebDoc-Page-Navtrail: <a class="navtrail" href="<CFG_SITE_URL>/help/admin<lang:link/>">Admin Area</a> -->
<!-- WebDoc-Page-Revision: $Id$ -->
<p>The HOWTO guides will give you both short and not-so-short
recipes and thoughts on some of the most frequently encountered
administrative tasks.</p>
<blockquote>
<dl>
<dt><a href="howto-marc">HOWTO MARC</a>
<dd>Describes how to choose the MARC representation of your metadata
and how it will be stored in Invenio.
<dt><a href="howto-migrate">HOWTO Migrate</a>
<dd>Describes how to migrate a bunch of your old data from any format
you might have into Invenio.
<dt><a href="howto-run">HOWTO Run</a>
<dd>Describes how to run your Invenio installation and how to take
care of its normal operation day by day.
<dt><a href="howto-fulltext">HOWTO Manage Fulltext Files</a>
<dd>Describes how to manipulate fulltext files within your Invenio
installation.
+
+<dt><a href="howto-authority">_(HOWTO Manage Authority Records)_</a>
+
+<dd>Describes how to manage Authority Records within your Invenio installation.
+
</dl>
</blockquote>
<p>Haven't found what you were looking for? <a href="mailto:<CFG_SITE_ADMIN_EMAIL>" title="Invenio-Suggest-a-HOWTO">Suggest a HOWTO</a>.</p>
diff --git a/modules/webhelp/web/hacking/hacking.webdoc b/modules/webhelp/web/hacking/hacking.webdoc
index c8095dda1..702ea1dfd 100644
--- a/modules/webhelp/web/hacking/hacking.webdoc
+++ b/modules/webhelp/web/hacking/hacking.webdoc
@@ -1,122 +1,126 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
<!-- WebDoc-Page-Title: Hacking Invenio -->
<!-- WebDoc-Page-Navbar-Name: hacking-test-suite -->
<!-- WebDoc-Page-Navbar-Select: hacking -->
Welcome to the Invenio Developers' corner. Before diving into the
source, make sure you don't miss our <a
href="<CFG_SITE_URL>/help/">user-level</a> and <a
href="<CFG_SITE_URL>/help/admin/">admin-level</a> documentation as well. And now, back to the source, and happy hacking!
<h2>General information, coding practices</h2>
<blockquote>
<dl>
<dt><a href="common-concepts">Common Concepts</a></dt>
<dd>Summarizing common terms you will encounter here and there.</dd>
<dt><a href="coding-style">Coding Style</a></dt>
<dd>A policy we try to follow, for good or bad.</dd>
<dt><a href="release-numbering">Release Numbering</a></dt>
<dd>Presenting the version numbering scheme adopted for Invenio stable and development releases.</dd>
<dt><a href="directory-organization">Directory Organization</a></dt>
<dd>How the source and target directories are organized, where the
sources get installed to, what is the visible URL policy, etc.</dd>
<dt><a href="modules-overview">Modules Overview</a></dt>
<dd>Presenting a summary of various Invenio modules and their relationships.</dd>
<dt><a href="test-suite">Test Suite</a></dt>
<dd>Describes our unit and regression test suites.</dd>
<p>For more developer-related information, be sure to
visit <a href="https://twiki.cern.ch/twiki/bin/view/CDS/Invenio">Invenio
wiki</a>.</p>
</dl>
</blockquote>
<h2>Module-specific information</h2>
<blockquote>
<dl>
<dt><a href="bibauthorid-internals">BibAuthorID Internals</a></dt>
<dd>Describes information useful to understand how BibAuthorID works.
</dd>
+<dt><a href="bibauthority-internals">BibAuthority Internals</a></dt>
+<dd>Describes information useful to understand how BibAuthority works.
+</dd>
+
<dt><a href="bibclassify-internals">BibClassify Internals</a></dt>
<dd>Describes information useful to understand how BibClassify works,
the taxonomy extensions we use, how the keyword extraction algorithm works.
</dd>
<dt><a href="bibconvert-internals">BibConvert Internals</a></dt>
<dd>Describes information useful to understand how BibConvert works,
and the BibConvert functions can be reused.</dd>
<dt><a href="bibformat-internals">BibFormat Internals</a></dt>
<dd>Describes information useful to understand how BibFormat works.</dd>
<dt><a href="bibrank-internals">BibRank Internals</a></dt>
<dd>Describes information useful to understand how the various
ranking methods available in bibrank works, and how they can
be tweaked to give various output.</dd>
<dt><a href="bibsort-internals">BibSort Internals</a></dt>
<dd>Describes information useful to understand how BibSort
module works and how various data manipulations are done,
stored and retrieved.</dd>
<dt><a href="bibrecord-internals">BibRecord Internals</a></dt>
<dd>Describes information useful to manipulate single records.</dd>
<dt><a href="bibdocfile-internals">BibDocFile Internals</a></dt>
<dd>Describes information useful to manipulate documents within records.</dd>
<dt><a href="miscutil-internals">MiscUtil Internals</a></dt>
<dd>Describes information useful to understand what can be found inside the miscellaneous utilities
module, like database access, error management, date handling library, etc.</dd>
<dt><a href="webjournal-internals">WebJournal Internals</a></dt>
<dd>Describes the WebJournal database and required MARC tags for article records.</dd>
<dt><a href="search-engine-internals">WebSearch Internals</a></dt>
<dd>Describes information useful to understand the search process
internals, like the different search stages, the high- and low-level
API, etc.</dd>
<dt><a href="webaccess-internals">WebAccess Internals</a></dt>
<dd>Describes information useful to understand the access control process
internals, its API, etc.</dd>
<dt><a href="webstyle-internals">WebStyle Internals</a></dt>
<dd>Describes how to customize WebDoc files, etc.</dd>
<dt><a href="websubmit-internals">WebSubmit Internals</a></dt>
<dd>Describes information useful to understand the document submission internals.</dd>
<dt><a href="bibsched-internals">BibSched Internals</a></dt>
<dd>Describes information useful to understand the bibliographic task scheduler internals.</dd>
</dl>
</blockquote>
diff --git a/modules/webjournal/lib/webjournal_utils.py b/modules/webjournal/lib/webjournal_utils.py
index 9f969c3a7..27e027cf0 100644
--- a/modules/webjournal/lib/webjournal_utils.py
+++ b/modules/webjournal/lib/webjournal_utils.py
@@ -1,1809 +1,1809 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Various utilities for WebJournal, e.g. config parser, etc.
"""
import time
import datetime
import calendar
import re
import os
import cPickle
import math
import urllib
from MySQLdb import OperationalError
from xml.dom import minidom
from urlparse import urlparse
from invenio.config import \
CFG_ETCDIR, \
CFG_SITE_URL, \
CFG_CACHEDIR, \
CFG_SITE_LANG, \
CFG_ACCESS_CONTROL_LEVEL_SITE, \
CFG_SITE_SUPPORT_EMAIL, \
CFG_DEVEL_SITE, \
CFG_CERN_SITE
from invenio.dbquery import run_sql
from invenio.bibformat_engine import BibFormatObject
from invenio.search_engine import search_pattern, record_exists
from invenio.messages import gettext_set_language
from invenio.errorlib import register_exception
from invenio.urlutils import make_invenio_opener
WEBJOURNAL_OPENER = make_invenio_opener('WebJournal')
########################### REGULAR EXPRESSIONS ######################
header_pattern = re.compile('<p\s*(align=justify)??>\s*<strong>(?P<header>.*?)</strong>\s*</p>')
header_pattern2 = re.compile('<p\s*(class="articleHeader").*?>(?P<header>.*?)</p>')
para_pattern = re.compile('<p.*?>(?P<paragraph>.+?)</p>', re.DOTALL)
img_pattern = re.compile('<img.*?src=("|\')?(?P<image>\S+?)("|\'|\s).*?/>', re.DOTALL)
image_pattern = re.compile(r'''
(<a\s*href=["']?(?P<hyperlink>\S*)["']?>)?# get the link location for the image
\s*# after each tag we can have arbitrary whitespaces
<center># the image is always centered
\s*
<img\s*(class=["']imageScale["'])*?\s*src=(?P<image>\S*)\s*border=1\s*(/)?># getting the image itself
\s*
</center>
\s*
(</a>)?
(<br />|<br />|<br/>)*# the caption can be separated by any nr of line breaks
(
<b>
\s*
<i>
\s*
<center>(?P<caption>.*?)</center># getting the caption
\s*
</i>
\s*
</b>
)?''', re.DOTALL | re.VERBOSE | re.IGNORECASE )
#'
############################## FEATURED RECORDS ######################
def get_featured_records(journal_name):
"""
Returns the 'featured' records i.e. records chosen to be displayed
with an image on the main page, in the widgets section, for the
given journal.
parameter:
journal_name - (str) the name of the journal for which we want
to get the featured records
returns:
list of tuples (recid, img_url)
"""
try:
feature_file = open('%s/webjournal/%s/featured_record' % \
(CFG_ETCDIR, journal_name))
except:
return []
records = feature_file.readlines()
return [(record.split('---', 1)[0], record.split('---', 1)[1]) \
for record in records if "---" in record]
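Each line of the featured_record file thus has the form "recid---img_url", split only on the first "---"; a minimal sketch of the parsing (the URL is a made-up example):

```python
line = "2387---http://example.org/icons/2387.png\n"
# Split on the first '---' only, in case the URL itself contains dashes.
recid, img_url = line.split('---', 1)

assert recid == "2387"
assert img_url.strip() == "http://example.org/icons/2387.png"
```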
def add_featured_record(journal_name, recid, img_url):
"""
Adds the given record to the list of featured records of the given
journal.
parameters:
journal_name - (str) the name of the journal to which the record
should be added.
recid - (int) the record id of the record to be featured.
img_url - (str) a url to an image icon displayed along the
featured record.
returns:
0 if everything went ok
1 if record is already in the list
2 if other problems
"""
# Check that record is not already there
featured_records = get_featured_records(journal_name)
for featured_recid, featured_img in featured_records:
if featured_recid == str(recid):
return 1
try:
fptr = open('%s/webjournal/%s/featured_record'
% (CFG_ETCDIR, journal_name), "a")
fptr.write(str(recid) + '---' + img_url + '\n')
fptr.close()
except:
return 2
return 0
def remove_featured_record(journal_name, recid):
"""
Removes the given record from the list of featured records of the
given journal.
parameters:
journal_name - (str) the name of the journal to which the record
should be added.
recid - (int) the record id of the record to be featured.
"""
featured_records = get_featured_records(journal_name)
try:
fptr = open('%s/webjournal/%s/featured_record'
% (CFG_ETCDIR, journal_name), "w")
for featured_recid, featured_img in featured_records:
if str(featured_recid) != str(recid):
fptr.write(str(featured_recid) + '---' + featured_img + \
'\n')
fptr.close()
except:
return 1
return 0
############################ ARTICLES RELATED ########################
def get_order_dict_from_recid_list(recids, journal_name, issue_number,
newest_first=False,
newest_only=False):
"""
Returns the ordered list of input recids, for given
'issue_number'.
Since there might be several articles at the same position, the
returned structure is a dictionary with keys being order number
indicated in record metadata, and values being list of recids for
this order number (recids for one position are ordered from
highest to lowest recid).
Eg: {1: [2390, 2386, 2385],
3: [2388],
2: [2389],
4: [2387]}
Parameters:
recids - a list of all recid's that should be brought
into order
journal_name - the name of the journal
issue_number - *str* the issue_number for which we are
deriving the order
newest_first - *bool* if True, new articles should be placed
at beginning of the list. If so, their
position/order will be negative integers
newest_only - *bool* if only new articles should be returned
Returns:
ordered_records: a dictionary with the recids ordered by
keys
"""
ordered_records = {}
ordered_new_records = {}
records_without_defined_order = []
new_records_without_defined_order = []
for record in recids:
temp_rec = BibFormatObject(record)
articles_info = temp_rec.fields('773__')
for article_info in articles_info:
if article_info.get('n', '') == issue_number or \
'0' + article_info.get('n', '') == issue_number:
if article_info.has_key('c') and \
article_info['c'].isdigit():
order_number = int(article_info.get('c', ''))
if (newest_first or newest_only) and \
is_new_article(journal_name, issue_number, record):
if ordered_new_records.has_key(order_number):
ordered_new_records[order_number].append(record)
else:
ordered_new_records[order_number] = [record]
elif not newest_only:
if ordered_records.has_key(order_number):
ordered_records[order_number].append(record)
else:
ordered_records[order_number] = [record]
else:
# No order? No problem! Append it at the end.
if newest_first and is_new_article(journal_name, issue_number, record):
new_records_without_defined_order.append(record)
elif not newest_only:
records_without_defined_order.append(record)
# Append records without order at the end of the list
if records_without_defined_order:
if ordered_records:
ordered_records[max(ordered_records.keys()) + 1] = records_without_defined_order
else:
ordered_records[1] = records_without_defined_order
# Append new records without order at the end of the list of new
# records
if new_records_without_defined_order:
if ordered_new_records:
ordered_new_records[max(ordered_new_records.keys()) + 1] = new_records_without_defined_order
else:
ordered_new_records[1] = new_records_without_defined_order
# Append new records at the beginning of the list of 'old'
# records. To do so, use negative integers
if ordered_new_records:
highest_new_record_order = max(ordered_new_records.keys())
for order, new_records in ordered_new_records.iteritems():
ordered_records[- highest_new_record_order + order - 1] = new_records
for (order, records) in ordered_records.iteritems():
# Reverse so that if there are several articles at same
# positon, newest appear first
records.reverse()
return ordered_records
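Since new articles receive negative order keys, sorting the keys of the returned dictionary yields the final display order; a minimal sketch (flatten_ordered_records is a hypothetical helper, not part of this module):

```python
def flatten_ordered_records(ordered):
    """Flatten an {order: [recids]} dict into one display list.
    Ascending key order puts negatively-numbered (new) articles first."""
    return [recid for order in sorted(ordered) for recid in ordered[order]]

# A new article at position -2 comes before the regular positions 1 and 2.
assert flatten_ordered_records({1: [2390, 2386], -2: [2400], 2: [2389]}) == \
    [2400, 2390, 2386, 2389]
```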
def get_journal_articles(journal_name, issue, category,
newest_first=False, newest_only=False):
"""
Returns the recids in given category and journal, for given issue
number. The returned recids are grouped according to their 773__c
field.
Example of returned value:
{1: [2390, 2386, 2385],
3: [2388],
2: [2389],
4: [2387]}
Parameters:
journal_name - *str* the name of the journal (as used in URLs)
issue - *str* the issue. Eg: "08/2007"
category - *str* the name of the category
newest_first - *bool* if True, new articles should be placed
at beginning of the list. If so, their
position/order will be negative integers
newest_only - *bool* if only new articles should be returned
"""
use_cache = True
current_issue = get_current_issue(CFG_SITE_LANG, journal_name)
if issue_is_later_than(issue, current_issue):
# If we are working on unreleased issue, do not use caching
# mechanism
use_cache = False
if use_cache:
cached_articles = _get_cached_journal_articles(journal_name, issue, category)
if cached_articles is not None:
ordered_articles = get_order_dict_from_recid_list(cached_articles,
journal_name,
issue,
newest_first,
newest_only)
return ordered_articles
# Retrieve the list of rules that map Category -> Search Pattern.
# Keep only the rule matching our category
config_strings = get_xml_from_config(["record/rule"], journal_name)
category_to_search_pattern_rules = config_strings["record/rule"]
try:
matching_rule = [rule.split(',', 1) for rule in \
category_to_search_pattern_rules \
if rule.split(',')[0] == category]
except:
return []
recids_issue = search_pattern(p='773__n:%s -980:DELETED' % issue)
recids_rule = search_pattern(p=matching_rule[0][1])
if issue[0] == '0':
# search for 09/ and 9/
recids_issue.union_update(search_pattern(p='773__n:%s -980:DELETED' % issue.lstrip('0')))
recids_rule.intersection_update(recids_issue)
recids = [recid for recid in recids_rule if record_exists(recid) == 1]
if use_cache:
_cache_journal_articles(journal_name, issue, category, recids)
ordered_articles = get_order_dict_from_recid_list(recids,
journal_name,
issue,
newest_first,
newest_only)
return ordered_articles
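The leading-zero handling above means an issue stored as "9/2007" still matches a query for "09/2007"; the variant expansion can be sketched in isolation:

```python
issue = '09/2007'
variants = {issue}
if issue.startswith('0'):
    # Search both the zero-padded and the bare form, as done above.
    variants.add(issue.lstrip('0'))

assert variants == {'09/2007', '9/2007'}
```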
def _cache_journal_articles(journal_name, issue, category, articles):
"""
Caches given articles IDs.
"""
journal_cache_path = get_journal_article_cache_path(journal_name,
issue)
try:
journal_cache_file = open(journal_cache_path, 'r')
journal_info = cPickle.load(journal_cache_file)
journal_cache_file.close()
except (cPickle.PickleError, IOError, EOFError, ValueError):
journal_info = {}
if not journal_info.has_key('journal_articles'):
journal_info['journal_articles'] = {}
journal_info['journal_articles'][category] = articles
# Create cache directory if it does not exist
journal_cache_dir = os.path.dirname(journal_cache_path)
if not os.path.exists(journal_cache_dir):
try:
os.makedirs(journal_cache_dir)
except:
return False
journal_cache_file = open(journal_cache_path, 'w')
cPickle.dump(journal_info, journal_cache_file)
journal_cache_file.close()
return True
def _get_cached_journal_articles(journal_name, issue, category):
"""
Retrieve the articles IDs cached for this journal.
Returns None if cache does not exist or more than 5 minutes old
"""
# Check if our cache is more or less up-to-date (not more than 5
# minutes old)
try:
journal_cache_path = get_journal_article_cache_path(journal_name,
issue)
last_update = os.path.getctime(journal_cache_path)
except Exception:
return None
now = time.time()
if (last_update + 5*60) < now:
return None
# Get from cache
try:
journal_cache_file = open(journal_cache_path, 'r')
journal_info = cPickle.load(journal_cache_file)
journal_articles = journal_info.get('journal_articles', {}).get(category, None)
journal_cache_file.close()
except (cPickle.PickleError, IOError, EOFError, ValueError):
journal_articles = None
return journal_articles
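The five-minute freshness test used above can be isolated into a small helper (a sketch; cache_is_fresh is a hypothetical name, not part of this module):

```python
import os
import time

def cache_is_fresh(path, max_age=5 * 60):
    """True if 'path' exists and was created less than max_age seconds ago."""
    try:
        return time.time() - os.path.getctime(path) < max_age
    except OSError:
        return False

# A missing cache file is never fresh.
assert not cache_is_fresh('/nonexistent/webjournal/cache')
```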
def is_new_article(journal_name, issue, recid):
"""
Check if given article should be considered as new or not.
New articles are articles that have never appeared in older issues
than given one.
"""
article_found_in_older_issue = False
temp_rec = BibFormatObject(recid)
publication_blocks = temp_rec.fields('773__')
for publication_block in publication_blocks:
this_issue_number, this_issue_year = issue.split('/')
issue_number, issue_year = publication_block.get('n', '/').split('/', 1)
if int(issue_year) < int(this_issue_year):
# Found an older issue
article_found_in_older_issue = True
break
elif int(issue_year) == int(this_issue_year) and \
int(issue_number) < int(this_issue_number):
# Found an older issue
article_found_in_older_issue = True
break
return not article_found_in_older_issue
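The year-then-number comparison above can be sketched as a standalone predicate (a hypothetical helper; issues are formatted "NN/YYYY" as elsewhere in this module):

```python
def issue_is_older(candidate, reference):
    """True if 'candidate' was published before 'reference'.
    Compare the year first, then the issue number within the year."""
    c_num, c_year = candidate.split('/')
    r_num, r_year = reference.split('/')
    return (int(c_year), int(c_num)) < (int(r_year), int(r_num))

assert issue_is_older('08/2007', '01/2008')
assert not issue_is_older('09/2007', '08/2007')
```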
############################ CATEGORIES RELATED ######################
def get_journal_categories(journal_name, issue=None):
"""
List the categories for the given journal and issue.
Returns categories in same order as in config file.
Parameters:
journal_name - *str* the name of the journal (as used in URLs)
issue - *str* the issue. Eg:'08/2007'. If None, consider
all categories defined in journal config
"""
categories = []
current_issue = get_current_issue(CFG_SITE_LANG, journal_name)
config_strings = get_xml_from_config(["record/rule"], journal_name)
all_categories = [rule.split(',')[0] for rule in \
config_strings["record/rule"]]
if issue is None:
return all_categories
for category in all_categories:
recids = get_journal_articles(journal_name,
issue,
category)
if len(recids.keys()) > 0:
categories.append(category)
return categories
def get_category_query(journal_name, category):
"""
Returns the category definition for the given category and journal name
Parameters:
journal_name - *str* the name of the journal (as used in URLs)
category - *str* a category name, as found in the XML config
"""
config_strings = get_xml_from_config(["record/rule"], journal_name)
category_to_search_pattern_rules = config_strings["record/rule"]
try:
matching_rule = [rule.split(',', 1)[1].strip() for rule in \
category_to_search_pattern_rules \
if rule.split(',')[0] == category]
except:
return None
return matching_rule[0]
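Each rule line in the journal config thus has the form "category,search pattern", split only on the first comma so that patterns may themselves contain commas; a minimal sketch (the rule string is a made-up example):

```python
rule = "News,980__a:NEWSARTICLE"
# Split on the first comma only; the search pattern keeps any later commas.
category, pattern = [part.strip() for part in rule.split(',', 1)]

assert category == "News"
assert pattern == "980__a:NEWSARTICLE"
```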
######################### JOURNAL CONFIG VARS ######################
cached_parsed_xml_config = {}
def get_xml_from_config(nodes, journal_name):
"""
Returns values from the journal configuration file.
The needed values can be specified by node name, or by a hierarchy
of nodes names using '/' as character to mean 'descendant of'.
Eg. 'record/rule' to get all the values of 'rule' tags inside the
'record' node
Returns a dictionary with a key for each query and a list of
strings (innerXml) results for each key.
Has a special field "config_fetching_error" that returns an error when
something has gone wrong.
"""
# Get and open the config file
results = {}
if cached_parsed_xml_config.has_key(journal_name):
config_file = cached_parsed_xml_config[journal_name]
else:
config_path = '%s/webjournal/%s/%s-config.xml' % \
(CFG_ETCDIR, journal_name, journal_name)
config_file = minidom.Document
try:
config_file = minidom.parse("%s" % config_path)
except:
# todo: raise exception "error: no config file found"
results["config_fetching_error"] = "could not find config file"
return results
else:
cached_parsed_xml_config[journal_name] = config_file
for node_path in nodes:
node = config_file
for node_path_component in node_path.split('/'):
# pylint: disable=E1103
# The node variable can be rewritten in the loop and therefore
# its type can change.
if node != config_file and node.length > 0:
# We have a NodeList object: consider only first child
node = node.item(0)
# pylint: enable=E1103
try:
node = node.getElementsByTagName(node_path_component)
except:
# WARNING, config did not have such value
node = []
break
results[node_path] = []
for result in node:
try:
result_string = result.firstChild.toxml(encoding="utf-8")
except:
# WARNING, config did not have such value
continue
results[node_path].append(result_string)
return results
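A hierarchical query like "record/rule" walks the DOM one tag name at a time; the same lookup can be sketched directly with minidom (the XML is a made-up fragment in the shape of a journal config):

```python
from xml.dom import minidom

xml = "<config><record><rule>News,980__a:NEWSARTICLE</rule></record></config>"
doc = minidom.parseString(xml)
# Descend into the first 'record' node, then collect every 'rule' child.
record_node = doc.getElementsByTagName('record')[0]
rules = [node.firstChild.toxml() for node in
         record_node.getElementsByTagName('rule')]

assert rules == ["News,980__a:NEWSARTICLE"]
```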
def get_journal_issue_field(journal_name):
"""
Returns the MARC field in which this journal expects to find
the issue number. Read this from the journal config file
Parameters:
journal_name - *str* the name of the journal (as used in URLs)
"""
config_strings = get_xml_from_config(["issue_number"], journal_name)
try:
issue_field = config_strings["issue_number"][0]
except:
issue_field = '773__n'
return issue_field
def get_journal_css_url(journal_name, type='screen'):
"""
Returns URL to this journal's CSS.
Parameters:
journal_name - *str* the name of the journal (as used in URLs)
type - *str* 'screen' or 'print', depending on the kind
of CSS
"""
config_strings = get_xml_from_config([type], journal_name)
css_path = ''
try:
css_path = config_strings[type][0]
except Exception:
register_exception(req=None,
suffix="No css file for journal %s. Is this right?" % \
journal_name)
return CFG_SITE_URL + '/' + css_path
def get_journal_submission_params(journal_name):
"""
Returns the (doctype, identifier element, identifier field) for
the submission of articles in this journal, so that it is possible
to build direct submission links.
Parameter:
journal_name - *str* the name of the journal (as used in URLs)
"""
doctype = ''
identifier_field = ''
identifier_element = ''
config_strings = get_xml_from_config(["submission/doctype"], journal_name)
if config_strings.get('submission/doctype', ''):
doctype = config_strings['submission/doctype'][0]
config_strings = get_xml_from_config(["submission/identifier_element"], journal_name)
if config_strings.get('submission/identifier_element', ''):
identifier_element = config_strings['submission/identifier_element'][0]
config_strings = get_xml_from_config(["submission/identifier_field"], journal_name)
if config_strings.get('submission/identifier_field', ''):
identifier_field = config_strings['submission/identifier_field'][0]
else:
identifier_field = '037__a'
return (doctype, identifier_element, identifier_field)
def get_journal_draft_keyword_to_remove(journal_name):
"""
Returns the keyword that should be removed from the article
metadata in order to move the article from Draft to Ready
"""
config_strings = get_xml_from_config(["draft_keyword"], journal_name)
if config_strings.get('draft_keyword', ''):
return config_strings['draft_keyword'][0]
return ''
def get_journal_alert_sender_email(journal_name):
"""
Returns the email address that should be used as the sender of the
alert email.
If not specified, use CFG_SITE_SUPPORT_EMAIL
"""
config_strings = get_xml_from_config(["alert_sender"], journal_name)
if config_strings.get('alert_sender', ''):
return config_strings['alert_sender'][0]
return CFG_SITE_SUPPORT_EMAIL
def get_journal_alert_recipient_email(journal_name):
"""
Returns the default email address(es) of the recipients of the
alert email, as a string of comma-separated emails.
"""
if CFG_DEVEL_SITE:
# To be on the safe side, do not return the default alert recipients.
return ''
config_strings = get_xml_from_config(["alert_recipients"], journal_name)
if config_strings.get('alert_recipients', ''):
return config_strings['alert_recipients'][0]
return ''
def get_journal_collection_to_refresh_on_release(journal_name):
"""
Returns the list of collections to update (WebColl) upon release
of an issue.
"""
from invenio.search_engine import collection_reclist_cache
config_strings = get_xml_from_config(["update_on_release/collection"], journal_name)
return [coll for coll in config_strings.get('update_on_release/collection', []) if \
collection_reclist_cache.cache.has_key(coll)]
def get_journal_index_to_refresh_on_release(journal_name):
"""
Returns the list of indexes to update (BibIndex) upon release of
an issue.
"""
- from invenio.bibindex_engine import get_index_id_from_index_name
+ from invenio.bibindex_engine_utils import get_index_id_from_index_name
config_strings = get_xml_from_config(["update_on_release/index"], journal_name)
return [index for index in config_strings.get('update_on_release/index', []) if \
get_index_id_from_index_name(index) != '']
def get_journal_template(template, journal_name, ln=CFG_SITE_LANG):
"""
Returns the journal templates name for the given template type
Raise an exception if template cannot be found.
"""
from invenio.webjournal_config import \
InvenioWebJournalTemplateNotFoundError
config_strings = get_xml_from_config([template], journal_name)
try:
index_page_template = 'webjournal' + os.sep + \
config_strings[template][0]
except:
raise InvenioWebJournalTemplateNotFoundError(ln,
journal_name,
template)
return index_page_template
def get_journal_name_intl(journal_name, ln=CFG_SITE_LANG):
"""
Returns the nice name of the journal, translated if possible
"""
_ = gettext_set_language(ln)
config_strings = get_xml_from_config(["niceName"], journal_name)
if config_strings.get('niceName', ''):
return _(config_strings['niceName'][0])
return ''
def get_journal_languages(journal_name):
"""
Returns the list of languages defined for this journal
"""
config_strings = get_xml_from_config(["languages"], journal_name)
if config_strings.get('languages', ''):
return [ln.strip() for ln in \
config_strings['languages'][0].split(',')]
return []
def get_journal_issue_grouping(journal_name):
"""
Returns the number of issues that are typically released at the
same time.
This is used if, every two weeks, you release an issue that should
cover the next 2 weeks (e.g. at week 16, you release an issue
named '16-17/2009').
This number helps the admin interface guess how to release the
next issue (can be overridden by the user).
"""
config_strings = get_xml_from_config(["issue_grouping"], journal_name)
if config_strings.get('issue_grouping', ''):
issue_grouping = config_strings['issue_grouping'][0]
if issue_grouping.isdigit() and int(issue_grouping) > 0:
return int(issue_grouping)
return 1
def get_journal_nb_issues_per_year(journal_name):
"""
Returns the default number of issues per year for this journal.
This number helps the admin interface guess the next issue number
(can be overridden by the user).
"""
config_strings = get_xml_from_config(["issues_per_year"], journal_name)
if config_strings.get('issues_per_year', ''):
issues_per_year = config_strings['issues_per_year'][0]
if issues_per_year.isdigit() and int(issues_per_year) > 0:
return int(issues_per_year)
return 52
def get_journal_preferred_language(journal_name, ln):
"""
Returns the most adequate language to display the journal, given a
language.
"""
languages = get_journal_languages(journal_name)
if ln in languages:
return ln
elif CFG_SITE_LANG in languages:
return CFG_SITE_LANG
elif languages:
return languages[0]
else:
return CFG_SITE_LANG
def get_unreleased_issue_hiding_mode(journal_name):
"""
Returns how unreleased issue should be treated. Can be one of the
following string values:
'future' - only future unreleased issues are hidden. Past
unreleased ones can be viewed
'all' - any unreleased issue (past and future) has to be
hidden
'none' - no unreleased issue is hidden
"""
config_strings = get_xml_from_config(["hide_unreleased_issues"], journal_name)
if config_strings.get('hide_unreleased_issues', ''):
hide_unreleased_issues = config_strings['hide_unreleased_issues'][0]
if hide_unreleased_issues in ['future', 'all', 'none']:
return hide_unreleased_issues
return 'all'
def get_first_issue_from_config(journal_name):
"""
Returns the first issue as defined in the config. This should only
be useful when no issue has been released yet.
If not specified, returns the issue made of the current week number
and year.
"""
config_strings = get_xml_from_config(["first_issue"], journal_name)
if config_strings.has_key('first_issue'):
return config_strings['first_issue'][0]
return time.strftime("%W/%Y", time.localtime())
######################## TIME / ISSUE FUNCTIONS ######################
def get_current_issue(ln, journal_name):
"""
Returns the current issue of a journal as a string.
The current issue is the latest released issue.
"""
journal_id = get_journal_id(journal_name, ln)
try:
current_issue = run_sql("""SELECT issue_number
FROM jrnISSUE
WHERE date_released <= NOW()
AND id_jrnJOURNAL=%s
ORDER BY date_released DESC
LIMIT 1""",
(journal_id,))[0][0]
except:
# start the first journal ever
current_issue = get_first_issue_from_config(journal_name)
run_sql("""INSERT INTO jrnISSUE (id_jrnJOURNAL, issue_number, issue_display)
VALUES(%s, %s, %s)""",
(journal_id,
current_issue,
current_issue))
return current_issue
def get_all_released_issues(journal_name):
"""
Returns the list of released issues, ordered by release date.
Note that it only includes the issues that are considered as
released in the DB: it will not, for example, include articles that
have been imported into the system but not released.
"""
journal_id = get_journal_id(journal_name)
res = run_sql("""SELECT issue_number
FROM jrnISSUE
WHERE id_jrnJOURNAL = %s
AND UNIX_TIMESTAMP(date_released) != 0
ORDER BY date_released DESC""",
(journal_id,))
if res:
return [row[0] for row in res]
else:
return []
def get_next_journal_issues(current_issue_number, journal_name, n=2):
"""
This function suggests the 'n' next issue numbers
"""
number, year = current_issue_number.split('/', 1)
number = int(number)
year = int(year)
number_issues_per_year = get_journal_nb_issues_per_year(journal_name)
next_issues = [make_issue_number(journal_name,
((number - 1 + i) % (number_issues_per_year)) + 1,
year + ((number - 1 + i) / number_issues_per_year)) \
for i in range(1, n + 1)]
return next_issues
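The wrap-around arithmetic in get_next_journal_issues() can be illustrated with a standalone sketch. This is an illustration only, not part of the module: it assumes 52 issues per year (the real figure comes from the journal configuration), and it uses Python 3 floor division, whereas the module above is Python 2.

```python
# Standalone sketch of the issue-number arithmetic above: issue
# numbers wrap around at the configured number of issues per year,
# carrying over into the next year.
def next_issue_numbers(number, year, issues_per_year=52, n=2):
    """Return the n (number, year) pairs following number/year."""
    return [((number - 1 + i) % issues_per_year + 1,
             year + (number - 1 + i) // issues_per_year)
            for i in range(1, n + 1)]
```

For example, the two issues after 51/2009 are 52/2009 and 1/2010.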
def get_grouped_issues(journal_name, issue_number):
"""
Returns all the issues grouped with a given one.
Issues are sorted from the oldest to the newest.
"""
grouped_issues = []
journal_id = get_journal_id(journal_name, CFG_SITE_LANG)
issue_display = get_issue_number_display(issue_number, journal_name)
res = run_sql("""SELECT issue_number
FROM jrnISSUE
WHERE id_jrnJOURNAL=%s AND issue_display=%s""",
(journal_id,
issue_display))
if res:
grouped_issues = [row[0] for row in res]
grouped_issues.sort(compare_issues)
return grouped_issues
def compare_issues(issue1, issue2):
"""
Comparison function for issues.
Returns:
-1 if issue1 is older than issue2
0 if issues are equal
1 if issue1 is newer than issue2
"""
issue1_number, issue1_year = issue1.split('/', 1)
issue2_number, issue2_year = issue2.split('/', 1)
if int(issue1_year) == int(issue2_year):
return cmp(int(issue1_number), int(issue2_number))
else:
return cmp(int(issue1_year), int(issue2_year))
def issue_is_later_than(issue1, issue2):
"""
Returns True if issue1 is later than issue2
"""
issue_number1, issue_year1 = issue1.split('/', 1)
issue_number2, issue_year2 = issue2.split('/', 1)
if int(issue_year1) > int(issue_year2):
return True
elif int(issue_year1) == int(issue_year2):
return int(issue_number1) > int(issue_number2)
else:
return False
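The two comparison helpers above are Python 2 idioms (cmp() no longer exists in Python 3). A sketch of an equivalent, key-based ordering for 'NN/YYYY' issue strings, for illustration only and not part of the module:

```python
# Ordering 'NN/YYYY' issue strings by (year, number) tuples gives
# the same oldest-to-newest sort as compare_issues() without cmp().
def issue_sort_key(issue):
    number, year = issue.split('/', 1)
    return (int(year), int(number))

issues = ['10/2009', '02/2010', '52/2009']
issues.sort(key=issue_sort_key)  # oldest first
```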
def get_issue_number_display(issue_number, journal_name,
ln=CFG_SITE_LANG):
"""
Returns the display string for a given issue number.
"""
journal_id = get_journal_id(journal_name, ln)
issue_display = run_sql("""SELECT issue_display
FROM jrnISSUE
WHERE issue_number=%s
AND id_jrnJOURNAL=%s""",
(issue_number, journal_id))
if issue_display:
return issue_display[0][0]
else:
# Not yet released...
return issue_number
def make_issue_number(journal_name, number, year, for_url_p=False):
"""
Creates a normalized issue number representation with given issue
number (as int or str) and year (as int or str).
Reverses the year and number if for_url_p is True.
"""
number_issues_per_year = get_journal_nb_issues_per_year(journal_name)
precision = len(str(number_issues_per_year))
number = int(str(number))
year = int(str(year))
if for_url_p:
return ("%i/%0" + str(precision) + "i") % \
(year, number)
else:
return ("%0" + str(precision) + "i/%i") % \
(number, year)
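The zero padding performed by make_issue_number() depends on the issues-per-year figure. A minimal standalone sketch, with the assumption that the per-year count is passed in directly instead of being read from the journal config:

```python
# Minimal sketch of make_issue_number()'s padding rule: the issue
# number is zero-padded to the width of the issues-per-year figure
# (52 -> width 2, 365 -> width 3); for_url reverses the order.
def format_issue(number, year, issues_per_year, for_url=False):
    width = len(str(issues_per_year))
    if for_url:
        return "%i/%0*i" % (int(year), width, int(number))
    return "%0*i/%i" % (width, int(number), int(year))
```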
def get_release_datetime(issue, journal_name, ln=CFG_SITE_LANG):
"""
Gets the date at which an issue was released from the DB.
Returns None if issue has not yet been released.
See issue_to_datetime() to get the *theoretical* release time of an
issue.
"""
journal_id = get_journal_id(journal_name, ln)
try:
release_date = run_sql("""SELECT date_released
FROM jrnISSUE
WHERE issue_number=%s
AND id_jrnJOURNAL=%s""",
(issue, journal_id))[0][0]
except:
return None
if release_date:
return release_date
else:
return None
def get_announcement_datetime(issue, journal_name, ln=CFG_SITE_LANG):
"""
Get the date at which an issue was announced through the alert system.
Return None if not announced
"""
journal_id = get_journal_id(journal_name, ln)
try:
announce_date = run_sql("""SELECT date_announced
FROM jrnISSUE
WHERE issue_number=%s
AND id_jrnJOURNAL=%s""",
(issue, journal_id))[0][0]
except:
return None
if announce_date:
return announce_date
else:
return None
def datetime_to_issue(issue_datetime, journal_name):
"""
Returns the issue corresponding to the given datetime object.
If issue_datetime is too far in the future or in the past, gives
the best possible matching issue, or None if it does not seem to
exist.
#If issue_datetime is too far in the future, return the latest
#released issue.
#If issue_datetime is too far in the past, return None
Parameters:
issue_datetime - *datetime* date of the issue to be retrieved
journal_name - *str* the name of the journal (as used in URLs)
"""
issue_number = None
journal_id = get_journal_id(journal_name)
# Try to discover how many days an issue is valid
nb_issues_per_year = get_journal_nb_issues_per_year(journal_name)
this_year_number_of_days = 365
if calendar.isleap(issue_datetime.year):
this_year_number_of_days = 366
issue_day_lifetime = math.ceil(float(this_year_number_of_days)/nb_issues_per_year)
res = run_sql("""SELECT issue_number, date_released
FROM jrnISSUE
WHERE date_released < %s
AND id_jrnJOURNAL = %s
ORDER BY date_released DESC LIMIT 1""",
(issue_datetime, journal_id))
if res and res[0][1]:
issue_number = res[0][0]
issue_release_date = res[0][1]
# Check that the result is not too far in the future:
if issue_release_date + datetime.timedelta(issue_day_lifetime) < issue_datetime:
# In principle, the latest issue will no longer be valid
# at that time
return None
else:
# Mmh, are we too far in the past? This can happen in the case
# of articles that have been imported in the system but never
# considered as 'released' in the database. So we should still
# try to approximate/match an issue:
if round(issue_day_lifetime) in [6, 7, 8]:
# Weekly issues. We can use this information to better
# match the issue number
issue_nb = int(issue_datetime.strftime('%W')) # = week number
else:
# Compute the number of days since beginning of year, and
# divide by the lifetime of an issue: we get the
# approximate issue_number
issue_nb = math.ceil((int(issue_datetime.strftime('%j')) / issue_day_lifetime))
issue_number = ("%0" + str(len(str(nb_issues_per_year)))+ "i/%i") % (issue_nb, issue_datetime.year)
# Now check if this issue exists in the system for this
# journal
if not get_journal_categories(journal_name, issue_number):
# This issue did not exist
return None
return issue_number
DAILY = 1
WEEKLY = 2
MONTHLY = 3
def issue_to_datetime(issue_number, journal_name, granularity=None):
"""
Returns the *theoretical* date of release for given issue: useful
if you release on Friday, but the issue date of the journal
should correspond to the next Monday.
This will correspond to the next day/week/month, depending on the
number of issues per year (or the 'granularity' if specified) and
the release time (if close to the end of a period defined by the
granularity, consider next period since release is made a bit in
advance).
See get_release_datetime() for the *real* release time of an issue
THIS FUNCTION SHOULD ONLY BE USED FOR INFORMATIVE DISPLAY PURPOSES,
AS IT GIVES APPROXIMATE RESULTS. Do not use it to make decisions.
Parameters:
issue_number - *str* issue number to consider
journal_name - *str* the name of the journal (as used in URLs)
granularity - *int* the granularity to consider
"""
# If we have released, we can use this information. Otherwise we
# have to approximate.
issue_date = get_release_datetime(issue_number, journal_name)
if not issue_date:
# Approximate release date
number, year = issue_number.split('/')
number = int(number)
year = int(year)
nb_issues_per_year = get_journal_nb_issues_per_year(journal_name)
this_year_number_of_days = 365
if calendar.isleap(year):
this_year_number_of_days = 366
issue_day_lifetime = float(this_year_number_of_days)/nb_issues_per_year
# Compute from beginning of the year
issue_date = datetime.datetime(year, 1, 1) + \
datetime.timedelta(days=int(round((number - 1) * issue_day_lifetime)))
# Okay, but if last release is not too far in the past, better
# compute from the release.
current_issue = get_current_issue(CFG_SITE_LANG, journal_name)
current_issue_time = get_release_datetime(current_issue, journal_name)
if current_issue_time.year == issue_date.year:
current_issue_number, current_issue_year = current_issue.split('/')
current_issue_number = int(current_issue_number)
# Compute from last release
issue_date = current_issue_time + \
datetime.timedelta(days=int((number - current_issue_number) * issue_day_lifetime))
# If granularity is not specified, deduce from config
if granularity is None:
nb_issues_per_year = get_journal_nb_issues_per_year(journal_name)
if nb_issues_per_year > 250:
granularity = DAILY
elif nb_issues_per_year > 40:
granularity = WEEKLY
else:
granularity = MONTHLY
# Now we can adapt the date to match the granularity
if granularity == DAILY:
if issue_date.hour >= 15:
# If released after 3pm, consider it is the issue of the next
# day
issue_date = issue_date + datetime.timedelta(days=1)
elif granularity == WEEKLY:
(year, week_nb, day_nb) = issue_date.isocalendar()
if day_nb > 4:
# If released on Fri, Sat or Sun, consider that it is next
# week's issue.
issue_date = issue_date + datetime.timedelta(weeks=1)
# Get first day of the week
issue_date = issue_date - datetime.timedelta(days=issue_date.weekday())
else:
if issue_date.day > 22:
# If released last week of the month, consider release for
# next month
issue_date = issue_date.replace(month=issue_date.month+1)
date_string = issue_date.strftime("%Y %m 1")
issue_date = datetime.datetime(*(time.strptime(date_string, "%Y %m %d")[0:6]))
return issue_date
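The WEEKLY branch above snaps a release date to its issue's Monday, counting Friday-to-Sunday releases as next week's issue. A pure-datetime illustration of that rule, independent of the module:

```python
import datetime

# Sketch of the WEEKLY snapping rule: releases on Fri/Sat/Sun count
# as next week's issue, then the date is moved back to that week's
# Monday.
def snap_weekly(d):
    if d.isocalendar()[2] > 4:      # ISO weekday: Fri=5, Sat=6, Sun=7
        d += datetime.timedelta(weeks=1)
    return d - datetime.timedelta(days=d.weekday())
```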
def get_number_of_articles_for_issue(issue, journal_name, ln=CFG_SITE_LANG):
"""
Function that returns a dictionary with all categories and number of
articles in each category.
"""
all_articles = {}
categories = get_journal_categories(journal_name, issue)
for category in categories:
all_articles[category] = len(get_journal_articles(journal_name, issue, category))
return all_articles
########################## JOURNAL RELATED ###########################
def get_journal_info_path(journal_name):
"""
Returns the path to the info file of the given journal. The info
file should be used to get information about a journal when the
database is not available.
Returns None if the path cannot be determined.
"""
# We must make sure we don't try to read outside of webjournal
# cache dir
info_path = os.path.abspath("%s/webjournal/%s/info.dat" % \
(CFG_CACHEDIR, journal_name))
if info_path.startswith(CFG_CACHEDIR + '/webjournal/'):
return info_path
else:
return None
def get_journal_article_cache_path(journal_name, issue):
"""
Returns the path to the cache file of the articles of a given issue.
Returns None if the path cannot be determined.
"""
# We must make sure we don't try to read outside of webjournal
# cache dir
issue_number, year = issue.replace('/', '_').split('_', 1)
cache_path = os.path.abspath("%s/webjournal/%s/%s/%s/articles_cache.dat" % \
(CFG_CACHEDIR, journal_name,
year, issue_number))
if cache_path.startswith(CFG_CACHEDIR + '/webjournal/'):
return cache_path
else:
return None
def get_journal_id(journal_name, ln=CFG_SITE_LANG):
"""
Get the id for this journal from the DB. If DB is down, try to get
from cache.
"""
journal_id = None
from invenio.webjournal_config import InvenioWebJournalJournalIdNotFoundDBError
if CFG_ACCESS_CONTROL_LEVEL_SITE == 2:
# do not connect to the database as the site is closed for
# maintenance:
journal_info_path = get_journal_info_path(journal_name)
try:
journal_info_file = open(journal_info_path, 'r')
journal_info = cPickle.load(journal_info_file)
journal_id = journal_info.get('journal_id', None)
except cPickle.PickleError, e:
journal_id = None
except IOError:
journal_id = None
except ValueError:
journal_id = None
else:
try:
res = run_sql("SELECT id FROM jrnJOURNAL WHERE name=%s",
(journal_name,))
if len(res) > 0:
journal_id = res[0][0]
except OperationalError, e:
# Cannot connect to database. Try to read from cache
journal_info_path = get_journal_info_path(journal_name)
try:
journal_info_file = open(journal_info_path, 'r')
journal_info = cPickle.load(journal_info_file)
journal_id = journal_info['journal_id']
except cPickle.PickleError, e:
journal_id = None
except IOError:
journal_id = None
except ValueError:
journal_id = None
if journal_id is None:
raise InvenioWebJournalJournalIdNotFoundDBError(ln, journal_name)
return journal_id
def guess_journal_name(ln, journal_name=None):
"""
Tries to guess which journal the user was looking for on the server
when no journal name is provided, or when the given journal name
does not match the case of the original journal.
"""
from invenio.webjournal_config import InvenioWebJournalNoJournalOnServerError
from invenio.webjournal_config import InvenioWebJournalNoNameError
journals_id_and_names = get_journals_ids_and_names()
if len(journals_id_and_names) == 0:
raise InvenioWebJournalNoJournalOnServerError(ln)
elif not journal_name and \
journals_id_and_names[0].has_key('journal_name'):
return journals_id_and_names[0]['journal_name']
elif len(journals_id_and_names) > 0:
possible_journal_names = [journal_id_and_name['journal_name'] for journal_id_and_name \
in journals_id_and_names \
if journal_id_and_name.get('journal_name', '').lower() == journal_name.lower()]
if possible_journal_names:
return possible_journal_names[0]
else:
raise InvenioWebJournalNoNameError(ln)
else:
raise InvenioWebJournalNoNameError(ln)
def get_journals_ids_and_names():
"""
Returns the list of existing journal IDs and names. Tries to read
from the DB, or from the cache if the DB is not accessible.
"""
journals = []
if CFG_ACCESS_CONTROL_LEVEL_SITE == 2:
# do not connect to the database as the site is closed for
# maintenance:
files = os.listdir("%s/webjournal" % CFG_CACHEDIR)
info_files = [path + os.sep + 'info.dat' for path in files if \
os.path.isdir(path) and \
os.path.exists(path + os.sep + 'info.dat')]
for info_file in info_files:
try:
journal_info_file = open(info_file, 'r')
journal_info = cPickle.load(journal_info_file)
journal_id = journal_info.get('journal_id', None)
journal_name = journal_info.get('journal_name', None)
current_issue = journal_info.get('current_issue', None)
if journal_id is not None and \
journal_name is not None:
journals.append({'journal_id': journal_id,
'journal_name': journal_name,
'current_issue': current_issue})
except cPickle.PickleError, e:
# Well, can't do anything...
continue
except IOError:
# Well, can't do anything...
continue
except ValueError:
continue
else:
try:
res = run_sql("SELECT id, name FROM jrnJOURNAL ORDER BY id")
for journal_id, journal_name in res:
journals.append({'journal_id': journal_id,
'journal_name': journal_name})
except OperationalError, e:
# Cannot connect to database. Try to read from cache
files = os.listdir("%s/webjournal" % CFG_CACHEDIR)
info_files = [path + os.sep + 'info.dat' for path in files if \
os.path.isdir(path) and \
os.path.exists(path + os.sep + 'info.dat')]
for info_file in info_files:
try:
journal_info_file = open(info_file, 'r')
journal_info = cPickle.load(journal_info_file)
journal_id = journal_info.get('journal_id', None)
journal_name = journal_info.get('journal_name', None)
current_issue = journal_info.get('current_issue', None)
if journal_id is not None and \
journal_name is not None:
journals.append({'journal_id': journal_id,
'journal_name': journal_name,
'current_issue': current_issue})
except cPickle.PickleError, e:
# Well, can't do anything...
continue
except IOError:
# Well, can't do anything...
continue
except ValueError:
continue
return journals
def parse_url_string(uri):
"""
Centralized function to parse any URL string given in
WebJournal. Useful to retrieve the current category, journal,
etc. from within format elements.
The WebJournal interface handler should already have cleaned the
URI beforehand, so that the journal name exists, the issue number
is correct, etc. The only remaining problem might be due to the
capitalization of the journal name in contact, search and popup
pages, so clean the journal name. Note that the language is also
as returned from the URL, which might need to be filtered to match
the available languages (WebJournal elements can rely on bfo.lang
to retrieve the washed language).
returns:
args: all arguments in dict form
"""
args = {'journal_name' : '',
'issue_year' : '',
'issue_number' : None,
'issue' : None,
'category' : '',
'recid' : -1,
'verbose' : 0,
'ln' : CFG_SITE_LANG,
'archive_year' : None,
'archive_search': ''}
if not uri.startswith('/journal'):
# Mmh, incorrect context. Still, keep language if available
url_params = urlparse(uri)[4]
args['ln'] = dict([part.split('=') for part in url_params.split('&') \
if len(part.split('=')) == 2]).get('ln', CFG_SITE_LANG)
return args
# Take everything after journal and before first question mark
splitted_uri = uri.split('journal', 1)
second_part = splitted_uri[1]
splitted_uri = second_part.split('?')
uri_middle_part = splitted_uri[0]
uri_arguments = ''
if len(splitted_uri) > 1:
uri_arguments = splitted_uri[1]
arg_list = uri_arguments.split("&")
args['ln'] = CFG_SITE_LANG
args['verbose'] = 0
for arg_pair in arg_list:
arg_and_value = arg_pair.split('=')
if len(arg_and_value) == 2:
if arg_and_value[0] == 'ln':
args['ln'] = arg_and_value[1]
elif arg_and_value[0] == 'verbose' and \
arg_and_value[1].isdigit():
args['verbose'] = int(arg_and_value[1])
elif arg_and_value[0] == 'archive_year' and \
arg_and_value[1].isdigit():
args['archive_year'] = int(arg_and_value[1])
elif arg_and_value[0] == 'archive_search':
args['archive_search'] = arg_and_value[1]
elif arg_and_value[0] == 'name':
args['journal_name'] = guess_journal_name(args['ln'],
arg_and_value[1])
arg_list = uri_middle_part.split("/")
if len(arg_list) > 1 and arg_list[1] not in ['search', 'contact', 'popup']:
args['journal_name'] = urllib.unquote(arg_list[1])
elif arg_list[1] not in ['search', 'contact', 'popup']:
args['journal_name'] = guess_journal_name(args['ln'],
args['journal_name'])
cur_issue = get_current_issue(args['ln'], args['journal_name'])
if len(arg_list) > 2:
try:
args['issue_year'] = int(urllib.unquote(arg_list[2]))
except:
args['issue_year'] = int(cur_issue.split('/')[1])
else:
args['issue'] = cur_issue
args['issue_year'] = int(cur_issue.split('/')[1])
args['issue_number'] = int(cur_issue.split('/')[0])
if len(arg_list) > 3:
try:
args['issue_number'] = int(urllib.unquote(arg_list[3]))
except:
args['issue_number'] = int(cur_issue.split('/')[0])
args['issue'] = make_issue_number(args['journal_name'],
args['issue_number'],
args['issue_year'])
if len(arg_list) > 4:
args['category'] = urllib.unquote(arg_list[4])
if len(arg_list) > 5:
try:
args['recid'] = int(urllib.unquote(arg_list[5]))
except:
pass
args['ln'] = get_journal_preferred_language(args['journal_name'],
args['ln'])
# FIXME : wash arguments?
return args
def make_journal_url(current_uri, custom_parameters=None):
"""
Create a URL, using the current URI and overriding values
with the given custom_parameters
Parameters:
current_uri - *str* the current full URI
custom_parameters - *dict* a dictionary of parameters that
should override those of current_uri
"""
if not custom_parameters:
custom_parameters = {}
default_params = parse_url_string(current_uri)
for key, value in custom_parameters.iteritems():
# Override default params with custom params
default_params[key] = str(value)
uri = CFG_SITE_URL + '/journal/'
if default_params['journal_name']:
uri += urllib.quote(default_params['journal_name']) + '/'
if default_params['issue_year'] and default_params['issue_number']:
uri += make_issue_number(default_params['journal_name'],
default_params['issue_number'],
default_params['issue_year'],
for_url_p=True) + '/'
if default_params['category']:
uri += urllib.quote(default_params['category'])
if default_params['recid'] and \
default_params['recid'] != -1:
uri += '/' + str(default_params['recid'])
printed_question_mark = False
if default_params['ln']:
uri += '?ln=' + default_params['ln']
printed_question_mark = True
if default_params['verbose'] != 0:
if printed_question_mark:
uri += '&amp;verbose=' + str(default_params['verbose'])
else:
uri += '?verbose=' + str(default_params['verbose'])
return uri
############################ HTML CACHING FUNCTIONS ############################
def cache_index_page(html, journal_name, category, issue, ln):
"""
Caches the index page main area of a Bulletin
(the right-hand menu cannot be cached)
@return: tuple (path to cache file (or None), message)
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
category = category.replace(" ", "")
cache_path = os.path.abspath('%s/webjournal/%s/%s/%s/index_%s_%s.html' % \
(CFG_CACHEDIR, journal_name,
year, issue_number, category,
ln))
if not cache_path.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop caching
return (None, 'Trying to cache at wrong location: %s' % cache_path)
cache_path_dir = os.path.dirname(cache_path)
try:
if not os.path.isdir(cache_path_dir):
os.makedirs(cache_path_dir)
cached_file = open(cache_path, "w")
cached_file.write(html)
cached_file.close()
except Exception, e:
register_exception(req=None,
prefix="Could not store index page cache",
alert_admin=True)
return (None, e)
return (cache_path, '')
def get_index_page_from_cache(journal_name, category, issue, ln):
"""
Gets an index page from the cache.
Returns False if it is not in the cache.
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
category = category.replace(" ", "")
cache_path = os.path.abspath('%s/webjournal/%s/%s/%s/index_%s_%s.html' % \
(CFG_CACHEDIR, journal_name,
year, issue_number, category, ln))
if not cache_path.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop reading cache
return False
try:
cached_file = open(cache_path).read()
except:
return False
return cached_file
def cache_article_page(html, journal_name, category, recid, issue, ln):
"""
Caches an article view of a journal.
If cache cannot be written, a warning is reported to the admin.
@return: tuple (path to cache file (or None), message)
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
category = category.replace(" ", "")
cache_path = os.path.abspath('%s/webjournal/%s/%s/%s/article_%s_%s_%s.html' % \
(CFG_CACHEDIR, journal_name,
year, issue_number, category, recid, ln))
if not cache_path.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop caching
return (None, 'Trying to cache at wrong location: %s' % cache_path)
cache_path_dir = os.path.dirname(cache_path)
try:
if not os.path.isdir(cache_path_dir):
os.makedirs(cache_path_dir)
cached_file = open(cache_path, "w")
cached_file.write(html)
cached_file.close()
except Exception, e:
register_exception(req=None,
prefix="Could not store article cache",
alert_admin=True)
return (None, e)
return (cache_path_dir, '')
NOT_FOR_ALERT_COMMENTS_RE = re.compile('<!--\s*START_NOT_FOR_ALERT\s*-->.*?<!--\s*END_NOT_FOR_ALERT\s*-->', re.IGNORECASE | re.DOTALL)
def get_article_page_from_cache(journal_name, category, recid, issue, ln, bfo=None):
"""
Gets an article view of a journal from the cache.
Returns False if it is not in the cache.
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
category = category.replace(" ", "")
cache_path = os.path.abspath('%s/webjournal/%s/%s/%s/article_%s_%s_%s.html' % \
(CFG_CACHEDIR, journal_name,
year, issue_number, category, recid, ln))
if not cache_path.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop reading cache
return False
try:
cached_file = open(cache_path).read()
except:
return False
if CFG_CERN_SITE and bfo:
try:
from invenio.bibformat_elements import bfe_webjournal_cern_toolbar
cached_file = NOT_FOR_ALERT_COMMENTS_RE.sub(bfe_webjournal_cern_toolbar.format_element(bfo), cached_file, 1)
except ImportError, e:
pass
return cached_file
def clear_cache_for_article(journal_name, category, recid, issue):
"""
Resets the cache for an article (e.g. after an article has been
modified)
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
category = category.replace(" ", "")
cache_path = os.path.abspath('%s/webjournal/%s/' %
(CFG_CACHEDIR, journal_name))
if not cache_path.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop deleting cache
return False
# try to delete the article cached file
try:
os.remove('%s/webjournal/%s/%s/%s/article_%s_%s_en.html' %
(CFG_CACHEDIR, journal_name, year, issue_number, category, recid))
except:
pass
try:
os.remove('%s/webjournal/%s/%s/%s/article_%s_%s_fr.html' %
(CFG_CACHEDIR, journal_name, year, issue_number, category, recid))
except:
pass
# delete the index page for the category
try:
os.remove('%s/webjournal/%s/%s/%s/index_%s_en.html'
% (CFG_CACHEDIR, journal_name, year, issue_number, category))
except:
pass
try:
os.remove('%s/webjournal/%s/%s/%s/index_%s_fr.html'
% (CFG_CACHEDIR, journal_name, year, issue_number, category))
except:
pass
try:
path = get_journal_article_cache_path(journal_name, issue)
os.remove(path)
except:
pass
return True
def clear_cache_for_issue(journal_name, issue):
"""
Clears the cache of a whole issue.
"""
issue = issue.replace("/", "_")
issue_number, year = issue.split("_", 1)
cache_path_dir = os.path.abspath('%s/webjournal/%s/%s/%s/' % \
(CFG_CACHEDIR, journal_name,
year, issue_number))
if not cache_path_dir.startswith(CFG_CACHEDIR + '/webjournal'):
# Mmh, not accessing correct path. Stop deleting cache
return False
all_cached_files = os.listdir(cache_path_dir)
for cached_file in all_cached_files:
try:
os.remove(cache_path_dir + '/' + cached_file)
except:
return False
return True
######################### CERN SPECIFIC FUNCTIONS #################
def get_recid_from_legacy_number(issue_number, category, number):
"""
Returns the recid based on the issue number, category and
'number'.
This is used to support URLs using the now deprecated 'number'
argument. The function tries to reproduce the behaviour of the old
implementation, even keeping some of its 'problems' (so that we
reach the same article as before with a given number).
Returns the recid as an int, or -1 if not found.
"""
recids = []
if issue_number[0] == "0":
alternative_issue_number = issue_number[1:]
recids = list(search_pattern(p='65017a:"%s" and 773__n:%s' %
(category, issue_number)))
recids.extend(list(search_pattern(p='65017a:"%s" and 773__n:%s' %
(category, alternative_issue_number))))
else:
recids = list(search_pattern(p='65017:"%s" and 773__n:%s' %
(category, issue_number)))
# Now must order the records and pick the one at index 'number'.
# But we have to take into account that there can be multiple
# records at position 1, and that these additional records should
# be numbered with negative numbers:
# 1, 1, 1, 2, 3 -> 1, -1, -2, 2, 3...
negative_index_records = {}
positive_index_records = {}
# Fill in 'negative_index_records' and 'positive_index_records'
# lists with the following loop
for recid in recids:
bfo = BibFormatObject(recid)
order = [subfield['c'] for subfield in bfo.fields('773__') if \
issue_number in subfield.get('n', '')]
if len(order) > 0:
# If several orders are defined for the same article and
# the same issue, keep the first one
order = order[0]
if order.isdigit():
# Order must be an int. Otherwise skip
order = int(order)
if order == 1 and positive_index_records.has_key(1):
# This is then a negative number for this record
index = (len(negative_index_records.keys()) > 0 and \
min(negative_index_records.keys()) -1) or 0
negative_index_records[index] = recid
else:
# Positive number for this record
if not positive_index_records.has_key(order):
positive_index_records[order] = recid
else:
# We assume that two articles cannot share
# the same position. The previous WebJournal
# module was not clear about that. Just drop
# this record (better than crashing or
# looping forever).
pass
recid_to_return = -1
# Ok, we can finally pick the recid corresponding to 'number'
if number <= 0:
negative_indexes = negative_index_records.keys()
negative_indexes.sort()
negative_indexes.reverse()
if len(negative_indexes) > abs(number):
recid_to_return = negative_index_records[negative_indexes[abs(number)]]
else:
if positive_index_records.has_key(number):
recid_to_return = positive_index_records[number]
return recid_to_return
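The renumbering scheme described in the comments above ("1, 1, 1, 2, 3 -> 1, -1, -2, 2, 3") can be sketched as a standalone helper (hypothetical, not part of Invenio; the actual code stores these indexes in two dictionaries keyed by order, this sketch only models the numbering itself):

```python
def renumber_positions(orders):
    """Sketch of the duplicate-position scheme: the first article at
    position 1 keeps its number, further duplicates are pushed onto a
    descending negative scale, e.g. [1, 1, 1, 2, 3] -> [1, -1, -2, 2, 3]."""
    result = []
    next_negative = -1
    seen_first = False
    for order in orders:
        if order == 1 and seen_first:
            # duplicate of position 1: assign the next negative index
            result.append(next_negative)
            next_negative -= 1
        else:
            if order == 1:
                seen_first = True
            result.append(order)
    return result
```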
def is_recid_in_released_issue(recid):
"""
Returns True if recid is part of a released issue of the given
journal.
WARNING: the function does not check whether the article still
belongs to the draft collection of the record. This is intentional,
in order to work around the time needed for a record to move from
the draft collection to the final collection
"""
bfo = BibFormatObject(recid)
journal_name = ''
journal_names = [journal_name for journal_name in bfo.fields('773__t') if journal_name]
if journal_names:
journal_name = journal_names[0]
else:
return False
existing_journal_names = [o['journal_name'] for o in get_journals_ids_and_names()]
if not journal_name in existing_journal_names:
# Try to remove whitespace
journal_name = journal_name.replace(' ', '')
if not journal_name in existing_journal_names:
# Journal name unknown from WebJournal
return False
config_strings = get_xml_from_config(["draft_image_access_policy"], journal_name)
if config_strings['draft_image_access_policy'] and \
config_strings['draft_image_access_policy'][0] != 'allow':
# The journal does not want to optimize access to images
return False
article_issues = bfo.fields('773__n')
current_issue = get_current_issue(CFG_SITE_LANG, journal_name)
for article_issue in article_issues:
# Check each issue until a released one is found
if get_release_datetime(article_issue, journal_name):
# Release date exists, issue has been released
return True
else:
# Unreleased issue. Do we still allow based on journal config?
unreleased_issues_mode = get_unreleased_issue_hiding_mode(journal_name)
if (unreleased_issues_mode == 'none' or \
(unreleased_issues_mode == 'future' and \
not issue_is_later_than(article_issue, current_issue))):
return True
return False
diff --git a/modules/websearch/lib/search_engine.py b/modules/websearch/lib/search_engine.py
index b3ec9c733..ee926accf 100644
--- a/modules/websearch/lib/search_engine.py
+++ b/modules/websearch/lib/search_engine.py
@@ -1,6553 +1,6671 @@
# -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
"""Invenio Search Engine in mod_python."""
__lastupdated__ = """$Date$"""
__revision__ = "$Id$"
## import general modules:
import cgi
import cStringIO
import copy
import string
import os
import re
import time
import urllib
import urlparse
import zlib
import sys
try:
## import optional module:
import numpy
CFG_NUMPY_IMPORTABLE = True
except:
CFG_NUMPY_IMPORTABLE = False
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
## import Invenio stuff:
from invenio.config import \
CFG_CERN_SITE, \
CFG_INSPIRE_SITE, \
CFG_OAI_ID_FIELD, \
CFG_WEBCOMMENT_ALLOW_REVIEWS, \
CFG_WEBSEARCH_CALL_BIBFORMAT, \
CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX, \
CFG_WEBSEARCH_FIELDS_CONVERT, \
CFG_WEBSEARCH_NB_RECORDS_TO_SORT, \
CFG_WEBSEARCH_SEARCH_CACHE_SIZE, \
CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT, \
CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS, \
CFG_WEBSEARCH_USE_ALEPH_SYSNOS, \
CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, \
CFG_WEBSEARCH_FULLTEXT_SNIPPETS, \
CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS, \
CFG_WEBSEARCH_WILDCARD_LIMIT, \
CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE, \
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG, \
CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS, \
CFG_WEBSEARCH_SYNONYM_KBRS, \
CFG_SITE_LANG, \
CFG_SITE_NAME, \
CFG_LOGDIR, \
CFG_BIBFORMAT_HIDDEN_TAGS, \
CFG_SITE_URL, \
CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS, \
CFG_SOLR_URL, \
CFG_WEBSEARCH_DETAILED_META_FORMAT, \
CFG_SITE_RECORD, \
CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT, \
CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY, \
CFG_BIBSORT_BUCKETS, \
CFG_XAPIAN_ENABLED, \
CFG_BIBINDEX_CHARS_PUNCTUATION
from invenio.search_engine_config import \
InvenioWebSearchUnknownCollectionError, \
InvenioWebSearchWildcardLimitError, \
CFG_WEBSEARCH_IDXPAIRS_FIELDS,\
CFG_WEBSEARCH_IDXPAIRS_EXACT_SEARCH, \
CFG_SEARCH_RESULTS_CACHE_PREFIX
from invenio.search_engine_utils import get_fieldvalues, get_fieldvalues_alephseq_like
from invenio.bibrecord import create_record, record_xml_output
from invenio.bibrank_record_sorter import get_bibrank_methods, is_method_valid, rank_records as rank_records_bibrank
from invenio.bibrank_downloads_similarity import register_page_view_event, calculate_reading_similarity_list
from invenio.bibindex_engine_stemmer import stem
-from invenio.bibindex_engine_tokenizer import wash_author_name, author_name_requires_phrase_search, \
- BibIndexPairTokenizer
+from invenio.bibindex_tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer
+from invenio.bibindex_tokenizers.BibIndexCJKTokenizer import BibIndexCJKTokenizer, is_there_any_CJK_character_in_text
+from invenio.bibindex_engine_utils import author_name_requires_phrase_search
from invenio.bibindex_engine_washer import wash_index_term, lower_index_term, wash_author_name
-from invenio.bibindexadminlib import get_idx_indexer
+from invenio.bibindex_engine_config import CFG_BIBINDEX_SYNONYM_MATCH_TYPE
+from invenio.bibindex_engine_utils import get_idx_indexer
from invenio.bibformat import format_record, format_records, get_output_format_content_type, create_excel
from invenio.bibformat_config import CFG_BIBFORMAT_USE_OLD_BIBFORMAT
from invenio.bibrank_downloads_grapher import create_download_history_graph_and_box
from invenio.bibknowledge import get_kbr_values
from invenio.data_cacher import DataCacher
from invenio.websearch_external_collections import print_external_results_overview, perform_external_collection_search
from invenio.access_control_admin import acc_get_action_id
from invenio.access_control_config import VIEWRESTRCOLL, \
CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS, \
CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS
from invenio.websearchadminlib import get_detailed_page_tabs, get_detailed_page_tabs_counts
from invenio.intbitset import intbitset
from invenio.dbquery import DatabaseError, deserialize_via_marshal, InvenioDbQueryWildcardLimitError
from invenio.access_control_engine import acc_authorize_action
from invenio.errorlib import register_exception
from invenio.textutils import encode_for_xml, wash_for_utf8, strip_accents
from invenio.htmlutils import get_mathjax_header
from invenio.htmlutils import nmtoken_from_string
import invenio.template
webstyle_templates = invenio.template.load('webstyle')
webcomment_templates = invenio.template.load('webcomment')
from invenio.bibrank_citation_searcher import calculate_cited_by_list, \
calculate_co_cited_with_list, get_records_with_num_cites, get_self_cited_by, \
get_refersto_hitset, get_citedby_hitset
from invenio.bibrank_citation_grapher import create_citation_history_graph_and_box
from invenio.dbquery import run_sql, run_sql_with_limit, wash_table_column_name, \
get_table_update_time
from invenio.webuser import getUid, collect_user_info, session_param_set
from invenio.webpage import pageheaderonly, pagefooteronly, create_error_box, write_warning
from invenio.messages import gettext_set_language
from invenio.search_engine_query_parser import SearchQueryParenthesisedParser, \
SpiresToInvenioSyntaxConverter
from invenio import webinterface_handler_config as apache
from invenio.solrutils_bibindex_searcher import solr_get_bitset
from invenio.xapianutils_bibindex_searcher import xapian_get_bitset
try:
import invenio.template
websearch_templates = invenio.template.load('websearch')
except:
pass
from invenio.websearch_external_collections import calculate_hosted_collections_results, do_calculate_hosted_collections_results
from invenio.websearch_external_collections_config import CFG_HOSTED_COLLECTION_TIMEOUT_ANTE_SEARCH
from invenio.websearch_external_collections_config import CFG_HOSTED_COLLECTION_TIMEOUT_POST_SEARCH
from invenio.websearch_external_collections_config import CFG_EXTERNAL_COLLECTION_MAXRESULTS
VIEWRESTRCOLL_ID = acc_get_action_id(VIEWRESTRCOLL)
## global vars:
cfg_nb_browse_seen_records = 100 # limit of the number of records to check when browsing a certain collection
cfg_nicely_ordered_collection_list = 0 # do we propose the collection list nicely ordered or alphabetically?
## precompile some often-used regexp for speed reasons:
re_word = re.compile('[\s]')
re_quotes = re.compile('[\'\"]')
re_doublequote = re.compile('\"')
re_logical_and = re.compile('\sand\s', re.I)
re_logical_or = re.compile('\sor\s', re.I)
re_logical_not = re.compile('\snot\s', re.I)
re_operators = re.compile(r'\s([\+\-\|])\s')
re_pattern_wildcards_after_spaces = re.compile(r'(\s)[\*\%]+')
re_pattern_single_quotes = re.compile("'(.*?)'")
re_pattern_double_quotes = re.compile("\"(.*?)\"")
re_pattern_parens_quotes = re.compile(r'[\'\"]{1}[^\'\"]*(\([^\'\"]*\))[^\'\"]*[\'\"]{1}')
re_pattern_regexp_quotes = re.compile("\/(.*?)\/")
re_pattern_spaces_after_colon = re.compile(r'(:\s+)')
re_pattern_short_words = re.compile(r'([\s\"]\w{1,3})[\*\%]+')
re_pattern_space = re.compile("__SPACE__")
re_pattern_today = re.compile("\$TODAY\$")
re_pattern_parens = re.compile(r'\([^\)]+\s+[^\)]+\)')
re_punctuation_followed_by_space = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION + '\s')
## em possible values
EM_REPOSITORY={"body" : "B",
"header" : "H",
"footer" : "F",
"search_box" : "S",
"see_also_box" : "L",
"basket" : "K",
"alert" : "A",
"search_info" : "I",
"overview" : "O",
"all_portalboxes" : "P",
"te_portalbox" : "Pte",
"tp_portalbox" : "Ptp",
"np_portalbox" : "Pnp",
"ne_portalbox" : "Pne",
"lt_portalbox" : "Plt",
"rt_portalbox" : "Prt"};
class RestrictedCollectionDataCacher(DataCacher):
def __init__(self):
def cache_filler():
ret = []
try:
res = run_sql("""SELECT DISTINCT ar.value
FROM accROLE_accACTION_accARGUMENT raa JOIN accARGUMENT ar ON raa.id_accARGUMENT = ar.id
WHERE ar.keyword = 'collection' AND raa.id_accACTION = %s""", (VIEWRESTRCOLL_ID,), run_on_slave=True)
except Exception:
# database problems, return empty cache
return []
for coll in res:
ret.append(coll[0])
return ret
def timestamp_verifier():
return max(get_table_update_time('accROLE_accACTION_accARGUMENT'), get_table_update_time('accARGUMENT'))
DataCacher.__init__(self, cache_filler, timestamp_verifier)
def collection_restricted_p(collection, recreate_cache_if_needed=True):
if recreate_cache_if_needed:
restricted_collection_cache.recreate_cache_if_needed()
return collection in restricted_collection_cache.cache
try:
restricted_collection_cache.is_ok_p
except Exception:
restricted_collection_cache = RestrictedCollectionDataCacher()
def ziplist(*lists):
"""Just like zip(), but returns lists of lists instead of lists of tuples
Example:
zip([f1, f2, f3], [p1, p2, p3], [op1, op2, '']) =>
[(f1, p1, op1), (f2, p2, op2), (f3, p3, '')]
ziplist([f1, f2, f3], [p1, p2, p3], [op1, op2, '']) =>
[[f1, p1, op1], [f2, p2, op2], [f3, p3, '']]
FIXME: This is handy to have, and should live somewhere else, like
miscutil.really_useful_functions or something.
XXX: Starting in python 2.6, the same can be achieved (faster) by
using itertools.izip_longest(); when the minimum recommended Python
is bumped, we should use that instead.
"""
def l(*items):
return list(items)
return map(l, *lists)
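For illustration, an equivalent modern form of ziplist() (a hypothetical standalone sketch, written with a list comprehension so it also works on Python 3, where map() returns an iterator):

```python
def ziplist_sketch(*lists):
    # Like zip(), but each grouped tuple is converted to a mutable list.
    return [list(items) for items in zip(*lists)]

# zip(['f1', 'f2'], ['p1', 'p2'], ['op1', ''])
#   -> tuples: [('f1', 'p1', 'op1'), ('f2', 'p2', '')]
# ziplist_sketch(['f1', 'f2'], ['p1', 'p2'], ['op1', ''])
#   -> lists:  [['f1', 'p1', 'op1'], ['f2', 'p2', '']]
```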
def get_permitted_restricted_collections(user_info, recreate_cache_if_needed=True):
"""Return a list of collection that are restricted but for which the user
is authorized."""
if recreate_cache_if_needed:
restricted_collection_cache.recreate_cache_if_needed()
ret = []
for collection in restricted_collection_cache.cache:
if acc_authorize_action(user_info, 'viewrestrcoll', collection=collection)[0] == 0:
ret.append(collection)
return ret
def get_all_restricted_recids():
"""
Return the set of all the restricted recids, i.e. the ids of those records
which belong to at least one restricted collection.
"""
ret = intbitset()
for collection in restricted_collection_cache.cache:
ret |= get_collection_reclist(collection)
return ret
def get_restricted_collections_for_recid(recid, recreate_cache_if_needed=True):
"""
Return the list of restricted collection names to which recid belongs.
"""
if recreate_cache_if_needed:
restricted_collection_cache.recreate_cache_if_needed()
collection_reclist_cache.recreate_cache_if_needed()
return [collection for collection in restricted_collection_cache.cache if recid in get_collection_reclist(collection, recreate_cache_if_needed=False)]
def is_user_owner_of_record(user_info, recid):
"""
Check if the user is the owner of the record, i.e. the submitter
and/or a member of an owner-like group authorized to 'see' the record.
@param user_info: the user_info dictionary that describes the user.
@type user_info: user_info dictionary
@param recid: the record identifier.
@type recid: positive integer
@return: True if the user is 'owner' of the record; False otherwise
@rtype: bool
"""
authorized_emails_or_group = []
for tag in CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS:
authorized_emails_or_group.extend(get_fieldvalues(recid, tag))
for email_or_group in authorized_emails_or_group:
if email_or_group in user_info['group']:
return True
email = email_or_group.strip().lower()
if user_info['email'].strip().lower() == email:
return True
return False
###FIXME: This method needs to be refactored
def is_user_viewer_of_record(user_info, recid):
"""
Check if the user is allowed to view the record based on the MARC
tags inside CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS,
i.e. the user's email is inside the 506__m tag or the user belongs
to an e-group listed in the 506__m tag
@param user_info: the user_info dictionary that describes the user.
@type user_info: user_info dictionary
@param recid: the record identifier.
@type recid: positive integer
@return: True if the user is 'allowed to view' the record; False otherwise
@rtype: bool
"""
authorized_emails_or_group = []
for tag in CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS:
authorized_emails_or_group.extend(get_fieldvalues(recid, tag))
for email_or_group in authorized_emails_or_group:
if email_or_group in user_info['group']:
return True
email = email_or_group.strip().lower()
if user_info['email'].strip().lower() == email:
return True
return False
def check_user_can_view_record(user_info, recid):
"""
Check if the user is authorized to view the given recid. The function
grants access in two cases: either the user has author rights on this
record, or has view rights to the primary collection this record
belongs to.
@param user_info: the user_info dictionary that describes the user.
@type user_info: user_info dictionary
@param recid: the record identifier.
@type recid: positive integer
@return: (0, ''), when authorization is granted, (>0, 'message') when
authorization is not granted
@rtype: (int, string)
"""
policy = CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY.strip().upper()
if isinstance(recid, str):
recid = int(recid)
## At this point, either webcoll has not yet run or there are some
## restricted collections. Let's see first if the user owns the record.
if is_user_owner_of_record(user_info, recid):
## Perfect! It's authorized then!
return (0, '')
if is_user_viewer_of_record(user_info, recid):
## Perfect! It's authorized then!
return (0, '')
restricted_collections = get_restricted_collections_for_recid(recid, recreate_cache_if_needed=False)
if not restricted_collections and record_public_p(recid):
## The record is public and not part of any restricted collection
return (0, '')
if restricted_collections:
## If there are restricted collections the user must be authorized to all/any of them (depending on the policy)
auth_code, auth_msg = 0, ''
for collection in restricted_collections:
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=collection)
if auth_code and policy != 'ANY':
## Ouch! the user is not authorized to this collection
return (auth_code, auth_msg)
elif auth_code == 0 and policy == 'ANY':
## Good! At least one collection is authorized
return (0, '')
## Depending on the policy, the user will be either authorized or not
return auth_code, auth_msg
if is_record_in_any_collection(recid, recreate_cache_if_needed=False):
## the record is not in any restricted collection
return (0, '')
elif record_exists(recid) > 0:
## We are in the case where webcoll has not run.
## Let's authorize SUPERADMIN
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=None)
if auth_code == 0:
return (0, '')
else:
## Too bad. Let's print a nice message:
return (1, """The record you are trying to access has just been
submitted to the system and needs to be assigned to the
proper collections. It is currently restricted for security reasons
until the assignment will be fully completed. Please come back later to
properly access this record.""")
else:
## The record either does not exist or has been deleted.
## Let's handle these situations outside of this code.
return (0, '')
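The ANY/ALL collection policy branch above reduces to the following decision rule (a simplified sketch with hypothetical names; the real code short-circuits while iterating over the restricted collections and returns the last auth message):

```python
def policy_grants_access(policy, per_collection_grants):
    """per_collection_grants: one boolean per restricted collection the
    record belongs to, True when authorization succeeded for it."""
    if policy == 'ANY':
        # one authorized collection is enough
        return any(per_collection_grants)
    # otherwise the user must be authorized to every restricted collection
    return all(per_collection_grants)
```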
class IndexStemmingDataCacher(DataCacher):
"""
Provides cache for stemming information for word/phrase indexes.
This class is not to be used directly; use function
get_index_stemming_language() instead.
"""
def __init__(self):
def cache_filler():
try:
res = run_sql("""SELECT id, stemming_language FROM idxINDEX""")
except DatabaseError:
# database problems, return empty cache
return {}
return dict(res)
def timestamp_verifier():
return get_table_update_time('idxINDEX')
DataCacher.__init__(self, cache_filler, timestamp_verifier)
try:
index_stemming_cache.is_ok_p
except Exception:
index_stemming_cache = IndexStemmingDataCacher()
def get_index_stemming_language(index_id, recreate_cache_if_needed=True):
"""Return stemming langugage for given index."""
if recreate_cache_if_needed:
index_stemming_cache.recreate_cache_if_needed()
return index_stemming_cache.cache[index_id]
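The DataCacher subclasses in this module all follow the same filler/timestamp pattern; a toy standalone sketch (hypothetical and much simplified compared to invenio.data_cacher, which checks DB table update times):

```python
class ToyDataCacher(object):
    """Rebuild the cache via cache_filler() only when
    timestamp_verifier() reports a change."""
    def __init__(self, cache_filler, timestamp_verifier):
        self.cache_filler = cache_filler
        self.timestamp_verifier = timestamp_verifier
        self.timestamp = None
        self.cache = None

    def recreate_cache_if_needed(self):
        current = self.timestamp_verifier()
        if current != self.timestamp:
            # timestamp changed: refill the cache and remember it
            self.cache = self.cache_filler()
            self.timestamp = current
```

For example, changing the underlying data without bumping the timestamp keeps serving the cached copy, while bumping the timestamp triggers a refill.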
+
+class FieldTokenizerDataCacher(DataCacher):
+ """
+ Provides cache for tokenizer information for fields corresponding to indexes.
+ This class is not to be used directly; use function
+ get_field_tokenizer_type() instead.
+ """
+ def __init__(self):
+ def cache_filler():
+ try:
+ res = run_sql("""SELECT fld.code, ind.tokenizer FROM idxINDEX AS ind, field AS fld, idxINDEX_field AS indfld WHERE ind.id = indfld.id_idxINDEX AND indfld.id_field = fld.id""")
+ except DatabaseError:
+ # database problems, return empty cache
+ return {}
+ return dict(res)
+
+ def timestamp_verifier():
+ return get_table_update_time('idxINDEX')
+
+ DataCacher.__init__(self, cache_filler, timestamp_verifier)
+
+try:
+ field_tokenizer_cache.is_ok_p
+except Exception:
+ field_tokenizer_cache = FieldTokenizerDataCacher()
+
+def get_field_tokenizer_type(field_name, recreate_cache_if_needed=True):
+ """Return tokenizer type for given field corresponding to an index if applicable."""
+ if recreate_cache_if_needed:
+ field_tokenizer_cache.recreate_cache_if_needed()
+ tokenizer = None
+ try:
+ tokenizer = field_tokenizer_cache.cache[field_name]
+ except KeyError:
+ return None
+ return tokenizer
+
+
class CollectionRecListDataCacher(DataCacher):
"""
Provides cache for collection reclist hitsets. This class is not
to be used directly; use function get_collection_reclist() instead.
"""
def __init__(self):
def cache_filler():
ret = {}
try:
res = run_sql("SELECT name FROM collection")
except Exception:
# database problems, return empty cache
return {}
for name in res:
ret[name[0]] = None # this will be filled later during runtime by calling get_collection_reclist(coll)
return ret
def timestamp_verifier():
return get_table_update_time('collection')
DataCacher.__init__(self, cache_filler, timestamp_verifier)
try:
if not collection_reclist_cache.is_ok_p:
raise Exception
except Exception:
collection_reclist_cache = CollectionRecListDataCacher()
def get_collection_reclist(coll, recreate_cache_if_needed=True):
"""Return hitset of recIDs that belong to the collection 'coll'."""
if recreate_cache_if_needed:
collection_reclist_cache.recreate_cache_if_needed()
if coll not in collection_reclist_cache.cache:
return intbitset() # collection does not exist; return empty set
if not collection_reclist_cache.cache[coll]:
# collection's reclist not in the cache yet, so calculate it
# and fill the cache:
reclist = intbitset()
query = "SELECT nbrecs,reclist FROM collection WHERE name=%s"
res = run_sql(query, (coll, ), 1)
if res:
try:
reclist = intbitset(res[0][1])
except:
pass
collection_reclist_cache.cache[coll] = reclist
# finally, return reclist:
return collection_reclist_cache.cache[coll]
def get_available_output_formats(visible_only=False):
"""
Return the list of available output formats. When visible_only is
True, returns only those output formats that have visibility flag
set to 1.
"""
formats = []
query = "SELECT code,name FROM format"
if visible_only:
query += " WHERE visibility='1'"
query += " ORDER BY name ASC"
res = run_sql(query)
if res:
# propose found formats:
for code, name in res:
formats.append({ 'value' : code,
'text' : name
})
else:
formats.append({'value' : 'hb',
'text' : "HTML brief"
})
return formats
# Flask cache for search results.
from invenio.websearch_cache import search_results_cache, get_search_results_cache_key
class CollectionI18nNameDataCacher(DataCacher):
"""
Provides cache for I18N collection names. This class is not to be
used directly; use function get_coll_i18nname() instead.
"""
def __init__(self):
def cache_filler():
ret = {}
try:
res = run_sql("SELECT c.name,cn.ln,cn.value FROM collectionname AS cn, collection AS c WHERE cn.id_collection=c.id AND cn.type='ln'") # ln=long name
except Exception:
# database problems
return {}
for c, ln, i18nname in res:
if i18nname:
if not ret.has_key(c):
ret[c] = {}
ret[c][ln] = i18nname
return ret
def timestamp_verifier():
return get_table_update_time('collectionname')
DataCacher.__init__(self, cache_filler, timestamp_verifier)
try:
if not collection_i18nname_cache.is_ok_p:
raise Exception
except Exception:
collection_i18nname_cache = CollectionI18nNameDataCacher()
def get_coll_i18nname(c, ln=CFG_SITE_LANG, verify_cache_timestamp=True):
"""
Return nicely formatted collection name (of the name type `ln'
(=long name)) for collection C in language LN.
This function uses collection_i18nname_cache, but it verifies
whether the cache is up-to-date first by default. This
verification step is performed by checking the DB table update
time. So, if you call this function 1000 times, it can get very
slow because it will do 1000 table update time verifications, even
though collection names do not change that often.
Hence the parameter VERIFY_CACHE_TIMESTAMP which, when set to
False, will assume the cache is already up-to-date. This is
useful namely in the generation of collection lists for the search
results page.
"""
if verify_cache_timestamp:
collection_i18nname_cache.recreate_cache_if_needed()
out = c
try:
out = collection_i18nname_cache.cache[c][ln]
except KeyError:
pass # translation in LN does not exist
return out
class FieldI18nNameDataCacher(DataCacher):
"""
Provides cache for I18N field names. This class is not to be used
directly; use function get_field_i18nname() instead.
"""
def __init__(self):
def cache_filler():
ret = {}
try:
res = run_sql("SELECT f.name,fn.ln,fn.value FROM fieldname AS fn, field AS f WHERE fn.id_field=f.id AND fn.type='ln'") # ln=long name
except Exception:
# database problems, return empty cache
return {}
for f, ln, i18nname in res:
if i18nname:
if not ret.has_key(f):
ret[f] = {}
ret[f][ln] = i18nname
return ret
def timestamp_verifier():
return get_table_update_time('fieldname')
DataCacher.__init__(self, cache_filler, timestamp_verifier)
try:
if not field_i18nname_cache.is_ok_p:
raise Exception
except Exception:
field_i18nname_cache = FieldI18nNameDataCacher()
def get_field_i18nname(f, ln=CFG_SITE_LANG, verify_cache_timestamp=True):
"""
Return nicely formatted field name (of type 'ln', 'long name') for
field F in language LN.
If VERIFY_CACHE_TIMESTAMP is set to True, then verify DB timestamp
and field I18N name cache timestamp and refresh cache from the DB
if needed. Otherwise don't bother checking DB timestamp and
return the cached value. (This is useful when get_field_i18nname
is called inside a loop.)
"""
if verify_cache_timestamp:
field_i18nname_cache.recreate_cache_if_needed()
out = f
try:
out = field_i18nname_cache.cache[f][ln]
except KeyError:
pass # translation in LN does not exist
return out
def get_alphabetically_ordered_collection_list(level=0, ln=CFG_SITE_LANG):
"""Returns nicely ordered (score respected) list of collections, more exactly list of tuples
(collection name, printable collection name).
Suitable for create_search_box()."""
out = []
res = run_sql("SELECT name FROM collection ORDER BY name ASC")
for c_name in res:
c_name = c_name[0]
# make a nice printable name (e.g. truncate c_printable for
# long collection names in given language):
c_printable_fullname = get_coll_i18nname(c_name, ln, False)
c_printable = wash_index_term(c_printable_fullname, 30, False)
if c_printable != c_printable_fullname:
c_printable = c_printable + "..."
if level:
c_printable = " " + level * '-' + " " + c_printable
out.append([c_name, c_printable])
return out
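The printable-name formatting used above (truncation with a trailing "..." plus dash indentation per hierarchy level) can be sketched in isolation (hypothetical helper; the real code truncates via wash_index_term(), which also washes the term):

```python
def format_collection_name(name, level=0, max_len=30):
    printable = name[:max_len]
    if printable != name:
        printable += "..."  # mark truncated long names
    if level:
        # indent by hierarchy level with dashes, e.g. " -- Sub"
        printable = " " + level * '-' + " " + printable
    return printable
```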
def get_nicely_ordered_collection_list(collid=1, level=0, ln=CFG_SITE_LANG):
"""Returns nicely ordered (score respected) list of collections, more exactly list of tuples
(collection name, printable collection name).
Suitable for create_search_box()."""
colls_nicely_ordered = []
res = run_sql("""SELECT c.name,cc.id_son FROM collection_collection AS cc, collection AS c
WHERE c.id=cc.id_son AND cc.id_dad=%s ORDER BY score DESC""", (collid, ))
for c, cid in res:
# make a nice printable name (e.g. truncate c_printable for
# long collection names in given language):
c_printable_fullname = get_coll_i18nname(c, ln, False)
c_printable = wash_index_term(c_printable_fullname, 30, False)
if c_printable != c_printable_fullname:
c_printable = c_printable + "..."
if level:
c_printable = " " + level * '-' + " " + c_printable
colls_nicely_ordered.append([c, c_printable])
colls_nicely_ordered = colls_nicely_ordered + get_nicely_ordered_collection_list(cid, level+1, ln=ln)
return colls_nicely_ordered
def get_index_id_from_field(field):
"""
Return index id with name corresponding to FIELD, or the first
index id where the logical field code named FIELD is indexed.
Return zero in case there is no index defined for this field.
Example: field='author', output=4.
"""
out = 0
if not field:
field = 'global' # empty string field means 'global' index (field 'anyfield')
# first look in the index table:
res = run_sql("""SELECT id FROM idxINDEX WHERE name=%s""", (field,))
if res:
out = res[0][0]
return out
# not found in the index table, now look in the field table:
res = run_sql("""SELECT w.id FROM idxINDEX AS w, idxINDEX_field AS wf, field AS f
WHERE f.code=%s AND wf.id_field=f.id AND w.id=wf.id_idxINDEX
LIMIT 1""", (field,))
if res:
out = res[0][0]
return out
def get_words_from_pattern(pattern):
"""
Returns list of whitespace-separated words from pattern, removing any
trailing punctuation-like signs from words in pattern.
"""
words = {}
# clean trailing punctuation signs inside pattern
pattern = re_punctuation_followed_by_space.sub(' ', pattern)
for word in string.split(pattern):
if not words.has_key(word):
words[word] = 1
return words.keys()
def create_basic_search_units(req, p, f, m=None, of='hb'):
"""Splits search pattern and search field into a list of independently searchable units.
- A search unit consists of '(operator, pattern, field, type, hitset)' tuples where
'operator' is set union (|), set intersection (+) or set exclusion (-);
'pattern' is either a word (e.g. muon*) or a phrase (e.g. 'nuclear physics');
'field' is either a code like 'title' or MARC tag like '100__a';
'type' is the search type ('w' for word file search, 'a' for access file search).
- Optionally, the function accepts the match type argument 'm'.
If it is set (e.g. from advanced search interface), then it
performs this kind of matching. If it is not set, then a guess is made.
'm' can have values: 'a'='all of the words', 'o'='any of the words',
'p'='phrase/substring', 'r'='regular expression',
'e'='exact value'.
- Warnings are printed on req (when not None) in case of HTML output formats."""
opfts = [] # will hold (o,p,f,t,h) units
# FIXME: quick hack for the journal index
if f == 'journal':
opfts.append(['+', p, f, 'w'])
return opfts
## check arguments: is desired matching type set?
if m:
## A - matching type is known; good!
if m == 'e':
# A1 - exact value:
opfts.append(['+', p, f, 'a']) # '+' since we have only one unit
elif m == 'p':
# A2 - phrase/substring:
opfts.append(['+', "%" + p + "%", f, 'a']) # '+' since we have only one unit
elif m == 'r':
# A3 - regular expression:
opfts.append(['+', p, f, 'r']) # '+' since we have only one unit
elif m == 'a' or m == 'w':
# A4 - all of the words:
p = strip_accents(p) # strip accents for 'w' mode, FIXME: delete when not needed
for word in get_words_from_pattern(p):
opfts.append(['+', word, f, 'w']) # '+' in all units
elif m == 'o':
# A5 - any of the words:
p = strip_accents(p) # strip accents for 'w' mode, FIXME: delete when not needed
for word in get_words_from_pattern(p):
if len(opfts)==0:
opfts.append(['+', word, f, 'w']) # '+' in the first unit
else:
opfts.append(['|', word, f, 'w']) # '|' in further units
else:
if of.startswith("h"):
write_warning("Matching type '%s' is not implemented yet." % cgi.escape(m), "Warning", req=req)
opfts.append(['+', "%" + p + "%", f, 'w'])
else:
## B - matching type is not known: let us try to determine it by some heuristics
if f and p[0] == '"' and p[-1] == '"':
## B0 - does 'p' start and end with a double quote, and is 'f' defined? => doing ACC search
opfts.append(['+', p[1:-1], f, 'a'])
- elif f in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor') and author_name_requires_phrase_search(p):
+ elif f in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor', 'authorityauthor') and author_name_requires_phrase_search(p):
## B1 - do we search in author, and does 'p' contain space/comma/dot/etc?
## => doing washed ACC search
opfts.append(['+', p, f, 'a'])
elif f and p[0] == "'" and p[-1] == "'":
## B0bis - does 'p' start and end with a single quote, and is 'f' defined? => doing ACC search
opfts.append(['+', '%' + p[1:-1] + '%', f, 'a'])
elif f and p[0] == "/" and p[-1] == "/":
## B0ter - does 'p' start and end with a slash, and is 'f' defined? => doing regexp search
opfts.append(['+', p[1:-1], f, 'r'])
elif f and string.find(p, ',') >= 0:
## B1 - does 'p' contain a comma, and is 'f' defined? => doing ACC search
opfts.append(['+', p, f, 'a'])
elif f and str(f[0:2]).isdigit():
## B2 - does 'f' exist and start with two digits? => doing ACC search
opfts.append(['+', p, f, 'a'])
else:
## B3 - doing WRD search, but maybe ACC too
# search units are separated by spaces unless the space is within single or double quotes
# so, let us replace temporarily any space within quotes by '__SPACE__'
p = re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), ' ', '__SPACE__')+"'", p)
p = re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), ' ', '__SPACE__')+"\"", p)
p = re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), ' ', '__SPACE__')+"/", p)
# and spaces after colon as well:
p = re_pattern_spaces_after_colon.sub(lambda x: string.replace(x.group(1), ' ', '__SPACE__'), p)
# wash argument:
p = re_logical_and.sub(" ", p)
p = re_logical_or.sub(" |", p)
p = re_logical_not.sub(" -", p)
p = re_operators.sub(r' \1', p)
for pi in string.split(p): # iterate through separated units (or items, as "pi" stands for "p item")
pi = re_pattern_space.sub(" ", pi) # replace back '__SPACE__' by ' '
# firstly, determine set operator
if pi[0] == '+' or pi[0] == '-' or pi[0] == '|':
oi = pi[0]
pi = pi[1:]
else:
# okay, there is no operator, so let us decide what to do by default
oi = '+' # by default we are doing set intersection...
# secondly, determine search pattern and field:
if string.find(pi, ":") > 0:
fi, pi = string.split(pi, ":", 1)
fi = wash_field(fi)
# test whether fi is a real index code or a MARC-tag defined code:
if fi in get_fieldcodes() or '00' <= fi[:2] <= '99':
pass
else:
# it is not, so join it back:
fi, pi = f, fi + ":" + pi
else:
fi, pi = f, pi
# wash 'fi' argument:
fi = wash_field(fi)
# wash 'pi' argument:
pi = pi.strip() # strip any leading/trailing spaces
if re_quotes.match(pi):
# B3a - quotes are found => do ACC search (phrase search)
if pi[0] == '"' and pi[-1] == '"':
pi = string.replace(pi, '"', '') # remove quote signs
opfts.append([oi, pi, fi, 'a'])
elif pi[0] == "'" and pi[-1] == "'":
pi = string.replace(pi, "'", "") # remove quote signs
opfts.append([oi, "%" + pi + "%", fi, 'a'])
else: # unbalanced quotes, so fall back to WRD query:
opfts.append([oi, pi, fi, 'w'])
elif pi.startswith('/') and pi.endswith('/'):
# B3b - pi has slashes around => do regexp search
opfts.append([oi, pi[1:-1], fi, 'r'])
elif fi and len(fi) > 1 and str(fi[0]).isdigit() and str(fi[1]).isdigit():
# B3c - fi exists and starts by two digits => do ACC search
opfts.append([oi, pi, fi, 'a'])
elif fi and not get_index_id_from_field(fi) and get_field_name(fi):
# B3d - logical field fi exists but there is no WRD index for fi => try ACC search
opfts.append([oi, pi, fi, 'a'])
else:
# B3e - general case => do WRD search
pi = strip_accents(pi) # strip accents for 'w' mode, FIXME: delete when not needed
for pii in get_words_from_pattern(pi):
opfts.append([oi, pii, fi, 'w'])
## sanity check:
for i in range(0, len(opfts)):
try:
pi = opfts[i][1]
if pi == '*':
if of.startswith("h"):
write_warning("Ignoring standalone wildcard word.", "Warning", req=req)
del opfts[i]
if pi == '' or pi == ' ':
fi = opfts[i][2]
if fi:
if of.startswith("h"):
write_warning("Ignoring empty <em>%s</em> search term." % fi, "Warning", req=req)
del opfts[i]
except IndexError:
pass
## replace old logical field names if applicable:
if CFG_WEBSEARCH_FIELDS_CONVERT:
opfts = [[o, p, wash_field(f), t] for o, p, f, t in opfts]
## return search units:
return opfts
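The heuristics above decompose a free-form query into (operator, pattern, field, type) units. A minimal standalone sketch of the per-token parsing (a hypothetical helper; it covers only the operator prefix and the field:pattern split, not the quote handling or index-code washing):

```python
def split_search_unit(pi, default_field=''):
    """Split one query token into (operator, pattern, field).

    '+' = set intersection (the default), '-' = difference, '|' = union;
    a leading 'field:' prefix selects the index to search in.
    """
    # firstly, determine the set operator:
    if pi and pi[0] in ('+', '-', '|'):
        oi, pi = pi[0], pi[1:]
    else:
        oi = '+'  # by default we are doing set intersection
    # secondly, split off an optional 'field:' prefix:
    if pi.find(':') > 0:
        fi, pi = pi.split(':', 1)
    else:
        fi = default_field
    return (oi, pi, fi)
```

For example, `split_search_unit('-title:ellis')` yields `('-', 'ellis', 'title')`.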
def page_start(req, of, cc, aas, ln, uid, title_message=None,
description='', keywords='', recID=-1, tab='', p='', em=''):
"""
Start page according to given output format.
@param title_message: title of the page, not escaped for HTML
@param description: description of the page, not escaped for HTML
@param keywords: keywords of the page, not escaped for HTML
"""
_ = gettext_set_language(ln)
if not req or isinstance(req, cStringIO.OutputType):
return # we were called from CLI
if not title_message:
title_message = _("Search Results")
content_type = get_output_format_content_type(of)
if of.startswith('x'):
if of == 'xr':
# we are doing RSS output
req.content_type = "application/rss+xml"
req.send_http_header()
req.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
else:
# we are doing XML output:
req.content_type = get_output_format_content_type(of, 'text/xml')
req.send_http_header()
req.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
elif of.startswith('t') or str(of[0:3]).isdigit():
# we are doing plain text output:
req.content_type = "text/plain"
req.send_http_header()
+ elif of == "intbitset":
+ req.content_type = "application/octet-stream"
+ req.send_http_header()
elif of == "id":
pass # nothing to do, we shall only return list of recIDs
elif content_type == 'text/html':
# we are doing HTML output:
req.content_type = "text/html"
req.send_http_header()
if not description:
description = "%s %s." % (cc, _("Search Results"))
if not keywords:
keywords = "%s, WebSearch, %s" % (get_coll_i18nname(CFG_SITE_NAME, ln, False), get_coll_i18nname(cc, ln, False))
## generate RSS URL:
argd = {}
if req.args:
argd = cgi.parse_qs(req.args)
rssurl = websearch_templates.build_rss_url(argd)
## add MathJax if displaying single records (FIXME: find a
## better place for this code)
if of.lower() in CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS:
metaheaderadd = get_mathjax_header(req.is_https())
else:
metaheaderadd = ''
# Add metadata in meta tags for Google scholar-esque harvesting...
# only if we have a detailed meta format and we are looking at a
# single record
if (recID != -1 and CFG_WEBSEARCH_DETAILED_META_FORMAT):
metaheaderadd += format_record(recID, \
CFG_WEBSEARCH_DETAILED_META_FORMAT, \
ln = ln)
## generate navtrail:
navtrail = create_navtrail_links(cc, aas, ln)
if navtrail != '':
navtrail += ' &gt; '
if (tab != '' or ((of != '' and of.lower() != 'hd') and of != 'hb')) and \
recID != -1:
# If we are not in information tab in HD format, customize
# the nav. trail to have a link back to main record. (Due
# to the way perform_request_search() works, hb
# (lowercase) is equal to hd)
navtrail += ' <a class="navtrail" href="%s/%s/%s">%s</a>' % \
(CFG_SITE_URL, CFG_SITE_RECORD, recID, cgi.escape(title_message))
if (of != '' and of.lower() != 'hd') and of != 'hb':
# Export
format_name = of
query = "SELECT name FROM format WHERE code=%s"
res = run_sql(query, (of,))
if res:
format_name = res[0][0]
navtrail += ' &gt; ' + format_name
else:
# Discussion, citations, etc. tabs
tab_label = get_detailed_page_tabs(cc, ln=ln)[tab]['label']
navtrail += ' &gt; ' + _(tab_label)
else:
navtrail += cgi.escape(title_message)
if p:
# we are serving search/browse results pages, so insert pattern:
navtrail += ": " + cgi.escape(p)
title_message = p + " - " + title_message
body_css_classes = []
if cc:
# we know the collection, so let's allow page styles based on cc;
# collection names may not satisfy the rules for css classes,
# which are something like: -?[_a-zA-Z]+[_a-zA-Z0-9-]*
# however it isn't clear what we should do about cases with
# numbers, so we leave them to fail. Everything else becomes "_".
css = nmtoken_from_string(cc).replace('.','_').replace('-','_').replace(':','_')
body_css_classes.append(css)
## finally, print page header:
if em == '' or EM_REPOSITORY["header"] in em:
req.write(pageheaderonly(req=req, title=title_message,
navtrail=navtrail,
description=description,
keywords=keywords,
metaheaderadd=metaheaderadd,
uid=uid,
language=ln,
navmenuid='search',
navtrail_append_title_p=0,
rssurl=rssurl,
body_css_classes=body_css_classes))
req.write(websearch_templates.tmpl_search_pagestart(ln=ln))
else:
req.content_type = content_type
req.send_http_header()
def page_end(req, of="hb", ln=CFG_SITE_LANG, em=""):
"End page according to given output format: e.g. close XML tags, add HTML footer, etc."
if of == "id":
return [] # empty recID list
+ if of == "intbitset":
+ return intbitset()
if not req:
return # we were called from CLI
if of.startswith('h'):
req.write(websearch_templates.tmpl_search_pageend(ln = ln)) # pagebody end
if em == "" or EM_REPOSITORY["footer"] in em:
req.write(pagefooteronly(lastupdated=__lastupdated__, language=ln, req=req))
return
def create_page_title_search_pattern_info(p, p1, p2, p3):
"""Create the search pattern bit for the page <title> web page
HTML header. Basically combine p and (p1,p2,p3) together so that
the page header may be filled whether we are in the Simple Search
or Advanced Search interface contexts."""
out = ""
if p:
out = p
else:
out = p1
if p2:
out += ' ' + p2
if p3:
out += ' ' + p3
return out
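The combination rule can be sketched as follows (a hypothetical helper, equivalent up to a leading space when p1 is empty):

```python
def title_search_pattern(p, p1='', p2='', p3=''):
    """Combine the Simple Search pattern p with the Advanced Search
    patterns p1..p3 for use in the page <title>."""
    # the simple-search pattern wins; otherwise join the advanced ones
    if p:
        return p
    return ' '.join(x for x in (p1, p2, p3) if x)
```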
def create_inputdate_box(name="d1", selected_year=0, selected_month=0, selected_day=0, ln=CFG_SITE_LANG):
"Produces 'From Date', 'Until Date' kind of selection box. Suitable for search options."
_ = gettext_set_language(ln)
box = ""
# day
box += """<select name="%sd">""" % name
box += """<option value="">%s""" % _("any day")
for day in range(1, 32):
box += """<option value="%02d"%s>%02d""" % (day, is_selected(day, selected_day), day)
box += """</select>"""
# month
box += """<select name="%sm">""" % name
box += """<option value="">%s""" % _("any month")
# trailing space in May distinguishes short/long form of the month name
for mm, month in [(1, _("January")), (2, _("February")), (3, _("March")), (4, _("April")), \
(5, _("May ")), (6, _("June")), (7, _("July")), (8, _("August")), \
(9, _("September")), (10, _("October")), (11, _("November")), (12, _("December"))]:
box += """<option value="%02d"%s>%s""" % (mm, is_selected(mm, selected_month), month.strip())
box += """</select>"""
# year
box += """<select name="%sy">""" % name
box += """<option value="">%s""" % _("any year")
this_year = int(time.strftime("%Y", time.localtime()))
for year in range(this_year-20, this_year+1):
box += """<option value="%d"%s>%d""" % (year, is_selected(year, selected_year), year)
box += """</select>"""
return box
def create_search_box(cc, colls, p, f, rg, sf, so, sp, rm, of, ot, aas,
ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3,
m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec,
action="", em=""):
"""Create search box for 'search again in the results page' functionality."""
if em != "" and EM_REPOSITORY["search_box"] not in em:
if EM_REPOSITORY["body"] in em and cc != CFG_SITE_NAME:
return '''
<h1 class="headline">%(ccname)s</h1>''' % {'ccname' : cgi.escape(cc), }
else:
return ""
# load the right message language
_ = gettext_set_language(ln)
# some computations
cc_intl = get_coll_i18nname(cc, ln, False)
cc_colID = get_colID(cc)
colls_nicely_ordered = []
if cfg_nicely_ordered_collection_list:
colls_nicely_ordered = get_nicely_ordered_collection_list(ln=ln)
else:
colls_nicely_ordered = get_alphabetically_ordered_collection_list(ln=ln)
colls_nice = []
for (cx, cx_printable) in colls_nicely_ordered:
if not cx.startswith("Unnamed collection"):
colls_nice.append({ 'value' : cx,
'text' : cx_printable
})
coll_selects = []
if colls and colls[0] != CFG_SITE_NAME:
# some collections are defined, so print these first, and only then print 'add another collection' heading:
for c in colls:
if c:
temp = []
temp.append({ 'value' : CFG_SITE_NAME,
'text' : '*** %s ***' % _("any public collection")
})
# this field is used to remove the current collection from the ones to be searched.
temp.append({ 'value' : '',
'text' : '*** %s ***' % _("remove this collection")
})
for val in colls_nice:
# print collection:
if not val['value'].startswith("Unnamed collection"):
temp.append({ 'value' : val['value'],
'text' : val['text'],
'selected' : (c == re.sub("^[\s\-]*","", val['value']))
})
coll_selects.append(temp)
coll_selects.append([{ 'value' : '',
'text' : '*** %s ***' % _("add another collection")
}] + colls_nice)
else: # we searched in CFG_SITE_NAME, so print 'any public collection' heading
coll_selects.append([{ 'value' : CFG_SITE_NAME,
'text' : '*** %s ***' % _("any public collection")
}] + colls_nice)
## ranking methods
ranks = [{
'value' : '',
'text' : "- %s %s -" % (_("OR").lower(), _("rank by")),
}]
for (code, name) in get_bibrank_methods(cc_colID, ln):
# propose found rank methods:
ranks.append({
'value' : code,
'text' : name,
})
formats = get_available_output_formats(visible_only=True)
# show collections in the search box? (not if there is only one
# collection defined, and not if we are in light search)
show_colls = True
show_title = True
if len(collection_reclist_cache.cache.keys()) == 1 or \
aas == -1:
show_colls = False
show_title = False
if cc == CFG_SITE_NAME:
show_title = False
if CFG_INSPIRE_SITE:
show_title = False
return websearch_templates.tmpl_search_box(
ln = ln,
aas = aas,
cc_intl = cc_intl,
cc = cc,
ot = ot,
sp = sp,
action = action,
fieldslist = get_searchwithin_fields(ln=ln, colID=cc_colID),
f1 = f1,
f2 = f2,
f3 = f3,
m1 = m1,
m2 = m2,
m3 = m3,
p1 = p1,
p2 = p2,
p3 = p3,
op1 = op1,
op2 = op2,
rm = rm,
p = p,
f = f,
coll_selects = coll_selects,
d1y = d1y, d2y = d2y, d1m = d1m, d2m = d2m, d1d = d1d, d2d = d2d,
dt = dt,
sort_fields = get_sortby_fields(ln=ln, colID=cc_colID),
sf = sf,
so = so,
ranks = ranks,
sc = sc,
rg = rg,
formats = formats,
of = of,
pl = pl,
jrec = jrec,
ec = ec,
show_colls = show_colls,
show_title = show_title and (em=="" or EM_REPOSITORY["body"] in em)
)
def create_exact_author_browse_help_link(p=None, p1=None, p2=None, p3=None, f=None, f1=None, f2=None, f3=None,
rm=None, cc=None, ln=None, jrec=None, rg=None, aas=0, action=""):
"""Creates a link to help switch from author to exact author while browsing"""
if action == 'browse':
search_fields = (f, f1, f2, f3)
if ('author' in search_fields) or ('firstauthor' in search_fields):
def add_exact(field):
if field == 'author' or field == 'firstauthor':
return 'exact' + field
return field
(fe, f1e, f2e, f3e) = map(add_exact, search_fields)
link_name = f or f1
link_name = (link_name == 'firstauthor' and 'exact first author') or 'exact author'
return websearch_templates.tmpl_exact_author_browse_help_link(p=p, p1=p1, p2=p2, p3=p3, f=fe, f1=f1e, f2=f2e, f3=f3e,
rm=rm, cc=cc, ln=ln, jrec=jrec, rg=rg, aas=aas, action=action,
link_name=link_name)
return ""
def create_navtrail_links(cc=CFG_SITE_NAME, aas=0, ln=CFG_SITE_LANG, self_p=1, tab=''):
"""Creates navigation trail links, i.e. links to collection
ancestors (except Home collection). If aas==1, then links to
Advanced Search interfaces; otherwise Simple Search.
"""
dads = []
for dad in get_coll_ancestors(cc):
if dad != CFG_SITE_NAME: # exclude Home collection
dads.append ((dad, get_coll_i18nname(dad, ln, False)))
if self_p and cc != CFG_SITE_NAME:
dads.append((cc, get_coll_i18nname(cc, ln, False)))
return websearch_templates.tmpl_navtrail_links(
aas=aas, ln=ln, dads=dads)
def get_searchwithin_fields(ln='en', colID=None):
"""Retrieves the field names used in the 'search within' selection box for the collection ID colID."""
res = None
if colID:
res = run_sql("""SELECT f.code,f.name FROM field AS f, collection_field_fieldvalue AS cff
WHERE cff.type='sew' AND cff.id_collection=%s AND cff.id_field=f.id
ORDER BY cff.score DESC, f.name ASC""", (colID,))
if not res:
res = run_sql("SELECT code,name FROM field ORDER BY name ASC")
fields = [{
'value' : '',
'text' : get_field_i18nname("any field", ln, False)
}]
for field_code, field_name in res:
if field_code and field_code != "anyfield":
fields.append({ 'value' : field_code,
'text' : get_field_i18nname(field_name, ln, False)
})
return fields
def get_sortby_fields(ln='en', colID=None):
"""Retrieves the field names used in the 'sort by' selection box for the collection ID colID."""
_ = gettext_set_language(ln)
res = None
if colID:
res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
WHERE cff.type='soo' AND cff.id_collection=%s AND cff.id_field=f.id
ORDER BY cff.score DESC, f.name ASC""", (colID,))
if not res:
# no sort fields defined for this colID, try to take Home collection:
res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
WHERE cff.type='soo' AND cff.id_collection=%s AND cff.id_field=f.id
ORDER BY cff.score DESC, f.name ASC""", (1,))
if not res:
# no sort fields defined for the Home collection, take all sort fields defined wherever they are:
res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
WHERE cff.type='soo' AND cff.id_field=f.id
ORDER BY cff.score DESC, f.name ASC""",)
fields = [{
'value' : '',
'text' : _("latest first")
}]
for field_code, field_name in res:
if field_code and field_code != "anyfield":
fields.append({ 'value' : field_code,
'text' : get_field_i18nname(field_name, ln, False)
})
return fields
def create_andornot_box(name='op', value='', ln='en'):
"Returns HTML code for the AND/OR/NOT selection box."
_ = gettext_set_language(ln)
out = """
<select name="%s">
<option value="a"%s>%s
<option value="o"%s>%s
<option value="n"%s>%s
</select>
""" % (name,
is_selected('a', value), _("AND"),
is_selected('o', value), _("OR"),
is_selected('n', value), _("AND NOT"))
return out
def create_matchtype_box(name='m', value='', ln='en'):
"Returns HTML code for the 'match type' selection box."
_ = gettext_set_language(ln)
out = """
<select name="%s">
<option value="a"%s>%s
<option value="o"%s>%s
<option value="e"%s>%s
<option value="p"%s>%s
<option value="r"%s>%s
</select>
""" % (name,
is_selected('a', value), _("All of the words:"),
is_selected('o', value), _("Any of the words:"),
is_selected('e', value), _("Exact phrase:"),
is_selected('p', value), _("Partial phrase:"),
is_selected('r', value), _("Regular expression:"))
return out
def is_selected(var, fld):
"Checks if the two are equal, and if yes, returns ' selected'. Useful for select boxes."
if type(var) is int and type(fld) is int:
if var == fld:
return " selected"
elif str(var) == str(fld):
return " selected"
elif fld and len(fld)==3 and fld[0] == "w" and var == fld[1:]:
return " selected"
return ""
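The core comparison behind the `selected` attribute can be restated standalone (a sketch that omits the legacy "w"-prefixed field branch above):

```python
def is_selected_sketch(var, fld):
    # return the HTML attribute when the option value matches the
    # currently selected value: ints are compared as ints,
    # everything else after coercion to string
    if type(var) is int and type(fld) is int:
        if var == fld:
            return " selected"
    elif str(var) == str(fld):
        return " selected"
    return ""
```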
def wash_colls(cc, c, split_colls=0, verbose=0):
"""Wash collection list by checking whether user has deselected
anything under 'Narrow search'. Also checks whether cc is a list.
Return list of cc, colls_to_display, colls_to_search since the list
of collections to display is different from that to search in.
This is because users might have chosen 'split by collection'
functionality.
The behaviour of "collections to display" depends solely on whether
the user has deselected a particular collection: e.g. if the search
started from the 'Articles and Preprints' page and 'Preprints' was
deselected, then the collection to display is 'Articles'. If nothing
was deselected, then the collection to display is 'Articles & Preprints'.
The behaviour of "collections to search in" depends on the
'split_colls' parameter:
* if it is equal to 0, then we can wash the colls list down
and search solely in the collection the user started from;
* otherwise, we are splitting to the first level
of collections, i.e. collections as they appear on the page
we started to search from;
The function raises the exception
InvenioWebSearchUnknownCollectionError
if cc or one of the c collections is not known.
"""
colls_out = []
colls_out_for_display = []
# list to hold the hosted collections to be searched and displayed
hosted_colls_out = []
debug = ""
if verbose:
debug += "<br />"
debug += "<br />1) --- initial parameters ---"
debug += "<br />cc : %s" % cc
debug += "<br />c : %s" % c
debug += "<br />"
# check what type is 'cc':
if type(cc) is list:
for ci in cc:
if collection_reclist_cache.cache.has_key(ci):
# yes this collection is real, so use it:
cc = ci
break
else:
# check once if cc is real:
if not collection_reclist_cache.cache.has_key(cc):
if cc:
raise InvenioWebSearchUnknownCollectionError(cc)
else:
cc = CFG_SITE_NAME # cc is not set, so replace it with Home collection
# check type of 'c' argument:
if type(c) is list:
colls = c
else:
colls = [c]
if verbose:
debug += "<br />2) --- after checking the integrity of cc and whether c is a list ---"
debug += "<br />cc : %s" % cc
debug += "<br />c : %s" % c
debug += "<br />"
# remove all 'unreal' collections:
colls_real = []
for coll in colls:
if collection_reclist_cache.cache.has_key(coll):
colls_real.append(coll)
else:
if coll:
raise InvenioWebSearchUnknownCollectionError(coll)
colls = colls_real
if verbose:
debug += "<br />3) --- keeping only the real colls of c ---"
debug += "<br />colls : %s" % colls
debug += "<br />"
# check if some real collections remain:
if len(colls)==0:
colls = [cc]
if verbose:
debug += "<br />4) --- in case no colls were left we use cc directly ---"
debug += "<br />colls : %s" % colls
debug += "<br />"
# then let us check the list of non-restricted "real" sons of 'cc' and compare it to 'coll':
res = run_sql("""SELECT c.name FROM collection AS c,
collection_collection AS cc,
collection AS ccc
WHERE c.id=cc.id_son AND cc.id_dad=ccc.id
AND ccc.name=%s AND cc.type='r'""", (cc,))
# list that holds all the non restricted sons of cc that are also not hosted collections
l_cc_nonrestricted_sons_and_nonhosted_colls = []
res_hosted = run_sql("""SELECT c.name FROM collection AS c,
collection_collection AS cc,
collection AS ccc
WHERE c.id=cc.id_son AND cc.id_dad=ccc.id
AND ccc.name=%s AND cc.type='r'
AND (c.dbquery NOT LIKE 'hostedcollection:%%' OR c.dbquery IS NULL)""", (cc,))
for row_hosted in res_hosted:
l_cc_nonrestricted_sons_and_nonhosted_colls.append(row_hosted[0])
l_cc_nonrestricted_sons_and_nonhosted_colls.sort()
l_cc_nonrestricted_sons = []
l_c = colls[:]
for row in res:
if not collection_restricted_p(row[0]):
l_cc_nonrestricted_sons.append(row[0])
l_c.sort()
l_cc_nonrestricted_sons.sort()
if l_cc_nonrestricted_sons == l_c:
colls_out_for_display = [cc] # yep, washing permitted, it is sufficient to display 'cc'
# the following elif is a hack that preserves the above functionality when we start searching from
# the frontpage with some hosted collections deselected (either by default or manually)
elif set(l_cc_nonrestricted_sons_and_nonhosted_colls).issubset(set(l_c)):
colls_out_for_display = colls
split_colls = 0
else:
colls_out_for_display = colls # nope, we need to display all 'colls' successively
# remove duplicates:
#colls_out_for_display_nondups=filter(lambda x, colls_out_for_display=colls_out_for_display: colls_out_for_display[x-1] not in colls_out_for_display[x:], range(1, len(colls_out_for_display)+1))
#colls_out_for_display = map(lambda x, colls_out_for_display=colls_out_for_display:colls_out_for_display[x-1], colls_out_for_display_nondups)
#colls_out_for_display = list(set(colls_out_for_display))
#remove duplicates while preserving the order
set_out = set()
colls_out_for_display = [coll for coll in colls_out_for_display if coll not in set_out and not set_out.add(coll)]
if verbose:
debug += "<br />5) --- decide whether colls_out_for_display should be colls or whether cc alone is sufficient; remove duplicates ---"
debug += "<br />colls_out_for_display : %s" % colls_out_for_display
debug += "<br />"
# FIXME: The below quoted part of the code has been commented out
# because it prevents searching in individual restricted daughter
# collections when both parent and all its public daughter
# collections were asked for, in addition to some restricted
# daughter collections. The removal was introduced for hosted
# collections, so we may want to double check in this context.
# the following piece of code takes care of removing collections whose ancestors are going to be searched anyway
# list to hold the collections to be removed
#colls_to_be_removed = []
# first calculate the collections that can safely be removed
#for coll in colls_out_for_display:
# for ancestor in get_coll_ancestors(coll):
# #if ancestor in colls_out_for_display: colls_to_be_removed.append(coll)
# if ancestor in colls_out_for_display and not is_hosted_collection(coll): colls_to_be_removed.append(coll)
# secondly remove the collections
#for coll in colls_to_be_removed:
# colls_out_for_display.remove(coll)
if verbose:
debug += "<br />6) --- remove collections that have ancestors about to be searched, unless they are hosted ---"
debug += "<br />colls_out_for_display : %s" % colls_out_for_display
debug += "<br />"
# calculate the hosted collections to be searched.
if colls_out_for_display == [cc]:
if is_hosted_collection(cc):
hosted_colls_out.append(cc)
else:
for coll in get_coll_sons(cc):
if is_hosted_collection(coll):
hosted_colls_out.append(coll)
else:
for coll in colls_out_for_display:
if is_hosted_collection(coll):
hosted_colls_out.append(coll)
if verbose:
debug += "<br />7) --- calculate the hosted_colls_out ---"
debug += "<br />hosted_colls_out : %s" % hosted_colls_out
debug += "<br />"
# second, let us decide on collection splitting:
if split_colls == 0:
# type A - no sons are wanted
colls_out = colls_out_for_display
else:
# type B - sons (first-level descendants) are wanted
for coll in colls_out_for_display:
coll_sons = get_coll_sons(coll)
if coll_sons == []:
colls_out.append(coll)
else:
for coll_son in coll_sons:
if not is_hosted_collection(coll_son):
colls_out.append(coll_son)
#else:
# colls_out = colls_out + coll_sons
# remove duplicates:
#colls_out_nondups=filter(lambda x, colls_out=colls_out: colls_out[x-1] not in colls_out[x:], range(1, len(colls_out)+1))
#colls_out = map(lambda x, colls_out=colls_out:colls_out[x-1], colls_out_nondups)
#colls_out = list(set(colls_out))
#remove duplicates while preserving the order
set_out = set()
colls_out = [coll for coll in colls_out if coll not in set_out and not set_out.add(coll)]
if verbose:
debug += "<br />8) --- calculate the colls_out; remove duplicates ---"
debug += "<br />colls_out : %s" % colls_out
debug += "<br />"
# remove the hosted collections from the collections to be searched
if hosted_colls_out:
for coll in hosted_colls_out:
try:
colls_out.remove(coll)
except ValueError:
# in case coll was not found in colls_out
pass
if verbose:
debug += "<br />9) --- remove the hosted_colls from the colls_out ---"
debug += "<br />colls_out : %s" % colls_out
return (cc, colls_out_for_display, colls_out, hosted_colls_out, debug)
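The order-preserving duplicate removal used twice above (the `set_out` comprehension) relies on `set.add()` returning None, so the membership test and the insertion happen in a single pass:

```python
def dedup_keep_order(items):
    """Remove duplicates from items while preserving the original order."""
    seen = set()
    # seen.add(x) returns None (falsy), so 'not seen.add(x)' both records
    # x and keeps the filter expression True for first occurrences
    return [x for x in items if x not in seen and not seen.add(x)]
```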
def get_synonym_terms(term, kbr_name, match_type, use_memoise=False):
"""
Return list of synonyms for TERM by looking in KBR_NAME in
MATCH_TYPE style.
@param term: search-time term or index-time term
@type term: str
@param kbr_name: knowledge base name
@type kbr_name: str
@param match_type: specifies how the term matches against the KBR
before doing the lookup. Could be 'exact' (default),
'leading_to_comma', or 'leading_to_number'.
@type match_type: str
@param use_memoise: can we memoise while doing lookups?
@type use_memoise: bool
@return: list of term synonyms
@rtype: list of strings
"""
dterms = {}
## exact match is default:
term_for_lookup = term
term_remainder = ''
## but maybe match different term:
- if match_type == 'leading_to_comma':
+ if match_type == CFG_BIBINDEX_SYNONYM_MATCH_TYPE['leading_to_comma']:
mmm = re.match(r'^(.*?)(\s*,.*)$', term)
if mmm:
term_for_lookup = mmm.group(1)
term_remainder = mmm.group(2)
- elif match_type == 'leading_to_number':
+ elif match_type == CFG_BIBINDEX_SYNONYM_MATCH_TYPE['leading_to_number']:
mmm = re.match(r'^(.*?)(\s*\d.*)$', term)
if mmm:
term_for_lookup = mmm.group(1)
term_remainder = mmm.group(2)
## FIXME: workaround: escaping SQL wild-card signs, since KBR's
## exact search is doing LIKE query, so would match everything:
term_for_lookup = term_for_lookup.replace('%', '\%')
## OK, now find synonyms:
for kbr_values in get_kbr_values(kbr_name,
searchkey=term_for_lookup,
searchtype='e',
use_memoise=use_memoise):
for kbr_value in kbr_values:
dterms[kbr_value + term_remainder] = 1
## return list of term synonyms:
return dterms.keys()
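The two non-exact match types above look up only the part of the term before the first comma or the first digit, and glue the remainder back onto every synonym found. A standalone sketch of that splitting (plain strings stand in for the CFG_BIBINDEX_SYNONYM_MATCH_TYPE values):

```python
import re

def split_term(term, match_type='exact'):
    """Return (term_for_lookup, term_remainder) for the given match type."""
    if match_type == 'leading_to_comma':
        # keep everything before the first comma for the KBR lookup
        mmm = re.match(r'^(.*?)(\s*,.*)$', term)
    elif match_type == 'leading_to_number':
        # keep everything before the first digit for the KBR lookup
        mmm = re.match(r'^(.*?)(\s*\d.*)$', term)
    else:
        mmm = None  # 'exact': look up the whole term
    if mmm:
        return mmm.group(1), mmm.group(2)
    return term, ''
```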
def wash_output_format(format):
"""Wash output format FORMAT. Currently only prevents input like
'of=9' for backwards-compatible format that prints certain fields
only. (for this task, 'of=tm' is preferred)"""
if str(format[0:3]).isdigit() and len(format) != 6:
# asked to print MARC tags, but not enough digits,
# so let's switch back to HTML brief default
return 'hb'
else:
return format
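The digit-prefix rule behaves like this (a standalone re-statement of the check above):

```python
def wash_output_format_sketch(fmt):
    """MARC-tag output formats start with digits and must be exactly
    6 characters long; anything shorter falls back to HTML brief."""
    if str(fmt[0:3]).isdigit() and len(fmt) != 6:
        return 'hb'  # not enough digits: switch back to HTML brief
    return fmt
```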
def wash_pattern(p):
"""Wash pattern passed by URL. Check for sanity of the wildcard by
removing wildcards if they are appended to extremely short words
(1-3 letters). TODO: instead of this approximate treatment, it
would be much better to introduce a time limit, e.g. to kill a
query if it does not finish in 10 seconds."""
# strip accents:
# p = strip_accents(p) # FIXME: when available, strip accents all the time
# add leading/trailing whitespace for the two following wildcard-sanity checking regexps:
p = " " + p + " "
# replace spaces within quotes by __SPACE__ temporarily:
p = re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), ' ', '__SPACE__')+"'", p)
p = re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), ' ', '__SPACE__')+"\"", p)
p = re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), ' ', '__SPACE__')+"/", p)
# get rid of unquoted wildcards after spaces:
p = re_pattern_wildcards_after_spaces.sub("\\1", p)
# get rid of extremely short words (1-3 letters with wildcards):
#p = re_pattern_short_words.sub("\\1", p)
# replace back __SPACE__ by spaces:
p = re_pattern_space.sub(" ", p)
# replace special terms:
p = re_pattern_today.sub(time.strftime("%Y-%m-%d", time.localtime()), p)
# remove unnecessary whitespace:
p = string.strip(p)
# remove potentially wrong UTF-8 characters:
p = wash_for_utf8(p)
return p
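The __SPACE__ trick used here (and in the heuristics earlier) can be shown in isolation for the double-quote case (a simplified regexp; the real code also handles single quotes and slashes):

```python
import re

# hypothetical simplified pattern; the real re_pattern_double_quotes
# is defined elsewhere in this module
RE_DOUBLE_QUOTED = re.compile(r'"(.*?)"')

def protect_quoted_spaces(p):
    # temporarily replace spaces inside double quotes by __SPACE__,
    # so the pattern can later be split on plain spaces
    return RE_DOUBLE_QUOTED.sub(
        lambda m: '"' + m.group(1).replace(' ', '__SPACE__') + '"', p)

def restore_spaces(p):
    # replace back __SPACE__ by ' '
    return p.replace('__SPACE__', ' ')
```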
def wash_field(f):
"""Wash field passed by URL."""
if f:
# get rid of unnecessary whitespace and make it lowercase
# (e.g. Author -> author) to better suit iPhone etc input
# mode:
f = f.strip().lower()
# wash legacy 'f' field names, e.g. replace 'wau' or `au' by
# 'author', if applicable:
if CFG_WEBSEARCH_FIELDS_CONVERT:
f = CFG_WEBSEARCH_FIELDS_CONVERT.get(f, f)
return f
def wash_dates(d1="", d1y=0, d1m=0, d1d=0, d2="", d2y=0, d2m=0, d2d=0):
"""
Take user-submitted date arguments D1 (full datetime string) or
(D1Y, D1M, D1D) year, month, day tuple and D2 or (D2Y, D2M, D2D)
and return (datetext1, datetext2) datetime
strings in the YYYY-MM-DD HH:MM:SS format suitable for time
restricted searching.
Note that when both D1 and (D1Y, D1M, D1D) parameters are present,
the precedence goes to D1. Ditto for D2*.
Note that when (D1Y, D1M, D1D) are taken into account, some values
may be missing and are completed e.g. to 01 or 12 according to
whether it is the starting or the ending date.
"""
datetext1, datetext2 = "", ""
# sanity checking:
if d1 == "" and d1y == 0 and d1m == 0 and d1d == 0 and d2 == "" and d2y == 0 and d2m == 0 and d2d == 0:
return ("", "") # nothing selected, so return empty values
# wash first (starting) date:
if d1:
# full datetime string takes precedence:
datetext1 = d1
else:
# okay, first date passed as (year,month,day):
if d1y:
datetext1 += "%04d" % d1y
else:
datetext1 += "0000"
if d1m:
datetext1 += "-%02d" % d1m
else:
datetext1 += "-01"
if d1d:
datetext1 += "-%02d" % d1d
else:
datetext1 += "-01"
datetext1 += " 00:00:00"
# wash second (ending) date:
if d2:
# full datetime string takes precedence:
datetext2 = d2
else:
# okay, second date passed as (year,month,day):
if d2y:
datetext2 += "%04d" % d2y
else:
datetext2 += "9999"
if d2m:
datetext2 += "-%02d" % d2m
else:
datetext2 += "-12"
if d2d:
datetext2 += "-%02d" % d2d
else:
datetext2 += "-31" # NOTE: perhaps we should add max(datenumber) in
# given month, but for our querying it's not
# needed, 31 will always do
datetext2 += " 00:00:00"
# okay, return constructed YYYY-MM-DD HH:MM:SS datetexts:
return (datetext1, datetext2)
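The completion rules can be sketched standalone: missing parts of the starting date snap to the earliest value, missing parts of the ending date to the latest (a simplified sketch that skips the full-datetime precedence and the all-empty short-circuit):

```python
def complete_dates(d1y=0, d1m=0, d1d=0, d2y=0, d2m=0, d2d=0):
    """Build (start, end) 'YYYY-MM-DD HH:MM:SS' strings from partial
    year/month/day inputs, defaulting missing parts outwards."""
    # start date: missing month/day become 01, missing year 0000
    start = "%04d-%02d-%02d 00:00:00" % (d1y or 0, d1m or 1, d1d or 1)
    # end date: missing month becomes 12, day 31, year 9999
    end = "%04d-%02d-%02d 00:00:00" % (d2y or 9999, d2m or 12, d2d or 31)
    return (start, end)
```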
def is_hosted_collection(coll):
"""Check if the given collection is a hosted one; i.e. its dbquery starts with hostedcollection:
Returns True if it is, False if it's not or if the result is empty or if the query failed"""
res = run_sql("SELECT dbquery FROM collection WHERE name=%s", (coll, ))
try:
return res[0][0].startswith("hostedcollection:")
except:
return False
def get_colID(c):
"Return collection ID for collection name C. Return None if no match found."
colID = None
res = run_sql("SELECT id FROM collection WHERE name=%s", (c,), 1)
if res:
colID = res[0][0]
return colID
def get_coll_normalised_name(c):
"""Returns normalised collection name (case sensitive) for collection name
C (case insensitive).
Returns None if no match found."""
try:
return run_sql("SELECT name FROM collection WHERE name=%s", (c,))[0][0]
except:
return None
def get_coll_ancestors(coll):
"Returns a list of ancestors for collection 'coll'."
coll_ancestors = []
coll_ancestor = coll
while 1:
res = run_sql("""SELECT c.name FROM collection AS c
LEFT JOIN collection_collection AS cc ON c.id=cc.id_dad
LEFT JOIN collection AS ccc ON ccc.id=cc.id_son
WHERE ccc.name=%s ORDER BY cc.id_dad ASC LIMIT 1""",
(coll_ancestor,))
if res:
coll_name = res[0][0]
coll_ancestors.append(coll_name)
coll_ancestor = coll_name
else:
break
# ancestors found, return reversed list:
coll_ancestors.reverse()
return coll_ancestors
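The ancestor walk above can be modelled without SQL: given a child-to-parent mapping (`parent_of` is an assumed stand-in for the `collection_collection` table), follow the dad links upwards and reverse the chain:

```python
def ancestors(coll, parent_of):
    # parent_of: hypothetical dict mapping a collection name to its dad
    chain = []
    while coll in parent_of:
        coll = parent_of[coll]
        chain.append(coll)
    chain.reverse()  # root first, as get_coll_ancestors() returns
    return chain
```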
def get_coll_sons(coll, type='r', public_only=1):
"""Return a list of sons (first-level descendants) of type 'type' for collection 'coll'.
If public_only, then return only non-restricted son collections.
"""
coll_sons = []
query = "SELECT c.name FROM collection AS c "\
"LEFT JOIN collection_collection AS cc ON c.id=cc.id_son "\
"LEFT JOIN collection AS ccc ON ccc.id=cc.id_dad "\
"WHERE cc.type=%s AND ccc.name=%s"
query += " ORDER BY cc.score DESC"
res = run_sql(query, (type, coll))
for name in res:
if not public_only or not collection_restricted_p(name[0]):
coll_sons.append(name[0])
return coll_sons
class CollectionAllChildrenDataCacher(DataCacher):
"""Cache for all children of a collection (regular & virtual, public & private)"""
def __init__(self):
def cache_filler():
def get_all_children(coll, type='r', public_only=1):
"""Return a list of all children of type 'type' for collection 'coll'.
If public_only, then return only non-restricted child collections.
If type='*', then return both regular and virtual collections.
"""
children = []
if type == '*':
sons = get_coll_sons(coll, 'r', public_only) + get_coll_sons(coll, 'v', public_only)
else:
sons = get_coll_sons(coll, type, public_only)
for child in sons:
children.append(child)
children.extend(get_all_children(child, type, public_only))
return children
ret = {}
collections = collection_reclist_cache.cache.keys()
for collection in collections:
ret[collection] = get_all_children(collection, '*', public_only=0)
return ret
def timestamp_verifier():
return max(get_table_update_time('collection'), get_table_update_time('collection_collection'))
DataCacher.__init__(self, cache_filler, timestamp_verifier)
try:
if not collection_allchildren_cache.is_ok_p:
raise Exception
except Exception:
collection_allchildren_cache = CollectionAllChildrenDataCacher()
def get_collection_allchildren(coll, recreate_cache_if_needed=True):
"""Returns the list of all children of a collection."""
if recreate_cache_if_needed:
collection_allchildren_cache.recreate_cache_if_needed()
if coll not in collection_allchildren_cache.cache:
return [] # collection does not exist; return empty list
return collection_allchildren_cache.cache[coll]
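A minimal sketch of the recursive flattening done by the cache filler, with a plain dict (`sons_of`, a hypothetical stand-in for `get_coll_sons`) instead of database queries:

```python
def all_children(coll, sons_of):
    # sons_of: hypothetical dict mapping a collection to its first-level sons
    out = []
    for child in sons_of.get(coll, []):
        out.append(child)
        out.extend(all_children(child, sons_of))  # depth-first descent
    return out
```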
def get_coll_real_descendants(coll, type='_', get_hosted_colls=True):
"""Return a list of all descendants of collection 'coll' that are defined by a 'dbquery'.
In other words, we need to decompose compound collections like "A &amp; B" into "A" and "B" provided
that "A & B" has no associated database query defined.
"""
coll_sons = []
res = run_sql("""SELECT c.name,c.dbquery FROM collection AS c
LEFT JOIN collection_collection AS cc ON c.id=cc.id_son
LEFT JOIN collection AS ccc ON ccc.id=cc.id_dad
WHERE ccc.name=%s AND cc.type LIKE %s ORDER BY cc.score DESC""",
(coll, type,))
for name, dbquery in res:
if dbquery: # this is 'real' collection, so return it:
if get_hosted_colls:
coll_sons.append(name)
else:
if not dbquery.startswith("hostedcollection:"):
coll_sons.append(name)
else: # this is 'composed' collection, so recurse:
coll_sons.extend(get_coll_real_descendants(name))
return coll_sons
def browse_pattern_phrases(req, colls, p, f, rg, ln=CFG_SITE_LANG):
"""Returns either biliographic phrases or words indexes."""
## is p enclosed in quotes? (coming from exact search)
if p.startswith('"') and p.endswith('"'):
p = p[1:-1]
p_orig = p
## okay, "real browse" follows:
## FIXME: the maths in the get_nearest_terms_in_bibxxx is just a test
if not f and string.find(p, ":") > 0: # does 'p' contain ':'?
f, p = string.split(p, ":", 1)
## do we search in words indexes?
# FIXME uncomment this
#if not f:
# return browse_in_bibwords(req, p, f)
coll_hitset = intbitset()
for coll_name in colls:
coll_hitset |= get_collection_reclist(coll_name)
index_id = get_index_id_from_field(f)
if index_id != 0:
browsed_phrases_in_colls = get_nearest_terms_in_idxphrase_with_collection(p, index_id, rg/2, rg/2, coll_hitset)
else:
browsed_phrases = get_nearest_terms_in_bibxxx(p, f, (rg+1)/2+1, (rg-1)/2+1)
while not browsed_phrases:
# try again and again with shorter and shorter pattern:
try:
p = p[:-1]
browsed_phrases = get_nearest_terms_in_bibxxx(p, f, (rg+1)/2+1, (rg-1)/2+1)
except:
# probably there are no hits at all:
#req.write(_("No values found."))
return []
## try to check hits in this particular collection selection:
browsed_phrases_in_colls = []
if 0:
for phrase in browsed_phrases:
phrase_hitset = intbitset()
phrase_hitsets = search_pattern("", phrase, f, 'e')
for coll in colls:
phrase_hitset.union_update(phrase_hitsets[coll])
if len(phrase_hitset) > 0:
# okay, this phrase has some hits in colls, so add it:
browsed_phrases_in_colls.append([phrase, len(phrase_hitset)])
## were there hits in collections?
if browsed_phrases_in_colls == []:
if browsed_phrases != []:
#write_warning(req, """<p>No match close to <em>%s</em> found in given collections.
#Please try different term.<p>Displaying matches in any collection...""" % p_orig)
## try to get nbhits for these phrases in any collection:
for phrase in browsed_phrases:
nbhits = get_nbhits_in_bibxxx(phrase, f, coll_hitset)
if nbhits > 0:
browsed_phrases_in_colls.append([phrase, nbhits])
return browsed_phrases_in_colls
def browse_pattern(req, colls, p, f, rg, ln=CFG_SITE_LANG):
"""Displays either biliographic phrases or words indexes."""
# load the right message language
_ = gettext_set_language(ln)
browsed_phrases_in_colls = browse_pattern_phrases(req, colls, p, f, rg, ln)
if len(browsed_phrases_in_colls) == 0:
req.write(_("No values found."))
return
## display results now:
out = websearch_templates.tmpl_browse_pattern(
f=f,
fn=get_field_i18nname(get_field_name(f) or f, ln, False),
ln=ln,
browsed_phrases_in_colls=browsed_phrases_in_colls,
colls=colls,
rg=rg,
)
req.write(out)
return
def browse_in_bibwords(req, p, f, ln=CFG_SITE_LANG):
"""Browse inside words indexes."""
if not p:
return
_ = gettext_set_language(ln)
urlargd = {}
urlargd.update(req.argd)
urlargd['action'] = 'search'
nearest_box = create_nearest_terms_box(urlargd, p, f, 'w', ln=ln, intro_text_p=0)
req.write(websearch_templates.tmpl_search_in_bibwords(
p = p,
f = f,
ln = ln,
nearest_box = nearest_box
))
return
def search_pattern(req=None, p=None, f=None, m=None, ap=0, of="id", verbose=0, ln=CFG_SITE_LANG, display_nearest_terms_box=True, wl=0):
"""Search for complex pattern 'p' within field 'f' according to
matching type 'm'. Return hitset of recIDs.
The function uses a multi-stage searching algorithm in case no
exact match is found. See the Search Internals document for
detailed description.
The 'ap' argument governs whether alternative patterns are to
be used in case there is no direct hit for (p,f,m). For
example, whether to replace non-alphanumeric characters by
spaces if it would give some hits. See the Search Internals
document for detailed description. (ap=0 forbids the
alternative pattern usage, ap=1 permits it.)
'ap' is also internally used for allowing hidden tag search
(for requests coming from webcoll, for example). In this
case ap=-9 is used.
The 'of' argument governs whether to print or not some
information to the user in case of no match found. (Usually it
prints the information in case of HTML formats, otherwise it's
silent).
The 'verbose' argument controls the level of debugging information
to be printed (0=least, 9=most).
All the parameters are assumed to have been previously washed.
This function is suitable as a mid-level API.
"""
_ = gettext_set_language(ln)
hitset_empty = intbitset()
# sanity check:
if not p:
hitset_full = intbitset(trailing_bits=1)
hitset_full.discard(0)
# no pattern, so return all universe
return hitset_full
# search stage 1: break up arguments into basic search units:
if verbose and of.startswith("h"):
t1 = os.times()[4]
basic_search_units = create_basic_search_units(req, p, f, m, of)
if verbose and of.startswith("h"):
t2 = os.times()[4]
write_warning("Search stage 1: basic search units are: %s" % cgi.escape(repr(basic_search_units)), req=req)
write_warning("Search stage 1: execution took %.2f seconds." % (t2 - t1), req=req)
# search stage 2: do search for each search unit and verify hit presence:
if verbose and of.startswith("h"):
t1 = os.times()[4]
basic_search_units_hitsets = []
#prepare hiddenfield-related..
myhiddens = CFG_BIBFORMAT_HIDDEN_TAGS
can_see_hidden = False
if req:
user_info = collect_user_info(req)
can_see_hidden = user_info.get('precached_canseehiddenmarctags', False)
if not req and ap == -9: # special request, coming from webcoll
can_see_hidden = True
if can_see_hidden:
myhiddens = []
if CFG_INSPIRE_SITE and of.startswith('h'):
# fulltext/caption search warnings for INSPIRE:
fields_to_be_searched = [f for o, p, f, m in basic_search_units]
if 'fulltext' in fields_to_be_searched:
write_warning( _("Warning: full-text search is only available for a subset of papers mostly from %(x_range_from_year)s-%(x_range_to_year)s.") % \
{'x_range_from_year': '2006',
'x_range_to_year': '2012'}, req=req)
elif 'caption' in fields_to_be_searched:
write_warning(_("Warning: figure caption search is only available for a subset of papers mostly from %(x_range_from_year)s-%(x_range_to_year)s.") % \
{'x_range_from_year': '2008',
'x_range_to_year': '2012'}, req=req)
for idx_unit in xrange(len(basic_search_units)):
bsu_o, bsu_p, bsu_f, bsu_m = basic_search_units[idx_unit]
if bsu_f and len(bsu_f) < 2:
if of.startswith("h"):
write_warning(_("There is no index %s. Searching for %s in all fields." % (bsu_f, bsu_p)), req=req)
bsu_f = ''
bsu_m = 'w'
if of.startswith("h") and verbose:
write_warning(_('Instead searching %s.') % str([bsu_o, bsu_p, bsu_f, bsu_m]), req=req)
try:
basic_search_unit_hitset = search_unit(bsu_p, bsu_f, bsu_m, wl)
except InvenioWebSearchWildcardLimitError, excp:
basic_search_unit_hitset = excp.res
if of.startswith("h"):
write_warning(_("Search term too generic, displaying only partial results..."), req=req)
# FIXME: print warning if we use native full-text indexing
if bsu_f == 'fulltext' and bsu_m != 'w' and of.startswith('h') and not CFG_SOLR_URL:
write_warning(_("No phrase index available for fulltext yet, looking for word combination..."), req=req)
#check that the user is allowed to search with this tag
#if he/she tries it
if bsu_f and len(bsu_f) > 1 and bsu_f[0].isdigit() and bsu_f[1].isdigit():
for htag in myhiddens:
ltag = len(htag)
samelenfield = bsu_f[0:ltag]
if samelenfield == htag: #user searches by a hidden tag
#we won't show you anything..
basic_search_unit_hitset = intbitset()
if verbose >= 9 and of.startswith("h"):
write_warning("Pattern %s hitlist omitted since \
it queries in a hidden tag %s" %
(cgi.escape(repr(bsu_p)), repr(myhiddens)), req=req)
display_nearest_terms_box = False #..and stop spying, too.
if verbose >= 9 and of.startswith("h"):
write_warning("Search stage 1: pattern %s gave hitlist %s" % (cgi.escape(bsu_p), basic_search_unit_hitset), req=req)
if len(basic_search_unit_hitset) > 0 or \
ap<1 or \
bsu_o=="|" or \
((idx_unit+1)<len(basic_search_units) and basic_search_units[idx_unit+1][0]=="|"):
# stage 2-1: this basic search unit is retained, since
# either the hitset is non-empty, or the approximate
# pattern treatment is switched off, or the search unit
# was joined by an OR operator to preceding/following
# units so we do not require that it exists
basic_search_units_hitsets.append(basic_search_unit_hitset)
else:
# stage 2-2: no hits found for this search unit, try to replace non-alphanumeric chars inside pattern:
if re.search(r'[^a-zA-Z0-9\s\:]', bsu_p) and bsu_f != 'refersto' and bsu_f != 'citedby':
if bsu_p.startswith('"') and bsu_p.endswith('"'): # is it ACC query?
bsu_pn = re.sub(r'[^a-zA-Z0-9\s\:]+', "*", bsu_p)
else: # it is WRD query
bsu_pn = re.sub(r'[^a-zA-Z0-9\s\:]+', " ", bsu_p)
if verbose and of.startswith('h') and req:
write_warning("Trying (%s,%s,%s)" % (cgi.escape(bsu_pn), cgi.escape(bsu_f), cgi.escape(bsu_m)), req=req)
basic_search_unit_hitset = search_pattern(req=None, p=bsu_pn, f=bsu_f, m=bsu_m, of="id", ln=ln, wl=wl)
if len(basic_search_unit_hitset) > 0:
# we retain the new unit instead
if of.startswith('h'):
write_warning(_("No exact match found for %(x_query1)s, using %(x_query2)s instead...") % \
{'x_query1': "<em>" + cgi.escape(bsu_p) + "</em>",
'x_query2': "<em>" + cgi.escape(bsu_pn) + "</em>"}, req=req)
basic_search_units[idx_unit][1] = bsu_pn
basic_search_units_hitsets.append(basic_search_unit_hitset)
else:
# stage 2-3: no hits found either, propose nearest indexed terms:
if of.startswith('h') and display_nearest_terms_box:
if req:
if bsu_f == "recid":
write_warning(_("Requested record does not seem to exist."), req=req)
else:
write_warning(create_nearest_terms_box(req.argd, bsu_p, bsu_f, bsu_m, ln=ln), req=req)
return hitset_empty
else:
# stage 2-3: no hits found either, propose nearest indexed terms:
if of.startswith('h') and display_nearest_terms_box:
if req:
if bsu_f == "recid":
write_warning(_("Requested record does not seem to exist."), req=req)
else:
write_warning(create_nearest_terms_box(req.argd, bsu_p, bsu_f, bsu_m, ln=ln), req=req)
return hitset_empty
if verbose and of.startswith("h"):
t2 = os.times()[4]
for idx_unit in range(0, len(basic_search_units)):
write_warning("Search stage 2: basic search unit %s gave %d hits." %
(basic_search_units[idx_unit][1:], len(basic_search_units_hitsets[idx_unit])), req=req)
write_warning("Search stage 2: execution took %.2f seconds." % (t2 - t1), req=req)
# search stage 3: apply boolean query for each search unit:
if verbose and of.startswith("h"):
t1 = os.times()[4]
# let the initial set be the complete universe:
hitset_in_any_collection = intbitset(trailing_bits=1)
hitset_in_any_collection.discard(0)
for idx_unit in xrange(len(basic_search_units)):
this_unit_operation = basic_search_units[idx_unit][0]
this_unit_hitset = basic_search_units_hitsets[idx_unit]
if this_unit_operation == '+':
hitset_in_any_collection.intersection_update(this_unit_hitset)
elif this_unit_operation == '-':
hitset_in_any_collection.difference_update(this_unit_hitset)
elif this_unit_operation == '|':
hitset_in_any_collection.union_update(this_unit_hitset)
else:
if of.startswith("h"):
write_warning("Invalid set operation %s." % cgi.escape(this_unit_operation), "Error", req=req)
if len(hitset_in_any_collection) == 0:
# no hits found, propose alternative boolean query:
if of.startswith('h') and display_nearest_terms_box:
nearestterms = []
for idx_unit in range(0, len(basic_search_units)):
bsu_o, bsu_p, bsu_f, bsu_m = basic_search_units[idx_unit]
if bsu_p.startswith("%") and bsu_p.endswith("%"):
bsu_p = "'" + bsu_p[1:-1] + "'"
bsu_nbhits = len(basic_search_units_hitsets[idx_unit])
# create a similar query, but with the basic search unit only
argd = {}
argd.update(req.argd)
argd['p'] = bsu_p
argd['f'] = bsu_f
nearestterms.append((bsu_p, bsu_nbhits, argd))
text = websearch_templates.tmpl_search_no_boolean_hits(
ln=ln, nearestterms=nearestterms)
write_warning(text, req=req)
if verbose and of.startswith("h"):
t2 = os.times()[4]
write_warning("Search stage 3: boolean query gave %d hits." % len(hitset_in_any_collection), req=req)
write_warning("Search stage 3: execution took %.2f seconds." % (t2 - t1), req=req)
return hitset_in_any_collection
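The stage-3 boolean combination can be sketched with plain Python sets standing in for `intbitset` (a simplified model; `combine_units` is not part of the module):

```python
def combine_units(universe, units):
    # units: list of (operator, hitset) pairs; '+' = AND, '-' = NOT, '|' = OR
    result = set(universe)  # start from the complete universe, as above
    for op, hits in units:
        if op == '+':
            result &= hits
        elif op == '-':
            result -= hits
        elif op == '|':
            result |= hits
    return result
```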
def search_pattern_parenthesised(req=None, p=None, f=None, m=None, ap=0, of="id", verbose=0, ln=CFG_SITE_LANG, display_nearest_terms_box=True, wl=0):
"""Search for complex pattern 'p' containing parenthesis within field 'f' according to
matching type 'm'. Return hitset of recIDs.
For more details on the parameters see 'search_pattern'
"""
_ = gettext_set_language(ln)
spires_syntax_converter = SpiresToInvenioSyntaxConverter()
spires_syntax_query = False
# if the pattern uses SPIRES search syntax, convert it to Invenio syntax
if spires_syntax_converter.is_applicable(p):
spires_syntax_query = True
p = spires_syntax_converter.convert_query(p)
# sanity check: do not call parenthesised parser for search terms
# like U(1) but still call it for searches like ('U(1)' | 'U(2)'):
if not re_pattern_parens.search(re_pattern_parens_quotes.sub('_', p)):
return search_pattern(req, p, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)
# Try searching with parentheses
try:
parser = SearchQueryParenthesisedParser()
# get a hitset with all recids
result_hitset = intbitset(trailing_bits=1)
# parse the query. The result is list of [op1, expr1, op2, expr2, ..., opN, exprN]
parsing_result = parser.parse_query(p)
if verbose and of.startswith("h"):
write_warning("Search stage 1: search_pattern_parenthesised() searched %s." % repr(p), req=req)
write_warning("Search stage 1: search_pattern_parenthesised() returned %s." % repr(parsing_result), req=req)
# go through every pattern
# calculate hitset for it
# combine pattern's hitset with the result using the corresponding operator
for index in xrange(0, len(parsing_result)-1, 2 ):
current_operator = parsing_result[index]
current_pattern = parsing_result[index+1]
if CFG_INSPIRE_SITE and spires_syntax_query:
# setting ap=0 to turn off approximate matching for 0 results.
# Doesn't work well in combinations.
# FIXME: The right fix involves collecting statuses for each
# hitset, then showing a nearest terms box exactly once,
# outside this loop.
ap = 0
display_nearest_terms_box = False
# obtain a hitset for the current pattern
current_hitset = search_pattern(req, current_pattern, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)
# combine the current hitset with resulting hitset using the current operator
if current_operator == '+':
result_hitset = result_hitset & current_hitset
elif current_operator == '-':
result_hitset = result_hitset - current_hitset
elif current_operator == '|':
result_hitset = result_hitset | current_hitset
else:
assert False, "Unknown operator in search_pattern_parenthesised()"
return result_hitset
# If searching with parentheses fails, perform search ignoring parentheses
except SyntaxError:
write_warning(_("Search syntax misunderstood. Ignoring all parentheses in the query. If this doesn't help, please check your search and try again."), req=req)
# remove the parentheses in the query. Current implementation removes all the parentheses,
# but it could be improved to remove only those that are not inside quotes
p = p.replace('(', ' ')
p = p.replace(')', ' ')
return search_pattern(req, p, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)
def search_unit(p, f=None, m=None, wl=0, ignore_synonyms=None):
"""Search for basic search unit defined by pattern 'p' and field
'f' and matching type 'm'. Return hitset of recIDs.
All the parameters are assumed to have been previously washed.
'p' is assumed to be already a ``basic search unit'' so that it
is searched as such and is not broken up in any way. Only
wildcard and span queries are being detected inside 'p'.
If CFG_WEBSEARCH_SYNONYM_KBRS is set and we are searching in
one of the indexes that has defined runtime synonym knowledge
base, then look up there and automatically enrich search
results with results for synonyms.
In case the wildcard limit (wl) is greater than 0 and this limit
is reached, an InvenioWebSearchWildcardLimitError will be raised.
In case you want to call this function with no limit for the
wildcard queries, wl should be 0.
Parameter 'ignore_synonyms' is a list of terms for which we
should not try to further find a synonym.
This function is suitable as a low-level API.
"""
## create empty output results set:
hitset = intbitset()
if not p: # sanity checking
return hitset
+ tokenizer = get_field_tokenizer_type(f)
+ hitset_cjk = intbitset()
+ if tokenizer == "BibIndexCJKTokenizer":
+ if is_there_any_CJK_character_in_text(p):
+ cjk_tok = BibIndexCJKTokenizer()
+ chars = cjk_tok.tokenize_for_words(p)
+ for char in chars:
+ hitset_cjk |= search_unit_in_bibwords(char, f, m, wl)
+
## eventually look up runtime synonyms:
hitset_synonyms = intbitset()
if CFG_WEBSEARCH_SYNONYM_KBRS.has_key(f):
if ignore_synonyms is None:
ignore_synonyms = []
ignore_synonyms.append(p)
for p_synonym in get_synonym_terms(p,
CFG_WEBSEARCH_SYNONYM_KBRS[f][0],
CFG_WEBSEARCH_SYNONYM_KBRS[f][1]):
if p_synonym != p and \
not p_synonym in ignore_synonyms:
hitset_synonyms |= search_unit(p_synonym, f, m, wl,
ignore_synonyms)
## look up hits:
if f == 'fulltext' and get_idx_indexer('fulltext') == 'SOLR' and CFG_SOLR_URL:
# redirect to Solr
try:
return search_unit_in_solr(p, f, m)
except:
# There were troubles with getting full-text search
# results from Solr. Let us alert the admin of these
# problems and let us simply return empty results to the
# end user.
register_exception()
return hitset
elif f == 'fulltext' and get_idx_indexer('fulltext') == 'XAPIAN' and CFG_XAPIAN_ENABLED:
# redirect to Xapian
try:
return search_unit_in_xapian(p, f, m)
except:
# There were troubles with getting full-text search
# results from Xapian. Let us alert the admin of these
# problems and let us simply return empty results to the
# end user.
register_exception()
return hitset
if f == 'datecreated':
hitset = search_unit_in_bibrec(p, p, 'c')
elif f == 'datemodified':
hitset = search_unit_in_bibrec(p, p, 'm')
elif f == 'refersto':
# we are doing search by the citation count
hitset = search_unit_refersto(p)
elif f == 'rawref':
from invenio.refextract_api import search_from_reference
field, pattern = search_from_reference(p)
return search_unit(pattern, field)
elif f == 'citedby':
# we are doing search by the citation count
hitset = search_unit_citedby(p)
elif f == 'collection':
# we are doing search by the collection name or MARC field
hitset = search_unit_collection(p, m, wl=wl)
elif m == 'a' or m == 'r':
# we are doing either phrase search or regexp search
if f == 'fulltext':
# FIXME: workaround for not having phrase index yet
return search_pattern(None, p, f, 'w')
index_id = get_index_id_from_field(f)
if index_id != 0:
if m == 'a' and index_id in get_idxpair_field_ids():
#for exact match on the admin configured fields we are searching in the pair tables
hitset = search_unit_in_idxpairs(p, f, m, wl)
else:
hitset = search_unit_in_idxphrases(p, f, m, wl)
else:
hitset = search_unit_in_bibxxx(p, f, m, wl)
# if not hitset and m == 'a' and (p[0] != '%' and p[-1] != '%'):
# #if we have no results by doing exact matching, do partial matching
# #for removing the distinction between simple and double quotes
# hitset = search_unit_in_bibxxx('%' + p + '%', f, m, wl)
elif p.startswith("cited:"):
# we are doing search by the citation count
hitset = search_unit_by_times_cited(p[6:])
else:
# we are doing bibwords search by default
hitset = search_unit_in_bibwords(p, f, m, wl=wl)
## merge synonym results and return total:
hitset |= hitset_synonyms
+ hitset |= hitset_cjk
return hitset
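The synonym enrichment above, with its cycle guard via `ignore_synonyms`, can be sketched as a small recursion over a hypothetical `synonyms` dict (a stand-in for the knowledge-base lookup):

```python
def expand_with_synonyms(term, synonyms, seen=None):
    # synonyms: hypothetical dict mapping a term to its synonym terms
    # 'seen' guards against synonym cycles, like 'ignore_synonyms' above
    if seen is None:
        seen = set()
    seen.add(term)
    hits = {term}
    for syn in synonyms.get(term, []):
        if syn not in seen:
            hits |= expand_with_synonyms(syn, synonyms, seen)
    return hits
```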
def get_idxpair_field_ids():
"""Returns the list of ids for the fields that idxPAIRS should be used on"""
index_dict = dict(run_sql("SELECT name, id FROM idxINDEX"))
return [index_dict[field] for field in index_dict if field in CFG_WEBSEARCH_IDXPAIRS_FIELDS]
def search_unit_in_bibwords(word, f, m=None, decompress=zlib.decompress, wl=0):
"""Searches for 'word' inside bibwordsX table for field 'f' and returns hitset of recIDs."""
set = intbitset() # will hold output result set
set_used = 0 # not-yet-used flag, to be able to circumvent set operations
limit_reached = 0 # flag for knowing if the query limit has been reached
# if no field is specified, search in the global index.
f = f or 'anyfield'
index_id = get_index_id_from_field(f)
if index_id:
bibwordsX = "idxWORD%02dF" % index_id
stemming_language = get_index_stemming_language(index_id)
else:
return intbitset() # word index f does not exist
# wash 'word' argument and run query:
- if f == 'authorcount' and word.endswith('+'):
+ if f.endswith('count') and word.endswith('+'):
# field count query of the form N+ so transform N+ to N->99999:
word = word[:-1] + '->99999'
word = string.replace(word, '*', '%') # we now use '*' as the truncation character
words = string.split(word, "->", 1) # check for span query
if len(words) == 2:
word0 = re_word.sub('', words[0])
word1 = re_word.sub('', words[1])
if stemming_language:
word0 = lower_index_term(word0)
word1 = lower_index_term(word1)
word0 = stem(word0, stemming_language)
word1 = stem(word1, stemming_language)
word0_washed = wash_index_term(word0)
word1_washed = wash_index_term(word1)
- if f == 'authorcount':
+ if f.endswith('count'):
# field count query; convert to integers in order
# to have numerical behaviour for 'BETWEEN n1 AND n2' query
try:
word0_washed = int(word0_washed)
word1_washed = int(word1_washed)
except ValueError:
pass
try:
res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term BETWEEN %%s AND %%s" % bibwordsX,
(word0_washed, word1_washed), wildcard_limit = wl)
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
if f == 'journal':
pass # FIXME: quick hack for the journal index
else:
word = re_word.sub('', word)
if stemming_language:
word = lower_index_term(word)
word = stem(word, stemming_language)
if string.find(word, '%') >= 0: # do we have wildcard in the word?
if f == 'journal':
# FIXME: quick hack for the journal index
# FIXME: we can run a sanity check here for all indexes
res = ()
else:
try:
res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term LIKE %%s" % bibwordsX,
(wash_index_term(word),), wildcard_limit = wl)
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
res = run_sql("SELECT term,hitlist FROM %s WHERE term=%%s" % bibwordsX,
(wash_index_term(word),))
# fill the result set:
for word, hitlist in res:
hitset_bibwrd = intbitset(hitlist)
# add the results:
if set_used:
set.union_update(hitset_bibwrd)
else:
set = hitset_bibwrd
set_used = 1
#check to see if the query limit was reached
if limit_reached:
#raise an exception, so we can print a nice message to the user
raise InvenioWebSearchWildcardLimitError(set)
# okay, return result set:
return set
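The count-field washing above ('N+' becomes the span query 'N->99999') can be sketched in isolation; `wash_count_query` is a hypothetical helper, not part of the module:

```python
def wash_count_query(word):
    # 'N+' means 'N or more', so rewrite it as the span query 'N->99999'
    if word.endswith('+'):
        word = word[:-1] + '->99999'
    parts = word.split('->', 1)  # span query check, as in the code above
    return tuple(parts) if len(parts) == 2 else (word,)
```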
def search_unit_in_idxpairs(p, f, type, wl=0):
"""Searches for pair 'p' inside idxPAIR table for field 'f' and
returns hitset of recIDs found."""
limit_reached = 0 # flag for knowing if the query limit has been reached
do_exact_search = True # flag to know when it makes sense to try to do exact matching
result_set = intbitset()
#determine the idxPAIR table to read from
index_id = get_index_id_from_field(f)
if not index_id:
return intbitset()
stemming_language = get_index_stemming_language(index_id)
- pairs_tokenizer = BibIndexPairTokenizer(stemming_language)
+ pairs_tokenizer = BibIndexDefaultTokenizer(stemming_language)
idxpair_table_washed = wash_table_column_name("idxPAIR%02dF" % index_id)
if p.startswith("%") and p.endswith("%"):
p = p[1:-1]
original_pattern = p
p = string.replace(p, '*', '%') # we now use '*' as the truncation character
queries_releated_vars = [] # contains tuples of (query_addons, query_params, use_query_limit)
#is it a span query?
ps = string.split(p, "->", 1)
if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')):
#so we are dealing with a span query
- pairs_left = pairs_tokenizer.tokenize(ps[0])
- pairs_right = pairs_tokenizer.tokenize(ps[1])
+ pairs_left = pairs_tokenizer.tokenize_for_pairs(ps[0])
+ pairs_right = pairs_tokenizer.tokenize_for_pairs(ps[1])
if not pairs_left or not pairs_right:
# we are not actually dealing with pairs but with words
return search_unit_in_bibwords(original_pattern, f, type, wl)
elif len(pairs_left) != len(pairs_right):
# it is kind of hard to know what the user actually wanted
# we have to do: foo bar baz -> qux xyz, so let's switch to phrase
return search_unit_in_idxphrases(original_pattern, f, type, wl)
elif len(pairs_left) > 1 and \
len(pairs_right) > 1 and \
pairs_left[:-1] != pairs_right[:-1]:
# again we have something like: foo bar baz -> abc xyz qux
# so we'd better switch to phrase
return search_unit_in_idxphrases(original_pattern, f, type, wl)
else:
# finally, we can treat the search using idxPairs
# at this step we have either: foo bar -> abc xyz
# or foo bar abc -> foo bar xyz
queries_releated_vars = [("BETWEEN %s AND %s", (pairs_left[-1], pairs_right[-1]), True)]
for pair in pairs_left[:-1]:# which should be equal with pairs_right[:-1]
queries_releated_vars.append(("= %s", (pair, ), False))
do_exact_search = False # no exact search for span queries
elif string.find(p, '%') > -1:
#tokenizing p will remove the '%', so we have to make sure it stays
replacement = 'xxxxxxxxxx' #hopefully this will not clash with anything in the future
p = string.replace(p, '%', replacement)
- pairs = pairs_tokenizer.tokenize(p)
+ pairs = pairs_tokenizer.tokenize_for_pairs(p)
if not pairs:
# we are not actually dealing with pairs but with words
return search_unit_in_bibwords(original_pattern, f, type, wl)
queries_releated_vars = []
for pair in pairs:
if string.find(pair, replacement) > -1:
pair = string.replace(pair, replacement, '%') #we replace back the % sign
queries_releated_vars.append(("LIKE %s", (pair, ), True))
else:
queries_releated_vars.append(("= %s", (pair, ), False))
do_exact_search = False
else:
#normal query
- pairs = pairs_tokenizer.tokenize(p)
+ pairs = pairs_tokenizer.tokenize_for_pairs(p)
if not pairs:
# we are not actually dealing with pairs but with words
return search_unit_in_bibwords(original_pattern, f, type, wl)
queries_releated_vars = []
for pair in pairs:
queries_releated_vars.append(("= %s", (pair, ), False))
first_results = 1 # flag to know if it's the first set of results or not
for query_var in queries_releated_vars:
query_addons = query_var[0]
query_params = query_var[1]
use_query_limit = query_var[2]
if use_query_limit:
try:
res = run_sql_with_limit("SELECT term, hitlist FROM %s WHERE term %s" \
% (idxpair_table_washed, query_addons), query_params, wildcard_limit=wl) #kwalitee:disable=sql
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
res = run_sql("SELECT term, hitlist FROM %s WHERE term %s" \
% (idxpair_table_washed, query_addons), query_params) #kwalitee:disable=sql
if not res:
return intbitset()
for pair, hitlist in res:
hitset_idxpairs = intbitset(hitlist)
if first_results:
result_set = hitset_idxpairs
first_results = 0
else:
result_set.intersection_update(hitset_idxpairs)
#check to see if the query limit was reached
if limit_reached:
#raise an exception, so we can print a nice message to the user
raise InvenioWebSearchWildcardLimitError(result_set)
# check if we need to eliminate the false positives
if CFG_WEBSEARCH_IDXPAIRS_EXACT_SEARCH and do_exact_search:
# we need to eliminate the false positives
idxphrase_table_washed = wash_table_column_name("idxPHRASE%02dR" % index_id)
not_exact_search = intbitset()
for recid in result_set:
res = run_sql("SELECT termlist FROM %s WHERE id_bibrec %s" %(idxphrase_table_washed, '=%s'), (recid, )) #kwalitee:disable=sql
if res:
termlist = deserialize_via_marshal(res[0][0])
if not [term for term in termlist if term.lower().find(p.lower()) > -1]:
not_exact_search.add(recid)
else:
not_exact_search.add(recid)
# remove the recs that are false positives from the final result
result_set.difference_update(not_exact_search)
return result_set
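The pair-index idea can be illustrated with a toy tokenizer that emits consecutive word pairs (a rough sketch, not the actual BibIndex tokenizer, which also stems and washes terms):

```python
def tokenize_for_pairs(text):
    # hypothetical stand-in for the BibIndex pair tokenizer:
    # consecutive word pairs of the lower-cased input
    words = text.lower().split()
    return ["%s %s" % (a, b) for a, b in zip(words, words[1:])]
```

A span query like `foo bar -> foo baz` then compares only the last pair of each side, as in the `BETWEEN` branch above.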
def search_unit_in_idxphrases(p, f, type, wl=0):
"""Searches for phrase 'p' inside idxPHRASE*F table for field 'f' and returns hitset of recIDs found.
The search type is defined by 'type' (e.g. equals to 'r' for a regexp search)."""
# call word search method in some cases:
- if f == 'authorcount':
+ if f.endswith('count'):
return search_unit_in_bibwords(p, f, wl=wl)
set = intbitset() # will hold output result set
set_used = 0 # not-yet-used flag, to be able to circumvent set operations
limit_reached = 0 # flag for knowing if the query limit has been reached
use_query_limit = False # flag for knowing if to limit the query results or not
# deduce in which idxPHRASE table we will search:
idxphraseX = "idxPHRASE%02dF" % get_index_id_from_field("anyfield")
if f:
index_id = get_index_id_from_field(f)
if index_id:
idxphraseX = "idxPHRASE%02dF" % index_id
else:
return intbitset() # phrase index f does not exist
# detect query type (exact phrase, partial phrase, regexp):
if type == 'r':
query_addons = "REGEXP %s"
query_params = (p,)
use_query_limit = True
else:
p = string.replace(p, '*', '%') # we now use '*' as the truncation character
ps = string.split(p, "->", 1) # check for span query:
if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')):
query_addons = "BETWEEN %s AND %s"
query_params = (ps[0], ps[1])
use_query_limit = True
else:
if string.find(p, '%') > -1:
query_addons = "LIKE %s"
query_params = (p,)
use_query_limit = True
else:
query_addons = "= %s"
query_params = (p,)
# special washing for fuzzy author index:
if f in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor', 'authorityauthor'):
query_params_washed = ()
for query_param in query_params:
query_params_washed += (wash_author_name(query_param),)
query_params = query_params_washed
# perform search:
if use_query_limit:
try:
res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term %s" % (idxphraseX, query_addons),
query_params, wildcard_limit=wl)
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
res = run_sql("SELECT term,hitlist FROM %s WHERE term %s" % (idxphraseX, query_addons), query_params)
# fill the result set:
for word, hitlist in res:
hitset_bibphrase = intbitset(hitlist)
# add the results:
if set_used:
set.union_update(hitset_bibphrase)
else:
set = hitset_bibphrase
set_used = 1
#check to see if the query limit was reached
if limit_reached:
#raise an exception, so we can print a nice message to the user
raise InvenioWebSearchWildcardLimitError(set)
# okay, return result set:
return set
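The query-type detection above can be sketched on its own: `*` becomes the SQL wildcard `%`, a span `a->b` (with no blanks around the arrow) becomes a BETWEEN range, and a plain term becomes an equality match. This mirrors the branches above but returns the addon/params pair instead of running SQL.

```python
def detect_query(p):
    """Classify a search pattern into (SQL addon, parameter tuple)."""
    p = p.replace('*', '%')  # '*' is the user-facing truncation character
    ps = p.split('->', 1)    # check for a span query
    if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')):
        return ('BETWEEN %s AND %s', (ps[0], ps[1]))
    if '%' in p:
        return ('LIKE %s', (p,))
    return ('= %s', (p,))
```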
def search_unit_in_bibxxx(p, f, type, wl=0):
"""Searches for pattern 'p' inside bibxxx tables for field 'f' and returns hitset of recIDs found.
The search type is defined by 'type' (e.g. equals to 'r' for a regexp search)."""
# call word search method in some cases:
if f == 'journal' or f.endswith('count'):
return search_unit_in_bibwords(p, f, wl=wl)
p_orig = p # saving for eventual future 'no match' reporting
limit_reached = 0 # flag for knowing if the query limit has been reached
use_query_limit = False # flag for knowing if to limit the query results or not
query_addons = "" # will hold additional SQL code for the query
query_params = () # will hold parameters for the query (their number may vary depending on TYPE argument)
# wash arguments:
f = string.replace(f, '*', '%') # replace truncation char '*' in field definition
if type == 'r':
query_addons = "REGEXP %s"
query_params = (p,)
use_query_limit = True
else:
p = string.replace(p, '*', '%') # we now use '*' as the truncation character
ps = string.split(p, "->", 1) # check for span query:
if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')):
query_addons = "BETWEEN %s AND %s"
query_params = (ps[0], ps[1])
use_query_limit = True
else:
if string.find(p, '%') > -1:
query_addons = "LIKE %s"
query_params = (p,)
use_query_limit = True
else:
query_addons = "= %s"
query_params = (p,)
# construct 'tl' which defines the tag list (MARC tags) to search in:
tl = []
if len(f) >= 2 and str(f[0]).isdigit() and str(f[1]).isdigit():
tl.append(f) # 'f' seems to be okay as it starts with two digits
else:
# deduce desired MARC tags on the basis of chosen 'f'
tl = get_field_tags(f)
if not tl:
# f index does not exist, nevermind
pass
# okay, start search:
l = [] # will hold list of recID that matched
for t in tl:
# deduce into which bibxxx table we will search:
digit1, digit2 = int(t[0]), int(t[1])
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
# construct and run query:
if t == "001":
if query_addons.find('BETWEEN') > -1 or query_addons.find('=') > -1:
# verify that the params are integers (to avoid returning record 123 when searching for 123foo)
try:
query_params = tuple(int(param) for param in query_params)
except ValueError:
return intbitset()
if use_query_limit:
try:
res = run_sql_with_limit("SELECT id FROM bibrec WHERE id %s" % query_addons,
query_params, wildcard_limit=wl)
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
res = run_sql("SELECT id FROM bibrec WHERE id %s" % query_addons,
query_params)
else:
query = "SELECT bibx.id_bibrec FROM %s AS bx LEFT JOIN %s AS bibx ON bx.id=bibx.id_bibxxx WHERE bx.value %s" % \
(bx, bibx, query_addons)
if len(t) != 6 or t[-1:]=='%':
# wildcard query, or only the beginning of field 't'
# is defined, so add wildcard character:
query += " AND bx.tag LIKE %s"
query_params_and_tag = query_params + (t + '%',)
else:
# exact query for 't':
query += " AND bx.tag=%s"
query_params_and_tag = query_params + (t,)
if use_query_limit:
try:
res = run_sql_with_limit(query, query_params_and_tag, wildcard_limit=wl)
except InvenioDbQueryWildcardLimitError, excp:
res = excp.res
limit_reached = 1 # set the limit reached flag to true
else:
res = run_sql(query, query_params_and_tag)
# fill the result set:
for id_bibrec in res:
if id_bibrec[0]:
l.append(id_bibrec[0])
# check no of hits found:
nb_hits = len(l)
# okay, return result set:
set = intbitset(l)
#check to see if the query limit was reached
if limit_reached:
#raise an exception, so we can print a nice message to the user
raise InvenioWebSearchWildcardLimitError(set)
return set
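The tag-list construction above can be sketched in isolation: a field that already starts with two digits is taken as a literal MARC tag, otherwise the field name is mapped to its configured tags. The mapping below is a hypothetical stand-in for `get_field_tags()`.

```python
# assumed sample mapping, standing in for the real field->tag configuration:
FIELD_TAGS = {'author': ['100__a', '700__a']}

def tag_list(f):
    """Return the list of MARC tags to search for field 'f'."""
    if len(f) >= 2 and f[0].isdigit() and f[1].isdigit():
        return [f]  # 'f' is already a (possibly wildcarded) MARC tag
    return FIELD_TAGS.get(f, [])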
def search_unit_in_solr(p, f=None, m=None):
"""
Query a Solr index and return an intbitset corresponding
to the result. Parameters (p,f,m) are usual search unit ones.
"""
if m and (m == 'a' or m == 'r'): # phrase/regexp query
if p.startswith('%') and p.endswith('%'):
p = p[1:-1] # fix for partial phrase
p = '"' + p + '"'
return solr_get_bitset(f, p)
def search_unit_in_xapian(p, f=None, m=None):
"""
Query a Xapian index and return an intbitset corresponding
to the result. Parameters (p,f,m) are usual search unit ones.
"""
if m and (m == 'a' or m == 'r'): # phrase/regexp query
if p.startswith('%') and p.endswith('%'):
p = p[1:-1] # fix for partial phrase
p = '"' + p + '"'
return xapian_get_bitset(f, p)
def search_unit_in_bibrec(datetext1, datetext2, type='c'):
"""
Return hitset of recIDs found that were either created or modified
(according to 'type' arg being 'c' or 'm') from datetext1 until datetext2, inclusive.
Does not pay attention to pattern, collection, anything. Useful
to intersect later on with the 'real' query.
"""
set = intbitset()
if type and type.startswith("m"):
type = "modification_date"
else:
type = "creation_date" # by default we are searching for creation dates
parts = datetext1.split('->')
if len(parts) > 1 and datetext1 == datetext2:
datetext1 = parts[0]
datetext2 = parts[1]
if datetext1 == datetext2:
res = run_sql("SELECT id FROM bibrec WHERE %s LIKE %%s" % (type,),
(datetext1 + '%',))
else:
res = run_sql("SELECT id FROM bibrec WHERE %s>=%%s AND %s<=%%s" % (type, type),
(datetext1, datetext2))
for row in res:
set += row[0]
return set
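The date handling above reduces to a small decision: a single `d1->d2` value is split into a range, equal endpoints become a prefix match, and distinct endpoints bound an inclusive interval. A sketch returning the matching strategy instead of running SQL:

```python
def date_condition(datetext1, datetext2):
    """Return ('LIKE', prefix) or ('BETWEEN', (lo, hi)) for a date query."""
    parts = datetext1.split('->')
    if len(parts) > 1 and datetext1 == datetext2:
        datetext1, datetext2 = parts[0], parts[1]
    if datetext1 == datetext2:
        return ('LIKE', datetext1 + '%')  # prefix match on one date
    return ('BETWEEN', (datetext1, datetext2))
```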
def search_unit_by_times_cited(p):
"""
Return hitset of recIDs found that are cited P times.
Usually P looks like '10->23'.
"""
numstr = '"'+p+'"'
# Since we may also need the records that have _no_ cites at all,
# we must know the IDs of all records too; this is only needed when
# p is 0, "0", or a range touching zero ("0->N" or "N->0").
allrecs = []
if p == 0 or p == "0" or \
p.startswith("0->") or p.endswith("->0"):
allrecs = intbitset(run_sql("SELECT id FROM bibrec"))
return get_records_with_num_cites(numstr, allrecs)
def search_unit_refersto(query):
"""
Search for records satisfying the query (e.g. author:ellis) and
return list of records referred to by these records.
"""
if query:
ahitset = search_pattern(p=query)
if ahitset:
return get_refersto_hitset(ahitset)
else:
return intbitset([])
else:
return intbitset([])
def search_unit_citedby(query):
"""
Search for records satisfying the query (e.g. author:ellis) and
return list of records cited by these records.
"""
if query:
ahitset = search_pattern(p=query)
if ahitset:
return get_citedby_hitset(ahitset)
else:
return intbitset([])
else:
return intbitset([])
def search_unit_collection(query, m, wl=None):
"""
Search for records satisfying the query (e.g. collection:"BOOK" or
collection:"Books") and return list of records in the collection.
"""
if len(query):
ahitset = get_collection_reclist(query)
if not ahitset:
return search_unit_in_bibwords(query, 'collection', m, wl=wl)
return ahitset
else:
return intbitset([])
def get_records_that_can_be_displayed(user_info,
hitset_in_any_collection,
current_coll=CFG_SITE_NAME,
colls=None):
"""
Return records that can be displayed.
"""
records_that_can_be_displayed = intbitset()
if colls is None:
colls = [current_coll]
# let's get the restricted collections the user has rights to view
permitted_restricted_collections = user_info.get('precached_permitted_restricted_collections', [])
policy = CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY.strip().upper()
current_coll_children = get_collection_allchildren(current_coll) # real & virtual
# add all restricted collections, that the user has access to, and are under the current collection
# do not use set here, in order to maintain a specific order:
# children of 'cc' (real, virtual, restricted), rest of 'c' that are not cc's children
colls_to_be_displayed = [coll for coll in current_coll_children if coll in colls or coll in permitted_restricted_collections]
colls_to_be_displayed.extend([coll for coll in colls if coll not in colls_to_be_displayed])
if policy == 'ANY':# the user needs to have access to at least one collection that restricts the records
#we need this to be able to remove records that are both in a public and restricted collection
permitted_recids = intbitset()
notpermitted_recids = intbitset()
for collection in restricted_collection_cache.cache:
if collection in permitted_restricted_collections:
permitted_recids |= get_collection_reclist(collection)
else:
notpermitted_recids |= get_collection_reclist(collection)
records_that_can_be_displayed = hitset_in_any_collection - (notpermitted_recids - permitted_recids)
else: # the user needs to have access to all collections that restrict the records
notpermitted_recids = intbitset()
for collection in restricted_collection_cache.cache:
if collection not in permitted_restricted_collections:
notpermitted_recids |= get_collection_reclist(collection)
records_that_can_be_displayed = hitset_in_any_collection - notpermitted_recids
if records_that_can_be_displayed.is_infinite():
# We should not return an infinite result set to the user.
records_that_can_be_displayed = intbitset()
for coll in colls_to_be_displayed:
records_that_can_be_displayed |= get_collection_reclist(coll)
return records_that_can_be_displayed
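The two restriction policies above are plain set algebra; a sketch with ordinary sets standing in for intbitset (an assumption). Under ANY, one permitted restricted collection is enough to see a record; under ALL, every collection restricting the record must be permitted.

```python
def displayable(hits, coll_recids, permitted, policy):
    """Filter 'hits' by the ANY/ALL restricted-collection policy."""
    permitted_recids = set()
    notpermitted_recids = set()
    for coll, recids in coll_recids.items():
        if coll in permitted:
            permitted_recids |= recids
        else:
            notpermitted_recids |= recids
    if policy == 'ANY':
        # a record blocked by one collection may be rescued by another
        return hits - (notpermitted_recids - permitted_recids)
    # ALL: any non-permitted restricting collection blocks the record
    return hits - notpermitted_recids
```

With collections A={1,2} (permitted) and B={2,3}, record 2 is visible under ANY but hidden under ALL.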
def intersect_results_with_collrecs(req, hitset_in_any_collection, colls, ap=0, of="hb", verbose=0, ln=CFG_SITE_LANG, display_nearest_terms_box=True):
"""Return dict of hitsets given by intersection of hitset with the collection universes."""
_ = gettext_set_language(ln)
# search stage 4: intersect with the collection universe
if verbose and of.startswith("h"):
t1 = os.times()[4]
results = {} # all final results
results_nbhits = 0
# calculate the list of recids (restricted or not) that the user has rights to access and we should display (only those)
if not req or isinstance(req, cStringIO.OutputType): # called from CLI
user_info = {}
for coll in colls:
results[coll] = hitset_in_any_collection & get_collection_reclist(coll)
results_nbhits += len(results[coll])
records_that_can_be_displayed = hitset_in_any_collection
permitted_restricted_collections = []
else:
user_info = collect_user_info(req)
# let's get the restricted collections the user has rights to view
if user_info['guest'] == '1':
permitted_restricted_collections = []
## For guest users that are actually authorized to some restricted
## collection (by virtue of the IP address in a FireRole rule)
## we explicitly build the list of permitted_restricted_collections
for coll in colls:
if collection_restricted_p(coll) and (acc_authorize_action(user_info, 'viewrestrcoll', collection=coll)[0] == 0):
permitted_restricted_collections.append(coll)
else:
permitted_restricted_collections = user_info.get('precached_permitted_restricted_collections', [])
# let's build the list of both the public and restricted
# child collections of the collection from which the user
# started his/her search. This list of child colls will be
# used in the warning proposing a search in those collections
try:
current_coll = req.argd['cc'] # current_coll: coll from which user started his/her search
except:
from flask import request
current_coll = request.args.get('cc', CFG_SITE_NAME) # current_coll: coll from which user started his/her search
current_coll_children = get_collection_allchildren(current_coll) # real & virtual
# add all restricted collections, that the user has access to, and are under the current collection
# do not use set here, in order to maintain a specific order:
# children of 'cc' (real, virtual, restricted), rest of 'c' that are not cc's children
colls_to_be_displayed = [coll for coll in current_coll_children if coll in colls or coll in permitted_restricted_collections]
colls_to_be_displayed.extend([coll for coll in colls if coll not in colls_to_be_displayed])
records_that_can_be_displayed = get_records_that_can_be_displayed(
user_info,
hitset_in_any_collection,
current_coll, colls)
for coll in colls_to_be_displayed:
results[coll] = results.get(coll, intbitset()).union_update(records_that_can_be_displayed & get_collection_reclist(coll))
results_nbhits += len(results[coll])
if results_nbhits == 0:
# no hits found, try to search in Home and restricted and/or hidden collections:
results = {}
results_in_Home = records_that_can_be_displayed & get_collection_reclist(CFG_SITE_NAME)
results_in_restricted_collections = intbitset()
results_in_hidden_collections = intbitset()
for coll in permitted_restricted_collections:
if not get_coll_ancestors(coll): # hidden collection
results_in_hidden_collections.union_update(records_that_can_be_displayed & get_collection_reclist(coll))
else:
results_in_restricted_collections.union_update(records_that_can_be_displayed & get_collection_reclist(coll))
# in this way, we do not count twice, records that are both in Home collection and in a restricted collection
total_results = len(results_in_Home.union(results_in_restricted_collections))
if total_results > 0:
# some hits found in Home and/or restricted collections, so propose this search:
if of.startswith("h") and display_nearest_terms_box:
url = websearch_templates.build_search_url(req.argd, cc=CFG_SITE_NAME, c=[])
len_colls_to_display = len(colls_to_be_displayed)
# trim the list of collections to first two, since it might get very large
write_warning(_("No match found in collection %(x_collection)s. Other collections gave %(x_url_open)s%(x_nb_hits)d hits%(x_url_close)s.") %\
{'x_collection': '<em>' + \
string.join([get_coll_i18nname(coll, ln, False) for coll in colls_to_be_displayed[:2]], ', ') + \
(len_colls_to_display > 2 and ' et al' or '') + '</em>',
'x_url_open': '<a class="nearestterms" href="%s">' % (url),
'x_nb_hits': total_results,
'x_url_close': '</a>'}, req=req)
# display the whole list of collections in a comment
if len_colls_to_display > 2:
write_warning("<!--No match found in collection <em>%(x_collection)s</em>.-->" %\
{'x_collection': string.join([get_coll_i18nname(coll, ln, False) for coll in colls_to_be_displayed], ', ')},
req=req)
else:
# no hits found; either the user is looking for a document
# he/she has no rights to see, or for a hidden document:
if of.startswith("h") and display_nearest_terms_box:
if len(results_in_hidden_collections) > 0:
write_warning(_("No public collection matched your query. "
"If you were looking for a hidden document, please type "
"the correct URL for this record."), req=req)
else:
write_warning(_("No public collection matched your query. "
"If you were looking for a non-public document, please choose "
"the desired restricted collection first."), req=req)
if verbose and of.startswith("h"):
t2 = os.times()[4]
write_warning("Search stage 4: intersecting with collection universe gave %d hits." % results_nbhits, req=req)
write_warning("Search stage 4: execution took %.2f seconds." % (t2 - t1), req=req)
return results
def intersect_results_with_hitset(req, results, hitset, ap=0, aptext="", of="hb"):
"""Return intersection of search 'results' (a dict of hitsets
with collection as key) with the 'hitset', i.e. apply
'hitset' intersection to each collection within search
'results'.
If the final set is empty and 'ap' (approximate
pattern) is true, then print 'aptext' and return the
original 'results' set unchanged. If 'ap' is false,
then return an empty results set.
"""
if ap:
results_ap = copy.deepcopy(results)
else:
results_ap = {} # will return empty dict in case of no hits found
nb_total = 0
final_results = {}
for coll in results.keys():
final_results[coll] = results[coll].intersection(hitset)
nb_total += len(final_results[coll])
if nb_total == 0:
if of.startswith("h"):
write_warning(aptext, req=req)
final_results = results_ap
return final_results
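A compact sketch of the intersection-with-fallback logic above, with plain dicts of sets: intersect every collection's hits with the hitset, and if nothing survives while 'ap' is set, fall back to the untouched results (the original deep-copies; a shallow copy suffices here since the sets are rebuilt).

```python
def intersect_with_hitset(results, hitset, ap):
    """Intersect per-collection hitsets; on empty outcome honour 'ap'."""
    final = {coll: hits & hitset for coll, hits in results.items()}
    if sum(len(hits) for hits in final.values()) == 0:
        return dict(results) if ap else {}
    return final
```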
def create_similarly_named_authors_link_box(author_name, ln=CFG_SITE_LANG):
"""Return a box similar to ``Not satisfied...'' one by proposing
author searches for similar names. Namely, take AUTHOR_NAME
and the first initial of the first name (after the comma) and look
into author index whether authors with e.g. middle names exist.
Useful mainly for CERN Library that sometimes contains name
forms like Ellis-N, Ellis-Nick, Ellis-Nicolas all denoting the
same person. The box isn't proposed if no similarly named
authors are found to exist.
"""
# return nothing if not configured:
if CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX == 0:
return ""
# return empty box if there is no initial:
if re.match(r'[^ ,]+, [^ ]', author_name) is None:
return ""
# firstly find name comma initial:
author_name_to_search = re.sub(r'^([^ ,]+, +[^ ,]).*$', '\\1', author_name)
# secondly search for similar name forms:
similar_author_names = {}
for name in author_name_to_search, strip_accents(author_name_to_search):
for tag in get_field_tags("author"):
# deduce into which bibxxx table we will search:
digit1, digit2 = int(tag[0]), int(tag[1])
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
if len(tag) != 6 or tag[-1:]=='%':
# only the beginning of field 't' is defined, so add wildcard character:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value LIKE %%s AND bx.tag LIKE %%s""" % bx,
(name + "%", tag + "%"))
else:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value LIKE %%s AND bx.tag=%%s""" % bx,
(name + "%", tag))
for row in res:
similar_author_names[row[0]] = 1
# remove the original name and sort the list:
try:
del similar_author_names[author_name]
except KeyError:
pass
# thirdly print the box:
out = ""
if similar_author_names:
out_authors = similar_author_names.keys()
out_authors.sort()
tmp_authors = []
for out_author in out_authors:
nbhits = get_nbhits_in_bibxxx(out_author, "author")
if nbhits:
tmp_authors.append((out_author, nbhits))
out += websearch_templates.tmpl_similar_author_names(
authors=tmp_authors, ln=ln)
return out
def create_nearest_terms_box(urlargd, p, f, t='w', n=5, ln=CFG_SITE_LANG, intro_text_p=True):
"""Return text box containing list of 'n' nearest terms above/below 'p'
for the field 'f' for matching type 't' (words/phrases) in
language 'ln'.
Propose new searches according to `urlargd' with the new words.
If `intro_text_p' is true, then display the introductory message,
otherwise print only the nearest terms in the box content.
"""
# load the right message language
_ = gettext_set_language(ln)
if not CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS:
return _("Your search did not match any records. Please try again.")
nearest_terms = []
if not p: # sanity check
p = "."
if p.startswith('%') and p.endswith('%'):
p = p[1:-1] # fix for partial phrase
index_id = get_index_id_from_field(f)
if f == 'fulltext':
if CFG_SOLR_URL:
return _("No match found, please enter different search terms.")
else:
# FIXME: workaround for not having native phrase index yet
t = 'w'
# special indexes:
if f == 'refersto':
return _("There are no records referring to %s.") % cgi.escape(p)
if f == 'citedby':
return _("There are no records cited by %s.") % cgi.escape(p)
# look for nearest terms:
if t == 'w':
nearest_terms = get_nearest_terms_in_bibwords(p, f, n, n)
if not nearest_terms:
return _("No word index is available for %s.") % \
('<em>' + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + '</em>')
else:
nearest_terms = []
if index_id:
nearest_terms = get_nearest_terms_in_idxphrase(p, index_id, n, n)
if f == 'datecreated' or f == 'datemodified':
nearest_terms = get_nearest_terms_in_bibrec(p, f, n, n)
if not nearest_terms:
nearest_terms = get_nearest_terms_in_bibxxx(p, f, n, n)
if not nearest_terms:
return _("No phrase index is available for %s.") % \
('<em>' + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + '</em>')
terminfo = []
for term in nearest_terms:
if t == 'w':
hits = get_nbhits_in_bibwords(term, f)
else:
if index_id:
hits = get_nbhits_in_idxphrases(term, f)
elif f == 'datecreated' or f == 'datemodified':
hits = get_nbhits_in_bibrec(term, f)
else:
hits = get_nbhits_in_bibxxx(term, f)
argd = {}
argd.update(urlargd)
# check which fields contained the requested parameter, and replace it.
for (px, fx) in ('p', 'f'), ('p1', 'f1'), ('p2', 'f2'), ('p3', 'f3'):
if px in argd:
argd_px = argd[px]
if t == 'w':
# p was stripped of accents, so do the same:
argd_px = strip_accents(argd_px)
#argd[px] = string.replace(argd_px, p, term, 1)
#we need something similar, but case insensitive
pattern_index = string.find(argd_px.lower(), p.lower())
if pattern_index > -1:
argd[px] = argd_px[:pattern_index] + term + argd_px[pattern_index+len(p):]
break
#this is doing exactly the same as:
#argd[px] = re.sub('(?i)' + re.escape(p), term, argd_px, 1)
#but is ~4x faster (2us vs. 8.25us)
terminfo.append((term, hits, argd))
intro = ""
if intro_text_p: # add full leading introductory text
if f:
intro = _("Search term %(x_term)s inside index %(x_index)s did not match any record. Nearest terms in any collection are:") % \
{'x_term': "<em>" + cgi.escape(p.startswith("%") and p.endswith("%") and p[1:-1] or p) + "</em>",
'x_index': "<em>" + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + "</em>"}
else:
intro = _("Search term %s did not match any record. Nearest terms in any collection are:") % \
("<em>" + cgi.escape(p.startswith("%") and p.endswith("%") and p[1:-1] or p) + "</em>")
return websearch_templates.tmpl_nearest_term_box(p=p, ln=ln, f=f, terminfo=terminfo,
intro=intro)
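The case-insensitive replacement trick commented above can be sketched on its own: locate the pattern via lower-cased `find()` and splice, which behaves like `re.sub('(?i)' + re.escape(p), term, s, 1)` but avoids compiling a regexp.

```python
def replace_first_ci(s, p, term):
    """Replace the first case-insensitive occurrence of p in s with term."""
    i = s.lower().find(p.lower())
    if i > -1:
        return s[:i] + term + s[i + len(p):]
    return s  # pattern not found: leave the string untouched
```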
def get_nearest_terms_in_bibwords(p, f, n_below, n_above):
"""Return list of +n -n nearest terms to word `p' in index for field `f'."""
nearest_words = [] # will hold the (sorted) list of nearest words to return
# deduce into which bibwordsX table we will search:
bibwordsX = "idxWORD%02dF" % get_index_id_from_field("anyfield")
if f:
index_id = get_index_id_from_field(f)
if index_id:
bibwordsX = "idxWORD%02dF" % index_id
else:
return nearest_words
# firstly try to get `n' closest words above `p':
res = run_sql("SELECT term FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % bibwordsX,
(p, n_above))
for row in res:
nearest_words.append(row[0])
nearest_words.reverse()
# secondly insert given word `p':
nearest_words.append(p)
# finally try to get `n' closest words below `p':
res = run_sql("SELECT term FROM %s WHERE term>%%s ORDER BY term ASC LIMIT %%s" % bibwordsX,
(p, n_below))
for row in res:
nearest_words.append(row[0])
return nearest_words
def get_nearest_terms_in_idxphrase(p, index_id, n_below, n_above):
"""Browse (-n_above, +n_below) closest bibliographic phrases
for the given pattern p in the given field idxPHRASE table,
regardless of collection.
Return list of [phrase1, phrase2, ... , phrase_n]."""
if CFG_INSPIRE_SITE and index_id in (3, 15): # FIXME: workaround due to new fuzzy index
return [p,]
idxphraseX = "idxPHRASE%02dF" % index_id
res_above = run_sql("SELECT term FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % idxphraseX, (p, n_above))
res_above = map(lambda x: x[0], res_above)
res_above.reverse()
res_below = run_sql("SELECT term FROM %s WHERE term>=%%s ORDER BY term ASC LIMIT %%s" % idxphraseX, (p, n_below))
res_below = map(lambda x: x[0], res_below)
return res_above + res_below
def get_nearest_terms_in_idxphrase_with_collection(p, index_id, n_below, n_above, collection):
"""Browse (-n_above, +n_below) closest bibliographic phrases
for the given pattern p in the given field idxPHRASE table,
considering the collection (intbitset).
Return list of [(phrase1, hitset), (phrase2, hitset), ... , (phrase_n, hitset)]."""
idxphraseX = "idxPHRASE%02dF" % index_id
res_above = run_sql("SELECT term,hitlist FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % idxphraseX, (p, n_above * 3))
res_above = [(term, intbitset(hitlist) & collection) for term, hitlist in res_above]
res_above = [(term, len(hitlist)) for term, hitlist in res_above if hitlist]
res_below = run_sql("SELECT term,hitlist FROM %s WHERE term>=%%s ORDER BY term ASC LIMIT %%s" % idxphraseX, (p, n_below * 3))
res_below = [(term, intbitset(hitlist) & collection) for term, hitlist in res_below]
res_below = [(term, len(hitlist)) for term, hitlist in res_below if hitlist]
res_above.reverse()
return res_above[-n_above:] + res_below[:n_below]
def get_nearest_terms_in_bibxxx(p, f, n_below, n_above):
"""Browse (-n_above, +n_below) closest bibliographic phrases
for the given pattern p in the given field f, regardless
of collection.
Return list of [phrase1, phrase2, ... , phrase_n]."""
## determine browse field:
if not f and string.find(p, ":") > 0: # does 'p' contain ':'?
f, p = string.split(p, ":", 1)
# FIXME: quick hack for the journal index
if f == 'journal':
return get_nearest_terms_in_bibwords(p, f, n_below, n_above)
## We are going to take max(n_below, n_above) as the number of
## values to fetch from bibXXx. This is needed to work around
## MySQL UTF-8 sorting troubles in 4.0.x. Proper solution is to
## use MySQL 4.1.x or our own idxPHRASE in the future.
index_id = get_index_id_from_field(f)
if index_id:
return get_nearest_terms_in_idxphrase(p, index_id, n_below, n_above)
n_fetch = 2*max(n_below, n_above)
## construct 'tl' which defines the tag list (MARC tags) to search in:
tl = []
if str(f[0]).isdigit() and str(f[1]).isdigit():
tl.append(f) # 'f' seems to be okay as it starts with two digits
else:
# deduce desired MARC tags on the basis of chosen 'f'
tl = get_field_tags(f)
## start browsing to fetch list of hits:
browsed_phrases = {} # will hold {phrase1: 1, phrase2: 1, ..., phraseN: 1} dict of browsed phrases (to make them unique)
# always add self to the results set:
browsed_phrases[p.startswith("%") and p.endswith("%") and p[1:-1] or p] = 1
for t in tl:
# deduce into which bibxxx table we will search:
digit1, digit2 = int(t[0]), int(t[1])
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
# firstly try to get `n' closest phrases above `p':
if len(t) != 6 or t[-1:]=='%': # only the beginning of field 't' is defined, so add wildcard character:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value<%%s AND bx.tag LIKE %%s
ORDER BY bx.value DESC LIMIT %%s""" % bx,
(p, t + "%", n_fetch))
else:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value<%%s AND bx.tag=%%s
ORDER BY bx.value DESC LIMIT %%s""" % bx,
(p, t, n_fetch))
for row in res:
browsed_phrases[row[0]] = 1
# secondly try to get `n' closest phrases equal to or below `p':
if len(t) != 6 or t[-1:]=='%': # only the beginning of field 't' is defined, so add wildcard character:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value>=%%s AND bx.tag LIKE %%s
ORDER BY bx.value ASC LIMIT %%s""" % bx,
(p, t + "%", n_fetch))
else:
res = run_sql("""SELECT bx.value FROM %s AS bx
WHERE bx.value>=%%s AND bx.tag=%%s
ORDER BY bx.value ASC LIMIT %%s""" % bx,
(p, t, n_fetch))
for row in res:
browsed_phrases[row[0]] = 1
# select first n words only: (this is needed as we were searching
# in many different tables and so aren't sure we have more than n
# words right; this of course won't be needed when we shall have
# one ACC table only for given field):
phrases_out = browsed_phrases.keys()
phrases_out.sort(lambda x, y: cmp(string.lower(strip_accents(x)),
string.lower(strip_accents(y))))
# find position of self:
try:
idx_p = phrases_out.index(p)
except:
idx_p = len(phrases_out)/2
# return n_above and n_below:
return phrases_out[max(0, idx_p-n_above):idx_p+n_below]
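The final windowing step above can be sketched alone: sort the browsed phrases, find the position of the query term (falling back to the middle when it is absent), and return up to n_above entries before it plus n_below entries from it onward.

```python
def window_around(phrases, p, n_below, n_above):
    """Return a window of at most n_above + n_below phrases around p."""
    phrases = sorted(phrases)
    try:
        idx = phrases.index(p)
    except ValueError:
        idx = len(phrases) // 2  # p absent: centre the window
    return phrases[max(0, idx - n_above):idx + n_below]
```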
def get_nearest_terms_in_bibrec(p, f, n_below, n_above):
"""Return list of nearest terms and counts from bibrec table.
p is usually a date, and f either datecreated or datemodified.
Note: the below/above counts are only approximate and not strictly respected.
"""
col = 'creation_date'
if f == 'datemodified':
col = 'modification_date'
res_above = run_sql("""SELECT DATE_FORMAT(%s,'%%%%Y-%%%%m-%%%%d %%%%H:%%%%i:%%%%s')
FROM bibrec WHERE %s < %%s
ORDER BY %s DESC LIMIT %%s""" % (col, col, col),
(p, n_above))
res_below = run_sql("""SELECT DATE_FORMAT(%s,'%%%%Y-%%%%m-%%%%d %%%%H:%%%%i:%%%%s')
FROM bibrec WHERE %s > %%s
ORDER BY %s ASC LIMIT %%s""" % (col, col, col),
(p, n_below))
out = set([])
for row in res_above:
out.add(row[0])
for row in res_below:
out.add(row[0])
out_list = list(out)
out_list.sort()
return list(out_list)
def get_nbhits_in_bibrec(term, f):
"""Return number of hits in bibrec table. term is usually a date,
and f is either 'datecreated' or 'datemodified'."""
col = 'creation_date'
if f == 'datemodified':
col = 'modification_date'
res = run_sql("SELECT COUNT(*) FROM bibrec WHERE %s LIKE %%s" % (col,),
(term + '%',))
return res[0][0]
def get_nbhits_in_bibwords(word, f):
"""Return number of hits for word 'word' inside words index for field 'f'."""
out = 0
# deduce into which bibwordsX table we will search:
bibwordsX = "idxWORD%02dF" % get_index_id_from_field("anyfield")
if f:
index_id = get_index_id_from_field(f)
if index_id:
bibwordsX = "idxWORD%02dF" % index_id
else:
return 0
if word:
res = run_sql("SELECT hitlist FROM %s WHERE term=%%s" % bibwordsX,
(word,))
for hitlist in res:
out += len(intbitset(hitlist[0]))
return out
def get_nbhits_in_idxphrases(word, f):
"""Return number of hits for word 'word' inside phrase index for field 'f'."""
out = 0
# deduce into which bibwordsX table we will search:
idxphraseX = "idxPHRASE%02dF" % get_index_id_from_field("anyfield")
if f:
index_id = get_index_id_from_field(f)
if index_id:
idxphraseX = "idxPHRASE%02dF" % index_id
else:
return 0
if word:
res = run_sql("SELECT hitlist FROM %s WHERE term=%%s" % idxphraseX,
(word,))
for hitlist in res:
out += len(intbitset(hitlist[0]))
return out
def get_nbhits_in_bibxxx(p, f, in_hitset=None):
"""Return number of hits for phrase 'p' inside bibxxx tables for field 'f'."""
## determine browse field:
if not f and string.find(p, ":") > 0: # does 'p' contain ':'?
f, p = string.split(p, ":", 1)
# FIXME: quick hack for the journal index
if f == 'journal':
return get_nbhits_in_bibwords(p, f)
## construct 'tl' which defines the tag list (MARC tags) to search in:
tl = []
if str(f[0]).isdigit() and str(f[1]).isdigit():
tl.append(f) # 'f' seems to be okay as it starts with two digits
else:
# deduce desired MARC tags on the basis of chosen 'f'
tl = get_field_tags(f)
# start searching:
recIDs = {} # will hold dict of {recID1: 1, recID2: 1, ..., } (unique recIDs, therefore)
for t in tl:
# deduce into which bibxxx table we will search:
digit1, digit2 = int(t[0]), int(t[1])
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
if len(t) != 6 or t[-1:]=='%': # only the beginning of field 't' is defined, so add wildcard character:
res = run_sql("""SELECT bibx.id_bibrec FROM %s AS bibx, %s AS bx
WHERE bx.value=%%s AND bx.tag LIKE %%s
AND bibx.id_bibxxx=bx.id""" % (bibx, bx),
(p, t + "%"))
else:
res = run_sql("""SELECT bibx.id_bibrec FROM %s AS bibx, %s AS bx
WHERE bx.value=%%s AND bx.tag=%%s
AND bibx.id_bibxxx=bx.id""" % (bibx, bx),
(p, t))
for row in res:
recIDs[row[0]] = 1
if in_hitset is None:
nbhits = len(recIDs)
else:
nbhits = len(intbitset(recIDs.keys()).intersection(in_hitset))
return nbhits
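# Illustrative sketch (not called by the production code above): the wildcard
# rule in get_nbhits_in_bibxxx() treats a tag as a prefix whenever it is
# shorter than the full six characters or already ends in '%'.
def _demo_tag_needs_wildcard(t):
    """Return True if MARC tag 't' should be matched with a trailing wildcard."""
    return len(t) != 6 or t[-1:] == '%'
# e.g. '100__' and '100__%' are prefix searches, '100__a' is an exact match.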
def get_mysql_recid_from_aleph_sysno(sysno):
"""Returns DB's recID for ALEPH sysno passed in the argument (e.g. "002379334CER").
Returns None in case of failure."""
out = None
res = run_sql("""SELECT bb.id_bibrec FROM bibrec_bib97x AS bb, bib97x AS b
WHERE b.value=%s AND b.tag='970__a' AND bb.id_bibxxx=b.id""",
(sysno,))
if res:
out = res[0][0]
return out
def guess_primary_collection_of_a_record(recID):
"""Return primary collection name a record recid belongs to, by
testing 980 identifier.
May lead to bad guesses when a collection is defined dynamically
via dbquery.
In that case, return 'CFG_SITE_NAME'."""
out = CFG_SITE_NAME
dbcollids = get_fieldvalues(recID, "980__a")
for dbcollid in dbcollids:
variants = ("collection:" + dbcollid,
'collection:"' + dbcollid + '"',
"980__a:" + dbcollid,
'980__a:"' + dbcollid + '"',
'980:' + dbcollid ,
'980:"' + dbcollid + '"')
res = run_sql("SELECT name FROM collection WHERE dbquery IN (%s,%s,%s,%s,%s,%s)", variants)
if res:
out = res[0][0]
break
if CFG_CERN_SITE:
recID = int(recID)
# dirty hack for ATLAS collections at CERN:
if out in ('ATLAS Communications', 'ATLAS Internal Notes'):
for alternative_collection in ('ATLAS Communications Physics',
'ATLAS Communications General',
'ATLAS Internal Notes Physics',
'ATLAS Internal Notes General',):
if recID in get_collection_reclist(alternative_collection):
return alternative_collection
# dirty hack for FP
FP_collections = {'DO': ['Current Price Enquiries', 'Archived Price Enquiries'],
'IT': ['Current Invitation for Tenders', 'Archived Invitation for Tenders'],
'MS': ['Current Market Surveys', 'Archived Market Surveys']}
fp_coll_ids = [coll for coll in dbcollids if coll in FP_collections]
for coll in fp_coll_ids:
for coll_name in FP_collections[coll]:
if recID in get_collection_reclist(coll_name):
return coll_name
return out
_re_collection_url = re.compile('/collection/(.+)')
def guess_collection_of_a_record(recID, referer=None, recreate_cache_if_needed=True):
"""Return collection name a record recid belongs to, by first testing
the referer URL if provided and otherwise returning the
primary collection."""
if referer:
dummy, hostname, path, dummy, query, dummy = urlparse.urlparse(referer)
#requests can come from different invenio installations, with different collections
if CFG_SITE_URL.find(hostname) < 0:
return guess_primary_collection_of_a_record(recID)
g = _re_collection_url.match(path)
if g:
name = urllib.unquote_plus(g.group(1))
# check if this collection actually exists (and normalize the name in case of case-insensitive matching)
name = get_coll_normalised_name(name)
if name and recID in get_collection_reclist(name):
return name
elif path.startswith('/search'):
if recreate_cache_if_needed:
collection_reclist_cache.recreate_cache_if_needed()
query = cgi.parse_qs(query)
for name in query.get('cc', []) + query.get('c', []):
name = get_coll_normalised_name(name)
if name and recID in get_collection_reclist(name, recreate_cache_if_needed=False):
return name
return guess_primary_collection_of_a_record(recID)
def is_record_in_any_collection(recID, recreate_cache_if_needed=True):
"""Return True if the record belongs to at least one collection. This is a
good, although not perfect, indicator to guess if webcoll has already run
after this record has been entered into the system.
"""
if recreate_cache_if_needed:
collection_reclist_cache.recreate_cache_if_needed()
for name in collection_reclist_cache.cache.keys():
if recID in get_collection_reclist(name, recreate_cache_if_needed=False):
return True
return False
def get_all_collections_of_a_record(recID, recreate_cache_if_needed=True):
"""Return all the collection names a record belongs to.
Note this function is O(n_collections)."""
ret = []
if recreate_cache_if_needed:
collection_reclist_cache.recreate_cache_if_needed()
for name in collection_reclist_cache.cache.keys():
if recID in get_collection_reclist(name, recreate_cache_if_needed=False):
ret.append(name)
return ret
def get_tag_name(tag_value, prolog="", epilog=""):
"""Return tag name from the known tag value, by looking up the 'tag' table.
Return empty string in case of failure.
Example: input='100__%', output='first author'."""
out = ""
res = run_sql("SELECT name FROM tag WHERE value=%s", (tag_value,))
if res:
out = prolog + res[0][0] + epilog
return out
def get_fieldcodes():
"""Returns a list of field codes that may have been passed as 'search options' in URL.
Example: output=['subject','division']."""
out = []
res = run_sql("SELECT DISTINCT(code) FROM field")
for row in res:
out.append(row[0])
return out
def get_field_name(code):
"""Return the corresponding field_name given the field code.
e.g. reportnumber -> report number."""
res = run_sql("SELECT name FROM field WHERE code=%s", (code, ))
if res:
return res[0][0]
else:
return ""
def get_field_tags(field):
"""Returns a list of MARC tags for the field code 'field'.
Returns empty list in case of error.
Example: field='author', output=['100__%','700__%']."""
out = []
query = """SELECT t.value FROM tag AS t, field_tag AS ft, field AS f
WHERE f.code=%s AND ft.id_field=f.id AND t.id=ft.id_tag
ORDER BY ft.score DESC"""
res = run_sql(query, (field, ))
for val in res:
out.append(val[0])
return out
def get_merged_recid(recID):
""" Return the record ID of the record with
which the given record has been merged.
@param recID: deleted record recID
@type recID: int
@return: merged record recID
@rtype: int or None
"""
merged_recid = None
for val in get_fieldvalues(recID, "970__d"):
try:
merged_recid = int(val)
break
except ValueError:
pass
return merged_recid
def record_exists(recID):
"""Return 1 if record RECID exists.
Return 0 if it doesn't exist.
Return -1 if it exists but is marked as deleted.
"""
out = 0
res = run_sql("SELECT id FROM bibrec WHERE id=%s", (recID,), 1)
if res:
try: # if recid is '123foo', mysql will return id=123, and we don't want that
recID = int(recID)
except ValueError:
return 0
# record exists; now check whether it isn't marked as deleted:
dbcollids = get_fieldvalues(recID, "980__%")
if ("DELETED" in dbcollids) or (CFG_CERN_SITE and "DUMMY" in dbcollids):
out = -1 # exists, but marked as deleted
else:
out = 1 # exists fine
return out
def record_empty(recID):
"""
Is this record empty, e.g. has only 001, waiting for integration?
@param recID: the record identifier.
@type recID: int
@return: 1 if the record is empty, 0 otherwise.
@rtype: int
"""
record = get_record(recID)
if record is None or len(record) < 2:
return 1
else:
return 0
def record_public_p(recID, recreate_cache_if_needed=True):
"""Return 1 if the record is public, i.e. if it can be found in the Home collection.
Return 0 otherwise.
"""
return recID in get_collection_reclist(CFG_SITE_NAME, recreate_cache_if_needed=recreate_cache_if_needed)
def get_creation_date(recID, fmt="%Y-%m-%d"):
"Returns the creation date of the record 'recID'."
out = ""
res = run_sql("SELECT DATE_FORMAT(creation_date,%s) FROM bibrec WHERE id=%s", (fmt, recID), 1)
if res:
out = res[0][0]
return out
def get_modification_date(recID, fmt="%Y-%m-%d"):
"Returns the date of last modification for the record 'recID'."
out = ""
res = run_sql("SELECT DATE_FORMAT(modification_date,%s) FROM bibrec WHERE id=%s", (fmt, recID), 1)
if res:
out = res[0][0]
return out
def print_search_info(p, f, sf, so, sp, rm, of, ot, collection=CFG_SITE_NAME, nb_found=-1, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS,
aas=0, ln=CFG_SITE_LANG, p1="", p2="", p3="", f1="", f2="", f3="", m1="", m2="", m3="", op1="", op2="",
sc=1, pl_in_url="",
d1y=0, d1m=0, d1d=0, d2y=0, d2m=0, d2d=0, dt="",
cpu_time=-1, middle_only=0, em=""):
"""Prints stripe with the information on 'collection' and 'nb_found' results and CPU time.
Also, prints navigation links (beg/next/prev/end) inside the results set.
If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links.
This is suitable for displaying navigation links at the bottom of the search results page."""
if em != '' and EM_REPOSITORY["search_info"] not in em:
return ""
# sanity check:
if jrec < 1:
jrec = 1
if jrec > nb_found:
jrec = max(nb_found-rg+1, 1)
return websearch_templates.tmpl_print_search_info(
ln = ln,
collection = collection,
aas = aas,
collection_name = get_coll_i18nname(collection, ln, False),
collection_id = get_colID(collection),
middle_only = middle_only,
rg = rg,
nb_found = nb_found,
sf = sf,
so = so,
rm = rm,
of = of,
ot = ot,
p = p,
f = f,
p1 = p1,
p2 = p2,
p3 = p3,
f1 = f1,
f2 = f2,
f3 = f3,
m1 = m1,
m2 = m2,
m3 = m3,
op1 = op1,
op2 = op2,
pl_in_url = pl_in_url,
d1y = d1y,
d1m = d1m,
d1d = d1d,
d2y = d2y,
d2m = d2m,
d2d = d2d,
dt = dt,
jrec = jrec,
sc = sc,
sp = sp,
all_fieldcodes = get_fieldcodes(),
cpu_time = cpu_time,
)
def print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, collection=CFG_SITE_NAME, nb_found=-1, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS,
aas=0, ln=CFG_SITE_LANG, p1="", p2="", p3="", f1="", f2="", f3="", m1="", m2="", m3="", op1="", op2="",
sc=1, pl_in_url="",
d1y=0, d1m=0, d1d=0, d2y=0, d2m=0, d2d=0, dt="",
cpu_time=-1, middle_only=0, em=""):
"""Prints stripe with the information on 'collection' and 'nb_found' results and CPU time.
Also, prints navigation links (beg/next/prev/end) inside the results set.
If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links.
This is suitable for displaying navigation links at the bottom of the search results page."""
if em != '' and EM_REPOSITORY["search_info"] not in em:
return ""
# sanity check:
if jrec < 1:
jrec = 1
if jrec > nb_found:
jrec = max(nb_found-rg+1, 1)
return websearch_templates.tmpl_print_hosted_search_info(
ln = ln,
collection = collection,
aas = aas,
collection_name = get_coll_i18nname(collection, ln, False),
collection_id = get_colID(collection),
middle_only = middle_only,
rg = rg,
nb_found = nb_found,
sf = sf,
so = so,
rm = rm,
of = of,
ot = ot,
p = p,
f = f,
p1 = p1,
p2 = p2,
p3 = p3,
f1 = f1,
f2 = f2,
f3 = f3,
m1 = m1,
m2 = m2,
m3 = m3,
op1 = op1,
op2 = op2,
pl_in_url = pl_in_url,
d1y = d1y,
d1m = d1m,
d1d = d1d,
d2y = d2y,
d2m = d2m,
d2d = d2d,
dt = dt,
jrec = jrec,
sc = sc,
sp = sp,
all_fieldcodes = get_fieldcodes(),
cpu_time = cpu_time,
)
def print_results_overview(colls, results_final_nb_total, results_final_nb, cpu_time, ln=CFG_SITE_LANG, ec=[], hosted_colls_potential_results_p=False, em=""):
"""Prints results overview box with links to particular collections below."""
if em != "" and EM_REPOSITORY["overview"] not in em:
return ""
new_colls = []
for coll in colls:
new_colls.append({
'id': get_colID(coll),
'code': coll,
'name': get_coll_i18nname(coll, ln, False),
})
return websearch_templates.tmpl_print_results_overview(
ln = ln,
results_final_nb_total = results_final_nb_total,
results_final_nb = results_final_nb,
cpu_time = cpu_time,
colls = new_colls,
ec = ec,
hosted_colls_potential_results_p = hosted_colls_potential_results_p,
)
def print_hosted_results(url_and_engine, ln=CFG_SITE_LANG, of=None, req=None, no_records_found=False, search_timed_out=False, limit=CFG_EXTERNAL_COLLECTION_MAXRESULTS, em = ""):
"""Prints the full results of a hosted collection"""
if of.startswith("h"):
if no_records_found:
return "<br />No results found."
if search_timed_out:
return "<br />The search engine did not respond in time."
return websearch_templates.tmpl_print_hosted_results(
url_and_engine=url_and_engine,
ln=ln,
of=of,
req=req,
limit=limit,
display_body = em == "" or EM_REPOSITORY["body"] in em,
display_add_to_basket = em == "" or EM_REPOSITORY["basket"] in em)
class BibSortDataCacher(DataCacher):
"""
Cache holding all structures created by bibsort
(_data, data_dict).
"""
def __init__(self, method_name):
self.method_name = method_name
self.method_id = 0
try:
res = run_sql("""SELECT id from bsrMETHOD where name = %s""", (self.method_name,))
except Exception:
res = None # database problem; fall back to method_id = 0 below
if res and res[0]:
self.method_id = res[0][0]
else:
self.method_id = 0
def cache_filler():
method_id = self.method_id
alldicts = {}
if self.method_id == 0:
return {}
try:
res_data = run_sql("""SELECT data_dict_ordered from bsrMETHODDATA \
where id_bsrMETHOD = %s""", (method_id,))
res_buckets = run_sql("""SELECT bucket_no, bucket_data from bsrMETHODDATABUCKET\
where id_bsrMETHOD = %s""", (method_id,))
except Exception:
# database problems, return empty cache
return {}
try:
data_dict_ordered = deserialize_via_marshal(res_data[0][0])
except Exception:
data_dict_ordered = {}
alldicts['data_dict_ordered'] = data_dict_ordered # recid: weight
if not res_buckets:
alldicts['bucket_data'] = {}
return alldicts
for row in res_buckets:
bucket_no = row[0]
try:
bucket_data = intbitset(row[1])
except Exception:
bucket_data = intbitset([])
alldicts.setdefault('bucket_data', {})[bucket_no] = bucket_data
return alldicts
def timestamp_verifier():
method_id = self.method_id
res = run_sql("""SELECT last_updated from bsrMETHODDATA where id_bsrMETHOD = %s""", (method_id,))
try:
update_time_methoddata = str(res[0][0])
except IndexError:
update_time_methoddata = '1970-01-01 00:00:00'
res = run_sql("""SELECT max(last_updated) from bsrMETHODDATABUCKET where id_bsrMETHOD = %s""", (method_id,))
try:
update_time_buckets = str(res[0][0])
except IndexError:
update_time_buckets = '1970-01-01 00:00:00'
return max(update_time_methoddata, update_time_buckets)
DataCacher.__init__(self, cache_filler, timestamp_verifier)
def get_sorting_methods():
if not CFG_BIBSORT_BUCKETS: # we do not want to use buckets
return {}
try: # make sure the method has some data
res = run_sql("""SELECT m.name, m.definition FROM bsrMETHOD m, bsrMETHODDATA md WHERE m.id = md.id_bsrMETHOD""")
except Exception:
return {}
return dict(res)
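# Illustrative sketch, inferred from how the definitions are consumed by
# sort_records()/rank_records() below (strings such as 'FIELD:author',
# 'MARC:100__a,700__a' or 'RNK:citation'); this helper is hypothetical and
# not used by this module.
def _demo_parse_sort_definition(definition):
    """Split a bsrMETHOD definition 'KIND:val1,val2' into ('KIND', ['val1', 'val2'])."""
    kind, dummy, rest = definition.partition(':')
    return kind, [v.strip().lower() for v in rest.split(',') if v.strip()]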
sorting_methods = get_sorting_methods()
cache_sorted_data = {}
for sorting_method in sorting_methods:
try:
cache_sorted_data[sorting_method].is_ok_p
except Exception:
cache_sorted_data[sorting_method] = BibSortDataCacher(sorting_method)
def get_tags_from_sort_fields(sort_fields):
"""Given a list of sort_fields, return the tags associated with it and
also the name of the field that has no tags associated, to be able to
display a message to the user."""
tags = []
if not sort_fields:
return [], ''
for sort_field in sort_fields:
if sort_field and str(sort_field[0:2]).isdigit():
# sort_field starts with two digits, so this is probably a MARC tag already
tags.append(sort_field)
else:
# let us check the 'field' table
field_tags = get_field_tags(sort_field)
if field_tags:
tags.extend(field_tags)
else:
return [], sort_field
return tags, ''
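# Illustrative sketch of the logic above with the database lookup replaced by
# a hypothetical in-memory mapping (the real code consults the 'field' and
# 'tag' tables via get_field_tags()); not used by the production code.
_DEMO_FIELD_TAGS = {'author': ['100__a', '700__a']}
def _demo_tags_from_sort_fields(sort_fields):
    """Map sort fields to MARC tags; report the first unknown field."""
    tags = []
    for sort_field in sort_fields:
        if sort_field and sort_field[0:2].isdigit():
            tags.append(sort_field)  # already looks like a MARC tag
        else:
            field_tags = _DEMO_FIELD_TAGS.get(sort_field, [])
            if field_tags:
                tags.extend(field_tags)
            else:
                return [], sort_field  # unknown field, let the caller warn
    return tags, ''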
def rank_records(req, rank_method_code, rank_limit_relevance, hitset_global, pattern=None, verbose=0, sort_order='d', of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None, field=''):
"""Initial entry point for ranking records, acts like a dispatcher.
(i) rank_method_code is in bsrMETHOD, bibsort buckets can be used;
(ii)rank_method_code is not in bsrMETHOD, use bibrank;
"""
if CFG_BIBSORT_BUCKETS and sorting_methods:
for sort_method in sorting_methods:
definition = sorting_methods[sort_method]
if definition.startswith('RNK') and \
definition.replace('RNK:','').strip().lower() == string.lower(rank_method_code):
(solution_recs, solution_scores) = sort_records_bibsort(req, hitset_global, sort_method, '', sort_order, verbose, of, ln, rg, jrec, 'r')
#return (solution_recs, solution_scores, '', '', '')
comment = ''
if verbose > 0:
comment = 'find_citations retlist %s' % [[solution_recs[i], solution_scores[i]] for i in range(len(solution_recs))]
return (solution_recs, solution_scores, '(', ')', comment)
return rank_records_bibrank(rank_method_code, rank_limit_relevance, hitset_global, pattern, verbose, field, rg, jrec)
def sort_records(req, recIDs, sort_field='', sort_order='d', sort_pattern='', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None):
"""Initial entry point for sorting records, acts like a dispatcher.
(i) sort_field is in the bsrMETHOD, and thus, the BibSort has sorted the data for this field, so we can use the cache;
(ii)sort_field is not in bsrMETHOD, and thus, the cache does not contain any information regarding this sorting method"""
_ = gettext_set_language(ln)
#we should return sorted records up to irec_max(exclusive)
dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
#calculate the min index on the reverted list
index_min = max(len(recIDs) - irec_max, 0) #just to be sure that the min index is not negative
#bibsort does not handle sort_pattern for now, use bibxxx
if sort_pattern:
return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, sort_pattern, verbose, of, ln, rg, jrec)
use_sorting_buckets = True
if not CFG_BIBSORT_BUCKETS or not sorting_methods: #ignore the use of buckets, use old-fashioned sorting
use_sorting_buckets = False
if not sort_field:
if use_sorting_buckets:
return sort_records_bibsort(req, recIDs, 'latest first', sort_field, sort_order, verbose, of, ln, rg, jrec)
else:
return recIDs[index_min:]
sort_fields = string.split(sort_field, ",")
if len(sort_fields) == 1:
# we have only one sorting_field, check if it is treated by BibSort
for sort_method in sorting_methods:
definition = sorting_methods[sort_method]
if use_sorting_buckets and \
((definition.startswith('FIELD') and \
definition.replace('FIELD:','').strip().lower() == string.lower(sort_fields[0])) or \
sort_method == sort_fields[0]):
#use BibSort
return sort_records_bibsort(req, recIDs, sort_method, sort_field, sort_order, verbose, of, ln, rg, jrec)
#deduce sorting MARC tag out of the 'sort_field' argument:
tags, error_field = get_tags_from_sort_fields(sort_fields)
if error_field:
if use_sorting_buckets:
return sort_records_bibsort(req, recIDs, 'latest first', sort_field, sort_order, verbose, of, ln, rg, jrec)
else:
if of.startswith('h'):
write_warning(_("Sorry, %s does not seem to be a valid sort option. The records will not be sorted.") % cgi.escape(error_field), "Error", req=req)
return recIDs[index_min:]
if tags:
for sort_method in sorting_methods:
definition = sorting_methods[sort_method]
if definition.startswith('MARC') \
and definition.replace('MARC:','').strip().split(',') == tags \
and use_sorting_buckets:
#this list of tags has a designated method in BibSort, so use it
return sort_records_bibsort(req, recIDs, sort_method, sort_field, sort_order, verbose, of, ln, rg, jrec)
#we do not have this sort_field in the BibSort tables -> do the old-fashioned sorting
return sort_records_bibxxx(req, recIDs, tags, sort_field, sort_order, sort_pattern, verbose, of, ln, rg, jrec)
return recIDs[index_min:]
def sort_records_bibsort(req, recIDs, sort_method, sort_field='', sort_order='d', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None, sort_or_rank = 's'):
"""This function orders the recIDs list, based on a sorting method(sort_field) using the BibSortDataCacher for speed"""
_ = gettext_set_language(ln)
#sanity check
if sort_method not in sorting_methods:
if sort_or_rank == 'r':
return rank_records_bibrank(sort_method, 0, recIDs, None, verbose)
else:
return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, '', verbose, of, ln, rg, jrec)
if verbose >= 3 and of.startswith('h'):
write_warning("Sorting (using BibSort cache) by method %s (definition %s)." \
% (cgi.escape(repr(sort_method)), cgi.escape(repr(sorting_methods[sort_method]))), req=req)
#we should return sorted records up to irec_max(exclusive)
dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
solution = intbitset([])
input_recids = intbitset(recIDs)
cache_sorted_data[sort_method].recreate_cache_if_needed()
sort_cache = cache_sorted_data[sort_method].cache
bucket_numbers = sort_cache['bucket_data'].keys()
#check if all buckets have been constructed
if len(bucket_numbers) != CFG_BIBSORT_BUCKETS:
if verbose > 3 and of.startswith('h'):
write_warning("Not all buckets have been constructed.. switching to old fashion sorting.", req=req)
if sort_or_rank == 'r':
return rank_records_bibrank(sort_method, 0, recIDs, None, verbose)
else:
return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, '', verbose, of, ln, rg, jrec)
if sort_order == 'd':
bucket_numbers.reverse()
for bucket_no in bucket_numbers:
solution.union_update(input_recids & sort_cache['bucket_data'][bucket_no])
if len(solution) >= irec_max:
break
dict_solution = {}
missing_records = []
for recid in solution:
try:
dict_solution[recid] = sort_cache['data_dict_ordered'][recid]
except KeyError:
#recid is in buckets, but not in the bsrMETHODDATA,
#maybe because the value has been deleted, but the change has not yet been propagated to the buckets
missing_records.append(recid)
#check if there are recids that are not in any bucket -> to be added at the end/top, ordered by insertion date
if len(solution) < irec_max:
#some records have not been yet inserted in the bibsort structures
#or, some records have no value for the sort_method
missing_records = sorted(missing_records + list(input_recids.difference(solution)))
#the records need to be sorted in reverse order for the print record function
#the return statement should be equivalent with the following statements
#(these are clearer, but less efficient, since they revert the same list twice)
#sorted_solution = (missing_records + sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='d'))[:irec_max]
#sorted_solution.reverse()
#return sorted_solution
if sort_method.strip().lower().startswith('latest') and sort_order == 'd':
# if we want to sort the records on their insertion date, add the missing records at the top
solution = sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='a') + missing_records
else:
solution = missing_records + sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='a')
#calculate the min index on the reverted list
index_min = max(len(solution) - irec_max, 0) #just to be sure that the min index is not negative
#return all the records up to irec_max, but on the reverted list
if sort_or_rank == 'r':
# we need the recids, with values
return (solution[index_min:], [dict_solution.get(record, 0) for record in solution[index_min:]])
else:
return solution[index_min:]
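# Illustrative sketch of the bucket-based ordering above, using plain sets
# instead of intbitset and ignoring the reverse-for-printing convention
# (assumption: 'buckets' maps bucket_no -> set of recids and 'weights' maps
# recid -> sort weight, as in data_dict_ordered); not used by this module.
def _demo_bucket_sort(recids, buckets, weights, descending=True):
    """Order 'recids' by weight; recids absent from all buckets go last."""
    wanted = set(recids)
    in_buckets = set()
    for bucket in buckets.values():
        in_buckets |= wanted & bucket
    missing = sorted(wanted - in_buckets)  # no weight known for these yet
    ranked = sorted(in_buckets, key=lambda r: weights.get(r, 0),
                    reverse=descending)
    return ranked + missing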
def sort_records_bibxxx(req, recIDs, tags, sort_field='', sort_order='d', sort_pattern='', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None):
"""OLD FASHION SORTING WITH NO CACHE, for sort fields that are not run in BibSort
Sort records in 'recIDs' list according sort field 'sort_field' in order 'sort_order'.
If more than one instance of 'sort_field' is found for a given record, try to choose that that is given by
'sort pattern', for example "sort by report number that starts by CERN-PS".
Note that 'sort_field' can be field code like 'author' or MARC tag like '100__a' directly."""
_ = gettext_set_language(ln)
#we should return sorted records up to irec_max(exclusive)
dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
#calculate the min index on the reverted list
index_min = max(len(recIDs) - irec_max, 0) #just to be sure that the min index is not negative
## check arguments:
if not sort_field:
return recIDs[index_min:]
if len(recIDs) > CFG_WEBSEARCH_NB_RECORDS_TO_SORT:
if of.startswith('h'):
write_warning(_("Sorry, sorting is allowed on sets of up to %d records only. Using default sort order.") % CFG_WEBSEARCH_NB_RECORDS_TO_SORT, "Warning", req=req)
return recIDs[index_min:]
recIDs_dict = {}
recIDs_out = []
if not tags:
# tags have not been computed yet
sort_fields = string.split(sort_field, ",")
tags, error_field = get_tags_from_sort_fields(sort_fields)
if error_field:
if of.startswith('h'):
write_warning(_("Sorry, %s does not seem to be a valid sort option. The records will not be sorted.") % cgi.escape(error_field), "Error", req=req)
return recIDs[index_min:]
if verbose >= 3 and of.startswith('h'):
write_warning("Sorting by tags %s." % cgi.escape(repr(tags)), req=req)
if sort_pattern:
write_warning("Sorting preferentially by %s." % cgi.escape(sort_pattern), req=req)
## check if we have sorting tag defined:
if tags:
# fetch the necessary field values:
for recID in recIDs:
val = "" # will hold value for recID according to which sort
vals = [] # will hold all values found in sorting tag for recID
for tag in tags:
if CFG_CERN_SITE and tag == '773__c':
# CERN hack: journal sorting
# 773__c contains page numbers, e.g. 3-13, and we want to sort by 3, and numerically:
vals.extend(["%050s" % x.split("-", 1)[0] for x in get_fieldvalues(recID, tag)])
else:
vals.extend(get_fieldvalues(recID, tag))
if sort_pattern:
# try to pick that tag value that corresponds to sort pattern
bingo = 0
for v in vals:
if v.lower().startswith(sort_pattern.lower()): # bingo!
bingo = 1
val = v
break
if not bingo: # sort_pattern not present, so add other vals after spaces
val = sort_pattern + " " + string.join(vals)
else:
# no sort pattern defined, so join them all together
val = string.join(vals)
val = strip_accents(val.lower()) # sort values regardless of accents and case
if recIDs_dict.has_key(val):
recIDs_dict[val].append(recID)
else:
recIDs_dict[val] = [recID]
# sort them:
recIDs_dict_keys = recIDs_dict.keys()
recIDs_dict_keys.sort()
# now that keys are sorted, create output array:
for k in recIDs_dict_keys:
for s in recIDs_dict[k]:
recIDs_out.append(s)
# ascending or descending?
if sort_order == 'a':
recIDs_out.reverse()
# okay, we are done
# return only up to the maximum that we need to sort
if len(recIDs_out) != len(recIDs):
dummy, irec_max = get_interval_for_records_to_sort(len(recIDs_out), jrec, rg)
index_min = max(len(recIDs_out) - irec_max, 0) #just to be sure that the min index is not negative
return recIDs_out[index_min:]
else:
# good, no sort needed
return recIDs[index_min:]
def get_interval_for_records_to_sort(nb_found, jrec=None, rg=None):
"""calculates in which interval should the sorted records be
a value of 'rg=-9999' means to print all records: to be used with care."""
if not jrec:
jrec = 1
if not rg:
#return all
return jrec-1, nb_found
if rg == -9999: # print all records
rg = nb_found
else:
rg = abs(rg)
if jrec < 1: # sanity checks
jrec = 1
if jrec > nb_found:
jrec = max(nb_found-rg+1, 1)
# will sort records from irec_min to irec_max excluded
irec_min = jrec - 1
irec_max = irec_min + rg
if irec_min < 0:
irec_min = 0
if irec_max > nb_found:
irec_max = nb_found
return irec_min, irec_max
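# Worked example of the interval arithmetic above, restated as a standalone
# mirror so the boundary cases are easy to check (illustrative only; not
# used by the production code):
def _demo_sort_interval(nb_found, jrec=None, rg=None):
    if not jrec:
        jrec = 1
    if not rg:
        return jrec - 1, nb_found  # no range given: take everything
    rg = nb_found if rg == -9999 else abs(rg)
    if jrec < 1:
        jrec = 1
    if jrec > nb_found:
        jrec = max(nb_found - rg + 1, 1)
    irec_min = jrec - 1
    irec_max = min(irec_min + rg, nb_found)
    return irec_min, irec_max
# e.g. nb_found=100, jrec=11, rg=10 gives (10, 20): records 11..20 (1-based).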
def print_records(req, recIDs, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, format='hb', ot='', ln=CFG_SITE_LANG,
relevances=[], relevances_prologue="(", relevances_epilogue="%%)",
decompress=zlib.decompress, search_pattern='', print_records_prologue_p=True,
print_records_epilogue_p=True, verbose=0, tab='', sf='', so='d', sp='',
rm='', em=''):
"""
Prints list of records 'recIDs' formatted according to 'format' in
groups of 'rg' starting from 'jrec'.
Assumes that the input list 'recIDs' is sorted in reverse order,
so it counts records from tail to head.
A value of 'rg=-9999' means to print all records: to be used with care.
Print also list of RELEVANCES for each record (if defined), in
between RELEVANCE_PROLOGUE and RELEVANCE_EPILOGUE.
Print prologue and/or epilogue specific to 'format' if
'print_records_prologue_p' and/or 'print_records_epilogue_p' are
True.
'sf' is sort field and 'rm' is ranking method that are passed here
only for proper linking purposes: e.g. when a certain ranking
method or a certain sort field was selected, keep it selected in
any dynamic search links that may be printed.
"""
if em != "" and EM_REPOSITORY["body"] not in em:
return
# load the right message language
_ = gettext_set_language(ln)
# sanity checking:
if req is None:
return
# get user_info (for formatting based on user)
if isinstance(req, cStringIO.OutputType):
user_info = {}
else:
user_info = collect_user_info(req)
if len(recIDs):
nb_found = len(recIDs)
if rg == -9999: # print all records
rg = nb_found
else:
rg = abs(rg)
if jrec < 1: # sanity checks
jrec = 1
if jrec > nb_found:
jrec = max(nb_found-rg+1, 1)
# will print records from irec_max to irec_min excluded:
irec_max = nb_found - jrec
irec_min = nb_found - jrec - rg
if irec_min < 0:
irec_min = -1
if irec_max >= nb_found:
irec_max = nb_found - 1
#req.write("%s:%d-%d" % (recIDs, irec_min, irec_max))
if format.startswith('x'):
# print header if needed
if print_records_prologue_p:
print_records_prologue(req, format)
# print records
recIDs_to_print = [recIDs[x] for x in range(irec_max, irec_min, -1)]
if ot:
# asked to print some filtered fields only, so call print_record() on the fly:
for irec in range(irec_max, irec_min, -1):
x = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm)
req.write(x)
if x:
req.write('\n')
else:
format_records(recIDs_to_print,
format,
ln=ln,
search_pattern=search_pattern,
record_separator="\n",
user_info=user_info,
req=req)
# print footer if needed
if print_records_epilogue_p:
print_records_epilogue(req, format)
elif format.startswith('t') or str(format[0:3]).isdigit():
# we are doing plain text output:
for irec in range(irec_max, irec_min, -1):
x = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm)
req.write(x)
if x:
req.write('\n')
elif format == 'excel':
recIDs_to_print = [recIDs[x] for x in range(irec_max, irec_min, -1)]
create_excel(recIDs=recIDs_to_print, req=req, ln=ln, ot=ot, user_info=user_info)
else:
# we are doing HTML output:
if format == 'hp' or format.startswith("hb_") or format.startswith("hd_"):
# portfolio and on-the-fly formats:
for irec in range(irec_max, irec_min, -1):
req.write(print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm))
elif format.startswith("hb"):
# HTML brief format:
display_add_to_basket = True
if user_info:
if user_info['email'] == 'guest':
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS > 4:
display_add_to_basket = False
else:
if not user_info['precached_usebaskets']:
display_add_to_basket = False
if em != "" and EM_REPOSITORY["basket"] not in em:
display_add_to_basket = False
req.write(websearch_templates.tmpl_record_format_htmlbrief_header(
ln = ln))
for irec in range(irec_max, irec_min, -1):
row_number = jrec+irec_max-irec
recid = recIDs[irec]
if relevances and relevances[irec]:
relevance = relevances[irec]
else:
relevance = ''
record = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm)
req.write(websearch_templates.tmpl_record_format_htmlbrief_body(
ln = ln,
recid = recid,
row_number = row_number,
relevance = relevance,
record = record,
relevances_prologue = relevances_prologue,
relevances_epilogue = relevances_epilogue,
display_add_to_basket = display_add_to_basket
))
req.write(websearch_templates.tmpl_record_format_htmlbrief_footer(
ln = ln,
display_add_to_basket = display_add_to_basket))
elif format.startswith("hd"):
# HTML detailed format:
for irec in range(irec_max, irec_min, -1):
if record_exists(recIDs[irec]) == -1:
write_warning(_("The record has been deleted."), req=req)
merged_recid = get_merged_recid(recIDs[irec])
if merged_recid:
write_warning(_("The record %d replaces it.") % merged_recid, req=req)
continue
unordered_tabs = get_detailed_page_tabs(get_colID(guess_primary_collection_of_a_record(recIDs[irec])),
recIDs[irec], ln=ln)
ordered_tabs_id = [(tab_id, values['order']) for (tab_id, values) in unordered_tabs.iteritems()]
ordered_tabs_id.sort(lambda x, y: cmp(x[1], y[1]))
link_ln = ''
if ln != CFG_SITE_LANG:
link_ln = '?ln=%s' % ln
recid = recIDs[irec]
recid_to_display = recid # Record ID used to build the URL.
if CFG_WEBSEARCH_USE_ALEPH_SYSNOS:
try:
recid_to_display = get_fieldvalues(recid,
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG)[0]
except IndexError:
# No external sysno is available, keep using
# internal recid.
pass
tabs = [(unordered_tabs[tab_id]['label'], \
'%s/%s/%s/%s%s' % (CFG_SITE_URL, CFG_SITE_RECORD, recid_to_display, tab_id, link_ln), \
tab_id == tab,
unordered_tabs[tab_id]['enabled']) \
for (tab_id, order) in ordered_tabs_id
if unordered_tabs[tab_id]['visible'] == True]
tabs_counts = get_detailed_page_tabs_counts(recid)
citedbynum = tabs_counts['Citations']
references = tabs_counts['References']
discussions = tabs_counts['Discussions']
# load content
if tab == 'usage':
req.write(webstyle_templates.detailed_record_container_top(recIDs[irec],
tabs,
ln,
citationnum=citedbynum,
referencenum=references,
discussionnum=discussions))
r = calculate_reading_similarity_list(recIDs[irec], "downloads")
downloadsimilarity = None
downloadhistory = None
#if r:
# downloadsimilarity = r
if CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS:
downloadhistory = create_download_history_graph_and_box(recIDs[irec], ln)
r = calculate_reading_similarity_list(recIDs[irec], "pageviews")
viewsimilarity = None
if r: viewsimilarity = r
content = websearch_templates.tmpl_detailed_record_statistics(recIDs[irec],
ln,
downloadsimilarity=downloadsimilarity,
downloadhistory=downloadhistory,
viewsimilarity=viewsimilarity)
req.write(content)
req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec],
tabs,
ln))
elif tab == 'citations':
recid = recIDs[irec]
req.write(webstyle_templates.detailed_record_container_top(recid,
tabs,
ln,
citationnum=citedbynum,
referencenum=references,
discussionnum=discussions))
req.write(websearch_templates.tmpl_detailed_record_citations_prologue(recid, ln))
# Citing
citinglist = calculate_cited_by_list(recid)
req.write(websearch_templates.tmpl_detailed_record_citations_citing_list(recid,
ln,
citinglist,
sf=sf,
so=so,
sp=sp,
rm=rm))
# Self-cited
selfcited = get_self_cited_by(recid)
req.write(websearch_templates.tmpl_detailed_record_citations_self_cited(recid,
ln, selfcited=selfcited, citinglist=citinglist))
# Co-cited
s = calculate_co_cited_with_list(recid)
cociting = None
if s:
cociting = s
req.write(websearch_templates.tmpl_detailed_record_citations_co_citing(recid,
ln,
cociting=cociting))
# Citation history, if needed
citationhistory = None
if citinglist:
citationhistory = create_citation_history_graph_and_box(recid, ln)
#debug
if verbose > 3:
write_warning("Citation graph debug: " + \
str(len(citationhistory)), req=req)
req.write(websearch_templates.tmpl_detailed_record_citations_citation_history(recid, ln, citationhistory))
req.write(websearch_templates.tmpl_detailed_record_citations_epilogue(recid, ln))
req.write(webstyle_templates.detailed_record_container_bottom(recid,
tabs,
ln))
elif tab == 'references':
req.write(webstyle_templates.detailed_record_container_top(recIDs[irec],
tabs,
ln,
citationnum=citedbynum,
referencenum=references,
discussionnum=discussions))
req.write(format_record(recIDs[irec], 'HDREF', ln=ln, user_info=user_info, verbose=verbose))
req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec],
tabs,
ln))
elif tab == 'keywords':
- from invenio import bibclassify_webinterface
+ from invenio.bibclassify_webinterface import \
+ record_get_keywords, write_keywords_body, \
+ generate_keywords
+ from invenio.webinterface_handler import wash_urlargd
+ form = req.form
+ argd = wash_urlargd(form, {
+ 'generate': (str, 'no'),
+ 'sort': (str, 'occurrences'),
+ 'type': (str, 'tagcloud'),
+ 'numbering': (str, 'off'),
+ })
recid = recIDs[irec]
- bibclassify_webinterface.main_page(req, recid, tabs, ln, webstyle_templates)
+
+ req.write(webstyle_templates.detailed_record_container_top(recid,
+ tabs, ln, citationnum=citedbynum, referencenum=references))
+
+ if argd['generate'] == 'yes':
+ # The user asked to generate the keywords.
+ keywords = generate_keywords(req, recid, argd)
+ else:
+ # Get the keywords contained in the MARC.
+ keywords = record_get_keywords(recid, argd)
+
+ if argd['sort'] == 'related' and not keywords:
+ req.write('You may want to run BibIndex.')
+
+ # Output the keywords or the generate button.
+ write_keywords_body(keywords, req, recid, argd)
+
+ req.write(webstyle_templates.detailed_record_container_bottom(recid,
+ tabs, ln))
elif tab == 'plots':
req.write(webstyle_templates.detailed_record_container_top(recIDs[irec],
tabs,
ln))
content = websearch_templates.tmpl_record_plots(recID=recIDs[irec],
ln=ln)
req.write(content)
req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec],
tabs,
ln))
else:
# Metadata tab
req.write(webstyle_templates.detailed_record_container_top(recIDs[irec],
tabs,
ln,
show_short_rec_p=False,
citationnum=citedbynum, referencenum=references,
discussionnum=discussions))
creationdate = None
modificationdate = None
if record_exists(recIDs[irec]) == 1:
creationdate = get_creation_date(recIDs[irec])
modificationdate = get_modification_date(recIDs[irec])
content = print_record(recIDs[irec], format, ot, ln,
search_pattern=search_pattern,
user_info=user_info, verbose=verbose,
sf=sf, so=so, sp=sp, rm=rm)
content = websearch_templates.tmpl_detailed_record_metadata(
recID = recIDs[irec],
ln = ln,
format = format,
creationdate = creationdate,
modificationdate = modificationdate,
content = content)
# display of the next-hit/previous-hit/back-to-search links
# on the detailed record pages
content += websearch_templates.tmpl_display_back_to_search(req,
recIDs[irec],
ln)
req.write(content)
req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec],
tabs,
ln,
creationdate=creationdate,
modificationdate=modificationdate,
show_short_rec_p=False))
if len(tabs) > 0:
# Add the mini box at bottom of the page
if CFG_WEBCOMMENT_ALLOW_REVIEWS:
from invenio.webcomment import get_mini_reviews
reviews = get_mini_reviews(recid = recIDs[irec], ln=ln)
else:
reviews = ''
actions = format_record(recIDs[irec], 'HDACT', ln=ln, user_info=user_info, verbose=verbose)
files = format_record(recIDs[irec], 'HDFILE', ln=ln, user_info=user_info, verbose=verbose)
req.write(webstyle_templates.detailed_record_mini_panel(recIDs[irec],
ln,
format,
files=files,
reviews=reviews,
actions=actions))
else:
# Other formats
for irec in range(irec_max, irec_min, -1):
req.write(print_record(recIDs[irec], format, ot, ln,
search_pattern=search_pattern,
user_info=user_info, verbose=verbose,
sf=sf, so=so, sp=sp, rm=rm))
else:
write_warning(_("Use different search terms."), req=req)
def print_records_prologue(req, format, cc=None):
"""
Print the appropriate prologue for list of records in the given
format.
"""
prologue = "" # no prologue needed for HTML or Text formats
if format.startswith('xm'):
prologue = websearch_templates.tmpl_xml_marc_prologue()
elif format.startswith('xn'):
prologue = websearch_templates.tmpl_xml_nlm_prologue()
elif format.startswith('xw'):
prologue = websearch_templates.tmpl_xml_refworks_prologue()
elif format.startswith('xr'):
prologue = websearch_templates.tmpl_xml_rss_prologue(cc=cc)
elif format.startswith('xe8x'):
prologue = websearch_templates.tmpl_xml_endnote_8x_prologue()
elif format.startswith('xe'):
prologue = websearch_templates.tmpl_xml_endnote_prologue()
elif format.startswith('xo'):
prologue = websearch_templates.tmpl_xml_mods_prologue()
elif format.startswith('xp'):
prologue = websearch_templates.tmpl_xml_podcast_prologue(cc=cc)
elif format.startswith('x'):
prologue = websearch_templates.tmpl_xml_default_prologue()
req.write(prologue)
def print_records_epilogue(req, format):
"""
Print the appropriate epilogue for list of records in the given
format.
"""
epilogue = "" # no epilogue needed for HTML or Text formats
if format.startswith('xm'):
epilogue = websearch_templates.tmpl_xml_marc_epilogue()
elif format.startswith('xn'):
epilogue = websearch_templates.tmpl_xml_nlm_epilogue()
elif format.startswith('xw'):
epilogue = websearch_templates.tmpl_xml_refworks_epilogue()
elif format.startswith('xr'):
epilogue = websearch_templates.tmpl_xml_rss_epilogue()
elif format.startswith('xe8x'):
epilogue = websearch_templates.tmpl_xml_endnote_8x_epilogue()
elif format.startswith('xe'):
epilogue = websearch_templates.tmpl_xml_endnote_epilogue()
elif format.startswith('xo'):
epilogue = websearch_templates.tmpl_xml_mods_epilogue()
elif format.startswith('xp'):
epilogue = websearch_templates.tmpl_xml_podcast_epilogue()
elif format.startswith('x'):
epilogue = websearch_templates.tmpl_xml_default_epilogue()
req.write(epilogue)
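The prologue/epilogue functions above dispatch on format prefixes, and the order of the checks matters: the more specific 'xe8x' must be tested before 'xe', and the catch-all 'x' must come last. A minimal standalone sketch of that ordering rule, using placeholder prologue strings and a hypothetical helper name (not part of Invenio):

```python
# Placeholder prologues, for illustration only; the real values come
# from websearch_templates.tmpl_xml_*_prologue().
XML_PROLOGUE_BY_PREFIX = [
    ('xm', '<collection>'),
    ('xe8x', '<xml><records>'),
    ('xe', '<xml>'),
    ('x', '<?xml version="1.0"?>'),
]

def select_prologue(format_code):
    """Return the prologue of the first (most specific) matching prefix."""
    # Sort by descending prefix length so that 'xe8x' wins over 'xe'
    # and the one-character catch-all 'x' is tried last.
    for prefix, prologue in sorted(XML_PROLOGUE_BY_PREFIX,
                                   key=lambda item: -len(item[0])):
        if format_code.startswith(prefix):
            return prologue
    return ''  # HTML/Text formats need no prologue
```

The explicit if/elif chain in the source encodes the same longest-prefix-first order by hand.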
def get_record(recid):
"""Directly return the record object corresponding to the given recid."""
if CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE:
value = run_sql("SELECT value FROM bibfmt WHERE id_bibrec=%s AND FORMAT='recstruct'", (recid, ))
if value:
try:
return deserialize_via_marshal(value[0][0])
except:
### In case of corruption, let's rebuild it!
pass
return create_record(print_record(recid, 'xm'))[0]
def print_record(recID, format='hb', ot='', ln=CFG_SITE_LANG, decompress=zlib.decompress,
search_pattern=None, user_info=None, verbose=0, sf='', so='d',
sp='', rm='', brief_links=True):
"""
Prints record 'recID' formatted according to 'format'.
'sf' is sort field and 'rm' is ranking method that are passed here
only for proper linking purposes: e.g. when a certain ranking
method or a certain sort field was selected, keep it selected in
any dynamic search links that may be printed.
"""
if format == 'recstruct':
return get_record(recID)
_ = gettext_set_language(ln)
display_claim_this_paper = False
try:
display_claim_this_paper = user_info["precached_viewclaimlink"]
except (KeyError, TypeError):
display_claim_this_paper = False
#check from user information if the user has the right to see hidden fields/tags in the
#records as well
can_see_hidden = False
if user_info:
can_see_hidden = user_info.get('precached_canseehiddenmarctags', False)
out = ""
# sanity check:
record_exist_p = record_exists(recID)
if record_exist_p == 0: # doesn't exist
return out
# New Python BibFormat procedure for formatting
# Old procedure follows further below
# We must still check some special formats, but these
# should disappear when BibFormat improves.
if not (CFG_BIBFORMAT_USE_OLD_BIBFORMAT \
or format.lower().startswith('t') \
or format.lower().startswith('hm') \
or str(format[0:3]).isdigit() \
or ot):
# Unspecified format is hd
if format == '':
format = 'hd'
if record_exist_p == -1 and get_output_format_content_type(format) == 'text/html':
# HTML output displays a default value for deleted records.
# Other formats have to deal with it.
out += _("The record has been deleted.")
# was record deleted-but-merged ?
merged_recid = get_merged_recid(recID)
if merged_recid:
out += ' ' + _("The record %d replaces it.") % merged_recid
else:
out += call_bibformat(recID, format, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose)
# at the end of HTML brief mode, print the "Detailed record" functionality:
if brief_links and format.lower().startswith('hb') and \
format.lower() != 'hb_p':
out += websearch_templates.tmpl_print_record_brief_links(ln=ln,
recID=recID,
sf=sf,
so=so,
sp=sp,
rm=rm,
display_claim_link=display_claim_this_paper)
return out
# Old PHP BibFormat procedure for formatting
# print record opening tags, if needed:
if format == "marcxml" or format == "oai_dc":
out += " <record>\n"
out += " <header>\n"
for oai_id in get_fieldvalues(recID, CFG_OAI_ID_FIELD):
out += " <identifier>%s</identifier>\n" % oai_id
out += " <datestamp>%s</datestamp>\n" % get_modification_date(recID)
out += " </header>\n"
out += " <metadata>\n"
if format.startswith("xm") or format == "marcxml":
# look for detailed format existence:
query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s"
res = run_sql(query, (recID, format), 1)
if res and record_exist_p == 1 and not ot:
# record 'recID' is formatted in 'format', and we are not
# asking for field-filtered output; so print it:
out += "%s" % decompress(res[0][0])
elif ot:
# field-filtered output was asked for; print only some fields
if not can_see_hidden:
ot = list(set(ot) - set(CFG_BIBFORMAT_HIDDEN_TAGS))
out += record_xml_output(get_record(recID), ot)
else:
# record 'recID' is not formatted in 'format' or we ask
# for field-filtered output -- they are not in "bibfmt"
# table; so fetch all the data from "bibXXx" tables:
if format == "marcxml":
out += """ <record xmlns="http://www.loc.gov/MARC21/slim">\n"""
out += " <controlfield tag=\"001\">%d</controlfield>\n" % int(recID)
elif format.startswith("xm"):
out += """ <record>\n"""
out += " <controlfield tag=\"001\">%d</controlfield>\n" % int(recID)
if record_exist_p == -1:
# deleted record, so display only OAI ID and 980:
oai_ids = get_fieldvalues(recID, CFG_OAI_ID_FIELD)
if oai_ids:
out += "<datafield tag=\"%s\" ind1=\"%s\" ind2=\"%s\"><subfield code=\"%s\">%s</subfield></datafield>\n" % \
(CFG_OAI_ID_FIELD[0:3], CFG_OAI_ID_FIELD[3:4], CFG_OAI_ID_FIELD[4:5], CFG_OAI_ID_FIELD[5:6], oai_ids[0])
out += "<datafield tag=\"980\" ind1=\"\" ind2=\"\"><subfield code=\"c\">DELETED</subfield></datafield>\n"
else:
# controlfields
query = "SELECT b.tag,b.value,bb.field_number FROM bib00x AS b, bibrec_bib00x AS bb "\
"WHERE bb.id_bibrec=%s AND b.id=bb.id_bibxxx AND b.tag LIKE '00%%' "\
"ORDER BY bb.field_number, b.tag ASC"
res = run_sql(query, (recID, ))
for row in res:
field, value = row[0], row[1]
value = encode_for_xml(value)
out += """ <controlfield tag="%s">%s</controlfield>\n""" % \
(encode_for_xml(field[0:3]), value)
# datafields
i = 1 # Do not process bib00x and bibrec_bib00x, as
# they are controlfields. So start at bib01x and
# bibrec_bib01x (and set i = 0 at the end of
# the first digit1 iteration)
for digit1 in range(0, 10):
for digit2 in range(i, 10):
bx = "bib%d%dx" % (digit1, digit2)
bibx = "bibrec_bib%d%dx" % (digit1, digit2)
query = "SELECT b.tag,b.value,bb.field_number FROM %s AS b, %s AS bb "\
"WHERE bb.id_bibrec=%%s AND b.id=bb.id_bibxxx AND b.tag LIKE %%s "\
"ORDER BY bb.field_number, b.tag ASC" % (bx, bibx)
res = run_sql(query, (recID, str(digit1)+str(digit2)+'%'))
field_number_old = -999
field_old = ""
for row in res:
field, value, field_number = row[0], row[1], row[2]
ind1, ind2 = field[3], field[4]
if ind1 == "_" or ind1 == "":
ind1 = " "
if ind2 == "_" or ind2 == "":
ind2 = " "
# print field tag, unless hidden
printme = True
if not can_see_hidden:
for htag in CFG_BIBFORMAT_HIDDEN_TAGS:
ltag = len(htag)
samelenfield = field[0:ltag]
if samelenfield == htag:
printme = False
if printme:
if field_number != field_number_old or field[:-1] != field_old[:-1]:
if field_number_old != -999:
out += """ </datafield>\n"""
out += """ <datafield tag="%s" ind1="%s" ind2="%s">\n""" % \
(encode_for_xml(field[0:3]), encode_for_xml(ind1), encode_for_xml(ind2))
field_number_old = field_number
field_old = field
# print subfield value
value = encode_for_xml(value)
out += """ <subfield code="%s">%s</subfield>\n""" % \
(encode_for_xml(field[-1:]), value)
# all fields/subfields printed in this run, so close the tag:
if field_number_old != -999:
out += """ </datafield>\n"""
i = 0 # from the second digit1 iteration on, start at digit2 = 0 (i.e. bib10x, bib20x, ...)
# we are at the end of printing the record:
out += " </record>\n"
elif format == "xd" or format == "oai_dc":
# XML Dublin Core format, possibly OAI -- select only some bibXXx fields:
out += """ <dc xmlns="http://purl.org/dc/elements/1.1/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://purl.org/dc/elements/1.1/
http://www.openarchives.org/OAI/1.1/dc.xsd">\n"""
if record_exist_p == -1:
out += ""
else:
for f in get_fieldvalues(recID, "041__a"):
out += " <language>%s</language>\n" % f
for f in get_fieldvalues(recID, "100__a"):
out += " <creator>%s</creator>\n" % encode_for_xml(f)
for f in get_fieldvalues(recID, "700__a"):
out += " <creator>%s</creator>\n" % encode_for_xml(f)
for f in get_fieldvalues(recID, "245__a"):
out += " <title>%s</title>\n" % encode_for_xml(f)
for f in get_fieldvalues(recID, "65017a"):
out += " <subject>%s</subject>\n" % encode_for_xml(f)
for f in get_fieldvalues(recID, "8564_u"):
if f.split('.')[-1] == 'png': # skip images; split() returns a list, so comparing it to 'png' was always False
continue
out += " <identifier>%s</identifier>\n" % encode_for_xml(f)
for f in get_fieldvalues(recID, "520__a"):
out += " <description>%s</description>\n" % encode_for_xml(f)
out += " <date>%s</date>\n" % get_creation_date(recID)
out += " </dc>\n"
elif len(format) == 6 and str(format[0:3]).isdigit():
# user has asked to print some fields only
if format == "001":
out += "<!--%s-begin-->%s<!--%s-end-->\n" % (format, recID, format)
else:
vals = get_fieldvalues(recID, format)
for val in vals:
out += "<!--%s-begin-->%s<!--%s-end-->\n" % (format, val, format)
elif format.startswith('t'):
## user directly asked for some tags to be displayed only
if record_exist_p == -1:
out += get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden)
else:
out += get_fieldvalues_alephseq_like(recID, ot, can_see_hidden)
elif format == "hm":
if record_exist_p == -1:
out += "\n<pre>" + cgi.escape(get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden)) + "</pre>"
else:
out += "\n<pre>" + cgi.escape(get_fieldvalues_alephseq_like(recID, ot, can_see_hidden)) + "</pre>"
elif format.startswith("h") and ot:
## user directly asked for some tags to be displayed only
if record_exist_p == -1:
out += "\n<pre>" + get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden) + "</pre>"
else:
out += "\n<pre>" + get_fieldvalues_alephseq_like(recID, ot, can_see_hidden) + "</pre>"
elif format == "hd":
# HTML detailed format
if record_exist_p == -1:
out += _("The record has been deleted.")
else:
# look for detailed format existence:
query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s"
res = run_sql(query, (recID, format), 1)
if res:
# record 'recID' is formatted in 'format', so print it
out += "%s" % decompress(res[0][0])
else:
# record 'recID' is not formatted in 'format', so try to call BibFormat on the fly or use default format:
out_record_in_format = call_bibformat(recID, format, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose)
if out_record_in_format:
out += out_record_in_format
else:
out += websearch_templates.tmpl_print_record_detailed(
ln = ln,
recID = recID,
)
elif format.startswith("hb_") or format.startswith("hd_"):
# underscore means that HTML brief/detailed formats should be called on-the-fly; suitable for testing formats
if record_exist_p == -1:
out += _("The record has been deleted.")
else:
out += call_bibformat(recID, format, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose)
elif format.startswith("hx"):
# BibTeX format, called on the fly:
if record_exist_p == -1:
out += _("The record has been deleted.")
else:
out += call_bibformat(recID, format, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose)
elif format.startswith("hs"):
# for citation/download similarity navigation links:
if record_exist_p == -1:
out += _("The record has been deleted.")
else:
out += '<a href="%s">' % websearch_templates.build_search_url(recid=recID, ln=ln)
# firstly, title:
titles = get_fieldvalues(recID, "245__a")
if titles:
for title in titles:
out += "<strong>%s</strong>" % title
else:
# usual title not found, try conference title:
titles = get_fieldvalues(recID, "111__a")
if titles:
for title in titles:
out += "<strong>%s</strong>" % title
else:
# just print record ID:
out += "<strong>%s %d</strong>" % (get_field_i18nname("record ID", ln, False), recID)
out += "</a>"
# secondly, authors:
authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a")
if authors:
out += " - %s" % authors[0]
if len(authors) > 1:
out += " <em>et al</em>"
# thirdly publication info:
publinfos = get_fieldvalues(recID, "773__s")
if not publinfos:
publinfos = get_fieldvalues(recID, "909C4s")
if not publinfos:
publinfos = get_fieldvalues(recID, "037__a")
if not publinfos:
publinfos = get_fieldvalues(recID, "088__a")
if publinfos:
out += " - %s" % publinfos[0]
else:
# fourthly publication year (if not publication info):
years = get_fieldvalues(recID, "773__y")
if not years:
years = get_fieldvalues(recID, "909C4y")
if not years:
years = get_fieldvalues(recID, "260__c")
if years:
out += " (%s)" % years[0]
else:
# HTML brief format by default
if record_exist_p == -1:
out += _("The record has been deleted.")
else:
query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s"
res = run_sql(query, (recID, format))
if res:
# record 'recID' is formatted in 'format', so print it
out += "%s" % decompress(res[0][0])
else:
# record 'recID' is not formatted in 'format', so try to call BibFormat on the fly, or use the default format:
if CFG_WEBSEARCH_CALL_BIBFORMAT:
out_record_in_format = call_bibformat(recID, format, ln, search_pattern=search_pattern,
user_info=user_info, verbose=verbose)
if out_record_in_format:
out += out_record_in_format
else:
out += websearch_templates.tmpl_print_record_brief(
ln = ln,
recID = recID,
)
else:
out += websearch_templates.tmpl_print_record_brief(
ln = ln,
recID = recID,
)
# at the end of HTML brief mode, print the "Detailed record" functionality:
if format == 'hp' or format.startswith("hb_") or format.startswith("hd_"):
pass # do nothing for portfolio and on-the-fly formats
else:
out += websearch_templates.tmpl_print_record_brief_links(ln=ln,
recID=recID,
sf=sf,
so=so,
sp=sp,
rm=rm,
display_claim_link=display_claim_this_paper)
# print record closing tags, if needed:
if format == "marcxml" or format == "oai_dc":
out += " </metadata>\n"
out += " </record>\n"
return out
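The MARC indicator handling buried in the "xm" export branch of print_record() above (mapping '_' or an empty indicator to a blank) can be isolated as a small helper; a sketch, with a name of my own choosing:

```python
def normalize_marc_indicator(ind):
    """Map a stored MARC indicator to its MARCXML form: both '_' and
    the empty string denote a blank indicator, i.e. a single space."""
    if ind == '_' or ind == '':
        return ' '
    return ind
```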
def call_bibformat(recID, format="HD", ln=CFG_SITE_LANG, search_pattern=None, user_info=None, verbose=0):
"""
Calls BibFormat and returns formatted record.
BibFormat will decide by itself if old or new BibFormat must be used.
"""
from invenio.bibformat_utils import get_pdf_snippets
keywords = []
if search_pattern is not None:
for unit in create_basic_search_units(None, str(search_pattern), None):
bsu_o, bsu_p, bsu_f, bsu_m = unit[0], unit[1], unit[2], unit[3]
if (bsu_o != '-' and bsu_f in [None, 'fulltext']):
if bsu_m == 'a' and bsu_p.startswith('%') and bsu_p.endswith('%'):
# remove leading and trailing `%' representing partial phrase search
keywords.append(bsu_p[1:-1])
else:
keywords.append(bsu_p)
out = format_record(recID,
of=format,
ln=ln,
search_pattern=keywords,
user_info=user_info,
verbose=verbose)
if CFG_WEBSEARCH_FULLTEXT_SNIPPETS and user_info and \
'fulltext' in user_info['uri'].lower():
# check snippets only if URL contains fulltext
# FIXME: make it work for CLI too, via new function arg
if keywords:
snippets = ''
try:
snippets = get_pdf_snippets(recID, keywords, user_info)
except:
register_exception()
if snippets:
out += snippets
return out
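call_bibformat() collects highlighting keywords from the basic search units, stripping the %...% wrapper that marks a partial-phrase unit of match type 'a'. That step in isolation, as a hypothetical helper:

```python
def extract_highlight_keyword(bsu_p, bsu_m):
    """Return the keyword to highlight for one basic search unit.

    For 'a' (all-of-the-words) matches, a pattern wrapped in %...%
    denotes a partial phrase search, so the wrapping percent signs
    are dropped before the pattern is used for highlighting."""
    if bsu_m == 'a' and bsu_p.startswith('%') and bsu_p.endswith('%'):
        return bsu_p[1:-1]
    return bsu_p
```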
def log_query(hostname, query_args, uid=-1):
"""
Log query into the query and user_query tables.
Return id_query or None in case of problems.
"""
id_query = None
if uid >= 0:
# log the query only if uid is reasonable
res = run_sql("SELECT id FROM query WHERE urlargs=%s", (query_args,), 1)
try:
id_query = res[0][0]
except:
id_query = run_sql("INSERT INTO query (type, urlargs) VALUES ('r', %s)", (query_args,))
if id_query:
run_sql("INSERT INTO user_query (id_user, id_query, hostname, date) VALUES (%s, %s, %s, %s)",
(uid, id_query, hostname,
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())))
return id_query
def log_query_info(action, p, f, colls, nb_records_found_total=-1):
"""Write some info to the log file for later analysis."""
try:
log = open(CFG_LOGDIR + "/search.log", "a")
log.write(time.strftime("%Y%m%d%H%M%S#", time.localtime()))
log.write(action+"#")
log.write(p+"#")
log.write(f+"#")
for coll in colls[:-1]:
log.write("%s," % coll)
log.write("%s#" % colls[-1])
log.write("%d" % nb_records_found_total)
log.write("\n")
log.close()
except:
pass
return
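log_query_info() emits one '#'-separated line per query: timestamp, action, pattern, field, comma-separated collection list, and total hit count. A sketch that builds the same line in one expression (hypothetical helper; unlike the loop above, it also tolerates an empty collection list, which the original only survives via its bare except):

```python
import time

def format_search_log_line(action, p, f, colls, nb_found, when=None):
    """Build a search.log line of the shape:
    YYYYmmddHHMMSS#action#pattern#field#coll1,coll2#nb_found"""
    when = when if when is not None else time.localtime()
    return "%s#%s#%s#%s#%s#%d" % (time.strftime("%Y%m%d%H%M%S", when),
                                  action, p, f, ",".join(colls), nb_found)
```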
def clean_dictionary(dictionary, list_of_items):
"""Return a copy of the dictionary with the values of all keys
listed in list_of_items set to empty strings."""
out_dictionary = dictionary.copy()
out_dictionary.update((item, '') for item in list_of_items)
return out_dictionary
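clean_dictionary() blanks the listed keys on a copy, leaving the caller's dict untouched. A quick self-contained usage example (the definition is repeated so the snippet runs standalone):

```python
def clean_dictionary(dictionary, list_of_items):
    # same definition as above, repeated so this snippet runs standalone
    out_dictionary = dictionary.copy()
    out_dictionary.update((item, '') for item in list_of_items)
    return out_dictionary

params = {'p': 'ellis', 'f': 'author', 'rg': 10}
washed = clean_dictionary(params, ['p', 'f'])
assert washed == {'p': '', 'f': '', 'rg': 10}
assert params['p'] == 'ellis'   # the original dict is not mutated
```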
### CALLABLES
def perform_request_search(req=None, cc=CFG_SITE_NAME, c=None, p="", f="", rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, sf="", so="d", sp="", rm="", of="id", ot="", aas=0,
p1="", f1="", m1="", op1="", p2="", f2="", m2="", op2="", p3="", f3="", m3="", sc=0, jrec=0,
recid=-1, recidb=-1, sysno="", id=-1, idb=-1, sysnb="", action="", d1="",
d1y=0, d1m=0, d1d=0, d2="", d2y=0, d2m=0, d2d=0, dt="", verbose=0, ap=0, ln=CFG_SITE_LANG, ec=None, tab="",
wl=0, em=""):
"""Perform search or browse request, without checking for
authentication. Return list of recIDs found, if of=id.
Otherwise create web page.
The arguments are as follows:
req - mod_python Request class instance.
cc - current collection (e.g. "ATLAS"). The collection the
user started to search/browse from.
c - collection list (e.g. ["Theses", "Books"]). The
collections user may have selected/deselected when
starting to search from 'cc'.
p - pattern to search for (e.g. "ellis and muon or kaon").
f - field to search within (e.g. "author").
rg - records in groups of (e.g. "10"). Defines how many hits
per collection in the search results page are
displayed. (Note that `rg' is ignored in case of `of=id'.)
sf - sort field (e.g. "title").
so - sort order ("a"=ascending, "d"=descending).
sp - sort pattern (e.g. "CERN-") -- in case there are more
values in a sort field, this argument tells which one
to prefer
rm - ranking method (e.g. "jif"). Defines whether results
should be ranked by some known ranking method.
of - output format (e.g. "hb"). Usually starting "h" means
HTML output (and "hb" for HTML brief, "hd" for HTML
detailed), "x" means XML output, "t" means plain text
output, "id" means no output at all but to return list
- of recIDs found. (Suitable for high-level API.)
+ of recIDs found, "intbitset" means to return an intbitset
+ representation of the recIDs found (no sorting or ranking
+ will be performed). (Suitable for high-level API.)
ot - output only these MARC tags (e.g. "100,700,909C0b").
Useful if only some fields are to be shown in the
output, e.g. for library to control some fields.
em - output only part of the page.
aas - advanced search ("0" means no, "1" means yes). Whether
search was called from within the advanced search
interface.
p1 - first pattern to search for in the advanced search
interface. Much like 'p'.
f1 - first field to search within in the advanced search
interface. Much like 'f'.
m1 - first matching type in the advanced search interface.
("a" all of the words, "o" any of the words, "e" exact
phrase, "p" partial phrase, "r" regular expression).
op1 - first operator, to join the first and the second unit
in the advanced search interface. ("a" add, "o" or,
"n" not).
p2 - second pattern to search for in the advanced search
interface. Much like 'p'.
f2 - second field to search within in the advanced search
interface. Much like 'f'.
m2 - second matching type in the advanced search interface.
("a" all of the words, "o" any of the words, "e" exact
phrase, "p" partial phrase, "r" regular expression).
op2 - second operator, to join the second and the third unit
in the advanced search interface. ("a" add, "o" or,
"n" not).
p3 - third pattern to search for in the advanced search
interface. Much like 'p'.
f3 - third field to search within in the advanced search
interface. Much like 'f'.
m3 - third matching type in the advanced search interface.
("a" all of the words, "o" any of the words, "e" exact
phrase, "p" partial phrase, "r" regular expression).
sc - split by collection ("0" no, "1" yes). Governs whether
we want to present the results in a single huge list,
or split by collection.
jrec - jump to record (e.g. "234"). Used for navigation
inside the search results. (Note that `jrec' is ignored
in case of `of=id'.)
recid - display record ID (e.g. "20000"). Do not
search/browse but go straight away to the Detailed
record page for the given recID.
recidb - display record ID bis (e.g. "20010"). If greater than
'recid', then display records from recid to recidb.
Useful for example for dumping records from the
database for reformatting.
sysno - display old system SYS number (e.g. ""). If you
migrate to Invenio from another system, and store your
old SYS call numbers, you can use them instead of recid
if you wish so.
id - the same as recid, in case recid is not set. For
backwards compatibility.
idb - the same as recid, in case recidb is not set. For
backwards compatibility.
sysnb - the same as sysno, in case sysno is not set. For
backwards compatibility.
action - action to do. "SEARCH" for searching, "Browse" for
browsing. Default is to search.
d1 - first datetime in full YYYY-mm-dd HH:MM:DD format
(e.g. "1998-08-23 12:34:56"). Useful for search limits
on creation/modification date (see 'dt' argument
below). Note that 'd1' takes precedence over d1y, d1m,
d1d if these are defined.
d1y - first date's year (e.g. "1998"). Useful for search
limits on creation/modification date.
d1m - first date's month (e.g. "08"). Useful for search
limits on creation/modification date.
d1d - first date's day (e.g. "23"). Useful for search
limits on creation/modification date.
d2 - second datetime in full YYYY-mm-dd HH:MM:DD format
(e.g. "1998-09-02 12:34:56"). Useful for search limits
on creation/modification date (see 'dt' argument
below). Note that 'd2' takes precedence over d2y, d2m,
d2d if these are defined.
d2y - second date's year (e.g. "1998"). Useful for search
limits on creation/modification date.
d2m - second date's month (e.g. "09"). Useful for search
limits on creation/modification date.
d2d - second date's day (e.g. "02"). Useful for search
limits on creation/modification date.
dt - first and second date's type (e.g. "c"). Specifies
whether to search in creation dates ("c") or in
modification dates ("m"). When dt is not set and d1*
and d2* are set, the default is "c".
verbose - verbose level (0=min, 9=max). Useful to print some
internal information on the searching process in case
something goes wrong.
ap - alternative patterns (0=no, 1=yes). In case no exact
match is found, the search engine can try alternative
patterns e.g. to replace non-alphanumeric characters by
a boolean query. ap defines if this is wanted.
ln - language of the search interface (e.g. "en"). Useful
for internationalization.
ec - list of external search engines to search as well
(e.g. "SPIRES HEP").
wl - wildcard limit (e.g. 100): wildcard queries will be
limited to at most 100 results.
"""
kwargs = prs_wash_arguments(req=req, cc=cc, c=c, p=p, f=f, rg=rg, sf=sf, so=so, sp=sp, rm=rm, of=of, ot=ot, aas=aas,
p1=p1, f1=f1, m1=m1, op1=op1, p2=p2, f2=f2, m2=m2, op2=op2, p3=p3, f3=f3, m3=m3, sc=sc, jrec=jrec,
recid=recid, recidb=recidb, sysno=sysno, id=id, idb=idb, sysnb=sysnb, action=action, d1=d1,
d1y=d1y, d1m=d1m, d1d=d1d, d2=d2, d2y=d2y, d2m=d2m, d2d=d2d, dt=dt, verbose=verbose, ap=ap, ln=ln, ec=ec,
tab=tab, wl=wl, em=em)
return prs_perform_search(kwargs=kwargs, **kwargs)
def prs_perform_search(kwargs=None, **dummy):
"""Internal call which performs the search by calling the standard
Invenio search engine. Unless you know what you are doing, do not
use this call as an API.
"""
# wash the collection arguments separately, because this step can be called independently
out = prs_wash_arguments_colls(kwargs=kwargs, **kwargs)
if not out:
return out
return prs_search(kwargs=kwargs, **kwargs)
def prs_wash_arguments_colls(kwargs=None, of=None, req=None, cc=None, c=None, sc=None, verbose=None,
aas=None, ln=None, em="", **dummy):
"""
Check and wash collection list argument before we start searching.
If there are troubles, e.g. a collection is not defined, print
warning to the browser.
@return: True if collection list is OK, and various False values
(empty string, empty list) if there was an error.
"""
# raise an exception when trying to print out html from the cli
if of.startswith("h"):
assert req
# for every search engine request asking for an HTML output, we
# first regenerate cache of collection and field I18N names if
# needed; so that later we won't bother checking timestamps for
# I18N names at all:
if of.startswith("h"):
collection_i18nname_cache.recreate_cache_if_needed()
field_i18nname_cache.recreate_cache_if_needed()
try:
(cc, colls_to_display, colls_to_search, hosted_colls, wash_colls_debug) = wash_colls(cc, c, sc, verbose) # which colls to search and to display?
kwargs['colls_to_display'] = colls_to_display
kwargs['colls_to_search'] = colls_to_search
kwargs['hosted_colls'] = hosted_colls
kwargs['wash_colls_debug'] = wash_colls_debug
except InvenioWebSearchUnknownCollectionError, exc:
colname = exc.colname
if of.startswith("h"):
page_start(req, of, cc, aas, ln, getUid(req),
websearch_templates.tmpl_collection_not_found_page_title(colname, ln))
req.write(websearch_templates.tmpl_collection_not_found_page_body(colname, ln))
page_end(req, of, ln, em)
return ''
elif of == "id":
return []
+ elif of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
page_end(req, of, ln, em)
return ''
else:
page_end(req, of, ln, em)
return ''
return True
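# Note on the "intbitset" output format handled above: intbitset is
# Invenio's compact record-ID set type, supporting the usual set
# operations plus fast (de)serialization, e.g. (sketch):
#
#     hits = intbitset([1, 2, 3])
#     hits.union_update(intbitset([3, 4]))     # hits now covers IDs 1-4
#     payload = hits.fastdump()                # compact string form
#     restored = intbitset().fastload(payload)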
def prs_wash_arguments(req=None, cc=CFG_SITE_NAME, c=None, p="", f="", rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS,
sf="", so="d", sp="", rm="", of="id", ot="", aas=0,
p1="", f1="", m1="", op1="", p2="", f2="", m2="", op2="", p3="", f3="", m3="",
sc=0, jrec=0, recid=-1, recidb=-1, sysno="", id=-1, idb=-1, sysnb="", action="", d1="",
d1y=0, d1m=0, d1d=0, d2="", d2y=0, d2m=0, d2d=0, dt="", verbose=0, ap=0, ln=CFG_SITE_LANG,
ec=None, tab="", uid=None, wl=0, em="", **dummy):
"""
Sets the default values and washes/checks the remaining arguments for the PRS call
"""
# wash output format:
of = wash_output_format(of)
# wash all arguments requiring special care
p = wash_pattern(p)
f = wash_field(f)
p1 = wash_pattern(p1)
f1 = wash_field(f1)
p2 = wash_pattern(p2)
f2 = wash_field(f2)
p3 = wash_pattern(p3)
f3 = wash_field(f3)
(d1y, d1m, d1d, d2y, d2m, d2d) = map(int, (d1y, d1m, d1d, d2y, d2m, d2d))
datetext1, datetext2 = wash_dates(d1, d1y, d1m, d1d, d2, d2y, d2m, d2d)
# wash ranking method:
if not is_method_valid(None, rm):
rm = ""
# backwards compatibility: id, idb, sysnb -> recid, recidb, sysno (if applicable)
if sysnb != "" and sysno == "":
sysno = sysnb
if id > 0 and recid == -1:
recid = id
if idb > 0 and recidb == -1:
recidb = idb
# TODO: deduce passed search limiting criteria (if applicable)
pl, pl_in_url = "", "" # no limits by default
if action != "browse" and req and not isinstance(req, cStringIO.OutputType) \
and req.args and not isinstance(req.args, dict): # we do not want to add options while browsing or while calling via command-line
fieldargs = cgi.parse_qs(req.args)
for fieldcode in get_fieldcodes():
if fieldargs.has_key(fieldcode):
for val in fieldargs[fieldcode]:
pl += "+%s:\"%s\" " % (fieldcode, val)
pl_in_url += "&amp;%s=%s" % (urllib.quote(fieldcode), urllib.quote(val))
# deduce recid from sysno argument (if applicable):
if sysno: # ALEPH SYS number was passed, so deduce DB recID for the record:
recid = get_mysql_recid_from_aleph_sysno(sysno)
if recid is None:
recid = 0 # use recid 0 to indicate that this sysno does not exist
# deduce collection we are in (if applicable):
if recid > 0:
referer = None
if req:
referer = req.headers_in.get('Referer')
cc = guess_collection_of_a_record(recid, referer)
# deduce user id (if applicable):
if uid is None:
try:
uid = getUid(req)
except:
uid = 0
_ = gettext_set_language(ln)
kwargs = {'req':req,'cc':cc, 'c':c, 'p':p, 'f':f, 'rg':rg, 'sf':sf, 'so':so, 'sp':sp, 'rm':rm, 'of':of, 'ot':ot, 'aas':aas,
'p1':p1, 'f1':f1, 'm1':m1, 'op1':op1, 'p2':p2, 'f2':f2, 'm2':m2, 'op2':op2, 'p3':p3, 'f3':f3, 'm3':m3, 'sc':sc, 'jrec':jrec,
'recid':recid, 'recidb':recidb, 'sysno':sysno, 'id':id, 'idb':idb, 'sysnb':sysnb, 'action':action, 'd1':d1,
'd1y':d1y, 'd1m':d1m, 'd1d':d1d, 'd2':d2, 'd2y':d2y, 'd2m':d2m, 'd2d':d2d, 'dt':dt, 'verbose':verbose, 'ap':ap, 'ln':ln, 'ec':ec,
'tab':tab, 'wl':wl, 'em': em,
'datetext1': datetext1, 'datetext2': datetext2, 'uid': uid, 'cc':cc, 'pl': pl, 'pl_in_url': pl_in_url, '_': _,
'selected_external_collections_infos':None,
}
kwargs.update(**dummy)
return kwargs
def prs_search(kwargs=None, recid=0, req=None, cc=None, p=None, p1=None, p2=None, p3=None,
f=None, ec=None, verbose=None, ln=None, selected_external_collections_infos=None,
action=None,rm=None, of=None, em=None,
**dummy):
"""
This function writes various bits into the req object as the search
proceeds (so that pieces of the page are rendered even before the
search has ended)
"""
## 0 - start output
if recid >= 0: # recid can be 0 if deduced from sysno and if such sysno does not exist
output = prs_detailed_record(kwargs=kwargs, **kwargs)
if output is not None:
return output
elif action == "browse":
## 2 - browse needed
of = 'hb'
output = prs_browse(kwargs=kwargs, **kwargs)
if output is not None:
return output
elif rm and p.startswith("recid:"):
## 3-ter - similarity search (or old-style citation search) needed
output = prs_search_similar_records(kwargs=kwargs, **kwargs)
if output is not None:
return output
elif p.startswith("cocitedwith:"): #WAS EXPERIMENTAL
## 3-terter - cited by search needed
output = prs_search_cocitedwith(kwargs=kwargs, **kwargs)
if output is not None:
return output
else:
## 3 - common search needed
output = prs_search_common(kwargs=kwargs, **kwargs)
if output is not None:
return output
# External searches
if of.startswith("h"):
if not of in ['hcs', 'hcs2']:
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
return page_end(req, of, ln, em)
def prs_detailed_record(kwargs=None, req=None, of=None, cc=None, aas=None, ln=None, uid=None, recid=None, recidb=None,
p=None, verbose=None, tab=None, sf=None, so=None, sp=None, rm=None, ot=None, _=None, em=None,
**dummy):
"""Formats and prints one record"""
## 1 - detailed record display
title, description, keywords = \
websearch_templates.tmpl_record_page_header_content(req, recid, ln)
if req is not None and req.method != 'HEAD':
page_start(req, of, cc, aas, ln, uid, title, description, keywords, recid, tab, em)
# Default format is hb but we are in detailed -> change 'of'
if of == "hb":
of = "hd"
if record_exists(recid):
if recidb <= recid: # sanity check
recidb = recid + 1
- if of == "id":
- return [recidx for recidx in range(recid, recidb) if record_exists(recidx)]
+ if of in ["id", "intbitset"]:
+ result = [recidx for recidx in range(recid, recidb) if record_exists(recidx)]
+ if of == "intbitset":
+ return intbitset(result)
+ else:
+ return result
else:
print_records(req, range(recid, recidb), -1, -9999, of, ot, ln, search_pattern=p, verbose=verbose,
tab=tab, sf=sf, so=so, sp=sp, rm=rm, em=em)
if req and of.startswith("h"): # register detailed record page view event
client_ip_address = str(req.remote_ip)
register_page_view_event(recid, uid, client_ip_address)
else: # record does not exist
if of == "id":
return []
+ elif of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
elif of.startswith("h"):
if req.method == 'HEAD':
raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND
else:
write_warning(_("Requested record does not seem to exist."), req=req)
def prs_browse(kwargs=None, req=None, of=None, cc=None, aas=None, ln=None, uid=None, _=None, p=None,
p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None,
so=None, sp=None, rm=None, ot=None, f1=None, m1=None, op1=None,
f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None,
d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None,
dt=None, jrec=None, ec=None, action=None,
colls_to_search=None, verbose=None, em=None, **dummy):
page_start(req, of, cc, aas, ln, uid, _("Browse"), p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em)
req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1,
p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action,
em
))
write_warning(create_exact_author_browse_help_link(p, p1, p2, p3, f, f1, f2, f3,
rm, cc, ln, jrec, rg, aas, action),
req=req)
try:
if aas == 1 or (p1 or p2 or p3):
browse_pattern(req, colls_to_search, p1, f1, rg, ln)
browse_pattern(req, colls_to_search, p2, f2, rg, ln)
browse_pattern(req, colls_to_search, p3, f3, rg, ln)
else:
browse_pattern(req, colls_to_search, p, f, rg, ln)
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
return page_end(req, of, ln, em)
def prs_search_similar_records(kwargs=None, req=None, of=None, cc=None, pl_in_url=None, ln=None, uid=None, _=None, p=None,
p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None,
so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None,
f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None,
d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None,
dt=None, jrec=None, ec=None, action=None, em=None,
verbose=None, **dummy):
if req and req.method != 'HEAD':
page_start(req, of, cc, aas, ln, uid, _("Search Results"), p=create_page_title_search_pattern_info(p, p1, p2, p3),
em=em)
if of.startswith("h"):
req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1,
p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action,
em
))
if record_exists(p[6:]) != 1:
# record does not exist
if of.startswith("h"):
if req.method == 'HEAD':
raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND
else:
write_warning(_("Requested record does not seem to exist."), req=req)
if of == "id":
return []
+ if of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
else:
# record does exist, so find records similar to it:
t1 = os.times()[4]
results_similar_recIDs, results_similar_relevances, results_similar_relevances_prologue, results_similar_relevances_epilogue, results_similar_comments = \
rank_records_bibrank(rm, 0, get_collection_reclist(cc), string.split(p), verbose, f, rg, jrec)
if results_similar_recIDs:
t2 = os.times()[4]
cpu_time = t2 - t1
if of.startswith("h"):
req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, cc, len(results_similar_recIDs),
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em))
write_warning(results_similar_comments, req=req)
print_records(req, results_similar_recIDs, jrec, rg, of, ot, ln,
results_similar_relevances, results_similar_relevances_prologue,
results_similar_relevances_epilogue,
search_pattern=p, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em)
elif of == "id":
return results_similar_recIDs
+ elif of == "intbitset":
+ return intbitset(results_similar_recIDs)
elif of.startswith("x"):
print_records(req, results_similar_recIDs, jrec, rg, of, ot, ln,
results_similar_relevances, results_similar_relevances_prologue,
results_similar_relevances_epilogue, search_pattern=p, verbose=verbose,
sf=sf, so=so, sp=sp, rm=rm, em=em)
else:
# rank_records failed and returned some error message to display:
if of.startswith("h"):
write_warning(results_similar_relevances_prologue, req=req)
write_warning(results_similar_relevances_epilogue, req=req)
write_warning(results_similar_comments, req=req)
if of == "id":
return []
+ elif of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
def prs_search_cocitedwith(kwargs=None, req=None, of=None, cc=None, pl_in_url=None, ln=None, uid=None, _=None, p=None,
p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None,
so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None,
f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None,
d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None,
dt=None, jrec=None, ec=None, action=None,
verbose=None, em=None, **dummy):
page_start(req, of, cc, aas, ln, uid, _("Search Results"), p=create_page_title_search_pattern_info(p, p1, p2, p3),
em=em)
if of.startswith("h"):
req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1,
p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action,
em
))
recID = p[12:]
if record_exists(recID) != 1:
# record does not exist
if of.startswith("h"):
write_warning(_("Requested record does not seem to exist."), req=req)
if of == "id":
return []
+ elif of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
else:
# record does exist, so find co-cited records:
t1 = os.times()[4]
results_cocited_recIDs = map(lambda x: x[0], calculate_co_cited_with_list(int(recID)))
if results_cocited_recIDs:
t2 = os.times()[4]
cpu_time = t2 - t1
if of.startswith("h"):
req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, CFG_SITE_NAME, len(results_cocited_recIDs),
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em))
print_records(req, results_cocited_recIDs, jrec, rg, of, ot, ln, search_pattern=p, verbose=verbose,
sf=sf, so=so, sp=sp, rm=rm, em=em)
elif of == "id":
return results_cocited_recIDs
+ elif of == "intbitset":
+ return intbitset(results_cocited_recIDs)
elif of.startswith("x"):
print_records(req, results_cocited_recIDs, jrec, rg, of, ot, ln, search_pattern=p, verbose=verbose,
sf=sf, so=so, sp=sp, rm=rm, em=em)
else:
# co-cited search returned no results:
if of.startswith("h"):
write_warning("nothing found", req=req)
if of == "id":
return []
+ elif of == "intbitset":
+ return intbitset()
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
def prs_search_hosted_collections(kwargs=None, req=None, of=None, ln=None, _=None, p=None,
p1=None, p2=None, p3=None, hosted_colls=None, f=None,
colls_to_search=None, hosted_colls_actual_or_potential_results_p=None,
verbose=None, **dummy):
hosted_colls_results = hosted_colls_timeouts = hosted_colls_true_results = None
# search into the hosted collections only if the output format is html or xml
if hosted_colls and (of.startswith("h") or of.startswith("x")) and not p.startswith("recid:"):
# hosted_colls_results : the hosted collections' searches that did not timeout
# hosted_colls_timeouts : the hosted collections' searches that timed out and will be searched later on again
(hosted_colls_results, hosted_colls_timeouts) = calculate_hosted_collections_results(req, [p, p1, p2, p3], f, hosted_colls, verbose, ln, CFG_HOSTED_COLLECTION_TIMEOUT_ANTE_SEARCH)
# successful searches
if hosted_colls_results:
hosted_colls_true_results = []
for result in hosted_colls_results:
# if the number of results is None or 0 (or False) then just do nothing
if result[1] == None or result[1] == False:
# these are the searches that returned no or zero results
if verbose:
write_warning("Hosted collections (perform_search_request): %s returned no results" % result[0][1].name, req=req)
else:
# these are the searches that actually returned results on time
hosted_colls_true_results.append(result)
if verbose:
write_warning("Hosted collections (perform_search_request): %s returned %s results in %s seconds" % (result[0][1].name, result[1], result[2]), req=req)
else:
if verbose:
write_warning("Hosted collections (perform_search_request): there were no hosted collections results to be printed at this time", req=req)
if hosted_colls_timeouts:
if verbose:
for timeout in hosted_colls_timeouts:
write_warning("Hosted collections (perform_search_request): %s timed out and will be searched again later" % timeout[0][1].name, req=req)
# we need to know later on whether there were any hosted collections to be searched, even if none were searched in the end
elif hosted_colls and ((not (of.startswith("h") or of.startswith("x"))) or p.startswith("recid:")):
(hosted_colls_results, hosted_colls_timeouts) = (None, None)
else:
if verbose:
write_warning("Hosted collections (perform_search_request): there were no hosted collections to be searched", req=req)
## let's define some useful boolean variables:
# True means there are actual or potential hosted collections results to be printed
kwargs['hosted_colls_actual_or_potential_results_p'] = bool(hosted_colls and ((hosted_colls_results and hosted_colls_true_results) or hosted_colls_timeouts))
# True means there are hosted collections timeouts to take care of later
# (useful for more accurate printing of results later)
kwargs['hosted_colls_potential_results_p'] = bool(hosted_colls and hosted_colls_timeouts)
# True means we only have hosted collections to deal with
kwargs['only_hosted_colls_actual_or_potential_results_p'] = not colls_to_search and hosted_colls_actual_or_potential_results_p
kwargs['hosted_colls_results'] = hosted_colls_results
kwargs['hosted_colls_timeouts'] = hosted_colls_timeouts
kwargs['hosted_colls_true_results'] = hosted_colls_true_results
def prs_advanced_search(results_in_any_collection, kwargs=None, req=None, of=None,
cc=None, ln=None, _=None, p=None, p1=None, p2=None, p3=None,
f=None, f1=None, m1=None, op1=None, f2=None, m2=None,
op2=None, f3=None, m3=None, ap=None, ec=None,
selected_external_collections_infos=None, verbose=None,
wl=None, em=None, **dummy):
len_results_p1 = 0
len_results_p2 = 0
len_results_p3 = 0
try:
results_in_any_collection.union_update(search_pattern_parenthesised(req, p1, f1, m1, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl))
len_results_p1 = len(results_in_any_collection)
if len_results_p1 == 0:
if of.startswith("h"):
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec,
verbose, ln, selected_external_collections_infos, em=em)
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
return page_end(req, of, ln, em)
if p2:
results_tmp = search_pattern_parenthesised(req, p2, f2, m2, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl)
len_results_p2 = len(results_tmp)
if op1 == "a": # add
results_in_any_collection.intersection_update(results_tmp)
elif op1 == "o": # or
results_in_any_collection.union_update(results_tmp)
elif op1 == "n": # not
results_in_any_collection.difference_update(results_tmp)
else:
if of.startswith("h"):
write_warning("Invalid set operation %s." % cgi.escape(op1), "Error", req=req)
if len(results_in_any_collection) == 0:
if of.startswith("h"):
if len_results_p2:
#each individual query returned results, but the boolean operation did not
nearestterms = []
nearest_search_args = req.argd.copy()
if p1:
nearestterms.append((p1, len_results_p1, clean_dictionary(nearest_search_args, ['p2', 'f2', 'm2', 'p3', 'f3', 'm3'])))
nearestterms.append((p2, len_results_p2, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p3', 'f3', 'm3'])))
write_warning(websearch_templates.tmpl_search_no_boolean_hits(ln=ln, nearestterms=nearestterms), req=req)
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
if p3:
results_tmp = search_pattern_parenthesised(req, p3, f3, m3, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl)
len_results_p3 = len(results_tmp)
if op2 == "a": # add
results_in_any_collection.intersection_update(results_tmp)
elif op2 == "o": # or
results_in_any_collection.union_update(results_tmp)
elif op2 == "n": # not
results_in_any_collection.difference_update(results_tmp)
else:
if of.startswith("h"):
write_warning("Invalid set operation %s." % cgi.escape(op2), "Error", req=req)
if len(results_in_any_collection) == 0 and len_results_p3 and of.startswith("h"):
#each individual query returned results but the boolean operation did not
nearestterms = []
nearest_search_args = req.argd.copy()
if p1:
nearestterms.append((p1, len_results_p1, clean_dictionary(nearest_search_args, ['p2', 'f2', 'm2', 'p3', 'f3', 'm3'])))
if p2:
nearestterms.append((p2, len_results_p2, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p3', 'f3', 'm3'])))
nearestterms.append((p3, len_results_p3, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p2', 'f2', 'm2'])))
write_warning(websearch_templates.tmpl_search_no_boolean_hits(ln=ln, nearestterms=nearestterms), req=req)
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
return page_end(req, of, ln, em)
def prs_simple_search(results_in_any_collection, kwargs=None, req=None, of=None, cc=None, ln=None, p=None, f=None,
p1=None, p2=None, p3=None, ec=None, verbose=None, selected_external_collections_infos=None,
only_hosted_colls_actual_or_potential_results_p=None, query_representation_in_cache=None,
ap=None, hosted_colls_actual_or_potential_results_p=None, wl=None, em=None,
**dummy):
try:
results_in_cache = intbitset().fastload(
search_results_cache.get(query_representation_in_cache))
except:
results_in_cache = None
if results_in_cache is not None:
# query is already in the cache, so reuse the cached results:
results_in_any_collection.union_update(results_in_cache)
if verbose and of.startswith("h"):
write_warning("Search stage 0: query found in cache, reusing cached results.", req=req)
else:
try:
# added the display_nearest_terms_box parameter to avoid printing out the "Nearest terms in any collection"
# recommendations when there are results only in the hosted collections. Also added the if clause to avoid
# searching in case we know we only have actual or potential hosted collections results
if not only_hosted_colls_actual_or_potential_results_p:
results_in_any_collection.union_update(search_pattern_parenthesised(req, p, f, ap=ap, of=of, verbose=verbose, ln=ln,
display_nearest_terms_box=not hosted_colls_actual_or_potential_results_p,
wl=wl))
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
return page_end(req, of, ln, em)
def prs_intersect_results_with_collrecs(results_final, results_in_any_collection, kwargs=None, colls_to_search=None,
req=None, ap=None, of=None, ln=None,
cc=None, p=None, p1=None, p2=None, p3=None, f=None,
ec=None, verbose=None, selected_external_collections_infos=None, em=None,
**dummy):
display_nearest_terms_box=not kwargs['hosted_colls_actual_or_potential_results_p']
try:
# added the display_nearest_terms_box parameter to avoid printing out the "Nearest terms in any collection"
# recommendations when there are results only in the hosted collections. Also added the if clause to avoid
# searching in case we know since the last stage that we have no results in any collection
if len(results_in_any_collection) != 0:
results_final.update(intersect_results_with_collrecs(req, results_in_any_collection, colls_to_search, ap, of,
verbose, ln, display_nearest_terms_box=display_nearest_terms_box))
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
return page_end(req, of, ln, em)
def prs_store_results_in_cache(query_representation_in_cache, results_in_any_collection, req=None, verbose=None, of=None, **dummy):
if CFG_WEBSEARCH_SEARCH_CACHE_SIZE > 0:
search_results_cache.set(query_representation_in_cache,
results_in_any_collection.fastdump(),
timeout=CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT)
search_results_cache.set(query_representation_in_cache + '::cc',
dummy.get('cc', CFG_SITE_NAME),
timeout=CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT)
if verbose and of.startswith("h"):
write_warning("Search stage 3: storing query results in cache.", req=req)
def prs_apply_search_limits(results_final, kwargs=None, req=None, of=None, cc=None, ln=None, _=None,
p=None, p1=None, p2=None, p3=None, f=None, pl=None, ap=None, dt=None,
ec=None, selected_external_collections_infos=None,
hosted_colls_actual_or_potential_results_p=None,
datetext1=None, datetext2=None, verbose=None, wl=None, em=None,
**dummy):
if datetext1 != "" and results_final != {}:
if verbose and of.startswith("h"):
write_warning("Search stage 5: applying time etc limits, from %s until %s..." % (datetext1, datetext2), req=req)
try:
results_final = intersect_results_with_hitset(req,
results_final,
search_unit_in_bibrec(datetext1, datetext2, dt),
ap,
aptext= _("No match within your time limits, "
"discarding this condition..."),
of=of)
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
return page_end(req, of, ln, em)
if results_final == {} and not hosted_colls_actual_or_potential_results_p:
if of.startswith("h"):
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
#if of.startswith("x"):
# # Print empty, but valid XML
# print_records_prologue(req, of)
# print_records_epilogue(req, of)
return page_end(req, of, ln, em)
if pl and results_final != {}:
pl = wash_pattern(pl)
if verbose and of.startswith("h"):
write_warning("Search stage 5: applying search pattern limit %s..." % cgi.escape(pl), req=req)
try:
results_final = intersect_results_with_hitset(req,
results_final,
search_pattern_parenthesised(req, pl, ap=0, ln=ln, wl=wl),
ap,
aptext=_("No match within your search limits, "
"discarding this condition..."),
of=of)
except:
register_exception(req=req, alert_admin=True)
if of.startswith("h"):
req.write(create_error_box(req, verbose=verbose, ln=ln))
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
return page_end(req, of, ln, em)
if results_final == {} and not hosted_colls_actual_or_potential_results_p:
if of.startswith("h"):
perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose,
ln, selected_external_collections_infos, em=em)
if of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
return page_end(req, of, ln, em)
def prs_split_into_collections(kwargs=None, results_final=None, colls_to_search=None, hosted_colls_results=None,
cpu_time=0, results_final_nb_total=None, hosted_colls_actual_or_potential_results_p=None,
hosted_colls_true_results=None, hosted_colls_timeouts=None, **dummy):
results_final_nb_total = 0
results_final_nb = {} # will hold number of records found in each collection
# (in simple dict to display overview more easily)
for coll in results_final.keys():
results_final_nb[coll] = len(results_final[coll])
#results_final_nb_total += results_final_nb[coll]
# Now let us calculate results_final_nb_total more precisely,
# in order to get the total number of "distinct" hits across
# searched collections; this is useful because a record might
# have been attributed to more than one primary collection; so
# we have to avoid counting it multiple times. The price to
# pay for this accuracy of results_final_nb_total is somewhat
# increased CPU time.
if len(results_final.keys()) == 1:
# only one collection; no need to union them
results_final_for_all_selected_colls = results_final.values()[0]
results_final_nb_total = results_final_nb.values()[0]
else:
# okay, some work ahead to union hits across collections:
results_final_for_all_selected_colls = intbitset()
for coll in results_final.keys():
results_final_for_all_selected_colls.union_update(results_final[coll])
results_final_nb_total = len(results_final_for_all_selected_colls)
#if hosted_colls and (of.startswith("h") or of.startswith("x")):
if hosted_colls_actual_or_potential_results_p:
if hosted_colls_results:
for result in hosted_colls_true_results:
colls_to_search.append(result[0][1].name)
results_final_nb[result[0][1].name] = result[1]
results_final_nb_total += result[1]
cpu_time += result[2]
if hosted_colls_timeouts:
for timeout in hosted_colls_timeouts:
colls_to_search.append(timeout[1].name)
# use -963 as a special number to identify the collections that timed out
results_final_nb[timeout[1].name] = -963
kwargs['results_final_nb'] = results_final_nb
kwargs['results_final_nb_total'] = results_final_nb_total
kwargs['results_final_for_all_selected_colls'] = results_final_for_all_selected_colls
kwargs['cpu_time'] = cpu_time #rca TODO: check where the cpu_time is used, this line was missing
return (results_final_nb, results_final_nb_total, results_final_for_all_selected_colls)
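# Illustration of the distinct-hit counting above: if collection A holds
# records {1, 2, 3} and collection B holds {3, 4}, the union yields
# results_final_nb_total == 4, whereas summing the per-collection counts
# would give 5 because record 3 belongs to both collections.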
def prs_summarize_records(kwargs=None, req=None, p=None, f=None, aas=None,
p1=None, p2=None, p3=None, f1=None, f2=None, f3=None, op1=None, op2=None,
ln=None, results_final_for_all_selected_colls=None, of='hcs', **dummy):
# feed the current search to be summarized:
from invenio.search_engine_summarizer import summarize_records
search_p = p
search_f = f
if not p and (aas == 1 or p1 or p2 or p3):
op_d = {'n': ' and not ', 'a': ' and ', 'o': ' or ', '': ''}
triples = ziplist([f1, f2, f3], [p1, p2, p3], [op1, op2, ''])
triples_len = len(triples)
for i in range(triples_len):
fi, pi, oi = triples[i] # e.g.:
if i < triples_len-1 and not triples[i+1][1]: # if p2 empty
triples[i+1][0] = '' # f2 must be too
oi = '' # and o1
if ' ' in pi:
pi = '"'+pi+'"'
if fi:
fi = fi + ':'
search_p += fi + pi + op_d[oi]
search_f = ''
summarize_records(results_final_for_all_selected_colls, of, ln, search_p, search_f, req)
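# E.g. (illustrative values) f1="author", p1="ellis", op1="a", p2="muon"
# and empty p3 yield search_p == 'author:ellis and muon'.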
def prs_print_records(kwargs=None, results_final=None, req=None, of=None, cc=None, pl_in_url=None,
ln=None, _=None, p=None, p1=None, p2=None, p3=None, f=None, rg=None, sf=None,
so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None,
f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, d1y=None, d1m=None,
d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, colls_to_search=None,
hosted_colls_actual_or_potential_results_p=None, hosted_colls_results=None,
hosted_colls_true_results=None, hosted_colls_timeouts=None, results_final_nb=None,
cpu_time=None, verbose=None, em=None, **dummy):
if len(colls_to_search)>1:
cpu_time = -1 # we do not want to have search time printed on each collection
print_records_prologue(req, of, cc=cc)
results_final_colls = []
wlqh_results_overlimit = 0
for coll in colls_to_search:
if coll in results_final and len(results_final[coll]):
if of.startswith("h"):
req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, coll, results_final_nb[coll],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em))
results_final_recIDs = list(results_final[coll])
results_final_relevances = []
results_final_relevances_prologue = ""
results_final_relevances_epilogue = ""
if rm: # do we have to rank?
results_final_recIDs_ranked, results_final_relevances, results_final_relevances_prologue, results_final_relevances_epilogue, results_final_comments = \
rank_records(req, rm, 0, results_final[coll],
string.split(p) + string.split(p1) +
string.split(p2) + string.split(p3), verbose, so, of, ln, rg, jrec, kwargs['f'])
if of.startswith("h"):
write_warning(results_final_comments, req=req)
if results_final_recIDs_ranked:
results_final_recIDs = results_final_recIDs_ranked
else:
# rank_records failed and returned some error message to display:
write_warning(results_final_relevances_prologue, req=req)
write_warning(results_final_relevances_epilogue, req=req)
elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort?
- results_final_recIDs = sort_records(req, results_final_recIDs, sf, so, sp, verbose, of, ln, rg=None, jrec=None)
+ results_final_recIDs = sort_records(req, results_final_recIDs, sf, so, sp, verbose, of, ln, rg, jrec)
if len(results_final_recIDs) < CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT:
results_final_colls.append(results_final_recIDs)
else:
wlqh_results_overlimit = 1
print_records(req, results_final_recIDs, jrec, rg, of, ot, ln,
results_final_relevances,
results_final_relevances_prologue,
results_final_relevances_epilogue,
search_pattern=p,
print_records_prologue_p=False,
print_records_epilogue_p=False,
verbose=verbose,
sf=sf,
so=so,
sp=sp,
rm=rm,
em=em)
if of.startswith("h"):
req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, coll, results_final_nb[coll],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1, em=em))
if req and not isinstance(req, cStringIO.OutputType):
# store the last search results page
session_param_set(req, 'websearch-last-query', req.unparsed_uri)
if wlqh_results_overlimit:
results_final_colls = None
# store list of results if user wants to display hits
# in a single list, or store list of collections of records
# if user displays hits split by collections:
session_param_set(req, 'websearch-last-query-hits', results_final_colls)
#if hosted_colls and (of.startswith("h") or of.startswith("x")):
if hosted_colls_actual_or_potential_results_p:
if hosted_colls_results:
# TODO: add a verbose message here
for result in hosted_colls_true_results:
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, results_final_nb[result[0][1].name],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em))
req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, limit=rg, em=em))
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, results_final_nb[result[0][1].name],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1))
if hosted_colls_timeouts:
# TODO: add a verbose message here
# TODO: check if verbose messages still work when dealing with (re)calculations of timeouts
(hosted_colls_timeouts_results, hosted_colls_timeouts_timeouts) = do_calculate_hosted_collections_results(req, ln, None, verbose, None, hosted_colls_timeouts, CFG_HOSTED_COLLECTION_TIMEOUT_POST_SEARCH)
if hosted_colls_timeouts_results:
for result in hosted_colls_timeouts_results:
if result[1] is None or result[1] is False:
## these are the searches that returned no or zero results
## also print a nearest terms box, in case this is the only
## collection being searched and it returns no results?
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, -963,
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time))
req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, no_records_found=True, limit=rg, em=em))
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, -963,
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1))
else:
# these are the searches that actually returned results on time
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, result[1],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time))
req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, limit=rg, em=em))
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, result[1],
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1))
if hosted_colls_timeouts_timeouts:
for timeout in hosted_colls_timeouts_timeouts:
if of.startswith("h"):
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, timeout[1].name, -963,
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time))
req.write(print_hosted_results(url_and_engine=timeout[0], ln=ln, of=of, req=req, search_timed_out=True, limit=rg, em=em))
req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, timeout[1].name, -963,
jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
sc, pl_in_url,
d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1))
print_records_epilogue(req, of)
if f == "author" and of.startswith("h"):
req.write(create_similarly_named_authors_link_box(p, ln))
def prs_log_query(kwargs=None, req=None, uid=None, of=None, ln=None, p=None, f=None,
colls_to_search=None, results_final_nb_total=None, em=None, **dummy):
# FIXME move query logging to signal receiver
# log query:
try:
from flask.ext.login import current_user
if req:
from flask import request
req = request
id_query = log_query(req.host,
'&'.join(map(lambda (k,v): k+'='+v, request.values.iteritems(multi=True))),
uid)
#id_query = log_query(req.remote_host, req.args, uid)
#of = request.values.get('of', 'hb')
if of.startswith("h") and id_query and (em == '' or EM_REPOSITORY["alert"] in em):
if of not in ['hcs', 'hcs2']:
# display alert/RSS teaser for non-summary formats:
display_email_alert_part = True
if current_user:
if current_user['email'] == 'guest':
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS > 4:
display_email_alert_part = False
else:
if not current_user['precached_usealerts']:
display_email_alert_part = False
from flask import flash
flash(websearch_templates.tmpl_alert_rss_teaser_box_for_query(id_query, \
ln=ln, display_email_alert_part=display_email_alert_part), 'search-results-after')
except:
# do not log query if req is None (used by CLI interface)
pass
log_query_info("ss", p, f, colls_to_search, results_final_nb_total)
def prs_search_common(kwargs=None, req=None, of=None, cc=None, ln=None, uid=None, _=None, p=None,
p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None,
so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None,
f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None,
d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None,
dt=None, jrec=None, ec=None, action=None, colls_to_search=None, wash_colls_debug=None,
verbose=None, wl=None, em=None, **dummy):
query_representation_in_cache = get_search_results_cache_key(**kwargs)
page_start(req, of, cc, aas, ln, uid, p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em)
if of.startswith("h") and verbose and wash_colls_debug:
write_warning("wash_colls debugging info : %s" % wash_colls_debug, req=req)
prs_search_hosted_collections(kwargs=kwargs, **kwargs)
if of.startswith("h"):
req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1,
p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action,
em
))
t1 = os.times()[4]
results_in_any_collection = intbitset()
if aas == 1 or (p1 or p2 or p3):
## 3A - advanced search
output = prs_advanced_search(results_in_any_collection, kwargs=kwargs, **kwargs)
if output is not None:
return output
else:
## 3B - simple search
output = prs_simple_search(results_in_any_collection, kwargs=kwargs, **kwargs)
if output is not None:
return output
if len(results_in_any_collection) == 0 and not kwargs['hosted_colls_actual_or_potential_results_p']:
if of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
return None
# store this search query results into search results cache if needed:
prs_store_results_in_cache(query_representation_in_cache, results_in_any_collection, **kwargs)
# search stage 4 and 5: intersection with collection universe and sorting/limiting
try:
output = prs_intersect_with_colls_and_apply_search_limits(results_in_any_collection, kwargs=kwargs, **kwargs)
if output is not None:
return output
except Exception: # no results to display
return None
t2 = os.times()[4]
cpu_time = t2 - t1
kwargs['cpu_time'] = cpu_time
## search stage 6: display results:
return prs_display_results(kwargs=kwargs, **kwargs)
def prs_intersect_with_colls_and_apply_search_limits(results_in_any_collection,
kwargs=None, req=None, of=None, ln=None, _=None,
p=None, p1=None, p2=None, p3=None, f=None, cc=None, ec=None,
verbose=None, em=None, **dummy):
# search stage 4: intersection with collection universe:
results_final = {}
output = prs_intersect_results_with_collrecs(results_final, results_in_any_collection, kwargs, **kwargs)
if output is not None:
return output
# another external search if we still don't have something
if results_final == {} and not kwargs['hosted_colls_actual_or_potential_results_p']:
if of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
kwargs['results_final'] = results_final
raise Exception
# search stage 5: apply search option limits and restrictions:
output = prs_apply_search_limits(results_final, kwargs=kwargs, **kwargs)
kwargs['results_final'] = results_final
if output is not None:
return output
def prs_display_results(kwargs=None, results_final=None, req=None, of=None, sf=None,
so=None, sp=None, verbose=None, p=None, p1=None, p2=None, p3=None,
cc=None, ln=None, _=None, ec=None, colls_to_search=None, rm=None, cpu_time=None,
f=None, em=None, **dummy
):
## search stage 6: display results:
# split result set into collections
(results_final_nb, results_final_nb_total, results_final_for_all_selected_colls) = prs_split_into_collections(kwargs=kwargs, **kwargs)
# we continue past this point only if there is a hosted collection that has timed out and might offer potential results
if results_final_nb_total == 0 and not kwargs['hosted_colls_potential_results_p']:
if of.startswith("h"):
write_warning("No match found, please enter different search terms.", req=req)
elif of.startswith("x"):
# Print empty, but valid XML
print_records_prologue(req, of)
print_records_epilogue(req, of)
else:
prs_log_query(kwargs=kwargs, **kwargs)
# yes, some hits found: good!
# collection list may have changed due to not-exact-match-found policy so check it out:
for coll in results_final.keys():
if coll not in colls_to_search:
colls_to_search.append(coll)
# print results overview:
- if of == "id":
+ if of == "intbitset":
+ #return the result as an intbitset
+ return results_final_for_all_selected_colls
+ elif of == "id":
# we have been asked to return list of recIDs
recIDs = list(results_final_for_all_selected_colls)
if rm: # do we have to rank?
results_final_for_all_colls_rank_records_output = rank_records(req, rm, 0, results_final_for_all_selected_colls,
string.split(p) + string.split(p1) +
string.split(p2) + string.split(p3), verbose, so, of, ln, kwargs['rg'], kwargs['jrec'], kwargs['f'])
if results_final_for_all_colls_rank_records_output[0]:
recIDs = results_final_for_all_colls_rank_records_output[0]
elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort?
recIDs = sort_records(req, recIDs, sf, so, sp, verbose, of, ln)
return recIDs
elif of.startswith("h"):
if of not in ['hcs', 'hcs2']:
# added the hosted_colls_potential_results_p parameter to help print out the overview more accurately
req.write(print_results_overview(colls_to_search, results_final_nb_total, results_final_nb, cpu_time,
ln, ec, hosted_colls_potential_results_p=kwargs['hosted_colls_potential_results_p'], em=em))
kwargs['selected_external_collections_infos'] = print_external_results_overview(req, cc, [p, p1, p2, p3],
f, ec, verbose, ln, print_overview=em == "" or EM_REPOSITORY["overview"] in em)
# print number of hits found for XML outputs:
if of.startswith("x") or of == 'mobb':
req.write("<!-- Search-Engine-Total-Number-Of-Results: %s -->\n" % kwargs['results_final_nb_total'])
# print records:
if of in ['hcs', 'hcs2']:
prs_summarize_records(kwargs=kwargs, **kwargs)
else:
prs_print_records(kwargs=kwargs, **kwargs)
# this is a copy of the prs_display_results with output parts removed, needed for external modules
def prs_rank_results(kwargs=None, results_final=None, req=None, colls_to_search=None,
sf=None, so=None, sp=None, of=None, rm=None, p=None, p1=None, p2=None, p3=None,
verbose=None, **dummy
):
## search stage 6: display results:
# split result set into collections
(results_final_nb, results_final_nb_total, results_final_for_all_selected_colls) = prs_split_into_collections(kwargs=kwargs, **kwargs)
# yes, some hits found: good!
# collection list may have changed due to not-exact-match-found policy so check it out:
for coll in results_final.keys():
if coll not in colls_to_search:
colls_to_search.append(coll)
# we have been asked to return list of recIDs
recIDs = list(results_final_for_all_selected_colls)
if rm: # do we have to rank?
results_final_for_all_colls_rank_records_output = rank_records(req, rm, 0, results_final_for_all_selected_colls,
string.split(p) + string.split(p1) +
string.split(p2) + string.split(p3), verbose, so, of, field=kwargs['f'])
if results_final_for_all_colls_rank_records_output[0]:
recIDs = results_final_for_all_colls_rank_records_output[0]
elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort?
recIDs = sort_records(req, recIDs, sf, so, sp, verbose, of)
return recIDs
def perform_request_cache(req, action="show"):
"""Manipulates the search engine cache."""
req.content_type = "text/html"
req.send_http_header()
req.write("<html>")
out = ""
out += "<h1>Search Cache</h1>"
req.write(out)
# show collection reclist cache:
out = "<h3>Collection reclist cache</h3>"
out += "- collection table last updated: %s" % get_table_update_time('collection')
out += "<br />- reclist cache timestamp: %s" % collection_reclist_cache.timestamp
out += "<br />- reclist cache contents:"
out += "<blockquote>"
for coll in collection_reclist_cache.cache.keys():
if collection_reclist_cache.cache[coll]:
out += "%s (%d)<br />" % (coll, len(collection_reclist_cache.cache[coll]))
out += "</blockquote>"
req.write(out)
# show field i18nname cache:
out = "<h3>Field I18N names cache</h3>"
out += "- fieldname table last updated: %s" % get_table_update_time('fieldname')
out += "<br />- i18nname cache timestamp: %s" % field_i18nname_cache.timestamp
out += "<br />- i18nname cache contents:"
out += "<blockquote>"
for field in field_i18nname_cache.cache.keys():
for ln in field_i18nname_cache.cache[field].keys():
out += "%s, %s = %s<br />" % (field, ln, field_i18nname_cache.cache[field][ln])
out += "</blockquote>"
req.write(out)
# show collection i18nname cache:
out = "<h3>Collection I18N names cache</h3>"
out += "- collectionname table last updated: %s" % get_table_update_time('collectionname')
out += "<br />- i18nname cache timestamp: %s" % collection_i18nname_cache.timestamp
out += "<br />- i18nname cache contents:"
out += "<blockquote>"
for coll in collection_i18nname_cache.cache.keys():
for ln in collection_i18nname_cache.cache[coll].keys():
out += "%s, %s = %s<br />" % (coll, ln, collection_i18nname_cache.cache[coll][ln])
out += "</blockquote>"
req.write(out)
req.write("</html>")
return "\n"
def perform_request_log(req, date=""):
"""Display search log information for given date."""
req.content_type = "text/html"
req.send_http_header()
req.write("<html>")
req.write("<h1>Search Log</h1>")
if date: # case A: display stats for a day
yyyymmdd = int(date)
req.write("<p><big><strong>Date: %d</strong></big></p>" % yyyymmdd)
req.write("""<table border="1">""")
req.write("<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>" % ("No.", "Time", "Pattern", "Field", "Collection", "Number of Hits"))
# read file:
p = os.popen("grep ^%d %s/search.log" % (yyyymmdd, CFG_LOGDIR), 'r')
lines = p.readlines()
p.close()
# process lines:
i = 0
for line in lines:
try:
datetime, dummy_aas, p, f, c, nbhits = string.split(line,"#")
i += 1
req.write("<tr><td align=\"right\">#%d</td><td>%s:%s:%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>" \
% (i, datetime[8:10], datetime[10:12], datetime[12:], p, f, c, nbhits))
except:
pass # ignore any malformed log lines
req.write("</table>")
else: # case B: display summary stats per day
yyyymm01 = int(time.strftime("%Y%m01", time.localtime()))
yyyymmdd = int(time.strftime("%Y%m%d", time.localtime()))
req.write("""<table border="1">""")
req.write("<tr><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>" % ("Day", "Number of Queries"))
for day in range(yyyymm01, yyyymmdd + 1):
p = os.popen("grep -c ^%d %s/search.log" % (day, CFG_LOGDIR), 'r')
for line in p.readlines():
req.write("""<tr><td>%s</td><td align="right"><a href="%s/search/log?date=%d">%s</a></td></tr>""" % \
(day, CFG_SITE_URL, day, line))
p.close()
req.write("</table>")
req.write("</html>")
return "\n"
def get_all_field_values(tag):
"""
Return all existing values stored for a given tag.
@param tag: the full tag, e.g. 909C0b
@type tag: string
@return: the list of values
@rtype: list of strings
"""
table = 'bib%02dx' % int(tag[:2])
return [row[0] for row in run_sql("SELECT DISTINCT(value) FROM %s WHERE tag=%%s" % table, (tag, ))]
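The table-name derivation used above can be shown in isolation (a sketch; `bibxxx_table_for_tag` is a hypothetical name, and neither `run_sql` nor the real schema is needed to see the mapping):

```python
def bibxxx_table_for_tag(tag):
    """Map a MARC tag to its bibXXx storage table, as in
    get_all_field_values() above: the first two digits of the
    tag select the table."""
    return 'bib%02dx' % int(tag[:2])
```

For example, values for tag `909C0b` are stored in table `bib90x`.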
def get_most_popular_field_values(recids, tags, exclude_values=None, count_repetitive_values=True, split_by=0):
"""
Analyze RECIDS and look for TAGS and return most popular values
and the frequency with which they occur sorted according to
descending frequency.
If a value is found in EXCLUDE_VALUES, then do not count it.
If COUNT_REPETITIVE_VALUES is True, then we count every occurrence
of value in the tags. If False, then we count the value only once
regardless of the number of times it may appear in a record.
(But, if the same value occurs in another record, we count it, of
course.)
@return: list of tuples containing tag and its frequency
Example:
>>> get_most_popular_field_values(range(11,20), '980__a')
[('PREPRINT', 10), ('THESIS', 7), ...]
>>> get_most_popular_field_values(range(11,20), ('100__a', '700__a'))
[('Ellis, J', 10), ('Ellis, N', 7), ...]
>>> get_most_popular_field_values(range(11,20), ('100__a', '700__a'), ('Ellis, J',))
[('Ellis, N', 7), ...]
"""
def _get_most_popular_field_values_helper_sorter(val1, val2):
"""Compare VAL1 and VAL2 according to, firstly, frequency, then
secondly, alphabetically."""
compared_via_frequencies = cmp(valuefreqdict[val2],
valuefreqdict[val1])
if compared_via_frequencies == 0:
return cmp(val1.lower(), val2.lower())
else:
return compared_via_frequencies
valuefreqdict = {}
## sanity check:
if not exclude_values:
exclude_values = []
if isinstance(tags, str):
tags = (tags,)
## find values to count:
vals_to_count = []
displaytmp = {}
if count_repetitive_values:
# counting technique A: can look up many records at once: (very fast)
for tag in tags:
vals_to_count.extend(get_fieldvalues(recids, tag, sort=False,
split_by=split_by))
else:
# counting technique B: must count record-by-record: (slow)
for recid in recids:
vals_in_rec = []
for tag in tags:
for val in get_fieldvalues(recid, tag, False):
vals_in_rec.append(val)
# do not count repetitive values within this record
# (even across various tags, so need to unify again):
dtmp = {}
for val in vals_in_rec:
dtmp[val.lower()] = 1
displaytmp[val.lower()] = val
vals_in_rec = dtmp.keys()
vals_to_count.extend(vals_in_rec)
## are we to exclude some of found values?
for val in vals_to_count:
if val not in exclude_values:
if val in valuefreqdict:
valuefreqdict[val] += 1
else:
valuefreqdict[val] = 1
## sort by descending frequency of values:
if not CFG_NUMPY_IMPORTABLE:
## original version
out = []
vals = valuefreqdict.keys()
vals.sort(_get_most_popular_field_values_helper_sorter)
for val in vals:
tmpdisplv = ''
if val in displaytmp:
tmpdisplv = displaytmp[val]
else:
tmpdisplv = val
out.append((tmpdisplv, valuefreqdict[val]))
return out
else:
f = [] # frequencies
n = [] # original names
ln = [] # lowercased names
## build lists within one iteration
for (val, freq) in valuefreqdict.iteritems():
f.append(-1 * freq)
if val in displaytmp:
n.append(displaytmp[val])
else:
n.append(val)
ln.append(val.lower())
## sort by frequency (desc) and then by lowercased name.
return [(n[i], -1 * f[i]) for i in numpy.lexsort([ln, f])]
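Both branches above implement the same ordering: descending frequency, with ties broken by the case-insensitive value. On Python versions with key-based sorting, the cmp-style helper can be expressed as a one-liner; a minimal sketch with a hypothetical name:

```python
def sort_by_freq_then_name(valuefreqdict):
    """Order (value, frequency) pairs by descending frequency,
    breaking ties alphabetically on the lowercased value --
    the same ordering as the two branches above."""
    return sorted(valuefreqdict.items(),
                  key=lambda item: (-item[1], item[0].lower()))
```

Negating the frequency turns the ascending sort into a descending one, which is also what the numpy branch does with its `-1 * freq` keys.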
def profile(p="", f="", c=CFG_SITE_NAME):
"""Profile search time."""
import profile
import pstats
profile.run("perform_request_search(p='%s',f='%s', c='%s')" % (p, f, c), "perform_request_search_profile")
p = pstats.Stats("perform_request_search_profile")
p.strip_dirs().sort_stats("cumulative").print_stats()
return 0
def perform_external_collection_search_with_em(req, current_collection, pattern_list, field,
external_collection, verbosity_level=0, lang=CFG_SITE_LANG,
selected_external_collections_infos=None, em=""):
perform_external_collection_search(req, current_collection, pattern_list, field, external_collection,
verbosity_level, lang, selected_external_collections_infos,
print_overview=em == "" or EM_REPOSITORY["overview"] in em,
print_search_info=em == "" or EM_REPOSITORY["search_info"] in em,
print_see_also_box=em == "" or EM_REPOSITORY["see_also_box"] in em,
print_body=em == "" or EM_REPOSITORY["body"] in em)
diff --git a/modules/websearch/lib/search_engine_query_parser.py b/modules/websearch/lib/search_engine_query_parser.py
index 52ea2fb65..6bcbe8d62 100644
--- a/modules/websearch/lib/search_engine_query_parser.py
+++ b/modules/websearch/lib/search_engine_query_parser.py
@@ -1,1325 +1,1325 @@
# -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2008, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
"""Invenio Search Engine query parsers."""
import re
import string
from datetime import datetime
-from invenio.bibindex_engine_tokenizer import BibIndexFuzzyNameTokenizer as FNT
+from invenio.bibindex_tokenizers.BibIndexAuthorTokenizer import BibIndexAuthorTokenizer as FNT
from invenio.dateutils import GOT_DATEUTIL
if GOT_DATEUTIL:
from invenio.dateutils import du_parser, du_delta, relativedelta
from invenio.logicutils import to_cnf
from invenio.config import CFG_WEBSEARCH_SPIRES_SYNTAX
from invenio.dateutils import strptime
NameScanner = FNT()
class InvenioWebSearchMismatchedParensError(Exception):
"""Exception for parse errors caused by mismatched parentheses."""
def __init__(self, message):
"""Initialization."""
self.message = message
def __str__(self):
"""String representation."""
return repr(self.message)
class SearchQueryParenthesisedParser(object):
"""Search query parser that handles arbitrarily-nested parentheses
Parameters:
* substitution_dict: a dictionary mapping strings to other strings. By
default, maps 'and', 'or' and 'not' to '+', '|', and '-'. Dictionary
values will be treated as valid operators for output.
A note (valkyrie 25.03.2011):
Based on looking through the prod search logs, it is evident that users,
when they are using parentheses to do searches, only run word characters
up against parens when they intend the parens to be part of the word (e.g.
U(1)), and when they are using parentheses to combine operators, they put
a space before and after them. As of writing, this is the behavior that
SQPP now expects, in order that it be able to handle such queries as
e(+)e(-) that contain operators in parentheses that should be interpreted
as words.
"""
def __init__(self, substitution_dict = {'and': '+', 'or': '|', 'not': '-'}):
self.substitution_dict = substitution_dict
self.specials = set(['(', ')', '+', '|', '-', '+ -'])
self.__tl_idx = 0
self.__tl_len = 0
# I think my names are both concise and clear
# pylint: disable=C0103
def _invenio_to_python_logical(self, q):
"""Translate the + and - in invenio query strings into & and ~."""
p = q
p = re.sub('\+ -', '&~', p)
p = re.sub('\+', '&', p)
p = re.sub('-', '~', p)
p = re.sub(' ~', ' & ~', p)
return p
def _python_logical_to_invenio(self, q):
"""Translate the & and ~ in logical expression strings into + and -."""
p = q
p = re.sub('\& ~', '-', p)
p = re.sub('~', '-', p)
p = re.sub('\&', '+', p)
return p
# pylint: enable=C0103
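The '+'/'-' to '&'/'~' translation above can be exercised standalone (same regex sequence as the method; the free-function name is hypothetical):

```python
import re

def invenio_to_python_logical(q):
    """Translate Invenio's '+'/'-' boolean connectives into the
    '&'/'~' forms expected by the CNF machinery (mirrors the
    method above)."""
    p = re.sub(r'\+ -', '&~', q)
    p = re.sub(r'\+', '&', p)
    p = re.sub('-', '~', p)
    p = re.sub(' ~', ' & ~', p)  # a bare negation implies an 'and'
    return p
```

For example, `'p0 + p1 - p2'` becomes `'p0 & p1 & ~ p2'`.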
def parse_query(self, query):
"""Make query into something suitable for search_engine.
This is the main entry point of the class.
Given an expression of the form:
"expr1 or expr2 (expr3 not (expr4 or expr5))"
produces annotated list output suitable for consumption by search_engine,
of the form:
['+', 'expr1', '|', 'expr2', '+', 'expr3 - expr4 | expr5']
parse_query() is a wrapper for self.tokenize() and self.parse().
"""
toklist = self.tokenize(query)
depth, balanced, dummy_d0_p = self.nesting_depth_and_balance(toklist)
if not balanced:
raise SyntaxError("Mismatched parentheses in "+str(toklist))
toklist, var_subs = self.substitute_variables(toklist)
if depth > 1:
toklist = self.tokenize(self.logically_reduce(toklist))
return self.parse(toklist, var_subs)
def substitute_variables(self, toklist):
"""Given a token list, return a copy of token list in which all free
variables are bound with boolean variable names of the form 'pN'.
Additionally, all the substitutable logical operators are exchanged
for their symbolic form and implicit ands are made explicit
e.g., ((author:'ellis, j' and title:quark) or author:stevens jones)
becomes:
((p0 + p1) | p2 + p3)
with the substitution table:
{'p0': "author:'ellis, j'", 'p1': "title:quark",
'p2': "author:stevens", 'p3': "jones" }
Return value is the substituted token list and a copy of the
substitution table.
"""
def labels():
i = 0
while True:
yield 'p'+str(i)
i += 1
def filter_front_ands(toklist):
"""Filter out extra logical connectives and whitespace from the front."""
while toklist[0] == '+' or toklist[0] == '|' or toklist[0] == '':
toklist = toklist[1:]
return toklist
var_subs = {}
labeler = labels()
new_toklist = ['']
cannot_be_anded = self.specials.difference((')',))
for token in toklist:
token = token.lower()
if token in self.substitution_dict:
if token == 'not' and new_toklist[-1] == '+':
new_toklist[-1] = '-'
else:
new_toklist.append(self.substitution_dict[token])
elif token == '(':
if new_toklist[-1] not in self.specials:
new_toklist.append('+')
new_toklist.append(token)
elif token not in self.specials:
# apparently generators are hard for pylint to figure out
# Turns off msg about labeler not having a 'next' method
# pylint: disable=E1101
label = labeler.next()
# pylint: enable=E1101
var_subs[label] = token
if new_toklist[-1] not in cannot_be_anded:
new_toklist.append('+')
new_toklist.append(label)
else:
if token == '-' and new_toklist[-1] == '+':
new_toklist[-1] = '-'
else:
new_toklist.append(token)
return filter_front_ands(new_toklist), var_subs
def nesting_depth_and_balance(self, token_list):
"""Checks that parentheses are balanced and counts how deep they nest"""
depth = 0
maxdepth = 0
depth0_pairs = 0
good_depth = True
for i in range(len(token_list)):
token = token_list[i]
if token == '(':
if depth == 0:
depth0_pairs += 1
depth += 1
if depth > maxdepth:
maxdepth += 1
elif token == ')':
depth -= 1
if depth == -1: # can only happen with unmatched )
good_depth = False # so force depth check to fail
depth = 0 # but keep maxdepth in good range
return maxdepth, depth == 0 and good_depth, depth0_pairs
def logically_reduce(self, token_list):
"""Return token_list in conjunctive normal form as a string.
CNF has the property that there will only ever be one level of
parenthetical nesting, and all distributable operators (such as
the not in -(p | q)) will be fully distributed (as -p + -q).
"""
maxdepth, dummy_balanced, d0_p = self.nesting_depth_and_balance(token_list)
s = ' '.join(token_list)
s = self._invenio_to_python_logical(s)
last_maxdepth = 0
# XXX: sometimes NaryExpr doesn't fully flatten Expr; but it
# usually does in 2 passes. FIXME: diagnose.
while maxdepth != last_maxdepth:
try:
s = str(to_cnf(s))
except SyntaxError:
raise SyntaxError(str(s)+" couldn't be converted to a logic expression.")
last_maxdepth = maxdepth
maxdepth, dummy_balanced, d0_p = self.nesting_depth_and_balance(self.tokenize(s))
if d0_p == 1 and s[0] == '(' and s[-1] == ')': # s can come back with extra parens
s = s[1:-1]
s = self._python_logical_to_invenio(s)
return s
def tokenize(self, query):
"""Given a query string, return a list of tokens from that string.
* Isolates meaningful punctuation: ( ) + | -
* Keeps single- and double-quoted strings together without interpretation.
* Splits everything else on whitespace.
i.e.:
"expr1|expr2 (expr3-(expr4 or expr5))"
becomes:
['expr1', '|', 'expr2', '(', 'expr3', '-', '(', 'expr4', 'or', 'expr5', ')', ')']
special case:
"e(+)e(-)" interprets '+' and '-' as word characters since they are in parens with
word characters run up against them.
it becomes:
['e(+)e(-)']
"""
###
# Invariants:
# * Query is never modified
# * In every loop iteration, querytokens grows to the right
# * The only return point is at the bottom of the function, and the only
# return value is querytokens
###
def get_tokens(s):
"""
Given string s, return a list of s's tokens.
Adds space around special punctuation, then splits on whitespace.
"""
s = ' '+s
s = s.replace('->', '####DATE###RANGE##OP#') # XXX: Save '->'
s = re.sub('(?P<outside>[a-zA-Z0-9_,=:]+)\((?P<inside>[a-zA-Z0-9_,+-/]*)\)',
'#####\g<outside>####PAREN###\g<inside>##PAREN#', s) # XXX: Save U(1) and SL(2,Z)
s = re.sub('####PAREN###(?P<content0>[.0-9/-]*)(?P<plus>[+])(?P<content1>[.0-9/-]*)##PAREN#',
'####PAREN###\g<content0>##PLUS##\g<content1>##PAREN#', s)
s = re.sub('####PAREN###(?P<content0>([.0-9/]|##PLUS##)*)(?P<minus>[-])' +\
'(?P<content1>([.0-9/]|##PLUS##)*)##PAREN#',
'####PAREN###\g<content0>##MINUS##\g<content1>##PAREN#', s) # XXX: Save e(+)e(-)
for char in self.specials:
if char == '-':
s = s.replace(' -', ' - ')
s = s.replace(')-', ') - ')
s = s.replace('-(', ' - (')
else:
s = s.replace(char, ' '+char+' ')
s = re.sub('##PLUS##', '+', s)
s = re.sub('##MINUS##', '-', s) # XXX: Restore e(+)e(-)
s = re.sub('#####(?P<outside>[a-zA-Z0-9_,=:]+)####PAREN###(?P<inside>[a-zA-Z0-9_,+-/]*)##PAREN#',
'\g<outside>(\g<inside>)', s) # XXX: Restore U(1) and SL(2,Z)
s = s.replace('####DATE###RANGE##OP#', '->') # XXX: Restore '->'
return s.split()
querytokens = []
current_position = 0
re_quotes_match = re.compile(r'(?![\\])(".*?[^\\]")' + r"|(?![\\])('.*?[^\\]')")
for match in re_quotes_match.finditer(query):
match_start = match.start()
quoted_region = match.group(0).strip()
# clean the content after the previous quotes and before current quotes
unquoted = query[current_position : match_start]
querytokens.extend(get_tokens(unquoted))
# XXX: In case we end up with e.g. title:, "compton scattering", make it
# title:"compton scattering"
if querytokens and querytokens[-1] and querytokens[-1][-1] == ':':
querytokens[-1] += quoted_region
# XXX: In case we end up with e.g. "expr1",->,"expr2", make it
# "expr1"->"expr2"
elif len(querytokens) >= 2 and querytokens[-1] == '->':
arrow = querytokens.pop()
querytokens[-1] += arrow + quoted_region
else:
# add our newly tokenized content to the token list
querytokens.extend([quoted_region])
# move current position to the end of the tokenized content
current_position = match.end()
# get tokens from the last appearance of quotes until the query end
unquoted = query[current_position : len(query)]
querytokens.extend(get_tokens(unquoted))
return querytokens
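The quote-isolation strategy used by tokenize() above can be sketched in miniature: quoted regions are located first and kept whole, and only the unquoted gaps between them are split after padding the special characters with spaces. This is a simplified illustration (the name `sketch_tokenize` is ours, and the `e(+)e(-)`, `->` and trailing-`title:` special cases handled by the real method are deliberately omitted):

```python
import re

def sketch_tokenize(query):
    # locate quoted regions first; everything between them is split on
    # whitespace after padding the special characters with spaces
    specials = '()+|-'
    re_quotes = re.compile(r'(?<!\\)(".*?[^\\]"|\'.*?[^\\]\')')

    def split_unquoted(chunk):
        for ch in specials:
            chunk = chunk.replace(ch, ' ' + ch + ' ')
        return chunk.split()

    tokens, pos = [], 0
    for match in re_quotes.finditer(query):
        tokens.extend(split_unquoted(query[pos:match.start()]))
        tokens.append(match.group(0).strip())  # quoted region stays whole
        pos = match.end()
    tokens.extend(split_unquoted(query[pos:]))
    return tokens
```

Joining a trailing `title:` token onto a following quoted region, as the real method does, would be a small extension of the loop body.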
def parse(self, token_list, variable_substitution_dict=None):
"""Make token_list consumable by search_engine.
Turns a list of tokens and a variable mapping into a grouped list
of subexpressions in the format suitable for use by search_engine,
e.g.:
['+', 'searchterm', '-', 'searchterm to exclude', '|', 'another term']
Incidentally, this works recursively so parens can cause arbitrarily
deep nestings. But since the search_engine doesn't know about nested
structures, we need to flatten the input structure first.
"""
###
# Invariants:
# * Token list is never modified
# * Balanced parens remain balanced; unbalanced parens are an error
# * Individual tokens may only be exchanged for items in the variable
# substitution dict; otherwise they pass through unmolested
# * Return value is built up mostly as a stack
###
op_symbols = self.substitution_dict.values()
self.__tl_idx = 0
self.__tl_len = len(token_list)
def inner_parse(token_list, open_parens=False):
'''
Although it's not part of the public API, it seems sensible to comment
this function a bit. A "distributed token" here is a token (e.g. a
second-order operator) which needs to be distributed across the other
tokens inside the inner parens.
'''
if open_parens:
parsed_values = []
else:
parsed_values = ['+']
i = 0
while i < len(token_list):
token = token_list[i]
if i > 0 and parsed_values[-1] not in op_symbols:
parsed_values.append('+')
if token == '(':
# if we need to distribute something over the tokens inside the parens
# we will know it because... it will end in a :
# that part of the list will be 'px', '+', '('
distributing = (len(parsed_values) > 2 and parsed_values[-2].endswith(':') and parsed_values[-1] == '+')
if distributing:
# we don't need the + if we are distributing
parsed_values = parsed_values[:-1]
offset = self.__tl_len - len(token_list)
inner_value = inner_parse(token_list[i+1:], True)
inner_value = ' '.join(inner_value)
if distributing:
if len(self.tokenize(inner_value)) == 1:
parsed_values[-1] = parsed_values[-1] + inner_value
elif "'" in inner_value:
parsed_values[-1] = parsed_values[-1] + '"' + inner_value + '"'
elif '"' in inner_value:
parsed_values[-1] = parsed_values[-1] + "'" + inner_value + "'"
else:
parsed_values[-1] = parsed_values[-1] + '"' + inner_value + '"'
else:
parsed_values.append(inner_value)
self.__tl_idx += 1
i = self.__tl_idx - offset
elif token == ')':
if parsed_values[-1] in op_symbols:
parsed_values = parsed_values[:-1]
if len(parsed_values) > 1 and parsed_values[0] == '+' and parsed_values[1] in op_symbols:
parsed_values = parsed_values[1:]
return parsed_values
elif token in op_symbols:
if len(parsed_values) > 0:
parsed_values[-1] = token
else:
parsed_values = [token]
else:
if variable_substitution_dict != None and token in variable_substitution_dict:
token = variable_substitution_dict[token]
parsed_values.append(token)
i += 1
self.__tl_idx += 1
# If we have an extra start symbol, remove the default one
if parsed_values[1] in op_symbols:
parsed_values = parsed_values[1:]
return parsed_values
return inner_parse(token_list, False)
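The flattening that parse() performs can be illustrated with a stripped-down sketch that handles only a flat (paren-free) token list. `sketch_flatten` is an illustrative name, not part of this module; the implicit-`+` insertion mirrors the `parsed_values.append('+')` step above:

```python
def sketch_flatten(tokens):
    # op symbols as in substitution_dict.values(): '+', '-', '|'
    ops = {'+', '-', '|'}
    out = []
    for tok in tokens:
        if tok in ops:
            if out and out[-1] == '+':
                out[-1] = tok   # explicit operator replaces the implicit '+'
            else:
                out.append(tok)
        else:
            if not out or out[-1] not in ops:
                out.append('+')  # implicit '+' between plain terms
            out.append(tok)
    return out
```

The result alternates operator, term, operator, term, which is the shape search_engine consumes.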
class SpiresToInvenioSyntaxConverter:
"""Converts queries defined with SPIRES search syntax into queries
that use Invenio search syntax.
"""
# Constants defining fields
_DATE_ADDED_FIELD = 'datecreated:'
_DATE_UPDATED_FIELD = 'datemodified:'
_DATE_FIELD = 'year:'
_A_TAG = 'author:'
_EA_TAG = 'exactauthor:'
# Dictionary containing the matches between SPIRES keywords
# and their corresponding Invenio keywords or fields
# SPIRES keyword : Invenio keyword or field
_SPIRES_TO_INVENIO_KEYWORDS_MATCHINGS = {
# address
'address' : 'address:',
# affiliation
'affiliation' : 'affiliation:',
'affil' : 'affiliation:',
'aff' : 'affiliation:',
'af' : 'affiliation:',
'institution' : 'affiliation:',
'inst' : 'affiliation:',
# any field
'any' : 'anyfield:',
# author count
'ac' : 'authorcount:',
# bulletin
'bb' : 'reportnumber:',
'bbn' : 'reportnumber:',
'bull' : 'reportnumber:',
'bulletin-bd' : 'reportnumber:',
'bulletin-bd-no' : 'reportnumber:',
'eprint' : 'reportnumber:',
# citation / reference
'c' : 'reference:',
'citation' : 'reference:',
'cited' : 'reference:',
'jour-vol-page' : 'reference:',
'jvp' : 'reference:',
# collaboration
'collaboration' : 'collaboration:',
'collab-name' : 'collaboration:',
'cn' : 'collaboration:',
# conference number
'conf-number' : '111__g:',
'cnum' : '773__w:',
# country
'cc' : '044__a:',
'country' : '044__a:',
# date
'date': _DATE_FIELD,
'd': _DATE_FIELD,
# date added
'date-added': _DATE_ADDED_FIELD,
'dadd': _DATE_ADDED_FIELD,
'da': _DATE_ADDED_FIELD,
# date updated
'date-updated': _DATE_UPDATED_FIELD,
'dupd': _DATE_UPDATED_FIELD,
'du': _DATE_UPDATED_FIELD,
# first author
'fa' : 'firstauthor:',
'first-author' : 'firstauthor:',
# author
'a' : 'author:',
'au' : 'author:',
'author' : 'author:',
'name' : 'author:',
# exact author
# this is not a real keyword match. It is pseudo keyword that
# will be replaced later with author search
'ea' : 'exactauthor:',
'exact-author' : 'exactauthor:',
# experiment
'exp' : 'experiment:',
'experiment' : 'experiment:',
'expno' : 'experiment:',
'sd' : 'experiment:',
'se' : 'experiment:',
# journal
'journal' : 'journal:',
'j' : 'journal:',
'published_in' : 'journal:',
'spicite' : 'journal:',
'vol' : 'journal:',
# journal page
'journal-page' : '773__c:',
'jp' : '773__c:',
# journal year
'journal-year' : '773__y:',
'jy' : '773__y:',
# key
'key' : '970__a:',
'irn' : '970__a:',
'record' : '970__a:',
'document' : '970__a:',
'documents' : '970__a:',
# keywords
'k' : 'keyword:',
'keywords' : 'keyword:',
'kw' : 'keyword:',
# note
'note' : '500__a:',
# old title
'old-title' : '246__a:',
'old-t' : '246__a:',
'ex-ti' : '246__a:',
'et' : '246__a:',
#postal code
'postalcode' : 'postalcode:',
'zip' : 'postalcode:',
'cc' : 'postalcode:', # FIXME: duplicate 'cc' key; shadows the country-code mapping above
# ppf subject
'ppf-subject' : '650__a:',
'status' : '650__a:',
# recid
'recid' : 'recid:',
# report number
'r' : 'reportnumber:',
'rn' : 'reportnumber:',
'rept' : 'reportnumber:',
'report' : 'reportnumber:',
'report-num' : 'reportnumber:',
# title
't' : 'title:',
'ti' : 'title:',
'title' : 'title:',
'with-language' : 'title:',
# fulltext
'fulltext' : 'fulltext:',
'ft' : 'fulltext:',
# topic
'topic' : '695__a:',
'tp' : '695__a:',
'hep-topic' : '695__a:',
'desy-keyword' : '695__a:',
'dk' : '695__a:',
# topcite
'topcit' : 'cited:',
'topcite' : 'cited:',
# captions
'caption' : 'caption:',
# category
'arx' : '037__c:',
'category' : '037__c:',
# primarch
'parx' : '037__c:',
'primarch' : '037__c:',
# texkey
'texkey' : '035__z:',
# type code
'tc' : 'collection:',
'ty' : 'collection:',
'type' : 'collection:',
'type-code' : 'collection:',
'scl': 'collection:',
'ps': 'collection:',
# field code
'f' : 'subject:',
'fc' : 'subject:',
'field' : 'subject:',
'field-code' : 'subject:',
'subject' : 'subject:',
# coden
'bc' : 'journal:',
'browse-only-indx' : 'journal:',
'coden' : 'journal:',
'journal-coden' : 'journal:',
# jobs specific codes
'job' : 'title:',
'position' : 'title:',
'region' : 'region:',
'continent' : 'region:',
'deadline' : '046__a:',
'rank' : 'rank:',
# replace all the keywords that have no Invenio match with the empty string;
# this removes the noise from unknown keywords in the search
# and searches all fields for the words following those keywords
# energy
'e' : '',
'energy' : '',
'energyrange-code' : '',
# exact experiment number
'ee' : '',
'exact-exp' : '',
'exact-expno' : '',
# hidden note
'hidden-note' : '',
'hn' : '',
# ppf
'ppf' : '',
'ppflist' : '',
# slac topics
'ppfa' : '',
'slac-topics' : '',
'special-topics' : '',
'stp' : '',
# test index
'test' : '',
'testindex' : '',
}
_SECOND_ORDER_KEYWORD_MATCHINGS = {
'rawref' : 'rawref:',
'refersto' : 'refersto:',
'refs': 'refersto:',
'citedby' : 'citedby:'
}
_INVENIO_KEYWORDS_FOR_SPIRES_PHRASE_SEARCHES = [
'affiliation:',
#'cited:', # topcite is technically a phrase index - this isn't necessary
'773__y:', # journal-year
'773__c:', # journal-page
'773__w:', # cnum
'044__a:', # country code
'subject:', # field code
'collection:', # type code
'035__z:', # texkey
# also exact expno, corp-auth, url, abstract, doi, mycite, citing
# but we have no invenio equivalents for these ATM
]
def __init__(self):
"""Initialize the state of the converter"""
self._months = {}
self._month_name_to_month_number = {}
self._init_months()
self._compile_regular_expressions()
def _compile_regular_expressions(self):
"""Compiles some of the regular expressions that are used in the class
for higher performance."""
# regular expression that matches the contents in single and double quotes
# taking into account whether they are escaped.
self._re_quotes_match = re.compile(r'(?![\\])(".*?[^\\]")' + r"|(?![\\])('.*?[^\\]')")
# match cases where a keyword distributes across a conjunction
self._re_distribute_keywords = re.compile(r'''(?ix) # verbose, ignorecase on
\b(?P<keyword>\S*:) # a keyword is anything that's not whitespace with a colon
(?P<content>[^:]+?)\s* # content is the part that comes after the keyword; it should NOT
# have colons in it! that implies that we might be distributing
# a keyword OVER another keyword. see ticket #701
(?P<combination>\ and\ not\ |\ and\ |\ or\ |\ not\ )\s*
(?P<last_content>[^:]*?) # oh look, content without a keyword!
(?=\ and\ |\ or\ |\ not\ |$)''')
# massaging SPIRES quirks
self._re_pattern_IRN_search = re.compile(r'970__a:(?P<irn>\d+)')
self._re_topcite_match = re.compile(r'(?P<x>cited:\d+)\+')
# regular expression that matches author patterns
# and author patterns with second-order-ops on top
# does not match names with " or ' around them, since
# those should not be touched
self._re_author_match = re.compile(r'''(?ix) # verbose, ignorecase
\b((?P<secondorderop>[^\s]+:)?) # do we have a second-order-op on top?
((?P<first>first)?)author:(?P<name>
[^\'\"] # first character not a quotemark
[^()]*? # some stuff that isn't parentheses (that is dealt with in pp)
[^\'\"]) # last character not a quotemark
(?=\ and\ not\ |\ and\ |\ or\ |\ not\ |$)''')
# regular expression that matches exact author patterns
# the group defined in this regular expression is used in method
# _convert_spires_exact_author_search_to_invenio_author_search(...)
# in case of changes correct also the code in this method
self._re_exact_author_match = re.compile(r'\b((?P<secondorderop>[^\s]+:)?)exactauthor:(?P<author_name>[^\'\"].*?[^\'\"]\b)(?= and not | and | or | not |$)', re.IGNORECASE)
# match a second-order operator with no operator following it
self._re_second_order_op_no_index_match = re.compile(r'''(?ix) # ignorecase, verbose
(^|\b|:)(?P<second_order_op>(refersto|citedby):)
(?P<search_terms>[^\"\'][^:]+?) # anything without an index should be absorbed here
\s*
(?P<conjunction_or_next_keyword>(\ and\ |\ not\ |\ or\ |\ \w+:\w+|$))
''')
# match search term, its content (words that are searched) and
# the operator preceding the term.
self._re_search_term_pattern_match = re.compile(r'\b(?P<combine_operator>find|and|or|not)\s+(?P<search_term>\S+:)(?P<search_content>.+?)(?= and not | and | or | not |$)', re.IGNORECASE)
# match journal searches
self._re_search_term_is_journal = re.compile(r'''(?ix) # verbose, ignorecase
\b(?P<leading>(find|and|or|not)\s+journal:) # first combining operator and index
(?P<search_content>.+?) # what we are searching
(?=\ and\ not\ |\ and\ |\ or\ |\ not\ |$)''')
# regular expression matching date after pattern
self._re_date_after_match = re.compile(r'\b(?P<searchop>d|date|dupd|dadd|da|date-added|du|date-updated)\b\s*(after|>)\s*(?P<search_content>.+?)(?= and not | and | or | not |$)', re.IGNORECASE)
# regular expression matching date before pattern
self._re_date_before_match = re.compile(r'\b(?P<searchop>d|date|dupd|dadd|da|date-added|du|date-updated)\b\s*(before|<)\s*(?P<search_content>.+?)(?= and not | and | or | not |$)', re.IGNORECASE)
# match date searches which have been keyword-substituted
self._re_keysubbed_date_expr = re.compile(r'\b(?P<term>(' + self._DATE_ADDED_FIELD + ')|(' + self._DATE_UPDATED_FIELD + ')|(' + self._DATE_FIELD + '))(?P<content>.+?)(?= and not | and | or | not |$)', re.IGNORECASE)
# for finding (and changing) a variety of different SPIRES search keywords
self._re_spires_find_keyword = re.compile('^(f|fin|find)\s+', re.IGNORECASE)
# for finding boolean expressions
self._re_boolean_expression = re.compile(r' and | or | not | and not ')
# patterns for subbing out spaces within quotes temporarily
self._re_pattern_single_quotes = re.compile("'(.*?)'")
self._re_pattern_double_quotes = re.compile("\"(.*?)\"")
self._re_pattern_regexp_quotes = re.compile("\/(.*?)\/")
self._re_pattern_space = re.compile("__SPACE__")
self._re_pattern_equals = re.compile("__EQUALS__")
# for date math:
self._re_datemath = re.compile(r'(?P<datestamp>.+)\s+(?P<operator>[-+])\s+(?P<units>\d+)')
def is_applicable(self, query):
"""Is this converter applicable to this query?
Return true if query begins with find, fin, or f, or if it contains
a SPIRES-specific keyword (a, t, etc.), or if it contains the invenio
author: field search. """
if not CFG_WEBSEARCH_SPIRES_SYNTAX:
#SPIRES syntax is switched off
return False
query = query.lower()
if self._re_spires_find_keyword.match(query):
#leading 'find' is present and SPIRES syntax is switched on
return True
if CFG_WEBSEARCH_SPIRES_SYNTAX > 1:
for word in query.split(' '):
if self._SPIRES_TO_INVENIO_KEYWORDS_MATCHINGS.has_key(word):
return True
return False
def convert_query(self, query):
"""Convert SPIRES syntax queries to Invenio syntax.
Do nothing to queries not in SPIRES syntax."""
# SPIRES syntax allows searches with 'find' or 'fin'.
if self.is_applicable(query):
query = re.sub(self._re_spires_find_keyword, 'find ', query)
if not query.startswith('find'):
query = 'find ' + query
# a holdover from SPIRES syntax is e.g. date = 2000 rather than just date 2000
query = self._remove_extraneous_equals_signs(query)
# these calls are before keywords replacement because when keywords
# are replaced, date keyword is replaced by specific field search
# and the DATE keyword is not match in DATE BEFORE or DATE AFTER
query = self._convert_spires_date_before_to_invenio_span_query(query)
query = self._convert_spires_date_after_to_invenio_span_query(query)
# call to _replace_spires_keywords_with_invenio_keywords should be at the
# beginning because the next methods use the result of the replacement
query = self._standardize_already_invenio_keywords(query)
query = self._replace_spires_keywords_with_invenio_keywords(query)
query = self._normalise_journal_page_format(query)
query = self._distribute_keywords_across_combinations(query)
query = self._distribute_and_quote_second_order_ops(query)
query = self._convert_dates(query)
query = self._convert_irns_to_spires_irns(query)
query = self._convert_topcite_to_cited(query)
query = self._convert_spires_author_search_to_invenio_author_search(query)
query = self._convert_spires_exact_author_search_to_invenio_author_search(query)
query = self._convert_spires_truncation_to_invenio_truncation(query)
query = self._expand_search_patterns(query)
# remove FIND in the beginning of the query as it is not necessary in Invenio
query = query[4:]
query = query.strip()
return query
def _init_months(self):
"""Defines a dictionary matching the name
of the month with its corresponding number"""
# this dictionary is used when generating match patterns for months
self._months = {'jan':'01', 'january':'01',
'feb':'02', 'february':'02',
'mar':'03', 'march':'03',
'apr':'04', 'april':'04',
'may':'05',
'jun':'06', 'june':'06',
'jul':'07', 'july':'07',
'aug':'08', 'august':'08',
'sep':'09', 'september':'09',
'oct':'10', 'october':'10',
'nov':'11', 'november':'11',
'dec':'12', 'december':'12'}
# this dictionary is used to transform name of the month
# to a number used in the date format. By this reason it
# contains also the numbers itself to simplify the conversion
self._month_name_to_month_number = {'1':'01', '01':'01',
'2':'02', '02':'02',
'3':'03', '03':'03',
'4':'04', '04':'04',
'5':'05', '05':'05',
'6':'06', '06':'06',
'7':'07', '07':'07',
'8':'08', '08':'08',
'9':'09', '09':'09',
'10':'10',
'11':'11',
'12':'12',}
# combine it with months in order to cover all the cases
self._month_name_to_month_number.update(self._months)
def _get_month_names_match(self):
"""Retruns part of a patter that matches month in a date"""
months_match = ''
for month_name in self._months.keys():
months_match = months_match + month_name + '|'
months_match = r'\b(' + months_match[0:-1] + r')\b'
return months_match
def _convert_dates(self, query):
"""Tries to find dates in query and make them look like ISO-8601."""
def parse_relative_unit(date_str):
units = 0
datemath = self._re_datemath.match(date_str)
if datemath:
date_str = datemath.group('datestamp')
units = int(datemath.group('operator') + datemath.group('units'))
return date_str, units
def guess_best_year(d):
if d.year > datetime.today().year + 10:
return d - du_delta(years=100)
else:
return d
def parse_date_unit(date_str):
begin = date_str
end = None
# First split, relative time directive
# e.g. "2012-01-01 - 3" to ("2012-01-01", -3)
date_str, relative_units = parse_relative_unit(date_str)
try:
d = strptime(date_str, '%Y-%m-%d')
d += du_delta(days=relative_units)
return datetime.strftime(d, '%Y-%m-%d'), end
except ValueError:
pass
try:
d = strptime(date_str, '%y-%m-%d')
d += du_delta(days=relative_units)
d = guess_best_year(d)
return datetime.strftime(d, '%Y-%m-%d'), end
except ValueError:
pass
try:
d = strptime(date_str, '%Y-%m')
d += du_delta(months=relative_units)
return datetime.strftime(d, '%Y-%m'), end
except ValueError:
pass
try:
d = strptime(date_str, '%Y')
d += du_delta(years=relative_units)
return datetime.strftime(d, '%Y'), end
except ValueError:
pass
try:
d = strptime(date_str, '%y')
d += du_delta(days=relative_units)
d = guess_best_year(d)
return datetime.strftime(d, '%Y'), end
except ValueError:
pass
try:
d = strptime(date_str, '%b %y')
d = guess_best_year(d)
return datetime.strftime(d, '%Y-%m'), end
except ValueError:
pass
if 'this week' in date_str:
# Past sunday to today
# 'This week' is iffy: not sure whether it should
# start on sunday or monday
begin = datetime.today()
begin += du_delta(weekday=relativedelta.SU(-1))
end = datetime.today()
begin = datetime.strftime(begin, '%Y-%m-%d')
end = datetime.strftime(end, '%Y-%m-%d')
elif 'last week' in date_str:
# Previous week's sunday to saturday
# Same sunday/monday ambiguity as 'this week'
begin = datetime.today()
begin += du_delta(weekday=relativedelta.SU(-2))
end = begin + du_delta(weekday=relativedelta.SA(1))
begin = datetime.strftime(begin, '%Y-%m-%d')
end = datetime.strftime(end, '%Y-%m-%d')
elif 'this month' in date_str:
d = datetime.today()
begin = datetime.strftime(d, '%Y-%m')
elif 'last month' in date_str:
d = datetime.today() - du_delta(months=1)
begin = datetime.strftime(d, '%Y-%m')
elif 'yesterday' in date_str:
d = datetime.today() - du_delta(days=1)
begin = datetime.strftime(d, '%Y-%m-%d')
elif 'today' in date_str:
start = datetime.today()
start += du_delta(days=relative_units)
begin = datetime.strftime(start, '%Y-%m-%d')
elif date_str.strip() == '0':
begin = '0'
else:
default = datetime(datetime.today().year, 1, 1)
try:
d = du_parser.parse(date_str, default=default)
except ValueError:
begin = date_str
else:
begin = datetime.strftime(d, '%Y-%m-%d')
return begin, end
def mangle_with_dateutils(query):
result = ''
position = 0
for match in self._re_keysubbed_date_expr.finditer(query):
result += query[position : match.start()]
datestamp = match.group('content')
if '->' in datestamp:
begin_unit, end_unit = datestamp.split('->', 1)
begin, dummy = parse_date_unit(begin_unit)
end, dummy = parse_date_unit(end_unit)
else:
begin, end = parse_date_unit(datestamp)
if end:
daterange = '%s->%s' % (begin, end)
else:
daterange = begin
result += match.group('term') + daterange
position = match.end()
result += query[position : ]
return result
if GOT_DATEUTIL:
query = mangle_with_dateutils(query)
# else do nothing with the dates
return query
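The relative-date arithmetic above (e.g. "2012-01-04 - 3" meaning three days before that date) can be sketched standalone. This sketch reuses the same regex shape as `_re_datemath`, with the stdlib's `timedelta` standing in for dateutil's `du_delta`, and handles only the `%Y-%m-%d` format:

```python
import re
from datetime import datetime, timedelta

# same shape as the _re_datemath pattern compiled above
RE_DATEMATH = re.compile(r'(?P<datestamp>.+)\s+(?P<operator>[-+])\s+(?P<units>\d+)')

def sketch_parse_relative(date_str):
    # split off the trailing "+ N" / "- N" directive, if any
    units = 0
    m = RE_DATEMATH.match(date_str)
    if m:
        date_str = m.group('datestamp')
        units = int(m.group('operator') + m.group('units'))
    # shift the date and reserialize as ISO-8601
    d = datetime.strptime(date_str, '%Y-%m-%d') + timedelta(days=units)
    return d.strftime('%Y-%m-%d')
```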
def _convert_irns_to_spires_irns(self, query):
"""Prefix IRN numbers with SPIRES- so they match the INSPIRE format."""
def create_replacement_pattern(match):
"""method used for replacement with regular expression"""
return '970__a:SPIRES-' + match.group('irn')
query = self._re_pattern_IRN_search.sub(create_replacement_pattern, query)
return query
def _convert_topcite_to_cited(self, query):
"""Replace SPIRES topcite x+ with cited:x->999999999"""
def create_replacement_pattern(match):
"""method used for replacement with regular expression"""
return match.group('x') + '->999999999'
query = self._re_topcite_match.sub(create_replacement_pattern, query)
return query
def _convert_spires_date_after_to_invenio_span_query(self, query):
"""Converts date after SPIRES search term into invenio span query"""
def create_replacement_pattern(match):
"""method used for replacement with regular expression"""
return match.group('searchop') + ' ' + match.group('search_content') + '->9999'
query = self._re_date_after_match.sub(create_replacement_pattern, query)
return query
def _convert_spires_date_before_to_invenio_span_query(self, query):
"""Converts date before SPIRES search term into invenio span query"""
# method used for replacement with regular expression
def create_replacement_pattern(match):
return match.group('searchop') + ' ' + '0->' + match.group('search_content')
query = self._re_date_before_match.sub(create_replacement_pattern, query)
return query
def _expand_search_patterns(self, query):
"""Expands search queries.
If a search term is followed by several words e.g.
author:ellis or title:THESE THREE WORDS it is expanded to
author:ellis or (title:THESE and title:THREE...)
All keywords are thus expanded. XXX: this may lead to surprising
results for any later parsing stages if we're not careful.
"""
def create_replacements(term, content):
result = ''
content = content.strip()
# replace spaces within quotes by __SPACE__ temporarily:
content = self._re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), ' ', '__SPACE__')+"'", content)
content = self._re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), ' ', '__SPACE__')+"\"", content)
content = self._re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), ' ', '__SPACE__')+"/", content)
if term in self._INVENIO_KEYWORDS_FOR_SPIRES_PHRASE_SEARCHES \
and not self._re_boolean_expression.search(content) and ' ' in content:
# the case of things which should be searched as phrases
result = term + '"' + content + '"'
else:
words = content.split()
if len(words) == 0:
# this should almost never happen; it requires the user to type e.g. 'find a junk:'
result = term
elif len(words) == 1:
# this is more common but still occasional
result = term + words[0]
else:
# general case
result = '(' + term + words[0]
for word in words[1:]:
result += ' and ' + term + word
result += ')'
# replace back __SPACE__ by spaces:
result = self._re_pattern_space.sub(" ", result)
return result.strip()
result = ''
current_position = 0
for match in self._re_search_term_pattern_match.finditer(query):
result += query[current_position : match.start()]
result += ' ' + match.group('combine_operator') + ' '
result += create_replacements(match.group('search_term'), match.group('search_content'))
current_position = match.end()
result += query[current_position : len(query)]
return result.strip()
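The multi-word expansion performed by create_replacements() can be sketched without the quoting and phrase-index machinery; `sketch_expand` is an illustrative name only:

```python
def sketch_expand(term, content):
    words = content.split()
    if len(words) <= 1:
        # zero or one word: nothing to distribute
        return term + content.strip()
    # distribute the keyword over every word and AND the atoms together
    return '(' + ' and '.join(term + word for word in words) + ')'
```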
def _remove_extraneous_equals_signs(self, query):
"""In SPIRES, both date = 2000 and date 2000 are acceptable. Get rid of the ="""
query = self._re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), '=', '__EQUALS__')+"'", query)
query = self._re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), '=', '__EQUALS__')+'\"', query)
query = self._re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), '=', '__EQUALS__')+"/", query)
query = query.replace('=', '')
query = self._re_pattern_equals.sub("=", query)
return query
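The protect-then-strip trick used here (stash '=' occurring inside quotes, drop every remaining '=', restore the stashed ones) works like this minimal sketch, which handles only double quotes:

```python
import re

RE_DQUOTES = re.compile(r'"(.*?)"')

def sketch_strip_equals(query):
    # stash '=' occurring inside double quotes...
    query = RE_DQUOTES.sub(
        lambda m: '"' + m.group(1).replace('=', '__EQUALS__') + '"', query)
    # ...drop every remaining '='...
    query = query.replace('=', '')
    # ...and restore the stashed ones
    return query.replace('__EQUALS__', '=')
```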
def _convert_spires_truncation_to_invenio_truncation(self, query):
"""Replace SPIRES truncation symbol # with invenio trancation symbol *"""
return query.replace('#', '*')
def _convert_spires_exact_author_search_to_invenio_author_search(self, query):
"""Converts SPIRES search patterns for exact author into search pattern
for invenio"""
# method used for replacement with regular expression
def create_replacement_pattern(match):
# the regular expression where this group name is defined is in
# the method _compile_regular_expressions()
return self._EA_TAG + '"' + match.group('author_name') + '"'
query = self._re_exact_author_match.sub(create_replacement_pattern, query)
return query
def _convert_spires_author_search_to_invenio_author_search(self, query):
"""Converts SPIRES search patterns for authors to search patterns in invenio
that give similar results to the spires search.
"""
# result of the replacement
result = ''
current_position = 0
for match in self._re_author_match.finditer(query):
result += query[current_position : match.start() ]
if match.group('secondorderop'):
result += match.group('secondorderop')
- scanned_name = NameScanner.scan(match.group('name'))
+ scanned_name = NameScanner.scan_string_for_phrases(match.group('name'))
author_atoms = self._create_author_search_pattern_from_fuzzy_name_dict(scanned_name)
if match.group('first'):
author_atoms = author_atoms.replace('author:', 'firstauthor:')
if author_atoms.find(' ') == -1:
result += author_atoms + ' '
else:
result += '(' + author_atoms + ') '
current_position = match.end()
result += query[current_position : len(query)]
return result
def _create_author_search_pattern_from_fuzzy_name_dict(self, fuzzy_name):
"""Creates an invenio search pattern for an author from a fuzzy name dict"""
author_name = ''
author_middle_name = ''
author_surname = ''
full_search = ''
if len(fuzzy_name['nonlastnames']) > 0:
author_name = fuzzy_name['nonlastnames'][0]
if len(fuzzy_name['nonlastnames']) == 2:
author_middle_name = fuzzy_name['nonlastnames'][1]
if len(fuzzy_name['nonlastnames']) > 2:
author_middle_name = ' '.join(fuzzy_name['nonlastnames'][1:])
if fuzzy_name['raw']:
full_search = fuzzy_name['raw']
author_surname = ' '.join(fuzzy_name['lastnames'])
NAME_IS_INITIAL = (len(author_name) == 1)
NAME_IS_NOT_INITIAL = not NAME_IS_INITIAL
# we expect to have at least surname
if author_surname == '' or author_surname == None:
return ''
# ellis ---> "author:ellis"
#if author_name == '' or author_name == None:
if not author_name:
return self._A_TAG + author_surname
# ellis, j ---> "ellis, j*"
if NAME_IS_INITIAL and not author_middle_name:
return self._A_TAG + '"' + author_surname + ', ' + author_name + '*"'
# if there is a middle name we expect to have also a first name and surname
# ellis, j. r. ---> ellis, j* r*
# j r ellis ---> ellis, j* r*
# ellis, john r. ---> author:ellis, j* r* or exactauthor:ellis, j r or exactauthor:ellis, jo r
if author_middle_name:
search_pattern = self._A_TAG + '"' + author_surname + ', ' + author_name + '*' + ' ' + author_middle_name.replace(" ","* ") + '*"'
if NAME_IS_NOT_INITIAL:
for i in range(1, len(author_name)):
search_pattern += ' or ' + self._EA_TAG + "\"%s, %s %s\"" % (author_surname, author_name[0:i], author_middle_name)
return search_pattern
# ellis, jacqueline ---> "ellis, jacqueline" or "ellis, j.*" or "ellis, j" or "ellis, ja.*" or "ellis, ja" or "ellis, jacqueline *, ellis, j *"
# in case we don't use SPIRES data, the trailing dot is omitted.
search_pattern = self._A_TAG + '"' + author_surname + ', ' + author_name + '*"'
search_pattern += " or " + self._EA_TAG + "\"%s, %s *\"" % (author_surname, author_name[0])
if NAME_IS_NOT_INITIAL:
for i in range(1,len(author_name)):
search_pattern += ' or ' + self._EA_TAG + "\"%s, %s\"" % (author_surname, author_name[0:i])
search_pattern += ' or %s"%s, *"' % (self._A_TAG, full_search)
return search_pattern
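The two simplest branches above (surname only; surname plus a single initial) can be sketched standalone. `sketch_author_pattern` is illustrative only; the middle-name and full-first-name expansions are omitted:

```python
def sketch_author_pattern(surname, first_name=''):
    if not surname:
        # we expect at least the surname
        return ''
    if not first_name:
        # ellis ---> author:ellis
        return 'author:' + surname
    if len(first_name) == 1:
        # ellis, j ---> author:"ellis, j*"
        return 'author:"%s, %s*"' % (surname, first_name)
    raise NotImplementedError('full-name/middle-name expansion not sketched')
```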
def _normalise_journal_page_format(self, query):
"""Phys.Lett, 0903, 024 -> Phys.Lett,0903,024"""
def _is_triple(search):
return (len(re.findall('\s+', search)) + len(re.findall(':', search))) == 2
def _normalise_spaces_and_colons_to_commas_in_triple(search):
if not _is_triple(search):
return search
search = re.sub(',\s+', ',', search)
search = re.sub('\s+', ',', search)
search = re.sub(':', ',', search)
return search
result = ""
current_position = 0
for match in self._re_search_term_is_journal.finditer(query):
result += query[current_position : match.start()]
result += match.group('leading')
search = match.group('search_content')
search = _normalise_spaces_and_colons_to_commas_in_triple(search)
result += search
current_position = match.end()
result += query[current_position : ]
return result
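The triple detection and normalisation above can be sketched standalone: exactly two separators (whitespace runs plus colons) identify a journal,volume,page triple, which is rewritten with bare commas:

```python
import re

def sketch_normalise_triple(search):
    # a journal reference triple has exactly two separators:
    # whitespace runs and/or colons
    n_separators = len(re.findall(r'\s+', search)) + search.count(':')
    if n_separators != 2:
        return search
    search = re.sub(r',\s+', ',', search)   # ", " -> ","
    search = re.sub(r'\s+', ',', search)    # bare spaces -> ","
    return search.replace(':', ',')         # colons -> ","
```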
def _standardize_already_invenio_keywords(self, query):
"""Replaces invenio keywords kw with "and kw" in order to
parse them correctly further down the line."""
unique_invenio_keywords = set(self._SPIRES_TO_INVENIO_KEYWORDS_MATCHINGS.values()) |\
set(self._SECOND_ORDER_KEYWORD_MATCHINGS.values())
unique_invenio_keywords.remove('') # for the ones that don't have invenio equivalents
for invenio_keyword in unique_invenio_keywords:
query = re.sub("(?<!... \+|... -| and |. or | not |....:)"+invenio_keyword, "and "+invenio_keyword, query)
query = re.sub("\+"+invenio_keyword, "and "+invenio_keyword, query)
query = re.sub("-"+invenio_keyword, "and not "+invenio_keyword, query)
return query
def _replace_spires_keywords_with_invenio_keywords(self, query):
"""Replaces SPIRES keywords that have directly
corresponding Invenio keywords
Replacements are done only in content that is not in quotes."""
# result of the replacement
result = ""
current_position = 0
for match in self._re_quotes_match.finditer(query):
# clean the content after the previous quotes and before current quotes
cleanable_content = query[current_position : match.start()]
cleanable_content = self._replace_all_spires_keywords_in_string(cleanable_content)
# get the content in the quotes (group one matches double
# quotes, group 2 singles)
if match.group(1):
quoted_content = match.group(1)
elif match.group(2):
quoted_content = match.group(2)
# append the processed content to the result
result = result + cleanable_content + quoted_content
# move current position at the end of the processed content
current_position = match.end()
# clean the content from the last appearance of quotes till the end of the query
cleanable_content = query[current_position : len(query)]
cleanable_content = self._replace_all_spires_keywords_in_string(cleanable_content)
result = result + cleanable_content
return result
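As a standalone illustration (not the Invenio implementation), the "replace only outside quotes" walk used above can be sketched as follows; the quote regex and the plain `str.replace` transformation are simplified assumptions chosen for clarity:

```python
import re

# matches a double-quoted or single-quoted region
_re_quotes = re.compile(r'"([^"]*)"|\'([^\']*)\'')

def replace_outside_quotes(query, old, new):
    """Apply a replacement only to text that is not inside quotes."""
    result = ""
    position = 0
    for match in _re_quotes.finditer(query):
        # transform only the stretch before this quoted region
        result += query[position:match.start()].replace(old, new)
        # copy the quoted region through untouched (quotes included)
        result += match.group(0)
        position = match.end()
    # transform the tail after the last quoted region
    result += query[position:].replace(old, new)
    return result
```

The loop structure mirrors the method above: advance a cursor from quoted region to quoted region, cleaning only the gaps between them.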
def _replace_all_spires_keywords_in_string(self, query):
"""Replaces all SPIRES keywords in the string with their
corresponding Invenio keywords"""
for spires_keyword, invenio_keyword in self._SPIRES_TO_INVENIO_KEYWORDS_MATCHINGS.iteritems():
query = self._replace_keyword(query, spires_keyword, invenio_keyword)
for spires_keyword, invenio_keyword in self._SECOND_ORDER_KEYWORD_MATCHINGS.iteritems():
query = self._replace_second_order_keyword(query, spires_keyword, invenio_keyword)
return query
def _replace_keyword(self, query, old_keyword, new_keyword):
"""Replaces old keyword in the query with a new keyword"""
regex_string = r'(?P<operator>(^find|\band|\bor|\bnot|\brefersto|\bcitedby|^)\b[:\s\(]*)' + \
old_keyword + r'(?P<end>[\s\(]+|$)'
regular_expression = re.compile(regex_string, re.IGNORECASE)
result = regular_expression.sub(r'\g<operator>' + new_keyword + r'\g<end>', query)
result = re.sub(r':\s+', ':', result)
return result
def _replace_second_order_keyword(self, query, old_keyword, new_keyword):
"""Replaces old second-order keyword in the query with a new keyword"""
regular_expression =\
re.compile(r'''(?ix) # verbose, ignorecase
(?P<operator>
(^find|\band|\bor|\bnot|\brefersto|\bcitedby|^)\b # operator preceding our operator
[:\s\(]* # trailing colon, spaces, parens, etc. for that operator
)
%s # the keyword we're searching for
(?P<endorop>
\s*[a-z]+:| # either an operator (like author:)
[\s\(]+| # or a paren opening
$ # or the end of the string
)''' % old_keyword)
result = regular_expression.sub(r'\g<operator>' + new_keyword + r'\g<endorop>', query)
result = re.sub(r':\s+', ':', result)
return result
def _distribute_keywords_across_combinations(self, query):
"""author:ellis and james -> author:ellis and author:james"""
# method used for replacement with regular expression
def create_replacement_pattern(match):
return match.group('keyword') + match.group('content') + \
match.group('combination') + match.group('keyword') + \
match.group('last_content')
still_matches = True
while still_matches:
query = self._re_distribute_keywords.sub(create_replacement_pattern, query)
still_matches = self._re_distribute_keywords.search(query)
query = re.sub(r'\s+', ' ', query)
return query
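The distribute-until-stable technique above can be sketched in isolation as follows; the regex here is a simplified assumption (single-word values, `and`/`or`/`not` combinations), not the `_re_distribute_keywords` pattern itself:

```python
import re

# keyword:value CONJ bare-value, where the bare value is not itself keyword:...
_re_distribute = re.compile(
    r'(?P<keyword>\w+:)(?P<content>[^\s]+)(?P<combination>\s+(?:and|or|not)\s+)'
    r'(?!\w+:)(?P<last_content>[^\s]+)',
    re.IGNORECASE)

def distribute_keywords(query):
    """author:ellis and james -> author:ellis and author:james"""
    # re-apply the substitution until the query stops changing (fixpoint),
    # so chains like "author:x and y and z" are fully expanded
    while True:
        new_query = _re_distribute.sub(
            lambda m: m.group('keyword') + m.group('content') +
                      m.group('combination') + m.group('keyword') +
                      m.group('last_content'),
            query)
        if new_query == query:
            return re.sub(r'\s+', ' ', new_query)
        query = new_query
```

Each pass can only expand one conjunction per match position, which is why the method loops until `search` finds no further occurrences.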
def _distribute_and_quote_second_order_ops(self, query):
"""refersto:s parke -> refersto:\"s parke\""""
def create_replacement_pattern(match):
return match.group('second_order_op') + '"' +\
match.group('search_terms') + '"' +\
match.group('conjunction_or_next_keyword')
for match in self._re_second_order_op_no_index_match.finditer(query):
query = self._re_second_order_op_no_index_match.sub(create_replacement_pattern, query)
query = re.sub(r'\s+', ' ', query)
return query
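A minimal standalone sketch of quoting the terms that follow a second-order operator (again, not the Invenio regex; the operator list and the lazy terms pattern are assumptions for illustration):

```python
import re

# second-order operator, then the shortest run of term text, then either
# the next keyword (e.g. " title:") or the end of the string
_re_second_order = re.compile(
    r'(?P<op>(?:refersto|citedby):)(?P<terms>[^":]+?)(?P<rest>\s+\w+:|$)')

def quote_second_order_terms(query):
    """refersto:s parke -> refersto:"s parke" """
    return _re_second_order.sub(
        lambda m: m.group('op') + '"' + m.group('terms').strip() + '"' +
                  m.group('rest'),
        query)
```

Wrapping the terms in quotes keeps multi-word values (like a truncated author name) attached to the second-order operator when the query is parsed later.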
diff --git a/modules/websearch/lib/websearch_fixtures.py b/modules/websearch/lib/websearch_fixtures.py
index 99157c1fd..47c84f8a6 100644
--- a/modules/websearch/lib/websearch_fixtures.py
+++ b/modules/websearch/lib/websearch_fixtures.py
@@ -1,1883 +1,2636 @@
+
+
# -*- coding: utf-8 -*-
#
## This file is part of Invenio.
## Copyright (C) 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
from invenio.config import CFG_SITE_NAME
from fixture import DataSet
class CollectionData(DataSet):
class siteCollection:
id = 1
name = CFG_SITE_NAME
dbquery = None
class FieldData(DataSet):
class Field_1:
code = u'anyfield'
id = 1
name = u'any field'
class Field_2:
code = u'title'
id = 2
name = u'title'
class Field_3:
code = u'author'
id = 3
name = u'author'
class Field_4:
code = u'abstract'
id = 4
name = u'abstract'
class Field_5:
code = u'keyword'
id = 5
name = u'keyword'
class Field_6:
code = u'reportnumber'
id = 6
name = u'report number'
class Field_7:
code = u'subject'
id = 7
name = u'subject'
class Field_8:
code = u'reference'
id = 8
name = u'reference'
class Field_9:
code = u'fulltext'
id = 9
name = u'fulltext'
class Field_10:
code = u'collection'
id = 10
name = u'collection'
class Field_11:
code = u'division'
id = 11
name = u'division'
class Field_12:
code = u'year'
id = 12
name = u'year'
class Field_13:
code = u'experiment'
id = 13
name = u'experiment'
class Field_14:
code = u'recid'
id = 14
name = u'record ID'
class Field_15:
code = u'isbn'
id = 15
name = u'isbn'
class Field_16:
code = u'issn'
id = 16
name = u'issn'
class Field_17:
code = u'coden'
id = 17
name = u'coden'
#class Field_18:
# code = u'doi'
# id = 18
# name = u'doi'
class Field_19:
code = u'journal'
id = 19
name = u'journal'
class Field_20:
code = u'collaboration'
id = 20
name = u'collaboration'
class Field_21:
code = u'affiliation'
id = 21
name = u'affiliation'
class Field_22:
code = u'exactauthor'
id = 22
name = u'exact author'
class Field_23:
code = u'datecreated'
id = 23
name = u'date created'
class Field_24:
code = u'datemodified'
id = 24
name = u'date modified'
class Field_25:
code = u'refersto'
id = 25
name = u'refers to'
class Field_26:
code = u'citedby'
id = 26
name = u'cited by'
class Field_27:
code = u'caption'
id = 27
name = u'caption'
class Field_28:
code = u'firstauthor'
id = 28
name = u'first author'
class Field_29:
code = u'exactfirstauthor'
id = 29
name = u'exact first author'
class Field_30:
code = u'authorcount'
id = 30
name = u'author count'
class Field_31:
code = u'rawref'
id = 31
name = u'reference to'
class Field_32:
code = u'exacttitle'
id = 32
name = u'exact title'
+ class Field_33:
+ code = u'authorityauthor'
+ id = 33
+ name = u'authority author'
+
+ class Field_34:
+ code = u'authorityinstitution'
+ id = 34
+ name = u'authority institution'
+
+ class Field_35:
+ code = u'authorityjournal'
+ id = 35
+ name = u'authority journal'
+
+ class Field_36:
+ code = u'authoritysubject'
+ id = 36
+ name = u'authority subject'
+
+ class Field_37:
+ code = u'itemcount'
+ id = 37
+ name = u'item count'
+
+ class Field_38:
+ code = u'filetype'
+ id = 38
+ name = u'file type'
+
+ class Field_39:
+ code = u'miscellaneous'
+ id = 39
+ name = u'miscellaneous'
+
class TagData(DataSet):
class Tag_1:
id = 1
value = u'100__a'
name = u'first author name'
class Tag_2:
id = 2
value = u'700__a'
name = u'additional author name'
class Tag_3:
id = 3
value = u'245__%'
name = u'main title'
class Tag_4:
id = 4
value = u'246__%'
name = u'additional title'
class Tag_5:
id = 5
value = u'520__%'
name = u'abstract'
class Tag_6:
id = 6
value = u'6531_a'
name = u'keyword'
class Tag_7:
id = 7
value = u'037__a'
name = u'primary report number'
class Tag_8:
id = 8
value = u'088__a'
name = u'additional report number'
class Tag_9:
id = 9
value = u'909C0r'
name = u'added report number'
class Tag_10:
id = 10
value = u'999C5%'
name = u'reference'
class Tag_11:
id = 11
value = u'980__%'
name = u'collection identifier'
class Tag_12:
id = 12
value = u'65017a'
name = u'main subject'
class Tag_13:
id = 13
value = u'65027a'
name = u'additional subject'
class Tag_14:
id = 14
value = u'909C0p'
name = u'division'
class Tag_15:
id = 15
value = u'909C0y'
name = u'year'
class Tag_16:
id = 16
value = u'00%'
name = u'00x'
class Tag_17:
id = 17
value = u'01%'
name = u'01x'
class Tag_18:
id = 18
value = u'02%'
name = u'02x'
class Tag_19:
id = 19
value = u'03%'
name = u'03x'
class Tag_20:
id = 20
value = u'04%'
name = u'lang'
class Tag_21:
id = 21
value = u'05%'
name = u'05x'
class Tag_22:
id = 22
value = u'06%'
name = u'06x'
class Tag_23:
id = 23
value = u'07%'
name = u'07x'
class Tag_24:
id = 24
value = u'08%'
name = u'08x'
class Tag_25:
id = 25
value = u'09%'
name = u'09x'
class Tag_26:
id = 26
value = u'10%'
name = u'10x'
class Tag_27:
id = 27
value = u'11%'
name = u'11x'
class Tag_28:
id = 28
value = u'12%'
name = u'12x'
class Tag_29:
id = 29
value = u'13%'
name = u'13x'
class Tag_30:
id = 30
value = u'14%'
name = u'14x'
class Tag_31:
id = 31
value = u'15%'
name = u'15x'
class Tag_32:
id = 32
value = u'16%'
name = u'16x'
class Tag_33:
id = 33
value = u'17%'
name = u'17x'
class Tag_34:
id = 34
value = u'18%'
name = u'18x'
class Tag_35:
id = 35
value = u'19%'
name = u'19x'
class Tag_36:
id = 36
value = u'20%'
name = u'20x'
class Tag_37:
id = 37
value = u'21%'
name = u'21x'
class Tag_38:
id = 38
value = u'22%'
name = u'22x'
class Tag_39:
id = 39
value = u'23%'
name = u'23x'
class Tag_40:
id = 40
value = u'24%'
name = u'24x'
class Tag_41:
id = 41
value = u'25%'
name = u'25x'
class Tag_42:
id = 42
value = u'26%'
name = u'internal'
class Tag_43:
id = 43
value = u'27%'
name = u'27x'
class Tag_44:
id = 44
value = u'28%'
name = u'28x'
class Tag_45:
id = 45
value = u'29%'
name = u'29x'
class Tag_46:
id = 46
value = u'30%'
name = u'pages'
class Tag_47:
id = 47
value = u'31%'
name = u'31x'
class Tag_48:
id = 48
value = u'32%'
name = u'32x'
class Tag_49:
id = 49
value = u'33%'
name = u'33x'
class Tag_50:
id = 50
value = u'34%'
name = u'34x'
class Tag_51:
id = 51
value = u'35%'
name = u'35x'
class Tag_52:
id = 52
value = u'36%'
name = u'36x'
class Tag_53:
id = 53
value = u'37%'
name = u'37x'
class Tag_54:
id = 54
value = u'38%'
name = u'38x'
class Tag_55:
id = 55
value = u'39%'
name = u'39x'
class Tag_56:
id = 56
value = u'40%'
name = u'40x'
class Tag_57:
id = 57
value = u'41%'
name = u'41x'
class Tag_58:
id = 58
value = u'42%'
name = u'42x'
class Tag_59:
id = 59
value = u'43%'
name = u'43x'
class Tag_60:
id = 60
value = u'44%'
name = u'44x'
class Tag_61:
id = 61
value = u'45%'
name = u'45x'
class Tag_62:
id = 62
value = u'46%'
name = u'46x'
class Tag_63:
id = 63
value = u'47%'
name = u'47x'
class Tag_64:
id = 64
value = u'48%'
name = u'48x'
class Tag_65:
id = 65
value = u'49%'
name = u'series'
class Tag_66:
id = 66
value = u'50%'
name = u'50x'
class Tag_67:
id = 67
value = u'51%'
name = u'51x'
class Tag_68:
id = 68
value = u'52%'
name = u'52x'
class Tag_69:
id = 69
value = u'53%'
name = u'53x'
class Tag_70:
id = 70
value = u'54%'
name = u'54x'
class Tag_71:
id = 71
value = u'55%'
name = u'55x'
class Tag_72:
id = 72
value = u'56%'
name = u'56x'
class Tag_73:
id = 73
value = u'57%'
name = u'57x'
class Tag_74:
id = 74
value = u'58%'
name = u'58x'
class Tag_75:
id = 75
value = u'59%'
name = u'summary'
class Tag_76:
id = 76
value = u'60%'
name = u'60x'
class Tag_77:
id = 77
value = u'61%'
name = u'61x'
class Tag_78:
id = 78
value = u'62%'
name = u'62x'
class Tag_79:
id = 79
value = u'63%'
name = u'63x'
class Tag_80:
id = 80
value = u'64%'
name = u'64x'
class Tag_81:
id = 81
value = u'65%'
name = u'65x'
class Tag_82:
id = 82
value = u'66%'
name = u'66x'
class Tag_83:
id = 83
value = u'67%'
name = u'67x'
class Tag_84:
id = 84
value = u'68%'
name = u'68x'
class Tag_85:
id = 85
value = u'69%'
name = u'subject'
class Tag_86:
id = 86
value = u'70%'
name = u'70x'
class Tag_87:
id = 87
value = u'71%'
name = u'71x'
class Tag_88:
id = 88
value = u'72%'
name = u'author-ad'
class Tag_89:
id = 89
value = u'73%'
name = u'73x'
class Tag_90:
id = 90
value = u'74%'
name = u'74x'
class Tag_91:
id = 91
value = u'75%'
name = u'75x'
class Tag_92:
id = 92
value = u'76%'
name = u'76x'
class Tag_93:
id = 93
value = u'77%'
name = u'77x'
class Tag_94:
id = 94
value = u'78%'
name = u'78x'
class Tag_95:
id = 95
value = u'79%'
name = u'79x'
class Tag_96:
id = 96
value = u'80%'
name = u'80x'
class Tag_97:
id = 97
value = u'81%'
name = u'81x'
class Tag_98:
id = 98
value = u'82%'
name = u'82x'
class Tag_99:
id = 99
value = u'83%'
name = u'83x'
class Tag_100:
id = 100
value = u'84%'
name = u'84x'
class Tag_101:
id = 101
value = u'85%'
name = u'electr'
class Tag_102:
id = 102
value = u'86%'
name = u'86x'
class Tag_103:
id = 103
value = u'87%'
name = u'87x'
class Tag_104:
id = 104
value = u'88%'
name = u'88x'
class Tag_105:
id = 105
value = u'89%'
name = u'89x'
class Tag_106:
id = 106
value = u'90%'
name = u'publication'
class Tag_107:
id = 107
value = u'91%'
name = u'pub-conf-cit'
class Tag_108:
id = 108
value = u'92%'
name = u'92x'
class Tag_109:
id = 109
value = u'93%'
name = u'93x'
class Tag_110:
id = 110
value = u'94%'
name = u'94x'
class Tag_111:
id = 111
value = u'95%'
name = u'95x'
class Tag_112:
id = 112
value = u'96%'
name = u'catinfo'
class Tag_113:
id = 113
value = u'97%'
name = u'97x'
class Tag_114:
id = 114
value = u'98%'
name = u'98x'
class Tag_115:
id = 115
value = u'8564_u'
name = u'url'
class Tag_116:
id = 116
value = u'909C0e'
name = u'experiment'
class Tag_117:
id = 117
value = u'001'
name = u'record ID'
class Tag_118:
id = 118
value = u'020__a'
name = u'isbn'
class Tag_119:
id = 119
value = u'022__a'
name = u'issn'
class Tag_120:
id = 120
value = u'030__a'
name = u'coden'
class Tag_121:
id = 121
value = u'909C4a'
name = u'doi'
class Tag_122:
id = 122
value = u'850%'
name = u'850x'
class Tag_123:
id = 123
value = u'851%'
name = u'851x'
class Tag_124:
id = 124
value = u'852%'
name = u'852x'
class Tag_125:
id = 125
value = u'853%'
name = u'853x'
class Tag_126:
id = 126
value = u'854%'
name = u'854x'
class Tag_127:
id = 127
value = u'855%'
name = u'855x'
class Tag_128:
id = 128
value = u'857%'
name = u'857x'
class Tag_129:
id = 129
value = u'858%'
name = u'858x'
class Tag_130:
id = 130
value = u'859%'
name = u'859x'
class Tag_131:
id = 131
value = u'909C4%'
name = u'journal'
class Tag_132:
id = 132
value = u'710__g'
name = u'collaboration'
class Tag_133:
id = 133
value = u'100__u'
name = u'first author affiliation'
class Tag_134:
id = 134
value = u'700__u'
name = u'additional author affiliation'
class Tag_135:
id = 135
value = u'8564_y'
name = u'caption'
class Tag_136:
id = 136
value = u'909C4c'
name = u'journal page'
class Tag_137:
id = 137
value = u'909C4p'
name = u'journal title'
class Tag_138:
id = 138
value = u'909C4v'
name = u'journal volume'
class Tag_139:
id = 139
value = u'909C4y'
name = u'journal year'
class Tag_140:
id = 140
value = u'500__a'
name = u'comment'
class Tag_141:
id = 141
value = u'245__a'
name = u'title'
class Tag_142:
id = 142
value = u'245__a'
name = u'main abstract'
class Tag_143:
id = 143
value = u'595__a'
name = u'internal notes'
class Tag_144:
id = 144
value = u'787%'
name = u'other relationship entry'
+ class Tag_146:
+ id = 146
+ value = u'400__a'
+ name = u'authority: alternative personal name'
+
+ class Tag_148:
+ id = 148
+ value = u'110__a'
+ name = u'authority: organization main name'
+
+ class Tag_149:
+ id = 149
+ value = u'410__a'
+ name = u'organization alternative name'
+
+ class Tag_150:
+ id = 150
+ value = u'510__a'
+ name = u'organization main from other record'
+
+ class Tag_151:
+ id = 151
+ value = u'130__a'
+ name = u'authority: uniform title'
+
+ class Tag_152:
+ id = 152
+ value = u'430__a'
+ name = u'authority: uniform title alternatives'
+
+ class Tag_153:
+ id = 153
+ value = u'530__a'
+ name = u'authority: uniform title from other record'
+
+ class Tag_154:
+ id = 154
+ value = u'150__a'
+ name = u'authority: subject from other record'
+
+ class Tag_155:
+ id = 155
+ value = u'450__a'
+ name = u'authority: subject alternative name'
+
+ class Tag_156:
+ id = 156
+ value = u'450__a'
+ name = u'authority: subject main name'
+
+ class Tag_157:
+ id = 157
+ value = u'031%'
+ name = u'031x'
+
+ class Tag_158:
+ id = 158
+ value = u'032%'
+ name = u'032x'
+
+ class Tag_159:
+ id = 159
+ value = u'033%'
+ name = u'033x'
+
+ class Tag_160:
+ id = 160
+ value = u'034%'
+ name = u'034x'
+
+ class Tag_161:
+ id = 161
+ value = u'035%'
+ name = u'035x'
+
+ class Tag_162:
+ id = 162
+ value = u'036%'
+ name = u'036x'
+
+ class Tag_163:
+ id = 163
+ value = u'037%'
+ name = u'037x'
+
+ class Tag_164:
+ id = 164
+ value = u'038%'
+ name = u'038x'
+
+ class Tag_165:
+ id = 165
+ value = u'080%'
+ name = u'080x'
+
+ class Tag_166:
+ id = 166
+ value = u'082%'
+ name = u'082x'
+
+ class Tag_167:
+ id = 167
+ value = u'083%'
+ name = u'083x'
+
+ class Tag_168:
+ id = 168
+ value = u'084%'
+ name = u'084x'
+
+ class Tag_169:
+ id = 169
+ value = u'085%'
+ name = u'085x'
+
+ class Tag_170:
+ id = 170
+ value = u'086%'
+ name = u'086x'
+
+ class Tag_171:
+ id = 171
+ value = u'240%'
+ name = u'240x'
+
+ class Tag_172:
+ id = 172
+ value = u'242%'
+ name = u'242x'
+
+ class Tag_173:
+ id = 173
+ value = u'243%'
+ name = u'243x'
+
+ class Tag_174:
+ id = 174
+ value = u'244%'
+ name = u'244x'
+
+ class Tag_175:
+ id = 175
+ value = u'247%'
+ name = u'247x'
+
+ class Tag_176:
+ id = 176
+ value = u'521%'
+ name = u'521x'
+
+ class Tag_177:
+ id = 177
+ value = u'522%'
+ name = u'522x'
+
+ class Tag_178:
+ id = 178
+ value = u'524%'
+ name = u'524x'
+
+ class Tag_179:
+ id = 179
+ value = u'525%'
+ name = u'525x'
+
+ class Tag_180:
+ id = 180
+ value = u'526%'
+ name = u'526x'
+
+ class Tag_181:
+ id = 181
+ value = u'650%'
+ name = u'650x'
+
+ class Tag_182:
+ id = 182
+ value = u'651%'
+ name = u'651x'
+
+ class Tag_183:
+ id = 183
+ value = u'6531_v'
+ name = u'6531_v'
+
+ class Tag_184:
+ id = 184
+ value = u'6531_y'
+ name = u'6531_y'
+
+ class Tag_185:
+ id = 185
+ value = u'6531_9'
+ name = u'6531_9'
+
+ class Tag_186:
+ id = 186
+ value = u'654%'
+ name = u'654x'
+
+ class Tag_187:
+ id = 187
+ value = u'655%'
+ name = u'655x'
+
+ class Tag_188:
+ id = 188
+ value = u'656%'
+ name = u'656x'
+
+ class Tag_189:
+ id = 189
+ value = u'657%'
+ name = u'657x'
+
+ class Tag_190:
+ id = 190
+ value = u'658%'
+ name = u'658x'
+
+ class Tag_191:
+ id = 191
+ value = u'711%'
+ name = u'711x'
+
+ class Tag_192:
+ id = 192
+ value = u'900%'
+ name = u'900x'
+
+ class Tag_193:
+ id = 193
+ value = u'901%'
+ name = u'901x'
+
+ class Tag_194:
+ id = 194
+ value = u'902%'
+ name = u'902x'
+
+ class Tag_195:
+ id = 195
+ value = u'903%'
+ name = u'903x'
+
+ class Tag_196:
+ id = 196
+ value = u'904%'
+ name = u'904x'
+
+ class Tag_197:
+ id = 197
+ value = u'905%'
+ name = u'905x'
+
+ class Tag_198:
+ id = 198
+ value = u'906%'
+ name = u'906x'
+
+ class Tag_199:
+ id = 199
+ value = u'907%'
+ name = u'907x'
+
+ class Tag_200:
+ id = 200
+ value = u'908%'
+ name = u'908x'
+
+ class Tag_201:
+ id = 201
+ value = u'909C1%'
+ name = u'909C1x'
+
+ class Tag_202:
+ id = 202
+ value = u'909C5%'
+ name = u'909C5x'
+
+ class Tag_203:
+ id = 203
+ value = u'909CS%'
+ name = u'909CSx'
+
+ class Tag_204:
+ id = 204
+ value = u'909CO%'
+ name = u'909COx'
+
+ class Tag_205:
+ id = 205
+ value = u'909CK%'
+ name = u'909CKx'
+
+ class Tag_206:
+ id = 206
+ value = u'909CP%'
+ name = u'909CPx'
+
+ class Tag_207:
+ id = 207
+ value = u'981%'
+ name = u'981x'
+
+ class Tag_208:
+ id = 208
+ value = u'982%'
+ name = u'982x'
+
+ class Tag_209:
+ id = 209
+ value = u'983%'
+ name = u'983x'
+
+ class Tag_210:
+ id = 210
+ value = u'984%'
+ name = u'984x'
+
+ class Tag_211:
+ id = 211
+ value = u'985%'
+ name = u'985x'
+
+ class Tag_212:
+ id = 212
+ value = u'986%'
+ name = u'986x'
+
+ class Tag_213:
+ id = 213
+ value = u'987%'
+ name = u'987x'
+
+ class Tag_214:
+ id = 214
+ value = u'988%'
+ name = u'988x'
+
+ class Tag_215:
+ id = 215
+ value = u'989%'
+ name = u'989x'
+
+ class Tag_216:
+ id = 216
+ value = u'100__0'
+ name = u'author control'
+
+ class Tag_217:
+ id = 217
+ value = u'110__0'
+ name = u'institution control'
+
+ class Tag_218:
+ id = 218
+ value = u'130__0'
+ name = u'journal control'
+
+ class Tag_219:
+ id = 219
+ value = u'150__0'
+ name = u'subject control'
+
+ class Tag_220:
+ id = 220
+ value = u'260__0'
+ name = u'additional institution control'
+
+ class Tag_221:
+ id = 221
+ value = u'700__0'
+ name = u'additional author control'
+
+
class FormatData(DataSet):
class Format_1:
code = u'hb'
last_updated = None
description = u'HTML brief output format, used for search results pages.'
content_type = u'text/html'
id = 1
visibility = 1
name = u'HTML brief'
class Format_2:
code = u'hd'
last_updated = None
description = u'HTML detailed output format, used for Detailed record pages.'
content_type = u'text/html'
id = 2
visibility = 1
name = u'HTML detailed'
class Format_3:
code = u'hm'
last_updated = None
description = u'HTML MARC.'
content_type = u'text/html'
id = 3
visibility = 1
name = u'MARC'
class Format_4:
code = u'xd'
last_updated = None
description = u'XML Dublin Core.'
content_type = u'text/xml'
id = 4
visibility = 1
name = u'Dublin Core'
class Format_5:
code = u'xm'
last_updated = None
description = u'XML MARC.'
content_type = u'text/xml'
id = 5
visibility = 1
name = u'MARCXML'
class Format_6:
code = u'hp'
last_updated = None
description = u'HTML portfolio-style output format for photos.'
content_type = u'text/html'
id = 6
visibility = 1
name = u'portfolio'
class Format_7:
code = u'hc'
last_updated = None
description = u'HTML caption-only output format for photos.'
content_type = u'text/html'
id = 7
visibility = 1
name = u'photo captions only'
class Format_8:
code = u'hx'
last_updated = None
description = u'BibTeX.'
content_type = u'text/html'
id = 8
visibility = 1
name = u'BibTeX'
class Format_9:
code = u'xe'
last_updated = None
description = u'XML EndNote.'
content_type = u'text/xml'
id = 9
visibility = 1
name = u'EndNote'
class Format_10:
code = u'xn'
last_updated = None
description = u'XML NLM.'
content_type = u'text/xml'
id = 10
visibility = 1
name = u'NLM'
class Format_11:
code = u'excel'
last_updated = None
description = u'Excel csv output'
content_type = u'application/ms-excel'
id = 11
visibility = 0
name = u'Excel'
class Format_12:
code = u'hs'
last_updated = None
description = u'Very short HTML output for similarity box (<i>people also viewed..</i>).'
content_type = u'text/html'
id = 12
visibility = 0
name = u'HTML similarity'
class Format_13:
code = u'xr'
last_updated = None
description = u'RSS.'
content_type = u'text/xml'
id = 13
visibility = 0
name = u'RSS'
class Format_14:
code = u'xoaidc'
last_updated = None
description = u'OAI DC.'
content_type = u'text/xml'
id = 14
visibility = 0
name = u'OAI DC'
class Format_15:
code = u'hdfile'
last_updated = None
description = u'Used to show fulltext files in mini-panel of detailed record pages.'
content_type = u'text/html'
id = 15
visibility = 0
name = u'File mini-panel'
class Format_16:
code = u'hdact'
last_updated = None
description = u'Used to display actions in mini-panel of detailed record pages.'
content_type = u'text/html'
id = 16
visibility = 0
name = u'Actions mini-panel'
class Format_17:
code = u'hdref'
last_updated = None
description = u'Display record references in References tab.'
content_type = u'text/html'
id = 17
visibility = 0
name = u'References tab'
class Format_18:
code = u'hcs'
last_updated = None
description = u'HTML cite summary format, used for search results pages.'
content_type = u'text/html'
id = 18
visibility = 1
name = u'HTML citesummary'
class Format_19:
code = u'xw'
last_updated = None
description = u'RefWorks.'
content_type = u'text/xml'
id = 19
visibility = 1
name = u'RefWorks'
class Format_20:
code = u'xo'
last_updated = None
description = u'Metadata Object Description Schema'
content_type = u'application/xml'
id = 20
visibility = 1
name = u'MODS'
class Format_21:
code = u'ha'
last_updated = None
description = u'Very brief HTML output format for author/paper claiming facility.'
content_type = u'text/html'
id = 21
visibility = 0
name = u'HTML author claiming'
class Format_22:
code = u'xp'
last_updated = None
description = u'Sample format suitable for multimedia feeds, such as podcasts'
content_type = u'application/rss+xml'
id = 22
visibility = 0
name = u'Podcast'
class Format_23:
code = u'wapaff'
last_updated = None
description = u'cPickled dicts'
content_type = u'text'
id = 23
visibility = 0
name = u'WebAuthorProfile affiliations helper'
class Format_24:
code = u'xe8x'
last_updated = None
description = u'XML EndNote (8-X).'
content_type = u'text/xml'
id = 24
visibility = 1
name = u'EndNote (8-X)'
class Format_25:
code = u'hcs2'
last_updated = None
description = u'HTML cite summary format, including self-citations counts.'
content_type = u'text/html'
id = 25
visibility = 0
name = u'HTML citesummary extended'
class Format_26:
code = u'dcite'
last_updated = None
description = u'DataCite XML format.'
content_type = u'text/xml'
id = 26
visibility = 0
name = u'DataCite'
class Format_27:
code = u'mobb'
last_updated = None
description = u'Mobile brief format.'
content_type = u'text/html'
id = 27
visibility = 0
name = u'Mobile brief'
class Format_28:
code = u'mobd'
last_updated = None
description = u'Mobile detailed format.'
content_type = u'text/html'
id = 28
visibility = 0
name = u'Mobile detailed'
class FieldTagData(DataSet):
class FieldTag_10_11:
score = 100
id_tag = TagData.Tag_11.ref('id')
id_field = FieldData.Field_10.ref('id')
class FieldTag_11_14:
score = 100
id_tag = TagData.Tag_14.ref('id')
id_field = FieldData.Field_11.ref('id')
class FieldTag_12_15:
score = 10
id_tag = TagData.Tag_15.ref('id')
id_field = FieldData.Field_12.ref('id')
class FieldTag_13_116:
score = 10
id_tag = TagData.Tag_116.ref('id')
id_field = FieldData.Field_13.ref('id')
class FieldTag_14_117:
score = 100
id_tag = TagData.Tag_117.ref('id')
id_field = FieldData.Field_14.ref('id')
class FieldTag_15_118:
score = 100
id_tag = TagData.Tag_118.ref('id')
id_field = FieldData.Field_15.ref('id')
class FieldTag_16_119:
score = 100
id_tag = TagData.Tag_119.ref('id')
id_field = FieldData.Field_16.ref('id')
class FieldTag_17_120:
score = 100
id_tag = TagData.Tag_120.ref('id')
id_field = FieldData.Field_17.ref('id')
#class FieldTag_18_120:
# score = 100
# id_tag = TagData.Tag_121.ref('id')
# id_field = FieldData.Field_18.ref('id')
class FieldTag_19_131:
score = 100
id_tag = TagData.Tag_131.ref('id')
id_field = FieldData.Field_19.ref('id')
- class FieldTag_1_100:
- score = 10
- id_tag = TagData.Tag_100.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_20_132:
+ score = 100
+ id_tag = TagData.Tag_132.ref('id')
+ id_field = FieldData.Field_20.ref('id')
- class FieldTag_1_102:
- score = 10
- id_tag = TagData.Tag_102.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_21_133:
+ score = 100
+ id_tag = TagData.Tag_133.ref('id')
+ id_field = FieldData.Field_21.ref('id')
- class FieldTag_1_103:
- score = 10
- id_tag = TagData.Tag_103.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_21_134:
+ score = 90
+ id_tag = TagData.Tag_134.ref('id')
+ id_field = FieldData.Field_21.ref('id')
- class FieldTag_1_104:
- score = 10
- id_tag = TagData.Tag_104.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_22_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_22.ref('id')
- class FieldTag_1_105:
- score = 10
- id_tag = TagData.Tag_105.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_22_2:
+ score = 90
+ id_tag = TagData.Tag_2.ref('id')
+ id_field = FieldData.Field_22.ref('id')
- class FieldTag_1_106:
- score = 10
- id_tag = TagData.Tag_106.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_27_135:
+ score = 100
+ id_tag = TagData.Tag_135.ref('id')
+ id_field = FieldData.Field_27.ref('id')
- class FieldTag_1_107:
- score = 10
- id_tag = TagData.Tag_107.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_28_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_28.ref('id')
- class FieldTag_1_108:
- score = 10
- id_tag = TagData.Tag_108.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_29_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_29.ref('id')
- class FieldTag_1_109:
- score = 10
- id_tag = TagData.Tag_109.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_2_3:
+ score = 100
+ id_tag = TagData.Tag_3.ref('id')
+ id_field = FieldData.Field_2.ref('id')
- class FieldTag_1_110:
- score = 10
- id_tag = TagData.Tag_110.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_2_4:
+ score = 90
+ id_tag = TagData.Tag_4.ref('id')
+ id_field = FieldData.Field_2.ref('id')
- class FieldTag_1_111:
- score = 10
- id_tag = TagData.Tag_111.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_30_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_30.ref('id')
- class FieldTag_1_112:
- score = 10
- id_tag = TagData.Tag_112.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_30_2:
+ score = 90
+ id_tag = TagData.Tag_2.ref('id')
+ id_field = FieldData.Field_30.ref('id')
- class FieldTag_1_113:
- score = 10
- id_tag = TagData.Tag_113.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_32_3:
+ score = 100
+ id_tag = TagData.Tag_3.ref('id')
+ id_field = FieldData.Field_32.ref('id')
- class FieldTag_1_114:
- score = 10
- id_tag = TagData.Tag_114.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_32_4:
+ score = 90
+ id_tag = TagData.Tag_4.ref('id')
+ id_field = FieldData.Field_32.ref('id')
- class FieldTag_1_122:
- score = 10
- id_tag = TagData.Tag_122.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_3_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_3.ref('id')
- class FieldTag_1_123:
- score = 10
- id_tag = TagData.Tag_123.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ class FieldTag_3_2:
+ score = 90
+ id_tag = TagData.Tag_2.ref('id')
+ id_field = FieldData.Field_3.ref('id')
+
+ class FieldTag_4_5:
+ score = 100
+ id_tag = TagData.Tag_5.ref('id')
+ id_field = FieldData.Field_4.ref('id')
+
+ class FieldTag_5_6:
+ score = 100
+ id_tag = TagData.Tag_6.ref('id')
+ id_field = FieldData.Field_5.ref('id')
+
+ class FieldTag_6_7:
+ score = 30
+ id_tag = TagData.Tag_7.ref('id')
+ id_field = FieldData.Field_6.ref('id')
- class FieldTag_1_124:
+ class FieldTag_6_8:
score = 10
- id_tag = TagData.Tag_124.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_8.ref('id')
+ id_field = FieldData.Field_6.ref('id')
+
+ class FieldTag_6_9:
+ score = 20
+ id_tag = TagData.Tag_9.ref('id')
+ id_field = FieldData.Field_6.ref('id')
+
+ class FieldTag_7_12:
+ score = 100
+ id_tag = TagData.Tag_12.ref('id')
+ id_field = FieldData.Field_7.ref('id')
+
+ class FieldTag_7_13:
+ score = 90
+ id_tag = TagData.Tag_13.ref('id')
+ id_field = FieldData.Field_7.ref('id')
+
+ class FieldTag_8_10:
+ score = 100
+ id_tag = TagData.Tag_10.ref('id')
+ id_field = FieldData.Field_8.ref('id')
+
+ class FieldTag_9_115:
+ score = 100
+ id_tag = TagData.Tag_115.ref('id')
+ id_field = FieldData.Field_9.ref('id')
+
+ class FieldTag_33_1:
+ score = 100
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_33.ref('id')
+
+ class FieldTag_33_146:
+ score = 100
+ id_tag = TagData.Tag_146.ref('id')
+ id_field = FieldData.Field_33.ref('id')
+
+ class FieldTag_33_140:
+ score = 100
+ id_tag = TagData.Tag_140.ref('id')
+ id_field = FieldData.Field_33.ref('id')
+
+ class FieldTag_34_148:
+ score = 100
+ id_tag = TagData.Tag_148.ref('id')
+ id_field = FieldData.Field_34.ref('id')
- class FieldTag_1_125:
+ class FieldTag_34_149:
+ score = 100
+ id_tag = TagData.Tag_149.ref('id')
+ id_field = FieldData.Field_34.ref('id')
+
+ class FieldTag_34_150:
+ score = 100
+ id_tag = TagData.Tag_150.ref('id')
+ id_field = FieldData.Field_34.ref('id')
+
+ class FieldTag_35_151:
+ score = 100
+ id_tag = TagData.Tag_151.ref('id')
+ id_field = FieldData.Field_35.ref('id')
+
+ class FieldTag_35_152:
+ score = 100
+ id_tag = TagData.Tag_152.ref('id')
+ id_field = FieldData.Field_35.ref('id')
+
+ class FieldTag_35_153:
+ score = 100
+ id_tag = TagData.Tag_153.ref('id')
+ id_field = FieldData.Field_35.ref('id')
+
+ class FieldTag_36_154:
+ score = 100
+ id_tag = TagData.Tag_154.ref('id')
+ id_field = FieldData.Field_36.ref('id')
+
+ class FieldTag_36_155:
+ score = 100
+ id_tag = TagData.Tag_155.ref('id')
+ id_field = FieldData.Field_36.ref('id')
+
+ class FieldTag_36_156:
+ score = 100
+ id_tag = TagData.Tag_156.ref('id')
+ id_field = FieldData.Field_36.ref('id')
+
+ class FieldTag_39_17:
score = 10
- id_tag = TagData.Tag_125.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_17.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_126:
+ class FieldTag_39_18:
score = 10
- id_tag = TagData.Tag_126.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_18.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_127:
+ class FieldTag_39_157:
score = 10
- id_tag = TagData.Tag_127.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_157.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_128:
+ class FieldTag_39_158:
score = 10
- id_tag = TagData.Tag_128.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_158.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_129:
+ class FieldTag_39_159:
score = 10
- id_tag = TagData.Tag_129.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_159.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_130:
+ class FieldTag_39_160:
score = 10
- id_tag = TagData.Tag_130.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_160.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_16:
+ class FieldTag_39_161:
score = 10
- id_tag = TagData.Tag_16.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_161.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_17:
+ class FieldTag_39_162:
score = 10
- id_tag = TagData.Tag_17.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_162.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_18:
+ class FieldTag_39_163:
score = 10
- id_tag = TagData.Tag_18.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_163.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_19:
+ class FieldTag_39_164:
score = 10
- id_tag = TagData.Tag_19.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_164.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_20:
+ class FieldTag_39_20:
score = 10
id_tag = TagData.Tag_20.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_21:
+ class FieldTag_39_21:
score = 10
id_tag = TagData.Tag_21.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_22:
+ class FieldTag_39_22:
score = 10
id_tag = TagData.Tag_22.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_23:
+ class FieldTag_39_23:
score = 10
id_tag = TagData.Tag_23.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_24:
+ class FieldTag_39_165:
score = 10
- id_tag = TagData.Tag_24.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_165.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_25:
+ class FieldTag_39_166:
score = 10
- id_tag = TagData.Tag_25.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_166.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_26:
+ class FieldTag_39_167:
score = 10
- id_tag = TagData.Tag_26.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_167.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_27:
+ class FieldTag_39_168:
+ score = 10
+ id_tag = TagData.Tag_168.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_169:
+ score = 10
+ id_tag = TagData.Tag_169.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_170:
+ score = 10
+ id_tag = TagData.Tag_170.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_25:
+ score = 10
+ id_tag = TagData.Tag_25.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_27:
score = 10
id_tag = TagData.Tag_27.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_28:
+ class FieldTag_39_28:
score = 10
id_tag = TagData.Tag_28.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_29:
+ class FieldTag_39_29:
score = 10
id_tag = TagData.Tag_29.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_30:
+ class FieldTag_39_30:
score = 10
id_tag = TagData.Tag_30.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_31:
+ class FieldTag_39_31:
score = 10
id_tag = TagData.Tag_31.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_32:
+ class FieldTag_39_32:
score = 10
id_tag = TagData.Tag_32.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_33:
+ class FieldTag_39_33:
score = 10
id_tag = TagData.Tag_33.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_34:
+ class FieldTag_39_34:
score = 10
id_tag = TagData.Tag_34.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_35:
+ class FieldTag_39_35:
score = 10
id_tag = TagData.Tag_35.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_36:
+ class FieldTag_39_36:
score = 10
id_tag = TagData.Tag_36.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_37:
+ class FieldTag_39_37:
score = 10
id_tag = TagData.Tag_37.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_38:
+ class FieldTag_39_38:
score = 10
id_tag = TagData.Tag_38.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_39:
+ class FieldTag_39_39:
score = 10
id_tag = TagData.Tag_39.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_171:
+ score = 10
+ id_tag = TagData.Tag_171.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_172:
+ score = 10
+ id_tag = TagData.Tag_172.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_173:
+ score = 10
+ id_tag = TagData.Tag_173.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_40:
+ class FieldTag_39_174:
score = 10
- id_tag = TagData.Tag_40.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_174.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_41:
+ class FieldTag_39_175:
+ score = 10
+ id_tag = TagData.Tag_175.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_41:
score = 10
id_tag = TagData.Tag_41.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_42:
+ class FieldTag_39_42:
score = 10
id_tag = TagData.Tag_42.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_43:
+ class FieldTag_39_43:
score = 10
id_tag = TagData.Tag_43.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_44:
+ class FieldTag_39_44:
score = 10
id_tag = TagData.Tag_44.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_45:
+ class FieldTag_39_45:
score = 10
id_tag = TagData.Tag_45.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_46:
+ class FieldTag_39_46:
score = 10
id_tag = TagData.Tag_46.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_47:
+ class FieldTag_39_47:
score = 10
id_tag = TagData.Tag_47.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_48:
+ class FieldTag_39_48:
score = 10
id_tag = TagData.Tag_48.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_49:
+ class FieldTag_39_49:
score = 10
id_tag = TagData.Tag_49.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_50:
+ class FieldTag_39_50:
score = 10
id_tag = TagData.Tag_50.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_51:
+ class FieldTag_39_51:
score = 10
id_tag = TagData.Tag_51.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_52:
+ class FieldTag_39_52:
score = 10
id_tag = TagData.Tag_52.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_53:
+ class FieldTag_39_53:
score = 10
id_tag = TagData.Tag_53.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_54:
+ class FieldTag_39_54:
score = 10
id_tag = TagData.Tag_54.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_55:
+ class FieldTag_39_55:
score = 10
id_tag = TagData.Tag_55.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_56:
+ class FieldTag_39_56:
score = 10
id_tag = TagData.Tag_56.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_57:
+ class FieldTag_39_57:
score = 10
id_tag = TagData.Tag_57.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_58:
+ class FieldTag_39_58:
score = 10
id_tag = TagData.Tag_58.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_59:
+ class FieldTag_39_59:
score = 10
id_tag = TagData.Tag_59.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_60:
+ class FieldTag_39_60:
score = 10
id_tag = TagData.Tag_60.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_61:
+ class FieldTag_39_61:
score = 10
id_tag = TagData.Tag_61.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_62:
+ class FieldTag_39_62:
score = 10
id_tag = TagData.Tag_62.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_63:
+ class FieldTag_39_63:
score = 10
id_tag = TagData.Tag_63.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_64:
+ class FieldTag_39_64:
score = 10
id_tag = TagData.Tag_64.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_65:
+ class FieldTag_39_65:
score = 10
id_tag = TagData.Tag_65.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_66:
+ class FieldTag_39_66:
score = 10
id_tag = TagData.Tag_66.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_67:
+ class FieldTag_39_67:
score = 10
id_tag = TagData.Tag_67.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_176:
+ score = 10
+ id_tag = TagData.Tag_176.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_68:
+ class FieldTag_39_177:
score = 10
- id_tag = TagData.Tag_68.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_177.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_69:
+ class FieldTag_39_178:
+ score = 10
+ id_tag = TagData.Tag_178.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_179:
+ score = 10
+ id_tag = TagData.Tag_179.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_180:
+ score = 10
+ id_tag = TagData.Tag_180.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_69:
score = 10
id_tag = TagData.Tag_69.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_70:
+ class FieldTag_39_70:
score = 10
id_tag = TagData.Tag_70.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_71:
+ class FieldTag_39_71:
score = 10
id_tag = TagData.Tag_71.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_72:
+ class FieldTag_39_72:
score = 10
id_tag = TagData.Tag_72.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_73:
+ class FieldTag_39_73:
score = 10
id_tag = TagData.Tag_73.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_74:
+ class FieldTag_39_74:
score = 10
id_tag = TagData.Tag_74.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_75:
+ class FieldTag_39_75:
score = 10
id_tag = TagData.Tag_75.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_76:
+ class FieldTag_39_76:
score = 10
id_tag = TagData.Tag_76.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_77:
+ class FieldTag_39_77:
score = 10
id_tag = TagData.Tag_77.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_78:
+ class FieldTag_39_78:
score = 10
id_tag = TagData.Tag_78.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_79:
+ class FieldTag_39_79:
score = 10
id_tag = TagData.Tag_79.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_80:
+ class FieldTag_39_80:
score = 10
id_tag = TagData.Tag_80.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_81:
+ class FieldTag_39_181:
score = 10
- id_tag = TagData.Tag_81.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_181.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_82:
+ class FieldTag_39_182:
+ score = 10
+ id_tag = TagData.Tag_182.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_183:
+ score = 10
+ id_tag = TagData.Tag_183.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_184:
+ score = 10
+ id_tag = TagData.Tag_184.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_185:
+ score = 10
+ id_tag = TagData.Tag_185.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_186:
+ score = 10
+ id_tag = TagData.Tag_186.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_82:
score = 10
id_tag = TagData.Tag_82.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_83:
+ class FieldTag_39_83:
score = 10
id_tag = TagData.Tag_83.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_84:
+ class FieldTag_39_84:
score = 10
id_tag = TagData.Tag_84.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_85:
+ class FieldTag_39_85:
score = 10
id_tag = TagData.Tag_85.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_86:
+ class FieldTag_39_187:
score = 10
- id_tag = TagData.Tag_86.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_tag = TagData.Tag_187.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_87:
- score = 10
- id_tag = TagData.Tag_87.ref('id')
- id_field = FieldData.Field_1.ref('id')
-
- class FieldTag_1_88:
+ class FieldTag_39_88:
score = 10
id_tag = TagData.Tag_88.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_89:
+ class FieldTag_39_89:
score = 10
id_tag = TagData.Tag_89.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_90:
+ class FieldTag_39_90:
score = 10
id_tag = TagData.Tag_90.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_91:
+ class FieldTag_39_91:
score = 10
id_tag = TagData.Tag_91.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_92:
+ class FieldTag_39_92:
score = 10
id_tag = TagData.Tag_92.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_93:
+ class FieldTag_39_93:
score = 10
id_tag = TagData.Tag_93.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_94:
+ class FieldTag_39_94:
score = 10
id_tag = TagData.Tag_94.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_95:
+ class FieldTag_39_95:
score = 10
id_tag = TagData.Tag_95.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_96:
+ class FieldTag_39_96:
score = 10
id_tag = TagData.Tag_96.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_97:
+ class FieldTag_39_97:
score = 10
id_tag = TagData.Tag_97.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_98:
+ class FieldTag_39_98:
score = 10
id_tag = TagData.Tag_98.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_1_99:
+ class FieldTag_39_99:
score = 10
id_tag = TagData.Tag_99.ref('id')
- id_field = FieldData.Field_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_20_132:
- score = 100
- id_tag = TagData.Tag_132.ref('id')
- id_field = FieldData.Field_20.ref('id')
+ class FieldTag_39_100:
+ score = 10
+ id_tag = TagData.Tag_100.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_21_133:
- score = 100
- id_tag = TagData.Tag_133.ref('id')
- id_field = FieldData.Field_21.ref('id')
+ class FieldTag_39_102:
+ score = 10
+ id_tag = TagData.Tag_102.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_21_134:
- score = 90
- id_tag = TagData.Tag_134.ref('id')
- id_field = FieldData.Field_21.ref('id')
+ class FieldTag_39_103:
+ score = 10
+ id_tag = TagData.Tag_103.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_22_1:
- score = 100
- id_tag = TagData.Tag_1.ref('id')
- id_field = FieldData.Field_22.ref('id')
+ class FieldTag_39_104:
+ score = 10
+ id_tag = TagData.Tag_104.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_22_2:
- score = 90
- id_tag = TagData.Tag_2.ref('id')
- id_field = FieldData.Field_22.ref('id')
+ class FieldTag_39_105:
+ score = 10
+ id_tag = TagData.Tag_105.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_27_135:
- score = 100
- id_tag = TagData.Tag_135.ref('id')
- id_field = FieldData.Field_27.ref('id')
+ class FieldTag_39_188:
+ score = 10
+ id_tag = TagData.Tag_188.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_28_1:
- score = 100
- id_tag = TagData.Tag_1.ref('id')
- id_field = FieldData.Field_28.ref('id')
+ class FieldTag_39_189:
+ score = 10
+ id_tag = TagData.Tag_189.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_29_1:
- score = 100
- id_tag = TagData.Tag_1.ref('id')
- id_field = FieldData.Field_29.ref('id')
+ class FieldTag_39_190:
+ score = 10
+ id_tag = TagData.Tag_190.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_2_3:
- score = 100
- id_tag = TagData.Tag_3.ref('id')
- id_field = FieldData.Field_2.ref('id')
+ class FieldTag_39_191:
+ score = 10
+ id_tag = TagData.Tag_191.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_2_4:
- score = 90
- id_tag = TagData.Tag_4.ref('id')
- id_field = FieldData.Field_2.ref('id')
+ class FieldTag_39_192:
+ score = 10
+ id_tag = TagData.Tag_192.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_30_1:
- score = 100
- id_tag = TagData.Tag_1.ref('id')
- id_field = FieldData.Field_30.ref('id')
+ class FieldTag_39_193:
+ score = 10
+ id_tag = TagData.Tag_193.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_30_2:
- score = 90
- id_tag = TagData.Tag_2.ref('id')
- id_field = FieldData.Field_30.ref('id')
+ class FieldTag_39_194:
+ score = 10
+ id_tag = TagData.Tag_194.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_32_3:
- score = 100
- id_tag = TagData.Tag_3.ref('id')
- id_field = FieldData.Field_32.ref('id')
+ class FieldTag_39_195:
+ score = 10
+ id_tag = TagData.Tag_195.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_32_4:
- score = 90
- id_tag = TagData.Tag_4.ref('id')
- id_field = FieldData.Field_32.ref('id')
+ class FieldTag_39_196:
+ score = 10
+ id_tag = TagData.Tag_196.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_3_1:
- score = 100
- id_tag = TagData.Tag_1.ref('id')
- id_field = FieldData.Field_3.ref('id')
+ class FieldTag_39_107:
+ score = 10
+ id_tag = TagData.Tag_107.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_3_2:
- score = 90
- id_tag = TagData.Tag_2.ref('id')
- id_field = FieldData.Field_3.ref('id')
+ class FieldTag_39_108:
+ score = 10
+ id_tag = TagData.Tag_108.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_4_5:
- score = 100
- id_tag = TagData.Tag_5.ref('id')
- id_field = FieldData.Field_4.ref('id')
+ class FieldTag_39_109:
+ score = 10
+ id_tag = TagData.Tag_109.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_5_6:
- score = 100
- id_tag = TagData.Tag_6.ref('id')
- id_field = FieldData.Field_5.ref('id')
+ class FieldTag_39_110:
+ score = 10
+ id_tag = TagData.Tag_110.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_6_7:
- score = 30
- id_tag = TagData.Tag_7.ref('id')
- id_field = FieldData.Field_6.ref('id')
+ class FieldTag_39_111:
+ score = 10
+ id_tag = TagData.Tag_111.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_6_8:
+ class FieldTag_39_112:
score = 10
- id_tag = TagData.Tag_8.ref('id')
- id_field = FieldData.Field_6.ref('id')
+ id_tag = TagData.Tag_112.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_6_9:
- score = 20
- id_tag = TagData.Tag_9.ref('id')
- id_field = FieldData.Field_6.ref('id')
+ class FieldTag_39_113:
+ score = 10
+ id_tag = TagData.Tag_113.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_7_12:
- score = 100
- id_tag = TagData.Tag_12.ref('id')
- id_field = FieldData.Field_7.ref('id')
+ class FieldTag_39_197:
+ score = 10
+ id_tag = TagData.Tag_197.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_7_13:
- score = 90
- id_tag = TagData.Tag_13.ref('id')
- id_field = FieldData.Field_7.ref('id')
+ class FieldTag_39_198:
+ score = 10
+ id_tag = TagData.Tag_198.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_8_10:
- score = 100
- id_tag = TagData.Tag_10.ref('id')
- id_field = FieldData.Field_8.ref('id')
+ class FieldTag_39_199:
+ score = 10
+ id_tag = TagData.Tag_199.ref('id')
+ id_field = FieldData.Field_39.ref('id')
- class FieldTag_9_115:
- score = 100
- id_tag = TagData.Tag_115.ref('id')
- id_field = FieldData.Field_9.ref('id')
+ class FieldTag_39_200:
+ score = 10
+ id_tag = TagData.Tag_200.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_201:
+ score = 10
+ id_tag = TagData.Tag_201.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_202:
+ score = 10
+ id_tag = TagData.Tag_202.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_203:
+ score = 10
+ id_tag = TagData.Tag_203.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_204:
+ score = 10
+ id_tag = TagData.Tag_204.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_205:
+ score = 10
+ id_tag = TagData.Tag_205.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_206:
+ score = 10
+ id_tag = TagData.Tag_206.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_207:
+ score = 10
+ id_tag = TagData.Tag_207.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_208:
+ score = 10
+ id_tag = TagData.Tag_208.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_209:
+ score = 10
+ id_tag = TagData.Tag_209.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_210:
+ score = 10
+ id_tag = TagData.Tag_210.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_211:
+ score = 10
+ id_tag = TagData.Tag_211.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_212:
+ score = 10
+ id_tag = TagData.Tag_212.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_213:
+ score = 10
+ id_tag = TagData.Tag_213.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_214:
+ score = 10
+ id_tag = TagData.Tag_214.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_215:
+ score = 10
+ id_tag = TagData.Tag_215.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_122:
+ score = 10
+ id_tag = TagData.Tag_122.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_123:
+ score = 10
+ id_tag = TagData.Tag_123.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_124:
+ score = 10
+ id_tag = TagData.Tag_124.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_125:
+ score = 10
+ id_tag = TagData.Tag_125.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_126:
+ score = 10
+ id_tag = TagData.Tag_126.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_127:
+ score = 10
+ id_tag = TagData.Tag_127.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_128:
+ score = 10
+ id_tag = TagData.Tag_128.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_129:
+ score = 10
+ id_tag = TagData.Tag_129.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_130:
+ score = 10
+ id_tag = TagData.Tag_130.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_1:
+ score = 10
+ id_tag = TagData.Tag_1.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_2:
+ score = 10
+ id_tag = TagData.Tag_2.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_216:
+ score = 10
+ id_tag = TagData.Tag_216.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_217:
+ score = 10
+ id_tag = TagData.Tag_217.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_218:
+ score = 10
+ id_tag = TagData.Tag_218.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_219:
+ score = 10
+ id_tag = TagData.Tag_219.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_220:
+ score = 10
+ id_tag = TagData.Tag_220.ref('id')
+ id_field = FieldData.Field_39.ref('id')
+
+ class FieldTag_39_221:
+ score = 10
+ id_tag = TagData.Tag_221.ref('id')
+ id_field = FieldData.Field_39.ref('id')
diff --git a/modules/websearch/lib/websearch_regression_tests.py b/modules/websearch/lib/websearch_regression_tests.py
index bd41975ce..0697514fa 100644
--- a/modules/websearch/lib/websearch_regression_tests.py
+++ b/modules/websearch/lib/websearch_regression_tests.py
@@ -1,4723 +1,4875 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
# pylint: disable=E1102
"""WebSearch module regression tests."""
__revision__ = "$Id$"
import re
import urlparse, cgi
import sys
import cStringIO
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
from flask import url_for
from mechanize import Browser, LinkNotFoundError, HTTPError
from invenio.config import CFG_SITE_URL, CFG_SITE_NAME, CFG_SITE_LANG, \
CFG_SITE_RECORD, CFG_SITE_LANGS, \
CFG_SITE_SECURE_URL, CFG_WEBSEARCH_SPIRES_SYNTAX
from invenio.importutils import lazy_import
from invenio.testutils import make_test_suite, \
run_test_suite, \
nottest, \
make_url, make_surl, test_web_page_content, \
merge_error_messages, InvenioTestCase, \
get_authenticated_mechanize_browser
from invenio.urlutils import same_urls_p
from invenio.dbquery import run_sql
from invenio.search_engine_query_parser_unit_tests import DATEUTIL_AVAILABLE
+reindex_word_tables_into_testtables = lazy_import('invenio.bibindex_regression_tests:reindex_word_tables_into_testtables')
+
if 'fr' in CFG_SITE_LANGS:
lang_french_configured = True
else:
lang_french_configured = False
def parse_url(url):
parts = urlparse.urlparse(url)
query = cgi.parse_qs(parts[4], True)
return parts[2].split('/')[1:], query
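As a standalone illustration (separate from the diff above), the `parse_url` helper splits a URL into its path segments and a query multi-dict. This sketch is modernized to the Python 3 `urllib.parse` names; the module itself targets Python 2's `urlparse` and `cgi.parse_qs`:

```python
# Standalone Python 3 rendering of the parse_url helper above.
# The original uses Python 2's urlparse module and cgi.parse_qs;
# urllib.parse provides both equivalents in Python 3.
from urllib.parse import urlparse, parse_qs

def parse_url(url):
    parts = urlparse(url)
    # keep_blank_values=True mirrors the original's parse_qs(..., True)
    query = parse_qs(parts.query, keep_blank_values=True)
    # Drop the leading empty segment produced by the '/' prefix.
    return parts.path.split('/')[1:], query

path, query = parse_url('http://example.org/search?p=ellis&of=hx')
assert path == ['search']
assert query == {'p': ['ellis'], 'of': ['hx']}
```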
def string_combinations(str_list):
"""Returns all the possible combinations of the strings in the list.
Example: for the list ['A','B','Cd'], it will return
[['Cd', 'B', 'A'], ['B', 'A'], ['Cd', 'A'], ['A'], ['Cd', 'B'], ['B'], ['Cd'], []]
It adds "B", "H", "F" and "S" values to the results so different
combinations of them are also checked.
"""
out_list = []
for i in range(len(str_list) + 1):
out_list += list(combinations(str_list, i))
for i in range(len(out_list)):
out_list[i] = (list(out_list[i]) + {
0: lambda: ["B", "H", "S"],
1: lambda: ["B", "H", "F"],
2: lambda: ["B", "F", "S"],
3: lambda: ["B", "F"],
4: lambda: ["B", "S"],
5: lambda: ["B", "H"],
6: lambda: ["B"]
}[i % 7]())
return out_list
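The dict-of-lambdas dispatch in `string_combinations` can be hard to follow; as a standalone equivalent sketch it reduces to enumerating all subsets and padding each with a rotating pick of the "B"/"H"/"F"/"S" output-format flags. This version uses the stdlib `itertools.combinations` (the module below carries its own backport for pre-2.6 Pythons):

```python
# Standalone sketch equivalent to the string_combinations helper above,
# using itertools.combinations instead of the module's backport.
from itertools import combinations

def string_combinations(str_list):
    """All subsets of str_list, each padded with a rotating pick of
    the "B"/"H"/"F"/"S" output-format flags, as in the helper above."""
    out_list = []
    for i in range(len(str_list) + 1):
        out_list += list(combinations(str_list, i))
    # Same seven flag sets as the original's {0: ..., 6: ...} dispatch.
    flag_sets = [["B", "H", "S"], ["B", "H", "F"], ["B", "F", "S"],
                 ["B", "F"], ["B", "S"], ["B", "H"], ["B"]]
    return [list(combo) + flag_sets[i % 7]
            for i, combo in enumerate(out_list)]

# For ['A'] the subsets are () and ('A',), padded with flag sets 0 and 1.
assert string_combinations(['A']) == [['B', 'H', 'S'], ['A', 'B', 'H', 'F']]
```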
def combinations(iterable, r):
"""Return r length subsequences of elements from the input iterable."""
# combinations('ABCD', 2) --> AB AC AD BC BD CD
# combinations(range(4), 3) --> 012 013 023 123
pool = tuple(iterable)
n = len(pool)
if r > n:
return
indices = range(r)
yield tuple(pool[i] for i in indices)
while True:
for i in reversed(range(r)):
if indices[i] != i + n - r:
break
else:
return
indices[i] += 1
for j in range(i+1, r):
indices[j] = indices[j-1] + 1
yield tuple(pool[i] for i in indices)
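The `combinations` generator above is the documented pure-Python backport of `itertools.combinations` for Pythons older than 2.6. A quick standalone check (not part of the diff; `indices` is wrapped in `list(...)` so the sketch also runs on Python 3, where `range` is immutable) confirms it matches the stdlib version:

```python
# Check that the pure-Python combinations() backport shown above
# produces the same subsequences as the stdlib itertools version.
from itertools import combinations as itertools_combinations

def combinations(iterable, r):
    """Return r length subsequences of elements from the input iterable."""
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = list(range(r))  # list(...) needed on Python 3
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != i + n - r:
                break
        else:
            return
        indices[i] += 1
        for j in range(i + 1, r):
            indices[j] = indices[j - 1] + 1
        yield tuple(pool[i] for i in indices)

assert list(combinations('ABCD', 2)) == list(itertools_combinations('ABCD', 2))
assert list(combinations(range(4), 3)) == list(itertools_combinations(range(4), 3))
assert list(combinations('AB', 3)) == []  # r > n yields nothing
```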
class WebSearchWebPagesAvailabilityTest(InvenioTestCase):
"""Check WebSearch web pages whether they are up or not."""
def test_search_interface_pages_availability(self):
"""websearch - availability of search interface pages"""
baseurl = CFG_SITE_URL + '/'
_exports = ['', 'collection/Poetry', 'collection/Poetry?as=1']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_search_results_pages_availability(self):
"""websearch - availability of search results pages"""
baseurl = CFG_SITE_URL + '/search'
_exports = ['', '?c=Poetry', '?p=ellis', '/cache', '/log']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_search_detailed_record_pages_availability(self):
"""websearch - availability of search detailed record pages"""
baseurl = CFG_SITE_URL + '/'+ CFG_SITE_RECORD +'/'
_exports = ['', '1', '1/', '1/files', '1/files/']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_browse_results_pages_availability(self):
"""websearch - availability of browse results pages"""
baseurl = CFG_SITE_URL + '/search'
_exports = ['?p=ellis&f=author&action_browse=Browse']
error_messages = []
for url in [baseurl + page for page in _exports]:
error_messages.extend(test_web_page_content(url))
if error_messages:
self.fail(merge_error_messages(error_messages))
return
def test_help_page_availability(self):
"""websearch - availability of Help Central page"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help',
expected_text="Help Central"))
if lang_french_configured:
def test_help_page_availability_fr(self):
"""websearch - availability of Help Central page in french"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/?ln=fr',
expected_text="Centre d'aide"))
def test_search_tips_page_availability(self):
"""websearch - availability of Search Tips"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search-tips',
expected_text="Search Tips"))
if lang_french_configured:
def test_search_tips_page_availability_fr(self):
"""websearch - availability of Search Tips in french"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search-tips?ln=fr',
expected_text="Conseils de recherche"))
def test_search_guide_page_availability(self):
"""websearch - availability of Search Guide"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search-guide',
expected_text="Search Guide"))
if lang_french_configured:
def test_search_guide_page_availability_fr(self):
"""websearch - availability of Search Guide in french"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search-guide?ln=fr',
expected_text="Guide de recherche"))
class WebSearchTestLegacyURLs(InvenioTestCase):
""" Check that the application still responds to legacy URLs for
navigating, searching and browsing."""
def test_legacy_collections(self):
""" websearch - collections handle legacy urls """
browser = Browser()
def check(legacy, new, browser=browser):
browser.open(legacy)
got = browser.geturl()
self.failUnless(same_urls_p(got, new), got)
# Use the root URL unless we need more
check(make_url('/', c=CFG_SITE_NAME),
make_url('/', ln=CFG_SITE_LANG))
# Other collections are redirected in the /collection area
check(make_url('/', c='Poetry'),
make_url('/collection/Poetry', ln=CFG_SITE_LANG))
## NOTE &as= is not supported anymore
# Drop unnecessary arguments, like ln and as (when they are
# the default value)
#args = {'as': 0}
#check(make_url('/', c='Poetry', **args),
# make_url('/collection/Poetry', ln=CFG_SITE_LANG))
# Otherwise, keep them
#args = {'as': 1, 'ln': CFG_SITE_LANG}
#check(make_url('/', c='Poetry', **args),
# make_url('/collection/Poetry', **args))
# Support the /index.py addressing too
check(make_url('/index.py', c='Poetry'),
make_url('/collection/Poetry', ln=CFG_SITE_LANG))
def test_legacy_search(self):
""" websearch - search queries handle legacy urls """
browser = Browser()
def check(legacy, new, browser=browser):
browser.open(legacy)
got = browser.geturl()
self.failUnless(same_urls_p(got, new), got)
# /search.py is redirected on /search
# Note that `as' is a reserved word in Python 2.5
check(make_url('/search.py', p='nuclear', ln='en') + 'as=1',
make_url('/search', p='nuclear', ln='en') + 'as=1')
if lang_french_configured:
def test_legacy_search_fr(self):
""" websearch - search queries handle legacy urls """
browser = Browser()
def check(legacy, new, browser=browser):
browser.open(legacy)
got = browser.geturl()
self.failUnless(same_urls_p(got, new), got)
# direct recid searches are redirected to /CFG_SITE_RECORD
check(make_url('/search.py', recid=1, ln='fr'),
make_url('/%s/1' % CFG_SITE_RECORD, ln='fr'))
def test_legacy_search_help_link(self):
"""websearch - legacy Search Help page link"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search/index.en.html',
expected_text="Help Central"))
if lang_french_configured:
def test_legacy_search_tips_link(self):
"""websearch - legacy Search Tips page link"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search/tips.fr.html',
expected_text="Conseils de recherche"))
def test_legacy_search_guide_link(self):
"""websearch - legacy Search Guide page link"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/help/search/guide.en.html',
expected_text="Search Guide"))
class WebSearchTestRecord(InvenioTestCase):
""" Check the interface of the /CFG_SITE_RECORD results """
def test_format_links(self):
""" websearch - check format links for records """
browser = Browser()
# We open the record in all known HTML formats
for hformat in ('hd', 'hx', 'hm'):
browser.open(make_url('/%s/1' % CFG_SITE_RECORD, of=hformat))
if hformat == 'hd':
# hd format should have a link to the following
# formats
for oformat in ('hx', 'hm', 'xm', 'xd'):
target = make_url('/%s/1/export/%s?ln=en' % (CFG_SITE_RECORD, oformat))
try:
browser.find_link(url=target)
except LinkNotFoundError:
# retry with the export URL without the ln argument before failing
target = make_url('/%s/1/export/%s' % (CFG_SITE_RECORD, oformat))
try:
browser.find_link(url=target)
except LinkNotFoundError:
self.fail('link %r should be in page' % target)
else:
# non-hd HTML formats should have a link back to
# the main detailed record
target = '/%s/1' % CFG_SITE_RECORD
try:
browser.find_link(url=target)
except LinkNotFoundError:
self.fail('link %r should be in page' % target)
return
def test_exported_formats(self):
""" websearch - check formats exported through /CFG_SITE_RECORD/1/export/ URLs"""
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/hm' % CFG_SITE_RECORD),
expected_text='245__ $$aALEPH experiment'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/hd' % CFG_SITE_RECORD),
expected_text='<strong>ALEPH experiment'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/xm' % CFG_SITE_RECORD),
expected_text='<subfield code="a">ALEPH experiment'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/xd' % CFG_SITE_RECORD),
expected_text='<dc:title>ALEPH experiment'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/hs' % CFG_SITE_RECORD),
expected_text='<a href="/%s/1?ln=%s">ALEPH experiment' % \
(CFG_SITE_RECORD, CFG_SITE_LANG)))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/hx' % CFG_SITE_RECORD),
- expected_text='title = "ALEPH experiment'))
+ expected_text='title = "{ALEPH experiment'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/t?ot=245' % CFG_SITE_RECORD),
expected_text='245__ $$aALEPH experiment'))
self.assertNotEqual([],
test_web_page_content(make_url('/%s/1/export/t?ot=245' % CFG_SITE_RECORD),
expected_text='001__'))
self.assertEqual([],
test_web_page_content(make_url('/%s/1/export/h?ot=245' % CFG_SITE_RECORD),
expected_text='245__ $$aALEPH experiment'))
self.assertNotEqual([],
test_web_page_content(make_url('/%s/1/export/h?ot=245' % CFG_SITE_RECORD),
expected_text='001__'))
return
def test_plots_tab(self):
""" websearch - test to ensure the plots tab is working """
self.assertEqual([],
test_web_page_content(make_url('/%s/8/plots' % CFG_SITE_RECORD),
expected_text='div id="clip"',
unexpected_text='Abstract'))
def test_meta_header(self):
""" websearch - test that metadata embedded in header of hd
relies on hdm format and Default_HTML_meta bft, but hook is in
websearch to display the format
"""
self.assertEqual([],
test_web_page_content(make_url('/record/1'),
expected_text='<meta content="ALEPH experiment: Candidate of Higgs boson production" name="citation_title" />'))
return
class WebSearchTestCollections(InvenioTestCase):
def test_traversal_links(self):
""" websearch - traverse all the publications of a collection """
browser = Browser()
try:
browser.open(make_url('/collection/Preprints'))
for jrec in (11, 21, 11, 31):
args = {'jrec': jrec, 'cc': 'Preprints'}
url = url_for('search.search', **args)
try:
browser.follow_link(url=url)
except LinkNotFoundError:
args['ln'] = CFG_SITE_LANG
url = url_for('search.search', **args)
browser.follow_link(url=url)
except LinkNotFoundError:
self.fail('no link %r in %r' % (url, browser.geturl()))
def test_collections_links(self):
""" websearch - enter in collections and subcollections """
browser = Browser()
def tryfollow(url):
cur = browser.geturl()
body = browser.response().read()
try:
browser.follow_link(url=url)
except LinkNotFoundError:
print body
self.fail("in %r: could not find %r" % (
cur, url))
return
kargs = {}
kargs['ln'] = CFG_SITE_LANG
# We navigate from immediate son to immediate son...
browser.open(make_url('/', **kargs))
tryfollow(make_url('/collection/Articles%20%26%20Preprints',
**kargs))
tryfollow(make_url('/collection/Articles', **kargs))
# But we can also jump to a grandson immediately
browser.back()
browser.back()
tryfollow(make_url('/collection/ALEPH', **kargs))
return
def test_records_links(self):
""" websearch - check the links toward records in leaf collections """
browser = Browser()
browser.open(make_url('/collection/Preprints'))
def harvest():
""" Parse all the links in the page, and check that for
each link to a detailed record, we also have the
corresponding link to the similar records."""
records = set()
similar = set()
for link in browser.links():
path, q = parse_url(link.url)
if not path:
continue
if path[0] == CFG_SITE_RECORD:
try:
records.add(int(path[1]))
except (IndexError, ValueError):
pass
continue
if path[0] == 'search':
if q.get('rm') != ['wrd'] or ':' not in q['p'][0]:
continue
f, recid = q['p'][0].split(':', 1)
if f == 'recid':
similar.add(int(recid))
self.failUnlessEqual(records, similar)
return records
# We must have 10 links to the corresponding /CFG_SITE_RECORD
found = harvest()
self.failUnlessEqual(len(found), 10)
# When clicking on the "Search" button, we must also have
# these 10 links on the records.
browser.select_form(name="search")
browser.submit()
found = harvest()
self.failUnlessEqual(len(found), 10)
return
@nottest
def test_em_parameter(self):
""" websearch - check different values of em return different parts of the collection page"""
for combi in string_combinations(["L", "P", "Prt"]):
url = '/collection/Articles?em=%s' % ','.join(combi)
expected_text = ["<strong>Development of photon beam diagnostics for VUV radiation from a SASE FEL</strong>"]
unexpected_text = []
if "H" in combi:
expected_text.append(">Atlantis Institute of Fictive Science</a>")
else:
unexpected_text.append(">Atlantis Institute of Fictive Science</a>")
if "F" in combi:
expected_text.append("This site is also available in the following languages:")
else:
unexpected_text.append("This site is also available in the following languages:")
if "S" in combi:
expected_text.append('value="Search"')
else:
unexpected_text.append('value="Search"')
if "L" in combi:
expected_text.append('Search also:')
else:
unexpected_text.append('Search also:')
if "Prt" in combi or "P" in combi:
expected_text.append('<div class="portalboxheader">ABOUT ARTICLES</div>')
else:
unexpected_text.append('<div class="portalboxheader">ABOUT ARTICLES</div>')
self.assertEqual([], test_web_page_content(make_url(url),
expected_text=expected_text,
unexpected_text=unexpected_text))
return
class WebSearchTestBrowse(InvenioTestCase):
def test_browse_field(self):
""" websearch - check that browsing works """
browser = Browser()
browser.open(make_url('/search?f=title&action_browse=Browse'))
def collect():
# We'll get a few links to search for the actual hits, plus a
# link to the following results.
res = []
for link in browser.links(url_regex=re.compile(r'/search\?')):
if link.text == 'Advanced Search':
continue
dummy, q = parse_url(link.url)
res.append((link, q))
return res
# if we follow the last link, we should get another
# batch. There is an overlap of one item.
batch_1 = collect()
browser.follow_link(link=batch_1[-1][0])
batch_2 = collect()
# FIXME: we cannot compare the whole query, as the collection
# set is not equal
self.failUnlessEqual(batch_1[-2][1]['p'], batch_2[-11][1]['p'])
def test_browse_restricted_record_as_unauthorized_user(self):
"""websearch - browse for a record that belongs to a restricted collection as an unauthorized user."""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?p=CERN-THESIS-99-074&f=088__a&action_browse=Browse&ln=en',
username = 'guest',
expected_text = ['Hits', '088__a'],
unexpected_text = ['>CERN-THESIS-99-074</a>'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_browse_restricted_record_as_unauthorized_user_in_restricted_collection(self):
"""websearch - browse for a record that belongs to a restricted collection as an unauthorized user."""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?p=CERN-THESIS-99-074&f=088__a&action_browse=Browse&c=ALEPH+Theses&ln=en',
username='guest',
expected_text= ['This collection is restricted'],
unexpected_text= ['Hits', '>CERN-THESIS-99-074</a>'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_browse_restricted_record_as_authorized_user(self):
"""websearch - browse for a record that belongs to a restricted collection as an authorized user."""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?p=CERN-THESIS-99-074&f=088__a&action_browse=Browse&ln=en',
username='admin',
password='',
expected_text= ['Hits', '088__a'],
unexpected_text = ['>CERN-THESIS-99-074</a>'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_browse_restricted_record_as_authorized_user_in_restricted_collection(self):
"""websearch - browse for a record that belongs to a restricted collection as an authorized user."""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?p=CERN-THESIS-99-074&f=088__a&action_browse=Browse&c=ALEPH+Theses&ln=en',
username='admin',
password='',
expected_text= ['Hits', '>CERN-THESIS-99-074</a>'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_browse_exact_author_help_link(self):
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=Dasse%2C+Michel&f=author&action_browse=Browse',
username = 'guest',
expected_text = ['Did you mean to browse in', 'index?'])
if error_messages:
self.fail(merge_error_messages(error_messages))
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=Dasse%2C+Michel&f=firstauthor&action_browse=Browse',
username = 'guest',
expected_text = ['Did you mean to browse in', 'index?'])
if error_messages:
self.fail(merge_error_messages(error_messages))
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&as=1&m1=a&p1=Dasse%2C+Michel&f1=author&op1=a&m2=a&p2=&f2=firstauthor&op2=a&m3=a&p3=&f3=&action_browse=Browse',
username = 'guest',
expected_text = ['Did you mean to browse in', 'index?'])
if error_messages:
self.fail(merge_error_messages(error_messages))
class WebSearchTestOpenURL(InvenioTestCase):
def test_isbn_01(self):
""" websearch - isbn query via OpenURL 0.1"""
browser = Browser()
# We do a precise search in an isolated collection
browser.open(make_url('/openurl', isbn='0387940758'))
dummy, current_q = parse_url(browser.geturl())
self.failUnlessEqual(current_q, {
'sc' : ['1'],
'p' : ['isbn:"0387940758"'],
'of' : ['hd']
})
def test_isbn_10_rft_id(self):
""" websearch - isbn query via OpenURL 1.0 - rft_id"""
browser = Browser()
# We do a precise search in an isolated collection
browser.open(make_url('/openurl', rft_id='urn:ISBN:0387940758'))
dummy, current_q = parse_url(browser.geturl())
self.failUnlessEqual(current_q, {
'sc' : ['1'],
'p' : ['isbn:"0387940758"'],
'of' : ['hd']
})
def test_isbn_10(self):
""" websearch - isbn query via OpenURL 1.0"""
browser = Browser()
# We do a precise search in an isolated collection
browser.open(make_url('/openurl?rft.isbn=0387940758'))
dummy, current_q = parse_url(browser.geturl())
self.failUnlessEqual(current_q, {
'sc' : ['1'],
'p' : ['isbn:"0387940758"'],
'of' : ['hd']
})
class WebSearchTestSearch(InvenioTestCase):
@nottest
def test_hits_in_other_collection(self):
""" websearch - check extension of a query to the home collection """
browser = Browser()
# We do a precise search in an isolated collection
browser.open(make_url('/collection/ISOLDE', ln='en'))
browser.select_form(name='search')
browser['p'] = 'author:matsubara'
browser.submit()
dummy, current_q = parse_url(browser.geturl())
link = browser.find_link(text_regex=re.compile('.*hit', re.I))
dummy, target_q = parse_url(link.url)
# the target query should be the current query without any c
# or cc specified.
for f in ('cc', 'c', 'action_search'):
if f in current_q:
del current_q[f]
self.failUnlessEqual(current_q, target_q)
def test_nearest_terms(self):
""" websearch - provide a list of nearest terms """
browser = Browser()
browser.open(make_url(''))
# Search something weird
browser.select_form(name='search')
browser['p'] = 'gronf'
browser.submit()
dummy, original = parse_url(browser.geturl())
for to_drop in ('cc', 'action_search', 'f'):
if to_drop in original:
del original[to_drop]
if 'ln' not in original:
original['ln'] = [CFG_SITE_LANG]
# we should get a few searches back, which are identical
# except for the p field being substituted (and the cc field
# being dropped).
if 'cc' in original:
del original['cc']
for link in browser.links(url_regex=re.compile(CFG_SITE_URL + r'/search\?')):
if link.text == 'Advanced Search':
continue
dummy, target = parse_url(link.url)
if 'ln' not in target:
target['ln'] = [CFG_SITE_LANG]
original['p'] = [link.text]
self.failUnlessEqual(original, target)
return
@nottest
def test_switch_to_simple_search(self):
""" websearch - switch to simple search """
browser = Browser()
args = {'as': 1}
browser.open(make_url('/collection/ISOLDE', **args))
browser.select_form(name='search')
browser['p1'] = 'tandem'
browser['f1'] = ['title']
browser.submit()
browser.follow_link(text='Simple Search')
dummy, q = parse_url(browser.geturl())
self.failUnlessEqual(q, {'cc': ['ISOLDE'],
'p': ['tandem'],
'f': ['title'],
'ln': ['en']})
@nottest
def test_switch_to_advanced_search(self):
""" websearch - switch to advanced search """
browser = Browser()
browser.open(make_url('/collection/ISOLDE'))
browser.select_form(name='search')
browser['p'] = 'tandem'
browser['f'] = ['title']
browser.submit()
browser.follow_link(text='Advanced Search')
dummy, q = parse_url(browser.geturl())
self.failUnlessEqual(q, {'cc': ['ISOLDE'],
'p1': ['tandem'],
'f1': ['title'],
'as': ['1'],
'ln' : ['en']})
@nottest
def test_no_boolean_hits(self):
""" websearch - check the 'no boolean hits' proposed links """
browser = Browser()
browser.open(make_url(''))
browser.select_form(name='search')
browser['p'] = 'quasinormal muon'
browser.submit()
dummy, q = parse_url(browser.geturl())
for to_drop in ('cc', 'action_search', 'f'):
if to_drop in q:
del q[to_drop]
for bsu in ('quasinormal', 'muon'):
l = browser.find_link(text=bsu)
q['p'] = bsu
if not same_urls_p(l.url, make_url('/search', **q)):
self.fail(repr((l.url, make_url('/search', **q))))
def test_similar_authors(self):
""" websearch - test similar authors box """
browser = Browser()
browser.open(make_url(''))
browser.select_form(name='search')
browser['p'] = 'author:Ellis, R K'
browser.submit()
l = browser.find_link(text="Ellis, R K")
self.failUnless(same_urls_p(l.url, url_for('search.search',
p='author:"Ellis, R K"')))
@nottest
def test_em_parameter(self):
""" websearch - check different values of em return different parts of the search page"""
for combi in string_combinations(["K", "A", "I", "O"]):
url = '/search?ln=en&cc=Articles+%%26+Preprints&sc=1&c=Articles&c=Preprints&em=%s' % ','.join(combi)
expected_text = ["<strong>Development of photon beam diagnostics for VUV radiation from a SASE FEL</strong>"]
unexpected_text = []
if "H" in combi:
expected_text.append(">Atlantis Institute of Fictive Science</a>")
else:
unexpected_text.append(">Atlantis Institute of Fictive Science</a>")
if "F" in combi:
expected_text.append("This site is also available in the following languages:")
else:
unexpected_text.append("This site is also available in the following languages:")
if "S" in combi:
expected_text.append('value="Search"')
else:
unexpected_text.append('value="Search"')
if "K" in combi:
expected_text.append('value="Add to basket"')
else:
unexpected_text.append('value="Add to basket"')
if "A" in combi:
expected_text.append('Interested in being notified about new results for this query?')
else:
unexpected_text.append('Interested in being notified about new results for this query?')
if "I" in combi:
expected_text.append('jump to record:')
else:
unexpected_text.append('jump to record:')
if "O" in combi:
expected_text.append('<th class="searchresultsboxheader"><strong>Results overview:</strong> Found <strong>')
else:
unexpected_text.append('<th class="searchresultsboxheader"><strong>Results overview:</strong> Found <strong>')
self.assertEqual([], test_web_page_content(make_url(url),
expected_text=expected_text,
unexpected_text=unexpected_text))
return
+
+class WebSearchCJKTokenizedSearchTest(InvenioTestCase):
+ """
+ Reindexes record 104 (the one with Chinese poetry) using BibIndexCJKTokenizer.
+ After the tests it reindexes record 104 back with BibIndexDefaultTokenizer.
+ Checks that one can find record 104 by specifying only one or two CJK characters.
+ """
+
+ test_counter = 0
+ reindexed = False
+
+ @classmethod
+ def setUp(self):
+ if not self.reindexed:
+ self.last_updated = reindex_word_tables_into_testtables('title',
+ [[104,104]],
+ False,
+ {'tokenizer':'BibIndexCJKTokenizer',
+ 'last_updated':'0000-00-00 00:00:00'})
+ self.reindexed = True
+
+ @classmethod
+ def tearDown(self):
+ self.test_counter += 1
+ if self.test_counter == 2:
+ reindex_word_tables_into_testtables(
+ 'title',
+ [[104,104]],
+ False,
+ {'tokenizer':'BibIndexDefaultTokenizer',
+ 'last_updated':self.last_updated})
+
+
+ def test_title_cjk_tokenized_two_characters(self):
+ """CJKTokenizer - test for finding chinese poetry with two CJK characters"""
+ self.assertEqual([], test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=title%3A敬亭&f=&of=id',
+ expected_text='[104]'))
+
+ def test_title_cjk_tokenized_single_character(self):
+ """CJKTokenizer - test for finding chinese poetry with one CJK character"""
+ self.assertEqual([], test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=title%3A亭&f=&of=id',
+ expected_text='[104]'))
+
+
class WebSearchTestWildcardLimit(InvenioTestCase):
"""Checks if the wildcard limit is correctly passed and that
users without autorization can not exploit it"""
def test_wildcard_limit_correctly_passed_when_not_set(self):
"""websearch - wildcard limit is correctly passed when default"""
from invenio.search_engine import search_pattern
self.assertEqual(search_pattern(p='e*', f='author'),
search_pattern(p='e*', f='author', wl=1000))
def test_wildcard_limit_correctly_passed_when_set(self):
"""websearch - wildcard limit is correctly passed when set"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=e*&f=author&of=id&wl=5&rg=100',
expected_text="[9, 10, 11, 17, 46, 48, 50, 51, 52, 53, 54, 67, 72, 74, 81, 88, 92, 96]"))
def test_wildcard_limit_correctly_not_active(self):
"""websearch - wildcard limit is not active when there is no wildcard query"""
from invenio.search_engine import search_pattern
self.assertEqual(search_pattern(p='ellis', f='author'),
search_pattern(p='ellis', f='author', wl=1))
def test_wildcard_limit_increased_by_authorized_users(self):
"""websearch - wildcard limit increased by authorized user"""
browser = Browser()
#try a search query, with no wildcard limit set by the user
browser.open(make_url('/search?p=a*&of=id'))
recid_list_guest_no_limit = browser.response().read() # so the limit is CFG_WEBSEARCH_WILDCARD_LIMIT
#try a search query, with a wildcard limit imposed by the user
#wl=1000000 - a very high limit, higher than what CFG_WEBSEARCH_WILDCARD_LIMIT might be
browser.open(make_url('/search?p=a*&of=id&wl=1000000'))
recid_list_guest_with_limit = browser.response().read()
#same results should be returned for a search without the wildcard limit set by the user
#and for a search with a large limit set by the user
#in this way we know that no matter how large the limit is, the wildcard query will be
#limited by CFG_WEBSEARCH_WILDCARD_LIMIT (for a guest user)
self.failIf(len(recid_list_guest_no_limit.split(',')) != len(recid_list_guest_with_limit.split(',')))
##login as admin
browser.open(make_surl('/youraccount/login'))
browser.select_form(nr=0)
browser['nickname'] = 'admin'
browser['password'] = ''
browser.submit()
#try a search query, with a wildcard limit imposed by an authorized user
#wl = 10000 - a very high limit, higher than what CFG_WEBSEARCH_WILDCARD_LIMIT might be
browser.open(make_surl('/search?p=a*&of=id&wl=10000'))
recid_list_authuser_with_limit = browser.response().read()
#the authorized user can set whatever limit he might wish
#so the results returned for authorized users should be at least as many as those returned for unauthorized users
self.failUnless(len(recid_list_guest_no_limit.split(',')) <= len(recid_list_authuser_with_limit.split(',')))
#logout
browser.open(make_surl('/youraccount/logout'))
browser.response().read()
browser.close()
class WebSearchNearestTermsTest(InvenioTestCase):
"""Check various alternatives of searches leading to the nearest
terms box."""
#def test_nearest_terms_box_in_okay_query(self):
# """ websearch - no nearest terms box for a successful query """
# self.assertEqual([],
# test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
# expected_text="jump to record"))
def test_nearest_terms_box_in_unsuccessful_simple_query(self):
""" websearch - nearest terms box for unsuccessful simple query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellisz',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=embed",
expected_link_label='embed'))
def test_nearest_terms_box_in_unsuccessful_simple_accented_query(self):
""" websearch - nearest terms box for unsuccessful accented query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=elliszà',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=embed",
expected_link_label='embed'))
def test_nearest_terms_box_in_unsuccessful_structured_query(self):
""" websearch - nearest terms box for unsuccessful structured query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellisz&f=author',
expected_text="Nearest terms in any collection are",
- expected_link_target=CFG_SITE_URL+"/search?ln=en&p=fabbro&f=author",
- expected_link_label='fabbro'))
+ expected_link_target=CFG_SITE_URL+"/search?ln=en&p=eisenhandler&f=author",
+ expected_link_label='eisenhandler'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=author%3Aellisz',
expected_text="Nearest terms in any collection are",
- expected_link_target=CFG_SITE_URL+"/search?ln=en&p=author%3Afabbro",
- expected_link_label='fabbro'))
+ expected_link_target=CFG_SITE_URL+"/search?ln=en&p=author%3Aeisenhandler",
+ expected_link_label='eisenhandler'))
+
def test_nearest_terms_box_in_query_with_invalid_index(self):
""" websearch - nearest terms box for queries with invalid indexes specified """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=bednarz%3Aellis',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=bednarz",
expected_link_label='bednarz'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=1%3Aellis',
expected_text="no index 1.",
expected_link_target=CFG_SITE_URL+"/record/47?ln=en",
expected_link_label="Detailed record"))
def test_nearest_terms_box_in_unsuccessful_phrase_query(self):
""" websearch - nearest terms box for unsuccessful phrase query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=author%3A%22Ellis%2C+Z%22',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=author%3A%22Enqvist%2C+K%22",
expected_link_label='Enqvist, K'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%22ellisz%22&f=author',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=%22Enqvist%2C+K%22&f=author",
expected_link_label='Enqvist, K'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%22elliszà%22&f=author',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=%22Enqvist%2C+K%22&f=author",
expected_link_label='Enqvist, K'))
def test_nearest_terms_box_in_unsuccessful_partial_phrase_query(self):
""" websearch - nearest terms box for unsuccessful partial phrase query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=author%3A%27Ellis%2C+Z%27',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=author%3A%27Enqvist%2C+K%27",
expected_link_label='Enqvist, K'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%27ellisz%27&f=author',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=%27Enqvist%2C+K%27&f=author",
expected_link_label='Enqvist, K'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%27elliszà%27&f=author',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=%27Enqvist%2C+K%27&f=author",
expected_link_label='Enqvist, K'))
def test_nearest_terms_box_in_unsuccessful_partial_phrase_advanced_query(self):
""" websearch - nearest terms box for unsuccessful partial phrase advanced search query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p1=aaa&f1=title&m1=p&as=1',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&f1=title&as=1&p1=A+simple+functional+form+for+proton-nucleus+total+reaction+cross+sections&m1=p",
expected_link_label='A simple functional form for proton-nucleus total reaction cross sections'))
@nottest
def test_nearest_terms_box_in_unsuccessful_exact_phrase_advanced_query(self):
""" websearch - nearest terms box for unsuccessful exact phrase advanced search query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p1=aaa&f1=title&m1=e&as=1',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&f1=title&as=1&p1=A+simple+functional+form+for+proton-nucleus+total+reaction+cross+sections&m1=e",
expected_link_label='A simple functional form for proton-nucleus total reaction cross sections'))
def test_nearest_terms_box_in_unsuccessful_boolean_query(self):
""" websearch - nearest terms box for unsuccessful boolean query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=title%3Aellisz+author%3Aellisz',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=title%3Aenergi+author%3Aellisz",
expected_link_label='energi'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=title%3Aenergi+author%3Aenergie',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=title%3Aenergi+author%3Aenqvist",
expected_link_label='enqvist'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=title%3Aellisz+author%3Aellisz&f=keyword',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=title%3Aenergi+author%3Aellisz&f=keyword",
expected_link_label='energi'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=title%3Aenergi+author%3Aenergie&f=keyword',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=title%3Aenergi+author%3Aenqvist&f=keyword",
expected_link_label='enqvist'))
def test_nearest_terms_box_in_unsuccessful_uppercase_query(self):
""" websearch - nearest terms box for unsuccessful uppercase query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=fOo%3Atest',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=food",
expected_link_label='food'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=arXiv%3A1007.5048',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=artist",
expected_link_label='artist'))
def test_nearest_terms_box_in_unsuccessful_spires_query(self):
""" websearch - nearest terms box for unsuccessful spires query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=find+a+foobar',
expected_text="Nearest terms in any collection are",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=find+a+finch",
expected_link_label='finch'))
class WebSearchBooleanQueryTest(InvenioTestCase):
"""Check various boolean queries."""
def test_successful_boolean_query(self):
""" websearch - successful boolean query """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis+muon',
expected_text="records found",
expected_link_label="Detailed record"))
def test_unsuccessful_boolean_query_where_all_individual_terms_match(self):
""" websearch - unsuccessful boolean query where all individual terms match """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis+muon+letter',
expected_text="Boolean query returned no hits. Please combine your search terms differently."))
def test_unsuccessful_boolean_query_in_advanced_search_where_all_individual_terms_match(self):
""" websearch - unsuccessful boolean query in advanced search where all individual terms match """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?m1=a&p1=ellis&op1=a&m2=a&p2=muon&op2=a&p3=letter',
expected_text="Boolean query returned no hits. Please combine your search terms differently."))
class WebSearchAuthorQueryTest(InvenioTestCase):
"""Check various author-related queries."""
def test_propose_similar_author_names_box(self):
""" websearch - propose similar author names box """
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=Ellis%2C+R&f=author',
expected_text="See also: similar author names",
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=Ellis%2C+R+K&f=author",
expected_link_label="Ellis, R K"))
def test_do_not_propose_similar_author_names_box(self):
""" websearch - do not propose similar author names box """
errmsgs = test_web_page_content(CFG_SITE_URL + '/search?p=author%3A%22Ellis%2C+R%22',
expected_link_target=CFG_SITE_URL+"/search?ln=en&p=Ellis%2C+R+K&f=author",
expected_link_label="Ellis, R K")
if errmsgs[0].find("does not contain link to") == -1:
self.fail("Should not propose similar author names box.")
class WebSearchSearchEnginePythonAPITest(InvenioTestCase):
"""Check typical search engine Python API calls on the demo data."""
def test_search_engine_python_api_for_failed_query(self):
"""websearch - search engine Python API for failed query"""
from invenio.search_engine import perform_request_search
self.assertEqual([],
perform_request_search(p='aoeuidhtns'))
def test_search_engine_python_api_for_successful_query(self):
"""websearch - search engine Python API for successful query"""
from invenio.search_engine import perform_request_search
self.assertEqual([8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 47],
perform_request_search(p='ellis'))
def test_search_engine_python_api_for_successful_query_format_intbitset(self):
"""websearch - search engine Python API for successful query, output format intbitset"""
from invenio.intbitset import intbitset
from invenio.search_engine import perform_request_search
self.assertEqual(intbitset([8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 47]),
perform_request_search(p='ellis', of='intbitset'))
def test_search_engine_python_api_ignore_paging_parameter(self):
"""websearch - search engine Python API for successful query, ignore paging parameters"""
from invenio.search_engine import perform_request_search
self.assertEqual([8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 47],
perform_request_search(p='ellis', rg=5, jrec=3))
def test_search_engine_python_api_respect_sorting_parameter(self):
"""websearch - search engine Python API for successful query, respect sorting parameters"""
from invenio.search_engine import perform_request_search
self.assertEqual([77, 84, 85],
perform_request_search(p='klebanov'))
self.assertEqual([77, 85, 84],
perform_request_search(p='klebanov', sf='909C4v'))
def test_search_engine_python_api_respect_ranking_parameter(self):
"""websearch - search engine Python API for successful query, respect ranking parameters"""
from invenio.search_engine import perform_request_search
self.assertEqual([77, 84, 85],
perform_request_search(p='klebanov'))
self.assertEqual([85, 77, 84],
perform_request_search(p='klebanov', rm='citation'))
def test_search_engine_python_api_for_existing_record(self):
"""websearch - search engine Python API for existing record"""
from invenio.search_engine import perform_request_search
self.assertEqual([8],
perform_request_search(recid=8))
def test_search_engine_python_api_for_existing_record_format_intbitset(self):
"""websearch - search engine Python API for existing record, output format intbitset"""
from invenio.intbitset import intbitset
from invenio.search_engine import perform_request_search
self.assertEqual(intbitset([8]),
perform_request_search(recid=8, of='intbitset'))
def test_search_engine_python_api_for_nonexisting_record(self):
"""websearch - search engine Python API for non-existing record"""
from invenio.search_engine import perform_request_search
self.assertEqual([],
perform_request_search(recid=16777215))
def test_search_engine_python_api_for_nonexisting_record_format_intbitset(self):
"""websearch - search engine Python API for non-existing record, output format intbitset"""
from invenio.intbitset import intbitset
from invenio.search_engine import perform_request_search
self.assertEqual(intbitset(),
perform_request_search(recid=16777215, of='intbitset'))
def test_search_engine_python_api_for_nonexisting_collection(self):
"""websearch - search engine Python API for non-existing collection"""
from invenio.search_engine import perform_request_search
self.assertEqual([],
perform_request_search(c='Foo'))
def test_search_engine_python_api_for_range_of_records(self):
"""websearch - search engine Python API for range of records"""
from invenio.search_engine import perform_request_search
self.assertEqual([1, 2, 3, 4, 5, 6, 7, 8, 9],
perform_request_search(recid=1, recidb=10))
def test_search_engine_python_api_ranked_by_citation(self):
"""websearch - search engine Python API for citation ranking"""
from invenio.search_engine import perform_request_search
self.assertEqual([82, 83, 87, 89],
perform_request_search(p='recid:81', rm='citation'))
def test_search_engine_python_api_textmarc_full(self):
"""websearch - search engine Python API for Text MARC output, full"""
from invenio.search_engine import perform_request_search
from invenio.search_engine_utils import get_fieldvalues
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='tm')
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
000000107 001__ 107
000000107 003__ SzGeCERN
000000107 005__ %(rec_107_rev)s
000000107 035__ $$9SPIRES$$a4066995
000000107 037__ $$aCERN-EP-99-060
000000107 041__ $$aeng
000000107 084__ $$2CERN Library$$aEP-1999-060
000000107 088__ $$9SCAN-9910048
000000107 088__ $$aCERN-L3-175
000000107 110__ $$aCERN. Geneva
000000107 245__ $$aLimits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183 GeV$
000000107 260__ $$c1999
000000107 269__ $$aGeneva$$bCERN$$c26 Apr 1999
000000107 300__ $$a18 p
000000107 490__ $$aALEPH Papers
000000107 500__ $$aPreprint not submitted to publication
000000107 65017 $$2SzGeCERN$$aParticle Physics - Experiment
000000107 690C_ $$aCERN
000000107 690C_ $$aPREPRINT
000000107 693__ $$aCERN LEP$$eALEPH
000000107 693__ $$aCERN LEP$$eDELPHI
000000107 693__ $$aCERN LEP$$eL3
000000107 693__ $$aCERN LEP$$eOPAL
000000107 695__ $$9MEDLINE$$asearches Higgs bosons
000000107 697C_ $$aLexiHiggs
000000107 710__ $$5EP
000000107 710__ $$gALEPH Collaboration
000000107 710__ $$gDELPHI Collaboration
000000107 710__ $$gL3 Collaboration
000000107 710__ $$gLEP Working Group for Higgs Boson Searches
000000107 710__ $$gOPAL Collaboration
000000107 901__ $$uCERN
000000107 916__ $$sh$$w199941
000000107 960__ $$a11
000000107 963__ $$aPUBLIC
000000107 970__ $$a000330309CER
000000107 980__ $$aARTICLE
000000085 001__ 85
000000085 003__ SzGeCERN
000000085 005__ %(rec_85_rev)s
000000085 035__ $$a2356302CERCER
000000085 035__ $$9SLAC$$a5423422
000000085 037__ $$ahep-th/0212181
000000085 041__ $$aeng
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 245__ $$a3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS
000000085 260__ $$c2003
000000085 269__ $$c16 Dec 2002
000000085 300__ $$a8 p
000000085 520__ $$aWe study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.
000000085 65017 $$2SzGeCERN$$aParticle Physics - Theory
000000085 690C_ $$aARTICLE
000000085 695__ $$9LANL EDS$$aHigh Energy Physics - Theory
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.pdf
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.ps.gz
000000085 859__ $$falberto.zaffaroni@mib.infn.it
000000085 909C4 $$c289-293$$pPhys. Lett. B$$v561$$y2003
000000085 916__ $$sn$$w200251
000000085 960__ $$a13
000000085 961__ $$c20060823$$h0007$$lCER01$$x20021217
000000085 963__ $$aPUBLIC
000000085 970__ $$a002356302CER
000000085 980__ $$aARTICLE
000000085 999C5 $$mD. Francia and A. Sagnotti,$$o[1]$$rhep-th/0207002$$sPhys. Lett. B 543 (2002) 303
000000085 999C5 $$mP. Haggi-Mani and B. Sundborg,$$o[1]$$rhep-th/0002189$$sJ. High Energy Phys. 0004 (2000) 031
000000085 999C5 $$mB. Sundborg,$$o[1]$$rhep-th/0103247$$sNucl. Phys. B, Proc. Suppl. 102 (2001) 113
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0105001$$sJ. High Energy Phys. 0109 (2001) 036
000000085 999C5 $$mA. Mikhailov,$$o[1]$$rhep-th/0201019
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205131$$sNucl. Phys. B 644 (2002) 303
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205132$$sJ. High Energy Phys. 0207 (2002) 055
000000085 999C5 $$mJ. Engquist, E. Sezgin and P. Sundell,$$o[1]$$rhep-th/0207101$$sClass. Quantum Gravity 19 (2002) 6175
000000085 999C5 $$mM. A. Vasiliev,$$o[1]$$rhep-th/9611024$$sInt. J. Mod. Phys. D 5 (1996) 763
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9808004$$sNucl. Phys. B 541 (1999) 323
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9906167$$sClass. Quantum Gravity 17 (2000) 1383
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sNucl. Phys. B 291 (1987) 141
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sPhys. Lett. B 189 (1987) 89
000000085 999C5 $$mI. R. Klebanov and A. M. Polyakov,$$o[3]$$rhep-th/0210114$$sPhys. Lett. B 550 (2002) 213
000000085 999C5 $$mM. A. Vasiliev,$$o[4]$$rhep-th/9910096
000000085 999C5 $$mT. Leonhardt, A. Meziane and W. Ruhl,$$o[5]$$rhep-th/0211092
000000085 999C5 $$mO. Aharony, M. Berkooz and E. Silverstein,$$o[6]$$rhep-th/0105309$$sJ. High Energy Phys. 0108 (2001) 006
000000085 999C5 $$mE. Witten,$$o[7]$$rhep-th/0112258
000000085 999C5 $$mM. Berkooz, A. Sever and A. Shomer$$o[8]$$rhep-th/0112264$$sJ. High Energy Phys. 0205 (2002) 034
000000085 999C5 $$mS. S. Gubser and I. Mitra,$$o[9]$$rhep-th/0210093
000000085 999C5 $$mS. S. Gubser and I. R. Klebanov,$$o[10]$$rhep-th/0212138
000000085 999C5 $$mM. Porrati,$$o[11]$$rhep-th/0112166$$sJ. High Energy Phys. 0204 (2002) 058
000000085 999C5 $$mK. G. Wilson and J. B. Kogut,$$o[12]$$sPhys. Rep. 12 (1974) 75
000000085 999C5 $$mI. R. Klebanov and E. Witten,$$o[13]$$rhep-th/9905104$$sNucl. Phys. B 556 (1999) 89
000000085 999C5 $$mW. Heidenreich,$$o[14]$$sJ. Math. Phys. 22 (1981) 1566
000000085 999C5 $$mD. Anselmi,$$o[15]$$rhep-th/0210123
000000001 001__ 1
000000001 005__ %(rec_1_rev)s
000000001 037__ $$aCERN-EX-0106015
000000001 100__ $$aPhotolab
000000001 245__ $$aALEPH experiment: Candidate of Higgs boson production
000000001 246_1 $$aExpérience ALEPH: Candidat de la production d'un boson Higgs
000000001 260__ $$c14 06 2000
000000001 340__ $$aFILM
000000001 520__ $$aCandidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.
000000001 65017 $$2SzGeCERN$$aExperiments and Tracks
000000001 6531_ $$aLEP
000000001 8560_ $$fneil.calder@cern.ch
000000001 8564_ $$u%(siteurl)s/record/1/files/0106015_01.jpg
000000001 8564_ $$u%(siteurl)s/record/1/files/0106015_01.gif?subformat=icon$$xicon
000000001 909C0 $$o0003717PHOPHO
000000001 909C0 $$y2000
000000001 909C0 $$b81
000000001 909C1 $$c2001-06-14$$l50$$m2001-08-27$$oCM
000000001 909CP $$pBldg. 2
000000001 909CP $$rCalder, N
000000001 909CS $$sn$$w200231
000000001 980__ $$aPICTURE
""" % {'siteurl': CFG_SITE_URL,
'rec_1_rev': get_fieldvalues(1, '005__')[0],
'rec_85_rev': get_fieldvalues(85, '005__')[0],
'rec_107_rev': get_fieldvalues(107, '005__')[0]})
def test_search_engine_python_api_textmarc_field_filtered(self):
"""websearch - search engine Python API for Text MARC output, field-filtered"""
from invenio.search_engine import perform_request_search
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='tm', ot=['100', '700'])
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
000000001 100__ $$aPhotolab
""")
def test_search_engine_python_api_for_intersect_results_with_one_collrec(self):
"""websearch - search engine Python API for intersect results with one collrec"""
from invenio.intbitset import intbitset
from invenio.search_engine import intersect_results_with_collrecs
self.assertEqual({'Books & Reports': intbitset([19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])},
intersect_results_with_collrecs(None, intbitset(range(0,110)), ['Books & Reports'], 0, 'id', 0, 'en', False))
def test_search_engine_python_api_for_intersect_results_with_several_collrecs(self):
"""websearch - search engine Python API for intersect results with several collrecs"""
from invenio.intbitset import intbitset
from invenio.search_engine import intersect_results_with_collrecs
self.assertEqual({'Books': intbitset([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]),
'Reports': intbitset([19, 20]),
'Theses': intbitset([35, 36, 37, 38, 39, 40, 41, 42, 105])},
intersect_results_with_collrecs(None, intbitset(range(0,110)), ['Books', 'Theses', 'Reports'], 0, 'id', 0, 'en', False))
def test_search_engine_python_api_textmarc_field_filtered_hidden_guest(self):
"""websearch - search engine Python API for Text MARC output, field-filtered, hidden field, no guest access"""
from invenio.search_engine import perform_request_search
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='tm', ot=['100', '595'])
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000001 100__ $$aPhotolab
""")
def test_search_engine_python_api_xmlmarc_full(self):
"""websearch - search engine Python API for XMLMARC output, full"""
from invenio.search_engine import perform_request_search
from invenio.search_engine_utils import get_fieldvalues
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='xm')
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">%(rec_107_rev)s</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SPIRES</subfield>
<subfield code="a">4066995</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EP-99-060</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="084" ind1=" " ind2=" ">
<subfield code="2">CERN Library</subfield>
<subfield code="a">EP-1999-060</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="9">SCAN-9910048</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-L3-175</subfield>
</datafield>
<datafield tag="110" ind1=" " ind2=" ">
<subfield code="a">CERN. Geneva</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Limits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183 GeV$</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">26 Apr 1999</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">ALEPH Papers</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Preprint not submitted to publication</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experiment</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">DELPHI</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">L3</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">OPAL</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">MEDLINE</subfield>
<subfield code="a">searches Higgs bosons</subfield>
</datafield>
<datafield tag="697" ind1="C" ind2=" ">
<subfield code="a">LexiHiggs</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">EP</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">ALEPH Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">DELPHI Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">L3 Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">LEP Working Group for Higgs Boson Searches</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">OPAL Collaboration</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">h</subfield>
<subfield code="w">199941</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000330309CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">%(rec_85_rev)s</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2356302CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5423422</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212181</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">16 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.pdf</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">289-293</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">561</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">alberto.zaffaroni@mib.infn.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200251</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021217</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002356302CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Francia and A. Sagnotti,</subfield>
<subfield code="s">Phys. Lett. B 543 (2002) 303</subfield>
<subfield code="r">hep-th/0207002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">P. Haggi-Mani and B. Sundborg,</subfield>
<subfield code="s">J. High Energy Phys. 0004 (2000) 031</subfield>
<subfield code="r">hep-th/0002189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">B. Sundborg,</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 102 (2001) 113</subfield>
<subfield code="r">hep-th/0103247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0109 (2001) 036</subfield>
<subfield code="r">hep-th/0105001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A. Mikhailov,</subfield>
<subfield code="r">hep-th/0201019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Nucl. Phys. B 644 (2002) 303</subfield>
<subfield code="r">hep-th/0205131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 055</subfield>
<subfield code="r">hep-th/0205132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Engquist, E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 6175</subfield>
<subfield code="r">hep-th/0207101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="s">Int. J. Mod. Phys. D 5 (1996) 763</subfield>
<subfield code="r">hep-th/9611024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 323</subfield>
<subfield code="r">hep-th/9808004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Class. Quantum Gravity 17 (2000) 1383</subfield>
<subfield code="r">hep-th/9906167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Nucl. Phys. B 291 (1987) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 550 (2002) 213</subfield>
<subfield code="r">hep-th/0210114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="r">hep-th/9910096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">T. Leonhardt, A. Meziane and W. Ruhl,</subfield>
<subfield code="r">hep-th/0211092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein,</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">S. S. Gubser and I. Mitra,</subfield>
<subfield code="r">hep-th/0210093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">S. S. Gubser and I. R. Klebanov,</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Porrati,</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 058</subfield>
<subfield code="r">hep-th/0112166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">K. G. Wilson and J. B. Kogut,</subfield>
<subfield code="s">Phys. Rep. 12 (1974) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">W. Heidenreich,</subfield>
<subfield code="s">J. Math. Phys. 22 (1981) 1566</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="r">hep-th/0210123</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<controlfield tag="005">%(rec_1_rev)s</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EX-0106015</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">ALEPH experiment: Candidate of Higgs boson production</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Expérience ALEPH: Candidat de la production d'un boson Higgs</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 06 2000</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">FILM</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Candidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Experiments and Tracks</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LEP</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/1/files/0106015_01.jpg</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/1/files/0106015_01.gif?subformat=icon</subfield>
<subfield code="x">icon</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0003717PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-06-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-08-27</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 2</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
</collection>""" % {'siteurl': CFG_SITE_URL,
'rec_1_rev': get_fieldvalues(1, '005__')[0],
'rec_85_rev': get_fieldvalues(85, '005__')[0],
'rec_107_rev': get_fieldvalues(107, '005__')[0]})
def test_search_engine_python_api_xmlmarc_field_filtered(self):
"""websearch - search engine Python API for XMLMARC output, field-filtered"""
# we are testing example from /help/hacking/search-engine-api
from invenio.search_engine import perform_request_search
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='xm', ot=['100', '700'])
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
</record>
</collection>""")
def test_search_engine_python_api_xmlmarc_field_filtered_hidden_guest(self):
"""websearch - search engine Python API for XMLMARC output, field-filtered, hidden field, no guest access"""
# we are testing example from /help/hacking/search-engine-api
from invenio.search_engine import perform_request_search
import cStringIO
tmp = cStringIO.StringIO()
perform_request_search(req=tmp, p='higgs', of='xm', ot=['100', '595'])
out = tmp.getvalue()
tmp.close()
self.assertEqual(out, """\
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
</record>
</collection>""")
+
+ def test_search_engine_python_api_long_author_with_quotes(self):
+ """websearch - search engine Python API for p=author:"Abbott, R B"
+
+ This test was written along with a bug report and needs fixing."""
+ self.assertEqual([16], perform_request_search(p='author:"Abbott, R B"'))
+
class WebSearchSearchEngineWebAPITest(InvenioTestCase):
"""Check typical search engine Web API calls on the demo data."""
def test_search_engine_web_api_for_failed_query(self):
"""websearch - search engine Web API for failed query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=aoeuidhtns&of=id',
expected_text="[]"))
+ def test_search_engine_web_api_for_failed_query_format_intbitset(self):
+ """websearch - search engine Web API for failed query, output format intbitset"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=aoeuidhtns&of=intbitset',
+ expected_text=intbitset().fastdump()))
def test_search_engine_web_api_for_successful_query(self):
"""websearch - search engine Web API for successful query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis&of=id',
expected_text="[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 47]"))
def test_search_engine_web_api_ignore_paging_parameter(self):
"""websearch - search engine Web API for successful query, ignore paging parameters"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis&of=id&rg=5&jrec=3',
expected_text="[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 47]"))
def test_search_engine_web_api_respect_sorting_parameter(self):
"""websearch - search engine Web API for successful query, respect sorting parameters"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id',
expected_text="[84, 85]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id',
username="admin",
expected_text="[77, 84, 85]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id&sf=909C4v',
expected_text="[85, 84]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id&sf=909C4v',
username="admin",
expected_text="[77, 85, 84]"))
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=intbitset&sf=909C4v',
+ username="admin",
+ expected_text=intbitset([77, 84, 85]).fastdump()))
def test_search_engine_web_api_respect_ranking_parameter(self):
"""websearch - search engine Web API for successful query, respect ranking parameters"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id',
expected_text="[84, 85]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id',
username="admin",
expected_text="[77, 84, 85]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id&rm=citation',
expected_text="[85, 84]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=id&rm=citation',
username="admin",
expected_text="[85, 77, 84]"))
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=klebanov&of=intbitset&rm=citation',
+ username="admin",
+ expected_text=intbitset([77, 84, 85]).fastdump()))
def test_search_engine_web_api_for_existing_record(self):
"""websearch - search engine Web API for existing record"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?recid=8&of=id',
expected_text="[8]"))
def test_search_engine_web_api_for_nonexisting_record(self):
"""websearch - search engine Web API for non-existing record"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?recid=123456789&of=id',
expected_text="[]"))
def test_search_engine_web_api_for_nonexisting_collection(self):
"""websearch - search engine Web API for non-existing collection"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?c=Foo&of=id',
expected_text="[]"))
def test_search_engine_web_api_for_range_of_records(self):
"""websearch - search engine Web API for range of records"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?recid=1&recidb=10&of=id',
expected_text="[1, 2, 3, 4, 5, 6, 7, 8, 9]"))
def test_search_engine_web_api_ranked_by_citation(self):
"""websearch - search engine Web API for citation ranking"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A81&rm=citation&of=id',
expected_text="[82, 83, 87, 89]"))
def test_search_engine_web_api_textmarc_full(self):
"""websearch - search engine Web API for Text MARC output, full"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=tm',
expected_text="""\
000000107 001__ 107
000000107 003__ SzGeCERN
000000107 005__ %(rec_107_rev)s
000000107 035__ $$9SPIRES$$a4066995
000000107 037__ $$aCERN-EP-99-060
000000107 041__ $$aeng
000000107 084__ $$2CERN Library$$aEP-1999-060
000000107 088__ $$9SCAN-9910048
000000107 088__ $$aCERN-L3-175
000000107 110__ $$aCERN. Geneva
000000107 245__ $$aLimits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183 GeV$
000000107 260__ $$c1999
000000107 269__ $$aGeneva$$bCERN$$c26 Apr 1999
000000107 300__ $$a18 p
000000107 490__ $$aALEPH Papers
000000107 500__ $$aPreprint not submitted to publication
000000107 65017 $$2SzGeCERN$$aParticle Physics - Experiment
000000107 690C_ $$aCERN
000000107 690C_ $$aPREPRINT
000000107 693__ $$aCERN LEP$$eALEPH
000000107 693__ $$aCERN LEP$$eDELPHI
000000107 693__ $$aCERN LEP$$eL3
000000107 693__ $$aCERN LEP$$eOPAL
000000107 695__ $$9MEDLINE$$asearches Higgs bosons
000000107 697C_ $$aLexiHiggs
000000107 710__ $$5EP
000000107 710__ $$gALEPH Collaboration
000000107 710__ $$gDELPHI Collaboration
000000107 710__ $$gL3 Collaboration
000000107 710__ $$gLEP Working Group for Higgs Boson Searches
000000107 710__ $$gOPAL Collaboration
000000107 901__ $$uCERN
000000107 916__ $$sh$$w199941
000000107 960__ $$a11
000000107 963__ $$aPUBLIC
000000107 970__ $$a000330309CER
000000107 980__ $$aARTICLE
000000085 001__ 85
000000085 003__ SzGeCERN
000000085 005__ %(rec_85_rev)s
000000085 035__ $$a2356302CERCER
000000085 035__ $$9SLAC$$a5423422
000000085 037__ $$ahep-th/0212181
000000085 041__ $$aeng
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 245__ $$a3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS
000000085 260__ $$c2003
000000085 269__ $$c16 Dec 2002
000000085 300__ $$a8 p
000000085 520__ $$aWe study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.
000000085 65017 $$2SzGeCERN$$aParticle Physics - Theory
000000085 690C_ $$aARTICLE
000000085 695__ $$9LANL EDS$$aHigh Energy Physics - Theory
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.pdf
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.ps.gz
000000085 859__ $$falberto.zaffaroni@mib.infn.it
000000085 909C4 $$c289-293$$pPhys. Lett. B$$v561$$y2003
000000085 916__ $$sn$$w200251
000000085 960__ $$a13
000000085 961__ $$c20060823$$h0007$$lCER01$$x20021217
000000085 963__ $$aPUBLIC
000000085 970__ $$a002356302CER
000000085 980__ $$aARTICLE
000000085 999C5 $$mD. Francia and A. Sagnotti,$$o[1]$$rhep-th/0207002$$sPhys. Lett. B 543 (2002) 303
000000085 999C5 $$mP. Haggi-Mani and B. Sundborg,$$o[1]$$rhep-th/0002189$$sJ. High Energy Phys. 0004 (2000) 031
000000085 999C5 $$mB. Sundborg,$$o[1]$$rhep-th/0103247$$sNucl. Phys. B, Proc. Suppl. 102 (2001) 113
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0105001$$sJ. High Energy Phys. 0109 (2001) 036
000000085 999C5 $$mA. Mikhailov,$$o[1]$$rhep-th/0201019
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205131$$sNucl. Phys. B 644 (2002) 303
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205132$$sJ. High Energy Phys. 0207 (2002) 055
000000085 999C5 $$mJ. Engquist, E. Sezgin and P. Sundell,$$o[1]$$rhep-th/0207101$$sClass. Quantum Gravity 19 (2002) 6175
000000085 999C5 $$mM. A. Vasiliev,$$o[1]$$rhep-th/9611024$$sInt. J. Mod. Phys. D 5 (1996) 763
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9808004$$sNucl. Phys. B 541 (1999) 323
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9906167$$sClass. Quantum Gravity 17 (2000) 1383
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sNucl. Phys. B 291 (1987) 141
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sPhys. Lett. B 189 (1987) 89
000000085 999C5 $$mI. R. Klebanov and A. M. Polyakov,$$o[3]$$rhep-th/0210114$$sPhys. Lett. B 550 (2002) 213
000000085 999C5 $$mM. A. Vasiliev,$$o[4]$$rhep-th/9910096
000000085 999C5 $$mT. Leonhardt, A. Meziane and W. Ruhl,$$o[5]$$rhep-th/0211092
000000085 999C5 $$mO. Aharony, M. Berkooz and E. Silverstein,$$o[6]$$rhep-th/0105309$$sJ. High Energy Phys. 0108 (2001) 006
000000085 999C5 $$mE. Witten,$$o[7]$$rhep-th/0112258
000000085 999C5 $$mM. Berkooz, A. Sever and A. Shomer$$o[8]$$rhep-th/0112264$$sJ. High Energy Phys. 0205 (2002) 034
000000085 999C5 $$mS. S. Gubser and I. Mitra,$$o[9]$$rhep-th/0210093
000000085 999C5 $$mS. S. Gubser and I. R. Klebanov,$$o[10]$$rhep-th/0212138
000000085 999C5 $$mM. Porrati,$$o[11]$$rhep-th/0112166$$sJ. High Energy Phys. 0204 (2002) 058
000000085 999C5 $$mK. G. Wilson and J. B. Kogut,$$o[12]$$sPhys. Rep. 12 (1974) 75
000000085 999C5 $$mI. R. Klebanov and E. Witten,$$o[13]$$rhep-th/9905104$$sNucl. Phys. B 556 (1999) 89
000000085 999C5 $$mW. Heidenreich,$$o[14]$$sJ. Math. Phys. 22 (1981) 1566
000000085 999C5 $$mD. Anselmi,$$o[15]$$rhep-th/0210123
000000001 001__ 1
000000001 005__ %(rec_1_rev)s
000000001 037__ $$aCERN-EX-0106015
000000001 100__ $$aPhotolab
000000001 245__ $$aALEPH experiment: Candidate of Higgs boson production
000000001 246_1 $$aExpérience ALEPH: Candidat de la production d'un boson Higgs
000000001 260__ $$c14 06 2000
000000001 340__ $$aFILM
000000001 520__ $$aCandidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.
000000001 65017 $$2SzGeCERN$$aExperiments and Tracks
000000001 6531_ $$aLEP
000000001 8560_ $$fneil.calder@cern.ch
000000001 8564_ $$u%(siteurl)s/record/1/files/0106015_01.jpg
000000001 8564_ $$u%(siteurl)s/record/1/files/0106015_01.gif?subformat=icon$$xicon
000000001 909C0 $$o0003717PHOPHO
000000001 909C0 $$y2000
000000001 909C0 $$b81
000000001 909C1 $$c2001-06-14$$l50$$m2001-08-27$$oCM
000000001 909CP $$pBldg. 2
000000001 909CP $$rCalder, N
000000001 909CS $$sn$$w200231
000000001 980__ $$aPICTURE
""" % {'siteurl': CFG_SITE_URL,
'rec_1_rev': get_fieldvalues(1, '005__')[0],
'rec_85_rev': get_fieldvalues(85, '005__')[0],
'rec_107_rev': get_fieldvalues(107, '005__')[0]}))
def test_search_engine_web_api_textmarc_field_filtered(self):
"""websearch - search engine Web API for Text MARC output, field-filtered"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=tm&ot=100,700',
expected_text="""\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
000000001 100__ $$aPhotolab
"""))
def test_search_engine_web_api_textmarc_field_filtered_hidden_guest(self):
"""websearch - search engine Web API for Text MARC output, field-filtered, hidden field, no guest access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=tm&ot=100,595',
expected_text="""\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000001 100__ $$aPhotolab
"""))
def test_search_engine_web_api_textmarc_field_filtered_hidden_admin(self):
"""websearch - search engine Web API for Text MARC output, field-filtered, hidden field, admin access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=tm&ot=100,595',
username='admin',
expected_text="""\
+000000107 595__ $$aNo authors
+000000107 595__ $$aCERN-EP
+000000107 595__ $$aOA
+000000107 595__ $$aSIS:200740 PR/LKR not found (from SLAC, INSPEC)
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 595__ $$aLANL EDS
000000085 595__ $$aSIS LANLPUBL2004
000000085 595__ $$aSIS:2004 PR/LKR added
000000001 100__ $$aPhotolab
000000001 595__ $$aPress
"""))
def test_search_engine_web_api_textmarc_subfield_values(self):
"""websearch - search engine Web API for Text MARC output, subfield values"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=tm&ot=700__a',
expected_text="""\
Porrati, Massimo
Zaffaroni, A
"""))
def test_search_engine_web_api_xmlmarc_full(self):
"""websearch - search engine Web API for XMLMARC output, full"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=xm',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">%(rec_107_rev)s</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SPIRES</subfield>
<subfield code="a">4066995</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EP-99-060</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="084" ind1=" " ind2=" ">
<subfield code="2">CERN Library</subfield>
<subfield code="a">EP-1999-060</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="9">SCAN-9910048</subfield>
</datafield>
<datafield tag="088" ind1=" " ind2=" ">
<subfield code="a">CERN-L3-175</subfield>
</datafield>
<datafield tag="110" ind1=" " ind2=" ">
<subfield code="a">CERN. Geneva</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">Limits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183 GeV$</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">1999</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="a">Geneva</subfield>
<subfield code="b">CERN</subfield>
<subfield code="c">26 Apr 1999</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">18 p</subfield>
</datafield>
<datafield tag="490" ind1=" " ind2=" ">
<subfield code="a">ALEPH Papers</subfield>
</datafield>
<datafield tag="500" ind1=" " ind2=" ">
<subfield code="a">Preprint not submitted to publication</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Experiment</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">CERN</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">PREPRINT</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">ALEPH</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">DELPHI</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">L3</subfield>
</datafield>
<datafield tag="693" ind1=" " ind2=" ">
<subfield code="a">CERN LEP</subfield>
<subfield code="e">OPAL</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">MEDLINE</subfield>
<subfield code="a">searches Higgs bosons</subfield>
</datafield>
<datafield tag="697" ind1="C" ind2=" ">
<subfield code="a">LexiHiggs</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="5">EP</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">ALEPH Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">DELPHI Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">L3 Collaboration</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">LEP Working Group for Higgs Boson Searches</subfield>
</datafield>
<datafield tag="710" ind1=" " ind2=" ">
<subfield code="g">OPAL Collaboration</subfield>
</datafield>
<datafield tag="901" ind1=" " ind2=" ">
<subfield code="u">CERN</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">h</subfield>
<subfield code="w">199941</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">11</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">000330309CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">%(rec_85_rev)s</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2356302CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5423422</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212181</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">16 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.pdf</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">289-293</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">561</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">alberto.zaffaroni@mib.infn.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200251</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021217</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002356302CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Francia and A. Sagnotti,</subfield>
<subfield code="s">Phys. Lett. B 543 (2002) 303</subfield>
<subfield code="r">hep-th/0207002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">P. Haggi-Mani and B. Sundborg,</subfield>
<subfield code="s">J. High Energy Phys. 0004 (2000) 031</subfield>
<subfield code="r">hep-th/0002189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">B. Sundborg,</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 102 (2001) 113</subfield>
<subfield code="r">hep-th/0103247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0109 (2001) 036</subfield>
<subfield code="r">hep-th/0105001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A. Mikhailov,</subfield>
<subfield code="r">hep-th/0201019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Nucl. Phys. B 644 (2002) 303</subfield>
<subfield code="r">hep-th/0205131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 055</subfield>
<subfield code="r">hep-th/0205132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Engquist, E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 6175</subfield>
<subfield code="r">hep-th/0207101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="s">Int. J. Mod. Phys. D 5 (1996) 763</subfield>
<subfield code="r">hep-th/9611024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 323</subfield>
<subfield code="r">hep-th/9808004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Class. Quantum Gravity 17 (2000) 1383</subfield>
<subfield code="r">hep-th/9906167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Nucl. Phys. B 291 (1987) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 550 (2002) 213</subfield>
<subfield code="r">hep-th/0210114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="r">hep-th/9910096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">T. Leonhardt, A. Meziane and W. Ruhl,</subfield>
<subfield code="r">hep-th/0211092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein,</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">S. S. Gubser and I. Mitra,</subfield>
<subfield code="r">hep-th/0210093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">S. S. Gubser and I. R. Klebanov,</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Porrati,</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 058</subfield>
<subfield code="r">hep-th/0112166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">K. G. Wilson and J. B. Kogut,</subfield>
<subfield code="s">Phys. Rep. 12 (1974) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">W. Heidenreich,</subfield>
<subfield code="s">J. Math. Phys. 22 (1981) 1566</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="r">hep-th/0210123</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<controlfield tag="005">%(rec_1_rev)s</controlfield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">CERN-EX-0106015</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">ALEPH experiment: Candidate of Higgs boson production</subfield>
</datafield>
<datafield tag="246" ind1=" " ind2="1">
<subfield code="a">Expérience ALEPH: Candidat de la production d'un boson Higgs</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">14 06 2000</subfield>
</datafield>
<datafield tag="340" ind1=" " ind2=" ">
<subfield code="a">FILM</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Candidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Experiments and Tracks</subfield>
</datafield>
<datafield tag="653" ind1="1" ind2=" ">
<subfield code="a">LEP</subfield>
</datafield>
<datafield tag="856" ind1="0" ind2=" ">
<subfield code="f">neil.calder@cern.ch</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/1/files/0106015_01.jpg</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/1/files/0106015_01.gif?subformat=icon</subfield>
<subfield code="x">icon</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="o">0003717PHOPHO</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="y">2000</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="0">
<subfield code="b">81</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="1">
<subfield code="c">2001-06-14</subfield>
<subfield code="l">50</subfield>
<subfield code="m">2001-08-27</subfield>
<subfield code="o">CM</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="p">Bldg. 2</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="P">
<subfield code="r">Calder, N</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="S">
<subfield code="s">n</subfield>
<subfield code="w">200231</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">PICTURE</subfield>
</datafield>
</record>
</collection>""" % {'siteurl': CFG_SITE_URL,
'rec_1_rev': get_fieldvalues(1, '005__')[0],
'rec_85_rev': get_fieldvalues(85, '005__')[0],
'rec_107_rev': get_fieldvalues(107, '005__')[0]}))
def test_search_engine_web_api_xmlmarc_field_filtered(self):
"""websearch - search engine Web API for XMLMARC output, field-filtered"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=xm&ot=100,700',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
</record>
</collection>"""))
def test_search_engine_web_api_xmlmarc_field_filtered_hidden_guest(self):
"""websearch - search engine Web API for XMLMARC output, field-filtered, hidden field, no guest access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=xm&ot=100,595',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
</record>
</collection>"""))
def test_search_engine_web_api_xmlmarc_field_filtered_hidden_admin(self):
"""websearch - search engine Web API for XMLMARC output, field-filtered, hidden field, admin access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=higgs&of=xm&ot=100,595',
username='admin',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<!-- Search-Engine-Total-Number-Of-Results: 3 -->
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">107</controlfield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">No authors</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">CERN-EP</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">OA</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:200740 PR/LKR not found (from SLAC, INSPEC)</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
</record>
<record>
<controlfield tag="001">1</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Photolab</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">Press</subfield>
</datafield>
</record>
</collection>"""))
class WebSearchRecordWebAPITest(InvenioTestCase):
"""Check typical /record Web API calls on the demo data."""
def test_record_web_api_textmarc_full(self):
"""websearch - /record Web API for TextMARC output, full"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=tm',
expected_text="""\
000000085 001__ 85
000000085 003__ SzGeCERN
000000085 005__ %(rec_85_rev)s
000000085 035__ $$a2356302CERCER
000000085 035__ $$9SLAC$$a5423422
000000085 037__ $$ahep-th/0212181
000000085 041__ $$aeng
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 245__ $$a3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS
000000085 260__ $$c2003
000000085 269__ $$c16 Dec 2002
000000085 300__ $$a8 p
000000085 520__ $$aWe study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.
000000085 65017 $$2SzGeCERN$$aParticle Physics - Theory
000000085 690C_ $$aARTICLE
000000085 695__ $$9LANL EDS$$aHigh Energy Physics - Theory
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.pdf
000000085 8564_ $$u%(siteurl)s/record/85/files/0212181.ps.gz
000000085 859__ $$falberto.zaffaroni@mib.infn.it
000000085 909C4 $$c289-293$$pPhys. Lett. B$$v561$$y2003
000000085 916__ $$sn$$w200251
000000085 960__ $$a13
000000085 961__ $$c20060823$$h0007$$lCER01$$x20021217
000000085 963__ $$aPUBLIC
000000085 970__ $$a002356302CER
000000085 980__ $$aARTICLE
000000085 999C5 $$mD. Francia and A. Sagnotti,$$o[1]$$rhep-th/0207002$$sPhys. Lett. B 543 (2002) 303
000000085 999C5 $$mP. Haggi-Mani and B. Sundborg,$$o[1]$$rhep-th/0002189$$sJ. High Energy Phys. 0004 (2000) 031
000000085 999C5 $$mB. Sundborg,$$o[1]$$rhep-th/0103247$$sNucl. Phys. B, Proc. Suppl. 102 (2001) 113
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0105001$$sJ. High Energy Phys. 0109 (2001) 036
000000085 999C5 $$mA. Mikhailov,$$o[1]$$rhep-th/0201019
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205131$$sNucl. Phys. B 644 (2002) 303
000000085 999C5 $$mE. Sezgin and P. Sundell,$$o[1]$$rhep-th/0205132$$sJ. High Energy Phys. 0207 (2002) 055
000000085 999C5 $$mJ. Engquist, E. Sezgin and P. Sundell,$$o[1]$$rhep-th/0207101$$sClass. Quantum Gravity 19 (2002) 6175
000000085 999C5 $$mM. A. Vasiliev,$$o[1]$$rhep-th/9611024$$sInt. J. Mod. Phys. D 5 (1996) 763
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9808004$$sNucl. Phys. B 541 (1999) 323
000000085 999C5 $$mD. Anselmi,$$o[1]$$rhep-th/9906167$$sClass. Quantum Gravity 17 (2000) 1383
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sNucl. Phys. B 291 (1987) 141
000000085 999C5 $$mE. S. Fradkin and M. A. Vasiliev,$$o[2]$$sPhys. Lett. B 189 (1987) 89
000000085 999C5 $$mI. R. Klebanov and A. M. Polyakov,$$o[3]$$rhep-th/0210114$$sPhys. Lett. B 550 (2002) 213
000000085 999C5 $$mM. A. Vasiliev,$$o[4]$$rhep-th/9910096
000000085 999C5 $$mT. Leonhardt, A. Meziane and W. Ruhl,$$o[5]$$rhep-th/0211092
000000085 999C5 $$mO. Aharony, M. Berkooz and E. Silverstein,$$o[6]$$rhep-th/0105309$$sJ. High Energy Phys. 0108 (2001) 006
000000085 999C5 $$mE. Witten,$$o[7]$$rhep-th/0112258
000000085 999C5 $$mM. Berkooz, A. Sever and A. Shomer$$o[8]$$rhep-th/0112264$$sJ. High Energy Phys. 0205 (2002) 034
000000085 999C5 $$mS. S. Gubser and I. Mitra,$$o[9]$$rhep-th/0210093
000000085 999C5 $$mS. S. Gubser and I. R. Klebanov,$$o[10]$$rhep-th/0212138
000000085 999C5 $$mM. Porrati,$$o[11]$$rhep-th/0112166$$sJ. High Energy Phys. 0204 (2002) 058
000000085 999C5 $$mK. G. Wilson and J. B. Kogut,$$o[12]$$sPhys. Rep. 12 (1974) 75
000000085 999C5 $$mI. R. Klebanov and E. Witten,$$o[13]$$rhep-th/9905104$$sNucl. Phys. B 556 (1999) 89
000000085 999C5 $$mW. Heidenreich,$$o[14]$$sJ. Math. Phys. 22 (1981) 1566
000000085 999C5 $$mD. Anselmi,$$o[15]$$rhep-th/0210123
""" % {'siteurl': CFG_SITE_URL,
'rec_85_rev': get_fieldvalues(85, '005__')[0]}))
def test_record_web_api_xmlmarc_full(self):
"""websearch - /record Web API for XMLMARC output, full"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=xm',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">85</controlfield>
<controlfield tag="003">SzGeCERN</controlfield>
<controlfield tag="005">%(rec_85_rev)s</controlfield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="a">2356302CERCER</subfield>
</datafield>
<datafield tag="035" ind1=" " ind2=" ">
<subfield code="9">SLAC</subfield>
<subfield code="a">5423422</subfield>
</datafield>
<datafield tag="037" ind1=" " ind2=" ">
<subfield code="a">hep-th/0212181</subfield>
</datafield>
<datafield tag="041" ind1=" " ind2=" ">
<subfield code="a">eng</subfield>
</datafield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="245" ind1=" " ind2=" ">
<subfield code="a">3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS</subfield>
</datafield>
<datafield tag="260" ind1=" " ind2=" ">
<subfield code="c">2003</subfield>
</datafield>
<datafield tag="269" ind1=" " ind2=" ">
<subfield code="c">16 Dec 2002</subfield>
</datafield>
<datafield tag="300" ind1=" " ind2=" ">
<subfield code="a">8 p</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">We study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism, that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation.</subfield>
</datafield>
<datafield tag="650" ind1="1" ind2="7">
<subfield code="2">SzGeCERN</subfield>
<subfield code="a">Particle Physics - Theory</subfield>
</datafield>
<datafield tag="690" ind1="C" ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="695" ind1=" " ind2=" ">
<subfield code="9">LANL EDS</subfield>
<subfield code="a">High Energy Physics - Theory</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.pdf</subfield>
</datafield>
<datafield tag="856" ind1="4" ind2=" ">
<subfield code="u">%(siteurl)s/record/85/files/0212181.ps.gz</subfield>
</datafield>
<datafield tag="909" ind1="C" ind2="4">
<subfield code="c">289-293</subfield>
<subfield code="p">Phys. Lett. B</subfield>
<subfield code="v">561</subfield>
<subfield code="y">2003</subfield>
</datafield>
<datafield tag="859" ind1=" " ind2=" ">
<subfield code="f">alberto.zaffaroni@mib.infn.it</subfield>
</datafield>
<datafield tag="916" ind1=" " ind2=" ">
<subfield code="s">n</subfield>
<subfield code="w">200251</subfield>
</datafield>
<datafield tag="960" ind1=" " ind2=" ">
<subfield code="a">13</subfield>
</datafield>
<datafield tag="961" ind1=" " ind2=" ">
<subfield code="c">20060823</subfield>
<subfield code="h">0007</subfield>
<subfield code="l">CER01</subfield>
<subfield code="x">20021217</subfield>
</datafield>
<datafield tag="963" ind1=" " ind2=" ">
<subfield code="a">PUBLIC</subfield>
</datafield>
<datafield tag="970" ind1=" " ind2=" ">
<subfield code="a">002356302CER</subfield>
</datafield>
<datafield tag="980" ind1=" " ind2=" ">
<subfield code="a">ARTICLE</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Francia and A. Sagnotti,</subfield>
<subfield code="s">Phys. Lett. B 543 (2002) 303</subfield>
<subfield code="r">hep-th/0207002</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">P. Haggi-Mani and B. Sundborg,</subfield>
<subfield code="s">J. High Energy Phys. 0004 (2000) 031</subfield>
<subfield code="r">hep-th/0002189</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">B. Sundborg,</subfield>
<subfield code="s">Nucl. Phys. B, Proc. Suppl. 102 (2001) 113</subfield>
<subfield code="r">hep-th/0103247</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0109 (2001) 036</subfield>
<subfield code="r">hep-th/0105001</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">A. Mikhailov,</subfield>
<subfield code="r">hep-th/0201019</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Nucl. Phys. B 644 (2002) 303</subfield>
<subfield code="r">hep-th/0205131</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">E. Sezgin and P. Sundell,</subfield>
<subfield code="s">J. High Energy Phys. 0207 (2002) 055</subfield>
<subfield code="r">hep-th/0205132</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">J. Engquist, E. Sezgin and P. Sundell,</subfield>
<subfield code="s">Class. Quantum Gravity 19 (2002) 6175</subfield>
<subfield code="r">hep-th/0207101</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="s">Int. J. Mod. Phys. D 5 (1996) 763</subfield>
<subfield code="r">hep-th/9611024</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Nucl. Phys. B 541 (1999) 323</subfield>
<subfield code="r">hep-th/9808004</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[1]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="s">Class. Quantum Gravity 17 (2000) 1383</subfield>
<subfield code="r">hep-th/9906167</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Nucl. Phys. B 291 (1987) 141</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[2]</subfield>
<subfield code="m">E. S. Fradkin and M. A. Vasiliev,</subfield>
<subfield code="s">Phys. Lett. B 189 (1987) 89</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[3]</subfield>
<subfield code="m">I. R. Klebanov and A. M. Polyakov,</subfield>
<subfield code="s">Phys. Lett. B 550 (2002) 213</subfield>
<subfield code="r">hep-th/0210114</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[4]</subfield>
<subfield code="m">M. A. Vasiliev,</subfield>
<subfield code="r">hep-th/9910096</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[5]</subfield>
<subfield code="m">T. Leonhardt, A. Meziane and W. Ruhl,</subfield>
<subfield code="r">hep-th/0211092</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[6]</subfield>
<subfield code="m">O. Aharony, M. Berkooz and E. Silverstein,</subfield>
<subfield code="s">J. High Energy Phys. 0108 (2001) 006</subfield>
<subfield code="r">hep-th/0105309</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[7]</subfield>
<subfield code="m">E. Witten,</subfield>
<subfield code="r">hep-th/0112258</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[8]</subfield>
<subfield code="m">M. Berkooz, A. Sever and A. Shomer</subfield>
<subfield code="s">J. High Energy Phys. 0205 (2002) 034</subfield>
<subfield code="r">hep-th/0112264</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[9]</subfield>
<subfield code="m">S. S. Gubser and I. Mitra,</subfield>
<subfield code="r">hep-th/0210093</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[10]</subfield>
<subfield code="m">S. S. Gubser and I. R. Klebanov,</subfield>
<subfield code="r">hep-th/0212138</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[11]</subfield>
<subfield code="m">M. Porrati,</subfield>
<subfield code="s">J. High Energy Phys. 0204 (2002) 058</subfield>
<subfield code="r">hep-th/0112166</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[12]</subfield>
<subfield code="m">K. G. Wilson and J. B. Kogut,</subfield>
<subfield code="s">Phys. Rep. 12 (1974) 75</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[13]</subfield>
<subfield code="m">I. R. Klebanov and E. Witten,</subfield>
<subfield code="s">Nucl. Phys. B 556 (1999) 89</subfield>
<subfield code="r">hep-th/9905104</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[14]</subfield>
<subfield code="m">W. Heidenreich,</subfield>
<subfield code="s">J. Math. Phys. 22 (1981) 1566</subfield>
</datafield>
<datafield tag="999" ind1="C" ind2="5">
<subfield code="o">[15]</subfield>
<subfield code="m">D. Anselmi,</subfield>
<subfield code="r">hep-th/0210123</subfield>
</datafield>
</record>
</collection>""" % {'siteurl': CFG_SITE_URL,
'rec_85_rev': get_fieldvalues(85, '005__')[0]}))
def test_record_web_api_textmarc_field_filtered(self):
"""websearch - /record Web API for TextMARC output, field-filtered"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=tm&ot=100,700',
expected_text="""\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 700__ $$aPorrati, Massimo
000000085 700__ $$aZaffaroni, A
"""))
def test_record_web_api_textmarc_field_filtered_hidden_guest(self):
"""websearch - /record Web API for TextMARC output, field-filtered, hidden field, no guest access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=tm&ot=100,595',
expected_text="""\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
"""))
def test_record_web_api_textmarc_field_filtered_hidden_admin(self):
"""websearch - /record Web API for TextMARC output, field-filtered, hidden field, admin access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=tm&ot=100,595',
username='admin',
expected_text="""\
000000085 100__ $$aGirardello, L$$uINFN$$uUniversita di Milano-Bicocca
000000085 595__ $$aLANL EDS
000000085 595__ $$aSIS LANLPUBL2004
000000085 595__ $$aSIS:2004 PR/LKR added
"""))
def test_record_web_api_xmlmarc_field_filtered(self):
"""websearch - /record Web API for XMLMARC output, field-filtered"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=xm&ot=100,700',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Porrati, Massimo</subfield>
</datafield>
<datafield tag="700" ind1=" " ind2=" ">
<subfield code="a">Zaffaroni, A</subfield>
</datafield>
</record>
</collection>"""))
def test_record_web_api_xmlmarc_field_filtered_hidden_guest(self):
"""websearch - /record Web API for XMLMARC output, field-filtered, hidden field, no guest access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=xm&ot=100,595',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
</record>
</collection>"""))
def test_record_web_api_xmlmarc_field_filtered_hidden_admin(self):
"""websearch - /record Web API for XMLMARC output, field-filtered, hidden field, admin access"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=xm&ot=100,595',
username='admin',
expected_text="""\
<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
<record>
<controlfield tag="001">85</controlfield>
<datafield tag="100" ind1=" " ind2=" ">
<subfield code="a">Girardello, L</subfield>
<subfield code="u">INFN</subfield>
<subfield code="u">Universita di Milano-Bicocca</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">LANL EDS</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS LANLPUBL2004</subfield>
</datafield>
<datafield tag="595" ind1=" " ind2=" ">
<subfield code="a">SIS:2004 PR/LKR added</subfield>
</datafield>
</record>
</collection>"""))
def test_record_web_api_textmarc_subfield_values(self):
"""websearch - /record Web API for TextMARC output, subfield values"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/record/85?of=tm&ot=700__a',
expected_text="""\
Porrati, Massimo
Zaffaroni, A
"""))
class WebSearchRestrictedCollectionTest(InvenioTestCase):
"""Test of the restricted collections behaviour."""
def test_restricted_collection_interface_page(self):
"""websearch - restricted collection interface page body"""
# there should be no Latest additions box for restricted collections
self.assertNotEqual([],
test_web_page_content(CFG_SITE_URL + '/collection/Theses',
expected_text="Latest additions"))
def test_restricted_search_as_anonymous_guest(self):
"""websearch - restricted collection not searchable by anonymous guest"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?c=Theses')
response = browser.response().read()
if response.find("This collection is restricted.") == -1:
self.fail("Oops, searching restricted collection without password should have redirected to login dialog.")
def test_restricted_search_as_authorized_person(self):
"""websearch - restricted collection searchable by authorized person"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?cc=Theses')
browser.select_form(nr=0)
browser['nickname'] = 'jekyll'
browser['password'] = 'j123ekyll'
browser.submit()
if browser.response().read().find("Showing records") == -1:
self.fail("Oops, Dr. Jekyll should be able to search Theses collection.")
def test_restricted_search_as_unauthorized_person(self):
"""websearch - restricted collection not searchable by unauthorized person"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?cc=Theses')
browser.select_form(nr=0)
browser['nickname'] = 'hyde'
browser['password'] = 'h123yde'
# Mr. Hyde should not be able to connect:
self.assertRaises(HTTPError, browser.submit)
def test_restricted_detailed_record_page_as_anonymous_guest(self):
"""websearch - restricted detailed record page not accessible to guests"""
browser = Browser()
browser.open(CFG_SITE_URL + '/%s/35' % CFG_SITE_RECORD)
if browser.response().read().find("You can use your nickname or your email address to login.") == -1:
self.fail("Oops, accessing a restricted detailed record page without logging in should have redirected to the login dialog.")
def test_restricted_detailed_record_page_as_authorized_person(self):
"""websearch - restricted detailed record page accessible to authorized person"""
browser = Browser()
browser.open(CFG_SITE_URL + '/youraccount/login')
browser.select_form(nr=0)
browser['nickname'] = 'jekyll'
browser['password'] = 'j123ekyll'
browser.submit()
browser.open(CFG_SITE_URL + '/%s/35' % CFG_SITE_RECORD)
# Dr. Jekyll should be able to connect
# (we should be redirected to the restricted record page):
if browser.response().read().find("A High-performance Video Browsing System") == -1:
self.fail("Oops, Dr. Jekyll should be able to access restricted detailed record page.")
def test_restricted_detailed_record_page_as_unauthorized_person(self):
"""websearch - restricted detailed record page not accessible to unauthorized person"""
browser = Browser()
browser.open(CFG_SITE_URL + '/youraccount/login')
browser.select_form(nr=0)
browser['nickname'] = 'hyde'
browser['password'] = 'h123yde'
browser.submit()
# Mr. Hyde should not be able to connect:
self.assertRaises(HTTPError, browser.open,
CFG_SITE_URL + '/%s/35' % CFG_SITE_RECORD)
def test_collection_restricted_p(self):
"""websearch - collection_restricted_p"""
from invenio.search_engine import collection_restricted_p
self.failUnless(collection_restricted_p('Theses'))
self.failIf(collection_restricted_p('Books & Reports'))
def test_get_permitted_restricted_collections(self):
"""websearch - get_permitted_restricted_collections"""
from invenio.search_engine import get_permitted_restricted_collections
from invenio.webuser import get_uid_from_email, collect_user_info
self.assertEqual(get_permitted_restricted_collections(collect_user_info(get_uid_from_email('jekyll@cds.cern.ch'))), ['Theses', 'Drafts'])
self.assertEqual(get_permitted_restricted_collections(collect_user_info(get_uid_from_email('hyde@cds.cern.ch'))), [])
self.assertEqual(get_permitted_restricted_collections(collect_user_info(get_uid_from_email('balthasar.montague@cds.cern.ch'))), ['ALEPH Theses', 'ALEPH Internal Notes', 'Atlantis Times Drafts'])
self.assertEqual(get_permitted_restricted_collections(collect_user_info(get_uid_from_email('dorian.gray@cds.cern.ch'))), ['ISOLDE Internal Notes'])
def test_restricted_record_has_restriction_flag(self):
"""websearch - restricted record displays a restriction flag"""
browser = Browser()
browser.open(CFG_SITE_URL + '/%s/42/files/' % CFG_SITE_RECORD)
browser.select_form(nr=0)
browser['nickname'] = 'jekyll'
browser['password'] = 'j123ekyll'
browser.submit()
if browser.response().read().find("Restricted") == -1:
self.fail("Oops, a 'Restricted' flag should appear on restricted records.")
browser.open(CFG_SITE_URL + '/%s/42/files/comments' % CFG_SITE_RECORD)
if browser.response().read().find("Restricted") == -1:
self.fail("Oops, a 'Restricted' flag should appear on restricted records.")
# The flag should also appear on records that exist both in a public
# and in a restricted collection:
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/109' % CFG_SITE_RECORD,
username='admin',
password='',
expected_text=['Restricted'])
if error_messages:
self.fail("Oops, a 'Restricted' flag should appear on restricted records.")
class WebSearchRestrictedCollectionHandlingTest(InvenioTestCase):
"""
Check how the handling of restricted and of restricted-and-"hidden"
collections works: (i) whether the user has the rights to access specific
records or collections, (ii) whether public and restricted results are
displayed in the right position in the collection tree, and (iii) whether
the right warning is displayed depending on the case.
Changes in the collection tree used for testing (the records used for
testing are shown as well):
Articles & Preprints Books & Reports
_____________|________________ ____________|_____________
| | | | | | |
Articles Drafts(r) Notes Preprints Books Theses(r) Reports
69 77 109 10 105
77 98 98
108 105
CERN Experiments
_________________________|___________________________
| |
ALEPH ISOLDE
_________________|_________________ ____________|_____________
| | | | |
ALEPH ALEPH ALEPH ISOLDE ISOLDE
Papers Internal Notes(r) Theses(r) Papers Internal Notes(r&h)
10 109 105 69 110
108 106
Authorized users:
jekyll -> Drafts, Theses
balthasar -> ALEPH Internal Notes, ALEPH Theses
dorian -> ISOLDE Internal Notes
"""
def test_show_public_colls_in_warning_as_unauthorized_user(self):
"""websearch - show public daughter collections in warning to unauthorized user"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Articles+%26+Preprints&sc=1&p=recid:20',
username='hyde',
password='h123yde',
expected_text=['No match found in collection <em>Articles, Preprints, Notes</em>.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_show_public_and_restricted_colls_in_warning_as_authorized_user(self):
"""websearch - show public and restricted daugther collections in warning to authorized user"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Articles+%26+Preprints&sc=1&p=recid:20',
username='jekyll',
password='j123ekyll',
expected_text=['No match found in collection <em>Articles, Preprints, Notes, Drafts</em>.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_record_in_different_colls_as_unauthorized_user(self):
"""websearch - record belongs to different restricted collections with different rights, user not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?p=105&f=recid',
username='hyde',
password='h123yde',
expected_text=['No public collection matched your query.'],
unexpected_text=['records found'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_record_in_different_colls_as_authorized_user_of_one_coll(self):
"""websearch - record belongs to different restricted collections with different rights, balthasar has rights to one of them"""
from invenio.config import CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY
policy = CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY.strip().upper()
if policy == 'ANY':
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=recid:105&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='balthasar',
password='b123althasar',
expected_text=['[CERN-THESIS-99-074]'],
unexpected_text=['No public collection matched your query.'])
else:
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=recid:105&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='balthasar',
password='b123althasar',
expected_text=['No public collection matched your query.'],
unexpected_text=['[CERN-THESIS-99-074]'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_record_in_different_colls_as_authorized_user_of_two_colls(self):
"""websearch - record belongs to different restricted collections with different rights, jekyll has rights to two of them"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=recid:105&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='jekyll',
password='j123ekyll',
expected_text=['Articles &amp; Preprints', 'Books &amp; Reports'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_record_in_different_colls_as_authorized_user_of_all_colls(self):
"""websearch - record belongs to different restricted collections with different rights, admin has rights to all of them"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=recid:105&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='admin',
expected_text=['Articles &amp; Preprints', 'Books &amp; Reports', 'ALEPH Theses'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_restricted_record_from_not_dad_coll(self):
"""websearch - record belongs to different restricted collections with different rights, search from a not dad collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Multimedia+%26+Arts&sc=1&p=recid%3A105&f=&action_search=Search&c=Pictures&c=Poetry&c=Atlantis+Times',
username='admin',
expected_text='No match found in collection',
expected_link_label='1 hits')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_public_and_restricted_record_as_unauthorized_user(self):
"""websearch - record belongs to different public and restricted collections, user not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=geometry&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts&of=id',
username='guest',
expected_text='[80, 86]',
unexpected_text='[40, 80, 86]')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_public_and_restricted_record_as_authorized_user(self):
"""websearch - record belongs to different public and restricted collections, admin has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=geometry&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts&of=id',
username='admin',
password='',
expected_text='[40, 80, 86]')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_public_and_restricted_record_of_focus_as_unauthorized_user(self):
"""websearch - record belongs to both a public and a restricted collection of "focus on", user not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Articles+%26+Preprints&sc=1&p=109&f=recid',
username='hyde',
password='h123yde',
expected_text=['No public collection matched your query'],
unexpected_text=['LEP Center-of-Mass Energies in Presence of Opposite'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_public_and_restricted_record_of_focus_as_authorized_user(self):
"""websearch - record belongs to both a public and a restricted collection of "focus on", user has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=109&f=recid&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='balthasar',
password='b123althasar',
expected_text=['Articles &amp; Preprints', 'ALEPH Internal Notes', 'LEP Center-of-Mass Energies in Presence of Opposite'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_public_and_restricted_record_from_not_dad_coll_as_authorized_user(self):
"""websearch - record belongs to both a public and a restricted collection, search from a not dad collection, admin has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Books+%26+Reports&sc=1&p=recid%3A98&f=&action_search=Search&c=Books&c=Reports',
username='admin',
password='',
expected_text='No match found in collection <em>Books, Theses, Reports</em>',
expected_link_label='1 hits')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_public_and_restricted_record_from_not_dad_coll_as_unauthorized_user(self):
"""websearch - record belongs to both a public and a restricted collection, search from a not dad collection, hyde not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Books+%26+Reports&sc=1&p=recid%3A98&f=&action_search=Search&c=Books&c=Reports',
username='hyde',
password='h123yde',
expected_text='No public collection matched your query',
unexpected_text='No match found in collection')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_record_of_focus_as_authorized_user(self):
"""websearch - record belongs to a restricted collection of "focus on", balthasar has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?&sc=1&p=106&f=recid&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts&of=id',
username='balthasar',
password='b123althasar',
expected_text='[106]',
unexpected_text='[]')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_display_dad_coll_of_restricted_coll_as_unauthorized_user(self):
"""websearch - unauthorized user displays a collection that contains a restricted collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Articles+%26+Preprints&sc=1&p=&f=&action_search=Search&c=Articles&c=Drafts&c=Preprints',
username='guest',
expected_text=['This collection is restricted.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_display_dad_coll_of_restricted_coll_as_authorized_user(self):
"""websearch - authorized user displays a collection that contains a restricted collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Articles+%26+Preprints&sc=1&p=&f=&action_search=Search&c=Articles&c=Drafts&c=Notes&c=Preprints',
username='jekyll',
password='j123ekyll',
expected_text=['Articles', 'Drafts', 'Notes', 'Preprints'],
unexpected_text=['This collection is restricted.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_restricted_record_from_coll_of_focus_as_unauthorized_user(self):
"""websearch - search for a record that belongs to a restricted collection from a collection of "focus on" , jekyll not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=CERN+Divisions&sc=1&p=recid%3A106&f=&action_search=Search&c=Experimental+Physics+(EP)&c=Theoretical+Physics+(TH)',
username='jekyll',
password='j123ekyll',
expected_text=['No public collection matched your query.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_restricted_record_from_coll_of_focus_as_authorized_user(self):
"""websearch - search for a record that belongs to a restricted collection from a collection of "focus on" , admin has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=CERN+Divisions&sc=1&p=recid%3A106&f=&action_search=Search&c=Experimental+Physics+(EP)&c=Theoretical+Physics+(TH)',
username='admin',
password='',
expected_text='No match found in collection <em>Experimental Physics (EP), Theoretical Physics (TH)</em>.',
expected_link_label='1 hits')
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_restricted_record_from_not_direct_dad_coll_and_display_in_right_position_in_tree(self):
"""websearch - search for a restricted record from not direct dad collection and display it on its right position in the tree"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=recid%3A40&f=&action_search=Search&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='admin',
password='',
expected_text=['Books &amp; Reports','[LBL-22304]'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_restricted_record_from_direct_dad_coll_and_display_in_right_position_in_tree(self):
"""websearch - search for a restricted record from the direct dad collection and display it on its right position in the tree"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Books+%26+Reports&sc=1&p=recid%3A40&f=&action_search=Search&c=Books&c=Reports',
username='admin',
password='',
expected_text=['Theses', '[LBL-22304]'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_and_hidden_record_as_unauthorized_user(self):
"""websearch - search for a "hidden" record, user not has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=recid%3A110&f=&action_search=Search&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='guest',
expected_text=['If you were looking for a non-public document'],
unexpected_text=['If you were looking for a hidden document'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_and_hidden_record_as_authorized_user(self):
"""websearch - search for a "hidden" record, admin has rights"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=recid%3A110&f=&action_search=Search&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='admin',
password='',
expected_text=['If you were looking for a hidden document, please type the correct URL for this record.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_enter_url_of_restricted_and_hidden_coll_as_unauthorized_user(self):
"""websearch - unauthorized user types the concret URL of a "hidden" collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=ISOLDE+Internal+Notes&sc=1&p=&f=&action_search=Search',
username='guest',
expected_text=['This collection is restricted.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_enter_url_of_restricted_and_hidden_coll_as_authorized_user(self):
"""websearch - authorized user types the concret URL of a "hidden" collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=ISOLDE+Internal+Notes&sc=1&p=&f=&action_search=Search',
username='dorian',
password='d123orian',
expected_text=['ISOLDE Internal Notes', '[CERN-PS-PA-Note-93-04]'],
unexpected_text=['This collection is restricted.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_for_pattern_from_the_top_as_unauthorized_user(self):
"""websearch - unauthorized user searches for a pattern from the top"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=of&f=&action_search=Search&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='guest',
expected_text=['Articles &amp; Preprints', '61', 'records found',
'Books &amp; Reports', '2', 'records found',
'Multimedia &amp; Arts', '14', 'records found'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_for_pattern_from_the_top_as_authorized_user(self):
"""websearch - authorized user searches for a pattern from the top"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&sc=1&p=of&f=&action_search=Search&c=Articles+%26+Preprints&c=Books+%26+Reports&c=Multimedia+%26+Arts',
username='admin',
password='',
expected_text=['Articles &amp; Preprints', '61', 'records found',
'Books &amp; Reports', '6', 'records found',
'Multimedia &amp; Arts', '14', 'records found',
'ALEPH Theses', '1', 'records found',
'ALEPH Internal Notes', '1', 'records found'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_for_pattern_from_a_specific_coll_as_unauthorized_user(self):
"""websearch - unauthorized user searches for a pattern from one specific collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Books+%26+Reports&sc=1&p=of&f=&action_search=Search&c=Books&c=Reports',
username='guest',
expected_text=['Books', '1', 'records found',
'Reports', '1', 'records found'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_search_for_pattern_from_a_specific_coll_as_authorized_user(self):
"""websearch - authorized user searches for a pattern from one specific collection"""
error_messages = test_web_page_content(CFG_SITE_URL + '/search?ln=en&cc=Books+%26+Reports&sc=1&p=of&f=&action_search=Search&c=Books&c=Reports',
username='admin',
password='',
expected_text=['Books', '1', 'records found',
'Reports', '1', 'records found',
'Theses', '4', 'records found'])
if error_messages:
self.fail(merge_error_messages(error_messages))
class WebSearchRestrictedPicturesTest(InvenioTestCase):
"""
Check whether restricted pictures on the demo site can be accessed
well by people who have rights to access them.
"""
def test_restricted_pictures_guest(self):
"""websearch - restricted pictures not available to guest"""
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/1/files/0106015_01.jpg' % CFG_SITE_RECORD,
expected_text=['This file is restricted'])
if error_messages:
self.failUnless("HTTP Error 401: UNAUTHORIZED" in merge_error_messages(error_messages))
def test_restricted_pictures_romeo(self):
"""websearch - restricted pictures available to Romeo"""
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/1/files/0106015_01.jpg' % CFG_SITE_RECORD,
username='romeo',
password='r123omeo',
expected_text=[],
unexpected_text=['This file is restricted',
'You are not authorized'])
if error_messages:
self.failUnless("HTTP Error 401: UNAUTHORIZED" in merge_error_messages(error_messages))
def test_restricted_pictures_hyde(self):
"""websearch - restricted pictures not available to Mr. Hyde"""
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/1/files/0106015_01.jpg' % CFG_SITE_RECORD,
username='hyde',
password='h123yde',
expected_text=['This file is restricted',
'You are not authorized'])
if error_messages:
self.failUnless("HTTP Error 401: UNAUTHORIZED" in merge_error_messages(error_messages))
class WebSearchRestrictedWebJournalFilesTest(InvenioTestCase):
"""
Check whether files attached to a WebJournal article are
accessible, depending on whether the article has been published.
"""
def test_restricted_files_guest(self):
"""websearch - files of unreleased articles are not available to guest"""
from invenio.search_engine import record_public_p
# Record is not public...
self.assertEqual(record_public_p(112), False)
# ... and guest cannot access attached files
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/112/files/journal_galapagos_archipelago.jpg' % CFG_SITE_RECORD,
expected_text=['This file is restricted. If you think you have right to access it, please authenticate yourself.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_files_editor(self):
"""websearch - files of unreleased articles are available to editor"""
from invenio.search_engine import record_public_p
# Record is not public...
self.assertEqual(record_public_p(112), False)
# ... but editor can access attached files
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/112/files/journal_galapagos_archipelago.jpg' % CFG_SITE_RECORD,
username='balthasar',
password='b123althasar',
expected_text=[],
unexpected_text=['This file is restricted',
'You are not authorized'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_public_files_guest(self):
"""websearch - files of released articles are available to guest"""
from invenio.search_engine import record_public_p
# Record is not public...
self.assertEqual(record_public_p(111), False)
# ... but user can access attached files, as article is released
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/111/files/journal_scissor_beak.jpg' % CFG_SITE_RECORD,
expected_text=[],
unexpected_text=['This file is restricted',
'You are not authorized'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_really_restricted_files_guest(self):
"""websearch - restricted files of released articles are not available to guest"""
from invenio.search_engine import record_public_p
# Record is not public...
self.assertEqual(record_public_p(111), False)
# ... and user cannot access restricted attachments, even if
# the article is released
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/111/files/restricted-journal_scissor_beak.jpg' % CFG_SITE_RECORD,
expected_text=['This file is restricted. If you think you have right to access it, please authenticate yourself.'])
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_restricted_picture_has_restriction_flag(self):
"""websearch - restricted files displays a restriction flag"""
error_messages = test_web_page_content(CFG_SITE_URL + '/%s/1/files/' % CFG_SITE_RECORD,
expected_text="Restricted")
if error_messages:
self.fail(merge_error_messages(error_messages))
class WebSearchRSSFeedServiceTest(InvenioTestCase):
"""Test of the RSS feed service."""
def test_rss_feed_service(self):
"""websearch - RSS feed service"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/rss',
expected_text='<rss version="2.0"'))
class WebSearchXSSVulnerabilityTest(InvenioTestCase):
"""Test possible XSS vulnerabilities of the search engine."""
def test_xss_in_collection_interface_page(self):
"""websearch - no XSS vulnerability in collection interface pages"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/?c=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E',
expected_text='Collection &lt;SCRIPT&gt;alert("XSS");&lt;/SCRIPT&gt; Not Found'))
def test_xss_in_collection_search_page(self):
"""websearch - no XSS vulnerability in collection search pages"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?c=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E',
expected_text='Collection &lt;SCRIPT&gt;alert("XSS");&lt;/SCRIPT&gt; Not Found'))
def test_xss_in_simple_search(self):
"""websearch - no XSS vulnerability in simple search"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E',
expected_text='Search term <em>&lt;SCRIPT&gt;alert("XSS");&lt;/SCRIPT&gt;</em> did not match any record.'))
def test_xss_in_structured_search(self):
"""websearch - no XSS vulnerability in structured search"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E&f=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E',
expected_text='<em>&lt;script&gt;alert("xss");&lt;/script&gt;</em>'))
@nottest
def test_xss_in_advanced_search(self):
"""websearch - no XSS vulnerability in advanced search"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?as=1&p1=ellis&f1=author&op1=a&p2=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E&f2=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E&m2=e',
expected_text='Search term <em>&lt;SCRIPT&gt;alert("XSS");&lt;/SCRIPT&gt;</em> inside index <em>&lt;script&gt;alert("xss");&lt;/script&gt;</em> did not match any record.'))
def test_xss_in_browse(self):
"""websearch - no XSS vulnerability in browse"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E&f=%3CSCRIPT%3Ealert%28%22XSS%22%29%3B%3C%2FSCRIPT%3E&action_browse=Browse',
expected_text='&lt;script&gt;alert("xss");&lt;/script&gt;'))
class WebSearchResultsOverview(InvenioTestCase):
"""Test of the search results page's Results overview box and links."""
#FIXME not relevant for new UI
@nottest
def test_results_overview_split_off(self):
"""websearch - results overview box when split by collection is off"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?p=of&sc=0')
body = browser.response().read()
if body.find("Results overview") > -1:
self.fail("Oops, when split by collection is off, "
"results overview should not be present.")
if body.find('<a name="1"></a>') == -1:
self.fail("Oops, when split by collection is off, "
"Atlantis collection should be found.")
if body.find('<a name="15"></a>') > -1:
self.fail("Oops, when split by collection is off, "
"Multimedia & Arts should not be found.")
try:
browser.find_link(url='#15')
self.fail("Oops, when split by collection is off, "
"a link to Multimedia & Arts should not be found.")
except LinkNotFoundError:
pass
@nottest
def test_results_overview_split_on(self):
"""websearch - results overview box when split by collection is on"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?p=of&sc=1')
body = browser.response().read()
if body.find("Results overview") == -1:
self.fail("Oops, when split by collection is on, "
"results overview should be present.")
if body.find('<a name="Atlantis%20Institute%20of%20Fictive%20Science"></a>') > -1:
self.fail("Oops, when split by collection is on, "
"Atlantis collection should not be found.")
if body.find('<a name="15"></a>') == -1:
self.fail("Oops, when split by collection is on, "
"Multimedia & Arts should be found.")
try:
browser.find_link(url='#15')
except LinkNotFoundError:
self.fail("Oops, when split by collection is on, "
"a link to Multimedia & Arts should be found.")
class WebSearchSortResultsTest(InvenioTestCase):
"""Test of the search results page's sorting capability."""
def test_sort_results_default(self):
"""websearch - search results sorting, default method"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=of&f=title&rg=3',
expected_text="CMS animation of the high-energy collisions"))
def test_sort_results_ascending(self):
"""websearch - search results sorting, ascending field"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=of&f=title&rg=2&sf=reportnumber&so=a',
expected_text="astro-ph/0104076"))
def test_sort_results_descending(self):
"""websearch - search results sorting, descending field"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=of&f=title&rg=1&sf=reportnumber&so=d',
expected_text=" TESLA-FEL-99-07"))
def test_sort_results_sort_pattern(self):
"""websearch - search results sorting, preferential sort pattern"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=of&f=title&rg=1&sf=reportnumber&so=d&sp=cern',
expected_text="CERN-TH-2002-069"))
class WebSearchSearchResultsXML(InvenioTestCase):
"""Test search results in various output"""
def test_search_results_xm_output_split_on(self):
""" websearch - check document element of search results in xm output (split by collection on)"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?sc=1&of=xm')
body = browser.response().read()
num_doc_element = body.count("<collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">")
if num_doc_element == 0:
self.fail("Oops, no document element <collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">"
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements <collection> "
"found in search results.")
num_doc_element = body.count("</collection>")
if num_doc_element == 0:
self.fail("Oops, no document element </collection> "
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements </collection> "
"found in search results.")
def test_search_results_xm_output_split_off(self):
""" websearch - check document element of search results in xm output (split by collection off)"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?sc=0&of=xm')
body = browser.response().read()
num_doc_element = body.count("<collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">")
if num_doc_element == 0:
self.fail("Oops, no document element <collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">"
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements <collection> "
"found in search results.")
num_doc_element = body.count("</collection>")
if num_doc_element == 0:
self.fail("Oops, no document element </collection> "
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements </collection> "
"found in search results.")
def test_search_results_xd_output_split_on(self):
""" websearch - check document element of search results in xd output (split by collection on)"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?sc=1&of=xd')
body = browser.response().read()
num_doc_element = body.count("<collection")
if num_doc_element == 0:
self.fail("Oops, no document element <collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">"
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements <collection> "
"found in search results.")
num_doc_element = body.count("</collection>")
if num_doc_element == 0:
self.fail("Oops, no document element </collection> "
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements </collection> "
"found in search results.")
def test_search_results_xd_output_split_off(self):
""" websearch - check document element of search results in xd output (split by collection off)"""
browser = Browser()
browser.open(CFG_SITE_URL + '/search?sc=0&of=xd')
body = browser.response().read()
num_doc_element = body.count("<collection")
if num_doc_element == 0:
self.fail("Oops, no document element <collection "
"xmlns=\"http://www.loc.gov/MARC21/slim\">"
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements <collection> "
"found in search results.")
num_doc_element = body.count("</collection>")
if num_doc_element == 0:
self.fail("Oops, no document element </collection> "
"found in search results.")
elif num_doc_element > 1:
self.fail("Oops, multiple document elements </collection> "
"found in search results.")
class WebSearchUnicodeQueryTest(InvenioTestCase):
"""Test of the search results for queries containing Unicode characters."""
def test_unicode_word_query(self):
"""websearch - Unicode word query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=title%3A%CE%99%CE%B8%CE%AC%CE%BA%CE%B7',
expected_text="[76]"))
def test_unicode_word_query_not_found_term(self):
"""websearch - Unicode word query, not found term"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=title%3A%CE%99%CE%B8',
expected_text="ιθάκη"))
def test_unicode_exact_phrase_query(self):
"""websearch - Unicode exact phrase query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=title%3A%22%CE%99%CE%B8%CE%AC%CE%BA%CE%B7%22',
expected_text="[76]"))
def test_unicode_partial_phrase_query(self):
"""websearch - Unicode partial phrase query"""
# no hit expected for this example title partial-phrase query, because
# the difference between double-quoted and single-quoted searches has
# been removed:
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=title%3A%27%CE%B7%27',
expected_text="[]"))
def test_unicode_regexp_query(self):
"""websearch - Unicode regexp query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=title%3A%2F%CE%B7%2F',
expected_text="[76]"))
class WebSearchMARCQueryTest(InvenioTestCase):
"""Test of the search results for queries containing physical MARC tags."""
def test_single_marc_tag_exact_phrase_query(self):
"""websearch - single MARC tag, exact phrase query (100__a)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=100__a%3A%22Ellis%2C+J%22',
expected_text="[9, 14, 18]"))
def test_single_marc_tag_partial_phrase_query(self):
"""websearch - single MARC tag, partial phrase query (245__b)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=245__b%3A%27and%27',
expected_text="[28]"))
def test_many_marc_tags_partial_phrase_query(self):
"""websearch - many MARC tags, partial phrase query (245)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=245%3A%27and%27&rg=100',
expected_text="[1, 8, 9, 14, 15, 20, 22, 24, 28, 33, 47, 48, 49, 51, 53, 64, 69, 71, 79, 82, 83, 85, 91, 96, 108]"))
def test_single_marc_tag_regexp_query(self):
"""websearch - single MARC tag, regexp query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=245%3A%2Fand%2F&rg=100',
expected_text="[1, 8, 9, 14, 15, 20, 22, 24, 28, 33, 47, 48, 49, 51, 53, 64, 69, 71, 79, 82, 83, 85, 91, 96, 108]"))
class WebSearchExtSysnoQueryTest(InvenioTestCase):
"""Test of queries using external system numbers."""
def test_existing_sysno_html_output(self):
"""websearch - external sysno query, existing sysno, HTML output"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?sysno=000289446CER',
expected_text="The wall of the cave"))
def test_existing_sysno_id_output(self):
"""websearch - external sysno query, existing sysno, ID output"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?sysno=000289446CER&of=id',
expected_text="[95]"))
def test_nonexisting_sysno_html_output(self):
"""websearch - external sysno query, non-existing sysno, HTML output"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?sysno=000289446CERRRR',
expected_text="Requested record does not seem to exist."))
def test_nonexisting_sysno_id_output(self):
"""websearch - external sysno query, non-existing sysno, ID output"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?sysno=000289446CERRRR&of=id',
expected_text="[]"))
class WebSearchResultsRecordGroupingTest(InvenioTestCase):
"""Test search results page record grouping (rg)."""
def test_search_results_rg_guest(self):
"""websearch - search results, records in groups of, guest"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?rg=17',
expected_text="1 to 17"))
def test_search_results_rg_nonguest(self):
"""websearch - search results, records in groups of, non-guest"""
# This test used to fail because the user's saved "records in
# groups of" preference was not overridden by the URL rg argument.
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?rg=17',
username='admin',
expected_text="1 to 17"))
class WebSearchSpecialTermsQueryTest(InvenioTestCase):
"""Test of the search results for queries containing special terms."""
def test_special_terms_u1(self):
"""websearch - query for special terms, U(1)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=U%281%29',
expected_text="[57, 79, 80, 88]"))
def test_special_terms_u1_and_sl(self):
"""websearch - query for special terms, U(1) SL(2,Z)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=U%281%29+SL%282%2CZ%29',
expected_text="[88]"))
def test_special_terms_u1_and_sl_or(self):
"""websearch - query for special terms, U(1) OR SL(2,Z)"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=U%281%29+OR+SL%282%2CZ%29',
expected_text="[57, 79, 80, 88]"))
@nottest
def FIXME_TICKET_453_test_special_terms_u1_and_sl_or_parens(self):
"""websearch - query for special terms, (U(1) OR SL(2,Z))"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=%28U%281%29+OR+SL%282%2CZ%29%29',
expected_text="[57, 79, 80, 88]"))
def test_special_terms_u1_and_sl_in_quotes(self):
"""websearch - query for special terms, ('SL(2,Z)' OR 'U(1)')"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + "/search?of=id&p=%28%27SL%282%2CZ%29%27+OR+%27U%281%29%27%29",
expected_text="[57, 79, 80, 88, 96]"))
class WebSearchJournalQueryTest(InvenioTestCase):
"""Test of the search results for journal pubinfo queries."""
def test_query_journal_title_only(self):
"""websearch - journal publication info query, title only"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&f=journal&p=Phys.+Lett.+B',
expected_text="[78, 85, 87]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&f=journal&p=Phys.+Lett.+B',
username='admin',
expected_text="[77, 78, 85, 87]"))
def test_query_journal_full_pubinfo(self):
"""websearch - journal publication info query, full reference"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&f=journal&p=Phys.+Lett.+B+531+%282002%29+301',
expected_text="[78]"))
class WebSearchStemmedIndexQueryTest(InvenioTestCase):
"""Test of the search results for queries using stemmed indexes."""
def test_query_stemmed_lowercase(self):
"""websearch - stemmed index query, lowercase"""
# note that dasse/Dasse is stemmed into dass/Dass, as expected
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=dasse',
expected_text="[25, 26]"))
def test_query_stemmed_uppercase(self):
"""websearch - stemmed index query, uppercase"""
# ... but note also that DASSE is stemmed into DASSE(!); so
# the test would fail if the search engine did not lowercase
# the query term. (Lowercasing is not necessary for
# non-stemmed indexes.)
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?of=id&p=DASSE',
expected_text="[25, 26]"))
class WebSearchSummarizerTest(InvenioTestCase):
"""Test of the search results summarizer functions."""
def test_most_popular_field_values_singletag(self):
"""websearch - most popular field values, simple tag"""
from invenio.search_engine import get_most_popular_field_values
self.assertEqual([('PREPRINT', 37), ('ARTICLE', 28), ('BOOK', 14), ('THESIS', 8), ('PICTURE', 7),
('DRAFT', 2), ('POETRY', 2), ('REPORT', 2), ('ALEPHPAPER', 1), ('ATLANTISTIMESNEWS', 1),
('ISOLDEPAPER', 1)],
get_most_popular_field_values(range(0,100), '980__a'))
def test_most_popular_field_values_singletag_multiexclusion(self):
"""websearch - most popular field values, simple tag, multiple exclusions"""
from invenio.search_engine import get_most_popular_field_values
self.assertEqual([('PREPRINT', 37), ('ARTICLE', 28), ('BOOK', 14), ('DRAFT', 2), ('REPORT', 2),
('ALEPHPAPER', 1), ('ATLANTISTIMESNEWS', 1), ('ISOLDEPAPER', 1)],
get_most_popular_field_values(range(0,100), '980__a', ('THESIS', 'PICTURE', 'POETRY')))
def test_most_popular_field_values_multitag(self):
"""websearch - most popular field values, multiple tags"""
from invenio.search_engine import get_most_popular_field_values
self.assertEqual([('Ellis, J', 3), ('Enqvist, K', 1), ('Ibanez, L E', 1), ('Nanopoulos, D V', 1), ('Ross, G G', 1)],
get_most_popular_field_values((9, 14, 18), ('100__a', '700__a')))
def test_most_popular_field_values_multitag_singleexclusion(self):
"""websearch - most popular field values, multiple tags, single exclusion"""
from invenio.search_engine import get_most_popular_field_values
self.assertEqual([('Enqvist, K', 1), ('Ibanez, L E', 1), ('Nanopoulos, D V', 1), ('Ross, G G', 1)],
get_most_popular_field_values((9, 14, 18), ('100__a', '700__a'), ('Ellis, J',)))  # note: one-element tuple, not a bare string
def test_most_popular_field_values_multitag_countrepetitive(self):
"""websearch - most popular field values, multiple tags, counting repetitive occurrences"""
from invenio.search_engine import get_most_popular_field_values
self.assertEqual([('THESIS', 2), ('REPORT', 1)],
get_most_popular_field_values((41,), ('690C_a', '980__a'), count_repetitive_values=True))
self.assertEqual([('REPORT', 1), ('THESIS', 1)],
get_most_popular_field_values((41,), ('690C_a', '980__a'), count_repetitive_values=False))
def test_ellis_citation_summary(self):
"""websearch - query ellis, citation summary output format"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis&of=hcs',
expected_text="Less known papers (1-9)",
expected_link_target=CFG_SITE_URL+"/search?p=ellis%20AND%20cited%3A1-%3E9",
expected_link_label='1'))
@nottest
def test_ellis_not_quark_citation_summary_advanced(self):
"""websearch - ellis and not quark, citation summary format advanced"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?ln=en&as=1&m1=a&p1=ellis&f1=author&op1=n&m2=a&p2=quark&f2=&op2=a&m3=a&p3=&f3=&action_search=Search&sf=&so=a&rm=&rg=10&sc=1&of=hcs',
expected_text="Less known papers (1-9)",
expected_link_target=CFG_SITE_URL+'/search?p=author%3Aellis%20and%20not%20quark%20AND%20cited%3A1-%3E9',
expected_link_label='1'))
def test_ellis_not_quark_citation_summary_regular(self):
"""websearch - ellis and not quark, citation summary format regular"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=author%3Aellis+and+not+quark&f=&action_search=Search&sf=&so=d&rm=&rg=10&sc=0&of=hcs',
expected_text="Less known papers (1-9)",
expected_link_target=CFG_SITE_URL+'/search?p=author%3Aellis%20and%20not%20quark%20AND%20cited%3A1-%3E9',
expected_link_label='1'))
class WebSearchRecordCollectionGuessTest(InvenioTestCase):
"""Primary collection guessing tests."""
def test_guess_primary_collection_of_a_record(self):
"""websearch - guess_primary_collection_of_a_record"""
from invenio.search_engine import guess_primary_collection_of_a_record
self.assertEqual(guess_primary_collection_of_a_record(96), 'Articles')
def test_guess_collection_of_a_record(self):
"""websearch - guess_collection_of_a_record"""
from invenio.search_engine import guess_collection_of_a_record
self.assertEqual(guess_collection_of_a_record(96), 'Articles')
self.assertEqual(guess_collection_of_a_record(96, '%s/collection/Theoretical Physics (TH)?ln=en' % CFG_SITE_URL), 'Articles')
self.assertEqual(guess_collection_of_a_record(12, '%s/collection/Theoretical Physics (TH)?ln=en' % CFG_SITE_URL), 'Theoretical Physics (TH)')
self.assertEqual(guess_collection_of_a_record(12, '%s/collection/Theoretical%%20Physics%%20%%28TH%%29?ln=en' % CFG_SITE_URL), 'Theoretical Physics (TH)')
class WebSearchGetFieldValuesTest(InvenioTestCase):
"""Testing get_fieldvalues() function."""
def test_get_fieldvalues_001(self):
"""websearch - get_fieldvalues() for bibxxx-agnostic tags"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual(get_fieldvalues(10, '001___'), ['10'])
def test_get_fieldvalues_980(self):
"""websearch - get_fieldvalues() for bibxxx-powered tags"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual(get_fieldvalues(18, '700__a'), ['Enqvist, K', 'Nanopoulos, D V'])
self.assertEqual(get_fieldvalues(18, '909C1u'), ['CERN'])
def test_get_fieldvalues_wildcard(self):
"""websearch - get_fieldvalues() for tag wildcards"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual(get_fieldvalues(18, '%'), [])
self.assertEqual(get_fieldvalues(18, '7%'), [])
self.assertEqual(get_fieldvalues(18, '700%'), ['Enqvist, K', 'Nanopoulos, D V'])
self.assertEqual(get_fieldvalues(18, '909C0%'), ['1985', '13', 'TH'])
def test_get_fieldvalues_recIDs(self):
"""websearch - get_fieldvalues() for list of recIDs"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual(get_fieldvalues([], '001___'), [])
self.assertEqual(get_fieldvalues([], '700__a'), [])
self.assertEqual(get_fieldvalues([10, 13], '001___'), ['10', '13'])
self.assertEqual(get_fieldvalues([18, 13], '700__a'),
['Dawson, S', 'Ellis, R K', 'Enqvist, K', 'Nanopoulos, D V'])
def test_get_fieldvalues_repetitive(self):
"""websearch - get_fieldvalues() for repetitive values"""
from invenio.search_engine_utils import get_fieldvalues
self.assertEqual(get_fieldvalues([17, 18], '909C1u'),
['CERN', 'CERN'])
self.assertEqual(get_fieldvalues([17, 18], '909C1u', repetitive_values=True),
['CERN', 'CERN'])
self.assertEqual(get_fieldvalues([17, 18], '909C1u', repetitive_values=False),
['CERN'])
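The `repetitive_values=False` case above boils down to an order-preserving de-duplication of the collected field values. A minimal stdlib sketch of that idea (the helper name `dedup_preserving_order` is hypothetical, not Invenio's implementation):

```python
def dedup_preserving_order(values):
    """Drop repeated values while keeping first-seen order,
    mirroring get_fieldvalues(..., repetitive_values=False)."""
    seen = set()
    out = []
    for value in values:
        if value not in seen:
            seen.add(value)
            out.append(value)
    return out

print(dedup_preserving_order(['CERN', 'CERN']))  # ['CERN']
```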
class WebSearchAddToBasketTest(InvenioTestCase):
"""Test of the add-to-basket presence depending on user rights."""
def test_add_to_basket_guest(self):
"""websearch - add-to-basket facility not allowed for guests"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
unexpected_text='Add to basket'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
expected_text='<input name="recid" type="checkbox" value="10" />'))
def test_add_to_basket_jekyll(self):
"""websearch - add-to-basket facility allowed for Dr. Jekyll"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
expected_text='Add to basket',
username='jekyll',
password='j123ekyll'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
expected_text='<input name="recid" type="checkbox" value="10" />',
username='jekyll',
password='j123ekyll'))
def test_add_to_basket_hyde(self):
"""websearch - add-to-basket facility denied to Mr. Hyde"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
unexpected_text='Add to basket',
username='hyde',
password='h123yde'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=recid%3A10',
# we allow export of selected records
expected_text='<input name="recid" type="checkbox" value="10" />',
username='hyde',
password='h123yde'))
class WebSearchAlertTeaserTest(InvenioTestCase):
"""Test of the alert teaser presence depending on user rights."""
def test_alert_teaser_guest(self):
"""websearch - alert teaser allowed for guests"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_link_label='email alert'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_text='RSS feed'))
def test_alert_teaser_jekyll(self):
"""websearch - alert teaser allowed for Dr. Jekyll"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_text='email alert',
username='jekyll',
password='j123ekyll'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_text='RSS feed',
username='jekyll',
password='j123ekyll'))
def test_alert_teaser_hyde(self):
"""websearch - alert teaser allowed for Mr. Hyde"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_text='email alert',
username='hyde',
password='h123yde'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=ellis',
expected_text='RSS feed',
username='hyde',
password='h123yde'))
class WebSearchSpanQueryTest(InvenioTestCase):
"""Test of span queries."""
def test_span_in_word_index(self):
"""websearch - span query in a word index"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=year%3A1992-%3E1996&of=id&ap=0',
expected_text='[17, 66, 69, 71]'))
def test_span_in_phrase_index(self):
"""websearch - span query in a phrase index"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=year%3A%221992%22-%3E%221996%22&of=id&ap=0',
expected_text='[17, 66, 69, 71]'))
def test_span_in_bibxxx(self):
"""websearch - span query in MARC tables"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=909C0y%3A%221992%22-%3E%221996%22&of=id&ap=0',
expected_text='[17, 66, 69, 71]'))
def test_span_with_spaces(self):
"""websearch - no span query when a space is around"""
# useful for reaction search
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=title%3A%27mu%20--%3E%20e%27&of=id&ap=0',
expected_text='[67]'))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=245%3A%27mu%20--%3E%20e%27&of=id&ap=0',
expected_text='[67]'))
def test_span_in_author(self):
"""websearch - span query in special author index"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=author%3A%22Ellis,%20K%22-%3E%22Ellis,%20RZ%22&of=id&ap=0',
- expected_text='[8, 11, 13, 17, 47]'))
+ expected_text='[8, 9, 11, 12, 13, 14, 17, 18, 47]'))
class WebSearchReferstoCitedbyTest(InvenioTestCase):
"""Test of refersto/citedby search operators."""
def test_refersto_recid(self):
'websearch - refersto:recid:84'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=refersto%3Arecid%3A84&of=id&ap=0',
expected_text='[85, 88, 91]'))
def test_refersto_repno(self):
'websearch - refersto:reportnumber:hep-th/0205061'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=refersto%3Areportnumber%3Ahep-th/0205061&of=id&ap=0',
expected_text='[91]'))
def test_refersto_author_word(self):
'websearch - refersto:author:klebanov'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=refersto%3Aauthor%3Aklebanov&of=id&ap=0',
expected_text='[85, 86, 88, 91]'))
def test_refersto_author_phrase(self):
'websearch - refersto:author:"Klebanov, I"'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=refersto%3Aauthor%3A%22Klebanov,%20I%22&of=id&ap=0',
expected_text='[85, 86, 88, 91]'))
def test_citedby_recid(self):
'websearch - citedby:recid:92'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=citedby%3Arecid%3A92&of=id&ap=0',
expected_text='[74, 91]'))
def test_citedby_repno(self):
'websearch - citedby:reportnumber:hep-th/0205061'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=citedby%3Areportnumber%3Ahep-th/0205061&of=id&ap=0',
expected_text='[78]'))
def test_citedby_author_word(self):
'websearch - citedby:author:klebanov'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=citedby%3Aauthor%3Aklebanov&of=id&ap=0',
expected_text='[95]'))
def test_citedby_author_phrase(self):
'websearch - citedby:author:"Klebanov, I"'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=citedby%3Aauthor%3A%22Klebanov,%20I%22&of=id&ap=0',
expected_text='[95]'))
def test_refersto_bad_query(self):
'websearch - refersto:title:'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=refersto%3Atitle%3A',
expected_text='There are no records referring to title:.'))
def test_citedby_bad_query(self):
'websearch - citedby:title:'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=citedby%3Atitle%3A',
expected_text='There are no records cited by title:.'))
class WebSearchSPIRESSyntaxTest(InvenioTestCase):
"""Test of SPIRES syntax issues"""
if CFG_WEBSEARCH_SPIRES_SYNTAX > 0:
def test_and_not_parens(self):
'websearch - find a ellis, j and not a enqvist'
self.assertEqual([],
test_web_page_content(CFG_SITE_URL +'/search?p=find+a+ellis%2C+j+and+not+a+enqvist&of=id&ap=0',
expected_text='[9, 12, 14, 47]'))
if DATEUTIL_AVAILABLE:
def test_dadd_search(self):
'websearch - find da > today - 3650'
# XXX: assumes we've reinstalled our site in the last 10 years
# should return every document in the system
self.assertEqual([],
test_web_page_content(CFG_SITE_URL +'/search?ln=en&p=find+da+%3E+today+-+3650&f=&of=id',
- expected_text='[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 99, 100, 101, 102, 103, 104, 107, 108, 113]'))
+ expected_text='[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 99, 100, 101, 102, 103, 104, 107, 108, 113, 127, 128]'))
class WebSearchDateQueryTest(InvenioTestCase):
"""Test various date queries."""
def setUp(self):
"""Establish variables we plan to re-use"""
from invenio.intbitset import intbitset
self.empty = intbitset()
def test_search_unit_hits_for_datecreated_previous_millenia(self):
"""websearch - search_unit with datecreated returns >0 hits for docs in the last 1000 years"""
from invenio.search_engine import search_unit
self.assertNotEqual(self.empty, search_unit('1000-01-01->9999-12-31', 'datecreated'))
def test_search_unit_hits_for_datemodified_previous_millenia(self):
"""websearch - search_unit with datemodified returns >0 hits for docs in the last 1000 years"""
from invenio.search_engine import search_unit
self.assertNotEqual(self.empty, search_unit('1000-01-01->9999-12-31', 'datemodified'))
def test_search_unit_in_bibrec_for_datecreated_previous_millenia(self):
"""websearch - search_unit_in_bibrec with creationdate gets >0 hits for past 1000 years"""
from invenio.search_engine import search_unit_in_bibrec
self.assertNotEqual(self.empty, search_unit_in_bibrec("1000-01-01", "9999-12-31", 'creationdate'))
def test_search_unit_in_bibrec_for_datecreated_next_millenia(self):
"""websearch - search_unit_in_bibrec with creationdate gets 0 hits for after year 3000"""
from invenio.search_engine import search_unit_in_bibrec
self.assertEqual(self.empty, search_unit_in_bibrec("3000-01-01", "9999-12-31", 'creationdate'))
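The date tests above rely on Invenio's span syntax, where `low->high` selects everything between the two endpoints. A simplified sketch of splitting such a pattern into its endpoints (illustrative only, not the actual `search_unit` parser):

```python
def parse_span(p):
    """Split a span query such as '1000-01-01->9999-12-31' into its
    low and high endpoints; a term without '->' spans just itself."""
    if '->' in p:
        low, high = p.split('->', 1)
        return low, high
    return p, p

print(parse_span('1992->1996'))  # ('1992', '1996')
```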
class WebSearchSynonymQueryTest(InvenioTestCase):
"""Test of queries using synonyms."""
def test_journal_phrvd(self):
"""websearch - search-time synonym search, journal title"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=PHRVD&f=journal&of=id',
expected_text="[66, 72]"))
def test_journal_phrvd_54_1996_4234(self):
"""websearch - search-time synonym search, journal article"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=PHRVD%2054%20%281996%29%204234&f=journal&of=id',
expected_text="[66]"))
def test_journal_beta_decay_title(self):
"""websearch - index-time synonym search, beta decay in title"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=beta+decay&f=title&of=id',
expected_text="[59]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%CE%B2+decay&f=title&of=id',
expected_text="[59]"))
def test_journal_beta_decay_global(self):
"""websearch - index-time synonym search, beta decay in any field"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=beta+decay&of=id',
expected_text="[52, 59]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%CE%B2+decay&of=id',
- expected_text="[52, 59]"))
+ expected_text="[59]"))
def test_journal_beta_title(self):
"""websearch - index-time synonym search, beta in title"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=beta&f=title&of=id',
expected_text="[59]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%CE%B2&f=title&of=id',
expected_text="[59]"))
def test_journal_beta_global(self):
"""websearch - index-time synonym search, beta in any field"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=beta&of=id',
expected_text="[52, 59]"))
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%CE%B2&of=id',
- expected_text="[52, 59]"))
+ expected_text="[59]"))
class WebSearchWashCollectionsTest(InvenioTestCase):
"""Test if the collection argument is washed correctly"""
def test_wash_coll_when_coll_restricted(self):
"""websearch - washing of restricted daughter collections"""
from invenio.search_engine import wash_colls
self.assertEqual(
sorted(wash_colls(cc='', c=['Books & Reports', 'Theses'])[1]),
['Books & Reports', 'Theses'])
self.assertEqual(
sorted(wash_colls(cc='', c=['Books & Reports', 'Theses'])[2]),
['Books & Reports', 'Theses'])
class WebSearchAuthorCountQueryTest(InvenioTestCase):
"""Test of queries using authorcount fields."""
def test_journal_authorcount_word(self):
"""websearch - author count, word query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=4&f=authorcount&of=id',
expected_text="[51, 54, 59, 66, 92, 96]"))
def test_journal_authorcount_phrase(self):
"""websearch - author count, phrase query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=%224%22&f=authorcount&of=id',
expected_text="[51, 54, 59, 66, 92, 96]"))
def test_journal_authorcount_span(self):
"""websearch - author count, span query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=authorcount%3A9-%3E16&of=id',
- expected_text="[69, 71]"))
+ expected_text="[69, 71, 127]"))
def test_journal_authorcount_plus(self):
"""websearch - author count, plus query"""
self.assertEqual([],
test_web_page_content(CFG_SITE_URL + '/search?p=50%2B&f=authorcount&of=id',
expected_text="[10, 17]"))
+
+class WebSearchItemCountQueryTest(InvenioTestCase):
+ """Test of queries using itemcount field/index"""
+
+ def test_itemcount_plus(self):
+ """websearch - item count, search for more than one item, using 'plus'"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=2%2B&f=itemcount&of=id',
+ expected_text="[31, 32, 34]"))
+
+ def test_itemcount_span(self):
+ """websearch - item count, search for more than one item, using 'span'"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=2->10&f=itemcount&of=id',
+ expected_text="[31, 32, 34]"))
+
+ def test_itemcount_phrase(self):
+ """websearch - item count, search for records with exactly two items, phrase"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=%222%22&f=itemcount&of=id',
+ expected_text="[31, 34]"))
+
+ def test_itemcount_records_with_two_items(self):
+ """websearch - item count, search for records with exactly two items"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?p=2&f=itemcount&of=id',
+ expected_text="[31, 34]"))
+
+
+class WebSearchFiletypeQueryTest(InvenioTestCase):
+ """Test of queries using filetype fields."""
+
+ def test_mpg_filetype(self):
+ """websearch - file type, query for mpg extension"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=mpg&f=filetype&of=id',
+ expected_text="[113]"))
+
+ def test_tif_filetype_and_word_study(self):
+ """websearch - file type, query for tif extension and word 'study'"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=study+filetype%3Atif&of=id',
+ expected_text="[71]"))
+
+ def test_pdf_filetype_and_phrase(self):
+ """websearch - file type, query for pdf extension and phrase 'parameter test'"""
+ self.assertEqual([],
+ test_web_page_content(CFG_SITE_URL + '/search?ln=en&p=filetype%3Apdf+parameter+test&of=id',
+ expected_text="[50, 93]"))
+
+
class WebSearchPerformRequestSearchRefactoringTest(InvenioTestCase):
"""Tests the perform request search API after refactoring."""
def _run_test(self, test_args, expected_results):
from invenio.search_engine import perform_request_search
params = {}
# split "key=value;key=value" pairs into kwargs; comma-separated
# values (with no space after the comma) become lists
params.update(map(lambda y: (y[0], ',' in y[1] and ', ' not in y[1] and y[1].split(',') or y[1]), map(lambda x: x.split('=', 1), test_args.split(';'))))
#params.update(map(lambda x: x.split('=', 1), test_args.split(';')))
req = cStringIO.StringIO()
params['req'] = req
recs = perform_request_search(**params)
if isinstance(expected_results, str):
req.seek(0)
recs = req.read()
# this is just used to generate the results from the search engine before refactoring
#if recs != expected_results:
# print test_args
# print params
# print recs
self.assertEqual(recs, expected_results, "Error, we expect: %s, and we received: %s" % (expected_results, recs))
def test_queries(self):
"""websearch - testing p_r_s standard arguments and their combinations"""
self._run_test('p=ellis;f=author;action=Search', [8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 47])
self._run_test('p=ellis;f=author;sf=title;action=Search', [8, 16, 14, 9, 11, 17, 18, 12, 10, 47, 13])
self._run_test('p=ellis;f=author;sf=title;wl=5;action=Search', [8, 16, 14, 9, 11, 17, 18, 12, 10, 47, 13])
self._run_test('p=ellis;f=author;sf=title;wl=5;so=a', [13, 47, 10, 12, 18, 17, 11, 9, 14, 16, 8])
self._run_test('p=ellis;f=author;sf=title;wl=5;so=d', [8, 16, 14, 9, 11, 17, 18, 12, 10, 47, 13])
self._run_test('p=ell*;sf=title;wl=5', [8, 15, 16, 14, 9, 11, 17, 18, 12, 10, 47, 13])
self._run_test('p=ell*;sf=title;wl=1', [10])
self._run_test('p=ell*;sf=title;wl=100', [8, 15, 16, 14, 9, 11, 17, 18, 12, 10, 47, 13])
self._run_test('p=muon OR kaon;f=author;sf=title;wl=5;action=Search', [])
self._run_test('p=muon OR kaon;sf=title;wl=5;action=Search', [67, 12])
self._run_test('p=muon OR kaon;sf=title;wl=5;c=Articles,Preprints', [67, 12])
self._run_test('p=muon OR kaon;sf=title;wl=5;c=Articles', [67])
self._run_test('p=muon OR kaon;sf=title;wl=5;c=Preprints', [12])
# FIXME_TICKET_1174
# self._run_test('p=el*;rm=citation', [2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 23, 30, 32, 34, 47, 48, 51, 52, 54, 56, 58, 59, 92, 97, 100, 103, 18, 74, 91, 94, 81])
from invenio.bibrank_bridge_utils import get_external_word_similarity_ranker
if not get_external_word_similarity_ranker():
- self._run_test('p=el*;rm=wrd', [2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 23, 30, 32, 34, 47, 48, 51, 52, 54, 56, 58, 59, 74, 81, 91, 92, 94, 97, 100, 103, 109])
+ self._run_test('p=el*;rm=wrd', [2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 23, 30, 32, 34, 47, 48, 51, 52, 54, 56, 58, 59, 74, 81, 91, 92, 94, 97, 100, 103, 109, 127, 128])
- self._run_test('p=el*;sf=title', [100, 32, 8, 15, 16, 81, 97, 34, 23, 58, 2, 14, 9, 11, 30, 109, 52, 48, 94, 17, 56, 18, 91, 59, 12, 92, 74, 54, 103, 10, 51, 47, 13])
+ self._run_test('p=el*;sf=title', [100, 32, 8, 15, 16, 81, 97, 34, 23, 127, 58, 2, 14, 9, 128, 11, 30, 109, 52, 48, 94, 17, 56, 18, 91, 59, 12, 92, 74, 54, 103, 10, 51, 47, 13])
self._run_test('p=boson;rm=citation', [1, 47, 50, 107, 108, 77, 95])
if not get_external_word_similarity_ranker():
self._run_test('p=boson;rm=wrd', [108, 77, 47, 50, 95, 1, 107])
- self._run_test('p1=ellis;f1=author;m1=a;op1=a;p2=john;f2=author;m2=a', [])
+ self._run_test('p1=ellis;f1=author;m1=a;op1=a;p2=john;f2=author;m2=a', [9, 12, 14, 18])
- self._run_test('p1=ellis;f1=author;m1=o;op1=a;p2=john;f2=author;m2=o', [])
+ self._run_test('p1=ellis;f1=author;m1=o;op1=a;p2=john;f2=author;m2=o', [9, 12, 14, 18])
self._run_test('p1=ellis;f1=author;m1=e;op1=a;p2=john;f2=author;m2=e', [])
self._run_test('p1=ellis;f1=author;m1=a;op1=o;p2=john;f2=author;m2=a', [8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 47])
self._run_test('p1=ellis;f1=author;m1=o;op1=o;p2=john;f2=author;m2=o', [8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 47])
self._run_test('p1=ellis;f1=author;m1=e;op1=o;p2=john;f2=author;m2=e', [])
- self._run_test('p1=ellis;f1=author;m1=a;op1=n;p2=john;f2=author;m2=a', [8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 47])
+ self._run_test('p1=ellis;f1=author;m1=a;op1=n;p2=john;f2=author;m2=a', [8, 10, 11, 13, 16, 17, 47])
- self._run_test('p1=ellis;f1=author;m1=o;op1=n;p2=john;f2=author;m2=o', [8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 47])
+ self._run_test('p1=ellis;f1=author;m1=o;op1=n;p2=john;f2=author;m2=o', [8, 10, 11, 13, 16, 17, 47])
self._run_test('p1=ellis;f1=author;m1=e;op1=n;p2=john;f2=author;m2=e', [])
self._run_test('p=Ellis, J;ap=1', [9, 10, 11, 12, 14, 17, 18, 47])
self._run_test('p=Ellis, J;ap=0', [9, 10, 11, 12, 14, 17, 18, 47])
self._run_test('p=recid:148x', [])
self._run_test('p=recid:148x;of=xm;rg=200', "<collection xmlns=\"http://www.loc.gov/MARC21/slim\">\n\n</collection>")
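The `_run_test` helper above packs `'key=value;key=value'` strings into keyword arguments for `perform_request_search` via a dense lambda chain. A readable equivalent as a plain loop (the function name `parse_test_args` is illustrative, not part of the test suite):

```python
def parse_test_args(test_args):
    """Split 'key=value;key=value' into a dict, turning comma-separated
    values (without a space after the comma) into lists, as _run_test does."""
    params = {}
    for chunk in test_args.split(';'):
        key, value = chunk.split('=', 1)
        if ',' in value and ', ' not in value:
            value = value.split(',')
        params[key] = value
    return params

print(parse_test_args('p=ellis;f=author;c=Articles,Preprints'))
# {'p': 'ellis', 'f': 'author', 'c': ['Articles', 'Preprints']}
```

Note that a value like `Ellis, J` (comma followed by a space) is deliberately kept as a single string rather than split into a list.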
class WebSearchGetRecordTests(InvenioTestCase):
def setUp(self):
self.recid = run_sql("INSERT INTO bibrec(creation_date, modification_date) VALUES(NOW(), NOW())")
def tearDown(self):
run_sql("DELETE FROM bibrec WHERE id=%s", (self.recid,))
def test_get_record(self):
"""bibformat - test print_record and get_record of empty record"""
from invenio.search_engine import print_record, get_record
self.assertEqual(print_record(self.recid, 'xm'), ' <record>\n <controlfield tag="001">%s</controlfield>\n </record>\n\n ' % self.recid)
self.assertEqual(get_record(self.recid), {'001': [([], ' ', ' ', str(self.recid), 1)]})
class WebSearchExactTitleIndexTest(InvenioTestCase):
"""Checks if exact title index works correctly """
def test_exacttitle_query_solves_problems(self):
"""websearch - check exacttitle query solves problems"""
error_messages = []
error_messages.extend(test_web_page_content(CFG_SITE_URL + "/search?ln=en&p=exacttitle%3A'solves+problems'&f=&action_search=Search",
expected_text = "Non-compact supergravity solves problems"))
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_exacttitle_query_solve_problems(self):
"""websearch - check exacttitle query solve problems"""
error_messages = []
error_messages.extend(test_web_page_content(CFG_SITE_URL + "/search?ln=en&p=exacttitle%3A'solve+problems'&f=&action_search=Search",
expected_text = ['Search term', 'solve problems', 'did not match']))
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_exacttitle_query_photon_beam(self):
"""websearch - check exacttitle search photon beam"""
error_messages = []
error_messages.extend(test_web_page_content(CFG_SITE_URL + "/search?ln=en&p=exacttitle%3A'photon+beam'&f=&action_search=Search",
expected_text = "Development of photon beam diagnostics"))
if error_messages:
self.fail(merge_error_messages(error_messages))
def test_exacttitle_query_photons_beam(self):
"""websearch - check exacttitle search photons beam"""
error_messages = []
error_messages.extend(test_web_page_content(CFG_SITE_URL + "/search?ln=en&p=exacttitle%3A'photons+beam'&f=&action_search=Search",
expected_text = ['Search term', 'photons beam', 'did not match']))
if error_messages:
self.fail(merge_error_messages(error_messages))
class WebSearchUserSettingsTest(InvenioTestCase):
"""Checks WebSearch User Settings form"""
def test_multiple_collection_settings(self):
"""websearch - check webaccount settings multiple collection manipulation"""
from invenio.sqlalchemyutils import db
from invenio.websession_model import User
edit_url = url_for("webaccount.edit", name="WebSearchSettings", _external=True, _scheme='https')
admin = User.query.get(1)
settings = dict(admin.settings)
db.session.expunge_all()
# Login admin user
browser = get_authenticated_mechanize_browser("admin", "")
browser.open(edit_url)
browser.select_form(nr=0)
browser.form['c'] = ['ALEPH', 'ALEPH Papers']
browser.submit()
admin = User.query.get(1)
self.assertEqual(admin.settings.get('c'), ['ALEPH', 'ALEPH Papers'])
admin.settings = settings
db.session.merge(admin)
db.session.commit()
def test_single_collection_settings(self):
"""websearch - check webaccount settings single collection manipulation"""
from invenio.sqlalchemyutils import db
from invenio.websession_model import User
edit_url = url_for("webaccount.edit", name="WebSearchSettings", _external=True, _scheme='https')
admin = User.query.get(1)
settings = dict(admin.settings)
db.session.expunge_all()
# Login admin user
browser = get_authenticated_mechanize_browser("admin", "")
browser.open(edit_url)
browser.select_form(nr=0)
browser.form['c'] = ['ALEPH']
browser.submit()
admin = User.query.get(1)
self.assertEqual(admin.settings.get('c'), ['ALEPH'])
admin.settings = settings
db.session.merge(admin)
db.session.commit()
TEST_SUITE = make_test_suite(WebSearchWebPagesAvailabilityTest,
WebSearchTestSearch,
WebSearchTestBrowse,
WebSearchTestOpenURL,
WebSearchTestCollections,
WebSearchTestRecord,
WebSearchTestLegacyURLs,
WebSearchNearestTermsTest,
WebSearchBooleanQueryTest,
WebSearchAuthorQueryTest,
WebSearchSearchEnginePythonAPITest,
WebSearchSearchEngineWebAPITest,
WebSearchRecordWebAPITest,
WebSearchRestrictedCollectionTest,
WebSearchRestrictedCollectionHandlingTest,
WebSearchRestrictedPicturesTest,
WebSearchRestrictedWebJournalFilesTest,
WebSearchRSSFeedServiceTest,
WebSearchXSSVulnerabilityTest,
WebSearchResultsOverview,
WebSearchSortResultsTest,
WebSearchSearchResultsXML,
WebSearchUnicodeQueryTest,
WebSearchMARCQueryTest,
WebSearchExtSysnoQueryTest,
WebSearchResultsRecordGroupingTest,
WebSearchSpecialTermsQueryTest,
WebSearchJournalQueryTest,
WebSearchStemmedIndexQueryTest,
WebSearchSummarizerTest,
WebSearchRecordCollectionGuessTest,
WebSearchGetFieldValuesTest,
WebSearchAddToBasketTest,
WebSearchAlertTeaserTest,
WebSearchSpanQueryTest,
WebSearchReferstoCitedbyTest,
WebSearchSPIRESSyntaxTest,
WebSearchDateQueryTest,
WebSearchTestWildcardLimit,
WebSearchSynonymQueryTest,
WebSearchWashCollectionsTest,
WebSearchAuthorCountQueryTest,
+ WebSearchFiletypeQueryTest,
WebSearchPerformRequestSearchRefactoringTest,
WebSearchGetRecordTests,
WebSearchExactTitleIndexTest,
- WebSearchUserSettingsTest)
+ WebSearchUserSettingsTest,
+ WebSearchCJKTokenizedSearchTest,
+ WebSearchItemCountQueryTest)
+
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
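The tests above all follow the same pattern: accumulate error messages from page checks, then fail once with all of them merged. A standalone sketch of that pattern, using a hypothetical `check_page` stub in place of the real `test_web_page_content` helper:

```python
# Standalone sketch (hypothetical stub, not the real Invenio helper) of
# the accumulate-then-fail pattern used by the tests above: each check
# returns a list of error messages, and the test fails once with all
# messages merged.
def check_page(text, expected_texts):
    """Return one error message per expected string missing from `text`."""
    if isinstance(expected_texts, str):
        expected_texts = [expected_texts]
    return ['missing: %s' % expected
            for expected in expected_texts
            if expected not in text]

def merge_error_messages(messages):
    """Join all collected error messages into a single failure report."""
    return '\n'.join(messages)
```

A test method would call `check_page` once per URL, extend its local `error_messages` list, and invoke `self.fail(merge_error_messages(error_messages))` only if the list is non-empty.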
diff --git a/modules/websearch/lib/websearch_templates.py b/modules/websearch/lib/websearch_templates.py
index bb4f4cb0e..71ab5e218 100644
--- a/modules/websearch/lib/websearch_templates.py
+++ b/modules/websearch/lib/websearch_templates.py
@@ -1,4625 +1,4626 @@
# -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
__revision__ = "$Id$"
import time
import cgi
import string
import re
import locale
from urllib import quote, urlencode
from xml.sax.saxutils import escape as xml_escape
from invenio.config import \
CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH, \
CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH, \
CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH, \
CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD, \
CFG_WEBSEARCH_USE_ALEPH_SYSNOS, \
CFG_WEBSEARCH_SPLIT_BY_COLLECTION, \
CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, \
CFG_BIBRANK_SHOW_READING_STATS, \
CFG_BIBRANK_SHOW_DOWNLOAD_STATS, \
CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS, \
CFG_BIBRANK_SHOW_CITATION_LINKS, \
CFG_BIBRANK_SHOW_CITATION_STATS, \
CFG_BIBRANK_SHOW_CITATION_GRAPHS, \
CFG_WEBSEARCH_RSS_TTL, \
CFG_SITE_LANG, \
CFG_SITE_NAME, \
CFG_SITE_NAME_INTL, \
CFG_VERSION, \
CFG_SITE_URL, \
CFG_SITE_SUPPORT_EMAIL, \
CFG_SITE_ADMIN_EMAIL, \
CFG_CERN_SITE, \
CFG_INSPIRE_SITE, \
CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, \
CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES, \
CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS, \
CFG_BIBINDEX_CHARS_PUNCTUATION, \
CFG_WEBCOMMENT_ALLOW_COMMENTS, \
CFG_WEBCOMMENT_ALLOW_REVIEWS, \
CFG_WEBSEARCH_WILDCARD_LIMIT, \
CFG_WEBSEARCH_SHOW_COMMENT_COUNT, \
CFG_WEBSEARCH_SHOW_REVIEW_COUNT, \
CFG_SITE_RECORD, \
CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT
from invenio.search_engine_config import CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT
from invenio.dbquery import run_sql
from invenio.messages import gettext_set_language
from invenio.urlutils import make_canonical_urlargd, drop_default_urlargd, create_html_link, create_url
from invenio.htmlutils import nmtoken_from_string
from invenio.webinterface_handler import wash_urlargd
from invenio.bibrank_citation_searcher import get_cited_by_count
from invenio.webuser import session_param_get
from invenio.intbitset import intbitset
from invenio.websearch_external_collections import external_collection_get_state, get_external_collection_engine
from invenio.websearch_external_collections_utils import get_collection_id
from invenio.websearch_external_collections_config import CFG_EXTERNAL_COLLECTION_MAXRESULTS
from invenio.search_engine_utils import get_fieldvalues
_RE_PUNCTUATION = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION)
_RE_SPACES = re.compile(r"\s+")
class Template:
# This dictionary maps Invenio language codes (ISO 639) to locale codes
tmpl_localemap = {
'bg': 'bg_BG',
'ar': 'ar_AR',
'ca': 'ca_ES',
'de': 'de_DE',
'el': 'el_GR',
'en': 'en_US',
'es': 'es_ES',
'pt': 'pt_BR',
+ 'fa': 'fa_IR',
'fr': 'fr_FR',
'it': 'it_IT',
'ka': 'ka_GE',
'lt': 'lt_LT',
'ro': 'ro_RO',
'ru': 'ru_RU',
'rw': 'rw_RW',
'sk': 'sk_SK',
'cs': 'cs_CZ',
'no': 'no_NO',
'sv': 'sv_SE',
'uk': 'uk_UA',
'ja': 'ja_JP',
'pl': 'pl_PL',
'hr': 'hr_HR',
'zh_CN': 'zh_CN',
'zh_TW': 'zh_TW',
'hu': 'hu_HU',
'af': 'af_ZA',
'gl': 'gl_ES'
}
tmpl_default_locale = "en_US" # which locale to use by default, useful in case of failure
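The locale lookup with its fallback can be sketched in isolation (standalone example, not Invenio code; the dictionary here is a reduced stand-in for `tmpl_localemap` above):

```python
# Standalone sketch of how a template resolves an Invenio language code
# to a locale, falling back to a default when the code is unknown --
# mirrors tmpl_localemap / tmpl_default_locale above (reduced mapping).
TMPL_LOCALEMAP = {'en': 'en_US', 'fr': 'fr_FR', 'zh_CN': 'zh_CN'}
TMPL_DEFAULT_LOCALE = 'en_US'

def resolve_locale(ln):
    """Return the locale for language code `ln`, or the default."""
    return TMPL_LOCALEMAP.get(ln, TMPL_DEFAULT_LOCALE)
```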
# Type of the allowed parameters for the web interface for search results
search_results_default_urlargd = {
'cc': (str, CFG_SITE_NAME),
'c': (list, []),
'p': (str, ""), 'f': (str, ""),
'rg': (int, CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS),
'sf': (str, ""),
'so': (str, "d"),
'sp': (str, ""),
'rm': (str, ""),
'of': (str, "hb"),
'ot': (list, []),
'em': (str,""),
'aas': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'as': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'p1': (str, ""), 'f1': (str, ""), 'm1': (str, ""), 'op1':(str, ""),
'p2': (str, ""), 'f2': (str, ""), 'm2': (str, ""), 'op2':(str, ""),
'p3': (str, ""), 'f3': (str, ""), 'm3': (str, ""),
'sc': (int, 0),
'jrec': (int, 0),
'recid': (int, -1), 'recidb': (int, -1), 'sysno': (str, ""),
'id': (int, -1), 'idb': (int, -1), 'sysnb': (str, ""),
'action': (str, "search"),
'action_search': (str, ""),
'action_browse': (str, ""),
'd1': (str, ""),
'd1y': (int, 0), 'd1m': (int, 0), 'd1d': (int, 0),
'd2': (str, ""),
'd2y': (int, 0), 'd2m': (int, 0), 'd2d': (int, 0),
'dt': (str, ""),
'ap': (int, 1),
'verbose': (int, 0),
'ec': (list, []),
'wl': (int, CFG_WEBSEARCH_WILDCARD_LIMIT),
}
# ...and for search interfaces
search_interface_default_urlargd = {
'aas': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'as': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'verbose': (int, 0),
'em' : (str, "")}
# ...and for RSS feeds
rss_default_urlargd = {'c' : (list, []),
'cc' : (str, ""),
'p' : (str, ""),
'f' : (str, ""),
'p1' : (str, ""),
'f1' : (str, ""),
'm1' : (str, ""),
'op1': (str, ""),
'p2' : (str, ""),
'f2' : (str, ""),
'm2' : (str, ""),
'op2': (str, ""),
'p3' : (str, ""),
'f3' : (str, ""),
'm3' : (str, ""),
'wl' : (int, CFG_WEBSEARCH_WILDCARD_LIMIT)}
tmpl_openurl_accepted_args = {
'id' : (list, []),
'genre' : (str, ''),
'aulast' : (str, ''),
'aufirst' : (str, ''),
'auinit' : (str, ''),
'auinit1' : (str, ''),
'auinitm' : (str, ''),
'issn' : (str, ''),
'eissn' : (str, ''),
'coden' : (str, ''),
'isbn' : (str, ''),
'sici' : (str, ''),
'bici' : (str, ''),
'title' : (str, ''),
'stitle' : (str, ''),
'atitle' : (str, ''),
'volume' : (str, ''),
'part' : (str, ''),
'issue' : (str, ''),
'spage' : (str, ''),
'epage' : (str, ''),
'pages' : (str, ''),
'artnum' : (str, ''),
'date' : (str, ''),
'ssn' : (str, ''),
'quarter' : (str, ''),
'url_ver' : (str, ''),
'ctx_ver' : (str, ''),
'rft_val_fmt' : (str, ''),
'rft_id' : (list, []),
'rft.atitle' : (str, ''),
'rft.title' : (str, ''),
'rft.jtitle' : (str, ''),
'rft.stitle' : (str, ''),
'rft.date' : (str, ''),
'rft.volume' : (str, ''),
'rft.issue' : (str, ''),
'rft.spage' : (str, ''),
'rft.epage' : (str, ''),
'rft.pages' : (str, ''),
'rft.artnumber' : (str, ''),
'rft.issn' : (str, ''),
'rft.eissn' : (str, ''),
'rft.aulast' : (str, ''),
'rft.aufirst' : (str, ''),
'rft.auinit' : (str, ''),
'rft.auinit1' : (str, ''),
'rft.auinitm' : (str, ''),
'rft.ausuffix' : (str, ''),
'rft.au' : (list, []),
'rft.aucorp' : (str, ''),
'rft.isbn' : (str, ''),
'rft.coden' : (str, ''),
'rft.sici' : (str, ''),
'rft.genre' : (str, 'unknown'),
'rft.chron' : (str, ''),
'rft.ssn' : (str, ''),
'rft.quarter' : (str, ''),
'rft.part' : (str, ''),
'rft.btitle' : (str, ''),
'rft.place' : (str, ''),
'rft.pub' : (str, ''),
'rft.edition' : (str, ''),
'rft.tpages' : (str, ''),
'rft.series' : (str, ''),
}
tmpl_opensearch_rss_url_syntax = "%(CFG_SITE_URL)s/rss?p={searchTerms}&amp;jrec={startIndex}&amp;rg={count}&amp;ln={language}" % {'CFG_SITE_URL': CFG_SITE_URL}
tmpl_opensearch_html_url_syntax = "%(CFG_SITE_URL)s/search?p={searchTerms}&amp;jrec={startIndex}&amp;rg={count}&amp;ln={language}" % {'CFG_SITE_URL': CFG_SITE_URL}
def tmpl_openurl2invenio(self, openurl_data):
""" Return an Invenio url corresponding to a search with the data
included in the openurl form map.
"""
def isbn_to_isbn13_isbn10(isbn):
isbn = isbn.replace(' ', '').replace('-', '')
if len(isbn) == 10 and isbn.isdigit():
## We already have isbn10
return ('', isbn)
if len(isbn) != 13 or not isbn.isdigit():
return ('', '')
isbn13, isbn10 = isbn, isbn[3:-1]
checksum = 0
weight = 10
for char in isbn10:
checksum += int(char) * weight
weight -= 1
checksum = 11 - (checksum % 11)
if checksum == 10:
isbn10 += 'X'
elif checksum == 11:
isbn10 += '0'
else:
isbn10 += str(checksum)
return (isbn13, isbn10)
from invenio.search_engine import perform_request_search
doi = ''
pmid = ''
bibcode = ''
oai = ''
issn = ''
isbn = ''
for elem in openurl_data['id']:
if elem.startswith('doi:'):
doi = elem[len('doi:'):]
elif elem.startswith('pmid:'):
pmid = elem[len('pmid:'):]
elif elem.startswith('bibcode:'):
bibcode = elem[len('bibcode:'):]
elif elem.startswith('oai:'):
oai = elem[len('oai:'):]
for elem in openurl_data['rft_id']:
if elem.startswith('info:doi/'):
doi = elem[len('info:doi/'):]
elif elem.startswith('info:pmid/'):
pmid = elem[len('info:pmid/'):]
elif elem.startswith('info:bibcode/'):
bibcode = elem[len('info:bibcode/'):]
elif elem.startswith('info:oai/'):
oai = elem[len('info:oai/'):]
elif elem.startswith('urn:ISBN:'):
isbn = elem[len('urn:ISBN:'):]
elif elem.startswith('urn:ISSN:'):
issn = elem[len('urn:ISSN:'):]
## Building author query
aulast = openurl_data['rft.aulast'] or openurl_data['aulast']
aufirst = openurl_data['rft.aufirst'] or openurl_data['aufirst']
auinit = openurl_data['rft.auinit'] or \
openurl_data['auinit'] or \
openurl_data['rft.auinit1'] + ' ' + openurl_data['rft.auinitm'] or \
openurl_data['auinit1'] + ' ' + openurl_data['auinitm'] or aufirst[:1]
auinit = auinit.upper()
if aulast and aufirst:
author_query = 'author:"%s, %s" or author:"%s, %s"' % (aulast, aufirst, aulast, auinit)
elif aulast and auinit:
author_query = 'author:"%s, %s"' % (aulast, auinit)
else:
author_query = ''
## Building title query
title = openurl_data['rft.atitle'] or \
openurl_data['atitle'] or \
openurl_data['rft.btitle'] or \
openurl_data['rft.title'] or \
openurl_data['title']
if title:
title_query = 'title:"%s"' % title
title_query_cleaned = 'title:"%s"' % _RE_SPACES.sub(' ', _RE_PUNCTUATION.sub(' ', title))
else:
title_query = ''
## Building journal query
jtitle = openurl_data['rft.stitle'] or \
openurl_data['stitle'] or \
openurl_data['rft.jtitle'] or \
openurl_data['title']
if jtitle:
journal_query = 'journal:"%s"' % jtitle
else:
journal_query = ''
## Building isbn query
isbn = isbn or openurl_data['rft.isbn'] or \
openurl_data['isbn']
isbn13, isbn10 = isbn_to_isbn13_isbn10(isbn)
if isbn13:
isbn_query = 'isbn:"%s" or isbn:"%s"' % (isbn13, isbn10)
elif isbn10:
isbn_query = 'isbn:"%s"' % isbn10
else:
isbn_query = ''
## Building issn query
issn = issn or openurl_data['rft.eissn'] or \
openurl_data['eissn'] or \
openurl_data['rft.issn'] or \
openurl_data['issn']
if issn:
issn_query = 'issn:"%s"' % issn
else:
issn_query = ''
## Building coden query
coden = openurl_data['rft.coden'] or openurl_data['coden']
if coden:
coden_query = 'coden:"%s"' % coden
else:
coden_query = ''
## Building doi query
if False: #doi: #FIXME Temporarily disabled until the doi field is properly set up
doi_query = 'doi:"%s"' % doi
else:
doi_query = ''
## Trying possible searches
if doi_query:
if perform_request_search(p=doi_query):
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : doi_query,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hd'}))
if isbn_query:
if perform_request_search(p=isbn_query):
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : isbn_query,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hd'}))
if coden_query:
if perform_request_search(p=coden_query):
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : coden_query,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hd'}))
if author_query and title_query:
if perform_request_search(p='%s and %s' % (title_query, author_query)):
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : '%s and %s' % (title_query, author_query),
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hd'}))
if title_query:
result = len(perform_request_search(p=title_query))
if result == 1:
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : title_query,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hd'}))
elif result > 1:
return '%s/search?%s' % (CFG_SITE_URL, urlencode({
'p' : title_query,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hb'}))
## Nothing worked, let's return a search that the user can improve
if author_query and title_query:
return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({
'p' : '%s and %s' % (title_query_cleaned, author_query),
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hb'}, {}))
elif title_query:
return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({
'p' : title_query_cleaned,
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hb'}, {}))
else:
## Mmh. Too little information provided.
return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({
'p' : 'recid:-1',
'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION,
'of' : 'hb'}, {}))
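The ISBN-13 to ISBN-10 conversion performed by the `isbn_to_isbn13_isbn10` helper above can be sketched on its own (standalone example, not Invenio code; it assumes a '978'-prefixed ISBN-13, since '979'-prefixed ISBNs have no ISBN-10 form):

```python
# Standalone sketch of ISBN-13 -> ISBN-10 conversion: strip the '978'
# prefix and the ISBN-13 check digit, then recompute the ISBN-10 check
# digit (weights 10..2, modulo 11, with 10 rendered as 'X').
def isbn13_to_isbn10(isbn13):
    core = isbn13[3:-1]  # the nine payload digits shared by both forms
    checksum = sum(int(digit) * weight
                   for digit, weight in zip(core, range(10, 1, -1)))
    check = (11 - checksum % 11) % 11
    return core + ('X' if check == 10 else str(check))
```

For example, `isbn13_to_isbn10('9780306406157')` yields `'0306406152'`.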
def tmpl_opensearch_description(self, ln):
""" Returns the OpenSearch description file of this site.
"""
_ = gettext_set_language(ln)
return """<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
xmlns:moz="http://www.mozilla.org/2006/browser/search/">
<ShortName>%(short_name)s</ShortName>
<LongName>%(long_name)s</LongName>
<Description>%(description)s</Description>
<InputEncoding>UTF-8</InputEncoding>
<OutputEncoding>UTF-8</OutputEncoding>
<Language>*</Language>
<Contact>%(CFG_SITE_ADMIN_EMAIL)s</Contact>
<Query role="example" searchTerms="a" />
<Developer>Powered by Invenio</Developer>
<Url type="text/html" indexOffset="1" rel="results" template="%(html_search_syntax)s" />
<Url type="application/rss+xml" indexOffset="1" rel="results" template="%(rss_search_syntax)s" />
<Url type="application/opensearchdescription+xml" rel="self" template="%(CFG_SITE_URL)s/opensearchdescription" />
<moz:SearchForm>%(CFG_SITE_URL)s</moz:SearchForm>
</OpenSearchDescription>""" % \
{'CFG_SITE_URL': CFG_SITE_URL,
'short_name': CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)[:16],
'long_name': CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME),
'description': (_("Search on %(x_CFG_SITE_NAME_INTL)s") % \
{'x_CFG_SITE_NAME_INTL': CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)})[:1024],
'CFG_SITE_ADMIN_EMAIL': CFG_SITE_ADMIN_EMAIL,
'rss_search_syntax': self.tmpl_opensearch_rss_url_syntax,
'html_search_syntax': self.tmpl_opensearch_html_url_syntax
}
def build_search_url(self, known_parameters={}, **kargs):
""" Helper for generating a canonical search
url. 'known_parameters' is the list of query parameters you
inherit from your current query. You can then pass keyword
arguments to modify this query.
build_search_url(known_parameters, of="xm")
The generated URL is absolute.
"""
parameters = {}
parameters.update(known_parameters)
parameters.update(kargs)
# Now, we only have the arguments which have _not_ their default value
parameters = drop_default_urlargd(parameters, self.search_results_default_urlargd)
# Treat `as' argument specially:
if 'aas' in parameters:
parameters['as'] = parameters['aas']
del parameters['aas']
# Asking for a recid? Return a /CFG_SITE_RECORD/<recid> URL
if 'recid' in parameters:
target = "%s/%s/%s" % (CFG_SITE_URL, CFG_SITE_RECORD, parameters['recid'])
del parameters['recid']
target += make_canonical_urlargd(parameters, self.search_results_default_urlargd)
return target
return "%s/search%s" % (CFG_SITE_URL, make_canonical_urlargd(parameters, self.search_results_default_urlargd))
def build_search_interface_url(self, known_parameters={}, **kargs):
""" Helper for generating a canonical search interface URL."""
parameters = {}
parameters.update(known_parameters)
parameters.update(kargs)
c = parameters['c']
del parameters['c']
# Now, we only have the arguments which have _not_ their default value
parameters = drop_default_urlargd(parameters, self.search_results_default_urlargd)
# Treat `as' argument specially:
if 'aas' in parameters:
parameters['as'] = parameters['aas']
del parameters['aas']
if c and c != CFG_SITE_NAME:
base = CFG_SITE_URL + '/collection/' + quote(c)
else:
base = CFG_SITE_URL
return create_url(base, parameters)
def build_rss_url(self, known_parameters, **kargs):
"""Helper for generating a canonical RSS URL"""
parameters = {}
parameters.update(known_parameters)
parameters.update(kargs)
# Keep only interesting parameters
argd = wash_urlargd(parameters, self.rss_default_urlargd)
if argd:
# Handle 'c' differently since it is a list
c = argd.get('c', [])
del argd['c']
# Create query, and drop empty params
args = make_canonical_urlargd(argd, self.rss_default_urlargd)
if c != []:
# Add collections
c = [quote(coll) for coll in c]
if args == '':
args += '?'
else:
args += '&amp;'
args += 'c=' + '&amp;c='.join(c)
return CFG_SITE_URL + '/rss' + args
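The URL-builder methods above share one pattern: drop every parameter still at its declared default, then serialise the rest. A standalone sketch of that pattern in modern Python (the real helpers, `drop_default_urlargd` and `make_canonical_urlargd`, live in `invenio.urlutils`; the `(type, default)` pairs mirror `search_results_default_urlargd` above):

```python
# Standalone sketch (not the invenio.urlutils implementation) of the
# pattern used by build_search_url: drop parameters still at their
# declared default, then serialise whatever remains.
from urllib.parse import urlencode

def drop_defaults(params, defaults):
    """Keep only parameters whose value differs from the default.
    `defaults` maps a name to a (type, default_value) pair."""
    return {name: value for name, value in params.items()
            if name not in defaults or defaults[name][1] != value}

def build_search_url(params, defaults):
    """Build a canonical /search URL carrying only non-default params."""
    kept = drop_defaults(params, defaults)
    query = urlencode(sorted(kept.items()))
    return '/search' + ('?' + query if query else '')
```

Keeping only non-default parameters is what makes the generated URLs canonical: two queries that differ only in explicitly-passed defaults map to the same URL.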
def tmpl_record_page_header_content(self, req, recid, ln):
"""
Provide extra information in the header of /CFG_SITE_RECORD pages
Return (title, description, keywords), not escaped for HTML
"""
_ = gettext_set_language(ln)
title = get_fieldvalues(recid, "245__a") or \
get_fieldvalues(recid, "111__a")
if title:
title = title[0]
else:
title = _("Record") + ' #%d' % recid
keywords = ', '.join(get_fieldvalues(recid, "6531_a"))
description = ' '.join(get_fieldvalues(recid, "520__a"))
description += "\n"
description += '; '.join(get_fieldvalues(recid, "100__a") + get_fieldvalues(recid, "700__a"))
return (title, description, keywords)
def tmpl_exact_author_browse_help_link(self, p, p1, p2, p3, f, f1, f2, f3, rm, cc, ln, jrec, rg, aas, action, link_name):
"""
Creates the 'exact author' help link for browsing.
"""
_ = gettext_set_language(ln)
url = create_html_link(self.build_search_url(p=p,
p1=p1,
p2=p2,
p3=p3,
f=f,
f1=f1,
f2=f2,
f3=f3,
rm=rm,
cc=cc,
ln=ln,
jrec=jrec,
rg=rg,
aas=aas,
action=action),
{}, _(link_name), {'class': 'nearestterms'})
return "Did you mean to browse in %s index?" % url
def tmpl_navtrail_links(self, aas, ln, dads):
"""
Creates the navigation bar at the top of each search page (*Home > Root collection > subcollection > ...*)
Parameters:
- 'aas' *int* - Should we display an advanced search box?
- 'ln' *string* - The language to display
- 'dads' *list* - A list of parent links, each one a (url, name) tuple
"""
out = []
for url, name in dads:
args = {'c': url, 'as': aas, 'ln': ln}
out.append(create_html_link(self.build_search_interface_url(**args), {}, cgi.escape(name), {'class': 'navtrail'}))
return ' &gt; '.join(out)
def tmpl_webcoll_body(self, ln, collection, te_portalbox,
searchfor, np_portalbox, narrowsearch,
focuson, instantbrowse, ne_portalbox, show_body=True):
""" Creates the body of the main search page.
Parameters:
- 'ln' *string* - language of the page being generated
- 'collection' - collection id of the page being generated
- 'te_portalbox' *string* - The HTML code for the portalbox on top of search
- 'searchfor' *string* - The HTML code for the search for box
- 'np_portalbox' *string* - The HTML code for the portalbox on bottom of search
- 'narrowsearch' *string* - The HTML code for the search categories (left bottom of page)
- 'focuson' *string* - The HTML code for the "focuson" categories (right bottom of page)
- 'ne_portalbox' *string* - The HTML code for the bottom of the page
"""
if not narrowsearch:
narrowsearch = instantbrowse
body = '''
<form name="search" action="%(siteurl)s/search" method="get">
%(searchfor)s
%(np_portalbox)s''' % {
'siteurl' : CFG_SITE_URL,
'searchfor' : searchfor,
'np_portalbox' : np_portalbox
}
if show_body:
body += '''
<table cellspacing="0" cellpadding="0" border="0" class="narrowandfocusonsearchbox">
<tr>
<td valign="top">%(narrowsearch)s</td>
''' % { 'narrowsearch' : narrowsearch }
if focuson:
body += """<td valign="top">""" + focuson + """</td>"""
body += """</tr></table>"""
elif focuson:
body += focuson
body += """%(ne_portalbox)s
</form>""" % {'ne_portalbox' : ne_portalbox}
return body
def tmpl_portalbox(self, title, body):
"""Creates portalboxes based on the parameters
Parameters:
- 'title' *string* - The title of the box
- 'body' *string* - The HTML code for the body of the box
"""
out = """<div class="portalbox">
<div class="portalboxheader">%(title)s</div>
<div class="portalboxbody">%(body)s</div>
</div>""" % {'title' : cgi.escape(title), 'body' : body}
return out
def tmpl_searchfor_light(self, ln, collection_id, collection_name, record_count,
example_search_queries): # EXPERIMENTAL
"""Produces light *Search for* box for the current collection.
Parameters:
- 'ln' - *str* The language to display
- 'collection_id' - *str* The collection id
- 'collection_name' - *str* The collection name in current language
- 'record_count' - *int* Number of records in this collection
- 'example_search_queries' - *list* List of search queries given as example for this collection
"""
# load the right message language
_ = gettext_set_language(ln)
out = '''
<!--create_searchfor_light()-->
'''
argd = drop_default_urlargd({'ln': ln, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION},
self.search_results_default_urlargd)
# Only add non-default hidden values
for field, value in argd.items():
out += self.tmpl_input_hidden(field, value)
header = _("Search %s records for:") % \
self.tmpl_nbrecs_info(record_count, "", "")
asearchurl = self.build_search_interface_url(c=collection_id,
aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES),
ln=ln)
# Build example of queries for this collection
example_search_queries_links = [create_html_link(self.build_search_url(p=example_query,
ln=ln,
aas= -1,
c=collection_id),
{},
cgi.escape(example_query),
{'class': 'examplequery'}) \
for example_query in example_search_queries]
example_query_html = ''
if len(example_search_queries) > 0:
example_query_link = example_search_queries_links[0]
# offers more examples if possible
more = ''
if len(example_search_queries_links) > 1:
more = '''
<script type="text/javascript">
function toggle_more_example_queries_visibility(){
var more = document.getElementById('more_example_queries');
var link = document.getElementById('link_example_queries');
var sep = document.getElementById('more_example_sep');
if (more.style.display=='none'){
more.style.display = '';
link.innerHTML = "%(show_less)s"
link.style.color = "rgb(204,0,0)";
sep.style.display = 'none';
} else {
more.style.display = 'none';
link.innerHTML = "%(show_more)s"
link.style.color = "rgb(0,0,204)";
sep.style.display = '';
}
return false;
}
</script>
<span id="more_example_queries" style="display:none;text-align:right"><br/>%(more_example_queries)s<br/></span>
<a id="link_example_queries" href="#" onclick="toggle_more_example_queries_visibility()" style="display:none"></a>
<script type="text/javascript">
var link = document.getElementById('link_example_queries');
var sep = document.getElementById('more_example_sep');
link.style.display = '';
link.innerHTML = "%(show_more)s";
sep.style.display = '';
</script>
''' % {'more_example_queries': '<br/>'.join(example_search_queries_links[1:]),
'show_less':_("less"),
'show_more':_("more")}
example_query_html += '''<p style="text-align:right;margin:0px;">
%(example)s<span id="more_example_sep" style="display:none;">&nbsp;&nbsp;::&nbsp;</span>%(more)s
</p>
''' % {'example': _("Example: %(x_sample_search_query)s") % \
{'x_sample_search_query': example_query_link},
'more': more}
# display options to search in current collection or everywhere
search_in = ''
if collection_name != CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME):
search_in += '''
<input type="radio" name="cc" value="%(collection_id)s" id="searchCollection" checked="checked"/>
<label for="searchCollection">%(search_in_collection_name)s</label>
<input type="radio" name="cc" value="%(root_collection_name)s" id="searchEverywhere" />
<label for="searchEverywhere">%(search_everywhere)s</label>
''' % {'search_in_collection_name': _("Search in %(x_collection_name)s") % \
{'x_collection_name': collection_name},
'collection_id': collection_id,
'root_collection_name': CFG_SITE_NAME,
'search_everywhere': _("Search everywhere")}
# print commentary start:
out += '''
<table class="searchbox lightsearch">
<tbody>
<tr valign="baseline">
<td class="searchboxbody" align="right"><input type="text" name="p" size="%(sizepattern)d" value="" class="lightsearchfield"/><br/>
<small><small>%(example_query_html)s</small></small>
</td>
<td class="searchboxbody" align="left">
<input class="formbutton" type="submit" name="action_search" value="%(msg_search)s" />
</td>
<td class="searchboxbody" align="left" rowspan="2" valign="top">
<small><small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(msg_search_tips)s</a><br/>
%(asearch)s
</small></small>
</td>
</tr></table>
<!--<tr valign="baseline">
<td class="searchboxbody" colspan="2" align="left">
<small>
--><small>%(search_in)s</small><!--
</small>
</td>
</tr>
</tbody>
</table>-->
<!--/create_searchfor_light()-->
''' % {'ln' : ln,
'sizepattern' : CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'siteurl' : CFG_SITE_URL,
'asearch' : create_html_link(asearchurl, {}, _('Advanced Search')),
'header' : header,
'msg_search' : _('Search'),
'msg_browse' : _('Browse'),
'msg_search_tips' : _('Search Tips'),
'search_in': search_in,
'example_query_html': example_query_html}
return out
def tmpl_searchfor_simple(self, ln, collection_id, collection_name, record_count, middle_option):
"""Produces simple *Search for* box for the current collection.
Parameters:
- 'ln' - *str* The language to display
- 'collection_id' - *str* The collection id
- 'collection_name' - *str* The collection name in current language
- 'record_count' - *str* Number of records in this collection
- 'middle_option' *string* - HTML code for the options (any field, specific fields ...)
"""
# load the right message language
_ = gettext_set_language(ln)
out = '''
<!--create_searchfor_simple()-->
'''
argd = drop_default_urlargd({'ln': ln, 'cc': collection_id, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION},
self.search_results_default_urlargd)
# Only add non-default hidden values
for field, value in argd.items():
out += self.tmpl_input_hidden(field, value)
header = _("Search %s records for:") % \
self.tmpl_nbrecs_info(record_count, "", "")
asearchurl = self.build_search_interface_url(c=collection_id,
aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES),
ln=ln)
# print commentary start:
out += '''
<table class="searchbox simplesearch">
<thead>
<tr align="left">
<th colspan="3" class="searchboxheader">%(header)s</th>
</tr>
</thead>
<tbody>
<tr valign="baseline">
<td class="searchboxbody" align="left"><input type="text" name="p" size="%(sizepattern)d" value="" class="simplesearchfield"/></td>
<td class="searchboxbody" align="left">%(middle_option)s</td>
<td class="searchboxbody" align="left">
<input class="formbutton" type="submit" name="action_search" value="%(msg_search)s" />
<input class="formbutton" type="submit" name="action_browse" value="%(msg_browse)s" /></td>
</tr>
<tr valign="baseline">
<td class="searchboxbody" colspan="3" align="right">
<small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(msg_search_tips)s</a> ::
%(asearch)s
</small>
</td>
</tr>
</tbody>
</table>
<!--/create_searchfor_simple()-->
''' % {'ln' : ln,
'sizepattern' : CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'siteurl' : CFG_SITE_URL,
'asearch' : create_html_link(asearchurl, {}, _('Advanced Search')),
'header' : header,
'middle_option' : middle_option,
'msg_search' : _('Search'),
'msg_browse' : _('Browse'),
'msg_search_tips' : _('Search Tips')}
return out
def tmpl_searchfor_advanced(self,
ln, # current language
collection_id,
collection_name,
record_count,
middle_option_1, middle_option_2, middle_option_3,
searchoptions,
sortoptions,
rankoptions,
displayoptions,
formatoptions
):
"""
Produces advanced *Search for* box for the current collection.
Parameters:
- 'ln' *string* - The language to display
- 'middle_option_1' *string* - HTML code for the first row of options (any field, specific fields ...)
- 'middle_option_2' *string* - HTML code for the second row of options (any field, specific fields ...)
- 'middle_option_3' *string* - HTML code for the third row of options (any field, specific fields ...)
- 'searchoptions' *string* - HTML code for the search options
- 'sortoptions' *string* - HTML code for the sort options
- 'rankoptions' *string* - HTML code for the rank options
- 'displayoptions' *string* - HTML code for the display options
- 'formatoptions' *string* - HTML code for the format options
"""
# load the right message language
_ = gettext_set_language(ln)
out = '''
<!--create_searchfor_advanced()-->
'''
argd = drop_default_urlargd({'ln': ln, 'aas': 1, 'cc': collection_id, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION},
self.search_results_default_urlargd)
# Only add non-default hidden values
for field, value in argd.items():
out += self.tmpl_input_hidden(field, value)
header = _("Search %s records for") % \
self.tmpl_nbrecs_info(record_count, "", "")
header += ':'
ssearchurl = self.build_search_interface_url(c=collection_id, aas=min(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), ln=ln)
out += '''
<table class="searchbox advancedsearch">
<thead>
<tr>
<th class="searchboxheader" colspan="3">%(header)s</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody" style="white-space: nowrap;">
%(matchbox_m1)s<input type="text" name="p1" size="%(sizepattern)d" value="" class="advancedsearchfield"/>
</td>
<td class="searchboxbody" style="white-space: nowrap;">%(middle_option_1)s</td>
<td class="searchboxbody">%(andornot_op1)s</td>
</tr>
<tr valign="bottom">
<td class="searchboxbody" style="white-space: nowrap;">
%(matchbox_m2)s<input type="text" name="p2" size="%(sizepattern)d" value="" class="advancedsearchfield"/>
</td>
<td class="searchboxbody">%(middle_option_2)s</td>
<td class="searchboxbody">%(andornot_op2)s</td>
</tr>
<tr valign="bottom">
<td class="searchboxbody" style="white-space: nowrap;">
%(matchbox_m3)s<input type="text" name="p3" size="%(sizepattern)d" value="" class="advancedsearchfield"/>
</td>
<td class="searchboxbody">%(middle_option_3)s</td>
<td class="searchboxbody" style="white-space: nowrap;">
<input class="formbutton" type="submit" name="action_search" value="%(msg_search)s" />
<input class="formbutton" type="submit" name="action_browse" value="%(msg_browse)s" /></td>
</tr>
<tr valign="bottom">
<td colspan="3" class="searchboxbody" align="right">
<small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(msg_search_tips)s</a> ::
%(ssearch)s
</small>
</td>
</tr>
</tbody>
</table>
''' % {'ln' : ln,
'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'siteurl' : CFG_SITE_URL,
'ssearch' : create_html_link(ssearchurl, {}, _("Simple Search")),
'header' : header,
'matchbox_m1' : self.tmpl_matchtype_box('m1', ln=ln),
'middle_option_1' : middle_option_1,
'andornot_op1' : self.tmpl_andornot_box('op1', ln=ln),
'matchbox_m2' : self.tmpl_matchtype_box('m2', ln=ln),
'middle_option_2' : middle_option_2,
'andornot_op2' : self.tmpl_andornot_box('op2', ln=ln),
'matchbox_m3' : self.tmpl_matchtype_box('m3', ln=ln),
'middle_option_3' : middle_option_3,
'msg_search' : _("Search"),
'msg_browse' : _("Browse"),
'msg_search_tips' : _("Search Tips")}
if (searchoptions):
out += """<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(searchheader)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">%(searchoptions)s</td>
</tr>
</tbody>
</table>""" % {
'searchheader' : _("Search options:"),
'searchoptions' : searchoptions
}
out += """<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(added)s
</th>
<th class="searchboxheader">
%(until)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">%(added_or_modified)s %(date_added)s</td>
<td class="searchboxbody">%(date_until)s</td>
</tr>
</tbody>
</table>
<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(msg_sort)s
</th>
<th class="searchboxheader">
%(msg_display)s
</th>
<th class="searchboxheader">
%(msg_format)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">%(sortoptions)s %(rankoptions)s</td>
<td class="searchboxbody">%(displayoptions)s</td>
<td class="searchboxbody">%(formatoptions)s</td>
</tr>
</tbody>
</table>
<!--/create_searchfor_advanced()-->
""" % {
'added' : _("Added/modified since:"),
'until' : _("until:"),
'added_or_modified': self.tmpl_inputdatetype(ln=ln),
'date_added' : self.tmpl_inputdate("d1", ln=ln),
'date_until' : self.tmpl_inputdate("d2", ln=ln),
'msg_sort' : _("Sort by:"),
'msg_display' : _("Display results:"),
'msg_format' : _("Output format:"),
'sortoptions' : sortoptions,
'rankoptions' : rankoptions,
'displayoptions' : displayoptions,
'formatoptions' : formatoptions
}
return out
def tmpl_matchtype_box(self, name='m', value='', ln='en'):
"""Returns HTML code for the 'match type' selection box.
Parameters:
- 'name' *string* - The name of the produced select
- 'value' *string* - The selected value (if any value is already selected)
- 'ln' *string* - the language to display
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<select name="%(name)s">
<option value="a"%(sela)s>%(opta)s</option>
<option value="o"%(selo)s>%(opto)s</option>
<option value="e"%(sele)s>%(opte)s</option>
<option value="p"%(selp)s>%(optp)s</option>
<option value="r"%(selr)s>%(optr)s</option>
</select>
""" % {'name' : name,
'sela' : self.tmpl_is_selected('a', value),
'opta' : _("All of the words:"),
'selo' : self.tmpl_is_selected('o', value),
'opto' : _("Any of the words:"),
'sele' : self.tmpl_is_selected('e', value),
'opte' : _("Exact phrase:"),
'selp' : self.tmpl_is_selected('p', value),
'optp' : _("Partial phrase:"),
'selr' : self.tmpl_is_selected('r', value),
'optr' : _("Regular expression:")
}
return out
def tmpl_is_selected(self, var, fld):
"""
Checks if *var* and *fld* are equal, and if yes, returns ' selected="selected"'. Useful for select boxes.
Parameters:
- 'var' *string* - First value to compare
- 'fld' *string* - Second value to compare
"""
if var == fld:
return ' selected="selected"'
else:
return ""
def tmpl_andornot_box(self, name='op', value='', ln='en'):
"""
Returns HTML code for the AND/OR/NOT selection box.
Parameters:
- 'name' *string* - The name of the produced select
- 'value' *string* - The selected value (if any value is already selected)
- 'ln' *string* - the language to display
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<select name="%(name)s">
<option value="a"%(sela)s>%(opta)s</option>
<option value="o"%(selo)s>%(opto)s</option>
<option value="n"%(seln)s>%(optn)s</option>
</select>
""" % {'name' : name,
'sela' : self.tmpl_is_selected('a', value), 'opta' : _("AND"),
'selo' : self.tmpl_is_selected('o', value), 'opto' : _("OR"),
'seln' : self.tmpl_is_selected('n', value), 'optn' : _("AND NOT")
}
return out
def tmpl_inputdate(self, name, ln, sy=0, sm=0, sd=0):
"""
Produces *From Date*, *Until Date* kind of selection box. Suitable for search options.
Parameters:
- 'name' *string* - The base name of the produced selects
- 'ln' *string* - the language to display
"""
# load the right message language
_ = gettext_set_language(ln)
box = """
<select name="%(name)sd">
<option value=""%(sel)s>%(any)s</option>
""" % {
'name' : name,
'any' : _("any day"),
'sel' : self.tmpl_is_selected(sd, 0)
}
for day in range(1, 32):
box += """<option value="%02d"%s>%02d</option>""" % (day, self.tmpl_is_selected(sd, day), day)
box += """</select>"""
# month
box += """
<select name="%(name)sm">
<option value=""%(sel)s>%(any)s</option>
""" % {
'name' : name,
'any' : _("any month"),
'sel' : self.tmpl_is_selected(sm, 0)
}
# trailing space in May distinguishes short/long form of the month name
for mm, month in [(1, _("January")), (2, _("February")), (3, _("March")), (4, _("April")), \
(5, _("May ")), (6, _("June")), (7, _("July")), (8, _("August")), \
(9, _("September")), (10, _("October")), (11, _("November")), (12, _("December"))]:
box += """<option value="%02d"%s>%s</option>""" % (mm, self.tmpl_is_selected(sm, mm), month.strip())
box += """</select>"""
# year
box += """
<select name="%(name)sy">
<option value=""%(sel)s>%(any)s</option>
""" % {
'name' : name,
'any' : _("any year"),
'sel' : self.tmpl_is_selected(sy, 0)
}
this_year = int(time.strftime("%Y", time.localtime()))
for year in range(this_year - 20, this_year + 1):
box += """<option value="%d"%s>%d</option>""" % (year, self.tmpl_is_selected(sy, year), year)
box += """</select>"""
return box
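The year select above always spans the last 20 years up to the current one. A standalone sketch of that range computation (hypothetical name `year_options`; it returns value/selected pairs rather than HTML):

```python
import time

def year_options(selected=0, span=20):
    # Build (year, selected_flag) pairs for the last `span` years up to
    # the current year, mirroring the year <select> built above.
    this_year = int(time.strftime("%Y", time.localtime()))
    return [(year, year == selected)
            for year in range(this_year - span, this_year + 1)]
```
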
def tmpl_inputdatetype(self, dt='', ln=CFG_SITE_LANG):
"""
Produces input date type selection box to choose
added-or-modified date search option.
Parameters:
- 'dt' *string* - date type (c=created, m=modified)
- 'ln' *string* - the language to display
"""
# load the right message language
_ = gettext_set_language(ln)
box = """<select name="dt">
<option value="">%(added)s </option>
<option value="m"%(sel)s>%(modified)s </option>
</select>
""" % { 'added': _("Added since:"),
'modified': _("Modified since:"),
'sel': self.tmpl_is_selected(dt, 'm'),
}
return box
def tmpl_narrowsearch(self, aas, ln, type, father,
has_grandchildren, sons, display_grandsons,
grandsons):
"""
Creates a list of the collection descendants of type *type*.
If aas==1, then links to Advanced Search interfaces; otherwise Simple Search.
Suitable for 'Narrow search' and 'Focus on' boxes.
Parameters:
- 'aas' *bool* - Should we display an advanced search box?
- 'ln' *string* - The language to display
- 'type' *string* - The type of the produced box (virtual collections or normal collections)
- 'father' *collection* - The current collection
- 'has_grandchildren' *bool* - If the current collection has grand children
- 'sons' *list* - The list of the sub-collections (first level)
- 'display_grandsons' *bool* - If the grand children collections should be displayed (2 level deep display)
- 'grandsons' *list* - The list of sub-collections (second level)
"""
# load the right message language
_ = gettext_set_language(ln)
title = {'r': _("Narrow by collection:"),
'v': _("Focus on:")}[type]
if has_grandchildren:
style_prolog = "<strong>"
style_epilog = "</strong>"
else:
style_prolog = ""
style_epilog = ""
out = """<table class="%(narrowsearchbox)s">
<thead>
<tr>
<th colspan="2" align="left" class="%(narrowsearchbox)sheader">
%(title)s
</th>
</tr>
</thead>
<tbody>""" % {'title' : title,
'narrowsearchbox': {'r': 'narrowsearchbox',
'v': 'focusonsearchbox'}[type]}
# iterate through sons:
i = 0
for son in sons:
out += """<tr><td class="%(narrowsearchbox)sbody" valign="top">""" % \
{ 'narrowsearchbox': {'r': 'narrowsearchbox',
'v': 'focusonsearchbox'}[type]}
if type == 'r':
if son.restricted_p() and son.restricted_p() != father.restricted_p():
out += """<input type="checkbox" name="c" value="%(name)s" /></td>""" % {'name' : cgi.escape(son.name) }
# hosted collections are checked by default only when configured so
elif str(son.dbquery).startswith("hostedcollection:"):
external_collection_engine = get_external_collection_engine(str(son.name))
if external_collection_engine and external_collection_engine.selected_by_default:
out += """<input type="checkbox" name="c" value="%(name)s" checked="checked" /></td>""" % {'name' : cgi.escape(son.name) }
elif external_collection_engine and not external_collection_engine.selected_by_default:
out += """<input type="checkbox" name="c" value="%(name)s" /></td>""" % {'name' : cgi.escape(son.name) }
else:
# fallback: the external collection engine was not found,
# so render the hosted collection unchecked by default
out += """<input type="checkbox" name="c" value="%(name)s" /></td>""" % {'name' : cgi.escape(son.name) }
else:
out += """<input type="checkbox" name="c" value="%(name)s" checked="checked" /></td>""" % {'name' : cgi.escape(son.name) }
else:
out += '</td>'
out += """<td valign="top">%(link)s%(recs)s """ % {
'link': create_html_link(self.build_search_interface_url(c=son.name, ln=ln, aas=aas),
{}, style_prolog + cgi.escape(son.get_name(ln)) + style_epilog),
'recs' : self.tmpl_nbrecs_info(son.nbrecs, ln=ln)}
# the following prints the "external collection" arrow just after the name and
# number of records of the hosted collection
# 1) we might want to make the arrow work as an anchor to the hosted collection as well.
# That would probably require a new separate function under invenio.urlutils
# 2) we might want to place the arrow between the name and the number of records of the hosted collection
# That would require to edit/separate the above out += ...
if type == 'r':
if str(son.dbquery).startswith("hostedcollection:"):
out += """<img src="%(siteurl)s/img/external-icon-light-8x8.gif" border="0" alt="%(name)s"/>""" % \
{ 'siteurl' : CFG_SITE_URL, 'name' : cgi.escape(son.name), }
if son.restricted_p():
out += """ <small class="warning">[%(msg)s]</small> """ % { 'msg' : _("restricted") }
if display_grandsons and len(grandsons[i]):
# iterate through grandsons:
out += """<br />"""
for grandson in grandsons[i]:
out += """ <small>%(link)s%(nbrec)s</small> """ % {
'link': create_html_link(self.build_search_interface_url(c=grandson.name, ln=ln, aas=aas),
{},
cgi.escape(grandson.get_name(ln))),
'nbrec' : self.tmpl_nbrecs_info(grandson.nbrecs, ln=ln)}
# the following prints the "external collection" arrow just after the name and
# number of records of the hosted collection
# related comments have been made just above
if type == 'r':
if str(grandson.dbquery).startswith("hostedcollection:"):
out += """<img src="%(siteurl)s/img/external-icon-light-8x8.gif" border="0" alt="%(name)s"/>""" % \
{ 'siteurl' : CFG_SITE_URL, 'name' : cgi.escape(grandson.name), }
out += """</td></tr>"""
i += 1
out += "</tbody></table>"
return out
def tmpl_searchalso(self, ln, engines_list, collection_id):
_ = gettext_set_language(ln)
box_name = _("Search also:")
html = """<table cellspacing="0" cellpadding="0" border="0">
<tr><td valign="top"><table class="searchalsosearchbox">
<thead><tr><th colspan="2" align="left" class="searchalsosearchboxheader">%(box_name)s
</th></tr></thead><tbody>
""" % locals()
for engine in engines_list:
internal_name = engine.name
name = _(internal_name)
base_url = engine.base_url
if external_collection_get_state(engine, collection_id) == 3:
checked = ' checked="checked"'
else:
checked = ''
html += """<tr><td class="searchalsosearchboxbody" valign="top">
<input type="checkbox" name="ec" id="%(id)s" value="%(internal_name)s" %(checked)s /></td>
<td valign="top" class="searchalsosearchboxbody">
<div style="white-space: nowrap"><label for="%(id)s">%(name)s</label>
<a href="%(base_url)s">
<img src="%(siteurl)s/img/external-icon-light-8x8.gif" border="0" alt="%(name)s"/></a>
</div></td></tr>""" % \
{ 'checked': checked,
'base_url': base_url,
'internal_name': internal_name,
'name': cgi.escape(name),
'id': "extSearch" + nmtoken_from_string(name),
'siteurl': CFG_SITE_URL, }
html += """</tbody></table></td></tr></table>"""
return html
def tmpl_nbrecs_info(self, number, prolog=None, epilog=None, ln=CFG_SITE_LANG):
"""
Return information on the number of records.
Parameters:
- 'number' *int* - The number of records
- 'prolog' *string* (optional) - An HTML code to prefix the number (if **None**, will be
'<small class="nbdoccoll">(')
- 'epilog' *string* (optional) - An HTML code to append to the number (if **None**, will be
')</small>')
"""
if number is None:
number = 0
if prolog is None:
prolog = '''&nbsp;<small class="nbdoccoll">('''
if epilog is None:
epilog = ''')</small>'''
return prolog + self.tmpl_nice_number(number, ln) + epilog
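The prolog/epilog defaulting above can be sketched in isolation (hypothetical name `nbrecs_info`; this simplification uses plain `str` in place of the locale-aware `tmpl_nice_number` formatting):

```python
def nbrecs_info(number, prolog=None, epilog=None):
    # Wrap a record count in its default "(...)" markup unless the
    # caller supplies explicit prolog/epilog strings; an empty string
    # is a valid override, which is why the test is "is None".
    if number is None:
        number = 0
    if prolog is None:
        prolog = '&nbsp;<small class="nbdoccoll">('
    if epilog is None:
        epilog = ')</small>'
    return prolog + str(number) + epilog
```

Passing `""` for both wrappers yields the bare count, which is what `tmpl_searchfor_advanced` does when building its header.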
def tmpl_box_restricted_content(self, ln):
"""
Displays a box containing a *restricted content* message
Parameters:
- 'ln' *string* - The language to display
"""
# load the right message language
_ = gettext_set_language(ln)
return _("This collection is restricted. If you are authorized to access it, please click on the Search button.")
def tmpl_box_hosted_collection(self, ln):
"""
Displays a box containing a *hosted collection* message
Parameters:
- 'ln' *string* - The language to display
"""
# load the right message language
_ = gettext_set_language(ln)
return _("This is a hosted external collection. Please click on the Search button to see its content.")
def tmpl_box_no_records(self, ln):
"""
Displays a box containing a *no content* message
Parameters:
- 'ln' *string* - The language to display
"""
# load the right message language
_ = gettext_set_language(ln)
return _("This collection does not contain any document yet.")
def tmpl_instant_browse(self, aas, ln, recids, more_link=None, grid_layout=False):
"""
Formats a list of records (given in the recids list) from the database.
Parameters:
- 'aas' *int* - Advanced Search interface or not (0 or 1)
- 'ln' *string* - The language to display
- 'recids' *list* - the list of records from the database
- 'more_link' *string* - the "More..." link for the box; if not given, it is not displayed
- 'grid_layout' *bool* - whether to render the records in a grid instead of a table
"""
# load the right message language
_ = gettext_set_language(ln)
body = '''<table class="latestadditionsbox">'''
if grid_layout:
body += '<tr><td><div>'
for recid in recids:
if grid_layout:
body += '''
<abbr class="unapi-id" title="%(recid)s"></abbr>
%(body)s
''' % {
'recid': recid['id'],
'body': recid['body']}
else:
body += '''
<tr>
<td class="latestadditionsboxtimebody">%(date)s</td>
<td class="latestadditionsboxrecordbody">
<abbr class="unapi-id" title="%(recid)s"></abbr>
%(body)s
</td>
</tr>''' % {
'recid': recid['id'],
'date': recid['date'],
'body': recid['body']
}
if grid_layout:
body += '''<div style="clear:both"></div>'''
body += '''</div></td></tr>'''
body += "</table>"
if more_link:
body += '<div align="right"><small>' + \
create_html_link(more_link, {}, '[&gt;&gt; %s]' % _("more")) + \
'</small></div>'
return '''
<table class="narrowsearchbox">
<thead>
<tr>
<th class="narrowsearchboxheader">%(header)s</th>
</tr>
</thead>
<tbody>
<tr>
<td class="narrowsearchboxbody">%(body)s</td>
</tr>
</tbody>
</table>''' % {'header' : _("Latest additions:"),
'body' : body,
}
def tmpl_searchwithin_select(self, ln, fieldname, selected, values):
"""
Produces 'search within' selection box for the current collection.
Parameters:
- 'ln' *string* - The language to display
- 'fieldname' *string* - the name of the select box produced
- 'selected' *string* - which of the values is selected
- 'values' *list* - the list of values in the select
"""
out = '<select name="%(fieldname)s">' % {'fieldname': fieldname}
if values:
for pair in values:
out += """<option value="%(value)s"%(selected)s>%(text)s</option>""" % {
'value' : cgi.escape(pair['value']),
'selected' : self.tmpl_is_selected(pair['value'], selected),
'text' : cgi.escape(pair['text'])
}
out += """</select>"""
return out
def tmpl_select(self, fieldname, values, selected=None, css_class=''):
"""
Produces a generic select box
Parameters:
- 'css_class' *string* - optional, a css class to display this select with
- 'fieldname' *list* - the name of the select box produced
- 'selected' *string* - which of the values is selected
- 'values' *list* - the list of values in the select
"""
if css_class != '':
class_field = ' class="%s"' % css_class
else:
class_field = ''
out = '<select name="%(fieldname)s"%(class)s>' % {
'fieldname' : fieldname,
'class' : class_field
}
for pair in values:
if pair.get('selected', False) or pair['value'] == selected:
flag = ' selected="selected"'
else:
flag = ''
out += '<option value="%(value)s"%(selected)s>%(text)s</option>' % {
'value' : cgi.escape(str(pair['value'])),
'selected' : flag,
'text' : cgi.escape(pair['text'])
}
out += """</select>"""
return out
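A standalone sketch of the generic select rendering above (hypothetical name `render_select`; it uses Python 3's `html.escape` in place of the legacy `cgi.escape` and drops the optional CSS class handling):

```python
from html import escape

def render_select(fieldname, values, selected=None):
    # Render a generic <select>; an option is marked selected when it
    # carries a truthy 'selected' key or its value equals `selected`.
    out = '<select name="%s">' % fieldname
    for pair in values:
        if pair.get('selected', False) or pair['value'] == selected:
            flag = ' selected="selected"'
        else:
            flag = ''
        out += '<option value="%s"%s>%s</option>' % (
            escape(str(pair['value'])), flag, escape(pair['text']))
    out += '</select>'
    return out
```
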
def tmpl_record_links(self, recid, ln, sf='', so='d', sp='', rm=''):
"""
Displays the *More info* and *Find similar* links for a record
Parameters:
- 'ln' *string* - The language to display
- 'recid' *string* - the id of the displayed record
"""
# load the right message language
_ = gettext_set_language(ln)
out = '''<br /><span class="moreinfo">%(detailed)s - %(similar)s</span>''' % {
'detailed': create_html_link(self.build_search_url(recid=recid, ln=ln),
{},
_("Detailed record"), {'class': "moreinfo"}),
'similar': create_html_link(self.build_search_url(p="recid:%d" % recid, rm='wrd', ln=ln),
{},
_("Similar records"),
{'class': "moreinfo"})}
if CFG_BIBRANK_SHOW_CITATION_LINKS:
num_timescited = get_cited_by_count(recid)
if num_timescited:
out += '''<span class="moreinfo"> - %s </span>''' % \
create_html_link(self.build_search_url(p='refersto:recid:%d' % recid,
sf=sf,
so=so,
sp=sp,
rm=rm,
ln=ln),
{}, _("Cited by %i records") % num_timescited, {'class': "moreinfo"})
return out
def tmpl_record_body(self, titles, authors, dates, rns, abstracts, urls_u, urls_z, ln):
"""
Displays the "HTML basic" format of a record
Parameters:
- 'titles' *list* - the titles of the record
- 'authors' *list* - the authors (as strings)
- 'dates' *list* - the dates of publication
- 'rns' *list* - the report numbers of the record (rendered as quicknotes)
- 'abstracts' *list* - the abstracts for the record
- 'urls_u' *list* - URLs to the original versions of the record
- 'urls_z' *list* - not used
- 'ln' *string* - the language to display
"""
out = ""
for title in titles:
out += "<strong>%(title)s</strong> " % {
'title' : cgi.escape(title)
}
if authors:
out += " / "
for author in authors[:CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD]:
out += '%s ' % \
create_html_link(self.build_search_url(p=author, f='author', ln=ln),
{}, cgi.escape(author))
if len(authors) > CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD:
out += "<em>et al</em>"
for date in dates:
out += " %s." % cgi.escape(date)
for rn in rns:
out += """ <small class="quicknote">[%(rn)s]</small>""" % {'rn' : cgi.escape(rn)}
for abstract in abstracts:
out += "<br /><small>%(abstract)s [...]</small>" % {'abstract' : cgi.escape(abstract[:1 + string.find(abstract, '.')]) }
for idx in range(0, len(urls_u)):
out += """<br /><small class="note"><a class="note" href="%(url)s">%(name)s</a></small>""" % {
'url' : urls_u[idx],
'name' : urls_u[idx]
}
return out
def tmpl_search_in_bibwords(self, p, f, ln, nearest_box):
"""
Displays the *Words like current ones* links for a search
Parameters:
- 'p' *string* - Current search words
- 'f' *string* - the fields in which the search was done
- 'nearest_box' *string* - the HTML code for the "nearest_terms" box - most probably from a create_nearest_terms_box call
"""
# load the right message language
_ = gettext_set_language(ln)
out = '<p>'
if f:
out += _("Words nearest to %(x_word)s inside %(x_field)s in any collection are:") % {'x_word': '<em>' + cgi.escape(p) + '</em>',
'x_field': '<em>' + cgi.escape(f) + '</em>'}
else:
out += _("Words nearest to %(x_word)s in any collection are:") % {'x_word': '<em>' + cgi.escape(p) + '</em>'}
out += '<br />' + nearest_box + '</p>'
return out
def tmpl_nearest_term_box(self, p, ln, f, terminfo, intro):
"""
Displays the *Nearest search terms* box
Parameters:
- 'p' *string* - Current search words
- 'f' *string* - the field in which the search was done (if any)
- 'ln' *string* - The language to display
- 'terminfo': tuple (term, hits, argd) for each near term
- 'intro' *string* - the intro HTML to prefix the box with
"""
out = '''<table class="nearesttermsbox" cellpadding="0" cellspacing="0" border="0">'''
for term, hits, argd in terminfo:
if hits:
hitsinfo = str(hits)
else:
hitsinfo = '-'
argd['f'] = f
argd['p'] = term
term = cgi.escape(term)
# FIXME: this is a hack to get correct links to the nearest terms
from flask import has_request_context, request
if has_request_context() and request.values.get('of', '') != argd.get('of', ''):
if 'of' in request.values:
argd['of'] = request.values.get('of')
else:
del argd['of']
if term == p: # print search word for orientation:
nearesttermsboxbody_class = "nearesttermsboxbodyselected"
if hits > 0:
term = create_html_link(self.build_search_url(argd), {},
term, {'class': "nearesttermsselected"})
else:
nearesttermsboxbody_class = "nearesttermsboxbody"
term = create_html_link(self.build_search_url(argd), {},
term, {'class': "nearestterms"})
out += '''\
<tr>
<td class="%(nearesttermsboxbody_class)s" align="right">%(hits)s</td>
<td class="%(nearesttermsboxbody_class)s" width="15">&nbsp;</td>
<td class="%(nearesttermsboxbody_class)s" align="left">%(term)s</td>
</tr>
''' % {'hits': hitsinfo,
'nearesttermsboxbody_class': nearesttermsboxbody_class,
'term': term}
out += "</table>"
return intro + "<blockquote>" + out + "</blockquote>"
def tmpl_browse_pattern(self, f, fn, ln, browsed_phrases_in_colls, colls, rg):
"""
Displays the *Nearest search terms* box
Parameters:
- 'f' *string* - field (*not* i18nized)
- 'fn' *string* - field name (i18nized)
- 'ln' *string* - The language to display
- 'browsed_phrases_in_colls' *array* - the phrases to display
- 'colls' *array* - the list of collection parameters of the search (c's)
- 'rg' *int* - the number of records
"""
# load the right message language
_ = gettext_set_language(ln)
out = """<table class="searchresultsbox">
<thead>
<tr>
<th class="searchresultsboxheader" style="text-align: right;" width="15">
%(hits)s
</th>
<th class="searchresultsboxheader" width="15">
&nbsp;
</th>
<th class="searchresultsboxheader" style="text-align: left;">
%(fn)s
</th>
</tr>
</thead>
<tbody>""" % {
'hits' : _("Hits"),
'fn' : cgi.escape(fn)
}
if len(browsed_phrases_in_colls) == 1:
# one hit only found:
phrase, nbhits = browsed_phrases_in_colls[0][0], browsed_phrases_in_colls[0][1]
query = {'c': colls,
'ln': ln,
'p': '"%s"' % phrase.replace('"', '\\"'),
'f': f,
'rg' : rg}
out += """<tr>
<td class="searchresultsboxbody" style="text-align: right;">
%(nbhits)s
</td>
<td class="searchresultsboxbody" width="15">
&nbsp;
</td>
<td class="searchresultsboxbody" style="text-align: left;">
%(link)s
</td>
</tr>""" % {'nbhits': nbhits,
'link': create_html_link(self.build_search_url(query),
{}, cgi.escape(phrase))}
elif len(browsed_phrases_in_colls) > 1:
# first display all hits except the last one:
for phrase, nbhits in browsed_phrases_in_colls[:-1]:
query = {'c': colls,
'ln': ln,
'p': '"%s"' % phrase.replace('"', '\\"'),
'f': f,
'rg' : rg}
out += """<tr>
<td class="searchresultsboxbody" style="text-align: right;">
%(nbhits)s
</td>
<td class="searchresultsboxbody" width="15">
&nbsp;
</td>
<td class="searchresultsboxbody" style="text-align: left;">
%(link)s
</td>
</tr>""" % {'nbhits' : nbhits,
'link': create_html_link(self.build_search_url(query),
{},
cgi.escape(phrase))}
# now display the first hit as "previous term":
phrase, nbhits = browsed_phrases_in_colls[0]
query_previous = {'c': colls,
'ln': ln,
'p': '"%s"' % phrase.replace('"', '\\"'),
'f': f,
'rg' : rg}
# now display last hit as "next term":
phrase, nbhits = browsed_phrases_in_colls[-1]
query_next = {'c': colls,
'ln': ln,
'p': '"%s"' % phrase.replace('"', '\\"'),
'f': f,
'rg' : rg}
out += """<tr><td colspan="2" class="normal">
&nbsp;
</td>
<td class="normal">
%(link_previous)s
<img src="%(siteurl)s/img/sp.gif" alt="" border="0" />
<img src="%(siteurl)s/img/sn.gif" alt="" border="0" />
%(link_next)s
</td>
</tr>""" % {'link_previous': create_html_link(self.build_search_url(query_previous, action='browse'), {}, _("Previous")),
'link_next': create_html_link(self.build_search_url(query_next, action='browse'),
{}, _("next")),
'siteurl' : CFG_SITE_URL}
out += """</tbody>
</table>"""
return out
def tmpl_search_box(self, ln, aas, cc, cc_intl, ot, sp,
action, fieldslist, f1, f2, f3, m1, m2, m3,
p1, p2, p3, op1, op2, rm, p, f, coll_selects,
d1y, d2y, d1m, d2m, d1d, d2d, dt, sort_fields,
sf, so, ranks, sc, rg, formats, of, pl, jrec, ec,
show_colls=True, show_title=True):
"""
Displays the search box (simple, advanced, or light interface).
Parameters:
- 'ln' *string* - The language to display
- 'aas' *int* - which search interface to display: -1 to 1, from simpler to more advanced
- 'cc_intl' *string* - the i18nized current collection name, used for display
- 'cc' *string* - the internal current collection name
- 'ot', 'sp' *string* - hidden values
- 'action' *string* - the action demanded by the user
- 'fieldslist' *list* - the list of all fields available, for use in select within boxes in advanced search
- 'p, f, f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2, rm' *strings* - the search parameters
- 'coll_selects' *array* - a list of lists, each containing the collections selects to display
- 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates
- 'dt' *string* - the dates' types (creation dates, modification dates)
- 'sort_fields' *array* - the select information for the sort fields
- 'sf' *string* - the currently selected sort field
- 'so' *string* - the currently selected sort order ("a" or "d")
- 'ranks' *array* - ranking methods
- 'rm' *string* - selected ranking method
- 'sc' *string* - split by collection or not
- 'rg' *string* - selected results/page
- 'formats' *array* - available output formats
- 'of' *string* - the selected output format
- 'pl' *string* - the 'limit to' search pattern
- 'show_colls' *bool* - whether to propose the collection selection box
- 'show_title' *bool* - whether to show cc_intl in the page title
"""
# load the right message language
_ = gettext_set_language(ln)
# These are hidden fields the user does not manipulate
# directly
if aas == -1:
argd = drop_default_urlargd({
'ln': ln, 'aas': aas,
'ot': ot, 'sp': sp, 'ec': ec,
}, self.search_results_default_urlargd)
else:
argd = drop_default_urlargd({
'cc': cc, 'ln': ln, 'aas': aas,
'ot': ot, 'sp': sp, 'ec': ec,
}, self.search_results_default_urlargd)
out = ""
if show_title:
# display cc name if asked for
out += '''
<h1 class="headline">%(ccname)s</h1>''' % {'ccname' : cgi.escape(cc_intl), }
out += '''
<form name="search" action="%(siteurl)s/search" method="get">
''' % {'siteurl' : CFG_SITE_URL}
# Only add non-default hidden values
for field, value in argd.items():
out += self.tmpl_input_hidden(field, value)
leadingtext = _("Search")
if action == 'browse':
leadingtext = _("Browse")
if aas == 1:
# print Advanced Search form:
# define search box elements:
out += '''
<table class="searchbox advancedsearch">
<thead>
<tr>
<th colspan="3" class="searchboxheader">
%(leading)s:
</th>
</tr>
</thead>
<tbody>
<tr valign="top" style="white-space:nowrap;">
<td class="searchboxbody">%(matchbox1)s
<input type="text" name="p1" size="%(sizepattern)d" value="%(p1)s" class="advancedsearchfield"/>
</td>
<td class="searchboxbody">%(searchwithin1)s</td>
<td class="searchboxbody">%(andornot1)s</td>
</tr>
<tr valign="top">
<td class="searchboxbody">%(matchbox2)s
<input type="text" name="p2" size="%(sizepattern)d" value="%(p2)s" class="advancedsearchfield"/>
</td>
<td class="searchboxbody">%(searchwithin2)s</td>
<td class="searchboxbody">%(andornot2)s</td>
</tr>
<tr valign="top">
<td class="searchboxbody">%(matchbox3)s
<input type="text" name="p3" size="%(sizepattern)d" value="%(p3)s" class="advancedsearchfield"/>
</td>
<td class="searchboxbody">%(searchwithin3)s</td>
<td class="searchboxbody" style="white-space:nowrap;">
<input class="formbutton" type="submit" name="action_search" value="%(search)s" />
<input class="formbutton" type="submit" name="action_browse" value="%(browse)s" />&nbsp;
</td>
</tr>
<tr valign="bottom">
<td colspan="3" align="right" class="searchboxbody">
<small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(search_tips)s</a> ::
%(simple_search)s
</small>
</td>
</tr>
</tbody>
</table>
''' % {
'simple_search': create_html_link(self.build_search_url(p=p1, f=f1, rm=rm, cc=cc, ln=ln, jrec=jrec, rg=rg),
{}, _("Simple Search")),
'leading' : leadingtext,
'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH,
'matchbox1' : self.tmpl_matchtype_box('m1', m1, ln=ln),
'p1' : cgi.escape(p1, 1),
'searchwithin1' : self.tmpl_searchwithin_select(
ln=ln,
fieldname='f1',
selected=f1,
values=self._add_mark_to_field(value=f1, fields=fieldslist, ln=ln)
),
'andornot1' : self.tmpl_andornot_box(
name='op1',
value=op1,
ln=ln
),
'matchbox2' : self.tmpl_matchtype_box('m2', m2, ln=ln),
'p2' : cgi.escape(p2, 1),
'searchwithin2' : self.tmpl_searchwithin_select(
ln=ln,
fieldname='f2',
selected=f2,
values=self._add_mark_to_field(value=f2, fields=fieldslist, ln=ln)
),
'andornot2' : self.tmpl_andornot_box(
name='op2',
value=op2,
ln=ln
),
'matchbox3' : self.tmpl_matchtype_box('m3', m3, ln=ln),
'p3' : cgi.escape(p3, 1),
'searchwithin3' : self.tmpl_searchwithin_select(
ln=ln,
fieldname='f3',
selected=f3,
values=self._add_mark_to_field(value=f3, fields=fieldslist, ln=ln)
),
'search' : _("Search"),
'browse' : _("Browse"),
'siteurl' : CFG_SITE_URL,
'ln' : ln,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'search_tips': _("Search Tips")
}
elif aas == 0:
# print Simple Search form:
out += '''
<table class="searchbox simplesearch">
<thead>
<tr>
<th colspan="3" class="searchboxheader">
%(leading)s:
</th>
</tr>
</thead>
<tbody>
<tr valign="top">
<td class="searchboxbody"><input type="text" name="p" size="%(sizepattern)d" value="%(p)s" class="simplesearchfield"/></td>
<td class="searchboxbody">%(searchwithin)s</td>
<td class="searchboxbody">
<input class="formbutton" type="submit" name="action_search" value="%(search)s" />
<input class="formbutton" type="submit" name="action_browse" value="%(browse)s" />&nbsp;
</td>
</tr>
<tr valign="bottom">
<td colspan="3" align="right" class="searchboxbody">
<small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(search_tips)s</a> ::
%(advanced_search)s
</small>
</td>
</tr>
</tbody>
</table>
''' % {
'advanced_search': create_html_link(self.build_search_url(p1=p,
f1=f,
rm=rm,
aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES),
cc=cc,
jrec=jrec,
ln=ln,
rg=rg),
{}, _("Advanced Search")),
'leading' : leadingtext,
'sizepattern' : CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH,
'p' : cgi.escape(p, 1),
'searchwithin' : self.tmpl_searchwithin_select(
ln=ln,
fieldname='f',
selected=f,
values=self._add_mark_to_field(value=f, fields=fieldslist, ln=ln)
),
'search' : _("Search"),
'browse' : _("Browse"),
'siteurl' : CFG_SITE_URL,
'ln' : ln,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'search_tips': _("Search Tips")
}
else:
# EXPERIMENTAL
# print light search form:
search_in = ''
if cc_intl != CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME):
search_in = '''
<input type="radio" name="cc" value="%(collection_id)s" id="searchCollection" checked="checked"/>
<label for="searchCollection">%(search_in_collection_name)s</label>
<input type="radio" name="cc" value="%(root_collection_name)s" id="searchEverywhere" />
<label for="searchEverywhere">%(search_everywhere)s</label>
''' % {'search_in_collection_name': _("Search in %(x_collection_name)s") % \
{'x_collection_name': cgi.escape(cc_intl)},
'collection_id': cc,
'root_collection_name': CFG_SITE_NAME,
'search_everywhere': _("Search everywhere")}
out += '''
<table class="searchbox lightsearch">
<tr valign="top">
<td class="searchboxbody"><input type="text" name="p" size="%(sizepattern)d" value="%(p)s" class="lightsearchfield"/></td>
<td class="searchboxbody">
<input class="formbutton" type="submit" name="action_search" value="%(search)s" />
</td>
<td class="searchboxbody" align="left" rowspan="2" valign="top">
<small><small>
<a href="%(siteurl)s/help/search-tips%(langlink)s">%(search_tips)s</a><br/>
%(advanced_search)s
</small></small></td>
</tr>
</table>
<small>%(search_in)s</small>
''' % {
'sizepattern' : CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH,
'advanced_search': create_html_link(self.build_search_url(p1=p,
f1=f,
rm=rm,
aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES),
cc=cc,
jrec=jrec,
ln=ln,
rg=rg),
{}, _("Advanced Search")),
'leading' : leadingtext,
'p' : cgi.escape(p, 1),
'searchwithin' : self.tmpl_searchwithin_select(
ln=ln,
fieldname='f',
selected=f,
values=self._add_mark_to_field(value=f, fields=fieldslist, ln=ln)
),
'search' : _("Search"),
'browse' : _("Browse"),
'siteurl' : CFG_SITE_URL,
'ln' : ln,
'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '',
'search_tips': _("Search Tips"),
'search_in': search_in
}
## secondly, print Collection(s) box:
if show_colls and aas > -1:
# display collections only if there is more than one
selects = ''
for sel in coll_selects:
selects += self.tmpl_select(fieldname='c', values=sel)
out += """
<table class="searchbox">
<thead>
<tr>
<th colspan="3" class="searchboxheader">
%(leading)s %(msg_coll)s:
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td valign="top" class="searchboxbody">
%(colls)s
</td>
</tr>
</tbody>
</table>
""" % {
'leading' : leadingtext,
'msg_coll' : _("collections"),
'colls' : selects,
}
## thirdly, print search limits, if applicable:
if action != _("Browse") and pl:
out += """<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(limitto)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">
<input type="text" name="pl" size="%(sizepattern)d" value="%(pl)s" />
</td>
</tr>
</tbody>
</table>""" % {
'limitto' : _("Limit to:"),
'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH,
'pl' : cgi.escape(pl, 1),
}
## fourthly, print from/until date boxen, if applicable:
if action == _("Browse") or (d1y == 0 and d1m == 0 and d1d == 0 and d2y == 0 and d2m == 0 and d2d == 0):
pass # do not need it
else:
out += """<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(added)s
</th>
<th class="searchboxheader">
%(until)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">%(added_or_modified)s %(date1)s</td>
<td class="searchboxbody">%(date2)s</td>
</tr>
</tbody>
</table>""" % {
'added' : _("Added/modified since:"),
'until' : _("until:"),
'added_or_modified': self.tmpl_inputdatetype(dt, ln),
'date1' : self.tmpl_inputdate("d1", ln, d1y, d1m, d1d),
'date2' : self.tmpl_inputdate("d2", ln, d2y, d2m, d2d),
}
## fifthly, print Display results box, including sort/rank, formats, etc:
if action != _("Browse") and aas > -1:
rgs = []
for i in [10, 25, 50, 100, 250, 500]:
if i <= CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS:
rgs.append({ 'value' : i, 'text' : "%d %s" % (i, _("results"))})
# enrich sort fields list if we are sorting by some MARC tag:
sort_fields = self._add_mark_to_field(value=sf, fields=sort_fields, ln=ln)
# create sort by HTML box:
out += """<table class="searchbox">
<thead>
<tr>
<th class="searchboxheader">
%(sort_by)s
</th>
<th class="searchboxheader">
%(display_res)s
</th>
<th class="searchboxheader">
%(out_format)s
</th>
</tr>
</thead>
<tbody>
<tr valign="bottom">
<td class="searchboxbody">
%(select_sf)s %(select_so)s %(select_rm)s
</td>
<td class="searchboxbody">
%(select_rg)s %(select_sc)s
</td>
<td class="searchboxbody">%(select_of)s</td>
</tr>
</tbody>
</table>""" % {
'sort_by' : _("Sort by:"),
'display_res' : _("Display results:"),
'out_format' : _("Output format:"),
'select_sf' : self.tmpl_select(fieldname='sf', values=sort_fields, selected=sf, css_class='address'),
'select_so' : self.tmpl_select(fieldname='so', values=[{
'value' : 'a',
'text' : _("asc.")
}, {
'value' : 'd',
'text' : _("desc.")
}], selected=so, css_class='address'),
'select_rm' : self.tmpl_select(fieldname='rm', values=ranks, selected=rm, css_class='address'),
'select_rg' : self.tmpl_select(fieldname='rg', values=rgs, selected=rg, css_class='address'),
'select_sc' : self.tmpl_select(fieldname='sc', values=[{
'value' : 0,
'text' : _("single list")
}, {
'value' : 1,
'text' : _("split by collection")
}], selected=sc, css_class='address'),
'select_of' : self.tmpl_select(
fieldname='of',
selected=of,
values=self._add_mark_to_field(value=of, fields=formats, chars=3, ln=ln),
css_class='address'),
}
## last but not least, print end of search box:
out += """</form>"""
return out
def tmpl_input_hidden(self, name, value):
"Produces the HTML code for a hidden field."
if isinstance(value, list):
list_input = [self.tmpl_input_hidden(name, val) for val in value]
return "\n".join(list_input)
# Treat `as', `aas' arguments specially:
if name == 'aas':
name = 'as'
return """<input type="hidden" name="%(name)s" value="%(value)s" />""" % {
'name' : cgi.escape(str(name), 1),
'value' : cgi.escape(str(value), 1),
}
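For illustration, the hidden-field helper above can be sketched as a standalone Python 3 function (a minimal sketch only: the name `input_hidden` is hypothetical, and `html.escape` stands in for the Python 2 `cgi.escape` used in this file):

```python
from html import escape  # Python 3 replacement for the cgi.escape used above

def input_hidden(name, value):
    # A list value expands into one hidden <input> element per item.
    if isinstance(value, list):
        return "\n".join(input_hidden(name, v) for v in value)
    return '<input type="hidden" name="%s" value="%s" />' % (
        escape(str(name), quote=True), escape(str(value), quote=True))

print(input_hidden('p', 'a&b'))
# <input type="hidden" name="p" value="a&amp;b" />
```

As in the original, both the name and the value are escaped with quoting enabled, since they land inside double-quoted HTML attributes.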
def _add_mark_to_field(self, value, fields, ln, chars=1):
"""Adds the current value as a MARC tag in the fields array
Useful for advanced search"""
# load the right message language
_ = gettext_set_language(ln)
out = fields
if value and str(value[0:chars]).isdigit():
out.append({'value' : value,
'text' : str(value) + " " + _("MARC tag")
})
return out
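The digit-prefix check above can be exercised in isolation. A minimal sketch, assuming a fixed English label in place of the gettext call (note that the original appends to the caller's list in place, whereas this sketch copies it first):

```python
def add_mark_to_field(value, fields, chars=1):
    # Append the raw value as a "MARC tag" choice when its first
    # `chars` characters are digits (e.g. "245__a" looks like a tag).
    out = list(fields)  # copy; the original mutates `fields` in place
    if value and str(value[:chars]).isdigit():
        out.append({'value': value, 'text': str(value) + " MARC tag"})
    return out

print(add_mark_to_field('245__a', [{'value': 'title', 'text': 'Title'}]))
```

A non-numeric value such as a regular field code passes through unchanged.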
def tmpl_search_pagestart(self, ln) :
"page start for search page. Will display after the page header"
return """<div class="pagebody"><div class="pagebodystripemiddle">"""
def tmpl_search_pageend(self, ln) :
"page end for search page. Will display just before the page footer"
return """</div></div>"""
def tmpl_print_search_info(self, ln, middle_only,
collection, collection_name, collection_id,
aas, sf, so, rm, rg, nb_found, of, ot, p, f, f1,
f2, f3, m1, m2, m3, op1, op2, p1, p2,
p3, d1y, d1m, d1d, d2y, d2m, d2d, dt,
all_fieldcodes, cpu_time, pl_in_url,
jrec, sc, sp):
"""Prints stripe with the information on 'collection' and 'nb_found' results and CPU time.
Also, prints navigation links (beg/next/prev/end) inside the results set.
If middle_only is set to 1, it will only print the middle box information (beg/next/prev/end/etc.) links.
This is suitable for displaying navigation links at the bottom of the search results page.
Parameters:
- 'ln' *string* - The language to display
- 'middle_only' *bool* - Only display parts of the interface
- 'collection' *string* - the collection name
- 'collection_name' *string* - the i18nized current collection name
- 'aas' *bool* - if we display the advanced search interface
- 'sf' *string* - the currently selected sort format
- 'so' *string* - the currently selected sort order ("a" or "d")
- 'rm' *string* - selected ranking method
- 'rg' *int* - selected results/page
- 'nb_found' *int* - number of results found
- 'of' *string* - the selected output format
- 'ot' *string* - hidden values
- 'p' *string* - Current search words
- 'f' *string* - the fields in which the search was done
- 'f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2' *strings* - the search parameters
- 'jrec' *int* - number of first record on this page
- 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates
- 'dt' *string* - the dates' type (creation date, modification date)
- 'all_fieldcodes' *array* - all the available fields
- 'cpu_time' *float* - the time of the query in seconds
"""
# load the right message language
_ = gettext_set_language(ln)
out = ""
# left table cells: print collection name
if not middle_only:
out += '''
<a name="%(collection_id)s"></a>
<form action="%(siteurl)s/search" method="get">
<table class="searchresultsbox"><tr><td class="searchresultsboxheader" align="left">
<strong><big>%(collection_link)s</big></strong></td>
''' % {
'collection_id': collection_id,
'siteurl' : CFG_SITE_URL,
'collection_link': create_html_link(self.build_search_interface_url(c=collection, aas=aas, ln=ln),
{}, cgi.escape(collection_name))
}
else:
out += """
<div style="clear:both"></div>
<form action="%(siteurl)s/search" method="get"><div align="center">
""" % { 'siteurl' : CFG_SITE_URL }
# middle table cell: print beg/next/prev/end arrows:
if not middle_only:
out += """<td class="searchresultsboxheader" align="center">
%(recs_found)s &nbsp;""" % {
'recs_found' : _("%s records found") % ('<strong>' + self.tmpl_nice_number(nb_found, ln) + '</strong>')
}
else:
out += "<small>"
if nb_found > rg:
out += "" + cgi.escape(collection_name) + " : " + _("%s records found") % ('<strong>' + self.tmpl_nice_number(nb_found, ln) + '</strong>') + " &nbsp; "
if nb_found > rg: # navig.arrows are needed, since we have many hits
query = {'p': p, 'f': f,
'cc': collection,
'sf': sf, 'so': so,
'sp': sp, 'rm': rm,
'of': of, 'ot': ot,
'aas': aas, 'ln': ln,
'p1': p1, 'p2': p2, 'p3': p3,
'f1': f1, 'f2': f2, 'f3': f3,
'm1': m1, 'm2': m2, 'm3': m3,
'op1': op1, 'op2': op2,
'sc': 0,
'd1y': d1y, 'd1m': d1m, 'd1d': d1d,
'd2y': d2y, 'd2m': d2m, 'd2d': d2d,
'dt': dt,
}
# @todo here
def img(gif, txt):
return '<img src="%(siteurl)s/img/%(gif)s.gif" alt="%(txt)s" border="0" />' % {
'txt': txt, 'gif': gif, 'siteurl': CFG_SITE_URL}
if jrec - rg > 1:
out += create_html_link(self.build_search_url(query, jrec=1, rg=rg),
{}, img('sb', _("begin")),
{'class': 'img'})
if jrec > 1:
out += create_html_link(self.build_search_url(query, jrec=max(jrec - rg, 1), rg=rg),
{}, img('sp', _("previous")),
{'class': 'img'})
if jrec + rg - 1 < nb_found:
out += "%d - %d" % (jrec, jrec + rg - 1)
else:
out += "%d - %d" % (jrec, nb_found)
if nb_found >= jrec + rg:
out += create_html_link(self.build_search_url(query,
jrec=jrec + rg,
rg=rg),
{}, img('sn', _("next")),
{'class':'img'})
if nb_found >= jrec + rg + rg:
out += create_html_link(self.build_search_url(query,
jrec=nb_found - rg + 1,
rg=rg),
{}, img('se', _("end")),
{'class': 'img'})
# still in the navigation part
cc = collection
sc = 0
for var in ['p', 'cc', 'f', 'sf', 'so', 'of', 'rg', 'aas', 'ln', 'p1', 'p2', 'p3', 'f1', 'f2', 'f3', 'm1', 'm2', 'm3', 'op1', 'op2', 'sc', 'd1y', 'd1m', 'd1d', 'd2y', 'd2m', 'd2d', 'dt']:
out += self.tmpl_input_hidden(name=var, value=vars()[var])
for var in ['ot', 'sp', 'rm']:
if vars()[var]:
out += self.tmpl_input_hidden(name=var, value=vars()[var])
if pl_in_url:
fieldargs = cgi.parse_qs(pl_in_url)
for fieldcode in all_fieldcodes:
# get_fieldcodes():
if fieldargs.has_key(fieldcode):
for val in fieldargs[fieldcode]:
out += self.tmpl_input_hidden(name=fieldcode, value=val)
out += """&nbsp; %(jump)s <input type="text" name="jrec" size="4" value="%(jrec)d" />""" % {
'jump' : _("jump to record:"),
'jrec' : jrec,
}
if not middle_only:
out += "</td>"
else:
out += "</small>"
# right table cell: cpu time info
if not middle_only:
if cpu_time > -1:
out += """<td class="searchresultsboxheader" align="right"><small>%(time)s</small>&nbsp;</td>""" % {
'time' : _("Search took %s seconds.") % ('%.2f' % cpu_time),
}
out += "</tr></table>"
else:
out += "</div>"
out += "</form>"
return out
def tmpl_print_hosted_search_info(self, ln, middle_only,
collection, collection_name, collection_id,
aas, sf, so, rm, rg, nb_found, of, ot, p, f, f1,
f2, f3, m1, m2, m3, op1, op2, p1, p2,
p3, d1y, d1m, d1d, d2y, d2m, d2d, dt,
all_fieldcodes, cpu_time, pl_in_url,
jrec, sc, sp):
"""Prints stripe with the information on 'collection' and 'nb_found' results and CPU time.
Also, prints navigation links (beg/next/prev/end) inside the results set.
If middle_only is set to 1, it will only print the middle box information (beg/next/prev/end/etc.) links.
This is suitable for displaying navigation links at the bottom of the search results page.
Parameters:
- 'ln' *string* - The language to display
- 'middle_only' *bool* - Only display parts of the interface
- 'collection' *string* - the collection name
- 'collection_name' *string* - the i18nized current collection name
- 'aas' *bool* - if we display the advanced search interface
- 'sf' *string* - the currently selected sort format
- 'so' *string* - the currently selected sort order ("a" or "d")
- 'rm' *string* - selected ranking method
- 'rg' *int* - selected results/page
- 'nb_found' *int* - number of results found
- 'of' *string* - the selected output format
- 'ot' *string* - hidden values
- 'p' *string* - Current search words
- 'f' *string* - the fields in which the search was done
- 'f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2' *strings* - the search parameters
- 'jrec' *int* - number of first record on this page
- 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates
- 'dt' *string* - the dates' type (creation date, modification date)
- 'all_fieldcodes' *array* - all the available fields
- 'cpu_time' *float* - the time of the query in seconds
"""
# load the right message language
_ = gettext_set_language(ln)
out = ""
# left table cells: print collection name
if not middle_only:
out += '''
<a name="%(collection_id)s"></a>
<form action="%(siteurl)s/search" method="get">
<table class="searchresultsbox"><tr><td class="searchresultsboxheader" align="left">
<strong><big>%(collection_link)s</big></strong></td>
''' % {
'collection_id': collection_id,
'siteurl' : CFG_SITE_URL,
'collection_link': create_html_link(self.build_search_interface_url(c=collection, aas=aas, ln=ln),
{}, cgi.escape(collection_name))
}
else:
out += """
<form action="%(siteurl)s/search" method="get"><div align="center">
""" % { 'siteurl' : CFG_SITE_URL }
# middle table cell: print beg/next/prev/end arrows:
if not middle_only:
# in case we have a hosted collection that timed out do not print its number of records, as it is yet unknown
if nb_found != -963:
out += """<td class="searchresultsboxheader" align="center">
%(recs_found)s &nbsp;""" % {
'recs_found' : _("%s records found") % ('<strong>' + self.tmpl_nice_number(nb_found, ln) + '</strong>')
}
else:
out += "<small>"
# we do not care about timed-out hosted collections here, because the number of records found will never be bigger
# than rg anyway, since it's negative
if nb_found > rg:
out += "" + cgi.escape(collection_name) + " : " + _("%s records found") % ('<strong>' + self.tmpl_nice_number(nb_found, ln) + '</strong>') + " &nbsp; "
if nb_found > rg: # navig.arrows are needed, since we have many hits
query = {'p': p, 'f': f,
'cc': collection,
'sf': sf, 'so': so,
'sp': sp, 'rm': rm,
'of': of, 'ot': ot,
'aas': aas, 'ln': ln,
'p1': p1, 'p2': p2, 'p3': p3,
'f1': f1, 'f2': f2, 'f3': f3,
'm1': m1, 'm2': m2, 'm3': m3,
'op1': op1, 'op2': op2,
'sc': 0,
'd1y': d1y, 'd1m': d1m, 'd1d': d1d,
'd2y': d2y, 'd2m': d2m, 'd2d': d2d,
'dt': dt,
}
# @todo here
def img(gif, txt):
return '<img src="%(siteurl)s/img/%(gif)s.gif" alt="%(txt)s" border="0" />' % {
'txt': txt, 'gif': gif, 'siteurl': CFG_SITE_URL}
if jrec - rg > 1:
out += create_html_link(self.build_search_url(query, jrec=1, rg=rg),
{}, img('sb', _("begin")),
{'class': 'img'})
if jrec > 1:
out += create_html_link(self.build_search_url(query, jrec=max(jrec - rg, 1), rg=rg),
{}, img('sp', _("previous")),
{'class': 'img'})
if jrec + rg - 1 < nb_found:
out += "%d - %d" % (jrec, jrec + rg - 1)
else:
out += "%d - %d" % (jrec, nb_found)
if nb_found >= jrec + rg:
out += create_html_link(self.build_search_url(query,
jrec=jrec + rg,
rg=rg),
{}, img('sn', _("next")),
{'class':'img'})
if nb_found >= jrec + rg + rg:
out += create_html_link(self.build_search_url(query,
jrec=nb_found - rg + 1,
rg=rg),
{}, img('se', _("end")),
{'class': 'img'})
# still in the navigation part
cc = collection
sc = 0
for var in ['p', 'cc', 'f', 'sf', 'so', 'of', 'rg', 'aas', 'ln', 'p1', 'p2', 'p3', 'f1', 'f2', 'f3', 'm1', 'm2', 'm3', 'op1', 'op2', 'sc', 'd1y', 'd1m', 'd1d', 'd2y', 'd2m', 'd2d', 'dt']:
out += self.tmpl_input_hidden(name=var, value=vars()[var])
for var in ['ot', 'sp', 'rm']:
if vars()[var]:
out += self.tmpl_input_hidden(name=var, value=vars()[var])
if pl_in_url:
fieldargs = cgi.parse_qs(pl_in_url)
for fieldcode in all_fieldcodes:
# get_fieldcodes():
if fieldargs.has_key(fieldcode):
for val in fieldargs[fieldcode]:
out += self.tmpl_input_hidden(name=fieldcode, value=val)
out += """&nbsp; %(jump)s <input type="text" name="jrec" size="4" value="%(jrec)d" />""" % {
'jump' : _("jump to record:"),
'jrec' : jrec,
}
if not middle_only:
out += "</td>"
else:
out += "</small>"
# right table cell: cpu time info
if not middle_only:
if cpu_time > -1:
out += """<td class="searchresultsboxheader" align="right"><small>%(time)s</small>&nbsp;</td>""" % {
'time' : _("Search took %s seconds.") % ('%.2f' % cpu_time),
}
out += "</tr></table>"
else:
out += "</div>"
out += "</form>"
return out
def tmpl_nice_number(self, number, ln=CFG_SITE_LANG, thousands_separator=',', max_ndigits_after_dot=None):
"""
Return nicely printed number NUMBER in language LN using
given THOUSANDS_SEPARATOR character.
If max_ndigits_after_dot is specified and the number is float, the
number is rounded by taking into consideration up to max_ndigits_after_dot
digits after the dot.
This version does not pay attention to locale. See
tmpl_nice_number_via_locale().
"""
if type(number) is float:
if max_ndigits_after_dot is not None:
number = round(number, max_ndigits_after_dot)
int_part, frac_part = str(number).split('.')
return '%s.%s' % (self.tmpl_nice_number(int(int_part), ln, thousands_separator), frac_part)
else:
chars_in = list(str(number))
number = len(chars_in)
chars_out = []
for i in range(0, number):
if i % 3 == 0 and i != 0:
chars_out.append(thousands_separator)
chars_out.append(chars_in[number - i - 1])
chars_out.reverse()
return ''.join(chars_out)
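The integer branch above walks the digits from the right and inserts the separator before every fourth, seventh, ... digit. A standalone sketch of that grouping loop (the free-function name `nice_number` is hypothetical):

```python
def nice_number(number, thousands_separator=','):
    # Scan digits right-to-left, emitting a separator every three
    # digits, then reverse to restore the original order.
    chars_in = list(str(number))
    n = len(chars_in)
    chars_out = []
    for i in range(n):
        if i % 3 == 0 and i != 0:
            chars_out.append(thousands_separator)
        chars_out.append(chars_in[n - i - 1])
    chars_out.reverse()
    return ''.join(chars_out)

print(nice_number(1234567))  # 1,234,567
```

Unlike tmpl_nice_number_via_locale() below, this form needs no locale definition on the system.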
def tmpl_nice_number_via_locale(self, number, ln=CFG_SITE_LANG):
"""
Return nicely printed number NUM in language LN using the locale.
See also version tmpl_nice_number().
"""
if number is None:
return None
# Temporarily switch the numeric locale to the requested one, and format the number
# In case the system has no locale definition, use the vanilla form
ol = locale.getlocale(locale.LC_NUMERIC)
try:
locale.setlocale(locale.LC_NUMERIC, self.tmpl_localemap.get(ln, self.tmpl_default_locale))
except locale.Error:
return str(number)
try:
number = locale.format('%d', number, True)
except TypeError:
return str(number)
locale.setlocale(locale.LC_NUMERIC, ol)
return number
def tmpl_record_format_htmlbrief_header(self, ln):
"""Returns the header of the search results list when output
is html brief. Note that this function is called for each collection
results when 'split by collection' is enabled.
See also: tmpl_record_format_htmlbrief_footer,
tmpl_record_format_htmlbrief_body
Parameters:
- 'ln' *string* - The language to display
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<form action="%(siteurl)s/yourbaskets/add" method="post">
<table>
""" % {
'siteurl' : CFG_SITE_URL,
}
return out
def tmpl_record_format_htmlbrief_footer(self, ln, display_add_to_basket=True):
"""Returns the footer of the search results list when output
is html brief. Note that this function is called for each collection
results when 'split by collection' is enabled.
See also: tmpl_record_format_htmlbrief_header(..),
tmpl_record_format_htmlbrief_body(..)
Parameters:
- 'ln' *string* - The language to display
- 'display_add_to_basket' *bool* - whether to display Add-to-basket button
"""
# load the right message language
_ = gettext_set_language(ln)
out = """</table>
<br />
<input type="hidden" name="colid" value="0" />
%(add_to_basket)s
</form>""" % {
'add_to_basket': display_add_to_basket and """<input class="formbutton" type="submit" name="action" value="%s" />""" % _("Add to basket") or "",
}
return out
def tmpl_record_format_htmlbrief_body(self, ln, recid,
row_number, relevance,
record, relevances_prologue,
relevances_epilogue,
display_add_to_basket=True):
"""Returns the html brief format of one record. Used in the
search results list for each record.
See also: tmpl_record_format_htmlbrief_header(..),
tmpl_record_format_htmlbrief_footer(..)
Parameters:
- 'ln' *string* - The language to display
- 'row_number' *int* - The position of this record in the list
- 'recid' *int* - The recID
- 'relevance' *string* - The relevance of the record
- 'record' *string* - The formatted record
- 'relevances_prologue' *string* - HTML code to prepend the relevance indicator
- 'relevances_epilogue' *string* - HTML code to append to the relevance indicator (used mostly for formatting)
"""
# load the right message language
_ = gettext_set_language(ln)
checkbox_for_baskets = """<input name="recid" type="checkbox" value="%(recid)s" />""" % \
{'recid': recid, }
if not display_add_to_basket:
checkbox_for_baskets = ''
out = """
<tr><td valign="top" align="right" style="white-space: nowrap;">
%(checkbox_for_baskets)s
<abbr class="unapi-id" title="%(recid)s"></abbr>
%(number)s.
""" % {'recid': recid,
'number': row_number,
'checkbox_for_baskets': checkbox_for_baskets}
if relevance:
out += """<br /><div class="rankscoreinfo"><a title="rank score">%(prologue)s%(relevance)s%(epilogue)s</a></div>""" % {
'prologue' : relevances_prologue,
'epilogue' : relevances_epilogue,
'relevance' : relevance
}
out += """</td><td valign="top">%s</td></tr>""" % record
return out
def tmpl_print_results_overview(self, ln, results_final_nb_total, cpu_time, results_final_nb, colls, ec, hosted_colls_potential_results_p=False):
"""Prints results overview box with links to particular collections below.
Parameters:
- 'ln' *string* - The language to display
- 'results_final_nb_total' *int* - The total number of hits for the query
- 'colls' *array* - The collections with hits, in the format:
- 'coll[code]' *string* - The code of the collection (canonical name)
- 'coll[name]' *string* - The display name of the collection
- 'results_final_nb' *array* - The number of hits, indexed by the collection codes:
- 'cpu_time' *string* - The time the query took
- 'url_args' *string* - The rest of the search query
- 'ec' *array* - selected external collections
- 'hosted_colls_potential_results_p' *boolean* - check if there are any hosted collections searches
that timed out during the pre-search
"""
if len(colls) == 1 and not ec:
# if one collection only and no external collections, print nothing:
return ""
# load the right message language
_ = gettext_set_language(ln)
# first find total number of hits:
# if there were no hosted collections that timed out during the pre-search print out the exact number of records found
if not hosted_colls_potential_results_p:
out = """<table class="searchresultsbox">
<thead><tr><th class="searchresultsboxheader">%(founds)s</th></tr></thead>
<tbody><tr><td class="searchresultsboxbody"> """ % {
'founds' : _("%(x_fmt_open)sResults overview:%(x_fmt_close)s Found %(x_nb_records)s records in %(x_nb_seconds)s seconds.") % \
{'x_fmt_open': '<strong>',
'x_fmt_close': '</strong>',
'x_nb_records': '<strong>' + self.tmpl_nice_number(results_final_nb_total, ln) + '</strong>',
'x_nb_seconds': '%.2f' % cpu_time}
}
# if there were (only) hosted_collections that timed out during the pre-search print out a fuzzier message
else:
if results_final_nb_total == 0:
out = """<table class="searchresultsbox">
<thead><tr><th class="searchresultsboxheader">%(founds)s</th></tr></thead>
<tbody><tr><td class="searchresultsboxbody"> """ % {
'founds' : _("%(x_fmt_open)sResults overview%(x_fmt_close)s") % \
{'x_fmt_open': '<strong>',
'x_fmt_close': '</strong>'}
}
elif results_final_nb_total > 0:
out = """<table class="searchresultsbox">
<thead><tr><th class="searchresultsboxheader">%(founds)s</th></tr></thead>
<tbody><tr><td class="searchresultsboxbody"> """ % {
'founds' : _("%(x_fmt_open)sResults overview:%(x_fmt_close)s Found at least %(x_nb_records)s records in %(x_nb_seconds)s seconds.") % \
{'x_fmt_open': '<strong>',
'x_fmt_close': '</strong>',
'x_nb_records': '<strong>' + self.tmpl_nice_number(results_final_nb_total, ln) + '</strong>',
'x_nb_seconds': '%.2f' % cpu_time}
}
# then print hits per collection:
out += """<script type="text/javascript">
$(document).ready(function() {
$('a.morecolls').click(function() {
$('.morecollslist').show();
$(this).hide();
$('.lesscolls').show();
return false;
});
$('a.lesscolls').click(function() {
$('.morecollslist').hide();
$(this).hide();
$('.morecolls').show();
return false;
});
});
</script>"""
count = 0
for coll in colls:
if results_final_nb.has_key(coll['code']) and results_final_nb[coll['code']] > 0:
count += 1
out += """
<span %(collclass)s><strong><a href="#%(coll)s">%(coll_name)s</a></strong>, <a href="#%(coll)s">%(number)s</a><br /></span>""" % \
{'collclass' : count > CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT and 'class="morecollslist" style="display:none"' or '',
'coll' : coll['id'],
'coll_name' : cgi.escape(coll['name']),
'number' : _("%s records found") % \
('<strong>' + self.tmpl_nice_number(results_final_nb[coll['code']], ln) + '</strong>')}
# the following is used for hosted collections that have timed out,
# i.e. for which we don't know the exact number of results yet.
elif results_final_nb.has_key(coll['code']) and results_final_nb[coll['code']] == -963:
count += 1
out += """
<span %(collclass)s><strong><a href="#%(coll)s">%(coll_name)s</a></strong><br /></span>""" % \
{'collclass' : count > CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT and 'class="morecollslist" style="display:none"' or '',
'coll' : coll['id'],
'coll_name' : cgi.escape(coll['name']),
'number' : _("%s records found") % \
('<strong>' + self.tmpl_nice_number(results_final_nb[coll['code']], ln) + '</strong>')}
if count > CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT:
out += """<a class="lesscolls" style="display:none; color:red; font-size:small" href="#"><i>%s</i></a>""" % _("Show less collections")
out += """<a class="morecolls" style="color:red; font-size:small" href="#"><i>%s</i></a>""" % _("Show all collections")
out += "</td></tr></tbody></table>"
return out
def tmpl_print_hosted_results(self, url_and_engine, ln, of=None, req=None, limit=CFG_EXTERNAL_COLLECTION_MAXRESULTS, display_body=True, display_add_to_basket = True):
"""Print results of a given search engine.
"""
if display_body:
_ = gettext_set_language(ln)
#url = url_and_engine[0]
engine = url_and_engine[1]
#name = _(engine.name)
db_id = get_collection_id(engine.name)
#base_url = engine.base_url
out = ""
results = engine.parser.parse_and_get_results(None, of=of, req=req, limit=limit, parseonly=True)
if len(results) != 0:
if of == 'hb':
out += """
<form action="%(siteurl)s/yourbaskets/add" method="post">
<input type="hidden" name="colid" value="%(col_db_id)s" />
<table>
""" % {
'siteurl' : CFG_SITE_URL,
'col_db_id' : db_id,
}
else:
if of == 'hb':
out += """
<table>
"""
for result in results:
out += result.html.replace('>Detailed record<', '>External record<').replace('>Similar records<', '>Similar external records<')
if len(results) != 0:
if of == 'hb':
out += """</table>
<br />"""
if display_add_to_basket:
out += """<input class="formbutton" type="submit" name="action" value="%(basket)s" />
""" % {'basket' : _("Add to basket")}
out += """</form>"""
else:
if of == 'hb':
out += """
</table>
"""
# we have already checked above whether there are results, so the following check may be redundant:
if not results:
if of.startswith("h"):
out = _('No results found...') + '<br />'
return out
else:
return ""
def tmpl_print_searchresultbox(self, header, body):
"""print a nicely formatted box for search results """
#_ = gettext_set_language(ln)
# first find total number of hits:
out = '<table class="searchresultsbox"><thead><tr><th class="searchresultsboxheader">' + header + '</th></tr></thead><tbody><tr><td class="searchresultsboxbody">' + body + '</td></tr></tbody></table>'
return out
def tmpl_search_no_boolean_hits(self, ln, nearestterms):
"""No hits found, proposes alternative boolean queries
Parameters:
- 'ln' *string* - The language to display
- 'nearestterms' *array* - Parts of the interface to display, in the format:
- 'nearestterms[nbhits]' *int* - The resulting number of hits
- 'nearestterms[url_args]' *string* - The search parameters
- 'nearestterms[p]' *string* - The search terms
"""
# load the right message language
_ = gettext_set_language(ln)
out = _("Boolean query returned no hits. Please combine your search terms differently.")
out += '''<blockquote><table class="nearesttermsbox" cellpadding="0" cellspacing="0" border="0">'''
for term, hits, argd in nearestterms:
out += '''\
<tr>
<td class="nearesttermsboxbody" align="right">%(hits)s</td>
<td class="nearesttermsboxbody" width="15">&nbsp;</td>
<td class="nearesttermsboxbody" align="left">
%(link)s
</td>
</tr>''' % {'hits' : hits,
'link': create_html_link(self.build_search_url(argd),
{}, cgi.escape(term),
{'class': "nearestterms"})}
out += """</table></blockquote>"""
return out
def tmpl_similar_author_names(self, authors, ln):
"""Displays a list of similar author names and their hit counts.
Parameters:
- 'authors': a list of (name, hits) tuples
- 'ln' *string* - The language to display
"""
# load the right message language
_ = gettext_set_language(ln)
out = '''<a name="googlebox"></a>
<table class="googlebox"><tr><th colspan="2" class="googleboxheader">%(similar)s</th></tr>''' % {
'similar' : _("See also: similar author names")
}
for author, hits in authors:
out += '''\
<tr>
<td class="googleboxbody">%(nb)d</td>
<td class="googleboxbody">%(link)s</td>
</tr>''' % {'link': create_html_link(
self.build_search_url(p=author,
f='author',
ln=ln),
{}, cgi.escape(author), {'class':"google"}),
'nb' : hits}
out += """</table>"""
return out
def tmpl_print_record_detailed(self, recID, ln):
"""Displays a detailed on-the-fly record
Parameters:
- 'ln' *string* - The language to display
- 'recID' *int* - The record id
"""
# okay, need to construct a simple "Detailed record" format of our own:
out = "<p>&nbsp;"
# secondly, title:
titles = get_fieldvalues(recID, "245__a") or \
get_fieldvalues(recID, "111__a")
for title in titles:
out += "<p><center><big><strong>%s</strong></big></center></p>" % cgi.escape(title)
# secondly, authors:
authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a")
if authors:
out += "<p><center>"
for author in authors:
out += '%s; ' % create_html_link(self.build_search_url(
ln=ln,
p=author,
f='author'),
{}, cgi.escape(author))
out += "</center></p>"
# thirdly, date of creation:
dates = get_fieldvalues(recID, "260__c")
for date in dates:
out += "<p><center><small>%s</small></center></p>" % date
# fourthly, abstract:
abstracts = get_fieldvalues(recID, "520__a")
for abstract in abstracts:
out += """<p style="margin-left: 15%%; width: 70%%">
<small><strong>Abstract:</strong> %s</small></p>""" % abstract
# fifthly, keywords:
keywords = get_fieldvalues(recID, "6531_a")
if keywords:
out += """<p style="margin-left: 15%%; width: 70%%">
<small><strong>Keyword(s):</strong>"""
for keyword in keywords:
out += '%s; ' % create_html_link(
self.build_search_url(ln=ln,
p=keyword,
f='keyword'),
{}, cgi.escape(keyword))
out += '</small></p>'
# sixthly, published in:
prs_p = get_fieldvalues(recID, "909C4p")
prs_v = get_fieldvalues(recID, "909C4v")
prs_y = get_fieldvalues(recID, "909C4y")
prs_n = get_fieldvalues(recID, "909C4n")
prs_c = get_fieldvalues(recID, "909C4c")
for idx in range(0, len(prs_p)):
out += """<p style="margin-left: 15%%; width: 70%%">
<small><strong>Publ. in:</strong> %s""" % prs_p[idx]
if prs_v and prs_v[idx]:
out += """<strong>%s</strong>""" % prs_v[idx]
if prs_y and prs_y[idx]:
out += """(%s)""" % prs_y[idx]
if prs_n and prs_n[idx]:
out += """, no.%s""" % prs_n[idx]
if prs_c and prs_c[idx]:
out += """, p.%s""" % prs_c[idx]
out += """.</small></p>"""
# seventhly, fulltext link:
urls_z = get_fieldvalues(recID, "8564_z")
urls_u = get_fieldvalues(recID, "8564_u")
# we separate the fulltext links and image links
for idx, url_u in enumerate(urls_u):
if url_u.endswith('.png'):
continue
else:
link_text = "URL"
try:
if urls_z[idx]:
link_text = urls_z[idx]
except IndexError:
pass
out += """<p style="margin-left: 15%%; width: 70%%">
<small><strong>%s:</strong> <a href="%s">%s</a></small></p>""" % (link_text, url_u, url_u)
# print some white space at the end:
out += "<br /><br />"
return out
def tmpl_print_record_list_for_similarity_boxen(self, title, recID_score_list, ln=CFG_SITE_LANG):
"""Print list of records in the "hs" (HTML Similarity) format for similarity boxes.
RECID_SCORE_LIST is a list of (recID1, score1), (recID2, score2), etc.
"""
from invenio.search_engine import print_record, record_public_p
recID_score_list_to_be_printed = []
# firstly, find the first 5 public records to print:
nb_records_to_be_printed = 0
nb_records_seen = 0
while nb_records_to_be_printed < 5 and nb_records_seen < len(recID_score_list) and nb_records_seen < 50:
# looking through first 50 records only, picking first 5 public ones
(recID, score) = recID_score_list[nb_records_seen]
nb_records_seen += 1
if record_public_p(recID):
nb_records_to_be_printed += 1
recID_score_list_to_be_printed.append([recID, score])
# secondly print them:
out = '''
<table><tr>
<td>
<table><tr><td class="blocknote">%(title)s</td></tr></table>
</td>
</tr>
<tr>
<td><table>
''' % { 'title': cgi.escape(title) }
for recid, score in recID_score_list_to_be_printed:
out += '''
<tr><td><font class="rankscoreinfo"><a>(%(score)s)&nbsp;</a></font><small>&nbsp;%(info)s</small></td></tr>''' % {
'score': score,
'info' : print_record(recid, format="hs", ln=ln),
}
out += """</table></td></tr></table> """
return out
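The selection loop above scans at most the first 50 candidates and keeps the first 5 that pass the visibility check. The same pattern, sketched with a pluggable predicate in place of `record_public_p` (names here are illustrative):

```python
def pick_first_public(recid_score_list, is_public, limit=5, scan_max=50):
    """Return up to `limit` [recID, score] pairs satisfying
    `is_public`, looking at no more than `scan_max` candidates."""
    picked = []
    for recid, score in recid_score_list[:scan_max]:
        if is_public(recid):
            picked.append([recid, score])
            if len(picked) == limit:
                break
    return picked
```

Capping the scan at `scan_max` bounds the number of (potentially costly) access checks even when the similarity list is long.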
def tmpl_print_record_brief(self, ln, recID):
"""Displays a brief record on-the-fly
Parameters:
- 'ln' *string* - The language to display
- 'recID' *int* - The record id
"""
out = ""
# no specific format output exists for record 'recID', so build a default brief format:
# firstly, title:
titles = get_fieldvalues(recID, "245__a") or \
get_fieldvalues(recID, "111__a")
# secondly, authors:
authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a")
# thirdly, date of creation:
dates = get_fieldvalues(recID, "260__c")
# thirdly bis, report numbers:
rns = get_fieldvalues(recID, "037__a") + get_fieldvalues(recID, "088__a")
# fourthly, beginning of abstract:
abstracts = get_fieldvalues(recID, "520__a")
# fifthly, fulltext link:
urls_z = get_fieldvalues(recID, "8564_z")
urls_u = get_fieldvalues(recID, "8564_u")
# get rid of images
images = []
non_image_urls_u = []
for url_u in urls_u:
if url_u.endswith('.png'):
images.append(url_u)
else:
non_image_urls_u.append(url_u)
## unAPI identifier
out = '<abbr class="unapi-id" title="%s"></abbr>\n' % recID
out += self.tmpl_record_body(
titles=titles,
authors=authors,
dates=dates,
rns=rns,
abstracts=abstracts,
urls_u=non_image_urls_u,
urls_z=urls_z,
ln=ln)
return out
def tmpl_print_record_brief_links(self, ln, recID, sf='', so='d', sp='', rm='', display_claim_link=False):
"""Displays links for brief record on-the-fly
Parameters:
- 'ln' *string* - The language to display
- 'recID' *int* - The record id
"""
from invenio.jinja2utils import render_template_to_string
tpl = """{%- from "websearch_helpers.html" import record_brief_links with context -%}
{{ record_brief_links(get_record(recid)) }}"""
return render_template_to_string(tpl, recid=recID, _from_string=True).encode('utf-8')
def tmpl_xml_rss_prologue(self, current_url=None,
previous_url=None, next_url=None,
first_url=None, last_url=None,
nb_found=None, jrec=None, rg=None, cc=None):
"""Creates XML RSS 2.0 prologue."""
title = CFG_SITE_NAME
description = '%s latest documents' % CFG_SITE_NAME
if cc and cc != CFG_SITE_NAME:
title += ': ' + cgi.escape(cc)
description += ' in ' + cgi.escape(cc)
out = """<rss version="2.0"
xmlns:media="http://search.yahoo.com/mrss/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
<channel>
<title>%(rss_title)s</title>
<link>%(siteurl)s</link>
<description>%(rss_description)s</description>
<language>%(sitelang)s</language>
<pubDate>%(timestamp)s</pubDate>
<category></category>
<generator>Invenio %(version)s</generator>
<webMaster>%(sitesupportemail)s</webMaster>
<ttl>%(timetolive)s</ttl>%(previous_link)s%(next_link)s%(current_link)s%(total_results)s%(start_index)s%(items_per_page)s
<image>
<url>%(siteurl)s/img/site_logo_rss.png</url>
<title>%(sitename)s</title>
<link>%(siteurl)s</link>
</image>
<atom:link rel="search" href="%(siteurl)s/opensearchdescription" type="application/opensearchdescription+xml" title="Content Search" />
<textInput>
<title>Search </title>
<description>Search this site:</description>
<name>p</name>
<link>%(siteurl)s/search</link>
</textInput>
""" % {'sitename': CFG_SITE_NAME,
'siteurl': CFG_SITE_URL,
'sitelang': CFG_SITE_LANG,
'search_syntax': self.tmpl_opensearch_rss_url_syntax,
'timestamp': time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()),
'version': CFG_VERSION,
'sitesupportemail': CFG_SITE_SUPPORT_EMAIL,
'timetolive': CFG_WEBSEARCH_RSS_TTL,
'current_link': (current_url and \
'\n<atom:link rel="self" href="%s" />\n' % current_url) or '',
'previous_link': (previous_url and \
'\n<atom:link rel="previous" href="%s" />' % previous_url) or '',
'next_link': (next_url and \
'\n<atom:link rel="next" href="%s" />' % next_url) or '',
'first_link': (first_url and \
'\n<atom:link rel="first" href="%s" />' % first_url) or '',
'last_link': (last_url and \
'\n<atom:link rel="last" href="%s" />' % last_url) or '',
'total_results': (nb_found and \
'\n<opensearch:totalResults>%i</opensearch:totalResults>' % nb_found) or '',
'start_index': (jrec and \
'\n<opensearch:startIndex>%i</opensearch:startIndex>' % jrec) or '',
'items_per_page': (rg and \
'\n<opensearch:itemsPerPage>%i</opensearch:itemsPerPage>' % rg) or '',
'rss_title': title,
'rss_description': description
}
return out
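The prologue above renders each optional element with the Python idiom `(value and 'template' % value) or ''`, so the element is emitted only when its value is set. A standalone illustration of that idiom (the helper name is hypothetical):

```python
def optional_element(fmt, value):
    """Render `fmt % value` when `value` is truthy, otherwise an
    empty string -- the `(x and ...) or ''` idiom used above for
    the optional <atom:link> and OpenSearch elements."""
    return (value and fmt % value) or ''
```

One caveat of the idiom: any falsy value, including a legitimate `0`, suppresses the element, so it only suits fields where zero and "absent" are interchangeable.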
def tmpl_xml_rss_epilogue(self):
"""Creates XML RSS 2.0 epilogue."""
out = """\
</channel>
</rss>\n"""
return out
def tmpl_xml_podcast_prologue(self, current_url=None,
previous_url=None, next_url=None,
first_url=None, last_url=None,
nb_found=None, jrec=None, rg=None, cc=None):
"""Creates XML podcast prologue."""
title = CFG_SITE_NAME
description = '%s latest documents' % CFG_SITE_NAME
if CFG_CERN_SITE:
title = 'CERN'
description = 'CERN latest documents'
if cc and cc != CFG_SITE_NAME:
title += ': ' + cgi.escape(cc)
description += ' in ' + cgi.escape(cc)
out = """<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
<channel>
<title>%(podcast_title)s</title>
<link>%(siteurl)s</link>
<description>%(podcast_description)s</description>
<language>%(sitelang)s</language>
<pubDate>%(timestamp)s</pubDate>
<category></category>
<generator>Invenio %(version)s</generator>
<webMaster>%(siteadminemail)s</webMaster>
<ttl>%(timetolive)s</ttl>%(previous_link)s%(next_link)s%(current_link)s
<image>
<url>%(siteurl)s/img/site_logo_rss.png</url>
<title>%(sitename)s</title>
<link>%(siteurl)s</link>
</image>
<itunes:owner>
<itunes:email>%(siteadminemail)s</itunes:email>
</itunes:owner>
""" % {'sitename': CFG_SITE_NAME,
'siteurl': CFG_SITE_URL,
'sitelang': CFG_SITE_LANG,
'siteadminemail': CFG_SITE_ADMIN_EMAIL,
'timestamp': time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()),
'version': CFG_VERSION,
'sitesupportemail': CFG_SITE_SUPPORT_EMAIL,
'timetolive': CFG_WEBSEARCH_RSS_TTL,
'current_link': (current_url and \
'\n<atom:link rel="self" href="%s" />\n' % current_url) or '',
'previous_link': (previous_url and \
'\n<atom:link rel="previous" href="%s" />' % previous_url) or '',
'next_link': (next_url and \
'\n<atom:link rel="next" href="%s" />' % next_url) or '',
'first_link': (first_url and \
'\n<atom:link rel="first" href="%s" />' % first_url) or '',
'last_link': (last_url and \
'\n<atom:link rel="last" href="%s" />' % last_url) or '',
'podcast_title': title,
'podcast_description': description
}
return out
def tmpl_xml_podcast_epilogue(self):
"""Creates XML podcast epilogue."""
out = """\n</channel>
</rss>\n"""
return out
def tmpl_xml_nlm_prologue(self):
"""Creates XML NLM prologue."""
out = """<articles>\n"""
return out
def tmpl_xml_nlm_epilogue(self):
"""Creates XML NLM epilogue."""
out = """\n</articles>"""
return out
def tmpl_xml_refworks_prologue(self):
"""Creates XML RefWorks prologue."""
out = """<references>\n"""
return out
def tmpl_xml_refworks_epilogue(self):
"""Creates XML RefWorks epilogue."""
out = """\n</references>"""
return out
def tmpl_xml_endnote_prologue(self):
"""Creates XML EndNote prologue."""
out = """<xml>\n<records>\n"""
return out
def tmpl_xml_endnote_8x_prologue(self):
"""Creates XML EndNote prologue."""
out = """<records>\n"""
return out
def tmpl_xml_endnote_epilogue(self):
"""Creates XML EndNote epilogue."""
out = """\n</records>\n</xml>"""
return out
def tmpl_xml_endnote_8x_epilogue(self):
"""Creates XML EndNote epilogue."""
out = """\n</records>"""
return out
def tmpl_xml_marc_prologue(self):
"""Creates XML MARC prologue."""
out = """<collection xmlns="http://www.loc.gov/MARC21/slim">\n"""
return out
def tmpl_xml_marc_epilogue(self):
"""Creates XML MARC epilogue."""
out = """\n</collection>"""
return out
def tmpl_xml_mods_prologue(self):
"""Creates XML MODS prologue."""
out = """<modsCollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"\n
xsi:schemaLocation="http://www.loc.gov/mods/v3\n
http://www.loc.gov/standards/mods/v3/mods-3-3.xsd">\n"""
return out
def tmpl_xml_mods_epilogue(self):
"""Creates XML MODS epilogue."""
out = """\n</modsCollection>"""
return out
def tmpl_xml_default_prologue(self):
"""Creates XML default format prologue. (Sanity calls only.)"""
out = """<collection>\n"""
return out
def tmpl_xml_default_epilogue(self):
"""Creates XML default format epilogue. (Sanity calls only.)"""
out = """\n</collection>"""
return out
def tmpl_collection_not_found_page_title(self, colname, ln=CFG_SITE_LANG):
"""
Create the page title for cases when a non-existent collection was requested.
"""
_ = gettext_set_language(ln)
out = _("Collection %s Not Found") % cgi.escape(colname)
return out
def tmpl_collection_not_found_page_body(self, colname, ln=CFG_SITE_LANG):
"""
Create the page body for cases when a non-existent collection was requested.
"""
_ = gettext_set_language(ln)
out = """<h1>%(title)s</h1>
<p>%(sorry)s</p>
<p>%(you_may_want)s</p>
""" % { 'title': self.tmpl_collection_not_found_page_title(colname, ln),
'sorry': _("Sorry, collection %s does not seem to exist.") % \
('<strong>' + cgi.escape(colname) + '</strong>'),
'you_may_want': _("You may want to start browsing from %s.") % \
('<a href="' + CFG_SITE_URL + '?ln=' + ln + '">' + \
cgi.escape(CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)) + '</a>')}
return out
def tmpl_alert_rss_teaser_box_for_query(self, id_query, ln, display_email_alert_part=True):
"""Propose teaser for setting up this query as alert or RSS feed.
Parameters:
- 'id_query' *int* - ID of the query we make teaser for
- 'ln' *string* - The language to display
- 'display_email_alert_part' *bool* - whether to display email alert part
"""
# load the right message language
_ = gettext_set_language(ln)
# get query arguments:
res = run_sql("SELECT urlargs FROM query WHERE id=%s", (id_query,))
argd = {}
if res:
argd = cgi.parse_qs(res[0][0])
rssurl = self.build_rss_url(argd)
alerturl = CFG_SITE_URL + '/youralerts/input?ln=%s&amp;idq=%s' % (ln, id_query)
if display_email_alert_part:
msg_alert = _("""Set up a personal %(x_url1_open)semail alert%(x_url1_close)s
or subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s.""") % \
{'x_url1_open': '<a href="%s"><img src="%s/img/mail-icon-12x8.gif" border="0" alt="" /></a> ' % (alerturl, CFG_SITE_URL) + ' <a class="google" href="%s">' % (alerturl),
'x_url1_close': '</a>',
'x_url2_open': '<a href="%s"><img src="%s/img/feed-icon-12x12.gif" border="0" alt="" /></a> ' % (rssurl, CFG_SITE_URL) + ' <a class="google" href="%s">' % rssurl,
'x_url2_close': '</a>', }
else:
msg_alert = _("""Subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s.""") % \
{'x_url2_open': '<a href="%s"><img src="%s/img/feed-icon-12x12.gif" border="0" alt="" /></a> ' % (rssurl, CFG_SITE_URL) + ' <a class="google" href="%s">' % rssurl,
'x_url2_close': '</a>', }
out = '''<a name="googlebox"></a>
<table class="googlebox"><tr><th class="googleboxheader">%(similar)s</th></tr>
<tr><td class="googleboxbody">%(msg_alert)s</td></tr>
</table>
''' % {
'similar' : _("Interested in being notified about new results for this query?"),
'msg_alert': msg_alert, }
return out
def tmpl_detailed_record_metadata(self, recID, ln, format,
content,
creationdate=None,
modificationdate=None):
"""Returns the main detailed page of a record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- 'format' *string* - The format in used to print the record
- 'content' *string* - The main content of the page
- 'creationdate' *string* - The creation date of the printed record
- 'modificationdate' *string* - The last modification date of the printed record
"""
_ = gettext_set_language(ln)
## unAPI identifier
out = '<abbr class="unapi-id" title="%s"></abbr>\n' % recID
out += content
return out
def tmpl_display_back_to_search(self, req, recID, ln):
"""
Displays next-hit/previous-hit/back-to-search links
on the detailed record pages in order to be able to quickly
flip between detailed record pages
@param req: Apache request object
@type req: Apache request object
@param recID: detailed record ID
@type recID: int
@param ln: language of the page
@type ln: string
@return: html output
@rtype: html
"""
_ = gettext_set_language(ln)
# if the limit is set to zero, the navigation links are not displayed at all
if not CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT:
return ''
# retrieve the last query and its hits from the session; the hits may be
# absent if the user reached this record without running a search first
wlq = session_param_get(req, 'websearch-last-query', '')
wlqh = session_param_get(req, 'websearch-last-query-hits')
out = '''<br/><br/><div align="right">'''
# if the limit CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT was exceeded,
# only the "back to search" link is displayed
if wlqh is None:
out += '''<div style="padding-bottom:2px;padding-top:30px;"><span class="moreinfo" style="margin-right:10px;">
%(back)s </span></div></div>''' % \
{'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})}
return out
# let's look for the recID's collection
record_found = False
for coll in wlqh:
if recID in coll:
record_found = True
coll_recID = coll
break
# let's calculate the length of the recID's collection
if record_found:
recIDs = coll_recID[::-1]
totalrec = len(recIDs)
# the record is not part of the last search results; nothing to navigate
else:
return ''
# if there is only one hit,
# show only the "back to search" link
if totalrec == 1:
# to go back to the last search results page
out += '''<div style="padding-bottom:2px;padding-top:30px;"><span class="moreinfo" style="margin-right:10px;">
%(back)s </span></div></div>''' % \
{'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})}
elif totalrec > 1:
pos = recIDs.index(recID)
numrec = pos + 1
if pos == 0:
recIDnext = recIDs[pos + 1]
recIDlast = recIDs[totalrec - 1]
# to display only next and last links
out += '''<div><span class="moreinfo" style="margin-right:10px;">
%(numrec)s %(totalrec)s %(next)s %(last)s </span></div> ''' % {
'numrec': _("%s of") % ('<strong>' + self.tmpl_nice_number(numrec, ln) + '</strong>'),
'totalrec': ("%s") % ('<strong>' + self.tmpl_nice_number(totalrec, ln) + '</strong>'),
'next': create_html_link(self.build_search_url(recid=recIDnext, ln=ln),
{}, ('<font size="4">&rsaquo;</font>'), {'class': "moreinfo"}),
'last': create_html_link(self.build_search_url(recid=recIDlast, ln=ln),
{}, ('<font size="4">&raquo;</font>'), {'class': "moreinfo"})}
elif pos == totalrec - 1:
recIDfirst = recIDs[0]
recIDprev = recIDs[pos - 1]
# to display only first and previous links
out += '''<div style="padding-top:30px;"><span class="moreinfo" style="margin-right:10px;">
%(first)s %(previous)s %(numrec)s %(totalrec)s</span></div>''' % {
'first': create_html_link(self.build_search_url(recid=recIDfirst, ln=ln),
{}, ('<font size="4">&laquo;</font>'), {'class': "moreinfo"}),
'previous': create_html_link(self.build_search_url(recid=recIDprev, ln=ln),
{}, ('<font size="4">&lsaquo;</font>'), {'class': "moreinfo"}),
'numrec': _("%s of") % ('<strong>' + self.tmpl_nice_number(numrec, ln) + '</strong>'),
'totalrec': ("%s") % ('<strong>' + self.tmpl_nice_number(totalrec, ln) + '</strong>')}
else:
# to display all links
recIDfirst = recIDs[0]
recIDprev = recIDs[pos - 1]
recIDnext = recIDs[pos + 1]
recIDlast = recIDs[len(recIDs) - 1]
out += '''<div style="padding-top:30px;"><span class="moreinfo" style="margin-right:10px;">
%(first)s %(previous)s
%(numrec)s %(totalrec)s %(next)s %(last)s </span></div>''' % {
'first': create_html_link(self.build_search_url(recid=recIDfirst, ln=ln),
{}, ('<font size="4">&laquo;</font>'),
{'class': "moreinfo"}),
'previous': create_html_link(self.build_search_url(recid=recIDprev, ln=ln),
{}, ('<font size="4">&lsaquo;</font>'), {'class': "moreinfo"}),
'numrec': _("%s of") % ('<strong>' + self.tmpl_nice_number(numrec, ln) + '</strong>'),
'totalrec': ("%s") % ('<strong>' + self.tmpl_nice_number(totalrec, ln) + '</strong>'),
'next': create_html_link(self.build_search_url(recid=recIDnext, ln=ln),
{}, ('<font size="4">&rsaquo;</font>'), {'class': "moreinfo"}),
'last': create_html_link(self.build_search_url(recid=recIDlast, ln=ln),
{}, ('<font size="4">&raquo;</font>'), {'class': "moreinfo"})}
out += '''<div style="padding-bottom:2px;"><span class="moreinfo" style="margin-right:10px;">
%(back)s </span></div></div>''' % {
'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})}
return out
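The branching above distinguishes first, last, and middle positions in the hit list to decide which of the first/previous/next/last links to render. The index arithmetic can be sketched on its own (an illustrative helper, not part of the template API):

```python
def neighbour_recids(recids, recid):
    """Return a dict with the 'first', 'previous', 'next' and
    'last' record IDs relative to `recid`, omitting keys that do
    not apply (e.g. no 'previous'/'first' for the first hit)."""
    pos = recids.index(recid)
    nav = {}
    if pos > 0:
        nav['first'] = recids[0]
        nav['previous'] = recids[pos - 1]
    if pos < len(recids) - 1:
        nav['next'] = recids[pos + 1]
        nav['last'] = recids[-1]
    return nav
```

The three branches in the template correspond to the three possible shapes of this dict: next/last only, first/previous only, or all four keys.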
def tmpl_record_plots(self, recID, ln):
"""
Displays small tables with the images and captions contained in the specified record.
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
"""
from invenio.search_engine import get_record
from invenio.bibrecord import field_get_subfield_values
from invenio.bibrecord import record_get_field_instances
_ = gettext_set_language(ln)
out = ''
rec = get_record(recID)
flds = record_get_field_instances(rec, '856', '4')
images = []
for fld in flds:
image = field_get_subfield_values(fld, 'u')
caption = field_get_subfield_values(fld, 'y')
if type(image) == list and len(image) > 0:
image = image[0]
else:
continue
if type(caption) == list and len(caption) > 0:
caption = caption[0]
else:
continue
if not image.endswith('.png'):
# not a plot image; skip it
continue
if len(caption) >= 5:
images.append((int(caption[:5]), image, caption[5:]))
else:
# no ordering information in the caption; sort this image to the end
images.append((99999, image, caption))
images = sorted(images, key=lambda x: x[0])
for (index, image, caption) in images:
# let's put everything in nice little subtables with the image
# next to the caption
out = out + '<table width="95%" style="display: inline;">' + \
'<tr><td width="66%"><a name="' + str(index) + '" ' + \
'href="' + image + '">' + \
'<img src="' + image + '" width="95%"/></a></td>' + \
'<td width="33%">' + caption + '</td></tr>' + \
'</table>'
out = out + '<br /><br />'
return out
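The plot loop above uses the first five characters of each caption as a numeric sort key and pushes captions without one to the end (key 99999). A self-contained sketch of that ordering logic, with an added `isdigit()` guard that the template itself does not perform (sample data is hypothetical):

```python
def order_plots(image_caption_pairs, fallback_key=99999):
    """Sort (image, caption) pairs by a 5-digit order prefix in
    the caption; pairs without such a prefix sort last."""
    keyed = []
    for image, caption in image_caption_pairs:
        if len(caption) >= 5 and caption[:5].isdigit():
            # strip the numeric prefix from the displayed caption
            keyed.append((int(caption[:5]), image, caption[5:]))
        else:
            keyed.append((fallback_key, image, caption))
    return sorted(keyed, key=lambda item: item[0])
```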
def tmpl_detailed_record_statistics(self, recID, ln,
downloadsimilarity,
downloadhistory, viewsimilarity):
"""Returns the statistics page of a record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- downloadsimilarity *string* - downloadsimilarity box
- downloadhistory *string* - downloadhistory box
- viewsimilarity *string* - viewsimilarity box
"""
# load the right message language
_ = gettext_set_language(ln)
out = ''
if CFG_BIBRANK_SHOW_DOWNLOAD_STATS and downloadsimilarity is not None:
similar = self.tmpl_print_record_list_for_similarity_boxen (
_("People who downloaded this document also downloaded:"), downloadsimilarity, ln)
out = '<table>'
out += '''
<tr><td>%(graph)s</td></tr>
<tr><td>%(similar)s</td></tr>
''' % { 'siteurl': CFG_SITE_URL, 'recid': recID, 'ln': ln,
'similar': similar, 'more': _("more"),
'graph': downloadsimilarity
}
out += '</table>'
out += '<br />'
if CFG_BIBRANK_SHOW_READING_STATS and viewsimilarity is not None:
out += self.tmpl_print_record_list_for_similarity_boxen (
_("People who viewed this page also viewed:"), viewsimilarity, ln)
if CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS and downloadhistory is not None:
out += downloadhistory + '<br />'
return out
def tmpl_detailed_record_citations_prologue(self, recID, ln):
"""Returns the prologue of the citations page of a record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
"""
return '<table>'
def tmpl_detailed_record_citations_epilogue(self, recID, ln):
"""Returns the epilogue of the citations page of a record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
"""
return '</table>'
def tmpl_detailed_record_citations_citing_list(self, recID, ln,
citinglist,
sf='', so='d', sp='', rm=''):
"""Returns the list of record citing this one
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- citinglist *list* - a list of tuples [(x1,y1),(x2,y2),..] where x is doc id and y is number of citations
"""
# load the right message language
_ = gettext_set_language(ln)
out = ''
if CFG_BIBRANK_SHOW_CITATION_STATS and citinglist is not None:
similar = self.tmpl_print_record_list_for_similarity_boxen(
_("Cited by: %s records") % len (citinglist), citinglist, ln)
out += '''
<tr><td>
%(similar)s&nbsp;%(more)s
<br /><br />
</td></tr>''' % {
'more': create_html_link(
self.build_search_url(p='refersto:recid:%d' % recID, #XXXX
sf=sf,
so=so,
sp=sp,
rm=rm,
ln=ln),
{}, _("more")),
'similar': similar}
return out
def tmpl_detailed_record_citations_citation_history(self, recID, ln,
citationhistory):
"""Returns the citations history graph of this record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- citationhistory *string* - citationhistory box
"""
# load the right message language
_ = gettext_set_language(ln)
out = ''
if CFG_BIBRANK_SHOW_CITATION_GRAPHS and citationhistory is not None:
out = '<!--citation history--><tr><td>%s</td></tr>' % citationhistory
else:
out = "<!--not showing citation history. CFG_BIBRANK_SHOW_CITATION_GRAPHS:"
out += str(CFG_BIBRANK_SHOW_CITATION_GRAPHS) + " citationhistory "
if citationhistory:
out += str(len(citationhistory)) + "-->"
else:
out += "no citationhistory -->"
return out
def tmpl_detailed_record_citations_co_citing(self, recID, ln,
cociting):
"""Returns the list of cocited records
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- cociting *string* - cociting box
"""
# load the right message language
_ = gettext_set_language(ln)
out = ''
if CFG_BIBRANK_SHOW_CITATION_STATS and cociting is not None:
similar = self.tmpl_print_record_list_for_similarity_boxen (
_("Co-cited with: %s records") % len (cociting), cociting, ln)
out = '''
<tr><td>
%(similar)s&nbsp;%(more)s
<br />
</td></tr>''' % { 'more': create_html_link(self.build_search_url(p='cocitedwith:%d' % recID, ln=ln),
{}, _("more")),
'similar': similar }
return out
def tmpl_detailed_record_citations_self_cited(self, recID, ln,
selfcited, citinglist):
"""Returns the list of self-citations for this record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- selfcited list - a list of self-citations for recID
"""
# load the right message language
_ = gettext_set_language(ln)
out = ''
if CFG_BIBRANK_SHOW_CITATION_GRAPHS and selfcited is not None:
sc_scorelist = [] # a score list for printing
for s in selfcited:
# copy the weight from the citing list
weight = 0
for c in citinglist:
(crec, score) = c
if crec == s:
weight = score
tmp = [s, weight]
sc_scorelist.append(tmp)
scite = self.tmpl_print_record_list_for_similarity_boxen (
_(".. of which self-citations: %s records") % len (selfcited), sc_scorelist, ln)
out = '<tr><td>' + scite + '</td></tr>'
return out
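The loop above copies each self-citation's weight from `citinglist` with a nested scan; the same lookup can be expressed with a dict in a single pass (a sketch, not the template code):

```python
def selfcitation_scores(selfcited, citinglist):
    """Return [recID, weight] pairs for the self-cited records,
    taking each weight from `citinglist` (0 when absent)."""
    weights = dict(citinglist)  # recID -> score
    return [[recid, weights.get(recid, 0)] for recid in selfcited]
```

Building the dict once turns the O(len(selfcited) * len(citinglist)) scan into an O(n + m) lookup.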
def tmpl_author_information(self, req, pubs, authorname, num_downloads,
aff_pubdict, citedbylist, kwtuples, authors,
vtuples, names_dict, person_link,
bibauthorid_data, ln, return_html=False):
"""Prints stuff about the author given as authorname.
1. Author name + his/her institutes. Each institute I has a link
to papers where the author has I as institute.
2. Publications, number: link to search by author.
3. Keywords
4. Author collabs
5. Publication venues like journals
The parameters are data structures needed to produce 1-5, as follows:
req - request
pubs - list of recids, probably the records that have the author as an author
authorname - evident
num_downloads - evident
aff_pubdict - a dictionary where keys are inst names and values lists of recordids
citedbylist - list of recs that cite pubs
kwtuples - keyword tuples like ('HIGGS BOSON',[3,4]) where 3 and 4 are recids
authors - a list of authors that have collaborated with authorname
names_dict - a dict of {name: frequency}
"""
from invenio.search_engine import perform_request_search
from operator import itemgetter
_ = gettext_set_language(ln)
ib_pubs = intbitset(pubs)
html = []
# construct an extended search as an interim solution for author id
# searches. Will build "(exactauthor:v1 OR exactauthor:v2)" strings
# extended_author_search_str = ""
# if bibauthorid_data["is_baid"]:
# if len(names_dict.keys()) > 1:
# extended_author_search_str = '('
#
# for name_index, name_query in enumerate(names_dict.keys()):
# if name_index > 0:
# extended_author_search_str += " OR "
#
# extended_author_search_str += 'exactauthor:"' + name_query + '"'
#
# if len(names_dict.keys()) > 1:
# extended_author_search_str += ')'
# rec_query = 'exactauthor:"' + authorname + '"'
#
# if bibauthorid_data["is_baid"] and extended_author_search_str:
# rec_query = extended_author_search_str
baid_query = ""
extended_author_search_str = ""
if 'is_baid' in bibauthorid_data and bibauthorid_data['is_baid']:
if bibauthorid_data["cid"]:
baid_query = 'author:%s' % bibauthorid_data["cid"]
elif bibauthorid_data["pid"] > -1:
baid_query = 'author:%s' % bibauthorid_data["pid"]
## todo: figure out if the author index is filled with pids/cids.
## if not: fall back to exactauthor search.
# if not index:
# baid_query = ""
if not baid_query:
baid_query = 'exactauthor:"' + authorname + '"'
if bibauthorid_data['is_baid']:
if len(names_dict.keys()) > 1:
extended_author_search_str = '('
for name_index, name_query in enumerate(names_dict.keys()):
if name_index > 0:
extended_author_search_str += " OR "
extended_author_search_str += 'exactauthor:"' + name_query + '"'
if len(names_dict.keys()) > 1:
extended_author_search_str += ')'
if bibauthorid_data['is_baid'] and extended_author_search_str:
baid_query = extended_author_search_str
baid_query = baid_query + " "
sorted_names_list = sorted(names_dict.iteritems(), key=itemgetter(1),
reverse=True)
# Prepare data for display
# construct names box
header = "<strong>" + _("Name variants") + "</strong>"
content = []
for name, frequency in sorted_names_list:
prquery = baid_query + ' exactauthor:"' + name + '"'
name_lnk = create_html_link(self.build_search_url(p=prquery),
{},
str(frequency),)
content.append("%s (%s)" % (name, name_lnk))
if not content:
content = [_("No Name Variants")]
names_box = self.tmpl_print_searchresultbox(header, "<br />\n".join(content))
# construct papers box
rec_query = baid_query
searchstr = create_html_link(self.build_search_url(p=rec_query),
{}, "<strong>" + "All papers (" + str(len(pubs)) + ")" + "</strong>",)
line1 = "<strong>" + _("Papers") + "</strong>"
line2 = searchstr
if CFG_BIBRANK_SHOW_DOWNLOAD_STATS and num_downloads:
line2 += " (" + _("downloaded") + " "
line2 += str(num_downloads) + " " + _("times") + ")"
if CFG_INSPIRE_SITE:
CFG_COLLS = ['Book',
'Conference',
'Introductory',
'Lectures',
'Preprint',
'Published',
'Review',
'Thesis']
else:
CFG_COLLS = ['Article',
'Book',
'Preprint', ]
collsd = {}
for coll in CFG_COLLS:
coll_papers = list(ib_pubs & intbitset(perform_request_search(f="collection", p=coll)))
if coll_papers:
collsd[coll] = coll_papers
colls = collsd.keys()
colls.sort(lambda x, y: cmp(len(collsd[y]), len(collsd[x]))) # sort by number of papers
for coll in colls:
rec_query = baid_query + 'collection:' + coll
line2 += "<br />" + create_html_link(self.build_search_url(p=rec_query),
{}, coll + " (" + str(len(collsd[coll])) + ")",)
if not pubs:
line2 = _("No Papers")
papers_box = self.tmpl_print_searchresultbox(line1, line2)
# make an authoraff string that looks like CERN (1), Caltech (2), etc.
authoraff = ""
aff_pubdict_keys = aff_pubdict.keys()
aff_pubdict_keys.sort(lambda x, y: cmp(len(aff_pubdict[y]), len(aff_pubdict[x])))
if aff_pubdict_keys:
for a in aff_pubdict_keys:
print_a = a
if (print_a == ' '):
print_a = _("unknown affiliation")
if authoraff:
authoraff += '<br>'
authoraff += create_html_link(self.build_search_url(p=' or '.join(["%s" % x for x in aff_pubdict[a]]),
f='recid'),
{}, print_a + ' (' + str(len(aff_pubdict[a])) + ')',)
else:
authoraff = _("No Affiliations")
line1 = "<strong>" + _("Affiliations") + "</strong>"
line2 = authoraff
affiliations_box = self.tmpl_print_searchresultbox(line1, line2)
# print frequent keywords:
keywstr = ""
if (kwtuples):
for (kw, freq) in kwtuples:
if keywstr:
keywstr += '<br>'
rec_query = baid_query + 'keyword:"' + kw + '"'
searchstr = create_html_link(self.build_search_url(p=rec_query),
{}, kw + " (" + str(freq) + ")",)
keywstr = keywstr + " " + searchstr
else:
keywstr += _('No Keywords')
line1 = "<strong>" + _("Frequent keywords") + "</strong>"
line2 = keywstr
keyword_box = self.tmpl_print_searchresultbox(line1, line2)
header = "<strong>" + _("Frequent co-authors") + "</strong>"
content = []
sorted_coauthors = sorted(sorted(authors.iteritems(), key=itemgetter(0)),
key=itemgetter(1), reverse=True)
for name, frequency in sorted_coauthors:
rec_query = baid_query + 'exactauthor:"' + name + '"'
lnk = create_html_link(self.build_search_url(p=rec_query), {}, "%s (%s)" % (name, frequency),)
content.append("%s" % lnk)
if not content:
content = [_("No Frequent Co-authors")]
coauthor_box = self.tmpl_print_searchresultbox(header, "<br />\n".join(content))
pubs_to_papers_link = create_html_link(self.build_search_url(p=baid_query), {}, str(len(pubs)))
display_name = ""
try:
display_name = sorted_names_list[0][0]
except IndexError:
display_name = "&nbsp;"
headertext = ('<h1>%s <span style="font-size:50%%;">(%s papers)</span></h1>'
% (display_name, pubs_to_papers_link))
if return_html:
html.append(headertext)
else:
req.write(headertext)
#req.write("<h1>%s</h1>" % (authorname))
if person_link:
cmp_link = ('<div><a href="%s/person/claimstub?person=%s">%s</a></div>'
% (CFG_SITE_URL, person_link,
_("This is me. Verify my publication list.")))
if return_html:
html.append(cmp_link)
else:
req.write(cmp_link)
if return_html:
html.append("<table width=80%><tr valign=top><td>")
html.append(names_box)
html.append("<br />")
html.append(papers_box)
html.append("<br />")
html.append(keyword_box)
html.append("</td>")
html.append("<td>&nbsp;</td>")
html.append("<td>")
html.append(affiliations_box)
html.append("<br />")
html.append(coauthor_box)
html.append("</td></tr></table>")
else:
req.write("<table width=80%><tr valign=top><td>")
req.write(names_box)
req.write("<br />")
req.write(papers_box)
req.write("<br />")
req.write(keyword_box)
req.write("</td>")
req.write("<td>&nbsp;</td>")
req.write("<td>")
req.write(affiliations_box)
req.write("<br />")
req.write(coauthor_box)
req.write("</td></tr></table>")
# print citations:
rec_query = baid_query
if len(citedbylist):
line1 = "<strong>" + _("Citations:") + "</strong>"
line2 = ""
if not pubs:
line2 = _("No Citation Information available")
sr_box = self.tmpl_print_searchresultbox(line1, line2)
if return_html:
html.append(sr_box)
else:
req.write(sr_box)
if return_html:
return "\n".join(html)
# print frequent co-authors:
# collabstr = ""
# if (authors):
# for c in authors:
# c = c.strip()
# if collabstr:
# collabstr += '<br>'
# #do not add this person him/herself in the list
# cUP = c.upper()
# authornameUP = authorname.upper()
# if not cUP == authornameUP:
# commpubs = intbitset(pubs) & intbitset(perform_request_search(p="exactauthor:\"%s\" exactauthor:\"%s\"" % (authorname, c)))
# collabstr = collabstr + create_html_link(self.build_search_url(p='exactauthor:"' + authorname + '" exactauthor:"' + c + '"'),
# {}, c + " (" + str(len(commpubs)) + ")",)
# else: collabstr += 'None'
# banner = self.tmpl_print_searchresultbox("<strong>" + _("Frequent co-authors:") + "</strong>", collabstr)
# print frequently publishes in journals:
#if (vtuples):
# pubinfo = ""
# for t in vtuples:
# (journal, num) = t
# pubinfo += create_html_link(self.build_search_url(p='exactauthor:"' + authorname + '" ' + \
# 'journal:"' + journal + '"'),
# {}, journal + " ("+str(num)+")<br/>")
# banner = self.tmpl_print_searchresultbox("<strong>" + _("Frequently publishes in:") + "<strong>", pubinfo)
# req.write(banner)
def tmpl_detailed_record_references(self, recID, ln, content):
"""Returns the references page of a record
Parameters:
- 'recID' *int* - The ID of the printed record
- 'ln' *string* - The language to display
- 'content' *string* - The main content of the page
"""
# load the right message language
out = ''
if content is not None:
out += content
return out
def tmpl_citesummary_title(self, ln=CFG_SITE_LANG):
"""HTML citesummary title and breadcrumbs
A part of HCS format suite."""
return ''
def tmpl_citesummary2_title(self, searchpattern, ln=CFG_SITE_LANG):
"""HTML citesummary title and breadcrumbs
A part of HCS2 format suite."""
return ''
def tmpl_citesummary_back_link(self, searchpattern, ln=CFG_SITE_LANG):
"""HTML back to citesummary link
A part of HCS2 format suite."""
_ = gettext_set_language(ln)
out = ''
params = {'ln': 'en',
'p': quote(searchpattern),
'of': 'hcs'}
msg = _('Back to citesummary')
url = CFG_SITE_URL + '/search?' + \
'&'.join(['='.join(i) for i in params.iteritems()])
out += '<p><a href="%(url)s">%(msg)s</a></p>' % {'url': url, 'msg': msg}
return out
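The back-link above builds its query string by joining pre-quoted `key=value` pairs with `&`. A minimal Python 3 sketch of the same construction (the helper name is ours, not Invenio's):

```python
from urllib.parse import quote

def build_citesummary_back_url(site_url, searchpattern):
    """Sketch of the URL built by tmpl_citesummary_back_link.

    The pattern is percent-encoded exactly once with quote() and the
    pairs are joined by hand, mirroring the Python 2 iteritems() join.
    """
    params = {'ln': 'en', 'p': quote(searchpattern), 'of': 'hcs'}
    return site_url + '/search?' + '&'.join('%s=%s' % kv for kv in params.items())
```

A pattern such as `ellis, j` ends up percent-encoded (`ellis%2C%20j`) in the final link.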
def tmpl_citesummary_more_links(self, searchpattern, ln=CFG_SITE_LANG):
_ = gettext_set_language(ln)
out = ''
msg = _('<p><a href="%(url)s">%(msg)s</a></p>')
params = {'ln': ln,
'p': quote(searchpattern),
'of': 'hcs2'}
url = CFG_SITE_URL + '/search?' + \
'&amp;'.join(['='.join(i) for i in params.iteritems()])
out += msg % {'url': url,
'msg': _('Exclude self-citations')}
return out
def tmpl_citesummary_prologue(self, d_recids, collections, search_patterns,
searchfield, citable_recids, total_count,
ln=CFG_SITE_LANG):
"""HTML citesummary format, prologue. A part of HCS format suite."""
_ = gettext_set_language(ln)
out = """<table id="citesummary">
<tr>
<td>
<strong class="headline">%(msg_title)s</strong>
</td>""" % \
{'msg_title': _("Citation summary results"), }
for coll, dummy in collections:
out += '<td align="right">%s</td>' % _(coll)
out += '</tr>'
out += """<tr><td><strong>%(msg_recs)s</strong></td>""" % \
{'msg_recs': _("Total number of papers analyzed:"), }
for coll, colldef in collections:
link_url = CFG_SITE_URL + '/search?p='
if search_patterns[coll]:
p = search_patterns[coll]
if searchfield:
if " " in p:
p = searchfield + ':"' + p + '"'
else:
p = searchfield + ':' + p
link_url += quote(p)
if colldef:
link_url += '%20AND%20' + quote(colldef)
link_text = self.tmpl_nice_number(len(d_recids[coll]), ln)
out += '<td align="right"><a href="%s">%s</a></td>' % (link_url,
link_text)
out += '</tr>'
return out
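tmpl_citesummary_prologue wraps multi-word patterns in double quotes before prefixing the search field. A sketch of just that branch (the helper name is hypothetical):

```python
def fielded_pattern(p, searchfield=''):
    # Phrases containing a space are quoted so the search engine
    # treats them as a single term; single words are prefixed directly.
    if searchfield:
        if ' ' in p:
            return searchfield + ':"' + p + '"'
        return searchfield + ':' + p
    return p
```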
def tmpl_citesummary_overview(self, collections, d_total_cites,
d_avg_cites, ln=CFG_SITE_LANG):
"""HTML citesummary format, overview. A part of HCS format suite."""
_ = gettext_set_language(ln)
out = """<tr><td><strong>%(msg_cites)s</strong></td>""" % \
{'msg_cites': _("Total number of citations:"), }
for coll, dummy in collections:
total_cites = d_total_cites[coll]
out += '<td align="right">%s</td>' % \
self.tmpl_nice_number(total_cites, ln)
out += '</tr>'
out += """<tr><td><strong>%(msg_avgcit)s</strong></td>""" % \
{'msg_avgcit': _("Average citations per paper:"), }
for coll, dummy in collections:
avg_cites = d_avg_cites[coll]
out += '<td align="right">%.1f</td>' % avg_cites
out += '</tr>'
return out
def tmpl_citesummary_minus_self_cites(self, d_total_cites, d_avg_cites,
ln=CFG_SITE_LANG):
"""HTML citesummary format, overview. A part of HCS format suite."""
_ = gettext_set_language(ln)
msg = _("Total number of citations excluding self-citations")
out = """<tr><td><strong>%(msg_cites)s</strong>""" % \
{'msg_cites': msg, }
# use '?' help links in the style of oai_repository_admin.py
msg = ' <small><small>[<a href="%s%s">?</a>]</small></small></td>'
out += msg % (CFG_SITE_URL,
'/help/citation-metrics#citesummary_self-cites')
for total_cites in d_total_cites.values():
out += '<td align="right">%s</td>' % \
self.tmpl_nice_number(total_cites, ln)
out += '</tr>'
msg = _("Average citations per paper excluding self-citations")
out += """<tr><td><strong>%(msg_avgcit)s</strong>""" % \
{'msg_avgcit': msg, }
# use '?' help links in the style of oai_repository_admin.py
msg = ' <small><small>[<a href="%s%s">?</a>]</small></small></td>'
out += msg % (CFG_SITE_URL,
'/help/citation-metrics#citesummary_self-cites')
for avg_cites in d_avg_cites.itervalues():
out += '<td align="right">%.1f</td>' % avg_cites
out += '</tr>'
return out
def tmpl_citesummary_footer(self):
return ''
def tmpl_citesummary_breakdown_header(self, ln=CFG_SITE_LANG):
_ = gettext_set_language(ln)
return """<tr><td><strong>%(msg_breakdown)s</strong></td></tr>""" % \
{'msg_breakdown': _("Breakdown of papers by citations:"), }
def tmpl_citesummary_breakdown_by_fame(self, d_cites, low, high, fame,
l_colls, searchpatterns,
searchfield, ln=CFG_SITE_LANG):
"""HTML citesummary format, breakdown by fame.
A part of HCS format suite."""
_ = gettext_set_language(ln)
out = """<tr><td>%(fame)s</td>""" % \
{'fame': _(fame), }
for coll, colldef in l_colls:
link_url = CFG_SITE_URL + '/search?p='
if searchpatterns.get(coll, None):
p = searchpatterns.get(coll, None)
if searchfield:
if " " in p:
p = searchfield + ':"' + p + '"'
else:
p = searchfield + ':' + p
link_url += quote(p) + '%20AND%20'
if colldef:
link_url += quote(colldef) + '%20AND%20'
if low == 0 and high == 0:
link_url += quote('cited:0')
else:
link_url += quote('cited:%i->%i' % (low, high))
link_text = self.tmpl_nice_number(d_cites[coll], ln)
out += '<td align="right"><a href="%s">%s</a></td>' % (link_url,
link_text)
out += '</tr>'
return out
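The breakdown-by-fame rows above append a citation-count clause to each link. A sketch of that clause construction in Python 3 (function name is ours):

```python
from urllib.parse import quote

def cited_range_clause(low, high):
    # Uncited papers get the special 'cited:0' clause; every other
    # fame band gets a 'cited:low->high' range, percent-encoded
    # for inclusion in the link URL.
    if low == 0 and high == 0:
        return quote('cited:0')
    return quote('cited:%i->%i' % (low, high))
```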
def tmpl_citesummary_h_index(self, collections,
d_h_factors, ln=CFG_SITE_LANG):
"""HTML citesummary format, h factor output. A part of the HCS suite."""
_ = gettext_set_language(ln)
out = "<tr><td></td></tr><tr><td><strong>%(msg_metrics)s</strong> <small><small>[<a href=\"%(help_url)s\">?</a>]</small></small></td></tr>" % \
{'msg_metrics': _("Citation metrics"),
'help_url': CFG_SITE_URL + '/help/citation-metrics', }
out += '<tr><td>h-index'
# use '?' help links in the style of oai_repository_admin.py
msg = ' <small><small>[<a href="%s%s">?</a>]</small></small></td>'
out += msg % (CFG_SITE_URL,
'/help/citation-metrics#citesummary_h-index')
for coll, dummy in collections:
h_factors = d_h_factors[coll]
out += '<td align="right">%s</td>' % \
self.tmpl_nice_number(h_factors, ln)
out += '</tr>'
return out
def tmpl_citesummary_epilogue(self, ln=CFG_SITE_LANG):
"""HTML citesummary format, epilogue. A part of HCS format suite."""
out = "</table>"
return out
def tmpl_unapi(self, formats, identifier=None):
"""
Provide a list of object format available from the unAPI service
for the object identified by IDENTIFIER
"""
out = '<?xml version="1.0" encoding="UTF-8" ?>\n'
if identifier:
out += '<formats id="%i">\n' % (identifier)
else:
out += "<formats>\n"
for format_name, format_type in formats.iteritems():
docs = ''
if format_name == 'xn':
docs = 'http://www.nlm.nih.gov/databases/dtd/'
format_type = 'application/xml'
format_name = 'nlm'
elif format_name == 'xm':
docs = 'http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd'
format_type = 'application/xml'
format_name = 'marcxml'
elif format_name == 'xr':
format_type = 'application/rss+xml'
docs = 'http://www.rssboard.org/rss-2-0/'
elif format_name == 'xw':
format_type = 'application/xml'
docs = 'http://www.refworks.com/RefWorks/help/RefWorks_Tagged_Format.htm'
elif format_name == 'xoaidc':
format_type = 'application/xml'
docs = 'http://www.openarchives.org/OAI/2.0/oai_dc.xsd'
elif format_name == 'xe':
format_type = 'application/xml'
docs = 'http://www.endnote.com/support/'
format_name = 'endnote'
elif format_name == 'xd':
format_type = 'application/xml'
docs = 'http://dublincore.org/schemas/'
format_name = 'dc'
elif format_name == 'xo':
format_type = 'application/xml'
docs = 'http://www.loc.gov/standards/mods/v3/mods-3-3.xsd'
format_name = 'mods'
if docs:
out += '<format name="%s" type="%s" docs="%s" />\n' % (xml_escape(format_name), xml_escape(format_type), xml_escape(docs))
else:
out += '<format name="%s" type="%s" />\n' % (xml_escape(format_name), xml_escape(format_type))
out += "</formats>"
return out
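tmpl_unapi above emits a small `<formats>` XML document. A minimal Python 3 sketch of that output, without the per-format name/docs rewriting (helper name is ours):

```python
from xml.sax.saxutils import escape

def unapi_formats_xml(formats, identifier=None):
    """Minimal sketch of the unAPI <formats> document emitted by
    tmpl_unapi; format names and MIME types are XML-escaped."""
    out = '<?xml version="1.0" encoding="UTF-8" ?>\n'
    if identifier is not None:
        out += '<formats id="%i">\n' % identifier
    else:
        out += '<formats>\n'
    for name, ctype in sorted(formats.items()):
        out += '<format name="%s" type="%s" />\n' % (escape(name), escape(ctype))
    return out + '</formats>'
```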
diff --git a/modules/websearch/lib/websearch_webinterface.py b/modules/websearch/lib/websearch_webinterface.py
index 5616fec05..0bb919fbe 100644
--- a/modules/websearch/lib/websearch_webinterface.py
+++ b/modules/websearch/lib/websearch_webinterface.py
@@ -1,1141 +1,1151 @@
## This file is part of Invenio.
## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""WebSearch URL handler."""
__revision__ = "$Id$"
import cgi
import os
import datetime
import time
import sys
from urllib import quote
from invenio import webinterface_handler_config as apache
import threading
#maximum number of collaborating authors etc shown in GUI
MAX_COLLAB_LIST = 10
MAX_KEYWORD_LIST = 10
MAX_VENUE_LIST = 10
#tag constants
AUTHOR_TAG = "100__a"
AUTHOR_INST_TAG = "100__u"
COAUTHOR_TAG = "700__a"
COAUTHOR_INST_TAG = "700__u"
VENUE_TAG = "909C4p"
KEYWORD_TAG = "695__a"
FKEYWORD_TAG = "6531_a"
CFG_INSPIRE_UNWANTED_KEYWORDS_START = ['talk',
'conference',
'conference proceedings',
'numerical calculations',
'experimental results',
'review',
'bibliography',
'upper limit',
'lower limit',
'tables',
'search for',
'on-shell',
'off-shell',
'formula',
'lectures',
'book',
'thesis']
CFG_INSPIRE_UNWANTED_KEYWORDS_MIDDLE = ['GeV',
'((']
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
from invenio.config import \
CFG_SITE_URL, \
CFG_SITE_NAME, \
CFG_CACHEDIR, \
CFG_SITE_LANG, \
CFG_SITE_SECURE_URL, \
CFG_BIBRANK_SHOW_DOWNLOAD_STATS, \
CFG_WEBSEARCH_INSTANT_BROWSE_RSS, \
CFG_WEBSEARCH_RSS_TTL, \
CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS, \
CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, \
CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES, \
CFG_WEBDIR, \
CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS, \
CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS, \
CFG_WEBSEARCH_USE_ALEPH_SYSNOS, \
CFG_WEBSEARCH_RSS_I18N_COLLECTIONS, \
CFG_INSPIRE_SITE, \
CFG_WEBSEARCH_WILDCARD_LIMIT, \
CFG_SITE_RECORD
from invenio.dbquery import Error
from invenio.webinterface_handler import wash_urlargd, WebInterfaceDirectory
from invenio.urlutils import redirect_to_url, make_canonical_urlargd, drop_default_urlargd
from invenio.htmlutils import get_mathjax_header
from invenio.htmlutils import nmtoken_from_string
from invenio.webuser import getUid, page_not_authorized, get_user_preferences, \
collect_user_info, logoutUser, isUserSuperAdmin
from invenio.webcomment_webinterface import WebInterfaceCommentsPages
from invenio.weblinkback_webinterface import WebInterfaceRecordLinkbacksPages
from invenio.bibcirculation_webinterface import WebInterfaceHoldingsPages
from invenio.webpage import page, pageheaderonly, create_error_box
from invenio.messages import gettext_set_language
from invenio.search_engine import check_user_can_view_record, \
collection_reclist_cache, \
collection_restricted_p, \
create_similarly_named_authors_link_box, \
get_colID, \
get_coll_i18nname, \
get_most_popular_field_values, \
get_mysql_recid_from_aleph_sysno, \
guess_primary_collection_of_a_record, \
page_end, \
page_start, \
perform_request_cache, \
perform_request_log, \
perform_request_search, \
restricted_collection_cache, \
get_coll_normalised_name, \
EM_REPOSITORY
from invenio.websearch_webcoll import perform_display_collection
from invenio.search_engine_utils import get_fieldvalues, \
get_fieldvalues_alephseq_like
from invenio.access_control_engine import acc_authorize_action
from invenio.access_control_config import VIEWRESTRCOLL
from invenio.access_control_mailcookie import mail_cookie_create_authorize_action
from invenio.bibformat import format_records
from invenio.bibformat_engine import get_output_formats
from invenio.websearch_webcoll import get_collection
from invenio.intbitset import intbitset
from invenio.bibupload import find_record_from_sysno
from invenio.bibrank_citation_searcher import get_cited_by_list
from invenio.bibrank_downloads_indexer import get_download_weight_total
from invenio.search_engine_summarizer import summarize_records
from invenio.errorlib import register_exception
from invenio.bibedit_webinterface import WebInterfaceEditPages
from invenio.bibeditmulti_webinterface import WebInterfaceMultiEditPages
from invenio.bibmerge_webinterface import WebInterfaceMergePages
from invenio.bibdocfile_webinterface import WebInterfaceManageDocFilesPages, WebInterfaceFilesPages
from invenio.search_engine import get_record
from invenio.shellutils import mymkdir
import invenio.template
websearch_templates = invenio.template.load('websearch')
search_results_default_urlargd = websearch_templates.search_results_default_urlargd
search_interface_default_urlargd = websearch_templates.search_interface_default_urlargd
try:
output_formats = [output_format['attrs']['code'].lower() for output_format in \
get_output_formats(with_attributes=True).values()]
except KeyError:
output_formats = ['xd', 'xm', 'hd', 'hb', 'hs', 'hx']
output_formats.extend(['hm', 't', 'h'])
def wash_search_urlargd(form):
"""
Create canonical search arguments from those passed via web form.
"""
argd = wash_urlargd(form, search_results_default_urlargd)
if argd.has_key('as'):
argd['aas'] = argd['as']
del argd['as']
if argd.get('aas', CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE) not in CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES:
argd['aas'] = CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE
# Sometimes, users pass ot=245,700 instead of
# ot=245&ot=700. Normalize that.
ots = []
for ot in argd['ot']:
ots += ot.split(',')
argd['ot'] = ots
# We can get the mode of operation either as
# action=<browse|search>, or by setting action_browse or
# action_search.
if argd['action_browse']:
argd['action'] = 'browse'
elif argd['action_search']:
argd['action'] = 'search'
else:
if argd['action'] not in ('browse', 'search'):
argd['action'] = 'search'
del argd['action_browse']
del argd['action_search']
if argd['em'] != "":
argd['em'] = argd['em'].split(",")
return argd
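The `ot` normalization inside wash_search_urlargd flattens comma-separated values into one list. A self-contained sketch of that loop:

```python
def normalize_ot(values):
    # Users sometimes pass ot=245,700 instead of repeating the
    # parameter (ot=245&ot=700); split every value on commas.
    ots = []
    for ot in values:
        ots += ot.split(',')
    return ots
```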
class WebInterfaceUnAPIPages(WebInterfaceDirectory):
""" Handle /unapi set of pages."""
_exports = ['']
def __call__(self, req, form):
argd = wash_urlargd(form, {
'id' : (int, 0),
'format' : (str, '')})
formats_dict = get_output_formats(True)
formats = {}
for format in formats_dict.values():
if format['attrs']['visibility']:
formats[format['attrs']['code'].lower()] = format['attrs']['content_type']
del formats_dict
if argd['id'] and argd['format']:
## Translate back common format names
format = {
'nlm' : 'xn',
'marcxml' : 'xm',
'dc' : 'xd',
'endnote' : 'xe',
'mods' : 'xo'
}.get(argd['format'], argd['format'])
if format in formats:
redirect_to_url(req, '%s/%s/%s/export/%s' % (CFG_SITE_URL, CFG_SITE_RECORD, argd['id'], format))
else:
raise apache.SERVER_RETURN, apache.HTTP_NOT_ACCEPTABLE
elif argd['id']:
return websearch_templates.tmpl_unapi(formats, identifier=argd['id'])
else:
return websearch_templates.tmpl_unapi(formats)
index = __call__
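WebInterfaceUnAPIPages translates public unAPI format names back to Invenio's internal output-format codes via a small dict lookup. The same mapping as a standalone sketch (function name is hypothetical):

```python
def resolve_unapi_format(requested):
    # Public unAPI names map back to internal output-format codes;
    # unknown names fall through unchanged and are rejected later
    # if they are not among the registered formats.
    return {'nlm': 'xn', 'marcxml': 'xm', 'dc': 'xd',
            'endnote': 'xe', 'mods': 'xo'}.get(requested, requested)
```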
class WebInterfaceRecordPages(WebInterfaceDirectory):
""" Handling of a /CFG_SITE_RECORD/<recid> URL fragment """
_exports = ['', 'files', 'reviews', 'comments', 'usage',
'references', 'export', 'citations', 'holdings', 'edit',
'keywords', 'multiedit', 'merge', 'plots', 'linkbacks']
#_exports.extend(output_formats)
def __init__(self, recid, tab, format=None):
self.recid = recid
self.tab = tab
self.format = format
self.files = WebInterfaceFilesPages(self.recid)
self.reviews = WebInterfaceCommentsPages(self.recid, reviews=1)
self.comments = WebInterfaceCommentsPages(self.recid)
self.usage = self
self.references = self
self.keywords = self
self.holdings = WebInterfaceHoldingsPages(self.recid)
self.citations = self
self.plots = self
self.export = WebInterfaceRecordExport(self.recid, self.format)
self.edit = WebInterfaceEditPages(self.recid)
self.merge = WebInterfaceMergePages(self.recid)
self.linkbacks = WebInterfaceRecordLinkbacksPages(self.recid)
return
def __call__(self, req, form):
argd = wash_search_urlargd(form)
argd['recid'] = self.recid
argd['tab'] = self.tab
if self.format is not None:
argd['of'] = self.format
req.argd = argd
uid = getUid(req)
if uid == -1:
return page_not_authorized(req, "../",
text="You are not authorized to view this record.",
navmenuid='search')
elif uid > 0:
pref = get_user_preferences(uid)
try:
if not form.has_key('rg'):
# fetch user rg preference only if not overridden via URL
argd['rg'] = int(pref['websearch_group_records'])
except (KeyError, ValueError):
pass
user_info = collect_user_info(req)
(auth_code, auth_msg) = check_user_can_view_record(user_info, self.recid)
if argd['rg'] > CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS and acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['rg'] = CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS
# check if the user has the right to set a high wildcard limit;
# if not, cap the user-set limit at the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
if acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
# only superadmins can use the verbose parameter to obtain debug information
if not isUserSuperAdmin(user_info):
argd['verbose'] = 0
if auth_code and user_info['email'] == 'guest':
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : guess_primary_collection_of_a_record(self.recid)})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
from invenio.search_engine import record_exists, get_merged_recid
# check whether the current record has been deleted
# and merged, in which case the deleted record
# is redirected to the new one
record_status = record_exists(argd['recid'])
merged_recid = get_merged_recid(argd['recid'])
if record_status == -1 and merged_recid:
url = CFG_SITE_URL + '/' + CFG_SITE_RECORD + '/%s?ln=%s'
url %= (str(merged_recid), argd['ln'])
redirect_to_url(req, url)
elif record_status == -1:
req.status = apache.HTTP_GONE ## The record is gone!
# mod_python does not like returning [] when of=id:
out = perform_request_search(req, **argd)
- if out == []:
+ if isinstance(out, intbitset):
+ return out.fastdump()
+ elif out == []:
return str(out)
else:
return out
# Return the same page whether we ask for /CFG_SITE_RECORD/123 or /CFG_SITE_RECORD/123/
index = __call__
class WebInterfaceRecordRestrictedPages(WebInterfaceDirectory):
""" Handling of a /record-restricted/<recid> URL fragment """
_exports = ['', 'files', 'reviews', 'comments', 'usage',
'references', 'export', 'citations', 'holdings', 'edit',
'keywords', 'multiedit', 'merge', 'plots', 'linkbacks']
#_exports.extend(output_formats)
def __init__(self, recid, tab, format=None):
self.recid = recid
self.tab = tab
self.format = format
self.files = WebInterfaceFilesPages(self.recid)
self.reviews = WebInterfaceCommentsPages(self.recid, reviews=1)
self.comments = WebInterfaceCommentsPages(self.recid)
self.usage = self
self.references = self
self.keywords = self
self.holdings = WebInterfaceHoldingsPages(self.recid)
self.citations = self
self.plots = self
self.export = WebInterfaceRecordExport(self.recid, self.format)
self.edit = WebInterfaceEditPages(self.recid)
self.merge = WebInterfaceMergePages(self.recid)
self.linkbacks = WebInterfaceRecordLinkbacksPages(self.recid)
return
def __call__(self, req, form):
argd = wash_search_urlargd(form)
argd['recid'] = self.recid
if self.format is not None:
argd['of'] = self.format
req.argd = argd
uid = getUid(req)
user_info = collect_user_info(req)
if uid == -1:
return page_not_authorized(req, "../",
text="You are not authorized to view this record.",
navmenuid='search')
elif uid > 0:
pref = get_user_preferences(uid)
try:
if not form.has_key('rg'):
# fetch user rg preference only if not overridden via URL
argd['rg'] = int(pref['websearch_group_records'])
except (KeyError, ValueError):
pass
if argd['rg'] > CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS and acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['rg'] = CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS
# check if the user has the right to set a high wildcard limit;
# if not, cap the user-set limit at the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
if acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
# only superadmins can use the verbose parameter to obtain debug information
if not isUserSuperAdmin(user_info):
argd['verbose'] = 0
record_primary_collection = guess_primary_collection_of_a_record(self.recid)
if collection_restricted_p(record_primary_collection):
(auth_code, dummy) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=record_primary_collection)
if auth_code:
return page_not_authorized(req, "../",
text="You are not authorized to view this record.",
navmenuid='search')
# Keep all the arguments; they might be reused in the
# record page itself to derive other queries
req.argd = argd
# mod_python does not like returning [] when of=id:
out = perform_request_search(req, **argd)
- if out == []:
+ if isinstance(out, intbitset):
+ return out.fastdump()
+ elif out == []:
return str(out)
else:
return out
# Return the same page whether we ask for /CFG_SITE_RECORD/123 or /CFG_SITE_RECORD/123/
index = __call__
class WebInterfaceSearchResultsPages(WebInterfaceDirectory):
""" Handling of the /search URL and its sub-pages. """
_exports = ['', 'authenticate', 'cache', 'log']
def __call__(self, req, form):
""" Perform a search. """
argd = wash_search_urlargd(form)
_ = gettext_set_language(argd['ln'])
if req.method == 'POST':
raise apache.SERVER_RETURN, apache.HTTP_METHOD_NOT_ALLOWED
uid = getUid(req)
user_info = collect_user_info(req)
if uid == -1:
return page_not_authorized(req, "../",
text=_("You are not authorized to view this area."),
navmenuid='search')
elif uid > 0:
pref = get_user_preferences(uid)
try:
if not form.has_key('rg'):
# fetch user rg preference only if not overridden via URL
argd['rg'] = int(pref['websearch_group_records'])
except (KeyError, ValueError):
pass
if argd['rg'] > CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS and acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['rg'] = CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS
involved_collections = set()
involved_collections.update(argd['c'])
involved_collections.add(argd['cc'])
if argd['id'] > 0:
argd['recid'] = argd['id']
if argd['idb'] > 0:
argd['recidb'] = argd['idb']
if argd['sysno']:
tmp_recid = find_record_from_sysno(argd['sysno'])
if tmp_recid:
argd['recid'] = tmp_recid
if argd['sysnb']:
tmp_recid = find_record_from_sysno(argd['sysnb'])
if tmp_recid:
argd['recidb'] = tmp_recid
if argd['recid'] > 0:
if argd['recidb'] > argd['recid']:
# Hack to check whether the range contains at least one
# record from a restricted collection, and whether the
# user is authorized for that collection.
recids = intbitset(xrange(argd['recid'], argd['recidb']))
restricted_collection_cache.recreate_cache_if_needed()
for collname in restricted_collection_cache.cache:
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=collname)
if auth_code and user_info['email'] == 'guest':
coll_recids = get_collection(collname).reclist
if coll_recids & recids:
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : collname})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
else:
involved_collections.add(guess_primary_collection_of_a_record(argd['recid']))
# If any of the collection requires authentication, redirect
# to the authentication form.
for coll in involved_collections:
if collection_restricted_p(coll):
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=coll)
if auth_code and user_info['email'] == 'guest':
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : coll})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
# check if the user has the right to set a high wildcard limit;
# if not, cap the user-set limit at the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
auth_code, auth_message = acc_authorize_action(req, 'runbibedit')
if auth_code != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
# only superadmins can use the verbose parameter to obtain debug information
if not isUserSuperAdmin(user_info):
argd['verbose'] = 0
# Keep all the arguments; they might be reused in the
# search_engine itself to derive other queries
req.argd = argd
# mod_python does not like to return [] in case when of=id:
out = perform_request_search(req, **argd)
- if out == []:
+ if isinstance(out, intbitset):
+ return out.fastdump()
+ elif out == []:
return str(out)
else:
return out
def cache(self, req, form):
"""Search cache page."""
argd = wash_urlargd(form, {'action': (str, 'show')})
return perform_request_cache(req, action=argd['action'])
def log(self, req, form):
"""Search log page."""
argd = wash_urlargd(form, {'date': (str, '')})
return perform_request_log(req, date=argd['date'])
def authenticate(self, req, form):
"""Restricted search results pages."""
argd = wash_search_urlargd(form)
user_info = collect_user_info(req)
for coll in argd['c'] + [argd['cc']]:
if collection_restricted_p(coll):
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=coll)
if auth_code and user_info['email'] == 'guest':
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : coll})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
# check if the user has the right to set a high wildcard limit;
# if not, cap the user-set limit at the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
auth_code, auth_message = acc_authorize_action(req, 'runbibedit')
if auth_code != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
# only superadmins can use the verbose parameter to obtain debug information
if not isUserSuperAdmin(user_info):
argd['verbose'] = 0
# Keep all the arguments; they might be reused in the
# search_engine itself to derive other queries
req.argd = argd
uid = getUid(req)
if uid > 0:
pref = get_user_preferences(uid)
try:
if not form.has_key('rg'):
# fetch user rg preference only if not overridden via URL
argd['rg'] = int(pref['websearch_group_records'])
except (KeyError, ValueError):
pass
# mod_python does not like returning [] when of=id:
out = perform_request_search(req, **argd)
- if out == []:
+ if isinstance(out, intbitset):
+ return out.fastdump()
+ elif out == []:
return str(out)
else:
return out
index = __call__
class WebInterfaceLegacySearchPages(WebInterfaceDirectory):
""" Handling of the /search.py URL and its sub-pages. """
_exports = ['', ('authenticate', 'index')]
def __call__(self, req, form):
""" Perform a search. """
argd = wash_search_urlargd(form)
# We either jump into the generic search form, or the specific
# /CFG_SITE_RECORD/... display if a recid is requested
if argd['recid'] != -1:
target = '/%s/%d' % (CFG_SITE_RECORD, argd['recid'])
del argd['recid']
else:
target = '/search'
target += make_canonical_urlargd(argd, search_results_default_urlargd)
return redirect_to_url(req, target, apache.HTTP_MOVED_PERMANENTLY)
index = __call__
# Parameters for the legacy URLs, of the form /?c=ALEPH
legacy_collection_default_urlargd = {
'as': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'aas': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE),
'verbose': (int, 0),
'c': (str, CFG_SITE_NAME)}
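For reference, a minimal standalone sketch of how a `(type, default)` table like `legacy_collection_default_urlargd` is typically applied to a submitted form (an assumption about `wash_urlargd`'s semantics, not Invenio's actual implementation):

```python
def wash(form, table):
    """Coerce each known form value with its declared type; fall back to
    the declared default when the key is missing or the cast fails."""
    washed = {}
    for key, (cast, default) in table.items():
        try:
            washed[key] = cast(form[key])
        except (KeyError, ValueError):
            washed[key] = default
    return washed
```

For example, `wash({'verbose': '2'}, {'verbose': (int, 0), 'c': (str, 'Atlantis')})` coerces `'2'` to the integer `2` and fills in the missing `'c'` with its default.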
class WebInterfaceSearchInterfacePages(WebInterfaceDirectory):
""" Handling of collection navigation."""
_exports = [('index.py', 'legacy_collection'),
('', 'legacy_collection'),
('search.py', 'legacy_search'),
'search', 'openurl',
'opensearchdescription', 'logout_SSO_hook']
search = WebInterfaceSearchResultsPages()
legacy_search = WebInterfaceLegacySearchPages()
def logout_SSO_hook(self, req, form):
"""Script triggered by the display of the centralized SSO logout
dialog. It logs the user out of Invenio and streams back the
expected image."""
logoutUser(req)
req.content_type = 'image/gif'
req.encoding = None
req.filename = 'wsignout.gif'
req.headers_out["Content-Disposition"] = "inline; filename=wsignout.gif"
req.set_content_length(os.path.getsize('%s/img/wsignout.gif' % CFG_WEBDIR))
req.send_http_header()
req.sendfile('%s/img/wsignout.gif' % CFG_WEBDIR)
def _lookup(self, component, path):
""" This handler is invoked for the dynamic URLs (for
collections and records)"""
if component == 'collection':
c = '/'.join(path)
def answer(req, form):
"""Accessing collections cached pages."""
# Accessing collections: this is for accessing the
# cached page on top of each collection.
argd = wash_urlargd(form, search_interface_default_urlargd)
# We simply return the cached page of the collection
argd['c'] = c
if not argd['c']:
# collection argument not present; display
# home collection by default
argd['c'] = CFG_SITE_NAME
# Treat `as' argument specially:
if argd.has_key('as'):
argd['aas'] = argd['as']
del argd['as']
if argd.get('aas', CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE) not in CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES:
argd['aas'] = CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE
return display_collection(req, **argd)
return answer, []
elif component == CFG_SITE_RECORD and path and path[0] == 'merge':
return WebInterfaceMergePages(), path[1:]
elif component == CFG_SITE_RECORD and path and path[0] == 'edit':
return WebInterfaceEditPages(), path[1:]
elif component == CFG_SITE_RECORD and path and path[0] == 'multiedit':
return WebInterfaceMultiEditPages(), path[1:]
elif component == CFG_SITE_RECORD and path and path[0] in ('managedocfiles', 'managedocfilesasync'):
return WebInterfaceManageDocFilesPages(), path
elif component == CFG_SITE_RECORD or component == 'record-restricted':
try:
if CFG_WEBSEARCH_USE_ALEPH_SYSNOS:
# let us try to recognize /<CFG_SITE_RECORD>/<SYSNO> style of URLs:
# check for SYSNOs with an embedded slash; needed for [ARXIVINV-15]
if len(path) > 1 and get_mysql_recid_from_aleph_sysno(path[0] + "/" + path[1]):
path[0] = path[0] + "/" + path[1]
del path[1]
x = get_mysql_recid_from_aleph_sysno(path[0])
if x:
recid = x
else:
recid = int(path[0])
else:
recid = int(path[0])
except IndexError:
# display record #1 for URL /CFG_SITE_RECORD without a number
recid = 1
except ValueError:
if path[0] == '':
# display record #1 for URL /CFG_SITE_RECORD/ without a number
recid = 1
else:
# display page not found for URLs like /CFG_SITE_RECORD/foo
return None, []
from invenio.intbitset import __maxelem__
if recid <= 0 or recid > __maxelem__:
# __maxelem__ = 2147483647
# display page not found for URLs like /CFG_SITE_RECORD/-5 or /CFG_SITE_RECORD/0 or /CFG_SITE_RECORD/2147483649
return None, []
format = None
tab = ''
try:
if path[1] in ['', 'files', 'reviews', 'comments', 'usage',
'references', 'citations', 'holdings', 'edit',
'keywords', 'multiedit', 'merge', 'plots', 'linkbacks']:
tab = path[1]
elif path[1] == 'export':
tab = ''
format = path[2]
# format = None
# elif path[1] in output_formats:
# tab = ''
# format = path[1]
else:
# display page not found for URLs like /CFG_SITE_RECORD/references
# for a collection where the 'references' tab is not visible
return None, []
except IndexError:
# Keep the normal URL if no tab is specified
pass
#if component == 'record-restricted':
#return WebInterfaceRecordRestrictedPages(recid, tab, format), path[1:]
#else:
return WebInterfaceRecordPages(recid, tab, format), path[1:]
elif component == 'sslredirect':
## Fallback solution for sslredirect special path that should
## be rather implemented as an Apache level redirection
def redirecter(req, form):
real_url = "http://" + '/'.join(path)
redirect_to_url(req, real_url)
return redirecter, []
return None, []
def openurl(self, req, form):
""" OpenURL Handler."""
argd = wash_urlargd(form, websearch_templates.tmpl_openurl_accepted_args)
ret_url = websearch_templates.tmpl_openurl2invenio(argd)
if ret_url:
return redirect_to_url(req, ret_url)
else:
return redirect_to_url(req, CFG_SITE_URL)
def opensearchdescription(self, req, form):
"""OpenSearch description file"""
req.content_type = "application/opensearchdescription+xml"
req.send_http_header()
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG),
'verbose': (int, 0) })
return websearch_templates.tmpl_opensearch_description(ln=argd['ln'])
def legacy_collection(self, req, form):
"""Collection URL backward compatibility handling."""
accepted_args = dict(legacy_collection_default_urlargd)
argd = wash_urlargd(form, accepted_args)
# Treat `as' argument specially:
if argd.has_key('as'):
argd['aas'] = argd['as']
del argd['as']
if argd.get('aas', CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE) not in (0, 1):
argd['aas'] = CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE
# If we specify no collection, then we don't need to redirect
# the user, so that accessing <http://yoursite/> returns the
# default collection.
if not form.has_key('c'):
return display_collection(req, **argd)
# make the collection an element of the path, and keep the
# other query elements as is. If the collection is CFG_SITE_NAME,
# however, redirect to the main URL.
c = argd['c']
del argd['c']
if c == CFG_SITE_NAME:
target = '/'
else:
target = '/collection/' + quote(c)
# Treat `as' argument specially:
# We are going to redirect, so replace `aas' by `as' visible argument:
if argd.has_key('aas'):
argd['as'] = argd['aas']
del argd['aas']
target += make_canonical_urlargd(argd, legacy_collection_default_urlargd)
return redirect_to_url(req, target)
def display_collection(req, c, aas, verbose, ln, em=""):
"""Display search interface page for collection c by looking
in the collection cache."""
_ = gettext_set_language(ln)
req.argd = drop_default_urlargd({'aas': aas, 'verbose': verbose, 'ln': ln, 'em' : em},
search_interface_default_urlargd)
if em != "":
em = em.split(",")
# get user ID:
try:
uid = getUid(req)
user_preferences = {}
if uid == -1:
return page_not_authorized(req, "../",
text="You are not authorized to view this collection",
navmenuid='search')
elif uid > 0:
user_preferences = get_user_preferences(uid)
except Error:
register_exception(req=req, alert_admin=True)
return page(title=_("Internal Error"),
body=create_error_box(req, verbose=verbose, ln=ln),
description="%s - Internal Error" % CFG_SITE_NAME,
keywords="%s, Internal Error" % CFG_SITE_NAME,
language=ln,
req=req,
navmenuid='search')
# start display:
req.content_type = "text/html"
req.send_http_header()
# deduce collection id:
colID = get_colID(get_coll_normalised_name(c))
if type(colID) is not int:
page_body = '<p>' + (_("Sorry, collection %s does not seem to exist.") % ('<strong>' + str(c) + '</strong>')) + '</p>'
page_body += '<p>' + (_("You may want to start browsing from %s.") % ('<a href="' + CFG_SITE_URL + '?ln=' + ln + '">' + get_coll_i18nname(CFG_SITE_NAME, ln) + '</a>')) + '</p>'
if req.method == 'HEAD':
raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND
return page(title=_("Collection %s Not Found") % cgi.escape(c),
body=page_body,
description=(CFG_SITE_NAME + ' - ' + _("Not found") + ': ' + cgi.escape(str(c))),
keywords="%s" % CFG_SITE_NAME,
uid=uid,
language=ln,
req=req,
navmenuid='search')
c_body, c_navtrail, c_portalbox_lt, c_portalbox_rt, c_portalbox_tp, c_portalbox_te, \
c_last_updated = perform_display_collection(colID, c, aas, ln, em,
user_preferences.get('websearch_helpbox', 1))
if em == "" or EM_REPOSITORY["body"] in em:
try:
title = get_coll_i18nname(c, ln)
except:
title = ""
else:
title = ""
show_title_p = True
body_css_classes = []
if c == CFG_SITE_NAME:
# Do not display title on home collection
show_title_p = False
body_css_classes.append('home')
if len(collection_reclist_cache.cache.keys()) == 1:
# if there is only one collection defined, do not print its
# title on the page as it would be displayed repetitively.
show_title_p = False
if aas == -1:
show_title_p = False
if CFG_INSPIRE_SITE == 1:
# INSPIRE should never show title, but instead use css to
# style collections
show_title_p = False
body_css_classes.append(nmtoken_from_string(c))
# RSS:
rssurl = CFG_SITE_URL + '/rss'
rssurl_params = []
if c != CFG_SITE_NAME:
rssurl_params.append('cc=' + quote(c))
if ln != CFG_SITE_LANG and \
c in CFG_WEBSEARCH_RSS_I18N_COLLECTIONS:
rssurl_params.append('ln=' + ln)
if rssurl_params:
rssurl += '?' + '&amp;'.join(rssurl_params)
if 'hb' in CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS:
metaheaderadd = get_mathjax_header(req.is_https())
else:
metaheaderadd = ''
return page(title=title,
body=c_body,
navtrail=c_navtrail,
description="%s - %s" % (CFG_SITE_NAME, c),
keywords="%s, %s" % (CFG_SITE_NAME, c),
metaheaderadd=metaheaderadd,
uid=uid,
language=ln,
req=req,
cdspageboxlefttopadd=c_portalbox_lt,
cdspageboxrighttopadd=c_portalbox_rt,
titleprologue=c_portalbox_tp,
titleepilogue=c_portalbox_te,
lastupdated=c_last_updated,
navmenuid='search',
rssurl=rssurl,
body_css_classes=body_css_classes,
show_title_p=show_title_p,
show_header=em == "" or EM_REPOSITORY["header"] in em,
show_footer=em == "" or EM_REPOSITORY["footer"] in em)
class WebInterfaceRSSFeedServicePages(WebInterfaceDirectory):
"""RSS 2.0 feed service pages."""
def __call__(self, req, form):
"""RSS 2.0 feed service."""
# Keep only interesting parameters for the search
default_params = websearch_templates.rss_default_urlargd
# We need to keep 'jrec' and 'rg' here in order to have
# 'multi-page' RSS. These parameters are not kept by default
# as we don't want to consider them when building RSS links
# from search and browse pages.
default_params.update({'jrec':(int, 1),
'rg': (int, CFG_WEBSEARCH_INSTANT_BROWSE_RSS)})
argd = wash_urlargd(form, default_params)
user_info = collect_user_info(req)
for coll in argd['c'] + [argd['cc']]:
if collection_restricted_p(coll):
(auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=coll)
if auth_code and user_info['email'] == 'guest':
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : coll})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
# Create a standard filename with these parameters
current_url = websearch_templates.build_rss_url(argd)
cache_filename = current_url.split('/')[-1]
# In the same way as previously, add 'jrec' & 'rg'
req.content_type = "application/rss+xml"
req.send_http_header()
try:
# Try to read from cache
path = "%s/rss/%s.xml" % (CFG_CACHEDIR, cache_filename)
# Check if cache needs refresh
filedesc = open(path, "r")
last_update_time = datetime.datetime.fromtimestamp(os.stat(os.path.abspath(path)).st_mtime)
assert(datetime.datetime.now() < last_update_time + datetime.timedelta(minutes=CFG_WEBSEARCH_RSS_TTL))
c_rss = filedesc.read()
filedesc.close()
req.write(c_rss)
return
except Exception, e:
# do it live and cache
previous_url = None
if argd['jrec'] > 1:
prev_jrec = argd['jrec'] - argd['rg']
if prev_jrec < 1:
prev_jrec = 1
previous_url = websearch_templates.build_rss_url(argd,
jrec=prev_jrec)
# check if the user has rights to set a high wildcard limit;
# if not, reduce the user-set limit to the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
if acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
req.argd = argd
recIDs = perform_request_search(req, of="id",
c=argd['c'], cc=argd['cc'],
p=argd['p'], f=argd['f'],
p1=argd['p1'], f1=argd['f1'],
m1=argd['m1'], op1=argd['op1'],
p2=argd['p2'], f2=argd['f2'],
m2=argd['m2'], op2=argd['op2'],
p3=argd['p3'], f3=argd['f3'],
m3=argd['m3'], wl=argd['wl'])
nb_found = len(recIDs)
next_url = None
if len(recIDs) >= argd['jrec'] + argd['rg']:
next_url = websearch_templates.build_rss_url(argd,
jrec=(argd['jrec'] + argd['rg']))
first_url = websearch_templates.build_rss_url(argd, jrec=1)
last_url = websearch_templates.build_rss_url(argd, jrec=nb_found - argd['rg'] + 1)
recIDs = recIDs[-argd['jrec']:(-argd['rg'] - argd['jrec']):-1]
rss_prologue = '<?xml version="1.0" encoding="UTF-8"?>\n' + \
websearch_templates.tmpl_xml_rss_prologue(current_url=current_url,
previous_url=previous_url,
next_url=next_url,
first_url=first_url, last_url=last_url,
nb_found=nb_found,
jrec=argd['jrec'], rg=argd['rg'],
cc=argd['cc']) + '\n'
req.write(rss_prologue)
rss_body = format_records(recIDs,
of='xr',
ln=argd['ln'],
user_info=user_info,
record_separator="\n",
req=req, epilogue="\n")
rss_epilogue = websearch_templates.tmpl_xml_rss_epilogue() + '\n'
req.write(rss_epilogue)
# update cache
dirname = "%s/rss" % (CFG_CACHEDIR)
mymkdir(dirname)
fullfilename = "%s/rss/%s.xml" % (CFG_CACHEDIR, cache_filename)
try:
# Remove the file just in case it already existed
# so that a bit of space is created
os.remove(fullfilename)
except OSError:
pass
# Check if there's enough space to cache the request.
if len(os.listdir(dirname)) < CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS:
try:
os.umask(022)
f = open(fullfilename, "w")
f.write(rss_prologue + rss_body + rss_epilogue)
f.close()
except IOError, v:
if v[0] == 36:
# URL was too long. Never mind, don't cache
pass
else:
raise
index = __call__
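The slice `recIDs[-argd['jrec']:(-argd['rg'] - argd['jrec']):-1]` used in the RSS handler above selects one feed page of size `rg`, counting `jrec` from the newest (last) record and walking backwards. A small standalone sketch of that arithmetic (the helper name is illustrative only):

```python
def rss_page(rec_ids, jrec, rg):
    """Return one feed page of up to rg records, newest first, starting
    jrec positions from the end -- mirroring the slice in the handler."""
    return rec_ids[-jrec:(-rg - jrec):-1]
```

With ten ascending record IDs, `rss_page(ids, jrec=1, rg=3)` yields the newest three in reverse order, and `jrec=4` yields the next page.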
class WebInterfaceRecordExport(WebInterfaceDirectory):
""" Handling of a /<CFG_SITE_RECORD>/<recid>/export/<format> URL fragment """
_exports = output_formats
def __init__(self, recid, format=None):
self.recid = recid
self.format = format
for output_format in output_formats:
self.__dict__[output_format] = self
return
def __call__(self, req, form):
argd = wash_search_urlargd(form)
argd['recid'] = self.recid
if self.format is not None:
argd['of'] = self.format
req.argd = argd
uid = getUid(req)
if uid == -1:
return page_not_authorized(req, "../",
text="You are not authorized to view this record.",
navmenuid='search')
elif uid > 0:
pref = get_user_preferences(uid)
try:
if not form.has_key('rg'):
# fetch user rg preference only if not overridden via URL
argd['rg'] = int(pref['websearch_group_records'])
except (KeyError, ValueError):
pass
# Check if the record belongs to a restricted primary
# collection. If yes, redirect to the authenticated URL.
user_info = collect_user_info(req)
(auth_code, auth_msg) = check_user_can_view_record(user_info, self.recid)
if argd['rg'] > CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS and acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['rg'] = CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS
# check if the user has rights to set a high wildcard limit;
# if not, reduce the user-set limit to the default one
if CFG_WEBSEARCH_WILDCARD_LIMIT > 0 and (argd['wl'] > CFG_WEBSEARCH_WILDCARD_LIMIT or argd['wl'] == 0):
if acc_authorize_action(req, 'runbibedit')[0] != 0:
argd['wl'] = CFG_WEBSEARCH_WILDCARD_LIMIT
# only superadmins can use verbose parameter for obtaining debug information
if not isUserSuperAdmin(user_info):
argd['verbose'] = 0
if auth_code and user_info['email'] == 'guest':
cookie = mail_cookie_create_authorize_action(VIEWRESTRCOLL, {'collection' : guess_primary_collection_of_a_record(self.recid)})
target = CFG_SITE_SECURE_URL + '/youraccount/login' + \
make_canonical_urlargd({'action': cookie, 'ln' : argd['ln'], 'referer' : CFG_SITE_SECURE_URL + req.unparsed_uri}, {})
return redirect_to_url(req, target, norobot=True)
elif auth_code:
return page_not_authorized(req, "../", \
text=auth_msg, \
navmenuid='search')
# mod_python does not like us returning [] when of=id:
out = perform_request_search(req, **argd)
- if out == []:
+ if isinstance(out, intbitset):
+ return out.fastdump()
+ elif out == []:
return str(out)
else:
return out
# Return the same page whether we ask for /CFG_SITE_RECORD/123/export/xm or /CFG_SITE_RECORD/123/export/xm/
index = __call__
diff --git a/modules/websearch/lib/websearchadminlib.py b/modules/websearch/lib/websearchadminlib.py
index 7e0ec9730..50893512e 100644
--- a/modules/websearch/lib/websearchadminlib.py
+++ b/modules/websearch/lib/websearchadminlib.py
@@ -1,3536 +1,3535 @@
## This file is part of Invenio.
## Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
# pylint: disable=C0301
"""Invenio WebSearch Administrator Interface."""
__revision__ = "$Id$"
import cgi
import random
import time
import sys
from invenio.dateutils import strftime
if sys.hexversion < 0x2040000:
# pylint: disable=W0622
from sets import Set as set
# pylint: enable=W0622
from invenio.config import \
CFG_CACHEDIR, \
CFG_SITE_LANG, \
CFG_SITE_NAME, \
CFG_SITE_URL,\
CFG_WEBCOMMENT_ALLOW_COMMENTS, \
CFG_WEBSEARCH_SHOW_COMMENT_COUNT, \
CFG_WEBCOMMENT_ALLOW_REVIEWS, \
CFG_WEBSEARCH_SHOW_REVIEW_COUNT, \
CFG_BIBRANK_SHOW_CITATION_LINKS, \
CFG_INSPIRE_SITE, \
CFG_CERN_SITE
from invenio.bibrankadminlib import \
write_outcome, \
modify_translations, \
get_def_name, \
get_name, \
get_languages, \
addadminbox, \
tupletotable, \
createhiddenform
from invenio.dbquery import \
run_sql, \
get_table_update_time
from invenio.websearch_external_collections import \
external_collections_dictionary, \
external_collection_sort_engine_by_name, \
external_collection_get_state, \
external_collection_get_update_state_list, \
external_collection_apply_changes
from invenio.websearch_external_collections_utils import \
get_collection_descendants
from invenio.websearch_external_collections_config import CFG_EXTERNAL_COLLECTION_STATES_NAME
#from invenio.bibformat_elements import bfe_references
#from invenio.bibformat_engine import BibFormatObject
from invenio.bibdocfile import BibRecDocs
from invenio.messages import gettext_set_language
#from invenio.bibrank_citation_searcher import get_cited_by
from invenio.access_control_admin import acc_get_action_id
from invenio.access_control_config import VIEWRESTRCOLL
from invenio.errorlib import register_exception
from invenio.intbitset import intbitset
from invenio.bibrank_citation_searcher import get_cited_by_count
from invenio.bibrecord import record_get_field_instances
def getnavtrail(previous = ''):
"""Get the navtrail"""
navtrail = """<a class="navtrail" href="%s/help/admin">Admin Area</a> """ % (CFG_SITE_URL,)
navtrail = navtrail + previous
return navtrail
def fix_collection_scores():
"""
Re-calculate and re-normalize the scores of the collection relationships.
"""
for id_dad in intbitset(run_sql("SELECT id_dad FROM collection_collection")):
for index, id_son in enumerate(run_sql("SELECT id_son FROM collection_collection WHERE id_dad=%s ORDER BY score DESC", (id_dad, ))):
run_sql("UPDATE collection_collection SET score=%s WHERE id_dad=%s AND id_son=%s", (index * 10 + 10, id_dad, id_son[0]))
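A minimal in-memory sketch of the renormalization `fix_collection_scores()` performs per parent, mechanically mirroring the SQL above (sons ordered by current score descending are assigned 10, 20, 30, ...); the helper name is illustrative only:

```python
def renormalize(scores):
    """Map each son id to its new score: its position in the
    score-descending ordering times 10, plus 10 (10, 20, 30, ...)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {son: index * 10 + 10 for index, son in enumerate(ordered)}
```

So `renormalize({'a': 5, 'b': 17, 'c': 9})` assigns `b` (highest current score) the value 10, then `c` 20 and `a` 30, evenly re-spacing the scores while preserving their relative order.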
def perform_modifytranslations(colID, ln, sel_type='', trans=[], confirm=-1, callback='yes'):
"""Modify the translations of a collection
sel_type - the nametype to modify
trans - the translations in the same order as the languages from get_languages()"""
output = ''
subtitle = ''
sitelangs = get_languages()
if type(trans) is str:
trans = [trans]
if confirm in ["2", 2] and colID:
finresult = modify_translations(colID, sitelangs, sel_type, trans, "collection")
col_dict = dict(get_def_name('', "collection"))
if colID and col_dict.has_key(int(colID)):
colID = int(colID)
subtitle = """<a name="3">3. Modify translations for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a href="%s/help/admin/websearch-admin-guide#3.3">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
if sel_type == '':
sel_type = get_col_nametypes()[0][0]
header = ['Language', 'Translation']
actions = []
types = get_col_nametypes()
if len(types) > 1:
text = """
<span class="adminlabel">Name type</span>
<select name="sel_type" class="admin_w200">
"""
for (key, value) in types:
text += """<option value="%s" %s>%s""" % (key, key == sel_type and 'selected="selected"' or '', value)
trans_names = get_name(colID, ln, key, "collection")
if trans_names and trans_names[0][0]:
text += ": %s" % trans_names[0][0]
text += "</option>"
text += """</select>"""
output += createhiddenform(action="modifytranslations#3",
text=text,
button="Select",
colID=colID,
ln=ln,
confirm=0)
if confirm in [-1, "-1", 0, "0"]:
trans = []
for (key, value) in sitelangs:
try:
trans_names = get_name(colID, key, sel_type, "collection")
trans.append(trans_names[0][0])
except StandardError, e:
trans.append('')
for nr in range(0, len(sitelangs)):
actions.append(["%s" % (sitelangs[nr][1],)])
actions[-1].append('<input type="text" name="trans" size="30" value="%s"/>' % trans[nr])
text = tupletotable(header=header, tuple=actions)
output += createhiddenform(action="modifytranslations#3",
text=text,
button="Modify",
colID=colID,
sel_type=sel_type,
ln=ln,
confirm=2)
if sel_type and len(trans) and confirm in ["2", 2]:
output += write_outcome(finresult)
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_modifytranslations", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyrankmethods(colID, ln, func='', rnkID='', confirm=0, callback='yes'):
"""Modify which rank methods is visible to the collection
func - remove or add rank method
rnkID - the id of the rank method."""
output = ""
subtitle = ""
col_dict = dict(get_def_name('', "collection"))
rnk_dict = dict(get_def_name('', "rnkMETHOD"))
if colID and col_dict.has_key(int(colID)):
colID = int(colID)
if func in ["0", 0] and confirm in ["1", 1]:
finresult = attach_rnk_col(colID, rnkID)
elif func in ["1", 1] and confirm in ["1", 1]:
finresult = detach_rnk_col(colID, rnkID)
subtitle = """<a name="9">9. Modify rank options for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.9">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """
<dl>
<dt>The rank methods enabled for the collection '%s' are:</dt>
""" % col_dict[colID]
rnkmethods = get_col_rnk(colID, ln)
output += """<dd>"""
if not rnkmethods:
output += """No rank methods"""
else:
for id, name in rnkmethods:
output += """%s, """ % name
output += """</dd>
</dl>
"""
rnk_list = get_def_name('', "rnkMETHOD")
rnk_dict_in_col = dict(get_col_rnk(colID, ln))
rnk_list = filter(lambda x: not rnk_dict_in_col.has_key(x[0]), rnk_list)
if rnk_list:
text = """
<span class="adminlabel">Enable:</span>
<select name="rnkID" class="admin_w200">
<option value="-1">- select rank method -</option>
"""
for (id, name) in rnk_list:
text += """<option value="%s" %s>%s</option>""" % (id, (func in ["0", 0] and confirm in ["0", 0] and int(rnkID) == int(id)) and 'selected="selected"' or '' , name)
text += """</select>"""
output += createhiddenform(action="modifyrankmethods#9",
text=text,
button="Enable",
colID=colID,
ln=ln,
func=0,
confirm=1)
if confirm in ["1", 1] and func in ["0", 0] and int(rnkID) != -1:
output += write_outcome(finresult)
elif confirm not in ["0", 0] and func in ["0", 0]:
output += """<b><span class="info">Please select a rank method.</span></b>"""
coll_list = get_col_rnk(colID, ln)
if coll_list:
text = """
<span class="adminlabel">Disable:</span>
<select name="rnkID" class="admin_w200">
<option value="-1">- select rank method -</option>
"""
for (id, name) in coll_list:
text += """<option value="%s" %s>%s</option>""" % (id, (func in ["1", 1] and confirm in ["0", 0] and int(rnkID) == int(id)) and 'selected="selected"' or '' , name)
text += """</select>"""
output += createhiddenform(action="modifyrankmethods#9",
text=text,
button="Disable",
colID=colID,
ln=ln,
func=1,
confirm=1)
if confirm in ["1", 1] and func in ["1", 1] and int(rnkID) != -1:
output += write_outcome(finresult)
elif confirm not in ["0", 0] and func in ["1", 1]:
output += """<b><span class="info">Please select a rank method.</span></b>"""
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_modifyrankmethods", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addcollectiontotree(colID, ln, add_dad='', add_son='', rtype='', mtype='', callback='yes', confirm=-1):
"""Form to add a collection to the tree.
add_dad - the dad to add the collection to
add_son - the collection to add
rtype - add it as a regular or virtual
mtype - add it to the regular or virtual tree."""
output = ""
output2 = ""
subtitle = """Attach collection to tree&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#2.2">?</a>]</small>""" % (CFG_SITE_URL)
col_dict = dict(get_def_name('', "collection"))
if confirm not in [-1, "-1"] and not (add_son and add_dad and rtype):
output2 += """<b><span class="info">All fields must be filled.</span></b><br /><br />
"""
elif add_son and add_dad and rtype:
add_son = int(add_son)
add_dad = int(add_dad)
if confirm not in [-1, "-1"]:
if add_son == add_dad:
output2 += """<b><span class="info">Cannot add a collection as a pointer to itself.</span></b><br /><br />
"""
elif check_col(add_dad, add_son):
res = add_col_dad_son(add_dad, add_son, rtype)
output2 += write_outcome(res)
if res[0] == 1:
output2 += """<b><span class="info"><br /> The collection will appear on your website after the next webcoll run. You can either run it manually or wait until bibsched does it for you.</span></b><br /><br />
"""
else:
output2 += """<b><span class="info">Cannot add the collection '%s' as a %s subcollection of '%s' since it will either create a loop, or the association already exists.</span></b><br /><br />
""" % (col_dict[add_son], (rtype=="r" and 'regular' or 'virtual'), col_dict[add_dad])
add_son = ''
add_dad = ''
rtype = ''
tree = get_col_tree(colID)
col_list = col_dict.items()
col_list.sort(compare_on_val)
output = show_coll_not_in_tree(colID, ln, col_dict)
text = """
<span class="adminlabel">Attach collection:</span>
<select name="add_son" class="admin_w200">
<option value="">- select collection -</option>
"""
for (id, name) in col_list:
if id != colID:
text += """<option value="%s" %s>%s</option>""" % (id, str(id)==str(add_son) and 'selected="selected"' or '', name)
text += """
</select><br />
<span class="adminlabel">to parent collection:</span>
<select name="add_dad" class="admin_w200">
<option value="">- select parent collection -</option>
"""
for (id, name) in col_list:
text += """<option value="%s" %s>%s</option>
""" % (id, str(id)==add_dad and 'selected="selected"' or '', name)
text += """</select><br />
"""
text += """
<span class="adminlabel">with relationship:</span>
<select name="rtype" class="admin_w200">
<option value="">- select relationship -</option>
<option value="r" %s>Regular (Narrow by...)</option>
<option value="v" %s>Virtual (Focus on...)</option>
</select>
""" % ((rtype=="r" and 'selected="selected"' or ''), (rtype=="v" and 'selected="selected"' or ''))
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/addcollectiontotree" % CFG_SITE_URL,
text=text,
button="Add",
colID=colID,
ln=ln,
confirm=1)
output += output2
#output += perform_showtree(colID, ln)
body = [output]
if callback:
return perform_index(colID, ln, mtype="perform_addcollectiontotree", content=addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_addcollection(colID, ln, colNAME='', dbquery='', callback="yes", confirm=-1):
"""form to add a new collection.
colNAME - the name of the new collection
dbquery - the dbquery of the new collection"""
output = ""
subtitle = """Create new collection&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#2.1">?</a>]</small>""" % (CFG_SITE_URL)
text = """
<span class="adminlabel">Default name</span>
<input class="admin_w200" type="text" name="colNAME" value="%s" /><br />
""" % colNAME
output = createhiddenform(action="%s/admin/websearch/websearchadmin.py/addcollection" % CFG_SITE_URL,
text=text,
colID=colID,
ln=ln,
button="Add collection",
confirm=1)
if colNAME and confirm in ["1", 1]:
res = add_col(colNAME, '')
output += write_outcome(res)
if res[0] == 1:
output += perform_addcollectiontotree(colID=colID, ln=ln, add_son=res[1], callback='')
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please give the collection a name.</span></b>"""
body = [output]
if callback:
return perform_index(colID, ln=ln, mtype="perform_addcollection", content=addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifydbquery(colID, ln, dbquery='', callback='yes', confirm=-1):
"""form to modify the dbquery of the collection.
dbquery - the dbquery of the collection."""
subtitle = ''
output = ""
col_dict = dict(get_def_name('', "collection"))
if colID and col_dict.has_key(int(colID)):
colID = int(colID)
subtitle = """<a name="1">1. Modify collection query for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.1">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
if confirm == -1:
res = run_sql("SELECT dbquery FROM collection WHERE id=%s" % colID)
dbquery = res[0][0]
if not dbquery:
dbquery = ''
reg_sons = len(get_col_tree(colID, 'r'))
vir_sons = len(get_col_tree(colID, 'v'))
if reg_sons > 1:
if dbquery:
output += "Warning: This collection has subcollections and should therefore not have a collection query; for further explanation, check the WebSearch Guide<br />"
elif reg_sons <= 1:
if not dbquery:
output += "Warning: This collection does not have any subcollections and should therefore have a collection query; for further explanation, check the WebSearch Guide<br />"
text = """
<span class="adminlabel">Query</span>
<input class="admin_w200" type="text" name="dbquery" value="%s" /><br />
""" % cgi.escape(dbquery, 1)
output += createhiddenform(action="modifydbquery",
text=text,
button="Modify",
colID=colID,
ln=ln,
confirm=1)
if confirm in ["1", 1]:
res = modify_dbquery(colID, dbquery)
if res:
if dbquery == "":
text = """<b><span class="info">Query removed for this collection.</span></b>"""
else:
text = """<b><span class="info">Query set for this collection.</span></b>"""
else:
text = """<b><span class="info">Sorry, could not change query.</span></b>"""
output += text
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_modifydbquery", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifycollectiontree(colID, ln, move_up='', move_down='', move_from='', move_to='', delete='', rtype='', callback='yes', confirm=0):
"""to modify the collection tree: move a collection up and down, delete a collection, or change the parent of the collection.
colID - the main collection of the tree, the root
move_up - move this collection up (not the collection id, but the place in the tree)
move_down - move this collection down (not the collection id, but the place in the tree)
move_from - move this collection from the current position (not the collection id, but the place in the tree)
move_to - move the move_from collection and set this as its parent (not the collection id, but the place in the tree)
delete - delete this collection from the tree (not the collection id, but the place in the tree)
rtype - the type of the collection in the tree, regular or virtual"""
colID = int(colID)
tree = get_col_tree(colID, rtype)
col_dict = dict(get_def_name('', "collection"))
subtitle = """Modify collection tree: %s&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#2.3">?</a>]&nbsp;&nbsp;&nbsp;<a href="%s/admin/websearch/websearchadmin.py/showtree?colID=%s&amp;ln=%s">Printer friendly version</a></small>""" % (col_dict[colID], CFG_SITE_URL, CFG_SITE_URL, colID, ln)
fin_output = ""
output = ""
try:
if move_up:
move_up = int(move_up)
switch = find_last(tree, move_up)
if switch and switch_col_treescore(tree[move_up], tree[switch]):
output += """<b><span class="info">Moved the %s collection '%s' up and '%s' down.</span></b><br /><br />
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[move_up][0]], col_dict[tree[switch][0]])
else:
output += """<b><span class="info">Could not move the %s collection '%s' up and '%s' down.</span></b><br /><br />
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[move_up][0]], col_dict[tree[switch][0]])
elif move_down:
move_down = int(move_down)
switch = find_next(tree, move_down)
if switch and switch_col_treescore(tree[move_down], tree[switch]):
output += """<b><span class="info">Moved the %s collection '%s' down and '%s' up.</span></b><br /><br />
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[move_down][0]], col_dict[tree[switch][0]])
else:
output += """<b><span class="info">Could not move the %s collection '%s' down and '%s' up.</span></b><br /><br />
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[move_down][0]], col_dict[tree[switch][0]])
elif delete:
delete = int(delete)
if confirm in [0, "0"]:
if col_dict[tree[delete][0]] != col_dict[tree[delete][3]]:
text = """<b>Do you want to remove the %s collection '%s' and its subcollections in the %s collection '%s'?</b>
""" % ((tree[delete][4]=="r" and 'regular' or 'virtual'), col_dict[tree[delete][0]], (rtype=="r" and 'regular' or 'virtual'), col_dict[tree[delete][3]])
else:
text = """<b>Do you want to remove all subcollections of the %s collection '%s'?</b>
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[delete][3]])
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/modifycollectiontree#tree" % CFG_SITE_URL,
text=text,
button="Confirm",
colID=colID,
delete=delete,
rtype=rtype,
ln=ln,
confirm=1)
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/index?mtype=perform_modifycollectiontree#tree" % CFG_SITE_URL,
text="<b>To cancel</b>",
button="Cancel",
colID=colID,
ln=ln)
else:
if remove_col_subcol(tree[delete][0], tree[delete][3], rtype):
if col_dict[tree[delete][0]] != col_dict[tree[delete][3]]:
output += """<b><span class="info">Removed the %s collection '%s' and its subcollections in subdirectory '%s'.</span></b><br /><br />
""" % ((tree[delete][4]=="r" and 'regular' or 'virtual'), col_dict[tree[delete][0]], col_dict[tree[delete][3]])
else:
output += """<b><span class="info">Removed the subcollections of the %s collection '%s'.</span></b><br /><br />
""" % ((rtype=="r" and 'regular' or 'virtual'), col_dict[tree[delete][3]])
else:
output += """<b><span class="info">Could not remove the collection from the tree.</span></b><br /><br />
"""
delete = ''
elif move_from and not move_to:
move_from_rtype = move_from[0]
move_from_id = int(move_from[1:len(move_from)])
text = """<b>Select collection to place the %s collection '%s' under.</b><br /><br />
""" % ((move_from_rtype=="r" and 'regular' or 'virtual'), col_dict[tree[move_from_id][0]])
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/index?mtype=perform_modifycollectiontree#tree" % CFG_SITE_URL,
text=text,
button="Cancel",
colID=colID,
ln=ln)
elif move_from and move_to:
move_from_rtype = move_from[0]
move_from_id = int(move_from[1:len(move_from)])
move_to_rtype = move_to[0]
move_to_id = int(move_to[1:len(move_to)])
tree_from = get_col_tree(colID, move_from_rtype)
tree_to = get_col_tree(colID, move_to_rtype)
if confirm in [0, '0']:
if move_from_id == move_to_id and move_from_rtype == move_to_rtype:
output += """<b><span class="info">Cannot move to itself.</span></b><br /><br />
"""
elif tree_from[move_from_id][3] == tree_to[move_to_id][0] and move_from_rtype==move_to_rtype:
output += """<b><span class="info">The collection is already there.</span></b><br /><br />
"""
elif check_col(tree_to[move_to_id][0], tree_from[move_from_id][0]) or (tree_to[move_to_id][0] == 1 and tree_from[move_from_id][3] == tree_to[move_to_id][0] and move_from_rtype != move_to_rtype):
text = """<b>Move %s collection '%s' to the %s collection '%s'.</b>
""" % ((tree_from[move_from_id][4]=="r" and 'regular' or 'virtual'), col_dict[tree_from[move_from_id][0]], (tree_to[move_to_id][4]=="r" and 'regular' or 'virtual'), col_dict[tree_to[move_to_id][0]])
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/modifycollectiontree#tree" % CFG_SITE_URL,
text=text,
button="Confirm",
colID=colID,
move_from=move_from,
move_to=move_to,
ln=ln,
rtype=rtype,
confirm=1)
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/index?mtype=perform_modifycollectiontree#tree" % CFG_SITE_URL,
text="""<b>To cancel</b>""",
button="Cancel",
colID=colID,
ln=ln)
else:
output += """<b><span class="info">Cannot move the collection '%s' and set it as a subcollection of '%s' since it will create a loop.</span></b><br /><br />
""" % (col_dict[tree_from[move_from_id][0]], col_dict[tree_to[move_to_id][0]])
else:
if (move_to_id != 0 and move_col_tree(tree_from[move_from_id], tree_to[move_to_id])) or (move_to_id == 0 and move_col_tree(tree_from[move_from_id], tree_to[move_to_id], move_to_rtype)):
output += """<b><span class="info">Moved %s collection '%s' to the %s collection '%s'.</span></b><br /><br />
""" % ((move_from_rtype=="r" and 'regular' or 'virtual'), col_dict[tree_from[move_from_id][0]], (move_to_rtype=="r" and 'regular' or 'virtual'), col_dict[tree_to[move_to_id][0]])
else:
output += """<b><span class="info">Could not move %s collection '%s' to the %s collection '%s'.</span></b><br /><br />
""" % ((move_from_rtype=="r" and 'regular' or 'virtual'), col_dict[tree_from[move_from_id][0]], (move_to_rtype=="r" and 'regular' or 'virtual'), col_dict[tree_to[move_to_id][0]])
move_from = ''
move_to = ''
else:
output += """
"""
except StandardError, e:
register_exception()
return """<b><span class="info">An error occurred.</span></b>
"""
output += """<table border="0" width="100%">
<tr><td width="50%">
<b>Narrow by collection:</b>
</td><td width="50%">
<b>Focus on...:</b>
</td></tr><tr><td valign="top">
"""
tree = get_col_tree(colID, 'r')
output += create_colltree(tree, col_dict, colID, ln, move_from, move_to, 'r', "yes")
output += """</td><td valign="top">
"""
tree = get_col_tree(colID, 'v')
output += create_colltree(tree, col_dict, colID, ln, move_from, move_to, 'v', "yes")
output += """</td>
</tr>
</table>
"""
body = [output]
if callback:
return perform_index(colID, ln, mtype="perform_modifycollectiontree", content=addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
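The move_from/move_to parameters above pack the tree type and position into one string, split via move_from[0] and int(move_from[1:]). A minimal standalone decoder sketch (parse_tree_ref is hypothetical, not part of this module):

```python
# Sketch: how move_from/move_to references are decoded above.
# The first character carries the tree type ('r' = regular, 'v' = virtual);
# the remainder is the position in the tree list (not a collection id).
def parse_tree_ref(ref):
    """Split a tree reference like 'r12' into (rtype, position)."""
    rtype, pos = ref[0], int(ref[1:])
    if rtype not in ('r', 'v'):
        raise ValueError("unknown tree type: %r" % rtype)
    return rtype, pos
```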
def perform_showtree(colID, ln):
"""Create the collection tree/hierarchy."""
col_dict = dict(get_def_name('', "collection"))
subtitle = "Collection tree: %s" % col_dict[int(colID)]
output = """<table border="0" width="100%">
<tr><td width="50%">
<b>Narrow by collection:</b>
</td><td width="50%">
<b>Focus on...:</b>
</td></tr><tr><td valign="top">
"""
tree = get_col_tree(colID, 'r')
output += create_colltree(tree, col_dict, colID, ln, '', '', 'r', '')
output += """</td><td valign="top">
"""
tree = get_col_tree(colID, 'v')
output += create_colltree(tree, col_dict, colID, ln, '', '', 'v', '')
output += """</td>
</tr>
</table>
"""
body = [output]
return addadminbox(subtitle, body)
def perform_addportalbox(colID, ln, title='', body='', callback='yes', confirm=-1):
"""form to add a new portalbox
title - the title of the portalbox
body - the body of the portalbox"""
col_dict = dict(get_def_name('', "collection"))
colID = int(colID)
subtitle = """<a name="5.1"></a>Create new portalbox"""
text = """
<span class="adminlabel">Title</span>
<textarea cols="50" rows="1" class="admin_wvar" type="text" name="title">%s</textarea><br />
<span class="adminlabel">Body</span>
<textarea cols="50" rows="10" class="admin_wvar" type="text" name="body">%s</textarea><br />
""" % (cgi.escape(title), cgi.escape(body))
output = createhiddenform(action="addportalbox#5.1",
text=text,
button="Add",
colID=colID,
ln=ln,
confirm=1)
if body and confirm in [1, "1"]:
res = add_pbx(title, body)
output += write_outcome(res)
if res[1] == 1:
output += """<b><span class="info"><a href="addexistingportalbox?colID=%s&amp;ln=%s&amp;pbxID=%s#5">Add portalbox to collection</a></span></b>""" % (colID, ln, res[1])
elif confirm not in [-1, "-1"]:
output += """<b><span class="info">Body field must be filled.</span></b>
"""
body = [output]
return perform_showportalboxes(colID, ln, content=addadminbox(subtitle, body))
def perform_addexistingportalbox(colID, ln, pbxID=-1, score=0, position='', sel_ln='', callback='yes', confirm=-1):
"""form to add an existing portalbox to a collection.
colID - the collection to add the portalbox to
pbxID - the portalbox to add
score - the importance of the portalbox.
position - the position of the portalbox on the page
sel_ln - the language of the portalbox"""
subtitle = """<a name="5.2"></a>Add existing portalbox to collection"""
output = ""
colID = int(colID)
res = get_pbx()
pos = get_pbx_pos()
lang = dict(get_languages())
col_dict = dict(get_def_name('', "collection"))
pbx_dict = dict(map(lambda x: (x[0], x[1]), res))
col_pbx = get_col_pbx(colID)
col_pbx = dict(map(lambda x: (x[0], x[5]), col_pbx))
if len(res) > 0:
text = """
<span class="adminlabel">Portalbox</span>
<select name="pbxID" class="admin_w200">
<option value="-1">- Select portalbox -</option>
"""
for (id, t_title, t_body) in res:
text += """<option value="%s" %s>%s - %s...</option>\n""" % \
(id, id == int(pbxID) and 'selected="selected"' or '',
cgi.escape(t_title[:40]), cgi.escape(t_body[0:40 - min(40, len(t_title))]))
text += """</select><br />
<span class="adminlabel">Language</span>
<select name="sel_ln" class="admin_w200">
<option value="">- Select language -</option>
"""
listlang = lang.items()
listlang.sort()
for (key, name) in listlang:
text += """<option value="%s" %s>%s</option>
""" % (key, key == sel_ln and 'selected="selected"' or '', name)
text += """</select><br />
<span class="adminlabel">Position</span>
<select name="position" class="admin_w200">
<option value="">- Select position -</option>
"""
listpos = pos.items()
listpos.sort()
for (key, name) in listpos:
text += """<option value="%s" %s>%s</option>""" % (key, key==position and 'selected="selected"' or '', name)
text += "</select>"
output += createhiddenform(action="addexistingportalbox#5.2",
text=text,
button="Add",
colID=colID,
ln=ln,
confirm=1)
else:
output = """No existing portalboxes to add, please create a new one.
"""
if pbxID > -1 and position and sel_ln and confirm in [1, "1"]:
pbxID = int(pbxID)
res = add_col_pbx(colID, pbxID, sel_ln, position, '')
output += write_outcome(res)
elif pbxID > -1 and confirm not in [-1, "-1"]:
output += """<b><span class="info">All fields must be filled.</span></b>
"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showportalboxes(colID, ln, content=output)
def perform_deleteportalbox(colID, ln, pbxID=-1, callback='yes', confirm=-1):
"""form to delete a portalbox which is not in use.
colID - the current collection.
pbxID - the id of the portalbox"""
subtitle = """<a name="5.3"></a>Delete an unused portalbox"""
output = ""
colID = int(colID)
if pbxID not in [-1, "-1"] and confirm in [1, "1"]:
ares = get_pbx()
pbx_dict = dict(map(lambda x: (x[0], x[1]), ares))
if pbx_dict.has_key(int(pbxID)):
pname = pbx_dict[int(pbxID)]
ares = delete_pbx(int(pbxID))
else:
return """<b><span class="info">This portalbox does not exist</span></b>"""
res = get_pbx()
col_dict = dict(get_def_name('', "collection"))
pbx_dict = dict(map(lambda x: (x[0], x[1]), res))
col_pbx = get_col_pbx()
col_pbx = dict(map(lambda x: (x[0], x[5]), col_pbx))
if len(res) > 0:
text = """
<span class="adminlabel">Portalbox</span>
<select name="pbxID" class="admin_w200">
"""
text += """<option value="-1">- Select portalbox -</option>"""
for (id, t_title, t_body) in res:
if not col_pbx.has_key(id):
text += """<option value="%s" %s>%s - %s...</option>""" % (id, id == int(pbxID) and 'selected="selected"' or '', cgi.escape(t_title), cgi.escape(t_body[0:10]))
text += """</select><br />"""
output += createhiddenform(action="deleteportalbox#5.3",
text=text,
button="Delete",
colID=colID,
ln=ln,
confirm=1)
if pbxID not in [-1, "-1"]:
pbxID = int(pbxID)
if confirm in [1, "1"]:
output += write_outcome(ares)
elif confirm not in [-1, "-1"]:
output += """<b><span class="info">Choose a portalbox to delete.</span></b>
"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showportalboxes(colID, ln, content=output)
def perform_modifyportalbox(colID, ln, pbxID=-1, score='', position='', sel_ln='', title='', body='', callback='yes', confirm=-1):
"""form to modify a portalbox in a collection, or change the portalbox itself.
colID - the id of the collection.
pbxID - the portalbox to change
score - the score of the portalbox connected to colID which should be changed.
position - the position of the portalbox in collection colID to change."""
subtitle = ""
output = ""
colID = int(colID)
res = get_pbx()
pos = get_pbx_pos()
lang = dict(get_languages())
col_dict = dict(get_def_name('', "collection"))
pbx_dict = dict(map(lambda x: (x[0], x[1]), res))
col_pbx = get_col_pbx(colID)
col_pbx = dict(map(lambda x: (x[0], x[5]), col_pbx))
if pbxID not in [-1, "-1"]:
pbxID = int(pbxID)
subtitle = """<a name="5.4"></a>Modify portalbox '%s' for this collection""" % pbx_dict[pbxID]
col_pbx = get_col_pbx(colID)
if not (score and position) and not (body and title):
for (id_pbx, id_collection, tln, score, position, title, body) in col_pbx:
if id_pbx == pbxID:
break
output += """Collection (presentation) specific values (changes apply only to this collection.)<br />"""
text = """
<span class="adminlabel">Position</span>
<select name="position" class="admin_w200">
"""
listpos = pos.items()
listpos.sort()
for (key, name) in listpos:
text += """<option value="%s" %s>%s</option>""" % (key, key==position and 'selected="selected"' or '', name)
text += """</select><br />"""
output += createhiddenform(action="modifyportalbox#5.4",
text=text,
button="Modify",
colID=colID,
pbxID=pbxID,
score=score,
title=title,
body=cgi.escape(body, 1),
sel_ln=sel_ln,
ln=ln,
confirm=3)
if pbxID > -1 and score and position and confirm in [3, "3"]:
pbxID = int(pbxID)
res = modify_pbx(colID, pbxID, sel_ln, score, position, '', '')
res2 = get_pbx()
pbx_dict = dict(map(lambda x: (x[0], x[1]), res2))
output += write_outcome(res)
output += """<br />Portalbox (content) specific values (any changes appear everywhere the portalbox is used.)"""
text = """
<span class="adminlabel">Title</span>
<textarea cols="50" rows="1" class="admin_wvar" type="text" name="title">%s</textarea><br />
""" % cgi.escape(title)
text += """
<span class="adminlabel">Body</span>
<textarea cols="50" rows="10" class="admin_wvar" type="text" name="body">%s</textarea><br />
""" % cgi.escape(body)
output += createhiddenform(action="modifyportalbox#5.4",
text=text,
button="Modify",
colID=colID,
pbxID=pbxID,
sel_ln=sel_ln,
score=score,
position=position,
ln=ln,
confirm=4)
if pbxID > -1 and confirm in [4, "4"]:
pbxID = int(pbxID)
res = modify_pbx(colID, pbxID, sel_ln, '', '', title, body)
output += write_outcome(res)
else:
output = """No portalbox to modify."""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showportalboxes(colID, ln, content=output)
def perform_switchpbxscore(colID, id_1, id_2, sel_ln, ln):
"""Switch the score of id_1 and id_2 in collection_portalbox.
colID - the current collection
id_1/id_2 - the ids to change the score for.
sel_ln - the language of the portalbox"""
output = ""
res = get_pbx()
pbx_dict = dict(map(lambda x: (x[0], x[1]), res))
res = switch_pbx_score(colID, id_1, id_2, sel_ln)
output += write_outcome(res)
return perform_showportalboxes(colID, ln, content=output)
def perform_showportalboxes(colID, ln, callback='yes', content='', confirm=-1):
"""show the portalboxes of this collection.
colID - the collection to show the portalboxes for."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
subtitle = """<a name="5">5. Modify portalboxes for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.5">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = ""
pos = get_pbx_pos()
output = """<dl>
<dt>Portalbox actions (not related to this collection)</dt>
<dd><a href="addportalbox?colID=%s&amp;ln=%s#5.1">Create new portalbox</a></dd>
<dd><a href="deleteportalbox?colID=%s&amp;ln=%s#5.3">Delete an unused portalbox</a></dd>
<dt>Collection specific actions</dt>
<dd><a href="addexistingportalbox?colID=%s&amp;ln=%s#5.2">Add existing portalbox to collection</a></dd>
</dl>
""" % (colID, ln, colID, ln, colID, ln)
header = ['Position', 'Language', '', 'Title', 'Actions']
actions = []
sitelangs = get_languages()
lang = dict(sitelangs)
pos_list = pos.items()
pos_list.sort()
if len(get_col_pbx(colID)) > 0:
for (key, value) in sitelangs:
for (pos_key, pos_value) in pos_list:
res = get_col_pbx(colID, key, pos_key)
i = 0
for (pbxID, colID_pbx, tln, score, position, title, body) in res:
move = """<table cellspacing="1" cellpadding="0" border="0"><tr><td>"""
if i != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchpbxscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;sel_ln=%s&amp;rand=%s#5"><img border="0" src="%s/img/smallup.gif" title="Move portalbox up" alt="up" /></a>""" % (CFG_SITE_URL, colID, ln, pbxID, res[i - 1][0], tln, random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
move += "</td><td>"
i += 1
if i != len(res):
move += """<a href="%s/admin/websearch/websearchadmin.py/switchpbxscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;sel_ln=%s&amp;rand=%s#5"><img border="0" src="%s/img/smalldown.gif" title="Move portalbox down" alt="down" /></a>""" % (CFG_SITE_URL, colID, ln, pbxID, res[i][0], tln, random.randint(0, 1000), CFG_SITE_URL)
move += """</td></tr></table>"""
actions.append(["%s" % (i==1 and pos[position] or ''), "%s" % (i==1 and lang[tln] or ''), move, "%s" % title])
for col in [(('Modify', 'modifyportalbox'), ('Remove', 'removeportalbox'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;pbxID=%s&amp;sel_ln=%s#5.4">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, pbxID, tln, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;pbxID=%s&amp;sel_ln=%s#5.5">%s</a>' % (CFG_SITE_URL, function, colID, ln, pbxID, tln, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No portalboxes exist for this collection"""
output += content
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_showportalboxes", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_removeportalbox(colID, ln, pbxID='', sel_ln='', callback='yes', confirm=0):
"""form to remove a portalbox from a collection.
colID - the current collection, remove the portalbox from this collection.
sel_ln - remove the portalbox with this language
pbxID - remove the portalbox with this id"""
subtitle = """<a name="5.5"></a>Remove portalbox"""
output = ""
col_dict = dict(get_def_name('', "collection"))
res = get_pbx()
pbx_dict = dict(map(lambda x: (x[0], x[1]), res))
if colID and pbxID and sel_ln:
colID = int(colID)
pbxID = int(pbxID)
if confirm in ["0", 0]:
text = """Do you want to remove the portalbox '%s' from the collection '%s'?""" % (pbx_dict[pbxID], col_dict[colID])
output += createhiddenform(action="removeportalbox#5.5",
text=text,
button="Confirm",
colID=colID,
pbxID=pbxID,
sel_ln=sel_ln,
confirm=1)
elif confirm in ["1", 1]:
res = remove_pbx(colID, pbxID, sel_ln)
output += write_outcome(res)
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showportalboxes(colID, ln, content=output)
def perform_switchfmtscore(colID, type, id_1, id_2, ln):
"""Switch the score of id_1 and id_2 in the table type.
colID - the current collection
id_1/id_2 - the ids to change the score for.
type - like "format" """
fmt_dict = dict(get_def_name('', "format"))
res = switch_score(colID, id_1, id_2, type)
output = write_outcome(res)
return perform_showoutputformats(colID, ln, content=output)
def perform_switchfldscore(colID, id_1, id_2, fmeth, ln):
"""Switch the score of id_1 and id_2 in collection_field_fieldvalue.
colID - the current collection
id_1/id_2 - the ids to change the score for.
fld_dict = dict(get_def_name('', "field"))
res = switch_fld_score(colID, id_1, id_2)
output = write_outcome(res)
if fmeth == "soo":
return perform_showsortoptions(colID, ln, content=output)
elif fmeth == "sew":
return perform_showsearchfields(colID, ln, content=output)
elif fmeth == "seo":
return perform_showsearchoptions(colID, ln, content=output)
def perform_switchfldvaluescore(colID, id_1, id_fldvalue_1, id_fldvalue_2, ln):
"""Switch the score of id_fldvalue_1 and id_fldvalue_2 in collection_field_fieldvalue.
colID - the current collection
id_fldvalue_1/id_fldvalue_2 - the ids to change the score for."""
name_1 = run_sql("SELECT name from fieldvalue where id=%s", (id_fldvalue_1, ))[0][0]
name_2 = run_sql("SELECT name from fieldvalue where id=%s", (id_fldvalue_2, ))[0][0]
res = switch_fld_value_score(colID, id_1, id_fldvalue_1, id_fldvalue_2)
output = write_outcome(res)
return perform_modifyfield(colID, fldID=id_1, ln=ln, content=output)
def perform_addnewfieldvalue(colID, fldID, ln, name='', value='', callback="yes", confirm=-1):
"""form to add a new fieldvalue.
name - the name of the new fieldvalue
value - the value of the new fieldvalue
"""
output = ""
subtitle = """<a name="7.4"></a>Add new value"""
text = """
<span class="adminlabel">Display name</span>
<input class="admin_w200" type="text" name="name" value="%s" /><br />
<span class="adminlabel">Search value</span>
<input class="admin_w200" type="text" name="value" value="%s" /><br />
""" % (name, value)
output = createhiddenform(action="%s/admin/websearch/websearchadmin.py/addnewfieldvalue" % CFG_SITE_URL,
text=text,
colID=colID,
fldID=fldID,
ln=ln,
button="Add",
confirm=1)
if name and value and confirm in ["1", 1]:
res = add_fldv(name, value)
output += write_outcome(res)
if res[0] == 1:
res = add_col_fld(colID, fldID, 'seo', res[1])
if res[0] == 0:
output += "<br />" + write_outcome(res)
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please fill in name and value.</span></b>
"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfield(colID, fldID=fldID, ln=ln, content=output)
def perform_modifyfieldvalue(colID, fldID, fldvID, ln, name='', value='', callback="yes", confirm=-1):
"""form to modify a fieldvalue.
name - the name of the fieldvalue
value - the value of the fieldvalue
"""
if confirm in [-1, "-1"]:
res = get_fld_value(fldvID)
(id, name, value) = res[0]
output = ""
subtitle = """<a name="7.4"></a>Modify existing value"""
output = """<dl>
<dt><b><span class="info">Warning: Modifications done below will also affect all places where the modified data is used.</span></b></dt>
</dl>"""
text = """
<span class="adminlabel">Display name</span>
<input class="admin_w200" type="text" name="name" value="%s" /><br />
<span class="adminlabel">Search value</span>
<input class="admin_w200" type="text" name="value" value="%s" /><br />
""" % (name, value)
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/modifyfieldvalue" % CFG_SITE_URL,
text=text,
colID=colID,
fldID=fldID,
fldvID=fldvID,
ln=ln,
button="Update",
confirm=1)
output += createhiddenform(action="%s/admin/websearch/websearchadmin.py/modifyfieldvalue" % CFG_SITE_URL,
text="Delete value and all associations",
colID=colID,
fldID=fldID,
fldvID=fldvID,
ln=ln,
button="Delete",
confirm=2)
if name and value and confirm in ["1", 1]:
res = update_fldv(fldvID, name, value)
output += write_outcome(res)
#if res:
# output += """<b><span class="info">Operation successfully completed.</span></b>"""
#else:
# output += """<b><span class="info">Operation failed.</span></b>"""
elif confirm in ["2", 2]:
res = delete_fldv(fldvID)
output += write_outcome(res)
elif confirm not in ["-1", -1]:
output += """<b><span class="info">Please fill in name and value.</span></b>"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfield(colID, fldID=fldID, ln=ln, content=output)
def perform_removefield(colID, ln, fldID='', fldvID='', fmeth='', callback='yes', confirm=0):
"""form to remove a field from a collection.
colID - the current collection, remove the field from this collection
fldID - remove the field with this id
fldvID - remove the field with this value, if any
fmeth - the type of field: sort option (soo), search field (sew) or search option (seo)"""
if fmeth == "soo":
field = "sort option"
elif fmeth == "sew":
field = "search field"
elif fmeth == "seo":
field = "search option"
else:
field = "field"
subtitle = """<a name="6.4"></a><a name="7.4"></a><a name="8.4"></a>Remove %s""" % field
output = ""
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
res = get_fld_value()
fldv_dict = dict(map(lambda x: (x[0], x[1]), res))
if colID and fldID:
colID = int(colID)
fldID = int(fldID)
if fldvID and fldvID != "None":
fldvID = int(fldvID)
if confirm in ["0", 0]:
text = """Do you want to remove the %s '%s' %s from the collection '%s'?""" % (field, fld_dict[fldID], (fldvID not in ["", "None"] and "with value '%s'" % fldv_dict[fldvID] or ''), col_dict[colID])
output += createhiddenform(action="removefield#6.5",
text=text,
button="Confirm",
colID=colID,
fldID=fldID,
fldvID=fldvID,
fmeth=fmeth,
confirm=1)
elif confirm in ["1", 1]:
res = remove_fld(colID, fldID, fldvID)
output += write_outcome(res)
body = [output]
output = "<br />" + addadminbox(subtitle, body)
if fmeth == "soo":
return perform_showsortoptions(colID, ln, content=output)
elif fmeth == "sew":
return perform_showsearchfields(colID, ln, content=output)
elif fmeth == "seo":
return perform_showsearchoptions(colID, ln, content=output)
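The fmeth codes branched on above recur throughout this module. A standalone sketch of that mapping (FMETH_LABELS and fmeth_label are illustrative names, not part of the module):

```python
# Illustrative mapping of the fmeth codes used throughout this module.
FMETH_LABELS = {
    "soo": "sort option",
    "sew": "search field",
    "seo": "search option",
}

def fmeth_label(fmeth):
    """Return the human-readable name for an fmeth code, 'field' if unknown."""
    return FMETH_LABELS.get(fmeth, "field")
```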
def perform_removefieldvalue(colID, ln, fldID='', fldvID='', fmeth='', callback='yes', confirm=0):
"""form to remove a value from a search option in a collection.
colID - the current collection, remove the value from this collection
fldID - the field the value belongs to
fldvID - remove the value with this id"""
subtitle = """<a name="7.4"></a>Remove value"""
output = ""
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
res = get_fld_value()
fldv_dict = dict(map(lambda x: (x[0], x[1]), res))
if colID and fldID:
colID = int(colID)
fldID = int(fldID)
if fldvID and fldvID != "None":
fldvID = int(fldvID)
if confirm in ["0", 0]:
text = """Do you want to remove the value '%s' from the search option '%s'?""" % (fldv_dict[fldvID], fld_dict[fldID])
output += createhiddenform(action="removefieldvalue#7.4",
text=text,
button="Confirm",
colID=colID,
fldID=fldID,
fldvID=fldvID,
fmeth=fmeth,
confirm=1)
elif confirm in ["1", 1]:
res = remove_fld(colID, fldID, fldvID)
output += write_outcome(res)
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfield(colID, fldID=fldID, ln=ln, content=output)
def perform_rearrangefieldvalue(colID, fldID, ln, callback='yes', confirm=-1):
"""rearrange the fieldvalues alphabetically
colID - the collection
fldID - the field to rearrange the fieldvalue for
"""
subtitle = "Order values alphabetically"
output = ""
col_fldv = get_col_fld(colID, 'seo', fldID)
col_fldv = dict(map(lambda x: (x[1], x[0]), col_fldv))
fldv_names = get_fld_value()
fldv_names = map(lambda x: (x[0], x[1]), fldv_names)
if not col_fldv.has_key(None):
vscore = len(col_fldv)
for (fldvID, name) in fldv_names:
if col_fldv.has_key(fldvID):
run_sql("UPDATE collection_field_fieldvalue SET score_fieldvalue=%s WHERE id_collection=%s and id_field=%s and id_fieldvalue=%s", (vscore, colID, fldID, fldvID))
vscore -= 1
output += write_outcome((1, ""))
else:
output += write_outcome((0, (0, "No values to order")))
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfield(colID, fldID, ln, content=output)
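The loop above hands out descending scores so that sorting by score, highest first, yields alphabetical order. A self-contained sketch of that scheme (assign_scores is hypothetical, using an in-memory dict in place of the collection_field_fieldvalue table):

```python
# Standalone sketch of the descending-score ordering used above
# (hypothetical in-memory data instead of a database table).
def assign_scores(names):
    """Return {name: score} where the alphabetically first name gets the
    highest score, so sorting by score descending yields alphabetical order."""
    ordered = sorted(names)
    top = len(ordered)
    return dict((name, top - i) for i, name in enumerate(ordered))

scores = assign_scores(["beta", "alpha", "gamma"])
# alphabetical order falls out of sorting by score, highest first
by_score = sorted(scores, key=scores.get, reverse=True)
```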
def perform_rearrangefield(colID, ln, fmeth, callback='yes', confirm=-1):
"""rearrange the fields alphabetically
colID - the collection
"""
subtitle = "Order fields alphabetically"
output = ""
col_fld = dict(map(lambda x: (x[0], x[1]), get_col_fld(colID, fmeth)))
fld_names = get_def_name('', "field")
if len(col_fld) > 0:
score = len(col_fld)
for (fldID, name) in fld_names:
if col_fld.has_key(fldID):
run_sql("UPDATE collection_field_fieldvalue SET score=%s WHERE id_collection=%s and id_field=%s", (score, colID, fldID))
score -= 1
output += write_outcome((1, ""))
else:
output += write_outcome((0, (0, "No fields to order")))
body = [output]
output = "<br />" + addadminbox(subtitle, body)
if fmeth == "soo":
return perform_showsortoptions(colID, ln, content=output)
elif fmeth == "sew":
return perform_showsearchfields(colID, ln, content=output)
elif fmeth == "seo":
return perform_showsearchoptions(colID, ln, content=output)
def perform_addexistingfieldvalue(colID, fldID, fldvID=-1, ln=CFG_SITE_LANG, callback='yes', confirm=-1):
"""form to add an existing fieldvalue to a field.
colID - the collection
fldID - the field to add the fieldvalue to
fldvID - the fieldvalue to add"""
subtitle = """<a name="7.4"></a>Add existing value to search option"""
output = ""
if fldvID not in [-1, "-1"] and confirm in [1, "1"]:
fldvID = int(fldvID)
ares = add_col_fld(colID, fldID, 'seo', fldvID)
colID = int(colID)
fldID = int(fldID)
lang = dict(get_languages())
res = get_def_name('', "field")
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(res)
col_fld = dict(map(lambda x: (x[0], x[1]), get_col_fld(colID, 'seo')))
fld_value = get_fld_value()
fldv_dict = dict(map(lambda x: (x[0], x[1]), fld_value))
text = """
<span class="adminlabel">Value</span>
<select name="fldvID" class="admin_w200">
<option value="-1">- Select value -</option>
"""
res = run_sql("SELECT id,name,value FROM fieldvalue ORDER BY name")
for (id, name, value) in res:
text += """<option value="%s" %s>%s - %s</option>
""" % (id, id == int(fldvID) and 'selected="selected"' or '', name, value)
text += """</select><br />"""
output += createhiddenform(action="addexistingfieldvalue#7.4",
text=text,
button="Add",
colID=colID,
fldID=fldID,
ln=ln,
confirm=1)
if fldvID not in [-1, "-1"] and confirm in [1, "1"]:
output += write_outcome(ares)
elif confirm in [1, "1"]:
output += """<b><span class="info">Select a value to add and try again.</span></b>"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_modifyfield(colID, fldID, ln, content=output)
def perform_addexistingfield(colID, ln, fldID=-1, fldvID=-1, fmeth='', callback='yes', confirm=-1):
"""form to add an existing field to a collection.
colID - the collection to add the field to
fldID - the field to add
sel_ln - the language of the field"""
subtitle = """<a name="6.2"></a><a name="7.2"></a><a name="8.2"></a>Add existing field to collection"""
output = ""
if fldID not in [-1, "-1"] and confirm in [1, "1"]:
fldID = int(fldID)
ares = add_col_fld(colID, fldID, fmeth, fldvID)
colID = int(colID)
lang = dict(get_languages())
res = get_def_name('', "field")
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(res)
col_fld = dict(map(lambda x: (x[0], x[1]), get_col_fld(colID, fmeth)))
fld_value = get_fld_value()
fldv_dict = dict(map(lambda x: (x[0], x[1]), fld_value))
if fldvID:
fldvID = int(fldvID)
text = """
<span class="adminlabel">Field</span>
<select name="fldID" class="admin_w200">
<option value="-1">- Select field -</option>
"""
for (id, var) in res:
if fmeth == 'seo' or not col_fld.has_key(id):
text += """<option value="%s" %s>%s</option>
""" % (id, '', fld_dict[id])
text += """</select><br />"""
output += createhiddenform(action="addexistingfield#6.2",
text=text,
button="Add",
colID=colID,
fmeth=fmeth,
ln=ln,
confirm=1)
if fldID not in [-1, "-1"] and confirm in [1, "1"]:
output += write_outcome(ares)
elif fldID in [-1, "-1"] and confirm not in [-1, "-1"]:
output += """<b><span class="info">Select a field.</span></b>
"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
if fmeth == "soo":
return perform_showsortoptions(colID, ln, content=output)
elif fmeth == "sew":
return perform_showsearchfields(colID, ln, content=output)
elif fmeth == "seo":
return perform_showsearchoptions(colID, ln, content=output)
def perform_showsortoptions(colID, ln, callback='yes', content='', confirm=-1):
"""show the sort fields of this collection.."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
fld_type = get_sort_nametypes()
subtitle = """<a name="8">8. Modify sort options for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.8">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """<dl>
<dt>Field actions (not related to this collection)</dt>
<dd>Go to the BibIndex interface to modify the available sort options</dd>
<dt>Collection specific actions
<dd><a href="addexistingfield?colID=%s&amp;ln=%s&amp;fmeth=soo#8.2">Add sort option to collection</a></dd>
<dd><a href="rearrangefield?colID=%s&amp;ln=%s&amp;fmeth=soo#8.2">Order sort options alphabetically</a></dd>
</dl>
""" % (colID, ln, colID, ln)
header = ['', 'Sort option', 'Actions']
actions = []
sitelangs = get_languages()
lang = dict(sitelangs)
fld_type_list = fld_type.items()
if len(get_col_fld(colID, 'soo')) > 0:
res = get_col_fld(colID, 'soo')
i = 0
for (fldID, fldvID, stype, score, score_fieldvalue) in res:
move = """<table cellspacing="1" cellpadding="0" border="0"><tr><td>"""
if i != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=soo&amp;rand=%s#8"><img border="0" src="%s/img/smallup.gif" title="Move up"></a>""" % (CFG_SITE_URL, colID, ln, fldID, res[i - 1][0], random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;&nbsp;"
move += "</td><td>"
i += 1
if i != len(res):
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=soo&amp;rand=%s#8"><img border="0" src="%s/img/smalldown.gif" title="Move down"></a>""" % (CFG_SITE_URL, colID, ln, fldID, res[i][0], random.randint(0, 1000), CFG_SITE_URL)
move += """</td></tr></table>"""
actions.append([move, fld_dict[int(fldID)]])
for col in [(('Remove sort option', 'removefield'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fmeth=soo#8.4">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, fldID, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fmeth=soo#8.5">%s</a>' % (CFG_SITE_URL, function, colID, ln, fldID, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No sort options exists for this collection"""
output += content
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_showsortoptions", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_showsearchfields(colID, ln, callback='yes', content='', confirm=-1):
"""show the search fields of this collection.."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
fld_type = get_sort_nametypes()
subtitle = """<a name="6">6. Modify search fields for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.6">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """<dl>
<dt>Field actions (not related to this collection)</dt>
<dd>Go to the BibIndex interface to modify the available search fields</dd>
<dt>Collection specific actions
<dd><a href="addexistingfield?colID=%s&amp;ln=%s&amp;fmeth=sew#6.2">Add search field to collection</a></dd>
<dd><a href="rearrangefield?colID=%s&amp;ln=%s&amp;fmeth=sew#6.2">Order search fields alphabetically</a></dd>
</dl>
""" % (colID, ln, colID, ln)
header = ['', 'Search field', 'Actions']
actions = []
sitelangs = get_languages()
lang = dict(sitelangs)
fld_type_list = fld_type.items()
if len(get_col_fld(colID, 'sew')) > 0:
res = get_col_fld(colID, 'sew')
i = 0
for (fldID, fldvID, stype, score, score_fieldvalue) in res:
move = """<table cellspacing="1" cellpadding="0" border="0"><tr><td>"""
if i != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=sew&amp;rand=%s#6"><img border="0" src="%s/img/smallup.gif" title="Move up"></a>""" % (CFG_SITE_URL, colID, ln, fldID, res[i - 1][0], random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
move += "</td><td>"
i += 1
if i != len(res):
move += '<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=sew&amp;rand=%s#6"><img border="0" src="%s/img/smalldown.gif" title="Move down"></a>' % (CFG_SITE_URL, colID, ln, fldID, res[i][0], random.randint(0, 1000), CFG_SITE_URL)
move += """</td></tr></table>"""
actions.append([move, fld_dict[int(fldID)]])
for col in [(('Remove search field', 'removefield'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fmeth=sew#6.4">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, fldID, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s#6.5">%s</a>' % (CFG_SITE_URL, function, colID, ln, fldID, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No search fields exists for this collection"""
output += content
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_showsearchfields", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_showsearchoptions(colID, ln, callback='yes', content='', confirm=-1):
"""show the sort and search options of this collection.."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
fld_type = get_sort_nametypes()
subtitle = """<a name="7">7. Modify search options for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.7">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """<dl>
<dt>Field actions (not related to this collection)</dt>
<dd>Go to the BibIndex interface to modify the available search options</dd>
<dt>Collection specific actions
<dd><a href="addexistingfield?colID=%s&amp;ln=%s&amp;fmeth=seo#7.2">Add search option to collection</a></dd>
<dd><a href="rearrangefield?colID=%s&amp;ln=%s&amp;fmeth=seo#7.2">Order search options alphabetically</a></dd>
</dl>
""" % (colID, ln, colID, ln)
header = ['', 'Search option', 'Actions']
actions = []
sitelangs = get_languages()
lang = dict(sitelangs)
fld_type_list = fld_type.items()
fld_distinct = run_sql("SELECT distinct(id_field) FROM collection_field_fieldvalue WHERE type='seo' AND id_collection=%s ORDER by score desc", (colID, ))
if len(fld_distinct) > 0:
i = 0
for (fldID, ) in fld_distinct:
col_fld = get_col_fld(colID, 'seo', fldID)
move = ""
if i != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=seo&amp;rand=%s#7"><img border="0" src="%s/img/smallup.gif" title="Move up"></a>""" % (CFG_SITE_URL, colID, ln, fldID, fld_distinct[i - 1][0], random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
i += 1
if i != len(fld_distinct):
move += '<a href="%s/admin/websearch/websearchadmin.py/switchfldscore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_2=%s&amp;fmeth=seo&amp;rand=%s#7"><img border="0" src="%s/img/smalldown.gif" title="Move down"></a>' % (CFG_SITE_URL, colID, ln, fldID, fld_distinct[i][0], random.randint(0, 1000), CFG_SITE_URL)
actions.append([move, "%s" % fld_dict[fldID]])
for col in [(('Modify values', 'modifyfield'), ('Remove search option', 'removefield'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s#7.3">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, fldID, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fmeth=seo#7.3">%s</a>' % (CFG_SITE_URL, function, colID, ln, fldID, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No search options exists for this collection"""
output += content
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_showsearchoptions", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_modifyfield(colID, fldID, fldvID='', ln=CFG_SITE_LANG, content='', callback='yes', confirm=0):
"""Modify the fieldvalues for a field"""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
fld_dict = dict(get_def_name('', "field"))
fld_type = get_sort_nametypes()
fldID = int(fldID)
subtitle = """<a name="7.3">Modify values for field '%s'</a>""" % (fld_dict[fldID])
output = """<dl>
<dt>Value specific actions
<dd><a href="addexistingfieldvalue?colID=%s&amp;ln=%s&amp;fldID=%s#7.4">Add existing value to search option</a></dd>
<dd><a href="addnewfieldvalue?colID=%s&amp;ln=%s&amp;fldID=%s#7.4">Add new value to search option</a></dd>
<dd><a href="rearrangefieldvalue?colID=%s&amp;ln=%s&amp;fldID=%s#7.4">Order values alphabetically</a></dd>
</dl>
""" % (colID, ln, fldID, colID, ln, fldID, colID, ln, fldID)
header = ['', 'Value name', 'Actions']
actions = []
sitelangs = get_languages()
lang = dict(sitelangs)
fld_type_list = fld_type.items()
col_fld = list(get_col_fld(colID, 'seo', fldID))
if len(col_fld) == 1 and col_fld[0][1] is None:
output += """<b><span class="info">No values added for this search option yet</span></b>"""
else:
j = 0
for (fldID, fldvID, stype, score, score_fieldvalue) in col_fld:
fieldvalue = get_fld_value(fldvID)
move = ""
if j != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldvaluescore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_fldvalue_1=%s&amp;id_fldvalue_2=%s&amp;rand=%s#7.3"><img border="0" src="%s/img/smallup.gif" title="Move up"></a>""" % (CFG_SITE_URL, colID, ln, fldID, fldvID, col_fld[j - 1][1], random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
j += 1
if j != len(col_fld):
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfldvaluescore?colID=%s&amp;ln=%s&amp;id_1=%s&amp;id_fldvalue_1=%s&amp;id_fldvalue_2=%s&amp;rand=%s#7.3"><img border="0" src="%s/img/smalldown.gif" title="Move down"></a>""" % (CFG_SITE_URL, colID, ln, fldID, fldvID, col_fld[j][1], random.randint(0, 1000), CFG_SITE_URL)
if fieldvalue[0][1] != fieldvalue[0][2] and fldvID is not None:
actions.append([move, "%s - %s" % (fieldvalue[0][1], fieldvalue[0][2])])
elif fldvID is not None:
actions.append([move, "%s" % fieldvalue[0][1]])
move = ''
for col in [(('Modify value', 'modifyfieldvalue'), ('Remove value', 'removefieldvalue'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fldvID=%s&amp;fmeth=seo#7.4">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, fldID, fldvID, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fldID=%s&amp;fldvID=%s#7.4">%s</a>' % (CFG_SITE_URL, function, colID, ln, fldID, fldvID, str)
output += tupletotable(header=header, tuple=actions)
output += content
body = [output]
output = "<br />" + addadminbox(subtitle, body)
if len(col_fld) == 0:
output = content
return perform_showsearchoptions(colID, ln, content=output)
def perform_showoutputformats(colID, ln, callback='yes', content='', confirm=-1):
"""shows the outputformats of the current collection
colID - the collection id."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
subtitle = """<a name="10">10. Modify output formats for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.10">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """
<dl>
<dt>Output format actions (not specific to the chosen collection)
<dd>Go to the BibFormat interface to modify the available output formats</dd>
<dt>Collection specific actions
<dd><a href="addexistingoutputformat?colID=%s&amp;ln=%s#10.2">Add existing output format to collection</a></dd>
</dl>
""" % (colID, ln)
header = ['', 'Code', 'Output format', 'Actions']
actions = []
col_fmt = get_col_fmt(colID)
fmt_dict = dict(get_def_name('', "format"))
i = 0
if len(col_fmt) > 0:
for (id_format, colID_fld, code, score) in col_fmt:
move = """<table cellspacing="1" cellpadding="0" border="0"><tr><td>"""
if i != 0:
move += """<a href="%s/admin/websearch/websearchadmin.py/switchfmtscore?colID=%s&amp;ln=%s&amp;type=format&amp;id_1=%s&amp;id_2=%s&amp;rand=%s#10"><img border="0" src="%s/img/smallup.gif" title="Move format up"></a>""" % (CFG_SITE_URL, colID, ln, id_format, col_fmt[i - 1][0], random.randint(0, 1000), CFG_SITE_URL)
else:
move += "&nbsp;&nbsp;&nbsp;"
move += "</td><td>"
i += 1
if i != len(col_fmt):
move += '<a href="%s/admin/websearch/websearchadmin.py/switchfmtscore?colID=%s&amp;ln=%s&amp;type=format&amp;id_1=%s&amp;id_2=%s&amp;rand=%s#10"><img border="0" src="%s/img/smalldown.gif" title="Move format down"></a>' % (CFG_SITE_URL, colID, ln, id_format, col_fmt[i][0], random.randint(0, 1000), CFG_SITE_URL)
move += """</td></tr></table>"""
actions.append([move, code, fmt_dict[int(id_format)]])
for col in [(('Remove', 'removeoutputformat'),)]:
actions[-1].append('<a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fmtID=%s#10">%s</a>' % (CFG_SITE_URL, col[0][1], colID, ln, id_format, col[0][0]))
for (str, function) in col[1:]:
actions[-1][-1] += ' / <a href="%s/admin/websearch/websearchadmin.py/%s?colID=%s&amp;ln=%s&amp;fmtID=%s#10">%s</a>' % (CFG_SITE_URL, function, colID, ln, id_format, str)
output += tupletotable(header=header, tuple=actions)
else:
output += """No output formats exists for this collection"""
output += content
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_showoutputformats", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def external_collections_build_select(colID, external_collection):
output = '<select name="state" class="admin_w200">'
if external_collection.parser:
max_state = 4
else:
max_state = 2
num_selected = external_collection_get_state(external_collection, colID)
for num in range(max_state):
state_name = CFG_EXTERNAL_COLLECTION_STATES_NAME[num]
if num == num_selected:
selected = ' selected'
else:
selected = ''
output += '<option value="%(num)d"%(selected)s>%(state_name)s</option>' % {'num': num, 'selected': selected, 'state_name': state_name}
output += '</select>\n'
return output
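A minimal, dependency-free sketch of the option-building loop in `external_collections_build_select()` above: each state index becomes an `<option>`, and the state currently stored for the collection is marked `selected`. The state names here are illustrative placeholders, not the real `CFG_EXTERNAL_COLLECTION_STATES_NAME` values.

```python
# Hypothetical stand-in for CFG_EXTERNAL_COLLECTION_STATES_NAME.
STATE_NAMES = ['Disabled', 'See also', 'External search', 'External search (checked)']

def build_state_select(num_selected, max_state=4):
    """Return an HTML <select> whose num_selected option is preselected."""
    output = '<select name="state" class="admin_w200">'
    for num in range(max_state):
        # Mark only the currently stored state as selected.
        selected = ' selected' if num == num_selected else ''
        output += '<option value="%d"%s>%s</option>' % (num, selected, STATE_NAMES[num])
    return output + '</select>'
```

As in the original, `max_state` is 4 when the engine has a parser and 2 otherwise, which simply truncates the option list.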
def perform_manage_external_collections(colID, ln, callback='yes', content='', confirm=-1):
"""Show the interface to configure external collections to the user."""
colID = int(colID)
subtitle = """<a name="11">11. Configuration of related external collections</a>
&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.11">?</a>]</small>""" % CFG_SITE_URL
output = '<form action="update_external_collections" method="POST"><input type="hidden" name="colID" value="%(colID)d">' % {'colID': colID}
table_header = ['External collection', 'Mode', 'Apply also to daughter collections?']
table_content = []
external_collections = external_collection_sort_engine_by_name(external_collections_dictionary.values())
for external_collection in external_collections:
collection_name = external_collection.name
select = external_collections_build_select(colID, external_collection)
recurse = '<input type=checkbox name="recurse" value="%(collection_name)s">' % {'collection_name': collection_name}
table_content.append([collection_name, select, recurse])
output += tupletotable(header=table_header, tuple=table_content)
output += '<input class="adminbutton" type="submit" value="Modify"/>'
output += '</form>'
return addadminbox(subtitle, [output])
def perform_update_external_collections(colID, ln, state_list, recurse_list):
colID = int(colID)
changes = []
output = ""
if not state_list:
return 'Warning: No state found.<br />' + perform_manage_external_collections(colID, ln)
external_collections = external_collection_sort_engine_by_name(external_collections_dictionary.values())
if len(external_collections) != len(state_list):
return 'Warning: the number of states does not match the number of external collections!<br />' + perform_manage_external_collections(colID, ln)
for (external_collection, state) in zip(external_collections, state_list):
state = int(state)
collection_name = external_collection.name
recurse = recurse_list and collection_name in recurse_list
oldstate = external_collection_get_state(external_collection, colID)
if oldstate != state or recurse:
changes += external_collection_get_update_state_list(external_collection, colID, state, recurse)
external_collection_apply_changes(changes)
return output + '<br /><br />' + perform_manage_external_collections(colID, ln)
def perform_showdetailedrecordoptions(colID, ln, callback='yes', content='', confirm=-1):
"""Show the interface to configure detailed record page to the user."""
colID = int(colID)
subtitle = """<a name="12">12. Configuration of detailed record page</a>
&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.12">?</a>]</small>""" % CFG_SITE_URL
output = '''<form action="update_detailed_record_options" method="post">
<table><tr><td>
<input type="hidden" name="colID" value="%(colID)d">
<dl>
<dt><b>Show tabs:</b></dt>
<dd>
''' % {'colID': colID}
for (tab_id, tab_info) in get_detailed_page_tabs(colID).iteritems():
if tab_id == 'comments' and \
not CFG_WEBCOMMENT_ALLOW_REVIEWS and \
not CFG_WEBCOMMENT_ALLOW_COMMENTS:
continue
check = ''
output += '''<input type="checkbox" id="id%(tabid)s" name="tabs" value="%(tabid)s" %(check)s />
<label for="id%(tabid)s">&nbsp;%(label)s</label><br />
''' % {'tabid':tab_id,
'check':((tab_info['visible'] and 'checked="checked"') or ''),
'label':tab_info['label']}
output += '</dd></dl></td><td>'
output += '</td></tr></table><input class="adminbutton" type="submit" value="Modify"/>'
output += '''<input type="checkbox" id="recurse" name="recurse" value="1" />
<label for="recurse">&nbsp;Also apply to subcollections</label>'''
output += '</form>'
return addadminbox(subtitle, [output])
def perform_update_detailed_record_options(colID, ln, tabs, recurse):
"""Update the preferences for the tab to show/hide in the detailed record page."""
colID = int(colID)
changes = []
output = '<b><span class="info">Operation successfully completed.</span></b>'
if '' in tabs:
tabs.remove('')
tabs.append('metadata')
def update_settings(colID, tabs, recurse):
run_sql("DELETE FROM collectiondetailedrecordpagetabs WHERE id_collection=%s", (colID, ))
run_sql("REPLACE INTO collectiondetailedrecordpagetabs" + \
" SET id_collection=%s, tabs=%s", (colID, ';'.join(tabs)))
## for enabled_tab in tabs:
## run_sql("REPLACE INTO collectiondetailedrecordpagetabs" + \
## " SET id_collection='%s', tabs='%s'" % (colID, ';'.join(tabs)))
if recurse:
for descendant_id in get_collection_descendants(colID):
update_settings(descendant_id, tabs, recurse)
update_settings(colID, tabs, recurse)
## for colID in colIDs:
## run_sql("DELETE FROM collectiondetailedrecordpagetabs WHERE id_collection='%s'" % colID)
## for enabled_tab in tabs:
## run_sql("REPLACE INTO collectiondetailedrecordpagetabs" + \
## " SET id_collection='%s', tabs='%s'" % (colID, ';'.join(tabs)))
#if callback:
return perform_editcollection(colID, ln, "perform_modifytranslations",
'<br /><br />' + output + '<br /><br />' + \
perform_showdetailedrecordoptions(colID, ln))
#else:
# return addadminbox(subtitle, body)
#return output + '<br /><br />' + perform_showdetailedrecordoptions(colID, ln)
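The recursive tab-propagation pattern in `update_settings()` above can be sketched with an in-memory dict standing in for the `collectiondetailedrecordpagetabs` table. The parent-to-children map below is hypothetical; the real code walks `get_collection_descendants()` instead.

```python
def apply_tabs(settings, children, colID, tabs, recurse):
    # Store the enabled tabs for this collection, semicolon-joined as in the table.
    settings[colID] = ';'.join(tabs)
    if recurse:
        # Propagate the same tab set to every subcollection.
        for child in children.get(colID, []):
            apply_tabs(settings, children, child, tabs, recurse)
```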
def perform_addexistingoutputformat(colID, ln, fmtID=-1, callback='yes', confirm=-1):
"""form to add an existing output format to a collection.
colID - the collection the format should be added to
fmtID - the format to add."""
subtitle = """<a name="10.2"></a>Add existing output format to collection"""
output = ""
if fmtID not in [-1, "-1"] and confirm in [1, "1"]:
ares = add_col_fmt(colID, fmtID)
colID = int(colID)
res = get_def_name('', "format")
fmt_dict = dict(res)
col_dict = dict(get_def_name('', "collection"))
col_fmt = get_col_fmt(colID)
col_fmt = dict(map(lambda x: (x[0], x[2]), col_fmt))
if len(res) > 0:
text = """
<span class="adminlabel">Output format</span>
<select name="fmtID" class="admin_w200">
<option value="-1">- Select output format -</option>
"""
for (id, name) in res:
if not col_fmt.has_key(id):
text += """<option value="%s" %s>%s</option>
""" % (id, id == int(fmtID) and 'selected="selected"' or '', name)
text += """</select><br />
"""
output += createhiddenform(action="addexistingoutputformat#10.2",
text=text,
button="Add",
colID=colID,
ln=ln,
confirm=1)
else:
output = """No existing output formats to add, please create a new one."""
if fmtID not in [-1, "-1"] and confirm in [1, "1"]:
output += write_outcome(ares)
elif fmtID in [-1, "-1"] and confirm not in [-1, "-1"]:
output += """<b><span class="info">Please select output format.</span></b>"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showoutputformats(colID, ln, content=output)
def perform_deleteoutputformat(colID, ln, fmtID=-1, callback='yes', confirm=-1):
"""form to delete an output format not in use.
colID - the collection id of the current collection.
fmtID - the format id to delete."""
subtitle = """<a name="10.3"></a>Delete an unused output format"""
output = """
<dl>
<dd>Deleting an output format will also delete its associated translations.</dd>
</dl>
"""
colID = int(colID)
if fmtID not in [-1, "-1"] and confirm in [1, "1"]:
fmt_dict = dict(get_def_name('', "format"))
old_colNAME = fmt_dict[int(fmtID)]
ares = delete_fmt(int(fmtID))
res = get_def_name('', "format")
fmt_dict = dict(res)
col_dict = dict(get_def_name('', "collection"))
col_fmt = get_col_fmt()
col_fmt = dict(map(lambda x: (x[0], x[2]), col_fmt))
if len(res) > 0:
text = """
<span class="adminlabel">Output format</span>
<select name="fmtID" class="admin_w200">
"""
text += """<option value="-1">- Select output format -"""
for (id, name) in res:
if not col_fmt.has_key(id):
text += """<option value="%s" %s>%s""" % (id, id == int(fmtID) and 'selected="selected"' or '', name)
text += "</option>"
text += """</select><br />"""
output += createhiddenform(action="deleteoutputformat#10.3",
text=text,
button="Delete",
colID=colID,
ln=ln,
confirm=0)
if fmtID not in [-1, "-1"]:
fmtID = int(fmtID)
if confirm in [0, "0"]:
text = """<b>Do you want to delete the output format '%s'.</b>
""" % fmt_dict[fmtID]
output += createhiddenform(action="deleteoutputformat#10.3",
text=text,
button="Confirm",
colID=colID,
fmtID=fmtID,
ln=ln,
confirm=1)
elif confirm in [1, "1"]:
output += write_outcome(ares)
elif confirm not in [-1, "-1"]:
output += """<b><span class="info">Choose a output format to delete.</span></b>
"""
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showoutputformats(colID, ln, content=output)
def perform_removeoutputformat(colID, ln, fmtID='', callback='yes', confirm=0):
"""form to remove an output format from a collection.
colID - the collection id of the current collection.
fmtID - the format id.
"""
subtitle = """<a name="10.5"></a>Remove output format"""
output = ""
col_dict = dict(get_def_name('', "collection"))
fmt_dict = dict(get_def_name('', "format"))
if colID and fmtID:
colID = int(colID)
fmtID = int(fmtID)
if confirm in ["0", 0]:
text = """Do you want to remove the output format '%s' from the collection '%s'.""" % (fmt_dict[fmtID], col_dict[colID])
output += createhiddenform(action="removeoutputformat#10.5",
text=text,
button="Confirm",
colID=colID,
fmtID=fmtID,
confirm=1)
elif confirm in ["1", 1]:
res = remove_fmt(colID, fmtID)
output += write_outcome(res)
body = [output]
output = "<br />" + addadminbox(subtitle, body)
return perform_showoutputformats(colID, ln, content=output)
def perform_index(colID=1, ln=CFG_SITE_LANG, mtype='', content='', confirm=0):
"""The index method, calling methods to show the collection tree, create new collections and add collections to tree.
"""
subtitle = "Overview"
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
output = ""
fin_output = ""
if not col_dict.has_key(1):
res = add_col(CFG_SITE_NAME, '')
if res:
fin_output += """<b><span class="info">Created root collection.</span></b><br />"""
else:
return "Cannot create root collection, please check database."
if CFG_SITE_NAME != run_sql("SELECT name from collection WHERE id=1")[0][0]:
res = run_sql("update collection set name=%s where id=1", (CFG_SITE_NAME, ))
if res:
fin_output += """<b><span class="info">The name of the root collection has been modified to be the same as the %(sitename)s installation name given prior to installing %(sitename)s.</span><b><br />""" % {'sitename' : CFG_SITE_NAME}
else:
return "Error renaming root collection."
fin_output += """
<table>
<tr>
<td>0.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_showall">Show all</a></small></td>
<td>1.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_addcollection">Create new collection</a></small></td>
<td>2.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_addcollectiontotree">Attach collection to tree</a></small></td>
<td>3.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_modifycollectiontree">Modify collection tree</a></small></td>
<td>4.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_checkwebcollstatus">Webcoll Status</a></small></td>
</tr><tr>
<td>5.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_checkcollectionstatus">Collection Status</a></small></td>
<td>6.&nbsp;<small><a href="%s/admin/websearch/websearchadmin.py?colID=%s&amp;ln=%s&amp;mtype=perform_checkexternalcollections">Check external collections</a></small></td>
<td>7.&nbsp;<small><a href="%s/help/admin/websearch-admin-guide?ln=%s">Guide</a></small></td>
</tr>
</table>
""" % (CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, colID, ln, CFG_SITE_URL, ln)
if mtype == "":
fin_output += """<br /><br /><b><span class="info">To manage the collections, select an item from the menu.</span><b><br />"""
if mtype == "perform_addcollection" and content:
fin_output += content
elif mtype == "perform_addcollection" or mtype == "perform_showall":
fin_output += perform_addcollection(colID=colID, ln=ln, callback='')
fin_output += "<br />"
if mtype == "perform_addcollectiontotree" and content:
fin_output += content
elif mtype == "perform_addcollectiontotree" or mtype == "perform_showall":
fin_output += perform_addcollectiontotree(colID=colID, ln=ln, callback='')
fin_output += "<br />"
if mtype == "perform_modifycollectiontree" and content:
fin_output += content
elif mtype == "perform_modifycollectiontree" or mtype == "perform_showall":
fin_output += perform_modifycollectiontree(colID=colID, ln=ln, callback='')
fin_output += "<br />"
if mtype == "perform_checkwebcollstatus" and content:
fin_output += content
elif mtype == "perform_checkwebcollstatus" or mtype == "perform_showall":
fin_output += perform_checkwebcollstatus(colID, ln, callback='')
if mtype == "perform_checkcollectionstatus" and content:
fin_output += content
elif mtype == "perform_checkcollectionstatus" or mtype == "perform_showall":
fin_output += perform_checkcollectionstatus(colID, ln, callback='')
if mtype == "perform_checkexternalcollections" and content:
fin_output += content
elif mtype == "perform_checkexternalcollections" or mtype == "perform_showall":
fin_output += perform_checkexternalcollections(colID, ln, callback='')
body = [fin_output]
return addadminbox('<b>Menu</b>', body)
def show_coll_not_in_tree(colID, ln, col_dict):
"""Returns collections not in tree"""
tree = get_col_tree(colID)
in_tree = {}
output = "These collections are not in the tree, and should be added:<br />"
for (id, up, down, dad, reltype) in tree:
in_tree[id] = 1
in_tree[dad] = 1
res = run_sql("SELECT id from collection")
if len(res) != len(in_tree):
for id in res:
if not in_tree.has_key(id[0]):
output += """<a href="%s/admin/websearch/websearchadmin.py/editcollection?colID=%s&amp;ln=%s" title="Edit collection">%s</a> ,
""" % (CFG_SITE_URL, id[0], ln, col_dict[id[0]])
output += "<br /><br />"
else:
output = ""
return output
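The membership check in `show_coll_not_in_tree()` above can be sketched set-based; the tuple shape `(id, up, down, dad, reltype)` matches the rows returned by `get_col_tree()`, and a collection counts as "in the tree" if it appears either as a node or as a parent.

```python
def collections_not_in_tree(tree_rows, all_ids):
    """Return the ids from all_ids that appear neither as a node nor as a parent."""
    in_tree = set()
    for (cid, up, down, dad, reltype) in tree_rows:
        in_tree.add(cid)   # the collection itself
        in_tree.add(dad)   # its parent, mirroring in_tree[dad] = 1 above
    return [cid for cid in all_ids if cid not in in_tree]
```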
def create_colltree(tree, col_dict, colID, ln, move_from='', move_to='', rtype='', edit=''):
"""Creates the presentation of the collection tree, with the buttons for modifying it.
tree - the tree to present, from get_tree()
col_dict - the name of the collections in a dictionary
colID - the collection id to start with
move_from - if a collection to be moved has been chosen
move_to - the collection which should be set as father of move_from
rtype - the type of the tree, regular or virtual
edit - if the method should output the edit buttons."""
if move_from:
move_from_rtype = move_from[0]
move_from_id = int(move_from[1:len(move_from)])
tree_from = get_col_tree(colID, move_from_rtype)
tree_to = get_col_tree(colID, rtype)
tables = 0
tstack = []
i = 0
text = """
<table border ="0" cellspacing="0" cellpadding="0">"""
for i in range(0, len(tree)):
id_son = tree[i][0]
up = tree[i][1]
down = tree[i][2]
dad = tree[i][3]
reltype = tree[i][4]
tmove_from = ""
j = i
while j > 0:
j = j - 1
try:
if tstack[j][1] == dad:
table = tstack[j][2]
for k in range(0, tables - table):
tables = tables - 1
text += """</table></td></tr>
"""
break
except StandardError:
pass
text += """<tr><td>
"""
if i > 0 and tree[i][1] == 0:
tables = tables + 1
text += """</td><td></td><td></td><td></td><td><table border="0" cellspacing="0" cellpadding="0"><tr><td>
"""
if i == 0:
tstack.append((id_son, dad, 1))
else:
tstack.append((id_son, dad, tables))
if up == 1 and edit:
text += """<a href="%s/admin/websearch/websearchadmin.py/modifycollectiontree?colID=%s&amp;ln=%s&amp;move_up=%s&amp;rtype=%s#%s"><img border="0" src="%s/img/smallup.gif" title="Move collection up"></a>""" % (CFG_SITE_URL, colID, ln, i, rtype, tree[i][0], CFG_SITE_URL)
else:
text += """&nbsp;"""
text += "</td><td>"
if down == 1 and edit:
text += """<a href="%s/admin/websearch/websearchadmin.py/modifycollectiontree?colID=%s&amp;ln=%s&amp;move_down=%s&amp;rtype=%s#%s"><img border="0" src="%s/img/smalldown.gif" title="Move collection down"></a>""" % (CFG_SITE_URL, colID, ln, i, rtype, tree[i][0], CFG_SITE_URL)
else:
text += """&nbsp;"""
text += "</td><td>"
if edit:
if move_from and move_to:
tmove_from = move_from
move_from = ''
if not (move_from == "" and i == 0) and not (move_from != "" and int(move_from[1:len(move_from)]) == i and rtype == move_from[0]):
check = "true"
if move_from:
#if tree_from[move_from_id][0] == tree_to[i][0] or not check_col(tree_to[i][0], tree_from[move_from_id][0]):
# check = ''
#elif not check_col(tree_to[i][0], tree_from[move_from_id][0]):
# check = ''
#if not check and (tree_to[i][0] == 1 and tree_from[move_from_id][3] == tree_to[i][0] and move_from_rtype != rtype):
# check = "true"
if check:
text += """<a href="%s/admin/websearch/websearchadmin.py/modifycollectiontree?colID=%s&amp;ln=%s&amp;move_from=%s&amp;move_to=%s%s&amp;rtype=%s#tree"><img border="0" src="%s/img/move_to.gif" title="Move '%s' to '%s'"></a>
""" % (CFG_SITE_URL, colID, ln, move_from, rtype, i, rtype, CFG_SITE_URL, col_dict[tree_from[int(move_from[1:len(move_from)])][0]], col_dict[tree_to[i][0]])
else:
try:
text += """<a href="%s/admin/websearch/websearchadmin.py/modifycollectiontree?colID=%s&amp;ln=%s&amp;move_from=%s%s&amp;rtype=%s#%s"><img border="0" src="%s/img/move_from.gif" title="Move '%s' from this location."></a>""" % (CFG_SITE_URL, colID, ln, rtype, i, rtype, tree[i][0], CFG_SITE_URL, col_dict[tree[i][0]])
except KeyError:
pass
else:
text += """<img border="0" src="%s/img/white_field.gif">
""" % CFG_SITE_URL
else:
text += """<img border="0" src="%s/img/white_field.gif">
""" % CFG_SITE_URL
text += """
</td>
<td>"""
if edit:
try:
text += """<a href="%s/admin/websearch/websearchadmin.py/modifycollectiontree?colID=%s&amp;ln=%s&amp;delete=%s&amp;rtype=%s#%s"><img border="0" src="%s/img/iconcross.gif" title="Remove collection from tree"></a>""" % (CFG_SITE_URL, colID, ln, i, rtype, tree[i][0], CFG_SITE_URL)
except KeyError:
pass
elif i != 0:
text += """<img border="0" src="%s/img/white_field.gif">
""" % CFG_SITE_URL
text += """</td><td>
"""
if tmove_from:
move_from = tmove_from
try:
text += """<a name="%s"></a>%s<a href="%s/admin/websearch/websearchadmin.py/editcollection?colID=%s&amp;ln=%s" title="Edit collection">%s</a>%s%s%s""" % (tree[i][0], (reltype=="v" and '<i>' or ''), CFG_SITE_URL, tree[i][0], ln, col_dict[id_son], (move_to=="%s%s" %(rtype, i) and '&nbsp;<img border="0" src="%s/img/move_to.gif">' % CFG_SITE_URL or ''), (move_from=="%s%s" % (rtype, i) and '&nbsp;<img border="0" src="%s/img/move_from.gif">' % CFG_SITE_URL or ''), (reltype=="v" and '</i>' or ''))
except KeyError:
pass
text += """</td></tr>
"""
while tables > 0:
text += """</table></td></tr>
"""
tables = tables - 1
text += """</table>"""
return text
def perform_deletecollection(colID, ln, confirm=-1, callback='yes'):
"""form to delete a collection
colID - id of collection
"""
subtitle =''
output = """
<span class="warning">
<strong>
<dl>
<dt>WARNING:</dt>
<dd>When deleting a collection, you also delete all data related to it, such as translations, relations to other collections and information about which rank methods to use.
<br />For more information, please go to the <a title="See guide" href="%s/help/admin/websearch-admin-guide">WebSearch guide</a> and read the section regarding deleting a collection.</dd>
</dl>
</strong>
</span>
""" % CFG_SITE_URL
col_dict = dict(get_def_name('', "collection"))
if colID != 1 and colID and col_dict.has_key(int(colID)):
colID = int(colID)
subtitle = """<a name="4">4. Delete collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.4">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
res = run_sql("SELECT id_dad,id_son,type,score from collection_collection WHERE id_dad=%s", (colID, ))
res2 = run_sql("SELECT id_dad,id_son,type,score from collection_collection WHERE id_son=%s", (colID, ))
if not res and not res2:
if confirm in ["-1", -1]:
text = """Do you want to delete this collection?"""
output += createhiddenform(action="deletecollection#4",
text=text,
colID=colID,
button="Delete",
confirm=0)
elif confirm in ["0", 0]:
text = """Are you sure you want to delete this collection?"""
output += createhiddenform(action="deletecollection#4",
text=text,
colID=colID,
button="Confirm",
confirm=1)
elif confirm in ["1", 1]:
result = delete_col(colID)
if not result:
raise Exception
else:
output = """<b><span class="info">Cannot delete a collection that is part of the collection tree. Remove the collection from the tree and try again.</span></b>"""
else:
subtitle = """4. Delete collection"""
output = """<b><span class="info">Not possible to delete the root collection</span></b>"""
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_deletecollection", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_editcollection(colID=1, ln=CFG_SITE_LANG, mtype='', content=''):
"""Interface to modify a collection. This method calls other methods, which in turn call this method back with their output.
If callback is set, those methods call perform_editcollection; otherwise they simply return their output.
colID - id of the collection
mtype - the method that called this method.
content - the output from that method."""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
if not col_dict.has_key(colID):
return """<b><span class="info">Collection deleted.</span></b>
"""
fin_output = """
<table>
<tr>
<td><b>Menu</b></td>
</tr>
<tr>
<td>0.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s">Show all</a></small></td>
<td>1.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_modifydbquery">Modify collection query</a></small></td>
<td>2.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_modifyrestricted">Modify access restrictions</a></small></td>
<td>3.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_modifytranslations">Modify translations</a></small></td>
<td>4.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_deletecollection">Delete collection</a></small></td>
</tr><tr>
<td>5.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showportalboxes">Modify portalboxes</a></small></td>
<td>6.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showsearchfields#6">Modify search fields</a></small></td>
<td>7.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showsearchoptions#7">Modify search options</a></small></td>
<td>8.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showsortoptions#8">Modify sort options</a></small></td>
<td>9.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_modifyrankmethods#9">Modify rank options</a></small></td>
</tr><tr>
<td>10.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showoutputformats#10">Modify output formats</a></small></td>
<td>11.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_manage_external_collections#11">Configuration of related external collections</a></small></td>
<td>12.&nbsp;<small><a href="editcollection?colID=%s&amp;ln=%s&amp;mtype=perform_showdetailedrecordoptions#12">Detailed record page options</a></small></td>
</tr>
</table>
""" % (colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln, colID, ln)
if mtype == "perform_modifydbquery" and content:
fin_output += content
elif mtype == "perform_modifydbquery" or not mtype:
fin_output += perform_modifydbquery(colID, ln, callback='')
if mtype == "perform_modifyrestricted" and content:
fin_output += content
elif mtype == "perform_modifyrestricted" or not mtype:
fin_output += perform_modifyrestricted(colID, ln, callback='')
if mtype == "perform_modifytranslations" and content:
fin_output += content
elif mtype == "perform_modifytranslations" or not mtype:
fin_output += perform_modifytranslations(colID, ln, callback='')
if mtype == "perform_deletecollection" and content:
fin_output += content
elif mtype == "perform_deletecollection" or not mtype:
fin_output += perform_deletecollection(colID, ln, callback='')
if mtype == "perform_showportalboxes" and content:
fin_output += content
elif mtype == "perform_showportalboxes" or not mtype:
fin_output += perform_showportalboxes(colID, ln, callback='')
if mtype == "perform_showsearchfields" and content:
fin_output += content
elif mtype == "perform_showsearchfields" or not mtype:
fin_output += perform_showsearchfields(colID, ln, callback='')
if mtype == "perform_showsearchoptions" and content:
fin_output += content
elif mtype == "perform_showsearchoptions" or not mtype:
fin_output += perform_showsearchoptions(colID, ln, callback='')
if mtype == "perform_showsortoptions" and content:
fin_output += content
elif mtype == "perform_showsortoptions" or not mtype:
fin_output += perform_showsortoptions(colID, ln, callback='')
if mtype == "perform_modifyrankmethods" and content:
fin_output += content
elif mtype == "perform_modifyrankmethods" or not mtype:
fin_output += perform_modifyrankmethods(colID, ln, callback='')
if mtype == "perform_showoutputformats" and content:
fin_output += content
elif mtype == "perform_showoutputformats" or not mtype:
fin_output += perform_showoutputformats(colID, ln, callback='')
if mtype == "perform_manage_external_collections" and content:
fin_output += content
elif mtype == "perform_manage_external_collections" or not mtype:
fin_output += perform_manage_external_collections(colID, ln, callback='')
if mtype == "perform_showdetailedrecordoptions" and content:
fin_output += content
elif mtype == "perform_showdetailedrecordoptions" or not mtype:
fin_output += perform_showdetailedrecordoptions(colID, ln, callback='')
return addadminbox("Overview of edit options for collection '%s'" % col_dict[colID], [fin_output])
def perform_checkwebcollstatus(colID, ln, confirm=0, callback='yes'):
"""Check status of the collection tables with respect to the webcoll cache."""
subtitle = """<a name="11"></a>Webcoll Status&nbsp;&nbsp;&nbsp;[<a href="%s/help/admin/websearch-admin-guide#5">?</a>]""" % CFG_SITE_URL
output = ""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
output += """<br /><b>Last updates:</b><br />"""
collection_table_update_time = ""
collection_web_update_time = ""
collection_table_update_time = get_table_update_time('collection')
output += "Collection table last updated: %s<br />" % collection_table_update_time
try:
fobj = open("%s/collections/last_updated" % CFG_CACHEDIR)
collection_web_update_time = fobj.readline().strip()
output += "Collection cache last updated: %s<br />" % collection_web_update_time
fobj.close()
except IOError:
pass
# reformat collection_web_update_time to the format suitable for comparisons
try:
collection_web_update_time = strftime("%Y-%m-%d %H:%M:%S",
time.strptime(collection_web_update_time, "%d %b %Y %H:%M:%S"))
except ValueError, e:
pass
if collection_table_update_time > collection_web_update_time:
output += """<br /><b><span class="info">Warning: The collections have been modified since the last time Webcoll was executed; Webcoll must be run again to process the changes.</span></b><br />"""
header = ['ID', 'Name', 'Time', 'Status', 'Progress']
actions = []
output += """<br /><b>Last BibSched tasks:</b><br />"""
res = run_sql("select id, proc, host, user, runtime, sleeptime, arguments, status, progress from schTASK where proc='webcoll' and runtime< now() ORDER by runtime")
if len(res) > 0:
(id, proc, host, user, runtime, sleeptime, arguments, status, progress) = res[len(res) - 1]
webcoll_update_time = runtime
actions.append([id, proc, runtime, (status !="" and status or ''), (progress !="" and progress or '')])
else:
actions.append(['', 'webcoll', '', '', 'Not executed yet'])
res = run_sql("select id, proc, host, user, runtime, sleeptime, arguments, status, progress from schTASK where proc='bibindex' and runtime< now() ORDER by runtime")
if len(res) > 0:
(id, proc, host, user, runtime, sleeptime, arguments, status, progress) = res[len(res) - 1]
actions.append([id, proc, runtime, (status !="" and status or ''), (progress !="" and progress or '')])
else:
actions.append(['', 'bibindex', '', '', 'Not executed yet'])
output += tupletotable(header=header, tuple=actions)
output += """<br /><b>Next scheduled BibSched run:</b><br />"""
actions = []
res = run_sql("select id, proc, host, user, runtime, sleeptime, arguments, status, progress from schTASK where proc='webcoll' and runtime > now() ORDER by runtime")
webcoll_future = ""
if len(res) > 0:
(id, proc, host, user, runtime, sleeptime, arguments, status, progress) = res[0]
webcoll_update_time = runtime
actions.append([id, proc, runtime, (status !="" and status or ''), (progress !="" and progress or '')])
webcoll_future = "yes"
else:
actions.append(['', 'webcoll', '', '', 'Not scheduled'])
res = run_sql("select id, proc, host, user, runtime, sleeptime, arguments, status, progress from schTASK where proc='bibindex' and runtime > now() ORDER by runtime")
bibindex_future = ""
if len(res) > 0:
(id, proc, host, user, runtime, sleeptime, arguments, status, progress) = res[0]
actions.append([id, proc, runtime, (status !="" and status or ''), (progress !="" and progress or '')])
bibindex_future = "yes"
else:
actions.append(['', 'bibindex', '', '', 'Not scheduled'])
output += tupletotable(header=header, tuple=actions)
if webcoll_future == "":
output += """<br /><b><span class="info">Warning: Webcoll is not scheduled for a future run by bibsched; any updates to the collections will not be processed.</span></b><br />"""
if bibindex_future == "":
output += """<br /><b><span class="info">Warning: Bibindex is not scheduled for a future run by bibsched; any updates to the records will not be processed.</span></b><br />"""
body = [output]
if callback:
return perform_index(colID, ln, "perform_checkwebcollstatus", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
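# The timestamp comparison in perform_checkwebcollstatus only works because the
# cache timestamp is first normalized to the sortable "%Y-%m-%d %H:%M:%S" form,
# so plain string comparison orders timestamps chronologically. A minimal sketch
# (plain `time` module; the helper name is hypothetical):

```python
import time

def to_sortable(timestamp, fmt="%d %b %Y %H:%M:%S"):
    # Parse the cache timestamp (e.g. "28 Jun 2009 06:45:00") and
    # re-emit it as "YYYY-MM-DD HH:MM:SS", so that lexicographic
    # order matches chronological order.
    return time.strftime("%Y-%m-%d %H:%M:%S", time.strptime(timestamp, fmt))

older = to_sortable("28 Jun 2009 06:45:00")
newer = to_sortable("03 Jan 2010 12:00:00")
assert older < newer  # string comparison now equals date comparison
```

# Note that %b parsing is locale-dependent; the cache file is assumed to use
# English month abbreviations, hence the ValueError guard in the code above.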
def perform_modifyrestricted(colID, ln, rest='', callback='yes', confirm=-1):
"""modify which apache group is allowed to access the collection.
rest - the groupname"""
subtitle = ''
output = ""
col_dict = dict(get_def_name('', "collection"))
action_id = acc_get_action_id(VIEWRESTRCOLL)
if colID and col_dict.has_key(int(colID)):
colID = int(colID)
subtitle = """<a name="2">2. Modify access restrictions for collection '%s'</a>&nbsp;&nbsp;&nbsp;<small>[<a title="See guide" href="%s/help/admin/websearch-admin-guide#3.2">?</a>]</small>""" % (col_dict[colID], CFG_SITE_URL)
output = """<p>Please note that Invenio versions greater than <em>0.92.1</em> manage collection restriction via the standard
<strong><a href="/admin/webaccess/webaccessadmin.py/showactiondetails?id_action=%i">WebAccess Admin Interface</a></strong> (action '%s').</p>
""" % (action_id, VIEWRESTRCOLL)
body = [output]
if callback:
return perform_editcollection(colID, ln, "perform_modifyrestricted", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_checkcollectionstatus(colID, ln, confirm=0, callback='yes'):
"""Check the configuration of the collections."""
from invenio.search_engine import collection_restricted_p, restricted_collection_cache
subtitle = """<a name="11"></a>Collection Status&nbsp;&nbsp;&nbsp;[<a href="%s/help/admin/websearch-admin-guide#6">?</a>]""" % CFG_SITE_URL
output = ""
colID = int(colID)
col_dict = dict(get_def_name('', "collection"))
collections = run_sql("SELECT id, name, dbquery, nbrecs FROM collection "
"ORDER BY id")
header = ['ID', 'Name','Query', 'Subcollections', 'Restricted', 'Hosted',
'I18N', 'Status', 'Number of records']
rnk_list = get_def_name('', "rnkMETHOD")
actions = []
restricted_collection_cache.recreate_cache_if_needed()
for (id, name, dbquery, nbrecs) in collections:
reg_sons = col_has_son(id, 'r')
vir_sons = col_has_son(id, 'v')
status = ""
hosted = ""
if str(dbquery).startswith("hostedcollection:"): hosted = """<b><span class="info">Yes</span></b>"""
else: hosted = """<b><span class="info">No</span></b>"""
langs = run_sql("SELECT ln from collectionname where id_collection=%s", (id, ))
i8n = ""
if len(langs) > 0:
for lang in langs:
i8n += "%s, " % lang
else:
i8n = """<b><span class="info">None</span></b>"""
if reg_sons and dbquery:
status = """<b><span class="warning">1:Conflict</span></b>"""
elif not dbquery and not reg_sons:
status = """<b><span class="warning">2:Empty</span></b>"""
if (reg_sons or vir_sons):
subs = """<b><span class="info">Yes</span></b>"""
else:
subs = """<b><span class="info">No</span></b>"""
if dbquery is None:
dbquery = """<b><span class="info">No</span></b>"""
restricted = collection_restricted_p(name, recreate_cache_if_needed=False)
if restricted:
restricted = """<b><span class="warning">Yes</span></b>"""
if status:
status += """<b><span class="warning">,3:Restricted</span></b>"""
else:
status += """<b><span class="warning">3:Restricted</span></b>"""
else:
restricted = """<b><span class="info">No</span></b>"""
if status == "":
status = """<b><span class="info">OK</span></b>"""
actions.append([id, """<a href="%s/admin/websearch/websearchadmin.py/editcollection?colID=%s&amp;ln=%s">%s</a>""" % (CFG_SITE_URL, id, ln, name), dbquery, subs, restricted, hosted, i8n, status, nbrecs])
output += tupletotable(header=header, tuple=actions)
body = [output]
if callback:
return perform_index(colID, ln, "perform_checkcollectionstatus", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
def perform_checkexternalcollections(colID, ln, icl=None, update="", confirm=0, callback='yes'):
"""Check the external collections for inconsistencies."""
subtitle = """<a name="7"></a>Check external collections&nbsp;&nbsp;&nbsp;[<a href="%s/help/admin/websearch-admin-guide#7">?</a>]""" % CFG_SITE_URL
output = ""
colID = int(colID)
if icl:
if update == "add":
# icl: the "inconsistent list" comes as a string; it has to be converted back into a list
icl = eval(icl)
#icl = icl[1:-1].split(',')
for collection in icl:
#collection = str(collection[1:-1])
query_select = "SELECT name FROM externalcollection WHERE name like '%(name)s';" % {'name': collection}
results_select = run_sql(query_select)
if not results_select:
query_insert = "INSERT INTO externalcollection (name) VALUES ('%(name)s');" % {'name': collection}
run_sql(query_insert)
output += """<br /><span class=info>New collection \"%s\" has been added to the database table \"externalcollection\".</span><br />""" % (collection)
else:
output += """<br /><span class=info>Collection \"%s\" already exists in the database table \"externalcollection\".</span><br />""" % (collection)
elif update == "del":
# icl: the "inconsistent list" comes as a string; it has to be converted back into a list
icl = eval(icl)
#icl = icl[1:-1].split(',')
for collection in icl:
#collection = str(collection[1:-1])
query_select = "SELECT id FROM externalcollection WHERE name like '%(name)s';" % {'name': collection}
results_select = run_sql(query_select)
if results_select:
query_delete = "DELETE FROM externalcollection WHERE id like '%(id)s';" % {'id': results_select[0][0]}
query_delete_states = "DELETE FROM collection_externalcollection WHERE id_externalcollection like '%(id)s';" % {'id': results_select[0][0]}
run_sql(query_delete)
run_sql(query_delete_states)
output += """<br /><span class=info>Collection \"%s\" has been deleted from the database table \"externalcollection\".</span><br />""" % (collection)
else:
output += """<br /><span class=info>Collection \"%s\" has already been deleted from the database table \"externalcollection\" or was never there.</span><br />""" % (collection)
external_collections_file = []
external_collections_db = []
for coll in external_collections_dictionary.values():
external_collections_file.append(coll.name)
external_collections_file.sort()
query = """SELECT name from externalcollection"""
results = run_sql(query)
for result in results:
external_collections_db.append(result[0])
external_collections_db.sort()
number_file = len(external_collections_file)
number_db = len(external_collections_db)
if external_collections_file == external_collections_db:
output += """<br /><span class="info">External collections are consistent.</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections""" % {
"number_db" : number_db,
"number_file" : number_file}
elif len(external_collections_file) > len(external_collections_db):
external_collections_diff = list(set(external_collections_file) - set(external_collections_db))
external_collections_db.extend(external_collections_diff)
external_collections_db.sort()
if external_collections_file == external_collections_db:
output += """<br /><span class="warning">There is an inconsistency:</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections
&nbsp;(<span class="warning">missing: %(diff)s</span>)<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections
<br /><br /><a href="%(site_url)s/admin/websearch/websearchadmin.py/checkexternalcollections?colID=%(colID)s&amp;icl=%(diff)s&amp;update=add&amp;ln=%(ln)s">
Click here</a> to update your database adding the missing collections. If the problem persists please check your configuration manually.""" % {
"number_db" : number_db,
"number_file" : number_file,
"diff" : external_collections_diff,
"site_url" : CFG_SITE_URL,
"colID" : colID,
"ln" : ln}
else:
output += """<br /><span class="warning">There is an inconsistency:</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections
<br /><br /><span class="warning">The external collections do not match.</span>
<br />To fix the problem please check your configuration manually.""" % {
"number_db" : number_db,
"number_file" : number_file}
elif len(external_collections_file) < len(external_collections_db):
external_collections_diff = list(set(external_collections_db) - set(external_collections_file))
external_collections_file.extend(external_collections_diff)
external_collections_file.sort()
if external_collections_file == external_collections_db:
output += """<br /><span class="warning">There is an inconsistency:</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections
&nbsp;(<span class="warning">extra: %(diff)s</span>)<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections
<br /><br /><a href="%(site_url)s/admin/websearch/websearchadmin.py/checkexternalcollections?colID=%(colID)s&amp;icl=%(diff)s&amp;update=del&amp;ln=%(ln)s">
Click here</a> to force remove the extra collections from your database (warning: use with caution!). If the problem persists please check your configuration manually.""" % {
"number_db" : number_db,
"number_file" : number_file,
"diff" : external_collections_diff,
"site_url" : CFG_SITE_URL,
"colID" : colID,
"ln" : ln}
else:
output += """<br /><span class="warning">There is an inconsistency:</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections
<br /><br /><span class="warning">The external collections do not match.</span>
<br />To fix the problem please check your configuration manually.""" % {
"number_db" : number_db,
"number_file" : number_file}
else:
output += """<br /><span class="warning">There is an inconsistency:</span><br /><br />
&nbsp;&nbsp;&nbsp;- database table \"externalcollection\" has %(number_db)s collections<br />
&nbsp;&nbsp;&nbsp;- configuration file \"websearch_external_collections_config.py\" has %(number_file)s collections
<br /><br /><span class="warning">The number of external collections is the same but the collections do not match.</span>
<br />To fix the problem please check your configuration manually.""" % {
"number_db" : number_db,
"number_file" : number_file}
body = [output]
if callback:
return perform_index(colID, ln, "perform_checkexternalcollections", addadminbox(subtitle, body))
else:
return addadminbox(subtitle, body)
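# The consistency check in perform_checkexternalcollections boils down to
# sorting both name lists and diffing them with set arithmetic. A minimal
# sketch (plain lists stand in for the config file and the database table;
# the helper name is hypothetical):

```python
def diff_collections(in_file, in_db):
    # Returns (missing_in_db, extra_in_db): names the configuration file
    # has but the database lacks, and vice versa.
    file_set, db_set = set(in_file), set(in_db)
    return sorted(file_set - db_set), sorted(db_set - file_set)

missing, extra = diff_collections(["ADS", "CiteSeer", "Google"], ["ADS", "Google"])
print(missing)  # names that would need to be INSERTed into "externalcollection"
print(extra)    # names that would need to be DELETEd from it
```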
def col_has_son(colID, rtype='r'):
"""Return True if the collection has at least one son."""
return run_sql("SELECT id_son FROM collection_collection WHERE id_dad=%s and type=%s LIMIT 1", (colID, rtype)) != ()
def get_col_tree(colID, rtype=''):
"""Returns a presentation of the tree as a list. TODO: Add loop detection
colID - startpoint for the tree
rtype - get regular or virtual part of the tree"""
try:
colID = int(colID)
stack = [colID]
ssize = 0
tree = [(colID, 0, 0, colID, 'r')]
while len(stack) > 0:
ccolID = stack.pop()
if ccolID == colID and rtype:
res = run_sql("SELECT id_son, score, type FROM collection_collection WHERE id_dad=%s AND type=%s ORDER BY score ASC,id_son", (ccolID, rtype))
else:
res = run_sql("SELECT id_son, score, type FROM collection_collection WHERE id_dad=%s ORDER BY score ASC,id_son", (ccolID, ))
ssize += 1
ntree = []
for i in range(0, len(res)):
id_son = res[i][0]
score = res[i][1]
rtype = res[i][2]
stack.append(id_son)
if i == (len(res) - 1):
up = 0
else:
up = 1
if i == 0:
down = 0
else:
down = 1
ntree.insert(0, (id_son, up, down, ccolID, rtype))
tree = tree[0:ssize] + ntree + tree[ssize:len(tree)]
return tree
except StandardError, e:
register_exception()
return ()
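# The flattened tree built by get_col_tree encodes, for every node, whether it
# can be moved up or down among its siblings and who its father is. A minimal
# in-memory sketch of the same representation (a `children` dict stands in for
# the collection_collection table; names hypothetical, and the exact sibling
# ordering of the SQL version is not reproduced):

```python
def flatten_tree(root, children):
    # Depth-first flattening into (id, up, down, dad) tuples, where
    # up/down signal whether the node can be moved among its siblings:
    # the first sibling cannot move up, the last cannot move down.
    tree = [(root, 0, 0, root)]
    def visit(dad):
        sons = children.get(dad, [])
        for i, son in enumerate(sons):
            up = 0 if i == 0 else 1
            down = 0 if i == len(sons) - 1 else 1
            tree.append((son, up, down, dad))
            visit(son)
    visit(root)
    return tree

# Collection 1 has sons 2 and 3; collection 2 has son 4.
print(flatten_tree(1, {1: [2, 3], 2: [4]}))
```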
def add_col_dad_son(add_dad, add_son, rtype):
"""Add a son to a collection (dad)
add_dad - add to this collection id
add_son - add this collection id
rtype - either regular or virtual"""
try:
res = run_sql("SELECT score FROM collection_collection WHERE id_dad=%s ORDER BY score ASC", (add_dad, ))
highscore = 0
for score in res:
if int(score[0]) > highscore:
highscore = int(score[0])
highscore += 1
res = run_sql("INSERT INTO collection_collection(id_dad,id_son,score,type) values(%s,%s,%s,%s)", (add_dad, add_son, highscore, rtype))
return (1, highscore)
except StandardError, e:
register_exception()
return (0, e)
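# The score bookkeeping in add_col_dad_son simply places the new son after the
# current maximum score. The same max-plus-one pattern, sketched against a
# plain list of (score,) rows like those returned by run_sql (helper name
# hypothetical):

```python
def next_score(rows):
    # rows: sequence of (score,) tuples from the SELECT score ... query;
    # the new son is appended after the highest existing score.
    highscore = 0
    for (score,) in rows:
        if int(score) > highscore:
            highscore = int(score)
    return highscore + 1

print(next_score([(1,), (5,), (3,)]))  # the new son gets score 6
print(next_score([]))                  # first son of a dad gets score 1
```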
def compare_on_val(first, second):
"""Compare the two values"""
return cmp(first[1], second[1])
def get_col_fld(colID=-1, type = '', id_field=''):
"""Returns the fields and field values associated with a collection, optionally filtered by collection id, type and/or field id.
colID - collection id
type - the type of field use ('soo', 'seo', 'sew', ...)
id_field - id of the field"""
sql = "SELECT id_field,id_fieldvalue,type,score,score_fieldvalue FROM collection_field_fieldvalue, field WHERE id_field=field.id"
params = []
if colID > -1:
sql += " AND id_collection=%s"
params.append(colID)
if id_field:
sql += " AND id_field=%s"
params.append(id_field)
if type:
sql += " AND type=%s"
params.append(type)
sql += " ORDER BY type, score desc, score_fieldvalue desc"
res = run_sql(sql, tuple(params))
return res
def get_col_pbx(colID=-1, ln='', position = ''):
"""Returns either all portalboxes associated with a collection, or filters them by collection id, language and/or position.
colID - collection id
ln - language id
position - portalbox position code"""
sql = "SELECT id_portalbox, id_collection, ln, score, position, title, body FROM collection_portalbox, portalbox WHERE id_portalbox = portalbox.id"
params = []
if colID > -1:
sql += " AND id_collection=%s"
params.append(colID)
if ln:
sql += " AND ln=%s"
params.append(ln)
if position:
sql += " AND position=%s"
params.append(position)
sql += " ORDER BY position, ln, score desc"
res = run_sql(sql, tuple(params))
return res
def get_col_fmt(colID=-1):
"""Returns all formats currently associated with a collection, or for one specific collection
colID - the id of the collection"""
if colID not in [-1, "-1"]:
res = run_sql("SELECT id_format, id_collection, code, score FROM collection_format, format WHERE id_format = format.id AND id_collection=%s ORDER BY score desc", (colID, ))
else:
res = run_sql("SELECT id_format, id_collection, code, score FROM collection_format, format WHERE id_format = format.id ORDER BY score desc")
return res
def get_col_rnk(colID, ln):
""" Returns a list of the rank methods the given collection is attached to
colID - id from collection"""
try:
res1 = dict(run_sql("SELECT id_rnkMETHOD, '' FROM collection_rnkMETHOD WHERE id_collection=%s", (colID, )))
res2 = get_def_name('', "rnkMETHOD")
result = filter(lambda x: res1.has_key(x[0]), res2)
return result
except StandardError, e:
return ()
def get_pbx():
"""Returns all portalboxes"""
res = run_sql("SELECT id, title, body FROM portalbox ORDER by title,body")
return res
def get_fld_value(fldvID = ''):
"""Returns fieldvalue"""
sql = "SELECT id, name, value FROM fieldvalue"
params = []
if fldvID:
sql += " WHERE id=%s"
params.append(fldvID)
sql += " ORDER BY name"
res = run_sql(sql, tuple(params))
return res
def get_pbx_pos():
"""Returns a dictionary of all the positions for a portalbox"""
position = {}
position["rt"] = "Right Top"
position["lt"] = "Left Top"
position["te"] = "Title Epilog"
position["tp"] = "Title Prolog"
position["ne"] = "Narrow by coll epilog"
position["np"] = "Narrow by coll prolog"
return position
def get_sort_nametypes():
"""Return a dictionary of the various translation name types for the fields"""
type = {}
type['soo'] = 'Sort options'
type['seo'] = 'Search options'
type['sew'] = 'Search within'
return type
def get_fmt_nametypes():
"""Return a list of the various translation name types for the output formats"""
type = []
type.append(('ln', 'Long name'))
return type
def get_fld_nametypes():
"""Return a list of the various translation name types for the fields"""
type = []
type.append(('ln', 'Long name'))
return type
def get_col_nametypes():
"""Return a list of the various translation name types for the collections"""
type = []
type.append(('ln', 'Long name'))
return type
def find_last(tree, start_son):
"""Find the previous collection in the tree with the same father as start_son"""
id_dad = tree[start_son][3]
while start_son > 0:
start_son -= 1
if tree[start_son][3] == id_dad:
return start_son
def find_next(tree, start_son):
"""Find the next collection in the tree with the same father as start_son"""
id_dad = tree[start_son][3]
while start_son < len(tree) - 1:
start_son += 1
if tree[start_son][3] == id_dad:
return start_son
def remove_col_subcol(id_son, id_dad, type):
"""Remove a collection as a son of another collection in the tree. If the collection isn't used elsewhere in the tree, also remove all registered sons of id_son.
id_son - collection id of son to remove
id_dad - the id of the dad"""
try:
if id_son != id_dad:
tree = get_col_tree(id_son)
run_sql("DELETE FROM collection_collection WHERE id_son=%s and id_dad=%s", (id_son, id_dad))
else:
tree = get_col_tree(id_son, type)
run_sql("DELETE FROM collection_collection WHERE id_son=%s and id_dad=%s and type=%s", (id_son, id_dad, type))
if not run_sql("SELECT id_dad,id_son,type,score from collection_collection WHERE id_son=%s and type=%s", (id_son, type)):
for (id, up, down, dad, rtype) in tree:
run_sql("DELETE FROM collection_collection WHERE id_son=%s and id_dad=%s", (id, dad))
return (1, "")
except StandardError, e:
return (0, e)
def check_col(add_dad, add_son):
"""Check if the collection can be placed as a son of the dad without causing loops.
add_dad - collection id
add_son - collection id"""
try:
stack = [add_dad]
res = run_sql("SELECT id_dad FROM collection_collection WHERE id_dad=%s AND id_son=%s", (add_dad, add_son))
if res:
raise StandardError
while len(stack) > 0:
colID = stack.pop()
res = run_sql("SELECT id_dad FROM collection_collection WHERE id_son=%s", (colID, ))
for id in res:
if int(id[0]) == int(add_son):
# raise StandardError # the original approach, but it did not work
return (0, "")
else:
stack.append(id[0])
return (1, "")
except StandardError, e:
return (0, e)
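check_col guards against cycles by walking up from add_dad through all of its dads with a stack; if add_son turns up among the ancestors, attaching it would close a loop. The same stack-based walk over an in-memory child-to-dads mapping (a stand-in for the collection_collection table):

```python
def would_create_loop(dads_of, add_dad, add_son):
    """Return True if making add_son a child of add_dad introduces a cycle.

    dads_of maps a collection id to the list of its dads, mirroring
    SELECT id_dad FROM collection_collection WHERE id_son=%s.
    """
    stack = [add_dad]
    while stack:
        col = stack.pop()
        if col == add_son:
            return True  # add_son is already an ancestor of add_dad
        stack.extend(dads_of.get(col, []))
    return False

# hierarchy 1 -> 2 -> 3: 2's dad is 1, 3's dad is 2
dads_of = {2: [1], 3: [2]}
```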
def attach_rnk_col(colID, rnkID):
"""attach rank method to collection
rnkID - id from rnkMETHOD table
colID - id of collection, as in collection table """
try:
res = run_sql("INSERT INTO collection_rnkMETHOD(id_collection, id_rnkMETHOD) values (%s,%s)", (colID, rnkID))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def detach_rnk_col(colID, rnkID):
"""detach rank method from collection
rnkID - id from rnkMETHOD table
colID - id of collection, as in collection table """
try:
res = run_sql("DELETE FROM collection_rnkMETHOD WHERE id_collection=%s AND id_rnkMETHOD=%s", (colID, rnkID))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def switch_col_treescore(col_1, col_2):
try:
res1 = run_sql("SELECT score FROM collection_collection WHERE id_dad=%s and id_son=%s", (col_1[3], col_1[0]))
res2 = run_sql("SELECT score FROM collection_collection WHERE id_dad=%s and id_son=%s", (col_2[3], col_2[0]))
res = run_sql("UPDATE collection_collection SET score=%s WHERE id_dad=%s and id_son=%s", (res2[0][0], col_1[3], col_1[0]))
res = run_sql("UPDATE collection_collection SET score=%s WHERE id_dad=%s and id_son=%s", (res1[0][0], col_2[3], col_2[0]))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def move_col_tree(col_from, col_to, move_to_rtype=''):
"""Move a collection from one point in the tree to another; it becomes a son of the endpoint.
col_from - move this collection from current point
col_to - and set it as a son of this collection.
move_to_rtype - either virtual or regular collection"""
try:
res = run_sql("SELECT score FROM collection_collection WHERE id_dad=%s ORDER BY score asc", (col_to[0], ))
highscore = 0
for score in res:
if int(score[0]) > highscore:
highscore = int(score[0])
highscore += 1
if not move_to_rtype:
move_to_rtype = col_from[4]
res = run_sql("DELETE FROM collection_collection WHERE id_son=%s and id_dad=%s", (col_from[0], col_from[3]))
res = run_sql("INSERT INTO collection_collection(id_dad,id_son,score,type) values(%s,%s,%s,%s)", (col_to[0], col_from[0], highscore, move_to_rtype))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def remove_pbx(colID, pbxID, ln):
"""Removes a portalbox from the given collection.
colID - the collection the portalbox is connected to
pbxID - the portalbox which should be removed from the collection.
ln - the language of the portalbox to be removed"""
try:
res = run_sql("DELETE FROM collection_portalbox WHERE id_collection=%s AND id_portalbox=%s AND ln=%s", (colID, pbxID, ln))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def remove_fmt(colID, fmtID):
"""Removes a format from the collection given.
colID - the collection the format is connected to
fmtID - the format which should be removed from the collection."""
try:
res = run_sql("DELETE FROM collection_format WHERE id_collection=%s AND id_format=%s", (colID, fmtID))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def remove_fld(colID, fldID, fldvID=''):
"""Removes a field from the given collection.
colID - the collection the field is connected to
fldID - the field which should be removed from the collection."""
try:
sql = "DELETE FROM collection_field_fieldvalue WHERE id_collection=%s AND id_field=%s"
params = [colID, fldID]
if fldvID:
if fldvID != "None":
sql += " AND id_fieldvalue=%s"
params.append(fldvID)
else:
sql += " AND id_fieldvalue is NULL"
res = run_sql(sql, tuple(params))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def delete_fldv(fldvID):
"""Deletes all data for the given fieldvalue
fldvID - delete all data in the tables associated with fieldvalue and this id"""
try:
res = run_sql("DELETE FROM collection_field_fieldvalue WHERE id_fieldvalue=%s", (fldvID, ))
res = run_sql("DELETE FROM fieldvalue WHERE id=%s", (fldvID, ))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def delete_pbx(pbxID):
"""Deletes all data for the given portalbox
pbxID - delete all data in the tables associated with portalbox and this id """
try:
res = run_sql("DELETE FROM collection_portalbox WHERE id_portalbox=%s", (pbxID, ))
res = run_sql("DELETE FROM portalbox WHERE id=%s", (pbxID, ))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def delete_fmt(fmtID):
"""Deletes all data for the given format
fmtID - delete all data in the tables associated with format and this id """
try:
res = run_sql("DELETE FROM format WHERE id=%s", (fmtID, ))
res = run_sql("DELETE FROM collection_format WHERE id_format=%s", (fmtID, ))
res = run_sql("DELETE FROM formatname WHERE id_format=%s", (fmtID, ))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def delete_col(colID):
"""Deletes all data for the given collection
colID - delete all data in the tables associated with collection and this id """
try:
res = run_sql("DELETE FROM collection WHERE id=%s", (colID, ))
res = run_sql("DELETE FROM collectionname WHERE id_collection=%s", (colID, ))
res = run_sql("DELETE FROM collection_rnkMETHOD WHERE id_collection=%s", (colID, ))
res = run_sql("DELETE FROM collection_collection WHERE id_dad=%s", (colID, ))
res = run_sql("DELETE FROM collection_collection WHERE id_son=%s", (colID, ))
res = run_sql("DELETE FROM collection_portalbox WHERE id_collection=%s", (colID, ))
res = run_sql("DELETE FROM collection_format WHERE id_collection=%s", (colID, ))
res = run_sql("DELETE FROM collection_field_fieldvalue WHERE id_collection=%s", (colID, ))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def add_fmt(code, name, rtype):
"""Add a new output format. Returns the id of the format.
code - the code for the format, max 6 chars.
name - the default name for the default language of the format.
rtype - the default nametype"""
try:
res = run_sql("INSERT INTO format (code, name) values (%s,%s)", (code, name))
fmtID = run_sql("SELECT id FROM format WHERE code=%s", (code,))
res = run_sql("INSERT INTO formatname(id_format, type, ln, value) VALUES (%s,%s,%s,%s)",
(fmtID[0][0], rtype, CFG_SITE_LANG, name))
return (1, fmtID)
except StandardError, e:
register_exception()
return (0, e)
def update_fldv(fldvID, name, value):
"""Modify existing fieldvalue
fldvID - id of fieldvalue to modify
value - the value of the fieldvalue
name - the name of the fieldvalue."""
try:
res = run_sql("UPDATE fieldvalue set name=%s where id=%s", (name, fldvID))
res = run_sql("UPDATE fieldvalue set value=%s where id=%s", (value, fldvID))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def add_fldv(name, value):
"""Add a new fieldvalue, returns id of fieldvalue
value - the value of the fieldvalue
name - the name of the fieldvalue."""
try:
res = run_sql("SELECT id FROM fieldvalue WHERE name=%s and value=%s", (name, value))
if not res:
res = run_sql("INSERT INTO fieldvalue (name, value) values (%s,%s)", (name, value))
res = run_sql("SELECT id FROM fieldvalue WHERE name=%s and value=%s", (name, value))
if res:
return (1, res[0][0])
else:
raise StandardError
except StandardError, e:
register_exception()
return (0, e)
def add_pbx(title, body):
try:
res = run_sql("INSERT INTO portalbox (title, body) values (%s,%s)", (title, body))
res = run_sql("SELECT id FROM portalbox WHERE title=%s AND body=%s", (title, body))
if res:
return (1, res[0][0])
else:
raise StandardError
except StandardError, e:
register_exception()
return (0, e)
def add_col(colNAME, dbquery=None):
"""Adds a new collection to the collection table
colNAME - the default name for the collection, saved to collection and collectionname
dbquery - query related to the collection"""
# Sometimes '' is passed instead of None, so normalise it to None
if not dbquery:
dbquery = None
try:
rtype = get_col_nametypes()[0][0]
colID = run_sql("SELECT id FROM collection WHERE id=1")
if colID:
res = run_sql("INSERT INTO collection (name,dbquery) VALUES (%s,%s)",
(colNAME,dbquery))
else:
res = run_sql("INSERT INTO collection (id,name,dbquery) VALUES (1,%s,%s)",
(colNAME,dbquery))
colID = run_sql("SELECT id FROM collection WHERE name=%s", (colNAME,))
res = run_sql("INSERT INTO collectionname(id_collection, type, ln, value) VALUES (%s,%s,%s,%s)",
(colID[0][0], rtype, CFG_SITE_LANG, colNAME))
if colID:
return (1, colID[0][0])
else:
raise StandardError
except StandardError, e:
register_exception()
return (0, e)
def add_col_pbx(colID, pbxID, ln, position, score=''):
"""add a portalbox to the collection.
colID - the id of the collection involved
pbxID - the portalbox to add
ln - which language the portalbox is for
score - decides which portalbox is the most important
position - position on page the portalbox should appear."""
try:
if score:
res = run_sql("INSERT INTO collection_portalbox(id_portalbox, id_collection, ln, score, position) values (%s,%s,%s,%s,%s)", (pbxID, colID, ln, score, position))
else:
res = run_sql("SELECT score FROM collection_portalbox WHERE id_collection=%s and ln=%s and position=%s ORDER BY score desc, ln, position", (colID, ln, position))
if res:
score = int(res[0][0])
else:
score = 0
res = run_sql("INSERT INTO collection_portalbox(id_portalbox, id_collection, ln, score, position) values (%s,%s,%s,%s,%s)", (pbxID, colID, ln, (score + 1), position))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def add_col_fmt(colID, fmtID, score=''):
"""Add a output format to the collection.
colID - the id of the collection involved
fmtID - the id of the format.
score - the score of the format, decides sorting, if not given, place the format on top"""
try:
if score:
res = run_sql("INSERT INTO collection_format(id_format, id_collection, score) values (%s,%s,%s)", (fmtID, colID, score))
else:
res = run_sql("SELECT score FROM collection_format WHERE id_collection=%s ORDER BY score desc", (colID, ))
if res:
score = int(res[0][0])
else:
score = 0
res = run_sql("INSERT INTO collection_format(id_format, id_collection, score) values (%s,%s,%s)", (fmtID, colID, (score + 1)))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def add_col_fld(colID, fldID, type, fldvID=''):
"""Add a sort/search/field to the collection. The score is computed automatically and the new field is placed on top.
colID - the id of the collection involved
fldID - the id of the field.
fldvID - the id of the fieldvalue.
type - which type, seo, sew..."""
try:
if fldvID and fldvID not in [-1, "-1"]:
run_sql("DELETE FROM collection_field_fieldvalue WHERE id_collection=%s AND id_field=%s and type=%s and id_fieldvalue is NULL", (colID, fldID, type))
res = run_sql("SELECT score FROM collection_field_fieldvalue WHERE id_collection=%s AND id_field=%s and type=%s ORDER BY score desc", (colID, fldID, type))
if res:
score = int(res[0][0])
res = run_sql("SELECT score_fieldvalue FROM collection_field_fieldvalue WHERE id_collection=%s AND id_field=%s and type=%s ORDER BY score_fieldvalue desc", (colID, fldID, type))
else:
res = run_sql("SELECT score FROM collection_field_fieldvalue WHERE id_collection=%s and type=%s ORDER BY score desc", (colID, type))
if res:
score = int(res[0][0]) + 1
else:
score = 1
res = run_sql("SELECT id_collection,id_field,id_fieldvalue,type,score,score_fieldvalue FROM collection_field_fieldvalue where id_field=%s and id_collection=%s and type=%s and id_fieldvalue=%s", (fldID, colID, type, fldvID))
if not res:
run_sql("UPDATE collection_field_fieldvalue SET score_fieldvalue=score_fieldvalue+1 WHERE id_field=%s AND id_collection=%s and type=%s", (fldID, colID, type))
res = run_sql("INSERT INTO collection_field_fieldvalue(id_field, id_fieldvalue, id_collection, type, score, score_fieldvalue) values (%s,%s,%s,%s,%s,%s)", (fldID, fldvID, colID, type, score, 1))
else:
return (0, (1, "Already exists"))
else:
res = run_sql("SELECT id_collection,id_field,id_fieldvalue,type,score,score_fieldvalue FROM collection_field_fieldvalue WHERE id_collection=%s AND type=%s and id_field=%s and id_fieldvalue is NULL", (colID, type, fldID))
if res:
return (0, (1, "Already exists"))
else:
run_sql("UPDATE collection_field_fieldvalue SET score=score+1")
res = run_sql("INSERT INTO collection_field_fieldvalue(id_field, id_collection, type, score,score_fieldvalue) values (%s,%s,%s,%s, 0)", (fldID, colID, type, 1))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def modify_dbquery(colID, dbquery=None):
"""Modify the dbquery of an collection.
colID - the id of the collection involved
dbquery - the new dbquery"""
# BTW, sometimes '' is passed instead of None, so change it to None
if not dbquery:
dbquery = None
try:
res = run_sql("UPDATE collection SET dbquery=%s WHERE id=%s", (dbquery, colID))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def modify_pbx(colID, pbxID, sel_ln, score='', position='', title='', body=''):
"""Modify a portalbox
colID - the id of the collection involved
pbxID - the id of the portalbox that should be modified
sel_ln - the language of the portalbox that should be modified
title - the title
body - the content
score - if several portalboxes in one position, who should appear on top.
position - position on page"""
try:
if title:
res = run_sql("UPDATE portalbox SET title=%s WHERE id=%s", (title, pbxID))
if body:
res = run_sql("UPDATE portalbox SET body=%s WHERE id=%s", (body, pbxID))
if score:
res = run_sql("UPDATE collection_portalbox SET score=%s WHERE id_collection=%s and id_portalbox=%s and ln=%s", (score, colID, pbxID, sel_ln))
if position:
res = run_sql("UPDATE collection_portalbox SET position=%s WHERE id_collection=%s and id_portalbox=%s and ln=%s", (position, colID, pbxID, sel_ln))
return (1, "")
except Exception, e:
register_exception()
return (0, e)
def switch_fld_score(colID, id_1, id_2):
"""Switch the scores of id_1 and id_2 in collection_field_fieldvalue
colID - the collection that id_1 and id_2 are connected to
id_1/id_2 - the ids of the fields whose scores should be swapped"""
try:
res1 = run_sql("SELECT score FROM collection_field_fieldvalue WHERE id_collection=%s and id_field=%s", (colID, id_1))
res2 = run_sql("SELECT score FROM collection_field_fieldvalue WHERE id_collection=%s and id_field=%s", (colID, id_2))
if res1[0][0] == res2[0][0]:
return (0, (1, "Cannot rearrange the selected fields; either rearrange by name or use the MySQL client to fix the problem."))
else:
res = run_sql("UPDATE collection_field_fieldvalue SET score=%s WHERE id_collection=%s and id_field=%s", (res2[0][0], colID, id_1))
res = run_sql("UPDATE collection_field_fieldvalue SET score=%s WHERE id_collection=%s and id_field=%s", (res1[0][0], colID, id_2))
return (1, "")
except StandardError, e:
register_exception()
return (0, e)
def switch_fld_value_score(colID, id_1, fldvID_1, fldvID_2):
"""Switch the scores of two fieldvalues attached to the same field
colID - the collection the field is connected to
id_1 - the id of the field
fldvID_1/fldvID_2 - the ids of the fieldvalues whose scores should be swapped"""
try:
res1 = run_sql("SELECT score_fieldvalue FROM collection_field_fieldvalue WHERE id_collection=%s and id_field=%s and id_fieldvalue=%s", (colID, id_1, fldvID_1))
res2 = run_sql("SELECT score_fieldvalue FROM collection_field_fieldvalue WHERE id_collection=%s and id_field=%s and id_fieldvalue=%s", (colID, id_1, fldvID_2))
if res1[0][0] == res2[0][0]:
return (0, (1, "Cannot rearrange the selected fields; either rearrange by name or use the MySQL client to fix the problem."))
else:
res = run_sql("UPDATE collection_field_fieldvalue SET score_fieldvalue=%s WHERE id_collection=%s and id_field=%s and id_fieldvalue=%s", (res2[0][0], colID, id_1, fldvID_1))
res = run_sql("UPDATE collection_field_fieldvalue SET score_fieldvalue=%s WHERE id_collection=%s and id_field=%s and id_fieldvalue=%s", (res1[0][0], colID, id_1, fldvID_2))
return (1, "")
except Exception, e:
register_exception()
return (0, e)
def switch_pbx_score(colID, id_1, id_2, sel_ln):
"""Switch the scores of two portalboxes in collection_portalbox.
colID - the collection the portalboxes are connected to
id_1/id_2 - the ids of the portalboxes whose scores should be swapped
sel_ln - the language of the portalboxes"""
try:
res1 = run_sql("SELECT score FROM collection_portalbox WHERE id_collection=%s and id_portalbox=%s and ln=%s", (colID, id_1, sel_ln))
res2 = run_sql("SELECT score FROM collection_portalbox WHERE id_collection=%s and id_portalbox=%s and ln=%s", (colID, id_2, sel_ln))
if res1[0][0] == res2[0][0]:
return (0, (1, "Cannot rearrange the selected fields; either rearrange by name or use the MySQL client to fix the problem."))
res = run_sql("UPDATE collection_portalbox SET score=%s WHERE id_collection=%s and id_portalbox=%s and ln=%s", (res2[0][0], colID, id_1, sel_ln))
res = run_sql("UPDATE collection_portalbox SET score=%s WHERE id_collection=%s and id_portalbox=%s and ln=%s", (res1[0][0], colID, id_2, sel_ln))
return (1, "")
except Exception, e:
register_exception()
return (0, e)
def switch_score(colID, id_1, id_2, table):
"""Switch the scores of id_1 and id_2 in the table given by the argument.
colID - collection the id_1 or id_2 is connected to
id_1/id_2 - id field from tables like format..portalbox...
table - name of the table"""
try:
res1 = run_sql("SELECT score FROM collection_%s WHERE id_collection=%%s and id_%s=%%s" % (table, table), (colID, id_1))
res2 = run_sql("SELECT score FROM collection_%s WHERE id_collection=%%s and id_%s=%%s" % (table, table), (colID, id_2))
if res1[0][0] == res2[0][0]:
return (0, (1, "Cannot rearrange the selected fields; either rearrange by name or use the MySQL client to fix the problem."))
res = run_sql("UPDATE collection_%s SET score=%%s WHERE id_collection=%%s and id_%s=%%s" % (table, table), (res2[0][0], colID, id_1))
res = run_sql("UPDATE collection_%s SET score=%%s WHERE id_collection=%%s and id_%s=%%s" % (table, table), (res1[0][0], colID, id_2))
return (1, "")
except Exception, e:
register_exception()
return (0, e)
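switch_score has to interpolate the table-name suffix directly into the SQL string (identifiers cannot be bound as %s parameters), so it is only safe when callers pass a known suffix. A hedged sketch of a whitelist guard one could put in front of such string-built queries; the ALLOWED_SUFFIXES set is an assumption for illustration, not taken from the source:

```python
# assumed whitelist for illustration; not taken from the source
ALLOWED_SUFFIXES = ('format', 'portalbox', 'rnkMETHOD')

def build_score_update(table):
    """Build the UPDATE statement for collection_<table>, refusing unknown suffixes."""
    if table not in ALLOWED_SUFFIXES:
        raise ValueError("unexpected table suffix: %r" % (table,))
    # %%s survives the interpolation as a literal %s placeholder for run_sql
    return ("UPDATE collection_%s SET score=%%s "
            "WHERE id_collection=%%s and id_%s=%%s" % (table, table))
```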
def get_detailed_page_tabs(colID=None, recID=None, ln=CFG_SITE_LANG):
"""
Returns the complete list of tabs to be displayed in the
detailed record pages.
Returned structured is a dict with
- key : last component of the URL that leads to the detailed record tab: http://www.../CFG_SITE_RECORD/74/key
- values: a dictionary with the following keys:
- label: *string* label to be printed as tab (Not localized here)
- visible: *boolean* if False, tab should not be shown
- enabled: *boolean* if False, tab should be disabled
- order: *int* position of the tab in the list of tabs
- ln: language of the tab labels
returns dict
"""
_ = gettext_set_language(ln)
tabs = {'metadata' : {'label': _('Information'), 'visible': False, 'enabled': True, 'order': 1},
'references': {'label': _('References'), 'visible': False, 'enabled': True, 'order': 2},
'citations' : {'label': _('Citations'), 'visible': False, 'enabled': True, 'order': 3},
'keywords' : {'label': _('Keywords'), 'visible': False, 'enabled': True, 'order': 4},
'comments' : {'label': _('Comments'), 'visible': False, 'enabled': True, 'order': 5},
'reviews' : {'label': _('Reviews'), 'visible': False, 'enabled': True, 'order': 6},
'usage' : {'label': _('Usage statistics'), 'visible': False, 'enabled': True, 'order': 7},
'files' : {'label': _('Files'), 'visible': False, 'enabled': True, 'order': 8},
'plots' : {'label': _('Plots'), 'visible': False, 'enabled': True, 'order': 9},
'holdings' : {'label': _('Holdings'), 'visible': False, 'enabled': True, 'order': 10},
'linkbacks' : {'label': _('Linkbacks'), 'visible': False, 'enabled': True, 'order': 11},
}
res = run_sql("SELECT tabs FROM collectiondetailedrecordpagetabs " + \
"WHERE id_collection=%s", (colID, ))
if len(res) > 0:
tabs_state = res[0][0].split(';')
for tab_state in tabs_state:
if tabs.has_key(tab_state):
tabs[tab_state]['visible'] = True
else:
# no preference set for this collection.
# assume all tabs are displayed
for key in tabs.keys():
tabs[key]['visible'] = True
if not CFG_WEBCOMMENT_ALLOW_COMMENTS:
tabs['comments']['visible'] = False
tabs['comments']['enabled'] = False
if not CFG_WEBCOMMENT_ALLOW_REVIEWS:
tabs['reviews']['visible'] = False
tabs['reviews']['enabled'] = False
if recID is not None:
# Disable references if no references found
#bfo = BibFormatObject(recID)
#if bfe_references.format_element(bfo, '', '') == '':
# tabs['references']['enabled'] = False
## FIXME: the above was commented out because bfe_references
## may be too slow. And we do not really need this anyway
## because we can disable tabs in WebSearch Admin on a
## collection-by-collection basis. If we need this, then we
## should probably call bfo.fields('999') here that should be
## much faster than calling bfe_references.
# Disable citations if no citations found
#if len(get_cited_by(recID)) == 0:
# tabs['citations']['enabled'] = False
## FIXME: the above was commented out because get_cited_by()
## may be too slow. And we do not really need this anyway
## because we can disable tabs in WebSearch Admin on a
## collection-by-collection basis.
# Disable Files tab if no file found except for Plots:
disable_files_tab_p = True
for abibdoc in BibRecDocs(recID).list_bibdocs():
abibdoc_type = abibdoc.get_type()
if abibdoc_type == 'Plot':
continue # ignore attached plots
else:
if CFG_INSPIRE_SITE and not \
abibdoc_type in ('', 'INSPIRE-PUBLIC', 'Supplementary Material'):
# ignore non-empty, non-INSPIRE-PUBLIC, non-suppl doctypes for INSPIRE
continue
# okay, we found at least one non-Plot file:
disable_files_tab_p = False
break
if disable_files_tab_p:
tabs['files']['enabled'] = False
#Disable holdings tab if collection != Books
collection = run_sql("""select name from collection where id=%s""", (colID, ))
if collection[0][0] != 'Books':
tabs['holdings']['enabled'] = False
# Disable Plots tab if no docfile of doctype Plot found
brd = BibRecDocs(recID)
if len(brd.list_bibdocs('Plot')) == 0:
tabs['plots']['enabled'] = False
if CFG_CERN_SITE:
from invenio.search_engine import get_collection_reclist
if recID in get_collection_reclist("Books & Proceedings"):
tabs['holdings']['visible'] = True
tabs['holdings']['enabled'] = True
tabs[''] = tabs['metadata']
del tabs['metadata']
return tabs
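The per-collection tab preference is stored as one semicolon-separated string of tab keys; only the listed keys become visible, and a missing row means every tab is shown. A stripped-down sketch of that decoding step, with the collectiondetailedrecordpagetabs lookup replaced by a plain argument:

```python
def decode_tab_visibility(all_tabs, stored_prefs):
    """Return {tab_key: visible} from a ';'-separated preference string.

    stored_prefs is None when the collection has no row in the
    preferences table, in which case every tab stays visible
    (mirroring the fallback branch above).
    """
    if stored_prefs is None:
        return dict.fromkeys(all_tabs, True)
    visible = dict.fromkeys(all_tabs, False)
    for key in stored_prefs.split(';'):
        if key in visible:  # silently skip unknown keys
            visible[key] = True
    return visible

tab_keys = ['metadata', 'references', 'citations', 'files']
```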
def get_detailed_page_tabs_counts(recID):
"""
Returns the number of citations, references and comments/reviews
that have to be shown on the corresponding tabs in the
detailed record pages
@param recID: record id
@return: dictionary with following keys
'Citations': number of citations to be shown in the "Citations" tab
'References': number of references to be shown in the "References" tab
'Comments': number of comments to be shown in the "Comments" tab
'Reviews': number of reviews to be shown in the "Reviews" tab
"""
num_comments = 0 #num of comments
num_reviews = 0 #num of reviews
tabs_counts = {'Citations' : 0,
'References' : -1,
'Discussions' : 0,
'Comments' : 0,
'Reviews' : 0
}
from invenio.search_engine import get_field_tags, get_record
if CFG_BIBRANK_SHOW_CITATION_LINKS:
tabs_counts['Citations'] = get_cited_by_count(recID)
if not CFG_CERN_SITE: # FIXME: should be replaced by something like CFG_SHOW_REFERENCES
reftag = ""
reftags = get_field_tags("reference")
if reftags:
reftag = reftags[0]
tmprec = get_record(recID)
if reftag and len(reftag) > 4:
tabs_counts['References'] = len(record_get_field_instances(tmprec, reftag[0:3], reftag[3], reftag[4]))
# obtain number of comments/reviews
from invenio.webcommentadminlib import get_nb_reviews, get_nb_comments
if CFG_WEBCOMMENT_ALLOW_COMMENTS and CFG_WEBSEARCH_SHOW_COMMENT_COUNT:
num_comments = get_nb_comments(recID, count_deleted=False)
if CFG_WEBCOMMENT_ALLOW_REVIEWS and CFG_WEBSEARCH_SHOW_REVIEW_COUNT:
num_reviews = get_nb_reviews(recID, count_deleted=False)
if num_comments:
tabs_counts['Comments'] = num_comments
tabs_counts['Discussions'] += num_comments
if num_reviews:
tabs_counts['Reviews'] = num_reviews
tabs_counts['Discussions'] += num_reviews
return tabs_counts
diff --git a/modules/websession/lib/webuser.py b/modules/websession/lib/webuser.py
index 6cdf518a3..3e5e1f9f4 100644
--- a/modules/websession/lib/webuser.py
+++ b/modules/websession/lib/webuser.py
@@ -1,1404 +1,1409 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
This file implements all methods necessary for working with users and
sessions in Invenio. It contains methods for login and registration,
and for checking whether a user is a guest. It also provides the
machinery needed for session management via websession, as well as
Apache-related user authentication.
"""
__revision__ = "$Id$"
import cgi
import urllib
import urlparse
import socket
import smtplib
import re
import random
import datetime
from socket import gaierror
from flask import Request
from invenio.config import \
CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS, \
CFG_ACCESS_CONTROL_LEVEL_GUESTS, \
CFG_ACCESS_CONTROL_LEVEL_SITE, \
CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN, \
CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS, \
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT, \
CFG_SITE_ADMIN_EMAIL, \
CFG_SITE_LANG, \
CFG_SITE_NAME, \
CFG_SITE_NAME_INTL, \
CFG_SITE_SUPPORT_EMAIL, \
CFG_SITE_SECURE_URL, \
CFG_SITE_URL, \
CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS, \
+ CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS, \
CFG_CERN_SITE, \
CFG_INSPIRE_SITE, \
CFG_BIBAUTHORID_ENABLED, \
CFG_SITE_RECORD
try:
from flask import session
except ImportError:
pass
from invenio.dbquery import run_sql, OperationalError, \
serialize_via_marshal, deserialize_via_marshal
from invenio.access_control_admin import acc_get_role_id, acc_get_action_roles, acc_get_action_id, acc_is_user_in_role, acc_find_possible_activities
from invenio.access_control_mailcookie import mail_cookie_create_mail_activation
from invenio.access_control_firerole import acc_firerole_check_user, load_role_definition
from invenio.access_control_config import SUPERADMINROLE, CFG_EXTERNAL_AUTH_USING_SSO
from invenio.messages import gettext_set_language, wash_languages, wash_language
from invenio.mailutils import send_email
from invenio.errorlib import register_exception
from invenio.webgroup_dblayer import get_groups
from invenio.external_authentication import InvenioWebAccessExternalAuthError
from invenio.access_control_config import CFG_EXTERNAL_AUTHENTICATION, \
CFG_WEBACCESS_MSGS, CFG_WEBACCESS_WARNING_MSGS, CFG_EXTERNAL_AUTH_DEFAULT, \
CFG_TEMP_EMAIL_ADDRESS
from invenio.webuser_config import CFG_WEBUSER_USER_TABLES
import invenio.template
tmpl = invenio.template.load('websession')
re_invalid_nickname = re.compile(""".*[,'@]+.*""")
# pylint: disable=C0301
def createGuestUser():
"""Create a guest user by inserting a row with empty values into the user table.
createGuestUser() -> GuestUserID
"""
if CFG_ACCESS_CONTROL_LEVEL_GUESTS == 0:
try:
return run_sql("insert into user (email, note) values ('', '1')")
except OperationalError:
return None
else:
try:
return run_sql("insert into user (email, note) values ('', '0')")
except OperationalError:
return None
def page_not_authorized(req, referer='', uid='', text='', navtrail='', ln=CFG_SITE_LANG,
navmenuid=""):
"""Show error message when user is not authorized to do something.
@param referer: in case the displayed message propose a login link, this
is the url to return to after logging in. If not specified it is guessed
from req.
@param uid: the uid of the user. If not specified it is guessed from req.
@param text: the message to be displayed. If not specified it will be
guessed from the context.
"""
from invenio.webpage import page
_ = gettext_set_language(ln)
if not referer:
referer = req.unparsed_uri
if not CFG_ACCESS_CONTROL_LEVEL_SITE:
title = CFG_WEBACCESS_MSGS[5]
if not uid:
uid = getUid(req)
try:
res = run_sql("SELECT email FROM user WHERE id=%s AND note=1", (uid,))
if res and res[0][0]:
if text:
body = text
else:
body = "%s %s" % (CFG_WEBACCESS_WARNING_MSGS[9] % cgi.escape(res[0][0]),
("%s %s" % (CFG_WEBACCESS_MSGS[0] % urllib.quote(referer), CFG_WEBACCESS_MSGS[1])))
else:
if text:
body = text
else:
if CFG_ACCESS_CONTROL_LEVEL_GUESTS == 1:
body = CFG_WEBACCESS_MSGS[3]
else:
body = CFG_WEBACCESS_WARNING_MSGS[4] + CFG_WEBACCESS_MSGS[2]
except OperationalError, e:
body = _("Database problem") + ': ' + str(e)
elif CFG_ACCESS_CONTROL_LEVEL_SITE == 1:
title = CFG_WEBACCESS_MSGS[8]
body = "%s %s" % (CFG_WEBACCESS_MSGS[7], CFG_WEBACCESS_MSGS[2])
elif CFG_ACCESS_CONTROL_LEVEL_SITE == 2:
title = CFG_WEBACCESS_MSGS[6]
body = "%s %s" % (CFG_WEBACCESS_MSGS[4], CFG_WEBACCESS_MSGS[2])
return page(title=title,
language=ln,
uid=getUid(req),
body=body,
navtrail=navtrail,
req=req,
navmenuid=navmenuid)
def getUid(req):
"""Return the user ID, taking it from the cookie of the request.
Includes a control mechanism for guest users: they are inserted into
the database table when need be, and the cookie is sent back to the
client.
User ID is set to 0 when the client refuses the cookie or we are in
the read-only site operation mode.
User ID is set to -1 when we are in the permission-denied site
operation mode.
getUid(req) -> userId
"""
#if hasattr(req, '_user_info'):
# return req._user_info['_uid']
if CFG_ACCESS_CONTROL_LEVEL_SITE == 1: return 0
if CFG_ACCESS_CONTROL_LEVEL_SITE == 2: return -1
guest = 0
from flask import session
uid = session.uid
if not session.need_https:
if uid == -1: # first time, so create a guest user
if CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS:
uid = session['uid'] = createGuestUser()
session.set_remember_me(False)
guest = 1
else:
if CFG_ACCESS_CONTROL_LEVEL_GUESTS == 0:
session['uid'] = 0
session.set_remember_me(False)
return 0
else:
return -1
else:
if not hasattr(req, '_user_info') and 'user_info' in session:
req._user_info = session['user_info']
req._user_info = collect_user_info(req, refresh=True)
if guest == 0:
guest = isGuestUser(uid)
if guest:
if CFG_ACCESS_CONTROL_LEVEL_GUESTS == 0:
return uid
elif CFG_ACCESS_CONTROL_LEVEL_GUESTS >= 1:
return -1
else:
res = run_sql("SELECT note FROM user WHERE id=%s", (uid,))
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS == 0:
return uid
elif CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS >= 1 and res and res[0][0] in [1, "1"]:
return uid
else:
return -1
from invenio.webuser_flask import current_user, login_user, logout_user
getUid = lambda req: current_user.get_id()
def setUid(req, uid, remember_me=False):
"""It sets the userId into the session, and raise the cookie to the client.
"""
if uid > 0:
login_user(uid, remember_me)
else:
logout_user()
return uid
def session_param_del(req, key):
"""
Remove a given key from the session.
"""
del session[key]
def session_param_set(req, key, value):
"""
Set a VALUE for the session param KEY for the current session.
"""
session[key] = value
def session_param_get(req, key, default = None):
"""
Return session parameter value associated with session parameter KEY for the current session.
If the key doesn't exist, return the provided default.
"""
return session.get(key, default)
def session_param_list(req):
"""
List all available session parameters.
"""
return session.keys()
def get_last_login(uid):
"""Return the last_login datetime for uid if any, otherwise return the Epoch."""
res = run_sql('SELECT last_login FROM user WHERE id=%s', (uid,), 1)
if res and res[0][0]:
return res[0][0]
else:
return datetime.datetime(1970, 1, 1)
def get_user_info(uid, ln=CFG_SITE_LANG):
"""Get infos for a given user.
@param uid: user id (int)
@return: tuple: (uid, nickname, display_name)
"""
_ = gettext_set_language(ln)
query = """SELECT id, nickname
FROM user
WHERE id=%s"""
res = run_sql(query, (uid,))
if res:
if res[0]:
user = list(res[0])
if user[1]:
user.append(user[1])
else:
user[1] = str(user[0])
user.append(_("user") + ' #' + str(user[0]))
return tuple(user)
return (uid, '', _("N/A"))
def get_uid_from_email(email):
"""Return the uid corresponding to an email.
Return -1 when the email does not exist."""
try:
res = run_sql("SELECT id FROM user WHERE email=%s", (email,))
if res:
return res[0][0]
else:
return -1
except OperationalError:
register_exception()
return -1
def isGuestUser(uid, run_on_slave=True):
"""It Checks if the userId corresponds to a guestUser or not
isGuestUser(uid) -> boolean
"""
out = 1
try:
res = run_sql("SELECT email FROM user WHERE id=%s LIMIT 1", (uid,), 1,
run_on_slave=run_on_slave)
if res:
if res[0][0]:
out = 0
except OperationalError:
register_exception()
return out
def isUserSubmitter(user_info):
"""Return True if the user is a submitter for something; False otherwise."""
u_email = get_email(user_info['uid'])
res = run_sql("SELECT email FROM sbmSUBMISSIONS WHERE email=%s LIMIT 1", (u_email,), 1)
return len(res) > 0
def isUserReferee(user_info):
"""Return True if the user is a referee for something; False otherwise."""
if CFG_CERN_SITE:
return True
else:
for (role_id, role_name, role_description) in acc_get_action_roles(acc_get_action_id('referee')):
if acc_is_user_in_role(user_info, role_id):
return True
return False
def isUserAdmin(user_info):
"""Return True if the user has some admin rights; False otherwise."""
return acc_find_possible_activities(user_info) != {}
def isUserSuperAdmin(user_info):
"""Return True if the user is superadmin; False otherwise."""
if run_sql("""SELECT r.id
FROM accROLE r LEFT JOIN user_accROLE ur
ON r.id = ur.id_accROLE
WHERE r.name = %s AND
ur.id_user = %s AND ur.expiration>=NOW() LIMIT 1""", (SUPERADMINROLE, user_info['uid']), 1, run_on_slave=True):
return True
return acc_firerole_check_user(user_info, load_role_definition(acc_get_role_id(SUPERADMINROLE)))
def nickname_valid_p(nickname):
"""Check whether wanted NICKNAME supplied by the user is valid.
At the moment we just check whether it is not empty, does not
contain blanks or @, and is not equal to `guest', etc.
This check relies on the re_invalid_nickname regexp (see above).
Return 1 if the nickname is okay, 0 if it is not.
"""
if nickname and \
not(nickname.startswith(' ') or nickname.endswith(' ')) and \
nickname.lower() != 'guest':
if not re_invalid_nickname.match(nickname):
return 1
return 0
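The checks above can be sketched standalone. Note that `re_invalid_nickname` is defined elsewhere in this module; the pattern used below is a hypothetical stand-in (reject any non-word character), not the real one:

```python
import re

# Hypothetical stand-in for re_invalid_nickname: flag any character that is
# not a letter, digit or underscore (the real pattern is defined elsewhere).
_re_invalid_nickname = re.compile(r'[^\w]', re.UNICODE)

def nickname_valid_sketch(nickname):
    """Return 1 if NICKNAME looks acceptable, 0 otherwise, mirroring
    nickname_valid_p: non-empty, no leading/trailing blanks, not 'guest',
    and no characters matched by the invalid-nickname pattern."""
    if nickname and \
       not (nickname.startswith(' ') or nickname.endswith(' ')) and \
       nickname.lower() != 'guest':
        if not _re_invalid_nickname.search(nickname):
            return 1
    return 0
```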
def email_valid_p(email):
"""Check whether wanted EMAIL address supplied by the user is valid.
At the moment we just check whether it contains '@' and whether
it doesn't contain blanks. We also check the email domain if
CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN is set.
Return 1 if email is okay, return 0 if it is not.
"""
if (email.find("@") <= 0) or (email.find(" ") > 0):
return 0
elif CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN:
if not email.endswith(CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN):
return 0
return 1
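The same validation logic can be expressed as a self-contained sketch; `restrict_to_domain` below plays the role of `CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN` and is an assumed parameter name:

```python
def email_valid_sketch(email, restrict_to_domain=''):
    """Mirror of email_valid_p: require an '@' that is not the first
    character, forbid blanks after position 0, and optionally require a
    configured domain suffix (restrict_to_domain stands in for
    CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN)."""
    if (email.find("@") <= 0) or (email.find(" ") > 0):
        return 0
    elif restrict_to_domain:
        if not email.endswith(restrict_to_domain):
            return 0
    return 1
```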
def confirm_email(email):
"""Confirm the email. It returns None when there are problems, otherwise
it return the uid involved."""
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS == 0:
activated = 1
elif CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS == 1:
activated = 0
elif CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS >= 2:
return -1
run_sql('UPDATE user SET note=%s where email=%s', (activated, email))
res = run_sql('SELECT id FROM user where email=%s', (email,))
if res:
if CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS:
send_new_admin_account_warning(email, CFG_SITE_ADMIN_EMAIL)
return res[0][0]
else:
return None
def registerUser(req, email, passw, nickname, register_without_nickname=False,
login_method=None, ln=CFG_SITE_LANG):
"""Register user with the desired values of NICKNAME, EMAIL and
PASSW.
If REGISTER_WITHOUT_NICKNAME is set to True, then ignore
desired NICKNAME and do not set any. This is suitable for
external authentications so that people can login without
having to register an internal account first.
Return 0 if the registration is successful, 1 if email is not
valid, 2 if nickname is not valid, 3 if email is already in the
database, 4 if nickname is already in the database, 5 when
users cannot register themselves because of the site policy, 6 when the
site has problems contacting the user.
If login_method is None or is equal to the key corresponding to local
authentication, then CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS is taken
into account when deciding the registration behaviour.
"""
# is email valid?
email = email.lower()
if not email_valid_p(email):
return 1
_ = gettext_set_language(ln)
# is email already taken?
res = run_sql("SELECT email FROM user WHERE email=%s", (email,))
if len(res) > 0:
return 3
if register_without_nickname:
# ignore desired nick and use default empty string one:
nickname = ""
else:
# is nickname valid?
if not nickname_valid_p(nickname):
return 2
# is nickname already taken?
res = run_sql("SELECT nickname FROM user WHERE nickname=%s", (nickname,))
if len(res) > 0:
return 4
activated = 1 # By default activated
if not login_method or not CFG_EXTERNAL_AUTHENTICATION[login_method]: # local login
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS >= 2:
return 5
elif CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT:
activated = 2 # Email confirmation required
elif CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS >= 1:
activated = 0 # Administrator confirmation required
if CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT:
address_activation_key = mail_cookie_create_mail_activation(
email,
cookie_timeout=datetime.timedelta(
days=CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS
)
)
try:
ip_address = req.remote_host or req.remote_ip
except:
ip_address = None
try:
if not send_email(CFG_SITE_SUPPORT_EMAIL, email, _("Account registration at %s") % CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME),
tmpl.tmpl_account_address_activation_email_body(
email, address_activation_key,
ip_address, ln)):
return 1
except (smtplib.SMTPException, socket.error):
return 6
# okay, go on and register the user:
user_preference = get_default_user_preferences()
uid = run_sql("INSERT INTO user (nickname, email, password, note, settings, last_login) "
"VALUES (%s,%s,AES_ENCRYPT(email,%s),%s,%s, NOW())",
(nickname, email, passw, activated, serialize_via_marshal(user_preference)))
if activated == 1: # Ok we consider the user as logged in :-)
setUid(req, uid)
return 0
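registerUser signals its outcome through the numeric codes listed in its docstring. A caller-side mapping could look like the sketch below; the code values come from the docstring above, while the dictionary and function names are illustrative, not part of the module:

```python
# Illustrative mapping of registerUser() return codes to human-readable
# messages; values taken from the docstring of registerUser above.
REGISTER_USER_CODES = {
    0: "registration successful",
    1: "email address not valid",
    2: "nickname not valid",
    3: "email address already taken",
    4: "nickname already taken",
    5: "self-registration forbidden by site policy",
    6: "could not contact the user by email",
}

def describe_register_outcome(code):
    """Translate a registerUser() return code into a message."""
    return REGISTER_USER_CODES.get(code, "unknown error")
```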
def updateDataUser(uid, email, nickname):
"""
Update user data. Used when a user changed his email or password
or nickname.
"""
email = email.lower()
if email == 'guest':
return 0
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS < 2:
run_sql("update user set email=%s where id=%s", (email, uid))
if nickname and nickname != '':
run_sql("update user set nickname=%s where id=%s", (nickname, uid))
return 1
def updatePasswordUser(uid, password):
"""Update the password of a user."""
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS < 3:
run_sql("update user set password=AES_ENCRYPT(email,%s) where id=%s", (password, uid))
return 1
def merge_usera_into_userb(id_usera, id_userb):
"""
Merges all the information of usera into userb.
Deletes afterwards any reference to usera.
The information about SQL tables is contained in the CFG_WEBUSER_USER_TABLES
variable.
"""
preferencea = get_user_preferences(id_usera)
preferenceb = get_user_preferences(id_userb)
preferencea.update(preferenceb)
set_user_preferences(id_userb, preferencea)
try:
## FIXME: for the time being, let's disable locking
## until we move to InnoDB and have
## real transactions
#for table, dummy in CFG_WEBUSER_USER_TABLES:
#run_sql("LOCK TABLE %s WRITE" % table)
index = 0
table = ''
try:
for index, (table, column) in enumerate(CFG_WEBUSER_USER_TABLES):
run_sql("UPDATE %(table)s SET %(column)s=%%s WHERE %(column)s=%%s; DELETE FROM %(table)s WHERE %(column)s=%%s;" % {
'table': table,
'column': column
}, (id_userb, id_usera, id_usera))
except Exception, err:
msg = "Error when merging id_user=%s into id_userb=%s for table %s: %s\n" % (id_usera, id_userb, table, err)
msg += "users where succesfully already merged for tables: %s\n" % ', '.join([table[0] for table in CFG_WEBUSER_USER_TABLES[:index]])
msg += "users where not succesfully already merged for tables: %s\n" % ', '.join([table[0] for table in CFG_WEBUSER_USER_TABLES[index:]])
register_exception(alert_admin=True, prefix=msg)
raise
finally:
## FIXME: locking disabled
#run_sql("UNLOCK TABLES")
pass
def loginUser(req, p_un, p_pw, login_method):
"""It is a first simple version for the authentication of user. It returns the id of the user,
for checking afterwards if the login is correct
"""
# p_un passed may be an email or a nickname:
p_email = get_email_from_username(p_un)
# go on with the old stuff based on p_email:
if not login_method in CFG_EXTERNAL_AUTHENTICATION:
return (None, p_email, p_pw, 12)
if CFG_EXTERNAL_AUTHENTICATION[login_method]: # External Authentication
try:
result = CFG_EXTERNAL_AUTHENTICATION[login_method].auth_user(p_email, p_pw, req)
if (result == (None, None) or result is None) and not login_method in ['oauth1', 'oauth2', 'openid']:
# There is no need to call auth_user with username for
# OAuth1, OAuth2 and OpenID authentication
result = CFG_EXTERNAL_AUTHENTICATION[login_method].auth_user(p_un, p_pw, req) ## We try to log in with either the email or the nickname
if isinstance(result, (tuple, list)) and len(result) == 2:
p_email, p_extid = result
else:
## For backward compatibility we use the email as external
## identifier if it was not returned already by the plugin
p_email, p_extid = str(result), str(result)
if p_email:
p_email = p_email.lower()
if not p_extid:
p_extid = p_email
elif not p_extid:
try:
# OpenID and OAuth authentications have their own error messages
return (None, p_email, p_pw, CFG_EXTERNAL_AUTHENTICATION[login_method].get_msg(req))
except NotImplementedError:
return (None, p_email, p_pw, 15)
else:
# External login was successful but we couldn't fetch the email
# address.
generate_string = lambda: reduce((lambda x, y: x+y), [random.choice("qwertyuiopasdfghjklzxcvbnm1234567890") for i in range(32)])
random_string = generate_string()
p_email = CFG_TEMP_EMAIL_ADDRESS % random_string
while run_sql("SELECT * FROM user WHERE email=%s", (p_email,)):
random_string = generate_string()
p_email = CFG_TEMP_EMAIL_ADDRESS % random_string
except InvenioWebAccessExternalAuthError:
register_exception(req=req, alert_admin=True)
raise
if p_email: # Authenticated externally
query_result = run_sql("SELECT id_user FROM userEXT WHERE id=%s and method=%s", (p_extid, login_method))
if query_result:
## User was already registered with this external method.
id_user = query_result[0][0]
old_email = run_sql("SELECT email FROM user WHERE id=%s", (id_user,))[0][0]
# Look if the email address matches with the template given.
# If it matches, use the email address saved in the database.
regexp = re.compile(CFG_TEMP_EMAIL_ADDRESS % r"\w*")
if regexp.match(p_email):
p_email = old_email
if old_email != p_email:
## User has changed email of reference.
res = run_sql("SELECT id FROM user WHERE email=%s", (p_email,))
if res:
## User was also registered with the other email.
## We should merge the two!
new_id = res[0][0]
if new_id == id_user:
raise AssertionError("We should not reach this situation: new_id=%s, id_user=%s, old_email=%s, p_email=%s" % (new_id, id_user, old_email, p_email))
merge_usera_into_userb(id_user, new_id)
run_sql("DELETE FROM user WHERE id=%s", (id_user, ))
for row in run_sql("SELECT method FROM userEXT WHERE id_user=%s", (id_user, )):
## For all known accounts of id_user not conflicting with new_id we move them to refer to new_id
if not run_sql("SELECT method FROM userEXT WHERE id_user=%s AND method=%s", (new_id, row[0])):
run_sql("UPDATE userEXT SET id_user=%s WHERE id_user=%s AND method=%s", (new_id, id_user, row[0]))
## And we delete the duplicate remaining ones :-)
run_sql("DELETE FROM userEXT WHERE id_user=%s", (id_user, ))
id_user = new_id
else:
## We just need to rename the email address of the
## corresponding user. Unfortunately the local
## password will then be invalid, but it's unlikely
## the user is using both an external and a local
## account.
run_sql("UPDATE user SET email=%s WHERE id=%s", (p_email, id_user))
else:
## User was not already registered with this external method.
query_result = run_sql("SELECT id FROM user WHERE email=%s", (p_email, ))
if query_result:
## The user was already known with this email
id_user = query_result[0][0]
## We fix the inconsistency in the userEXT table.
run_sql("INSERT INTO userEXT(id, method, id_user) VALUES(%s, %s, %s) ON DUPLICATE KEY UPDATE id=%s, method=%s, id_user=%s", (p_extid, login_method, id_user, p_extid, login_method, id_user))
else:
## First time user
p_pw_local = int(random.random() * 1000000)
p_nickname = ''
if CFG_EXTERNAL_AUTHENTICATION[login_method].enforce_external_nicknames:
try: # Let's discover the external nickname!
p_nickname = CFG_EXTERNAL_AUTHENTICATION[login_method].fetch_user_nickname(p_email, p_pw, req)
except (AttributeError, NotImplementedError):
pass
except:
register_exception(req=req, alert_admin=True)
raise
res = registerUser(req, p_email, p_pw_local, p_nickname,
register_without_nickname=p_nickname == '',
login_method=login_method)
if res == 4 or res == 2: # The nickname was already taken
res = registerUser(req, p_email, p_pw_local, '',
register_without_nickname=True,
login_method=login_method)
query_result = run_sql("SELECT id from user where email=%s", (p_email,))
id_user = query_result[0][0]
elif res == 0: # Everything was ok, with or without nickname.
query_result = run_sql("SELECT id from user where email=%s", (p_email,))
id_user = query_result[0][0]
elif res == 6: # error in contacting the user via email
return (None, p_email, p_pw_local, 19)
else:
return (None, p_email, p_pw_local, 13)
run_sql("INSERT INTO userEXT(id, method, id_user) VALUES(%s, %s, %s)", (p_extid, login_method, id_user))
if CFG_EXTERNAL_AUTHENTICATION[login_method].enforce_external_nicknames:
## Let's still fetch a possibly upgraded nickname.
try: # Let's discover the external nickname!
p_nickname = CFG_EXTERNAL_AUTHENTICATION[login_method].fetch_user_nickname(p_email, p_pw, req)
if nickname_valid_p(p_nickname) and nicknameUnique(p_nickname) == 0:
updateDataUser(id_user, p_email, p_nickname)
except (AttributeError, NotImplementedError):
pass
except:
register_exception(alert_admin=True)
raise
try:
groups = CFG_EXTERNAL_AUTHENTICATION[login_method].fetch_user_groups_membership(p_email, p_pw, req)
# groups is a dictionary {group_name : group_description,}
new_groups = {}
for key, value in groups.items():
new_groups[key + " [" + str(login_method) + "]"] = value
groups = new_groups
except (AttributeError, NotImplementedError):
pass
except:
register_exception(req=req, alert_admin=True)
return (None, p_email, p_pw, 16)
else: # Groups synchronization
if groups:
from invenio.webgroup import synchronize_external_groups
synchronize_external_groups(id_user, groups, login_method)
user_prefs = get_user_preferences(id_user)
if not CFG_EXTERNAL_AUTHENTICATION[login_method]:
## I.e. if the login method is not of robot type:
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS >= 4:
# Let's prevent the user to switch login_method
if user_prefs.has_key("login_method") and \
user_prefs["login_method"] != login_method:
return (None, p_email, p_pw, 11)
user_prefs["login_method"] = login_method
# Cleaning external settings
for key in user_prefs.keys():
if key.startswith('EXTERNAL_'):
del user_prefs[key]
try:
# Importing external settings
new_prefs = CFG_EXTERNAL_AUTHENTICATION[login_method].fetch_user_preferences(p_email, p_pw, req)
for key, value in new_prefs.items():
user_prefs['EXTERNAL_' + key] = value
except (AttributeError, NotImplementedError):
pass
except InvenioWebAccessExternalAuthError:
register_exception(req=req, alert_admin=True)
return (None, p_email, p_pw, 16)
# Storing settings
set_user_preferences(id_user, user_prefs)
else:
return (None, p_un, p_pw, 10)
else: # Internal Authentication
if not p_pw:
p_pw = ''
query_result = run_sql("SELECT id,email,note from user where email=%s and password=AES_ENCRYPT(email,%s)", (p_email, p_pw,))
if query_result:
#FIXME drop external groups and settings
note = query_result[0][2]
id_user = query_result[0][0]
if note == '1': # Good account
preferred_login_method = get_user_preferences(query_result[0][0])['login_method']
p_email = query_result[0][1].lower()
if login_method != preferred_login_method:
if preferred_login_method in CFG_EXTERNAL_AUTHENTICATION:
return (None, p_email, p_pw, 11)
elif note == '2': # Email address need to be confirmed by user
return (None, p_email, p_pw, 17)
elif note == '0': # Account need to be confirmed by administrator
return (None, p_email, p_pw, 18)
else:
return (None, p_email, p_pw, 14)
# Login successful! Updating the last access time
run_sql("UPDATE user SET last_login=NOW() WHERE email=%s", (p_email,))
return (id_user, p_email, p_pw, 0)
def drop_external_settings(userId):
"""Drop the external (EXTERNAL_) settings of userid."""
prefs = get_user_preferences(userId)
for key in prefs.keys():
if key.startswith('EXTERNAL_'):
del prefs[key]
set_user_preferences(userId, prefs)
def logoutUser(req):
"""It logout the user of the system, creating a guest user.
"""
if CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS:
uid = createGuestUser()
session['uid'] = uid
session.set_remember_me(False)
else:
uid = 0
session.invalidate()
if hasattr(req, '_user_info'):
delattr(req, '_user_info')
return uid
def username_exists_p(username):
"""Check if USERNAME exists in the system. Username may be either
nickname or email.
Return 1 if it does exist, 0 if it does not.
"""
if username == "":
# return not exists if asked for guest users
return 0
res = run_sql("SELECT email FROM user WHERE email=%s", (username,)) + \
run_sql("SELECT email FROM user WHERE nickname=%s", (username,))
if len(res) > 0:
return 1
return 0
def emailUnique(p_email):
"""Check if the email address only exists once. If yes, return userid, if not, -1
"""
query_result = run_sql("select id, email from user where email=%s", (p_email,))
if len(query_result) == 1:
return query_result[0][0]
elif len(query_result) == 0:
return 0
return -1
def nicknameUnique(p_nickname):
"""Check if the nickname only exists once. If yes, return userid, if not, -1
"""
query_result = run_sql("select id, nickname from user where nickname=%s", (p_nickname,))
if len(query_result) == 1:
return query_result[0][0]
elif len(query_result) == 0:
return 0
return -1
def update_Uid(req, p_email, remember_me=False):
"""It updates the userId of the session. It is used when a guest user is logged in succesfully in the system with a given email and password.
As a side effect it will discover all the restricted collection to which the user has right to
"""
query_ID = int(run_sql("select id from user where email=%s",
(p_email,))[0][0])
setUid(req, query_ID, remember_me)
return query_ID
def send_new_admin_account_warning(new_account_email, send_to, ln=CFG_SITE_LANG):
"""Send an email to the address given by send_to about the new account new_account_email."""
_ = gettext_set_language(ln)
sub = _("New account on") + " '%s'" % CFG_SITE_NAME
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS == 1:
sub += " - " + _("PLEASE ACTIVATE")
body = _("A new account has been created on") + " '%s'" % CFG_SITE_NAME
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS == 1:
body += _(" and is awaiting activation")
body += ":\n\n"
body += _(" Username/Email") + ": %s\n\n" % new_account_email
body += _("You can approve or reject this account request at") + ": %s/admin/webaccess/webaccessadmin.py/manageaccounts\n" % CFG_SITE_URL
return send_email(CFG_SITE_SUPPORT_EMAIL, send_to, subject=sub, content=body)
def get_email(uid):
"""Return email address of the user uid. Return string 'guest' in case
the user is not found."""
out = "guest"
res = run_sql("SELECT email FROM user WHERE id=%s", (uid,), 1)
if res and res[0][0]:
out = res[0][0].lower()
return out
def get_email_from_username(username):
"""Return email address of the user corresponding to USERNAME.
The username may be either nickname or email. Return USERNAME
untouched if not found in the database or if found several
matching entries.
"""
if username == '':
return ''
out = username
res = run_sql("SELECT email FROM user WHERE email=%s", (username,), 1) + \
run_sql("SELECT email FROM user WHERE nickname=%s", (username,), 1)
if res and len(res) == 1:
out = res[0][0].lower()
return out
#def get_password(uid):
#"""Return password of the user uid. Return None in case
#the user is not found."""
#out = None
#res = run_sql("SELECT password FROM user WHERE id=%s", (uid,), 1)
#if res and res[0][0] != None:
#out = res[0][0]
#return out
def get_nickname(uid):
"""Return nickname of the user uid. Return None in case
the user is not found."""
out = None
res = run_sql("SELECT nickname FROM user WHERE id=%s", (uid,), 1)
if res and res[0][0]:
out = res[0][0]
return out
def get_nickname_or_email(uid):
"""Return nickname (preferred) or the email address of the user uid.
Return string 'guest' in case the user is not found."""
out = "guest"
res = run_sql("SELECT nickname, email FROM user WHERE id=%s", (uid,), 1)
if res and res[0]:
if res[0][0]:
out = res[0][0]
elif res[0][1]:
out = res[0][1].lower()
return out
def create_userinfobox_body(req, uid, language="en"):
"""Create user info box body for user UID in language LANGUAGE."""
if req:
if req.is_https():
url_referer = CFG_SITE_SECURE_URL + req.unparsed_uri
else:
url_referer = CFG_SITE_URL + req.unparsed_uri
if '/youraccount/logout' in url_referer:
url_referer = ''
else:
url_referer = CFG_SITE_URL
user_info = collect_user_info(req)
try:
return tmpl.tmpl_create_userinfobox(ln=language,
url_referer=url_referer,
guest=int(user_info['guest']),
username=get_nickname_or_email(uid),
submitter=user_info['precached_viewsubmissions'],
referee=user_info['precached_useapprove'],
admin=user_info['precached_useadmin'],
usebaskets=user_info['precached_usebaskets'],
usemessages=user_info['precached_usemessages'],
usealerts=user_info['precached_usealerts'],
usegroups=user_info['precached_usegroups'],
useloans=user_info['precached_useloans'],
usestats=user_info['precached_usestats']
)
except OperationalError:
return ""
def create_useractivities_menu(req, uid, navmenuid, ln="en"):
"""Create user activities menu.
@param req: request object
@param uid: user id
@type uid: int
@param navmenuid: the section of the website this page belongs (search, submit, baskets, etc.)
@type navmenuid: string
@param ln: language
@type ln: string
@return: HTML menu of the user activities
@rtype: string
"""
if req:
if req.is_https():
url_referer = CFG_SITE_SECURE_URL + req.unparsed_uri
else:
url_referer = CFG_SITE_URL + req.unparsed_uri
if '/youraccount/logout' in url_referer:
url_referer = ''
else:
url_referer = CFG_SITE_URL
user_info = collect_user_info(req)
is_user_menu_selected = False
if navmenuid == 'personalize' or \
navmenuid.startswith('your') and \
navmenuid != 'youraccount':
is_user_menu_selected = True
try:
return tmpl.tmpl_create_useractivities_menu(
ln=ln,
selected=is_user_menu_selected,
url_referer=url_referer,
guest=int(user_info['guest']),
username=get_nickname_or_email(uid),
submitter=user_info['precached_viewsubmissions'],
referee=user_info['precached_useapprove'],
admin=user_info['precached_useadmin'],
usebaskets=user_info['precached_usebaskets'],
usemessages=user_info['precached_usemessages'],
usealerts=user_info['precached_usealerts'],
usegroups=user_info['precached_usegroups'],
useloans=user_info['precached_useloans'],
usestats=user_info['precached_usestats'],
usecomments=user_info['precached_sendcomments'],
)
except OperationalError:
return ""
def create_adminactivities_menu(req, uid, navmenuid, ln="en"):
"""Create admin activities menu.
@param req: request object
@param uid: user id
@type uid: int
@param navmenuid: the section of the website this page belongs (search, submit, baskets, etc.)
@type navmenuid: string
@param ln: language
@type ln: string
@return: HTML menu of the user activities
@rtype: string
"""
_ = gettext_set_language(ln)
if req:
if req.is_https():
url_referer = CFG_SITE_SECURE_URL + req.unparsed_uri
else:
url_referer = CFG_SITE_URL + req.unparsed_uri
if '/youraccount/logout' in url_referer:
url_referer = ''
else:
url_referer = CFG_SITE_URL
user_info = collect_user_info(req)
activities = acc_find_possible_activities(user_info, ln)
# For BibEdit and BibDocFile menu items, take into consideration
# current record whenever possible
if activities.has_key(_("Run Record Editor")) or \
activities.has_key(_("Run Document File Manager")) and \
user_info['uri'].startswith('/' + CFG_SITE_RECORD + '/'):
try:
# Get record ID and try to cast it to an int
current_record_id = int(urlparse.urlparse(user_info['uri'])[2].split('/')[2])
except:
pass
else:
if activities.has_key(_("Run Record Editor")):
activities[_("Run Record Editor")] = activities[_("Run Record Editor")] + '&amp;#state=edit&amp;recid=' + str(current_record_id)
if activities.has_key(_("Run Document File Manager")):
activities[_("Run Document File Manager")] = activities[_("Run Document File Manager")] + '&amp;recid=' + str(current_record_id)
try:
return tmpl.tmpl_create_adminactivities_menu(
ln=ln,
selected=navmenuid == 'admin',
url_referer=url_referer,
guest=int(user_info['guest']),
username=get_nickname_or_email(uid),
submitter=user_info['precached_viewsubmissions'],
referee=user_info['precached_useapprove'],
admin=user_info['precached_useadmin'],
usebaskets=user_info['precached_usebaskets'],
usemessages=user_info['precached_usemessages'],
usealerts=user_info['precached_usealerts'],
usegroups=user_info['precached_usegroups'],
useloans=user_info['precached_useloans'],
usestats=user_info['precached_usestats'],
activities=activities
)
except OperationalError:
return ""
def list_registered_users():
"""List all registered users."""
return run_sql("SELECT id,email FROM user where email!=''")
def list_users_in_role(role):
"""List all users of a given role (see table accROLE)
@param role: role of user (string)
@return: list of uids
"""
res = run_sql("""SELECT uacc.id_user
FROM user_accROLE uacc JOIN accROLE acc
ON uacc.id_accROLE=acc.id
WHERE acc.name=%s""",
(role,), run_on_slave=True)
if res:
return map(lambda x: int(x[0]), res)
return []
def list_users_in_roles(role_list):
"""List all users of given roles (see table accROLE)
@param role_list: list of roles [string]
@return: list of uids
"""
if not(type(role_list) is list or type(role_list) is tuple):
role_list = [role_list]
query = """SELECT DISTINCT(uacc.id_user)
FROM user_accROLE uacc JOIN accROLE acc
ON uacc.id_accROLE=acc.id
"""
query_addons = ""
query_params = ()
if len(role_list) > 0:
query_params = role_list
query_addons = " WHERE "
for role in role_list[:-1]:
query_addons += "acc.name=%s OR "
query_addons += "acc.name=%s"
res = run_sql(query + query_addons, query_params, run_on_slave=True)
if res:
return map(lambda x: int(x[0]), res)
return []
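The WHERE clause above is assembled with one `acc.name=%s` placeholder per requested role. The same idea can be sketched compactly; the function name is illustrative, while the table and column names are kept from the query above:

```python
def build_roles_query(role_list):
    """Build the SQL string and parameter tuple the way
    list_users_in_roles does: one 'acc.name=%s' placeholder per
    requested role, OR-ed together, with no WHERE clause at all
    when the role list is empty."""
    if not isinstance(role_list, (list, tuple)):
        role_list = [role_list]
    query = ("SELECT DISTINCT(uacc.id_user) "
             "FROM user_accROLE uacc JOIN accROLE acc "
             "ON uacc.id_accROLE=acc.id")
    if role_list:
        query += " WHERE " + " OR ".join(["acc.name=%s"] * len(role_list))
    return query, tuple(role_list)
```

Keeping the role names as bound parameters (rather than interpolating them into the string) is what protects the query from SQL injection.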
def get_uid_based_on_pref(prefname, prefvalue):
"""get the user's UID based where his/her preference prefname has value prefvalue in preferences"""
prefs = run_sql("SELECT id, settings FROM user WHERE settings is not NULL")
the_uid = None
for pref in prefs:
try:
settings = deserialize_via_marshal(pref[1])
if (settings.has_key(prefname)) and (settings[prefname] == prefvalue):
the_uid = pref[0]
except:
pass
return the_uid
def get_user_preferences(uid):
pref = run_sql("SELECT id, settings FROM user WHERE id=%s", (uid,))
if pref:
try:
return deserialize_via_marshal(pref[0][1])
except:
pass
return get_default_user_preferences() # an empty dict means no preferences
def set_user_preferences(uid, pref):
assert(type(pref) == type({}))
run_sql("UPDATE user SET settings=%s WHERE id=%s",
(serialize_via_marshal(pref), uid))
def get_default_user_preferences():
user_preference = {
'login_method': ''}
if CFG_EXTERNAL_AUTH_DEFAULT in CFG_EXTERNAL_AUTHENTICATION:
user_preference['login_method'] = CFG_EXTERNAL_AUTH_DEFAULT
return user_preference
def get_preferred_user_language(req):
def _get_language_from_req_header(accept_language_header):
"""Extract langs info from req.headers_in['Accept-Language'] which
should be set to something similar to:
'fr,en-us;q=0.7,en;q=0.3'
"""
tmp_langs = {}
for lang in accept_language_header.split(','):
lang = lang.split(';q=')
if len(lang) == 2:
lang[1] = lang[1].replace('"', '') # Hack for Yeti robot
try:
tmp_langs[float(lang[1])] = lang[0]
except ValueError:
pass
else:
tmp_langs[1.0] = lang[0]
ret = []
priorities = tmp_langs.keys()
priorities.sort()
priorities.reverse()
for priority in priorities:
ret.append(tmp_langs[priority])
return ret
uid = getUid(req)
guest = isGuestUser(uid)
new_lang = None
preferred_lang = None
if not guest:
user_preferences = get_user_preferences(uid)
preferred_lang = new_lang = user_preferences.get('language', None)
if not new_lang:
try:
new_lang = wash_languages(cgi.parse_qs(req.args)['ln'])
except (TypeError, AttributeError, KeyError):
pass
if not new_lang:
try:
new_lang = wash_languages(_get_language_from_req_header(req.headers_in['Accept-Language']))
except (TypeError, AttributeError, KeyError):
pass
new_lang = wash_language(new_lang)
if new_lang != preferred_lang and not guest:
user_preferences['language'] = new_lang
set_user_preferences(uid, user_preferences)
return new_lang
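The Accept-Language parsing inside `_get_language_from_req_header` can be mirrored as a self-contained sketch (the function name below is ours). Like the original, it keys a dict by q-value, so only one language per distinct quality survives:

```python
def parse_accept_language(header):
    """Order languages from an Accept-Language header such as
    'fr,en-us;q=0.7,en;q=0.3' by their q-values, highest first,
    mirroring _get_language_from_req_header above."""
    by_quality = {}
    for lang in header.split(','):
        parts = lang.split(';q=')
        if len(parts) == 2:
            try:
                # Strip stray quotes (the original does this for a robot
                # that sends malformed q-values).
                by_quality[float(parts[1].replace('"', ''))] = parts[0]
            except ValueError:
                pass
        else:
            by_quality[1.0] = parts[0]  # no q-value means q=1.0
    return [by_quality[q] for q in sorted(by_quality, reverse=True)]
```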
def collect_user_info(req, login_time=False, refresh=False):
"""Given the mod_python request object rec or a uid it returns a dictionary
containing at least the keys uid, nickname, email, groups, plus any external keys in
the user preferences (collected at login time and built by the different
external authentication plugins) and if the mod_python request object is
provided, also the remote_ip, remote_host, referer, agent fields.
NOTE: if req is a mod_python request object, the user_info dictionary
is saved into req._user_info (for caching purpouses)
setApacheUser & setUid will properly reset it.
"""
if type(req) in [long, int] or req is None:
from invenio.webuser_flask import UserInfo
return UserInfo(req)
from invenio.webuser_flask import current_user
return current_user._get_current_object()
##
## NOT USED ANYMORE
## please see webuser_flask.py
##
#FIXME move EXTERNAL SSO functionality
from invenio.search_engine import get_permitted_restricted_collections
user_info = {
'remote_ip' : '',
'remote_host' : '',
'referer' : '',
'uri' : '',
'agent' : '',
'uid' :-1,
'nickname' : '',
'email' : '',
'group' : [],
'guest' : '1',
'session' : None,
'precached_permitted_restricted_collections' : [],
'precached_usebaskets' : False,
'precached_useloans' : False,
'precached_usegroups' : False,
'precached_usealerts' : False,
'precached_usemessages' : False,
'precached_viewsubmissions' : False,
'precached_useapprove' : False,
'precached_useadmin' : False,
'precached_usestats' : False,
'precached_viewclaimlink' : False,
'precached_usepaperclaim' : False,
'precached_usepaperattribution' : False,
'precached_canseehiddenmarctags' : False,
'precached_sendcomments' : False,
}
try:
is_req = False
is_flask = False
session = None
if not req:
uid = -1
elif type(req) in (type(1), type(1L)):
## req is in fact a user identification
uid = req
elif type(req) is dict:
## req is by mistake already a user_info
try:
assert(req.has_key('uid'))
assert(req.has_key('email'))
assert(req.has_key('nickname'))
except AssertionError:
## mmh... misuse of collect_user_info. Better warn the admin!
register_exception(alert_admin=True)
user_info.update(req)
return user_info
elif isinstance(req, Request):
is_flask = True
from flask import session
uid = session.uid
if 'user_info' in session:
user_info = session['user_info']
if not login_time and not refresh:
return user_info
user_info['remote_ip'] = req.remote_addr
user_info['session'] = session.sid
user_info['remote_host'] = req.environ.get('REMOTE_HOST', '')
user_info['referer'] = req.referrer
user_info['uri'] = req.url or ''
user_info['agent'] = req.user_agent or 'N/A'
else:
is_req = True
uid = getUid(req)
if hasattr(req, '_user_info') and not login_time:
user_info = req._user_info
if not refresh:
return req._user_info
req._user_info = user_info
try:
user_info['remote_ip'] = req.remote_ip
except gaierror:
#FIXME: we should support IPV6 too. (hint for FireRole)
pass
user_info['session'] = session.sid
user_info['remote_host'] = req.remote_host or ''
user_info['referer'] = req.headers_in.get('Referer', '')
user_info['uri'] = req.unparsed_uri or ''
user_info['agent'] = req.headers_in.get('User-Agent', 'N/A')
user_info['uid'] = uid
user_info['nickname'] = get_nickname(uid) or ''
user_info['email'] = get_email(uid) or ''
user_info['group'] = []
user_info['guest'] = str(isGuestUser(uid))
if user_info['guest'] == '1' and CFG_INSPIRE_SITE:
usepaperattribution = False
viewclaimlink = False
if (CFG_BIBAUTHORID_ENABLED
and acc_is_user_in_role(user_info, acc_get_role_id("paperattributionviewers"))):
usepaperattribution = True
# if (CFG_BIBAUTHORID_ENABLED
# and usepaperattribution
# and acc_is_user_in_role(user_info, acc_get_role_id("paperattributionlinkviewers"))):
# viewclaimlink = True
viewlink = False
if is_req or is_flask:
try:
viewlink = session['personinfo']['claim_in_process']
except (KeyError, TypeError):
pass
if (CFG_BIBAUTHORID_ENABLED
and usepaperattribution
and viewlink):
viewclaimlink = True
user_info['precached_viewclaimlink'] = viewclaimlink
user_info['precached_usepaperattribution'] = usepaperattribution
if user_info['guest'] == '0':
user_info['group'] = [group[1] for group in get_groups(uid)]
prefs = get_user_preferences(uid)
login_method = prefs['login_method']
## NOTE: we fall back to default login_method if the login_method
## specified in the user settings does not exist (e.g. after
## a migration.)
login_object = CFG_EXTERNAL_AUTHENTICATION.get(login_method, CFG_EXTERNAL_AUTHENTICATION[CFG_EXTERNAL_AUTH_DEFAULT])
if login_object and ((datetime.datetime.now() - get_last_login(uid)).seconds > 3600):
## The user uses an external authentication method and a while has
## passed since her last login
if not CFG_EXTERNAL_AUTH_USING_SSO or (
is_req and login_object.in_shibboleth(req)):
## If we're using SSO we must be sure to be behind HTTPS and the
## Shibboleth handler, otherwise we can't really read anything, hence
## it's better to skip the synchronization
try:
groups = login_object.fetch_user_groups_membership(user_info['email'], req=req)
# groups is a dictionary {group_name : group_description,}
new_groups = {}
for key, value in groups.items():
new_groups[key + " [" + str(login_method) + "]"] = value
groups = new_groups
except (AttributeError, NotImplementedError, TypeError, InvenioWebAccessExternalAuthError):
pass
else: # Groups synchronization
from invenio.webgroup import synchronize_external_groups
synchronize_external_groups(uid, groups, login_method)
user_info['group'] = [group[1] for group in get_groups(uid)]
try:
# Importing external settings
new_prefs = login_object.fetch_user_preferences(user_info['email'], req=req)
for key, value in new_prefs.items():
prefs['EXTERNAL_' + key] = value
except (AttributeError, NotImplementedError, TypeError, InvenioWebAccessExternalAuthError):
pass
else:
set_user_preferences(uid, prefs)
prefs = get_user_preferences(uid)
run_sql('UPDATE user SET last_login=NOW() WHERE id=%s', (uid,))
if prefs:
for key, value in prefs.iteritems():
user_info[key.lower()] = value
if login_time:
## Heavy computational information
from invenio.access_control_engine import acc_authorize_action
user_info['precached_permitted_restricted_collections'] = get_permitted_restricted_collections(user_info)
user_info['precached_usebaskets'] = acc_authorize_action(user_info, 'usebaskets')[0] == 0
user_info['precached_useloans'] = acc_authorize_action(user_info, 'useloans')[0] == 0
user_info['precached_usegroups'] = acc_authorize_action(user_info, 'usegroups')[0] == 0
user_info['precached_usealerts'] = acc_authorize_action(user_info, 'usealerts')[0] == 0
user_info['precached_usemessages'] = acc_authorize_action(user_info, 'usemessages')[0] == 0
user_info['precached_usestats'] = acc_authorize_action(user_info, 'runwebstatadmin')[0] == 0
user_info['precached_viewsubmissions'] = isUserSubmitter(user_info)
user_info['precached_useapprove'] = isUserReferee(user_info)
user_info['precached_useadmin'] = isUserAdmin(user_info)
user_info['precached_canseehiddenmarctags'] = acc_authorize_action(user_info, 'runbibedit')[0] == 0
user_info['precached_sendcomments'] = acc_authorize_action(user_info, 'sendcomment', '*')[0] == 0
usepaperclaim = False
usepaperattribution = False
viewclaimlink = False
if (CFG_BIBAUTHORID_ENABLED
and acc_is_user_in_role(user_info, acc_get_role_id("paperclaimviewers"))):
usepaperclaim = True
if (CFG_BIBAUTHORID_ENABLED
and acc_is_user_in_role(user_info, acc_get_role_id("paperattributionviewers"))):
usepaperattribution = True
viewlink = False
if is_req or is_flask:
try:
viewlink = session['personinfo']['claim_in_process']
except (KeyError, TypeError):
pass
if (CFG_BIBAUTHORID_ENABLED
and usepaperattribution
and viewlink):
viewclaimlink = True
# if (CFG_BIBAUTHORID_ENABLED
# and ((usepaperclaim or usepaperattribution)
# and acc_is_user_in_role(user_info, acc_get_role_id("paperattributionlinkviewers")))):
# viewclaimlink = True
user_info['precached_viewclaimlink'] = viewclaimlink
user_info['precached_usepaperclaim'] = usepaperclaim
user_info['precached_usepaperattribution'] = usepaperattribution
except Exception, e:
register_exception()
return user_info
diff --git a/modules/webstat/lib/webstat.py b/modules/webstat/lib/webstat.py
index 204f140f3..a7c989a02 100644
--- a/modules/webstat/lib/webstat.py
+++ b/modules/webstat/lib/webstat.py
@@ -1,1990 +1,1990 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
__lastupdated__ = "$Date$"
import os
import time
import re
import datetime
import cPickle
import calendar
from datetime import timedelta
from urllib import quote
from invenio import template
from invenio.config import \
CFG_WEBDIR, \
CFG_TMPDIR, \
CFG_SITE_URL, \
CFG_SITE_LANG, \
CFG_WEBSTAT_BIBCIRCULATION_START_YEAR
from invenio.webstat_config import CFG_WEBSTAT_CONFIG_PATH
-from invenio.bibindex_engine import CFG_JOURNAL_TAG
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import CFG_JOURNAL_TAG
from invenio.search_engine import get_coll_i18nname, \
wash_index_term
from invenio.dbquery import run_sql, wash_table_column_name, ProgrammingError
from invenio.bibsched import is_task_scheduled, \
get_task_ids_by_descending_date, \
get_task_options
# Imports handling key events and error log
from invenio.webstat_engine import get_keyevent_trend_collection_population, \
get_keyevent_trend_new_records, \
get_keyevent_trend_search_frequency, \
get_keyevent_trend_search_type_distribution, \
get_keyevent_trend_download_frequency, \
get_keyevent_trend_comments_frequency, \
get_keyevent_trend_number_of_loans, \
get_keyevent_trend_web_submissions, \
get_keyevent_snapshot_apache_processes, \
get_keyevent_snapshot_bibsched_status, \
get_keyevent_snapshot_uptime_cmd, \
get_keyevent_snapshot_sessions, \
get_keyevent_bibcirculation_report, \
get_keyevent_loan_statistics, \
get_keyevent_loan_lists, \
get_keyevent_renewals_lists, \
get_keyevent_returns_table, \
get_keyevent_trend_returns_percentage, \
get_keyevent_ill_requests_statistics, \
get_keyevent_ill_requests_lists, \
get_keyevent_trend_satisfied_ill_requests_percentage, \
get_keyevent_items_statistics, \
get_keyevent_items_lists, \
get_keyevent_loan_request_statistics, \
get_keyevent_loan_request_lists, \
get_keyevent_user_statistics, \
get_keyevent_user_lists, \
_get_doctypes, \
_get_item_statuses, \
_get_item_doctype, \
_get_request_statuses, \
_get_libraries, \
_get_loan_periods, \
get_invenio_error_log_ranking, \
get_invenio_last_n_errors, \
update_error_log_analyzer, \
get_apache_error_log_ranking, \
get_last_updates, \
get_list_link, \
get_general_status, \
get_ingestion_matching_records, \
get_record_ingestion_status, \
get_specific_ingestion_status, \
get_title_ingestion, \
get_record_last_modification
# Imports handling custom events
from invenio.webstat_engine import get_customevent_table, \
get_customevent_trend, \
get_customevent_dump
# Imports handling custom report
from invenio.webstat_engine import get_custom_summary_data, \
_get_tag_name, \
create_custom_summary_graph
# Imports for handling outputting
from invenio.webstat_engine import create_graph_trend, \
create_graph_dump, \
create_graph_table, \
get_numeric_stats
# Imports for handling exports
from invenio.webstat_engine import export_to_python, \
export_to_csv, \
export_to_file
TEMPLATES = template.load('webstat')
# Constants
WEBSTAT_CACHE_INTERVAL = 600 # Seconds, cache_* functions not affected by this.
# Also does not take into account whether
# BibSched has a webstatadmin process.
WEBSTAT_RAWDATA_DIRECTORY = CFG_TMPDIR + "/"
WEBSTAT_GRAPH_DIRECTORY = CFG_WEBDIR + "/img/"
TYPE_REPOSITORY = [('gnuplot', 'Image - Gnuplot'),
('asciiart', 'Image - ASCII art'),
('flot', 'Image - Flot'),
('asciidump', 'Image - ASCII dump'),
('python', 'Data - Python code', export_to_python),
('csv', 'Data - CSV', export_to_csv)]
def get_collection_list_plus_all():
""" Return all the collection names plus the name All"""
coll = [('All', 'All')]
res = run_sql("SELECT name FROM collection WHERE (dbquery IS NULL OR dbquery \
NOT LIKE 'hostedcollection:%') ORDER BY name ASC")
for c_name in res:
# make a nice printable name (e.g. truncate c_printable for
# long collection names in given language):
c_printable_fullname = get_coll_i18nname(c_name[0], CFG_SITE_LANG, False)
c_printable = wash_index_term(c_printable_fullname, 30, False)
if c_printable != c_printable_fullname:
c_printable = c_printable + "..."
coll.append([c_name[0], c_printable])
return coll
# Key event repository, add an entry here to support new key measures.
KEYEVENT_REPOSITORY = {'collection population':
{'fullname': 'Collection population',
'specificname':
'Population in collection "%(collection)s"',
'description':
('The collection population is the number of \
documents existing in the selected collection.', ),
'gatherer':
get_keyevent_trend_collection_population,
'extraparams': {'collection': ('combobox', 'Collection',
get_collection_list_plus_all)},
'cachefilename':
'webstat_%(event_id)s_%(collection)s_%(timespan)s',
'ylabel': 'Number of records',
'multiple': None,
'output': 'Graph'},
'new records':
{'fullname': 'New records',
'specificname':
'New records in collection "%(collection)s"',
'description':
('The graph shows the new documents created in \
the selected collection and time span.', ),
'gatherer':
get_keyevent_trend_new_records,
'extraparams': {'collection': ('combobox', 'Collection',
get_collection_list_plus_all)},
'cachefilename':
'webstat_%(event_id)s_%(collection)s_%(timespan)s',
'ylabel': 'Number of records',
'multiple': None,
'output': 'Graph'},
'search frequency':
{'fullname': 'Search frequency',
'specificname': 'Search frequency',
'description':
('The search frequency is the number of searches \
performed in a specific time span.', ),
'gatherer': get_keyevent_trend_search_frequency,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'ylabel': 'Number of searches',
'multiple': None,
'output': 'Graph'},
'search type distribution':
{'fullname': 'Search type distribution',
'specificname': 'Search type distribution',
'description':
('The search type distribution shows both the \
number of simple searches and the number of advanced searches in the same graph.', ),
'gatherer':
get_keyevent_trend_search_type_distribution,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'ylabel': 'Number of searches',
'multiple': ['Simple searches',
'Advanced searches'],
'output': 'Graph'},
'download frequency':
{'fullname': 'Download frequency',
'specificname': 'Download frequency in collection "%(collection)s"',
'description':
('The download frequency is the number of fulltext \
downloads of the documents.', ),
'gatherer': get_keyevent_trend_download_frequency,
'extraparams': {'collection': ('combobox', 'Collection',
get_collection_list_plus_all)},
'cachefilename': 'webstat_%(event_id)s_%(collection)s_%(timespan)s',
'ylabel': 'Number of downloads',
'multiple': None,
'output': 'Graph'},
'comments frequency':
{'fullname': 'Comments frequency',
'specificname': 'Comments frequency in collection "%(collection)s"',
'description':
('The comments frequency is the number of comments written \
for all the documents.', ),
'gatherer': get_keyevent_trend_comments_frequency,
'extraparams': {'collection': ('combobox', 'Collection',
get_collection_list_plus_all)},
'cachefilename': 'webstat_%(event_id)s_%(collection)s_%(timespan)s',
'ylabel': 'Number of comments',
'multiple': None,
'output': 'Graph'},
'number of loans':
{'fullname': 'Number of circulation loans',
'specificname': 'Number of circulation loans',
'description':
('The number of loans shows the total number of records loaned \
over a time span.', ),
'gatherer': get_keyevent_trend_number_of_loans,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'ylabel': 'Number of loans',
'multiple': None,
'output': 'Graph',
'type': 'bibcirculation'},
'web submissions':
{'fullname': 'Number of web submissions',
'specificname':
'Number of web submissions of "%(doctype)s"',
'description':
("The web submissions are the number of submitted \
documents using the web form.", ),
'gatherer': get_keyevent_trend_web_submissions,
'extraparams': {
'doctype': ('combobox', 'Type of document', _get_doctypes)},
'cachefilename':
'webstat_%(event_id)s_%(doctype)s_%(timespan)s',
'ylabel': 'Web submissions',
'multiple': None,
'output': 'Graph'},
'loans statistics':
{'fullname': 'Circulation loans statistics',
'specificname': 'Circulation loans statistics',
'description':
('The loan statistics consist of different numbers \
related to the records loaned. It is important to see the difference between document \
and item. The item is the physical representation of a document (like every copy of a \
book). There may be more items than documents, but never the opposite.', ),
'gatherer':
get_keyevent_loan_statistics,
'extraparams': {
'udc': ('textbox', 'UDC'),
'item_status': ('combobox', 'Item status', _get_item_statuses),
'publication_date': ('textbox', 'Publication date'),
'creation_date': ('textbox', 'Creation date')},
'cachefilename':
'webstat_%(event_id)s_%(udc)s_%(item_status)s_%(publication_date)s' + \
'_%(creation_date)s_%(timespan)s',
'rows': ['Number of documents loaned',
'Number of items loaned on the total number of items (%)',
'Number of items never loaned on the \
total number of items (%)',
'Average time between the date of \
the record creation and the date of the first loan (in days)'],
'output': 'Table',
'type': 'bibcirculation'},
'loans lists':
{'fullname': 'Circulation loans lists',
'specificname': 'Circulation loans lists',
'description':
('The loan lists show the most loaned and the never loaned \
records in a time span. The most loaned records are ranked by the number of loans per copy.', ),
'gatherer':
get_keyevent_loan_lists,
'extraparams': {
'udc': ('textbox', 'UDC'),
'loan_period': ('combobox', 'Loan period', _get_loan_periods),
'max_loans': ('textbox', 'Maximum number of loans'),
'min_loans': ('textbox', 'Minimum number of loans'),
'publication_date': ('textbox', 'Publication date'),
'creation_date': ('textbox', 'Creation date')},
'cachefilename':
'webstat_%(event_id)s_%(udc)s_%(loan_period)s' + \
'_%(min_loans)s_%(max_loans)s_%(publication_date)s_' + \
'%(creation_date)s_%(timespan)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'},
'renewals':
{'fullname': 'Circulation renewals',
'specificname': 'Circulation renewals',
'description':
('The list of the most renewed items is shown \
in decreasing order', ),
'gatherer':
get_keyevent_renewals_lists,
'extraparams': {
'udc': ('textbox', 'UDC')},
'cachefilename':
'webstat_%(event_id)s_%(udc)s_%(timespan)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'},
'number returns':
{'fullname': 'Number of circulation overdue returns',
'specificname': 'Number of circulation overdue returns',
'description':
('The number of overdue returns is the number of loans \
that have not been returned by the due date (they may have been returned late or never).', ),
'gatherer':
get_keyevent_returns_table,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'rows': ['Number of overdue returns'],
'output': 'Table',
'type': 'bibcirculation'},
'percentage returns':
{'fullname': 'Percentage of circulation overdue returns',
'specificname': 'Percentage of overdue returns',
'description':
('This graph shows both the overdue returns and the total \
number of returns.', ),
'gatherer':
get_keyevent_trend_returns_percentage,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'ylabel': 'Percentage of overdue returns',
'multiple': ['Overdue returns',
'Total returns'],
'output': 'Graph',
'type': 'bibcirculation'},
'ill requests statistics':
{'fullname': 'Circulation ILL Requests statistics',
'specificname': 'Circulation ILL Requests statistics',
'description':
('The ILL requests statistics consist of different numbers \
related to the requests to other libraries.', ),
'gatherer':
get_keyevent_ill_requests_statistics,
'extraparams': {
'doctype': ('combobox', 'Type of document', _get_item_doctype),
'status': ('combobox', 'Status of request', _get_request_statuses),
'supplier': ('combobox', 'Supplier', _get_libraries)},
'cachefilename':
'webstat_%(event_id)s_%(doctype)s_%(status)s_%(supplier)s_%(timespan)s',
'rows': ['Number of ILL requests',
'Number of satisfied ILL requests 2 weeks \
after the date of request creation',
'Average time between the day \
of the ILL request and the day \
of delivery of the item to the user (in days)',
'Average time between the day \
the ILL request was sent to the supplier and \
the day of delivery of the item (in days)'],
'output': 'Table',
'type': 'bibcirculation'},
'ill requests list':
{'fullname': 'Circulation ILL Requests list',
'specificname': 'Circulation ILL Requests list',
'description':
('The ILL requests list shows 50 requests to other \
libraries in the selected time span.', ),
'gatherer':
get_keyevent_ill_requests_lists,
'extraparams': {
'doctype': ('combobox', 'Type of document', _get_item_doctype),
'supplier': ('combobox', 'Supplier', _get_libraries)},
'cachefilename':
'webstat_%(event_id)s_%(doctype)s_%(supplier)s_%(timespan)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'},
'percentage satisfied ill requests':
{'fullname': 'Percentage of circulation satisfied ILL requests',
'specificname': 'Percentage of circulation satisfied ILL requests',
'description':
('This graph shows both the satisfied ILL requests and \
the total number of requests in the selected time span.', ),
'gatherer':
get_keyevent_trend_satisfied_ill_requests_percentage,
'extraparams': {
'doctype': ('combobox', 'Type of document', _get_item_doctype),
'status': ('combobox', 'Status of request', _get_request_statuses),
'supplier': ('combobox', 'Supplier', _get_libraries)},
'cachefilename':
'webstat_%(event_id)s_%(doctype)s_%(status)s_%(supplier)s_%(timespan)s',
'ylabel': 'Percentage of satisfied ILL requests',
'multiple': ['Satisfied ILL requests',
'Total requests'],
'output': 'Graph',
'type': 'bibcirculation'},
'items stats':
{'fullname': 'Circulation items statistics',
'specificname': 'Circulation items statistics',
'description':
('The items statistics show the total number of items at \
the moment and the number of new items in the selected time span.', ),
'gatherer':
get_keyevent_items_statistics,
'extraparams': {
'udc': ('textbox', 'UDC'),
},
'cachefilename':
'webstat_%(event_id)s_%(udc)s_%(timespan)s',
'rows': ['The total number of items', 'Total number of new items'],
'output': 'Table',
'type': 'bibcirculation'},
'items list':
{'fullname': 'Circulation items list',
'specificname': 'Circulation items list',
'description':
('The item list shows data about the existing items.', ),
'gatherer':
get_keyevent_items_lists,
'extraparams': {
'library': ('combobox', 'Library', _get_libraries),
'status': ('combobox', 'Status', _get_item_statuses)},
'cachefilename':
'webstat_%(event_id)s_%(library)s_%(status)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'},
'loan request statistics':
{'fullname': 'Circulation hold requests statistics',
'specificname': 'Circulation hold requests statistics',
'description':
('The hold requests statistics show numbers about the \
requests for documents. For the numbers to be correct, there must be data in the loanrequest \
custom event.', ),
'gatherer':
get_keyevent_loan_request_statistics,
'extraparams': {
'item_status': ('combobox', 'Item status', _get_item_statuses)},
'cachefilename':
'webstat_%(event_id)s_%(item_status)s_%(timespan)s',
'rows': ['Number of hold requests, one week after the date of \
request creation',
'Number of successful hold requests transactions',
'Average time between the hold request date and \
the date of delivery of the document in a year'],
'output': 'Table',
'type': 'bibcirculation'},
'loan request lists':
{'fullname': 'Circulation hold requests lists',
'specificname': 'Circulation hold requests lists',
'description':
('The hold requests list shows the most requested items.', ),
'gatherer':
get_keyevent_loan_request_lists,
'extraparams': {
'udc': ('textbox', 'UDC')},
'cachefilename':
'webstat_%(event_id)s_%(udc)s_%(timespan)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'},
'user statistics':
{'fullname': 'Circulation users statistics',
'specificname': 'Circulation users statistics',
'description':
('The user statistics show the number of active users \
(at least one transaction) in the selected timespan.', ),
'gatherer':
get_keyevent_user_statistics,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'rows': ['Number of active users'],
'output': 'Table',
'type': 'bibcirculation'},
'user lists':
{'fullname': 'Circulation users lists',
'specificname': 'Circulation users lists',
'description':
('The user list shows the most active users \
(ILL requests + Loans)', ),
'gatherer':
get_keyevent_user_lists,
'extraparams': {},
'cachefilename':
'webstat_%(event_id)s_%(timespan)s',
'rows': [],
'output': 'List',
'type': 'bibcirculation'}
}
# CLI
def create_customevent(event_id=None, name=None, cols=[]):
"""
Creates a new custom event by setting up the necessary MySQL tables.
@param event_id: Proposed human-readable id of the new event.
@type event_id: str
@param name: Optionally, a descriptive name.
@type name: str
@param cols: Optionally, the name of the additional columns.
@type cols: [str]
@return: A status message
@rtype: str
"""
if event_id is None:
return "Please specify a human-readable ID for the event."
# Only accept id and name with standard characters
if not re.search("[^\w]", str(event_id) + str(name)) is None:
return "Please note that both the event id and the event name need to be " + \
"written without any non-standard characters."
# Make sure the chosen id is not already taken
if len(run_sql("SELECT NULL FROM staEVENT WHERE id = %s",
(event_id, ))) != 0:
return "Event id [%s] already exists! Aborted." % event_id
# Check if the cols are valid titles
for argument in cols:
if (argument == "creation_time") or (argument == "id"):
return "Invalid column title: %s! Aborted." % argument
# Insert a new row into the events table describing the new event
sql_param = [event_id]
if name is not None:
sql_name = "%s"
sql_param.append(name)
else:
sql_name = "NULL"
if len(cols) != 0:
sql_cols = "%s"
sql_param.append(cPickle.dumps(cols))
else:
sql_cols = "NULL"
run_sql("INSERT INTO staEVENT (id, name, cols) VALUES (%s, " + \
sql_name + ", " + sql_cols + ")", tuple(sql_param))
tbl_name = get_customevent_table(event_id)
# Create a table for the new event
sql_query = ["CREATE TABLE %s (" % wash_table_column_name(tbl_name)]
sql_query.append("id MEDIUMINT unsigned NOT NULL auto_increment,")
sql_query.append("creation_time TIMESTAMP DEFAULT NOW(),")
for argument in cols:
arg = wash_table_column_name(argument)
sql_query.append("`%s` MEDIUMTEXT NULL," % arg)
sql_query.append("INDEX `%s` (`%s` (50))," % (arg, arg))
sql_query.append("PRIMARY KEY (id))")
sql_str = ' '.join(sql_query)
run_sql(sql_str)
# We're done! Print notice containing the name of the event.
return ("Event table [%s] successfully created.\n" +
"Please use event id [%s] when registering an event.") \
% (tbl_name, event_id)
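The dynamic DDL assembly in create_customevent() above can be sketched standalone. This is not Invenio code: `_wash` is a simplified stand-in for `wash_table_column_name`, and the table/column names are made up for illustration.

```python
import re

def _wash(name):
    # Simplified stand-in for Invenio's wash_table_column_name:
    # keep only word characters so names are safe to interpolate.
    return re.sub(r"[^\w]", "", name)

def build_customevent_ddl(tbl_name, cols):
    # Mirror the fragment-list + join pattern used above.
    sql_query = ["CREATE TABLE %s (" % _wash(tbl_name)]
    sql_query.append("id MEDIUMINT unsigned NOT NULL auto_increment,")
    sql_query.append("creation_time TIMESTAMP DEFAULT NOW(),")
    for argument in cols:
        arg = _wash(argument)
        sql_query.append("`%s` MEDIUMTEXT NULL," % arg)
        sql_query.append("INDEX `%s` (`%s` (50))," % (arg, arg))
    sql_query.append("PRIMARY KEY (id))")
    return ' '.join(sql_query)

ddl = build_customevent_ddl("staEVENT01", ["user", "query"])
```

Interpolating names (rather than passing them as query parameters) is why they must be washed first: SQL drivers only parameterize values, not identifiers.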
def modify_customevent(event_id=None, name=None, cols=[]):
"""
Modify a custom event. It can modify the column definitions
and/or the descriptive name.
@param event_id: Human-readable id of the event.
@type event_id: str
@param name: Optionally, a descriptive name.
@type name: str
@param cols: Optionally, the name of the additional columns.
@type cols: [str]
@return: A status message
@rtype: str
"""
if event_id is None:
return "Please specify a human-readable ID for the event."
# Only accept name with standard characters
if not re.search("[^\w]", str(name)) is None:
return "Please note that the event name needs to be written " + \
"without any non-standard characters."
# Check if the cols are valid titles
for argument in cols:
if (argument == "creation_time") or (argument == "id"):
return "Invalid column title: %s! Aborted." % argument
res = run_sql("SELECT CONCAT('staEVENT', number), cols " + \
"FROM staEVENT WHERE id = %s", (event_id, ))
if not res:
return "Invalid event id: %s! Aborted" % event_id
if not run_sql("SHOW TABLES LIKE %s", res[0][0]):
run_sql("DELETE FROM staEVENT WHERE id=%s", (event_id, ))
create_customevent(event_id, event_id, cols)
return
cols_orig = cPickle.loads(res[0][1])
# add new cols
cols_add = []
for col in cols:
if not col in cols_orig:
cols_add.append(col)
# del old cols
cols_del = []
for col in cols_orig:
if not col in cols:
cols_del.append(col)
#modify event table
if cols_del or cols_add:
sql_query = ["ALTER TABLE %s " % wash_table_column_name(res[0][0])]
# check if a column was renamed
for col_del in cols_del:
result = -1
while result < 1 or result > len(cols_add) + 1:
print """What do you want to do with the column %s in event %s?:
1.- Delete it""" % (col_del, event_id)
for i in range(len(cols_add)):
print "%d.- Rename it to %s" % (i + 2, cols_add[i])
result = int(raw_input("\n"))
if result == 1:
sql_query.append("DROP COLUMN `%s`" % col_del)
sql_query.append(", ")
else:
col_add = cols_add[result-2]
sql_query.append("CHANGE `%s` `%s` MEDIUMTEXT NULL"%(col_del, col_add))
sql_query.append(", ")
cols_add.remove(col_add)
# add the rest of the columns
for col_add in cols_add:
sql_query.append("ADD COLUMN `%s` MEDIUMTEXT NULL, " % col_add)
sql_query.append("ADD INDEX `%s` (`%s`(50))" % (col_add, col_add))
sql_query.append(", ")
sql_query[-1] = ";"
run_sql("".join(sql_query))
#modify event definition
sql_query = ["UPDATE staEVENT SET"]
sql_param = []
if cols_del or cols_add:
sql_query.append("cols = %s")
sql_query.append(",")
sql_param.append(cPickle.dumps(cols))
if name:
sql_query.append("name = %s")
sql_query.append(",")
sql_param.append(name)
if sql_param:
sql_query[-1] = "WHERE id = %s"
sql_param.append(event_id)
sql_str = ' '.join(sql_query)
run_sql(sql_str, sql_param)
# We're done! Print notice containing the name of the event.
return ("Event table [%s] successfully modified." % (event_id, ))
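The add/drop column computation in modify_customevent() above boils down to a two-way list diff; a minimal standalone sketch (not Invenio code, example column names are hypothetical):

```python
def diff_columns(cols_orig, cols_new):
    # Columns requested but not stored must be added; columns stored
    # but no longer requested must be dropped (or renamed interactively,
    # as modify_customevent() offers above).
    cols_add = [col for col in cols_new if col not in cols_orig]
    cols_del = [col for col in cols_orig if col not in cols_new]
    return cols_add, cols_del

added, deleted = diff_columns(["user", "query"], ["user", "referer"])
```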
def destroy_customevent(event_id=None):
"""
Removes an existing custom event by destroying the MySQL tables and
the event data that might be around. Use with caution!
@param event_id: Human-readable id of the event to be removed.
@type event_id: str
@return: A status message
@rtype: str
"""
if event_id is None:
return "Please specify an existing event id."
# Check if the specified id exists
if len(run_sql("SELECT NULL FROM staEVENT WHERE id = %s",
(event_id, ))) == 0:
return "Custom event ID '%s' doesn't exist! Aborted." % event_id
else:
tbl_name = get_customevent_table(event_id)
run_sql("DROP TABLE %s" % wash_table_column_name(tbl_name)) # kwalitee: disable=sql
run_sql("DELETE FROM staEVENT WHERE id = %s", (event_id, ))
return ("Custom event ID '%s' table '%s' was successfully destroyed.\n") \
% (event_id, tbl_name)
def destroy_customevents():
"""
Removes all existing custom events by destroying the MySQL tables and
the events data that might be around. Use with caution!
@return: A status message
@rtype: str
"""
msg = ''
try:
res = run_sql("SELECT id FROM staEVENT")
except ProgrammingError:
return msg
for event in res:
msg += destroy_customevent(event[0])
return msg
def register_customevent(event_id, *arguments):
"""
Registers a custom event. Will add to the database's event tables
as created by create_customevent().
This function constitutes the "function hook" that should be
called throughout Invenio where one wants to register a
custom event! Refer to the help section on the admin web page.
@param event_id: Human-readable id of the event to be registered
@type event_id: str
@param *arguments: The rest of the parameters of the function call
@type *arguments: [params]
"""
res = run_sql("SELECT CONCAT('staEVENT', number),cols " + \
"FROM staEVENT WHERE id = %s", (event_id, ))
if not res:
return # the id does not exist
tbl_name = res[0][0]
if res[0][1]:
col_titles = cPickle.loads(res[0][1])
else:
col_titles = []
if len(col_titles) != len(arguments[0]):
return # the number of arguments differs from the number of cols
# Make sql query
if len(arguments[0]) != 0:
sql_param = []
sql_query = ["INSERT INTO %s (" % wash_table_column_name(tbl_name)]
for title in col_titles:
sql_query.append("`%s`" % title)
sql_query.append(",")
sql_query.pop() # del the last ','
sql_query.append(") VALUES (")
for argument in arguments[0]:
sql_query.append("%s")
sql_query.append(",")
sql_param.append(argument)
sql_query.pop() # del the last ','
sql_query.append(")")
sql_str = ''.join(sql_query)
run_sql(sql_str, tuple(sql_param))
else:
run_sql("INSERT INTO %s () VALUES ()" % wash_table_column_name(tbl_name)) # kwalitee: disable=sql
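register_customevent() above interpolates column names into the SQL text while passing values separately as driver-escaped parameters. A standalone sketch of that split, assuming a non-empty column list (not Invenio code; names are illustrative):

```python
def build_customevent_insert(tbl_name, col_titles, values):
    # Column names go into the SQL string; values stay separate so the
    # database driver can escape them via %s placeholders.
    sql_query = ["INSERT INTO %s (" % tbl_name]
    for title in col_titles:
        sql_query.append("`%s`" % title)
        sql_query.append(",")
    sql_query.pop()  # drop the trailing ','
    sql_query.append(") VALUES (")
    sql_param = []
    for value in values:
        sql_query.append("%s")
        sql_query.append(",")
        sql_param.append(value)
    sql_query.pop()  # drop the trailing ','
    sql_query.append(")")
    return ''.join(sql_query), tuple(sql_param)

sql, params = build_customevent_insert("staEVENT01", ["user", "query"],
                                       ["alice", "higgs"])
```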
def cache_keyevent_trend(ids=[]):
"""
Runs the rawdata gatherer for the specific key events.
Intended to be run mainly by the BibSched daemon interface.
For a specific id, all possible timespans' rawdata is gathered.
@param ids: The key event ids that are subject to caching.
@type ids: []
"""
args = {}
for event_id in ids:
args['event_id'] = event_id
if 'type' in KEYEVENT_REPOSITORY[event_id] and \
KEYEVENT_REPOSITORY[event_id]['type'] == 'bibcirculation':
timespans = _get_timespans(bibcirculation_stat=True)[:-1]
else:
timespans = _get_timespans()[:-1]
extraparams = KEYEVENT_REPOSITORY[event_id]['extraparams']
# Construct all combinations of extraparams and store as
# [{param name: arg value}] so that we can loop over them and just
# pattern-replace each dictionary against
# the KEYEVENT_REPOSITORY['event_id']['cachefilename'].
combos = [[]]
for extra in [[(param, extra[0]) for extra in extraparams[param][1]()]
for param in extraparams]:
combos = [i + [y] for y in extra for i in combos]
combos = [dict(extra) for extra in combos]
for i in range(len(timespans)):
# Get timespans parameters
args['timespan'] = timespans[i][0]
args.update({'t_start': timespans[i][2], 't_end': timespans[i][3],
'granularity': timespans[i][4],
't_format': timespans[i][5],
'xtic_format': timespans[i][6]})
for combo in combos:
args.update(combo)
# Create unique filename for this combination of parameters
filename = KEYEVENT_REPOSITORY[event_id]['cachefilename'] \
% dict([(param, re.subn("[^\w]", "_",
args[param])[0]) for param in args])
# Create closure of gatherer function in case cache
# needs to be refreshed
gatherer = lambda: KEYEVENT_REPOSITORY[event_id] \
['gatherer'](args)
# Get data file from cache, ALWAYS REFRESH DATA!
_get_file_using_cache(filename, gatherer, True).read()
return True
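The combination-building loop above computes the cartesian product of all extra-parameter values as a list of dicts. A minimal sketch of the same step, with hypothetical parameter names and values:

```python
# Sketch of the extraparams combination step above: build every combination
# of parameter values as a list of {param name: value} dicts.
def build_combos(extraparams):
    """extraparams: {param name: [value, ...]} -> [{param name: value}]"""
    combos = [[]]
    for param, values in extraparams.items():
        combos = [combo + [(param, value)]
                  for value in values for combo in combos]
    return [dict(combo) for combo in combos]
```

For instance, `build_combos({'collection': ['All', 'Theses'], 'format': ['pdf']})` yields two dicts, one per collection, each carrying the single format value, which is exactly the shape the caching loop then pattern-replaces into the cache filename.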
def cache_customevent_trend(ids=[]):
"""
Runs the rawdata gatherer for the specific custom events.
    Intended to be run mainly by the BibSched daemon interface.
For a specific id, all possible timespans' rawdata is gathered.
@param ids: The custom event ids that are subject to caching.
@type ids: []
"""
args = {}
timespans = _get_timespans()
for event_id in ids:
args['event_id'] = event_id
args['cols'] = []
for i in range(len(timespans)):
# Get timespans parameters
args['timespan'] = timespans[i][0]
args.update({'t_start': timespans[i][2], 't_end': timespans[i][3],
'granularity': timespans[i][4],
't_format': timespans[i][5],
'xtic_format': timespans[i][6]})
# Create unique filename for this combination of parameters
filename = "webstat_customevent_%(event_id)s_%(timespan)s" \
% {'event_id': re.subn("[^\w]", "_", event_id)[0],
'timespan': re.subn("[^\w]", "_", args['timespan'])[0]}
# Create closure of gatherer function in case cache
# needs to be refreshed
gatherer = lambda: get_customevent_trend(args)
# Get data file from cache, ALWAYS REFRESH DATA!
_get_file_using_cache(filename, gatherer, True).read()
return True
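Both caching functions above wash event ids and timespan names into filesystem-safe cache filenames via `re.subn`. A minimal sketch of that washing step:

```python
import re

# Sketch of the cache-filename washing used above: every non-word character
# in an argument is replaced by '_' before it enters the filename pattern.
def wash_for_filename(value):
    return re.subn(r"[^\w]", "_", value)[0]
```

For example, `wash_for_filename("last 2 years")` gives `"last_2_years"`, so a timespan name can be embedded in a cache filename without spaces or path separators.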
def basket_display():
"""
Display basket statistics.
"""
tbl_name = get_customevent_table("baskets")
if not tbl_name:
# custom event baskets not defined, so return empty output:
return []
try:
res = run_sql("SELECT creation_time FROM %s ORDER BY creation_time" % wash_table_column_name(tbl_name)) # kwalitee: disable=sql
days = (res[-1][0] - res[0][0]).days + 1
public = run_sql("SELECT COUNT(*) FROM %s " % wash_table_column_name(tbl_name) + " WHERE action = 'display_public'")[0][0] # kwalitee: disable=sql
users = run_sql("SELECT COUNT(DISTINCT user) FROM %s" % wash_table_column_name(tbl_name))[0][0] # kwalitee: disable=sql
adds = run_sql("SELECT COUNT(*) FROM %s WHERE action = 'add'" % wash_table_column_name(tbl_name))[0][0] # kwalitee: disable=sql
displays = run_sql("SELECT COUNT(*) FROM %s " % wash_table_column_name(tbl_name) + " WHERE action = 'display' OR action = 'display_public'")[0][0] # kwalitee: disable=sql
hits = adds + displays
average = hits / days
res = [("Basket page hits", hits)]
res.append((" Average per day", average))
res.append((" Unique users", users))
res.append((" Additions", adds))
res.append((" Public", public))
except IndexError:
res = []
return res
def alert_display():
"""
Display alert statistics.
"""
tbl_name = get_customevent_table("alerts")
if not tbl_name:
# custom event alerts not defined, so return empty output:
return []
try:
res = run_sql("SELECT creation_time FROM %s ORDER BY creation_time"
% wash_table_column_name(tbl_name))
days = (res[-1][0] - res[0][0]).days + 1
res = run_sql("SELECT COUNT(DISTINCT user),COUNT(*) FROM %s" % wash_table_column_name(tbl_name)) # kwalitee: disable=sql
users = res[0][0]
hits = res[0][1]
displays = run_sql("SELECT COUNT(*) FROM %s WHERE action = 'list'"
% wash_table_column_name(tbl_name))[0][0]
search = run_sql("SELECT COUNT(*) FROM %s WHERE action = 'display'"
% wash_table_column_name(tbl_name))[0][0]
average = hits / days
res = [("Alerts page hits", hits)]
res.append((" Average per day", average))
res.append((" Unique users", users))
res.append((" Displays", displays))
res.append((" Searches history display", search))
except IndexError:
res = []
return res
def loan_display():
"""
Display loan statistics.
"""
try:
loans, renewals, returns, illrequests, holdrequests = \
get_keyevent_bibcirculation_report()
res = [("Yearly report", '')]
res.append((" Loans", loans))
res.append((" Renewals", renewals))
res.append((" Returns", returns))
res.append((" ILL requests", illrequests))
res.append((" Hold requests", holdrequests))
return res
except IndexError:
return []
def get_url_customevent(url_dest, event_id, *arguments):
"""
    Get a URL that registers a custom event. Every time the URL is
    loaded, it registers a custom event, like register_customevent().
    @param url_dest: URL to redirect to after registering the event
    @type url_dest: str
    @param event_id: Human-readable id of the event to be registered
    @type event_id: str
    @param *arguments: The rest of the parameters of the function call;
                       the special parameter "WEBSTAT_IP" tells webstat
                       to substitute the IP address that requests the URL
    @type *arguments: [params]
    @return: URL that registers the event
    @type: str
"""
return "%s/stats/customevent_register?event_id=%s&arg=%s&url=%s" % \
(CFG_SITE_URL, event_id, ','.join(arguments[0]), quote(url_dest))
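The URL construction above can be sketched standalone. This is a minimal illustration with a hypothetical site URL; only the redirect destination is percent-encoded, mirroring get_url_customevent():

```python
# Sketch of the registration URL built above. The site URL is hypothetical;
# arguments are joined with ',' and only url_dest is percent-encoded.
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2

def build_customevent_url(site_url, event_id, arguments, url_dest):
    return "%s/stats/customevent_register?event_id=%s&arg=%s&url=%s" % (
        site_url, event_id, ','.join(arguments), quote(url_dest))
```

For example, `build_customevent_url("http://example.org", "baskets", ["add", "WEBSTAT_IP"], "/record/1")` produces `"http://example.org/stats/customevent_register?event_id=baskets&arg=add,WEBSTAT_IP&url=/record/1"`.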
# WEB
def perform_request_index(ln=CFG_SITE_LANG):
"""
    Displays some informative text, the health box, and the list of
key/custom events.
"""
out = TEMPLATES.tmpl_welcome(ln=ln)
# Display the health box
out += TEMPLATES.tmpl_system_health_list(get_general_status(), ln=ln)
# Produce a list of the key statistics
out += TEMPLATES.tmpl_keyevent_list(ln=ln)
# Display the custom statistics
out += TEMPLATES.tmpl_customevent_list(_get_customevents(), ln=ln)
# Display error log analyzer
out += TEMPLATES.tmpl_error_log_statistics_list(ln=ln)
# Display annual report
out += TEMPLATES.tmpl_custom_summary(ln=ln)
out += TEMPLATES.tmpl_yearly_report_list(ln=ln)
# Display test for collections
out += TEMPLATES.tmpl_collection_stats_main_list(ln=ln)
return out
def perform_display_current_system_health(ln=CFG_SITE_LANG):
"""
Display the current general system health:
- Uptime/load average
- Apache status
- Session information
      - Search counts
- New records
- Bibsched queue
- New/modified records
- Indexing, ranking, sorting and collecting methods
- Baskets
- Alerts
"""
from ConfigParser import ConfigParser
conf = ConfigParser()
conf.read(CFG_WEBSTAT_CONFIG_PATH)
# Prepare the health base data
health_indicators = []
now = datetime.datetime.now()
yesterday = (now - datetime.timedelta(days=1)).strftime("%Y-%m-%d")
today = now.strftime("%Y-%m-%d")
tomorrow = (now + datetime.timedelta(days=1)).strftime("%Y-%m-%d")
# Append uptime and load average to the health box
if conf.get("general", "uptime_box") == "True":
health_indicators.append(("Uptime cmd",
get_keyevent_snapshot_uptime_cmd()))
# Append number of Apache processes to the health box
if conf.get("general", "apache_box") == "True":
health_indicators.append(("Apache processes",
get_keyevent_snapshot_apache_processes()))
health_indicators.append(None)
# Append session information to the health box
if conf.get("general", "visitors_box") == "True":
sess = get_keyevent_snapshot_sessions()
health_indicators.append(("Total active visitors", sum(sess)))
health_indicators.append((" Logged in", sess[1]))
health_indicators.append(None)
# Append searches information to the health box
if conf.get("general", "search_box") == "True":
args = {'t_start': today, 't_end': tomorrow,
'granularity': "day", 't_format': "%Y-%m-%d"}
searches = get_keyevent_trend_search_type_distribution(args)
health_indicators.append(("Searches since midnight",
sum(searches[0][1])))
health_indicators.append((" Simple", searches[0][1][0]))
health_indicators.append((" Advanced", searches[0][1][1]))
health_indicators.append(None)
# Append new records information to the health box
if conf.get("general", "record_box") == "True":
args = {'collection': "All", 't_start': today,
't_end': tomorrow, 'granularity': "day",
't_format': "%Y-%m-%d"}
try:
tot_records = get_keyevent_trend_collection_population(args)[0][1]
except IndexError:
tot_records = 0
args = {'collection': "All", 't_start': yesterday,
't_end': today, 'granularity': "day", 't_format': "%Y-%m-%d"}
try:
new_records = tot_records - \
get_keyevent_trend_collection_population(args)[0][1]
except IndexError:
new_records = 0
health_indicators.append(("Total records", tot_records))
health_indicators.append((" New records since midnight",
new_records))
health_indicators.append(None)
# Append status of BibSched queue to the health box
if conf.get("general", "bibsched_box") == "True":
bibsched = get_keyevent_snapshot_bibsched_status()
health_indicators.append(("BibSched queue",
sum([x[1] for x in bibsched])))
for item in bibsched:
health_indicators.append((" " + item[0], str(item[1])))
health_indicators.append(None)
# Append records pending
if conf.get("general", "waiting_box") == "True":
        last_index, last_rank, last_sort, last_coll = get_last_updates()
index_categories = ('global', 'collection', 'abstract',
'author', 'keyword', 'reference',
'reportnumber', 'title', 'fulltext',
'year', 'journal', 'collaboration',
'affiliation', 'exactauthor', 'caption',
'firstauthor', 'exactfirstauthor',
'authorcount')
rank_categories = ('wrd', 'demo_jif', 'citation',
'citerank_citation_t',
'citerank_pagerank_c',
'citerank_pagerank_t')
sort_categories = ('latest first', 'title', 'author', 'report number',
'most cited')
health_indicators.append(("Records pending per indexing method since", last_index))
for ic in index_categories:
health_indicators.append((" - " + str(ic), get_list_link('index', ic)))
health_indicators.append(None)
health_indicators.append(("Records pending per ranking method since", last_rank))
for rc in rank_categories:
health_indicators.append((" - " + str(rc), get_list_link('rank', rc)))
health_indicators.append(None)
health_indicators.append(("Records pending per sorting method since", last_sort))
for sc in sort_categories:
health_indicators.append((" - " + str(sc), get_list_link('sort', sc)))
health_indicators.append(None)
health_indicators.append(("Records pending for webcolling since", last_coll))
health_indicators.append((" - webcoll", get_list_link('collect')))
health_indicators.append(None)
# Append basket stats to the health box
if conf.get("general", "basket_box") == "True":
health_indicators += basket_display()
health_indicators.append(None)
# Append alerts stats to the health box
if conf.get("general", "alert_box") == "True":
health_indicators += alert_display()
health_indicators.append(None)
# Display the health box
return TEMPLATES.tmpl_system_health(health_indicators, ln=ln)
def perform_display_ingestion_status(req_ingestion, ln=CFG_SITE_LANG):
"""
Display the updating status for the records matching a
given request.
@param req_ingestion: Search pattern request
@type req_ingestion: str
"""
# preconfigured values
index_methods = ('global', 'collection', 'abstract', 'author', 'keyword',
'reference', 'reportnumber', 'title', 'fulltext',
'year', 'journal', 'collaboration', 'affiliation',
'exactauthor', 'caption', 'firstauthor',
'exactfirstauthor', 'authorcount')
rank_methods = ('wrd', 'demo_jif', 'citation', 'citerank_citation_t',
'citerank_pagerank_c', 'citerank_pagerank_t')
sort_methods = ('latest first', 'title', 'author', 'report number',
'most cited')
from ConfigParser import ConfigParser
conf = ConfigParser()
conf.read(CFG_WEBSTAT_CONFIG_PATH)
general = get_general_status()
flag = 0 # match with pending records
stats = []
list_records = get_ingestion_matching_records(req_ingestion, \
int(conf.get("general", "max_ingestion_health")))
if list_records == []:
stats.append(("No matches for your query!", " "*60))
return TEMPLATES.tmpl_ingestion_health(general, req_ingestion, stats, \
ln=ln)
else:
for record in list_records:
if record == 0:
return TEMPLATES.tmpl_ingestion_health(general, None, \
None, ln=ln)
elif record == -1:
stats.append(("Invalid pattern! Please retry", " "*60))
return TEMPLATES.tmpl_ingestion_health(general, None, \
stats, ln=ln)
else:
stat = get_record_ingestion_status(record)
last_mod = get_record_last_modification(record)
if stat != 0:
flag = 1 # match
# Indexing
stats.append((get_title_ingestion(record, last_mod)," "*90))
stats.append(("Pending for indexing methods:", " "*80))
for im in index_methods:
last = get_specific_ingestion_status(record,"index", im)
                    if last is not None:
stats.append((" - %s"%im, "last: " + last))
# Ranking
stats.append(("Pending for ranking methods:", " "*80))
for rm in rank_methods:
last = get_specific_ingestion_status(record, "rank", rm)
                    if last is not None:
stats.append((" - %s"%rm, "last: " + last))
# Sorting
stats.append(("Pending for sorting methods:", " "*80))
for sm in sort_methods:
last = get_specific_ingestion_status(record, "sort", sm)
                    if last is not None:
stats.append((" - %s"%sm, "last: " + last))
# Collecting
stats.append(("Pending for webcolling:", " "*80))
                last = get_specific_ingestion_status(record, "collect")
                if last is not None:
stats.append((" - webcoll", "last: " + last))
# if there was no match
if flag == 0:
stats.append(("All matching records up to date!", " "*60))
return TEMPLATES.tmpl_ingestion_health(general, req_ingestion, stats, ln=ln)
def perform_display_yearly_report(ln=CFG_SITE_LANG):
"""
    Display the yearly report.
"""
# Append loans stats to the box
year_report = []
year_report += loan_display()
year_report.append(None)
return TEMPLATES.tmpl_yearly_report(year_report, ln=ln)
def perform_display_keyevent(event_id=None, args={},
req=None, ln=CFG_SITE_LANG):
"""
Display key events using a certain output type over the given time span.
    @param event_id: The id of the key event that is to be displayed.
    @type event_id: str
@param args: { param name: argument value }
@type args: { str: str }
@param req: The Apache request object, necessary for export redirect.
@type req:
"""
# Get all the option lists:
# { parameter name: [(argument internal name, argument full name)]}
options = dict()
order = []
for param in KEYEVENT_REPOSITORY[event_id]['extraparams']:
# Order of options
order.append(param)
if KEYEVENT_REPOSITORY[event_id]['extraparams'][param][0] == 'combobox':
options[param] = ('combobox',
KEYEVENT_REPOSITORY[event_id]['extraparams'][param][1],
KEYEVENT_REPOSITORY[event_id]['extraparams'][param][2]())
else:
options[param] = (KEYEVENT_REPOSITORY[event_id]['extraparams'][param][0],
(KEYEVENT_REPOSITORY[event_id]['extraparams'][param][1]))
# Build a dictionary for the selected parameters:
# { parameter name: argument internal name }
choosed = dict([(param, args[param]) for param in KEYEVENT_REPOSITORY
[event_id]['extraparams']])
if KEYEVENT_REPOSITORY[event_id]['output'] == 'Graph':
options['format'] = ('combobox', 'Output format', _get_formats())
choosed['format'] = args['format']
order += ['format']
if event_id != 'items list':
if 'type' in KEYEVENT_REPOSITORY[event_id] and \
KEYEVENT_REPOSITORY[event_id]['type'] == 'bibcirculation':
options['timespan'] = ('combobox', 'Time span', _get_timespans(bibcirculation_stat=True))
else:
options['timespan'] = ('combobox', 'Time span', _get_timespans())
choosed['timespan'] = args['timespan']
order += ['timespan']
choosed['s_date'] = args['s_date']
choosed['f_date'] = args['f_date']
# Send to template to prepare event customization FORM box
    is_list = KEYEVENT_REPOSITORY[event_id]['output'] == 'List'
    out = "\n".join(["<p>%s</p>" % parr for parr in KEYEVENT_REPOSITORY[event_id]['description']]) \
        + TEMPLATES.tmpl_keyevent_box(options, order, choosed, ln=ln, list=is_list)
# Arguments OK?
    # Check for existence. If nothing, only show FORM box from above.
if len(choosed) == 0:
return out
# Make sure extraparams are valid, if any
if KEYEVENT_REPOSITORY[event_id]['output'] == 'Graph' and \
event_id != 'percentage satisfied ill requests':
for param in choosed:
            if param in options and options[param][0] == 'combobox' and \
                    not choosed[param] in [x[0] for x in options[param][2]]:
                return out + TEMPLATES.tmpl_error(
                    'Please specify a valid value for parameter "%s".'
                    % options[param][1], ln=ln)
# Arguments OK beyond this point!
# Get unique name for caching purposes (make sure that the params used
# in the filename are safe!)
filename = KEYEVENT_REPOSITORY[event_id]['cachefilename'] \
% dict([(param, re.subn("[^\w]", "_", choosed[param])[0])
for param in choosed] +
[('event_id', re.subn("[^\w]", "_", event_id)[0])])
# Get time parameters from repository
if 'timespan' in choosed:
if choosed['timespan'] == "select date":
t_args = _get_time_parameters_select_date(args["s_date"], args["f_date"])
else:
t_args = _get_time_parameters(options, choosed['timespan'])
else:
t_args = args
for param in KEYEVENT_REPOSITORY[event_id]['extraparams']:
t_args[param] = choosed[param]
if 'format' in args and args['format'] == 'Full list':
gatherer = lambda: KEYEVENT_REPOSITORY[event_id]['gatherer'](t_args, limit=-1)
export_to_file(gatherer(), req)
return out
# Create closure of frequency function in case cache needs to be refreshed
gatherer = lambda return_sql: KEYEVENT_REPOSITORY[event_id]['gatherer'](t_args, return_sql=return_sql)
    # Determine whether this particular file is scheduled for caching;
    # in that case we must not allow refreshing of the rawdata.
allow_refresh = not _is_scheduled_for_cacheing(event_id)
# Get data file from cache (refresh if necessary)
force = 'timespan' in choosed and choosed['timespan'] == "select date"
data = eval(_get_file_using_cache(filename, gatherer, force,
allow_refresh=allow_refresh).read())
if KEYEVENT_REPOSITORY[event_id]['output'] == 'Graph':
# If type indicates an export, run the export function and we're done
if _is_type_export(choosed['format']):
_get_export_closure(choosed['format'])(data, req)
return out
# Prepare the graph settings that are being passed on to grapher
settings = {"title": KEYEVENT_REPOSITORY[event_id]['specificname']\
% choosed,
"xlabel": t_args['t_fullname'] + ' (' + \
t_args['granularity'] + ')',
"ylabel": KEYEVENT_REPOSITORY[event_id]['ylabel'],
"xtic_format": t_args['xtic_format'],
"format": choosed['format'],
"multiple": KEYEVENT_REPOSITORY[event_id]['multiple']}
else:
settings = {"title": KEYEVENT_REPOSITORY[event_id]['specificname']\
% choosed, "format": 'Table',
"rows": KEYEVENT_REPOSITORY[event_id]['rows']}
if args['sql']:
sql = gatherer(True)
else:
sql = ''
return out + _perform_display_event(data,
os.path.basename(filename), settings, ln=ln) + sql
def perform_display_customevent(ids=[], args={}, req=None, ln=CFG_SITE_LANG):
"""
Display custom events using a certain output type over the given time span.
@param ids: The ids for the custom events that are to be displayed.
@type ids: [str]
@param args: { param name: argument value }
@type args: { str: str }
@param req: The Apache request object, necessary for export redirect.
@type req:
"""
# Get all the option lists:
# { parameter name: [(argument internal name, argument full name)]}
cols_dict = _get_customevent_cols()
cols_dict['__header'] = 'Argument'
cols_dict['__none'] = []
options = {'ids': ('Custom event', _get_customevents()),
'timespan': ('Time span', _get_timespans()),
'format': ('Output format', _get_formats(True)),
'cols': cols_dict}
# Build a dictionary for the selected parameters:
# { parameter name: argument internal name }
choosed = {'ids': args['ids'], 'timespan': args['timespan'],
'format': args['format'], 's_date': args['s_date'],
'f_date': args['f_date']}
# Calculate cols
index = []
for key in args.keys():
if key[:4] == 'cols':
index.append(key[4:])
index.sort()
choosed['cols'] = [zip([""] + args['bool' + i], args['cols' + i],
args['col_value' + i]) for i in index]
# Send to template to prepare event customization FORM box
out = TEMPLATES.tmpl_customevent_box(options, choosed, ln=ln)
# Arguments OK?
# Make sure extraparams are valid, if any
for param in ['ids', 'timespan', 'format']:
legalvalues = [x[0] for x in options[param][1]]
if type(args[param]) is list:
# If the argument is a list, like the content of 'ids'
# every value has to be checked
if len(args[param]) == 0:
return out + TEMPLATES.tmpl_error(
'Please specify a valid value for parameter "%s".'
% options[param][0], ln=ln)
for arg in args[param]:
if not arg in legalvalues:
return out + TEMPLATES.tmpl_error(
'Please specify a valid value for parameter "%s".'
% options[param][0], ln=ln)
else:
if not args[param] in legalvalues:
return out + TEMPLATES.tmpl_error(
'Please specify a valid value for parameter "%s".'
% options[param][0], ln=ln)
# Fetch time parameters from repository
if choosed['timespan'] == "select date":
args_req = _get_time_parameters_select_date(args["s_date"],
args["f_date"])
else:
args_req = _get_time_parameters(options, choosed['timespan'])
# ASCII dump data is different from the standard formats
if choosed['format'] == 'asciidump':
data = perform_display_customevent_data_ascii_dump(ids, args,
args_req, choosed)
else:
data = perform_display_customevent_data(ids, args_req, choosed)
# If type indicates an export, run the export function and we're done
if _is_type_export(args['format']):
_get_export_closure(args['format'])(data, req)
return out
# Get full names, for those that have them
names = []
events = _get_customevents()
for event_id in ids:
temp = events[[x[0] for x in events].index(event_id)]
        if temp[1] is not None:
names.append(temp[1])
else:
names.append(temp[0])
# Generate a filename for the graph
filename = "tmp_webstat_customevent_" + ''.join([re.subn("[^\w]", "",
event_id)[0] for event_id in ids]) + "_"
if choosed['timespan'] == "select date":
filename += args_req['t_start'] + "_" + args_req['t_end']
else:
filename += choosed['timespan']
settings = {"title": 'Custom event',
"xlabel": args_req['t_fullname'] + ' (' + \
args_req['granularity'] + ')',
"ylabel": "Action quantity",
"xtic_format": args_req['xtic_format'],
"format": choosed['format'],
"multiple": (type(ids) is list) and names or []}
return out + _perform_display_event(data, os.path.basename(filename),
settings, ln=ln)
def perform_display_customevent_data(ids, args_req, choosed):
"""Returns the trend data"""
data_unmerged = []
    for i, event_id in enumerate(ids):
        # Calculate cols
        args_req['cols'] = choosed['cols'][i]
# Get unique name for the rawdata file (wash arguments!)
filename = "webstat_customevent_" + re.subn("[^\w]", "", event_id + \
"_" + choosed['timespan'] + "_" + '-'.join([':'.join(col)
for col in args_req['cols']]))[0]
# Add the current id to the gatherer's arguments
args_req['event_id'] = event_id
# Prepare raw data gatherer, if cache needs refreshing.
gatherer = lambda x: get_customevent_trend(args_req)
        # Determine whether this particular file is scheduled for caching;
        # in that case we must not allow refreshing of the rawdata.
allow_refresh = not _is_scheduled_for_cacheing(event_id)
# Get file from cache, and evaluate it to trend data
force = choosed['timespan'] == "select date"
data_unmerged.append(eval(_get_file_using_cache(filename, gatherer,
force, allow_refresh=allow_refresh).read()))
# Merge data from the unmerged trends into the final destination
return [(x[0][0], tuple([y[1] for y in x])) for x in zip(*data_unmerged)]
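The final merge step above aligns the unmerged per-event trends by timestamp and collects their values into one tuple per point. A minimal sketch of that `zip(*...)` merge, with made-up trend data:

```python
# Sketch of the merge step above: each unmerged trend is a list of
# (timestamp, value) pairs over the same time axis; zip(*...) aligns them
# and the values are collected into one tuple per timestamp.
def merge_trends(data_unmerged):
    return [(x[0][0], tuple(y[1] for y in x)) for x in zip(*data_unmerged)]
```

For example, merging `[("2013-01", 3), ("2013-02", 5)]` with `[("2013-01", 1), ("2013-02", 0)]` yields `[("2013-01", (3, 1)), ("2013-02", (5, 0))]`, i.e. one multi-valued series suitable for a multi-line graph.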
def perform_display_customevent_data_ascii_dump(ids, args, args_req, choosed):
"""Returns the trend data"""
for i in [str(j) for j in range(len(ids))]:
args['bool' + i].insert(0, "")
args_req['cols' + i] = zip(args['bool' + i], args['cols' + i],
args['col_value' + i])
filename = "webstat_customevent_" + re.subn("[^\w]", "", ''.join(ids) +
"_" + choosed['timespan'] + "_" + '-'.join([':'.join(col) for
col in [args['cols' + str(i)] for i in range(len(ids))]]) +
"_asciidump")[0]
args_req['ids'] = ids
gatherer = lambda: get_customevent_dump(args_req)
force = choosed['timespan'] == "select date"
return eval(_get_file_using_cache(filename, gatherer, force).read())
def perform_display_coll_list(req=None, ln=CFG_SITE_LANG):
"""
Display list of collections
@param req: The Apache request object, necessary for export redirect.
@type req:
"""
return TEMPLATES.tmpl_collection_stats_complete_list(get_collection_list_plus_all())
def perform_display_stats_per_coll(args={}, req=None, ln=CFG_SITE_LANG):
"""
Display general statistics for a given collection
@param args: { param name: argument value }
@type args: { str: str }
@param req: The Apache request object, necessary for export redirect.
@type req:
"""
events_id = ('collection population', 'download frequency', 'comments frequency')
# Get all the option lists:
# Make sure extraparams are valid, if any
if not args['collection'] in [x[0] for x in get_collection_list_plus_all()]:
return TEMPLATES.tmpl_error('Please specify a valid value for parameter "Collection".')
# { parameter name: [(argument internal name, argument full name)]}
options = {'collection': ('combobox', 'Collection', get_collection_list_plus_all()),
'timespan': ('combobox', 'Time span', _get_timespans()),
'format': ('combobox', 'Output format', _get_formats())}
order = options.keys()
# Arguments OK beyond this point!
# Get unique name for caching purposes (make sure that the params
# used in the filename are safe!)
out = TEMPLATES.tmpl_keyevent_box(options, order, args, ln=ln)
out += "<table>"
pair = False
for event_id in events_id:
# Get unique name for caching purposes (make sure that the params used
# in the filename are safe!)
filename = KEYEVENT_REPOSITORY[event_id]['cachefilename'] \
% dict([(param, re.subn("[^\w]", "_", args[param])[0])
for param in args] +
[('event_id', re.subn("[^\w]", "_", event_id)[0])])
# Get time parameters from repository
if args['timespan'] == "select date":
t_args = _get_time_parameters_select_date(args["s_date"], args["f_date"])
else:
t_args = _get_time_parameters(options, args['timespan'])
for param in KEYEVENT_REPOSITORY[event_id]['extraparams']:
t_args[param] = args[param]
# Create closure of frequency function in case cache needs to be refreshed
gatherer = lambda return_sql: KEYEVENT_REPOSITORY[event_id]['gatherer'](t_args, return_sql=return_sql)
        # Determine whether this particular file is scheduled for caching;
        # in that case we must not allow refreshing of the rawdata.
allow_refresh = not _is_scheduled_for_cacheing(event_id)
# Get data file from cache (refresh if necessary)
data = eval(_get_file_using_cache(filename, gatherer, allow_refresh=allow_refresh).read())
# Prepare the graph settings that are being passed on to grapher
settings = {"title": KEYEVENT_REPOSITORY[event_id]['specificname'] % t_args,
"xlabel": t_args['t_fullname'] + ' (' + \
t_args['granularity'] + ')',
"ylabel": KEYEVENT_REPOSITORY[event_id]['ylabel'],
"xtic_format": t_args['xtic_format'],
"format": args['format'],
"multiple": KEYEVENT_REPOSITORY[event_id]['multiple'],
"size": '360,270'}
if not pair:
out += '<tr>'
out += '<td>%s</td>' % _perform_display_event(data,
os.path.basename(filename), settings, ln=ln)
if pair:
out += '</tr>'
pair = not pair
return out + "</table>"
def perform_display_customevent_help(ln=CFG_SITE_LANG):
"""Display the custom event help"""
return TEMPLATES.tmpl_customevent_help(ln=ln)
def perform_display_error_log_analyzer(ln=CFG_SITE_LANG):
"""Display the error log analyzer"""
update_error_log_analyzer()
return TEMPLATES.tmpl_error_log_analyzer(get_invenio_error_log_ranking(),
get_invenio_last_n_errors(5),
get_apache_error_log_ranking())
def perform_display_custom_summary(args, ln=CFG_SITE_LANG):
"""Display the custom summary (annual report)
@param args: { param name: argument value } (chart title, search query and output tag)
@type args: { str: str }
"""
if args['tag'] == '':
args['tag'] = CFG_JOURNAL_TAG.replace("%", "p")
data = get_custom_summary_data(args['query'], args['tag'])
tag_name = _get_tag_name(args['tag'])
if tag_name == '':
tag_name = args['tag']
path = WEBSTAT_GRAPH_DIRECTORY + os.path.basename("tmp_webstat_custom_summary_"
+ args['query'] + args['tag'])
create_custom_summary_graph(data[:-1], path, args['title'])
return TEMPLATES.tmpl_display_custom_summary(tag_name, data, args['title'],
args['query'], args['tag'], path, ln=ln)
# INTERNALS
def _perform_display_event(data, name, settings, ln=CFG_SITE_LANG):
"""
Retrieves a graph or a table.
@param data: The trend/dump data
@type data: [(str, str|int|(str|int,...))] | [(str|int,...)]
@param name: The name of the trend (to be used as basename of graph file)
@type name: str
@param settings: Dictionary of graph parameters
@type settings: dict
@return: The URL of the graph (ASCII or image)
@type: str
"""
path = WEBSTAT_GRAPH_DIRECTORY + "tmp_" + name
# Generate, and insert using the appropriate template
if settings["format"] == "asciidump":
path += "_asciidump"
create_graph_dump(data, path)
out = TEMPLATES.tmpl_display_event_trend_ascii(settings["title"],
path, ln=ln)
if settings["format"] == "Table":
create_graph_table(data, path, settings)
return TEMPLATES.tmpl_display_event_trend_text(settings["title"], path, ln=ln)
create_graph_trend(data, path, settings)
if settings["format"] == "asciiart":
out = TEMPLATES.tmpl_display_event_trend_ascii(
settings["title"], path, ln=ln)
else:
        if settings["format"] == "gnuplot":
            try:
                import Gnuplot
            except ImportError:
                out = 'Gnuplot is not installed. Returning ASCII art.' + \
                      TEMPLATES.tmpl_display_event_trend_ascii(
                          settings["title"], path, ln=ln)
            else:
                out = TEMPLATES.tmpl_display_event_trend_image(
                    settings["title"], path, ln=ln)
elif settings["format"] == "flot":
out = TEMPLATES.tmpl_display_event_trend_text(
settings["title"], path, ln=ln)
else:
out = TEMPLATES.tmpl_display_event_trend_ascii(
settings["title"], path, ln=ln)
avgs, maxs, mins = get_numeric_stats(data, settings["multiple"] is not None)
return out + TEMPLATES.tmpl_display_numeric_stats(settings["multiple"],
avgs, maxs, mins)
def _get_customevents():
"""
Retrieves registered custom events from the database.
@return: [(internal name, readable name)]
@type: [(str, str)]
"""
return [(x[0], x[1]) for x in run_sql("SELECT id, name FROM staEVENT")]
def _get_timespans(dttime=None, bibcirculation_stat=False):
"""
Helper function that generates possible time spans to be put in the
drop-down in the generation box. Computes possible years, and also some
    pre-defined simpler values. Some items in the returned list also tweak the
output graph, if any, since such values are closely related to the nature
of the time span.
@param dttime: A datetime object indicating the current date and time
@type dttime: datetime.datetime
@return: [(Internal name, Readable name, t_start, t_end, granularity, format, xtic_format)]
@type [(str, str, str, str, str, str, str)]
"""
if dttime is None:
dttime = datetime.datetime.now()
dtformat = "%Y-%m-%d"
    # Helper function to return a timedelta object reflecting a diff of x days
d_diff = lambda x: datetime.timedelta(days=x)
# Helper function to return the number of days in the month x months ago
d_in_m = lambda x: calendar.monthrange(
((dttime.month - x < 1) and dttime.year - 1 or dttime.year),
(((dttime.month - 1) - x) % 12 + 1))[1]
to_str = lambda x: x.strftime(dtformat)
dt_str = to_str(dttime)
spans = [("today", "Today",
dt_str,
to_str(dttime + d_diff(1)),
"hour", dtformat, "%H"),
("this week", "This week",
to_str(dttime - d_diff(dttime.weekday())),
to_str(dttime + d_diff(1)),
"day", dtformat, "%a"),
("last week", "Last week",
to_str(dttime - d_diff(dttime.weekday() + 7)),
to_str(dttime - d_diff(dttime.weekday())),
"day", dtformat, "%a"),
("this month", "This month",
to_str(dttime - d_diff(dttime.day) + d_diff(1)),
to_str(dttime + d_diff(1)),
"day", dtformat, "%d"),
("last month", "Last month",
to_str(dttime - d_diff(d_in_m(1)) - d_diff(dttime.day) + d_diff(1)),
to_str(dttime - d_diff(dttime.day) + d_diff(1)),
"day", dtformat, "%d"),
("last three months", "Last three months",
to_str(dttime - d_diff(d_in_m(1)) - d_diff(d_in_m(2)) -
d_diff(dttime.day) + d_diff(1)),
dt_str,
"month", dtformat, "%b"),
("last year", "Last year",
to_str((dttime - datetime.timedelta(days=365)).replace(day=1)),
to_str((dttime + datetime.timedelta(days=31)).replace(day=1)),
"month", dtformat, "%b")]
# Get the first year as indicated by the contents of bibrec or
# CFG_WEBSTAT_BIBCIRCULATION_START_YEAR
try:
if bibcirculation_stat and CFG_WEBSTAT_BIBCIRCULATION_START_YEAR:
year1 = int(CFG_WEBSTAT_BIBCIRCULATION_START_YEAR)
else:
year1 = run_sql("SELECT creation_date FROM bibrec ORDER BY \
creation_date LIMIT 1")[0][0].year
except:
year1 = dttime.year
year2 = time.localtime()[0]
diff_year = year2 - year1
if diff_year >= 2:
spans.append(("last 2 years", "Last 2 years",
to_str((dttime - datetime.timedelta(days=365 * 2)).replace(day=1)),
to_str((dttime + datetime.timedelta(days=31)).replace(day=1)),
"month", dtformat, "%b"))
if diff_year >= 5:
spans.append(("last 5 years", "Last 5 years",
to_str((dttime - datetime.timedelta(days=365 * 5)).replace(day=1)),
to_str((dttime + datetime.timedelta(days=31)).replace(day=1)),
"year", dtformat, "%Y"))
if diff_year >= 10:
spans.append(("last 10 years", "Last 10 years",
to_str((dttime - datetime.timedelta(days=365 * 10)).replace(day=1)),
to_str((dttime + datetime.timedelta(days=31)).replace(day=1)),
"year", dtformat, "%Y"))
spans.append(("full history", "Full history", str(year1), str(year2 + 1),
"year", "%Y", "%Y"))
spans.extend([(str(x), str(x), str(x), str(x + 1), "month", "%Y", "%b")
for x in range(year2, year1 - 1, -1)])
spans.append(("select date", "Select date...", "", "",
"hour", dtformat, "%H"))
return spans
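The `d_in_m` lambda above packs its year/month wraparound into one expression. A standalone sketch of the same arithmetic (not part of this module), spelled out for the `x` values of 1 and 2 used here:

```python
import calendar
import datetime

def days_in_month_ago(dttime, x):
    """Number of days in the month x months before dttime, stepping
    into the previous year when the subtraction wraps past January."""
    year = dttime.year - 1 if dttime.month - x < 1 else dttime.year
    month = ((dttime.month - 1) - x) % 12 + 1
    return calendar.monthrange(year, month)[1]

# One month before March 2011 is February 2011 (28 days);
# one month before January 2011 is December 2010 (31 days).
print(days_in_month_ago(datetime.datetime(2011, 3, 15), 1))
print(days_in_month_ago(datetime.datetime(2011, 1, 10), 1))
```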
def _get_time_parameters(options, timespan):
"""
Returns the time parameters from the repository for a predefined timespan
@param options: A dictionary with the option lists
@type options: { parameter name: [(argument internal name, argument full name)]}
@param timespan: name of the chosen timespan
@type timespan: str
@return: {t_fullname, t_start, t_end, granularity, t_format, xtic_format}
@type: {str: str}
"""
if len(options['timespan']) == 2:
i = 1
else:
i = 2
_, t_fullname, t_start, t_end, granularity, t_format, xtic_format = \
options['timespan'][i][[x[0]
for x in options['timespan'][i]].index(timespan)]
return {'t_fullname': t_fullname, 't_start': t_start, 't_end': t_end,
'granularity': granularity, 't_format': t_format,
'xtic_format': xtic_format}
def _get_time_parameters_select_date(s_date, f_date):
"""
Returns the time parameters from the repository for a custom timespan
@param s_date: start date for the graph
@type s_date: str %m/%d/%Y %H:%M
@param f_date: finish date for the graph
@type f_date: str %m/%d/%Y %H:%M
@return: {t_fullname, t_start, t_end, granularity, t_format, xtic_format}
@type: {str: str}
"""
t_fullname = "%s-%s" % (s_date, f_date)
dt_start = datetime.datetime(*(time.strptime(s_date, "%m/%d/%Y %H:%M")[0:6]))
dt_end = datetime.datetime(*(time.strptime(f_date, "%m/%d/%Y %H:%M")[0:6]))
if dt_end - dt_start <= datetime.timedelta(hours=1):
xtic_format = "%M:%S"
granularity = 'second'
elif dt_end - dt_start <= datetime.timedelta(days=1):
xtic_format = "%H:%M"
granularity = 'minute'
elif dt_end - dt_start <= datetime.timedelta(days=7):
xtic_format = "%H"
granularity = 'hour'
elif dt_end - dt_start <= datetime.timedelta(days=60):
xtic_format = "%a"
granularity = 'day'
elif dt_end - dt_start <= datetime.timedelta(days=730):
xtic_format = "%d"
granularity = 'month'
else:
xtic_format = "%Y"
granularity = 'year'
t_format = "%Y-%m-%d %H:%M:%S"
t_start = dt_start.strftime("%Y-%m-%d %H:%M:%S")
t_end = dt_end.strftime("%Y-%m-%d %H:%M:%S")
return {'t_fullname': t_fullname, 't_start': t_start, 't_end': t_end,
'granularity': granularity, 't_format': t_format,
'xtic_format': xtic_format}
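The if/elif ladder above maps the length of a custom span to a plot granularity and x-tic format. A table-driven standalone sketch of the same mapping (the format strings here are illustrative, chosen to match the intent of each granularity):

```python
import datetime

# (upper bound of span, granularity, x-tic strftime format)
_THRESHOLDS = [
    (datetime.timedelta(hours=1), 'second', '%M:%S'),
    (datetime.timedelta(days=1), 'minute', '%H:%M'),
    (datetime.timedelta(days=7), 'hour', '%H'),
    (datetime.timedelta(days=60), 'day', '%a'),
    (datetime.timedelta(days=730), 'month', '%d'),
]

def pick_granularity(span):
    """Return (granularity, xtic_format) for a timedelta span; anything
    longer than two years falls through to yearly granularity."""
    for limit, granularity, xtic in _THRESHOLDS:
        if span <= limit:
            return granularity, xtic
    return 'year', '%Y'

print(pick_granularity(datetime.timedelta(days=3)))    # ('hour', '%H')
print(pick_granularity(datetime.timedelta(days=800)))  # ('year', '%Y')
```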
def _get_formats(with_dump=False):
"""
Helper function to retrieve an Invenio-friendly list of all possible
output types (displaying and exporting) from the central repository as
stored in the TYPE_REPOSITORY variable at the top of this module.
@param with_dump: Optionally displays the custom-event only type 'asciidump'
@type with_dump: bool
@return: [(Internal name, Readable name)]
@type [(str, str)]
"""
# The third tuple value is internal
if with_dump:
return [(x[0], x[1]) for x in TYPE_REPOSITORY]
else:
return [(x[0], x[1]) for x in TYPE_REPOSITORY if x[0] != 'asciidump']
def _get_customevent_cols(event_id=""):
"""
List of all the different column names in custom events.
@return: {id: [(internal name, readable name)]}
@type: {str: [(str, str)]}
"""
sql_str = "SELECT id,cols FROM staEVENT"
sql_param = []
if event_id:
sql_str += " WHERE id = %s"
sql_param.append(event_id)
cols = {}
for event in run_sql(sql_str, sql_param):
if event[0]:
if event[1]:
cols[event[0]] = [(name, name) for name
in cPickle.loads(event[1])]
else:
cols[event[0]] = []
return cols
def _is_type_export(typename):
"""
Helper function that consults the central repository of types to determine
whether the input parameter represents an export type.
@param typename: Internal type name
@type typename: str
@return: Whether the given type exports data
@type: bool
"""
return len(TYPE_REPOSITORY[[x[0] for x in
TYPE_REPOSITORY].index(typename)]) == 3
def _get_export_closure(typename):
"""
Helper function that returns the export closure corresponding to a given
type.
@param typename: Internal type name
@type typename: str
@return: Closure that exports data to the type's format
@type: function
"""
return TYPE_REPOSITORY[[x[0] for x in TYPE_REPOSITORY].index(typename)][2]
def _get_file_using_cache(filename, closure, force=False, allow_refresh=True):
"""
Uses the Invenio cache, i.e. the tempdir, to see if there's a recent
cached version of the sought-after file in there. If not, the closure is
used to compute fresh content, which is cached and returned instead.
Relies on the Invenio configuration parameter WEBSTAT_CACHE_INTERVAL.
@param filename: The name of the file that might be cached
@type filename: str
@param closure: A function, that executed will return data to be cached. The
function should return either a string, or something that
makes sense after being interpreted with str().
@type closure: function
@param force: Override cache default value.
@type force: bool
@param allow_refresh: Allow a stale cached file to be refreshed.
@type allow_refresh: bool
"""
# Absolute path to cached files, might not exist.
filename = os.path.normpath(WEBSTAT_RAWDATA_DIRECTORY + filename)
# Get the modification time of the cached file (if any).
try:
mtime = os.path.getmtime(filename)
except OSError:
# No cached version of this particular file exists, thus the
# modification time is set to 0 for easy logic below.
mtime = 0
# Consider refreshing cache if FORCE or NO CACHE AT ALL,
# or CACHE EXISTS AND REFRESH IS ALLOWED.
if force or mtime == 0 or (mtime > 0 and allow_refresh):
# Is the file modification time recent enough?
if force or (time.time() - mtime > WEBSTAT_CACHE_INTERVAL):
# No! Use closure to compute new content
content = closure(False)
# Cache the data
open(filename, 'w').write(str(content))
# Return the (perhaps just) cached file
return open(filename, 'r')
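The caching logic above boils down to a modification-time freshness check: a missing file gets mtime 0 so it always counts as stale. A minimal standalone sketch of the same pattern (using a hypothetical `cached_call` helper and a throwaway temp directory, not this module's API):

```python
import os
import tempfile
import time

def cached_call(path, closure, max_age, force=False):
    """Return the cached file's contents if younger than max_age seconds,
    otherwise recompute via closure, rewrite the cache, and return that."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        mtime = 0  # no cached copy yet: always considered stale
    if force or time.time() - mtime > max_age:
        with open(path, 'w') as fh:
            fh.write(str(closure()))
    with open(path, 'r') as fh:
        return fh.read()

path = os.path.join(tempfile.mkdtemp(), 'cache.txt')
print(cached_call(path, lambda: 42, max_age=3600))  # '42' (computed)
```

A second call within `max_age` returns the cached value without invoking the closure, while `force=True` always recomputes.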
def _is_scheduled_for_cacheing(event_id):
"""
@param event_id: The event id
@type event_id: str
@return: Whether the event id is scheduled for BibSched execution.
@type: bool
"""
if not is_task_scheduled('webstatadmin'):
return False
# Get the task id
try:
task_id = get_task_ids_by_descending_date('webstatadmin',
['RUNNING', 'WAITING'])[0]
except IndexError:
return False
else:
args = get_task_options(task_id)
return event_id in (args['keyevents'] + args['customevents'])
diff --git a/modules/webstat/lib/webstat_engine.py b/modules/webstat/lib/webstat_engine.py
index 33c223a28..713c64aa8 100644
--- a/modules/webstat/lib/webstat_engine.py
+++ b/modules/webstat/lib/webstat_engine.py
@@ -1,2865 +1,2865 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
__lastupdated__ = "$Date$"
import calendar, commands, datetime, time, os, cPickle, random, cgi
from operator import itemgetter
from invenio.config import CFG_TMPDIR, \
CFG_SITE_URL, \
CFG_SITE_NAME, \
CFG_BINDIR, \
CFG_CERN_SITE, \
CFG_BIBCIRCULATION_ITEM_STATUS_CANCELLED, \
CFG_BIBCIRCULATION_ITEM_STATUS_CLAIMED, \
CFG_BIBCIRCULATION_ITEM_STATUS_IN_PROCESS, \
CFG_BIBCIRCULATION_ITEM_STATUS_NOT_ARRIVED, \
CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN, \
CFG_BIBCIRCULATION_ITEM_STATUS_ON_ORDER, \
CFG_BIBCIRCULATION_ITEM_STATUS_ON_SHELF, \
CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL, \
CFG_BIBCIRCULATION_REQUEST_STATUS_DONE, \
CFG_BIBCIRCULATION_ILL_STATUS_CANCELLED
-from invenio.bibindex_engine import CFG_JOURNAL_TAG
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import CFG_JOURNAL_TAG
from invenio.urlutils import redirect_to_url
from invenio.search_engine import perform_request_search, \
get_collection_reclist, \
get_most_popular_field_values, \
search_pattern
from invenio.search_engine_utils import get_fieldvalues
from invenio.dbquery import run_sql, \
wash_table_column_name
from invenio.websubmitadmin_dblayer import get_docid_docname_alldoctypes
from invenio.bibcirculation_utils import book_title_from_MARC, \
book_information_from_MARC
from invenio.bibcirculation_dblayer import get_id_bibrec, \
get_borrower_data
from invenio.websearch_webcoll import CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE
from invenio.dateutils import convert_datetext_to_datestruct, convert_datestruct_to_dategui
WEBSTAT_SESSION_LENGTH = 48 * 60 * 60 # seconds
WEBSTAT_GRAPH_TOKENS = '-=#+@$%&XOSKEHBC'
# KEY EVENT TREND SECTION
def get_keyevent_trend_collection_population(args, return_sql=False):
"""
Returns the quantity of documents in Invenio for
the given timestamp range.
@param args['collection']: A collection name
@type args['collection']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# collect action dates
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
if args.get('collection', 'All') == 'All':
sql_query_g = _get_sql_query("creation_date", args['granularity'],
"bibrec")
sql_query_i = "SELECT COUNT(id) FROM bibrec WHERE creation_date < %s"
initial_quantity = run_sql(sql_query_i, (lower, ))[0][0]
return _get_keyevent_trend(args, sql_query_g, initial_quantity=initial_quantity,
return_sql=return_sql, sql_text=
"Previous count: %s<br />Current count: %%s" % (sql_query_i),
acumulative=True)
else:
ids = get_collection_reclist(args['collection'])
if len(ids) == 0:
return []
g = get_keyevent_trend_new_records(args, return_sql, True)
sql_query_i = "SELECT id FROM bibrec WHERE creation_date < %s"
if return_sql:
return "Previous count: %s<br />Current count: %s" % (sql_query_i % lower, g)
initial_quantity = len(filter(lambda x: x[0] in ids, run_sql(sql_query_i, (lower, ))))
return _get_trend_from_actions(g, initial_quantity, args['t_start'],
args['t_end'], args['granularity'], args['t_format'], acumulative=True)
def get_keyevent_trend_new_records(args, return_sql=False, only_action=False):
"""
Returns the number of new records uploaded during the given timestamp range.
@param args['collection']: A collection name
@type args['collection']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
if args.get('collection', 'All') == 'All':
return _get_keyevent_trend(args, _get_sql_query("creation_date", args['granularity'],
"bibrec"),
return_sql=return_sql)
else:
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
ids = get_collection_reclist(args['collection'])
if len(ids) == 0:
return []
sql = _get_sql_query("creation_date", args["granularity"], "bibrec",
extra_select=", id", group_by=False, count=False)
if return_sql:
return sql % (lower, upper)
recs = run_sql(sql, (lower, upper))
if recs:
def add_count(i_list, element):
""" Reduce function to create a dictionary with the count of ids
for each date """
if i_list and element == i_list[-1][0]:
i_list[-1][1] += 1
else:
i_list.append([element, 1])
return i_list
action_dates = reduce(add_count,
map(lambda x: x[0], filter(lambda x: x[1] in ids, recs)),
[])
else:
action_dates = []
if only_action:
return action_dates
return _get_trend_from_actions(action_dates, 0, args['t_start'],
args['t_end'], args['granularity'], args['t_format'])
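The `reduce(add_count, ...)` step above run-length-counts a date-sorted sequence into `[date, count]` pairs. A standalone illustration with made-up date keys:

```python
from functools import reduce

def add_count(i_list, element):
    """Accumulate [date, count] pairs: bump the last pair's count when the
    element repeats, otherwise start a new pair (same reduce step as above)."""
    if i_list and element == i_list[-1][0]:
        i_list[-1][1] += 1
    else:
        i_list.append([element, 1])
    return i_list

dates = ['2013-01', '2013-01', '2013-02', '2013-02', '2013-02']
print(reduce(add_count, dates, []))  # [['2013-01', 2], ['2013-02', 3]]
```

Note this relies on the input being sorted by date; an unsorted sequence would produce duplicate keys.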
def get_keyevent_trend_search_frequency(args, return_sql=False):
"""
Returns the number of searches (of any kind) carried out
during the given timestamp range.
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
return _get_keyevent_trend(args, _get_sql_query("date", args["granularity"],
"query INNER JOIN user_query ON id=id_query"),
return_sql=return_sql)
def get_keyevent_trend_comments_frequency(args, return_sql=False):
"""
Returns the number of comments (of any kind) posted
during the given timestamp range.
@param args['collection']: A collection name
@type args['collection']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
if args.get('collection', 'All') == 'All':
sql = _get_sql_query("date_creation", args["granularity"],
"cmtRECORDCOMMENT")
else:
sql = _get_sql_query("date_creation", args["granularity"],
"cmtRECORDCOMMENT", conditions=
_get_collection_recids_for_sql_query(args['collection']))
return _get_keyevent_trend(args, sql, return_sql=return_sql)
def get_keyevent_trend_search_type_distribution(args, return_sql=False):
"""
Returns the number of searches carried out during the given
timestamp range, partitioned by type: Simple and
Advanced.
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# SQL to determine all simple searches:
simple = _get_sql_query("date", args["granularity"],
"query INNER JOIN user_query ON id=id_query",
conditions="urlargs LIKE '%%p=%%'")
# SQL to determine all advanced searches:
advanced = _get_sql_query("date", args["granularity"],
"query INNER JOIN user_query ON id=id_query",
conditions="urlargs LIKE '%%as=1%%'")
# Compute the trend for both types
s_trend = _get_keyevent_trend(args, simple,
return_sql=return_sql, sql_text="Simple: %s")
a_trend = _get_keyevent_trend(args, advanced,
return_sql=return_sql, sql_text="Advanced: %s")
# Assemble, according to return type
if return_sql:
return "%s <br /> %s" % (s_trend, a_trend)
return [(s_trend[i][0], (s_trend[i][1], a_trend[i][1]))
for i in range(len(s_trend))]
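The final list comprehension above pairs the two trends positionally, which works because both were computed over the same time axis. A minimal illustration with made-up counts:

```python
# Two trends over the same time buckets (illustrative data only)
s_trend = [('2013-01', 10), ('2013-02', 12)]
a_trend = [('2013-01', 3), ('2013-02', 5)]

# Pair them into (timestamp, (simple_count, advanced_count)) tuples
combined = [(s[0], (s[1], a[1])) for s, a in zip(s_trend, a_trend)]
print(combined)  # [('2013-01', (10, 3)), ('2013-02', (12, 5))]
```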
def get_keyevent_trend_download_frequency(args, return_sql=False):
"""
Returns the number of full text downloads carried out
during the given timestamp range.
@param args['collection']: A collection name
@type args['collection']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# Collect list of timestamps of insertion in the specific collection
if args.get('collection', 'All') == 'All':
return _get_keyevent_trend(args, _get_sql_query("download_time",
args["granularity"], "rnkDOWNLOADS"), return_sql=return_sql)
else:
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
ids = get_collection_reclist(args['collection'])
if len(ids) == 0:
return []
sql = _get_sql_query("download_time", args["granularity"], "rnkDOWNLOADS",
extra_select=", GROUP_CONCAT(id_bibrec)")
if return_sql:
return sql % (lower, upper)
action_dates = []
for result in run_sql(sql, (lower, upper)):
count = result[1]
for id in result[2].split(","):
if id == '' or not int(id) in ids:
count -= 1
action_dates.append((result[0], count))
return _get_trend_from_actions(action_dates, 0, args['t_start'],
args['t_end'], args['granularity'], args['t_format'])
def get_keyevent_trend_number_of_loans(args, return_sql=False):
"""
Returns the number of loans carried out
during the given timestamp range.
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
return _get_keyevent_trend(args, _get_sql_query("loaned_on",
args["granularity"], "crcLOAN"), return_sql=return_sql)
def get_keyevent_trend_web_submissions(args, return_sql=False):
"""
Returns the quantity of websubmissions in Invenio for
the given timestamp range.
@param args['doctype']: A doctype name
@type args['doctype']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
if args['doctype'] == 'all':
sql = _get_sql_query("cd", args["granularity"], "sbmSUBMISSIONS",
conditions="action='SBI' AND status='finished'")
res = _get_keyevent_trend(args, sql, return_sql=return_sql)
else:
sql = _get_sql_query("cd", args["granularity"], "sbmSUBMISSIONS",
conditions="doctype=%s AND action='SBI' AND status='finished'")
res = _get_keyevent_trend(args, sql, extra_param=[args['doctype']],
return_sql=return_sql)
return res
def get_keyevent_loan_statistics(args, return_sql=False):
"""
Data:
- Number of documents (=records) loaned
- Number of items loaned on the total number of items
- Number of items never loaned on the total number of items
- Average time between the date of the record creation and the date of the first loan
Filter by
- in a specified time span
- by UDC (see MARC field 080__a - list to be submitted)
- by item status (available, missing)
- by date of publication (MARC field 260__c)
- by date of the record creation in the database
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['udc']: MARC field 080__a
@type args['udc']: str
@param args['item_status']: available, missing...
@type args['item_status']: str
@param args['publication_date']: MARC field 260__c
@type args['publication_date']: str
@param args['creation_date']: date of the record creation in the database
@type args['creation_date']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# collect action dates
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcLOAN l "
sql_where = "WHERE loaned_on > %s AND loaned_on < %s "
param = [lower, upper]
if 'udc' in args and args['udc'] != '':
sql_where += "AND l." + _check_udc_value_where()
param.append(_get_udc_truncated(args['udc']))
if 'item_status' in args and args['item_status'] != '':
sql_from += ", crcITEM i "
sql_where += "AND l.barcode = i.barcode AND i.status = %s "
param.append(args['item_status'])
if 'publication_date' in args and args['publication_date'] != '':
sql_where += "AND l.id_bibrec IN ( SELECT brb.id_bibrec \
FROM bibrec_bib26x brb, bib26x b WHERE brb.id_bibxxx = b.id AND tag='260__c' \
AND value LIKE %s)"
param.append('%%%s%%' % args['publication_date'])
if 'creation_date' in args and args['creation_date'] != '':
sql_from += ", bibrec br "
sql_where += "AND br.id=l.id_bibrec AND br.creation_date LIKE %s "
param.append('%%%s%%' % args['creation_date'])
param = tuple(param)
# Number of loans:
loans_sql = "SELECT COUNT(DISTINCT l.id_bibrec) " + sql_from + sql_where
items_loaned_sql = "SELECT COUNT(DISTINCT l.barcode) " + sql_from + sql_where
# Only the CERN site wants the items of the collection "Books & Proceedings"
if CFG_CERN_SITE:
items_in_book_coll = _get_collection_recids_for_sql_query("Books & Proceedings")
if items_in_book_coll == "":
total_items_sql = 0
else:
total_items_sql = "SELECT COUNT(*) FROM crcITEM WHERE %s" % \
items_in_book_coll
else: # The rest take all the items
total_items_sql = "SELECT COUNT(*) FROM crcITEM"
# Average time between the date of the record creation and the date of the first loan
avg_sql = "SELECT AVG(DATEDIFF(loaned_on, br.creation_date)) " + sql_from
if not ('creation_date' in args and args['creation_date'] != ''):
avg_sql += ", bibrec br "
avg_sql += sql_where
if not ('creation_date' in args and args['creation_date'] != ''):
avg_sql += "AND br.id=l.id_bibrec "
if return_sql:
return "<ol><li>%s</li><li>Items loaned * 100 / Number of items <ul><li>\
Items loaned: %s </li><li>Number of items: %s</li></ul></li><li>100 - Items \
loaned on total number of items</li><li>%s</li></ol>" % \
(loans_sql % param, items_loaned_sql % param, total_items_sql, avg_sql % param)
loans = run_sql(loans_sql, param)[0][0]
items_loaned = run_sql(items_loaned_sql, param)[0][0]
if total_items_sql:
total_items = run_sql(total_items_sql)[0][0]
else:
total_items = 0
if total_items == 0:
loaned_on_total = 0
never_loaned_on_total = 0
else:
# Number of items loaned on the total number of items:
loaned_on_total = float(items_loaned) * 100 / float(total_items)
# Number of items never loaned on the total number of items:
never_loaned_on_total = 100L - loaned_on_total
avg = run_sql(avg_sql, param)[0][0]
if avg:
avg = float(avg)
else:
avg = 0L
return ((loans, ), (loaned_on_total, ), (never_loaned_on_total, ), (avg, ))
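The two percentages returned above are complements of the loaned-items ratio, with a zero-items guard. A standalone sketch of that arithmetic (the `loan_coverage` helper is hypothetical, not part of this module):

```python
def loan_coverage(items_loaned, total_items):
    """Percentage of items loaned vs. never loaned; returns (0.0, 0.0)
    when there are no items, to avoid division by zero."""
    if total_items == 0:
        return 0.0, 0.0
    loaned = float(items_loaned) * 100 / float(total_items)
    return loaned, 100 - loaned

print(loan_coverage(25, 200))  # (12.5, 87.5)
```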
def get_keyevent_loan_lists(args, return_sql=False, limit=50):
"""
Lists:
- List of documents (= records) never loaned
- List of most loaned documents (columns: number of loans,
number of copies and the creation date of the record, in
order to calculate the number of loans by copy), sorted
by decreasing order (50 items)
Filter by
- in a specified time span
- by UDC (see MARC field 080__a - list to be submitted)
- by loan period (4 week loan, one week loan...)
- by a certain number of loans
- by date of publication (MARC field 260__c)
- by date of the record creation in the database
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['udc']: MARC field 080__a
@type args['udc']: str
@param args['loan_period']: 4 week loan, one week loan...
@type args['loan_period']: str
@param args['min_loan']: minimum number of loans
@type args['min_loan']: int
@param args['max_loan']: maximum number of loans
@type args['max_loan']: int
@param args['publication_date']: MARC field 260__c
@type args['publication_date']: str
@param args['creation_date']: date of the record creation in the database
@type args['creation_date']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_where = []
param = []
sql_from = ""
if 'udc' in args and args['udc'] != '':
sql_where.append("i." + _check_udc_value_where())
param.append(_get_udc_truncated(args['udc']))
if 'loan_period' in args and args['loan_period'] != '':
sql_where.append("loan_period = %s")
param.append(args['loan_period'])
if 'publication_date' in args and args['publication_date'] != '':
sql_where.append("i.id_bibrec IN ( SELECT brb.id_bibrec \
FROM bibrec_bib26x brb, bib26x b WHERE brb.id_bibxxx = b.id AND tag='260__c' \
AND value LIKE %s)")
param.append('%%%s%%' % args['publication_date'])
if 'creation_date' in args and args['creation_date'] != '':
sql_from += ", bibrec br"
sql_where.append("br.id=i.id_bibrec AND br.creation_date LIKE %s")
param.append('%%%s%%' % args['creation_date'])
if sql_where:
sql_where = "WHERE %s AND" % " AND ".join(sql_where)
else:
sql_where = "WHERE"
param = tuple(param + [lower, upper])
# SQL for both queries
check_num_loans = "HAVING "
if 'min_loans' in args and args['min_loans'] != '':
check_num_loans += "COUNT(*) >= %s" % args['min_loans']
if 'max_loans' in args and args['max_loans'] != '' and args['max_loans'] != 0:
if check_num_loans != "HAVING ":
check_num_loans += " AND "
check_num_loans += "COUNT(*) <= %s" % args['max_loans']
# Optimized to get all the data in only one query (not call get_fieldvalues several times)
mldocs_sql = "SELECT i.id_bibrec, COUNT(*) \
FROM crcLOAN l, crcITEM i%s %s l.barcode=i.barcode AND type = 'normal' AND \
loaned_on > %%s AND loaned_on < %%s GROUP BY i.id_bibrec %s" % \
(sql_from, sql_where, check_num_loans)
limit_n = ""
if limit > 0:
limit_n = "LIMIT %d" % limit
nldocs_sql = "SELECT id_bibrec, COUNT(*) FROM crcITEM i%s %s \
barcode NOT IN (SELECT barcode FROM crcLOAN WHERE loaned_on > %%s AND \
loaned_on < %%s AND type = 'normal') GROUP BY id_bibrec ORDER BY COUNT(*) DESC %s" % \
(sql_from, sql_where, limit_n)
items_sql = "SELECT id_bibrec, COUNT(*) items FROM crcITEM GROUP BY id_bibrec"
creation_date_sql = "SELECT creation_date FROM bibrec WHERE id=%s"
authors_sql = "SELECT bx.value FROM bib10x bx, bibrec_bib10x bibx \
WHERE bx.id = bibx.id_bibxxx AND bx.tag LIKE '100__a' AND bibx.id_bibrec=%s"
title_sql = "SELECT GROUP_CONCAT(bx.value SEPARATOR ' ') value FROM bib24x bx, bibrec_bib24x bibx \
WHERE bx.id = bibx.id_bibxxx AND bx.tag LIKE %s AND bibx.id_bibrec=%s GROUP BY bibx.id_bibrec"
edition_sql = "SELECT bx.value FROM bib25x bx, bibrec_bib25x AS bibx \
WHERE bx.id = bibx.id_bibxxx AND bx.tag LIKE '250__a' AND bibx.id_bibrec=%s"
if return_sql:
return "Most loaned: %s<br />Never loaned: %s" % \
(mldocs_sql % param, nldocs_sql % param)
mldocs = run_sql(mldocs_sql, param)
items = dict(run_sql(items_sql))
order_m = []
for mldoc in mldocs:
order_m.append([mldoc[0], mldoc[1], items[mldoc[0]], \
float(mldoc[1]) / float(items[mldoc[0]])])
order_m = sorted(order_m, key=itemgetter(3))
order_m.reverse()
# Check limit values
if limit > 0:
order_m = order_m[:limit]
res = [("", "Title", "Author", "Edition", "Number of loans",
"Number of copies", "Date of creation of the record")]
for mldoc in order_m:
res.append(("Most loaned documents",
_check_empty_value(run_sql(title_sql, ('245__%%', mldoc[0], ))),
_check_empty_value(run_sql(authors_sql, (mldoc[0], ))),
_check_empty_value(run_sql(edition_sql, (mldoc[0], ))),
mldoc[1], mldoc[2],
_check_empty_value(run_sql(creation_date_sql, (mldoc[0], )))))
nldocs = run_sql(nldocs_sql, param)
for nldoc in nldocs:
res.append(("Not loaned documents",
_check_empty_value(run_sql(title_sql, ('245__%%', nldoc[0], ))),
_check_empty_value(run_sql(authors_sql, (nldoc[0], ))),
_check_empty_value(run_sql(edition_sql, (nldoc[0], ))),
0, items[nldoc[0]],
_check_empty_value(run_sql(creation_date_sql, (nldoc[0], )))))
# nldocs = run_sql(nldocs_sql, param_n)
return (res)
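The most-loaned ranking above sorts by loans-per-copy (the fourth column of each row) and then reverses. The same ordering can be expressed in one step, shown here on illustrative rows of `[recid, loans, copies, loans_per_copy]`:

```python
from operator import itemgetter

rows = [[1, 10, 2, 5.0], [2, 6, 1, 6.0], [3, 9, 3, 3.0]]

# Descending sort on loans-per-copy, equivalent to sort-then-reverse
top = sorted(rows, key=itemgetter(3), reverse=True)
print([r[0] for r in top])  # [2, 1, 3]
```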
def get_keyevent_renewals_lists(args, return_sql=False, limit=50):
"""
Lists:
- List of most renewed items stored by decreasing order (50 items)
Filter by
- in a specified time span
- by UDC (see MARC field 080__a - list to be submitted)
- by collection
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['udc']: MARC field 080__a
@type args['udc']: str
@param args['collection']: collection of the record
@type args['collection']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcLOAN l, crcITEM i "
sql_where = "WHERE loaned_on > %s AND loaned_on < %s AND i.barcode = l.barcode "
param = [lower, upper]
if 'udc' in args and args['udc'] != '':
sql_where += "AND l." + _check_udc_value_where()
param.append(_get_udc_truncated(args['udc']))
filter_coll = False
if 'collection' in args and args['collection'] != '':
filter_coll = True
recid_list = get_collection_reclist(args['collection'])
param = tuple(param)
if limit > 0:
limit = "LIMIT %d" % limit
else:
limit = ""
sql = "SELECT i.id_bibrec, SUM(number_of_renewals) %s %s \
GROUP BY i.id_bibrec ORDER BY SUM(number_of_renewals) DESC %s" \
% (sql_from, sql_where, limit)
if return_sql:
return sql % param
# Results:
res = [("Title", "Author", "Edition", "Number of renewals")]
for rec, renewals in run_sql(sql, param):
if filter_coll and rec not in recid_list:
continue
author = get_fieldvalues(rec, "100__a")
if len(author) > 0:
author = author[0]
else:
author = ""
edition = get_fieldvalues(rec, "250__a")
if len(edition) > 0:
edition = edition[0]
else:
edition = ""
res.append((book_title_from_MARC(rec), author, edition, int(renewals)))
return (res)
def get_keyevent_returns_table(args, return_sql=False):
"""
Data:
- Number of overdue returns in a timespan
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
# Overdue returns:
sql = "SELECT COUNT(*) FROM crcLOAN l WHERE loaned_on > %s AND loaned_on < %s AND \
due_date < NOW() AND (returned_on IS NULL OR returned_on > due_date)"
if return_sql:
return sql % (lower, upper)
return ((run_sql(sql, (lower, upper))[0][0], ), )
def get_keyevent_trend_returns_percentage(args, return_sql=False):
"""
Returns the number of overdue returns and the total number of returns
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# SQL to determine overdue returns:
overdue = _get_sql_query("due_date", args["granularity"], "crcLOAN",
conditions="due_date < NOW() AND due_date IS NOT NULL \
AND (returned_on IS NULL OR returned_on > due_date)",
dates_range_param="loaned_on")
# SQL to determine all returns:
total = _get_sql_query("due_date", args["granularity"], "crcLOAN",
conditions="due_date < NOW() AND due_date IS NOT NULL",
dates_range_param="loaned_on")
# Compute the trend for both types
o_trend = _get_keyevent_trend(args, overdue,
return_sql=return_sql, sql_text="Overdue: %s")
t_trend = _get_keyevent_trend(args, total,
return_sql=return_sql, sql_text="Total: %s")
# Assemble, according to return type
if return_sql:
return "%s <br /> %s" % (o_trend, t_trend)
return [(o_trend[i][0], (o_trend[i][1], t_trend[i][1]))
for i in range(len(o_trend))]
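# The pairing step above can be sketched with plain data (hypothetical
# counts; each trend row is a (time bucket, count) pair):

```python
# Merge an overdue trend and a total trend into (time, (overdue, total)) rows.
overdue_trend = [("2012-01", 3), ("2012-02", 5)]
total_trend = [("2012-01", 10), ("2012-02", 20)]
merged = [(o[0], (o[1], t[1])) for o, t in zip(overdue_trend, total_trend)]
```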
def get_keyevent_ill_requests_statistics(args, return_sql=False):
"""
Data:
- Number of ILL requests
- Number of satisfied ILL requests 2 weeks after the date of request
creation on a timespan
- Average time between the date and the hour of the ill request
date and the date and the hour of the delivery item to the user
on a timespan
- Average time between the date and the hour the ILL request
was sent to the supplier and the date and hour of the
delivery item on a timespan
Filter by
- in a specified time span
- by type of document (book or article)
- by status of the request (= new, sent, etc.)
- by supplier
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['doctype']: type of document (book or article)
@type args['doctype']: str
@param args['status']: status of the request (= new, sent, etc.)
@type args['status']: str
@param args['supplier']: supplier
@type args['supplier']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcILLREQUEST ill "
sql_where = "WHERE period_of_interest_from > %s AND period_of_interest_from < %s "
param = [lower, upper]
if 'doctype' in args and args['doctype'] != '':
sql_where += "AND ill.request_type=%s"
param.append(args['doctype'])
if 'status' in args and args['status'] != '':
sql_where += "AND ill.status = %s "
param.append(args['status'])
else:
sql_where += "AND ill.status != %s "
param.append(CFG_BIBCIRCULATION_ILL_STATUS_CANCELLED)
if 'supplier' in args and args['supplier'] != '':
sql_from += ", crcLIBRARY lib "
sql_where += "AND lib.id=ill.id_crcLIBRARY AND lib.name=%s "
param.append(args['supplier'])
param = tuple(param)
requests_sql = "SELECT COUNT(*) %s %s" % (sql_from, sql_where)
satrequests_sql = "SELECT COUNT(*) %s %s \
AND arrival_date IS NOT NULL AND \
DATEDIFF(arrival_date, period_of_interest_from) < 14 " % (sql_from, sql_where)
avgdel_sql = "SELECT AVG(TIMESTAMPDIFF(DAY, period_of_interest_from, arrival_date)) %s %s \
AND arrival_date IS NOT NULL" % (sql_from, sql_where)
avgsup_sql = "SELECT AVG(TIMESTAMPDIFF(DAY, request_date, arrival_date)) %s %s \
AND arrival_date IS NOT NULL \
AND request_date IS NOT NULL" % (sql_from, sql_where)
if return_sql:
return "<ol><li>%s</li><li>%s</li><li>%s</li><li>%s</li></ol>" % \
(requests_sql % param, satrequests_sql % param,
avgdel_sql % param, avgsup_sql % param)
# Number of requests:
requests = run_sql(requests_sql, param)[0][0]
# Number of satisfied ILL requests 2 weeks after the date of request creation:
satrequests = run_sql(satrequests_sql, param)[0][0]
# Average time between the date and the hour of the ill request date and
# the date and the hour of the delivery item to the user
avgdel = run_sql(avgdel_sql, param)[0][0]
if avgdel:
avgdel = float(avgdel)
else:
avgdel = 0
# Average time between the date and the hour the ILL request was sent to
# the supplier and the date and hour of the delivery item
avgsup = run_sql(avgsup_sql, param)[0][0]
if avgsup:
avgsup = float(avgsup)
else:
avgsup = 0
return ((requests, ), (satrequests, ), (avgdel, ), (avgsup, ))
def get_keyevent_ill_requests_lists(args, return_sql=False, limit=50):
"""
Lists:
- List of ILL requests
Filter by
- in a specified time span
- by type of request (article or book)
- by supplier
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['doctype']: type of request (article or book)
@type args['doctype']: str
@param args['supplier']: supplier
@type args['supplier']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcILLREQUEST ill "
sql_where = "WHERE status != '%s' AND request_date > %%s AND request_date < %%s " \
% CFG_BIBCIRCULATION_ITEM_STATUS_CANCELLED
param = [lower, upper]
if 'doctype' in args and args['doctype'] != '':
sql_where += "AND ill.request_type=%s "
param.append(args['doctype'])
if 'supplier' in args and args['supplier'] != '':
sql_from += ", crcLIBRARY lib "
sql_where += "AND lib.id=ill.id_crcLIBRARY AND lib.name=%s "
param.append(args['supplier'])
param = tuple(param)
if limit > 0:
limit = "LIMIT %d" % limit
else:
limit = ""
sql = "SELECT ill.id, item_info %s %s %s" % (sql_from, sql_where, limit)
if return_sql:
return sql % param
# Results:
res = [("Id", "Title", "Author", "Edition")]
for req_id, item_info in run_sql(sql, param):
item_info = eval(item_info)
try:
res.append((req_id, item_info['title'], item_info['authors'], item_info['edition']))
except KeyError:
pass
return res
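# The item_info column above stores the repr() of a dict and is parsed with
# eval(); a minimal, safer sketch of the same parsing, assuming the stored
# value is a plain Python literal (the sample string below is hypothetical):

```python
import ast

def parse_item_info(raw):
    """Parse a stored item_info literal into a dict; return {} when malformed."""
    try:
        info = ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return {}
    return info if isinstance(info, dict) else {}

sample = "{'title': 'A Title', 'authors': 'Some Author', 'edition': '2nd'}"
```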
def get_keyevent_trend_satisfied_ill_requests_percentage(args, return_sql=False):
"""
Returns the number of satisfied ILL requests 2 weeks after the date of request
creation and the total number of ILL requests
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['doctype']: type of document (book or article)
@type args['doctype']: str
@param args['status']: status of the request (= new, sent, etc.)
@type args['status']: str
@param args['supplier']: supplier
@type args['supplier']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
sql_from = "crcILLREQUEST ill "
sql_where = ""
param = []
if 'doctype' in args and args['doctype'] != '':
sql_where += "AND ill.request_type=%s"
param.append(args['doctype'])
if 'status' in args and args['status'] != '':
sql_where += "AND ill.status = %s "
param.append(args['status'])
else:
sql_where += "AND ill.status != %s "
param.append(CFG_BIBCIRCULATION_ILL_STATUS_CANCELLED)
if 'supplier' in args and args['supplier'] != '':
sql_from += ", crcLIBRARY lib "
sql_where += "AND lib.id=ill.id_crcLIBRARY AND lib.name=%s "
param.append(args['supplier'])
# SQL to determine satisfied ILL requests:
satisfied = _get_sql_query("request_date", args["granularity"], sql_from,
conditions="ADDDATE(request_date, 14) < NOW() AND \
(arrival_date IS NULL OR arrival_date < ADDDATE(request_date, 14)) " + sql_where)
# SQL to determine all ILL requests:
total = _get_sql_query("request_date", args["granularity"], sql_from,
conditions="ADDDATE(request_date, 14) < NOW() "+ sql_where)
# Compute the trend for both types
s_trend = _get_keyevent_trend(args, satisfied, extra_param=param,
return_sql=return_sql, sql_text="Satisfied: %s")
t_trend = _get_keyevent_trend(args, total, extra_param=param,
return_sql=return_sql, sql_text="Total: %s")
# Assemble, according to return type
if return_sql:
return "%s <br /> %s" % (s_trend, t_trend)
return [(s_trend[i][0], (s_trend[i][1], t_trend[i][1]))
for i in range(len(s_trend))]
def get_keyevent_items_statistics(args, return_sql=False):
"""
Data:
- The total number of items
- Total number of new items added in last year
Filter by
- in a specified time span
- by collection
- by UDC (see MARC field 080__a - list to be submitted)
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['udc']: MARC field 080__a
@type args['udc']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcITEM i "
sql_where = "WHERE "
param = []
if 'udc' in args and args['udc'] != '':
sql_where += "i." + _check_udc_value_where()
param.append(_get_udc_truncated(args['udc']))
# Number of items:
if sql_where == "WHERE ":
sql_where = ""
items_sql = "SELECT COUNT(i.id_bibrec) %s %s" % (sql_from, sql_where)
# Number of new items:
if sql_where == "":
sql_where = "WHERE creation_date > %s AND creation_date < %s "
else:
sql_where += " AND creation_date > %s AND creation_date < %s "
new_items_sql = "SELECT COUNT(i.id_bibrec) %s %s" % (sql_from, sql_where)
if return_sql:
return "Total: %s <br />New: %s" % (items_sql % tuple(param), new_items_sql % tuple(param + [lower, upper]))
return ((run_sql(items_sql, tuple(param))[0][0], ), (run_sql(new_items_sql, tuple(param + [lower, upper]))[0][0], ))
def get_keyevent_items_lists(args, return_sql=False, limit=50):
"""
Lists:
- The list of items
Filter by
- by library (=physical location of the item)
- by status (=on loan, available, requested, missing...)
@param args['library']: physical location of the item
@type args['library']: str
@param args['status']: on loan, available, requested, missing...
@type args['status']: str
"""
sql_from = "FROM crcITEM i "
sql_where = "WHERE "
param = []
if 'library' in args and args['library'] != '':
sql_from += ", crcLIBRARY li "
sql_where += "li.id=i.id_crcLIBRARY AND li.name=%s "
param.append(args['library'])
if 'status' in args and args['status'] != '':
if sql_where != "WHERE ":
sql_where += "AND "
sql_where += "i.status = %s "
param.append(args['status'])
param = tuple(param)
# Results:
res = [("Title", "Author", "Edition", "Barcode", "Publication date")]
if sql_where == "WHERE ":
sql_where = ""
if limit > 0:
limit = "LIMIT %d" % limit
else:
limit = ""
sql = "SELECT i.barcode, i.id_bibrec %s %s %s" % (sql_from, sql_where, limit)
# return the SQL text before executing anything
if return_sql:
return sql % param
if len(param) == 0:
sqlres = run_sql(sql)
else:
sqlres = run_sql(sql, param)
for barcode, rec in sqlres:
author = get_fieldvalues(rec, "100__a")
if len(author) > 0:
author = author[0]
else:
author = ""
edition = get_fieldvalues(rec, "250__a")
if len(edition) > 0:
edition = edition[0]
else:
edition = ""
res.append((book_title_from_MARC(rec),
author, edition, barcode,
book_information_from_MARC(int(rec))[1]))
return res
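# The LIMIT handling above (reusing the `limit` name for the rendered clause)
# recurs in several of the list functions; the same pattern as a small,
# self-contained sketch (the helper name is hypothetical):

```python
def limit_clause(limit):
    """Render the optional SQL LIMIT fragment; an empty string disables it."""
    return "LIMIT %d" % limit if limit > 0 else ""
```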
def get_keyevent_loan_request_statistics(args, return_sql=False):
"""
Data:
- Number of hold requests, one week after the date of request creation
- Number of successful hold requests transactions
- Average time between the hold request date and the date of delivery document in a year
Filter by
- in a specified time span
- by item status (available, missing)
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['item_status']: available, missing...
@type args['item_status']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcLOANREQUEST lr "
sql_where = "WHERE request_date > %s AND request_date < %s "
param = [lower, upper]
if 'item_status' in args and args['item_status'] != '':
sql_from += ", crcITEM i "
sql_where += "AND lr.barcode = i.barcode AND i.status = %s "
param.append(args['item_status'])
param = tuple(param)
custom_table = get_customevent_table("loanrequest")
# Number of hold requests, one week after the date of request creation:
holds = "SELECT COUNT(*) %s, %s ws %s AND ws.request_id=lr.id AND \
DATEDIFF(ws.creation_time, lr.request_date) >= 7" % (sql_from, custom_table, sql_where)
# Number of successful hold request transactions
successful_holds = "SELECT COUNT(*) %s %s AND lr.status='%s'" % (sql_from, sql_where,
CFG_BIBCIRCULATION_REQUEST_STATUS_DONE)
# Average time between the hold request date and the date of delivery of the document in a year
avg_sql = "SELECT AVG(DATEDIFF(ws.creation_time, lr.request_date)) \
%s, %s ws %s AND ws.request_id=lr.id" % (sql_from, custom_table, sql_where)
if return_sql:
return "<ol><li>%s</li><li>%s</li><li>%s</li></ol>" % \
(holds % param, successful_holds % param, avg_sql % param)
avg = run_sql(avg_sql, param)[0][0]
# AVG() yields NULL (None) when no rows match; the former "if avg is int"
# test was always False and discarded the computed average
if avg is not None:
avg = int(avg)
else:
avg = 0
return ((run_sql(holds, param)[0][0], ),
(run_sql(successful_holds, param)[0][0], ), (avg, ))
def get_keyevent_loan_request_lists(args, return_sql=False, limit=50):
"""
Lists:
- List of the most requested items
Filter by
- in a specified time span
- by UDC (see MARC field 080__a - list to be submitted)
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['udc']: MARC field 080__a
@type args['udc']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from = "FROM crcLOANREQUEST lr "
sql_where = "WHERE request_date > %s AND request_date < %s "
param = [lower, upper]
if 'udc' in args and args['udc'] != '':
sql_where += "AND lr." + _check_udc_value_where()
param.append(_get_udc_truncated(args['udc']))
if limit > 0:
limit = "LIMIT %d" % limit
else:
limit = ""
sql = "SELECT lr.barcode %s %s GROUP BY barcode \
ORDER BY COUNT(*) DESC %s" % (sql_from, sql_where, limit)
if return_sql:
return sql
res = [("Title", "Author", "Edition", "Barcode")]
# Most requested items:
for barcode in run_sql(sql, tuple(param)):
rec = get_id_bibrec(barcode[0])
author = get_fieldvalues(rec, "100__a")
if len(author) > 0:
author = author[0]
else:
author = ""
edition = get_fieldvalues(rec, "250__a")
if len(edition) > 0:
edition = edition[0]
else:
edition = ""
res.append((book_title_from_MARC(rec), author, edition, barcode[0]))
return res
def get_keyevent_user_statistics(args, return_sql=False):
"""
Data:
- Total number of active users (to be defined = at least one transaction in the past year)
Filter by
- in a specified time span
- by registration date
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
sql_from_ill = "FROM crcILLREQUEST ill "
sql_from_loan = "FROM crcLOAN l "
sql_where_ill = "WHERE request_date > %s AND request_date < %s "
sql_where_loan = "WHERE loaned_on > %s AND loaned_on < %s "
param = (lower, upper, lower, upper)
# Total number of active users:
users = "SELECT COUNT(DISTINCT user) FROM ((SELECT id_crcBORROWER user %s %s) \
UNION (SELECT id_crcBORROWER user %s %s)) res" % \
(sql_from_ill, sql_where_ill, sql_from_loan, sql_where_loan)
if return_sql:
return users % param
return ((run_sql(users, param)[0][0], ), )
def get_keyevent_user_lists(args, return_sql=False, limit=50):
"""
Lists:
- List of most intensive users (ILL requests + Loan)
Filter by
- in a specified time span
- by registration date
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
param = (lower, upper, lower, upper)
if limit > 0:
limit = "LIMIT %d" % limit
else:
limit = ""
sql = "SELECT user, SUM(trans) FROM \
((SELECT id_crcBORROWER user, COUNT(*) trans FROM crcILLREQUEST ill \
WHERE request_date > %%s AND request_date < %%s GROUP BY id_crcBORROWER) UNION \
(SELECT id_crcBORROWER user, COUNT(*) trans FROM crcLOAN l WHERE loaned_on > %%s AND \
loaned_on < %%s GROUP BY id_crcBORROWER)) res GROUP BY user ORDER BY SUM(trans) DESC \
%s" % (limit)
if return_sql:
return sql % param
res = [("Name", "Address", "Mailbox", "E-mail", "Number of transactions")]
# List of most intensive users (ILL requests + Loan):
for borrower_id, trans in run_sql(sql, param):
name, address, mailbox, email = get_borrower_data(borrower_id)
res.append((name, address, mailbox, email, int(trans)))
return res
# KEY EVENT SNAPSHOT SECTION
def get_keyevent_snapshot_uptime_cmd():
"""
A specific implementation of get_current_event().
@return: The std-out from the UNIX command 'uptime'.
@type: str
"""
return _run_cmd('uptime').strip().replace(' ', ' ')
def get_keyevent_snapshot_apache_processes():
"""
A specific implementation of get_current_event().
@return: The number of running Apache processes (root + children),
as counted from the process table.
@type: str
"""
# The number of Apache processes (root+children)
return _run_cmd('ps -e | grep apache2 | grep -v grep | wc -l')
def get_keyevent_snapshot_bibsched_status():
"""
A specific implementation of get_current_event().
@return: Information about the number of tasks in the different status modes.
@type: [(str, int)]
"""
sql = "SELECT status, COUNT(status) FROM schTASK GROUP BY status"
return [(x[0], int(x[1])) for x in run_sql(sql)]
def get_keyevent_snapshot_sessions():
"""
A specific implementation of get_current_event().
@return: The current number of website visitors (guests, logged in)
@type: (int, int)
"""
# SQL to retrieve sessions in the Guests
sql = "SELECT COUNT(session_expiry) " + \
"FROM session INNER JOIN user ON uid=id " + \
"WHERE email = '' AND " + \
"session_expiry-%d < unix_timestamp() AND " \
% WEBSTAT_SESSION_LENGTH + \
"unix_timestamp() < session_expiry"
guests = run_sql(sql)[0][0]
# SQL to retrieve sessions in the Logged in users
sql = "SELECT COUNT(session_expiry) " + \
"FROM session INNER JOIN user ON uid=id " + \
"WHERE email <> '' AND " + \
"session_expiry-%d < unix_timestamp() AND " \
% WEBSTAT_SESSION_LENGTH + \
"unix_timestamp() < session_expiry"
logged_ins = run_sql(sql)[0][0]
# Assemble, according to return type
return (guests, logged_ins)
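# The SQL above counts sessions whose expiry falls within WEBSTAT_SESSION_LENGTH
# seconds of the current time; the same predicate in plain Python (the length
# value and helper name below are hypothetical stand-ins for illustration):

```python
WEBSTAT_SESSION_LENGTH_DEMO = 3600  # hypothetical session length in seconds

def is_active_session(session_expiry, now):
    """Mirror the SQL window: expiry - length < now < expiry."""
    return session_expiry - WEBSTAT_SESSION_LENGTH_DEMO < now < session_expiry
```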
def get_keyevent_bibcirculation_report(freq='yearly'):
"""
Monthly and yearly report with the total number of circulation
transactions (loans, renewals, returns, ILL requests, hold request).
@param freq: yearly or monthly
@type freq: str
@return: loans, renewals, returns, ILL requests, hold request
@type: (int, int, int, int, int)
"""
if freq == 'monthly':
datefrom = datetime.date.today().strftime("%Y-%m-01 00:00:00")
else: #yearly
datefrom = datetime.date.today().strftime("%Y-01-01 00:00:00")
loans, renewals = run_sql("SELECT COUNT(*), \
SUM(number_of_renewals) \
FROM crcLOAN WHERE loaned_on > %s", (datefrom, ))[0]
returns = run_sql("SELECT COUNT(*) FROM crcLOAN \
WHERE returned_on!='0000-00-00 00:00:00' and loaned_on > %s", (datefrom, ))[0][0]
illrequests = run_sql("SELECT COUNT(*) FROM crcILLREQUEST WHERE request_date > %s",
(datefrom, ))[0][0]
holdrequest = run_sql("SELECT COUNT(*) FROM crcLOANREQUEST WHERE request_date > %s",
(datefrom, ))[0][0]
return (loans, renewals, returns, illrequests, holdrequest)
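# The start-of-period timestamp computed above can be sketched on its own;
# this version takes the date explicitly so the result is deterministic
# (the helper name is hypothetical):

```python
import datetime

def report_start(freq, today):
    """Start-of-period timestamp string ('monthly' or yearly) for the report."""
    if freq == 'monthly':
        return today.strftime("%Y-%m-01 00:00:00")
    return today.strftime("%Y-01-01 00:00:00")
```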
def get_last_updates():
"""
List date/time when the last updates were done (easy reading format).
@return: last indexing, last ranking, last sorting, last webcolling
@type: (datetime, datetime, datetime, datetime)
"""
try:
last_index = convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(run_sql('SELECT last_updated FROM idxINDEX WHERE \
name="global"')[0][0])))
last_rank = convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(run_sql('SELECT last_updated FROM rnkMETHOD ORDER BY \
last_updated DESC LIMIT 1')[0][0])))
last_sort = convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(run_sql('SELECT last_updated FROM bsrMETHODDATA ORDER BY \
last_updated DESC LIMIT 1')[0][0])))
file_coll_last_update = open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, 'r')
last_coll = convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(file_coll_last_update.read())))
file_coll_last_update.close()
# database not filled
except IndexError:
return ("", "", "", "")
return (last_index, last_rank, last_sort, last_coll)
def get_list_link(process, category=None):
"""
Builds the link for the list of records not indexed, ranked, sorted or
collected.
@param process: kind of process the records are waiting for (index, rank,
sort, collect)
@type process: str
@param category: specific sub-category of the process.
Index: global, collection, abstract, author, keyword,
reference, reportnumber, title, fulltext, year,
journal, collaboration, affiliation, exactauthor,
caption, firstauthor, exactfirstauthor, authorcount)
Rank: wrd, demo_jif, citation, citerank_citation_t,
citerank_pagerank_c, citerank_pagerank_t
Sort: latest first, title, author, report number,
most cited
Collect: Empty / None
@type category: str
@return: link text
@type: string
"""
if process == "index":
list_registers = run_sql('SELECT id FROM bibrec WHERE \
modification_date > (SELECT last_updated FROM \
idxINDEX WHERE name=%s)', (category,))
elif process == "rank":
list_registers = run_sql('SELECT id FROM bibrec WHERE \
modification_date > (SELECT last_updated FROM \
rnkMETHOD WHERE name=%s)', (category,))
elif process == "sort":
list_registers = run_sql('SELECT id FROM bibrec WHERE \
modification_date > (SELECT last_updated FROM \
bsrMETHODDATA WHERE id_bsrMETHOD=(SELECT id \
FROM bsrMETHOD WHERE name=%s))', (category,))
elif process == "collect":
file_coll_last_update = open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, 'r')
coll_last_update = file_coll_last_update.read()
file_coll_last_update.close()
list_registers = run_sql('SELECT id FROM bibrec WHERE \
modification_date > %s', (coll_last_update,))
# build the link
if list_registers == ():
return "Up to date"
link = '<a href="' + CFG_SITE_URL + '/search?p='
for register in list_registers:
link += 'recid%3A' + str(register[0]) + '+or+'
# delete the last '+or+'
link = link[:len(link)-4]
link += '">' + str(len(list_registers)) + '</a>'
return link
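# The link body above joins url-encoded 'recid:N' terms with '+or+', trimming
# the trailing separator by hand; the same join in one pass (hypothetical
# helper name, for illustration only):

```python
def recid_search_pattern(recids):
    """Join record ids into the url-encoded search pattern used above."""
    return '+or+'.join('recid%%3A%d' % r for r in recids)
```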
def get_search_link(record_id):
"""
Auxiliary helper that builds the direct link for a given record.
@param record_id: record's id number
@type record_id: int
@return: link text
@type: string
"""
link = '<a href="' + CFG_SITE_URL + '/record/' + \
str(record_id) + '">Record [' + str(record_id) + ']</a>'
return link
def get_ingestion_matching_records(request=None, limit=25):
"""
Fetches all the records matching a given pattern, orders them by last
modification date and returns the list.
@param request: requested pattern to match
@type request: str
@return: list of records matching a pattern,
(0,) if no request,
(-1,) if the request was invalid
@type: list
"""
if request is None or request == "":
return (0,)
try:
records = list(search_pattern(p=request))
except Exception:
# the requested pattern could not be parsed
return (-1,)
if records == []:
return records
# order by most recent modification date
query = 'SELECT id FROM bibrec WHERE '
for r in records:
query += 'id="' + str(r) + '" OR '
query = query[:len(query)-4]
query += ' ORDER BY modification_date DESC LIMIT %s'
list_records = run_sql(query, (limit,))
final_list = []
for lr in list_records:
final_list.append(lr[0])
return final_list
def get_record_ingestion_status(record_id):
"""
Returns the amount of ingestion methods not updated yet to a given record.
If 0, the record is up to date.
@param record_id: record id number
@type record_id: int
@return: number of methods not updated for the record
@type: int
"""
counter = 0
counter += run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT last_updated FROM \
idxINDEX WHERE name="global")', (record_id, ))[0][0]
counter += run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT last_updated FROM \
rnkMETHOD ORDER BY last_updated DESC LIMIT 1)', \
(record_id, ))[0][0]
counter += run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT last_updated FROM \
bsrMETHODDATA ORDER BY last_updated DESC LIMIT 1)', \
(record_id, ))[0][0]
file_coll_last_update = open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, 'r')
last_coll = file_coll_last_update.read()
file_coll_last_update.close()
counter += run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND \
modification_date >\
%s', (record_id, last_coll,))[0][0]
return counter
def get_specific_ingestion_status(record_id, process, method=None):
"""
Returns whether a record is or not up to date for a given
process and method.
@param record_id: identification number of the record
@type record_id: int
@param process: kind of process the records may be waiting for (index,
rank, sort, collect)
@type process: str
@param method: specific sub-method of the process.
Index: global, collection, abstract, author, keyword,
reference, reportnumber, title, fulltext, year,
journal, collaboration, affiliation, exactauthor,
caption, firstauthor, exactfirstauthor, authorcount
Rank: wrd, demo_jif, citation, citerank_citation_t,
citerank_pagerank_c, citerank_pagerank_t
Sort: latest first, title, author, report number,
most cited
Collect: Empty / None
@type method: str
@return: text: None if the record is up to date
Last time the method was updated if it is waiting
@type: date/time string
"""
exist = run_sql('SELECT COUNT(*) FROM bibrec WHERE id=%s', (record_id, ))
if exist[0][0] == 0:
return "REG not in DB"
if process == "index":
list_registers = run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT \
last_updated FROM idxINDEX WHERE name=%s)',
(record_id, method,))
last_time = run_sql ('SELECT last_updated FROM idxINDEX WHERE \
name=%s', (method,))[0][0]
elif process == "rank":
list_registers = run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT \
last_updated FROM rnkMETHOD WHERE name=%s)',
(record_id, method,))
last_time = run_sql ('SELECT last_updated FROM rnkMETHOD WHERE \
name=%s', (method,))[0][0]
elif process == "sort":
list_registers = run_sql('SELECT COUNT(*) FROM bibrec WHERE \
id=%s AND modification_date > (SELECT \
last_updated FROM bsrMETHODDATA WHERE \
id_bsrMETHOD=(SELECT id FROM bsrMETHOD \
WHERE name=%s))', (record_id, method,))
last_time = run_sql ('SELECT last_updated FROM bsrMETHODDATA WHERE \
id_bsrMETHOD=(SELECT id FROM bsrMETHOD \
WHERE name=%s)', (method,))[0][0]
elif process == "collect":
file_coll_last_update = open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, 'r')
last_time = file_coll_last_update.read()
file_coll_last_update.close()
list_registers = run_sql('SELECT COUNT(*) FROM bibrec WHERE id=%s \
AND modification_date > %s',
(record_id, last_time,))
# no results means the register is up to date
if list_registers[0][0] == 0:
return None
else:
return convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(last_time)))
def get_title_ingestion(record_id, last_modification):
"""
Auxiliary helper that builds a direct link for a given record, with its last
modification date.
@param record_id: id number of the record
@type record_id: string
@param last_modification: date/time of the last modification
@type last_modification: string
@return: link text
@type: string
"""
return '<h3><a href="%s/record/%s">Record [%s] last modification: %s</a></h3>' \
% (CFG_SITE_URL, record_id, record_id, last_modification)
def get_record_last_modification (record_id):
"""
Returns the date/time of the last modification made to a given record.
@param record_id: id number of the record
@type record_id: int
@return: date/time of the last modification
@type: string
"""
return convert_datestruct_to_dategui(convert_datetext_to_datestruct \
(str(run_sql('SELECT modification_date FROM bibrec \
WHERE id=%s', (record_id,))[0][0])))
def get_general_status():
"""
Returns an approximate count of ingestion processes not yet applied to new
or updated records, using the "global" category.
@return: number of processes not updated
@type: int
"""
return run_sql('SELECT COUNT(*) FROM bibrec WHERE \
modification_date > (SELECT last_updated FROM \
idxINDEX WHERE name="global")')[0][0]
# ERROR LOG STATS
def update_error_log_analyzer():
"""Creates split log files for today's errors"""
_run_cmd('bash %s/webstat -e -is' % CFG_BINDIR)
def get_invenio_error_log_ranking():
""" Returns the ranking of the errors in the invenio log"""
return _run_cmd('bash %s/webstat -e -ir' % CFG_BINDIR)
def get_invenio_last_n_errors(nerr):
"""Returns the last nerr errors in the invenio log (without details)"""
return _run_cmd('bash %s/webstat -e -il %d' % (CFG_BINDIR, nerr))
def get_invenio_error_details(error):
"""Returns the complete text of the invenio error."""
out = _run_cmd('bash %s/webstat -e -id %s' % (CFG_BINDIR, error))
return out
def get_apache_error_log_ranking():
""" Returns the ranking of the errors in the apache log"""
return _run_cmd('bash %s/webstat -e -ar' % CFG_BINDIR)
# CUSTOM EVENT SECTION
def get_customevent_trend(args):
"""
Returns trend data for a custom event over a given
timestamp range.
@param args['event_id']: The event id
@type args['event_id']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
@param args['cols']: Columns and their content to filter on; when absent
or empty, all columns are included
@type args['cols']: [ [ str, str ], ]
"""
# Get a MySQL friendly date
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
tbl_name = get_customevent_table(args['event_id'])
col_names = get_customevent_args(args['event_id'])
where = []
sql_param = [lower, upper]
for col_bool, col_title, col_content in args['cols']:
if not col_title in col_names:
continue
if col_content:
if col_bool == "" or not where:
where.append(wash_table_column_name(col_title))
elif col_bool == "and":
where.append("AND %s"
% wash_table_column_name(col_title))
elif col_bool == "or":
where.append("OR %s"
% wash_table_column_name(col_title))
elif col_bool == "and_not":
where.append("AND NOT %s"
% wash_table_column_name(col_title))
else:
continue
where.append(" LIKE %s")
sql_param.append("%" + col_content + "%")
sql = _get_sql_query("creation_time", args['granularity'], tbl_name, " ".join(where))
return _get_trend_from_actions(run_sql(sql, tuple(sql_param)), 0,
args['t_start'], args['t_end'],
args['granularity'], args['t_format'])
def get_customevent_dump(args):
"""
Similar to a get_event_trend implementation, but no refining (i.e. no
frequency handling) is carried out whatsoever. This is just a dump.
@param args['event_id']: The event id
@type args['event_id']: str
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
@param args['cols']: Columns and their content to filter on; when absent
or empty, all columns are included
@type args['cols']: [ [ str, str ], ]
"""
# Get a MySQL friendly date
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
# Get customevents
# events_list = [(creation_time, event, [arg1, arg2, ...]), ...]
event_list = []
event_cols = {}
for event_id, i in [(args['ids'][i], str(i))
for i in range(len(args['ids']))]:
# Get all the event arguments and creation times
tbl_name = get_customevent_table(event_id)
col_names = get_customevent_args(event_id)
sql_query = ["SELECT * FROM %s WHERE creation_time > '%s'" % (wash_table_column_name(tbl_name), lower)] # kwalitee: disable=sql
sql_query.append("AND creation_time < '%s'" % upper)
sql_param = []
for col_bool, col_title, col_content in args['cols' + i]:
if col_title not in col_names:
continue
if col_content:
if col_bool == "and" or col_bool == "":
sql_query.append("AND %s" % \
wash_table_column_name(col_title))
elif col_bool == "or":
sql_query.append("OR %s" % \
wash_table_column_name(col_title))
elif col_bool == "and_not":
sql_query.append("AND NOT %s" % \
wash_table_column_name(col_title))
else:
continue
sql_query.append(" LIKE %s")
sql_param.append("%" + col_content + "%")
sql_query.append("ORDER BY creation_time DESC")
sql = ' '.join(sql_query)
res = run_sql(sql, tuple(sql_param))
for row in res:
event_list.append((row[1], event_id, row[2:]))
# Get the event col names
try:
event_cols[event_id] = cPickle.loads(run_sql(
"SELECT cols FROM staEVENT WHERE id = %s",
(event_id, ))[0][0])
except TypeError:
event_cols[event_id] = ["Unnamed"]
event_list.sort()
output = []
for row in event_list:
temp = [row[1], row[0].strftime('%Y-%m-%d %H:%M:%S')]
arguments = ["%s: %s" % (event_cols[row[1]][i],
row[2][i]) for i in range(len(row[2]))]
temp.extend(arguments)
output.append(tuple(temp))
return output
def get_customevent_table(event_id):
"""
Helper function that, for a given event id, retrieves the corresponding
event table name.
"""
res = run_sql(
"SELECT CONCAT('staEVENT', number) FROM staEVENT WHERE id = %s", (event_id, ))
try:
return res[0][0]
except IndexError:
# No such event table
return None
def get_customevent_args(event_id):
"""
Helper function that, for a given event id, retrieves the corresponding
event argument (column) names.
"""
res = run_sql("SELECT cols FROM staEVENT WHERE id = %s", (event_id, ))
try:
if res[0][0]:
return cPickle.loads(res[0][0])
else:
return []
except IndexError:
# No such event table
return None
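The column names of a custom event are stored pickled in the staEVENT.cols field, as the unpickling above shows. A minimal, self-contained sketch of that round-trip (`decode_cols` is a hypothetical stand-in for what get_customevent_args does with the fetched blob; `pickle` stands in for `cPickle`):

```python
import pickle

# Columns as they would be stored when a custom event is registered.
cols = ["user", "collection", "referer"]
blob = pickle.dumps(cols)

def decode_cols(raw):
    """Unpickle the stored column list, treating NULL/empty as no columns."""
    if raw:
        return pickle.loads(raw)
    return []

assert decode_cols(blob) == ["user", "collection", "referer"]
assert decode_cols(None) == []
```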
# CUSTOM SUMMARY SECTION
def get_custom_summary_data(query, tag):
"""Returns the annual report data for the specified year
@param query: Search query to make customized report
@type query: str
@param tag: MARC tag for the output
@type tag: str
"""
# Check arguments
if tag == '':
tag = CFG_JOURNAL_TAG.replace("%", "p")
# First get records of the year
recids = perform_request_search(p=query, of="id", wl=0)
# Then return list by tag
pub = get_most_popular_field_values(recids, tag)
if len(pub) == 0:
return []
if CFG_CERN_SITE:
total = sum([x[1] for x in pub])
else:
others = 0
total = 0
first_other = -1
for elem in pub:
total += elem[1]
if elem[1] < 2:
if first_other == -1:
first_other = pub.index(elem)
others += elem[1]
if first_other != -1:
del pub[first_other:]
if others != 0:
pub.append(('Others', others))
pub.append(('TOTAL', total))
return pub
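The non-CERN branch above collapses every value occurring fewer than twice into a single 'Others' bucket and appends a grand total. A self-contained sketch of that aggregation (hypothetical `summarize` helper; assumes the input is already sorted by descending frequency, as get_most_popular_field_values returns it):

```python
def summarize(pub):
    """Collapse entries with count < 2 into 'Others' and append a TOTAL row."""
    pub = list(pub)
    others = 0
    total = 0
    first_other = -1
    for i, (_name, count) in enumerate(pub):
        total += count
        if count < 2:
            if first_other == -1:
                first_other = i  # first low-frequency entry; tail starts here
            others += count
    if first_other != -1:
        del pub[first_other:]
    if others != 0:
        pub.append(('Others', others))
    pub.append(('TOTAL', total))
    return pub

result = summarize([('Phys. Rev. D', 5), ('JHEP', 3), ('Nature', 1), ('Science', 1)])
assert result == [('Phys. Rev. D', 5), ('JHEP', 3), ('Others', 2), ('TOTAL', 10)]
```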
def create_custom_summary_graph(data, path, title):
"""
Creates a pie chart with the information from the custom summary and
saves it in the file specified by the path argument
"""
# If no input, we don't bother about anything
if len(data) == 0:
return
os.environ['HOME'] = CFG_TMPDIR
try:
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
except ImportError:
return
# make a square figure and axes
matplotlib.rcParams['font.size'] = 8
labels = [x[0] for x in data]
numb_elem = len(labels)
width = 6 + float(numb_elem) / 7
gfile = plt.figure(1, figsize=(width, 6))
plt.axes([0.1, 0.1, 4.2 / width, 0.7])
numb = [x[1] for x in data]
total = sum(numb)
fracs = [x * 100 / total for x in numb]
colors = []
random.seed()
for i in range(numb_elem):
col = 0.5 + float(i) / (float(numb_elem) * 2.0)
rand = random.random() / 2.0
if i % 3 == 0:
red = col
green = col + rand
blue = col - rand
if green > 1.0:
green = 1
elif i % 3 == 1:
red = col - rand
green = col
blue = col + rand
if blue > 1.0:
blue = 1
elif i % 3 == 2:
red = col + rand
green = col - rand
blue = col
if red > 1.0:
red = 1
colors.append((red, green, blue))
patches = plt.pie(fracs, colors=tuple(colors), labels=labels,
autopct='%1i%%', pctdistance=0.8, shadow=True)[0]
ttext = plt.title(title)
plt.setp(ttext, size='xx-large', color='b', family='monospace', weight='extra bold')
legend_keywords = {"prop": {"size": "small"}}
plt.figlegend(patches, labels, 'lower right', **legend_keywords)
plt.savefig(path)
plt.close(gfile)
# GRAPHER
def create_graph_trend(trend, path, settings):
"""
Creates a graph representation out of data produced from get_event_trend.
@param trend: The trend data
@type trend: [(str, str|int|(str|int,...))]
@param path: Where to store the graph
@type path: str
@param settings: Dictionary of graph parameters
@type settings: dict
"""
# If no input, we don't bother about anything
if not trend or len(trend) == 0:
return
# If no filename is given, we'll assume STD-out format and ASCII.
if path == '':
settings["format"] = 'asciiart'
if settings["format"] == 'asciiart':
create_graph_trend_ascii_art(trend, path, settings)
elif settings["format"] == 'gnuplot':
create_graph_trend_gnu_plot(trend, path, settings)
elif settings["format"] == "flot":
create_graph_trend_flot(trend, path, settings)
def create_graph_trend_ascii_art(trend, path, settings):
"""Creates the graph trend using ASCII art"""
out = ""
if settings["multiple"] is not None:
# Tokens that will represent the different data sets (maximum 16 sets)
# Set index (=100) to the biggest of the histogram sums
index = max([sum(x[1]) for x in trend])
# Print legend box
out += "Legend: %s\n\n" % ", ".join(["%s (%s)" % x
for x in zip(settings["multiple"], WEBSTAT_GRAPH_TOKENS)])
else:
index = max([x[1] for x in trend])
width = 82
# Figure out the max length of the xtics, in order to left align
xtic_max_len = max([len(_to_datetime(x[0]).strftime(
settings["xtic_format"])) for x in trend])
for row in trend:
# Print the xtic
xtic = _to_datetime(row[0]).strftime(settings["xtic_format"])
out_row = xtic + ': ' + ' ' * (xtic_max_len - len(xtic)) + '|'
try:
col_width = (1.0 * width / index)
except ZeroDivisionError:
col_width = 0
if settings["multiple"] is not None:
# The second value of the row-tuple, represents the n values from
# the n data sets. Each set, will be represented by a different
# ASCII character, chosen from the randomized string
# 'WEBSTAT_GRAPH_TOKENS'.
# NOTE: Only up to 16 (len(WEBSTAT_GRAPH_TOKENS)) data
# sets are supported.
total = sum(row[1])
for i in range(len(row[1])):
col = row[1][i]
try:
out_row += WEBSTAT_GRAPH_TOKENS[i] * int(1.0 * col * col_width)
except ZeroDivisionError:
break
if len([i for i in row[1] if type(i) is int and i > 0]) - 1 > 0:
out_row += out_row[-1]
else:
total = row[1]
try:
out_row += '-' * int(1.0 * total * col_width)
except ZeroDivisionError:
break
# Print sentinel, and the total
out += out_row + '>' + ' ' * (xtic_max_len + 4 +
width - len(out_row)) + str(total) + '\n'
# Write to destination file
if path == '':
print out
else:
open(path, 'w').write(out)
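The ASCII renderer above scales every bar to a fixed terminal width, mapping the largest value to the full width. A simplified, single-data-set sketch of that scaling (names hypothetical; the real code also handles multiple data sets and datetime xtics):

```python
def ascii_bars(trend, width=40):
    """Render (label, value) pairs as '-' bars proportional to the peak value."""
    peak = max(value for _, value in trend)
    col_width = float(width) / peak if peak else 0
    label_width = max(len(label) for label, _ in trend)
    lines = []
    for label, value in trend:
        bar = '-' * int(value * col_width)
        lines.append('%s: %s|%s> %d' % (label, ' ' * (label_width - len(label)),
                                        bar, value))
    return lines

lines = ascii_bars([('Jan', 10), ('Feb', 40), ('Mar', 20)])
assert lines[1].count('-') == 40   # the peak fills the whole width
assert lines[0].count('-') == 10
```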
def create_graph_trend_gnu_plot(trend, path, settings):
"""Creates the graph trend using the GNU plot library"""
try:
import Gnuplot
except ImportError:
return
gnup = Gnuplot.Gnuplot()
gnup('set style data steps')
if 'size' in settings:
gnup('set terminal png tiny size %s' % settings['size'])
else:
gnup('set terminal png tiny')
gnup('set output "%s"' % path)
if settings["title"] != '':
gnup.title(settings["title"].replace("\"", ""))
if settings["xlabel"] != '':
gnup.xlabel(settings["xlabel"])
if settings["ylabel"] != '':
gnup.ylabel(settings["ylabel"])
if settings["xtic_format"] != '':
xtics = 'set xtics ('
xtics += ', '.join(['"%s" %d' %
(_to_datetime(trend[i][0], '%Y-%m-%d \
%H:%M:%S').strftime(settings["xtic_format"]), i)
for i in range(len(trend))]) + ')'
gnup(xtics)
gnup('set format y "%.0f"')
# If we have multiple data sets, we need to do
# some magic to make Gnuplot eat it,
# This is basically a matrix transposition,
# and the addition of index numbers.
if settings["multiple"] is not None:
cols = len(trend[0][1])
rows = len(trend)
plot_items = []
y_max = 0
y_min = 0
for col in range(cols):
data = []
for row in range(rows):
data.append([row, trend[row][1][col]])
data.append([rows, trend[-1][1][col]])
plot_items.append(Gnuplot.PlotItems
.Data(data, title=settings["multiple"][col]))
tmp_max = max([x[col] for x in data])
tmp_min = min([x[col] for x in data])
if tmp_max > y_max:
y_max = tmp_max
if tmp_min < y_min:
y_min = tmp_min
if y_max - y_min < 5 and y_min != 0:
gnup('set ytic %d, 1, %d' % (y_min - 1, y_max + 2))
elif y_max < 5:
gnup('set ytic 1')
gnup.plot(*plot_items)
else:
data = [x[1] for x in trend]
data.append(trend[-1][1])
y_max = max(data)
y_min = min(data)
if y_max - y_min < 5 and y_min != 0:
gnup('set ytic %d, 1, %d' % (y_min - 1, y_max + 2))
elif y_max < 5:
gnup('set ytic 1')
gnup.plot(data)
def create_graph_trend_flot(trend, path, settings):
"""Creates the graph trend using the flot library"""
size = settings.get("size", "500,400").split(",")
title = cgi.escape(settings["title"].replace(" ", "")[:10])
out = """<!--[if IE]><script language="javascript" type="text/javascript"
src="%(site)s/js/excanvas.min.js"></script><![endif]-->
<script language="javascript" type="text/javascript" src="%(site)s/js/jquery.flot.min.js"></script>
<script language="javascript" type="text/javascript" src="%(site)s/js/jquery.flot.selection.min.js"></script>
<script id="source" language="javascript" type="text/javascript">
document.write('<div style="float:left"><div id="placeholder%(title)s" style="width:%(width)spx;height:%(height)spx"></div></div>'+
'<div id="miniature%(title)s" style="float:left;margin-left:20px;margin-top:50px">' +
'<div id="overview%(title)s" style="width:%(hwidth)dpx;height:%(hheigth)dpx"></div>' +
'<p id="overviewLegend%(title)s" style="margin-left:10px"></p>' +
'</div>');
$(function () {
function parseDate%(title)s(sdate){
var div1 = sdate.split(' ');
var day = div1[0].split('-');
var hour = div1[1].split(':');
return new Date(day[0], day[1]-1, day[2], hour[0], hour[1], hour[2]).getTime() - (new Date().getTimezoneOffset() * 60 * 1000) ;
}
function getData%(title)s() {""" % \
{'site': CFG_SITE_URL, 'width': size[0], 'height': size[1], 'hwidth': int(size[0]) / 2,
'hheigth': int(size[1]) / 2, 'title': title}
if len(trend) > 1:
granularity_td = (_to_datetime(trend[1][0], '%Y-%m-%d %H:%M:%S') -
_to_datetime(trend[0][0], '%Y-%m-%d %H:%M:%S'))
else:
granularity_td = datetime.timedelta()
# Create variables with the format dn = [[x1,y1], [x2,y2]]
minx = trend[0][0]
maxx = trend[0][0]
if settings["multiple"] is not None:
cols = len(trend[0][1])
rows = len(trend)
first = 0
for col in range(cols):
out += """var d%d = [""" % (col)
for row in range(rows):
if first == 0:
first = 1
else:
out += ", "
if trend[row][0] < minx:
minx = trend[row][0]
if trend[row][0] > maxx:
maxx = trend[row][0]
out += '[parseDate%s("%s"),%d]' % \
(title, _to_datetime(trend[row][0], '%Y-%m-%d \
%H:%M:%S'), trend[row][1][col])
out += ", [parseDate%s('%s'), %d]];\n" % (title,
_to_datetime(maxx, '%Y-%m-%d %H:%M:%S')+ granularity_td,
trend[-1][1][col])
out += "return [\n"
first = 0
for col in range(cols):
if first == 0:
first = 1
else:
out += ", "
out += '{data : d%d, label : "%s"}' % \
(col, settings["multiple"][col])
out += "];\n}\n"
else:
out += """var d1 = ["""
rows = len(trend)
first = 0
for row in range(rows):
if trend[row][0] < minx:
minx = trend[row][0]
if trend[row][0] > maxx:
maxx = trend[row][0]
if first == 0:
first = 1
else:
out += ', '
out += '[parseDate%s("%s"),%d]' % \
(title, _to_datetime(trend[row][0], '%Y-%m-%d %H:%M:%S'),
trend[row][1])
out += """, [parseDate%s("%s"), %d]];
return [d1];
}
""" % (title, _to_datetime(maxx, '%Y-%m-%d %H:%M:%S') +
granularity_td, trend[-1][1])
# Set options
tics = """yaxis: {
tickDecimals : 0
},"""
if settings["xtic_format"] != '':
current = _to_datetime(maxx, '%Y-%m-%d %H:%M:%S')
next_tic = current + granularity_td
if (granularity_td.seconds + granularity_td.days * 24 * 3600) > 2592000: # i.e. more than ~30 days
next_tic = current.replace(day=31)
tics += 'xaxis: { mode:"time",min:parseDate%s("%s"),max:parseDate%s("%s")},'\
% (title, _to_datetime(minx, '%Y-%m-%d %H:%M:%S'), title, next_tic)
out += """var options%s ={
series: {
lines: { steps: true, fill: true},
points: { show: false }
},
legend: {show: false},
%s
grid: { hoverable: true, clickable: true },
selection: { mode: "xy" }
};
""" % (title, tics, )
# Write the plot method in javascript
out += """var startData%(title)s = getData%(title)s();
var plot%(title)s = $.plot($("#placeholder%(title)s"), startData%(title)s, options%(title)s);
// setup overview
var overview%(title)s = $.plot($("#overview%(title)s"), startData%(title)s, {
legend: { show: true, container: $("#overviewLegend%(title)s") },
series: {
lines: { steps: true, fill: true, lineWidth: 1},
shadowSize: 0
},
%(tics)s
grid: { color: "#999" },
selection: { mode: "xy" }
});
""" % {"title": title, "tics": tics}
# Tooltip and zoom
out += """
function showTooltip%(title)s(x, y, contents) {
$('<div id="tooltip%(title)s">' + contents + '</div>').css( {
position: 'absolute',
display: 'none',
top: y - 5,
left: x + 10,
border: '1px solid #fdd',
padding: '2px',
'background-color': '#fee',
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
var previousPoint%(title)s = null;
$("#placeholder%(title)s").bind("plothover", function (event, pos, item) {
if (item) {
if (previousPoint%(title)s != item.datapoint) {
previousPoint%(title)s = item.datapoint;
$("#tooltip%(title)s").remove();
var y = item.datapoint[1];
showTooltip%(title)s(item.pageX, item.pageY, y);
}
}
else {
$("#tooltip%(title)s").remove();
previousPoint%(title)s = null;
}
});
$("#placeholder%(title)s").bind("plotclick", function (event, pos, item) {
if (item) {
plot%(title)s.highlight(item.series, item.datapoint);
}
});
// now connect the two
$("#placeholder%(title)s").bind("plotselected", function (event, ranges) {
// clamp the zooming to prevent eternal zoom
if (ranges.xaxis.to - ranges.xaxis.from < 0.00001){
ranges.xaxis.to = ranges.xaxis.from + 0.00001;}
if (ranges.yaxis.to - ranges.yaxis.from < 0.00001){
ranges.yaxis.to = ranges.yaxis.from + 0.00001;}
// do the zooming
plot%(title)s = $.plot($("#placeholder%(title)s"), getData%(title)s(ranges.xaxis.from, ranges.xaxis.to),
$.extend(true, {}, options%(title)s, {
xaxis: { min: ranges.xaxis.from, max: ranges.xaxis.to },
yaxis: { min: ranges.yaxis.from, max: ranges.yaxis.to }
}));
// don't fire event on the overview to prevent eternal loop
overview%(title)s.setSelection(ranges, true);
});
$("#overview%(title)s").bind("plotselected", function (event, ranges) {
plot%(title)s.setSelection(ranges);
});
});
</script>
<noscript>Your browser does not support JavaScript!
Please select another output format.</noscript>""" % {'title' : title}
open(path, 'w').write(out)
def get_numeric_stats(data, multiple):
""" Returns average, max and min values for data """
data = [x[1] for x in data]
if data == []:
return (0, 0, 0)
if multiple:
lists = []
for i in range(len(data[0])):
lists.append([x[i] for x in data])
return ([float(sum(x)) / len(x) for x in lists], [max(x) for x in lists],
[min(x) for x in lists])
else:
return (float(sum(data)) / len(data), max(data), min(data))
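get_numeric_stats reduces a trend to (average, max, min), computed column-wise when each point carries several data sets. A self-contained sketch of both paths (hypothetical `numeric_stats`, using a transpose instead of the index loop):

```python
def numeric_stats(data, multiple=False):
    """Return (average, max, min) of trend values, per data set if multiple."""
    values = [x[1] for x in data]
    if not values:
        return (0, 0, 0)
    if multiple:
        columns = list(zip(*values))  # transpose: one sequence per data set
        return ([float(sum(c)) / len(c) for c in columns],
                [max(c) for c in columns],
                [min(c) for c in columns])
    return (float(sum(values)) / len(values), max(values), min(values))

assert numeric_stats([('a', 1), ('b', 3)]) == (2.0, 3, 1)
avg, mx, mn = numeric_stats([('a', (1, 10)), ('b', (3, 20))], multiple=True)
assert avg == [2.0, 15.0] and mx == [3, 20] and mn == [1, 10]
```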
def create_graph_table(data, path, settings):
"""
Creates a html table representation out of data.
@param data: The data
@type data: (str,...)
@param path: Where to store the graph
@type path: str
@param settings: Dictionary of table parameters
@type settings: dict
"""
out = """<table border="1">
"""
if settings['rows'] == []:
for row in data:
out += """<tr>
"""
for value in row:
out += """<td>%s</td>
""" % value
out += "</tr>"
else:
for dta, value in zip(settings['rows'], data):
out += """<tr>
<td>%s</td>
<td>
""" % dta
for vrow in value:
out += """%s<br />
""" % vrow
out = out[:-6] + "</td></tr>"
out += "</table>"
open(path, 'w').write(out)
def create_graph_dump(dump, path):
"""
Creates a graph representation out of data produced from get_event_trend.
@param dump: The dump data
@type dump: [(str|int,...)]
@param path: Where to store the graph
@type path: str
"""
out = ""
if len(dump) == 0:
out += "No actions for this custom event " + \
"are registered in the given time range."
else:
# Make every row in dump equally long, insert None if appropriate.
max_len = max([len(x) for x in dump])
events = [tuple(list(x) + [None] * (max_len - len(x))) for x in dump]
cols = ["Event", "Date and time"] + ["Argument %d" % i
for i in range(max_len - 2)]
column_widths = [max([len(str(x[i])) \
for x in events + [cols]]) + 3 for i in range(len(events[0]))]
for i in range(len(cols)):
out += cols[i] + ' ' * (column_widths[i] - len(cols[i]))
out += "\n"
for i in range(len(cols)):
out += '=' * (len(cols[i])) + ' ' * (column_widths[i] - len(cols[i]))
out += "\n\n"
for action in dump:
for i in range(len(action)):
if action[i] is None:
temp = ''
else:
temp = action[i]
out += str(temp) + ' ' * (column_widths[i] - len(str(temp)))
out += "\n"
# Write to destination file
if path == '':
print out
else:
open(path, 'w').write(out)
# EXPORT DATA TO SLS
def get_search_frequency(day=datetime.datetime.now().date()):
"""Returns the number of searches performed in the chosen day"""
searches = get_keyevent_trend_search_type_distribution(get_args(day))
return sum(searches[0][1])
def get_total_records(day=datetime.datetime.now().date()):
"""Returns the total number of records which existed in the chosen day"""
tomorrow = (datetime.datetime.now() +
datetime.timedelta(days=1)).strftime("%Y-%m-%d")
args = {'collection': CFG_SITE_NAME, 't_start': day.strftime("%Y-%m-%d"),
't_end': tomorrow, 'granularity': "day", 't_format': "%Y-%m-%d"}
try:
return get_keyevent_trend_collection_population(args)[0][1]
except IndexError:
return 0
def get_new_records(day=datetime.datetime.now().date()):
"""Returns the number of new records submitted in the chosen day"""
args = {'collection': CFG_SITE_NAME,
't_start': (day - datetime.timedelta(days=1)).strftime("%Y-%m-%d"),
't_end': day.strftime("%Y-%m-%d"), 'granularity': "day",
't_format': "%Y-%m-%d"}
try:
return (get_total_records(day) -
get_keyevent_trend_collection_population(args)[0][1])
except IndexError:
return 0
def get_download_frequency(day=datetime.datetime.now().date()):
"""Returns the number of downloads during the chosen day"""
return get_keyevent_trend_download_frequency(get_args(day))[0][1]
def get_comments_frequency(day=datetime.datetime.now().date()):
"""Returns the number of comments during the chosen day"""
return get_keyevent_trend_comments_frequency(get_args(day))[0][1]
def get_loans_frequency(day=datetime.datetime.now().date()):
"""Returns the number of comments during the chosen day"""
return get_keyevent_trend_number_of_loans(get_args(day))[0][1]
def get_web_submissions(day=datetime.datetime.now().date()):
"""Returns the number of web submissions during the chosen day"""
args = get_args(day)
args['doctype'] = 'all'
return get_keyevent_trend_web_submissions(args)[0][1]
def get_alerts(day=datetime.datetime.now().date()):
"""Returns the number of alerts during the chosen day"""
args = get_args(day)
args['cols'] = [('', '', '')]
args['event_id'] = 'alerts'
return get_customevent_trend(args)[0][1]
def get_journal_views(day=datetime.datetime.now().date()):
"""Returns the number of journal displays during the chosen day"""
args = get_args(day)
args['cols'] = [('', '', '')]
args['event_id'] = 'journals'
return get_customevent_trend(args)[0][1]
def get_basket_views(day=datetime.datetime.now().date()):
"""Returns the number of basket displays during the chosen day"""
args = get_args(day)
args['cols'] = [('', '', '')]
args['event_id'] = 'baskets'
return get_customevent_trend(args)[0][1]
def get_args(day):
"""Returns the most common arguments for the exporting to SLS methods"""
return {'t_start': day.strftime("%Y-%m-%d"),
't_end': (day + datetime.timedelta(days=1)).strftime("%Y-%m-%d"),
'granularity': "day", 't_format': "%Y-%m-%d"}
# EXPORTER
def export_to_python(data, req):
"""
Exports the data to Python code.
@param data: The Python data that should be exported
@type data: []
@param req: The Apache request object
@type req:
"""
_export("text/x-python", str(data), req)
def export_to_csv(data, req):
"""
Exports the data to CSV.
@param data: The Python data that should be exported
@type data: []
@param req: The Apache request object
@type req:
"""
csv_list = [""""%s",%s""" % (x[0], ",".join([str(y) for y in \
((type(x[1]) is tuple) and x[1] or (x[1], ))])) for x in data]
_export('text/csv', '\n'.join(csv_list), req)
def export_to_file(data, req):
"""
Exports the data to a file.
@param data: The Python data that should be exported
@type data: []
@param req: The Apache request object
@type req:
"""
try:
import xlwt
book = xlwt.Workbook(encoding="utf-8")
sheet1 = book.add_sheet('Sheet 1')
for row in range(0, len(data)):
for col in range(0, len(data[row])):
sheet1.write(row, col, "%s" % data[row][col])
filename = CFG_TMPDIR + "/webstat_export_" + \
str(time.time()).replace('.', '') + '.xls'
book.save(filename)
redirect_to_url(req, '%s/stats/export?filename=%s&mime=%s' \
% (CFG_SITE_URL, os.path.basename(filename), 'application/vnd.ms-excel'))
except ImportError:
csv_list = []
for row in data:
row = ['"%s"' % str(col) for col in row]
csv_list.append(",".join(row))
_export('text/csv', '\n'.join(csv_list), req)
# INTERNAL
def _export(mime, content, req):
"""
Helper function to pass on the export call. Create a
temporary file in which the content is stored, then let
redirect to the export web interface.
"""
filename = CFG_TMPDIR + "/webstat_export_" + \
str(time.time()).replace('.', '')
open(filename, 'w').write(content)
redirect_to_url(req, '%s/stats/export?filename=%s&mime=%s' \
% (CFG_SITE_URL, os.path.basename(filename), mime))
def _get_trend_from_actions(action_dates, initial_value,
t_start, t_end, granularity, dt_format, acumulative=False):
"""
Given a list of dates reflecting some sort of action/event, and some additional parameters,
an internal data format is returned. 'initial_value' set to zero means that the frequency
will not be cumulative, but counted independently for every span.
@param action_dates: A list of dates, indicating some sort of action/event.
@type action_dates: [datetime.datetime]
@param initial_value: The numerical offset the first action's value should make use of.
@type initial_value: int
@param t_start: Start time for the time domain in dt_format
@type t_start: str
@param t_end: End time for the time domain in dt_format
@type t_end: str
@param granularity: The granularity of the time domain, span between values.
Possible values are [year,month,day,hour,minute,second].
@type granularity: str
@param dt_format: Format of the 't_start' and 't_stop' parameters
@type dt_format: str
@return: A list of tuples zipping a time-domain and a value-domain
@rtype: [(str, int)]
"""
# Append the maximum date as a sentinel indicating we're done
action_dates = list(action_dates)
# Construct the datetime tuple for the stop time
stop_at = _to_datetime(t_end, dt_format) - datetime.timedelta(seconds=1)
vector = [(None, initial_value)]
try:
upcoming_action = action_dates.pop()
#Do not count null values (when year, month or day is 0)
if granularity in ("year", "month", "day") and upcoming_action[0] == 0:
upcoming_action = action_dates.pop()
except IndexError:
upcoming_action = (datetime.datetime.max, 0)
# Create an iterator running from the first day of activity
for current in _get_datetime_iter(t_start, granularity, dt_format):
# Counter of action_dates in the current span; set the initial value to
# zero to avoid accumulation.
if acumulative:
actions_here = vector[-1][1]
else:
actions_here = 0
# Check to see if there's an action date in the current span
if upcoming_action[0] == {"year": current.year,
"month": current.month,
"day": current.day,
"hour": current.hour,
"minute": current.minute,
"second": current.second
}[granularity]:
actions_here += upcoming_action[1]
try:
upcoming_action = action_dates.pop()
except IndexError:
upcoming_action = (datetime.datetime.max, 0)
vector.append((current.strftime('%Y-%m-%d %H:%M:%S'), actions_here))
# Make sure to stop the iteration at the end time
if {"year": current.year >= stop_at.year,
"month": current.month >= stop_at.month and current.year == stop_at.year,
"day": current.day >= stop_at.day and current.month == stop_at.month,
"hour": current.hour >= stop_at.hour and current.day == stop_at.day,
"minute": current.minute >= stop_at.minute and current.hour == stop_at.hour,
"second": current.second >= stop_at.second and current.minute == stop_at.minute
}[granularity]:
break
# Remove the first bogus tuple, and return
return vector[1:]
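The core of _get_trend_from_actions is a merge between a regular time grid and a stack of (timestamp, count) action rows: every grid point is emitted, consuming the actions that fall into it, and spans with no actions yield zero. A simplified, day-granularity sketch of that idea (hypothetical `daily_trend`; the real code also supports cumulative mode and finer granularities):

```python
import datetime

def daily_trend(action_days, t_start, t_end):
    """Count actions per day over [t_start, t_end); action-free days yield 0."""
    counts = {}
    for day in action_days:
        counts[day] = counts.get(day, 0) + 1
    vector = []
    current = t_start
    while current < t_end:
        vector.append((current.strftime('%Y-%m-%d'), counts.get(current, 0)))
        current += datetime.timedelta(days=1)
    return vector

d = datetime.date
trend = daily_trend([d(2013, 1, 1), d(2013, 1, 1), d(2013, 1, 3)],
                    d(2013, 1, 1), d(2013, 1, 4))
assert trend == [('2013-01-01', 2), ('2013-01-02', 0), ('2013-01-03', 1)]
```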
def _get_keyevent_trend(args, sql, initial_quantity=0, extra_param=[],
return_sql=False, sql_text='%s', acumulative=False):
"""
Returns the trend for the sql passed in the given timestamp range.
@param args['t_start']: Date and time of start point
@type args['t_start']: str
@param args['t_end']: Date and time of end point
@type args['t_end']: str
@param args['granularity']: Granularity of date and time
@type args['granularity']: str
@param args['t_format']: Date and time formatting string
@type args['t_format']: str
"""
# collect action dates
lower = _to_datetime(args['t_start'], args['t_format']).isoformat()
upper = _to_datetime(args['t_end'], args['t_format']).isoformat()
param = tuple([lower, upper] + extra_param)
if return_sql:
sql = sql % param
return sql_text % sql
return _get_trend_from_actions(run_sql(sql, param), initial_quantity, args['t_start'],
args['t_end'], args['granularity'], args['t_format'], acumulative)
def _get_datetime_iter(t_start, granularity='day',
dt_format='%Y-%m-%d %H:%M:%S'):
"""
Returns an iterator over datetime elements starting at an arbitrary time,
with granularity of a [year,month,day,hour,minute,second].
@param t_start: An arbitrary starting time in format %Y-%m-%d %H:%M:%S
@type t_start: str
@param granularity: The span between iterable elements, default is 'days'.
Possible values are [year,month,day,hour,minute,second].
@type granularity: str
@param dt_format: Format of the 't_start' parameter
@type dt_format: str
@return: An iterator of points in time
@rtype: iterator over datetime elements
"""
tim = _to_datetime(t_start, dt_format)
# Make a time increment depending on the granularity and the current time
# (the length of years and months vary over time)
span = ""
while True:
yield tim
if granularity == "year":
span = (calendar.isleap(tim.year) and ["days=366"] or ["days=365"])[0]
elif granularity == "month":
span = "days=" + str(calendar.monthrange(tim.year, tim.month)[1])
elif granularity == "day":
span = "days=1"
elif granularity == "hour":
span = "hours=1"
elif granularity == "minute":
span = "minutes=1"
elif granularity == "second":
span = "seconds=1"
else:
# Default just in case
span = "days=1"
tim += eval("datetime.timedelta(" + span + ")")
def _to_datetime(dttime, dt_format='%Y-%m-%d %H:%M:%S'):
"""
Transforms a string into a datetime
"""
return datetime.datetime(*time.strptime(dttime, dt_format)[:6])
def _run_cmd(command):
"""
Runs a certain command and returns the string output. If the command is
not found, a string saying so will be returned. Use with caution!
@param command: The UNIX command to execute.
@type command: str
@return: The std-out from the command.
@rtype: str
"""
return commands.getoutput(command)
def _get_doctypes():
"""Returns all the possible doctypes of a new submission"""
doctypes = [("all", "All")]
for doctype in get_docid_docname_alldoctypes():
doctypes.append(doctype)
return doctypes
def _get_item_statuses():
"""Returns all the possible status of an item"""
return [(CFG_BIBCIRCULATION_ITEM_STATUS_CANCELLED, "Cancelled"),
(CFG_BIBCIRCULATION_ITEM_STATUS_CLAIMED, "Claimed"),
(CFG_BIBCIRCULATION_ITEM_STATUS_IN_PROCESS, "In process"),
(CFG_BIBCIRCULATION_ITEM_STATUS_NOT_ARRIVED, "Not arrived"),
(CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN, "On loan"),
(CFG_BIBCIRCULATION_ITEM_STATUS_ON_ORDER, "On order"),
(CFG_BIBCIRCULATION_ITEM_STATUS_ON_SHELF, "On shelf")] + \
[(status, status) for status in CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL]
def _get_item_doctype():
"""Returns all the possible types of document for an item"""
dts = []
for dat in run_sql("""SELECT DISTINCT(request_type)
FROM crcILLREQUEST ORDER BY request_type ASC"""):
dts.append((dat[0], dat[0]))
return dts
def _get_request_statuses():
"""Returns all the possible statuses for an ILL request"""
dts = []
for dat in run_sql("SELECT DISTINCT(status) FROM crcILLREQUEST ORDER BY status ASC"):
dts.append((dat[0], dat[0]))
return dts
def _get_libraries():
"""Returns all the possible libraries"""
dts = []
for dat in run_sql("SELECT name FROM crcLIBRARY ORDER BY name ASC"):
if not CFG_CERN_SITE or not "CERN" in dat[0]: # do not add internal libraries for CERN site
dts.append((dat[0], dat[0]))
return dts
def _get_loan_periods():
"""Returns all the possible loan periods for an item"""
dts = []
for dat in run_sql("SELECT DISTINCT(loan_period) FROM crcITEM ORDER BY loan_period ASC"):
dts.append((dat[0], dat[0]))
return dts
def _get_tag_name(tag):
"""
For a specific MARC tag, it returns the human-readable name
"""
res = run_sql("SELECT name FROM tag WHERE value LIKE %s", ('%' + tag + '%',))
if res:
return res[0][0]
res = run_sql("SELECT name FROM tag WHERE value LIKE %s", ('%' + tag[:-1] + '%',))
if res:
return res[0][0]
return ''
def _get_collection_recids_for_sql_query(coll):
ids = get_collection_reclist(coll).tolist()
if len(ids) == 0:
return ""
return "id_bibrec IN %s" % str(ids).replace('[', '(').replace(']', ')')
def _check_udc_value_where():
return "id_bibrec IN (SELECT brb.id_bibrec \
FROM bibrec_bib08x brb, bib08x b WHERE brb.id_bibxxx = b.id AND tag='080__a' \
AND value LIKE %s) "
def _get_udc_truncated(udc):
if udc[-1] == '*':
return "%s%%" % udc[:-1]
if udc[0] == '*':
return "%%%s" % udc[1:]
return "%s" % udc
def _check_empty_value(value):
if len(value) == 0:
return ""
else:
return value[0][0]
def _get_granularity_sql_functions(granularity):
try:
return {
"year": ("YEAR",),
"month": ("YEAR", "MONTH",),
"day": ("MONTH", "DAY",),
"hour": ("DAY", "HOUR",),
"minute": ("HOUR", "MINUTE",),
"second": ("MINUTE", "SECOND")
}[granularity]
except KeyError:
return ("MONTH", "DAY",)
def _get_sql_query(creation_time_name, granularity, tables_from, conditions="",
extra_select="", dates_range_param="", group_by=True, count=True):
if len(dates_range_param) == 0:
dates_range_param = creation_time_name
conditions = "%s > %%s AND %s < %%s %s" % (dates_range_param, dates_range_param,
len(conditions) > 0 and "AND %s" % conditions or "")
values = {'creation_time_name': creation_time_name,
'granularity_sql_function': _get_granularity_sql_functions(granularity)[-1],
'count': count and ", COUNT(*)" or "",
'tables_from': tables_from,
'conditions': conditions,
'extra_select': extra_select,
'group_by': ""}
if group_by:
values['group_by'] = "GROUP BY "
for fun in _get_granularity_sql_functions(granularity):
values['group_by'] += "%s(%s), " % (fun, creation_time_name)
values['group_by'] = values['group_by'][:-2]
return "SELECT %(granularity_sql_function)s(%(creation_time_name)s) %(count)s %(extra_select)s \
FROM %(tables_from)s WHERE %(conditions)s \
%(group_by)s \
ORDER BY %(creation_time_name)s DESC" % values
diff --git a/modules/webstat/lib/webstat_webinterface.py b/modules/webstat/lib/webstat_webinterface.py
index cfa54fbae..0c98dd6f3 100644
--- a/modules/webstat/lib/webstat_webinterface.py
+++ b/modules/webstat/lib/webstat_webinterface.py
@@ -1,1071 +1,1071 @@
## This file is part of Invenio.
## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
__lastupdated__ = "$Date$"
import os, sys
from urllib import unquote
from invenio import webinterface_handler_config as apache
from invenio.config import \
CFG_TMPDIR, \
CFG_SITE_URL, \
CFG_SITE_LANG
-from invenio.bibindex_engine import CFG_JOURNAL_TAG
+from invenio.bibindex_tokenizers.BibIndexJournalTokenizer import CFG_JOURNAL_TAG
from invenio.webinterface_handler import wash_urlargd, WebInterfaceDirectory
from invenio.webpage import page
from invenio.access_control_engine import acc_authorize_action
from invenio.access_control_config import VIEWRESTRCOLL
from invenio.search_engine import collection_restricted_p
from invenio.webuser import collect_user_info, page_not_authorized
from invenio.urlutils import redirect_to_url
from invenio.webstat import perform_request_index, \
perform_display_keyevent, \
perform_display_customevent, \
perform_display_customevent_help, \
perform_display_error_log_analyzer, \
register_customevent, \
perform_display_custom_summary, \
perform_display_stats_per_coll, \
perform_display_current_system_health, \
perform_display_yearly_report, \
perform_display_coll_list, \
perform_display_ingestion_status
def detect_suitable_graph_format():
"""
Return the default graph format argument. It is now always "flot";
before flot support existed, the default was "gnuplot" if the Gnuplot module was importable, otherwise "asciiart".
"""
return "flot"
# try:
# import Gnuplot
# suitable_graph_format = "gnuplot"
# except ImportError:
# suitable_graph_format = "asciiart"
# return suitable_graph_format
SUITABLE_GRAPH_FORMAT = detect_suitable_graph_format()
class WebInterfaceStatsPages(WebInterfaceDirectory):
"""Defines the set of stats pages."""
_exports = ['', 'system_health', 'systemhealth', 'yearly_report', 'ingestion_health',
'collection_population', 'new_records', 'search_frequency', 'search_type_distribution',
'download_frequency', 'comments_frequency', 'number_of_loans', 'web_submissions',
'loans_stats', 'loans_lists', 'renewals_lists', 'returns_table', 'returns_graph',
'ill_requests_stats', 'ill_requests_lists', 'ill_requests_graph', 'items_stats',
'items_list', 'loans_requests', 'loans_request_lists', 'user_stats',
'user_lists', 'error_log', 'customevent', 'customevent_help',
'customevent_register', 'custom_summary', 'collections', 'collection_stats',
'export']
navtrail = """<a class="navtrail" href="%s/stats/%%(ln_link)s">Statistics</a>""" % CFG_SITE_URL
def __call__(self, req, form):
"""Index page."""
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='index',
ln=ln)
return page(title="Statistics",
body=perform_request_index(ln=ln),
description="Invenio, Statistics",
keywords="Invenio, statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='stats',
language=ln)
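Every handler in this class builds the optional `?ln=` query-string suffix with the pre-Python-2.5 `(cond and a) or b` conditional idiom. A minimal sketch of that pattern, assuming `CFG_SITE_LANG` is the configured default language (the idiom is safe here because `'?ln=' + ln` is always truthy when the condition holds):

```python
# Illustrative reduction of the ln_link expression repeated throughout
# the handlers; CFG_SITE_LANG stands in for the configured default.
CFG_SITE_LANG = "en"

def ln_link(ln):
    """Return '?ln=<code>' for non-default languages, else ''."""
    # (cond and a) or b: evaluates to a when cond is true, else b.
    return (ln != CFG_SITE_LANG and '?ln=' + ln) or ''
```

With the modern conditional expression this would read `'?ln=' + ln if ln != CFG_SITE_LANG else ''`, but the codebase predates that syntax.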
# CURRENT SYSTEM HEALTH
def system_health(self, req, form):
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='current system health',
ln=ln)
return page(title="Current system health",
body=perform_display_current_system_health(ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Current system health",
keywords="Invenio, statistics, current system health",
req=req,
lastupdated=__lastupdated__,
navmenuid='current system health',
language=ln)
def systemhealth(self, req, form):
"""Redirect for the old URL. """
return redirect_to_url (req, "%s/stats/system_health" % (CFG_SITE_URL))
def yearly_report(self, req, form):
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='yearly report',
ln=ln)
return page(title="Yearly report",
body=perform_display_yearly_report(ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Yearly report",
keywords="Invenio, statistics, yearly report",
req=req,
lastupdated=__lastupdated__,
navmenuid='yearly report',
language=ln)
def ingestion_health(self, req, form):
argd = wash_urlargd(form, { 'pattern': (str, None),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
req_ingestion = argd['pattern']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='ingestion status',
ln=ln)
return page(title="Check ingestion health",
body=perform_display_ingestion_status(req_ingestion, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Ingestion health",
keywords="Invenio, statistics, Ingestion health",
req=req,
lastupdated=__lastupdated__,
navmenuid='ingestion health',
language=ln)
# KEY EVENT SECTION
def collection_population(self, req, form):
"""Collection population statistics page."""
argd = wash_urlargd(form, {'collection': (str, "All"),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='collection population',
ln=ln)
return page(title="Collection population",
body=perform_display_keyevent('collection population', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Collection population",
keywords="Invenio, statistics, collection population",
req=req,
lastupdated=__lastupdated__,
navmenuid='collection population',
language=ln)
def new_records(self, req, form):
"""New records statistics page."""
argd = wash_urlargd(form, {'collection': (str, "All"),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='new records',
ln=ln)
return page(title="New records",
body=perform_display_keyevent('new records', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, New records",
keywords="Invenio, statistics, new records",
req=req,
lastupdated=__lastupdated__,
navmenuid='new records',
language=ln)
def search_frequency(self, req, form):
"""Search frequency statistics page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='search frequency',
ln=ln)
return page(title="Search frequency",
body=perform_display_keyevent('search frequency', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Search frequency",
keywords="Invenio, statistics, search frequency",
req=req,
lastupdated=__lastupdated__,
navmenuid='search frequency',
language=ln)
def comments_frequency(self, req, form):
"""Comments frequency statistics page."""
argd = wash_urlargd(form, {'collection': (str, "All"),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='comments frequency',
ln=ln)
return page(title="Comments frequency",
body=perform_display_keyevent('comments frequency', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Comments frequency",
keywords="Invenio, statistics, Comments frequency",
req=req,
lastupdated=__lastupdated__,
navmenuid='comments frequency',
language=ln)
def search_type_distribution(self, req, form):
"""Search type distribution statistics page."""
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='search type distribution',
ln=ln)
return page(title="Search type distribution",
body=perform_display_keyevent('search type distribution', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Search type distribution",
keywords="Invenio, statistics, search type distribution",
req=req,
lastupdated=__lastupdated__,
navmenuid='search type distribution',
language=ln)
def download_frequency(self, req, form):
"""Download frequency statistics page."""
argd = wash_urlargd(form, {'collection': (str, "All"),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='download frequency',
ln=ln)
return page(title="Download frequency",
body=perform_display_keyevent('download frequency', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Download frequency",
keywords="Invenio, statistics, download frequency",
req=req,
lastupdated=__lastupdated__,
navmenuid='download frequency',
language=ln)
def number_of_loans(self, req, form):
"""Number of loans statistics page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='number of circulation loans',
ln=ln)
return page(title="Number of circulation loans",
body=perform_display_keyevent('number of loans', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Number of circulation loans",
keywords="Invenio, statistics, Number of circulation loans",
req=req,
lastupdated=__lastupdated__,
navmenuid='number of circulation loans',
language=ln)
def web_submissions(self, req, form):
"""Web submissions statistics page."""
argd = wash_urlargd(form, {'doctype': (str, "all"),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='web submissions',
ln=ln)
return page(title="Web submissions",
body=perform_display_keyevent('web submissions', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Web submissions",
keywords="Invenio, statistics, web submissions",
req=req,
lastupdated=__lastupdated__,
navmenuid='web submissions',
language=ln)
def loans_stats(self, req, form):
"""Number of loans statistics page."""
argd = wash_urlargd(form, {'udc': (str, ""),
'item_status': (str, ""),
'publication_date': (str, ""),
'creation_date': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation loans statistics',
ln=ln)
return page(title="Circulation loans statistics",
body=perform_display_keyevent('loans statistics', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation loans statistics",
keywords="Invenio, statistics, Circulation loans statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation loans statistics',
language=ln)
def loans_lists(self, req, form):
"""Number of loans lists page."""
argd = wash_urlargd(form, {'udc': (str, ""),
'loan_period': (str, ""),
'min_loans': (int, 0),
'max_loans': (int, sys.maxint),
'publication_date': (str, ""),
'creation_date': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
argd['min_loans'] = str(argd['min_loans'])
argd['max_loans'] = str(argd['max_loans'])
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation loans lists',
ln=ln)
return page(title="Circulation loans lists",
body=perform_display_keyevent('loans lists', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation loans lists",
keywords="Invenio, statistics, Circulation loans lists",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation loans lists',
language=ln)
def renewals_lists(self, req, form):
"""Renewed items lists page."""
argd = wash_urlargd(form, {'udc': (str, ""),
'collection': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation renewals lists',
ln=ln)
return page(title="Circulation renewals lists",
body=perform_display_keyevent('renewals', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation renewals lists",
keywords="Invenio, statistics, Circulation renewals lists",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation renewals lists',
language=ln)
def returns_table(self, req, form):
"""Number of returns table page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='Circulation returns table',
ln=ln)
return page(title="Circulation returns table",
body=perform_display_keyevent('number returns', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation returns table",
keywords="Invenio, statistics, Circulation returns table",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation returns table',
language=ln)
def returns_graph(self, req, form):
"""Percentage of returns graph page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation returns graph',
ln=ln)
return page(title="Circulation returns graph",
body=perform_display_keyevent('percentage returns', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation returns graph",
keywords="Invenio, statistics, Circulation returns graph",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation returns graph',
language=ln)
def ill_requests_stats(self, req, form):
"""ILL Requests statistics page."""
argd = wash_urlargd(form, {'doctype': (str, ""),
'status': (str, ""),
'supplier': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation ill requests statistics',
ln=ln)
return page(title="Circulation ILL Requests statistics",
body=perform_display_keyevent('ill requests statistics', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation ILL Requests statistics",
keywords="Invenio, statistics, Circulation ILL Requests statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation ill requests statistics',
language=ln)
def ill_requests_lists(self, req, form):
"""ILL requests list page."""
argd = wash_urlargd(form, {'doctype': (str, ""),
'supplier': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation ill requests list',
ln=ln)
return page(title="Circulation ILL Requests list",
body=perform_display_keyevent('ill requests list', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation ILL Requests list",
keywords="Invenio, statistics, Circulation ILL Requests list",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation ill requests list',
language=ln)
def ill_requests_graph(self, req, form):
"""Percentage of satisfied ILL requests graph page."""
argd = wash_urlargd(form, {'doctype': (str, ""),
'status': (str, ""),
'supplier': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='percentage circulation satisfied ill requests',
ln=ln)
return page(title="Percentage of circulation satisfied ILL requests",
body=perform_display_keyevent('percentage satisfied ill requests',
argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Percentage of circulation satisfied ILL requests",
keywords="Invenio, statistics, Percentage of circulation satisfied ILL requests",
req=req,
lastupdated=__lastupdated__,
navmenuid='percentage circulation satisfied ill requests',
language=ln)
def items_stats(self, req, form):
"""Items statistics page."""
argd = wash_urlargd(form, {'udc': (str, ""),
'collection': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation items stats',
ln=ln)
return page(title="Circulation items statistics",
body=perform_display_keyevent('items stats', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation items statistics",
keywords="Invenio, statistics, Circulation items statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation items stats',
language=ln)
def items_list(self, req, form):
"""Items list page."""
argd = wash_urlargd(form, {'library': (str, ""),
'status': (str, ""),
'format': (str, ""),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation items list',
ln=ln)
return page(title="Circulation items list",
body=perform_display_keyevent('items list', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation items list",
keywords="Invenio, statistics, Circulation items list",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation items list',
language=ln)
def loans_requests(self, req, form):
"""Hold requests statistics page."""
argd = wash_urlargd(form, {'item_status': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation loan request statistics',
ln=ln)
return page(title="Circulation hold requests statistics",
body=perform_display_keyevent('loan request statistics', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation hold requests statistics",
keywords="Invenio, statistics, Circulation hold requests statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation loan request statistics',
language=ln)
def loans_request_lists(self, req, form):
"""Hold request lists page."""
argd = wash_urlargd(form, {'udc': (str, ""),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation hold request lists',
ln=ln)
return page(title="Circulation hold request lists",
body=perform_display_keyevent('loan request lists', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation hold request lists",
keywords="Invenio, statistics, Circulation hold request lists",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation hold request lists',
language=ln)
def user_stats(self, req, form):
"""Circulation users statistics page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation user statistics',
ln=ln)
return page(title="Circulation users statistics",
body=perform_display_keyevent('user statistics', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation users statistics",
keywords="Invenio, statistics, Circulation users statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation user statistics',
language=ln)
def user_lists(self, req, form):
"""Circulation users lists page."""
argd = wash_urlargd(form, {'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'sql': (int, 0),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='circulation users lists',
ln=ln)
return page(title="Circulation users lists",
body=perform_display_keyevent('user lists', argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Circulation users lists",
keywords="Invenio, statistics, Circulation users lists",
req=req,
lastupdated=__lastupdated__,
navmenuid='circulation users lists',
language=ln)
# CUSTOM EVENT SECTION
def customevent(self, req, form):
"""Custom event statistics page"""
arg_format = {'ids': (list, []),
'timespan': (str, "today"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, SUITABLE_GRAPH_FORMAT),
'ln': (str, CFG_SITE_LANG)}
for key in form.keys():
if key[:4] == 'cols':
i = key[4:]
arg_format['cols' + i] = (list, [])
arg_format['col_value' + i] = (list, [])
arg_format['bool' + i] = (list, [])
argd = wash_urlargd(form, arg_format)
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='custom event',
ln=ln)
body = perform_display_customevent(argd['ids'], argd, req=req, ln=ln)
return page(title="Custom event",
body=body,
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Custom event",
keywords="Invenio, statistics, custom event",
req=req,
lastupdated=__lastupdated__,
navmenuid='custom event',
language=ln)
def error_log(self, req, form):
"""Error log analyzer page."""
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='error log analyzer',
ln=ln)
return page(title="Error log analyzer",
body=perform_display_error_log_analyzer(ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Error log analyzer",
keywords="Invenio, statistics, Error log analyzer",
req=req,
lastupdated=__lastupdated__,
navmenuid='error log analyzer',
language=ln)
def customevent_help(self, req, form):
"""Custom event help page"""
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='custom event help',
ln=ln)
return page(title="Custom event help",
body=perform_display_customevent_help(ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Custom event help",
keywords="Invenio, statistics, custom event help",
req=req,
lastupdated=__lastupdated__,
navmenuid='custom event help',
language=ln)
def customevent_register(self, req, form):
"""Register a customevent and reload to it defined url"""
argd = wash_urlargd(form, {'event_id': (str, ""),
'arg': (str, ""),
'url': (str, ""),
'ln': (str, CFG_SITE_LANG)})
params = argd['arg'].split(',')
if "WEBSTAT_IP" in params:
index = params.index("WEBSTAT_IP")
params[index] = str(req.remote_ip)
register_customevent(argd['event_id'], params)
return redirect_to_url(req, unquote(argd['url']), apache.HTTP_MOVED_PERMANENTLY)
# CUSTOM REPORT SECTION
def custom_summary(self, req, form):
"""Custom report page"""
argd = wash_urlargd(form, {'query': (str, ""),
'tag': (str, CFG_JOURNAL_TAG.replace("%", "p")),
'title': (str, "Publications"),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='custom query summary',
ln=ln)
return page(title="Custom query summary",
body=perform_display_custom_summary(argd, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Custom Query Summary",
keywords="Invenio, statistics, custom query summary",
req=req,
lastupdated=__lastupdated__,
navmenuid='custom query summary',
language=ln)
# COLLECTIONS SECTION
def collection_stats(self, req, form):
"""Collection statistics list page"""
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
navmenuid='collections list',
text=auth_msg,
ln=ln)
return page(title="Collection statistics",
body=perform_display_coll_list(req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Collection statistics",
keywords="Invenio, statistics",
req=req,
lastupdated=__lastupdated__,
navmenuid='collections list',
language=ln)
def collections(self, req, form):
"""Collections statistics page"""
argd = wash_urlargd(form, {'collection': (str, "All"),
'timespan': (str, "this month"),
's_date': (str, ""),
'f_date': (str, ""),
'format': (str, "flot"),
'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
navmenuid='collections',
text=auth_msg,
ln=ln)
if collection_restricted_p(argd['collection']):
(auth_code_coll, auth_msg_coll) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=argd['collection'])
if auth_code_coll:
return page_not_authorized(req,
navmenuid='collections',
text=auth_msg_coll,
ln=ln)
return page(title="Statistics of %s" % argd['collection'],
body=perform_display_stats_per_coll(argd, req, ln=ln),
navtrail="""<a class="navtrail" href="%s/stats/%s">Statistics</a>""" % \
(CFG_SITE_URL, (ln != CFG_SITE_LANG and '?ln=' + ln) or ''),
description="Invenio, Statistics, Collection %s" % argd['collection'],
keywords="Invenio, statistics, %s" % argd['collection'],
req=req,
lastupdated=__lastupdated__,
navmenuid='collections',
language=ln)
# EXPORT SECTION
def export(self, req, form):
"""Exports data"""
argd = wash_urlargd(form, {'ln': (str, CFG_SITE_LANG)})
ln = argd['ln']
user_info = collect_user_info(req)
(auth_code, auth_msg) = acc_authorize_action(user_info, 'runwebstatadmin')
if auth_code:
return page_not_authorized(req,
navtrail=self.navtrail % {'ln_link': (ln != CFG_SITE_LANG and '?ln=' + ln) or ''},
text=auth_msg,
navmenuid='export',
ln=ln)
argd = wash_urlargd(form, {"filename": (str, ""),
"mime": (str, "")})
# Check that the particular file exists and that it's OK to export
webstat_files = [x for x in os.listdir(CFG_TMPDIR) if x.startswith("webstat")]
if argd["filename"] not in webstat_files:
return "Bad file."
# Set correct header type
req.content_type = argd["mime"]
req.send_http_header()
# Rebuild path, send it to the user, and clean up.
filename = CFG_TMPDIR + '/' + argd["filename"]
req.sendfile(filename)
os.remove(filename)
index = __call__
diff --git a/modules/webstyle/lib/invenio.wsgi b/modules/webstyle/lib/invenio.wsgi
index f88953908..2bd86b3b0 100644
--- a/modules/webstyle/lib/invenio.wsgi
+++ b/modules/webstyle/lib/invenio.wsgi
@@ -1,68 +1,76 @@
# -*- coding: utf-8 -*-
## This file is part of Invenio.
## Copyright (C) 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
mod_wsgi Invenio application loader.
"""
from invenio import config
# Start remote debugger if appropriate:
try:
from invenio.remote_debugger_config import CFG_REMOTE_DEBUGGER_ENABLED, \
CFG_REMOTE_DEBUGGER_WSGI_LOADING
if CFG_REMOTE_DEBUGGER_ENABLED:
from invenio import remote_debugger
remote_debugger.start_file_changes_monitor()
if CFG_REMOTE_DEBUGGER_WSGI_LOADING:
remote_debugger.start()
except:
pass
# wrap warnings (usually from sql queries) to log the traceback
# of their origin for debugging
try:
from invenio.errorlib import wrap_warn
wrap_warn()
except:
pass
# pre-load citation dictionaries upon WSGI application start-up (the
# citation dictionaries are loaded lazily, which is good for CLI
# processes such as bibsched, but for web user queries we want them to
# be available right after web server start-up):
try:
from invenio.bibrank_citation_searcher import get_citedby_hitset, \
get_refersto_hitset
get_citedby_hitset(None)
get_refersto_hitset(None)
except:
pass
## You can't write to stdout in mod_wsgi, but some of our
## dependencies do this! (e.g. 4Suite)
import sys
sys.stdout = sys.stderr
-from invenio.webinterface_handler_flask import create_invenio_flask_app
-application = create_invenio_flask_app()
+try:
+ from invenio.webinterface_handler_flask import create_invenio_flask_app
+ application = create_invenio_flask_app()
+finally:
+ ## mod_wsgi uses one thread to import the .wsgi file
+ ## and a second one to instantiate the application.
+ ## Therefore we need to close redundant connections that
+ ## are allocated on the 1st thread.
+ from invenio.dbquery import close_connection
+ close_connection()
if 'werkzeug-debugger' in getattr(config, 'CFG_DEVEL_TOOLS', []):
from werkzeug.debug import DebuggedApplication
application = DebuggedApplication(application, evalex=True)
diff --git a/modules/webstyle/lib/webstyle_regression_tests.py b/modules/webstyle/lib/webstyle_regression_tests.py
index e6efd8681..ac81d64f0 100644
--- a/modules/webstyle/lib/webstyle_regression_tests.py
+++ b/modules/webstyle/lib/webstyle_regression_tests.py
@@ -1,167 +1,167 @@
# -*- coding: utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""WebStyle Regression Test Suite."""
__revision__ = "$Id$"
import httplib
import os
import urlparse
import mechanize
from flask import url_for
from urllib2 import urlopen, HTTPError
from invenio.config import CFG_SITE_URL, CFG_SITE_SECURE_URL, CFG_PREFIX, CFG_DEVEL_SITE
from invenio.testutils import InvenioTestCase, make_test_suite, run_test_suite, nottest
def get_final_url(url):
"""Perform a GET request to the given URL, discarding the result and return
the final one in case of redirections"""
response = urlopen(url)
response.read()
return response.url
class WebStyleWSGIUtilsTests(InvenioTestCase):
"""Test WSGI Utils."""
if CFG_DEVEL_SITE:
def test_iteration_over_posted_file(self):
"""webstyle - posting a file via form upload"""
path = os.path.join(CFG_PREFIX, 'lib', 'webtest', 'invenio', 'test.gif')
body = open(path).read()
br = mechanize.Browser()
br.open(CFG_SITE_URL + '/httptest/post1').read()
br.select_form(nr=0)
br.form.add_file(open(path))
body2 = br.submit().read()
self.assertEqual(body, body2, "Body sent differs from body received")
if CFG_DEVEL_SITE:
def test_posting_file(self):
"""webstyle - direct posting of a file"""
from invenio.bibdocfile import calculate_md5
path = os.path.join(CFG_PREFIX, 'lib', 'webtest', 'invenio', 'test.gif')
body = open(path).read()
md5 = calculate_md5(path)
mimetype = 'image/gif'
connection = httplib.HTTPConnection(urlparse.urlsplit(CFG_SITE_URL)[1])
connection.request('POST', '/httptest/post2', body, {'Content-MD5': md5, 'Content-Type': mimetype, 'Content-Disposition': 'filename=test.gif'})
response = connection.getresponse()
body2 = response.read()
self.assertEqual(body, body2, "Body sent differs from body received")
class WebStyleGotoTests(InvenioTestCase):
"""Test the goto framework"""
def tearDown(self):
from invenio.goto_engine import drop_redirection
drop_redirection('first_record')
drop_redirection('invalid_external')
drop_redirection('latest_article')
drop_redirection('latest_pdf_article')
def test_plugin_availability(self):
"""webstyle - test GOTO plugin availability"""
from invenio.goto_engine import CFG_GOTO_PLUGINS
self.failUnless('goto_plugin_simple' in CFG_GOTO_PLUGINS)
self.failUnless('goto_plugin_latest_record' in CFG_GOTO_PLUGINS)
self.failUnless('goto_plugin_cern_hr_documents' in CFG_GOTO_PLUGINS)
self.failIf(CFG_GOTO_PLUGINS.get_broken_plugins())
def test_simple_relative_redirection(self):
"""webstyle - test simple relative redirection via goto_plugin_simple"""
from invenio.goto_engine import register_redirection
register_redirection('first_record', 'goto_plugin_simple', parameters={'url': '/record/1'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/first_record'), CFG_SITE_URL + '/record/1')
def test_simple_absolute_redirection(self):
"""webstyle - test simple absolute redirection via goto_plugin_simple"""
from invenio.goto_engine import register_redirection
register_redirection('first_record', 'goto_plugin_simple', parameters={'url': CFG_SITE_URL + '/record/1'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/first_record'), CFG_SITE_URL + '/record/1')
def test_simple_absolute_redirection_https(self):
"""webstyle - test simple absolute redirection to https via goto_plugin_simple"""
from invenio.goto_engine import register_redirection
register_redirection('first_record', 'goto_plugin_simple', parameters={'url': CFG_SITE_SECURE_URL + '/record/1'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/first_record'), CFG_SITE_SECURE_URL + '/record/1')
def test_invalid_external_redirection(self):
"""webstyle - test simple absolute redirection to https via goto_plugin_simple"""
from invenio.goto_engine import register_redirection
register_redirection('invalid_external', 'goto_plugin_simple', parameters={'url': 'http://www.google.com'})
self.assertRaises(HTTPError, get_final_url, CFG_SITE_URL + '/goto/invalid_external')
def test_latest_article_redirection(self):
"""webstyle - test redirecting to latest article via goto_plugin_latest_record"""
from invenio.goto_engine import register_redirection
register_redirection('latest_article', 'goto_plugin_latest_record', parameters={'cc': 'Articles'})
- self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/latest_article'), CFG_SITE_URL + '/record/108')
+ self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/latest_article'), CFG_SITE_URL + '/record/128')
@nottest
def FIXME_TICKET_1293_test_latest_pdf_article_redirection(self):
"""webstyle - test redirecting to latest article via goto_plugin_latest_record"""
from invenio.goto_engine import register_redirection
register_redirection('latest_pdf_article', 'goto_plugin_latest_record', parameters={'cc': 'Articles', 'format': '.pdf'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/latest_pdf_article'), CFG_SITE_URL + '/record/97/files/0002060.pdf')
@nottest
def FIXME_TICKET_1293_test_URL_argument_in_redirection(self):
"""webstyle - test redirecting while passing arguments on the URL"""
from invenio.goto_engine import register_redirection
register_redirection('latest_article', 'goto_plugin_latest_record', parameters={'cc': 'Articles'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/latest_article?format=.pdf'), CFG_SITE_URL + '/record/97/files/0002060.pdf')
def test_updating_redirection(self):
"""webstyle - test updating redirection"""
from invenio.goto_engine import register_redirection, update_redirection
register_redirection('first_record', 'goto_plugin_simple', parameters={'url': '/record/1'})
update_redirection('first_record', 'goto_plugin_simple', parameters={'url': '/record/2'})
self.assertEqual(get_final_url(CFG_SITE_URL + '/goto/first_record'), CFG_SITE_URL + '/record/2')
class WebInterfaceHandlerFlaskTest(InvenioTestCase):
"""Test webinterface handlers."""
def test_authenticated_decorator(self):
response = self.client.get(url_for('webmessage.index'),
base_url=CFG_SITE_SECURE_URL,
follow_redirects=True)
self.assert401(response)
self.login('admin', '')
response = self.client.get(url_for('webmessage.index'),
base_url=CFG_SITE_SECURE_URL,
follow_redirects=True)
self.assert200(response)
self.logout()
response = self.client.get(url_for('webmessage.index'),
base_url=CFG_SITE_SECURE_URL,
follow_redirects=True)
self.assert401(response)
TEST_SUITE = make_test_suite(WebStyleWSGIUtilsTests,
WebStyleGotoTests,
WebInterfaceHandlerFlaskTest)
if __name__ == "__main__":
run_test_suite(TEST_SUITE, warn_user=True)
diff --git a/po/LINGUAS b/po/LINGUAS
index 8f21a0517..a3d806434 100644
--- a/po/LINGUAS
+++ b/po/LINGUAS
@@ -1,46 +1,47 @@
## This file is part of Invenio.
-## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
##
## This is the list of all languages supported by Invenio:
af
ar
bg
ca
cs
de
el
en
es
+fa
fr
hr
hu
gl
it
ka
lt
ja
no
pl
pt
ro
ru
rw
sk
sv
uk
zh_CN
zh_TW
diff --git a/po/es.po b/po/es.po
index 9bb5d8b4c..75d1f335b 100644
--- a/po/es.po
+++ b/po/es.po
@@ -1,14220 +1,14219 @@
# # This file is part of Invenio.
# # Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
# #
# # Invenio is free software; you can redistribute it and/or
# # modify it under the terms of the GNU General Public License as
# # published by the Free Software Foundation; either version 2 of the
# # License, or (at your option) any later version.
# #
# # Invenio is distributed in the hope that it will be useful, but
# # WITHOUT ANY WARRANTY; without even the implied warranty of
# # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# # General Public License for more details.
# #
# # You should have received a copy of the GNU General Public License
# # along with Invenio; if not, write to the Free Software Foundation, Inc.,
# # 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
msgid ""
msgstr ""
"Project-Id-Version: Invenio 1.1.1\n"
"Report-Msgid-Bugs-To: info@invenio-software.org\n"
"POT-Creation-Date: 2011-12-19 22:12+0100\n"
"PO-Revision-Date: 2013-02-25 11:40+0100\n"
"Last-Translator: Ferran Jorba <Ferran.Jorba@uab.cat>\n"
"Language-Team: ES <info@invenio-software.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: pygettext.py 1.5\n"
#: modules/websearch/doc/search-guide.webdoc:361
#: modules/websearch/doc/search-guide.webdoc:396
#: modules/websearch/doc/search-guide.webdoc:493
#: modules/websearch/doc/search-guide.webdoc:528
#: modules/websearch/doc/search-guide.webdoc:630
#: modules/websearch/doc/search-guide.webdoc:665
#: modules/websearch/doc/search-guide.webdoc:768
#: modules/websearch/doc/search-guide.webdoc:803
#: modules/websearch/lib/search_engine.py:1200
#: modules/websearch/lib/websearch_templates.py:1152
msgid "AND NOT"
msgstr "Y NO"
#: modules/webhelp/web/admin/admin.webdoc:18
#: modules/websearch/doc/admin/websearch-admin-guide.webdoc:21
#: modules/websubmit/doc/admin/websubmit-admin-guide.webdoc:21
#: modules/bibedit/doc/admin/bibedit-admin-guide.webdoc:21
#: modules/bibupload/doc/admin/bibupload-admin-guide.webdoc:21
#: modules/bibformat/doc/admin/bibformat-admin-guide.webdoc:21
#: modules/bibharvest/doc/admin/bibharvest-admin-guide.webdoc:21
#: modules/webmessage/doc/admin/webmessage-admin-guide.webdoc:21
#: modules/webalert/doc/admin/webalert-admin-guide.webdoc:21
#: modules/bibclassify/doc/admin/bibclassify-admin-guide.webdoc:22
#: modules/bibmatch/doc/admin/bibmatch-admin-guide.webdoc:21
#: modules/bibconvert/doc/admin/bibconvert-admin-guide.webdoc:21
#: modules/bibsched/doc/admin/bibsched-admin-guide.webdoc:21
#: modules/bibrank/doc/admin/bibrank-admin-guide.webdoc:21
#: modules/webstat/doc/admin/webstat-admin-guide.webdoc:21
#: modules/bibindex/doc/admin/bibindex-admin-guide.webdoc:21
#: modules/webbasket/doc/admin/webbasket-admin-guide.webdoc:21
#: modules/webcomment/doc/admin/webcomment-admin-guide.webdoc:21
#: modules/websession/doc/admin/websession-admin-guide.webdoc:21
#: modules/webstyle/doc/admin/webstyle-admin-guide.webdoc:21
#: modules/elmsubmit/doc/admin/elmsubmit-admin-guide.webdoc:21
#: modules/bibformat/lib/bibformatadminlib.py:61
#: modules/bibformat/web/admin/bibformatadmin.py:70
#: modules/webcomment/lib/webcommentadminlib.py:45
#: modules/webstyle/lib/webdoc_webinterface.py:153
#: modules/bibcheck/web/admin/bibcheckadmin.py:57
#: modules/bibcheck/web/admin/bibcheckadmin.py:159
#: modules/bibcheck/web/admin/bibcheckadmin.py:203
#: modules/bibcheck/web/admin/bibcheckadmin.py:263
#: modules/bibcheck/web/admin/bibcheckadmin.py:302
#: modules/bibknowledge/lib/bibknowledgeadmin.py:70
msgid "Admin Area"
msgstr "Zona de administración"
#: modules/websearch/doc/search-guide.webdoc:427
#: modules/websearch/doc/search-guide.webdoc:559
#: modules/websearch/doc/search-guide.webdoc:696
#: modules/websearch/doc/search-guide.webdoc:834
#: modules/websearch/lib/search_engine.py:4692
#: modules/websearch/lib/websearch_templates.py:793
#: modules/websearch/lib/websearch_templates.py:871
#: modules/websearch/lib/websearch_templates.py:994
#: modules/websearch/lib/websearch_templates.py:1965
#: modules/websearch/lib/websearch_templates.py:2056
#: modules/websearch/lib/websearch_templates.py:2113
#: modules/websearch/lib/websearch_templates.py:2170
#: modules/websearch/lib/websearch_templates.py:2209
#: modules/websearch/lib/websearch_templates.py:2232
#: modules/websearch/lib/websearch_templates.py:2263
msgid "Browse"
msgstr "Índice"
#: modules/webhelp/web/help-central.webdoc:50
#: modules/websearch/doc/search-tips.webdoc:20
#: modules/websearch/lib/websearch_templates.py:794
#: modules/websearch/lib/websearch_templates.py:872
#: modules/websearch/lib/websearch_templates.py:995
#: modules/websearch/lib/websearch_templates.py:2060
#: modules/websearch/lib/websearch_templates.py:2117
#: modules/websearch/lib/websearch_templates.py:2174
msgid "Search Tips"
msgstr "Consejos para la búsqueda"
#: modules/websearch/doc/search-guide.webdoc:343
#: modules/websearch/doc/search-guide.webdoc:378
#: modules/websearch/doc/search-guide.webdoc:413
#: modules/websearch/doc/search-guide.webdoc:475
#: modules/websearch/doc/search-guide.webdoc:510
#: modules/websearch/doc/search-guide.webdoc:545
#: modules/websearch/doc/search-guide.webdoc:612
#: modules/websearch/doc/search-guide.webdoc:647
#: modules/websearch/doc/search-guide.webdoc:682
#: modules/websearch/doc/search-guide.webdoc:750
#: modules/websearch/doc/search-guide.webdoc:785
#: modules/websearch/doc/search-guide.webdoc:820
#: modules/miscutil/lib/inveniocfg.py:482
msgid "abstract"
msgstr "resumen"
#: modules/websearch/doc/search-guide.webdoc:348
#: modules/websearch/doc/search-guide.webdoc:383
#: modules/websearch/doc/search-guide.webdoc:418
#: modules/websearch/doc/search-guide.webdoc:480
#: modules/websearch/doc/search-guide.webdoc:515
#: modules/websearch/doc/search-guide.webdoc:550
#: modules/websearch/doc/search-guide.webdoc:617
#: modules/websearch/doc/search-guide.webdoc:652
#: modules/websearch/doc/search-guide.webdoc:687
#: modules/websearch/doc/search-guide.webdoc:755
#: modules/websearch/doc/search-guide.webdoc:790
#: modules/websearch/doc/search-guide.webdoc:825
#: modules/miscutil/lib/inveniocfg.py:487
msgid "fulltext"
msgstr "texto completo"
#: modules/websearch/doc/search-guide.webdoc:337
#: modules/websearch/doc/search-guide.webdoc:373
#: modules/websearch/doc/search-guide.webdoc:408
#: modules/websearch/doc/search-guide.webdoc:469
#: modules/websearch/doc/search-guide.webdoc:505
#: modules/websearch/doc/search-guide.webdoc:540
#: modules/websearch/doc/search-guide.webdoc:606
#: modules/websearch/doc/search-guide.webdoc:642
#: modules/websearch/doc/search-guide.webdoc:677
#: modules/websearch/doc/search-guide.webdoc:744
#: modules/websearch/doc/search-guide.webdoc:780
#: modules/websearch/doc/search-guide.webdoc:815
#: modules/websearch/lib/search_engine.py:1222
#: modules/websearch/lib/websearch_templates.py:1108
msgid "Regular expression:"
msgstr "Expresión regular:"
#: modules/websearch/doc/search-guide.webdoc:333
#: modules/websearch/doc/search-guide.webdoc:369
#: modules/websearch/doc/search-guide.webdoc:404
#: modules/websearch/doc/search-guide.webdoc:465
#: modules/websearch/doc/search-guide.webdoc:501
#: modules/websearch/doc/search-guide.webdoc:536
#: modules/websearch/doc/search-guide.webdoc:602
#: modules/websearch/doc/search-guide.webdoc:638
#: modules/websearch/doc/search-guide.webdoc:673
#: modules/websearch/doc/search-guide.webdoc:740
#: modules/websearch/doc/search-guide.webdoc:776
#: modules/websearch/doc/search-guide.webdoc:811
#: modules/websearch/lib/search_engine.py:1218
#: modules/websearch/lib/websearch_templates.py:1100
msgid "All of the words:"
msgstr "Todas las palabras:"
#: modules/websearch/doc/search-guide.webdoc:351
#: modules/websearch/doc/search-guide.webdoc:386
#: modules/websearch/doc/search-guide.webdoc:421
#: modules/websearch/doc/search-guide.webdoc:483
#: modules/websearch/doc/search-guide.webdoc:518
#: modules/websearch/doc/search-guide.webdoc:553
#: modules/websearch/doc/search-guide.webdoc:620
#: modules/websearch/doc/search-guide.webdoc:655
#: modules/websearch/doc/search-guide.webdoc:690
#: modules/websearch/doc/search-guide.webdoc:758
#: modules/websearch/doc/search-guide.webdoc:793
#: modules/websearch/doc/search-guide.webdoc:828
#: modules/miscutil/lib/inveniocfg.py:484
msgid "report number"
msgstr "número de reporte"
#: modules/websearch/doc/search-tips.webdoc:406
#: modules/websearch/doc/search-tips.webdoc:413
#: modules/websearch/doc/search-tips.webdoc:414
#: modules/websearch/doc/search-tips.webdoc:415
#: modules/websearch/doc/search-tips.webdoc:433
#: modules/websearch/doc/search-tips.webdoc:434
#: modules/websearch/doc/search-tips.webdoc:435
#: modules/websearch/doc/search-guide.webdoc:354
#: modules/websearch/doc/search-guide.webdoc:389
#: modules/websearch/doc/search-guide.webdoc:424
#: modules/websearch/doc/search-guide.webdoc:486
#: modules/websearch/doc/search-guide.webdoc:521
#: modules/websearch/doc/search-guide.webdoc:556
#: modules/websearch/doc/search-guide.webdoc:623
#: modules/websearch/doc/search-guide.webdoc:658
#: modules/websearch/doc/search-guide.webdoc:693
#: modules/websearch/doc/search-guide.webdoc:761
#: modules/websearch/doc/search-guide.webdoc:796
#: modules/websearch/doc/search-guide.webdoc:831
#: modules/miscutil/lib/inveniocfg.py:490
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
msgid "year"
msgstr "año"
# Debe traducirse igual que el 'Subject' del correo electrónico
#: modules/websearch/doc/search-guide.webdoc:352
#: modules/websearch/doc/search-guide.webdoc:387
#: modules/websearch/doc/search-guide.webdoc:422
#: modules/websearch/doc/search-guide.webdoc:484
#: modules/websearch/doc/search-guide.webdoc:519
#: modules/websearch/doc/search-guide.webdoc:554
#: modules/websearch/doc/search-guide.webdoc:621
#: modules/websearch/doc/search-guide.webdoc:656
#: modules/websearch/doc/search-guide.webdoc:691
#: modules/websearch/doc/search-guide.webdoc:759
#: modules/websearch/doc/search-guide.webdoc:794
#: modules/websearch/doc/search-guide.webdoc:829
#: modules/miscutil/lib/inveniocfg.py:485
msgid "subject"
msgstr "materia"
#: modules/websearch/doc/search-guide.webdoc:336
#: modules/websearch/doc/search-guide.webdoc:372
#: modules/websearch/doc/search-guide.webdoc:407
#: modules/websearch/doc/search-guide.webdoc:468
#: modules/websearch/doc/search-guide.webdoc:504
#: modules/websearch/doc/search-guide.webdoc:539
#: modules/websearch/doc/search-guide.webdoc:605
#: modules/websearch/doc/search-guide.webdoc:641
#: modules/websearch/doc/search-guide.webdoc:676
#: modules/websearch/doc/search-guide.webdoc:743
#: modules/websearch/doc/search-guide.webdoc:779
#: modules/websearch/doc/search-guide.webdoc:814
#: modules/websearch/lib/search_engine.py:1221
#: modules/websearch/lib/websearch_templates.py:1106
msgid "Partial phrase:"
msgstr "Frase parcial:"
#: modules/websearch/doc/search-guide.webdoc:350
#: modules/websearch/doc/search-guide.webdoc:385
#: modules/websearch/doc/search-guide.webdoc:420
#: modules/websearch/doc/search-guide.webdoc:482
#: modules/websearch/doc/search-guide.webdoc:517
#: modules/websearch/doc/search-guide.webdoc:552
#: modules/websearch/doc/search-guide.webdoc:619
#: modules/websearch/doc/search-guide.webdoc:654
#: modules/websearch/doc/search-guide.webdoc:689
#: modules/websearch/doc/search-guide.webdoc:757
#: modules/websearch/doc/search-guide.webdoc:792
#: modules/websearch/doc/search-guide.webdoc:827
#: modules/miscutil/lib/inveniocfg.py:486
msgid "reference"
msgstr "referencia"
#: modules/websearch/doc/search-tips.webdoc:37
#: modules/websearch/doc/search-tips.webdoc:68
#: modules/websearch/doc/search-tips.webdoc:106
#: modules/websearch/doc/search-tips.webdoc:154
#: modules/websearch/doc/search-tips.webdoc:177
#: modules/websearch/doc/search-tips.webdoc:201
#: modules/websearch/doc/search-tips.webdoc:246
#: modules/websearch/doc/search-tips.webdoc:286
#: modules/websearch/doc/search-tips.webdoc:297
#: modules/websearch/doc/search-tips.webdoc:317
#: modules/websearch/doc/search-tips.webdoc:337
#: modules/websearch/doc/search-tips.webdoc:371
#: modules/websearch/doc/search-tips.webdoc:405
#: modules/websearch/doc/search-tips.webdoc:426
#: modules/websearch/doc/search-tips.webdoc:446
#: modules/websearch/doc/search-tips.webdoc:480
#: modules/websearch/doc/search-tips.webdoc:498
#: modules/websearch/doc/search-tips.webdoc:517
#: modules/websearch/doc/search-tips.webdoc:550
#: modules/websearch/doc/search-tips.webdoc:575
#: modules/websearch/doc/search-tips.webdoc:583
#: modules/websearch/doc/search-tips.webdoc:586
#: modules/websearch/doc/search-tips.webdoc:588
#: modules/websearch/doc/search-tips.webdoc:599
#: modules/websearch/doc/search-tips.webdoc:620
#: modules/websearch/doc/search-guide.webdoc:226
#: modules/websearch/doc/search-guide.webdoc:250
#: modules/websearch/doc/search-guide.webdoc:276
#: modules/websearch/doc/search-guide.webdoc:301
#: modules/websearch/doc/search-guide.webdoc:344
#: modules/websearch/doc/search-guide.webdoc:379
#: modules/websearch/doc/search-guide.webdoc:414
#: modules/websearch/doc/search-guide.webdoc:476
#: modules/websearch/doc/search-guide.webdoc:511
#: modules/websearch/doc/search-guide.webdoc:546
#: modules/websearch/doc/search-guide.webdoc:613
#: modules/websearch/doc/search-guide.webdoc:648
#: modules/websearch/doc/search-guide.webdoc:683
#: modules/websearch/doc/search-guide.webdoc:751
#: modules/websearch/doc/search-guide.webdoc:786
#: modules/websearch/doc/search-guide.webdoc:821
#: modules/websearch/doc/search-guide.webdoc:881
#: modules/websearch/doc/search-guide.webdoc:912
#: modules/websearch/doc/search-guide.webdoc:952
#: modules/websearch/doc/search-guide.webdoc:986
#: modules/websearch/doc/search-guide.webdoc:1026
#: modules/websearch/doc/search-guide.webdoc:1048
#: modules/websearch/doc/search-guide.webdoc:1068
#: modules/websearch/doc/search-guide.webdoc:1084
#: modules/websearch/doc/search-guide.webdoc:1124
#: modules/websearch/doc/search-guide.webdoc:1147
#: modules/websearch/doc/search-guide.webdoc:1168
#: modules/websearch/doc/search-guide.webdoc:1183
#: modules/websearch/doc/search-guide.webdoc:1227
#: modules/websearch/doc/search-guide.webdoc:1252
#: modules/websearch/doc/search-guide.webdoc:1273
#: modules/websearch/doc/search-guide.webdoc:1289
#: modules/websearch/doc/search-guide.webdoc:1334
#: modules/websearch/doc/search-guide.webdoc:1357
#: modules/websearch/doc/search-guide.webdoc:1379
#: modules/websearch/doc/search-guide.webdoc:1395
#: modules/websearch/doc/search-guide.webdoc:1765
#: modules/websearch/doc/search-guide.webdoc:1779
#: modules/websearch/doc/search-guide.webdoc:1797
#: modules/websearch/doc/search-guide.webdoc:1816
#: modules/websearch/doc/search-guide.webdoc:1829
#: modules/websearch/doc/search-guide.webdoc:1847
#: modules/websearch/doc/search-guide.webdoc:1867
#: modules/websearch/doc/search-guide.webdoc:1882
#: modules/websearch/doc/search-guide.webdoc:1901
#: modules/websearch/doc/search-guide.webdoc:1924
#: modules/websearch/doc/search-guide.webdoc:1939
#: modules/websearch/doc/search-guide.webdoc:1958
#: modules/websearch/doc/search-guide.webdoc:1986
#: modules/websearch/doc/search-guide.webdoc:2024
#: modules/websearch/doc/search-guide.webdoc:2035
#: modules/websearch/doc/search-guide.webdoc:2049
#: modules/websearch/doc/search-guide.webdoc:2063
#: modules/websearch/doc/search-guide.webdoc:2076
#: modules/websearch/doc/search-guide.webdoc:2092
#: modules/websearch/doc/search-guide.webdoc:2103
#: modules/websearch/doc/search-guide.webdoc:2117
#: modules/websearch/doc/search-guide.webdoc:2131
#: modules/websearch/doc/search-guide.webdoc:2144
#: modules/websearch/doc/search-guide.webdoc:2160
#: modules/websearch/doc/search-guide.webdoc:2171
#: modules/websearch/doc/search-guide.webdoc:2185
#: modules/websearch/doc/search-guide.webdoc:2199
#: modules/websearch/doc/search-guide.webdoc:2212
#: modules/websearch/doc/search-guide.webdoc:2230
#: modules/websearch/doc/search-guide.webdoc:2241
#: modules/websearch/doc/search-guide.webdoc:2255
#: modules/websearch/doc/search-guide.webdoc:2269
#: modules/websearch/doc/search-guide.webdoc:2282
#: modules/websearch/doc/search-guide.webdoc:2311
#: modules/websearch/doc/search-guide.webdoc:2325
#: modules/websearch/doc/search-guide.webdoc:2342
#: modules/websearch/doc/search-guide.webdoc:2355
#: modules/websearch/doc/search-guide.webdoc:2372
#: modules/websearch/doc/search-guide.webdoc:2386
#: modules/websearch/doc/search-guide.webdoc:2404
#: modules/websearch/doc/search-guide.webdoc:2418
#: modules/websearch/doc/search-guide.webdoc:2449
#: modules/websearch/doc/search-guide.webdoc:2464
#: modules/websearch/doc/search-guide.webdoc:2478
#: modules/websearch/doc/search-guide.webdoc:2493
#: modules/websearch/doc/search-guide.webdoc:2521
#: modules/websearch/doc/search-guide.webdoc:2536
#: modules/websearch/doc/search-guide.webdoc:2550
#: modules/websearch/doc/search-guide.webdoc:2566
#: modules/websearch/doc/search-guide.webdoc:2598
#: modules/websearch/doc/search-guide.webdoc:2614
#: modules/websearch/doc/search-guide.webdoc:2628
#: modules/websearch/doc/search-guide.webdoc:2643
#: modules/websearch/doc/search-guide.webdoc:2674
#: modules/websearch/doc/search-guide.webdoc:2690
#: modules/websearch/doc/search-guide.webdoc:2704
#: modules/websearch/doc/search-guide.webdoc:2719
#: modules/websearch/doc/search-guide.webdoc:2761
#: modules/websearch/doc/search-guide.webdoc:2776
#: modules/websearch/doc/search-guide.webdoc:2790
#: modules/websearch/doc/search-guide.webdoc:2815
#: modules/websearch/doc/search-guide.webdoc:2830
#: modules/websearch/doc/search-guide.webdoc:2844
#: modules/websearch/doc/search-guide.webdoc:2873
#: modules/websearch/doc/search-guide.webdoc:2888
#: modules/websearch/doc/search-guide.webdoc:2902
#: modules/websearch/doc/search-guide.webdoc:2930
#: modules/websearch/doc/search-guide.webdoc:2945
#: modules/websearch/doc/search-guide.webdoc:2958
#: modules/websearch/doc/search-guide.webdoc:2993
#: modules/websearch/doc/search-guide.webdoc:3015
#: modules/websearch/doc/search-guide.webdoc:3039
#: modules/websearch/doc/search-guide.webdoc:3063
#: modules/websearch/doc/search-guide.webdoc:3087
#: modules/websearch/doc/search-guide.webdoc:3102
#: modules/websearch/doc/search-guide.webdoc:3118
#: modules/websearch/doc/search-guide.webdoc:3135
#: modules/websearch/doc/search-guide.webdoc:3155
#: modules/websearch/doc/search-guide.webdoc:3173
#: modules/websearch/doc/search-guide.webdoc:3191
#: modules/websearch/doc/search-guide.webdoc:3210
#: modules/websearch/doc/search-guide.webdoc:3231
#: modules/websearch/doc/search-guide.webdoc:3245
#: modules/websearch/doc/search-guide.webdoc:3265
#: modules/websearch/doc/search-guide.webdoc:3280
#: modules/websearch/doc/search-guide.webdoc:3299
#: modules/websearch/doc/search-guide.webdoc:3314
#: modules/websearch/doc/search-guide.webdoc:3334
#: modules/websearch/doc/search-guide.webdoc:3349
#: modules/websearch/doc/search-guide.webdoc:3411
#: modules/websearch/doc/search-guide.webdoc:3425
#: modules/websearch/doc/search-guide.webdoc:3442
#: modules/websearch/doc/search-guide.webdoc:3455
#: modules/websearch/doc/search-guide.webdoc:3473
#: modules/websearch/doc/search-guide.webdoc:3488
#: modules/websearch/doc/search-guide.webdoc:3506
#: modules/websearch/doc/search-guide.webdoc:3521
#: modules/websearch/doc/search-guide.webdoc:3546
#: modules/websearch/doc/search-guide.webdoc:3559
#: modules/websearch/doc/search-guide.webdoc:3572
#: modules/websearch/doc/search-guide.webdoc:3588
#: modules/websearch/doc/search-guide.webdoc:3604
#: modules/websearch/doc/search-guide.webdoc:3621
#: modules/websearch/doc/search-guide.webdoc:3654
#: modules/websearch/doc/search-guide.webdoc:3670
#: modules/websearch/doc/search-guide.webdoc:3687
#: modules/websearch/doc/search-guide.webdoc:3707
#: modules/websearch/doc/search-guide.webdoc:3721
#: modules/websearch/doc/search-guide.webdoc:3739
#: modules/websearch/doc/search-guide.webdoc:3760
#: modules/websearch/doc/search-guide.webdoc:3779
#: modules/websearch/doc/search-guide.webdoc:3797
#: modules/websearch/doc/search-guide.webdoc:3819
#: modules/websearch/doc/search-guide.webdoc:3838
#: modules/websearch/doc/search-guide.webdoc:3855
#: modules/websearch/doc/search-guide.webdoc:3976
#: modules/websearch/doc/search-guide.webdoc:4001
#: modules/websearch/doc/search-guide.webdoc:4024
#: modules/websearch/doc/search-guide.webdoc:4050
#: modules/websearch/doc/search-guide.webdoc:4074
#: modules/websearch/doc/search-guide.webdoc:4101
#: modules/websearch/doc/search-guide.webdoc:4126
#: modules/websearch/doc/search-guide.webdoc:4152
#: modules/websearch/doc/search-guide.webdoc:4181
#: modules/websearch/doc/search-guide.webdoc:4201
#: modules/websearch/doc/search-guide.webdoc:4225
#: modules/websearch/doc/search-guide.webdoc:4252
#: modules/websearch/doc/search-guide.webdoc:4292
#: modules/websearch/doc/search-guide.webdoc:4313
#: modules/websearch/doc/search-guide.webdoc:4337
#: modules/websearch/doc/search-guide.webdoc:4367
#: modules/websearch/doc/search-guide.webdoc:4411
#: modules/websearch/doc/search-guide.webdoc:4433
#: modules/websearch/doc/search-guide.webdoc:4458
#: modules/websearch/doc/search-guide.webdoc:4488
#: modules/websearch/doc/search-guide.webdoc:4533
#: modules/websearch/doc/search-guide.webdoc:4554
#: modules/websearch/doc/search-guide.webdoc:4579
#: modules/websearch/doc/search-guide.webdoc:4609
#: modules/websearch/doc/search-guide.webdoc:4901
#: modules/websearch/doc/search-guide.webdoc:4917
#: modules/websearch/doc/search-guide.webdoc:4937
#: modules/websearch/doc/search-guide.webdoc:4956
#: modules/websearch/doc/search-guide.webdoc:4977
#: modules/websearch/doc/search-guide.webdoc:4995
#: modules/websearch/doc/search-guide.webdoc:5016
#: modules/websearch/doc/search-guide.webdoc:5034
#: modules/websearch/doc/search-guide.webdoc:5067
#: modules/websearch/doc/search-guide.webdoc:5081
#: modules/websearch/doc/search-guide.webdoc:5096
#: modules/websearch/doc/search-guide.webdoc:5112
#: modules/websearch/doc/search-guide.webdoc:5131
#: modules/websearch/doc/search-guide.webdoc:5145
#: modules/websearch/doc/search-guide.webdoc:5161
#: modules/websearch/doc/search-guide.webdoc:5179
#: modules/websearch/doc/search-guide.webdoc:5198
#: modules/websearch/doc/search-guide.webdoc:5213
#: modules/websearch/doc/search-guide.webdoc:5228
#: modules/websearch/doc/search-guide.webdoc:5246
#: modules/websearch/doc/search-guide.webdoc:5266
#: modules/websearch/doc/search-guide.webdoc:5281
#: modules/websearch/doc/search-guide.webdoc:5296
#: modules/websearch/doc/search-guide.webdoc:5316
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:131
#: modules/miscutil/lib/inveniocfg.py:481
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7219
#: modules/bibcirculation/lib/bibcirculation_templates.py:7918
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16025
msgid "author"
msgstr "autor"

#: modules/webhelp/web/help-central.webdoc:98
#: modules/websearch/doc/search-guide.webdoc:20
msgid "Search Guide"
msgstr "Guía de búsqueda"

#: modules/websearch/doc/search-guide.webdoc:347
#: modules/websearch/doc/search-guide.webdoc:382
#: modules/websearch/doc/search-guide.webdoc:417
#: modules/websearch/doc/search-guide.webdoc:479
#: modules/websearch/doc/search-guide.webdoc:514
#: modules/websearch/doc/search-guide.webdoc:549
#: modules/websearch/doc/search-guide.webdoc:616
#: modules/websearch/doc/search-guide.webdoc:651
#: modules/websearch/doc/search-guide.webdoc:686
#: modules/websearch/doc/search-guide.webdoc:754
#: modules/websearch/doc/search-guide.webdoc:789
#: modules/websearch/doc/search-guide.webdoc:824
#: modules/miscutil/lib/inveniocfg.py:492
msgid "experiment"
msgstr "experimento"

#: modules/websearch/doc/search-guide.webdoc:334
#: modules/websearch/doc/search-guide.webdoc:370
#: modules/websearch/doc/search-guide.webdoc:405
#: modules/websearch/doc/search-guide.webdoc:466
#: modules/websearch/doc/search-guide.webdoc:502
#: modules/websearch/doc/search-guide.webdoc:537
#: modules/websearch/doc/search-guide.webdoc:603
#: modules/websearch/doc/search-guide.webdoc:639
#: modules/websearch/doc/search-guide.webdoc:674
#: modules/websearch/doc/search-guide.webdoc:741
#: modules/websearch/doc/search-guide.webdoc:777
#: modules/websearch/doc/search-guide.webdoc:812
#: modules/websearch/lib/search_engine.py:1219
#: modules/websearch/lib/websearch_templates.py:1102
msgid "Any of the words:"
msgstr "Al menos una de las palabras:"

#: modules/websearch/doc/search-guide.webdoc:346
#: modules/websearch/doc/search-guide.webdoc:381
#: modules/websearch/doc/search-guide.webdoc:416
#: modules/websearch/doc/search-guide.webdoc:478
#: modules/websearch/doc/search-guide.webdoc:513
#: modules/websearch/doc/search-guide.webdoc:548
#: modules/websearch/doc/search-guide.webdoc:615
#: modules/websearch/doc/search-guide.webdoc:650
#: modules/websearch/doc/search-guide.webdoc:685
#: modules/websearch/doc/search-guide.webdoc:753
#: modules/websearch/doc/search-guide.webdoc:788
#: modules/websearch/doc/search-guide.webdoc:823
#: modules/miscutil/lib/inveniocfg.py:489
msgid "division"
msgstr "división"

#: modules/websearch/doc/search-tips.webdoc:39
#: modules/websearch/doc/search-tips.webdoc:70
#: modules/websearch/doc/search-tips.webdoc:108
#: modules/websearch/doc/search-tips.webdoc:156
#: modules/websearch/doc/search-tips.webdoc:179
#: modules/websearch/doc/search-tips.webdoc:203
#: modules/websearch/doc/search-tips.webdoc:248
#: modules/websearch/doc/search-tips.webdoc:288
#: modules/websearch/doc/search-tips.webdoc:299
#: modules/websearch/doc/search-tips.webdoc:319
#: modules/websearch/doc/search-tips.webdoc:339
#: modules/websearch/doc/search-tips.webdoc:373
#: modules/websearch/doc/search-tips.webdoc:408
#: modules/websearch/doc/search-tips.webdoc:428
#: modules/websearch/doc/search-tips.webdoc:448
#: modules/websearch/doc/search-tips.webdoc:482
#: modules/websearch/doc/search-tips.webdoc:500
#: modules/websearch/doc/search-tips.webdoc:519
#: modules/websearch/doc/search-tips.webdoc:552
#: modules/websearch/doc/search-tips.webdoc:577
#: modules/websearch/doc/search-tips.webdoc:601
#: modules/websearch/doc/search-tips.webdoc:622
#: modules/websearch/doc/search-guide.webdoc:227
#: modules/websearch/doc/search-guide.webdoc:251
#: modules/websearch/doc/search-guide.webdoc:277
#: modules/websearch/doc/search-guide.webdoc:302
#: modules/websearch/doc/search-guide.webdoc:427
#: modules/websearch/doc/search-guide.webdoc:559
#: modules/websearch/doc/search-guide.webdoc:696
#: modules/websearch/doc/search-guide.webdoc:834
#: modules/websearch/doc/search-guide.webdoc:882
#: modules/websearch/doc/search-guide.webdoc:913
#: modules/websearch/doc/search-guide.webdoc:953
#: modules/websearch/doc/search-guide.webdoc:987
#: modules/websearch/doc/search-guide.webdoc:1027
#: modules/websearch/doc/search-guide.webdoc:1049
#: modules/websearch/doc/search-guide.webdoc:1069
#: modules/websearch/doc/search-guide.webdoc:1085
#: modules/websearch/doc/search-guide.webdoc:1125
#: modules/websearch/doc/search-guide.webdoc:1148
#: modules/websearch/doc/search-guide.webdoc:1169
#: modules/websearch/doc/search-guide.webdoc:1184
#: modules/websearch/doc/search-guide.webdoc:1228
#: modules/websearch/doc/search-guide.webdoc:1253
#: modules/websearch/doc/search-guide.webdoc:1274
#: modules/websearch/doc/search-guide.webdoc:1290
#: modules/websearch/doc/search-guide.webdoc:1335
#: modules/websearch/doc/search-guide.webdoc:1358
#: modules/websearch/doc/search-guide.webdoc:1380
#: modules/websearch/doc/search-guide.webdoc:1396
#: modules/websearch/doc/search-guide.webdoc:1766
#: modules/websearch/doc/search-guide.webdoc:1780
#: modules/websearch/doc/search-guide.webdoc:1798
#: modules/websearch/doc/search-guide.webdoc:1817
#: modules/websearch/doc/search-guide.webdoc:1830
#: modules/websearch/doc/search-guide.webdoc:1848
#: modules/websearch/doc/search-guide.webdoc:1868
#: modules/websearch/doc/search-guide.webdoc:1883
#: modules/websearch/doc/search-guide.webdoc:1902
#: modules/websearch/doc/search-guide.webdoc:1925
#: modules/websearch/doc/search-guide.webdoc:1940
#: modules/websearch/doc/search-guide.webdoc:1959
#: modules/websearch/doc/search-guide.webdoc:1988
#: modules/websearch/doc/search-guide.webdoc:2025
#: modules/websearch/doc/search-guide.webdoc:2036
#: modules/websearch/doc/search-guide.webdoc:2050
#: modules/websearch/doc/search-guide.webdoc:2064
#: modules/websearch/doc/search-guide.webdoc:2077
#: modules/websearch/doc/search-guide.webdoc:2093
#: modules/websearch/doc/search-guide.webdoc:2104
#: modules/websearch/doc/search-guide.webdoc:2118
#: modules/websearch/doc/search-guide.webdoc:2132
#: modules/websearch/doc/search-guide.webdoc:2145
#: modules/websearch/doc/search-guide.webdoc:2161
#: modules/websearch/doc/search-guide.webdoc:2172
#: modules/websearch/doc/search-guide.webdoc:2186
#: modules/websearch/doc/search-guide.webdoc:2200
#: modules/websearch/doc/search-guide.webdoc:2213
#: modules/websearch/doc/search-guide.webdoc:2231
#: modules/websearch/doc/search-guide.webdoc:2242
#: modules/websearch/doc/search-guide.webdoc:2256
#: modules/websearch/doc/search-guide.webdoc:2270
#: modules/websearch/doc/search-guide.webdoc:2283
#: modules/websearch/doc/search-guide.webdoc:2312
#: modules/websearch/doc/search-guide.webdoc:2326
#: modules/websearch/doc/search-guide.webdoc:2343
#: modules/websearch/doc/search-guide.webdoc:2356
#: modules/websearch/doc/search-guide.webdoc:2373
#: modules/websearch/doc/search-guide.webdoc:2387
#: modules/websearch/doc/search-guide.webdoc:2405
#: modules/websearch/doc/search-guide.webdoc:2419
#: modules/websearch/doc/search-guide.webdoc:2450
#: modules/websearch/doc/search-guide.webdoc:2465
#: modules/websearch/doc/search-guide.webdoc:2479
#: modules/websearch/doc/search-guide.webdoc:2494
#: modules/websearch/doc/search-guide.webdoc:2522
#: modules/websearch/doc/search-guide.webdoc:2537
#: modules/websearch/doc/search-guide.webdoc:2551
#: modules/websearch/doc/search-guide.webdoc:2567
#: modules/websearch/doc/search-guide.webdoc:2599
#: modules/websearch/doc/search-guide.webdoc:2615
#: modules/websearch/doc/search-guide.webdoc:2629
#: modules/websearch/doc/search-guide.webdoc:2644
#: modules/websearch/doc/search-guide.webdoc:2675
#: modules/websearch/doc/search-guide.webdoc:2691
#: modules/websearch/doc/search-guide.webdoc:2705
#: modules/websearch/doc/search-guide.webdoc:2720
#: modules/websearch/doc/search-guide.webdoc:2762
#: modules/websearch/doc/search-guide.webdoc:2777
#: modules/websearch/doc/search-guide.webdoc:2791
#: modules/websearch/doc/search-guide.webdoc:2816
#: modules/websearch/doc/search-guide.webdoc:2831
#: modules/websearch/doc/search-guide.webdoc:2845
#: modules/websearch/doc/search-guide.webdoc:2874
#: modules/websearch/doc/search-guide.webdoc:2889
#: modules/websearch/doc/search-guide.webdoc:2903
#: modules/websearch/doc/search-guide.webdoc:2931
#: modules/websearch/doc/search-guide.webdoc:2946
#: modules/websearch/doc/search-guide.webdoc:2959
#: modules/websearch/doc/search-guide.webdoc:2994
#: modules/websearch/doc/search-guide.webdoc:3016
#: modules/websearch/doc/search-guide.webdoc:3040
#: modules/websearch/doc/search-guide.webdoc:3064
#: modules/websearch/doc/search-guide.webdoc:3088
#: modules/websearch/doc/search-guide.webdoc:3103
#: modules/websearch/doc/search-guide.webdoc:3119
#: modules/websearch/doc/search-guide.webdoc:3136
#: modules/websearch/doc/search-guide.webdoc:3156
#: modules/websearch/doc/search-guide.webdoc:3174
#: modules/websearch/doc/search-guide.webdoc:3192
#: modules/websearch/doc/search-guide.webdoc:3211
#: modules/websearch/doc/search-guide.webdoc:3232
#: modules/websearch/doc/search-guide.webdoc:3246
#: modules/websearch/doc/search-guide.webdoc:3266
#: modules/websearch/doc/search-guide.webdoc:3281
#: modules/websearch/doc/search-guide.webdoc:3300
#: modules/websearch/doc/search-guide.webdoc:3315
#: modules/websearch/doc/search-guide.webdoc:3335
#: modules/websearch/doc/search-guide.webdoc:3350
#: modules/websearch/doc/search-guide.webdoc:3412
#: modules/websearch/doc/search-guide.webdoc:3426
#: modules/websearch/doc/search-guide.webdoc:3443
#: modules/websearch/doc/search-guide.webdoc:3456
#: modules/websearch/doc/search-guide.webdoc:3474
#: modules/websearch/doc/search-guide.webdoc:3489
#: modules/websearch/doc/search-guide.webdoc:3507
#: modules/websearch/doc/search-guide.webdoc:3522
#: modules/websearch/doc/search-guide.webdoc:3547
#: modules/websearch/doc/search-guide.webdoc:3560
#: modules/websearch/doc/search-guide.webdoc:3573
#: modules/websearch/doc/search-guide.webdoc:3589
#: modules/websearch/doc/search-guide.webdoc:3605
#: modules/websearch/doc/search-guide.webdoc:3622
#: modules/websearch/doc/search-guide.webdoc:3655
#: modules/websearch/doc/search-guide.webdoc:3671
#: modules/websearch/doc/search-guide.webdoc:3688
#: modules/websearch/doc/search-guide.webdoc:3708
#: modules/websearch/doc/search-guide.webdoc:3722
#: modules/websearch/doc/search-guide.webdoc:3740
#: modules/websearch/doc/search-guide.webdoc:3761
#: modules/websearch/doc/search-guide.webdoc:3780
#: modules/websearch/doc/search-guide.webdoc:3798
#: modules/websearch/doc/search-guide.webdoc:3820
#: modules/websearch/doc/search-guide.webdoc:3839
#: modules/websearch/doc/search-guide.webdoc:3856
#: modules/websearch/doc/search-guide.webdoc:3977
#: modules/websearch/doc/search-guide.webdoc:4002
#: modules/websearch/doc/search-guide.webdoc:4025
#: modules/websearch/doc/search-guide.webdoc:4051
#: modules/websearch/doc/search-guide.webdoc:4075
#: modules/websearch/doc/search-guide.webdoc:4102
#: modules/websearch/doc/search-guide.webdoc:4127
#: modules/websearch/doc/search-guide.webdoc:4153
#: modules/websearch/doc/search-guide.webdoc:4182
#: modules/websearch/doc/search-guide.webdoc:4202
#: modules/websearch/doc/search-guide.webdoc:4226
#: modules/websearch/doc/search-guide.webdoc:4253
#: modules/websearch/doc/search-guide.webdoc:4293
#: modules/websearch/doc/search-guide.webdoc:4314
#: modules/websearch/doc/search-guide.webdoc:4338
#: modules/websearch/doc/search-guide.webdoc:4368
#: modules/websearch/doc/search-guide.webdoc:4412
#: modules/websearch/doc/search-guide.webdoc:4434
#: modules/websearch/doc/search-guide.webdoc:4459
#: modules/websearch/doc/search-guide.webdoc:4489
#: modules/websearch/doc/search-guide.webdoc:4534
#: modules/websearch/doc/search-guide.webdoc:4555
#: modules/websearch/doc/search-guide.webdoc:4580
#: modules/websearch/doc/search-guide.webdoc:4610
#: modules/websearch/doc/search-guide.webdoc:4902
#: modules/websearch/doc/search-guide.webdoc:4918
#: modules/websearch/doc/search-guide.webdoc:4938
#: modules/websearch/doc/search-guide.webdoc:4957
#: modules/websearch/doc/search-guide.webdoc:4978
#: modules/websearch/doc/search-guide.webdoc:4996
#: modules/websearch/doc/search-guide.webdoc:5017
#: modules/websearch/doc/search-guide.webdoc:5035
#: modules/websearch/doc/search-guide.webdoc:5068
#: modules/websearch/doc/search-guide.webdoc:5082
#: modules/websearch/doc/search-guide.webdoc:5097
#: modules/websearch/doc/search-guide.webdoc:5113
#: modules/websearch/doc/search-guide.webdoc:5132
#: modules/websearch/doc/search-guide.webdoc:5146
#: modules/websearch/doc/search-guide.webdoc:5162
#: modules/websearch/doc/search-guide.webdoc:5180
#: modules/websearch/doc/search-guide.webdoc:5199
#: modules/websearch/doc/search-guide.webdoc:5214
#: modules/websearch/doc/search-guide.webdoc:5229
#: modules/websearch/doc/search-guide.webdoc:5247
#: modules/websearch/doc/search-guide.webdoc:5267
#: modules/websearch/doc/search-guide.webdoc:5282
#: modules/websearch/doc/search-guide.webdoc:5297
#: modules/websearch/doc/search-guide.webdoc:5317
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:133
#: modules/websearch/lib/websearch_templates.py:792
#: modules/websearch/lib/websearch_templates.py:870
#: modules/websearch/lib/websearch_templates.py:993
#: modules/websearch/lib/websearch_templates.py:1962
#: modules/websearch/lib/websearch_templates.py:2055
#: modules/websearch/lib/websearch_templates.py:2112
#: modules/websearch/lib/websearch_templates.py:2169
#: modules/webstyle/lib/webstyle_templates.py:433
#: modules/webstyle/lib/webstyle_templates.py:502
#: modules/webstyle/lib/webdoc_tests.py:86
#: modules/bibedit/lib/bibeditmulti_templates.py:312
#: modules/bibcirculation/lib/bibcirculation_templates.py:161
#: modules/bibcirculation/lib/bibcirculation_templates.py:207
#: modules/bibcirculation/lib/bibcirculation_templates.py:1977
#: modules/bibcirculation/lib/bibcirculation_templates.py:2052
#: modules/bibcirculation/lib/bibcirculation_templates.py:2245
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6806
#: modules/bibcirculation/lib/bibcirculation_templates.py:7251
#: modules/bibcirculation/lib/bibcirculation_templates.py:7948
#: modules/bibcirculation/lib/bibcirculation_templates.py:8600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9190
#: modules/bibcirculation/lib/bibcirculation_templates.py:9687
#: modules/bibcirculation/lib/bibcirculation_templates.py:10162
#: modules/bibcirculation/lib/bibcirculation_templates.py:14248
#: modules/bibcirculation/lib/bibcirculation_templates.py:14571
#: modules/bibcirculation/lib/bibcirculation_templates.py:15243
#: modules/bibcirculation/lib/bibcirculation_templates.py:16055
#: modules/bibcirculation/lib/bibcirculation_templates.py:17122
#: modules/bibcirculation/lib/bibcirculation_templates.py:17604
#: modules/bibcirculation/lib/bibcirculation_templates.py:17788
#: modules/bibknowledge/lib/bibknowledge_templates.py:82
#: modules/bibknowledge/lib/bibknowledge_templates.py:423
msgid "Search"
msgstr "Buscar"

#: modules/webhelp/web/help-central.webdoc:133
msgid "Citation Metrics"
msgstr "Métricas de citas"

#: modules/websearch/doc/search-tips.webdoc:35
#: modules/websearch/doc/search-tips.webdoc:66
#: modules/websearch/doc/search-tips.webdoc:104
#: modules/websearch/doc/search-tips.webdoc:152
#: modules/websearch/doc/search-tips.webdoc:175
#: modules/websearch/doc/search-tips.webdoc:199
#: modules/websearch/doc/search-tips.webdoc:244
#: modules/websearch/doc/search-tips.webdoc:284
#: modules/websearch/doc/search-tips.webdoc:295
#: modules/websearch/doc/search-tips.webdoc:315
#: modules/websearch/doc/search-tips.webdoc:335
#: modules/websearch/doc/search-tips.webdoc:369
#: modules/websearch/doc/search-tips.webdoc:403
#: modules/websearch/doc/search-tips.webdoc:424
#: modules/websearch/doc/search-tips.webdoc:444
#: modules/websearch/doc/search-tips.webdoc:478
#: modules/websearch/doc/search-tips.webdoc:496
#: modules/websearch/doc/search-tips.webdoc:515
#: modules/websearch/doc/search-tips.webdoc:548
#: modules/websearch/doc/search-tips.webdoc:559
#: modules/websearch/doc/search-tips.webdoc:561
#: modules/websearch/doc/search-tips.webdoc:564
#: modules/websearch/doc/search-tips.webdoc:573
#: modules/websearch/doc/search-tips.webdoc:597
#: modules/websearch/doc/search-tips.webdoc:618
#: modules/websearch/doc/search-guide.webdoc:224
#: modules/websearch/doc/search-guide.webdoc:248
#: modules/websearch/doc/search-guide.webdoc:274
#: modules/websearch/doc/search-guide.webdoc:299
#: modules/websearch/doc/search-guide.webdoc:342
#: modules/websearch/doc/search-guide.webdoc:377
#: modules/websearch/doc/search-guide.webdoc:412
#: modules/websearch/doc/search-guide.webdoc:474
#: modules/websearch/doc/search-guide.webdoc:509
#: modules/websearch/doc/search-guide.webdoc:544
#: modules/websearch/doc/search-guide.webdoc:611
#: modules/websearch/doc/search-guide.webdoc:646
#: modules/websearch/doc/search-guide.webdoc:681
#: modules/websearch/doc/search-guide.webdoc:749
#: modules/websearch/doc/search-guide.webdoc:784
#: modules/websearch/doc/search-guide.webdoc:819
#: modules/websearch/doc/search-guide.webdoc:879
#: modules/websearch/doc/search-guide.webdoc:910
#: modules/websearch/doc/search-guide.webdoc:950
#: modules/websearch/doc/search-guide.webdoc:984
#: modules/websearch/doc/search-guide.webdoc:1024
#: modules/websearch/doc/search-guide.webdoc:1046
#: modules/websearch/doc/search-guide.webdoc:1066
#: modules/websearch/doc/search-guide.webdoc:1082
#: modules/websearch/doc/search-guide.webdoc:1122
#: modules/websearch/doc/search-guide.webdoc:1145
#: modules/websearch/doc/search-guide.webdoc:1166
#: modules/websearch/doc/search-guide.webdoc:1181
#: modules/websearch/doc/search-guide.webdoc:1225
#: modules/websearch/doc/search-guide.webdoc:1250
#: modules/websearch/doc/search-guide.webdoc:1271
#: modules/websearch/doc/search-guide.webdoc:1287
#: modules/websearch/doc/search-guide.webdoc:1332
#: modules/websearch/doc/search-guide.webdoc:1355
#: modules/websearch/doc/search-guide.webdoc:1377
#: modules/websearch/doc/search-guide.webdoc:1393
#: modules/websearch/doc/search-guide.webdoc:1763
#: modules/websearch/doc/search-guide.webdoc:1777
#: modules/websearch/doc/search-guide.webdoc:1795
#: modules/websearch/doc/search-guide.webdoc:1814
#: modules/websearch/doc/search-guide.webdoc:1827
#: modules/websearch/doc/search-guide.webdoc:1845
#: modules/websearch/doc/search-guide.webdoc:1865
#: modules/websearch/doc/search-guide.webdoc:1880
#: modules/websearch/doc/search-guide.webdoc:1899
#: modules/websearch/doc/search-guide.webdoc:1922
#: modules/websearch/doc/search-guide.webdoc:1937
#: modules/websearch/doc/search-guide.webdoc:1956
#: modules/websearch/doc/search-guide.webdoc:1984
#: modules/websearch/doc/search-guide.webdoc:2022
#: modules/websearch/doc/search-guide.webdoc:2033
#: modules/websearch/doc/search-guide.webdoc:2047
#: modules/websearch/doc/search-guide.webdoc:2061
#: modules/websearch/doc/search-guide.webdoc:2074
#: modules/websearch/doc/search-guide.webdoc:2090
#: modules/websearch/doc/search-guide.webdoc:2101
#: modules/websearch/doc/search-guide.webdoc:2115
#: modules/websearch/doc/search-guide.webdoc:2129
#: modules/websearch/doc/search-guide.webdoc:2142
#: modules/websearch/doc/search-guide.webdoc:2158
#: modules/websearch/doc/search-guide.webdoc:2169
#: modules/websearch/doc/search-guide.webdoc:2183
#: modules/websearch/doc/search-guide.webdoc:2197
#: modules/websearch/doc/search-guide.webdoc:2210
#: modules/websearch/doc/search-guide.webdoc:2228
#: modules/websearch/doc/search-guide.webdoc:2239
#: modules/websearch/doc/search-guide.webdoc:2253
#: modules/websearch/doc/search-guide.webdoc:2267
#: modules/websearch/doc/search-guide.webdoc:2280
#: modules/websearch/doc/search-guide.webdoc:2309
#: modules/websearch/doc/search-guide.webdoc:2323
#: modules/websearch/doc/search-guide.webdoc:2340
#: modules/websearch/doc/search-guide.webdoc:2353
#: modules/websearch/doc/search-guide.webdoc:2370
#: modules/websearch/doc/search-guide.webdoc:2384
#: modules/websearch/doc/search-guide.webdoc:2402
#: modules/websearch/doc/search-guide.webdoc:2416
#: modules/websearch/doc/search-guide.webdoc:2447
#: modules/websearch/doc/search-guide.webdoc:2462
#: modules/websearch/doc/search-guide.webdoc:2476
#: modules/websearch/doc/search-guide.webdoc:2491
#: modules/websearch/doc/search-guide.webdoc:2519
#: modules/websearch/doc/search-guide.webdoc:2534
#: modules/websearch/doc/search-guide.webdoc:2548
#: modules/websearch/doc/search-guide.webdoc:2564
#: modules/websearch/doc/search-guide.webdoc:2596
#: modules/websearch/doc/search-guide.webdoc:2612
#: modules/websearch/doc/search-guide.webdoc:2626
#: modules/websearch/doc/search-guide.webdoc:2641
#: modules/websearch/doc/search-guide.webdoc:2672
#: modules/websearch/doc/search-guide.webdoc:2688
#: modules/websearch/doc/search-guide.webdoc:2702
#: modules/websearch/doc/search-guide.webdoc:2717
#: modules/websearch/doc/search-guide.webdoc:2759
#: modules/websearch/doc/search-guide.webdoc:2774
#: modules/websearch/doc/search-guide.webdoc:2788
#: modules/websearch/doc/search-guide.webdoc:2813
#: modules/websearch/doc/search-guide.webdoc:2828
#: modules/websearch/doc/search-guide.webdoc:2842
#: modules/websearch/doc/search-guide.webdoc:2871
#: modules/websearch/doc/search-guide.webdoc:2886
#: modules/websearch/doc/search-guide.webdoc:2900
#: modules/websearch/doc/search-guide.webdoc:2928
#: modules/websearch/doc/search-guide.webdoc:2943
#: modules/websearch/doc/search-guide.webdoc:2956
#: modules/websearch/doc/search-guide.webdoc:2991
#: modules/websearch/doc/search-guide.webdoc:3013
#: modules/websearch/doc/search-guide.webdoc:3037
#: modules/websearch/doc/search-guide.webdoc:3061
#: modules/websearch/doc/search-guide.webdoc:3085
#: modules/websearch/doc/search-guide.webdoc:3100
#: modules/websearch/doc/search-guide.webdoc:3116
#: modules/websearch/doc/search-guide.webdoc:3133
#: modules/websearch/doc/search-guide.webdoc:3153
#: modules/websearch/doc/search-guide.webdoc:3171
#: modules/websearch/doc/search-guide.webdoc:3189
#: modules/websearch/doc/search-guide.webdoc:3208
#: modules/websearch/doc/search-guide.webdoc:3229
#: modules/websearch/doc/search-guide.webdoc:3243
#: modules/websearch/doc/search-guide.webdoc:3263
#: modules/websearch/doc/search-guide.webdoc:3278
#: modules/websearch/doc/search-guide.webdoc:3297
#: modules/websearch/doc/search-guide.webdoc:3312
#: modules/websearch/doc/search-guide.webdoc:3332
#: modules/websearch/doc/search-guide.webdoc:3347
#: modules/websearch/doc/search-guide.webdoc:3409
#: modules/websearch/doc/search-guide.webdoc:3423
#: modules/websearch/doc/search-guide.webdoc:3440
#: modules/websearch/doc/search-guide.webdoc:3453
#: modules/websearch/doc/search-guide.webdoc:3471
#: modules/websearch/doc/search-guide.webdoc:3486
#: modules/websearch/doc/search-guide.webdoc:3504
#: modules/websearch/doc/search-guide.webdoc:3519
#: modules/websearch/doc/search-guide.webdoc:3544
#: modules/websearch/doc/search-guide.webdoc:3557
#: modules/websearch/doc/search-guide.webdoc:3570
#: modules/websearch/doc/search-guide.webdoc:3586
#: modules/websearch/doc/search-guide.webdoc:3602
#: modules/websearch/doc/search-guide.webdoc:3619
#: modules/websearch/doc/search-guide.webdoc:3652
#: modules/websearch/doc/search-guide.webdoc:3668
#: modules/websearch/doc/search-guide.webdoc:3685
#: modules/websearch/doc/search-guide.webdoc:3705
#: modules/websearch/doc/search-guide.webdoc:3719
#: modules/websearch/doc/search-guide.webdoc:3737
#: modules/websearch/doc/search-guide.webdoc:3758
#: modules/websearch/doc/search-guide.webdoc:3777
#: modules/websearch/doc/search-guide.webdoc:3795
#: modules/websearch/doc/search-guide.webdoc:3817
#: modules/websearch/doc/search-guide.webdoc:3836
#: modules/websearch/doc/search-guide.webdoc:3853
#: modules/websearch/doc/search-guide.webdoc:3974
#: modules/websearch/doc/search-guide.webdoc:3999
#: modules/websearch/doc/search-guide.webdoc:4022
#: modules/websearch/doc/search-guide.webdoc:4048
#: modules/websearch/doc/search-guide.webdoc:4072
#: modules/websearch/doc/search-guide.webdoc:4099
#: modules/websearch/doc/search-guide.webdoc:4124
#: modules/websearch/doc/search-guide.webdoc:4150
#: modules/websearch/doc/search-guide.webdoc:4179
#: modules/websearch/doc/search-guide.webdoc:4199
#: modules/websearch/doc/search-guide.webdoc:4223
#: modules/websearch/doc/search-guide.webdoc:4250
#: modules/websearch/doc/search-guide.webdoc:4290
#: modules/websearch/doc/search-guide.webdoc:4311
#: modules/websearch/doc/search-guide.webdoc:4335
#: modules/websearch/doc/search-guide.webdoc:4365
#: modules/websearch/doc/search-guide.webdoc:4409
#: modules/websearch/doc/search-guide.webdoc:4431
#: modules/websearch/doc/search-guide.webdoc:4456
#: modules/websearch/doc/search-guide.webdoc:4486
#: modules/websearch/doc/search-guide.webdoc:4531
#: modules/websearch/doc/search-guide.webdoc:4552
#: modules/websearch/doc/search-guide.webdoc:4577
#: modules/websearch/doc/search-guide.webdoc:4607
#: modules/websearch/doc/search-guide.webdoc:4899
#: modules/websearch/doc/search-guide.webdoc:4915
#: modules/websearch/doc/search-guide.webdoc:4935
#: modules/websearch/doc/search-guide.webdoc:4954
#: modules/websearch/doc/search-guide.webdoc:4975
#: modules/websearch/doc/search-guide.webdoc:4993
#: modules/websearch/doc/search-guide.webdoc:5014
#: modules/websearch/doc/search-guide.webdoc:5032
#: modules/websearch/doc/search-guide.webdoc:5065
#: modules/websearch/doc/search-guide.webdoc:5079
#: modules/websearch/doc/search-guide.webdoc:5094
#: modules/websearch/doc/search-guide.webdoc:5110
#: modules/websearch/doc/search-guide.webdoc:5129
#: modules/websearch/doc/search-guide.webdoc:5143
#: modules/websearch/doc/search-guide.webdoc:5159
#: modules/websearch/doc/search-guide.webdoc:5177
#: modules/websearch/doc/search-guide.webdoc:5196
#: modules/websearch/doc/search-guide.webdoc:5211
#: modules/websearch/doc/search-guide.webdoc:5226
#: modules/websearch/doc/search-guide.webdoc:5244
#: modules/websearch/doc/search-guide.webdoc:5264
#: modules/websearch/doc/search-guide.webdoc:5279
#: modules/websearch/doc/search-guide.webdoc:5294
#: modules/websearch/doc/search-guide.webdoc:5314
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:129
#: modules/miscutil/lib/inveniocfg.py:479
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
msgid "any field"
msgstr "cualquier campo"
#: modules/webhelp/web/help-central.webdoc:20
#: modules/webhelp/web/help-central.webdoc:25
#: modules/webhelp/web/help-central.webdoc:26
#: modules/webhelp/web/help-central.webdoc:27
#: modules/webhelp/web/help-central.webdoc:28
#: modules/webhelp/web/help-central.webdoc:29
#: modules/webhelp/web/help-central.webdoc:30
#: modules/webhelp/web/help-central.webdoc:31
#: modules/webhelp/web/help-central.webdoc:32
#: modules/webhelp/web/help-central.webdoc:33
#: modules/webhelp/web/help-central.webdoc:34
#: modules/webhelp/web/help-central.webdoc:35
#: modules/webhelp/web/help-central.webdoc:36
#: modules/webhelp/web/help-central.webdoc:37
#: modules/webhelp/web/help-central.webdoc:38
#: modules/webhelp/web/help-central.webdoc:39
#: modules/webhelp/web/help-central.webdoc:40
#: modules/webhelp/web/help-central.webdoc:41
#: modules/webhelp/web/help-central.webdoc:42
#: modules/webhelp/web/help-central.webdoc:43
#: modules/webhelp/web/help-central.webdoc:44
#: modules/webhelp/web/help-central.webdoc:45
#: modules/websearch/doc/search-tips.webdoc:21
#: modules/websearch/doc/search-guide.webdoc:21
#: modules/websubmit/doc/submit-guide.webdoc:21
#: modules/webstyle/lib/webdoc_tests.py:105
#: modules/webstyle/lib/webdoc_webinterface.py:155
msgid "Help Central"
msgstr "Centro de ayuda"
#: modules/bibformat/etc/format_templates/Default_HTML_actions.bft:6
msgid "Export as"
msgstr "Exportar como"
#: modules/websearch/doc/search-guide.webdoc:345
#: modules/websearch/doc/search-guide.webdoc:380
#: modules/websearch/doc/search-guide.webdoc:415
#: modules/websearch/doc/search-guide.webdoc:477
#: modules/websearch/doc/search-guide.webdoc:512
#: modules/websearch/doc/search-guide.webdoc:547
#: modules/websearch/doc/search-guide.webdoc:614
#: modules/websearch/doc/search-guide.webdoc:649
#: modules/websearch/doc/search-guide.webdoc:684
#: modules/websearch/doc/search-guide.webdoc:752
#: modules/websearch/doc/search-guide.webdoc:787
#: modules/websearch/doc/search-guide.webdoc:822
#: modules/miscutil/lib/inveniocfg.py:488
msgid "collection"
msgstr "colección"
#: modules/websearch/doc/admin/websearch-admin-guide.webdoc:20
msgid "WebSearch Admin Guide"
msgstr "Guía de administración de WebSearch"
#: modules/websearch/doc/search-guide.webdoc:335
#: modules/websearch/doc/search-guide.webdoc:371
#: modules/websearch/doc/search-guide.webdoc:406
#: modules/websearch/doc/search-guide.webdoc:467
#: modules/websearch/doc/search-guide.webdoc:503
#: modules/websearch/doc/search-guide.webdoc:538
#: modules/websearch/doc/search-guide.webdoc:604
#: modules/websearch/doc/search-guide.webdoc:640
#: modules/websearch/doc/search-guide.webdoc:675
#: modules/websearch/doc/search-guide.webdoc:742
#: modules/websearch/doc/search-guide.webdoc:778
#: modules/websearch/doc/search-guide.webdoc:813
#: modules/websearch/lib/search_engine.py:1220
#: modules/websearch/lib/websearch_templates.py:1104
msgid "Exact phrase:"
msgstr "Frase exacta:"
#: modules/webhelp/web/help-central.webdoc:108
#: modules/websubmit/doc/submit-guide.webdoc:20
msgid "Submit Guide"
-msgstr "Ayuda para el envio"
+msgstr "Ayuda para el envío"
#: modules/websearch/doc/search-guide.webdoc:360
#: modules/websearch/doc/search-guide.webdoc:395
#: modules/websearch/doc/search-guide.webdoc:492
#: modules/websearch/doc/search-guide.webdoc:527
#: modules/websearch/doc/search-guide.webdoc:629
#: modules/websearch/doc/search-guide.webdoc:664
#: modules/websearch/doc/search-guide.webdoc:767
#: modules/websearch/doc/search-guide.webdoc:802
#: modules/websearch/lib/search_engine.py:1053
#: modules/websearch/lib/search_engine.py:1199
#: modules/websearch/lib/websearch_templates.py:1151
#: modules/websearch/lib/websearch_webcoll.py:592
msgid "OR"
msgstr "O"
#: modules/websearch/doc/search-guide.webdoc:359
#: modules/websearch/doc/search-guide.webdoc:394
#: modules/websearch/doc/search-guide.webdoc:491
#: modules/websearch/doc/search-guide.webdoc:526
#: modules/websearch/doc/search-guide.webdoc:628
#: modules/websearch/doc/search-guide.webdoc:663
#: modules/websearch/doc/search-guide.webdoc:766
#: modules/websearch/doc/search-guide.webdoc:801
#: modules/websearch/lib/search_engine.py:1198
#: modules/websearch/lib/websearch_templates.py:1150
msgid "AND"
msgstr "Y"
#: modules/websearch/doc/search-guide.webdoc:349
#: modules/websearch/doc/search-guide.webdoc:384
#: modules/websearch/doc/search-guide.webdoc:419
#: modules/websearch/doc/search-guide.webdoc:481
#: modules/websearch/doc/search-guide.webdoc:516
#: modules/websearch/doc/search-guide.webdoc:551
#: modules/websearch/doc/search-guide.webdoc:618
#: modules/websearch/doc/search-guide.webdoc:653
#: modules/websearch/doc/search-guide.webdoc:688
#: modules/websearch/doc/search-guide.webdoc:756
#: modules/websearch/doc/search-guide.webdoc:791
#: modules/websearch/doc/search-guide.webdoc:826
#: modules/miscutil/lib/inveniocfg.py:483
msgid "keyword"
msgstr "palabra clave"
#: modules/websearch/doc/search-tips.webdoc:36
#: modules/websearch/doc/search-tips.webdoc:67
#: modules/websearch/doc/search-tips.webdoc:105
#: modules/websearch/doc/search-tips.webdoc:153
#: modules/websearch/doc/search-tips.webdoc:176
#: modules/websearch/doc/search-tips.webdoc:200
#: modules/websearch/doc/search-tips.webdoc:245
#: modules/websearch/doc/search-tips.webdoc:285
#: modules/websearch/doc/search-tips.webdoc:296
#: modules/websearch/doc/search-tips.webdoc:316
#: modules/websearch/doc/search-tips.webdoc:336
#: modules/websearch/doc/search-tips.webdoc:370
#: modules/websearch/doc/search-tips.webdoc:404
#: modules/websearch/doc/search-tips.webdoc:425
#: modules/websearch/doc/search-tips.webdoc:445
#: modules/websearch/doc/search-tips.webdoc:479
#: modules/websearch/doc/search-tips.webdoc:497
#: modules/websearch/doc/search-tips.webdoc:516
#: modules/websearch/doc/search-tips.webdoc:549
#: modules/websearch/doc/search-tips.webdoc:574
#: modules/websearch/doc/search-tips.webdoc:598
#: modules/websearch/doc/search-tips.webdoc:619
#: modules/websearch/doc/search-guide.webdoc:225
#: modules/websearch/doc/search-guide.webdoc:249
#: modules/websearch/doc/search-guide.webdoc:275
#: modules/websearch/doc/search-guide.webdoc:300
#: modules/websearch/doc/search-guide.webdoc:353
#: modules/websearch/doc/search-guide.webdoc:388
#: modules/websearch/doc/search-guide.webdoc:423
#: modules/websearch/doc/search-guide.webdoc:485
#: modules/websearch/doc/search-guide.webdoc:520
#: modules/websearch/doc/search-guide.webdoc:555
#: modules/websearch/doc/search-guide.webdoc:622
#: modules/websearch/doc/search-guide.webdoc:657
#: modules/websearch/doc/search-guide.webdoc:692
#: modules/websearch/doc/search-guide.webdoc:760
#: modules/websearch/doc/search-guide.webdoc:795
#: modules/websearch/doc/search-guide.webdoc:830
#: modules/websearch/doc/search-guide.webdoc:880
#: modules/websearch/doc/search-guide.webdoc:911
#: modules/websearch/doc/search-guide.webdoc:951
#: modules/websearch/doc/search-guide.webdoc:985
#: modules/websearch/doc/search-guide.webdoc:1025
#: modules/websearch/doc/search-guide.webdoc:1047
#: modules/websearch/doc/search-guide.webdoc:1067
#: modules/websearch/doc/search-guide.webdoc:1083
#: modules/websearch/doc/search-guide.webdoc:1123
#: modules/websearch/doc/search-guide.webdoc:1146
#: modules/websearch/doc/search-guide.webdoc:1167
#: modules/websearch/doc/search-guide.webdoc:1182
#: modules/websearch/doc/search-guide.webdoc:1226
#: modules/websearch/doc/search-guide.webdoc:1251
#: modules/websearch/doc/search-guide.webdoc:1272
#: modules/websearch/doc/search-guide.webdoc:1288
#: modules/websearch/doc/search-guide.webdoc:1333
#: modules/websearch/doc/search-guide.webdoc:1356
#: modules/websearch/doc/search-guide.webdoc:1378
#: modules/websearch/doc/search-guide.webdoc:1394
#: modules/websearch/doc/search-guide.webdoc:1764
#: modules/websearch/doc/search-guide.webdoc:1778
#: modules/websearch/doc/search-guide.webdoc:1796
#: modules/websearch/doc/search-guide.webdoc:1815
#: modules/websearch/doc/search-guide.webdoc:1828
#: modules/websearch/doc/search-guide.webdoc:1846
#: modules/websearch/doc/search-guide.webdoc:1866
#: modules/websearch/doc/search-guide.webdoc:1881
#: modules/websearch/doc/search-guide.webdoc:1900
#: modules/websearch/doc/search-guide.webdoc:1923
#: modules/websearch/doc/search-guide.webdoc:1938
#: modules/websearch/doc/search-guide.webdoc:1957
#: modules/websearch/doc/search-guide.webdoc:1985
#: modules/websearch/doc/search-guide.webdoc:2023
#: modules/websearch/doc/search-guide.webdoc:2034
#: modules/websearch/doc/search-guide.webdoc:2048
#: modules/websearch/doc/search-guide.webdoc:2062
#: modules/websearch/doc/search-guide.webdoc:2075
#: modules/websearch/doc/search-guide.webdoc:2091
#: modules/websearch/doc/search-guide.webdoc:2102
#: modules/websearch/doc/search-guide.webdoc:2116
#: modules/websearch/doc/search-guide.webdoc:2130
#: modules/websearch/doc/search-guide.webdoc:2143
#: modules/websearch/doc/search-guide.webdoc:2159
#: modules/websearch/doc/search-guide.webdoc:2170
#: modules/websearch/doc/search-guide.webdoc:2184
#: modules/websearch/doc/search-guide.webdoc:2198
#: modules/websearch/doc/search-guide.webdoc:2211
#: modules/websearch/doc/search-guide.webdoc:2229
#: modules/websearch/doc/search-guide.webdoc:2240
#: modules/websearch/doc/search-guide.webdoc:2254
#: modules/websearch/doc/search-guide.webdoc:2268
#: modules/websearch/doc/search-guide.webdoc:2281
#: modules/websearch/doc/search-guide.webdoc:2310
#: modules/websearch/doc/search-guide.webdoc:2324
#: modules/websearch/doc/search-guide.webdoc:2341
#: modules/websearch/doc/search-guide.webdoc:2354
#: modules/websearch/doc/search-guide.webdoc:2371
#: modules/websearch/doc/search-guide.webdoc:2385
#: modules/websearch/doc/search-guide.webdoc:2403
#: modules/websearch/doc/search-guide.webdoc:2417
#: modules/websearch/doc/search-guide.webdoc:2448
#: modules/websearch/doc/search-guide.webdoc:2463
#: modules/websearch/doc/search-guide.webdoc:2477
#: modules/websearch/doc/search-guide.webdoc:2492
#: modules/websearch/doc/search-guide.webdoc:2520
#: modules/websearch/doc/search-guide.webdoc:2535
#: modules/websearch/doc/search-guide.webdoc:2549
#: modules/websearch/doc/search-guide.webdoc:2565
#: modules/websearch/doc/search-guide.webdoc:2597
#: modules/websearch/doc/search-guide.webdoc:2613
#: modules/websearch/doc/search-guide.webdoc:2627
#: modules/websearch/doc/search-guide.webdoc:2642
#: modules/websearch/doc/search-guide.webdoc:2673
#: modules/websearch/doc/search-guide.webdoc:2689
#: modules/websearch/doc/search-guide.webdoc:2703
#: modules/websearch/doc/search-guide.webdoc:2718
#: modules/websearch/doc/search-guide.webdoc:2760
#: modules/websearch/doc/search-guide.webdoc:2775
#: modules/websearch/doc/search-guide.webdoc:2789
#: modules/websearch/doc/search-guide.webdoc:2814
#: modules/websearch/doc/search-guide.webdoc:2829
#: modules/websearch/doc/search-guide.webdoc:2843
#: modules/websearch/doc/search-guide.webdoc:2872
#: modules/websearch/doc/search-guide.webdoc:2887
#: modules/websearch/doc/search-guide.webdoc:2901
#: modules/websearch/doc/search-guide.webdoc:2929
#: modules/websearch/doc/search-guide.webdoc:2944
#: modules/websearch/doc/search-guide.webdoc:2957
#: modules/websearch/doc/search-guide.webdoc:2992
#: modules/websearch/doc/search-guide.webdoc:3014
#: modules/websearch/doc/search-guide.webdoc:3038
#: modules/websearch/doc/search-guide.webdoc:3062
#: modules/websearch/doc/search-guide.webdoc:3086
#: modules/websearch/doc/search-guide.webdoc:3101
#: modules/websearch/doc/search-guide.webdoc:3117
#: modules/websearch/doc/search-guide.webdoc:3134
#: modules/websearch/doc/search-guide.webdoc:3154
#: modules/websearch/doc/search-guide.webdoc:3172
#: modules/websearch/doc/search-guide.webdoc:3190
#: modules/websearch/doc/search-guide.webdoc:3209
#: modules/websearch/doc/search-guide.webdoc:3230
#: modules/websearch/doc/search-guide.webdoc:3244
#: modules/websearch/doc/search-guide.webdoc:3264
#: modules/websearch/doc/search-guide.webdoc:3279
#: modules/websearch/doc/search-guide.webdoc:3298
#: modules/websearch/doc/search-guide.webdoc:3313
#: modules/websearch/doc/search-guide.webdoc:3333
#: modules/websearch/doc/search-guide.webdoc:3348
#: modules/websearch/doc/search-guide.webdoc:3410
#: modules/websearch/doc/search-guide.webdoc:3424
#: modules/websearch/doc/search-guide.webdoc:3441
#: modules/websearch/doc/search-guide.webdoc:3454
#: modules/websearch/doc/search-guide.webdoc:3472
#: modules/websearch/doc/search-guide.webdoc:3487
#: modules/websearch/doc/search-guide.webdoc:3505
#: modules/websearch/doc/search-guide.webdoc:3520
#: modules/websearch/doc/search-guide.webdoc:3545
#: modules/websearch/doc/search-guide.webdoc:3558
#: modules/websearch/doc/search-guide.webdoc:3571
#: modules/websearch/doc/search-guide.webdoc:3587
#: modules/websearch/doc/search-guide.webdoc:3603
#: modules/websearch/doc/search-guide.webdoc:3620
#: modules/websearch/doc/search-guide.webdoc:3653
#: modules/websearch/doc/search-guide.webdoc:3669
#: modules/websearch/doc/search-guide.webdoc:3686
#: modules/websearch/doc/search-guide.webdoc:3706
#: modules/websearch/doc/search-guide.webdoc:3720
#: modules/websearch/doc/search-guide.webdoc:3738
#: modules/websearch/doc/search-guide.webdoc:3759
#: modules/websearch/doc/search-guide.webdoc:3778
#: modules/websearch/doc/search-guide.webdoc:3796
#: modules/websearch/doc/search-guide.webdoc:3818
#: modules/websearch/doc/search-guide.webdoc:3837
#: modules/websearch/doc/search-guide.webdoc:3854
#: modules/websearch/doc/search-guide.webdoc:3975
#: modules/websearch/doc/search-guide.webdoc:4000
#: modules/websearch/doc/search-guide.webdoc:4023
#: modules/websearch/doc/search-guide.webdoc:4049
#: modules/websearch/doc/search-guide.webdoc:4073
#: modules/websearch/doc/search-guide.webdoc:4100
#: modules/websearch/doc/search-guide.webdoc:4125
#: modules/websearch/doc/search-guide.webdoc:4151
#: modules/websearch/doc/search-guide.webdoc:4180
#: modules/websearch/doc/search-guide.webdoc:4200
#: modules/websearch/doc/search-guide.webdoc:4224
#: modules/websearch/doc/search-guide.webdoc:4251
#: modules/websearch/doc/search-guide.webdoc:4291
#: modules/websearch/doc/search-guide.webdoc:4312
#: modules/websearch/doc/search-guide.webdoc:4336
#: modules/websearch/doc/search-guide.webdoc:4366
#: modules/websearch/doc/search-guide.webdoc:4410
#: modules/websearch/doc/search-guide.webdoc:4432
#: modules/websearch/doc/search-guide.webdoc:4457
#: modules/websearch/doc/search-guide.webdoc:4487
#: modules/websearch/doc/search-guide.webdoc:4532
#: modules/websearch/doc/search-guide.webdoc:4553
#: modules/websearch/doc/search-guide.webdoc:4578
#: modules/websearch/doc/search-guide.webdoc:4608
#: modules/websearch/doc/search-guide.webdoc:4900
#: modules/websearch/doc/search-guide.webdoc:4916
#: modules/websearch/doc/search-guide.webdoc:4936
#: modules/websearch/doc/search-guide.webdoc:4955
#: modules/websearch/doc/search-guide.webdoc:4976
#: modules/websearch/doc/search-guide.webdoc:4994
#: modules/websearch/doc/search-guide.webdoc:5015
#: modules/websearch/doc/search-guide.webdoc:5033
#: modules/websearch/doc/search-guide.webdoc:5066
#: modules/websearch/doc/search-guide.webdoc:5080
#: modules/websearch/doc/search-guide.webdoc:5095
#: modules/websearch/doc/search-guide.webdoc:5111
#: modules/websearch/doc/search-guide.webdoc:5130
#: modules/websearch/doc/search-guide.webdoc:5144
#: modules/websearch/doc/search-guide.webdoc:5160
#: modules/websearch/doc/search-guide.webdoc:5178
#: modules/websearch/doc/search-guide.webdoc:5197
#: modules/websearch/doc/search-guide.webdoc:5212
#: modules/websearch/doc/search-guide.webdoc:5227
#: modules/websearch/doc/search-guide.webdoc:5245
#: modules/websearch/doc/search-guide.webdoc:5265
#: modules/websearch/doc/search-guide.webdoc:5280
#: modules/websearch/doc/search-guide.webdoc:5295
#: modules/websearch/doc/search-guide.webdoc:5315
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:130
#: modules/miscutil/lib/inveniocfg.py:480
#: modules/bibcirculation/lib/bibcirculation_templates.py:2020
#: modules/bibcirculation/lib/bibcirculation_templates.py:7219
#: modules/bibcirculation/lib/bibcirculation_templates.py:7918
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16025
#: modules/bibcirculation/lib/bibcirculation_templates.py:17724
msgid "title"
msgstr "título"
#: modules/websearch/doc/search-tips.webdoc:72
#: modules/websearch/doc/search-tips.webdoc:110
#: modules/websearch/lib/websearch_templates.py:1264
msgid "Narrow by collection:"
msgstr "Limitar por colección:"
#: modules/bibformat/etc/format_templates/Default_HTML_actions.bft:5
msgid "Add to personal basket"
msgstr "Añadir a la cesta personal"
#: modules/websubmit/doc/admin/websubmit-admin-guide.webdoc:20
msgid "WebSubmit Admin Guide"
msgstr "Guía de administración de WebSubmit"
#: modules/websearch/doc/search-tips.webdoc:290
#: modules/websubmit/lib/websubmit_templates.py:1119
#: modules/bibharvest/lib/oai_harvest_admin.py:450
#: modules/bibharvest/lib/oai_harvest_admin.py:458
#: modules/bibharvest/lib/oai_harvest_admin.py:472
#: modules/bibharvest/lib/oai_harvest_admin.py:487
msgid "or"
msgstr "o"
#: modules/bibedit/lib/bibedit_templates.py:295
msgid "Comparison of:"
msgstr "Comparación entre:"
#: modules/bibedit/lib/bibedit_templates.py:296
msgid "Revision"
msgstr "Revisión"
#: modules/bibformat/lib/bibformat_templates.py:315
#: modules/bibformat/lib/bibformat_templates.py:429
#: modules/bibformat/lib/bibformat_templates.py:579
#: modules/bibformat/lib/bibformat_templates.py:595
#: modules/bibformat/lib/bibformat_templates.py:626
#: modules/bibformat/lib/bibformat_templates.py:937
#: modules/bibformat/lib/bibformat_templates.py:1069
#: modules/bibformat/lib/bibformat_templates.py:1385
#: modules/bibformat/lib/bibformat_templates.py:1491
#: modules/bibformat/lib/bibformat_templates.py:1550
#: modules/webcomment/lib/webcomment_templates.py:1475
#: modules/webjournal/lib/webjournal_templates.py:165
#: modules/webjournal/lib/webjournal_templates.py:537
#: modules/webjournal/lib/webjournal_templates.py:678
#: modules/bibknowledge/lib/bibknowledge_templates.py:327
#: modules/bibknowledge/lib/bibknowledge_templates.py:587
#: modules/bibknowledge/lib/bibknowledge_templates.py:656
msgid "Menu"
msgstr "Menú"
#: modules/bibformat/lib/bibformat_templates.py:317
#: modules/bibformat/lib/bibformat_templates.py:430
#: modules/bibformat/lib/bibformat_templates.py:582
#: modules/bibformat/lib/bibformat_templates.py:598
#: modules/bibformat/lib/bibformat_templates.py:629
#: modules/bibknowledge/lib/bibknowledge_templates.py:323
#: modules/bibknowledge/lib/bibknowledge_templates.py:586
#: modules/bibknowledge/lib/bibknowledge_templates.py:655
msgid "Close Editor"
msgstr "Cerrar el editor"
#: modules/bibformat/lib/bibformat_templates.py:318
#: modules/bibformat/lib/bibformat_templates.py:431
#: modules/bibformat/lib/bibformat_templates.py:583
#: modules/bibformat/lib/bibformat_templates.py:599
#: modules/bibformat/lib/bibformat_templates.py:630
msgid "Modify Template Attributes"
msgstr "Modificar los atributos de la plantilla"
#: modules/bibformat/lib/bibformat_templates.py:319
#: modules/bibformat/lib/bibformat_templates.py:432
#: modules/bibformat/lib/bibformat_templates.py:584
#: modules/bibformat/lib/bibformat_templates.py:600
#: modules/bibformat/lib/bibformat_templates.py:631
msgid "Template Editor"
msgstr "Editor de plantillas"
#: modules/bibformat/lib/bibformat_templates.py:320
#: modules/bibformat/lib/bibformat_templates.py:433
#: modules/bibformat/lib/bibformat_templates.py:585
#: modules/bibformat/lib/bibformat_templates.py:601
#: modules/bibformat/lib/bibformat_templates.py:632
#: modules/bibformat/lib/bibformat_templates.py:1184
#: modules/bibformat/lib/bibformat_templates.py:1384
#: modules/bibformat/lib/bibformat_templates.py:1490
msgid "Check Dependencies"
msgstr "Comprobar las dependencias"
#: modules/bibformat/lib/bibformat_templates.py:370
#: modules/bibformat/lib/bibformat_templates.py:935
#: modules/bibformat/lib/bibformat_templates.py:1060
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/webalert/lib/webalert_templates.py:320
#: modules/websubmit/lib/websubmit_managedocfiles.py:385
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:194
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:255
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:345
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:398
#: modules/bibcirculation/lib/bibcirculation_utils.py:454
#: modules/bibcirculation/lib/bibcirculation_templates.py:1147
#: modules/bibcirculation/lib/bibcirculation_templates.py:1347
#: modules/bibcirculation/lib/bibcirculation_templates.py:1522
#: modules/bibcirculation/lib/bibcirculation_templates.py:1797
#: modules/bibcirculation/lib/bibcirculation_templates.py:2387
#: modules/bibcirculation/lib/bibcirculation_templates.py:2504
#: modules/bibcirculation/lib/bibcirculation_templates.py:2736
#: modules/bibcirculation/lib/bibcirculation_templates.py:3094
#: modules/bibcirculation/lib/bibcirculation_templates.py:3940
#: modules/bibcirculation/lib/bibcirculation_templates.py:4043
#: modules/bibcirculation/lib/bibcirculation_templates.py:4266
#: modules/bibcirculation/lib/bibcirculation_templates.py:4327
#: modules/bibcirculation/lib/bibcirculation_templates.py:4454
#: modules/bibcirculation/lib/bibcirculation_templates.py:5602
#: modules/bibcirculation/lib/bibcirculation_templates.py:6185
#: modules/bibcirculation/lib/bibcirculation_templates.py:6234
#: modules/bibcirculation/lib/bibcirculation_templates.py:6533
#: modules/bibcirculation/lib/bibcirculation_templates.py:6596
#: modules/bibcirculation/lib/bibcirculation_templates.py:6699
#: modules/bibcirculation/lib/bibcirculation_templates.py:6928
#: modules/bibcirculation/lib/bibcirculation_templates.py:7027
#: modules/bibcirculation/lib/bibcirculation_templates.py:7427
#: modules/bibcirculation/lib/bibcirculation_templates.py:8079
#: modules/bibcirculation/lib/bibcirculation_templates.py:8236
#: modules/bibcirculation/lib/bibcirculation_templates.py:9025
#: modules/bibcirculation/lib/bibcirculation_templates.py:9270
#: modules/bibcirculation/lib/bibcirculation_templates.py:9595
#: modules/bibcirculation/lib/bibcirculation_templates.py:9834
#: modules/bibcirculation/lib/bibcirculation_templates.py:9879
#: modules/bibcirculation/lib/bibcirculation_templates.py:10070
#: modules/bibcirculation/lib/bibcirculation_templates.py:10313
#: modules/bibcirculation/lib/bibcirculation_templates.py:10357
#: modules/bibcirculation/lib/bibcirculation_templates.py:10521
#: modules/bibcirculation/lib/bibcirculation_templates.py:10748
#: modules/bibcirculation/lib/bibcirculation_templates.py:11221
#: modules/bibcirculation/lib/bibcirculation_templates.py:11349
#: modules/bibcirculation/lib/bibcirculation_templates.py:11854
#: modules/bibcirculation/lib/bibcirculation_templates.py:12210
#: modules/bibcirculation/lib/bibcirculation_templates.py:12831
#: modules/bibcirculation/lib/bibcirculation_templates.py:12994
#: modules/bibcirculation/lib/bibcirculation_templates.py:13604
#: modules/bibcirculation/lib/bibcirculation_templates.py:13867
#: modules/bibcirculation/lib/bibcirculation_templates.py:14070
#: modules/bibcirculation/lib/bibcirculation_templates.py:14140
#: modules/bibcirculation/lib/bibcirculation_templates.py:14389
#: modules/bibcirculation/lib/bibcirculation_templates.py:14460
#: modules/bibcirculation/lib/bibcirculation_templates.py:14713
#: modules/bibcirculation/lib/bibcirculation_templates.py:15143
#: modules/bibcirculation/lib/bibcirculation_templates.py:15515
#: modules/bibcirculation/lib/bibcirculation_templates.py:15862
#: modules/bibcirculation/lib/bibcirculation_templates.py:17909
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:447
#: modules/bibknowledge/lib/bibknowledge_templates.py:79
msgid "Name"
msgstr "Nombre"
#: modules/bibformat/lib/bibformat_templates.py:389
#: modules/bibformat/lib/bibformat_templates.py:936
#: modules/bibformat/lib/bibformat_templates.py:1061
#: modules/webbasket/lib/webbasket_templates.py:1273
#: modules/websession/lib/websession_templates.py:1504
#: modules/websession/lib/websession_templates.py:1578
#: modules/websession/lib/websession_templates.py:1641
#: modules/websubmit/lib/websubmit_managedocfiles.py:387
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:3187
#: modules/bibcirculation/lib/bibcirculation_templates.py:6050
#: modules/bibcirculation/lib/bibcirculation_templates.py:7476
#: modules/bibcirculation/lib/bibcirculation_templates.py:7654
#: modules/bibcirculation/lib/bibcirculation_templates.py:7813
#: modules/bibcirculation/lib/bibcirculation_templates.py:8111
#: modules/bibcirculation/lib/bibcirculation_templates.py:8321
#: modules/bibcirculation/lib/bibcirculation_templates.py:8481
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:17958
#: modules/bibcirculation/lib/bibcirculation_templates.py:18044
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:451
#: modules/bibknowledge/lib/bibknowledge_templates.py:80
msgid "Description"
msgstr "Descripción"
#: modules/bibformat/lib/bibformat_templates.py:390
msgid "Update Format Attributes"
msgstr "Actualizar los atributos del formato"
#: modules/bibformat/lib/bibformat_templates.py:580
#: modules/bibformat/lib/bibformat_templates.py:596
#: modules/bibformat/lib/bibformat_templates.py:627
msgid "Show Documentation"
msgstr "Mostrar la documentación"
#: modules/bibformat/lib/bibformat_templates.py:581
#: modules/bibformat/lib/bibformat_templates.py:597
#: modules/bibformat/lib/bibformat_templates.py:628
#: modules/bibformat/lib/bibformat_templates.py:679
msgid "Hide Documentation"
msgstr "Esconder la documentación"
#: modules/bibformat/lib/bibformat_templates.py:588
#: modules/websubmit/lib/websubmit_templates.py:868
msgid "Your modifications will not be saved."
msgstr "Sus modificaciones no serán guardadas."
#: modules/bibformat/lib/bibformat_templates.py:938
#: modules/bibformat/lib/bibformat_templates.py:1062
#: modules/bibupload/lib/batchuploader_templates.py:253
#: modules/bibupload/lib/batchuploader_templates.py:295
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/websubmit/lib/websubmit_templates.py:1491
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:536
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:633
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:771
#: modules/bibcirculation/lib/bibcirculation_templates.py:2849
#: modules/bibcirculation/lib/bibcirculation_templates.py:2957
#: modules/bibcirculation/lib/bibcirculation_templates.py:3160
#: modules/bibcirculation/lib/bibcirculation_templates.py:7451
#: modules/bibcirculation/lib/bibcirculation_templates.py:7686
#: modules/bibcirculation/lib/bibcirculation_templates.py:7815
#: modules/bibcirculation/lib/bibcirculation_templates.py:8105
#: modules/bibcirculation/lib/bibcirculation_templates.py:8352
#: modules/bibcirculation/lib/bibcirculation_templates.py:8483
#: modules/bibcirculation/lib/bibcirculation_templates.py:9331
#: modules/bibcirculation/lib/bibcirculation_templates.py:10597
#: modules/bibcirculation/lib/bibcirculation_templates.py:10817
#: modules/bibcirculation/lib/bibcirculation_templates.py:10914
#: modules/bibcirculation/lib/bibcirculation_templates.py:11516
#: modules/bibcirculation/lib/bibcirculation_templates.py:11637
#: modules/bibcirculation/lib/bibcirculation_templates.py:12228
#: modules/bibcirculation/lib/bibcirculation_templates.py:13012
#: modules/bibcirculation/lib/bibcirculation_templates.py:13678
#: modules/bibcirculation/lib/bibcirculation_templates.py:13909
#: modules/bibcirculation/lib/bibcirculation_templates.py:15589
#: modules/bibcirculation/lib/bibcirculation_templates.py:17933
#: modules/bibcirculation/lib/bibcirculation_templates.py:18037
msgid "Status"
msgstr "Estado"
#: modules/bibformat/lib/bibformat_templates.py:939
#: modules/bibformat/lib/bibformat_templates.py:1063
msgid "Last Modification Date"
msgstr "Fecha de la última modificación"
#: modules/bibformat/lib/bibformat_templates.py:940
#: modules/bibformat/lib/bibformat_templates.py:1064
#: modules/webalert/lib/webalert_templates.py:327
#: modules/webalert/lib/webalert_templates.py:464
#: modules/webmessage/lib/webmessage_templates.py:89
#: modules/websubmit/lib/websubmit_templates.py:1490
#: modules/bibknowledge/lib/bibknowledge_templates.py:81
msgid "Action"
msgstr "Acción"
#: modules/bibformat/lib/bibformat_templates.py:942
#: modules/bibformat/lib/bibformat_templates.py:1066
#: modules/bibformat/lib/bibformat_templates.py:1551
#: modules/bibformat/web/admin/bibformatadmin.py:104
#: modules/bibformat/web/admin/bibformatadmin.py:167
#: modules/bibformat/web/admin/bibformatadmin.py:240
#: modules/bibformat/web/admin/bibformatadmin.py:287
#: modules/bibformat/web/admin/bibformatadmin.py:384
#: modules/bibformat/web/admin/bibformatadmin.py:999
msgid "Manage Output Formats"
msgstr "Gestionar los formatos de salida"
#: modules/bibformat/lib/bibformat_templates.py:943
#: modules/bibformat/lib/bibformat_templates.py:1067
#: modules/bibformat/lib/bibformat_templates.py:1552
#: modules/bibformat/web/admin/bibformatadmin.py:465
#: modules/bibformat/web/admin/bibformatadmin.py:500
#: modules/bibformat/web/admin/bibformatadmin.py:573
#: modules/bibformat/web/admin/bibformatadmin.py:620
#: modules/bibformat/web/admin/bibformatadmin.py:694
#: modules/bibformat/web/admin/bibformatadmin.py:1020
msgid "Manage Format Templates"
msgstr "Gestionar las plantillas de los formatos"
#: modules/bibformat/lib/bibformat_templates.py:944
#: modules/bibformat/lib/bibformat_templates.py:1068
#: modules/bibformat/lib/bibformat_templates.py:1553
#: modules/bibformat/web/admin/bibformatadmin.py:887
#: modules/bibformat/web/admin/bibformatadmin.py:911
#: modules/bibformat/web/admin/bibformatadmin.py:946
#: modules/bibformat/web/admin/bibformatadmin.py:1038
msgid "Format Elements Documentation"
msgstr "Documentación de los elementos de los formatos"
#: modules/bibformat/lib/bibformat_templates.py:996
#: modules/bibformat/web/admin/bibformatadmin.py:405
#: modules/bibformat/web/admin/bibformatadmin.py:407
#: modules/bibformat/web/admin/bibformatadmin.py:714
#: modules/bibformat/web/admin/bibformatadmin.py:716
#: modules/webbasket/lib/webbasket_templates.py:2894
#: modules/webmessage/lib/webmessage_templates.py:115
#: modules/webjournal/lib/webjournaladminlib.py:116
#: modules/webjournal/lib/webjournaladminlib.py:119
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:175
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:324
#: modules/bibcirculation/lib/bibcirculation_templates.py:1220
#: modules/bibcirculation/lib/bibcirculation_templates.py:15932
#: modules/bibcirculation/lib/bibcirculation_templates.py:18078
#: modules/bibcheck/web/admin/bibcheckadmin.py:137
#: modules/bibknowledge/lib/bibknowledge_templates.py:102
#: modules/bibknowledge/lib/bibknowledge_templates.py:529
#: modules/bibknowledge/lib/bibknowledgeadmin.py:747
#: modules/bibknowledge/lib/bibknowledgeadmin.py:749
msgid "Delete"
msgstr "Suprimir"
#: modules/bibformat/lib/bibformat_templates.py:1019
msgid "Add New Format Template"
msgstr "Añadir una nueva plantilla de formato"
#: modules/bibformat/lib/bibformat_templates.py:1020
msgid "Check Format Templates Extensively"
msgstr "Comprobar las plantillas de formato extensivamente"
#: modules/bibformat/lib/bibformat_templates.py:1059
msgid "Code"
msgstr "Código"
#: modules/bibformat/lib/bibformat_templates.py:1142
msgid "Add New Output Format"
msgstr "Añadir un nuevo formato de salida"
#: modules/bibformat/lib/bibformat_templates.py:1180
msgid "menu"
msgstr "menú"
#: modules/bibformat/lib/bibformat_templates.py:1181
#: modules/bibformat/lib/bibformat_templates.py:1381
#: modules/bibformat/lib/bibformat_templates.py:1487
msgid "Close Output Format"
msgstr "Cerrar formato de salida"
#: modules/bibformat/lib/bibformat_templates.py:1182
#: modules/bibformat/lib/bibformat_templates.py:1382
#: modules/bibformat/lib/bibformat_templates.py:1488
msgid "Rules"
msgstr "Reglas"
#: modules/bibformat/lib/bibformat_templates.py:1183
#: modules/bibformat/lib/bibformat_templates.py:1383
#: modules/bibformat/lib/bibformat_templates.py:1489
msgid "Modify Output Format Attributes"
msgstr "Modificar los atributos del formato de salida"
#: modules/bibformat/lib/bibformat_templates.py:1282
#: modules/bibformat/lib/bibformatadminlib.py:565
msgid "Remove Rule"
msgstr "Eliminar regla"
#: modules/bibformat/lib/bibformat_templates.py:1335
#: modules/bibformat/lib/bibformatadminlib.py:572
msgid "Add New Rule"
msgstr "Añadir una nueva regla"
#: modules/bibformat/lib/bibformat_templates.py:1336
#: modules/bibformat/lib/bibformatadminlib.py:569
#: modules/bibcheck/web/admin/bibcheckadmin.py:239
msgid "Save Changes"
msgstr "Guardar cambios"
#: modules/bibformat/lib/bibformat_templates.py:1910
msgid "No problem found with format"
msgstr "No se ha encontrado ningún problema con el formato"
#: modules/bibformat/lib/bibformat_templates.py:1912
msgid "An error has been found"
msgstr "Se ha encontrado un error"
#: modules/bibformat/lib/bibformat_templates.py:1914
msgid "The following errors have been found"
-msgstr "Se han encontrado los siguentes errores"
+msgstr "Se han encontrado los siguientes errores"
#: modules/bibformat/lib/bibformatadminlib.py:61
#: modules/bibformat/web/admin/bibformatadmin.py:72
msgid "BibFormat Admin"
msgstr "Administración de BibFormat"
#: modules/bibformat/lib/bibformatadminlib.py:357
#: modules/bibformat/lib/bibformatadminlib.py:396
#: modules/bibformat/lib/bibformatadminlib.py:398
msgid "Test with record:"
msgstr "Probarlo con el registro:"
#: modules/bibformat/lib/bibformatadminlib.py:358
msgid "Enter a search query here."
msgstr "Introduzca una búsqueda aquí."
#: modules/bibformat/lib/elements/bfe_aid_authors.py:287
#: modules/bibformat/lib/elements/bfe_authors.py:127
msgid "Hide"
msgstr "Esconder"
#: modules/bibformat/lib/elements/bfe_aid_authors.py:288
#: modules/bibformat/lib/elements/bfe_authors.py:128
#, python-format
msgid "Show all %i authors"
msgstr "Mostrar todos los %i autores"
#: modules/bibformat/lib/elements/bfe_fulltext.py:78
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:72
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:75
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:113
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:116
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:133
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:135
msgid "Download fulltext"
msgstr "Descargar el texto completo"
#: modules/bibformat/lib/elements/bfe_fulltext.py:87
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:61
msgid "additional files"
msgstr "archivos adicionales"
#: modules/bibformat/lib/elements/bfe_fulltext.py:130
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:120
#, python-format
msgid "%(x_sitename)s link"
msgstr "enlace %(x_sitename)s"
#: modules/bibformat/lib/elements/bfe_fulltext.py:130
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:120
#, python-format
msgid "%(x_sitename)s links"
msgstr "enlaces %(x_sitename)s"
#: modules/bibformat/lib/elements/bfe_fulltext.py:139
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:138
msgid "external link"
msgstr "enlace externo"
#: modules/bibformat/lib/elements/bfe_fulltext.py:139
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:138
msgid "external links"
msgstr "enlaces externos"
#: modules/bibformat/lib/elements/bfe_fulltext.py:234
#: modules/bibformat/lib/elements/bfe_fulltext.py:238
#: modules/bibformat/lib/elements/bfe_fulltext.py:289
msgid "Fulltext"
msgstr "Texto completo"
#: modules/bibformat/lib/elements/bfe_edit_files.py:50
msgid "Manage Files of This Record"
msgstr "Gestionar los ficheros de este registro"
#: modules/bibformat/lib/elements/bfe_edit_record.py:46
msgid "Edit This Record"
msgstr "Editar este registro"
#: modules/bibformat/web/admin/bibformatadmin.py:182
#: modules/bibformat/web/admin/bibformatadmin.py:252
#: modules/bibformat/web/admin/bibformatadmin.py:299
#: modules/bibformat/web/admin/bibformatadmin.py:1002
msgid "Restricted Output Format"
msgstr "Formato de salida restringido"
#: modules/bibformat/web/admin/bibformatadmin.py:208
#: modules/bibformat/web/admin/bibformatadmin.py:534
#: modules/bibknowledge/lib/bibknowledgeadmin.py:563
msgid "Ok"
msgstr "Aceptar"
#: modules/bibformat/web/admin/bibformatadmin.py:210
#, python-format
msgid "Output Format %s Rules"
msgstr "Reglas del formato de salida %s"
#: modules/bibformat/web/admin/bibformatadmin.py:265
#, python-format
msgid "Output Format %s Attributes"
msgstr "Atributos del formato de salida %s"
#: modules/bibformat/web/admin/bibformatadmin.py:312
#, python-format
msgid "Output Format %s Dependencies"
msgstr "Dependencias del formato de salida %s"
#: modules/bibformat/web/admin/bibformatadmin.py:384
msgid "Delete Output Format"
msgstr "Suprimir el formato de salida"
#: modules/bibformat/web/admin/bibformatadmin.py:405
#: modules/bibformat/web/admin/bibformatadmin.py:714
#: modules/webbasket/lib/webbasket_templates.py:1451
#: modules/webbasket/lib/webbasket_templates.py:1519
#: modules/webbasket/lib/webbasket_templates.py:1626
#: modules/webbasket/lib/webbasket_templates.py:1681
#: modules/webbasket/lib/webbasket_templates.py:1771
#: modules/webbasket/lib/webbasket_templates.py:2839
#: modules/webbasket/lib/webbasket_templates.py:3642
#: modules/websession/lib/websession_templates.py:1781
#: modules/websession/lib/websession_templates.py:1889
#: modules/websession/lib/websession_templates.py:2091
#: modules/websession/lib/websession_templates.py:2174
#: modules/websubmit/lib/websubmit_managedocfiles.py:877
#: modules/websubmit/lib/websubmit_templates.py:2535
#: modules/websubmit/lib/websubmit_templates.py:2598
#: modules/websubmit/lib/websubmit_templates.py:2618
#: modules/websubmit/web/publiline.py:1228
#: modules/webjournal/lib/webjournaladminlib.py:117
#: modules/webjournal/lib/webjournaladminlib.py:230
#: modules/bibedit/lib/bibeditmulti_templates.py:564
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:261
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:403
#: modules/bibcirculation/lib/bibcirculation_templates.py:784
#: modules/bibcirculation/lib/bibcirculation_templates.py:1411
#: modules/bibcirculation/lib/bibcirculation_templates.py:1556
#: modules/bibcirculation/lib/bibcirculation_templates.py:4705
#: modules/bibknowledge/lib/bibknowledgeadmin.py:747
msgid "Cancel"
msgstr "Cancelar"
#: modules/bibformat/web/admin/bibformatadmin.py:434
msgid "Cannot create output format"
msgstr "No ha sido posible crear el formato de salida"
#: modules/bibformat/web/admin/bibformatadmin.py:513
#: modules/bibformat/web/admin/bibformatadmin.py:587
#: modules/bibformat/web/admin/bibformatadmin.py:1023
msgid "Restricted Format Template"
msgstr "Plantilla de formato restringido"
#: modules/bibformat/web/admin/bibformatadmin.py:539
#, python-format
msgid "Format Template %s"
msgstr "Plantilla de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:598
#, python-format
msgid "Format Template %s Attributes"
msgstr "Atributos de la plantilla de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:632
#, python-format
msgid "Format Template %s Dependencies"
msgstr "Dependencias de la plantilla de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:694
msgid "Delete Format Template"
msgstr "Suprimir la plantilla de formato"
#: modules/bibformat/web/admin/bibformatadmin.py:920
#, python-format
msgid "Format Element %s Dependencies"
msgstr "Dependencias del elemento de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:953
#, python-format
msgid "Test Format Element %s"
msgstr "Probar el elemento de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:1016
#, python-format
msgid "Validation of Output Format %s"
msgstr "Validación del formato de salida %s"
#: modules/bibformat/web/admin/bibformatadmin.py:1034
#, python-format
msgid "Validation of Format Template %s"
msgstr "Validación de la plantilla de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:1042
msgid "Restricted Format Element"
msgstr "Elemento de formato restringido"
#: modules/bibformat/web/admin/bibformatadmin.py:1050
#, python-format
msgid "Validation of Format Element %s"
msgstr "Validación del elemento de formato %s"
#: modules/bibformat/web/admin/bibformatadmin.py:1053
msgid "Format Validation"
msgstr "Validación del formato"
#: modules/bibharvest/lib/bibharvest_templates.py:53
#: modules/bibharvest/lib/bibharvest_templates.py:70
msgid "See Guide"
msgstr "Véase la guía"
#: modules/bibharvest/lib/bibharvest_templates.py:81
msgid "OAI sources currently present in the database"
msgstr "Servidores OAI que están actualmente en la base de datos"
#: modules/bibharvest/lib/bibharvest_templates.py:82
msgid "No OAI sources currently present in the database"
msgstr "No hay ningún servidor OAI en la base de datos"
#: modules/bibharvest/lib/bibharvest_templates.py:92
msgid "Next oaiharvest task"
msgstr "Siguiente tarea oaiharvest"
#: modules/bibharvest/lib/bibharvest_templates.py:93
msgid "scheduled time:"
msgstr "previsto para:"
#: modules/bibharvest/lib/bibharvest_templates.py:94
msgid "current status:"
msgstr "estado actual:"
#: modules/bibharvest/lib/bibharvest_templates.py:95
msgid "No oaiharvest task currently scheduled."
msgstr "No hay ninguna tarea oaiharvest programada."
#: modules/bibharvest/lib/bibharvest_templates.py:202
msgid "successfully validated"
msgstr "validado correctamente"
#: modules/bibharvest/lib/bibharvest_templates.py:203
msgid "does not seem to be a OAI-compliant baseURL"
msgstr "no parece una baseURL que cumpla con OAI"
#: modules/bibharvest/lib/bibharvest_templates.py:284
msgid "View next entries..."
msgstr "Ver las próximas entradas..."
#: modules/bibharvest/lib/bibharvest_templates.py:341
msgid "previous month"
msgstr "mes anterior"
#: modules/bibharvest/lib/bibharvest_templates.py:348
msgid "next month"
msgstr "mes siguiente"
#: modules/bibharvest/lib/bibharvest_templates.py:443
msgid "main Page"
msgstr "Página principal"
#: modules/bibharvest/lib/bibharvest_templates.py:450
#: modules/bibharvest/lib/oai_harvest_admin.py:94
msgid "edit"
msgstr "editar"
#: modules/bibharvest/lib/bibharvest_templates.py:454
#: modules/websubmit/lib/websubmit_managedocfiles.py:1037
#: modules/bibharvest/lib/oai_harvest_admin.py:98
msgid "delete"
msgstr "suprimir"
#: modules/bibharvest/lib/bibharvest_templates.py:458
#: modules/bibharvest/lib/oai_harvest_admin.py:102
msgid "test"
msgstr "comprobar"
#: modules/bibharvest/lib/bibharvest_templates.py:462
#: modules/bibharvest/lib/oai_harvest_admin.py:106
msgid "history"
msgstr "historial"
#: modules/bibharvest/lib/bibharvest_templates.py:466
#: modules/bibharvest/lib/oai_harvest_admin.py:110
msgid "harvest"
msgstr "recolectar"
#: modules/bibrank/lib/bibrank_citation_grapher.py:137
msgid "Citation history:"
msgstr "Histórico de citas:"
#: modules/bibrank/lib/bibrank_downloads_grapher.py:85
msgid "Download history:"
msgstr "Histórico de descargas:"
#: modules/bibrank/lib/bibrank_downloads_grapher.py:107
msgid "Download user distribution:"
msgstr "Distribución de las descargas:"
#: modules/bibupload/lib/batchuploader_templates.py:138
msgid "Warning: Please, select a valid time"
msgstr "Atención: seleccione una hora válida"
#: modules/bibupload/lib/batchuploader_templates.py:142
msgid "Warning: Please, select a valid file"
msgstr "Atención: seleccione un fichero válido"
#: modules/bibupload/lib/batchuploader_templates.py:146
msgid "Warning: The date format is not correct"
msgstr "Atención: el formato de la fecha no es correcto"
#: modules/bibupload/lib/batchuploader_templates.py:150
msgid "Warning: Please, select a valid date"
msgstr "Atención: seleccione una fecha válida"
#: modules/bibupload/lib/batchuploader_templates.py:185
msgid "Select file to upload"
msgstr "Seleccione el fichero a subir"
#: modules/bibupload/lib/batchuploader_templates.py:186
msgid "File type"
msgstr "Tipo de fichero"
#: modules/bibupload/lib/batchuploader_templates.py:187
#: modules/bibupload/lib/batchuploader_templates.py:395
msgid "Upload mode"
msgstr "Modo de carga"
#: modules/bibupload/lib/batchuploader_templates.py:188
#: modules/bibupload/lib/batchuploader_templates.py:396
msgid "Upload later? then select:"
msgstr "¿Prefiere subirlo después? En este caso, seleccione:"
#: modules/bibupload/lib/batchuploader_templates.py:189
#: modules/bibupload/lib/batchuploader_templates.py:397
#: modules/webmessage/lib/webmessage_templates.py:88
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:535
msgid "Date"
msgstr "Fecha"
#: modules/bibupload/lib/batchuploader_templates.py:190
#: modules/bibupload/lib/batchuploader_templates.py:398
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/webstyle/lib/webstyle_templates.py:670
msgid "Time"
msgstr "Hora"
#: modules/bibupload/lib/batchuploader_templates.py:191
#: modules/bibupload/lib/batchuploader_templates.py:393
#: modules/bibupload/lib/batchuploader_templates.py:399
#: modules/websession/lib/websession_templates.py:161
#: modules/websession/lib/websession_templates.py:164
#: modules/websession/lib/websession_templates.py:1038
msgid "Example"
msgstr "Ejemplo"
#: modules/bibupload/lib/batchuploader_templates.py:192
#: modules/bibupload/lib/batchuploader_templates.py:400
#, python-format
msgid "All fields with %(x_fmt_open)s*%(x_fmt_close)s are mandatory"
msgstr "Todos los campos con %(x_fmt_open)s*%(x_fmt_close)s son obligatorios"
#: modules/bibupload/lib/batchuploader_templates.py:194
#: modules/bibupload/lib/batchuploader_templates.py:391
msgid "Upload priority"
msgstr "Prioridad de los envíos"
#: modules/bibupload/lib/batchuploader_templates.py:214
#, python-format
msgid ""
"Your file has been successfully queued. You can check your "
"%(x_url1_open)supload history%(x_url1_close)s or %(x_url2_open)ssubmit "
"another file%(x_url2_close)s"
msgstr ""
"Su fichero está en cola. Ahora puede comprobar su %(x_url1_open)shistórico "
"de cargas%(x_url1_close)s o %(x_url2_open)senviar otro fichero"
"%(x_url2_close)s"
#: modules/bibupload/lib/batchuploader_templates.py:225
#, python-format
msgid ""
"The MARCXML submitted is not valid. Please, review the file and "
"%(x_url2_open)sresubmit it%(x_url2_close)s"
msgstr ""
"El fichero MARCXML enviado no es válido. Por favor revíselo y "
"%(x_url2_open)svuélvalo a enviar%(x_url2_close)s"
#: modules/bibupload/lib/batchuploader_templates.py:237
msgid "No metadata files have been uploaded yet."
msgstr "Todavía no se ha subido ningún fichero con metadatos."
#: modules/bibupload/lib/batchuploader_templates.py:250
#: modules/bibupload/lib/batchuploader_templates.py:292
msgid "Submit time"
msgstr "Enviado el"
#: modules/bibupload/lib/batchuploader_templates.py:251
#: modules/bibupload/lib/batchuploader_templates.py:293
msgid "File name"
msgstr "Nombre del fichero"
#: modules/bibupload/lib/batchuploader_templates.py:252
#: modules/bibupload/lib/batchuploader_templates.py:294
msgid "Execution time"
msgstr "Tiempo de ejecución"
#: modules/bibupload/lib/batchuploader_templates.py:279
msgid "No document files have been uploaded yet."
msgstr "Todavía no se ha subido ningún documento."
#: modules/bibupload/lib/batchuploader_templates.py:334
#: modules/bibupload/lib/batchuploader_webinterface.py:73
#: modules/bibupload/lib/batchuploader_webinterface.py:243
msgid "Metadata batch upload"
msgstr "Carga masiva de metadatos"
#: modules/bibupload/lib/batchuploader_templates.py:337
#: modules/bibupload/lib/batchuploader_webinterface.py:96
#: modules/bibupload/lib/batchuploader_webinterface.py:151
msgid "Document batch upload"
msgstr "Carga masiva de documentos"
#: modules/bibupload/lib/batchuploader_templates.py:340
#: modules/bibupload/lib/batchuploader_webinterface.py:267
msgid "Upload history"
msgstr "Histórico de envíos"
#: modules/bibupload/lib/batchuploader_templates.py:343
msgid "Daemon monitor"
msgstr "Seguimiento de tareas"
#: modules/bibupload/lib/batchuploader_templates.py:392
msgid "Input directory"
msgstr "Directorio de entrada"
#: modules/bibupload/lib/batchuploader_templates.py:394
msgid "Filename matching"
msgstr "Ficheros del tipo"
#: modules/bibupload/lib/batchuploader_templates.py:409
#, python-format
msgid "<b>%s documents</b> have been found."
msgstr "Se han encontrado <b>%s documentos</b>."
#: modules/bibupload/lib/batchuploader_templates.py:411
msgid "The following files have been successfully queued:"
msgstr "Los siguientes ficheros se han puesto en la cola correctamente:"
#: modules/bibupload/lib/batchuploader_templates.py:416
msgid "The following errors have occurred:"
-msgstr "Se han encontrado los siguentes errores:"
+msgstr "Se han encontrado los siguientes errores:"
#: modules/bibupload/lib/batchuploader_templates.py:423
msgid ""
"Some files could not be moved to DONE folder. Please remove them manually."
msgstr ""
"Algunos ficheros no se han podido pasar al directorio DONE. Bórrelos "
"manualmente."
#: modules/bibupload/lib/batchuploader_templates.py:425
msgid "All uploaded files were moved to DONE folder."
msgstr "Todos los ficheros se han pasado al directorio DONE."
#: modules/bibupload/lib/batchuploader_templates.py:435
#, python-format
msgid ""
"Using %(x_fmt_open)sweb interface upload%(x_fmt_close)s, actions are "
"executed a single time."
msgstr ""
"Usando la %(x_fmt_open)sinterfaz web de subir ficheros%(x_fmt_close)s, las "
"acciones sólo se ejecutan una vez."
#: modules/bibupload/lib/batchuploader_templates.py:437
#, python-format
msgid ""
"Check the %(x_url_open)sBatch Uploader daemon help page%(x_url_close)s for "
"executing these actions periodically."
msgstr ""
"Consulte la %(x_url_open)spágina de ayuda del Batch Uploader daemon"
"%(x_url_close)s para ejecutar estas acciones periódicamente."
#: modules/bibupload/lib/batchuploader_templates.py:442
msgid "Metadata folders"
msgstr "Carpetas de metadatos"
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/bibcirculation/lib/bibcirculation_templates.py:2301
#: modules/bibcirculation/lib/bibcirculation_templates.py:2414
#: modules/bibcirculation/lib/bibcirculation_templates.py:2658
#: modules/bibcirculation/lib/bibcirculation_templates.py:4390
#: modules/bibcirculation/lib/bibcirculation_templates.py:5551
#: modules/bibcirculation/lib/bibcirculation_templates.py:6453
#: modules/bibcirculation/lib/bibcirculation_templates.py:8976
#: modules/bibcirculation/lib/bibcirculation_templates.py:9109
#: modules/bibcirculation/lib/bibcirculation_templates.py:9878
#: modules/bibcirculation/lib/bibcirculation_templates.py:10356
#: modules/bibcirculation/lib/bibcirculation_templates.py:11101
#: modules/bibcirculation/lib/bibcirculation_templates.py:11517
#: modules/bibcirculation/lib/bibcirculation_templates.py:11638
#: modules/bibcirculation/lib/bibcirculation_templates.py:15342
msgid "ID"
msgstr "Id"
#: modules/bibupload/lib/batchuploader_templates.py:464
msgid "Progress"
msgstr "Progreso"
#: modules/bibupload/lib/batchuploader_templates.py:466
msgid "Last BibSched tasks:"
msgstr "Últimas tareas en el BibSched:"
#: modules/bibupload/lib/batchuploader_templates.py:475
msgid "Next scheduled BibSched run:"
msgstr "Siguiente tarea en BibSched:"
#: modules/bibupload/lib/batchuploader_webinterface.py:154
msgid "Document batch upload result"
msgstr "Resultado de la subida masiva"
#: modules/bibupload/lib/batchuploader_webinterface.py:238
msgid "Invalid MARCXML"
msgstr "MARCXML no válido"
#: modules/bibupload/lib/batchuploader_webinterface.py:241
msgid "Upload successful"
msgstr "Subida correcta"
#: modules/bibupload/lib/batchuploader_webinterface.py:291
msgid "Batch Uploader: Daemon monitor"
msgstr "Subida masiva: seguimiento del proceso"
#: modules/miscutil/lib/dateutils.py:82 modules/miscutil/lib/dateutils.py:109
#: modules/webbasket/lib/webbasket.py:208
#: modules/webbasket/lib/webbasket.py:870
#: modules/webbasket/lib/webbasket.py:965
#: modules/websession/lib/webuser.py:301
#: modules/webstyle/lib/webstyle_templates.py:579
msgid "N/A"
msgstr "N/D"
#: modules/miscutil/lib/dateutils.py:172
msgid "Sun"
msgstr "Dom"
#: modules/miscutil/lib/dateutils.py:173
msgid "Mon"
msgstr "Lun"
#: modules/miscutil/lib/dateutils.py:174
msgid "Tue"
msgstr "Mar"
#: modules/miscutil/lib/dateutils.py:175
msgid "Wed"
msgstr "Mié"
#: modules/miscutil/lib/dateutils.py:176
msgid "Thu"
msgstr "Jue"
#: modules/miscutil/lib/dateutils.py:177
msgid "Fri"
msgstr "Vie"
#: modules/miscutil/lib/dateutils.py:178
msgid "Sat"
msgstr "Sáb"
#: modules/miscutil/lib/dateutils.py:180
msgid "Sunday"
msgstr "Domingo"
#: modules/miscutil/lib/dateutils.py:181
msgid "Monday"
msgstr "Lunes"
#: modules/miscutil/lib/dateutils.py:182
msgid "Tuesday"
msgstr "Martes"
#: modules/miscutil/lib/dateutils.py:183
msgid "Wednesday"
msgstr "Miércoles"
#: modules/miscutil/lib/dateutils.py:184
msgid "Thursday"
msgstr "Jueves"
#: modules/miscutil/lib/dateutils.py:185
msgid "Friday"
msgstr "Viernes"
#: modules/miscutil/lib/dateutils.py:186
msgid "Saturday"
msgstr "Sábado"
#: modules/miscutil/lib/dateutils.py:200 modules/miscutil/lib/dateutils.py:214
msgid "Month"
msgstr "Mes"
#: modules/miscutil/lib/dateutils.py:201
msgid "Jan"
msgstr "Ene"
#: modules/miscutil/lib/dateutils.py:202
msgid "Feb"
msgstr "Feb"
#: modules/miscutil/lib/dateutils.py:203
msgid "Mar"
msgstr "Mar"
#: modules/miscutil/lib/dateutils.py:204
msgid "Apr"
msgstr "Abr"
#: modules/miscutil/lib/dateutils.py:205 modules/miscutil/lib/dateutils.py:219
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "May"
msgstr "May"
#: modules/miscutil/lib/dateutils.py:206
msgid "Jun"
msgstr "Jun"
#: modules/miscutil/lib/dateutils.py:207
msgid "Jul"
msgstr "Jul"
#: modules/miscutil/lib/dateutils.py:208
msgid "Aug"
msgstr "Ago"
#: modules/miscutil/lib/dateutils.py:209
msgid "Sep"
msgstr "Sep"
#: modules/miscutil/lib/dateutils.py:210
msgid "Oct"
msgstr "Oct"
#: modules/miscutil/lib/dateutils.py:211
msgid "Nov"
msgstr "Nov"
#: modules/miscutil/lib/dateutils.py:212
msgid "Dec"
msgstr "Dic"
#: modules/miscutil/lib/dateutils.py:215
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "January"
msgstr "Enero"
#: modules/miscutil/lib/dateutils.py:216
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "February"
msgstr "Febrero"
#: modules/miscutil/lib/dateutils.py:217
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "March"
msgstr "Marzo"
#: modules/miscutil/lib/dateutils.py:218
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "April"
msgstr "Abril"
#: modules/miscutil/lib/dateutils.py:220
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "June"
msgstr "Junio"
#: modules/miscutil/lib/dateutils.py:221
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "July"
msgstr "Julio"
#: modules/miscutil/lib/dateutils.py:222
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "August"
msgstr "Agosto"
#: modules/miscutil/lib/dateutils.py:223
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "September"
msgstr "Septiembre"
#: modules/miscutil/lib/dateutils.py:224
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "October"
msgstr "Octubre"
#: modules/miscutil/lib/dateutils.py:225
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "November"
msgstr "Noviembre"
#: modules/miscutil/lib/dateutils.py:226
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "December"
msgstr "Diciembre"
#: modules/miscutil/lib/dateutils.py:244
msgid "Day"
msgstr "Día"
#: modules/miscutil/lib/dateutils.py:295
#: modules/bibcirculation/lib/bibcirculation_utils.py:432
#: modules/bibcirculation/lib/bibcirculation_templates.py:1757
#: modules/bibcirculation/lib/bibcirculation_templates.py:2743
#: modules/bibcirculation/lib/bibcirculation_templates.py:3096
#: modules/bibcirculation/lib/bibcirculation_templates.py:7154
#: modules/bibcirculation/lib/bibcirculation_templates.py:7431
#: modules/bibcirculation/lib/bibcirculation_templates.py:8083
#: modules/bibcirculation/lib/bibcirculation_templates.py:8240
#: modules/bibcirculation/lib/bibcirculation_templates.py:9597
#: modules/bibcirculation/lib/bibcirculation_templates.py:9836
#: modules/bibcirculation/lib/bibcirculation_templates.py:10072
#: modules/bibcirculation/lib/bibcirculation_templates.py:10315
#: modules/bibcirculation/lib/bibcirculation_templates.py:10525
#: modules/bibcirculation/lib/bibcirculation_templates.py:10750
#: modules/bibcirculation/lib/bibcirculation_templates.py:11209
#: modules/bibcirculation/lib/bibcirculation_templates.py:11353
#: modules/bibcirculation/lib/bibcirculation_templates.py:11858
#: modules/bibcirculation/lib/bibcirculation_templates.py:11954
#: modules/bibcirculation/lib/bibcirculation_templates.py:12071
#: modules/bibcirculation/lib/bibcirculation_templates.py:12155
#: modules/bibcirculation/lib/bibcirculation_templates.py:12835
#: modules/bibcirculation/lib/bibcirculation_templates.py:12938
#: modules/bibcirculation/lib/bibcirculation_templates.py:13608
#: modules/bibcirculation/lib/bibcirculation_templates.py:13869
#: modules/bibcirculation/lib/bibcirculation_templates.py:14924
#: modules/bibcirculation/lib/bibcirculation_templates.py:15146
#: modules/bibcirculation/lib/bibcirculation_templates.py:15422
#: modules/bibcirculation/lib/bibcirculation_templates.py:16843
#: modules/bibcirculation/lib/bibcirculation_templates.py:17031
#: modules/bibcirculation/lib/bibcirculation_templates.py:17316
#: modules/bibcirculation/lib/bibcirculation_templates.py:17514
#: modules/bibcirculation/lib/bibcirculation_templates.py:17913
msgid "Year"
msgstr "Año"
#: modules/miscutil/lib/errorlib_webinterface.py:64
#: modules/miscutil/lib/errorlib_webinterface.py:69
#: modules/miscutil/lib/errorlib_webinterface.py:74
#: modules/miscutil/lib/errorlib_webinterface.py:79
msgid "Sorry"
msgstr "Lo sentimos"
#: modules/miscutil/lib/errorlib_webinterface.py:65
#: modules/miscutil/lib/errorlib_webinterface.py:70
#: modules/miscutil/lib/errorlib_webinterface.py:75
#: modules/miscutil/lib/errorlib_webinterface.py:80
#, python-format
msgid "Cannot send error request, %s parameter missing."
msgstr "No ha sido posible enviar la petición de error, falta el parámetro %s."
#: modules/miscutil/lib/errorlib_webinterface.py:98
msgid "The error report has been sent."
msgstr "Se ha enviado el informe de error."
#: modules/miscutil/lib/errorlib_webinterface.py:99
msgid "Many thanks for helping us to improve the service."
msgstr "Muchas gracias por ayudarnos a mejorar el servicio."
#: modules/miscutil/lib/errorlib_webinterface.py:101
msgid "Use the back button of your browser to return to the previous page."
msgstr ""
"Use el botón de retroceso de su navegador para volver a la página anterior."
#: modules/miscutil/lib/errorlib_webinterface.py:103
msgid "Thank you!"
msgstr "¡Gracias!"
#: modules/miscutil/lib/inveniocfg.py:491
msgid "journal"
msgstr "revista"
#: modules/miscutil/lib/inveniocfg.py:493
msgid "record ID"
msgstr "el número de registro"
#: modules/miscutil/lib/inveniocfg.py:506
msgid "word similarity"
msgstr "similitud de palabras"
#: modules/miscutil/lib/inveniocfg.py:507
msgid "journal impact factor"
msgstr "factor de impacto de la revista"
#: modules/miscutil/lib/inveniocfg.py:508
msgid "times cited"
msgstr "veces citado"
#: modules/miscutil/lib/inveniocfg.py:509
msgid "time-decay cite count"
msgstr "contador de citas en el tiempo"
#: modules/miscutil/lib/inveniocfg.py:510
msgid "all-time-best cite rank"
msgstr "clasificación por máximo número de citas"
#: modules/miscutil/lib/inveniocfg.py:511
msgid "time-decay cite rank"
msgstr "clasificación por citas en el tiempo"
#: modules/miscutil/lib/mailutils.py:210 modules/miscutil/lib/mailutils.py:223
#: modules/webcomment/lib/webcomment_templates.py:2107
msgid "Hello:"
msgstr "Hola:"
#: modules/miscutil/lib/mailutils.py:241 modules/miscutil/lib/mailutils.py:261
msgid "Best regards"
msgstr "Cordialmente"
#: modules/miscutil/lib/mailutils.py:243 modules/miscutil/lib/mailutils.py:263
msgid "Need human intervention? Contact"
msgstr "¿Necesita la intervención del administrador? Póngase en contacto"
#: modules/webaccess/lib/access_control_config.py:300
#: modules/websession/lib/websession_templates.py:1090
#: modules/websession/lib/webuser.py:896 modules/websession/lib/webuser.py:905
#: modules/websession/lib/webuser.py:906
msgid "Run Record Editor"
msgstr "Ejecutar el editor de registros"
#: modules/webaccess/lib/access_control_config.py:301
#: modules/websession/lib/websession_templates.py:1092
msgid "Run Multi-Record Editor"
msgstr "Ejecutar el editor de múltiples registros"
#: modules/webaccess/lib/access_control_config.py:302
#: modules/websession/lib/websession_templates.py:1123
#: modules/websession/lib/webuser.py:897 modules/websession/lib/webuser.py:907
#: modules/websession/lib/webuser.py:908
msgid "Run Document File Manager"
msgstr "Ejecutar el gestor de ficheros del documento"
#: modules/webaccess/lib/access_control_config.py:303
#: modules/websession/lib/websession_templates.py:1096
msgid "Run Record Merger"
msgstr "Unificar registros"
#: modules/webaccess/lib/access_control_config.py:304
msgid "Run BibSword client"
msgstr "Ejecutar el cliente BibSword"
#: modules/webaccess/lib/access_control_config.py:305
#: modules/websession/lib/websession_templates.py:1103
msgid "Configure BibKnowledge"
msgstr "Configurar BibKnowledge"
#: modules/webaccess/lib/access_control_config.py:306
#: modules/websession/lib/websession_templates.py:1102
msgid "Configure BibFormat"
msgstr "Configurar BibFormat"
#: modules/webaccess/lib/access_control_config.py:307
#: modules/websession/lib/websession_templates.py:1105
msgid "Configure OAI Harvest"
msgstr "Configurar OAI Harvest"
#: modules/webaccess/lib/access_control_config.py:308
#: modules/websession/lib/websession_templates.py:1107
msgid "Configure OAI Repository"
msgstr "Configurar OAI Repository"
#: modules/webaccess/lib/access_control_config.py:309
#: modules/websession/lib/websession_templates.py:1109
msgid "Configure BibIndex"
msgstr "Configurar BibIndex"
#: modules/webaccess/lib/access_control_config.py:310
#: modules/websession/lib/websession_templates.py:1111
msgid "Configure BibRank"
msgstr "Configurar BibRank"
#: modules/webaccess/lib/access_control_config.py:311
#: modules/websession/lib/websession_templates.py:1113
msgid "Configure WebAccess"
msgstr "Configurar WebAccess"
#: modules/webaccess/lib/access_control_config.py:312
#: modules/websession/lib/websession_templates.py:1115
msgid "Configure WebComment"
msgstr "Configurar WebComment"
#: modules/webaccess/lib/access_control_config.py:313
#: modules/websession/lib/websession_templates.py:1119
msgid "Configure WebSearch"
msgstr "Configurar WebSearch"
#: modules/webaccess/lib/access_control_config.py:314
#: modules/websession/lib/websession_templates.py:1121
msgid "Configure WebSubmit"
msgstr "Configurar WebSumbit"
#: modules/webaccess/lib/access_control_config.py:315
#: modules/websession/lib/websession_templates.py:1117
msgid "Configure WebJournal"
msgstr "Configurar WebJournal"
#: modules/webaccess/lib/access_control_config.py:316
#: modules/websession/lib/websession_templates.py:1094
msgid "Run BibCirculation"
msgstr "Ejecutar BibCirculation"
#: modules/webaccess/lib/access_control_config.py:317
#: modules/websession/lib/websession_templates.py:1100
msgid "Run Batch Uploader"
msgstr "Ejecutar el gestor de cargas masivas"
#: modules/webaccess/lib/access_control_config.py:318
msgid "Run Person/Author Manager"
msgstr "Ejecutar el gestor de personas/autores"
#: modules/webaccess/lib/webaccessadmin_lib.py:3703
#, python-format
msgid "Your account on '%s' has been activated"
msgstr "Su cuenta en «%s» ha sido activada."
#: modules/webaccess/lib/webaccessadmin_lib.py:3704
#, python-format
msgid "Your account earlier created on '%s' has been activated:"
msgstr "Su cuenta creada previamente en «%s» ha sido activada:"
#: modules/webaccess/lib/webaccessadmin_lib.py:3706
#: modules/webaccess/lib/webaccessadmin_lib.py:3719
#: modules/webaccess/lib/webaccessadmin_lib.py:3745
msgid "Username/Email:"
msgstr "Nombre de usuario"
#: modules/webaccess/lib/webaccessadmin_lib.py:3707
#: modules/webaccess/lib/webaccessadmin_lib.py:3720
msgid "Password:"
msgstr "Contraseña"
#: modules/webaccess/lib/webaccessadmin_lib.py:3717
#, python-format
msgid "Account created on '%s'"
msgstr "Cuenta creada en «%s»"
#: modules/webaccess/lib/webaccessadmin_lib.py:3718
#, python-format
msgid "An account has been created for you on '%s':"
msgstr "Se ha creado su cuenta en «%s»:"
#: modules/webaccess/lib/webaccessadmin_lib.py:3730
#, python-format
msgid "Account rejected on '%s'"
msgstr "Cuenta rechazada en «%s»"
#: modules/webaccess/lib/webaccessadmin_lib.py:3731
#, python-format
msgid "Your request for an account has been rejected on '%s':"
msgstr "Su petición de una cuenta en «%s» ha sido rechazada:"
#: modules/webaccess/lib/webaccessadmin_lib.py:3733
#, python-format
msgid "Username/Email: %s"
msgstr "Nombre de usuario: %s"
#: modules/webaccess/lib/webaccessadmin_lib.py:3743
#, python-format
msgid "Account deleted on '%s'"
msgstr "Cuenta en «%s» eliminada"
#: modules/webaccess/lib/webaccessadmin_lib.py:3744
#, python-format
msgid "Your account on '%s' has been deleted:"
msgstr "Su cuenta en «%s» ha sido eliminada:"
#: modules/webalert/lib/htmlparser.py:186
#: modules/webbasket/lib/webbasket_templates.py:2373
#: modules/webbasket/lib/webbasket_templates.py:3249
#: modules/websearch/lib/websearch_templates.py:1610
#: modules/websearch/lib/websearch_templates.py:3330
#: modules/websearch/lib/websearch_templates.py:3336
#: modules/websearch/lib/websearch_templates.py:3341
msgid "Detailed record"
msgstr "Registro completo"
#: modules/webalert/lib/htmlparser.py:187
#: modules/websearch/lib/websearch_templates.py:1613
#: modules/websearch/lib/websearch_templates.py:3348
#: modules/webstyle/lib/webstyle_templates.py:845
msgid "Similar records"
msgstr "Registros similares"
#: modules/webalert/lib/htmlparser.py:188
msgid "Cited by"
msgstr "Citado por"
#: modules/webalert/lib/webalert.py:54
#, python-format
msgid "You already have an alert named %s."
msgstr "Ya tiene una alerta con el nombre de %s."
# En femenino porque es una fecha
#: modules/webalert/lib/webalert.py:111
msgid "unknown"
msgstr "desconocida"
#: modules/webalert/lib/webalert.py:163 modules/webalert/lib/webalert.py:217
#: modules/webalert/lib/webalert.py:303 modules/webalert/lib/webalert.py:341
msgid "You do not have rights for this operation."
msgstr "No tiene permisos para esta operación."
#: modules/webalert/lib/webalert.py:198
msgid "You already have an alert defined for the specified query and basket."
msgstr "Ya tiene una alerta definida para esta búsqueda y cesta."
#: modules/webalert/lib/webalert.py:221 modules/webalert/lib/webalert.py:345
msgid "The alert name cannot be empty."
msgstr "La alerta no puede estar vacía."
#: modules/webalert/lib/webalert.py:226
msgid "You are not the owner of this basket."
msgstr "Usted no es el propietario de esta cesta"
#: modules/webalert/lib/webalert.py:237
#, python-format
msgid "The alert %s has been added to your profile."
msgstr "La alerta %s ha sido añadida a su perfil"
#: modules/webalert/lib/webalert.py:376
#, python-format
msgid "The alert %s has been successfully updated."
msgstr "La alerta %s se ha actualizado correctamente."
#: modules/webalert/lib/webalert.py:428
#, python-format
msgid ""
"You have made %(x_nb)s queries. A %(x_url_open)sdetailed list%(x_url_close)s "
"is available with a possibility to (a) view search results and (b) subscribe "
"to an automatic email alerting service for these queries."
msgstr ""
"Ha hecho %(x_nb)s búsquedas. Hay una %(x_url_open)sdetailed_list"
"%(x_url_close)s disponible con la posibilidad de (a) ver los resultados de "
"la búsqueda y (b) subscribirse para recibir notificaciones por correo "
"electrónico de estas búsquedas"
#: modules/webalert/lib/webalert_templates.py:75
msgid "Pattern"
msgstr "Patrón"
#: modules/webalert/lib/webalert_templates.py:77
#: modules/bibedit/lib/bibeditmulti_templates.py:556
msgid "Field"
msgstr "Campo"
#: modules/webalert/lib/webalert_templates.py:79
msgid "Pattern 1"
msgstr "Patrón 1"
#: modules/webalert/lib/webalert_templates.py:81
msgid "Field 1"
msgstr "Campo 1"
#: modules/webalert/lib/webalert_templates.py:83
msgid "Pattern 2"
msgstr "Patrón 2"
#: modules/webalert/lib/webalert_templates.py:85
msgid "Field 2"
msgstr "Campo 2"
#: modules/webalert/lib/webalert_templates.py:87
msgid "Pattern 3"
msgstr "Patrón 3"
#: modules/webalert/lib/webalert_templates.py:89
msgid "Field 3"
msgstr "Campo 3"
#: modules/webalert/lib/webalert_templates.py:91
msgid "Collections"
msgstr "Colecciones"
#: modules/webalert/lib/webalert_templates.py:93
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:3179
#: modules/bibcirculation/lib/bibcirculation_templates.py:6049
#: modules/bibcirculation/lib/bibcirculation_templates.py:7469
#: modules/bibcirculation/lib/bibcirculation_templates.py:7620
#: modules/bibcirculation/lib/bibcirculation_templates.py:7812
#: modules/bibcirculation/lib/bibcirculation_templates.py:8110
#: modules/bibcirculation/lib/bibcirculation_templates.py:8298
#: modules/bibcirculation/lib/bibcirculation_templates.py:8480
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:17951
#: modules/bibcirculation/lib/bibcirculation_templates.py:18043
msgid "Collection"
msgstr "Colección"
#: modules/webalert/lib/webalert_templates.py:114
msgid "You own the following alerts:"
msgstr "Usted ha definido las siguientes alertas:"
#: modules/webalert/lib/webalert_templates.py:115
msgid "alert name"
msgstr "nombre de la alerta"
#: modules/webalert/lib/webalert_templates.py:124
msgid "SHOW"
msgstr "MOSTRAR"
#: modules/webalert/lib/webalert_templates.py:173
msgid ""
"This alert will notify you each time/only if a new item satisfies the "
"following query:"
msgstr ""
-"Esta alerta le notificará cada vez/sólo si un nuevo ítem satisface la "
+"Esta alerta le notificará cada vez/sólo si un nuevo elemento satisface la "
"siguiente búsqueda:"
#: modules/webalert/lib/webalert_templates.py:174
msgid "QUERY"
msgstr "BÚSQUEDA"
#: modules/webalert/lib/webalert_templates.py:212
msgid "Alert identification name:"
msgstr "Nombre de identificación de la alerta:"
#: modules/webalert/lib/webalert_templates.py:214
msgid "Search-checking frequency:"
msgstr "Frecuencia de comprobación de la búsqueda:"
#: modules/webalert/lib/webalert_templates.py:218
#: modules/webalert/lib/webalert_templates.py:338
#: modules/bibharvest/lib/oai_harvest_admin.py:142
msgid "monthly"
msgstr "mensual"
#: modules/webalert/lib/webalert_templates.py:219
#: modules/webalert/lib/webalert_templates.py:336
#: modules/bibharvest/lib/oai_harvest_admin.py:141
msgid "weekly"
msgstr "semanal"
#: modules/webalert/lib/webalert_templates.py:220
#: modules/webalert/lib/webalert_templates.py:333
#: modules/bibharvest/lib/oai_harvest_admin.py:140
msgid "daily"
msgstr "diaria"
#: modules/webalert/lib/webalert_templates.py:221
msgid "Send notification email?"
msgstr "¿Enviar notificación por correo electrónico?"
#: modules/webalert/lib/webalert_templates.py:224
#: modules/webalert/lib/webalert_templates.py:341
msgid "yes"
msgstr "sí"
#: modules/webalert/lib/webalert_templates.py:225
#: modules/webalert/lib/webalert_templates.py:343
msgid "no"
msgstr "no"
#: modules/webalert/lib/webalert_templates.py:226
#, python-format
msgid "if %(x_fmt_open)sno%(x_fmt_close)s you must specify a basket"
msgstr "si %(x_fmt_open)sno%(x_fmt_close)s debe especificar una cesta"
#: modules/webalert/lib/webalert_templates.py:228
msgid "Store results in basket?"
msgstr "¿Guardar los resultados en una cesta?"
#: modules/webalert/lib/webalert_templates.py:249
msgid "SET ALERT"
msgstr "ACTIVAR LA ALERTA"
#: modules/webalert/lib/webalert_templates.py:250
msgid "CLEAR DATA"
msgstr "BORRAR DATOS"
#: modules/webalert/lib/webalert_templates.py:301
#, python-format
msgid ""
"Set a new alert from %(x_url1_open)syour searches%(x_url1_close)s, the "
"%(x_url2_open)spopular searches%(x_url2_close)s, or the input form."
msgstr ""
"Definir una nueva alerta a partir de %(x_url1_open)ssus búsquedas"
"%(x_url1_close)s, las %(x_url2_open)sbúsquedas más habituales"
"%(x_url2_close)s, o el formulario de datos."
#: modules/webalert/lib/webalert_templates.py:319
#: modules/webcomment/lib/webcomment_templates.py:233
#: modules/webcomment/lib/webcomment_templates.py:664
#: modules/webcomment/lib/webcomment_templates.py:1965
#: modules/webcomment/lib/webcomment_templates.py:1989
#: modules/webcomment/lib/webcomment_templates.py:2015
#: modules/webmessage/lib/webmessage_templates.py:509
#: modules/websession/lib/websession_templates.py:2215
#: modules/websession/lib/websession_templates.py:2255
msgid "No"
msgstr "No"
#: modules/webalert/lib/webalert_templates.py:321
msgid "Search checking frequency"
msgstr "Frecuencia de comprobación de la búsqueda"
#: modules/webalert/lib/webalert_templates.py:322
msgid "Notification by email"
msgstr "Notificación por correo electrónico"
#: modules/webalert/lib/webalert_templates.py:323
msgid "Result in basket"
msgstr "Resultado en cesta"
#: modules/webalert/lib/webalert_templates.py:324
msgid "Date last run"
msgstr "Fecha de la última ejecución"
#: modules/webalert/lib/webalert_templates.py:325
msgid "Creation date"
msgstr "Fecha de creación"
#: modules/webalert/lib/webalert_templates.py:326
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:346
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:399
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:459
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:632
msgid "Query"
msgstr "Búsqueda"
#: modules/webalert/lib/webalert_templates.py:369
#: modules/webbasket/lib/webbasket_templates.py:1786
msgid "no basket"
msgstr "ninguna cesta"
#: modules/webalert/lib/webalert_templates.py:386
msgid "Modify"
msgstr "Modificar"
#: modules/webalert/lib/webalert_templates.py:392
#: modules/webjournal/lib/webjournaladminlib.py:231
#: modules/webjournal/lib/webjournaladminlib.py:237
msgid "Remove"
msgstr "Eliminar"
#: modules/webalert/lib/webalert_templates.py:394
#: modules/webalert/lib/webalert_templates.py:484
msgid "Execute search"
msgstr "Ejecutar la búsqueda"
#: modules/webalert/lib/webalert_templates.py:400
#, python-format
msgid "You have defined %s alerts."
msgstr "Usted ha definido %s alertas."
#: modules/webalert/lib/webalert_templates.py:438
#, python-format
msgid ""
"You have not executed any search yet. Please go to the %(x_url_open)ssearch "
"interface%(x_url_close)s first."
msgstr ""
"Todavía no ha ejecutado ninguna búsqueda. Vaya primero a la "
"%(x_url_open)sinterfaz de búsqueda%(x_url_close)s."
#: modules/webalert/lib/webalert_templates.py:447
#, python-format
msgid ""
"You have performed %(x_nb1)s searches (%(x_nb2)s different questions) during "
"the last 30 days or so."
msgstr ""
"Ha ejecutado %(x_nb1)s búsquedas (%(x_nb2)s cuestiones diferentes) durante "
"los últimos 30 días aproximadamente."
#: modules/webalert/lib/webalert_templates.py:452
#, python-format
msgid "Here are the %s most popular searches."
msgstr "Estas son las %s búsquedas más habituales"
#: modules/webalert/lib/webalert_templates.py:463
msgid "Question"
msgstr "Cuestión"
#: modules/webalert/lib/webalert_templates.py:467
msgid "Last Run"
-msgstr "Última actualización"
+msgstr "Última ejecución"
#: modules/webalert/lib/webalert_templates.py:485
msgid "Set new alert"
msgstr "Definir una nueva alerta"
#: modules/webalert/lib/webalert_webinterface.py:76
#: modules/webalert/lib/webalert_webinterface.py:139
#: modules/webalert/lib/webalert_webinterface.py:224
#: modules/webalert/lib/webalert_webinterface.py:302
#: modules/webalert/lib/webalert_webinterface.py:358
#: modules/webalert/lib/webalert_webinterface.py:435
#: modules/webalert/lib/webalert_webinterface.py:509
msgid "You are not authorized to use alerts."
msgstr "No está autorizado a gestionar alertas."
#: modules/webalert/lib/webalert_webinterface.py:79
msgid "Popular Searches"
msgstr "Búsquedas populares"
#: modules/webalert/lib/webalert_webinterface.py:81
#: modules/websession/lib/websession_templates.py:457
#: modules/websession/lib/websession_templates.py:619
msgid "Your Searches"
msgstr "Sus búsquedas"
#: modules/webalert/lib/webalert_webinterface.py:98
#: modules/webalert/lib/webalert_webinterface.py:150
#: modules/webalert/lib/webalert_webinterface.py:183
#: modules/webalert/lib/webalert_webinterface.py:235
#: modules/webalert/lib/webalert_webinterface.py:268
#: modules/webalert/lib/webalert_webinterface.py:319
#: modules/webalert/lib/webalert_webinterface.py:369
#: modules/webalert/lib/webalert_webinterface.py:395
#: modules/webalert/lib/webalert_webinterface.py:446
#: modules/webalert/lib/webalert_webinterface.py:472
#: modules/webalert/lib/webalert_webinterface.py:520
#: modules/webalert/lib/webalert_webinterface.py:548
#: modules/webbasket/lib/webbasket.py:2349
#: modules/webbasket/lib/webbasket_webinterface.py:800
#: modules/webbasket/lib/webbasket_webinterface.py:894
#: modules/webbasket/lib/webbasket_webinterface.py:1015
#: modules/webbasket/lib/webbasket_webinterface.py:1110
#: modules/webbasket/lib/webbasket_webinterface.py:1240
#: modules/webmessage/lib/webmessage_templates.py:466
#: modules/websession/lib/websession_templates.py:605
#: modules/websession/lib/websession_templates.py:2328
#: modules/websession/lib/websession_webinterface.py:216
#: modules/websession/lib/websession_webinterface.py:238
#: modules/websession/lib/websession_webinterface.py:280
#: modules/websession/lib/websession_webinterface.py:508
#: modules/websession/lib/websession_webinterface.py:531
#: modules/websession/lib/websession_webinterface.py:559
#: modules/websession/lib/websession_webinterface.py:575
#: modules/websession/lib/websession_webinterface.py:627
#: modules/websession/lib/websession_webinterface.py:650
#: modules/websession/lib/websession_webinterface.py:676
#: modules/websession/lib/websession_webinterface.py:749
#: modules/websession/lib/websession_webinterface.py:805
#: modules/websession/lib/websession_webinterface.py:840
#: modules/websession/lib/websession_webinterface.py:871
#: modules/websession/lib/websession_webinterface.py:943
#: modules/websession/lib/websession_webinterface.py:981
#: modules/websubmit/web/publiline.py:136
#: modules/websubmit/web/publiline.py:157
#: modules/websubmit/web/yourapprovals.py:91
#: modules/websubmit/web/yoursubmissions.py:163
msgid "Your Account"
msgstr "Su cuenta"
#: modules/webalert/lib/webalert_webinterface.py:100
#, python-format
msgid "%s Personalize, Display searches"
msgstr "%s personalizar, mostrar las búsquedas"
#: modules/webalert/lib/webalert_webinterface.py:101
#: modules/webalert/lib/webalert_webinterface.py:153
#: modules/webalert/lib/webalert_webinterface.py:186
#: modules/webalert/lib/webalert_webinterface.py:238
#: modules/webalert/lib/webalert_webinterface.py:271
#: modules/webalert/lib/webalert_webinterface.py:322
#: modules/webalert/lib/webalert_webinterface.py:372
#: modules/webalert/lib/webalert_webinterface.py:398
#: modules/webalert/lib/webalert_webinterface.py:449
#: modules/webalert/lib/webalert_webinterface.py:475
#: modules/webalert/lib/webalert_webinterface.py:523
#: modules/webalert/lib/webalert_webinterface.py:551
#: modules/websession/lib/websession_webinterface.py:219
#: modules/websession/lib/websession_webinterface.py:241
#: modules/websession/lib/websession_webinterface.py:282
#: modules/websession/lib/websession_webinterface.py:510
#: modules/websession/lib/websession_webinterface.py:533
#: modules/websession/lib/websession_webinterface.py:562
#: modules/websession/lib/websession_webinterface.py:578
#: modules/websession/lib/websession_webinterface.py:596
#: modules/websession/lib/websession_webinterface.py:606
#: modules/websession/lib/websession_webinterface.py:629
#: modules/websession/lib/websession_webinterface.py:652
#: modules/websession/lib/websession_webinterface.py:678
#, python-format
msgid "%s, personalize"
msgstr "%s, personalizar"
#: modules/webalert/lib/webalert_webinterface.py:145
#: modules/webalert/lib/webalert_webinterface.py:230
#: modules/webalert/lib/webalert_webinterface.py:364
#: modules/webalert/lib/webalert_webinterface.py:441
#: modules/webalert/lib/webalert_webinterface.py:515
#: modules/webstyle/lib/webstyle_templates.py:583
#: modules/webstyle/lib/webstyle_templates.py:620
#: modules/webstyle/lib/webstyle_templates.py:622
#: modules/websubmit/lib/websubmit_engine.py:1734
#: modules/websubmit/lib/websubmit_webinterface.py:1361
#: modules/bibcatalog/lib/bibcatalog_templates.py:37
#: modules/bibedit/lib/bibedit_webinterface.py:193
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:496
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:559
#: modules/bibknowledge/lib/bibknowledgeadmin.py:279
msgid "Error"
msgstr "Error"
#: modules/webalert/lib/webalert_webinterface.py:152
#: modules/webalert/lib/webalert_webinterface.py:185
#: modules/webalert/lib/webalert_webinterface.py:237
#: modules/webalert/lib/webalert_webinterface.py:371
#: modules/webalert/lib/webalert_webinterface.py:448
#: modules/webalert/lib/webalert_webinterface.py:522
#, python-format
msgid "%s Personalize, Set a new alert"
msgstr "%s personalizar, definir una nueva alerta"
#: modules/webalert/lib/webalert_webinterface.py:178
msgid "Set a new alert"
msgstr "Definir una nueva alerta"
#: modules/webalert/lib/webalert_webinterface.py:263
msgid "Modify alert settings"
msgstr "Modificar la alerta"
#: modules/webalert/lib/webalert_webinterface.py:270
#, python-format
msgid "%s Personalize, Modify alert settings"
msgstr "%s personalizar, modificar la alerta"
#: modules/webalert/lib/webalert_webinterface.py:314
#: modules/websession/lib/websession_templates.py:474
msgid "Your Alerts"
msgstr "Sus alertas"
#: modules/webalert/lib/webalert_webinterface.py:321
#: modules/webalert/lib/webalert_webinterface.py:397
#: modules/webalert/lib/webalert_webinterface.py:474
#: modules/webalert/lib/webalert_webinterface.py:550
#, python-format
msgid "%s Personalize, Display alerts"
msgstr "%s personalizar, mostrar alertas"
#: modules/webalert/lib/webalert_webinterface.py:390
#: modules/webalert/lib/webalert_webinterface.py:467
#: modules/webalert/lib/webalert_webinterface.py:543
msgid "Display alerts"
msgstr "Mostrar alertas:"
#: modules/webbasket/lib/webbasket.py:104
#: modules/webbasket/lib/webbasket.py:2151
#: modules/webbasket/lib/webbasket.py:2181
msgid ""
"The selected public basket does not exist or you do not have access to it."
msgstr "La cesta pública que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:112
msgid "Please select a valid public basket from the list of public baskets."
msgstr "Seleccione una cesta válida de la lista de cestas públicas."
#: modules/webbasket/lib/webbasket.py:135
#: modules/webbasket/lib/webbasket.py:298
#: modules/webbasket/lib/webbasket.py:1000
msgid "The selected item does not exist or you do not have access to it."
-msgstr "El ítem que ha seleccionado no existe o no tiene acceso."
+msgstr "El elemento que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:141
msgid "Returning to the public basket view."
msgstr "Volver a la visualización de las cestas públicas."
#: modules/webbasket/lib/webbasket.py:416
#: modules/webbasket/lib/webbasket.py:474
#: modules/webbasket/lib/webbasket.py:1419
#: modules/webbasket/lib/webbasket.py:1483
msgid "You do not have permission to write notes to this item."
-msgstr "No tiene permisos para escribir notas en este ítem."
+msgstr "No tiene permisos para escribir notas en este elemento."
#: modules/webbasket/lib/webbasket.py:429
#: modules/webbasket/lib/webbasket.py:1431
msgid ""
"The note you are quoting does not exist or you do not have access to it."
msgstr "La nota que cita no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:484
#: modules/webbasket/lib/webbasket.py:1495
msgid "You must fill in both the subject and the body of the note."
msgstr "Debe llenar tanto el asunto como el texto de la nota."
#: modules/webbasket/lib/webbasket.py:581
#: modules/webbasket/lib/webbasket.py:657
#: modules/webbasket/lib/webbasket.py:713
#: modules/webbasket/lib/webbasket.py:2680
msgid "The selected topic does not exist or you do not have access to it."
msgstr "El tema que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:623
#: modules/webbasket/lib/webbasket.py:743
#: modules/webbasket/lib/webbasket.py:2707
#: modules/webbasket/lib/webbasket.py:2715
#: modules/webbasket/lib/webbasket.py:2722
msgid "The selected basket does not exist or you do not have access to it."
msgstr "La cesta que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:734
msgid "The selected basket is no longer public."
msgstr "La cesta que ha seleccionado ya no es pública."
#: modules/webbasket/lib/webbasket.py:1548
msgid "You do not have permission to delete this note."
msgstr "No tiene permisos para borrar esta nota."
#: modules/webbasket/lib/webbasket.py:1559
msgid ""
"The note you are deleting does not exist or you do not have access to it."
msgstr "La nota que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:1656
#, python-format
msgid "Sorry, you do not have sufficient rights to add record #%i."
msgstr "No tiene permisos para añadir el registro #%i."
#: modules/webbasket/lib/webbasket.py:1662
msgid "Some of the items were not added due to lack of sufficient rights."
msgstr ""
-"No se han añadido algunos de los ítems ya que no tiene suficientes permisos."
+"No se han añadido algunos de los elementos ya que no tiene suficientes permisos."
#: modules/webbasket/lib/webbasket.py:1679
msgid "Please provide a title for the external source."
msgstr "Añada el título de la fuente externa."
#: modules/webbasket/lib/webbasket.py:1685
msgid "Please provide a description for the external source."
msgstr "Añada una descripción a la fuente externa."
#: modules/webbasket/lib/webbasket.py:1691
msgid "Please provide a url for the external source."
msgstr "Añada la url de la fuente externa."
#: modules/webbasket/lib/webbasket.py:1700
msgid "The url you have provided is not valid."
msgstr "La url que ha dado no es válida."
#: modules/webbasket/lib/webbasket.py:1707
msgid ""
"The url you have provided is not valid: The request contains bad syntax or "
"cannot be fulfilled."
msgstr ""
"Esta url no es válida: la sintaxis no es correcta o no se puede satisfacer."
#: modules/webbasket/lib/webbasket.py:1714
msgid ""
"The url you have provided is not valid: The server failed to fulfil an "
"apparently valid request."
msgstr ""
"Esta url no es válida: el servidor no contestó una petición aparentemente "
"válida."
#: modules/webbasket/lib/webbasket.py:1763
#: modules/webbasket/lib/webbasket.py:1884
#: modules/webbasket/lib/webbasket.py:1953
msgid "Sorry, you do not have sufficient rights on this basket."
msgstr "No tiene suficientes permisos sobre esta cesta."
#: modules/webbasket/lib/webbasket.py:1772
msgid "No records to add."
msgstr "Ningún registro a añadir."
#: modules/webbasket/lib/webbasket.py:1812
#: modules/webbasket/lib/webbasket.py:2652
msgid "Cannot add items to the selected basket. Invalid parameters."
msgstr ""
-"No se han podido añadir items a la cesta seleccionada. Los parámetros no "
-"eran válidos."
+"No se han podido añadir elementos a la cesta seleccionada. Los parámetros no "
+"son válidos."
#: modules/webbasket/lib/webbasket.py:1824
msgid ""
"A default topic and basket have been automatically created. Edit them to "
"rename them as you see fit."
msgstr ""
-"Se ha creat automáticament una nueva cesta y un tema por defecto. Edítelo "
+"Se ha creado automáticamente una nueva cesta y un tema por defecto. Edítelo "
"para cambiar el nombre al que más le convenga."
#: modules/webbasket/lib/webbasket.py:2101
msgid "Please provide a name for the new basket."
-msgstr "Póngale un nombre a la nueva cetra."
+msgstr "Póngale un nombre a la nueva cesta."
#: modules/webbasket/lib/webbasket.py:2107
msgid "Please select an existing topic or create a new one."
msgstr "Seleccione uno de los temas existentes o cree uno de nuevo."
#: modules/webbasket/lib/webbasket.py:2143
msgid ""
"You cannot subscribe to this basket, you are the either owner or you have "
"already subscribed."
msgstr ""
"No puede suscribir-se a esta cesta, bien porque usted es el propietario o "
"porque ya está subscrito."
#: modules/webbasket/lib/webbasket.py:2173
msgid ""
"You cannot unsubscribe from this basket, you are the either owner or you "
"have already unsubscribed."
msgstr ""
"No se puede dar de baja de esta cesta, bien porque usted es el propietario o "
"porque ya no estaba subscrito."
#: modules/webbasket/lib/webbasket.py:2266
#: modules/webbasket/lib/webbasket.py:2379
#: modules/webbasket/lib/webbasket_templates.py:101
#: modules/webbasket/lib/webbasket_templates.py:151
#: modules/webbasket/lib/webbasket_templates.py:157
#: modules/webbasket/lib/webbasket_templates.py:605
#: modules/webbasket/lib/webbasket_templates.py:657
msgid "Personal baskets"
msgstr "Cestas personales"
#: modules/webbasket/lib/webbasket.py:2290
#: modules/webbasket/lib/webbasket.py:2396
#: modules/webbasket/lib/webbasket_templates.py:167
#: modules/webbasket/lib/webbasket_templates.py:199
#: modules/webbasket/lib/webbasket_templates.py:205
#: modules/webbasket/lib/webbasket_templates.py:614
#: modules/webbasket/lib/webbasket_templates.py:691
msgid "Group baskets"
msgstr "Cestas de grupo"
#: modules/webbasket/lib/webbasket.py:2316
msgid "Others' baskets"
msgstr "Cestas de otros"
#: modules/webbasket/lib/webbasket.py:2352
#: modules/websession/lib/websession_templates.py:465
#: modules/websession/lib/websession_templates.py:613
msgid "Your Baskets"
msgstr "Sus cestas"
#: modules/webbasket/lib/webbasket.py:2357
#: modules/webbasket/lib/webbasket_webinterface.py:1273
#: modules/webbasket/lib/webbasket_webinterface.py:1358
#: modules/webbasket/lib/webbasket_webinterface.py:1401
#: modules/webbasket/lib/webbasket_webinterface.py:1461
msgid "List of public baskets"
msgstr "Lista de cestas públicas"
#: modules/webbasket/lib/webbasket.py:2368
#: modules/webbasket/lib/webbasket_webinterface.py:428
msgid "Search baskets"
msgstr "Cestas de búsquedas"
#: modules/webbasket/lib/webbasket.py:2373
#: modules/webbasket/lib/webbasket_webinterface.py:738
#: modules/websearch/lib/websearch_templates.py:2852
#: modules/websearch/lib/websearch_templates.py:3038
msgid "Add to basket"
msgstr "Añadir a la cesta"
#: modules/webbasket/lib/webbasket.py:2413
#: modules/webbasket/lib/webbasket_templates.py:218
#: modules/webbasket/lib/webbasket_templates.py:224
#: modules/webbasket/lib/webbasket_templates.py:623
#: modules/webbasket/lib/webbasket_templates.py:725
msgid "Public baskets"
msgstr "Cestas públicas"
#: modules/webbasket/lib/webbasket.py:2443
#, python-format
msgid ""
"You have %(x_nb_perso)s personal baskets and are subscribed to "
"%(x_nb_group)s group baskets and %(x_nb_public)s other users public baskets."
msgstr ""
"Tiene %(x_nb_perso)s cestas personales, está subscrito a %(x_nb_group)s "
"cestas de grupo, y a %(x_nb_public)s cestas públicas de otros usuarios."
#: modules/webbasket/lib/webbasket.py:2629
#: modules/webbasket/lib/webbasket.py:2667
msgid ""
"The category you have selected does not exist. Please select a valid "
"category."
msgstr ""
"La categoría que ha seleccionado no existe. Seleccione una categoría válida."
#: modules/webbasket/lib/webbasket.py:2693
msgid "The selected group does not exist or you do not have access to it."
msgstr "El grupo que ha seleccionado no existe o no tiene acceso."
#: modules/webbasket/lib/webbasket.py:2738
msgid "The selected output format is not available or is invalid."
msgstr "El formato que ha seleccionado no está disponible o no es válido."
#: modules/webbasket/lib/webbasket_templates.py:87
msgid ""
"You have no personal or group baskets or are subscribed to any public "
"baskets."
msgstr ""
-"No tiene cestas personales ni de grupo, ni está susbcripto a ninguna cesta "
+"No tiene cestas personales ni de grupo, ni está subscrito a ninguna cesta "
-"pública"
+"pública."
#: modules/webbasket/lib/webbasket_templates.py:88
#, python-format
msgid ""
"You may want to start by %(x_url_open)screating a new basket%(x_url_close)s."
msgstr "Puede empezar %(x_url_open)screando una nueva cesta%(x_url_close)s."
#: modules/webbasket/lib/webbasket_templates.py:112
#: modules/webbasket/lib/webbasket_templates.py:178
msgid "Back to Your Baskets"
msgstr "Volver a sus cestas"
#: modules/webbasket/lib/webbasket_templates.py:118
#: modules/webbasket/lib/webbasket_webinterface.py:1243
msgid "Create basket"
msgstr "Crear cesta"
#: modules/webbasket/lib/webbasket_templates.py:124
#: modules/webbasket/lib/webbasket_webinterface.py:1132
msgid "Edit topic"
msgstr "Editar el tema"
#: modules/webbasket/lib/webbasket_templates.py:559
msgid "Search baskets for"
msgstr "Buscarlo en las cestas"
#: modules/webbasket/lib/webbasket_templates.py:560
msgid "Search also in notes (where allowed)"
msgstr "Buscar también en las notas (si procede)"
#: modules/webbasket/lib/webbasket_templates.py:597
msgid "Results overview"
msgstr "Sumario de los resultados"
#: modules/webbasket/lib/webbasket_templates.py:598
#: modules/webbasket/lib/webbasket_templates.py:607
#: modules/webbasket/lib/webbasket_templates.py:616
#: modules/webbasket/lib/webbasket_templates.py:625
#: modules/webbasket/lib/webbasket_templates.py:634
#: modules/webbasket/lib/webbasket_templates.py:659
#: modules/webbasket/lib/webbasket_templates.py:677
#: modules/webbasket/lib/webbasket_templates.py:693
#: modules/webbasket/lib/webbasket_templates.py:711
#: modules/webbasket/lib/webbasket_templates.py:727
#: modules/webbasket/lib/webbasket_templates.py:744
#: modules/webbasket/lib/webbasket_templates.py:760
#: modules/webbasket/lib/webbasket_templates.py:776
#, python-format
msgid "%i items found"
-msgstr "%i items encontrados"
+msgstr "%i elementos encontrados"
#: modules/webbasket/lib/webbasket_templates.py:632
#: modules/webbasket/lib/webbasket_templates.py:758
msgid "All public baskets"
msgstr "Todas las cestas públicas"
#: modules/webbasket/lib/webbasket_templates.py:648
msgid "No items found."
msgstr "No se ha encontrado ninguno."
#: modules/webbasket/lib/webbasket_templates.py:675
#: modules/webbasket/lib/webbasket_templates.py:709
#: modules/webbasket/lib/webbasket_templates.py:742
#: modules/webbasket/lib/webbasket_templates.py:774
#, python-format
msgid "In %(x_linked_basket_name)s"
msgstr "En %(x_linked_basket_name)s"
#: modules/webbasket/lib/webbasket_templates.py:869
#: modules/webbasket/lib/webbasket_webinterface.py:1291
#: modules/webbasket/lib/webbasket_webinterface.py:1416
#: modules/webbasket/lib/webbasket_webinterface.py:1476
msgid "Public basket"
msgstr "Cesta pública"
#: modules/webbasket/lib/webbasket_templates.py:870
msgid "Owner"
msgstr "Propietario"
#: modules/webbasket/lib/webbasket_templates.py:871
msgid "Last update"
msgstr "Última actualización"
#: modules/webbasket/lib/webbasket_templates.py:872
#: modules/bibcirculation/lib/bibcirculation_templates.py:113
msgid "Items"
-msgstr "Ítems"
+msgstr "Elementos"
#: modules/webbasket/lib/webbasket_templates.py:873
msgid "Views"
msgstr "Vistas"
#: modules/webbasket/lib/webbasket_templates.py:955
msgid "There is currently no publicly accessible basket"
msgstr "No hay en este momento ninguna cesta públicamente accesible"
#: modules/webbasket/lib/webbasket_templates.py:977
#, python-format
msgid ""
"Displaying public baskets %(x_from)i - %(x_to)i out of "
"%(x_total_public_basket)i public baskets in total."
msgstr ""
"Lista de cestas públicas %(x_from)i - %(x_to)i de un total de "
"%(x_total_public_basket)i cestas públicas."
#: modules/webbasket/lib/webbasket_templates.py:1044
#: modules/webbasket/lib/webbasket_templates.py:1068
#, python-format
msgid "%(x_title)s, by %(x_name)s on %(x_date)s:"
msgstr "%(x_title)s, por %(x_name)s el %(x_date)s:"
#: modules/webbasket/lib/webbasket_templates.py:1047
#: modules/webbasket/lib/webbasket_templates.py:1071
#: modules/webcomment/lib/webcomment.py:1605
#, python-format
msgid "%(x_name)s wrote on %(x_date)s:"
msgstr "%(x_name)s escribió en %(x_date)s:"
#: modules/webbasket/lib/webbasket_templates.py:1127
msgid "Select topic"
msgstr "Seleccione el tema"
#: modules/webbasket/lib/webbasket_templates.py:1143
#: modules/webbasket/lib/webbasket_templates.py:1541
#: modules/webbasket/lib/webbasket_templates.py:1550
msgid "Choose topic"
msgstr "Escoja el tema"
#: modules/webbasket/lib/webbasket_templates.py:1144
#: modules/webbasket/lib/webbasket_templates.py:1552
msgid "or create a new one"
-msgstr "o cree uno de nuevo"
+msgstr "o cree uno nuevo"
#: modules/webbasket/lib/webbasket_templates.py:1144
msgid "Create new topic"
msgstr "Crear un nuevo tema"
#: modules/webbasket/lib/webbasket_templates.py:1145
#: modules/webbasket/lib/webbasket_templates.py:1538
msgid "Basket name"
msgstr "Nombre de la cesta"
#: modules/webbasket/lib/webbasket_templates.py:1147
msgid "Create a new basket"
msgstr "Crear una nueva cesta"
#: modules/webbasket/lib/webbasket_templates.py:1199
msgid "Create new basket"
msgstr "Crear una nueva cesta"
#: modules/webbasket/lib/webbasket_templates.py:1269
#: modules/webbasket/lib/webbasket_templates.py:2297
#: modules/webbasket/lib/webbasket_templates.py:2673
#: modules/webbasket/lib/webbasket_templates.py:3182
#: modules/webbasket/lib/webbasket_templates.py:3498
msgid "External item"
-msgstr "Registro externo"
+msgstr "Elemento externo"
#: modules/webbasket/lib/webbasket_templates.py:1270
msgid ""
"Provide a url for the external item you wish to add and fill in a title and "
"description"
msgstr ""
-"Escriba una url para el item externo que desea añadir y póngale un título y "
+"Escriba una url para el elemento externo que desea añadir y póngale un título y "
"descripción"
#: modules/webbasket/lib/webbasket_templates.py:1271
#: modules/websubmit/lib/websubmit_templates.py:2726
#: modules/bibcirculation/lib/bibcirculation_utils.py:428
#: modules/bibcirculation/lib/bibcirculation_templates.py:2101
#: modules/bibcirculation/lib/bibcirculation_templates.py:2741
#: modules/bibcirculation/lib/bibcirculation_templates.py:5991
#: modules/bibcirculation/lib/bibcirculation_templates.py:8850
#: modules/bibcirculation/lib/bibcirculation_templates.py:11207
#: modules/bibcirculation/lib/bibcirculation_templates.py:11950
#: modules/bibcirculation/lib/bibcirculation_templates.py:12151
#: modules/bibcirculation/lib/bibcirculation_templates.py:12934
#: modules/bibcirculation/lib/bibcirculation_templates.py:15419
#: modules/bibcirculation/lib/bibcirculation_templates.py:16118
#: modules/bibcirculation/lib/bibcirculation_templates.py:16839
#: modules/bibcirculation/lib/bibcirculation_templates.py:17027
msgid "Title"
msgstr "Título"
#: modules/webbasket/lib/webbasket_templates.py:1275
msgid "URL"
msgstr "URL"
#: modules/webbasket/lib/webbasket_templates.py:1305
#, python-format
msgid "%i items have been successfully added to your basket"
-msgstr "%i registros se han añadido correctamente a su cesta."
+msgstr "%i elementos se han añadido correctamente a su cesta"
#: modules/webbasket/lib/webbasket_templates.py:1306
#, python-format
msgid "Proceed to the %(x_url_open)sbasket%(x_url_close)s"
-msgstr "Subscríbase a la %(x_url_open)scesta%(x_url_close)s"
+msgstr "Continuar a la %(x_url_open)scesta%(x_url_close)s"
#: modules/webbasket/lib/webbasket_templates.py:1311
#, python-format
msgid " or return to your %(x_url_open)sprevious basket%(x_url_close)s"
msgstr " o vuelva a su %(x_url_open)scesta anterior%(x_url_close)s"
#: modules/webbasket/lib/webbasket_templates.py:1315
#, python-format
msgid " or return to your %(x_url_open)ssearch%(x_url_close)s"
msgstr " o vuelva a su %(x_url_open)sbúsqueda%(x_url_close)s"
#: modules/webbasket/lib/webbasket_templates.py:1428
#, python-format
msgid "Adding %i items to your baskets"
-msgstr "%i items añadidos a sus cestas"
+msgstr "Añadiendo %i elementos a sus cestas"
#: modules/webbasket/lib/webbasket_templates.py:1429
#, python-format
msgid ""
"Please choose a basket: %(x_basket_selection_box)s %(x_fmt_open)s(or "
"%(x_url_open)screate a new one%(x_url_close)s first)%(x_fmt_close)s"
msgstr ""
"Escoja una cesta: %(x_basket_selection_box)s %(x_fmt_open)s(o antes "
-"%(x_url_open)scree una de nueva%(x_url_close)s)%(x_fmt_close)s"
+"%(x_url_open)scree una nueva%(x_url_close)s)%(x_fmt_close)s"
#: modules/webbasket/lib/webbasket_templates.py:1443
msgid "Optionally, add a note to each one of these items"
-msgstr "Si lo desea, puede añadir una nota a uno de estos registros"
+msgstr "Si lo desea, puede añadir una nota a cada uno de estos elementos"
#: modules/webbasket/lib/webbasket_templates.py:1444
msgid "Optionally, add a note to this item"
-msgstr "Opcionalmente, añada una nota a este registro"
+msgstr "Opcionalmente, añada una nota a este elemento"
#: modules/webbasket/lib/webbasket_templates.py:1450
msgid "Add items"
-msgstr "Añadir items"
+msgstr "Añadir elementos"
#: modules/webbasket/lib/webbasket_templates.py:1474
msgid "Are you sure you want to delete this basket?"
msgstr "¿Está seguro de que quiere suprimir esta cesta?"
#: modules/webbasket/lib/webbasket_templates.py:1476
#, python-format
msgid "%i users are subscribed to this basket."
msgstr "%i usuarios están subscritos a esta cesta."
#: modules/webbasket/lib/webbasket_templates.py:1478
#, python-format
msgid "%i user groups are subscribed to this basket."
msgstr "%i grupos de usuarios se han subscrito a esta cesta."
#: modules/webbasket/lib/webbasket_templates.py:1480
#, python-format
msgid "You have set %i alerts on this basket."
msgstr "Ha puesto %i alertas en esta cesta."
#: modules/webbasket/lib/webbasket_templates.py:1518
#: modules/webcomment/lib/webcomment_templates.py:232
#: modules/webcomment/lib/webcomment_templates.py:662
#: modules/webcomment/lib/webcomment_templates.py:1965
#: modules/webcomment/lib/webcomment_templates.py:1989
#: modules/webcomment/lib/webcomment_templates.py:2015
#: modules/webmessage/lib/webmessage_templates.py:508
#: modules/websession/lib/websession_templates.py:2214
#: modules/websession/lib/websession_templates.py:2254
msgid "Yes"
msgstr "Sí"
#: modules/webbasket/lib/webbasket_templates.py:1555
#: modules/webbasket/lib/webbasket_templates.py:1651
msgid "General settings"
msgstr "Parámetros generales"
#: modules/webbasket/lib/webbasket_templates.py:1570
#: modules/webbasket/lib/webbasket_templates.py:1745
#: modules/webbasket/lib/webbasket_templates.py:1772
msgid "Add group"
msgstr "Añadir un grupo"
#: modules/webbasket/lib/webbasket_templates.py:1575
msgid "Manage group rights"
msgstr "Gestionar los permisos de grupo"
# Quizás mejor: 'para compartir'?
#: modules/webbasket/lib/webbasket_templates.py:1587
msgid "Manage global sharing rights"
msgstr "Gestionar los permisos globales de compartición"
#: modules/webbasket/lib/webbasket_templates.py:1592
#: modules/webbasket/lib/webbasket_templates.py:1658
#: modules/webbasket/lib/webbasket_templates.py:2006
#: modules/webbasket/lib/webbasket_templates.py:2085
msgid "Delete basket"
msgstr "Suprimir la cesta"
#: modules/webbasket/lib/webbasket_templates.py:1616
#, python-format
msgid "Editing basket %(x_basket_name)s"
msgstr "Edición de la cesta %(x_basket_name)s"
#: modules/webbasket/lib/webbasket_templates.py:1625
#: modules/webbasket/lib/webbasket_templates.py:1680
msgid "Save changes"
msgstr "Guardar cambios"
#: modules/webbasket/lib/webbasket_templates.py:1646
msgid "Topic name"
msgstr "Nombre del tema"
#: modules/webbasket/lib/webbasket_templates.py:1675
#, python-format
msgid "Editing topic: %(x_topic_name)s"
msgstr "Editar tema: %(x_topic_name)s"
#: modules/webbasket/lib/webbasket_templates.py:1692
#: modules/webbasket/lib/webbasket_templates.py:1707
msgid "No rights"
msgstr "Sin permiso"
#: modules/webbasket/lib/webbasket_templates.py:1694
#: modules/webbasket/lib/webbasket_templates.py:1709
msgid "View records"
msgstr "Ver registros"
#: modules/webbasket/lib/webbasket_templates.py:1696
#: modules/webbasket/lib/webbasket_templates.py:1698
#: modules/webbasket/lib/webbasket_templates.py:1711
#: modules/webbasket/lib/webbasket_templates.py:1713
#: modules/webbasket/lib/webbasket_templates.py:1715
#: modules/webbasket/lib/webbasket_templates.py:1717
#: modules/webbasket/lib/webbasket_templates.py:1719
#: modules/webbasket/lib/webbasket_templates.py:1721
msgid "and"
msgstr "y"
#: modules/webbasket/lib/webbasket_templates.py:1696
msgid "view comments"
msgstr "ver comentarios"
#: modules/webbasket/lib/webbasket_templates.py:1698
msgid "add comments"
msgstr "añadir comentarios"
#: modules/webbasket/lib/webbasket_templates.py:1711
msgid "view notes"
msgstr "ver notas"
#: modules/webbasket/lib/webbasket_templates.py:1713
msgid "add notes"
msgstr "añadir notas"
#: modules/webbasket/lib/webbasket_templates.py:1715
msgid "add records"
msgstr "añadir registros"
#: modules/webbasket/lib/webbasket_templates.py:1717
msgid "delete notes"
msgstr "borrar notas"
#: modules/webbasket/lib/webbasket_templates.py:1719
msgid "remove records"
-msgstr "elimina registros"
+msgstr "eliminar registros"
#: modules/webbasket/lib/webbasket_templates.py:1721
msgid "manage sharing rights"
msgstr "gestionar los permisos de compartición"
#: modules/webbasket/lib/webbasket_templates.py:1743
msgid "You are not a member of a group."
-msgstr "Usted no es miembro de un grupo."
+msgstr "Usted no es miembro de ningún grupo."
#: modules/webbasket/lib/webbasket_templates.py:1765
msgid "Sharing basket to a new group"
msgstr "Compartir la cesta con un nuevo grupo"
#: modules/webbasket/lib/webbasket_templates.py:1794
#: modules/websession/lib/websession_templates.py:510
msgid ""
"You are logged in as a guest user, so your baskets will disappear at the end "
"of the current session."
msgstr ""
"Ahora usted está identificado como usuario visitante, con lo que sus cestas "
"desaparecerán al final de esta sesión."
#: modules/webbasket/lib/webbasket_templates.py:1795
#: modules/webbasket/lib/webbasket_templates.py:1810
#: modules/websession/lib/websession_templates.py:513
#, python-format
msgid ""
"If you wish you can %(x_url_open)slogin or register here%(x_url_close)s."
msgstr ""
"Si lo desea, puede %(x_url_open)sidentificarse o darse de alta aquí"
"%(x_url_close)s."
#: modules/webbasket/lib/webbasket_templates.py:1809
#: modules/websession/lib/websession_webinterface.py:263
msgid "This functionality is forbidden to guest users."
msgstr "Esta funcionalidad no está permitida a los usuarios visitantes."
#: modules/webbasket/lib/webbasket_templates.py:1862
#: modules/webcomment/lib/webcomment_templates.py:864
msgid "Back to search results"
msgstr "Volver al resultado de la búsqueda"
#: modules/webbasket/lib/webbasket_templates.py:1990
#: modules/webbasket/lib/webbasket_templates.py:3022
#, python-format
msgid "%i items"
-msgstr "%i items"
+msgstr "%i elementos"
#: modules/webbasket/lib/webbasket_templates.py:1991
#: modules/webbasket/lib/webbasket_templates.py:3024
#, python-format
msgid "%i notes"
msgstr "%i notas"
#: modules/webbasket/lib/webbasket_templates.py:1991
#: modules/webbasket/lib/webbasket_templates.py:3024
msgid "no notes yet"
msgstr "sin notas"
#: modules/webbasket/lib/webbasket_templates.py:1994
#, python-format
msgid "%i subscribers"
msgstr "%i subscriptores"
#: modules/webbasket/lib/webbasket_templates.py:1996
#: modules/webbasket/lib/webbasket_templates.py:3026
msgid "last update"
-msgstr "Última actualización"
+msgstr "última actualización"
#: modules/webbasket/lib/webbasket_templates.py:2000
#: modules/webbasket/lib/webbasket_templates.py:2079
msgid "Add item"
-msgstr "Añadir item"
+msgstr "Añadir elemento"
#: modules/webbasket/lib/webbasket_templates.py:2003
#: modules/webbasket/lib/webbasket_templates.py:2082
#: modules/webbasket/lib/webbasket_webinterface.py:1037
msgid "Edit basket"
-msgstr "Editar cestas"
+msgstr "Editar la cesta"
#: modules/webbasket/lib/webbasket_templates.py:2016
#: modules/webbasket/lib/webbasket_templates.py:2093
#: modules/webbasket/lib/webbasket_templates.py:3034
#: modules/webbasket/lib/webbasket_templates.py:3087
msgid "Unsubscribe from basket"
msgstr "Darse de baja de esta cesta"
#: modules/webbasket/lib/webbasket_templates.py:2098
msgid "This basket is publicly accessible at the following address:"
msgstr ""
"Se puede acceder públicamente a esta cesta desde la siguiente dirección:"
#: modules/webbasket/lib/webbasket_templates.py:2162
#: modules/webbasket/lib/webbasket_templates.py:3137
msgid "Basket is empty"
msgstr "La cesta está vacía"
#: modules/webbasket/lib/webbasket_templates.py:2196
msgid "You do not have sufficient rights to view this basket's content."
msgstr "No tiene suficientes permisos para ver el contenido de esta cesta."
#: modules/webbasket/lib/webbasket_templates.py:2239
msgid "Move item up"
msgstr "Subirlo"
#: modules/webbasket/lib/webbasket_templates.py:2243
msgid "You cannot move this item up"
msgstr "No es posible subirlo"
#: modules/webbasket/lib/webbasket_templates.py:2257
msgid "Move item down"
msgstr "Bajarlo"
#: modules/webbasket/lib/webbasket_templates.py:2261
msgid "You cannot move this item down"
msgstr "No es posible bajarlo"
#: modules/webbasket/lib/webbasket_templates.py:2275
#: modules/webbasket/lib/webbasket_templates.py:3178
msgid "Copy item"
msgstr "Copiarlo"
#: modules/webbasket/lib/webbasket_templates.py:2291
msgid "Remove item"
msgstr "Eliminarlo"
#: modules/webbasket/lib/webbasket_templates.py:2363
#: modules/webbasket/lib/webbasket_templates.py:2718
#: modules/webbasket/lib/webbasket_templates.py:3239
#: modules/webbasket/lib/webbasket_templates.py:3539
msgid "This record does not seem to exist any more"
msgstr "El registro solicitado ya no existe."
#: modules/webbasket/lib/webbasket_templates.py:2366
#: modules/webbasket/lib/webbasket_templates.py:2868
#: modules/webbasket/lib/webbasket_templates.py:3242
#: modules/webbasket/lib/webbasket_templates.py:3671
#: modules/bibcirculation/lib/bibcirculation_templates.py:3945
#: modules/bibcirculation/lib/bibcirculation_templates.py:4048
#: modules/bibcirculation/lib/bibcirculation_templates.py:4271
#: modules/bibcirculation/lib/bibcirculation_templates.py:4332
#: modules/bibcirculation/lib/bibcirculation_templates.py:4459
#: modules/bibcirculation/lib/bibcirculation_templates.py:6190
#: modules/bibcirculation/lib/bibcirculation_templates.py:6239
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6704
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10821
#: modules/bibcirculation/lib/bibcirculation_templates.py:10916
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13956
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14144
#: modules/bibcirculation/lib/bibcirculation_templates.py:14717
msgid "Notes"
msgstr "Notas"
#: modules/webbasket/lib/webbasket_templates.py:2366
#: modules/webbasket/lib/webbasket_templates.py:3242
msgid "Add a note..."
msgstr "Añadir una nota..."
#: modules/webbasket/lib/webbasket_templates.py:2478
#: modules/webbasket/lib/webbasket_templates.py:3330
#, python-format
msgid "Item %(x_item_index)i of %(x_item_total)i"
-msgstr "Ítem %(x_item_index)i de %(x_item_total)i"
+msgstr "Elemento %(x_item_index)i de %(x_item_total)i"
#: modules/webbasket/lib/webbasket_templates.py:2491
#: modules/webbasket/lib/webbasket_templates.py:2494
#: modules/webbasket/lib/webbasket_templates.py:2578
#: modules/webbasket/lib/webbasket_templates.py:2581
#: modules/webbasket/lib/webbasket_templates.py:3340
#: modules/webbasket/lib/webbasket_templates.py:3343
#: modules/webbasket/lib/webbasket_templates.py:3415
#: modules/webbasket/lib/webbasket_templates.py:3418
msgid "Previous item"
-msgstr "Ítem anterior"
+msgstr "Elemento anterior"
#: modules/webbasket/lib/webbasket_templates.py:2506
#: modules/webbasket/lib/webbasket_templates.py:2509
#: modules/webbasket/lib/webbasket_templates.py:2593
#: modules/webbasket/lib/webbasket_templates.py:2596
#: modules/webbasket/lib/webbasket_templates.py:3352
#: modules/webbasket/lib/webbasket_templates.py:3355
#: modules/webbasket/lib/webbasket_templates.py:3427
#: modules/webbasket/lib/webbasket_templates.py:3430
msgid "Next item"
-msgstr "Ítem siguiente"
+msgstr "Elemento siguiente"
#: modules/webbasket/lib/webbasket_templates.py:2519
#: modules/webbasket/lib/webbasket_templates.py:2606
#: modules/webbasket/lib/webbasket_templates.py:3362
#: modules/webbasket/lib/webbasket_templates.py:3437
msgid "Return to basket"
msgstr "Volver a la cesta"
#: modules/webbasket/lib/webbasket_templates.py:2666
#: modules/webbasket/lib/webbasket_templates.py:3491
msgid "The item you have selected does not exist."
-msgstr "El ítem seleccionado no existe."
+msgstr "El elemento seleccionado no existe."
#: modules/webbasket/lib/webbasket_templates.py:2694
#: modules/webbasket/lib/webbasket_templates.py:3515
msgid "You do not have sufficient rights to view this item's notes."
-msgstr "No tiene permisos para ver las notas de este ítem."
+msgstr "No tiene permisos para ver las notas de este elemento."
#: modules/webbasket/lib/webbasket_templates.py:2735
msgid "You do not have sufficient rights to view this item."
-msgstr "No tiene permisos para ver este ítem."
+msgstr "No tiene permisos para ver este elemento."
#: modules/webbasket/lib/webbasket_templates.py:2842
#: modules/webbasket/lib/webbasket_templates.py:2852
#: modules/webbasket/lib/webbasket_templates.py:3645
#: modules/webbasket/lib/webbasket_templates.py:3655
#: modules/webbasket/lib/webbasket_webinterface.py:493
#: modules/webbasket/lib/webbasket_webinterface.py:1538
msgid "Add a note"
msgstr "Añadir una nota"
#: modules/webbasket/lib/webbasket_templates.py:2843
#: modules/webbasket/lib/webbasket_templates.py:3646
msgid "Add note"
msgstr "Añadir nota"
#: modules/webbasket/lib/webbasket_templates.py:2889
#: modules/webbasket/lib/webbasket_templates.py:3692
#: modules/webcomment/lib/webcomment_templates.py:373
#: modules/webmessage/lib/webmessage_templates.py:111
msgid "Reply"
msgstr "Contestar"
#: modules/webbasket/lib/webbasket_templates.py:2919
#: modules/webbasket/lib/webbasket_templates.py:3717
#, python-format
msgid "%(x_title)s, by %(x_name)s on %(x_date)s"
msgstr "%(x_title)s, por %(x_name)s el %(x_date)s"
#: modules/webbasket/lib/webbasket_templates.py:2921
#: modules/webbasket/lib/webbasket_templates.py:3719
#: modules/websession/lib/websession_templates.py:165
#: modules/websession/lib/websession_templates.py:214
#: modules/websession/lib/websession_templates.py:915
#: modules/websession/lib/websession_templates.py:1039
msgid "Note"
msgstr "Nota"
#: modules/webbasket/lib/webbasket_templates.py:3031
#: modules/webbasket/lib/webbasket_templates.py:3084
msgid "Subscribe to basket"
msgstr "Subscribirse a la cesta"
#: modules/webbasket/lib/webbasket_templates.py:3090
msgid "This public basket belongs to the user "
msgstr "Esta cesta pública pertenece al usuario "
#: modules/webbasket/lib/webbasket_templates.py:3114
msgid "This public basket belongs to you."
msgstr "Esta cesta pública es suya."
#: modules/webbasket/lib/webbasket_templates.py:3889
msgid "All your baskets"
msgstr "Todas sus cestas"
#: modules/webbasket/lib/webbasket_templates.py:3891
#: modules/webbasket/lib/webbasket_templates.py:3966
msgid "Your personal baskets"
msgstr "Sus cestas personales"
#: modules/webbasket/lib/webbasket_templates.py:3897
#: modules/webbasket/lib/webbasket_templates.py:3977
msgid "Your group baskets"
msgstr "Sus cestas de grupo"
#: modules/webbasket/lib/webbasket_templates.py:3903
msgid "Your public baskets"
msgstr "Sus cestas públicas"
#: modules/webbasket/lib/webbasket_templates.py:3904
msgid "All the public baskets"
msgstr "Todas las cestas públicas"
#: modules/webbasket/lib/webbasket_templates.py:3961
msgid "*** basket name ***"
msgstr "*** nombre de la cesta ***"
#: modules/webbasket/lib/webbasket_webinterface.py:158
#: modules/webbasket/lib/webbasket_webinterface.py:319
#: modules/webbasket/lib/webbasket_webinterface.py:406
#: modules/webbasket/lib/webbasket_webinterface.py:471
#: modules/webbasket/lib/webbasket_webinterface.py:538
#: modules/webbasket/lib/webbasket_webinterface.py:615
#: modules/webbasket/lib/webbasket_webinterface.py:702
#: modules/webbasket/lib/webbasket_webinterface.py:780
#: modules/webbasket/lib/webbasket_webinterface.py:864
#: modules/webbasket/lib/webbasket_webinterface.py:961
#: modules/webbasket/lib/webbasket_webinterface.py:1077
#: modules/webbasket/lib/webbasket_webinterface.py:1177
#: modules/webbasket/lib/webbasket_webinterface.py:1397
#: modules/webbasket/lib/webbasket_webinterface.py:1457
#: modules/webbasket/lib/webbasket_webinterface.py:1519
#: modules/webbasket/lib/webbasket_webinterface.py:1581
msgid "You are not authorized to use baskets."
msgstr "No está autorizado a usar cestas."
#: modules/webbasket/lib/webbasket_webinterface.py:169
msgid "You are not authorized to view this attachment"
-msgstr "No está autorizado a ver esta adjunto"
+msgstr "No está autorizado a ver este adjunto"
#: modules/webbasket/lib/webbasket_webinterface.py:361
msgid "Display baskets"
msgstr "Mostrar cestas"
#: modules/webbasket/lib/webbasket_webinterface.py:564
#: modules/webbasket/lib/webbasket_webinterface.py:639
#: modules/webbasket/lib/webbasket_webinterface.py:1604
msgid "Display item and notes"
-msgstr "Mostrar el ítem y las notas"
+msgstr "Mostrar el elemento y las notas"
#: modules/webbasket/lib/webbasket_webinterface.py:821
msgid "Delete a basket"
msgstr "Suprimir una cesta"
#: modules/webbasket/lib/webbasket_webinterface.py:879
msgid "Copy record to basket"
msgstr "Copiar el registro a la cesta"
#: modules/webcomment/lib/webcommentadminlib.py:122
msgid "Invalid comment ID."
msgstr "Número de comentario no válido."
#: modules/webcomment/lib/webcommentadminlib.py:142
#, python-format
msgid "Comment ID %s does not exist."
msgstr "El comentario %s no existe."
#: modules/webcomment/lib/webcommentadminlib.py:156
#, python-format
msgid "Record ID %s does not exist."
msgstr "El registro %s no existe."
#: modules/webcomment/lib/webcomment.py:166
#: modules/webcomment/lib/webcomment.py:210
msgid "Bad page number --> showing first page."
-msgstr "Número de página incorrecta --> se muestra la primera."
+msgstr "Número de página incorrecto --> se muestra la primera."
#: modules/webcomment/lib/webcomment.py:174
msgid "Bad number of results per page --> showing 10 results per page."
msgstr ""
"Número de resultados por página incorrecto --> se mostrarán 10 por página."
#: modules/webcomment/lib/webcomment.py:183
msgid "Bad display order --> showing most helpful first."
-msgstr "Orden incorrecto --> vea las más útiles."
+msgstr "Orden incorrecto --> se muestran primero los más útiles."
#: modules/webcomment/lib/webcomment.py:192
msgid "Bad display order --> showing oldest first."
msgstr "Orden incorrecto --> se ordenará por antigüedad."
#: modules/webcomment/lib/webcomment.py:229
#: modules/webcomment/lib/webcomment.py:1579
#: modules/webcomment/lib/webcomment.py:1632
msgid "Comments on records have been disallowed by the administrator."
-msgstr "L'administrador ha deshabilitat l'opció de comentaris als registres."
+msgstr "El administrador ha deshabilitado la opción de comentar en los registros."
#: modules/webcomment/lib/webcomment.py:237
#: modules/webcomment/lib/webcomment.py:260
#: modules/webcomment/lib/webcomment.py:1419
#: modules/webcomment/lib/webcomment.py:1440
msgid "Your feedback has been recorded, many thanks."
msgstr "Muchas gracias por su contribución."
#: modules/webcomment/lib/webcomment.py:244
msgid "You have already reported an abuse for this comment."
msgstr "Ya había denunciado este comentario."
#: modules/webcomment/lib/webcomment.py:251
msgid "The comment you have reported no longer exists."
msgstr "El comentario que había denunciado ya no existe."
#: modules/webcomment/lib/webcomment.py:267
msgid "Sorry, you have already voted. This vote has not been recorded."
msgstr "Ya había votado, de manera que este voto no se ha contabilizado."
#: modules/webcomment/lib/webcomment.py:274
msgid ""
"You have been subscribed to this discussion. From now on, you will receive "
"an email whenever a new comment is posted."
msgstr ""
-"Se ha subscrito a aquesta discusión. A partir de ahora recibirá un correu "
-"electrónic cada vez que se publique un nuevo comentario."
+"Se ha subscrito a esta discusión. A partir de ahora recibirá un correo "
+"electrónico cada vez que se publique un nuevo comentario."
#: modules/webcomment/lib/webcomment.py:281
msgid "You have been unsubscribed from this discussion."
msgstr "Se ha dado de baja de esta discusión."
#: modules/webcomment/lib/webcomment.py:1171
#, python-format
msgid "Record %i"
msgstr "Registro %i"
#: modules/webcomment/lib/webcomment.py:1182
#, python-format
msgid "%(report_number)s\"%(title)s\" has been reviewed"
-msgstr "Se han revisado el %(report_number)s\"%(title)s\""
+msgstr "Se ha revisado el %(report_number)s\"%(title)s\""
#: modules/webcomment/lib/webcomment.py:1186
#, python-format
msgid "%(report_number)s\"%(title)s\" has been commented"
-msgstr "Se han comentado el %(report_number)s\"%(title)s\""
+msgstr "Se ha comentado el %(report_number)s\"%(title)s\""
#: modules/webcomment/lib/webcomment.py:1407
#, python-format
msgid "%s is an invalid record ID"
msgstr "%s no es un número válido de registro"
#: modules/webcomment/lib/webcomment.py:1426
#: modules/webcomment/lib/webcomment.py:1447
msgid "Your feedback could not be recorded, please try again."
msgstr ""
"No ha sido posible guardar su contribución. Por favor inténtelo de nuevo."
#: modules/webcomment/lib/webcomment.py:1555
#, python-format
msgid "%s is an invalid user ID."
-msgstr "%s no es un identificador válido de usuario"
+msgstr "%s no es un identificador válido de usuario."
#: modules/webcomment/lib/webcomment.py:1589
msgid "Cannot reply to a review."
msgstr "No es posible contestar a una reseña."
#: modules/webcomment/lib/webcomment.py:1644
msgid "You must enter a title."
msgstr "Debe ponerle un título."
#: modules/webcomment/lib/webcomment.py:1651
msgid "You must choose a score."
msgstr "Escoja una puntuación."
#: modules/webcomment/lib/webcomment.py:1658
msgid "You must enter a text."
msgstr "Debe redactar un texto."
#: modules/webcomment/lib/webcomment.py:1675
msgid "You already wrote a review for this record."
msgstr "Ya ha escrito una reseña para este registro."
#: modules/webcomment/lib/webcomment.py:1693
msgid "You already posted a comment short ago. Please retry later."
msgstr ""
-"Hace poco ya ha publicado un comentario. Por favor vueva a intentarlo más "
+"Hace poco ya ha publicado un comentario. Por favor vuelva a intentarlo más "
"tarde."
#: modules/webcomment/lib/webcomment.py:1705
msgid "Failed to insert your comment to the database. Please try again."
msgstr ""
"No ha sido posible guardar su comentario. Por favor inténtelo de nuevo."
#: modules/webcomment/lib/webcomment.py:1719
msgid "Unknown action --> showing you the default add comment form."
msgstr ""
"Acción desconocida --> se muestra el formulario de añadir un comentario."
#: modules/webcomment/lib/webcomment.py:1841
#, python-format
msgid "Record ID %s does not exist in the database."
msgstr "El registro %s no existe en la base de datos."
#: modules/webcomment/lib/webcomment.py:1849
msgid "No record ID was given."
msgstr "No ha dado el número de registro."
#: modules/webcomment/lib/webcomment.py:1857
#, python-format
msgid "Record ID %s is an invalid ID."
msgstr "%s no es un identificador válido de registro."
#: modules/webcomment/lib/webcomment.py:1865
#, python-format
msgid "Record ID %s is not a number."
msgstr "El registro %s no es numérico."
#: modules/webcomment/lib/webcomment_templates.py:79
#: modules/webcomment/lib/webcomment_templates.py:839
#: modules/websubmit/lib/websubmit_templates.py:2674
msgid "Write a comment"
msgstr "Escriba un comentario"
#: modules/webcomment/lib/webcomment_templates.py:94
#, python-format
msgid ""
"<div class=\"webcomment_comment_round_header\">%(x_nb)i Comments for round "
"\"%(x_name)s\""
msgstr ""
"<div class=\"webcomment_comment_round_header\">%(x_nb)i Comentarios para la "
"vuelta \"%(x_name)s\""
#: modules/webcomment/lib/webcomment_templates.py:96
#, python-format
msgid "<div class=\"webcomment_comment_round_header\">%(x_nb)i Comments"
msgstr "<div class=\"webcomment_comment_round_header\">%(x_nb)i Comentarios"
#: modules/webcomment/lib/webcomment_templates.py:124
#, python-format
msgid "Showing the latest %i comments:"
msgstr "Mostrando los últimos %i comentarios:"
#: modules/webcomment/lib/webcomment_templates.py:137
#: modules/webcomment/lib/webcomment_templates.py:163
msgid "Discuss this document"
msgstr "Comente este documento"
#: modules/webcomment/lib/webcomment_templates.py:164
#: modules/webcomment/lib/webcomment_templates.py:849
msgid "Start a discussion about any aspect of this document."
msgstr "Inicie un debate sobre cualquier aspecto de este documento."
#: modules/webcomment/lib/webcomment_templates.py:180
#, python-format
msgid "Sorry, the record %s does not seem to exist."
msgstr "Parece ser que el registro %s no existe."
#: modules/webcomment/lib/webcomment_templates.py:182
#, python-format
msgid "Sorry, %s is not a valid ID value."
msgstr "%s no es un identificador válido."
#: modules/webcomment/lib/webcomment_templates.py:184
msgid "Sorry, no record ID was provided."
msgstr "No ha dado el número de registro."
#: modules/webcomment/lib/webcomment_templates.py:188
#, python-format
msgid "You may want to start browsing from %s"
msgstr "Puede comenzar a visualizar desde %s"
#: modules/webcomment/lib/webcomment_templates.py:244
#: modules/webcomment/lib/webcomment_templates.py:704
#: modules/webcomment/lib/webcomment_templates.py:714
#, python-format
msgid "%(x_nb)i comments for round \"%(x_name)s\""
msgstr "%(x_nb)i comentarios para la vuelta «%(x_name)s»"
#: modules/webcomment/lib/webcomment_templates.py:267
#: modules/webcomment/lib/webcomment_templates.py:763
msgid "Was this review helpful?"
msgstr "¿Ha sido útil esta reseña?"
#: modules/webcomment/lib/webcomment_templates.py:278
#: modules/webcomment/lib/webcomment_templates.py:315
#: modules/webcomment/lib/webcomment_templates.py:839
msgid "Write a review"
msgstr "Escriba una reseña"
#: modules/webcomment/lib/webcomment_templates.py:285
#: modules/webcomment/lib/webcomment_templates.py:827
#: modules/webcomment/lib/webcomment_templates.py:2036
#, python-format
msgid "Average review score: %(x_nb_score)s based on %(x_nb_reviews)s reviews"
-msgstr "Puntuación media: %(x_nb_score)s, basada en %(x_nb_reviews)s resseñas"
+msgstr "Puntuación media: %(x_nb_score)s, basada en %(x_nb_reviews)s reseñas"
#: modules/webcomment/lib/webcomment_templates.py:288
#, python-format
msgid "Readers found the following %s reviews to be most helpful."
msgstr ""
"Los lectores han encontrado que las siguientes %s reseñas son las más útiles."
#: modules/webcomment/lib/webcomment_templates.py:291
#: modules/webcomment/lib/webcomment_templates.py:314
#, python-format
msgid "View all %s reviews"
msgstr "Visualizar todas las %s reseñas"
#: modules/webcomment/lib/webcomment_templates.py:310
#: modules/webcomment/lib/webcomment_templates.py:332
#: modules/webcomment/lib/webcomment_templates.py:2077
msgid "Rate this document"
msgstr "Valore este documento"
#: modules/webcomment/lib/webcomment_templates.py:333
msgid ""
"<div class=\"webcomment_review_first_introduction\">Be the first to review "
"this document.</div>"
msgstr ""
"<div class=\"webcomment_review_first_introduction\">Sea el primero en "
"reseñar este documento.</div>"
#: modules/webcomment/lib/webcomment_templates.py:368
#, python-format
msgid "%(x_name)s"
msgstr "%(x_name)s"
#: modules/webcomment/lib/webcomment_templates.py:375
#: modules/webcomment/lib/webcomment_templates.py:764
msgid "Report abuse"
msgstr "Denuncie un abuso"
#: modules/webcomment/lib/webcomment_templates.py:390
msgid "Undelete comment"
msgstr "Recuperar el comentario suprimido"
#: modules/webcomment/lib/webcomment_templates.py:399
#: modules/webcomment/lib/webcomment_templates.py:401
msgid "Delete comment"
msgstr "Suprimir comentario"
#: modules/webcomment/lib/webcomment_templates.py:407
msgid "Unreport comment"
msgstr "Suprimir la denuncia al comentario"
#: modules/webcomment/lib/webcomment_templates.py:418
msgid "Attached file"
msgstr "Fichero adjunto"
#: modules/webcomment/lib/webcomment_templates.py:418
msgid "Attached files"
msgstr "Ficheros adjuntos"
#: modules/webcomment/lib/webcomment_templates.py:484
#, python-format
msgid "Reviewed by %(x_nickname)s on %(x_date)s"
msgstr "Reseñado por %(x_nickname)s el %(x_date)s"
#: modules/webcomment/lib/webcomment_templates.py:488
#, python-format
msgid "%(x_nb_people)s out of %(x_nb_total)s people found this review useful"
msgstr ""
"%(x_nb_people)s de %(x_nb_total)s personas han encontrado esta reseña útil"
#: modules/webcomment/lib/webcomment_templates.py:510
msgid "Undelete review"
msgstr "Recuperar la reseña suprimida"
#: modules/webcomment/lib/webcomment_templates.py:519
msgid "Delete review"
msgstr "Eliminar la reseña"
#: modules/webcomment/lib/webcomment_templates.py:525
msgid "Unreport review"
msgstr "Suprimir la denuncia a la reseña"
#: modules/webcomment/lib/webcomment_templates.py:631
#: modules/webcomment/lib/webcomment_templates.py:646
#: modules/webcomment/lib/webcomment_webinterface.py:237
#: modules/webcomment/lib/webcomment_webinterface.py:429
#: modules/websubmit/lib/websubmit_templates.py:2672
msgid "Comments"
msgstr "Comentarios"
#: modules/webcomment/lib/webcomment_templates.py:632
#: modules/webcomment/lib/webcomment_templates.py:647
#: modules/webcomment/lib/webcomment_webinterface.py:237
#: modules/webcomment/lib/webcomment_webinterface.py:429
msgid "Reviews"
msgstr "Reseñas"
#: modules/webcomment/lib/webcomment_templates.py:802
#: modules/websearch/lib/websearch_templates.py:1864
#: modules/bibcatalog/lib/bibcatalog_templates.py:50
#: modules/bibknowledge/lib/bibknowledge_templates.py:167
msgid "Previous"
msgstr "Anterior"
#: modules/webcomment/lib/webcomment_templates.py:818
#: modules/bibcatalog/lib/bibcatalog_templates.py:72
#: modules/bibknowledge/lib/bibknowledge_templates.py:165
msgid "Next"
msgstr "Siguiente"
#: modules/webcomment/lib/webcomment_templates.py:842
#, python-format
msgid "There is a total of %s reviews"
msgstr "Hay un total de %s reseñas"
#: modules/webcomment/lib/webcomment_templates.py:844
#, python-format
msgid "There is a total of %s comments"
msgstr "Hay un total de %s comentarios"
#: modules/webcomment/lib/webcomment_templates.py:851
msgid "Be the first to review this document."
msgstr "Sea el primero en escribir una reseña de este documento."
#: modules/webcomment/lib/webcomment_templates.py:863
#: modules/webcomment/lib/webcomment_templates.py:1643
#: modules/websearch/lib/websearch_templates.py:558
msgid "Record"
msgstr "Registro"
#: modules/webcomment/lib/webcomment_templates.py:870
#: modules/webcomment/lib/webcomment_templates.py:929
msgid "review"
msgstr "reseña"
#: modules/webcomment/lib/webcomment_templates.py:870
#: modules/webcomment/lib/webcomment_templates.py:929
msgid "comment"
msgstr "comentario"
#: modules/webcomment/lib/webcomment_templates.py:871
#: modules/webcomment/lib/webcomment_templates.py:1879
msgid "Review"
msgstr "Reseña"
#: modules/webcomment/lib/webcomment_templates.py:871
#: modules/webcomment/lib/webcomment_templates.py:1202
#: modules/webcomment/lib/webcomment_templates.py:1647
#: modules/webcomment/lib/webcomment_templates.py:1879
#: modules/websubmit/lib/websubmit_managedocfiles.py:389
#: modules/websubmit/lib/websubmit_templates.py:2728
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:347
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:401
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:455
msgid "Comment"
msgstr "Comentario"
#: modules/webcomment/lib/webcomment_templates.py:927
msgid "Viewing"
msgstr "Visualización"
#: modules/webcomment/lib/webcomment_templates.py:928
msgid "Page:"
msgstr "Página:"
#: modules/webcomment/lib/webcomment_templates.py:946
msgid "Subscribe"
msgstr "Suscribirse"
#: modules/webcomment/lib/webcomment_templates.py:955
msgid "Unsubscribe"
msgstr "Darse de baja"
#: modules/webcomment/lib/webcomment_templates.py:962
msgid "You are not authorized to comment or review."
msgstr "No está autorizado a hacer comentarios o reseñas."
#: modules/webcomment/lib/webcomment_templates.py:1132
#, python-format
msgid "Note: Your nickname, %s, will be displayed as author of this comment."
msgstr ""
"Atención: su alias, %s, será el que se muestre como autor de este comentario."
#: modules/webcomment/lib/webcomment_templates.py:1136
#: modules/webcomment/lib/webcomment_templates.py:1253
#, python-format
msgid ""
"Note: you have not %(x_url_open)sdefined your nickname%(x_url_close)s. "
"%(x_nickname)s will be displayed as the author of this comment."
msgstr ""
"Atención: todavía no ha %(x_url_open)sdefinido su alias%(x_url_close)s. "
"%(x_nickname)s será el que se muestre como autor de este comentario."
#: modules/webcomment/lib/webcomment_templates.py:1153
msgid "Once logged in, authorized users can also attach files."
msgstr ""
"Una vez identificados, los usuarios autorizados también pueden añadir "
"ficheros."
#: modules/webcomment/lib/webcomment_templates.py:1168
msgid "Optionally, attach a file to this comment"
msgstr "Opcionalmente, añada un fichero a este comentario"
#: modules/webcomment/lib/webcomment_templates.py:1169
msgid "Optionally, attach files to this comment"
msgstr "Opcionalmente, añada ficheros a este comentario"
#: modules/webcomment/lib/webcomment_templates.py:1170
msgid "Max one file"
msgstr "Máximo un fichero"
#: modules/webcomment/lib/webcomment_templates.py:1171
#, python-format
msgid "Max %i files"
msgstr "Máximo %i ficheros"
#: modules/webcomment/lib/webcomment_templates.py:1172
#, python-format
msgid "Max %(x_nb_bytes)s per file"
msgstr "Máximo %(x_nb_bytes)s por fichero"
#: modules/webcomment/lib/webcomment_templates.py:1187
msgid "Send me an email when a new comment is posted"
-msgstr "Enviar un email cuando se publique un nuevo comentario"
+msgstr "Envíame un correo electrónico cuando se publique un nuevo comentario"
#: modules/webcomment/lib/webcomment_templates.py:1201
#: modules/webcomment/lib/webcomment_templates.py:1324
msgid "Article"
msgstr "Artículo"
#: modules/webcomment/lib/webcomment_templates.py:1203
msgid "Add comment"
msgstr "Añadir comentario"
#: modules/webcomment/lib/webcomment_templates.py:1248
#, python-format
msgid ""
"Note: Your nickname, %s, will be displayed as the author of this review."
msgstr ""
"Atención: su alias, %s, será el que se muestre como autor de esta reseña."
#: modules/webcomment/lib/webcomment_templates.py:1325
msgid "Rate this article"
msgstr "Valore este artículo"
#: modules/webcomment/lib/webcomment_templates.py:1326
msgid "Select a score"
msgstr "Seleccione una puntuación"
#: modules/webcomment/lib/webcomment_templates.py:1327
msgid "Give a title to your review"
msgstr "Dé un título a su reseña"
#: modules/webcomment/lib/webcomment_templates.py:1328
msgid "Write your review"
msgstr "Escriba su reseña"
#: modules/webcomment/lib/webcomment_templates.py:1333
msgid "Add review"
-msgstr "Añada su reseña"
+msgstr "Añadir una reseña"
#: modules/webcomment/lib/webcomment_templates.py:1343
#: modules/webcomment/lib/webcomment_webinterface.py:474
msgid "Add Review"
-msgstr "Añada su reseña"
+msgstr "Añadir una reseña"
#: modules/webcomment/lib/webcomment_templates.py:1365
msgid "Your review was successfully added."
msgstr "Su reseña se ha añadido correctamente."
#: modules/webcomment/lib/webcomment_templates.py:1367
msgid "Your comment was successfully added."
msgstr "Su comentario se ha añadido correctamente."
#: modules/webcomment/lib/webcomment_templates.py:1370
msgid "Back to record"
msgstr "Volver al registro"
#: modules/webcomment/lib/webcomment_templates.py:1448
#: modules/webcomment/web/admin/webcommentadmin.py:171
msgid "View most commented records"
msgstr "Ver los registros más comentados"
#: modules/webcomment/lib/webcomment_templates.py:1450
#: modules/webcomment/web/admin/webcommentadmin.py:207
msgid "View latest commented records"
msgstr "Ver los registros con los comentarios más recientes"
#: modules/webcomment/lib/webcomment_templates.py:1452
#: modules/webcomment/web/admin/webcommentadmin.py:140
msgid "View all comments reported as abuse"
msgstr "Visualizar todos los comentarios denunciados"
#: modules/webcomment/lib/webcomment_templates.py:1456
#: modules/webcomment/web/admin/webcommentadmin.py:170
msgid "View most reviewed records"
msgstr "Ver los registros con más reseñas"
#: modules/webcomment/lib/webcomment_templates.py:1458
#: modules/webcomment/web/admin/webcommentadmin.py:206
msgid "View latest reviewed records"
msgstr "Ver los registros con las reseñas más recientes"
#: modules/webcomment/lib/webcomment_templates.py:1460
#: modules/webcomment/web/admin/webcommentadmin.py:140
msgid "View all reviews reported as abuse"
msgstr "Visualizar todas las reseñas denunciadas"
#: modules/webcomment/lib/webcomment_templates.py:1468
msgid "View all users who have been reported"
msgstr "Ver todos los usuarios que han sido denunciados"
#: modules/webcomment/lib/webcomment_templates.py:1470
msgid "Guide"
msgstr "Guía"
#: modules/webcomment/lib/webcomment_templates.py:1472
msgid "Comments and reviews are disabled"
msgstr "Los comentarios y las reseñas están desactivados"
#: modules/webcomment/lib/webcomment_templates.py:1492
msgid ""
"Please enter the ID of the comment/review so that you can view it before "
"deciding whether to delete it or not"
msgstr ""
"Introduzca el número del comentario o reseña; así puede visualizarlo antes "
"de decidir si lo suprime o no"
#: modules/webcomment/lib/webcomment_templates.py:1516
msgid "Comment ID:"
msgstr "Número del comentario:"
#: modules/webcomment/lib/webcomment_templates.py:1517
msgid "Or enter a record ID to list all the associated comments/reviews:"
msgstr ""
"O introduzca el número de registro para ver todos los comentarios y reseñas "
"asociados:"
#: modules/webcomment/lib/webcomment_templates.py:1518
msgid "Record ID:"
msgstr "Número de registro:"
#: modules/webcomment/lib/webcomment_templates.py:1520
msgid "View Comment"
msgstr "Visualizar el comentario"
#: modules/webcomment/lib/webcomment_templates.py:1541
msgid "There have been no reports so far."
msgstr "De momento no hay denuncias."
#: modules/webcomment/lib/webcomment_templates.py:1545
#, python-format
msgid "View all %s reported comments"
msgstr "Visualizar todos los %s comentarios denunciados"
#: modules/webcomment/lib/webcomment_templates.py:1548
#, python-format
msgid "View all %s reported reviews"
msgstr "Visualizar todas las %s reseñas denunciadas"
#: modules/webcomment/lib/webcomment_templates.py:1585
msgid ""
"Here is a list, sorted by total number of reports, of all users who have had "
"a comment reported at least once."
msgstr ""
"Esta es la lista, ordenada por el número de denuncias, de los usuarios que "
"han tenido al menos una denuncia a alguno de sus comentarios."
#: modules/webcomment/lib/webcomment_templates.py:1593
#: modules/webcomment/lib/webcomment_templates.py:1622
#: modules/websession/lib/websession_templates.py:158
#: modules/websession/lib/websession_templates.py:1034
msgid "Nickname"
msgstr "Alias"
#: modules/webcomment/lib/webcomment_templates.py:1594
#: modules/webcomment/lib/webcomment_templates.py:1626
#: modules/bibcirculation/lib/bibcirculation_utils.py:457
#: modules/bibcirculation/lib/bibcirculation_templates.py:2390
#: modules/bibcirculation/lib/bibcirculation_templates.py:2507
#: modules/bibcirculation/lib/bibcirculation_templates.py:2739
#: modules/bibcirculation/lib/bibcirculation_templates.py:3942
#: modules/bibcirculation/lib/bibcirculation_templates.py:4045
#: modules/bibcirculation/lib/bibcirculation_templates.py:4268
#: modules/bibcirculation/lib/bibcirculation_templates.py:4329
#: modules/bibcirculation/lib/bibcirculation_templates.py:4457
#: modules/bibcirculation/lib/bibcirculation_templates.py:5605
#: modules/bibcirculation/lib/bibcirculation_templates.py:6186
#: modules/bibcirculation/lib/bibcirculation_templates.py:6235
#: modules/bibcirculation/lib/bibcirculation_templates.py:6536
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6700
#: modules/bibcirculation/lib/bibcirculation_templates.py:6929
#: modules/bibcirculation/lib/bibcirculation_templates.py:7028
#: modules/bibcirculation/lib/bibcirculation_templates.py:9028
#: modules/bibcirculation/lib/bibcirculation_templates.py:9273
#: modules/bibcirculation/lib/bibcirculation_templates.py:9882
#: modules/bibcirculation/lib/bibcirculation_templates.py:10360
#: modules/bibcirculation/lib/bibcirculation_templates.py:11224
#: modules/bibcirculation/lib/bibcirculation_templates.py:12211
#: modules/bibcirculation/lib/bibcirculation_templates.py:12995
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14141
#: modules/bibcirculation/lib/bibcirculation_templates.py:14390
#: modules/bibcirculation/lib/bibcirculation_templates.py:14461
#: modules/bibcirculation/lib/bibcirculation_templates.py:14715
#: modules/bibcirculation/lib/bibcirculation_templates.py:15518
msgid "Email"
msgstr "Dirección electrónica"
#: modules/webcomment/lib/webcomment_templates.py:1595
#: modules/webcomment/lib/webcomment_templates.py:1624
msgid "User ID"
msgstr "Número de usuario"
# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1597
msgid "Number positive votes"
msgstr "Número de votos positivos"
# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1598
msgid "Number negative votes"
msgstr "Número de votos negativos"
# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1599
msgid "Total number votes"
msgstr "Número total de votos"
# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1600
msgid "Total number of reports"
msgstr "Número total de denuncias"
#: modules/webcomment/lib/webcomment_templates.py:1601
msgid "View all user's reported comments/reviews"
msgstr "Visualizar todos los comentarios/reseñas denunciados de este usuario"
#: modules/webcomment/lib/webcomment_templates.py:1634
#, python-format
msgid "This review has been reported %i times"
msgstr "Esta reseña ha sido denunciada %i veces"
#: modules/webcomment/lib/webcomment_templates.py:1636
#, python-format
msgid "This comment has been reported %i times"
msgstr "Este comentario ha sido denunciado %i veces"
#: modules/webcomment/lib/webcomment_templates.py:1880
msgid "Written by"
msgstr "Escrita por"
#: modules/webcomment/lib/webcomment_templates.py:1881
msgid "General informations"
msgstr "Información general"
#: modules/webcomment/lib/webcomment_templates.py:1882
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:652
msgid "Select"
msgstr "Seleccionar"
#: modules/webcomment/lib/webcomment_templates.py:1896
msgid "Delete selected reviews"
msgstr "Eliminar las reseñas seleccionadas"
#: modules/webcomment/lib/webcomment_templates.py:1897
#: modules/webcomment/lib/webcomment_templates.py:1904
msgid "Suppress selected abuse report"
msgstr "Suprimir el informe de abuso seleccionado"
#: modules/webcomment/lib/webcomment_templates.py:1898
msgid "Undelete selected reviews"
msgstr "Recuperar las reseñas seleccionadas"
#: modules/webcomment/lib/webcomment_templates.py:1902
msgid "Undelete selected comments"
msgstr "Recuperar los comentarios seleccionados"
#: modules/webcomment/lib/webcomment_templates.py:1903
msgid "Delete selected comments"
msgstr "Suprimir los comentarios seleccionados"
#: modules/webcomment/lib/webcomment_templates.py:1912
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:494
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:557
#: modules/bibcirculation/lib/bibcirculation_templates.py:1635
msgid "OK"
msgstr "Aceptar"
#: modules/webcomment/lib/webcomment_templates.py:1918
#, python-format
msgid "Here are the reported reviews of user %s"
msgstr "Estas son las reseñas denunciadas del usuario %s"
#: modules/webcomment/lib/webcomment_templates.py:1920
#, python-format
msgid "Here are the reported comments of user %s"
msgstr "Estos son los comentarios denunciados del usuario %s"
#: modules/webcomment/lib/webcomment_templates.py:1924
#, python-format
msgid "Here is review %s"
msgstr "Ésta es la reseña %s"
#: modules/webcomment/lib/webcomment_templates.py:1926
#, python-format
msgid "Here is comment %s"
msgstr "Éste es el comentario %s"
#: modules/webcomment/lib/webcomment_templates.py:1929
#, python-format
msgid "Here is review %(x_cmtID)s written by user %(x_user)s"
msgstr "Ésta es la reseña %(x_cmtID)s escrita por el usuario %(x_user)s"
#: modules/webcomment/lib/webcomment_templates.py:1931
#, python-format
msgid "Here is comment %(x_cmtID)s written by user %(x_user)s"
msgstr "Éste es el comentario %(x_cmtID)s escrito por el usuario %(x_user)s"
#: modules/webcomment/lib/webcomment_templates.py:1937
msgid "Here are all reported reviews sorted by the most reported"
msgstr "Estas son todas las reseñas denunciadas, ordenadas de más a menos"
#: modules/webcomment/lib/webcomment_templates.py:1939
msgid "Here are all reported comments sorted by the most reported"
msgstr "Estos son todos los comentarios denunciados, ordenados de más a menos"
#: modules/webcomment/lib/webcomment_templates.py:1944
#, python-format
msgid "Here are all reviews for record %i, sorted by the most reported"
msgstr "Reseñas del registro %i, ordenadas de más a menos denuncias"
#: modules/webcomment/lib/webcomment_templates.py:1945
msgid "Show comments"
msgstr "Ver comentarios"
#: modules/webcomment/lib/webcomment_templates.py:1947
#, python-format
msgid "Here are all comments for record %i, sorted by the most reported"
msgstr "Comentarios al registro %i, ordenados de más a menos denuncias"
#: modules/webcomment/lib/webcomment_templates.py:1948
msgid "Show reviews"
msgstr "Visualizar las reseñas"
#: modules/webcomment/lib/webcomment_templates.py:1973
#: modules/webcomment/lib/webcomment_templates.py:1997
#: modules/webcomment/lib/webcomment_templates.py:2023
msgid "comment ID"
msgstr "número de comentario"
#: modules/webcomment/lib/webcomment_templates.py:1973
msgid "successfully deleted"
msgstr "suprimido correctamente"
#: modules/webcomment/lib/webcomment_templates.py:1997
msgid "successfully undeleted"
msgstr "recuperado correctamente"
#: modules/webcomment/lib/webcomment_templates.py:2023
msgid "successfully suppressed abuse report"
msgstr "eliminada correctamente la denuncia de abuso"
#: modules/webcomment/lib/webcomment_templates.py:2040
msgid "Not yet reviewed"
msgstr "Sin ninguna reseña"
#: modules/webcomment/lib/webcomment_templates.py:2108
#, python-format
msgid ""
"The following review was sent to %(CFG_SITE_NAME)s by %(user_nickname)s:"
msgstr "%(user_nickname)s ha enviado esta reseña a %(CFG_SITE_NAME)s:"
#: modules/webcomment/lib/webcomment_templates.py:2109
#, python-format
msgid ""
"The following comment was sent to %(CFG_SITE_NAME)s by %(user_nickname)s:"
msgstr ""
"%(user_nickname)s ha enviado este comentario a %(CFG_SITE_NAME)s:"
#: modules/webcomment/lib/webcomment_templates.py:2136
msgid "This is an automatic message, please don't reply to it."
msgstr "Este es un mensaje automático; por favor, no lo responda."
#: modules/webcomment/lib/webcomment_templates.py:2138
#, python-format
msgid "To post another comment, go to <%(x_url)s> instead."
msgstr "Para publicar otro comentario, debe ir a <%(x_url)s>."
#: modules/webcomment/lib/webcomment_templates.py:2143
#, python-format
msgid "To specifically reply to this comment, go to <%(x_url)s>"
msgstr ""
"Para contestar específicamente a este comentario, debe ir a <%(x_url)s>."
#: modules/webcomment/lib/webcomment_templates.py:2148
#, python-format
msgid "To unsubscribe from this discussion, go to <%(x_url)s>"
msgstr "Para darse de baja de esta discusión, debe ir a <%(x_url)s>."
#: modules/webcomment/lib/webcomment_templates.py:2152
#, python-format
msgid "For any question, please use <%(CFG_SITE_SUPPORT_EMAIL)s>"
msgstr ""
"Para resolver dudas, póngase en contacto con <%(CFG_SITE_SUPPORT_EMAIL)s>"
#: modules/webcomment/lib/webcomment_templates.py:2219
msgid "Your comment will be lost."
msgstr "Su comentario se perderá."
#: modules/webcomment/lib/webcomment_webinterface.py:261
#: modules/webcomment/lib/webcomment_webinterface.py:493
msgid "Record Not Found"
msgstr "No se ha encontrado el registro"
#: modules/webcomment/lib/webcomment_webinterface.py:394
#, python-format
msgid ""
"The size of file \\\"%s\\\" (%s) is larger than maximum allowed file size "
"(%s). Select files again."
msgstr ""
"El tamaño del fichero \\\"%s\\\" (%s) es mayor que el máximo permitido (%s). "
"Vuelva a seleccionar los ficheros."
#: modules/webcomment/lib/webcomment_webinterface.py:476
#: modules/websubmit/lib/websubmit_templates.py:2668
#: modules/websubmit/lib/websubmit_templates.py:2669
msgid "Add Comment"
msgstr "Añadir comentario"
#: modules/webcomment/lib/webcomment_webinterface.py:734
#: modules/webcomment/lib/webcomment_webinterface.py:768
msgid "Page Not Found"
msgstr "No se ha encontrado la página"
#: modules/webcomment/lib/webcomment_webinterface.py:735
msgid "The requested comment could not be found"
msgstr "No se ha encontrado el comentario solicitado"
#: modules/webcomment/lib/webcomment_webinterface.py:769
msgid "The requested file could not be found"
msgstr "No se ha encontrado el fichero solicitado"
#: modules/webcomment/web/admin/webcommentadmin.py:45
#: modules/webcomment/web/admin/webcommentadmin.py:59
#: modules/webcomment/web/admin/webcommentadmin.py:83
#: modules/webcomment/web/admin/webcommentadmin.py:126
#: modules/webcomment/web/admin/webcommentadmin.py:164
#: modules/webcomment/web/admin/webcommentadmin.py:192
#: modules/webcomment/web/admin/webcommentadmin.py:228
#: modules/webcomment/web/admin/webcommentadmin.py:266
msgid "WebComment Admin"
msgstr "Administración de WebComment"
#: modules/webcomment/web/admin/webcommentadmin.py:50
#: modules/webcomment/web/admin/webcommentadmin.py:88
#: modules/webcomment/web/admin/webcommentadmin.py:131
#: modules/webcomment/web/admin/webcommentadmin.py:197
#: modules/webcomment/web/admin/webcommentadmin.py:233
#: modules/webcomment/web/admin/webcommentadmin.py:271
#: modules/websearch/lib/websearch_webinterface.py:1563
#: modules/websearch/web/admin/websearchadmin.py:1040
#: modules/websession/lib/websession_webinterface.py:937
#: modules/webstyle/lib/webstyle_templates.py:585
#: modules/webjournal/web/admin/webjournaladmin.py:390
#: modules/bibcheck/web/admin/bibcheckadmin.py:331
msgid "Internal Error"
msgstr "Error interno"
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid "Delete/Undelete Reviews"
msgstr "Suprimir/recuperar reseñas"
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid "Delete/Undelete Comments"
msgstr "Suprimir/recuperar comentarios"
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid " or Suppress abuse reports"
msgstr " o eliminar las denuncias de abuso"
#: modules/webcomment/web/admin/webcommentadmin.py:242
msgid "View all reported users"
msgstr "Visualizar todos los usuarios denunciados"
#: modules/webcomment/web/admin/webcommentadmin.py:289
msgid "Delete comments"
msgstr "Suprimir comentarios"
#: modules/webcomment/web/admin/webcommentadmin.py:292
msgid "Suppress abuse reports"
msgstr "Eliminar las denuncias de abuso"
#: modules/webcomment/web/admin/webcommentadmin.py:295
msgid "Undelete comments"
msgstr "Recuperar comentarios eliminados"
#: modules/webmessage/lib/webmessage.py:58
#: modules/webmessage/lib/webmessage.py:137
#: modules/webmessage/lib/webmessage.py:203
msgid "Sorry, this message in not in your mailbox."
msgstr "Lo sentimos, este mensaje no está en su buzón."
#: modules/webmessage/lib/webmessage.py:75
#: modules/webmessage/lib/webmessage.py:219
msgid "This message does not exist."
msgstr "Este mensaje no existe."
#: modules/webmessage/lib/webmessage.py:144
msgid "The message could not be deleted."
msgstr "El mensaje no se ha podido suprimir."
#: modules/webmessage/lib/webmessage.py:146
msgid "The message was successfully deleted."
msgstr "El mensaje se ha suprimido correctamente."
#: modules/webmessage/lib/webmessage.py:162
msgid "Your mailbox has been emptied."
msgstr "Se ha vaciado su buzón."
#: modules/webmessage/lib/webmessage.py:368
#, python-format
msgid "The chosen date (%(x_year)i/%(x_month)i/%(x_day)i) is invalid."
msgstr "La fecha escogida (%(x_year)i-%(x_month)i-%(x_day)i) no es válida."
#: modules/webmessage/lib/webmessage.py:377
msgid "Please enter a user name or a group name."
msgstr "Introduzca un nombre de usuario o de grupo."
#: modules/webmessage/lib/webmessage.py:381
#, python-format
msgid ""
"Your message is too long, please edit it. Maximum size allowed is %i "
"characters."
msgstr ""
"Su mensaje es demasiado largo, edítelo por favor. El tamaño máximo es de %i "
"caracteres."
#: modules/webmessage/lib/webmessage.py:396
#, python-format
msgid "Group %s does not exist."
msgstr "El grupo %s no existe."
#: modules/webmessage/lib/webmessage.py:421
#, python-format
msgid "User %s does not exist."
msgstr "El usuario %s no existe."
#: modules/webmessage/lib/webmessage.py:434
#: modules/webmessage/lib/webmessage_webinterface.py:145
#: modules/webmessage/lib/webmessage_webinterface.py:242
msgid "Write a message"
msgstr "Escriba un mensaje"
#: modules/webmessage/lib/webmessage.py:449
msgid ""
"Your message could not be sent to the following recipients due to their "
"quota:"
msgstr ""
"No se ha podido enviar su mensaje a los siguientes destinatarios debido a su "
"cuota:"
#: modules/webmessage/lib/webmessage.py:453
msgid "Your message has been sent."
msgstr "Su mensaje se ha enviado."
#: modules/webmessage/lib/webmessage.py:458
#: modules/webmessage/lib/webmessage_templates.py:472
#: modules/webmessage/lib/webmessage_webinterface.py:87
#: modules/webmessage/lib/webmessage_webinterface.py:311
#: modules/webmessage/lib/webmessage_webinterface.py:357
#: modules/websession/lib/websession_templates.py:607
msgid "Your Messages"
msgstr "Sus mensajes"
# Debe traducirse igual que el 'Subject' del correo electrónico
#: modules/webmessage/lib/webmessage_templates.py:86
#: modules/bibcirculation/lib/bibcirculation_templates.py:5322
msgid "Subject"
msgstr "Asunto"
#: modules/webmessage/lib/webmessage_templates.py:87
msgid "Sender"
msgstr "Remitente"
#: modules/webmessage/lib/webmessage_templates.py:96
msgid "No messages"
msgstr "Sin mensajes"
#: modules/webmessage/lib/webmessage_templates.py:100
msgid "No subject"
msgstr "Sin asunto"
#: modules/webmessage/lib/webmessage_templates.py:146
msgid "Write new message"
msgstr "Escribir un mensaje nuevo"
#: modules/webmessage/lib/webmessage_templates.py:147
msgid "Delete All"
msgstr "Suprimirlo todo"
#: modules/webmessage/lib/webmessage_templates.py:189
msgid "Re:"
msgstr "Re:"
#: modules/webmessage/lib/webmessage_templates.py:281
msgid "Send later?"
msgstr "¿Enviar más tarde?"
#: modules/webmessage/lib/webmessage_templates.py:282
#: modules/websubmit/lib/websubmit_templates.py:3080
msgid "To:"
msgstr "A:"
#: modules/webmessage/lib/webmessage_templates.py:283
msgid "Users"
msgstr "Usuarios"
#: modules/webmessage/lib/webmessage_templates.py:284
msgid "Groups"
msgstr "Grupos"
#: modules/webmessage/lib/webmessage_templates.py:285
#: modules/webmessage/lib/webmessage_templates.py:447
#: modules/websubmit/lib/websubmit_templates.py:3081
msgid "Subject:"
msgstr "Asunto:"
#: modules/webmessage/lib/webmessage_templates.py:286
#: modules/websubmit/lib/websubmit_templates.py:3082
msgid "Message:"
msgstr "Mensaje:"
#: modules/webmessage/lib/webmessage_templates.py:287
#: modules/websubmit/lib/websubmit_templates.py:3083
msgid "SEND"
msgstr "ENVIAR"
#: modules/webmessage/lib/webmessage_templates.py:446
msgid "From:"
msgstr "De:"
# 'en' o 'el'?
#: modules/webmessage/lib/webmessage_templates.py:448
msgid "Sent on:"
msgstr "Enviado el:"
#: modules/webmessage/lib/webmessage_templates.py:449
msgid "Received on:"
msgstr "Recibido el:"
#: modules/webmessage/lib/webmessage_templates.py:450
msgid "Sent to:"
msgstr "Enviado a:"
#: modules/webmessage/lib/webmessage_templates.py:451
msgid "Sent to groups:"
msgstr "Enviado a los grupos:"
#: modules/webmessage/lib/webmessage_templates.py:452
msgid "REPLY"
msgstr "CONTESTAR"
#: modules/webmessage/lib/webmessage_templates.py:453
msgid "DELETE"
msgstr "SUPRIMIR"
#: modules/webmessage/lib/webmessage_templates.py:506
msgid "Are you sure you want to empty your whole mailbox?"
msgstr "¿Está seguro de que desea vaciar todo su buzón?"
#: modules/webmessage/lib/webmessage_templates.py:582
#, python-format
msgid "Quota used: %(x_nb_used)i messages out of max. %(x_nb_total)i"
msgstr "Cuota usada: %(x_nb_used)i mensajes de un máximo de %(x_nb_total)i"
# Una?
#: modules/webmessage/lib/webmessage_templates.py:600
msgid "Please select one or more:"
msgstr "Seleccione uno o más:"
#: modules/webmessage/lib/webmessage_templates.py:631
msgid "Add to users"
msgstr "Añadir a los usuarios"
#: modules/webmessage/lib/webmessage_templates.py:633
msgid "Add to groups"
msgstr "Añadir a los grupos"
#: modules/webmessage/lib/webmessage_templates.py:636
msgid "No matching user"
msgstr "No se ha encontrado ningún usuario que coincida"
#: modules/webmessage/lib/webmessage_templates.py:638
#: modules/websession/lib/websession_templates.py:1819
msgid "No matching group"
msgstr "No se ha encontrado ningún grupo que coincida"
#: modules/webmessage/lib/webmessage_templates.py:675
msgid "Find users or groups:"
msgstr "Buscar usuarios o grupos:"
#: modules/webmessage/lib/webmessage_templates.py:676
msgid "Find a user"
msgstr "Buscar un usuario"
#: modules/webmessage/lib/webmessage_templates.py:677
msgid "Find a group"
msgstr "Buscar un grupo"
#: modules/webmessage/lib/webmessage_templates.py:692
#, python-format
msgid "You have %(x_nb_new)s new messages out of %(x_nb_total)s messages"
msgstr "Tiene %(x_nb_new)s mensajes nuevos de un total de %(x_nb_total)s"
#: modules/webmessage/lib/webmessage_webinterface.py:82
#: modules/webmessage/lib/webmessage_webinterface.py:134
#: modules/webmessage/lib/webmessage_webinterface.py:228
#: modules/webmessage/lib/webmessage_webinterface.py:305
#: modules/webmessage/lib/webmessage_webinterface.py:351
#: modules/webmessage/lib/webmessage_webinterface.py:397
msgid "You are not authorized to use messages."
-msgstr "No está autorizado a utilitzar mensajes."
+msgstr "No está autorizado a utilizar mensajes."
#: modules/webmessage/lib/webmessage_webinterface.py:403
msgid "Read a message"
msgstr "Lea un mensaje"
#: modules/websearch/lib/search_engine.py:833
#: modules/websearch/lib/search_engine.py:860
#: modules/websearch/lib/search_engine.py:4715
#: modules/websearch/lib/search_engine.py:4768
msgid "Search Results"
msgstr "Resultados de la búsqueda"
#: modules/websearch/lib/search_engine.py:973
#: modules/websearch/lib/websearch_templates.py:1174
msgid "any day"
msgstr "cualquier día"
#: modules/websearch/lib/search_engine.py:979
#: modules/websearch/lib/websearch_templates.py:1186
msgid "any month"
msgstr "cualquier mes"
#: modules/websearch/lib/search_engine.py:987
#: modules/websearch/lib/websearch_templates.py:1200
msgid "any year"
msgstr "cualquier año"
#: modules/websearch/lib/search_engine.py:1028
#: modules/websearch/lib/search_engine.py:1047
msgid "any public collection"
-msgstr "qualquier colección pública"
+msgstr "cualquier colección pública"
#: modules/websearch/lib/search_engine.py:1032
msgid "remove this collection"
msgstr "eliminar esta colección"
#: modules/websearch/lib/search_engine.py:1043
msgid "add another collection"
msgstr "añadir otra colección"
#: modules/websearch/lib/search_engine.py:1053
#: modules/websearch/lib/websearch_webcoll.py:592
msgid "rank by"
msgstr "ordenar por"
#: modules/websearch/lib/search_engine.py:1177
#: modules/websearch/lib/websearch_webcoll.py:562
msgid "latest first"
msgstr "el último primero"
#: modules/websearch/lib/search_engine.py:1827
msgid "No values found."
msgstr "No se han encontrado valores."
#: modules/websearch/lib/search_engine.py:1945
msgid ""
"Warning: full-text search is only available for a subset of papers mostly "
"from 2006-2011."
msgstr ""
"Atención: la búsqueda a texto completo sólo está disponible para un "
"subconjunto de documentos, mayoritariamente de entre 2006-2011."
#: modules/websearch/lib/search_engine.py:1947
msgid ""
"Warning: figure caption search is only available for a subset of papers "
"mostly from 2008-2011."
msgstr ""
"Atención: la búsqueda en los pies de imágenes sólo está disponible para un "
"subconjunto de documentos, mayoritariamente de entre 2008-2011."
#: modules/websearch/lib/search_engine.py:1953
#, python-format
msgid "There is no index %s. Searching for %s in all fields."
msgstr "No existe el índice %s. Se buscará %s en todos los campos."
#: modules/websearch/lib/search_engine.py:1957
#, python-format
msgid "Instead searching %s."
msgstr "En cambio se buscará %s."
#: modules/websearch/lib/search_engine.py:1963
msgid "Search term too generic, displaying only partial results..."
msgstr ""
"Término de búsqueda demasiado genérico, sólo se mostrarán unos resultados "
"parciales..."
#: modules/websearch/lib/search_engine.py:1966
msgid ""
"No phrase index available for fulltext yet, looking for word combination..."
msgstr ""
"Todavía no hay índice de frases para el texto completo, buscando por "
"combinación de palabras..."
#: modules/websearch/lib/search_engine.py:2006
#, python-format
msgid "No exact match found for %(x_query1)s, using %(x_query2)s instead..."
msgstr ""
"No se ha encontrado ninguna coincidencia exacta con %(x_query1)s; se "
"utilizará en su lugar %(x_query2)s..."
#: modules/websearch/lib/search_engine.py:2016
#: modules/websearch/lib/search_engine.py:2025
#: modules/websearch/lib/search_engine.py:4687
#: modules/websearch/lib/search_engine.py:4725
#: modules/websearch/lib/search_engine.py:4776
#: modules/websubmit/lib/websubmit_webinterface.py:112
#: modules/websubmit/lib/websubmit_webinterface.py:155
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:839
msgid "Requested record does not seem to exist."
msgstr "El registro solicitado no existe."
#: modules/websearch/lib/search_engine.py:2148
msgid ""
"Search syntax misunderstood. Ignoring all parentheses in the query. If this "
"doesn't help, please check your search and try again."
msgstr ""
"No se entiende la sintaxis de su búsqueda. Se ignorarán todos los "
"paréntesis de la búsqueda. Si así no funciona, repase su búsqueda y vuelva "
"a intentarlo."
#: modules/websearch/lib/search_engine.py:2573
#, python-format
msgid ""
"No match found in collection %(x_collection)s. Other public collections gave "
"%(x_url_open)s%(x_nb_hits)d hits%(x_url_close)s."
msgstr ""
-"No se ha encontrado ninguna coincidencia en la coleción %(x_collection)s. "
+"No se ha encontrado ninguna coincidencia en la colección %(x_collection)s. "
"Las otras colecciones públicas dieron %(x_url_open)s%(x_nb_hits)d resultados"
"%(x_url_close)s."
#: modules/websearch/lib/search_engine.py:2582
msgid ""
"No public collection matched your query. If you were looking for a non-"
"public document, please choose the desired restricted collection first."
msgstr ""
"Ninguna colección pública coincide con su búsqueda. Si estaba buscando "
"documentos no públicos, por favor escoja primero la colección restringida "
"deseada."
#: modules/websearch/lib/search_engine.py:2696
msgid "No match found, please enter different search terms."
msgstr "No se han encontrado resultados. Use términos de búsqueda distintos."
#: modules/websearch/lib/search_engine.py:2702
#, python-format
msgid "There are no records referring to %s."
msgstr "No hay registros que se refieran a %s."
#: modules/websearch/lib/search_engine.py:2704
#, python-format
msgid "There are no records cited by %s."
msgstr "No hay registros citados por %s."
#: modules/websearch/lib/search_engine.py:2709
#, python-format
msgid "No word index is available for %s."
msgstr "No hay ningún índice de palabras disponible para %s."
#: modules/websearch/lib/search_engine.py:2720
#, python-format
msgid "No phrase index is available for %s."
msgstr "No hay ningún índice de frases disponible para %s."
#: modules/websearch/lib/search_engine.py:2767
#, python-format
msgid ""
"Search term %(x_term)s inside index %(x_index)s did not match any record. "
"Nearest terms in any collection are:"
msgstr ""
"El término de búsqueda %(x_term)s en el índice %(x_index)s no se ha "
"encontrado en ningún registro. Los términos aproximados en cualquier "
"colección son:"
#: modules/websearch/lib/search_engine.py:2771
#, python-format
msgid ""
"Search term %s did not match any record. Nearest terms in any collection are:"
msgstr ""
"El término de búsqueda %s no se ha encontrado en ningún registro. Los "
"términos aproximados, en cualquier colección, son:"
#: modules/websearch/lib/search_engine.py:3486
#, python-format
msgid ""
"Sorry, sorting is allowed on sets of up to %d records only. Using default "
"sort order."
msgstr ""
"Sólo se permite ordenar conjuntos de hasta %d registros. Se usará el orden "
"por defecto."
#: modules/websearch/lib/search_engine.py:3510
#, python-format
msgid ""
"Sorry, %s does not seem to be a valid sort option. Choosing title sort "
"instead."
msgstr "No es posible ordenar por %s. Ha quedado ordenado por título."
#: modules/websearch/lib/search_engine.py:3703
#: modules/websearch/lib/search_engine.py:4012
#: modules/websearch/lib/search_engine.py:4191
#: modules/websearch/lib/search_engine.py:4214
#: modules/websearch/lib/search_engine.py:4222
#: modules/websearch/lib/search_engine.py:4230
#: modules/websearch/lib/search_engine.py:4276
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:837
msgid "The record has been deleted."
msgstr "El registro se ha suprimido."
#: modules/websearch/lib/search_engine.py:3901
msgid "Use different search terms."
msgstr "Use términos de búsqueda distintos."
#: modules/websearch/lib/search_engine.py:4997
msgid "No match within your time limits, discarding this condition..."
msgstr ""
"No se han encontrado resultados dentro de los límites de tiempo "
"especificados. Descartando esta condición..."
#: modules/websearch/lib/search_engine.py:5024
msgid "No match within your search limits, discarding this condition..."
msgstr ""
"No se han encontrado resultados dentro de los límites especificados, "
"descartando esta condición..."
#: modules/websearch/lib/websearchadminlib.py:3393
msgid "Information"
msgstr "Información"
#: modules/websearch/lib/websearchadminlib.py:3394
msgid "References"
msgstr "Referencias"
#: modules/websearch/lib/websearchadminlib.py:3395
msgid "Citations"
msgstr "Citas"
#: modules/websearch/lib/websearchadminlib.py:3396
msgid "Keywords"
msgstr "Palabras clave"
#: modules/websearch/lib/websearchadminlib.py:3397
msgid "Discussion"
msgstr "Discusión"
#: modules/websearch/lib/websearchadminlib.py:3398
msgid "Usage statistics"
msgstr "Estadísticas de uso"
#: modules/websearch/lib/websearchadminlib.py:3399
msgid "Files"
msgstr "Ficheros"
#: modules/websearch/lib/websearchadminlib.py:3400
msgid "Plots"
msgstr "Gráficos"
#: modules/websearch/lib/websearchadminlib.py:3401
msgid "Holdings"
msgstr "Disponibilidad"
#: modules/websearch/lib/websearch_templates.py:458
#, python-format
msgid "Search on %(x_CFG_SITE_NAME_INTL)s"
msgstr "Buscar en %(x_CFG_SITE_NAME_INTL)s"
#: modules/websearch/lib/websearch_templates.py:682
#: modules/websearch/lib/websearch_templates.py:831
#, python-format
msgid "Search %s records for:"
msgstr "Buscar en %s registros por:"
#: modules/websearch/lib/websearch_templates.py:734
msgid "less"
msgstr "menos"
#: modules/websearch/lib/websearch_templates.py:735
#: modules/websearch/lib/websearch_templates.py:1508
#: modules/websearch/lib/websearch_templates.py:3893
#: modules/websearch/lib/websearch_templates.py:3970
#: modules/websearch/lib/websearch_templates.py:4030
msgid "more"
msgstr "más"
#: modules/websearch/lib/websearch_templates.py:740
#, python-format
msgid "Example: %(x_sample_search_query)s"
msgstr "Ejemplo: %(x_sample_search_query)s"
#: modules/websearch/lib/websearch_templates.py:752
#: modules/websearch/lib/websearch_templates.py:2129
#, python-format
msgid "Search in %(x_collection_name)s"
msgstr "Búsqueda en %(x_collection_name)s"
#: modules/websearch/lib/websearch_templates.py:756
#: modules/websearch/lib/websearch_templates.py:2133
msgid "Search everywhere"
msgstr "Buscar en todas partes"
#: modules/websearch/lib/websearch_templates.py:790
#: modules/websearch/lib/websearch_templates.py:867
#: modules/websearch/lib/websearch_templates.py:2101
#: modules/websearch/lib/websearch_templates.py:2159
msgid "Advanced Search"
msgstr "Búsqueda avanzada"
#: modules/websearch/lib/websearch_templates.py:928
#, python-format
msgid "Search %s records for"
msgstr "Buscar en %s registros:"
#: modules/websearch/lib/websearch_templates.py:979
#: modules/websearch/lib/websearch_templates.py:2017
msgid "Simple Search"
msgstr "Búsqueda simple"
#: modules/websearch/lib/websearch_templates.py:1012
msgid "Search options:"
msgstr "Opciones de búsqueda:"
#: modules/websearch/lib/websearch_templates.py:1059
#: modules/websearch/lib/websearch_templates.py:2255
msgid "Added/modified since:"
msgstr "Añadido/modificado desde:"
#: modules/websearch/lib/websearch_templates.py:1060
#: modules/websearch/lib/websearch_templates.py:2256
msgid "until:"
msgstr "hasta:"
#: modules/websearch/lib/websearch_templates.py:1065
#: modules/websearch/lib/websearch_templates.py:2298
msgid "Sort by:"
msgstr "Ordenar por:"
#: modules/websearch/lib/websearch_templates.py:1066
#: modules/websearch/lib/websearch_templates.py:2299
msgid "Display results:"
msgstr "Mostrar resultados:"
#: modules/websearch/lib/websearch_templates.py:1067
#: modules/websearch/lib/websearch_templates.py:2300
msgid "Output format:"
msgstr "Formato de visualización:"
#: modules/websearch/lib/websearch_templates.py:1227
msgid "Added since:"
msgstr "Añadido a partir de:"
#: modules/websearch/lib/websearch_templates.py:1228
msgid "Modified since:"
msgstr "Modificado desde:"
#: modules/websearch/lib/websearch_templates.py:1265
msgid "Focus on:"
msgstr "Enfocado a:"
#: modules/websearch/lib/websearch_templates.py:1328
msgid "restricted"
msgstr "restringido"
#: modules/websearch/lib/websearch_templates.py:1355
msgid "Search also:"
msgstr "Busque también:"
#: modules/websearch/lib/websearch_templates.py:1426
msgid ""
"This collection is restricted. If you are authorized to access it, please "
"click on the Search button."
msgstr ""
"Esta colección está restringida. Si está autorizado a acceder a ella, haga "
"clic en el botón de Buscar."
#: modules/websearch/lib/websearch_templates.py:1441
msgid ""
"This is a hosted external collection. Please click on the Search button to "
"see its content."
msgstr ""
"Esta es una colección externa alojada. Pulse el botón de búsqueda para ver "
"su contenido."
#: modules/websearch/lib/websearch_templates.py:1456
msgid "This collection does not contain any document yet."
msgstr "Esta colección no contiene aún ningún documento."
#: modules/websearch/lib/websearch_templates.py:1523
msgid "Latest additions:"
msgstr "Últimas adquisiciones:"
#: modules/websearch/lib/websearch_templates.py:1626
#: modules/websearch/lib/websearch_templates.py:3361
#, python-format
msgid "Cited by %i records"
msgstr "Citado por %i registros"
#: modules/websearch/lib/websearch_templates.py:1692
#, python-format
msgid "Words nearest to %(x_word)s inside %(x_field)s in any collection are:"
msgstr ""
"Las palabras más cercanas a %(x_word)s en %(x_field)s, en cualquier "
"colección, son:"
#: modules/websearch/lib/websearch_templates.py:1695
#, python-format
msgid "Words nearest to %(x_word)s in any collection are:"
msgstr "Las palabras más cercanas a %(x_word)s, en cualquier colección, son:"
#: modules/websearch/lib/websearch_templates.py:1787
msgid "Hits"
msgstr "Resultados"
#: modules/websearch/lib/websearch_templates.py:1866
#: modules/websearch/lib/websearch_templates.py:2518
#: modules/websearch/lib/websearch_templates.py:2708
#: modules/bibedit/lib/bibeditmulti_templates.py:657
msgid "next"
msgstr "siguiente"
#: modules/websearch/lib/websearch_templates.py:2204
msgid "collections"
msgstr "colecciones"
#: modules/websearch/lib/websearch_templates.py:2226
msgid "Limit to:"
msgstr "Limitar a:"
#: modules/websearch/lib/websearch_templates.py:2268
#: modules/websearch/lib/websearch_webcoll.py:610
msgid "results"
msgstr "resultados"
#: modules/websearch/lib/websearch_templates.py:2304
#: modules/websearch/lib/websearch_webcoll.py:580
msgid "asc."
msgstr "asc."
#: modules/websearch/lib/websearch_templates.py:2307
#: modules/websearch/lib/websearch_webcoll.py:581
msgid "desc."
msgstr "desc."
#: modules/websearch/lib/websearch_templates.py:2313
#: modules/websearch/lib/websearch_webcoll.py:624
msgid "single list"
msgstr "lista única"
#: modules/websearch/lib/websearch_templates.py:2316
#: modules/websearch/lib/websearch_webcoll.py:623
msgid "split by collection"
msgstr "dividir por colección"
#: modules/websearch/lib/websearch_templates.py:2354
msgid "MARC tag"
msgstr "Etiqueta MARC"
#: modules/websearch/lib/websearch_templates.py:2469
#: modules/websearch/lib/websearch_templates.py:2474
#: modules/websearch/lib/websearch_templates.py:2652
#: modules/websearch/lib/websearch_templates.py:2664
#: modules/websearch/lib/websearch_templates.py:2985
#: modules/websearch/lib/websearch_templates.py:2994
#, python-format
msgid "%s records found"
msgstr "Encontrados %s registros"
#: modules/websearch/lib/websearch_templates.py:2501
#: modules/websearch/lib/websearch_templates.py:2691
#: modules/bibedit/lib/bibeditmulti_templates.py:655
msgid "begin"
msgstr "inicio"
#: modules/websearch/lib/websearch_templates.py:2506
#: modules/websearch/lib/websearch_templates.py:2696
#: modules/websubmit/lib/websubmit_templates.py:1241
#: modules/bibedit/lib/bibeditmulti_templates.py:656
msgid "previous"
msgstr "anterior"
#: modules/websearch/lib/websearch_templates.py:2525
#: modules/websearch/lib/websearch_templates.py:2715
msgid "end"
msgstr "final"
#: modules/websearch/lib/websearch_templates.py:2545
#: modules/websearch/lib/websearch_templates.py:2735
msgid "jump to record:"
msgstr "ir al registro:"
#: modules/websearch/lib/websearch_templates.py:2558
#: modules/websearch/lib/websearch_templates.py:2748
#, python-format
msgid "Search took %s seconds."
msgstr "La búsqueda tardó %s segundos."
#: modules/websearch/lib/websearch_templates.py:2952
#, python-format
msgid ""
"%(x_fmt_open)sResults overview:%(x_fmt_close)s Found %(x_nb_records)s "
"records in %(x_nb_seconds)s seconds."
msgstr ""
"%(x_fmt_open)sResultados globales:%(x_fmt_close)s %(x_nb_records)s registros "
"encontrados en %(x_nb_seconds)s segundos."
#: modules/websearch/lib/websearch_templates.py:2964
#, python-format
msgid "%(x_fmt_open)sResults overview%(x_fmt_close)s"
msgstr "%(x_fmt_open)sResultados globales%(x_fmt_close)s"
#: modules/websearch/lib/websearch_templates.py:2972
#, python-format
msgid ""
"%(x_fmt_open)sResults overview:%(x_fmt_close)s Found at least "
"%(x_nb_records)s records in %(x_nb_seconds)s seconds."
msgstr ""
"%(x_fmt_open)sResultados globales:%(x_fmt_close)s Al menos %(x_nb_records)s "
"registros encontrados en %(x_nb_seconds)s segundos."
#: modules/websearch/lib/websearch_templates.py:3049
msgid "No results found..."
msgstr "No se han encontrado resultados..."
#: modules/websearch/lib/websearch_templates.py:3082
msgid ""
"Boolean query returned no hits. Please combine your search terms differently."
msgstr ""
"La combinación booleana no ha dado resultados. Por favor combine los "
"términos de búsqueda de otra manera."
#: modules/websearch/lib/websearch_templates.py:3114
msgid "See also: similar author names"
msgstr "Vea también: autores con nombres similares"
#: modules/websearch/lib/websearch_templates.py:3362
msgid "Cited by 1 record"
msgstr "Citado por 1 registro"
#: modules/websearch/lib/websearch_templates.py:3377
#, python-format
msgid "%i comments"
msgstr "%i comentarios"
#: modules/websearch/lib/websearch_templates.py:3378
msgid "1 comment"
msgstr "1 comentario"
#: modules/websearch/lib/websearch_templates.py:3388
#, python-format
msgid "%i reviews"
msgstr "%i reseñas"
#: modules/websearch/lib/websearch_templates.py:3389
msgid "1 review"
msgstr "1 reseña"
#: modules/websearch/lib/websearch_templates.py:3602
#: modules/websearch/lib/websearch_webinterface.py:1580
#, python-format
msgid "Collection %s Not Found"
msgstr "No se ha encontrado la colección %s"
#: modules/websearch/lib/websearch_templates.py:3614
#: modules/websearch/lib/websearch_webinterface.py:1576
#, python-format
msgid "Sorry, collection %s does not seem to exist."
msgstr "Parece ser que la colección %s no existe."
#: modules/websearch/lib/websearch_templates.py:3616
#: modules/websearch/lib/websearch_webinterface.py:1577
#, python-format
msgid "You may want to start browsing from %s."
msgstr "Puede comenzar las búsquedas desde %s."
#: modules/websearch/lib/websearch_templates.py:3643
#, python-format
msgid ""
"Set up a personal %(x_url1_open)semail alert%(x_url1_close)s\n"
" or subscribe to the %(x_url2_open)sRSS feed"
"%(x_url2_close)s."
msgstr ""
"Defina una %(x_url1_open)salerta personal%(x_url1_close)s vía correo "
"electrónico o subscríbase al %(x_url2_open)scanal RSS%(x_url2_close)s."
#: modules/websearch/lib/websearch_templates.py:3650
#, python-format
msgid "Subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s."
msgstr "Subscríbase al %(x_url2_open)scanal RSS%(x_url2_close)s."
#: modules/websearch/lib/websearch_templates.py:3659
msgid "Interested in being notified about new results for this query?"
msgstr "¿Le interesa recibir alertas sobre nuevos resultados de esta búsqueda?"
#: modules/websearch/lib/websearch_templates.py:3746
#: modules/websearch/lib/websearch_templates.py:3796
msgid "Back to search"
msgstr "Volver a la búsqueda"
#: modules/websearch/lib/websearch_templates.py:3756
#: modules/websearch/lib/websearch_templates.py:3772
#: modules/websearch/lib/websearch_templates.py:3787
#, python-format
msgid "%s of"
msgstr "%s de"
#: modules/websearch/lib/websearch_templates.py:3886
msgid "People who downloaded this document also downloaded:"
msgstr "La gente que descargó este documento también descargó:"
#: modules/websearch/lib/websearch_templates.py:3902
msgid "People who viewed this page also viewed:"
msgstr "La gente que vio esta página también vio:"
#: modules/websearch/lib/websearch_templates.py:3956
#, python-format
msgid "Cited by: %s records"
msgstr "Citado por: %s registros"
#: modules/websearch/lib/websearch_templates.py:4023
#, python-format
msgid "Co-cited with: %s records"
msgstr "Co-citado con: %s registros"
#: modules/websearch/lib/websearch_templates.py:4065
#, python-format
msgid ".. of which self-citations: %s records"
msgstr ".. de los cuales son auto-citas: %s registros"
#: modules/websearch/lib/websearch_templates.py:4157
msgid "Name variants"
msgstr "Variantes del nombre"
#: modules/websearch/lib/websearch_templates.py:4168
msgid "No Name Variants"
msgstr "Sin variantes del nombre"
#: modules/websearch/lib/websearch_templates.py:4176
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:763
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:920
msgid "Papers"
msgstr "Documentos"
#: modules/websearch/lib/websearch_templates.py:4180
msgid "downloaded"
msgstr "descargado"
#: modules/websearch/lib/websearch_templates.py:4181
msgid "times"
msgstr "veces"
#: modules/websearch/lib/websearch_templates.py:4210
msgid "No Papers"
msgstr "Ningún documento"
#: modules/websearch/lib/websearch_templates.py:4223
msgid "unknown affiliation"
msgstr "afiliación desconocida"
#: modules/websearch/lib/websearch_templates.py:4230
msgid "No Affiliations"
msgstr "Sin afiliaciones"
#: modules/websearch/lib/websearch_templates.py:4232
msgid "Affiliations"
msgstr "Afiliaciones"
#: modules/websearch/lib/websearch_templates.py:4248
msgid "No Keywords"
msgstr "Sin palabras clave"
#: modules/websearch/lib/websearch_templates.py:4251
msgid "Frequent keywords"
msgstr "Palabras clave frecuentes"
#: modules/websearch/lib/websearch_templates.py:4256
msgid "Frequent co-authors"
msgstr "Co-autores frecuentes"
#: modules/websearch/lib/websearch_templates.py:4267
msgid "No Frequent Co-authors"
msgstr "Sin co-autores frecuentes"
#: modules/websearch/lib/websearch_templates.py:4291
msgid "This is me. Verify my publication list."
msgstr "Soy yo mismo. Verifiquen mi lista de publicaciones."
#: modules/websearch/lib/websearch_templates.py:4330
msgid "Citations:"
msgstr "Citaciones:"
#: modules/websearch/lib/websearch_templates.py:4334
msgid "No Citation Information available"
msgstr "No hay información de citas disponible"
#: modules/websearch/lib/websearch_templates.py:4401
msgid "Citation summary results"
msgstr "Recuento de citaciones"
#: modules/websearch/lib/websearch_templates.py:4406
msgid "Total number of citable papers analyzed:"
msgstr "Número total de artículos citables analizados:"
#: modules/websearch/lib/websearch_templates.py:4429
msgid "Total number of citations:"
msgstr "Número total de citaciones:"
#: modules/websearch/lib/websearch_templates.py:4434
msgid "Average citations per paper:"
msgstr "Media de citas por artículo:"
#: modules/websearch/lib/websearch_templates.py:4444
msgid "Total number of citations excluding self-citations:"
msgstr "Número total de citaciones excluyendo las autocitas:"
#: modules/websearch/lib/websearch_templates.py:4449
msgid "Average citations per paper excluding self-citations:"
msgstr "Media de citas por artículo excluyendo las autocitas:"
#: modules/websearch/lib/websearch_templates.py:4458
msgid "Breakdown of papers by citations:"
msgstr "Clasificación de artículos por citas:"
#: modules/websearch/lib/websearch_templates.py:4492
msgid "Additional Citation Metrics"
msgstr "Otras métricas de citas"
#: modules/websearch/lib/websearch_webinterface.py:735
#: modules/websearch/lib/websearch_webinterface.py:747
#, python-format
msgid ""
"We're sorry. The requested author \"%s\" seems not to be listed on the "
"specified paper."
msgstr "Por desgracia, el autor \"%s\" no aparece en este artículo."
#: modules/websearch/lib/websearch_webinterface.py:738
#: modules/websearch/lib/websearch_webinterface.py:750
msgid "Please try the following link to start a broader search on the author: "
msgstr ""
"Pinchando en el siguiente enlace realizará una búsqueda más amplia del autor:"
#: modules/websearch/lib/websearch_webinterface.py:1163
msgid "You are not authorized to view this area."
msgstr "No está autorizado a ver esta área."
#: modules/websearch/lib/websearch_webinterface.py:1582
msgid "Not found"
msgstr "No se ha encontrado"
#: modules/websearch/lib/websearch_external_collections.py:145
msgid "in"
msgstr "en"
#: modules/websearch/lib/websearch_external_collections_templates.py:51
msgid ""
"Haven't found what you were looking for? Try your search on other servers:"
msgstr "¿No ha encontrado lo que estaba buscando? Intente su búsqueda en:"
#: modules/websearch/lib/websearch_external_collections_templates.py:79
msgid "External collections results overview:"
msgstr "Resumen de los resultados de las colecciones externas:"
#: modules/websearch/lib/websearch_external_collections_templates.py:119
msgid "Search timed out."
msgstr "Tiempo de búsqueda excedido."
#: modules/websearch/lib/websearch_external_collections_templates.py:120
msgid ""
"The external search engine has not responded in time. You can check its "
"results here:"
msgstr ""
"El buscador externo no ha respondido a tiempo. Puede ver los resultados "
"aquí:"
#: modules/websearch/lib/websearch_external_collections_templates.py:146
#: modules/websearch/lib/websearch_external_collections_templates.py:154
#: modules/websearch/lib/websearch_external_collections_templates.py:167
msgid "No results found."
msgstr "No se han encontrado resultados."
#: modules/websearch/lib/websearch_external_collections_templates.py:150
#, python-format
msgid "%s results found"
msgstr "Se han encontrado %s resultados"
#: modules/websearch/lib/websearch_external_collections_templates.py:152
#, python-format
msgid "%s seconds"
msgstr "%s segundos"
#: modules/websearch/lib/websearch_webcoll.py:645
msgid "brief"
msgstr "breve"
#: modules/websession/lib/webaccount.py:116
#, python-format
msgid ""
"You are logged in as guest. You may want to %(x_url_open)slogin"
"%(x_url_close)s as a regular user."
msgstr ""
"Está conectado como visitante. Quizás quiera %(x_url_open)sidentificarse"
"%(x_url_close)s como usuario registrado."
#: modules/websession/lib/webaccount.py:120
#, python-format
msgid ""
"The %(x_fmt_open)sguest%(x_fmt_close)s users need to %(x_url_open)sregister"
"%(x_url_close)s first"
msgstr ""
"Los %(x_fmt_open)svisitantes%(x_fmt_close)s antes han de %(x_url_open)sdarse "
"de alta%(x_url_close)s."
#: modules/websession/lib/webaccount.py:125
msgid "No queries found"
msgstr "No se ha encontrado ninguna búsqueda"
#: modules/websession/lib/webaccount.py:367
msgid ""
"This collection is restricted. If you think you have right to access it, "
"please authenticate yourself."
msgstr ""
"Esta colección está restringida. Si cree que tiene derecho a acceder a "
"ella, identifíquese."
#: modules/websession/lib/webaccount.py:368
msgid ""
"This file is restricted. If you think you have right to access it, please "
"authenticate yourself."
msgstr ""
"Este documento está restringido. Si cree que tiene derecho a acceder a él, "
"identifíquese."
#: modules/websession/lib/websession_templates.py:93
msgid "External account settings"
msgstr "Configuración de la cuenta externa"
#: modules/websession/lib/websession_templates.py:95
#, python-format
msgid ""
"You can consult the list of your external groups directly in the "
"%(x_url_open)sgroups page%(x_url_close)s."
msgstr ""
-"Puede consultar la lista de sus grupos externos directament en la "
+"Puede consultar la lista de sus grupos externos directamente en la "
"%(x_url_open)spágina de los grupos%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:99
msgid "External user groups"
msgstr "Grupos de usuarios externos"
#: modules/websession/lib/websession_templates.py:156
msgid ""
"If you want to change your email or set for the first time your nickname, "
"please set new values in the form below."
msgstr ""
"Si desea cambiar su dirección de correo electrónico o definir por primera "
"vez su alias, introduzca los nuevos valores en el formulario siguiente."
#: modules/websession/lib/websession_templates.py:157
msgid "Edit login credentials"
msgstr "Edite las credenciales de identificación"
#: modules/websession/lib/websession_templates.py:162
msgid "New email address"
msgstr "Nueva dirección de correo electrónico"
#: modules/websession/lib/websession_templates.py:163
#: modules/websession/lib/websession_templates.py:210
#: modules/websession/lib/websession_templates.py:1036
msgid "mandatory"
msgstr "obligatorio"
#: modules/websession/lib/websession_templates.py:166
msgid "Set new values"
msgstr "Guardar los nuevos valores"
#: modules/websession/lib/websession_templates.py:170
msgid ""
"Since this is considered as a signature for comments and reviews, once set "
"it can not be changed."
msgstr ""
"Ya que se considera una firma para comentarios y reseñas, una vez definido "
"no se puede cambiar."
#: modules/websession/lib/websession_templates.py:209
msgid ""
"If you want to change your password, please enter the old one and set the "
"new value in the form below."
msgstr ""
"Si desea cambiar su contraseña, introduzca los valores antiguo y nuevo en "
"este formulario."
#: modules/websession/lib/websession_templates.py:211
msgid "Old password"
msgstr "Contraseña antigua"
#: modules/websession/lib/websession_templates.py:212
msgid "New password"
msgstr "Contraseña nueva"
#: modules/websession/lib/websession_templates.py:213
#: modules/websession/lib/websession_templates.py:1037
msgid "optional"
msgstr "opcional"
#: modules/websession/lib/websession_templates.py:215
#: modules/websession/lib/websession_templates.py:1040
msgid "The password phrase may contain punctuation, spaces, etc."
msgstr "La contraseña puede contener puntuación, espacios, etc."
#: modules/websession/lib/websession_templates.py:216
msgid "You must fill the old password in order to set a new one."
msgstr ""
"Tiene que entrar la contraseña antigua para cambiarla por una de nueva."
#: modules/websession/lib/websession_templates.py:217
msgid "Retype password"
msgstr "Vuelva a escribir la contraseña"
#: modules/websession/lib/websession_templates.py:218
msgid "Set new password"
msgstr "Ponga la contraseña nueva"
#: modules/websession/lib/websession_templates.py:223
#, python-format
msgid ""
"If you are using a lightweight CERN account you can\n"
" %(x_url_open)sreset the password%(x_url_close)s."
msgstr ""
"Si está utilizando una cuenta CERN ligera puede %(x_url_open)sreiniciar la "
"contraseña%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:229
#, python-format
msgid ""
"You can change or reset your CERN account password by means of the "
"%(x_url_open)sCERN account system%(x_url_close)s."
msgstr ""
"Puede cambiar o reiniciar la contraseña de su cuenta del CERN vía el "
"%(x_url_open)ssistema de cuentas del CERN%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:253
msgid "Edit cataloging interface settings"
msgstr "Editar los parámetros de catalogación"
#: modules/websession/lib/websession_templates.py:254
#: modules/websession/lib/websession_templates.py:900
msgid "Username"
msgstr "Nombre de usuario"
#: modules/websession/lib/websession_templates.py:255
#: modules/websession/lib/websession_templates.py:901
#: modules/websession/lib/websession_templates.py:1035
msgid "Password"
msgstr "Contraseña"
#: modules/websession/lib/websession_templates.py:256
#: modules/websession/lib/websession_templates.py:282
#: modules/websession/lib/websession_templates.py:316
msgid "Update settings"
msgstr "Actualizar los parámetros"
#: modules/websession/lib/websession_templates.py:270
msgid "Edit language-related settings"
msgstr "Editar los parámetros de lengua"
#: modules/websession/lib/websession_templates.py:281
msgid "Select desired language of the web interface."
msgstr "Escoja la lengua que prefiera para la web."
#: modules/websession/lib/websession_templates.py:299
msgid "Edit search-related settings"
msgstr "Editar parámetros de búsqueda"
#: modules/websession/lib/websession_templates.py:300
msgid "Show the latest additions box"
msgstr "Mostrar el texto de las últimas entradas"
#: modules/websession/lib/websession_templates.py:302
msgid "Show collection help boxes"
msgstr "Muestra los textos de ayuda de la colección"
#: modules/websession/lib/websession_templates.py:317
msgid "Number of search results per page"
msgstr "Número de resultados por página"
#: modules/websession/lib/websession_templates.py:347
msgid "Edit login method"
msgstr "Edite el método de identificación"
#: modules/websession/lib/websession_templates.py:348
msgid ""
"Please select which login method you would like to use to authenticate "
"yourself"
msgstr "Seleccione qué método de identificación prefiere para autenticarse"
#: modules/websession/lib/websession_templates.py:349
#: modules/websession/lib/websession_templates.py:363
msgid "Select method"
msgstr "Seleccione el método"
#: modules/websession/lib/websession_templates.py:381
#, python-format
msgid ""
"If you have lost the password for your %(sitename)s %(x_fmt_open)sinternal "
"account%(x_fmt_close)s, then please enter your email address in the "
"following form in order to have a password reset link emailed to you."
msgstr ""
"Si ha perdido la contraseña de la %(x_fmt_open)scuenta interna"
-"%(x_fmt_close)s de %(sitename)s, escriba su diercción elecrónica en este "
+"%(x_fmt_close)s de %(sitename)s, escriba su dirección de correo electrónico en este "
"formulario para que le enviemos un enlace para reiniciar su contraseña."
#: modules/websession/lib/websession_templates.py:403
#: modules/websession/lib/websession_templates.py:1033
msgid "Email address"
msgstr "Dirección de correo electrónico"
#: modules/websession/lib/websession_templates.py:404
msgid "Send password reset link"
msgstr "Enviar el enlace para reiniciar la contraseña"
#: modules/websession/lib/websession_templates.py:408
#, python-format
msgid ""
"If you have been using the %(x_fmt_open)sCERN login system%(x_fmt_close)s, "
"then you can recover your password through the %(x_url_open)sCERN "
"authentication system%(x_url_close)s."
msgstr ""
"Si su cuenta utiliza el %(x_fmt_open)ssistema de identificación del CERN"
"%(x_fmt_close)s, puede recuperar su contraseña a través del "
"%(x_url_open)ssistema de autenticación del CERN%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:411
msgid ""
"Note that if you have been using an external login system, then we cannot do "
"anything and you have to ask there."
msgstr ""
"Tenga en cuenta que si ha estado utilizando un sistema de identificación "
"externo, sentimos no podemos hacer nada. Tendrá que preguntarlo allí."
#: modules/websession/lib/websession_templates.py:412
#, python-format
msgid ""
"Alternatively, you can ask %s to change your login system from external to "
"internal."
msgstr ""
"Alternativamente, puede pedir a %s que le cambie el sistema de "
"identificación al interno."
#: modules/websession/lib/websession_templates.py:439
#, python-format
msgid ""
"%s offers you the possibility to personalize the interface, to set up your "
"own personal library of documents, or to set up an automatic alert query "
"that would run periodically and would notify you of search results by email."
msgstr ""
"%s le ofrece la posibilidad de personalizar la interfaz, crear su propia "
"biblioteca de documentos, o crear alertas automáticas que se ejecuten "
"periódicamente y le notifiquen del resultado de la búsqueda por correo "
"electrónico."
#: modules/websession/lib/websession_templates.py:449
#: modules/websession/lib/websession_webinterface.py:276
msgid "Your Settings"
msgstr "Sus parámetros"
#: modules/websession/lib/websession_templates.py:450
msgid ""
"Set or change your account email address or password. Specify your "
"preferences about the look and feel of the interface."
msgstr ""
"Ponga o cambie la dirección de correo electrónico de esta cuenta o su "
"contraseña. Especifique sus preferencias sobre el aspecto que desea."
#: modules/websession/lib/websession_templates.py:458
msgid "View all the searches you performed during the last 30 days."
msgstr "Vea todas las búsquedas que ha realizado durante los últimos 30 días."
#: modules/websession/lib/websession_templates.py:466
msgid ""
"With baskets you can define specific collections of items, store interesting "
"records you want to access later or share with others."
msgstr ""
-"Las cestas le permiten definir colecciones específicas de ítem, guardar "
+"Las cestas le permiten definir colecciones específicas de elemento, guardar "
"registros interesantes para acceder más adelante o compartir con otros."
#: modules/websession/lib/websession_templates.py:475
msgid ""
"Subscribe to a search which will be run periodically by our service. The "
"result can be sent to you via Email or stored in one of your baskets."
msgstr ""
"Subscríbase a una búsqueda para que se ejecute periódicamente en nuestro "
"servicio. Podrá recibir el resultado por correo electrónico o guardarlo en "
"una de sus cestas."
#: modules/websession/lib/websession_templates.py:484
#: modules/websession/lib/websession_templates.py:610
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:126
msgid "Your Loans"
msgstr "Sus préstamos"
#: modules/websession/lib/websession_templates.py:485
msgid ""
"Check out book you have on loan, submit borrowing requests, etc. Requires "
"CERN ID."
msgstr ""
"Compruebe los libros que tiene en préstamo, solicite reservas, etc. Requiere "
"el ID del CERN."
#: modules/websession/lib/websession_templates.py:512
msgid ""
"You are logged in as a guest user, so your alerts will disappear at the end "
"of the current session."
msgstr ""
"Ahora usted está identificado como usuario visitante, con lo que sus alertas "
"desaparecerán al final de esta sesión."
#: modules/websession/lib/websession_templates.py:535
#, python-format
msgid ""
"You are logged in as %(x_user)s. You may want to a) %(x_url1_open)slogout"
"%(x_url1_close)s; b) edit your %(x_url2_open)saccount settings"
"%(x_url2_close)s."
msgstr ""
"Usted se ha identificado como %(x_user)s. Ahora puede a) "
"%(x_url1_open)sdesconectarse%(x_url1_close)s; b) modificar las "
"%(x_url2_open)spreferencias de su cuenta%(x_url2_close)s."
#: modules/websession/lib/websession_templates.py:616
msgid "Your Alert Searches"
msgstr "Sus alertas"
#: modules/websession/lib/websession_templates.py:622
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour groups%(x_url_close)s you are "
"administering or are a member of."
msgstr ""
"Puede consultar la lista de %(x_url_open)slos grupos%(x_url_close)s que "
"administra o de los que forma parte."
#: modules/websession/lib/websession_templates.py:625
#: modules/websession/lib/websession_templates.py:2326
#: modules/websession/lib/websession_webinterface.py:1020
msgid "Your Groups"
msgstr "Sus grupos"
#: modules/websession/lib/websession_templates.py:628
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour submissions%(x_url_close)s "
"and inquire about their status."
msgstr ""
"Puede consultar la lista de %(x_url_open)ssus envíos%(x_url_close)s e "
"informarse de su estado."
#: modules/websession/lib/websession_templates.py:631
#: modules/websubmit/web/yoursubmissions.py:160
msgid "Your Submissions"
msgstr "Sus envíos"
#: modules/websession/lib/websession_templates.py:634
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour approvals%(x_url_close)s with "
"the documents you approved or refereed."
msgstr ""
"Puede consultar la lista de %(x_url_open)ssus aprobaciones%(x_url_close)s "
"con los documentos que ha aprobado o revisado."
#: modules/websession/lib/websession_templates.py:637
#: modules/websubmit/web/yourapprovals.py:88
msgid "Your Approvals"
msgstr "Sus aprobaciones"
#: modules/websession/lib/websession_templates.py:641
#, python-format
msgid "You can consult the list of %(x_url_open)syour tickets%(x_url_close)s."
msgstr "Puede consultar la lista de %(x_url_open)ssus tareas%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:644
msgid "Your Tickets"
msgstr "Sus tareas"
#: modules/websession/lib/websession_templates.py:646
#: modules/websession/lib/websession_webinterface.py:625
msgid "Your Administrative Activities"
msgstr "Sus actividades administrativas"
#: modules/websession/lib/websession_templates.py:673
#: modules/bibharvest/lib/oai_harvest_admin.py:470
#: modules/bibharvest/lib/oai_harvest_admin.py:485
msgid "Try again"
msgstr "Vuélvalo a intentar"
#: modules/websession/lib/websession_templates.py:695
#, python-format
msgid ""
"Somebody (possibly you) coming from %(x_ip_address)s has asked\n"
"for a password reset at %(x_sitename)s\n"
"for the account \"%(x_email)s\"."
msgstr ""
-"Alguien (possiblement usted), desde a dirección %(x_ip_address)s, ha "
+"Alguien (posiblemente usted), desde a dirección %(x_ip_address)s, ha "
"solicitado un cambio de contraseña en %(x_sitename)s para la cuenta "
"%(x_email)s. "
#: modules/websession/lib/websession_templates.py:703
msgid "If you want to reset the password for this account, please go to:"
msgstr "Si quiere reiniciar la contraseña de esta cuenta, vaya a:"
#: modules/websession/lib/websession_templates.py:709
#: modules/websession/lib/websession_templates.py:746
msgid "in order to confirm the validity of this request."
msgstr "para confirmar la validez de esta petición."
#: modules/websession/lib/websession_templates.py:710
#: modules/websession/lib/websession_templates.py:747
#, python-format
msgid ""
"Please note that this URL will remain valid for about %(days)s days only."
msgstr ""
"Tenga en cuenta que esta URL sólo será válida durante unos %(days)s días."
#: modules/websession/lib/websession_templates.py:732
#, python-format
msgid ""
"Somebody (possibly you) coming from %(x_ip_address)s has asked\n"
"to register a new account at %(x_sitename)s\n"
"for the email address \"%(x_email)s\"."
msgstr ""
-"Alguien (possiblement usted), desde a dirección %(x_ip_address)s, ha "
+"Alguien (posiblemente usted), desde a dirección %(x_ip_address)s, ha "
"solicitado una cuenta nueva en %(x_sitename)s para la dirección de correo "
"electrónico %(x_email)s."
#: modules/websession/lib/websession_templates.py:740
msgid "If you want to complete this account registration, please go to:"
msgstr "Para completar el alta de la cuenta, vaya a:"
#: modules/websession/lib/websession_templates.py:766
#, python-format
msgid "Okay, a password reset link has been emailed to %s."
msgstr ""
"El enlace para reiniciar la contraseña se ha enviado por correo electrónico "
"a %s."
#: modules/websession/lib/websession_templates.py:781
msgid "Deleting your account"
msgstr "Borrando su cuenta"
#: modules/websession/lib/websession_templates.py:795
msgid "You are no longer recognized by our system."
msgstr "Ya no está identificado en nuestro sistema."
#: modules/websession/lib/websession_templates.py:797
#, python-format
msgid ""
"You are still recognized by the centralized\n"
" %(x_fmt_open)sSSO%(x_fmt_close)s system. You can\n"
" %(x_url_open)slogout from SSO%(x_url_close)s, too."
msgstr ""
"Usted todavía está reconocido por el sistema central de %(x_fmt_open)sSSO"
"%(x_fmt_close)s. También puede %(x_url_open)sdesconectar del SSO"
"%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:804
#, python-format
msgid "If you wish you can %(x_url_open)slogin here%(x_url_close)s."
msgstr "Si lo desea puede %(x_url_open)sidentificarse aquí%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:835
msgid "If you already have an account, please login using the form below."
msgstr "Si ya tiene una cuenta, identifíquese por favor en este formulario."
#: modules/websession/lib/websession_templates.py:839
#, python-format
msgid ""
"If you don't own a CERN account yet, you can register a %(x_url_open)snew "
"CERN lightweight account%(x_url_close)s."
msgstr ""
"Si todavía no dispone de una cuenta en el CERN, puede crear una "
"%(x_url_open)scuenta CERN ligera%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:842
#, python-format
msgid ""
"If you don't own an account yet, please %(x_url_open)sregister"
"%(x_url_close)s an internal account."
msgstr ""
"Si todavía no tiene una cuenta, puede %(x_url_open)sdarse de alta"
"%(x_url_close)s en una cuenta interna."
#: modules/websession/lib/websession_templates.py:850
#, python-format
msgid "If you don't own an account yet, please contact %s."
msgstr "Si todavía no tiene una cuenta, póngase en contacto con %s."
#: modules/websession/lib/websession_templates.py:873
msgid "Login method:"
msgstr "Método de identificación"
#: modules/websession/lib/websession_templates.py:902
msgid "Remember login on this computer."
msgstr "Recordar la identificación en este ordenador."
#: modules/websession/lib/websession_templates.py:903
#: modules/websession/lib/websession_templates.py:1182
#: modules/websession/lib/websession_webinterface.py:103
#: modules/websession/lib/websession_webinterface.py:198
#: modules/websession/lib/websession_webinterface.py:748
#: modules/websession/lib/websession_webinterface.py:839
msgid "login"
msgstr "identificación"
#: modules/websession/lib/websession_templates.py:908
#: modules/websession/lib/websession_webinterface.py:529
msgid "Lost your password?"
msgstr "¿Ha perdido su contraseña?"
#: modules/websession/lib/websession_templates.py:916
msgid "You can use your nickname or your email address to login."
msgstr ""
"Para identificarse puede usar su alias o su dirección de correo electrónico."
#: modules/websession/lib/websession_templates.py:940
msgid ""
"Your request is valid. Please set the new desired password in the following "
"form."
msgstr ""
"Su petición ha sido validada. Escriba la nueva contraseña en este "
"formulario."
#: modules/websession/lib/websession_templates.py:963
msgid "Set a new password for"
msgstr "Definir la nueva contraseña para"
#: modules/websession/lib/websession_templates.py:964
msgid "Type the new password"
msgstr "Escriba la nueva contraseña"
#: modules/websession/lib/websession_templates.py:965
msgid "Type again the new password"
-msgstr "Escriba ortra vez la nueva contraseña"
+msgstr "Escriba otra vez la nueva contraseña"
#: modules/websession/lib/websession_templates.py:966
msgid "Set the new password"
msgstr "Definir la nueva contraseña"
#: modules/websession/lib/websession_templates.py:988
msgid "Please enter your email address and desired nickname and password:"
msgstr ""
"Introduzca su dirección de correo electrónico así como el alias y contraseña:"
#: modules/websession/lib/websession_templates.py:990
msgid ""
"It will not be possible to use the account before it has been verified and "
"activated."
msgstr ""
"No será posible usar esta cuenta hasta que se haya verificado y activado."
#: modules/websession/lib/websession_templates.py:1041
msgid "Retype Password"
msgstr "Vuelva a escribir la contraseña"
#: modules/websession/lib/websession_templates.py:1042
#: modules/websession/lib/websession_webinterface.py:942
msgid "register"
msgstr "darse de alta"
#: modules/websession/lib/websession_templates.py:1043
#, python-format
msgid ""
"Please do not use valuable passwords such as your Unix, AFS or NICE "
"passwords with this service. Your email address will stay strictly "
"confidential and will not be disclosed to any third party. It will be used "
"to identify you for personal services of %s. For example, you may set up an "
"automatic alert search that will look for new preprints and will notify you "
"daily of new arrivals by email."
msgstr ""
"No escoja contraseñas valiosas como las de sus cuentas personales de correo "
"electrónico o acceso a datos profesionales. Su dirección de correo "
"electrónico será estrictamente confidencial y no se pasará a terceros. Será "
"usada para identificar sus servicios personales en %s. Por ejemplo, puede "
"activar un servicio de alerta automático que busque nuevos registros y le "
"notifique diariamente de las nuevas entradas por correo electrónico."
#: modules/websession/lib/websession_templates.py:1047
#, python-format
msgid ""
"It is not possible to create an account yourself. Contact %s if you want an "
"account."
msgstr ""
"No es posible que usted cree una cuenta. Póngase en contacto con %s si "
"quiere una."
#: modules/websession/lib/websession_templates.py:1073
#, python-format
msgid ""
"You seem to be a guest user. You have to %(x_url_open)slogin%(x_url_close)s "
"first."
msgstr ""
"Usted parece ser un usuario visitante. Antes tiene que "
"%(x_url_open)sidentificarse%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:1079
msgid "You are not authorized to access administrative functions."
-msgstr "No está autorizado a accedir a funciones administrativas."
+msgstr "No está autorizado a acceder a funciones administrativas."
#: modules/websession/lib/websession_templates.py:1082
#, python-format
msgid "You are enabled to the following roles: %(x_role)s."
-msgstr "Tiene activados los seguientes roles: %(x_role)s."
+msgstr "Tiene activados los siguientes roles: %(x_role)s."
#: modules/websession/lib/websession_templates.py:1098
msgid "Run BibSword Client"
msgstr "Ejecutar el cliente BibSword"
#: modules/websession/lib/websession_templates.py:1125
msgid "Here are some interesting web admin links for you:"
msgstr "Aquí tiene algunos enlaces de administración interesantes:"
#: modules/websession/lib/websession_templates.py:1127
#, python-format
msgid ""
"For more admin-level activities, see the complete %(x_url_open)sAdmin Area"
"%(x_url_close)s."
msgstr ""
-"Para más activitades administrativas, vea toda la %(x_url_open)sZona de "
+"Para más actividades administrativas, vea toda la %(x_url_open)sZona de "
"administración%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:1180
msgid "guest"
msgstr "visitante"
#: modules/websession/lib/websession_templates.py:1194
msgid "logout"
msgstr "salir"
#: modules/websession/lib/websession_templates.py:1242
#: modules/webstyle/lib/webstyle_templates.py:435
#: modules/webstyle/lib/webstyle_templates.py:504
msgid "Personalize"
msgstr "Personalizar"
#: modules/websession/lib/websession_templates.py:1250
msgid "Your account"
msgstr "Su cuenta"
#: modules/websession/lib/websession_templates.py:1256
msgid "Your alerts"
msgstr "Sus alertas"
#: modules/websession/lib/websession_templates.py:1262
msgid "Your approvals"
msgstr "Sus aprobaciones"
#: modules/websession/lib/websession_templates.py:1268
msgid "Your baskets"
msgstr "Sus cestas"
#: modules/websession/lib/websession_templates.py:1274
msgid "Your groups"
msgstr "Sus grupos"
#: modules/websession/lib/websession_templates.py:1280
msgid "Your loans"
msgstr "Sus préstamos"
#: modules/websession/lib/websession_templates.py:1286
msgid "Your messages"
msgstr "Sus mensajes"
#: modules/websession/lib/websession_templates.py:1292
msgid "Your submissions"
msgstr "Sus envíos"
#: modules/websession/lib/websession_templates.py:1298
msgid "Your searches"
msgstr "Sus búsquedas"
#: modules/websession/lib/websession_templates.py:1351
msgid "Administration"
msgstr "administración"
#: modules/websession/lib/websession_templates.py:1367
msgid "Statistics"
msgstr "Estadísticas"
#: modules/websession/lib/websession_templates.py:1484
msgid "You are an administrator of the following groups:"
msgstr "Usted es administrador de estos grupos:"
#: modules/websession/lib/websession_templates.py:1504
#: modules/websession/lib/websession_templates.py:1578
#: modules/websession/lib/websession_templates.py:1641
#: modules/websubmit/lib/websubmit_templates.py:3012
#: modules/websubmit/lib/websubmit_templates.py:3018
msgid "Group"
msgstr "Grupo"
#: modules/websession/lib/websession_templates.py:1511
msgid "You are not an administrator of any groups."
msgstr "Usted no es administrador de ningún grupo."
#: modules/websession/lib/websession_templates.py:1518
msgid "Edit group"
msgstr "Editar grupo"
#: modules/websession/lib/websession_templates.py:1525
#, python-format
msgid "Edit %s members"
msgstr "Editar los %s miembros del grupo"
#: modules/websession/lib/websession_templates.py:1548
#: modules/websession/lib/websession_templates.py:1688
#: modules/websession/lib/websession_templates.py:1690
#: modules/websession/lib/websession_webinterface.py:1076
msgid "Create new group"
msgstr "Crear un nuevo grupo"
#: modules/websession/lib/websession_templates.py:1562
msgid "You are a member of the following groups:"
msgstr "Usted es miembro de los grupos siguientes:"
#: modules/websession/lib/websession_templates.py:1585
msgid "You are not a member of any groups."
msgstr "Usted no es miembro de ningún grupo."
#: modules/websession/lib/websession_templates.py:1609
msgid "Join new group"
msgstr "Unirse a un grupo"
#: modules/websession/lib/websession_templates.py:1610
#: modules/websession/lib/websession_templates.py:2162
#: modules/websession/lib/websession_templates.py:2173
msgid "Leave group"
msgstr "Dejar el grupo"
#: modules/websession/lib/websession_templates.py:1625
msgid "You are a member of the following external groups:"
msgstr "Usted es miembro de los siguientes grupos externos:"
#: modules/websession/lib/websession_templates.py:1648
msgid "You are not a member of any external groups."
msgstr "Usted no es miembro de ningún grupo externo."
#: modules/websession/lib/websession_templates.py:1696
msgid "Update group"
msgstr "Actualizar grupo"
#: modules/websession/lib/websession_templates.py:1698
#, python-format
msgid "Edit group %s"
msgstr "Editar el grupo %s"
#: modules/websession/lib/websession_templates.py:1700
msgid "Delete group"
msgstr "Suprimir el grupo"
#: modules/websession/lib/websession_templates.py:1773
msgid "Group name:"
msgstr "Nombre del grupo:"
#: modules/websession/lib/websession_templates.py:1775
msgid "Group description:"
msgstr "Descripción del grupo:"
#: modules/websession/lib/websession_templates.py:1776
msgid "Group join policy:"
msgstr "Política para unirse al grupo:"
#: modules/websession/lib/websession_templates.py:1817
#: modules/websession/lib/websession_templates.py:1890
#: modules/websession/lib/websession_templates.py:2031
#: modules/websession/lib/websession_templates.py:2040
#: modules/websession/lib/websession_templates.py:2160
#: modules/websession/lib/websession_templates.py:2272
msgid "Please select:"
msgstr "Seleccione:"
#: modules/websession/lib/websession_templates.py:1883
msgid "Join group"
msgstr "Unirse a un grupo"
#: modules/websession/lib/websession_templates.py:1885
msgid "or find it"
msgstr "o buscarlo: "
#: modules/websession/lib/websession_templates.py:1886
msgid "Choose group:"
msgstr "Escoja grupo:"
#: modules/websession/lib/websession_templates.py:1888
msgid "Find group"
msgstr "Busque grupo"
#: modules/websession/lib/websession_templates.py:2036
msgid "Remove member"
msgstr "Eliminar miembro"
#: modules/websession/lib/websession_templates.py:2038
msgid "No members."
msgstr "Sin miembros."
#: modules/websession/lib/websession_templates.py:2048
msgid "Accept member"
-msgstr "Acceptar miembro"
+msgstr "Aceptar miembro"
#: modules/websession/lib/websession_templates.py:2048
msgid "Reject member"
msgstr "Rechazar miembro"
#: modules/websession/lib/websession_templates.py:2050
msgid "No members awaiting approval."
msgstr "No hay miembros pendientes de aprobación"
#: modules/websession/lib/websession_templates.py:2052
#: modules/websession/lib/websession_templates.py:2086
msgid "Current members"
msgstr "Miembros actuales"
#: modules/websession/lib/websession_templates.py:2053
#: modules/websession/lib/websession_templates.py:2087
msgid "Members awaiting approval"
msgstr "Miembros pendientes de aprobación"
#: modules/websession/lib/websession_templates.py:2054
#: modules/websession/lib/websession_templates.py:2088
msgid "Invite new members"
msgstr "Invitar a nuevos miembros"
#: modules/websession/lib/websession_templates.py:2059
#, python-format
msgid "Invitation to join \"%s\" group"
msgstr "Invitación a unirse al grupo \"%s\""
#: modules/websession/lib/websession_templates.py:2060
#, python-format
msgid ""
"Hello:\n"
"\n"
"I think you might be interested in joining the group \"%(x_name)s\".\n"
"You can join by clicking here: %(x_url)s.\n"
"\n"
"Best regards.\n"
msgstr ""
"Hola,\n"
"\n"
"quizás pueda estar interesado en unirse al grupo «%(x_name)s».\n"
"Se puede añadir pinchando aquí: %(x_url)s.\n"
"\n"
"Atentamente,\n"
#: modules/websession/lib/websession_templates.py:2074
#, python-format
msgid ""
"If you want to invite new members to join your group, please use the "
"%(x_url_open)sweb message%(x_url_close)s system."
msgstr ""
"Si quiere invitar a nuevos miembros a formar parte de su grupo, use el "
"%(x_url_open)ssistema de mensajería interna%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:2078
#, python-format
msgid "Group: %s"
msgstr "Grupo: %s"
#: modules/websession/lib/websession_templates.py:2161
msgid "Group list"
msgstr "Lista de los grupos"
#: modules/websession/lib/websession_templates.py:2164
msgid "You are not member of any group."
msgstr "Usted no es miembro de ningún grupo."
#: modules/websession/lib/websession_templates.py:2212
msgid "Are you sure you want to delete this group?"
msgstr "¿Está seguro de que quiere borrar este grupo?"
#: modules/websession/lib/websession_templates.py:2252
msgid "Are you sure you want to leave this group?"
msgstr "¿Está seguro de que quiere dejar este grupo?"
#: modules/websession/lib/websession_templates.py:2268
msgid "Visible and open for new members"
msgstr "Visible y abierto a nuevos miembros"
#: modules/websession/lib/websession_templates.py:2270
msgid "Visible but new members need approval"
msgstr "Visible pero los nuevos miembros requieren una aprobación"
#: modules/websession/lib/websession_templates.py:2355
#, python-format
msgid "Group %s: New membership request"
msgstr "Grupo %s: nueva petición de ingreso"
#: modules/websession/lib/websession_templates.py:2359
#, python-format
msgid "A user wants to join the group %s."
msgstr "Un usuario desea unirse al grupo %s."
#: modules/websession/lib/websession_templates.py:2360
#, python-format
msgid ""
"Please %(x_url_open)saccept or reject%(x_url_close)s this user's request."
msgstr ""
"Debería %(x_url_open)saceptar o rechazar%(x_url_close)s la petición de este "
"usuario."
#: modules/websession/lib/websession_templates.py:2377
#, python-format
msgid "Group %s: Join request has been accepted"
msgstr "Grupo %s: la petición de ingreso ha sido aceptada"
#: modules/websession/lib/websession_templates.py:2378
#, python-format
msgid "Your request for joining group %s has been accepted."
msgstr "Su petición de ingreso en el grupo %s ha sido aceptada."
#: modules/websession/lib/websession_templates.py:2380
#, python-format
msgid "Group %s: Join request has been rejected"
msgstr "Grupo %s: la petición de ingreso ha sido rechazada"
#: modules/websession/lib/websession_templates.py:2381
#, python-format
msgid "Your request for joining group %s has been rejected."
msgstr "Su petición de ingreso en el grupo %s ha sido rechazada."
#: modules/websession/lib/websession_templates.py:2384
#: modules/websession/lib/websession_templates.py:2402
#, python-format
msgid "You can consult the list of %(x_url_open)syour groups%(x_url_close)s."
msgstr "Puede consultar la lista de %(x_url_open)ssus grupos%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:2398
#, python-format
msgid "Group %s has been deleted"
msgstr "El grupo %s ha sido suprimido."
#: modules/websession/lib/websession_templates.py:2400
#, python-format
msgid "Group %s has been deleted by its administrator."
msgstr "El grupo %s ha sido suprimido por su administrador."
#: modules/websession/lib/websession_templates.py:2417
#, python-format
msgid ""
"You can consult the list of %(x_url_open)s%(x_nb_total)i groups"
"%(x_url_close)s you are subscribed to (%(x_nb_member)i) or administering "
"(%(x_nb_admin)i)."
msgstr ""
"Puede consultar la lista de los %(x_url_open)s%(x_nb_total)i grupos"
"%(x_url_close)s de los que usted es miembro (%(x_nb_member)i) o administrador"
"(%(x_nb_admin)i)."
#: modules/websession/lib/websession_templates.py:2436
msgid ""
"Warning: The password set for MySQL root user is the same as the default "
"Invenio password. For security purposes, you may want to change the password."
msgstr ""
"Atención: la contraseña que ha establecido para el usuario root de MySQL es "
"la misma que la de lleva Invenio por defecto. Por motivos de seguridad, es "
"preferible cambiarla."
#: modules/websession/lib/websession_templates.py:2442
msgid ""
"Warning: The password set for the Invenio MySQL user is the same as the "
"shipped default. For security purposes, you may want to change the password."
msgstr ""
"Atención: la contraseña que ha establecido para el usuario MySQL de Invenio "
"es la misma que lleva por defecto la aplicación. Por motivos de seguridad, "
"es preferible cambiarla."
#: modules/websession/lib/websession_templates.py:2448
msgid ""
"Warning: The password set for the Invenio admin user is currently empty. For "
"security purposes, it is strongly recommended that you add a password."
msgstr ""
"Atención: la contraseña para el usuario admin de Invenio está vacía. Por "
"motivos de seguridad, es necesario establecer una."
#: modules/websession/lib/websession_templates.py:2454
msgid ""
"Warning: The email address set for support email is currently set to "
"info@invenio-software.org. It is recommended that you change this to your "
"own address."
msgstr ""
"Atención: la dirección configurada para soporte electrónico es info@invenio-"
"software.org. Es conveniente cambiarla a su dirección particular."
#: modules/websession/lib/websession_templates.py:2460
msgid ""
"A newer version of Invenio is available for download. You may want to visit "
msgstr "Puede bajarse una versión más reciente de Invenio. Visite "
#: modules/websession/lib/websession_templates.py:2467
msgid ""
"Cannot download or parse release notes from http://invenio-software.org/repo/"
"invenio/tree/RELEASE-NOTES"
msgstr ""
"No ha sido posible descargar o leer las notas versión desde http://invenio-"
"software.org/repo/invenio/tree/RELEASE-NOTES"
#: modules/websession/lib/webuser.py:149
msgid "Database problem"
msgstr "Problema en la base de datos"
#: modules/websession/lib/webuser.py:299
#: modules/websession/lib/webgroup_dblayer.py:314
msgid "user"
msgstr "usuario"
#: modules/websession/lib/webuser.py:470
#, python-format
msgid "Account registration at %s"
msgstr "Alta de cuenta en %s"
#: modules/websession/lib/webuser.py:714
msgid "New account on"
msgstr "Nueva cuenta en"
#: modules/websession/lib/webuser.py:716
msgid "PLEASE ACTIVATE"
msgstr "ACTÍVELO, POR FAVOR"
#: modules/websession/lib/webuser.py:717
msgid "A new account has been created on"
msgstr "Su cuenta ha sido creada en"
#: modules/websession/lib/webuser.py:719
msgid " and is awaiting activation"
msgstr " y está esperando que se active"
#: modules/websession/lib/webuser.py:721
msgid " Username/Email"
msgstr "Nombre de usuario/Dirección de correo"
#: modules/websession/lib/webuser.py:722
msgid "You can approve or reject this account request at"
msgstr "Puede aprobar o rechazar esta cuenta en"
#: modules/websession/lib/websession_webinterface.py:85
msgid "Mail Cookie Service"
msgstr "Servicio de activación por correo electrónico"
#: modules/websession/lib/websession_webinterface.py:95
msgid "Role authorization request"
msgstr "Petición de rol de autorización"
#: modules/websession/lib/websession_webinterface.py:95
msgid "This request for an authorization has already been authorized."
msgstr "La petición para la autorización ya se ha aprobado."
#: modules/websession/lib/websession_webinterface.py:98
#, python-format
msgid ""
"You have successfully obtained an authorization as %(x_role)s! This "
"authorization will last until %(x_expiration)s and until you close your "
"browser if you are a guest user."
msgstr ""
-"Ya tiene una autoritación válida para ejercer el rol de %(x_role)s. Esta "
+"Ya tiene una autorización válida para ejercer el rol de %(x_role)s. Esta "
"autorización durará hasta %(x_expiration)s y, si usted es usuario invitado, "
"hasta que cierre su navegador."
#: modules/websession/lib/websession_webinterface.py:116
msgid "You have confirmed the validity of your email address!"
-msgstr "¡Ha confirmado la validez de su dirección de correo electónico!"
+msgstr "¡Ha confirmado la validez de su dirección de correo electrónico!"
#: modules/websession/lib/websession_webinterface.py:119
#: modules/websession/lib/websession_webinterface.py:129
msgid "Please, wait for the administrator to enable your account."
msgstr "Por favor, espere a que el administrador le active la cuenta."
#: modules/websession/lib/websession_webinterface.py:123
#: modules/websession/lib/websession_webinterface.py:132
#, python-format
msgid "You can now go to %(x_url_open)syour account page%(x_url_close)s."
msgstr "Ya puede ir a la %(x_url_open)spágina de su cuenta%(x_url_close)s."
#: modules/websession/lib/websession_webinterface.py:124
#: modules/websession/lib/websession_webinterface.py:133
msgid "Email address successfully activated"
msgstr "Dirección electrónica activada correctamente"
#: modules/websession/lib/websession_webinterface.py:127
msgid "You have already confirmed the validity of your email address!"
-msgstr "¡Ya ha confirmado la validez de su dirección de correo electónico!"
+msgstr "¡Ya ha confirmado la validez de su dirección de correo electrónico!"
#: modules/websession/lib/websession_webinterface.py:136
msgid ""
"This request for confirmation of an email address is not valid or is expired."
msgstr ""
"Esta petición de confirmación de la validez de su dirección de correo "
-"electónico no es válida o ha expirado."
+"electrónico no es válida o ha expirado."
#: modules/websession/lib/websession_webinterface.py:141
msgid "This request for an authorization is not valid or is expired."
msgstr "Esta petición de autorización no es válida o ha expirado."
#: modules/websession/lib/websession_webinterface.py:154
msgid "Reset password"
msgstr "Reiniciar la contraseña"
#: modules/websession/lib/websession_webinterface.py:160
msgid "This request for resetting a password has already been used."
msgstr "Esta petición de reiniciar la contraseña ya se ha utilizado."
#: modules/websession/lib/websession_webinterface.py:163
msgid "This request for resetting a password is not valid or is expired."
msgstr "Esta petición de reiniciar una contraseña no es válida o ha expirado."
#: modules/websession/lib/websession_webinterface.py:168
msgid "This request for resetting the password is not valid or is expired."
msgstr "Esta petición de reiniciar la contraseña no es válida o ha expirado."
#: modules/websession/lib/websession_webinterface.py:181
msgid "The two provided passwords aren't equal."
msgstr "Las dos contraseñas no coinciden."
#: modules/websession/lib/websession_webinterface.py:196
msgid "The password was successfully set! You can now proceed with the login."
msgstr "La contraseña se ha definido correctamente. Ya puede identificarse."
#: modules/websession/lib/websession_webinterface.py:281
#, python-format
msgid "%s Personalize, Your Settings"
msgstr "%s Personalizar, sus parámetros"
#: modules/websession/lib/websession_webinterface.py:335
#: modules/websession/lib/websession_webinterface.py:402
#: modules/websession/lib/websession_webinterface.py:464
#: modules/websession/lib/websession_webinterface.py:477
#: modules/websession/lib/websession_webinterface.py:490
msgid "Settings edited"
msgstr "Se han editado los parámetros"
#: modules/websession/lib/websession_webinterface.py:337
#: modules/websession/lib/websession_webinterface.py:401
#: modules/websession/lib/websession_webinterface.py:442
#: modules/websession/lib/websession_webinterface.py:466
#: modules/websession/lib/websession_webinterface.py:479
#: modules/websession/lib/websession_webinterface.py:485
msgid "Show account"
msgstr "Mostrar la cuenta"
#: modules/websession/lib/websession_webinterface.py:341
msgid "Unable to change login method."
msgstr "No se ha podido cambiar el método de identificación."
#: modules/websession/lib/websession_webinterface.py:349
msgid "Switched to internal login method."
msgstr "El método de identificación ha cambiado al interno."
#: modules/websession/lib/websession_webinterface.py:350
msgid ""
"Please note that if this is the first time that you are using this account "
"with the internal login method then the system has set for you a randomly "
"generated password. Please click the following button to obtain a password "
"reset request link sent to you via email:"
msgstr ""
"Tenga en cuenta que si es la primera vez que está usando esta cuenta con el "
"método de identificación interno, el sistema le ha definido una contraseña "
-"aleatoria. Haga clic en el botón siguiente para a que le envíe, via correo "
+"aleatoria. Haga clic en el botón siguiente para a que le envíe, vía correo "
"electrónico, un enlace para reiniciarla:"
#: modules/websession/lib/websession_webinterface.py:358
msgid "Send Password"
msgstr "Enviar contraseña"
#: modules/websession/lib/websession_webinterface.py:366
#, python-format
msgid ""
"Unable to switch to external login method %s, because your email address is "
"unknown."
msgstr ""
"No es posible cambiar al método de autenticación externo %s, porque no "
"consta su dirección de correo electrónico."
#: modules/websession/lib/websession_webinterface.py:370
#, python-format
msgid ""
"Unable to switch to external login method %s, because your email address is "
"unknown to the external login system."
msgstr ""
"No es posible cambiar al método de autenticación externo %s, porque el "
"sistema externo desconoce su dirección de correo electrónico."
#: modules/websession/lib/websession_webinterface.py:374
msgid "Login method successfully selected."
msgstr "Método de identificación seleccionado correctamente."
#: modules/websession/lib/websession_webinterface.py:376
#, python-format
msgid ""
"The external login method %s does not support email address based logins. "
"Please contact the site administrators."
msgstr ""
-"El métode de identificación externo %s no acepta identificaciones basadas en "
-"direcciones de correo electrónico. Póngase en cotacto con los "
+"El método de identificación externo %s no acepta identificaciones basadas en "
+"direcciones de correo electrónico. Póngase en contacto con los "
"administradores de la instalación."
#: modules/websession/lib/websession_webinterface.py:395
msgid "Settings successfully edited."
-msgstr "Se han editado los parámetres correctamente."
+msgstr "Se han editado los parámetros correctamente."
#: modules/websession/lib/websession_webinterface.py:396
#, python-format
msgid ""
"Note that if you have changed your email address, you will have to "
"%(x_url_open)sreset your password%(x_url_close)s anew."
msgstr ""
"Si ha cambiado su dirección, tendrá que %(x_url_open)svolver a poner su "
-"contraseña%(x_url_close)s otre vez."
+"contraseña%(x_url_close)s otra vez."
#: modules/websession/lib/websession_webinterface.py:404
#: modules/websession/lib/websession_webinterface.py:912
#, python-format
msgid "Desired nickname %s is invalid."
msgstr "El alias %s no es válido."
#: modules/websession/lib/websession_webinterface.py:405
#: modules/websession/lib/websession_webinterface.py:411
#: modules/websession/lib/websession_webinterface.py:424
#: modules/websession/lib/websession_webinterface.py:446
#: modules/websession/lib/websession_webinterface.py:452
#: modules/websession/lib/websession_webinterface.py:903
#: modules/websession/lib/websession_webinterface.py:908
#: modules/websession/lib/websession_webinterface.py:913
#: modules/websession/lib/websession_webinterface.py:924
msgid "Please try again."
msgstr "Vuélvalo a intentar."
#: modules/websession/lib/websession_webinterface.py:407
#: modules/websession/lib/websession_webinterface.py:413
#: modules/websession/lib/websession_webinterface.py:420
#: modules/websession/lib/websession_webinterface.py:426
#: modules/websession/lib/websession_webinterface.py:448
#: modules/websession/lib/websession_webinterface.py:454
#: modules/websession/lib/websession_webinterface.py:501
msgid "Edit settings"
msgstr "Editar parámetros"
#: modules/websession/lib/websession_webinterface.py:408
#: modules/websession/lib/websession_webinterface.py:414
#: modules/websession/lib/websession_webinterface.py:421
#: modules/websession/lib/websession_webinterface.py:427
#: modules/websession/lib/websession_webinterface.py:503
msgid "Editing settings failed"
msgstr "Ha fallado la edición de parámetros"
#: modules/websession/lib/websession_webinterface.py:410
#: modules/websession/lib/websession_webinterface.py:907
#, python-format
msgid "Supplied email address %s is invalid."
msgstr "La dirección electrónica facilitada %s no es válida."
#: modules/websession/lib/websession_webinterface.py:416
#: modules/websession/lib/websession_webinterface.py:917
#, python-format
msgid "Supplied email address %s already exists in the database."
msgstr "La dirección electrónica facilitada %s ya existe en la base de datos."
#: modules/websession/lib/websession_webinterface.py:418
#: modules/websession/lib/websession_webinterface.py:919
msgid "Or please try again."
msgstr "O vuélvalo a intentar."
#: modules/websession/lib/websession_webinterface.py:423
#, python-format
msgid "Desired nickname %s is already in use."
msgstr "El alias solicitado %s ya está en uso."
#: modules/websession/lib/websession_webinterface.py:432
msgid "Users cannot edit passwords on this site."
msgstr "En este sitio, los usuarios no pueden editar sus contraseñas."
#: modules/websession/lib/websession_webinterface.py:440
msgid "Password successfully edited."
msgstr "Se ha editado la contraseña correctamente."
#: modules/websession/lib/websession_webinterface.py:443
msgid "Password edited"
msgstr "Contraseña modificada"
#: modules/websession/lib/websession_webinterface.py:445
#: modules/websession/lib/websession_webinterface.py:902
msgid "Both passwords must match."
msgstr "Ambas contraseñas deben coincidir."
#: modules/websession/lib/websession_webinterface.py:449
#: modules/websession/lib/websession_webinterface.py:455
msgid "Editing password failed"
msgstr "Ha fallado el cambio de contraseña"
#: modules/websession/lib/websession_webinterface.py:451
msgid "Wrong old password inserted."
msgstr "Contraseña anterior incorrecta."
#: modules/websession/lib/websession_webinterface.py:467
#: modules/websession/lib/websession_webinterface.py:480
#: modules/websession/lib/websession_webinterface.py:494
msgid "User settings saved correctly."
msgstr "Se han guardado correctamente los parámetros de usuario."
#: modules/websession/lib/websession_webinterface.py:487
msgid "Editing bibcatalog authorization failed"
msgstr "Ha fallado la edición de la autorización de bibcatalog"
#: modules/websession/lib/websession_webinterface.py:488
msgid "Empty username or password"
msgstr "La identificación o la contraseña están vacíos"
#: modules/websession/lib/websession_webinterface.py:497
msgid "Unable to update settings."
msgstr "No ha sido posible actualizar los parámetros."
#: modules/websession/lib/websession_webinterface.py:558
msgid ""
"Cannot send password reset request since you are using external "
"authentication system."
msgstr ""
"No se puede enviar la petición de reinicialización de contraseña ya que "
"usted está usando un sistema de autenticación externo."
#: modules/websession/lib/websession_webinterface.py:574
msgid "The entered email address does not exist in the database."
msgstr "La dirección electrónica facilitada no existe en la base de datos."
#: modules/websession/lib/websession_webinterface.py:588
msgid "Password reset request for"
msgstr "Petición de reiniciar la contraseña de"
#: modules/websession/lib/websession_webinterface.py:592
msgid ""
"The entered email address is incorrect, please check that it is written "
"correctly (e.g. johndoe@example.com)."
msgstr ""
"La dirección de correo electrónico facilitada es incorrecta. Compruebe que "
-"está correctamente escrita (por ej., sin.verguenza@ejemplo.es)."
+"está correctamente escrita (por ej., danielgarcia@ejemplo.es)."
#: modules/websession/lib/websession_webinterface.py:593
msgid "Incorrect email address"
msgstr "Dirección de correo electrónico incorrecta"
#: modules/websession/lib/websession_webinterface.py:603
msgid "Reset password link sent"
msgstr "Se ha enviado el enlace para reiniciar la contraseña"
#: modules/websession/lib/websession_webinterface.py:648
msgid "Delete Account"
msgstr "Suprimir cuenta"
#: modules/websession/lib/websession_webinterface.py:674
msgid "Logout"
msgstr "Salir"
#: modules/websession/lib/websession_webinterface.py:747
#: modules/websession/lib/websession_webinterface.py:803
#: modules/websession/lib/websession_webinterface.py:838
msgid "Login"
msgstr "Identificación"
#: modules/websession/lib/websession_webinterface.py:869
msgid "Register"
msgstr "Darse de alta"
#: modules/websession/lib/websession_webinterface.py:872
#: modules/websession/lib/websession_webinterface.py:944
#, python-format
msgid "%s Personalize, Main page"
msgstr "%s Personalizar, página principal"
#: modules/websession/lib/websession_webinterface.py:889
msgid "Your account has been successfully created."
msgstr "Su cuenta ha sido creada correctamente."
#: modules/websession/lib/websession_webinterface.py:890
msgid "Account created"
msgstr "Cuenta creada"
#: modules/websession/lib/websession_webinterface.py:892
msgid ""
"In order to confirm its validity, an email message containing an account "
"activation key has been sent to the given email address."
msgstr ""
-"Para confirmar su validez, se ha enviado un mensage a esta dirección que "
+"Para confirmar su validez, se ha enviado un mensaje a esta dirección que "
"tiene una clave de activación de la cuenta."
#: modules/websession/lib/websession_webinterface.py:893
msgid ""
"Please follow instructions presented there in order to complete the account "
"registration process."
msgstr ""
"Siga las instrucciones indicadas para completar el proceso de alta de la "
"cuenta."
#: modules/websession/lib/websession_webinterface.py:895
msgid ""
"A second email will be sent when the account has been activated and can be "
"used."
msgstr ""
"Se enviará un segundo mensaje cuando la cuenta se haya activado y pueda "
"usarse."
#: modules/websession/lib/websession_webinterface.py:898
#, python-format
msgid "You can now access your %(x_url_open)saccount%(x_url_close)s."
msgstr "Ya puede acceder a su %(x_url_open)scuenta%(x_url_close)s."
#: modules/websession/lib/websession_webinterface.py:905
#: modules/websession/lib/websession_webinterface.py:910
#: modules/websession/lib/websession_webinterface.py:915
#: modules/websession/lib/websession_webinterface.py:921
#: modules/websession/lib/websession_webinterface.py:926
#: modules/websession/lib/websession_webinterface.py:930
#: modules/websession/lib/websession_webinterface.py:934
#: modules/websession/lib/websession_webinterface.py:939
msgid "Registration failure"
msgstr "Ha fallado el alta"
#: modules/websession/lib/websession_webinterface.py:923
#, python-format
msgid "Desired nickname %s already exists in the database."
msgstr "El alias solicitado %s ya existe en la base de datos."
#: modules/websession/lib/websession_webinterface.py:928
msgid "Users cannot register themselves, only admin can register them."
msgstr ""
"Los usuarios no pueden darse de alta ellos mismos; sólo lo puede hacer el "
"administrador."
#: modules/websession/lib/websession_webinterface.py:932
msgid ""
"The site is having troubles in sending you an email for confirming your "
"email address."
msgstr ""
-"Tenemos problemas para enviarle un correo de confirmació de su dirección."
+"Tenemos problemas para enviarle un correo de confirmación de su dirección."
#: modules/websession/lib/websession_webinterface.py:932
#: modules/websubmit/lib/websubmit_webinterface.py:151
msgid ""
"The error has been logged and will be taken in consideration as soon as "
"possible."
msgstr ""
"El error ha sido anotado y será tenido en cuenta tan pronto como sea posible."
#: modules/websession/lib/websession_webinterface.py:979
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:729
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:733
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:765
msgid "Your tickets"
msgstr "Sus tareas"
#: modules/websession/lib/websession_webinterface.py:1015
#: modules/websession/lib/websession_webinterface.py:1056
#: modules/websession/lib/websession_webinterface.py:1115
#: modules/websession/lib/websession_webinterface.py:1177
#: modules/websession/lib/websession_webinterface.py:1234
#: modules/websession/lib/websession_webinterface.py:1305
msgid "You are not authorized to use groups."
msgstr "No está autorizado a usar grupos."
#: modules/websession/lib/websession_webinterface.py:1140
msgid "Join New Group"
msgstr "Unirse a un grupo nuevo"
#: modules/websession/lib/websession_webinterface.py:1192
msgid "Leave Group"
msgstr "Dejar el grupo"
#: modules/websession/lib/websession_webinterface.py:1262
msgid "Edit Group"
msgstr "Editar el grupo"
#: modules/websession/lib/websession_webinterface.py:1334
msgid "Edit group members"
msgstr "Editar los miembros del grupo"
#: modules/websession/lib/webgroup.py:158
#: modules/websession/lib/webgroup.py:432
msgid "Please enter a group name."
msgstr "Introduzca un nombre grupo."
#: modules/websession/lib/webgroup.py:168
#: modules/websession/lib/webgroup.py:442
msgid "Please enter a valid group name."
msgstr "Introduzca un nombre de grupo válido."
#: modules/websession/lib/webgroup.py:178
#: modules/websession/lib/webgroup.py:452
msgid "Please choose a group join policy."
msgstr "Escoja una política de admisión al grupo."
#: modules/websession/lib/webgroup.py:188
#: modules/websession/lib/webgroup.py:462
msgid "Group name already exists. Please choose another group name."
msgstr "Ya existe este nombre de grupo. Escoja otro, por favor."
#: modules/websession/lib/webgroup.py:260
msgid "You are already member of the group."
msgstr "Usted ya es miembro del grupo."
#: modules/websession/lib/webgroup.py:302
msgid "Please select only one group."
msgstr "Seleccione solo un grupo:"
#: modules/websession/lib/webgroup.py:359
msgid "Please select one group."
msgstr "Seleccione un grupo."
#: modules/websession/lib/webgroup.py:384
#: modules/websession/lib/webgroup.py:399
#: modules/websession/lib/webgroup.py:510
#: modules/websession/lib/webgroup.py:555
#: modules/websession/lib/webgroup.py:570
#: modules/websession/lib/webgroup.py:604
#: modules/websession/lib/webgroup.py:644
#: modules/websession/lib/webgroup.py:711
msgid "Sorry, there was an error with the database."
msgstr "Por desgracia ha habido un error en la base de datos."
#: modules/websession/lib/webgroup.py:391
#: modules/websession/lib/webgroup.py:562
msgid "Sorry, you do not have sufficient rights on this group."
msgstr "No tiene suficientes permisos en este grupo."
#: modules/websession/lib/webgroup.py:499
msgid "The group has already been deleted."
msgstr "El grupo ya ha sido suprimido."
#: modules/websession/lib/webgroup.py:611
msgid "Please choose a member if you want to remove him from the group."
msgstr "Escoja el miembro que desee borrar del grupo."
#: modules/websession/lib/webgroup.py:651
msgid ""
"Please choose a user from the list if you want him to be added to the group."
msgstr "Escoja una persona de la lista para añadirla al grupo."
#: modules/websession/lib/webgroup.py:664
msgid "The user is already member of the group."
msgstr "El usuario ya es miembro del grupo."
#: modules/websession/lib/webgroup.py:718
msgid ""
"Please choose a user from the list if you want him to be removed from "
"waiting list."
msgstr ""
"Escoja el miembro de la lista que desee eliminar de la lista de espera."
#: modules/websession/lib/webgroup.py:731
msgid "The user request for joining group has already been rejected."
msgstr "La petición de ingreso en el grupo ya ha sido rechazada."
#: modules/webstyle/lib/webstyle_templates.py:86
#: modules/webstyle/lib/webstyle_templates.py:95
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
msgid "Home"
msgstr "Página principal"
#: modules/webstyle/lib/webstyle_templates.py:434
#: modules/webstyle/lib/webstyle_templates.py:503
#: modules/websubmit/lib/websubmit_engine.py:697
#: modules/websubmit/lib/websubmit_engine.py:1142
#: modules/websubmit/lib/websubmit_engine.py:1188
#: modules/websubmit/lib/websubmit_engine.py:1421
msgid "Submit"
msgstr "Enviar"
#: modules/webstyle/lib/webstyle_templates.py:436
#: modules/webstyle/lib/webstyle_templates.py:505
#: modules/bibcirculation/lib/bibcirculation_templates.py:215
msgid "Help"
msgstr "Ayuda"
#: modules/webstyle/lib/webstyle_templates.py:469
msgid "Last updated"
msgstr "Última actualización"
#: modules/webstyle/lib/webstyle_templates.py:507
msgid "Powered by"
msgstr "Powered by"
#: modules/webstyle/lib/webstyle_templates.py:508
msgid "Maintained by"
msgstr "Mantenido por"
#: modules/webstyle/lib/webstyle_templates.py:554
msgid "This site is also available in the following languages:"
msgstr "Este sitio también está disponible en los siguientes idiomas:"
#: modules/webstyle/lib/webstyle_templates.py:587
msgid "Browser"
msgstr "Navegador"
#: modules/webstyle/lib/webstyle_templates.py:609
msgid "System Error"
msgstr "Error de sistema"
#: modules/webstyle/lib/webstyle_templates.py:624
msgid "Traceback"
msgstr "Traceback"
#: modules/webstyle/lib/webstyle_templates.py:671
msgid "Client"
msgstr "Cliente"
#: modules/webstyle/lib/webstyle_templates.py:673
msgid "Please send an error report to the administrator."
msgstr "Por favor, envíe el informe de error al administrador."
#: modules/webstyle/lib/webstyle_templates.py:674
msgid "Send error report"
msgstr "Enviar el informe de error"
#: modules/webstyle/lib/webstyle_templates.py:678
#, python-format
msgid "Please contact %s quoting the following information:"
msgstr "Por favor contacte con %s indicando la siguiente información:"
#: modules/webstyle/lib/webstyle_templates.py:728
#: modules/websubmit/lib/websubmit_templates.py:1231
msgid "Restricted"
msgstr "Restringido"
#: modules/webstyle/lib/webstyle_templates.py:848
#, python-format
msgid ""
"Record created %(x_date_creation)s, last modified %(x_date_modification)s"
msgstr ""
"Registro creado el %(x_date_creation)s, última modificación el "
"%(x_date_modification)s"
#: modules/webstyle/lib/webstyle_templates.py:919
msgid "The server encountered an error while dealing with your request."
msgstr "El sistema ha encontrado un error mientras gestionaba su petición."
#: modules/webstyle/lib/webstyle_templates.py:920
msgid "The system administrators have been alerted."
msgstr "Los administradores del sistema han sido avisados."
#: modules/webstyle/lib/webstyle_templates.py:921
#, python-format
msgid "In case of doubt, please contact %(x_admin_email)s."
msgstr "En caso de duda, póngase en contacto con %(x_admin_email)s"
#: modules/webstyle/lib/webdoc.py:550
#, python-format
msgid "%(category)s Pages"
msgstr "Páginas de %(category)s"
#: modules/webstyle/lib/webdoc_webinterface.py:144
#: modules/webstyle/lib/webdoc_webinterface.py:149
msgid "Admin Pages"
msgstr "Páginas de administración"
#: modules/webstyle/lib/webdoc_webinterface.py:146
#: modules/webstyle/lib/webdoc_webinterface.py:150
msgid "Help Pages"
msgstr "Páginas de ayuda"
#: modules/webstyle/lib/webdoc_webinterface.py:148
#: modules/webstyle/lib/webdoc_webinterface.py:151
msgid "Hacking Pages"
msgstr "Páginas para los desarrolladores"
#: modules/webstyle/lib/webdoc_webinterface.py:157
msgid "Hacking Invenio"
msgstr "Desarrollo de Invenio"
#: modules/webstyle/lib/webdoc_webinterface.py:159
msgid "Latest modifications:"
msgstr "Últimas modificaciones:"
#: modules/webstyle/lib/webdoc_webinterface.py:162
#, python-format
msgid "This is the table of contents of the %(x_category)s pages."
msgstr "Esta es la tabla de contenidos de las páginas de %(x_category)s."
#: modules/webstyle/lib/webdoc_webinterface.py:164
msgid "See also"
msgstr "Véase también"
#: modules/webstyle/lib/webdoc_webinterface.py:179
#, python-format
msgid "Page %s Not Found"
msgstr "No se ha encontrado la página %s"
#: modules/webstyle/lib/webdoc_webinterface.py:187
#, python-format
msgid "Sorry, page %s does not seem to exist."
msgstr "Parece ser que la página %s no existe."
#: modules/webstyle/lib/webdoc_webinterface.py:190
#, python-format
msgid ""
"You may want to look at the %(x_url_open)s%(x_category)s pages"
"%(x_url_close)s."
msgstr ""
"Le puede interesar consultar las %(x_url_open)spágines %(x_category)s"
"%(x_url_close)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:383
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:443
msgid "Choose a file"
msgstr "Escoja el fichero"
#: modules/websubmit/lib/websubmit_managedocfiles.py:391
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:459
msgid "Access"
msgstr "Acceso"
#: modules/websubmit/lib/websubmit_managedocfiles.py:543
msgid ""
"The file you want to edit is protected against modifications. Your action "
"has not been applied"
msgstr ""
"El fichero que quiere editar está protegido contra modificaciones. Vuestra "
"acción no se ha realizado"
#: modules/websubmit/lib/websubmit_managedocfiles.py:562
#, python-format
msgid ""
"The uploaded file is too small (<%i o) and has therefore not been considered"
msgstr ""
"El archivo que ha subido es demasiado pequeño (<%i o) y por tanto no se ha "
"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:567
#, python-format
msgid ""
"The uploaded file is too big (>%i o) and has therefore not been considered"
msgstr ""
"El archivo que ha subido es demasiado grande (>%i o) y por tanto no se ha "
"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:574
msgid ""
"The uploaded file name is too long and has therefore not been considered"
msgstr ""
"El nombre del archivo que ha subido es demasiado largo y por tanto no se ha "
"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:586
msgid ""
"You have already reached the maximum number of files for this type of "
"document"
msgstr "Ya ha llegado al máximo número de archivos para este tipo de documento"
#: modules/websubmit/lib/websubmit_managedocfiles.py:609
#: modules/websubmit/lib/websubmit_managedocfiles.py:620
#: modules/websubmit/lib/websubmit_managedocfiles.py:730
#, python-format
msgid "A file named %s already exists. Please choose another name."
msgstr "Ya existe un fichero con el nombre %s. Escoja otro, por favor."
#: modules/websubmit/lib/websubmit_managedocfiles.py:631
#, python-format
msgid "A file with format '%s' already exists. Please upload another format."
msgstr "Ya existe un archivo con el formato '%s'. Súbalo en otro formato."
#: modules/websubmit/lib/websubmit_managedocfiles.py:639
msgid ""
"You are not allowed to use dot '.', slash '/', or backslash '\\\\' in file "
"names. Choose a different name and upload your file again. In particular, "
"note that you should not include the extension in the renaming field."
msgstr ""
"No se puede usar el punto '.', barra '/', o barra inversa '\\\\\\\\' en los "
"nombres de ficheros. Escoja un nombre diferente o vuelva a subir su "
"fichero. En particular, tenga en cuenta de no incluir la extensión en el "
"nuevo nombre."
#: modules/websubmit/lib/websubmit_managedocfiles.py:811
msgid "Choose how you want to restrict access to this file."
msgstr "Escoja cómo desea restringir el acceso a este archivo"
#: modules/websubmit/lib/websubmit_managedocfiles.py:848
msgid "Add new file"
msgstr "Añadir otro fichero"
#: modules/websubmit/lib/websubmit_managedocfiles.py:873
msgid "You can decide to hide or not previous version(s) of this file."
msgstr "Puede decidir esconder o no versiones anteriores de este archivo."
#: modules/websubmit/lib/websubmit_managedocfiles.py:874
msgid ""
"When you revise a file, the additional formats that you might have "
"previously uploaded are removed, since they no longer up-to-date with the "
"new file."
msgstr ""
"Cuando revise un archivo, los formatos adicionales que haya subido "
"previamente serán borrados, ya que no estarían sincronizados con el nuevo "
"archivo."
#: modules/websubmit/lib/websubmit_managedocfiles.py:875
msgid ""
"Alternative formats uploaded for current version of this file will be removed"
msgstr ""
"Los formatos alternativos para la versión actual de este archivo serán "
"borrados"
#: modules/websubmit/lib/websubmit_managedocfiles.py:876
msgid "Keep previous versions"
msgstr "Guardar las versiones anteriores"
#: modules/websubmit/lib/websubmit_managedocfiles.py:878
#: modules/bibknowledge/lib/bibknowledge_templates.py:207
msgid "Upload"
msgstr "Subir"
#: modules/websubmit/lib/websubmit_managedocfiles.py:890
#: modules/websubmit/lib/websubmit_webinterface.py:481
#: modules/bibedit/lib/bibeditmulti_templates.py:321
msgid "Apply changes"
msgstr "Guardar cambios"
#: modules/websubmit/lib/websubmit_managedocfiles.py:895
#, python-format
msgid "Need help revising or adding files to record %(recid)s"
msgstr "Necesito ayuda para revisar o añadir ficheros al registro %(recid)s"
#: modules/websubmit/lib/websubmit_managedocfiles.py:897
#, python-format
msgid ""
"Dear Support,\n"
"I would need help to revise or add a file to record %(recid)s.\n"
"I have attached the new version to this email.\n"
"Best regards"
msgstr ""
"Estimado soporte,\n"
"necesito su ayuda para revisar o añadir ficheros al registro %(recid)s.\n"
"He añadido la nueva versión en este mensaje.\n"
"Cordialmente,"
#: modules/websubmit/lib/websubmit_managedocfiles.py:902
#, python-format
msgid ""
"Having a problem revising a file? Send the revised version to "
"%(mailto_link)s."
msgstr ""
"¿Tiene problemas al revisar un archivo? Envíe la versión revisada a "
"%(mailto_link)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:905
#, python-format
msgid ""
"Having a problem adding or revising a file? Send the new/revised version to "
"%(mailto_link)s."
msgstr ""
"¿Tiene problemas al añadir o revisar un fichero? Envíe la versión nueva o "
"revisada a %(mailto_link)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:1029
msgid "revise"
msgstr "revisar"
#: modules/websubmit/lib/websubmit_managedocfiles.py:1073
msgid "add format"
msgstr "añadir formato"
#: modules/websubmit/lib/functions/Shared_Functions.py:176
msgid ""
"Note that your submission as been inserted into the bibliographic task queue "
"and is waiting for execution.\n"
msgstr ""
"Su contribución ha sido enviada a la cola de tareas bibliográficas y está "
"esperando su ejecución.\n"
#: modules/websubmit/lib/functions/Shared_Functions.py:179
#, python-format
msgid ""
"The task queue is currently running in automatic mode, and there are "
"currently %s tasks waiting to be executed. Your record should be available "
"within a few minutes and searchable within an hour or thereabouts.\n"
msgstr ""
-"La cola de tareas se está ejecutando automáticament, y en estos momentos hay "
+"La cola de tareas se está ejecutando automáticamente, y en estos momentos hay "
"%s tareas esperando su ejecución. Su registro estará disponible dentro de "
"unos minutos, y será buscable en una hora, aproximadamente.\n"
#: modules/websubmit/lib/functions/Shared_Functions.py:181
msgid ""
"Because of a human intervention or a temporary problem, the task queue is "
"currently set to the manual mode. Your submission is well registered but may "
"take longer than usual before it is fully integrated and searchable.\n"
msgstr ""
"Debido a una intervención humana o a un problema temporal, la cola de tareas "
"está en modo manual. Su contribución ha quedado registrada, pero puede "
"tardar más de lo habitual hasta que esté totalmente integrada y buscable.\n"
#: modules/websubmit/lib/websubmit_engine.py:179
#: modules/websubmit/lib/websubmit_engine.py:797
#: modules/websubmit/web/yoursubmissions.py:61
msgid "Sorry, you must log in to perform this action."
msgstr "Tiene que identificarse para ejecutar esta acción."
#: modules/websubmit/lib/websubmit_engine.py:186
#: modules/websubmit/lib/websubmit_engine.py:804
msgid "Not enough information to go ahead with the submission."
msgstr "Falta información para continuar con el envío."
#: modules/websubmit/lib/websubmit_engine.py:192
#: modules/websubmit/lib/websubmit_engine.py:273
#: modules/websubmit/lib/websubmit_engine.py:283
#: modules/websubmit/lib/websubmit_engine.py:360
#: modules/websubmit/lib/websubmit_engine.py:401
#: modules/websubmit/lib/websubmit_engine.py:818
#: modules/websubmit/lib/websubmit_engine.py:856
#: modules/websubmit/lib/websubmit_engine.py:919
#: modules/websubmit/lib/websubmit_engine.py:960
msgid "Invalid parameters"
msgstr "Parámetros no válidos"
#: modules/websubmit/lib/websubmit_engine.py:198
#: modules/websubmit/lib/websubmit_engine.py:810
msgid "Invalid doctype and act parameters"
msgstr "Parámetros para doctype y act no válidos"
#: modules/websubmit/lib/websubmit_engine.py:229
#: modules/websubmit/lib/websubmit_engine.py:845
#, python-format
msgid "Unable to find the submission directory for the action: %s"
msgstr "No se puede encontrar el directorio de envíos para la acción: %s"
#: modules/websubmit/lib/websubmit_engine.py:238
#: modules/websubmit/lib/websubmit_engine.py:1003
msgid "Unknown document type"
msgstr "Tipo de documento desconocido"
#: modules/websubmit/lib/websubmit_engine.py:244
#: modules/websubmit/lib/websubmit_engine.py:1009
msgid "Unknown action"
msgstr "Acción desconocida"
#: modules/websubmit/lib/websubmit_engine.py:252
#: modules/websubmit/lib/websubmit_engine.py:1016
msgid "Unable to determine the number of submission pages."
msgstr "No ha sido posible determinar el número de páginas del envío."
#: modules/websubmit/lib/websubmit_engine.py:293
#: modules/websubmit/lib/websubmit_engine.py:864
msgid ""
"Unable to create a directory for this submission. The administrator has been "
"alerted."
msgstr ""
"No ha sido posible crear el directorio para este envío. Se ha avisado al "
"administrador."
#: modules/websubmit/lib/websubmit_engine.py:407
#: modules/websubmit/lib/websubmit_engine.py:967
msgid "Cannot create submission directory. The administrator has been alerted."
msgstr ""
"No ha sido posible crear el directorio para los envíos. Se ha avisado al "
"administrador."
#: modules/websubmit/lib/websubmit_engine.py:429
#: modules/websubmit/lib/websubmit_engine.py:989
msgid "No file uploaded?"
msgstr "¿No ha subido ningún archivo?"
#: modules/websubmit/lib/websubmit_engine.py:470
#: modules/websubmit/lib/websubmit_engine.py:473
#: modules/websubmit/lib/websubmit_engine.py:606
#: modules/websubmit/lib/websubmit_engine.py:609
msgid "Unknown form field found on submission page."
msgstr "Campo desconocido en el formulario de envíos."
#: modules/websubmit/lib/websubmit_engine.py:1055
msgid ""
"A serious function-error has been encountered. Adminstrators have been "
"alerted. <br /><em>Please not that this might be due to wrong characters "
"inserted into the form</em> (e.g. by copy and pasting some text from a PDF "
"file)."
msgstr ""
-"Se ha encontrado un error de funcionament serio. Se ha avisado a los "
-"administradores. <br /><em>Segurament la causa sea que se hayan insertado "
-"caracteres erróneos en el formulario</em> (p. ej., copiando y pegando algun "
+"Se ha encontrado un error de funcionamiento serio. Se ha avisado a los "
+"administradores. <br /><em>Seguramente la causa sea que se hayan insertado "
+"caracteres erróneos en el formulario</em> (p. ej., copiando y pegando algún "
"texto de un archivo PDF)."
#: modules/websubmit/lib/websubmit_engine.py:1384
#, python-format
msgid "Unable to find document type: %s"
msgstr "Imposible encontrar el tipo de documento: %s"
#: modules/websubmit/lib/websubmit_engine.py:1670
msgid "The chosen action is not supported by the document type."
msgstr ""
"La acción que ha escogido no está soportada para este tipo de documento."
#: modules/websubmit/lib/websubmit_engine.py:1747
#: modules/websubmit/lib/websubmit_webinterface.py:1377
#: modules/websubmit/web/approve.py:81
msgid "Warning"
msgstr "Atención"
#: modules/websubmit/lib/websubmit_templates.py:85
msgid "Document types available for submission"
msgstr "Tipos de documentos disponibles para realizar envíos"
#: modules/websubmit/lib/websubmit_templates.py:86
msgid "Please select the type of document you want to submit"
msgstr "Seleccione el tipo de documento que quiere enviar."
#: modules/websubmit/lib/websubmit_templates.py:103
msgid "No document types available."
msgstr "No hay tipos de documentos disponibles."
#: modules/websubmit/lib/websubmit_templates.py:269
msgid "Please log in first."
msgstr "Primero hace falta que se identifique."
#: modules/websubmit/lib/websubmit_templates.py:269
msgid "Use the top-right menu to log in."
msgstr "Use el menú superior derecho para entrar."
#: modules/websubmit/lib/websubmit_templates.py:313
msgid "Please select a category"
msgstr "Seleccione una categoría"
#: modules/websubmit/lib/websubmit_templates.py:352
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2114
msgid "Notice"
msgstr "Atención"
#: modules/websubmit/lib/websubmit_templates.py:353
msgid "Select a category and then click on an action button."
msgstr "Seleccione una categoría y después seleccione una acción."
#: modules/websubmit/lib/websubmit_templates.py:376
msgid ""
"To continue with a previously interrupted submission, enter an access number "
"into the box below:"
msgstr ""
"Para continuar un envío interrumpido, introduzca el número de acceso en la "
"siguiente celda:"
#: modules/websubmit/lib/websubmit_templates.py:378
msgid "GO"
msgstr "ADELANTE"
#: modules/websubmit/lib/websubmit_templates.py:499
#: modules/websubmit/lib/websubmit_templates.py:968
msgid "SUMMARY"
msgstr "RESUMEN"
#: modules/websubmit/lib/websubmit_templates.py:535
#: modules/bibharvest/lib/oai_harvest_admin.py:858
#: modules/bibharvest/lib/oai_harvest_admin.py:911
msgid "Previous page"
msgstr "Página anterior"
#: modules/websubmit/lib/websubmit_templates.py:541
msgid "Submission number"
msgstr "Número de envío"
#: modules/websubmit/lib/websubmit_templates.py:555
#: modules/bibharvest/lib/oai_harvest_admin.py:843
#: modules/bibharvest/lib/oai_harvest_admin.py:899
msgid "Next page"
msgstr "Página siguiente"
#: modules/websubmit/lib/websubmit_templates.py:570
#: modules/websubmit/lib/websubmit_templates.py:1009
msgid "Are you sure you want to quit this submission?"
msgstr "¿Está seguro de que desea abandonar este envío?"
#: modules/websubmit/lib/websubmit_templates.py:572
#: modules/websubmit/lib/websubmit_templates.py:1010
#: modules/websubmit/lib/websubmit_templates.py:1019
msgid "Back to main menu"
msgstr "Volver al menú principal"
#: modules/websubmit/lib/websubmit_templates.py:575
msgid ""
"This is your submission access number. It can be used to continue with an "
"interrupted submission in case of problems."
msgstr ""
"Este es su número de acceso de envío. Lo puede utilizar para continuar un "
"envío interrumpido en caso de haber problemas."
#: modules/websubmit/lib/websubmit_templates.py:576
msgid "Mandatory fields appear in red in the SUMMARY window."
msgstr "Los campos obligatorios aparecen en rojo en el recuadro RESUMEN."
#: modules/websubmit/lib/websubmit_templates.py:722
#, python-format
msgid "The field %s is mandatory."
msgstr "El campo %s es obligatorio."
#: modules/websubmit/lib/websubmit_templates.py:722
msgid "Please make a choice in the select box"
msgstr "Por favor escoja uno en el desplegable"
#: modules/websubmit/lib/websubmit_templates.py:736
msgid "Please press a button."
msgstr "Pulse un botón."
#: modules/websubmit/lib/websubmit_templates.py:744
#, python-format
msgid "The field %s is mandatory. Please fill it in."
msgstr "El campo %s es obligatorio. Rellénelo, por favor."
#: modules/websubmit/lib/websubmit_templates.py:821
#, python-format
msgid "The field %(field)s is mandatory."
msgstr "El campo %(field)s es obligatorio."
#: modules/websubmit/lib/websubmit_templates.py:822
msgid "Going back to page"
msgstr "Volver a la página"
#: modules/websubmit/lib/websubmit_templates.py:959
msgid "finished!"
msgstr "¡finalizado!"
#: modules/websubmit/lib/websubmit_templates.py:967
msgid "end of action"
msgstr "final de la acción"
#: modules/websubmit/lib/websubmit_templates.py:991
msgid "Submission no"
msgstr "Envío nº"
#: modules/websubmit/lib/websubmit_templates.py:1062
#, python-format
msgid ""
"Here is the %(x_action)s function list for %(x_doctype)s documents at level "
"%(x_step)s"
msgstr ""
"Lista de funciones %(x_action)s para los documentos de tipo %(x_doctype)s en "
"el nivel %(x_step)s"
#: modules/websubmit/lib/websubmit_templates.py:1067
msgid "Function"
msgstr "Función"
#: modules/websubmit/lib/websubmit_templates.py:1068
msgid "Score"
msgstr "Puntuación"
#: modules/websubmit/lib/websubmit_templates.py:1069
msgid "Running function"
msgstr "Función en ejecución"
#: modules/websubmit/lib/websubmit_templates.py:1075
#, python-format
msgid "Function %s does not exist."
msgstr "La función %s no existe."
#: modules/websubmit/lib/websubmit_templates.py:1114
msgid "You must now"
msgstr "Ahora usted debe"
#: modules/websubmit/lib/websubmit_templates.py:1146
msgid "record"
msgstr "registro"
#: modules/websubmit/lib/websubmit_templates.py:1148
msgid "document"
msgstr "documento"
#: modules/websubmit/lib/websubmit_templates.py:1150
#: modules/websubmit/lib/websubmit_templates.py:1253
msgid "version"
msgstr "versión"
#: modules/websubmit/lib/websubmit_templates.py:1185
msgid "file(s)"
msgstr "archivo(s)"
#: modules/websubmit/lib/websubmit_templates.py:1236
msgid "see"
msgstr "véase"
#: modules/websubmit/lib/websubmit_templates.py:1436
#: modules/bibauthorid/lib/bibauthorid_templates.py:1011
msgid "For"
msgstr "Para"
#: modules/websubmit/lib/websubmit_templates.py:1437
msgid "all types of document"
msgstr "todos los tipos de documento"
#: modules/websubmit/lib/websubmit_templates.py:1492
msgid "Subm.No."
msgstr "Envío"
#: modules/websubmit/lib/websubmit_templates.py:1493
msgid "Reference"
msgstr "Referencia"
#: modules/websubmit/lib/websubmit_templates.py:1495
msgid "First access"
msgstr "Primer acceso"
#: modules/websubmit/lib/websubmit_templates.py:1496
msgid "Last access"
msgstr "Último acceso"
#: modules/websubmit/lib/websubmit_templates.py:1506
msgid "Are you sure you want to delete this submission?"
msgstr "¿Está seguro de que desea suprimir su envío?"
#: modules/websubmit/lib/websubmit_templates.py:1507
#, python-format
msgid "Delete submission %(x_id)s in %(x_docname)s"
msgstr "Suprimir el envío %(x_id)s en %(x_docname)s"
#: modules/websubmit/lib/websubmit_templates.py:1531
msgid "Reference not yet given"
msgstr "Todavía no tiene referencia"
#: modules/websubmit/lib/websubmit_templates.py:1602
msgid "Refereed Documents"
msgstr "Documentos revisados"
#: modules/websubmit/lib/websubmit_templates.py:1612
msgid "You are a general referee"
msgstr "Usted es el revisor general"
#: modules/websubmit/lib/websubmit_templates.py:1618
msgid "You are a referee for category:"
msgstr "Usted es revisor para la categoría:"
#: modules/websubmit/lib/websubmit_templates.py:1657
#: modules/websubmit/lib/websubmit_templates.py:1702
msgid "List of refereed types of documents"
msgstr "Lista de tipos de documentos revisados"
#: modules/websubmit/lib/websubmit_templates.py:1658
#: modules/websubmit/lib/websubmit_templates.py:1703
msgid ""
"Select one of the following types of documents to check the documents status"
msgstr ""
"Seleccione uno de los tipos de documentos siguientes para comprobar el "
"estado de los documentos"
#: modules/websubmit/lib/websubmit_templates.py:1671
msgid "Go to specific approval workflow"
msgstr "Ir al procedimiento de aprobación específico"
#: modules/websubmit/lib/websubmit_templates.py:1759
msgid "List of refereed categories"
msgstr "Lista de categorías a revisar"
#: modules/websubmit/lib/websubmit_templates.py:1760
#: modules/websubmit/lib/websubmit_templates.py:1909
msgid "Please choose a category"
msgstr "Escoja una categoría"
#: modules/websubmit/lib/websubmit_templates.py:1780
#: modules/websubmit/lib/websubmit_templates.py:1821
#: modules/websubmit/lib/websubmit_templates.py:1932
#: modules/websubmit/lib/websubmit_templates.py:1990
#: modules/websubmit/lib/websubmit_templates.py:2056
#: modules/websubmit/lib/websubmit_templates.py:2181
msgid "Pending"
msgstr "Pendiente"
#: modules/websubmit/lib/websubmit_templates.py:1786
#: modules/websubmit/lib/websubmit_templates.py:1824
#: modules/websubmit/lib/websubmit_templates.py:1939
#: modules/websubmit/lib/websubmit_templates.py:1993
#: modules/websubmit/lib/websubmit_templates.py:2057
#: modules/websubmit/lib/websubmit_templates.py:2182
msgid "Approved"
msgstr "Aprobado"
#: modules/websubmit/lib/websubmit_templates.py:1792
#: modules/websubmit/lib/websubmit_templates.py:1826
#: modules/websubmit/lib/websubmit_templates.py:1827
#: modules/websubmit/lib/websubmit_templates.py:1946
#: modules/websubmit/lib/websubmit_templates.py:1995
#: modules/websubmit/lib/websubmit_templates.py:1996
#: modules/websubmit/lib/websubmit_templates.py:2058
#: modules/websubmit/lib/websubmit_templates.py:2183
msgid "Rejected"
msgstr "Rechazado"
#: modules/websubmit/lib/websubmit_templates.py:1820
#: modules/websubmit/lib/websubmit_templates.py:1989
msgid "Key"
msgstr "Código"
#: modules/websubmit/lib/websubmit_templates.py:1823
#: modules/websubmit/lib/websubmit_templates.py:1992
msgid "Waiting for approval"
msgstr "Esperando la aprobación"
#: modules/websubmit/lib/websubmit_templates.py:1825
#: modules/websubmit/lib/websubmit_templates.py:1994
msgid "Already approved"
msgstr "Ya aprobado"
#: modules/websubmit/lib/websubmit_templates.py:1828
#: modules/websubmit/lib/websubmit_templates.py:1999
msgid "Some documents are pending."
msgstr "Algunos documentos están pendientes."
#: modules/websubmit/lib/websubmit_templates.py:1873
msgid "List of refereing categories"
msgstr "Lista de categorías a revisar"
#: modules/websubmit/lib/websubmit_templates.py:1953
#: modules/websubmit/lib/websubmit_templates.py:1997
#: modules/websubmit/lib/websubmit_templates.py:1998
#: modules/websubmit/lib/websubmit_templates.py:2184
msgid "Cancelled"
msgstr "Cancelado"
#: modules/websubmit/lib/websubmit_templates.py:2053
#: modules/websubmit/lib/websubmit_templates.py:2141
msgid "List of refereed documents"
msgstr "Lista de los documentos revisados"
#: modules/websubmit/lib/websubmit_templates.py:2054
#: modules/websubmit/lib/websubmit_templates.py:2178
msgid "Click on a report number for more information."
msgstr "Haga clic en un número de informe para más información."
#: modules/websubmit/lib/websubmit_templates.py:2055
#: modules/websubmit/lib/websubmit_templates.py:2180
msgid "Report Number"
msgstr "Número de informe"
#: modules/websubmit/lib/websubmit_templates.py:2143
msgid "List of publication documents"
msgstr "Lista de documentos de publicación"
#: modules/websubmit/lib/websubmit_templates.py:2145
msgid "List of direct approval documents"
msgstr "Lista de los documentos de aprobación directa"
#: modules/websubmit/lib/websubmit_templates.py:2318
msgid "Your request has been sent to the referee."
msgstr "Su petición se ha enviado al revisor."
#: modules/websubmit/lib/websubmit_templates.py:2334
#: modules/websubmit/lib/websubmit_templates.py:2455
#: modules/websubmit/lib/websubmit_templates.py:2766
#: modules/websubmit/lib/websubmit_templates.py:2950
msgid "Title:"
msgstr "Título:"
#: modules/websubmit/lib/websubmit_templates.py:2340
#: modules/websubmit/lib/websubmit_templates.py:2462
#: modules/websubmit/lib/websubmit_templates.py:2772
#: modules/websubmit/lib/websubmit_templates.py:2956
msgid "Author:"
msgstr "Autor:"
#: modules/websubmit/lib/websubmit_templates.py:2348
#: modules/websubmit/lib/websubmit_templates.py:2471
#: modules/websubmit/lib/websubmit_templates.py:2780
#: modules/websubmit/lib/websubmit_templates.py:2964
msgid "More information:"
msgstr "Más información:"
#: modules/websubmit/lib/websubmit_templates.py:2349
#: modules/websubmit/lib/websubmit_templates.py:2472
#: modules/websubmit/lib/websubmit_templates.py:2781
#: modules/websubmit/lib/websubmit_templates.py:2965
msgid "Click here"
msgstr "Haga clic aquí"
#: modules/websubmit/lib/websubmit_templates.py:2358
msgid "Approval note:"
msgstr "Nota de aprobación:"
#: modules/websubmit/lib/websubmit_templates.py:2363
#, python-format
msgid ""
"This document is still %(x_fmt_open)swaiting for approval%(x_fmt_close)s."
msgstr ""
"Este documento todavía está %(x_fmt_open)sesperando su aprobación"
"%(x_fmt_close)s."
#: modules/websubmit/lib/websubmit_templates.py:2366
#: modules/websubmit/lib/websubmit_templates.py:2386
#: modules/websubmit/lib/websubmit_templates.py:2395
msgid "It was first sent for approval on:"
msgstr "Fue enviado para su aprobación el:"
#: modules/websubmit/lib/websubmit_templates.py:2368
#: modules/websubmit/lib/websubmit_templates.py:2370
#: modules/websubmit/lib/websubmit_templates.py:2388
#: modules/websubmit/lib/websubmit_templates.py:2390
#: modules/websubmit/lib/websubmit_templates.py:2397
#: modules/websubmit/lib/websubmit_templates.py:2399
msgid "Last approval email was sent on:"
msgstr "El último mensaje de aprobación fue enviado el:"
#: modules/websubmit/lib/websubmit_templates.py:2371
msgid ""
"You can send an approval request email again by clicking the following "
"button:"
msgstr ""
"Puede volver a enviar otra petición de aprobación vía correo electrónico "
"pulsando este botón:"
#: modules/websubmit/lib/websubmit_templates.py:2373
#: modules/websubmit/web/publiline.py:366
msgid "Send Again"
msgstr "Volver a enviarlo"
#: modules/websubmit/lib/websubmit_templates.py:2374
msgid "WARNING! Upon confirmation, an email will be sent to the referee."
msgstr ""
"¡ATENCIÓN! Se enviará un correo electrónico a su revisor cuando lo confirme."
#: modules/websubmit/lib/websubmit_templates.py:2377
msgid ""
"As a referee for this document, you may click this button to approve or "
"reject it"
msgstr ""
"Como revisor de este documento, puede hacer clic en este botón para "
"aprobarlo o rechazarlo."
#: modules/websubmit/lib/websubmit_templates.py:2379
msgid "Approve/Reject"
msgstr "Aprobar/Rechazar"
#: modules/websubmit/lib/websubmit_templates.py:2384
#, python-format
msgid "This document has been %(x_fmt_open)sapproved%(x_fmt_close)s."
msgstr "Este documento ha sido %(x_fmt_open)saprobado%(x_fmt_close)s."
#: modules/websubmit/lib/websubmit_templates.py:2385
msgid "Its approved reference is:"
msgstr "Su referencia de aprobación es:"
#: modules/websubmit/lib/websubmit_templates.py:2391
msgid "It was approved on:"
msgstr "Fue aprobado el:"
#: modules/websubmit/lib/websubmit_templates.py:2393
#, python-format
msgid "This document has been %(x_fmt_open)srejected%(x_fmt_close)s."
msgstr "Este documento ha sido %(x_fmt_open)srechazado%(x_fmt_close)s."
#: modules/websubmit/lib/websubmit_templates.py:2400
msgid "It was rejected on:"
msgstr "Fue rechazado el:"
#: modules/websubmit/lib/websubmit_templates.py:2484
#: modules/websubmit/lib/websubmit_templates.py:2539
#: modules/websubmit/lib/websubmit_templates.py:2602
msgid "It has first been asked for refereing process on the "
msgstr "La petición de revisión fue hecha por primera vez el "
#: modules/websubmit/lib/websubmit_templates.py:2487
#: modules/websubmit/lib/websubmit_templates.py:2541
msgid "Last request e-mail was sent to the publication committee chair on the "
msgstr ""
"El último mensaje de petición fue enviado al responsable del comité de "
"publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2491
msgid "A referee has been selected by the publication committee on the "
msgstr "Un revisor ha sido seleccionado por el comité de publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2494
#: modules/websubmit/lib/websubmit_templates.py:2557
msgid "No referee has been selected yet."
msgstr "Todavía no existe ningún revisor seleccionado."
#: modules/websubmit/lib/websubmit_templates.py:2496
#: modules/websubmit/lib/websubmit_templates.py:2559
msgid "Select a referee"
msgstr "Seleccione un revisor"
#: modules/websubmit/lib/websubmit_templates.py:2501
msgid ""
"The referee has sent his final recommendations to the publication committee "
"on the "
msgstr ""
-"El revisor ha enviado sus recomanaciones finales al comité de publicación el "
+"El revisor ha enviado sus recomendaciones finales al comité de publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2504
#: modules/websubmit/lib/websubmit_templates.py:2565
msgid "No recommendation from the referee yet."
msgstr "Todavía no hay ninguna recomendación del revisor."
#: modules/websubmit/lib/websubmit_templates.py:2506
#: modules/websubmit/lib/websubmit_templates.py:2516
#: modules/websubmit/lib/websubmit_templates.py:2567
#: modules/websubmit/lib/websubmit_templates.py:2575
#: modules/websubmit/lib/websubmit_templates.py:2583
msgid "Send a recommendation"
msgstr "Envíe una recomendación"
#: modules/websubmit/lib/websubmit_templates.py:2511
#: modules/websubmit/lib/websubmit_templates.py:2579
msgid ""
"The publication committee has sent his final recommendations to the project "
"leader on the "
msgstr ""
"El comité de publicación ha enviado sus recomendaciones finales al "
-"responsable de projecto el "
+"responsable de proyecto el "
#: modules/websubmit/lib/websubmit_templates.py:2514
#: modules/websubmit/lib/websubmit_templates.py:2581
msgid "No recommendation from the publication committee yet."
msgstr "Todavía no hay ninguna recomendación del comité de publicación."
#: modules/websubmit/lib/websubmit_templates.py:2521
#: modules/websubmit/lib/websubmit_templates.py:2587
#: modules/websubmit/lib/websubmit_templates.py:2607
msgid "It has been cancelled by the author on the "
msgstr "Ha sido cancelado por el autor el "
#: modules/websubmit/lib/websubmit_templates.py:2525
#: modules/websubmit/lib/websubmit_templates.py:2590
#: modules/websubmit/lib/websubmit_templates.py:2610
msgid "It has been approved by the project leader on the "
-msgstr "Ha sido aprobado por el responsable de projecto el "
+msgstr "Ha sido aprobado por el responsable de proyecto el "
#: modules/websubmit/lib/websubmit_templates.py:2528
#: modules/websubmit/lib/websubmit_templates.py:2592
#: modules/websubmit/lib/websubmit_templates.py:2612
msgid "It has been rejected by the project leader on the "
-msgstr "Ha sido rechazado por el responsable de projecto el "
+msgstr "Ha sido rechazado por el responsable de proyecto el "
#: modules/websubmit/lib/websubmit_templates.py:2531
#: modules/websubmit/lib/websubmit_templates.py:2594
#: modules/websubmit/lib/websubmit_templates.py:2614
msgid "No final decision taken yet."
msgstr "Todavía no hay ninguna decisión final."
#: modules/websubmit/lib/websubmit_templates.py:2533
#: modules/websubmit/lib/websubmit_templates.py:2596
#: modules/websubmit/lib/websubmit_templates.py:2616
#: modules/websubmit/web/publiline.py:1136
#: modules/websubmit/web/publiline.py:1146
msgid "Take a decision"
msgstr "Tome una decisión"
#: modules/websubmit/lib/websubmit_templates.py:2544
msgid ""
"An editorial board has been selected by the publication committee on the "
msgstr ""
"Un consejo editor ha sido seleccionado por el comité de publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2546
msgid "Add an author list"
msgstr "Añadir una lista de autores"
#: modules/websubmit/lib/websubmit_templates.py:2549
msgid "No editorial board has been selected yet."
msgstr "Todavía no se ha seleccionado ningún consejo editor."
#: modules/websubmit/lib/websubmit_templates.py:2551
msgid "Select an editorial board"
msgstr "Seleccionar un consejo editor"
#: modules/websubmit/lib/websubmit_templates.py:2555
msgid "A referee has been selected by the editorial board on the "
msgstr "Un revisor ha sido seleccionado por el consejo editor el "
#: modules/websubmit/lib/websubmit_templates.py:2563
msgid ""
"The referee has sent his final recommendations to the editorial board on the "
msgstr ""
"El revisor ha enviado sus recomendaciones finales al consejo editor el "
#: modules/websubmit/lib/websubmit_templates.py:2571
msgid ""
"The editorial board has sent his final recommendations to the publication "
"committee on the "
msgstr ""
"El consejo editor ha enviado sus recomendaciones finales al comité de "
"publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2573
msgid "No recommendation from the editorial board yet."
msgstr "Todavía no hay ninguna recomendación del consejo editor."
#: modules/websubmit/lib/websubmit_templates.py:2604
msgid "Last request e-mail was sent to the project leader on the "
msgstr ""
"El último mensaje de petición fue enviado al responsable del proyecto el "
#: modules/websubmit/lib/websubmit_templates.py:2698
msgid "Comments overview"
msgstr "Sumario de los comentarios"
#: modules/websubmit/lib/websubmit_templates.py:2820
msgid "search for user"
msgstr "buscar un usuario"
#: modules/websubmit/lib/websubmit_templates.py:2822
msgid "search for users"
msgstr "buscar usuarios"
#: modules/websubmit/lib/websubmit_templates.py:2825
#: modules/websubmit/lib/websubmit_templates.py:2827
#: modules/websubmit/lib/websubmit_templates.py:2880
#: modules/websubmit/lib/websubmit_templates.py:2882
msgid "select user"
msgstr "seleccionar usuario"
#: modules/websubmit/lib/websubmit_templates.py:2836
msgid "connected"
msgstr "conectado"
#: modules/websubmit/lib/websubmit_templates.py:2839
msgid "add this user"
msgstr "añadir este usuario"
#: modules/websubmit/lib/websubmit_templates.py:2889
msgid "remove this user"
msgstr "eliminar este usuario"
#: modules/websubmit/lib/websubmit_templates.py:3003
msgid "User"
msgstr "Usuario"
#: modules/websubmit/lib/websubmit_templates.py:3084
#: modules/websubmit/web/publiline.py:1133
msgid "Select:"
msgstr "Seleccionar:"
#: modules/websubmit/lib/websubmit_templates.py:3085
msgid "approve"
msgstr "aprobar"
#: modules/websubmit/lib/websubmit_templates.py:3086
msgid "reject"
msgstr "rechazar"
#: modules/websubmit/lib/websubmit_webinterface.py:116
msgid "Requested record does not seem to have been integrated."
msgstr "El registro solicitado no parece haber sido integrado."
#: modules/websubmit/lib/websubmit_webinterface.py:150
msgid ""
"The system has encountered an error in retrieving the list of files for this "
"document."
msgstr ""
"El sistema ha encontrado un error recuperando la lista de los archivos de "
"este documento."
#: modules/websubmit/lib/websubmit_webinterface.py:216
msgid "This file is restricted: "
msgstr "Este archivo es de acceso restringido: "
#: modules/websubmit/lib/websubmit_webinterface.py:228
msgid "An error has happened in trying to stream the request file."
msgstr "Ha habido un error al intentar transmitir el archivo solicitado."
#: modules/websubmit/lib/websubmit_webinterface.py:231
msgid "The requested file is hidden and can not be accessed."
msgstr "El archivo solicitado está oculto y no se puede acceder a él."
#: modules/websubmit/lib/websubmit_webinterface.py:238
msgid "Requested file does not seem to exist."
msgstr "El fichero solicitado no parece existir."
#: modules/websubmit/lib/websubmit_webinterface.py:272
msgid "Access to Fulltext"
msgstr "Acceso al texto completo"
#: modules/websubmit/lib/websubmit_webinterface.py:321
msgid "An error has happened in trying to retrieve the requested file."
msgstr "Ha habido un error al intentar recuperar el archivo solicitado."
#: modules/websubmit/lib/websubmit_webinterface.py:323
msgid "Not enough information to retrieve the document"
msgstr "Falta información para recuperar el documento"
#: modules/websubmit/lib/websubmit_webinterface.py:331
msgid "An error has happened in trying to retrieving the requested file."
msgstr "Ha habido un error al intentar recuperar el archivo solicitado."
#: modules/websubmit/lib/websubmit_webinterface.py:383
msgid "Manage Document Files"
msgstr "Gestionar los ficheros de los documentos"
#: modules/websubmit/lib/websubmit_webinterface.py:401
#, python-format
msgid "Your modifications to record #%i have been submitted"
msgstr "Sus modificaciones al registro #%i han sido enviadas"
#: modules/websubmit/lib/websubmit_webinterface.py:409
#, python-format
msgid "Your modifications to record #%i have been cancelled"
msgstr "Sus modificaciones al registro #%i han sido canceladas"
#: modules/websubmit/lib/websubmit_webinterface.py:418
msgid "Edit"
msgstr "Editar"
#: modules/websubmit/lib/websubmit_webinterface.py:419
msgid "Edit record"
msgstr "Edite el registro"
#: modules/websubmit/lib/websubmit_webinterface.py:434
#: modules/websubmit/lib/websubmit_webinterface.py:490
msgid "Document File Manager"
msgstr "Gestión de ficheros del documento"
#: modules/websubmit/lib/websubmit_webinterface.py:435
#: modules/websubmit/lib/websubmit_webinterface.py:490
#, python-format
msgid "Record #%i"
msgstr "Registro #%i"
#: modules/websubmit/lib/websubmit_webinterface.py:482
msgid "Cancel all changes"
msgstr "Cancelar todos los cambios"
#: modules/websubmit/lib/websubmit_webinterface.py:1171
msgid "Sorry, 'sub' parameter missing..."
msgstr "Falta el parámetro «sub»..."
#: modules/websubmit/lib/websubmit_webinterface.py:1174
msgid "Sorry. Cannot analyse parameter"
msgstr "No se ha podido analizar el parámetro"
#: modules/websubmit/lib/websubmit_webinterface.py:1235
msgid "Sorry, invalid URL..."
-msgstr "URL no vàlid..."
+msgstr "URL no válida..."
#: modules/websubmit/lib/websubmitadmin_engine.py:3902
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"'%s%s' - Invalid Field Position Numbers"
msgstr ""
"No ha sido posible mover el campo de la posición %s a la %s de la página %s "
"del envío '%s%s' - Números de posición no válidos"
#: modules/websubmit/lib/websubmitadmin_engine.py:3913
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field at position %s to temporary field "
"location"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
+"No ha sido posible intercambiar el campo de la posición %s con el campo de "
"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
"de la posición %s al campo temporal"
#: modules/websubmit/lib/websubmitadmin_engine.py:3924
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field at position %s to position %s. Please "
"ask Admin to check that a field was not stranded in a temporary position"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
+"No ha sido posible intercambiar el campo de la posición %s con el campo de "
"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
"de la posición %s a la posición %s. Pida al administrador que compruebe si "
"el campo no ha quedado encallado en una posición temporal."
#: modules/websubmit/lib/websubmitadmin_engine.py:3935
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field that was located at position %s to "
"position %s from temporary position. Field is now stranded in temporary "
"position and must be corrected manually by an Admin"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
+"No ha sido posible intercambiar el campo de la posición %s con el campo de "
"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
"de la posición %s a la posición %s. El campo ha quedado encallado en una "
-"posición temporal y el administador tiene que corregirlo manualmente."
+"posición temporal y el administrador tiene que corregirlo manualmente."
#: modules/websubmit/lib/websubmitadmin_engine.py:3946
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"%s - could not decrement the position of the fields below position %s. Tried "
"to recover - please check that field ordering is not broken"
msgstr ""
"No ha sido posible mover el campo de la posición %s a la %s en la página %s "
"del envío %s - no se ha podido decrementar la posición de los campos por "
"debajo de la posición %s. Se ha intentado recuperar. Compruebe que el orden "
"de los campos sea el correcto."
#: modules/websubmit/lib/websubmitadmin_engine.py:3957
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"%s%s - could not increment the position of the fields at and below position "
"%s. The field that was at position %s is now stranded in a temporary "
"position."
msgstr ""
"No ha sido posible mover el campo de la posición %s a la %s en la página %s "
"del envío %s%s - no se ha podido incrementar la posición de los campos en "
"la posición %s y por debajo de ella. El campo que estaba en la posición %s "
"está ahora encallado en una posición temporal."
#: modules/websubmit/lib/websubmitadmin_engine.py:3968
#, python-format
msgid ""
"Moved field from position %s to position %s on page %s of submission '%s%s'."
msgstr ""
"Campo movido de la posición %s a la posición %s de la página %s del envío '%s"
"%s'."
#: modules/websubmit/lib/websubmitadmin_engine.py:3989
#, python-format
msgid "Unable to delete field at position %s from page %s of submission '%s'"
msgstr ""
"No ha sido posible eliminar el campo en la posición %s de la página %s del "
"envío '%s'."
#: modules/websubmit/lib/websubmitadmin_engine.py:3999
#, python-format
msgid "Unable to delete field at position %s from page %s of submission '%s%s'"
msgstr ""
"No ha sido posible eliminar el campo en la posición %s de la página %s del "
"envío '%s%s'."
#: modules/websubmit/web/approve.py:55
msgid "approve.py: cannot determine document reference"
msgstr "approve.py: no se ha podido determinar la referencia del documento."
#: modules/websubmit/web/approve.py:58
msgid "approve.py: cannot find document in database"
msgstr "approve.py: no se ha encontrado el documento en la base de datos"
#: modules/websubmit/web/approve.py:72
msgid "Sorry parameter missing..."
msgstr "Falta el parámetro..."
#: modules/websubmit/web/publiline.py:133
msgid "Document Approval Workflow"
msgstr "Circuito de aprobación de documentos"
#: modules/websubmit/web/publiline.py:154
msgid "Approval and Refereeing Workflow"
-msgstr "Procedimiento de aprovación y revisión"
+msgstr "Procedimiento de aprobación y revisión"
#: modules/websubmit/web/publiline.py:333
#: modules/websubmit/web/publiline.py:434
#: modules/websubmit/web/publiline.py:660
msgid "Approval has never been requested for this document."
msgstr "No se ha pedido nunca la aprobación de este documento."
#: modules/websubmit/web/publiline.py:356
#: modules/websubmit/web/publiline.py:358
#: modules/websubmit/web/publiline.py:460
#: modules/websubmit/web/publiline.py:685
msgid "Unable to display document."
msgstr "No es posible visualizar el documento."
#: modules/websubmit/web/publiline.py:689
#: modules/websubmit/web/publiline.py:813
#: modules/websubmit/web/publiline.py:928
#: modules/websubmit/web/publiline.py:992
#: modules/websubmit/web/publiline.py:1033
#: modules/websubmit/web/publiline.py:1089
#: modules/websubmit/web/publiline.py:1152
#: modules/websubmit/web/publiline.py:1202
msgid "Action unauthorized for this document."
msgstr "Acción no autorizada para este documento."
#: modules/websubmit/web/publiline.py:692
#: modules/websubmit/web/publiline.py:816
#: modules/websubmit/web/publiline.py:931
#: modules/websubmit/web/publiline.py:995
#: modules/websubmit/web/publiline.py:1036
#: modules/websubmit/web/publiline.py:1092
#: modules/websubmit/web/publiline.py:1155
#: modules/websubmit/web/publiline.py:1205
msgid "Action unavailable for this document."
msgstr "Acción no aplicable para este documento."
#: modules/websubmit/web/publiline.py:702
msgid "Adding users to the editorial board"
msgstr "Añadir usuarios al consejo editor"
#: modules/websubmit/web/publiline.py:730
#: modules/websubmit/web/publiline.py:853
msgid "no qualified users, try new search."
msgstr "no coincide con ningún usuario, pruebe otra búsqueda."
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "hits"
msgstr "resultados"
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "too many qualified users, specify more narrow search."
msgstr "coincide con demasiados usuarios, restrinja más la búsqueda."
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "limit"
msgstr "límite"
#: modules/websubmit/web/publiline.py:748
msgid "users in brackets are already attached to the role, try another one..."
msgstr ""
-"los usuarios entre corchetes ja tenen asignado este rol, escoja otro..."
+"los usuarios entre corchetes ya tienen asignado este rol, escoja otro..."
#: modules/websubmit/web/publiline.py:754
msgid "Removing users from the editorial board"
-msgstr "Borrado de usuarios del consejo editor"
+msgstr "Borrando usuarios del consejo editor"
#: modules/websubmit/web/publiline.py:790
msgid "Validate the editorial board selection"
-msgstr "Valude la selección del consejo editor"
+msgstr "Validar la selección del consejo editor"
#: modules/websubmit/web/publiline.py:835
msgid "Referee selection"
msgstr "Selección de revisor"
#: modules/websubmit/web/publiline.py:921
msgid "Come back to the document"
msgstr "Volver al documento"
#: modules/websubmit/web/publiline.py:1106
msgid "Back to the document"
msgstr "Volver al documento"
#: modules/websubmit/web/publiline.py:1134
#: modules/websubmit/web/publiline.py:1194
msgid "Approve"
msgstr "Aprobar"
#: modules/websubmit/web/publiline.py:1135
#: modules/websubmit/web/publiline.py:1195
msgid "Reject"
msgstr "Rechazar"
#: modules/websubmit/web/publiline.py:1233
msgid "Wrong action for this document."
-msgstr "Acción incorrecta para este document."
+msgstr "Acción incorrecta para este documento."
#: modules/websubmit/web/yourapprovals.py:57
msgid "You are not authorized to use approval system."
msgstr "No está autorizado a utilizar el sistema de aprobaciones."
#: modules/webjournal/lib/webjournal_templates.py:50
msgid "Available Journals"
msgstr "Revistas disponibles"
#: modules/webjournal/lib/webjournal_templates.py:59
#: modules/webjournal/lib/webjournal_templates.py:98
#, python-format
msgid "Contact %(x_url_open)sthe administrator%(x_url_close)s"
msgstr "Escriba a los %(x_url_open)sadministradores%(x_url_close)s"
#: modules/webjournal/lib/webjournal_templates.py:143
msgid "Regeneration Error"
msgstr "Error en la regeneración"
#: modules/webjournal/lib/webjournal_templates.py:144
msgid ""
"The issue could not be correctly regenerated. Please contact your "
"administrator."
msgstr ""
"El número no se ha podido regenerar correctamente. Contacte con el "
"administrador."
#: modules/webjournal/lib/webjournal_templates.py:270
#, python-format
msgid "If you cannot read this email please go to %(x_journal_link)s"
msgstr "Si no puede leer este mensaje vaya a %(x_journal_link)s"
#: modules/webjournal/lib/webjournal_templates.py:417
#: modules/webjournal/lib/webjournal_templates.py:666
#: modules/webjournal/lib/webjournaladminlib.py:299
#: modules/webjournal/lib/webjournaladminlib.py:319
msgid "Add"
msgstr "Añadir"
#: modules/webjournal/lib/webjournal_templates.py:418
#: modules/webjournal/lib/webjournaladminlib.py:338
msgid "Publish"
msgstr "Publicar"
#: modules/webjournal/lib/webjournal_templates.py:419
#: modules/webjournal/lib/webjournaladminlib.py:299
#: modules/webjournal/lib/webjournaladminlib.py:316
msgid "Refresh"
msgstr "Refrescar"
#: modules/webjournal/lib/webjournal_templates.py:482
#: modules/webjournal/lib/webjournaladminlib.py:364
#: modules/bibcirculation/lib/bibcirculation_templates.py:3254
#: modules/bibcirculation/lib/bibcirculation_templates.py:3275
#: modules/bibcirculation/lib/bibcirculation_templates.py:3296
#: modules/bibcirculation/lib/bibcirculation_templates.py:3317
#: modules/bibcirculation/lib/bibcirculation_templates.py:3947
#: modules/bibcirculation/lib/bibcirculation_templates.py:4499
#: modules/bibcirculation/lib/bibcirculation_templates.py:4507
#: modules/bibcirculation/lib/bibcirculation_templates.py:8140
#: modules/bibcirculation/lib/bibcirculation_templates.py:14718
msgid "Update"
msgstr "Actualizar"
#: modules/webjournal/lib/webjournal_templates.py:661
msgid "Apply"
msgstr "Aplicar"
#: modules/webjournal/lib/webjournal_config.py:64
msgid "Page not found"
msgstr "No se ha encontrado la página"
#: modules/webjournal/lib/webjournal_config.py:65
msgid "The requested page does not exist"
msgstr "La página solicitada no existe"
#: modules/webjournal/lib/webjournal_config.py:96
msgid "No journal articles"
msgstr "Sin artículos de revista"
#: modules/webjournal/lib/webjournal_config.py:97
#: modules/webjournal/lib/webjournal_config.py:138
msgid "Problem with the configuration of this journal"
msgstr "Problema con la configuración de esta revista"
#: modules/webjournal/lib/webjournal_config.py:137
msgid "No journal issues"
msgstr "Sin números de revista"
#: modules/webjournal/lib/webjournal_config.py:176
msgid "Journal article error"
msgstr "Error interno de la revista"
#: modules/webjournal/lib/webjournal_config.py:177
msgid "We could not know which article you were looking for"
msgstr "No ha sido posible encontrar el artículo que buscaba"
#: modules/webjournal/lib/webjournal_config.py:211
msgid "No journals available"
msgstr "No hay ninguna revista"
#: modules/webjournal/lib/webjournal_config.py:212
msgid "We could not provide you any journals"
msgstr "No ha sido posible ofrecerle ninguna revista"
#: modules/webjournal/lib/webjournal_config.py:213
msgid ""
"It seems that there are no journals defined on this server. Please contact "
"support if this is not right."
msgstr ""
"Parece ser que no hay ninguna revista definida en este servidor. Contacte "
"con el soporte si no es así."
#: modules/webjournal/lib/webjournal_config.py:239
msgid "Select a journal on this server"
msgstr "Seleccione una revista en este servidor"
#: modules/webjournal/lib/webjournal_config.py:240
msgid "We couldn't guess which journal you are looking for"
msgstr "No ha sido posible encontrar la revista que buscaba"
#: modules/webjournal/lib/webjournal_config.py:241
msgid ""
"You did not provide an argument for a journal name. Please select the "
"journal you want to read in the list below."
msgstr "No ha especificado qué revista busca. Seleccione una de esta lista."
#: modules/webjournal/lib/webjournal_config.py:268
msgid "No current issue"
msgstr "No existe ningún número actual"
#: modules/webjournal/lib/webjournal_config.py:269
msgid "We could not find any informtion on the current issue"
msgstr "No se ha encontrado información sobre el número actual"
#: modules/webjournal/lib/webjournal_config.py:270
msgid ""
"The configuration for the current issue seems to be empty. Try providing an "
"issue number or check with support."
msgstr ""
"Parece que la configuración del número actual está vacía. Pruebe a escoger "
"un número concreto o póngase en contacto con soporte."
#: modules/webjournal/lib/webjournal_config.py:298
msgid "Issue number badly formed"
msgstr "Número mal formateado"
#: modules/webjournal/lib/webjournal_config.py:299
msgid "We could not read the issue number you provided"
msgstr "No ha sido posible leer el número que ha escogido"
#: modules/webjournal/lib/webjournal_config.py:329
msgid "Archive date badly formed"
msgstr "Fecha de archivo mal formateada"
#: modules/webjournal/lib/webjournal_config.py:330
msgid "We could not read the archive date you provided"
msgstr "No ha sido posible leer la fecha de archivo que ha escogido"
#: modules/webjournal/lib/webjournal_config.py:365
msgid "No popup record"
msgstr "No existe el registro «popup»"
#: modules/webjournal/lib/webjournal_config.py:366
msgid "We could not deduce the popup article you requested"
msgstr "No ha sido posible deducir el artículo «popup» que ha solicitado"
#: modules/webjournal/lib/webjournal_config.py:399
msgid "Update error"
msgstr "Error de actualización"
#: modules/webjournal/lib/webjournal_config.py:400
#: modules/webjournal/lib/webjournal_config.py:431
msgid "There was an internal error"
msgstr "Ha habido un error interno"
#: modules/webjournal/lib/webjournal_config.py:430
msgid "Journal publishing DB error"
-msgstr "Error en la base dades de publicación de revistas"
+msgstr "Error en la base de datos de publicación de revistas"
#: modules/webjournal/lib/webjournal_config.py:463
msgid "Journal issue error"
msgstr "Error de número de revista"
#: modules/webjournal/lib/webjournal_config.py:464
msgid "Issue not found"
msgstr "No se ha encontrado el número"
#: modules/webjournal/lib/webjournal_config.py:494
msgid "Journal ID error"
msgstr "Error del código de revista"
#: modules/webjournal/lib/webjournal_config.py:495
msgid "We could not find the id for this journal in the Database"
msgstr ""
"No ha sido posible encontrar el identificador interno de esta revista en la "
"base de datos"
#: modules/webjournal/lib/webjournal_config.py:527
#: modules/webjournal/lib/webjournal_config.py:529
#, python-format
msgid "Category \"%(category_name)s\" not found"
-msgstr "No se ha encontrado la categoria «%(category_name)s»"
+msgstr "No se ha encontrado la categoría «%(category_name)s»"
#: modules/webjournal/lib/webjournal_config.py:531
msgid "Sorry, this category does not exist for this journal and issue."
-msgstr "Esta categoria no existe para esta revista y número."
-
+msgstr "Esta categoría no existe para esta revista y número."
#: modules/webjournal/lib/webjournaladminlib.py:350
msgid "Please select an issue"
msgstr "Seleccione un número"
#: modules/webjournal/web/admin/webjournaladmin.py:77
msgid "WebJournal Admin"
msgstr "Administración de WebJournal"
#: modules/webjournal/web/admin/webjournaladmin.py:119
#, python-format
msgid "Administrate %(journal_name)s"
msgstr "Administrar %(journal_name)s"
#: modules/webjournal/web/admin/webjournaladmin.py:158
msgid "Feature a record"
msgstr "Destacar un registro"
#: modules/webjournal/web/admin/webjournaladmin.py:220
msgid "Email Alert System"
msgstr "Sistema de alertas por correo electrónico"
#: modules/webjournal/web/admin/webjournaladmin.py:273
msgid "Issue regenerated"
msgstr "Número regenerado"
#: modules/webjournal/web/admin/webjournaladmin.py:324
msgid "Publishing Interface"
msgstr "Interfaz de publicación"
#: modules/webjournal/web/admin/webjournaladmin.py:350
msgid "Add Journal"
msgstr "Añadir una revista"
#: modules/webjournal/web/admin/webjournaladmin.py:352
msgid "Edit Settings"
msgstr "Editar parámetros"
#: modules/bibcatalog/lib/bibcatalog_templates.py:43
#, python-format
msgid "You have %i tickets."
msgstr "Tiene %i tickets."
#: modules/bibcatalog/lib/bibcatalog_templates.py:62
msgid "show"
msgstr "mostrar"
#: modules/bibcatalog/lib/bibcatalog_templates.py:63
msgid "close"
msgstr "cerrar"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:68
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:82
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:125
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:140
msgid "No information available"
msgstr "No hay información disponible"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:86
msgid "No seminars today"
msgstr "Hoy no hay seminarios"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:184
msgid "What's on today"
msgstr "Previsto para hoy"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:185
msgid "Seminars of the week"
msgstr "Seminarios de la semana"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_whatsNew.py:163
msgid "There are no new articles for the moment"
msgstr "De momento no hay artículos nuevos"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_whatsNew.py:285
msgid "What's new"
msgstr "Es noticia"
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:224
msgid "Under the CERN sky"
msgstr "Bajo el cielo del CERN"
#: modules/webjournal/lib/elements/bfe_webjournal_article_author.py:48
#, python-format
msgid "About your article at %(url)s"
msgstr "Sobre su artículo en %(url)s"
#: modules/webjournal/lib/elements/bfe_webjournal_imprint.py:122
msgid "Issue No."
msgstr "Número"
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:221
msgid "Did you know?"
msgstr "¿Lo sabía?"
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:253
#, python-format
msgid ""
"Hi,\n"
"\n"
"Have a look at the following article:\n"
"<%(url)s>"
msgstr ""
"Hola,\n"
"\n"
"échele un vistazo a este artículo:\n"
"<%(url)s>"
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:260
msgid "Send this article"
msgstr "Enviar este artículo"
#: modules/webjournal/lib/elements/bfe_webjournal_rss.py:136
msgid "Subscribe by RSS"
msgstr "Suscribirse vía RSS"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:90
msgid "News Articles"
msgstr "Noticias"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:91
msgid "Official News"
msgstr "Noticias oficiales"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:92
msgid "Training and Development"
msgstr "Formación y desarrollo"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:93
msgid "General Information"
msgstr "Información general"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:94
msgid "Announcements"
msgstr "Avisos"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:95
msgid "Training"
msgstr "Formación"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:96
msgid "Events"
msgstr "Eventos"
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:97
msgid "Staff Association"
-msgstr "Associación de personal"
+msgstr "Asociación de personal"
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:106
msgid "Archive"
msgstr "Archivo"
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:133
msgid "Select Year:"
msgstr "Seleccionar año:"
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:139
msgid "Select Issue:"
msgstr "Seleccionar número:"
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:143
msgid "Select Date:"
msgstr "Seleccionar fecha:"
#: modules/bibedit/lib/bibedit_webinterface.py:169
msgid "Comparing two record revisions"
msgstr "Comparar dos revisiones del registro"
#: modules/bibedit/lib/bibedit_webinterface.py:192
msgid "Failed to create a ticket"
msgstr "No se ha podido crear un ticket"
#: modules/bibedit/lib/bibeditmulti_templates.py:313
msgid "Next Step"
msgstr "Paso siguiente"
#: modules/bibedit/lib/bibeditmulti_templates.py:314
msgid "Search criteria"
msgstr "Criterios de búsqueda"
#: modules/bibedit/lib/bibeditmulti_templates.py:315
msgid "Output tags"
-msgstr "ETiquetas de visualización"
+msgstr "Etiquetas de visualización"
#: modules/bibedit/lib/bibeditmulti_templates.py:316
msgid "Filter collection"
msgstr "Filtro de colección"
#: modules/bibedit/lib/bibeditmulti_templates.py:317
msgid "1. Choose search criteria"
msgstr "1. Escoja los criterios de búsqueda"
#: modules/bibedit/lib/bibeditmulti_templates.py:318
msgid ""
"Specify the criteria you'd like to use for filtering records that will be "
"changed. Use \"Search\" to see which records would have been filtered using "
"these criteria."
msgstr ""
"Especifique los criterios que quiera para filtrar los registros a cambiar. "
-"Use «Buscar» per a ver qué registros se filtrarían con estos criterios."
+"Use «Buscar» para ver qué registros se filtrarían con estos criterios."
#: modules/bibedit/lib/bibeditmulti_templates.py:320
msgid "Preview results"
msgstr "Previsualización de los resultados"
#: modules/bibedit/lib/bibeditmulti_templates.py:552
msgid "2. Define changes"
msgstr "2. Definición de los cambios"
#: modules/bibedit/lib/bibeditmulti_templates.py:553
msgid ""
"Specify fields and their subfields that should be changed in every record "
"matching the search criteria."
msgstr ""
"Especifique los campos y subcampos a cambiar para cada registro que "
"concuerde con los criterios de búsqueda."
#: modules/bibedit/lib/bibeditmulti_templates.py:554
msgid "Define new field action"
-msgstr "Definir una nueva acción para el camp"
+msgstr "Definir una nueva acción para el campo"
#: modules/bibedit/lib/bibeditmulti_templates.py:555
msgid "Define new subfield action"
-msgstr "Definir una nueva acción para el subcamp"
+msgstr "Definir una nueva acción para el subcampo"
#: modules/bibedit/lib/bibeditmulti_templates.py:557
msgid "Select action"
msgstr "Seleccionar una acción"
#: modules/bibedit/lib/bibeditmulti_templates.py:558
msgid "Add field"
msgstr "Añadir un campo"
#: modules/bibedit/lib/bibeditmulti_templates.py:559
msgid "Delete field"
msgstr "Suprimir un campo"
#: modules/bibedit/lib/bibeditmulti_templates.py:560
msgid "Update field"
msgstr "Actualizar un campo"
#: modules/bibedit/lib/bibeditmulti_templates.py:561
msgid "Add subfield"
msgstr "Añadir un subcampo"
#: modules/bibedit/lib/bibeditmulti_templates.py:562
msgid "Delete subfield"
msgstr "Suprimir un subcampo"
#: modules/bibedit/lib/bibeditmulti_templates.py:563
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:260
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:402
#: modules/bibknowledge/lib/bibknowledge_templates.py:280
#: modules/bibknowledge/lib/bibknowledge_templates.py:528
msgid "Save"
msgstr "Guardar"
#: modules/bibedit/lib/bibeditmulti_templates.py:565
msgid "Replace substring"
msgstr "Sustituir texto"
#: modules/bibedit/lib/bibeditmulti_templates.py:566
msgid "Replace full content"
msgstr "Substituir todo el contenido"
#: modules/bibedit/lib/bibeditmulti_templates.py:567
msgid "with"
msgstr "con"
#: modules/bibedit/lib/bibeditmulti_templates.py:568
msgid "when subfield $$"
msgstr "cuando el subcampo $$"
#: modules/bibedit/lib/bibeditmulti_templates.py:569
msgid "new value"
msgstr "nuevo valor"
#: modules/bibedit/lib/bibeditmulti_templates.py:570
msgid "is equal to"
msgstr "es igual a"
#: modules/bibedit/lib/bibeditmulti_templates.py:571
msgid "contains"
msgstr "contiene"
#: modules/bibedit/lib/bibeditmulti_templates.py:572
msgid "condition"
msgstr "condición"
#: modules/bibedit/lib/bibeditmulti_templates.py:573
msgid "when other subfield"
msgstr "cuando otro subcampo"
#: modules/bibedit/lib/bibeditmulti_templates.py:574
msgid "when subfield"
msgstr "cuando el subcampo"
#: modules/bibedit/lib/bibeditmulti_templates.py:575
msgid "Apply only to specific field instances"
msgstr "Aplicar sólo a instancias específicas de campos"
#: modules/bibedit/lib/bibeditmulti_templates.py:576
msgid "value"
msgstr "valor"
#: modules/bibedit/lib/bibeditmulti_templates.py:605
msgid "Back to Results"
msgstr "Volver a los resultados"
#: modules/bibedit/lib/bibeditmulti_templates.py:703
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:515
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:571
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:590
msgid "records found"
msgstr "registros encontrados"
#: modules/bibedit/lib/bibeditmulti_webinterface.py:102
msgid "Multi-Record Editor"
msgstr "Editor de múltiples registros"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:116
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:132
msgid "Export Job Overview"
msgstr "Resumen de tareas de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:117
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:189
msgid "New Export Job"
-msgstr "Nueva tareq de exportación"
+msgstr "Nueva tarea de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:118
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:443
msgid "Export Job History"
msgstr "Histórico de tareas de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:174
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:195
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:323
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:534
msgid "Run"
msgstr "Ejecutar"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:176
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:325
msgid "New"
msgstr "Nuevo"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:196
msgid "Last run"
msgstr "Última ejecución"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:256
msgid "Frequency"
msgstr "Frecuencia"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:257
msgid "Output Format"
msgstr "Formato de salida"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:258
msgid "Start"
msgstr "Inicio"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:259
msgid "Output Directory"
msgstr "Directorio de salida"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:262
msgid "Edit Queries"
msgstr "Editar búsquedas"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:348
msgid "Output Fields"
msgstr "Campos de salida"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:400
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:460
msgid "Output fields"
msgstr "Campos de salida"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:438
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:605
msgid "Download"
msgstr "Descargar"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:439
msgid "View as: "
msgstr "Visualizar como: "
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:533
msgid "Job"
msgstr "Tarea"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:588
msgid "Total"
msgstr "Total"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:653
msgid "All"
msgstr "Todas"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:654
msgid "None"
msgstr "Ninguno"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:672
msgid "Manually"
msgstr "Manualmente"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:674
msgid "Daily"
msgstr "Diariamente"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:676
msgid "Weekly"
msgstr "Semanalmente"
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:678
msgid "Monthly"
msgstr "Mensualmente"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:192
msgid "Edit Export Job"
msgstr "Editar la tarea de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:239
msgid "Query Results"
msgstr "Resultados de la búsqueda"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:246
msgid "Export Job Queries"
msgstr "Búsquedas de la tarea de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:320
msgid "New Query"
msgstr "Nueva búsqueda"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:325
msgid "Edit Query"
msgstr "Editar búsqueda"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:356
msgid "Export Job Results"
msgstr "Resultados de la tarea de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:389
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:423
msgid "Export Job Result"
msgstr "Resultado de la tarea de exportación"
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:465
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:500
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:515
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:530
msgid "You are not authorised to access this resource."
msgstr "No está autorizado a acceder a este recurso."
#: modules/bibcirculation/lib/bibcirculation_utils.py:399
#: modules/bibcirculation/lib/bibcirculation_templates.py:8849
#: modules/bibcirculation/lib/bibcirculation_templates.py:10425
msgid "Loan information"
msgstr "Información de préstamo"
#: modules/bibcirculation/lib/bibcirculation_utils.py:403
#, fuzzy
msgid "This book has been sent to you:"
msgstr "Se le ha enviado este libro:"
#: modules/bibcirculation/lib/bibcirculation_utils.py:429
#: modules/bibcirculation/lib/bibcirculation_templates.py:1756
#: modules/bibcirculation/lib/bibcirculation_templates.py:2102
#: modules/bibcirculation/lib/bibcirculation_templates.py:5993
#: modules/bibcirculation/lib/bibcirculation_templates.py:16119
msgid "Author"
msgstr "Autor"
#: modules/bibcirculation/lib/bibcirculation_utils.py:430
msgid "Editor"
msgstr "Editor"
#: modules/bibcirculation/lib/bibcirculation_utils.py:431
#: modules/bibcirculation/lib/bibcirculation_templates.py:1759
#: modules/bibcirculation/lib/bibcirculation_templates.py:2745
#: modules/bibcirculation/lib/bibcirculation_templates.py:3098
#: modules/bibcirculation/lib/bibcirculation_templates.py:5995
#: modules/bibcirculation/lib/bibcirculation_templates.py:6099
#: modules/bibcirculation/lib/bibcirculation_templates.py:7156
#: modules/bibcirculation/lib/bibcirculation_templates.py:7435
#: modules/bibcirculation/lib/bibcirculation_templates.py:8087
#: modules/bibcirculation/lib/bibcirculation_templates.py:8244
#: modules/bibcirculation/lib/bibcirculation_templates.py:9599
#: modules/bibcirculation/lib/bibcirculation_templates.py:9838
#: modules/bibcirculation/lib/bibcirculation_templates.py:10074
#: modules/bibcirculation/lib/bibcirculation_templates.py:10317
#: modules/bibcirculation/lib/bibcirculation_templates.py:10529
#: modules/bibcirculation/lib/bibcirculation_templates.py:10752
#: modules/bibcirculation/lib/bibcirculation_templates.py:11211
#: modules/bibcirculation/lib/bibcirculation_templates.py:11357
#: modules/bibcirculation/lib/bibcirculation_templates.py:11862
#: modules/bibcirculation/lib/bibcirculation_templates.py:11956
#: modules/bibcirculation/lib/bibcirculation_templates.py:12157
#: modules/bibcirculation/lib/bibcirculation_templates.py:12839
#: modules/bibcirculation/lib/bibcirculation_templates.py:12940
#: modules/bibcirculation/lib/bibcirculation_templates.py:13612
#: modules/bibcirculation/lib/bibcirculation_templates.py:13871
#: modules/bibcirculation/lib/bibcirculation_templates.py:14926
#: modules/bibcirculation/lib/bibcirculation_templates.py:15149
#: modules/bibcirculation/lib/bibcirculation_templates.py:15425
#: modules/bibcirculation/lib/bibcirculation_templates.py:16846
#: modules/bibcirculation/lib/bibcirculation_templates.py:17033
#: modules/bibcirculation/lib/bibcirculation_templates.py:17917
msgid "ISBN"
msgstr "ISBN"
#: modules/bibcirculation/lib/bibcirculation_utils.py:455
#: modules/bibcirculation/lib/bibcirculation_templates.py:2389
#: modules/bibcirculation/lib/bibcirculation_templates.py:2506
#: modules/bibcirculation/lib/bibcirculation_templates.py:2738
#: modules/bibcirculation/lib/bibcirculation_templates.py:4456
#: modules/bibcirculation/lib/bibcirculation_templates.py:5604
#: modules/bibcirculation/lib/bibcirculation_templates.py:6189
#: modules/bibcirculation/lib/bibcirculation_templates.py:6238
#: modules/bibcirculation/lib/bibcirculation_templates.py:6535
#: modules/bibcirculation/lib/bibcirculation_templates.py:9027
#: modules/bibcirculation/lib/bibcirculation_templates.py:9272
#: modules/bibcirculation/lib/bibcirculation_templates.py:9881
#: modules/bibcirculation/lib/bibcirculation_templates.py:10359
#: modules/bibcirculation/lib/bibcirculation_templates.py:11223
#: modules/bibcirculation/lib/bibcirculation_templates.py:12212
#: modules/bibcirculation/lib/bibcirculation_templates.py:12996
#: modules/bibcirculation/lib/bibcirculation_templates.py:15517
msgid "Mailbox"
msgstr "Buzón"
#: modules/bibcirculation/lib/bibcirculation_utils.py:456
#: modules/bibcirculation/lib/bibcirculation_templates.py:2388
#: modules/bibcirculation/lib/bibcirculation_templates.py:2505
#: modules/bibcirculation/lib/bibcirculation_templates.py:2737
#: modules/bibcirculation/lib/bibcirculation_templates.py:3941
#: modules/bibcirculation/lib/bibcirculation_templates.py:4044
#: modules/bibcirculation/lib/bibcirculation_templates.py:4267
#: modules/bibcirculation/lib/bibcirculation_templates.py:4328
#: modules/bibcirculation/lib/bibcirculation_templates.py:4455
#: modules/bibcirculation/lib/bibcirculation_templates.py:5603
#: modules/bibcirculation/lib/bibcirculation_templates.py:6188
#: modules/bibcirculation/lib/bibcirculation_templates.py:6237
#: modules/bibcirculation/lib/bibcirculation_templates.py:6534
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6702
#: modules/bibcirculation/lib/bibcirculation_templates.py:6931
#: modules/bibcirculation/lib/bibcirculation_templates.py:7030
#: modules/bibcirculation/lib/bibcirculation_templates.py:9026
#: modules/bibcirculation/lib/bibcirculation_templates.py:9271
#: modules/bibcirculation/lib/bibcirculation_templates.py:9880
#: modules/bibcirculation/lib/bibcirculation_templates.py:10358
#: modules/bibcirculation/lib/bibcirculation_templates.py:11222
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14143
#: modules/bibcirculation/lib/bibcirculation_templates.py:14392
#: modules/bibcirculation/lib/bibcirculation_templates.py:14463
#: modules/bibcirculation/lib/bibcirculation_templates.py:14714
#: modules/bibcirculation/lib/bibcirculation_templates.py:15516
msgid "Address"
msgstr "Dirección"
#: modules/bibcirculation/lib/bibcirculation_utils.py:467
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:673
#: modules/bibcirculation/lib/bibcirculation_templates.py:2513
#: modules/bibcirculation/lib/bibcirculation_templates.py:3162
#: modules/bibcirculation/lib/bibcirculation_templates.py:3603
#: modules/bibcirculation/lib/bibcirculation_templates.py:3808
#: modules/bibcirculation/lib/bibcirculation_templates.py:4787
#: modules/bibcirculation/lib/bibcirculation_templates.py:4980
#: modules/bibcirculation/lib/bibcirculation_templates.py:5158
#: modules/bibcirculation/lib/bibcirculation_templates.py:5430
#: modules/bibcirculation/lib/bibcirculation_templates.py:7452
#: modules/bibcirculation/lib/bibcirculation_templates.py:8853
#: modules/bibcirculation/lib/bibcirculation_templates.py:9331
#: modules/bibcirculation/lib/bibcirculation_templates.py:10427
#: modules/bibcirculation/lib/bibcirculation_templates.py:11519
#: modules/bibcirculation/lib/bibcirculation_templates.py:12463
#: modules/bibcirculation/lib/bibcirculation_templates.py:12564
#: modules/bibcirculation/lib/bibcirculation_templates.py:13261
#: modules/bibcirculation/lib/bibcirculation_templates.py:13373
#: modules/bibcirculation/lib/bibcirculation_templates.py:15588
#: modules/bibcirculation/lib/bibcirculation_templates.py:17934
#: modules/bibcirculation/lib/bibcirculation_templates.py:18038
msgid "Due date"
msgstr "Devolver el"
#: modules/bibcirculation/lib/bibcirculation_utils.py:513
msgid "List of pending hold requests"
msgstr "Lista de peticiones de reserva pendientes"
#: modules/bibcirculation/lib/bibcirculation_utils.py:533
#: modules/bibcirculation/lib/bibcirculation_templates.py:1754
#: modules/bibcirculation/lib/bibcirculation_templates.py:2783
#: modules/bibcirculation/lib/bibcirculation_templates.py:2847
#: modules/bibcirculation/lib/bibcirculation_templates.py:2956
#: modules/bibcirculation/lib/bibcirculation_templates.py:3711
#: modules/bibcirculation/lib/bibcirculation_templates.py:3803
#: modules/bibcirculation/lib/bibcirculation_templates.py:4976
#: modules/bibcirculation/lib/bibcirculation_templates.py:5154
#: modules/bibcirculation/lib/bibcirculation_templates.py:5427
#: modules/bibcirculation/lib/bibcirculation_templates.py:11513
#: modules/bibcirculation/lib/bibcirculation_templates.py:11633
msgid "Borrower"
msgstr "Usuario"
#: modules/bibcirculation/lib/bibcirculation_utils.py:534
#: modules/bibcirculation/lib/bibcirculation_templates.py:671
#: modules/bibcirculation/lib/bibcirculation_templates.py:769
#: modules/bibcirculation/lib/bibcirculation_templates.py:860
#: modules/bibcirculation/lib/bibcirculation_templates.py:1148
#: modules/bibcirculation/lib/bibcirculation_templates.py:1348
#: modules/bibcirculation/lib/bibcirculation_templates.py:1523
#: modules/bibcirculation/lib/bibcirculation_templates.py:1755
#: modules/bibcirculation/lib/bibcirculation_templates.py:1798
#: modules/bibcirculation/lib/bibcirculation_templates.py:2511
#: modules/bibcirculation/lib/bibcirculation_templates.py:2795
#: modules/bibcirculation/lib/bibcirculation_templates.py:2848
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3598
#: modules/bibcirculation/lib/bibcirculation_templates.py:4668
#: modules/bibcirculation/lib/bibcirculation_templates.py:4784
#: modules/bibcirculation/lib/bibcirculation_templates.py:4977
#: modules/bibcirculation/lib/bibcirculation_templates.py:5155
#: modules/bibcirculation/lib/bibcirculation_templates.py:5633
#: modules/bibcirculation/lib/bibcirculation_templates.py:10910
#: modules/bibcirculation/lib/bibcirculation_templates.py:11514
#: modules/bibcirculation/lib/bibcirculation_templates.py:11634
#: modules/bibcirculation/lib/bibcirculation_templates.py:15583
#: modules/bibcirculation/lib/bibcirculation_templates.py:15863
msgid "Item"
-msgstr "Ítem"
+msgstr "Elemento"
#: modules/bibcirculation/lib/bibcirculation_utils.py:535
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:1149
#: modules/bibcirculation/lib/bibcirculation_templates.py:1349
#: modules/bibcirculation/lib/bibcirculation_templates.py:2512
#: modules/bibcirculation/lib/bibcirculation_templates.py:2958
#: modules/bibcirculation/lib/bibcirculation_templates.py:3163
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3600
#: modules/bibcirculation/lib/bibcirculation_templates.py:3713
#: modules/bibcirculation/lib/bibcirculation_templates.py:3805
#: modules/bibcirculation/lib/bibcirculation_templates.py:4670
#: modules/bibcirculation/lib/bibcirculation_templates.py:7453
#: modules/bibcirculation/lib/bibcirculation_templates.py:7571
#: modules/bibcirculation/lib/bibcirculation_templates.py:7810
#: modules/bibcirculation/lib/bibcirculation_templates.py:8106
#: modules/bibcirculation/lib/bibcirculation_templates.py:8270
#: modules/bibcirculation/lib/bibcirculation_templates.py:8478
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:10638
#: modules/bibcirculation/lib/bibcirculation_templates.py:10820
#: modules/bibcirculation/lib/bibcirculation_templates.py:12419
#: modules/bibcirculation/lib/bibcirculation_templates.py:12562
#: modules/bibcirculation/lib/bibcirculation_templates.py:12647
#: modules/bibcirculation/lib/bibcirculation_templates.py:13217
#: modules/bibcirculation/lib/bibcirculation_templates.py:13371
#: modules/bibcirculation/lib/bibcirculation_templates.py:13458
#: modules/bibcirculation/lib/bibcirculation_templates.py:15864
#: modules/bibcirculation/lib/bibcirculation_templates.py:17935
#: modules/bibcirculation/lib/bibcirculation_templates.py:18039
msgid "Library"
msgstr "Biblioteca"
#: modules/bibcirculation/lib/bibcirculation_utils.py:536
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:1150
#: modules/bibcirculation/lib/bibcirculation_templates.py:1350
#: modules/bibcirculation/lib/bibcirculation_templates.py:2512
#: modules/bibcirculation/lib/bibcirculation_templates.py:2959
#: modules/bibcirculation/lib/bibcirculation_templates.py:3168
#: modules/bibcirculation/lib/bibcirculation_templates.py:3507
#: modules/bibcirculation/lib/bibcirculation_templates.py:3601
#: modules/bibcirculation/lib/bibcirculation_templates.py:3714
#: modules/bibcirculation/lib/bibcirculation_templates.py:3806
#: modules/bibcirculation/lib/bibcirculation_templates.py:4671
#: modules/bibcirculation/lib/bibcirculation_templates.py:6003
#: modules/bibcirculation/lib/bibcirculation_templates.py:7458
#: modules/bibcirculation/lib/bibcirculation_templates.py:7613
#: modules/bibcirculation/lib/bibcirculation_templates.py:7811
#: modules/bibcirculation/lib/bibcirculation_templates.py:8107
#: modules/bibcirculation/lib/bibcirculation_templates.py:8297
#: modules/bibcirculation/lib/bibcirculation_templates.py:8479
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:15865
#: modules/bibcirculation/lib/bibcirculation_templates.py:17940
#: modules/bibcirculation/lib/bibcirculation_templates.py:18040
msgid "Location"
msgstr "Lugar"
#: modules/bibcirculation/lib/bibcirculation_utils.py:537
#: modules/bibcirculation/lib/bibcirculation_templates.py:977
#: modules/bibcirculation/lib/bibcirculation_templates.py:1153
#: modules/bibcirculation/lib/bibcirculation_templates.py:1351
#: modules/bibcirculation/lib/bibcirculation_templates.py:1525
#: modules/bibcirculation/lib/bibcirculation_templates.py:1800
#: modules/bibcirculation/lib/bibcirculation_templates.py:2850
#: modules/bibcirculation/lib/bibcirculation_templates.py:2960
#: modules/bibcirculation/lib/bibcirculation_templates.py:3507
#: modules/bibcirculation/lib/bibcirculation_templates.py:3715
#: modules/bibcirculation/lib/bibcirculation_templates.py:4672
#: modules/bibcirculation/lib/bibcirculation_templates.py:5288
#: modules/bibcirculation/lib/bibcirculation_templates.py:15866
#: modules/bibcirculation/lib/bibcirculation_templates.py:17765
msgid "From"
msgstr "Desde"
#: modules/bibcirculation/lib/bibcirculation_utils.py:538
#: modules/bibcirculation/lib/bibcirculation_templates.py:977
#: modules/bibcirculation/lib/bibcirculation_templates.py:1154
#: modules/bibcirculation/lib/bibcirculation_templates.py:1352
#: modules/bibcirculation/lib/bibcirculation_templates.py:1526
#: modules/bibcirculation/lib/bibcirculation_templates.py:1801
#: modules/bibcirculation/lib/bibcirculation_templates.py:2851
#: modules/bibcirculation/lib/bibcirculation_templates.py:2961
#: modules/bibcirculation/lib/bibcirculation_templates.py:3508
#: modules/bibcirculation/lib/bibcirculation_templates.py:3716
#: modules/bibcirculation/lib/bibcirculation_templates.py:4673
#: modules/bibcirculation/lib/bibcirculation_templates.py:5290
#: modules/bibcirculation/lib/bibcirculation_templates.py:15867
#: modules/bibcirculation/lib/bibcirculation_templates.py:17766
#: modules/bibknowledge/lib/bibknowledge_templates.py:363
msgid "To"
msgstr "Hasta"
#: modules/bibcirculation/lib/bibcirculation_utils.py:539
#: modules/bibcirculation/lib/bibcirculation_templates.py:770
#: modules/bibcirculation/lib/bibcirculation_templates.py:1155
#: modules/bibcirculation/lib/bibcirculation_templates.py:1353
#: modules/bibcirculation/lib/bibcirculation_templates.py:1527
#: modules/bibcirculation/lib/bibcirculation_templates.py:1802
#: modules/bibcirculation/lib/bibcirculation_templates.py:2852
#: modules/bibcirculation/lib/bibcirculation_templates.py:2962
#: modules/bibcirculation/lib/bibcirculation_templates.py:3508
#: modules/bibcirculation/lib/bibcirculation_templates.py:3717
#: modules/bibcirculation/lib/bibcirculation_templates.py:4674
#: modules/bibcirculation/lib/bibcirculation_templates.py:12357
#: modules/bibcirculation/lib/bibcirculation_templates.py:12420
#: modules/bibcirculation/lib/bibcirculation_templates.py:12563
#: modules/bibcirculation/lib/bibcirculation_templates.py:12648
#: modules/bibcirculation/lib/bibcirculation_templates.py:13152
#: modules/bibcirculation/lib/bibcirculation_templates.py:13218
#: modules/bibcirculation/lib/bibcirculation_templates.py:13371
#: modules/bibcirculation/lib/bibcirculation_templates.py:13458
#: modules/bibcirculation/lib/bibcirculation_templates.py:15585
#: modules/bibcirculation/lib/bibcirculation_templates.py:15868
msgid "Request date"
msgstr "Fecha de petición"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:117
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:162
msgid "You are not authorized to use loans."
msgstr "No está autorizado a hacer uso del préstamo."
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:167
#: modules/bibcirculation/lib/bibcirculation_templates.py:633
msgid "Loans - historical overview"
msgstr "Histórico de préstamos"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:222
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:317
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:404
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:493
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:565
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:640
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:692
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:774
msgid "You are not authorized to use ill."
msgstr "No está autorizado a hacer uso del préstamo interbibliotecario."
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:232
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:340
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:426
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:723
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:792
#: modules/bibcirculation/lib/bibcirculation_templates.py:11346
msgid "Interlibrary loan request for books"
msgstr "Peticiones de préstamo interbibliotecario de libros"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:503
msgid "Purchase request"
msgstr "Petición de compra"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:570
msgid ""
"Payment method information is mandatory. Please, type your budget code or "
"tick the 'cash' checkbox."
msgstr ""
"La información sobre el método de pago es obligatoria. Por favor, "
"introduzca su código de presupuesto o marque la casilla de 'efectivo'."
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:601
#: modules/bibcirculation/lib/bibcirculation_templates.py:206
msgid "Register purchase request"
msgstr "Registrar petición de compra"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:649
msgid "Interlibrary loan request for articles"
msgstr "Petición de préstamo interbibliotecario de artículos"
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:721
msgid "Wrong user id"
msgstr "Identificador de usuario incorrecto"
#: modules/bibcirculation/lib/bibcirculation_templates.py:110
msgid "Main navigation links"
msgstr "Enlaces principales de navegación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
msgid "Loan"
msgstr "Préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
#: modules/bibcirculation/lib/bibcirculation_templates.py:4845
#: modules/bibcirculation/lib/bibcirculation_templates.py:5483
msgid "Return"
msgstr "Devolución"
#: modules/bibcirculation/lib/bibcirculation_templates.py:112
#: modules/bibcirculation/lib/bibcirculation_templates.py:371
#: modules/bibcirculation/lib/bibcirculation_templates.py:9355
msgid "Request"
msgstr "Petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:112
msgid "Borrowers"
msgstr "Usuarios"
#: modules/bibcirculation/lib/bibcirculation_templates.py:154
#: modules/bibcirculation/lib/bibcirculation_templates.py:208
msgid "Lists"
msgstr "Listas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:155
msgid "Last loans"
msgstr "Últimos préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:156
msgid "Overdue loans"
msgstr "Préstamos vencidos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:157
msgid "Items on shelf with holds"
msgstr "Elementos en estantería con reservas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:158
msgid "Items on loan with holds"
msgstr "Elementos en préstamo con reservas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:159
msgid "Overdue loans with holds"
msgstr "Préstamos vencidos con reservas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:160
msgid "Ordered books"
msgstr "Libros pedidos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:160
msgid "Others"
msgstr "Otros"
#: modules/bibcirculation/lib/bibcirculation_templates.py:161
#: modules/bibcirculation/lib/bibcirculation_templates.py:6836
#: modules/bibcirculation/lib/bibcirculation_templates.py:8643
msgid "Libraries"
msgstr "Bibliotecas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:162
msgid "Add new library"
msgstr "Añadir una nueva biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:163
msgid "Update info"
msgstr "Actualizar información"
#: modules/bibcirculation/lib/bibcirculation_templates.py:164
msgid "Acquisitions"
msgstr "Adquisiciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:165
msgid "List of ordered books"
msgstr "Lista de libros pedidos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:166
msgid "Order new book"
msgstr "Pedir un libro nuevo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:167
msgid "Vendors"
msgstr "Proveedores"
#: modules/bibcirculation/lib/bibcirculation_templates.py:168
msgid "Add new vendor"
msgstr "Añadir un nuevo proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:203
#: modules/bibcirculation/lib/bibcirculation_templates.py:4606
#: modules/bibcirculation/lib/bibcirculation_templates.py:4613
msgid "ILL"
msgstr "PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:204
msgid "Register Book request"
msgstr "Registrar petición de libro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:205
msgid "Register Article"
msgstr "Registrar artículo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:209
msgid "Purchase"
msgstr "Compra"
#: modules/bibcirculation/lib/bibcirculation_templates.py:216
msgid "Admin guide"
msgstr "Guía de administración"
#: modules/bibcirculation/lib/bibcirculation_templates.py:217
msgid "Contact Support"
msgstr "Contactar con el soporte"
#: modules/bibcirculation/lib/bibcirculation_templates.py:267
msgid "This record does not exist."
msgstr "Este registro no existe."
#: modules/bibcirculation/lib/bibcirculation_templates.py:271
#, fuzzy
msgid "This record has no copies."
-msgstr "Este ítem no tiene ejemplares."
+msgstr "Este registro no tiene ejemplares."
#: modules/bibcirculation/lib/bibcirculation_templates.py:278
msgid "Add a new copy"
msgstr "Añadir una nueva copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:291
msgid "ILL services"
msgstr "Servicios de préstamo interbibliotecario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:295
#, python-format
msgid ""
"All the copies of %(strong_tag_open)s%(title)s%(strong_tag_close)s are "
"missing. You can request a copy using %(strong_tag_open)s%(ill_link)s"
"%(strong_tag_close)s"
msgstr ""
"Todos los ejemplares de %(strong_tag_open)s%(title)s%(strong_tag_close)s "
"están desaparecidos. Puede pedir un ejemplar mediante %(strong_tag_open)s"
"%(ill_link)s%(strong_tag_close)s"
#: modules/bibcirculation/lib/bibcirculation_templates.py:302
#: modules/bibcirculation/lib/bibcirculation_templates.py:9308
msgid "This item has no holdings."
-msgstr "Este ítem no tiene ejemplares."
+msgstr "Este elemento no tiene ejemplares."
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:1354
#: modules/bibcirculation/lib/bibcirculation_templates.py:11641
msgid "Options"
msgstr "Opciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:3173
#: modules/bibcirculation/lib/bibcirculation_templates.py:6010
#: modules/bibcirculation/lib/bibcirculation_templates.py:7463
#: modules/bibcirculation/lib/bibcirculation_templates.py:7661
#: modules/bibcirculation/lib/bibcirculation_templates.py:7814
#: modules/bibcirculation/lib/bibcirculation_templates.py:8108
#: modules/bibcirculation/lib/bibcirculation_templates.py:8328
#: modules/bibcirculation/lib/bibcirculation_templates.py:8482
#: modules/bibcirculation/lib/bibcirculation_templates.py:8855
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:17945
#: modules/bibcirculation/lib/bibcirculation_templates.py:18041
msgid "Loan period"
msgstr "Período de préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:1617
#: modules/bibcirculation/lib/bibcirculation_templates.py:2511
#: modules/bibcirculation/lib/bibcirculation_templates.py:3159
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3599
#: modules/bibcirculation/lib/bibcirculation_templates.py:3712
#: modules/bibcirculation/lib/bibcirculation_templates.py:3804
#: modules/bibcirculation/lib/bibcirculation_templates.py:4785
#: modules/bibcirculation/lib/bibcirculation_templates.py:4978
#: modules/bibcirculation/lib/bibcirculation_templates.py:5156
#: modules/bibcirculation/lib/bibcirculation_templates.py:5428
#: modules/bibcirculation/lib/bibcirculation_templates.py:5636
#: modules/bibcirculation/lib/bibcirculation_templates.py:6048
#: modules/bibcirculation/lib/bibcirculation_templates.py:7450
#: modules/bibcirculation/lib/bibcirculation_templates.py:7570
#: modules/bibcirculation/lib/bibcirculation_templates.py:7809
#: modules/bibcirculation/lib/bibcirculation_templates.py:8104
#: modules/bibcirculation/lib/bibcirculation_templates.py:8269
#: modules/bibcirculation/lib/bibcirculation_templates.py:8477
#: modules/bibcirculation/lib/bibcirculation_templates.py:8851
#: modules/bibcirculation/lib/bibcirculation_templates.py:9042
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:9600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9839
#: modules/bibcirculation/lib/bibcirculation_templates.py:10075
#: modules/bibcirculation/lib/bibcirculation_templates.py:10318
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10814
#: modules/bibcirculation/lib/bibcirculation_templates.py:12376
#: modules/bibcirculation/lib/bibcirculation_templates.py:12482
#: modules/bibcirculation/lib/bibcirculation_templates.py:12580
#: modules/bibcirculation/lib/bibcirculation_templates.py:12663
#: modules/bibcirculation/lib/bibcirculation_templates.py:17932
#: modules/bibcirculation/lib/bibcirculation_templates.py:18036
msgid "Barcode"
msgstr "Código de barras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:402
msgid "See this book on BibCirculation"
msgstr "Ver este libro en BibCirculation"
#: modules/bibcirculation/lib/bibcirculation_templates.py:453
#, fuzzy
msgid "This item is not for loan."
-msgstr "Este ítem no tiene ejemplares."
+msgstr "Este elemento no está disponible para préstamo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:458
msgid "Server busy. Please, try again in a few seconds."
msgstr "Servidor ocupado. Por favor, inténtelo de nuevo en unos segundos."
#: modules/bibcirculation/lib/bibcirculation_templates.py:463
msgid ""
"Your request has been registered and the document will be sent to you via "
"internal mail."
msgstr ""
"Su petición ha sido registrada y el documento se le enviará por correo "
"interno."
#: modules/bibcirculation/lib/bibcirculation_templates.py:468
msgid "Your request has been registered."
msgstr "Su petición ha sido registrada."
#: modules/bibcirculation/lib/bibcirculation_templates.py:473
#: modules/bibcirculation/lib/bibcirculation_templates.py:481
msgid "It is not possible to validate your request. "
msgstr "No es posible validar su petición. "
#: modules/bibcirculation/lib/bibcirculation_templates.py:474
#: modules/bibcirculation/lib/bibcirculation_templates.py:482
msgid "Your office address is not available. "
msgstr "Su dirección de oficina no está disponible. "
#: modules/bibcirculation/lib/bibcirculation_templates.py:475
#: modules/bibcirculation/lib/bibcirculation_templates.py:483
#, python-format
msgid "Please contact %(librarian_email)s"
msgstr "Póngase en contacto con %(librarian_email)s"
#: modules/bibcirculation/lib/bibcirculation_templates.py:489
msgid "Your purchase request has been registered."
msgstr "Su petición de compra ha sido registrada."
#: modules/bibcirculation/lib/bibcirculation_templates.py:509
msgid "No messages to be displayed"
msgstr "No hay mensajes para mostrar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:539
#: modules/bibcirculation/lib/bibcirculation_templates.py:544
msgid "0 borrowers found."
msgstr "No se ha encontrado ningún usuario."
#: modules/bibcirculation/lib/bibcirculation_templates.py:539
msgid "Search by CCID."
msgstr "Buscar por CCID."
#: modules/bibcirculation/lib/bibcirculation_templates.py:543
msgid "Register new borrower."
msgstr "Dar de alta un nuevo usuario."
#: modules/bibcirculation/lib/bibcirculation_templates.py:569
msgid "Borrower(s)"
msgstr "Usuario(s)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:607
#: modules/bibcirculation/lib/bibcirculation_templates.py:894
#: modules/bibcirculation/lib/bibcirculation_templates.py:979
#: modules/bibcirculation/lib/bibcirculation_templates.py:1123
#: modules/bibcirculation/lib/bibcirculation_templates.py:1268
#: modules/bibcirculation/lib/bibcirculation_templates.py:1325
#: modules/bibcirculation/lib/bibcirculation_templates.py:1456
#: modules/bibcirculation/lib/bibcirculation_templates.py:1577
#: modules/bibcirculation/lib/bibcirculation_templates.py:1977
#: modules/bibcirculation/lib/bibcirculation_templates.py:2052
#: modules/bibcirculation/lib/bibcirculation_templates.py:2143
#: modules/bibcirculation/lib/bibcirculation_templates.py:2392
#: modules/bibcirculation/lib/bibcirculation_templates.py:2573
#: modules/bibcirculation/lib/bibcirculation_templates.py:2814
#: modules/bibcirculation/lib/bibcirculation_templates.py:2905
#: modules/bibcirculation/lib/bibcirculation_templates.py:3012
#: modules/bibcirculation/lib/bibcirculation_templates.py:3457
#: modules/bibcirculation/lib/bibcirculation_templates.py:3547
#: modules/bibcirculation/lib/bibcirculation_templates.py:3660
#: modules/bibcirculation/lib/bibcirculation_templates.py:3758
#: modules/bibcirculation/lib/bibcirculation_templates.py:3852
#: modules/bibcirculation/lib/bibcirculation_templates.py:3964
#: modules/bibcirculation/lib/bibcirculation_templates.py:4146
#: modules/bibcirculation/lib/bibcirculation_templates.py:4354
#: modules/bibcirculation/lib/bibcirculation_templates.py:4615
#: modules/bibcirculation/lib/bibcirculation_templates.py:4725
#: modules/bibcirculation/lib/bibcirculation_templates.py:4896
#: modules/bibcirculation/lib/bibcirculation_templates.py:4953
#: modules/bibcirculation/lib/bibcirculation_templates.py:5072
#: modules/bibcirculation/lib/bibcirculation_templates.py:5129
#: modules/bibcirculation/lib/bibcirculation_templates.py:5253
#: modules/bibcirculation/lib/bibcirculation_templates.py:5365
#: modules/bibcirculation/lib/bibcirculation_templates.py:5524
#: modules/bibcirculation/lib/bibcirculation_templates.py:5672
#: modules/bibcirculation/lib/bibcirculation_templates.py:5771
#: modules/bibcirculation/lib/bibcirculation_templates.py:5871
#: modules/bibcirculation/lib/bibcirculation_templates.py:6191
#: modules/bibcirculation/lib/bibcirculation_templates.py:6258
#: modules/bibcirculation/lib/bibcirculation_templates.py:6278
#: modules/bibcirculation/lib/bibcirculation_templates.py:6539
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6705
#: modules/bibcirculation/lib/bibcirculation_templates.py:6806
#: modules/bibcirculation/lib/bibcirculation_templates.py:6866
#: modules/bibcirculation/lib/bibcirculation_templates.py:6962
#: modules/bibcirculation/lib/bibcirculation_templates.py:7032
#: modules/bibcirculation/lib/bibcirculation_templates.py:7179
#: modules/bibcirculation/lib/bibcirculation_templates.py:7251
#: modules/bibcirculation/lib/bibcirculation_templates.py:7309
#: modules/bibcirculation/lib/bibcirculation_templates.py:7725
#: modules/bibcirculation/lib/bibcirculation_templates.py:7817
#: modules/bibcirculation/lib/bibcirculation_templates.py:7948
#: modules/bibcirculation/lib/bibcirculation_templates.py:8005
#: modules/bibcirculation/lib/bibcirculation_templates.py:8156
#: modules/bibcirculation/lib/bibcirculation_templates.py:8395
#: modules/bibcirculation/lib/bibcirculation_templates.py:8485
#: modules/bibcirculation/lib/bibcirculation_templates.py:8599
#: modules/bibcirculation/lib/bibcirculation_templates.py:8675
#: modules/bibcirculation/lib/bibcirculation_templates.py:8779
#: modules/bibcirculation/lib/bibcirculation_templates.py:8901
#: modules/bibcirculation/lib/bibcirculation_templates.py:9071
#: modules/bibcirculation/lib/bibcirculation_templates.py:9190
#: modules/bibcirculation/lib/bibcirculation_templates.py:9461
#: modules/bibcirculation/lib/bibcirculation_templates.py:9946
#: modules/bibcirculation/lib/bibcirculation_templates.py:10430
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10822
#: modules/bibcirculation/lib/bibcirculation_templates.py:11073
#: modules/bibcirculation/lib/bibcirculation_templates.py:11238
#: modules/bibcirculation/lib/bibcirculation_templates.py:11436
#: modules/bibcirculation/lib/bibcirculation_templates.py:12703
#: modules/bibcirculation/lib/bibcirculation_templates.py:13518
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13957
#: modules/bibcirculation/lib/bibcirculation_templates.py:14072
#: modules/bibcirculation/lib/bibcirculation_templates.py:14145
#: modules/bibcirculation/lib/bibcirculation_templates.py:14248
#: modules/bibcirculation/lib/bibcirculation_templates.py:14315
#: modules/bibcirculation/lib/bibcirculation_templates.py:14393
#: modules/bibcirculation/lib/bibcirculation_templates.py:14464
#: modules/bibcirculation/lib/bibcirculation_templates.py:14571
#: modules/bibcirculation/lib/bibcirculation_templates.py:14637
#: modules/bibcirculation/lib/bibcirculation_templates.py:14735
#: modules/bibcirculation/lib/bibcirculation_templates.py:14818
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
#: modules/bibcirculation/lib/bibcirculation_templates.py:15532
#: modules/bibcirculation/lib/bibcirculation_templates.py:15685
#: modules/bibcirculation/lib/bibcirculation_templates.py:15788
#: modules/bibcirculation/lib/bibcirculation_templates.py:16055
#: modules/bibcirculation/lib/bibcirculation_templates.py:16161
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17387
#: modules/bibcirculation/lib/bibcirculation_templates.py:17788
#: modules/bibcirculation/lib/bibcirculation_templates.py:18076
#: modules/bibknowledge/lib/bibknowledgeadmin.py:142
msgid "Back"
msgstr "Atrás"
#: modules/bibcirculation/lib/bibcirculation_templates.py:628
#: modules/bibcirculation/lib/bibcirculation_templates.py:4879
msgid "Renew all loans"
msgstr "Renovar todos los préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:647
msgid "You don't have any book on loan."
msgstr "No tiene ningún libro en préstamo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:672
#: modules/bibcirculation/lib/bibcirculation_templates.py:3602
#: modules/bibcirculation/lib/bibcirculation_templates.py:3807
#: modules/bibcirculation/lib/bibcirculation_templates.py:4979
#: modules/bibcirculation/lib/bibcirculation_templates.py:5157
#: modules/bibcirculation/lib/bibcirculation_templates.py:5429
msgid "Loaned on"
msgstr "Prestado el"
#: modules/bibcirculation/lib/bibcirculation_templates.py:674
#: modules/bibcirculation/lib/bibcirculation_templates.py:772
msgid "Action(s)"
msgstr "Acciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:686
#: modules/bibcirculation/lib/bibcirculation_templates.py:4844
#: modules/bibcirculation/lib/bibcirculation_templates.py:5482
msgid "Renew"
msgstr "Renovar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:739
#: modules/bibcirculation/lib/bibcirculation_templates.py:768
msgid "Your Requests"
msgstr "Sus peticiones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:740
msgid "You don't have any request (waiting or pending)."
msgstr "No tiene ninguna petición (en espera o pendiente)."
#: modules/bibcirculation/lib/bibcirculation_templates.py:742
#: modules/bibcirculation/lib/bibcirculation_templates.py:823
#: modules/bibcirculation/lib/bibcirculation_templates.py:1017
#: modules/bibcirculation/lib/bibcirculation_templates.py:1054
#: modules/bibcirculation/lib/bibcirculation_templates.py:1497
#: modules/bibcirculation/lib/bibcirculation_templates.py:2605
#: modules/bibcirculation/lib/bibcirculation_templates.py:2640
#: modules/bibcirculation/lib/bibcirculation_templates.py:2746
#: modules/bibcirculation/lib/bibcirculation_templates.py:6313
#: modules/bibcirculation/lib/bibcirculation_templates.py:6740
#: modules/bibcirculation/lib/bibcirculation_templates.py:7072
#: modules/bibcirculation/lib/bibcirculation_templates.py:7860
#: modules/bibcirculation/lib/bibcirculation_templates.py:8531
#: modules/bibcirculation/lib/bibcirculation_templates.py:9502
#: modules/bibcirculation/lib/bibcirculation_templates.py:9979
#: modules/bibcirculation/lib/bibcirculation_templates.py:10864
#: modules/bibcirculation/lib/bibcirculation_templates.py:11280
#: modules/bibcirculation/lib/bibcirculation_templates.py:11470
#: modules/bibcirculation/lib/bibcirculation_templates.py:13998
#: modules/bibcirculation/lib/bibcirculation_templates.py:14184
#: modules/bibcirculation/lib/bibcirculation_templates.py:14503
#: modules/bibcirculation/lib/bibcirculation_templates.py:15825
msgid "Back to home"
msgstr "Volver al inicio"
#: modules/bibcirculation/lib/bibcirculation_templates.py:861
msgid "Loaned"
msgstr "Prestado"
#: modules/bibcirculation/lib/bibcirculation_templates.py:862
msgid "Returned"
msgstr "Devuelto"
#: modules/bibcirculation/lib/bibcirculation_templates.py:863
msgid "Renewalls"
msgstr "Renovaciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:976
msgid "Enter your period of interest"
msgstr "Introduzca el período en el que está interesado"
#: modules/bibcirculation/lib/bibcirculation_templates.py:979
#: modules/bibcirculation/lib/bibcirculation_templates.py:2815
#: modules/bibcirculation/lib/bibcirculation_templates.py:4361
#: modules/bibcirculation/lib/bibcirculation_templates.py:5673
#: modules/bibcirculation/lib/bibcirculation_templates.py:5772
#: modules/bibcirculation/lib/bibcirculation_templates.py:5873
#: modules/bibcirculation/lib/bibcirculation_templates.py:6705
#: modules/bibcirculation/lib/bibcirculation_templates.py:8485
#: modules/bibcirculation/lib/bibcirculation_templates.py:8780
#: modules/bibcirculation/lib/bibcirculation_templates.py:9072
#: modules/bibcirculation/lib/bibcirculation_templates.py:9461
#: modules/bibcirculation/lib/bibcirculation_templates.py:11074
#: modules/bibcirculation/lib/bibcirculation_templates.py:14145
#: modules/bibcirculation/lib/bibcirculation_templates.py:14782
#: modules/bibcirculation/lib/bibcirculation_templates.py:15789
msgid "Confirm"
msgstr "Confirmar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1013
#, python-format
msgid "You can see your loans %(x_url_open)shere%(x_url_close)s."
msgstr "Puede ver sus préstamos %(x_url_open)saquí%(x_url_close)s."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1052
msgid "A new loan has been registered."
msgstr "Se ha registrado un nuevo préstamo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1100
msgid "Delete this request?"
msgstr "¿Eliminar esta petición?"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1101
#: modules/bibcirculation/lib/bibcirculation_templates.py:1368
msgid "Request not deleted."
msgstr "La petición no se ha eliminado."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1122
#: modules/bibcirculation/lib/bibcirculation_templates.py:1324
msgid "No more requests are pending."
msgstr "No hay más peticiones pendientes."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1151
msgid "Vol."
msgstr "Vol."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1152
msgid "Ed."
msgstr "Ed."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1156
#: modules/bibcirculation/lib/bibcirculation_templates.py:3188
#: modules/bibcirculation/lib/bibcirculation_templates.py:15869
msgid "Actions"
msgstr "Acciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1222
#: modules/bibcirculation/lib/bibcirculation_templates.py:1417
#: modules/bibcirculation/lib/bibcirculation_templates.py:15939
msgid "Associate barcode"
msgstr "Asocie el código de barras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1496
msgid "No hold requests waiting."
msgstr "No hay reservas pendientes."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1524
#: modules/bibcirculation/lib/bibcirculation_templates.py:1799
#: modules/bibcirculation/lib/bibcirculation_templates.py:4669
msgid "Request status"
-msgstr "Estat de la petición"
+msgstr "Estado de la petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1528
#: modules/bibcirculation/lib/bibcirculation_templates.py:1803
msgid "Request options"
msgstr "Opciones de la petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1558
msgid "Select hold request"
msgstr "Seleccione una petición de reserva"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1634
#: modules/bibcirculation/lib/bibcirculation_templates.py:5366
msgid "Reset"
msgstr "Reiniciar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1680
#, fuzzy, python-format
msgid ""
"The item %(x_strong_tag_open)s%(x_title)s%(x_strong_tag_close)s, with "
"barcode %(x_strong_tag_open)s%(x_barcode)s%(x_strong_tag_close)s, has been "
"returned with success."
msgstr ""
-"Se ha devuelto correctamente el item %(x_title)s, con el código de barras "
+"Se ha devuelto correctamente el elemento %(x_title)s, con el código de barras "
"%(x_barcode)s."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1694
#, python-format
msgid ""
"There are %(x_strong_tag_open)s%(x_number_of_requests)s requests"
"%(x_strong_tag_close)s on the book that has been returned."
msgstr ""
"Hay %(x_strong_tag_open)s%(x_number_of_requests)s peticiones"
"%(x_strong_tag_close)s sobre el libro que se ha devuelto."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1753
msgid "Loan informations"
msgstr "Información del préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1758
#: modules/bibcirculation/lib/bibcirculation_templates.py:2102
#: modules/bibcirculation/lib/bibcirculation_templates.py:2744
#: modules/bibcirculation/lib/bibcirculation_templates.py:3097
#: modules/bibcirculation/lib/bibcirculation_templates.py:5996
#: modules/bibcirculation/lib/bibcirculation_templates.py:7153
#: modules/bibcirculation/lib/bibcirculation_templates.py:7433
#: modules/bibcirculation/lib/bibcirculation_templates.py:8085
#: modules/bibcirculation/lib/bibcirculation_templates.py:8242
#: modules/bibcirculation/lib/bibcirculation_templates.py:9598
#: modules/bibcirculation/lib/bibcirculation_templates.py:9837
#: modules/bibcirculation/lib/bibcirculation_templates.py:10073
#: modules/bibcirculation/lib/bibcirculation_templates.py:10316
#: modules/bibcirculation/lib/bibcirculation_templates.py:10527
#: modules/bibcirculation/lib/bibcirculation_templates.py:10751
#: modules/bibcirculation/lib/bibcirculation_templates.py:11210
#: modules/bibcirculation/lib/bibcirculation_templates.py:11355
#: modules/bibcirculation/lib/bibcirculation_templates.py:11860
#: modules/bibcirculation/lib/bibcirculation_templates.py:11953
#: modules/bibcirculation/lib/bibcirculation_templates.py:12070
#: modules/bibcirculation/lib/bibcirculation_templates.py:12154
#: modules/bibcirculation/lib/bibcirculation_templates.py:12837
#: modules/bibcirculation/lib/bibcirculation_templates.py:12937
#: modules/bibcirculation/lib/bibcirculation_templates.py:13610
#: modules/bibcirculation/lib/bibcirculation_templates.py:13870
#: modules/bibcirculation/lib/bibcirculation_templates.py:14923
#: modules/bibcirculation/lib/bibcirculation_templates.py:15147
#: modules/bibcirculation/lib/bibcirculation_templates.py:15423
#: modules/bibcirculation/lib/bibcirculation_templates.py:16119
#: modules/bibcirculation/lib/bibcirculation_templates.py:16842
#: modules/bibcirculation/lib/bibcirculation_templates.py:17030
#: modules/bibcirculation/lib/bibcirculation_templates.py:17915
msgid "Publisher"
msgstr "Editor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1760
#: modules/bibcirculation/lib/bibcirculation_templates.py:12565
#: modules/bibcirculation/lib/bibcirculation_templates.py:13373
msgid "Return date"
msgstr "Fecha de devolución"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1796
msgid "Waiting requests"
msgstr "Peticiones en espera"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1838
msgid "Select request"
msgstr "Escoja petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1878
msgid "Welcome to Invenio BibCirculation Admin"
msgstr "Bienvenidos a la administración BibCirculation de Invenio"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1904
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
msgid "id"
msgstr "id"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1917
msgid "register new borrower"
msgstr "registrar un nuevo usuario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1948
#: modules/bibcirculation/lib/bibcirculation_templates.py:2200
#: modules/bibcirculation/lib/bibcirculation_templates.py:9646
#: modules/bibcirculation/lib/bibcirculation_templates.py:15199
#: modules/bibcirculation/lib/bibcirculation_templates.py:17080
#: modules/bibcirculation/lib/bibcirculation_templates.py:17562
msgid "Search borrower by"
msgstr "Buscar usuario por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1949
#: modules/bibcirculation/lib/bibcirculation_templates.py:2180
#: modules/bibcirculation/lib/bibcirculation_templates.py:2187
#: modules/bibcirculation/lib/bibcirculation_templates.py:2194
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6776
#: modules/bibcirculation/lib/bibcirculation_templates.py:8570
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14221
#: modules/bibcirculation/lib/bibcirculation_templates.py:14540
#: modules/bibcirculation/lib/bibcirculation_templates.py:15179
#: modules/bibcirculation/lib/bibcirculation_templates.py:15186
#: modules/bibcirculation/lib/bibcirculation_templates.py:15193
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17060
#: modules/bibcirculation/lib/bibcirculation_templates.py:17067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17074
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17542
#: modules/bibcirculation/lib/bibcirculation_templates.py:17549
#: modules/bibcirculation/lib/bibcirculation_templates.py:17556
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
msgid "name"
msgstr "nombre"
#: modules/bibcirculation/lib/bibcirculation_templates.py:1949
#: modules/bibcirculation/lib/bibcirculation_templates.py:2180
#: modules/bibcirculation/lib/bibcirculation_templates.py:2187
#: modules/bibcirculation/lib/bibcirculation_templates.py:2194
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6777
#: modules/bibcirculation/lib/bibcirculation_templates.py:8570
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14222
#: modules/bibcirculation/lib/bibcirculation_templates.py:14541
#: modules/bibcirculation/lib/bibcirculation_templates.py:15179
#: modules/bibcirculation/lib/bibcirculation_templates.py:15186
#: modules/bibcirculation/lib/bibcirculation_templates.py:15193
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17060
#: modules/bibcirculation/lib/bibcirculation_templates.py:17067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17074
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17542
#: modules/bibcirculation/lib/bibcirculation_templates.py:17549
#: modules/bibcirculation/lib/bibcirculation_templates.py:17556
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
msgid "email"
msgstr "correo electrónico"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
#: modules/bibcirculation/lib/bibcirculation_templates.py:9132
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
msgid "Search item by"
msgstr "Buscar elemento por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
msgid "barcode"
msgstr "código de barras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
msgid "recid"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2076
msgid "0 item(s) found."
msgstr "No se ha encontrado ningún elemento."
#: modules/bibcirculation/lib/bibcirculation_templates.py:2101
#, fuzzy, python-format
msgid "%i items found."
-msgstr "%i items encontrados"
+msgstr "%i elementos encontrados"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2103
#: modules/bibcirculation/lib/bibcirculation_templates.py:16120
msgid "# copies"
msgstr "Nº de ejemplares"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2173
#: modules/bibcirculation/lib/bibcirculation_templates.py:9619
#: modules/bibcirculation/lib/bibcirculation_templates.py:10094
#: modules/bibcirculation/lib/bibcirculation_templates.py:10121
#: modules/bibcirculation/lib/bibcirculation_templates.py:15172
#: modules/bibcirculation/lib/bibcirculation_templates.py:17053
#: modules/bibcirculation/lib/bibcirculation_templates.py:17535
msgid "Search user by"
msgstr "Buscar usuario por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2279
#: modules/bibcirculation/lib/bibcirculation_templates.py:9725
#: modules/bibcirculation/lib/bibcirculation_templates.py:10212
#: modules/bibcirculation/lib/bibcirculation_templates.py:15294
#: modules/bibcirculation/lib/bibcirculation_templates.py:17176
#: modules/bibcirculation/lib/bibcirculation_templates.py:17654
msgid "Select user"
msgstr "Seleccione el usuario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2304
#: modules/bibcirculation/lib/bibcirculation_templates.py:2417
#: modules/bibcirculation/lib/bibcirculation_templates.py:2661
#: modules/bibcirculation/lib/bibcirculation_templates.py:4393
#: modules/bibcirculation/lib/bibcirculation_templates.py:5554
#: modules/bibcirculation/lib/bibcirculation_templates.py:8979
#: modules/bibcirculation/lib/bibcirculation_templates.py:9112
#: modules/bibcirculation/lib/bibcirculation_templates.py:11104
#: modules/bibcirculation/lib/bibcirculation_templates.py:15345
msgid "CCID"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2321
#: modules/bibcirculation/lib/bibcirculation_templates.py:2502
msgid "User information"
msgstr "Información del usuario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2391
#: modules/bibcirculation/lib/bibcirculation_templates.py:2508
#: modules/bibcirculation/lib/bibcirculation_templates.py:2740
#: modules/bibcirculation/lib/bibcirculation_templates.py:3943
#: modules/bibcirculation/lib/bibcirculation_templates.py:4046
#: modules/bibcirculation/lib/bibcirculation_templates.py:4269
#: modules/bibcirculation/lib/bibcirculation_templates.py:4330
#: modules/bibcirculation/lib/bibcirculation_templates.py:4458
#: modules/bibcirculation/lib/bibcirculation_templates.py:5606
#: modules/bibcirculation/lib/bibcirculation_templates.py:6187
#: modules/bibcirculation/lib/bibcirculation_templates.py:6236
#: modules/bibcirculation/lib/bibcirculation_templates.py:6537
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6701
#: modules/bibcirculation/lib/bibcirculation_templates.py:6930
#: modules/bibcirculation/lib/bibcirculation_templates.py:7029
#: modules/bibcirculation/lib/bibcirculation_templates.py:9029
#: modules/bibcirculation/lib/bibcirculation_templates.py:9274
#: modules/bibcirculation/lib/bibcirculation_templates.py:9883
#: modules/bibcirculation/lib/bibcirculation_templates.py:10361
#: modules/bibcirculation/lib/bibcirculation_templates.py:11225
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14391
#: modules/bibcirculation/lib/bibcirculation_templates.py:14462
#: modules/bibcirculation/lib/bibcirculation_templates.py:14716
#: modules/bibcirculation/lib/bibcirculation_templates.py:15519
msgid "Phone"
msgstr "Teléfono"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2392
msgid "Barcode(s)"
msgstr "Código(s) de barras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2393
#: modules/bibcirculation/lib/bibcirculation_templates.py:2573
#: modules/bibcirculation/lib/bibcirculation_templates.py:6191
#: modules/bibcirculation/lib/bibcirculation_templates.py:6278
#: modules/bibcirculation/lib/bibcirculation_templates.py:6539
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6962
#: modules/bibcirculation/lib/bibcirculation_templates.py:7032
#: modules/bibcirculation/lib/bibcirculation_templates.py:7179
#: modules/bibcirculation/lib/bibcirculation_templates.py:7726
#: modules/bibcirculation/lib/bibcirculation_templates.py:7817
#: modules/bibcirculation/lib/bibcirculation_templates.py:8395
#: modules/bibcirculation/lib/bibcirculation_templates.py:9946
#: modules/bibcirculation/lib/bibcirculation_templates.py:10430
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10822
#: modules/bibcirculation/lib/bibcirculation_templates.py:11238
#: modules/bibcirculation/lib/bibcirculation_templates.py:11436
#: modules/bibcirculation/lib/bibcirculation_templates.py:12703
#: modules/bibcirculation/lib/bibcirculation_templates.py:13518
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13957
#: modules/bibcirculation/lib/bibcirculation_templates.py:14072
#: modules/bibcirculation/lib/bibcirculation_templates.py:14393
#: modules/bibcirculation/lib/bibcirculation_templates.py:14464
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
#: modules/bibcirculation/lib/bibcirculation_templates.py:15532
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17387
msgid "Continue"
msgstr "Continuar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2509
msgid "List of borrowed books"
msgstr "Lista de libros en préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2513
msgid "Write note(s)"
msgstr "Escriba la(s) nota(s)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2639
msgid "Notification has been sent!"
msgstr "¡Se ha enviado la notificación!"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2742
#: modules/bibcirculation/lib/bibcirculation_templates.py:3095
#: modules/bibcirculation/lib/bibcirculation_templates.py:7151
#: modules/bibcirculation/lib/bibcirculation_templates.py:7429
#: modules/bibcirculation/lib/bibcirculation_templates.py:8081
#: modules/bibcirculation/lib/bibcirculation_templates.py:8238
#: modules/bibcirculation/lib/bibcirculation_templates.py:9596
#: modules/bibcirculation/lib/bibcirculation_templates.py:9835
#: modules/bibcirculation/lib/bibcirculation_templates.py:10071
#: modules/bibcirculation/lib/bibcirculation_templates.py:10314
#: modules/bibcirculation/lib/bibcirculation_templates.py:10523
#: modules/bibcirculation/lib/bibcirculation_templates.py:10749
#: modules/bibcirculation/lib/bibcirculation_templates.py:11208
#: modules/bibcirculation/lib/bibcirculation_templates.py:11351
#: modules/bibcirculation/lib/bibcirculation_templates.py:11856
#: modules/bibcirculation/lib/bibcirculation_templates.py:11951
#: modules/bibcirculation/lib/bibcirculation_templates.py:12064
#: modules/bibcirculation/lib/bibcirculation_templates.py:12152
#: modules/bibcirculation/lib/bibcirculation_templates.py:12833
#: modules/bibcirculation/lib/bibcirculation_templates.py:12935
#: modules/bibcirculation/lib/bibcirculation_templates.py:13606
#: modules/bibcirculation/lib/bibcirculation_templates.py:13868
#: modules/bibcirculation/lib/bibcirculation_templates.py:14921
#: modules/bibcirculation/lib/bibcirculation_templates.py:15144
#: modules/bibcirculation/lib/bibcirculation_templates.py:15420
#: modules/bibcirculation/lib/bibcirculation_templates.py:16840
#: modules/bibcirculation/lib/bibcirculation_templates.py:17028
#: modules/bibcirculation/lib/bibcirculation_templates.py:17311
#: modules/bibcirculation/lib/bibcirculation_templates.py:17509
#: modules/bibcirculation/lib/bibcirculation_templates.py:17911
msgid "Author(s)"
msgstr "Autor(es)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2748
msgid "Print loan information"
-msgstr "Imprimir la informacióm de préstamo"
+msgstr "Imprimir la información de préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2853
#: modules/bibcirculation/lib/bibcirculation_templates.py:2963
#: modules/bibcirculation/lib/bibcirculation_templates.py:10917
#: modules/bibcirculation/lib/bibcirculation_templates.py:11521
msgid "Option(s)"
msgstr "Opción(es)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2889
#: modules/bibcirculation/lib/bibcirculation_templates.py:2990
msgid "Cancel hold request"
msgstr "Cancelar la reserva"
#: modules/bibcirculation/lib/bibcirculation_templates.py:2924
#: modules/bibcirculation/lib/bibcirculation_templates.py:3479
#: modules/bibcirculation/lib/bibcirculation_templates.py:3682
#: modules/bibcirculation/lib/bibcirculation_templates.py:4637
msgid "There are no requests."
msgstr "No hay peticiones."
#: modules/bibcirculation/lib/bibcirculation_templates.py:3093
#: modules/bibcirculation/lib/bibcirculation_templates.py:7149
#: modules/bibcirculation/lib/bibcirculation_templates.py:7426
#: modules/bibcirculation/lib/bibcirculation_templates.py:8078
#: modules/bibcirculation/lib/bibcirculation_templates.py:8235
#: modules/bibcirculation/lib/bibcirculation_templates.py:9594
#: modules/bibcirculation/lib/bibcirculation_templates.py:9833
#: modules/bibcirculation/lib/bibcirculation_templates.py:10069
#: modules/bibcirculation/lib/bibcirculation_templates.py:10312
#: modules/bibcirculation/lib/bibcirculation_templates.py:10519
#: modules/bibcirculation/lib/bibcirculation_templates.py:10747
#: modules/bibcirculation/lib/bibcirculation_templates.py:11206
#: modules/bibcirculation/lib/bibcirculation_templates.py:11347
#: modules/bibcirculation/lib/bibcirculation_templates.py:11853
#: modules/bibcirculation/lib/bibcirculation_templates.py:11949
#: modules/bibcirculation/lib/bibcirculation_templates.py:12061
#: modules/bibcirculation/lib/bibcirculation_templates.py:12150
#: modules/bibcirculation/lib/bibcirculation_templates.py:12830
#: modules/bibcirculation/lib/bibcirculation_templates.py:12933
#: modules/bibcirculation/lib/bibcirculation_templates.py:13602
#: modules/bibcirculation/lib/bibcirculation_templates.py:13866
#: modules/bibcirculation/lib/bibcirculation_templates.py:14864
#: modules/bibcirculation/lib/bibcirculation_templates.py:15142
#: modules/bibcirculation/lib/bibcirculation_templates.py:15418
#: modules/bibcirculation/lib/bibcirculation_templates.py:17025
#: modules/bibcirculation/lib/bibcirculation_templates.py:17506
#: modules/bibcirculation/lib/bibcirculation_templates.py:17908
msgid "Item details"
-msgstr "Detalles del ítem"
+msgstr "Detalles del elemento"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3100
msgid "Edit this record"
msgstr "Edite este registro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3101
msgid "Book Cover"
msgstr "Portada del libro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3102
msgid "Additional details"
-msgstr "Detalls addicionals"
+msgstr "Detalles adicionales"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3174
#: modules/bibcirculation/lib/bibcirculation_templates.py:7464
#: modules/bibcirculation/lib/bibcirculation_templates.py:8109
#: modules/bibcirculation/lib/bibcirculation_templates.py:17946
#: modules/bibcirculation/lib/bibcirculation_templates.py:18042
msgid "No of loans"
msgstr "Nº de préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3253
#: modules/bibcirculation/lib/bibcirculation_templates.py:3274
#: modules/bibcirculation/lib/bibcirculation_templates.py:3295
#: modules/bibcirculation/lib/bibcirculation_templates.py:3316
#: modules/bibcirculation/lib/bibcirculation_templates.py:4843
#: modules/bibcirculation/lib/bibcirculation_templates.py:5481
msgid "Select an action"
msgstr "Seleccione una acción"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3255
#: modules/bibcirculation/lib/bibcirculation_templates.py:3276
#: modules/bibcirculation/lib/bibcirculation_templates.py:3297
#: modules/bibcirculation/lib/bibcirculation_templates.py:3318
#, fuzzy
msgid "Add similar copy"
msgstr "similitud de palabras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3256
#: modules/bibcirculation/lib/bibcirculation_templates.py:3277
#: modules/bibcirculation/lib/bibcirculation_templates.py:3298
#: modules/bibcirculation/lib/bibcirculation_templates.py:3319
#: modules/bibcirculation/lib/bibcirculation_templates.py:4490
msgid "New request"
msgstr "Nueva petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3257
#: modules/bibcirculation/lib/bibcirculation_templates.py:3278
#: modules/bibcirculation/lib/bibcirculation_templates.py:3299
#: modules/bibcirculation/lib/bibcirculation_templates.py:3320
#: modules/bibcirculation/lib/bibcirculation_templates.py:4489
msgid "New loan"
msgstr "Nuevo préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3258
#: modules/bibcirculation/lib/bibcirculation_templates.py:3279
#: modules/bibcirculation/lib/bibcirculation_templates.py:3300
#: modules/bibcirculation/lib/bibcirculation_templates.py:3321
#, fuzzy
msgid "Delete copy"
msgstr "Suprimir el grupo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3432
msgid "Add new copy"
msgstr "Añadir otra copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3433
msgid "Order new copy"
msgstr "Pedir otra copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3434
msgid "ILL request"
msgstr "Petición de préstamo interbibliotecario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3435
#, fuzzy, python-format
msgid "Hold requests and loans overview on %(date)s"
msgstr "Reservas y préstamos para"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3437
#: modules/bibcirculation/lib/bibcirculation_templates.py:3439
msgid "Hold requests"
msgstr "Reservas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3437
#: modules/bibcirculation/lib/bibcirculation_templates.py:3438
#: modules/bibcirculation/lib/bibcirculation_templates.py:3440
#: modules/bibcirculation/lib/bibcirculation_templates.py:3441
#: modules/bibcirculation/lib/bibcirculation_templates.py:4603
#: modules/bibcirculation/lib/bibcirculation_templates.py:4605
#: modules/bibcirculation/lib/bibcirculation_templates.py:4607
#: modules/bibcirculation/lib/bibcirculation_templates.py:4610
#: modules/bibcirculation/lib/bibcirculation_templates.py:4612
#: modules/bibcirculation/lib/bibcirculation_templates.py:4614
msgid "More details"
msgstr "Más detalles"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3438
#: modules/bibcirculation/lib/bibcirculation_templates.py:3440
#: modules/bibcirculation/lib/bibcirculation_templates.py:4604
#: modules/bibcirculation/lib/bibcirculation_templates.py:4611
msgid "Loans"
msgstr "Préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3439
#: modules/bibcirculation/lib/bibcirculation_templates.py:4608
msgid "Historical overview"
msgstr "Visión histórica"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3569
#: modules/bibcirculation/lib/bibcirculation_templates.py:4747
#: modules/bibcirculation/lib/bibcirculation_templates.py:5389
msgid "There are no loans."
msgstr "No hay préstamos."
#: modules/bibcirculation/lib/bibcirculation_templates.py:3604
#: modules/bibcirculation/lib/bibcirculation_templates.py:3809
msgid "Returned on"
msgstr "Devuelto el"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3605
#: modules/bibcirculation/lib/bibcirculation_templates.py:3810
#: modules/bibcirculation/lib/bibcirculation_templates.py:4788
#: modules/bibcirculation/lib/bibcirculation_templates.py:4981
#: modules/bibcirculation/lib/bibcirculation_templates.py:5159
#: modules/bibcirculation/lib/bibcirculation_templates.py:5431
msgid "Renewals"
msgstr "Renovaciones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3606
#: modules/bibcirculation/lib/bibcirculation_templates.py:3811
#: modules/bibcirculation/lib/bibcirculation_templates.py:4789
#: modules/bibcirculation/lib/bibcirculation_templates.py:4982
#: modules/bibcirculation/lib/bibcirculation_templates.py:5160
msgid "Overdue letters"
msgstr "Cartas de reclamación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3878
#: modules/bibcirculation/lib/bibcirculation_templates.py:3988
#: modules/bibcirculation/lib/bibcirculation_templates.py:4182
#: modules/bibcirculation/lib/bibcirculation_templates.py:4197
#: modules/bibcirculation/lib/bibcirculation_templates.py:4398
#: modules/bibcirculation/lib/bibcirculation_templates.py:4808
#: modules/bibcirculation/lib/bibcirculation_templates.py:5449
#: modules/bibcirculation/lib/bibcirculation_templates.py:10933
#: modules/bibcirculation/lib/bibcirculation_templates.py:14664
#: modules/bibcirculation/lib/bibcirculation_templates.py:15641
msgid "No notes"
msgstr "Sin notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3883
#: modules/bibcirculation/lib/bibcirculation_templates.py:3993
#: modules/bibcirculation/lib/bibcirculation_templates.py:4187
#: modules/bibcirculation/lib/bibcirculation_templates.py:4202
msgid "Notes about this library"
msgstr "Notas sobre esta biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3939
msgid "Library details"
msgstr "Detalles de la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3944
#: modules/bibcirculation/lib/bibcirculation_templates.py:4047
#: modules/bibcirculation/lib/bibcirculation_templates.py:4270
#: modules/bibcirculation/lib/bibcirculation_templates.py:4331
#: modules/bibcirculation/lib/bibcirculation_templates.py:4790
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6703
#: modules/bibcirculation/lib/bibcirculation_templates.py:6932
#: modules/bibcirculation/lib/bibcirculation_templates.py:7031
#: modules/bibcirculation/lib/bibcirculation_templates.py:11520
#: modules/bibcirculation/lib/bibcirculation_templates.py:11640
#: modules/bibcirculation/lib/bibcirculation_templates.py:13060
#: modules/bibcirculation/lib/bibcirculation_templates.py:13095
#: modules/bibcirculation/lib/bibcirculation_templates.py:13217
#: modules/bibcirculation/lib/bibcirculation_templates.py:13370
#: modules/bibcirculation/lib/bibcirculation_templates.py:13457
#: modules/bibcirculation/lib/bibcirculation_templates.py:17026
msgid "Type"
msgstr "Tipo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3946
#: modules/bibcirculation/lib/bibcirculation_templates.py:4049
#: modules/bibcirculation/lib/bibcirculation_templates.py:4272
#: modules/bibcirculation/lib/bibcirculation_templates.py:4333
msgid "No of items"
-msgstr "Número de ítems"
+msgstr "Número de elementos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:3948
msgid "Duplicated library?"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4042
#: modules/bibcirculation/lib/bibcirculation_templates.py:4265
#, fuzzy
msgid "Library to be deleted"
msgstr "Notes de la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4087
#, fuzzy
msgid "Search library"
msgstr "Buscar biblioteca por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4125
#, fuzzy
msgid "Select library"
msgstr "Buscar biblioteca por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4218
msgid "Please, note that this action is NOT reversible"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4277
#: modules/bibcirculation/lib/bibcirculation_templates.py:4338
#, fuzzy
msgid "Library not found"
msgstr "Notes de la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4326
#, fuzzy
msgid "Merged library"
msgstr "Buscar biblioteca por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4403
msgid "Notes about this borrower"
msgstr "Notas sobre este lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4452
#: modules/bibcirculation/lib/bibcirculation_templates.py:5600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9023
msgid "Personal details"
msgstr "Detalles personales"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4491
msgid "New ILL request"
msgstr "Nueva petición de PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4492
msgid "Notify this borrower"
msgstr "Avisar a este lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4600
msgid "Requests, Loans and ILL overview on"
msgstr "Reservas, préstamos y PI en"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4602
#: modules/bibcirculation/lib/bibcirculation_templates.py:4609
msgid "Requests"
msgstr "Peticiones"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4675
msgid "Request option(s)"
msgstr "Opciones de la petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4786
#: modules/bibcirculation/lib/bibcirculation_templates.py:8852
#: modules/bibcirculation/lib/bibcirculation_templates.py:10426
msgid "Loan date"
msgstr "Prestado en"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4791
#: modules/bibcirculation/lib/bibcirculation_templates.py:5434
msgid "Loan notes"
msgstr "Notas de préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4792
msgid "Loans status"
msgstr "Estado de los préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4793
#: modules/bibcirculation/lib/bibcirculation_templates.py:5435
msgid "Loan options"
msgstr "Opciones de préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4813
#: modules/bibcirculation/lib/bibcirculation_templates.py:5453
#: modules/bibcirculation/lib/bibcirculation_templates.py:10938
msgid "See notes"
msgstr "Ver notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4850
#: modules/bibcirculation/lib/bibcirculation_templates.py:4854
#: modules/bibcirculation/lib/bibcirculation_templates.py:5488
#: modules/bibcirculation/lib/bibcirculation_templates.py:5492
#, fuzzy
msgid "Change due date"
msgstr "Nueva fecha de devolución: "
#: modules/bibcirculation/lib/bibcirculation_templates.py:4863
#: modules/bibcirculation/lib/bibcirculation_templates.py:5032
#: modules/bibcirculation/lib/bibcirculation_templates.py:5212
#: modules/bibcirculation/lib/bibcirculation_templates.py:5347
#: modules/bibcirculation/lib/bibcirculation_templates.py:5500
msgid "Send recall"
msgstr "Enviar reclamación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4952
#: modules/bibcirculation/lib/bibcirculation_templates.py:5128
msgid "No result for your search."
msgstr "No se han encontrado resultados."
#: modules/bibcirculation/lib/bibcirculation_templates.py:4983
#: modules/bibcirculation/lib/bibcirculation_templates.py:5161
msgid "Loan Notes"
msgstr "Notas de préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:4996
#: modules/bibcirculation/lib/bibcirculation_templates.py:5175
msgid "see notes"
msgstr "ver notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5000
#: modules/bibcirculation/lib/bibcirculation_templates.py:5180
msgid "no notes"
msgstr "sin notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5289
msgid "CERN Library"
msgstr "Biblioteca del CERN"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5324
msgid "Message"
msgstr "Mensaje"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5325
msgid "Choose a template"
msgstr "Escoja la plantilla"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5343
msgid "Templates"
msgstr "Plantillas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5344
#: modules/bibcirculation/lib/bibcirculation_templates.py:5432
msgid "Overdue letter"
msgstr "Carta de reclamación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5345
msgid "Reminder"
msgstr "Recordatorio"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5346
msgid "Notification"
msgstr "Notificación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5348
msgid "Load"
msgstr "Cargar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5367
msgid "Send"
msgstr "Enviar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5433
#: modules/bibcirculation/lib/bibcirculation_templates.py:8854
msgid "Loan status"
msgstr "Estado del préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5652
#: modules/bibcirculation/lib/bibcirculation_templates.py:9055
#: modules/bibcirculation/lib/bibcirculation_templates.py:10428
msgid "Write notes"
msgstr "Escriba notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5713
msgid "Notes about borrower"
msgstr "Notas sobre el lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5722
#: modules/bibcirculation/lib/bibcirculation_templates.py:5823
#: modules/bibcirculation/lib/bibcirculation_templates.py:8732
#: modules/bibcirculation/lib/bibcirculation_templates.py:11026
#: modules/bibcirculation/lib/bibcirculation_templates.py:11782
#: modules/bibcirculation/lib/bibcirculation_templates.py:12760
#: modules/bibcirculation/lib/bibcirculation_templates.py:13736
#: modules/bibcirculation/lib/bibcirculation_templates.py:15738
msgid "[delete]"
msgstr "[suprimir]"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5768
#: modules/bibcirculation/lib/bibcirculation_templates.py:5870
#: modules/bibcirculation/lib/bibcirculation_templates.py:8776
#: modules/bibcirculation/lib/bibcirculation_templates.py:11071
#: modules/bibcirculation/lib/bibcirculation_templates.py:15785
msgid "Write new note"
msgstr "Escriba la nota"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5814
msgid "Notes about loan"
msgstr "Notas sobre el préstamo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5990
msgid "Book Information"
msgstr "Información del libro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5994
msgid "EAN"
msgstr "EAN"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5997
msgid "Publication date"
msgstr "Fecha de publicación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5998
msgid "Publication place"
msgstr "Lugar de publicación"
#: modules/bibcirculation/lib/bibcirculation_templates.py:5999
#: modules/bibcirculation/lib/bibcirculation_templates.py:7155
#: modules/bibcirculation/lib/bibcirculation_templates.py:11955
#: modules/bibcirculation/lib/bibcirculation_templates.py:12156
#: modules/bibcirculation/lib/bibcirculation_templates.py:12939
#: modules/bibcirculation/lib/bibcirculation_templates.py:14925
#: modules/bibcirculation/lib/bibcirculation_templates.py:15148
#: modules/bibcirculation/lib/bibcirculation_templates.py:15424
#: modules/bibcirculation/lib/bibcirculation_templates.py:16844
#: modules/bibcirculation/lib/bibcirculation_templates.py:17032
msgid "Edition"
msgstr "Edición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6000
msgid "Number of pages"
msgstr "Páginas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6001
msgid "Sub-library"
msgstr "Sub-biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6002
msgid "CERN Central Library"
msgstr "Biblioteca Central del CERN"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6099
#, fuzzy
msgid "Retrieve book information"
msgstr "Información del usuario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6312
msgid "A new borrower has been registered."
msgstr "Un nuevo usuario se ha dado de alta."
#: modules/bibcirculation/lib/bibcirculation_templates.py:6531
msgid "Borrower information"
msgstr "Información del usuario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6596
#: modules/bibcirculation/lib/bibcirculation_templates.py:6698
msgid "New library information"
msgstr "Información de la nueva biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6739
msgid "A new library has been registered."
msgstr "Se ha dado de alta la nueva biblioteca."
#: modules/bibcirculation/lib/bibcirculation_templates.py:6775
#: modules/bibcirculation/lib/bibcirculation_templates.py:8569
msgid "Search library by"
msgstr "Buscar biblioteca por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:6927
#: modules/bibcirculation/lib/bibcirculation_templates.py:7026
msgid "Library information"
msgstr "Información de la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14502
msgid "The information has been updated."
msgstr "Se ha actualizado la información."
#: modules/bibcirculation/lib/bibcirculation_templates.py:7150
#: modules/bibcirculation/lib/bibcirculation_templates.py:14920
msgid "Book title"
msgstr "Título del libro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7152
#: modules/bibcirculation/lib/bibcirculation_templates.py:11952
#: modules/bibcirculation/lib/bibcirculation_templates.py:12068
#: modules/bibcirculation/lib/bibcirculation_templates.py:12153
#: modules/bibcirculation/lib/bibcirculation_templates.py:12936
#: modules/bibcirculation/lib/bibcirculation_templates.py:14922
#: modules/bibcirculation/lib/bibcirculation_templates.py:15145
#: modules/bibcirculation/lib/bibcirculation_templates.py:15421
#: modules/bibcirculation/lib/bibcirculation_templates.py:16841
#: modules/bibcirculation/lib/bibcirculation_templates.py:17029
msgid "Place"
msgstr "Lugar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7188
msgid "Coming soon..."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7438
#: modules/bibcirculation/lib/bibcirculation_templates.py:17920
#, python-format
msgid "Copies of %s"
msgstr "Copias de %s"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7570
msgid "New copy details"
msgstr "Detalles de la nueva copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7725
#: modules/bibcirculation/lib/bibcirculation_templates.py:7816
#: modules/bibcirculation/lib/bibcirculation_templates.py:8394
#: modules/bibcirculation/lib/bibcirculation_templates.py:8484
#, fuzzy
msgid "Expected arrival date"
msgstr "Fecha prevista"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7859
#, fuzzy, python-format
msgid "A %(x_url_open)snew copy%(x_url_close)s has been added."
msgstr ""
"Debería %(x_url_open)saceptar o rechazar%(x_url_close)s la petición de este "
"usuario."
#: modules/bibcirculation/lib/bibcirculation_templates.py:7883
#, fuzzy
msgid "Back to the record"
msgstr "Volver al registro"
#: modules/bibcirculation/lib/bibcirculation_templates.py:7975
#, fuzzy, python-format
msgid "%(nb_items_found)i items found"
-msgstr "%i items encontrados"
+msgstr "%i elementos encontrados"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8268
msgid "Update copy information"
msgstr "Actualizar información de la copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8476
msgid "New copy information"
msgstr "Información de la nueva copia"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8530
msgid "This item has been updated."
-msgstr "Este ítem se ha actualitzado."
+msgstr "Este elemento se ha actualizado."
#: modules/bibcirculation/lib/bibcirculation_templates.py:8625
#, fuzzy
msgid "0 libraries found."
msgstr "No se ha encontrado ninguna biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8723
msgid "Notes about library"
msgstr "Notas sobre la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8856
msgid "Requested ?"
msgstr "¿Solicitado?"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8876
msgid "New due date: "
msgstr "Nueva fecha de devolución: "
#: modules/bibcirculation/lib/bibcirculation_templates.py:8901
msgid "Submit new due date"
msgstr "Enviar la nueva fecha de devolución"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8947
#, python-format
msgid "The due date has been updated. New due date: %s"
msgstr "Se ha actualizado la fecha de devolución. Ahora es: %s"
#: modules/bibcirculation/lib/bibcirculation_templates.py:8948
#, fuzzy
msgid "Back to borrower's loans"
msgstr "Volver a los préstamos"
#: modules/bibcirculation/lib/bibcirculation_templates.py:9225
msgid "Select item"
-msgstr "Seleccionar el item"
+msgstr "Seleccionar el elemento"
#: modules/bibcirculation/lib/bibcirculation_templates.py:9268
#: modules/bibcirculation/lib/bibcirculation_templates.py:9877
#: modules/bibcirculation/lib/bibcirculation_templates.py:10355
#: modules/bibcirculation/lib/bibcirculation_templates.py:11219
#: modules/bibcirculation/lib/bibcirculation_templates.py:15513
msgid "Borrower details"
msgstr "Detalles del lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:9438
#: modules/bibcirculation/lib/bibcirculation_templates.py:9942
msgid "Enter the period of interest"
msgstr "Período de interés"
#: modules/bibcirculation/lib/bibcirculation_templates.py:9439
#: modules/bibcirculation/lib/bibcirculation_templates.py:9943
msgid "From: "
msgstr "De: "
#: modules/bibcirculation/lib/bibcirculation_templates.py:9441
#: modules/bibcirculation/lib/bibcirculation_templates.py:9944
msgid "To: "
msgstr "A: "
#: modules/bibcirculation/lib/bibcirculation_templates.py:9501
#: modules/bibcirculation/lib/bibcirculation_templates.py:9978
msgid "A new request has been registered with success."
msgstr "Se ha registrado correctamente la nueva petición."
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
msgid "ccid"
msgstr ""
# Una?
#: modules/bibcirculation/lib/bibcirculation_templates.py:10178
#, fuzzy
msgid "Please select one borrower to continue."
msgstr "Seleccione uno o más:"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10429
msgid "This note will be associate to this new loan, not to the borrower."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10813
#: modules/bibcirculation/lib/bibcirculation_templates.py:13630
#: modules/bibcirculation/lib/bibcirculation_templates.py:13906
msgid "Order details"
msgstr "Detalles del pedido"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10815
#: modules/bibcirculation/lib/bibcirculation_templates.py:10911
#: modules/bibcirculation/lib/bibcirculation_templates.py:13102
#: modules/bibcirculation/lib/bibcirculation_templates.py:13630
#: modules/bibcirculation/lib/bibcirculation_templates.py:13907
msgid "Vendor"
msgstr "Proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10584
#: modules/bibcirculation/lib/bibcirculation_templates.py:10816
#: modules/bibcirculation/lib/bibcirculation_templates.py:10913
#: modules/bibcirculation/lib/bibcirculation_templates.py:13908
msgid "Price"
msgstr "Precio"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10636
#: modules/bibcirculation/lib/bibcirculation_templates.py:10818
#: modules/bibcirculation/lib/bibcirculation_templates.py:13724
#: modules/bibcirculation/lib/bibcirculation_templates.py:13910
msgid "Order date"
msgstr "Fecha de petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10637
#: modules/bibcirculation/lib/bibcirculation_templates.py:10819
#: modules/bibcirculation/lib/bibcirculation_templates.py:10915
#: modules/bibcirculation/lib/bibcirculation_templates.py:12359
#: modules/bibcirculation/lib/bibcirculation_templates.py:12421
#: modules/bibcirculation/lib/bibcirculation_templates.py:12563
#: modules/bibcirculation/lib/bibcirculation_templates.py:12648
#: modules/bibcirculation/lib/bibcirculation_templates.py:13154
#: modules/bibcirculation/lib/bibcirculation_templates.py:13219
#: modules/bibcirculation/lib/bibcirculation_templates.py:13372
#: modules/bibcirculation/lib/bibcirculation_templates.py:13459
#: modules/bibcirculation/lib/bibcirculation_templates.py:13725
#: modules/bibcirculation/lib/bibcirculation_templates.py:13911
#: modules/bibcirculation/lib/bibcirculation_templates.py:15586
msgid "Expected date"
msgstr "Fecha prevista"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10863
msgid "A new purchase has been registered with success."
msgstr "Se ha cursado correctamente una nueva compra."
#: modules/bibcirculation/lib/bibcirculation_templates.py:10912
msgid "Ordered date"
msgstr "Fecha de pedido"
#: modules/bibcirculation/lib/bibcirculation_templates.py:10962
#: modules/bibcirculation/lib/bibcirculation_templates.py:11577
#: modules/bibcirculation/lib/bibcirculation_templates.py:11584
#: modules/bibcirculation/lib/bibcirculation_templates.py:11690
msgid "select"
msgstr "seleccionar"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11017
#: modules/bibcirculation/lib/bibcirculation_templates.py:15729
msgid "Notes about acquisition"
msgstr "Notas sobre la adquisición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11212
#: modules/bibcirculation/lib/bibcirculation_templates.py:11428
#: modules/bibcirculation/lib/bibcirculation_templates.py:12217
#: modules/bibcirculation/lib/bibcirculation_templates.py:14959
#: modules/bibcirculation/lib/bibcirculation_templates.py:15150
#: modules/bibcirculation/lib/bibcirculation_templates.py:15466
#: modules/bibcirculation/lib/bibcirculation_templates.py:17381
#: modules/bibcirculation/lib/bibcirculation_templates.py:17517
msgid "ILL request details"
msgstr "Detalles de la petición PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11213
#: modules/bibcirculation/lib/bibcirculation_templates.py:11429
#: modules/bibcirculation/lib/bibcirculation_templates.py:15152
#: modules/bibcirculation/lib/bibcirculation_templates.py:16921
#: modules/bibcirculation/lib/bibcirculation_templates.py:17037
#: modules/bibcirculation/lib/bibcirculation_templates.py:17382
#: modules/bibcirculation/lib/bibcirculation_templates.py:17518
msgid "Period of interest - From"
msgstr "Período de interés - Desde"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11215
#: modules/bibcirculation/lib/bibcirculation_templates.py:11431
#: modules/bibcirculation/lib/bibcirculation_templates.py:15154
#: modules/bibcirculation/lib/bibcirculation_templates.py:16923
#: modules/bibcirculation/lib/bibcirculation_templates.py:17039
#: modules/bibcirculation/lib/bibcirculation_templates.py:17384
#: modules/bibcirculation/lib/bibcirculation_templates.py:17520
msgid "Period of interest - To"
msgstr "Período de interés - Hasta"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11217
#: modules/bibcirculation/lib/bibcirculation_templates.py:11433
#: modules/bibcirculation/lib/bibcirculation_templates.py:15013
#: modules/bibcirculation/lib/bibcirculation_templates.py:15156
#: modules/bibcirculation/lib/bibcirculation_templates.py:15470
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17041
#: modules/bibcirculation/lib/bibcirculation_templates.py:17386
#: modules/bibcirculation/lib/bibcirculation_templates.py:17522
msgid "Additional comments"
msgstr "Comentarios adicionales"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11218
#: modules/bibcirculation/lib/bibcirculation_templates.py:15471
msgid "Only this edition"
msgstr "Solamente esta edición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11279
msgid "A new ILL request has been registered with success."
msgstr "Se ha registrado correctamente una nueva petición PI."
#: modules/bibcirculation/lib/bibcirculation_templates.py:11434
#, fuzzy, python-format
msgid ""
"I accept the %(x_url_open)sconditions%(x_url_close)s of the service in "
"particular the return of books in due time."
msgstr ""
"Acepto los %s del servicio, en particular devolver los libros a tiempo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:11435
msgid "I want this edition only."
msgstr "Sólo quiero esta edición."
#: modules/bibcirculation/lib/bibcirculation_templates.py:11466
#, fuzzy, python-format
msgid "You can see your loans %(here_link)s."
msgstr "Puede ver sus préstamos "
#: modules/bibcirculation/lib/bibcirculation_templates.py:11468
msgid "here"
msgstr "aquí"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11515
#: modules/bibcirculation/lib/bibcirculation_templates.py:11635
#: modules/bibcirculation/lib/bibcirculation_templates.py:15584
msgid "Supplier"
msgstr "Proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11518
msgid "Interest from"
msgstr "Interés desde"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11636
#: modules/bibcirculation/lib/bibcirculation_templates.py:12361
#: modules/bibcirculation/lib/bibcirculation_templates.py:12470
#: modules/bibcirculation/lib/bibcirculation_templates.py:12566
#: modules/bibcirculation/lib/bibcirculation_templates.py:12650
#: modules/bibcirculation/lib/bibcirculation_templates.py:13156
#: modules/bibcirculation/lib/bibcirculation_templates.py:13270
#: modules/bibcirculation/lib/bibcirculation_templates.py:13375
#: modules/bibcirculation/lib/bibcirculation_templates.py:13461
#: modules/bibcirculation/lib/bibcirculation_templates.py:13649
msgid "Cost"
msgstr "Coste"
#: modules/bibcirculation/lib/bibcirculation_templates.py:11639
#, fuzzy
msgid "Date requested"
msgstr "Nueva petición"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12062
msgid "Periodical Title"
msgstr "Título de la publicación periódica"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12063
msgid "Article Title"
msgstr "Título del artículo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12065
#: modules/bibcirculation/lib/bibcirculation_templates.py:17313
#: modules/bibcirculation/lib/bibcirculation_templates.py:17511
msgid "Volume"
msgstr "Volumen"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12066
#: modules/bibcirculation/lib/bibcirculation_templates.py:17314
#: modules/bibcirculation/lib/bibcirculation_templates.py:17512
msgid "Issue"
msgstr "Número"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17315
#: modules/bibcirculation/lib/bibcirculation_templates.py:17513
msgid "Page"
msgstr "Página"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12069
#: modules/bibcirculation/lib/bibcirculation_templates.py:17318
#: modules/bibcirculation/lib/bibcirculation_templates.py:17516
msgid "ISSN"
msgstr "ISSN"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12210
#: modules/bibcirculation/lib/bibcirculation_templates.py:12994
msgid "Borrower request"
msgstr "Petición del lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12213
#: modules/bibcirculation/lib/bibcirculation_templates.py:12997
#: modules/bibcirculation/lib/bibcirculation_templates.py:14960
#: modules/bibcirculation/lib/bibcirculation_templates.py:15468
msgid "Period of interest (From)"
msgstr "Período de interés (desde)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12214
#: modules/bibcirculation/lib/bibcirculation_templates.py:12998
#: modules/bibcirculation/lib/bibcirculation_templates.py:15011
#: modules/bibcirculation/lib/bibcirculation_templates.py:15469
msgid "Period of interest (To)"
msgstr "Período de interés (hasta)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12215
#: modules/bibcirculation/lib/bibcirculation_templates.py:12999
msgid "Borrower comments"
msgstr "Comentarios del lector"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12216
#: modules/bibcirculation/lib/bibcirculation_templates.py:13000
msgid "Only this edition?"
msgstr "¿Sólo esta edición?"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12271
#: modules/bibcirculation/lib/bibcirculation_templates.py:12303
#: modules/bibcirculation/lib/bibcirculation_templates.py:12419
#: modules/bibcirculation/lib/bibcirculation_templates.py:12562
#: modules/bibcirculation/lib/bibcirculation_templates.py:12647
#: modules/bibcirculation/lib/bibcirculation_templates.py:13059
#: modules/bibcirculation/lib/bibcirculation_templates.py:13094
#: modules/bibcirculation/lib/bibcirculation_templates.py:13216
#: modules/bibcirculation/lib/bibcirculation_templates.py:13370
#: modules/bibcirculation/lib/bibcirculation_templates.py:13457
msgid "ILL request ID"
msgstr "Código de petición PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12272
#: modules/bibcirculation/lib/bibcirculation_templates.py:12377
#: modules/bibcirculation/lib/bibcirculation_templates.py:12482
#: modules/bibcirculation/lib/bibcirculation_templates.py:12580
#: modules/bibcirculation/lib/bibcirculation_templates.py:12663
#: modules/bibcirculation/lib/bibcirculation_templates.py:13062
#: modules/bibcirculation/lib/bibcirculation_templates.py:13173
#: modules/bibcirculation/lib/bibcirculation_templates.py:13285
#: modules/bibcirculation/lib/bibcirculation_templates.py:13388
#: modules/bibcirculation/lib/bibcirculation_templates.py:13475
#: modules/bibcirculation/lib/bibcirculation_templates.py:13726
#: modules/bibcirculation/lib/bibcirculation_templates.py:13912
msgid "Previous notes"
-msgstr "Notes anteriors"
+msgstr "Notas anteriores"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12293
#: modules/bibcirculation/lib/bibcirculation_templates.py:12397
#: modules/bibcirculation/lib/bibcirculation_templates.py:12500
#: modules/bibcirculation/lib/bibcirculation_templates.py:12600
#: modules/bibcirculation/lib/bibcirculation_templates.py:12683
#: modules/bibcirculation/lib/bibcirculation_templates.py:13082
#: modules/bibcirculation/lib/bibcirculation_templates.py:13192
#: modules/bibcirculation/lib/bibcirculation_templates.py:13306
#: modules/bibcirculation/lib/bibcirculation_templates.py:13408
#: modules/bibcirculation/lib/bibcirculation_templates.py:13495
#: modules/bibcirculation/lib/bibcirculation_templates.py:15590
msgid "Library notes"
-msgstr "Notes de la biblioteca"
+msgstr "Notas de la biblioteca"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12310
msgid "Library/Supplier"
-msgstr "Biblioteca/proveïdor"
+msgstr "Biblioteca/proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12463
#: modules/bibcirculation/lib/bibcirculation_templates.py:12564
#: modules/bibcirculation/lib/bibcirculation_templates.py:12649
#: modules/bibcirculation/lib/bibcirculation_templates.py:13261
#: modules/bibcirculation/lib/bibcirculation_templates.py:13372
#: modules/bibcirculation/lib/bibcirculation_templates.py:13459
#: modules/bibcirculation/lib/bibcirculation_templates.py:15587
msgid "Arrival date"
msgstr "Fecha de llegada"
#: modules/bibcirculation/lib/bibcirculation_templates.py:12941
#: modules/bibcirculation/lib/bibcirculation_templates.py:16847
#: modules/bibcirculation/lib/bibcirculation_templates.py:17034
msgid "Standard number"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:13001
#: modules/bibcirculation/lib/bibcirculation_templates.py:16919
#: modules/bibcirculation/lib/bibcirculation_templates.py:17035
#, fuzzy
msgid "Request details"
msgstr "Detalles de la petición PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:13061
#: modules/bibcirculation/lib/bibcirculation_templates.py:13172
#: modules/bibcirculation/lib/bibcirculation_templates.py:13285
#: modules/bibcirculation/lib/bibcirculation_templates.py:13388
#: modules/bibcirculation/lib/bibcirculation_templates.py:13475
#: modules/bibcirculation/lib/bibcirculation_templates.py:14959
#: modules/bibcirculation/lib/bibcirculation_templates.py:15151
#: modules/bibcirculation/lib/bibcirculation_templates.py:15467
#: modules/bibcirculation/lib/bibcirculation_templates.py:16920
#: modules/bibcirculation/lib/bibcirculation_templates.py:17036
#: modules/bibcirculation/lib/bibcirculation_templates.py:17317
#: modules/bibcirculation/lib/bibcirculation_templates.py:17515
msgid "Budget code"
msgstr "Código de presupuesto"
#: modules/bibcirculation/lib/bibcirculation_templates.py:13997
msgid "Purchase information updated with success."
msgstr "Se ha actualizado la información de compra."
#: modules/bibcirculation/lib/bibcirculation_templates.py:14070
#: modules/bibcirculation/lib/bibcirculation_templates.py:14139
msgid "New vendor information"
msgstr "Información del nuevo proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14183
msgid "A new vendor has been registered."
msgstr "El nuevo proveedor ha sido dado de alta."
#: modules/bibcirculation/lib/bibcirculation_templates.py:14220
#: modules/bibcirculation/lib/bibcirculation_templates.py:14539
msgid "Search vendor by"
msgstr "Buscar proveedor por"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14283
#: modules/bibcirculation/lib/bibcirculation_templates.py:14606
msgid "Vendor(s)"
msgstr "Proveedor(es)"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14388
#: modules/bibcirculation/lib/bibcirculation_templates.py:14459
msgid "Vendor information"
msgstr "Información del proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14669
#: modules/bibcirculation/lib/bibcirculation_templates.py:14758
msgid "Notes about this vendor"
msgstr "Notas sobre este proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14712
msgid "Vendor details"
msgstr "Detalles del proveedor"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14797
msgid "Add notes"
msgstr "Añadir notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:14850
#, fuzzy, python-format
msgid "Book does not exists in %(CFG_SITE_NAME)s"
msgstr "Este libro no existe en Invenio."
#: modules/bibcirculation/lib/bibcirculation_templates.py:14852
msgid "Please fill the following form."
-msgstr "Rellene por favor el sigüente formulario."
+msgstr "Por favor, rellene el siguiente formulario."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15014
#, fuzzy, python-format
msgid ""
"Borrower accepts the %(x_url_open)sconditions%(x_url_close)s of the service "
"in particular the return of books in due time."
msgstr ""
"El lector acepta el %s del servicio, en particular devolver los libros en el "
"plazo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
msgid "Borrower wants this edition only."
msgstr "El lector sólo quiere esta edición."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15158
msgid "Only this edition."
msgstr "Sólo esta edición."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15582
#, fuzzy
msgid "ILL ID"
msgstr "Código de petición PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:15645
msgid "Notes about this ILL"
msgstr "Notas sobre este PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:15823
msgid "No more requests are pending or waiting."
-msgstr "No existen más peticiones pendientes o esperando."
+msgstr "No hay más peticiones pendientes o en espera."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15975
msgid "Printable format"
msgstr "Formato imprimible"
#: modules/bibcirculation/lib/bibcirculation_templates.py:16006
#, fuzzy, python-format
msgid ""
"Check if the book already exists on %(CFG_SITE_NAME)s, before sending your "
"ILL request."
msgstr ""
"Compruebe si el libro existe en Invenio antes de solicitar una petición de "
"PI."
#: modules/bibcirculation/lib/bibcirculation_templates.py:16078
msgid "0 items found."
-msgstr "No se han encontrado items."
+msgstr "No se han encontrado elementos."
#: modules/bibcirculation/lib/bibcirculation_templates.py:16161
msgid "Proceed anyway"
msgstr "Continuar de todas maneras"
#: modules/bibcirculation/lib/bibcirculation_templates.py:16730
msgid ""
"According to a decision from the Scientific Information Policy Board, books "
"purchased with budget codes other than Team accounts will be added to the "
"Library catalogue, with the indication of the purchaser."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16751
#, fuzzy
msgid "Document details"
msgstr "Más detalles"
#: modules/bibcirculation/lib/bibcirculation_templates.py:16751
#, fuzzy
msgid "Document type"
msgstr "Tipo de documento desconocido"
#: modules/bibcirculation/lib/bibcirculation_templates.py:16845
#, fuzzy
msgid "This edition only"
msgstr "Sólo quiero esta edición."
#: modules/bibcirculation/lib/bibcirculation_templates.py:16920
msgid "Cash"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17308
msgid "Article details"
msgstr "Detalles del artículo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17309
#: modules/bibcirculation/lib/bibcirculation_templates.py:17507
msgid "Periodical title"
msgstr "Título de la revista"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17310
#: modules/bibcirculation/lib/bibcirculation_templates.py:17508
msgid "Article title"
msgstr "Título del artículo"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17312
#: modules/bibcirculation/lib/bibcirculation_templates.py:17510
msgid "Report number"
msgstr "Número de informe"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17724
#, fuzzy
msgid "Search ILL request by"
msgstr "Nueva petición de PI"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
#, fuzzy
msgid "ILL request id"
msgstr "Petición de préstamo interbibliotecario"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
msgid "cost"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
msgid "notes"
-msgstr "notes"
+msgstr "notas"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17764
#, fuzzy
msgid "date restriction"
msgstr "Actualizar los parámetros"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17765
msgid "the beginning"
msgstr "el principio"
#: modules/bibcirculation/lib/bibcirculation_templates.py:17766
msgid "now"
msgstr "ahora"
#: modules/bibcheck/web/admin/bibcheckadmin.py:60
msgid "BibCheck Admin"
msgstr "Administración de BibCheck"
#: modules/bibcheck/web/admin/bibcheckadmin.py:70
#: modules/bibcheck/web/admin/bibcheckadmin.py:250
#: modules/bibcheck/web/admin/bibcheckadmin.py:289
#: modules/bibcheck/web/admin/bibcheckadmin.py:326
msgid "Not authorized"
msgstr "No autorizado"
#: modules/bibcheck/web/admin/bibcheckadmin.py:80
#, python-format
msgid "ERROR: %s does not exist"
msgstr "ERROR: %s no existe"
#: modules/bibcheck/web/admin/bibcheckadmin.py:82
#, python-format
msgid "ERROR: %s is not a directory"
-msgstr "ERROR: %s no és un directorio"
+msgstr "ERROR: %s no es un directorio"
#: modules/bibcheck/web/admin/bibcheckadmin.py:84
#, python-format
msgid "ERROR: %s is not writable"
msgstr "ERROR: no tiene permiso de escritura en %s"
#: modules/bibcheck/web/admin/bibcheckadmin.py:117
msgid "Limit to knowledge bases containing string:"
msgstr "Limitarlo a las bases de conocimiento con el texto:"
#: modules/bibcheck/web/admin/bibcheckadmin.py:135
msgid "Really delete"
msgstr "Confirmación para eliminar"
#: modules/bibcheck/web/admin/bibcheckadmin.py:141
msgid "Verify syntax"
msgstr "Verifique la sintaxis"
#: modules/bibcheck/web/admin/bibcheckadmin.py:146
msgid "Create new"
msgstr "Crear otro"
#: modules/bibcheck/web/admin/bibcheckadmin.py:166
#, python-format
msgid "File %s does not exist."
-msgstr "El fichero %s no existe"
+msgstr "El fichero %s no existe."
#: modules/bibcheck/web/admin/bibcheckadmin.py:175
msgid "Calling bibcheck -verify failed."
msgstr "La invocación bibcheck -verify ha fallado."
#: modules/bibcheck/web/admin/bibcheckadmin.py:182
msgid "Verify BibCheck config file"
-msgstr "Verifique el archivo de configuración de BibCheck"
+msgstr "Verifique el fichero de configuración de BibCheck"
#: modules/bibcheck/web/admin/bibcheckadmin.py:183
msgid "Verify problem"
msgstr "Problema de verificación"
#: modules/bibcheck/web/admin/bibcheckadmin.py:205
msgid "File"
msgstr "Fichero"
#: modules/bibcheck/web/admin/bibcheckadmin.py:241
msgid "Edit BibCheck config file"
msgstr "Editar el fichero de configuración de BibCheck"
#: modules/bibcheck/web/admin/bibcheckadmin.py:269
#, python-format
msgid "File %s already exists."
msgstr "El fichero %s ya existe."
#: modules/bibcheck/web/admin/bibcheckadmin.py:272
#, python-format
msgid "File %s: written OK."
msgstr "Fichero %s escrito correctamente."
#: modules/bibcheck/web/admin/bibcheckadmin.py:278
#, python-format
msgid "File %s: write failed."
-msgstr "Fitxer %s: no se ha podido escribir."
+msgstr "Fichero %s: no se ha podido escribir."
#: modules/bibcheck/web/admin/bibcheckadmin.py:280
msgid "Save BibCheck config file"
msgstr "Guardar el fichero de configuración de BibCheck"
#: modules/bibcheck/web/admin/bibcheckadmin.py:313
#, python-format
msgid "File %s deleted."
msgstr "Fichero %s eliminado."
#: modules/bibcheck/web/admin/bibcheckadmin.py:315
#, python-format
msgid "File %s: delete failed."
msgstr "Fichero %s: no se ha podido eliminar."
#: modules/bibcheck/web/admin/bibcheckadmin.py:317
msgid "Delete BibCheck config file"
msgstr "Eliminar el fichero de configuración de BibCheck"
#: modules/bibharvest/lib/oai_repository_admin.py:155
#: modules/bibharvest/lib/oai_repository_admin.py:260
#: modules/bibharvest/lib/oai_repository_admin.py:339
msgid "Return to main selection"
msgstr "Volver a la selección principal"
#: modules/bibharvest/lib/oai_harvest_admin.py:119
msgid "Overview of sources"
msgstr "Resumen de los servidores OAI"
#: modules/bibharvest/lib/oai_harvest_admin.py:120
msgid "Harvesting status"
msgstr "Estado de la recolección"
#: modules/bibharvest/lib/oai_harvest_admin.py:138
msgid "Not Set"
msgstr "Sin definir"
#: modules/bibharvest/lib/oai_harvest_admin.py:139
msgid "never"
msgstr "nunca"
#: modules/bibharvest/lib/oai_harvest_admin.py:150
msgid "Never harvested"
msgstr "Nunca recolectado"
#: modules/bibharvest/lib/oai_harvest_admin.py:162
msgid "View Holding Pen"
msgstr "Ver los registros en espera de revisión"
#: modules/bibharvest/lib/oai_harvest_admin.py:187
#: modules/bibharvest/lib/oai_harvest_admin.py:559
msgid "No OAI source ID selected."
-msgstr "No ha seleccionado ningún identificador de servidor OAI"
+msgstr "No ha seleccionado ningún identificador de servidor OAI."
#: modules/bibharvest/lib/oai_harvest_admin.py:290
#: modules/bibharvest/lib/oai_harvest_admin.py:463
#: modules/bibharvest/lib/oai_harvest_admin.py:477
#: modules/bibharvest/lib/oai_harvest_admin.py:492
#: modules/bibharvest/lib/oai_harvest_admin.py:500
#: modules/bibharvest/lib/oai_harvest_admin.py:547
msgid "Go back to the OAI sources overview"
-msgstr "Volver a la lista dels servidores OAI"
+msgstr "Volver a la lista de servidores OAI"
#: modules/bibharvest/lib/oai_harvest_admin.py:449
msgid "Try again with another url"
msgstr "Pruebe con otra URL"
#: modules/bibharvest/lib/oai_harvest_admin.py:456
msgid "Continue anyway"
msgstr "Continuar igualmente"
#: modules/bibharvest/lib/oai_harvest_admin.py:830
msgid "Return to the month view"
msgstr "Volver al resumen mensual"
#: modules/bibharvest/lib/oai_harvest_admin.py:1104
msgid "Compare with original"
msgstr "Comparar con el original"
#: modules/bibharvest/lib/oai_harvest_admin.py:1110
#: modules/bibharvest/lib/oai_harvest_admin.py:1155
msgid "Delete from holding pen"
msgstr "Eliminar de la lista de espera"
#: modules/bibharvest/lib/oai_harvest_admin.py:1128
msgid "Error when retrieving the Holding Pen entry"
msgstr "Error al recuperar la entrada en espera"
#: modules/bibharvest/lib/oai_harvest_admin.py:1136
msgid "Error when retrieving the record"
msgstr "Error al recuperar el registro"
#: modules/bibharvest/lib/oai_harvest_admin.py:1144
msgid ""
"Error when formatting the Holding Pen entry. Probably its content is broken"
msgstr ""
-"Error al formatear la entrada en espera. Probablemente su contenido esté mal"
+"Error al formatear la entrada en espera. Probablemente su contenido esté dañado"
#: modules/bibharvest/lib/oai_harvest_admin.py:1149
msgid "Accept Holding Pen version"
msgstr "Aceptar la versión en espera de revisión"
#: modules/bibknowledge/lib/bibknowledge_templates.py:51
#, python-format
msgid ""
"Limit display to knowledge bases matching %(keyword_field)s in their rules "
"and descriptions"
msgstr ""
"Limitar la visualización a las bases de conocimiento con el texto "
"%(keyword_field)s en sus reglas y descripciones"
#: modules/bibknowledge/lib/bibknowledge_templates.py:89
msgid "No Knowledge Base"
msgstr "Sin base de conocimiento"
#: modules/bibknowledge/lib/bibknowledge_templates.py:148
msgid "Add New Knowledge Base"
msgstr "Añadir otra base de conocimiento"
#: modules/bibknowledge/lib/bibknowledge_templates.py:149
msgid "Configure a dynamic KB"
msgstr "Configurar una base de conocimiento dinámica"
#: modules/bibknowledge/lib/bibknowledge_templates.py:150
msgid "Add New Taxonomy"
msgstr "Añadir una nueva taxonomía"
#: modules/bibknowledge/lib/bibknowledge_templates.py:191
msgid "This knowledge base already has a taxonomy file."
-msgstr "Esta base de conociminento ya tiene un archivo de taxonomía"
+msgstr "Esta base de conocimiento ya tiene un fichero de taxonomía."
#: modules/bibknowledge/lib/bibknowledge_templates.py:192
msgid "If you upload another file, the current version will be replaced."
-msgstr "Si sube otro archivo, se reemplazará la versión actual."
+msgstr "Si sube otro fichero, se reemplazará la versión actual."
#: modules/bibknowledge/lib/bibknowledge_templates.py:194
#, python-format
msgid "The current taxonomy can be accessed with this URL: %s"
msgstr "La taxonomía actual es accesible desde esta URL: %s"
#: modules/bibknowledge/lib/bibknowledge_templates.py:197
#, python-format
msgid "Please upload the RDF file for taxonomy %s"
msgstr "Suba el fichero RDF de la taxonomía %s"
#: modules/bibknowledge/lib/bibknowledge_templates.py:234
msgid "Please configure"
msgstr "Es necesario configurarla"
#: modules/bibknowledge/lib/bibknowledge_templates.py:235
msgid ""
"A dynamic knowledge base is a list of values of a "
"given field. The list is generated dynamically by "
"searching the records using a search expression."
msgstr ""
-"Un base de conocimiento dinámico es una lista de valores de un campo. La "
-"lista se genera dinámicamente a medida que se buscan registros a partir de "
-"un valor de búsqueda."
+"Una base de conocimiento dinámica es una lista de valores de un campo. La "
+"lista se genera dinámicamente buscando en los registros con una expresión "
+"de búsqueda."
#: modules/bibknowledge/lib/bibknowledge_templates.py:239
msgid ""
"Example: Your records contain field 270__a for the "
"name and address of the author's institute. If you "
"set the field to '270__a' and the expression to "
"'270__a:*Paris*', a list of institutes in Paris "
"will be created."
msgstr ""
-"Por ejemplo: los registros tenen el camp 270__a para el nombre y la "
+"Por ejemplo: los registros tienen el campo 270__a para el nombre y la "
"dirección de la institución del autor. Si pone como valor de campo «270__a» "
-"y la expresión «270__a:*Paris*», creará una lista d'instituciones en París."
+"y la expresión «270__a:*Paris*», creará una lista de instituciones en París."
#: modules/bibknowledge/lib/bibknowledge_templates.py:244
msgid ""
"If the expression is empty, a list of all values in "
"270__a will be created."
msgstr ""
"Si deja la expresión vacía, creará una lista con todos los valores del campo "
"270__a."
#: modules/bibknowledge/lib/bibknowledge_templates.py:246
msgid ""
"If the expression contains '%', like '270__a:*%*', "
"it will be replaced by a search string when the "
"knowledge base is used."
msgstr ""
-"Si la expresión contiene «%», como «270__a:*%*», será remplazado por el "
-"valor creado cuando se use la base de conocimiento."
+"Si la expresión contiene «%», como «270__a:*%*», este se reemplazará por el "
+"texto de búsqueda cuando se use la base de conocimiento."
#: modules/bibknowledge/lib/bibknowledge_templates.py:249
msgid ""
"You can enter a collection name if the expression "
"should be evaluated in a specific collection."
msgstr ""
-"Puede entrar un nombre de colección si la expresión se ha de evaluar en una "
+"Puede introducir un nombre de colección si la expresión se ha de evaluar en una "
"colección específica."
#: modules/bibknowledge/lib/bibknowledge_templates.py:251
msgid ""
"Example 1: Your records contain field 270__a for "
"the name and address of the author's institute. If "
"you set the field to '270__a' and the expression to "
"'270__a:*Paris*', a list of institutes in Paris "
"will be created."
msgstr ""
-"Ejemplo 1: los registros tenen el camp 270__a para el nombre y la dirección "
+"Ejemplo 1: los registros tienen el campo 270__a para el nombre y la dirección "
"de la institución del autor. Si pone como valor de campo «270__a» y la "
-"expresión «270__a:*Paris*», creará una lista d'instituciones en París."
+"expresión «270__a:*Paris*», creará una lista de instituciones en París."
#: modules/bibknowledge/lib/bibknowledge_templates.py:256
msgid ""
"Example 2: Return the institute's name (100__a) when "
"the user gives its postal code "
"(270__a): Set field to 100__a, expression to 270__a:"
"*%*."
msgstr ""
"Ejemplo 2: mostrar el nombre del instituto (100__a) cuando el usuario "
"informe del código postal (270__a): escriba 100__a en el campo, y la "
"expresión como 270__a:*%*."
#: modules/bibknowledge/lib/bibknowledge_templates.py:260
msgid "Any collection"
msgstr "Cualquier colección"
#: modules/bibknowledge/lib/bibknowledge_templates.py:282
msgid "Exporting: "
msgstr "Exportando: "
#: modules/bibknowledge/lib/bibknowledge_templates.py:324
#: modules/bibknowledge/lib/bibknowledge_templates.py:588
#: modules/bibknowledge/lib/bibknowledge_templates.py:657
msgid "Knowledge Base Mappings"
-msgstr "Mapeados de la base de conocimientos"
+msgstr "Asignaciones de la base de conocimientos"
#: modules/bibknowledge/lib/bibknowledge_templates.py:325
#: modules/bibknowledge/lib/bibknowledge_templates.py:589
#: modules/bibknowledge/lib/bibknowledge_templates.py:658
msgid "Knowledge Base Attributes"
msgstr "Atributos de la base de conocimientos"
#: modules/bibknowledge/lib/bibknowledge_templates.py:326
#: modules/bibknowledge/lib/bibknowledge_templates.py:590
#: modules/bibknowledge/lib/bibknowledge_templates.py:659
msgid "Knowledge Base Dependencies"
msgstr "Dependencias de la base de conocimiento"
#: modules/bibknowledge/lib/bibknowledge_templates.py:347
msgid ""
"Here you can add new mappings to this base and "
"change the base attributes."
msgstr ""
-"Aquí puede añadir nuevos mapajes a esta base y cambiar los atributos de la "
+"Aquí puede añadir nuevas asignaciones a esta base y cambiar los atributos de la "
"base."
#: modules/bibknowledge/lib/bibknowledge_templates.py:362
msgid "Map From"
-msgstr "Convertir de:"
+msgstr "Asignar desde"
#: modules/bibknowledge/lib/bibknowledge_templates.py:425
msgid "Search for a mapping"
-msgstr "Buscar una conversión"
+msgstr "Buscar una asignación"
#: modules/bibknowledge/lib/bibknowledge_templates.py:480
msgid "Knowledge base is empty"
msgstr "La base de conocimiento está vacía"
#: modules/bibknowledge/lib/bibknowledge_templates.py:545
msgid "You can get a these mappings in textual format by: "
-msgstr "Puede obtener los mapajes de manera textual haciendo: "
+msgstr "Puede obtener las asignaciones de manera textual haciendo: "
#: modules/bibknowledge/lib/bibknowledge_templates.py:547
msgid "And the KBA version by:"
msgstr "Y la versión KBA haciendo:"
#: modules/bibknowledge/lib/bibknowledge_templates.py:627
msgid "Update Base Attributes"
msgstr "Actualizar los atributos de la base"
#: modules/bibknowledge/lib/bibknowledge_templates.py:670
msgid "This knowledge base is not used in any format elements."
msgstr ""
-"Esta base de conociminento no se está utilizando en ningún elemento de "
+"Esta base de conocimiento no se está utilizando en ningún elemento de "
"formato."
#: modules/bibknowledge/lib/bibknowledge_templates.py:700
#, python-format
msgid "Your rule: %s"
msgstr "Su regla: %s"
#: modules/bibknowledge/lib/bibknowledge_templates.py:702
#, python-format
msgid ""
"The left side of the rule (%s) already appears in these knowledge bases:"
msgstr ""
"La parte izquierda de la regla (%s) ya aparece en estas bases de "
"conocimiento:"
#: modules/bibknowledge/lib/bibknowledge_templates.py:705
#, python-format
msgid ""
"The right side of the rule (%s) already appears in these knowledge bases:"
msgstr ""
"La parte derecha de la regla (%s) ya aparece en estas bases de conocimiento:"
#: modules/bibknowledge/lib/bibknowledge_templates.py:719
msgid "Please select action"
msgstr "Seleccione una acción"
#: modules/bibknowledge/lib/bibknowledge_templates.py:720
msgid "Replace the selected rules with this rule"
msgstr "Reemplace las reglas seleccionadas con esta regla"
#: modules/bibknowledge/lib/bibknowledge_templates.py:721
msgid "Add this rule in the current knowledge base"
msgstr "Añadir esta regla a la base de conocimiento actual"
#: modules/bibknowledge/lib/bibknowledge_templates.py:722
msgid "Cancel: do not add this rule"
msgstr "Cancelar: no añadir esta regla"
#: modules/bibknowledge/lib/bibknowledge_templates.py:755
msgid ""
"It is not possible to have two rules with the same left side in the same "
"knowledge base."
msgstr ""
"No es posible tener dos reglas con la misma parte izquierda en la misma base "
-"de conocimento."
+"de conocimiento."
#: modules/bibknowledge/lib/bibknowledgeadmin.py:72
msgid "BibKnowledge Admin"
msgstr "Administración de BibKnowledge"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:92
msgid "Knowledge Bases"
msgstr "Bases de conocimiento"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:106
#: modules/bibknowledge/lib/bibknowledgeadmin.py:117
#: modules/bibknowledge/lib/bibknowledgeadmin.py:129
msgid "Cannot upload file"
msgstr "No ha sido posible subir el fichero"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:107
msgid "You have not selected a file to upload"
msgstr "No ha seleccionado ningún fichero para subir"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:141
#, python-format
msgid "File %s uploaded."
msgstr "Fichero %s subido."
#: modules/bibknowledge/lib/bibknowledgeadmin.py:143
msgid "File uploaded"
msgstr "Fichero subido"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:172
#: modules/bibknowledge/lib/bibknowledgeadmin.py:216
#: modules/bibknowledge/lib/bibknowledgeadmin.py:266
#: modules/bibknowledge/lib/bibknowledgeadmin.py:303
#: modules/bibknowledge/lib/bibknowledgeadmin.py:356
#: modules/bibknowledge/lib/bibknowledgeadmin.py:465
#: modules/bibknowledge/lib/bibknowledgeadmin.py:524
#: modules/bibknowledge/lib/bibknowledgeadmin.py:590
#: modules/bibknowledge/lib/bibknowledgeadmin.py:686
#: modules/bibknowledge/lib/bibknowledgeadmin.py:703
#: modules/bibknowledge/lib/bibknowledgeadmin.py:718
#: modules/bibknowledge/lib/bibknowledgeadmin.py:754
msgid "Manage Knowledge Bases"
msgstr "Gestionar las bases de conocimiento"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:185
#: modules/bibknowledge/lib/bibknowledgeadmin.py:230
#: modules/bibknowledge/lib/bibknowledgeadmin.py:316
#: modules/bibknowledge/lib/bibknowledgeadmin.py:370
#: modules/bibknowledge/lib/bibknowledgeadmin.py:478
#: modules/bibknowledge/lib/bibknowledgeadmin.py:543
#: modules/bibknowledge/lib/bibknowledgeadmin.py:730
msgid "Unknown Knowledge Base"
msgstr "Base de conocimiento desconocida"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:192
#, python-format
msgid "Knowledge Base %s"
msgstr "Base de conocimiento %s"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:239
#, python-format
msgid "Knowledge Base %s Attributes"
msgstr "Atributos de la base de conocimiento %s"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:325
#, python-format
msgid "Knowledge Base %s Dependencies"
msgstr "Dependencias de la base de conocimiento %s"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:407
msgid "Left side exists"
msgstr "Ya existe la parte izquierda"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:415
msgid "Right side exists"
msgstr "Ya existe la parte derecha"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:592
msgid "Knowledge base name missing"
msgstr "Falta el nombre de la base de conocimiento"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:612
msgid "Unknown knowledge base"
msgstr "Base de conocimiento desconocida"
#: modules/bibknowledge/lib/bibknowledgeadmin.py:613
msgid "There is no knowledge base with that name."
msgstr "No existe ninguna base de conocimiento con este nombre."
#: modules/bibknowledge/lib/bibknowledgeadmin.py:718
msgid "Delete Knowledge Base"
msgstr "Suprimir la base de conocimiento"
#: modules/bibsword/lib/bibsword_webinterface.py:157
msgid "BibSword Admin Interface"
msgstr "Administración de BibSword"
#: modules/bibsword/lib/bibsword_webinterface.py:171
#: modules/bibsword/lib/bibsword_webinterface.py:277
#: modules/bibsword/lib/bibsword_webinterface.py:301
#: modules/bibsword/lib/bibsword_webinterface.py:330
msgid "Export with BibSword: Step 2/4"
msgstr "Exportar con BibSword: paso 2/4"
#: modules/bibsword/lib/bibsword_webinterface.py:222
#: modules/bibsword/lib/bibsword_webinterface.py:233
#: modules/bibsword/lib/bibsword_webinterface.py:291
msgid "Export with BibSword: Step 1/4"
msgstr "Exportar con BibSword: paso 1/4"
#: modules/bibsword/lib/bibsword_webinterface.py:315
#: modules/bibsword/lib/bibsword_webinterface.py:343
#: modules/bibsword/lib/bibsword_webinterface.py:374
msgid "Export with BibSword: Step 3/4"
msgstr "Exportar con BibSword: paso 3/4"
#: modules/bibsword/lib/bibsword_webinterface.py:358
#: modules/bibsword/lib/bibsword_webinterface.py:389
msgid "Export with BibSword: Step 4/4"
msgstr "Exportar con BibSword: paso 4/4"
#: modules/bibsword/lib/bibsword_webinterface.py:434
msgid "Export with BibSword: Acknowledgement"
msgstr "Exportar con BibSword: verificación"
#: modules/bibupload/lib/batchuploader_engine.py:243
msgid "More than one possible recID, ambiguous behaviour"
msgstr "Más de un posible recID, comportamiento ambiguo"
#: modules/bibupload/lib/batchuploader_engine.py:243
msgid "No records match that file name"
msgstr "Ningún registro tiene ficheros con este nombre"
#: modules/bibupload/lib/batchuploader_engine.py:244
msgid "File already exists"
msgstr "Este fichero ya existe"
#: modules/bibupload/lib/batchuploader_engine.py:244
msgid "A file with the same name and format already exists"
-msgstr "Ya existe un registro com este nombre y formato"
+msgstr "Ya existe un registro con este nombre y formato"
#: modules/bibupload/lib/batchuploader_engine.py:245
#, python-format
msgid "No rights to upload to collection '%s'"
msgstr "No tiene permiso para subir documentos a la colección «%s»"
#: modules/bibupload/lib/batchuploader_engine.py:449
msgid "Guests are not authorized to run batchuploader"
msgstr ""
"Los usuarios no identificados no están autorizados a efectuar cargas masivas"
#: modules/bibupload/lib/batchuploader_engine.py:451
#, python-format
msgid "The user '%s' is not authorized to run batchuploader"
msgstr "El usuario «%s» no está autorizado a efectuar cargas masivas"
#: modules/bibupload/lib/batchuploader_engine.py:506
#: modules/bibupload/lib/batchuploader_engine.py:519
#, python-format
msgid ""
"The user '%(x_user)s' is not authorized to modify collection '%(x_coll)s'"
msgstr ""
"El usuario «%(x_user)s» no está autorizado a modificar la colección "
"«%(x_coll)s»"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:267
msgid "Fatal: Author ID capabilities are disabled on this system."
msgstr ""
"Fatal: no están habilitadas las opciones de identificación de autor (Author "
"ID) en este sistema."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:270
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:313
msgid "Fatal: You are not allowed to access this functionality."
-msgstr "Fatal: no está autorizado a accedir a esta funcionalidad."
+msgstr "Fatal: no está autorizado a acceder a esta funcionalidad."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:662
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:763
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:920
msgid "Papers removed from this profile"
msgstr "Documentos eliminados de este perfil"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:663
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:667
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:728
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:732
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:764
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:768
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:921
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:925
msgid "Papers in need of review"
msgstr "Documentos que necesitan revisión"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:664
msgid "Open Tickets"
msgstr "Tareas abiertas"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:664
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:729
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:765
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:922
msgid "Data"
msgstr "Datos"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:665
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:766
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:923
msgid "Papers of this Person"
msgstr "Documentos de esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:666
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:767
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:924
msgid "Papers _not_ of this Person"
msgstr "Documentos _no_ de esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:668
msgid "Tickets for this Person"
msgstr "Tareas para esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:669
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:734
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:770
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:927
msgid "Additional Data for this Person"
msgstr "Otros datos de esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:671
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:735
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:771
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:947
msgid "Sorry, there are currently no documents to be found in this category."
msgstr "Actualmente no hay ningún documento en esta categoría."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:672
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:772
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:948
msgid "Yes, those papers are by this person."
msgstr "Sí, estos documentos son de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:673
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:773
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:949
msgid "No, those papers are not by this person"
msgstr "No, estos documentos no son de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:674
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:774
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:950
msgid "Assign to other person"
msgstr "Asignarlos a otra persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:675
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:739
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:775
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:951
msgid "Forget decision"
msgstr "Olvidar la decisión"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:676
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:690
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:776
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:790
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:952
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:966
msgid "Confirm!"
msgstr "¡Confirmar!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:677
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:777
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:953
msgid "Yes, this paper is by this person."
msgstr "Sí, este documento es de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:678
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:778
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:954
msgid "Rejected!"
msgstr "¡Rechazado!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:679
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:779
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:955
msgid "No, this paper is <i>not</i> by this person"
msgstr "No, este documento <i>no</i> es de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:680
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:688
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:696
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:744
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:752
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:760
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:780
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:788
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:796
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:956
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:964
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:972
msgid "Assign to another person"
msgstr "Asignarlo a otra persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:681
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:689
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:697
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:745
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:753
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:761
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:781
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:789
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:797
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:957
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:965
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:973
msgid "To other person!"
msgstr "¡A otra persona!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:682
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:782
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:958
msgid "Confirmed."
msgstr "Confirmado."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:683
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:783
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:959
msgid "Marked as this person's paper"
msgstr "Marcado como de esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:684
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:692
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:748
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:756
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:757
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:784
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:792
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:960
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:968
msgid "Forget decision!"
msgstr "¡Olvidar la decisión!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:685
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:693
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:785
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:793
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:961
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:969
msgid "Forget decision."
msgstr "Olvidar la decisión."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:686
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:786
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:962
msgid "Repeal!"
msgstr "¡Anular!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:687
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:787
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:963
msgid "But it's <i>not</i> this person's paper."
-msgstr "Pero <i>no</i> es el document de esta persona."
+msgstr "Pero <i>no</i> es el documento de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:691
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:791
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:967
msgid "But it <i>is</i> this person's paper."
-msgstr "Pero <i>sí</i> que es un document de esta persona."
+msgstr "Pero <i>sí</i> que es un documento de esta persona."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:694
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:794
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:970
msgid "Repealed"
msgstr "Anulado"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:695
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:795
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:971
msgid "Marked as not this person's paper"
msgstr "Marcado como que no es de esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:727
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:730
msgid "Your papers"
msgstr "Sus documentos"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:727
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:731
msgid "Not your papers"
msgstr "Documentos no suyos"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:736
msgid "These are mine!"
msgstr "¡Éstos son míos!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:737
msgid "These are not mine!"
msgstr "¡Estos no son míos!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:738
msgid "It's not mine, but I know whose it is!"
msgstr "No es mío, ¡pero sé de quién es!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:740
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:754
msgid "Mine!"
msgstr "¡Mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:741
msgid "This is my paper!"
-msgstr "¡Este es mi document!"
+msgstr "¡Este es mi documento!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:742
msgid "Not mine!"
msgstr "¡No es mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:743
msgid "This is not my paper!"
msgstr "¡Este documento no es mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:746
msgid "Not Mine."
msgstr "No es mío."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:747
msgid "Marked as my paper!"
msgstr "¡Marcado como mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:749
msgid "Forget assignment decision"
msgstr "Olvidar la decisión de asignación"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:750
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:758
msgid "Not Mine!"
msgstr "¡No es mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:751
msgid "But this is mine!"
msgstr "¡Pero este es mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:755
msgid "But this is my paper!"
msgstr "¡Pero este documento es mío!"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:759
msgid "Marked as not your paper."
msgstr "Marcado como que no es suyo."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:769
msgid "Tickes you created about this person"
msgstr "Tareas que usted ha creado sobre esta persona"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:922
msgid "Tickets"
msgstr "Tareas"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:926
msgid "Request Tickets"
msgstr "Peticiones"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:1178
msgid "Submit Attribution Information"
msgstr "Enviar la información de atribución"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:1323
msgid "Please review your actions"
msgstr "Revise por favor sus acciones"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2008
msgid "Claim this paper"
msgstr "Reivindicar este documento"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2109
msgid ""
"<p>We're sorry. An error occurred while handling your request. Please find "
"more information below:</p>"
msgstr ""
"<p>Desgraciadamente, ha ocurrido un error mientras se gestionaba su "
"petición. Vea aquí más información:</p>"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2187
msgid "Person search for assignment in progress!"
msgstr "Se está efectuando la búsqueda de la persona para la asignación."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2188
msgid "You are searching for a person to assign the following papers:"
msgstr "Está buscando una persona para asignarle estos documentos:"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2349
#, python-format
msgid "You are going to claim papers for: %s"
msgstr "Está a punto de reivindicar documentos en nombre de: %s"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2377
msgid "This page in not accessible directly."
msgstr "No puede acceder a esta página directamente."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2379
msgid "Welcome!"
msgstr "¡Bienvenido(a)!"
#: modules/bibauthorid/lib/bibauthorid_templates.py:153
msgid "Click here to review the transactions."
msgstr "Pinche aquí para revisar las transacciones."
#: modules/bibauthorid/lib/bibauthorid_templates.py:196
msgid "Quit searching."
msgstr "Abandonar la búsqueda."
#: modules/bibauthorid/lib/bibauthorid_templates.py:417
msgid "You are about to attribute the following paper"
msgstr "Está a punto de atribuir este documento"
#: modules/bibauthorid/lib/bibauthorid_templates.py:439
msgid "Info"
msgstr "Información"
#: modules/bibauthorid/lib/bibauthorid_templates.py:451
msgid " Search for a person to attribute the paper to"
msgstr " Buscar a una persona para atribuirle el documento"
#: modules/bibauthorid/lib/bibauthorid_templates.py:512
#: modules/bibauthorid/lib/bibauthorid_templates.py:607
#: modules/bibauthorid/lib/bibauthorid_templates.py:679
msgid "Select All"
msgstr "Seleccionarlos todos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:513
#: modules/bibauthorid/lib/bibauthorid_templates.py:608
#: modules/bibauthorid/lib/bibauthorid_templates.py:680
msgid "Select None"
msgstr "No seleccionar ninguno"
#: modules/bibauthorid/lib/bibauthorid_templates.py:514
#: modules/bibauthorid/lib/bibauthorid_templates.py:609
#: modules/bibauthorid/lib/bibauthorid_templates.py:681
msgid "Invert Selection"
msgstr "Invertir la selección"
#: modules/bibauthorid/lib/bibauthorid_templates.py:516
#: modules/bibauthorid/lib/bibauthorid_templates.py:611
msgid "Hide successful claims"
msgstr "Esconder las reivindicaciones satisfechas"
#: modules/bibauthorid/lib/bibauthorid_templates.py:576
msgid "No status information found."
msgstr "No se ha encontrado información del estado."
#: modules/bibauthorid/lib/bibauthorid_templates.py:598
msgid "Operator review of user actions pending"
-msgstr "Revisión por parte del operador de las acciones de usuari pendientes"
+msgstr "Revisión por parte del operador de las acciones de usuario pendientes"
#: modules/bibauthorid/lib/bibauthorid_templates.py:642
msgid "Sorry, there are currently no records to be found in this category."
msgstr "Actualmente no hay ningún registro en esta categoría."
#: modules/bibauthorid/lib/bibauthorid_templates.py:671
msgid "Review Transaction"
msgstr "Revise la transacción"
#: modules/bibauthorid/lib/bibauthorid_templates.py:678
msgid " On all pages: "
msgstr " En todas las páginas: "
#: modules/bibauthorid/lib/bibauthorid_templates.py:714
msgid "Names variants:"
msgstr "Variantes del nombre:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:836
msgid "These records have been marked as not being from this person."
msgstr "Estos registros se han marcado como que no son de esta persona."
#: modules/bibauthorid/lib/bibauthorid_templates.py:837
msgid "They will be regarded in the next run of the author "
msgstr "Se tendrán en cuenta en la próxima ejecución del algoritmo "
#: modules/bibauthorid/lib/bibauthorid_templates.py:838
msgid "disambiguation algorithm and might disappear from this listing."
msgstr "de desambiguación de autores y podrán desaparecer de esta lista."
#: modules/bibauthorid/lib/bibauthorid_templates.py:864
#: modules/bibauthorid/lib/bibauthorid_templates.py:865
#: modules/bibauthorid/lib/bibauthorid_templates.py:868
msgid "Not provided"
msgstr "Sin información"
#: modules/bibauthorid/lib/bibauthorid_templates.py:866
msgid "Not available"
msgstr "No disponible"
#: modules/bibauthorid/lib/bibauthorid_templates.py:867
msgid "No comments"
msgstr "Sin comentarios"
#: modules/bibauthorid/lib/bibauthorid_templates.py:869
msgid "Not Available"
msgstr "No disponible"
#: modules/bibauthorid/lib/bibauthorid_templates.py:889
msgid " Delete this ticket"
msgstr " Suprimir esta tarea"
#: modules/bibauthorid/lib/bibauthorid_templates.py:893
msgid " Commit this entire ticket"
msgstr " Dar por buena toda esta tarea"
#: modules/bibauthorid/lib/bibauthorid_templates.py:952
msgid "... This tab is currently under construction ... "
msgstr "... Esta pestaña está todavía en construcción... "
#: modules/bibauthorid/lib/bibauthorid_templates.py:973
msgid ""
"We could not reliably determine the name of the author on the records below "
"to automatically perform an assignment."
msgstr ""
"No se ha podido determinar de una manera fiable el nombre del autor de los "
"siguientes registros para realizar una asignación automática."
#: modules/bibauthorid/lib/bibauthorid_templates.py:975
msgid "Please select an author for the records in question."
msgstr "Seleccione un autor para los registros en cuestión."
#: modules/bibauthorid/lib/bibauthorid_templates.py:976
msgid "Boxes not selected will be ignored in the process."
msgstr "Las casillas no seleccionadas serán ignoradas en el proceso."
#: modules/bibauthorid/lib/bibauthorid_templates.py:983
msgid "Select name for"
msgstr "Seleccione el nombre para"
#: modules/bibauthorid/lib/bibauthorid_templates.py:992
#: modules/bibauthorid/lib/bibauthorid_templates.py:1018
#: modules/bibauthorid/lib/bibauthorid_templates.py:1162
msgid "Error retrieving record title"
msgstr "Error al recuperar el título del registro"
#: modules/bibauthorid/lib/bibauthorid_templates.py:994
msgid "Paper title: "
msgstr "Título del documento: "
#: modules/bibauthorid/lib/bibauthorid_templates.py:1006
msgid "The following names have been automatically chosen:"
msgstr "Se han escogido automáticamente estos nombres:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1021
msgid " -- With name: "
msgstr " -- Con el nombre: "
#: modules/bibauthorid/lib/bibauthorid_templates.py:1027
msgid "Ignore"
msgstr "Ignorar"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1076
#: modules/bibauthorid/lib/bibauthorid_templates.py:1092
msgid "Navigation:"
msgstr "Navegación:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1077
msgid "Run paper attribution for another author"
msgstr "Ejecutar la atribución de documentos para otro autor"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1078
#: modules/bibauthorid/lib/bibauthorid_templates.py:1095
msgid "Person Interface FAQ"
msgstr "Preguntas más frecuentes sobre la interfaz de personas"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1093
msgid "Person Search"
msgstr "Buscar personas"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1094
msgid "Open tickets"
msgstr "Tareas abiertas"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1123
msgid "Symbols legend: "
msgstr "Leyenda: "
#: modules/bibauthorid/lib/bibauthorid_templates.py:1128
#: modules/bibauthorid/lib/bibauthorid_templates.py:1186
msgid "Everything is shiny, captain!"
msgstr "¡Todo va sobre ruedas, capitán!"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1129
msgid "The result of this request will be visible immediately"
-msgstr "El resultado de esta petición será visible imediatamente"
+msgstr "El resultado de esta petición será visible inmediatamente"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1134
msgid "Confirmation needed to continue"
msgstr "Hace falta la confirmación para continuar"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1135
msgid ""
"The result of this request will be visible immediately but we need your "
"confirmation to do so for this paper have been manually claimed before"
msgstr ""
-"El resultado de esta petición será visible imediatamente pero es necesaria "
-"su confirmación, ya que este document ya había sido reivindicado manualmente"
+"El resultado de esta petición será visible inmediatamente pero es necesaria "
+"su confirmación, ya que este documento ya había sido reivindicado manualmente"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1140
msgid "This will create a change request for the operators"
msgstr "Esto creará una petición de cambio a los operadores"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1141
msgid ""
"The result of this request will be visible upon confirmation through an "
"operator"
msgstr ""
"El resultado de esta petición será visible una vez sea confirmado por un "
"operador"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1179
msgid "Selected name on paper"
msgstr "Nombre seleccionado en el documento"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1190
msgid "Verification needed to continue"
msgstr "Hace falta la verificación para continuar"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1194
msgid "This will create a request for the operators"
msgstr "Esto creará una petición a los operadores"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1216
msgid "Please Check your entries"
msgstr "Compruebe sus entradas"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1216
msgid "Sorry."
msgstr "Lo sentimos."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1221
msgid "Please provide at least one transaction."
msgstr "Seleccione al menos una transacción."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1221
msgid "Error:"
msgstr "Error:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1232
msgid "Please provide your information"
msgstr "Introduzca sus datos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1239
msgid "Please provide your first name"
msgstr "Introduzca su nombre de pila"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1243
#: modules/bibauthorid/lib/bibauthorid_templates.py:1245
msgid "Your first name:"
msgstr "Su nombre:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1251
msgid "Please provide your last name"
msgstr "Introduzca sus apellidos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1256
#: modules/bibauthorid/lib/bibauthorid_templates.py:1258
msgid "Your last name:"
msgstr "Sus apellidos:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1266
msgid "Please provide your eMail address"
msgstr "Introduzca su dirección de correo electrónico"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1272
msgid ""
"This eMail address is reserved by a user. Please log in or provide an "
"alternative eMail address"
msgstr ""
"Esta dirección de correo electrónico está reservada por otro usuario. Por "
"favor, inicie sesión o proporcione una dirección alternativa"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1277
#: modules/bibauthorid/lib/bibauthorid_templates.py:1279
msgid "Your eMail:"
msgstr "Su dirección de correo electrónico:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1283
msgid "You may leave a comment (optional)"
msgstr "Puede dejar un comentario (opcional)"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1296
msgid "Continue claiming*"
msgstr "Continuar con las reivindicaciones*"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1298
msgid "Confirm these changes**"
msgstr "Confirme estos cambios**"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1301
msgid "!Delete the entire request!"
msgstr "¡Eliminar toda la petición!"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1314
msgid "Mark as your documents"
-msgstr "Marcar lo com sus documentos"
+msgstr "Marcar como sus documentos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1329
msgid "Mark as _not_ your documents"
msgstr "Marcar como documentos _no_ suyos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1340
msgid "Nothing staged as not yours"
msgstr "Nada pendiente como no suyo"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1344
msgid "Mark as their documents"
msgstr "Marcarlos como documentos suyos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1355
#: modules/bibauthorid/lib/bibauthorid_templates.py:1370
msgid "Nothing staged in this category"
msgstr "Nada pendiente en esta categoría"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1359
msgid "Mark as _not_ their documents"
msgstr "Marcarlos como documentos _no_ suyos"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1376
msgid " * You can come back to this page later. Nothing will be lost. <br />"
msgstr ""
-" * Puede volver a esta página més adelante. No se perderá nada. <Br />"
+" * Puede volver a esta página más adelante. No se perderá nada. <br />"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1377
msgid ""
" ** Performs all requested changes. Changes subject to permission "
"restrictions will be submitted to an operator for manual review."
msgstr ""
-" ** Executa totes les peticions pendents. Els canvis que tinguin "
-"restricció de permisos s'enviaran a un operador per a la seva revisió manual."
+" ** Ejecuta todos los cambios solicitados. Los cambios sujetos a "
+"restricciones de permisos se enviarán a un operador para su revisión manual."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1433
#, python-format
msgid "We do not have a publication list for '%s'."
msgstr "No tenemos ninguna lista de publicaciones de '%s'."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1448
#: modules/bibauthorid/lib/bibauthorid_templates.py:1560
msgid "Create a new Person for your search"
msgstr "Crear una nueva persona para su búsqueda"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1503
msgid "Recent Papers"
msgstr "Documentos recientes"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1515
msgid "YES!"
msgstr "¡SÍ!"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1516
msgid " Attribute Papers To "
msgstr " Atribuir los documentos a "
#: modules/bibauthorid/lib/bibauthorid_templates.py:1522
#: modules/bibauthorid/lib/bibauthorid_templates.py:1544
msgid "Publication List "
msgstr "Lista de publicaciones "
#: modules/bibauthorid/lib/bibauthorid_templates.py:1529
msgid "Showing the"
msgstr "Se muestran"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1529
msgid "most recent documents:"
msgstr "los documentos más recientes:"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1538
msgid "Sorry, there are no documents known for this person"
-msgstr "No hi hay documentos conocidos de esta persona"
+msgstr "No hay documentos conocidos de esta persona"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1540
msgid ""
"Information not shown to increase performances. Please refine your search."
msgstr ""
-"No se muestra toda la información para mejorar el rendimento. Por favor "
+"No se muestra toda la información para mejorar el rendimiento. Por favor "
"concrete su búsqueda."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1648
msgid "Correct my publication lists!"
msgstr "¡Corrijan mis listas de publicaciones!"
#~ msgid "Make sure we match the right names!"
#~ msgstr "¡Asegúrese de que los nombres se correspondan!"
#~ msgid ""
#~ "Please select an author on each of the records that will be assigned."
#~ msgstr ""
#~ "Seleccione un autor para cada uno de los registros que se le asignen."
#~ msgid "Papers without a name selected will be ignored in the process."
#~ msgstr ""
#~ "En el proceso, se ignorarán los documentos sin un nombre seleccionado."
#~ msgid ""
#~ "The result of this request will be visible immediately but we need your "
#~ "confirmation to do so for this paper has been manually claimed before"
#~ msgstr ""
#~ "El resultado de esta petición será visible inmediatamente, pero es "
#~ "necesaria su confirmación, porque este documento ya había sido "
#~ "reivindicado manualmente"
#~ msgid "Tickets you created about this person"
#~ msgstr "Tareas que usted ha creado sobre esta persona"
#~ msgid "Error: No BibCatalog system configured."
#~ msgstr "Error: el sistema BibCatalog no está configurado."
#~ msgid "0 borrower(s) found."
#~ msgstr "No se ha encontrado ningún usuario."
#~ msgid "You can see your loans "
#~ msgstr "Puede ver sus préstamos "
#~ msgid "."
#~ msgstr "."
#~ msgid ""
#~ "The item %(x_title)s with barcode %(x_barcode)s has been returned with "
#~ "success."
#~ msgstr ""
#~ "Se ha devuelto correctamente el ítem %(x_title)s, con el código de barras "
#~ "%(x_barcode)s."
#~ msgid "There %s request(s) on the book who has been returned."
#~ msgstr "Hay %s reserva(s) para el libro devuelto."
#~ msgid "There are no requests waiting on the item <strong>%s</strong>."
#~ msgstr "No hay reservas pendientes para el ítem <strong>%s</strong>."
#~ msgid "No. Copies"
#~ msgstr "Ejemplares"
#~ msgid "Hold requests and loans overview on"
#~ msgstr "Reservas y préstamos para"
#~ msgid "Library(ies)"
#~ msgstr "Bibliotecas"
#~ msgid "new copy"
#~ msgstr "nueva copia"
#~ msgid "A %s has been added."
#~ msgstr "Se ha añadido un %s."
#~ msgid "0 library(ies) found."
#~ msgstr "No se ha encontrado ninguna biblioteca."
#~ msgid "Back borrower's loans"
#~ msgstr "Volver a los préstamos"
#~ msgid "Borrower wants only this edition?"
#~ msgstr "¿Quiere el lector solamente esta edición?"
#~ msgid ""
#~ "I accept the %s of the service in particular the return of books in due "
#~ "time."
#~ msgstr ""
#~ "Acepto los %s del servicio, en particular devolver los libros a tiempo."
#~ msgid "Volume, Issue, Page"
#~ msgstr "Volumen, número, página"
#~ msgid "Barcoce"
#~ msgstr "Código de barras"
#~ msgid "An ILL request has been updated with success."
#~ msgstr "Se ha actualizado correctamente la petición de PI."
#~ msgid "Book does not exists on Invenio."
#~ msgstr "Este libro no existe en Invenio."
#~ msgid ""
#~ "Borrower accepts the %s of the service in particular the return of books "
#~ "in due time."
#~ msgstr ""
#~ "El lector acepta el %s del servicio, en particular devolver los libros en "
#~ "el plazo."
#~ msgid "Check if the book already exists on Invenio,"
#~ msgstr "Compruebe si el libro existe en Invenio,"
#~ msgid ""
#~ "Check if the book already exists on Invenio, before to send your ILL "
#~ "request."
#~ msgstr ""
#~ "Compruebe si el libro existe en Invenio antes de solicitar una petición "
#~ "de PI."
#~ msgid "Book does not exists on Invenio. Please fill the following form."
#~ msgstr ""
#~ "Este libro no existe en Invenio. Rellene por favor el siguiente "
#~ "formulario."
#~ msgid "This book is sent to you ..."
#~ msgstr "Se le ha enviado este libro..."
#~ msgid "Id"
#~ msgstr "Identificador"
#~ msgid ""
#~ "Automatically generated <span class=\"keyword single\">single</"
#~ "span>, <span class=\"keyword composite\">composite</span>, <span "
#~ "class=\"keyword author-kw\">author</span>, and <span class="
#~ "\"keyword other-kw\">other keywords</span>."
#~ msgstr ""
#~ "Generado automáticamente <span class=\"keyword single\">sencillo</span>, "
#~ "<span class=\"keyword composite\">compuesto</span>, <span class=\"keyword "
#~ "author-kw\">autor</span>, y <span class=\"keyword other-kw\">otras "
#~ "palabras clave</span>."
#~ msgid "Automated keyword extraction wasn't run for this document yet."
#~ msgstr ""
#~ "Para este documento todavía no se han extraído automáticamente las "
#~ "palabras clave."
#~ msgid "Generate keywords"
#~ msgstr "Generar palabras clave"
#~ msgid "There are no suitable keywords for display in this record."
#~ msgstr "No hay palabras clave relevantes para este registro."
#~ msgid "Show more..."
#~ msgstr "Mostrar más..."
#~ msgid "Unweighted %s keywords:"
#~ msgstr "Palabras clave no ponderadas de tipo %s:"
#~ msgid "Weighted %s keywords:"
#~ msgstr "Palabras clave ponderadas de tipo %s:"
#~ msgid "tag cloud"
#~ msgstr "nube de etiquetas"
#~ msgid "list"
#~ msgstr "lista"
#~ msgid "XML"
#~ msgstr "XML"
#~ msgid "Unknown type: %s"
#~ msgstr "Tipo desconocido: %s"
#~ msgid "The site settings do not allow automatic keyword extraction"
#~ msgstr ""
#~ "La configuración de este sitio no permite la extracción automática de "
#~ "palabras clave"
#~ msgid ""
#~ "We have registered your request, the automatedkeyword extraction will run "
#~ "after some time. Please return back in a while."
#~ msgstr ""
#~ "Su petición se ha registrado. La extracción automática de palabras clave "
#~ "se ejecutará pronto. Vuelva dentro de un rato."
#~ msgid ""
#~ "Unfortunately, we don't have a PDF fulltext for this record in the "
#~ "storage, keywords cannot be generated using an "
#~ "automated process."
#~ msgstr ""
#~ "Por desgracia, no existe una copia local del texto completo en PDF. No es "
#~ "posible generar automáticamente las palabras clave."
#~ msgid "The format %s does not exist for the given version: %s"
#~ msgstr "El formato %s no existe para la versión %s"
#~ msgid "does not exist"
#~ msgstr "no existe"
#~ msgid ""
#~ "WARNING: The following records are pending "
#~ "execution in the task "
#~ "queue. If you proceed with the changes, the "
#~ "modifications made with other tool (e.g. BibEdit) "
#~ "to these records will be lost"
#~ msgstr ""
#~ "ATENCIÓN: los siguientes registros están pendientes de ejecución en la "
#~ "cola de tareas. Si sigue con los cambios, se perderán las "
#~ "modificaciones hechas con otra herramienta (p. ej. BibEdit) sobre "
#~ "estos registros."
#~ msgid ""
#~ "We are sorry, a problem has occured during the processing of your video "
#~ "upload%(submission_title)s."
#~ msgstr ""
#~ "Por desgracia, ha ocurrido un problema durante el proceso de carga de su "
#~ "video%(submission_title)s."
#~ msgid "The file you uploaded was %(input_filename)s."
#~ msgstr "El fichero que ha subido es %(input_filename)s."
#~ msgid "Your video might not be fully available until intervention."
#~ msgstr "Su video no estará disponible hasta su verificación."
#~ msgid "You can check the status of your video here: %(record_url)s."
#~ msgstr "Puede comprobar el estado de su video aquí: %(record_url)s."
#~ msgid ""
#~ "You might want to take a look at %(guidelines_url)s and modify or redo "
#~ "your submission."
#~ msgstr ""
#~ "Puede echar un vistazo a %(guidelines_url)s y modificar o repetir su "
#~ "envío."
#~ msgid "the video guidelines"
#~ msgstr "las guías para los videos"
#~ msgid ""
#~ "Your video submission%(submission_title)s was successfully processed."
#~ msgstr ""
#~ "Su envío de video%(submission_title)s se ha procesado correctamente."
#~ msgid "Your video is now available here: %(record_url)s."
#~ msgstr "Su video ahora está accesible en: %(record_url)s."
#~ msgid ""
#~ "If the videos quality is not as expected, you might want to take a look "
#~ "at %(guidelines_url)s and modify or redo your submission."
#~ msgstr ""
#~ "Si la calidad de los videos no es la que espera, puede echar un vistazo "
#~ "a %(guidelines_url)s y modificar o repetir su envío."
#~ msgid "More than one templates found in the document. No format found."
#~ msgstr ""
#~ "Se ha encontrado más de una plantilla en el documento. No se ha "
#~ "encontrado el formato."
#~ msgid "Note for programmer: you have not implemented operator %s."
#~ msgstr "Nota para el programador: no ha implementado el operador %s."
#~ msgid "Name %s is not recognised as a valid operator name."
#~ msgstr "El nombre %s no es un operador reconocido."
#~ msgid "Duplicate name: %s."
#~ msgstr "Nombre duplicado: %s."
#~ msgid "No name defined for the template."
#~ msgstr "No se ha definido ningún nombre para la plantilla."
#~ msgid "No description entered for the template."
#~ msgstr "No se ha introducido ninguna descripción para la plantilla."
#~ msgid "No content type specified for the template. Using default: text/xml."
#~ msgstr ""
#~ "No se ha especificado el tipo de contenido para la plantilla. Se usará el "
#~ "valor por defecto: text/xml."
#~ msgid "Missing attribute \"name\" in TEMPLATE_REF."
#~ msgstr "Falta el atributo \"name\" en TEMPLATE_REF."
#~ msgid "Missing attribute \"name\" in ELEMENT."
#~ msgstr "Falta el atributo \"name\" en ELEMENT."
#~ msgid "Missing attribute \"name\" in FIELD."
#~ msgstr "Falta el atributo \"name\" en FIELD."
#~ msgid "Field %s is not defined."
#~ msgstr "El campo %s no está definido."
#~ msgid "Missing attribute \"value\" in TEXT."
#~ msgstr "Falta el atributo \"value\" en TEXT."
#~ msgid "Missing attribute \"object\" in LOOP."
#~ msgstr "Falta el atributo \"object\" en LOOP."
#~ msgid "Missing attrbute \"name\" in IF."
#~ msgstr "Falta el atributo \"name\" en IF."
#~ msgid "Invalid regular expression: %s."
#~ msgstr "Expresión regular no válida: %s."
#~ msgid "Invalid syntax of IF statement."
#~ msgstr "Sintaxis no válida para el condicional IF."
#~ msgid "Invalid address: %s %s"
#~ msgstr "Dirección no válida: %s %s"
#~ msgid ""
#~ "Invalid display type. Must be one of: value, tag, ind1, ind2, code; "
#~ "received: %s."
#~ msgstr ""
#~ "Tipo de visualización no válido. Ha de ser uno de: value, tag, ind1, "
#~ "ind2, code; recibido: %s."
#~ msgid "Repeating subfield codes in the same instance!"
#~ msgstr "¡Códigos de subcampo repetidos en la misma instancia!"
#~ msgid "No template could be found for output format %s."
#~ msgstr "No se ha encontrado una plantilla para el formato de salida %s."
#~ msgid "Could not find format element named %s."
#~ msgstr "No se ha encontrado el elemento de formato %s."
#~ msgid "Error when evaluating format element %s with parameters %s."
#~ msgstr "Error al evaluar el elemento de formato %s con los parámetros %s."
#~ msgid ""
#~ "Escape mode for format element %s could not be retrieved. Using default "
#~ "mode instead."
#~ msgstr ""
#~ "No se ha podido obtener el modo de escape para el elemento de formato %s. "
#~ "Se usará el modo por defecto."
#~ msgid "\"nbMax\" parameter for %s must be an \"int\"."
#~ msgstr "El parámetro \"nbMax\" para %s ha de ser un \"int\"."
#~ msgid "Could not read format template named %s. %s."
#~ msgstr "No se ha podido leer la plantilla de formato %s. %s."
#~ msgid "Format element %s could not be found."
#~ msgstr "No se ha encontrado el elemento de formato %s."
#~ msgid "Error in format element %s. %s."
#~ msgstr "Error en el elemento de formato %s. %s."
#~ msgid "Format element %s has no function named \"format\"."
#~ msgstr ""
#~ "El elemento de formato %s no tiene ninguna función llamada \"format\"."
#~ msgid "Output format with code %s could not be found."
#~ msgstr "No se ha encontrado el formato de salida con el código %s."
#~ msgid "Output format %s cannot not be read. %s."
#~ msgstr "No se ha podido leer el formato de salida %s. %s."
#~ msgid "Could not find output format named %s."
#~ msgstr "No se ha podido encontrar el formato de salida %s."
#~ msgid "Could not find a fresh name for output format %s."
#~ msgstr ""
#~ "No ha sido posible encontrar un nuevo nombre para el formato de salida %s."
#~ msgid "No Record Found for %s."
#~ msgstr "No se ha encontrado ningún registro para %s."
#~ msgid "Tag specification \"%s\" must end with column \":\" at line %s."
#~ msgstr ""
#~ "La especificación para la etiqueta \"%s\" debe acabar con dos puntos \":"
#~ "\", línea %s."
#~ msgid "Tag specification \"%s\" must start with \"tag\" at line %s."
#~ msgstr ""
#~ "La especificación para la etiqueta \"%s\" debe comenzar con \"tag\", "
#~ "línea %s."
#~ msgid "\"tag\" must be lowercase in \"%s\" at line %s."
#~ msgstr "\"tag\" debe estar en minúscula en \"%s\", línea %s."
#~ msgid "Should be \"tag field_number:\" at line %s."
#~ msgstr "Debería ser \"tag field_number:\", línea %s."
#~ msgid "Invalid tag \"%s\" at line %s."
#~ msgstr "Etiqueta \"%s\" no válida, línea %s."
#~ msgid "Condition \"%s\" is outside a tag specification at line %s."
#~ msgstr ""
#~ "La condición \"%s\" ocurre fuera de una especificación de etiqueta, línea "
#~ "%s."
#~ msgid "Condition \"%s\" can only have a single separator --- at line %s."
#~ msgstr "La condición \"%s\" solo puede tener un único separador ---, línea %s."
#~ msgid "Template \"%s\" does not exist at line %s."
#~ msgstr "La plantilla \"%s\" no existe, línea %s."
#~ msgid "Missing column \":\" after \"default\" in \"%s\" at line %s."
#~ msgstr "Faltan los dos puntos \":\" después de \"default\" en \"%s\", línea %s."
#~ msgid ""
#~ "Default template specification \"%s\" must start with \"default :\" at "
#~ "line %s."
#~ msgstr ""
#~ "La especificación de la plantilla por defecto \"%s\" debe comenzar por "
#~ "\"default :\", línea %s."
#~ msgid "\"default\" keyword must be lowercase in \"%s\" at line %s."
#~ msgstr ""
#~ "La palabra \"default\" debe estar en minúsculas en \"%s\", línea %s."
#~ msgid "Line %s could not be understood at line %s."
#~ msgstr "No se ha podido entender la línea %s, línea %s."
#~ msgid "Output format %s cannot not be read. %s"
#~ msgstr "No se ha podido leer el formato de salida %s. %s"
#~ msgid ""
#~ "Could not find a name specified in tag \"<name>\" inside format template "
#~ "%s."
#~ msgstr ""
#~ "No se ha podido encontrar el nombre especificado en la etiqueta \"<name>"
#~ "\" en la plantilla de formato %s."
#~ msgid ""
#~ "Could not find a description specified in tag \"<description>\" inside "
#~ "format template %s."
#~ msgstr ""
#~ "No se ha podido encontrar la descripción especificada en la etiqueta "
#~ "\"<description>\" en la plantilla de formato %s."
#~ msgid "Format template %s calls undefined element \"%s\"."
#~ msgstr "La plantilla de formato %s llama al elemento no definido \"%s\"."
#~ msgid ""
#~ "Format template %s calls unreadable element \"%s\". Check element file "
#~ "permissions."
#~ msgstr ""
#~ "La plantilla de formato %s llama al elemento ilegible \"%s\". Compruebe "
#~ "los permisos de fichero del elemento."
#~ msgid "Cannot load element \"%s\" in template %s. Check element code."
#~ msgstr ""
#~ "No se ha podido cargar el elemento \"%s\" en la plantilla %s. Compruebe "
#~ "el código del elemento."
#~ msgid ""
#~ "Format element %s uses unknown parameter \"%s\" in format template %s."
#~ msgstr ""
#~ "El elemento de formato %s usa el parámetro desconocido \"%s\" en la "
#~ "plantilla de formato %s."
#~ msgid "Could not read format template named %s. %s"
#~ msgstr "No se ha podido leer la plantilla de formato %s. %s"
#~ msgid "Format element %s cannot not be read. %s"
#~ msgstr "No se ha podido leer el elemento de formato %s. %s"
#~ msgid "Add this document to your ScienceWise.info bookmarks"
#~ msgstr "Añadir este documento a sus favoritos en ScienceWise.info"
#~ msgid "Add this article to your ScienceWise.info bookmarks"
#~ msgstr "Añadir este artículo a sus favoritos en ScienceWise.info"
#~ msgid ""
#~ "Cannot write in etc/bibformat dir of your Invenio installation. Check "
#~ "directory permission."
#~ msgstr ""
#~ "No se ha podido escribir en el directorio etc/bibformat de su instalación "
#~ "de Invenio. Compruebe los permisos del directorio."
#~ msgid "Format template %s cannot not be read. %s"
#~ msgstr "No se ha podido leer la plantilla de formato %s. %s"
#~ msgid "No format specified for validation. Please specify one."
#~ msgstr ""
#~ "No se ha especificado ningún formato para la validación. Especifique uno."
#~ msgid "BibSort Guide"
#~ msgstr "Guía de Bibsort"
#~ msgid "May "
#~ msgstr "Mayo "
#~ msgid ""
#~ "The system is not attempting to send an email from %s, to %s, with body "
#~ "%s."
#~ msgstr ""
#~ "El sistema no intenta enviar un mensaje de %s, a %s, con el texto %s."
#~ msgid ""
#~ "Error in connecting to the SMPT server waiting %s seconds. Exception is "
#~ "%s, while sending email from %s to %s with body %s."
#~ msgstr ""
#~ "Error en la conexión con el servidor SMTP esperando %s segundos. La "
#~ "excepción es %s, al intentar enviar el mensaje de %s a %s con el texto %s."
#~ msgid "Error in sending email from %s to %s with body %s."
#~ msgstr "Error al enviar un mensaje de %s, a %s, con el texto %s."
#~ msgid "Please enter a name for the source."
#~ msgstr "Introduzca un nombre para la fuente."
#~ msgid "Please enter a metadata prefix."
#~ msgstr "Introduzca el prefijo de metadatos."
#~ msgid "Please enter a base url."
#~ msgstr "Introduzca la URL base."
#~ msgid "Please choose a frequency of harvesting"
#~ msgstr "Escoja una frecuencia de recolección"
#~ msgid "You selected a postprocess mode which involves conversion."
#~ msgstr "Ha seleccionado un modo de postproceso que involucra conversión."
#~ msgid ""
#~ "Please enter a valid name of or a full path to a BibConvert config file "
#~ "or change postprocess mode."
#~ msgstr ""
#~ "Introduzca un nombre válido o la ruta completa a un fichero de "
#~ "BibConvert, o cambie el modo de postproceso."
#~ msgid "You selected a postprocess mode which involves filtering."
#~ msgstr "Ha seleccionado un modo de postproceso que involucra filtrado."
#~ msgid ""
#~ "Please enter a valid name of or a full path to a BibFilter script or "
#~ "change postprocess mode."
#~ msgstr ""
#~ "Introduzca un nombre válido o la ruta completa a un script BibFilter, o "
#~ "el modo de postproceso."
#~ msgid "Please choose the harvesting starting date"
#~ msgstr "Escoja la fecha de inicio de la recolección"
#~ msgid "Record deleted from the holding pen"
#~ msgstr "Registro eliminado de la lista de espera"
#~ msgid "Configure BibSort"
#~ msgstr "Configurar BibSort"
#~ msgid "Papers written alone"
#~ msgstr "Artículos como autor individual"
#~ msgid "No Collaborations"
#~ msgstr "Sin colaboraciones"
#~ msgid "Collaborations"
#~ msgstr "Colaboraciones"
#~ msgid "Frequent co-authors (excluding collaborations)"
#~ msgstr "Coautores frecuentes (excluyendo colaboraciones)"
#~ msgid "Citations%s:"
#~ msgstr "Citaciones%s:"
#~ msgid "Recompute Now!"
#~ msgstr "¡Recalcular ahora!"
#~ msgid "Untitled basket"
#~ msgstr "Cesta sin título"
#~ msgid "Untitled topic"
#~ msgstr "Tema sin título"
#~ msgid "%(x_search_for_term)s in %(x_collection_list)s"
#~ msgstr "%(x_search_for_term)s en %(x_collection_list)s"
#~ msgid "%i matching items"
#~ msgstr "%i ítems coincidentes"
#~ msgid "This basket does not contain any records yet."
#~ msgstr "Esta cesta no contiene aún ningún registro."
#~ msgid "All your topics"
#~ msgstr "Todos sus temas"
#~ msgid "All your groups"
#~ msgstr "Todos sus grupos"
#~ msgid "All your public baskets"
#~ msgstr "Todas sus cestas públicas"
#~ msgid "Please select a basket..."
#~ msgstr "Seleccione una cesta..."
#~ msgid "%(x_nb)i Comments for round \"%(x_name)s\""
#~ msgstr "%(x_nb)i comentarios para la ronda «%(x_name)s»"
#~ msgid "%(x_nb)i Comments"
#~ msgstr "%(x_nb)i comentarios"
#~ msgid "Be the first to review this document.</div>"
#~ msgstr "Sea el primero en escribir una reseña de este documento.</div>"
#~ msgid "Close"
#~ msgstr "Cerrar"
#~ msgid "Open"
#~ msgstr "Abrir"
#~ msgid "Specified comment does not belong to this record"
#~ msgstr "El comentario especificado no pertenece a este registro"
#~ msgid "You do not have access to the specified comment"
#~ msgstr "No tiene acceso al comentario especificado"
#~ msgid "You cannot vote for a deleted comment"
#~ msgstr "No puede votar un comentario borrado"
#~ msgid "You cannot report a deleted comment"
#~ msgstr "No puede denunciar un comentario borrado"
#~ msgid "You cannot access files of a deleted comment"
#~ msgstr "No puede acceder a los ficheros de un comentario borrado"
#~ msgid "Regenerate Issue"
#~ msgstr "Regenerar número"
#~ msgid ""
#~ "Warning: full-text search is only available for a subset of papers mostly "
#~ "from %(x_range_from_year)s-%(x_range_to_year)s."
#~ msgstr ""
#~ "Atención: la búsqueda a texto completo sólo está disponible para un "
#~ "subconjunto de documentos, mayoritariamente de entre "
#~ "%(x_range_from_year)s-%(x_range_to_year)s."
#~ msgid ""
#~ "Warning: figure caption search is only available for a subset of papers "
#~ "mostly from %(x_range_from_year)s-%(x_range_to_year)s."
#~ msgstr ""
#~ "Atención: la búsqueda en los pies de imágenes sólo está disponible para "
#~ "un subconjunto de documentos, mayoritariamente de entre "
#~ "%(x_range_from_year)s-%(x_range_to_year)s."
#~ msgid "Your search did not match any records. Please try again."
#~ msgstr "Su búsqueda no ha encontrado ningún registro. Vuelva a intentarlo."
#~ msgid ""
#~ "Sorry, %s does not seem to be a valid sort option. The records will not "
#~ "be sorted."
#~ msgstr "No es posible ordenar por %s. No se ordenarán los registros."
#~ msgid "The record %d replaces it."
#~ msgstr "El registro %d lo reemplaza."
#~ msgid "Total number of citations excluding self-citations"
#~ msgstr "Número total de citas excluyendo las autocitas"
#~ msgid "Average citations per paper excluding self-citations"
#~ msgstr "Media de citas por artículo excluyendo las autocitas"
#~ msgid "API keys"
#~ msgstr "Llaves API"
#~ msgid "These are your current API keys"
#~ msgstr "Estas son sus llaves API"
#~ msgid "Description: "
#~ msgstr "Descripción: "
#~ msgid "Status: "
#~ msgstr "Estado: "
#~ msgid "API key"
#~ msgstr "Llave API"
#~ msgid "Delete key"
#~ msgstr "Suprimir la llave"
#~ msgid ""
#~ "If you want to create a new API key, please enter a description for it"
#~ msgstr "Si desea crear una nueva llave API, escriba su descripción"
#~ msgid "Description for the new API key"
#~ msgstr "Descripción para la nueva llave API"
#~ msgid ""
#~ "The description should be something meaningful for you to recognize the "
#~ "API key"
#~ msgstr ""
#~ "La descripción debería ser algo significativo para que pueda reconocer la "
#~ "nueva llave API"
#~ msgid "Create new key"
#~ msgstr "Crear una llave nueva"
#~ msgid "Your nickname has not been updated"
#~ msgstr "Su alias no se ha actualizado"
#~ msgid "Login to display all document types you can access"
#~ msgstr ""
#~ "Identifíquese para visualizar todos los tipos de documento a los que "
#~ "tiene acceso"
#~ msgid ""
#~ "As a referee for this document, you may approve or reject it from the "
#~ "submission interface"
#~ msgstr ""
#~ "Como revisor de este documento, puede aprobarlo o rechazarlo desde la "
#~ "interfaz de envíos"
#~ msgid "Sorry, invalid arguments"
#~ msgstr "Argumentos no válidos"
#~ msgid "Note: the requested submission has already been completed"
#~ msgstr "El envío ya se ha completado"
#~ msgid ""
#~ "Sorry, you don't seem to have initiated a submission with the provided "
#~ "access number"
#~ msgstr ""
#~ "No parece que haya iniciado un envío con el número de acceso que ha "
#~ "especificado"
#~ msgid "period_of_interest_from"
#~ msgstr "período_de_interés_desde"
#~ msgid "jsCal3"
#~ msgstr "jsCal3"
#~ msgid "period_of_interest_to"
#~ msgstr "período_de_interés_hasta"
#~ msgid "jsCal4"
#~ msgstr "jsCal4"
#~ msgid "jsCal1"
#~ msgstr "jsCal1"
#~ msgid "jsCal2"
#~ msgstr "jsCal2"
#~ msgid "No fulltext"
#~ msgstr "Sin texto completo"
#~ msgid "last note on"
#~ msgstr "último comentario en"
#~ msgid ", no notes yet"
#~ msgstr ", sin notas"
#~ msgid "HTML brief"
#~ msgstr "HTML resumido"
#~ msgid "You have "
#~ msgstr "Tiene "
#~ msgid "when field equals"
#~ msgstr "cuando el campo es igual a"
#~ msgid "ERROR"
#~ msgstr "ERROR"
#~ msgid "already exists."
#~ msgstr "ya existe."
#~ msgid "deleted"
#~ msgstr "eliminado"
#~ msgid "in their rules and descriptions"
#~ msgstr "en sus reglas y descripciones"
#~ msgid "The left side of the rule "
#~ msgstr "La parte izquierda de la regla "
#~ msgid "The right side of the rule "
#~ msgstr "La parte derecha de la regla "
#~ msgid "upload is a file"
#~ msgstr "subir este fichero"
#~ msgid "No such knowledge base"
#~ msgstr "No existe esta base de conocimiento"
diff --git a/po/es.po b/po/fa.po
similarity index 77%
copy from po/es.po
copy to po/fa.po
index 9bb5d8b4c..72b2ba982 100644
--- a/po/es.po
+++ b/po/fa.po
@@ -1,14220 +1,12925 @@
# # This file is part of Invenio.
-# # Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+# # Copyright (C) 2013 CERN.
# #
# # Invenio is free software; you can redistribute it and/or
# # modify it under the terms of the GNU General Public License as
# # published by the Free Software Foundation; either version 2 of the
# # License, or (at your option) any later version.
# #
# # Invenio is distributed in the hope that it will be useful, but
# # WITHOUT ANY WARRANTY; without even the implied warranty of
# # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# # General Public License for more details.
# #
# # You should have received a copy of the GNU General Public License
# # along with Invenio; if not, write to the Free Software Foundation, Inc.,
# # 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
msgid ""
msgstr ""
-"Project-Id-Version: Invenio 1.1.1\n"
+"Project-Id-Version: Invenio 1.0.0-rc0\n"
"Report-Msgid-Bugs-To: info@invenio-software.org\n"
"POT-Creation-Date: 2011-12-19 22:12+0100\n"
-"PO-Revision-Date: 2013-02-25 11:40+0100\n"
-"Last-Translator: Ferran Jorba <Ferran.Jorba@uab.cat>\n"
-"Language-Team: ES <info@invenio-software.org>\n"
+"PO-Revision-Date: 2013-10-30 22:17+0100\n"
+"Last-Translator: Tibor Simko <tibor.simko@cern.ch>\n"
+"Language-Team: FA <info@invenio-software.org>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: pygettext.py 1.5\n"
#: modules/websearch/doc/search-guide.webdoc:361
#: modules/websearch/doc/search-guide.webdoc:396
#: modules/websearch/doc/search-guide.webdoc:493
#: modules/websearch/doc/search-guide.webdoc:528
#: modules/websearch/doc/search-guide.webdoc:630
#: modules/websearch/doc/search-guide.webdoc:665
#: modules/websearch/doc/search-guide.webdoc:768
#: modules/websearch/doc/search-guide.webdoc:803
#: modules/websearch/lib/search_engine.py:1200
#: modules/websearch/lib/websearch_templates.py:1152
msgid "AND NOT"
-msgstr "Y NO"
+msgstr ""
#: modules/webhelp/web/admin/admin.webdoc:18
#: modules/websearch/doc/admin/websearch-admin-guide.webdoc:21
#: modules/websubmit/doc/admin/websubmit-admin-guide.webdoc:21
#: modules/bibedit/doc/admin/bibedit-admin-guide.webdoc:21
#: modules/bibupload/doc/admin/bibupload-admin-guide.webdoc:21
#: modules/bibformat/doc/admin/bibformat-admin-guide.webdoc:21
#: modules/bibharvest/doc/admin/bibharvest-admin-guide.webdoc:21
#: modules/webmessage/doc/admin/webmessage-admin-guide.webdoc:21
#: modules/webalert/doc/admin/webalert-admin-guide.webdoc:21
#: modules/bibclassify/doc/admin/bibclassify-admin-guide.webdoc:22
#: modules/bibmatch/doc/admin/bibmatch-admin-guide.webdoc:21
#: modules/bibconvert/doc/admin/bibconvert-admin-guide.webdoc:21
#: modules/bibsched/doc/admin/bibsched-admin-guide.webdoc:21
#: modules/bibrank/doc/admin/bibrank-admin-guide.webdoc:21
#: modules/webstat/doc/admin/webstat-admin-guide.webdoc:21
#: modules/bibindex/doc/admin/bibindex-admin-guide.webdoc:21
#: modules/webbasket/doc/admin/webbasket-admin-guide.webdoc:21
#: modules/webcomment/doc/admin/webcomment-admin-guide.webdoc:21
#: modules/websession/doc/admin/websession-admin-guide.webdoc:21
#: modules/webstyle/doc/admin/webstyle-admin-guide.webdoc:21
#: modules/elmsubmit/doc/admin/elmsubmit-admin-guide.webdoc:21
#: modules/bibformat/lib/bibformatadminlib.py:61
#: modules/bibformat/web/admin/bibformatadmin.py:70
#: modules/webcomment/lib/webcommentadminlib.py:45
#: modules/webstyle/lib/webdoc_webinterface.py:153
#: modules/bibcheck/web/admin/bibcheckadmin.py:57
#: modules/bibcheck/web/admin/bibcheckadmin.py:159
#: modules/bibcheck/web/admin/bibcheckadmin.py:203
#: modules/bibcheck/web/admin/bibcheckadmin.py:263
#: modules/bibcheck/web/admin/bibcheckadmin.py:302
#: modules/bibknowledge/lib/bibknowledgeadmin.py:70
msgid "Admin Area"
-msgstr "Zona de administración"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:427
#: modules/websearch/doc/search-guide.webdoc:559
#: modules/websearch/doc/search-guide.webdoc:696
#: modules/websearch/doc/search-guide.webdoc:834
#: modules/websearch/lib/search_engine.py:4692
#: modules/websearch/lib/websearch_templates.py:793
#: modules/websearch/lib/websearch_templates.py:871
#: modules/websearch/lib/websearch_templates.py:994
#: modules/websearch/lib/websearch_templates.py:1965
#: modules/websearch/lib/websearch_templates.py:2056
#: modules/websearch/lib/websearch_templates.py:2113
#: modules/websearch/lib/websearch_templates.py:2170
#: modules/websearch/lib/websearch_templates.py:2209
#: modules/websearch/lib/websearch_templates.py:2232
#: modules/websearch/lib/websearch_templates.py:2263
msgid "Browse"
-msgstr "Índice"
+msgstr ""
#: modules/webhelp/web/help-central.webdoc:50
#: modules/websearch/doc/search-tips.webdoc:20
#: modules/websearch/lib/websearch_templates.py:794
#: modules/websearch/lib/websearch_templates.py:872
#: modules/websearch/lib/websearch_templates.py:995
#: modules/websearch/lib/websearch_templates.py:2060
#: modules/websearch/lib/websearch_templates.py:2117
#: modules/websearch/lib/websearch_templates.py:2174
msgid "Search Tips"
-msgstr "Consejos para la búsqueda"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:343
#: modules/websearch/doc/search-guide.webdoc:378
#: modules/websearch/doc/search-guide.webdoc:413
#: modules/websearch/doc/search-guide.webdoc:475
#: modules/websearch/doc/search-guide.webdoc:510
#: modules/websearch/doc/search-guide.webdoc:545
#: modules/websearch/doc/search-guide.webdoc:612
#: modules/websearch/doc/search-guide.webdoc:647
#: modules/websearch/doc/search-guide.webdoc:682
#: modules/websearch/doc/search-guide.webdoc:750
#: modules/websearch/doc/search-guide.webdoc:785
#: modules/websearch/doc/search-guide.webdoc:820
#: modules/miscutil/lib/inveniocfg.py:482
msgid "abstract"
-msgstr "resumen"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:348
#: modules/websearch/doc/search-guide.webdoc:383
#: modules/websearch/doc/search-guide.webdoc:418
#: modules/websearch/doc/search-guide.webdoc:480
#: modules/websearch/doc/search-guide.webdoc:515
#: modules/websearch/doc/search-guide.webdoc:550
#: modules/websearch/doc/search-guide.webdoc:617
#: modules/websearch/doc/search-guide.webdoc:652
#: modules/websearch/doc/search-guide.webdoc:687
#: modules/websearch/doc/search-guide.webdoc:755
#: modules/websearch/doc/search-guide.webdoc:790
#: modules/websearch/doc/search-guide.webdoc:825
#: modules/miscutil/lib/inveniocfg.py:487
msgid "fulltext"
-msgstr "texto completo"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:337
#: modules/websearch/doc/search-guide.webdoc:373
#: modules/websearch/doc/search-guide.webdoc:408
#: modules/websearch/doc/search-guide.webdoc:469
#: modules/websearch/doc/search-guide.webdoc:505
#: modules/websearch/doc/search-guide.webdoc:540
#: modules/websearch/doc/search-guide.webdoc:606
#: modules/websearch/doc/search-guide.webdoc:642
#: modules/websearch/doc/search-guide.webdoc:677
#: modules/websearch/doc/search-guide.webdoc:744
#: modules/websearch/doc/search-guide.webdoc:780
#: modules/websearch/doc/search-guide.webdoc:815
#: modules/websearch/lib/search_engine.py:1222
#: modules/websearch/lib/websearch_templates.py:1108
msgid "Regular expression:"
-msgstr "Expresión regular:"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:333
#: modules/websearch/doc/search-guide.webdoc:369
#: modules/websearch/doc/search-guide.webdoc:404
#: modules/websearch/doc/search-guide.webdoc:465
#: modules/websearch/doc/search-guide.webdoc:501
#: modules/websearch/doc/search-guide.webdoc:536
#: modules/websearch/doc/search-guide.webdoc:602
#: modules/websearch/doc/search-guide.webdoc:638
#: modules/websearch/doc/search-guide.webdoc:673
#: modules/websearch/doc/search-guide.webdoc:740
#: modules/websearch/doc/search-guide.webdoc:776
#: modules/websearch/doc/search-guide.webdoc:811
#: modules/websearch/lib/search_engine.py:1218
#: modules/websearch/lib/websearch_templates.py:1100
msgid "All of the words:"
-msgstr "Todas las palabras:"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:351
#: modules/websearch/doc/search-guide.webdoc:386
#: modules/websearch/doc/search-guide.webdoc:421
#: modules/websearch/doc/search-guide.webdoc:483
#: modules/websearch/doc/search-guide.webdoc:518
#: modules/websearch/doc/search-guide.webdoc:553
#: modules/websearch/doc/search-guide.webdoc:620
#: modules/websearch/doc/search-guide.webdoc:655
#: modules/websearch/doc/search-guide.webdoc:690
#: modules/websearch/doc/search-guide.webdoc:758
#: modules/websearch/doc/search-guide.webdoc:793
#: modules/websearch/doc/search-guide.webdoc:828
#: modules/miscutil/lib/inveniocfg.py:484
msgid "report number"
-msgstr "número de reporte"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:406
#: modules/websearch/doc/search-tips.webdoc:413
#: modules/websearch/doc/search-tips.webdoc:414
#: modules/websearch/doc/search-tips.webdoc:415
#: modules/websearch/doc/search-tips.webdoc:433
#: modules/websearch/doc/search-tips.webdoc:434
#: modules/websearch/doc/search-tips.webdoc:435
#: modules/websearch/doc/search-guide.webdoc:354
#: modules/websearch/doc/search-guide.webdoc:389
#: modules/websearch/doc/search-guide.webdoc:424
#: modules/websearch/doc/search-guide.webdoc:486
#: modules/websearch/doc/search-guide.webdoc:521
#: modules/websearch/doc/search-guide.webdoc:556
#: modules/websearch/doc/search-guide.webdoc:623
#: modules/websearch/doc/search-guide.webdoc:658
#: modules/websearch/doc/search-guide.webdoc:693
#: modules/websearch/doc/search-guide.webdoc:761
#: modules/websearch/doc/search-guide.webdoc:796
#: modules/websearch/doc/search-guide.webdoc:831
#: modules/miscutil/lib/inveniocfg.py:490
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
msgid "year"
-msgstr "año"
+msgstr ""
-# Debe traducirse igual que el 'Subject' del correo electrónico
#: modules/websearch/doc/search-guide.webdoc:352
#: modules/websearch/doc/search-guide.webdoc:387
#: modules/websearch/doc/search-guide.webdoc:422
#: modules/websearch/doc/search-guide.webdoc:484
#: modules/websearch/doc/search-guide.webdoc:519
#: modules/websearch/doc/search-guide.webdoc:554
#: modules/websearch/doc/search-guide.webdoc:621
#: modules/websearch/doc/search-guide.webdoc:656
#: modules/websearch/doc/search-guide.webdoc:691
#: modules/websearch/doc/search-guide.webdoc:759
#: modules/websearch/doc/search-guide.webdoc:794
#: modules/websearch/doc/search-guide.webdoc:829
#: modules/miscutil/lib/inveniocfg.py:485
msgid "subject"
-msgstr "materia"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:336
#: modules/websearch/doc/search-guide.webdoc:372
#: modules/websearch/doc/search-guide.webdoc:407
#: modules/websearch/doc/search-guide.webdoc:468
#: modules/websearch/doc/search-guide.webdoc:504
#: modules/websearch/doc/search-guide.webdoc:539
#: modules/websearch/doc/search-guide.webdoc:605
#: modules/websearch/doc/search-guide.webdoc:641
#: modules/websearch/doc/search-guide.webdoc:676
#: modules/websearch/doc/search-guide.webdoc:743
#: modules/websearch/doc/search-guide.webdoc:779
#: modules/websearch/doc/search-guide.webdoc:814
#: modules/websearch/lib/search_engine.py:1221
#: modules/websearch/lib/websearch_templates.py:1106
msgid "Partial phrase:"
-msgstr "Frase parcial:"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:350
#: modules/websearch/doc/search-guide.webdoc:385
#: modules/websearch/doc/search-guide.webdoc:420
#: modules/websearch/doc/search-guide.webdoc:482
#: modules/websearch/doc/search-guide.webdoc:517
#: modules/websearch/doc/search-guide.webdoc:552
#: modules/websearch/doc/search-guide.webdoc:619
#: modules/websearch/doc/search-guide.webdoc:654
#: modules/websearch/doc/search-guide.webdoc:689
#: modules/websearch/doc/search-guide.webdoc:757
#: modules/websearch/doc/search-guide.webdoc:792
#: modules/websearch/doc/search-guide.webdoc:827
#: modules/miscutil/lib/inveniocfg.py:486
msgid "reference"
-msgstr "referencia"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:37
#: modules/websearch/doc/search-tips.webdoc:68
#: modules/websearch/doc/search-tips.webdoc:106
#: modules/websearch/doc/search-tips.webdoc:154
#: modules/websearch/doc/search-tips.webdoc:177
#: modules/websearch/doc/search-tips.webdoc:201
#: modules/websearch/doc/search-tips.webdoc:246
#: modules/websearch/doc/search-tips.webdoc:286
#: modules/websearch/doc/search-tips.webdoc:297
#: modules/websearch/doc/search-tips.webdoc:317
#: modules/websearch/doc/search-tips.webdoc:337
#: modules/websearch/doc/search-tips.webdoc:371
#: modules/websearch/doc/search-tips.webdoc:405
#: modules/websearch/doc/search-tips.webdoc:426
#: modules/websearch/doc/search-tips.webdoc:446
#: modules/websearch/doc/search-tips.webdoc:480
#: modules/websearch/doc/search-tips.webdoc:498
#: modules/websearch/doc/search-tips.webdoc:517
#: modules/websearch/doc/search-tips.webdoc:550
#: modules/websearch/doc/search-tips.webdoc:575
#: modules/websearch/doc/search-tips.webdoc:583
#: modules/websearch/doc/search-tips.webdoc:586
#: modules/websearch/doc/search-tips.webdoc:588
#: modules/websearch/doc/search-tips.webdoc:599
#: modules/websearch/doc/search-tips.webdoc:620
#: modules/websearch/doc/search-guide.webdoc:226
#: modules/websearch/doc/search-guide.webdoc:250
#: modules/websearch/doc/search-guide.webdoc:276
#: modules/websearch/doc/search-guide.webdoc:301
#: modules/websearch/doc/search-guide.webdoc:344
#: modules/websearch/doc/search-guide.webdoc:379
#: modules/websearch/doc/search-guide.webdoc:414
#: modules/websearch/doc/search-guide.webdoc:476
#: modules/websearch/doc/search-guide.webdoc:511
#: modules/websearch/doc/search-guide.webdoc:546
#: modules/websearch/doc/search-guide.webdoc:613
#: modules/websearch/doc/search-guide.webdoc:648
#: modules/websearch/doc/search-guide.webdoc:683
#: modules/websearch/doc/search-guide.webdoc:751
#: modules/websearch/doc/search-guide.webdoc:786
#: modules/websearch/doc/search-guide.webdoc:821
#: modules/websearch/doc/search-guide.webdoc:881
#: modules/websearch/doc/search-guide.webdoc:912
#: modules/websearch/doc/search-guide.webdoc:952
#: modules/websearch/doc/search-guide.webdoc:986
#: modules/websearch/doc/search-guide.webdoc:1026
#: modules/websearch/doc/search-guide.webdoc:1048
#: modules/websearch/doc/search-guide.webdoc:1068
#: modules/websearch/doc/search-guide.webdoc:1084
#: modules/websearch/doc/search-guide.webdoc:1124
#: modules/websearch/doc/search-guide.webdoc:1147
#: modules/websearch/doc/search-guide.webdoc:1168
#: modules/websearch/doc/search-guide.webdoc:1183
#: modules/websearch/doc/search-guide.webdoc:1227
#: modules/websearch/doc/search-guide.webdoc:1252
#: modules/websearch/doc/search-guide.webdoc:1273
#: modules/websearch/doc/search-guide.webdoc:1289
#: modules/websearch/doc/search-guide.webdoc:1334
#: modules/websearch/doc/search-guide.webdoc:1357
#: modules/websearch/doc/search-guide.webdoc:1379
#: modules/websearch/doc/search-guide.webdoc:1395
#: modules/websearch/doc/search-guide.webdoc:1765
#: modules/websearch/doc/search-guide.webdoc:1779
#: modules/websearch/doc/search-guide.webdoc:1797
#: modules/websearch/doc/search-guide.webdoc:1816
#: modules/websearch/doc/search-guide.webdoc:1829
#: modules/websearch/doc/search-guide.webdoc:1847
#: modules/websearch/doc/search-guide.webdoc:1867
#: modules/websearch/doc/search-guide.webdoc:1882
#: modules/websearch/doc/search-guide.webdoc:1901
#: modules/websearch/doc/search-guide.webdoc:1924
#: modules/websearch/doc/search-guide.webdoc:1939
#: modules/websearch/doc/search-guide.webdoc:1958
#: modules/websearch/doc/search-guide.webdoc:1986
#: modules/websearch/doc/search-guide.webdoc:2024
#: modules/websearch/doc/search-guide.webdoc:2035
#: modules/websearch/doc/search-guide.webdoc:2049
#: modules/websearch/doc/search-guide.webdoc:2063
#: modules/websearch/doc/search-guide.webdoc:2076
#: modules/websearch/doc/search-guide.webdoc:2092
#: modules/websearch/doc/search-guide.webdoc:2103
#: modules/websearch/doc/search-guide.webdoc:2117
#: modules/websearch/doc/search-guide.webdoc:2131
#: modules/websearch/doc/search-guide.webdoc:2144
#: modules/websearch/doc/search-guide.webdoc:2160
#: modules/websearch/doc/search-guide.webdoc:2171
#: modules/websearch/doc/search-guide.webdoc:2185
#: modules/websearch/doc/search-guide.webdoc:2199
#: modules/websearch/doc/search-guide.webdoc:2212
#: modules/websearch/doc/search-guide.webdoc:2230
#: modules/websearch/doc/search-guide.webdoc:2241
#: modules/websearch/doc/search-guide.webdoc:2255
#: modules/websearch/doc/search-guide.webdoc:2269
#: modules/websearch/doc/search-guide.webdoc:2282
#: modules/websearch/doc/search-guide.webdoc:2311
#: modules/websearch/doc/search-guide.webdoc:2325
#: modules/websearch/doc/search-guide.webdoc:2342
#: modules/websearch/doc/search-guide.webdoc:2355
#: modules/websearch/doc/search-guide.webdoc:2372
#: modules/websearch/doc/search-guide.webdoc:2386
#: modules/websearch/doc/search-guide.webdoc:2404
#: modules/websearch/doc/search-guide.webdoc:2418
#: modules/websearch/doc/search-guide.webdoc:2449
#: modules/websearch/doc/search-guide.webdoc:2464
#: modules/websearch/doc/search-guide.webdoc:2478
#: modules/websearch/doc/search-guide.webdoc:2493
#: modules/websearch/doc/search-guide.webdoc:2521
#: modules/websearch/doc/search-guide.webdoc:2536
#: modules/websearch/doc/search-guide.webdoc:2550
#: modules/websearch/doc/search-guide.webdoc:2566
#: modules/websearch/doc/search-guide.webdoc:2598
#: modules/websearch/doc/search-guide.webdoc:2614
#: modules/websearch/doc/search-guide.webdoc:2628
#: modules/websearch/doc/search-guide.webdoc:2643
#: modules/websearch/doc/search-guide.webdoc:2674
#: modules/websearch/doc/search-guide.webdoc:2690
#: modules/websearch/doc/search-guide.webdoc:2704
#: modules/websearch/doc/search-guide.webdoc:2719
#: modules/websearch/doc/search-guide.webdoc:2761
#: modules/websearch/doc/search-guide.webdoc:2776
#: modules/websearch/doc/search-guide.webdoc:2790
#: modules/websearch/doc/search-guide.webdoc:2815
#: modules/websearch/doc/search-guide.webdoc:2830
#: modules/websearch/doc/search-guide.webdoc:2844
#: modules/websearch/doc/search-guide.webdoc:2873
#: modules/websearch/doc/search-guide.webdoc:2888
#: modules/websearch/doc/search-guide.webdoc:2902
#: modules/websearch/doc/search-guide.webdoc:2930
#: modules/websearch/doc/search-guide.webdoc:2945
#: modules/websearch/doc/search-guide.webdoc:2958
#: modules/websearch/doc/search-guide.webdoc:2993
#: modules/websearch/doc/search-guide.webdoc:3015
#: modules/websearch/doc/search-guide.webdoc:3039
#: modules/websearch/doc/search-guide.webdoc:3063
#: modules/websearch/doc/search-guide.webdoc:3087
#: modules/websearch/doc/search-guide.webdoc:3102
#: modules/websearch/doc/search-guide.webdoc:3118
#: modules/websearch/doc/search-guide.webdoc:3135
#: modules/websearch/doc/search-guide.webdoc:3155
#: modules/websearch/doc/search-guide.webdoc:3173
#: modules/websearch/doc/search-guide.webdoc:3191
#: modules/websearch/doc/search-guide.webdoc:3210
#: modules/websearch/doc/search-guide.webdoc:3231
#: modules/websearch/doc/search-guide.webdoc:3245
#: modules/websearch/doc/search-guide.webdoc:3265
#: modules/websearch/doc/search-guide.webdoc:3280
#: modules/websearch/doc/search-guide.webdoc:3299
#: modules/websearch/doc/search-guide.webdoc:3314
#: modules/websearch/doc/search-guide.webdoc:3334
#: modules/websearch/doc/search-guide.webdoc:3349
#: modules/websearch/doc/search-guide.webdoc:3411
#: modules/websearch/doc/search-guide.webdoc:3425
#: modules/websearch/doc/search-guide.webdoc:3442
#: modules/websearch/doc/search-guide.webdoc:3455
#: modules/websearch/doc/search-guide.webdoc:3473
#: modules/websearch/doc/search-guide.webdoc:3488
#: modules/websearch/doc/search-guide.webdoc:3506
#: modules/websearch/doc/search-guide.webdoc:3521
#: modules/websearch/doc/search-guide.webdoc:3546
#: modules/websearch/doc/search-guide.webdoc:3559
#: modules/websearch/doc/search-guide.webdoc:3572
#: modules/websearch/doc/search-guide.webdoc:3588
#: modules/websearch/doc/search-guide.webdoc:3604
#: modules/websearch/doc/search-guide.webdoc:3621
#: modules/websearch/doc/search-guide.webdoc:3654
#: modules/websearch/doc/search-guide.webdoc:3670
#: modules/websearch/doc/search-guide.webdoc:3687
#: modules/websearch/doc/search-guide.webdoc:3707
#: modules/websearch/doc/search-guide.webdoc:3721
#: modules/websearch/doc/search-guide.webdoc:3739
#: modules/websearch/doc/search-guide.webdoc:3760
#: modules/websearch/doc/search-guide.webdoc:3779
#: modules/websearch/doc/search-guide.webdoc:3797
#: modules/websearch/doc/search-guide.webdoc:3819
#: modules/websearch/doc/search-guide.webdoc:3838
#: modules/websearch/doc/search-guide.webdoc:3855
#: modules/websearch/doc/search-guide.webdoc:3976
#: modules/websearch/doc/search-guide.webdoc:4001
#: modules/websearch/doc/search-guide.webdoc:4024
#: modules/websearch/doc/search-guide.webdoc:4050
#: modules/websearch/doc/search-guide.webdoc:4074
#: modules/websearch/doc/search-guide.webdoc:4101
#: modules/websearch/doc/search-guide.webdoc:4126
#: modules/websearch/doc/search-guide.webdoc:4152
#: modules/websearch/doc/search-guide.webdoc:4181
#: modules/websearch/doc/search-guide.webdoc:4201
#: modules/websearch/doc/search-guide.webdoc:4225
#: modules/websearch/doc/search-guide.webdoc:4252
#: modules/websearch/doc/search-guide.webdoc:4292
#: modules/websearch/doc/search-guide.webdoc:4313
#: modules/websearch/doc/search-guide.webdoc:4337
#: modules/websearch/doc/search-guide.webdoc:4367
#: modules/websearch/doc/search-guide.webdoc:4411
#: modules/websearch/doc/search-guide.webdoc:4433
#: modules/websearch/doc/search-guide.webdoc:4458
#: modules/websearch/doc/search-guide.webdoc:4488
#: modules/websearch/doc/search-guide.webdoc:4533
#: modules/websearch/doc/search-guide.webdoc:4554
#: modules/websearch/doc/search-guide.webdoc:4579
#: modules/websearch/doc/search-guide.webdoc:4609
#: modules/websearch/doc/search-guide.webdoc:4901
#: modules/websearch/doc/search-guide.webdoc:4917
#: modules/websearch/doc/search-guide.webdoc:4937
#: modules/websearch/doc/search-guide.webdoc:4956
#: modules/websearch/doc/search-guide.webdoc:4977
#: modules/websearch/doc/search-guide.webdoc:4995
#: modules/websearch/doc/search-guide.webdoc:5016
#: modules/websearch/doc/search-guide.webdoc:5034
#: modules/websearch/doc/search-guide.webdoc:5067
#: modules/websearch/doc/search-guide.webdoc:5081
#: modules/websearch/doc/search-guide.webdoc:5096
#: modules/websearch/doc/search-guide.webdoc:5112
#: modules/websearch/doc/search-guide.webdoc:5131
#: modules/websearch/doc/search-guide.webdoc:5145
#: modules/websearch/doc/search-guide.webdoc:5161
#: modules/websearch/doc/search-guide.webdoc:5179
#: modules/websearch/doc/search-guide.webdoc:5198
#: modules/websearch/doc/search-guide.webdoc:5213
#: modules/websearch/doc/search-guide.webdoc:5228
#: modules/websearch/doc/search-guide.webdoc:5246
#: modules/websearch/doc/search-guide.webdoc:5266
#: modules/websearch/doc/search-guide.webdoc:5281
#: modules/websearch/doc/search-guide.webdoc:5296
#: modules/websearch/doc/search-guide.webdoc:5316
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:131
#: modules/miscutil/lib/inveniocfg.py:481
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7219
#: modules/bibcirculation/lib/bibcirculation_templates.py:7918
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16025
msgid "author"
-msgstr "autor"
+msgstr ""
#: modules/webhelp/web/help-central.webdoc:98
#: modules/websearch/doc/search-guide.webdoc:20
msgid "Search Guide"
-msgstr "Guía de búsqueda"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:347
#: modules/websearch/doc/search-guide.webdoc:382
#: modules/websearch/doc/search-guide.webdoc:417
#: modules/websearch/doc/search-guide.webdoc:479
#: modules/websearch/doc/search-guide.webdoc:514
#: modules/websearch/doc/search-guide.webdoc:549
#: modules/websearch/doc/search-guide.webdoc:616
#: modules/websearch/doc/search-guide.webdoc:651
#: modules/websearch/doc/search-guide.webdoc:686
#: modules/websearch/doc/search-guide.webdoc:754
#: modules/websearch/doc/search-guide.webdoc:789
#: modules/websearch/doc/search-guide.webdoc:824
#: modules/miscutil/lib/inveniocfg.py:492
msgid "experiment"
-msgstr "experimento"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:334
#: modules/websearch/doc/search-guide.webdoc:370
#: modules/websearch/doc/search-guide.webdoc:405
#: modules/websearch/doc/search-guide.webdoc:466
#: modules/websearch/doc/search-guide.webdoc:502
#: modules/websearch/doc/search-guide.webdoc:537
#: modules/websearch/doc/search-guide.webdoc:603
#: modules/websearch/doc/search-guide.webdoc:639
#: modules/websearch/doc/search-guide.webdoc:674
#: modules/websearch/doc/search-guide.webdoc:741
#: modules/websearch/doc/search-guide.webdoc:777
#: modules/websearch/doc/search-guide.webdoc:812
#: modules/websearch/lib/search_engine.py:1219
#: modules/websearch/lib/websearch_templates.py:1102
msgid "Any of the words:"
-msgstr "Al menos una de las palabras:"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:346
#: modules/websearch/doc/search-guide.webdoc:381
#: modules/websearch/doc/search-guide.webdoc:416
#: modules/websearch/doc/search-guide.webdoc:478
#: modules/websearch/doc/search-guide.webdoc:513
#: modules/websearch/doc/search-guide.webdoc:548
#: modules/websearch/doc/search-guide.webdoc:615
#: modules/websearch/doc/search-guide.webdoc:650
#: modules/websearch/doc/search-guide.webdoc:685
#: modules/websearch/doc/search-guide.webdoc:753
#: modules/websearch/doc/search-guide.webdoc:788
#: modules/websearch/doc/search-guide.webdoc:823
#: modules/miscutil/lib/inveniocfg.py:489
msgid "division"
-msgstr "división"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:39
#: modules/websearch/doc/search-tips.webdoc:70
#: modules/websearch/doc/search-tips.webdoc:108
#: modules/websearch/doc/search-tips.webdoc:156
#: modules/websearch/doc/search-tips.webdoc:179
#: modules/websearch/doc/search-tips.webdoc:203
#: modules/websearch/doc/search-tips.webdoc:248
#: modules/websearch/doc/search-tips.webdoc:288
#: modules/websearch/doc/search-tips.webdoc:299
#: modules/websearch/doc/search-tips.webdoc:319
#: modules/websearch/doc/search-tips.webdoc:339
#: modules/websearch/doc/search-tips.webdoc:373
#: modules/websearch/doc/search-tips.webdoc:408
#: modules/websearch/doc/search-tips.webdoc:428
#: modules/websearch/doc/search-tips.webdoc:448
#: modules/websearch/doc/search-tips.webdoc:482
#: modules/websearch/doc/search-tips.webdoc:500
#: modules/websearch/doc/search-tips.webdoc:519
#: modules/websearch/doc/search-tips.webdoc:552
#: modules/websearch/doc/search-tips.webdoc:577
#: modules/websearch/doc/search-tips.webdoc:601
#: modules/websearch/doc/search-tips.webdoc:622
#: modules/websearch/doc/search-guide.webdoc:227
#: modules/websearch/doc/search-guide.webdoc:251
#: modules/websearch/doc/search-guide.webdoc:277
#: modules/websearch/doc/search-guide.webdoc:302
#: modules/websearch/doc/search-guide.webdoc:427
#: modules/websearch/doc/search-guide.webdoc:559
#: modules/websearch/doc/search-guide.webdoc:696
#: modules/websearch/doc/search-guide.webdoc:834
#: modules/websearch/doc/search-guide.webdoc:882
#: modules/websearch/doc/search-guide.webdoc:913
#: modules/websearch/doc/search-guide.webdoc:953
#: modules/websearch/doc/search-guide.webdoc:987
#: modules/websearch/doc/search-guide.webdoc:1027
#: modules/websearch/doc/search-guide.webdoc:1049
#: modules/websearch/doc/search-guide.webdoc:1069
#: modules/websearch/doc/search-guide.webdoc:1085
#: modules/websearch/doc/search-guide.webdoc:1125
#: modules/websearch/doc/search-guide.webdoc:1148
#: modules/websearch/doc/search-guide.webdoc:1169
#: modules/websearch/doc/search-guide.webdoc:1184
#: modules/websearch/doc/search-guide.webdoc:1228
#: modules/websearch/doc/search-guide.webdoc:1253
#: modules/websearch/doc/search-guide.webdoc:1274
#: modules/websearch/doc/search-guide.webdoc:1290
#: modules/websearch/doc/search-guide.webdoc:1335
#: modules/websearch/doc/search-guide.webdoc:1358
#: modules/websearch/doc/search-guide.webdoc:1380
#: modules/websearch/doc/search-guide.webdoc:1396
#: modules/websearch/doc/search-guide.webdoc:1766
#: modules/websearch/doc/search-guide.webdoc:1780
#: modules/websearch/doc/search-guide.webdoc:1798
#: modules/websearch/doc/search-guide.webdoc:1817
#: modules/websearch/doc/search-guide.webdoc:1830
#: modules/websearch/doc/search-guide.webdoc:1848
#: modules/websearch/doc/search-guide.webdoc:1868
#: modules/websearch/doc/search-guide.webdoc:1883
#: modules/websearch/doc/search-guide.webdoc:1902
#: modules/websearch/doc/search-guide.webdoc:1925
#: modules/websearch/doc/search-guide.webdoc:1940
#: modules/websearch/doc/search-guide.webdoc:1959
#: modules/websearch/doc/search-guide.webdoc:1988
#: modules/websearch/doc/search-guide.webdoc:2025
#: modules/websearch/doc/search-guide.webdoc:2036
#: modules/websearch/doc/search-guide.webdoc:2050
#: modules/websearch/doc/search-guide.webdoc:2064
#: modules/websearch/doc/search-guide.webdoc:2077
#: modules/websearch/doc/search-guide.webdoc:2093
#: modules/websearch/doc/search-guide.webdoc:2104
#: modules/websearch/doc/search-guide.webdoc:2118
#: modules/websearch/doc/search-guide.webdoc:2132
#: modules/websearch/doc/search-guide.webdoc:2145
#: modules/websearch/doc/search-guide.webdoc:2161
#: modules/websearch/doc/search-guide.webdoc:2172
#: modules/websearch/doc/search-guide.webdoc:2186
#: modules/websearch/doc/search-guide.webdoc:2200
#: modules/websearch/doc/search-guide.webdoc:2213
#: modules/websearch/doc/search-guide.webdoc:2231
#: modules/websearch/doc/search-guide.webdoc:2242
#: modules/websearch/doc/search-guide.webdoc:2256
#: modules/websearch/doc/search-guide.webdoc:2270
#: modules/websearch/doc/search-guide.webdoc:2283
#: modules/websearch/doc/search-guide.webdoc:2312
#: modules/websearch/doc/search-guide.webdoc:2326
#: modules/websearch/doc/search-guide.webdoc:2343
#: modules/websearch/doc/search-guide.webdoc:2356
#: modules/websearch/doc/search-guide.webdoc:2373
#: modules/websearch/doc/search-guide.webdoc:2387
#: modules/websearch/doc/search-guide.webdoc:2405
#: modules/websearch/doc/search-guide.webdoc:2419
#: modules/websearch/doc/search-guide.webdoc:2450
#: modules/websearch/doc/search-guide.webdoc:2465
#: modules/websearch/doc/search-guide.webdoc:2479
#: modules/websearch/doc/search-guide.webdoc:2494
#: modules/websearch/doc/search-guide.webdoc:2522
#: modules/websearch/doc/search-guide.webdoc:2537
#: modules/websearch/doc/search-guide.webdoc:2551
#: modules/websearch/doc/search-guide.webdoc:2567
#: modules/websearch/doc/search-guide.webdoc:2599
#: modules/websearch/doc/search-guide.webdoc:2615
#: modules/websearch/doc/search-guide.webdoc:2629
#: modules/websearch/doc/search-guide.webdoc:2644
#: modules/websearch/doc/search-guide.webdoc:2675
#: modules/websearch/doc/search-guide.webdoc:2691
#: modules/websearch/doc/search-guide.webdoc:2705
#: modules/websearch/doc/search-guide.webdoc:2720
#: modules/websearch/doc/search-guide.webdoc:2762
#: modules/websearch/doc/search-guide.webdoc:2777
#: modules/websearch/doc/search-guide.webdoc:2791
#: modules/websearch/doc/search-guide.webdoc:2816
#: modules/websearch/doc/search-guide.webdoc:2831
#: modules/websearch/doc/search-guide.webdoc:2845
#: modules/websearch/doc/search-guide.webdoc:2874
#: modules/websearch/doc/search-guide.webdoc:2889
#: modules/websearch/doc/search-guide.webdoc:2903
#: modules/websearch/doc/search-guide.webdoc:2931
#: modules/websearch/doc/search-guide.webdoc:2946
#: modules/websearch/doc/search-guide.webdoc:2959
#: modules/websearch/doc/search-guide.webdoc:2994
#: modules/websearch/doc/search-guide.webdoc:3016
#: modules/websearch/doc/search-guide.webdoc:3040
#: modules/websearch/doc/search-guide.webdoc:3064
#: modules/websearch/doc/search-guide.webdoc:3088
#: modules/websearch/doc/search-guide.webdoc:3103
#: modules/websearch/doc/search-guide.webdoc:3119
#: modules/websearch/doc/search-guide.webdoc:3136
#: modules/websearch/doc/search-guide.webdoc:3156
#: modules/websearch/doc/search-guide.webdoc:3174
#: modules/websearch/doc/search-guide.webdoc:3192
#: modules/websearch/doc/search-guide.webdoc:3211
#: modules/websearch/doc/search-guide.webdoc:3232
#: modules/websearch/doc/search-guide.webdoc:3246
#: modules/websearch/doc/search-guide.webdoc:3266
#: modules/websearch/doc/search-guide.webdoc:3281
#: modules/websearch/doc/search-guide.webdoc:3300
#: modules/websearch/doc/search-guide.webdoc:3315
#: modules/websearch/doc/search-guide.webdoc:3335
#: modules/websearch/doc/search-guide.webdoc:3350
#: modules/websearch/doc/search-guide.webdoc:3412
#: modules/websearch/doc/search-guide.webdoc:3426
#: modules/websearch/doc/search-guide.webdoc:3443
#: modules/websearch/doc/search-guide.webdoc:3456
#: modules/websearch/doc/search-guide.webdoc:3474
#: modules/websearch/doc/search-guide.webdoc:3489
#: modules/websearch/doc/search-guide.webdoc:3507
#: modules/websearch/doc/search-guide.webdoc:3522
#: modules/websearch/doc/search-guide.webdoc:3547
#: modules/websearch/doc/search-guide.webdoc:3560
#: modules/websearch/doc/search-guide.webdoc:3573
#: modules/websearch/doc/search-guide.webdoc:3589
#: modules/websearch/doc/search-guide.webdoc:3605
#: modules/websearch/doc/search-guide.webdoc:3622
#: modules/websearch/doc/search-guide.webdoc:3655
#: modules/websearch/doc/search-guide.webdoc:3671
#: modules/websearch/doc/search-guide.webdoc:3688
#: modules/websearch/doc/search-guide.webdoc:3708
#: modules/websearch/doc/search-guide.webdoc:3722
#: modules/websearch/doc/search-guide.webdoc:3740
#: modules/websearch/doc/search-guide.webdoc:3761
#: modules/websearch/doc/search-guide.webdoc:3780
#: modules/websearch/doc/search-guide.webdoc:3798
#: modules/websearch/doc/search-guide.webdoc:3820
#: modules/websearch/doc/search-guide.webdoc:3839
#: modules/websearch/doc/search-guide.webdoc:3856
#: modules/websearch/doc/search-guide.webdoc:3977
#: modules/websearch/doc/search-guide.webdoc:4002
#: modules/websearch/doc/search-guide.webdoc:4025
#: modules/websearch/doc/search-guide.webdoc:4051
#: modules/websearch/doc/search-guide.webdoc:4075
#: modules/websearch/doc/search-guide.webdoc:4102
#: modules/websearch/doc/search-guide.webdoc:4127
#: modules/websearch/doc/search-guide.webdoc:4153
#: modules/websearch/doc/search-guide.webdoc:4182
#: modules/websearch/doc/search-guide.webdoc:4202
#: modules/websearch/doc/search-guide.webdoc:4226
#: modules/websearch/doc/search-guide.webdoc:4253
#: modules/websearch/doc/search-guide.webdoc:4293
#: modules/websearch/doc/search-guide.webdoc:4314
#: modules/websearch/doc/search-guide.webdoc:4338
#: modules/websearch/doc/search-guide.webdoc:4368
#: modules/websearch/doc/search-guide.webdoc:4412
#: modules/websearch/doc/search-guide.webdoc:4434
#: modules/websearch/doc/search-guide.webdoc:4459
#: modules/websearch/doc/search-guide.webdoc:4489
#: modules/websearch/doc/search-guide.webdoc:4534
#: modules/websearch/doc/search-guide.webdoc:4555
#: modules/websearch/doc/search-guide.webdoc:4580
#: modules/websearch/doc/search-guide.webdoc:4610
#: modules/websearch/doc/search-guide.webdoc:4902
#: modules/websearch/doc/search-guide.webdoc:4918
#: modules/websearch/doc/search-guide.webdoc:4938
#: modules/websearch/doc/search-guide.webdoc:4957
#: modules/websearch/doc/search-guide.webdoc:4978
#: modules/websearch/doc/search-guide.webdoc:4996
#: modules/websearch/doc/search-guide.webdoc:5017
#: modules/websearch/doc/search-guide.webdoc:5035
#: modules/websearch/doc/search-guide.webdoc:5068
#: modules/websearch/doc/search-guide.webdoc:5082
#: modules/websearch/doc/search-guide.webdoc:5097
#: modules/websearch/doc/search-guide.webdoc:5113
#: modules/websearch/doc/search-guide.webdoc:5132
#: modules/websearch/doc/search-guide.webdoc:5146
#: modules/websearch/doc/search-guide.webdoc:5162
#: modules/websearch/doc/search-guide.webdoc:5180
#: modules/websearch/doc/search-guide.webdoc:5199
#: modules/websearch/doc/search-guide.webdoc:5214
#: modules/websearch/doc/search-guide.webdoc:5229
#: modules/websearch/doc/search-guide.webdoc:5247
#: modules/websearch/doc/search-guide.webdoc:5267
#: modules/websearch/doc/search-guide.webdoc:5282
#: modules/websearch/doc/search-guide.webdoc:5297
#: modules/websearch/doc/search-guide.webdoc:5317
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:133
#: modules/websearch/lib/websearch_templates.py:792
#: modules/websearch/lib/websearch_templates.py:870
#: modules/websearch/lib/websearch_templates.py:993
#: modules/websearch/lib/websearch_templates.py:1962
#: modules/websearch/lib/websearch_templates.py:2055
#: modules/websearch/lib/websearch_templates.py:2112
#: modules/websearch/lib/websearch_templates.py:2169
#: modules/webstyle/lib/webstyle_templates.py:433
#: modules/webstyle/lib/webstyle_templates.py:502
#: modules/webstyle/lib/webdoc_tests.py:86
#: modules/bibedit/lib/bibeditmulti_templates.py:312
#: modules/bibcirculation/lib/bibcirculation_templates.py:161
#: modules/bibcirculation/lib/bibcirculation_templates.py:207
#: modules/bibcirculation/lib/bibcirculation_templates.py:1977
#: modules/bibcirculation/lib/bibcirculation_templates.py:2052
#: modules/bibcirculation/lib/bibcirculation_templates.py:2245
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6806
#: modules/bibcirculation/lib/bibcirculation_templates.py:7251
#: modules/bibcirculation/lib/bibcirculation_templates.py:7948
#: modules/bibcirculation/lib/bibcirculation_templates.py:8600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9190
#: modules/bibcirculation/lib/bibcirculation_templates.py:9687
#: modules/bibcirculation/lib/bibcirculation_templates.py:10162
#: modules/bibcirculation/lib/bibcirculation_templates.py:14248
#: modules/bibcirculation/lib/bibcirculation_templates.py:14571
#: modules/bibcirculation/lib/bibcirculation_templates.py:15243
#: modules/bibcirculation/lib/bibcirculation_templates.py:16055
#: modules/bibcirculation/lib/bibcirculation_templates.py:17122
#: modules/bibcirculation/lib/bibcirculation_templates.py:17604
#: modules/bibcirculation/lib/bibcirculation_templates.py:17788
#: modules/bibknowledge/lib/bibknowledge_templates.py:82
#: modules/bibknowledge/lib/bibknowledge_templates.py:423
msgid "Search"
-msgstr "Buscar"
+msgstr ""
#: modules/webhelp/web/help-central.webdoc:133
msgid "Citation Metrics"
-msgstr "Métrica de citas"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:35
#: modules/websearch/doc/search-tips.webdoc:66
#: modules/websearch/doc/search-tips.webdoc:104
#: modules/websearch/doc/search-tips.webdoc:152
#: modules/websearch/doc/search-tips.webdoc:175
#: modules/websearch/doc/search-tips.webdoc:199
#: modules/websearch/doc/search-tips.webdoc:244
#: modules/websearch/doc/search-tips.webdoc:284
#: modules/websearch/doc/search-tips.webdoc:295
#: modules/websearch/doc/search-tips.webdoc:315
#: modules/websearch/doc/search-tips.webdoc:335
#: modules/websearch/doc/search-tips.webdoc:369
#: modules/websearch/doc/search-tips.webdoc:403
#: modules/websearch/doc/search-tips.webdoc:424
#: modules/websearch/doc/search-tips.webdoc:444
#: modules/websearch/doc/search-tips.webdoc:478
#: modules/websearch/doc/search-tips.webdoc:496
#: modules/websearch/doc/search-tips.webdoc:515
#: modules/websearch/doc/search-tips.webdoc:548
#: modules/websearch/doc/search-tips.webdoc:559
#: modules/websearch/doc/search-tips.webdoc:561
#: modules/websearch/doc/search-tips.webdoc:564
#: modules/websearch/doc/search-tips.webdoc:573
#: modules/websearch/doc/search-tips.webdoc:597
#: modules/websearch/doc/search-tips.webdoc:618
#: modules/websearch/doc/search-guide.webdoc:224
#: modules/websearch/doc/search-guide.webdoc:248
#: modules/websearch/doc/search-guide.webdoc:274
#: modules/websearch/doc/search-guide.webdoc:299
#: modules/websearch/doc/search-guide.webdoc:342
#: modules/websearch/doc/search-guide.webdoc:377
#: modules/websearch/doc/search-guide.webdoc:412
#: modules/websearch/doc/search-guide.webdoc:474
#: modules/websearch/doc/search-guide.webdoc:509
#: modules/websearch/doc/search-guide.webdoc:544
#: modules/websearch/doc/search-guide.webdoc:611
#: modules/websearch/doc/search-guide.webdoc:646
#: modules/websearch/doc/search-guide.webdoc:681
#: modules/websearch/doc/search-guide.webdoc:749
#: modules/websearch/doc/search-guide.webdoc:784
#: modules/websearch/doc/search-guide.webdoc:819
#: modules/websearch/doc/search-guide.webdoc:879
#: modules/websearch/doc/search-guide.webdoc:910
#: modules/websearch/doc/search-guide.webdoc:950
#: modules/websearch/doc/search-guide.webdoc:984
#: modules/websearch/doc/search-guide.webdoc:1024
#: modules/websearch/doc/search-guide.webdoc:1046
#: modules/websearch/doc/search-guide.webdoc:1066
#: modules/websearch/doc/search-guide.webdoc:1082
#: modules/websearch/doc/search-guide.webdoc:1122
#: modules/websearch/doc/search-guide.webdoc:1145
#: modules/websearch/doc/search-guide.webdoc:1166
#: modules/websearch/doc/search-guide.webdoc:1181
#: modules/websearch/doc/search-guide.webdoc:1225
#: modules/websearch/doc/search-guide.webdoc:1250
#: modules/websearch/doc/search-guide.webdoc:1271
#: modules/websearch/doc/search-guide.webdoc:1287
#: modules/websearch/doc/search-guide.webdoc:1332
#: modules/websearch/doc/search-guide.webdoc:1355
#: modules/websearch/doc/search-guide.webdoc:1377
#: modules/websearch/doc/search-guide.webdoc:1393
#: modules/websearch/doc/search-guide.webdoc:1763
#: modules/websearch/doc/search-guide.webdoc:1777
#: modules/websearch/doc/search-guide.webdoc:1795
#: modules/websearch/doc/search-guide.webdoc:1814
#: modules/websearch/doc/search-guide.webdoc:1827
#: modules/websearch/doc/search-guide.webdoc:1845
#: modules/websearch/doc/search-guide.webdoc:1865
#: modules/websearch/doc/search-guide.webdoc:1880
#: modules/websearch/doc/search-guide.webdoc:1899
#: modules/websearch/doc/search-guide.webdoc:1922
#: modules/websearch/doc/search-guide.webdoc:1937
#: modules/websearch/doc/search-guide.webdoc:1956
#: modules/websearch/doc/search-guide.webdoc:1984
#: modules/websearch/doc/search-guide.webdoc:2022
#: modules/websearch/doc/search-guide.webdoc:2033
#: modules/websearch/doc/search-guide.webdoc:2047
#: modules/websearch/doc/search-guide.webdoc:2061
#: modules/websearch/doc/search-guide.webdoc:2074
#: modules/websearch/doc/search-guide.webdoc:2090
#: modules/websearch/doc/search-guide.webdoc:2101
#: modules/websearch/doc/search-guide.webdoc:2115
#: modules/websearch/doc/search-guide.webdoc:2129
#: modules/websearch/doc/search-guide.webdoc:2142
#: modules/websearch/doc/search-guide.webdoc:2158
#: modules/websearch/doc/search-guide.webdoc:2169
#: modules/websearch/doc/search-guide.webdoc:2183
#: modules/websearch/doc/search-guide.webdoc:2197
#: modules/websearch/doc/search-guide.webdoc:2210
#: modules/websearch/doc/search-guide.webdoc:2228
#: modules/websearch/doc/search-guide.webdoc:2239
#: modules/websearch/doc/search-guide.webdoc:2253
#: modules/websearch/doc/search-guide.webdoc:2267
#: modules/websearch/doc/search-guide.webdoc:2280
#: modules/websearch/doc/search-guide.webdoc:2309
#: modules/websearch/doc/search-guide.webdoc:2323
#: modules/websearch/doc/search-guide.webdoc:2340
#: modules/websearch/doc/search-guide.webdoc:2353
#: modules/websearch/doc/search-guide.webdoc:2370
#: modules/websearch/doc/search-guide.webdoc:2384
#: modules/websearch/doc/search-guide.webdoc:2402
#: modules/websearch/doc/search-guide.webdoc:2416
#: modules/websearch/doc/search-guide.webdoc:2447
#: modules/websearch/doc/search-guide.webdoc:2462
#: modules/websearch/doc/search-guide.webdoc:2476
#: modules/websearch/doc/search-guide.webdoc:2491
#: modules/websearch/doc/search-guide.webdoc:2519
#: modules/websearch/doc/search-guide.webdoc:2534
#: modules/websearch/doc/search-guide.webdoc:2548
#: modules/websearch/doc/search-guide.webdoc:2564
#: modules/websearch/doc/search-guide.webdoc:2596
#: modules/websearch/doc/search-guide.webdoc:2612
#: modules/websearch/doc/search-guide.webdoc:2626
#: modules/websearch/doc/search-guide.webdoc:2641
#: modules/websearch/doc/search-guide.webdoc:2672
#: modules/websearch/doc/search-guide.webdoc:2688
#: modules/websearch/doc/search-guide.webdoc:2702
#: modules/websearch/doc/search-guide.webdoc:2717
#: modules/websearch/doc/search-guide.webdoc:2759
#: modules/websearch/doc/search-guide.webdoc:2774
#: modules/websearch/doc/search-guide.webdoc:2788
#: modules/websearch/doc/search-guide.webdoc:2813
#: modules/websearch/doc/search-guide.webdoc:2828
#: modules/websearch/doc/search-guide.webdoc:2842
#: modules/websearch/doc/search-guide.webdoc:2871
#: modules/websearch/doc/search-guide.webdoc:2886
#: modules/websearch/doc/search-guide.webdoc:2900
#: modules/websearch/doc/search-guide.webdoc:2928
#: modules/websearch/doc/search-guide.webdoc:2943
#: modules/websearch/doc/search-guide.webdoc:2956
#: modules/websearch/doc/search-guide.webdoc:2991
#: modules/websearch/doc/search-guide.webdoc:3013
#: modules/websearch/doc/search-guide.webdoc:3037
#: modules/websearch/doc/search-guide.webdoc:3061
#: modules/websearch/doc/search-guide.webdoc:3085
#: modules/websearch/doc/search-guide.webdoc:3100
#: modules/websearch/doc/search-guide.webdoc:3116
#: modules/websearch/doc/search-guide.webdoc:3133
#: modules/websearch/doc/search-guide.webdoc:3153
#: modules/websearch/doc/search-guide.webdoc:3171
#: modules/websearch/doc/search-guide.webdoc:3189
#: modules/websearch/doc/search-guide.webdoc:3208
#: modules/websearch/doc/search-guide.webdoc:3229
#: modules/websearch/doc/search-guide.webdoc:3243
#: modules/websearch/doc/search-guide.webdoc:3263
#: modules/websearch/doc/search-guide.webdoc:3278
#: modules/websearch/doc/search-guide.webdoc:3297
#: modules/websearch/doc/search-guide.webdoc:3312
#: modules/websearch/doc/search-guide.webdoc:3332
#: modules/websearch/doc/search-guide.webdoc:3347
#: modules/websearch/doc/search-guide.webdoc:3409
#: modules/websearch/doc/search-guide.webdoc:3423
#: modules/websearch/doc/search-guide.webdoc:3440
#: modules/websearch/doc/search-guide.webdoc:3453
#: modules/websearch/doc/search-guide.webdoc:3471
#: modules/websearch/doc/search-guide.webdoc:3486
#: modules/websearch/doc/search-guide.webdoc:3504
#: modules/websearch/doc/search-guide.webdoc:3519
#: modules/websearch/doc/search-guide.webdoc:3544
#: modules/websearch/doc/search-guide.webdoc:3557
#: modules/websearch/doc/search-guide.webdoc:3570
#: modules/websearch/doc/search-guide.webdoc:3586
#: modules/websearch/doc/search-guide.webdoc:3602
#: modules/websearch/doc/search-guide.webdoc:3619
#: modules/websearch/doc/search-guide.webdoc:3652
#: modules/websearch/doc/search-guide.webdoc:3668
#: modules/websearch/doc/search-guide.webdoc:3685
#: modules/websearch/doc/search-guide.webdoc:3705
#: modules/websearch/doc/search-guide.webdoc:3719
#: modules/websearch/doc/search-guide.webdoc:3737
#: modules/websearch/doc/search-guide.webdoc:3758
#: modules/websearch/doc/search-guide.webdoc:3777
#: modules/websearch/doc/search-guide.webdoc:3795
#: modules/websearch/doc/search-guide.webdoc:3817
#: modules/websearch/doc/search-guide.webdoc:3836
#: modules/websearch/doc/search-guide.webdoc:3853
#: modules/websearch/doc/search-guide.webdoc:3974
#: modules/websearch/doc/search-guide.webdoc:3999
#: modules/websearch/doc/search-guide.webdoc:4022
#: modules/websearch/doc/search-guide.webdoc:4048
#: modules/websearch/doc/search-guide.webdoc:4072
#: modules/websearch/doc/search-guide.webdoc:4099
#: modules/websearch/doc/search-guide.webdoc:4124
#: modules/websearch/doc/search-guide.webdoc:4150
#: modules/websearch/doc/search-guide.webdoc:4179
#: modules/websearch/doc/search-guide.webdoc:4199
#: modules/websearch/doc/search-guide.webdoc:4223
#: modules/websearch/doc/search-guide.webdoc:4250
#: modules/websearch/doc/search-guide.webdoc:4290
#: modules/websearch/doc/search-guide.webdoc:4311
#: modules/websearch/doc/search-guide.webdoc:4335
#: modules/websearch/doc/search-guide.webdoc:4365
#: modules/websearch/doc/search-guide.webdoc:4409
#: modules/websearch/doc/search-guide.webdoc:4431
#: modules/websearch/doc/search-guide.webdoc:4456
#: modules/websearch/doc/search-guide.webdoc:4486
#: modules/websearch/doc/search-guide.webdoc:4531
#: modules/websearch/doc/search-guide.webdoc:4552
#: modules/websearch/doc/search-guide.webdoc:4577
#: modules/websearch/doc/search-guide.webdoc:4607
#: modules/websearch/doc/search-guide.webdoc:4899
#: modules/websearch/doc/search-guide.webdoc:4915
#: modules/websearch/doc/search-guide.webdoc:4935
#: modules/websearch/doc/search-guide.webdoc:4954
#: modules/websearch/doc/search-guide.webdoc:4975
#: modules/websearch/doc/search-guide.webdoc:4993
#: modules/websearch/doc/search-guide.webdoc:5014
#: modules/websearch/doc/search-guide.webdoc:5032
#: modules/websearch/doc/search-guide.webdoc:5065
#: modules/websearch/doc/search-guide.webdoc:5079
#: modules/websearch/doc/search-guide.webdoc:5094
#: modules/websearch/doc/search-guide.webdoc:5110
#: modules/websearch/doc/search-guide.webdoc:5129
#: modules/websearch/doc/search-guide.webdoc:5143
#: modules/websearch/doc/search-guide.webdoc:5159
#: modules/websearch/doc/search-guide.webdoc:5177
#: modules/websearch/doc/search-guide.webdoc:5196
#: modules/websearch/doc/search-guide.webdoc:5211
#: modules/websearch/doc/search-guide.webdoc:5226
#: modules/websearch/doc/search-guide.webdoc:5244
#: modules/websearch/doc/search-guide.webdoc:5264
#: modules/websearch/doc/search-guide.webdoc:5279
#: modules/websearch/doc/search-guide.webdoc:5294
#: modules/websearch/doc/search-guide.webdoc:5314
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:129
#: modules/miscutil/lib/inveniocfg.py:479
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
msgid "any field"
-msgstr "cualquier campo"
+msgstr ""
#: modules/webhelp/web/help-central.webdoc:20
#: modules/webhelp/web/help-central.webdoc:25
#: modules/webhelp/web/help-central.webdoc:26
#: modules/webhelp/web/help-central.webdoc:27
#: modules/webhelp/web/help-central.webdoc:28
#: modules/webhelp/web/help-central.webdoc:29
#: modules/webhelp/web/help-central.webdoc:30
#: modules/webhelp/web/help-central.webdoc:31
#: modules/webhelp/web/help-central.webdoc:32
#: modules/webhelp/web/help-central.webdoc:33
#: modules/webhelp/web/help-central.webdoc:34
#: modules/webhelp/web/help-central.webdoc:35
#: modules/webhelp/web/help-central.webdoc:36
#: modules/webhelp/web/help-central.webdoc:37
#: modules/webhelp/web/help-central.webdoc:38
#: modules/webhelp/web/help-central.webdoc:39
#: modules/webhelp/web/help-central.webdoc:40
#: modules/webhelp/web/help-central.webdoc:41
#: modules/webhelp/web/help-central.webdoc:42
#: modules/webhelp/web/help-central.webdoc:43
#: modules/webhelp/web/help-central.webdoc:44
#: modules/webhelp/web/help-central.webdoc:45
#: modules/websearch/doc/search-tips.webdoc:21
#: modules/websearch/doc/search-guide.webdoc:21
#: modules/websubmit/doc/submit-guide.webdoc:21
#: modules/webstyle/lib/webdoc_tests.py:105
#: modules/webstyle/lib/webdoc_webinterface.py:155
msgid "Help Central"
-msgstr "Centro de ayuda"
+msgstr ""
#: modules/bibformat/etc/format_templates/Default_HTML_actions.bft:6
msgid "Export as"
-msgstr "Exportar como"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:345
#: modules/websearch/doc/search-guide.webdoc:380
#: modules/websearch/doc/search-guide.webdoc:415
#: modules/websearch/doc/search-guide.webdoc:477
#: modules/websearch/doc/search-guide.webdoc:512
#: modules/websearch/doc/search-guide.webdoc:547
#: modules/websearch/doc/search-guide.webdoc:614
#: modules/websearch/doc/search-guide.webdoc:649
#: modules/websearch/doc/search-guide.webdoc:684
#: modules/websearch/doc/search-guide.webdoc:752
#: modules/websearch/doc/search-guide.webdoc:787
#: modules/websearch/doc/search-guide.webdoc:822
#: modules/miscutil/lib/inveniocfg.py:488
msgid "collection"
-msgstr "colección"
+msgstr ""
#: modules/websearch/doc/admin/websearch-admin-guide.webdoc:20
msgid "WebSearch Admin Guide"
-msgstr "Guía de administración de WebSearch"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:335
#: modules/websearch/doc/search-guide.webdoc:371
#: modules/websearch/doc/search-guide.webdoc:406
#: modules/websearch/doc/search-guide.webdoc:467
#: modules/websearch/doc/search-guide.webdoc:503
#: modules/websearch/doc/search-guide.webdoc:538
#: modules/websearch/doc/search-guide.webdoc:604
#: modules/websearch/doc/search-guide.webdoc:640
#: modules/websearch/doc/search-guide.webdoc:675
#: modules/websearch/doc/search-guide.webdoc:742
#: modules/websearch/doc/search-guide.webdoc:778
#: modules/websearch/doc/search-guide.webdoc:813
#: modules/websearch/lib/search_engine.py:1220
#: modules/websearch/lib/websearch_templates.py:1104
msgid "Exact phrase:"
-msgstr "Frase exacta:"
+msgstr ""
#: modules/webhelp/web/help-central.webdoc:108
#: modules/websubmit/doc/submit-guide.webdoc:20
msgid "Submit Guide"
-msgstr "Ayuda para el envio"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:360
#: modules/websearch/doc/search-guide.webdoc:395
#: modules/websearch/doc/search-guide.webdoc:492
#: modules/websearch/doc/search-guide.webdoc:527
#: modules/websearch/doc/search-guide.webdoc:629
#: modules/websearch/doc/search-guide.webdoc:664
#: modules/websearch/doc/search-guide.webdoc:767
#: modules/websearch/doc/search-guide.webdoc:802
#: modules/websearch/lib/search_engine.py:1053
#: modules/websearch/lib/search_engine.py:1199
#: modules/websearch/lib/websearch_templates.py:1151
#: modules/websearch/lib/websearch_webcoll.py:592
msgid "OR"
-msgstr "O"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:359
#: modules/websearch/doc/search-guide.webdoc:394
#: modules/websearch/doc/search-guide.webdoc:491
#: modules/websearch/doc/search-guide.webdoc:526
#: modules/websearch/doc/search-guide.webdoc:628
#: modules/websearch/doc/search-guide.webdoc:663
#: modules/websearch/doc/search-guide.webdoc:766
#: modules/websearch/doc/search-guide.webdoc:801
#: modules/websearch/lib/search_engine.py:1198
#: modules/websearch/lib/websearch_templates.py:1150
msgid "AND"
-msgstr "Y"
+msgstr ""
#: modules/websearch/doc/search-guide.webdoc:349
#: modules/websearch/doc/search-guide.webdoc:384
#: modules/websearch/doc/search-guide.webdoc:419
#: modules/websearch/doc/search-guide.webdoc:481
#: modules/websearch/doc/search-guide.webdoc:516
#: modules/websearch/doc/search-guide.webdoc:551
#: modules/websearch/doc/search-guide.webdoc:618
#: modules/websearch/doc/search-guide.webdoc:653
#: modules/websearch/doc/search-guide.webdoc:688
#: modules/websearch/doc/search-guide.webdoc:756
#: modules/websearch/doc/search-guide.webdoc:791
#: modules/websearch/doc/search-guide.webdoc:826
#: modules/miscutil/lib/inveniocfg.py:483
msgid "keyword"
-msgstr "palabra clave"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:36
#: modules/websearch/doc/search-tips.webdoc:67
#: modules/websearch/doc/search-tips.webdoc:105
#: modules/websearch/doc/search-tips.webdoc:153
#: modules/websearch/doc/search-tips.webdoc:176
#: modules/websearch/doc/search-tips.webdoc:200
#: modules/websearch/doc/search-tips.webdoc:245
#: modules/websearch/doc/search-tips.webdoc:285
#: modules/websearch/doc/search-tips.webdoc:296
#: modules/websearch/doc/search-tips.webdoc:316
#: modules/websearch/doc/search-tips.webdoc:336
#: modules/websearch/doc/search-tips.webdoc:370
#: modules/websearch/doc/search-tips.webdoc:404
#: modules/websearch/doc/search-tips.webdoc:425
#: modules/websearch/doc/search-tips.webdoc:445
#: modules/websearch/doc/search-tips.webdoc:479
#: modules/websearch/doc/search-tips.webdoc:497
#: modules/websearch/doc/search-tips.webdoc:516
#: modules/websearch/doc/search-tips.webdoc:549
#: modules/websearch/doc/search-tips.webdoc:574
#: modules/websearch/doc/search-tips.webdoc:598
#: modules/websearch/doc/search-tips.webdoc:619
#: modules/websearch/doc/search-guide.webdoc:225
#: modules/websearch/doc/search-guide.webdoc:249
#: modules/websearch/doc/search-guide.webdoc:275
#: modules/websearch/doc/search-guide.webdoc:300
#: modules/websearch/doc/search-guide.webdoc:353
#: modules/websearch/doc/search-guide.webdoc:388
#: modules/websearch/doc/search-guide.webdoc:423
#: modules/websearch/doc/search-guide.webdoc:485
#: modules/websearch/doc/search-guide.webdoc:520
#: modules/websearch/doc/search-guide.webdoc:555
#: modules/websearch/doc/search-guide.webdoc:622
#: modules/websearch/doc/search-guide.webdoc:657
#: modules/websearch/doc/search-guide.webdoc:692
#: modules/websearch/doc/search-guide.webdoc:760
#: modules/websearch/doc/search-guide.webdoc:795
#: modules/websearch/doc/search-guide.webdoc:830
#: modules/websearch/doc/search-guide.webdoc:880
#: modules/websearch/doc/search-guide.webdoc:911
#: modules/websearch/doc/search-guide.webdoc:951
#: modules/websearch/doc/search-guide.webdoc:985
#: modules/websearch/doc/search-guide.webdoc:1025
#: modules/websearch/doc/search-guide.webdoc:1047
#: modules/websearch/doc/search-guide.webdoc:1067
#: modules/websearch/doc/search-guide.webdoc:1083
#: modules/websearch/doc/search-guide.webdoc:1123
#: modules/websearch/doc/search-guide.webdoc:1146
#: modules/websearch/doc/search-guide.webdoc:1167
#: modules/websearch/doc/search-guide.webdoc:1182
#: modules/websearch/doc/search-guide.webdoc:1226
#: modules/websearch/doc/search-guide.webdoc:1251
#: modules/websearch/doc/search-guide.webdoc:1272
#: modules/websearch/doc/search-guide.webdoc:1288
#: modules/websearch/doc/search-guide.webdoc:1333
#: modules/websearch/doc/search-guide.webdoc:1356
#: modules/websearch/doc/search-guide.webdoc:1378
#: modules/websearch/doc/search-guide.webdoc:1394
#: modules/websearch/doc/search-guide.webdoc:1764
#: modules/websearch/doc/search-guide.webdoc:1778
#: modules/websearch/doc/search-guide.webdoc:1796
#: modules/websearch/doc/search-guide.webdoc:1815
#: modules/websearch/doc/search-guide.webdoc:1828
#: modules/websearch/doc/search-guide.webdoc:1846
#: modules/websearch/doc/search-guide.webdoc:1866
#: modules/websearch/doc/search-guide.webdoc:1881
#: modules/websearch/doc/search-guide.webdoc:1900
#: modules/websearch/doc/search-guide.webdoc:1923
#: modules/websearch/doc/search-guide.webdoc:1938
#: modules/websearch/doc/search-guide.webdoc:1957
#: modules/websearch/doc/search-guide.webdoc:1985
#: modules/websearch/doc/search-guide.webdoc:2023
#: modules/websearch/doc/search-guide.webdoc:2034
#: modules/websearch/doc/search-guide.webdoc:2048
#: modules/websearch/doc/search-guide.webdoc:2062
#: modules/websearch/doc/search-guide.webdoc:2075
#: modules/websearch/doc/search-guide.webdoc:2091
#: modules/websearch/doc/search-guide.webdoc:2102
#: modules/websearch/doc/search-guide.webdoc:2116
#: modules/websearch/doc/search-guide.webdoc:2130
#: modules/websearch/doc/search-guide.webdoc:2143
#: modules/websearch/doc/search-guide.webdoc:2159
#: modules/websearch/doc/search-guide.webdoc:2170
#: modules/websearch/doc/search-guide.webdoc:2184
#: modules/websearch/doc/search-guide.webdoc:2198
#: modules/websearch/doc/search-guide.webdoc:2211
#: modules/websearch/doc/search-guide.webdoc:2229
#: modules/websearch/doc/search-guide.webdoc:2240
#: modules/websearch/doc/search-guide.webdoc:2254
#: modules/websearch/doc/search-guide.webdoc:2268
#: modules/websearch/doc/search-guide.webdoc:2281
#: modules/websearch/doc/search-guide.webdoc:2310
#: modules/websearch/doc/search-guide.webdoc:2324
#: modules/websearch/doc/search-guide.webdoc:2341
#: modules/websearch/doc/search-guide.webdoc:2354
#: modules/websearch/doc/search-guide.webdoc:2371
#: modules/websearch/doc/search-guide.webdoc:2385
#: modules/websearch/doc/search-guide.webdoc:2403
#: modules/websearch/doc/search-guide.webdoc:2417
#: modules/websearch/doc/search-guide.webdoc:2448
#: modules/websearch/doc/search-guide.webdoc:2463
#: modules/websearch/doc/search-guide.webdoc:2477
#: modules/websearch/doc/search-guide.webdoc:2492
#: modules/websearch/doc/search-guide.webdoc:2520
#: modules/websearch/doc/search-guide.webdoc:2535
#: modules/websearch/doc/search-guide.webdoc:2549
#: modules/websearch/doc/search-guide.webdoc:2565
#: modules/websearch/doc/search-guide.webdoc:2597
#: modules/websearch/doc/search-guide.webdoc:2613
#: modules/websearch/doc/search-guide.webdoc:2627
#: modules/websearch/doc/search-guide.webdoc:2642
#: modules/websearch/doc/search-guide.webdoc:2673
#: modules/websearch/doc/search-guide.webdoc:2689
#: modules/websearch/doc/search-guide.webdoc:2703
#: modules/websearch/doc/search-guide.webdoc:2718
#: modules/websearch/doc/search-guide.webdoc:2760
#: modules/websearch/doc/search-guide.webdoc:2775
#: modules/websearch/doc/search-guide.webdoc:2789
#: modules/websearch/doc/search-guide.webdoc:2814
#: modules/websearch/doc/search-guide.webdoc:2829
#: modules/websearch/doc/search-guide.webdoc:2843
#: modules/websearch/doc/search-guide.webdoc:2872
#: modules/websearch/doc/search-guide.webdoc:2887
#: modules/websearch/doc/search-guide.webdoc:2901
#: modules/websearch/doc/search-guide.webdoc:2929
#: modules/websearch/doc/search-guide.webdoc:2944
#: modules/websearch/doc/search-guide.webdoc:2957
#: modules/websearch/doc/search-guide.webdoc:2992
#: modules/websearch/doc/search-guide.webdoc:3014
#: modules/websearch/doc/search-guide.webdoc:3038
#: modules/websearch/doc/search-guide.webdoc:3062
#: modules/websearch/doc/search-guide.webdoc:3086
#: modules/websearch/doc/search-guide.webdoc:3101
#: modules/websearch/doc/search-guide.webdoc:3117
#: modules/websearch/doc/search-guide.webdoc:3134
#: modules/websearch/doc/search-guide.webdoc:3154
#: modules/websearch/doc/search-guide.webdoc:3172
#: modules/websearch/doc/search-guide.webdoc:3190
#: modules/websearch/doc/search-guide.webdoc:3209
#: modules/websearch/doc/search-guide.webdoc:3230
#: modules/websearch/doc/search-guide.webdoc:3244
#: modules/websearch/doc/search-guide.webdoc:3264
#: modules/websearch/doc/search-guide.webdoc:3279
#: modules/websearch/doc/search-guide.webdoc:3298
#: modules/websearch/doc/search-guide.webdoc:3313
#: modules/websearch/doc/search-guide.webdoc:3333
#: modules/websearch/doc/search-guide.webdoc:3348
#: modules/websearch/doc/search-guide.webdoc:3410
#: modules/websearch/doc/search-guide.webdoc:3424
#: modules/websearch/doc/search-guide.webdoc:3441
#: modules/websearch/doc/search-guide.webdoc:3454
#: modules/websearch/doc/search-guide.webdoc:3472
#: modules/websearch/doc/search-guide.webdoc:3487
#: modules/websearch/doc/search-guide.webdoc:3505
#: modules/websearch/doc/search-guide.webdoc:3520
#: modules/websearch/doc/search-guide.webdoc:3545
#: modules/websearch/doc/search-guide.webdoc:3558
#: modules/websearch/doc/search-guide.webdoc:3571
#: modules/websearch/doc/search-guide.webdoc:3587
#: modules/websearch/doc/search-guide.webdoc:3603
#: modules/websearch/doc/search-guide.webdoc:3620
#: modules/websearch/doc/search-guide.webdoc:3653
#: modules/websearch/doc/search-guide.webdoc:3669
#: modules/websearch/doc/search-guide.webdoc:3686
#: modules/websearch/doc/search-guide.webdoc:3706
#: modules/websearch/doc/search-guide.webdoc:3720
#: modules/websearch/doc/search-guide.webdoc:3738
#: modules/websearch/doc/search-guide.webdoc:3759
#: modules/websearch/doc/search-guide.webdoc:3778
#: modules/websearch/doc/search-guide.webdoc:3796
#: modules/websearch/doc/search-guide.webdoc:3818
#: modules/websearch/doc/search-guide.webdoc:3837
#: modules/websearch/doc/search-guide.webdoc:3854
#: modules/websearch/doc/search-guide.webdoc:3975
#: modules/websearch/doc/search-guide.webdoc:4000
#: modules/websearch/doc/search-guide.webdoc:4023
#: modules/websearch/doc/search-guide.webdoc:4049
#: modules/websearch/doc/search-guide.webdoc:4073
#: modules/websearch/doc/search-guide.webdoc:4100
#: modules/websearch/doc/search-guide.webdoc:4125
#: modules/websearch/doc/search-guide.webdoc:4151
#: modules/websearch/doc/search-guide.webdoc:4180
#: modules/websearch/doc/search-guide.webdoc:4200
#: modules/websearch/doc/search-guide.webdoc:4224
#: modules/websearch/doc/search-guide.webdoc:4251
#: modules/websearch/doc/search-guide.webdoc:4291
#: modules/websearch/doc/search-guide.webdoc:4312
#: modules/websearch/doc/search-guide.webdoc:4336
#: modules/websearch/doc/search-guide.webdoc:4366
#: modules/websearch/doc/search-guide.webdoc:4410
#: modules/websearch/doc/search-guide.webdoc:4432
#: modules/websearch/doc/search-guide.webdoc:4457
#: modules/websearch/doc/search-guide.webdoc:4487
#: modules/websearch/doc/search-guide.webdoc:4532
#: modules/websearch/doc/search-guide.webdoc:4553
#: modules/websearch/doc/search-guide.webdoc:4578
#: modules/websearch/doc/search-guide.webdoc:4608
#: modules/websearch/doc/search-guide.webdoc:4900
#: modules/websearch/doc/search-guide.webdoc:4916
#: modules/websearch/doc/search-guide.webdoc:4936
#: modules/websearch/doc/search-guide.webdoc:4955
#: modules/websearch/doc/search-guide.webdoc:4976
#: modules/websearch/doc/search-guide.webdoc:4994
#: modules/websearch/doc/search-guide.webdoc:5015
#: modules/websearch/doc/search-guide.webdoc:5033
#: modules/websearch/doc/search-guide.webdoc:5066
#: modules/websearch/doc/search-guide.webdoc:5080
#: modules/websearch/doc/search-guide.webdoc:5095
#: modules/websearch/doc/search-guide.webdoc:5111
#: modules/websearch/doc/search-guide.webdoc:5130
#: modules/websearch/doc/search-guide.webdoc:5144
#: modules/websearch/doc/search-guide.webdoc:5160
#: modules/websearch/doc/search-guide.webdoc:5178
#: modules/websearch/doc/search-guide.webdoc:5197
#: modules/websearch/doc/search-guide.webdoc:5212
#: modules/websearch/doc/search-guide.webdoc:5227
#: modules/websearch/doc/search-guide.webdoc:5245
#: modules/websearch/doc/search-guide.webdoc:5265
#: modules/websearch/doc/search-guide.webdoc:5280
#: modules/websearch/doc/search-guide.webdoc:5295
#: modules/websearch/doc/search-guide.webdoc:5315
#: modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc:130
#: modules/miscutil/lib/inveniocfg.py:480
#: modules/bibcirculation/lib/bibcirculation_templates.py:2020
#: modules/bibcirculation/lib/bibcirculation_templates.py:7219
#: modules/bibcirculation/lib/bibcirculation_templates.py:7918
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16025
#: modules/bibcirculation/lib/bibcirculation_templates.py:17724
msgid "title"
-msgstr "título"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:72
#: modules/websearch/doc/search-tips.webdoc:110
#: modules/websearch/lib/websearch_templates.py:1264
msgid "Narrow by collection:"
-msgstr "Búsqueda por colección:"
+msgstr ""
#: modules/bibformat/etc/format_templates/Default_HTML_actions.bft:5
msgid "Add to personal basket"
-msgstr "Añadir a la cesta personal"
+msgstr ""
#: modules/websubmit/doc/admin/websubmit-admin-guide.webdoc:20
msgid "WebSubmit Admin Guide"
-msgstr "Guía d'administración de WebSubmit"
+msgstr ""
#: modules/websearch/doc/search-tips.webdoc:290
#: modules/websubmit/lib/websubmit_templates.py:1119
#: modules/bibharvest/lib/oai_harvest_admin.py:450
#: modules/bibharvest/lib/oai_harvest_admin.py:458
#: modules/bibharvest/lib/oai_harvest_admin.py:472
#: modules/bibharvest/lib/oai_harvest_admin.py:487
msgid "or"
-msgstr "o"
+msgstr ""
#: modules/bibedit/lib/bibedit_templates.py:295
msgid "Comparison of:"
-msgstr "Comparación entre:"
+msgstr ""
#: modules/bibedit/lib/bibedit_templates.py:296
msgid "Revision"
-msgstr "Revisión"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:315
#: modules/bibformat/lib/bibformat_templates.py:429
#: modules/bibformat/lib/bibformat_templates.py:579
#: modules/bibformat/lib/bibformat_templates.py:595
#: modules/bibformat/lib/bibformat_templates.py:626
#: modules/bibformat/lib/bibformat_templates.py:937
#: modules/bibformat/lib/bibformat_templates.py:1069
#: modules/bibformat/lib/bibformat_templates.py:1385
#: modules/bibformat/lib/bibformat_templates.py:1491
#: modules/bibformat/lib/bibformat_templates.py:1550
#: modules/webcomment/lib/webcomment_templates.py:1475
#: modules/webjournal/lib/webjournal_templates.py:165
#: modules/webjournal/lib/webjournal_templates.py:537
#: modules/webjournal/lib/webjournal_templates.py:678
#: modules/bibknowledge/lib/bibknowledge_templates.py:327
#: modules/bibknowledge/lib/bibknowledge_templates.py:587
#: modules/bibknowledge/lib/bibknowledge_templates.py:656
msgid "Menu"
-msgstr "Menú"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:317
#: modules/bibformat/lib/bibformat_templates.py:430
#: modules/bibformat/lib/bibformat_templates.py:582
#: modules/bibformat/lib/bibformat_templates.py:598
#: modules/bibformat/lib/bibformat_templates.py:629
#: modules/bibknowledge/lib/bibknowledge_templates.py:323
#: modules/bibknowledge/lib/bibknowledge_templates.py:586
#: modules/bibknowledge/lib/bibknowledge_templates.py:655
msgid "Close Editor"
-msgstr "Cerrar el editor"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:318
#: modules/bibformat/lib/bibformat_templates.py:431
#: modules/bibformat/lib/bibformat_templates.py:583
#: modules/bibformat/lib/bibformat_templates.py:599
#: modules/bibformat/lib/bibformat_templates.py:630
msgid "Modify Template Attributes"
-msgstr "Modificar los atributos de la plantilla"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:319
#: modules/bibformat/lib/bibformat_templates.py:432
#: modules/bibformat/lib/bibformat_templates.py:584
#: modules/bibformat/lib/bibformat_templates.py:600
#: modules/bibformat/lib/bibformat_templates.py:631
msgid "Template Editor"
-msgstr "Editor de plantillas"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:320
#: modules/bibformat/lib/bibformat_templates.py:433
#: modules/bibformat/lib/bibformat_templates.py:585
#: modules/bibformat/lib/bibformat_templates.py:601
#: modules/bibformat/lib/bibformat_templates.py:632
#: modules/bibformat/lib/bibformat_templates.py:1184
#: modules/bibformat/lib/bibformat_templates.py:1384
#: modules/bibformat/lib/bibformat_templates.py:1490
msgid "Check Dependencies"
-msgstr "Comprobar las dependencias"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:370
#: modules/bibformat/lib/bibformat_templates.py:935
#: modules/bibformat/lib/bibformat_templates.py:1060
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/webalert/lib/webalert_templates.py:320
#: modules/websubmit/lib/websubmit_managedocfiles.py:385
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:194
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:255
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:345
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:398
#: modules/bibcirculation/lib/bibcirculation_utils.py:454
#: modules/bibcirculation/lib/bibcirculation_templates.py:1147
#: modules/bibcirculation/lib/bibcirculation_templates.py:1347
#: modules/bibcirculation/lib/bibcirculation_templates.py:1522
#: modules/bibcirculation/lib/bibcirculation_templates.py:1797
#: modules/bibcirculation/lib/bibcirculation_templates.py:2387
#: modules/bibcirculation/lib/bibcirculation_templates.py:2504
#: modules/bibcirculation/lib/bibcirculation_templates.py:2736
#: modules/bibcirculation/lib/bibcirculation_templates.py:3094
#: modules/bibcirculation/lib/bibcirculation_templates.py:3940
#: modules/bibcirculation/lib/bibcirculation_templates.py:4043
#: modules/bibcirculation/lib/bibcirculation_templates.py:4266
#: modules/bibcirculation/lib/bibcirculation_templates.py:4327
#: modules/bibcirculation/lib/bibcirculation_templates.py:4454
#: modules/bibcirculation/lib/bibcirculation_templates.py:5602
#: modules/bibcirculation/lib/bibcirculation_templates.py:6185
#: modules/bibcirculation/lib/bibcirculation_templates.py:6234
#: modules/bibcirculation/lib/bibcirculation_templates.py:6533
#: modules/bibcirculation/lib/bibcirculation_templates.py:6596
#: modules/bibcirculation/lib/bibcirculation_templates.py:6699
#: modules/bibcirculation/lib/bibcirculation_templates.py:6928
#: modules/bibcirculation/lib/bibcirculation_templates.py:7027
#: modules/bibcirculation/lib/bibcirculation_templates.py:7427
#: modules/bibcirculation/lib/bibcirculation_templates.py:8079
#: modules/bibcirculation/lib/bibcirculation_templates.py:8236
#: modules/bibcirculation/lib/bibcirculation_templates.py:9025
#: modules/bibcirculation/lib/bibcirculation_templates.py:9270
#: modules/bibcirculation/lib/bibcirculation_templates.py:9595
#: modules/bibcirculation/lib/bibcirculation_templates.py:9834
#: modules/bibcirculation/lib/bibcirculation_templates.py:9879
#: modules/bibcirculation/lib/bibcirculation_templates.py:10070
#: modules/bibcirculation/lib/bibcirculation_templates.py:10313
#: modules/bibcirculation/lib/bibcirculation_templates.py:10357
#: modules/bibcirculation/lib/bibcirculation_templates.py:10521
#: modules/bibcirculation/lib/bibcirculation_templates.py:10748
#: modules/bibcirculation/lib/bibcirculation_templates.py:11221
#: modules/bibcirculation/lib/bibcirculation_templates.py:11349
#: modules/bibcirculation/lib/bibcirculation_templates.py:11854
#: modules/bibcirculation/lib/bibcirculation_templates.py:12210
#: modules/bibcirculation/lib/bibcirculation_templates.py:12831
#: modules/bibcirculation/lib/bibcirculation_templates.py:12994
#: modules/bibcirculation/lib/bibcirculation_templates.py:13604
#: modules/bibcirculation/lib/bibcirculation_templates.py:13867
#: modules/bibcirculation/lib/bibcirculation_templates.py:14070
#: modules/bibcirculation/lib/bibcirculation_templates.py:14140
#: modules/bibcirculation/lib/bibcirculation_templates.py:14389
#: modules/bibcirculation/lib/bibcirculation_templates.py:14460
#: modules/bibcirculation/lib/bibcirculation_templates.py:14713
#: modules/bibcirculation/lib/bibcirculation_templates.py:15143
#: modules/bibcirculation/lib/bibcirculation_templates.py:15515
#: modules/bibcirculation/lib/bibcirculation_templates.py:15862
#: modules/bibcirculation/lib/bibcirculation_templates.py:17909
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:447
#: modules/bibknowledge/lib/bibknowledge_templates.py:79
msgid "Name"
-msgstr "Nombre"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:389
#: modules/bibformat/lib/bibformat_templates.py:936
#: modules/bibformat/lib/bibformat_templates.py:1061
#: modules/webbasket/lib/webbasket_templates.py:1273
#: modules/websession/lib/websession_templates.py:1504
#: modules/websession/lib/websession_templates.py:1578
#: modules/websession/lib/websession_templates.py:1641
#: modules/websubmit/lib/websubmit_managedocfiles.py:387
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:3187
#: modules/bibcirculation/lib/bibcirculation_templates.py:6050
#: modules/bibcirculation/lib/bibcirculation_templates.py:7476
#: modules/bibcirculation/lib/bibcirculation_templates.py:7654
#: modules/bibcirculation/lib/bibcirculation_templates.py:7813
#: modules/bibcirculation/lib/bibcirculation_templates.py:8111
#: modules/bibcirculation/lib/bibcirculation_templates.py:8321
#: modules/bibcirculation/lib/bibcirculation_templates.py:8481
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:17958
#: modules/bibcirculation/lib/bibcirculation_templates.py:18044
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:451
#: modules/bibknowledge/lib/bibknowledge_templates.py:80
msgid "Description"
-msgstr "Descripción"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:390
msgid "Update Format Attributes"
-msgstr "Actualizar los atributos del formato"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:580
#: modules/bibformat/lib/bibformat_templates.py:596
#: modules/bibformat/lib/bibformat_templates.py:627
msgid "Show Documentation"
-msgstr "Mostrar la documentación"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:581
#: modules/bibformat/lib/bibformat_templates.py:597
#: modules/bibformat/lib/bibformat_templates.py:628
#: modules/bibformat/lib/bibformat_templates.py:679
msgid "Hide Documentation"
-msgstr "Esconder la documentación"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:588
#: modules/websubmit/lib/websubmit_templates.py:868
msgid "Your modifications will not be saved."
-msgstr "Sus modificaciones no serán guardadas."
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:938
#: modules/bibformat/lib/bibformat_templates.py:1062
#: modules/bibupload/lib/batchuploader_templates.py:253
#: modules/bibupload/lib/batchuploader_templates.py:295
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/websubmit/lib/websubmit_templates.py:1491
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:536
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:633
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:771
#: modules/bibcirculation/lib/bibcirculation_templates.py:2849
#: modules/bibcirculation/lib/bibcirculation_templates.py:2957
#: modules/bibcirculation/lib/bibcirculation_templates.py:3160
#: modules/bibcirculation/lib/bibcirculation_templates.py:7451
#: modules/bibcirculation/lib/bibcirculation_templates.py:7686
#: modules/bibcirculation/lib/bibcirculation_templates.py:7815
#: modules/bibcirculation/lib/bibcirculation_templates.py:8105
#: modules/bibcirculation/lib/bibcirculation_templates.py:8352
#: modules/bibcirculation/lib/bibcirculation_templates.py:8483
#: modules/bibcirculation/lib/bibcirculation_templates.py:9331
#: modules/bibcirculation/lib/bibcirculation_templates.py:10597
#: modules/bibcirculation/lib/bibcirculation_templates.py:10817
#: modules/bibcirculation/lib/bibcirculation_templates.py:10914
#: modules/bibcirculation/lib/bibcirculation_templates.py:11516
#: modules/bibcirculation/lib/bibcirculation_templates.py:11637
#: modules/bibcirculation/lib/bibcirculation_templates.py:12228
#: modules/bibcirculation/lib/bibcirculation_templates.py:13012
#: modules/bibcirculation/lib/bibcirculation_templates.py:13678
#: modules/bibcirculation/lib/bibcirculation_templates.py:13909
#: modules/bibcirculation/lib/bibcirculation_templates.py:15589
#: modules/bibcirculation/lib/bibcirculation_templates.py:17933
#: modules/bibcirculation/lib/bibcirculation_templates.py:18037
msgid "Status"
-msgstr "Estado"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:939
#: modules/bibformat/lib/bibformat_templates.py:1063
msgid "Last Modification Date"
-msgstr "Fecha de la última modificación"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:940
#: modules/bibformat/lib/bibformat_templates.py:1064
#: modules/webalert/lib/webalert_templates.py:327
#: modules/webalert/lib/webalert_templates.py:464
#: modules/webmessage/lib/webmessage_templates.py:89
#: modules/websubmit/lib/websubmit_templates.py:1490
#: modules/bibknowledge/lib/bibknowledge_templates.py:81
msgid "Action"
-msgstr "Acción"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:942
#: modules/bibformat/lib/bibformat_templates.py:1066
#: modules/bibformat/lib/bibformat_templates.py:1551
#: modules/bibformat/web/admin/bibformatadmin.py:104
#: modules/bibformat/web/admin/bibformatadmin.py:167
#: modules/bibformat/web/admin/bibformatadmin.py:240
#: modules/bibformat/web/admin/bibformatadmin.py:287
#: modules/bibformat/web/admin/bibformatadmin.py:384
#: modules/bibformat/web/admin/bibformatadmin.py:999
msgid "Manage Output Formats"
-msgstr "Gestionar los formatos de salida"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:943
#: modules/bibformat/lib/bibformat_templates.py:1067
#: modules/bibformat/lib/bibformat_templates.py:1552
#: modules/bibformat/web/admin/bibformatadmin.py:465
#: modules/bibformat/web/admin/bibformatadmin.py:500
#: modules/bibformat/web/admin/bibformatadmin.py:573
#: modules/bibformat/web/admin/bibformatadmin.py:620
#: modules/bibformat/web/admin/bibformatadmin.py:694
#: modules/bibformat/web/admin/bibformatadmin.py:1020
msgid "Manage Format Templates"
-msgstr "Gestionar las plantillas de los formatos"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:944
#: modules/bibformat/lib/bibformat_templates.py:1068
#: modules/bibformat/lib/bibformat_templates.py:1553
#: modules/bibformat/web/admin/bibformatadmin.py:887
#: modules/bibformat/web/admin/bibformatadmin.py:911
#: modules/bibformat/web/admin/bibformatadmin.py:946
#: modules/bibformat/web/admin/bibformatadmin.py:1038
msgid "Format Elements Documentation"
-msgstr "Documentación de los elementos de los formatos"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:996
#: modules/bibformat/web/admin/bibformatadmin.py:405
#: modules/bibformat/web/admin/bibformatadmin.py:407
#: modules/bibformat/web/admin/bibformatadmin.py:714
#: modules/bibformat/web/admin/bibformatadmin.py:716
#: modules/webbasket/lib/webbasket_templates.py:2894
#: modules/webmessage/lib/webmessage_templates.py:115
#: modules/webjournal/lib/webjournaladminlib.py:116
#: modules/webjournal/lib/webjournaladminlib.py:119
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:175
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:324
#: modules/bibcirculation/lib/bibcirculation_templates.py:1220
#: modules/bibcirculation/lib/bibcirculation_templates.py:15932
#: modules/bibcirculation/lib/bibcirculation_templates.py:18078
#: modules/bibcheck/web/admin/bibcheckadmin.py:137
#: modules/bibknowledge/lib/bibknowledge_templates.py:102
#: modules/bibknowledge/lib/bibknowledge_templates.py:529
#: modules/bibknowledge/lib/bibknowledgeadmin.py:747
#: modules/bibknowledge/lib/bibknowledgeadmin.py:749
msgid "Delete"
-msgstr "Suprimir"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1019
msgid "Add New Format Template"
-msgstr "Añadir una nueva plantilla de formato"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1020
msgid "Check Format Templates Extensively"
-msgstr "Comprobar las plantillas de formato extensivamente"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1059
msgid "Code"
-msgstr "Código"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1142
msgid "Add New Output Format"
-msgstr "Añadir un nuevo formato de salida"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1180
msgid "menu"
-msgstr "menú"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1181
#: modules/bibformat/lib/bibformat_templates.py:1381
#: modules/bibformat/lib/bibformat_templates.py:1487
msgid "Close Output Format"
-msgstr "Cerrar formato de salida"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1182
#: modules/bibformat/lib/bibformat_templates.py:1382
#: modules/bibformat/lib/bibformat_templates.py:1488
msgid "Rules"
-msgstr "Reglas"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1183
#: modules/bibformat/lib/bibformat_templates.py:1383
#: modules/bibformat/lib/bibformat_templates.py:1489
msgid "Modify Output Format Attributes"
-msgstr "Modificar los atributos del formato de salida"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1282
#: modules/bibformat/lib/bibformatadminlib.py:565
msgid "Remove Rule"
-msgstr "Eliminar regla"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1335
#: modules/bibformat/lib/bibformatadminlib.py:572
msgid "Add New Rule"
-msgstr "Añadir una nueva regla"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1336
#: modules/bibformat/lib/bibformatadminlib.py:569
#: modules/bibcheck/web/admin/bibcheckadmin.py:239
msgid "Save Changes"
-msgstr "Guardar cambios"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1910
msgid "No problem found with format"
-msgstr "No se ha encontrado ningún problema con el formato"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1912
msgid "An error has been found"
-msgstr "Se ha encontrado un error"
+msgstr ""
#: modules/bibformat/lib/bibformat_templates.py:1914
msgid "The following errors have been found"
-msgstr "Se han encontrado los siguentes errores"
+msgstr ""
#: modules/bibformat/lib/bibformatadminlib.py:61
#: modules/bibformat/web/admin/bibformatadmin.py:72
msgid "BibFormat Admin"
-msgstr "Administración de BibFormat"
+msgstr ""
#: modules/bibformat/lib/bibformatadminlib.py:357
#: modules/bibformat/lib/bibformatadminlib.py:396
#: modules/bibformat/lib/bibformatadminlib.py:398
msgid "Test with record:"
-msgstr "Probarlo con el registro:"
+msgstr ""
#: modules/bibformat/lib/bibformatadminlib.py:358
msgid "Enter a search query here."
-msgstr "Introduzca una búsqueda aquí."
+msgstr ""
#: modules/bibformat/lib/elements/bfe_aid_authors.py:287
#: modules/bibformat/lib/elements/bfe_authors.py:127
msgid "Hide"
-msgstr "Esconder"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_aid_authors.py:288
#: modules/bibformat/lib/elements/bfe_authors.py:128
#, python-format
msgid "Show all %i authors"
-msgstr "Mostrar todos los %i autores"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:78
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:72
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:75
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:113
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:116
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:133
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:135
msgid "Download fulltext"
-msgstr "Descargar el texto completo"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:87
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:61
msgid "additional files"
-msgstr "archivos adicionales"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:130
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:120
#, python-format
msgid "%(x_sitename)s link"
-msgstr "enlace %(x_sitename)s"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:130
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:120
#, python-format
msgid "%(x_sitename)s links"
-msgstr "enlaces %(x_sitename)s"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:139
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:138
msgid "external link"
-msgstr "enlace externo"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:139
#: modules/bibformat/lib/elements/bfe_fulltext_mini.py:138
msgid "external links"
-msgstr "Enlaces externos"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_fulltext.py:234
#: modules/bibformat/lib/elements/bfe_fulltext.py:238
#: modules/bibformat/lib/elements/bfe_fulltext.py:289
msgid "Fulltext"
-msgstr "Texto completo"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_edit_files.py:50
msgid "Manage Files of This Record"
-msgstr "Gestionar los ficheros de este registro"
+msgstr ""
#: modules/bibformat/lib/elements/bfe_edit_record.py:46
msgid "Edit This Record"
-msgstr "Edite este registro"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:182
#: modules/bibformat/web/admin/bibformatadmin.py:252
#: modules/bibformat/web/admin/bibformatadmin.py:299
#: modules/bibformat/web/admin/bibformatadmin.py:1002
msgid "Restricted Output Format"
-msgstr "Formato de salida restringido"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:208
#: modules/bibformat/web/admin/bibformatadmin.py:534
#: modules/bibknowledge/lib/bibknowledgeadmin.py:563
msgid "Ok"
-msgstr "De acuerdo"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:210
#, python-format
msgid "Output Format %s Rules"
-msgstr "Reglas del formato de salida %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:265
#, python-format
msgid "Output Format %s Attributes"
-msgstr "Atributos del formato de salida %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:312
#, python-format
msgid "Output Format %s Dependencies"
-msgstr "Dependencias del formato de salida %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:384
msgid "Delete Output Format"
-msgstr "Suprimir el formato de salida"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:405
#: modules/bibformat/web/admin/bibformatadmin.py:714
#: modules/webbasket/lib/webbasket_templates.py:1451
#: modules/webbasket/lib/webbasket_templates.py:1519
#: modules/webbasket/lib/webbasket_templates.py:1626
#: modules/webbasket/lib/webbasket_templates.py:1681
#: modules/webbasket/lib/webbasket_templates.py:1771
#: modules/webbasket/lib/webbasket_templates.py:2839
#: modules/webbasket/lib/webbasket_templates.py:3642
#: modules/websession/lib/websession_templates.py:1781
#: modules/websession/lib/websession_templates.py:1889
#: modules/websession/lib/websession_templates.py:2091
#: modules/websession/lib/websession_templates.py:2174
#: modules/websubmit/lib/websubmit_managedocfiles.py:877
#: modules/websubmit/lib/websubmit_templates.py:2535
#: modules/websubmit/lib/websubmit_templates.py:2598
#: modules/websubmit/lib/websubmit_templates.py:2618
#: modules/websubmit/web/publiline.py:1228
#: modules/webjournal/lib/webjournaladminlib.py:117
#: modules/webjournal/lib/webjournaladminlib.py:230
#: modules/bibedit/lib/bibeditmulti_templates.py:564
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:261
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:403
#: modules/bibcirculation/lib/bibcirculation_templates.py:784
#: modules/bibcirculation/lib/bibcirculation_templates.py:1411
#: modules/bibcirculation/lib/bibcirculation_templates.py:1556
#: modules/bibcirculation/lib/bibcirculation_templates.py:4705
#: modules/bibknowledge/lib/bibknowledgeadmin.py:747
msgid "Cancel"
-msgstr "Cancelar"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:434
msgid "Cannot create output format"
-msgstr "No ha sido posible crear el formato de salida"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:513
#: modules/bibformat/web/admin/bibformatadmin.py:587
#: modules/bibformat/web/admin/bibformatadmin.py:1023
msgid "Restricted Format Template"
-msgstr "Plantilla de formato restringido"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:539
#, python-format
msgid "Format Template %s"
-msgstr "Plantilla de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:598
#, python-format
msgid "Format Template %s Attributes"
-msgstr "Atributos de la plantilla de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:632
#, python-format
msgid "Format Template %s Dependencies"
-msgstr "Dependencias de la plantilla de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:694
msgid "Delete Format Template"
-msgstr "Suprimir la plantilla de formato"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:920
#, python-format
msgid "Format Element %s Dependencies"
-msgstr "Dependencias del elemento de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:953
#, python-format
msgid "Test Format Element %s"
-msgstr "Probar el elemento de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:1016
#, python-format
msgid "Validation of Output Format %s"
-msgstr "Validación del formato de salida %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:1034
#, python-format
msgid "Validation of Format Template %s"
-msgstr "Validación de la plantilla de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:1042
msgid "Restricted Format Element"
-msgstr "Elemento de formato restringido"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:1050
#, python-format
msgid "Validation of Format Element %s"
-msgstr "Validación del elemento de formato %s"
+msgstr ""
#: modules/bibformat/web/admin/bibformatadmin.py:1053
msgid "Format Validation"
-msgstr "Validación del formato"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:53
#: modules/bibharvest/lib/bibharvest_templates.py:70
msgid "See Guide"
-msgstr "Véase la guía"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:81
msgid "OAI sources currently present in the database"
-msgstr "Servidores OAI que están actualmente en la base de datos"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:82
msgid "No OAI sources currently present in the database"
-msgstr "No hay ningún servidor OAI en la base de datos"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:92
msgid "Next oaiharvest task"
-msgstr "Siguiente tarea oaiharvest"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:93
msgid "scheduled time:"
-msgstr "previsto para"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:94
msgid "current status:"
-msgstr "estado actual:"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:95
msgid "No oaiharvest task currently scheduled."
-msgstr "No hay ninguna tarea oaiharvest programada"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:202
msgid "successfully validated"
-msgstr "validado correctamente"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:203
msgid "does not seem to be a OAI-compliant baseURL"
-msgstr "no parece una baseURL que cumpla con OAI"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:284
msgid "View next entries..."
-msgstr "Ver las próximas entradas..."
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:341
msgid "previous month"
-msgstr "mes anterior"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:348
msgid "next month"
-msgstr "mes siguiente"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:443
msgid "main Page"
-msgstr "Páginas principal"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:450
#: modules/bibharvest/lib/oai_harvest_admin.py:94
msgid "edit"
-msgstr "editar"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:454
#: modules/websubmit/lib/websubmit_managedocfiles.py:1037
#: modules/bibharvest/lib/oai_harvest_admin.py:98
msgid "delete"
-msgstr "suprimir"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:458
#: modules/bibharvest/lib/oai_harvest_admin.py:102
msgid "test"
-msgstr "comprobar"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:462
#: modules/bibharvest/lib/oai_harvest_admin.py:106
msgid "history"
-msgstr "historia"
+msgstr ""
#: modules/bibharvest/lib/bibharvest_templates.py:466
#: modules/bibharvest/lib/oai_harvest_admin.py:110
msgid "harvest"
-msgstr "recolectar"
+msgstr ""
#: modules/bibrank/lib/bibrank_citation_grapher.py:137
msgid "Citation history:"
-msgstr "Histórico de citas:"
+msgstr ""
#: modules/bibrank/lib/bibrank_downloads_grapher.py:85
msgid "Download history:"
-msgstr "Histórico de descargas:"
+msgstr ""
#: modules/bibrank/lib/bibrank_downloads_grapher.py:107
msgid "Download user distribution:"
-msgstr "Distribución de las descargas"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:138
msgid "Warning: Please, select a valid time"
-msgstr "Atención: seleccione un tiempo válido"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:142
msgid "Warning: Please, select a valid file"
-msgstr "Atención: seleccione un fichero válido"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:146
msgid "Warning: The date format is not correct"
-msgstr "Atención: el formato de la fecha no es correcto"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:150
msgid "Warning: Please, select a valid date"
-msgstr "Atención: seleccione una fecha válida"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:185
msgid "Select file to upload"
-msgstr "Seleccione el fichero a subir"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:186
msgid "File type"
-msgstr "Tipo de fichero"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:187
#: modules/bibupload/lib/batchuploader_templates.py:395
msgid "Upload mode"
-msgstr "Modo de carga"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:188
#: modules/bibupload/lib/batchuploader_templates.py:396
msgid "Upload later? then select:"
-msgstr "Prefiere subirlo después? En este caso seleccione:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:189
#: modules/bibupload/lib/batchuploader_templates.py:397
#: modules/webmessage/lib/webmessage_templates.py:88
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:535
msgid "Date"
-msgstr "Fecha"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:190
#: modules/bibupload/lib/batchuploader_templates.py:398
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/webstyle/lib/webstyle_templates.py:670
msgid "Time"
-msgstr "Hora"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:191
#: modules/bibupload/lib/batchuploader_templates.py:393
#: modules/bibupload/lib/batchuploader_templates.py:399
#: modules/websession/lib/websession_templates.py:161
#: modules/websession/lib/websession_templates.py:164
#: modules/websession/lib/websession_templates.py:1038
msgid "Example"
-msgstr "Ejemplo"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:192
#: modules/bibupload/lib/batchuploader_templates.py:400
#, python-format
msgid "All fields with %(x_fmt_open)s*%(x_fmt_close)s are mandatory"
-msgstr "Todos los campos con %(x_fmt_open)s*%(x_fmt_close)s son obligatorios"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:194
#: modules/bibupload/lib/batchuploader_templates.py:391
msgid "Upload priority"
-msgstr "Prioridad de los envíos"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:214
#, python-format
msgid ""
"Your file has been successfully queued. You can check your "
"%(x_url1_open)supload history%(x_url1_close)s or %(x_url2_open)ssubmit "
"another file%(x_url2_close)s"
msgstr ""
-"Su fichero está en cola. Ahora puede comprobar su %(x_url1_open)shistórico "
-"de cargas%(x_url1_close)s o %(x_url2_open)senviar otro fichero"
-"%(x_url2_close)s"
#: modules/bibupload/lib/batchuploader_templates.py:225
#, python-format
msgid ""
"The MARCXML submitted is not valid. Please, review the file and "
"%(x_url2_open)sresubmit it%(x_url2_close)s"
msgstr ""
-"El fichero MARCXML enviado no es válido. Por favor revíselo y "
-"%(x_url2_open)svuélvalo a enviar%(x_url2_close)s"
#: modules/bibupload/lib/batchuploader_templates.py:237
msgid "No metadata files have been uploaded yet."
-msgstr "Todavía no se ha subido ningún fichero con metadatos."
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:250
#: modules/bibupload/lib/batchuploader_templates.py:292
msgid "Submit time"
-msgstr "Enviado el"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:251
#: modules/bibupload/lib/batchuploader_templates.py:293
msgid "File name"
-msgstr "Nombre del fichero"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:252
#: modules/bibupload/lib/batchuploader_templates.py:294
msgid "Execution time"
-msgstr "Tiempo de ejecución"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:279
msgid "No document files have been uploaded yet."
-msgstr "Todavía no se ha subido ningún documento."
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:334
#: modules/bibupload/lib/batchuploader_webinterface.py:73
#: modules/bibupload/lib/batchuploader_webinterface.py:243
msgid "Metadata batch upload"
-msgstr "Carga masiva de metadatos"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:337
#: modules/bibupload/lib/batchuploader_webinterface.py:96
#: modules/bibupload/lib/batchuploader_webinterface.py:151
msgid "Document batch upload"
-msgstr "Carga masiva de documentos"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:340
#: modules/bibupload/lib/batchuploader_webinterface.py:267
msgid "Upload history"
-msgstr "Histórico de envíos:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:343
msgid "Daemon monitor"
-msgstr "Seguimiento de tareas"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:392
msgid "Input directory"
-msgstr "Directorio de entrada"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:394
msgid "Filename matching"
-msgstr "Ficheros del tipo"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:409
#, python-format
msgid "<b>%s documents</b> have been found."
-msgstr "Se han encontrado <b>%s documentos</b>."
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:411
msgid "The following files have been successfully queued:"
-msgstr "Ficheros que están correctamente en la cola:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:416
msgid "The following errors have occurred:"
-msgstr "Se han encontrado los siguentes errores:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:423
msgid ""
"Some files could not be moved to DONE folder. Please remove them manually."
msgstr ""
-"Algunos ficheros no se han podido pasar al directorio DONE. Bórrelos "
-"manualmente."
#: modules/bibupload/lib/batchuploader_templates.py:425
msgid "All uploaded files were moved to DONE folder."
-msgstr "Todos los ficheros se han pasado al directorio DONE."
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:435
#, python-format
msgid ""
"Using %(x_fmt_open)sweb interface upload%(x_fmt_close)s, actions are "
"executed a single time."
msgstr ""
-"Usando la %(x_fmt_open)sinterfaz web de subir ficheros%(x_fmt_close)s, las "
-"acciones sólo se ejecutan una vez."
#: modules/bibupload/lib/batchuploader_templates.py:437
#, python-format
msgid ""
"Check the %(x_url_open)sBatch Uploader daemon help page%(x_url_close)s for "
"executing these actions periodically."
msgstr ""
-"Consulte la %(x_url_open)spágina de ayuda del Batch Uploader daemon"
-"%(x_url_close)s para ejecutar estas acciones periódicamente."
#: modules/bibupload/lib/batchuploader_templates.py:442
msgid "Metadata folders"
-msgstr "Carpetas de metadatos"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:464
#: modules/bibcirculation/lib/bibcirculation_templates.py:2301
#: modules/bibcirculation/lib/bibcirculation_templates.py:2414
#: modules/bibcirculation/lib/bibcirculation_templates.py:2658
#: modules/bibcirculation/lib/bibcirculation_templates.py:4390
#: modules/bibcirculation/lib/bibcirculation_templates.py:5551
#: modules/bibcirculation/lib/bibcirculation_templates.py:6453
#: modules/bibcirculation/lib/bibcirculation_templates.py:8976
#: modules/bibcirculation/lib/bibcirculation_templates.py:9109
#: modules/bibcirculation/lib/bibcirculation_templates.py:9878
#: modules/bibcirculation/lib/bibcirculation_templates.py:10356
#: modules/bibcirculation/lib/bibcirculation_templates.py:11101
#: modules/bibcirculation/lib/bibcirculation_templates.py:11517
#: modules/bibcirculation/lib/bibcirculation_templates.py:11638
#: modules/bibcirculation/lib/bibcirculation_templates.py:15342
msgid "ID"
-msgstr "Id"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:464
msgid "Progress"
-msgstr "Progreso"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:466
msgid "Last BibSched tasks:"
-msgstr "Últimas tareas en el BibSched:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_templates.py:475
msgid "Next scheduled BibSched run:"
-msgstr "Siguiente tarea en BibSched:"
+msgstr ""
#: modules/bibupload/lib/batchuploader_webinterface.py:154
msgid "Document batch upload result"
-msgstr "Resultado de la subida masiva"
+msgstr ""
#: modules/bibupload/lib/batchuploader_webinterface.py:238
msgid "Invalid MARCXML"
-msgstr "MARCXML no válido"
+msgstr ""
#: modules/bibupload/lib/batchuploader_webinterface.py:241
msgid "Upload successful"
-msgstr "Subida correcta"
+msgstr ""
#: modules/bibupload/lib/batchuploader_webinterface.py:291
msgid "Batch Uploader: Daemon monitor"
-msgstr "Subida masiva: seguimiento del proceso"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:82 modules/miscutil/lib/dateutils.py:109
#: modules/webbasket/lib/webbasket.py:208
#: modules/webbasket/lib/webbasket.py:870
#: modules/webbasket/lib/webbasket.py:965
#: modules/websession/lib/webuser.py:301
#: modules/webstyle/lib/webstyle_templates.py:579
msgid "N/A"
-msgstr "N/D"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:172
msgid "Sun"
-msgstr "Dom"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:173
msgid "Mon"
-msgstr "Lun"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:174
msgid "Tue"
-msgstr "Mar"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:175
msgid "Wed"
-msgstr "Mié"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:176
msgid "Thu"
-msgstr "Jue"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:177
msgid "Fri"
-msgstr "Vie"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:178
msgid "Sat"
-msgstr "Sáb"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:180
msgid "Sunday"
-msgstr "Domingo"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:181
msgid "Monday"
-msgstr "Lunes"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:182
msgid "Tuesday"
-msgstr "Martes"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:183
msgid "Wednesday"
-msgstr "Miércoles"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:184
msgid "Thursday"
-msgstr "Jueves"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:185
msgid "Friday"
-msgstr "Viernes"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:186
msgid "Saturday"
-msgstr "Sábado"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:200 modules/miscutil/lib/dateutils.py:214
msgid "Month"
-msgstr "Mes"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:201
msgid "Jan"
-msgstr "Ene"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:202
msgid "Feb"
-msgstr "Feb"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:203
msgid "Mar"
-msgstr "Mar"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:204
msgid "Apr"
-msgstr "Abr"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:205 modules/miscutil/lib/dateutils.py:219
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "May"
-msgstr "May"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:206
msgid "Jun"
-msgstr "Jun"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:207
msgid "Jul"
-msgstr "Jul"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:208
msgid "Aug"
-msgstr "Ago"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:209
msgid "Sep"
-msgstr "Sep"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:210
msgid "Oct"
-msgstr "Oct"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:211
msgid "Nov"
-msgstr "Nov"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:212
msgid "Dec"
-msgstr "Dic"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:215
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "January"
-msgstr "Enero"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:216
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "February"
-msgstr "Febrero"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:217
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "March"
-msgstr "Marzo"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:218
#: modules/websearch/lib/search_engine.py:980
#: modules/websearch/lib/websearch_templates.py:1189
msgid "April"
-msgstr "Abril"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:220
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "June"
-msgstr "Junio"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:221
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "July"
-msgstr "Julio"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:222
#: modules/websearch/lib/search_engine.py:981
#: modules/websearch/lib/websearch_templates.py:1190
msgid "August"
-msgstr "Agosto"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:223
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "September"
-msgstr "Septiembre"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:224
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "October"
-msgstr "Octubre"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:225
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "November"
-msgstr "Noviembre"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:226
#: modules/websearch/lib/search_engine.py:982
#: modules/websearch/lib/websearch_templates.py:1191
msgid "December"
-msgstr "Diciembre"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:244
msgid "Day"
-msgstr "Día"
+msgstr ""
#: modules/miscutil/lib/dateutils.py:295
#: modules/bibcirculation/lib/bibcirculation_utils.py:432
#: modules/bibcirculation/lib/bibcirculation_templates.py:1757
#: modules/bibcirculation/lib/bibcirculation_templates.py:2743
#: modules/bibcirculation/lib/bibcirculation_templates.py:3096
#: modules/bibcirculation/lib/bibcirculation_templates.py:7154
#: modules/bibcirculation/lib/bibcirculation_templates.py:7431
#: modules/bibcirculation/lib/bibcirculation_templates.py:8083
#: modules/bibcirculation/lib/bibcirculation_templates.py:8240
#: modules/bibcirculation/lib/bibcirculation_templates.py:9597
#: modules/bibcirculation/lib/bibcirculation_templates.py:9836
#: modules/bibcirculation/lib/bibcirculation_templates.py:10072
#: modules/bibcirculation/lib/bibcirculation_templates.py:10315
#: modules/bibcirculation/lib/bibcirculation_templates.py:10525
#: modules/bibcirculation/lib/bibcirculation_templates.py:10750
#: modules/bibcirculation/lib/bibcirculation_templates.py:11209
#: modules/bibcirculation/lib/bibcirculation_templates.py:11353
#: modules/bibcirculation/lib/bibcirculation_templates.py:11858
#: modules/bibcirculation/lib/bibcirculation_templates.py:11954
#: modules/bibcirculation/lib/bibcirculation_templates.py:12071
#: modules/bibcirculation/lib/bibcirculation_templates.py:12155
#: modules/bibcirculation/lib/bibcirculation_templates.py:12835
#: modules/bibcirculation/lib/bibcirculation_templates.py:12938
#: modules/bibcirculation/lib/bibcirculation_templates.py:13608
#: modules/bibcirculation/lib/bibcirculation_templates.py:13869
#: modules/bibcirculation/lib/bibcirculation_templates.py:14924
#: modules/bibcirculation/lib/bibcirculation_templates.py:15146
#: modules/bibcirculation/lib/bibcirculation_templates.py:15422
#: modules/bibcirculation/lib/bibcirculation_templates.py:16843
#: modules/bibcirculation/lib/bibcirculation_templates.py:17031
#: modules/bibcirculation/lib/bibcirculation_templates.py:17316
#: modules/bibcirculation/lib/bibcirculation_templates.py:17514
#: modules/bibcirculation/lib/bibcirculation_templates.py:17913
msgid "Year"
-msgstr "Año"
+msgstr ""
#: modules/miscutil/lib/errorlib_webinterface.py:64
#: modules/miscutil/lib/errorlib_webinterface.py:69
#: modules/miscutil/lib/errorlib_webinterface.py:74
#: modules/miscutil/lib/errorlib_webinterface.py:79
msgid "Sorry"
-msgstr "Lo sentimos"
+msgstr ""
#: modules/miscutil/lib/errorlib_webinterface.py:65
#: modules/miscutil/lib/errorlib_webinterface.py:70
#: modules/miscutil/lib/errorlib_webinterface.py:75
#: modules/miscutil/lib/errorlib_webinterface.py:80
#, python-format
msgid "Cannot send error request, %s parameter missing."
-msgstr "No ha sido posible enviar la petición de error, falta el parámetro %s."
+msgstr ""
#: modules/miscutil/lib/errorlib_webinterface.py:98
msgid "The error report has been sent."
-msgstr "Se ha enviado el informe de error."
+msgstr ""
#: modules/miscutil/lib/errorlib_webinterface.py:99
msgid "Many thanks for helping us to improve the service."
-msgstr "Muchas gracias por ayudarnos a mejorar el servicio."
+msgstr ""
#: modules/miscutil/lib/errorlib_webinterface.py:101
msgid "Use the back button of your browser to return to the previous page."
msgstr ""
-"Use el botón de retroceso de su navegador para volver a la página anterior."
#: modules/miscutil/lib/errorlib_webinterface.py:103
msgid "Thank you!"
-msgstr "¡Gracias!"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:491
msgid "journal"
-msgstr "revista"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:493
msgid "record ID"
-msgstr "el número de registro"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:506
msgid "word similarity"
-msgstr "similitud de palabras"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:507
msgid "journal impact factor"
-msgstr "factor de impacto de la revista"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:508
msgid "times cited"
-msgstr "veces citado"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:509
msgid "time-decay cite count"
-msgstr "contador de citas en el tiempo"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:510
msgid "all-time-best cite rank"
-msgstr "clasificación por máximo número de citas"
+msgstr ""
#: modules/miscutil/lib/inveniocfg.py:511
msgid "time-decay cite rank"
-msgstr "clasificación por citas en el tiempo"
+msgstr ""
#: modules/miscutil/lib/mailutils.py:210 modules/miscutil/lib/mailutils.py:223
#: modules/webcomment/lib/webcomment_templates.py:2107
msgid "Hello:"
-msgstr "Hola:"
+msgstr ""
#: modules/miscutil/lib/mailutils.py:241 modules/miscutil/lib/mailutils.py:261
msgid "Best regards"
-msgstr "Cordialmente"
+msgstr ""
#: modules/miscutil/lib/mailutils.py:243 modules/miscutil/lib/mailutils.py:263
msgid "Need human intervention? Contact"
-msgstr "¿Necesita la intervención del administrador? Póngase en contacto"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:300
#: modules/websession/lib/websession_templates.py:1090
#: modules/websession/lib/webuser.py:896 modules/websession/lib/webuser.py:905
#: modules/websession/lib/webuser.py:906
msgid "Run Record Editor"
-msgstr "Ejecutar el editor de registros"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:301
#: modules/websession/lib/websession_templates.py:1092
msgid "Run Multi-Record Editor"
-msgstr "Ejecutar el editor de múltiples registros"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:302
#: modules/websession/lib/websession_templates.py:1123
#: modules/websession/lib/webuser.py:897 modules/websession/lib/webuser.py:907
#: modules/websession/lib/webuser.py:908
msgid "Run Document File Manager"
-msgstr "Ejecutar el gestor de ficheros del documento"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:303
#: modules/websession/lib/websession_templates.py:1096
msgid "Run Record Merger"
-msgstr "Unificar registros"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:304
msgid "Run BibSword client"
-msgstr "Ejecutar el cliente BibSword"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:305
#: modules/websession/lib/websession_templates.py:1103
msgid "Configure BibKnowledge"
-msgstr "Configurar BibKnowledge"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:306
#: modules/websession/lib/websession_templates.py:1102
msgid "Configure BibFormat"
-msgstr "Configurar BibFormat"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:307
#: modules/websession/lib/websession_templates.py:1105
msgid "Configure OAI Harvest"
-msgstr "Configurar OAI Harvest"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:308
#: modules/websession/lib/websession_templates.py:1107
msgid "Configure OAI Repository"
-msgstr "Configurar OAI Repository"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:309
#: modules/websession/lib/websession_templates.py:1109
msgid "Configure BibIndex"
-msgstr "Configurar BibIndex"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:310
#: modules/websession/lib/websession_templates.py:1111
msgid "Configure BibRank"
-msgstr "Configurar BibRank"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:311
#: modules/websession/lib/websession_templates.py:1113
msgid "Configure WebAccess"
-msgstr "Configurar WebAccess"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:312
#: modules/websession/lib/websession_templates.py:1115
msgid "Configure WebComment"
-msgstr "Configurar WebComment"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:313
#: modules/websession/lib/websession_templates.py:1119
msgid "Configure WebSearch"
-msgstr "Configurar WebSearch"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:314
#: modules/websession/lib/websession_templates.py:1121
msgid "Configure WebSubmit"
-msgstr "Configurar WebSumbit"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:315
#: modules/websession/lib/websession_templates.py:1117
msgid "Configure WebJournal"
-msgstr "Configurar WebJournal"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:316
#: modules/websession/lib/websession_templates.py:1094
msgid "Run BibCirculation"
-msgstr "Ejecutar BibCirculation"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:317
#: modules/websession/lib/websession_templates.py:1100
msgid "Run Batch Uploader"
-msgstr "Ejecutar el gestor de cargas masivas"
+msgstr ""
#: modules/webaccess/lib/access_control_config.py:318
msgid "Run Person/Author Manager"
-msgstr "Ejecutar el gestor de personas/autores"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3703
#, python-format
msgid "Your account on '%s' has been activated"
-msgstr "Su cuenta en «%s» ha sido activada."
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3704
#, python-format
msgid "Your account earlier created on '%s' has been activated:"
-msgstr "Su cuenta creada previamente en «%s» ha sido activada:"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3706
#: modules/webaccess/lib/webaccessadmin_lib.py:3719
#: modules/webaccess/lib/webaccessadmin_lib.py:3745
msgid "Username/Email:"
-msgstr "Nombre de usuario"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3707
#: modules/webaccess/lib/webaccessadmin_lib.py:3720
msgid "Password:"
-msgstr "Contraseña"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3717
#, python-format
msgid "Account created on '%s'"
-msgstr "Cuenta creada en «%s»"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3718
#, python-format
msgid "An account has been created for you on '%s':"
-msgstr "Se ha creado su cuenta en «%s»:"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3730
#, python-format
msgid "Account rejected on '%s'"
-msgstr "Cuenta rechazada en «%s»"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3731
#, python-format
msgid "Your request for an account has been rejected on '%s':"
-msgstr "Su petición de una cuenta en «%s» ha sido rechazada:"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3733
#, python-format
msgid "Username/Email: %s"
-msgstr "Nombre de usuario: %s"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3743
#, python-format
msgid "Account deleted on '%s'"
-msgstr "Cuenta en «%s» eliminada"
+msgstr ""
#: modules/webaccess/lib/webaccessadmin_lib.py:3744
#, python-format
msgid "Your account on '%s' has been deleted:"
-msgstr "Su cuenta en «%s» ha sido eliminada:"
+msgstr ""
#: modules/webalert/lib/htmlparser.py:186
#: modules/webbasket/lib/webbasket_templates.py:2373
#: modules/webbasket/lib/webbasket_templates.py:3249
#: modules/websearch/lib/websearch_templates.py:1610
#: modules/websearch/lib/websearch_templates.py:3330
#: modules/websearch/lib/websearch_templates.py:3336
#: modules/websearch/lib/websearch_templates.py:3341
msgid "Detailed record"
-msgstr "Registro completo"
+msgstr ""
#: modules/webalert/lib/htmlparser.py:187
#: modules/websearch/lib/websearch_templates.py:1613
#: modules/websearch/lib/websearch_templates.py:3348
#: modules/webstyle/lib/webstyle_templates.py:845
msgid "Similar records"
-msgstr "Registros similares"
+msgstr ""
#: modules/webalert/lib/htmlparser.py:188
msgid "Cited by"
-msgstr "Citado por"
+msgstr ""
#: modules/webalert/lib/webalert.py:54
#, python-format
msgid "You already have an alert named %s."
-msgstr "Ya tiene una alerta con el nombre de %s."
+msgstr ""
-# En femenino porque es una fecha
#: modules/webalert/lib/webalert.py:111
msgid "unknown"
-msgstr "desconocida"
+msgstr ""
#: modules/webalert/lib/webalert.py:163 modules/webalert/lib/webalert.py:217
#: modules/webalert/lib/webalert.py:303 modules/webalert/lib/webalert.py:341
msgid "You do not have rights for this operation."
-msgstr "No tiene permisos para esta operación."
+msgstr ""
#: modules/webalert/lib/webalert.py:198
msgid "You already have an alert defined for the specified query and basket."
-msgstr "Ya tiene una alerta definida para esta búsqueda y cesta."
+msgstr ""
#: modules/webalert/lib/webalert.py:221 modules/webalert/lib/webalert.py:345
msgid "The alert name cannot be empty."
-msgstr "La alerta no puede estar vacía."
+msgstr ""
#: modules/webalert/lib/webalert.py:226
msgid "You are not the owner of this basket."
-msgstr "Usted no es el propietario de esta cesta"
+msgstr ""
#: modules/webalert/lib/webalert.py:237
#, python-format
msgid "The alert %s has been added to your profile."
-msgstr "La alerta %s ha sido añadida a su perfil"
+msgstr ""
#: modules/webalert/lib/webalert.py:376
#, python-format
msgid "The alert %s has been successfully updated."
-msgstr "La alerta %s se ha actualizado correctamente."
+msgstr ""
#: modules/webalert/lib/webalert.py:428
#, python-format
msgid ""
"You have made %(x_nb)s queries. A %(x_url_open)sdetailed list%(x_url_close)s "
"is available with a possibility to (a) view search results and (b) subscribe "
"to an automatic email alerting service for these queries."
msgstr ""
-"Ha hecho %(x_nb)s búsquedas. Hay una %(x_url_open)sdetailed_list"
-"%(x_url_close)s disponible con la posibilidad de (a) ver los resultados de "
-"la búsqueda y (b) subscribirse para recibir notificaciones por correo "
-"electrónico de estas búsquedas"
#: modules/webalert/lib/webalert_templates.py:75
msgid "Pattern"
-msgstr "Patrón"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:77
#: modules/bibedit/lib/bibeditmulti_templates.py:556
msgid "Field"
-msgstr "Campo"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:79
msgid "Pattern 1"
-msgstr "Patrón 1"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:81
msgid "Field 1"
-msgstr "Campo 1"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:83
msgid "Pattern 2"
-msgstr "Patrón 2"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:85
msgid "Field 2"
-msgstr "Campo 2"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:87
msgid "Pattern 3"
-msgstr "Patrón 3"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:89
msgid "Field 3"
-msgstr "Campo 3"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:91
msgid "Collections"
-msgstr "Colecciones"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:93
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:3179
#: modules/bibcirculation/lib/bibcirculation_templates.py:6049
#: modules/bibcirculation/lib/bibcirculation_templates.py:7469
#: modules/bibcirculation/lib/bibcirculation_templates.py:7620
#: modules/bibcirculation/lib/bibcirculation_templates.py:7812
#: modules/bibcirculation/lib/bibcirculation_templates.py:8110
#: modules/bibcirculation/lib/bibcirculation_templates.py:8298
#: modules/bibcirculation/lib/bibcirculation_templates.py:8480
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:17951
#: modules/bibcirculation/lib/bibcirculation_templates.py:18043
msgid "Collection"
-msgstr "Colección"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:114
msgid "You own the following alerts:"
-msgstr "Usted ha definido las siguientes alertas:"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:115
msgid "alert name"
-msgstr "nombre de la alerta"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:124
msgid "SHOW"
-msgstr "MOSTRAR"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:173
msgid ""
"This alert will notify you each time/only if a new item satisfies the "
"following query:"
msgstr ""
-"Esta alerta le notificará cada vez/sólo si un nuevo ítem satisface la "
-"siguiente búsqueda:"
#: modules/webalert/lib/webalert_templates.py:174
msgid "QUERY"
-msgstr "BÚSQUEDA"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:212
msgid "Alert identification name:"
-msgstr "Nombre de identificación de la alerta:"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:214
msgid "Search-checking frequency:"
-msgstr "Frecuencia de comprobación de la búsqueda:"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:218
#: modules/webalert/lib/webalert_templates.py:338
#: modules/bibharvest/lib/oai_harvest_admin.py:142
msgid "monthly"
-msgstr "mensual"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:219
#: modules/webalert/lib/webalert_templates.py:336
#: modules/bibharvest/lib/oai_harvest_admin.py:141
msgid "weekly"
-msgstr "semanal"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:220
#: modules/webalert/lib/webalert_templates.py:333
#: modules/bibharvest/lib/oai_harvest_admin.py:140
msgid "daily"
-msgstr "diaria"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:221
msgid "Send notification email?"
-msgstr "¿Enviar notificación por correo electrónico?"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:224
#: modules/webalert/lib/webalert_templates.py:341
msgid "yes"
-msgstr "sí"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:225
#: modules/webalert/lib/webalert_templates.py:343
msgid "no"
-msgstr "no"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:226
#, python-format
msgid "if %(x_fmt_open)sno%(x_fmt_close)s you must specify a basket"
-msgstr "si %(x_fmt_open)sno%(x_fmt_close)s debe especificar una cesta"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:228
msgid "Store results in basket?"
-msgstr "¿Guardar los resultados en una cesta?"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:249
msgid "SET ALERT"
-msgstr "ACTIVAR LA ALERTA"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:250
msgid "CLEAR DATA"
-msgstr "BORRAR DATOS"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:301
#, python-format
msgid ""
"Set a new alert from %(x_url1_open)syour searches%(x_url1_close)s, the "
"%(x_url2_open)spopular searches%(x_url2_close)s, or the input form."
msgstr ""
-"Definir una nueva alerta a partir de %(x_url1_open)ssus búsquedas"
-"%(x_url1_close)s, las %(x_url2_open)sbúsquedas más habituales"
-"%(x_url2_close)s, o el formulario de datos."
#: modules/webalert/lib/webalert_templates.py:319
#: modules/webcomment/lib/webcomment_templates.py:233
#: modules/webcomment/lib/webcomment_templates.py:664
#: modules/webcomment/lib/webcomment_templates.py:1965
#: modules/webcomment/lib/webcomment_templates.py:1989
#: modules/webcomment/lib/webcomment_templates.py:2015
#: modules/webmessage/lib/webmessage_templates.py:509
#: modules/websession/lib/websession_templates.py:2215
#: modules/websession/lib/websession_templates.py:2255
msgid "No"
-msgstr "No"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:321
msgid "Search checking frequency"
-msgstr "Frecuencia de comprobación de la búsqueda"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:322
msgid "Notification by email"
-msgstr "Notificación por correo electrónico"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:323
msgid "Result in basket"
-msgstr "Resultado en cesta"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:324
msgid "Date last run"
-msgstr "Fecha de la última ejecución"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:325
msgid "Creation date"
-msgstr "Fecha de creación"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:326
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:346
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:399
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:459
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:632
msgid "Query"
-msgstr "Búsqueda"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:369
#: modules/webbasket/lib/webbasket_templates.py:1786
msgid "no basket"
-msgstr "ninguna cesta"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:386
msgid "Modify"
-msgstr "Modificar"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:392
#: modules/webjournal/lib/webjournaladminlib.py:231
#: modules/webjournal/lib/webjournaladminlib.py:237
msgid "Remove"
-msgstr "Eliminar"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:394
#: modules/webalert/lib/webalert_templates.py:484
msgid "Execute search"
-msgstr "Ejecutar la búsqueda"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:400
#, python-format
msgid "You have defined %s alerts."
-msgstr "Usted ha definido %s alertas."
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:438
#, python-format
msgid ""
"You have not executed any search yet. Please go to the %(x_url_open)ssearch "
"interface%(x_url_close)s first."
msgstr ""
-"Todavía no ha ejecutado ninguna búsqueda. Vaya primero a la "
-"%(x_url_open)sinterfaz de búsqueda%(x_url_close)s."
#: modules/webalert/lib/webalert_templates.py:447
#, python-format
msgid ""
"You have performed %(x_nb1)s searches (%(x_nb2)s different questions) during "
"the last 30 days or so."
msgstr ""
-"Ha ejecutado %(x_nb1)s búsquedas (%(x_nb2)s cuestiones diferentes) durante "
-"los últimos 30 días aproximadamente."
#: modules/webalert/lib/webalert_templates.py:452
#, python-format
msgid "Here are the %s most popular searches."
-msgstr "Estas son las %s búsquedas más habituales"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:463
msgid "Question"
-msgstr "Cuestión"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:467
msgid "Last Run"
-msgstr "Última actualización"
+msgstr ""
#: modules/webalert/lib/webalert_templates.py:485
msgid "Set new alert"
-msgstr "Definir una nueva alerta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:76
#: modules/webalert/lib/webalert_webinterface.py:139
#: modules/webalert/lib/webalert_webinterface.py:224
#: modules/webalert/lib/webalert_webinterface.py:302
#: modules/webalert/lib/webalert_webinterface.py:358
#: modules/webalert/lib/webalert_webinterface.py:435
#: modules/webalert/lib/webalert_webinterface.py:509
msgid "You are not authorized to use alerts."
-msgstr "No está autorizado a gestionar alertas."
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:79
msgid "Popular Searches"
-msgstr "Búsquedas populares"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:81
#: modules/websession/lib/websession_templates.py:457
#: modules/websession/lib/websession_templates.py:619
msgid "Your Searches"
-msgstr "Sus búsquedas"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:98
#: modules/webalert/lib/webalert_webinterface.py:150
#: modules/webalert/lib/webalert_webinterface.py:183
#: modules/webalert/lib/webalert_webinterface.py:235
#: modules/webalert/lib/webalert_webinterface.py:268
#: modules/webalert/lib/webalert_webinterface.py:319
#: modules/webalert/lib/webalert_webinterface.py:369
#: modules/webalert/lib/webalert_webinterface.py:395
#: modules/webalert/lib/webalert_webinterface.py:446
#: modules/webalert/lib/webalert_webinterface.py:472
#: modules/webalert/lib/webalert_webinterface.py:520
#: modules/webalert/lib/webalert_webinterface.py:548
#: modules/webbasket/lib/webbasket.py:2349
#: modules/webbasket/lib/webbasket_webinterface.py:800
#: modules/webbasket/lib/webbasket_webinterface.py:894
#: modules/webbasket/lib/webbasket_webinterface.py:1015
#: modules/webbasket/lib/webbasket_webinterface.py:1110
#: modules/webbasket/lib/webbasket_webinterface.py:1240
#: modules/webmessage/lib/webmessage_templates.py:466
#: modules/websession/lib/websession_templates.py:605
#: modules/websession/lib/websession_templates.py:2328
#: modules/websession/lib/websession_webinterface.py:216
#: modules/websession/lib/websession_webinterface.py:238
#: modules/websession/lib/websession_webinterface.py:280
#: modules/websession/lib/websession_webinterface.py:508
#: modules/websession/lib/websession_webinterface.py:531
#: modules/websession/lib/websession_webinterface.py:559
#: modules/websession/lib/websession_webinterface.py:575
#: modules/websession/lib/websession_webinterface.py:627
#: modules/websession/lib/websession_webinterface.py:650
#: modules/websession/lib/websession_webinterface.py:676
#: modules/websession/lib/websession_webinterface.py:749
#: modules/websession/lib/websession_webinterface.py:805
#: modules/websession/lib/websession_webinterface.py:840
#: modules/websession/lib/websession_webinterface.py:871
#: modules/websession/lib/websession_webinterface.py:943
#: modules/websession/lib/websession_webinterface.py:981
#: modules/websubmit/web/publiline.py:136
#: modules/websubmit/web/publiline.py:157
#: modules/websubmit/web/yourapprovals.py:91
#: modules/websubmit/web/yoursubmissions.py:163
msgid "Your Account"
-msgstr "Su cuenta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:100
#, python-format
msgid "%s Personalize, Display searches"
-msgstr "%s personalizar, mostrar las búsquedas"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:101
#: modules/webalert/lib/webalert_webinterface.py:153
#: modules/webalert/lib/webalert_webinterface.py:186
#: modules/webalert/lib/webalert_webinterface.py:238
#: modules/webalert/lib/webalert_webinterface.py:271
#: modules/webalert/lib/webalert_webinterface.py:322
#: modules/webalert/lib/webalert_webinterface.py:372
#: modules/webalert/lib/webalert_webinterface.py:398
#: modules/webalert/lib/webalert_webinterface.py:449
#: modules/webalert/lib/webalert_webinterface.py:475
#: modules/webalert/lib/webalert_webinterface.py:523
#: modules/webalert/lib/webalert_webinterface.py:551
#: modules/websession/lib/websession_webinterface.py:219
#: modules/websession/lib/websession_webinterface.py:241
#: modules/websession/lib/websession_webinterface.py:282
#: modules/websession/lib/websession_webinterface.py:510
#: modules/websession/lib/websession_webinterface.py:533
#: modules/websession/lib/websession_webinterface.py:562
#: modules/websession/lib/websession_webinterface.py:578
#: modules/websession/lib/websession_webinterface.py:596
#: modules/websession/lib/websession_webinterface.py:606
#: modules/websession/lib/websession_webinterface.py:629
#: modules/websession/lib/websession_webinterface.py:652
#: modules/websession/lib/websession_webinterface.py:678
#, python-format
msgid "%s, personalize"
-msgstr "%s, personalizar"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:145
#: modules/webalert/lib/webalert_webinterface.py:230
#: modules/webalert/lib/webalert_webinterface.py:364
#: modules/webalert/lib/webalert_webinterface.py:441
#: modules/webalert/lib/webalert_webinterface.py:515
#: modules/webstyle/lib/webstyle_templates.py:583
#: modules/webstyle/lib/webstyle_templates.py:620
#: modules/webstyle/lib/webstyle_templates.py:622
#: modules/websubmit/lib/websubmit_engine.py:1734
#: modules/websubmit/lib/websubmit_webinterface.py:1361
#: modules/bibcatalog/lib/bibcatalog_templates.py:37
#: modules/bibedit/lib/bibedit_webinterface.py:193
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:496
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:559
#: modules/bibknowledge/lib/bibknowledgeadmin.py:279
msgid "Error"
-msgstr "Error"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:152
#: modules/webalert/lib/webalert_webinterface.py:185
#: modules/webalert/lib/webalert_webinterface.py:237
#: modules/webalert/lib/webalert_webinterface.py:371
#: modules/webalert/lib/webalert_webinterface.py:448
#: modules/webalert/lib/webalert_webinterface.py:522
#, python-format
msgid "%s Personalize, Set a new alert"
-msgstr "%s personalizar, definir una nueva alerta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:178
msgid "Set a new alert"
-msgstr "Definir una nueva alerta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:263
msgid "Modify alert settings"
-msgstr "Modificar la alerta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:270
#, python-format
msgid "%s Personalize, Modify alert settings"
-msgstr "%s personalizar, modificar la alerta"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:314
#: modules/websession/lib/websession_templates.py:474
msgid "Your Alerts"
-msgstr "Sus alertas"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:321
#: modules/webalert/lib/webalert_webinterface.py:397
#: modules/webalert/lib/webalert_webinterface.py:474
#: modules/webalert/lib/webalert_webinterface.py:550
#, python-format
msgid "%s Personalize, Display alerts"
-msgstr "%s personalizar, mostrar alertas"
+msgstr ""
#: modules/webalert/lib/webalert_webinterface.py:390
#: modules/webalert/lib/webalert_webinterface.py:467
#: modules/webalert/lib/webalert_webinterface.py:543
msgid "Display alerts"
-msgstr "Mostrar alertas:"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:104
#: modules/webbasket/lib/webbasket.py:2151
#: modules/webbasket/lib/webbasket.py:2181
msgid ""
"The selected public basket does not exist or you do not have access to it."
-msgstr "La cesta pública que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:112
msgid "Please select a valid public basket from the list of public baskets."
-msgstr "Seleccione una cesta válida de la lista de cestas públicas."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:135
#: modules/webbasket/lib/webbasket.py:298
#: modules/webbasket/lib/webbasket.py:1000
msgid "The selected item does not exist or you do not have access to it."
-msgstr "El ítem que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:141
msgid "Returning to the public basket view."
-msgstr "Volver a la visualización de las cestas públicas."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:416
#: modules/webbasket/lib/webbasket.py:474
#: modules/webbasket/lib/webbasket.py:1419
#: modules/webbasket/lib/webbasket.py:1483
msgid "You do not have permission to write notes to this item."
-msgstr "No tiene permisos para escribir notas en este ítem."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:429
#: modules/webbasket/lib/webbasket.py:1431
msgid ""
"The note you are quoting does not exist or you do not have access to it."
-msgstr "La nota que cita no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:484
#: modules/webbasket/lib/webbasket.py:1495
msgid "You must fill in both the subject and the body of the note."
-msgstr "Debe llenar tanto el asunto como el texto de la nota."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:581
#: modules/webbasket/lib/webbasket.py:657
#: modules/webbasket/lib/webbasket.py:713
#: modules/webbasket/lib/webbasket.py:2680
msgid "The selected topic does not exist or you do not have access to it."
-msgstr "El tema que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:623
#: modules/webbasket/lib/webbasket.py:743
#: modules/webbasket/lib/webbasket.py:2707
#: modules/webbasket/lib/webbasket.py:2715
#: modules/webbasket/lib/webbasket.py:2722
msgid "The selected basket does not exist or you do not have access to it."
-msgstr "La cesta que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:734
msgid "The selected basket is no longer public."
-msgstr "La cesta que ha seleccionado ya no es pública."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1548
msgid "You do not have permission to delete this note."
-msgstr "No tiene permisos para borrar esta nota."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1559
msgid ""
"The note you are deleting does not exist or you do not have access to it."
-msgstr "La nota que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1656
#, python-format
msgid "Sorry, you do not have sufficient rights to add record #%i."
-msgstr "No tiene permisos para añadir el registro #%i."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1662
msgid "Some of the items were not added due to lack of sufficient rights."
msgstr ""
-"No se han añadido algunos de los ítems ya que no tiene suficientes permisos."
#: modules/webbasket/lib/webbasket.py:1679
msgid "Please provide a title for the external source."
-msgstr "Añada el título de la fuente externa."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1685
msgid "Please provide a description for the external source."
-msgstr "Añada una descripción a la fuente externa."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1691
msgid "Please provide a url for the external source."
-msgstr "Añada la url de la fuente externa."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1700
msgid "The url you have provided is not valid."
-msgstr "La url que ha dado no es válida."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1707
msgid ""
"The url you have provided is not valid: The request contains bad syntax or "
"cannot be fulfilled."
msgstr ""
-"Esta url no es válida: la sintaxis no es correcta o no se puede satisfacer."
#: modules/webbasket/lib/webbasket.py:1714
msgid ""
"The url you have provided is not valid: The server failed to fulfil an "
"apparently valid request."
msgstr ""
-"Esta url no es válida: el servidor no contestó una petición aparentemente "
-"válida."
#: modules/webbasket/lib/webbasket.py:1763
#: modules/webbasket/lib/webbasket.py:1884
#: modules/webbasket/lib/webbasket.py:1953
msgid "Sorry, you do not have sufficient rights on this basket."
-msgstr "No tiene suficientes permisos sobre esta cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1772
msgid "No records to add."
-msgstr "Ningún registro a añadir."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:1812
#: modules/webbasket/lib/webbasket.py:2652
msgid "Cannot add items to the selected basket. Invalid parameters."
msgstr ""
-"No se han podido añadir items a la cesta seleccionada. Los parámetros no "
-"eran válidos."
#: modules/webbasket/lib/webbasket.py:1824
msgid ""
"A default topic and basket have been automatically created. Edit them to "
"rename them as you see fit."
msgstr ""
-"Se ha creat automáticament una nueva cesta y un tema por defecto. Edítelo "
-"para cambiar el nombre al que más le convenga."
#: modules/webbasket/lib/webbasket.py:2101
msgid "Please provide a name for the new basket."
-msgstr "Póngale un nombre a la nueva cetra."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2107
msgid "Please select an existing topic or create a new one."
-msgstr "Seleccione uno de los temas existentes o cree uno de nuevo."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2143
msgid ""
"You cannot subscribe to this basket, you are the either owner or you have "
"already subscribed."
msgstr ""
-"No puede suscribir-se a esta cesta, bien porque usted es el propietario o "
-"porque ya está subscrito."
#: modules/webbasket/lib/webbasket.py:2173
msgid ""
"You cannot unsubscribe from this basket, you are the either owner or you "
"have already unsubscribed."
msgstr ""
-"No se puede dar de baja de esta cesta, bien porque usted es el propietario o "
-"porque ya no estaba subscrito."
#: modules/webbasket/lib/webbasket.py:2266
#: modules/webbasket/lib/webbasket.py:2379
#: modules/webbasket/lib/webbasket_templates.py:101
#: modules/webbasket/lib/webbasket_templates.py:151
#: modules/webbasket/lib/webbasket_templates.py:157
#: modules/webbasket/lib/webbasket_templates.py:605
#: modules/webbasket/lib/webbasket_templates.py:657
msgid "Personal baskets"
-msgstr "Cestas personales"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2290
#: modules/webbasket/lib/webbasket.py:2396
#: modules/webbasket/lib/webbasket_templates.py:167
#: modules/webbasket/lib/webbasket_templates.py:199
#: modules/webbasket/lib/webbasket_templates.py:205
#: modules/webbasket/lib/webbasket_templates.py:614
#: modules/webbasket/lib/webbasket_templates.py:691
msgid "Group baskets"
-msgstr "Cestas de grupo"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2316
msgid "Others' baskets"
-msgstr "Cestas de otros"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2352
#: modules/websession/lib/websession_templates.py:465
#: modules/websession/lib/websession_templates.py:613
msgid "Your Baskets"
-msgstr "Sus cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2357
#: modules/webbasket/lib/webbasket_webinterface.py:1273
#: modules/webbasket/lib/webbasket_webinterface.py:1358
#: modules/webbasket/lib/webbasket_webinterface.py:1401
#: modules/webbasket/lib/webbasket_webinterface.py:1461
msgid "List of public baskets"
-msgstr "Lista de cestas públicas"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2368
#: modules/webbasket/lib/webbasket_webinterface.py:428
msgid "Search baskets"
-msgstr "Cestas de búsquedas"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2373
#: modules/webbasket/lib/webbasket_webinterface.py:738
#: modules/websearch/lib/websearch_templates.py:2852
#: modules/websearch/lib/websearch_templates.py:3038
msgid "Add to basket"
-msgstr "Añadir a la cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2413
#: modules/webbasket/lib/webbasket_templates.py:218
#: modules/webbasket/lib/webbasket_templates.py:224
#: modules/webbasket/lib/webbasket_templates.py:623
#: modules/webbasket/lib/webbasket_templates.py:725
msgid "Public baskets"
-msgstr "Cestas públicas"
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2443
#, python-format
msgid ""
"You have %(x_nb_perso)s personal baskets and are subscribed to "
"%(x_nb_group)s group baskets and %(x_nb_public)s other users public baskets."
msgstr ""
-"Tiene %(x_nb_perso)s cestas personales, está subscrito a %(x_nb_group)s "
-"cestas de grupo, y a %(x_nb_public)s cestas públicas de otros usuarios."
#: modules/webbasket/lib/webbasket.py:2629
#: modules/webbasket/lib/webbasket.py:2667
msgid ""
"The category you have selected does not exist. Please select a valid "
"category."
msgstr ""
-"La categoría que ha seleccionado no existe. Seleccione una categoría válida."
#: modules/webbasket/lib/webbasket.py:2693
msgid "The selected group does not exist or you do not have access to it."
-msgstr "El grupo que ha seleccionado no existe o no tiene acceso."
+msgstr ""
#: modules/webbasket/lib/webbasket.py:2738
msgid "The selected output format is not available or is invalid."
-msgstr "El formato que ha seleccionado no está disponible o no es válido."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:87
msgid ""
"You have no personal or group baskets or are subscribed to any public "
"baskets."
msgstr ""
-"No tiene cestas personales ni de grupo, ni está susbcripto a ninguna cesta "
-"pública"
#: modules/webbasket/lib/webbasket_templates.py:88
#, python-format
msgid ""
"You may want to start by %(x_url_open)screating a new basket%(x_url_close)s."
-msgstr "Puede empezar %(x_url_open)screando una nueva cesta%(x_url_close)s."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:112
#: modules/webbasket/lib/webbasket_templates.py:178
msgid "Back to Your Baskets"
-msgstr "Volver a sus cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:118
#: modules/webbasket/lib/webbasket_webinterface.py:1243
msgid "Create basket"
-msgstr "Crear cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:124
#: modules/webbasket/lib/webbasket_webinterface.py:1132
msgid "Edit topic"
-msgstr "Editar el tema"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:559
msgid "Search baskets for"
-msgstr "Buscarlo en las cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:560
msgid "Search also in notes (where allowed)"
-msgstr "Buscar también en las notas (si procede)"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:597
msgid "Results overview"
-msgstr "Sumario de los resultados"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:598
#: modules/webbasket/lib/webbasket_templates.py:607
#: modules/webbasket/lib/webbasket_templates.py:616
#: modules/webbasket/lib/webbasket_templates.py:625
#: modules/webbasket/lib/webbasket_templates.py:634
#: modules/webbasket/lib/webbasket_templates.py:659
#: modules/webbasket/lib/webbasket_templates.py:677
#: modules/webbasket/lib/webbasket_templates.py:693
#: modules/webbasket/lib/webbasket_templates.py:711
#: modules/webbasket/lib/webbasket_templates.py:727
#: modules/webbasket/lib/webbasket_templates.py:744
#: modules/webbasket/lib/webbasket_templates.py:760
#: modules/webbasket/lib/webbasket_templates.py:776
#, python-format
msgid "%i items found"
-msgstr "%i items encontrados"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:632
#: modules/webbasket/lib/webbasket_templates.py:758
msgid "All public baskets"
-msgstr "Todas las cestas públicas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:648
msgid "No items found."
-msgstr "No se ha encontrado ninguno."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:675
#: modules/webbasket/lib/webbasket_templates.py:709
#: modules/webbasket/lib/webbasket_templates.py:742
#: modules/webbasket/lib/webbasket_templates.py:774
#, python-format
msgid "In %(x_linked_basket_name)s"
-msgstr "En %(x_linked_basket_name)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:869
#: modules/webbasket/lib/webbasket_webinterface.py:1291
#: modules/webbasket/lib/webbasket_webinterface.py:1416
#: modules/webbasket/lib/webbasket_webinterface.py:1476
msgid "Public basket"
-msgstr "Cesta pública"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:870
msgid "Owner"
-msgstr "Propietario"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:871
msgid "Last update"
-msgstr "Última actualización"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:872
#: modules/bibcirculation/lib/bibcirculation_templates.py:113
msgid "Items"
-msgstr "Ítems"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:873
msgid "Views"
-msgstr "Vistas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:955
msgid "There is currently no publicly accessible basket"
-msgstr "No hay en este momento ninguna cesta públicamente accesible"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:977
#, python-format
msgid ""
"Displaying public baskets %(x_from)i - %(x_to)i out of "
"%(x_total_public_basket)i public baskets in total."
msgstr ""
-"Lista de cestas públicas %(x_from)i - %(x_to)i de un total de "
-"%(x_total_public_basket)i cestas públicas."
#: modules/webbasket/lib/webbasket_templates.py:1044
#: modules/webbasket/lib/webbasket_templates.py:1068
#, python-format
msgid "%(x_title)s, by %(x_name)s on %(x_date)s:"
-msgstr "%(x_title)s, por %(x_name)s el %(x_date)s:"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1047
#: modules/webbasket/lib/webbasket_templates.py:1071
#: modules/webcomment/lib/webcomment.py:1605
#, python-format
msgid "%(x_name)s wrote on %(x_date)s:"
-msgstr "%(x_name)s escribió en %(x_date)s:"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1127
msgid "Select topic"
-msgstr "Seleccione el tema"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1143
#: modules/webbasket/lib/webbasket_templates.py:1541
#: modules/webbasket/lib/webbasket_templates.py:1550
msgid "Choose topic"
-msgstr "Escoja el tema"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1144
#: modules/webbasket/lib/webbasket_templates.py:1552
msgid "or create a new one"
-msgstr "o cree uno de nuevo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1144
msgid "Create new topic"
-msgstr "Crear un nuevo tema"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1145
#: modules/webbasket/lib/webbasket_templates.py:1538
msgid "Basket name"
-msgstr "Nombre de la cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1147
msgid "Create a new basket"
-msgstr "Crear una nueva cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1199
msgid "Create new basket"
-msgstr "Crear una nueva cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1269
#: modules/webbasket/lib/webbasket_templates.py:2297
#: modules/webbasket/lib/webbasket_templates.py:2673
#: modules/webbasket/lib/webbasket_templates.py:3182
#: modules/webbasket/lib/webbasket_templates.py:3498
msgid "External item"
-msgstr "Registro externo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1270
msgid ""
"Provide a url for the external item you wish to add and fill in a title and "
"description"
msgstr ""
-"Escriba una url para el item externo que desea añadir y póngale un título y "
-"descripción"
#: modules/webbasket/lib/webbasket_templates.py:1271
#: modules/websubmit/lib/websubmit_templates.py:2726
#: modules/bibcirculation/lib/bibcirculation_utils.py:428
#: modules/bibcirculation/lib/bibcirculation_templates.py:2101
#: modules/bibcirculation/lib/bibcirculation_templates.py:2741
#: modules/bibcirculation/lib/bibcirculation_templates.py:5991
#: modules/bibcirculation/lib/bibcirculation_templates.py:8850
#: modules/bibcirculation/lib/bibcirculation_templates.py:11207
#: modules/bibcirculation/lib/bibcirculation_templates.py:11950
#: modules/bibcirculation/lib/bibcirculation_templates.py:12151
#: modules/bibcirculation/lib/bibcirculation_templates.py:12934
#: modules/bibcirculation/lib/bibcirculation_templates.py:15419
#: modules/bibcirculation/lib/bibcirculation_templates.py:16118
#: modules/bibcirculation/lib/bibcirculation_templates.py:16839
#: modules/bibcirculation/lib/bibcirculation_templates.py:17027
msgid "Title"
-msgstr "Título"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1275
msgid "URL"
-msgstr "URL"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1305
#, python-format
msgid "%i items have been successfully added to your basket"
-msgstr "%i registros se han añadido correctamente a su cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1306
#, python-format
msgid "Proceed to the %(x_url_open)sbasket%(x_url_close)s"
-msgstr "Subscríbase a la %(x_url_open)scesta%(x_url_close)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1311
#, python-format
msgid " or return to your %(x_url_open)sprevious basket%(x_url_close)s"
-msgstr " o vuelva a su %(x_url_open)scesta anterior%(x_url_close)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1315
#, python-format
msgid " or return to your %(x_url_open)ssearch%(x_url_close)s"
-msgstr " o vuelva a su %(x_url_open)sbúsqueda%(x_url_close)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1428
#, python-format
msgid "Adding %i items to your baskets"
-msgstr "%i items añadidos a sus cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1429
#, python-format
msgid ""
"Please choose a basket: %(x_basket_selection_box)s %(x_fmt_open)s(or "
"%(x_url_open)screate a new one%(x_url_close)s first)%(x_fmt_close)s"
msgstr ""
-"Escoja una cesta: %(x_basket_selection_box)s %(x_fmt_open)s(o antes "
-"%(x_url_open)scree una de nueva%(x_url_close)s)%(x_fmt_close)s"
#: modules/webbasket/lib/webbasket_templates.py:1443
msgid "Optionally, add a note to each one of these items"
-msgstr "Si lo desea, puede añadir una nota a uno de estos registros"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1444
msgid "Optionally, add a note to this item"
-msgstr "Opcionalmente, añada una nota a este registro"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1450
msgid "Add items"
-msgstr "Añadir items"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1474
msgid "Are you sure you want to delete this basket?"
-msgstr "¿Está seguro de que quiere suprimir esta cesta?"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1476
#, python-format
msgid "%i users are subscribed to this basket."
-msgstr "%i usuarios están subscritos a esta cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1478
#, python-format
msgid "%i user groups are subscribed to this basket."
-msgstr "%i grupos de usuarios se han subscrito a esta cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1480
#, python-format
msgid "You have set %i alerts on this basket."
-msgstr "Ha puesto %i alertas en esta cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1518
#: modules/webcomment/lib/webcomment_templates.py:232
#: modules/webcomment/lib/webcomment_templates.py:662
#: modules/webcomment/lib/webcomment_templates.py:1965
#: modules/webcomment/lib/webcomment_templates.py:1989
#: modules/webcomment/lib/webcomment_templates.py:2015
#: modules/webmessage/lib/webmessage_templates.py:508
#: modules/websession/lib/websession_templates.py:2214
#: modules/websession/lib/websession_templates.py:2254
msgid "Yes"
-msgstr "Sí"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1555
#: modules/webbasket/lib/webbasket_templates.py:1651
msgid "General settings"
-msgstr "Parámetros generales"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1570
#: modules/webbasket/lib/webbasket_templates.py:1745
#: modules/webbasket/lib/webbasket_templates.py:1772
msgid "Add group"
-msgstr "Añadir un grupo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1575
msgid "Manage group rights"
-msgstr "Gestionar los permisos de grupo"
+msgstr ""
-# Quizás mejor: 'para compartir'?
#: modules/webbasket/lib/webbasket_templates.py:1587
msgid "Manage global sharing rights"
-msgstr "Gestionar los permisos globales de compartición"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1592
#: modules/webbasket/lib/webbasket_templates.py:1658
#: modules/webbasket/lib/webbasket_templates.py:2006
#: modules/webbasket/lib/webbasket_templates.py:2085
msgid "Delete basket"
-msgstr "Suprimir la cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1616
#, python-format
msgid "Editing basket %(x_basket_name)s"
-msgstr "Edición de la cesta %(x_basket_name)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1625
#: modules/webbasket/lib/webbasket_templates.py:1680
msgid "Save changes"
-msgstr "Guardar cambios"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1646
msgid "Topic name"
-msgstr "Nombre del tema"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1675
#, python-format
msgid "Editing topic: %(x_topic_name)s"
-msgstr "Editar tema: %(x_topic_name)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1692
#: modules/webbasket/lib/webbasket_templates.py:1707
msgid "No rights"
-msgstr "Sin permiso"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1694
#: modules/webbasket/lib/webbasket_templates.py:1709
msgid "View records"
-msgstr "Ver registros"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1696
#: modules/webbasket/lib/webbasket_templates.py:1698
#: modules/webbasket/lib/webbasket_templates.py:1711
#: modules/webbasket/lib/webbasket_templates.py:1713
#: modules/webbasket/lib/webbasket_templates.py:1715
#: modules/webbasket/lib/webbasket_templates.py:1717
#: modules/webbasket/lib/webbasket_templates.py:1719
#: modules/webbasket/lib/webbasket_templates.py:1721
msgid "and"
-msgstr "y"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1696
msgid "view comments"
-msgstr "ver comentarios"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1698
msgid "add comments"
-msgstr "añadir comentarios"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1711
msgid "view notes"
-msgstr "ver notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1713
msgid "add notes"
-msgstr "añadir notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1715
msgid "add records"
-msgstr "añadir registros"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1717
msgid "delete notes"
-msgstr "borrar notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1719
msgid "remove records"
-msgstr "elimina registros"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1721
msgid "manage sharing rights"
-msgstr "gestionar los permisos de compartición"
+msgstr ""
-# De ningún?
#: modules/webbasket/lib/webbasket_templates.py:1743
msgid "You are not a member of a group."
-msgstr "Usted no es miembro de un grupo."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1765
msgid "Sharing basket to a new group"
-msgstr "Compartir la cesta con un nuevo grupo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1794
#: modules/websession/lib/websession_templates.py:510
msgid ""
"You are logged in as a guest user, so your baskets will disappear at the end "
"of the current session."
msgstr ""
-"Ahora usted está identificado como usuario visitante, con lo que sus cestas "
-"desaparecerán al final de esta sesión."
#: modules/webbasket/lib/webbasket_templates.py:1795
#: modules/webbasket/lib/webbasket_templates.py:1810
#: modules/websession/lib/websession_templates.py:513
#, python-format
msgid ""
"If you wish you can %(x_url_open)slogin or register here%(x_url_close)s."
msgstr ""
-"Si lo desea, puede %(x_url_open)sidentificarse o darse de alta aquí"
-"%(x_url_close)s."
#: modules/webbasket/lib/webbasket_templates.py:1809
#: modules/websession/lib/websession_webinterface.py:263
msgid "This functionality is forbidden to guest users."
-msgstr "Esta funcionalidad no está permitida a los usuarios visitantes."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1862
#: modules/webcomment/lib/webcomment_templates.py:864
msgid "Back to search results"
-msgstr "Volver al resultado de la búsqueda"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1990
#: modules/webbasket/lib/webbasket_templates.py:3022
#, python-format
msgid "%i items"
-msgstr "%i items"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1991
#: modules/webbasket/lib/webbasket_templates.py:3024
#, python-format
msgid "%i notes"
-msgstr "%i notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1991
#: modules/webbasket/lib/webbasket_templates.py:3024
msgid "no notes yet"
-msgstr "sin notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1994
#, python-format
msgid "%i subscribers"
-msgstr "%i subscriptores"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:1996
#: modules/webbasket/lib/webbasket_templates.py:3026
msgid "last update"
-msgstr "Última actualización"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2000
#: modules/webbasket/lib/webbasket_templates.py:2079
msgid "Add item"
-msgstr "Añadir item"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2003
#: modules/webbasket/lib/webbasket_templates.py:2082
#: modules/webbasket/lib/webbasket_webinterface.py:1037
msgid "Edit basket"
-msgstr "Editar cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2016
#: modules/webbasket/lib/webbasket_templates.py:2093
#: modules/webbasket/lib/webbasket_templates.py:3034
#: modules/webbasket/lib/webbasket_templates.py:3087
msgid "Unsubscribe from basket"
-msgstr "Darse de baja de esta cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2098
msgid "This basket is publicly accessible at the following address:"
msgstr ""
-"Se puede acceder públicamente a esta cesta desde la siguiente dirección:"
#: modules/webbasket/lib/webbasket_templates.py:2162
#: modules/webbasket/lib/webbasket_templates.py:3137
msgid "Basket is empty"
-msgstr "La cesta está vacía"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2196
msgid "You do not have sufficient rights to view this basket's content."
-msgstr "No tiene suficientes permisos para ver el contenido de esta cesta."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2239
msgid "Move item up"
-msgstr "Subirlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2243
msgid "You cannot move this item up"
-msgstr "No es posible subirlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2257
msgid "Move item down"
-msgstr "Bajarlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2261
msgid "You cannot move this item down"
-msgstr "No es posible bajarlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2275
#: modules/webbasket/lib/webbasket_templates.py:3178
msgid "Copy item"
-msgstr "Copiarlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2291
msgid "Remove item"
-msgstr "Eliminarlo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2363
#: modules/webbasket/lib/webbasket_templates.py:2718
#: modules/webbasket/lib/webbasket_templates.py:3239
#: modules/webbasket/lib/webbasket_templates.py:3539
msgid "This record does not seem to exist any more"
-msgstr "El registro solicitado ya no existe."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2366
#: modules/webbasket/lib/webbasket_templates.py:2868
#: modules/webbasket/lib/webbasket_templates.py:3242
#: modules/webbasket/lib/webbasket_templates.py:3671
#: modules/bibcirculation/lib/bibcirculation_templates.py:3945
#: modules/bibcirculation/lib/bibcirculation_templates.py:4048
#: modules/bibcirculation/lib/bibcirculation_templates.py:4271
#: modules/bibcirculation/lib/bibcirculation_templates.py:4332
#: modules/bibcirculation/lib/bibcirculation_templates.py:4459
#: modules/bibcirculation/lib/bibcirculation_templates.py:6190
#: modules/bibcirculation/lib/bibcirculation_templates.py:6239
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6704
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10821
#: modules/bibcirculation/lib/bibcirculation_templates.py:10916
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13956
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14144
#: modules/bibcirculation/lib/bibcirculation_templates.py:14717
msgid "Notes"
-msgstr "Notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2366
#: modules/webbasket/lib/webbasket_templates.py:3242
msgid "Add a note..."
-msgstr "Añadir una nota..."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2478
#: modules/webbasket/lib/webbasket_templates.py:3330
#, python-format
msgid "Item %(x_item_index)i of %(x_item_total)i"
-msgstr "Ítem %(x_item_index)i de %(x_item_total)i"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2491
#: modules/webbasket/lib/webbasket_templates.py:2494
#: modules/webbasket/lib/webbasket_templates.py:2578
#: modules/webbasket/lib/webbasket_templates.py:2581
#: modules/webbasket/lib/webbasket_templates.py:3340
#: modules/webbasket/lib/webbasket_templates.py:3343
#: modules/webbasket/lib/webbasket_templates.py:3415
#: modules/webbasket/lib/webbasket_templates.py:3418
msgid "Previous item"
-msgstr "Ítem anterior"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2506
#: modules/webbasket/lib/webbasket_templates.py:2509
#: modules/webbasket/lib/webbasket_templates.py:2593
#: modules/webbasket/lib/webbasket_templates.py:2596
#: modules/webbasket/lib/webbasket_templates.py:3352
#: modules/webbasket/lib/webbasket_templates.py:3355
#: modules/webbasket/lib/webbasket_templates.py:3427
#: modules/webbasket/lib/webbasket_templates.py:3430
msgid "Next item"
-msgstr "Ítem siguiente"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2519
#: modules/webbasket/lib/webbasket_templates.py:2606
#: modules/webbasket/lib/webbasket_templates.py:3362
#: modules/webbasket/lib/webbasket_templates.py:3437
msgid "Return to basket"
-msgstr "Volver a la cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2666
#: modules/webbasket/lib/webbasket_templates.py:3491
msgid "The item you have selected does not exist."
-msgstr "El ítem seleccionado no existe."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2694
#: modules/webbasket/lib/webbasket_templates.py:3515
msgid "You do not have sufficient rights to view this item's notes."
-msgstr "No tiene permisos para ver las notas de este ítem."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2735
msgid "You do not have sufficient rights to view this item."
-msgstr "No tiene permisos para ver este ítem."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2842
#: modules/webbasket/lib/webbasket_templates.py:2852
#: modules/webbasket/lib/webbasket_templates.py:3645
#: modules/webbasket/lib/webbasket_templates.py:3655
#: modules/webbasket/lib/webbasket_webinterface.py:493
#: modules/webbasket/lib/webbasket_webinterface.py:1538
msgid "Add a note"
-msgstr "Añadir una nota"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2843
#: modules/webbasket/lib/webbasket_templates.py:3646
msgid "Add note"
-msgstr "Añadir nota"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2889
#: modules/webbasket/lib/webbasket_templates.py:3692
#: modules/webcomment/lib/webcomment_templates.py:373
#: modules/webmessage/lib/webmessage_templates.py:111
msgid "Reply"
-msgstr "Contestar"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2919
#: modules/webbasket/lib/webbasket_templates.py:3717
#, python-format
msgid "%(x_title)s, by %(x_name)s on %(x_date)s"
-msgstr "%(x_title)s, por %(x_name)s el %(x_date)s"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:2921
#: modules/webbasket/lib/webbasket_templates.py:3719
#: modules/websession/lib/websession_templates.py:165
#: modules/websession/lib/websession_templates.py:214
#: modules/websession/lib/websession_templates.py:915
#: modules/websession/lib/websession_templates.py:1039
msgid "Note"
-msgstr "Nota"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3031
#: modules/webbasket/lib/webbasket_templates.py:3084
msgid "Subscribe to basket"
-msgstr "Subscribirse a la cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3090
msgid "This public basket belongs to the user "
-msgstr "Esta cesta pública pertenece al usuario "
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3114
msgid "This public basket belongs to you."
-msgstr "Esta cesta pública es suya."
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3889
msgid "All your baskets"
-msgstr "Todas sus cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3891
#: modules/webbasket/lib/webbasket_templates.py:3966
msgid "Your personal baskets"
-msgstr "Sus cestas personales"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3897
#: modules/webbasket/lib/webbasket_templates.py:3977
msgid "Your group baskets"
-msgstr "Sus cestas de grupo"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3903
msgid "Your public baskets"
-msgstr "Sus cestas públicas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3904
msgid "All the public baskets"
-msgstr "Todas las cestas públicas"
+msgstr ""
#: modules/webbasket/lib/webbasket_templates.py:3961
msgid "*** basket name ***"
-msgstr "*** nombre de la cesta ***"
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:158
#: modules/webbasket/lib/webbasket_webinterface.py:319
#: modules/webbasket/lib/webbasket_webinterface.py:406
#: modules/webbasket/lib/webbasket_webinterface.py:471
#: modules/webbasket/lib/webbasket_webinterface.py:538
#: modules/webbasket/lib/webbasket_webinterface.py:615
#: modules/webbasket/lib/webbasket_webinterface.py:702
#: modules/webbasket/lib/webbasket_webinterface.py:780
#: modules/webbasket/lib/webbasket_webinterface.py:864
#: modules/webbasket/lib/webbasket_webinterface.py:961
#: modules/webbasket/lib/webbasket_webinterface.py:1077
#: modules/webbasket/lib/webbasket_webinterface.py:1177
#: modules/webbasket/lib/webbasket_webinterface.py:1397
#: modules/webbasket/lib/webbasket_webinterface.py:1457
#: modules/webbasket/lib/webbasket_webinterface.py:1519
#: modules/webbasket/lib/webbasket_webinterface.py:1581
msgid "You are not authorized to use baskets."
-msgstr "No está autorizado a usar cestas."
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:169
msgid "You are not authorized to view this attachment"
-msgstr "No está autorizado a ver esta adjunto"
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:361
msgid "Display baskets"
-msgstr "Mostrar cestas"
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:564
#: modules/webbasket/lib/webbasket_webinterface.py:639
#: modules/webbasket/lib/webbasket_webinterface.py:1604
msgid "Display item and notes"
-msgstr "Mostrar el ítem y las notas"
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:821
msgid "Delete a basket"
-msgstr "Suprimir una cesta"
+msgstr ""
#: modules/webbasket/lib/webbasket_webinterface.py:879
msgid "Copy record to basket"
-msgstr "Copiar el registro a la cesta"
+msgstr ""
#: modules/webcomment/lib/webcommentadminlib.py:122
msgid "Invalid comment ID."
-msgstr "Número de comentario no válido."
+msgstr ""
#: modules/webcomment/lib/webcommentadminlib.py:142
#, python-format
msgid "Comment ID %s does not exist."
-msgstr "El comentario %s no existe."
+msgstr ""
#: modules/webcomment/lib/webcommentadminlib.py:156
#, python-format
msgid "Record ID %s does not exist."
-msgstr "El registro %s no existe."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:166
#: modules/webcomment/lib/webcomment.py:210
msgid "Bad page number --> showing first page."
-msgstr "Número de página incorrecta --> se muestra la primera."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:174
msgid "Bad number of results per page --> showing 10 results per page."
msgstr ""
-"Número de resultados por página incorrecto --> se mostrarán 10 por página."
#: modules/webcomment/lib/webcomment.py:183
msgid "Bad display order --> showing most helpful first."
-msgstr "Orden incorrecto --> vea las más útiles."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:192
msgid "Bad display order --> showing oldest first."
-msgstr "Orden incorrecto --> se ordenará por antigüedad."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:229
#: modules/webcomment/lib/webcomment.py:1579
#: modules/webcomment/lib/webcomment.py:1632
msgid "Comments on records have been disallowed by the administrator."
-msgstr "L'administrador ha deshabilitat l'opció de comentaris als registres."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:237
#: modules/webcomment/lib/webcomment.py:260
#: modules/webcomment/lib/webcomment.py:1419
#: modules/webcomment/lib/webcomment.py:1440
msgid "Your feedback has been recorded, many thanks."
-msgstr "Muchas gracias por su contribución."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:244
msgid "You have already reported an abuse for this comment."
-msgstr "Ya había denunciado este comentario."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:251
msgid "The comment you have reported no longer exists."
-msgstr "El comentario que había denunciado ya no existe."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:267
msgid "Sorry, you have already voted. This vote has not been recorded."
-msgstr "Ya había votado, de manera que este voto no se ha contabilizado."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:274
msgid ""
"You have been subscribed to this discussion. From now on, you will receive "
"an email whenever a new comment is posted."
msgstr ""
-"Se ha subscrito a aquesta discusión. A partir de ahora recibirá un correu "
-"electrónic cada vez que se publique un nuevo comentario."
#: modules/webcomment/lib/webcomment.py:281
msgid "You have been unsubscribed from this discussion."
-msgstr "Se ha dado de baja de esta discusión."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1171
#, python-format
msgid "Record %i"
-msgstr "Registro %i"
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1182
#, python-format
msgid "%(report_number)s\"%(title)s\" has been reviewed"
-msgstr "Se han revisado el %(report_number)s\"%(title)s\""
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1186
#, python-format
msgid "%(report_number)s\"%(title)s\" has been commented"
-msgstr "Se han comentado el %(report_number)s\"%(title)s\""
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1407
#, python-format
msgid "%s is an invalid record ID"
-msgstr "%s no es un número válido de registro"
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1426
#: modules/webcomment/lib/webcomment.py:1447
msgid "Your feedback could not be recorded, please try again."
msgstr ""
-"No ha sido posible guardar su contribución. Por favor inténtelo de nuevo."
#: modules/webcomment/lib/webcomment.py:1555
#, python-format
msgid "%s is an invalid user ID."
-msgstr "%s no es un identificador válido de usuario"
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1589
msgid "Cannot reply to a review."
-msgstr "No es posible contestar a una reseña."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1644
msgid "You must enter a title."
-msgstr "Debe ponerle un título."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1651
msgid "You must choose a score."
-msgstr "Escoja una puntuación."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1658
msgid "You must enter a text."
-msgstr "Debe redactar un texto."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1675
msgid "You already wrote a review for this record."
-msgstr "Ya ha escrito una reseña para este registro."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1693
msgid "You already posted a comment short ago. Please retry later."
msgstr ""
-"Hace poco ya ha publicado un comentario. Por favor vueva a intentarlo más "
-"tarde."
#: modules/webcomment/lib/webcomment.py:1705
msgid "Failed to insert your comment to the database. Please try again."
msgstr ""
-"No ha sido posible guardar su comentario. Por favor inténtelo de nuevo."
#: modules/webcomment/lib/webcomment.py:1719
msgid "Unknown action --> showing you the default add comment form."
msgstr ""
-"Acción desconocida --> se muestra el formulario de añadir un comentario."
#: modules/webcomment/lib/webcomment.py:1841
#, python-format
msgid "Record ID %s does not exist in the database."
-msgstr "El registro %s no existe en la base de datos."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1849
msgid "No record ID was given."
-msgstr "No ha dado el número de registro."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1857
#, python-format
msgid "Record ID %s is an invalid ID."
-msgstr "%s no es un identificador válido de registro."
+msgstr ""
#: modules/webcomment/lib/webcomment.py:1865
#, python-format
msgid "Record ID %s is not a number."
-msgstr "El registro %s no es numérico."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:79
#: modules/webcomment/lib/webcomment_templates.py:839
#: modules/websubmit/lib/websubmit_templates.py:2674
msgid "Write a comment"
-msgstr "Escriba un comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:94
#, python-format
msgid ""
"<div class=\"webcomment_comment_round_header\">%(x_nb)i Comments for round "
"\"%(x_name)s\""
msgstr ""
-"<div class=\"webcomment_comment_round_header\">%(x_nb)i Comentarios para la "
-"vuelta \"%(x_name)s\""
#: modules/webcomment/lib/webcomment_templates.py:96
#, python-format
msgid "<div class=\"webcomment_comment_round_header\">%(x_nb)i Comments"
-msgstr "<div class=\"webcomment_comment_round_header\">%(x_nb)i Comentarios"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:124
#, python-format
msgid "Showing the latest %i comments:"
-msgstr "Mostrar los últimos %i comentarios:"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:137
#: modules/webcomment/lib/webcomment_templates.py:163
msgid "Discuss this document"
-msgstr "Comente este documento"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:164
#: modules/webcomment/lib/webcomment_templates.py:849
msgid "Start a discussion about any aspect of this document."
-msgstr "Inicie un debate sobre cualquier aspecto de este documento."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:180
#, python-format
msgid "Sorry, the record %s does not seem to exist."
-msgstr "Parece ser que el registro %s no existe."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:182
#, python-format
msgid "Sorry, %s is not a valid ID value."
-msgstr "%s no es un identificador válido."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:184
msgid "Sorry, no record ID was provided."
-msgstr "No ha dado el número de registro."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:188
#, python-format
msgid "You may want to start browsing from %s"
-msgstr "Puede comenzar a visualizar desde %s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:244
#: modules/webcomment/lib/webcomment_templates.py:704
#: modules/webcomment/lib/webcomment_templates.py:714
#, python-format
msgid "%(x_nb)i comments for round \"%(x_name)s\""
-msgstr "%(x_nb)i comentarios por la vuelta «%(x_name)s»"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:267
#: modules/webcomment/lib/webcomment_templates.py:763
msgid "Was this review helpful?"
-msgstr "¿Ha sido útil esta reseña?"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:278
#: modules/webcomment/lib/webcomment_templates.py:315
#: modules/webcomment/lib/webcomment_templates.py:839
msgid "Write a review"
-msgstr "Escriba una reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:285
#: modules/webcomment/lib/webcomment_templates.py:827
#: modules/webcomment/lib/webcomment_templates.py:2036
#, python-format
msgid "Average review score: %(x_nb_score)s based on %(x_nb_reviews)s reviews"
-msgstr "Puntuación media: %(x_nb_score)s, basada en %(x_nb_reviews)s resseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:288
#, python-format
msgid "Readers found the following %s reviews to be most helpful."
msgstr ""
-"Los lectores han encontrado que las siguientes %s reseñas son las más útiles."
#: modules/webcomment/lib/webcomment_templates.py:291
#: modules/webcomment/lib/webcomment_templates.py:314
#, python-format
msgid "View all %s reviews"
-msgstr "Visualizar todas las %s reseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:310
#: modules/webcomment/lib/webcomment_templates.py:332
#: modules/webcomment/lib/webcomment_templates.py:2077
msgid "Rate this document"
-msgstr "Valore este documento"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:333
msgid ""
"<div class=\"webcomment_review_first_introduction\">Be the first to review "
"this document.</div>"
msgstr ""
-"<div class=\"webcomment_review_first_introduction\">Sea el primero de "
-"reseñar este documento.</div>"
#: modules/webcomment/lib/webcomment_templates.py:368
#, python-format
msgid "%(x_name)s"
-msgstr "%(x_name)s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:375
#: modules/webcomment/lib/webcomment_templates.py:764
msgid "Report abuse"
-msgstr "Denuncie un abuso"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:390
msgid "Undelete comment"
-msgstr "Recupera el comentario suprimido"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:399
#: modules/webcomment/lib/webcomment_templates.py:401
msgid "Delete comment"
-msgstr "Suprimir comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:407
msgid "Unreport comment"
-msgstr "Suprimir la denuncia al comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:418
msgid "Attached file"
-msgstr "Fichero adjunto"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:418
msgid "Attached files"
-msgstr "Ficheros adjuntos"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:484
#, python-format
msgid "Reviewed by %(x_nickname)s on %(x_date)s"
-msgstr "Reseñado por %(x_nickname)s el %(x_date)s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:488
#, python-format
msgid "%(x_nb_people)s out of %(x_nb_total)s people found this review useful"
msgstr ""
-"%(x_nb_people)s de %(x_nb_total)s personas han encontrado esta reseña útil"
#: modules/webcomment/lib/webcomment_templates.py:510
msgid "Undelete review"
-msgstr "Recupera la reseña suprimido"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:519
msgid "Delete review"
-msgstr "Eliminar la reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:525
msgid "Unreport review"
-msgstr "Suprimir la denuncia a la reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:631
#: modules/webcomment/lib/webcomment_templates.py:646
#: modules/webcomment/lib/webcomment_webinterface.py:237
#: modules/webcomment/lib/webcomment_webinterface.py:429
#: modules/websubmit/lib/websubmit_templates.py:2672
msgid "Comments"
-msgstr "Comentarios"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:632
#: modules/webcomment/lib/webcomment_templates.py:647
#: modules/webcomment/lib/webcomment_webinterface.py:237
#: modules/webcomment/lib/webcomment_webinterface.py:429
msgid "Reviews"
-msgstr "Reseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:802
#: modules/websearch/lib/websearch_templates.py:1864
#: modules/bibcatalog/lib/bibcatalog_templates.py:50
#: modules/bibknowledge/lib/bibknowledge_templates.py:167
msgid "Previous"
-msgstr "Anterior"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:818
#: modules/bibcatalog/lib/bibcatalog_templates.py:72
#: modules/bibknowledge/lib/bibknowledge_templates.py:165
msgid "Next"
-msgstr "Siguiente"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:842
#, python-format
msgid "There is a total of %s reviews"
-msgstr "Hay un total de %s reseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:844
#, python-format
msgid "There is a total of %s comments"
-msgstr "Hay un total de %s comentarios"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:851
msgid "Be the first to review this document."
-msgstr "Sea el primero a escribir una reseña de este documento."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:863
#: modules/webcomment/lib/webcomment_templates.py:1643
#: modules/websearch/lib/websearch_templates.py:558
msgid "Record"
-msgstr "Registro"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:870
#: modules/webcomment/lib/webcomment_templates.py:929
msgid "review"
-msgstr "reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:870
#: modules/webcomment/lib/webcomment_templates.py:929
msgid "comment"
-msgstr "comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:871
#: modules/webcomment/lib/webcomment_templates.py:1879
msgid "Review"
-msgstr "Reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:871
#: modules/webcomment/lib/webcomment_templates.py:1202
#: modules/webcomment/lib/webcomment_templates.py:1647
#: modules/webcomment/lib/webcomment_templates.py:1879
#: modules/websubmit/lib/websubmit_managedocfiles.py:389
#: modules/websubmit/lib/websubmit_templates.py:2728
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:347
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:401
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:455
msgid "Comment"
-msgstr "Comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:927
msgid "Viewing"
-msgstr "Visualización"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:928
msgid "Page:"
-msgstr "Página:"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:946
msgid "Subscribe"
-msgstr "Subscribirse"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:955
msgid "Unsubscribe"
-msgstr "Darse de baja"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:962
msgid "You are not authorized to comment or review."
-msgstr "No está autorizado a hacer comentarios o reseñas."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1132
#, python-format
msgid "Note: Your nickname, %s, will be displayed as author of this comment."
msgstr ""
-"Atención: su alias, %s, será el que se muestre como autor de este comentario"
#: modules/webcomment/lib/webcomment_templates.py:1136
#: modules/webcomment/lib/webcomment_templates.py:1253
#, python-format
msgid ""
"Note: you have not %(x_url_open)sdefined your nickname%(x_url_close)s. "
"%(x_nickname)s will be displayed as the author of this comment."
msgstr ""
-"Atención: todavía no ha %(x_url_open)sdefinido su alias%(x_url_close)s. "
-"%(x_nickname)s, será el que se muestre como autor de este comentario"
#: modules/webcomment/lib/webcomment_templates.py:1153
msgid "Once logged in, authorized users can also attach files."
msgstr ""
-"Una vez identificados, los usuarios autorizados también pueden añadir "
-"ficheros."
#: modules/webcomment/lib/webcomment_templates.py:1168
msgid "Optionally, attach a file to this comment"
-msgstr "Opcionalmente, añada un fichero a este comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1169
msgid "Optionally, attach files to this comment"
-msgstr "Opcionalmente, añada ficheros a este comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1170
msgid "Max one file"
-msgstr "Máximo un fichero"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1171
#, python-format
msgid "Max %i files"
-msgstr "Máximo %i ficheros"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1172
#, python-format
msgid "Max %(x_nb_bytes)s per file"
-msgstr "Máximo %(x_nb_bytes)s por fichero"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1187
msgid "Send me an email when a new comment is posted"
-msgstr "Enviar un email cuando se publique un nuevo comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1201
#: modules/webcomment/lib/webcomment_templates.py:1324
msgid "Article"
-msgstr "Artículo"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1203
msgid "Add comment"
-msgstr "Añadir comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1248
#, python-format
msgid ""
"Note: Your nickname, %s, will be displayed as the author of this review."
msgstr ""
-"Atención: su alias, %s, será el que se muestre como autor de esta reseña."
#: modules/webcomment/lib/webcomment_templates.py:1325
msgid "Rate this article"
-msgstr "Valore este artículo"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1326
msgid "Select a score"
-msgstr "Seleccione una puntuación"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1327
msgid "Give a title to your review"
-msgstr "Dé un título a su reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1328
msgid "Write your review"
-msgstr "Escriba su reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1333
msgid "Add review"
-msgstr "Añada su reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1343
#: modules/webcomment/lib/webcomment_webinterface.py:474
msgid "Add Review"
-msgstr "Añada su reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1365
msgid "Your review was successfully added."
-msgstr "Su reseña se ha añadido correctamente."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1367
msgid "Your comment was successfully added."
-msgstr "Su comentario se ha añadido correctamente."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1370
msgid "Back to record"
-msgstr "Volver al registro"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1448
#: modules/webcomment/web/admin/webcommentadmin.py:171
msgid "View most commented records"
-msgstr "Ver los registros más comentados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1450
#: modules/webcomment/web/admin/webcommentadmin.py:207
msgid "View latest commented records"
-msgstr "Ver los registros con los comentarios más recientes"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1452
#: modules/webcomment/web/admin/webcommentadmin.py:140
msgid "View all comments reported as abuse"
-msgstr "Visualizar todos los comentarios denunciados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1456
#: modules/webcomment/web/admin/webcommentadmin.py:170
msgid "View most reviewed records"
-msgstr "Ver los registros con más reseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1458
#: modules/webcomment/web/admin/webcommentadmin.py:206
msgid "View latest reviewed records"
-msgstr "Ver los registros con las reseñas más recientes"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1460
#: modules/webcomment/web/admin/webcommentadmin.py:140
msgid "View all reviews reported as abuse"
-msgstr "Visualizar todas las reseñas denunciadas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1468
msgid "View all users who have been reported"
-msgstr "Ver todos los usuarios que han sido denunciados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1470
msgid "Guide"
-msgstr "Guía"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1472
msgid "Comments and reviews are disabled"
-msgstr "Los comentarios y las reseñas están desactivadas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1492
msgid ""
"Please enter the ID of the comment/review so that you can view it before "
"deciding whether to delete it or not"
msgstr ""
-"Introduzca el número del comentario o reseña; así puede visualizarlo antes "
-"de decidir si lo suprime o no"
#: modules/webcomment/lib/webcomment_templates.py:1516
msgid "Comment ID:"
-msgstr "Número del comentario:"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1517
msgid "Or enter a record ID to list all the associated comments/reviews:"
msgstr ""
-"O entre el número de registro para ver todos los comentarios y reseñas "
-"asociadas:"
#: modules/webcomment/lib/webcomment_templates.py:1518
msgid "Record ID:"
-msgstr "Registro: "
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1520
msgid "View Comment"
-msgstr "Visualizar el comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1541
msgid "There have been no reports so far."
-msgstr "De momento no hay denuncias."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1545
#, python-format
msgid "View all %s reported comments"
-msgstr "Visualizar todos los %s comentarios denunciados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1548
#, python-format
msgid "View all %s reported reviews"
-msgstr "Visualizar todas las %s reseñas denunciadas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1585
msgid ""
"Here is a list, sorted by total number of reports, of all users who have had "
"a comment reported at least once."
msgstr ""
-"Esta es la lista, ordenada por el número de denuncias, de los usuarios que "
-"han tenido al menos una denuncia a alguno de sus comentarios."
#: modules/webcomment/lib/webcomment_templates.py:1593
#: modules/webcomment/lib/webcomment_templates.py:1622
#: modules/websession/lib/websession_templates.py:158
#: modules/websession/lib/websession_templates.py:1034
msgid "Nickname"
-msgstr "Alias"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1594
#: modules/webcomment/lib/webcomment_templates.py:1626
#: modules/bibcirculation/lib/bibcirculation_utils.py:457
#: modules/bibcirculation/lib/bibcirculation_templates.py:2390
#: modules/bibcirculation/lib/bibcirculation_templates.py:2507
#: modules/bibcirculation/lib/bibcirculation_templates.py:2739
#: modules/bibcirculation/lib/bibcirculation_templates.py:3942
#: modules/bibcirculation/lib/bibcirculation_templates.py:4045
#: modules/bibcirculation/lib/bibcirculation_templates.py:4268
#: modules/bibcirculation/lib/bibcirculation_templates.py:4329
#: modules/bibcirculation/lib/bibcirculation_templates.py:4457
#: modules/bibcirculation/lib/bibcirculation_templates.py:5605
#: modules/bibcirculation/lib/bibcirculation_templates.py:6186
#: modules/bibcirculation/lib/bibcirculation_templates.py:6235
#: modules/bibcirculation/lib/bibcirculation_templates.py:6536
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6700
#: modules/bibcirculation/lib/bibcirculation_templates.py:6929
#: modules/bibcirculation/lib/bibcirculation_templates.py:7028
#: modules/bibcirculation/lib/bibcirculation_templates.py:9028
#: modules/bibcirculation/lib/bibcirculation_templates.py:9273
#: modules/bibcirculation/lib/bibcirculation_templates.py:9882
#: modules/bibcirculation/lib/bibcirculation_templates.py:10360
#: modules/bibcirculation/lib/bibcirculation_templates.py:11224
#: modules/bibcirculation/lib/bibcirculation_templates.py:12211
#: modules/bibcirculation/lib/bibcirculation_templates.py:12995
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14141
#: modules/bibcirculation/lib/bibcirculation_templates.py:14390
#: modules/bibcirculation/lib/bibcirculation_templates.py:14461
#: modules/bibcirculation/lib/bibcirculation_templates.py:14715
#: modules/bibcirculation/lib/bibcirculation_templates.py:15518
msgid "Email"
-msgstr "Dirección electrónica"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1595
#: modules/webcomment/lib/webcomment_templates.py:1624
msgid "User ID"
-msgstr "Número de usuario"
+msgstr ""
-# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1597
msgid "Number positive votes"
-msgstr "Número de votos positivos"
+msgstr ""
-# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1598
msgid "Number negative votes"
-msgstr "Número de votos negativos"
+msgstr ""
-# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1599
msgid "Total number votes"
-msgstr "Número total de votos"
+msgstr ""
-# Falta 'of'?
#: modules/webcomment/lib/webcomment_templates.py:1600
msgid "Total number of reports"
-msgstr "Número total de denuncias"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1601
msgid "View all user's reported comments/reviews"
-msgstr "Visualizar todos los comentarios/reseñas denunciadas de este usuario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1634
#, python-format
msgid "This review has been reported %i times"
-msgstr "Esta reseña ha sido denunciada %i veces"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1636
#, python-format
msgid "This comment has been reported %i times"
-msgstr "Este comentario ha sido denunciado %i veces"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1880
msgid "Written by"
-msgstr "Escrita por"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1881
msgid "General informations"
-msgstr "Informaciones generales"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1882
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:652
msgid "Select"
-msgstr "Seleccionar"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1896
msgid "Delete selected reviews"
-msgstr "Eliminar las reseñas seleccionadas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1897
#: modules/webcomment/lib/webcomment_templates.py:1904
msgid "Suppress selected abuse report"
-msgstr "Suprimir el informe de abuso seleccionado"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1898
msgid "Undelete selected reviews"
-msgstr "Recuperar las reseñas eliminadas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1902
msgid "Undelete selected comments"
-msgstr "Recuperar los comentarios eliminados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1903
msgid "Delete selected comments"
-msgstr "Suprimir los comentarios seleccionados"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1912
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:494
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:557
#: modules/bibcirculation/lib/bibcirculation_templates.py:1635
msgid "OK"
-msgstr "De acuerdo"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1918
#, python-format
msgid "Here are the reported reviews of user %s"
-msgstr "Estas son las reseñes denunciadas del usuario %s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1920
#, python-format
msgid "Here are the reported comments of user %s"
-msgstr "Estos son los comentarios denunciados del usuario %s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1924
#, python-format
msgid "Here is review %s"
-msgstr "Ésta es la reseña %s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1926
#, python-format
msgid "Here is comment %s"
-msgstr "Éste es el comentario %s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1929
#, python-format
msgid "Here is review %(x_cmtID)s written by user %(x_user)s"
-msgstr "Ésta es la reseña %(x_cmtID)s escrita por el usuario %(x_user)s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1931
#, python-format
msgid "Here is comment %(x_cmtID)s written by user %(x_user)s"
-msgstr "Éste es el comentario %(x_cmtID)s escrito por el usuario %(x_user)s"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1937
msgid "Here are all reported reviews sorted by the most reported"
-msgstr "Estas son todas las reseñas denunciadas, ordenadas de más a menos"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1939
msgid "Here are all reported comments sorted by the most reported"
-msgstr "Estos son todos los comentarios denunciados, ordenados de más a menos"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1944
#, python-format
msgid "Here are all reviews for record %i, sorted by the most reported"
-msgstr "Reseñas del registro %i, ordenadas de más a menos denuncias"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1945
msgid "Show comments"
-msgstr "Ver comentarios"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1947
#, python-format
msgid "Here are all comments for record %i, sorted by the most reported"
-msgstr "Comentarios al registro %i, ordenados de más a menos denuncias"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1948
msgid "Show reviews"
-msgstr "Visualizar las reseñas"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1973
#: modules/webcomment/lib/webcomment_templates.py:1997
#: modules/webcomment/lib/webcomment_templates.py:2023
msgid "comment ID"
-msgstr "número de comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1973
msgid "successfully deleted"
-msgstr "suprimido correctamente"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:1997
msgid "successfully undeleted"
-msgstr "recuperado correctamente"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2023
msgid "successfully suppressed abuse report"
-msgstr "eliminada correctamente la denuncia de abuso"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2040
msgid "Not yet reviewed"
-msgstr "Sin ninguna reseña"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2108
#, python-format
msgid ""
"The following review was sent to %(CFG_SITE_NAME)s by %(user_nickname)s:"
-msgstr "Se ha enviado esta reseña a %(CFG_SITE_NAME)s por %(user_nickname)s:"
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2109
#, python-format
msgid ""
"The following comment was sent to %(CFG_SITE_NAME)s by %(user_nickname)s:"
msgstr ""
-"Se ha enviado este comentario a %(CFG_SITE_NAME)s por %(user_nickname)s:"
#: modules/webcomment/lib/webcomment_templates.py:2136
msgid "This is an automatic message, please don't reply to it."
-msgstr "Este es un mensaje automático, no lo responda."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2138
#, python-format
msgid "To post another comment, go to <%(x_url)s> instead."
-msgstr "Para publicar otro comentario, debe ir a <%(x_url)s>."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2143
#, python-format
msgid "To specifically reply to this comment, go to <%(x_url)s>"
msgstr ""
-"Para contestar específicamente a este comentario, debe ir a <%(x_url)s>."
#: modules/webcomment/lib/webcomment_templates.py:2148
#, python-format
msgid "To unsubscribe from this discussion, go to <%(x_url)s>"
-msgstr "Para darse de baja de esta discusión, debe ir a <%(x_url)s>."
+msgstr ""
#: modules/webcomment/lib/webcomment_templates.py:2152
#, python-format
msgid "For any question, please use <%(CFG_SITE_SUPPORT_EMAIL)s>"
msgstr ""
-"Para resolver dudas, póngase en contacto con <%(CFG_SITE_SUPPORT_EMAIL)s>"
#: modules/webcomment/lib/webcomment_templates.py:2219
msgid "Your comment will be lost."
-msgstr "Su comentario se perderá."
+msgstr ""
#: modules/webcomment/lib/webcomment_webinterface.py:261
#: modules/webcomment/lib/webcomment_webinterface.py:493
msgid "Record Not Found"
-msgstr "No se ha encontrado el registro"
+msgstr ""
#: modules/webcomment/lib/webcomment_webinterface.py:394
#, python-format
msgid ""
"The size of file \\\"%s\\\" (%s) is larger than maximum allowed file size "
"(%s). Select files again."
msgstr ""
-"El tamaño del fichero \\\"%s\\\" (%s) es mayor que el máximo permitido (%s). "
-"Vuelva a seleccionar los ficheros."
#: modules/webcomment/lib/webcomment_webinterface.py:476
#: modules/websubmit/lib/websubmit_templates.py:2668
#: modules/websubmit/lib/websubmit_templates.py:2669
msgid "Add Comment"
-msgstr "Añadir comentario"
+msgstr ""
#: modules/webcomment/lib/webcomment_webinterface.py:734
#: modules/webcomment/lib/webcomment_webinterface.py:768
msgid "Page Not Found"
-msgstr "No se ha encontrado la página"
+msgstr ""
#: modules/webcomment/lib/webcomment_webinterface.py:735
msgid "The requested comment could not be found"
-msgstr "No se ha encontrado el comentario solicitado"
+msgstr ""
#: modules/webcomment/lib/webcomment_webinterface.py:769
msgid "The requested file could not be found"
-msgstr "No se ha encontrado el fichero solicitado"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:45
#: modules/webcomment/web/admin/webcommentadmin.py:59
#: modules/webcomment/web/admin/webcommentadmin.py:83
#: modules/webcomment/web/admin/webcommentadmin.py:126
#: modules/webcomment/web/admin/webcommentadmin.py:164
#: modules/webcomment/web/admin/webcommentadmin.py:192
#: modules/webcomment/web/admin/webcommentadmin.py:228
#: modules/webcomment/web/admin/webcommentadmin.py:266
msgid "WebComment Admin"
-msgstr "Administración de WebComment"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:50
#: modules/webcomment/web/admin/webcommentadmin.py:88
#: modules/webcomment/web/admin/webcommentadmin.py:131
#: modules/webcomment/web/admin/webcommentadmin.py:197
#: modules/webcomment/web/admin/webcommentadmin.py:233
#: modules/webcomment/web/admin/webcommentadmin.py:271
#: modules/websearch/lib/websearch_webinterface.py:1563
#: modules/websearch/web/admin/websearchadmin.py:1040
#: modules/websession/lib/websession_webinterface.py:937
#: modules/webstyle/lib/webstyle_templates.py:585
#: modules/webjournal/web/admin/webjournaladmin.py:390
#: modules/bibcheck/web/admin/bibcheckadmin.py:331
msgid "Internal Error"
-msgstr "Error Interno"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid "Delete/Undelete Reviews"
-msgstr "Suprimir/recuperar reseñas"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid "Delete/Undelete Comments"
-msgstr "Suprimir/recuperar comentarios"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:102
msgid " or Suppress abuse reports"
-msgstr " o eliminar las denuncias de abuso"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:242
msgid "View all reported users"
-msgstr "Visualizar todos los usuarios denunciados"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:289
msgid "Delete comments"
-msgstr "Suprimir comentarios"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:292
msgid "Suppress abuse reports"
-msgstr "Eliminar las denuncias de abuso"
+msgstr ""
#: modules/webcomment/web/admin/webcommentadmin.py:295
msgid "Undelete comments"
-msgstr "Recuperar comentarios eliminados"
+msgstr ""
#: modules/webmessage/lib/webmessage.py:58
#: modules/webmessage/lib/webmessage.py:137
#: modules/webmessage/lib/webmessage.py:203
msgid "Sorry, this message in not in your mailbox."
-msgstr "Por desgracia, este mensaje no está en su buzón."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:75
#: modules/webmessage/lib/webmessage.py:219
msgid "This message does not exist."
-msgstr "Este mensaje no existe."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:144
msgid "The message could not be deleted."
-msgstr "El mensaje no se ha podido suprimir."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:146
msgid "The message was successfully deleted."
-msgstr "El mensaje se ha suprimido correctamente."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:162
msgid "Your mailbox has been emptied."
-msgstr "Se ha vaciado su buzón."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:368
#, python-format
msgid "The chosen date (%(x_year)i/%(x_month)i/%(x_day)i) is invalid."
-msgstr "La fecha escogida (%(x_year)i-%(x_month)i-%(x_day)i) no es válida"
+msgstr ""
#: modules/webmessage/lib/webmessage.py:377
msgid "Please enter a user name or a group name."
-msgstr "Introduzca un nombre de usuario o de grupo."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:381
#, python-format
msgid ""
"Your message is too long, please edit it. Maximum size allowed is %i "
"characters."
msgstr ""
-"Su mensaje es demasiado largo, edítelo por favor. El tamaño máximo es de %i "
-"caracteres."
#: modules/webmessage/lib/webmessage.py:396
#, python-format
msgid "Group %s does not exist."
-msgstr "El grupo %s no existe"
+msgstr ""
#: modules/webmessage/lib/webmessage.py:421
#, python-format
msgid "User %s does not exist."
-msgstr "El usuario %s no existe"
+msgstr ""
#: modules/webmessage/lib/webmessage.py:434
#: modules/webmessage/lib/webmessage_webinterface.py:145
#: modules/webmessage/lib/webmessage_webinterface.py:242
msgid "Write a message"
-msgstr "Escriba un mensaje"
+msgstr ""
#: modules/webmessage/lib/webmessage.py:449
msgid ""
"Your message could not be sent to the following recipients due to their "
"quota:"
msgstr ""
-"No se ha podido enviar su mensaje a los siguientes destinatarios debido a su "
-"cuota:"
#: modules/webmessage/lib/webmessage.py:453
msgid "Your message has been sent."
-msgstr "Su mensaje se ha enviado."
+msgstr ""
#: modules/webmessage/lib/webmessage.py:458
#: modules/webmessage/lib/webmessage_templates.py:472
#: modules/webmessage/lib/webmessage_webinterface.py:87
#: modules/webmessage/lib/webmessage_webinterface.py:311
#: modules/webmessage/lib/webmessage_webinterface.py:357
#: modules/websession/lib/websession_templates.py:607
msgid "Your Messages"
-msgstr "Sus mensajes"
+msgstr ""
-# Debe traducirse igual que el 'Subject' del correo electrónico
#: modules/webmessage/lib/webmessage_templates.py:86
#: modules/bibcirculation/lib/bibcirculation_templates.py:5322
msgid "Subject"
-msgstr "Asunto"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:87
msgid "Sender"
-msgstr "Remitente"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:96
msgid "No messages"
-msgstr "Sin mensajes"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:100
msgid "No subject"
-msgstr "Sin asunto"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:146
msgid "Write new message"
-msgstr "Escriba el mensaje"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:147
msgid "Delete All"
-msgstr "Suprimirlos todo"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:189
msgid "Re:"
-msgstr "Re:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:281
msgid "Send later?"
-msgstr "¿Enviar más tarde?"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:282
#: modules/websubmit/lib/websubmit_templates.py:3080
msgid "To:"
-msgstr "A:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:283
msgid "Users"
-msgstr "Usuarios"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:284
msgid "Groups"
-msgstr "Grupos"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:285
#: modules/webmessage/lib/webmessage_templates.py:447
#: modules/websubmit/lib/websubmit_templates.py:3081
msgid "Subject:"
-msgstr "Asunto:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:286
#: modules/websubmit/lib/websubmit_templates.py:3082
msgid "Message:"
-msgstr "Mensaje:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:287
#: modules/websubmit/lib/websubmit_templates.py:3083
msgid "SEND"
-msgstr "ENVIAR"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:446
msgid "From:"
-msgstr "De:"
+msgstr ""
-# 'en' o 'el'?
#: modules/webmessage/lib/webmessage_templates.py:448
msgid "Sent on:"
-msgstr "Enviado el:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:449
msgid "Received on:"
-msgstr "Recibido el:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:450
msgid "Sent to:"
-msgstr "Enviado a:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:451
msgid "Sent to groups:"
-msgstr "Enviado a los grupos:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:452
msgid "REPLY"
-msgstr "CONTESTAR"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:453
msgid "DELETE"
-msgstr "SUPRIMIR"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:506
msgid "Are you sure you want to empty your whole mailbox?"
-msgstr "¿Está seguro de que desea vaciar todo su buzón?"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:582
#, python-format
msgid "Quota used: %(x_nb_used)i messages out of max. %(x_nb_total)i"
-msgstr "Cuota usada: %(x_nb_used)i mensajes de un máximo de %(x_nb_total)i"
+msgstr ""
-# Una?
#: modules/webmessage/lib/webmessage_templates.py:600
msgid "Please select one or more:"
-msgstr "Seleccione uno o más:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:631
msgid "Add to users"
-msgstr "Añadir a los usuarios"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:633
msgid "Add to groups"
-msgstr "Añadir a los grupos"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:636
msgid "No matching user"
-msgstr "No se ha encontrado ningún usuario que coincida"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:638
#: modules/websession/lib/websession_templates.py:1819
msgid "No matching group"
-msgstr "No se ha encontrado ningún grupo que coincida"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:675
msgid "Find users or groups:"
-msgstr "Buscar usuarios o grupos:"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:676
msgid "Find a user"
-msgstr "Buscar un usuario"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:677
msgid "Find a group"
-msgstr "Buscar un grupo"
+msgstr ""
#: modules/webmessage/lib/webmessage_templates.py:692
#, python-format
msgid "You have %(x_nb_new)s new messages out of %(x_nb_total)s messages"
-msgstr "Tiene %(x_nb_new)s mensajes nuevos de un total de %(x_nb_total)s"
+msgstr ""
#: modules/webmessage/lib/webmessage_webinterface.py:82
#: modules/webmessage/lib/webmessage_webinterface.py:134
#: modules/webmessage/lib/webmessage_webinterface.py:228
#: modules/webmessage/lib/webmessage_webinterface.py:305
#: modules/webmessage/lib/webmessage_webinterface.py:351
#: modules/webmessage/lib/webmessage_webinterface.py:397
msgid "You are not authorized to use messages."
-msgstr "No está autorizado a utilitzar mensajes."
+msgstr ""
#: modules/webmessage/lib/webmessage_webinterface.py:403
msgid "Read a message"
-msgstr "Lea un mensaje"
+msgstr ""
#: modules/websearch/lib/search_engine.py:833
#: modules/websearch/lib/search_engine.py:860
#: modules/websearch/lib/search_engine.py:4715
#: modules/websearch/lib/search_engine.py:4768
msgid "Search Results"
-msgstr "Resultados de la búsqueda"
+msgstr ""
#: modules/websearch/lib/search_engine.py:973
#: modules/websearch/lib/websearch_templates.py:1174
msgid "any day"
-msgstr "cualquier día"
+msgstr ""
#: modules/websearch/lib/search_engine.py:979
#: modules/websearch/lib/websearch_templates.py:1186
msgid "any month"
-msgstr "cualquier mes"
+msgstr ""
#: modules/websearch/lib/search_engine.py:987
#: modules/websearch/lib/websearch_templates.py:1200
msgid "any year"
-msgstr "cualquier año"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1028
#: modules/websearch/lib/search_engine.py:1047
msgid "any public collection"
-msgstr "qualquier colección pública"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1032
msgid "remove this collection"
-msgstr "eliminar esta colección"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1043
msgid "add another collection"
-msgstr "añadir otra colección"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1053
#: modules/websearch/lib/websearch_webcoll.py:592
msgid "rank by"
-msgstr "ordenar por"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1177
#: modules/websearch/lib/websearch_webcoll.py:562
msgid "latest first"
-msgstr "el último primero"
+msgstr ""
#: modules/websearch/lib/search_engine.py:1827
msgid "No values found."
-msgstr "No se han encontrado valores."
+msgstr ""
#: modules/websearch/lib/search_engine.py:1945
msgid ""
"Warning: full-text search is only available for a subset of papers mostly "
"from 2006-2011."
msgstr ""
-"Atención: la búsqueda a texto completo sólo está disponible para un "
-"subconjunto de documentos, mayoritariamente de entre 2006-2011."
#: modules/websearch/lib/search_engine.py:1947
msgid ""
"Warning: figure caption search is only available for a subset of papers "
"mostly from 2008-2011."
msgstr ""
-"Atención: la búsqueda en los pies de imágenes sólo está disponible para un "
-"subconjunto de documentos, mayoritariamente de entre 2006-2011."
#: modules/websearch/lib/search_engine.py:1953
#, python-format
msgid "There is no index %s. Searching for %s in all fields."
-msgstr "No existe el índice %s. Se buscará %s en todos los campos."
+msgstr ""
#: modules/websearch/lib/search_engine.py:1957
#, python-format
msgid "Instead searching %s."
-msgstr "En cambio se buscará %s."
+msgstr ""
#: modules/websearch/lib/search_engine.py:1963
msgid "Search term too generic, displaying only partial results..."
msgstr ""
-"Término de búsqueda demasiado genérico, solo se mostrarán unos resultados "
-"parciales..."
#: modules/websearch/lib/search_engine.py:1966
msgid ""
"No phrase index available for fulltext yet, looking for word combination..."
msgstr ""
-"Todavía no hay índice de frases para el texto completo, buscando por "
-"combinación de palabras..."
#: modules/websearch/lib/search_engine.py:2006
#, python-format
msgid "No exact match found for %(x_query1)s, using %(x_query2)s instead..."
msgstr ""
-"No se han encontrado coincidencias con %(x_query1)s, pero utilizando en su "
-"lugar %(x_query2)s..."
#: modules/websearch/lib/search_engine.py:2016
#: modules/websearch/lib/search_engine.py:2025
#: modules/websearch/lib/search_engine.py:4687
#: modules/websearch/lib/search_engine.py:4725
#: modules/websearch/lib/search_engine.py:4776
#: modules/websubmit/lib/websubmit_webinterface.py:112
#: modules/websubmit/lib/websubmit_webinterface.py:155
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:839
msgid "Requested record does not seem to exist."
-msgstr "El registro solicitado no existe."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2148
msgid ""
"Search syntax misunderstood. Ignoring all parentheses in the query. If this "
"doesn't help, please check your search and try again."
msgstr ""
-"No se entiende la sintaxis de su búsqueda. Se ignorarán todos los "
-"paréntesis de la búsqueda. Si así no funciona, repase su búsqueda y vuelva "
-"a intentarlo."
#: modules/websearch/lib/search_engine.py:2573
#, python-format
msgid ""
"No match found in collection %(x_collection)s. Other public collections gave "
"%(x_url_open)s%(x_nb_hits)d hits%(x_url_close)s."
msgstr ""
-"No se ha encontrado ninguna coincidencia en la coleción %(x_collection)s. "
-"Las otras colecciones públicas dieron %(x_url_open)s%(x_nb_hits)d resultados"
-"%(x_url_close)s."
#: modules/websearch/lib/search_engine.py:2582
msgid ""
"No public collection matched your query. If you were looking for a non-"
"public document, please choose the desired restricted collection first."
msgstr ""
-"Ninguna colección pública coincide con su búsqueda. Si estaba buscando "
-"documentos no públicos, por favor escoja primero la colección restringida "
-"deseada."
#: modules/websearch/lib/search_engine.py:2696
msgid "No match found, please enter different search terms."
-msgstr "No se han encontrado resultados. Use términos de búsqueda distintos."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2702
#, python-format
msgid "There are no records referring to %s."
-msgstr "No hay registros que se refieran a %s."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2704
#, python-format
msgid "There are no records cited by %s."
-msgstr "No hay registros citados por a %s."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2709
#, python-format
msgid "No word index is available for %s."
-msgstr "No hay índices de palabras disponible para %s."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2720
#, python-format
msgid "No phrase index is available for %s."
-msgstr "No hay índices de frases disponible para %s."
+msgstr ""
#: modules/websearch/lib/search_engine.py:2767
#, python-format
msgid ""
"Search term %(x_term)s inside index %(x_index)s did not match any record. "
"Nearest terms in any collection are:"
msgstr ""
-"El término de búsqueda %(x_term)s en el índice %(x_index)s no se ha "
-"encontrado en ningún registro. Los términos aproximados en cualquier "
-"colección son:"
#: modules/websearch/lib/search_engine.py:2771
#, python-format
msgid ""
"Search term %s did not match any record. Nearest terms in any collection are:"
msgstr ""
-"El término de búsqueda %s no se ha encontrado en ningún registro. Los "
-"términos aproximados, en cualquier las colección, son:"
#: modules/websearch/lib/search_engine.py:3486
#, python-format
msgid ""
"Sorry, sorting is allowed on sets of up to %d records only. Using default "
"sort order."
msgstr ""
-"Sólo se puede ordenar en conjuntos de hasta %d registros. Ordenado por "
-"defecto (\"primero los más recientes\")."
#: modules/websearch/lib/search_engine.py:3510
#, python-format
msgid ""
"Sorry, %s does not seem to be a valid sort option. Choosing title sort "
"instead."
-msgstr "No es posible ordenar por %s. Ha quedado ordenado por título."
+msgstr ""
#: modules/websearch/lib/search_engine.py:3703
#: modules/websearch/lib/search_engine.py:4012
#: modules/websearch/lib/search_engine.py:4191
#: modules/websearch/lib/search_engine.py:4214
#: modules/websearch/lib/search_engine.py:4222
#: modules/websearch/lib/search_engine.py:4230
#: modules/websearch/lib/search_engine.py:4276
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:837
msgid "The record has been deleted."
-msgstr "El registro se ha suprimido."
+msgstr ""
#: modules/websearch/lib/search_engine.py:3901
msgid "Use different search terms."
-msgstr "Use términos de búsqueda distintos."
+msgstr ""
#: modules/websearch/lib/search_engine.py:4997
msgid "No match within your time limits, discarding this condition..."
msgstr ""
-"No se han encontrado resultados dentro de los límites de tiempo "
-"especificados. Descartando esta condición..."
#: modules/websearch/lib/search_engine.py:5024
msgid "No match within your search limits, discarding this condition..."
msgstr ""
-"No se han encontrado resultados dentro de los límites especificados, "
-"descartando esta condición..."
#: modules/websearch/lib/websearchadminlib.py:3393
msgid "Information"
-msgstr "Información:"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3394
msgid "References"
-msgstr "Referencias"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3395
msgid "Citations"
-msgstr "Citas"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3396
msgid "Keywords"
-msgstr "Palabras clave"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3397
msgid "Discussion"
-msgstr "Discusión"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3398
msgid "Usage statistics"
-msgstr "Estadísticas de uso"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3399
msgid "Files"
-msgstr "Ficheros"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3400
msgid "Plots"
-msgstr "Gráficos"
+msgstr ""
#: modules/websearch/lib/websearchadminlib.py:3401
msgid "Holdings"
-msgstr "Disponibilidad"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:458
#, python-format
msgid "Search on %(x_CFG_SITE_NAME_INTL)s"
-msgstr "Buscar en %(x_CFG_SITE_NAME_INTL)s"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:682
#: modules/websearch/lib/websearch_templates.py:831
#, python-format
msgid "Search %s records for:"
-msgstr "Buscar en %s registros por:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:734
msgid "less"
-msgstr "menos"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:735
#: modules/websearch/lib/websearch_templates.py:1508
#: modules/websearch/lib/websearch_templates.py:3893
#: modules/websearch/lib/websearch_templates.py:3970
#: modules/websearch/lib/websearch_templates.py:4030
msgid "more"
-msgstr "más"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:740
#, python-format
msgid "Example: %(x_sample_search_query)s"
-msgstr "Ejemplo: %(x_sample_search_query)s"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:752
#: modules/websearch/lib/websearch_templates.py:2129
#, python-format
msgid "Search in %(x_collection_name)s"
-msgstr "Búsqueda en %(x_collection_name)s"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:756
#: modules/websearch/lib/websearch_templates.py:2133
msgid "Search everywhere"
-msgstr "Buscar en todas partes"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:790
#: modules/websearch/lib/websearch_templates.py:867
#: modules/websearch/lib/websearch_templates.py:2101
#: modules/websearch/lib/websearch_templates.py:2159
msgid "Advanced Search"
-msgstr "Búsqueda avanzada"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:928
#, python-format
msgid "Search %s records for"
-msgstr "Buscar en %s registros:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:979
#: modules/websearch/lib/websearch_templates.py:2017
msgid "Simple Search"
-msgstr "Búsqueda simple"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1012
msgid "Search options:"
-msgstr "Opciones de búsqueda:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1059
#: modules/websearch/lib/websearch_templates.py:2255
msgid "Added/modified since:"
-msgstr "Añadido/modificado desde:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1060
#: modules/websearch/lib/websearch_templates.py:2256
msgid "until:"
-msgstr "hasta:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1065
#: modules/websearch/lib/websearch_templates.py:2298
msgid "Sort by:"
-msgstr "Ordenar por:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1066
#: modules/websearch/lib/websearch_templates.py:2299
msgid "Display results:"
-msgstr "Mostrar resultados:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1067
#: modules/websearch/lib/websearch_templates.py:2300
msgid "Output format:"
-msgstr "Formato de visualización:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1227
msgid "Added since:"
-msgstr "Añadido a partir de:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1228
msgid "Modified since:"
-msgstr "Modificado desde:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1265
msgid "Focus on:"
-msgstr "Enfocado a:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1328
msgid "restricted"
-msgstr "restringido"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1355
msgid "Search also:"
-msgstr "Busque también:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1426
msgid ""
"This collection is restricted. If you are authorized to access it, please "
"click on the Search button."
msgstr ""
-"Esta colección es restringida. Si tiene acceso, haga clic en el botón de "
-"Buscar."
#: modules/websearch/lib/websearch_templates.py:1441
msgid ""
"This is a hosted external collection. Please click on the Search button to "
"see its content."
msgstr ""
-"Esta es una colección externa alojada. Pulse el botón de búsqueda para ver "
-"su contenido."
#: modules/websearch/lib/websearch_templates.py:1456
msgid "This collection does not contain any document yet."
-msgstr "Esta colección no contiene aún ningún documento."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1523
msgid "Latest additions:"
-msgstr "Últimas adquisiciones:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1626
#: modules/websearch/lib/websearch_templates.py:3361
#, python-format
msgid "Cited by %i records"
-msgstr "Citado por %i registros"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1692
#, python-format
msgid "Words nearest to %(x_word)s inside %(x_field)s in any collection are:"
msgstr ""
-"Las palabras más cercanas a %(x_word)s en %(x_field)s, en cualquier "
-"colección, son:"
#: modules/websearch/lib/websearch_templates.py:1695
#, python-format
msgid "Words nearest to %(x_word)s in any collection are:"
-msgstr "Las palabras más cercanas a %(x_word)s, en cualquier colección, son:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1787
msgid "Hits"
-msgstr "Resultados"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:1866
#: modules/websearch/lib/websearch_templates.py:2518
#: modules/websearch/lib/websearch_templates.py:2708
#: modules/bibedit/lib/bibeditmulti_templates.py:657
msgid "next"
-msgstr "siguiente"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2204
msgid "collections"
-msgstr "colecciones"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2226
msgid "Limit to:"
-msgstr "Limitar a:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2268
#: modules/websearch/lib/websearch_webcoll.py:610
msgid "results"
-msgstr "resultados"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2304
#: modules/websearch/lib/websearch_webcoll.py:580
msgid "asc."
-msgstr "asc."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2307
#: modules/websearch/lib/websearch_webcoll.py:581
msgid "desc."
-msgstr "desc."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2313
#: modules/websearch/lib/websearch_webcoll.py:624
msgid "single list"
-msgstr "lista única"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2316
#: modules/websearch/lib/websearch_webcoll.py:623
msgid "split by collection"
-msgstr "reagrupar por colección"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2354
msgid "MARC tag"
-msgstr "Etiqueta MARC"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2469
#: modules/websearch/lib/websearch_templates.py:2474
#: modules/websearch/lib/websearch_templates.py:2652
#: modules/websearch/lib/websearch_templates.py:2664
#: modules/websearch/lib/websearch_templates.py:2985
#: modules/websearch/lib/websearch_templates.py:2994
#, python-format
msgid "%s records found"
-msgstr "Encontrados %s registros"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2501
#: modules/websearch/lib/websearch_templates.py:2691
#: modules/bibedit/lib/bibeditmulti_templates.py:655
msgid "begin"
-msgstr "inicio"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2506
#: modules/websearch/lib/websearch_templates.py:2696
#: modules/websubmit/lib/websubmit_templates.py:1241
#: modules/bibedit/lib/bibeditmulti_templates.py:656
msgid "previous"
-msgstr "anterior"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2525
#: modules/websearch/lib/websearch_templates.py:2715
msgid "end"
-msgstr "final"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2545
#: modules/websearch/lib/websearch_templates.py:2735
msgid "jump to record:"
-msgstr "ir al registro:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2558
#: modules/websearch/lib/websearch_templates.py:2748
#, python-format
msgid "Search took %s seconds."
-msgstr "La búsqueda tardó %s segundos."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2952
#, python-format
msgid ""
"%(x_fmt_open)sResults overview:%(x_fmt_close)s Found %(x_nb_records)s "
"records in %(x_nb_seconds)s seconds."
msgstr ""
-"%(x_fmt_open)sResultados globales:%(x_fmt_close)s %(x_nb_records)s registros "
-"encontrados en %(x_nb_seconds)s segundos."
#: modules/websearch/lib/websearch_templates.py:2964
#, python-format
msgid "%(x_fmt_open)sResults overview%(x_fmt_close)s"
-msgstr "%(x_fmt_open)sResultados globales%(x_fmt_close)s"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:2972
#, python-format
msgid ""
"%(x_fmt_open)sResults overview:%(x_fmt_close)s Found at least "
"%(x_nb_records)s records in %(x_nb_seconds)s seconds."
msgstr ""
-"%(x_fmt_open)sResultados globales:%(x_fmt_close)s Al menos %(x_nb_records)s "
-"registros encontrados en %(x_nb_seconds)s segundos."
#: modules/websearch/lib/websearch_templates.py:3049
msgid "No results found..."
-msgstr "No se han encontrado resultados..."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3082
msgid ""
"Boolean query returned no hits. Please combine your search terms differently."
msgstr ""
-"La combinación booleana no ha dado resultados. Por favor combine los "
-"términos de búsqueda de otra manera."
#: modules/websearch/lib/websearch_templates.py:3114
msgid "See also: similar author names"
-msgstr "Vea también: autores con nombres similares"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3362
msgid "Cited by 1 record"
-msgstr "Citado por 1 registro"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3377
#, python-format
msgid "%i comments"
-msgstr "%i comentarios"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3378
msgid "1 comment"
-msgstr "1 comentario"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3388
#, python-format
msgid "%i reviews"
-msgstr "%i reseñas"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3389
msgid "1 review"
-msgstr "1 reseña"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3602
#: modules/websearch/lib/websearch_webinterface.py:1580
#, python-format
msgid "Collection %s Not Found"
-msgstr "No se ha encontrado la colección %s"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3614
#: modules/websearch/lib/websearch_webinterface.py:1576
#, python-format
msgid "Sorry, collection %s does not seem to exist."
-msgstr "Parece ser que la colección %s no existe."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3616
#: modules/websearch/lib/websearch_webinterface.py:1577
#, python-format
msgid "You may want to start browsing from %s."
-msgstr "Puede comenzar las búsquedas desde %s."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3643
#, python-format
msgid ""
"Set up a personal %(x_url1_open)semail alert%(x_url1_close)s\n"
" or subscribe to the %(x_url2_open)sRSS feed"
"%(x_url2_close)s."
msgstr ""
-"Defina una %(x_url1_open)salerta personal%(x_url1_close)s vía correo "
-"electrónico o subscríbase al %(x_url2_open)scanal RSS%(x_url2_close)s."
#: modules/websearch/lib/websearch_templates.py:3650
#, python-format
msgid "Subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s."
-msgstr "Subscríbase al %(x_url2_open)scanal RSS%(x_url2_close)s."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3659
msgid "Interested in being notified about new results for this query?"
-msgstr "¿Le interesa recibir alertas sobre nuevos resultados de esta búsqueda?"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3746
#: modules/websearch/lib/websearch_templates.py:3796
msgid "Back to search"
-msgstr "Volver a la búsqueda"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3756
#: modules/websearch/lib/websearch_templates.py:3772
#: modules/websearch/lib/websearch_templates.py:3787
#, python-format
msgid "%s of"
-msgstr "%s de"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3886
msgid "People who downloaded this document also downloaded:"
-msgstr "La gente que descargó este documento también descargó:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3902
msgid "People who viewed this page also viewed:"
-msgstr "La gente que vio esta página también vio:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:3956
#, python-format
msgid "Cited by: %s records"
-msgstr "Citado por: %s registros"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4023
#, python-format
msgid "Co-cited with: %s records"
-msgstr "Co-citado con: %s registros"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4065
#, python-format
msgid ".. of which self-citations: %s records"
-msgstr ".. de los cuales son auto-citas: %s registros"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4157
msgid "Name variants"
-msgstr "Variantes del nombre"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4168
msgid "No Name Variants"
-msgstr "Sin nombres variantes"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4176
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:763
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:920
msgid "Papers"
-msgstr "Documentos"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4180
msgid "downloaded"
-msgstr "descargado"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4181
msgid "times"
-msgstr "veces"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4210
msgid "No Papers"
-msgstr "Ningún documento"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4223
msgid "unknown affiliation"
-msgstr "afiliación desconocida"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4230
msgid "No Affiliations"
-msgstr "Sin afiliaciones"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4232
msgid "Affiliations"
-msgstr "Afiliaciones"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4248
msgid "No Keywords"
-msgstr "Sin palabras clave"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4251
msgid "Frequent keywords"
-msgstr "Palabras clave frecuentes"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4256
msgid "Frequent co-authors"
-msgstr "Co-autores frecuentes"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4267
msgid "No Frequent Co-authors"
-msgstr "Sin co-autores frecuentes"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4291
msgid "This is me. Verify my publication list."
-msgstr "Soy yo mismo. Verifique mi lista de publicaciones."
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4330
msgid "Citations:"
-msgstr "Citaciones:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4334
msgid "No Citation Information available"
-msgstr "No hay información de citas disponible"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4401
msgid "Citation summary results"
-msgstr "Recuento de citaciones"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4406
msgid "Total number of citable papers analyzed:"
-msgstr "Número total de artículos citables analizados:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4429
msgid "Total number of citations:"
-msgstr "Número total de citaciones:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4434
msgid "Average citations per paper:"
-msgstr "Media de citas por artículo:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4444
msgid "Total number of citations excluding self-citations:"
-msgstr "Número total de citaciones excluyendo las autocitas:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4449
msgid "Average citations per paper excluding self-citations:"
-msgstr "Media de citas por artículo excluyendo las autocitas:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4458
msgid "Breakdown of papers by citations:"
-msgstr "Clasificación de artículos por citas:"
+msgstr ""
#: modules/websearch/lib/websearch_templates.py:4492
msgid "Additional Citation Metrics"
-msgstr "Otras métricas de citas"
+msgstr ""
#: modules/websearch/lib/websearch_webinterface.py:735
#: modules/websearch/lib/websearch_webinterface.py:747
#, python-format
msgid ""
"We're sorry. The requested author \"%s\" seems not to be listed on the "
"specified paper."
-msgstr "Por desgracia, el autor \"%s\" no aparece en este artículo."
+msgstr ""
#: modules/websearch/lib/websearch_webinterface.py:738
#: modules/websearch/lib/websearch_webinterface.py:750
msgid "Please try the following link to start a broader search on the author: "
msgstr ""
-"Pinchando en el siguiente enlace realizará una búsqueda más amplia del autor:"
#: modules/websearch/lib/websearch_webinterface.py:1163
msgid "You are not authorized to view this area."
-msgstr "No está autorizado a ver esta área."
+msgstr ""
#: modules/websearch/lib/websearch_webinterface.py:1582
msgid "Not found"
-msgstr "No se ha encontrado"
+msgstr ""
#: modules/websearch/lib/websearch_external_collections.py:145
msgid "in"
-msgstr "en"
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:51
msgid ""
"Haven't found what you were looking for? Try your search on other servers:"
-msgstr "¿No ha encontrado lo que estaba buscando? Intente su búsqueda en:"
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:79
msgid "External collections results overview:"
-msgstr "Resumen de los resultados de las colecciones externas:"
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:119
msgid "Search timed out."
-msgstr "Tiempo de búsqueda excedido."
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:120
msgid ""
"The external search engine has not responded in time. You can check its "
"results here:"
msgstr ""
-"El buscador externo no ha respondido a tiempo. Puede ver los resultados "
-"aquí:"
#: modules/websearch/lib/websearch_external_collections_templates.py:146
#: modules/websearch/lib/websearch_external_collections_templates.py:154
#: modules/websearch/lib/websearch_external_collections_templates.py:167
msgid "No results found."
-msgstr "No se han encontrado resultados."
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:150
#, python-format
msgid "%s results found"
-msgstr "Se han encontrado %s resultados"
+msgstr ""
#: modules/websearch/lib/websearch_external_collections_templates.py:152
#, python-format
msgid "%s seconds"
-msgstr "%s segundos"
+msgstr ""
#: modules/websearch/lib/websearch_webcoll.py:645
msgid "brief"
-msgstr "breve"
+msgstr ""
#: modules/websession/lib/webaccount.py:116
#, python-format
msgid ""
"You are logged in as guest. You may want to %(x_url_open)slogin"
"%(x_url_close)s as a regular user."
msgstr ""
-"Está conectado como visitante. Quizás quiera %(x_url_open)sidentificarse"
-"%(x_url_close)s como usuario conocido."
#: modules/websession/lib/webaccount.py:120
#, python-format
msgid ""
"The %(x_fmt_open)sguest%(x_fmt_close)s users need to %(x_url_open)sregister"
"%(x_url_close)s first"
msgstr ""
-"Los %(x_fmt_open)svisitantes%(x_fmt_close)s antes han de %(x_url_open)sdarse "
-"de alta%(x_url_close)s."
#: modules/websession/lib/webaccount.py:125
msgid "No queries found"
-msgstr "No se ha encontrado ninguna búsqueda"
+msgstr ""
#: modules/websession/lib/webaccount.py:367
msgid ""
"This collection is restricted. If you think you have right to access it, "
"please authenticate yourself."
msgstr ""
-"Esta colección es restringida. Si cree que tiene derecho a ella, "
-"identifíquese."
#: modules/websession/lib/webaccount.py:368
msgid ""
"This file is restricted. If you think you have right to access it, please "
"authenticate yourself."
msgstr ""
-"Este documento es restringido. Si cree que tiene derecho a él, "
-"identifíquese."
#: modules/websession/lib/websession_templates.py:93
msgid "External account settings"
-msgstr "Configuración de la cuenta externa"
+msgstr ""
#: modules/websession/lib/websession_templates.py:95
#, python-format
msgid ""
"You can consult the list of your external groups directly in the "
"%(x_url_open)sgroups page%(x_url_close)s."
msgstr ""
-"Puede consultar la lista de sus grupos externos directamente en la "
-"%(x_url_open)spágina de los grupos%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:99
msgid "External user groups"
-msgstr "Grupos de usuarios externos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:156
msgid ""
"If you want to change your email or set for the first time your nickname, "
"please set new values in the form below."
msgstr ""
-"Si desea cambiar su dirección de correo electrónico o definir por primera "
-"vez su alias, ponga los nuevos valores en este formulario."
#: modules/websession/lib/websession_templates.py:157
msgid "Edit login credentials"
-msgstr "Edite las credenciales de identificación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:162
msgid "New email address"
-msgstr "Nueva dirección de correo electrónico"
+msgstr ""
#: modules/websession/lib/websession_templates.py:163
#: modules/websession/lib/websession_templates.py:210
#: modules/websession/lib/websession_templates.py:1036
msgid "mandatory"
-msgstr "obligatorio"
+msgstr ""
#: modules/websession/lib/websession_templates.py:166
msgid "Set new values"
-msgstr "Guardar los nuevos valores"
+msgstr ""
#: modules/websession/lib/websession_templates.py:170
msgid ""
"Since this is considered as a signature for comments and reviews, once set "
"it can not be changed."
msgstr ""
-"Ya que se considera una firma para comentarios y reseñas, una vez definido "
-"no se puede cambiar."
#: modules/websession/lib/websession_templates.py:209
msgid ""
"If you want to change your password, please enter the old one and set the "
"new value in the form below."
msgstr ""
-"Si desea cambiar su contraseña, ponga los valores antiguo y nuevo en este "
-"formulario."
#: modules/websession/lib/websession_templates.py:211
msgid "Old password"
-msgstr "Contraseña antigua"
+msgstr ""
#: modules/websession/lib/websession_templates.py:212
msgid "New password"
-msgstr "Contraseña nueva"
+msgstr ""
#: modules/websession/lib/websession_templates.py:213
#: modules/websession/lib/websession_templates.py:1037
msgid "optional"
-msgstr "opcional"
+msgstr ""
#: modules/websession/lib/websession_templates.py:215
#: modules/websession/lib/websession_templates.py:1040
msgid "The password phrase may contain punctuation, spaces, etc."
-msgstr "La contraseña puede contener puntuación, espacios, etc."
+msgstr ""
#: modules/websession/lib/websession_templates.py:216
msgid "You must fill the old password in order to set a new one."
msgstr ""
-"Tiene que entrar la contraseña antigua para cambiarla por una de nueva."
#: modules/websession/lib/websession_templates.py:217
msgid "Retype password"
-msgstr "Vuelva a escribir la contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:218
msgid "Set new password"
-msgstr "Ponga la contraseña nueva"
+msgstr ""
#: modules/websession/lib/websession_templates.py:223
#, python-format
msgid ""
"If you are using a lightweight CERN account you can\n"
" %(x_url_open)sreset the password%(x_url_close)s."
msgstr ""
-"Si está utilizando una cuenta CERN ligera puede %(x_url_open)sreiniciar la "
-"contraseña%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:229
#, python-format
msgid ""
"You can change or reset your CERN account password by means of the "
"%(x_url_open)sCERN account system%(x_url_close)s."
msgstr ""
-"Puede cambiar o reiniciar la contraseña de su cuenta del CERN vía el "
-"%(x_url_open)ssistema de cuentas del CERN%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:253
msgid "Edit cataloging interface settings"
-msgstr "Editar los parámetros de catalogación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:254
#: modules/websession/lib/websession_templates.py:900
msgid "Username"
-msgstr "Nombre de usuario"
+msgstr ""
#: modules/websession/lib/websession_templates.py:255
#: modules/websession/lib/websession_templates.py:901
#: modules/websession/lib/websession_templates.py:1035
msgid "Password"
-msgstr "Contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:256
#: modules/websession/lib/websession_templates.py:282
#: modules/websession/lib/websession_templates.py:316
msgid "Update settings"
-msgstr "Actualizar los parámetros"
+msgstr ""
#: modules/websession/lib/websession_templates.py:270
msgid "Edit language-related settings"
-msgstr "Editar los parámetros de lengua"
+msgstr ""
#: modules/websession/lib/websession_templates.py:281
msgid "Select desired language of the web interface."
-msgstr "Escoja la lengua que prefiera para la web."
+msgstr ""
#: modules/websession/lib/websession_templates.py:299
msgid "Edit search-related settings"
-msgstr "Editar parámetros de búsqueda"
+msgstr ""
#: modules/websession/lib/websession_templates.py:300
msgid "Show the latest additions box"
-msgstr "Mostrar el texto de las últimas entradas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:302
msgid "Show collection help boxes"
-msgstr "Muestra los textos de ayuda de la colección"
+msgstr ""
#: modules/websession/lib/websession_templates.py:317
msgid "Number of search results per page"
-msgstr "Número de resultados por página"
+msgstr ""
#: modules/websession/lib/websession_templates.py:347
msgid "Edit login method"
-msgstr "Edite el método de identificación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:348
msgid ""
"Please select which login method you would like to use to authenticate "
"yourself"
-msgstr "Seleccione qué método de identificación prefiere para autenticarse"
+msgstr ""
#: modules/websession/lib/websession_templates.py:349
#: modules/websession/lib/websession_templates.py:363
msgid "Select method"
-msgstr "Seleccione el método"
+msgstr ""
#: modules/websession/lib/websession_templates.py:381
#, python-format
msgid ""
"If you have lost the password for your %(sitename)s %(x_fmt_open)sinternal "
"account%(x_fmt_close)s, then please enter your email address in the "
"following form in order to have a password reset link emailed to you."
msgstr ""
-"Si ha perdido la contraseña de la %(x_fmt_open)scuenta interna"
-"%(x_fmt_close)s de %(sitename)s, escriba su dirección electrónica en este "
-"formulario para que le enviemos un enlace para reiniciar su contraseña."
#: modules/websession/lib/websession_templates.py:403
#: modules/websession/lib/websession_templates.py:1033
msgid "Email address"
-msgstr "Dirección de correo electrónico"
+msgstr ""
#: modules/websession/lib/websession_templates.py:404
msgid "Send password reset link"
-msgstr "Enviar el enlace para reiniciar la contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:408
#, python-format
msgid ""
"If you have been using the %(x_fmt_open)sCERN login system%(x_fmt_close)s, "
"then you can recover your password through the %(x_url_open)sCERN "
"authentication system%(x_url_close)s."
msgstr ""
-"Si su cuenta utiliza el %(x_fmt_open)ssistema de identificación del CERN"
-"%(x_fmt_close)s, puede recuperar su contraseña a través del "
-"%(x_url_open)ssistema de autenticación del CERN%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:411
msgid ""
"Note that if you have been using an external login system, then we cannot do "
"anything and you have to ask there."
msgstr ""
-"Tenga en cuenta que si ha estado utilizando un sistema de identificación "
-"externo, sentimos que no podemos hacer nada. Tendrá que preguntarlo allí."
#: modules/websession/lib/websession_templates.py:412
#, python-format
msgid ""
"Alternatively, you can ask %s to change your login system from external to "
"internal."
msgstr ""
-"Alternativamente, puede pedir a %s que le cambie el sistema de "
-"identificación al interno."
#: modules/websession/lib/websession_templates.py:439
#, python-format
msgid ""
"%s offers you the possibility to personalize the interface, to set up your "
"own personal library of documents, or to set up an automatic alert query "
"that would run periodically and would notify you of search results by email."
msgstr ""
-"%s le ofrece la posibilidad de personalizar la interfaz, crear su propia "
-"biblioteca de documentos, o crear alertas automáticas que se ejecuten "
-"periódicamente y le notifiquen del resultado de la búsqueda por correo "
-"electrónico."
#: modules/websession/lib/websession_templates.py:449
#: modules/websession/lib/websession_webinterface.py:276
msgid "Your Settings"
-msgstr "Sus parámetros"
+msgstr ""
#: modules/websession/lib/websession_templates.py:450
msgid ""
"Set or change your account email address or password. Specify your "
"preferences about the look and feel of the interface."
msgstr ""
-"Ponga o cambie la dirección de correo electrónico de esta cuenta o su "
-"contraseña. Especifique sus preferencias sobre el aspecto que desea."
#: modules/websession/lib/websession_templates.py:458
msgid "View all the searches you performed during the last 30 days."
-msgstr "Vea todas las búsquedas que ha realizado durante los últimos 30 días."
+msgstr ""
#: modules/websession/lib/websession_templates.py:466
msgid ""
"With baskets you can define specific collections of items, store interesting "
"records you want to access later or share with others."
msgstr ""
-"Las cestas le permiten definir colecciones específicas de ítems, guardar "
-"registros interesantes para acceder más adelante o compartir con otros."
#: modules/websession/lib/websession_templates.py:475
msgid ""
"Subscribe to a search which will be run periodically by our service. The "
"result can be sent to you via Email or stored in one of your baskets."
msgstr ""
-"Subscríbase a una búsqueda para que se ejecute periódicamente en nuestro "
-"servicio. Podrá recibir el resultado por correo electrónico o guardarlo en "
-"una de sus cestas."
#: modules/websession/lib/websession_templates.py:484
#: modules/websession/lib/websession_templates.py:610
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:126
msgid "Your Loans"
-msgstr "Sus préstamos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:485
msgid ""
"Check out book you have on loan, submit borrowing requests, etc. Requires "
"CERN ID."
msgstr ""
-"Compruebe los libros que tiene en préstamo, solicite reservas, etc. Requiere "
-"el ID del CERN."
#: modules/websession/lib/websession_templates.py:512
msgid ""
"You are logged in as a guest user, so your alerts will disappear at the end "
"of the current session."
msgstr ""
-"Ahora usted está identificado como usuario visitante, con lo que sus alertas "
-"desaparecerán al final de esta sesión."
#: modules/websession/lib/websession_templates.py:535
#, python-format
msgid ""
"You are logged in as %(x_user)s. You may want to a) %(x_url1_open)slogout"
"%(x_url1_close)s; b) edit your %(x_url2_open)saccount settings"
"%(x_url2_close)s."
msgstr ""
-"Usted se ha identificado como %(x_user)s. Ahora puede a) "
-"%(x_url1_open)sdesconectarse%(x_url1_close)s; b) modificar las "
-"%(x_url2_open)spreferencias de su cuenta%(x_url2_close)s."
#: modules/websession/lib/websession_templates.py:616
msgid "Your Alert Searches"
-msgstr "Sus alertas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:622
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour groups%(x_url_close)s you are "
"administering or are a member of."
msgstr ""
-"Puede consultar la lista de %(x_url_open)slos grupos%(x_url_close)s que "
-"administra o de los que forma parte."
#: modules/websession/lib/websession_templates.py:625
#: modules/websession/lib/websession_templates.py:2326
#: modules/websession/lib/websession_webinterface.py:1020
msgid "Your Groups"
-msgstr "Sus grupos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:628
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour submissions%(x_url_close)s "
"and inquire about their status."
msgstr ""
-"Puede consultar la lista de %(x_url_open)ssus envíos%(x_url_close)s e "
-"informarse de su estado."
#: modules/websession/lib/websession_templates.py:631
#: modules/websubmit/web/yoursubmissions.py:160
msgid "Your Submissions"
-msgstr "Sus envíos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:634
#, python-format
msgid ""
"You can consult the list of %(x_url_open)syour approvals%(x_url_close)s with "
"the documents you approved or refereed."
msgstr ""
-"Puede consultar la lista de %(x_url_open)ssus aprobaciones%(x_url_close)s "
-"con los documentos que ha aprobado o revisado."
#: modules/websession/lib/websession_templates.py:637
#: modules/websubmit/web/yourapprovals.py:88
msgid "Your Approvals"
-msgstr "Sus aprobaciones"
+msgstr ""
#: modules/websession/lib/websession_templates.py:641
#, python-format
msgid "You can consult the list of %(x_url_open)syour tickets%(x_url_close)s."
-msgstr "Puede consultar la lista de %(x_url_open)ssus tareas%(x_url_close)s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:644
msgid "Your Tickets"
-msgstr "Sus tareas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:646
#: modules/websession/lib/websession_webinterface.py:625
msgid "Your Administrative Activities"
-msgstr "Sus actividades administrativas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:673
#: modules/bibharvest/lib/oai_harvest_admin.py:470
#: modules/bibharvest/lib/oai_harvest_admin.py:485
msgid "Try again"
-msgstr "Vuélvalo a intentar"
+msgstr ""
#: modules/websession/lib/websession_templates.py:695
#, python-format
msgid ""
"Somebody (possibly you) coming from %(x_ip_address)s has asked\n"
"for a password reset at %(x_sitename)s\n"
"for the account \"%(x_email)s\"."
msgstr ""
-"Alguien (posiblemente usted), desde la dirección %(x_ip_address)s, ha "
-"solicitado un cambio de contraseña en %(x_sitename)s para la cuenta "
-"%(x_email)s. "
#: modules/websession/lib/websession_templates.py:703
msgid "If you want to reset the password for this account, please go to:"
-msgstr "Si quiere reiniciar la contraseña de esta cuenta, vaya a:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:709
#: modules/websession/lib/websession_templates.py:746
msgid "in order to confirm the validity of this request."
-msgstr "para confirmar la validez de esta petición."
+msgstr ""
#: modules/websession/lib/websession_templates.py:710
#: modules/websession/lib/websession_templates.py:747
#, python-format
msgid ""
"Please note that this URL will remain valid for about %(days)s days only."
msgstr ""
-"Tenga en cuenta que esta URL sólo será válida durante unos %(days)s días."
#: modules/websession/lib/websession_templates.py:732
#, python-format
msgid ""
"Somebody (possibly you) coming from %(x_ip_address)s has asked\n"
"to register a new account at %(x_sitename)s\n"
"for the email address \"%(x_email)s\"."
msgstr ""
-"Alguien (posiblemente usted), desde la dirección %(x_ip_address)s, ha "
-"solicitado una cuenta nueva en %(x_sitename)s para la dirección de correo "
-"electrónico %(x_email)s."
#: modules/websession/lib/websession_templates.py:740
msgid "If you want to complete this account registration, please go to:"
-msgstr "Para completar el alta de la cuenta, vaya a:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:766
#, python-format
msgid "Okay, a password reset link has been emailed to %s."
msgstr ""
-"El enlace para reiniciar la contraseña se ha enviado por correo electrónico "
-"a %s."
#: modules/websession/lib/websession_templates.py:781
msgid "Deleting your account"
-msgstr "Borrando su cuenta"
+msgstr ""
#: modules/websession/lib/websession_templates.py:795
msgid "You are no longer recognized by our system."
-msgstr "Ya no está identificado en nuestro sistema."
+msgstr ""
#: modules/websession/lib/websession_templates.py:797
#, python-format
msgid ""
"You are still recognized by the centralized\n"
" %(x_fmt_open)sSSO%(x_fmt_close)s system. You can\n"
" %(x_url_open)slogout from SSO%(x_url_close)s, too."
msgstr ""
-"Usted todavía está reconocido por el sistema central de %(x_fmt_open)sSSO"
-"%(x_fmt_close)s. También puede %(x_url_open)sdesconectar del SSO"
-"%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:804
#, python-format
msgid "If you wish you can %(x_url_open)slogin here%(x_url_close)s."
-msgstr "Si lo desea puede %(x_url_open)sidentificarse aquí%(x_url_close)s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:835
msgid "If you already have an account, please login using the form below."
-msgstr "Si ya tiene una cuenta, identifíquese por favor en este formulario."
+msgstr ""
#: modules/websession/lib/websession_templates.py:839
#, python-format
msgid ""
"If you don't own a CERN account yet, you can register a %(x_url_open)snew "
"CERN lightweight account%(x_url_close)s."
msgstr ""
-"Si todavía no dispone de una cuenta en el CERN, puede crear una "
-"%(x_url_open)scuenta CERN ligera%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:842
#, python-format
msgid ""
"If you don't own an account yet, please %(x_url_open)sregister"
"%(x_url_close)s an internal account."
msgstr ""
-"Si todavía no tiene una cuenta, puede %(x_url_open)sdarse de alta"
-"%(x_url_close)s en una cuenta interna."
#: modules/websession/lib/websession_templates.py:850
#, python-format
msgid "If you don't own an account yet, please contact %s."
-msgstr "Si todavía no tiene una cuenta, póngase en contacto con %s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:873
msgid "Login method:"
-msgstr "Método de identificación:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:902
msgid "Remember login on this computer."
-msgstr "Recordar la identificación en este ordenador."
+msgstr ""
#: modules/websession/lib/websession_templates.py:903
#: modules/websession/lib/websession_templates.py:1182
#: modules/websession/lib/websession_webinterface.py:103
#: modules/websession/lib/websession_webinterface.py:198
#: modules/websession/lib/websession_webinterface.py:748
#: modules/websession/lib/websession_webinterface.py:839
msgid "login"
-msgstr "identificación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:908
#: modules/websession/lib/websession_webinterface.py:529
msgid "Lost your password?"
-msgstr "¿Ha perdido su contraseña?"
+msgstr ""
#: modules/websession/lib/websession_templates.py:916
msgid "You can use your nickname or your email address to login."
msgstr ""
-"Para identificarse puede usar su alias o su dirección de correo electrónico."
#: modules/websession/lib/websession_templates.py:940
msgid ""
"Your request is valid. Please set the new desired password in the following "
"form."
msgstr ""
-"Su petición ha sido validada. Escriba la nueva contraseña en este "
-"formulario."
#: modules/websession/lib/websession_templates.py:963
msgid "Set a new password for"
-msgstr "Definir la nueva contraseña para"
+msgstr ""
#: modules/websession/lib/websession_templates.py:964
msgid "Type the new password"
-msgstr "Escriba la nueva contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:965
msgid "Type again the new password"
-msgstr "Escriba otra vez la nueva contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:966
msgid "Set the new password"
-msgstr "Definir la nueva contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:988
msgid "Please enter your email address and desired nickname and password:"
msgstr ""
-"Introduzca su dirección de correo electrónico así como el alias y contraseña:"
#: modules/websession/lib/websession_templates.py:990
msgid ""
"It will not be possible to use the account before it has been verified and "
"activated."
msgstr ""
-"No será posible usar esta cuenta hasta que se haya verificado y activado."
#: modules/websession/lib/websession_templates.py:1041
msgid "Retype Password"
-msgstr "Vuelva a escribir la contraseña"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1042
#: modules/websession/lib/websession_webinterface.py:942
msgid "register"
-msgstr "darse de alta"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1043
#, python-format
msgid ""
"Please do not use valuable passwords such as your Unix, AFS or NICE "
"passwords with this service. Your email address will stay strictly "
"confidential and will not be disclosed to any third party. It will be used "
"to identify you for personal services of %s. For example, you may set up an "
"automatic alert search that will look for new preprints and will notify you "
"daily of new arrivals by email."
msgstr ""
-"No escoja contraseñas valiosas como las de sus cuentas personales de correo "
-"electrónico o acceso a datos profesionales. Su dirección de correo "
-"electrónico será estrictamente confidencial y no se pasará a terceros. Será "
-"usada para identificar sus servicios personales en %s. Por ejemplo, puede "
-"activar un servicio de alerta automático que busque nuevos registros y le "
-"notifique diariamente de las nuevas entradas por correo electrónico."
#: modules/websession/lib/websession_templates.py:1047
#, python-format
msgid ""
"It is not possible to create an account yourself. Contact %s if you want an "
"account."
msgstr ""
-"No es posible que usted cree una cuenta. Póngase en contacto con %s si "
-"quiere una."
#: modules/websession/lib/websession_templates.py:1073
#, python-format
msgid ""
"You seem to be a guest user. You have to %(x_url_open)slogin%(x_url_close)s "
"first."
msgstr ""
-"Usted parece ser un usuario visitante. Antes tiene que "
-"%(x_url_open)sidentificarse%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:1079
msgid "You are not authorized to access administrative functions."
-msgstr "No está autorizado a acceder a funciones administrativas."
+msgstr ""
#: modules/websession/lib/websession_templates.py:1082
#, python-format
msgid "You are enabled to the following roles: %(x_role)s."
-msgstr "Tiene activados los siguientes roles: %(x_role)s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:1098
msgid "Run BibSword Client"
-msgstr "Ejecutar el cliente BibSword"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1125
msgid "Here are some interesting web admin links for you:"
-msgstr "Aquí tiene algunos enlaces de administración interesantes:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1127
#, python-format
msgid ""
"For more admin-level activities, see the complete %(x_url_open)sAdmin Area"
"%(x_url_close)s."
msgstr ""
-"Para más activitades administrativas, vea toda la %(x_url_open)sZona de "
-"administración%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:1180
msgid "guest"
-msgstr "visitante"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1194
msgid "logout"
-msgstr "salir"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1242
#: modules/webstyle/lib/webstyle_templates.py:435
#: modules/webstyle/lib/webstyle_templates.py:504
msgid "Personalize"
-msgstr "Personalizar"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1250
msgid "Your account"
-msgstr "Su cuenta"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1256
msgid "Your alerts"
-msgstr "Sus alertas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1262
msgid "Your approvals"
-msgstr "Sus aprobaciones"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1268
msgid "Your baskets"
-msgstr "Sus cestas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1274
msgid "Your groups"
-msgstr "Sus grupos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1280
msgid "Your loans"
-msgstr "Sus préstamos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1286
msgid "Your messages"
-msgstr "Sus mensajes"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1292
msgid "Your submissions"
-msgstr "Sus envíos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1298
msgid "Your searches"
-msgstr "Sus búsquedas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1351
msgid "Administration"
-msgstr "Administración"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1367
msgid "Statistics"
-msgstr "Estadísticas"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1484
msgid "You are an administrator of the following groups:"
-msgstr "Usted es administrador de estos grupos:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1504
#: modules/websession/lib/websession_templates.py:1578
#: modules/websession/lib/websession_templates.py:1641
#: modules/websubmit/lib/websubmit_templates.py:3012
#: modules/websubmit/lib/websubmit_templates.py:3018
msgid "Group"
-msgstr "Grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1511
msgid "You are not an administrator of any groups."
-msgstr "Usted no es administrador de ningún grupo."
+msgstr ""
#: modules/websession/lib/websession_templates.py:1518
msgid "Edit group"
-msgstr "Editar grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1525
#, python-format
msgid "Edit %s members"
-msgstr "Editar los %s miembros del grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1548
#: modules/websession/lib/websession_templates.py:1688
#: modules/websession/lib/websession_templates.py:1690
#: modules/websession/lib/websession_webinterface.py:1076
msgid "Create new group"
-msgstr "Crear un nuevo grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1562
msgid "You are a member of the following groups:"
-msgstr "Usted es miembro de los grupos siguientes:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1585
msgid "You are not a member of any groups."
-msgstr "Usted no es miembro de ningún grupo."
+msgstr ""
#: modules/websession/lib/websession_templates.py:1609
msgid "Join new group"
-msgstr "Unirse a un grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1610
#: modules/websession/lib/websession_templates.py:2162
#: modules/websession/lib/websession_templates.py:2173
msgid "Leave group"
-msgstr "Dejar el grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1625
msgid "You are a member of the following external groups:"
-msgstr "Usted es miembro de los siguientes grupos externos:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1648
msgid "You are not a member of any external groups."
-msgstr "Usted no es miembro de ningún grupo externo."
+msgstr ""
#: modules/websession/lib/websession_templates.py:1696
msgid "Update group"
-msgstr "Actualizar grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1698
#, python-format
msgid "Edit group %s"
-msgstr "Editar el grupo %s"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1700
msgid "Delete group"
-msgstr "Suprimir el grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1773
msgid "Group name:"
-msgstr "Nombre del grupo:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1775
msgid "Group description:"
-msgstr "Descripción del grupo:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1776
msgid "Group join policy:"
-msgstr "Política para unirse al grupo:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1817
#: modules/websession/lib/websession_templates.py:1890
#: modules/websession/lib/websession_templates.py:2031
#: modules/websession/lib/websession_templates.py:2040
#: modules/websession/lib/websession_templates.py:2160
#: modules/websession/lib/websession_templates.py:2272
msgid "Please select:"
-msgstr "Seleccione:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1883
msgid "Join group"
-msgstr "Unirse a un grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1885
msgid "or find it"
-msgstr "o buscarlo: "
+msgstr ""
#: modules/websession/lib/websession_templates.py:1886
msgid "Choose group:"
-msgstr "Escoja grupo:"
+msgstr ""
#: modules/websession/lib/websession_templates.py:1888
msgid "Find group"
-msgstr "Busque grupo"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2036
msgid "Remove member"
-msgstr "Eliminar miembro"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2038
msgid "No members."
-msgstr "Sin miembros."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2048
msgid "Accept member"
-msgstr "Acceptar miembro"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2048
msgid "Reject member"
-msgstr "Rechazar miembro"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2050
msgid "No members awaiting approval."
-msgstr "No hay miembros pendientes de aprobación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2052
#: modules/websession/lib/websession_templates.py:2086
msgid "Current members"
-msgstr "Miembros actuales"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2053
#: modules/websession/lib/websession_templates.py:2087
msgid "Members awaiting approval"
-msgstr "Miembros pendientes de aprobación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2054
#: modules/websession/lib/websession_templates.py:2088
msgid "Invite new members"
-msgstr "Invitar a nuevos miembros"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2059
#, python-format
msgid "Invitation to join \"%s\" group"
-msgstr "Invitación a unirse al grupo \"%s\""
+msgstr ""
#: modules/websession/lib/websession_templates.py:2060
#, python-format
msgid ""
"Hello:\n"
"\n"
"I think you might be interested in joining the group \"%(x_name)s\".\n"
"You can join by clicking here: %(x_url)s.\n"
"\n"
"Best regards.\n"
msgstr ""
-"Hola,\n"
-"\n"
-"quizás pueda estar interesado en unirse al grupo «%(x_name)s».\n"
-"Se puede añadir pinchando aquí: %(x_url)s.\n"
-"\n"
-"Atentamente,\n"
#: modules/websession/lib/websession_templates.py:2074
#, python-format
msgid ""
"If you want to invite new members to join your group, please use the "
"%(x_url_open)sweb message%(x_url_close)s system."
msgstr ""
-"Si quiere invitar a nuevos miembros a formar parte de su grupo, use el "
-"%(x_url_open)ssistema de mensajería interna%(x_url_close)s."
#: modules/websession/lib/websession_templates.py:2078
#, python-format
msgid "Group: %s"
-msgstr "Grupo: %s"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2161
msgid "Group list"
-msgstr "Lista de los grupos"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2164
msgid "You are not member of any group."
-msgstr "Usted no es miembro de ningún grupo."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2212
msgid "Are you sure you want to delete this group?"
-msgstr "¿Está seguro de que quiere borrar este grupo?"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2252
msgid "Are you sure you want to leave this group?"
-msgstr "¿Está seguro de que quiere dejar este grupo?"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2268
msgid "Visible and open for new members"
-msgstr "Visible y abierto a nuevos miembros"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2270
msgid "Visible but new members need approval"
-msgstr "Visible pero los nuevos miembros requieren una aprobación"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2355
#, python-format
msgid "Group %s: New membership request"
-msgstr "Grupo %s: nueva petición de ingreso"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2359
#, python-format
msgid "A user wants to join the group %s."
-msgstr "Un usuario desea unirse al grupo %s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2360
#, python-format
msgid ""
"Please %(x_url_open)saccept or reject%(x_url_close)s this user's request."
msgstr ""
-"Debería %(x_url_open)saceptar o rechazar%(x_url_close)s la petición de este "
-"usuario."
#: modules/websession/lib/websession_templates.py:2377
#, python-format
msgid "Group %s: Join request has been accepted"
-msgstr "Grupo %s: la petición de ingreso ha sido aceptada"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2378
#, python-format
msgid "Your request for joining group %s has been accepted."
-msgstr "Su petición de ingreso en el grupo %s ha sido aceptada."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2380
#, python-format
msgid "Group %s: Join request has been rejected"
-msgstr "Grupo %s: la petición de ingreso ha sido rechazada"
+msgstr ""
#: modules/websession/lib/websession_templates.py:2381
#, python-format
msgid "Your request for joining group %s has been rejected."
-msgstr "Su petición de ingreso en el grupo %s ha sido rechazada."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2384
#: modules/websession/lib/websession_templates.py:2402
#, python-format
msgid "You can consult the list of %(x_url_open)syour groups%(x_url_close)s."
-msgstr "Puede consultar la lista de %(x_url_open)ssus grupos%(x_url_close)s."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2398
#, python-format
msgid "Group %s has been deleted"
-msgstr "El grupo %s ha sido suprimido."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2400
#, python-format
msgid "Group %s has been deleted by its administrator."
-msgstr "El grupo %s ha sido suprimido por su administrador."
+msgstr ""
#: modules/websession/lib/websession_templates.py:2417
#, python-format
msgid ""
"You can consult the list of %(x_url_open)s%(x_nb_total)i groups"
"%(x_url_close)s you are subscribed to (%(x_nb_member)i) or administering "
"(%(x_nb_admin)i)."
msgstr ""
-"Puede consultar la lista de los %(x_url_open)s%(x_nb_total)i grupos"
-"%(x_url_close)s de los que usted es miembro (%(x_nb_member)i) o administrador"
-"(%(x_nb_admin)i)."
#: modules/websession/lib/websession_templates.py:2436
msgid ""
"Warning: The password set for MySQL root user is the same as the default "
"Invenio password. For security purposes, you may want to change the password."
msgstr ""
-"Atención: la contraseña que ha establecido para el usuario root de MySQL es "
-"la misma que la de lleva Invenio por defecto. Por motivos de seguridad, es "
-"preferible cambiarla."
#: modules/websession/lib/websession_templates.py:2442
msgid ""
"Warning: The password set for the Invenio MySQL user is the same as the "
"shipped default. For security purposes, you may want to change the password."
msgstr ""
-"Atención: la contraseña que ha establecido para el usuario MySQL de Invenio "
-"es la misma que lleva por defecto la aplicación. Por motivos de seguridad, "
-"es preferible cambiarla."
#: modules/websession/lib/websession_templates.py:2448
msgid ""
"Warning: The password set for the Invenio admin user is currently empty. For "
"security purposes, it is strongly recommended that you add a password."
msgstr ""
-"Atención: la contraseña para el usuario admin de Invenio está vacía. Por "
-"motivos de seguridad, es necesario establecer una."
#: modules/websession/lib/websession_templates.py:2454
msgid ""
"Warning: The email address set for support email is currently set to "
"info@invenio-software.org. It is recommended that you change this to your "
"own address."
msgstr ""
-"Atención: la dirección configurada para soporte electrónico es info@invenio-"
-"software.org. Es conveniente cambiarla a su dirección particular."
#: modules/websession/lib/websession_templates.py:2460
msgid ""
"A newer version of Invenio is available for download. You may want to visit "
-msgstr "Puede bajarse una versión más reciente de Invenio. Visite "
+msgstr ""
#: modules/websession/lib/websession_templates.py:2467
msgid ""
"Cannot download or parse release notes from http://invenio-software.org/repo/"
"invenio/tree/RELEASE-NOTES"
msgstr ""
-"No ha sido posible descargar o leer las notas versión desde http://invenio-"
-"software.org/repo/invenio/tree/RELEASE-NOTES"
#: modules/websession/lib/webuser.py:149
msgid "Database problem"
-msgstr "Problema en la base de datos"
+msgstr ""
#: modules/websession/lib/webuser.py:299
#: modules/websession/lib/webgroup_dblayer.py:314
msgid "user"
-msgstr "usuario"
+msgstr ""
#: modules/websession/lib/webuser.py:470
#, python-format
msgid "Account registration at %s"
-msgstr "Alta de cuenta en %s"
+msgstr ""
#: modules/websession/lib/webuser.py:714
msgid "New account on"
-msgstr "Nueva cuenta en"
+msgstr ""
#: modules/websession/lib/webuser.py:716
msgid "PLEASE ACTIVATE"
-msgstr "ACTÍVELO, POR FAVOR"
+msgstr ""
#: modules/websession/lib/webuser.py:717
msgid "A new account has been created on"
-msgstr "Su cuenta ha sido creada en"
+msgstr ""
#: modules/websession/lib/webuser.py:719
msgid " and is awaiting activation"
-msgstr " y está esperando que se active"
+msgstr ""
#: modules/websession/lib/webuser.py:721
msgid " Username/Email"
-msgstr "Nombre de usuario/Dirección de correo"
+msgstr ""
#: modules/websession/lib/webuser.py:722
msgid "You can approve or reject this account request at"
-msgstr "Puede aprobar o rechazar esta cuenta en"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:85
msgid "Mail Cookie Service"
-msgstr "Servicio de activación por correo electrónico"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:95
msgid "Role authorization request"
-msgstr "Petición de rol de autorización"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:95
msgid "This request for an authorization has already been authorized."
-msgstr "La petición para la autorización ya se ha aprobado."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:98
#, python-format
msgid ""
"You have successfully obtained an authorization as %(x_role)s! This "
"authorization will last until %(x_expiration)s and until you close your "
"browser if you are a guest user."
msgstr ""
-"Ya tiene una autoritación válida para ejercer el rol de %(x_role)s. Esta "
-"autorización durará hasta %(x_expiration)s y, si usted es usuario invitado, "
-"hasta que cierre su navegador."
#: modules/websession/lib/websession_webinterface.py:116
msgid "You have confirmed the validity of your email address!"
-msgstr "¡Ha confirmado la validez de su dirección de correo electónico!"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:119
#: modules/websession/lib/websession_webinterface.py:129
msgid "Please, wait for the administrator to enable your account."
-msgstr "Por favor, espere a que el administrador le active la cuenta."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:123
#: modules/websession/lib/websession_webinterface.py:132
#, python-format
msgid "You can now go to %(x_url_open)syour account page%(x_url_close)s."
-msgstr "Ya puede ir a la %(x_url_open)spágina de su cuenta%(x_url_close)s."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:124
#: modules/websession/lib/websession_webinterface.py:133
msgid "Email address successfully activated"
-msgstr "Dirección electrónica activada correctamente"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:127
msgid "You have already confirmed the validity of your email address!"
-msgstr "¡Ya ha confirmado la validez de su dirección de correo electónico!"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:136
msgid ""
"This request for confirmation of an email address is not valid or is expired."
msgstr ""
-"Esta petición de confirmación de la validez de su dirección de correo "
-"electónico no es válida o ha expirado."
#: modules/websession/lib/websession_webinterface.py:141
msgid "This request for an authorization is not valid or is expired."
-msgstr "Esta petición de autorización no es válida o ha expirado."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:154
msgid "Reset password"
-msgstr "Reiniciar la contraseña"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:160
msgid "This request for resetting a password has already been used."
-msgstr "Esta petición de reiniciar la contraseña ya se ha utilizado."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:163
msgid "This request for resetting a password is not valid or is expired."
-msgstr "Esta petición de reiniciar una contraseña no es válida o ha expirado."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:168
msgid "This request for resetting the password is not valid or is expired."
-msgstr "Esta petición de reiniciar la contraseña no es válida o ha expirado."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:181
msgid "The two provided passwords aren't equal."
-msgstr "Las dos contraseñas no coinciden."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:196
msgid "The password was successfully set! You can now proceed with the login."
-msgstr "La contraseña se ha definido correctamente. Ya puede identificarse."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:281
#, python-format
msgid "%s Personalize, Your Settings"
-msgstr "%s Personalizar, sus parámetros"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:335
#: modules/websession/lib/websession_webinterface.py:402
#: modules/websession/lib/websession_webinterface.py:464
#: modules/websession/lib/websession_webinterface.py:477
#: modules/websession/lib/websession_webinterface.py:490
msgid "Settings edited"
-msgstr "Se han editado los parámetros"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:337
#: modules/websession/lib/websession_webinterface.py:401
#: modules/websession/lib/websession_webinterface.py:442
#: modules/websession/lib/websession_webinterface.py:466
#: modules/websession/lib/websession_webinterface.py:479
#: modules/websession/lib/websession_webinterface.py:485
msgid "Show account"
-msgstr "Mostrar la cuenta"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:341
msgid "Unable to change login method."
-msgstr "No se ha podido cambiar el método de identificación."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:349
msgid "Switched to internal login method."
-msgstr "El método de identificación ha cambiado al interno."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:350
msgid ""
"Please note that if this is the first time that you are using this account "
"with the internal login method then the system has set for you a randomly "
"generated password. Please click the following button to obtain a password "
"reset request link sent to you via email:"
msgstr ""
-"Tenga en cuenta que si es la primera vez que está usando esta cuenta con el "
-"método de identificación interno, el sistema le ha definido una contraseña "
-"aleatoria. Haga clic en el botón siguiente para a que le envíe, via correo "
-"electrónico, un enlace para reiniciarla:"
#: modules/websession/lib/websession_webinterface.py:358
msgid "Send Password"
-msgstr "Enviar contraseña"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:366
#, python-format
msgid ""
"Unable to switch to external login method %s, because your email address is "
"unknown."
msgstr ""
-"No es posible cambiar al método de autenticación externo %s, porque no "
-"consta su dirección de correo electrónico."
#: modules/websession/lib/websession_webinterface.py:370
#, python-format
msgid ""
"Unable to switch to external login method %s, because your email address is "
"unknown to the external login system."
msgstr ""
-"No es posible cambiar al método de autenticación externo %s, porque el "
-"sistema externo desconoce su dirección de correo electrónico."
#: modules/websession/lib/websession_webinterface.py:374
msgid "Login method successfully selected."
-msgstr "Método de identificación seleccionado correctamente."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:376
#, python-format
msgid ""
"The external login method %s does not support email address based logins. "
"Please contact the site administrators."
msgstr ""
-"El métode de identificación externo %s no acepta identificaciones basadas en "
-"direcciones de correo electrónico. Póngase en cotacto con los "
-"administradores de la instalación."
#: modules/websession/lib/websession_webinterface.py:395
msgid "Settings successfully edited."
-msgstr "Se han editado los parámetres correctamente."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:396
#, python-format
msgid ""
"Note that if you have changed your email address, you will have to "
"%(x_url_open)sreset your password%(x_url_close)s anew."
msgstr ""
-"Si ha cambiado su dirección, tendrá que %(x_url_open)svolver a poner su "
-"contraseña%(x_url_close)s otre vez."
#: modules/websession/lib/websession_webinterface.py:404
#: modules/websession/lib/websession_webinterface.py:912
#, python-format
msgid "Desired nickname %s is invalid."
-msgstr "El alias %s no es válido."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:405
#: modules/websession/lib/websession_webinterface.py:411
#: modules/websession/lib/websession_webinterface.py:424
#: modules/websession/lib/websession_webinterface.py:446
#: modules/websession/lib/websession_webinterface.py:452
#: modules/websession/lib/websession_webinterface.py:903
#: modules/websession/lib/websession_webinterface.py:908
#: modules/websession/lib/websession_webinterface.py:913
#: modules/websession/lib/websession_webinterface.py:924
msgid "Please try again."
-msgstr "Vuélvalo a intentar."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:407
#: modules/websession/lib/websession_webinterface.py:413
#: modules/websession/lib/websession_webinterface.py:420
#: modules/websession/lib/websession_webinterface.py:426
#: modules/websession/lib/websession_webinterface.py:448
#: modules/websession/lib/websession_webinterface.py:454
#: modules/websession/lib/websession_webinterface.py:501
msgid "Edit settings"
-msgstr "Editar parámetros"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:408
#: modules/websession/lib/websession_webinterface.py:414
#: modules/websession/lib/websession_webinterface.py:421
#: modules/websession/lib/websession_webinterface.py:427
#: modules/websession/lib/websession_webinterface.py:503
msgid "Editing settings failed"
-msgstr "Ha fallado la edición de parámetros"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:410
#: modules/websession/lib/websession_webinterface.py:907
#, python-format
msgid "Supplied email address %s is invalid."
-msgstr "La dirección electrónica facilitada %s no es válida."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:416
#: modules/websession/lib/websession_webinterface.py:917
#, python-format
msgid "Supplied email address %s already exists in the database."
-msgstr "La dirección electrónica facilitada %s ya existe en la base de datos."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:418
#: modules/websession/lib/websession_webinterface.py:919
msgid "Or please try again."
-msgstr "O vuélvalo a intentar."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:423
#, python-format
msgid "Desired nickname %s is already in use."
-msgstr "El alias solicitado %s ya está en uso."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:432
msgid "Users cannot edit passwords on this site."
-msgstr "En este sitio, los usuarios no pueden editar sus contraseñas."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:440
msgid "Password successfully edited."
-msgstr "Se ha editado la contraseña correctamente."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:443
msgid "Password edited"
-msgstr "Contraseña modificada"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:445
#: modules/websession/lib/websession_webinterface.py:902
msgid "Both passwords must match."
-msgstr "Ambas contraseñas deben coincidir."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:449
#: modules/websession/lib/websession_webinterface.py:455
msgid "Editing password failed"
-msgstr "Ha fallado el cambio de contraseña"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:451
msgid "Wrong old password inserted."
-msgstr "Contraseña anterior incorrecta."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:467
#: modules/websession/lib/websession_webinterface.py:480
#: modules/websession/lib/websession_webinterface.py:494
msgid "User settings saved correctly."
-msgstr "Se han guardado correctamente los parámetros de usuario."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:487
msgid "Editing bibcatalog authorization failed"
-msgstr "Ha fallado la edición de la autorización de bibcatalog"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:488
msgid "Empty username or password"
-msgstr "La identificación o la contraseña están vacíos"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:497
msgid "Unable to update settings."
-msgstr "No ha sido posible actualizar los parámetros."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:558
msgid ""
"Cannot send password reset request since you are using external "
"authentication system."
msgstr ""
-"No se puede enviar la petición de reinicialización de contraseña ya que "
-"usted está usando un sistema de autenticación externo."
#: modules/websession/lib/websession_webinterface.py:574
msgid "The entered email address does not exist in the database."
-msgstr "La dirección electrónica facilitada no existe en la base de datos."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:588
msgid "Password reset request for"
-msgstr "Petición de reiniciar la contraseña de"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:592
msgid ""
"The entered email address is incorrect, please check that it is written "
"correctly (e.g. johndoe@example.com)."
msgstr ""
-"La dirección de correo electrónico facilitada es incorrecta. Compruebe que "
-"está correctamente escrita (por ej., sin.verguenza@ejemplo.es)."
#: modules/websession/lib/websession_webinterface.py:593
msgid "Incorrect email address"
-msgstr "Dirección de correo electrónico incorrecta"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:603
msgid "Reset password link sent"
-msgstr "Se ha enviado el enlace para reiniciar la contraseña"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:648
msgid "Delete Account"
-msgstr "Suprimir cuenta"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:674
msgid "Logout"
-msgstr "Salir"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:747
#: modules/websession/lib/websession_webinterface.py:803
#: modules/websession/lib/websession_webinterface.py:838
msgid "Login"
-msgstr "Identificación"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:869
msgid "Register"
-msgstr "Darse de alta"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:872
#: modules/websession/lib/websession_webinterface.py:944
#, python-format
msgid "%s Personalize, Main page"
-msgstr "%s Personalizar, página principal"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:889
msgid "Your account has been successfully created."
-msgstr "Su cuenta ha sido creada correctamente."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:890
msgid "Account created"
-msgstr "Cuenta creada"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:892
msgid ""
"In order to confirm its validity, an email message containing an account "
"activation key has been sent to the given email address."
msgstr ""
-"Para confirmar su validez, se ha enviado un mensage a esta dirección que "
-"tiene una clave de activación de la cuenta."
#: modules/websession/lib/websession_webinterface.py:893
msgid ""
"Please follow instructions presented there in order to complete the account "
"registration process."
msgstr ""
-"Siga las instrucciones indicadas para completar el proceso de alta de la "
-"cuenta."
#: modules/websession/lib/websession_webinterface.py:895
msgid ""
"A second email will be sent when the account has been activated and can be "
"used."
msgstr ""
-"Se enviará un segundo mensaje cuando la cuenta se haya activado y pueda "
-"usarse."
#: modules/websession/lib/websession_webinterface.py:898
#, python-format
msgid "You can now access your %(x_url_open)saccount%(x_url_close)s."
-msgstr "Ya puede acceder a su %(x_url_open)scuenta%(x_url_close)s."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:905
#: modules/websession/lib/websession_webinterface.py:910
#: modules/websession/lib/websession_webinterface.py:915
#: modules/websession/lib/websession_webinterface.py:921
#: modules/websession/lib/websession_webinterface.py:926
#: modules/websession/lib/websession_webinterface.py:930
#: modules/websession/lib/websession_webinterface.py:934
#: modules/websession/lib/websession_webinterface.py:939
msgid "Registration failure"
-msgstr "Ha fallado el alta"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:923
#, python-format
msgid "Desired nickname %s already exists in the database."
-msgstr "El alias solicitado %s ya existe en la base de datos."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:928
msgid "Users cannot register themselves, only admin can register them."
msgstr ""
-"Los usuarios no pueden darse de alta ellos mismos; sólo lo puede hacer el "
-"administrador."
#: modules/websession/lib/websession_webinterface.py:932
msgid ""
"The site is having troubles in sending you an email for confirming your "
"email address."
msgstr ""
-"Tenemos problemas para enviarle un correo de confirmació de su dirección."
#: modules/websession/lib/websession_webinterface.py:932
#: modules/websubmit/lib/websubmit_webinterface.py:151
msgid ""
"The error has been logged and will be taken in consideration as soon as "
"possible."
msgstr ""
-"El error ha sido anotado y será tenido en cuenta tan pronto como sea posible."
#: modules/websession/lib/websession_webinterface.py:979
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:729
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:733
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:765
msgid "Your tickets"
-msgstr "Sus tareas"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:1015
#: modules/websession/lib/websession_webinterface.py:1056
#: modules/websession/lib/websession_webinterface.py:1115
#: modules/websession/lib/websession_webinterface.py:1177
#: modules/websession/lib/websession_webinterface.py:1234
#: modules/websession/lib/websession_webinterface.py:1305
msgid "You are not authorized to use groups."
-msgstr "No está autorizado a usar grupos."
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:1140
msgid "Join New Group"
-msgstr "Unirse a un grupo nuevo"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:1192
msgid "Leave Group"
-msgstr "Dejar el grupo"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:1262
msgid "Edit Group"
-msgstr "Editar el grupo"
+msgstr ""
#: modules/websession/lib/websession_webinterface.py:1334
msgid "Edit group members"
-msgstr "Editar los miembros del grupo"
+msgstr ""
#: modules/websession/lib/webgroup.py:158
#: modules/websession/lib/webgroup.py:432
msgid "Please enter a group name."
-msgstr "Introduzca un nombre grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:168
#: modules/websession/lib/webgroup.py:442
msgid "Please enter a valid group name."
-msgstr "Introduzca un nombre de grupo válido."
+msgstr ""
#: modules/websession/lib/webgroup.py:178
#: modules/websession/lib/webgroup.py:452
msgid "Please choose a group join policy."
-msgstr "Escoja una política de admisión al grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:188
#: modules/websession/lib/webgroup.py:462
msgid "Group name already exists. Please choose another group name."
-msgstr "Ya existe este nombre de grupo. Escoja otro, por favor."
+msgstr ""
#: modules/websession/lib/webgroup.py:260
msgid "You are already member of the group."
-msgstr "Usted ya es miembro del grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:302
msgid "Please select only one group."
-msgstr "Seleccione solo un grupo:"
+msgstr ""
#: modules/websession/lib/webgroup.py:359
msgid "Please select one group."
-msgstr "Seleccione un grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:384
#: modules/websession/lib/webgroup.py:399
#: modules/websession/lib/webgroup.py:510
#: modules/websession/lib/webgroup.py:555
#: modules/websession/lib/webgroup.py:570
#: modules/websession/lib/webgroup.py:604
#: modules/websession/lib/webgroup.py:644
#: modules/websession/lib/webgroup.py:711
msgid "Sorry, there was an error with the database."
-msgstr "Por desgracia ha habido un error en la base de datos."
+msgstr ""
#: modules/websession/lib/webgroup.py:391
#: modules/websession/lib/webgroup.py:562
msgid "Sorry, you do not have sufficient rights on this group."
-msgstr "No tiene suficientes permisos en este grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:499
msgid "The group has already been deleted."
-msgstr "El grupo ya ha sido suprimido."
+msgstr ""
#: modules/websession/lib/webgroup.py:611
msgid "Please choose a member if you want to remove him from the group."
-msgstr "Escoja el miembro que desee borrar del grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:651
msgid ""
"Please choose a user from the list if you want him to be added to the group."
-msgstr "Escoja una persona de la lista para añadirla al grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:664
msgid "The user is already member of the group."
-msgstr "El usuario ya es miembro del grupo."
+msgstr ""
#: modules/websession/lib/webgroup.py:718
msgid ""
"Please choose a user from the list if you want him to be removed from "
"waiting list."
msgstr ""
-"Escoja el miembro de la lista que desee eliminar de la lista de espera."
#: modules/websession/lib/webgroup.py:731
msgid "The user request for joining group has already been rejected."
-msgstr "La petición de ingreso en el grupo ya ha sido rechazada."
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:86
#: modules/webstyle/lib/webstyle_templates.py:95
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
msgid "Home"
-msgstr "Página principal"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:434
#: modules/webstyle/lib/webstyle_templates.py:503
#: modules/websubmit/lib/websubmit_engine.py:697
#: modules/websubmit/lib/websubmit_engine.py:1142
#: modules/websubmit/lib/websubmit_engine.py:1188
#: modules/websubmit/lib/websubmit_engine.py:1421
msgid "Submit"
-msgstr "Enviar"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:436
#: modules/webstyle/lib/webstyle_templates.py:505
#: modules/bibcirculation/lib/bibcirculation_templates.py:215
msgid "Help"
-msgstr "Ayuda"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:469
msgid "Last updated"
-msgstr "Última actualización"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:507
msgid "Powered by"
-msgstr "Powered by"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:508
msgid "Maintained by"
-msgstr "Mantenido por"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:554
msgid "This site is also available in the following languages:"
-msgstr "Este sitio también está disponible en los siguientes idiomas:"
+msgstr "این صفحه در زبان های زیر نیز قابل دسترسی است:"
#: modules/webstyle/lib/webstyle_templates.py:587
msgid "Browser"
-msgstr "Navegador"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:609
msgid "System Error"
-msgstr "Error de sistema"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:624
msgid "Traceback"
-msgstr "Traceback"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:671
msgid "Client"
-msgstr "Cliente"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:673
msgid "Please send an error report to the administrator."
-msgstr "Por favor, envíe el informe de error al administrador."
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:674
msgid "Send error report"
-msgstr "Enviar el informe de error"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:678
#, python-format
msgid "Please contact %s quoting the following information:"
-msgstr "Por favor contacte con %s indicando la siguiente información:"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:728
#: modules/websubmit/lib/websubmit_templates.py:1231
msgid "Restricted"
-msgstr "Restringido"
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:848
#, python-format
msgid ""
"Record created %(x_date_creation)s, last modified %(x_date_modification)s"
msgstr ""
-"Registro creado el %(x_date_creation)s, última modificación el "
-"%(x_date_modification)s"
#: modules/webstyle/lib/webstyle_templates.py:919
msgid "The server encountered an error while dealing with your request."
-msgstr "El sistema ha encontrado un error mientras gestionaba su petición."
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:920
msgid "The system administrators have been alerted."
-msgstr "Los administradores del sistema han sido avisados."
+msgstr ""
#: modules/webstyle/lib/webstyle_templates.py:921
#, python-format
msgid "In case of doubt, please contact %(x_admin_email)s."
-msgstr "En caso de duda, póngase en contacto con %(x_admin_email)s"
+msgstr ""
#: modules/webstyle/lib/webdoc.py:550
#, python-format
msgid "%(category)s Pages"
-msgstr "Páginas de %(category)s"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:144
#: modules/webstyle/lib/webdoc_webinterface.py:149
msgid "Admin Pages"
-msgstr "Páginas de administración"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:146
#: modules/webstyle/lib/webdoc_webinterface.py:150
msgid "Help Pages"
-msgstr "Páginas de ayuda"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:148
#: modules/webstyle/lib/webdoc_webinterface.py:151
msgid "Hacking Pages"
-msgstr "Páginas para los desarrolladores"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:157
msgid "Hacking Invenio"
-msgstr "Desarrollo de Invenio"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:159
msgid "Latest modifications:"
-msgstr "Últimas modificaciones:"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:162
#, python-format
msgid "This is the table of contents of the %(x_category)s pages."
-msgstr "Esta es la tabla de contenidos de las páginas de %(x_category)s."
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:164
msgid "See also"
-msgstr "Véase también"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:179
#, python-format
msgid "Page %s Not Found"
-msgstr "No se ha encontrado la página %s"
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:187
#, python-format
msgid "Sorry, page %s does not seem to exist."
-msgstr "Parece ser que la página %s no existe."
+msgstr ""
#: modules/webstyle/lib/webdoc_webinterface.py:190
#, python-format
msgid ""
"You may want to look at the %(x_url_open)s%(x_category)s pages"
"%(x_url_close)s."
msgstr ""
-"Le puede interesar consultar las %(x_url_open)spágines %(x_category)s"
-"%(x_url_close)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:383
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:443
msgid "Choose a file"
-msgstr "Escoja el fichero"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:391
#: modules/websubmit/lib/functions/Create_Upload_Files_Interface.py:459
msgid "Access"
-msgstr "Acceso"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:543
msgid ""
"The file you want to edit is protected against modifications. Your action "
"has not been applied"
msgstr ""
-"El fichero que quiere editar está protegido contra modificaciones. Vuestra "
-"acción no se ha realizado"
#: modules/websubmit/lib/websubmit_managedocfiles.py:562
#, python-format
msgid ""
"The uploaded file is too small (<%i o) and has therefore not been considered"
msgstr ""
-"El archivo que ha subido es demasiado pequeño (<%i o) y por tanto no se ha "
-"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:567
#, python-format
msgid ""
"The uploaded file is too big (>%i o) and has therefore not been considered"
msgstr ""
-"El archivo que ha subido es demasiado grande (>%i o) y por tanto no se ha "
-"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:574
msgid ""
"The uploaded file name is too long and has therefore not been considered"
msgstr ""
-"El nombre del archivo que ha subido es demasiado largo y por tanto no se ha "
-"tenido en cuenta"
#: modules/websubmit/lib/websubmit_managedocfiles.py:586
msgid ""
"You have already reached the maximum number of files for this type of "
"document"
-msgstr "Ya ha llegado al máximo número de archivos para este tipo de documento"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:609
#: modules/websubmit/lib/websubmit_managedocfiles.py:620
#: modules/websubmit/lib/websubmit_managedocfiles.py:730
#, python-format
msgid "A file named %s already exists. Please choose another name."
-msgstr "Ya existe un fichero con el nombre %s. Escoja otro, por favor."
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:631
#, python-format
msgid "A file with format '%s' already exists. Please upload another format."
-msgstr "Ya existe un archivo con el formato '%s'. Súbalo en otro formato."
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:639
msgid ""
"You are not allowed to use dot '.', slash '/', or backslash '\\\\' in file "
"names. Choose a different name and upload your file again. In particular, "
"note that you should not include the extension in the renaming field."
msgstr ""
-"No se puede usar el punto '.', barra '/', o barra inversa '\\\\\\\\' en los "
-"nombres de ficheros. Escoja un nombre diferente o vuelva a subir su "
-"fichero. En particular, tenga en cuenta de no incluir la extensión en el "
-"nuevo nombre."
#: modules/websubmit/lib/websubmit_managedocfiles.py:811
msgid "Choose how you want to restrict access to this file."
-msgstr "Escoja cómo desea restringir el acceso a este archivo"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:848
msgid "Add new file"
-msgstr "Añadir otro fichero"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:873
msgid "You can decide to hide or not previous version(s) of this file."
-msgstr "Puede decidir esconder o no versiones anteriores de este archivo."
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:874
msgid ""
"When you revise a file, the additional formats that you might have "
"previously uploaded are removed, since they no longer up-to-date with the "
"new file."
msgstr ""
-"Cuando revise un archivo, los formatos adicionales que haya subido "
-"previamente serán borrados, ya que no estarían sincronizados con el nuevo "
-"archivo."
#: modules/websubmit/lib/websubmit_managedocfiles.py:875
msgid ""
"Alternative formats uploaded for current version of this file will be removed"
msgstr ""
-"Los formatos alternativos para la versión actual de este archivo serán "
-"borrados"
#: modules/websubmit/lib/websubmit_managedocfiles.py:876
msgid "Keep previous versions"
-msgstr "Guardar las versiones anteriores"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:878
#: modules/bibknowledge/lib/bibknowledge_templates.py:207
msgid "Upload"
-msgstr "Subir"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:890
#: modules/websubmit/lib/websubmit_webinterface.py:481
#: modules/bibedit/lib/bibeditmulti_templates.py:321
msgid "Apply changes"
-msgstr "Guardar cambios"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:895
#, python-format
msgid "Need help revising or adding files to record %(recid)s"
-msgstr "Necesito ayuda para revisar o añadir ficheros al registro %(recid)s"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:897
#, python-format
msgid ""
"Dear Support,\n"
"I would need help to revise or add a file to record %(recid)s.\n"
"I have attached the new version to this email.\n"
"Best regards"
msgstr ""
-"Estimado soporte,\n"
-"necesito su ayuda para revisar o añadir ficheros al registro %(recid)s.\n"
-"He añadido la nueva versión en este mensaje.\n"
-"Cordialmente,"
#: modules/websubmit/lib/websubmit_managedocfiles.py:902
#, python-format
msgid ""
"Having a problem revising a file? Send the revised version to "
"%(mailto_link)s."
msgstr ""
-"Tiene problemas revisando un archivo? Envíe la versión revisada a "
-"%(mailto_link)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:905
#, python-format
msgid ""
"Having a problem adding or revising a file? Send the new/revised version to "
"%(mailto_link)s."
msgstr ""
-"Tiene problemas al añadir o revisar un fichero? Envíe la versión nueva o "
-"revisada a %(mailto_link)s."
#: modules/websubmit/lib/websubmit_managedocfiles.py:1029
msgid "revise"
-msgstr "revisar"
+msgstr ""
#: modules/websubmit/lib/websubmit_managedocfiles.py:1073
msgid "add format"
-msgstr "añadir formato"
+msgstr ""
#: modules/websubmit/lib/functions/Shared_Functions.py:176
msgid ""
"Note that your submission as been inserted into the bibliographic task queue "
"and is waiting for execution.\n"
msgstr ""
-"Su contribución ha sido enviada a la cola de tareas bibliográficas y está "
-"esperando su ejecución.\n"
#: modules/websubmit/lib/functions/Shared_Functions.py:179
#, python-format
msgid ""
"The task queue is currently running in automatic mode, and there are "
"currently %s tasks waiting to be executed. Your record should be available "
"within a few minutes and searchable within an hour or thereabouts.\n"
msgstr ""
-"La cola de tareas se está ejecutando automáticament, y en estos momentos hay "
-"%s tareas esperando su ejecución. Su registro estará a punto dentro de "
-"poco, y buscable en una hora, aproximadamente.\n"
#: modules/websubmit/lib/functions/Shared_Functions.py:181
msgid ""
"Because of a human intervention or a temporary problem, the task queue is "
"currently set to the manual mode. Your submission is well registered but may "
"take longer than usual before it is fully integrated and searchable.\n"
msgstr ""
-"Debido a una intervención humana o a un problema temporal, la cola de tareas "
-"está en modo manual. Su contribución ha quedado registrada, pero puede "
-"tardar más de lo habitual hasta que esté bien integrado y buscable.\n"
#: modules/websubmit/lib/websubmit_engine.py:179
#: modules/websubmit/lib/websubmit_engine.py:797
#: modules/websubmit/web/yoursubmissions.py:61
msgid "Sorry, you must log in to perform this action."
-msgstr "Tiene que identificarse para ejecutar esta acción."
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:186
#: modules/websubmit/lib/websubmit_engine.py:804
msgid "Not enough information to go ahead with the submission."
-msgstr "Falta información para continuar con el envío."
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:192
#: modules/websubmit/lib/websubmit_engine.py:273
#: modules/websubmit/lib/websubmit_engine.py:283
#: modules/websubmit/lib/websubmit_engine.py:360
#: modules/websubmit/lib/websubmit_engine.py:401
#: modules/websubmit/lib/websubmit_engine.py:818
#: modules/websubmit/lib/websubmit_engine.py:856
#: modules/websubmit/lib/websubmit_engine.py:919
#: modules/websubmit/lib/websubmit_engine.py:960
msgid "Invalid parameters"
-msgstr "Parámetros no válidos"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:198
#: modules/websubmit/lib/websubmit_engine.py:810
msgid "Invalid doctype and act parameters"
-msgstr "Parámetros para doctype y act no válidos"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:229
#: modules/websubmit/lib/websubmit_engine.py:845
#, python-format
msgid "Unable to find the submission directory for the action: %s"
-msgstr "No se puede encontrar el directorio de envíos para la acción: %s"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:238
#: modules/websubmit/lib/websubmit_engine.py:1003
msgid "Unknown document type"
-msgstr "Tipo de documento desconocido"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:244
#: modules/websubmit/lib/websubmit_engine.py:1009
msgid "Unknown action"
-msgstr "Acción desconocida"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:252
#: modules/websubmit/lib/websubmit_engine.py:1016
msgid "Unable to determine the number of submission pages."
-msgstr "No ha sido posible determinar el número de páginas del envío."
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:293
#: modules/websubmit/lib/websubmit_engine.py:864
msgid ""
"Unable to create a directory for this submission. The administrator has been "
"alerted."
msgstr ""
-"No ha sido posible crear el directorio para este envío. Se ha avisado al "
-"administrador."
#: modules/websubmit/lib/websubmit_engine.py:407
#: modules/websubmit/lib/websubmit_engine.py:967
msgid "Cannot create submission directory. The administrator has been alerted."
msgstr ""
-"No ha sido posible crear el directorio para los envíos. Se ha avisado al "
-"administrador."
#: modules/websubmit/lib/websubmit_engine.py:429
#: modules/websubmit/lib/websubmit_engine.py:989
msgid "No file uploaded?"
-msgstr "No ha subido ningún archivo?"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:470
#: modules/websubmit/lib/websubmit_engine.py:473
#: modules/websubmit/lib/websubmit_engine.py:606
#: modules/websubmit/lib/websubmit_engine.py:609
msgid "Unknown form field found on submission page."
-msgstr "Campo desconocido en el formulario de envíos."
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:1055
msgid ""
"A serious function-error has been encountered. Adminstrators have been "
"alerted. <br /><em>Please not that this might be due to wrong characters "
"inserted into the form</em> (e.g. by copy and pasting some text from a PDF "
"file)."
msgstr ""
-"Se ha encontrado un error de funcionament serio. Se ha avisado a los "
-"administradores. <br /><em>Segurament la causa sea que se hayan insertado "
-"caracteres erróneos en el formulario</em> (p. ej., copiando y pegando algun "
-"texto de un archivo PDF)."
#: modules/websubmit/lib/websubmit_engine.py:1384
#, python-format
msgid "Unable to find document type: %s"
-msgstr "Imposible encontrar el tipo de documento: %s"
+msgstr ""
#: modules/websubmit/lib/websubmit_engine.py:1670
msgid "The chosen action is not supported by the document type."
msgstr ""
-"La acción que ha escogido no está soportada para este tipo de documento."
#: modules/websubmit/lib/websubmit_engine.py:1747
#: modules/websubmit/lib/websubmit_webinterface.py:1377
#: modules/websubmit/web/approve.py:81
msgid "Warning"
-msgstr "Atención"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:85
msgid "Document types available for submission"
-msgstr "Tipos de documentos disponibles para realizar envíos"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:86
msgid "Please select the type of document you want to submit"
-msgstr "Seleccione el tipo de documento que quiere enviar."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:103
msgid "No document types available."
-msgstr "No hay tipos de documentos disponibles."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:269
msgid "Please log in first."
-msgstr "Primero hace falta que se identifique."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:269
msgid "Use the top-right menu to log in."
-msgstr "Use el menú superior derecho para entrar."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:313
msgid "Please select a category"
-msgstr "Seleccione una categoría"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:352
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2114
msgid "Notice"
-msgstr "Atención"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:353
msgid "Select a category and then click on an action button."
-msgstr "Seleccione una categoría y después seleccione una acción."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:376
msgid ""
"To continue with a previously interrupted submission, enter an access number "
"into the box below:"
msgstr ""
-"Pera continuar un envío interrumpido, introduzca el número de acceso en la "
-"siguiente celda:"
#: modules/websubmit/lib/websubmit_templates.py:378
msgid "GO"
-msgstr "ADELANTE"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:499
#: modules/websubmit/lib/websubmit_templates.py:968
msgid "SUMMARY"
-msgstr "RESUMEN"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:535
#: modules/bibharvest/lib/oai_harvest_admin.py:858
#: modules/bibharvest/lib/oai_harvest_admin.py:911
msgid "Previous page"
-msgstr "Página anterior"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:541
msgid "Submission number"
-msgstr "Número de envío"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:555
#: modules/bibharvest/lib/oai_harvest_admin.py:843
#: modules/bibharvest/lib/oai_harvest_admin.py:899
msgid "Next page"
-msgstr "Página siguiente"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:570
#: modules/websubmit/lib/websubmit_templates.py:1009
msgid "Are you sure you want to quit this submission?"
-msgstr "¿Está seguro de que desea abandonar este envío?"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:572
#: modules/websubmit/lib/websubmit_templates.py:1010
#: modules/websubmit/lib/websubmit_templates.py:1019
msgid "Back to main menu"
-msgstr "Volver al menú principal"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:575
msgid ""
"This is your submission access number. It can be used to continue with an "
"interrupted submission in case of problems."
msgstr ""
-"Este es su número de acceso de envío. Lo puede utilizar para continuar un "
-"envío interrumpido en caso de haber problemas."
#: modules/websubmit/lib/websubmit_templates.py:576
msgid "Mandatory fields appear in red in the SUMMARY window."
-msgstr "Los campos obligatorios aparecen en rojo en el recuadro RESUMEN."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:722
#, python-format
msgid "The field %s is mandatory."
-msgstr "El campo %s es obligatorio."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:722
msgid "Please make a choice in the select box"
-msgstr "Por favor escoja uno en el desplegable"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:736
msgid "Please press a button."
-msgstr "Pulse un botón."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:744
#, python-format
msgid "The field %s is mandatory. Please fill it in."
-msgstr "El campo %s es obligatorio. Rellénelo, por favor."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:821
#, python-format
msgid "The field %(field)s is mandatory."
-msgstr "El campo %(field)s es obligatorio."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:822
msgid "Going back to page"
-msgstr "Volver a la página"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:959
msgid "finished!"
-msgstr "¡finalizado!"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:967
msgid "end of action"
-msgstr "final de la acción"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:991
msgid "Submission no"
-msgstr "Envío nº"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1062
#, python-format
msgid ""
"Here is the %(x_action)s function list for %(x_doctype)s documents at level "
"%(x_step)s"
msgstr ""
-"Lista de funciones %(x_action)s para los documentos de tipo %(x_doctype)s en "
-"el nivel %(x_step)s"
#: modules/websubmit/lib/websubmit_templates.py:1067
msgid "Function"
-msgstr "Función"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1068
msgid "Score"
-msgstr "Puntuación"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1069
msgid "Running function"
-msgstr "Función en ejecución"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1075
#, python-format
msgid "Function %s does not exist."
-msgstr "La función %s no existe."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1114
msgid "You must now"
-msgstr "Ahora usted debe"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1146
msgid "record"
-msgstr "registro"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1148
msgid "document"
-msgstr "documento"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1150
#: modules/websubmit/lib/websubmit_templates.py:1253
msgid "version"
-msgstr "versión"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1185
msgid "file(s)"
-msgstr "archivo(s)"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1236
msgid "see"
-msgstr "véase"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1436
#: modules/bibauthorid/lib/bibauthorid_templates.py:1011
msgid "For"
-msgstr "Para"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1437
msgid "all types of document"
-msgstr "todos los tipos de documento"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1492
msgid "Subm.No."
-msgstr "Envío"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1493
msgid "Reference"
-msgstr "Referencia"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1495
msgid "First access"
-msgstr "Primer acceso"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1496
msgid "Last access"
-msgstr "Último acceso"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1506
msgid "Are you sure you want to delete this submission?"
-msgstr "¿Está seguro de que desea suprimir su envío?"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1507
#, python-format
msgid "Delete submission %(x_id)s in %(x_docname)s"
-msgstr "Suprimir el envío %(x_id)s en %(x_docname)s"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1531
msgid "Reference not yet given"
-msgstr "Todavía no tiene referencia"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1602
msgid "Refereed Documents"
-msgstr "Documentos revisados"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1612
msgid "You are a general referee"
-msgstr "Usted es el revisor general"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1618
msgid "You are a referee for category:"
-msgstr "Usted es revisor para al categoría:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1657
#: modules/websubmit/lib/websubmit_templates.py:1702
msgid "List of refereed types of documents"
-msgstr "Lista de tipos de documentos revisados"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1658
#: modules/websubmit/lib/websubmit_templates.py:1703
msgid ""
"Select one of the following types of documents to check the documents status"
msgstr ""
-"Seleccione uno de los tipos de documentos siguientes para comprobar el "
-"estado de los documentos"
#: modules/websubmit/lib/websubmit_templates.py:1671
msgid "Go to specific approval workflow"
-msgstr "Ir al procedimiento de aprobación específico"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1759
msgid "List of refereed categories"
-msgstr "Lista de categorías a revisar"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1760
#: modules/websubmit/lib/websubmit_templates.py:1909
msgid "Please choose a category"
-msgstr "Escoja una categoría"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1780
#: modules/websubmit/lib/websubmit_templates.py:1821
#: modules/websubmit/lib/websubmit_templates.py:1932
#: modules/websubmit/lib/websubmit_templates.py:1990
#: modules/websubmit/lib/websubmit_templates.py:2056
#: modules/websubmit/lib/websubmit_templates.py:2181
msgid "Pending"
-msgstr "Pendiente"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1786
#: modules/websubmit/lib/websubmit_templates.py:1824
#: modules/websubmit/lib/websubmit_templates.py:1939
#: modules/websubmit/lib/websubmit_templates.py:1993
#: modules/websubmit/lib/websubmit_templates.py:2057
#: modules/websubmit/lib/websubmit_templates.py:2182
msgid "Approved"
-msgstr "Aprobado"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1792
#: modules/websubmit/lib/websubmit_templates.py:1826
#: modules/websubmit/lib/websubmit_templates.py:1827
#: modules/websubmit/lib/websubmit_templates.py:1946
#: modules/websubmit/lib/websubmit_templates.py:1995
#: modules/websubmit/lib/websubmit_templates.py:1996
#: modules/websubmit/lib/websubmit_templates.py:2058
#: modules/websubmit/lib/websubmit_templates.py:2183
msgid "Rejected"
-msgstr "Rechazado"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1820
#: modules/websubmit/lib/websubmit_templates.py:1989
msgid "Key"
-msgstr "Código"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1823
#: modules/websubmit/lib/websubmit_templates.py:1992
msgid "Waiting for approval"
-msgstr "Esperando la aprobación"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1825
#: modules/websubmit/lib/websubmit_templates.py:1994
msgid "Already approved"
-msgstr "Ya aprobado"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1828
#: modules/websubmit/lib/websubmit_templates.py:1999
msgid "Some documents are pending."
-msgstr "Algunos documentos están pendientes."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1873
msgid "List of refereing categories"
-msgstr "Lista de categorías a revisar"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:1953
#: modules/websubmit/lib/websubmit_templates.py:1997
#: modules/websubmit/lib/websubmit_templates.py:1998
#: modules/websubmit/lib/websubmit_templates.py:2184
msgid "Cancelled"
-msgstr "Cancelado"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2053
#: modules/websubmit/lib/websubmit_templates.py:2141
msgid "List of refereed documents"
-msgstr "Lista de los documentos revisados"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2054
#: modules/websubmit/lib/websubmit_templates.py:2178
msgid "Click on a report number for more information."
-msgstr "Haga clic en un número de informe para más información."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2055
#: modules/websubmit/lib/websubmit_templates.py:2180
msgid "Report Number"
-msgstr "Número de informe"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2143
msgid "List of publication documents"
-msgstr "Lista de documentos de publicación"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2145
msgid "List of direct approval documents"
-msgstr "Lista de los documentos de aprobación directa"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2318
msgid "Your request has been sent to the referee."
-msgstr "Su mensaje se ha enviado al revisor."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2334
#: modules/websubmit/lib/websubmit_templates.py:2455
#: modules/websubmit/lib/websubmit_templates.py:2766
#: modules/websubmit/lib/websubmit_templates.py:2950
msgid "Title:"
-msgstr "Título:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2340
#: modules/websubmit/lib/websubmit_templates.py:2462
#: modules/websubmit/lib/websubmit_templates.py:2772
#: modules/websubmit/lib/websubmit_templates.py:2956
msgid "Author:"
-msgstr "Autor:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2348
#: modules/websubmit/lib/websubmit_templates.py:2471
#: modules/websubmit/lib/websubmit_templates.py:2780
#: modules/websubmit/lib/websubmit_templates.py:2964
msgid "More information:"
-msgstr "Más información:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2349
#: modules/websubmit/lib/websubmit_templates.py:2472
#: modules/websubmit/lib/websubmit_templates.py:2781
#: modules/websubmit/lib/websubmit_templates.py:2965
msgid "Click here"
-msgstr "Haga clic aquí"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2358
msgid "Approval note:"
-msgstr "Nota de aprobación:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2363
#, python-format
msgid ""
"This document is still %(x_fmt_open)swaiting for approval%(x_fmt_close)s."
msgstr ""
-"Este documento todavía está %(x_fmt_open)sesperando su aprobación"
-"%(x_fmt_close)s."
#: modules/websubmit/lib/websubmit_templates.py:2366
#: modules/websubmit/lib/websubmit_templates.py:2386
#: modules/websubmit/lib/websubmit_templates.py:2395
msgid "It was first sent for approval on:"
-msgstr "Fue enviado para su aprobación el:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2368
#: modules/websubmit/lib/websubmit_templates.py:2370
#: modules/websubmit/lib/websubmit_templates.py:2388
#: modules/websubmit/lib/websubmit_templates.py:2390
#: modules/websubmit/lib/websubmit_templates.py:2397
#: modules/websubmit/lib/websubmit_templates.py:2399
msgid "Last approval email was sent on:"
-msgstr "El último mensaje de aprobación fue enviado el:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2371
msgid ""
"You can send an approval request email again by clicking the following "
"button:"
msgstr ""
-"Puede volver a enviar otra petición de aprobación vía correo electrónico "
-"pulsando este botón:"
#: modules/websubmit/lib/websubmit_templates.py:2373
#: modules/websubmit/web/publiline.py:366
msgid "Send Again"
-msgstr "Volverlo a enviar"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2374
msgid "WARNING! Upon confirmation, an email will be sent to the referee."
msgstr ""
-"¡ATENCIÓN! Se enviará un correo electrónico a su revisor cuando lo confirme."
#: modules/websubmit/lib/websubmit_templates.py:2377
msgid ""
"As a referee for this document, you may click this button to approve or "
"reject it"
msgstr ""
-"Como revisor de este documento, puede hacer clic en este botón para "
-"aprobarlo o rechazarlo."
#: modules/websubmit/lib/websubmit_templates.py:2379
msgid "Approve/Reject"
-msgstr "Aprobar/Rechazar"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2384
#, python-format
msgid "This document has been %(x_fmt_open)sapproved%(x_fmt_close)s."
-msgstr "Este documento ha sido %(x_fmt_open)saprobado%(x_fmt_close)s."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2385
msgid "Its approved reference is:"
-msgstr "Su referencia de aprobación es:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2391
msgid "It was approved on:"
-msgstr "Fue aprobado el:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2393
#, python-format
msgid "This document has been %(x_fmt_open)srejected%(x_fmt_close)s."
-msgstr "Este documento ha sido %(x_fmt_open)srechazado%(x_fmt_close)s."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2400
msgid "It was rejected on:"
-msgstr "Fue rechazado el:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2484
#: modules/websubmit/lib/websubmit_templates.py:2539
#: modules/websubmit/lib/websubmit_templates.py:2602
msgid "It has first been asked for refereing process on the "
-msgstr "La petición de revisión fue hecha por primera vez el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2487
#: modules/websubmit/lib/websubmit_templates.py:2541
msgid "Last request e-mail was sent to the publication committee chair on the "
msgstr ""
-"El último mensaje de petición fue enviado al responsable del comité de "
-"publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2491
msgid "A referee has been selected by the publication committee on the "
-msgstr "Un revisor ha sido seleccionado por el comité de publicación el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2494
#: modules/websubmit/lib/websubmit_templates.py:2557
msgid "No referee has been selected yet."
-msgstr "Todavía no existe ningún revisor seleccionado."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2496
#: modules/websubmit/lib/websubmit_templates.py:2559
msgid "Select a referee"
-msgstr "Seleccione un revisor"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2501
msgid ""
"The referee has sent his final recommendations to the publication committee "
"on the "
msgstr ""
-"El revisor ha enviado sus recomanaciones finales al comité de publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2504
#: modules/websubmit/lib/websubmit_templates.py:2565
msgid "No recommendation from the referee yet."
-msgstr "Todavía no hay ninguna recomendación del revisor."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2506
#: modules/websubmit/lib/websubmit_templates.py:2516
#: modules/websubmit/lib/websubmit_templates.py:2567
#: modules/websubmit/lib/websubmit_templates.py:2575
#: modules/websubmit/lib/websubmit_templates.py:2583
msgid "Send a recommendation"
-msgstr "Envíe una recomendación"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2511
#: modules/websubmit/lib/websubmit_templates.py:2579
msgid ""
"The publication committee has sent his final recommendations to the project "
"leader on the "
msgstr ""
-"El comité de publicación ha enviado sus recomendaciones finales al "
-"responsable de projecto el "
#: modules/websubmit/lib/websubmit_templates.py:2514
#: modules/websubmit/lib/websubmit_templates.py:2581
msgid "No recommendation from the publication committee yet."
-msgstr "Todavía no hay ninguna recomendación del comité de publicación."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2521
#: modules/websubmit/lib/websubmit_templates.py:2587
#: modules/websubmit/lib/websubmit_templates.py:2607
msgid "It has been cancelled by the author on the "
-msgstr "Ha sido cancelado por el autor el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2525
#: modules/websubmit/lib/websubmit_templates.py:2590
#: modules/websubmit/lib/websubmit_templates.py:2610
msgid "It has been approved by the project leader on the "
-msgstr "Ha sido aprobado por el responsable de projecto el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2528
#: modules/websubmit/lib/websubmit_templates.py:2592
#: modules/websubmit/lib/websubmit_templates.py:2612
msgid "It has been rejected by the project leader on the "
-msgstr "Ha sido rechazado por el responsable de projecto el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2531
#: modules/websubmit/lib/websubmit_templates.py:2594
#: modules/websubmit/lib/websubmit_templates.py:2614
msgid "No final decision taken yet."
-msgstr "Todavía no hay ninguna decisión final."
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2533
#: modules/websubmit/lib/websubmit_templates.py:2596
#: modules/websubmit/lib/websubmit_templates.py:2616
#: modules/websubmit/web/publiline.py:1136
#: modules/websubmit/web/publiline.py:1146
msgid "Take a decision"
-msgstr "Tome una decisión"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2544
msgid ""
"An editorial board has been selected by the publication committee on the "
msgstr ""
-"Un consejo editor ha sido seleccionado por el comité de publicación el "
#: modules/websubmit/lib/websubmit_templates.py:2546
msgid "Add an author list"
-msgstr "Añadir una lista de autores"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2549
msgid "No editorial board has been selected yet."
-msgstr "Todavía no se ha seleccionado ningún consejo editor"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2551
msgid "Select an editorial board"
-msgstr "Seleccionar un consejo editor"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2555
msgid "A referee has been selected by the editorial board on the "
-msgstr "Un revisor ha sido seleccionado por el consejo editor el "
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2563
msgid ""
"The referee has sent his final recommendations to the editorial board on the "
msgstr ""
-"El revisor ha enviado sus recomendaciones finales al consejo editor el "
#: modules/websubmit/lib/websubmit_templates.py:2571
msgid ""
"The editorial board has sent his final recommendations to the publication "
"committee on the "
msgstr ""
-"El consejo editor ha enviado sus recomendaciones finales al comité de "
-"publicación el"
#: modules/websubmit/lib/websubmit_templates.py:2573
msgid "No recommendation from the editorial board yet."
-msgstr "Todavía no hay ninguna recomendación del consejo editor"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2604
msgid "Last request e-mail was sent to the project leader on the "
msgstr ""
-"El último mensaje de petición fue enviado al responsable del proyecto el "
#: modules/websubmit/lib/websubmit_templates.py:2698
msgid "Comments overview"
-msgstr "Sumario de los comentarios"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2820
msgid "search for user"
-msgstr "buscar un usuario"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2822
msgid "search for users"
-msgstr "buscar usuarios"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2825
#: modules/websubmit/lib/websubmit_templates.py:2827
#: modules/websubmit/lib/websubmit_templates.py:2880
#: modules/websubmit/lib/websubmit_templates.py:2882
msgid "select user"
-msgstr "Seleccione el usuario"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2836
msgid "connected"
-msgstr "conectado"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2839
msgid "add this user"
-msgstr "añadir este usuario"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:2889
msgid "remove this user"
-msgstr "eliminar este usuario"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:3003
msgid "User"
-msgstr "Usuario"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:3084
#: modules/websubmit/web/publiline.py:1133
msgid "Select:"
-msgstr "Seleccionar:"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:3085
msgid "approve"
-msgstr "aprobar"
+msgstr ""
#: modules/websubmit/lib/websubmit_templates.py:3086
msgid "reject"
-msgstr "rechazar"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:116
msgid "Requested record does not seem to have been integrated."
-msgstr "El registro solicitado no parece haber sido integrado."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:150
msgid ""
"The system has encountered an error in retrieving the list of files for this "
"document."
msgstr ""
-"El sistema ha encontrado un error recuperando la lista de los archivos de "
-"este documento."
#: modules/websubmit/lib/websubmit_webinterface.py:216
msgid "This file is restricted: "
-msgstr "Este archivo es de acceso restringido."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:228
msgid "An error has happened in trying to stream the request file."
msgstr ""
-"Este archivo está escondido y usted no tiene permiso para acceder a él."
#: modules/websubmit/lib/websubmit_webinterface.py:231
msgid "The requested file is hidden and can not be accessed."
-msgstr "Este archivo está escondido y usted no puede acceder a él."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:238
msgid "Requested file does not seem to exist."
-msgstr "El fichero solicitado no existe."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:272
msgid "Access to Fulltext"
-msgstr "Acceso al texto completo"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:321
msgid "An error has happened in trying to retrieve the requested file."
-msgstr "Ha habido un error al intentar recuperar el archivo solicitado."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:323
msgid "Not enough information to retrieve the document"
-msgstr "Falta información para recuperar el documento"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:331
msgid "An error has happened in trying to retrieving the requested file."
-msgstr "Ha habido un error al intentar recuperar el archivo solicitado."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:383
msgid "Manage Document Files"
-msgstr "Gestionar los ficheros de los documentos"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:401
#, python-format
msgid "Your modifications to record #%i have been submitted"
-msgstr "Sus modificaciones al registro #%i han sido enviadas"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:409
#, python-format
msgid "Your modifications to record #%i have been cancelled"
-msgstr "Sus modificaciones al registro #%i han sido canceladas"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:418
msgid "Edit"
-msgstr "Editar"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:419
msgid "Edit record"
-msgstr "Edite el registro"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:434
#: modules/websubmit/lib/websubmit_webinterface.py:490
msgid "Document File Manager"
-msgstr "Gestión de ficheros del documento"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:435
#: modules/websubmit/lib/websubmit_webinterface.py:490
#, python-format
msgid "Record #%i"
-msgstr "Registro #%i"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:482
msgid "Cancel all changes"
-msgstr "Cancelar todos los cambios"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:1171
msgid "Sorry, 'sub' parameter missing..."
-msgstr "Falta el parámetro «sub»..."
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:1174
msgid "Sorry. Cannot analyse parameter"
-msgstr "No se ha podido analizar el parámetro"
+msgstr ""
#: modules/websubmit/lib/websubmit_webinterface.py:1235
msgid "Sorry, invalid URL..."
-msgstr "URL no vàlid..."
+msgstr ""
#: modules/websubmit/lib/websubmitadmin_engine.py:3902
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"'%s%s' - Invalid Field Position Numbers"
msgstr ""
-"No ha sido posible mover el campo de la posición %s a la %s de la página %s "
-"del envío '%s%s' - Números de posición no válidos"
#: modules/websubmit/lib/websubmitadmin_engine.py:3913
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field at position %s to temporary field "
"location"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
-"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
-"de la posición %s al campo temporal"
#: modules/websubmit/lib/websubmitadmin_engine.py:3924
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field at position %s to position %s. Please "
"ask Admin to check that a field was not stranded in a temporary position"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
-"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
-"de la posición %s a la posición %s. Pida al administrador que compruebe si "
-"el campo no ha quedado encallado en una posición temporal."
#: modules/websubmit/lib/websubmitadmin_engine.py:3935
#, python-format
msgid ""
"Unable to swap field at position %s with field at position %s on page %s of "
"submission %s - could not move field that was located at position %s to "
"position %s from temporary position. Field is now stranded in temporary "
"position and must be corrected manually by an Admin"
msgstr ""
-"No ha sido posible intercanviar el campo de la posición %s con el campo de "
-"la posición %s en la página %s del envío %s - no se ha podido mover el campo "
-"de la posición %s a la posición %s. El campo no ha quedado encallado en una "
-"posición temporal y el administador tiene que corregirlo manualmente."
#: modules/websubmit/lib/websubmitadmin_engine.py:3946
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"%s - could not decrement the position of the fields below position %s. Tried "
"to recover - please check that field ordering is not broken"
msgstr ""
-"No ha sido posible mover el campo de la posición %s a la %s en la página %s "
-"del envío %s - no se ha podido colocar en una posición menor que %s. Se ha "
-"intentado recuperar. Compruebe que el orden de los campos sea el correcto."
#: modules/websubmit/lib/websubmitadmin_engine.py:3957
#, python-format
msgid ""
"Unable to move field at position %s to position %s on page %s of submission "
"%s%s - could not increment the position of the fields at and below position "
"%s. The field that was at position %s is now stranded in a temporary "
"position."
msgstr ""
-"No ha sido posible mover el campo de la posición %s a la %s en la página %s "
-"del envío %s%s - no se ha podido colocar en una posición mayor que y debajo "
-"de la %s. El campo que estaba en la posición %s está ahora en una posición "
-"temporal."
#: modules/websubmit/lib/websubmitadmin_engine.py:3968
#, python-format
msgid ""
"Moved field from position %s to position %s on page %s of submission '%s%s'."
msgstr ""
-"Campo movido de la posición %s a la posición %s de la página %s del envío '%s"
-"%s'."
#: modules/websubmit/lib/websubmitadmin_engine.py:3989
#, python-format
msgid "Unable to delete field at position %s from page %s of submission '%s'"
msgstr ""
-"No ha sido posible eliminar el campo en la posición %s de la página %s del "
-"envío '%s'."
#: modules/websubmit/lib/websubmitadmin_engine.py:3999
#, python-format
msgid "Unable to delete field at position %s from page %s of submission '%s%s'"
msgstr ""
-"No ha sido posible eliminar el campo en la posición %s de la página %s del "
-"envío '%s%s'."
#: modules/websubmit/web/approve.py:55
msgid "approve.py: cannot determine document reference"
-msgstr "approve.py: no se ha podido determinar la referencia del documento."
+msgstr ""
#: modules/websubmit/web/approve.py:58
msgid "approve.py: cannot find document in database"
-msgstr "approve.py: no se ha encontrado el documento en la base de datos"
+msgstr ""
#: modules/websubmit/web/approve.py:72
msgid "Sorry parameter missing..."
-msgstr "Falta el parámetro..."
+msgstr ""
#: modules/websubmit/web/publiline.py:133
msgid "Document Approval Workflow"
-msgstr "Circuito de aprobación de documentos"
+msgstr ""
#: modules/websubmit/web/publiline.py:154
msgid "Approval and Refereeing Workflow"
-msgstr "Procedimiento de aprovación y revisión"
+msgstr ""
#: modules/websubmit/web/publiline.py:333
#: modules/websubmit/web/publiline.py:434
#: modules/websubmit/web/publiline.py:660
msgid "Approval has never been requested for this document."
-msgstr "No se ha pedido nunca la aprobación de este documento."
+msgstr ""
#: modules/websubmit/web/publiline.py:356
#: modules/websubmit/web/publiline.py:358
#: modules/websubmit/web/publiline.py:460
#: modules/websubmit/web/publiline.py:685
msgid "Unable to display document."
-msgstr "No es posible visualizar el documento."
+msgstr ""
#: modules/websubmit/web/publiline.py:689
#: modules/websubmit/web/publiline.py:813
#: modules/websubmit/web/publiline.py:928
#: modules/websubmit/web/publiline.py:992
#: modules/websubmit/web/publiline.py:1033
#: modules/websubmit/web/publiline.py:1089
#: modules/websubmit/web/publiline.py:1152
#: modules/websubmit/web/publiline.py:1202
msgid "Action unauthorized for this document."
-msgstr "Acción no autorizada para este documento."
+msgstr ""
#: modules/websubmit/web/publiline.py:692
#: modules/websubmit/web/publiline.py:816
#: modules/websubmit/web/publiline.py:931
#: modules/websubmit/web/publiline.py:995
#: modules/websubmit/web/publiline.py:1036
#: modules/websubmit/web/publiline.py:1092
#: modules/websubmit/web/publiline.py:1155
#: modules/websubmit/web/publiline.py:1205
msgid "Action unavailable for this document."
-msgstr "Acción no aplicable para este documento."
+msgstr ""
#: modules/websubmit/web/publiline.py:702
msgid "Adding users to the editorial board"
-msgstr "Añadir usuarios al consejo editor"
+msgstr ""
#: modules/websubmit/web/publiline.py:730
#: modules/websubmit/web/publiline.py:853
msgid "no qualified users, try new search."
-msgstr "no coincide con ningún usuario, pruebe otra búsqueda."
+msgstr ""
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "hits"
-msgstr "resultados"
+msgstr ""
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "too many qualified users, specify more narrow search."
-msgstr "coincide con demasiados usuarios, restrinja más la búsqueda."
+msgstr ""
#: modules/websubmit/web/publiline.py:732
#: modules/websubmit/web/publiline.py:855
msgid "limit"
-msgstr "límite"
+msgstr ""
#: modules/websubmit/web/publiline.py:748
msgid "users in brackets are already attached to the role, try another one..."
msgstr ""
-"los usuarios entre corchetes ja tenen asignado este rol, escoja otro..."
#: modules/websubmit/web/publiline.py:754
msgid "Removing users from the editorial board"
-msgstr "Borrado de usuarios del consejo editor"
+msgstr ""
#: modules/websubmit/web/publiline.py:790
msgid "Validate the editorial board selection"
-msgstr "Valude la selección del consejo editor"
+msgstr ""
#: modules/websubmit/web/publiline.py:835
msgid "Referee selection"
-msgstr "Selección de revisor"
+msgstr ""
#: modules/websubmit/web/publiline.py:921
msgid "Come back to the document"
-msgstr "Volver al documento"
+msgstr ""
#: modules/websubmit/web/publiline.py:1106
msgid "Back to the document"
-msgstr "Volver al documento"
+msgstr ""
#: modules/websubmit/web/publiline.py:1134
#: modules/websubmit/web/publiline.py:1194
msgid "Approve"
-msgstr "Aprobar"
+msgstr ""
#: modules/websubmit/web/publiline.py:1135
#: modules/websubmit/web/publiline.py:1195
msgid "Reject"
-msgstr "Rechazar"
+msgstr ""
#: modules/websubmit/web/publiline.py:1233
msgid "Wrong action for this document."
-msgstr "Acción incorrecta para este document."
+msgstr ""
#: modules/websubmit/web/yourapprovals.py:57
msgid "You are not authorized to use approval system."
-msgstr "No está autorizado a utilizar el sistema de aprobaciones."
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:50
msgid "Available Journals"
-msgstr "Revistas disponibles"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:59
#: modules/webjournal/lib/webjournal_templates.py:98
#, python-format
msgid "Contact %(x_url_open)sthe administrator%(x_url_close)s"
-msgstr "Escriba a los %(x_url_open)sadministradores%(x_url_close)s"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:143
msgid "Regeneration Error"
-msgstr "Error en la regeneración"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:144
msgid ""
"The issue could not be correctly regenerated. Please contact your "
"administrator."
msgstr ""
-"El número no se ha podido regenerar correctamente. Contacte con el "
-"administrador."
#: modules/webjournal/lib/webjournal_templates.py:270
#, python-format
msgid "If you cannot read this email please go to %(x_journal_link)s"
-msgstr "Si no puede leer este mensaje vaya a %(x_journal_link)s"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:417
#: modules/webjournal/lib/webjournal_templates.py:666
#: modules/webjournal/lib/webjournaladminlib.py:299
#: modules/webjournal/lib/webjournaladminlib.py:319
msgid "Add"
-msgstr "Añadir"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:418
#: modules/webjournal/lib/webjournaladminlib.py:338
msgid "Publish"
-msgstr "Publicar"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:419
#: modules/webjournal/lib/webjournaladminlib.py:299
#: modules/webjournal/lib/webjournaladminlib.py:316
msgid "Refresh"
-msgstr "Refrescar"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:482
#: modules/webjournal/lib/webjournaladminlib.py:364
#: modules/bibcirculation/lib/bibcirculation_templates.py:3254
#: modules/bibcirculation/lib/bibcirculation_templates.py:3275
#: modules/bibcirculation/lib/bibcirculation_templates.py:3296
#: modules/bibcirculation/lib/bibcirculation_templates.py:3317
#: modules/bibcirculation/lib/bibcirculation_templates.py:3947
#: modules/bibcirculation/lib/bibcirculation_templates.py:4499
#: modules/bibcirculation/lib/bibcirculation_templates.py:4507
#: modules/bibcirculation/lib/bibcirculation_templates.py:8140
#: modules/bibcirculation/lib/bibcirculation_templates.py:14718
msgid "Update"
-msgstr "Actualizar"
+msgstr ""
#: modules/webjournal/lib/webjournal_templates.py:661
msgid "Apply"
-msgstr "Aplicar"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:64
msgid "Page not found"
-msgstr "No se ha encontrado la página"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:65
msgid "The requested page does not exist"
-msgstr "La página solicitada no existe"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:96
msgid "No journal articles"
-msgstr "Sin artículos de revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:97
#: modules/webjournal/lib/webjournal_config.py:138
msgid "Problem with the configuration of this journal"
-msgstr "Problema con la configuración de esta revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:137
msgid "No journal issues"
-msgstr "Sin números de revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:176
msgid "Journal article error"
-msgstr "Error interno de la revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:177
msgid "We could not know which article you were looking for"
-msgstr "No ha sido posible encontrar el artículo que buscaba"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:211
msgid "No journals available"
-msgstr "No hay ninguna revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:212
msgid "We could not provide you any journals"
-msgstr "No ha sido posible ofrecerle ninguna revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:213
msgid ""
"It seems that there are no journals defined on this server. Please contact "
"support if this is not right."
msgstr ""
-"Parece ser que no hay ninguna revista definida en este servidor. Contacte "
-"con el soporte si no es así."
#: modules/webjournal/lib/webjournal_config.py:239
msgid "Select a journal on this server"
-msgstr "Seleccione una revista en este servidor"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:240
msgid "We couldn't guess which journal you are looking for"
-msgstr "No ha sido posible encontrar la revista que buscaba"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:241
msgid ""
"You did not provide an argument for a journal name. Please select the "
"journal you want to read in the list below."
-msgstr "No ha especificado qué revista busca. Seleccione una de esta lista."
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:268
msgid "No current issue"
-msgstr "No existe ningún número actual"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:269
msgid "We could not find any informtion on the current issue"
-msgstr "No existe ninguna información en el número actual"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:270
msgid ""
"The configuration for the current issue seems to be empty. Try providing an "
"issue number or check with support."
msgstr ""
-"Parece que la configuración del número actual está vacía. Pruebe de escoger "
-"un número en concreto o póngase en contacte con soporte."
#: modules/webjournal/lib/webjournal_config.py:298
msgid "Issue number badly formed"
-msgstr "Número mal formateado"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:299
msgid "We could not read the issue number you provided"
-msgstr "No ha sido posible leer el número que ha escogido"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:329
msgid "Archive date badly formed"
-msgstr "Fecha de archivo mal formateada"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:330
msgid "We could not read the archive date you provided"
-msgstr "No ha sido posible leer la data de archivo que ha escogido"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:365
msgid "No popup record"
-msgstr "No existe el registro «popup»"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:366
msgid "We could not deduce the popup article you requested"
-msgstr "No ha sido posible leer el artículo «popup» que ha escogido"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:399
msgid "Update error"
-msgstr "Error de actualización"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:400
#: modules/webjournal/lib/webjournal_config.py:431
msgid "There was an internal error"
-msgstr "Ha habido un error interno"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:430
msgid "Journal publishing DB error"
-msgstr "Error en la base dades de publicación de revistas"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:463
msgid "Journal issue error"
-msgstr "Error de número de revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:464
msgid "Issue not found"
-msgstr "No se ha encontrado el número"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:494
msgid "Journal ID error"
-msgstr "Error del código de revista"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:495
msgid "We could not find the id for this journal in the Database"
msgstr ""
-"No ha sido posible encontrar el identificador interno de esta revista en la "
-"base de datos"
#: modules/webjournal/lib/webjournal_config.py:527
#: modules/webjournal/lib/webjournal_config.py:529
#, python-format
msgid "Category \"%(category_name)s\" not found"
-msgstr "No se ha encontrado la categoria «%(category_name)s»"
+msgstr ""
#: modules/webjournal/lib/webjournal_config.py:531
msgid "Sorry, this category does not exist for this journal and issue."
-msgstr "Esta categoria no existe para esta revista y número."
+msgstr ""
#: modules/webjournal/lib/webjournaladminlib.py:350
msgid "Please select an issue"
-msgstr "Seleccione un número"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:77
msgid "WebJournal Admin"
-msgstr "Administración de WebJournal"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:119
#, python-format
msgid "Administrate %(journal_name)s"
-msgstr "Administrar %(journal_name)s"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:158
msgid "Feature a record"
-msgstr "Destacar un registro"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:220
msgid "Email Alert System"
-msgstr "Sistema de alertas por correo electrónico"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:273
msgid "Issue regenerated"
-msgstr "Número regenerado"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:324
msgid "Publishing Interface"
-msgstr "Interfaz de publicación"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:350
msgid "Add Journal"
-msgstr "Añadir una revista"
+msgstr ""
#: modules/webjournal/web/admin/webjournaladmin.py:352
msgid "Edit Settings"
-msgstr "Editar parámetros"
+msgstr ""
#: modules/bibcatalog/lib/bibcatalog_templates.py:43
#, python-format
msgid "You have %i tickets."
-msgstr "Tiene %i tareas."
+msgstr ""
#: modules/bibcatalog/lib/bibcatalog_templates.py:62
msgid "show"
-msgstr "mostrar"
+msgstr ""
#: modules/bibcatalog/lib/bibcatalog_templates.py:63
msgid "close"
-msgstr "cerrar"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:68
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:82
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:125
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:140
msgid "No information available"
-msgstr "No hay información disponible"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:86
msgid "No seminars today"
-msgstr "Hoy no hay seminarios"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:184
msgid "What's on today"
-msgstr "Previsto para hoy"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py:185
msgid "Seminars of the week"
-msgstr "Seminarios de la semana"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_whatsNew.py:163
msgid "There are no new articles for the moment"
-msgstr "De momento no hay artículos nuevos"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_whatsNew.py:285
msgid "What's new"
-msgstr "Es noticia"
+msgstr ""
#: modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py:224
msgid "Under the CERN sky"
-msgstr "Bajo el cielo del CERN"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_article_author.py:48
#, python-format
msgid "About your article at %(url)s"
-msgstr "Sobre su artículo en %(url)s"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_imprint.py:122
msgid "Issue No."
-msgstr "Número"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:221
msgid "Did you know?"
-msgstr "Lo sabía?"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:253
#, python-format
msgid ""
"Hi,\n"
"\n"
"Have a look at the following article:\n"
"<%(url)s>"
msgstr ""
-"Hola,\n"
-"\n"
-"échele un vistazo a este artículo:\n"
-"<%(url)s>"
#: modules/webjournal/lib/elements/bfe_webjournal_article_body.py:260
msgid "Send this article"
-msgstr "Enviar este artículo"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_rss.py:136
msgid "Subscribe by RSS"
-msgstr "Subscribirse vía RSS"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:90
msgid "News Articles"
-msgstr "Noticias"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:91
msgid "Official News"
-msgstr "Noticias oficiales"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:92
msgid "Training and Development"
-msgstr "Formación y desarrollo"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:93
msgid "General Information"
-msgstr "Información general"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:94
msgid "Announcements"
-msgstr "Avisos"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:95
msgid "Training"
-msgstr "Formación"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:96
msgid "Events"
-msgstr "Eventos"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py:97
msgid "Staff Association"
-msgstr "Associación de personal"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:106
msgid "Archive"
-msgstr "Archivo"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:133
msgid "Select Year:"
-msgstr "Seleccionar año:"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:139
msgid "Select Issue:"
-msgstr "Seleccionar número:"
+msgstr ""
#: modules/webjournal/lib/elements/bfe_webjournal_archive.py:143
msgid "Select Date:"
-msgstr "Seleccionar fecha:"
+msgstr ""
#: modules/bibedit/lib/bibedit_webinterface.py:169
msgid "Comparing two record revisions"
-msgstr "Comparar dos revisiones del registro"
+msgstr ""
#: modules/bibedit/lib/bibedit_webinterface.py:192
msgid "Failed to create a ticket"
-msgstr "No se ha podido crear un ticket"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:313
msgid "Next Step"
-msgstr "Paso siguiente"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:314
msgid "Search criteria"
-msgstr "Criterios de búsqueda"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:315
msgid "Output tags"
-msgstr "ETiquetas de visualización"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:316
msgid "Filter collection"
-msgstr "Filtro de colección"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:317
msgid "1. Choose search criteria"
-msgstr "1. Escoja los criterios de búsqueda"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:318
msgid ""
"Specify the criteria you'd like to use for filtering records that will be "
"changed. Use \"Search\" to see which records would have been filtered using "
"these criteria."
msgstr ""
-"Especifique los criterios que quiera para filtrar los registros a cambiar. "
-"Use «Buscar» per a ver qué registros se filtrarían con estos criterios."
#: modules/bibedit/lib/bibeditmulti_templates.py:320
msgid "Preview results"
-msgstr "Previsualización de los resultados"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:552
msgid "2. Define changes"
-msgstr "2. Definición de los cambios"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:553
msgid ""
"Specify fields and their subfields that should be changed in every record "
"matching the search criteria."
msgstr ""
-"Especifique los campos y subcampos a cambiar para cada registro que "
-"concuerde con los criterios de búsqueda."
#: modules/bibedit/lib/bibeditmulti_templates.py:554
msgid "Define new field action"
-msgstr "Definir una nueva acción para el camp"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:555
msgid "Define new subfield action"
-msgstr "Definir una nueva acción para el subcamp"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:557
msgid "Select action"
-msgstr "Seleccionar una acción"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:558
msgid "Add field"
-msgstr "Añadir un campo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:559
msgid "Delete field"
-msgstr "Suprimirlos un campo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:560
msgid "Update field"
-msgstr "Actualizar un campo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:561
msgid "Add subfield"
-msgstr "Añadir un subcampo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:562
msgid "Delete subfield"
-msgstr "Suprimir un subcampo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:563
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:260
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:402
#: modules/bibknowledge/lib/bibknowledge_templates.py:280
#: modules/bibknowledge/lib/bibknowledge_templates.py:528
msgid "Save"
-msgstr "Guardar"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:565
msgid "Replace substring"
-msgstr "Substituir texto"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:566
msgid "Replace full content"
-msgstr "Substituir todo el contenido"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:567
msgid "with"
-msgstr "con"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:568
msgid "when subfield $$"
-msgstr "cuando el subcampo $$"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:569
msgid "new value"
-msgstr "nuevo valor"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:570
msgid "is equal to"
-msgstr "es igual a"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:571
msgid "contains"
-msgstr "contiene"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:572
msgid "condition"
-msgstr "condición"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:573
msgid "when other subfield"
-msgstr "cuando otro subcampo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:574
msgid "when subfield"
-msgstr "cuando el subcampo"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:575
msgid "Apply only to specific field instances"
-msgstr "Aplicar sólo a instancias específicas de campos"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:576
msgid "value"
-msgstr "valor"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:605
msgid "Back to Results"
-msgstr "Volver a los resultados"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_templates.py:703
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:515
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:571
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:590
msgid "records found"
-msgstr "registros encontrados"
+msgstr ""
#: modules/bibedit/lib/bibeditmulti_webinterface.py:102
msgid "Multi-Record Editor"
-msgstr "Editor de múltiples registros"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:116
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:132
msgid "Export Job Overview"
-msgstr "Resumen de tareas de exportación"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:117
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:189
msgid "New Export Job"
-msgstr "Nueva tareq de exportación"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:118
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:443
msgid "Export Job History"
-msgstr "Exportar el histórico de tareas"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:174
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:195
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:323
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:534
msgid "Run"
-msgstr "Ejecutar"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:176
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:325
msgid "New"
-msgstr "Nuevo"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:196
msgid "Last run"
-msgstr "Última actualización"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:256
msgid "Frequency"
-msgstr "Frecuencia"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:257
msgid "Output Format"
-msgstr "Formato de salida"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:258
msgid "Start"
-msgstr "Inicio"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:259
msgid "Output Directory"
-msgstr "Directorio de salida"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:262
msgid "Edit Queries"
-msgstr "Editar búsquedas"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:348
msgid "Output Fields"
-msgstr "Campos de salida"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:400
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:460
msgid "Output fields"
-msgstr "Campos de salida"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:438
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:605
msgid "Download"
-msgstr "Descargar"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:439
msgid "View as: "
-msgstr "Visualizar como:"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:533
msgid "Job"
-msgstr "Tarea"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:588
msgid "Total"
-msgstr "Total"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:653
msgid "All"
-msgstr "Todas"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:654
msgid "None"
-msgstr "Ninguno"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:672
msgid "Manually"
-msgstr "Manualmente"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:674
msgid "Daily"
-msgstr "Diariamente"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:676
msgid "Weekly"
-msgstr "Semanalmente"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_templates.py:678
msgid "Monthly"
-msgstr "Mensualmente"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:192
msgid "Edit Export Job"
-msgstr "Editar la tarea de exportación"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:239
msgid "Query Results"
-msgstr "Resultados de la búsqueda"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:246
msgid "Export Job Queries"
-msgstr "Exportar las búsquedas de la tarea"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:320
msgid "New Query"
-msgstr "Nueva búsqueda"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:325
msgid "Edit Query"
-msgstr "Editar búsqueda"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:356
msgid "Export Job Results"
-msgstr "Exportar los resultados de la tarea"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:389
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:423
msgid "Export Job Result"
-msgstr "Resultado de la tarea de exportación"
+msgstr ""
#: modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py:465
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:500
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:515
#: modules/bibexport/lib/bibexport_method_fieldexporter.py:530
msgid "You are not authorised to access this resource."
-msgstr "No está autorizado a acceder a este recurso."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:399
#: modules/bibcirculation/lib/bibcirculation_templates.py:8849
#: modules/bibcirculation/lib/bibcirculation_templates.py:10425
msgid "Loan information"
-msgstr "Información de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:403
-#, fuzzy
msgid "This book has been sent to you:"
-msgstr "Se le ha enviado este libro..."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:429
#: modules/bibcirculation/lib/bibcirculation_templates.py:1756
#: modules/bibcirculation/lib/bibcirculation_templates.py:2102
#: modules/bibcirculation/lib/bibcirculation_templates.py:5993
#: modules/bibcirculation/lib/bibcirculation_templates.py:16119
msgid "Author"
-msgstr "Autor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:430
msgid "Editor"
-msgstr "Editor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:431
#: modules/bibcirculation/lib/bibcirculation_templates.py:1759
#: modules/bibcirculation/lib/bibcirculation_templates.py:2745
#: modules/bibcirculation/lib/bibcirculation_templates.py:3098
#: modules/bibcirculation/lib/bibcirculation_templates.py:5995
#: modules/bibcirculation/lib/bibcirculation_templates.py:6099
#: modules/bibcirculation/lib/bibcirculation_templates.py:7156
#: modules/bibcirculation/lib/bibcirculation_templates.py:7435
#: modules/bibcirculation/lib/bibcirculation_templates.py:8087
#: modules/bibcirculation/lib/bibcirculation_templates.py:8244
#: modules/bibcirculation/lib/bibcirculation_templates.py:9599
#: modules/bibcirculation/lib/bibcirculation_templates.py:9838
#: modules/bibcirculation/lib/bibcirculation_templates.py:10074
#: modules/bibcirculation/lib/bibcirculation_templates.py:10317
#: modules/bibcirculation/lib/bibcirculation_templates.py:10529
#: modules/bibcirculation/lib/bibcirculation_templates.py:10752
#: modules/bibcirculation/lib/bibcirculation_templates.py:11211
#: modules/bibcirculation/lib/bibcirculation_templates.py:11357
#: modules/bibcirculation/lib/bibcirculation_templates.py:11862
#: modules/bibcirculation/lib/bibcirculation_templates.py:11956
#: modules/bibcirculation/lib/bibcirculation_templates.py:12157
#: modules/bibcirculation/lib/bibcirculation_templates.py:12839
#: modules/bibcirculation/lib/bibcirculation_templates.py:12940
#: modules/bibcirculation/lib/bibcirculation_templates.py:13612
#: modules/bibcirculation/lib/bibcirculation_templates.py:13871
#: modules/bibcirculation/lib/bibcirculation_templates.py:14926
#: modules/bibcirculation/lib/bibcirculation_templates.py:15149
#: modules/bibcirculation/lib/bibcirculation_templates.py:15425
#: modules/bibcirculation/lib/bibcirculation_templates.py:16846
#: modules/bibcirculation/lib/bibcirculation_templates.py:17033
#: modules/bibcirculation/lib/bibcirculation_templates.py:17917
msgid "ISBN"
-msgstr "ISBN"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:455
#: modules/bibcirculation/lib/bibcirculation_templates.py:2389
#: modules/bibcirculation/lib/bibcirculation_templates.py:2506
#: modules/bibcirculation/lib/bibcirculation_templates.py:2738
#: modules/bibcirculation/lib/bibcirculation_templates.py:4456
#: modules/bibcirculation/lib/bibcirculation_templates.py:5604
#: modules/bibcirculation/lib/bibcirculation_templates.py:6189
#: modules/bibcirculation/lib/bibcirculation_templates.py:6238
#: modules/bibcirculation/lib/bibcirculation_templates.py:6535
#: modules/bibcirculation/lib/bibcirculation_templates.py:9027
#: modules/bibcirculation/lib/bibcirculation_templates.py:9272
#: modules/bibcirculation/lib/bibcirculation_templates.py:9881
#: modules/bibcirculation/lib/bibcirculation_templates.py:10359
#: modules/bibcirculation/lib/bibcirculation_templates.py:11223
#: modules/bibcirculation/lib/bibcirculation_templates.py:12212
#: modules/bibcirculation/lib/bibcirculation_templates.py:12996
#: modules/bibcirculation/lib/bibcirculation_templates.py:15517
msgid "Mailbox"
-msgstr "Buzón"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:456
#: modules/bibcirculation/lib/bibcirculation_templates.py:2388
#: modules/bibcirculation/lib/bibcirculation_templates.py:2505
#: modules/bibcirculation/lib/bibcirculation_templates.py:2737
#: modules/bibcirculation/lib/bibcirculation_templates.py:3941
#: modules/bibcirculation/lib/bibcirculation_templates.py:4044
#: modules/bibcirculation/lib/bibcirculation_templates.py:4267
#: modules/bibcirculation/lib/bibcirculation_templates.py:4328
#: modules/bibcirculation/lib/bibcirculation_templates.py:4455
#: modules/bibcirculation/lib/bibcirculation_templates.py:5603
#: modules/bibcirculation/lib/bibcirculation_templates.py:6188
#: modules/bibcirculation/lib/bibcirculation_templates.py:6237
#: modules/bibcirculation/lib/bibcirculation_templates.py:6534
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6702
#: modules/bibcirculation/lib/bibcirculation_templates.py:6931
#: modules/bibcirculation/lib/bibcirculation_templates.py:7030
#: modules/bibcirculation/lib/bibcirculation_templates.py:9026
#: modules/bibcirculation/lib/bibcirculation_templates.py:9271
#: modules/bibcirculation/lib/bibcirculation_templates.py:9880
#: modules/bibcirculation/lib/bibcirculation_templates.py:10358
#: modules/bibcirculation/lib/bibcirculation_templates.py:11222
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14143
#: modules/bibcirculation/lib/bibcirculation_templates.py:14392
#: modules/bibcirculation/lib/bibcirculation_templates.py:14463
#: modules/bibcirculation/lib/bibcirculation_templates.py:14714
#: modules/bibcirculation/lib/bibcirculation_templates.py:15516
msgid "Address"
-msgstr "Dirección"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:467
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:673
#: modules/bibcirculation/lib/bibcirculation_templates.py:2513
#: modules/bibcirculation/lib/bibcirculation_templates.py:3162
#: modules/bibcirculation/lib/bibcirculation_templates.py:3603
#: modules/bibcirculation/lib/bibcirculation_templates.py:3808
#: modules/bibcirculation/lib/bibcirculation_templates.py:4787
#: modules/bibcirculation/lib/bibcirculation_templates.py:4980
#: modules/bibcirculation/lib/bibcirculation_templates.py:5158
#: modules/bibcirculation/lib/bibcirculation_templates.py:5430
#: modules/bibcirculation/lib/bibcirculation_templates.py:7452
#: modules/bibcirculation/lib/bibcirculation_templates.py:8853
#: modules/bibcirculation/lib/bibcirculation_templates.py:9331
#: modules/bibcirculation/lib/bibcirculation_templates.py:10427
#: modules/bibcirculation/lib/bibcirculation_templates.py:11519
#: modules/bibcirculation/lib/bibcirculation_templates.py:12463
#: modules/bibcirculation/lib/bibcirculation_templates.py:12564
#: modules/bibcirculation/lib/bibcirculation_templates.py:13261
#: modules/bibcirculation/lib/bibcirculation_templates.py:13373
#: modules/bibcirculation/lib/bibcirculation_templates.py:15588
#: modules/bibcirculation/lib/bibcirculation_templates.py:17934
#: modules/bibcirculation/lib/bibcirculation_templates.py:18038
msgid "Due date"
-msgstr "Devolver el"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:513
msgid "List of pending hold requests"
-msgstr "Lista de peticiones de reserva pendientes"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:533
#: modules/bibcirculation/lib/bibcirculation_templates.py:1754
#: modules/bibcirculation/lib/bibcirculation_templates.py:2783
#: modules/bibcirculation/lib/bibcirculation_templates.py:2847
#: modules/bibcirculation/lib/bibcirculation_templates.py:2956
#: modules/bibcirculation/lib/bibcirculation_templates.py:3711
#: modules/bibcirculation/lib/bibcirculation_templates.py:3803
#: modules/bibcirculation/lib/bibcirculation_templates.py:4976
#: modules/bibcirculation/lib/bibcirculation_templates.py:5154
#: modules/bibcirculation/lib/bibcirculation_templates.py:5427
#: modules/bibcirculation/lib/bibcirculation_templates.py:11513
#: modules/bibcirculation/lib/bibcirculation_templates.py:11633
msgid "Borrower"
-msgstr "Usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:534
#: modules/bibcirculation/lib/bibcirculation_templates.py:671
#: modules/bibcirculation/lib/bibcirculation_templates.py:769
#: modules/bibcirculation/lib/bibcirculation_templates.py:860
#: modules/bibcirculation/lib/bibcirculation_templates.py:1148
#: modules/bibcirculation/lib/bibcirculation_templates.py:1348
#: modules/bibcirculation/lib/bibcirculation_templates.py:1523
#: modules/bibcirculation/lib/bibcirculation_templates.py:1755
#: modules/bibcirculation/lib/bibcirculation_templates.py:1798
#: modules/bibcirculation/lib/bibcirculation_templates.py:2511
#: modules/bibcirculation/lib/bibcirculation_templates.py:2795
#: modules/bibcirculation/lib/bibcirculation_templates.py:2848
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3598
#: modules/bibcirculation/lib/bibcirculation_templates.py:4668
#: modules/bibcirculation/lib/bibcirculation_templates.py:4784
#: modules/bibcirculation/lib/bibcirculation_templates.py:4977
#: modules/bibcirculation/lib/bibcirculation_templates.py:5155
#: modules/bibcirculation/lib/bibcirculation_templates.py:5633
#: modules/bibcirculation/lib/bibcirculation_templates.py:10910
#: modules/bibcirculation/lib/bibcirculation_templates.py:11514
#: modules/bibcirculation/lib/bibcirculation_templates.py:11634
#: modules/bibcirculation/lib/bibcirculation_templates.py:15583
#: modules/bibcirculation/lib/bibcirculation_templates.py:15863
msgid "Item"
-msgstr "Ítem"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:535
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:1149
#: modules/bibcirculation/lib/bibcirculation_templates.py:1349
#: modules/bibcirculation/lib/bibcirculation_templates.py:2512
#: modules/bibcirculation/lib/bibcirculation_templates.py:2958
#: modules/bibcirculation/lib/bibcirculation_templates.py:3163
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3600
#: modules/bibcirculation/lib/bibcirculation_templates.py:3713
#: modules/bibcirculation/lib/bibcirculation_templates.py:3805
#: modules/bibcirculation/lib/bibcirculation_templates.py:4670
#: modules/bibcirculation/lib/bibcirculation_templates.py:7453
#: modules/bibcirculation/lib/bibcirculation_templates.py:7571
#: modules/bibcirculation/lib/bibcirculation_templates.py:7810
#: modules/bibcirculation/lib/bibcirculation_templates.py:8106
#: modules/bibcirculation/lib/bibcirculation_templates.py:8270
#: modules/bibcirculation/lib/bibcirculation_templates.py:8478
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:10638
#: modules/bibcirculation/lib/bibcirculation_templates.py:10820
#: modules/bibcirculation/lib/bibcirculation_templates.py:12419
#: modules/bibcirculation/lib/bibcirculation_templates.py:12562
#: modules/bibcirculation/lib/bibcirculation_templates.py:12647
#: modules/bibcirculation/lib/bibcirculation_templates.py:13217
#: modules/bibcirculation/lib/bibcirculation_templates.py:13371
#: modules/bibcirculation/lib/bibcirculation_templates.py:13458
#: modules/bibcirculation/lib/bibcirculation_templates.py:15864
#: modules/bibcirculation/lib/bibcirculation_templates.py:17935
#: modules/bibcirculation/lib/bibcirculation_templates.py:18039
msgid "Library"
-msgstr "Biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:536
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:1150
#: modules/bibcirculation/lib/bibcirculation_templates.py:1350
#: modules/bibcirculation/lib/bibcirculation_templates.py:2512
#: modules/bibcirculation/lib/bibcirculation_templates.py:2959
#: modules/bibcirculation/lib/bibcirculation_templates.py:3168
#: modules/bibcirculation/lib/bibcirculation_templates.py:3507
#: modules/bibcirculation/lib/bibcirculation_templates.py:3601
#: modules/bibcirculation/lib/bibcirculation_templates.py:3714
#: modules/bibcirculation/lib/bibcirculation_templates.py:3806
#: modules/bibcirculation/lib/bibcirculation_templates.py:4671
#: modules/bibcirculation/lib/bibcirculation_templates.py:6003
#: modules/bibcirculation/lib/bibcirculation_templates.py:7458
#: modules/bibcirculation/lib/bibcirculation_templates.py:7613
#: modules/bibcirculation/lib/bibcirculation_templates.py:7811
#: modules/bibcirculation/lib/bibcirculation_templates.py:8107
#: modules/bibcirculation/lib/bibcirculation_templates.py:8297
#: modules/bibcirculation/lib/bibcirculation_templates.py:8479
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:15865
#: modules/bibcirculation/lib/bibcirculation_templates.py:17940
#: modules/bibcirculation/lib/bibcirculation_templates.py:18040
msgid "Location"
-msgstr "Lugar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:537
#: modules/bibcirculation/lib/bibcirculation_templates.py:977
#: modules/bibcirculation/lib/bibcirculation_templates.py:1153
#: modules/bibcirculation/lib/bibcirculation_templates.py:1351
#: modules/bibcirculation/lib/bibcirculation_templates.py:1525
#: modules/bibcirculation/lib/bibcirculation_templates.py:1800
#: modules/bibcirculation/lib/bibcirculation_templates.py:2850
#: modules/bibcirculation/lib/bibcirculation_templates.py:2960
#: modules/bibcirculation/lib/bibcirculation_templates.py:3507
#: modules/bibcirculation/lib/bibcirculation_templates.py:3715
#: modules/bibcirculation/lib/bibcirculation_templates.py:4672
#: modules/bibcirculation/lib/bibcirculation_templates.py:5288
#: modules/bibcirculation/lib/bibcirculation_templates.py:15866
#: modules/bibcirculation/lib/bibcirculation_templates.py:17765
msgid "From"
-msgstr "Desde"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:538
#: modules/bibcirculation/lib/bibcirculation_templates.py:977
#: modules/bibcirculation/lib/bibcirculation_templates.py:1154
#: modules/bibcirculation/lib/bibcirculation_templates.py:1352
#: modules/bibcirculation/lib/bibcirculation_templates.py:1526
#: modules/bibcirculation/lib/bibcirculation_templates.py:1801
#: modules/bibcirculation/lib/bibcirculation_templates.py:2851
#: modules/bibcirculation/lib/bibcirculation_templates.py:2961
#: modules/bibcirculation/lib/bibcirculation_templates.py:3508
#: modules/bibcirculation/lib/bibcirculation_templates.py:3716
#: modules/bibcirculation/lib/bibcirculation_templates.py:4673
#: modules/bibcirculation/lib/bibcirculation_templates.py:5290
#: modules/bibcirculation/lib/bibcirculation_templates.py:15867
#: modules/bibcirculation/lib/bibcirculation_templates.py:17766
#: modules/bibknowledge/lib/bibknowledge_templates.py:363
msgid "To"
-msgstr "Hasta"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_utils.py:539
#: modules/bibcirculation/lib/bibcirculation_templates.py:770
#: modules/bibcirculation/lib/bibcirculation_templates.py:1155
#: modules/bibcirculation/lib/bibcirculation_templates.py:1353
#: modules/bibcirculation/lib/bibcirculation_templates.py:1527
#: modules/bibcirculation/lib/bibcirculation_templates.py:1802
#: modules/bibcirculation/lib/bibcirculation_templates.py:2852
#: modules/bibcirculation/lib/bibcirculation_templates.py:2962
#: modules/bibcirculation/lib/bibcirculation_templates.py:3508
#: modules/bibcirculation/lib/bibcirculation_templates.py:3717
#: modules/bibcirculation/lib/bibcirculation_templates.py:4674
#: modules/bibcirculation/lib/bibcirculation_templates.py:12357
#: modules/bibcirculation/lib/bibcirculation_templates.py:12420
#: modules/bibcirculation/lib/bibcirculation_templates.py:12563
#: modules/bibcirculation/lib/bibcirculation_templates.py:12648
#: modules/bibcirculation/lib/bibcirculation_templates.py:13152
#: modules/bibcirculation/lib/bibcirculation_templates.py:13218
#: modules/bibcirculation/lib/bibcirculation_templates.py:13371
#: modules/bibcirculation/lib/bibcirculation_templates.py:13458
#: modules/bibcirculation/lib/bibcirculation_templates.py:15585
#: modules/bibcirculation/lib/bibcirculation_templates.py:15868
msgid "Request date"
-msgstr "Fecha de petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:117
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:162
msgid "You are not authorized to use loans."
-msgstr "No está autorizado a hacer uso del préstamo."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:167
#: modules/bibcirculation/lib/bibcirculation_templates.py:633
msgid "Loans - historical overview"
-msgstr "Histórico de préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:222
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:317
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:404
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:493
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:565
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:640
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:692
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:774
msgid "You are not authorized to use ill."
-msgstr "No está autorizado a hacer uso del préstamo interbibliotecario."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:232
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:340
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:426
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:723
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:792
#: modules/bibcirculation/lib/bibcirculation_templates.py:11346
msgid "Interlibrary loan request for books"
-msgstr "Peticiones de préstamo interbibliotecario de libros"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:503
-#, fuzzy
msgid "Purchase request"
-msgstr "Nueva petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:570
msgid ""
"Payment method information is mandatory. Please, type your budget code or "
"tick the 'cash' checkbox."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:601
#: modules/bibcirculation/lib/bibcirculation_templates.py:206
msgid "Register purchase request"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:649
-#, fuzzy
msgid "Interlibrary loan request for articles"
-msgstr "Peticiones de préstamo interbibliotecario de libros"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_webinterface.py:721
msgid "Wrong user id"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:110
msgid "Main navigation links"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
-#, fuzzy
msgid "Loan"
-msgstr "Préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:111
#: modules/bibcirculation/lib/bibcirculation_templates.py:4845
#: modules/bibcirculation/lib/bibcirculation_templates.py:5483
-#, fuzzy
msgid "Return"
-msgstr "Devuelto"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:112
#: modules/bibcirculation/lib/bibcirculation_templates.py:371
#: modules/bibcirculation/lib/bibcirculation_templates.py:9355
msgid "Request"
-msgstr "Petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:112
-#, fuzzy
msgid "Borrowers"
-msgstr "Usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:154
#: modules/bibcirculation/lib/bibcirculation_templates.py:208
msgid "Lists"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:155
-#, fuzzy
msgid "Last loans"
-msgstr "Última actualización"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:156
-#, fuzzy
msgid "Overdue loans"
-msgstr "Cartas de reclamación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:157
msgid "Items on shelf with holds"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:158
msgid "Items on loan with holds"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:159
msgid "Overdue loans with holds"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:160
-#, fuzzy
msgid "Ordered books"
-msgstr "Fecha de pedido"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:160
msgid "Others"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:161
#: modules/bibcirculation/lib/bibcirculation_templates.py:6836
#: modules/bibcirculation/lib/bibcirculation_templates.py:8643
-#, fuzzy
msgid "Libraries"
-msgstr "Bibliotecas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:162
-#, fuzzy
msgid "Add new library"
-msgstr "Añadir otro fichero"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:163
-#, fuzzy
msgid "Update info"
-msgstr "Actualizar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:164
-#, fuzzy
msgid "Acquisitions"
-msgstr "Acciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:165
-#, fuzzy
msgid "List of ordered books"
-msgstr "Lista de libros en préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:166
-#, fuzzy
msgid "Order new book"
-msgstr "Pedir otra copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:167
-#, fuzzy
msgid "Vendors"
-msgstr "Proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:168
-#, fuzzy
msgid "Add new vendor"
-msgstr "Añadir otro fichero"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:203
#: modules/bibcirculation/lib/bibcirculation_templates.py:4606
#: modules/bibcirculation/lib/bibcirculation_templates.py:4613
msgid "ILL"
-msgstr "PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:204
-#, fuzzy
msgid "Register Book request"
-msgstr "No hay peticiones."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:205
-#, fuzzy
msgid "Register Article"
-msgstr "Noticias"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:209
msgid "Purchase"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:216
-#, fuzzy
msgid "Admin guide"
-msgstr "Páginas de administración"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:217
msgid "Contact Support"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:267
-#, fuzzy
msgid "This record does not exist."
-msgstr "La página solicitada no existe"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:271
-#, fuzzy
msgid "This record has no copies."
-msgstr "Este ítem no tiene ejemplares."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:278
-#, fuzzy
msgid "Add a new copy"
-msgstr "Añadir otra copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:291
-#, fuzzy
msgid "ILL services"
-msgstr "Petición de préstamo interbibliotecario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:295
#, python-format
msgid ""
"All the copies of %(strong_tag_open)s%(title)s%(strong_tag_close)s are "
"missing. You can request a copy using %(strong_tag_open)s%(ill_link)s"
"%(strong_tag_close)s"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:302
#: modules/bibcirculation/lib/bibcirculation_templates.py:9308
msgid "This item has no holdings."
-msgstr "Este ítem no tiene ejemplares."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:356
#: modules/bibcirculation/lib/bibcirculation_templates.py:1354
#: modules/bibcirculation/lib/bibcirculation_templates.py:11641
msgid "Options"
-msgstr "Opciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:357
#: modules/bibcirculation/lib/bibcirculation_templates.py:3173
#: modules/bibcirculation/lib/bibcirculation_templates.py:6010
#: modules/bibcirculation/lib/bibcirculation_templates.py:7463
#: modules/bibcirculation/lib/bibcirculation_templates.py:7661
#: modules/bibcirculation/lib/bibcirculation_templates.py:7814
#: modules/bibcirculation/lib/bibcirculation_templates.py:8108
#: modules/bibcirculation/lib/bibcirculation_templates.py:8328
#: modules/bibcirculation/lib/bibcirculation_templates.py:8482
#: modules/bibcirculation/lib/bibcirculation_templates.py:8855
#: modules/bibcirculation/lib/bibcirculation_templates.py:9330
#: modules/bibcirculation/lib/bibcirculation_templates.py:17945
#: modules/bibcirculation/lib/bibcirculation_templates.py:18041
msgid "Loan period"
-msgstr "Período de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:358
#: modules/bibcirculation/lib/bibcirculation_templates.py:1617
#: modules/bibcirculation/lib/bibcirculation_templates.py:2511
#: modules/bibcirculation/lib/bibcirculation_templates.py:3159
#: modules/bibcirculation/lib/bibcirculation_templates.py:3506
#: modules/bibcirculation/lib/bibcirculation_templates.py:3599
#: modules/bibcirculation/lib/bibcirculation_templates.py:3712
#: modules/bibcirculation/lib/bibcirculation_templates.py:3804
#: modules/bibcirculation/lib/bibcirculation_templates.py:4785
#: modules/bibcirculation/lib/bibcirculation_templates.py:4978
#: modules/bibcirculation/lib/bibcirculation_templates.py:5156
#: modules/bibcirculation/lib/bibcirculation_templates.py:5428
#: modules/bibcirculation/lib/bibcirculation_templates.py:5636
#: modules/bibcirculation/lib/bibcirculation_templates.py:6048
#: modules/bibcirculation/lib/bibcirculation_templates.py:7450
#: modules/bibcirculation/lib/bibcirculation_templates.py:7570
#: modules/bibcirculation/lib/bibcirculation_templates.py:7809
#: modules/bibcirculation/lib/bibcirculation_templates.py:8104
#: modules/bibcirculation/lib/bibcirculation_templates.py:8269
#: modules/bibcirculation/lib/bibcirculation_templates.py:8477
#: modules/bibcirculation/lib/bibcirculation_templates.py:8851
#: modules/bibcirculation/lib/bibcirculation_templates.py:9042
#: modules/bibcirculation/lib/bibcirculation_templates.py:9329
#: modules/bibcirculation/lib/bibcirculation_templates.py:9600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9839
#: modules/bibcirculation/lib/bibcirculation_templates.py:10075
#: modules/bibcirculation/lib/bibcirculation_templates.py:10318
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10814
#: modules/bibcirculation/lib/bibcirculation_templates.py:12376
#: modules/bibcirculation/lib/bibcirculation_templates.py:12482
#: modules/bibcirculation/lib/bibcirculation_templates.py:12580
#: modules/bibcirculation/lib/bibcirculation_templates.py:12663
#: modules/bibcirculation/lib/bibcirculation_templates.py:17932
#: modules/bibcirculation/lib/bibcirculation_templates.py:18036
msgid "Barcode"
-msgstr "Código de barras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:402
msgid "See this book on BibCirculation"
-msgstr "Ver este libro en BibCirculation"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:453
-#, fuzzy
msgid "This item is not for loan."
-msgstr "Este ítem no tiene ejemplares."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:458
msgid "Server busy. Please, try again in a few seconds."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:463
msgid ""
"Your request has been registered and the document will be sent to you via "
"internal mail."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:468
-#, fuzzy
msgid "Your request has been registered."
-msgstr "Su mensaje se ha enviado al revisor."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:473
#: modules/bibcirculation/lib/bibcirculation_templates.py:481
msgid "It is not possible to validate your request. "
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:474
#: modules/bibcirculation/lib/bibcirculation_templates.py:482
msgid "Your office address is not available. "
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:475
#: modules/bibcirculation/lib/bibcirculation_templates.py:483
-#, fuzzy, python-format
+#, python-format
msgid "Please contact %(librarian_email)s"
-msgstr "En caso de duda, póngase en contacto con %(x_admin_email)s"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:489
-#, fuzzy
msgid "Your purchase request has been registered."
-msgstr "Su registrado correctamente la nueva petición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:509
-#, fuzzy
msgid "No messages to be displayed"
-msgstr "El mensaje no se ha podido suprimir."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:539
#: modules/bibcirculation/lib/bibcirculation_templates.py:544
-#, fuzzy
msgid "0 borrowers found."
-msgstr "No se ha encontrado ningún usuario."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:539
-#, fuzzy
msgid "Search by CCID."
-msgstr "Buscar biblioteca por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:543
msgid "Register new borrower."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:569
msgid "Borrower(s)"
-msgstr "Usuarios"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:607
#: modules/bibcirculation/lib/bibcirculation_templates.py:894
#: modules/bibcirculation/lib/bibcirculation_templates.py:979
#: modules/bibcirculation/lib/bibcirculation_templates.py:1123
#: modules/bibcirculation/lib/bibcirculation_templates.py:1268
#: modules/bibcirculation/lib/bibcirculation_templates.py:1325
#: modules/bibcirculation/lib/bibcirculation_templates.py:1456
#: modules/bibcirculation/lib/bibcirculation_templates.py:1577
#: modules/bibcirculation/lib/bibcirculation_templates.py:1977
#: modules/bibcirculation/lib/bibcirculation_templates.py:2052
#: modules/bibcirculation/lib/bibcirculation_templates.py:2143
#: modules/bibcirculation/lib/bibcirculation_templates.py:2392
#: modules/bibcirculation/lib/bibcirculation_templates.py:2573
#: modules/bibcirculation/lib/bibcirculation_templates.py:2814
#: modules/bibcirculation/lib/bibcirculation_templates.py:2905
#: modules/bibcirculation/lib/bibcirculation_templates.py:3012
#: modules/bibcirculation/lib/bibcirculation_templates.py:3457
#: modules/bibcirculation/lib/bibcirculation_templates.py:3547
#: modules/bibcirculation/lib/bibcirculation_templates.py:3660
#: modules/bibcirculation/lib/bibcirculation_templates.py:3758
#: modules/bibcirculation/lib/bibcirculation_templates.py:3852
#: modules/bibcirculation/lib/bibcirculation_templates.py:3964
#: modules/bibcirculation/lib/bibcirculation_templates.py:4146
#: modules/bibcirculation/lib/bibcirculation_templates.py:4354
#: modules/bibcirculation/lib/bibcirculation_templates.py:4615
#: modules/bibcirculation/lib/bibcirculation_templates.py:4725
#: modules/bibcirculation/lib/bibcirculation_templates.py:4896
#: modules/bibcirculation/lib/bibcirculation_templates.py:4953
#: modules/bibcirculation/lib/bibcirculation_templates.py:5072
#: modules/bibcirculation/lib/bibcirculation_templates.py:5129
#: modules/bibcirculation/lib/bibcirculation_templates.py:5253
#: modules/bibcirculation/lib/bibcirculation_templates.py:5365
#: modules/bibcirculation/lib/bibcirculation_templates.py:5524
#: modules/bibcirculation/lib/bibcirculation_templates.py:5672
#: modules/bibcirculation/lib/bibcirculation_templates.py:5771
#: modules/bibcirculation/lib/bibcirculation_templates.py:5871
#: modules/bibcirculation/lib/bibcirculation_templates.py:6191
#: modules/bibcirculation/lib/bibcirculation_templates.py:6258
#: modules/bibcirculation/lib/bibcirculation_templates.py:6278
#: modules/bibcirculation/lib/bibcirculation_templates.py:6539
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6705
#: modules/bibcirculation/lib/bibcirculation_templates.py:6806
#: modules/bibcirculation/lib/bibcirculation_templates.py:6866
#: modules/bibcirculation/lib/bibcirculation_templates.py:6962
#: modules/bibcirculation/lib/bibcirculation_templates.py:7032
#: modules/bibcirculation/lib/bibcirculation_templates.py:7179
#: modules/bibcirculation/lib/bibcirculation_templates.py:7251
#: modules/bibcirculation/lib/bibcirculation_templates.py:7309
#: modules/bibcirculation/lib/bibcirculation_templates.py:7725
#: modules/bibcirculation/lib/bibcirculation_templates.py:7817
#: modules/bibcirculation/lib/bibcirculation_templates.py:7948
#: modules/bibcirculation/lib/bibcirculation_templates.py:8005
#: modules/bibcirculation/lib/bibcirculation_templates.py:8156
#: modules/bibcirculation/lib/bibcirculation_templates.py:8395
#: modules/bibcirculation/lib/bibcirculation_templates.py:8485
#: modules/bibcirculation/lib/bibcirculation_templates.py:8599
#: modules/bibcirculation/lib/bibcirculation_templates.py:8675
#: modules/bibcirculation/lib/bibcirculation_templates.py:8779
#: modules/bibcirculation/lib/bibcirculation_templates.py:8901
#: modules/bibcirculation/lib/bibcirculation_templates.py:9071
#: modules/bibcirculation/lib/bibcirculation_templates.py:9190
#: modules/bibcirculation/lib/bibcirculation_templates.py:9461
#: modules/bibcirculation/lib/bibcirculation_templates.py:9946
#: modules/bibcirculation/lib/bibcirculation_templates.py:10430
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10822
#: modules/bibcirculation/lib/bibcirculation_templates.py:11073
#: modules/bibcirculation/lib/bibcirculation_templates.py:11238
#: modules/bibcirculation/lib/bibcirculation_templates.py:11436
#: modules/bibcirculation/lib/bibcirculation_templates.py:12703
#: modules/bibcirculation/lib/bibcirculation_templates.py:13518
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13957
#: modules/bibcirculation/lib/bibcirculation_templates.py:14072
#: modules/bibcirculation/lib/bibcirculation_templates.py:14145
#: modules/bibcirculation/lib/bibcirculation_templates.py:14248
#: modules/bibcirculation/lib/bibcirculation_templates.py:14315
#: modules/bibcirculation/lib/bibcirculation_templates.py:14393
#: modules/bibcirculation/lib/bibcirculation_templates.py:14464
#: modules/bibcirculation/lib/bibcirculation_templates.py:14571
#: modules/bibcirculation/lib/bibcirculation_templates.py:14637
#: modules/bibcirculation/lib/bibcirculation_templates.py:14735
#: modules/bibcirculation/lib/bibcirculation_templates.py:14818
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
#: modules/bibcirculation/lib/bibcirculation_templates.py:15532
#: modules/bibcirculation/lib/bibcirculation_templates.py:15685
#: modules/bibcirculation/lib/bibcirculation_templates.py:15788
#: modules/bibcirculation/lib/bibcirculation_templates.py:16055
#: modules/bibcirculation/lib/bibcirculation_templates.py:16161
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17387
#: modules/bibcirculation/lib/bibcirculation_templates.py:17788
#: modules/bibcirculation/lib/bibcirculation_templates.py:18076
#: modules/bibknowledge/lib/bibknowledgeadmin.py:142
msgid "Back"
-msgstr "Atrás"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:628
#: modules/bibcirculation/lib/bibcirculation_templates.py:4879
msgid "Renew all loans"
-msgstr "Renovar todos los préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:647
msgid "You don't have any book on loan."
-msgstr "No tiene ningún libro en préstamo."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:672
#: modules/bibcirculation/lib/bibcirculation_templates.py:3602
#: modules/bibcirculation/lib/bibcirculation_templates.py:3807
#: modules/bibcirculation/lib/bibcirculation_templates.py:4979
#: modules/bibcirculation/lib/bibcirculation_templates.py:5157
#: modules/bibcirculation/lib/bibcirculation_templates.py:5429
msgid "Loaned on"
-msgstr "Prestado en"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:674
#: modules/bibcirculation/lib/bibcirculation_templates.py:772
msgid "Action(s)"
-msgstr "Acciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:686
#: modules/bibcirculation/lib/bibcirculation_templates.py:4844
#: modules/bibcirculation/lib/bibcirculation_templates.py:5482
msgid "Renew"
-msgstr "Renovar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:739
#: modules/bibcirculation/lib/bibcirculation_templates.py:768
msgid "Your Requests"
-msgstr "Sus peticiones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:740
msgid "You don't have any request (waiting or pending)."
-msgstr "No tiene ninguna petición (esperando o pendiente)."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:742
#: modules/bibcirculation/lib/bibcirculation_templates.py:823
#: modules/bibcirculation/lib/bibcirculation_templates.py:1017
#: modules/bibcirculation/lib/bibcirculation_templates.py:1054
#: modules/bibcirculation/lib/bibcirculation_templates.py:1497
#: modules/bibcirculation/lib/bibcirculation_templates.py:2605
#: modules/bibcirculation/lib/bibcirculation_templates.py:2640
#: modules/bibcirculation/lib/bibcirculation_templates.py:2746
#: modules/bibcirculation/lib/bibcirculation_templates.py:6313
#: modules/bibcirculation/lib/bibcirculation_templates.py:6740
#: modules/bibcirculation/lib/bibcirculation_templates.py:7072
#: modules/bibcirculation/lib/bibcirculation_templates.py:7860
#: modules/bibcirculation/lib/bibcirculation_templates.py:8531
#: modules/bibcirculation/lib/bibcirculation_templates.py:9502
#: modules/bibcirculation/lib/bibcirculation_templates.py:9979
#: modules/bibcirculation/lib/bibcirculation_templates.py:10864
#: modules/bibcirculation/lib/bibcirculation_templates.py:11280
#: modules/bibcirculation/lib/bibcirculation_templates.py:11470
#: modules/bibcirculation/lib/bibcirculation_templates.py:13998
#: modules/bibcirculation/lib/bibcirculation_templates.py:14184
#: modules/bibcirculation/lib/bibcirculation_templates.py:14503
#: modules/bibcirculation/lib/bibcirculation_templates.py:15825
msgid "Back to home"
-msgstr "Volver al inicio"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:861
msgid "Loaned"
-msgstr "prestado"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:862
msgid "Returned"
-msgstr "Devuelto"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:863
msgid "Renewalls"
-msgstr "Renovaciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:976
msgid "Enter your period of interest"
-msgstr "Entre el período en el que está interesado"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:979
#: modules/bibcirculation/lib/bibcirculation_templates.py:2815
#: modules/bibcirculation/lib/bibcirculation_templates.py:4361
#: modules/bibcirculation/lib/bibcirculation_templates.py:5673
#: modules/bibcirculation/lib/bibcirculation_templates.py:5772
#: modules/bibcirculation/lib/bibcirculation_templates.py:5873
#: modules/bibcirculation/lib/bibcirculation_templates.py:6705
#: modules/bibcirculation/lib/bibcirculation_templates.py:8485
#: modules/bibcirculation/lib/bibcirculation_templates.py:8780
#: modules/bibcirculation/lib/bibcirculation_templates.py:9072
#: modules/bibcirculation/lib/bibcirculation_templates.py:9461
#: modules/bibcirculation/lib/bibcirculation_templates.py:11074
#: modules/bibcirculation/lib/bibcirculation_templates.py:14145
#: modules/bibcirculation/lib/bibcirculation_templates.py:14782
#: modules/bibcirculation/lib/bibcirculation_templates.py:15789
msgid "Confirm"
-msgstr "Confirmar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1013
-#, fuzzy, python-format
+#, python-format
msgid "You can see your loans %(x_url_open)shere%(x_url_close)s."
-msgstr "Si lo desea puede %(x_url_open)sidentificarse aquí%(x_url_close)s."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1052
msgid "A new loan has been registered."
-msgstr "Se ha registrado un nuevo préstamo."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1100
-#, fuzzy
msgid "Delete this request?"
-msgstr "Eliminar las reseñas seleccionadas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1101
#: modules/bibcirculation/lib/bibcirculation_templates.py:1368
-#, fuzzy
msgid "Request not deleted."
-msgstr "Fecha de petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1122
#: modules/bibcirculation/lib/bibcirculation_templates.py:1324
msgid "No more requests are pending."
-msgstr "No hay más peticiones pendientes."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1151
msgid "Vol."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1152
msgid "Ed."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1156
#: modules/bibcirculation/lib/bibcirculation_templates.py:3188
#: modules/bibcirculation/lib/bibcirculation_templates.py:15869
msgid "Actions"
-msgstr "Acciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1222
#: modules/bibcirculation/lib/bibcirculation_templates.py:1417
#: modules/bibcirculation/lib/bibcirculation_templates.py:15939
msgid "Associate barcode"
-msgstr "Asocie el código de barras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1496
msgid "No hold requests waiting."
-msgstr "No hay reservas pendientes."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1524
#: modules/bibcirculation/lib/bibcirculation_templates.py:1799
#: modules/bibcirculation/lib/bibcirculation_templates.py:4669
msgid "Request status"
-msgstr "Estat de la petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1528
#: modules/bibcirculation/lib/bibcirculation_templates.py:1803
msgid "Request options"
-msgstr "Opciones de la petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1558
msgid "Select hold request"
-msgstr "Seleccione una petición de reserva"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1634
#: modules/bibcirculation/lib/bibcirculation_templates.py:5366
msgid "Reset"
-msgstr "Reiniciar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1680
-#, fuzzy, python-format
+#, python-format
msgid ""
"The item %(x_strong_tag_open)s%(x_title)s%(x_strong_tag_close)s, with "
"barcode %(x_strong_tag_open)s%(x_barcode)s%(x_strong_tag_close)s, has been "
"returned with success."
msgstr ""
-"Se ha devuelto correctamente el item %(x_title)s, con el código de barras "
-"%(x_barcode)s."
#: modules/bibcirculation/lib/bibcirculation_templates.py:1694
#, python-format
msgid ""
"There are %(x_strong_tag_open)s%(x_number_of_requests)s requests"
"%(x_strong_tag_close)s on the book that has been returned."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1753
msgid "Loan informations"
-msgstr "Informaciones de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1758
#: modules/bibcirculation/lib/bibcirculation_templates.py:2102
#: modules/bibcirculation/lib/bibcirculation_templates.py:2744
#: modules/bibcirculation/lib/bibcirculation_templates.py:3097
#: modules/bibcirculation/lib/bibcirculation_templates.py:5996
#: modules/bibcirculation/lib/bibcirculation_templates.py:7153
#: modules/bibcirculation/lib/bibcirculation_templates.py:7433
#: modules/bibcirculation/lib/bibcirculation_templates.py:8085
#: modules/bibcirculation/lib/bibcirculation_templates.py:8242
#: modules/bibcirculation/lib/bibcirculation_templates.py:9598
#: modules/bibcirculation/lib/bibcirculation_templates.py:9837
#: modules/bibcirculation/lib/bibcirculation_templates.py:10073
#: modules/bibcirculation/lib/bibcirculation_templates.py:10316
#: modules/bibcirculation/lib/bibcirculation_templates.py:10527
#: modules/bibcirculation/lib/bibcirculation_templates.py:10751
#: modules/bibcirculation/lib/bibcirculation_templates.py:11210
#: modules/bibcirculation/lib/bibcirculation_templates.py:11355
#: modules/bibcirculation/lib/bibcirculation_templates.py:11860
#: modules/bibcirculation/lib/bibcirculation_templates.py:11953
#: modules/bibcirculation/lib/bibcirculation_templates.py:12070
#: modules/bibcirculation/lib/bibcirculation_templates.py:12154
#: modules/bibcirculation/lib/bibcirculation_templates.py:12837
#: modules/bibcirculation/lib/bibcirculation_templates.py:12937
#: modules/bibcirculation/lib/bibcirculation_templates.py:13610
#: modules/bibcirculation/lib/bibcirculation_templates.py:13870
#: modules/bibcirculation/lib/bibcirculation_templates.py:14923
#: modules/bibcirculation/lib/bibcirculation_templates.py:15147
#: modules/bibcirculation/lib/bibcirculation_templates.py:15423
#: modules/bibcirculation/lib/bibcirculation_templates.py:16119
#: modules/bibcirculation/lib/bibcirculation_templates.py:16842
#: modules/bibcirculation/lib/bibcirculation_templates.py:17030
#: modules/bibcirculation/lib/bibcirculation_templates.py:17915
msgid "Publisher"
-msgstr "Editor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1760
#: modules/bibcirculation/lib/bibcirculation_templates.py:12565
#: modules/bibcirculation/lib/bibcirculation_templates.py:13373
msgid "Return date"
-msgstr "Fecha de devolución"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1796
msgid "Waiting requests"
-msgstr "Peticiones en espera"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1838
msgid "Select request"
-msgstr "Escoja petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1878
msgid "Welcome to Invenio BibCirculation Admin"
-msgstr "Bienvenidos a la administración BibCirculation de Invenio"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1904
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
-#, fuzzy
msgid "id"
-msgstr "Esconder"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1917
-#, fuzzy
msgid "register new borrower"
-msgstr "Escriba la nota"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1948
#: modules/bibcirculation/lib/bibcirculation_templates.py:2200
#: modules/bibcirculation/lib/bibcirculation_templates.py:9646
#: modules/bibcirculation/lib/bibcirculation_templates.py:15199
#: modules/bibcirculation/lib/bibcirculation_templates.py:17080
#: modules/bibcirculation/lib/bibcirculation_templates.py:17562
msgid "Search borrower by"
-msgstr "Buscar usuario por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1949
#: modules/bibcirculation/lib/bibcirculation_templates.py:2180
#: modules/bibcirculation/lib/bibcirculation_templates.py:2187
#: modules/bibcirculation/lib/bibcirculation_templates.py:2194
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6776
#: modules/bibcirculation/lib/bibcirculation_templates.py:8570
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14221
#: modules/bibcirculation/lib/bibcirculation_templates.py:14540
#: modules/bibcirculation/lib/bibcirculation_templates.py:15179
#: modules/bibcirculation/lib/bibcirculation_templates.py:15186
#: modules/bibcirculation/lib/bibcirculation_templates.py:15193
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17060
#: modules/bibcirculation/lib/bibcirculation_templates.py:17067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17074
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17542
#: modules/bibcirculation/lib/bibcirculation_templates.py:17549
#: modules/bibcirculation/lib/bibcirculation_templates.py:17556
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
-#, fuzzy
msgid "name"
-msgstr "Alias"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:1949
#: modules/bibcirculation/lib/bibcirculation_templates.py:2180
#: modules/bibcirculation/lib/bibcirculation_templates.py:2187
#: modules/bibcirculation/lib/bibcirculation_templates.py:2194
#: modules/bibcirculation/lib/bibcirculation_templates.py:2207
#: modules/bibcirculation/lib/bibcirculation_templates.py:2214
#: modules/bibcirculation/lib/bibcirculation_templates.py:2221
#: modules/bibcirculation/lib/bibcirculation_templates.py:4088
#: modules/bibcirculation/lib/bibcirculation_templates.py:6777
#: modules/bibcirculation/lib/bibcirculation_templates.py:8570
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
#: modules/bibcirculation/lib/bibcirculation_templates.py:10128
#: modules/bibcirculation/lib/bibcirculation_templates.py:10135
#: modules/bibcirculation/lib/bibcirculation_templates.py:10142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14222
#: modules/bibcirculation/lib/bibcirculation_templates.py:14541
#: modules/bibcirculation/lib/bibcirculation_templates.py:15179
#: modules/bibcirculation/lib/bibcirculation_templates.py:15186
#: modules/bibcirculation/lib/bibcirculation_templates.py:15193
#: modules/bibcirculation/lib/bibcirculation_templates.py:15206
#: modules/bibcirculation/lib/bibcirculation_templates.py:15213
#: modules/bibcirculation/lib/bibcirculation_templates.py:15220
#: modules/bibcirculation/lib/bibcirculation_templates.py:17060
#: modules/bibcirculation/lib/bibcirculation_templates.py:17067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17074
#: modules/bibcirculation/lib/bibcirculation_templates.py:17087
#: modules/bibcirculation/lib/bibcirculation_templates.py:17094
#: modules/bibcirculation/lib/bibcirculation_templates.py:17101
#: modules/bibcirculation/lib/bibcirculation_templates.py:17542
#: modules/bibcirculation/lib/bibcirculation_templates.py:17549
#: modules/bibcirculation/lib/bibcirculation_templates.py:17556
#: modules/bibcirculation/lib/bibcirculation_templates.py:17569
#: modules/bibcirculation/lib/bibcirculation_templates.py:17576
#: modules/bibcirculation/lib/bibcirculation_templates.py:17583
-#, fuzzy
msgid "email"
-msgstr "Dirección electrónica"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:7218
#: modules/bibcirculation/lib/bibcirculation_templates.py:7917
#: modules/bibcirculation/lib/bibcirculation_templates.py:9132
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
-#, fuzzy
msgid "Search item by"
-msgstr "Buscar proveedor por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
#: modules/bibcirculation/lib/bibcirculation_templates.py:9140
#: modules/bibcirculation/lib/bibcirculation_templates.py:9148
#: modules/bibcirculation/lib/bibcirculation_templates.py:9156
#: modules/bibcirculation/lib/bibcirculation_templates.py:9164
#: modules/bibcirculation/lib/bibcirculation_templates.py:16024
-#, fuzzy
msgid "barcode"
-msgstr "Código de barras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2019
msgid "recid"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2076
msgid "0 item(s) found."
-msgstr "No se ha encontrado ninguno."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2101
-#, fuzzy, python-format
+#, python-format
msgid "%i items found."
-msgstr "%i items encontrados"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2103
#: modules/bibcirculation/lib/bibcirculation_templates.py:16120
-#, fuzzy
msgid "# copies"
-msgstr "Ejemplares"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2173
#: modules/bibcirculation/lib/bibcirculation_templates.py:9619
#: modules/bibcirculation/lib/bibcirculation_templates.py:10094
#: modules/bibcirculation/lib/bibcirculation_templates.py:10121
#: modules/bibcirculation/lib/bibcirculation_templates.py:15172
#: modules/bibcirculation/lib/bibcirculation_templates.py:17053
#: modules/bibcirculation/lib/bibcirculation_templates.py:17535
-#, fuzzy
msgid "Search user by"
-msgstr "Buscar proveedor por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2279
#: modules/bibcirculation/lib/bibcirculation_templates.py:9725
#: modules/bibcirculation/lib/bibcirculation_templates.py:10212
#: modules/bibcirculation/lib/bibcirculation_templates.py:15294
#: modules/bibcirculation/lib/bibcirculation_templates.py:17176
#: modules/bibcirculation/lib/bibcirculation_templates.py:17654
msgid "Select user"
-msgstr "Seleccione el usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2304
#: modules/bibcirculation/lib/bibcirculation_templates.py:2417
#: modules/bibcirculation/lib/bibcirculation_templates.py:2661
#: modules/bibcirculation/lib/bibcirculation_templates.py:4393
#: modules/bibcirculation/lib/bibcirculation_templates.py:5554
#: modules/bibcirculation/lib/bibcirculation_templates.py:8979
#: modules/bibcirculation/lib/bibcirculation_templates.py:9112
#: modules/bibcirculation/lib/bibcirculation_templates.py:11104
#: modules/bibcirculation/lib/bibcirculation_templates.py:15345
msgid "CCID"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2321
#: modules/bibcirculation/lib/bibcirculation_templates.py:2502
msgid "User information"
-msgstr "Información del usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2391
#: modules/bibcirculation/lib/bibcirculation_templates.py:2508
#: modules/bibcirculation/lib/bibcirculation_templates.py:2740
#: modules/bibcirculation/lib/bibcirculation_templates.py:3943
#: modules/bibcirculation/lib/bibcirculation_templates.py:4046
#: modules/bibcirculation/lib/bibcirculation_templates.py:4269
#: modules/bibcirculation/lib/bibcirculation_templates.py:4330
#: modules/bibcirculation/lib/bibcirculation_templates.py:4458
#: modules/bibcirculation/lib/bibcirculation_templates.py:5606
#: modules/bibcirculation/lib/bibcirculation_templates.py:6187
#: modules/bibcirculation/lib/bibcirculation_templates.py:6236
#: modules/bibcirculation/lib/bibcirculation_templates.py:6537
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6701
#: modules/bibcirculation/lib/bibcirculation_templates.py:6930
#: modules/bibcirculation/lib/bibcirculation_templates.py:7029
#: modules/bibcirculation/lib/bibcirculation_templates.py:9029
#: modules/bibcirculation/lib/bibcirculation_templates.py:9274
#: modules/bibcirculation/lib/bibcirculation_templates.py:9883
#: modules/bibcirculation/lib/bibcirculation_templates.py:10361
#: modules/bibcirculation/lib/bibcirculation_templates.py:11225
#: modules/bibcirculation/lib/bibcirculation_templates.py:14071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14142
#: modules/bibcirculation/lib/bibcirculation_templates.py:14391
#: modules/bibcirculation/lib/bibcirculation_templates.py:14462
#: modules/bibcirculation/lib/bibcirculation_templates.py:14716
#: modules/bibcirculation/lib/bibcirculation_templates.py:15519
msgid "Phone"
-msgstr "Teléfono"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2392
msgid "Barcode(s)"
-msgstr "Código de barras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2393
#: modules/bibcirculation/lib/bibcirculation_templates.py:2573
#: modules/bibcirculation/lib/bibcirculation_templates.py:6191
#: modules/bibcirculation/lib/bibcirculation_templates.py:6278
#: modules/bibcirculation/lib/bibcirculation_templates.py:6539
#: modules/bibcirculation/lib/bibcirculation_templates.py:6629
#: modules/bibcirculation/lib/bibcirculation_templates.py:6962
#: modules/bibcirculation/lib/bibcirculation_templates.py:7032
#: modules/bibcirculation/lib/bibcirculation_templates.py:7179
#: modules/bibcirculation/lib/bibcirculation_templates.py:7726
#: modules/bibcirculation/lib/bibcirculation_templates.py:7817
#: modules/bibcirculation/lib/bibcirculation_templates.py:8395
#: modules/bibcirculation/lib/bibcirculation_templates.py:9946
#: modules/bibcirculation/lib/bibcirculation_templates.py:10430
#: modules/bibcirculation/lib/bibcirculation_templates.py:10669
#: modules/bibcirculation/lib/bibcirculation_templates.py:10822
#: modules/bibcirculation/lib/bibcirculation_templates.py:11238
#: modules/bibcirculation/lib/bibcirculation_templates.py:11436
#: modules/bibcirculation/lib/bibcirculation_templates.py:12703
#: modules/bibcirculation/lib/bibcirculation_templates.py:13518
#: modules/bibcirculation/lib/bibcirculation_templates.py:13775
#: modules/bibcirculation/lib/bibcirculation_templates.py:13957
#: modules/bibcirculation/lib/bibcirculation_templates.py:14072
#: modules/bibcirculation/lib/bibcirculation_templates.py:14393
#: modules/bibcirculation/lib/bibcirculation_templates.py:14464
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
#: modules/bibcirculation/lib/bibcirculation_templates.py:15532
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17387
msgid "Continue"
-msgstr "Continuar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2509
msgid "List of borrowed books"
-msgstr "Lista de libros en préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2513
msgid "Write note(s)"
-msgstr "Nota(s)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2639
msgid "Notification has been sent!"
-msgstr "Se ha enviado la notificación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2742
#: modules/bibcirculation/lib/bibcirculation_templates.py:3095
#: modules/bibcirculation/lib/bibcirculation_templates.py:7151
#: modules/bibcirculation/lib/bibcirculation_templates.py:7429
#: modules/bibcirculation/lib/bibcirculation_templates.py:8081
#: modules/bibcirculation/lib/bibcirculation_templates.py:8238
#: modules/bibcirculation/lib/bibcirculation_templates.py:9596
#: modules/bibcirculation/lib/bibcirculation_templates.py:9835
#: modules/bibcirculation/lib/bibcirculation_templates.py:10071
#: modules/bibcirculation/lib/bibcirculation_templates.py:10314
#: modules/bibcirculation/lib/bibcirculation_templates.py:10523
#: modules/bibcirculation/lib/bibcirculation_templates.py:10749
#: modules/bibcirculation/lib/bibcirculation_templates.py:11208
#: modules/bibcirculation/lib/bibcirculation_templates.py:11351
#: modules/bibcirculation/lib/bibcirculation_templates.py:11856
#: modules/bibcirculation/lib/bibcirculation_templates.py:11951
#: modules/bibcirculation/lib/bibcirculation_templates.py:12064
#: modules/bibcirculation/lib/bibcirculation_templates.py:12152
#: modules/bibcirculation/lib/bibcirculation_templates.py:12833
#: modules/bibcirculation/lib/bibcirculation_templates.py:12935
#: modules/bibcirculation/lib/bibcirculation_templates.py:13606
#: modules/bibcirculation/lib/bibcirculation_templates.py:13868
#: modules/bibcirculation/lib/bibcirculation_templates.py:14921
#: modules/bibcirculation/lib/bibcirculation_templates.py:15144
#: modules/bibcirculation/lib/bibcirculation_templates.py:15420
#: modules/bibcirculation/lib/bibcirculation_templates.py:16840
#: modules/bibcirculation/lib/bibcirculation_templates.py:17028
#: modules/bibcirculation/lib/bibcirculation_templates.py:17311
#: modules/bibcirculation/lib/bibcirculation_templates.py:17509
#: modules/bibcirculation/lib/bibcirculation_templates.py:17911
msgid "Author(s)"
-msgstr "Autor(s)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2748
msgid "Print loan information"
-msgstr "Imprimir la informacióm de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2853
#: modules/bibcirculation/lib/bibcirculation_templates.py:2963
#: modules/bibcirculation/lib/bibcirculation_templates.py:10917
#: modules/bibcirculation/lib/bibcirculation_templates.py:11521
msgid "Option(s)"
-msgstr "Opción(es)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2889
#: modules/bibcirculation/lib/bibcirculation_templates.py:2990
msgid "Cancel hold request"
-msgstr "Cancelar la reserva"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:2924
#: modules/bibcirculation/lib/bibcirculation_templates.py:3479
#: modules/bibcirculation/lib/bibcirculation_templates.py:3682
#: modules/bibcirculation/lib/bibcirculation_templates.py:4637
msgid "There are no requests."
-msgstr "No hay peticiones."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3093
#: modules/bibcirculation/lib/bibcirculation_templates.py:7149
#: modules/bibcirculation/lib/bibcirculation_templates.py:7426
#: modules/bibcirculation/lib/bibcirculation_templates.py:8078
#: modules/bibcirculation/lib/bibcirculation_templates.py:8235
#: modules/bibcirculation/lib/bibcirculation_templates.py:9594
#: modules/bibcirculation/lib/bibcirculation_templates.py:9833
#: modules/bibcirculation/lib/bibcirculation_templates.py:10069
#: modules/bibcirculation/lib/bibcirculation_templates.py:10312
#: modules/bibcirculation/lib/bibcirculation_templates.py:10519
#: modules/bibcirculation/lib/bibcirculation_templates.py:10747
#: modules/bibcirculation/lib/bibcirculation_templates.py:11206
#: modules/bibcirculation/lib/bibcirculation_templates.py:11347
#: modules/bibcirculation/lib/bibcirculation_templates.py:11853
#: modules/bibcirculation/lib/bibcirculation_templates.py:11949
#: modules/bibcirculation/lib/bibcirculation_templates.py:12061
#: modules/bibcirculation/lib/bibcirculation_templates.py:12150
#: modules/bibcirculation/lib/bibcirculation_templates.py:12830
#: modules/bibcirculation/lib/bibcirculation_templates.py:12933
#: modules/bibcirculation/lib/bibcirculation_templates.py:13602
#: modules/bibcirculation/lib/bibcirculation_templates.py:13866
#: modules/bibcirculation/lib/bibcirculation_templates.py:14864
#: modules/bibcirculation/lib/bibcirculation_templates.py:15142
#: modules/bibcirculation/lib/bibcirculation_templates.py:15418
#: modules/bibcirculation/lib/bibcirculation_templates.py:17025
#: modules/bibcirculation/lib/bibcirculation_templates.py:17506
#: modules/bibcirculation/lib/bibcirculation_templates.py:17908
msgid "Item details"
-msgstr "Detalles del ítem"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3100
msgid "Edit this record"
-msgstr "Edite este registro"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3101
-#, fuzzy
msgid "Book Cover"
-msgstr "Título del libro"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3102
msgid "Additional details"
-msgstr "Detalls addicionals"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3174
#: modules/bibcirculation/lib/bibcirculation_templates.py:7464
#: modules/bibcirculation/lib/bibcirculation_templates.py:8109
#: modules/bibcirculation/lib/bibcirculation_templates.py:17946
#: modules/bibcirculation/lib/bibcirculation_templates.py:18042
msgid "No of loans"
-msgstr "Préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3253
#: modules/bibcirculation/lib/bibcirculation_templates.py:3274
#: modules/bibcirculation/lib/bibcirculation_templates.py:3295
#: modules/bibcirculation/lib/bibcirculation_templates.py:3316
#: modules/bibcirculation/lib/bibcirculation_templates.py:4843
#: modules/bibcirculation/lib/bibcirculation_templates.py:5481
-#, fuzzy
msgid "Select an action"
-msgstr "Seleccionar una acción"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3255
#: modules/bibcirculation/lib/bibcirculation_templates.py:3276
#: modules/bibcirculation/lib/bibcirculation_templates.py:3297
#: modules/bibcirculation/lib/bibcirculation_templates.py:3318
-#, fuzzy
msgid "Add similar copy"
-msgstr "similitud de palabras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3256
#: modules/bibcirculation/lib/bibcirculation_templates.py:3277
#: modules/bibcirculation/lib/bibcirculation_templates.py:3298
#: modules/bibcirculation/lib/bibcirculation_templates.py:3319
#: modules/bibcirculation/lib/bibcirculation_templates.py:4490
msgid "New request"
-msgstr "Nueva petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3257
#: modules/bibcirculation/lib/bibcirculation_templates.py:3278
#: modules/bibcirculation/lib/bibcirculation_templates.py:3299
#: modules/bibcirculation/lib/bibcirculation_templates.py:3320
#: modules/bibcirculation/lib/bibcirculation_templates.py:4489
msgid "New loan"
-msgstr "Nuevo préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3258
#: modules/bibcirculation/lib/bibcirculation_templates.py:3279
#: modules/bibcirculation/lib/bibcirculation_templates.py:3300
#: modules/bibcirculation/lib/bibcirculation_templates.py:3321
-#, fuzzy
msgid "Delete copy"
-msgstr "Suprimir el grupo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3432
msgid "Add new copy"
-msgstr "Añadir otra copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3433
msgid "Order new copy"
-msgstr "Pedir otra copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3434
msgid "ILL request"
-msgstr "Petición de préstamo interbibliotecario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3435
-#, fuzzy, python-format
+#, python-format
msgid "Hold requests and loans overview on %(date)s"
-msgstr "Reservas y préstamos para"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3437
#: modules/bibcirculation/lib/bibcirculation_templates.py:3439
msgid "Hold requests"
-msgstr "Reservas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3437
#: modules/bibcirculation/lib/bibcirculation_templates.py:3438
#: modules/bibcirculation/lib/bibcirculation_templates.py:3440
#: modules/bibcirculation/lib/bibcirculation_templates.py:3441
#: modules/bibcirculation/lib/bibcirculation_templates.py:4603
#: modules/bibcirculation/lib/bibcirculation_templates.py:4605
#: modules/bibcirculation/lib/bibcirculation_templates.py:4607
#: modules/bibcirculation/lib/bibcirculation_templates.py:4610
#: modules/bibcirculation/lib/bibcirculation_templates.py:4612
#: modules/bibcirculation/lib/bibcirculation_templates.py:4614
msgid "More details"
-msgstr "Más detalles"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3438
#: modules/bibcirculation/lib/bibcirculation_templates.py:3440
#: modules/bibcirculation/lib/bibcirculation_templates.py:4604
#: modules/bibcirculation/lib/bibcirculation_templates.py:4611
msgid "Loans"
-msgstr "Préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3439
#: modules/bibcirculation/lib/bibcirculation_templates.py:4608
msgid "Historical overview"
-msgstr "Visión histórica"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3569
#: modules/bibcirculation/lib/bibcirculation_templates.py:4747
#: modules/bibcirculation/lib/bibcirculation_templates.py:5389
msgid "There are no loans."
-msgstr "Sin préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3604
#: modules/bibcirculation/lib/bibcirculation_templates.py:3809
msgid "Returned on"
-msgstr "Devuelto el"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3605
#: modules/bibcirculation/lib/bibcirculation_templates.py:3810
#: modules/bibcirculation/lib/bibcirculation_templates.py:4788
#: modules/bibcirculation/lib/bibcirculation_templates.py:4981
#: modules/bibcirculation/lib/bibcirculation_templates.py:5159
#: modules/bibcirculation/lib/bibcirculation_templates.py:5431
msgid "Renewals"
-msgstr "Renovaciones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3606
#: modules/bibcirculation/lib/bibcirculation_templates.py:3811
#: modules/bibcirculation/lib/bibcirculation_templates.py:4789
#: modules/bibcirculation/lib/bibcirculation_templates.py:4982
#: modules/bibcirculation/lib/bibcirculation_templates.py:5160
msgid "Overdue letters"
-msgstr "Cartas de reclamación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3878
#: modules/bibcirculation/lib/bibcirculation_templates.py:3988
#: modules/bibcirculation/lib/bibcirculation_templates.py:4182
#: modules/bibcirculation/lib/bibcirculation_templates.py:4197
#: modules/bibcirculation/lib/bibcirculation_templates.py:4398
#: modules/bibcirculation/lib/bibcirculation_templates.py:4808
#: modules/bibcirculation/lib/bibcirculation_templates.py:5449
#: modules/bibcirculation/lib/bibcirculation_templates.py:10933
#: modules/bibcirculation/lib/bibcirculation_templates.py:14664
#: modules/bibcirculation/lib/bibcirculation_templates.py:15641
msgid "No notes"
-msgstr "Sin notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3883
#: modules/bibcirculation/lib/bibcirculation_templates.py:3993
#: modules/bibcirculation/lib/bibcirculation_templates.py:4187
#: modules/bibcirculation/lib/bibcirculation_templates.py:4202
msgid "Notes about this library"
-msgstr "Notas sobre esta biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3939
msgid "Library details"
-msgstr "Detalles de la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3944
#: modules/bibcirculation/lib/bibcirculation_templates.py:4047
#: modules/bibcirculation/lib/bibcirculation_templates.py:4270
#: modules/bibcirculation/lib/bibcirculation_templates.py:4331
#: modules/bibcirculation/lib/bibcirculation_templates.py:4790
#: modules/bibcirculation/lib/bibcirculation_templates.py:6597
#: modules/bibcirculation/lib/bibcirculation_templates.py:6703
#: modules/bibcirculation/lib/bibcirculation_templates.py:6932
#: modules/bibcirculation/lib/bibcirculation_templates.py:7031
#: modules/bibcirculation/lib/bibcirculation_templates.py:11520
#: modules/bibcirculation/lib/bibcirculation_templates.py:11640
#: modules/bibcirculation/lib/bibcirculation_templates.py:13060
#: modules/bibcirculation/lib/bibcirculation_templates.py:13095
#: modules/bibcirculation/lib/bibcirculation_templates.py:13217
#: modules/bibcirculation/lib/bibcirculation_templates.py:13370
#: modules/bibcirculation/lib/bibcirculation_templates.py:13457
#: modules/bibcirculation/lib/bibcirculation_templates.py:17026
msgid "Type"
-msgstr "Tipo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3946
#: modules/bibcirculation/lib/bibcirculation_templates.py:4049
#: modules/bibcirculation/lib/bibcirculation_templates.py:4272
#: modules/bibcirculation/lib/bibcirculation_templates.py:4333
msgid "No of items"
-msgstr "Número de ítems"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:3948
msgid "Duplicated library?"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4042
#: modules/bibcirculation/lib/bibcirculation_templates.py:4265
-#, fuzzy
msgid "Library to be deleted"
-msgstr "Notes de la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4087
-#, fuzzy
msgid "Search library"
-msgstr "Buscar biblioteca por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4125
-#, fuzzy
msgid "Select library"
-msgstr "Buscar biblioteca por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4218
msgid "Please, note that this action is NOT reversible"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4277
#: modules/bibcirculation/lib/bibcirculation_templates.py:4338
-#, fuzzy
msgid "Library not found"
-msgstr "Notes de la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4326
-#, fuzzy
msgid "Merged library"
-msgstr "Buscar biblioteca por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4403
msgid "Notes about this borrower"
-msgstr "Notas sobre este lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4452
#: modules/bibcirculation/lib/bibcirculation_templates.py:5600
#: modules/bibcirculation/lib/bibcirculation_templates.py:9023
msgid "Personal details"
-msgstr "Detalles personales"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4491
msgid "New ILL request"
-msgstr "Nueva petición de PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4492
msgid "Notify this borrower"
-msgstr "Avisar a este lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4600
msgid "Requests, Loans and ILL overview on"
-msgstr "Reservas, préstamos y PI en"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4602
#: modules/bibcirculation/lib/bibcirculation_templates.py:4609
msgid "Requests"
-msgstr "Peticiones"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4675
msgid "Request option(s)"
-msgstr "Opciones de la petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4786
#: modules/bibcirculation/lib/bibcirculation_templates.py:8852
#: modules/bibcirculation/lib/bibcirculation_templates.py:10426
msgid "Loan date"
-msgstr "Prestado en"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4791
#: modules/bibcirculation/lib/bibcirculation_templates.py:5434
msgid "Loan notes"
-msgstr "Notas de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4792
msgid "Loans status"
-msgstr "Estado de los préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4793
#: modules/bibcirculation/lib/bibcirculation_templates.py:5435
msgid "Loan options"
-msgstr "Opciones de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4813
#: modules/bibcirculation/lib/bibcirculation_templates.py:5453
#: modules/bibcirculation/lib/bibcirculation_templates.py:10938
msgid "See notes"
-msgstr "Ver notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4850
#: modules/bibcirculation/lib/bibcirculation_templates.py:4854
#: modules/bibcirculation/lib/bibcirculation_templates.py:5488
#: modules/bibcirculation/lib/bibcirculation_templates.py:5492
-#, fuzzy
msgid "Change due date"
-msgstr "Nueva fecha de devolución: "
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4863
#: modules/bibcirculation/lib/bibcirculation_templates.py:5032
#: modules/bibcirculation/lib/bibcirculation_templates.py:5212
#: modules/bibcirculation/lib/bibcirculation_templates.py:5347
#: modules/bibcirculation/lib/bibcirculation_templates.py:5500
msgid "Send recall"
-msgstr "Enviar reclamación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4952
#: modules/bibcirculation/lib/bibcirculation_templates.py:5128
msgid "No result for your search."
-msgstr "No se han encontrado resultados."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4983
#: modules/bibcirculation/lib/bibcirculation_templates.py:5161
msgid "Loan Notes"
-msgstr "Notas de préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:4996
#: modules/bibcirculation/lib/bibcirculation_templates.py:5175
msgid "see notes"
-msgstr "ver notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5000
#: modules/bibcirculation/lib/bibcirculation_templates.py:5180
msgid "no notes"
-msgstr "sin notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5289
msgid "CERN Library"
-msgstr "Biblioteca del CERN"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5324
msgid "Message"
-msgstr "Mensaje"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5325
msgid "Choose a template"
-msgstr "Escoja la plantilla"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5343
msgid "Templates"
-msgstr "Plantillas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5344
#: modules/bibcirculation/lib/bibcirculation_templates.py:5432
msgid "Overdue letter"
-msgstr "Carta de reclamación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5345
msgid "Reminder"
-msgstr "Recordatorio"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5346
msgid "Notification"
-msgstr "Notificación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5348
msgid "Load"
-msgstr "Carga"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5367
msgid "Send"
-msgstr "Enviar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5433
#: modules/bibcirculation/lib/bibcirculation_templates.py:8854
msgid "Loan status"
-msgstr "Estado del préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5652
#: modules/bibcirculation/lib/bibcirculation_templates.py:9055
#: modules/bibcirculation/lib/bibcirculation_templates.py:10428
msgid "Write notes"
-msgstr "Escriba notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5713
msgid "Notes about borrower"
-msgstr "Notas sobre el lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5722
#: modules/bibcirculation/lib/bibcirculation_templates.py:5823
#: modules/bibcirculation/lib/bibcirculation_templates.py:8732
#: modules/bibcirculation/lib/bibcirculation_templates.py:11026
#: modules/bibcirculation/lib/bibcirculation_templates.py:11782
#: modules/bibcirculation/lib/bibcirculation_templates.py:12760
#: modules/bibcirculation/lib/bibcirculation_templates.py:13736
#: modules/bibcirculation/lib/bibcirculation_templates.py:15738
msgid "[delete]"
-msgstr "[suprimir]"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5768
#: modules/bibcirculation/lib/bibcirculation_templates.py:5870
#: modules/bibcirculation/lib/bibcirculation_templates.py:8776
#: modules/bibcirculation/lib/bibcirculation_templates.py:11071
#: modules/bibcirculation/lib/bibcirculation_templates.py:15785
msgid "Write new note"
-msgstr "Escriba la nota"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5814
msgid "Notes about loan"
-msgstr "Notas sobre el préstamo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5990
msgid "Book Information"
-msgstr "Información del libro"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5994
msgid "EAN"
-msgstr "EAN"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5997
msgid "Publication date"
-msgstr "Fecha de publicación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5998
msgid "Publication place"
-msgstr "Lugar de publicación"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:5999
#: modules/bibcirculation/lib/bibcirculation_templates.py:7155
#: modules/bibcirculation/lib/bibcirculation_templates.py:11955
#: modules/bibcirculation/lib/bibcirculation_templates.py:12156
#: modules/bibcirculation/lib/bibcirculation_templates.py:12939
#: modules/bibcirculation/lib/bibcirculation_templates.py:14925
#: modules/bibcirculation/lib/bibcirculation_templates.py:15148
#: modules/bibcirculation/lib/bibcirculation_templates.py:15424
#: modules/bibcirculation/lib/bibcirculation_templates.py:16844
#: modules/bibcirculation/lib/bibcirculation_templates.py:17032
msgid "Edition"
-msgstr "Edición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6000
msgid "Number of pages"
-msgstr "Páginas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6001
msgid "Sub-library"
-msgstr "Sub-biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6002
msgid "CERN Central Library"
-msgstr "Biblioteca Central del CERN"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6099
-#, fuzzy
msgid "Retrieve book information"
-msgstr "Información del usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6312
msgid "A new borrower has been registered."
-msgstr "Un nuevo usuario se ha dado de alta."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6531
msgid "Borrower information"
-msgstr "Información del usuario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6596
#: modules/bibcirculation/lib/bibcirculation_templates.py:6698
msgid "New library information"
-msgstr "Información de la nueva biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6739
msgid "A new library has been registered."
-msgstr "Se ha dado de alta la nueva biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6775
#: modules/bibcirculation/lib/bibcirculation_templates.py:8569
msgid "Search library by"
-msgstr "Buscar biblioteca por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:6927
#: modules/bibcirculation/lib/bibcirculation_templates.py:7026
msgid "Library information"
-msgstr "Información de la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7071
#: modules/bibcirculation/lib/bibcirculation_templates.py:14502
msgid "The information has been updated."
-msgstr "Se ha actualizado la información."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7150
#: modules/bibcirculation/lib/bibcirculation_templates.py:14920
msgid "Book title"
-msgstr "Título del libro"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7152
#: modules/bibcirculation/lib/bibcirculation_templates.py:11952
#: modules/bibcirculation/lib/bibcirculation_templates.py:12068
#: modules/bibcirculation/lib/bibcirculation_templates.py:12153
#: modules/bibcirculation/lib/bibcirculation_templates.py:12936
#: modules/bibcirculation/lib/bibcirculation_templates.py:14922
#: modules/bibcirculation/lib/bibcirculation_templates.py:15145
#: modules/bibcirculation/lib/bibcirculation_templates.py:15421
#: modules/bibcirculation/lib/bibcirculation_templates.py:16841
#: modules/bibcirculation/lib/bibcirculation_templates.py:17029
msgid "Place"
-msgstr "Lugar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7188
msgid "Coming soon..."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7438
#: modules/bibcirculation/lib/bibcirculation_templates.py:17920
#, python-format
msgid "Copies of %s"
-msgstr "Copies de %s"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7570
msgid "New copy details"
-msgstr "Detalles de la nueva copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7725
#: modules/bibcirculation/lib/bibcirculation_templates.py:7816
#: modules/bibcirculation/lib/bibcirculation_templates.py:8394
#: modules/bibcirculation/lib/bibcirculation_templates.py:8484
-#, fuzzy
msgid "Expected arrival date"
-msgstr "Fecha prevista"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7859
-#, fuzzy, python-format
+#, python-format
msgid "A %(x_url_open)snew copy%(x_url_close)s has been added."
msgstr ""
-"Debería %(x_url_open)saceptar o rechazar%(x_url_close)s la petición de este "
-"usuario."
#: modules/bibcirculation/lib/bibcirculation_templates.py:7883
-#, fuzzy
msgid "Back to the record"
-msgstr "Volver al registro"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:7975
-#, fuzzy, python-format
+#, python-format
msgid "%(nb_items_found)i items found"
-msgstr "%i items encontrados"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8268
msgid "Update copy information"
-msgstr "Actualizar información de la copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8476
msgid "New copy information"
-msgstr "Información de la nueva copia"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8530
msgid "This item has been updated."
-msgstr "Este ítem se ha actualitzado."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8625
-#, fuzzy
msgid "0 libraries found."
-msgstr "No se ha encontrado ninguna biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8723
msgid "Notes about library"
-msgstr "Notas sobre la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8856
msgid "Requested ?"
-msgstr "Solicitado?"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8876
msgid "New due date: "
-msgstr "Nueva fecha de devolución: "
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8901
msgid "Submit new due date"
-msgstr "Nueva fecha de devolución"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8947
#, python-format
msgid "The due date has been updated. New due date: %s"
-msgstr "Se ha actualizado la fecha de devolución. Ahora es: %s"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:8948
-#, fuzzy
msgid "Back to borrower's loans"
-msgstr "Volver a los préstamos"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9225
msgid "Select item"
-msgstr "Seleccionar el item"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9268
#: modules/bibcirculation/lib/bibcirculation_templates.py:9877
#: modules/bibcirculation/lib/bibcirculation_templates.py:10355
#: modules/bibcirculation/lib/bibcirculation_templates.py:11219
#: modules/bibcirculation/lib/bibcirculation_templates.py:15513
msgid "Borrower details"
-msgstr "Detalles del lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9438
#: modules/bibcirculation/lib/bibcirculation_templates.py:9942
msgid "Enter the period of interest"
-msgstr "Período de interés"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9439
#: modules/bibcirculation/lib/bibcirculation_templates.py:9943
msgid "From: "
-msgstr "De: "
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9441
#: modules/bibcirculation/lib/bibcirculation_templates.py:9944
msgid "To: "
-msgstr "A: "
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9501
#: modules/bibcirculation/lib/bibcirculation_templates.py:9978
msgid "A new request has been registered with success."
-msgstr "Su registrado correctamente la nueva petición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:9626
#: modules/bibcirculation/lib/bibcirculation_templates.py:9633
#: modules/bibcirculation/lib/bibcirculation_templates.py:9640
#: modules/bibcirculation/lib/bibcirculation_templates.py:9653
#: modules/bibcirculation/lib/bibcirculation_templates.py:9660
#: modules/bibcirculation/lib/bibcirculation_templates.py:9667
#: modules/bibcirculation/lib/bibcirculation_templates.py:10101
#: modules/bibcirculation/lib/bibcirculation_templates.py:10108
#: modules/bibcirculation/lib/bibcirculation_templates.py:10115
msgid "ccid"
msgstr ""
-# Una?
#: modules/bibcirculation/lib/bibcirculation_templates.py:10178
-#, fuzzy
msgid "Please select one borrower to continue."
-msgstr "Seleccione uno o más:"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10429
msgid "This note will be associate to this new loan, not to the borrower."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10813
#: modules/bibcirculation/lib/bibcirculation_templates.py:13630
#: modules/bibcirculation/lib/bibcirculation_templates.py:13906
msgid "Order details"
-msgstr "Detalles del pedido"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10556
#: modules/bibcirculation/lib/bibcirculation_templates.py:10815
#: modules/bibcirculation/lib/bibcirculation_templates.py:10911
#: modules/bibcirculation/lib/bibcirculation_templates.py:13102
#: modules/bibcirculation/lib/bibcirculation_templates.py:13630
#: modules/bibcirculation/lib/bibcirculation_templates.py:13907
msgid "Vendor"
-msgstr "Proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10584
#: modules/bibcirculation/lib/bibcirculation_templates.py:10816
#: modules/bibcirculation/lib/bibcirculation_templates.py:10913
#: modules/bibcirculation/lib/bibcirculation_templates.py:13908
msgid "Price"
-msgstr "Precio"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10636
#: modules/bibcirculation/lib/bibcirculation_templates.py:10818
#: modules/bibcirculation/lib/bibcirculation_templates.py:13724
#: modules/bibcirculation/lib/bibcirculation_templates.py:13910
msgid "Order date"
-msgstr "Fecha de petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10637
#: modules/bibcirculation/lib/bibcirculation_templates.py:10819
#: modules/bibcirculation/lib/bibcirculation_templates.py:10915
#: modules/bibcirculation/lib/bibcirculation_templates.py:12359
#: modules/bibcirculation/lib/bibcirculation_templates.py:12421
#: modules/bibcirculation/lib/bibcirculation_templates.py:12563
#: modules/bibcirculation/lib/bibcirculation_templates.py:12648
#: modules/bibcirculation/lib/bibcirculation_templates.py:13154
#: modules/bibcirculation/lib/bibcirculation_templates.py:13219
#: modules/bibcirculation/lib/bibcirculation_templates.py:13372
#: modules/bibcirculation/lib/bibcirculation_templates.py:13459
#: modules/bibcirculation/lib/bibcirculation_templates.py:13725
#: modules/bibcirculation/lib/bibcirculation_templates.py:13911
#: modules/bibcirculation/lib/bibcirculation_templates.py:15586
msgid "Expected date"
-msgstr "Fecha prevista"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10863
msgid "A new purchase has been registered with success."
-msgstr "Se ha cursado correctamente una nueva compra."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10912
msgid "Ordered date"
-msgstr "Fecha de pedido"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:10962
#: modules/bibcirculation/lib/bibcirculation_templates.py:11577
#: modules/bibcirculation/lib/bibcirculation_templates.py:11584
#: modules/bibcirculation/lib/bibcirculation_templates.py:11690
msgid "select"
-msgstr "seleccionar"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11017
#: modules/bibcirculation/lib/bibcirculation_templates.py:15729
msgid "Notes about acquisition"
-msgstr "Notas sobre la adquisición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11212
#: modules/bibcirculation/lib/bibcirculation_templates.py:11428
#: modules/bibcirculation/lib/bibcirculation_templates.py:12217
#: modules/bibcirculation/lib/bibcirculation_templates.py:14959
#: modules/bibcirculation/lib/bibcirculation_templates.py:15150
#: modules/bibcirculation/lib/bibcirculation_templates.py:15466
#: modules/bibcirculation/lib/bibcirculation_templates.py:17381
#: modules/bibcirculation/lib/bibcirculation_templates.py:17517
msgid "ILL request details"
-msgstr "Detalles de la petición PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11213
#: modules/bibcirculation/lib/bibcirculation_templates.py:11429
#: modules/bibcirculation/lib/bibcirculation_templates.py:15152
#: modules/bibcirculation/lib/bibcirculation_templates.py:16921
#: modules/bibcirculation/lib/bibcirculation_templates.py:17037
#: modules/bibcirculation/lib/bibcirculation_templates.py:17382
#: modules/bibcirculation/lib/bibcirculation_templates.py:17518
msgid "Period of interest - From"
-msgstr "Período de interés - Desde"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11215
#: modules/bibcirculation/lib/bibcirculation_templates.py:11431
#: modules/bibcirculation/lib/bibcirculation_templates.py:15154
#: modules/bibcirculation/lib/bibcirculation_templates.py:16923
#: modules/bibcirculation/lib/bibcirculation_templates.py:17039
#: modules/bibcirculation/lib/bibcirculation_templates.py:17384
#: modules/bibcirculation/lib/bibcirculation_templates.py:17520
msgid "Period of interest - To"
-msgstr "Período de interés - Hasta"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11217
#: modules/bibcirculation/lib/bibcirculation_templates.py:11433
#: modules/bibcirculation/lib/bibcirculation_templates.py:15013
#: modules/bibcirculation/lib/bibcirculation_templates.py:15156
#: modules/bibcirculation/lib/bibcirculation_templates.py:15470
#: modules/bibcirculation/lib/bibcirculation_templates.py:16925
#: modules/bibcirculation/lib/bibcirculation_templates.py:17041
#: modules/bibcirculation/lib/bibcirculation_templates.py:17386
#: modules/bibcirculation/lib/bibcirculation_templates.py:17522
msgid "Additional comments"
-msgstr "Comentario adicionales"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11218
#: modules/bibcirculation/lib/bibcirculation_templates.py:15471
msgid "Only this edition"
-msgstr "Solamente esta edición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11279
msgid "A new ILL request has been registered with success."
-msgstr "Se ha registrado correctamente una nueva petición PI."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11434
-#, fuzzy, python-format
+#, python-format
msgid ""
"I accept the %(x_url_open)sconditions%(x_url_close)s of the service in "
"particular the return of books in due time."
msgstr ""
-"Acepto los %s del servicio, en particular devolver los libros a tiempo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:11435
msgid "I want this edition only."
-msgstr "Sólo quiero esta edición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11466
-#, fuzzy, python-format
+#, python-format
msgid "You can see your loans %(here_link)s."
-msgstr "Puede ver sus préstamos "
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11468
msgid "here"
-msgstr "aquí"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11515
#: modules/bibcirculation/lib/bibcirculation_templates.py:11635
#: modules/bibcirculation/lib/bibcirculation_templates.py:15584
msgid "Supplier"
-msgstr "Proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11518
msgid "Interest from"
-msgstr "Interés des de"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11636
#: modules/bibcirculation/lib/bibcirculation_templates.py:12361
#: modules/bibcirculation/lib/bibcirculation_templates.py:12470
#: modules/bibcirculation/lib/bibcirculation_templates.py:12566
#: modules/bibcirculation/lib/bibcirculation_templates.py:12650
#: modules/bibcirculation/lib/bibcirculation_templates.py:13156
#: modules/bibcirculation/lib/bibcirculation_templates.py:13270
#: modules/bibcirculation/lib/bibcirculation_templates.py:13375
#: modules/bibcirculation/lib/bibcirculation_templates.py:13461
#: modules/bibcirculation/lib/bibcirculation_templates.py:13649
msgid "Cost"
-msgstr "Coste"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:11639
-#, fuzzy
msgid "Date requested"
-msgstr "Nueva petición"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12062
msgid "Periodical Title"
-msgstr "Título de la publicación periódica"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12063
msgid "Article Title"
-msgstr "Artículo del artículo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12065
#: modules/bibcirculation/lib/bibcirculation_templates.py:17313
#: modules/bibcirculation/lib/bibcirculation_templates.py:17511
msgid "Volume"
-msgstr "Volumen"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12066
#: modules/bibcirculation/lib/bibcirculation_templates.py:17314
#: modules/bibcirculation/lib/bibcirculation_templates.py:17512
msgid "Issue"
-msgstr "Número"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12067
#: modules/bibcirculation/lib/bibcirculation_templates.py:17315
#: modules/bibcirculation/lib/bibcirculation_templates.py:17513
msgid "Page"
-msgstr "Página"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12069
#: modules/bibcirculation/lib/bibcirculation_templates.py:17318
#: modules/bibcirculation/lib/bibcirculation_templates.py:17516
msgid "ISSN"
-msgstr "ISSN"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12210
#: modules/bibcirculation/lib/bibcirculation_templates.py:12994
msgid "Borrower request"
-msgstr "Petición del lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12213
#: modules/bibcirculation/lib/bibcirculation_templates.py:12997
#: modules/bibcirculation/lib/bibcirculation_templates.py:14960
#: modules/bibcirculation/lib/bibcirculation_templates.py:15468
msgid "Period of interest (From)"
-msgstr "Período de interés (desde)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12214
#: modules/bibcirculation/lib/bibcirculation_templates.py:12998
#: modules/bibcirculation/lib/bibcirculation_templates.py:15011
#: modules/bibcirculation/lib/bibcirculation_templates.py:15469
msgid "Period of interest (To)"
-msgstr "Período de interés (hasta)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12215
#: modules/bibcirculation/lib/bibcirculation_templates.py:12999
msgid "Borrower comments"
-msgstr "Comentarios del lector"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12216
#: modules/bibcirculation/lib/bibcirculation_templates.py:13000
msgid "Only this edition?"
-msgstr "Sólo esta edición?"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12271
#: modules/bibcirculation/lib/bibcirculation_templates.py:12303
#: modules/bibcirculation/lib/bibcirculation_templates.py:12419
#: modules/bibcirculation/lib/bibcirculation_templates.py:12562
#: modules/bibcirculation/lib/bibcirculation_templates.py:12647
#: modules/bibcirculation/lib/bibcirculation_templates.py:13059
#: modules/bibcirculation/lib/bibcirculation_templates.py:13094
#: modules/bibcirculation/lib/bibcirculation_templates.py:13216
#: modules/bibcirculation/lib/bibcirculation_templates.py:13370
#: modules/bibcirculation/lib/bibcirculation_templates.py:13457
msgid "ILL request ID"
-msgstr "Código de petición PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12272
#: modules/bibcirculation/lib/bibcirculation_templates.py:12377
#: modules/bibcirculation/lib/bibcirculation_templates.py:12482
#: modules/bibcirculation/lib/bibcirculation_templates.py:12580
#: modules/bibcirculation/lib/bibcirculation_templates.py:12663
#: modules/bibcirculation/lib/bibcirculation_templates.py:13062
#: modules/bibcirculation/lib/bibcirculation_templates.py:13173
#: modules/bibcirculation/lib/bibcirculation_templates.py:13285
#: modules/bibcirculation/lib/bibcirculation_templates.py:13388
#: modules/bibcirculation/lib/bibcirculation_templates.py:13475
#: modules/bibcirculation/lib/bibcirculation_templates.py:13726
#: modules/bibcirculation/lib/bibcirculation_templates.py:13912
msgid "Previous notes"
-msgstr "Notes anteriors"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12293
#: modules/bibcirculation/lib/bibcirculation_templates.py:12397
#: modules/bibcirculation/lib/bibcirculation_templates.py:12500
#: modules/bibcirculation/lib/bibcirculation_templates.py:12600
#: modules/bibcirculation/lib/bibcirculation_templates.py:12683
#: modules/bibcirculation/lib/bibcirculation_templates.py:13082
#: modules/bibcirculation/lib/bibcirculation_templates.py:13192
#: modules/bibcirculation/lib/bibcirculation_templates.py:13306
#: modules/bibcirculation/lib/bibcirculation_templates.py:13408
#: modules/bibcirculation/lib/bibcirculation_templates.py:13495
#: modules/bibcirculation/lib/bibcirculation_templates.py:15590
msgid "Library notes"
-msgstr "Notes de la biblioteca"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12310
msgid "Library/Supplier"
-msgstr "Biblioteca/proveïdor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12463
#: modules/bibcirculation/lib/bibcirculation_templates.py:12564
#: modules/bibcirculation/lib/bibcirculation_templates.py:12649
#: modules/bibcirculation/lib/bibcirculation_templates.py:13261
#: modules/bibcirculation/lib/bibcirculation_templates.py:13372
#: modules/bibcirculation/lib/bibcirculation_templates.py:13459
#: modules/bibcirculation/lib/bibcirculation_templates.py:15587
msgid "Arrival date"
-msgstr "Fecha de llegada"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:12941
#: modules/bibcirculation/lib/bibcirculation_templates.py:16847
#: modules/bibcirculation/lib/bibcirculation_templates.py:17034
msgid "Standard number"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:13001
#: modules/bibcirculation/lib/bibcirculation_templates.py:16919
#: modules/bibcirculation/lib/bibcirculation_templates.py:17035
-#, fuzzy
msgid "Request details"
-msgstr "Detalles de la petición PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:13061
#: modules/bibcirculation/lib/bibcirculation_templates.py:13172
#: modules/bibcirculation/lib/bibcirculation_templates.py:13285
#: modules/bibcirculation/lib/bibcirculation_templates.py:13388
#: modules/bibcirculation/lib/bibcirculation_templates.py:13475
#: modules/bibcirculation/lib/bibcirculation_templates.py:14959
#: modules/bibcirculation/lib/bibcirculation_templates.py:15151
#: modules/bibcirculation/lib/bibcirculation_templates.py:15467
#: modules/bibcirculation/lib/bibcirculation_templates.py:16920
#: modules/bibcirculation/lib/bibcirculation_templates.py:17036
#: modules/bibcirculation/lib/bibcirculation_templates.py:17317
#: modules/bibcirculation/lib/bibcirculation_templates.py:17515
msgid "Budget code"
-msgstr "Código de presupuesto"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:13997
msgid "Purchase information updated with success."
-msgstr "Se ha actualizado la información de compra."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14070
#: modules/bibcirculation/lib/bibcirculation_templates.py:14139
msgid "New vendor information"
-msgstr "Información del nuevo proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14183
msgid "A new vendor has been registered."
-msgstr "El nuevo proveedor ha sido dado de alta."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14220
#: modules/bibcirculation/lib/bibcirculation_templates.py:14539
msgid "Search vendor by"
-msgstr "Buscar proveedor por"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14283
#: modules/bibcirculation/lib/bibcirculation_templates.py:14606
msgid "Vendor(s)"
-msgstr "Proveedor(es)"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14388
#: modules/bibcirculation/lib/bibcirculation_templates.py:14459
msgid "Vendor information"
-msgstr "Información del proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14669
#: modules/bibcirculation/lib/bibcirculation_templates.py:14758
msgid "Notes about this vendor"
-msgstr "Notas sobre este proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14712
msgid "Vendor details"
-msgstr "Detalles del proveedor"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14797
msgid "Add notes"
-msgstr "Añadir notas"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14850
-#, fuzzy, python-format
+#, python-format
msgid "Book does not exists in %(CFG_SITE_NAME)s"
-msgstr "Este libro no existe en Invenio."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:14852
msgid "Please fill the following form."
-msgstr "Rellene por favor el sigüente formulario."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15014
-#, fuzzy, python-format
+#, python-format
msgid ""
"Borrower accepts the %(x_url_open)sconditions%(x_url_close)s of the service "
"in particular the return of books in due time."
msgstr ""
-"El lector acepta el %s del servicio, en particular devolver los libros en el "
-"plazo."
#: modules/bibcirculation/lib/bibcirculation_templates.py:15015
msgid "Borrower wants this edition only."
-msgstr "El lector sólo quiere esta edición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15158
msgid "Only this edition."
-msgstr "Sólo esta edición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15582
-#, fuzzy
msgid "ILL ID"
-msgstr "Código de petición PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15645
msgid "Notes about this ILL"
-msgstr "Notas sobre este PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15823
msgid "No more requests are pending or waiting."
-msgstr "No existen más peticiones pendientes o esperando."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:15975
msgid "Printable format"
-msgstr "Formato imprimible"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16006
-#, fuzzy, python-format
+#, python-format
msgid ""
"Check if the book already exists on %(CFG_SITE_NAME)s, before sending your "
"ILL request."
msgstr ""
-"Compruebe si el libro existe en Invenio antes de solicitar una petición de "
-"PI."
#: modules/bibcirculation/lib/bibcirculation_templates.py:16078
msgid "0 items found."
-msgstr "No se han encontrado items."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16161
msgid "Proceed anyway"
-msgstr "Continuar de todas maneras"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16730
msgid ""
"According to a decision from the Scientific Information Policy Board, books "
"purchased with budget codes other than Team accounts will be added to the "
"Library catalogue, with the indication of the purchaser."
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16751
-#, fuzzy
msgid "Document details"
-msgstr "Más detalles"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16751
-#, fuzzy
msgid "Document type"
-msgstr "Tipo de documento desconocido"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16845
-#, fuzzy
msgid "This edition only"
-msgstr "Sólo quiero esta edición."
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:16920
msgid "Cash"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17308
msgid "Article details"
-msgstr "Detalles del artículo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17309
#: modules/bibcirculation/lib/bibcirculation_templates.py:17507
msgid "Periodical title"
-msgstr "Título de la revista"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17310
#: modules/bibcirculation/lib/bibcirculation_templates.py:17508
msgid "Article title"
-msgstr "Título del artículo"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17312
#: modules/bibcirculation/lib/bibcirculation_templates.py:17510
msgid "Report number"
-msgstr "Número de informe"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17724
-#, fuzzy
msgid "Search ILL request by"
-msgstr "Nueva petición de PI"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
-#, fuzzy
msgid "ILL request id"
-msgstr "Petición de préstamo interbibliotecario"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
msgid "cost"
msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17725
msgid "notes"
-msgstr "notes"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17764
-#, fuzzy
msgid "date restriction"
-msgstr "Actualizar los parámetros"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17765
msgid "the beginning"
-msgstr "el principio"
+msgstr ""
#: modules/bibcirculation/lib/bibcirculation_templates.py:17766
msgid "now"
-msgstr "ahora"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:60
msgid "BibCheck Admin"
-msgstr "Administración de BibCheck"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:70
#: modules/bibcheck/web/admin/bibcheckadmin.py:250
#: modules/bibcheck/web/admin/bibcheckadmin.py:289
#: modules/bibcheck/web/admin/bibcheckadmin.py:326
msgid "Not authorized"
-msgstr "No autorizado"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:80
#, python-format
msgid "ERROR: %s does not exist"
-msgstr "ERROR: %s no existe"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:82
#, python-format
msgid "ERROR: %s is not a directory"
-msgstr "ERROR: %s no és un directorio"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:84
#, python-format
msgid "ERROR: %s is not writable"
-msgstr "ERROR: no tiene permiso de escritura en %s"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:117
msgid "Limit to knowledge bases containing string:"
-msgstr "Limitarlo a las bases de conocimiento con el texto:"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:135
msgid "Really delete"
-msgstr "Confirmación para eliminar"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:141
msgid "Verify syntax"
-msgstr "Verifique la sintaxis"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:146
msgid "Create new"
-msgstr "Crear otro"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:166
#, python-format
msgid "File %s does not exist."
-msgstr "El fichero %s no existe"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:175
msgid "Calling bibcheck -verify failed."
-msgstr "La invocación bibcheck -verify ha fallado."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:182
msgid "Verify BibCheck config file"
-msgstr "Verifique el archivo de configuración de BibCheck"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:183
msgid "Verify problem"
-msgstr "Problema de verificación"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:205
msgid "File"
-msgstr "Fichero"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:241
msgid "Edit BibCheck config file"
-msgstr "Editar el fichero de configuración de BibCheck"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:269
#, python-format
msgid "File %s already exists."
-msgstr "El fichero %s ya existe."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:272
#, python-format
msgid "File %s: written OK."
-msgstr "Fichero %s escrito correctamente."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:278
#, python-format
msgid "File %s: write failed."
-msgstr "Fitxer %s: no se ha podido escribir."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:280
msgid "Save BibCheck config file"
-msgstr "Guardar el fichero de configuración de BibCheck"
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:313
#, python-format
msgid "File %s deleted."
-msgstr "Fichero %s eliminado."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:315
#, python-format
msgid "File %s: delete failed."
-msgstr "Fichero %s: no se ha podido eliminar."
+msgstr ""
#: modules/bibcheck/web/admin/bibcheckadmin.py:317
msgid "Delete BibCheck config file"
-msgstr "Eliminar el fichero de configuración de BibCheck"
+msgstr ""
#: modules/bibharvest/lib/oai_repository_admin.py:155
#: modules/bibharvest/lib/oai_repository_admin.py:260
#: modules/bibharvest/lib/oai_repository_admin.py:339
msgid "Return to main selection"
-msgstr "Volver a la selección principal"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:119
msgid "Overview of sources"
-msgstr "Resumen de los servidores OAI"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:120
msgid "Harvesting status"
-msgstr "Estado de la recolección"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:138
msgid "Not Set"
-msgstr "Sin definir"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:139
msgid "never"
-msgstr "nunca"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:150
msgid "Never harvested"
-msgstr "Nunca recolectado"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:162
msgid "View Holding Pen"
-msgstr "Ver los registros en espera de revisión"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:187
#: modules/bibharvest/lib/oai_harvest_admin.py:559
msgid "No OAI source ID selected."
-msgstr "No ha seleccionado ningún identificador de servidor OAI"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:290
#: modules/bibharvest/lib/oai_harvest_admin.py:463
#: modules/bibharvest/lib/oai_harvest_admin.py:477
#: modules/bibharvest/lib/oai_harvest_admin.py:492
#: modules/bibharvest/lib/oai_harvest_admin.py:500
#: modules/bibharvest/lib/oai_harvest_admin.py:547
msgid "Go back to the OAI sources overview"
-msgstr "Volver a la lista dels servidores OAI"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:449
msgid "Try again with another url"
-msgstr "Pruebe con otra URL"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:456
msgid "Continue anyway"
-msgstr "Continuar igualmente"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:830
msgid "Return to the month view"
-msgstr "Volver al resumen mensual"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:1104
msgid "Compare with original"
-msgstr "Comparar con el original"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:1110
#: modules/bibharvest/lib/oai_harvest_admin.py:1155
msgid "Delete from holding pen"
-msgstr "Eliminar de la lista de espera"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:1128
msgid "Error when retrieving the Holding Pen entry"
-msgstr "Error al recuperar la entrada en espera"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:1136
msgid "Error when retrieving the record"
-msgstr "Error al recuperar el registro"
+msgstr ""
#: modules/bibharvest/lib/oai_harvest_admin.py:1144
msgid ""
"Error when formatting the Holding Pen entry. Probably its content is broken"
msgstr ""
-"Error al formatear la entrada en espera. Probablemente su contenido esté mal"
#: modules/bibharvest/lib/oai_harvest_admin.py:1149
msgid "Accept Holding Pen version"
-msgstr "Aceptar la versión en espera de revisión"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:51
#, python-format
msgid ""
"Limit display to knowledge bases matching %(keyword_field)s in their rules "
"and descriptions"
msgstr ""
-"Limitar la visualización a las bases de conocimiento con el texto "
-"%(keyword_field)s en sus reglas y descripciones"
#: modules/bibknowledge/lib/bibknowledge_templates.py:89
msgid "No Knowledge Base"
-msgstr "Sin base de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:148
msgid "Add New Knowledge Base"
-msgstr "Añadir otra base de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:149
msgid "Configure a dynamic KB"
-msgstr "Configurar una base de conocimiento dinámica"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:150
msgid "Add New Taxonomy"
-msgstr "Añadir una nueva taxonomía"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:191
msgid "This knowledge base already has a taxonomy file."
-msgstr "Esta base de conociminento ya tiene un archivo de taxonomía"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:192
msgid "If you upload another file, the current version will be replaced."
-msgstr "Si sube otro archivo, se reemplazará la versión actual."
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:194
#, python-format
msgid "The current taxonomy can be accessed with this URL: %s"
-msgstr "La taxonomía actual es accesible desde esta URL: %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:197
#, python-format
msgid "Please upload the RDF file for taxonomy %s"
-msgstr "Suba el fichero RDF de la taxonomía %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:234
msgid "Please configure"
-msgstr "Es necesario configurarla"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:235
msgid ""
"A dynamic knowledge base is a list of values of a "
"given field. The list is generated dynamically by "
"searching the records using a search expression."
msgstr ""
-"Un base de conocimiento dinámico es una lista de valores de un campo. La "
-"lista se genera dinámicamente a medida que se buscan registros a partir de "
-"un valor de búsqueda."
#: modules/bibknowledge/lib/bibknowledge_templates.py:239
msgid ""
"Example: Your records contain field 270__a for the "
"name and address of the author's institute. If you "
"set the field to '270__a' and the expression to "
"'270__a:*Paris*', a list of institutes in Paris "
"will be created."
msgstr ""
-"Por ejemplo: los registros tenen el camp 270__a para el nombre y la "
-"dirección de la institución del autor. Si pone como valor de campo «270__a» "
-"y la expresión «270__a:*Paris*», creará una lista d'instituciones en París."
#: modules/bibknowledge/lib/bibknowledge_templates.py:244
msgid ""
"If the expression is empty, a list of all values in "
"270__a will be created."
msgstr ""
-"Si deja la expresión vacía, creará una lista con todos los valores del campo "
-"270__a."
#: modules/bibknowledge/lib/bibknowledge_templates.py:246
msgid ""
"If the expression contains '%', like '270__a:*%*', "
"it will be replaced by a search string when the "
"knowledge base is used."
msgstr ""
-"Si la expresión contiene «%», como «270__a:*%*», será remplazado por el "
-"valor creado cuando se use la base de conocimiento."
#: modules/bibknowledge/lib/bibknowledge_templates.py:249
msgid ""
"You can enter a collection name if the expression "
"should be evaluated in a specific collection."
msgstr ""
-"Puede entrar un nombre de colección si la expresión se ha de evaluar en una "
-"colección específica."
#: modules/bibknowledge/lib/bibknowledge_templates.py:251
msgid ""
"Example 1: Your records contain field 270__a for "
"the name and address of the author's institute. If "
"you set the field to '270__a' and the expression to "
"'270__a:*Paris*', a list of institutes in Paris "
"will be created."
msgstr ""
-"Ejemplo 1: los registros tenen el camp 270__a para el nombre y la dirección "
-"de la institución del autor. Si pone como valor de campo «270__a» y la "
-"expresión «270__a:*Paris*», creará una lista d'instituciones en París."
#: modules/bibknowledge/lib/bibknowledge_templates.py:256
msgid ""
"Example 2: Return the institute's name (100__a) when "
"the user gives its postal code "
"(270__a): Set field to 100__a, expression to 270__a:"
"*%*."
msgstr ""
-"Ejemplo 2: mostrar el nombre del instituto (100__a) cuando el usuario "
-"informe del código postal (270__a): escriba 100__a en el campo, y la "
-"expresión como 270__a:*%*."
#: modules/bibknowledge/lib/bibknowledge_templates.py:260
msgid "Any collection"
-msgstr "Cualquier colección"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:282
msgid "Exporting: "
-msgstr "Exportando: "
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:324
#: modules/bibknowledge/lib/bibknowledge_templates.py:588
#: modules/bibknowledge/lib/bibknowledge_templates.py:657
msgid "Knowledge Base Mappings"
-msgstr "Mapeados de la base de conocimientos"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:325
#: modules/bibknowledge/lib/bibknowledge_templates.py:589
#: modules/bibknowledge/lib/bibknowledge_templates.py:658
msgid "Knowledge Base Attributes"
-msgstr "Atributos de la base de conocimientos"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:326
#: modules/bibknowledge/lib/bibknowledge_templates.py:590
#: modules/bibknowledge/lib/bibknowledge_templates.py:659
msgid "Knowledge Base Dependencies"
-msgstr "Dependencias de la base de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:347
msgid ""
"Here you can add new mappings to this base and "
"change the base attributes."
msgstr ""
-"Aquí puede añadir nuevos mapajes a esta base y cambiar los atributos de la "
-"base."
#: modules/bibknowledge/lib/bibknowledge_templates.py:362
msgid "Map From"
-msgstr "Convertir de:"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:425
msgid "Search for a mapping"
-msgstr "Buscar una conversión"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:480
msgid "Knowledge base is empty"
-msgstr "La base de conocimiento está vacía"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:545
msgid "You can get a these mappings in textual format by: "
-msgstr "Puede obtener los mapajes de manera textual haciendo: "
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:547
msgid "And the KBA version by:"
-msgstr "Y la versión KBA haciendo:"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:627
msgid "Update Base Attributes"
-msgstr "Actualizar los atributos de la base"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:670
msgid "This knowledge base is not used in any format elements."
msgstr ""
-"Esta base de conociminento no se está utilizando en ningún elemento de "
-"formato."
#: modules/bibknowledge/lib/bibknowledge_templates.py:700
#, python-format
msgid "Your rule: %s"
-msgstr "Su regla: %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:702
#, python-format
msgid ""
"The left side of the rule (%s) already appears in these knowledge bases:"
msgstr ""
-"La parte izquierda de la regla (%s) ya aparece en estas bases de "
-"conocimiento:"
#: modules/bibknowledge/lib/bibknowledge_templates.py:705
#, python-format
msgid ""
"The right side of the rule (%s) already appears in these knowledge bases:"
msgstr ""
-"La parte derecha de la regla (%s) ya aparece en estas bases de conocimiento:"
#: modules/bibknowledge/lib/bibknowledge_templates.py:719
msgid "Please select action"
-msgstr "Seleccione una acción"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:720
msgid "Replace the selected rules with this rule"
-msgstr "Reemplace las reglas seleccionadas con esta regla"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:721
msgid "Add this rule in the current knowledge base"
-msgstr "Añadir esta regla a la base de conocimiento actual"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:722
msgid "Cancel: do not add this rule"
-msgstr "Cancelar: no añadir esta regla"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledge_templates.py:755
msgid ""
"It is not possible to have two rules with the same left side in the same "
"knowledge base."
msgstr ""
-"No es posible tener dos reglas con la misma parte izquierda en la misma base "
-"de conocimento."
#: modules/bibknowledge/lib/bibknowledgeadmin.py:72
msgid "BibKnowledge Admin"
-msgstr "Administración de BibKnowledge"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:92
msgid "Knowledge Bases"
-msgstr "Bases de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:106
#: modules/bibknowledge/lib/bibknowledgeadmin.py:117
#: modules/bibknowledge/lib/bibknowledgeadmin.py:129
msgid "Cannot upload file"
-msgstr "No ha sido posible subir el fichero"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:107
msgid "You have not selected a file to upload"
-msgstr "No ha seleccionado ningún fichero para subir"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:141
#, python-format
msgid "File %s uploaded."
-msgstr "Fichero %s subido."
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:143
msgid "File uploaded"
-msgstr "Fichero subido"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:172
#: modules/bibknowledge/lib/bibknowledgeadmin.py:216
#: modules/bibknowledge/lib/bibknowledgeadmin.py:266
#: modules/bibknowledge/lib/bibknowledgeadmin.py:303
#: modules/bibknowledge/lib/bibknowledgeadmin.py:356
#: modules/bibknowledge/lib/bibknowledgeadmin.py:465
#: modules/bibknowledge/lib/bibknowledgeadmin.py:524
#: modules/bibknowledge/lib/bibknowledgeadmin.py:590
#: modules/bibknowledge/lib/bibknowledgeadmin.py:686
#: modules/bibknowledge/lib/bibknowledgeadmin.py:703
#: modules/bibknowledge/lib/bibknowledgeadmin.py:718
#: modules/bibknowledge/lib/bibknowledgeadmin.py:754
msgid "Manage Knowledge Bases"
-msgstr "Gestionar las bases de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:185
#: modules/bibknowledge/lib/bibknowledgeadmin.py:230
#: modules/bibknowledge/lib/bibknowledgeadmin.py:316
#: modules/bibknowledge/lib/bibknowledgeadmin.py:370
#: modules/bibknowledge/lib/bibknowledgeadmin.py:478
#: modules/bibknowledge/lib/bibknowledgeadmin.py:543
#: modules/bibknowledge/lib/bibknowledgeadmin.py:730
msgid "Unknown Knowledge Base"
-msgstr "Base de conocimiento desconocida"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:192
#, python-format
msgid "Knowledge Base %s"
-msgstr "Base de conocimiento %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:239
#, python-format
msgid "Knowledge Base %s Attributes"
-msgstr "Atributos de la base de conocimiento %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:325
#, python-format
msgid "Knowledge Base %s Dependencies"
-msgstr "Dependencias de la base de conocimiento %s"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:407
msgid "Left side exists"
-msgstr "Ya existe la parte izquierda"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:415
msgid "Right side exists"
-msgstr "Ya existe la parte derecha"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:592
msgid "Knowledge base name missing"
-msgstr "Falta el nombre de la base de conocimiento"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:612
msgid "Unknown knowledge base"
-msgstr "Base de conocimiento desconocida"
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:613
msgid "There is no knowledge base with that name."
-msgstr "No existe ninguna base de conocimiento con este nombre."
+msgstr ""
#: modules/bibknowledge/lib/bibknowledgeadmin.py:718
msgid "Delete Knowledge Base"
-msgstr "Suprimir la base de conocimiento"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:157
msgid "BibSword Admin Interface"
-msgstr "Administración de BibSword"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:171
#: modules/bibsword/lib/bibsword_webinterface.py:277
#: modules/bibsword/lib/bibsword_webinterface.py:301
#: modules/bibsword/lib/bibsword_webinterface.py:330
msgid "Export with BibSword: Step 2/4"
-msgstr "Exportar con BibSword: paso 2/4"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:222
#: modules/bibsword/lib/bibsword_webinterface.py:233
#: modules/bibsword/lib/bibsword_webinterface.py:291
msgid "Export with BibSword: Step 1/4"
-msgstr "Exportar con BibSword: paso 1/4"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:315
#: modules/bibsword/lib/bibsword_webinterface.py:343
#: modules/bibsword/lib/bibsword_webinterface.py:374
msgid "Export with BibSword: Step 3/4"
-msgstr "Exportar con BibSword: paso 3/4"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:358
#: modules/bibsword/lib/bibsword_webinterface.py:389
msgid "Export with BibSword: Step 4/4"
-msgstr "Exportar con BibSword: paso 4/4"
+msgstr ""
#: modules/bibsword/lib/bibsword_webinterface.py:434
msgid "Export with BibSword: Acknowledgement"
-msgstr "Exportar con BibSword: verificación"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:243
msgid "More than one possible recID, ambiguous behaviour"
-msgstr "Más de un posible recID, comportamiento ambiguo"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:243
msgid "No records match that file name"
-msgstr "Ningún registro tiene ficheros con este nombre"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:244
msgid "File already exists"
-msgstr "Este fichero ya existe"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:244
msgid "A file with the same name and format already exists"
-msgstr "Ya existe un registro com este nombre y formato"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:245
#, python-format
msgid "No rights to upload to collection '%s'"
-msgstr "No tiene permiso para subir documentos a la colección «%s»"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:449
msgid "Guests are not authorized to run batchuploader"
msgstr ""
-"Los usuarios no identificados no están autorizados a efectuar cargas masivas"
#: modules/bibupload/lib/batchuploader_engine.py:451
#, python-format
msgid "The user '%s' is not authorized to run batchuploader"
-msgstr "El usuario «%s» no está autorizado a efectuar cargas masivas"
+msgstr ""
#: modules/bibupload/lib/batchuploader_engine.py:506
#: modules/bibupload/lib/batchuploader_engine.py:519
#, python-format
msgid ""
"The user '%(x_user)s' is not authorized to modify collection '%(x_coll)s'"
msgstr ""
-"El usuario «%(x_user)s» no está autorizado a modificar la colección "
-"«%(x_coll)s»"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:267
msgid "Fatal: Author ID capabilities are disabled on this system."
msgstr ""
-"Fatal: no están habilitadas las opciones de identificación de autor (Author "
-"ID) en este sistema."
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:270
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:313
msgid "Fatal: You are not allowed to access this functionality."
-msgstr "Fatal: no está autorizado a accedir a esta funcionalidad."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:662
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:763
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:920
msgid "Papers removed from this profile"
-msgstr "Documentos eliminados de este perfil"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:663
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:667
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:728
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:732
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:764
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:768
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:921
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:925
msgid "Papers in need of review"
-msgstr "Documentos que necesitan revisión"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:664
msgid "Open Tickets"
-msgstr "Tareas abiertas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:664
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:729
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:765
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:922
msgid "Data"
-msgstr "Datos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:665
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:766
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:923
msgid "Papers of this Person"
-msgstr "Documentos de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:666
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:767
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:924
msgid "Papers _not_ of this Person"
-msgstr "Documentos _no_ de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:668
msgid "Tickets for this Person"
-msgstr "Tareas para esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:669
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:734
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:770
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:927
msgid "Additional Data for this Person"
-msgstr "Otros datos de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:671
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:735
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:771
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:947
msgid "Sorry, there are currently no documents to be found in this category."
-msgstr "No hay ningún documento de esta categoría."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:672
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:772
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:948
msgid "Yes, those papers are by this person."
-msgstr "Sí, estos documentos son de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:673
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:773
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:949
msgid "No, those papers are not by this person"
-msgstr "No, estos documentos no son de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:674
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:774
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:950
msgid "Assign to other person"
-msgstr "Asignarlos a otra persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:675
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:739
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:775
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:951
msgid "Forget decision"
-msgstr "Olvidar la decisión"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:676
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:690
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:776
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:790
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:952
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:966
msgid "Confirm!"
-msgstr "¡Confirmar!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:677
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:777
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:953
msgid "Yes, this paper is by this person."
-msgstr "Sí, este documento es de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:678
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:778
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:954
msgid "Rejected!"
-msgstr "¡Rechazado!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:679
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:779
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:955
msgid "No, this paper is <i>not</i> by this person"
-msgstr "No, este documento <i>no</i> es de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:680
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:688
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:696
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:744
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:752
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:760
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:780
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:788
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:796
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:956
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:964
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:972
msgid "Assign to another person"
-msgstr "Asignarlo a una otra persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:681
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:689
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:697
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:745
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:753
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:761
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:781
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:789
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:797
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:957
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:965
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:973
msgid "To other person!"
-msgstr "¡A otra persona!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:682
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:782
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:958
msgid "Confirmed."
-msgstr "Confirmado."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:683
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:783
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:959
msgid "Marked as this person's paper"
-msgstr "Marcado como de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:684
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:692
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:748
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:756
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:757
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:784
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:792
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:960
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:968
msgid "Forget decision!"
-msgstr "¡Olvidar la decisión!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:685
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:693
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:785
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:793
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:961
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:969
msgid "Forget decision."
-msgstr "Olvidar la decisión."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:686
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:786
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:962
msgid "Repeal!"
-msgstr "¡Anular!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:687
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:787
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:963
msgid "But it's <i>not</i> this person's paper."
-msgstr "Pero <i>no</i> es el document de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:691
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:791
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:967
msgid "But it <i>is</i> this person's paper."
-msgstr "Pero <i>sí</i> que es un document de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:694
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:794
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:970
msgid "Repealed"
-msgstr "Anulado"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:695
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:795
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:971
msgid "Marked as not this person's paper"
-msgstr "Marcado que no es de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:727
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:730
msgid "Your papers"
-msgstr "Sus documentos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:727
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:731
msgid "Not your papers"
-msgstr "Documentos no suyos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:736
msgid "These are mine!"
-msgstr "¡Éstos son míos!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:737
msgid "These are not mine!"
-msgstr "¡Estos no son míos!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:738
msgid "It's not mine, but I know whose it is!"
-msgstr "No es mío, pero sé de quien es"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:740
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:754
msgid "Mine!"
-msgstr "¡Mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:741
msgid "This is my paper!"
-msgstr "¡Este es mi document!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:742
msgid "Not mine!"
-msgstr "¡No es mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:743
msgid "This is not my paper!"
-msgstr "¡Este documento no es mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:746
msgid "Not Mine."
-msgstr "No es mío."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:747
msgid "Marked as my paper!"
-msgstr "Marcado como mío"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:749
msgid "Forget assignment decision"
-msgstr "Olvidar la decisión de asignación"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:750
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:758
msgid "Not Mine!"
-msgstr "¡No es mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:751
msgid "But this is mine!"
-msgstr "¡Pero este es mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:755
msgid "But this is my paper!"
-msgstr "¡Pero este documento es mío!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:759
msgid "Marked as not your paper."
-msgstr "Marcado que no es suyo."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:769
msgid "Tickes you created about this person"
-msgstr "Tareas que usted ha creado sobre esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:922
msgid "Tickets"
-msgstr "Tareas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:926
msgid "Request Tickets"
-msgstr "Peticiones"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:1178
msgid "Submit Attribution Information"
-msgstr "Enviar la información de atribución"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:1323
msgid "Please review your actions"
-msgstr "Revise por favor sus acciones"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2008
msgid "Claim this paper"
-msgstr "Reivindicar este documento"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2109
msgid ""
"<p>We're sorry. An error occurred while handling your request. Please find "
"more information below:</p>"
msgstr ""
-"<p>Desgraciadamente, ha ocurrido un error mientras se gestionaba su "
-"petición. Vea aquí más información:</p>"
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2187
msgid "Person search for assignment in progress!"
-msgstr "Se está efectuando la búsqueda de la persona para la asignación."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2188
msgid "You are searching for a person to assign the following papers:"
-msgstr "Está buscando una persona para asignarle estos documentos:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2349
#, python-format
msgid "You are going to claim papers for: %s"
-msgstr "Está a punto de reivindicar documentos en nombre de: %s"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2377
msgid "This page in not accessible directly."
-msgstr "No puede acceder a esta página directamente."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_webinterface.py:2379
msgid "Welcome!"
-msgstr "Bienvenido(a)!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:153
msgid "Click here to review the transactions."
-msgstr "Pinche aquí para revisar las transacciones."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:196
msgid "Quit searching."
-msgstr "Abandona la búsqueda"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:417
msgid "You are about to attribute the following paper"
-msgstr "Está a punto de atribuir este documento"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:439
msgid "Info"
-msgstr "Información"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:451
msgid " Search for a person to attribute the paper to"
-msgstr " Buscar a una persona para atribuirle el documento"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:512
#: modules/bibauthorid/lib/bibauthorid_templates.py:607
#: modules/bibauthorid/lib/bibauthorid_templates.py:679
msgid "Select All"
-msgstr "Seleccionarlos todos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:513
#: modules/bibauthorid/lib/bibauthorid_templates.py:608
#: modules/bibauthorid/lib/bibauthorid_templates.py:680
msgid "Select None"
-msgstr "No seleccionar ninguno"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:514
#: modules/bibauthorid/lib/bibauthorid_templates.py:609
#: modules/bibauthorid/lib/bibauthorid_templates.py:681
msgid "Invert Selection"
-msgstr "Invertir la selección"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:516
#: modules/bibauthorid/lib/bibauthorid_templates.py:611
msgid "Hide successful claims"
-msgstr "Esconder las reivindicaciones satisfechas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:576
msgid "No status information found."
-msgstr "No se ha encontrado información del estado."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:598
msgid "Operator review of user actions pending"
-msgstr "Revisión por parte del operador de las acciones de usuari pendientes"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:642
msgid "Sorry, there are currently no records to be found in this category."
-msgstr "No hay ningún registro de esta categoría."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:671
msgid "Review Transaction"
-msgstr "Revise la transacción"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:678
msgid " On all pages: "
-msgstr " En todos los documentos: "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:714
msgid "Names variants:"
-msgstr "Variantes del nombre:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:836
msgid "These records have been marked as not being from this person."
-msgstr "Estos registros se han marcado como que no son de esta persona."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:837
msgid "They will be regarded in the next run of the author "
-msgstr "Se tendrán en cuenta la próxima ejecución del algoritmo "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:838
msgid "disambiguation algorithm and might disappear from this listing."
-msgstr "de desambiguación de autores y podrán desaparecer de esta lista."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:864
#: modules/bibauthorid/lib/bibauthorid_templates.py:865
#: modules/bibauthorid/lib/bibauthorid_templates.py:868
msgid "Not provided"
-msgstr "Sin información"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:866
msgid "Not available"
-msgstr "No disponible"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:867
msgid "No comments"
-msgstr "Sin comentarios"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:869
msgid "Not Available"
-msgstr "No disponible"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:889
msgid " Delete this ticket"
-msgstr " Suprimir esta tarea"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:893
msgid " Commit this entire ticket"
-msgstr " Dar por buena toda esta tarea"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:952
msgid "... This tab is currently under construction ... "
-msgstr "... Esta pestaña está todavía en construcción... "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:973
msgid ""
"We could not reliably determine the name of the author on the records below "
"to automatically perform an assignment."
msgstr ""
-"No se ha podido determinar de una manera fiable el nombre del autor de los "
-"siguientes registros para realizar una asignación automática."
#: modules/bibauthorid/lib/bibauthorid_templates.py:975
msgid "Please select an author for the records in question."
-msgstr "Seleccione un autor para los registros en cuestión."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:976
msgid "Boxes not selected will be ignored in the process."
-msgstr "Las casillas no seleccionadas serán ignoradas en el proceso."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:983
msgid "Select name for"
-msgstr "Seleccione el nombre para"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:992
#: modules/bibauthorid/lib/bibauthorid_templates.py:1018
#: modules/bibauthorid/lib/bibauthorid_templates.py:1162
msgid "Error retrieving record title"
-msgstr "Error al recuperar el registro"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:994
msgid "Paper title: "
-msgstr "Título del documento: "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1006
msgid "The following names have been automatically chosen:"
-msgstr "Se han escogido automáticamente estos nombres:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1021
msgid " -- With name: "
-msgstr " -- Con el nombre:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1027
msgid "Ignore"
-msgstr "Ignorar"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1076
#: modules/bibauthorid/lib/bibauthorid_templates.py:1092
msgid "Navigation:"
-msgstr "Navegación:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1077
msgid "Run paper attribution for another author"
-msgstr "Ejecutar la atribución de documentos para otro autor"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1078
#: modules/bibauthorid/lib/bibauthorid_templates.py:1095
msgid "Person Interface FAQ"
-msgstr "Preguntas más frecuentes sobre la interfaz de personas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1093
msgid "Person Search"
-msgstr "Buscar personas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1094
msgid "Open tickets"
-msgstr "Tareas abiertas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1123
msgid "Symbols legend: "
-msgstr "Leyenda: "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1128
#: modules/bibauthorid/lib/bibauthorid_templates.py:1186
msgid "Everything is shiny, captain!"
-msgstr "¡Todo va sobre ruedas, capitán!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1129
msgid "The result of this request will be visible immediately"
-msgstr "El resultado de esta petición será visible imediatamente"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1134
msgid "Confirmation needed to continue"
-msgstr "Hace falta la confirmación para continuar"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1135
msgid ""
"The result of this request will be visible immediately but we need your "
"confirmation to do so for this paper have been manually claimed before"
msgstr ""
-"El resultado de esta petición será visible imediatamente pero es necesaria "
-"su confirmación, ya que este document ya había sido reivindicado manualmente"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1140
msgid "This will create a change request for the operators"
-msgstr "Esto creará una petición de cambio a los operadores"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1141
msgid ""
"The result of this request will be visible upon confirmation through an "
"operator"
msgstr ""
-"El resultado de esta petición será visible una vez sea confirmado por un "
-"operador"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1179
msgid "Selected name on paper"
-msgstr "Nombre seleccionado en el documento"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1190
msgid "Verification needed to continue"
-msgstr "Hace falta la verificación para continuar"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1194
msgid "This will create a request for the operators"
-msgstr "Esto creará una petición a los operadores"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1216
msgid "Please Check your entries"
-msgstr "Compruebe sus entradas"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1216
msgid "Sorry."
-msgstr "Lo sentimos."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1221
msgid "Please provide at least one transaction."
-msgstr "Seleccione al menos una transacción."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1221
msgid "Error:"
-msgstr "Error:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1232
msgid "Please provide your information"
-msgstr "Introduzca sus datos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1239
msgid "Please provide your first name"
-msgstr "Introduzca su nombre de pila"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1243
#: modules/bibauthorid/lib/bibauthorid_templates.py:1245
msgid "Your first name:"
-msgstr "Su nombre:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1251
msgid "Please provide your last name"
-msgstr "Sus apellidos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1256
#: modules/bibauthorid/lib/bibauthorid_templates.py:1258
msgid "Your last name:"
-msgstr "Sus apellidos:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1266
msgid "Please provide your eMail address"
-msgstr "Su dirección de correo electrónico"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1272
msgid ""
"This eMail address is reserved by a user. Please log in or provide an "
"alternative eMail address"
msgstr ""
-"Esta dirección de correo electrónico está reservada por otro usuario. Por "
-"favor, dese de alta o ofrezca una dirección alternativa"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1277
#: modules/bibauthorid/lib/bibauthorid_templates.py:1279
msgid "Your eMail:"
-msgstr "Su dirección de correo electrónico:"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1283
msgid "You may leave a comment (optional)"
-msgstr "Puede dejar un comentario (opcional)"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1296
msgid "Continue claiming*"
-msgstr "Continuar con las reivindicaciones*"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1298
msgid "Confirm these changes**"
-msgstr "Confirme estos cambios**"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1301
msgid "!Delete the entire request!"
-msgstr "¡Eliminar toda la petición!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1314
msgid "Mark as your documents"
-msgstr "Marcar lo com sus documentos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1329
msgid "Mark as _not_ your documents"
-msgstr "Marcados como documentos _no_ suyos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1340
msgid "Nothing staged as not yours"
-msgstr "Nada pendiente como no suyo"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1344
msgid "Mark as their documents"
-msgstr "Marcarlo como documentos suyos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1355
#: modules/bibauthorid/lib/bibauthorid_templates.py:1370
msgid "Nothing staged in this category"
-msgstr "Nada pendiente en esta categoría"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1359
msgid "Mark as _not_ their documents"
-msgstr "Marcarlo como a documentos _no_ suyos"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1376
msgid " * You can come back to this page later. Nothing will be lost. <br />"
msgstr ""
-" * Puede volver a esta página més adelante. No se perderá nada. <Br />"
#: modules/bibauthorid/lib/bibauthorid_templates.py:1377
msgid ""
" ** Performs all requested changes. Changes subject to permission "
"restrictions will be submitted to an operator for manual review."
msgstr ""
-" ** Executa totes les peticions pendents. Els canvis que tinguin "
-"restricció de permisos s'enviaran a un operador per a la seva revisió manual."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1433
#, python-format
msgid "We do not have a publication list for '%s'."
-msgstr "No tenemos ninguna lista de publicaciones de '%s'."
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1448
#: modules/bibauthorid/lib/bibauthorid_templates.py:1560
msgid "Create a new Person for your search"
-msgstr "Crear una nueva persona para sus búsqueda"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1503
msgid "Recent Papers"
-msgstr "Documentos recientes"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1515
msgid "YES!"
-msgstr "¡SÍ!"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1516
msgid " Attribute Papers To "
-msgstr " Atribuir los documentos a "
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1522
#: modules/bibauthorid/lib/bibauthorid_templates.py:1544
msgid "Publication List "
-msgstr "Lista de publicaciones"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1529
msgid "Showing the"
-msgstr "Se muestran"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1529
msgid "most recent documents:"
-msgstr "los documentos más recientes"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1538
msgid "Sorry, there are no documents known for this person"
-msgstr "No hi hay documentos conocidos de esta persona"
+msgstr ""
#: modules/bibauthorid/lib/bibauthorid_templates.py:1540
msgid ""
"Information not shown to increase performances. Please refine your search."
msgstr ""
-"No se muestra toda la información para mejorar el rendimento. Por favor "
-"concrete su búsqueda."
#: modules/bibauthorid/lib/bibauthorid_templates.py:1648
msgid "Correct my publication lists!"
-msgstr "Corrijan mi lista de publicaciones"
-
-#~ msgid "Make sure we match the right names!"
-#~ msgstr "¡Asegúrese que los nombres se correspondan!"
-
-#~ msgid ""
-#~ "Please select an author on each of the records that will be assigned."
-#~ msgstr ""
-#~ "Seleccione un autor para cada uno de los los registros que se le asignen."
-
-#~ msgid "Papers without a name selected will be ignored in the process."
-#~ msgstr ""
-#~ "En el proceso, se ignoraran los documentos sin un nombre seleccionado."
-
-#~ msgid ""
-#~ "The result of this request will be visible immediately but we need your "
-#~ "confirmation to do so for this paper has been manually claimed before"
-#~ msgstr ""
-#~ "El resultado de esta petición será visible, imediatamente pero es "
-#~ "necesaria su confirmación, porque este document ya había sido "
-#~ "reivindicado manualmente"
-
-#~ msgid "Tickets you created about this person"
-#~ msgstr "Tareas que usted ha creado sobre esta persona"
-
-#~ msgid "Error: No BibCatalog system configured."
-#~ msgstr "Error: el sistema BibCatalog no está configurado"
-
-#~ msgid "0 borrower(s) found."
-#~ msgstr "No se ha encontrado ningún usuario."
-
-#~ msgid "You can see your loans "
-#~ msgstr "Puede ver sus préstamos "
-
-#~ msgid "."
-#~ msgstr "."
-
-#~ msgid ""
-#~ "The item %(x_title)s with barcode %(x_barcode)s has been returned with "
-#~ "success."
-#~ msgstr ""
-#~ "Se ha devuelto correctamente el item %(x_title)s, con el código de barras "
-#~ "%(x_barcode)s."
-
-#~ msgid "There %s request(s) on the book who has been returned."
-#~ msgstr "Hay %s reserva(s) para el libro devuelto."
-
-#~ msgid "There are no requests waiting on the item <strong>%s</strong>."
-#~ msgstr "No hay reservas pendientes para el item <strong>%s</strong>."
-
-#~ msgid "No. Copies"
-#~ msgstr "Ejemplares"
-
-#~ msgid "Hold requests and loans overview on"
-#~ msgstr "Reservas y préstamos para"
-
-#~ msgid "Library(ies)"
-#~ msgstr "Bibliotecas"
-
-#~ msgid "new copy"
-#~ msgstr "nueva copia"
-
-#~ msgid "A %s has been added."
-#~ msgstr "Se ha añadido un %s."
-
-#~ msgid "0 library(ies) found."
-#~ msgstr "No se ha encontrado ninguna biblioteca"
-
-#~ msgid "Back borrower's loans"
-#~ msgstr "Volver a los préstamos"
-
-#~ msgid "Borrower wants only this edition?"
-#~ msgstr "Quiere solamente esta edición, el lector?"
-
-#~ msgid ""
-#~ "I accept the %s of the service in particular the return of books in due "
-#~ "time."
-#~ msgstr ""
-#~ "Acepto los %s del servicio, en particular devolver los libros a tiempo."
-
-#~ msgid "Volume, Issue, Page"
-#~ msgstr "Volumen, número, página"
-
-#~ msgid "Barcoce"
-#~ msgstr "Código de barras"
-
-#~ msgid "An ILL request has been updated with success."
-#~ msgstr "Se ha actualizado la petición de PI"
-
-#~ msgid "Book does not exists on Invenio."
-#~ msgstr "Este libro no existe en Invenio."
-
-#~ msgid ""
-#~ "Borrower accepts the %s of the service in particular the return of books "
-#~ "in due time."
-#~ msgstr ""
-#~ "El lector acepta el %s del servicio, en particular devolver los libros en "
-#~ "el plazo."
-
-#~ msgid "Check if the book already exists on Invenio,"
-#~ msgstr "Compruebe si el libro existe en Invenio,"
-
-#~ msgid ""
-#~ "Check if the book already exists on Invenio, before to send your ILL "
-#~ "request."
-#~ msgstr ""
-#~ "Compruebe si el libro existe en Invenio antes de solicitar una petición "
-#~ "de PI."
-
-#~ msgid "Book does not exists on Invenio. Please fill the following form."
-#~ msgstr ""
-#~ "Este libro no existe en Invenio. Rellene por favor el sigüente "
-#~ "formulario."
-
-#~ msgid "This book is sent to you ..."
-#~ msgstr "Se le ha enviado este libro..."
-
-#~ msgid "Id"
-#~ msgstr "Identificador"
-
-#~ msgid ""
-#~ "Automatically generated <span class=\"keyword single\">single</"
-#~ "span>, <span class=\"keyword composite\">composite</span>, <span "
-#~ "class=\"keyword author-kw\">author</span>, and <span class="
-#~ "\"keyword other-kw\">other keywords</span>."
-#~ msgstr ""
-#~ "Generado automáticamente <span class=\"keyword single\">sencillo</span>, "
-#~ "<span class=\"keyword composite\">compuesto</span>, <span class=\"keyword "
-#~ "author-kw\">autor</span>, i <span class=\"keyword other-kw\">otras "
-#~ "palabras clave</span>."
-
-#~ msgid "Automated keyword extraction wasn't run for this document yet."
-#~ msgstr ""
-#~ "Para este documento todavía no se han extraído automáticamente las "
-#~ "palabras clave."
-
-#~ msgid "Generate keywords"
-#~ msgstr "Generar palabras clave"
-
-#~ msgid "There are no suitable keywords for display in this record."
-#~ msgstr "No hay palabras clave relevantes para este registro."
-
-#~ msgid "Show more..."
-#~ msgstr "Mostrar más..."
-
-#~ msgid "Unweighted %s keywords:"
-#~ msgstr "Palabras clave no sospesades de tipo %s"
-
-#~ msgid "Weighted %s keywords:"
-#~ msgstr "Palabras clave sospesades de tipo %s"
-
-#~ msgid "tag cloud"
-#~ msgstr "nuve de etiquetas"
-
-#~ msgid "list"
-#~ msgstr "lista"
-
-#~ msgid "XML"
-#~ msgstr "XML"
-
-#~ msgid "Unknown type: %s"
-#~ msgstr "Tipo desconocido: %s"
-
-#~ msgid "The site settings do not allow automatic keyword extraction"
-#~ msgstr ""
-#~ "La configuración de este sitio no permite la extracción automática de "
-#~ "palabras clave"
-
-#~ msgid ""
-#~ "We have registered your request, the automatedkeyword extraction will run "
-#~ "after some time. Please return back in a while."
-#~ msgstr ""
-#~ "Su petición se ha registrado. La extracción automática de palabras clave "
-#~ "se ejecutará pronto. Vuelva dentro de un rato."
-
-#~ msgid ""
-#~ "Unfortunately, we don't have a PDF fulltext for this record in the "
-#~ "storage, keywords cannot be generated using an "
-#~ "automated process."
-#~ msgstr ""
-#~ "Por desgracia, no existe una copia local del texto completo en PDF. No es "
-#~ "posible generar automáticamente las palabras clave."
-
-#~ msgid "The format %s does not exist for the given version: %s"
-#~ msgstr "El formato %s no existe para la versión %s"
-
-#~ msgid "does not exist"
-#~ msgstr "no existe"
-
-#~ msgid ""
-#~ "WARNING: The following records are pending "
-#~ "execution in the task "
-#~ "queue. If you proceed with the changes, the "
-#~ "modifications made with other tool (e.g. BibEdit) "
-#~ "to these records will be lost"
-#~ msgstr ""
-#~ "ATENCIÓN: los siguientes registros están pendentes de actualizar en la "
-#~ "cola de taresa pendientes. Si sigue con los cambios, se perderán las "
-#~ "modificaciones hechas con la otra herramienta (es decir, BibEdit) sobre "
-#~ "este registro."
-
-#~ msgid ""
-#~ "We are sorry, a problem has occured during the processing of your video "
-#~ "upload%(submission_title)s."
-#~ msgstr ""
-#~ "Por desgracia ha ocurrido un problema durante el proceso de carga de su "
-#~ "video upload%(submission_title)s."
-
-#~ msgid "The file you uploaded was %(input_filename)s."
-#~ msgstr "El fichero que ha subido es %(input_filename)s."
-
-#~ msgid "Your video might not be fully available until intervention."
-#~ msgstr "Su video no estará disponible hasta su verificación."
-
-#~ msgid "You can check the status of your video here: %(record_url)s."
-#~ msgstr "Puede comprobar el estado de su video aquí: %(record_url)s."
-
-#~ msgid ""
-#~ "You might want to take a look at %(guidelines_url)s and modify or redo "
-#~ "your submission."
-#~ msgstr ""
-#~ "Puede echar un vistazo a %(guidelines_url)s y modificar o repetir su "
-#~ "envío."
-
-#~ msgid "the video guidelines"
-#~ msgstr "las guías para los videos"
-
-#~ msgid ""
-#~ "Your video submission%(submission_title)s was successfully processed."
-#~ msgstr ""
-#~ "Su video submission%(submission_title)s ha sido correctamente procesado."
-
-#~ msgid "Your video is now available here: %(record_url)s."
-#~ msgstr "Su video ahora está accesible en: %(record_url)s."
-
-#~ msgid ""
-#~ "If the videos quality is not as expected, you might want to take a look "
-#~ "at %(guidelines_url)s and modify or redo your submission."
-#~ msgstr ""
-#~ "Si la cualidad de los videos no es la que espera, puede echar un vistazo "
-#~ "a %(guidelines_url)s y modificar o repetir su envío."
-
-#~ msgid "More than one templates found in the document. No format found."
-#~ msgstr ""
-#~ "Se ha encontrado más de una plantilla en el documento. No se ha "
-#~ "encontrado el formato."
-
-#~ msgid "Note for programmer: you have not implemented operator %s."
-#~ msgstr "Nota para el programador: no ha implementado el operador %s."
-
-#~ msgid "Name %s is not recognised as a valid operator name."
-#~ msgstr "El nombre %s no es un operador reconocido."
-
-#~ msgid "Duplicate name: %s."
-#~ msgstr "Nombre duplicado: %s."
-
-#~ msgid "No name defined for the template."
-#~ msgstr "No se ha definido ningún nombre para la plantilla."
-
-#~ msgid "No description entered for the template."
-#~ msgstr "No se ha entrado ninguna descripción para la plantilla."
-
-#~ msgid "No content type specified for the template. Using default: text/xml."
-#~ msgstr ""
-#~ "No se ha entrado el tipo de contenido para la plantilla. Se usará el "
-#~ "valor per defecto: text/xml."
-
-#~ msgid "Missing attribute \"name\" in TEMPLATE_REF."
-#~ msgstr "Falta el atributo \"name\" en TEMPLATE_REF."
-
-#~ msgid "Missing attribute \"name\" in ELEMENT."
-#~ msgstr "Falta el atributo \"name\" en ELEMENT."
-
-#~ msgid "Missing attribute \"name\" in FIELD."
-#~ msgstr "Falta el atributo \"name\" en FIELD."
-
-#~ msgid "Field %s is not defined."
-#~ msgstr "El campo %s no está definido."
-
-#~ msgid "Missing attribute \"value\" in TEXT."
-#~ msgstr "Falta el atributo \"name\" en TEXT."
-
-#~ msgid "Missing attribute \"object\" in LOOP."
-#~ msgstr "Falta el atributo \"name\" en LOOP."
-
-#~ msgid "Missing attrbute \"name\" in IF."
-#~ msgstr "Falta el atributo \"name\" en IF."
-
-#~ msgid "Invalid regular expression: %s."
-#~ msgstr "Expresión regular no válida: %s"
-
-#~ msgid "Invalid syntax of IF statement."
-#~ msgstr "Sintaxis no válida para el condicional IF."
-
-#~ msgid "Invalid address: %s %s"
-#~ msgstr "Dirección no válida: %s %s"
-
-#~ msgid ""
-#~ "Invalid display type. Must be one of: value, tag, ind1, ind2, code; "
-#~ "received: %s."
-#~ msgstr ""
-#~ "Tipo de visualización no válida. Ha de ser uno de: value, tag, ind1, "
-#~ "ind2, code; received: %s."
-
-#~ msgid "Repeating subfield codes in the same instance!"
-#~ msgstr "¡Códigos de subcampo repetidos en la misma instancia!"
-
-#~ msgid "No template could be found for output format %s."
-#~ msgstr "No se ha encontrado una plantilla para el formato de salida %s."
-
-#~ msgid "Could not find format element named %s."
-#~ msgstr "No se ha encontrado el elemento de formato %s"
-
-#~ msgid "Error when evaluating format element %s with parameters %s."
-#~ msgstr "Error al evaluar el elemento de formato %s con el parámetro %s."
-
-#~ msgid ""
-#~ "Escape mode for format element %s could not be retrieved. Using default "
-#~ "mode instead."
-#~ msgstr ""
-#~ "No se ha podido obtener el modo de escape para el elemento de formato %s. "
-#~ "Se usará el modo por defecto."
-
-#~ msgid "\"nbMax\" parameter for %s must be an \"int\"."
-#~ msgstr "El parámetro \"nbMax\" para %s ha de ser un \"int\"."
-
-#~ msgid "Could not read format template named %s. %s."
-#~ msgstr "No se ha podido leer la plantilla de formato %s. %s."
-
-#~ msgid "Format element %s could not be found."
-#~ msgstr "No se ha encontrado el elemento de formato %s."
-
-#~ msgid "Error in format element %s. %s."
-#~ msgstr "Error en el elemento de formato %s. %s."
-
-#~ msgid "Format element %s has no function named \"format\"."
-#~ msgstr ""
-#~ "El elemento de formato %s no tiene ninguna función llamada \"format\"."
-
-#~ msgid "Output format with code %s could not be found."
-#~ msgstr "No se ha encontrado el formato de salida con el código %s."
-
-#~ msgid "Output format %s cannot not be read. %s."
-#~ msgstr "No se ha podido leer el formato de salida %s. %s."
-
-#~ msgid "Could not find output format named %s."
-#~ msgstr "No se ha podido encontrar el formato de salida %s."
-
-#~ msgid "Could not find a fresh name for output format %s."
-#~ msgstr ""
-#~ "No ha sido posible encontrar un nuevo nombre para el formato de salida %s"
-
-#~ msgid "No Record Found for %s."
-#~ msgstr "No se ha encontrado ningún registro para %s."
-
-#~ msgid "Tag specification \"%s\" must end with column \":\" at line %s."
-#~ msgstr ""
-#~ "La especificación para la etiqueta \"%s\" debe acabar en la columna \":"
-#~ "\", linea %s."
-
-#~ msgid "Tag specification \"%s\" must start with \"tag\" at line %s."
-#~ msgstr ""
-#~ "La especificación para la etiqueta \"%s\" debe comenzar con \"tag\", "
-#~ "linea %s."
-
-#~ msgid "\"tag\" must be lowercase in \"%s\" at line %s."
-#~ msgstr "\"tag\" debe estar en minúscula en \"%s\", linea %s."
-
-#~ msgid "Should be \"tag field_number:\" at line %s."
-#~ msgstr "Debería ser \"tag field_number:\", linea %s."
-
-#~ msgid "Invalid tag \"%s\" at line %s."
-#~ msgstr "Etiqueta \"%s\" no válida, linea %s."
-
-#~ msgid "Condition \"%s\" is outside a tag specification at line %s."
-#~ msgstr ""
-#~ "La condición \"%s\" ocurre fuera de una especificación de etiqueta, linea "
-#~ "%s."
-
-#~ msgid "Condition \"%s\" can only have a single separator --- at line %s."
-#~ msgstr "La condición \"%s\" solo puede tenir un solo carácter, linea %s."
-
-#~ msgid "Template \"%s\" does not exist at line %s."
-#~ msgstr "La plantilla \"%s\" no existe, linea %s."
-
-#~ msgid "Missing column \":\" after \"default\" in \"%s\" at line %s."
-#~ msgstr "Falta la columna \":\" después de \"default\" en \"%s\", linea %s."
-
-#~ msgid ""
-#~ "Default template specification \"%s\" must start with \"default :\" at "
-#~ "line %s."
-#~ msgstr ""
-#~ "La especificación de la plantilla por defecto \"%s\" debe comenzar por "
-#~ "\"default :\", linea %s."
-
-#~ msgid "\"default\" keyword must be lowercase in \"%s\" at line %s."
-#~ msgstr ""
-#~ "La palabra \"default\" debe estar en minúsculas en \"%s\", linea %s."
-
-#~ msgid "Line %s could not be understood at line %s."
-#~ msgstr "No se puede entender la linea %s, linea %s."
-
-#~ msgid "Output format %s cannot not be read. %s"
-#~ msgstr "No se ha podido leer el format de salida %s. %s"
-
-#~ msgid ""
-#~ "Could not find a name specified in tag \"<name>\" inside format template "
-#~ "%s."
-#~ msgstr ""
-#~ "No se ha podido encontrar el nombre especificado en la etiqueta \"<name>"
-#~ "\" en la plantilla de formato %s."
-
-#~ msgid ""
-#~ "Could not find a description specified in tag \"<description>\" inside "
-#~ "format template %s."
-#~ msgstr ""
-#~ "No se ha podido econtrar la descripción especificada en la etiqueta "
-#~ "\"<description>\" en la plantilla de formato %s."
-
-#~ msgid "Format template %s calls undefined element \"%s\"."
-#~ msgstr "La plantilla de formato %s llama al elemento no definido \"%s\"."
-
-#~ msgid ""
-#~ "Format template %s calls unreadable element \"%s\". Check element file "
-#~ "permissions."
-#~ msgstr ""
-#~ "La plantilla de formato %s llama al elemento ilegible \"%s\". Compruebe "
-#~ "los permisos de fichero del elemento."
-
-#~ msgid "Cannot load element \"%s\" in template %s. Check element code."
-#~ msgstr ""
-#~ "No se ha podido cargar el elemento \"%s\" en la plantilla %s. Compruebe "
-#~ "el código del elemento."
-
-#~ msgid ""
-#~ "Format element %s uses unknown parameter \"%s\" in format template %s."
-#~ msgstr ""
-#~ "El elemento de formato %s usa el parámetro desconocido \"%s\" en la "
-#~ "plantilla de formato %s."
-
-#~ msgid "Could not read format template named %s. %s"
-#~ msgstr "No se ha podido leer la plantilla de formato %s. %s"
-
-#~ msgid "Format element %s cannot not be read. %s"
-#~ msgstr "No se ha podido leer el elemento de formato %s. %s"
-
-#~ msgid "Add this document to your ScienceWise.info bookmarks"
-#~ msgstr "Añadir este documento a sus favoritos en ScienceWise.info"
-
-#~ msgid "Add this article to your ScienceWise.info bookmarks"
-#~ msgstr "Añadir este artículo a sus favoritos en ScienceWise.info"
-
-#~ msgid ""
-#~ "Cannot write in etc/bibformat dir of your Invenio installation. Check "
-#~ "directory permission."
-#~ msgstr ""
-#~ "No se ha podido escribir en el directorio etc/bibformat de su instalación "
-#~ "de Invenio. Compruebe los permisos del directorio."
-
-#~ msgid "Format template %s cannot not be read. %s"
-#~ msgstr "No se ha podido leer la plantilla de formato %s. %s"
-
-#~ msgid "No format specified for validation. Please specify one."
-#~ msgstr ""
-#~ "No se ha especificado ningún formato para la validación. Especifique uno."
-
-#~ msgid "BibSort Guide"
-#~ msgstr "Guía de Bibsort"
-
-#~ msgid "May "
-#~ msgstr "Mayo"
-
-#~ msgid ""
-#~ "The system is not attempting to send an email from %s, to %s, with body "
-#~ "%s."
-#~ msgstr ""
-#~ "El sistema no intenta enviar un mensaje de %s, a %s, amb el texto %s."
-
-#~ msgid ""
-#~ "Error in connecting to the SMPT server waiting %s seconds. Exception is "
-#~ "%s, while sending email from %s to %s with body %s."
-#~ msgstr ""
-#~ "Error en la conexión con el servidor SMPT esperando %s segundos. La "
-#~ "excepción es %s, al intentar enviar el mensaje de %s a %s con el texto %s."
-
-#~ msgid "Error in sending email from %s to %s with body %s."
-#~ msgstr "Error al enviar un mensaje de %s, a %s, con el texto %s."
-
-#~ msgid "Please enter a name for the source."
-#~ msgstr "Introduzca un nombre para la fuente."
-
-#~ msgid "Please enter a metadata prefix."
-#~ msgstr "Untroduzca el prefijo de metadatos."
-
-#~ msgid "Please enter a base url."
-#~ msgstr "Untroduzca la url base."
-
-#~ msgid "Please choose a frequency of harvesting"
-#~ msgstr "Escoja una frecuencia de recolecta"
-
-#~ msgid "You selected a postprocess mode which involves conversion."
-#~ msgstr "Ha seleccionado un modo de postproceso que involucra conversión."
-
-#~ msgid ""
-#~ "Please enter a valid name of or a full path to a BibConvert config file "
-#~ "or change postprocess mode."
-#~ msgstr ""
-#~ "Entre un nombre válido o la ruta completa a un fichero de configuración "
-#~ "BibConvert, o cambie el modo de postproceso."
-
-#~ msgid "You selected a postprocess mode which involves filtering."
-#~ msgstr "Ha seleccionado un modo de postproceso que involucra filtrage."
-
-#~ msgid ""
-#~ "Please enter a valid name of or a full path to a BibFilter script or "
-#~ "change postprocess mode."
-#~ msgstr ""
-#~ "Entre un nombre válido o la ruta completa a un script BibFilter, o cambie "
-#~ "el modo de postproceso."
-
-#~ msgid "Please choose the harvesting starting date"
-#~ msgstr "Escoja la fecha de inicio de la recolecta"
-
-#~ msgid "Record deleted from the holding pen"
-#~ msgstr "Registro eliminado de la lista de espera"
-
-#~ msgid "Configure BibSort"
-#~ msgstr "Configurar BibSort"
-
-#~ msgid "Papers written alone"
-#~ msgstr "Articles como autor individual"
-
-#~ msgid "No Collaborations"
-#~ msgstr "Sin colaboraciones"
-
-#~ msgid "Collaborations"
-#~ msgstr "Colaboraciones"
-
-#~ msgid "Frequent co-authors (excluding collaborations)"
-#~ msgstr "Coautores frecuentes (excluyendo colaboraciones)"
-
-#~ msgid "Citations%s:"
-#~ msgstr "Citaciones%s:"
-
-#~ msgid "Recompute Now!"
-#~ msgstr "¡Recalcular ahora!"
-
-#~ msgid "Untitled basket"
-#~ msgstr "Cesta sin título"
-
-#~ msgid "Untitled topic"
-#~ msgstr "Tema sin título"
-
-#~ msgid "%(x_search_for_term)s in %(x_collection_list)s"
-#~ msgstr "%(x_search_for_term)s en %(x_collection_list)s"
-
-#~ msgid "%i matching items"
-#~ msgstr "encontrados %i items"
-
-#~ msgid "This basket does not contain any records yet."
-#~ msgstr "Esta cesta no contiene aún ningún registro."
-
-#~ msgid "All your topics"
-#~ msgstr "Todos sus temas"
-
-#~ msgid "All your groups"
-#~ msgstr "Todos sus grupos"
-
-#~ msgid "All your public baskets"
-#~ msgstr "Todas sus las cestas públicas"
-
-#~ msgid "Please select a basket..."
-#~ msgstr "Seleccione una cesta..."
-
-#~ msgid "%(x_nb)i Comments for round \"%(x_name)s\""
-#~ msgstr "%(x_nb)i comentarios por la vuelta «%(x_name)s»"
-
-#~ msgid "%(x_nb)i Comments"
-#~ msgstr "%(x_nb)i comentarios"
-
-#~ msgid "Be the first to review this document.</div>"
-#~ msgstr "Sea el primero a escribir una reseña de este documento.</div>"
-
-#~ msgid "Close"
-#~ msgstr "Cerrar"
-
-#~ msgid "Open"
-#~ msgstr "Abrir"
-
-#~ msgid "Specified comment does not belong to this record"
-#~ msgstr "El comentario especificado no pertenece a este registro"
-
-#~ msgid "You do not have access to the specified comment"
-#~ msgstr "No tiene acceso al comentario especificado"
-
-#~ msgid "You cannot vote for a deleted comment"
-#~ msgstr "No puede votar a un comentario borrado"
-
-#~ msgid "You cannot report a deleted comment"
-#~ msgstr "No puede denunciar a un comentario borrado"
-
-#~ msgid "You cannot access files of a deleted comment"
-#~ msgstr "No puede acceder a los ficheros de un comentario borrado"
-
-#~ msgid "Regenerate Issue"
-#~ msgstr "Regenerar número"
-
-#~ msgid ""
-#~ "Warning: full-text search is only available for a subset of papers mostly "
-#~ "from %(x_range_from_year)s-%(x_range_to_year)s."
-#~ msgstr ""
-#~ "Atención: la búsqueda a texto completo sólo está disponible para un "
-#~ "subconjunto de documentos, mayoritariamente de entre "
-#~ "%(x_range_from_year)s-%(x_range_to_year)s."
-
-#~ msgid ""
-#~ "Warning: figure caption search is only available for a subset of papers "
-#~ "mostly from %(x_range_from_year)s-%(x_range_to_year)s."
-#~ msgstr ""
-#~ "Atención: la búsqueda en los pies de imágenes sólo está disponible para "
-#~ "un subconjunto de documentos, mayoritariamente de entre "
-#~ "%(x_range_from_year)s-%(x_range_to_year)s."
-
-#~ msgid "Your search did not match any records. Please try again."
-#~ msgstr "Su búsqueda no ha encontrado ningún registro. Vuelva a intentarlo."
-
-#~ msgid ""
-#~ "Sorry, %s does not seem to be a valid sort option. The records will not "
-#~ "be sorted."
-#~ msgstr "No es posible ordenar por %s. No se ordenarán los registros."
-
-#~ msgid "The record %d replaces it."
-#~ msgstr "El registro %d lo reemplaza."
-
-#~ msgid "Total number of citations excluding self-citations"
-#~ msgstr "Número total de citaciones excluyendo las autocitas"
-
-#~ msgid "Average citations per paper excluding self-citations"
-#~ msgstr "Media de citas por artículo excluyendo las autocitas"
-
-#~ msgid "API keys"
-#~ msgstr "Llaves API"
-
-#~ msgid "These are your current API keys"
-#~ msgstr "Estas son sus llaves API"
-
-#~ msgid "Description: "
-#~ msgstr "Descripción: "
-
-#~ msgid "Status: "
-#~ msgstr "Estado: "
-
-#~ msgid "API key"
-#~ msgstr "Llave API"
-
-#~ msgid "Delete key"
-#~ msgstr "Suprimir la llave"
-
-#~ msgid ""
-#~ "If you want to create a new API key, please enter a description for it"
-#~ msgstr "Si desea crear una nueva llave API, escriba su descripción"
-
-#~ msgid "Description for the new API key"
-#~ msgstr "Descripción para la nueva llave API"
-
-#~ msgid ""
-#~ "The description should be something meaningful for you to recognize the "
-#~ "API key"
-#~ msgstr ""
-#~ "La descripción debería ser algo significativo para que pueda reconocer la "
-#~ "nueva llave API"
-
-#~ msgid "Create new key"
-#~ msgstr "Crear una llave nueva"
-
-#~ msgid "Your nickname has not been updated"
-#~ msgstr "Su alias no se ha actualitzado."
-
-#~ msgid "Login to display all document types you can access"
-#~ msgstr ""
-#~ "Identifíquese para visualizar todos los tipos de documento a los que "
-#~ "tiene acceso"
-
-#~ msgid ""
-#~ "As a referee for this document, you may approve or reject it from the "
-#~ "submission interface"
-#~ msgstr ""
-#~ "Como revisor de este documento, puede aprobarlo o rechazarlo des de la "
-#~ "interfaz de envíos"
-
-#~ msgid "Sorry, invalid arguments"
-#~ msgstr "Argumentos no válidos"
-
-#~ msgid "Note: the requested submission has already been completed"
-#~ msgstr "El envío ya se ha completado"
-
-#~ msgid ""
-#~ "Sorry, you don't seem to have initiated a submission with the provided "
-#~ "access number"
-#~ msgstr ""
-#~ "No parece que haya iniciado un envío con el número de acceso que ha\n"
-#~ "especificado"
-
-#~ msgid "period_of_interest_from"
-#~ msgstr "período_de_interés_desde"
-
-#~ msgid "jsCal3"
-#~ msgstr "jsCal3"
-
-#~ msgid "period_of_interest_to"
-#~ msgstr "período_de_interés_hasta"
-
-#~ msgid "jsCal4"
-#~ msgstr "jsCal4"
-
-#~ msgid "jsCal1"
-#~ msgstr "jsCal1"
-
-#~ msgid "jsCal2"
-#~ msgstr "jsCal2"
-
-#~ msgid "No fulltext"
-#~ msgstr "Sin texto completo"
-
-#~ msgid "last note on"
-#~ msgstr "último comentario en"
-
-#~ msgid ", no notes yet"
-#~ msgstr ", sin notas"
-
-#~ msgid "HTML brief"
-#~ msgstr "HTML resumido"
-
-#~ msgid "You have "
-#~ msgstr "Tiene "
-
-#~ msgid "when field equals"
-#~ msgstr "cuando el campo es igual a"
-
-#~ msgid "ERROR"
-#~ msgstr "ERROR"
-
-#~ msgid "already exists."
-#~ msgstr "ya existe"
-
-#~ msgid "deleted"
-#~ msgstr "Eliminado"
-
-#~ msgid "in their rules and descriptions"
-#~ msgstr "en sus reglas y descripciones"
-
-#~ msgid "The left side of the rule "
-#~ msgstr "La parte izquierda de la regla "
-
-#~ msgid "The right side of the rule "
-#~ msgstr "La parte derecha de la regla "
-
-#~ msgid "upload is a file"
-#~ msgstr "subir este fichero"
-
-#~ msgid "No such knowledge base"
-#~ msgstr "No existe esta base de conocimiemento"
+msgstr ""
diff --git a/requirements-extras.txt b/requirements-extras.txt
index 4b48e8b6b..4f81718b3 100644
--- a/requirements-extras.txt
+++ b/requirements-extras.txt
@@ -1,11 +1,16 @@
# More requirements files are needed, since e.g gnuplot-py import numpy in its setup.py,
# which means it has to be installed in a second step.
gnuplot-py==1.8
# Following packages are optional (if you do development you probably want to install them):
pylint
http://sourceforge.net/projects/pychecker/files/pychecker/0.8.19/pychecker-0.8.19.tar.gz/download
pep8
selenium
winpdb
mock
+ipython
+cython
+nose
+nosexcover
+flake8
diff --git a/requirements.txt b/requirements.txt
index 7657813a0..ab11fac08 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,23 +1,29 @@
# Invenio requirements.
MySQL-python==1.2.4
rdflib==2.4.2
reportlab==2.5
python-dateutil<=1.9999
python-magic==0.4.2
http://www.reportlab.com/ftp/pyRXP-1.16-daily-unix.tar.gz
numpy==1.7.0
lxml==3.1.2
mechanize==0.2.5
python-Levenshtein==0.10.2
pyPdf==1.13
PyStemmer==1.3.0
https://py-editdist.googlecode.com/files/py-editdist-0.3.tar.gz
feedparser==5.1.3
BeautifulSoup==3.2.1
beautifulsoup4==4.1.3
python-twitter==0.8.7
celery==3.0.17
msgpack-python==0.3.0
pyparsing==1.5.6
git+git://github.com/romanchyla/workflow.git@e41299579501704b1486c72cc2509a9f82e63ea6
requests==1.2.1
+PyPDF2
+rauth
+unidecode
+python-openid
+qrcode
+PIL
