diff --git a/ABOUT-NLS b/ABOUT-NLS
index cfc796ed6..60f81badf 100644
--- a/ABOUT-NLS
+++ b/ABOUT-NLS
@@ -1,310 +1,330 @@
Invenio NATIVE LANGUAGE SUPPORT
===============================

About
=====

This document describes the Native Language Support (NLS) in Invenio.

Contents
========

1. Native Language Support information for administrators
2. Native Language Support information for translators
3. Native Language Support information for programmers
A. Introducing a new language
B. Integrating translation contributions

1. Native Language Support information for administrators
=========================================================

Invenio is currently available in the following languages:

   af = Afrikaans
   ar = Arabic
   bg = Bulgarian
   ca = Catalan
   cs = Czech
   de = German
   el = Greek
   en = English
   es = Spanish
   fa = Persian (Farsi)
   fr = French
   gl = Galician
   hr = Croatian
   hu = Hungarian
   it = Italian
   ja = Japanese
   ka = Georgian
   lt = Lithuanian
   no = Norwegian (Bokmål)
   pl = Polish
   pt = Portuguese
   ro = Romanian
   ru = Russian
   rw = Kinyarwanda
   sk = Slovak
   sv = Swedish
   uk = Ukrainian
   zh_CN = Chinese (China)
   zh_TW = Chinese (Taiwan)

If you are installing Invenio and you want to enable/disable some
languages, please just follow the standard installation procedure as
described in the INSTALL file.  The default language of the
installation as well as the list of all user-seen languages can be
selected in the general invenio.conf file; see the variables
CFG_SITE_LANG and CFG_SITE_LANGS.

(Please note that some runtime Invenio daemons -- such as webcoll,
responsible for updating the collection cache, running every hour or
so -- may take twice as long when twice as many user-seen languages
are selected, because they create collection cache page elements for
every user-seen language.  Therefore, if you have defined thousands
of collections and you find webcoll to be slow in your setup, you may
want to limit the list of selected languages.)

2. Native Language Support information for translators
=======================================================

If you want to contribute a translation to Invenio, then please
follow the procedure below:

 - Please check whether a po/LL.po file exists for your language,
   where LL stands for the ISO 639 language code (e.g. `el' for
   Greek).  If such a file exists, then this language is already
   supported, in which case you may want to review the existing
   translation (see below).  If the file does not exist yet, then you
   can create an empty one by copying the invenio.pot template file
   into LL.po, which you can then review as described in the next
   item.  (Please note that you would also have to translate some
   dynamic elements that are currently not located in the PO file;
   see appendix A below.)

 - Please edit LL.po to review the existing translation.  The PO file
   format is the standard GNU gettext one, so you can take advantage
   of the dedicated editing modes of programs such as GNU Emacs,
   KBabel, or poEdit.  Pay special attention to strings marked as
   fuzzy and untranslated.  (E.g. in the Emacs PO mode, press `f' and
   `u' to find them.)  Do not forget to remove fuzzy marks for
   reviewed translations.  (E.g. in the Emacs PO mode, press `TAB' to
   remove the fuzzy status of a string.)

 - After you are done with translations, please validate your file to
   make sure it does not contain formatting errors.  (E.g. in the
   Emacs PO mode, press `V' to validate the file.)
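   If you prefer to script this review step instead of using an
   editor, the following minimal sketch lists the entries that still
   need attention.  (It relies on the third-party polib library,
   which is an assumption on our part and not part of the Invenio
   tree; substitute your language code for LL as usual.)

      import polib  # third-party PO parser, not shipped with Invenio

      po = polib.pofile('po/LL.po')
      print '%d%% translated' % po.percent_translated()
      for entry in po.fuzzy_entries():         # still marked fuzzy
          print 'fuzzy:', entry.msgid
      for entry in po.untranslated_entries():  # empty msgstr
          print 'untranslated:', entry.msgid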
 - If you have access to a test installation of Invenio, you may want
   to see your modified PO file in action:

      $ cd po
      $ emacs ja.po   # edit Japanese translation
      $ make update-gmo
      $ make install
      $ sudo apachectl restart
      $ firefox http://your.site/?ln=ja   # check it out in context

   If you do not have access to a test installation, please
   contribute your PO file to the developers team (see the next step)
   and we shall install it on a test site and contact you, so that
   you will be able to check your translation in the global context
   of the application.

   (Note to developers: the ``make update-gmo'' command may need to
   be run before ``make'' if the latter fails, even if you are not
   touching translation business at all.  The reason is that the gmo
   files are not stored in CVS, while they are included in the
   distribution tarball.  So, if you are building from CVS and you do
   not have them in your tree, you may get build errors in
   directories like modules/webhelp/web/admin saying things like ``No
   rule to make target `index.bg.html'''.  The solution is to run
   ``make update-gmo'' to produce the gmo files before running
   ``make''.  End of note to developers.)

 - Please contribute your translation by emailing the file to .
   Your help is greatly appreciated and will be properly credited in
   the THANKS file.

See also the GNU gettext manual, especially chapters 5, 6 and 11.

3. Native Language Support information for programmers
=======================================================

Invenio uses the standard GNU gettext I18N and L10N philosophy.

In Python programs, all output strings should be made translatable
via the _() convention:

    from messages import gettext_set_language
    [...]
    def square(x, ln=CFG_SITE_LANG):
        _ = gettext_set_language(ln)
        print _("Hello there!")
        print _("The square of %s is %s.") % (x, x*x)

In webdoc source files, the convention is _()_:

    _(Search Help)_

Here are some tips for writing easily translatable output messages:

 - Do not cut big phrases into several pieces; the meaning may be
   harder to grasp and to render properly in another language.  Leave
   them in context.  Do not try to economize and reuse
   standalone-translated words as parts of bigger sentences; the
   translation could differ due to gender, for example.  Rather,
   define two sentences instead:

      not: _("This %s is not available.") % x,
           where x is either _("basket") or _("alert")
      but: _("This basket is not available.") and
           _("This alert is not available.")

 - If you print some value in a translatable phrase, you can use an
   unnamed %i or %s string replacement placeholder:

      yes: _("There are %i baskets.") % nb_baskets

   But, as soon as you are printing more than one value, you should
   use named string placeholders, because in some languages the parts
   of the sentence may be reversed when translated (see the sketch
   after this list):

      not: _("There are %i baskets shared by %i groups.") % \
           (nb_baskets, nb_groups)
      but: _("There are %(x_nb_baskets)s baskets shared by %(x_nb_groups)s groups.") % \
           {'x_nb_baskets': nb_baskets, 'x_nb_groups': nb_groups,}

   Please use the `x_' prefix for the named placeholder variables to
   ease the localization task of the translator.

 - Do not mix HTML presentation inside phrases.  If you want to
   reserve space for HTML markup, please use generic replacement
   placeholders as prologue and epilogue:

      not: _("This is <b>cold</b>.")
      but: _("This is %(x_fmt_open)scold%(x_fmt_close)s.")

   Ditto for links:

      not: _("This is <a href="...">homepage</a>.")
      but: _("This is %(x_url_open)shomepage%(x_url_close)s.")

 - Do not leave unnecessary things in short commonly used
   translatable expressions, such as extraneous spaces or colons
   before or after them.  Rather, put them in the business logic:

      not: _(" subject")
      but: " " + _("subject")

      not: _("Record %i:")
      but: _("Record") + "%i:" % recID

   On the other hand, in long sentences where the trailing
   punctuation has its meaning as an integral part of the label shown
   on the interface, you should keep it:

      not: _("Nearest terms in any collection are")
      but: _("Nearest terms in any collection are:")

 - Last but not least: it is best to follow the style of existing
   messages as a model, so that translators are presented with a
   homogeneous and consistently presented set of output phrases.
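To tie these tips together, here is a compact sketch in the spirit of
the document's own examples.  The helper function, its arguments, and
the surrounding markup are illustrative only, not actual Invenio
code:

    from messages import gettext_set_language

    def basket_sharing_line(nb_baskets, nb_groups, url, ln=CFG_SITE_LANG):
        """Illustrative helper: named placeholders, HTML kept outside."""
        _ = gettext_set_language(ln)
        counts = _("There are %(x_nb_baskets)s baskets shared by %(x_nb_groups)s groups.") % \
                 {'x_nb_baskets': nb_baskets,
                  'x_nb_groups': nb_groups}
        link = _("This is %(x_url_open)shomepage%(x_url_close)s.") % \
               {'x_url_open': '<a href="%s">' % url,
                'x_url_close': '</a>'}
        return counts + ' ' + link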
Appendix A. Introducing a new language
======================================

If you are introducing a new language for the first time, then please
first create and edit the PO file as described above in Section 2.
This will take care of the largest portion of the translating work,
but it is not sufficient on its own, because some dynamic elements
that are not located in PO files currently have to be translated too.
The development team can edit the respective files themselves, if the
translator sends over the following translations by email:

 - demo server name, from invenio.conf:

      Atlantis Institute of Fictive Science

 - demo collection names, from democfgdata.sql:

      Preprints
      Books
      Theses
      Reports
      Articles
      Pictures
      CERN Divisions
      CERN Experiments
      Theoretical Physics (TH)
      Experimental Physics (EP)
      Articles & Preprints
      Books & Reports
      Multimedia & Arts
      Poetry
+     Atlantis Times News
+     Atlantis Times Arts
+     Atlantis Times Science
+     Atlantis Times
+     Atlantis Institute Books
+     Atlantis Institute Articles
+     Atlantis Times Drafts
+     Notes
+     ALEPH Papers
+     ALEPH Internal Notes
+     ALEPH Theses
+     ISOLDE Papers
+     ISOLDE Internal Notes
+     Drafts
+     Videos
+     Authorities
+     People
+     Institutes
+     Journals
+     Subjects

 - demo right-hand-side portalbox, from democfgdata.sql:

      ABOUT THIS SITE
      Welcome to the demo site of Invenio, free document server
      software coming from CERN.  Please feel free to explore all
      the features of this demo site to the full.
      SEE ALSO

The development team will then edit the various files (po/LINGUAS,
config files, sql files, plenty of Makefile files, etc) as needed.

The last phase of the initial introduction of the new language is to
translate some short static HTML pages such as:

 - modules/webhelp/web/help-central.webdoc

Thanks for helping us to internationalize Invenio.

Appendix B. Integrating translation contributions
=================================================

This appendix contains some tips on integrating translated phrases
that were prepared for different Invenio releases.  It is mostly of
interest to Invenio developers or the release manager.

Imagine that we have a working translation file sk.po and that we
have received a contribution sk-CONTRIB.po that was prepared for a
previous Invenio release, so that the messages do not fully
correspond.  Moreover, another person might have worked on the sk.po
file in the meantime.  The goal is to integrate the contributions.
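(The merge steps below produce a file sk-NEW.po; once it exists, a
minimal sketch like the following can list the messages that still
carry conflict markers and hence need manual attention.  As in
Section 2, the third-party polib library is an assumption on our
part, not part of the Invenio tree:)

    import polib  # third-party PO parser, not shipped with Invenio

    # List msgids whose merged translation still contains msgcat's
    # "#-#-#-#-#" conflict markers.
    for entry in polib.pofile('sk-NEW.po'):
        if '#-#-#-#-#' in entry.msgstr:
            print entry.msgid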
Firstly, check whether the contributed file sk-CONTRIB.po was indeed
prepared for a different software release version:

    $ msgcmp --use-fuzzy --use-untranslated sk-CONTRIB.po invenio.pot

If yes, then join its translations with the ones in the latest sk.po
file:

    $ msgcat sk-CONTRIB.po sk.po > sk-TMP.po

and update the message references:

    $ msgmerge sk-TMP.po invenio.pot > sk-NEW.po

This will give the new file sk-NEW.po that should now be
msgcmp'rable to invenio.pot.

Lastly, we will have to go manually through sk-NEW.po in order to
resolve potential translation conflicts (marked via ``#-#-#-#-#''
fuzzy translations).  If the conflicts are evident and easy to
resolve, for example corrected typos, we can fix them.  If the
conflicts are of a translational nature and cannot be resolved
without consulting the translators, we should warn them about the
conflicts.  After the evident conflicts are resolved and the file
validates okay, we can rename it to sk.po and we are done.

(Note that we could have used the ``--use-first'' option to msgcat if
we were fully sure that the first translation file (sk-CONTRIB)
could be preferred as far as the quality of translation goes.)

- end of file -

diff --git a/Makefile.am b/Makefile.am
index 3f135f82d..7389e3d3f 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,673 +1,673 @@
## This file is part of Invenio.
## Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
confignicedir = $(sysconfdir)/build confignice_SCRIPTS=config.nice SUBDIRS = config EXTRA_DIST = UNINSTALL THANKS RELEASE-NOTES configure-tests.py config.nice.in \ config.rpath # current MathJax version and packages # See also modules/miscutil/lib/htmlutils.py (get_mathjax_header) MJV = 2.1 MATHJAX = http://invenio-software.org/download/mathjax/MathJax-v$(MJV).zip # current CKeditor version CKV = 3.6.6 CKEDITOR = ckeditor_$(CKV).zip # current MediaElement.js version MEV = master MEDIAELEMENT = http://github.com/johndyer/mediaelement/zipball/$(MEV) #for solrutils INVENIO_JAVA_PATH = org/invenio_software/solr solrdirname = apache-solr-3.1.0 solrdir = $(prefix)/lib/$(solrdirname) solrutils_dir=$(CURDIR)/modules/miscutil/lib/solrutils # for recline.js RECLINEVER=master CLASSPATH=.:${solrdir}/dist/solrj-lib/commons-io-1.4.jar:${solrdir}/dist/apache-solr-core-*jar:${solrdir}/contrib/jzlib-1.0.7.jar:${solrdir}/dist/apache-solr-solrj-3.1.0.jar:${solrdir}/dist/solrj-lib/slf4j-api-1.5.5.jar:${solrdir}/dist/*:${solrdir}/contrib/basic-lucene-libs/*:${solrdir}/contrib/analysis-extras/lucene-libs/*:${solrdir}/dist/solrj-lib/* # git-version-get stuff: BUILT_SOURCES = $(top_srcdir)/.version $(top_srcdir)/.version: echo $(VERSION) > $@-t && mv $@-t $@ dist-hook: echo $(VERSION) > $(distdir)/.tarball-version # Bootstrap version BOOTSTRAPV = 3.0.2 # Hogan.js version HOGANVER = 2.0.0 check-custom-templates: $(PYTHON) $(top_srcdir)/modules/webstyle/lib/template.py --check-custom-templates $(top_srcdir) kwalitee-check: @$(PYTHON) $(top_srcdir)/scripts/kwalitee.py --stats $(top_srcdir) kwalitee-check-errors-only: @$(PYTHON) $(top_srcdir)/scripts/kwalitee.py --check-errors $(top_srcdir) kwalitee-check-variables: @$(PYTHON) $(top_srcdir)/scripts/kwalitee.py --check-variables $(top_srcdir) kwalitee-check-indentation: @$(PYTHON) $(top_srcdir)/scripts/kwalitee.py --check-indentation $(top_srcdir) kwalitee-check-sql-queries: @$(PYTHON) $(top_srcdir)/scripts/kwalitee.py --check-sql $(top_srcdir) etags: \rm -f $(top_srcdir)/TAGS (cd $(top_srcdir) && find $(top_srcdir) -name "*.py" -print | xargs etags) install-data-local: for d in / /cache /cache/RTdata /log /tmp /tmp-shared /data /run /tmp-shared/bibencode/jobs/done /tmp-shared/bibedit-cache; do \ mkdir -p $(localstatedir)$$d ; \ done @echo "************************************************************" @echo "** Invenio software has been successfully installed! **" @echo "** **" @echo "** You may proceed to customizing your installation now. **" @echo "************************************************************" install-mathjax-plugin: @echo "***********************************************************" @echo "** Installing MathJax plugin, please wait... **" @echo "***********************************************************" rm -rf /tmp/invenio-mathjax-plugin mkdir /tmp/invenio-mathjax-plugin rm -fr ${prefix}/var/www/MathJax mkdir -p ${prefix}/var/www/MathJax (cd /tmp/invenio-mathjax-plugin && \ wget '$(MATHJAX)' -O mathjax.zip && \ unzip -q mathjax.zip && cd mathjax-MathJax-* && cp -r * \ ${prefix}/var/www/MathJax) rm -fr /tmp/invenio-mathjax-plugin @echo "************************************************************" @echo "** The MathJax plugin was successfully installed. **" @echo "** Please do not forget to properly set the option **" @echo "** CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS and **" @echo "** CFG_WEBSUBMIT_USE_MATHJAX in invenio.conf. 
**" @echo "************************************************************" uninstall-mathjax-plugin: @rm -rvf ${prefix}/var/www/MathJax @echo "***********************************************************" @echo "** The MathJax plugin was successfully uninstalled. **" @echo "***********************************************************" install-jscalendar-plugin: @echo "***********************************************************" @echo "** Installing jsCalendar plugin, please wait... **" @echo "***********************************************************" rm -rf /tmp/invenio-jscalendar-plugin mkdir /tmp/invenio-jscalendar-plugin (cd /tmp/invenio-jscalendar-plugin && \ wget 'http://www.dynarch.com/static/jscalendar-1.0.zip' && \ unzip -u jscalendar-1.0.zip && \ mkdir -p ${prefix}/var/www/jsCalendar && \ cp jscalendar-1.0/img.gif ${prefix}/var/www/jsCalendar/jsCalendar.gif && \ cp jscalendar-1.0/calendar.js ${prefix}/var/www/jsCalendar/ && \ cp jscalendar-1.0/calendar-setup.js ${prefix}/var/www/jsCalendar/ && \ cp jscalendar-1.0/lang/calendar-en.js ${prefix}/var/www/jsCalendar/ && \ cp jscalendar-1.0/calendar-blue.css ${prefix}/var/www/jsCalendar/) rm -fr /tmp/invenio-jscalendar-plugin @echo "***********************************************************" @echo "** The jsCalendar plugin was successfully installed. **" @echo "***********************************************************" uninstall-jscalendar-plugin: @rm -rvf ${prefix}/var/www/jsCalendar @echo "***********************************************************" @echo "** The jsCalendar plugin was successfully uninstalled. **" @echo "***********************************************************" install-js-test-driver: @echo "*******************************************************" @echo "** Installing js-test-driver, please wait... **" @echo "*******************************************************" mkdir -p $(prefix)/lib/java/js-test-driver && \ cd $(prefix)/lib/java/js-test-driver && \ wget http://invenio-software.org/download/js-test-driver/JsTestDriver-1.3.5.jar -O JsTestDriver.jar uninstall-js-test-driver: @rm -rvf ${prefix}/lib/java/js-test-driver @echo "*********************************************************" @echo "** The js-test-driver was successfully uninstalled. **" @echo "*********************************************************" install-jquery-plugins: @echo "***********************************************************" @echo "** Installing various jQuery plugins, please wait... 
**" @echo "***********************************************************" mkdir -p ${prefix}/var/www/js mkdir -p $(prefix)/var/www/css mkdir -p $(prefix)/var/www/img (cd ${prefix}/var/www/js && \ - wget http://code.jquery.com/jquery-1.7.1.min.js && \ + wget http://invenio-software.org/download/jquery/jquery-1.7.1.min.js && \ mv jquery-1.7.1.min.js jquery.min.js && \ wget http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.17/jquery-ui.min.js && \ wget http://invenio-software.org/download/jquery/v1.5/js/jquery.jeditable.mini.js && \ wget https://raw.github.com/malsup/form/master/jquery.form.js --no-check-certificate && \ wget http://jquery-multifile-plugin.googlecode.com/svn/trunk/jquery.MultiFile.pack.js && \ wget -O jquery.tablesorter.zip http://invenio-software.org/download/jquery/jquery.tablesorter.20111208.zip && \ wget http://invenio-software.org/download/jquery/uploadify-v2.1.4.zip -O uploadify.zip && \ wget http://www.datatables.net/download/build/jquery.dataTables.min.js && \ rm -rf /tmp/invenio-dt-bootstrap && \ mkdir /tmp/invenio-dt-bootstrap && \ (cd /tmp/invenio-dt-bootstrap && \ git clone 'https://github.com/DataTables/Plugins.git' && \ cp Plugins/integration/bootstrap/2/dataTables.bootstrap.css ${prefix}/var/www/css/dataTables.bootstrap.css && \ cp Plugins/integration/bootstrap/2/dataTables.bootstrap.js ${prefix}/var/www/js/dataTables.bootstrap.js && \ cp Plugins/integration/bootstrap/images/*.png ${prefix}/var/www/img/ && \ ln -s ${prefix}/var/www/img/ ${prefix}/var/www/images) && \ rm -rf /tmp/invenio-dt-bootstrap/plugins && \ mkdir /tmp/invenio-dt-bootstrap/plugins && \ (cd /tmp/invenio-dt-bootstrap/plugins && \ git clone 'https://github.com/DataTables/ColVis' && \ cp ColVis/media/css/ColVis.css ${prefix}/var/www/css/ColVis.css && \ cp ColVis/media/js/ColVis.js ${prefix}/var/www/js/ColVis.js && \ cp ColVis/media/images/*.png ${prefix}/var/www/img/) && \ cd ${prefix}/var/www/js && \ rm -rf /tmp/invenio-dt-bootstrap/plugins && \ mkdir /tmp/invenio-dt-bootstrap/plugins && \ (cd /tmp/invenio-dt-bootstrap/plugins && \ git clone 'https://github.com/LeaVerou/prism' && \ cp prism/themes/prism.css ${prefix}/var/www/css/prism.css && \ cp prism/prism.js ${prefix}/var/www/js/prism.js) && \ rm -rf /tmp/invenio-dt-bootstrap/plugins && \ wget http://invenio-software.org/download/jquery/jquery.bookmark.package-1.4.0.zip && \ unzip jquery.tablesorter.zip -d tablesorter && \ rm jquery.tablesorter.zip && \ rm -rf uploadify && \ unzip -u uploadify.zip -d uploadify && \ wget http://invenio-software.org/download/jquery/flot-0.6.zip && \ wget -O jquery-ui-timepicker-addon.js http://invenio-software.org/download/jquery/jquery-ui-timepicker-addon-1.0.3.js && \ unzip -u flot-0.6.zip && \ mv flot/jquery.flot.selection.min.js flot/jquery.flot.min.js flot/excanvas.min.js ./ && \ rm flot-0.6.zip && rm -r flot && \ mv uploadify/swfobject.js ./ && \ mv uploadify/cancel.png uploadify/uploadify.css uploadify/uploadify.allglyphs.swf uploadify/uploadify.fla uploadify/uploadify.swf ../img/ && \ mv uploadify/jquery.uploadify.v2.1.4.min.js ./jquery.uploadify.min.js && \ rm uploadify.zip && rm -r uploadify && \ wget --no-check-certificate https://github.com/douglascrockford/JSON-js/raw/master/json2.js && \ wget http://invenio-software.org/download/jquery/jquery.hotkeys-0.8.js -O jquery.hotkeys.js && \ wget http://jquery.bassistance.de/treeview/jquery.treeview.zip && \ unzip jquery.treeview.zip -d jquery-treeview && \ rm jquery.treeview.zip && \ wget 
http://invenio-software.org/download/jquery/v1.5/js/jquery.ajaxPager.js && \ unzip jquery.bookmark.package-1.4.0.zip && \ rm -f jquery.bookmark.ext.* bookmarks-big.png bookmarkBasic.html jquery.bookmark.js jquery.bookmark.pack.js && \ mv bookmarks.png ../img/ && \ mv jquery.bookmark.css ../css/ && \ rm -f jquery.bookmark.package-1.4.0.zip && \ mkdir -p ${prefix}/var/www/img && \ cd ${prefix}/var/www/img && \ wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/base/ && \ wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/smoothness/ && \ wget -r -np -nH --cut-dirs=4 -A "png,css" -P jquery-ui/themes http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/ && \ wget --no-check-certificate -O datatables_jquery-ui.css https://github.com/DataTables/DataTables/raw/master/media/css/demo_table_jui.css && \ wget http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/jquery-ui.css && \ wget http://jquery-ui.googlecode.com/svn/tags/1.8.17/demos/images/calendar.gif && \ wget -r -np -nH --cut-dirs=5 -A "png" http://jquery-ui.googlecode.com/svn/tags/1.8.17/themes/redmond/images/) @echo "***********************************************************" @echo "** The jQuery plugins were successfully installed. **" @echo "***********************************************************" uninstall-jquery-plugins: (cd ${prefix}/var/www/js && \ rm -f jquery.min.js && \ rm -f jquery.MultiFile.pack.js && \ rm -f jquery.jeditable.mini.js && \ rm -f jquery.flot.selection.min.js && \ rm -f jquery.flot.min.js && \ rm -f excanvas.min.js && \ rm -f jquery-ui-timepicker-addon.min.js && \ rm -f json2.js && \ rm -f jquery.uploadify.min.js && \ rm -rf tablesorter && \ rm -rf jquery-treeview && \ rm -f jquery.ajaxPager.js && \ rm -f jquery.form.js && \ rm -f jquery.dataTables.min.js && \ rm -f ui.core.js && \ rm -f jquery.bookmark.min.js && \ rm -f jquery.hotkeys.js && \ rm -f jquery.tablesorter.min.js && \ rm -f jquery-ui-1.7.3.custom.min.js && \ rm -f jquery.metadata.js && \ rm -f jquery-latest.js && \ rm -f jquery-ui.min.js) (cd ${prefix}/var/www/img && \ rm -f cancel.png uploadify.css uploadify.swf uploadify.allglyphs.swf uploadify.fla && \ rm -f datatables_jquery-ui.css && \ rm -f bookmarks.png) && \ (cd ${prefix}/var/www/css && \ rm -f jquery.bookmark.css) @echo "***********************************************************" @echo "** The jQuery plugins were successfully uninstalled. **" @echo "***********************************************************" install-ckeditor-plugin: @echo "***********************************************************" @echo "** Installing CKeditor plugin, please wait... **" @echo "***********************************************************" rm -rf ${prefix}/lib/python/invenio/ckeditor/ rm -rf /tmp/invenio-ckeditor-plugin mkdir /tmp/invenio-ckeditor-plugin (cd /tmp/invenio-ckeditor-plugin && \ wget 'http://invenio-software.org/download/ckeditor/$(CKEDITOR)' && \ unzip -u -d ${prefix}/var/www $(CKEDITOR)) && \ find ${prefix}/var/www/ckeditor/ -depth -name '_*' -exec rm -rf {} \; && \ find ${prefix}/var/www/ckeditor/ckeditor* -maxdepth 0 ! -name "ckeditor.js" -exec rm -r {} \; && \ touch ${prefix}/var/www/ckeditor/invenio-ckeditor-config.js && \ rm -fr /tmp/invenio-ckeditor-plugin @echo "* Installing Invenio-specific CKeditor config..."
@echo "Ignored: (cd $(top_srcdir)/modules/webstyle/etc && make install)" @echo "***********************************************************" @echo "** The CKeditor plugin was successfully installed. **" @echo "** Please do not forget to properly set the option **" @echo "** CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR in invenio.conf. **" @echo "***********************************************************" uninstall-ckeditor-plugin: @rm -rvf ${prefix}/var/www/ckeditor @rm -rvf ${prefix}/lib/python/invenio/ckeditor @echo "***********************************************************" @echo "** The CKeditor plugin was successfully uninstalled. **" @echo "***********************************************************" install-pdfa-helper-files: @echo "***********************************************************" @echo "** Installing PDF/A helper files, please wait... **" @echo "***********************************************************" wget 'http://invenio-software.org/download/invenio-demo-site-files/ISOCoatedsb.icc' -O ${prefix}/etc/websubmit/file_converter_templates/ISOCoatedsb.icc @echo "***********************************************************" @echo "** The PDF/A helper files were successfully installed. **" @echo "***********************************************************" install-mediaelement: @echo "***********************************************************" @echo "** MediaElement.js, please wait... **" @echo "***********************************************************" rm -rf /tmp/mediaelement mkdir /tmp/mediaelement wget 'http://github.com/johndyer/mediaelement/zipball/master' -O '/tmp/mediaelement/mediaelement.zip' --no-check-certificate unzip -u -d '/tmp/mediaelement' '/tmp/mediaelement/mediaelement.zip' rm -rf ${prefix}/var/www/mediaelement mkdir ${prefix}/var/www/mediaelement mv /tmp/mediaelement/johndyer-mediaelement-*/build/* ${prefix}/var/www/mediaelement rm -rf /tmp/mediaelement @echo "***********************************************************" @echo "** MediaElement.js was successfully installed. **" @echo "***********************************************************" install-bootstrap: @echo "***********************************************************" @echo "** Installing Twitter Bootstrap, please wait... 
**" @echo "***********************************************************" rm -rf /tmp/invenio-bootstrap mkdir /tmp/invenio-bootstrap (cd /tmp/invenio-bootstrap && \ wget -O bootstrap.zip 'https://github.com/twbs/bootstrap/releases/download/v${BOOTSTRAPV}/bootstrap-${BOOTSTRAPV}-dist.zip' && \ unzip -u bootstrap.zip && \ cp dist/css/bootstrap.css ${prefix}/var/www/css/bootstrap.css && \ cp dist/css/bootstrap.min.css ${prefix}/var/www/css/bootstrap.min.css && \ cp dist/css/bootstrap-theme.css ${prefix}/var/www/css/bootstrap-theme.css && \ cp dist/css/bootstrap-theme.min.css ${prefix}/var/www/css/bootstrap-theme.min.css && \ mkdir -p ${prefix}/var/www/fonts && \ cp dist/fonts/glyphicons-halflings-regular.eot ${prefix}/var/www/fonts/glyphicons-halflings-regular.eot && \ cp dist/fonts/glyphicons-halflings-regular.svg ${prefix}/var/www/fonts/glyphicons-halflings-regular.svg && \ cp dist/fonts/glyphicons-halflings-regular.ttf ${prefix}/var/www/fonts/glyphicons-halflings-regular.ttf && \ cp dist/fonts/glyphicons-halflings-regular.woff ${prefix}/var/www/fonts/glyphicons-halflings-regular.woff && \ cp dist/js/bootstrap.js ${prefix}/var/www/js/bootstrap.js && \ cp dist/js/bootstrap.min.js ${prefix}/var/www/js/bootstrap.min.js && \ wget -O bootstrap-typeahead.zip 'http://twitter.github.com/typeahead.js/releases/latest/typeahead.js.zip' && \ unzip -u bootstrap-typeahead.zip && \ cp typeahead.js/typeahead.js ${prefix}/var/www/js/typeahead.js && \ cp typeahead.js/typeahead.min.js ${prefix}/var/www/js/typeahead.min.js && \ wget -O typeahead.js-bootstrap.css 'https://raw.github.com/jharding/typeahead.js-bootstrap.css/master/typeahead.js-bootstrap.css' && \ mv typeahead.js-bootstrap.css ${prefix}/var/www/css/typeahead.js-bootstrap.css && \ rm -fr /tmp/invenio-bootstrap ) @echo "***********************************************************" @echo "** The Twitter Bootstrap was successfully installed. **" @echo "***********************************************************" uninstall-bootstrap: rm ${prefix}/var/www/css/bootstrap.css && \ rm ${prefix}/var/www/css/bootstrap.min.css && \ rm ${prefix}/var/www/css/bootstrap-theme.css && \ rm ${prefix}/var/www/css/bootstrap-theme.min.css && \ rm ${prefix}/var/www/css/typeahead.js-bootstrap.css && \ rm ${prefix}/var/www/fonts/glyphicons-halflings-regular.eot && \ rm ${prefix}/var/www/fonts/glyphicons-halflings-regular.svg && \ rm ${prefix}/var/www/fonts/glyphicons-halflings-regular.ttf && \ rm ${prefix}/var/www/fonts/glyphicons-halflings-regular.woff && \ rm ${prefix}/var/www/js/bootstrap.js && \ rm ${prefix}/var/www/js/bootstrap.min.js && \ rm ${prefix}/var/www/js/typeahead.js && \ rm ${prefix}/var/www/js/typeahead.min.js @echo "***********************************************************" @echo "** The Twitter Bootstrap was successfully uninstalled. **" @echo "***********************************************************" install-hogan-plugin: @echo "***********************************************************" @echo "** Installing Hogan.js, please wait... **" @echo "***********************************************************" rm -rf /tmp/hogan mkdir /tmp/hogan (cd /tmp/hogan && \ wget -O hogan-${HOGANVER}.js 'http://twitter.github.com/hogan.js/builds/${HOGANVER}/hogan-${HOGANVER}.js' && \ cp hogan-${HOGANVER}.js ${prefix}/var/www/js/hogan.js && \ rm -fr /tmp/hogan ) @echo "***********************************************************" @echo "** Hogan.js was successfully installed. 
**" @echo "***********************************************************" uninstall-hogan-plugin: rm ${prefix}/var/www/js/hogan.js @echo "***********************************************************" @echo "** Hogan.js was successfully uninstalled. **" @echo "***********************************************************" install-typeahead-plugin: @echo "***********************************************************" @echo "** Installing typeahead.js, please wait... **" @echo "***********************************************************" rm -rf /tmp/typeahead mkdir /tmp/typeahead (cd /tmp/typeahead && \ wget -O typeahead.min.js 'http://twitter.github.com/typeahead.js/releases/latest/typeahead.min.js' && \ cp typeahead.min.js ${prefix}/var/www/js/typeahead.min.js && \ wget -O typeahead.js-bootstrap.css 'https://raw.github.com/jharding/typeahead.js-bootstrap.css/master/typeahead.js-bootstrap.css' && \ cp typeahead.js-bootstrap.css ${prefix}/var/www/css/typeahead.js-bootstrap.css && \ rm -fr /tmp/typeahead ) @echo "***********************************************************" @echo "** typeahead.js was successfully installed. **" @echo "***********************************************************" uninstall-typeahead-plugin: rm ${prefix}/var/www/js/typeahead.min.js @echo "***********************************************************" @echo "** typeahead.js was successfully uninstalled. **" install-recline: @echo "***********************************************************" @echo "** Installing Recline JS, please wait... **" @echo "***********************************************************" rm -rf /tmp/invenio-recline mkdir /tmp/invenio-recline (cd /tmp/invenio-recline && \ wget -O recline.zip 'https://github.com/okfn/recline/archive/${RECLINEVER}.zip' && \ unzip -u recline.zip && \ rm recline.zip && \ mv *recline* recline && \ mkdir -p ${prefix}/var/www/js/recline/css && \ mkdir -p ${prefix}/var/www/js/recline/dist && \ mkdir -p ${prefix}/var/www/js/recline/src && \ mkdir -p ${prefix}/var/www/js/recline/vendor && \ cp -R recline/css ${prefix}/var/www/js/recline/ && \ cp -R recline/src ${prefix}/var/www/js/recline/ && \ cp -R recline/dist ${prefix}/var/www/js/recline/ && \ cp -R recline/vendor ${prefix}/var/www/js/recline/ && \ rm -rf /tmp/invenio-recline ) @echo "***********************************************************" @echo "** The Recline JS was successfully installed. **" @echo "***********************************************************" uninstall-recline: rm -Rf ${prefix}/var/www/js/recline @echo "***********************************************************" @echo "** The Recline JS was successfully uninstalled. **" @echo "***********************************************************" install-jquery-tokeninput: @echo "***********************************************************" @echo "** Installing JQuery Tokeninput, please wait... 
**" @echo "***********************************************************" rm -rf /tmp/jquery-tokeninput mkdir /tmp/jquery-tokeninput (cd /tmp/jquery-tokeninput && \ wget -O jquery-tokeninput-master.zip 'https://github.com/loopj/jquery-tokeninput/archive/master.zip' --no-check-certificate && \ unzip -u jquery-tokeninput-master.zip && \ cp jquery-tokeninput-master/styles/token-input-facebook.css ${prefix}/var/www/css/token-input-facebook.css && \ cp jquery-tokeninput-master/styles/token-input-mac.css ${prefix}/var/www/css/token-input-mac.css && \ cp jquery-tokeninput-master/styles/token-input.css ${prefix}/var/www/css/token-input.css && \ cp jquery-tokeninput-master/src/jquery.tokeninput.js ${prefix}/var/www/js/jquery.tokeninput.js && \ rm -fr /tmp/jquery-tokeninput ) @echo "***********************************************************" @echo "** The JQuery Tokeninput was successfully installed. **" @echo "***********************************************************" uninstall-jquery-tokeninput: rm ${prefix}/var/www/css/token-input-facebook.css && \ rm ${prefix}/var/www/css/token-input-mac.css && \ rm ${prefix}/var/www/css/token-input.css && \ rm ${prefix}/var/www/js/jquery.tokeninput.js @echo "***********************************************************" @echo "** The JQuery Tokeninput was successfully uninstalled. **" @echo "***********************************************************" install-plupload-plugin: @echo "***********************************************************" @echo "** Installing Plupload plugin, please wait... **" @echo "***********************************************************" rm -rf /tmp/plupload-plugin mkdir /tmp/plupload-plugin (cd /tmp/plupload-plugin && \ wget -O plupload-plugin.zip 'http://invenio-software.org/download/jquery/plupload-1.5.5.zip' && \ unzip -u plupload-plugin.zip && \ mkdir -p ${prefix}/var/www/js/plupload/i18n/ && \ cp -R plupload/js/jquery.plupload.queue ${prefix}/var/www/js/plupload/ && \ cp -R plupload/js/jquery.ui.plupload ${prefix}/var/www/js/plupload/ && \ cp plupload/js/plupload.browserplus.js ${prefix}/var/www/js/plupload/plupload.browserplus.js && \ cp plupload/js/plupload.flash.js ${prefix}/var/www/js/plupload/plupload.flash.js && \ cp plupload/js/plupload.flash.swf ${prefix}/var/www/js/plupload/plupload.flash.swf && \ cp plupload/js/plupload.full.js ${prefix}/var/www/js/plupload/plupload.full.js && \ cp plupload/js/plupload.gears.js ${prefix}/var/www/js/plupload/plupload.gears.js && \ cp plupload/js/plupload.html4.js ${prefix}/var/www/js/plupload/plupload.html4.js && \ cp plupload/js/plupload.html5.js ${prefix}/var/www/js/plupload/plupload.html5.js && \ cp plupload/js/plupload.js ${prefix}/var/www/js/plupload/plupload.js && \ cp plupload/js/plupload.silverlight.js ${prefix}/var/www/js/plupload/plupload.silverlight.js && \ cp plupload/js/plupload.silverlight.xap ${prefix}/var/www/js/plupload/plupload.silverlight.xap && \ cp plupload/js/i18n/*.js ${prefix}/var/www/js/plupload/i18n/ && \ rm -fr /tmp/plupload-plugin ) @echo "***********************************************************" @echo "** The Plupload plugin was successfully installed. **" @echo "***********************************************************" uninstall-plupload-plugin: rm -rf ${prefix}/var/www/js/plupload @echo "***********************************************************" @echo "** The Plupload was successfully uninstalled. 
**" @echo "***********************************************************" uninstall-pdfa-helper-files: rm -f ${prefix}/etc/websubmit/file_converter_templates/ISOCoatedsb.icc @echo "***********************************************************" @echo "** The PDF/A helper files were successfully uninstalled. **" @echo "***********************************************************" #Solrutils allows automatic installation, running and searching of an external Solr index. install-solrutils: @echo "***********************************************************" @echo "** Installing Solrutils and solr, please wait... **" @echo "***********************************************************" cd $(prefix)/lib && \ if test -d apache-solr*; then echo A solr directory already exists in `pwd` . \ Please remove it manually, if you are sure it is not needed; exit 2; fi ; \ if test -f apache-solr*; then echo solr tarball already exists in `pwd` . \ Please remove it manually.; exit 2; fi ; \ wget http://archive.apache.org/dist/lucene/solr/3.1.0/apache-solr-3.1.0.tgz && \ tar -xzf apache-solr-3.1.0.tgz && \ rm apache-solr-3.1.0.tgz cd $(solrdir)/contrib/ ;\ wget http://mirrors.ibiblio.org/pub/mirrors/maven2/com/jcraft/jzlib/1.0.7/jzlib-1.0.7.jar && \ cd $(solrdir)/contrib/ ;\ jar -xf ../example/webapps/solr.war WEB-INF/lib/lucene-core-3.1.0.jar ; \ if test -d basic-lucene-libs; then rm -rf basic-lucene-libs; fi ; \ mv WEB-INF/lib/ basic-lucene-libs ; \ cp $(solrutils_dir)/schema.xml $(solrdir)/example/solr/conf/ cp $(solrutils_dir)/solrconfig.xml $(solrdir)/example/solr/conf/ cd $(solrutils_dir) && \ javac -classpath $(CLASSPATH) -d $(solrdir)/contrib @$(solrutils_dir)/java_sources.txt && \ cd $(solrdir)/contrib/ && \ jar -cf invenio-solr.jar org/invenio_software/solr/*class update-v0.99.0-tables: cat $(top_srcdir)/invenio/legacy/miscutil/sql/tabcreate.sql | grep -v 'INSERT INTO upgrade' | ${prefix}/bin/dbexec echo "DROP TABLE IF EXISTS oaiREPOSITORY;" | ${prefix}/bin/dbexec echo "ALTER TABLE bibdoc ADD COLUMN more_info mediumblob NULL default NULL;" | ${prefix}/bin/dbexec echo "ALTER TABLE schTASK ADD COLUMN priority tinyint(4) NOT NULL default 0;" | ${prefix}/bin/dbexec echo "ALTER TABLE schTASK ADD KEY priority (priority);" | ${prefix}/bin/dbexec echo "ALTER TABLE rnkCITATIONDATA DROP PRIMARY KEY;" | ${prefix}/bin/dbexec echo "ALTER TABLE rnkCITATIONDATA ADD PRIMARY KEY (id);" | ${prefix}/bin/dbexec echo "ALTER TABLE rnkCITATIONDATA CHANGE id id mediumint(8) unsigned NOT NULL auto_increment;" | ${prefix}/bin/dbexec echo "ALTER TABLE rnkCITATIONDATA ADD UNIQUE KEY object_name (object_name);" | ${prefix}/bin/dbexec echo "ALTER TABLE sbmPARAMETERS CHANGE value value text NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE sbmAPPROVAL ADD note text NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE hstDOCUMENT CHANGE docsize docsize bigint(15) unsigned NOT NULL;" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtACTIONHISTORY CHANGE client_host client_host int(10) unsigned default NULL;" | ${prefix}/bin/dbexec update-v0.99.1-tables: @echo "Nothing to do; table structure did not change between v0.99.1 and v0.99.2." update-v0.99.2-tables: @echo "Nothing to do; table structure did not change between v0.99.2 and v0.99.3." update-v0.99.3-tables: @echo "Nothing to do; table structure did not change between v0.99.3 and v0.99.4." update-v0.99.4-tables: @echo "Nothing to do; table structure did not change between v0.99.4 and v0.99.5." 
update-v0.99.5-tables: @echo "Nothing to do; table structure did not change between v0.99.5 and v0.99.6." update-v0.99.6-tables: @echo "Nothing to do; table structure did not change between v0.99.6 and v0.99.7." update-v0.99.7-tables: @echo "Nothing to do; table structure did not change between v0.99.7 and v0.99.8." update-v0.99.8-tables: # from v0.99.8 to v1.0.0-rc0 echo "RENAME TABLE oaiARCHIVE TO oaiREPOSITORY;" | ${prefix}/bin/dbexec cat $(top_srcdir)/invenio/legacy/miscutil/sql/tabcreate.sql | grep -v 'INSERT INTO upgrade' | ${prefix}/bin/dbexec echo "INSERT INTO knwKB (id,name,description,kbtype) SELECT id,name,description,'' FROM fmtKNOWLEDGEBASES;" | ${prefix}/bin/dbexec echo "INSERT INTO knwKBRVAL (id,m_key,m_value,id_knwKB) SELECT id,m_key,m_value,id_fmtKNOWLEDGEBASES FROM fmtKNOWLEDGEBASEMAPPINGS;" | ${prefix}/bin/dbexec echo "ALTER TABLE sbmPARAMETERS CHANGE name name varchar(40) NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE bibdoc CHANGE docname docname varchar(250) COLLATE utf8_bin NOT NULL default 'file';" | ${prefix}/bin/dbexec echo "ALTER TABLE bibdoc CHANGE status status text NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE bibdoc ADD COLUMN text_extraction_date datetime NOT NULL default '0000-00-00';" | ${prefix}/bin/dbexec echo "ALTER TABLE collection DROP COLUMN restricted;" | ${prefix}/bin/dbexec echo "ALTER TABLE schTASK CHANGE host host varchar(255) NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE hstTASK CHANGE host host varchar(255) NOT NULL default '';" | ${prefix}/bin/dbexec echo "ALTER TABLE bib85x DROP INDEX kv, ADD INDEX kv (value(100));" | ${prefix}/bin/dbexec echo "UPDATE clsMETHOD SET location='http://invenio-software.org/download/invenio-demo-site-files/HEP.rdf' WHERE name='HEP' AND location='';" | ${prefix}/bin/dbexec echo "UPDATE clsMETHOD SET location='http://invenio-software.org/download/invenio-demo-site-files/NASA-subjects.rdf' WHERE name='NASA-subjects' AND location='';" | ${prefix}/bin/dbexec echo "UPDATE accACTION SET name='runoairepository', description='run oairepositoryupdater task' WHERE name='runoaiarchive';" | ${prefix}/bin/dbexec echo "UPDATE accACTION SET name='cfgoaiharvest', description='configure OAI Harvest' WHERE name='cfgbibharvest';" | ${prefix}/bin/dbexec echo "ALTER TABLE accARGUMENT CHANGE value value varchar(255);" | ${prefix}/bin/dbexec echo "UPDATE accACTION SET allowedkeywords='doctype,act,categ' WHERE name='submit';" | ${prefix}/bin/dbexec echo "INSERT INTO accARGUMENT(keyword,value) VALUES ('categ','*');" | ${prefix}/bin/dbexec echo "INSERT INTO accROLE_accACTION_accARGUMENT(id_accROLE,id_accACTION,id_accARGUMENT,argumentlistid) SELECT DISTINCT raa.id_accROLE,raa.id_accACTION,accARGUMENT.id,raa.argumentlistid FROM accROLE_accACTION_accARGUMENT as raa JOIN accACTION on id_accACTION=accACTION.id,accARGUMENT WHERE accACTION.name='submit' and accARGUMENT.keyword='categ' and accARGUMENT.value='*';" | ${prefix}/bin/dbexec echo "UPDATE accACTION SET allowedkeywords='name,with_editor_rights' WHERE name='cfgwebjournal';" | ${prefix}/bin/dbexec echo "INSERT INTO accARGUMENT(keyword,value) VALUES ('with_editor_rights','yes');" | ${prefix}/bin/dbexec echo "INSERT INTO accROLE_accACTION_accARGUMENT(id_accROLE,id_accACTION,id_accARGUMENT,argumentlistid) SELECT DISTINCT raa.id_accROLE,raa.id_accACTION,accARGUMENT.id,raa.argumentlistid FROM accROLE_accACTION_accARGUMENT as raa JOIN accACTION on id_accACTION=accACTION.id,accARGUMENT WHERE accACTION.name='cfgwebjournal' and 
accARGUMENT.keyword='with_editor_rights' and accARGUMENT.value='yes';" | ${prefix}/bin/dbexec echo "ALTER TABLE bskEXTREC CHANGE id id int(15) unsigned NOT NULL auto_increment;" | ${prefix}/bin/dbexec echo "ALTER TABLE bskEXTREC ADD external_id int(15) NOT NULL default '0';" | ${prefix}/bin/dbexec echo "ALTER TABLE bskEXTREC ADD collection_id int(15) unsigned NOT NULL default '0';" | ${prefix}/bin/dbexec echo "ALTER TABLE bskEXTREC ADD original_url text;" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD status char(2) NOT NULL default 'ok';" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD KEY status (status);" | ${prefix}/bin/dbexec echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Photos_to_Storage','Attach/edit the pictures uploaded with the \"create_photos_manager_interface()\" function');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFIELDDESC VALUES ('Upload_Photos',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a photos upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Photos_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\n\r\nfrom invenio.websubmit_functions.ParamFile import ParamFromFile\r\nfrom invenio.websubmit_functions.Move_Photos_to_Storage import read_param_file, create_photos_manager_interface, get_session_id\r\n\r\n# Retrieve session id\r\ntry:\r\n # User info is defined only in MBI/MPI actions...\r\n session_id = get_session_id(None, uid, user_info) \r\nexcept:\r\n session_id = get_session_id(req, uid, {})\r\n\r\n# Retrieve context\r\nindir = curdir.split(\'/\')[-3]\r\ndoctype = curdir.split(\'/\')[-2]\r\naccess = curdir.split(\'/\')[-1]\r\n\r\n# Get the record ID, if any\r\nsysno = ParamFromFile(\"%s/%s\" % (curdir,\'SN\')).strip()\r\n\r\n\"\"\"\r\nModify below the configuration of the photos manager interface.\r\nNote: \'can_reorder_photos\' parameter is not yet fully taken into consideration\r\n\r\nDocumentation of the function is available by running:\r\necho -e \'from invenio.websubmit_functions.Move_Photos_to_Storage import create_photos_manager_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext += create_photos_manager_interface(sysno, session_id, uid,\r\n doctype, indir, curdir, access,\r\n can_delete_photos=True,\r\n can_reorder_photos=True,\r\n can_upload_photos=True,\r\n editor_width=700,\r\n editor_height=400,\r\n initial_slider_value=100,\r\n max_slider_value=200,\r\n min_slider_value=80)','0000-00-00','0000-00-00',NULL,NULL,0);" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconsize');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFIELDDESC VALUES ('Upload_Files',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a file upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Uploaded_Files_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. 
More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\nfrom invenio.websubmit_managedocfiles import create_file_upload_interface\r\nfrom invenio.websubmit_functions.Shared_Functions import ParamFromFile\r\n\r\nindir = ParamFromFile(os.path.join(curdir, \'indir\'))\r\ndoctype = ParamFromFile(os.path.join(curdir, \'doctype\'))\r\naccess = ParamFromFile(os.path.join(curdir, \'access\'))\r\ntry:\r\n sysno = int(ParamFromFile(os.path.join(curdir, \'SN\')).strip())\r\nexcept:\r\n sysno = -1\r\nln = ParamFromFile(os.path.join(curdir, \'ln\'))\r\n\r\n\"\"\"\r\nRun the following to get the list of parameters of function \'create_file_upload_interface\':\r\necho -e \'from invenio.websubmit_managedocfiles import create_file_upload_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext = create_file_upload_interface(recid=sysno,\r\n print_outside_form_tag=False,\r\n include_headers=True,\r\n ln=ln,\r\n doctypes_and_desc=[(\'main\',\'Main document\'),\r\n (\'additional\',\'Figure, schema, etc.\')],\r\n can_revise_doctypes=[\'*\'],\r\n can_describe_doctypes=[\'main\'],\r\n can_delete_doctypes=[\'additional\'],\r\n can_rename_doctypes=[\'main\'],\r\n sbm_indir=indir, sbm_doctype=doctype, sbm_access=access)[1]\r\n','0000-00-00','0000-00-00',NULL,NULL,0);" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','forceFileRevision');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Upload_Files_Interface','Display generic interface to add/revise/delete files. To be used before function \"Move_Uploaded_Files_to_Storage\"');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Uploaded_Files_to_Storage','Attach files uploaded with \"Create_Upload_Files_Interface\"')" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','elementNameToDoctype');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createIconDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createRelatedFormats');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','iconsize');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','keepPreviousVersionDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Revised_Files_to_Storage','Revise files initially uploaded with \"Move_Files_to_Storage\"')" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxsize');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','minsize');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','doctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictions');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDeleteDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canReviseDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDescribeDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canCommentDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canKeepDoctypes');" | ${prefix}/bin/dbexec echo 
"INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canAddFormatDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRestrictDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRenameDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canNameNewFiles');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','createRelatedFormats');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','keepDefault');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','showLinks');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','fileLabel');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','filenameLabel');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','descriptionLabel');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','commentLabel');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictionLabel');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','startDoc');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','endDoc');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','defaultFilenameDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxFilesDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','iconsize');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','createIconDoctypes');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','nblength');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_nb_length');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Get_Recid','record_search_pattern');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmALLFUNCDESCR VALUES ('Move_FCKeditor_Files_to_Storage','Transfer files attached to the record with the FCKeditor');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_FCKeditor_Files_to_Storage','input_fields');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','layer');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','layer');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','switch_file');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','switch_file');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_restrictions');" | ${prefix}/bin/dbexec echo "INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_doctypes');" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD round_name varchar(255) NOT NULL default ''" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD restriction varchar(50) NOT NULL default ''" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD in_reply_to_id_cmtRECORDCOMMENT int(15) unsigned NOT NULL 
default '0'" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD KEY in_reply_to_id_cmtRECORDCOMMENT (in_reply_to_id_cmtRECORDCOMMENT);" | ${prefix}/bin/dbexec echo "ALTER TABLE bskRECORDCOMMENT ADD in_reply_to_id_bskRECORDCOMMENT int(15) unsigned NOT NULL default '0'" | ${prefix}/bin/dbexec echo "ALTER TABLE bskRECORDCOMMENT ADD KEY in_reply_to_id_bskRECORDCOMMENT (in_reply_to_id_bskRECORDCOMMENT);" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD reply_order_cached_data blob NULL default NULL;" | ${prefix}/bin/dbexec echo "ALTER TABLE bskRECORDCOMMENT ADD reply_order_cached_data blob NULL default NULL;" | ${prefix}/bin/dbexec echo "ALTER TABLE cmtRECORDCOMMENT ADD INDEX (reply_order_cached_data(40));" | ${prefix}/bin/dbexec echo "ALTER TABLE bskRECORDCOMMENT ADD INDEX (reply_order_cached_data(40));" | ${prefix}/bin/dbexec echo -e 'from invenio.legacy.webcomment.adminlib import migrate_comments_populate_threads_index;\ migrate_comments_populate_threads_index()' | $(PYTHON) echo -e 'from invenio.modules.access.firerole import repair_role_definitions;\ repair_role_definitions()' | $(PYTHON) CLEANFILES = *~ *.pyc *.tmp diff --git a/config/invenio.conf b/config/invenio.conf index 4db8c2a6d..0658846ca 100644 --- a/config/invenio.conf +++ b/config/invenio.conf @@ -1,2377 +1,2377 @@ ## This file is part of Invenio. ## Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. ################################################### ## About 'invenio.conf' and 'invenio-local.conf' ## ################################################### ## The 'invenio.conf' file contains the vanilla default configuration ## parameters of a Invenio installation, as coming out of the ## distribution. The file should be self-explanatory. Once installed ## in its usual location (usually /opt/invenio/etc), you could in ## principle go ahead and change the values according to your local ## needs, but this is not advised. ## ## If you would like to customize some of these parameters, you should ## rather create a file named 'invenio-local.conf' in the same ## directory where 'invenio.conf' lives and you should write there ## only the customizations that you want to be different from the ## vanilla defaults. 
##
## Here is a realistic, minimalist, yet production-ready example of what
## you would typically put there:
##
## $ cat /opt/invenio/etc/invenio-local.conf
## [Invenio]
## CFG_SITE_NAME = John Doe's Document Server
## CFG_SITE_NAME_INTL_fr = Serveur des Documents de John Doe
## CFG_SITE_URL = http://your.site.com
## CFG_SITE_SECURE_URL = https://your.site.com
## CFG_SITE_ADMIN_EMAIL = john.doe@your.site.com
## CFG_SITE_SUPPORT_EMAIL = john.doe@your.site.com
## CFG_WEBALERT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
## CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL = john.doe@your.site.com
## CFG_WEBCOMMENT_DEFAULT_MODERATOR = john.doe@your.site.com
## CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL = john.doe@your.site.com
## CFG_BIBCATALOG_SYSTEM_EMAIL_ADDRESS = john.doe@your.site.com
## CFG_DATABASE_HOST = localhost
## CFG_DATABASE_NAME = invenio
## CFG_DATABASE_USER = invenio
## CFG_DATABASE_PASS = my123p$ss
##
## You should override at least the parameters mentioned above and the
## parameters mentioned in the `Part 1: Essential parameters' below in
## order to define some very essential runtime parameters such as the name
## of your document server (CFG_SITE_NAME and CFG_SITE_NAME_INTL_*), the
## visible URL of your document server (CFG_SITE_URL and
## CFG_SITE_SECURE_URL), the email address of the local Invenio
## administrator, comment moderator, and alert engine
## (CFG_SITE_SUPPORT_EMAIL, CFG_SITE_ADMIN_EMAIL, etc), and last but not
## least your database credentials (CFG_DATABASE_*).
##
## The Invenio system will then read both the default invenio.conf file and
## your customized invenio-local.conf file and it will override any default
## options with the ones you have specified in your local file. This
## cascading of configuration parameters will ease your future upgrades.

[Invenio]

###################################
## Part 1: Essential parameters ##
###################################

## This part defines essential Invenio internal parameters that everybody
## should override, like the name of the server or the email address of the
## local Invenio administrator.

## CFG_DATABASE_* - specify which MySQL server to use, the name of the
## database to use, and the database access credentials.
CFG_DATABASE_TYPE = mysql
CFG_DATABASE_HOST = localhost
CFG_DATABASE_PORT = 3306
CFG_DATABASE_NAME = invenio
CFG_DATABASE_USER = invenio
CFG_DATABASE_PASS = my123p$ss

## CFG_DATABASE_SLAVE - if you use DB replication, then specify the DB
## slave address credentials. (Assuming the same access rights to the DB
## slave as to the DB master.) If you don't use DB replication, then leave
## this option blank.
CFG_DATABASE_SLAVE =

## CFG_SITE_URL - specify the URL under which your installation will be
## visible. For example, use "http://your.site.com". Do not leave a
## trailing slash.
CFG_SITE_URL = http://localhost

## CFG_SITE_SECURE_URL - specify the secure URL under which your
## installation's secure pages such as login or registration will be
## visible. For example, use "https://your.site.com". Do not leave a
## trailing slash. If you don't plan on using HTTPS, then you may leave
## this empty.
CFG_SITE_SECURE_URL = https://localhost

## CFG_SITE_NAME -- the visible name of your Invenio installation.
CFG_SITE_NAME = Atlantis Institute of Fictive Science
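## For illustration, here is a minimal Python sketch (not Invenio's actual
## configuration loader; a simplified stand-in assuming plain KEY = VALUE
## lines) of the cascading described above, where invenio-local.conf values
## override the vanilla defaults of invenio.conf:
##
##   def read_conf(path):
##       # Collect KEY = VALUE pairs, skipping comments and section headers.
##       conf = {}
##       for line in open(path):
##           line = line.strip()
##           if line and not line.startswith('#') and '=' in line:
##               key, value = line.split('=', 1)
##               conf[key.strip()] = value.strip()
##       return conf
##
##   conf = read_conf('/opt/invenio/etc/invenio.conf')
##   conf.update(read_conf('/opt/invenio/etc/invenio-local.conf'))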
## CFG_SITE_NAME_INTL -- the international versions of CFG_SITE_NAME in
## various languages. (See also CFG_SITE_LANGS below.)
CFG_SITE_NAME_INTL_en = Atlantis Institute of Fictive Science
CFG_SITE_NAME_INTL_fr = Atlantis Institut des Sciences Fictives
CFG_SITE_NAME_INTL_de = Atlantis Institut der fiktiven Wissenschaft
CFG_SITE_NAME_INTL_es = Instituto de Ciencia Ficticia Atlantis
CFG_SITE_NAME_INTL_ca = Institut Atlantis de Ciència Fictícia
CFG_SITE_NAME_INTL_pt = Instituto Atlantis de Ciência Fictícia
CFG_SITE_NAME_INTL_it = Atlantis Istituto di Scienza Fittizia
CFG_SITE_NAME_INTL_ru = Институт Фиктивных Наук Атлантиды
CFG_SITE_NAME_INTL_sk = Atlantis Inštitút Fiktívnych Vied
CFG_SITE_NAME_INTL_cs = Atlantis Institut Fiktivních Věd
CFG_SITE_NAME_INTL_no = Atlantis Institutt for Fiktiv Vitenskap
CFG_SITE_NAME_INTL_sv = Atlantis Institut för Fiktiv Vetenskap
CFG_SITE_NAME_INTL_el = Ινστιτούτο Φανταστικών Επιστημών Ατλαντίδος
CFG_SITE_NAME_INTL_uk = Інститут вигаданих наук в Атлантісі
CFG_SITE_NAME_INTL_ja = Fictive 科学のAtlantis の協会
CFG_SITE_NAME_INTL_pl = Instytut Fikcyjnej Nauki Atlantis
CFG_SITE_NAME_INTL_bg = Институт за фиктивни науки Атлантис
CFG_SITE_NAME_INTL_hr = Institut Fiktivnih Znanosti Atlantis
CFG_SITE_NAME_INTL_zh_CN = 阿特兰提斯虚拟科学学院
CFG_SITE_NAME_INTL_zh_TW = 阿特蘭提斯虛擬科學學院
CFG_SITE_NAME_INTL_hu = Kitalált Tudományok Atlantiszi Intézete
CFG_SITE_NAME_INTL_af = Atlantis Instituut van Fiktiewe Wetenskap
CFG_SITE_NAME_INTL_gl = Instituto Atlantis de Ciencia Fictive
CFG_SITE_NAME_INTL_ro = Institutul Atlantis al Ştiinţelor Fictive
CFG_SITE_NAME_INTL_rw = Atlantis Ishuri Rikuru Ry'ubuhanga
CFG_SITE_NAME_INTL_ka = ატლანტიდის ფიქტიური მეცნიერების ინსტიტუტი
CFG_SITE_NAME_INTL_lt = Fiktyvių Mokslų Institutas Atlantis
CFG_SITE_NAME_INTL_ar = معهد أطلنطيس للعلوم الافتراضية
CFG_SITE_NAME_INTL_fa = موسسه علوم تخیلی آتلانتیس

## CFG_SITE_LANG -- the default language of the interface:
CFG_SITE_LANG = en

## CFG_SITE_LANGS -- list of all languages the user interface should be
## available in, separated by commas. The order specified below will be
## respected on the interface pages. A good default would be to use the
## alphabetical order. Currently supported languages include Afrikaans,
## Arabic, Bulgarian, Catalan, Czech, German, Georgian, Greek, English,
## Spanish, Persian (Farsi), French, Croatian, Hungarian, Galician,
## Italian, Japanese, Kinyarwanda, Lithuanian, Norwegian, Polish,
## Portuguese, Romanian, Russian, Slovak, Swedish, Ukrainian, Chinese
## (China), and Chinese (Taiwan), so that the maximum you can currently
## select is
## "af,ar,bg,ca,cs,de,el,en,es,fa,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW".
CFG_SITE_LANGS = af,ar,bg,ca,cs,de,el,en,es,fa,fr,hr,gl,ka,it,rw,lt,hu,ja,no,pl,pt,ro,ru,sk,sv,uk,zh_CN,zh_TW
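## An illustrative sketch (hypothetical helper, not Invenio's actual API) of
## the usual lookup pattern for such internationalized values: take the
## language-specific variant when it exists, else fall back to the default.
##
##   CFG_SITE_NAME = "Atlantis Institute of Fictive Science"
##   CFG_SITE_NAME_INTL = {
##       'fr': "Atlantis Institut des Sciences Fictives",
##       'es': "Instituto de Ciencia Ficticia Atlantis",
##   }
##
##   def site_name(ln):
##       # Unknown or disabled languages fall back to CFG_SITE_NAME.
##       return CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)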
## CFG_EMAIL_BACKEND -- the backend to use for sending emails. Defaults to
## 'flask.ext.email.backends.smtp.Mail' if CFG_MISCUTIL_SMTP_HOST and
## CFG_MISCUTIL_SMTP_PORT are set. Possible values are:
## - flask.ext.email.backends.console.Mail
## - flask.ext.email.backends.dummy.Mail
## - flask.ext.email.backends.filebased.Mail
## - flask.ext.email.backends.locmem.Mail
## - flask.ext.email.backends.smtp.Mail
## - invenio.ext.email.backends.console_adminonly.Mail
## - invenio.ext.email.backends.smtp_adminonly.Mail
##   * sends email only to the CFG_SITE_ADMIN_EMAIL address using SMTP
CFG_EMAIL_BACKEND = flask.ext.email.backends.smtp.Mail

## CFG_SITE_SUPPORT_EMAIL -- the email address of the support team for this
## installation:
CFG_SITE_SUPPORT_EMAIL = info@invenio-software.org

## CFG_SITE_ADMIN_EMAIL -- the email address of the 'superuser' for this
## installation. Enter your email address below and login with this
## address when using Invenio administration modules. You will then be
## automatically recognized as superuser of the system.
CFG_SITE_ADMIN_EMAIL = info@invenio-software.org

## CFG_SITE_EMERGENCY_EMAIL_ADDRESSES -- list of email addresses to which
## an email should be sent in case of emergency (e.g. the bibsched queue
## has been stopped because of an error). The configuration dictionary
## allows for different recipients based on weekday and time-of-day.
## Example:
##
## CFG_SITE_EMERGENCY_EMAIL_ADDRESSES = {
##    'Sunday 22:00-06:00': '0041761111111@email2sms.foo.com',
##    '06:00-18:00': 'team-in-europe@foo.com,0041762222222@email2sms.foo.com',
##    '18:00-06:00': 'team-in-usa@foo.com',
##    '*': 'john.doe.phone@foo.com'}
##
## If you want the emergency email notifications to always go to the same
## address, just use the wildcard line in the above example.
CFG_SITE_EMERGENCY_EMAIL_ADDRESSES = {}

## CFG_SITE_ADMIN_EMAIL_EXCEPTIONS -- set this to 0 if you do not want to
## receive any captured exception via email to the CFG_SITE_ADMIN_EMAIL
## address. Captured exceptions will still be available in the
## var/log/invenio.err file. Set this to 1 if you want to receive some of
## the captured exceptions (this depends on the actual place where the
## exception is captured). Set this to 2 if you want to receive all
## captured exceptions.
CFG_SITE_ADMIN_EMAIL_EXCEPTIONS = 1

## CFG_SITE_RECORD -- what is the URI part representing detailed record
## pages? We recommend to leave the default value `record' unchanged.
CFG_SITE_RECORD = record

## CFG_SITE_SECRET_KEY -- which secret key should we use? This should be
## set to a random value per Invenio installation and must be kept secret,
## as it is used to protect against e.g. cross-site request forgery and is
## the basis of other security measures in Invenio. A random value can be
## generated using the following command:
## python -c "import os;import re;print re.escape(os.urandom(24).__repr__()[1:-1])"
CFG_SITE_SECRET_KEY =

## CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER -- set this to
## the number of seconds after which to reset the exception notification
## counter. A given repetitive exception is notified via email with a
## logarithmic strategy: the first time it is seen it is sent via email,
## then the second time, then the fourth, then the eighth and so forth. If
## the number of seconds elapsed since the last time it was notified is
## greater than CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER,
## then the internal counter is reset in order not to have exception
## notifications become more and more rare.
CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER = 14400

## CFG_CERN_SITE -- do we want to enable CERN-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_CERN_SITE = 0

## CFG_INSPIRE_SITE -- do we want to enable INSPIRE-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_INSPIRE_SITE = 0

## CFG_ADS_SITE -- do we want to enable ADS-specific code?
## Put "1" for "yes" and "0" for "no".
CFG_ADS_SITE = 0

## CFG_FLASK_CACHE_TYPE -- do we want to enable any cache engine? 'null',
## 'redis' or your own, e.g. 'invenio.cache.my_cache_engine'.
## NOTE: if you disable the cache engine it WILL affect some functionality
## such as 'search facets'.
CFG_FLASK_CACHE_TYPE = null
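## A small sketch (illustrative only, not the actual errorlib code) of the
## logarithmic notification strategy described above for
## CFG_ERRORLIB_RESET_EXCEPTION_NOTIFICATION_COUNTER_AFTER: an exception is
## mailed on its 1st, 2nd, 4th, 8th, ... occurrence, and the counter
## restarts once the exception has been quiet for the reset period.
##
##   def register_exception(count, last_seen, now, reset_after=14400):
##       if now - last_seen > reset_after:
##           count = 0                           # forget old occurrences
##       count += 1
##       notify = (count & (count - 1)) == 0     # True for powers of two
##       return count, notify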
## CFG_FLASK_DISABLED_BLUEPRINTS -- do we want to prevent certain
## blueprints from being loaded?
CFG_FLASK_DISABLED_BLUEPRINTS =

## CFG_FLASK_SERVE_STATIC_FILES -- do we want Flask to serve static files?
## Normally Apache serves static files, but during development and if you
## are using the Werkzeug standalone development server, you can set this
## flag to "1" to enable static file serving.
CFG_FLASK_SERVE_STATIC_FILES = 0

## Now you can tune whether to integrate with external authentication
## providers through the OpenID and OAuth protocols.
## The following variables let you fine-tune which authentication providers
## you want to authorize. You can override here most of the variables in
## lib/invenio/access_control_config.py. In particular you can put in
## these variables the consumer_key and consumer_secret of the desired
## services.
## Note: some providers don't supply an email address. If you choose them,
## the users will be registered with a temporary email address.

## CFG_OPENID_PROVIDERS -- comma-separated list of providers you want to
## enable through the OpenID protocol.
## E.g.: CFG_OPENID_PROVIDERS = google,yahoo,aol,wordpress,myvidoop,openid,verisign,myopenid,myspace,livejournal,blogger
CFG_OPENID_PROVIDERS =

## CFG_OAUTH1_PROVIDERS -- comma-separated list of providers you want to
## enable through the OAuth1 protocol.
## Note: OAuth1 is in general deprecated in favour of OAuth2.
## E.g.: CFG_OAUTH1_PROVIDERS = twitter,linkedin,flickr
CFG_OAUTH1_PROVIDERS =

## CFG_OAUTH2_PROVIDERS -- comma-separated list of providers you want to
## enable through the OAuth2 protocol.
## Note: if you enable the "orcid" provider, the full profile of the user
## in Orcid will be imported.
## E.g.: CFG_OAUTH2_PROVIDERS = facebook,yammer,foursquare,googleoauth2,instagram,orcid
CFG_OAUTH2_PROVIDERS =

## CFG_OPENID_CONFIGURATIONS -- mapping of special parameters to configure
## the desired OpenID providers. Use this variable to override
## out-of-the-box parameters already set in
## lib/python/invenio/access_control_config.py.
## E.g.: CFG_OPENID_CONFIGURATIONS = {'google': {
##           'identifier': 'https://www.google.com/accounts/o8/id',
##           'trust_email': True}}
CFG_OPENID_CONFIGURATIONS = {}

## CFG_OAUTH1_CONFIGURATIONS -- mapping of special parameters to configure
## the desired OAuth1 providers. Use this variable to override
## out-of-the-box parameters already set in
## lib/python/invenio/access_control_config.py.
## E.g.: CFG_OAUTH1_CONFIGURATIONS = {'linkedin': {
##           'consumer_key' : 'MY_LINKEDIN_CONSUMER_KEY',
##           'consumer_secret' : 'MY_LINKEDIN_CONSUMER_SECRET'}}
CFG_OAUTH1_CONFIGURATIONS = {}

## CFG_OAUTH2_CONFIGURATIONS -- mapping of special parameters to configure
## the desired OAuth2 providers. Use this variable to override
## out-of-the-box parameters already set in
## lib/python/invenio/access_control_config.py.
## E.g.: CFG_OAUTH2_CONFIGURATIONS = {'orcid': {
##           'consumer_key' : 'MY_ORCID_CONSUMER_KEY',
##           'consumer_secret' : 'MY_ORCID_CONSUMER_SECRET'}}
CFG_OAUTH2_CONFIGURATIONS = {}
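## The override semantics can be pictured with this minimal sketch (the
## default dict below is hypothetical, standing in for the values shipped
## in access_control_config.py):
##
##   DEFAULT_OAUTH2 = {'orcid': {'consumer_key': '', 'trust_email': True}}
##   CFG_OAUTH2_CONFIGURATIONS = {'orcid': {'consumer_key': 'MY_KEY'}}
##
##   def effective_config(provider):
##       # Site-level settings win over the packaged defaults, key by key.
##       conf = dict(DEFAULT_OAUTH2.get(provider, {}))
##       conf.update(CFG_OAUTH2_CONFIGURATIONS.get(provider, {}))
##       return conf
##
##   # effective_config('orcid')
##   # -> {'consumer_key': 'MY_KEY', 'trust_email': True}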
How do we "protect" ## email addresses from undesired automated email harvesters? This ## setting will not affect 'support' and 'admin' emails. ## NOTE: there is no ultimate solution to protect against email ## harvesting. All have drawbacks and can more or less be ## circumvented. Choose you preferred mode ([t] means "transparent" ## for the user): ## -1: hide all emails. ## [t] 0 : no protection, email returned as is. ## foo@example.com => foo@example.com ## 1 : basic email munging: replaces @ by [at] and . by [dot] ## foo@example.com => foo [at] example [dot] com ## [t] 2 : transparent name mangling: characters are replaced by ## equivalent HTML entities. ## foo@example.com => foo@example.com ## [t] 3 : javascript insertion. Requires Javascript enabled on client ## side. ## 4 : replaces @ and . characters by gif equivalents. ## foo@example.com => foo [at] example [dot] com CFG_WEBSTYLE_EMAIL_ADDRESSES_OBFUSCATION_MODE = 2 ## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXLEFTTOP -- eventual global HTML ## left top box: CFG_WEBSTYLE_CDSPAGEBOXLEFTTOP = ## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXLEFTBOTTOM -- eventual global ## HTML left bottom box: CFG_WEBSTYLE_CDSPAGEBOXLEFTBOTTOM = ## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXRIGHTTOP -- eventual global ## HTML right top box: CFG_WEBSTYLE_CDSPAGEBOXRIGHTTOP = ## (deprecated) CFG_WEBSTYLE_CDSPAGEBOXRIGHTBOTTOM -- eventual global ## HTML right bottom box: CFG_WEBSTYLE_CDSPAGEBOXRIGHTBOTTOM = ## CFG_WEBSTYLE_HTTP_STATUS_ALERT_LIST -- when certain HTTP status ## codes are raised to the WSGI handler, the corresponding exceptions ## and error messages can be sent to the system administrator for ## inspecting. This is useful to detect and correct errors. The ## variable represents a comma-separated list of HTTP statuses that ## should alert admin. Wildcards are possible. If the status is ## followed by an "r", it means that a referer is required to exist ## (useful to distinguish broken known links from URL typos when 404 ## errors are raised). CFG_WEBSTYLE_HTTP_STATUS_ALERT_LIST = 404r,400,5*,41* ## CFG_WEBSTYLE_HTTP_USE_COMPRESSION -- whether to enable deflate ## compression of your HTTP/HTTPS connections. This will affect the Apache ## configuration snippets created by inveniocfg --create-apache-conf and ## the OAI-PMH Identify response. CFG_WEBSTYLE_HTTP_USE_COMPRESSION = 0 ## CFG_WEBSTYLE_REVERSE_PROXY_IPS -- if you are setting a multinode ## environment where an HTTP proxy such as mod_proxy is sitting in ## front of the Invenio web application and is forwarding requests to ## worker nodes, set here the the list of IP addresses of the allowed ## HTTP proxies. This is needed in order to avoid IP address spoofing ## when worker nodes are also available on the public Internet and ## might receive forged HTTP requests. Only HTTP requests coming from ## the specified IP addresses will be considered as forwarded from a ## reverse proxy. E.g. set this to '123.123.123.123'. CFG_WEBSTYLE_REVERSE_PROXY_IPS = ################################## ## Part 3: WebSearch parameters ## ################################## ## This section contains some configuration parameters for WebSearch ## module. Please note that WebSearch is mostly configured on ## run-time via its WebSearch Admin web interface. The parameters ## below are the ones that you do not probably want to modify very ## often during the runtime. (Note that you may modify them ## afterwards too, though.) ## CFG_WEBSEARCH_SEARCH_CACHE_SIZE -- do you want to enable search ## caching in global search cache engine (e.g. 
##################################
## Part 3: WebSearch parameters ##
##################################

## This section contains some configuration parameters for the WebSearch
## module. Please note that WebSearch is mostly configured on run-time via
## its WebSearch Admin web interface. The parameters below are the ones
## that you probably do not want to modify very often during the runtime.
## (Note that you may modify them afterwards too, though.)

## CFG_WEBSEARCH_SEARCH_CACHE_SIZE -- do you want to enable search caching
## in the global search cache engine (e.g. Redis)? This cache is used
## mainly for the "next/previous page" functionality, but it caches
## "popular" user queries too if more than one user happens to search for
## the same thing. Note that if you disable search caching, features like
## "facets" will not work. We recommend keeping the value at 1.
CFG_WEBSEARCH_SEARCH_CACHE_SIZE = 1

## CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT -- how long should we keep a search
## result in the cache? The value should be more than 0 and the unit is
## seconds. [600 s = 10 minutes]
CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT = 600

## CFG_WEBSEARCH_FIELDS_CONVERT -- if you migrate from an older system, you
## may want to map field codes of your old system (such as 'ti') to
## Invenio/MySQL ones ("title"). Use Python dictionary syntax for the
## translation table, e.g. {'wau':'author', 'wti':'title'}. Usually you
## don't want to do that, and you would use the empty dict {}.
CFG_WEBSEARCH_FIELDS_CONVERT = {}

## CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH -- width of the search
## pattern window in the light search interface, in characters.
CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH = 60

## CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH -- width of the search
## pattern window in the simple search interface, in characters.
CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH = 40

## CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH -- width of the search
## pattern window in the advanced search interface, in characters.
CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH = 30

## CFG_WEBSEARCH_NB_RECORDS_TO_SORT -- how many records do we still want to
## sort? For higher numbers we print only a warning and won't perform any
## sorting other than the default 'latest records first', as sorting would
## be very time consuming then. We recommend a value of not more than a
## couple of thousands.
CFG_WEBSEARCH_NB_RECORDS_TO_SORT = 1000

## CFG_WEBSEARCH_CALL_BIBFORMAT -- if a record is being displayed but it
## was not preformatted in the "HTML brief" format, do we want to call
## BibFormatting on the fly? Put "1" for "yes" and "0" for "no". Note
## that "1" will display the record exactly as if it were fully
## preformatted, but it may be slow due to on-the-fly processing; "0" will
## display a default format very fast, but it may not have all the fields
## as in the fully preformatted HTML brief format. Note also that this
## option is active only for old (PHP) formats; the new (Python) formats
## are called on the fly by default anyway, since they are much faster.
## When unsure, please set "0" here.
CFG_WEBSEARCH_CALL_BIBFORMAT = 0

## CFG_WEBSEARCH_USE_ALEPH_SYSNOS -- do we want to make old SYSNOs visible
## rather than MySQL's record IDs? You may use this if you migrate from a
## different e-doc system, and you store your old system numbers into
## 970__a. Put "1" for "yes" and "0" for "no". Usually you don't want to
## do that, though.
CFG_WEBSEARCH_USE_ALEPH_SYSNOS = 0

## CFG_WEBSEARCH_I18N_LATEST_ADDITIONS -- put "1" if you want the "Latest
## Additions" in the web collection pages to show internationalized
## records. Useful only if your brief BibFormat templates contain
## internationalized strings. Otherwise put "0" in order not to slow down
## the creation of latest additions by WebColl.
CFG_WEBSEARCH_I18N_LATEST_ADDITIONS = 0

## CFG_WEBSEARCH_INSTANT_BROWSE -- the number of records to display under
## 'Latest Additions' in the web collection pages.
CFG_WEBSEARCH_INSTANT_BROWSE = 10

## CFG_WEBSEARCH_INSTANT_BROWSE_RSS -- the number of records to display in
## the RSS feed.
CFG_WEBSEARCH_INSTANT_BROWSE_RSS = 25

## CFG_WEBSEARCH_RSS_I18N_COLLECTIONS -- comma-separated list of
## collections that feature an internationalized RSS feed on their main
## search interface page created by webcoll. Other collections will have
## their RSS feed using CFG_SITE_LANG.
CFG_WEBSEARCH_RSS_I18N_COLLECTIONS =

## CFG_WEBSEARCH_RSS_TTL -- number of minutes that indicates how long a
## feed cache is valid.
CFG_WEBSEARCH_RSS_TTL = 360

## CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS -- maximum number of requests kept
## in the cache. If the cache is full, subsequent requests are not cached.
CFG_WEBSEARCH_RSS_MAX_CACHED_REQUESTS = 1000

## CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD -- up to how many author names to
## print explicitly; for more, print "et al". Note that this is used in
## the default formatting that is seldom used, as usually BibFormat defines
## all the formats. The value below is only used when BibFormat fails, for
## example.
CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD = 3

## CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS -- whether or not to show
## collection grandsons in Narrow Search boxes (sons are shown by default,
## grandsons are configurable here). Use 0 for no and 1 for yes.
CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS = 1

## CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX -- shall we create
## help links for Ellis, Nick or Ellis, Nicholas and friends when Ellis, N
## was searched for? Useful if you have one author stored in the database
## under several name formats, namely surname comma firstname and surname
## comma initial cataloging policy. Use 0 for no and 1 for yes.
CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX = 1

## CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS -- MathJax is a JavaScript library
## that renders (La)TeX mathematical formulas in the client browser. This
## parameter must contain a comma-separated list of output formats for
## which to apply the MathJax rendering, for example "hb,hd". If the list
## is empty, MathJax is disabled.
CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS =

## CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_TIMEOUT -- when searching
## external collections (e.g. SPIRES, CiteSeer, etc), how many seconds do
## we wait for a reply before abandoning?
CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_TIMEOUT = 5

## CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_MAXRESULTS -- how many results
## do we fetch?
CFG_WEBSEARCH_EXTERNAL_COLLECTION_SEARCH_MAXRESULTS = 10

## CFG_WEBSEARCH_SPLIT_BY_COLLECTION -- do we want to split the search
## results by collection or not? Use 0 for no, 1 for yes.
CFG_WEBSEARCH_SPLIT_BY_COLLECTION = 1

## CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS -- the default number of records to
## display per page in the search results pages.
CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS = 10

## CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS -- in order to limit denial of
## service attacks, the total number of records per group displayed as a
## result of a search query will be limited to this number. Only superuser
## queries will not be affected by this limit.
CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS = 200

## CFG_WEBSEARCH_SHOW_COMMENT_COUNT -- do we want to show the 'N comments'
## links on the search engine pages? (useful only when you have allowed
## commenting)
CFG_WEBSEARCH_SHOW_COMMENT_COUNT = 1
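## The default formatting behind CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD above
## can be sketched as follows (illustrative only, not the actual BibFormat
## code):
##
##   def format_authors(authors, threshold=3):
##       # Print at most `threshold` names explicitly, then "et al".
##       if len(authors) > threshold:
##           return '; '.join(authors[:threshold]) + ' et al'
##       return '; '.join(authors)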
## CFG_WEBSEARCH_SHOW_REVIEW_COUNT -- do we want to show the 'N reviews'
## links on the search engine pages? (useful only when you have allowed
## reviewing)
CFG_WEBSEARCH_SHOW_REVIEW_COUNT = 1

## CFG_WEBSEARCH_FULLTEXT_SNIPPETS_GENERATOR -- how do we want to generate
## the full-text snippets? They can be generated by 'native' Invenio or
## 'SOLR'.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS_GENERATOR = native

## CFG_WEBSEARCH_FULLTEXT_SNIPPETS -- how many full-text snippets do we
## want to display for full-text searches? If you want to specify
## different values for different document status types, please add more
## items into this dictionary. (Unless specified, the empty value will be
## used as default.) This is useful if you have restricted files of
## different types with various restrictions on what we can show.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS = {
    '': 4,
    }

## CFG_WEBSEARCH_FULLTEXT_SNIPPETS_CHARS -- what is the maximum size of a
## snippet to display around the pattern found in the full-text? If you
## want to specify different values for different document status types,
## please add more items into this dictionary. (Unless specified, the
## empty value will be used as default.) This is useful if you have
## restricted files of different types with various restrictions on what we
## can show.
CFG_WEBSEARCH_FULLTEXT_SNIPPETS_CHARS = {
    '': 100,
    }

## CFG_WEBSEARCH_WILDCARD_LIMIT -- some of the queries, wildcard queries in
## particular (ex: cern*, a*), but also regular expressions (ex: [a-z]+),
## may take a long time to respond due to the high number of hits. You can
## limit the number of terms matched by a wildcard by setting this
## variable. A negative value or zero means that none of the queries will
## be limited (which may be wanted, but is also prone to denial-of-service
## kinds of attacks).
CFG_WEBSEARCH_WILDCARD_LIMIT = 50000

## CFG_WEBSEARCH_SYNONYM_KBRS -- defines which knowledge bases are to be
## used for which index in order to provide runtime synonym lookup of
## user-supplied terms, and what massaging function should be used upon the
## search pattern before performing the KB lookup. (Can be one of `exact',
## `leading_to_comma', `leading_to_number'.)
CFG_WEBSEARCH_SYNONYM_KBRS = {
    'journal': ['SEARCH-SYNONYM-JOURNAL', 'leading_to_number'],
    }

## CFG_SOLR_URL -- optionally, you may use Solr to serve full-text queries
## and ranking. If so, please specify the URL of your Solr instance.
## Example: http://localhost:8983/solr (default solr port)
CFG_SOLR_URL =

## CFG_XAPIAN_ENABLED -- optionally, you may use Xapian to serve full-text
## queries and ranking. If so, please enable it: 1 = enabled
CFG_XAPIAN_ENABLED =

## CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT -- specify the limit up to which the
## previous/next/back hit links are to be displayed on detailed record
## pages. In order to speed up list manipulations, if a search returns
## more hits than this limit, then do not lose time calculating
## next/previous/back hits at all, but display the page directly without
## them. Note also that Invenio installations that do not like to have the
## next/previous hit link functionality can set this variable to zero and
## not see anything.
CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT = 1000
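## The per-status lookup described above for the two snippet dictionaries
## can be sketched like this (illustrative; the 'restricted' key is a
## hypothetical example):
##
##   CFG_WEBSEARCH_FULLTEXT_SNIPPETS = {'': 4, 'restricted': 2}
##
##   def snippets_for(status):
##       # Unless a status type is listed, the '' entry is the default.
##       return CFG_WEBSEARCH_FULLTEXT_SNIPPETS.get(
##           status, CFG_WEBSEARCH_FULLTEXT_SNIPPETS[''])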
## CFG_WEBSEARCH_PREV_NEXT_HIT_FOR_GUESTS -- set this to 0 if you want to
## disable the previous/next/back hit link functionality for guest users.
## Since the previous/next/back hit link functionality causes the
## allocation of a user session in the database even for guest users, it
## might be useful to be able to disable it e.g. when your site is
## bombarded by web requests (a.k.a. the Slashdot effect).
CFG_WEBSEARCH_PREV_NEXT_HIT_FOR_GUESTS = 1

## CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY -- when a record belongs to more than
## one restricted collection, if the viewrestrcoll policy is set to "ALL"
## (default), then the user must be authorized to all the restricted
## collections in order to be granted access to the specific record. If
## the policy is set to "ANY", then the user needs to be authorized to only
## one of the collections in order to be granted access to the specific
## record.
CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY = ANY

## CFG_WEBSEARCH_SPIRES_SYNTAX -- variable to configure the use of the
## SPIRES query syntax in searches. Values: 0 = SPIRES syntax is switched
## off; 1 = leading 'find' is required; 9 = leading 'find' is not required
## (leading SPIRES operator, space-operator-space, etc are also accepted).
CFG_WEBSEARCH_SPIRES_SYNTAX = 1

## CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS -- when a user search does not
## return any direct result, what do we want to display? Set to 0 in order
## to display a generic message about the search returning no hits. Set to
## 1 in order to display a list of nearest terms from the indexes that may
## match the user query. Note: this functionality may be slow, so you may
## want to disable it on bigger sites.
CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS = 1

## CFG_WEBSEARCH_DETAILED_META_FORMAT -- the output format to use for
## detailed meta tags containing metadata as configured in the tag table.
## The default output format should be 'hdm'. This format will be included
## in the header of /record/ pages. For efficiency this format should be
## pre-cached with BibReformat. See also
## CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR and CFG_WEBSEARCH_ENABLE_OPENGRAPH.
CFG_WEBSEARCH_DETAILED_META_FORMAT = hdm

## CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR -- decides if meta tags for Google
## Scholar shall be included in the detailed record page header, when using
## the standard formatting templates/elements. See also
## CFG_WEBSEARCH_DETAILED_META_FORMAT and CFG_WEBSEARCH_ENABLE_OPENGRAPH.
## When this variable is changed and the output format defined in
## CFG_WEBSEARCH_DETAILED_META_FORMAT is cached, a bibreformat must be run
## for the cached records.
CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR = True

## CFG_WEBSEARCH_ENABLE_OPENGRAPH -- decides if meta tags for the Open
## Graph protocol shall be included in the detailed record page header,
## when using the standard formatting templates/elements. See also
## CFG_WEBSEARCH_DETAILED_META_FORMAT and
## CFG_WEBSEARCH_ENABLE_GOOGLESCHOLAR. When this variable is changed and
## the output format defined in CFG_WEBSEARCH_DETAILED_META_FORMAT is
## cached, a bibreformat must be run for the cached records. Note that
## enabling Open Graph produces invalid XHTML/HTML5 markup.
CFG_WEBSEARCH_ENABLE_OPENGRAPH = False

## CFG_WEBSEARCH_CITESUMMARY_SELFCITES_THRESHOLD -- switches off
## self-citations computation if the number of records in the citesummary
## is above the threshold.
CFG_WEBSEARCH_CITESUMMARY_SELFCITES_THRESHOLD = 2000

## CFG_WEBSEARCH_COLLECTION_NAMES_SEARCH -- decides whether search for a
## collection name is enabled (1), disabled (-1), enabled only for the home
## collection (0), or enabled for all collections including those not
## attached to the collection tree (2). This requires the
## CollectionNameSearchService search service to be enabled.
CFG_WEBSEARCH_COLLECTION_NAMES_SEARCH = 0
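## The ALL/ANY semantics of CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY above reduce
## to a one-liner; a sketch with hypothetical helper names, not the actual
## WebAccess API:
##
##   def may_view_record(is_authorized, restricted_colls, policy='ANY'):
##       # is_authorized(c) -> True if the user may access collection c.
##       check = all if policy == 'ALL' else any
##       return check(is_authorized(c) for c in restricted_colls)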
#######################################
## Part 4: BibHarvest OAI parameters ##
#######################################

## This part defines parameters for the Invenio OAI gateway.
## Useful if you are running Invenio as an OAI data provider.

## CFG_OAI_ID_FIELD -- OAI identifier MARC field:
CFG_OAI_ID_FIELD = 909COo

## CFG_OAI_SET_FIELD -- OAI set MARC field:
CFG_OAI_SET_FIELD = 909COp

## CFG_OAI_PREVIOUS_SET_FIELD -- previous OAI set MARC field:
CFG_OAI_PREVIOUS_SET_FIELD = 909COq

## CFG_OAI_DELETED_POLICY -- OAI deletedrecordspolicy
## (no/transient/persistent):
CFG_OAI_DELETED_POLICY = persistent

## CFG_OAI_ID_PREFIX -- OAI identifier prefix:
CFG_OAI_ID_PREFIX = atlantis.cern.ch

## CFG_OAI_SAMPLE_IDENTIFIER -- OAI sample identifier:
CFG_OAI_SAMPLE_IDENTIFIER = oai:atlantis.cern.ch:123

## CFG_OAI_IDENTIFY_DESCRIPTION -- description for the OAI Identify verb:
CFG_OAI_IDENTIFY_DESCRIPTION = %(CFG_SITE_URL)s
    Free and unlimited use by anybody with obligation to refer to original record
    Full content, i.e. preprints may not be harvested by robots
    Submission restricted. Submitted documents are subject to approval by OAI repository admins.

## CFG_OAI_LOAD -- OAI number of records in a response:
CFG_OAI_LOAD = 500

## CFG_OAI_EXPIRE -- OAI resumptionToken expiration time:
CFG_OAI_EXPIRE = 90000

## CFG_OAI_SLEEP -- service unavailable between two consecutive requests
## for CFG_OAI_SLEEP seconds:
CFG_OAI_SLEEP = 2

## CFG_OAI_METADATA_FORMATS -- mapping between accepted metadataPrefixes
## and the corresponding output format to use, its schema and its
## metadataNamespace.
CFG_OAI_METADATA_FORMATS = {
    'marcxml': ('XOAIMARC',
                'http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd',
                'http://www.loc.gov/MARC21/slim'),
    'oai_dc': ('XOAIDC',
               'http://www.openarchives.org/OAI/1.1/dc.xsd',
               'http://purl.org/dc/elements/1.1/'),
    }

## CFG_OAI_FRIENDS -- list of OAI baseURLs of friend repositories. See:
CFG_OAI_FRIENDS = http://cds.cern.ch/oai2d,http://openaire.cern.ch/oai2d,http://export.arxiv.org/oai2

## The following subfields are a complement to
## CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG. If CFG_OAI_PROVENANCE_BASEURL_SUBFIELD
## is set for a record, then the corresponding field is considered as
## having been harvested via OAI-PMH.

## CFG_OAI_PROVENANCE_BASEURL_SUBFIELD -- baseURL of the originDescription
## of a record
CFG_OAI_PROVENANCE_BASEURL_SUBFIELD = u

## CFG_OAI_PROVENANCE_DATESTAMP_SUBFIELD -- datestamp of the
## originDescription of a record
CFG_OAI_PROVENANCE_DATESTAMP_SUBFIELD = d

## CFG_OAI_PROVENANCE_METADATANAMESPACE_SUBFIELD -- metadataNamespace of
## the originDescription of a record
CFG_OAI_PROVENANCE_METADATANAMESPACE_SUBFIELD = m

## CFG_OAI_PROVENANCE_ORIGINDESCRIPTION_SUBFIELD -- originDescription of
## the originDescription of a record
CFG_OAI_PROVENANCE_ORIGINDESCRIPTION_SUBFIELD = d

## CFG_OAI_PROVENANCE_HARVESTDATE_SUBFIELD -- harvestDate of the
## originDescription of a record
CFG_OAI_PROVENANCE_HARVESTDATE_SUBFIELD = h

## CFG_OAI_PROVENANCE_ALTERED_SUBFIELD -- altered flag of the
## originDescription of a record
CFG_OAI_PROVENANCE_ALTERED_SUBFIELD = t
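## A sketch (illustrative only, not the actual OAI gateway code) of how a
## request's metadataPrefix resolves against CFG_OAI_METADATA_FORMATS
## above; unsupported prefixes map to the standard OAI-PMH
## 'cannotDisseminateFormat' error:
##
##   def resolve_prefix(prefix):
##       try:
##           output_format, schema, namespace = CFG_OAI_METADATA_FORMATS[prefix]
##       except KeyError:
##           raise ValueError('cannotDisseminateFormat: %s' % prefix)
##       return output_format, schema, namespace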
## CFG_OAI_FAILED_HARVESTING_STOP_QUEUE -- when harvesting OAI sources
## fails, shall we report an error with the task and stop the BibSched
## queue, or simply wait for the next run of the task? A value of 0 will
## stop the task upon errors, 1 will let the queue run if the next run of
## the oaiharvest task can safely recover the failure (this means that the
## queue will stop if the task is not set to run periodically).
CFG_OAI_FAILED_HARVESTING_STOP_QUEUE = 1

## CFG_OAI_FAILED_HARVESTING_EMAILS_ADMIN -- when
## CFG_OAI_FAILED_HARVESTING_STOP_QUEUE is set to leave the queue running
## upon errors, shall we send an email to the admin to notify about the
## failure?
CFG_OAI_FAILED_HARVESTING_EMAILS_ADMIN = True

## NOTE: the following parameters are experimental
## -----------------------------------------------------------------------------

## CFG_OAI_RIGHTS_FIELD -- MARC field dedicated to storing Copyright
## information
CFG_OAI_RIGHTS_FIELD = 542__

## CFG_OAI_RIGHTS_HOLDER_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright holder information
CFG_OAI_RIGHTS_HOLDER_SUBFIELD = d

## CFG_OAI_RIGHTS_DATE_SUBFIELD -- MARC subfield dedicated to storing the
## Copyright date information
CFG_OAI_RIGHTS_DATE_SUBFIELD = g

## CFG_OAI_RIGHTS_URI_SUBFIELD -- MARC subfield dedicated to storing the
## URI (URL or URN, more detailed statement about copyright status)
## information
CFG_OAI_RIGHTS_URI_SUBFIELD = u

## CFG_OAI_RIGHTS_CONTACT_SUBFIELD -- MARC subfield dedicated to storing
## the Copyright holder contact information
CFG_OAI_RIGHTS_CONTACT_SUBFIELD = e

## CFG_OAI_RIGHTS_STATEMENT_SUBFIELD -- MARC subfield dedicated to storing
## the Copyright statement as presented on the resource
CFG_OAI_RIGHTS_STATEMENT_SUBFIELD = f

## CFG_OAI_LICENSE_FIELD -- MARC field dedicated to storing terms governing
## use and reproduction (license)
CFG_OAI_LICENSE_FIELD = 540__

## CFG_OAI_LICENSE_TERMS_SUBFIELD -- MARC subfield dedicated to storing the
## Terms governing use and reproduction, e.g. CC License
CFG_OAI_LICENSE_TERMS_SUBFIELD = a

## CFG_OAI_LICENSE_PUBLISHER_SUBFIELD -- MARC subfield dedicated to storing the
-## person or institution imposing the license (author, publisher)
+## person or institute imposing the license (author, publisher)
CFG_OAI_LICENSE_PUBLISHER_SUBFIELD = b

## CFG_OAI_LICENSE_URI_SUBFIELD -- MARC subfield dedicated to storing the
## license URI
CFG_OAI_LICENSE_URI_SUBFIELD = u
##------------------------------------------------------------------------------

###################################
## Part 5: BibDocFile parameters ##
###################################

## This section contains some configuration parameters for the BibDocFile
## module.

## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_DOCTYPES -- this is the list of
## doctypes (like 'Main' or 'Additional') and their descriptions that
## admins can choose from when adding new files via the Document File
## Manager admin interface.
## - When no value is provided, admins cannot add new files (they can only
##   revise/delete/add formats)
## - When a single value is given, it is used as the default doctype for
##   all new documents
##
## Order is relevant
## Eg:
## [('main', 'Main document'), ('additional', 'Figure, schema, etc.')]
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_DOCTYPES = [
    ('Main', 'Main document'),
    ('LaTeX', 'LaTeX'),
    ('Source', 'Source'),
    ('Additional', 'Additional File'),
    ('Audio', 'Audio file'),
    ('Video', 'Video file'),
    ('Script', 'Script'),
    ('Data', 'Data'),
    ('Figure', 'Figure'),
    ('Schema', 'Schema'),
    ('Graph', 'Graph'),
    ('Image', 'Image'),
    ('Drawing', 'Drawing'),
    ('Slides', 'Slides')]

## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_RESTRICTIONS -- this is the list of
## restrictions (like 'Restricted' or 'No Restriction') and their
## descriptions that admins can choose from when adding or revising files.
## Restrictions can then be configured at the level of WebAccess.
## - When no value is provided, no restriction is applied
## - When a single value is given, it is used as the default restriction
##   for all documents.
## - The first value of the list is used as the default restriction if the
##   user is not given the choice of the restriction. Order is relevant
##
## Eg:
## [('', 'No restriction'), ('restr', 'Restricted')]
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_RESTRICTIONS = [
    ('', 'Public'),
    ('restricted', 'Restricted')]

## CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_MISC -- set here the other default
## flags and attributes to tune the Document File Manager admin interface.
## See the docstring of bibdocfile_managedocfiles.create_file_upload_interface
## to have a description of the available parameters and their syntax. In
## general you will rarely need to change this variable.
CFG_BIBDOCFILE_DOCUMENT_FILE_MANAGER_MISC = {
    'can_revise_doctypes': ['*'],
    'can_comment_doctypes': ['*'],
    'can_describe_doctypes': ['*'],
    'can_delete_doctypes': ['*'],
    'can_keep_doctypes': ['*'],
    'can_rename_doctypes': ['*'],
    'can_add_format_to_doctypes': ['*'],
    'can_restrict_doctypes': ['*'],
    }

## CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT -- the fulltext documents
## are stored under "/opt/invenio/var/data/files/gX/Y" directories where X
## is 0,1,... and Y stands for the bibdoc ID. Thus documents Y are grouped
## into directories X and this variable indicates the maximum number of
## documents Y stored in each directory X. This limit is imposed solely
## for filesystem performance reasons in order not to have too many
## subdirectories in a given directory.
CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT = 5000

## CFG_BIBDOCFILE_ADDITIONAL_KNOWN_FILE_EXTENSIONS -- a comma-separated
## list of document extensions not listed in the Python standard mimetype
## library that should be recognized by Invenio.
CFG_BIBDOCFILE_ADDITIONAL_KNOWN_FILE_EXTENSIONS = hpg,link,lis,llb,mat,mpp,msg,docx,docm,xlsx,xlsm,xlsb,pptx,pptm,ppsx,ppsm

## CFG_BIBDOCFILE_ADDITIONAL_KNOWN_MIMETYPES -- a mapping of additional
## mimetypes that could be served or have to be recognized by this instance
## of Invenio (this is useful in order to patch old versions of the
## mimetypes Python module).
CFG_BIBDOCFILE_ADDITIONAL_KNOWN_MIMETYPES = {
    "application/xml-dtd": ".dtd",
    }
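## The grouping scheme behind CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT
## above can be sketched like this (illustrative helper, not the actual
## BibDocFile code):
##
##   CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT = 5000
##
##   def bibdoc_directory(docid, base='/opt/invenio/var/data/files'):
##       # No gX directory holds more than GROUP_LIMIT documents,
##       # e.g. docid 12345 -> .../g2/12345
##       group = docid // CFG_BIBDOCFILE_FILESYSTEM_BIBDOC_GROUP_LIMIT
##       return '%s/g%d/%d' % (base, group, docid)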
## CFG_BIBDOCFILE_DESIRED_CONVERSIONS -- a dictionary having as keys a
## format and as values the corresponding list of desired converted
## formats.
CFG_BIBDOCFILE_DESIRED_CONVERSIONS = {
    'pdf' : ('pdf;pdfa', ),
    'ps.gz' : ('pdf;pdfa', ),
    'djvu' : ('pdf', ),
    'sxw': ('doc', 'odt', 'pdf;pdfa', ),
    'docx' : ('doc', 'odt', 'pdf;pdfa', ),
    'doc' : ('odt', 'pdf;pdfa', 'docx'),
    'rtf' : ('pdf;pdfa', 'odt', ),
    'odt' : ('pdf;pdfa', 'doc', ),
    'pptx' : ('ppt', 'odp', 'pdf;pdfa', ),
    'ppt' : ('odp', 'pdf;pdfa', 'pptx'),
    'sxi': ('odp', 'pdf;pdfa', ),
    'odp' : ('pdf;pdfa', 'ppt', ),
    'xlsx' : ('xls', 'ods', 'csv'),
    'xls' : ('ods', 'csv'),
    'ods' : ('xls', 'xlsx', 'csv'),
    'sxc': ('xls', 'xlsx', 'csv'),
    'tiff' : ('pdf;pdfa', ),
    'tif' : ('pdf;pdfa', ),}

## CFG_BIBDOCFILE_USE_XSENDFILE -- if your web server supports the
## XSendfile header, you may want to enable this feature in order for
## Invenio to tell the web server to stream files for download (after
## proper authorization checks) by the web server's means. This helps to
## liberate Invenio worker processes from being busy with sending big files
## to clients. The web server will take care of that. Note: this feature
## is still somewhat experimental. Note: when enabled (set to 1), you then
## have to also regenerate the Apache vhost conf snippets
## (inveniocfg --update-config-py --create-apache-conf).
CFG_BIBDOCFILE_USE_XSENDFILE = 0

## CFG_BIBDOCFILE_MD5_CHECK_PROBABILITY -- a number between 0 and 1 that
## indicates the probability with which the MD5 checksum will be verified
## when streaming bibdocfile-managed files. (0.1 will cause the check to
## be performed once for every 10 downloads)
CFG_BIBDOCFILE_MD5_CHECK_PROBABILITY = 0.1

## CFG_BIBDOCFILE_BEST_FORMATS_TO_EXTRACT_TEXT_FROM -- a comma-separated
## list of document extensions in descending order of preference to suggest
## what is considered the best format to extract text from.
CFG_BIBDOCFILE_BEST_FORMATS_TO_EXTRACT_TEXT_FROM = ('txt', 'html', 'xml', 'odt', 'doc', 'docx', 'djvu', 'pdf', 'ps', 'ps.gz')

## CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE -- whether to use the database
## table bibdocfsinfo as reference for filesystem information. The default
## is 0. Switch this to 1 after you have run
## bibdocfile --fix-bibdocfsinfo-cache or on an empty system.
CFG_BIBDOCFILE_ENABLE_BIBDOCFSINFO_CACHE = 0

## CFG_OPENOFFICE_SERVER_HOST -- the host where an OpenOffice server is
## listening. If localhost, an OpenOffice server will be started
## automatically if it is not already running.
## Note: if you set this to an empty value, this will disable the usage of
## OpenOffice for converting documents.
## If you set this to something different than localhost, you'll have to
## take care to have an OpenOffice server running on the corresponding host
## and to install the same OpenOffice release both on the client and on the
## server side.
## In order to launch an OpenOffice server on a remote machine, just start
## the usual 'soffice' executable in this way:
## $> soffice -headless -nologo -nodefault -norestore -nofirststartwizard \
## .. -accept=socket,host=HOST,port=PORT;urp;StarOffice.ComponentContext
CFG_OPENOFFICE_SERVER_HOST = localhost

## CFG_OPENOFFICE_SERVER_PORT -- the port where an OpenOffice server is
## listening.
CFG_OPENOFFICE_SERVER_PORT = 2002
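## The probabilistic check behind CFG_BIBDOCFILE_MD5_CHECK_PROBABILITY
## above amounts to a single coin flip per download (illustrative sketch,
## not the actual streaming code):
##
##   import random
##
##   def should_verify_md5(probability=0.1):
##       # With probability 0.1, roughly one download in ten is checked.
##       return random.random() < probability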
## CFG_OPENOFFICE_USER -- the user that will be used to launch the
## OpenOffice client. It is recommended to set this to a user who doesn't
## own files, like e.g. 'nobody'. You should also authorize your Apache
## server user to be able to become this user, e.g. by adding to your
## /etc/sudoers the following line:
## "apache ALL=(nobody) NOPASSWD: ALL"
## provided that apache is the username corresponding to the Apache user.
## On some machines this might be apache2 or www-data.
CFG_OPENOFFICE_USER = nobody

#################################
## Part 6: BibIndex parameters ##
#################################

## This section contains some configuration parameters for the BibIndex
## module. Please note that BibIndex is mostly configured on run-time via
## its BibIndex Admin web interface. The parameters below are the ones
## that you probably do not want to modify very often during the runtime.

## CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY -- when fulltext indexing,
## do you want to index locally stored files only, or also external URLs?
## Use "0" to say "no" and "1" to say "yes".
CFG_BIBINDEX_FULLTEXT_INDEX_LOCAL_FILES_ONLY = 1

## (deprecated) CFG_BIBINDEX_REMOVE_STOPWORDS -- configuration moved to DB,
## variable kept here just for backwards compatibility purposes.
CFG_BIBINDEX_REMOVE_STOPWORDS =

## CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS -- characters considered as
## alphanumeric separators of word-blocks inside words. You probably don't
## want to change this.
CFG_BIBINDEX_CHARS_ALPHANUMERIC_SEPARATORS = \!\"\#\$\%\&\'\(\)\*\+\,\-\.\/\:\;\<\=\>\?\@\[\\\]\^\_\`\{\|\}\~

## CFG_BIBINDEX_CHARS_PUNCTUATION -- characters considered as punctuation
## between word-blocks inside words. You probably don't want to change
## this.
CFG_BIBINDEX_CHARS_PUNCTUATION = \.\,\:\;\?\!\"

## (deprecated) CFG_BIBINDEX_REMOVE_HTML_MARKUP -- now in database
CFG_BIBINDEX_REMOVE_HTML_MARKUP = 0

## (deprecated) CFG_BIBINDEX_REMOVE_LATEX_MARKUP -- now in database
CFG_BIBINDEX_REMOVE_LATEX_MARKUP = 0

## CFG_BIBINDEX_MIN_WORD_LENGTH -- minimum word length allowed to be added
## to the index. Terms smaller than this amount will be discarded. Useful
## to keep the database clean; however, you can safely leave this value at
## 0 for up to 1,000,000 documents.
CFG_BIBINDEX_MIN_WORD_LENGTH = 0

## CFG_BIBINDEX_URLOPENER_USERNAME and CFG_BIBINDEX_URLOPENER_PASSWORD --
## access credentials to access restricted URLs, interesting only if you
## are fulltext-indexing files located on a remote server that is only
## available via username/password. But it's probably better to handle
## this case via IP or some convention; the current scheme is mostly there
## for demo only.
CFG_BIBINDEX_URLOPENER_USERNAME = mysuperuser
CFG_BIBINDEX_URLOPENER_PASSWORD = mysuperpass

## CFG_INTBITSET_ENABLE_SANITY_CHECKS -- enable sanity checks for integers
## passed to the intbitset data structures. It is good to enable this
## during debugging and to disable this value for speed improvements.
CFG_INTBITSET_ENABLE_SANITY_CHECKS = False

## CFG_BIBINDEX_PERFORM_OCR_ON_DOCNAMES -- regular expression that matches
## docnames for which OCR is desired (set this to .* in order to enable
## OCR in general, set this to empty in order to disable it.)
CFG_BIBINDEX_PERFORM_OCR_ON_DOCNAMES = scan-.*

## CFG_BIBINDEX_SPLASH_PAGES -- key-value mapping where the key corresponds
## to a regular expression that matches the URLs of the splash pages of a
## given service and the value is a regular expression of the set of URLs
## referenced via tags in the HTML content of the splash pages that are
## referring to documents that need to be indexed.
## NOTE: for backward compatibility reasons you can set this to a simple
## regular expression that will directly be used as the unique key of the
## map, with corresponding value set to ".*" (in order to match any URL)
CFG_BIBINDEX_SPLASH_PAGES = {
    "http://documents\.cern\.ch/setlink\?.*": ".*",
    "http://ilcagenda\.linearcollider\.org/subContributionDisplay\.py\?.*|http://ilcagenda\.linearcollider\.org/contributionDisplay\.py\?.*": "http://ilcagenda\.linearcollider\.org/getFile\.py/access\?.*|http://ilcagenda\.linearcollider\.org/materialDisplay\.py\?.*",
    }
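## The splash-page mapping above can be read as "if the current URL matches
## a key, index only the referenced links matching the corresponding
## value". A sketch (hypothetical helper, not the actual BibIndex code):
##
##   import re
##
##   CFG_BIBINDEX_SPLASH_PAGES = {
##       r"http://documents\.cern\.ch/setlink\?.*": ".*",
##   }
##
##   def harvestable_links(url, links_on_page):
##       # If `url` is a known splash page, keep only the referenced links
##       # that match the associated value pattern.
##       for page_re, link_re in CFG_BIBINDEX_SPLASH_PAGES.items():
##           if re.match(page_re, url):
##               return [l for l in links_on_page if re.match(link_re, l)]
##       return []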
## CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES -- do we want the
## author word index to exclude first names to keep only last names? If
## set to True, then for the author `Bernard, Denis', only `Bernard' will
## be indexed in the word index, not `Denis'. Note that if you change this
## variable, you have to re-index the author index via
## `bibindex -w author -R'.
CFG_BIBINDEX_AUTHOR_WORD_INDEX_EXCLUDE_FIRST_NAMES = False

## (deprecated) CFG_BIBINDEX_SYNONYM_KBRS -- configuration moved to DB,
## variable kept here just for backwards compatibility purposes.
CFG_BIBINDEX_SYNONYM_KBRS = {}

#######################################
## Part 7: Access control parameters ##
#######################################

## This section contains some configuration parameters for the access
## control system. Please note that WebAccess is mostly configured on
## run-time via its WebAccess Admin web interface. The parameters below
## are the ones that you probably do not want to modify very often during
## the runtime. (If you do want to modify them during runtime, for example
## to deny access temporarily because of backups, you can edit
## access_control_config.py directly, no need to get back here and no need
## to redo the make process.)

## CFG_ACCESS_CONTROL_LEVEL_SITE -- defines how open this site is. Use 0
## for normal operation of the site, 1 for a read-only site (all write
## operations temporarily closed), 2 for a site fully closed, 3 for also
## disabling any database connection. Useful for site maintenance.
CFG_ACCESS_CONTROL_LEVEL_SITE = 0

## CFG_ACCESS_CONTROL_LEVEL_GUESTS -- guest users access policy. Use 0 to
## allow guest users, 1 not to allow them (all users must login).
CFG_ACCESS_CONTROL_LEVEL_GUESTS = 0

## CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS -- account registration and activation
## policy. When 0, users can register and accounts are automatically
## activated. When 1, users can register but the admin must activate the
## accounts. When 2, users cannot register nor update their email address,
## only the admin can register accounts. When 3, users cannot register nor
## update their email address nor password, only the admin can register
## accounts. When 4, the same as 3 applies, plus the user cannot change
## his login method. When 5, the same as 4 applies, plus info about how to
## get an account is hidden from the login page.
CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS = 0

## CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN -- limit account
## registration to certain email addresses? If wanted, give the domain
## name below, e.g. "cern.ch". If not wanted, leave it empty.
CFG_ACCESS_CONTROL_LIMIT_REGISTRATION_TO_DOMAIN =

## CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS -- send a
## notification email to the administrator when a new account is created?
## Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_ADMIN_ABOUT_NEW_ACCOUNTS = 0

## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT -- send a notification
## email to the user when a new account is created in order to verify the
## validity of the provided email address? Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_NEW_ACCOUNT = 1

## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_ACTIVATION -- send a notification
## email to the user when a new account is activated? Use 0 for no, 1 for
## yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_ACTIVATION = 0

## CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_DELETION -- send a notification
## email to the user when a new account is deleted or an account request
## is rejected? Use 0 for no, 1 for yes.
CFG_ACCESS_CONTROL_NOTIFY_USER_ABOUT_DELETION = 0

## CFG_APACHE_PASSWORD_FILE -- the file where Apache user credentials are
## stored. Must be an absolute pathname. If the value does not start with
## a slash, it is considered to be the filename of a file located under
## the prefix/var/tmp directory. This is useful for the demo site testing
## purposes. For the production site, if you plan to restrict access to
## some collections based on the Apache user authentication mechanism, you
## should put here an absolute path to your Apache password file.
CFG_APACHE_PASSWORD_FILE = demo-site-apache-user-passwords

## CFG_APACHE_GROUP_FILE -- the file where Apache user groups are defined.
## See the documentation of the preceding config variable.
CFG_APACHE_GROUP_FILE = demo-site-apache-user-groups

###################################
## Part 8: WebSession parameters ##
###################################

## This section contains some configuration parameters for tweaking session
## handling.

## CFG_WEBSESSION_EXPIRY_LIMIT_DEFAULT -- number of days after which a
## session and the corresponding cookie is considered expired.
CFG_WEBSESSION_EXPIRY_LIMIT_DEFAULT = 2

## CFG_WEBSESSION_EXPIRY_LIMIT_REMEMBER -- number of days after which a
## session and the corresponding cookie is considered expired, when the
## user has requested to permanently stay logged in.
CFG_WEBSESSION_EXPIRY_LIMIT_REMEMBER = 365

## CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS -- when a user requests a
## password reset, for how many days is the URL valid?
CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS = 3

## CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS -- when an account
## activation email was sent, for how many days is the URL valid?
CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS = 3

## CFG_WEBSESSION_NOT_CONFIRMED_EMAIL_ADDRESS_EXPIRE_IN_DAYS -- when a user
## does not confirm his email address and does not complete the
## registration, after how many days will it expire?
CFG_WEBSESSION_NOT_CONFIRMED_EMAIL_ADDRESS_EXPIRE_IN_DAYS = 10

## CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS -- when set to 1, the
## session system allocates the same uid=0 to all guest users regardless of
## where they come from. 0 allocates a unique uid to each guest.
CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS = 0

## CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS -- to prevent session cookie
## stealing, Invenio checks that the IP address of a connection is the same
## as that of the connection which created the initial session. This
## variable lets you decide how many bits should be skipped during this
## check. Set this to 0 in order to enable full IP address checking. Set
## this to 32 in order to disable IP address checking. Intermediate values
## (say 8) let you have some degree of security so that you can trust your
## local network only, while helping to solve issues related to outside
## clients that configured their browser to use a web proxy for HTTP
## connections but not for HTTPS, thus potentially having two different IP
## addresses. In general, if you use HTTPS in order to serve authenticated
## content, you can safely set CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS to 32.
CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS = 0
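## The bit-skipping comparison above can be sketched for IPv4 as follows
## (illustrative only, not the actual WebSession code):
##
##   import socket, struct
##
##   def same_client(ip_a, ip_b, skip_bits=8):
##       # Compare two IPv4 addresses, ignoring the lowest `skip_bits`
##       # bits; skip_bits=0 is a full check, skip_bits=32 disables it.
##       mask = (0xFFFFFFFF << skip_bits) & 0xFFFFFFFF
##       to_int = lambda ip: struct.unpack('!I', socket.inet_aton(ip))[0]
##       return (to_int(ip_a) & mask) == (to_int(ip_b) & mask)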
## Intermediate values (say 8) let you have some degree of security so ## that you can trust your local network only while helping to solve ## issues related to outside clients that configured their browser to ## use a web proxy for HTTP connection but not for HTTPS, thus ## potentially having two different IP addresses. In general, if use ## HTTPS in order to serve authenticated content, you can safely set ## CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS to 32. CFG_WEBSESSION_IPADDR_CHECK_SKIP_BITS = 0 ################################ ## Part 9: BibRank parameters ## ################################ ## This section contains some configuration parameters for the ranking ## system. ## CFG_BIBRANK_SHOW_READING_STATS -- do we want to show reading ## similarity stats? ('People who viewed this page also viewed') CFG_BIBRANK_SHOW_READING_STATS = 1 ## CFG_BIBRANK_SHOW_DOWNLOAD_STATS -- do we want to show the download ## similarity stats? ('People who downloaded this document also ## downloaded') CFG_BIBRANK_SHOW_DOWNLOAD_STATS = 1 ## CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS -- do we want to show download ## history graph? (0=no | 1=classic/gnuplot | 2=flot) CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS = 1 ## CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS_CLIENT_IP_DISTRIBUTION -- do we ## want to show a graph representing the distribution of client IPs ## downloading given document? (0=no | 1=classic/gnuplot | 2=flot) CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS_CLIENT_IP_DISTRIBUTION = 0 ## CFG_BIBRANK_SHOW_CITATION_LINKS -- do we want to show the 'Cited ## by' links? (useful only when you have citations in the metadata) CFG_BIBRANK_SHOW_CITATION_LINKS = 1 ## CFG_BIBRANK_SHOW_CITATION_STATS -- de we want to show citation ## stats? ('Cited by M recors', 'Co-cited with N records') CFG_BIBRANK_SHOW_CITATION_STATS = 1 ## CFG_BIBRANK_SHOW_CITATION_GRAPHS -- do we want to show citation ## history graph? (0=no | 1=classic/gnuplot | 2=flot) CFG_BIBRANK_SHOW_CITATION_GRAPHS = 1 ## CFG_BIBRANK_SELFCITES_USE_BIBAUTHORID -- use authorids for computing ## self-citations ## falls back to hashing the author string CFG_BIBRANK_SELFCITES_USE_BIBAUTHORID = 0 ## CFG_BIBRANK_SELFCITES_PRECOMPUTE -- use precomputed self-citations ## when displaying itesummary. Precomputing citations allows use to ## speed up things CFG_BIBRANK_SELFCITES_PRECOMPUTE = 0 #################################### ## Part 10: WebComment parameters ## #################################### ## This section contains some configuration parameters for the ## commenting and reviewing facilities. ## CFG_WEBCOMMENT_ALLOW_COMMENTS -- do we want to allow users write ## public comments on records? CFG_WEBCOMMENT_ALLOW_COMMENTS = 1 ## CFG_WEBCOMMENT_ALLOW_REVIEWS -- do we want to allow users write ## public reviews of records? CFG_WEBCOMMENT_ALLOW_REVIEWS = 1 ## CFG_WEBCOMMENT_ALLOW_SHORT_REVIEWS -- do we want to allow short ## reviews, that is just the attribution of stars without submitting ## detailed review text? CFG_WEBCOMMENT_ALLOW_SHORT_REVIEWS = 0 ## CFG_WEBCOMMENT_NB_REPORTS_BEFORE_SEND_EMAIL_TO_ADMIN -- if users ## report a comment to be abusive, how many they have to be before the ## site admin is alerted? CFG_WEBCOMMENT_NB_REPORTS_BEFORE_SEND_EMAIL_TO_ADMIN = 5 ## CFG_WEBCOMMENT_NB_COMMENTS_IN_DETAILED_VIEW -- how many comments do ## we display in the detailed record page upon welcome? CFG_WEBCOMMENT_NB_COMMENTS_IN_DETAILED_VIEW = 1 ## CFG_WEBCOMMENT_NB_REVIEWS_IN_DETAILED_VIEW -- how many reviews do ## we display in the detailed record page upon welcome? 
CFG_WEBCOMMENT_NB_REVIEWS_IN_DETAILED_VIEW = 1

## CFG_WEBCOMMENT_ADMIN_NOTIFICATION_LEVEL -- do we notify the site
## admin after every comment?
CFG_WEBCOMMENT_ADMIN_NOTIFICATION_LEVEL = 1

## CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_COMMENTS_IN_SECONDS -- how many
## elapsed seconds do we consider enough when checking for possible
## multiple comment submissions by a user?
CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_COMMENTS_IN_SECONDS = 20

## CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_REVIEWS_IN_SECONDS -- how many
## elapsed seconds do we consider enough when checking for possible
## multiple review submissions by a user?
CFG_WEBCOMMENT_TIMELIMIT_PROCESSING_REVIEWS_IN_SECONDS = 20

## CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR -- enable the WYSIWYG
## JavaScript-based editor when a user edits comments?
CFG_WEBCOMMENT_USE_RICH_TEXT_EDITOR = False

## CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL -- the email address from which the
## alert emails will appear to be sent:
CFG_WEBCOMMENT_ALERT_ENGINE_EMAIL = info@invenio-software.org

## CFG_WEBCOMMENT_DEFAULT_MODERATOR -- if no rules are
## specified to indicate who is the comment moderator of
## a collection, this person will be used as the default.
CFG_WEBCOMMENT_DEFAULT_MODERATOR = info@invenio-software.org

## CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS -- do we want to allow the use
## of the MathJax plugin to render LaTeX input in comments?
CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS = 1

## CFG_WEBCOMMENT_AUTHOR_DELETE_COMMENT_OPTION -- allow a comment's
## author to delete their own comment?
CFG_WEBCOMMENT_AUTHOR_DELETE_COMMENT_OPTION = 1

# CFG_WEBCOMMENT_EMAIL_REPLIES_TO -- which fields of the record define
# the email addresses that should be notified of newly submitted comments,
# and for which collection.  Use collection names as keys, and lists of
# tags as values.
CFG_WEBCOMMENT_EMAIL_REPLIES_TO = {
    'Articles': ['506__d', '506__m'],
}

# CFG_WEBCOMMENT_RESTRICTION_DATAFIELD -- which field of the record
# defines the restriction (must be linked to the WebAccess
# 'viewrestrcomment' action) to apply to newly submitted comments, and
# for which collection.  Use collection names as keys, and one tag as value.
CFG_WEBCOMMENT_RESTRICTION_DATAFIELD = {
    'Articles': '5061_a',
    'Pictures': '5061_a',
    'Theses': '5061_a',
}

# CFG_WEBCOMMENT_ROUND_DATAFIELD -- which field of the record defines
# the current round of comments, and for which collection.  Use
# collection names as keys, and one tag as value.
CFG_WEBCOMMENT_ROUND_DATAFIELD = {
    'Articles': '562__c',
    'Pictures': '562__c',
}

# CFG_WEBCOMMENT_MAX_ATTACHMENT_SIZE -- maximum file size per attached
# file, in bytes.  Choose 0 if you don't want to limit the size.
CFG_WEBCOMMENT_MAX_ATTACHMENT_SIZE = 5242880

# CFG_WEBCOMMENT_MAX_ATTACHED_FILES -- maximum number of files that can
# be attached per comment.  Choose 0 if you don't want to limit the
# number of files.  File uploads can be restricted with the action
# "attachcommentfile".
CFG_WEBCOMMENT_MAX_ATTACHED_FILES = 5

# CFG_WEBCOMMENT_MAX_COMMENT_THREAD_DEPTH -- how many levels of
# indentation discussions can have.  This can be used to ensure that
# discussions will not go into deep levels of nesting if users don't
# understand the difference between "reply to comment" and "add
# comment".  When the depth is reached, any "reply to comment" is
# conceptually converted to a "reply to thread" (i.e. a reply to this
# parent's comment).  Use -1 for no limit, 0 for unthreaded (flat)
# discussions.
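# EXAMPLE (illustrative): with CFG_WEBCOMMENT_MAX_COMMENT_THREAD_DEPTH = 1,
# a reply to a top-level comment is displayed indented under it, but a
# reply to that reply is attached to the same top-level thread instead
# of nesting one level deeper.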
CFG_WEBCOMMENT_MAX_COMMENT_THREAD_DEPTH = 1

##################################
## Part 11: BibSched parameters ##
##################################

## This section contains some configuration parameters for the
## bibliographic task scheduler.

## CFG_BIBSCHED_REFRESHTIME -- how often do we want to refresh the
## bibsched monitor? (in seconds)
CFG_BIBSCHED_REFRESHTIME = 5

## CFG_BIBSCHED_LOG_PAGER -- what pager to use to view bibsched task
## logs?
CFG_BIBSCHED_LOG_PAGER = /usr/bin/less

## CFG_BIBSCHED_EDITOR -- what editor to use to edit the MARCXML
## code of locked records?
CFG_BIBSCHED_EDITOR = /usr/bin/vim

## CFG_BIBSCHED_GC_TASKS_OLDER_THAN -- after how many days should the
## garbage collection of the BibSched queue be performed
## (i.e. removing tasks or moving them to the archive)?
CFG_BIBSCHED_GC_TASKS_OLDER_THAN = 30

## CFG_BIBSCHED_GC_TASKS_TO_REMOVE -- list of BibTasks that can be safely
## removed from the BibSched queue once they are DONE.
CFG_BIBSCHED_GC_TASKS_TO_REMOVE = bibindex,bibreformat,webcoll,bibrank,inveniogc

## CFG_BIBSCHED_GC_TASKS_TO_ARCHIVE -- list of BibTasks that should be safely
## archived out of the BibSched queue once they are DONE.
CFG_BIBSCHED_GC_TASKS_TO_ARCHIVE = bibupload,oairepositoryupdater

## CFG_BIBSCHED_MAX_NUMBER_CONCURRENT_TASKS -- maximum number of BibTasks
## that can run concurrently.
## NOTE: concurrent tasks are still considered an experimental
## feature.  Please keep this value set to 1 on production environments.
CFG_BIBSCHED_MAX_NUMBER_CONCURRENT_TASKS = 1

## CFG_BIBSCHED_PROCESS_USER -- bibsched and bibtask processes must
## usually run under the same identity as the Apache web server
## process in order to share proper file read/write privileges.  If
## you want to force some other bibsched/bibtask user, e.g. because
## you are using a local `invenio' user that belongs to your
## `www-data' Apache user group and so shares writing rights with your
## Apache web server process in this way, then please set its username
## identity here.  Otherwise we shall check whether your
## bibsched/bibtask processes are run under the same identity as your
## Apache web server process (in which case you can leave the default
## empty value here).
CFG_BIBSCHED_PROCESS_USER =

## CFG_BIBSCHED_NODE_TASKS -- specific nodes may be configured to
## run only specific tasks; if you want this, then this variable is a
## dictionary of the form {'hostname1': ['task1', 'task2']}.  The
## default is that any node can run any task.
CFG_BIBSCHED_NODE_TASKS = {}
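## EXAMPLE (illustrative; the hostnames below are hypothetical): to
## dedicate one node to indexing and collection updates and another one
## to uploads, one might use:
## CFG_BIBSCHED_NODE_TASKS = {'node1.example.org': ['bibindex', 'webcoll'],
##                            'node2.example.org': ['bibupload']}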
## CFG_BIBSCHED_MAX_ARCHIVED_ROWS_DISPLAY -- maximum number of archived
## tasks displayed in the BibSched monitor.
## CFG_BIBSCHED_MAX_ARCHIVED_ROWS_DISPLAY = 500

###################################
## Part 12: WebBasket parameters ##
###################################

## CFG_WEBBASKET_MAX_NUMBER_OF_DISPLAYED_BASKETS -- a safety limit on
## the maximum number of displayed baskets.
CFG_WEBBASKET_MAX_NUMBER_OF_DISPLAYED_BASKETS = 20

## CFG_WEBBASKET_USE_RICH_TEXT_EDITOR -- enable the WYSIWYG
## JavaScript-based editor when a user edits comments in WebBasket?
CFG_WEBBASKET_USE_RICH_TEXT_EDITOR = False

##################################
## Part 13: WebAlert parameters ##
##################################

## This section contains some configuration parameters for the
## automatic email notification alert system.

## CFG_WEBALERT_ALERT_ENGINE_EMAIL -- the email address from which the
## alert emails will appear to be sent:
CFG_WEBALERT_ALERT_ENGINE_EMAIL = info@invenio-software.org

## CFG_WEBALERT_MAX_NUM_OF_RECORDS_IN_ALERT_EMAIL -- how many records
## at most do we send in an outgoing alert email?
CFG_WEBALERT_MAX_NUM_OF_RECORDS_IN_ALERT_EMAIL = 20

## CFG_WEBALERT_MAX_NUM_OF_CHARS_PER_LINE_IN_ALERT_EMAIL -- number of
## characters per line in an outgoing alert email.
CFG_WEBALERT_MAX_NUM_OF_CHARS_PER_LINE_IN_ALERT_EMAIL = 72

## CFG_WEBALERT_SEND_EMAIL_NUMBER_OF_TRIES -- when sending alert
## emails fails, how many times do we retry?
CFG_WEBALERT_SEND_EMAIL_NUMBER_OF_TRIES = 3

## CFG_WEBALERT_SEND_EMAIL_SLEEPTIME_BETWEEN_TRIES -- when sending
## alert emails fails, what is the sleep time between retries? (in
## seconds)
CFG_WEBALERT_SEND_EMAIL_SLEEPTIME_BETWEEN_TRIES = 300

####################################
## Part 14: WebMessage parameters ##
####################################

## CFG_WEBMESSAGE_MAX_SIZE_OF_MESSAGE -- how large a web message do we
## allow?
CFG_WEBMESSAGE_MAX_SIZE_OF_MESSAGE = 20000

## CFG_WEBMESSAGE_MAX_NB_OF_MESSAGES -- how many messages do we allow
## in a regular user's inbox?
CFG_WEBMESSAGE_MAX_NB_OF_MESSAGES = 30

## CFG_WEBMESSAGE_DAYS_BEFORE_DELETE_ORPHANS -- after how many days do
## we delete orphaned messages?
CFG_WEBMESSAGE_DAYS_BEFORE_DELETE_ORPHANS = 60

##################################
## Part 15: MiscUtil parameters ##
##################################

## CFG_MISCUTIL_SQL_USE_SQLALCHEMY -- whether to use SQLAlchemy.pool
## in the DB engine of Invenio.  It is okay to enable this flag
## even if you have not installed SQLAlchemy.  Note that Invenio will
## lose some performance if this option is enabled.
CFG_MISCUTIL_SQL_USE_SQLALCHEMY = False

## CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT -- how many queries can we run
## inside run_sql_many() in one SQL statement?  The limit value
## depends on MySQL's max_allowed_packet configuration.
CFG_MISCUTIL_SQL_RUN_SQL_MANY_LIMIT = 10000

## CFG_MISCUTIL_SMTP_HOST -- which server to use as the outgoing mail
## server to send emails generated by the system, for example concerning
## submissions or email notification alerts.
CFG_MISCUTIL_SMTP_HOST = localhost

## CFG_MISCUTIL_SMTP_PORT -- which port to use on the outgoing mail server
## defined in the previous step.
CFG_MISCUTIL_SMTP_PORT = 25

## CFG_MISCUTIL_SMTP_USER -- which username to use on the outgoing mail server
## defined in CFG_MISCUTIL_SMTP_HOST.  If either CFG_MISCUTIL_SMTP_USER or
## CFG_MISCUTIL_SMTP_PASS is empty, Invenio won't attempt authentication.
CFG_MISCUTIL_SMTP_USER =

## CFG_MISCUTIL_SMTP_PASS -- which password to use on the outgoing mail
## server defined in CFG_MISCUTIL_SMTP_HOST.  If either CFG_MISCUTIL_SMTP_USER
## or CFG_MISCUTIL_SMTP_PASS is empty, Invenio won't attempt authentication.
CFG_MISCUTIL_SMTP_PASS =

## CFG_MISCUTIL_SMTP_TLS -- whether to use a TLS (secure) connection when
## talking to the SMTP server defined in CFG_MISCUTIL_SMTP_HOST.
CFG_MISCUTIL_SMTP_TLS = False
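## EXAMPLE (illustrative; the host and credentials below are
## hypothetical): an authenticated TLS setup using the variables above
## could look like:
## CFG_MISCUTIL_SMTP_HOST = smtp.example.org
## CFG_MISCUTIL_SMTP_PORT = 587
## CFG_MISCUTIL_SMTP_USER = invenio
## CFG_MISCUTIL_SMTP_PASS = secret
## CFG_MISCUTIL_SMTP_TLS = True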
## CFG_MISCUTIL_DEFAULT_PROCESS_TIMEOUT -- the default number of seconds
## after which a process launched through shellutils.run_process_with_timeout
## will be killed.  This is useful to catch runaway processes.
CFG_MISCUTIL_DEFAULT_PROCESS_TIMEOUT = 300

## CFG_MATHJAX_HOSTING -- if you plan to use MathJax to display TeX
## formulas on HTML web pages, you can specify whether you wish to use
## 'local' hosting or 'cdn' hosting of MathJax libraries.  (If set to
## 'local', you have to run 'make install-mathjax-plugin' as described
## in the INSTALL guide.)  If set to 'local', users will use your site
## to download MathJax sources.  If set to 'cdn', users will use
## centralized MathJax CDN servers instead.  Please note that using
-## CDN is suitable only for small institutions or for MathJax
+## CDN is suitable only for small institutes or for MathJax
## sponsors; see the MathJax website for more details.  (Also, please
## note that if you plan to use MathJax on your site, you have to
## adapt the CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS and
## CFG_WEBCOMMENT_USE_MATHJAX_IN_COMMENTS configuration variables
## elsewhere in this file.)
CFG_MATHJAX_HOSTING = local

#################################
## Part 16: BibEdit parameters ##
#################################

## CFG_BIBEDIT_TIMEOUT -- when a user edits a record, the record is
## locked to prevent other users from editing it at the same time.
## After how many seconds of inactivity will the locked record become
## free again for other people to edit?
CFG_BIBEDIT_TIMEOUT = 3600

## CFG_BIBEDIT_LOCKLEVEL -- when a user tries to edit a record for which
## there is a pending bibupload task in the queue, this shouldn't be
## permitted.  The lock level determines how thoroughly the queue should
## be investigated to determine if this is the case.
## Level 0 - always permits editing, doesn't look at the queue
##           (unsafe, use only if you know what you are doing)
## Level 1 - permits editing if there are no queued bibedit tasks for this record
##           (safe with respect to bibedit, but not for other bibupload maintenance jobs)
## Level 2 - permits editing if there are no queued bibupload tasks of any sort
##           (safe, but may lock more than necessary if many cataloguers are around)
## Level 3 - permits editing if no queued bibupload task concerns the given record
##           (safe, most precise locking, but slow,
##           checks for 001/EXTERNAL_SYSNO_TAG/EXTERNAL_OAIID_TAG)
## The recommended level is 3 (default) or 2 (if you use maintenance jobs often).
CFG_BIBEDIT_LOCKLEVEL = 3

## CFG_BIBEDIT_PROTECTED_FIELDS -- a comma-separated list of fields that BibEdit
## will not allow to be added, edited or deleted.  Wildcards are not supported,
## but conceptually a wildcard is added at the end of every field specification.
## Examples:
## 500A   - protect all MARC fields with tag 500 and first indicator A
## 5      - protect all MARC fields in the 500-series
## 909C_a - protect subfield a in tag 909 with first indicator C and empty
##          second indicator
## Note that 001 is protected by default, but if protection of other
## identifiers or automated fields is a requirement, they should be added to
## this list.
CFG_BIBEDIT_PROTECTED_FIELDS =

## CFG_BIBEDIT_QUEUE_CHECK_METHOD -- how do we want to check for
## possible queue locking situations to prevent cataloguers from
## editing a record that may be waiting in the queue?  Use 'bibrecord'
## for exact checking (always works, but may be slow), use 'regexp'
## for regular expression based checking (very fast, but may be
## inaccurate).  When unsure, use 'bibrecord'.
CFG_BIBEDIT_QUEUE_CHECK_METHOD = bibrecord

## CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE -- a dictionary
## specifying which collections will be extended with a given template
## while being displayed in the BibEdit UI.
## The collection corresponds to the value written in field 980.
CFG_BIBEDIT_EXTEND_RECORD_WITH_COLLECTION_TEMPLATE = { 'POETRY' : 'record_poem'}

## CFG_BIBEDIT_KB_SUBJECTS -- name of the KB used in the field 65017a
## to automatically convert codes into their extended version,
## e.g. a - Astrophysics
CFG_BIBEDIT_KB_SUBJECTS = Subjects

## CFG_BIBEDIT_KB_INSTITUTIONS -- name of the KB used for institution
## autocompletion.  To be applied in the fields defined in
## CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS.
CFG_BIBEDIT_KB_INSTITUTIONS = InstitutionsCollection

## CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS -- list of fields to
## be autocompleted with the KB CFG_BIBEDIT_KB_INSTITUTIONS.
CFG_BIBEDIT_AUTOCOMPLETE_INSTITUTIONS_FIELDS = 100__u,700__u,701__u,502__c

## CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING -- maximum number of records
## that can be modified instantly using the multi-record editor.  Above
## this limit, modifications will only be executed in limited hours.
CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING = 2000

## CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING -- maximum number of records
## that can be sent for modification without having a superadmin role.
## If the number of records is between CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING
## and this number, the modifications will take place only in limited hours.
CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING = 20000

## CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING_TIME -- allowed time to
## execute modifications on records, when the number exceeds
## CFG_BIBEDITMULTI_LIMIT_INSTANT_PROCESSING.
CFG_BIBEDITMULTI_LIMIT_DELAYED_PROCESSING_TIME = 22:00-05:00

###################################
## Part 17: BibUpload parameters ##
###################################

## CFG_BIBUPLOAD_REFERENCE_TAG -- where do we store references?
CFG_BIBUPLOAD_REFERENCE_TAG = 999

## CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG -- where do we store external
## system numbers?  Useful for matching when our records come from an
## external digital library system.
CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG = 970__a

## CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG -- where do we store the OAI IDs
## of harvested records?  Useful for matching when we harvest stuff
## via OAI that we do not want to reexport via Invenio OAI; such records
## may have only the source OAI ID stored in this tag (kind of like an
## external system number too).
CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG = 035__a

## CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG -- where do we store the
## OAI SRC of harvested records?  Useful for matching when we harvest stuff
## via OAI that we do not want to reexport via Invenio OAI; such records
## may have only the source OAI SRC stored in this tag (kind of like an
## external system number too).  Note that the field should be the same as
## CFG_BIBUPLOAD_EXTERNAL_OAIID_TAG.
CFG_BIBUPLOAD_EXTERNAL_OAIID_PROVENANCE_TAG = 035__9

## CFG_BIBUPLOAD_STRONG_TAGS -- a comma-separated list of tags that
## are strong enough to resist the replace mode.  Useful for tags that
## might be created from an external non-metadata-like source,
## e.g. the information about the number of copies left.
CFG_BIBUPLOAD_STRONG_TAGS = 964

## CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS -- a comma-separated list
## of tags that contain provenance information that should be checked
## in the bibupload correct mode via matching provenance codes.  (Only
## field instances of the same provenance information would be acted
## upon.)  Please specify the whole tag info up to subfield codes.
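## EXAMPLE (illustrative; the provenance code is hypothetical): with
## 6531_9 listed here, a correct-mode upload of a record carrying
## 6531_ $$aphysics$$9SzGeCERN would act only upon those existing 6531_
## field instances whose $$9 subfield is also SzGeCERN, leaving 6531_
## fields with other provenance codes untouched.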
CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS = 6531_9

## CFG_BIBUPLOAD_FFT_ALLOWED_LOCAL_PATHS -- a comma-separated list of system
## paths from which it is allowed to take fulltext files that will be uploaded
## via FFT (CFG_TMPDIR is included by default).
CFG_BIBUPLOAD_FFT_ALLOWED_LOCAL_PATHS = /tmp,/home

## CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS -- a dictionary containing
## external URLs that can be accessed by Invenio and the specific HTTP
## headers that will be used for each URL.  The keys of the dictionary
## are regular expressions matching a set of URLs, the values are
## dictionaries of headers as consumed by urllib2.Request.  If a
## regular expression matching all URLs is created at the end of the
## list, it means that Invenio will download all URLs.  Otherwise
## Invenio will just download authorized URLs.  Note: by default, a
## User-Agent derived from the current Invenio version, site name, and
## site URL will be used.  The values of the header dictionary can
## also contain a call to a Python function, in the form of a
## dictionary with two entries: the name of the function to be called
## as a value for the 'fnc' key, and the arguments to this function,
## as a value for the 'args' key (in the form of a dictionary).
## CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS = [
##     ('http://myurl.com/.*', {'User-Agent': 'Me'}),
##     ('http://yoururl.com/.*', {'User-Agent': 'You', 'Accept': 'text/plain'}),
##     ('http://thisurl.com/.*', {'Cookie': {'fnc':'read_cookie', 'args':{'cookiefile':'/tmp/cookies.txt'}}}),
##     ('http://.*', {'User-Agent': 'Invenio'}),
## ]
CFG_BIBUPLOAD_FFT_ALLOWED_EXTERNAL_URLS = [
    ('http(s)?://.*', {}),
    ]

## CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE -- do we want to serialize the
## internal representation of records (Pythonic record structure) into
## the database?  This can improve the internal processing speed of some
## operations at the price of somewhat bigger disk space usage.
## If you change this value after some records have already been added
## to your installation, you may want to run:
##     $ /opt/invenio/bin/inveniocfg --reset-recstruct-cache
## in order to either erase the cache thus freeing database space,
## or to fill the cache for all records that have not been cached yet.
CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE = 1

## CFG_BIBUPLOAD_DELETE_FORMATS -- which formats do we want bibupload
## to delete when a record is ingested?  Enter a comma-separated list of
## formats.  For example, 'hb,hd' will delete the pre-formatted HTML brief
## and detailed formats from the cache, so that the search engine will
## generate them on the fly.  Useful to always present the latest data of
## records upon record display, until the periodical bibreformat job
## runs next and updates the cache.
CFG_BIBUPLOAD_DELETE_FORMATS = hb

## CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS -- set to 1 if keeping the
## history of record revisions is not necessary (e.g. because records
## and corresponding modifications always come from the same
## external system which already keeps revision history).
CFG_BIBUPLOAD_DISABLE_RECORD_REVISIONS = 0

## CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE -- set the name of
## the BibCatalog ticket queue to be used when BibUpload can't
## automatically resolve a revision conflict and therefore has to put
## the requested modifications in the holding pen.
CFG_BIBUPLOAD_CONFLICTING_REVISION_TICKET_QUEUE =

## CFG_BATCHUPLOADER_FILENAME_MATCHING_POLICY -- a comma-separated list
## indicating which fields match the file names of the documents to be
## uploaded.
## The matching will be done in the same order as the list provided.
CFG_BATCHUPLOADER_FILENAME_MATCHING_POLICY = reportnumber,recid

## CFG_BATCHUPLOADER_DAEMON_DIR -- directory where the batchuploader daemon
## will look for the subfolders metadata and document by default.
## If the path is relative, CFG_PREFIX will be prepended to it.
CFG_BATCHUPLOADER_DAEMON_DIR = var/batchupload

## CFG_BATCHUPLOADER_WEB_ROBOT_AGENTS -- regular expression specifying the
## user agents permitted to call the batch uploader web interface
## cds.cern.ch/batchuploader/robotupload,
## e.g. when using curl: curl xxx -A invenio
CFG_BATCHUPLOADER_WEB_ROBOT_AGENTS = invenio_webupload|Invenio-.*

## CFG_BATCHUPLOADER_WEB_ROBOT_RIGHTS -- access list specifying, for each
## IP address, which collections are allowed to use the batch uploader robot
## interface.
CFG_BATCHUPLOADER_WEB_ROBOT_RIGHTS = {
    '127.0.0.1': ['*'], # useful for testing
    '127.0.1.1': ['*'], # useful for testing
    '10.0.0.1': ['BOOK', 'REPORT'], # Example 1
    '10.0.0.2': ['POETRY', 'PREPRINT'], # Example 2
    }

####################################
## Part 18: BibCatalog parameters ##
####################################

## CFG_BIBCATALOG_SYSTEM -- set the desired catalog system (RT or EMAIL).
CFG_BIBCATALOG_SYSTEM = EMAIL

## Email backend configuration:
CFG_BIBCATALOG_SYSTEM_EMAIL_ADDRESS = info@invenio-software.org

## RT backend configuration:
## CFG_BIBCATALOG_SYSTEM_RT_CLI -- path to the RT CLI client
CFG_BIBCATALOG_SYSTEM_RT_CLI = /usr/bin/rt

## CFG_BIBCATALOG_SYSTEM_RT_URL -- base URL of the remote RT system
CFG_BIBCATALOG_SYSTEM_RT_URL = http://localhost/rt3

## CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_USER -- set the username for a default RT
## account on the remote system, with limited privileges, in order to only
## create and modify its own tickets.
CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_USER =

## CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_PWD -- set the password for the default
## RT account on the remote system.
CFG_BIBCATALOG_SYSTEM_RT_DEFAULT_PWD =

####################################
## Part 19: BibFormat parameters  ##
####################################

## CFG_BIBFORMAT_HIDDEN_TAGS -- comma-separated list of MARC tags that
## are not shown to users not having cataloging authorizations.
CFG_BIBFORMAT_HIDDEN_TAGS = 595

## CFG_BIBFORMAT_HIDDEN_FILE_FORMATS -- comma-separated list of file formats
## that are not shown explicitly to users not having cataloging
## authorizations, e.g. pdf;pdfa,xml
CFG_BIBFORMAT_HIDDEN_FILE_FORMATS =

## CFG_BIBFORMAT_ADDTHIS_ID -- if you want to use the AddThis service,
## set this value to the pubid parameter as provided by the service
## (e.g. ra-4ff80aae118f4dad), and add a call to the corresponding
## formatting element in your formats, for example
## Default_HTML_detailed.bft.
CFG_BIBFORMAT_ADDTHIS_ID =

## CFG_BIBFORMAT_DISABLE_I18N_FOR_CACHED_FORMATS -- for each output
## format BibReformat currently creates a cache for only one language
## (CFG_SITE_LANG) per record.  This means that visitors having set a
## different language than CFG_SITE_LANG will be served an on-the-fly
## output using the language of their choice.  You can disable this
## behaviour by specifying below for which output formats you would
## like to force the cache to be used whatever language is
## requested.  If your format templates do not provide
## internationalization, you can optimize your site by setting e.g.
## hb,hd below to always serve the precached output (if it exists) in
## the CFG_SITE_LANG.
CFG_BIBFORMAT_DISABLE_I18N_FOR_CACHED_FORMATS =

####################################
## Part 20: BibMatch parameters   ##
####################################

## CFG_BIBMATCH_LOCAL_SLEEPTIME -- determines the number of seconds to sleep
## between search queries on the LOCAL system.
CFG_BIBMATCH_LOCAL_SLEEPTIME = 0.0

## CFG_BIBMATCH_REMOTE_SLEEPTIME -- determines the number of seconds to sleep
## between search queries on REMOTE systems.
CFG_BIBMATCH_REMOTE_SLEEPTIME = 2.0

## CFG_BIBMATCH_FUZZY_WORDLIMITS -- determines the number of words to extract
## from a given field's value during fuzzy matching mode.  Add/change a field
## and the appropriate number in the dictionary to configure this.
CFG_BIBMATCH_FUZZY_WORDLIMITS = {
    '100__a': 2,
    '245__a': 4
    }

## CFG_BIBMATCH_FUZZY_EMPTY_RESULT_LIMIT -- determines the number of empty
## results to accept during fuzzy matching mode.
CFG_BIBMATCH_FUZZY_EMPTY_RESULT_LIMIT = 1

## CFG_BIBMATCH_QUERY_TEMPLATES -- here you can set the various predefined
## querystrings used to standardize common matching queries.  By default the
## following templates are given:
## title        - standard title search.  Taken from 245__a (default)
## title-author - title and author search (i.e. a title AND author search).
##                Taken from 245__a and 100__a
## reportnumber - reportnumber search (i.e. reportnumber:REP-NO-123).
CFG_BIBMATCH_QUERY_TEMPLATES = {
    'title' : '[title]',
    'title-author' : '[title] [author]',
    'reportnumber' : 'reportnumber:[reportnumber]'
    }

## CFG_BIBMATCH_MATCH_VALIDATION_RULESETS -- here you can define the various
## rulesets for validating search results done by BibMatch.  Each ruleset
## contains a certain pattern mapped to a tuple defining a
## "matching-strategy".
##
## The rule-definitions must come in two parts:
##
## * The first part is a string containing a regular expression
##   that is matched against the textmarc representation of each record.
##   If a match is found, the final rule-set is updated with
##   the given "sub rule-set", where identical tag rules are replaced.
##
## * The second item is a list of key->value mappings (dicts) that indicate
##   specific strategy parameters with corresponding validation rules.
##
## This strategy consists of five items:
##
## * MARC TAGS:
##   These MARC tags represent the fields taken from the original record and
##   any records from the search results.  When several MARC tags are
##   specified with a given match-strategy, all the fields associated with
##   these tags are matched together (i.e. with key "100__a,700__a", all
##   100__a and 700__a fields are matched together, which is useful when the
##   first author can vary for certain records on different systems).
##
## * COMPARISON THRESHOLD:
##   A value between 0.0 and 1.0 specifying the threshold for string matches
##   to determine if it is a match or not (using normalized string-distance).
##   Normally 0.8 (80% match) is considered to be a close match.
##
## * COMPARISON MODE:
##   The comparison mode decides how the record datafields are compared:
##   - 'strict'  : all (sub-)fields are compared, and all must match.  Order is significant.
##   - 'normal'  : all (sub-)fields are compared, and all must match.  Order is ignored.
##   - 'lazy'    : all (sub-)fields are compared with each other and at least one must match
##   - 'ignored' : the tag is ignored in the match.  Used to disable previously defined rules.
##
## * MATCHING MODE:
##   The matching mode decides how the field values are matched:
##   - 'title'      : uses a method specialized for comparing titles, e.g. looking for subtitles
##   - 'author'     : uses a special author-name comparison.  Will take initials into account.
##   - 'identifier' : special matching for identifiers, stripping away punctuation
##   - 'date'       : matches dates by extracting and comparing the year
##   - 'normal'     : normal string comparison.
##   Note: fields are considered matching when all their subfields or values match.
##
## * RESULT MODE:
##   The result mode decides how the results from the comparisons are handled further:
##   - 'normal' : a failed match will cause the validation to immediately exit as a failure.
##                a successful match will cause the validation to continue on other rules (if any)
##   - 'final'  : a failed match will cause the validation to immediately exit as a failure.
##                a successful match will cause the validation to immediately exit as a success.
##   - 'joker'  : a failed match will cause the validation to continue on other rules (if any).
##                a successful match will cause the validation to immediately exit as a success.
##
## You can add your own rulesets in the dictionary below.  The 'default'
## ruleset is always applied, and should therefore NOT be removed, but it can
## be changed.  The tag-rules can also be overwritten by other rulesets.
##
## WARNING: Beware that the validation quality is only as good as the given
## rules, so matching results are never guaranteed to be accurate, as
## validation is very content-specific.
CFG_BIBMATCH_MATCH_VALIDATION_RULESETS = [('default', [{ 'tags' : '245__%,242__%',
                                                         'threshold' : 0.8,
                                                         'compare_mode' : 'lazy',
                                                         'match_mode' : 'title',
                                                         'result_mode' : 'normal' },
                                                       { 'tags' : '037__a,088__a',
                                                         'threshold' : 1.0,
                                                         'compare_mode' : 'lazy',
                                                         'match_mode' : 'identifier',
                                                         'result_mode' : 'final' },
                                                       { 'tags' : '100__a,700__a',
                                                         'threshold' : 0.8,
                                                         'compare_mode' : 'normal',
                                                         'match_mode' : 'author',
                                                         'result_mode' : 'normal' },
                                                       { 'tags' : '773__a',
                                                         'threshold' : 1.0,
                                                         'compare_mode' : 'lazy',
                                                         'match_mode' : 'title',
                                                         'result_mode' : 'normal' }]),
                                          ('980__ \$\$a(THESIS|Thesis)', [{ 'tags' : '100__a',
                                                                            'threshold' : 0.8,
                                                                            'compare_mode' : 'strict',
                                                                            'match_mode' : 'author',
                                                                            'result_mode' : 'normal' },
                                                                          { 'tags' : '700__a,701__a',
                                                                            'threshold' : 1.0,
                                                                            'compare_mode' : 'lazy',
                                                                            'match_mode' : 'author',
                                                                            'result_mode' : 'normal' },
                                                                          { 'tags' : '100__a,700__a',
                                                                            'threshold' : 0.8,
                                                                            'compare_mode' : 'ignored',
                                                                            'match_mode' : 'author',
                                                                            'result_mode' : 'normal' }]),
                                          ('260__', [{ 'tags' : '260__c',
                                                       'threshold' : 0.8,
                                                       'compare_mode' : 'lazy',
                                                       'match_mode' : 'date',
                                                       'result_mode' : 'normal' }]),
                                          ('0247_', [{ 'tags' : '0247_a',
                                                       'threshold' : 1.0,
                                                       'compare_mode' : 'lazy',
                                                       'match_mode' : 'identifier',
                                                       'result_mode' : 'final' }]),
                                          ('020__', [{ 'tags' : '020__a',
                                                       'threshold' : 1.0,
                                                       'compare_mode' : 'lazy',
                                                       'match_mode' : 'identifier',
                                                       'result_mode' : 'joker' }])
                                         ]

## CFG_BIBMATCH_FUZZY_MATCH_VALIDATION_LIMIT -- determines the minimum
## percentage of rules that must be positively matched when comparing two
## records.  Should the number of matches be lower than the required number
## of matches but equal to or above this limit, the match will be considered
## fuzzy.
CFG_BIBMATCH_FUZZY_MATCH_VALIDATION_LIMIT = 0.65
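## EXAMPLE (illustrative): if a ruleset applies 4 rules and a record
## pair positively matches only 3 of them, the ratio is 3/4 = 0.75;
## since 0.75 is equal to or above the 0.65 limit, the comparison is
## reported as a fuzzy match rather than a non-match.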
## CFG_BIBMATCH_SEARCH_RESULT_MATCH_LIMIT -- determines the maximum number of
## search results a single search can return before acting as a non-match.
CFG_BIBMATCH_SEARCH_RESULT_MATCH_LIMIT = 15

######################################
## Part 21: BibAuthorID parameters  ##
######################################

# CFG_BIBAUTHORID_MAX_PROCESSES is the maximum number of processes
# that may be spawned by the disambiguation algorithm.
CFG_BIBAUTHORID_MAX_PROCESSES = 12

# CFG_BIBAUTHORID_PERSONID_SQL_MAX_THREADS is the maximum number of threads
# used to parallelize SQL queries during personID table updates.
CFG_BIBAUTHORID_PERSONID_SQL_MAX_THREADS = 12

# CFG_BIBAUTHORID_EXTERNAL_CLAIMED_RECORDS_KEY defines the user info
# keys for externally claimed records in a remote-login scenario,
# e.g. "external_arxivids" for arXiv SSO.
CFG_BIBAUTHORID_EXTERNAL_CLAIMED_RECORDS_KEY =

# CFG_BIBAUTHORID_ENABLED
# Globally enable the AuthorID interfaces.
# If False: no guest, user or operator will have access to the system.
CFG_BIBAUTHORID_ENABLED = True

# CFG_BIBAUTHORID_ON_AUTHORPAGES
# Enable AuthorID information on the author pages.
CFG_BIBAUTHORID_ON_AUTHORPAGES = True

# CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL defines the email address
# to which all ticket requests concerning authors will be sent.
CFG_BIBAUTHORID_AUTHOR_TICKET_ADMIN_EMAIL = info@invenio-software.org

# CFG_BIBAUTHORID_UI_SKIP_ARXIV_STUB_PAGE defines whether the optional
# arXiv stub page is skipped.
CFG_BIBAUTHORID_UI_SKIP_ARXIV_STUB_PAGE = False

#########################################
## Part 22: BibCirculation parameters  ##
#########################################

## CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL -- comma-separated list of statuses
# Example: missing, order delayed, not published
# You can always add a new status here, but you may want to run some script
# to update the database if you remove some statuses.
CFG_BIBCIRCULATION_ITEM_STATUS_OPTIONAL =

## Here you can edit the text of the statuses that have specific roles.
# You should run a script to update the database if you change them after
# having used the module for some time.

## Item statuses
# The book is on loan
CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN = on loan
# Available for loan
CFG_BIBCIRCULATION_ITEM_STATUS_ON_SHELF = on shelf
# The book is being processed by the library (cataloguing, etc.)
CFG_BIBCIRCULATION_ITEM_STATUS_IN_PROCESS = in process
# The book has been ordered (bought)
CFG_BIBCIRCULATION_ITEM_STATUS_ON_ORDER = on order
# The order of the book has been cancelled
CFG_BIBCIRCULATION_ITEM_STATUS_CANCELLED = cancelled
# The order of the book has not arrived yet
CFG_BIBCIRCULATION_ITEM_STATUS_NOT_ARRIVED = not arrived
# The order of the book has not arrived yet and has been claimed
CFG_BIBCIRCULATION_ITEM_STATUS_CLAIMED = claimed
# The book has been proposed for acquisition and is under review.
CFG_BIBCIRCULATION_ITEM_STATUS_UNDER_REVIEW = under review

## Loan statuses
# This status should not be confused with CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN.
# If the item status is CFG_BIBCIRCULATION_ITEM_STATUS_ON_LOAN, then there is
# a loan with status CFG_BIBCIRCULATION_LOAN_STATUS_ON_LOAN or
# CFG_BIBCIRCULATION_LOAN_STATUS_EXPIRED.
# For each copy, there can only be one active loan ('on loan' or 'expired') at
# a time, while there can be many 'returned' loans for the same copy.
CFG_BIBCIRCULATION_LOAN_STATUS_ON_LOAN = on loan
# The due date has come and the item has not been returned
CFG_BIBCIRCULATION_LOAN_STATUS_EXPIRED = expired
# The item has been returned.
CFG_BIBCIRCULATION_LOAN_STATUS_RETURNED = returned

## Request statuses
# There is at least one copy available, and this is the oldest request.
CFG_BIBCIRCULATION_REQUEST_STATUS_PENDING = pending
# There are no copies available, or there is another request with more priority.
CFG_BIBCIRCULATION_REQUEST_STATUS_WAITING = waiting
# The request has become a loan
CFG_BIBCIRCULATION_REQUEST_STATUS_DONE = done
# The request has been cancelled
CFG_BIBCIRCULATION_REQUEST_STATUS_CANCELLED = cancelled
# The request has been generated for a proposed book
CFG_BIBCIRCULATION_REQUEST_STATUS_PROPOSED = proposed

# ILL request statuses
CFG_BIBCIRCULATION_ILL_STATUS_NEW = new
CFG_BIBCIRCULATION_ILL_STATUS_REQUESTED = requested
CFG_BIBCIRCULATION_ILL_STATUS_ON_LOAN = on loan
CFG_BIBCIRCULATION_ILL_STATUS_RETURNED = returned
CFG_BIBCIRCULATION_ILL_STATUS_CANCELLED = cancelled
CFG_BIBCIRCULATION_ILL_STATUS_RECEIVED = received

# Book proposal statuses
CFG_BIBCIRCULATION_PROPOSAL_STATUS_NEW = proposal-new
CFG_BIBCIRCULATION_PROPOSAL_STATUS_ON_ORDER = proposal-on order
CFG_BIBCIRCULATION_PROPOSAL_STATUS_PUT_ASIDE = proposal-put aside
CFG_BIBCIRCULATION_PROPOSAL_STATUS_RECEIVED = proposal-received

# Purchase statuses
CFG_BIBCIRCULATION_ACQ_STATUS_NEW = new
CFG_BIBCIRCULATION_ACQ_STATUS_ON_ORDER = on order
CFG_BIBCIRCULATION_ACQ_STATUS_PARTIAL_RECEIPT = partial receipt
CFG_BIBCIRCULATION_ACQ_STATUS_RECEIVED = received
CFG_BIBCIRCULATION_ACQ_STATUS_CANCELLED = cancelled

## Library types
# Normal library where you have your books.  It can also be a depot.
CFG_BIBCIRCULATION_LIBRARY_TYPE_INTERNAL = internal
# External libraries for ILL.
CFG_BIBCIRCULATION_LIBRARY_TYPE_EXTERNAL = external
# The main library is also an internal library.
# Since you may have several depots or small sites, you can tag one of them as
# the main site.
CFG_BIBCIRCULATION_LIBRARY_TYPE_MAIN = main
# It is also an internal library.  The copies in this type of library will NOT
# be displayed to borrowers.  Use this for depots.
CFG_BIBCIRCULATION_LIBRARY_TYPE_HIDDEN = hidden

## Amazon access key.  You will need your own key.
# Example: 1T6P5M3ZDMW9AWJ212R2
CFG_BIBCIRCULATION_AMAZON_ACCESS_KEY =

######################################
## Part 23: BibClassify parameters  ##
######################################

# CFG_BIBCLASSIFY_WEB_MAXKW -- maximum number of keywords to display
# in the Keywords tab web page.
CFG_BIBCLASSIFY_WEB_MAXKW = 100

########################################
## Part 24: Plotextractor parameters  ##
########################################

## CFG_PLOTEXTRACTOR_SOURCE_BASE_URL -- for acquiring source tarballs for plot
## extraction, where should we look?  If nothing is set, we'll just go
## to arXiv, but this can be a filesystem location, too.
CFG_PLOTEXTRACTOR_SOURCE_BASE_URL = http://arxiv.org/

## CFG_PLOTEXTRACTOR_SOURCE_TARBALL_FOLDER -- for acquiring source tarballs
## for plot extraction, the subfolder where the tarballs sit.
CFG_PLOTEXTRACTOR_SOURCE_TARBALL_FOLDER = e-print/

## CFG_PLOTEXTRACTOR_SOURCE_PDF_FOLDER -- for acquiring source tarballs for
## plot extraction, the subfolder where the PDFs sit.
CFG_PLOTEXTRACTOR_SOURCE_PDF_FOLDER = pdf/

## CFG_PLOTEXTRACTOR_DOWNLOAD_TIMEOUT -- a float representing the number of
## seconds to wait between each download of a PDF and/or tarball from the
## source URL.
CFG_PLOTEXTRACTOR_DOWNLOAD_TIMEOUT = 2.0

## CFG_PLOTEXTRACTOR_CONTEXT_EXTRACT_LIMIT -- when extracting the context of
## plots from TeX sources, this is the limit on the number of characters to
## extract in each direction.  Default 750.
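## EXAMPLE (illustrative; assuming context is gathered around each plot
## reference in the TeX source): with the default limits here and below,
## at most 750 characters, 75 words, and 2 sentences are extracted in
## each direction, stopping earlier if a disallowed TeX tag is met.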
CFG_PLOTEXTRACTOR_CONTEXT_EXTRACT_LIMIT = 750

## CFG_PLOTEXTRACTOR_DISALLOWED_TEX -- when extracting the context of plots
## from TeX sources, this is the list of TeX tags that will trigger 'end of
## context'.
CFG_PLOTEXTRACTOR_DISALLOWED_TEX = begin,end,section,includegraphics,caption,acknowledgements

## CFG_PLOTEXTRACTOR_CONTEXT_WORD_LIMIT -- when extracting the context of
## plots from TeX sources, this is the limit on the number of words in each
## direction.  Default 75.
CFG_PLOTEXTRACTOR_CONTEXT_WORD_LIMIT = 75

## CFG_PLOTEXTRACTOR_CONTEXT_SENTENCE_LIMIT -- when extracting the context of
## plots from TeX sources, this is the limit on the number of sentences in
## each direction.  Default 2.
CFG_PLOTEXTRACTOR_CONTEXT_SENTENCE_LIMIT = 2

######################################
## Part 25: WebStat parameters      ##
######################################

# CFG_WEBSTAT_BIBCIRCULATION_START_YEAR defines the start date of the
# BibCirculation statistics.  The value should have the format 'yyyy'.
# If empty, all existing data is taken into account.
CFG_WEBSTAT_BIBCIRCULATION_START_YEAR =

######################################
## Part 26: Web API Key parameters  ##
######################################

# CFG_WEB_API_KEY_ALLOWED_URL defines the web apps that are going to use the
# web API key.  Each entry has three values: the name of the web app, the
# lifetime of the secure URL, and whether a timestamp is needed.
#CFG_WEB_API_KEY_ALLOWED_URL = [('search/\?', 3600, True),
#                               ('rss', 0, False)]
CFG_WEB_API_KEY_ALLOWED_URL = []

##########################################
## Part 27: WebAuthorProfile parameters ##
##########################################

# CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_LIVE: consider a cached element
# expired after this many days when loading an author page, thus recomputing
# the content live.
CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_LIVE = 7

# CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_BIBSCHED: consider a cached element
# expired after this many days, thus recomputing it via the bibsched daemon.
CFG_WEBAUTHORPROFILE_CACHE_EXPIRED_DELAY_BIBSCHED = 5

# CFG_WEBAUTHORPROFILE_MAX_COLLAB_LIST: limit the collaboration list.
# Set to 0 to disable the limit.
CFG_WEBAUTHORPROFILE_MAX_COLLAB_LIST = 100

# CFG_WEBAUTHORPROFILE_MAX_KEYWORD_LIST: limit the keywords list.
# Set to 0 to disable the limit.
CFG_WEBAUTHORPROFILE_MAX_KEYWORD_LIST = 100

# CFG_WEBAUTHORPROFILE_MAX_AFF_LIST: limit the affiliations list.
# Set to 0 to disable the limit.
CFG_WEBAUTHORPROFILE_MAX_AFF_LIST = 100

# CFG_WEBAUTHORPROFILE_MAX_COAUTHOR_LIST: limit the coauthors list.
# Set to 0 to disable the limit.
CFG_WEBAUTHORPROFILE_MAX_COAUTHOR_LIST = 100

# CFG_WEBAUTHORPROFILE_MAX_HEP_CHOICES: limit the HepRecords choices.
# Set to 0 to disable the limit.
CFG_WEBAUTHORPROFILE_MAX_HEP_CHOICES = 10

# CFG_WEBAUTHORPROFILE_USE_BIBAUTHORID: use bibauthorid or exactauthor.
CFG_WEBAUTHORPROFILE_USE_BIBAUTHORID = False

####################################
## Part 28: BibSort parameters    ##
####################################

## CFG_BIBSORT_BUCKETS -- the number of buckets bibsort should use.
## If 0, then no buckets will be used (bibsort will be inactive).
## If different from 0, bibsort will be used for sorting the records.
## The number of buckets should be set with regard to the size
## of the repository; having a larger number of buckets will increase
## the sorting performance for the top results but will decrease
## the performance for sorting the middle results.
## We recommend using 1 in case you have fewer than about
## 1,000,000 records.
## When modifying this variable, re-run rebalancing for all the bibsort
## methods to keep the database in sync.
CFG_BIBSORT_BUCKETS = 1

####################################
## Part 29: Developer options     ##
####################################

## CFG_DEVEL_SITE -- is this a development site?  If it is, you might
## prefer that it does not do certain things.  For example, you might
## not want WebSubmit to send certain emails or trigger certain
## processes on a development site.  Put "0" for "no" (meaning we are
## on a production site), put "1" for "yes" (meaning we are on a
## development site), or put "9" for "maximum debugging info" (which
## will be displayed to *all* users using the Flask DebugToolbar, so
## please beware).
## If you do *NOT* want to send emails to their original recipients,
## set CFG_EMAIL_BACKEND to a corresponding value (e.g. dummy, locmem).
CFG_DEVEL_SITE = 0

## CFG_DEVEL_TEST_DATABASE_ENGINES -- do we want to enable different testing
## database engines for testing Flask and SQLAlchemy?  This setting
## will allow `*_flask_tests.py` to run on the databases defined below.
## It uses the `CFG_DATABASE_*` config variables as defaults for every
## specified engine.  Put the following keys into the testing database
## configuration dictionary in order to overwrite the default values:
##  * `engine`: SQLAlchemy engine + driver
##  * `username`: The user name.
##  * `password`: The database password.
##  * `host`: The name of the host.
##  * `port`: The port number.
##  * `database`: The database name.
## EXAMPLE:
## CFG_DEVEL_TEST_DATABASE_ENGINES = {
##     'PostgreSQL': {'engine': 'postgresql'},
##     'SQLite': {'engine': 'sqlite+pysqlite', 'username': None,
##                'password': None, 'host': None, 'database': None}
## }
CFG_DEVEL_TEST_DATABASE_ENGINES = {}

## CFG_DEVEL_TOOLS -- list of development tools to enable or disable.
## Currently supported tools are:
##  * debug-toolbar: Flask Debug Toolbar
##  * werkzeug-debugger: Werkzeug Debugger (for Apache)
##  * sql-logger: Logging of run_sql SQL queries
##  * inspect-templates: Template inspection (formerly CFG_WEBSTYLE_INSPECT_TEMPLATES)
##  * no-https-redirect: Do not redirect HTTP to HTTPS
##  * assets-debug: Jinja2 assets debugging (i.e. do not merge JavaScript files)
##  * intercept-redirects: Intercept redirects (requires debug-toolbar enabled).
##  * winpdb-local: Embedded WinPDB Debugger (default password is Change1Me)
##  * winpdb-remote: Remote WinPDB Debugger (default password is Change1Me)
##  * pydev: PyDev Remote Debugger
##
## IMPORTANT: For werkzeug-debugger, winpdb and pydev to work with Apache you
## must set WSGIDaemonProcess processes=1 threads=1 in invenio-apache-vhost.conf.
CFG_DEVEL_TOOLS =
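## EXAMPLE (illustrative; assuming the same comma-separated list format
## as the other list-valued options in this file):
## CFG_DEVEL_TOOLS = debug-toolbar,sql-logger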
########################################
## Part 30: JsTestDriver parameters   ##
########################################

## CFG_JSTESTDRIVER_PORT -- server port where JS tests will be run.
CFG_JSTESTDRIVER_PORT = 9876

############################
## Part 31: RefExtract    ##
############################

## Refextract can automatically submit tickets (after extracting references)
## to CFG_REFEXTRACT_TICKET_QUEUE if it is set.
CFG_REFEXTRACT_TICKET_QUEUE = None

## Override refextract kbs locations
CFG_REFEXTRACT_KBS_OVERRIDE = {}

##################################
## Part 32: CrossRef parameters ##
##################################

## CFG_CROSSREF_USERNAME -- the username used when sending requests
## to the Crossref site.
CFG_CROSSREF_USERNAME =

## CFG_CROSSREF_PASSWORD -- the password used when sending requests
## to the Crossref site.
CFG_CROSSREF_PASSWORD =

#####################################
## Part 33: WebLinkback parameters ##
#####################################

## CFG_WEBLINKBACK_TRACKBACK_ENABLED -- whether to enable trackback support:
## 1 to enable, 0 to disable it.
CFG_WEBLINKBACK_TRACKBACK_ENABLED = 0

####################################
## Part 34: WebSubmit parameters  ##
####################################

## CFG_WEBSUBMIT_USE_MATHJAX -- whether to use MathJax and the math
## preview panel within submissions (1) or not (0).  Customize your
## websubmit_template.tmpl_mathpreview_header() to enable it for
## specific fields.
## See also CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS.
CFG_WEBSUBMIT_USE_MATHJAX = 0

############################
## Part 35: BibWorkflow  ##
############################

## The worker that will be used to execute workflows.
## Allowed options: Celery
CFG_BIBWORKFLOW_WORKER = worker_celery

## Message broker for the worker:
## RabbitMQ - amqp://guest@localhost//
## Redis    - redis://localhost:6379/0
CFG_BROKER_URL = amqp://guest@localhost:5672//

## Result backend:
## RabbitMQ - amqp
## Redis    - redis://localhost:6379/0
CFG_CELERY_RESULT_BACKEND = amqp

####################################
## Part 36: BibField parameters   ##
####################################

## CFG_BIBFIELD_MASTER_FORMATS -- the names of all the allowed master formats
## that BibField will work with.
CFG_BIBFIELD_MASTER_FORMATS = marc

######################################
## Part 37: WebDeposit parameters   ##
######################################

CFG_WEBDEPOSIT_MAX_UPLOAD_SIZE = 104857600

##########################
##  THAT's ALL, FOLKS!  ##
##########################
diff --git a/configure-tests.py b/configure-tests.py
index b38044ca1..26d205a02 100644
--- a/configure-tests.py
+++ b/configure-tests.py
@@ -1,547 +1,546 @@
## This file is part of Invenio.
## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

from __future__ import print_function

"""
Test the suitability of the Python core and the availability of various
Python modules for running Invenio.  Warn the user about any potential
problems.  Exit status: 0 if okay, 1 if not okay.  Useful for running
from configure.ac.
""" ## minimally recommended/required versions: CFG_MIN_PYTHON_VERSION = (2, 6) CFG_MAX_PYTHON_VERSION = (2, 9, 9999) CFG_MIN_MYSQLDB_VERSION = "1.2.1_p2" ## 0) import modules needed for this testing: import string import sys import getpass import subprocess import re error_messages = [] warning_messages = [] def wait_for_user(msg): """Print MSG and prompt user for confirmation.""" try: raw_input(msg) except KeyboardInterrupt: print("\n\nInstallation aborted.") sys.exit(1) except EOFError: print(" (continuing in batch mode)") return ## 1) check Python version: if sys.version_info < CFG_MIN_PYTHON_VERSION: error_messages.append( """ ******************************************************* ** ERROR: TOO OLD PYTHON DETECTED: %s ******************************************************* ** You seem to be using a too old version of Python. ** ** You must use at least Python %s. ** ** ** ** Note that if you have more than one Python ** ** installed on your system, you can specify the ** ** --with-python configuration option to choose ** ** a specific (e.g. non system wide) Python binary. ** ** ** ** Please upgrade your Python before continuing. ** ******************************************************* """ % (string.replace(sys.version, "\n", ""), '.'.join(CFG_MIN_PYTHON_VERSION)) ) if sys.version_info > CFG_MAX_PYTHON_VERSION: error_messages.append( """ ******************************************************* ** ERROR: TOO NEW PYTHON DETECTED: %s ******************************************************* ** You seem to be using a too new version of Python. ** ** You must use at most Python %s. ** ** ** ** Perhaps you have downloaded and are installing an ** ** old Invenio version? Please look for more recent ** ** Invenio version or please contact the development ** ** team at about this ** ** problem. ** ** ** ** Installation aborted. ** ******************************************************* """ % (string.replace(sys.version, "\n", ""), '.'.join(CFG_MAX_PYTHON_VERSION)) ) ## 2) check for required modules: try: import MySQLdb import base64 from six.moves import cPickle import cStringIO import cgi import copy import fileinput import getopt import sys if sys.hexversion < 0x2060000: import md5 else: import hashlib import marshal import os import pyparsing import signal import tempfile import time import traceback import unicodedata import urllib import zlib import wsgiref import sqlalchemy import werkzeug import jinja2 import flask import fixture import flask.ext.assets import flask.ext.cache import flask.ext.sqlalchemy import flask.ext.testing import wtforms import flask.ext.wtf import flask.ext.admin ## Check Werkzeug version werkzeug_ver = werkzeug.__version__.split(".") if werkzeug_ver[0] == "0" and int(werkzeug_ver[1]) < 8: error_messages.append( """ ***************************************************** ** Werkzeug version %s detected ***************************************************** ** Your are using an outdated version of Werkzeug ** ** with known problems. Please upgrade Werkzeug to ** ** at least v0.8 by running e.g.: ** ** pip install Werkzeug --upgrade ** ***************************************************** """ % werkzeug.__version__ ) except ImportError as msg: error_messages.append(""" ************************************************* ** IMPORT ERROR %s ************************************************* ** Perhaps you forgot to install some of the ** ** prerequisite Python modules? Please look ** ** at our INSTALL file for more details and ** ** fix the problem before continuing! 
    *************************************************
    """ % msg
    )

## 3) check for recommended modules:
try:
    import rdflib
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that rdflib is needed only if you plan     **
    ** to work with the automatic classification of    **
    ** documents based on RDF-based taxonomies.        **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import pyRXP
except ImportError as msg:
    warning_messages.append("""
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that PyRXP is not really required but      **
    ** we recommend it for fast XML MARC parsing.      **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import dateutil
except ImportError as msg:
    warning_messages.append("""
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that dateutil is not really required but   **
    ** we recommend it for user-friendly date          **
    ** parsing.                                        **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import libxml2
except ImportError as msg:
    warning_messages.append("""
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that libxml2 is not really required but    **
    ** we recommend it for XML metadata conversions    **
    ** and for fast XML parsing.                       **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import libxslt
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that libxslt is not really required but    **
    ** we recommend it for XML metadata conversions.   **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import Gnuplot
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that Gnuplot.py is not really required but **
    ** we recommend it in order to have nice download  **
    ** and citation history graphs on Detailed record  **
    ** pages.                                          **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import rauth
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that python-rauth is not really required   **
    ** but we recommend it in order to enable OAuth    **
    ** based authentication.                           **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import openid
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that python-openid is not really required  **
    ** but we recommend it in order to enable OpenID   **
    ** based authentication.                           **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    import magic
    if not hasattr(magic, "open"):
        raise StandardError
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that the magic module is not really        **
    ** required but we recommend it in order to have   **
    ** detailed content information about fulltext     **
    ** files.                                          **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )
except StandardError:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING python-magic
    *****************************************************
    ** The python-magic package you installed is not   **
    ** the one supported by Invenio.  Please refer to  **
    ** the INSTALL file for more details.              **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """
    )

try:
    import reportlab
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that the reportlab module is not really    **
    ** required, but we recommend it if you want to    **
    ** enrich PDFs with OCR information.               **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
    ** even after your Invenio installation is put     **
    ** into production.)                               **
    *****************************************************
    """ % msg
    )

try:
    try:
        import PyPDF2
    except ImportError:
        import pyPdf
except ImportError as msg:
    warning_messages.append(
    """
    *****************************************************
    ** IMPORT WARNING %s
    *****************************************************
    ** Note that the pyPdf or PyPDF2 module is not     **
    ** really required, but we recommend it if you     **
    ** want to enrich PDFs with OCR information.       **
    **                                                 **
    ** You can safely continue installing Invenio      **
    ** now, and add this module anytime later.  (I.e.  **
** ** even after your Invenio installation is put ** ** into production.) ** ***************************************************** """ % msg ) ## 4) check for versions of some important modules: if MySQLdb.__version__ < CFG_MIN_MYSQLDB_VERSION: error_messages.append( """ ***************************************************** ** ERROR: PYTHON MODULE MYSQLDB %s DETECTED ***************************************************** ** You have to upgrade your MySQLdb to at least ** ** version %s. You must fix this problem ** ** before continuing. Please see the INSTALL file ** ** for more details. ** ***************************************************** """ % (MySQLdb.__version__, CFG_MIN_MYSQLDB_VERSION) ) try: import Stemmer try: from Stemmer import algorithms except ImportError as msg: error_messages.append( """ ***************************************************** ** ERROR: STEMMER MODULE PROBLEM %s ***************************************************** ** Perhaps you are using an old Stemmer version? ** ** You must either remove your old Stemmer or else ** ** upgrade to Snowball Stemmer ** ** before continuing. Please see the INSTALL file ** ** for more details. ** ***************************************************** """ % (msg) ) except ImportError: pass # no prob, Stemmer is optional ## 5) check for Python.h (needed for intbitset): try: from distutils.sysconfig import get_python_inc path_to_python_h = get_python_inc() + os.sep + 'Python.h' if not os.path.exists(path_to_python_h): raise StandardError, "Cannot find %s" % path_to_python_h except StandardError as msg: error_messages.append( """ ***************************************************** ** ERROR: PYTHON HEADER FILE ERROR %s ***************************************************** ** You do not seem to have Python developer files ** ** installed (such as Python.h). Some operating ** ** systems provide these in a separate Python ** ** package called python-dev or python-devel. ** ** You must install such a package before ** ** continuing the installation process. ** ***************************************************** """ % (msg) ) ## 6) Check if ffmpeg is installed and if so, with the minimum configuration for bibencode try: try: process = subprocess.Popen('ffprobe', stderr=subprocess.PIPE, stdout=subprocess.PIPE) except OSError: raise StandardError, "FFMPEG/FFPROBE does not seem to be installed!" returncode = process.wait() output = process.communicate()[1] RE_CONFIGURATION = re.compile("(--enable-[a-z0-9\-]*)") CONFIGURATION_REQUIRED = ( '--enable-gpl', '--enable-version3', '--enable-nonfree', '--enable-libtheora', '--enable-libvorbis', '--enable-libvpx', '--enable-libopenjpeg' ) options = RE_CONFIGURATION.findall(output) if sys.version_info < (2, 6): import sets s = sets.Set(CONFIGURATION_REQUIRED) if not s.issubset(options): raise StandardError, options.difference(s) else: if not set(CONFIGURATION_REQUIRED).issubset(options): raise StandardError, set(CONFIGURATION_REQUIRED).difference(options) except StandardError as msg: warning_messages.append( """ ***************************************************** ** WARNING: FFMPEG CONFIGURATION MISSING %s ***************************************************** ** You do not seem to have FFmpeg configured with ** ** the minimum video codecs to run the demo site. ** ** Please install the necessary libraries and ** ** re-install FFmpeg according to the Invenio ** ** installation manual (INSTALL). 
 *****************************************************
 """ % (msg))
 
 if warning_messages:
     print("""
 ******************************************************
 **                 WARNING MESSAGES                 **
 ******************************************************
 """)
     for warning in warning_messages:
         print(warning)
 
 if error_messages:
     print("""
 ******************************************************
 **                  ERROR MESSAGES                  **
 ******************************************************
 """)
     for error in error_messages:
         print(error)
 
 if warning_messages and error_messages:
     print("""
 There were %(n_err)s error(s) found that you need to solve.
 Please see above, solve them, and re-run configure.
 Note that there are also %(n_wrn)s warnings you may want
 to look into.  Aborting the installation.
 """ % {'n_wrn': len(warning_messages), 'n_err': len(error_messages)})
     sys.exit(1)
 elif error_messages:
     print("""
 There were %(n_err)s error(s) found that you need to solve.
 Please see above, solve them, and re-run configure.
 Aborting the installation.
 """ % {'n_err': len(error_messages)})
     sys.exit(1)
 elif warning_messages:
     print("""
 There were %(n_wrn)s warnings found that you may want to look into,
 solve, and re-run configure before you continue the installation.
 However, you can also continue the installation now and solve these
 issues later, if you wish.
 """ % {'n_wrn': len(warning_messages)})
-    wait_for_user("Press ENTER to continue the installation...")
diff --git a/invenio/legacy/bibauthority/config.py b/invenio/legacy/bibauthority/config.py
index a7af7012f..4b2b0504b 100644
--- a/invenio/legacy/bibauthority/config.py
+++ b/invenio/legacy/bibauthority/config.py
@@ -1,147 +1,147 @@
 ## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
 
 # CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
 # the authority record field containing the authority record control number
 CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD = '035__a'
 
 # Separator to be used in control numbers to separate the authority type
-# PREFIX (e.g. "INSTITUTION") from the control_no (e.g. "(CERN)abc123"
+# PREFIX (e.g. "INSTITUTE") from the control_no (e.g. "(CERN)abc123"
 CFG_BIBAUTHORITY_PREFIX_SEP = '|'
 
 # the ('980__a') string that identifies an authority record
 CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER = 'AUTHORITY'
 
 # the name of the authority collection.
 # This is needed for searching within the authority record collection.
-CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME = 'Authority Records'
+CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME = 'Authorities'
 
 # CFG_BIBAUTHORITY_TYPE_NAMES
 # Some administrators may want to be able to change the names used for the
 # authority types. Although the keys of this dictionary are hard-coded into
 # Invenio, the values are not and can therefore be changed to match whatever
 # values are to be used in the MARC records.
 # WARNING: These values shouldn't be changed on a running INVENIO installation
 # ... since the same values are hard coded into the MARC data,
 # ... including the 980__a subfields of all authority records
 # ... and the $0 subfields of the bibliographic fields under authority control
 CFG_BIBAUTHORITY_TYPE_NAMES = {
-    'INSTITUTION': 'INSTITUTION',
+    'INSTITUTE': 'INSTITUTE',
     'AUTHOR': 'AUTHOR',
     'JOURNAL': 'JOURNAL',
     'SUBJECT': 'SUBJECT',
 }
 
 # CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC
 # 1. tells us which bibliographic subfields are under authority control
 # 2. tells us which bibliographic subfields refer to which type of
 # ... authority record (must conform to the keys of CFG_BIBAUTHORITY_TYPE_NAMES)
 # Note: if you want to add a new tag here you should also append the
 # ... appropriate tag to the miscellaneous index on the BibIndex Admin Site
 CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC = {
     '100__a': 'AUTHOR',
-    '100__u': 'INSTITUTION',
-    '110__a': 'INSTITUTION',
+    '100__u': 'INSTITUTE',
+    '110__a': 'INSTITUTE',
     '130__a': 'JOURNAL',
     '150__a': 'SUBJECT',
-    '260__b': 'INSTITUTION',
+    '260__b': 'INSTITUTE',
     '700__a': 'AUTHOR',
-    '700__u': 'INSTITUTION',
+    '700__u': 'INSTITUTE',
 }
 
 # CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY
 # Tells us which authority record subfields are under authority control
 # used by autosuggest feature in BibEdit
 # authority record subfields use the $4 field for the control_no (not $0)
 CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY = {
     '500__a': 'AUTHOR',
-    '510__a': 'INSTITUTION',
+    '510__a': 'INSTITUTE',
     '530__a': 'JOURNAL',
     '550__a': 'SUBJECT',
-    '909C1u': 'INSTITUTION', # used in bfe_affiliation
-    '920__v': 'INSTITUTION', # used by FZ Juelich demo data
+    '909C1u': 'INSTITUTE', # used in bfe_affiliation
+    '920__v': 'INSTITUTE', # used by FZ Juelich demo data
 }
 
 # constants for CFG_BIBEDIT_AUTOSUGGEST_TAGS
 # CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA for alphabetical sorting
 # ... of drop-down suggestions
 # CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR for sorting of drop-down
 # ... suggestions according to a popularity ranking
 CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA = 'alphabetical'
 CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR = 'by popularity'
 
 # CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG
 # some additional configuration for auto-suggest drop-down
 # 'field' : which logical or MARC field to use for this
 # ... auto-suggest type
 # 'insert_here_field' : which authority record field to use
 # ... for insertion into the auto-completed bibedit field
 # 'disambiguation_fields': an ordered list of fields to use
 # ... in case multiple suggestions have the same 'insert_here_field' values
 # TODO: 'sort_by'. This has not been implemented yet !
 CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG = {
     'AUTHOR': {
         'field': 'authorityauthor',
         'insert_here_field': '100__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
         'disambiguation_fields': ['100__d', '270__m'],
     },
-    'INSTITUTION':{
-        'field': 'authorityinstitution',
+    'INSTITUTE':{
+        'field': 'authorityinstitute',
         'insert_here_field': '110__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
         'disambiguation_fields': ['270__b'],
     },
     'JOURNAL':{
         'field': 'authorityjournal',
         'insert_here_field': '130__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
     },
     'SUBJECT':{
         'field': 'authoritysubject',
         'insert_here_field': '150__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
     },
 }
 
 # list of authority record fields to index for each authority record type
 # R stands for 'repeatable'
 # NR stands for 'non-repeatable'
 CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX = {
     'AUTHOR': [
         '100__a', #Personal Name (NR, NR)
         '100__d', #Year of birth or other dates (NR, NR)
         '100__q', #Fuller form of name (NR, NR)
         '400__a', #(See From Tracing) (R, NR)
         '400__d', #(See From Tracing) (R, NR)
         '400__q', #(See From Tracing) (R, NR)
     ],
-    'INSTITUTION': [
+    'INSTITUTE': [
         '110__a', #(NR, NR)
         '410__a', #(R, NR)
     ],
     'JOURNAL': [
         '130__a', #(NR, NR)
         '130__f', #(NR, NR)
         '130__l', #(NR, NR)
         '430__a', #(R, NR)
     ],
     'SUBJECT': [
         '150__a', #(NR, NR)
         '450__a', #(R, NR)
     ],
 }
diff --git a/invenio/legacy/bibauthority/doc/admin/bibauthority-admin-guide.webdoc b/invenio/legacy/bibauthority/doc/admin/bibauthority-admin-guide.webdoc
index 2919f865d..3fc00ad81 100644
--- a/invenio/legacy/bibauthority/doc/admin/bibauthority-admin-guide.webdoc
+++ b/invenio/legacy/bibauthority/doc/admin/bibauthority-admin-guide.webdoc
@@ -1,171 +1,171 @@
 ## This file is part of Invenio.
-## Copyright (C) 2010, 2011 CERN.
+## Copyright (C) 2010, 2011, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Introduction

The INVENIO admin can configure the various ways in which authority control works for INVENIO by means of the bibauthority_config.py file. The location and full contents of this configuration file, with a commented example configuration, are shown at the bottom of this page. The functionality of the individual settings is explained in the following paragraphs.

For examples of how Authority Control works in Invenio from a user's perspective, cf. _(HOWTO Manage Authority Records)_.

Enforcing types of authority records

Out of the box, INVENIO is agnostic about the types of authority records it contains. Everything it needs to know about authority records comes, on the one hand, from the authority record types contained in the '980__a' fields and, on the other hand, from the configurations related to these types. Whereas the '980__a' values are usually edited by the librarians, the INVENIO configuration is the responsibility of the administrator. It is important for librarians and administrators to communicate the exact authority record types as well as the desired functionality relative to these types for the various INVENIO modules.

BibEdit

As admin of an INVENIO instance, you can configure which fields are under authority control. In the “Configuration File Overview” at the end of this page you will find an example of a configuration which enables the auto-complete functionality for the '100__a', '100__u', '110__a', '130__a', '150__a', '700__a' and '700__u' fields of a bibliographic record in BibEdit. The keys of the “CFG BIBAUTHORITY CONTROLLED FIELDS” dictionary indicate which bibliographic fields are under authority control. If the user types Ctrl-Shift-A while typing within one of these fields, BibEdit will propose an auto-complete drop-down list. The user still has the option to enter values manually without use of the drop-down list. The values associated with each key of the dictionary indicate which kind of authority record is to be associated with this field. In the example given, the '100__a' field is associated with the authority record type 'AUTHOR'.
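
For instance, with the example configuration below in place, a quick way to check which authority record type governs a given bibliographic subfield is to query this dictionary directly (a minimal sketch using the configuration module shipped with this version):

 from invenio.legacy.bibauthority.config import \
     CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC as CONTROLLED

 print(CONTROLLED.get('100__a'))  # 'AUTHOR'
 print('245__a' in CONTROLLED)    # False: the title is not under authority control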

The “CFG BIBAUTHORITY AUTOSUGGEST CONFIG” dictionary gives us the remaining configurations, specific only to the auto-suggest functionality. The value for the 'field' key determines which index field will be used to find the authority records that will populate the drop-down with a list of suggestions (cf. the following paragraph on configuring BibIndex for authority records). The value of the 'insert_here_field' key determines which authority record field contains the value that should be used both for constructing the strings of the entries in the drop-down list and as the value to be inserted directly into the edited subfield if the user clicks on one of the drop-down entries. Finally, the value for the 'disambiguation_fields' key is an ordered list of authority record fields that are used, in the order in which they appear in the list, to disambiguate between authority records with exactly the same value in their 'insert_here_field'. (A small sketch of how such an entry could be composed follows.)
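
To illustrate how these pieces fit together, here is a minimal sketch of how a single drop-down entry could be composed from 'insert_here_field' and 'disambiguation_fields'. The helper compose_dropdown_entry() is hypothetical (the actual BibEdit implementation differs); get_fieldvalues() is the real bibrecord API used elsewhere in this patch:

 # Hypothetical sketch: compose one auto-suggest drop-down entry.
 from invenio.legacy.bibrecord import get_fieldvalues

 def compose_dropdown_entry(recID, config):
     """Build the string shown in the drop-down for one authority recID."""
     # the main value comes from the 'insert_here_field' of the authority record
     values = get_fieldvalues(recID, config['insert_here_field'])
     entry = values and values[0] or ''
     # append the disambiguation values in the configured order
     for tag in config.get('disambiguation_fields', []):
         extra = get_fieldvalues(recID, tag)
         if extra:
             entry += ' (%s)' % extra[0]
     return entry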

BibIndex

-As an admin of INVENIO, you have the possibility of configuring how indexing works in regards to authority records that are referenced by bibliographic records. When a bibliographic record is indexed for a particular index type, and if that index type contains MARC fields which are under authority control in this particular INVENIO instance (as configured by the, “CFG BIBAUTHORITY CONTROLLED FIELDS” dictionary in the bibauthority_config.py configuration file, mentioned above), then the indexer will include authority record data from specific MARC fields of these authority records in the same index. Which authority record fields are to be used to enrich the indexes for bibliographic records can be configured by the “CFG BIBAUTHORITY AUTHORITY SUBFIELDS TO INDEX” dictionary. In the example below each of the 4 authority record types ('AUTHOR', 'INSTITUTION', 'JOURNAL' and 'SUBJECT') is given a list of authority record MARC fields which are to be scanned for data that is to be included in the indexed terms of the dependent bibliographic records. For the 'AUTHOR' authority records, the example specifies that the values of the fields '100__a', '100__d', '100__q', '400__a', '400__d', and '400__q' (i.e. name, alternative names, and year of birth) should all be included in the data to be indexed for any bibliographic records referencing these authority records in their authority-controlled subfields.

+As an admin of INVENIO, you have the possibility of configuring how indexing works with regard to authority records that are referenced by bibliographic records. When a bibliographic record is indexed for a particular index type, and if that index type contains MARC fields which are under authority control in this particular INVENIO instance (as configured by the “CFG BIBAUTHORITY CONTROLLED FIELDS” dictionary in the bibauthority_config.py configuration file, mentioned above), then the indexer will include authority record data from specific MARC fields of these authority records in the same index. Which authority record fields are to be used to enrich the indexes for bibliographic records can be configured by the “CFG BIBAUTHORITY AUTHORITY SUBFIELDS TO INDEX” dictionary. In the example below each of the 4 authority record types ('AUTHOR', 'INSTITUTE', 'JOURNAL' and 'SUBJECT') is given a list of authority record MARC fields which are to be scanned for data that is to be included in the indexed terms of the dependent bibliographic records. For the 'AUTHOR' authority records, the example specifies that the values of the fields '100__a', '100__d', '100__q', '400__a', '400__d', and '400__q' (i.e. name, alternative names, and year of birth) should all be included in the data to be indexed for any bibliographic records referencing these authority records in their authority-controlled subfields.

Configuration File Overview

The configuration file for the BibAuthority module is the invenio.legacy.bibauthority.config module (invenio/legacy/bibauthority/config.py). Below is a commented example configuration to show how one would typically configure the parameters for BibAuthority. The details of how this works were explained in the paragraphs above.

 # CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD
 # the authority record field containing the authority record control number
 CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD = '035__a'
 
 # Separator to be used in control numbers to separate the authority type
-# PREFIX (e.g. "INSTITUTION") from the control_no (e.g. "(CERN)abc123"
+# PREFIX (e.g. "INSTITUTE") from the control_no (e.g. "(CERN)abc123"
 CFG_BIBAUTHORITY_PREFIX_SEP = '|'
 
 # the ('980__a') string that identifies an authority record
 CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER = 'AUTHORITY'
 
 # the name of the authority collection.
 # This is needed for searching within the authority record collection.
 CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME = 'Authorities'
 
 # used in log file and regression tests
 CFG_BIBAUTHORITY_BIBINDEX_UPDATE_MESSAGE = \
     "Indexing records dependent on modified authority records"
 
 # CFG_BIBAUTHORITY_TYPE_NAMES
 # Some administrators may want to be able to change the names used for the
 # authority types. Although the keys of this dictionary are hard-coded into
 # Invenio, the values are not and can therefore be changed to match whatever
 # values are to be used in the MARC records.
 # WARNING: These values shouldn't be changed on a running INVENIO installation
 # ... since the same values are hard coded into the MARC data,
 # ... including the 980__a subfields of all authority records
 # ... and the $0 subfields of the bibliographic fields under authority control
 CFG_BIBAUTHORITY_TYPE_NAMES = {
-    'INSTITUTION': 'INSTITUTION',
+    'INSTITUTE': 'INSTITUTE',
     'AUTHOR': 'AUTHOR',
     'JOURNAL': 'JOURNAL',
     'SUBJECT': 'SUBJECT',
 }
 
 # CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC
 # 1. tells us which bibliographic subfields are under authority control
 # 2. tells us which bibliographic subfields refer to which type of
 # ... authority record (must conform to the keys of CFG_BIBAUTHORITY_TYPE_NAMES)
 CFG_BIBAUTHORITY_CONTROLLED_FIELDS_BIBLIOGRAPHIC = {
     '100__a': 'AUTHOR',
-    '100__u': 'INSTITUTION',
-    '110__a': 'INSTITUTION',
+    '100__u': 'INSTITUTE',
+    '110__a': 'INSTITUTE',
     '130__a': 'JOURNAL',
     '150__a': 'SUBJECT',
-    '260__b': 'INSTITUTION',
+    '260__b': 'INSTITUTE',
     '700__a': 'AUTHOR',
-    '700__u': 'INSTITUTION',
+    '700__u': 'INSTITUTE',
 }
 
 # CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY
 # Tells us which authority record subfields are under authority control
 # used by autosuggest feature in BibEdit
 # authority record subfields use the $4 field for the control_no (not $0)
 CFG_BIBAUTHORITY_CONTROLLED_FIELDS_AUTHORITY = {
     '500__a': 'AUTHOR',
-    '510__a': 'INSTITUTION',
+    '510__a': 'INSTITUTE',
     '530__a': 'JOURNAL',
     '550__a': 'SUBJECT',
-    '909C1u': 'INSTITUTION', # used in bfe_affiliation
-    '920__v': 'INSTITUTION', # used by FZ Juelich demo data
+    '909C1u': 'INSTITUTE', # used in bfe_affiliation
+    '920__v': 'INSTITUTE', # used by FZ Juelich demo data
 }
 
 # constants for CFG_BIBEDIT_AUTOSUGGEST_TAGS
 # CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA for alphabetical sorting
 # ... of drop-down suggestions
 # CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR for sorting of drop-down
 # ... suggestions according to a popularity ranking
 CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA = 'alphabetical'
 CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR = 'by popularity'
 
 # CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG
 # some additional configuration for auto-suggest drop-down
 # 'field' : which logical or MARC field to use for this
 # ... auto-suggest type
 # 'insert_here_field' : which authority record field to use
 # ... for insertion into the auto-completed bibedit field
 # 'disambiguation_fields': an ordered list of fields to use
 # ... in case multiple suggestions have the same 'insert_here_field' values
 # TODO: 'sort_by'. This has not been implemented yet !
 CFG_BIBAUTHORITY_AUTOSUGGEST_CONFIG = {
     'AUTHOR': {
         'field': 'authorityauthor',
         'insert_here_field': '100__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
         'disambiguation_fields': ['100__d', '270__m'],
     },
-    'INSTITUTION':{
-        'field': 'authorityinstitution',
+    'INSTITUTE':{
+        'field': 'authorityinstitute',
         'insert_here_field': '110__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
         'disambiguation_fields': ['270__b'],
     },
     'JOURNAL':{
         'field': 'authorityjournal',
         'insert_here_field': '130__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_POPULAR,
     },
     'SUBJECT':{
         'field': 'authoritysubject',
         'insert_here_field': '150__a',
         'sort_by': CFG_BIBAUTHORITY_AUTOSUGGEST_SORT_ALPHA,
     },
 }
 
 # list of authority record fields to index for each authority record type
 # R stands for 'repeatable'
 # NR stands for 'non-repeatable'
 CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX = {
     'AUTHOR': [
         '100__a', #Personal Name (NR, NR)
         '100__d', #Year of birth or other dates (NR, NR)
         '100__q', #Fuller form of name (NR, NR)
         '400__a', #(See From Tracing) (R, NR)
         '400__d', #(See From Tracing) (R, NR)
         '400__q', #(See From Tracing) (R, NR)
     ],
-    'INSTITUTION': [
+    'INSTITUTE': [
         '110__a', #(NR, NR)
         '410__a', #(R, NR)
     ],
     'JOURNAL': [
         '130__a', #(NR, NR)
         '130__f', #(NR, NR)
         '130__l', #(NR, NR)
         '430__a', #(R, NR)
     ],
     'SUBJECT': [
         '150__a', #(NR, NR)
         '450__a', #(R, NR)
     ],
 }
 
diff --git a/invenio/legacy/bibauthority/doc/hacking/bibauthority-internals.webdoc b/invenio/legacy/bibauthority/doc/hacking/bibauthority-internals.webdoc
index 5e1cfdf53..4f3203c8e 100644
--- a/invenio/legacy/bibauthority/doc/hacking/bibauthority-internals.webdoc
+++ b/invenio/legacy/bibauthority/doc/hacking/bibauthority-internals.webdoc
@@ -1,254 +1,254 @@
 ## -*- mode: html; coding: utf-8; -*-
 ## This file is part of Invenio.
-## Copyright (C) 2011 CERN.
+## Copyright (C) 2011, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Here you will find a few explanations of the inner workings of BibAuthority.

Indexing

Introduction

There are two cases that need special attention when indexing bibliographic data that contains references to authority records. The first case is relatively simple and requires enriching the bibliographic data with data from authority records whenever a bibliographic record is being indexed. The second is a bit more complex, for it requires detecting which bibliographic records should be re-indexed, based on referenced authority records having been updated within a given date range.

Indexing by record ID, by modification date or by index type

First of all, we need to say something about how INVENIO lets the admin index the data. INVENIO's indexer (BibIndex) is always run as a task that is executed by INVENIO's scheduler (BibSched). Typically, this is done either by scheduling a bibindex task from the command line (manually), or as part of a periodic task (BibTask) run directly from BibSched, typically every 5 minutes. In case it is run manually, the user has the option of specifying certain record IDs to be re-indexed, e.g. by specifying ranges of IDs or collections to be re-indexed. In this case, the selected records are re-indexed whether or not there were any modifications to the data. Alternatively, the user can specify a date range, in which case the indexer will search for all the record IDs that have been modified in the selected date range (by default, the date range covers all IDs modified since the last time the indexer was run) and update the index only for those records. As a third option, the user can specify specific types of indexes. INVENIO lets you search by different criteria (e.g. 'any field', 'title', 'author', 'abstract', 'keyword', 'journal', 'year', 'fulltext', …), and each of these criteria corresponds to a separate index, indexing only the data from the relevant MARC subfields. Normally, the indexer updates all index types for any given record ID, but with this third option, the user can limit the re-indexing to only specific types of indexes if desired.

Note: In reality, INVENIO creates not 1 but 6 different indexes per index type: 3 forward indexes (mapping words, pairs or phrases to record IDs) and 3 reverse indexes (mapping record IDs to words, pairs or phrases). The word, pair and phrase indexes are used to optimize search speed depending on whether the user searches for words, sub-phrases or entire phrases. These details are, however, not relevant for BibAuthority. It simply finds the values to be indexed and passes them on to the indexer, which indexes them as if they were data coming directly from the bibliographic record.
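
To give an idea of the three granularities, here is a toy tokenizer; the real BibIndex tokenizers are considerably more elaborate (stemming, stopwords, punctuation rules), so this is an illustration only:

 # Toy illustration of the word / pair / phrase term granularities.
 def tokenize(phrase):
     words = phrase.lower().split()
     pairs = [' '.join(words[i:i + 2]) for i in range(len(words) - 1)]
     return words, pairs, [phrase.lower()]

 words, pairs, phrases = tokenize("field theory scalar")
 print(words)    # ['field', 'theory', 'scalar']
 print(pairs)    # ['field theory', 'theory scalar']
 print(phrases)  # ['field theory scalar']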

Enriching the index data – simple case

Once the indexer knows which record ID (and optionally, which index type) to re-index, including authority data is simply a question of checking whether the MARC subfields currently being indexed are under authority control (as specified in the BibAuthority configuration file). If they are, the indexer applies the following (pseudo-)algorithm, which fetches the necessary data from the referenced authority records:

For each subfield and each record ID currently being re-indexed:
    If the subfield is under authority control (→ config file):
        Get the type of referenced authority record expected for this field.
        For each authority record control number found in the corresponding
        'XXX__0' subfields and matching the expected authority record type
        (control number prefix):
            Find the authority record ID (MARC field '001' control number)
            corresponding to the authority record control number (as contained
            in MARC field '035' of the authority record).
            For each authority record subfield marked as index-relevant for
            the given type (→ config file):
                Add the values of these subfields to the list of values to be
                returned and used for enriching the indexed strings.

The strings collected with this algorithm are simply added to the strings already found by the indexer in the regular bibliographic record MARC data. Once all the strings are collected, the indexer goes on with the usual operation, parsing them 3 different times, once for phrases, once for word-pairs, once for words, which are used to populate the 6 forward and reverse index tables in the database.
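
In the code, this collection step corresponds roughly to get_index_strings_by_control_no() in invenio/legacy/bibauthority/engine.py. Here is a condensed sketch reusing the engine's real helpers and configuration (only the wrapper name authority_strings_for() is invented here):

 from invenio.legacy.bibauthority.config import \
     CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX
 from invenio.legacy.bibauthority.engine import \
     get_low_level_recIDs_from_control_no, get_type_from_control_no
 from invenio.legacy.bibrecord import get_fieldvalues

 def authority_strings_for(control_no):
     """Collect index-relevant strings from the referenced authority record."""
     strings = []
     # the index-relevant tags depend on the authority record type, which is
     # encoded as the prefix of the control number (e.g. 'AUTHOR')
     tags = CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX.get(
         get_type_from_control_no(control_no), [])
     for recID in get_low_level_recIDs_from_control_no(control_no):
         for tag in tags:
             strings.extend(get_fieldvalues(recID, tag))
     return strings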

Updating the index by date range

When a bibindex task is created by date range, we are presented with a trickier situation, which requires more complex treatment to work properly. As long as the bibindex task is configured to index by record ID, the simple algorithm described above is enough to properly index the authority data along with the data from bibliographic records. This is true also if we use the third option described above, specifying the particular index type to re-index with the bibindex task. However, if we launch a bibindex task based on a date range (by default the date range covers the time since the last bibindex run for each of the index types), bibindex would have no way of knowing that it must update the index for a specific bibliographic record if one of the authority records it references was modified in the specified date range. This would lead to incomplete indexes.

A first idea was to modify the time-stamp of the dependent bibliographic records as soon as an authority record is modified. Every MARC record in INVENIO has a 'modification_date' time-stamp which indicates to the indexer when this record was last modified. If we search for dependent bibliographic records every time we modify an authority record, and if we then update the 'modification_date' time-stamp for each of these dependent bibliographic records, then we can be sure that the indexer will find and re-index these bibliographic records as well when indexing by a specified date range. The problem with this approach is performance. If we update the time-stamp of a bibliographic record, this record will be re-indexed for all of the mentioned index types ('author', 'abstract', 'fulltext', etc.), even though many of them may not cover MARC subfields that are under authority control, and hence re-indexing them because of a change in an authority record would be quite useless. An INVENIO installation typically has 15-30 index types. Imagine you make a change to a 'journal' authority record and only 1 out of the 20+ index types is for 'journal'. INVENIO would be re-indexing 20+ index types instead of only the 1 index type which is relevant to the type of the changed authority record.

There are two approaches that could solve this problem equally well. The first approach would require checking – for each authority record ID which is to be re-indexed – whether there are any dependent bibliographic records that need to be re-indexed as well. If done in the right manner, this approach would only re-index the necessary index types that can contain information from referenced authority records, and the user could still specify the index type to be re-indexed and the right bibliographic records would still be found. The second approach works the other way around. Instead of waiting until we find a recently modified authority record and then looking for dependent bibliographic records, we directly launch a search for bibliographic records containing links to recently updated authority records and add the record IDs found in this way to the list of record IDs that need to be re-indexed.

Of the two approaches, the second one was chosen, based solely upon considerations of integration into existing INVENIO code. As indexing in INVENIO currently works, it is more natural and more readable to apply the second method than the first.

According to the second method, the pseudo-algorithm for finding the bibliographic record IDs that need to be updated based upon recently modified authority records in a given date range looks like this:

For each index-type to re-index:
    For each subfield concerned by the index-type:
        If the subfield is under authority control (→ config file):
            Get the type of authority record associated with this field.
            Get all of the record IDs for authority records updated in the
            specified date range.
            For each record ID:
                Get the authority record control numbers of this record ID.
                For each authority record control number:
                    Search for and add the record IDs of bibliographic records
                    containing this control number (with the type in the
                    prefix) in the 'XXX__0' subfield corresponding to the
                    current subfield to the list of record IDs to be returned
                    to the caller and marked as needing re-indexing.



The record IDs returned in this way are added to the record IDs that need to be re-indexed (by date range) and then the rest of the indexing can run as usual.
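
Here is a Python sketch of this second approach, reusing helpers from the BibAuthority engine shown elsewhere in this patch; the callable authority_recIDs_modified_in(), standing for the date-range query over the authority records' 'modification_date' time-stamps, is hypothetical:

 from invenio.legacy.bibauthority.engine import \
     get_control_nos_from_recID, get_dependent_records_for_control_no

 def recIDs_to_reindex(date_from, date_to, authority_recIDs_modified_in):
     """Return bibliographic recIDs referencing recently updated authorities."""
     recIDs = set()
     # authority_recIDs_modified_in is a hypothetical callable implementing
     # the date-range query over authority records
     for auth_recID in authority_recIDs_modified_in(date_from, date_to):
         for control_no in get_control_nos_from_recID(auth_recID):
             recIDs.update(get_dependent_records_for_control_no(control_no))
     return recIDs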

Implementation specifics

The pseudo-algorithms described above were used as described in this document, but were not each implemented in a single function. In order for parts of them to be reusable, and for the various parts to be properly integrated into existing Python modules with similar functionality (e.g. auxiliary search functions were added to INVENIO's search_engine.py code), the pseudo-algorithms were split up into multiple nested function calls and integrated where they seemed to best fit the existing code base of INVENIO. In the case of the pseudo-algorithm described in “Updating the index by date range”, the very choice of the algorithm had already depended on how best to integrate it into the existing code for date-range related indexing.

Cross-referencing between MARC records

In order to reference authority records, we use alphanumeric strings stored in the $0 subfields of fields that contain other, authority-controlled subfields as well. The format of these alphanumeric strings for INVENIO is in part determined by the MARC standard itself, which states that:

Subfield $0 contains the system control number of the related authority record, or a standard identifier such as an International Standard Name Identifier (ISNI). The control number or identifier is preceded by the appropriate MARC Organization code (for a related authority record) or the Standard Identifier source code (for a standard identifier scheme), enclosed in parentheses. See MARC Code List for Organizations for a listing of organization codes and Standard Identifier Source Codes for code systems for standard identifiers. Subfield $0 is repeatable for different control numbers or identifiers.

An example of such a string could be “(SzGeCERN)abc1234”, where “SzGeCERN” would be the MARC organization code, and abc1234 would be the unique identifier for this authority record within the given organization.

Since it is possible for a single field (e.g. field '100') to have multiple $0 subfields for the same field entry, we need a way to specify which $0 subfield reference is associated with which other subfield of the same field entry.

For example, imagine that in bibliographic records both '700__a' ('other author' name) as well as '700__u' ('other author' affiliation) are under authority control. In this case we would have two '700__0' subfields. One of them would reference the author authority record (for the name),
-the other one would reference an institution authority record
+the other one would reference an institute authority record
(for the affiliation). INVENIO needs some way to know which $0 subfield is associated with the $a subfield and which one with the $u subfield.

We have chosen to solve this in the following way. Every $0 subfield value will not only contain the authority record control number, but in addition will be prefixed by
-the type of authority record (e.g. 'AUTHOR', 'INSTITUTION', 'JOURNAL'
+the type of authority record (e.g. 'AUTHOR', 'INSTITUTE', 'JOURNAL'
or 'SUBJECT'), separated from the control number by a separator, e.g. ':' (configurable). A possible $0 subfield value could therefore be: “author:(SzGeCERN)abc1234”. This will allow INVENIO to know that the $0 subfield containing “author:(SzGeCERN)abc1234” is associated with the $a subfield (author's name), containing e.g. “Ellis, John”, whereas the $0
-subfield containing “institution:(SzGeCERN)xyz4321” is associated
-with the $u subfield (author's affiliation/institution) of the same
+subfield containing “institute:(SzGeCERN)xyz4321” is associated
+with the $u subfield (author's affiliation/institute) of the same
field entry, containing e.g. “CERN”.
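
A minimal sketch of splitting such a $0 value into its type prefix and control number (compare get_type_from_control_no() in the BibAuthority engine; note that the separator is ':' in the example above, while the shipped configuration uses '|'):

 CFG_BIBAUTHORITY_PREFIX_SEP = ':'  # as in the example above; configurable

 def split_control_no(value):
     """'author:(SzGeCERN)abc1234' -> ('author', '(SzGeCERN)abc1234')"""
     if CFG_BIBAUTHORITY_PREFIX_SEP in value:
         return tuple(value.split(CFG_BIBAUTHORITY_PREFIX_SEP, 1))
     return ('', value)

 print(split_control_no('author:(SzGeCERN)abc1234'))
 # ('author', '(SzGeCERN)abc1234')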

diff --git a/invenio/legacy/bibauthority/engine.py b/invenio/legacy/bibauthority/engine.py
index 3b9cf1939..83eb98cb1 100644
--- a/invenio/legacy/bibauthority/engine.py
+++ b/invenio/legacy/bibauthority/engine.py
@@ -1,289 +1,288 @@
 ## This file is part of Invenio.
-## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
 
 # pylint: disable=C0103
 """Invenio BibAuthority Engine."""
 
 from invenio.legacy.bibauthority.config import \
     CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD, \
     CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX, \
     CFG_BIBAUTHORITY_PREFIX_SEP
 
 import re
 from invenio.ext.logging import register_exception
 from invenio.legacy.search_engine import search_pattern, \
     record_exists
 from invenio.legacy.bibrecord import get_fieldvalues
 from invenio.legacy.bibauthority.config import \
     CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER
 
 def is_authority_record(recID):
     """
     returns whether recID is an authority record
 
     @param recID: the record id to check
     @type recID: int
 
     @return: True or False
     """
     # low-level: don't use possibly indexed logical fields !
     return recID in search_pattern(p='980__a:AUTHORITY')
 
 def get_dependent_records_for_control_no(control_no):
     """
     returns a list of recIDs that refer to an authority record containing
     the given control_no. E.g. if an authority record has the control number
     "AUTHOR:(CERN)aaa0005" in its '035__a' subfield, then this function will
     return all recIDs of records that contain any 'XXX__0' subfield
     containing "AUTHOR:(CERN)aaa0005"
 
     @param control_no: the control number for an authority record
     @type control_no: string
 
     @return: list of recIDs
     """
     # We don't want to return the recID whose control number is control_no
     myRecIDs = _get_low_level_recIDs_intbitset_from_control_no(control_no)
     # Use search_pattern, since we want to find records from both bibliographic
     # as well as authority record collections
     return list(search_pattern(p='"' + control_no + '"') - myRecIDs)
 
 def get_dependent_records_for_recID(recID):
     """
     returns a list of recIDs that refer to an authority record containing
     the given record ID.
 
     'type' is a string (e.g. "AUTHOR") referring to the type of authority
     record
 
     @param recID: the record ID for the authority record
     @type recID: int
 
     @return: list of recIDs
     """
     recIDs = []
     # get the control numbers
     control_nos = get_control_nos_from_recID(recID)
     for control_no in control_nos:
         recIDs.extend(get_dependent_records_for_control_no(control_no))
     return recIDs
 
 def guess_authority_types(recID):
     """
-    guesses the type(s) (e.g. AUTHOR, INSTITUTION, etc.)
+    guesses the type(s) (e.g. AUTHOR, INSTITUTE, etc.)
     of an authority record (should only have one value)
 
     @param recID: the record ID of the authority record
     @type recID: int
 
     @return: list of strings
     """
     types = get_fieldvalues(recID,
                             '980__a',
                             repetitive_values=False) # remove possible duplicates !
 
     #filter out unwanted information
     while CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER in types:
         types.remove(CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_IDENTIFIER)
     types = [_type for _type in types if _type.isalpha()]
 
     return types
 
 def get_low_level_recIDs_from_control_no(control_no):
     """
     returns the list of EXISTING record ID(s) of the authority records
     corresponding to the given (INVENIO) MARC control_no
     (e.g. 'AUTHOR:(XYZ)abc123')
     (NB: the list should normally contain exactly 1 element)
 
     @param control_no: a (INVENIO) MARC internal control_no to an authority record
     @type control_no: string
 
     @return: list containing the record ID(s) of the referenced authority
         record (should be only one)
     """
     # values returned
 #    recIDs = []
     #check for correct format for control_no
 #    control_no = ""
 #    if CFG_BIBAUTHORITY_PREFIX_SEP in control_no:
 #        auth_prefix, control_no = control_no.split(CFG_BIBAUTHORITY_PREFIX_SEP);
 #        #enforce expected enforced_type if present
 #        if (enforced_type is None) or (auth_prefix == enforced_type):
 #            #low-level search needed e.g. for bibindex
 #            hitlist = search_pattern(p='980__a:' + auth_prefix)
 #            hitlist &= _get_low_level_recIDs_intbitset_from_control_no(control_no)
 #            recIDs = list(hitlist)
 
     recIDs = list(_get_low_level_recIDs_intbitset_from_control_no(control_no))
 
     # filter out "DELETED" recIDs
     recIDs = [recID for recID in recIDs if record_exists(recID) > 0]
 
     # normally there should be exactly 1 authority record per control_number
     _assert_unique_control_no(recIDs, control_no)
 
     # return
     return recIDs
 
 #def get_low_level_recIDs_from_control_no(control_no):
 #    """
 #    Wrapper function for _get_low_level_recIDs_intbitset_from_control_no()
 #    Returns a list of EXISTING record IDs with control_no
 #
 #    @param control_no: an (INVENIO) MARC internal control number to an authority record
 #    @type control_no: string
 #
 #    @return: list (instead of an intbitset)
 #    """
 #    #low-level search needed e.g. for bibindex
 #    recIDs = list(_get_low_level_recIDs_intbitset_from_control_no(control_no))
 #
 #    # filter out "DELETED" recIDs
 #    recIDs = [recID for recID in recIDs if record_exists(recID) > 0]
 #
 #    # normally there should be exactly 1 authority record per control_number
 #    _assert_unique_control_no(recIDs, control_no)
 #
 #    # return
 #    return recIDs
 
 def _get_low_level_recIDs_intbitset_from_control_no(control_no):
     """
     returns the intbitset hitlist of ALL record ID(s) of the authority records
     corresponding to the given (INVENIO) MARC control number
     (e.g. '(XYZ)abc123'), (e.g. from the 035 field) of the authority record.
 
     Note: This function does not filter out DELETED records!!! The caller
     to this function must do this himself.
 
     @param control_no: an (INVENIO) MARC internal control number to an authority record
     @type control_no: string
 
     @return: intbitset containing the record ID(s) of the referenced authority
         record (should be only one)
     """
     #low-level search needed e.g. for bibindex
     hitlist = search_pattern(
         p=CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD + ":" +
         '"' + control_no + '"')
 
     # return
     return hitlist
 
 def _assert_unique_control_no(recIDs, control_no):
     """
     If there is more than one EXISTING recID with control_no, log a warning
 
     @param recIDs: list of record IDs with control_no
     @type recIDs: list of int
 
     @param control_no: the control number of the authority record in question
     @type control_no: string
     """
     if len(recIDs) > 1:
         error_message = \
             "DB inconsistency: multiple rec_ids " + \
             "(" + ", ".join([str(recID) for recID in recIDs]) + ") " + \
             "found for authority record control number: " + control_no
         try:
             raise Exception
         except:
             register_exception(prefix=error_message,
                                alert_admin=True,
                                subject=error_message)
 
 def get_control_nos_from_recID(recID):
     """
     get a list of control numbers from the record ID
 
     @param recID: record ID
     @type recID: int
 
     @return: authority record control number
     """
     return get_fieldvalues(recID,
                            CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD,
                            repetitive_values=False)
 
 def get_type_from_control_no(control_no):
     """simply returns the authority record TYPE prefix contained in
     control_no or else an empty string.
 
     @param control_no: e.g. "AUTHOR:(CERN)abc123"
     @type control_no: string
 
     @return: e.g. "AUTHOR" or ""
     """
     # pattern: any string, followed by the prefix, followed by a parenthesis
     pattern = \
         r'.*' + \
         r'(?=' + re.escape(CFG_BIBAUTHORITY_PREFIX_SEP) + re.escape('(') + r')'
     m = re.match(pattern, control_no)
     return m and m.group(0) or ''
 
 def guess_main_name_from_authority_recID(recID):
     """
     get the main name of the authority record
 
     @param recID: the record ID of the authority record
     @type recID: int
 
     @return: the main name of this authority record (string)
     """
     #tags where the main authority record name can be found
     main_name_tags = ['100__a', '110__a', '130__a', '150__a']
     main_name = ''
     # look for first match only
     for tag in main_name_tags:
         fieldvalues = get_fieldvalues(recID, tag, repetitive_values=False)
         if len(fieldvalues):
             main_name = fieldvalues[0]
             break
     # return first match, if found
     return main_name
 
 def get_index_strings_by_control_no(control_no):
     """extracts the index-relevant strings from the authority record referenced
     by the 'control_no' parameter and returns them as a list of strings
 
     @param control_no: a (INVENIO) MARC internal control_no to an authority record
     @type control_no: string (e.g. 'author:(ABC)1234')
 
     @param expected_type: the type of authority record expected
     @type expected_type: string, e.g. 'author', 'journal' etc.
 
     @return: list of index-relevant strings from the referenced authority record
     """
     from invenio.legacy.bibindex.engine import list_union
 
     #return value
     string_list = []
     #1. get recID and authority type corresponding to control_no
     rec_IDs = get_low_level_recIDs_from_control_no(control_no)
     #2. concatenate and return all the info from the interesting fields for this record
     for rec_id in rec_IDs: # in case we get multiple authority records
         for tag in CFG_BIBAUTHORITY_AUTHORITY_SUBFIELDS_TO_INDEX.get(
                 get_type_from_control_no(control_no)):
             new_strings = get_fieldvalues(rec_id, tag)
             string_list = list_union(new_strings, string_list)
     #return
     return string_list
-
diff --git a/invenio/legacy/bibclassify/doc/admin/bibclassify-admin-guide.webdoc b/invenio/legacy/bibclassify/doc/admin/bibclassify-admin-guide.webdoc
index 649cc4577..9203d2026 100644
--- a/invenio/legacy/bibclassify/doc/admin/bibclassify-admin-guide.webdoc
+++ b/invenio/legacy/bibclassify/doc/admin/bibclassify-admin-guide.webdoc
@@ -1,232 +1,232 @@
 ## -*- mode: html; coding: utf-8; -*-
 ## This file is part of Invenio.
-## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN.
+## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Contents

1. Overview
       1.1 Thesaurus
       1.2 Keyword extraction
2. Running BibClassify

1. Overview

BibClassify automatically extracts keywords from fulltext documents. The automatic assignment of keywords to textual documents has clear benefits in the digital library environment as it aids the cataloguing, classification and retrieval of documents.

1.1 Thesaurus

BibClassify performs an extraction of keywords based on the recurrence of specific terms, taken from a controlled vocabulary. A controlled vocabulary is a thesaurus of all the terms that are relevant in a specific context. When a context is defined by a discipline or branch of knowledge then the vocabulary is said to be a subject thesaurus. Various existing subject thesauri can be found here.

A subject thesaurus can be expressed in several different
-formats. Different institutions/disciplines have developed different
+formats. Different institutes/disciplines have developed different
ways of representing their vocabulary systems. The taxonomy used by bibclassify is expressed in RDF/SKOS. It allows one not only to list keywords but also to specify relations between the keywords and alternative ways to represent the same keyword.

 <Concept rdf:about="http://cern.ch/thesauri/HEP.rdf#scalar">
  <composite rdf:resource="http://cern.ch/thesauri/HEP.rdf#Composite.fieldtheoryscalar"/>
  <prefLabel xml:lang="en">scalar</prefLabel>
  <note xml:lang="en">nostandalone</note>
 </Concept>
 
 <Concept rdf:about="http://cern.ch/thesauri/HEP.rdf#fieldtheory">
  <composite rdf:resource="http://cern.ch/thesauri/HEP.rdf#Composite.fieldtheoryscalar"/>
  <prefLabel xml:lang="en">field theory</prefLabel>
  <altLabel xml:lang="en">QFT</altLabel>
  <hiddenLabel xml:lang="en">/field theor\w*/</hiddenLabel>
  <note xml:lang="en">nostandalone</note>
 </Concept>
 
 <Concept rdf:about="http://cern.ch/thesauri/HEP.rdf#Composite.fieldtheoryscalar">
  <compositeOf rdf:resource="http://cern.ch/thesauri/HEP.rdf#scalar"/>
  <compositeOf rdf:resource="http://cern.ch/thesauri/HEP.rdf#fieldtheory"/>
  <prefLabel xml:lang="en">field theory: scalar</prefLabel>
  <altLabel xml:lang="en">scalar field</altLabel>
 </Concept>
 
In RDF/SKOS, every keyword is wrapped in a concept which encapsulates the full semantics and hierarchical status of a term - including synonyms, alternative forms, broader concepts, notes and so on - rather than just a plain keyword.

The specification of the SKOS language and various manuals that aid the building of a semantic thesaurus can be found at the SKOS W3C website. Furthermore, BibClassify can function on top of an extended version of SKOS, which includes special elements such as key chains, composite keywords and special annotations. The extension of the SKOS language is documented in the hacking guide.

1.2 Keyword extraction

BibClassify computes the keywords of a fulltext document based on the frequency of thesaurus terms in it. In other words, it calculates how many times a thesaurus keyword (and its alternative and hidden labels, defined in the taxonomy) appears in a text, and it ranks the results. Unlike other similar systems, BibClassify does not use any machine learning or AI methodologies - just plain phrase matching using regular expressions: it exploits the conformation and richness of the thesaurus to produce accurate results. It is then clear that BibClassify performs best on top of rich, well-structured subject thesauri expressed in the RDF/SKOS language.
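
As a toy example of this phrase matching, assume a tiny two-entry label table (a real taxonomy supplies preferred, alternative and hidden labels, including regular-expression labels such as /field theor\w*/):

 import re

 # Toy phrase matcher: count occurrences of thesaurus labels in a text.
 # The two entries below are made up for illustration.
 LABELS = {
     'field theory': re.compile(r'\bfield theor\w*', re.IGNORECASE),
     'scalar': re.compile(r'\bscalar\b', re.IGNORECASE),
 }

 def count_keywords(text):
     counts = [(kw, len(rx.findall(text))) for kw, rx in LABELS.items()]
     # rank by decreasing frequency, dropping keywords that never occur
     return sorted([(n, kw) for kw, n in counts if n], reverse=True)

 print(count_keywords("Scalar fields arise in many field theories."))
 # [(1, 'scalar'), (1, 'field theory')]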

A detailed account of the phrase matching mechanisms used by BibClassify is included in the hacking guide.

2. Running BibClassify

 Dependencies. BibClassify requires Python RDFLib in order to process the RDF/SKOS taxonomy.
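
For instance, to check that a taxonomy file parses at all, it can be loaded with rdflib directly; 'thesaurus.rdf' below is a placeholder path:

 import rdflib

 graph = rdflib.Graph()
 graph.parse('thesaurus.rdf')  # an RDF/XML taxonomy, as used by BibClassify
 print('%d triples loaded' % len(graph))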

In order to extract relevant keywords from a document fulltext.pdf based on a controlled vocabulary thesaurus.rdf, you would run BibClassify as follows:

 $ bibclassify.py -k thesaurus.rdf fulltext.pdf
 

Launching bibclassify --help shows the options available for BibClassify:


 Usage: bibclassify [OPTION]... [FILE/URL]...
        bibclassify [OPTION]... [DIRECTORY]...
 Searches keywords in FILEs and/or files in DIRECTORY(ies). If a directory is
 specified, BibClassify will generate keywords for all PDF documents contained
 in the directory.  Can also run in a daemon mode, in which case the files to
 be run are looked for from the database (=records modified since the last run).
 
 General options:
   -h, --help                display this help and exit
   -V, --version             output version information and exit
   -v, --verbose=LEVEL       sets the verbose to LEVEL (=0)
   -k, --taxonomy=NAME       sets the taxonomy NAME. It can be a simple
                             controlled vocabulary or a descriptive RDF/SKOS
                             and can be located in a local file or URL.
 
 Standalone file mode options:
   -o, --output-mode=TYPE    changes the output format to TYPE (text, marcxml or
                             html) (=text)
   -s, --spires              outputs keywords in the SPIRES format
   -n, --keywords-number=INT sets the number of keywords displayed (=20), use 0
                             to set no limit
   -m, --matching-mode=TYPE  changes the search mode to TYPE (full or partial)
                             (=full)
   --detect-author-keywords  detect keywords that are explicitly written in the
                             document
 Daemon mode options:
   -i, --recid=RECID         extract keywords for a record and store into DB
                             (=all necessary ones for pre-defined taxonomies)
   -c, --collection=COLL     extract keywords for a collection and store into DB
                             (=all necessary ones for pre-defined taxonomies)
 
 Taxonomy management options:
   --check-taxonomy          checks the taxonomy and reports warnings and errors
   --rebuild-cache           ignores the existing cache and regenerates it
   --no-cache                don't cache the taxonomy
 
 Backward compatibility options (discouraged):
   -q                        equivalent to -s
   -f FILE URL               sets the file to read the keywords from
 
 Examples (standalone file mode):
     $ bibclassify -k HEP.rdf http://arxiv.org/pdf/0808.1825
     $ bibclassify -k HEP.rdf article.pdf
     $ bibclassify -k HEP.rdf directory/
 
 Examples (daemon mode):
     $ bibclassify -u admin -s 24h -L 23:00-05:00
     $ bibclassify -u admin -i 1234
     $ bibclassify -u admin -c Preprints
 

 NB. BibClassify can run as a CDS Invenio module or as a standalone program. If you already run a server with an Invenio installation, you can simply run /opt/invenio/bin/bibclassify [options]. Otherwise, you can run bibclassify [options] directly from the BibClassify sources.

As an example, running BibClassify on document nucl-th/0204033 using the high-energy physics RDF/SKOS taxonomy (HEP.rdf) would yield the following results (based on the HEP taxonomy from October 10th 2008):


 Input file: 0204033.pdf
 
 Author keywords:
 Dense matter
 Saturation
 Unstable nuclei
 
 Composite keywords:
 10  nucleus: stability [36, 14]
 6  saturation: density [25, 31]
 6  energy: symmetry [35, 11]
 4  nucleon: density [13, 31]
 3  energy: Coulomb [35, 3]
 2  energy: density [35, 31]
 2  nuclear matter: asymmetry [21, 2]
 1  n: matter [54, 36]
 1  n: density [54, 31]
 1  n: mass [54, 16]
 
 Single keywords:
 61  K0
 23  equation of state
 12  slope
 4  mass number
 4  nuclide
 3  nuclear model
 3  mass formula
 2  charge distribution
 2  elastic scattering
 2  binding energy
 
or, the following keyword-cloud HTML visualization:

tag-cloud for document nucl-th/0204033

diff --git a/invenio/legacy/bibupload/doc/admin/bibupload-admin-guide.webdoc b/invenio/legacy/bibupload/doc/admin/bibupload-admin-guide.webdoc
index 59bfa1ca5..84dee933c 100644
--- a/invenio/legacy/bibupload/doc/admin/bibupload-admin-guide.webdoc
+++ b/invenio/legacy/bibupload/doc/admin/bibupload-admin-guide.webdoc
@@ -1,767 +1,772 @@
 ## -*- mode: html; coding: utf-8; -*-
 ## This file is part of Invenio.
-## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012 CERN.
+## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN.
 ##
 ## Invenio is free software; you can redistribute it and/or
 ## modify it under the terms of the GNU General Public License as
 ## published by the Free Software Foundation; either version 2 of the
 ## License, or (at your option) any later version.
 ##
 ## Invenio is distributed in the hope that it will be useful, but
 ## WITHOUT ANY WARRANTY; without even the implied warranty of
 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 ## General Public License for more details.
 ##
 ## You should have received a copy of the GNU General Public License
 ## along with Invenio; if not, write to the Free Software Foundation, Inc.,
 ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Contents

1. Overview
2. Configuring BibUpload
3. Running BibUpload
       3.1. Inserting new records
       3.2. Inserting records into the Holding Pen
       3.3. Updating existing records
       3.4. Inserting and updating at the same time
       3.5. Updating preformatted output formats
       3.6. Uploading fulltext files
       3.7. Obtaining feedback
       3.8. Assigning additional information to documents and other entities
             3.8.1 Uploading relations between documents
             3.8.2 Using temporary identifiers
4. Batch Uploader
       4.1. Web interface - Cataloguers
       4.2. Web interface - Robots
       4.3. Daemon mode

1. Overview

BibUpload enables you to upload bibliographic data in MARCXML format into the Invenio bibliographic database. It is also used internally by other Invenio modules as the sole entry point of metadata into the bibliographic databases.

Note that before uploading a MARCXML file, you may want to run the provided /opt/invenio/bin/xmlmarclint on it in order to verify its correctness.
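
For example, to validate file.xml before uploading:

 $ /opt/invenio/bin/xmlmarclint file.xml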

2. Configuring BibUpload

BibUpload takes a MARCXML file as its input. There is nothing to be configured for these files. If the files have to be converted into MARCXML from some other format, structured or not, this is usually done beforehand via the BibConvert module.

Note that if you are using external system numbers for your records, such as when your records are being synchronized from an external system, then BibUpload knows about the tag 970 as the one containing the external system number. (To change this 970 tag into something else, you would have to edit the BibUpload config source file.)

Note also that, in a similar way, BibUpload knows about OAI identifiers, so that it will refuse to insert the same OAI-harvested record twice, for example.

3. Running BibUpload

3.1 Inserting new records

Consider that you have a MARCXML file containing new records that is to be uploaded into Invenio. (For example, it might have been produced by BibConvert.) To perform the upload, you would call the BibUpload script in the insert mode as follows:

 $ bibupload -i file.xml
 
 
In the insert mode, all the records from the file will be treated as new. This means that they should contain neither 001 tags (holding record IDs) nor 970 tags (holding external system numbers). BibUpload would refuse to upload records having these tags, in order to prevent potential double uploading. If your file does contain 001 or 970, then chances are that you want to update existing records, not re-upload them as new, so BibUpload will warn you about this and will refuse to continue.

For example, to insert a new record, your file should look like this:

     <record>
         <datafield tag="100" ind1=" " ind2=" ">
             <subfield code="a">Doe, John</subfield>
         </datafield>
         <datafield tag="245" ind1=" " ind2=" ">
             <subfield code="a">On The Foo And Bar</subfield>
         </datafield>
     </record>
 

3.2 Inserting records into the Holding Pen

A special mode of BibUpload that is tightly connected with BibEdit is the Holding Pen mode.

When you insert a record using the Holding Pen mode, such as in the following example:

 $ bibupload -o file.xml
 
the records are not actually integrated into the database, but are instead put into an intermediate space called the Holding Pen, where authorized curators can review them, manipulate them and eventually approve them.

The Holding Pen is integrated with BibEdit.

3.3 Updating existing records

When you want to update existing records with the new content from your input MARCXML file, your input file should contain either tags 001 (holding record IDs) or tags 970 (holding external system numbers). BibUpload will try to match existing records via 001 and 970, and if it finds a record in the database that corresponds to a record from the file, it will update its content. Otherwise it will signal an error saying that it could not find the record-to-be-updated.

For example, to update the title of record #123 via the correct mode, your input file should contain the record ID in the 001 tag and the title in the 245 tag, as follows:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="245" ind1=" " ind2=" ">
             <subfield code="a">My Newly Updated Title</subfield>
         </datafield>
     </record>
 

There are several updating modes:

 
     -r, --replace Replace existing records by those from the XML
                   MARC file.  The original content is wiped out
                   and fully replaced.  Signals error if record
                   is not found via matching record IDs or system
                   numbers.
                   Fields defined in Invenio config variable
                   CFG_BIBUPLOAD_STRONG_TAGS are not replaced.
 
                   Note also that `-r' can be combined with `-i'
                   into an `-ir' option that would automatically
                   either insert records as new if they are not
                   found in the system, or correct existing
                   records if they are found to exist.
 
     -a, --append  Append fields from XML MARC file at the end of
                   existing records.  The original content is
                   enriched only.  Signals error if record is not
                   found via matching record IDs or system
                   numbers.
 
     -c, --correct Correct fields of existing records by those
                   from XML MARC file.  The original record
                   content is modified only on those fields from
                   the XML MARC file where both the tags and the
                   indicators match: the original fields are
                   removed and replaced by those from the XML
                   MARC file.  Fields not present in XML MARC
                   file are not changed (unlike the -r option).
                   Fields with "provenance" subfields defined in
                   'CFG_BIBUPLOAD_CONTROLLED_PROVENANCE_TAGS'
                   are protected against deletion unless the
                   input MARCXML contains a matching
                   provenance value.
                   Signals error if record is not found via
                   matching record IDs or system numbers.
 
     -d, --delete  Delete fields of existing records that are
                   contained in the XML MARC file. The fields in
                   the original record that are not present in
                   the XML MARC file are preserved.
                   This is incompatible with FFT (see below).
 

Note that if you are using the --replace mode and you specify in the incoming MARCXML a 001 tag with a value representing a record ID that does not exist, bibupload will not create the record on-the-fly, unless the --force parameter was also passed on the command line. This is done in order to avoid creating, by mistake, holes in the database list of record identifiers. Indeed, if you were to --replace a non-existing record, imposing a record ID with a value of, say, 1 000 000, and subsequently --insert a new record, the latter would automatically receive the ID 1 000 001.

If you combine the --pretend parameter with the above updating modes, you can test what would be executed without modifying the database or altering the system status.
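
For example, to see what a correct-mode upload would do without actually touching the database:

 $ bibupload -c --pretend file.xml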

3.4 Inserting and updating at the same time

Note that the insert/update modes can be combined. For example, if you have a file that contains a mixture of new records and some records to be updated, then you can run:

 $ bibupload -i -r file.xml
 
 
In this case BibUpload will try to do an update (for records having either 001 or 970 identifiers) or an insert (for the other ones).

3.6 Uploading fulltext files

The fulltext files can be uploaded and revised via a special FFT ("fulltext file transfer") tag with the following semantic:

     FFT $a  ...  location of the docfile to upload (a filesystem path or a URL)
         $d  ...  docfile description (optional)
         $f  ...  format (optional; if not set, deduced from $a)
         $m  ...  new desired docfile name (optional; used for renaming files)
         $n  ...  docfile name (optional; if not set, deduced from $a)
         $o  ...  flag (repeatable subfield)
         $r  ...  restriction (optional, see below)
         $s  ...  set timestamp (optional, see below)
         $t  ...  docfile type (e.g. Main, Additional)
         $v  ...  version (used only with REVERT and DELETE-FILE, see below)
         $x  ...  url/path for an icon (optional)
         $z  ...  comment (optional)
         $w  ... MoreInfo modification of the document
         $p  ... MoreInfo modification of a current version of the document
         $b  ... MoreInfo modification of a current version and format of the document
         $u  ... MoreInfo modification of a format (of any version) of the document
 

For example, to upload a new fulltext file thesis.pdf associated with record ID 123:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis.pdf</subfield>
             <subfield code="t">Main</subfield>
             <subfield code="d">
               This is the fulltext version of my thesis in the PDF format.
               Chapter 5 still needs some revision.
             </subfield>
         </datafield>
     </record>
 

The FFT tag is repeatable, so one can pass along another FFT tag instance containing a pointer to, e.g., the thesis defence slides. The subfields of an FFT tag are non-repeatable (with the exception of $o, as noted above).

When more than one FFT tag is specified for the same document (e.g. for adding more than one format at a time), if $t (docfile type), $m (new desired docfile name), $r (restriction), $v (version), $x (url/path for an icon), are specified, they should be identically specified for each single entry of FFT. E.g. if you want to specify an icon for a document with two formats (say .pdf and .doc), you'll write two FFT tags, both containing the same $x subfield.
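
For instance, a minimal sketch of such an upload, assuming two hypothetical files /tmp/thesis.pdf and /tmp/thesis.doc sharing the icon /tmp/icon.gif (the docname thesis is deduced from $a, so both tags address the same document):

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis.pdf</subfield>
             <subfield code="t">Main</subfield>
             <subfield code="x">/tmp/icon.gif</subfield>
         </datafield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis.doc</subfield>
             <subfield code="t">Main</subfield>
             <subfield code="x">/tmp/icon.gif</subfield>
         </datafield>
     </record>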

The bibupload process, when it encounters FFT tags, will automatically populate fulltext storage space (/opt/invenio/var/data/files) and metadata record associated tables (bibrec_bibdoc, bibdoc) as appropriate. It will also enrich the 856 tags (URL tags) of the MARC metadata of the record in question with references to the latest versions of each file.

Note that for the $a and $x subfields filesystem paths must be absolute (e.g. /tmp/icon.gif is valid, while Desktop/icon.gif is not) and they must be readable by the user/group of the bibupload process that will handle the FFT.

The bibupload process supports the usual modes correct, append, replace, insert with a semantic that is somewhat similar to the semantic of the metadata upload:

     Mode      Metadata                                   Fulltext

     objects   MARC field instances characterized         fulltext files characterized by
     uploaded  by tags (010-999)                          unique file names (FFT $n)

     insert    insert new record; must not exist          insert new files; must not exist

     append    append new tag instances for the given     append new files, if filename
               tag XXX, regardless of existing tag        (i.e. new format) not already
               instances                                  present

     correct   correct tag instances for the given tag    correct files with the given filename;
               XXX; delete existing ones and replace      add new revision or delete file; if the
               with given ones                            docname does not exist the file is added

     replace   replace all tags, whatever XXX are         replace all files, whatever filenames are

     delete    delete all existing tag instances          not supported

Note that you can mix regular MARC tags with special FFT tags in the incoming XML input file. Both record metadata and record files will be updated as a result. Hence beware with some input modes, such as replace mode, if you would like to touch only files.

Note that in append and insert mode the $m subfield is ignored.

In order to rename a document, just use the correct mode, specifying in the $n subfield the original docname that should be renamed and in $m the new name.
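
For instance, a minimal sketch that renames a hypothetical document thesis of record 123 to thesis-final, to be uploaded in correct mode:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="n">thesis</subfield>
             <subfield code="m">thesis-final</subfield>
         </datafield>
     </record>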

Special values can be assigned to the $t subfield.

  • PURGE: In order to purge previous file revisions (i.e. in order to keep only the latest file version), please use the correct mode with $n docname and $t PURGE as the special keyword.
  • DELETE: In order to delete all existing versions of a file, making it effectively hidden, please use the correct mode with $n docname and $t DELETE as the special keyword.
  • EXPUNGE: In order to expunge (i.e. remove completely, also from the filesystem) all existing versions of a file, making it effectively disappear, please use the correct mode with $n docname and $t EXPUNGE as the special keyword.
  • FIX-MARC: In order to synchronize MARC to the bibrec/bibdoc structure (e.g. after an update or a tweak in the database), please use the correct mode with $n docname and $t FIX-MARC as the special keyword.
  • FIX-ALL: In order to fix a record (i.e. put all its linked documents in a coherent state) and synchronize the MARC to the table, please use the correct mode with $n docname and $t FIX-ALL as the special keyword.
  • REVERT: In order to revert to a previous file revision (i.e. to create a new revision with the same content as some previous revision had), please use the correct mode with $n docname, $t REVERT as the special keyword and $v set to the number corresponding to the desired version.
  • DELETE-FILE: In order to delete a particular file added by mistake, please use the correct mode with $n docname, $t DELETE-FILE, specifying $v version and $f format. Note that this operation is not reversible. Note that if you don't specify a version, the last version will be used.

In order to preserve previous comments and descriptions when correcting, please use the KEEP-OLD-VALUE special keyword with the desired $d and $z subfields.
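
For instance, a sketch of a correct-mode revision of a hypothetical document thesis that uploads new content while preserving the previous description and comment:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis-v2.pdf</subfield>
             <subfield code="n">thesis</subfield>
             <subfield code="d">KEEP-OLD-VALUE</subfield>
             <subfield code="z">KEEP-OLD-VALUE</subfield>
         </datafield>
     </record>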

The $r subfield can contain a string that can be used to restrict the given document. The same value must be specified for all the formats of a given document. By default the keyword will be used as the status parameter for the "viewrestrdoc" action, which can be used to give access rights/restrictions to the desired users. E.g. if you set the keyword "thesis", you can then connect the "thesisviewer" role to the action "viewrestrdoc" with the parameter "status" set to "thesis". Then all the users linked with the "thesisviewer" role will be able to download the document, while any other users who are not considered authors of the given record will not be allowed to. Note that if you use the keyword "KEEP-OLD-VALUE", the previous restrictions, if applicable, will be kept.

More advanced document-level restrictions are indeed possible. In fact, if the value contains:

  • email: john.doe@example.org: then only the user having john.doe@example.org as email address will be authorized to access the given document.
  • group: example: then only users belonging to the local/external group example will be authorized to access the given document.
  • role: example: then only the users belonging to the WebAccess role example will be authorized to access the given document.
  • firerole: allow .../deny...: then only the users implicitly matched by the given firewall-like role definition will be authorized to access the given document.
  • status: example: then only the users belonging to roles having an authorization for the WebAccess action viewrestrdoc with parameter status set to example will be authorized (that is exactly like setting $r to example).
Note that authors (as defined in the record MARC) and superadmins are always authorized to access a document, no matter what the given value of the status is.
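
For example, a minimal sketch restricting a newly uploaded file to a single user, reusing the placeholder email address from the list above:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis.pdf</subfield>
             <subfield code="t">Main</subfield>
             <subfield code="r">email: john.doe@example.org</subfield>
         </datafield>
     </record>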

Some special flags might be set via FFT and associated with the current document by using the $o subfield (see the sketch after the following list). This feature is experimental. Currently only two flags are actively considered:

  • HIDDEN: used to specify that the file that is currently added (via revision or append) must be hidden, i.e. must not be visible to the world but only known by the system (e.g. to allow for fulltext indexing). This flag is permanently associated with the specific revision and format of the file being added.
  • PERFORM_HIDE_PREVIOUS: used to specify that, although the current file should be visible (unless the HIDDEN flag is also specified), any other previous revision of the document should receive the HIDDEN flag, and should thus be hidden to the world.
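
For instance, a minimal sketch that adds a hypothetical draft file which should be known to the system but not visible to the world:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis-draft.pdf</subfield>
             <subfield code="t">Additional</subfield>
             <subfield code="o">HIDDEN</subfield>
         </datafield>
     </record>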

Note that each time bibupload is called on a record, the 8564 tags pointing to locally stored files are recreated on the basis of the fulltext files connected to the record. Thus, if you wish to update some 8564 tag pointing to a locally managed file, the only way to do so is through the FFT tag, not by editing 8564 directly.

The subfield $s of FFT can be used to set the timestamp of the uploaded file to a given value, e.g. 2007-05-04 03:02:01. This is useful when uploading old files. When $s is not present, the current time will be used.
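
For example, a minimal sketch that uploads a hypothetical old file with the timestamp mentioned above:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="FFT" ind1=" " ind2=" ">
             <subfield code="a">/tmp/thesis.pdf</subfield>
             <subfield code="s">2007-05-04 03:02:01</subfield>
         </datafield>
     </record>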

3.7 Obtaining feedback

Sometimes, to implement a particular workflow or policy in a digital repository, it might be nice to receive automatic machine-friendly feedback that acknowledges the outcome of a bibupload execution. To this aim, the --callback-url command line parameter can be used. This parameter expects a URL to which a JSON-serialized response will be POSTed.

Say, you have an external service reachable via the URL http://www.example.org/accept_feedback. If the argument:

 --callback-url http://www.example.org/accept_feedback
 
is added to the usual bibupload call, then at the end of the execution of the corresponding bibupload task an HTTP POST request will be performed, if possible, to the given URL, reporting the outcome of the bibupload execution as a JSON-serialized response with the following structure:
  • a JSON object with the following string -- value mapping:
    • string: results -- value: a JSON array whose values are all JSON objects with the following string -- value mapping:
      • recid: an integer number, representing the described record identifier (-1 if no record identifier can be retrieved)
      • success: either true or false depending on the success of the elaboration of the corresponding MARCXML
      • error_message: a string containing a human-friendly description of the error that caused the MARCXML elaboration to fail (in case success has the value false)
      • marcxml: in case of success, this contains the final MARCXML representation of the record
      • url: in case of success, this contains the final URL where the detailed representation of the record can be fetched (i.e. its canonical URL)

For example, a possible JSON response posted to a specified URL can look like:

 {
     "results": [
         {
             "recid": -1,
             "error_message": "ERROR: can not retrieve the record identifier",
             "success": false
         },
         {
             "recid": 1000,
             "error_message": "",
             "success": true,
             "marcxml": "1000...",
             "url": "http://www.example.org/record/1000"
         },
         ...
     ]
 }
 

Note that, currently, in case the specified URL cannot be reached at the time of the POST request, the whole bibupload task will fail.

If you use the same callback URL to receive the feedback from more than one bibupload request, you might want to be able to correctly identify each bibupload call with the corresponding feedback. For this reason you can pass an additional argument to the bibupload call:

 --nonce VALUE
 
where VALUE can be any string you wish. Such a string will then be added to the JSON structure, as in (supposing you specified --nonce 1234):
 {
     "nonce": "1234",
     "results": [
         {
             "recid": -1,
             "error_message": "ERROR: can not retrieve the record identifier",
             "success": false
         },
         {
             "recid": 1000,
             "error_message": "",
             "success": true,
             "marcxml": "1000...",
             "url": "http://www.example.org/record/1000"
         },
         ...
     ]
 }
 

3.8 Assigning additional information to documents and other entities

Some bits of metadata should not be viewed by Invenio users directly nor stored in the MARC format. This includes all types of non-standard data related to records and documents, for example flags related to documents (specified inside an FFT tag) or bits of semantic information related to entities managed in Invenio. This type of data is usually machine-generated and should be used internally by Invenio modules.

Invenio provides a general mechanism allowing objects related to different Invenio entities to be stored. This mechanism is called MoreInfo and resembles well-known more-info solutions. Every entity (document, version of a document, format of a particular version of a document, relation between documents) can be assigned a dictionary of arbitrary values. The dictionary is divided into namespaces, which make it possible to separate data coming from different modules and serving different purposes.

BibUpload, the only gateway for uploading data into the Invenio database, allows MoreInfo structures to be populated. The MoreInfo related to a given entity can be modified by providing a Pickle-serialised, base64-encoded Python object having the following structure:

 {
     "namespace": {
         "key": "value",
        	"key2": "value2"
     }
 }
 

For example, the above dictionary would be uploaded as

KGRwMQpTJ25hbWVzcGFjZScKcDIKKGRwMwpTJ2tleTInCnA0ClMndmFsdWUyJwpwNQpzUydrZXknCnA2ClMndmFsdWUnCnA3CnNzLg==

which is a base64-encoded representation of the string

(dp0\nS'namespace'\np1\n(dp2\nS'key2'\np3\nS'value2'\np4\nsS'key'\np5\nS'value'\np6\nss.
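
For illustration, a minimal Python sketch that produces such a string from the dictionary above, using only the standard pickle and base64 modules (the exact memo numbering inside the pickle may differ between pickle implementations and Python versions; any valid serialization of the dictionary is equivalent):

 import base64
 import pickle

 more_info = {"namespace": {"key": "value", "key2": "value2"}}
 # Protocol 0 produces an ASCII pickle similar to the string shown
 # above; base64-encoding it yields the value to be uploaded.
 print(base64.b64encode(pickle.dumps(more_info, 0)))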

Data keys can be removed from a dictionary by providing None as the value. Empty namespaces are considered non-existent.

The string representation of modifications to the MoreInfo dictionary can be provided in several places, depending on which object it should be attached to. The most general upload method, the BDM tag, has the following semantics:

     BDM $r  ... Identifier of a relation between documents (optional)
         $i  ... Identifier of a BibDoc (optional)
         $v  ... Version of a BibDoc (optional)
         $n  ... Name of a BibDoc (within a current record) (optional)
         $f  ... Format of a BibDoc (optional)
         $m  ... Serialised update to the MoreInfo dictionary
 

All subfields (except $m) are optional and serve to identify the entity to which the MoreInfo should refer.
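
For example, a minimal sketch attaching the MoreInfo update from the example above to a hypothetical document named thesis of record 123:

     <record>
         <controlfield tag="001">123</controlfield>
         <datafield tag="BDM" ind1=" " ind2=" ">
             <subfield code="n">thesis</subfield>
             <subfield code="m">KGRwMQpTJ25hbWVzcGFjZScKcDIKKGRwMwpTJ2tleTInCnA0ClMndmFsdWUyJwpwNQpzUydrZXknCnA2ClMndmFsdWUnCnA3CnNzLg==</subfield>
         </datafield>
     </record>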

Besides the BDM tag, MoreInfo can be transferred using special subfields of the FFT and BDR tags. The former allows modifying the MoreInfo of a newly uploaded document, the latter that of a relation. The additional subfields have the following semantics:

     FFT $w  ... MoreInfo modification of the document
         $p  ... MoreInfo modification of a current version of the document
         $b  ... MoreInfo modification of a current version and format of the document
         $u  ... MoreInfo modification of a format (of any version) of the document
     BDR $m  ... MoreInfo modification of a relation between BibDocs
 

3.8.1 Uploading relations between documents

One additional piece of non-MARC data which can be uploaded to Invenio is relations between documents. Similarly to MoreInfo, relations are intended to be used by Invenio modules. The semantics of the BDR field, which allows uploading relations, looks as follows:

     BDR $r  ... Identifier of the relation (optional, can be provided if modifying a known relation)
 
         $i  ... Identifier of the first document
         $n  ... Name of the first document (within the current record) (optional)
         $v  ... Version of the first document (optional)
         $f  ... Format of the first document (optional)
 
         $j  ... Identifier of the second document
         $o  ... Name of the second document (within the current record) (optional)
         $w  ... Version of the second document (optional)
         $g  ... Format of the second document (optional)
 
         $t  ... Type of the relation
         $m  ... Modification of the MoreInfo of the relation
         $d  ... Special field: if the value is DELETE, the relation is removed
 

Behaviour of the BDR tag in different upload modes:

     insert, append      Inserts a new relation if necessary. Appends
                         fields to the MoreInfo structure.
     correct, replace    Creates a new relation if necessary; replaces
                         the entire content of the MoreInfo field.

3.8.2 Using temporary identifiers

In many cases, users want to upload large collections of documents using a single BibUpload task. The infrastructure described in the rest of this manual allows easy upload of multiple documents, but lacks facilities for relating them to each other. A sample use case which cannot be satisfied by simple usage of FFT tags is uploading a document and relating it to another which is either already in the database or is being uploaded within the same BibUpload task. BibUpload provides a mechanism of temporary identifiers which makes it possible to serve scenarios similar to the aforementioned one.

A temporary identifier is a string (unique in the context of a single MARCXML document) which replaces a document number or a version number. In the context of BibDoc manipulations (FFT, BDR and BDM tags), temporary identifiers can appear anywhere a version or a numerical ID is required. If a temporary identifier appears in the context of a document that already has an ID assigned, it will be interpreted as that existing number. If a newly created document is assigned a temporary identifier, the newly generated numerical ID is bound to the temporary identifier. In order to be recognised as a temporary identifier, a string has to begin with the prefix TMP:. The mechanism of temporary identifiers cannot be used in the context of records, but only with BibDocs.

A BibUpload input using temporary identifiers can look like:

 
 <collection xmlns="http://www.loc.gov/MARC21/slim">
   <record>
     <datafield tag="100" ind1=" " ind2=" ">
       <subfield code="a">This is a record of the publication</subfield>
     </datafield>
     <datafield tag="FFT" ind1=" " ind2=" ">
       <subfield code="a">http://somedomain.com/document.pdf</subfield>
       <subfield code="t">Main</subfield>
       <subfield code="n">docname</subfield>
       <subfield code="i">TMP:id_identifier1</subfield>
       <subfield code="v">TMP:ver_identifier1</subfield>
     </datafield>
   </record>
 
   <record>
     <datafield tag="100" ind1=" " ind2=" ">
       <subfield code="a">This is a record of a dataset extracted from the publication</subfield>
     </datafield>
 
     <datafield tag="FFT" ind1=" " ind2=" ">
       <subfield code="a">http://sample.com/dataset.data</subfield>
       <subfield code="t">Main</subfield>
       <subfield code="n">docname2</subfielxd>
       <subfield code="i">TMP:id_identifier2</subfield>
       <subfield code="v">TMP:ver_identifier2</subfield>
     </datafield>
 
     <datafield tag="BDR" ind1=" " ind2=" ">
       <subfield code="i">TMP:id_identifier1</subfield>
       <subfield code="v">TMP:ver_identifier1</subfield>
       <subfield code="j">TMP:id_identifier2</subfield>
       <subfield code="w">TMP:ver_identifier2</subfield>
 
       <subfield code="t">is_extracted_from</subfield>
     </datafield>
   </record>
 
 </collection>
 

4. Batch Uploader

4.1 Web interface - Cataloguers

The batchuploader web interface can be used to upload either metadata files or documents. As opposed to daemon mode, actions will be executed only once.

The available upload history displays metadata and document uploads performed via the web interface, not via daemon mode.

4.2 Web interface - Robots

If you need to use the batch upload function from the command line, this can be achieved with a curl call like:

 $ curl -F 'file=@localfile.xml' -F 'mode=-i' http://cds.cern.ch/batchuploader/robotupload [-F 'callback_url=http://...'] -A invenio_webupload
 
 

This service provides (client, file) checking to assure that the records are put into a collection the client has rights to.
To configure these permissions, check the CFG_BATCHUPLOADER_WEB_ROBOT_RIGHTS variable in the configuration file.
The allowed user agents can also be defined using the CFG_BATCHUPLOADER_WEB_ROBOT_AGENT variable.

Note that you can receive machine-friendly feedback from the corresponding bibupload task that is launched by a given batchuploader request, by adding the optional POST field callback_url with the same semantics as the --callback-url command line parameter of bibupload (see the previous paragraph, Obtaining feedback).

A second, more RESTful interface is also available: it suffices to append the specific mode (among "insert", "append", "correct", "delete", "replace") to the URL, as in:

 http://cds.cern.ch/batchuploader/robotupload/insert
 

The callback_url argument can be put in the query part of the URL, as in:

 http://cds.cern.ch/batchuploader/robotupload/insert?callback_url=http://myhandler
 

In case the HTTP server that is going to receive the feedback at callback_url expects the request to be encoded in application/x-www-form-urlencoded rather than application/json (e.g. if the server is implemented directly in Oracle), you can further specify the special_treatment argument and set it to oracle. The feedback will then be further encoded into an application/x-www-form-urlencoded request, with a single form key called results, which will contain the final JSON data.

The MARCXML content should then be specified as the body of the request. With curl this can be implemented as in:

 $ curl -T localfile.xml http://cds.cern.ch/batchuploader/robotupload/insert?callback_url=http://... -A invenio_webupload -H "Content-Type: application/marcxml+xml"
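
Similarly, a sketch of an invocation that additionally requests the Oracle-friendly encoding described above:

 $ curl -T localfile.xml 'http://cds.cern.ch/batchuploader/robotupload/insert?callback_url=http://...&special_treatment=oracle' -A invenio_webupload -H "Content-Type: application/marcxml+xml"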
 

The nonce argument that can be passed to BibUpload as described in the previous paragraph can also be specified with both robotupload interfaces. E.g.:

 $ curl -F 'file=@localfile.xml' -F 'nonce=1234' -F 'mode=-i' http://cds.cern.ch/batchuploader/robotupload -F 'callback_url=http://...' -A invenio_webupload
 
and
 $ curl -T localfile.xml 'http://cds.cern.ch/batchuploader/robotupload/insert?nonce=1234&callback_url=http://...' -A invenio_webupload -H "Content-Type: application/marcxml+xml"
 

4.3 Daemon mode

The batchuploader daemon mode is intended to be a bibsched task for document or metadata upload. The parent directory where the daemon will look for folders metadata and documents must be specified in the invenio configuration file.

An example of how directories should be arranged, considering that Invenio was installed in the folder /opt/invenio, would be:

      /opt/invenio/var/batchupload
             /opt/invenio/var/batchupload/documents
                     /opt/invenio/var/batchupload/documents/append
                     /opt/invenio/var/batchupload/documents/revise
             /opt/invenio/var/batchupload/metadata
                     /opt/invenio/var/batchupload/metadata/append
                     /opt/invenio/var/batchupload/metadata/correct
                     /opt/invenio/var/batchupload/metadata/insert
                     /opt/invenio/var/batchupload/metadata/replace
 

When running the batchuploader daemon there are two possible execution modes:

         -m,   --metadata    Look for metadata files in folders insert, append, correct and replace.
                             All files are uploaded and then moved to the corresponding DONE folder.
         -d,   --documents   Look for documents in folders append and revise. Uploaded files are then
                             moved to DONE folders if possible.
 
By default, metadata mode is used.

An example of invocation would be:

 $ batchuploader --documents
 
 

It is possible to program the batch uploader to run periodically. Read the Howto-run guide to see how.

diff --git a/invenio/legacy/miscutil/sql/tabfill.sql b/invenio/legacy/miscutil/sql/tabfill.sql
index f3b574f5b..8d5d47d91 100644
--- a/invenio/legacy/miscutil/sql/tabfill.sql
+++ b/invenio/legacy/miscutil/sql/tabfill.sql
@@ -1,863 +1,863 @@
-- This file is part of Invenio.
-- Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 CERN.
--
-- Invenio is free software; you can redistribute it and/or
-- modify it under the terms of the GNU General Public License as
-- published by the Free Software Foundation; either version 2 of the
-- License, or (at your option) any later version.
--
-- Invenio is distributed in the hope that it will be useful, but
-- WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-- General Public License for more details.
--
-- You should have received a copy of the GNU General Public License
-- along with Invenio; if not, write to the Free Software Foundation, Inc.,
-- 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
-- Fill Invenio configuration tables with defaults suitable for any site. INSERT INTO rnkMETHOD (id,name,last_updated) VALUES (1,'wrd','0000-00-00 00:00:00'); INSERT INTO collection_rnkMETHOD (id_collection,id_rnkMETHOD,score) VALUES (1,1,100); INSERT INTO rnkCITATIONDATA VALUES (1,'citationdict',NULL,'0000-00-00'); INSERT INTO rnkCITATIONDATA VALUES (2,'reversedict',NULL,'0000-00-00'); INSERT INTO rnkCITATIONDATA VALUES (3,'selfcitdict',NULL,'0000-00-00'); INSERT INTO rnkCITATIONDATA VALUES (4,'selfcitedbydict',NULL,'0000-00-00'); INSERT INTO field VALUES (1,'any field','anyfield'); INSERT INTO field VALUES (2,'title','title'); INSERT INTO field VALUES (3,'author','author'); INSERT INTO field VALUES (4,'abstract','abstract'); INSERT INTO field VALUES (5,'keyword','keyword'); INSERT INTO field VALUES (6,'report number','reportnumber'); INSERT INTO field VALUES (7,'subject','subject'); INSERT INTO field VALUES (8,'reference','reference'); INSERT INTO field VALUES (9,'fulltext','fulltext'); INSERT INTO field VALUES (10,'collection','collection'); INSERT INTO field VALUES (11,'division','division'); INSERT INTO field VALUES (12,'year','year'); INSERT INTO field VALUES (13,'experiment','experiment'); INSERT INTO field VALUES (14,'record ID','recid'); INSERT INTO field VALUES (15,'isbn','isbn'); INSERT INTO field VALUES (16,'issn','issn'); INSERT INTO field VALUES (17,'coden','coden'); -- INSERT INTO field VALUES (18,'doi','doi'); INSERT INTO field VALUES (19,'journal','journal'); INSERT INTO field VALUES (20,'collaboration','collaboration'); INSERT INTO field VALUES (21,'affiliation','affiliation'); INSERT INTO field VALUES (22,'exact author','exactauthor'); INSERT INTO field VALUES (23,'date created','datecreated'); INSERT INTO field VALUES (24,'date modified','datemodified'); INSERT INTO field VALUES (25,'refers to','refersto'); INSERT INTO field VALUES (26,'cited by','citedby'); INSERT INTO field VALUES (27,'caption','caption'); INSERT INTO field VALUES (28,'first author','firstauthor'); INSERT INTO field VALUES (29,'exact first author','exactfirstauthor'); INSERT INTO field VALUES (30,'author count','authorcount'); INSERT INTO field VALUES (31,'reference to','rawref'); INSERT INTO field VALUES (32,'exact title','exacttitle'); INSERT INTO field VALUES (33,'authority author','authorityauthor'); -INSERT INTO field VALUES (34,'authority institution','authorityinstitution'); +INSERT
INTO field VALUES (34,'authority institute','authorityinstitute'); INSERT INTO field VALUES (35,'authority journal','authorityjournal'); INSERT INTO field VALUES (36,'authority subject','authoritysubject'); INSERT INTO field VALUES (37,'item count','itemcount'); INSERT INTO field VALUES (38,'file type','filetype'); INSERT INTO field VALUES (39,'miscellaneous', 'miscellaneous'); INSERT INTO field VALUES (40,'tag','tag'); INSERT INTO field_tag VALUES (10,11,100); INSERT INTO field_tag VALUES (11,14,100); INSERT INTO field_tag VALUES (12,15,10); INSERT INTO field_tag VALUES (13,116,10); INSERT INTO field_tag VALUES (2,3,100); INSERT INTO field_tag VALUES (2,4,90); INSERT INTO field_tag VALUES (3,1,100); INSERT INTO field_tag VALUES (3,2,90); INSERT INTO field_tag VALUES (4,5,100); INSERT INTO field_tag VALUES (5,6,100); INSERT INTO field_tag VALUES (6,7,30); INSERT INTO field_tag VALUES (6,8,10); INSERT INTO field_tag VALUES (6,9,20); INSERT INTO field_tag VALUES (7,12,100); INSERT INTO field_tag VALUES (7,13,90); INSERT INTO field_tag VALUES (8,10,100); INSERT INTO field_tag VALUES (9,115,100); INSERT INTO field_tag VALUES (14,117,100); INSERT INTO field_tag VALUES (15,118,100); INSERT INTO field_tag VALUES (16,119,100); INSERT INTO field_tag VALUES (17,120,100); -- INSERT INTO field_tag VALUES (18,121,100); INSERT INTO field_tag VALUES (19,131,100); INSERT INTO field_tag VALUES (20,132,100); INSERT INTO field_tag VALUES (21,133,100); INSERT INTO field_tag VALUES (21,134,90); INSERT INTO field_tag VALUES (22,1,100); INSERT INTO field_tag VALUES (22,2,90); INSERT INTO field_tag VALUES (27,135,100); INSERT INTO field_tag VALUES (28,1,100); INSERT INTO field_tag VALUES (29,1,100); INSERT INTO field_tag VALUES (30,1,100); INSERT INTO field_tag VALUES (30,2,90); INSERT INTO field_tag VALUES (32,3,100); INSERT INTO field_tag VALUES (32,4,90); -- authority fields INSERT INTO field_tag VALUES (33,1,100); INSERT INTO field_tag VALUES (33,146,100); INSERT INTO field_tag VALUES (33,140,100); INSERT INTO field_tag VALUES (34,148,100); INSERT INTO field_tag VALUES (34,149,100); INSERT INTO field_tag VALUES (34,150,100); INSERT INTO field_tag VALUES (35,151,100); INSERT INTO field_tag VALUES (35,152,100); INSERT INTO field_tag VALUES (35,153,100); INSERT INTO field_tag VALUES (36,154,100); INSERT INTO field_tag VALUES (36,155,100); INSERT INTO field_tag VALUES (36,156,100); -- misc fields INSERT INTO field_tag VALUES (39,17,10); INSERT INTO field_tag VALUES (39,18,10); INSERT INTO field_tag VALUES (39,157,10); INSERT INTO field_tag VALUES (39,158,10); INSERT INTO field_tag VALUES (39,159,10); INSERT INTO field_tag VALUES (39,160,10); INSERT INTO field_tag VALUES (39,161,10); INSERT INTO field_tag VALUES (39,162,10); INSERT INTO field_tag VALUES (39,163,10); INSERT INTO field_tag VALUES (39,164,10); INSERT INTO field_tag VALUES (39,20,10); INSERT INTO field_tag VALUES (39,21,10); INSERT INTO field_tag VALUES (39,22,10); INSERT INTO field_tag VALUES (39,23,10); INSERT INTO field_tag VALUES (39,165,10); INSERT INTO field_tag VALUES (39,166,10); INSERT INTO field_tag VALUES (39,167,10); INSERT INTO field_tag VALUES (39,168,10); INSERT INTO field_tag VALUES (39,169,10); INSERT INTO field_tag VALUES (39,170,10); INSERT INTO field_tag VALUES (39,25,10); INSERT INTO field_tag VALUES (39,27,10); INSERT INTO field_tag VALUES (39,28,10); INSERT INTO field_tag VALUES (39,29,10); INSERT INTO field_tag VALUES (39,30,10); INSERT INTO field_tag VALUES (39,31,10); INSERT INTO field_tag VALUES (39,32,10); INSERT INTO 
field_tag VALUES (39,33,10); INSERT INTO field_tag VALUES (39,34,10); INSERT INTO field_tag VALUES (39,35,10); INSERT INTO field_tag VALUES (39,36,10); INSERT INTO field_tag VALUES (39,37,10); INSERT INTO field_tag VALUES (39,38,10); INSERT INTO field_tag VALUES (39,39,10); INSERT INTO field_tag VALUES (39,171,10); INSERT INTO field_tag VALUES (39,172,10); INSERT INTO field_tag VALUES (39,173,10); INSERT INTO field_tag VALUES (39,174,10); INSERT INTO field_tag VALUES (39,175,10); INSERT INTO field_tag VALUES (39,41,10); INSERT INTO field_tag VALUES (39,42,10); INSERT INTO field_tag VALUES (39,43,10); INSERT INTO field_tag VALUES (39,44,10); INSERT INTO field_tag VALUES (39,45,10); INSERT INTO field_tag VALUES (39,46,10); INSERT INTO field_tag VALUES (39,47,10); INSERT INTO field_tag VALUES (39,48,10); INSERT INTO field_tag VALUES (39,49,10); INSERT INTO field_tag VALUES (39,50,10); INSERT INTO field_tag VALUES (39,51,10); INSERT INTO field_tag VALUES (39,52,10); INSERT INTO field_tag VALUES (39,53,10); INSERT INTO field_tag VALUES (39,54,10); INSERT INTO field_tag VALUES (39,55,10); INSERT INTO field_tag VALUES (39,56,10); INSERT INTO field_tag VALUES (39,57,10); INSERT INTO field_tag VALUES (39,58,10); INSERT INTO field_tag VALUES (39,59,10); INSERT INTO field_tag VALUES (39,60,10); INSERT INTO field_tag VALUES (39,61,10); INSERT INTO field_tag VALUES (39,62,10); INSERT INTO field_tag VALUES (39,63,10); INSERT INTO field_tag VALUES (39,64,10); INSERT INTO field_tag VALUES (39,65,10); INSERT INTO field_tag VALUES (39,66,10); INSERT INTO field_tag VALUES (39,67,10); INSERT INTO field_tag VALUES (39,176,10); INSERT INTO field_tag VALUES (39,177,10); INSERT INTO field_tag VALUES (39,178,10); INSERT INTO field_tag VALUES (39,179,10); INSERT INTO field_tag VALUES (39,180,10); INSERT INTO field_tag VALUES (39,69,10); INSERT INTO field_tag VALUES (39,70,10); INSERT INTO field_tag VALUES (39,71,10); INSERT INTO field_tag VALUES (39,72,10); INSERT INTO field_tag VALUES (39,73,10); INSERT INTO field_tag VALUES (39,74,10); INSERT INTO field_tag VALUES (39,75,10); INSERT INTO field_tag VALUES (39,76,10); INSERT INTO field_tag VALUES (39,77,10); INSERT INTO field_tag VALUES (39,78,10); INSERT INTO field_tag VALUES (39,79,10); INSERT INTO field_tag VALUES (39,80,10); INSERT INTO field_tag VALUES (39,181,10); INSERT INTO field_tag VALUES (39,182,10); INSERT INTO field_tag VALUES (39,183,10); INSERT INTO field_tag VALUES (39,184,10); INSERT INTO field_tag VALUES (39,185,10); INSERT INTO field_tag VALUES (39,186,10); INSERT INTO field_tag VALUES (39,82,10); INSERT INTO field_tag VALUES (39,83,10); INSERT INTO field_tag VALUES (39,84,10); INSERT INTO field_tag VALUES (39,85,10); INSERT INTO field_tag VALUES (39,187,10); INSERT INTO field_tag VALUES (39,88,10); INSERT INTO field_tag VALUES (39,89,10); INSERT INTO field_tag VALUES (39,90,10); INSERT INTO field_tag VALUES (39,91,10); INSERT INTO field_tag VALUES (39,92,10); INSERT INTO field_tag VALUES (39,93,10); INSERT INTO field_tag VALUES (39,94,10); INSERT INTO field_tag VALUES (39,95,10); INSERT INTO field_tag VALUES (39,96,10); INSERT INTO field_tag VALUES (39,97,10); INSERT INTO field_tag VALUES (39,98,10); INSERT INTO field_tag VALUES (39,99,10); INSERT INTO field_tag VALUES (39,100,10); INSERT INTO field_tag VALUES (39,102,10); INSERT INTO field_tag VALUES (39,103,10); INSERT INTO field_tag VALUES (39,104,10); INSERT INTO field_tag VALUES (39,105,10); INSERT INTO field_tag VALUES (39,188,10); INSERT INTO field_tag VALUES (39,189,10); INSERT INTO 
field_tag VALUES (39,190,10); INSERT INTO field_tag VALUES (39,191,10); INSERT INTO field_tag VALUES (39,192,10); INSERT INTO field_tag VALUES (39,193,10); INSERT INTO field_tag VALUES (39,194,10); INSERT INTO field_tag VALUES (39,195,10); INSERT INTO field_tag VALUES (39,196,10); INSERT INTO field_tag VALUES (39,107,10); INSERT INTO field_tag VALUES (39,108,10); INSERT INTO field_tag VALUES (39,109,10); INSERT INTO field_tag VALUES (39,110,10); INSERT INTO field_tag VALUES (39,111,10); INSERT INTO field_tag VALUES (39,112,10); INSERT INTO field_tag VALUES (39,113,10); INSERT INTO field_tag VALUES (39,197,10); INSERT INTO field_tag VALUES (39,198,10); INSERT INTO field_tag VALUES (39,199,10); INSERT INTO field_tag VALUES (39,200,10); INSERT INTO field_tag VALUES (39,201,10); INSERT INTO field_tag VALUES (39,202,10); INSERT INTO field_tag VALUES (39,203,10); INSERT INTO field_tag VALUES (39,204,10); INSERT INTO field_tag VALUES (39,205,10); INSERT INTO field_tag VALUES (39,206,10); INSERT INTO field_tag VALUES (39,207,10); INSERT INTO field_tag VALUES (39,208,10); INSERT INTO field_tag VALUES (39,209,10); INSERT INTO field_tag VALUES (39,210,10); INSERT INTO field_tag VALUES (39,211,10); INSERT INTO field_tag VALUES (39,212,10); INSERT INTO field_tag VALUES (39,213,10); INSERT INTO field_tag VALUES (39,214,10); INSERT INTO field_tag VALUES (39,215,10); INSERT INTO field_tag VALUES (39,122,10); INSERT INTO field_tag VALUES (39,123,10); INSERT INTO field_tag VALUES (39,124,10); INSERT INTO field_tag VALUES (39,125,10); INSERT INTO field_tag VALUES (39,126,10); INSERT INTO field_tag VALUES (39,127,10); INSERT INTO field_tag VALUES (39,128,10); INSERT INTO field_tag VALUES (39,129,10); INSERT INTO field_tag VALUES (39,130,10); INSERT INTO field_tag VALUES (39,1,10); INSERT INTO field_tag VALUES (39,2,10); -- misc authority fields INSERT INTO field_tag VALUES (39,216,10); INSERT INTO field_tag VALUES (39,217,10); INSERT INTO field_tag VALUES (39,218,10); INSERT INTO field_tag VALUES (39,219,10); INSERT INTO field_tag VALUES (39,220,10); INSERT INTO field_tag VALUES (39,221,10); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (1,'HTML brief','hb', 'HTML brief output format, used for search results pages.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (2,'HTML detailed','hd', 'HTML detailed output format, used for Detailed record pages.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (3,'MARC','hm', 'HTML MARC.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (4,'Dublin Core','xd', 'XML Dublin Core.', 'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (5,'MARCXML','xm', 'XML MARC.', 'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (6,'portfolio','hp', 'HTML portfolio-style output format for photos.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (7,'photo captions only','hc', 'HTML caption-only output format for photos.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (8,'BibTeX','hx', 'BibTeX.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (9,'EndNote','xe', 'XML EndNote.', 'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (10,'NLM','xn', 'XML NLM.', 
'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (11,'Excel','excel', 'Excel csv output', 'application/ms-excel', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (12,'HTML similarity','hs', 'Very short HTML output for similarity box (people also viewed..).', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (13,'RSS','xr', 'RSS.', 'text/xml', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (14,'OAI DC','xoaidc', 'OAI DC.', 'text/xml', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (15,'File mini-panel', 'hdfile', 'Used to show fulltext files in mini-panel of detailed record pages.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (16,'Actions mini-panel', 'hdact', 'Used to display actions in mini-panel of detailed record pages.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (17,'References tab', 'hdref', 'Display record references in References tab.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (18,'HTML citesummary','hcs', 'HTML cite summary format, used for search results pages.', 'text/html', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (19,'RefWorks','xw', 'RefWorks.', 'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (20,'MODS', 'xo', 'Metadata Object Description Schema', 'application/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (21,'HTML author claiming', 'ha', 'Very brief HTML output format for author/paper claiming facility.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (22,'Podcast', 'xp', 'Sample format suitable for multimedia feeds, such as podcasts', 'application/rss+xml', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (23,'WebAuthorProfile affiliations helper','wapaff', 'cPickled dicts', 'text', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (24,'EndNote (8-X)','xe8x', 'XML EndNote (8-X).', 'text/xml', 1); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (25,'HTML citesummary extended','hcs2', 'HTML cite summary format, including self-citations counts.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (26,'DataCite','dcite', 'DataCite XML format.', 'text/xml', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (27,'Mobile brief','mobb', 'Mobile brief format.', 'text/html', 0); INSERT INTO format (id,name,code,description,content_type,visibility) VALUES (28,'Mobile detailed','mobd', 'Mobile detailed format.', 'text/html', 0); INSERT INTO tag VALUES (1,'first author name','100__a'); INSERT INTO tag VALUES (2,'additional author name','700__a'); INSERT INTO tag VALUES (3,'main title','245__%'); INSERT INTO tag VALUES (4,'additional title','246__%'); INSERT INTO tag VALUES (5,'abstract','520__%'); INSERT INTO tag VALUES (6,'keyword','6531_a'); INSERT INTO tag VALUES (7,'primary report number','037__a'); INSERT INTO tag VALUES (8,'additional report number','088__a'); INSERT INTO tag VALUES (9,'added report number','909C0r'); INSERT INTO tag VALUES (10,'reference','999C5%'); INSERT INTO tag VALUES (11,'collection 
identifier','980__%'); INSERT INTO tag VALUES (12,'main subject','65017a'); INSERT INTO tag VALUES (13,'additional subject','65027a'); INSERT INTO tag VALUES (14,'division','909C0p'); INSERT INTO tag VALUES (15,'year','909C0y'); INSERT INTO tag VALUES (16,'00x','00%'); INSERT INTO tag VALUES (17,'01x','01%'); INSERT INTO tag VALUES (18,'02x','02%'); INSERT INTO tag VALUES (19,'03x','03%'); INSERT INTO tag VALUES (20,'lang','04%'); INSERT INTO tag VALUES (21,'05x','05%'); INSERT INTO tag VALUES (22,'06x','06%'); INSERT INTO tag VALUES (23,'07x','07%'); INSERT INTO tag VALUES (24,'08x','08%'); INSERT INTO tag VALUES (25,'09x','09%'); INSERT INTO tag VALUES (26,'10x','10%'); INSERT INTO tag VALUES (27,'11x','11%'); INSERT INTO tag VALUES (28,'12x','12%'); INSERT INTO tag VALUES (29,'13x','13%'); INSERT INTO tag VALUES (30,'14x','14%'); INSERT INTO tag VALUES (31,'15x','15%'); INSERT INTO tag VALUES (32,'16x','16%'); INSERT INTO tag VALUES (33,'17x','17%'); INSERT INTO tag VALUES (34,'18x','18%'); INSERT INTO tag VALUES (35,'19x','19%'); INSERT INTO tag VALUES (36,'20x','20%'); INSERT INTO tag VALUES (37,'21x','21%'); INSERT INTO tag VALUES (38,'22x','22%'); INSERT INTO tag VALUES (39,'23x','23%'); INSERT INTO tag VALUES (40,'24x','24%'); INSERT INTO tag VALUES (41,'25x','25%'); INSERT INTO tag VALUES (42,'internal','26%'); INSERT INTO tag VALUES (43,'27x','27%'); INSERT INTO tag VALUES (44,'28x','28%'); INSERT INTO tag VALUES (45,'29x','29%'); INSERT INTO tag VALUES (46,'pages','30%'); INSERT INTO tag VALUES (47,'31x','31%'); INSERT INTO tag VALUES (48,'32x','32%'); INSERT INTO tag VALUES (49,'33x','33%'); INSERT INTO tag VALUES (50,'34x','34%'); INSERT INTO tag VALUES (51,'35x','35%'); INSERT INTO tag VALUES (52,'36x','36%'); INSERT INTO tag VALUES (53,'37x','37%'); INSERT INTO tag VALUES (54,'38x','38%'); INSERT INTO tag VALUES (55,'39x','39%'); INSERT INTO tag VALUES (56,'40x','40%'); INSERT INTO tag VALUES (57,'41x','41%'); INSERT INTO tag VALUES (58,'42x','42%'); INSERT INTO tag VALUES (59,'43x','43%'); INSERT INTO tag VALUES (60,'44x','44%'); INSERT INTO tag VALUES (61,'45x','45%'); INSERT INTO tag VALUES (62,'46x','46%'); INSERT INTO tag VALUES (63,'47x','47%'); INSERT INTO tag VALUES (64,'48x','48%'); INSERT INTO tag VALUES (65,'series','49%'); INSERT INTO tag VALUES (66,'50x','50%'); INSERT INTO tag VALUES (67,'51x','51%'); INSERT INTO tag VALUES (68,'52x','52%'); INSERT INTO tag VALUES (69,'53x','53%'); INSERT INTO tag VALUES (70,'54x','54%'); INSERT INTO tag VALUES (71,'55x','55%'); INSERT INTO tag VALUES (72,'56x','56%'); INSERT INTO tag VALUES (73,'57x','57%'); INSERT INTO tag VALUES (74,'58x','58%'); INSERT INTO tag VALUES (75,'summary','59%'); INSERT INTO tag VALUES (76,'60x','60%'); INSERT INTO tag VALUES (77,'61x','61%'); INSERT INTO tag VALUES (78,'62x','62%'); INSERT INTO tag VALUES (79,'63x','63%'); INSERT INTO tag VALUES (80,'64x','64%'); INSERT INTO tag VALUES (81,'65x','65%'); INSERT INTO tag VALUES (82,'66x','66%'); INSERT INTO tag VALUES (83,'67x','67%'); INSERT INTO tag VALUES (84,'68x','68%'); INSERT INTO tag VALUES (85,'subject','69%'); INSERT INTO tag VALUES (86,'70x','70%'); INSERT INTO tag VALUES (87,'71x','71%'); INSERT INTO tag VALUES (88,'author-ad','72%'); INSERT INTO tag VALUES (89,'73x','73%'); INSERT INTO tag VALUES (90,'74x','74%'); INSERT INTO tag VALUES (91,'75x','75%'); INSERT INTO tag VALUES (92,'76x','76%'); INSERT INTO tag VALUES (93,'77x','77%'); INSERT INTO tag VALUES (94,'78x','78%'); INSERT INTO tag VALUES (95,'79x','79%'); INSERT INTO tag 
VALUES (96,'80x','80%'); INSERT INTO tag VALUES (97,'81x','81%'); INSERT INTO tag VALUES (98,'82x','82%'); INSERT INTO tag VALUES (99,'83x','83%'); INSERT INTO tag VALUES (100,'84x','84%'); INSERT INTO tag VALUES (101,'electr','85%'); INSERT INTO tag VALUES (102,'86x','86%'); INSERT INTO tag VALUES (103,'87x','87%'); INSERT INTO tag VALUES (104,'88x','88%'); INSERT INTO tag VALUES (105,'89x','89%'); INSERT INTO tag VALUES (106,'publication','90%'); INSERT INTO tag VALUES (107,'pub-conf-cit','91%'); INSERT INTO tag VALUES (108,'92x','92%'); INSERT INTO tag VALUES (109,'93x','93%'); INSERT INTO tag VALUES (110,'94x','94%'); INSERT INTO tag VALUES (111,'95x','95%'); INSERT INTO tag VALUES (112,'catinfo','96%'); INSERT INTO tag VALUES (113,'97x','97%'); INSERT INTO tag VALUES (114,'98x','98%'); INSERT INTO tag VALUES (115,'url','8564_u'); INSERT INTO tag VALUES (116,'experiment','909C0e'); INSERT INTO tag VALUES (117,'record ID','001'); INSERT INTO tag VALUES (118,'isbn','020__a'); INSERT INTO tag VALUES (119,'issn','022__a'); INSERT INTO tag VALUES (120,'coden','030__a'); INSERT INTO tag VALUES (121,'doi','909C4a'); INSERT INTO tag VALUES (122,'850x','850%'); INSERT INTO tag VALUES (123,'851x','851%'); INSERT INTO tag VALUES (124,'852x','852%'); INSERT INTO tag VALUES (125,'853x','853%'); INSERT INTO tag VALUES (126,'854x','854%'); INSERT INTO tag VALUES (127,'855x','855%'); INSERT INTO tag VALUES (128,'857x','857%'); INSERT INTO tag VALUES (129,'858x','858%'); INSERT INTO tag VALUES (130,'859x','859%'); INSERT INTO tag VALUES (131,'journal','909C4%'); INSERT INTO tag VALUES (132,'collaboration','710__g'); INSERT INTO tag VALUES (133,'first author affiliation','100__u'); INSERT INTO tag VALUES (134,'additional author affiliation','700__u'); INSERT INTO tag VALUES (135,'caption','8564_y'); INSERT INTO tag VALUES (136,'journal page','909C4c'); INSERT INTO tag VALUES (137,'journal title','909C4p'); INSERT INTO tag VALUES (138,'journal volume','909C4v'); INSERT INTO tag VALUES (139,'journal year','909C4y'); INSERT INTO tag VALUES (140,'comment','500__a'); INSERT INTO tag VALUES (141,'title','245__a'); INSERT INTO tag VALUES (142,'main abstract','245__a'); INSERT INTO tag VALUES (143,'internal notes','595__a'); INSERT INTO tag VALUES (144,'other relationship entry', '787%'); -- INSERT INTO tag VALUES (145,'authority: main personal name','100__a'); -- already exists under a different name ('first author name') INSERT INTO tag VALUES (146,'authority: alternative personal name','400__a'); -- INSERT INTO tag VALUES (147,'authority: personal name from other record','500__a'); -- already exists under a different name ('comment') INSERT INTO tag VALUES (148,'authority: organization main name','110__a'); INSERT INTO tag VALUES (149,'organization alternative name','410__a'); INSERT INTO tag VALUES (150,'organization main from other record','510__a'); INSERT INTO tag VALUES (151,'authority: uniform title','130__a'); INSERT INTO tag VALUES (152,'authority: uniform title alternatives','430__a'); INSERT INTO tag VALUES (153,'authority: uniform title from other record','530__a'); INSERT INTO tag VALUES (154,'authority: subject from other record','150__a'); INSERT INTO tag VALUES (155,'authority: subject alternative name','450__a'); INSERT INTO tag VALUES (156,'authority: subject main name','550__a'); -- tags for misc index INSERT INTO tag VALUES (157,'031x','031%'); INSERT INTO tag VALUES (158,'032x','032%'); INSERT INTO tag VALUES (159,'033x','033%'); INSERT INTO tag VALUES (160,'034x','034%'); INSERT INTO tag 
VALUES (161,'035x','035%'); INSERT INTO tag VALUES (162,'036x','036%'); INSERT INTO tag VALUES (163,'037x','037%'); INSERT INTO tag VALUES (164,'038x','038%'); INSERT INTO tag VALUES (165,'080x','080%'); INSERT INTO tag VALUES (166,'082x','082%'); INSERT INTO tag VALUES (167,'083x','083%'); INSERT INTO tag VALUES (168,'084x','084%'); INSERT INTO tag VALUES (169,'085x','085%'); INSERT INTO tag VALUES (170,'086x','086%'); INSERT INTO tag VALUES (171,'240x','240%'); INSERT INTO tag VALUES (172,'242x','242%'); INSERT INTO tag VALUES (173,'243x','243%'); INSERT INTO tag VALUES (174,'244x','244%'); INSERT INTO tag VALUES (175,'247x','247%'); INSERT INTO tag VALUES (176,'521x','521%'); INSERT INTO tag VALUES (177,'522x','522%'); INSERT INTO tag VALUES (178,'524x','524%'); INSERT INTO tag VALUES (179,'525x','525%'); INSERT INTO tag VALUES (180,'526x','526%'); INSERT INTO tag VALUES (181,'650x','650%'); INSERT INTO tag VALUES (182,'651x','651%'); INSERT INTO tag VALUES (183,'6531_v','6531_v'); INSERT INTO tag VALUES (184,'6531_y','6531_y'); INSERT INTO tag VALUES (185,'6531_9','6531_9'); INSERT INTO tag VALUES (186,'654x','654%'); INSERT INTO tag VALUES (187,'655x','655%'); INSERT INTO tag VALUES (188,'656x','656%'); INSERT INTO tag VALUES (189,'657x','657%'); INSERT INTO tag VALUES (190,'658x','658%'); INSERT INTO tag VALUES (191,'711x','711%'); INSERT INTO tag VALUES (192,'900x','900%'); INSERT INTO tag VALUES (193,'901x','901%'); INSERT INTO tag VALUES (194,'902x','902%'); INSERT INTO tag VALUES (195,'903x','903%'); INSERT INTO tag VALUES (196,'904x','904%'); INSERT INTO tag VALUES (197,'905x','905%'); INSERT INTO tag VALUES (198,'906x','906%'); INSERT INTO tag VALUES (199,'907x','907%'); INSERT INTO tag VALUES (200,'908x','908%'); INSERT INTO tag VALUES (201,'909C1x','909C1%'); INSERT INTO tag VALUES (202,'909C5x','909C5%'); INSERT INTO tag VALUES (203,'909CSx','909CS%'); INSERT INTO tag VALUES (204,'909COx','909CO%'); INSERT INTO tag VALUES (205,'909CKx','909CK%'); INSERT INTO tag VALUES (206,'909CPx','909CP%'); INSERT INTO tag VALUES (207,'981x','981%'); INSERT INTO tag VALUES (208,'982x','982%'); INSERT INTO tag VALUES (209,'983x','983%'); INSERT INTO tag VALUES (210,'984x','984%'); INSERT INTO tag VALUES (211,'985x','985%'); INSERT INTO tag VALUES (212,'986x','986%'); INSERT INTO tag VALUES (213,'987x','987%'); INSERT INTO tag VALUES (214,'988x','988%'); INSERT INTO tag VALUES (215,'989x','989%'); -- authority controled tags INSERT INTO tag VALUES (216,'author control','100__0'); -INSERT INTO tag VALUES (217,'institution control','110__0'); +INSERT INTO tag VALUES (217,'institute control','110__0'); INSERT INTO tag VALUES (218,'journal control','130__0'); INSERT INTO tag VALUES (219,'subject control','150__0'); -INSERT INTO tag VALUES (220,'additional institution control', '260__0'); +INSERT INTO tag VALUES (220,'additional institute control', '260__0'); INSERT INTO tag VALUES (221,'additional author control', '700__0'); INSERT INTO idxINDEX VALUES (1,'global','This index contains words/phrases from global fields.','0000-00-00 00:00:00', '', 'native', 'INDEX-SYNONYM-TITLE,exact','No','No','No','BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (2,'collection','This index contains words/phrases from collection identifiers fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (3,'abstract','This index contains words/phrases from abstract fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 
'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (4,'author','This index contains fuzzy words/phrases from author fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer'); INSERT INTO idxINDEX VALUES (5,'keyword','This index contains words/phrases from keyword fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (6,'reference','This index contains words/phrases from references fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (7,'reportnumber','This index contains words/phrases from report numbers fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (8,'title','This index contains words/phrases from title fields.','0000-00-00 00:00:00', '', 'native','INDEX-SYNONYM-TITLE,exact','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (9,'fulltext','This index contains words/phrases from fulltext fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexFulltextTokenizer'); INSERT INTO idxINDEX VALUES (10,'year','This index contains words/phrases from year fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexYearTokenizer'); INSERT INTO idxINDEX VALUES (11,'journal','This index contains words/phrases from journal publication information fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexJournalTokenizer'); INSERT INTO idxINDEX VALUES (12,'collaboration','This index contains words/phrases from collaboration name fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); -INSERT INTO idxINDEX VALUES (13,'affiliation','This index contains words/phrases from institutional affiliation fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); +INSERT INTO idxINDEX VALUES (13,'affiliation','This index contains words/phrases from affiliation fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (14,'exactauthor','This index contains exact words/phrases from author fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexExactAuthorTokenizer'); INSERT INTO idxINDEX VALUES (15,'caption','This index contains exact words/phrases from figure captions.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (16,'firstauthor','This index contains fuzzy words/phrases from first author field.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer'); INSERT INTO idxINDEX VALUES (17,'exactfirstauthor','This index contains exact words/phrases from first author field.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexExactAuthorTokenizer'); INSERT INTO idxINDEX VALUES (18,'authorcount','This index contains number of authors of the record.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorCountTokenizer'); INSERT INTO idxINDEX VALUES (19,'exacttitle','This index contains exact words/phrases from title fields.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (20,'authorityauthor','This index contains words/phrases from author authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexAuthorTokenizer'); -INSERT INTO idxINDEX 
VALUES (21,'authorityinstitution','This index contains words/phrases from institution authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); +INSERT INTO idxINDEX VALUES (21,'authorityinstitute','This index contains words/phrases from institute authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (22,'authorityjournal','This index contains words/phrases from journal authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (23,'authoritysubject','This index contains words/phrases from subject authority records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX VALUES (24,'itemcount','This index contains number of copies of items in the library.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexItemCountTokenizer'); INSERT INTO idxINDEX VALUES (25,'filetype','This index contains extensions of files connected to records.','0000-00-00 00:00:00', '', 'native', '','No','No','No', 'BibIndexFiletypeTokenizer'); INSERT INTO idxINDEX VALUES (26,'miscellaneous','This index contains words/phrases from miscellaneous fields','0000-00-00 00:00:00', '', 'native','','No','No','No', 'BibIndexDefaultTokenizer'); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (1,1); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (2,10); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (3,4); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (4,3); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (5,5); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (6,8); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (7,6); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (8,2); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (9,9); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (10,12); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (11,19); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (12,20); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (13,21); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (14,22); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (15,27); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (16,28); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (17,29); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (18,30); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (19,32); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (20,33); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (21,34); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (22,35); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (23,36); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (24,37); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (25,38); INSERT INTO idxINDEX_field (id_idxINDEX, id_field) VALUES (26,39); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 2); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 3); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 5); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 7); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 8); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 10); INSERT INTO 
idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 11); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 12); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 13); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 19); INSERT INTO idxINDEX_idxINDEX (id_virtual, id_normal) VALUES (1, 26); INSERT INTO sbmACTION VALUES ('Submit New Record','SBI','running','1998-08-17','2001-08-08','','Submit New Record'); INSERT INTO sbmACTION VALUES ('Modify Record','MBI','modify','1998-08-17','2001-11-07','','Modify Record'); INSERT INTO sbmACTION VALUES ('Submit New File','SRV','revise','0000-00-00','2001-11-07','','Submit New File'); INSERT INTO sbmACTION VALUES ('Approve Record','APP','approve','2001-11-08','2002-06-11','','Approve Record'); INSERT INTO sbmALLFUNCDESCR VALUES ('Ask_For_Record_Details_Confirmation',''); INSERT INTO sbmALLFUNCDESCR VALUES ('CaseEDS',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Modify_Interface',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Recid',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Finish_Submission',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Info',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Recid', 'This function gets the recid for a document with a given report-number (as stored in the global variable rn).'); INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Report_Number',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Get_Sysno',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Insert_Modify_Record',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Insert_Record',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Is_Original_Submitter',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Is_Referee','This function checks whether the logged user is a referee for the current document'); INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Approval_Request_to_Referee',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Approval_Withdrawn_to_Referee',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Mail_Submitter',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Modify_Record',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Record',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_From_Pending',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_to_Done',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_to_Pending',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_Approval_Request',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_APP',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_DEL','Prepare a message for the user informing them that their record was successfully deleted.'); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_MBI',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Print_Success_SRV',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Register_Approval_Request',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Register_Referee_Decision',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Withdraw_Approval_Request',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Report_Number_Generation',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Second_Report_Number_Generation','Generate a secondary report number for a document.'); INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Approval_Request',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Send_APP_Mail',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Delete_Mail',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Send_Modify_Mail',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Send_SRV_Mail',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Set_Embargo','Set an embargo on all the documents of a 
given record.'); INSERT INTO sbmALLFUNCDESCR VALUES ('Stamp_Replace_Single_File_Approval','Stamp a single file when a document is approved.'); INSERT INTO sbmALLFUNCDESCR VALUES ('Stamp_Uploaded_Files','Stamp some of the files that were uploaded during a submission.'); INSERT INTO sbmALLFUNCDESCR VALUES ('Test_Status',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Update_Approval_DB',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('User_is_Record_Owner_or_Curator','Check if user is owner or special editor of a record'); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Files_to_Storage','Attach files received from chosen file input element(s)'); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Revised_Files_to_Storage','Revise files initially uploaded with "Move_Files_to_Storage"'); INSERT INTO sbmALLFUNCDESCR VALUES ('Make_Dummy_MARC_XML_Record',''); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_CKEditor_Files_to_Storage','Transfer files attached to the record with the CKEditor'); INSERT INTO sbmALLFUNCDESCR VALUES ('Create_Upload_Files_Interface','Display generic interface to add/revise/delete files. To be used before function "Move_Uploaded_Files_to_Storage"'); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Uploaded_Files_to_Storage','Attach files uploaded with "Create_Upload_Files_Interface"'); INSERT INTO sbmALLFUNCDESCR VALUES ('Move_Photos_to_Storage','Attach/edit the pictures uploaded with the "create_photos_manager_interface()" function'); INSERT INTO sbmALLFUNCDESCR VALUES ('Link_Records','Link two records together via MARC'); INSERT INTO sbmALLFUNCDESCR VALUES ('Video_Processing',NULL); INSERT INTO sbmALLFUNCDESCR VALUES ('Set_RN_From_Sysno', 'Set the value of global rn variable to the report number identified by sysno (recid)'); INSERT INTO sbmALLFUNCDESCR VALUES ('Notify_URL','Access URL, possibly to post content'); INSERT INTO sbmALLFUNCDESCR VALUES ('Run_PlotExtractor','Run PlotExtractor on the current record'); INSERT INTO sbmFIELDDESC VALUES ('Upload_Photos',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of an element that creates a photos upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Photos_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record.
More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\n\r\nfrom invenio.legacy.websubmit.functions.Shared_Functions import ParamFromFile\r\nfrom invenio.websubmit_functions.Move_Photos_to_Storage import \\\r\n read_param_file, \\\r\n create_photos_manager_interface, \\\r\n get_session_id\r\n\r\n# Retrieve session id\r\ntry:\r\n # User info is defined only in MBI/MPI actions...\r\n session_id = get_session_id(None, uid, user_info) \r\nexcept:\r\n session_id = get_session_id(req, uid, {})\r\n\r\n# Retrieve context\r\nindir = curdir.split(\'/\')[-3]\r\ndoctype = curdir.split(\'/\')[-2]\r\naccess = curdir.split(\'/\')[-1]\r\n\r\n# Get the record ID, if any\r\nsysno = ParamFromFile(\"%s/%s\" % (curdir,\'SN\')).strip()\r\n\r\n\"\"\"\r\nModify below the configuration of the photos manager interface.\r\nNote: `can_reorder_photos\' parameter is not yet fully taken into consideration\r\n\r\nDocumentation of the function is available at \r\n\"\"\"\r\ntext += create_photos_manager_interface(sysno, session_id, uid,\r\n doctype, indir, curdir, access,\r\n can_delete_photos=True,\r\n can_reorder_photos=True,\r\n can_upload_photos=True,\r\n editor_width=700,\r\n editor_height=400,\r\n initial_slider_value=100,\r\n max_slider_value=200,\r\n min_slider_value=80)','0000-00-00','0000-00-00',NULL,NULL,0); INSERT INTO sbmCHECKS VALUES ('AUCheck','function AUCheck(txt) {\r\n var res=1;\r\n tmp=txt.indexOf(\"\\015\");\r\n while (tmp != -1) {\r\n left=txt.substring(0,tmp);\r\n right=txt.substring(tmp+2,txt.length);\r\n txt=left + \"\\012\" + right;\r\n tmp=txt.indexOf(\"\\015\");\r\n }\r\n tmp=txt.indexOf(\"\\012\");\r\n if (tmp==-1){\r\n line=txt;\r\n txt=\'\';}\r\n else{\r\n line=txt.substring(0,tmp);\r\n txt=txt.substring(tmp+1,txt.length);}\r\n while (line != \"\"){\r\n coma=line.indexOf(\",\");\r\n left=line.substring(0,coma);\r\n right=line.substring(coma+1,line.length);\r\n coma2=right.indexOf(\",\");\r\n space=right.indexOf(\" \");\r\n if ((coma==-1)||(left==\"\")||(right==\"\")||(space!=0)||(coma2!=-1)){\r\n res=0;\r\n error_log=line;\r\n }\r\n tmp=txt.indexOf(\"\\012\");\r\n if (tmp==-1){\r\n line=txt;\r\n txt=\'\';}\r\n else{\r\n line=txt.substring(0,tmp-1);\r\n txt=txt.substring(tmp+1,txt.length);}\r\n }\r\n if (res == 0){\r\n alert(\"This author name cannot be managed \\: \\012\\012\" + error_log + \" \\012\\012It is not in the required format!\\012Put one author per line and a comma (,) between the name and the firstname initial letters. 
\\012The name is going first, followed by the firstname initial letters.\\012Do not forget the whitespace after the comma!!!\\012\\012Example \\: Put\\012\\012Le Meur, J Y \\012Baron, T \\012\\012for\\012\\012Le Meur Jean-Yves & Baron Thomas.\");\r\n return 0;\r\n } \r\n return 1; \r\n}','1998-08-18','0000-00-00','',''); INSERT INTO sbmCHECKS VALUES ('DatCheckNew','function DatCheckNew(txt) {\r\n var res=1;\r\n if (txt.length != 10){res=0;}\r\n if (txt.indexOf(\"/\") != 2){res=0;}\r\n if (txt.lastIndexOf(\"/\") != 5){res=0;}\r\n tmp=parseInt(txt.substring(0,2),10);\r\n if ((tmp > 31)||(tmp < 1)||(isNaN(tmp))){res=0;}\r\n tmp=parseInt(txt.substring(3,5),10);\r\n if ((tmp > 12)||(tmp < 1)||(isNaN(tmp))){res=0;}\r\n tmp=parseInt(txt.substring(6,10),10);\r\n if ((tmp < 1)||(isNaN(tmp))){res=0;}\r\n if (txt.length == 0){res=1;}\r\n if (res == 0){\r\n alert(\"Please enter a correct Date \\012Format: dd/mm/yyyy\");\r\n return 0;\r\n }\r\n return 1; \r\n}','0000-00-00','0000-00-00','',''); INSERT INTO sbmFIELDDESC VALUES ('Upload_Files',NULL,'','R',NULL,NULL,NULL,NULL,NULL,'\"\"\"\r\nThis is an example of element that creates a file upload interface.\r\nClone it, customize it and integrate it into your submission. Then add function \r\n\'Move_Uploaded_Files_to_Storage\' to your submission functions list, in order for files \r\nuploaded with this interface to be attached to the record. More information in \r\nthe WebSubmit admin guide.\r\n\"\"\"\r\nimport os\r\nfrom invenio.legacy.bibdocfile.managedocfiles import create_file_upload_interface\r\nfrom invenio.legacy.websubmit.functions.Shared_Functions import ParamFromFile\r\n\r\nindir = ParamFromFile(os.path.join(curdir, \'indir\'))\r\ndoctype = ParamFromFile(os.path.join(curdir, \'doctype\'))\r\naccess = ParamFromFile(os.path.join(curdir, \'access\'))\r\ntry:\r\n sysno = int(ParamFromFile(os.path.join(curdir, \'SN\')).strip())\r\nexcept:\r\n sysno = -1\r\nln = ParamFromFile(os.path.join(curdir, \'ln\'))\r\n\r\n\"\"\"\r\nRun the following to get the list of parameters of function \'create_file_upload_interface\':\r\necho -e \'from invenio.legacy.bibdocfile.managedocfiles import create_file_upload_interface as f\\nprint f.__doc__\' | python\r\n\"\"\"\r\ntext = create_file_upload_interface(recid=sysno,\r\n print_outside_form_tag=False,\r\n include_headers=True,\r\n ln=ln,\r\n doctypes_and_desc=[(\'main\',\'Main document\'),\r\n (\'additional\',\'Figure, schema, etc.\')],\r\n can_revise_doctypes=[\'*\'],\r\n can_describe_doctypes=[\'main\'],\r\n can_delete_doctypes=[\'additional\'],\r\n can_rename_doctypes=[\'main\'],\r\n sbm_indir=indir, sbm_doctype=doctype, sbm_access=access)[1]\r\n','0000-00-00','0000-00-00',NULL,NULL,0); INSERT INTO sbmFORMATEXTENSION VALUES ('WORD','.doc'); INSERT INTO sbmFORMATEXTENSION VALUES ('PostScript','.ps'); INSERT INTO sbmFORMATEXTENSION VALUES ('PDF','.pdf'); INSERT INTO sbmFORMATEXTENSION VALUES ('JPEG','.jpg'); INSERT INTO sbmFORMATEXTENSION VALUES ('JPEG','.jpeg'); INSERT INTO sbmFORMATEXTENSION VALUES ('GIF','.gif'); INSERT INTO sbmFORMATEXTENSION VALUES ('PPT','.ppt'); INSERT INTO sbmFORMATEXTENSION VALUES ('HTML','.htm'); INSERT INTO sbmFORMATEXTENSION VALUES ('HTML','.html'); INSERT INTO sbmFORMATEXTENSION VALUES ('Latex','.tex'); INSERT INTO sbmFORMATEXTENSION VALUES ('Compressed PostScript','.ps.gz'); INSERT INTO sbmFORMATEXTENSION VALUES ('Tarred Tex (.tar)','.tar'); INSERT INTO sbmFORMATEXTENSION VALUES ('Text','.txt'); INSERT INTO sbmFUNDESC VALUES ('Get_Recid','record_search_pattern'); INSERT INTO sbmFUNDESC 
VALUES ('Get_Report_Number','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','addressesMBI'); INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','sourceDoc'); INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','categ_file_appreq'); INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','categ_rnseek_appreq'); INSERT INTO sbmFUNDESC VALUES ('Register_Approval_Request','note_file_appreq'); INSERT INTO sbmFUNDESC VALUES ('Register_Referee_Decision','decision_file'); INSERT INTO sbmFUNDESC VALUES ('Withdraw_Approval_Request','categ_file_withd'); INSERT INTO sbmFUNDESC VALUES ('Withdraw_Approval_Request','categ_rnseek_withd'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','autorngen'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','rnin'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','counterpath'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','rnformat'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','yeargen'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','nblength'); INSERT INTO sbmFUNDESC VALUES ('Report_Number_Generation','initialvalue'); INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','categ_file_appreq'); INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','categ_rnseek_appreq'); INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Request_to_Referee','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Withdrawn_to_Referee','categ_file_withd'); INSERT INTO sbmFUNDESC VALUES ('Mail_Approval_Withdrawn_to_Referee','categ_rnseek_withd'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','authorfile'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','status'); INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','authorfile'); INSERT INTO sbmFUNDESC VALUES ('Create_Modify_Interface','fieldnameMBI'); INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','fieldnameMBI'); INSERT INTO sbmFUNDESC VALUES ('Update_Approval_DB','categformatDAM'); INSERT INTO sbmFUNDESC VALUES ('Update_Approval_DB','decision_file'); INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','categformatDAM'); INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','addressesSRV'); INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','directory'); INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','categformatDAM'); INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','addressesDAM'); INSERT INTO sbmFUNDESC VALUES ('Send_Approval_Request','titleFile'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','titleFile'); INSERT INTO sbmFUNDESC VALUES ('Send_Modify_Mail','emailFile'); INSERT INTO sbmFUNDESC VALUES ('Get_Info','authorFile'); INSERT INTO sbmFUNDESC VALUES ('Get_Info','emailFile'); INSERT INTO sbmFUNDESC VALUES ('Get_Info','titleFile'); INSERT INTO sbmFUNDESC VALUES ('Make_Modify_Record','modifyTemplate'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','addressesAPP'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','categformatAPP'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','newrnin'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','decision_file'); INSERT INTO sbmFUNDESC VALUES ('Send_APP_Mail','comments_file'); INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casevariable'); INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casevalues'); INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casesteps'); INSERT INTO sbmFUNDESC VALUES ('CaseEDS','casedefault'); INSERT INTO sbmFUNDESC VALUES 
('Send_SRV_Mail','noteFile'); INSERT INTO sbmFUNDESC VALUES ('Send_SRV_Mail','emailFile'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','emailFile'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Mail_Submitter','newrnin'); INSERT INTO sbmFUNDESC VALUES ('Make_Record','sourceTemplate'); INSERT INTO sbmFUNDESC VALUES ('Make_Record','createTemplate'); INSERT INTO sbmFUNDESC VALUES ('Print_Success','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Print_Success','newrnin'); INSERT INTO sbmFUNDESC VALUES ('Print_Success','status'); INSERT INTO sbmFUNDESC VALUES ('Make_Modify_Record','sourceTemplate'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','documenttype'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','iconsize'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_suffixes'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','rename'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_restrictions'); INSERT INTO sbmFUNDESC VALUES ('Move_Files_to_Storage','paths_and_doctypes'); INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','elementNameToDoctype'); INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createIconDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','createRelatedFormats'); INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','iconsize'); INSERT INTO sbmFUNDESC VALUES ('Move_Revised_Files_to_Storage','keepPreviousVersionDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Set_Embargo','date_file'); INSERT INTO sbmFUNDESC VALUES ('Set_Embargo','date_format'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','files_to_be_stamped'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','latex_template'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','latex_template_vars'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','stamp'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','layer'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Uploaded_Files','switch_file'); INSERT INTO sbmFUNDESC VALUES ('Make_Dummy_MARC_XML_Record','dummyrec_source_tpl'); INSERT INTO sbmFUNDESC VALUES ('Make_Dummy_MARC_XML_Record','dummyrec_create_tpl'); INSERT INTO sbmFUNDESC VALUES ('Print_Success_APP','decision_file'); INSERT INTO sbmFUNDESC VALUES ('Print_Success_APP','newrnin'); INSERT INTO sbmFUNDESC VALUES ('Send_Delete_Mail','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Send_Delete_Mail','record_managers'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_file'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_format'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rn_yeargen'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_rncateg_file'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_counterpath'); INSERT INTO sbmFUNDESC VALUES ('Second_Report_Number_Generation','2nd_nb_length'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','file_to_be_stamped'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','latex_template'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','latex_template_vars'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','new_file_name'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','stamp'); INSERT INTO sbmFUNDESC VALUES ('Stamp_Replace_Single_File_Approval','layer'); INSERT INTO sbmFUNDESC VALUES 
('Stamp_Replace_Single_File_Approval','switch_file'); INSERT INTO sbmFUNDESC VALUES ('Move_CKEditor_Files_to_Storage','input_fields'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxsize'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','minsize'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','doctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictions'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDeleteDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canReviseDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canDescribeDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canCommentDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canKeepDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canAddFormatDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRestrictDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canRenameDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','canNameNewFiles'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','createRelatedFormats'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','keepDefault'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','showLinks'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','fileLabel'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','filenameLabel'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','descriptionLabel'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','commentLabel'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','restrictionLabel'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','startDoc'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','endDoc'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','defaultFilenameDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Create_Upload_Files_Interface','maxFilesDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','iconsize'); INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','createIconDoctypes'); INSERT INTO sbmFUNDESC VALUES ('Move_Uploaded_Files_to_Storage','forceFileRevision'); INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconsize'); INSERT INTO sbmFUNDESC VALUES ('Move_Photos_to_Storage','iconformat'); INSERT INTO sbmFUNDESC VALUES ('User_is_Record_Owner_or_Curator','curator_role'); INSERT INTO sbmFUNDESC VALUES ('User_is_Record_Owner_or_Curator','curator_flag'); INSERT INTO sbmFUNDESC VALUES ('Link_Records','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Link_Records','edsrn2'); INSERT INTO sbmFUNDESC VALUES ('Link_Records','directRelationship'); INSERT INTO sbmFUNDESC VALUES ('Link_Records','reverseRelationship'); INSERT INTO sbmFUNDESC VALUES ('Link_Records','keep_original_edsrn2'); INSERT INTO sbmFUNDESC VALUES ('Video_Processing','aspect'); INSERT INTO sbmFUNDESC VALUES ('Video_Processing','batch_template'); INSERT INTO sbmFUNDESC VALUES ('Video_Processing','title'); INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','edsrn'); INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','rep_tags'); INSERT INTO sbmFUNDESC VALUES ('Set_RN_From_Sysno','record_search_pattern'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','url'); INSERT INTO sbmFUNDESC VALUES 
('Notify_URL','data'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','admin_emails'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','content_type'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','attempt_times'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','attempt_sleeptime'); INSERT INTO sbmFUNDESC VALUES ('Notify_URL','user'); INSERT INTO sbmFUNDESC VALUES ('Run_PlotExtractor','with_docname'); INSERT INTO sbmFUNDESC VALUES ('Run_PlotExtractor','with_doctype'); INSERT INTO sbmFUNDESC VALUES ('Run_PlotExtractor','with_docformat'); INSERT INTO sbmFUNDESC VALUES ('Run_PlotExtractor','extract_plots_switch_file'); INSERT INTO sbmGFILERESULT VALUES ('HTML','HTML document'); INSERT INTO sbmGFILERESULT VALUES ('WORD','data'); INSERT INTO sbmGFILERESULT VALUES ('PDF','PDF document'); INSERT INTO sbmGFILERESULT VALUES ('PostScript','PostScript document'); INSERT INTO sbmGFILERESULT VALUES ('PostScript','data '); INSERT INTO sbmGFILERESULT VALUES ('PostScript','HP Printer Job Language data'); INSERT INTO sbmGFILERESULT VALUES ('jpg','JPEG image'); INSERT INTO sbmGFILERESULT VALUES ('Compressed PostScript','gzip compressed data'); INSERT INTO sbmGFILERESULT VALUES ('Tarred Tex (.tar)','tar archive'); INSERT INTO sbmGFILERESULT VALUES ('JPEG','JPEG image'); INSERT INTO sbmGFILERESULT VALUES ('GIF','GIF'); INSERT INTO swrREMOTESERVER VALUES (1, 'arXiv', 'arxiv.org', 'CDS_Invenio', 'sword_invenio', 'admin', 'SWORD at arXiv', 'http://arxiv.org/abs', 'https://arxiv.org/sword-app/servicedocument', '', 0); -- end of file diff --git a/invenio/legacy/search_engine/__init__.py b/invenio/legacy/search_engine/__init__.py index 43527ebf2..d36fc0f6f 100644 --- a/invenio/legacy/search_engine/__init__.py +++ b/invenio/legacy/search_engine/__init__.py @@ -1,6727 +1,6727 @@ # -*- coding: utf-8 -*- ## This file is part of Invenio. ## Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
# pylint: disable=C0301 """Invenio Search Engine in mod_python.""" __lastupdated__ = """$Date$""" __revision__ = "$Id$" ## import general modules: import cgi import cStringIO import copy import string import os import re import time import urllib import urlparse import zlib import sys try: ## import optional module: import numpy CFG_NUMPY_IMPORTABLE = True except: CFG_NUMPY_IMPORTABLE = False if sys.hexversion < 0x2040000: # pylint: disable=W0622 from sets import Set as set # pylint: enable=W0622 from six import iteritems ## import Invenio stuff: from invenio.base.globals import cfg from invenio.config import \ CFG_CERN_SITE, \ CFG_INSPIRE_SITE, \ CFG_OAI_ID_FIELD, \ CFG_WEBCOMMENT_ALLOW_REVIEWS, \ CFG_WEBSEARCH_CALL_BIBFORMAT, \ CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX, \ CFG_WEBSEARCH_FIELDS_CONVERT, \ CFG_WEBSEARCH_NB_RECORDS_TO_SORT, \ CFG_WEBSEARCH_SEARCH_CACHE_SIZE, \ CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT, \ CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS, \ CFG_WEBSEARCH_USE_ALEPH_SYSNOS, \ CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, \ CFG_WEBSEARCH_FULLTEXT_SNIPPETS, \ CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS, \ CFG_WEBSEARCH_WILDCARD_LIMIT, \ CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE, \ CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG, \ CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS, \ CFG_WEBSEARCH_SYNONYM_KBRS, \ CFG_SITE_LANG, \ CFG_SITE_NAME, \ CFG_LOGDIR, \ CFG_SITE_URL, \ CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS, \ CFG_SOLR_URL, \ CFG_WEBSEARCH_DETAILED_META_FORMAT, \ CFG_SITE_RECORD, \ CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT, \ CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY, \ CFG_BIBSORT_BUCKETS, \ CFG_XAPIAN_ENABLED, \ CFG_BIBINDEX_CHARS_PUNCTUATION from invenio.modules.search.errors import \ InvenioWebSearchUnknownCollectionError, \ InvenioWebSearchWildcardLimitError from invenio.legacy.bibrecord import get_fieldvalues, get_fieldvalues_alephseq_like from invenio.legacy.bibrecord import create_record, record_xml_output from invenio.legacy.bibrank.record_sorter import get_bibrank_methods, is_method_valid, rank_records as rank_records_bibrank from invenio.legacy.bibrank.downloads_similarity import register_page_view_event, calculate_reading_similarity_list from invenio.legacy.bibindex.engine_stemmer import stem from invenio.modules.indexer.tokenizers.BibIndexDefaultTokenizer import BibIndexDefaultTokenizer from invenio.modules.indexer.tokenizers.BibIndexCJKTokenizer import BibIndexCJKTokenizer, is_there_any_CJK_character_in_text from invenio.legacy.bibindex.engine_utils import author_name_requires_phrase_search from invenio.legacy.bibindex.engine_washer import wash_index_term, lower_index_term, wash_author_name from invenio.legacy.bibindex.engine_config import CFG_BIBINDEX_SYNONYM_MATCH_TYPE from invenio.legacy.bibindex.adminlib import get_idx_indexer from invenio.modules.formatter import format_record, format_records, get_output_format_content_type, create_excel from invenio.modules.formatter.config import CFG_BIBFORMAT_USE_OLD_BIBFORMAT from invenio.legacy.bibrank.downloads_grapher import create_download_history_graph_and_box from invenio.modules.knowledge.api import get_kbr_values from invenio.legacy.miscutil.data_cacher import DataCacher from invenio.legacy.websearch_external_collections import print_external_results_overview, perform_external_collection_search from invenio.modules.access.control import acc_get_action_id from invenio.modules.access.local_config import VIEWRESTRCOLL, \ CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS, \ CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS from invenio.legacy.websearch.adminlib import 
get_detailed_page_tabs, get_detailed_page_tabs_counts from intbitset import intbitset from invenio.legacy.dbquery import DatabaseError, deserialize_via_marshal, InvenioDbQueryWildcardLimitError from invenio.modules.access.engine import acc_authorize_action from invenio.ext.logging import register_exception from invenio.ext.cache import cache from invenio.utils.text import encode_for_xml, wash_for_utf8, strip_accents from invenio.utils.html import get_mathjax_header from invenio.utils.html import nmtoken_from_string import invenio.legacy.template webstyle_templates = invenio.legacy.template.load('webstyle') webcomment_templates = invenio.legacy.template.load('webcomment') from invenio.legacy.bibrank.citation_searcher import calculate_cited_by_list, \ calculate_co_cited_with_list, get_records_with_num_cites, get_self_cited_by, \ get_refersto_hitset, get_citedby_hitset from invenio.legacy.bibrank.citation_grapher import create_citation_history_graph_and_box from invenio.legacy.dbquery import run_sql, run_sql_with_limit, wash_table_column_name, \ get_table_update_time from invenio.legacy.webuser import getUid, collect_user_info, session_param_set from invenio.legacy.webpage import pageheaderonly, pagefooteronly, create_error_box, write_warning from invenio.base.i18n import gettext_set_language from invenio.legacy.search_engine.query_parser import SearchQueryParenthesisedParser, \ SpiresToInvenioSyntaxConverter from invenio.utils import apache from invenio.legacy.miscutil.solrutils_bibindex_searcher import solr_get_bitset from invenio.legacy.miscutil.xapianutils_bibindex_searcher import xapian_get_bitset from invenio.modules.search import services try: import invenio.legacy.template websearch_templates = invenio.legacy.template.load('websearch') except: pass from invenio.legacy.websearch_external_collections import calculate_hosted_collections_results, do_calculate_hosted_collections_results from invenio.legacy.websearch_external_collections.config import CFG_HOSTED_COLLECTION_TIMEOUT_ANTE_SEARCH from invenio.legacy.websearch_external_collections.config import CFG_HOSTED_COLLECTION_TIMEOUT_POST_SEARCH from invenio.legacy.websearch_external_collections.config import CFG_EXTERNAL_COLLECTION_MAXRESULTS VIEWRESTRCOLL_ID = acc_get_action_id(VIEWRESTRCOLL) ## global vars: cfg_nb_browse_seen_records = 100 # limit of the number of records to check when browsing a certain collection cfg_nicely_ordered_collection_list = 0 # do we propose the collection list nicely ordered or alphabetically?
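## A quick illustration of how the precompiled patterns below are used by ## create_basic_search_units() further down: re_logical_and rewrites "muon and kaon" ## into "muon kaon", while re_pattern_single_quotes and re_pattern_double_quotes ## temporarily protect spaces inside quoted phrases (via the "__SPACE__" marker) so ## that each phrase survives the whitespace split as a single search unit. ## Schematically, the query: muon and 'dark matter' ## is parsed into one word unit and one phrase (ACC) unit: ## [['+', 'muon', f, 'w'], ['+', '%dark matter%', f, 'a']]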
## precompile some often-used regexp for speed reasons: re_word = re.compile('[\s]') re_quotes = re.compile('[\'\"]') re_doublequote = re.compile('\"') re_logical_and = re.compile('\sand\s', re.I) re_logical_or = re.compile('\sor\s', re.I) re_logical_not = re.compile('\snot\s', re.I) re_operators = re.compile(r'\s([\+\-\|])\s') re_pattern_wildcards_after_spaces = re.compile(r'(\s)[\*\%]+') re_pattern_single_quotes = re.compile("'(.*?)'") re_pattern_double_quotes = re.compile("\"(.*?)\"") re_pattern_parens_quotes = re.compile(r'[\'\"]{1}[^\'\"]*(\([^\'\"]*\))[^\'\"]*[\'\"]{1}') re_pattern_regexp_quotes = re.compile("\/(.*?)\/") re_pattern_spaces_after_colon = re.compile(r'(:\s+)') re_pattern_short_words = re.compile(r'([\s\"]\w{1,3})[\*\%]+') re_pattern_space = re.compile("__SPACE__") re_pattern_today = re.compile("\$TODAY\$") re_pattern_parens = re.compile(r'\([^\)]+\s+[^\)]+\)') re_punctuation_followed_by_space = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION + '\s') ## em possible values EM_REPOSITORY={"body" : "B", "header" : "H", "footer" : "F", "search_box" : "S", "see_also_box" : "L", "basket" : "K", "alert" : "A", "search_info" : "I", "overview" : "O", "all_portalboxes" : "P", "te_portalbox" : "Pte", "tp_portalbox" : "Ptp", "np_portalbox" : "Pnp", "ne_portalbox" : "Pne", "lt_portalbox" : "Plt", "rt_portalbox" : "Prt", "search_services": "SER"}; class RestrictedCollectionDataCacher(DataCacher): def __init__(self): def cache_filler(): ret = [] try: res = run_sql("""SELECT DISTINCT ar.value FROM accROLE_accACTION_accARGUMENT raa JOIN accARGUMENT ar ON raa.id_accARGUMENT = ar.id WHERE ar.keyword = 'collection' AND raa.id_accACTION = %s""", (VIEWRESTRCOLL_ID,), run_on_slave=True) except Exception: # database problems, return empty cache return [] for coll in res: ret.append(coll[0]) return ret def timestamp_verifier(): return max(get_table_update_time('accROLE_accACTION_accARGUMENT'), get_table_update_time('accARGUMENT')) DataCacher.__init__(self, cache_filler, timestamp_verifier) def collection_restricted_p(collection, recreate_cache_if_needed=True): if recreate_cache_if_needed: restricted_collection_cache.recreate_cache_if_needed() return collection in restricted_collection_cache.cache try: restricted_collection_cache.is_ok_p except Exception: restricted_collection_cache = RestrictedCollectionDataCacher() def ziplist(*lists): """Just like zip(), but returns lists of lists instead of lists of tuples Example: zip([f1, f2, f3], [p1, p2, p3], [op1, op2, '']) => [(f1, p1, op1), (f2, p2, op2), (f3, p3, '')] ziplist([f1, f2, f3], [p1, p2, p3], [op1, op2, '']) => [[f1, p1, op1], [f2, p2, op2], [f3, p3, '']] FIXME: This is handy to have, and should live somewhere else, like miscutil.really_useful_functions or something. XXX: Starting in python 2.6, the same can be achieved (faster) by using itertools.izip_longest(); when the minimum recommended Python is bumped, we should use that instead. 
""" def l(*items): return list(items) return map(l, *lists) def get_permitted_restricted_collections(user_info, recreate_cache_if_needed=True): """Return a list of collection that are restricted but for which the user is authorized.""" if recreate_cache_if_needed: restricted_collection_cache.recreate_cache_if_needed() ret = [] auths = acc_authorize_action( user_info, 'viewrestrcoll', batch_args=True, collection=restricted_collection_cache.cache ) for collection, auth in zip(restricted_collection_cache.cache, auths): if auth[0] == 0: ret.append(collection) return ret def get_all_restricted_recids(): """ Return the set of all the restricted recids, i.e. the ids of those records which belong to at least one restricted collection. """ ret = intbitset() for collection in restricted_collection_cache.cache: ret |= get_collection_reclist(collection) return ret def get_restricted_collections_for_recid(recid, recreate_cache_if_needed=True): """ Return the list of restricted collection names to which recid belongs. """ if recreate_cache_if_needed: restricted_collection_cache.recreate_cache_if_needed() collection_reclist_cache.recreate_cache_if_needed() return [collection for collection in restricted_collection_cache.cache if recid in get_collection_reclist(collection, recreate_cache_if_needed=False)] def is_user_owner_of_record(user_info, recid): """ Check if the user is owner of the record, i.e. he is the submitter and/or belongs to a owner-like group authorized to 'see' the record. @param user_info: the user_info dictionary that describe the user. @type user_info: user_info dictionary @param recid: the record identifier. @type recid: positive integer @return: True if the user is 'owner' of the record; False otherwise @rtype: bool """ authorized_emails_or_group = [] for tag in CFG_ACC_GRANT_AUTHOR_RIGHTS_TO_EMAILS_IN_TAGS: authorized_emails_or_group.extend(get_fieldvalues(recid, tag)) for email_or_group in authorized_emails_or_group: if email_or_group in user_info['group']: return True email = email_or_group.strip().lower() if user_info['email'].strip().lower() == email: return True return False ###FIXME: This method needs to be refactorized def is_user_viewer_of_record(user_info, recid): """ Check if the user is allow to view the record based in the marc tags inside CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS i.e. his email is inside the 506__m tag or he is inside an e-group listed in the 506__m tag @param user_info: the user_info dictionary that describe the user. @type user_info: user_info dictionary @param recid: the record identifier. @type recid: positive integer @return: True if the user is 'allow to view' the record; False otherwise @rtype: bool """ authorized_emails_or_group = [] for tag in CFG_ACC_GRANT_VIEWER_RIGHTS_TO_EMAILS_IN_TAGS: authorized_emails_or_group.extend(get_fieldvalues(recid, tag)) for email_or_group in authorized_emails_or_group: if email_or_group in user_info['group']: return True email = email_or_group.strip().lower() if user_info['email'].strip().lower() == email: return True return False def check_user_can_view_record(user_info, recid): """ Check if the user is authorized to view the given recid. The function grants access in two cases: either user has author rights on this record, or he has view rights to the primary collection this record belongs to. @param user_info: the user_info dictionary that describe the user. @type user_info: user_info dictionary @param recid: the record identifier. 
@type recid: positive integer @return: (0, '') when authorization is granted, (>0, 'message') when authorization is not granted @rtype: (int, string) """ policy = CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY.strip().upper() if isinstance(recid, str): recid = int(recid) ## At this point, either webcoll has not yet run or there are some ## restricted collections. Let's first see if the user owns the record. if is_user_owner_of_record(user_info, recid): ## Perfect! It's authorized then! return (0, '') if is_user_viewer_of_record(user_info, recid): ## Perfect! It's authorized then! return (0, '') restricted_collections = get_restricted_collections_for_recid(recid, recreate_cache_if_needed=False) if not restricted_collections and record_public_p(recid): ## The record is public and not part of any restricted collection return (0, '') if restricted_collections: ## If there are restricted collections the user must be authorized to all/any of them (depending on the policy) auth_code, auth_msg = 0, '' for collection in restricted_collections: (auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=collection) if auth_code and policy != 'ANY': ## Ouch! The user is not authorized for this collection return (auth_code, auth_msg) elif auth_code == 0 and policy == 'ANY': ## Good! At least one collection is authorized return (0, '') ## Depending on the policy, the user will be either authorized or not return auth_code, auth_msg if is_record_in_any_collection(recid, recreate_cache_if_needed=False): ## the record is not in any restricted collection return (0, '') elif record_exists(recid) > 0: ## We are in the case where webcoll has not run. ## Let's authorize SUPERADMIN (auth_code, auth_msg) = acc_authorize_action(user_info, VIEWRESTRCOLL, collection=None) if auth_code == 0: return (0, '') else: ## Too bad. Let's print a nice message: return (1, """The record you are trying to access has just been submitted to the system and needs to be assigned to the proper collections. It is currently restricted for security reasons until the assignment is fully completed. Please come back later to properly access this record.""") else: ## The record either does not exist or has been deleted. ## Let's handle these situations outside of this code. return (0, '') class IndexStemmingDataCacher(DataCacher): """ Provides cache for stemming information for word/phrase indexes. This class is not to be used directly; use function get_index_stemming_language() instead. """ def __init__(self): def cache_filler(): try: res = run_sql("""SELECT id, stemming_language FROM idxINDEX""") except DatabaseError: # database problems, return empty cache return {} return dict(res) def timestamp_verifier(): return get_table_update_time('idxINDEX') DataCacher.__init__(self, cache_filler, timestamp_verifier) try: index_stemming_cache.is_ok_p except Exception: index_stemming_cache = IndexStemmingDataCacher() def get_index_stemming_language(index_id, recreate_cache_if_needed=True): """Return the stemming language for the given index.""" if recreate_cache_if_needed: index_stemming_cache.recreate_cache_if_needed() return index_stemming_cache.cache[index_id] class FieldTokenizerDataCacher(DataCacher): """ Provides cache for tokenizer information for fields corresponding to indexes. This class is not to be used directly; use function get_field_tokenizer_type() instead.
""" def __init__(self): def cache_filler(): try: res = run_sql("""SELECT fld.code, ind.tokenizer FROM idxINDEX AS ind, field AS fld, idxINDEX_field AS indfld WHERE ind.id = indfld.id_idxINDEX AND indfld.id_field = fld.id""") except DatabaseError: # database problems, return empty cache return {} return dict(res) def timestamp_verifier(): return get_table_update_time('idxINDEX') DataCacher.__init__(self, cache_filler, timestamp_verifier) try: field_tokenizer_cache.is_ok_p except Exception: field_tokenizer_cache = FieldTokenizerDataCacher() def get_field_tokenizer_type(field_name, recreate_cache_if_needed=True): """Return tokenizer type for given field corresponding to an index if applicable.""" if recreate_cache_if_needed: field_tokenizer_cache.recreate_cache_if_needed() tokenizer = None try: tokenizer = field_tokenizer_cache.cache[field_name] except KeyError: return None return tokenizer class CollectionRecListDataCacher(DataCacher): """ Provides cache for collection reclist hitsets. This class is not to be used directly; use function get_collection_reclist() instead. """ def __init__(self): def cache_filler(): ret = {} try: res = run_sql("SELECT name FROM collection") except Exception: # database problems, return empty cache return {} for name in res: ret[name[0]] = None # this will be filled later during runtime by calling get_collection_reclist(coll) return ret def timestamp_verifier(): return get_table_update_time('collection') DataCacher.__init__(self, cache_filler, timestamp_verifier) try: if not collection_reclist_cache.is_ok_p: raise Exception except Exception: collection_reclist_cache = CollectionRecListDataCacher() def get_collection_reclist(coll, recreate_cache_if_needed=True): """Return hitset of recIDs that belong to the collection 'coll'.""" if recreate_cache_if_needed: collection_reclist_cache.recreate_cache_if_needed() if coll not in collection_reclist_cache.cache: return intbitset() # collection does not exist; return empty set if not collection_reclist_cache.cache[coll]: # collection's reclist not in the cache yet, so calculate it # and fill the cache: reclist = intbitset() query = "SELECT nbrecs,reclist FROM collection WHERE name=%s" res = run_sql(query, (coll, ), 1) if res: try: reclist = intbitset(res[0][1]) except: pass collection_reclist_cache.cache[coll] = reclist # finally, return reclist: return collection_reclist_cache.cache[coll] def get_available_output_formats(visible_only=False): """ Return the list of available output formats. When visible_only is True, returns only those output formats that have visibility flag set to 1. """ formats = [] query = "SELECT code,name FROM format" if visible_only: query += " WHERE visibility='1'" query += " ORDER BY name ASC" res = run_sql(query) if res: # propose found formats: for code, name in res: formats.append({ 'value' : code, 'text' : name }) else: formats.append({'value' : 'hb', 'text' : "HTML brief" }) return formats # Flask cache for search results. from invenio.modules.search.cache import search_results_cache, get_search_results_cache_key class CollectionI18nNameDataCacher(DataCacher): """ Provides cache for I18N collection names. This class is not to be used directly; use function get_coll_i18nname() instead. 
""" def __init__(self): def cache_filler(): ret = {} try: res = run_sql("SELECT c.name,cn.ln,cn.value FROM collectionname AS cn, collection AS c WHERE cn.id_collection=c.id AND cn.type='ln'") # ln=long name except Exception: # database problems return {} for c, ln, i18nname in res: if i18nname: if c not in ret: ret[c] = {} ret[c][ln] = i18nname return ret def timestamp_verifier(): return get_table_update_time('collectionname') DataCacher.__init__(self, cache_filler, timestamp_verifier) try: if not collection_i18nname_cache.is_ok_p: raise Exception except Exception: collection_i18nname_cache = CollectionI18nNameDataCacher() def get_coll_i18nname(c, ln=CFG_SITE_LANG, verify_cache_timestamp=True): """ Return nicely formatted collection name (of the name type `ln' (=long name)) for collection C in language LN. This function uses collection_i18nname_cache, but it verifies whether the cache is up-to-date first by default. This verification step is performed by checking the DB table update time. So, if you call this function 1000 times, it can get very slow because it will do 1000 table update time verifications, even though collection names change not that often. Hence the parameter VERIFY_CACHE_TIMESTAMP which, when set to False, will assume the cache is already up-to-date. This is useful namely in the generation of collection lists for the search results page. """ if verify_cache_timestamp: collection_i18nname_cache.recreate_cache_if_needed() out = c try: out = collection_i18nname_cache.cache[c][ln] except KeyError: pass # translation in LN does not exist return out class FieldI18nNameDataCacher(DataCacher): """ Provides cache for I18N field names. This class is not to be used directly; use function get_field_i18nname() instead. """ def __init__(self): def cache_filler(): ret = {} try: res = run_sql("SELECT f.name,fn.ln,fn.value FROM fieldname AS fn, field AS f WHERE fn.id_field=f.id AND fn.type='ln'") # ln=long name except Exception: # database problems, return empty cache return {} for f, ln, i18nname in res: if i18nname: if f not in ret: ret[f] = {} ret[f][ln] = i18nname return ret def timestamp_verifier(): return get_table_update_time('fieldname') DataCacher.__init__(self, cache_filler, timestamp_verifier) try: if not field_i18nname_cache.is_ok_p: raise Exception except Exception: field_i18nname_cache = FieldI18nNameDataCacher() def get_field_i18nname(f, ln=CFG_SITE_LANG, verify_cache_timestamp=True): """ Return nicely formatted field name (of type 'ln', 'long name') for field F in language LN. If VERIFY_CACHE_TIMESTAMP is set to True, then verify DB timestamp and field I18N name cache timestamp and refresh cache from the DB if needed. Otherwise don't bother checking DB timestamp and return the cached value. (This is useful when get_field_i18nname is called inside a loop.) """ if verify_cache_timestamp: field_i18nname_cache.recreate_cache_if_needed() out = f try: out = field_i18nname_cache.cache[f][ln] except KeyError: pass # translation in LN does not exist return out def get_alphabetically_ordered_collection_list(level=0, ln=CFG_SITE_LANG): """Returns nicely ordered (score respected) list of collections, more exactly list of tuples (collection name, printable collection name). Suitable for create_search_box().""" out = [] res = run_sql("SELECT name FROM collection ORDER BY name ASC") for c_name in res: c_name = c_name[0] # make a nice printable name (e.g. 
truncate c_printable for # long collection names in given language): c_printable_fullname = get_coll_i18nname(c_name, ln, False) c_printable = wash_index_term(c_printable_fullname, 30, False) if c_printable != c_printable_fullname: c_printable = c_printable + "..." if level: c_printable = " " + level * '-' + " " + c_printable out.append([c_name, c_printable]) return out def get_nicely_ordered_collection_list(collid=1, level=0, ln=CFG_SITE_LANG): """Returns nicely ordered (score respected) list of collections, more exactly list of tuples (collection name, printable collection name). Suitable for create_search_box().""" colls_nicely_ordered = [] res = run_sql("""SELECT c.name,cc.id_son FROM collection_collection AS cc, collection AS c WHERE c.id=cc.id_son AND cc.id_dad=%s ORDER BY score DESC""", (collid, )) for c, cid in res: # make a nice printable name (e.g. truncate c_printable for # long collection names in given language): c_printable_fullname = get_coll_i18nname(c, ln, False) c_printable = wash_index_term(c_printable_fullname, 30, False) if c_printable != c_printable_fullname: c_printable = c_printable + "..." if level: c_printable = " " + level * '-' + " " + c_printable colls_nicely_ordered.append([c, c_printable]) colls_nicely_ordered = colls_nicely_ordered + get_nicely_ordered_collection_list(cid, level+1, ln=ln) return colls_nicely_ordered def get_index_id_from_field(field): """ Return index id with name corresponding to FIELD, or the first index id where the logical field code named FIELD is indexed. Return zero in case there is no index defined for this field. Example: field='author', output=4. """ out = 0 if not field: field = 'global' # empty string field means 'global' index (field 'anyfield') # first look in the index table: res = run_sql("""SELECT id FROM idxINDEX WHERE name=%s""", (field,)) if res: out = res[0][0] return out # not found in the index table, now look in the field table: res = run_sql("""SELECT w.id FROM idxINDEX AS w, idxINDEX_field AS wf, field AS f WHERE f.code=%s AND wf.id_field=f.id AND w.id=wf.id_idxINDEX LIMIT 1""", (field,)) if res: out = res[0][0] return out def get_words_from_pattern(pattern): """ Returns list of whitespace-separated words from pattern, removing any trailing punctuation-like signs from words in pattern. """ words = {} # clean trailing punctuation signs inside pattern pattern = re_punctuation_followed_by_space.sub(' ', pattern) for word in string.split(pattern): if word not in words: words[word] = 1 return words.keys() def create_basic_search_units(req, p, f, m=None, of='hb'): """Splits search pattern and search field into a list of independently searchable units. - A search unit consists of '(operator, pattern, field, type, hitset)' tuples where 'operator' is set union (|), set intersection (+) or set exclusion (-); 'pattern' is either a word (e.g. muon*) or a phrase (e.g. 'nuclear physics'); 'field' is either a code like 'title' or MARC tag like '100__a'; 'type' is the search type ('w' for word file search, 'a' for access file search). - Optionally, the function accepts the match type argument 'm'. If it is set (e.g. from advanced search interface), then it performs this kind of matching. If it is not set, then a guess is made. 'm' can have values: 'a'='all of the words', 'o'='any of the words', 'p'='phrase/substring', 'r'='regular expression', 'e'='exact value'. 
def create_basic_search_units(req, p, f, m=None, of='hb'):
    """Splits search pattern and search field into a list of independently searchable units.
       - A search unit consists of '(operator, pattern, field, type, hitset)' tuples where
          'operator' is set union (|), set intersection (+) or set exclusion (-);
          'pattern' is either a word (e.g. muon*) or a phrase (e.g. 'nuclear physics');
          'field' is either a code like 'title' or MARC tag like '100__a';
          'type' is the search type ('w' for word file search, 'a' for access file search).
       - Optionally, the function accepts the match type argument 'm'.
         If it is set (e.g. from advanced search interface), then it
         performs this kind of matching.  If it is not set, then a guess is made.
         'm' can have values: 'a'='all of the words', 'o'='any of the words',
                              'p'='phrase/substring', 'r'='regular expression',
                              'e'='exact value'.
       - Warnings are printed on req (when not None) in case of HTML output formats."""

    opfts = [] # will hold (o,p,f,t,h) units

    # FIXME: quick hack for the journal index
    if f == 'journal':
        opfts.append(['+', p, f, 'w'])
        return opfts

    ## check arguments: is desired matching type set?
    if m:
        ## A - matching type is known; good!
        if m == 'e':
            # A1 - exact value:
            opfts.append(['+', p, f, 'a']) # '+' since we have only one unit
        elif m == 'p':
            # A2 - phrase/substring:
            opfts.append(['+', "%" + p + "%", f, 'a']) # '+' since we have only one unit
        elif m == 'r':
            # A3 - regular expression:
            opfts.append(['+', p, f, 'r']) # '+' since we have only one unit
        elif m == 'a' or m == 'w':
            # A4 - all of the words:
            p = strip_accents(p) # strip accents for 'w' mode, FIXME: delete when not needed
            for word in get_words_from_pattern(p):
                opfts.append(['+', word, f, 'w']) # '+' in all units
        elif m == 'o':
            # A5 - any of the words:
            p = strip_accents(p) # strip accents for 'w' mode, FIXME: delete when not needed
            for word in get_words_from_pattern(p):
                if len(opfts)==0:
                    opfts.append(['+', word, f, 'w']) # '+' in the first unit
                else:
                    opfts.append(['|', word, f, 'w']) # '|' in further units
        else:
            if of.startswith("h"):
                write_warning("Matching type '%s' is not implemented yet." % cgi.escape(m), "Warning", req=req)
            opfts.append(['+', "%" + p + "%", f, 'w'])
    else:
        ## B - matching type is not known: let us try to determine it by some heuristics
        if f and p[0] == '"' and p[-1] == '"':
            ## B0 - does 'p' start and end by double quote, and is 'f' defined? => doing ACC search
            opfts.append(['+', p[1:-1], f, 'a'])
        elif f in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor', 'authorityauthor') and author_name_requires_phrase_search(p):
            ## B1 - do we search in author, and does 'p' contain space/comma/dot/etc?
            ##      => doing washed ACC search
            opfts.append(['+', p, f, 'a'])
        elif f and p[0] == "'" and p[-1] == "'":
            ## B0bis - does 'p' start and end by single quote, and is 'f' defined? => doing ACC search
            opfts.append(['+', '%' + p[1:-1] + '%', f, 'a'])
        elif f and p[0] == "/" and p[-1] == "/":
            ## B0ter - does 'p' start and end by a slash, and is 'f' defined? => doing regexp search
            opfts.append(['+', p[1:-1], f, 'r'])
        elif f and string.find(p, ',') >= 0:
            ## B1 - does 'p' contain comma, and is 'f' defined? => doing ACC search
            opfts.append(['+', p, f, 'a'])
        elif f and str(f[0:2]).isdigit():
            ## B2 - does 'f' exist and starts by two digits? => doing ACC search
            opfts.append(['+', p, f, 'a'])
        else:
            ## B3 - doing WRD search, but maybe ACC too
            # search units are separated by spaces unless the space is within single or double quotes
            # so, let us replace temporarily any space within quotes by '__SPACE__'
            p = re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), ' ', '__SPACE__')+"'", p)
            p = re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), ' ', '__SPACE__')+"\"", p)
            p = re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), ' ', '__SPACE__')+"/", p)
            # and spaces after colon as well:
            p = re_pattern_spaces_after_colon.sub(lambda x: string.replace(x.group(1), ' ', '__SPACE__'), p)
            # wash argument:
            p = re_logical_and.sub(" ", p)
            p = re_logical_or.sub(" |", p)
            p = re_logical_not.sub(" -", p)
            p = re_operators.sub(r' \1', p)
            for pi in string.split(p): # iterate through separated units (or items, as "pi" stands for "p item")
                pi = re_pattern_space.sub(" ", pi) # replace back '__SPACE__' by ' '
                # firstly, determine set operator
                if pi[0] == '+' or pi[0] == '-' or pi[0] == '|':
                    oi = pi[0]
                    pi = pi[1:]
                else:
                    # okay, there is no operator, so let us decide what to do by default
                    oi = '+' # by default we are doing set intersection...
                # secondly, determine search pattern and field:
                if string.find(pi, ":") > 0:
                    fi, pi = string.split(pi, ":", 1)
                    fi = wash_field(fi)
                    # test whether fi is a real index code or a MARC-tag defined code:
                    if fi in get_fieldcodes() or '00' <= fi[:2] <= '99':
                        pass
                    else:
                        # it is not, so join it back:
                        fi, pi = f, fi + ":" + pi
                else:
                    fi, pi = f, pi
                # wash 'fi' argument:
                fi = wash_field(fi)
                # wash 'pi' argument:
                pi = pi.strip() # strip eventual spaces
                if re_quotes.match(pi):
                    # B3a - quotes are found => do ACC search (phrase search)
                    if pi[0] == '"' and pi[-1] == '"':
                        pi = string.replace(pi, '"', '') # remove quote signs
                        opfts.append([oi, pi, fi, 'a'])
                    elif pi[0] == "'" and pi[-1] == "'":
                        pi = string.replace(pi, "'", "") # remove quote signs
                        opfts.append([oi, "%" + pi + "%", fi, 'a'])
                    else: # unbalanced quotes, so fall back to WRD query:
                        opfts.append([oi, pi, fi, 'w'])
                elif pi.startswith('/') and pi.endswith('/'):
                    # B3b - pi has slashes around => do regexp search
                    opfts.append([oi, pi[1:-1], fi, 'r'])
                elif fi and len(fi) > 1 and str(fi[0]).isdigit() and str(fi[1]).isdigit():
                    # B3c - fi exists and starts by two digits => do ACC search
                    opfts.append([oi, pi, fi, 'a'])
                elif fi and not get_index_id_from_field(fi) and get_field_name(fi):
                    # B3d - logical field fi exists but there is no WRD index for fi => try ACC search
                    opfts.append([oi, pi, fi, 'a'])
                else:
                    # B3e - general case => do WRD search
                    pi = strip_accents(pi) # strip accents for 'w' mode, FIXME: delete when not needed
                    for pii in get_words_from_pattern(pi):
                        opfts.append([oi, pii, fi, 'w'])

    ## sanity check:
    for i in range(0, len(opfts)):
        try:
            pi = opfts[i][1]
            if pi == '*':
                if of.startswith("h"):
                    write_warning("Ignoring standalone wildcard word.", "Warning", req=req)
                del opfts[i]
            if pi == '' or pi == ' ':
                fi = opfts[i][2]
                if fi:
                    if of.startswith("h"):
                        write_warning("Ignoring empty <em>%s</em> search term." % fi, "Warning", req=req)
                del opfts[i]
        except:
            pass

    ## replace old logical field names if applicable:
    if CFG_WEBSEARCH_FIELDS_CONVERT:
        opfts = [[o, p, wash_field(f), t] for o, p, f, t in opfts]

    ## return search units:
    return opfts
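# Illustrative sketch (not part of the original module): with no explicit
# matching type, the B3 heuristics above would typically produce units like
# the following (assuming a word index exists for 'title'):
#
#     create_basic_search_units(None, 'muon decay', '')
#     # -> [['+', 'muon', '', 'w'], ['+', 'decay', '', 'w']]
#
#     create_basic_search_units(None, '"nuclear physics" -title:neutron', '')
#     # -> [['+', 'nuclear physics', '', 'a'], ['-', 'neutron', 'title', 'w']]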
def page_start(req, of, cc, aas, ln, uid, title_message=None,
               description='', keywords='', recID=-1, tab='', p='', em=''):
    """
    Start page according to given output format.

    @param title_message: title of the page, not escaped for HTML
    @param description: description of the page, not escaped for HTML
    @param keywords: keywords of the page, not escaped for HTML
    """
    _ = gettext_set_language(ln)
    if not req or isinstance(req, cStringIO.OutputType):
        return # we were called from CLI

    if not title_message:
        title_message = _("Search Results")

    content_type = get_output_format_content_type(of)

    if of.startswith('x'):
        if of == 'xr':
            # we are doing RSS output
            req.content_type = "application/rss+xml"
            req.send_http_header()
            req.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
        else:
            # we are doing XML output:
            req.content_type = get_output_format_content_type(of, 'text/xml')
            req.send_http_header()
            req.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
    elif of.startswith('t') or str(of[0:3]).isdigit():
        # we are doing plain text output:
        req.content_type = "text/plain"
        req.send_http_header()
    elif of == "intbitset":
        req.content_type = "application/octet-stream"
        req.send_http_header()
    elif of == "id":
        pass # nothing to do, we shall only return list of recIDs
    elif content_type == 'text/html':
        # we are doing HTML output:
        req.content_type = "text/html"
        req.send_http_header()

        if not description:
            description = "%s %s." % (cc, _("Search Results"))

        if not keywords:
            keywords = "%s, WebSearch, %s" % (get_coll_i18nname(CFG_SITE_NAME, ln, False), get_coll_i18nname(cc, ln, False))

        ## generate RSS URL:
        argd = {}
        if req.args:
            argd = cgi.parse_qs(req.args)
        rssurl = websearch_templates.build_rss_url(argd)

        ## add MathJax if displaying single records (FIXME: find
        ## eventual better place to this code)
        if of.lower() in CFG_WEBSEARCH_USE_MATHJAX_FOR_FORMATS:
            metaheaderadd = get_mathjax_header(req.is_https())
        else:
            metaheaderadd = ''

        # Add metadata in meta tags for Google scholar-esque harvesting...
        # only if we have a detailed meta format and we are looking at a
        # single record
        if recID != -1 and CFG_WEBSEARCH_DETAILED_META_FORMAT:
            metaheaderadd += format_record(recID,
                                           CFG_WEBSEARCH_DETAILED_META_FORMAT,
                                           ln=ln)

        ## generate navtrail:
        navtrail = create_navtrail_links(cc, aas, ln)
        if navtrail != '':
            navtrail += ' &gt; '
        if (tab != '' or ((of != '' or of.lower() != 'hd') and of != 'hb')) and \
               recID != -1:
            # If we are not in information tab in HD format, customize
            # the nav. trail to have a link back to main record. (Due
            # to the way perform_request_search() works, hb
            # (lowercase) is equal to hd)
            navtrail += ' <a class="navtrail" href="%s/%s/%s">%s</a>' % \
                            (CFG_SITE_URL, CFG_SITE_RECORD, recID, cgi.escape(title_message))
            if (of != '' or of.lower() != 'hd') and of != 'hb':
                # Export
                format_name = of
                query = "SELECT name FROM format WHERE code=%s"
                res = run_sql(query, (of,))
                if res:
                    format_name = res[0][0]
                navtrail += ' &gt; ' + format_name
            else:
                # Discussion, citations, etc. tabs
                tab_label = get_detailed_page_tabs(cc, ln=ln)[tab]['label']
                navtrail += ' &gt; ' + _(tab_label)
        else:
            navtrail += cgi.escape(title_message)

        if p:
            # we are serving search/browse results pages, so insert pattern:
            navtrail += ": " + cgi.escape(p)
            title_message = p + " - " + title_message

        body_css_classes = []
        if cc:
            # we know the collection, lets allow page styles based on cc
            #collection names may not satisfy rules for css classes which
            #are something like:  -?[_a-zA-Z]+[_a-zA-Z0-9-]*
            #however it isn't clear what we should do about cases with
            #numbers, so we leave them to fail.  Everything else becomes "_"
            css = nmtoken_from_string(cc).replace('.', '_').replace('-', '_').replace(':', '_')
            body_css_classes.append(css)

        ## finally, print page header:
        if em == '' or EM_REPOSITORY["header"] in em:
            req.write(pageheaderonly(req=req, title=title_message,
                                     navtrail=navtrail,
                                     description=description,
                                     keywords=keywords,
                                     metaheaderadd=metaheaderadd,
                                     uid=uid,
                                     language=ln,
                                     navmenuid='search',
                                     navtrail_append_title_p=0,
                                     rssurl=rssurl,
                                     body_css_classes=body_css_classes))
            req.write(websearch_templates.tmpl_search_pagestart(ln=ln))
    else:
        req.content_type = content_type
        req.send_http_header()

def page_end(req, of="hb", ln=CFG_SITE_LANG, em=""):
    "End page according to given output format: e.g. close XML tags, add HTML footer, etc."
    if of == "id":
        return [] # empty recID list
    if of == "intbitset":
        return intbitset()
    if not req:
        return # we were called from CLI
    if of.startswith('h'):
        req.write(websearch_templates.tmpl_search_pageend(ln = ln)) # pagebody end
        if em == "" or EM_REPOSITORY["footer"] in em:
            req.write(pagefooteronly(lastupdated=__lastupdated__, language=ln, req=req))
    return

def create_page_title_search_pattern_info(p, p1, p2, p3):
    """Create the search pattern bit for the web page HTML header.
       Basically combine p and (p1,p2,p3) together so that the page
       header may be filled whether we are in the Simple Search or
       Advanced Search interface contexts."""
    out = ""
    if p:
        out = p
    else:
        out = p1
        if p2:
            out += ' ' + p2
        if p3:
            out += ' ' + p3
    return out

def create_inputdate_box(name="d1", selected_year=0, selected_month=0, selected_day=0, ln=CFG_SITE_LANG):
    "Produces 'From Date', 'Until Date' kind of selection box.  Suitable for search options."
    _ = gettext_set_language(ln)
    box = ""
    # day
    box += """<select name="%sd">""" % name
    box += """<option value="">%s""" % _("any day")
    for day in range(1, 32):
        box += """<option value="%02d"%s>%02d""" % (day, is_selected(day, selected_day), day)
    box += """</select>"""
    # month
    box += """<select name="%sm">""" % name
    box += """<option value="">%s""" % _("any month")
    # trailing space in May distinguishes short/long form of the month name
    for mm, month in [(1, _("January")), (2, _("February")), (3, _("March")), (4, _("April")),
                      (5, _("May ")), (6, _("June")), (7, _("July")), (8, _("August")),
                      (9, _("September")), (10, _("October")), (11, _("November")), (12, _("December"))]:
        box += """<option value="%02d"%s>%s""" % (mm, is_selected(mm, selected_month), month.strip())
    box += """</select>"""
    # year
    box += """<select name="%sy">""" % name
    box += """<option value="">%s""" % _("any year")
    this_year = int(time.strftime("%Y", time.localtime()))
    for year in range(this_year-20, this_year+1):
        box += """<option value="%d"%s>%d""" % (year, is_selected(year, selected_year), year)
    box += """</select>"""
    return box
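# Illustrative sketch (not part of the original module): the simple search
# pattern 'p' wins over the advanced patterns when both are present:
#
#     create_page_title_search_pattern_info('ellis', 'muon', '', '')  # -> 'ellis'
#     create_page_title_search_pattern_info('', 'muon', 'decay', '')  # -> 'muon decay'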
def create_search_box(cc, colls, p, f, rg, sf, so, sp, rm, of, ot, aas,
                      ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3,
                      m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec,
                      action="", em=""):
    """Create search box for 'search again in the results page' functionality."""
    if em != "" and EM_REPOSITORY["search_box"] not in em:
        if EM_REPOSITORY["body"] in em and cc != CFG_SITE_NAME:
            return '''
            <h1 class="headline">%(ccname)s</h1>''' % {'ccname' : cgi.escape(cc), }
        else:
            return ""
    # load the right message language
    _ = gettext_set_language(ln)

    # some computations
    cc_intl = get_coll_i18nname(cc, ln, False)
    cc_colID = get_colID(cc)

    colls_nicely_ordered = []
    if cfg_nicely_ordered_collection_list:
        colls_nicely_ordered = get_nicely_ordered_collection_list(ln=ln)
    else:
        colls_nicely_ordered = get_alphabetically_ordered_collection_list(ln=ln)

    colls_nice = []
    for (cx, cx_printable) in colls_nicely_ordered:
        if not cx.startswith("Unnamed collection"):
            colls_nice.append({'value': cx,
                               'text': cx_printable
                              })

    coll_selects = []
    if colls and colls[0] != CFG_SITE_NAME:
        # some collections are defined, so print these first, and only then print 'add another collection' heading:
        for c in colls:
            if c:
                temp = []
                temp.append({'value': CFG_SITE_NAME,
                             'text': '*** %s ***' % _("any public collection")
                            })
                # this field is used to remove the current collection from the ones to be searched.
                temp.append({'value': '',
                             'text': '*** %s ***' % _("remove this collection")
                            })
                for val in colls_nice:
                    # print collection (note: check val itself here; the
                    # original tested the stale loop variable `cx`, which at
                    # this point always holds the last collection name):
                    if not val['value'].startswith("Unnamed collection"):
                        temp.append({'value': val['value'],
                                     'text': val['text'],
                                     'selected' : (c == re.sub("^[\s\-]*", "", val['value']))
                                    })
                coll_selects.append(temp)
        coll_selects.append([{'value': '',
                              'text': '*** %s ***' % _("add another collection")
                             }] + colls_nice)
    else:
        # we searched in CFG_SITE_NAME, so print 'any public collection' heading
        coll_selects.append([{'value': CFG_SITE_NAME,
                              'text': '*** %s ***' % _("any public collection")
                             }] + colls_nice)

    ## ranking methods
    ranks = [{'value' : '',
              'text' : "- %s %s -" % (_("OR").lower(), _("rank by")),
             }]
    for (code, name) in get_bibrank_methods(cc_colID, ln):
        # propose found rank methods:
        ranks.append({'value': code,
                      'text': name,
                     })

    formats = get_available_output_formats(visible_only=True)

    # show collections in the search box? (not if there is only one
    # collection defined, and not if we are in light search)
    show_colls = True
    show_title = True
    if len(collection_reclist_cache.cache.keys()) == 1 or \
           aas == -1:
        show_colls = False
        show_title = False

    if cc == CFG_SITE_NAME:
        show_title = False

    if CFG_INSPIRE_SITE:
        show_title = False

    return websearch_templates.tmpl_search_box(
             ln = ln,
             aas = aas,
             cc_intl = cc_intl,
             cc = cc,
             ot = ot,
             sp = sp,
             action = action,
             fieldslist = get_searchwithin_fields(ln=ln, colID=cc_colID),
             f1 = f1,
             f2 = f2,
             f3 = f3,
             m1 = m1,
             m2 = m2,
             m3 = m3,
             p1 = p1,
             p2 = p2,
             p3 = p3,
             op1 = op1,
             op2 = op2,
             rm = rm,
             p = p,
             f = f,
             coll_selects = coll_selects,
             d1y = d1y, d2y = d2y, d1m = d1m, d2m = d2m, d1d = d1d, d2d = d2d,
             dt = dt,
             sort_fields = get_sortby_fields(ln=ln, colID=cc_colID),
             sf = sf,
             so = so,
             ranks = ranks,
             sc = sc,
             rg = rg,
             formats = formats,
             of = of,
             pl = pl,
             jrec = jrec,
             ec = ec,
             show_colls = show_colls,
             show_title = show_title and (em=="" or EM_REPOSITORY["body"] in em)
           )

def create_exact_author_browse_help_link(p=None, p1=None, p2=None, p3=None, f=None, f1=None, f2=None, f3=None,
                                         rm=None, cc=None, ln=None, jrec=None, rg=None, aas=0, action=""):
    """Creates a link to help switch from author to exact author while browsing"""
    if action == 'browse':
        search_fields = (f, f1, f2, f3)
        if ('author' in search_fields) or ('firstauthor' in search_fields):
            def add_exact(field):
                if field == 'author' or field == 'firstauthor':
                    return 'exact' + field
                return field
            (fe, f1e, f2e, f3e) = map(add_exact, search_fields)
            link_name = f or f1
            link_name = (link_name == 'firstauthor' and 'exact first author') or 'exact author'
            return websearch_templates.tmpl_exact_author_browse_help_link(p=p, p1=p1, p2=p2, p3=p3,
                                                                          f=fe, f1=f1e, f2=f2e, f3=f3e,
                                                                          rm=rm, cc=cc, ln=ln, jrec=jrec,
                                                                          rg=rg, aas=aas, action=action,
                                                                          link_name=link_name)
    return ""

def create_navtrail_links(cc=CFG_SITE_NAME, aas=0, ln=CFG_SITE_LANG, self_p=1, tab=''):
    """Creates navigation trail links, i.e. links to collection
    ancestors (except Home collection).  If aas==1, then links to
    Advanced Search interfaces; otherwise Simple Search.
    """
    dads = []
    for dad in get_coll_ancestors(cc):
        if dad != CFG_SITE_NAME: # exclude Home collection
            dads.append((dad, get_coll_i18nname(dad, ln, False)))

    if self_p and cc != CFG_SITE_NAME:
        dads.append((cc, get_coll_i18nname(cc, ln, False)))

    return websearch_templates.tmpl_navtrail_links(
        aas=aas, ln=ln, dads=dads)

def get_searchwithin_fields(ln='en', colID=None):
    """Retrieves the fields name used in the 'search within' selection box for the collection ID colID."""
    res = None
    if colID:
        res = run_sql("""SELECT f.code,f.name FROM field AS f, collection_field_fieldvalue AS cff
                          WHERE cff.type='sew' AND cff.id_collection=%s AND cff.id_field=f.id
                          ORDER BY cff.score DESC, f.name ASC""", (colID,))
    if not res:
        res = run_sql("SELECT code,name FROM field ORDER BY name ASC")
    fields = [{'value': '',
               'text': get_field_i18nname("any field", ln, False)
              }]
    for field_code, field_name in res:
        if field_code and field_code != "anyfield":
            fields.append({'value': field_code,
                           'text': get_field_i18nname(field_name, ln, False)
                          })
    return fields

def get_sortby_fields(ln='en', colID=None):
    """Retrieves the fields name used in the 'sort by' selection box for the collection ID colID."""
    _ = gettext_set_language(ln)
    res = None
    if colID:
        res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
                          WHERE cff.type='soo' AND cff.id_collection=%s AND cff.id_field=f.id
                          ORDER BY cff.score DESC, f.name ASC""", (colID,))
    if not res:
        # no sort fields defined for this colID, try to take Home collection:
        res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
                          WHERE cff.type='soo' AND cff.id_collection=%s AND cff.id_field=f.id
                          ORDER BY cff.score DESC, f.name ASC""", (1,))
    if not res:
        # no sort fields defined for the Home collection, take all sort fields defined wherever they are:
        res = run_sql("""SELECT DISTINCT(f.code),f.name FROM field AS f, collection_field_fieldvalue AS cff
                          WHERE cff.type='soo' AND cff.id_field=f.id
                          ORDER BY cff.score DESC, f.name ASC""")
    fields = [{'value': '',
               'text': _("latest first")
              }]
    for field_code, field_name in res:
        if field_code and field_code != "anyfield":
            fields.append({'value': field_code,
                           'text': get_field_i18nname(field_name, ln, False)
                          })
    return fields

def create_andornot_box(name='op', value='', ln='en'):
    "Returns HTML code for the AND/OR/NOT selection box."
    _ = gettext_set_language(ln)
    out = """
    <select name="%s">
    <option value="a"%s>%s
    <option value="o"%s>%s
    <option value="n"%s>%s
    </select>
    """ % (name,
           is_selected('a', value), _("AND"),
           is_selected('o', value), _("OR"),
           is_selected('n', value), _("AND NOT"))
    return out

def create_matchtype_box(name='m', value='', ln='en'):
    "Returns HTML code for the 'match type' selection box."
    _ = gettext_set_language(ln)
    out = """
    <select name="%s">
    <option value="a"%s>%s
    <option value="o"%s>%s
    <option value="e"%s>%s
    <option value="p"%s>%s
    <option value="r"%s>%s
    </select>
    """ % (name,
           is_selected('a', value), _("All of the words:"),
           is_selected('o', value), _("Any of the words:"),
           is_selected('e', value), _("Exact phrase:"),
           is_selected('p', value), _("Partial phrase:"),
           is_selected('r', value), _("Regular expression:"))
    return out
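# Illustrative sketch (not part of the original module): is_selected(),
# defined just below, drives the pre-selection of <option> items in these
# boxes, so one would expect roughly:
#
#     create_andornot_box('op1', 'n', 'en')
#     # -> '...<option value="n" selected>AND NOT...' inside the <select>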
def is_selected(var, fld):
    "Checks if the two are equal, and if yes, returns ' selected'.  Useful for select boxes."
    if type(var) is int and type(fld) is int:
        if var == fld:
            return " selected"
    elif str(var) == str(fld):
        return " selected"
    elif fld and len(fld) == 3 and fld[0] == "w" and var == fld[1:]:
        return " selected"
    return ""

def wash_colls(cc, c, split_colls=0, verbose=0):
    """Wash collection list by checking whether user has deselected
    anything under 'Narrow search'.  Checks also if cc is a list or not.
       Return tuple (cc, colls_to_display, colls_to_search, hosted_colls,
    debug) since the list of collections to display is different from
    that to search in.  This is because users might have chosen 'split
    by collection' functionality.
       The behaviour of "collections to display" depends solely on
    whether the user has deselected a particular collection: e.g. if the
    search started from the 'Articles and Preprints' page, and
    'Preprints' was deselected, then the collection to display is
    'Articles'.  If nothing was deselected, then the collection to
    display is 'Articles & Preprints'.
       The behaviour of "collections to search in" depends on the
    'split_colls' parameter:
         * if it is equal to 1, then we can wash the colls list down
           and search solely in the collection the user started from;
         * if it is equal to 0, then we are splitting to the first level
           of collections, i.e. collections as they appear on the page
           we started to search from;
       The function raises exception
    InvenioWebSearchUnknownCollectionError
    if cc or one of c collections is not known.
    """
    colls_out = []
    colls_out_for_display = []
    # list to hold the hosted collections to be searched and displayed
    hosted_colls_out = []
    debug = ""

    if verbose:
        debug += "<br />"
        debug += "<br />1) --- initial parameters ---"
        debug += "<br />cc : %s" % cc
        debug += "<br />c : %s" % c
        debug += "<br />"

    # check what type is 'cc':
    if type(cc) is list:
        for ci in cc:
            if ci in collection_reclist_cache.cache:
                # yes this collection is real, so use it:
                cc = ci
                break
    else:
        # check once if cc is real:
        if cc not in collection_reclist_cache.cache:
            if cc:
                raise InvenioWebSearchUnknownCollectionError(cc)
            else:
                cc = CFG_SITE_NAME # cc is not set, so replace it with Home collection

    # check type of 'c' argument:
    if type(c) is list:
        colls = c
    else:
        colls = [c]

    if verbose:
        debug += "<br />2) --- after check for the integrity of cc and the being or not c a list ---"
        debug += "<br />cc : %s" % cc
        debug += "<br />c : %s" % c
        debug += "<br />"

    # remove all 'unreal' collections:
    colls_real = []
    for coll in colls:
        if coll in collection_reclist_cache.cache:
            colls_real.append(coll)
        else:
            if coll:
                raise InvenioWebSearchUnknownCollectionError(coll)
    colls = colls_real

    if verbose:
        debug += "<br />3) --- keeping only the real colls of c ---"
        debug += "<br />colls : %s" % colls
        debug += "<br />"

    # check if some real collections remain:
    if len(colls)==0:
        colls = [cc]

    if verbose:
        debug += "<br />4) --- in case no colls were left we use cc directly ---"
        debug += "<br />colls : %s" % colls
        debug += "<br />"

    # then let us check the list of non-restricted "real" sons of 'cc' and compare it to 'coll':
    res = run_sql("""SELECT c.name FROM collection AS c,
                                        collection_collection AS cc,
                                        collection AS ccc
                     WHERE c.id=cc.id_son AND cc.id_dad=ccc.id
                       AND ccc.name=%s AND cc.type='r'""", (cc,))

    # list that holds all the non restricted sons of cc that are also not hosted collections
    l_cc_nonrestricted_sons_and_nonhosted_colls = []
    res_hosted = run_sql("""SELECT c.name FROM collection AS c,
                                               collection_collection AS cc,
                                               collection AS ccc
                            WHERE c.id=cc.id_son AND cc.id_dad=ccc.id
                              AND ccc.name=%s AND cc.type='r'
                              AND (c.dbquery NOT LIKE 'hostedcollection:%%' OR c.dbquery IS NULL)""", (cc,))
    for row_hosted in res_hosted:
        l_cc_nonrestricted_sons_and_nonhosted_colls.append(row_hosted[0])
    l_cc_nonrestricted_sons_and_nonhosted_colls.sort()

    l_cc_nonrestricted_sons = []
    l_c = colls[:]
    for row in res:
        if not collection_restricted_p(row[0]):
            l_cc_nonrestricted_sons.append(row[0])
    l_c.sort()
    l_cc_nonrestricted_sons.sort()
    if l_cc_nonrestricted_sons == l_c:
        colls_out_for_display = [cc] # yep, washing permitted, it is sufficient to display 'cc'
    # the following elif is a hack that preserves the above functionality when we start searching from
    # the frontpage with some hosted collections deselected (either by default or manually)
    elif set(l_cc_nonrestricted_sons_and_nonhosted_colls).issubset(set(l_c)):
        colls_out_for_display = colls
        split_colls = 0
    else:
        colls_out_for_display = colls # nope, we need to display all 'colls' successively

    # remove duplicates:
    #colls_out_for_display_nondups=filter(lambda x, colls_out_for_display=colls_out_for_display: colls_out_for_display[x-1] not in colls_out_for_display[x:], range(1, len(colls_out_for_display)+1))
    #colls_out_for_display = map(lambda x, colls_out_for_display=colls_out_for_display:colls_out_for_display[x-1], colls_out_for_display_nondups)
    #colls_out_for_display = list(set(colls_out_for_display))
    #remove duplicates while preserving the order
    set_out = set()
    colls_out_for_display = [coll for coll in colls_out_for_display if coll not in set_out and not set_out.add(coll)]

    if verbose:
        debug += "<br />5) --- decide whether colls_out_for_display should be colls or is it sufficient for it to be cc; remove duplicates ---"
        debug += "<br />colls_out_for_display : %s" % colls_out_for_display
        debug += "<br />"

    # FIXME: The below quoted part of the code has been commented out
    # because it prevents searching in individual restricted daughter
    # collections when both parent and all its public daughter
    # collections were asked for, in addition to some restricted
    # daughter collections.  The removal was introduced for hosted
    # collections, so we may want to double check in this context.

    # the following piece of code takes care of removing collections whose ancestors are going to be searched anyway
    # list to hold the collections to be removed
    #colls_to_be_removed = []
    # first calculate the collections that can safely be removed
    #for coll in colls_out_for_display:
    #    for ancestor in get_coll_ancestors(coll):
    #        #if ancestor in colls_out_for_display: colls_to_be_removed.append(coll)
    #        if ancestor in colls_out_for_display and not is_hosted_collection(coll): colls_to_be_removed.append(coll)
    # secondly remove the collections
    #for coll in colls_to_be_removed:
    #    colls_out_for_display.remove(coll)

    if verbose:
        debug += "<br />6) --- remove collections that have ancestors about to be searched, unless they are hosted ---"
        debug += "<br />colls_out_for_display : %s" % colls_out_for_display
        debug += "<br />"

    # calculate the hosted collections to be searched.
    if colls_out_for_display == [cc]:
        if is_hosted_collection(cc):
            hosted_colls_out.append(cc)
        else:
            for coll in get_coll_sons(cc):
                if is_hosted_collection(coll):
                    hosted_colls_out.append(coll)
    else:
        for coll in colls_out_for_display:
            if is_hosted_collection(coll):
                hosted_colls_out.append(coll)

    if verbose:
        debug += "<br />7) --- calculate the hosted_colls_out ---"
        debug += "<br />hosted_colls_out : %s" % hosted_colls_out
        debug += "<br />"

    # second, let us decide on collection splitting:
    if split_colls == 0:
        # type A - no sons are wanted
        colls_out = colls_out_for_display
    else:
        # type B - sons (first-level descendants) are wanted
        for coll in colls_out_for_display:
            coll_sons = get_coll_sons(coll)
            if coll_sons == []:
                colls_out.append(coll)
            else:
                for coll_son in coll_sons:
                    if not is_hosted_collection(coll_son):
                        colls_out.append(coll_son)
            #else:
            #    colls_out = colls_out + coll_sons

    # remove duplicates:
    #colls_out_nondups=filter(lambda x, colls_out=colls_out: colls_out[x-1] not in colls_out[x:], range(1, len(colls_out)+1))
    #colls_out = map(lambda x, colls_out=colls_out:colls_out[x-1], colls_out_nondups)
    #colls_out = list(set(colls_out))
    #remove duplicates while preserving the order
    set_out = set()
    colls_out = [coll for coll in colls_out if coll not in set_out and not set_out.add(coll)]

    if verbose:
        debug += "<br />8) --- calculate the colls_out; remove duplicates ---"
        debug += "<br />colls_out : %s" % colls_out
        debug += "<br />"

    # remove the hosted collections from the collections to be searched
    if hosted_colls_out:
        for coll in hosted_colls_out:
            try:
                colls_out.remove(coll)
            except ValueError:
                # in case coll was not found in colls_out
                pass

    if verbose:
        debug += "<br />9) --- remove the hosted_colls from the colls_out ---"
        debug += "<br />colls_out : %s" % colls_out

    return (cc, colls_out_for_display, colls_out, hosted_colls_out, debug)

def get_synonym_terms(term, kbr_name, match_type, use_memoise=False):
    """
    Return list of synonyms for TERM by looking in KBR_NAME in
    MATCH_TYPE style.

    @param term: search-time term or index-time term
    @type term: str
    @param kbr_name: knowledge base name
    @type kbr_name: str
    @param match_type: specifies how the term matches against the KBR
        before doing the lookup.  Could be `exact' (default),
        `leading_to_comma', `leading_to_number'.
    @type match_type: str
    @param use_memoise: can we memoise while doing lookups?
    @type use_memoise: bool
    @return: list of term synonyms
    @rtype: list of strings
    """
    dterms = {}
    ## exact match is default:
    term_for_lookup = term
    term_remainder = ''
    ## but maybe match different term:
    if match_type == CFG_BIBINDEX_SYNONYM_MATCH_TYPE['leading_to_comma']:
        mmm = re.match(r'^(.*?)(\s*,.*)$', term)
        if mmm:
            term_for_lookup = mmm.group(1)
            term_remainder = mmm.group(2)
    elif match_type == CFG_BIBINDEX_SYNONYM_MATCH_TYPE['leading_to_number']:
        mmm = re.match(r'^(.*?)(\s*\d.*)$', term)
        if mmm:
            term_for_lookup = mmm.group(1)
            term_remainder = mmm.group(2)
    ## FIXME: workaround: escaping SQL wild-card signs, since KBR's
    ## exact search is doing LIKE query, so would match everything:
    term_for_lookup = term_for_lookup.replace('%', '\%')
    ## OK, now find synonyms:
    for kbr_values in get_kbr_values(kbr_name,
                                     searchkey=term_for_lookup,
                                     searchtype='e',
                                     use_memoise=use_memoise):
        for kbr_value in kbr_values:
            dterms[kbr_value + term_remainder] = 1
    ## return list of term synonyms:
    return dterms.keys()
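# Illustrative sketch (not part of the original module; the knowledge base
# and its content are hypothetical): with match type 'leading_to_comma', a
# term like 'higgs, boson' is looked up in the KB by its leading part
# 'higgs' only, and the remainder ', boson' is glued back onto every
# synonym found, so a KB entry 'higgs' -> 'H0' would yield:
#
#     get_synonym_terms('higgs, boson', 'SOME-SYNONYM-KB',
#                       CFG_BIBINDEX_SYNONYM_MATCH_TYPE['leading_to_comma'])
#     # -> ['H0, boson']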
def wash_output_format(format):
    """Wash output format FORMAT.  Currently only prevents input like
    'of=9' for backwards-compatible format that prints certain fields
    only.  (for this task, 'of=tm' is preferred)"""
    if str(format[0:3]).isdigit() and len(format) != 6:
        # asked to print MARC tags, but not enough digits,
        # so let's switch back to HTML brief default
        return 'hb'
    else:
        return format

def wash_pattern(p):
    """Wash pattern passed by URL.  Check for sanity of the wildcard by
    removing wildcards if they are appended to extremely short words
    (1-3 letters).  TODO: instead of this approximative treatment, it
    will be much better to introduce a temporal limit, e.g. to kill a
    query if it does not finish in 10 seconds."""
    # strip accents:
    # p = strip_accents(p) # FIXME: when available, strip accents all the time
    # add leading/trailing whitespace for the two following wildcard-sanity checking regexps:
    p = " " + p + " "
    # replace spaces within quotes by __SPACE__ temporarily:
    p = re_pattern_single_quotes.sub(lambda x: "'"+string.replace(x.group(1), ' ', '__SPACE__')+"'", p)
    p = re_pattern_double_quotes.sub(lambda x: "\""+string.replace(x.group(1), ' ', '__SPACE__')+"\"", p)
    p = re_pattern_regexp_quotes.sub(lambda x: "/"+string.replace(x.group(1), ' ', '__SPACE__')+"/", p)
    # get rid of unquoted wildcards after spaces:
    p = re_pattern_wildcards_after_spaces.sub("\\1", p)
    # get rid of extremely short words (1-3 letters with wildcards):
    #p = re_pattern_short_words.sub("\\1", p)
    # replace back __SPACE__ by spaces:
    p = re_pattern_space.sub(" ", p)
    # replace special terms:
    p = re_pattern_today.sub(time.strftime("%Y-%m-%d", time.localtime()), p)
    # remove unnecessary whitespace:
    p = string.strip(p)
    # remove potentially wrong UTF-8 characters:
    p = wash_for_utf8(p)
    return p

def wash_field(f):
    """Wash field passed by URL."""
    if f:
        # get rid of unnecessary whitespace and make it lowercase
        # (e.g. Author -> author) to better suit iPhone etc input
        # mode:
        f = f.strip().lower()
    # wash legacy 'f' field names, e.g. replace 'wau' or `au' by
    # 'author', if applicable:
    if CFG_WEBSEARCH_FIELDS_CONVERT:
        f = CFG_WEBSEARCH_FIELDS_CONVERT.get(f, f)
    return f
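# Illustrative sketch (not part of the original module; assumes no legacy
# field-name conversion is configured in CFG_WEBSEARCH_FIELDS_CONVERT):
#
#     wash_output_format('9')       # -> 'hb' (MARC-tag-like but too short)
#     wash_output_format('245__a')  # -> '245__a' (six chars, kept as-is)
#     wash_field(' Author ')        # -> 'author'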
""" datetext1, datetext2 = "", "" # sanity checking: if d1 == "" and d1y == 0 and d1m == 0 and d1d == 0 and d2 == "" and d2y == 0 and d2m == 0 and d2d == 0: return ("", "") # nothing selected, so return empty values # wash first (starting) date: if d1: # full datetime string takes precedence: datetext1 = d1 else: # okay, first date passed as (year,month,day): if d1y: datetext1 += "%04d" % d1y else: datetext1 += "0000" if d1m: datetext1 += "-%02d" % d1m else: datetext1 += "-01" if d1d: datetext1 += "-%02d" % d1d else: datetext1 += "-01" datetext1 += " 00:00:00" # wash second (ending) date: if d2: # full datetime string takes precedence: datetext2 = d2 else: # okay, second date passed as (year,month,day): if d2y: datetext2 += "%04d" % d2y else: datetext2 += "9999" if d2m: datetext2 += "-%02d" % d2m else: datetext2 += "-12" if d2d: datetext2 += "-%02d" % d2d else: datetext2 += "-31" # NOTE: perhaps we should add max(datenumber) in # given month, but for our quering it's not # needed, 31 will always do datetext2 += " 00:00:00" # okay, return constructed YYYY-MM-DD HH:MM:SS datetexts: return (datetext1, datetext2) def is_hosted_collection(coll): """Check if the given collection is a hosted one; i.e. its dbquery starts with hostedcollection: Returns True if it is, False if it's not or if the result is empty or if the query failed""" res = run_sql("SELECT dbquery FROM collection WHERE name=%s", (coll, )) try: return res[0][0].startswith("hostedcollection:") except: return False def get_colID(c): "Return collection ID for collection name C. Return None if no match found." colID = None res = run_sql("SELECT id FROM collection WHERE name=%s", (c,), 1) if res: colID = res[0][0] return colID def get_coll_normalised_name(c): """Returns normalised collection name (case sensitive) for collection name C (case insensitive). Returns None if no match found.""" try: return run_sql("SELECT name FROM collection WHERE name=%s", (c,))[0][0] except: return None def get_coll_ancestors(coll): "Returns a list of ancestors for collection 'coll'." coll_ancestors = [] coll_ancestor = coll while 1: res = run_sql("""SELECT c.name FROM collection AS c LEFT JOIN collection_collection AS cc ON c.id=cc.id_dad LEFT JOIN collection AS ccc ON ccc.id=cc.id_son WHERE ccc.name=%s ORDER BY cc.id_dad ASC LIMIT 1""", (coll_ancestor,)) if res: coll_name = res[0][0] coll_ancestors.append(coll_name) coll_ancestor = coll_name else: break # ancestors found, return reversed list: coll_ancestors.reverse() return coll_ancestors def get_coll_sons(coll, type='r', public_only=1): """Return a list of sons (first-level descendants) of type 'type' for collection 'coll'. If public_only, then return only non-restricted son collections. """ coll_sons = [] query = "SELECT c.name FROM collection AS c "\ "LEFT JOIN collection_collection AS cc ON c.id=cc.id_son "\ "LEFT JOIN collection AS ccc ON ccc.id=cc.id_dad "\ "WHERE cc.type=%s AND ccc.name=%s" query += " ORDER BY cc.score DESC" res = run_sql(query, (type, coll)) for name in res: if not public_only or not collection_restricted_p(name[0]): coll_sons.append(name[0]) return coll_sons class CollectionAllChildrenDataCacher(DataCacher): """Cache for all children of a collection (regular & virtual, public & private)""" def __init__(self): def cache_filler(): def get_all_children(coll, type='r', public_only=1): """Return a list of all children of type 'type' for collection 'coll'. If public_only, then return only non-restricted child collections. If type='*', then return both regular and virtual collections. 
""" children = [] if type == '*': sons = get_coll_sons(coll, 'r', public_only) + get_coll_sons(coll, 'v', public_only) else: sons = get_coll_sons(coll, type, public_only) for child in sons: children.append(child) children.extend(get_all_children(child, type, public_only)) return children ret = {} collections = collection_reclist_cache.cache.keys() for collection in collections: ret[collection] = get_all_children(collection, '*', public_only=0) return ret def timestamp_verifier(): return max(get_table_update_time('collection'), get_table_update_time('collection_collection')) DataCacher.__init__(self, cache_filler, timestamp_verifier) try: if not collection_allchildren_cache.is_ok_p: raise Exception except Exception: collection_allchildren_cache = CollectionAllChildrenDataCacher() def get_collection_allchildren(coll, recreate_cache_if_needed=True): """Returns the list of all children of a collection.""" if recreate_cache_if_needed: collection_allchildren_cache.recreate_cache_if_needed() if coll not in collection_allchildren_cache.cache: return [] # collection does not exist; return empty list return collection_allchildren_cache.cache[coll] def get_coll_real_descendants(coll, type='_', get_hosted_colls=True): """Return a list of all descendants of collection 'coll' that are defined by a 'dbquery'. IOW, we need to decompose compound collections like "A & B" into "A" and "B" provided that "A & B" has no associated database query defined. """ coll_sons = [] res = run_sql("""SELECT c.name,c.dbquery FROM collection AS c LEFT JOIN collection_collection AS cc ON c.id=cc.id_son LEFT JOIN collection AS ccc ON ccc.id=cc.id_dad WHERE ccc.name=%s AND cc.type LIKE %s ORDER BY cc.score DESC""", (coll, type,)) for name, dbquery in res: if dbquery: # this is 'real' collection, so return it: if get_hosted_colls: coll_sons.append(name) else: if not dbquery.startswith("hostedcollection:"): coll_sons.append(name) else: # this is 'composed' collection, so recurse: coll_sons.extend(get_coll_real_descendants(name)) return coll_sons def browse_pattern_phrases(req, colls, p, f, rg, ln=CFG_SITE_LANG): """Returns either biliographic phrases or words indexes.""" ## is p enclosed in quotes? (coming from exact search) if p.startswith('"') and p.endswith('"'): p = p[1:-1] p_orig = p ## okay, "real browse" follows: ## FIXME: the maths in the get_nearest_terms_in_bibxxx is just a test if not f and string.find(p, ":") > 0: # does 'p' contain ':'? f, p = string.split(p, ":", 1) ## do we search in words indexes? 
def browse_pattern_phrases(req, colls, p, f, rg, ln=CFG_SITE_LANG):
    """Returns either bibliographic phrases or words indexes."""

    ## is p enclosed in quotes? (coming from exact search)
    if p.startswith('"') and p.endswith('"'):
        p = p[1:-1]

    p_orig = p
    ## okay, "real browse" follows:
    ## FIXME: the maths in the get_nearest_terms_in_bibxxx is just a test
    if not f and string.find(p, ":") > 0: # does 'p' contain ':'?
        f, p = string.split(p, ":", 1)

    ## do we search in words indexes?
    # FIXME uncomment this
    #if not f:
    #    return browse_in_bibwords(req, p, f)

    coll_hitset = intbitset()
    for coll_name in colls:
        coll_hitset |= get_collection_reclist(coll_name)

    index_id = get_index_id_from_field(f)
    if index_id != 0:
        browsed_phrases_in_colls = get_nearest_terms_in_idxphrase_with_collection(p, index_id, rg/2, rg/2, coll_hitset)
    else:
        browsed_phrases = get_nearest_terms_in_bibxxx(p, f, (rg+1)/2+1, (rg-1)/2+1)
        while not browsed_phrases:
            # try again and again with shorter and shorter pattern:
            try:
                p = p[:-1]
                browsed_phrases = get_nearest_terms_in_bibxxx(p, f, (rg+1)/2+1, (rg-1)/2+1)
            except:
                # probably there are no hits at all:
                #req.write(_("No values found."))
                return []

        ## try to check hits in these particular collection selection:
        browsed_phrases_in_colls = []
        if 0:
            for phrase in browsed_phrases:
                phrase_hitset = intbitset()
                phrase_hitsets = search_pattern("", phrase, f, 'e')
                for coll in colls:
                    phrase_hitset.union_update(phrase_hitsets[coll])
                if len(phrase_hitset) > 0:
                    # okay, this phrase has some hits in colls, so add it:
                    browsed_phrases_in_colls.append([phrase, len(phrase_hitset)])

        ## were there hits in collections?
        if browsed_phrases_in_colls == []:
            if browsed_phrases != []:
                #write_warning(req, """<p>No match close to <em>%s</em> found in given collections.
                #Please try different term.<p>Displaying matches in any collection...""" % p_orig)
                ## try to get nbhits for these phrases in any collection:
                for phrase in browsed_phrases:
                    nbhits = get_nbhits_in_bibxxx(phrase, f, coll_hitset)
                    if nbhits > 0:
                        browsed_phrases_in_colls.append([phrase, nbhits])

    return browsed_phrases_in_colls

def browse_pattern(req, colls, p, f, rg, ln=CFG_SITE_LANG):
    """Displays either bibliographic phrases or words indexes."""
    # load the right message language
    _ = gettext_set_language(ln)

    browsed_phrases_in_colls = browse_pattern_phrases(req, colls, p, f, rg, ln)

    if len(browsed_phrases_in_colls) == 0:
        req.write(_("No values found."))
        return

    ## display results now:
    out = websearch_templates.tmpl_browse_pattern(
            f=f,
            fn=get_field_i18nname(get_field_name(f) or f, ln, False),
            ln=ln,
            browsed_phrases_in_colls=browsed_phrases_in_colls,
            colls=colls,
            rg=rg,
          )
    req.write(out)
    return

def browse_in_bibwords(req, p, f, ln=CFG_SITE_LANG):
    """Browse inside words indexes."""
    if not p:
        return
    _ = gettext_set_language(ln)

    urlargd = {}
    urlargd.update(req.argd)
    urlargd['action'] = 'search'

    nearest_box = create_nearest_terms_box(urlargd, p, f, 'w', ln=ln, intro_text_p=0)

    req.write(websearch_templates.tmpl_search_in_bibwords(
        p = p,
        f = f,
        ln = ln,
        nearest_box = nearest_box
    ))
    return
def search_pattern(req=None, p=None, f=None, m=None, ap=0, of="id", verbose=0, ln=CFG_SITE_LANG, display_nearest_terms_box=True, wl=0):
    """Search for complex pattern 'p' within field 'f' according to
       matching type 'm'.  Return hitset of recIDs.

       The function uses multi-stage searching algorithm in case of no
       exact match found.  See the Search Internals document for
       detailed description.

       The 'ap' argument governs whether alternative patterns are to
       be used in case there is no direct hit for (p,f,m).  For
       example, whether to replace non-alphanumeric characters by
       spaces if it would give some hits.  See the Search Internals
       document for detailed description.  (ap=0 forbids the
       alternative pattern usage, ap=1 permits it.)
       'ap' is also internally used for allowing hidden tag search
       (for requests coming from webcoll, for example).  In this
       case ap=-9

       The 'of' argument governs whether to print or not some
       information to the user in case of no match found.  (Usually it
       prints the information in case of HTML formats, otherwise it's
       silent).

       The 'verbose' argument controls the level of debugging information
       to be printed (0=least, 9=most).

       All the parameters are assumed to have been previously washed.

       This function is suitable as a mid-level API.
    """

    _ = gettext_set_language(ln)

    hitset_empty = intbitset()
    # sanity check:
    if not p:
        hitset_full = intbitset(trailing_bits=1)
        hitset_full.discard(0)
        # no pattern, so return all universe
        return hitset_full
    # search stage 1: break up arguments into basic search units:
    if verbose and of.startswith("h"):
        t1 = os.times()[4]
    basic_search_units = create_basic_search_units(req, p, f, m, of)
    if verbose and of.startswith("h"):
        t2 = os.times()[4]
        write_warning("Search stage 1: basic search units are: %s" % cgi.escape(repr(basic_search_units)), req=req)
        write_warning("Search stage 1: execution took %.2f seconds." % (t2 - t1), req=req)
    # search stage 2: do search for each search unit and verify hit presence:
    if verbose and of.startswith("h"):
        t1 = os.times()[4]
    basic_search_units_hitsets = []
    #prepare hiddenfield-related..
    myhiddens = cfg['CFG_BIBFORMAT_HIDDEN_TAGS']
    can_see_hidden = False
    if req:
        user_info = collect_user_info(req)
        can_see_hidden = user_info.get('precached_canseehiddenmarctags', False)
    if not req and ap == -9: # special request, coming from webcoll
        can_see_hidden = True
    if can_see_hidden:
        myhiddens = []

    if CFG_INSPIRE_SITE and of.startswith('h'):
        # fulltext/caption search warnings for INSPIRE:
        fields_to_be_searched = [f for o, p, f, m in basic_search_units]
        if 'fulltext' in fields_to_be_searched:
            write_warning(_("Warning: full-text search is only available for a subset of papers mostly from %(x_range_from_year)s-%(x_range_to_year)s.") % \
                          {'x_range_from_year': '2006',
                           'x_range_to_year': '2012'}, req=req)
        elif 'caption' in fields_to_be_searched:
            write_warning(_("Warning: figure caption search is only available for a subset of papers mostly from %(x_range_from_year)s-%(x_range_to_year)s.") % \
                          {'x_range_from_year': '2008',
                           'x_range_to_year': '2012'}, req=req)

    for idx_unit in xrange(len(basic_search_units)):
        bsu_o, bsu_p, bsu_f, bsu_m = basic_search_units[idx_unit]
        if bsu_f and len(bsu_f) < 2:
            if of.startswith("h"):
                write_warning(_("There is no index %(x_name)s.  Searching for %(x_text)s in all fields.", x_name=bsu_f, x_text=bsu_p), req=req)
            bsu_f = ''
            bsu_m = 'w'
            if of.startswith("h") and verbose:
                write_warning(_('Instead searching %(x_name)s.', x_name=str([bsu_o, bsu_p, bsu_f, bsu_m])), req=req)
        try:
            basic_search_unit_hitset = search_unit(bsu_p, bsu_f, bsu_m, wl)
        except InvenioWebSearchWildcardLimitError as excp:
            basic_search_unit_hitset = excp.res
            if of.startswith("h"):
                write_warning(_("Search term too generic, displaying only partial results..."), req=req)
        # FIXME: print warning if we use native full-text indexing
        if bsu_f == 'fulltext' and bsu_m != 'w' and of.startswith('h') and not CFG_SOLR_URL:
            write_warning(_("No phrase index available for fulltext yet, looking for word combination..."), req=req)

        #check that the user is allowed to search with this tag
        #if he/she tries it
        if bsu_f and len(bsu_f) > 1 and bsu_f[0].isdigit() and bsu_f[1].isdigit():
            for htag in myhiddens:
                ltag = len(htag)
                samelenfield = bsu_f[0:ltag]
                if samelenfield == htag: #user searches by a hidden tag
                    #we won't show you anything..
                    basic_search_unit_hitset = intbitset()
                    if verbose >= 9 and of.startswith("h"):
                        write_warning("Pattern %s hitlist omitted since it queries in a hidden tag %s" %
                                      (cgi.escape(repr(bsu_p)), repr(myhiddens)), req=req)
                    display_nearest_terms_box = False #..and stop spying, too.

        if verbose >= 9 and of.startswith("h"):
            write_warning("Search stage 1: pattern %s gave hitlist %s" % (cgi.escape(bsu_p), basic_search_unit_hitset), req=req)

        if len(basic_search_unit_hitset) > 0 or \
           ap<1 or \
           bsu_o=="|" or \
           ((idx_unit+1)<len(basic_search_units) and basic_search_units[idx_unit+1][0]=="|"):
            # stage 2-1: this basic search unit is retained, since
            # either the hitset is non-empty, or the approximate
            # pattern treatment is switched off, or the search unit
            # was joined by an OR operator to preceding/following
            # units so we do not require that it exists
            basic_search_units_hitsets.append(basic_search_unit_hitset)
        else:
            # stage 2-2: no hits found for this search unit, try to replace non-alphanumeric chars inside pattern:
            if re.search(r'[^a-zA-Z0-9\s\:]', bsu_p) and bsu_f != 'refersto' and bsu_f != 'citedby':
                if bsu_p.startswith('"') and bsu_p.endswith('"'): # is it ACC query?
                    bsu_pn = re.sub(r'[^a-zA-Z0-9\s\:]+', "*", bsu_p)
                else: # it is WRD query
                    bsu_pn = re.sub(r'[^a-zA-Z0-9\s\:]+', " ", bsu_p)
                if verbose and of.startswith('h') and req:
                    write_warning("Trying (%s,%s,%s)" % (cgi.escape(bsu_pn), cgi.escape(bsu_f), cgi.escape(bsu_m)), req=req)
                basic_search_unit_hitset = search_pattern(req=None, p=bsu_pn, f=bsu_f, m=bsu_m, of="id", ln=ln, wl=wl)
                if len(basic_search_unit_hitset) > 0:
                    # we retain the new unit instead
                    if of.startswith('h'):
                        write_warning(_("No exact match found for %(x_query1)s, using %(x_query2)s instead...") % \
                                      {'x_query1': "<em>" + cgi.escape(bsu_p) + "</em>",
                                       'x_query2': "<em>" + cgi.escape(bsu_pn) + "</em>"}, req=req)
                    basic_search_units[idx_unit][1] = bsu_pn
                    basic_search_units_hitsets.append(basic_search_unit_hitset)
                else:
                    # stage 2-3: no hits found either, propose nearest indexed terms:
                    if of.startswith('h') and display_nearest_terms_box:
                        if req:
                            if bsu_f == "recid":
                                write_warning(_("Requested record does not seem to exist."), req=req)
                            else:
                                write_warning(create_nearest_terms_box(req.argd, bsu_p, bsu_f, bsu_m, ln=ln), req=req)
                    return hitset_empty
            else:
                # stage 2-3: no hits found either, propose nearest indexed terms:
                if of.startswith('h') and display_nearest_terms_box:
                    if req:
                        if bsu_f == "recid":
                            write_warning(_("Requested record does not seem to exist."), req=req)
                        else:
                            write_warning(create_nearest_terms_box(req.argd, bsu_p, bsu_f, bsu_m, ln=ln), req=req)
                return hitset_empty

    if verbose and of.startswith("h"):
        t2 = os.times()[4]
        for idx_unit in range(0, len(basic_search_units)):
            write_warning("Search stage 2: basic search unit %s gave %d hits." %
                          (basic_search_units[idx_unit][1:], len(basic_search_units_hitsets[idx_unit])), req=req)
        write_warning("Search stage 2: execution took %.2f seconds." % (t2 - t1), req=req)
    # search stage 3: apply boolean query for each search unit:
    if verbose and of.startswith("h"):
        t1 = os.times()[4]

    # let the initial set be the complete universe:
    hitset_in_any_collection = intbitset(trailing_bits=1)
    hitset_in_any_collection.discard(0)

    for idx_unit in xrange(len(basic_search_units)):
        this_unit_operation = basic_search_units[idx_unit][0]
        this_unit_hitset = basic_search_units_hitsets[idx_unit]
        if this_unit_operation == '+':
            hitset_in_any_collection.intersection_update(this_unit_hitset)
        elif this_unit_operation == '-':
            hitset_in_any_collection.difference_update(this_unit_hitset)
        elif this_unit_operation == '|':
            hitset_in_any_collection.union_update(this_unit_hitset)
        else:
            if of.startswith("h"):
                write_warning("Invalid set operation %s." % cgi.escape(this_unit_operation), "Error", req=req)

    if len(hitset_in_any_collection) == 0:
        # no hits found, propose alternative boolean query:
        if of.startswith('h') and display_nearest_terms_box:
            nearestterms = []
            for idx_unit in range(0, len(basic_search_units)):
                bsu_o, bsu_p, bsu_f, bsu_m = basic_search_units[idx_unit]
                if bsu_p.startswith("%") and bsu_p.endswith("%"):
                    bsu_p = "'" + bsu_p[1:-1] + "'"
                bsu_nbhits = len(basic_search_units_hitsets[idx_unit])

                # create a similar query, but with the basic search unit only
                argd = {}
                argd.update(req.argd)

                argd['p'] = bsu_p
                argd['f'] = bsu_f

                nearestterms.append((bsu_p, bsu_nbhits, argd))

            text = websearch_templates.tmpl_search_no_boolean_hits(
                     ln=ln, nearestterms=nearestterms)
            write_warning(text, req=req)
    if verbose and of.startswith("h"):
        t2 = os.times()[4]
        write_warning("Search stage 3: boolean query gave %d hits." % len(hitset_in_any_collection), req=req)
        write_warning("Search stage 3: execution took %.2f seconds." % (t2 - t1), req=req)
    return hitset_in_any_collection
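# Illustrative sketch (not part of the original module): stage 3 above
# combines the per-unit hitsets with intbitset operations, e.g. for the
# units [['+', 'muon', '', 'w'], ['-', 'decay', '', 'w']]:
#
#     from intbitset import intbitset
#     universe = intbitset(trailing_bits=1)
#     universe.discard(0)
#     universe.intersection_update(intbitset([1, 2, 3]))  # '+' muon hits
#     universe.difference_update(intbitset([3, 4]))       # '-' decay hits
#     # universe is now intbitset([1, 2])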
def search_pattern_parenthesised(req=None, p=None, f=None, m=None, ap=0, of="id", verbose=0, ln=CFG_SITE_LANG, display_nearest_terms_box=True, wl=0):
    """Search for complex pattern 'p' containing parenthesis within field 'f' according to
       matching type 'm'.  Return hitset of recIDs.

       For more details on the parameters see 'search_pattern'
    """
    _ = gettext_set_language(ln)
    spires_syntax_converter = SpiresToInvenioSyntaxConverter()
    spires_syntax_query = False

    # if the pattern uses SPIRES search syntax, convert it to Invenio syntax
    if spires_syntax_converter.is_applicable(p):
        spires_syntax_query = True
        p = spires_syntax_converter.convert_query(p)

    # sanity check: do not call parenthesised parser for search terms
    # like U(1) but still call it for searches like ('U(1)' | 'U(2)'):
    if not re_pattern_parens.search(re_pattern_parens_quotes.sub('_', p)):
        return search_pattern(req, p, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)

    # Try searching with parentheses
    try:
        parser = SearchQueryParenthesisedParser()

        # get a hitset with all recids
        result_hitset = intbitset(trailing_bits=1)

        # parse the query. The result is list of [op1, expr1, op2, expr2, ..., opN, exprN]
        parsing_result = parser.parse_query(p)
        if verbose and of.startswith("h"):
            write_warning("Search stage 1: search_pattern_parenthesised() searched %s." % repr(p), req=req)
            write_warning("Search stage 1: search_pattern_parenthesised() returned %s." % repr(parsing_result), req=req)

        # go through every pattern
        # calculate hitset for it
        # combine pattern's hitset with the result using the corresponding operator
        for index in xrange(0, len(parsing_result)-1, 2):
            current_operator = parsing_result[index]
            current_pattern = parsing_result[index+1]

            if CFG_INSPIRE_SITE and spires_syntax_query:
                # setting ap=0 to turn off approximate matching for 0 results.
                # Doesn't work well in combinations.
                # FIXME: The right fix involves collecting statuses for each
                #        hitset, then showing a nearest terms box exactly once,
                #        outside this loop.
                ap = 0
                display_nearest_terms_box = False

            # obtain a hitset for the current pattern
            current_hitset = search_pattern(req, current_pattern, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)

            # combine the current hitset with resulting hitset using the current operator
            if current_operator == '+':
                result_hitset = result_hitset & current_hitset
            elif current_operator == '-':
                result_hitset = result_hitset - current_hitset
            elif current_operator == '|':
                result_hitset = result_hitset | current_hitset
            else:
                assert False, "Unknown operator in search_pattern_parenthesised()"

        return result_hitset

    # If searching with parentheses fails, perform search ignoring parentheses
    except SyntaxError:

        write_warning(_("Search syntax misunderstood. Ignoring all parentheses in the query. If this doesn't help, please check your search and try again."), req=req)

        # remove the parentheses in the query. Current implementation removes all the parentheses,
        # but it could be improved to remove only those that are not inside quotes
        p = p.replace('(', ' ')
        p = p.replace(')', ' ')

        return search_pattern(req, p, f, m, ap, of, verbose, ln, display_nearest_terms_box=display_nearest_terms_box, wl=wl)
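# Illustrative sketch (not part of the original module): per the comment in
# the function above, parse_query() alternates operators and subexpressions,
# so a query like 'muon + (decay | capture)' might come back roughly as
# ['+', 'muon', '+', 'decay | capture'], each subexpression then being fed
# to search_pattern() and combined with &, - or | on the result hitset.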
""" ## create empty output results set: hitset = intbitset() if not p: # sanity checking return hitset tokenizer = get_field_tokenizer_type(f) hitset_cjk = intbitset() if tokenizer == "BibIndexCJKTokenizer": if is_there_any_CJK_character_in_text(p): cjk_tok = BibIndexCJKTokenizer() chars = cjk_tok.tokenize_for_words(p) for char in chars: hitset_cjk |= search_unit_in_bibwords(char, f, m, wl) ## eventually look up runtime synonyms: hitset_synonyms = intbitset() if f in CFG_WEBSEARCH_SYNONYM_KBRS: if ignore_synonyms is None: ignore_synonyms = [] ignore_synonyms.append(p) for p_synonym in get_synonym_terms(p, CFG_WEBSEARCH_SYNONYM_KBRS[f][0], CFG_WEBSEARCH_SYNONYM_KBRS[f][1]): if p_synonym != p and \ not p_synonym in ignore_synonyms: hitset_synonyms |= search_unit(p_synonym, f, m, wl, ignore_synonyms) ## look up hits: if f == 'fulltext' and get_idx_indexer('fulltext') == 'SOLR' and CFG_SOLR_URL: # redirect to Solr try: return search_unit_in_solr(p, f, m) except: # There were troubles with getting full-text search # results from Solr. Let us alert the admin of these # problems and let us simply return empty results to the # end user. register_exception() return hitset elif f == 'fulltext' and get_idx_indexer('fulltext') == 'XAPIAN' and CFG_XAPIAN_ENABLED: # redirect to Xapian try: return search_unit_in_xapian(p, f, m) except: # There were troubles with getting full-text search # results from Xapian. Let us alert the admin of these # problems and let us simply return empty results to the # end user. register_exception() return hitset if f == 'datecreated': hitset = search_unit_in_bibrec(p, p, 'c') elif f == 'datemodified': hitset = search_unit_in_bibrec(p, p, 'm') elif f == 'refersto': # we are doing search by the citation count hitset = search_unit_refersto(p) elif f == 'rawref': from invenio.legacy.refextract.api import search_from_reference field, pattern = search_from_reference(p) return search_unit(pattern, field) elif f == 'citedby': # we are doing search by the citation count hitset = search_unit_citedby(p) elif f == 'collection': # we are doing search by the collection name or MARC field hitset = search_unit_collection(p, m, wl=wl) elif f == 'tag': module_found = False try: from invenio.modules.tags.search_units import search_unit_in_tags module_found = True except: # WebTag module is disabled, so ignore 'tag' selector pass if module_found: return search_unit_in_tags(p) elif m == 'a' or m == 'r': # we are doing either phrase search or regexp search if f == 'fulltext': # FIXME: workaround for not having phrase index yet return search_pattern(None, p, f, 'w') index_id = get_index_id_from_field(f) if index_id != 0: if m == 'a' and index_id in get_idxpair_field_ids(): #for exact match on the admin configured fields we are searching in the pair tables hitset = search_unit_in_idxpairs(p, f, m, wl) else: hitset = search_unit_in_idxphrases(p, f, m, wl) else: hitset = search_unit_in_bibxxx(p, f, m, wl) # if not hitset and m == 'a' and (p[0] != '%' and p[-1] != '%'): # #if we have no results by doing exact matching, do partial matching # #for removing the distinction between simple and double quotes # hitset = search_unit_in_bibxxx('%' + p + '%', f, m, wl) elif p.startswith("cited:"): # we are doing search by the citation count hitset = search_unit_by_times_cited(p[6:]) else: # we are doing bibwords search by default hitset = search_unit_in_bibwords(p, f, m, wl=wl) ## merge synonym results and return total: hitset |= hitset_synonyms hitset |= hitset_cjk return hitset def get_idxpair_field_ids(): 
"""Returns the list of ids for the fields that idxPAIRS should be used on""" index_dict = dict(run_sql("SELECT name, id FROM idxINDEX")) return [index_dict[field] for field in index_dict if field in cfg['CFG_WEBSEARCH_IDXPAIRS_FIELDS']] def search_unit_in_bibwords(word, f, m=None, decompress=zlib.decompress, wl=0): """Searches for 'word' inside bibwordsX table for field 'f' and returns hitset of recIDs.""" set = intbitset() # will hold output result set set_used = 0 # not-yet-used flag, to be able to circumvent set operations limit_reached = 0 # flag for knowing if the query limit has been reached # if no field is specified, search in the global index. f = f or 'anyfield' index_id = get_index_id_from_field(f) if index_id: bibwordsX = "idxWORD%02dF" % index_id stemming_language = get_index_stemming_language(index_id) else: return intbitset() # word index f does not exist # wash 'word' argument and run query: if f.endswith('count') and word.endswith('+'): # field count query of the form N+ so transform N+ to N->99999: word = word[:-1] + '->99999' word = string.replace(word, '*', '%') # we now use '*' as the truncation character words = string.split(word, "->", 1) # check for span query if len(words) == 2: word0 = re_word.sub('', words[0]) word1 = re_word.sub('', words[1]) if stemming_language: word0 = lower_index_term(word0) word1 = lower_index_term(word1) word0 = stem(word0, stemming_language) word1 = stem(word1, stemming_language) word0_washed = wash_index_term(word0) word1_washed = wash_index_term(word1) if f.endswith('count'): # field count query; convert to integers in order # to have numerical behaviour for 'BETWEEN n1 AND n2' query try: word0_washed = int(word0_washed) word1_washed = int(word1_washed) except ValueError: pass try: res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term BETWEEN %%s AND %%s" % bibwordsX, (word0_washed, word1_washed), wildcard_limit = wl) except InvenioDbQueryWildcardLimitError as excp: res = excp.res limit_reached = 1 # set the limit reached flag to true else: if f == 'journal': pass # FIXME: quick hack for the journal index else: word = re_word.sub('', word) if stemming_language: word = lower_index_term(word) word = stem(word, stemming_language) if string.find(word, '%') >= 0: # do we have wildcard in the word? 
if f == 'journal': # FIXME: quick hack for the journal index # FIXME: we can run a sanity check here for all indexes res = () else: try: res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term LIKE %%s" % bibwordsX, (wash_index_term(word),), wildcard_limit = wl) except InvenioDbQueryWildcardLimitError as excp: res = excp.res limit_reached = 1 # set the limit reached flag to true else: res = run_sql("SELECT term,hitlist FROM %s WHERE term=%%s" % bibwordsX, (wash_index_term(word),)) # fill the result set: for word, hitlist in res: hitset_bibwrd = intbitset(hitlist) # add the results: if set_used: set.union_update(hitset_bibwrd) else: set = hitset_bibwrd set_used = 1 #check to see if the query limit was reached if limit_reached: #raise an exception, so we can print a nice message to the user raise InvenioWebSearchWildcardLimitError(set) # okay, return result set: return set def search_unit_in_idxpairs(p, f, type, wl=0): """Searches for pair 'p' inside idxPAIR table for field 'f' and returns hitset of recIDs found.""" limit_reached = 0 # flag for knowing if the query limit has been reached do_exact_search = True # flag to know when it makes sense to try to do exact matching result_set = intbitset() #determine the idxPAIR table to read from index_id = get_index_id_from_field(f) if not index_id: return intbitset() stemming_language = get_index_stemming_language(index_id) pairs_tokenizer = BibIndexDefaultTokenizer(stemming_language) idxpair_table_washed = wash_table_column_name("idxPAIR%02dF" % index_id) if p.startswith("%") and p.endswith("%"): p = p[1:-1] original_pattern = p p = string.replace(p, '*', '%') # we now use '*' as the truncation character queries_releated_vars = [] # contains tuples of (query_addons, query_params, use_query_limit) #is it a span query? 
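    # Added overview of the cases handled below (example patterns are
    # hypothetical):
    #   span query      'muon decay -> muon decays' : BETWEEN on the last
    #                   pair, '=' on the common leading pairs
    #   wildcard query  'muon deca*'                : LIKE on the pairs that
    #                   contain the wildcard, '=' on the others
    #   plain query     'muon decay'                : '=' match on every pair
    # If the pattern does not tokenize into pairs at all, we fall back to
    # word search; ambiguous span patterns fall back to phrase search.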
    ps = string.split(p, "->", 1)
    if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')):
        #so we are dealing with a span query
        pairs_left = pairs_tokenizer.tokenize_for_pairs(ps[0])
        pairs_right = pairs_tokenizer.tokenize_for_pairs(ps[1])
        if not pairs_left or not pairs_right:
            # we are not actually dealing with pairs but with words
            return search_unit_in_bibwords(original_pattern, f, type, wl)
        elif len(pairs_left) != len(pairs_right):
            # it is kind of hard to know what the user actually wanted
            # we have to do: foo bar baz -> qux xyz, so let's switch to phrase
            return search_unit_in_idxphrases(original_pattern, f, type, wl)
        elif len(pairs_left) > 1 and \
                len(pairs_right) > 1 and \
                pairs_left[:-1] != pairs_right[:-1]:
            # again we have something like: foo bar baz -> abc xyz qux
            # so we'd better switch to phrase
            return search_unit_in_idxphrases(original_pattern, f, type, wl)
        else:
            # finally, we can treat the search using idxPairs
            # at this step we have either: foo bar -> abc xyz
            # or foo bar abc -> foo bar xyz
            queries_releated_vars = [("BETWEEN %s AND %s", (pairs_left[-1], pairs_right[-1]), True)]
            for pair in pairs_left[:-1]: # which should be equal with pairs_right[:-1]
                queries_releated_vars.append(("= %s", (pair, ), False))
        do_exact_search = False # no exact search for span queries
    elif string.find(p, '%') > -1:
        # tokenizing p will remove the '%', so we have to make sure it stays
        replacement = 'xxxxxxxxxx' # hopefully this will not clash with anything in the future
        p = string.replace(p, '%', replacement)
        pairs = pairs_tokenizer.tokenize_for_pairs(p)
        if not pairs:
            # we are not actually dealing with pairs but with words
            return search_unit_in_bibwords(original_pattern, f, type, wl)
        queries_releated_vars = []
        for pair in pairs:
            if string.find(pair, replacement) > -1:
                pair = string.replace(pair, replacement, '%') # we replace back the % sign
                queries_releated_vars.append(("LIKE %s", (pair, ), True))
            else:
                queries_releated_vars.append(("= %s", (pair, ), False))
        do_exact_search = False
    else:
        # normal query
        pairs = pairs_tokenizer.tokenize_for_pairs(p)
        if not pairs:
            # we are not actually dealing with pairs but with words
            return search_unit_in_bibwords(original_pattern, f, type, wl)
        queries_releated_vars = []
        for pair in pairs:
            queries_releated_vars.append(("= %s", (pair, ), False))

    first_results = 1 # flag to know if it's the first set of results or not
    for query_var in queries_releated_vars:
        query_addons = query_var[0]
        query_params = query_var[1]
        use_query_limit = query_var[2]
        if use_query_limit:
            try:
                res = run_sql_with_limit("SELECT term, hitlist FROM %s WHERE term %s" \
                                         % (idxpair_table_washed, query_addons),
                                         query_params, wildcard_limit=wl) #kwalitee:disable=sql
            except InvenioDbQueryWildcardLimitError as excp:
                res = excp.res
                limit_reached = 1 # set the limit reached flag to true
        else:
            res = run_sql("SELECT term, hitlist FROM %s WHERE term %s" \
                          % (idxpair_table_washed, query_addons),
                          query_params) #kwalitee:disable=sql
        if not res:
            return intbitset()
        for pair, hitlist in res:
            hitset_idxpairs = intbitset(hitlist)
            if first_results:
                result_set = hitset_idxpairs
                first_results = 0
            else:
                result_set.intersection_update(hitset_idxpairs)
    # check to see if the query limit was reached
    if limit_reached:
        # raise an exception, so we can print a nice message to the user
        raise InvenioWebSearchWildcardLimitError(result_set)

    # check if we need to eliminate the false positives
    if cfg['CFG_WEBSEARCH_IDXPAIRS_EXACT_SEARCH'] and do_exact_search:
        # we need to eliminate the false positives
        idxphrase_table_washed = \
wash_table_column_name("idxPHRASE%02dR" % index_id) not_exact_search = intbitset() for recid in result_set: res = run_sql("SELECT termlist FROM %s WHERE id_bibrec %s" %(idxphrase_table_washed, '=%s'), (recid, )) #kwalitee:disable=sql if res: termlist = deserialize_via_marshal(res[0][0]) if not [term for term in termlist if term.lower().find(p.lower()) > -1]: not_exact_search.add(recid) else: not_exact_search.add(recid) # remove the recs that are false positives from the final result result_set.difference_update(not_exact_search) return result_set def search_unit_in_idxphrases(p, f, type, wl=0): """Searches for phrase 'p' inside idxPHRASE*F table for field 'f' and returns hitset of recIDs found. The search type is defined by 'type' (e.g. equals to 'r' for a regexp search).""" # call word search method in some cases: if f.endswith('count'): return search_unit_in_bibwords(p, f, wl=wl) set = intbitset() # will hold output result set set_used = 0 # not-yet-used flag, to be able to circumvent set operations limit_reached = 0 # flag for knowing if the query limit has been reached use_query_limit = False # flag for knowing if to limit the query results or not # deduce in which idxPHRASE table we will search: idxphraseX = "idxPHRASE%02dF" % get_index_id_from_field("anyfield") if f: index_id = get_index_id_from_field(f) if index_id: idxphraseX = "idxPHRASE%02dF" % index_id else: return intbitset() # phrase index f does not exist # detect query type (exact phrase, partial phrase, regexp): if type == 'r': query_addons = "REGEXP %s" query_params = (p,) use_query_limit = True else: p = string.replace(p, '*', '%') # we now use '*' as the truncation character ps = string.split(p, "->", 1) # check for span query: if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')): query_addons = "BETWEEN %s AND %s" query_params = (ps[0], ps[1]) use_query_limit = True else: if string.find(p, '%') > -1: query_addons = "LIKE %s" query_params = (p,) use_query_limit = True else: query_addons = "= %s" query_params = (p,) # special washing for fuzzy author index: if f in ('author', 'firstauthor', 'exactauthor', 'exactfirstauthor', 'authorityauthor'): query_params_washed = () for query_param in query_params: query_params_washed += (wash_author_name(query_param),) query_params = query_params_washed # perform search: if use_query_limit: try: res = run_sql_with_limit("SELECT term,hitlist FROM %s WHERE term %s" % (idxphraseX, query_addons), query_params, wildcard_limit=wl) except InvenioDbQueryWildcardLimitError as excp: res = excp.res limit_reached = 1 # set the limit reached flag to true else: res = run_sql("SELECT term,hitlist FROM %s WHERE term %s" % (idxphraseX, query_addons), query_params) # fill the result set: for word, hitlist in res: hitset_bibphrase = intbitset(hitlist) # add the results: if set_used: set.union_update(hitset_bibphrase) else: set = hitset_bibphrase set_used = 1 #check to see if the query limit was reached if limit_reached: #raise an exception, so we can print a nice message to the user raise InvenioWebSearchWildcardLimitError(set) # okay, return result set: return set def search_unit_in_bibxxx(p, f, type, wl=0): """Searches for pattern 'p' inside bibxxx tables for field 'f' and returns hitset of recIDs found. The search type is defined by 'type' (e.g. 
equals to 'r' for a regexp search).""" # call word search method in some cases: if f == 'journal' or f.endswith('count'): return search_unit_in_bibwords(p, f, wl=wl) p_orig = p # saving for eventual future 'no match' reporting limit_reached = 0 # flag for knowing if the query limit has been reached use_query_limit = False # flag for knowing if to limit the query results or not query_addons = "" # will hold additional SQL code for the query query_params = () # will hold parameters for the query (their number may vary depending on TYPE argument) # wash arguments: f = string.replace(f, '*', '%') # replace truncation char '*' in field definition if type == 'r': query_addons = "REGEXP %s" query_params = (p,) use_query_limit = True else: p = string.replace(p, '*', '%') # we now use '*' as the truncation character ps = string.split(p, "->", 1) # check for span query: if len(ps) == 2 and not (ps[0].endswith(' ') or ps[1].startswith(' ')): query_addons = "BETWEEN %s AND %s" query_params = (ps[0], ps[1]) use_query_limit = True else: if string.find(p, '%') > -1: query_addons = "LIKE %s" query_params = (p,) use_query_limit = True else: query_addons = "= %s" query_params = (p,) # construct 'tl' which defines the tag list (MARC tags) to search in: tl = [] if len(f) >= 2 and str(f[0]).isdigit() and str(f[1]).isdigit(): tl.append(f) # 'f' seems to be okay as it starts by two digits else: # deduce desired MARC tags on the basis of chosen 'f' tl = get_field_tags(f) if not tl: # f index does not exist, nevermind pass # okay, start search: l = [] # will hold list of recID that matched for t in tl: # deduce into which bibxxx table we will search: digit1, digit2 = int(t[0]), int(t[1]) bx = "bib%d%dx" % (digit1, digit2) bibx = "bibrec_bib%d%dx" % (digit1, digit2) # construct and run query: if t == "001": if query_addons.find('BETWEEN') > -1 or query_addons.find('=') > -1: # verify that the params are integers (to avoid returning record 123 when searching for 123foo) try: query_params = tuple(int(param) for param in query_params) except ValueError: return intbitset() if use_query_limit: try: res = run_sql_with_limit("SELECT id FROM bibrec WHERE id %s" % query_addons, query_params, wildcard_limit=wl) except InvenioDbQueryWildcardLimitError as excp: res = excp.res limit_reached = 1 # set the limit reached flag to true else: res = run_sql("SELECT id FROM bibrec WHERE id %s" % query_addons, query_params) else: query = "SELECT bibx.id_bibrec FROM %s AS bx LEFT JOIN %s AS bibx ON bx.id=bibx.id_bibxxx WHERE bx.value %s" % \ (bx, bibx, query_addons) if len(t) != 6 or t[-1:]=='%': # wildcard query, or only the beginning of field 't' # is defined, so add wildcard character: query += " AND bx.tag LIKE %s" query_params_and_tag = query_params + (t + '%',) else: # exact query for 't': query += " AND bx.tag=%s" query_params_and_tag = query_params + (t,) if use_query_limit: try: res = run_sql_with_limit(query, query_params_and_tag, wildcard_limit=wl) except InvenioDbQueryWildcardLimitError as excp: res = excp.res limit_reached = 1 # set the limit reached flag to true else: res = run_sql(query, query_params_and_tag) # fill the result set: for id_bibrec in res: if id_bibrec[0]: l.append(id_bibrec[0]) # check no of hits found: nb_hits = len(l) # okay, return result set: set = intbitset(l) #check to see if the query limit was reached if limit_reached: #raise an exception, so we can print a nice message to the user raise InvenioWebSearchWildcardLimitError(set) return set def search_unit_in_solr(p, f=None, m=None): """ Query a Solr 
    index and return an intbitset corresponding to the result.
    Parameters (p,f,m) are usual search unit ones.
    """
    if m and (m == 'a' or m == 'r'): # phrase/regexp query
        if p.startswith('%') and p.endswith('%'):
            p = p[1:-1] # fix for partial phrase
        p = '"' + p + '"'
    return solr_get_bitset(f, p)


def search_unit_in_xapian(p, f=None, m=None):
    """
    Query a Xapian index and return an intbitset corresponding to the result.
    Parameters (p,f,m) are usual search unit ones.
    """
    if m and (m == 'a' or m == 'r'): # phrase/regexp query
        if p.startswith('%') and p.endswith('%'):
            p = p[1:-1] # fix for partial phrase
        p = '"' + p + '"'
    return xapian_get_bitset(f, p)


def search_unit_in_bibrec(datetext1, datetext2, type='c'):
    """
    Return hitset of recIDs found that were either created or modified
    (according to 'type' arg being 'c' or 'm') from datetext1 until datetext2,
    inclusive.  Does not pay attention to pattern, collection, anything.
    Useful to intersect later on with the 'real' query.
    """
    set = intbitset()
    if type and type.startswith("m"):
        type = "modification_date"
    else:
        type = "creation_date" # by default we are searching for creation dates

    parts = datetext1.split('->')
    if len(parts) > 1 and datetext1 == datetext2:
        datetext1 = parts[0]
        datetext2 = parts[1]

    if datetext1 == datetext2:
        res = run_sql("SELECT id FROM bibrec WHERE %s LIKE %%s" % (type,),
                      (datetext1 + '%',))
    else:
        res = run_sql("SELECT id FROM bibrec WHERE %s>=%%s AND %s<=%%s" % (type, type),
                      (datetext1, datetext2))
    for row in res:
        set += row[0]
    return set


def search_unit_by_times_cited(p):
    """
    Return hitset of recIDs found that are cited P times.
    Usually P looks like '10->23'.
    """
    numstr = '"'+p+'"'
    # this is sort of stupid but since we may need to
    # get the records that do _not_ have cites, we have to
    # know the ids of all records, too
    # but this is needed only if p is 0, or starts with '0->', or ends with '->0'
    allrecs = []
    if p == 0 or p == "0" or \
       p.startswith("0->") or p.endswith("->0"):
        allrecs = intbitset(run_sql("SELECT id FROM bibrec"))
    return get_records_with_num_cites(numstr, allrecs)


def search_unit_refersto(query):
    """
    Search for records satisfying the query (e.g. author:ellis) and
    return list of records referred to by these records.
    """
    if query:
        ahitset = search_pattern(p=query)
        if ahitset:
            return get_refersto_hitset(ahitset)
        else:
            return intbitset([])
    else:
        return intbitset([])


def search_unit_citedby(query):
    """
    Search for records satisfying the query (e.g. author:ellis) and
    return list of records cited by these records.
    """
    if query:
        ahitset = search_pattern(p=query)
        if ahitset:
            return get_citedby_hitset(ahitset)
        else:
            return intbitset([])
    else:
        return intbitset([])


def search_unit_collection(query, m, wl=None):
    """
    Search for records satisfying the query (e.g. collection:"BOOK" or
    collection:"Books") and return list of records in the collection.
    """
    if len(query):
        ahitset = get_collection_reclist(query)
        if not ahitset:
            return search_unit_in_bibwords(query, 'collection', m, wl=wl)
        return ahitset
    else:
        return intbitset([])


def get_records_that_can_be_displayed(user_info,
                                      hitset_in_any_collection,
                                      current_coll=CFG_SITE_NAME,
                                      colls=None,
                                      permitted_restricted_collections=None):
    """
    Return records that can be displayed.
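    Added summary: 'hitset_in_any_collection' is the hitset to filter;
    'colls' defaults to [current_coll]; the restricted collections the
    user may view come from 'permitted_restricted_collections' (or from
    the user_info cache).  Under CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY ==
    'ANY' a record survives if the user may view at least one of the
    restricted collections containing it; under any other policy the
    user must be able to view all of them.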
""" records_that_can_be_displayed = intbitset() if colls is None: colls = [current_coll] # let's get the restricted collections the user has rights to view if permitted_restricted_collections is None: permitted_restricted_collections = user_info.get('precached_permitted_restricted_collections', []) policy = CFG_WEBSEARCH_VIEWRESTRCOLL_POLICY.strip().upper() current_coll_children = get_collection_allchildren(current_coll) # real & virtual # add all restricted collections, that the user has access to, and are under the current collection # do not use set here, in order to maintain a specific order: # children of 'cc' (real, virtual, restricted), rest of 'c' that are not cc's children colls_to_be_displayed = [coll for coll in current_coll_children if coll in colls or coll in permitted_restricted_collections] colls_to_be_displayed.extend([coll for coll in colls if coll not in colls_to_be_displayed]) if policy == 'ANY':# the user needs to have access to at least one collection that restricts the records #we need this to be able to remove records that are both in a public and restricted collection permitted_recids = intbitset() notpermitted_recids = intbitset() for collection in restricted_collection_cache.cache: if collection in permitted_restricted_collections: permitted_recids |= get_collection_reclist(collection) else: notpermitted_recids |= get_collection_reclist(collection) records_that_can_be_displayed = hitset_in_any_collection - (notpermitted_recids - permitted_recids) else:# the user needs to have access to all collections that restrict a records notpermitted_recids = intbitset() for collection in restricted_collection_cache.cache: if collection not in permitted_restricted_collections: notpermitted_recids |= get_collection_reclist(collection) records_that_can_be_displayed = hitset_in_any_collection - notpermitted_recids if records_that_can_be_displayed.is_infinite(): # We should not return infinite results for user. 
        records_that_can_be_displayed = intbitset()
        for coll in colls_to_be_displayed:
            records_that_can_be_displayed |= get_collection_reclist(coll)

    return records_that_can_be_displayed


def intersect_results_with_collrecs(req, hitset_in_any_collection,
                                    colls, ap=0, of="hb", verbose=0,
                                    ln=CFG_SITE_LANG,
                                    display_nearest_terms_box=True):
    """Return dict of hitsets given by intersection of hitset with the collection universes."""
    _ = gettext_set_language(ln)

    # search stage 4: intersect with the collection universe
    if verbose and of.startswith("h"):
        t1 = os.times()[4]
    results = {}  # all final results
    results_nbhits = 0

    # calculate the list of recids (restricted or not) that the user has rights to access and we should display (only those)
    if not req or isinstance(req, cStringIO.OutputType): # called from CLI
        user_info = {}
        for coll in colls:
            results[coll] = hitset_in_any_collection & get_collection_reclist(coll)
            results_nbhits += len(results[coll])
        records_that_can_be_displayed = hitset_in_any_collection
        permitted_restricted_collections = []
    else:
        user_info = collect_user_info(req)
        # let's get the restricted collections the user has rights to view
        if user_info['guest'] == '1':
            ## For guest users that are actually authorized to some restricted
            ## collection (by virtue of the IP address in a FireRole rule)
            ## we explicitly build the list of permitted_restricted_collections
            permitted_restricted_collections = get_permitted_restricted_collections(user_info)
        else:
            permitted_restricted_collections = user_info.get('precached_permitted_restricted_collections', [])

        # let's build the list of both the public and restricted
        # child collections of the collection from which the user
        # started his/her search.  This list of children colls will be
        # used in the warning proposing a search in those collections
        try:
            current_coll = req.argd['cc'] # current_coll: coll from which user started his/her search
        except Exception:
            from flask import request
            current_coll = request.args.get('cc', CFG_SITE_NAME) # current_coll: coll from which user started his/her search
        current_coll_children = get_collection_allchildren(current_coll) # real & virtual
        # add all restricted collections that the user has access to and that are under the current collection
        # do not use a set here, in order to maintain a specific order:
        # children of 'cc' (real, virtual, restricted), rest of 'c' that are not cc's children
        colls_to_be_displayed = [coll for coll in current_coll_children if coll in colls or coll in permitted_restricted_collections]
        colls_to_be_displayed.extend([coll for coll in colls if coll not in colls_to_be_displayed])

        records_that_can_be_displayed = get_records_that_can_be_displayed(
            user_info,
            hitset_in_any_collection,
            current_coll,
            colls,
            permitted_restricted_collections)

        for coll in colls_to_be_displayed:
            results[coll] = results.get(coll, intbitset()) | (records_that_can_be_displayed & get_collection_reclist(coll))
            results_nbhits += len(results[coll])

    if results_nbhits == 0:
        # no hits found, try to search in Home and restricted and/or hidden collections:
        results = {}
        results_in_Home = records_that_can_be_displayed & get_collection_reclist(CFG_SITE_NAME)
        results_in_restricted_collections = intbitset()
        results_in_hidden_collections = intbitset()
        for coll in permitted_restricted_collections:
            if not get_coll_ancestors(coll): # hidden collection
                results_in_hidden_collections.union_update(records_that_can_be_displayed & get_collection_reclist(coll))
            else:
                results_in_restricted_collections.union_update(records_that_can_be_displayed & get_collection_reclist(coll))

        # in this way, we do not count twice records that are both in the Home collection and in a restricted collection
        total_results = len(results_in_Home.union(results_in_restricted_collections))

        if total_results > 0:
            # some hits found in Home and/or restricted collections, so propose this search:
            if of.startswith("h") and display_nearest_terms_box:
                url = websearch_templates.build_search_url(req.argd, cc=CFG_SITE_NAME, c=[])
                len_colls_to_display = len(colls_to_be_displayed)
                # trim the list of collections to the first two, since it might get very large
                write_warning(_("No match found in collection %(x_collection)s. Other collections gave %(x_url_open)s%(x_nb_hits)d hits%(x_url_close)s.") %\
                              {'x_collection': '<em>' + \
                                    string.join([get_coll_i18nname(coll, ln, False) for coll in colls_to_be_displayed[:2]], ', ') + \
                                    (len_colls_to_display > 2 and ' et al' or '') + '</em>',
                               'x_url_open': '<a class="nearestterms" href="%s">' % (url),
                               'x_nb_hits': total_results,
                               'x_url_close': '</a>'},
                              req=req)
                # display the whole list of collections in a comment
                if len_colls_to_display > 2:
                    write_warning("<!--No match found in collection <em>%(x_collection)s</em>.-->" %\
                                  {'x_collection': string.join([get_coll_i18nname(coll, ln, False) for coll in colls_to_be_displayed], ', ')},
                                  req=req)
        else:
            # no hits found: either the user is looking for a document he/she has no rights to see,
            # or for a hidden document:
            if of.startswith("h") and display_nearest_terms_box:
                if len(results_in_hidden_collections) > 0:
                    write_warning(_("No public collection matched your query. "
                                    "If you were looking for a hidden document, please type "
                                    "the correct URL for this record."), req=req)
                else:
                    write_warning(_("No public collection matched your query. "
                                    "If you were looking for a non-public document, please choose "
                                    "the desired restricted collection first."), req=req)

    if verbose and of.startswith("h"):
        t2 = os.times()[4]
        write_warning("Search stage 4: intersecting with collection universe gave %d hits." % results_nbhits, req=req)
        write_warning("Search stage 4: execution took %.2f seconds." % (t2 - t1), req=req)

    return results


def intersect_results_with_hitset(req, results, hitset, ap=0, aptext="", of="hb"):
    """Return intersection of search 'results' (a dict of hitsets
       with collection as key) with the 'hitset', i.e. apply
       'hitset' intersection to each collection within search
       'results'.

       If the final set is empty and 'ap' (approximate pattern) is
       true, then print the 'aptext' warning and return the original
       'results' set unchanged.  If 'ap' is false, then return the
       empty results set.
    """
    if ap:
        results_ap = copy.deepcopy(results)
    else:
        results_ap = {} # will return empty dict in case of no hits found
    nb_total = 0
    final_results = {}
    for coll in results.keys():
        final_results[coll] = results[coll].intersection(hitset)
        nb_total += len(final_results[coll])
    if nb_total == 0:
        if of.startswith("h"):
            write_warning(aptext, req=req)
        final_results = results_ap
    return final_results


def create_similarly_named_authors_link_box(author_name, ln=CFG_SITE_LANG):
    """Return a box similar to the ``Not satisfied...'' one by proposing
       author searches for similar names.  Namely, take AUTHOR_NAME
       and the first initial of the first name (after comma) and look
       into the author index whether authors with e.g. middle names exist.

       Useful mainly for CERN Library that sometimes contains name
       forms like Ellis-N, Ellis-Nick, Ellis-Nicolas all denoting the
       same person.  The box isn't proposed if no similarly named
       authors are found to exist.
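
       Illustrative example (hypothetical data): author_name='Ellis, J'
       triggers a prefix search of the author index for values starting
       with 'Ellis, J' and might propose 'Ellis, John' (12 hits) and
       'Ellis, Jonathan' (3 hits) as alternative searches.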
""" # return nothing if not configured: if CFG_WEBSEARCH_CREATE_SIMILARLY_NAMED_AUTHORS_LINK_BOX == 0: return "" # return empty box if there is no initial: if re.match(r'[^ ,]+, [^ ]', author_name) is None: return "" # firstly find name comma initial: author_name_to_search = re.sub(r'^([^ ,]+, +[^ ,]).*$', '\\1', author_name) # secondly search for similar name forms: similar_author_names = {} for name in author_name_to_search, strip_accents(author_name_to_search): for tag in get_field_tags("author"): # deduce into which bibxxx table we will search: digit1, digit2 = int(tag[0]), int(tag[1]) bx = "bib%d%dx" % (digit1, digit2) bibx = "bibrec_bib%d%dx" % (digit1, digit2) if len(tag) != 6 or tag[-1:]=='%': # only the beginning of field 't' is defined, so add wildcard character: res = run_sql("""SELECT bx.value FROM %s AS bx WHERE bx.value LIKE %%s AND bx.tag LIKE %%s""" % bx, (name + "%", tag + "%")) else: res = run_sql("""SELECT bx.value FROM %s AS bx WHERE bx.value LIKE %%s AND bx.tag=%%s""" % bx, (name + "%", tag)) for row in res: similar_author_names[row[0]] = 1 # remove the original name and sort the list: try: del similar_author_names[author_name] except KeyError: pass # thirdly print the box: out = "" if similar_author_names: out_authors = similar_author_names.keys() out_authors.sort() tmp_authors = [] for out_author in out_authors: nbhits = get_nbhits_in_bibxxx(out_author, "author") if nbhits: tmp_authors.append((out_author, nbhits)) out += websearch_templates.tmpl_similar_author_names( authors=tmp_authors, ln=ln) return out def create_nearest_terms_box(urlargd, p, f, t='w', n=5, ln=CFG_SITE_LANG, intro_text_p=True): """Return text box containing list of 'n' nearest terms above/below 'p' for the field 'f' for matching type 't' (words/phrases) in language 'ln'. Propose new searches according to `urlargs' with the new words. If `intro_text_p' is true, then display the introductory message, otherwise print only the nearest terms in the box content. """ # load the right message language _ = gettext_set_language(ln) if not CFG_WEBSEARCH_DISPLAY_NEAREST_TERMS: return _("Your search did not match any records. Please try again.") nearest_terms = [] if not p: # sanity check p = "." 
    if p.startswith('%') and p.endswith('%'):
        p = p[1:-1] # fix for partial phrase
    index_id = get_index_id_from_field(f)
    if f == 'fulltext':
        if CFG_SOLR_URL:
            return _("No match found, please enter different search terms.")
        else:
            # FIXME: workaround for not having native phrase index yet
            t = 'w'
    # special indexes:
    if f == 'refersto':
        return _("There are no records referring to %(x_rec)s.", x_rec=cgi.escape(p))
    if f == 'citedby':
        return _("There are no records cited by %(x_rec)s.", x_rec=cgi.escape(p))
    # look for nearest terms:
    if t == 'w':
        nearest_terms = get_nearest_terms_in_bibwords(p, f, n, n)
        if not nearest_terms:
            return _("No word index is available for %(x_name)s.",
                     x_name=('<em>' + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + '</em>'))
    else:
        nearest_terms = []
        if index_id:
            nearest_terms = get_nearest_terms_in_idxphrase(p, index_id, n, n)
        if f == 'datecreated' or f == 'datemodified':
            nearest_terms = get_nearest_terms_in_bibrec(p, f, n, n)
        if not nearest_terms:
            nearest_terms = get_nearest_terms_in_bibxxx(p, f, n, n)
        if not nearest_terms:
            return _("No phrase index is available for %(x_name)s.",
                     x_name=('<em>' + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + '</em>'))

    terminfo = []
    for term in nearest_terms:
        if t == 'w':
            hits = get_nbhits_in_bibwords(term, f)
        else:
            if index_id:
                hits = get_nbhits_in_idxphrases(term, f)
            elif f == 'datecreated' or f == 'datemodified':
                hits = get_nbhits_in_bibrec(term, f)
            else:
                hits = get_nbhits_in_bibxxx(term, f)

        argd = {}
        argd.update(urlargd)
        # check which fields contained the requested parameter, and replace it.
        for (px, fx) in ('p', 'f'), ('p1', 'f1'), ('p2', 'f2'), ('p3', 'f3'):
            if px in argd:
                argd_px = argd[px]
                if t == 'w':
                    # p was stripped of accents, so do the same:
                    argd_px = strip_accents(argd_px)
                #argd[px] = string.replace(argd_px, p, term, 1)
                #we need something similar, but case insensitive
                pattern_index = string.find(argd_px.lower(), p.lower())
                if pattern_index > -1:
                    argd[px] = argd_px[:pattern_index] + term + argd_px[pattern_index+len(p):]
                    break
                #this is doing exactly the same as:
                #argd[px] = re.sub('(?i)' + re.escape(p), term, argd_px, 1)
                #but is ~4x faster (2us vs. 8.25us)
        terminfo.append((term, hits, argd))

    intro = ""
    if intro_text_p: # add full leading introductory text
        if f:
            intro = _("Search term %(x_term)s inside index %(x_index)s did not match any record. Nearest terms in any collection are:") % \
                     {'x_term': "<em>" + cgi.escape(p.startswith("%") and p.endswith("%") and p[1:-1] or p) + "</em>",
                      'x_index': "<em>" + cgi.escape(get_field_i18nname(get_field_name(f) or f, ln, False)) + "</em>"}
        else:
            intro = _("Search term %(x_name)s did not match any record.
Nearest terms in any collection are:", x_name=("<em>" + cgi.escape(p.startswith("%") and p.endswith("%") and p[1:-1] or p) + "</em>")) return websearch_templates.tmpl_nearest_term_box(p=p, ln=ln, f=f, terminfo=terminfo, intro=intro) def get_nearest_terms_in_bibwords(p, f, n_below, n_above): """Return list of +n -n nearest terms to word `p' in index for field `f'.""" nearest_words = [] # will hold the (sorted) list of nearest words to return # deduce into which bibwordsX table we will search: bibwordsX = "idxWORD%02dF" % get_index_id_from_field("anyfield") if f: index_id = get_index_id_from_field(f) if index_id: bibwordsX = "idxWORD%02dF" % index_id else: return nearest_words # firstly try to get `n' closest words above `p': res = run_sql("SELECT term FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % bibwordsX, (p, n_above)) for row in res: nearest_words.append(row[0]) nearest_words.reverse() # secondly insert given word `p': nearest_words.append(p) # finally try to get `n' closest words below `p': res = run_sql("SELECT term FROM %s WHERE term>%%s ORDER BY term ASC LIMIT %%s" % bibwordsX, (p, n_below)) for row in res: nearest_words.append(row[0]) return nearest_words def get_nearest_terms_in_idxphrase(p, index_id, n_below, n_above): """Browse (-n_above, +n_below) closest bibliographic phrases for the given pattern p in the given field idxPHRASE table, regardless of collection. Return list of [phrase1, phrase2, ... , phrase_n].""" if CFG_INSPIRE_SITE and index_id in (3, 15): # FIXME: workaround due to new fuzzy index return [p,] idxphraseX = "idxPHRASE%02dF" % index_id res_above = run_sql("SELECT term FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % idxphraseX, (p, n_above)) res_above = map(lambda x: x[0], res_above) res_above.reverse() res_below = run_sql("SELECT term FROM %s WHERE term>=%%s ORDER BY term ASC LIMIT %%s" % idxphraseX, (p, n_below)) res_below = map(lambda x: x[0], res_below) return res_above + res_below def get_nearest_terms_in_idxphrase_with_collection(p, index_id, n_below, n_above, collection): """Browse (-n_above, +n_below) closest bibliographic phrases for the given pattern p in the given field idxPHRASE table, considering the collection (intbitset). Return list of [(phrase1, hitset), (phrase2, hitset), ... , (phrase_n, hitset)].""" idxphraseX = "idxPHRASE%02dF" % index_id res_above = run_sql("SELECT term,hitlist FROM %s WHERE term<%%s ORDER BY term DESC LIMIT %%s" % idxphraseX, (p, n_above * 3)) res_above = [(term, intbitset(hitlist) & collection) for term, hitlist in res_above] res_above = [(term, len(hitlist)) for term, hitlist in res_above if hitlist] res_below = run_sql("SELECT term,hitlist FROM %s WHERE term>=%%s ORDER BY term ASC LIMIT %%s" % idxphraseX, (p, n_below * 3)) res_below = [(term, intbitset(hitlist) & collection) for term, hitlist in res_below] res_below = [(term, len(hitlist)) for term, hitlist in res_below if hitlist] res_above.reverse() return res_above[-n_above:] + res_below[:n_below] def get_nearest_terms_in_bibxxx(p, f, n_below, n_above): """Browse (-n_above, +n_below) closest bibliographic phrases for the given pattern p in the given field f, regardless of collection. Return list of [phrase1, phrase2, ... , phrase_n].""" ## determine browse field: if not f and string.find(p, ":") > 0: # does 'p' contain ':'? 
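        # (added note: a browse pattern such as the hypothetical 'author:Ellis'
        # arrives as a single string; split it into field and term here)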
        f, p = string.split(p, ":", 1)

    # FIXME: quick hack for the journal index
    if f == 'journal':
        return get_nearest_terms_in_bibwords(p, f, n_below, n_above)

    ## We are going to take max(n_below, n_above) as the number of
    ## values to fetch from bibXXx.  This is needed to work around
    ## MySQL UTF-8 sorting troubles in 4.0.x.  Proper solution is to
    ## use MySQL 4.1.x or our own idxPHRASE in the future.

    index_id = get_index_id_from_field(f)
    if index_id:
        return get_nearest_terms_in_idxphrase(p, index_id, n_below, n_above)

    n_fetch = 2*max(n_below, n_above)
    ## construct 'tl' which defines the tag list (MARC tags) to search in:
    tl = []
    if str(f[0]).isdigit() and str(f[1]).isdigit():
        tl.append(f) # 'f' seems to be okay as it starts by two digits
    else:
        # deduce desired MARC tags on the basis of chosen 'f'
        tl = get_field_tags(f)
    ## start browsing to fetch list of hits:
    browsed_phrases = {} # will hold {phrase1: 1, phrase2: 1, ..., phraseN: 1} dict of browsed phrases (to make them unique)
    # always add self to the results set:
    browsed_phrases[p.startswith("%") and p.endswith("%") and p[1:-1] or p] = 1
    for t in tl:
        # deduce into which bibxxx table we will search:
        digit1, digit2 = int(t[0]), int(t[1])
        bx = "bib%d%dx" % (digit1, digit2)
        bibx = "bibrec_bib%d%dx" % (digit1, digit2)

        # firstly try to get `n' closest phrases above `p':
        if len(t) != 6 or t[-1:] == '%': # only the beginning of field 't' is defined, so add wildcard character:
            res = run_sql("""SELECT bx.value FROM %s AS bx
                              WHERE bx.value<%%s AND bx.tag LIKE %%s
                              ORDER BY bx.value DESC LIMIT %%s""" % bx,
                          (p, t + "%", n_fetch))
        else:
            res = run_sql("""SELECT bx.value FROM %s AS bx
                              WHERE bx.value<%%s AND bx.tag=%%s
                              ORDER BY bx.value DESC LIMIT %%s""" % bx,
                          (p, t, n_fetch))
        for row in res:
            browsed_phrases[row[0]] = 1

        # secondly try to get `n' closest phrases equal to or below `p':
        if len(t) != 6 or t[-1:] == '%': # only the beginning of field 't' is defined, so add wildcard character:
            res = run_sql("""SELECT bx.value FROM %s AS bx
                              WHERE bx.value>=%%s AND bx.tag LIKE %%s
                              ORDER BY bx.value ASC LIMIT %%s""" % bx,
                          (p, t + "%", n_fetch))
        else:
            res = run_sql("""SELECT bx.value FROM %s AS bx
                              WHERE bx.value>=%%s AND bx.tag=%%s
                              ORDER BY bx.value ASC LIMIT %%s""" % bx,
                          (p, t, n_fetch))
        for row in res:
            browsed_phrases[row[0]] = 1

    # select first n words only: (this is needed as we were searching
    # in many different tables and so aren't sure we have more than n
    # words right; this of course won't be needed when we shall have
    # one ACC table only for given field):
    phrases_out = browsed_phrases.keys()
    phrases_out.sort(lambda x, y: cmp(string.lower(strip_accents(x)),
                                      string.lower(strip_accents(y))))
    # find position of self:
    try:
        idx_p = phrases_out.index(p)
    except ValueError:
        idx_p = len(phrases_out)/2
    # return n_above and n_below:
    return phrases_out[max(0, idx_p-n_above):idx_p+n_below]


def get_nearest_terms_in_bibrec(p, f, n_below, n_above):
    """Return list of nearest terms and counts from bibrec table.
       p is usually a date, and f either datecreated or datemodified.

       Note: below/above count is only approximate, not really respected.
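
       Illustrative example (hypothetical data): p='2013-01-01' with
       f='datecreated' could return neighbouring creation dates such as
       '2012-12-28 10:14:02' and '2013-01-03 09:01:55'; the values are
       formatted by MySQL's DATE_FORMAT as '%Y-%m-%d %H:%i:%s' strings.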
""" col = 'creation_date' if f == 'datemodified': col = 'modification_date' res_above = run_sql("""SELECT DATE_FORMAT(%s,'%%%%Y-%%%%m-%%%%d %%%%H:%%%%i:%%%%s') FROM bibrec WHERE %s < %%s ORDER BY %s DESC LIMIT %%s""" % (col, col, col), (p, n_above)) res_below = run_sql("""SELECT DATE_FORMAT(%s,'%%%%Y-%%%%m-%%%%d %%%%H:%%%%i:%%%%s') FROM bibrec WHERE %s > %%s ORDER BY %s ASC LIMIT %%s""" % (col, col, col), (p, n_below)) out = set([]) for row in res_above: out.add(row[0]) for row in res_below: out.add(row[0]) out_list = list(out) out_list.sort() return list(out_list) def get_nbhits_in_bibrec(term, f): """Return number of hits in bibrec table. term is usually a date, and f is either 'datecreated' or 'datemodified'.""" col = 'creation_date' if f == 'datemodified': col = 'modification_date' res = run_sql("SELECT COUNT(*) FROM bibrec WHERE %s LIKE %%s" % (col,), (term + '%',)) return res[0][0] def get_nbhits_in_bibwords(word, f): """Return number of hits for word 'word' inside words index for field 'f'.""" out = 0 # deduce into which bibwordsX table we will search: bibwordsX = "idxWORD%02dF" % get_index_id_from_field("anyfield") if f: index_id = get_index_id_from_field(f) if index_id: bibwordsX = "idxWORD%02dF" % index_id else: return 0 if word: res = run_sql("SELECT hitlist FROM %s WHERE term=%%s" % bibwordsX, (word,)) for hitlist in res: out += len(intbitset(hitlist[0])) return out def get_nbhits_in_idxphrases(word, f): """Return number of hits for word 'word' inside phrase index for field 'f'.""" out = 0 # deduce into which bibwordsX table we will search: idxphraseX = "idxPHRASE%02dF" % get_index_id_from_field("anyfield") if f: index_id = get_index_id_from_field(f) if index_id: idxphraseX = "idxPHRASE%02dF" % index_id else: return 0 if word: res = run_sql("SELECT hitlist FROM %s WHERE term=%%s" % idxphraseX, (word,)) for hitlist in res: out += len(intbitset(hitlist[0])) return out def get_nbhits_in_bibxxx(p, f, in_hitset=None): """Return number of hits for word 'word' inside words index for field 'f'.""" ## determine browse field: if not f and string.find(p, ":") > 0: # does 'p' contain ':'? f, p = string.split(p, ":", 1) # FIXME: quick hack for the journal index if f == 'journal': return get_nbhits_in_bibwords(p, f) ## construct 'tl' which defines the tag list (MARC tags) to search in: tl = [] if str(f[0]).isdigit() and str(f[1]).isdigit(): tl.append(f) # 'f' seems to be okay as it starts by two digits else: # deduce desired MARC tags on the basis of chosen 'f' tl = get_field_tags(f) # start searching: recIDs = {} # will hold dict of {recID1: 1, recID2: 1, ..., } (unique recIDs, therefore) for t in tl: # deduce into which bibxxx table we will search: digit1, digit2 = int(t[0]), int(t[1]) bx = "bib%d%dx" % (digit1, digit2) bibx = "bibrec_bib%d%dx" % (digit1, digit2) if len(t) != 6 or t[-1:]=='%': # only the beginning of field 't' is defined, so add wildcard character: res = run_sql("""SELECT bibx.id_bibrec FROM %s AS bibx, %s AS bx WHERE bx.value=%%s AND bx.tag LIKE %%s AND bibx.id_bibxxx=bx.id""" % (bibx, bx), (p, t + "%")) else: res = run_sql("""SELECT bibx.id_bibrec FROM %s AS bibx, %s AS bx WHERE bx.value=%%s AND bx.tag=%%s AND bibx.id_bibxxx=bx.id""" % (bibx, bx), (p, t)) for row in res: recIDs[row[0]] = 1 if in_hitset is None: nbhits = len(recIDs) else: nbhits = len(intbitset(recIDs.keys()).intersection(in_hitset)) return nbhits def get_mysql_recid_from_aleph_sysno(sysno): """Returns DB's recID for ALEPH sysno passed in the argument (e.g. "002379334CER"). 
Returns None in case of failure.""" out = None res = run_sql("""SELECT bb.id_bibrec FROM bibrec_bib97x AS bb, bib97x AS b WHERE b.value=%s AND b.tag='970__a' AND bb.id_bibxxx=b.id""", (sysno,)) if res: out = res[0][0] return out def guess_primary_collection_of_a_record(recID): """Return primary collection name a record recid belongs to, by testing 980 identifier. May lead to bad guesses when a collection is defined dynamically via dbquery. In that case, return 'CFG_SITE_NAME'.""" out = CFG_SITE_NAME dbcollids = get_fieldvalues(recID, "980__a") for dbcollid in dbcollids: variants = ("collection:" + dbcollid, 'collection:"' + dbcollid + '"', "980__a:" + dbcollid, '980__a:"' + dbcollid + '"', '980:' + dbcollid , '980:"' + dbcollid + '"') res = run_sql("SELECT name FROM collection WHERE dbquery IN (%s,%s,%s,%s,%s,%s)", variants) if res: out = res[0][0] break if CFG_CERN_SITE: recID = int(recID) # dirty hack for ATLAS collections at CERN: if out in ('ATLAS Communications', 'ATLAS Internal Notes'): for alternative_collection in ('ATLAS Communications Physics', 'ATLAS Communications General', 'ATLAS Internal Notes Physics', 'ATLAS Internal Notes General',): if recID in get_collection_reclist(alternative_collection): return alternative_collection # dirty hack for FP FP_collections = {'DO': ['Current Price Enquiries', 'Archived Price Enquiries'], 'IT': ['Current Invitation for Tenders', 'Archived Invitation for Tenders'], 'MS': ['Current Market Surveys', 'Archived Market Surveys']} fp_coll_ids = [coll for coll in dbcollids if coll in FP_collections] for coll in fp_coll_ids: for coll_name in FP_collections[coll]: if recID in get_collection_reclist(coll_name): return coll_name return out _re_collection_url = re.compile('/collection/(.+)') def guess_collection_of_a_record(recID, referer=None, recreate_cache_if_needed=True): """Return collection name a record recid belongs to, by first testing the referer URL if provided and otherwise returning the primary collection.""" if referer: dummy, hostname, path, dummy, query, dummy = urlparse.urlparse(referer) #requests can come from different invenio installations, with different collections if CFG_SITE_URL.find(hostname) < 0: return guess_primary_collection_of_a_record(recID) g = _re_collection_url.match(path) if g: name = urllib.unquote_plus(g.group(1)) #check if this collection actually exist (also normalize the name if case-insensitive) name = get_coll_normalised_name(name) if name and recID in get_collection_reclist(name): return name elif path.startswith('/search'): if recreate_cache_if_needed: collection_reclist_cache.recreate_cache_if_needed() query = cgi.parse_qs(query) for name in query.get('cc', []) + query.get('c', []): name = get_coll_normalised_name(name) if name and recID in get_collection_reclist(name, recreate_cache_if_needed=False): return name return guess_primary_collection_of_a_record(recID) def is_record_in_any_collection(recID, recreate_cache_if_needed=True): """Return True if the record belongs to at least one collection. This is a good, although not perfect, indicator to guess if webcoll has already run after this record has been entered into the system. """ if recreate_cache_if_needed: collection_reclist_cache.recreate_cache_if_needed() for name in collection_reclist_cache.cache.keys(): if recID in get_collection_reclist(name, recreate_cache_if_needed=False): return True return False def get_all_collections_of_a_record(recID, recreate_cache_if_needed=True): """Return all the collection names a record belongs to. 
Note this function is O(n_collections).""" ret = [] if recreate_cache_if_needed: collection_reclist_cache.recreate_cache_if_needed() for name in collection_reclist_cache.cache.keys(): if recID in get_collection_reclist(name, recreate_cache_if_needed=False): ret.append(name) return ret def get_tag_name(tag_value, prolog="", epilog=""): """Return tag name from the known tag value, by looking up the 'tag' table. Return empty string in case of failure. Example: input='100__%', output=first author'.""" out = "" res = run_sql("SELECT name FROM tag WHERE value=%s", (tag_value,)) if res: out = prolog + res[0][0] + epilog return out def get_fieldcodes(): """Returns a list of field codes that may have been passed as 'search options' in URL. Example: output=['subject','division'].""" out = [] res = run_sql("SELECT DISTINCT(code) FROM field") for row in res: out.append(row[0]) return out def get_field_name(code): """Return the corresponding field_name given the field code. e.g. reportnumber -> report number.""" res = run_sql("SELECT name FROM field WHERE code=%s", (code, )) if res: return res[0][0] else: return "" def get_field_tags(field): """Returns a list of MARC tags for the field code 'field'. Returns empty list in case of error. Example: field='author', output=['100__%','700__%'].""" out = [] query = """SELECT t.value FROM tag AS t, field_tag AS ft, field AS f WHERE f.code=%s AND ft.id_field=f.id AND t.id=ft.id_tag ORDER BY ft.score DESC""" res = run_sql(query, (field, )) for val in res: out.append(val[0]) return out def get_merged_recid(recID): """ Return the record ID of the record with which the given record has been merged. @param recID: deleted record recID @type recID: int @return: merged record recID @rtype: int or None """ merged_recid = None for val in get_fieldvalues(recID, "970__d"): try: merged_recid = int(val) break except ValueError: pass return merged_recid def record_exists(recID): """Return 1 if record RECID exists. Return 0 if it doesn't exist. Return -1 if it exists but is marked as deleted. """ out = 0 res = run_sql("SELECT id FROM bibrec WHERE id=%s", (recID,), 1) if res: try: # if recid is '123foo', mysql will return id=123, and we don't want that recID = int(recID) except ValueError: return 0 # record exists; now check whether it isn't marked as deleted: dbcollids = get_fieldvalues(recID, "980__%") if ("DELETED" in dbcollids) or (CFG_CERN_SITE and "DUMMY" in dbcollids): out = -1 # exists, but marked as deleted else: out = 1 # exists fine return out def record_empty(recID): """ Is this record empty, e.g. has only 001, waiting for integration? @param recID: the record identifier. @type recID: int @return: 1 if the record is empty, 0 otherwise. @rtype: int """ record = get_record(recID) if record is None or len(record) < 2: return 1 else: return 0 def record_public_p(recID, recreate_cache_if_needed=True): """Return 1 if the record is public, i.e. if it can be found in the Home collection. Return 0 otherwise. """ return recID in get_collection_reclist(CFG_SITE_NAME, recreate_cache_if_needed=recreate_cache_if_needed) def get_creation_date(recID, fmt="%Y-%m-%d"): "Returns the creation date of the record 'recID'." out = "" res = run_sql("SELECT DATE_FORMAT(creation_date,%s) FROM bibrec WHERE id=%s", (fmt, recID), 1) if res: out = res[0][0] return out def get_modification_date(recID, fmt="%Y-%m-%d"): "Returns the date of last modification for the record 'recID'." 
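    # Added note: 'fmt' is passed straight to MySQL's DATE_FORMAT(), so a
    # (hypothetical) call get_modification_date(10, fmt="%Y-%m") would return
    # e.g. '2013-02' for record 10.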
out = "" res = run_sql("SELECT DATE_FORMAT(modification_date,%s) FROM bibrec WHERE id=%s", (fmt, recID), 1) if res: out = res[0][0] return out def print_search_info(p, f, sf, so, sp, rm, of, ot, collection=CFG_SITE_NAME, nb_found=-1, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, aas=0, ln=CFG_SITE_LANG, p1="", p2="", p3="", f1="", f2="", f3="", m1="", m2="", m3="", op1="", op2="", sc=1, pl_in_url="", d1y=0, d1m=0, d1d=0, d2y=0, d2m=0, d2d=0, dt="", cpu_time=-1, middle_only=0, em=""): """Prints stripe with the information on 'collection' and 'nb_found' results and CPU time. Also, prints navigation links (beg/next/prev/end) inside the results set. If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links. This is suitable for displaying navigation links at the bottom of the search results page.""" if em != '' and EM_REPOSITORY["search_info"] not in em: return "" # sanity check: if jrec < 1: jrec = 1 if jrec > nb_found: jrec = max(nb_found-rg+1, 1) return websearch_templates.tmpl_print_search_info( ln = ln, collection = collection, aas = aas, collection_name = get_coll_i18nname(collection, ln, False), collection_id = get_colID(collection), middle_only = middle_only, rg = rg, nb_found = nb_found, sf = sf, so = so, rm = rm, of = of, ot = ot, p = p, f = f, p1 = p1, p2 = p2, p3 = p3, f1 = f1, f2 = f2, f3 = f3, m1 = m1, m2 = m2, m3 = m3, op1 = op1, op2 = op2, pl_in_url = pl_in_url, d1y = d1y, d1m = d1m, d1d = d1d, d2y = d2y, d2m = d2m, d2d = d2d, dt = dt, jrec = jrec, sc = sc, sp = sp, all_fieldcodes = get_fieldcodes(), cpu_time = cpu_time, ) def print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, collection=CFG_SITE_NAME, nb_found=-1, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, aas=0, ln=CFG_SITE_LANG, p1="", p2="", p3="", f1="", f2="", f3="", m1="", m2="", m3="", op1="", op2="", sc=1, pl_in_url="", d1y=0, d1m=0, d1d=0, d2y=0, d2m=0, d2d=0, dt="", cpu_time=-1, middle_only=0, em=""): """Prints stripe with the information on 'collection' and 'nb_found' results and CPU time. Also, prints navigation links (beg/next/prev/end) inside the results set. If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links. 
This is suitable for displaying navigation links at the bottom of the search results page.""" if em != '' and EM_REPOSITORY["search_info"] not in em: return "" # sanity check: if jrec < 1: jrec = 1 if jrec > nb_found: jrec = max(nb_found-rg+1, 1) return websearch_templates.tmpl_print_hosted_search_info( ln = ln, collection = collection, aas = aas, collection_name = get_coll_i18nname(collection, ln, False), collection_id = get_colID(collection), middle_only = middle_only, rg = rg, nb_found = nb_found, sf = sf, so = so, rm = rm, of = of, ot = ot, p = p, f = f, p1 = p1, p2 = p2, p3 = p3, f1 = f1, f2 = f2, f3 = f3, m1 = m1, m2 = m2, m3 = m3, op1 = op1, op2 = op2, pl_in_url = pl_in_url, d1y = d1y, d1m = d1m, d1d = d1d, d2y = d2y, d2m = d2m, d2d = d2d, dt = dt, jrec = jrec, sc = sc, sp = sp, all_fieldcodes = get_fieldcodes(), cpu_time = cpu_time, ) def print_results_overview(colls, results_final_nb_total, results_final_nb, cpu_time, ln=CFG_SITE_LANG, ec=[], hosted_colls_potential_results_p=False, em=""): """Prints results overview box with links to particular collections below.""" if em != "" and EM_REPOSITORY["overview"] not in em: return "" new_colls = [] for coll in colls: new_colls.append({ 'id': get_colID(coll), 'code': coll, 'name': get_coll_i18nname(coll, ln, False), }) return websearch_templates.tmpl_print_results_overview( ln = ln, results_final_nb_total = results_final_nb_total, results_final_nb = results_final_nb, cpu_time = cpu_time, colls = new_colls, ec = ec, hosted_colls_potential_results_p = hosted_colls_potential_results_p, ) def print_hosted_results(url_and_engine, ln=CFG_SITE_LANG, of=None, req=None, no_records_found=False, search_timed_out=False, limit=CFG_EXTERNAL_COLLECTION_MAXRESULTS, em = ""): """Prints the full results of a hosted collection""" if of.startswith("h"): if no_records_found: return "<br />No results found." if search_timed_out: return "<br />The search engine did not respond in time." return websearch_templates.tmpl_print_hosted_results( url_and_engine=url_and_engine, ln=ln, of=of, req=req, limit=limit, display_body = em == "" or EM_REPOSITORY["body"] in em, display_add_to_basket = em == "" or EM_REPOSITORY["basket"] in em) class BibSortDataCacher(DataCacher): """ Cache holding all structures created by bibsort ( _data, data_dict). 
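    Added summary of the cached layout (inferred from cache_filler below):
    'data_dict_ordered' maps recid -> sort weight and 'bucket_data' maps
    bucket number -> intbitset of the recids falling into that bucket.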
""" def __init__(self, method_name): self.method_name = method_name self.method_id = 0 try: res = run_sql("""SELECT id from bsrMETHOD where name = %s""", (self.method_name,)) except: self.method_id = 0 if res and res[0]: self.method_id = res[0][0] else: self.method_id = 0 def cache_filler(): method_id = self.method_id alldicts = {} if self.method_id == 0: return {} try: res_data = run_sql("""SELECT data_dict_ordered from bsrMETHODDATA \ where id_bsrMETHOD = %s""", (method_id,)) res_buckets = run_sql("""SELECT bucket_no, bucket_data from bsrMETHODDATABUCKET\ where id_bsrMETHOD = %s""", (method_id,)) except Exception: # database problems, return empty cache return {} try: data_dict_ordered = deserialize_via_marshal(res_data[0][0]) except: data_dict_ordered = {} alldicts['data_dict_ordered'] = data_dict_ordered # recid: weight if not res_buckets: alldicts['bucket_data'] = {} return alldicts for row in res_buckets: bucket_no = row[0] try: bucket_data = intbitset(row[1]) except: bucket_data = intbitset([]) alldicts.setdefault('bucket_data', {})[bucket_no] = bucket_data return alldicts def timestamp_verifier(): method_id = self.method_id res = run_sql("""SELECT last_updated from bsrMETHODDATA where id_bsrMETHOD = %s""", (method_id,)) try: update_time_methoddata = str(res[0][0]) except IndexError: update_time_methoddata = '1970-01-01 00:00:00' res = run_sql("""SELECT max(last_updated) from bsrMETHODDATABUCKET where id_bsrMETHOD = %s""", (method_id,)) try: update_time_buckets = str(res[0][0]) except IndexError: update_time_buckets = '1970-01-01 00:00:00' return max(update_time_methoddata, update_time_buckets) DataCacher.__init__(self, cache_filler, timestamp_verifier) def get_sorting_methods(): if not CFG_BIBSORT_BUCKETS: # we do not want to use buckets return {} try: # make sure the method has some data res = run_sql("""SELECT m.name, m.definition FROM bsrMETHOD m, bsrMETHODDATA md WHERE m.id = md.id_bsrMETHOD""") except: return {} return dict(res) sorting_methods = get_sorting_methods() cache_sorted_data = {} for sorting_method in sorting_methods: try: cache_sorted_data[sorting_method].is_ok_p except Exception: cache_sorted_data[sorting_method] = BibSortDataCacher(sorting_method) def get_tags_from_sort_fields(sort_fields): """Given a list of sort_fields, return the tags associated with it and also the name of the field that has no tags associated, to be able to display a message to the user.""" tags = [] if not sort_fields: return [], '' for sort_field in sort_fields: if sort_field and str(sort_field[0:2]).isdigit(): # sort_field starts by two digits, so this is probably a MARC tag already tags.append(sort_field) else: # let us check the 'field' table field_tags = get_field_tags(sort_field) if field_tags: tags.extend(field_tags) else: return [], sort_field return tags, '' def rank_records(req, rank_method_code, rank_limit_relevance, hitset_global, pattern=None, verbose=0, sort_order='d', of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None, field=''): """Initial entry point for ranking records, acts like a dispatcher. 
    (i) rank_method_code is in bsrMETHOD, bibsort buckets can be used;
    (ii) rank_method_code is not in bsrMETHOD, use bibrank;
    """
    if CFG_BIBSORT_BUCKETS and sorting_methods:
        for sort_method in sorting_methods:
            definition = sorting_methods[sort_method]
            if definition.startswith('RNK') and \
                   definition.replace('RNK:','').strip().lower() == string.lower(rank_method_code):
                (solution_recs, solution_scores) = sort_records_bibsort(req, hitset_global, sort_method, '', sort_order, verbose, of, ln, rg, jrec, 'r')
                #return (solution_recs, solution_scores, '', '', '')
                comment = ''
                if verbose > 0:
                    comment = 'find_citations retlist %s' % [[solution_recs[i], solution_scores[i]] for i in range(len(solution_recs))]
                return (solution_recs, solution_scores, '(', ')', comment)
    return rank_records_bibrank(rank_method_code, rank_limit_relevance, hitset_global, pattern, verbose, field, rg, jrec)


def sort_records(req, recIDs, sort_field='', sort_order='d', sort_pattern='', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None):
    """Initial entry point for sorting records, acts like a dispatcher.
    (i) sort_field is in bsrMETHOD, and thus BibSort has already sorted
    the data for this field, so we can use the cache;
    (ii) sort_field is not in bsrMETHOD, and thus the cache does not
    contain any information regarding this sorting method"""

    _ = gettext_set_language(ln)

    #we should return sorted records up to irec_max (exclusive)
    dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
    #calculate the min index on the reversed list
    index_min = max(len(recIDs) - irec_max, 0) #just to be sure that the min index is not negative

    #bibsort does not handle sort_pattern for now, use bibxxx
    if sort_pattern:
        return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, sort_pattern, verbose, of, ln, rg, jrec)

    use_sorting_buckets = True

    if not CFG_BIBSORT_BUCKETS or not sorting_methods: #ignore the use of buckets, use old-fashioned sorting
        use_sorting_buckets = False

    if not sort_field:
        if use_sorting_buckets:
            return sort_records_bibsort(req, recIDs, 'latest first', sort_field, sort_order, verbose, of, ln, rg, jrec)
        else:
            return recIDs[index_min:]

    sort_fields = string.split(sort_field, ",")
    if len(sort_fields) == 1:
        # we have only one sorting_field, check if it is treated by BibSort
        for sort_method in sorting_methods:
            definition = sorting_methods[sort_method]
            if use_sorting_buckets and \
                   ((definition.startswith('FIELD') and \
                     definition.replace('FIELD:','').strip().lower() == string.lower(sort_fields[0])) or \
                    sort_method == sort_fields[0]):
                #use BibSort
                return sort_records_bibsort(req, recIDs, sort_method, sort_field, sort_order, verbose, of, ln, rg, jrec)
    #deduce sorting MARC tag out of the 'sort_field' argument:
    tags, error_field = get_tags_from_sort_fields(sort_fields)
    if error_field:
        if use_sorting_buckets:
            return sort_records_bibsort(req, recIDs, 'latest first', sort_field, sort_order, verbose, of, ln, rg, jrec)
        else:
            if of.startswith('h'):
                write_warning(_("Sorry, %(x_name)s does not seem to be a valid sort option. The records will not be sorted.", x_name=cgi.escape(error_field)), "Error", req=req)
            return recIDs[index_min:]
    if tags:
        for sort_method in sorting_methods:
            definition = sorting_methods[sort_method]
            if definition.startswith('MARC') \
                    and definition.replace('MARC:','').strip().split(',') == tags \
                    and use_sorting_buckets:
                #this list of tags has a designated method in BibSort, so use it
                return sort_records_bibsort(req, recIDs, sort_method, sort_field, sort_order, verbose, of, ln, rg, jrec)
        #we do not have this sort_field in the BibSort tables -> do the old-fashioned sorting
        return sort_records_bibxxx(req, recIDs, tags, sort_field, sort_order, sort_pattern, verbose, of, ln, rg, jrec)

    return recIDs[index_min:]


def sort_records_bibsort(req, recIDs, sort_method, sort_field='', sort_order='d', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None, sort_or_rank = 's'):
    """This function orders the recIDs list, based on a sorting method (sort_field),
    using the BibSortDataCacher for speed"""
    _ = gettext_set_language(ln)

    #sanity check
    if sort_method not in sorting_methods:
        if sort_or_rank == 'r':
            return rank_records_bibrank(sort_method, 0, recIDs, None, verbose)
        else:
            return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, '', verbose, of, ln, rg, jrec)
    if verbose >= 3 and of.startswith('h'):
        write_warning("Sorting (using BibSort cache) by method %s (definition %s)." \
                      % (cgi.escape(repr(sort_method)), cgi.escape(repr(sorting_methods[sort_method]))), req=req)
    #we should return sorted records up to irec_max (exclusive)
    dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
    solution = intbitset([])
    input_recids = intbitset(recIDs)
    cache_sorted_data[sort_method].recreate_cache_if_needed()
    sort_cache = cache_sorted_data[sort_method].cache
    bucket_numbers = sort_cache['bucket_data'].keys()
    #check if all buckets have been constructed
    if len(bucket_numbers) != CFG_BIBSORT_BUCKETS:
        if verbose > 3 and of.startswith('h'):
            write_warning("Not all buckets have been constructed.. switching to old-fashioned sorting.", req=req)
        if sort_or_rank == 'r':
            return rank_records_bibrank(sort_method, 0, recIDs, None, verbose)
        else:
            return sort_records_bibxxx(req, recIDs, None, sort_field, sort_order, '', verbose, of, ln, rg, jrec)

    if sort_order == 'd':
        bucket_numbers.reverse()
    for bucket_no in bucket_numbers:
        solution.union_update(input_recids & sort_cache['bucket_data'][bucket_no])
        if len(solution) >= irec_max:
            break

    dict_solution = {}
    missing_records = []
    for recid in solution:
        try:
            dict_solution[recid] = sort_cache['data_dict_ordered'][recid]
        except KeyError:
            #recid is in buckets, but not in the bsrMETHODDATA,
            #maybe because the value has been deleted, but the change has not yet been propagated to the buckets
            missing_records.append(recid)
    #check if there are recids that are not in any bucket -> to be added at the end/top, ordered by insertion date
    if len(solution) < irec_max:
        #some records have not yet been inserted in the bibsort structures,
        #or some records have no value for the sort_method
        missing_records = sorted(missing_records + list(input_recids.difference(solution)))
    #the records need to be sorted in reverse order for the print record function
    #the return statement should be equivalent with the following statements
    #(these are clearer, but less efficient, since they reverse the same list twice)
    #sorted_solution = (missing_records + sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='d'))[:irec_max]
    #sorted_solution.reverse()
    #return sorted_solution
    if sort_method.strip().lower().startswith('latest') and sort_order == 'd':
        # if we want to sort the records on their insertion date, add the missing records at the top
        solution = sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='a') + missing_records
    else:
        solution = missing_records + sorted(dict_solution, key=dict_solution.__getitem__, reverse=sort_order=='a')
    #calculate the min index on the reversed list
    index_min = max(len(solution) - irec_max, 0) #just to be sure that the min index is not negative
    #return all the records up to irec_max, but on the reversed list
    if sort_or_rank == 'r':
        # we need the recids, with values
        return (solution[index_min:], [dict_solution.get(record, 0) for record in solution[index_min:]])
    else:
        return solution[index_min:]


def sort_records_bibxxx(req, recIDs, tags, sort_field='', sort_order='d', sort_pattern='', verbose=0, of='hb', ln=CFG_SITE_LANG, rg=None, jrec=None):
    """OLD-FASHIONED SORTING WITH NO CACHE, for sort fields that are not run in BibSort.
       Sort records in 'recIDs' list according to sort field 'sort_field'
       in order 'sort_order'.

       If more than one instance of 'sort_field' is found for a given
       record, try to choose the one that is given by 'sort_pattern', for
       example "sort by report number that starts by CERN-PS".

       Note that 'sort_field' can be a field code like 'author' or a MARC
       tag like '100__a' directly."""

    _ = gettext_set_language(ln)

    #we should return sorted records up to irec_max (exclusive)
    dummy, irec_max = get_interval_for_records_to_sort(len(recIDs), jrec, rg)
    #calculate the min index on the reversed list
    index_min = max(len(recIDs) - irec_max, 0) #just to be sure that the min index is not negative

    ## check arguments:
    if not sort_field:
        return recIDs[index_min:]

    if len(recIDs) > CFG_WEBSEARCH_NB_RECORDS_TO_SORT:
        if of.startswith('h'):
            write_warning(_("Sorry, sorting is allowed on sets of up to %(x_name)d records only.
Using default sort order.", x_name=CFG_WEBSEARCH_NB_RECORDS_TO_SORT), "Warning", req=req) return recIDs[index_min:] recIDs_dict = {} recIDs_out = [] if not tags: # tags have not been camputed yet sort_fields = string.split(sort_field, ",") tags, error_field = get_tags_from_sort_fields(sort_fields) if error_field: if of.startswith('h'): write_warning(_("Sorry, %(x_name)s does not seem to be a valid sort option. The records will not be sorted.", x_name=cgi.escape(error_field)), "Error", req=req) return recIDs[index_min:] if verbose >= 3 and of.startswith('h'): write_warning("Sorting by tags %s." % cgi.escape(repr(tags)), req=req) if sort_pattern: write_warning("Sorting preferentially by %s." % cgi.escape(sort_pattern), req=req) ## check if we have sorting tag defined: if tags: # fetch the necessary field values: for recID in recIDs: val = "" # will hold value for recID according to which sort vals = [] # will hold all values found in sorting tag for recID for tag in tags: if CFG_CERN_SITE and tag == '773__c': # CERN hack: journal sorting # 773__c contains page numbers, e.g. 3-13, and we want to sort by 3, and numerically: vals.extend(["%050s" % x.split("-", 1)[0] for x in get_fieldvalues(recID, tag)]) else: vals.extend(get_fieldvalues(recID, tag)) if sort_pattern: # try to pick that tag value that corresponds to sort pattern bingo = 0 for v in vals: if v.lower().startswith(sort_pattern.lower()): # bingo! bingo = 1 val = v break if not bingo: # sort_pattern not present, so add other vals after spaces val = sort_pattern + " " + string.join(vals) else: # no sort pattern defined, so join them all together val = string.join(vals) val = strip_accents(val.lower()) # sort values regardless of accents and case if val in recIDs_dict: recIDs_dict[val].append(recID) else: recIDs_dict[val] = [recID] # sort them: recIDs_dict_keys = recIDs_dict.keys() recIDs_dict_keys.sort() # now that keys are sorted, create output array: for k in recIDs_dict_keys: for s in recIDs_dict[k]: recIDs_out.append(s) # ascending or descending? if sort_order == 'a': recIDs_out.reverse() # okay, we are done # return only up to the maximum that we need to sort if len(recIDs_out) != len(recIDs): dummy, irec_max = get_interval_for_records_to_sort(len(recIDs_out), jrec, rg) index_min = max(len(recIDs_out) - irec_max, 0) #just to be sure that the min index is not negative return recIDs_out[index_min:] else: # good, no sort needed return recIDs[index_min:] def get_interval_for_records_to_sort(nb_found, jrec=None, rg=None): """calculates in which interval should the sorted records be a value of 'rg=-9999' means to print all records: to be used with care.""" if not jrec: jrec = 1 if not rg: #return all return jrec-1, nb_found if rg == -9999: # print all records rg = nb_found else: rg = abs(rg) if jrec < 1: # sanity checks jrec = 1 if jrec > nb_found: jrec = max(nb_found-rg+1, 1) # will sort records from irec_min to irec_max excluded irec_min = jrec - 1 irec_max = irec_min + rg if irec_min < 0: irec_min = 0 if irec_max > nb_found: irec_max = nb_found return irec_min, irec_max def print_records(req, recIDs, jrec=1, rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, format='hb', ot='', ln=CFG_SITE_LANG, relevances=[], relevances_prologue="(", relevances_epilogue="%%)", decompress=zlib.decompress, search_pattern='', print_records_prologue_p=True, print_records_epilogue_p=True, verbose=0, tab='', sf='', so='d', sp='', rm='', em=''): """ Prints list of records 'recIDs' formatted according to 'format' in groups of 'rg' starting from 'jrec'. 
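    For example, with the default rg=10, jrec=11 asks for the second page of
    hits, i.e. records 11-20 of the full result list (illustrative values;
    the exact interval arithmetic lives in get_interval_for_records_to_sort
    above).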
Assumes that the input list 'recIDs' is sorted in reverse order, so it counts records from tail to head. A value of 'rg=-9999' means to print all records: to be used with care. Print also list of RELEVANCES for each record (if defined), in between RELEVANCE_PROLOGUE and RELEVANCE_EPILOGUE. Print prologue and/or epilogue specific to 'format' if 'print_records_prologue_p' and/or print_records_epilogue_p' are True. 'sf' is sort field and 'rm' is ranking method that are passed here only for proper linking purposes: e.g. when a certain ranking method or a certain sort field was selected, keep it selected in any dynamic search links that may be printed. """ if em != "" and EM_REPOSITORY["body"] not in em: return # load the right message language _ = gettext_set_language(ln) # sanity checking: if req is None: return # get user_info (for formatting based on user) if isinstance(req, cStringIO.OutputType): user_info = {} else: user_info = collect_user_info(req) if len(recIDs): nb_found = len(recIDs) if rg == -9999: # print all records rg = nb_found else: rg = abs(rg) if jrec < 1: # sanity checks jrec = 1 if jrec > nb_found: jrec = max(nb_found-rg+1, 1) # will print records from irec_max to irec_min excluded: irec_max = nb_found - jrec irec_min = nb_found - jrec - rg if irec_min < 0: irec_min = -1 if irec_max >= nb_found: irec_max = nb_found - 1 #req.write("%s:%d-%d" % (recIDs, irec_min, irec_max)) if format.startswith('x'): # print header if needed if print_records_prologue_p: print_records_prologue(req, format) # print records recIDs_to_print = [recIDs[x] for x in range(irec_max, irec_min, -1)] if ot: # asked to print some filtered fields only, so call print_record() on the fly: for irec in range(irec_max, irec_min, -1): x = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm) req.write(x) if x: req.write('\n') else: format_records(recIDs_to_print, format, ln=ln, search_pattern=search_pattern, record_separator="\n", user_info=user_info, req=req) # print footer if needed if print_records_epilogue_p: print_records_epilogue(req, format) elif format.startswith('t') or str(format[0:3]).isdigit(): # we are doing plain text output: for irec in range(irec_max, irec_min, -1): x = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm) req.write(x) if x: req.write('\n') elif format == 'excel': recIDs_to_print = [recIDs[x] for x in range(irec_max, irec_min, -1)] create_excel(recIDs=recIDs_to_print, req=req, ln=ln, ot=ot, user_info=user_info) else: # we are doing HTML output: if format == 'hp' or format.startswith("hb_") or format.startswith("hd_"): # portfolio and on-the-fly formats: for irec in range(irec_max, irec_min, -1): req.write(print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm)) elif format.startswith("hb"): # HTML brief format: display_add_to_basket = True if user_info: if user_info['email'] == 'guest': if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS > 4: display_add_to_basket = False else: if not user_info['precached_usebaskets']: display_add_to_basket = False if em != "" and EM_REPOSITORY["basket"] not in em: display_add_to_basket = False req.write(websearch_templates.tmpl_record_format_htmlbrief_header( ln = ln)) for irec in range(irec_max, irec_min, -1): row_number = jrec+irec_max-irec recid = recIDs[irec] if relevances and relevances[irec]: relevance 
= relevances[irec] else: relevance = '' record = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm) req.write(websearch_templates.tmpl_record_format_htmlbrief_body( ln = ln, recid = recid, row_number = row_number, relevance = relevance, record = record, relevances_prologue = relevances_prologue, relevances_epilogue = relevances_epilogue, display_add_to_basket = display_add_to_basket )) req.write(websearch_templates.tmpl_record_format_htmlbrief_footer( ln = ln, display_add_to_basket = display_add_to_basket)) elif format.startswith("hd"): # HTML detailed format: for irec in range(irec_max, irec_min, -1): if record_exists(recIDs[irec]) == -1: write_warning(_("The record has been deleted."), req=req) merged_recid = get_merged_recid(recIDs[irec]) if merged_recid: write_warning(_("The record %(x_rec)d replaces it.", x_rec=merged_recid), req=req) continue unordered_tabs = get_detailed_page_tabs(get_colID(guess_primary_collection_of_a_record(recIDs[irec])), recIDs[irec], ln=ln) ordered_tabs_id = [(tab_id, values['order']) for (tab_id, values) in iteritems(unordered_tabs)] ordered_tabs_id.sort(lambda x, y: cmp(x[1], y[1])) link_ln = '' if ln != CFG_SITE_LANG: link_ln = '?ln=%s' % ln recid = recIDs[irec] recid_to_display = recid # Record ID used to build the URL. if CFG_WEBSEARCH_USE_ALEPH_SYSNOS: try: recid_to_display = get_fieldvalues(recid, CFG_BIBUPLOAD_EXTERNAL_SYSNO_TAG)[0] except IndexError: # No external sysno is available, keep using # internal recid. pass tabs = [(unordered_tabs[tab_id]['label'], \ '%s/%s/%s/%s%s' % (CFG_SITE_URL, CFG_SITE_RECORD, recid_to_display, tab_id, link_ln), \ tab_id == tab, unordered_tabs[tab_id]['enabled']) \ for (tab_id, order) in ordered_tabs_id if unordered_tabs[tab_id]['visible'] == True] tabs_counts = get_detailed_page_tabs_counts(recid) citedbynum = tabs_counts['Citations'] references = tabs_counts['References'] discussions = tabs_counts['Discussions'] # load content if tab == 'usage': req.write(webstyle_templates.detailed_record_container_top(recIDs[irec], tabs, ln, citationnum=citedbynum, referencenum=references, discussionnum=discussions)) r = calculate_reading_similarity_list(recIDs[irec], "downloads") downloadsimilarity = None downloadhistory = None #if r: # downloadsimilarity = r if CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS: downloadhistory = create_download_history_graph_and_box(recIDs[irec], ln) r = calculate_reading_similarity_list(recIDs[irec], "pageviews") viewsimilarity = None if r: viewsimilarity = r content = websearch_templates.tmpl_detailed_record_statistics(recIDs[irec], ln, downloadsimilarity=downloadsimilarity, downloadhistory=downloadhistory, viewsimilarity=viewsimilarity) req.write(content) req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec], tabs, ln)) elif tab == 'citations': recid = recIDs[irec] req.write(webstyle_templates.detailed_record_container_top(recid, tabs, ln, citationnum=citedbynum, referencenum=references, discussionnum=discussions)) req.write(websearch_templates.tmpl_detailed_record_citations_prologue(recid, ln)) # Citing citinglist = calculate_cited_by_list(recid) req.write(websearch_templates.tmpl_detailed_record_citations_citing_list(recid, ln, citinglist, sf=sf, so=so, sp=sp, rm=rm)) # Self-cited selfcited = get_self_cited_by(recid) req.write(websearch_templates.tmpl_detailed_record_citations_self_cited(recid, ln, selfcited=selfcited, citinglist=citinglist)) # Co-cited s = calculate_co_cited_with_list(recid) 
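                # calculate_co_cited_with_list() presumably returns a list of
                # (recid, co-citation count) pairs, e.g. [(12, 5), (34, 2)];
                # the None-normalisation below lets the template omit the
                # "Co-cited with" box entirely when there are no results,
                # instead of rendering an empty section.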
cociting = None if s: cociting = s req.write(websearch_templates.tmpl_detailed_record_citations_co_citing(recid, ln, cociting=cociting)) # Citation history, if needed citationhistory = None if citinglist: citationhistory = create_citation_history_graph_and_box(recid, ln) #debug if verbose > 3: write_warning("Citation graph debug: " + \ str(len(citationhistory)), req=req) req.write(websearch_templates.tmpl_detailed_record_citations_citation_history(recid, ln, citationhistory)) req.write(websearch_templates.tmpl_detailed_record_citations_epilogue(recid, ln)) req.write(webstyle_templates.detailed_record_container_bottom(recid, tabs, ln)) elif tab == 'references': req.write(webstyle_templates.detailed_record_container_top(recIDs[irec], tabs, ln, citationnum=citedbynum, referencenum=references, discussionnum=discussions)) req.write(format_record(recIDs[irec], 'HDREF', ln=ln, user_info=user_info, verbose=verbose)) req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec], tabs, ln)) elif tab == 'keywords': from invenio.bibclassify_webinterface import \ record_get_keywords, write_keywords_body, \ generate_keywords from invenio.webinterface_handler import wash_urlargd form = req.form argd = wash_urlargd(form, { 'generate': (str, 'no'), 'sort': (str, 'occurrences'), 'type': (str, 'tagcloud'), 'numbering': (str, 'off'), }) recid = recIDs[irec] req.write(webstyle_templates.detailed_record_container_top(recid, tabs, ln)) content = websearch_templates.tmpl_record_plots(recID=recid, ln=ln) req.write(content) req.write(webstyle_templates.detailed_record_container_bottom(recid, tabs, ln)) req.write(webstyle_templates.detailed_record_container_top(recid, tabs, ln, citationnum=citedbynum, referencenum=references)) if argd['generate'] == 'yes': # The user asked to generate the keywords. keywords = generate_keywords(req, recid, argd) else: # Get the keywords contained in the MARC. keywords = record_get_keywords(recid, argd) if argd['sort'] == 'related' and not keywords: req.write('You may want to run BibIndex.') # Output the keywords or the generate button. 
write_keywords_body(keywords, req, recid, argd) req.write(webstyle_templates.detailed_record_container_bottom(recid, tabs, ln)) elif tab == 'plots': req.write(webstyle_templates.detailed_record_container_top(recIDs[irec], tabs, ln)) content = websearch_templates.tmpl_record_plots(recID=recIDs[irec], ln=ln) req.write(content) req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec], tabs, ln)) else: # Metadata tab req.write(webstyle_templates.detailed_record_container_top(recIDs[irec], tabs, ln, show_short_rec_p=False, citationnum=citedbynum, referencenum=references, discussionnum=discussions)) creationdate = None modificationdate = None if record_exists(recIDs[irec]) == 1: creationdate = get_creation_date(recIDs[irec]) modificationdate = get_modification_date(recIDs[irec]) content = print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm) content = websearch_templates.tmpl_detailed_record_metadata( recID = recIDs[irec], ln = ln, format = format, creationdate = creationdate, modificationdate = modificationdate, content = content) # display of the next-hit/previous-hit/back-to-search links # on the detailed record pages content += websearch_templates.tmpl_display_back_to_search(req, recIDs[irec], ln) req.write(content) req.write(webstyle_templates.detailed_record_container_bottom(recIDs[irec], tabs, ln, creationdate=creationdate, modificationdate=modificationdate, show_short_rec_p=False)) if len(tabs) > 0: # Add the mini box at bottom of the page if CFG_WEBCOMMENT_ALLOW_REVIEWS: from invenio.modules.comments.api import get_mini_reviews reviews = get_mini_reviews(recid = recIDs[irec], ln=ln) else: reviews = '' actions = format_record(recIDs[irec], 'HDACT', ln=ln, user_info=user_info, verbose=verbose) files = format_record(recIDs[irec], 'HDFILE', ln=ln, user_info=user_info, verbose=verbose) req.write(webstyle_templates.detailed_record_mini_panel(recIDs[irec], ln, format, files=files, reviews=reviews, actions=actions)) else: # Other formats for irec in range(irec_max, irec_min, -1): req.write(print_record(recIDs[irec], format, ot, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm)) else: write_warning(_("Use different search terms."), req=req) def print_records_prologue(req, format, cc=None): """ Print the appropriate prologue for list of records in the given format. """ prologue = "" # no prologue needed for HTML or Text formats if format.startswith('xm'): prologue = websearch_templates.tmpl_xml_marc_prologue() elif format.startswith('xn'): prologue = websearch_templates.tmpl_xml_nlm_prologue() elif format.startswith('xw'): prologue = websearch_templates.tmpl_xml_refworks_prologue() elif format.startswith('xr'): prologue = websearch_templates.tmpl_xml_rss_prologue(cc=cc) elif format.startswith('xe8x'): prologue = websearch_templates.tmpl_xml_endnote_8x_prologue() elif format.startswith('xe'): prologue = websearch_templates.tmpl_xml_endnote_prologue() elif format.startswith('xo'): prologue = websearch_templates.tmpl_xml_mods_prologue() elif format.startswith('xp'): prologue = websearch_templates.tmpl_xml_podcast_prologue(cc=cc) elif format.startswith('x'): prologue = websearch_templates.tmpl_xml_default_prologue() req.write(prologue) def print_records_epilogue(req, format): """ Print the appropriate epilogue for list of records in the given format. 
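    As in print_records_prologue, the format prefix is matched
    most-specific-first ('xe8x' is tested before 'xe', and the generic 'x'
    catch-all comes last), so a new XML output format only needs one extra
    branch in each of the two functions.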
""" epilogue = "" # no epilogue needed for HTML or Text formats if format.startswith('xm'): epilogue = websearch_templates.tmpl_xml_marc_epilogue() elif format.startswith('xn'): epilogue = websearch_templates.tmpl_xml_nlm_epilogue() elif format.startswith('xw'): epilogue = websearch_templates.tmpl_xml_refworks_epilogue() elif format.startswith('xr'): epilogue = websearch_templates.tmpl_xml_rss_epilogue() elif format.startswith('xe8x'): epilogue = websearch_templates.tmpl_xml_endnote_8x_epilogue() elif format.startswith('xe'): epilogue = websearch_templates.tmpl_xml_endnote_epilogue() elif format.startswith('xo'): epilogue = websearch_templates.tmpl_xml_mods_epilogue() elif format.startswith('xp'): epilogue = websearch_templates.tmpl_xml_podcast_epilogue() elif format.startswith('x'): epilogue = websearch_templates.tmpl_xml_default_epilogue() req.write(epilogue) def get_record(recid): """Directly the record object corresponding to the recid.""" if CFG_BIBUPLOAD_SERIALIZE_RECORD_STRUCTURE: value = run_sql("SELECT value FROM bibfmt WHERE id_bibrec=%s AND FORMAT='recstruct'", (recid, )) if value: try: return deserialize_via_marshal(value[0][0]) except: ### In case of corruption, let's rebuild it! pass return create_record(print_record(recid, 'xm'))[0] def print_record(recID, format='hb', ot='', ln=CFG_SITE_LANG, decompress=zlib.decompress, search_pattern=None, user_info=None, verbose=0, sf='', so='d', sp='', rm='', brief_links=True): """ Prints record 'recID' formatted according to 'format'. 'sf' is sort field and 'rm' is ranking method that are passed here only for proper linking purposes: e.g. when a certain ranking method or a certain sort field was selected, keep it selected in any dynamic search links that may be printed. """ if format == 'recstruct': return get_record(recID) _ = gettext_set_language(ln) display_claim_this_paper = False try: display_claim_this_paper = user_info["precached_viewclaimlink"] except (KeyError, TypeError): display_claim_this_paper = False #check from user information if the user has the right to see hidden fields/tags in the #records as well can_see_hidden = False if user_info: can_see_hidden = user_info.get('precached_canseehiddenmarctags', False) out = "" # sanity check: record_exist_p = record_exists(recID) if record_exist_p == 0: # doesn't exist return out # New Python BibFormat procedure for formatting # Old procedure follows further below # We must still check some special formats, but these # should disappear when BibFormat improves. if not (CFG_BIBFORMAT_USE_OLD_BIBFORMAT \ or format.lower().startswith('t') \ or format.lower().startswith('hm') \ or str(format[0:3]).isdigit() \ or ot): # Unspecified format is hd if format == '': format = 'hd' if record_exist_p == -1 and get_output_format_content_type(format) == 'text/html': # HTML output displays a default value for deleted records. # Other format have to deal with it. out += _("The record has been deleted.") # was record deleted-but-merged ? 
merged_recid = get_merged_recid(recID) if merged_recid: out += ' ' + _("The record %(x_rec)d replaces it.", x_rec=merged_recid) else: out += call_bibformat(recID, format, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose) # at the end of HTML brief mode, print the "Detailed record" functionality: if brief_links and format.lower().startswith('hb') and \ format.lower() != 'hb_p': out += websearch_templates.tmpl_print_record_brief_links(ln=ln, recID=recID, sf=sf, so=so, sp=sp, rm=rm, display_claim_link=display_claim_this_paper) return out # Old PHP BibFormat procedure for formatting # print record opening tags, if needed: if format == "marcxml" or format == "oai_dc": out += " <record>\n" out += " <header>\n" for oai_id in get_fieldvalues(recID, CFG_OAI_ID_FIELD): out += " <identifier>%s</identifier>\n" % oai_id out += " <datestamp>%s</datestamp>\n" % get_modification_date(recID) out += " </header>\n" out += " <metadata>\n" if format.startswith("xm") or format == "marcxml": # look for detailed format existence: query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s" res = run_sql(query, (recID, format), 1) if res and record_exist_p == 1 and not ot: # record 'recID' is formatted in 'format', and we are not # asking for field-filtered output; so print it: out += "%s" % decompress(res[0][0]) elif ot: # field-filtered output was asked for; print only some fields if not can_see_hidden: ot = list(set(ot) - set(cfg['CFG_BIBFORMAT_HIDDEN_TAGS'])) out += record_xml_output(get_record(recID), ot) else: # record 'recID' is not formatted in 'format' or we ask # for field-filtered output -- they are not in "bibfmt" # table; so fetch all the data from "bibXXx" tables: if format == "marcxml": out += """ <record xmlns="http://www.loc.gov/MARC21/slim">\n""" out += " <controlfield tag=\"001\">%d</controlfield>\n" % int(recID) elif format.startswith("xm"): out += """ <record>\n""" out += " <controlfield tag=\"001\">%d</controlfield>\n" % int(recID) if record_exist_p == -1: # deleted record, so display only OAI ID and 980: oai_ids = get_fieldvalues(recID, CFG_OAI_ID_FIELD) if oai_ids: out += "<datafield tag=\"%s\" ind1=\"%s\" ind2=\"%s\"><subfield code=\"%s\">%s</subfield></datafield>\n" % \ (CFG_OAI_ID_FIELD[0:3], CFG_OAI_ID_FIELD[3:4], CFG_OAI_ID_FIELD[4:5], CFG_OAI_ID_FIELD[5:6], oai_ids[0]) out += "<datafield tag=\"980\" ind1=\"\" ind2=\"\"><subfield code=\"c\">DELETED</subfield></datafield>\n" else: # controlfields query = "SELECT b.tag,b.value,bb.field_number FROM bib00x AS b, bibrec_bib00x AS bb "\ "WHERE bb.id_bibrec=%s AND b.id=bb.id_bibxxx AND b.tag LIKE '00%%' "\ "ORDER BY bb.field_number, b.tag ASC" res = run_sql(query, (recID, )) for row in res: field, value = row[0], row[1] value = encode_for_xml(value) out += """ <controlfield tag="%s">%s</controlfield>\n""" % \ (encode_for_xml(field[0:3]), value) # datafields i = 1 # Do not process bib00x and bibrec_bib00x, as # they are controlfields. 
So start at bib01x and # bibrec_bib00x (and set i = 0 at the end of # first loop) for digit1 in range(0, 10): for digit2 in range(i, 10): bx = "bib%d%dx" % (digit1, digit2) bibx = "bibrec_bib%d%dx" % (digit1, digit2) query = "SELECT b.tag,b.value,bb.field_number FROM %s AS b, %s AS bb "\ "WHERE bb.id_bibrec=%%s AND b.id=bb.id_bibxxx AND b.tag LIKE %%s"\ "ORDER BY bb.field_number, b.tag ASC" % (bx, bibx) res = run_sql(query, (recID, str(digit1)+str(digit2)+'%')) field_number_old = -999 field_old = "" for row in res: field, value, field_number = row[0], row[1], row[2] ind1, ind2 = field[3], field[4] if ind1 == "_" or ind1 == "": ind1 = " " if ind2 == "_" or ind2 == "": ind2 = " " # print field tag, unless hidden printme = True if not can_see_hidden: for htag in cfg['CFG_BIBFORMAT_HIDDEN_TAGS']: ltag = len(htag) samelenfield = field[0:ltag] if samelenfield == htag: printme = False if printme: if field_number != field_number_old or field[:-1] != field_old[:-1]: if field_number_old != -999: out += """ </datafield>\n""" out += """ <datafield tag="%s" ind1="%s" ind2="%s">\n""" % \ (encode_for_xml(field[0:3]), encode_for_xml(ind1), encode_for_xml(ind2)) field_number_old = field_number field_old = field # print subfield value value = encode_for_xml(value) out += """ <subfield code="%s">%s</subfield>\n""" % \ (encode_for_xml(field[-1:]), value) # all fields/subfields printed in this run, so close the tag: if field_number_old != -999: out += """ </datafield>\n""" i = 0 # Next loop should start looking at bib%0 and bibrec_bib00x # we are at the end of printing the record: out += " </record>\n" elif format == "xd" or format == "oai_dc": # XML Dublin Core format, possibly OAI -- select only some bibXXx fields: out += """ <dc xmlns="http://purl.org/dc/elements/1.1/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://purl.org/dc/elements/1.1/ http://www.openarchives.org/OAI/1.1/dc.xsd">\n""" if record_exist_p == -1: out += "" else: for f in get_fieldvalues(recID, "041__a"): out += " <language>%s</language>\n" % f for f in get_fieldvalues(recID, "100__a"): out += " <creator>%s</creator>\n" % encode_for_xml(f) for f in get_fieldvalues(recID, "700__a"): out += " <creator>%s</creator>\n" % encode_for_xml(f) for f in get_fieldvalues(recID, "245__a"): out += " <title>%s\n" % encode_for_xml(f) for f in get_fieldvalues(recID, "65017a"): out += " %s\n" % encode_for_xml(f) for f in get_fieldvalues(recID, "8564_u"): if f.split('.') == 'png': continue out += " %s\n" % encode_for_xml(f) for f in get_fieldvalues(recID, "520__a"): out += " %s\n" % encode_for_xml(f) out += " %s\n" % get_creation_date(recID) out += " \n" elif len(format) == 6 and str(format[0:3]).isdigit(): # user has asked to print some fields only if format == "001": out += "%s\n" % (format, recID, format) else: vals = get_fieldvalues(recID, format) for val in vals: out += "%s\n" % (format, val, format) elif format.startswith('t'): ## user directly asked for some tags to be displayed only if record_exist_p == -1: out += get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden) else: out += get_fieldvalues_alephseq_like(recID, ot, can_see_hidden) elif format == "hm": if record_exist_p == -1: - out += "\n
" + cgi.escape(get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden)) + "
" + out += "\n
" + cgi.escape(get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden)) + "
" else: - out += "\n
" + cgi.escape(get_fieldvalues_alephseq_like(recID, ot, can_see_hidden)) + "
" + out += "\n
" + cgi.escape(get_fieldvalues_alephseq_like(recID, ot, can_see_hidden)) + "
" elif format.startswith("h") and ot: ## user directly asked for some tags to be displayed only if record_exist_p == -1: out += "\n
" + get_fieldvalues_alephseq_like(recID, ["001", CFG_OAI_ID_FIELD, "980"], can_see_hidden) + "
" else: out += "\n
" + get_fieldvalues_alephseq_like(recID, ot, can_see_hidden) + "
" elif format == "hd": # HTML detailed format if record_exist_p == -1: out += _("The record has been deleted.") else: # look for detailed format existence: query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s" res = run_sql(query, (recID, format), 1) if res: # record 'recID' is formatted in 'format', so print it out += "%s" % decompress(res[0][0]) else: # record 'recID' is not formatted in 'format', so try to call BibFormat on the fly or use default format: out_record_in_format = call_bibformat(recID, format, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose) if out_record_in_format: out += out_record_in_format else: out += websearch_templates.tmpl_print_record_detailed( ln = ln, recID = recID, ) elif format.startswith("hb_") or format.startswith("hd_"): # underscore means that HTML brief/detailed formats should be called on-the-fly; suitable for testing formats if record_exist_p == -1: out += _("The record has been deleted.") else: out += call_bibformat(recID, format, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose) elif format.startswith("hx"): # BibTeX format, called on the fly: if record_exist_p == -1: out += _("The record has been deleted.") else: out += call_bibformat(recID, format, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose) elif format.startswith("hs"): # for citation/download similarity navigation links: if record_exist_p == -1: out += _("The record has been deleted.") else: out += '' % websearch_templates.build_search_url(recid=recID, ln=ln) # firstly, title: titles = get_fieldvalues(recID, "245__a") if titles: for title in titles: out += "%s" % title else: # usual title not found, try conference title: titles = get_fieldvalues(recID, "111__a") if titles: for title in titles: out += "%s" % title else: # just print record ID: out += "%s %d" % (get_field_i18nname("record ID", ln, False), recID) out += "" # secondly, authors: authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a") if authors: out += " - %s" % authors[0] if len(authors) > 1: out += " et al" # thirdly publication info: publinfos = get_fieldvalues(recID, "773__s") if not publinfos: publinfos = get_fieldvalues(recID, "909C4s") if not publinfos: publinfos = get_fieldvalues(recID, "037__a") if not publinfos: publinfos = get_fieldvalues(recID, "088__a") if publinfos: out += " - %s" % publinfos[0] else: # fourthly publication year (if not publication info): years = get_fieldvalues(recID, "773__y") if not years: years = get_fieldvalues(recID, "909C4y") if not years: years = get_fieldvalues(recID, "260__c") if years: out += " (%s)" % years[0] else: # HTML brief format by default if record_exist_p == -1: out += _("The record has been deleted.") else: query = "SELECT value FROM bibfmt WHERE id_bibrec=%s AND format=%s" res = run_sql(query, (recID, format)) if res: # record 'recID' is formatted in 'format', so print it out += "%s" % decompress(res[0][0]) else: # record 'recID' is not formatted in 'format', so try to call BibFormat on the fly: or use default format: if CFG_WEBSEARCH_CALL_BIBFORMAT: out_record_in_format = call_bibformat(recID, format, ln, search_pattern=search_pattern, user_info=user_info, verbose=verbose) if out_record_in_format: out += out_record_in_format else: out += websearch_templates.tmpl_print_record_brief( ln = ln, recID = recID, ) else: out += websearch_templates.tmpl_print_record_brief( ln = ln, recID = recID, ) # at the end of HTML brief mode, print the "Detailed record" functionality: if 
format == 'hp' or format.startswith("hb_") or format.startswith("hd_"): pass # do nothing for portfolio and on-the-fly formats else: out += websearch_templates.tmpl_print_record_brief_links(ln=ln, recID=recID, sf=sf, so=so, sp=sp, rm=rm, display_claim_link=display_claim_this_paper) # print record closing tags, if needed: if format == "marcxml" or format == "oai_dc": out += " \n" out += " \n" return out def call_bibformat(recID, format="HD", ln=CFG_SITE_LANG, search_pattern=None, user_info=None, verbose=0): """ Calls BibFormat and returns formatted record. BibFormat will decide by itself if old or new BibFormat must be used. """ from invenio.modules.formatter.utils import get_pdf_snippets keywords = [] if search_pattern is not None: for unit in create_basic_search_units(None, str(search_pattern), None): bsu_o, bsu_p, bsu_f, bsu_m = unit[0], unit[1], unit[2], unit[3] if (bsu_o != '-' and bsu_f in [None, 'fulltext']): if bsu_m == 'a' and bsu_p.startswith('%') and bsu_p.endswith('%'): # remove leading and training `%' representing partial phrase search keywords.append(bsu_p[1:-1]) else: keywords.append(bsu_p) out = format_record(recID, of=format, ln=ln, search_pattern=keywords, user_info=user_info, verbose=verbose) if CFG_WEBSEARCH_FULLTEXT_SNIPPETS and user_info and \ 'fulltext' in user_info['uri'].lower(): # check snippets only if URL contains fulltext # FIXME: make it work for CLI too, via new function arg if keywords: snippets = '' try: snippets = get_pdf_snippets(recID, keywords, user_info) except: register_exception() if snippets: out += snippets return out def log_query(hostname, query_args, uid=-1): """ Log query into the query and user_query tables. Return id_query or None in case of problems. """ id_query = None if uid >= 0: # log the query only if uid is reasonable res = run_sql("SELECT id FROM query WHERE urlargs=%s", (query_args,), 1) try: id_query = res[0][0] except: id_query = run_sql("INSERT INTO query (type, urlargs) VALUES ('r', %s)", (query_args,)) if id_query: run_sql("INSERT INTO user_query (id_user, id_query, hostname, date) VALUES (%s, %s, %s, %s)", (uid, id_query, hostname, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))) return id_query def log_query_info(action, p, f, colls, nb_records_found_total=-1): """Write some info to the log file for later analysis.""" try: log = open(CFG_LOGDIR + "/search.log", "a") log.write(time.strftime("%Y%m%d%H%M%S#", time.localtime())) log.write(action+"#") log.write(p+"#") log.write(f+"#") for coll in colls[:-1]: log.write("%s," % coll) log.write("%s#" % colls[-1]) log.write("%d" % nb_records_found_total) log.write("\n") log.close() except: pass return def clean_dictionary(dictionary, list_of_items): """Returns a copy of the dictionary with all the items in the list_of_items as empty strings""" out_dictionary = dictionary.copy() out_dictionary.update((item, '') for item in list_of_items) return out_dictionary ### CALLABLES def perform_request_search(req=None, cc=CFG_SITE_NAME, c=None, p="", f="", rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, sf="", so="d", sp="", rm="", of="id", ot="", aas=0, p1="", f1="", m1="", op1="", p2="", f2="", m2="", op2="", p3="", f3="", m3="", sc=0, jrec=0, recid=-1, recidb=-1, sysno="", id=-1, idb=-1, sysnb="", action="", d1="", d1y=0, d1m=0, d1d=0, d2="", d2y=0, d2m=0, d2d=0, dt="", verbose=0, ap=0, ln=CFG_SITE_LANG, ec=None, tab="", wl=0, em=""): """Perform search or browse request, without checking for authentication. Return list of recIDs found, if of=id. Otherwise create web page. 
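    A minimal call from Python code, with illustrative values (of='id'
    skips all page rendering and simply returns the matching record IDs):

        >>> perform_request_search(p='ellis', f='author', of='id')  # doctest: +SKIP
        [8, 11, 13]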
The arguments are as follows: req - mod_python Request class instance. cc - current collection (e.g. "ATLAS"). The collection the user started to search/browse from. c - collection list (e.g. ["Theses", "Books"]). The collections user may have selected/deselected when starting to search from 'cc'. p - pattern to search for (e.g. "ellis and muon or kaon"). f - field to search within (e.g. "author"). rg - records in groups of (e.g. "10"). Defines how many hits per collection in the search results page are displayed. (Note that `rg' is ignored in case of `of=id'.) sf - sort field (e.g. "title"). so - sort order ("a"=ascending, "d"=descending). sp - sort pattern (e.g. "CERN-") -- in case there are more values in a sort field, this argument tells which one to prefer rm - ranking method (e.g. "jif"). Defines whether results should be ranked by some known ranking method. of - output format (e.g. "hb"). Usually starting "h" means HTML output (and "hb" for HTML brief, "hd" for HTML detailed), "x" means XML output, "t" means plain text output, "id" means no output at all but to return list of recIDs found, "intbitset" means to return an intbitset representation of the recIDs found (no sorting or ranking will be performed). (Suitable for high-level API.) ot - output only these MARC tags (e.g. "100,700,909C0b"). Useful if only some fields are to be shown in the output, e.g. for library to control some fields. em - output only part of the page. aas - advanced search ("0" means no, "1" means yes). Whether search was called from within the advanced search interface. p1 - first pattern to search for in the advanced search interface. Much like 'p'. f1 - first field to search within in the advanced search interface. Much like 'f'. m1 - first matching type in the advanced search interface. ("a" all of the words, "o" any of the words, "e" exact phrase, "p" partial phrase, "r" regular expression). op1 - first operator, to join the first and the second unit in the advanced search interface. ("a" add, "o" or, "n" not). p2 - second pattern to search for in the advanced search interface. Much like 'p'. f2 - second field to search within in the advanced search interface. Much like 'f'. m2 - second matching type in the advanced search interface. ("a" all of the words, "o" any of the words, "e" exact phrase, "p" partial phrase, "r" regular expression). op2 - second operator, to join the second and the third unit in the advanced search interface. ("a" add, "o" or, "n" not). p3 - third pattern to search for in the advanced search interface. Much like 'p'. f3 - third field to search within in the advanced search interface. Much like 'f'. m3 - third matching type in the advanced search interface. ("a" all of the words, "o" any of the words, "e" exact phrase, "p" partial phrase, "r" regular expression). sc - split by collection ("0" no, "1" yes). Governs whether we want to present the results in a single huge list, or splitted by collection. jrec - jump to record (e.g. "234"). Used for navigation inside the search results. (Note that `jrec' is ignored in case of `of=id'.) recid - display record ID (e.g. "20000"). Do not search/browse but go straight away to the Detailed record page for the given recID. recidb - display record ID bis (e.g. "20010"). If greater than 'recid', then display records from recid to recidb. Useful for example for dumping records from the database for reformatting. sysno - display old system SYS number (e.g. ""). 
If you migrate to Invenio from another system, and store your old SYS call numbers, you can use them instead of recid if you wish so. id - the same as recid, in case recid is not set. For backwards compatibility. idb - the same as recid, in case recidb is not set. For backwards compatibility. sysnb - the same as sysno, in case sysno is not set. For backwards compatibility. action - action to do. "SEARCH" for searching, "Browse" for browsing. Default is to search. d1 - first datetime in full YYYY-mm-dd HH:MM:DD format (e.g. "1998-08-23 12:34:56"). Useful for search limits on creation/modification date (see 'dt' argument below). Note that 'd1' takes precedence over d1y, d1m, d1d if these are defined. d1y - first date's year (e.g. "1998"). Useful for search limits on creation/modification date. d1m - first date's month (e.g. "08"). Useful for search limits on creation/modification date. d1d - first date's day (e.g. "23"). Useful for search limits on creation/modification date. d2 - second datetime in full YYYY-mm-dd HH:MM:DD format (e.g. "1998-09-02 12:34:56"). Useful for search limits on creation/modification date (see 'dt' argument below). Note that 'd2' takes precedence over d2y, d2m, d2d if these are defined. d2y - second date's year (e.g. "1998"). Useful for search limits on creation/modification date. d2m - second date's month (e.g. "09"). Useful for search limits on creation/modification date. d2d - second date's day (e.g. "02"). Useful for search limits on creation/modification date. dt - first and second date's type (e.g. "c"). Specifies whether to search in creation dates ("c") or in modification dates ("m"). When dt is not set and d1* and d2* are set, the default is "c". verbose - verbose level (0=min, 9=max). Useful to print some internal information on the searching process in case something goes wrong. ap - alternative patterns (0=no, 1=yes). In case no exact match is found, the search engine can try alternative patterns e.g. to replace non-alphanumeric characters by a boolean query. ap defines if this is wanted. ln - language of the search interface (e.g. "en"). Useful for internationalization. ec - list of external search engines to search as well (e.g. "SPIRES HEP"). wl - wildcard limit (ex: 100) the wildcard queries will be limited at 100 results """ kwargs = prs_wash_arguments(req=req, cc=cc, c=c, p=p, f=f, rg=rg, sf=sf, so=so, sp=sp, rm=rm, of=of, ot=ot, aas=aas, p1=p1, f1=f1, m1=m1, op1=op1, p2=p2, f2=f2, m2=m2, op2=op2, p3=p3, f3=f3, m3=m3, sc=sc, jrec=jrec, recid=recid, recidb=recidb, sysno=sysno, id=id, idb=idb, sysnb=sysnb, action=action, d1=d1, d1y=d1y, d1m=d1m, d1d=d1d, d2=d2, d2y=d2y, d2m=d2m, d2d=d2d, dt=dt, verbose=verbose, ap=ap, ln=ln, ec=ec, tab=tab, wl=wl, em=em) return prs_perform_search(kwargs=kwargs, **kwargs) def prs_perform_search(kwargs=None, **dummy): """Internal call which does the search, it is calling standard Invenio; Unless you know what you are doing, don't use this call as an API """ # separately because we can call it independently out = prs_wash_arguments_colls(kwargs=kwargs, **kwargs) if not out: return out return prs_search(kwargs=kwargs, **kwargs) def prs_wash_arguments_colls(kwargs=None, of=None, req=None, cc=None, c=None, sc=None, verbose=None, aas=None, ln=None, em="", **dummy): """ Check and wash collection list argument before we start searching. If there are troubles, e.g. a collection is not defined, print warning to the browser. 
@return: True if collection list is OK, and various False values (empty string, empty list) if there was an error. """ # raise an exception when trying to print out html from the cli if of.startswith("h"): assert req # for every search engine request asking for an HTML output, we # first regenerate cache of collection and field I18N names if # needed; so that later we won't bother checking timestamps for # I18N names at all: if of.startswith("h"): collection_i18nname_cache.recreate_cache_if_needed() field_i18nname_cache.recreate_cache_if_needed() try: (cc, colls_to_display, colls_to_search, hosted_colls, wash_colls_debug) = wash_colls(cc, c, sc, verbose) # which colls to search and to display? kwargs['colls_to_display'] = colls_to_display kwargs['colls_to_search'] = colls_to_search kwargs['hosted_colls'] = hosted_colls kwargs['wash_colls_debug'] = wash_colls_debug except InvenioWebSearchUnknownCollectionError as exc: colname = exc.colname if of.startswith("h"): page_start(req, of, cc, aas, ln, getUid(req), websearch_templates.tmpl_collection_not_found_page_title(colname, ln)) req.write(websearch_templates.tmpl_collection_not_found_page_body(colname, ln)) page_end(req, of, ln, em) return '' elif of == "id": return [] elif of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) page_end(req, of, ln, em) return '' else: page_end(req, of, ln, em) return '' return True def prs_wash_arguments(req=None, cc=CFG_SITE_NAME, c=None, p="", f="", rg=CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, sf="", so="d", sp="", rm="", of="id", ot="", aas=0, p1="", f1="", m1="", op1="", p2="", f2="", m2="", op2="", p3="", f3="", m3="", sc=0, jrec=0, recid=-1, recidb=-1, sysno="", id=-1, idb=-1, sysnb="", action="", d1="", d1y=0, d1m=0, d1d=0, d2="", d2y=0, d2m=0, d2d=0, dt="", verbose=0, ap=0, ln=CFG_SITE_LANG, ec=None, tab="", uid=None, wl=0, em="", **dummy): """ Sets the (default) values and checks others for the PRS call """ # wash output format: of = wash_output_format(of) # wash all arguments requiring special care p = wash_pattern(p) f = wash_field(f) p1 = wash_pattern(p1) f1 = wash_field(f1) p2 = wash_pattern(p2) f2 = wash_field(f2) p3 = wash_pattern(p3) f3 = wash_field(f3) (d1y, d1m, d1d, d2y, d2m, d2d) = map(int, (d1y, d1m, d1d, d2y, d2m, d2d)) datetext1, datetext2 = wash_dates(d1, d1y, d1m, d1d, d2, d2y, d2m, d2d) # wash ranking method: if not is_method_valid(None, rm): rm = "" # backwards compatibility: id, idb, sysnb -> recid, recidb, sysno (if applicable) if sysnb != "" and sysno == "": sysno = sysnb if id > 0 and recid == -1: recid = id if idb > 0 and recidb == -1: recidb = idb # TODO deduce passed search limiting criterias (if applicable) pl, pl_in_url = "", "" # no limits by default if action != "browse" and req and not isinstance(req, cStringIO.OutputType) \ and req.args and not isinstance(req.args, dict): # we do not want to add options while browsing or while calling via command-line fieldargs = cgi.parse_qs(req.args) for fieldcode in get_fieldcodes(): if fieldcode in fieldargs: for val in fieldargs[fieldcode]: pl += "+%s:\"%s\" " % (fieldcode, val) pl_in_url += "&%s=%s" % (urllib.quote(fieldcode), urllib.quote(val)) # deduce recid from sysno argument (if applicable): if sysno: # ALEPH SYS number was passed, so deduce DB recID for the record: recid = get_mysql_recid_from_aleph_sysno(sysno) if recid is None: recid = 0 # use recid 0 to indicate that this sysno does not exist # deduce collection we are in (if 
applicable): if recid > 0: referer = None if req: referer = req.headers_in.get('Referer') cc = guess_collection_of_a_record(recid, referer) # deduce user id (if applicable): if uid is None: try: uid = getUid(req) except: uid = 0 _ = gettext_set_language(ln) kwargs = {'req':req,'cc':cc, 'c':c, 'p':p, 'f':f, 'rg':rg, 'sf':sf, 'so':so, 'sp':sp, 'rm':rm, 'of':of, 'ot':ot, 'aas':aas, 'p1':p1, 'f1':f1, 'm1':m1, 'op1':op1, 'p2':p2, 'f2':f2, 'm2':m2, 'op2':op2, 'p3':p3, 'f3':f3, 'm3':m3, 'sc':sc, 'jrec':jrec, 'recid':recid, 'recidb':recidb, 'sysno':sysno, 'id':id, 'idb':idb, 'sysnb':sysnb, 'action':action, 'd1':d1, 'd1y':d1y, 'd1m':d1m, 'd1d':d1d, 'd2':d2, 'd2y':d2y, 'd2m':d2m, 'd2d':d2d, 'dt':dt, 'verbose':verbose, 'ap':ap, 'ln':ln, 'ec':ec, 'tab':tab, 'wl':wl, 'em': em, 'datetext1': datetext1, 'datetext2': datetext2, 'uid': uid, 'cc':cc, 'pl': pl, 'pl_in_url': pl_in_url, '_': _, 'selected_external_collections_infos':None, } kwargs.update(**dummy) return kwargs def prs_search(kwargs=None, recid=0, req=None, cc=None, p=None, p1=None, p2=None, p3=None, f=None, ec=None, verbose=None, ln=None, selected_external_collections_infos=None, action=None,rm=None, of=None, em=None, **dummy): """ This function write various bits into the req object as the search proceeds (so that pieces of a page are rendered even before the search ended) """ ## 0 - start output if recid >= 0: # recid can be 0 if deduced from sysno and if such sysno does not exist output = prs_detailed_record(kwargs=kwargs, **kwargs) if output is not None: return output elif action == "browse": ## 2 - browse needed of = 'hb' output = prs_browse(kwargs=kwargs, **kwargs) if output is not None: return output elif rm and p.startswith("recid:"): ## 3-ter - similarity search (or old-style citation search) needed output = prs_search_similar_records(kwargs=kwargs, **kwargs) if output is not None: return output elif p.startswith("cocitedwith:"): #WAS EXPERIMENTAL ## 3-terter - cited by search needed output = prs_search_cocitedwith(kwargs=kwargs, **kwargs) if output is not None: return output else: ## 3 - common search needed output = prs_search_common(kwargs=kwargs, **kwargs) if output is not None: return output # External searches if of.startswith("h"): if not of in ['hcs', 'hcs2']: perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, selected_external_collections_infos, em=em) return page_end(req, of, ln, em) def prs_detailed_record(kwargs=None, req=None, of=None, cc=None, aas=None, ln=None, uid=None, recid=None, recidb=None, p=None, verbose=None, tab=None, sf=None, so=None, sp=None, rm=None, ot=None, _=None, em=None, **dummy): """Formats and prints one record""" ## 1 - detailed record display title, description, keywords = \ websearch_templates.tmpl_record_page_header_content(req, recid, ln) if req is not None and req.method != 'HEAD': page_start(req, of, cc, aas, ln, uid, title, description, keywords, recid, tab, em) # Default format is hb but we are in detailed -> change 'of' if of == "hb": of = "hd" if record_exists(recid): if recidb <= recid: # sanity check recidb = recid + 1 if of in ["id", "intbitset"]: result = [recidx for recidx in range(recid, recidb) if record_exists(recidx)] if of == "intbitset": return intbitset(result) else: return result else: print_records(req, range(recid, recidb), -1, -9999, of, ot, ln, search_pattern=p, verbose=verbose, tab=tab, sf=sf, so=so, sp=sp, rm=rm, em=em) if req and of.startswith("h"): # register detailed record page view event client_ip_address = str(req.remote_ip) 
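            # Log who viewed which detailed record page and from where; these
            # page-view events presumably feed the usage statistics (page-view
            # history, "most viewed") displayed in the record's usage tab.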
register_page_view_event(recid, uid, client_ip_address) else: # record does not exist if of == "id": return [] elif of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) elif of.startswith("h"): if req.method == 'HEAD': raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND else: write_warning(_("Requested record does not seem to exist."), req=req) def prs_browse(kwargs=None, req=None, of=None, cc=None, aas=None, ln=None, uid=None, _=None, p=None, p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None, so=None, sp=None, rm=None, ot=None, f1=None, m1=None, op1=None, f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None, d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, ec=None, action=None, colls_to_search=None, verbose=None, em=None, **dummy): page_start(req, of, cc, aas, ln, uid, _("Browse"), p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em) req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action, em )) write_warning(create_exact_author_browse_help_link(p, p1, p2, p3, f, f1, f2, f3, rm, cc, ln, jrec, rg, aas, action), req=req) try: if aas == 1 or (p1 or p2 or p3): browse_pattern(req, colls_to_search, p1, f1, rg, ln) browse_pattern(req, colls_to_search, p2, f2, rg, ln) browse_pattern(req, colls_to_search, p3, f3, rg, ln) else: browse_pattern(req, colls_to_search, p, f, rg, ln) except: register_exception(req=req, alert_admin=True) if of.startswith("h"): req.write(create_error_box(req, verbose=verbose, ln=ln)) elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) return page_end(req, of, ln, em) def prs_search_similar_records(kwargs=None, req=None, of=None, cc=None, pl_in_url=None, ln=None, uid=None, _=None, p=None, p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None, so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None, f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None, d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, ec=None, action=None, em=None, verbose=None, **dummy): if req and req.method != 'HEAD': page_start(req, of, cc, aas, ln, uid, _("Search Results"), p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em) if of.startswith("h"): req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action, em )) if record_exists(p[6:]) != 1: # record does not exist if of.startswith("h"): if req.method == 'HEAD': raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND else: write_warning(_("Requested record does not seem to exist."), req=req) if of == "id": return [] if of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) else: # record well exists, so find similar ones to it t1 = os.times()[4] results_similar_recIDs, results_similar_relevances, results_similar_relevances_prologue, results_similar_relevances_epilogue, results_similar_comments = \ rank_records_bibrank(rm, 0, get_collection_reclist(cc), string.split(p), verbose, f, rg, jrec) if results_similar_recIDs: t2 = os.times()[4] cpu_time = t2 - t1 if 
of.startswith("h"): req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, cc, len(results_similar_recIDs), jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em)) write_warning(results_similar_comments, req=req) print_records(req, results_similar_recIDs, jrec, rg, of, ot, ln, results_similar_relevances, results_similar_relevances_prologue, results_similar_relevances_epilogue, search_pattern=p, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em) elif of == "id": return results_similar_recIDs elif of == "intbitset": return intbitset(results_similar_recIDs) elif of.startswith("x"): print_records(req, results_similar_recIDs, jrec, rg, of, ot, ln, results_similar_relevances, results_similar_relevances_prologue, results_similar_relevances_epilogue, search_pattern=p, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em) else: # rank_records failed and returned some error message to display: if of.startswith("h"): write_warning(results_similar_relevances_prologue, req=req) write_warning(results_similar_relevances_epilogue, req=req) write_warning(results_similar_comments, req=req) if of == "id": return [] elif of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) def prs_search_cocitedwith(kwargs=None, req=None, of=None, cc=None, pl_in_url=None, ln=None, uid=None, _=None, p=None, p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None, so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None, f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, pl=None, d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, ec=None, action=None, verbose=None, em=None, **dummy): page_start(req, of, cc, aas, ln, uid, _("Search Results"), p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em) if of.startswith("h"): req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action, em )) recID = p[12:] if record_exists(recID) != 1: # record does not exist if of.startswith("h"): write_warning(_("Requested record does not seem to exist."), req=req) if of == "id": return [] elif of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) else: # record well exists, so find co-cited ones: t1 = os.times()[4] results_cocited_recIDs = map(lambda x: x[0], calculate_co_cited_with_list(int(recID))) if results_cocited_recIDs: t2 = os.times()[4] cpu_time = t2 - t1 if of.startswith("h"): req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, CFG_SITE_NAME, len(results_cocited_recIDs), jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em)) print_records(req, results_cocited_recIDs, jrec, rg, of, ot, ln, search_pattern=p, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em) elif of == "id": return results_cocited_recIDs elif of == "intbitset": return intbitset(results_cocited_recIDs) elif of.startswith("x"): print_records(req, results_cocited_recIDs, jrec, rg, of, ot, ln, search_pattern=p, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em) else: # cited rank_records failed and returned some error message to display: if of.startswith("h"): write_warning("nothing found", req=req) if of == 
"id": return [] elif of == "intbitset": return intbitset() elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) def prs_search_hosted_collections(kwargs=None, req=None, of=None, ln=None, _=None, p=None, p1=None, p2=None, p3=None, hosted_colls=None, f=None, colls_to_search=None, hosted_colls_actual_or_potential_results_p=None, verbose=None, **dummy): hosted_colls_results = hosted_colls_timeouts = hosted_colls_true_results = None # search into the hosted collections only if the output format is html or xml if hosted_colls and (of.startswith("h") or of.startswith("x")) and not p.startswith("recid:"): # hosted_colls_results : the hosted collections' searches that did not timeout # hosted_colls_timeouts : the hosted collections' searches that timed out and will be searched later on again (hosted_colls_results, hosted_colls_timeouts) = calculate_hosted_collections_results(req, [p, p1, p2, p3], f, hosted_colls, verbose, ln, CFG_HOSTED_COLLECTION_TIMEOUT_ANTE_SEARCH) # successful searches if hosted_colls_results: hosted_colls_true_results = [] for result in hosted_colls_results: # if the number of results is None or 0 (or False) then just do nothing if result[1] == None or result[1] == False: # these are the searches the returned no or zero results if verbose: write_warning("Hosted collections (perform_search_request): %s returned no results" % result[0][1].name, req=req) else: # these are the searches that actually returned results on time hosted_colls_true_results.append(result) if verbose: write_warning("Hosted collections (perform_search_request): %s returned %s results in %s seconds" % (result[0][1].name, result[1], result[2]), req=req) else: if verbose: write_warning("Hosted collections (perform_search_request): there were no hosted collections results to be printed at this time", req=req) if hosted_colls_timeouts: if verbose: for timeout in hosted_colls_timeouts: write_warning("Hosted collections (perform_search_request): %s timed out and will be searched again later" % timeout[0][1].name, req=req) # we need to know for later use if there were any hosted collections to be searched even if they weren't in the end elif hosted_colls and ((not (of.startswith("h") or of.startswith("x"))) or p.startswith("recid:")): (hosted_colls_results, hosted_colls_timeouts) = (None, None) else: if verbose: write_warning("Hosted collections (perform_search_request): there were no hosted collections to be searched", req=req) ## let's define some useful boolean variables: # True means there are actual or potential hosted collections results to be printed kwargs['hosted_colls_actual_or_potential_results_p'] = not (not hosted_colls or not ((hosted_colls_results and hosted_colls_true_results) or hosted_colls_timeouts)) # True means there are hosted collections timeouts to take care of later # (useful for more accurate printing of results later) kwargs['hosted_colls_potential_results_p'] = not (not hosted_colls or not hosted_colls_timeouts) # True means we only have hosted collections to deal with kwargs['only_hosted_colls_actual_or_potential_results_p'] = not colls_to_search and hosted_colls_actual_or_potential_results_p kwargs['hosted_colls_results'] = hosted_colls_results kwargs['hosted_colls_timeouts'] = hosted_colls_timeouts kwargs['hosted_colls_true_results'] = hosted_colls_true_results def prs_advanced_search(results_in_any_collection, kwargs=None, req=None, of=None, cc=None, ln=None, _=None, p=None, p1=None, p2=None, p3=None, f=None, f1=None, 
m1=None, op1=None, f2=None, m2=None, op2=None, f3=None, m3=None, ap=None, ec=None, selected_external_collections_infos=None, verbose=None, wl=None, em=None, **dummy): len_results_p1 = 0 len_results_p2 = 0 len_results_p3 = 0 try: results_in_any_collection.union_update(search_pattern_parenthesised(req, p1, f1, m1, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl)) len_results_p1 = len(results_in_any_collection) if len_results_p1 == 0: if of.startswith("h"): perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, selected_external_collections_infos, em=em) elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) return page_end(req, of, ln, em) if p2: results_tmp = search_pattern_parenthesised(req, p2, f2, m2, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl) len_results_p2 = len(results_tmp) if op1 == "a": # add results_in_any_collection.intersection_update(results_tmp) elif op1 == "o": # or results_in_any_collection.union_update(results_tmp) elif op1 == "n": # not results_in_any_collection.difference_update(results_tmp) else: if of.startswith("h"): write_warning("Invalid set operation %s." % cgi.escape(op1), "Error", req=req) if len(results_in_any_collection) == 0: if of.startswith("h"): if len_results_p2: #each individual query returned results, but the boolean operation did not nearestterms = [] nearest_search_args = req.argd.copy() if p1: nearestterms.append((p1, len_results_p1, clean_dictionary(nearest_search_args, ['p2', 'f2', 'm2', 'p3', 'f3', 'm3']))) nearestterms.append((p2, len_results_p2, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p3', 'f3', 'm3']))) write_warning(websearch_templates.tmpl_search_no_boolean_hits(ln=ln, nearestterms=nearestterms), req=req) perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, selected_external_collections_infos, em=em) elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) if p3: results_tmp = search_pattern_parenthesised(req, p3, f3, m3, ap=ap, of=of, verbose=verbose, ln=ln, wl=wl) len_results_p3 = len(results_tmp) if op2 == "a": # add results_in_any_collection.intersection_update(results_tmp) elif op2 == "o": # or results_in_any_collection.union_update(results_tmp) elif op2 == "n": # not results_in_any_collection.difference_update(results_tmp) else: if of.startswith("h"): write_warning("Invalid set operation %s." 
% cgi.escape(op2), "Error", req=req)
            if len(results_in_any_collection) == 0 and len_results_p3 and of.startswith("h"):
                # each individual query returned results, but the boolean operation did not:
                nearestterms = []
                nearest_search_args = req.argd.copy()
                if p1:
                    nearestterms.append((p1, len_results_p1, clean_dictionary(nearest_search_args, ['p2', 'f2', 'm2', 'p3', 'f3', 'm3'])))
                if p2:
                    nearestterms.append((p2, len_results_p2, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p3', 'f3', 'm3'])))
                nearestterms.append((p3, len_results_p3, clean_dictionary(nearest_search_args, ['p1', 'f1', 'm1', 'p2', 'f2', 'm2'])))
                write_warning(websearch_templates.tmpl_search_no_boolean_hits(ln=ln, nearestterms=nearestterms), req=req)
    except:
        register_exception(req=req, alert_admin=True)
        if of.startswith("h"):
            req.write(create_error_box(req, verbose=verbose, ln=ln))
            perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln,
                                                       selected_external_collections_infos, em=em)
        elif of.startswith("x"):
            # Print empty, but valid XML
            print_records_prologue(req, of)
            print_records_epilogue(req, of)
        return page_end(req, of, ln, em)


def prs_simple_search(results_in_any_collection, kwargs=None, req=None, of=None, cc=None, ln=None, p=None, f=None,
                      p1=None, p2=None, p3=None, ec=None, verbose=None, selected_external_collections_infos=None,
                      only_hosted_colls_actual_or_potential_results_p=None, query_representation_in_cache=None,
                      ap=None, hosted_colls_actual_or_potential_results_p=None, wl=None, em=None, **dummy):
    try:
        results_in_cache = intbitset().fastload(
            search_results_cache.get(query_representation_in_cache))
    except:
        results_in_cache = None

    if results_in_cache is not None:
        # query is already in the cache, so reuse the cached results:
        results_in_any_collection.union_update(results_in_cache)
        if verbose and of.startswith("h"):
            write_warning("Search stage 0: query found in cache, reusing cached results.", req=req)
    else:
        try:
            # added the display_nearest_terms_box parameter to avoid printing out the "Nearest terms in any collection"
            # recommendations when there are results only in the hosted collections. Also added the if clause to avoid
            # searching in case we know we only have actual or potential hosted collections results
            if not only_hosted_colls_actual_or_potential_results_p:
                results_in_any_collection.union_update(search_pattern_parenthesised(req, p, f, ap=ap, of=of, verbose=verbose, ln=ln,
                                                                                    display_nearest_terms_box=not hosted_colls_actual_or_potential_results_p,
                                                                                    wl=wl))
        except:
            register_exception(req=req, alert_admin=True)
            if of.startswith("h"):
                req.write(create_error_box(req, verbose=verbose, ln=ln))
                perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln,
                                                           selected_external_collections_infos, em=em)
            return page_end(req, of, ln, em)


def prs_intersect_results_with_collrecs(results_final, results_in_any_collection,
                                        kwargs=None, colls_to_search=None, req=None, ap=None, of=None, ln=None,
                                        cc=None, p=None, p1=None, p2=None, p3=None, f=None, ec=None, verbose=None,
                                        selected_external_collections_infos=None, em=None, **dummy):
    display_nearest_terms_box = not kwargs['hosted_colls_actual_or_potential_results_p']
    try:
        # added the display_nearest_terms_box parameter to avoid printing out the "Nearest terms in any collection"
        # recommendations when there are results only in the hosted collections. Also added the if clause to avoid
        # searching in case we know since the last stage that we have no results in any collection
        if len(results_in_any_collection) != 0:
            results_final.update(intersect_results_with_collrecs(req, results_in_any_collection, colls_to_search,
                                                                 ap, of, verbose, ln,
                                                                 display_nearest_terms_box=display_nearest_terms_box))
    except:
        register_exception(req=req, alert_admin=True)
        if of.startswith("h"):
            req.write(create_error_box(req, verbose=verbose, ln=ln))
            perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln,
                                                       selected_external_collections_infos, em=em)
        return page_end(req, of, ln, em)


def prs_store_results_in_cache(query_representation_in_cache, results_in_any_collection, req=None, verbose=None, of=None, **dummy):
    if CFG_WEBSEARCH_SEARCH_CACHE_SIZE > 0:
        search_results_cache.set(query_representation_in_cache,
                                 results_in_any_collection.fastdump(),
                                 timeout=CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT)
        search_results_cache.set(query_representation_in_cache + '::cc',
                                 dummy.get('cc', CFG_SITE_NAME),
                                 timeout=CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT)
        if req:
            from flask import request
            req = request
            search_results_cache.set(query_representation_in_cache + '::p',
                                     req.values.get('p', ''),
                                     timeout=CFG_WEBSEARCH_SEARCH_CACHE_TIMEOUT)
        if verbose and of.startswith("h"):
            write_warning("Search stage 3: storing query results in cache.", req=req)


def prs_apply_search_limits(results_final, kwargs=None, req=None, of=None, cc=None, ln=None, _=None,
                            p=None, p1=None, p2=None, p3=None, f=None, pl=None, ap=None, dt=None, ec=None,
                            selected_external_collections_infos=None,
                            hosted_colls_actual_or_potential_results_p=None,
                            datetext1=None, datetext2=None, verbose=None, wl=None, em=None, **dummy):

    if datetext1 != "" and results_final != {}:
        if verbose and of.startswith("h"):
            write_warning("Search stage 5: applying time etc limits, from %s until %s..." % (datetext1, datetext2), req=req)
        try:
            results_final = intersect_results_with_hitset(req,
                                                          results_final,
                                                          search_unit_in_bibrec(datetext1, datetext2, dt),
                                                          ap,
                                                          aptext=_("No match within your time limits, "
                                                                   "discarding this condition..."),
                                                          of=of)
        except:
            register_exception(req=req, alert_admin=True)
            if of.startswith("h"):
                req.write(create_error_box(req, verbose=verbose, ln=ln))
                perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln,
                                                           selected_external_collections_infos, em=em)
            return page_end(req, of, ln, em)
        if results_final == {} and not hosted_colls_actual_or_potential_results_p:
            if of.startswith("h"):
                perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln,
                                                           selected_external_collections_infos, em=em)
            #if of.startswith("x"):
            #    # Print empty, but valid XML
            #    print_records_prologue(req, of)
            #    print_records_epilogue(req, of)
            return page_end(req, of, ln, em)

    if pl and results_final != {}:
        pl = wash_pattern(pl)
        if verbose and of.startswith("h"):
            write_warning("Search stage 5: applying search pattern limit %s..."
% cgi.escape(pl), req=req) try: results_final = intersect_results_with_hitset(req, results_final, search_pattern_parenthesised(req, pl, ap=0, ln=ln, wl=wl), ap, aptext=_("No match within your search limits, " "discarding this condition..."), of=of) except: register_exception(req=req, alert_admin=True) if of.startswith("h"): req.write(create_error_box(req, verbose=verbose, ln=ln)) perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, selected_external_collections_infos, em=em) return page_end(req, of, ln, em) if results_final == {} and not hosted_colls_actual_or_potential_results_p: if of.startswith("h"): perform_external_collection_search_with_em(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, selected_external_collections_infos, em=em) if of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) return page_end(req, of, ln, em) def prs_split_into_collections(kwargs=None, results_final=None, colls_to_search=None, hosted_colls_results=None, cpu_time=0, results_final_nb_total=None, hosted_colls_actual_or_potential_results_p=None, hosted_colls_true_results=None, hosted_colls_timeouts=None, **dummy): results_final_nb_total = 0 results_final_nb = {} # will hold number of records found in each collection # (in simple dict to display overview more easily) for coll in results_final.keys(): results_final_nb[coll] = len(results_final[coll]) #results_final_nb_total += results_final_nb[coll] # Now let us calculate results_final_nb_total more precisely, # in order to get the total number of "distinct" hits across # searched collections; this is useful because a record might # have been attributed to more than one primary collection; so # we have to avoid counting it multiple times. The price to # pay for this accuracy of results_final_nb_total is somewhat # increased CPU time. 
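    # (Illustration of the union step, with hypothetical recIDs: a record
    # attributed to two collections is counted once, since intbitset union
    # deduplicates, e.g. intbitset([1, 2]) | intbitset([2, 3]) ==
    # intbitset([1, 2, 3]), i.e. 3 distinct hits rather than 4.)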
    if len(results_final.keys()) == 1:
        # only one collection; no need to union them:
        results_final_for_all_selected_colls = results_final.values()[0]
        results_final_nb_total = results_final_nb.values()[0]
    else:
        # okay, some work ahead to union hits across collections:
        results_final_for_all_selected_colls = intbitset()
        for coll in results_final.keys():
            results_final_for_all_selected_colls.union_update(results_final[coll])
        results_final_nb_total = len(results_final_for_all_selected_colls)

    #if hosted_colls and (of.startswith("h") or of.startswith("x")):
    if hosted_colls_actual_or_potential_results_p:
        if hosted_colls_results:
            for result in hosted_colls_true_results:
                colls_to_search.append(result[0][1].name)
                results_final_nb[result[0][1].name] = result[1]
                results_final_nb_total += result[1]
                cpu_time += result[2]
        if hosted_colls_timeouts:
            for timeout in hosted_colls_timeouts:
                colls_to_search.append(timeout[1].name)
                # use -963 as a special number to identify the collections that timed out
                results_final_nb[timeout[1].name] = -963

    kwargs['results_final_nb'] = results_final_nb
    kwargs['results_final_nb_total'] = results_final_nb_total
    kwargs['results_final_for_all_selected_colls'] = results_final_for_all_selected_colls
    kwargs['cpu_time'] = cpu_time  #rca TODO: check where the cpu_time is used, this line was missing
    return (results_final_nb, results_final_nb_total, results_final_for_all_selected_colls)


def prs_summarize_records(kwargs=None, req=None, p=None, f=None, aas=None,
                          p1=None, p2=None, p3=None, f1=None, f2=None, f3=None, op1=None, op2=None,
                          ln=None, results_final_for_all_selected_colls=None, of='hcs', **dummy):
    # feed the current search to be summarized:
    from invenio.legacy.search_engine.summarizer import summarize_records
    search_p = p
    search_f = f
    if not p and (aas == 1 or p1 or p2 or p3):
        op_d = {'n': ' and not ', 'a': ' and ', 'o': ' or ', '': ''}
        triples = ziplist([f1, f2, f3], [p1, p2, p3], [op1, op2, ''])
        triples_len = len(triples)
        for i in range(triples_len):
            fi, pi, oi = triples[i]                        # e.g.:
            if i < triples_len-1 and not triples[i+1][1]:  # if p2 empty
                triples[i+1][0] = ''                       # f2 must be too
                oi = ''                                    # and o1
            if ' ' in pi:
                pi = '"'+pi+'"'
            if fi:
                fi = fi + ':'
            search_p += fi + pi + op_d[oi]
        search_f = ''
    summarize_records(results_final_for_all_selected_colls, of, ln, search_p, search_f, req)


def prs_print_records(kwargs=None, results_final=None, req=None, of=None, cc=None, pl_in_url=None,
                      ln=None, _=None, p=None, p1=None, p2=None, p3=None, f=None, rg=None, sf=None,
                      so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None,
                      f2=None, m2=None, op2=None, f3=None, m3=None, sc=None, d1y=None, d1m=None,
                      d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, colls_to_search=None,
                      hosted_colls_actual_or_potential_results_p=None, hosted_colls_results=None,
                      hosted_colls_true_results=None, hosted_colls_timeouts=None, results_final_nb=None,
                      cpu_time=None, verbose=None, em=None, **dummy):

    if len(colls_to_search) > 1:
        cpu_time = -1  # we do not want to have search time printed on each collection

    print_records_prologue(req, of, cc=cc)

    results_final_colls = []
    wlqh_results_overlimit = 0

    for coll in colls_to_search:
        if coll in results_final and len(results_final[coll]):
            if of.startswith("h"):
                req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, coll, results_final_nb[coll],
                                            jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2,
                                            sc, pl_in_url,
                                            d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em))
            results_final_recIDs = list(results_final[coll])
            results_final_relevances = []
results_final_relevances_prologue = "" results_final_relevances_epilogue = "" if rm: # do we have to rank? results_final_recIDs_ranked, results_final_relevances, results_final_relevances_prologue, results_final_relevances_epilogue, results_final_comments = \ rank_records(req, rm, 0, results_final[coll], string.split(p) + string.split(p1) + string.split(p2) + string.split(p3), verbose, so, of, ln, rg, jrec, kwargs['f']) if of.startswith("h"): write_warning(results_final_comments, req=req) if results_final_recIDs_ranked: results_final_recIDs = results_final_recIDs_ranked else: # rank_records failed and returned some error message to display: write_warning(results_final_relevances_prologue, req=req) write_warning(results_final_relevances_epilogue, req=req) elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort? results_final_recIDs = sort_records(req, results_final_recIDs, sf, so, sp, verbose, of, ln, rg, jrec) if len(results_final_recIDs) < CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT: results_final_colls.append(results_final_recIDs) else: wlqh_results_overlimit = 1 print_records(req, results_final_recIDs, jrec, rg, of, ot, ln, results_final_relevances, results_final_relevances_prologue, results_final_relevances_epilogue, search_pattern=p, print_records_prologue_p=False, print_records_epilogue_p=False, verbose=verbose, sf=sf, so=so, sp=sp, rm=rm, em=em) if of.startswith("h"): req.write(print_search_info(p, f, sf, so, sp, rm, of, ot, coll, results_final_nb[coll], jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1, em=em)) if req and not isinstance(req, cStringIO.OutputType): # store the last search results page session_param_set(req, 'websearch-last-query', req.unparsed_uri) if wlqh_results_overlimit: results_final_colls = None # store list of results if user wants to display hits # in a single list, or store list of collections of records # if user displays hits split by collections: session_param_set(req, 'websearch-last-query-hits', results_final_colls) #if hosted_colls and (of.startswith("h") or of.startswith("x")): if hosted_colls_actual_or_potential_results_p: if hosted_colls_results: # TODO: add a verbose message here for result in hosted_colls_true_results: if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, results_final_nb[result[0][1].name], jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, em=em)) req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, limit=rg, em=em)) if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, results_final_nb[result[0][1].name], jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1)) if hosted_colls_timeouts: # TODO: add a verbose message here # TODO: check if verbose messages still work when dealing with (re)calculations of timeouts (hosted_colls_timeouts_results, hosted_colls_timeouts_timeouts) = do_calculate_hosted_collections_results(req, ln, None, verbose, None, hosted_colls_timeouts, CFG_HOSTED_COLLECTION_TIMEOUT_POST_SEARCH) if hosted_colls_timeouts_results: for result in hosted_colls_timeouts_results: if result[1] == None or result[1] == False: ## these are the searches the returned no or zero results ## also print a nearest terms box, in case this is the only ## collection being 
searched and it returns no results? if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, -963, jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time)) req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, no_records_found=True, limit=rg, em=em)) req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, -963, jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1)) else: # these are the searches that actually returned results on time if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, result[1], jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time)) req.write(print_hosted_results(url_and_engine=result[0], ln=ln, of=of, req=req, limit=rg, em=em)) if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, result[0][1].name, result[1], jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1)) if hosted_colls_timeouts_timeouts: for timeout in hosted_colls_timeouts_timeouts: if of.startswith("h"): req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, timeout[1].name, -963, jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time)) req.write(print_hosted_results(url_and_engine=timeout[0], ln=ln, of=of, req=req, search_timed_out=True, limit=rg, em=em)) req.write(print_hosted_search_info(p, f, sf, so, sp, rm, of, ot, timeout[1].name, -963, jrec, rg, aas, ln, p1, p2, p3, f1, f2, f3, m1, m2, m3, op1, op2, sc, pl_in_url, d1y, d1m, d1d, d2y, d2m, d2d, dt, cpu_time, 1)) print_records_epilogue(req, of) if f == "author" and of.startswith("h"): req.write(create_similarly_named_authors_link_box(p, ln)) def prs_log_query(kwargs=None, req=None, uid=None, of=None, ln=None, p=None, f=None, colls_to_search=None, results_final_nb_total=None, em=None, **dummy): # FIXME move query logging to signal receiver # log query: try: from flask.ext.login import current_user if req: from flask import request req = request id_query = log_query(req.host, '&'.join(map(lambda (k,v): k+'='+v, request.values.iteritems(multi=True))), uid) #id_query = log_query(req.remote_host, req.args, uid) #of = request.values.get('of', 'hb') if of.startswith("h") and id_query and (em == '' or EM_REPOSITORY["alert"] in em): if not of in ['hcs', 'hcs2']: # display alert/RSS teaser for non-summary formats: display_email_alert_part = True if current_user: if current_user['email'] == 'guest': if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS > 4: display_email_alert_part = False else: if not current_user['precached_usealerts']: display_email_alert_part = False from flask import flash flash(websearch_templates.tmpl_alert_rss_teaser_box_for_query(id_query, \ ln=ln, display_email_alert_part=display_email_alert_part), 'search-results-after') except: # do not log query if req is None (used by CLI interface) pass log_query_info("ss", p, f, colls_to_search, results_final_nb_total) def prs_search_common(kwargs=None, req=None, of=None, cc=None, ln=None, uid=None, _=None, p=None, p1=None, p2=None, p3=None, colls_to_display=None, f=None, rg=None, sf=None, so=None, sp=None, rm=None, ot=None, aas=None, f1=None, m1=None, op1=None, f2=None, 
m2=None, op2=None, f3=None, m3=None, sc=None, pl=None, d1y=None, d1m=None, d1d=None, d2y=None, d2m=None, d2d=None, dt=None, jrec=None, ec=None, action=None, colls_to_search=None, wash_colls_debug=None, verbose=None, wl=None, em=None, **dummy): query_representation_in_cache = get_search_results_cache_key(**kwargs) page_start(req, of, cc, aas, ln, uid, p=create_page_title_search_pattern_info(p, p1, p2, p3), em=em) if of.startswith("h") and verbose and wash_colls_debug: write_warning("wash_colls debugging info : %s" % wash_colls_debug, req=req) prs_search_hosted_collections(kwargs=kwargs, **kwargs) if of.startswith("h"): req.write(create_search_box(cc, colls_to_display, p, f, rg, sf, so, sp, rm, of, ot, aas, ln, p1, f1, m1, op1, p2, f2, m2, op2, p3, f3, m3, sc, pl, d1y, d1m, d1d, d2y, d2m, d2d, dt, jrec, ec, action, em )) # WebSearch services if jrec <= 1 and \ (em == "" and True or (EM_REPOSITORY["search_services"] in em)): user_info = collect_user_info(req) # display only on first search page, and only if wanted # when 'em' param set. for answer_relevance, answer_html in services.get_answers( req, user_info, of, cc, colls_to_search, p, f, ln): req.write('
') req.write(answer_html) if verbose > 8: write_warning("Service relevance: %i" % answer_relevance, req=req) req.write('
') t1 = os.times()[4] results_in_any_collection = intbitset() if aas == 1 or (p1 or p2 or p3): ## 3A - advanced search output = prs_advanced_search(results_in_any_collection, kwargs=kwargs, **kwargs) if output is not None: return output else: ## 3B - simple search output = prs_simple_search(results_in_any_collection, kwargs=kwargs, **kwargs) if output is not None: return output if len(results_in_any_collection) == 0 and not kwargs['hosted_colls_actual_or_potential_results_p']: if of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) return None # store this search query results into search results cache if needed: prs_store_results_in_cache(query_representation_in_cache, results_in_any_collection, **kwargs) # search stage 4 and 5: intersection with collection universe and sorting/limiting try: output = prs_intersect_with_colls_and_apply_search_limits(results_in_any_collection, kwargs=kwargs, **kwargs) if output is not None: return output except Exception: # no results to display return None t2 = os.times()[4] cpu_time = t2 - t1 kwargs['cpu_time'] = cpu_time ## search stage 6: display results: return prs_display_results(kwargs=kwargs, **kwargs) def prs_intersect_with_colls_and_apply_search_limits(results_in_any_collection, kwargs=None, req=None, of=None, ln=None, _=None, p=None, p1=None, p2=None, p3=None, f=None, cc=None, ec=None, verbose=None, em=None, **dummy): # search stage 4: intersection with collection universe: results_final = {} output = prs_intersect_results_with_collrecs(results_final, results_in_any_collection, kwargs, **kwargs) if output is not None: return output # another external search if we still don't have something if results_final == {} and not kwargs['hosted_colls_actual_or_potential_results_p']: if of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) kwargs['results_final'] = results_final raise Exception # search stage 5: apply search option limits and restrictions: output = prs_apply_search_limits(results_final, kwargs=kwargs, **kwargs) kwargs['results_final'] = results_final if output is not None: return output def prs_display_results(kwargs=None, results_final=None, req=None, of=None, sf=None, so=None, sp=None, verbose=None, p=None, p1=None, p2=None, p3=None, cc=None, ln=None, _=None, ec=None, colls_to_search=None, rm=None, cpu_time=None, f=None, em=None, **dummy ): ## search stage 6: display results: # split result set into collections (results_final_nb, results_final_nb_total, results_final_for_all_selected_colls) = prs_split_into_collections(kwargs=kwargs, **kwargs) # we continue past this point only if there is a hosted collection that has timed out and might offer potential results if results_final_nb_total == 0 and not kwargs['hosted_colls_potential_results_p']: if of.startswith("h"): write_warning("No match found, please enter different search terms.", req=req) elif of.startswith("x"): # Print empty, but valid XML print_records_prologue(req, of) print_records_epilogue(req, of) else: prs_log_query(kwargs=kwargs, **kwargs) # yes, some hits found: good! 
# collection list may have changed due to not-exact-match-found policy so check it out: for coll in results_final.keys(): if coll not in colls_to_search: colls_to_search.append(coll) # print results overview: if of == "intbitset": #return the result as an intbitset return results_final_for_all_selected_colls elif of == "id": # we have been asked to return list of recIDs recIDs = list(results_final_for_all_selected_colls) if rm: # do we have to rank? results_final_for_all_colls_rank_records_output = rank_records(req, rm, 0, results_final_for_all_selected_colls, string.split(p) + string.split(p1) + string.split(p2) + string.split(p3), verbose, so, of, ln, kwargs['rg'], kwargs['jrec'], kwargs['f']) if results_final_for_all_colls_rank_records_output[0]: recIDs = results_final_for_all_colls_rank_records_output[0] elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort? recIDs = sort_records(req, recIDs, sf, so, sp, verbose, of, ln) return recIDs elif of.startswith("h"): if of not in ['hcs', 'hcs2']: # added the hosted_colls_potential_results_p parameter to help print out the overview more accurately req.write(print_results_overview(colls_to_search, results_final_nb_total, results_final_nb, cpu_time, ln, ec, hosted_colls_potential_results_p=kwargs['hosted_colls_potential_results_p'], em=em)) kwargs['selected_external_collections_infos'] = print_external_results_overview(req, cc, [p, p1, p2, p3], f, ec, verbose, ln, print_overview=em == "" or EM_REPOSITORY["overview"] in em) # print number of hits found for XML outputs: if of.startswith("x") or of == 'mobb': req.write("\n" % kwargs['results_final_nb_total']) # print records: if of in ['hcs', 'hcs2']: prs_summarize_records(kwargs=kwargs, **kwargs) else: prs_print_records(kwargs=kwargs, **kwargs) # this is a copy of the prs_display_results with output parts removed, needed for external modules def prs_rank_results(kwargs=None, results_final=None, req=None, colls_to_search=None, sf=None, so=None, sp=None, of=None, rm=None, p=None, p1=None, p2=None, p3=None, verbose=None, **dummy ): ## search stage 6: display results: # split result set into collections (results_final_nb, results_final_nb_total, results_final_for_all_selected_colls) = prs_split_into_collections(kwargs=kwargs, **kwargs) # yes, some hits found: good! # collection list may have changed due to not-exact-match-found policy so check it out: for coll in results_final.keys(): if coll not in colls_to_search: colls_to_search.append(coll) # we have been asked to return list of recIDs recIDs = list(results_final_for_all_selected_colls) if rm: # do we have to rank? results_final_for_all_colls_rank_records_output = rank_records(req, rm, 0, results_final_for_all_selected_colls, string.split(p) + string.split(p1) + string.split(p2) + string.split(p3), verbose, so, of, field=kwargs['f']) if results_final_for_all_colls_rank_records_output[0]: recIDs = results_final_for_all_colls_rank_records_output[0] elif sf or (CFG_BIBSORT_BUCKETS and sorting_methods): # do we have to sort? recIDs = sort_records(req, recIDs, sf, so, sp, verbose, of) return recIDs def perform_request_cache(req, action="show"): """Manipulates the search engine cache.""" req.content_type = "text/html" req.send_http_header() req.write("") out = "" out += "

<h1>Search Cache</h1>"
    req.write(out)
    # show collection reclist cache:
    out = "<h3>Collection reclist cache</h3>"
    out += "- collection table last updated: %s" % get_table_update_time('collection')
    out += "<br />- reclist cache timestamp: %s" % collection_reclist_cache.timestamp
    out += "<br />- reclist cache contents:"
    out += "<blockquote>"
    for coll in collection_reclist_cache.cache.keys():
        if collection_reclist_cache.cache[coll]:
            out += "%s (%d)<br />" % (coll, len(collection_reclist_cache.cache[coll]))
    out += "</blockquote>"
    req.write(out)
    # show field i18nname cache:
    out = "<h3>Field I18N names cache</h3>"
    out += "- fieldname table last updated: %s" % get_table_update_time('fieldname')
    out += "<br />- i18nname cache timestamp: %s" % field_i18nname_cache.timestamp
    out += "<br />- i18nname cache contents:"
    out += "<blockquote>"
    for field in field_i18nname_cache.cache.keys():
        for ln in field_i18nname_cache.cache[field].keys():
            out += "%s, %s = %s<br />" % (field, ln, field_i18nname_cache.cache[field][ln])
    out += "</blockquote>"
    req.write(out)
    # show collection i18nname cache:
    out = "<h3>Collection I18N names cache</h3>"
    out += "- collectionname table last updated: %s" % get_table_update_time('collectionname')
    out += "<br />- i18nname cache timestamp: %s" % collection_i18nname_cache.timestamp
    out += "<br />- i18nname cache contents:"
    out += "<blockquote>"
    for coll in collection_i18nname_cache.cache.keys():
        for ln in collection_i18nname_cache.cache[coll].keys():
            out += "%s, %s = %s<br />" % (coll, ln, collection_i18nname_cache.cache[coll][ln])
    out += "</blockquote>"
    req.write(out)
    req.write("</html>")
    return "\n"


def perform_request_log(req, date=""):
    """Display search log information for given date."""
    req.content_type = "text/html"
    req.send_http_header()
    req.write("<html>")
    req.write("<h1>Search Log</h1>")
    if date:  # case A: display stats for a day
        yyyymmdd = string.atoi(date)
        req.write("<p>Date: %d</p>" % yyyymmdd)
        req.write("""<table border="1">""")
        req.write("<tr><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>" %
                  ("No.", "Time", "Pattern", "Field", "Collection", "Number of Hits"))
        # read file:
        p = os.popen("grep ^%d %s/search.log" % (yyyymmdd, CFG_LOGDIR), 'r')
        lines = p.readlines()
        p.close()
        # process lines:
        i = 0
        for line in lines:
            try:
                datetime, dummy_aas, p, f, c, nbhits = string.split(line, "#")
                i += 1
                req.write("<tr><td>#%d</td><td>%s:%s:%s</td><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>" \
                          % (i, datetime[8:10], datetime[10:12], datetime[12:], p, f, c, nbhits))
            except:
                pass  # ignore eventual wrong log lines
        req.write("</table>")
    else:  # case B: display summary stats per day
        yyyymm01 = int(time.strftime("%Y%m01", time.localtime()))
        yyyymmdd = int(time.strftime("%Y%m%d", time.localtime()))
        req.write("""<table border="1">""")
        req.write("<tr><td><strong>%s</strong></td><td><strong>%s</strong></td></tr>" % ("Day", "Number of Queries"))
        for day in range(yyyymm01, yyyymmdd + 1):
            p = os.popen("grep -c ^%d %s/search.log" % (day, CFG_LOGDIR), 'r')
            for line in p.readlines():
                req.write("""<tr><td>%s</td><td align="right"><a href="%s/search/log?date=%d">%s</a></td></tr>""" % \
                          (day, CFG_SITE_URL, day, line))
            p.close()
        req.write("</table>
") req.write("") return "\n" def get_all_field_values(tag): """ Return all existing values stored for a given tag. @param tag: the full tag, e.g. 909C0b @type tag: string @return: the list of values @rtype: list of strings """ table = 'bib%02dx' % int(tag[:2]) return [row[0] for row in run_sql("SELECT DISTINCT(value) FROM %s WHERE tag=%%s" % table, (tag, ))] def get_most_popular_field_values(recids, tags, exclude_values=None, count_repetitive_values=True, split_by=0): """ Analyze RECIDS and look for TAGS and return most popular values and the frequency with which they occur sorted according to descending frequency. If a value is found in EXCLUDE_VALUES, then do not count it. If COUNT_REPETITIVE_VALUES is True, then we count every occurrence of value in the tags. If False, then we count the value only once regardless of the number of times it may appear in a record. (But, if the same value occurs in another record, we count it, of course.) @return: list of tuples containing tag and its frequency Example: >>> get_most_popular_field_values(range(11,20), '980__a') [('PREPRINT', 10), ('THESIS', 7), ...] >>> get_most_popular_field_values(range(11,20), ('100__a', '700__a')) [('Ellis, J', 10), ('Ellis, N', 7), ...] >>> get_most_popular_field_values(range(11,20), ('100__a', '700__a'), ('Ellis, J')) [('Ellis, N', 7), ...] """ def _get_most_popular_field_values_helper_sorter(val1, val2): """Compare VAL1 and VAL2 according to, firstly, frequency, then secondly, alphabetically.""" compared_via_frequencies = cmp(valuefreqdict[val2], valuefreqdict[val1]) if compared_via_frequencies == 0: return cmp(val1.lower(), val2.lower()) else: return compared_via_frequencies valuefreqdict = {} ## sanity check: if not exclude_values: exclude_values = [] if isinstance(tags, str): tags = (tags,) ## find values to count: vals_to_count = [] displaytmp = {} if count_repetitive_values: # counting technique A: can look up many records at once: (very fast) for tag in tags: vals_to_count.extend(get_fieldvalues(recids, tag, sort=False, split_by=split_by)) else: # counting technique B: must count record-by-record: (slow) for recid in recids: vals_in_rec = [] for tag in tags: for val in get_fieldvalues(recid, tag, False): vals_in_rec.append(val) # do not count repetitive values within this record # (even across various tags, so need to unify again): dtmp = {} for val in vals_in_rec: dtmp[val.lower()] = 1 displaytmp[val.lower()] = val vals_in_rec = dtmp.keys() vals_to_count.extend(vals_in_rec) ## are we to exclude some of found values? for val in vals_to_count: if val not in exclude_values: if val in valuefreqdict: valuefreqdict[val] += 1 else: valuefreqdict[val] = 1 ## sort by descending frequency of values: if not CFG_NUMPY_IMPORTABLE: ## original version out = [] vals = valuefreqdict.keys() vals.sort(_get_most_popular_field_values_helper_sorter) for val in vals: tmpdisplv = '' if val in displaytmp: tmpdisplv = displaytmp[val] else: tmpdisplv = val out.append((tmpdisplv, valuefreqdict[val])) return out else: f = [] # frequencies n = [] # original names ln = [] # lowercased names ## build lists within one iteration for (val, freq) in iteritems(valuefreqdict): f.append(-1 * freq) if val in displaytmp: n.append(displaytmp[val]) else: n.append(val) ln.append(val.lower()) ## sort by frequency (desc) and then by lowercased name. 
return [(n[i], -1 * f[i]) for i in numpy.lexsort([ln, f])] def profile(p="", f="", c=CFG_SITE_NAME): """Profile search time.""" import profile import pstats profile.run("perform_request_search(p='%s',f='%s', c='%s')" % (p, f, c), "perform_request_search_profile") p = pstats.Stats("perform_request_search_profile") p.strip_dirs().sort_stats("cumulative").print_stats() return 0 def perform_external_collection_search_with_em(req, current_collection, pattern_list, field, external_collection, verbosity_level=0, lang=CFG_SITE_LANG, selected_external_collections_infos=None, em=""): perform_external_collection_search(req, current_collection, pattern_list, field, external_collection, verbosity_level, lang, selected_external_collections_infos, print_overview=em == "" or EM_REPOSITORY["overview"] in em, print_search_info=em == "" or EM_REPOSITORY["search_info"] in em, print_see_also_box=em == "" or EM_REPOSITORY["see_also_box"] in em, print_body=em == "" or EM_REPOSITORY["body"] in em) @cache.memoize(timeout=5) def get_fulltext_terms_from_search_pattern(search_pattern): keywords = [] if search_pattern is not None: for unit in create_basic_search_units(None, search_pattern.encode('utf-8'), None): bsu_o, bsu_p, bsu_f, bsu_m = unit[0], unit[1], unit[2], unit[3] if (bsu_o != '-' and bsu_f in [None, 'fulltext']): if bsu_m == 'a' and bsu_p.startswith('%') and bsu_p.endswith('%'): # remove leading and training `%' representing partial phrase search keywords.append(bsu_p[1:-1]) else: keywords.append(bsu_p) return keywords diff --git a/invenio/legacy/webhelp/web/admin/howto/howto-authority.webdoc b/invenio/legacy/webhelp/web/admin/howto/howto-authority.webdoc index 78a2b1491..9314de0ea 100644 --- a/invenio/legacy/webhelp/web/admin/howto/howto-authority.webdoc +++ b/invenio/legacy/webhelp/web/admin/howto/howto-authority.webdoc @@ -1,99 +1,99 @@ ## -*- mode: html; coding: utf-8; -*- ## This file is part of Invenio. -## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN. +## Copyright (C) 2007, 2008, 2009, 2010, 2011, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Introduction

This page describes how to use Authority Control in Invenio from a user's perspective.

For an explanation of how to configure Authority Control in Invenio, cf. _(BibAuthority Admin Guide)_.

How to MARC authority records

1. The 980 field

-

When adding an authority record to INVENIO, whether by uploading a MARC record manually or by adding a new record in BibEdit, it is important to add two separate '980' fields to the record. -The first field will contain the value “AUTHORITY” in the $a subfield. -This is to tell INVENIO that this is an authority record. -The second '980' field will likewise contain a value in its $a subfield, only this time you must specify what kind of authority record it is. -Typically an author authority record would contain the term “AUTHOR”, an institution would contain “INSTITUTION” etc. +

When adding an authority record to INVENIO, whether by uploading a MARC record manually or by adding a new record in BibEdit, it is important to add two separate '980' fields to the record. +The first field will contain the value “AUTHORITY” in the $a subfield. +This is to tell INVENIO that this is an authority record. +The second '980' field will likewise contain a value in its $a subfield, only this time you must specify what kind of authority record it is. +Typically an author authority record would contain the term “AUTHOR”, an institute would contain “INSTITUTE” etc. It is important to communicate these exact terms to the INVENIO admin who will configure how INVENIO handles each of these authority record types for the individual INVENIO modules.
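
For example, a minimal pair of 980 fields for an institute authority record could look like this (hypothetical record):

 980__ $a AUTHORITY
 980__ $a INSTITUTE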

2. The 035 field

Further, you must add a unique control number to each authority record. In Invenio, this number must be contained in the 035__ $a field of the authority record and consists of the MARC code (enclosed in parentheses) of the organization originating the system control number, followed immediately by the number, e.g. "(SzGeCERN)abc123". Cf. 035 - System Control Number from the MARC 21 reference page.
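
For example, an authority record issued by CERN could carry (hypothetical control number):

 035__ $a (SzGeCERN)abc123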

3. Links between MARC records

When creating links between MARC records, we must distinguish two cases: 1) references from bibliographic records towards authority records, and 2) references between authority records.

3.1 Creating a reference from a bibliographic record

Example: You have an article (bibliographic record) with Author "Ellis" in the 100__ $a field and you want to create a reference to the authority record for this author.

-

This can be done by inserting the control number of this authority record (as contained in the 035__ $a subfield of the authority record) into the $0 subfield of the same 100__ field of the bibliographic record, prefixed by the type of authority record being referenced and a (configurable) separator. +

This can be done by inserting the control number of this authority record (as contained in the 035__ $a subfield of the authority record) into the $0 subfield of the same 100__ field of the bibliographic record, prefixed by the type of authority record being referenced and a (configurable) separator.

A 100 field might look like this:

 100__ $a Ellis, J.
       $0 AUTHOR:(CERN)abc123
       $u CERN
-      $0 INSTITUTION:(CERN)xyz789
+      $0 INSTITUTE:(CERN)xyz789
 
-

In this case, since we are referencing an AUTHOR authority record, the 100__ $0 subfield would read, e.g. "AUTHOR:(CERN)abc123". If you want to reference an institution, e.g. SLAC, as affiliation for an author, you would prefix the control number with "INSTITUTION". You would add another 100__ $0 subfield to the same 100 field and add the value "INSTITUTION:(CERN)xyz789".

+

In this case, since we are referencing an AUTHOR authority record, the 100__ $0 subfield would read, e.g. "AUTHOR:(CERN)abc123". If you want to reference an institute, e.g. SLAC, as affiliation for an author, you would prefix the control number with "INSTITUTE". You would add another 100__ $0 subfield to the same 100 field and add the value "INSTITUTE:(CERN)xyz789".

3.2 Creating links between authority records

-

Links between authority records use the 5xx fields. AUTHOR records use the 500 fields, INSTITUTION records the 510 fields and so on, according to the MARC 21 standard. -

+

Links between authority records use the 5xx fields. AUTHOR records use the 500 fields, INSTITUTE records the 510 fields and so on, according to the MARC 21 standard. +

Subfield codes:
 $a - Corporate name or jurisdiction name as entry element (NR)
      e.g. "SLAC National Accelerator Laboratory" or "European Organization for Nuclear Research"
 
 $w - Control subfield (NR)
      'a' - for predecessor
      'b' - for successor
      't' - for top / parent
 
 $4 - Relationship code (R)
-     The control number of the referenced authority record, 
+     The control number of the referenced authority record,
      e.g. "(CERN)iii000"
 
-

Example: You want to add a predecessor to an INSTITUTION authority record. Let's say "Institution A" has control number "(CERN)iii000" and its successor "Institution B" has control number "(CERN)iii001". In order to designate Institution A as predecessor of Institution B, we would add a 510 field to Institution B with a $w value of 'a', a $a value of 'Institution A', and a $4 value of '(CERN)iii000' like this: +

Example: You want to add a predecessor to an INSTITUTE authority record. Let's say "Institute A" has control number "(CERN)iii000" and its successor "Institute B" has control number "(CERN)iii001". In order to designate Institute A as predecessor of Institute B, we would add a 510 field to Institute B with a $w value of 'a', a $a value of 'Institute A', and a $4 value of '(CERN)iii000' like this:

-510__ $a Institution A
+510__ $a Institute A
       $w a
-      $4 INSTITUTION:(CERN)iii000
+      $4 INSTITUTE:(CERN)iii000
 

4. Other MARC for authority records

All other MARC fields should follow the MARC 21 Format for Authority Data.

Creating collections of authority records

-

Once the authority records have been given the appropriate '980__a' values (cf. above), creating a collection of authority records is no different from creating any other collection in INVENIO. You can simply define a new collection defined by the usual collection query 'collection:AUTHOR' for author authority records, or 'collection:INSTITUTION' for institutions, etc.

+

Once the authority records have been given the appropriate '980__a' values (cf. above), creating a collection of authority records is no different from creating any other collection in INVENIO. You can simply define a new collection defined by the usual collection query 'collection:AUTHOR' for author authority records, or 'collection:INSTITUTE' for institutes, etc.

The recommended way of creating collections for authority records is to create a “virtual collection” for the main 'collection:AUTHORITY' collection and then add the individual authority record collections as regular children of this collection. This will allow you to browse and search within authority records without making this the default for all INVENIO searches.

How to use authority control in BibEdit

When using BibEdit to modify MARC meta-data of bibliographic records, certain fields may be configured (by the admin of your INVENIO installation) to offer you auto-complete functionality based upon the data contained in authority records for that field. For example, if MARC subfield 100__ $a was configured to be under authority control, then typing the beginning of a word into this subfield will trigger a drop-down list, offering you a choice of values to choose from. When you click on one of the entries in the drop-down list, this will not only populate the immediate subfield you are editing, but it will also insert a reference into a new $0 subfield of the same MARC field you are editing. This reference tells the system that the author you are referring to is the author as contained in the 'author' authority record with the given authority record control number.

The illustration below demonstrates how this works:

autosuggest dropdown

Typing “Elli” into the 100__ $a subfield will present you with a list of authors whose name contains a word starting with “Elli”. In case there are multiple authors with similar or identical names (as is the case in the example shown here), you will receive additional information about these authors to help you disambiguate. The fields to be used for disambiguation can be configured by your INVENIO administrator. If such fields have not been configured, or if they are not sufficient for disambiguation, the authority record control number will be used to ensure a unique value for each entry in the drop-down list. In the example above, the first author can be uniquely identified by his email address, whereas for the second we have only the authority record control number as a uniquely identifying characteristic.

inserted $0 subfield for authority record -

If in the shown example you click on the first author from the list, this author's name will automatically be inserted into the 100__ $a subfield you were editing, while the authority type and the authority record control number “author:(SzGeCERN)abc123” , is inserted into a new $0 subfield (cf. Illustration 2). This new subfield tells INVENIO that “Ellis, John” is associated with the 'author' authority record containing the authority record control number “(SzGeCERN)abc123”. In this example you can also see that the author's affiliation has been entered in the same way as well, using the auto-complete option for the 100__ $u subfield. In this case the author's affiliation is the “University of Oxford”, which is associated in this INVENIO installation with the 'institution' authority record containing the authority record control number “(SzGeCERN)inst0001”.

-

If INVENIO has no authority record data to match what you type into the authority-controlled subfield, you still have the possibility to enter a value manually.

\ No newline at end of file +

If, in the example shown, you click on the first author in the list, this author's name will automatically be inserted into the 100__ $a subfield you were editing, while the authority type and the authority record control number “author:(SzGeCERN)abc123” are inserted into a new $0 subfield (cf. Illustration 2). This new subfield tells INVENIO that “Ellis, John” is associated with the 'author' authority record containing the authority record control number “(SzGeCERN)abc123”. In this example you can also see that the author's affiliation has been entered in the same way, using the auto-complete option for the 100__ $u subfield. In this case the author's affiliation is the “University of Oxford”, which is associated in this INVENIO installation with the 'institute' authority record containing the authority record control number “(SzGeCERN)inst0001”.

+

If INVENIO has no authority record data to match what you type into the authority-controlled subfield, you still have the possibility to enter a value manually.

diff --git a/invenio/legacy/websearch/templates.py b/invenio/legacy/websearch/templates.py index 9184b1356..b86b00647 100644 --- a/invenio/legacy/websearch/templates.py +++ b/invenio/legacy/websearch/templates.py @@ -1,4643 +1,4643 @@ # -*- coding: utf-8 -*- ## This file is part of Invenio. ## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. # pylint: disable=C0301 __revision__ = "$Id$" import time import cgi import string import re import locale from six import iteritems from urllib import quote, urlencode from xml.sax.saxutils import escape as xml_escape from invenio.config import \ CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH, \ CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH, \ CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH, \ CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD, \ CFG_WEBSEARCH_USE_ALEPH_SYSNOS, \ CFG_WEBSEARCH_SPLIT_BY_COLLECTION, \ CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS, \ CFG_BIBRANK_SHOW_READING_STATS, \ CFG_BIBRANK_SHOW_DOWNLOAD_STATS, \ CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS, \ CFG_BIBRANK_SHOW_CITATION_LINKS, \ CFG_BIBRANK_SHOW_CITATION_STATS, \ CFG_BIBRANK_SHOW_CITATION_GRAPHS, \ CFG_WEBSEARCH_RSS_TTL, \ CFG_SITE_LANG, \ CFG_SITE_NAME, \ CFG_SITE_NAME_INTL, \ CFG_VERSION, \ CFG_SITE_URL, \ CFG_SITE_SUPPORT_EMAIL, \ CFG_SITE_ADMIN_EMAIL, \ CFG_CERN_SITE, \ CFG_INSPIRE_SITE, \ CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, \ CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES, \ CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS, \ CFG_BIBINDEX_CHARS_PUNCTUATION, \ CFG_WEBCOMMENT_ALLOW_COMMENTS, \ CFG_WEBCOMMENT_ALLOW_REVIEWS, \ CFG_WEBSEARCH_WILDCARD_LIMIT, \ CFG_WEBSEARCH_SHOW_COMMENT_COUNT, \ CFG_WEBSEARCH_SHOW_REVIEW_COUNT, \ CFG_SITE_RECORD, \ CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT from invenio.legacy.dbquery import run_sql from invenio.base.i18n import gettext_set_language from invenio.base.globals import cfg from invenio.utils.url import make_canonical_urlargd, drop_default_urlargd, create_html_link, create_url from invenio.utils.html import nmtoken_from_string from invenio.ext.legacy.handler import wash_urlargd from invenio.legacy.bibrank.citation_searcher import get_cited_by_count from invenio.legacy.webuser import session_param_get from invenio.modules.search.services import \ CFG_WEBSEARCH_MAX_SEARCH_COLL_RESULTS_TO_PRINT from intbitset import intbitset from invenio.legacy.websearch_external_collections import external_collection_get_state, get_external_collection_engine from invenio.legacy.websearch_external_collections.utils import get_collection_id from invenio.legacy.websearch_external_collections.config import CFG_EXTERNAL_COLLECTION_MAXRESULTS from invenio.legacy.bibrecord import get_fieldvalues _RE_PUNCTUATION = re.compile(CFG_BIBINDEX_CHARS_PUNCTUATION) _RE_SPACES = re.compile(r"\s+") class Template: # This dictionary maps Invenio language code to locale codes (ISO 639) tmpl_localemap = { 'bg': 
'bg_BG', 'ar': 'ar_AR', 'ca': 'ca_ES', 'de': 'de_DE', 'el': 'el_GR', 'en': 'en_US', 'es': 'es_ES', 'pt': 'pt_BR', 'fa': 'fa_IR', 'fr': 'fr_FR', 'it': 'it_IT', 'ka': 'ka_GE', 'lt': 'lt_LT', 'ro': 'ro_RO', 'ru': 'ru_RU', 'rw': 'rw_RW', 'sk': 'sk_SK', 'cs': 'cs_CZ', 'no': 'no_NO', 'sv': 'sv_SE', 'uk': 'uk_UA', 'ja': 'ja_JA', 'pl': 'pl_PL', 'hr': 'hr_HR', 'zh_CN': 'zh_CN', 'zh_TW': 'zh_TW', 'hu': 'hu_HU', 'af': 'af_ZA', 'gl': 'gl_ES' } tmpl_default_locale = "en_US" # which locale to use by default, useful in case of failure # Type of the allowed parameters for the web interface for search results @property def search_results_default_urlargd(self): from invenio.modules.search.washers import \ search_results_default_urlargd return search_results_default_urlargd # ...and for search interfaces search_interface_default_urlargd = { 'aas': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE), 'as': (int, CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE), 'verbose': (int, 0), 'em' : (str, "")} # ...and for RSS feeds rss_default_urlargd = {'c' : (list, []), 'cc' : (str, ""), 'p' : (str, ""), 'f' : (str, ""), 'p1' : (str, ""), 'f1' : (str, ""), 'm1' : (str, ""), 'op1': (str, ""), 'p2' : (str, ""), 'f2' : (str, ""), 'm2' : (str, ""), 'op2': (str, ""), 'p3' : (str, ""), 'f3' : (str, ""), 'm3' : (str, ""), 'wl' : (int, CFG_WEBSEARCH_WILDCARD_LIMIT)} tmpl_openurl_accepted_args = { 'id' : (list, []), 'genre' : (str, ''), 'aulast' : (str, ''), 'aufirst' : (str, ''), 'auinit' : (str, ''), 'auinit1' : (str, ''), 'auinitm' : (str, ''), 'issn' : (str, ''), 'eissn' : (str, ''), 'coden' : (str, ''), 'isbn' : (str, ''), 'sici' : (str, ''), 'bici' : (str, ''), 'title' : (str, ''), 'stitle' : (str, ''), 'atitle' : (str, ''), 'volume' : (str, ''), 'part' : (str, ''), 'issue' : (str, ''), 'spage' : (str, ''), 'epage' : (str, ''), 'pages' : (str, ''), 'artnum' : (str, ''), 'date' : (str, ''), 'ssn' : (str, ''), 'quarter' : (str, ''), 'url_ver' : (str, ''), 'ctx_ver' : (str, ''), 'rft_val_fmt' : (str, ''), 'rft_id' : (list, []), 'rft.atitle' : (str, ''), 'rft.title' : (str, ''), 'rft.jtitle' : (str, ''), 'rft.stitle' : (str, ''), 'rft.date' : (str, ''), 'rft.volume' : (str, ''), 'rft.issue' : (str, ''), 'rft.spage' : (str, ''), 'rft.epage' : (str, ''), 'rft.pages' : (str, ''), 'rft.artnumber' : (str, ''), 'rft.issn' : (str, ''), 'rft.eissn' : (str, ''), 'rft.aulast' : (str, ''), 'rft.aufirst' : (str, ''), 'rft.auinit' : (str, ''), 'rft.auinit1' : (str, ''), 'rft.auinitm' : (str, ''), 'rft.ausuffix' : (str, ''), 'rft.au' : (list, []), 'rft.aucorp' : (str, ''), 'rft.isbn' : (str, ''), 'rft.coden' : (str, ''), 'rft.sici' : (str, ''), 'rft.genre' : (str, 'unknown'), 'rft.chron' : (str, ''), 'rft.ssn' : (str, ''), 'rft.quarter' : (int, ''), 'rft.part' : (str, ''), 'rft.btitle' : (str, ''), 'rft.isbn' : (str, ''), 'rft.atitle' : (str, ''), 'rft.place' : (str, ''), 'rft.pub' : (str, ''), 'rft.edition' : (str, ''), 'rft.tpages' : (str, ''), 'rft.series' : (str, ''), } tmpl_opensearch_rss_url_syntax = "%(CFG_SITE_URL)s/rss?p={searchTerms}&jrec={startIndex}&rg={count}&ln={language}" % {'CFG_SITE_URL': CFG_SITE_URL} tmpl_opensearch_html_url_syntax = "%(CFG_SITE_URL)s/search?p={searchTerms}&jrec={startIndex}&rg={count}&ln={language}" % {'CFG_SITE_URL': CFG_SITE_URL} def tmpl_openurl2invenio(self, openurl_data): """ Return an Invenio url corresponding to a search with the data included in the openurl form map. 
""" def isbn_to_isbn13_isbn10(isbn): isbn = isbn.replace(' ', '').replace('-', '') if len(isbn) == 10 and isbn.isdigit(): ## We already have isbn10 return ('', isbn) if len(isbn) != 13 and isbn.isdigit(): return ('', '') isbn13, isbn10 = isbn, isbn[3:-1] checksum = 0 weight = 10 for char in isbn10: checksum += int(char) * weight weight -= 1 checksum = 11 - (checksum % 11) if checksum == 10: isbn10 += 'X' if checksum == 11: isbn10 += '0' else: isbn10 += str(checksum) return (isbn13, isbn10) from invenio.legacy.search_engine import perform_request_search doi = '' pmid = '' bibcode = '' oai = '' issn = '' isbn = '' for elem in openurl_data['id']: if elem.startswith('doi:'): doi = elem[len('doi:'):] elif elem.startswith('pmid:'): pmid = elem[len('pmid:'):] elif elem.startswith('bibcode:'): bibcode = elem[len('bibcode:'):] elif elem.startswith('oai:'): oai = elem[len('oai:'):] for elem in openurl_data['rft_id']: if elem.startswith('info:doi/'): doi = elem[len('info:doi/'):] elif elem.startswith('info:pmid/'): pmid = elem[len('info:pmid/'):] elif elem.startswith('info:bibcode/'): bibcode = elem[len('info:bibcode/'):] elif elem.startswith('info:oai/'): oai = elem[len('info:oai/')] elif elem.startswith('urn:ISBN:'): isbn = elem[len('urn:ISBN:'):] elif elem.startswith('urn:ISSN:'): issn = elem[len('urn:ISSN:'):] ## Building author query aulast = openurl_data['rft.aulast'] or openurl_data['aulast'] aufirst = openurl_data['rft.aufirst'] or openurl_data['aufirst'] auinit = openurl_data['rft.auinit'] or \ openurl_data['auinit'] or \ openurl_data['rft.auinit1'] + ' ' + openurl_data['rft.auinitm'] or \ openurl_data['auinit1'] + ' ' + openurl_data['auinitm'] or aufirst[:1] auinit = auinit.upper() if aulast and aufirst: author_query = 'author:"%s, %s" or author:"%s, %s"' % (aulast, aufirst, aulast, auinit) elif aulast and auinit: author_query = 'author:"%s, %s"' % (aulast, auinit) else: author_query = '' ## Building title query title = openurl_data['rft.atitle'] or \ openurl_data['atitle'] or \ openurl_data['rft.btitle'] or \ openurl_data['rft.title'] or \ openurl_data['title'] if title: title_query = 'title:"%s"' % title title_query_cleaned = 'title:"%s"' % _RE_SPACES.sub(' ', _RE_PUNCTUATION.sub(' ', title)) else: title_query = '' ## Building journal query jtitle = openurl_data['rft.stitle'] or \ openurl_data['stitle'] or \ openurl_data['rft.jtitle'] or \ openurl_data['title'] if jtitle: journal_query = 'journal:"%s"' % jtitle else: journal_query = '' ## Building isbn query isbn = isbn or openurl_data['rft.isbn'] or \ openurl_data['isbn'] isbn13, isbn10 = isbn_to_isbn13_isbn10(isbn) if isbn13: isbn_query = 'isbn:"%s" or isbn:"%s"' % (isbn13, isbn10) elif isbn10: isbn_query = 'isbn:"%s"' % isbn10 else: isbn_query = '' ## Building issn query issn = issn or openurl_data['rft.eissn'] or \ openurl_data['eissn'] or \ openurl_data['rft.issn'] or \ openurl_data['issn'] if issn: issn_query = 'issn:"%s"' % issn else: issn_query = '' ## Building coden query coden = openurl_data['rft.coden'] or openurl_data['coden'] if coden: coden_query = 'coden:"%s"' % coden else: coden_query = '' ## Building doi query if False: #doi: #FIXME Temporaly disabled until doi field is properly setup doi_query = 'doi:"%s"' % doi else: doi_query = '' ## Trying possible searches if doi_query: if perform_request_search(p=doi_query): return '%s/search?%s' % (CFG_SITE_URL, urlencode({ 'p' : doi_query, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hd'})) if isbn_query: if perform_request_search(p=isbn_query): return '%s/search?%s' % 
(CFG_SITE_URL, urlencode({ 'p' : isbn_query, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hd'})) if coden_query: if perform_request_search(p=coden_query): return '%s/search?%s' % (CFG_SITE_URL, urlencode({ 'p' : coden_query, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hd'})) if author_query and title_query: if perform_request_search(p='%s and %s' % (title_query, author_query)): return '%s/search?%s' % (CFG_SITE_URL, urlencode({ 'p' : '%s and %s' % (title_query, author_query), 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hd'})) if title_query: result = len(perform_request_search(p=title_query)) if result == 1: return '%s/search?%s' % (CFG_SITE_URL, urlencode({ 'p' : title_query, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hd'})) elif result > 1: return '%s/search?%s' % (CFG_SITE_URL, urlencode({ 'p' : title_query, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hb'})) ## Nothing worked, let's return a search that the user can improve if author_query and title_query: return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({ 'p' : '%s and %s' % (title_query_cleaned, author_query), 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hb'}, {})) elif title_query: return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({ 'p' : title_query_cleaned, 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hb'}, {})) else: ## Too little information was provided. return '%s/search%s' % (CFG_SITE_URL, make_canonical_urlargd({ 'p' : 'recid:-1', 'sc' : CFG_WEBSEARCH_SPLIT_BY_COLLECTION, 'of' : 'hb'}, {})) def tmpl_opensearch_description(self, ln): """ Returns the OpenSearch description file of this site. """ _ = gettext_set_language(ln) return """ %(short_name)s %(long_name)s %(description)s UTF-8 UTF-8 * %(CFG_SITE_ADMIN_EMAIL)s Powered by Invenio %(CFG_SITE_URL)s """ % \ {'CFG_SITE_URL': CFG_SITE_URL, 'short_name': CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)[:16], 'long_name': CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME), 'description': _("Search on %(x_CFG_SITE_NAME_INTL)s", x_CFG_SITE_NAME_INTL=CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME))[:1024], 'CFG_SITE_ADMIN_EMAIL': CFG_SITE_ADMIN_EMAIL, 'rss_search_syntax': self.tmpl_opensearch_rss_url_syntax, 'html_search_syntax': self.tmpl_opensearch_html_url_syntax } def build_search_url(self, known_parameters={}, **kargs): """ Helper for generating a canonical search URL. 'known_parameters' is the dictionary of query parameters you inherit from your current query. You can then pass keyword arguments to modify this query. build_search_url(known_parameters, of="xm") The generated URL is absolute. """ parameters = {} parameters.update(known_parameters) parameters.update(kargs) # Now, we only have the arguments which have _not_ their default value parameters = drop_default_urlargd(parameters, self.search_results_default_urlargd) # Treat `as' argument specially: if 'aas' in parameters: parameters['as'] = parameters['aas'] del parameters['aas'] # Asking for a recid?
Return a /CFG_SITE_RECORD/ URL if 'recid' in parameters: target = "%s/%s/%s" % (CFG_SITE_URL, CFG_SITE_RECORD, parameters['recid']) del parameters['recid'] target += make_canonical_urlargd(parameters, self.search_results_default_urlargd) return target return "%s/search%s" % (CFG_SITE_URL, make_canonical_urlargd(parameters, self.search_results_default_urlargd)) def build_search_interface_url(self, known_parameters={}, **kargs): """ Helper for generating a canonical search interface URL.""" parameters = {} parameters.update(known_parameters) parameters.update(kargs) c = parameters['c'] del parameters['c'] # Now, we only have the arguments which have _not_ their default value parameters = drop_default_urlargd(parameters, self.search_results_default_urlargd) # Treat `as' argument specially: if 'aas' in parameters: parameters['as'] = parameters['aas'] del parameters['aas'] if c and c != CFG_SITE_NAME: base = CFG_SITE_URL + '/collection/' + quote(c) else: base = CFG_SITE_URL return create_url(base, parameters) def build_rss_url(self, known_parameters, **kargs): """Helper for generating a canonical RSS URL""" parameters = {} parameters.update(known_parameters) parameters.update(kargs) # Keep only interesting parameters argd = wash_urlargd(parameters, self.rss_default_urlargd) if argd: # Handle 'c' differently since it is a list c = argd.get('c', []) del argd['c'] # Create query, and drop empty params args = make_canonical_urlargd(argd, self.rss_default_urlargd) if c != []: # Add collections c = [quote(coll) for coll in c] if args == '': args += '?' else: args += '&' args += 'c=' + '&c='.join(c) return CFG_SITE_URL + '/rss' + args def tmpl_record_page_header_content(self, req, recid, ln): """ Provide extra information in the header of /CFG_SITE_RECORD pages Return (title, description, keywords), not escaped for HTML """ _ = gettext_set_language(ln) title = get_fieldvalues(recid, "245__a") or \ get_fieldvalues(recid, "111__a") if title: title = title[0] else: title = _("Record") + ' #%d' % recid keywords = ', '.join(get_fieldvalues(recid, "6531_a")) description = ' '.join(get_fieldvalues(recid, "520__a")) description += "\n" description += '; '.join(get_fieldvalues(recid, "100__a") + get_fieldvalues(recid, "700__a")) return (title, description, keywords) def tmpl_exact_author_browse_help_link(self, p, p1, p2, p3, f, f1, f2, f3, rm, cc, ln, jrec, rg, aas, action, link_name): """ Creates the 'exact author' help link for browsing. """ _ = gettext_set_language(ln) url = create_html_link(self.build_search_url(p=p, p1=p1, p2=p2, p3=p3, f=f, f1=f1, f2=f2, f3=f3, rm=rm, cc=cc, ln=ln, jrec=jrec, rg=rg, aas=aas, action=action), {}, _(link_name), {'class': 'nearestterms'}) return "Did you mean to browse in %s index?" % url def tmpl_navtrail_links(self, aas, ln, dads): """ Creates the navigation bar at top of each search page (*Home > Root collection > subcollection > ...*) Parameters: - 'aas' *int* - Should we display an advanced search box? 
- 'ln' *string* - The language to display - 'dads' *list* - A list of parent links, each one being a ('url', 'name') pair """ out = [] for url, name in dads: args = {'c': url, 'as': aas, 'ln': ln} out.append(create_html_link(self.build_search_interface_url(**args), {}, cgi.escape(name), {'class': 'navtrail'})) return ' > '.join(out) def tmpl_webcoll_body(self, ln, collection, te_portalbox, searchfor, np_portalbox, narrowsearch, focuson, instantbrowse, ne_portalbox, show_body=True): """ Creates the body of the main search page. Parameters: - 'ln' *string* - language of the page being generated - 'collection' - collection id of the page being generated - 'te_portalbox' *string* - The HTML code for the portalbox on top of search - 'searchfor' *string* - The HTML code for the search for box - 'np_portalbox' *string* - The HTML code for the portalbox on bottom of search - 'narrowsearch' *string* - The HTML code for the search categories (left bottom of page) - 'focuson' *string* - The HTML code for the "focuson" categories (right bottom of page) - 'ne_portalbox' *string* - The HTML code for the bottom of the page """ if not narrowsearch: narrowsearch = instantbrowse body = '''
%(searchfor)s %(np_portalbox)s''' % { 'siteurl' : CFG_SITE_URL, 'searchfor' : searchfor, 'np_portalbox' : np_portalbox } if show_body: body += ''' ''' % { 'narrowsearch' : narrowsearch } if focuson: body += """""" body += """
%(narrowsearch)s""" + focuson + """
""" elif focuson: body += focuson body += """%(ne_portalbox)s
""" % {'ne_portalbox' : ne_portalbox} return body def tmpl_portalbox(self, title, body): """Creates portalboxes based on the parameters Parameters: - 'title' *string* - The title of the box - 'body' *string* - The HTML code for the body of the box """ out = """
%(title)s
%(body)s
""" % {'title' : cgi.escape(title), 'body' : body} return out def tmpl_searchfor_light(self, ln, collection_id, collection_name, record_count, example_search_queries): # EXPERIMENTAL """Produces light *Search for* box for the current collection. Parameters: - 'ln' *string* - *str* The language to display - 'collection_id' - *str* The collection id - 'collection_name' - *str* The collection name in current language - 'example_search_queries' - *list* List of search queries given as example for this collection """ # load the right message language _ = gettext_set_language(ln) out = ''' ''' argd = drop_default_urlargd({'ln': ln, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION}, self.search_results_default_urlargd) # Only add non-default hidden values for field, value in argd.items(): out += self.tmpl_input_hidden(field, value) header = _("Search %(x_name)s records for:", x_name=self.tmpl_nbrecs_info(record_count, "", "")) asearchurl = self.build_search_interface_url(c=collection_id, aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), ln=ln) # Build example of queries for this collection example_search_queries_links = [create_html_link(self.build_search_url(p=example_query, ln=ln, aas= -1, c=collection_id), {}, cgi.escape(example_query), {'class': 'examplequery'}) \ for example_query in example_search_queries] example_query_html = '' if len(example_search_queries) > 0: example_query_link = example_search_queries_links[0] # offers more examples if possible more = '' if len(example_search_queries_links) > 1: more = ''' ''' % {'more_example_queries': '
'.join(example_search_queries_links[1:]), 'show_less':_("less"), 'show_more':_("more")} example_query_html += '''

%(example)s%(more)s

''' % {'example': _("Example: %(x_sample_search_query)s") % \ {'x_sample_search_query': example_query_link}, 'more': more} # display options to search in current collection or everywhere search_in = '' if collection_name != CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME): search_in += ''' ''' % {'search_in_collection_name': _("Search in %(x_collection_name)s") % \ {'x_collection_name': collection_name}, 'collection_id': collection_id, 'root_collection_name': CFG_SITE_NAME, 'search_everywhere': _("Search everywhere")} # print commentary start: out += ''' %(search_in)s ''' % {'ln' : ln, 'sizepattern' : CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'siteurl' : CFG_SITE_URL, 'asearch' : create_html_link(asearchurl, {}, _('Advanced Search')), 'header' : header, 'msg_search' : _('Search'), 'msg_browse' : _('Browse'), 'msg_search_tips' : _('Search Tips'), 'search_in': search_in, 'example_query_html': example_query_html} return out def tmpl_searchfor_simple(self, ln, collection_id, collection_name, record_count, middle_option): """Produces simple *Search for* box for the current collection. Parameters: - 'ln' *string* - *str* The language to display - 'collection_id' - *str* The collection id - 'collection_name' - *str* The collection name in current language - 'record_count' - *str* Number of records in this collection - 'middle_option' *string* - HTML code for the options (any field, specific fields ...) """ # load the right message language _ = gettext_set_language(ln) out = ''' ''' argd = drop_default_urlargd({'ln': ln, 'cc': collection_id, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION}, self.search_results_default_urlargd) # Only add non-default hidden values for field, value in argd.items(): out += self.tmpl_input_hidden(field, value) header = _("Search %(x_name)s records for:", x_name=self.tmpl_nbrecs_info(record_count, "", "")) asearchurl = self.build_search_interface_url(c=collection_id, aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), ln=ln) # print commentary start: out += ''' ''' % {'ln' : ln, 'sizepattern' : CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'siteurl' : CFG_SITE_URL, 'asearch' : create_html_link(asearchurl, {}, _('Advanced Search')), 'header' : header, 'middle_option' : middle_option, 'msg_search' : _('Search'), 'msg_browse' : _('Browse'), 'msg_search_tips' : _('Search Tips')} return out def tmpl_searchfor_advanced(self, ln, # current language collection_id, collection_name, record_count, middle_option_1, middle_option_2, middle_option_3, searchoptions, sortoptions, rankoptions, displayoptions, formatoptions ): """ Produces advanced *Search for* box for the current collection. Parameters: - 'ln' *string* - The language to display - 'middle_option_1' *string* - HTML code for the first row of options (any field, specific fields ...) - 'middle_option_2' *string* - HTML code for the second row of options (any field, specific fields ...) - 'middle_option_3' *string* - HTML code for the third row of options (any field, specific fields ...) 
- 'searchoptions' *string* - HTML code for the search options - 'sortoptions' *string* - HTML code for the sort options - 'rankoptions' *string* - HTML code for the rank options - 'displayoptions' *string* - HTML code for the display options - 'formatoptions' *string* - HTML code for the format options """ # load the right message language _ = gettext_set_language(ln) out = ''' ''' argd = drop_default_urlargd({'ln': ln, 'aas': 1, 'cc': collection_id, 'sc': CFG_WEBSEARCH_SPLIT_BY_COLLECTION}, self.search_results_default_urlargd) # Only add non-default hidden values for field, value in argd.items(): out += self.tmpl_input_hidden(field, value) header = _("Search %(x_rec)s records for", x_rec=self.tmpl_nbrecs_info(record_count, "", "")) header += ':' ssearchurl = self.build_search_interface_url(c=collection_id, aas=min(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), ln=ln) out += ''' ''' % {'ln' : ln, 'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'siteurl' : CFG_SITE_URL, 'ssearch' : create_html_link(ssearchurl, {}, _("Simple Search")), 'header' : header, 'matchbox_m1' : self.tmpl_matchtype_box('m1', ln=ln), 'middle_option_1' : middle_option_1, 'andornot_op1' : self.tmpl_andornot_box('op1', ln=ln), 'matchbox_m2' : self.tmpl_matchtype_box('m2', ln=ln), 'middle_option_2' : middle_option_2, 'andornot_op2' : self.tmpl_andornot_box('op2', ln=ln), 'matchbox_m3' : self.tmpl_matchtype_box('m3', ln=ln), 'middle_option_3' : middle_option_3, 'msg_search' : _("Search"), 'msg_browse' : _("Browse"), 'msg_search_tips' : _("Search Tips")} if (searchoptions): out += """""" % { 'searchheader' : _("Search options:"), 'searchoptions' : searchoptions } out += """ """ % { 'added' : _("Added/modified since:"), 'until' : _("until:"), 'added_or_modified': self.tmpl_inputdatetype(ln=ln), 'date_added' : self.tmpl_inputdate("d1", ln=ln), 'date_until' : self.tmpl_inputdate("d2", ln=ln), 'msg_sort' : _("Sort by:"), 'msg_display' : _("Display results:"), 'msg_format' : _("Output format:"), 'sortoptions' : sortoptions, 'rankoptions' : rankoptions, 'displayoptions' : displayoptions, 'formatoptions' : formatoptions } return out def tmpl_matchtype_box(self, name='m', value='', ln='en'): """Returns HTML code for the 'match type' selection box. Parameters: - 'name' *string* - The name of the produced select - 'value' *string* - The selected value (if any value is already selected) - 'ln' *string* - the language to display """ # load the right message language _ = gettext_set_language(ln) out = """ """ % {'name' : name, 'sela' : self.tmpl_is_selected('a', value), 'opta' : _("All of the words:"), 'selo' : self.tmpl_is_selected('o', value), 'opto' : _("Any of the words:"), 'sele' : self.tmpl_is_selected('e', value), 'opte' : _("Exact phrase:"), 'selp' : self.tmpl_is_selected('p', value), 'optp' : _("Partial phrase:"), 'selr' : self.tmpl_is_selected('r', value), 'optr' : _("Regular expression:") } return out def tmpl_is_selected(self, var, fld): """ Checks if *var* and *fld* are equal, and if yes, returns ' selected="selected"'. Useful for select boxes. Parameters: - 'var' *string* - First value to compare - 'fld' *string* - Second value to compare """ if var == fld: return ' selected="selected"' else: return "" def tmpl_andornot_box(self, name='op', value='', ln='en'): """ Returns HTML code for the AND/OR/NOT selection box. 
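The option matching `value' is pre-selected via tmpl_is_selected(), which expands to ' selected="selected"' when its two arguments are equal and to an empty string otherwise. A minimal sketch, assuming a Template instance `tmpl' (the operator values are 'a' for AND, 'o' for OR and 'n' for AND NOT):

    tmpl = Template()
    tmpl.tmpl_is_selected('o', 'o')   # -> ' selected="selected"'
    tmpl.tmpl_is_selected('o', 'a')   # -> ''
    box = tmpl.tmpl_andornot_box(name='op1', value='n', ln='en')
    # box holds the <select> markup with the AND NOT option pre-selected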
Parameters: - 'name' *string* - The name of the produced select - 'value' *string* - The selected value (if any value is already selected) - 'ln' *string* - the language to display """ # load the right message language _ = gettext_set_language(ln) out = """ """ % {'name' : name, 'sela' : self.tmpl_is_selected('a', value), 'opta' : _("AND"), 'selo' : self.tmpl_is_selected('o', value), 'opto' : _("OR"), 'seln' : self.tmpl_is_selected('n', value), 'optn' : _("AND NOT") } return out def tmpl_inputdate(self, name, ln, sy=0, sm=0, sd=0): """ Produces *From Date*, *Until Date* kind of selection box. Suitable for search options. Parameters: - 'name' *string* - The base name of the produced selects - 'ln' *string* - the language to display """ # load the right message language _ = gettext_set_language(ln) box = """ """ # month box += """ """ # year box += """ """ return box def tmpl_inputdatetype(self, dt='', ln=CFG_SITE_LANG): """ Produces input date type selection box to choose added-or-modified date search option. Parameters: - 'dt' *string - date type (c=created, m=modified) - 'ln' *string* - the language to display """ # load the right message language _ = gettext_set_language(ln) box = """ """ % { 'added': _("Added since:"), 'modified': _("Modified since:"), 'sel': self.tmpl_is_selected(dt, 'm'), } return box def tmpl_narrowsearch(self, aas, ln, type, father, has_grandchildren, sons, display_grandsons, grandsons): """ Creates list of collection descendants of type *type* under title *title*. If aas==1, then links to Advanced Search interfaces; otherwise Simple Search. Suitable for 'Narrow search' and 'Focus on' boxes. Parameters: - 'aas' *bool* - Should we display an advanced search box? - 'ln' *string* - The language to display - 'type' *string* - The type of the produced box (virtual collections or normal collections) - 'father' *collection* - The current collection - 'has_grandchildren' *bool* - If the current collection has grand children - 'sons' *list* - The list of the sub-collections (first level) - 'display_grandsons' *bool* - If the grand children collections should be displayed (2 level deep display) - 'grandsons' *list* - The list of sub-collections (second level) """ # load the right message language _ = gettext_set_language(ln) title = father.get_collectionbox_name(ln, type) if has_grandchildren: style_prolog = "" style_epilog = "" else: style_prolog = "" style_epilog = "" out = """""" % {'title' : title, 'narrowsearchbox': {'r': 'narrowsearchbox', 'v': 'focusonsearchbox'}[type]} # iterate through sons: i = 0 for son in sons: out += """""" % {'name' : cgi.escape(son.name) } # hosted collections are checked by default only when configured so elif str(son.dbquery).startswith("hostedcollection:"): external_collection_engine = get_external_collection_engine(str(son.name)) if external_collection_engine and external_collection_engine.selected_by_default: out += """""" % {'name' : cgi.escape(son.name) } elif external_collection_engine and not external_collection_engine.selected_by_default: out += """""" % {'name' : cgi.escape(son.name) } else: # strangely, the external collection engine was never found. In that case, # why was the hosted collection here in the first place? out += """""" % {'name' : cgi.escape(son.name) } else: out += """""" % {'name' : cgi.escape(son.name) } else: out += '' out += """""" i += 1 out += "
%(title)s
""" % \ { 'narrowsearchbox': {'r': 'narrowsearchbox', 'v': 'focusonsearchbox'}[type]} if type == 'r': if son.restricted_p() and son.restricted_p() != father.restricted_p(): out += """%(link)s%(recs)s """ % { 'link': create_html_link(self.build_search_interface_url(c=son.name, ln=ln, aas=aas), {}, style_prolog + cgi.escape(son.get_name(ln)) + style_epilog), 'recs' : self.tmpl_nbrecs_info(son.nbrecs, ln=ln)} # the following prints the "external collection" arrow just after the name and # number of records of the hosted collection # 1) we might want to make the arrow work as an anchor to the hosted collection as well. # That would probably require a new separate function under invenio.utils.url # 2) we might want to place the arrow between the name and the number of records of the hosted collection # That would require to edit/separate the above out += ... if type == 'r': if str(son.dbquery).startswith("hostedcollection:"): out += """%(name)s""" % \ { 'siteurl' : CFG_SITE_URL, 'name' : cgi.escape(son.name), } if son.restricted_p(): out += """ [%(msg)s] """ % { 'msg' : _("restricted") } if display_grandsons and len(grandsons[i]): # iterate trough grandsons: out += """
""" for grandson in grandsons[i]: out += """ %(link)s%(nbrec)s """ % { 'link': create_html_link(self.build_search_interface_url(c=grandson.name, ln=ln, aas=aas), {}, cgi.escape(grandson.get_name(ln))), 'nbrec' : self.tmpl_nbrecs_info(grandson.nbrecs, ln=ln)} # the following prints the "external collection" arrow just after the name and # number of records of the hosted collection # Some relatives comments have been made just above if type == 'r': if str(grandson.dbquery).startswith("hostedcollection:"): out += """%(name)s""" % \ { 'siteurl' : CFG_SITE_URL, 'name' : cgi.escape(grandson.name), } out += """
" return out def tmpl_searchalso(self, ln, engines_list, collection_id): _ = gettext_set_language(ln) box_name = _("Search also:") html = """
""" % locals() for engine in engines_list: internal_name = engine.name name = _(internal_name) base_url = engine.base_url if external_collection_get_state(engine, collection_id) == 3: checked = ' checked="checked"' else: checked = '' html += """""" % \ { 'checked': checked, 'base_url': base_url, 'internal_name': internal_name, 'name': cgi.escape(name), 'id': "extSearch" + nmtoken_from_string(name), 'siteurl': CFG_SITE_URL, } html += """
%(box_name)s
%(name)s
""" return html def tmpl_nbrecs_info(self, number, prolog=None, epilog=None, ln=CFG_SITE_LANG): """ Return information on the number of records. Parameters: - 'number' *string* - The number of records - 'prolog' *string* (optional) - An HTML code to prefix the number (if **None**, will be '(') - 'epilog' *string* (optional) - An HTML code to append to the number (if **None**, will be ')') """ if number is None: number = 0 if prolog is None: prolog = ''' (''' if epilog is None: epilog = ''')''' return prolog + self.tmpl_nice_number(number, ln) + epilog def tmpl_box_restricted_content(self, ln): """ Displays a box containing a *restricted content* message Parameters: - 'ln' *string* - The language to display """ # load the right message language _ = gettext_set_language(ln) return _("This collection is restricted. If you are authorized to access it, please click on the Search button.") def tmpl_box_hosted_collection(self, ln): """ Displays a box containing a *hosted collection* message Parameters: - 'ln' *string* - The language to display """ # load the right message language _ = gettext_set_language(ln) return _("This is a hosted external collection. Please click on the Search button to see its content.") def tmpl_box_no_records(self, ln): """ Displays a box containing a *no content* message Parameters: - 'ln' *string* - The language to display """ # load the right message language _ = gettext_set_language(ln) return _("This collection does not contain any document yet.") def tmpl_instant_browse(self, aas, ln, recids, more_link=None, grid_layout=False, father=None): """ Formats a list of records (given in the recids list) from the database. Parameters: - 'aas' *int* - Advanced Search interface or not (0 or 1) - 'ln' *string* - The language to display - 'recids' *list* - the list of records from the database - 'more_link' *string* - the "More..." link for the record. If not given, will not be displayed - 'father' *collection* - The current collection """ # load the right message language _ = gettext_set_language(ln) body = '''''' if grid_layout: body += '''' % { 'recid': recid['id'], 'date': recid['date'], 'body': recid['body'] } if grid_layout: body += '''
''' body += '''''' body += "
' for recid in recids: if grid_layout: body += ''' %(body)s ''' % { 'recid': recid['id'], 'body': recid['body']} else: body += '''
%(date)s %(body)s
" if more_link: body += '
' + \ create_html_link(more_link, {}, '[>> %s]' % _("more")) + \ '
' return '''
%(header)s
%(body)s
''' % {'header' : father.get_collectionbox_name(ln, 'l') , 'body' : body, } def tmpl_searchwithin_select(self, ln, fieldname, selected, values): """ Produces 'search within' selection box for the current collection. Parameters: - 'ln' *string* - The language to display - 'fieldname' *string* - the name of the select box produced - 'selected' *string* - which of the values is selected - 'values' *list* - the list of values in the select """ out = '""" return out def tmpl_select(self, fieldname, values, selected=None, css_class=''): """ Produces a generic select box Parameters: - 'css_class' *string* - optional, a css class to display this select with - 'fieldname' *list* - the name of the select box produced - 'selected' *string* - which of the values is selected - 'values' *list* - the list of values in the select """ if css_class != '': class_field = ' class="%s"' % css_class else: class_field = '' out = '""" return out def tmpl_record_links(self, recid, ln, sf='', so='d', sp='', rm=''): """ Displays the *More info* and *Find similar* links for a record Parameters: - 'ln' *string* - The language to display - 'recid' *string* - the id of the displayed record """ # load the right message language _ = gettext_set_language(ln) out = '''
%(detailed)s - %(similar)s''' % { 'detailed': create_html_link(self.build_search_url(recid=recid, ln=ln), {}, _("Detailed record"), {'class': "moreinfo"}), 'similar': create_html_link(self.build_search_url(p="recid:%d" % recid, rm='wrd', ln=ln), {}, _("Similar records"), {'class': "moreinfo"})} if CFG_BIBRANK_SHOW_CITATION_LINKS: num_timescited = get_cited_by_count(recid) if num_timescited: out += ''' - %s ''' % \ create_html_link(self.build_search_url(p='refersto:recid:%d' % recid, sf=sf, so=so, sp=sp, rm=rm, ln=ln), {}, _("Cited by %(x_num)i records", x_num=num_timescited), {'class': "moreinfo"}) return out def tmpl_record_body(self, titles, authors, dates, rns, abstracts, urls_u, urls_z, ln): """ Displays the "HTML basic" format of a record Parameters: - 'authors' *list* - the authors (as strings) - 'dates' *list* - the dates of publication - 'rns' *list* - the quicknotes for the record - 'abstracts' *list* - the abstracts for the record - 'urls_u' *list* - URLs to the original versions of the record - 'urls_z' *list* - Not used """ out = "" for title in titles: out += "%(title)s " % { 'title' : cgi.escape(title) } if authors: out += " / " for author in authors[:CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD]: out += '%s ' % \ create_html_link(self.build_search_url(p=author, f='author', ln=ln), {}, cgi.escape(author)) if len(authors) > CFG_WEBSEARCH_AUTHOR_ET_AL_THRESHOLD: out += "et al" for date in dates: out += " %s." % cgi.escape(date) for rn in rns: out += """ [%(rn)s]""" % {'rn' : cgi.escape(rn)} for abstract in abstracts: out += "
%(abstract)s [...]" % {'abstract' : cgi.escape(abstract[:1 + string.find(abstract, '.')]) } for idx in range(0, len(urls_u)): out += """
%(name)s""" % { 'url' : urls_u[idx], 'name' : urls_u[idx] } return out def tmpl_search_in_bibwords(self, p, f, ln, nearest_box): """ Displays the *Words like current ones* links for a search Parameters: - 'p' *string* - Current search words - 'f' *string* - the fields in which the search was done - 'nearest_box' *string* - the HTML code for the "nearest_terms" box - most probably from a create_nearest_terms_box call """ # load the right message language _ = gettext_set_language(ln) out = '

' if f: out += _("Words nearest to %(x_word)s inside %(x_field)s in any collection are:") % {'x_word': '' + cgi.escape(p) + '', 'x_field': '' + cgi.escape(f) + ''} else: out += _("Words nearest to %(x_word)s in any collection are:") % {'x_word': '' + cgi.escape(p) + ''} out += '
' + nearest_box + '

' return out def tmpl_nearest_term_box(self, p, ln, f, terminfo, intro): """ Displays the *Nearest search terms* box Parameters: - 'p' *string* - Current search words - 'f' *string* - a collection description (if the search has been completed in a collection) - 'ln' *string* - The language to display - 'terminfo': tuple (term, hits, argd) for each near term - 'intro' *string* - the intro HTML to prefix the box with """ out = '''''' for term, hits, argd in terminfo: if hits: hitsinfo = str(hits) else: hitsinfo = '-' argd['f'] = f argd['p'] = term term = cgi.escape(term) # FIXME this is hack to get correct links to nearest terms from flask import has_request_context, request if has_request_context() and request.values.get('of', '') != argd.get('of', ''): if 'of' in request.values: argd['of'] = request.values.get('of') else: del argd['of'] if term == p: # print search word for orientation: nearesttermsboxbody_class = "nearesttermsboxbodyselected" if hits > 0: term = create_html_link(self.build_search_url(argd), {}, term, {'class': "nearesttermsselected"}) else: nearesttermsboxbody_class = "nearesttermsboxbody" term = create_html_link(self.build_search_url(argd), {}, term, {'class': "nearestterms"}) out += '''\ ''' % {'hits': hitsinfo, 'nearesttermsboxbody_class': nearesttermsboxbody_class, 'term': term} out += "
%(hits)s   %(term)s
" return intro + "
" + out + "
" def tmpl_browse_pattern(self, f, fn, ln, browsed_phrases_in_colls, colls, rg): """ Displays the *Nearest search terms* box Parameters: - 'f' *string* - field (*not* i18nized) - 'fn' *string* - field name (i18nized) - 'ln' *string* - The language to display - 'browsed_phrases_in_colls' *array* - the phrases to display - 'colls' *array* - the list of collection parameters of the search (c's) - 'rg' *int* - the number of records """ # load the right message language _ = gettext_set_language(ln) out = """""" % { 'hits' : _("Hits"), 'fn' : cgi.escape(fn) } if len(browsed_phrases_in_colls) == 1: # one hit only found: phrase, nbhits = browsed_phrases_in_colls[0][0], browsed_phrases_in_colls[0][1] query = {'c': colls, 'ln': ln, 'p': '"%s"' % phrase.replace('"', '\\"'), 'f': f, 'rg' : rg} out += """""" % {'nbhits': nbhits, 'link': create_html_link(self.build_search_url(query), {}, cgi.escape(phrase))} elif len(browsed_phrases_in_colls) > 1: # first display what was found but the last one: for phrase, nbhits in browsed_phrases_in_colls[:-1]: query = {'c': colls, 'ln': ln, 'p': '"%s"' % phrase.replace('"', '\\"'), 'f': f, 'rg' : rg} out += """""" % {'nbhits' : nbhits, 'link': create_html_link(self.build_search_url(query), {}, cgi.escape(phrase))} # now display last hit as "previous term": phrase, nbhits = browsed_phrases_in_colls[0] query_previous = {'c': colls, 'ln': ln, 'p': '"%s"' % phrase.replace('"', '\\"'), 'f': f, 'rg' : rg} # now display last hit as "next term": phrase, nbhits = browsed_phrases_in_colls[-1] query_next = {'c': colls, 'ln': ln, 'p': '"%s"' % phrase.replace('"', '\\"'), 'f': f, 'rg' : rg} out += """""" % {'link_previous': create_html_link(self.build_search_url(query_previous, action='browse'), {}, _("Previous")), 'link_next': create_html_link(self.build_search_url(query_next, action='browse'), {}, _("next")), 'siteurl' : CFG_SITE_URL} out += """
%(hits)s   %(fn)s
%(nbhits)s   %(link)s
%(nbhits)s   %(link)s
  %(link_previous)s %(link_next)s
""" return out def tmpl_search_box(self, ln, aas, cc, cc_intl, ot, sp, action, fieldslist, f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2, rm, p, f, coll_selects, d1y, d2y, d1m, d2m, d1d, d2d, dt, sort_fields, sf, so, ranks, sc, rg, formats, of, pl, jrec, ec, show_colls=True, show_title=True): """ Displays the *Nearest search terms* box Parameters: - 'ln' *string* - The language to display - 'aas' *bool* - Should we display an advanced search box? -1 -> 1, from simpler to more advanced - 'cc_intl' *string* - the i18nized current collection name, used for display - 'cc' *string* - the internal current collection name - 'ot', 'sp' *string* - hidden values - 'action' *string* - the action demanded by the user - 'fieldslist' *list* - the list of all fields available, for use in select within boxes in advanced search - 'p, f, f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2, op3, rm' *strings* - the search parameters - 'coll_selects' *array* - a list of lists, each containing the collections selects to display - 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates - 'dt' *string* - the dates' types (creation dates, modification dates) - 'sort_fields' *array* - the select information for the sort fields - 'sf' *string* - the currently selected sort field - 'so' *string* - the currently selected sort order ("a" or "d") - 'ranks' *array* - ranking methods - 'rm' *string* - selected ranking method - 'sc' *string* - split by collection or not - 'rg' *string* - selected results/page - 'formats' *array* - available output formats - 'of' *string* - the selected output format - 'pl' *string* - `limit to' search pattern - show_colls *bool* - propose coll selection box? - show_title *bool* show cc_intl in page title? """ # load the right message language _ = gettext_set_language(ln) # These are hidden fields the user does not manipulate # directly if aas == -1: argd = drop_default_urlargd({ 'ln': ln, 'aas': aas, 'ot': ot, 'sp': sp, 'ec': ec, }, self.search_results_default_urlargd) else: argd = drop_default_urlargd({ 'cc': cc, 'ln': ln, 'aas': aas, 'ot': ot, 'sp': sp, 'ec': ec, }, self.search_results_default_urlargd) out = "" if show_title: # display cc name if asked for out += '''

%(ccname)s

''' % {'ccname' : cgi.escape(cc_intl), } out += '''
''' % {'siteurl' : CFG_SITE_URL} # Only add non-default hidden values for field, value in argd.items(): out += self.tmpl_input_hidden(field, value) leadingtext = _("Search") if action == 'browse': leadingtext = _("Browse") if aas == 1: # print Advanced Search form: # define search box elements: out += ''' ''' % { 'simple_search': create_html_link(self.build_search_url(p=p1, f=f1, rm=rm, cc=cc, ln=ln, jrec=jrec, rg=rg), {}, _("Simple Search")), 'leading' : leadingtext, 'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH, 'matchbox1' : self.tmpl_matchtype_box('m1', m1, ln=ln), 'p1' : cgi.escape(p1, 1), 'searchwithin1' : self.tmpl_searchwithin_select( ln=ln, fieldname='f1', selected=f1, values=self._add_mark_to_field(value=f1, fields=fieldslist, ln=ln) ), 'andornot1' : self.tmpl_andornot_box( name='op1', value=op1, ln=ln ), 'matchbox2' : self.tmpl_matchtype_box('m2', m2, ln=ln), 'p2' : cgi.escape(p2, 1), 'searchwithin2' : self.tmpl_searchwithin_select( ln=ln, fieldname='f2', selected=f2, values=self._add_mark_to_field(value=f2, fields=fieldslist, ln=ln) ), 'andornot2' : self.tmpl_andornot_box( name='op2', value=op2, ln=ln ), 'matchbox3' : self.tmpl_matchtype_box('m3', m3, ln=ln), 'p3' : cgi.escape(p3, 1), 'searchwithin3' : self.tmpl_searchwithin_select( ln=ln, fieldname='f3', selected=f3, values=self._add_mark_to_field(value=f3, fields=fieldslist, ln=ln) ), 'search' : _("Search"), 'browse' : _("Browse"), 'siteurl' : CFG_SITE_URL, 'ln' : ln, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'search_tips': _("Search Tips") } elif aas == 0: # print Simple Search form: out += ''' ''' % { 'advanced_search': create_html_link(self.build_search_url(p1=p, f1=f, rm=rm, aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), cc=cc, jrec=jrec, ln=ln, rg=rg), {}, _("Advanced Search")), 'leading' : leadingtext, 'sizepattern' : CFG_WEBSEARCH_SIMPLESEARCH_PATTERN_BOX_WIDTH, 'p' : cgi.escape(p, 1), 'searchwithin' : self.tmpl_searchwithin_select( ln=ln, fieldname='f', selected=f, values=self._add_mark_to_field(value=f, fields=fieldslist, ln=ln) ), 'search' : _("Search"), 'browse' : _("Browse"), 'siteurl' : CFG_SITE_URL, 'ln' : ln, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'search_tips': _("Search Tips") } else: # EXPERIMENTAL # print light search form: search_in = '' if cc_intl != CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME): search_in = ''' ''' % {'search_in_collection_name': _("Search in %(x_collection_name)s") % \ {'x_collection_name': cgi.escape(cc_intl)}, 'collection_id': cc, 'root_collection_name': CFG_SITE_NAME, 'search_everywhere': _("Search everywhere")} out += ''' %(search_in)s ''' % { 'sizepattern' : CFG_WEBSEARCH_LIGHTSEARCH_PATTERN_BOX_WIDTH, 'advanced_search': create_html_link(self.build_search_url(p1=p, f1=f, rm=rm, aas=max(CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES), cc=cc, jrec=jrec, ln=ln, rg=rg), {}, _("Advanced Search")), 'leading' : leadingtext, 'p' : cgi.escape(p, 1), 'searchwithin' : self.tmpl_searchwithin_select( ln=ln, fieldname='f', selected=f, values=self._add_mark_to_field(value=f, fields=fieldslist, ln=ln) ), 'search' : _("Search"), 'browse' : _("Browse"), 'siteurl' : CFG_SITE_URL, 'ln' : ln, - 'langlink': ln != CFG_SITE_LANG and '?ln=' + ln or '', + 'langlink': '?ln=' + ln, 'search_tips': _("Search Tips"), 'search_in': search_in } ## secondly, print Collection(s) box: if show_colls and aas > -1: # display collections only if there is more than one selects = '' for sel in coll_selects: selects += 
self.tmpl_select(fieldname='c', values=sel) out += """ """ % { 'leading' : leadingtext, 'msg_coll' : _("collections"), 'colls' : selects, } ## thirdly, print search limits, if applicable: if action != _("Browse") and pl: out += """""" % { 'limitto' : _("Limit to:"), 'sizepattern' : CFG_WEBSEARCH_ADVANCEDSEARCH_PATTERN_BOX_WIDTH, 'pl' : cgi.escape(pl, 1), } ## fourthly, print from/until date boxen, if applicable: if action == _("Browse") or (d1y == 0 and d1m == 0 and d1d == 0 and d2y == 0 and d2m == 0 and d2d == 0): pass # do not need it else: cell_6_a = self.tmpl_inputdatetype(dt, ln) + self.tmpl_inputdate("d1", ln, d1y, d1m, d1d) cell_6_b = self.tmpl_inputdate("d2", ln, d2y, d2m, d2d) out += """""" % { 'added' : _("Added/modified since:"), 'until' : _("until:"), 'added_or_modified': self.tmpl_inputdatetype(dt, ln), 'date1' : self.tmpl_inputdate("d1", ln, d1y, d1m, d1d), 'date2' : self.tmpl_inputdate("d2", ln, d2y, d2m, d2d), } ## fifthly, print Display results box, including sort/rank, formats, etc: if action != _("Browse") and aas > -1: rgs = [] for i in [10, 25, 50, 100, 250, 500]: if i <= CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS: rgs.append({ 'value' : i, 'text' : "%d %s" % (i, _("results"))}) # enrich sort fields list if we are sorting by some MARC tag: sort_fields = self._add_mark_to_field(value=sf, fields=sort_fields, ln=ln) # create sort by HTML box: out += """""" % { 'sort_by' : _("Sort by:"), 'display_res' : _("Display results:"), 'out_format' : _("Output format:"), 'select_sf' : self.tmpl_select(fieldname='sf', values=sort_fields, selected=sf, css_class='address'), 'select_so' : self.tmpl_select(fieldname='so', values=[{ 'value' : 'a', 'text' : _("asc.") }, { 'value' : 'd', 'text' : _("desc.") }], selected=so, css_class='address'), 'select_rm' : self.tmpl_select(fieldname='rm', values=ranks, selected=rm, css_class='address'), 'select_rg' : self.tmpl_select(fieldname='rg', values=rgs, selected=rg, css_class='address'), 'select_sc' : self.tmpl_select(fieldname='sc', values=[{ 'value' : 0, 'text' : _("single list") }, { 'value' : 1, 'text' : _("split by collection") }], selected=sc, css_class='address'), 'select_of' : self.tmpl_select( fieldname='of', selected=of, values=self._add_mark_to_field(value=of, fields=formats, chars=3, ln=ln), css_class='address'), } ## last but not least, print end of search box: out += """
""" return out def tmpl_input_hidden(self, name, value): "Produces the HTML code for a hidden field " if isinstance(value, list): list_input = [self.tmpl_input_hidden(name, val) for val in value] return "\n".join(list_input) # # Treat `as', `aas' arguments specially: if name == 'aas': name = 'as' return """""" % { 'name' : cgi.escape(str(name), 1), 'value' : cgi.escape(str(value), 1), } def _add_mark_to_field(self, value, fields, ln, chars=1): """Adds the current value as a MARC tag in the fields array Useful for advanced search""" # load the right message language _ = gettext_set_language(ln) out = fields if value and str(value[0:chars]).isdigit(): out.append({'value' : value, 'text' : str(value) + " " + _("MARC tag") }) return out def tmpl_search_pagestart(self, ln) : "page start for search page. Will display after the page header" return """
""" def tmpl_search_pageend(self, ln) : "page end for search page. Will display just before the page footer" return """
""" def tmpl_print_search_info(self, ln, middle_only, collection, collection_name, collection_id, aas, sf, so, rm, rg, nb_found, of, ot, p, f, f1, f2, f3, m1, m2, m3, op1, op2, p1, p2, p3, d1y, d1m, d1d, d2y, d2m, d2d, dt, all_fieldcodes, cpu_time, pl_in_url, jrec, sc, sp): """Prints stripe with the information on 'collection' and 'nb_found' results and CPU time. Also, prints navigation links (beg/next/prev/end) inside the results set. If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links. This is suitable for displaying navigation links at the bottom of the search results page. Parameters: - 'ln' *string* - The language to display - 'middle_only' *bool* - Only display parts of the interface - 'collection' *string* - the collection name - 'collection_name' *string* - the i18nized current collection name - 'aas' *bool* - if we display the advanced search interface - 'sf' *string* - the currently selected sort format - 'so' *string* - the currently selected sort order ("a" or "d") - 'rm' *string* - selected ranking method - 'rg' *int* - selected results/page - 'nb_found' *int* - number of results found - 'of' *string* - the selected output format - 'ot' *string* - hidden values - 'p' *string* - Current search words - 'f' *string* - the fields in which the search was done - 'f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2' *strings* - the search parameters - 'jrec' *int* - number of first record on this page - 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates - 'dt' *string* the dates' type (creation date, modification date) - 'all_fieldcodes' *array* - all the available fields - 'cpu_time' *float* - the time of the query in seconds """ # load the right message language _ = gettext_set_language(ln) out = "" # left table cells: print collection name if not middle_only: out += '''
''' % { 'collection_id': collection_id, 'siteurl' : CFG_SITE_URL, 'collection_link': create_html_link(self.build_search_interface_url(c=collection, aas=aas, ln=ln), {}, cgi.escape(collection_name)) } else: out += """
""" % { 'siteurl' : CFG_SITE_URL } # middle table cell: print beg/next/prev/end arrows: if not middle_only: out += """
" else: out += "" # right table cell: cpu time info if not middle_only: if cpu_time > -1: out += """""" % { 'time' : _("Search took %(x_sec)s seconds.", x_sec=('%.2f' % cpu_time)), } out += "
%(collection_link)s %(recs_found)s  """ % { 'recs_found' : _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(nb_found, ln) + '')) } else: out += "" if nb_found > rg: out += "" + cgi.escape(collection_name) + " : " + _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(nb_found, ln) + '')) + "   " if nb_found > rg: # navig.arrows are needed, since we have many hits query = {'p': p, 'f': f, 'cc': collection, 'sf': sf, 'so': so, 'sp': sp, 'rm': rm, 'of': of, 'ot': ot, 'aas': aas, 'ln': ln, 'p1': p1, 'p2': p2, 'p3': p3, 'f1': f1, 'f2': f2, 'f3': f3, 'm1': m1, 'm2': m2, 'm3': m3, 'op1': op1, 'op2': op2, 'sc': 0, 'd1y': d1y, 'd1m': d1m, 'd1d': d1d, 'd2y': d2y, 'd2m': d2m, 'd2d': d2d, 'dt': dt, } # @todo here def img(gif, txt): return '%(txt)s' % { 'txt': txt, 'gif': gif, 'siteurl': CFG_SITE_URL} if jrec - rg > 1: out += create_html_link(self.build_search_url(query, jrec=1, rg=rg), {}, img('sb', _("begin")), {'class': 'img'}) if jrec > 1: out += create_html_link(self.build_search_url(query, jrec=max(jrec - rg, 1), rg=rg), {}, img('sp', _("previous")), {'class': 'img'}) if jrec + rg - 1 < nb_found: out += "%d - %d" % (jrec, jrec + rg - 1) else: out += "%d - %d" % (jrec, nb_found) if nb_found >= jrec + rg: out += create_html_link(self.build_search_url(query, jrec=jrec + rg, rg=rg), {}, img('sn', _("next")), {'class':'img'}) if nb_found >= jrec + rg + rg: out += create_html_link(self.build_search_url(query, jrec=nb_found - rg + 1, rg=rg), {}, img('se', _("end")), {'class': 'img'}) # still in the navigation part cc = collection sc = 0 for var in ['p', 'cc', 'f', 'sf', 'so', 'of', 'rg', 'aas', 'ln', 'p1', 'p2', 'p3', 'f1', 'f2', 'f3', 'm1', 'm2', 'm3', 'op1', 'op2', 'sc', 'd1y', 'd1m', 'd1d', 'd2y', 'd2m', 'd2d', 'dt']: out += self.tmpl_input_hidden(name=var, value=vars()[var]) for var in ['ot', 'sp', 'rm']: if vars()[var]: out += self.tmpl_input_hidden(name=var, value=vars()[var]) if pl_in_url: fieldargs = cgi.parse_qs(pl_in_url) for fieldcode in all_fieldcodes: # get_fieldcodes(): if fieldcode in fieldargs: for val in fieldargs[fieldcode]: out += self.tmpl_input_hidden(name=fieldcode, value=val) out += """  %(jump)s """ % { 'jump' : _("jump to record:"), 'jrec' : jrec, } if not middle_only: out += "%(time)s 
" else: out += "" out += "
" return out def tmpl_print_hosted_search_info(self, ln, middle_only, collection, collection_name, collection_id, aas, sf, so, rm, rg, nb_found, of, ot, p, f, f1, f2, f3, m1, m2, m3, op1, op2, p1, p2, p3, d1y, d1m, d1d, d2y, d2m, d2d, dt, all_fieldcodes, cpu_time, pl_in_url, jrec, sc, sp): """Prints stripe with the information on 'collection' and 'nb_found' results and CPU time. Also, prints navigation links (beg/next/prev/end) inside the results set. If middle_only is set to 1, it will only print the middle box information (beg/netx/prev/end/etc) links. This is suitable for displaying navigation links at the bottom of the search results page. Parameters: - 'ln' *string* - The language to display - 'middle_only' *bool* - Only display parts of the interface - 'collection' *string* - the collection name - 'collection_name' *string* - the i18nized current collection name - 'aas' *bool* - if we display the advanced search interface - 'sf' *string* - the currently selected sort format - 'so' *string* - the currently selected sort order ("a" or "d") - 'rm' *string* - selected ranking method - 'rg' *int* - selected results/page - 'nb_found' *int* - number of results found - 'of' *string* - the selected output format - 'ot' *string* - hidden values - 'p' *string* - Current search words - 'f' *string* - the fields in which the search was done - 'f1, f2, f3, m1, m2, m3, p1, p2, p3, op1, op2' *strings* - the search parameters - 'jrec' *int* - number of first record on this page - 'd1y, d2y, d1m, d2m, d1d, d2d' *int* - the search between dates - 'dt' *string* the dates' type (creation date, modification date) - 'all_fieldcodes' *array* - all the available fields - 'cpu_time' *float* - the time of the query in seconds """ # load the right message language _ = gettext_set_language(ln) out = "" # left table cells: print collection name if not middle_only: out += '''
''' % { 'collection_id': collection_id, 'siteurl' : CFG_SITE_URL, 'collection_link': create_html_link(self.build_search_interface_url(c=collection, aas=aas, ln=ln), {}, cgi.escape(collection_name)) } else: out += """
""" % { 'siteurl' : CFG_SITE_URL } # middle table cell: print beg/next/prev/end arrows: if not middle_only: # in case we have a hosted collection that timed out do not print its number of records, as it is yet unknown if nb_found != -963: out += """
" else: out += "" # right table cell: cpu time info if not middle_only: if cpu_time > -1: out += """""" % { 'time' : _("Search took %(x_sec)s seconds.", x_sec=('%.2f' % cpu_time)), } out += "
%(collection_link)s %(recs_found)s  """ % { 'recs_found' : _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(nb_found, ln) + '')) } #elif nb_found = -963: # out += """ # %(recs_found)s  """ % { # 'recs_found' : _("%s records found") % ('' + self.tmpl_nice_number(nb_found, ln) + '') # } else: out += "" # we do not care about timed out hosted collections here, because the bumber of records found will never be bigger # than rg anyway, since it's negative if nb_found > rg: out += "" + cgi.escape(collection_name) + " : " + _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(nb_found, ln) + '')) + "   " if nb_found > rg: # navig.arrows are needed, since we have many hits query = {'p': p, 'f': f, 'cc': collection, 'sf': sf, 'so': so, 'sp': sp, 'rm': rm, 'of': of, 'ot': ot, 'aas': aas, 'ln': ln, 'p1': p1, 'p2': p2, 'p3': p3, 'f1': f1, 'f2': f2, 'f3': f3, 'm1': m1, 'm2': m2, 'm3': m3, 'op1': op1, 'op2': op2, 'sc': 0, 'd1y': d1y, 'd1m': d1m, 'd1d': d1d, 'd2y': d2y, 'd2m': d2m, 'd2d': d2d, 'dt': dt, } # @todo here def img(gif, txt): return '%(txt)s' % { 'txt': txt, 'gif': gif, 'siteurl': CFG_SITE_URL} if jrec - rg > 1: out += create_html_link(self.build_search_url(query, jrec=1, rg=rg), {}, img('sb', _("begin")), {'class': 'img'}) if jrec > 1: out += create_html_link(self.build_search_url(query, jrec=max(jrec - rg, 1), rg=rg), {}, img('sp', _("previous")), {'class': 'img'}) if jrec + rg - 1 < nb_found: out += "%d - %d" % (jrec, jrec + rg - 1) else: out += "%d - %d" % (jrec, nb_found) if nb_found >= jrec + rg: out += create_html_link(self.build_search_url(query, jrec=jrec + rg, rg=rg), {}, img('sn', _("next")), {'class':'img'}) if nb_found >= jrec + rg + rg: out += create_html_link(self.build_search_url(query, jrec=nb_found - rg + 1, rg=rg), {}, img('se', _("end")), {'class': 'img'}) # still in the navigation part cc = collection sc = 0 for var in ['p', 'cc', 'f', 'sf', 'so', 'of', 'rg', 'aas', 'ln', 'p1', 'p2', 'p3', 'f1', 'f2', 'f3', 'm1', 'm2', 'm3', 'op1', 'op2', 'sc', 'd1y', 'd1m', 'd1d', 'd2y', 'd2m', 'd2d', 'dt']: out += self.tmpl_input_hidden(name=var, value=vars()[var]) for var in ['ot', 'sp', 'rm']: if vars()[var]: out += self.tmpl_input_hidden(name=var, value=vars()[var]) if pl_in_url: fieldargs = cgi.parse_qs(pl_in_url) for fieldcode in all_fieldcodes: # get_fieldcodes(): if fieldcode in fieldargs: for val in fieldargs[fieldcode]: out += self.tmpl_input_hidden(name=fieldcode, value=val) out += """  %(jump)s """ % { 'jump' : _("jump to record:"), 'jrec' : jrec, } if not middle_only: out += "%(time)s 
" else: out += "" out += "
" return out def tmpl_nice_number(self, number, ln=CFG_SITE_LANG, thousands_separator=',', max_ndigits_after_dot=None): """ Return nicely printed number NUMBER in language LN using given THOUSANDS_SEPARATOR character. If max_ndigits_after_dot is specified and the number is float, the number is rounded by taking in consideration up to max_ndigits_after_dot digit after the dot. This version does not pay attention to locale. See tmpl_nice_number_via_locale(). """ if type(number) is float: if max_ndigits_after_dot is not None: number = round(number, max_ndigits_after_dot) int_part, frac_part = str(number).split('.') return '%s.%s' % (self.tmpl_nice_number(int(int_part), ln, thousands_separator), frac_part) else: chars_in = list(str(number)) number = len(chars_in) chars_out = [] for i in range(0, number): if i % 3 == 0 and i != 0: chars_out.append(thousands_separator) chars_out.append(chars_in[number - i - 1]) chars_out.reverse() return ''.join(chars_out) def tmpl_nice_number_via_locale(self, number, ln=CFG_SITE_LANG): """ Return nicely printed number NUM in language LN using the locale. See also version tmpl_nice_number(). """ if number is None: return None # Temporarily switch the numeric locale to the requested one, and format the number # In case the system has no locale definition, use the vanilla form ol = locale.getlocale(locale.LC_NUMERIC) try: locale.setlocale(locale.LC_NUMERIC, self.tmpl_localemap.get(ln, self.tmpl_default_locale)) except locale.Error: return str(number) try: number = locale.format('%d', number, True) except TypeError: return str(number) locale.setlocale(locale.LC_NUMERIC, ol) return number def tmpl_record_format_htmlbrief_header(self, ln): """Returns the header of the search results list when output is html brief. Note that this function is called for each collection results when 'split by collection' is enabled. See also: tmpl_record_format_htmlbrief_footer, tmpl_record_format_htmlbrief_body Parameters: - 'ln' *string* - The language to display """ # load the right message language _ = gettext_set_language(ln) out = """
""" % { 'siteurl' : CFG_SITE_URL, } return out def tmpl_record_format_htmlbrief_footer(self, ln, display_add_to_basket=True): """Returns the footer of the search results list when output is html brief. Note that this function is called for each collection results when 'split by collection' is enabled. See also: tmpl_record_format_htmlbrief_header(..), tmpl_record_format_htmlbrief_body(..) Parameters: - 'ln' *string* - The language to display - 'display_add_to_basket' *bool* - whether to display Add-to-basket button """ # load the right message language _ = gettext_set_language(ln) out = """

%(add_to_basket)s
""" % { 'add_to_basket': display_add_to_basket and """""" % _("Add to basket") or "", } return out def tmpl_record_format_htmlbrief_body(self, ln, recid, row_number, relevance, record, relevances_prologue, relevances_epilogue, display_add_to_basket=True): """Returns the html brief format of one record. Used in the search results list for each record. See also: tmpl_record_format_htmlbrief_header(..), tmpl_record_format_htmlbrief_footer(..) Parameters: - 'ln' *string* - The language to display - 'row_number' *int* - The position of this record in the list - 'recid' *int* - The recID - 'relevance' *string* - The relevance of the record - 'record' *string* - The formatted record - 'relevances_prologue' *string* - HTML code to prepend the relevance indicator - 'relevances_epilogue' *string* - HTML code to append to the relevance indicator (used mostly for formatting) """ # load the right message language _ = gettext_set_language(ln) checkbox_for_baskets = """""" % \ {'recid': recid, } if not display_add_to_basket: checkbox_for_baskets = '' out = """ %(checkbox_for_baskets)s %(number)s. """ % {'recid': recid, 'number': row_number, 'checkbox_for_baskets': checkbox_for_baskets} if relevance: out += """
""" % { 'prologue' : relevances_prologue, 'epilogue' : relevances_epilogue, 'relevance' : relevance } out += """%s""" % record return out def tmpl_print_results_overview(self, ln, results_final_nb_total, cpu_time, results_final_nb, colls, ec, hosted_colls_potential_results_p=False): """Prints results overview box with links to particular collections below. Parameters: - 'ln' *string* - The language to display - 'results_final_nb_total' *int* - The total number of hits for the query - 'colls' *array* - The collections with hits, in the format: - 'coll[code]' *string* - The code of the collection (canonical name) - 'coll[name]' *string* - The display name of the collection - 'results_final_nb' *array* - The number of hits, indexed by the collection codes: - 'cpu_time' *string* - The time the query took - 'url_args' *string* - The rest of the search query - 'ec' *array* - selected external collections - 'hosted_colls_potential_results_p' *boolean* - check if there are any hosted collections searches that timed out during the pre-search """ if len(colls) == 1 and not ec: # if one collection only and no external collections, print nothing: return "" # load the right message language _ = gettext_set_language(ln) # first find total number of hits: # if there were no hosted collections that timed out during the pre-search print out the exact number of records found if not hosted_colls_potential_results_p: out = """''' % { 'more': create_html_link( self.build_search_url(p='refersto:recid:%d' % recID, #XXXX sf=sf, so=so, sp=sp, rm=rm, ln=ln), {}, _("more")), 'similar': similar} return out def tmpl_detailed_record_citations_citation_history(self, recID, ln, citationhistory): """Returns the citations history graph of this record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - citationhistory *string* - citationhistory box """ # load the right message language _ = gettext_set_language(ln) out = '' if CFG_BIBRANK_SHOW_CITATION_GRAPHS and citationhistory is not None: out = '' % citationhistory else: out = "" else: out += "no citationhistory -->" return out def tmpl_detailed_record_citations_co_citing(self, recID, ln, cociting): """Returns the list of cocited records Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - cociting *string* - cociting box """ # load the right message language _ = gettext_set_language(ln) out = '' if CFG_BIBRANK_SHOW_CITATION_STATS and cociting is not None: similar = self.tmpl_print_record_list_for_similarity_boxen ( _("Co-cited with: %(x_num)s records", x_num=len (cociting)), cociting, ln) out = ''' ''' % { 'more': create_html_link(self.build_search_url(p='cocitedwith:%d' % recID, ln=ln), {}, _("more")), 'similar': similar } return out def tmpl_detailed_record_citations_self_cited(self, recID, ln, selfcited, citinglist): """Returns the list of self-citations for this record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - selfcited list - a list of self-citations for recID """ # load the right message language _ = gettext_set_language(ln) out = '' if CFG_BIBRANK_SHOW_CITATION_GRAPHS and selfcited is not None: sc_scorelist = [] #a score list for print.. for s in selfcited: #copy weight from citations weight = 0 for c in citinglist: (crec, score) = c if crec == s: weight = score tmp = [s, weight] sc_scorelist.append(tmp) scite = self.tmpl_print_record_list_for_similarity_boxen ( _(".. 
of which self-citations: %(x_rec)s records", x_rec=len (selfcited)), sc_scorelist, ln) out = '' return out def tmpl_author_information(self, req, pubs, authorname, num_downloads, aff_pubdict, citedbylist, kwtuples, authors, vtuples, names_dict, person_link, bibauthorid_data, ln, return_html=False): """Prints stuff about the author given as authorname. 1. Author name + his/her institutes. Each institute I has a link to papers where the auhtor has I as institute. 2. Publications, number: link to search by author. 3. Keywords 4. Author collabs 5. Publication venues like journals The parameters are data structures needed to produce 1-6, as follows: req - request pubs - list of recids, probably the records that have the author as an author authorname - evident num_downloads - evident aff_pubdict - a dictionary where keys are inst names and values lists of recordids citedbylist - list of recs that cite pubs kwtuples - keyword tuples like ('HIGGS BOSON',[3,4]) where 3 and 4 are recids authors - a list of authors that have collaborated with authorname names_dict - a dict of {name: frequency} """ from invenio.legacy.search_engine import perform_request_search from operator import itemgetter _ = gettext_set_language(ln) ib_pubs = intbitset(pubs) html = [] # construct an extended search as an interim solution for author id # searches. Will build "(exactauthor:v1 OR exactauthor:v2)" strings # extended_author_search_str = "" # if bibauthorid_data["is_baid"]: # if len(names_dict.keys()) > 1: # extended_author_search_str = '(' # # for name_index, name_query in enumerate(names_dict.keys()): # if name_index > 0: # extended_author_search_str += " OR " # # extended_author_search_str += 'exactauthor:"' + name_query + '"' # # if len(names_dict.keys()) > 1: # extended_author_search_str += ')' # rec_query = 'exactauthor:"' + authorname + '"' # # if bibauthorid_data["is_baid"] and extended_author_search_str: # rec_query = extended_author_search_str baid_query = "" extended_author_search_str = "" if 'is_baid' in bibauthorid_data and bibauthorid_data['is_baid']: if bibauthorid_data["cid"]: baid_query = 'author:%s' % bibauthorid_data["cid"] elif bibauthorid_data["pid"] > -1: baid_query = 'author:%s' % bibauthorid_data["pid"] ## todo: figure out if the author index is filled with pids/cids. ## if not: fall back to exactauthor search. # if not index: # baid_query = "" if not baid_query: baid_query = 'exactauthor:"' + authorname + '"' if bibauthorid_data['is_baid']: if len(names_dict.keys()) > 1: extended_author_search_str = '(' for name_index, name_query in enumerate(names_dict.keys()): if name_index > 0: extended_author_search_str += " OR " extended_author_search_str += 'exactauthor:"' + name_query + '"' if len(names_dict.keys()) > 1: extended_author_search_str += ')' if bibauthorid_data['is_baid'] and extended_author_search_str: baid_query = extended_author_search_str baid_query = baid_query + " " sorted_names_list = sorted(iteritems(names_dict), key=itemgetter(1), reverse=True) # Prepare data for display # construct names box header = "" + _("Name variants") + "" content = [] for name, frequency in sorted_names_list: prquery = baid_query + ' exactauthor:"' + name + '"' name_lnk = create_html_link(self.build_search_url(p=prquery), {}, str(frequency),) content.append("%s (%s)" % (name, name_lnk)) if not content: content = [_("No Name Variants")] names_box = self.tmpl_print_searchresultbox(header, "
\n".join(content)) # construct papers box rec_query = baid_query searchstr = create_html_link(self.build_search_url(p=rec_query), {}, "" + "All papers (" + str(len(pubs)) + ")" + "",) line1 = "" + _("Papers") + "" line2 = searchstr if CFG_BIBRANK_SHOW_DOWNLOAD_STATS and num_downloads: line2 += " (" + _("downloaded") + " " line2 += str(num_downloads) + " " + _("times") + ")" if CFG_INSPIRE_SITE: CFG_COLLS = ['Book', 'Conference', 'Introductory', 'Lectures', 'Preprint', 'Published', 'Review', 'Thesis'] else: CFG_COLLS = ['Article', 'Book', 'Preprint', ] collsd = {} for coll in CFG_COLLS: coll_papers = list(ib_pubs & intbitset(perform_request_search(f="collection", p=coll))) if coll_papers: collsd[coll] = coll_papers colls = collsd.keys() colls.sort(lambda x, y: cmp(len(collsd[y]), len(collsd[x]))) # sort by number of papers for coll in colls: rec_query = baid_query + 'collection:' + coll line2 += "
" + create_html_link(self.build_search_url(p=rec_query), {}, coll + " (" + str(len(collsd[coll])) + ")",) if not pubs: line2 = _("No Papers") papers_box = self.tmpl_print_searchresultbox(line1, line2) #make a authoraff string that looks like CERN (1), Caltech (2) etc authoraff = "" aff_pubdict_keys = aff_pubdict.keys() aff_pubdict_keys.sort(lambda x, y: cmp(len(aff_pubdict[y]), len(aff_pubdict[x]))) if aff_pubdict_keys: for a in aff_pubdict_keys: print_a = a if (print_a == ' '): print_a = _("unknown affiliation") if authoraff: authoraff += '
' authoraff += create_html_link(self.build_search_url(p=' or '.join(["%s" % x for x in aff_pubdict[a]]), f='recid'), {}, print_a + ' (' + str(len(aff_pubdict[a])) + ')',) else: authoraff = _("No Affiliations") line1 = "" + _("Affiliations") + "" line2 = authoraff affiliations_box = self.tmpl_print_searchresultbox(line1, line2) # print frequent keywords: keywstr = "" if (kwtuples): for (kw, freq) in kwtuples: if keywstr: keywstr += '
' rec_query = baid_query + 'keyword:"' + kw + '"' searchstr = create_html_link(self.build_search_url(p=rec_query), {}, kw + " (" + str(freq) + ")",) keywstr = keywstr + " " + searchstr else: keywstr += _('No Keywords') line1 = "" + _("Frequent keywords") + "" line2 = keywstr keyword_box = self.tmpl_print_searchresultbox(line1, line2) header = "" + _("Frequent co-authors") + "" content = [] sorted_coauthors = sorted(sorted(iteritems(authors), key=itemgetter(0)), key=itemgetter(1), reverse=True) for name, frequency in sorted_coauthors: rec_query = baid_query + 'exactauthor:"' + name + '"' lnk = create_html_link(self.build_search_url(p=rec_query), {}, "%s (%s)" % (name, frequency),) content.append("%s" % lnk) if not content: content = [_("No Frequent Co-authors")] coauthor_box = self.tmpl_print_searchresultbox(header, "
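# The co-author box above sorts twice and relies on Python's stable sort:
# first alphabetically by name, then by descending frequency, so equal
# frequencies stay in alphabetical order.  A sketch of both forms, assuming
# items is an iterable of (name, frequency) pairs; the helper names are
# illustrative only.
from operator import itemgetter

def sort_by_frequency_then_name(items):
    # two stable sorts: the second key wins, the first breaks ties
    by_name = sorted(items, key=itemgetter(0))
    return sorted(by_name, key=itemgetter(1), reverse=True)

def sort_by_frequency_then_name_onepass(items):
    # same ordering with one sort and a composite key
    return sorted(items, key=lambda kv: (-kv[1], kv[0]))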
\n".join(content)) pubs_to_papers_link = create_html_link(self.build_search_url(p=baid_query), {}, str(len(pubs))) display_name = "" try: display_name = sorted_names_list[0][0] except IndexError: display_name = " " headertext = ('

%s (%s papers)

' % (display_name, pubs_to_papers_link)) if return_html: html.append(headertext) else: req.write(headertext) #req.write("

%s

" % (authorname)) if person_link: cmp_link = ('' % (CFG_SITE_URL, person_link, _("This is me. Verify my publication list."))) if return_html: html.append(cmp_link) else: req.write(cmp_link) if return_html: html.append("
%(founds)s
""" % { 'founds' : _("%(x_fmt_open)sResults overview:%(x_fmt_close)s Found %(x_nb_records)s records in %(x_nb_seconds)s seconds.") % \ {'x_fmt_open': '', 'x_fmt_close': '', 'x_nb_records': '' + self.tmpl_nice_number(results_final_nb_total, ln) + '', 'x_nb_seconds': '%.2f' % cpu_time} } # if there were (only) hosted_collections that timed out during the pre-search print out a fuzzier message else: if results_final_nb_total == 0: out = """
%(founds)s
""" % { 'founds' : _("%(x_fmt_open)sResults overview%(x_fmt_close)s") % \ {'x_fmt_open': '', 'x_fmt_close': ''} } elif results_final_nb_total > 0: out = """
%(founds)s
""" % { 'founds' : _("%(x_fmt_open)sResults overview:%(x_fmt_close)s Found at least %(x_nb_records)s records in %(x_nb_seconds)s seconds.") % \ {'x_fmt_open': '', 'x_fmt_close': '', 'x_nb_records': '' + self.tmpl_nice_number(results_final_nb_total, ln) + '', 'x_nb_seconds': '%.2f' % cpu_time} } # then print hits per collection: out += """""" count = 0 for coll in colls: if coll['code'] in results_final_nb and results_final_nb[coll['code']] > 0: count += 1 out += """ %(coll_name)s, %(number)s
""" % \ {'collclass' : count > cfg['CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT'] and 'class="morecollslist" style="display:none"' or '', 'coll' : coll['id'], 'coll_name' : cgi.escape(coll['name']), 'number' : _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(results_final_nb[coll['code']], ln) + ''))} # the following is used for hosted collections that have timed out, # i.e. for which we don't know the exact number of results yet. elif coll['code'] in results_final_nb and results_final_nb[coll['code']] == -963: count += 1 out += """ %(coll_name)s
""" % \ {'collclass' : count > cfg['CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT'] and 'class="morecollslist" style="display:none"' or '', 'coll' : coll['id'], 'coll_name' : cgi.escape(coll['name']), 'number' : _("%(x_rec)s records found", x_rec=('' + self.tmpl_nice_number(results_final_nb[coll['code']], ln) + ''))} if count > cfg['CFG_WEBSEARCH_RESULTS_OVERVIEW_MAX_COLLS_TO_PRINT']: out += """""" % _("Show less collections") out += """%s""" % _("Show all collections") out += "
" return out def tmpl_print_hosted_results(self, url_and_engine, ln, of=None, req=None, limit=CFG_EXTERNAL_COLLECTION_MAXRESULTS, display_body=True, display_add_to_basket = True): """Print results of a given search engine. """ if display_body: _ = gettext_set_language(ln) #url = url_and_engine[0] engine = url_and_engine[1] #name = _(engine.name) db_id = get_collection_id(engine.name) #base_url = engine.base_url out = "" results = engine.parser.parse_and_get_results(None, of=of, req=req, limit=limit, parseonly=True) if len(results) != 0: if of == 'hb': out += """
""" % { 'siteurl' : CFG_SITE_URL, 'col_db_id' : db_id, } else: if of == 'hb': out += """
""" for result in results: out += result.html.replace('>Detailed record<', '>External record<').replace('>Similar records<', '>Similar external records<') if len(results) != 0: if of == 'hb': out += """

""" if display_add_to_basket: out += """ """ % {'basket' : _("Add to basket")} out += """
""" else: if of == 'hb': out += """
""" # we have already checked if there are results or no, maybe the following if should be removed? if not results: if of.startswith("h"): out = _('No results found...') + '
'
            return out
        else:
            return ""

    def tmpl_print_service_list_links(self, label, labels_and_urls, ln=CFG_SITE_LANG):
        """
        Prints service results as a list.

        @param label: the label to display before the list of links
        @type label: string
        @param labels_and_urls: list of tuples (label, url), already translated, not escaped
        @type labels_and_urls: list(string, string)
        @param ln: language
        """
        # load the right message language
        _ = gettext_set_language(ln)

        out = ''' %s ''' % cgi.escape(label)
        out += """"""
        count = 0
        for link_label, link_url in labels_and_urls:
            count += 1
            out += """%(separator)s %(link_label)s""" % \
                   {'itemclass': count > CFG_WEBSEARCH_MAX_SEARCH_COLL_RESULTS_TO_PRINT and \
                                 'class="moreserviceitemslist" style="display:none"' or '',
                    'separator': count > 1 and ', ' or '',
                    'url': link_url,
                    'link_label': cgi.escape(link_label)}
        if count > CFG_WEBSEARCH_MAX_SEARCH_COLL_RESULTS_TO_PRINT:
            out += """ """ % _("Less suggestions")
            out += """ %s""" % _("More suggestions")
        return out

    def tmpl_print_searchresultbox(self, header, body):
        """Print a nicely formatted box for search results."""
        #_ = gettext_set_language(ln)

        # first find total number of hits:
        out = '
' + header + '
' + body + '
' return out def tmpl_search_no_boolean_hits(self, ln, nearestterms): """No hits found, proposes alternative boolean queries Parameters: - 'ln' *string* - The language to display - 'nearestterms' *array* - Parts of the interface to display, in the format: - 'nearestterms[nbhits]' *int* - The resulting number of hits - 'nearestterms[url_args]' *string* - The search parameters - 'nearestterms[p]' *string* - The search terms """ # load the right message language _ = gettext_set_language(ln) out = _("Boolean query returned no hits. Please combine your search terms differently.") out += '''
''' for term, hits, argd in nearestterms: out += '''\ ''' % {'hits' : hits, 'link': create_html_link(self.build_search_url(argd), {}, cgi.escape(term), {'class': "nearestterms"})} out += """
%(hits)s   %(link)s
""" return out def tmpl_similar_author_names(self, authors, ln): """No hits found, proposes alternative boolean queries Parameters: - 'authors': a list of (name, hits) tuples - 'ln' *string* - The language to display """ # load the right message language _ = gettext_set_language(ln) out = ''' ''' % { 'similar' : _("See also: similar author names") } for author, hits in authors: out += '''\ ''' % {'link': create_html_link( self.build_search_url(p=author, f='author', ln=ln), {}, cgi.escape(author), {'class':"google"}), 'nb' : hits} out += """
%(similar)s
%(nb)d %(link)s
""" return out def tmpl_print_record_detailed(self, recID, ln): """Displays a detailed on-the-fly record Parameters: - 'ln' *string* - The language to display - 'recID' *int* - The record id """ # okay, need to construct a simple "Detailed record" format of our own: out = "

 " # secondly, title: titles = get_fieldvalues(recID, "245__a") or \ get_fieldvalues(recID, "111__a") for title in titles: out += "

%s

" % cgi.escape(title) # thirdly, authors: authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a") if authors: out += "

" for author in authors: out += '%s; ' % create_html_link(self.build_search_url( ln=ln, p=author, f='author'), {}, cgi.escape(author)) out += "

" # fourthly, date of creation: dates = get_fieldvalues(recID, "260__c") for date in dates: out += "

%s

" % date # fifthly, abstract: abstracts = get_fieldvalues(recID, "520__a") for abstract in abstracts: out += """

Abstract: %s

""" % abstract # fifthly bis, keywords: keywords = get_fieldvalues(recID, "6531_a") if len(keywords): out += """

Keyword(s):""" for keyword in keywords: out += '%s; ' % create_html_link( self.build_search_url(ln=ln, p=keyword, f='keyword'), {}, cgi.escape(keyword)) out += '

' # fifthly bis bis, published in: prs_p = get_fieldvalues(recID, "909C4p") prs_v = get_fieldvalues(recID, "909C4v") prs_y = get_fieldvalues(recID, "909C4y") prs_n = get_fieldvalues(recID, "909C4n") prs_c = get_fieldvalues(recID, "909C4c") for idx in range(0, len(prs_p)): out += """

Publ. in: %s""" % prs_p[idx] if prs_v and prs_v[idx]: out += """%s""" % prs_v[idx] if prs_y and prs_y[idx]: out += """(%s)""" % prs_y[idx] if prs_n and prs_n[idx]: out += """, no.%s""" % prs_n[idx] if prs_c and prs_c[idx]: out += """, p.%s""" % prs_c[idx] out += """.

""" # sixthly, fulltext link: urls_z = get_fieldvalues(recID, "8564_z") urls_u = get_fieldvalues(recID, "8564_u") # we separate the fulltext links and image links for url_u in urls_u: if url_u.endswith('.png'): continue else: link_text = "URL" try: if urls_z[idx]: link_text = urls_z[idx] except IndexError: pass out += """

%s: %s

""" % (link_text, urls_u[idx], urls_u[idx]) # print some white space at the end: out += "

" return out def tmpl_print_record_list_for_similarity_boxen(self, title, recID_score_list, ln=CFG_SITE_LANG): """Print list of records in the "hs" (HTML Similarity) format for similarity boxes. RECID_SCORE_LIST is a list of (recID1, score1), (recID2, score2), etc. """ from invenio.legacy.search_engine import print_record, record_public_p recID_score_list_to_be_printed = [] # firstly find 5 first public records to print: nb_records_to_be_printed = 0 nb_records_seen = 0 while nb_records_to_be_printed < 5 and nb_records_seen < len(recID_score_list) and nb_records_seen < 50: # looking through first 50 records only, picking first 5 public ones (recID, score) = recID_score_list[nb_records_seen] nb_records_seen += 1 if record_public_p(recID): nb_records_to_be_printed += 1 recID_score_list_to_be_printed.append([recID, score]) # secondly print them: out = '''
%(title)s
''' % { 'title': cgi.escape(title) } for recid, score in recID_score_list_to_be_printed: out += ''' ''' % { 'score': score, 'info' : print_record(recid, format="hs", ln=ln), } out += """
(%(score)s)  %(info)s
""" return out def tmpl_print_record_brief(self, ln, recID): """Displays a brief record on-the-fly Parameters: - 'ln' *string* - The language to display - 'recID' *int* - The record id """ out = "" # record 'recID' does not exist in format 'format', so print some default format: # firstly, title: titles = get_fieldvalues(recID, "245__a") or \ get_fieldvalues(recID, "111__a") # secondly, authors: authors = get_fieldvalues(recID, "100__a") + get_fieldvalues(recID, "700__a") # thirdly, date of creation: dates = get_fieldvalues(recID, "260__c") # thirdly bis, report numbers: rns = get_fieldvalues(recID, "037__a") rns = get_fieldvalues(recID, "088__a") # fourthly, beginning of abstract: abstracts = get_fieldvalues(recID, "520__a") # fifthly, fulltext link: urls_z = get_fieldvalues(recID, "8564_z") urls_u = get_fieldvalues(recID, "8564_u") # get rid of images images = [] non_image_urls_u = [] for url_u in urls_u: if url_u.endswith('.png'): images.append(url_u) else: non_image_urls_u.append(url_u) ## unAPI identifier out = '\n' % recID out += self.tmpl_record_body( titles=titles, authors=authors, dates=dates, rns=rns, abstracts=abstracts, urls_u=non_image_urls_u, urls_z=urls_z, ln=ln) return out def tmpl_print_record_brief_links(self, ln, recID, sf='', so='d', sp='', rm='', display_claim_link=False): """Displays links for brief record on-the-fly Parameters: - 'ln' *string* - The language to display - 'recID' *int* - The record id """ from invenio.ext.template import render_template_to_string tpl = """{%- from "search/helpers.html" import record_brief_links with context -%} {{ record_brief_links(get_record(recid)) }}""" return render_template_to_string(tpl, recid=recID, _from_string=True).encode('utf-8') def tmpl_xml_rss_prologue(self, current_url=None, previous_url=None, next_url=None, first_url=None, last_url=None, nb_found=None, jrec=None, rg=None, cc=None): """Creates XML RSS 2.0 prologue.""" title = CFG_SITE_NAME description = '%s latest documents' % CFG_SITE_NAME if cc and cc != CFG_SITE_NAME: title += ': ' + cgi.escape(cc) description += ' in ' + cgi.escape(cc) out = """ %(rss_title)s %(siteurl)s %(rss_description)s %(sitelang)s %(timestamp)s Invenio %(version)s %(sitesupportemail)s %(timetolive)s%(previous_link)s%(next_link)s%(current_link)s%(total_results)s%(start_index)s%(items_per_page)s %(siteurl)s/img/site_logo_rss.png %(sitename)s %(siteurl)s \n""" return out def tmpl_xml_podcast_prologue(self, current_url=None, previous_url=None, next_url=None, first_url=None, last_url=None, nb_found=None, jrec=None, rg=None, cc=None): """Creates XML podcast prologue.""" title = CFG_SITE_NAME description = '%s latest documents' % CFG_SITE_NAME if CFG_CERN_SITE: title = 'CERN' description = 'CERN latest documents' if cc and cc != CFG_SITE_NAME: title += ': ' + cgi.escape(cc) description += ' in ' + cgi.escape(cc) out = """ %(podcast_title)s %(siteurl)s %(podcast_description)s %(sitelang)s %(timestamp)s Invenio %(version)s %(siteadminemail)s %(timetolive)s%(previous_link)s%(next_link)s%(current_link)s %(siteurl)s/img/site_logo_rss.png %(sitename)s %(siteurl)s %(siteadminemail)s """ % {'sitename': CFG_SITE_NAME, 'siteurl': CFG_SITE_URL, 'sitelang': CFG_SITE_LANG, 'siteadminemail': CFG_SITE_ADMIN_EMAIL, 'timestamp': time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()), 'version': CFG_VERSION, 'sitesupportemail': CFG_SITE_SUPPORT_EMAIL, 'timetolive': CFG_WEBSEARCH_RSS_TTL, 'current_link': (current_url and \ '\n\n' % current_url) or '', 'previous_link': (previous_url and \ '\n' % previous_url) or '', 
'next_link': (next_url and \ '\n \n""" return out def tmpl_xml_nlm_prologue(self): """Creates XML NLM prologue.""" out = """\n""" return out def tmpl_xml_nlm_epilogue(self): """Creates XML NLM epilogue.""" out = """\n""" return out def tmpl_xml_refworks_prologue(self): """Creates XML RefWorks prologue.""" out = """\n""" return out def tmpl_xml_refworks_epilogue(self): """Creates XML RefWorks epilogue.""" out = """\n""" return out def tmpl_xml_endnote_prologue(self): """Creates XML EndNote prologue.""" out = """\n\n""" return out def tmpl_xml_endnote_8x_prologue(self): """Creates XML EndNote prologue.""" out = """\n""" return out def tmpl_xml_endnote_epilogue(self): """Creates XML EndNote epilogue.""" out = """\n\n""" return out def tmpl_xml_endnote_8x_epilogue(self): """Creates XML EndNote epilogue.""" out = """\n""" return out def tmpl_xml_marc_prologue(self): """Creates XML MARC prologue.""" out = """\n""" return out def tmpl_xml_marc_epilogue(self): """Creates XML MARC epilogue.""" out = """\n""" return out def tmpl_xml_mods_prologue(self): """Creates XML MODS prologue.""" out = """\n""" return out def tmpl_xml_mods_epilogue(self): """Creates XML MODS epilogue.""" out = """\n""" return out def tmpl_xml_default_prologue(self): """Creates XML default format prologue. (Sanity calls only.)""" out = """\n""" return out def tmpl_xml_default_epilogue(self): """Creates XML default format epilogue. (Sanity calls only.)""" out = """\n""" return out def tmpl_collection_not_found_page_title(self, colname, ln=CFG_SITE_LANG): """ Create page title for cases when unexisting collection was asked for. """ _ = gettext_set_language(ln) out = _("Collection %(x_name)s Not Found", x_name=cgi.escape(colname)) return out def tmpl_collection_not_found_page_body(self, colname, ln=CFG_SITE_LANG): """ Create page body for cases when unexisting collection was asked for. """ _ = gettext_set_language(ln) out = """

%(title)s

%(sorry)s

%(you_may_want)s

""" % { 'title': self.tmpl_collection_not_found_page_title(colname, ln), 'sorry': _("Sorry, collection %(x_name)s does not seem to exist.", x_name=('' + cgi.escape(colname) + '')), 'you_may_want': _("You may want to start browsing from %(x_name)s.", x_name=('' + cgi.escape(CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME)) + ''))} return out def tmpl_alert_rss_teaser_box_for_query(self, id_query, ln, display_email_alert_part=True): """Propose teaser for setting up this query as alert or RSS feed. Parameters: - 'id_query' *int* - ID of the query we make teaser for - 'ln' *string* - The language to display - 'display_email_alert_part' *bool* - whether to display email alert part """ # load the right message language _ = gettext_set_language(ln) # get query arguments: res = run_sql("SELECT urlargs FROM query WHERE id=%s", (id_query,)) argd = {} if res: argd = cgi.parse_qs(res[0][0]) rssurl = self.build_rss_url(argd) alerturl = CFG_SITE_URL + '/youralerts/input?ln=%s&idq=%s' % (ln, id_query) if display_email_alert_part: msg_alert = _("""Set up a personal %(x_url1_open)semail alert%(x_url1_close)s or subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s.""") % \ {'x_url1_open': ' ' % (alerturl, CFG_SITE_URL) + ' ' % (alerturl), 'x_url1_close': '', 'x_url2_open': ' ' % (rssurl, CFG_SITE_URL) + ' ' % rssurl, 'x_url2_close': '', } else: msg_alert = _("""Subscribe to the %(x_url2_open)sRSS feed%(x_url2_close)s.""") % \ {'x_url2_open': ' ' % (rssurl, CFG_SITE_URL) + ' ' % rssurl, 'x_url2_close': '', } out = '''
%(similar)s
%(msg_alert)s
''' % { 'similar' : _("Interested in being notified about new results for this query?"), 'msg_alert': msg_alert, } return out def tmpl_detailed_record_metadata(self, recID, ln, format, content, creationdate=None, modificationdate=None): """Returns the main detailed page of a record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - 'format' *string* - The format in used to print the record - 'content' *string* - The main content of the page - 'creationdate' *string* - The creation date of the printed record - 'modificationdate' *string* - The last modification date of the printed record """ _ = gettext_set_language(ln) ## unAPI identifier out = '\n' % recID out += content return out def tmpl_display_back_to_search(self, req, recID, ln): """ Displays next-hit/previous-hit/back-to-search links on the detailed record pages in order to be able to quickly flip between detailed record pages @param req: Apache request object @type req: Apache request object @param recID: detailed record ID @type recID: int @param ln: language of the page @type ln: string @return: html output @rtype: html """ _ = gettext_set_language(ln) # this variable is set to zero and then, nothing is displayed if not CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT: return '' # search for a specific record having not done any search before wlq = session_param_get(req, 'websearch-last-query', '') wlqh = session_param_get(req, 'websearch-last-query-hits') out = '''

'''
        # if the limit CFG_WEBSEARCH_PREV_NEXT_HIT_LIMIT was exceeded,
        # then only the back-to-search link will be displayed
        if wlqh is None:
            out += '''
%(back)s
''' % \
                   {'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})}
            return out

        # let's look for the recID's collection
        record_found = False
        for coll in wlqh:
            if recID in coll:
                record_found = True
                coll_recID = coll
                break

        # let's calculate the length of the recID's collection
        if record_found:
            recIDs = coll_recID[::-1]
            totalrec = len(recIDs)
        # search for a specific record having not done any search before
        else:
            return ''

        # if there is only one hit,
        # show only the "back to search" link
        if totalrec == 1:
            # to go back to the last search results page
            out += '''
%(back)s
''' % \ {'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})} elif totalrec > 1: pos = recIDs.index(recID) numrec = pos + 1 if pos == 0: recIDnext = recIDs[pos + 1] recIDlast = recIDs[totalrec - 1] # to display only next and last links out += '''
%(numrec)s %(totalrec)s %(next)s %(last)s
''' % { 'numrec': _("%(x_name)s of", x_name=('' + self.tmpl_nice_number(numrec, ln) + '')), 'totalrec': ("%s") % ('' + self.tmpl_nice_number(totalrec, ln) + ''), 'next': create_html_link(self.build_search_url(recid=recIDnext, ln=ln), {}, (''), {'class': "moreinfo"}), 'last': create_html_link(self.build_search_url(recid=recIDlast, ln=ln), {}, ('»'), {'class': "moreinfo"})} elif pos == totalrec - 1: recIDfirst = recIDs[0] recIDprev = recIDs[pos - 1] # to display only first and previous links out += '''
%(first)s %(previous)s %(numrec)s %(totalrec)s
''' % { 'first': create_html_link(self.build_search_url(recid=recIDfirst, ln=ln), {}, ('«'), {'class': "moreinfo"}), 'previous': create_html_link(self.build_search_url(recid=recIDprev, ln=ln), {}, (''), {'class': "moreinfo"}), 'numrec': _("%(x_name)s of", x_name=('' + self.tmpl_nice_number(numrec, ln) + '')), 'totalrec': ("%s") % ('' + self.tmpl_nice_number(totalrec, ln) + '')} else: # to display all links recIDfirst = recIDs[0] recIDprev = recIDs[pos - 1] recIDnext = recIDs[pos + 1] recIDlast = recIDs[len(recIDs) - 1] out += '''
%(first)s %(previous)s %(numrec)s %(totalrec)s %(next)s %(last)s
''' % { 'first': create_html_link(self.build_search_url(recid=recIDfirst, ln=ln), {}, ('«'), {'class': "moreinfo"}), 'previous': create_html_link(self.build_search_url(recid=recIDprev, ln=ln), {}, (''), {'class': "moreinfo"}), 'numrec': _("%(x_name)s of", x_name=('' + self.tmpl_nice_number(numrec, ln) + '')), 'totalrec': ("%s") % ('' + self.tmpl_nice_number(totalrec, ln) + ''), 'next': create_html_link(self.build_search_url(recid=recIDnext, ln=ln), {}, (''), {'class': "moreinfo"}), 'last': create_html_link(self.build_search_url(recid=recIDlast, ln=ln), {}, ('»'), {'class': "moreinfo"})} out += '''
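# The branches above decide which of the first/previous/next/last links to
# show from the record's position in the hit list.  A compact sketch of the
# same index arithmetic (an illustrative helper, not part of the module):
def neighbour_recids(recIDs, recid):
    """Return (first, prev, next, last) recIDs around `recid`;
    entries are None where no such link applies."""
    pos = recIDs.index(recid)
    last = len(recIDs) - 1
    return (recIDs[0] if pos > 0 else None,
            recIDs[pos - 1] if pos > 0 else None,
            recIDs[pos + 1] if pos < last else None,
            recIDs[last] if pos < last else None)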
%(back)s
''' % {
                'back': create_html_link(wlq, {}, _("Back to search"), {'class': "moreinfo"})}
        return out

    def tmpl_record_plots(self, recID, ln):
        """
        Displays little tables containing the images and captions contained
        in the specified document.

        Parameters:

          - 'recID' *int* - The ID of the printed record

          - 'ln' *string* - The language to display
        """
        from invenio.legacy.search_engine import get_record
        from invenio.legacy.bibrecord import field_get_subfield_values
        from invenio.legacy.bibrecord import record_get_field_instances
        _ = gettext_set_language(ln)

        out = ''

        rec = get_record(recID)
        flds = record_get_field_instances(rec, '856', '4')

        images = []
        for fld in flds:
            image = field_get_subfield_values(fld, 'u')
            caption = field_get_subfield_values(fld, 'y')

            if type(image) == list and len(image) > 0:
                image = image[0]
            else:
                continue
            if type(caption) == list and len(caption) > 0:
                caption = caption[0]
            else:
                continue

            if not image.endswith('.png'):
                # huh?
                continue

            if len(caption) >= 5:
                images.append((int(caption[:5]), image, caption[5:]))
            else:
                # we don't have any idea of the order... just put it at the end
                images.append((99999, image, caption))

        images = sorted(images, key=lambda x: x[0])

        for (index, image, caption) in images:
            # let's put everything in nice little subtables with the image
            # next to the caption
            out = out + '' + \
                  '' + \
                  '' + \
                  '
' + \ '' + caption + '
' out = out + '

' return out def tmpl_detailed_record_statistics(self, recID, ln, downloadsimilarity, downloadhistory, viewsimilarity): """Returns the statistics page of a record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - downloadsimilarity *string* - downloadsimilarity box - downloadhistory *string* - downloadhistory box - viewsimilarity *string* - viewsimilarity box """ # load the right message language _ = gettext_set_language(ln) out = '' if CFG_BIBRANK_SHOW_DOWNLOAD_STATS and downloadsimilarity is not None: similar = self.tmpl_print_record_list_for_similarity_boxen ( _("People who downloaded this document also downloaded:"), downloadsimilarity, ln) out = '' out += ''' ''' % { 'siteurl': CFG_SITE_URL, 'recid': recID, 'ln': ln, 'similar': similar, 'more': _("more"), 'graph': downloadsimilarity } out += '
%(graph)s
%(similar)s
' out += '
' if CFG_BIBRANK_SHOW_READING_STATS and viewsimilarity is not None: out += self.tmpl_print_record_list_for_similarity_boxen ( _("People who viewed this page also viewed:"), viewsimilarity, ln) if CFG_BIBRANK_SHOW_DOWNLOAD_GRAPHS and downloadhistory is not None: out += downloadhistory + '
' return out def tmpl_detailed_record_citations_prologue(self, recID, ln): """Returns the prologue of the citations page of a record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display """ return '' def tmpl_detailed_record_citations_epilogue(self, recID, ln): """Returns the epilogue of the citations page of a record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display """ return '
' def tmpl_detailed_record_citations_citing_list(self, recID, ln, citinglist, sf='', so='d', sp='', rm=''): """Returns the list of record citing this one Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - citinglist *list* - a list of tuples [(x1,y1),(x2,y2),..] where x is doc id and y is number of citations """ # load the right message language _ = gettext_set_language(ln) out = '' if CFG_BIBRANK_SHOW_CITATION_STATS and citinglist is not None: similar = self.tmpl_print_record_list_for_similarity_boxen( _("Cited by: %(x_num)s records", x_num=len(citinglist)), citinglist, ln) out += '''
%(similar)s %(more)s

%s
%(similar)s %(more)s
' + scite + '
") html.append("") html.append("
") html.append(names_box) html.append("
") html.append(papers_box) html.append("
") html.append(keyword_box) html.append("
 ") html.append(affiliations_box) html.append("
") html.append(coauthor_box) html.append("
") else: req.write("") req.write("") req.write("
") req.write(names_box) req.write("
") req.write(papers_box) req.write("
") req.write(keyword_box) req.write("
 ") req.write(affiliations_box) req.write("
") req.write(coauthor_box) req.write("
") # print citations: rec_query = baid_query if len(citedbylist): line1 = "" + _("Citations:") + "" line2 = "" if not pubs: line2 = _("No Citation Information available") sr_box = self.tmpl_print_searchresultbox(line1, line2) if return_html: html.append(sr_box) else: req.write(sr_box) if return_html: return "\n".join(html) # print frequent co-authors: # collabstr = "" # if (authors): # for c in authors: # c = c.strip() # if collabstr: # collabstr += '
' # #do not add this person him/herself in the list # cUP = c.upper() # authornameUP = authorname.upper() # if not cUP == authornameUP: # commpubs = intbitset(pubs) & intbitset(perform_request_search(p="exactauthor:\"%s\" exactauthor:\"%s\"" % (authorname, c))) # collabstr = collabstr + create_html_link(self.build_search_url(p='exactauthor:"' + authorname + '" exactauthor:"' + c + '"'), # {}, c + " (" + str(len(commpubs)) + ")",) # else: collabstr += 'None' # banner = self.tmpl_print_searchresultbox("" + _("Frequent co-authors:") + "", collabstr) # print frequently publishes in journals: #if (vtuples): # pubinfo = "" # for t in vtuples: # (journal, num) = t # pubinfo += create_html_link(self.build_search_url(p='exactauthor:"' + authorname + '" ' + \ # 'journal:"' + journal + '"'), # {}, journal + " ("+str(num)+")
") # banner = self.tmpl_print_searchresultbox("" + _("Frequently publishes in:") + "", pubinfo) # req.write(banner) def tmpl_detailed_record_references(self, recID, ln, content): """Returns the discussion page of a record Parameters: - 'recID' *int* - The ID of the printed record - 'ln' *string* - The language to display - 'content' *string* - The main content of the page """ # load the right message language out = '' if content is not None: out += content return out def tmpl_citesummary_title(self, ln=CFG_SITE_LANG): """HTML citesummary title and breadcrumbs A part of HCS format suite.""" return '' def tmpl_citesummary2_title(self, searchpattern, ln=CFG_SITE_LANG): """HTML citesummary title and breadcrumbs A part of HCS2 format suite.""" return '' def tmpl_citesummary_back_link(self, searchpattern, ln=CFG_SITE_LANG): """HTML back to citesummary link A part of HCS2 format suite.""" _ = gettext_set_language(ln) out = '' params = {'ln': 'en', 'p': quote(searchpattern), 'of': 'hcs'} msg = _('Back to citesummary') url = CFG_SITE_URL + '/search?' + \ '&'.join(['='.join(i) for i in iteritems(params)]) out += '

%(msg)s

' % {'url': url, 'msg': msg} return out def tmpl_citesummary_more_links(self, searchpattern, ln=CFG_SITE_LANG): _ = gettext_set_language(ln) out = '' msg = '

%(msg)s

' params = {'ln': ln, 'p': quote(searchpattern), 'of': 'hcs2'} url = CFG_SITE_URL + '/search?' + \ '&'.join(['='.join(i) for i in iteritems(params)]) out += msg % {'url': url, 'msg': _('Exclude self-citations')} return out def tmpl_citesummary_prologue(self, d_recids, collections, search_patterns, searchfield, citable_recids, total_count, ln=CFG_SITE_LANG): """HTML citesummary format, prologue. A part of HCS format suite.""" _ = gettext_set_language(ln) out = """""" % \ {'msg_title': _("Citation summary results"), } for coll, dummy in collections: out += '' % _(coll) out += '' out += """""" % \ {'msg_recs': _("Total number of papers analyzed:"), } for coll, colldef in collections: link_url = CFG_SITE_URL + '/search?p=' if search_patterns[coll]: p = search_patterns[coll] if searchfield: if " " in p: p = searchfield + ':"' + p + '"' else: p = searchfield + ':' + p link_url += quote(p) if colldef: link_url += '%20AND%20' + quote(colldef) link_text = self.tmpl_nice_number(len(d_recids[coll]), ln) out += '' % (link_url, link_text) out += '' return out def tmpl_citesummary_overview(self, collections, d_total_cites, d_avg_cites, ln=CFG_SITE_LANG): """HTML citesummary format, overview. A part of HCS format suite.""" _ = gettext_set_language(ln) out = """""" % \ {'msg_cites': _("Total number of citations:"), } for coll, dummy in collections: total_cites = d_total_cites[coll] out += '' % \ self.tmpl_nice_number(total_cites, ln) out += '' out += """""" % \ {'msg_avgcit': _("Average citations per paper:"), } for coll, dummy in collections: avg_cites = d_avg_cites[coll] out += '' % avg_cites out += '' return out def tmpl_citesummary_minus_self_cites(self, d_total_cites, d_avg_cites, ln=CFG_SITE_LANG): """HTML citesummary format, overview. A part of HCS format suite.""" _ = gettext_set_language(ln) msg = _("Total number of citations excluding self-citations") out = """' out += msg % (CFG_SITE_URL, '/help/citation-metrics#citesummary_self-cites') for total_cites in d_total_cites.values(): out += '' % \ self.tmpl_nice_number(total_cites, ln) out += '' msg = _("Average citations per paper excluding self-citations") out += """' out += msg % (CFG_SITE_URL, '/help/citation-metrics#citesummary_self-cites') for avg_cites in d_avg_cites.itervalues(): out += '' % avg_cites out += '' return out def tmpl_citesummary_footer(self): return '' def tmpl_citesummary_breakdown_header(self, ln=CFG_SITE_LANG): _ = gettext_set_language(ln) return """""" % \ {'msg_breakdown': _("Breakdown of papers by citations:"), } def tmpl_citesummary_breakdown_by_fame(self, d_cites, low, high, fame, l_colls, searchpatterns, searchfield, ln=CFG_SITE_LANG): """HTML citesummary format, breakdown by fame. A part of HCS format suite.""" _ = gettext_set_language(ln) out = """""" % \ {'fame': _(fame), } for coll, colldef in l_colls: link_url = CFG_SITE_URL + '/search?p=' if searchpatterns.get(coll, None): p = searchpatterns.get(coll, None) if searchfield: if " " in p: p = searchfield + ':"' + p + '"' else: p = searchfield + ':' + p link_url += quote(p) + '%20AND%20' if colldef: link_url += quote(colldef) + '%20AND%20' if low == 0 and high == 0: link_url += quote('cited:0') else: link_url += quote('cited:%i->%i' % (low, high)) link_text = self.tmpl_nice_number(d_cites[coll], ln) out += '' % (link_url, link_text) out += '' return out def tmpl_citesummary_h_index(self, collections, d_h_factors, ln=CFG_SITE_LANG): """HTML citesummary format, h factor output. 
A part of the HCS suite.""" _ = gettext_set_language(ln) out = "" % \ {'msg_metrics': _("Citation metrics"), 'help_url': CFG_SITE_URL + '/help/citation-metrics', } out += '' out += msg % (CFG_SITE_URL, '/help/citation-metrics#citesummary_h-index') for coll, dummy in collections: h_factors = d_h_factors[coll] out += '' % \ self.tmpl_nice_number(h_factors, ln) out += '' return out def tmpl_citesummary_epilogue(self, ln=CFG_SITE_LANG): """HTML citesummary format, epilogue. A part of HCS format suite.""" out = "
%(msg_title)s %s
%(msg_recs)s%s
%(msg_cites)s%s
%(msg_avgcit)s%.1f
%(msg_cites)s""" % \ {'msg_cites': msg, } # use ? help linking in the style of oai_repository_admin.py msg = ' [?]%s
%(msg_avgcit)s""" % \ {'msg_avgcit': msg, } # use ? help linking in the style of oai_repository_admin.py msg = ' [?]%.1f
%(msg_breakdown)s
%(fame)s%s
%(msg_metrics)s [?]
h-index' # use ? help linking in the style of oai_repository_admin.py msg = ' [?]%s
" return out def tmpl_unapi(self, formats, identifier=None): """ Provide a list of object format available from the unAPI service for the object identified by IDENTIFIER """ out = '\n' if identifier: out += '\n' % (identifier) else: out += "\n" for format_name, format_type in iteritems(formats): docs = '' if format_name == 'xn': docs = 'http://www.nlm.nih.gov/databases/dtd/' format_type = 'application/xml' format_name = 'nlm' elif format_name == 'xm': docs = 'http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd' format_type = 'application/xml' format_name = 'marcxml' elif format_name == 'xr': format_type = 'application/rss+xml' docs = 'http://www.rssboard.org/rss-2-0/' elif format_name == 'xw': format_type = 'application/xml' docs = 'http://www.refworks.com/RefWorks/help/RefWorks_Tagged_Format.htm' elif format_name == 'xoaidc': format_type = 'application/xml' docs = 'http://www.openarchives.org/OAI/2.0/oai_dc.xsd' elif format_name == 'xe': format_type = 'application/xml' docs = 'http://www.endnote.com/support/' format_name = 'endnote' elif format_name == 'xd': format_type = 'application/xml' docs = 'http://dublincore.org/schemas/' format_name = 'dc' elif format_name == 'xo': format_type = 'application/xml' docs = 'http://www.loc.gov/standards/mods/v3/mods-3-3.xsd' format_name = 'mods' if docs: out += '\n' % (xml_escape(format_name), xml_escape(format_type), xml_escape(docs)) else: out += '\n' % (xml_escape(format_name), xml_escape(format_type)) out += "" return out diff --git a/invenio/legacy/websearch/webcoll.py b/invenio/legacy/websearch/webcoll.py index 55d69406b..9216b2822 100644 --- a/invenio/legacy/websearch/webcoll.py +++ b/invenio/legacy/websearch/webcoll.py @@ -1,1142 +1,1177 @@ # -*- coding: utf-8 -*- ## This file is part of Invenio. ## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
from __future__ import print_function

"""Create Invenio collection cache."""

__revision__ = "$Id$"

import calendar
import copy
import sys
import cgi
import re
import os
import string
import time

from six.moves import cPickle

from invenio.config import \
     CFG_CERN_SITE, \
     CFG_WEBSEARCH_INSTANT_BROWSE, \
     CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS, \
     CFG_WEBSEARCH_I18N_LATEST_ADDITIONS, \
     CFG_CACHEDIR, \
     CFG_SITE_LANG, \
     CFG_SITE_NAME, \
     CFG_SITE_LANGS, \
     CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES, \
     CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, \
     CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS
from invenio.base.i18n import gettext_set_language, language_list_long
from invenio.legacy.search_engine import search_pattern_parenthesised, get_creation_date, get_field_i18nname, collection_restricted_p, sort_records, EM_REPOSITORY
from invenio.legacy.dbquery import run_sql, Error, get_table_update_time
from invenio.legacy.bibrank.record_sorter import get_bibrank_methods
from invenio.utils.date import convert_datestruct_to_dategui, strftime
from invenio.modules.formatter import format_record
from invenio.utils.shell import mymkdir
from intbitset import intbitset
from invenio.legacy.websearch_external_collections import \
     external_collection_load_states, \
     dico_collection_external_searches, \
     external_collection_sort_engine_by_name
from invenio.legacy.bibsched.bibtask import task_init, task_get_option, task_set_option, \
     write_message, task_has_option, task_update_progress, \
     task_sleep_now_if_required
import invenio.legacy.template
websearch_templates = invenio.legacy.template.load('websearch')

from invenio.legacy.websearch_external_collections.searcher import external_collections_dictionary
from invenio.legacy.websearch_external_collections.config import CFG_EXTERNAL_COLLECTION_TIMEOUT
from invenio.legacy.websearch_external_collections.config import CFG_HOSTED_COLLECTION_TIMEOUT_NBRECS

from invenio.base.signals import webcoll_after_webpage_cache_update, \
    webcoll_after_reclist_cache_update

## global vars
COLLECTION_HOUSE = {} # will hold the collections treated in this run of the program; a dict of {collname1: collobject1, collname2: collobject2, ...}

# CFG_CACHE_LAST_UPDATED_TIMESTAMP_TOLERANCE -- cache timestamp
# tolerance (in seconds), to account for the fact that an admin might
# accidentally happen to edit the collection definitions at exactly
# the same second when some webcoll process was about to be started.
# In order to be safe, let's put an exaggerated timestamp tolerance
# value such as 20 seconds:
CFG_CACHE_LAST_UPDATED_TIMESTAMP_TOLERANCE = 20

# CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE -- location of the cache
# timestamp file:
CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE = "%s/collections/last_updated" % CFG_CACHEDIR

# CFG_CACHE_LAST_FAST_UPDATED_TIMESTAMP_FILE -- location of the cache
# timestamp file used when running webcoll in fast mode:
CFG_CACHE_LAST_FAST_UPDATED_TIMESTAMP_FILE = "%s/collections/last_fast_updated" % CFG_CACHEDIR

def get_collection(colname):
    """Return collection object from the collection house for given colname.
       If it does not exist, then create it."""
    if colname not in COLLECTION_HOUSE:
        colobject = Collection(colname)
        COLLECTION_HOUSE[colname] = colobject
    return COLLECTION_HOUSE[colname]

## auxiliary functions:
def is_selected(var, fld):
    "Checks if the two are equal and, if yes, returns ' selected'.  Useful for select boxes."
    if var == fld:
        return ' selected="selected"'
    else:
        return ""

def get_field(recID, tag):
    "Gets list of field 'tag' for the record with 'recID' system number."
out = [] digit = tag[0:2] bx = "bib%sx" % digit bibx = "bibrec_bib%sx" % digit query = "SELECT bx.value FROM %s AS bx, %s AS bibx WHERE bibx.id_bibrec='%s' AND bx.id=bibx.id_bibxxx AND bx.tag='%s'" \ % (bx, bibx, recID, tag) res = run_sql(query) for row in res: out.append(row[0]) return out def check_nbrecs_for_all_external_collections(): """Check if any of the external collections have changed their total number of records, aka nbrecs. Return True if any of the total numbers of records have changed and False if they're all the same.""" res = run_sql("SELECT name FROM collection WHERE dbquery LIKE 'hostedcollection:%';") for row in res: coll_name = row[0] if (get_collection(coll_name)).check_nbrecs_for_external_collection(): return True return False class Collection: "Holds the information on collections (id,name,dbquery)." def __init__(self, name=""): "Creates collection instance by querying the DB configuration database about 'name'." self.calculate_reclist_run_already = 0 # to speed things up without much refactoring self.update_reclist_run_already = 0 # to speed things up without much refactoring self.reclist_updated_since_start = 0 # to check if webpage cache need rebuilding self.reclist_with_nonpublic_subcolls = intbitset() - # used to store the temporary result of the calculation of nbrecs of an external collection - self.nbrecs_tmp = None + + # temporary counters for the number of records in hosted collections + self.nbrecs_tmp = None # number of records in a hosted collection + self.nbrecs_from_hosted_collections = 0 # total number of records from + # descendant hosted collections if not name: self.name = CFG_SITE_NAME # by default we are working on the home page self.id = 1 self.dbquery = None self.nbrecs = None self.reclist = intbitset() self.old_reclist = intbitset() self.reclist_updated_since_start = 1 else: self.name = name try: res = run_sql("""SELECT id,name,dbquery,nbrecs,reclist FROM collection WHERE name=%s""", (name,)) if res: self.id = res[0][0] self.name = res[0][1] self.dbquery = res[0][2] self.nbrecs = res[0][3] try: self.reclist = intbitset(res[0][4]) except: self.reclist = intbitset() self.reclist_updated_since_start = 1 else: # collection does not exist! self.id = None self.dbquery = None self.nbrecs = None self.reclist = intbitset() self.reclist_updated_since_start = 1 self.old_reclist = intbitset(self.reclist) except Error as e: print("Error %d: %s" % (e.args[0], e.args[1])) sys.exit(1) def get_example_search_queries(self): """Returns list of sample search queries for this collection. """ res = run_sql("""SELECT example.body FROM example LEFT JOIN collection_example on example.id=collection_example.id_example WHERE collection_example.id_collection=%s ORDER BY collection_example.score""", (self.id,)) return [query[0] for query in res] def get_name(self, ln=CFG_SITE_LANG, name_type="ln", prolog="", epilog="", prolog_suffix=" ", epilog_suffix=""): """Return nicely formatted collection name for language LN. The NAME_TYPE may be 'ln' (=long name), 'sn' (=short name), etc.""" out = prolog i18name = "" res = run_sql("SELECT value FROM collectionname WHERE id_collection=%s AND ln=%s AND type=%s", (self.id, ln, name_type)) try: i18name += res[0][0] except IndexError: pass if i18name: out += i18name else: out += self.name out += epilog return out def get_collectionbox_name(self, ln=CFG_SITE_LANG, box_type="r"): """ Return collection-specific labelling of 'Focus on' (regular collection), 'Narrow by' (virtual collection) and 'Latest addition' boxes. 
        If translation for given language does not exist, use label
        for CFG_SITE_LANG.  If no custom label is defined for
        CFG_SITE_LANG, return the default label for the box.

        @param ln: the language of the label
        @param box_type: can be 'r' (=Narrow by), 'v' (=Focus on), 'l' (=Latest additions)
        """
        i18name = ""
        res = run_sql("SELECT value FROM collectionboxname WHERE id_collection=%s AND ln=%s AND type=%s", (self.id, ln, box_type))
        try:
            i18name = res[0][0]
        except IndexError:
            res = run_sql("SELECT value FROM collectionboxname WHERE id_collection=%s AND ln=%s AND type=%s", (self.id, CFG_SITE_LANG, box_type))
            try:
                i18name = res[0][0]
            except IndexError:
                pass

        if not i18name:
            # load the right message language
            _ = gettext_set_language(ln)
            if box_type == "v":
                i18name = _('Focus on:')
            elif box_type == "r":
                i18name = _('Narrow by collection:')
            elif box_type == "l":
                i18name = _('Latest additions:')

        return i18name

    def get_ancestors(self):
        "Returns list of ancestors of the current collection."
        ancestors = []
        ancestors_ids = intbitset()
        id_son = self.id
        while 1:
            query = "SELECT cc.id_dad,c.name FROM collection_collection AS cc, collection AS c "\
                    "WHERE cc.id_son=%d AND c.id=cc.id_dad" % int(id_son)
            res = run_sql(query, None, 1)
            if res:
                col_ancestor = get_collection(res[0][1])
                # looking for loops
                if self.id in ancestors_ids:
                    write_message("Loop found in collection %s" % self.name, stream=sys.stderr)
                    raise OverflowError("Loop found in collection %s" % self.name)
                else:
                    ancestors.append(col_ancestor)
                    ancestors_ids.add(col_ancestor.id)
                    id_son = res[0][0]
            else:
                break
        ancestors.reverse()
        return ancestors

    def restricted_p(self):
        """Predicate to test if the collection is restricted or not.  Return
        1 if the `restricted' column of the collection table is set for this
        collection (typically to an Apache group); return None if the
        collection is public."""
        if collection_restricted_p(self.name):
            return 1
        return None

    def get_sons(self, type='r'):
        "Returns list of direct sons of type 'type' for the current collection."
        sons = []
        id_dad = self.id
        query = "SELECT cc.id_son,c.name FROM collection_collection AS cc, collection AS c "\
                "WHERE cc.id_dad=%d AND cc.type='%s' AND c.id=cc.id_son ORDER BY score DESC, c.name ASC" % (int(id_dad), type)
        res = run_sql(query)
        for row in res:
            sons.append(get_collection(row[1]))
        return sons

    def get_descendants(self, type='r'):
        "Returns list of all descendants of type 'type' for the current collection."
        descendants = []
        descendant_ids = intbitset()
        id_dad = self.id
        query = "SELECT cc.id_son,c.name FROM collection_collection AS cc, collection AS c "\
                "WHERE cc.id_dad=%d AND cc.type='%s' AND c.id=cc.id_son ORDER BY score DESC" % (int(id_dad), type)
        res = run_sql(query)
        for row in res:
            col_desc = get_collection(row[1])
            # looking for loops
            if self.id in descendant_ids:
                write_message("Loop found in collection %s" % self.name, stream=sys.stderr)
                raise OverflowError("Loop found in collection %s" % self.name)
            else:
                descendants.append(col_desc)
                descendant_ids.add(col_desc.id)
                tmp_descendants = col_desc.get_descendants()
                for descendant in tmp_descendants:
                    descendant_ids.add(descendant.id)
                descendants += tmp_descendants
        return descendants

    def write_cache_file(self, filename='', filebody={}):
        "Write a file inside collection cache."
# open file: dirname = "%s/collections" % (CFG_CACHEDIR) mymkdir(dirname) fullfilename = dirname + "/%s.html" % filename try: os.umask(0o022) f = open(fullfilename, "wb") except IOError as v: try: (code, message) = v except: code = 0 message = v print("I/O Error: " + str(message) + " (" + str(code) + ")") sys.exit(1) # print user info: write_message("... creating %s" % fullfilename, verbose=6) sys.stdout.flush() # print page body: cPickle.dump(filebody, f, cPickle.HIGHEST_PROTOCOL) # close file: f.close() def update_webpage_cache(self, lang): """Create collection page header, navtrail, body (including left and right stripes) and footer, and call write_cache_file() afterwards to update the collection webpage cache.""" return {} ## webpage cache update is not really needed in ## Invenio-on-Flask, so let's return quickly here ## for great speed-up benefit ## precalculate latest additions for non-aggregate ## collections (the info is ln and as independent) if self.dbquery: if CFG_WEBSEARCH_I18N_LATEST_ADDITIONS: self.create_latest_additions_info(ln=lang) else: self.create_latest_additions_info() # load the right message language _ = gettext_set_language(lang) # create dictionary with data cache = {"te_portalbox" : self.create_portalbox(lang, 'te'), "np_portalbox" : self.create_portalbox(lang, 'np'), "ne_portalbox" : self.create_portalbox(lang, 'ne'), "tp_portalbox" : self.create_portalbox(lang, "tp"), "lt_portalbox" : self.create_portalbox(lang, "lt"), "rt_portalbox" : self.create_portalbox(lang, "rt"), "last_updated" : convert_datestruct_to_dategui(time.localtime(), ln=lang)} for aas in CFG_WEBSEARCH_ENABLED_SEARCH_INTERFACES: # do light, simple and advanced search pages: cache["navtrail_%s" % aas] = self.create_navtrail_links(aas, lang) cache["searchfor_%s" % aas] = self.create_searchfor(aas, lang) cache["narrowsearch_%s" % aas] = self.create_narrowsearch(aas, lang, 'r') cache["focuson_%s" % aas] = self.create_narrowsearch(aas, lang, "v")+ \ self.create_external_collections_box(lang) cache["instantbrowse_%s" % aas] = self.create_instant_browse(aas=aas, ln=lang) # write cache file self.write_cache_file("%s-ln=%s"%(self.name, lang), cache) return cache def create_navtrail_links(self, aas=CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, ln=CFG_SITE_LANG): """Creates navigation trail links, i.e. links to collection ancestors (except Home collection). If aas==1, then links to Advanced Search interfaces; otherwise Simple Search. """ dads = [] for dad in self.get_ancestors(): if dad.name != CFG_SITE_NAME: # exclude Home collection dads.append((dad.name, dad.get_name(ln))) return websearch_templates.tmpl_navtrail_links( aas=aas, ln=ln, dads=dads) def create_portalbox(self, lang=CFG_SITE_LANG, position="rt"): """Creates portalboxes of language CFG_SITE_LANG of the position POSITION by consulting DB configuration database. 
The position may be: 'lt'='left top', 'rt'='right top', etc.""" out = "" query = "SELECT p.title,p.body FROM portalbox AS p, collection_portalbox AS cp "\ " WHERE cp.id_collection=%d AND p.id=cp.id_portalbox AND cp.ln='%s' AND cp.position='%s' "\ " ORDER BY cp.score DESC" % (self.id, lang, position) res = run_sql(query) for row in res: title, body = row[0], row[1] if title: out += websearch_templates.tmpl_portalbox(title = title, body = body) else: # no title specified, so print body ``as is'' only: out += body return out def create_narrowsearch(self, aas=CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, ln=CFG_SITE_LANG, type="r"): """Creates list of collection descendants of type 'type' under title 'title'. If aas==1, then links to Advanced Search interfaces; otherwise Simple Search. Suitable for 'Narrow search' and 'Focus on' boxes.""" # get list of sons and analyse it sons = self.get_sons(type) if not sons: return '' # get descendents descendants = self.get_descendants(type) grandsons = [] if CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS: # load grandsons for each son for son in sons: grandsons.append(son.get_sons()) # return "" return websearch_templates.tmpl_narrowsearch( aas = aas, ln = ln, type = type, father = self, has_grandchildren = len(descendants)>len(sons), sons = sons, display_grandsons = CFG_WEBSEARCH_NARROW_SEARCH_SHOW_GRANDSONS, grandsons = grandsons ) def create_external_collections_box(self, ln=CFG_SITE_LANG): external_collection_load_states() if self.id not in dico_collection_external_searches: return "" engines_list = external_collection_sort_engine_by_name(dico_collection_external_searches[self.id]) return websearch_templates.tmpl_searchalso(ln, engines_list, self.id) def create_latest_additions_info(self, rg=CFG_WEBSEARCH_INSTANT_BROWSE, ln=CFG_SITE_LANG): """ Create info about latest additions that will be used for create_instant_browse() later. """ self.latest_additions_info = [] if self.nbrecs and self.reclist: # firstly, get last 'rg' records: recIDs = list(self.reclist) of = 'hb' # CERN hack begins: tweak latest additions for selected collections: if CFG_CERN_SITE: # alter recIDs list for some CERN collections: this_year = time.strftime("%Y", time.localtime()) if self.name in ['CERN Yellow Reports','Videos']: last_year = str(int(this_year) - 1) # detect recIDs only from this and past year: recIDs = list(self.reclist & \ search_pattern_parenthesised(p='year:%s or year:%s' % \ (this_year, last_year))) elif self.name in ['VideosXXX']: # detect recIDs only from this year: recIDs = list(self.reclist & \ search_pattern_parenthesised(p='year:%s' % this_year)) elif self.name == 'CMS Physics Analysis Summaries' and \ 1281585 in self.reclist: # REALLY, REALLY temporary hack recIDs = list(self.reclist) recIDs.remove(1281585) # apply special filters: if self.name in ['Videos']: # select only videos with movies: recIDs = list(intbitset(recIDs) & \ search_pattern_parenthesised(p='collection:"PUBLVIDEOMOVIE"')) of = 'hvp' # sort some CERN collections specially: if self.name in ['Videos', 'Video Clips', 'Video Movies', 'Video News', 'Video Rushes', 'Webcast', 'ATLAS Videos', 'Restricted Video Movies', 'Restricted Video Rushes', 'LHC First Beam Videos', 'CERN openlab Videos']: recIDs = sort_records(None, recIDs, '269__c') elif self.name in ['LHCb Talks']: recIDs = sort_records(None, recIDs, 'reportnumber') # CERN hack ends. 
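# NB (clarifying comment): list(self.reclist) yields record IDs in
# ascending order, so the most recently added records normally sit at the
# end of the list; the loop below walks the last `rg' entries backwards,
# newest first -- e.g. for rg=10 and 1000 records it visits recIDs[999],
# recIDs[998], ..., recIDs[990].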
total = len(recIDs) to_display = min(rg, total) for idx in range(total-1, total-to_display-1, -1): recid = recIDs[idx] self.latest_additions_info.append({'id': recid, 'format': format_record(recid, of, ln=ln), 'date': get_creation_date(recid, fmt="%Y-%m-%d<br />
%H:%i")}) return def create_instant_browse(self, rg=CFG_WEBSEARCH_INSTANT_BROWSE, aas=CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, ln=CFG_SITE_LANG): "Searches database and produces list of last 'rg' records." if self.restricted_p(): return websearch_templates.tmpl_box_restricted_content(ln = ln) if str(self.dbquery).startswith("hostedcollection:"): return websearch_templates.tmpl_box_hosted_collection(ln = ln) if rg == 0: # do not show latest additions box return "" # CERN hack: do not display latest additions for some CERN collections: if CFG_CERN_SITE and self.name in ['Periodicals', 'Electronic Journals', 'Press Office Photo Selection', 'Press Office Video Selection']: return "" try: self.latest_additions_info latest_additions_info_p = True except: latest_additions_info_p = False if latest_additions_info_p: passIDs = [] for idx in range(0, min(len(self.latest_additions_info), rg)): # CERN hack: display the records in a grid layout, so do not show the related links if CFG_CERN_SITE and self.name in ['Videos']: passIDs.append({'id': self.latest_additions_info[idx]['id'], 'body': self.latest_additions_info[idx]['format'], 'date': self.latest_additions_info[idx]['date']}) else: passIDs.append({'id': self.latest_additions_info[idx]['id'], 'body': self.latest_additions_info[idx]['format'] + \ websearch_templates.tmpl_record_links(recid=self.latest_additions_info[idx]['id'], rm='citation', ln=ln), 'date': self.latest_additions_info[idx]['date']}) if self.nbrecs > rg: url = websearch_templates.build_search_url( cc=self.name, jrec=rg+1, ln=ln, aas=aas) else: url = "" # CERN hack: display the records in a grid layout if CFG_CERN_SITE and self.name in ['Videos']: return websearch_templates.tmpl_instant_browse( aas=aas, ln=ln, recids=passIDs, more_link=url, grid_layout=True, father=self) return websearch_templates.tmpl_instant_browse( aas=aas, ln=ln, recids=passIDs, more_link=url, father=self) return websearch_templates.tmpl_box_no_records(ln=ln) def create_searchoptions(self): "Produces 'Search options' portal box." 
box = "" query = """SELECT DISTINCT(cff.id_field),f.code,f.name FROM collection_field_fieldvalue AS cff, field AS f WHERE cff.id_collection=%d AND cff.id_fieldvalue IS NOT NULL AND cff.id_field=f.id ORDER BY cff.score DESC""" % self.id res = run_sql(query) if res: for row in res: field_id = row[0] field_code = row[1] field_name = row[2] query_bis = """SELECT fv.value,fv.name FROM fieldvalue AS fv, collection_field_fieldvalue AS cff WHERE cff.id_collection=%d AND cff.type='seo' AND cff.id_field=%d AND fv.id=cff.id_fieldvalue ORDER BY cff.score_fieldvalue DESC, cff.score DESC, fv.name ASC""" % (self.id, field_id) res_bis = run_sql(query_bis) if res_bis: values = [{'value' : '', 'text' : 'any' + ' ' + field_name}] # FIXME: internationalisation of "any" for row_bis in res_bis: values.append({'value' : cgi.escape(row_bis[0], 1), 'text' : row_bis[1]}) box += websearch_templates.tmpl_select( fieldname = field_code, values = values ) return box def create_sortoptions(self, ln=CFG_SITE_LANG): """Produces 'Sort options' portal box.""" # load the right message language _ = gettext_set_language(ln) box = "" query = """SELECT f.code,f.name FROM field AS f, collection_field_fieldvalue AS cff WHERE id_collection=%d AND cff.type='soo' AND cff.id_field=f.id ORDER BY cff.score DESC, f.name ASC""" % self.id values = [{'value' : '', 'text': "- %s -" % _("latest first")}] res = run_sql(query) if res: for row in res: values.append({'value' : row[0], 'text': get_field_i18nname(row[1], ln)}) else: for tmp in ('title', 'author', 'report number', 'year'): values.append({'value' : tmp.replace(' ', ''), 'text' : get_field_i18nname(tmp, ln)}) box = websearch_templates.tmpl_select( fieldname = 'sf', css_class = 'address', values = values ) box += websearch_templates.tmpl_select( fieldname = 'so', css_class = 'address', values = [ {'value' : 'a' , 'text' : _("asc.")}, {'value' : 'd' , 'text' : _("desc.")} ] ) return box def create_rankoptions(self, ln=CFG_SITE_LANG): "Produces 'Rank options' portal box." # load the right message language _ = gettext_set_language(ln) values = [{'value' : '', 'text': "- %s %s -" % (string.lower(_("OR")), _("rank by"))}] for (code, name) in get_bibrank_methods(self.id, ln): values.append({'value' : code, 'text': name}) box = websearch_templates.tmpl_select( fieldname = 'rm', css_class = 'address', values = values ) return box def create_displayoptions(self, ln=CFG_SITE_LANG): "Produces 'Display options' portal box." # load the right message language _ = gettext_set_language(ln) values = [] for i in ['10', '25', '50', '100', '250', '500']: values.append({'value' : i, 'text' : i + ' ' + _("results")}) box = websearch_templates.tmpl_select( fieldname = 'rg', selected = str(CFG_WEBSEARCH_DEF_RECORDS_IN_GROUPS), css_class = 'address', values = values ) if self.get_sons(): box += websearch_templates.tmpl_select( fieldname = 'sc', css_class = 'address', values = [ {'value' : '1' , 'text' : _("split by collection")}, {'value' : '0' , 'text' : _("single list")} ] ) return box def create_formatoptions(self, ln=CFG_SITE_LANG): "Produces 'Output format options' portal box." 
# load the right message language _ = gettext_set_language(ln) box = "" values = [] query = """SELECT f.code,f.name FROM format AS f, collection_format AS cf WHERE cf.id_collection=%d AND cf.id_format=f.id AND f.visibility='1' ORDER BY cf.score DESC, f.name ASC""" % self.id res = run_sql(query) if res: for row in res: values.append({'value' : row[0], 'text': row[1]}) else: values.append({'value' : 'hb', 'text' : "HTML %s" % _("brief")}) box = websearch_templates.tmpl_select( fieldname = 'of', css_class = 'address', values = values ) return box def create_searchwithin_selection_box(self, fieldname='f', value='', ln='en'): """Produces 'search within' selection box for the current collection.""" # get values query = """SELECT f.code,f.name FROM field AS f, collection_field_fieldvalue AS cff WHERE cff.type='sew' AND cff.id_collection=%d AND cff.id_field=f.id ORDER BY cff.score DESC, f.name ASC""" % self.id res = run_sql(query) values = [{'value' : '', 'text' : get_field_i18nname("any field", ln)}] if res: for row in res: values.append({'value' : row[0], 'text' : get_field_i18nname(row[1], ln)}) else: if CFG_CERN_SITE: for tmp in ['title', 'author', 'abstract', 'report number', 'year']: values.append({'value' : tmp.replace(' ', ''), 'text' : get_field_i18nname(tmp, ln)}) else: for tmp in ['title', 'author', 'abstract', 'keyword', 'report number', 'journal', 'year', 'fulltext', 'reference']: values.append({'value' : tmp.replace(' ', ''), 'text' : get_field_i18nname(tmp, ln)}) return websearch_templates.tmpl_searchwithin_select( fieldname = fieldname, ln = ln, selected = value, values = values ) def create_searchexample(self): "Produces search example(s) for the current collection." out = "$collSearchExamples = getSearchExample(%d, $se);" % self.id return out def create_searchfor(self, aas=CFG_WEBSEARCH_DEFAULT_SEARCH_INTERFACE, ln=CFG_SITE_LANG): "Produces either Simple or Advanced 'Search for' box for the current collection." if aas == 1: return self.create_searchfor_advanced(ln) elif aas == 0: return self.create_searchfor_simple(ln) else: return self.create_searchfor_light(ln) def create_searchfor_light(self, ln=CFG_SITE_LANG): "Produces light 'Search for' box for the current collection." return websearch_templates.tmpl_searchfor_light( ln=ln, collection_id = self.name, collection_name=self.get_name(ln=ln), record_count=self.nbrecs, example_search_queries=self.get_example_search_queries(), ) def create_searchfor_simple(self, ln=CFG_SITE_LANG): "Produces simple 'Search for' box for the current collection." return websearch_templates.tmpl_searchfor_simple( ln=ln, collection_id = self.name, collection_name=self.get_name(ln=ln), record_count=self.nbrecs, middle_option = self.create_searchwithin_selection_box(ln=ln), ) def create_searchfor_advanced(self, ln=CFG_SITE_LANG): "Produces advanced 'Search for' box for the current collection." 
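# NB (clarifying comment): the advanced box bundles three "search within"
# selectors (form fields f1, f2 and f3) together with the search, sort,
# rank, display and output format option boxes defined above.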
return websearch_templates.tmpl_searchfor_advanced( ln = ln, collection_id = self.name, collection_name=self.get_name(ln=ln), record_count=self.nbrecs, middle_option_1 = self.create_searchwithin_selection_box('f1', ln=ln), middle_option_2 = self.create_searchwithin_selection_box('f2', ln=ln), middle_option_3 = self.create_searchwithin_selection_box('f3', ln=ln), searchoptions = self.create_searchoptions(), sortoptions = self.create_sortoptions(ln), rankoptions = self.create_rankoptions(ln), displayoptions = self.create_displayoptions(ln), formatoptions = self.create_formatoptions(ln) ) def calculate_reclist(self): - """Calculate, set and return the (reclist, reclist_with_nonpublic_subcolls) tuple for given collection.""" - if self.calculate_reclist_run_already or str(self.dbquery).startswith("hostedcollection:"): - # do we have to recalculate? - return (self.reclist, self.reclist_with_nonpublic_subcolls) + """ + Calculate, set and return the (reclist, + reclist_with_nonpublic_subcolls, + nbrecs_from_hosted_collections) + tuple for the given collection.""" + + if str(self.dbquery).startswith("hostedcollection:"): + # we don't normally use this function to calculate the reclist + # for hosted collections. In case we do, recursively for a regular + # ancestor collection, then quickly return the object attributes. + return (self.reclist, + self.reclist_with_nonpublic_subcolls, + self.nbrecs) + + if self.calculate_reclist_run_already: + # do we really have to recalculate? If not, + # then return the object attributes + return (self.reclist, + self.reclist_with_nonpublic_subcolls, + self.nbrecs_from_hosted_collections) + write_message("... calculating reclist of %s" % self.name, verbose=6) reclist = intbitset() # will hold results for public sons only; good for storing into DB reclist_with_nonpublic_subcolls = intbitset() # will hold results for both public and nonpublic sons; good for deducing total # number of documents + nbrecs_from_hosted_collections = 0 # will hold the total number of records from descendant hosted collections + if not self.dbquery: # A - collection does not have dbquery, so query recursively all its sons # that are either non-restricted or that have the same restriction rules for coll in self.get_sons(): - coll_reclist, coll_reclist_with_nonpublic_subcolls = coll.calculate_reclist() + coll_reclist,\ + coll_reclist_with_nonpublic_subcolls,\ + coll_nbrecs_from_hosted_collection = coll.calculate_reclist() + if ((coll.restricted_p() is None) or (coll.restricted_p() == self.restricted_p())): # add this reclist ``for real'' only if it is public reclist.union_update(coll_reclist) reclist_with_nonpublic_subcolls.union_update(coll_reclist_with_nonpublic_subcolls) + + # increment the total number of records from descendant hosted collections + nbrecs_from_hosted_collections += coll_nbrecs_from_hosted_collection + else: # B - collection does have dbquery, so compute it: # (note: explicitly remove DELETED records) if CFG_CERN_SITE: reclist = search_pattern_parenthesised(None, self.dbquery + \ ' -980__:"DELETED" -980__:"DUMMY"', ap=-9) #ap=-9 for allow queries containing hidden tags else: reclist = search_pattern_parenthesised(None, self.dbquery + ' -980__:"DELETED"', ap=-9) #ap=-9 allow queries containing hidden tags reclist_with_nonpublic_subcolls = copy.deepcopy(reclist) + # store the results: - self.nbrecs = len(reclist_with_nonpublic_subcolls) + self.nbrecs_from_hosted_collections = nbrecs_from_hosted_collections + self.nbrecs = len(reclist_with_nonpublic_subcolls) + \ + 
nbrecs_from_hosted_collections self.reclist = reclist self.reclist_with_nonpublic_subcolls = reclist_with_nonpublic_subcolls # last but not least, update the speed-up flag: self.calculate_reclist_run_already = 1 - # return the two sets: - return (self.reclist, self.reclist_with_nonpublic_subcolls) + # return the two sets, as well as + # the total number of records from descendant hosted collections: + return (self.reclist, + self.reclist_with_nonpublic_subcolls, + self.nbrecs_from_hosted_collections) def calculate_nbrecs_for_external_collection(self, timeout=CFG_EXTERNAL_COLLECTION_TIMEOUT): """Calculate the total number of records, aka nbrecs, for given external collection.""" #if self.calculate_reclist_run_already: # do we have to recalculate? #return self.nbrecs #write_message("... calculating nbrecs of external collection %s" % self.name, verbose=6) if self.name in external_collections_dictionary: engine = external_collections_dictionary[self.name] if engine.parser: self.nbrecs_tmp = engine.parser.parse_nbrecs(timeout) if self.nbrecs_tmp >= 0: return self.nbrecs_tmp # the parse_nbrecs() function returns negative values for some specific cases # maybe we can handle these specific cases, some warnings or something # for now the total number of records remains silently the same else: return self.nbrecs else: write_message("External collection %s does not have a parser!" % self.name, verbose=6) else: write_message("External collection %s not found!" % self.name, verbose=6) return 0 # last but not least, update the speed-up flag: #self.calculate_reclist_run_already = 1 def check_nbrecs_for_external_collection(self): """Check if the external collection has changed its total number of records, aka nbrecs. Returns True if the total number of records has changed and False if it's the same.""" write_message("*** self.nbrecs = %s / self.cal...ion = %s ***" % (str(self.nbrecs), str(self.calculate_nbrecs_for_external_collection())), verbose=6) write_message("*** self.nbrecs != self.cal...ion = %s ***" % (str(self.nbrecs != self.calculate_nbrecs_for_external_collection()),), verbose=6) return self.nbrecs != self.calculate_nbrecs_for_external_collection(CFG_HOSTED_COLLECTION_TIMEOUT_NBRECS) def set_nbrecs_for_external_collection(self): """Set this external collection's total number of records, aka nbrecs.""" if self.calculate_reclist_run_already: # do we have to recalculate? return write_message("... calculating nbrecs of external collection %s" % self.name, verbose=6) if self.nbrecs_tmp: self.nbrecs = self.nbrecs_tmp else: self.nbrecs = self.calculate_nbrecs_for_external_collection(CFG_HOSTED_COLLECTION_TIMEOUT_NBRECS) # last but not least, update the speed-up flag: self.calculate_reclist_run_already = 1 def update_reclist(self): "Update the record universe for given collection; nbrecs, reclist of the collection table." if self.update_reclist_run_already: # do we have to reupdate? return 0 write_message("... updating reclist of %s (%s recs)" % (self.name, self.nbrecs), verbose=6) sys.stdout.flush() try: ## In principle we could skip this update if old_reclist==reclist ## however we just update it here in case of race-conditions. run_sql("UPDATE collection SET nbrecs=%s, reclist=%s WHERE id=%s", (self.nbrecs, self.reclist.fastdump(), self.id)) if self.old_reclist != self.reclist: self.reclist_updated_since_start = 1 else: write_message("... no changes in reclist detected", verbose=6) except Error as e: print("Database Query Error %d: %s."
% (e.args[0], e.args[1])) sys.exit(1) # last but not least, update the speed-up flag: self.update_reclist_run_already = 1 return 0 def perform_display_collection(colID, colname, aas, ln, em, show_help_boxes): """Returns the data needed to display a collection page The arguments are as follows: colID - id of the collection to display colname - name of the collection to display aas - 0 if simple search, 1 if advanced search ln - language of the page em - code to display just part of the page show_help_boxes - whether to show the help boxes or not""" # check and update cache if necessary try: cachedfile = open("%s/collections/%s-ln=%s.html" % \ (CFG_CACHEDIR, colname, ln), "rb") data = cPickle.load(cachedfile) cachedfile.close() except: data = get_collection(colname).update_webpage_cache(ln) # check em value to return just part of the page if em != "": if EM_REPOSITORY["search_box"] not in em: data["searchfor_%s" % aas] = "" if EM_REPOSITORY["see_also_box"] not in em: data["focuson_%s" % aas] = "" if EM_REPOSITORY["all_portalboxes"] not in em: if EM_REPOSITORY["te_portalbox"] not in em: data["te_portalbox"] = "" if EM_REPOSITORY["np_portalbox"] not in em: data["np_portalbox"] = "" if EM_REPOSITORY["ne_portalbox"] not in em: data["ne_portalbox"] = "" if EM_REPOSITORY["tp_portalbox"] not in em: data["tp_portalbox"] = "" if EM_REPOSITORY["lt_portalbox"] not in em: data["lt_portalbox"] = "" if EM_REPOSITORY["rt_portalbox"] not in em: data["rt_portalbox"] = "" c_body = websearch_templates.tmpl_webcoll_body(ln, colID, data.get("te_portalbox", ""), data.get("searchfor_%s"%aas,''), data.get("np_portalbox", ''), data.get("narrowsearch_%s"%aas, ''), data.get("focuson_%s"%aas, ''), data.get("instantbrowse_%s"%aas, ''), data.get("ne_portalbox", ''), em=="" or EM_REPOSITORY["body"] in em) if show_help_boxes <= 0: data["rt_portalbox"] = "" return (c_body, data.get("navtrail_%s"%aas, ''), data.get("lt_portalbox", ''), data.get("rt_portalbox", ''), data.get("tp_portalbox", ''), data.get("te_portalbox", ''), data.get("last_updated", '')) def get_datetime(var, format_string="%Y-%m-%d %H:%M:%S"): """Returns a date string according to the format string. It can handle normal date strings and shifts with respect to now.""" date = time.time() shift_re = re.compile("([-\+]{0,1})([\d]+)([dhms])") factors = {"d":24*3600, "h":3600, "m":60, "s":1} m = shift_re.match(var) if m: sign = m.groups()[0] == "-" and -1 or 1 factor = factors[m.groups()[2]] value = float(m.groups()[1]) date = time.localtime(date + sign * factor * value) date = strftime(format_string, date) else: date = time.strptime(var, format_string) date = strftime(format_string, date) return date def get_current_time_timestamp(): """Return timestamp corresponding to the current time.""" return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) def compare_timestamps_with_tolerance(timestamp1, timestamp2, tolerance=0): """Compare two timestamps TIMESTAMP1 and TIMESTAMP2, of the form '2005-03-31 17:37:26'. Optionally receives a TOLERANCE argument (in seconds). Return -1 if TIMESTAMP1 is less than TIMESTAMP2 minus TOLERANCE, 0 if they are equal within TOLERANCE limit, and 1 if TIMESTAMP1 is greater than TIMESTAMP2 plus TOLERANCE. 
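For example, with TIMESTAMP1='2005-03-31 17:37:26' and
TIMESTAMP2='2005-03-31 17:37:31', a TOLERANCE of 10 yields 0
(equal within tolerance), while the default TOLERANCE of 0
yields -1.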
""" # remove any trailing .00 in timestamps: timestamp1 = re.sub(r'\.[0-9]+$', '', timestamp1) timestamp2 = re.sub(r'\.[0-9]+$', '', timestamp2) # first convert timestamps to Unix epoch seconds: timestamp1_seconds = calendar.timegm(time.strptime(timestamp1, "%Y-%m-%d %H:%M:%S")) timestamp2_seconds = calendar.timegm(time.strptime(timestamp2, "%Y-%m-%d %H:%M:%S")) # now compare them: if timestamp1_seconds < timestamp2_seconds - tolerance: return -1 elif timestamp1_seconds > timestamp2_seconds + tolerance: return 1 else: return 0 def get_database_last_updated_timestamp(): """Return last updated timestamp for collection-related and record-related database tables. """ database_tables_timestamps = [] database_tables_timestamps.append(get_table_update_time('bibrec')) database_tables_timestamps.append(get_table_update_time('bibfmt')) try: database_tables_timestamps.append(get_table_update_time('idxWORD%')) except ValueError: # There are no indexes in the database. That's OK. pass database_tables_timestamps.append(get_table_update_time('collection%')) database_tables_timestamps.append(get_table_update_time('portalbox')) database_tables_timestamps.append(get_table_update_time('field%')) database_tables_timestamps.append(get_table_update_time('format%')) database_tables_timestamps.append(get_table_update_time('rnkMETHODNAME')) database_tables_timestamps.append(get_table_update_time('accROLE_accACTION_accARGUMENT', run_on_slave=True)) return max(database_tables_timestamps) def get_cache_last_updated_timestamp(): """Return last updated cache timestamp.""" try: f = open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, "r") except: return "1970-01-01 00:00:00" timestamp = f.read() f.close() return timestamp def set_cache_last_updated_timestamp(timestamp): """Set last updated cache timestamp to TIMESTAMP.""" try: with open(CFG_CACHE_LAST_UPDATED_TIMESTAMP_FILE, "w") as f: f.write(timestamp) except: # FIXME: do something here pass return timestamp def task_submit_elaborate_specific_parameter(key, value, opts, args): """ Given the string key it checks it's meaning, eventually using the value. Usually it fills some key in the options dict. It must return True if it has elaborated the key, False, if it doesn't know that key. 
eg: if key in ['-n', '--number']: self.options['number'] = value return True return False """ if key in ("-c", "--collection"): task_set_option("collection", value) elif key in ("-r", "--recursive"): task_set_option("recursive", 1) elif key in ("-f", "--force"): task_set_option("force", 1) elif key in ("-q", "--quick"): task_set_option("quick", 1) elif key in ("-p", "--part"): task_set_option("part", int(value)) elif key in ("-l", "--language"): languages = task_get_option("language", []) languages += value.split(',') for ln in languages: if ln not in CFG_SITE_LANGS: print('ERROR: "%s" is not a recognized language code' % ln) return False task_set_option("language", languages) else: return False return True def task_submit_check_options(): if task_has_option('collection'): coll = get_collection(task_get_option("collection")) if coll.id is None: print('ERROR: Collection "%s" does not exist' % coll.name) return False return True def task_run_core(): """ Reimplement to add the body of the task.""" ## ## ------->--->time--->------> ## (-1) | ( 0) | ( 1) ## | | | ## [T.db] | [T.fc] | [T.db] ## | | | ## |<-tol|tol->| ## ## the above is the compare_timestamps_with_tolerance result "diagram" ## [T.db] stands for the database timestamp and [T.fc] for the file cache timestamp ## (-1, 0, 1) stand for the returned values ## tol stands for the tolerance in seconds ## ## When a record has been added to or deleted from one of the collections, T.db becomes greater than T.fc, ## so the next webcoll run is a full one: it recalculates the reclists and nbrecs, and since it updates the ## collection db tables it also updates T.db. T.fc is set to the moment the task started running, thus ## slightly before T.db (practically by the time distance between the start of the task and the last call of ## update_reclist). Therefore when webcoll runs again, even if no database changes have taken place in the ## meanwhile, it does a full run (because compare_timestamps_with_tolerance returns 0). This time though, if ## no database changes have taken place, T.db remains the same while T.fc is updated, and as a result the ## next webcoll run will not be a full one. ## task_run_start_timestamp = get_current_time_timestamp() colls = [] # decide whether we need to run or not, by comparing last updated timestamps: write_message("Database timestamp is %s." % get_database_last_updated_timestamp(), verbose=3) write_message("Collection cache timestamp is %s." % get_cache_last_updated_timestamp(), verbose=3) if task_has_option("part"): write_message("Running cache update part %s only."
% task_get_option("part"), verbose=3) if check_nbrecs_for_all_external_collections() or task_has_option("force") or \ compare_timestamps_with_tolerance(get_database_last_updated_timestamp(), get_cache_last_updated_timestamp(), CFG_CACHE_LAST_UPDATED_TIMESTAMP_TOLERANCE) >= 0: ## either forced update was requested or cache is not up to date, so recreate it: # firstly, decide which collections to do: if task_has_option("collection"): coll = get_collection(task_get_option("collection")) colls.append(coll) if task_has_option("recursive"): r_type_descendants = coll.get_descendants(type='r') colls += r_type_descendants v_type_descendants = coll.get_descendants(type='v') colls += v_type_descendants else: res = run_sql("SELECT name FROM collection ORDER BY id") for row in res: colls.append(get_collection(row[0])) # secondly, update collection reclist cache: if task_get_option('part', 1) == 1: i = 0 for coll in colls: i += 1 write_message("%s / reclist cache update" % coll.name) if str(coll.dbquery).startswith("hostedcollection:"): coll.set_nbrecs_for_external_collection() else: coll.calculate_reclist() coll.update_reclist() task_update_progress("Part 1/2: done %d/%d" % (i, len(colls))) task_sleep_now_if_required(can_stop_too=True) webcoll_after_reclist_cache_update.send('webcoll', collections=colls) # thirdly, update collection webpage cache: if task_get_option("part", 2) == 2: i = 0 for coll in colls: i += 1 if coll.reclist_updated_since_start or task_has_option("collection") or task_get_option("force") or not task_get_option("quick"): write_message("%s / webpage cache update" % coll.name) for lang in CFG_SITE_LANGS: coll.update_webpage_cache(lang) webcoll_after_webpage_cache_update.send(coll.name, collection=coll, lang=lang) else: write_message("%s / webpage cache seems not to need an update and --quick was used" % coll.name, verbose=2) task_update_progress("Part 2/2: done %d/%d" % (i, len(colls))) task_sleep_now_if_required(can_stop_too=True) # finally update the cache last updated timestamp: # (but only when all collections were updated, not when only # some of them were forced-updated as per admin's demand) if not task_has_option("collection"): set_cache_last_updated_timestamp(task_run_start_timestamp) write_message("Collection cache timestamp is set to %s." % get_cache_last_updated_timestamp(), verbose=3) else: ## cache up to date, we don't have to run write_message("Collection cache is up to date, no need to run.") ## we are done: return True ### okay, here we go: if __name__ == '__main__': main() diff --git a/invenio/legacy/webstyle/webdoc.py b/invenio/legacy/webstyle/webdoc.py index 9c0a9d771..da388a63c 100644 --- a/invenio/legacy/webstyle/webdoc.py +++ b/invenio/legacy/webstyle/webdoc.py @@ -1,894 +1,890 @@ # -*- coding: utf-8 -*- ## This file is part of Invenio. ## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. 
## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. from __future__ import print_function """ WebDoc -- Transform webdoc sources into static html files """ __revision__ = \ "$Id$" from six import iteritems from . import registry from invenio.config import \ CFG_PREFIX, \ CFG_SITE_LANG, \ CFG_SITE_LANGS, \ CFG_SITE_NAME, \ CFG_SITE_SUPPORT_EMAIL, \ CFG_SITE_ADMIN_EMAIL, \ CFG_SITE_URL, \ CFG_SITE_SECURE_URL, \ CFG_SITE_RECORD, \ CFG_VERSION, \ CFG_SITE_NAME_INTL, \ CFG_CACHEDIR from invenio.utils.date import \ convert_datestruct_to_datetext, \ convert_datestruct_to_dategui, \ convert_datecvs_to_datestruct from invenio.utils.shell import mymkdir from invenio.base.i18n import \ gettext_set_language, \ wash_language, \ language_list_long import re import getopt import os import sys import time # List of (webdoc_source_dir, webdoc_cache_dir) webdoc_dirs = {'help':('', '%s/webdoc/help-pages' % CFG_CACHEDIR), 'admin':('admin', '%s/webdoc/admin-pages' % CFG_CACHEDIR), 'hacking':('hacking', '%s/webdoc/hacking-pages' % CFG_CACHEDIR)} # Regular expression for finding text to be translated translation_pattern = re.compile(r'_\((?P.*?)\)_', \ re.IGNORECASE | re.DOTALL | re.VERBOSE) # # Regular expression for finding comments comments_pattern = re.compile(r'^\s*#.*$', \ re.MULTILINE) # Regular expression for finding tag pattern_lang_current = re.compile(r'', \ re.IGNORECASE | re.DOTALL | re.VERBOSE) # Regular expression for finding tag pattern_lang_link_current = re.compile(r'', \ re.IGNORECASE | re.DOTALL | re.VERBOSE) # Regular expression for finding tag # where %s will be replaced at run time pattern_tag = r''' ) #end tag ''' # List of available tags in webdoc, and the pattern to find it pattern_tags = {'WebDoc-Page-Title': '', 'WebDoc-Page-Navtrail': '', 'WebDoc-Page-Description': '', 'WebDoc-Page-Keywords': '', 'WebDoc-Page-Header-Add': '', 'WebDoc-Page-Box-Left-Top-Add': '', 'WebDoc-Page-Box-Left-Bottom-Add': '', 'WebDoc-Page-Box-Right-Top-Add': '', 'WebDoc-Page-Box-Right-Bottom-Add': '', 'WebDoc-Page-Footer-Add': '', 'WebDoc-Page-Revision': '' } for tag in pattern_tags.keys(): pattern_tags[tag] = re.compile(pattern_tag % tag, \ re.IGNORECASE | re.DOTALL | re.VERBOSE) # Regular expression for finding ... tag pattern_lang = re.compile(r''' keep=all)* \s* #any number of white spaces > #closing start tag (?P.*?) #anything but the next group (greedy) () #end tag ''', re.IGNORECASE | re.DOTALL | re.VERBOSE) # Regular expression for finding ... tag (particular case of # pattern_lang) pattern_CFG_SITE_LANG = re.compile(r"<("+CFG_SITE_LANG+ \ r")\s*>(.*?)()", re.IGNORECASE | re.DOTALL) # Builds regular expression for finding each known language in tags ln_pattern_text = r"<(?P" ln_pattern_text += r"|".join([lang[0] for lang in \ language_list_long(enabled_langs_only=False)]) ln_pattern_text += r')\s*(revision="[^"]"\s*)?>(?P.*?)' ln_pattern = re.compile(ln_pattern_text, re.IGNORECASE | re.DOTALL) defined_tags = {'': CFG_SITE_NAME, '': CFG_SITE_SUPPORT_EMAIL, '': CFG_SITE_ADMIN_EMAIL, '': CFG_SITE_URL, '': CFG_SITE_SECURE_URL, '': CFG_SITE_RECORD, '': CFG_VERSION, '': CFG_SITE_NAME_INTL} def get_webdoc_parts(webdoc, parts=['title', \ 'keywords', \ 'navtrail', \ 'body', 'lastupdated', 'description'], categ="", update_cache_mode=1, ln=CFG_SITE_LANG, verbose=0): """ Returns the html of the specified 'webdoc' part(s). 
Also update the cache if 'update_cache' is True. Parameters: webdoc - *string* the name of a webdoc that can be found in standard webdoc dir, or a webdoc filepath. Priority is given to filepath if both match. parts - *list(string)* the parts that should be returned by this function. Can be in: 'title', 'keywords', 'navtrail', 'body', 'description', 'lastupdated'. categ - *string* (optional) The category to which the webdoc file belongs. 'help', 'admin' or 'hacking'. If "", look in all categories. update_cache_mode - *int* update the cached version of the given 'webdoc': - 0 : do not update - 1 : update if needed - 2 : always update Returns : *dictionary* with keys being in 'parts' input parameter and values being the corresponding html part. """ html_parts = {} if update_cache_mode in [1, 2]: update_webdoc_cache(webdoc, update_cache_mode, verbose) def get_webdoc_cached_part_path(webdoc_cache_dir, webdoc, ln, part): "Build path for given webdoc, ln and part" return webdoc_cache_dir + os.sep + webdoc + \ os.sep + webdoc + '.' + part + '-' + \ ln + '.html' for part in parts: if categ != "": locations = [webdoc_dirs.get(categ, ('',''))] else: locations = webdoc_dirs.values() for (_webdoc_source_dir, _web_doc_cache_dir) in locations: webdoc_cached_part_path = None if os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, ln, part)): # Check given language webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, ln, part) elif os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, CFG_SITE_LANG, part)): # Check CFG_SITE_LANG webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, CFG_SITE_LANG, part) elif os.path.exists(get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, 'en', part)): # Check English webdoc_cached_part_path = get_webdoc_cached_part_path(_web_doc_cache_dir, webdoc, 'en', part) if webdoc_cached_part_path is not None: try: webdoc_cached_part = file(webdoc_cached_part_path, 'r').read() html_parts[part] = webdoc_cached_part except IOError: # Could not read cache file. Generate on-the-fly, # get all the parts at the same time, and return (webdoc_source_path, \ webdoc_cache_dir, \ webdoc_name,\ webdoc_source_modification_date, \ webdoc_cache_modification_date) = get_webdoc_info(webdoc) webdoc_source = file(webdoc_source_path, 'r').read() htmls = transform(webdoc_source, languages=[ln]) if len(htmls) > 0: (lang, body, title, keywords, \ navtrail, lastupdated, description) = htmls[-1] html_parts = {'body': body or '', 'title': title or '', 'keywords': keywords or '', 'navtrail': navtrail or '', 'lastupdated': lastupdated or '', 'description': description or ''} # We then have all the parts, or there is no # translation for this file (if len(htmls)==0) break else: # Look in other categories continue if html_parts == {}: # Could not find/read the folder where cache should # be. 
Generate on-the-fly, get all the parts at the # same time, and return (webdoc_source_path, \ webdoc_cache_dir, \ webdoc_name,\ webdoc_source_modification_date, \ webdoc_cache_modification_date) = get_webdoc_info(webdoc) if webdoc_source_path is not None: try: webdoc_source = file(webdoc_source_path, 'r').read() htmls = transform(webdoc_source, languages=[ln]) if len(htmls) > 0: (lang, body, title, keywords, \ navtrail, lastupdated, description) = htmls[-1] html_parts = {'body': body or '', 'title': title or '', 'keywords': keywords or '', 'navtrail': navtrail or '', 'lastupdated': lastupdated or '', 'description': description or ''} # We then have all the parts, or there is no # translation for this file (if len(htmls)==0) break except IOError: # Nothing we can do.. pass return html_parts def update_webdoc_cache(webdoc, mode=1, verbose=0, languages=CFG_SITE_LANGS): """ Update the cache (on disk) of the given webdoc. Parameters: webdoc - *string* the name of a webdoc that can be found in standard webdoc dir, or a webdoc filepath. mode - *int* update cache mode: - 0 : do not update - 1 : only if necessary (webdoc source is newer than its cache) - 2 : always update """ if mode in [1, 2]: (webdoc_source_path, \ webdoc_cache_dir, \ webdoc_name,\ webdoc_source_modification_date, \ webdoc_cache_modification_date) = get_webdoc_info(webdoc) if mode == 1 and \ webdoc_source_modification_date < webdoc_cache_modification_date and \ get_mo_last_modification() < webdoc_cache_modification_date: # Cache was updated after source. No need to update return (webdoc_source, \ webdoc_cache_dir, \ webdoc_name) = read_webdoc_source(webdoc) if webdoc_source is not None: htmls = transform(webdoc_source, languages=languages) for (lang, body, title, keywords, \ navtrail, lastupdated, description) in htmls: # Body if body is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.body%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, body, verbose) except IOError as e: print(e) except OSError as e: print(e) # Title if title is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.title%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, title, verbose) except IOError as e: print(e) except OSError as e: print(e) # Keywords if keywords is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.keywords%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, keywords, verbose) except IOError as e: print(e) except OSError as e: print(e) # Navtrail if navtrail is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.navtrail%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, navtrail, verbose) except IOError as e: print(e) except OSError as e: print(e) # Description if description is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.description%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, description, verbose) except IOError as e: print(e) except OSError as e: print(e) # Last updated timestamp (CVS timestamp) if lastupdated is not None or lang == CFG_SITE_LANG: try: write_cache_file('%(name)s.lastupdated%(lang)s.html' % \ {'name': webdoc_name, 'lang': '-'+lang}, webdoc_cache_dir, lastupdated, verbose) except IOError as e: print(e) except OSError as e: print(e) # Last updated cache file try: write_cache_file('last_updated', webdoc_cache_dir, convert_datestruct_to_dategui(time.localtime()), verbose=0) except IOError as e: print(e) 
except OSError as e: print(e) if verbose > 0: print('Written cache in %s' % webdoc_cache_dir) def read_webdoc_source(webdoc): """ Returns the source of the given webdoc, along with the path to its cache directory. Returns (None, None, None) if webdoc cannot be found. Parameters: webdoc - *string* the name of a webdoc that can be found in standard webdoc dir, or a webdoc filepath. Priority is given to filepath if both match. Returns: *tuple* (webdoc_source, webdoc_cache_dir, webdoc_name) """ (webdoc_source_path, \ webdoc_cache_dir, \ webdoc_name,\ webdoc_source_modification_date, \ webdoc_cache_modification_date) = get_webdoc_info(webdoc) if webdoc_source_path is not None: try: webdoc_source = file(webdoc_source_path, 'r').read() except IOError: webdoc_source = None else: webdoc_source = None return (webdoc_source, webdoc_cache_dir, webdoc_name) def get_webdoc_info(webdoc): """ Locate the file corresponding to given webdoc and return its path, the path to its cache directory (even if it does not exist yet), the last modification dates of the source and the cache, and the webdoc name (i.e. webdoc id) Parameters: webdoc - *string* the name of a webdoc that can be found in standard webdoc dirs. (Without extension '.webdoc', hence 'search-guide', not'search-guide.webdoc'.) Returns: *tuple* (webdoc_source_path, webdoc_cache_dir, webdoc_name webdoc_source_modification_date, webdoc_cache_modification_date) """ webdoc_source_path = None webdoc_cache_dir = None webdoc_name = None last_updated_date = None webdoc_source_modification_date = 1 webdoc_cache_modification_date = 0 for (_webdoc_source_dir, _web_doc_cache_dir) in webdoc_dirs.values(): webdoc_source_path = registry.doc_category_topics( _webdoc_source_dir).get(webdoc) if webdoc_source_path is not None and os.path.exists(webdoc_source_path): webdoc_cache_dir = _web_doc_cache_dir + os.sep + webdoc webdoc_name = webdoc webdoc_source_modification_date = os.stat(webdoc_source_path).st_mtime break else: webdoc_source_path = None webdoc_name = None webdoc_source_modification_date = 1 if webdoc_cache_dir is not None and \ os.path.exists(webdoc_cache_dir + os.sep + 'last_updated'): webdoc_cache_modification_date = os.stat(webdoc_cache_dir + \ os.sep + \ 'last_updated').st_mtime return (webdoc_source_path, webdoc_cache_dir, webdoc_name, webdoc_source_modification_date, webdoc_cache_modification_date) def get_webdoc_topics(sort_by='name', sc=0, limit=-1, categ=['help', 'admin', 'hacking'], ln=CFG_SITE_LANG): """ List the available webdoc files in html format. sort_by - *string* Sort topics by 'name' or 'date'. sc - *int* Split the topics by categories if sc=1. limit - *int* Max number of topics to be printed. No limit if limit < 0. 
categ - *list(string)* the categories to consider ln - *string* Language of the page """ _ = gettext_set_language(ln) topics = {} - ln_link = (ln != CFG_SITE_LANG and '?ln=' + ln) or '' + ln_link = '?ln=' + ln for category in categ: if category not in webdoc_dirs: continue (source_path, cache_path) = webdoc_dirs[category] if category not in topics: topics[category] = [] # Build list of tuples(webdoc_name, webdoc_date, webdoc_url) for webdoc_name, webdocfile in registry.doc_category_topics(source_path).items(): webdoc_url = CFG_SITE_URL + "/help/" + \ ((category != 'help' and category + '/') or '') + \ webdoc_name try: webdoc_date = time.strptime(get_webdoc_parts(webdoc_name, parts=['lastupdated']).get('lastupdated', "1970-01-01 00:00:00"), "%Y-%m-%d %H:%M:%S") except: webdoc_date = time.strptime("1970-01-01 00:00:00", "%Y-%m-%d %H:%M:%S") topics[category].append((webdoc_name, webdoc_date, webdoc_url)) # If not split by category, merge everything if sc == 0: all_topics = [] for topic in topics.values(): all_topics.extend(topic) topics.clear() topics[''] = all_topics # Sort topics if sort_by == 'name': for topic in topics.values(): topic.sort() elif sort_by == 'date': for topic in topics.values(): topic.sort(lambda x, y:cmp(x[1], y[1])) topic.reverse() out = '' for category, topic in iteritems(topics): if category != '' and len(categ) > 1: out += ''+ _("%(category)s Pages") % \ {'category': _(category).capitalize()} + '' if limit < 0: limit = len(topic) out += '' return out def transform(webdoc_source, verbose=0, req=None, languages=CFG_SITE_LANGS): """ Transform a WebDoc into html This is made through a serie of transformations, mainly substitutions. Parameters: - webdoc_source : *string* the WebDoc input to transform to HTML """ parameters = {} # Will store values for specified parameters, such # as 'Title' for def get_param_and_remove(match): """ Analyses 'match', get the parameter and return empty string to remove it. Called by substitution in 'transform(...)', used to collection parameters such as @param match: a match object corresponding to the special tag that must be interpreted """ tag = match.group("tag") value = match.group("value") parameters[tag] = value return '' def translate(match): """ Translate matching values """ word = match.group("word") translated_word = _(word) return translated_word # 1 step ## First filter, used to remove comments ## and tags uncommented_webdoc = '' for line in webdoc_source.splitlines(True): if not line.strip().startswith('#'): uncommented_webdoc += line webdoc_source = uncommented_webdoc.replace('', '') webdoc_source = webdoc_source.replace('', '') html_texts = {} # Language dependent filters for ln in languages: _ = gettext_set_language(ln) # Check if translation is really needed ## Just a quick check. Might trigger false negative, but it is ## ok. if ln != CFG_SITE_LANG and \ translation_pattern.search(webdoc_source) is None and \ pattern_lang_link_current.search(webdoc_source) is None and \ pattern_lang_current.search(webdoc_source) is None and \ '<%s>' % ln not in webdoc_source and \ ('_(') not in webdoc_source: continue # 2 step ## Filter used to translate string in _(..)_ localized_webdoc = translation_pattern.sub(translate, webdoc_source) # 3 step ## Print current language 'en', 'fr', .. instead of ## tags and '?ln=en', '?ln=fr', .. 
instead of - ## if ln is not default language - if ln != CFG_SITE_LANG: - localized_webdoc = pattern_lang_link_current.sub('?ln=' + ln, - localized_webdoc) - else: - localized_webdoc = pattern_lang_link_current.sub('', - localized_webdoc) + ## + localized_webdoc = pattern_lang_link_current.sub('?ln=' + ln, + localized_webdoc) localized_webdoc = pattern_lang_current.sub(ln, localized_webdoc) # 4 step ## Filter out languages localized_webdoc = filter_languages(localized_webdoc, ln, defined_tags) # 5 Step ## Replace defined tags with their value from config file ## Eg. replace with 'http://cds.cern.ch/': for defined_tag, value in iteritems(defined_tags): if defined_tag.upper() == '': localized_webdoc = localized_webdoc.replace(defined_tag, \ value.get(ln, value['en'])) else: localized_webdoc = localized_webdoc.replace(defined_tag, value) # 6 step ## Get the parameters defined in HTML comments, like ## localized_body = localized_webdoc for tag, pattern in iteritems(pattern_tags): localized_body = pattern.sub(get_param_and_remove, localized_body) out = localized_body # Pre-process date last_updated = parameters.get('WebDoc-Page-Revision', '') last_updated = convert_datecvs_to_datestruct(last_updated) last_updated = convert_datestruct_to_datetext(last_updated) html_texts[ln] = (ln, out, parameters.get('WebDoc-Page-Title'), parameters.get('WebDoc-Page-Keywords'), parameters.get('WebDoc-Page-Navtrail'), last_updated, parameters.get('WebDoc-Page-Description')) # Remove duplicates filtered_html_texts = [] if CFG_SITE_LANG in html_texts: filtered_html_texts = [(html_text[0], \ (html_text[1] != html_texts[CFG_SITE_LANG][1] and html_text[1]) or None, \ (html_text[2] != html_texts[CFG_SITE_LANG][2] and html_text[2]) or None, \ (html_text[3] != html_texts[CFG_SITE_LANG][3] and html_text[3]) or None, \ (html_text[4] != html_texts[CFG_SITE_LANG][4] and html_text[4]) or None, \ (html_text[5] != html_texts[CFG_SITE_LANG][5] and html_text[5]) or None, \ (html_text[6] != html_texts[CFG_SITE_LANG][6] and html_text[6]) or None) for html_text in html_texts.values() \ if html_text[0] != CFG_SITE_LANG] filtered_html_texts.append(html_texts[CFG_SITE_LANG]) else: filtered_html_texts = html_texts.values() return filtered_html_texts def write_cache_file(filename, webdoc_cache_dir, filebody, verbose=0): """Write a file inside WebDoc cache dir. Raise an exception if not possible """ # open file: mymkdir(webdoc_cache_dir) fullfilename = webdoc_cache_dir + os.sep + filename if filebody is None: filebody = '' os.umask(0o022) f = open(fullfilename, "w") f.write(filebody) f.close() if verbose > 2: print('Written %s' % fullfilename) def get_mo_last_modification(): """ Returns the timestamp of the most recently modified mo (compiled po) file """ # Take one of the mo files. They are all installed at the same # time, so last modication date should be the same mo_file = '%s/share/locale/%s/LC_MESSAGES/invenio.mo' % (CFG_PREFIX, CFG_SITE_LANG) if os.path.exists(os.path.abspath(mo_file)): return os.stat(mo_file).st_mtime else: return 0 def filter_languages(text, ln='en', defined_tags=None): """ Filters the language tags that do not correspond to the specified language. 
Eg: A bookEin Buch will return - with ln = 'de': "Ein Buch" - with ln = 'en': "A book" - with ln = 'fr': "A book" Also replace variables such as and inside <..><..> tags in order to print them with the correct language @param text: the input text @param ln: the language that is NOT filtered out from the input @return: the input text as string with unnecessary languages filtered out @see: bibformat_engine.py, from where this function was originally extracted """ # First define search_lang_tag(match) and clean_language_tag(match), used # in re.sub() function def search_lang_tag(match): """ Searches for the ... tag and remove inner localized tags such as , , that are not current_lang. If current_lang cannot be found inside ... , try to use 'CFG_SITE_LANG' @param match: a match object corresponding to the special tag that must be interpreted """ current_lang = ln # If is used, keep all empty line (this is # currently undocumented and behaviour might change) keep = False if match.group("keep") is not None: keep = True def clean_language_tag(match): """ Return tag text content if tag language of match is output language. Called by substitution in 'filter_languages(...)' @param match: a match object corresponding to the special tag that must be interpreted """ if match.group('lang') == current_lang or \ keep == True: return match.group('translation') else: return "" # End of clean_language_tag(..) lang_tag_content = match.group("langs") # Try to find tag with current lang. If it does not exists, # then try to look for CFG_SITE_LANG. If still does not exist, use # 'en' as current_lang pattern_current_lang = re.compile(r"<(" + current_lang + \ r")\s*>(.*?)()", re.IGNORECASE | re.DOTALL) if re.search(pattern_current_lang, lang_tag_content) is None: current_lang = CFG_SITE_LANG # Can we find translation in 'CFG_SITE_LANG'? if re.search(pattern_CFG_SITE_LANG, lang_tag_content) is None: current_lang = 'en' cleaned_lang_tag = ln_pattern.sub(clean_language_tag, lang_tag_content) # Remove empty lines # Only if 'keep' has not been set if keep == False: stripped_text = '' for line in cleaned_lang_tag.splitlines(True): if line.strip(): stripped_text += line cleaned_lang_tag = stripped_text return cleaned_lang_tag # End of search_lang_tag(..) filtered_text = pattern_lang.sub(search_lang_tag, text) return filtered_text def usage(exitcode=1, msg=""): """Prints usage info.""" if msg: sys.stderr.write("Error: %s.\n" % msg) sys.stderr.write("Usage: %s [options] \n" % sys.argv[0]) sys.stderr.write(" -h, --help \t\t Print this help.\n") sys.stderr.write(" -V, --version \t\t Print version information.\n") sys.stderr.write(" -v, --verbose=LEVEL \t\t Verbose level (0=min,1=normal,9=max).\n") sys.stderr.write(" -l, --language=LN1,LN2,.. 
\t\t Language(s) to process (default all)\n") sys.stderr.write(" -m, --mode=MODE \t\t Update cache mode(0=Never,1=if necessary,2=always) (default 2)\n") sys.stderr.write("\n") sys.stderr.write(" Example: webdoc search-guide\n") sys.stderr.write(" Example: webdoc -l en,fr search-guide\n") sys.stderr.write(" Example: webdoc -m 1 search-guide") sys.stderr.write("\n") sys.exit(exitcode) def main(): """ main entry point for webdoc via command line """ options = {'language':CFG_SITE_LANGS, 'verbose':1, 'mode':2} try: opts, args = getopt.getopt(sys.argv[1:], "hVv:l:m:", ["help", "version", "verbose=", "language=", "mode="]) except getopt.GetoptError as err: usage(1, err) try: for opt in opts: if opt[0] in ["-h", "--help"]: usage(0) elif opt[0] in ["-V", "--version"]: print(__revision__) sys.exit(0) elif opt[0] in ["-v", "--verbose"]: options["verbose"] = int(opt[1]) elif opt[0] in ["-l", "--language"]: options["language"] = [wash_language(lang.strip().lower()) \ for lang in opt[1].split(',') \ if lang in CFG_SITE_LANGS] elif opt[0] in ["-m", "--mode"]: options["mode"] = opt[1] except StandardError as e: usage(e) try: options["mode"] = int(options["mode"]) except ValueError: usage(1, "Mode must be an integer") if len(args) > 0: options["webdoc"] = args[0] if "webdoc" not in options: usage(0) # check if webdoc exists infos = get_webdoc_info(options["webdoc"]) if infos[0] is None: usage(1, "Could not find %s" % options["webdoc"]) update_webdoc_cache(webdoc=options["webdoc"], mode=options["mode"], verbose=options["verbose"], languages=options["language"]) if __name__ == "__main__": main() diff --git a/invenio/legacy/websubmit/functions/Shared_Functions.py b/invenio/legacy/websubmit/functions/Shared_Functions.py index 9885e55cf..5fb5a2c12 100644 --- a/invenio/legacy/websubmit/functions/Shared_Functions.py +++ b/invenio/legacy/websubmit/functions/Shared_Functions.py @@ -1,271 +1,271 @@ ## This file is part of Invenio. ## Copyright (C) 2007, 2008, 2009, 2010, 2011 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
"""Functions shared by websubmit_functions""" from __future__ import print_function __revision__ = "$Id$" import os import cgi import glob import sys from logging import DEBUG from six import iteritems from invenio.config import \ CFG_PATH_CONVERT, \ CFG_SITE_LANG from invenio.legacy.bibdocfile.api import decompose_file from invenio.ext.logging import register_exception from invenio.legacy.websubmit.file_converter import convert_file, InvenioWebSubmitFileConverterError, get_missing_formats, get_file_converter_logger from invenio.legacy.websubmit.config import InvenioWebSubmitFunctionError from invenio.legacy.dbquery import run_sql from invenio.legacy.bibsched.cli import server_pid from invenio.base.i18n import gettext_set_language from invenio.legacy.search_engine import get_record from invenio.legacy.bibrecord import record_get_field_values, record_get_field_value def createRelatedFormats(fullpath, overwrite=True, debug=False): """Given a fullpath, this function extracts the file's extension and finds in which additional format the file can be converted and converts it. @param fullpath: (string) complete path to file @param overwrite: (bool) overwrite already existing formats Return a list of the paths to the converted files """ file_converter_logger = get_file_converter_logger() old_logging_level = file_converter_logger.getEffectiveLevel() if debug: file_converter_logger.setLevel(DEBUG) try: createdpaths = [] basedir, filename, extension = decompose_file(fullpath) extension = extension.lower() if debug: print("basedir: %s, filename: %s, extension: %s" % (basedir, filename, extension), file=sys.stderr) filelist = glob.glob(os.path.join(basedir, '%s*' % filename)) if debug: print("filelist: %s" % filelist, file=sys.stderr) missing_formats = get_missing_formats(filelist) if debug: print("missing_formats: %s" % missing_formats, file=sys.stderr) for path, formats in iteritems(missing_formats): if debug: print("... path: %s, formats: %s" % (path, formats), file=sys.stderr) for aformat in formats: if debug: print("...... aformat: %s" % aformat, file=sys.stderr) newpath = os.path.join(basedir, filename + aformat) if debug: print("...... newpath: %s" % newpath, file=sys.stderr) try: convert_file(path, newpath) createdpaths.append(newpath) except InvenioWebSubmitFileConverterError as msg: if debug: print("...... Exception: %s" % msg, file=sys.stderr) register_exception(alert_admin=True) finally: if debug: file_converter_logger.setLevel(old_logging_level) return createdpaths def createIcon(fullpath, iconsize): """Given a fullpath, this function extracts the file's extension and if the format is compatible it converts it to icon. @param fullpath: (string) complete path to file Return the iconpath if successful otherwise None """ basedir = os.path.dirname(fullpath) filename = os.path.basename(fullpath) filename, extension = os.path.splitext(filename) if extension == filename: extension == "" iconpath = "%s/icon-%s.gif" % (basedir, filename) if os.path.exists(fullpath) and extension.lower() in ['.pdf', '.gif', '.jpg', '.jpeg', '.ps']: os.system("%s -scale %s %s %s" % (CFG_PATH_CONVERT, iconsize, fullpath, iconpath)) if os.path.exists(iconpath): return iconpath else: return None def get_dictionary_from_string(dict_string): """Given a string version of a "dictionary", split the string into a python dictionary. 
For example, given the following string: {'TITLE' : 'EX_TITLE', 'AUTHOR' : 'EX_AUTHOR', 'REPORTNUMBER' : 'EX_RN'} A dictionary in the following format will be returned: { 'TITLE' : 'EX_TITLE', 'AUTHOR' : 'EX_AUTHOR', 'REPORTNUMBER' : 'EX_RN', } @param dict_string: (string) - the string version of the dictionary. @return: (dictionary) - the dictionary build from the string. """ try: # Evaluate the dictionary string in an empty local/global # namespaces. An empty '__builtins__' variable is still # provided, otherwise Python will add the real one for us, # which would access to undesirable functions, such as # 'file()', 'open()', 'exec()', etc. evaluated_dict = eval(dict_string, {"__builtins__": {}}, {}) except: evaluated_dict = {} # Check that returned value is a dict. Do not check with # isinstance() as we do not even want to match subclasses of dict. if type(evaluated_dict) is dict: return evaluated_dict else: return {} def ParamFromFile(afile): """ Pipe a multi-line file into a single parameter""" parameter = '' afile = afile.strip() if afile == '': return parameter try: fp = open(afile, "r") lines = fp.readlines() for line in lines: parameter = parameter + line fp.close() except IOError: pass return parameter def write_file(filename, filedata): """Open FILENAME and write FILEDATA to it.""" filename1 = filename.strip() try: of = open(filename1,'w') except IOError: raise InvenioWebSubmitFunctionError('Cannot open ' + filename1 + ' to write') of.write(filedata) of.close() return "" def get_nice_bibsched_related_message(curdir, ln=CFG_SITE_LANG): """ @return: a message suitable to display to the user, explaining the current status of the system. @rtype: string """ bibupload_id = ParamFromFile(os.path.join(curdir, 'bibupload_id')) if not bibupload_id: ## No BibUpload scheduled? Then we don't care about bibsched return "" ## Let's get an estimate about how many processes are waiting in the queue. ## Our bibupload might be somewhere in it, but it's not really so important ## WRT informing the user. _ = gettext_set_language(ln) res = run_sql("SELECT id,proc,runtime,status,priority FROM schTASK WHERE (status='WAITING' AND runtime<=NOW()) OR status='SLEEPING'") - pre = _("Note that your submission as been inserted into the bibliographic task queue and is waiting for execution.\n") + pre = _("Note that your submission has been inserted into the bibliographic task queue and is waiting for execution.\n") if server_pid(): ## BibSched is up and running msg = _("The task queue is currently running in automatic mode, and there are currently %(x_num)s tasks waiting to be executed. Your record should be available within a few minutes and searchable within an hour or thereabouts.\n", x_num=(len(res))) else: msg = _("Because of a human intervention or a temporary problem, the task queue is currently set to the manual mode. Your submission is well registered but may take longer than usual before it is fully integrated and searchable.\n") return pre + msg def txt2html(msg): """Transform newlines into paragraphs.""" rows = msg.split('\n') rows = [cgi.escape(row) for row in rows] rows = "

" + "

".join(rows) + "

" return rows def get_all_values_in_curdir(curdir): """ Return a dictionary with all the content of curdir. @param curdir: the path to the current directory. @type curdir: string @return: the content @rtype: dict """ ret = {} for filename in os.listdir(curdir): if not filename.startswith('.') and os.path.isfile(os.path.join(curdir, filename)): ret[filename] = open(os.path.join(curdir, filename)).read().strip() return ret def get_current_record(curdir, system_number_file='SN'): """ Return the current record (in case it's being modified). @param curdir: the path to the current directory. @type curdir: string @param system_number_file: is the name of the file on disk in curdir, that is supposed to contain the record id. @type system_number_file: string @return: the record @rtype: as in L{get_record} """ if os.path.exists(os.path.join(curdir, system_number_file)): recid = open(os.path.join(curdir, system_number_file)).read().strip() if recid: recid = int(recid) return get_record(recid) return {} def retrieve_field_values(curdir, field_name, separator=None, system_number_file='SN', tag=None): """ This is a handy function to retrieve values either from the current submission directory, when a form has been just submitted, or from an existing record (e.g. during MBI action). @param curdir: is the current submission directory. @type curdir: string @param field_name: is the form field name that might exists on disk. @type field_name: string @param separator: is an optional separator. If it exists, it will be used to retrieve multiple values contained in the field. @type separator: string @param system_number_file: is the name of the file on disk in curdir, that is supposed to contain the record id. @type system_number_file: string @param tag: is the full MARC tag (tag+ind1+ind2+code) that should contain values. If not specified, only values in curdir will be retrieved. @type tag: 6-chars @return: the field value(s). @rtype: list of strings. @note: if field_name exists in curdir it will take precedence over retrieving the values from the record. """ field_file = os.path.join(curdir, field_name) if os.path.exists(field_file): field_value = open(field_file).read() if separator is not None: return [value.strip() for value in field_value.split(separator) if value.strip()] else: return [field_value.strip()] elif tag is not None: system_number_file = os.path.join(curdir, system_number_file) if os.path.exists(system_number_file): recid = int(open(system_number_file).read().strip()) record = get_record(recid) if separator: return record_get_field_values(record, tag[:3], tag[3], tag[4], tag[5]) else: return [record_get_field_value(record, tag[:3], tag[3], tag[4], tag[5])] return [] diff --git a/invenio/modules/converter/converterext/templates/oaiarxiv2marcxml.xsl b/invenio/modules/converter/converterext/templates/oaiarxiv2marcxml.xsl index 58ccebdee..c627ceb38 100644 --- a/invenio/modules/converter/converterext/templates/oaiarxiv2marcxml.xsl +++ b/invenio/modules/converter/converterext/templates/oaiarxiv2marcxml.xsl @@ -1,1049 +1,1080 @@ abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ + + + + + + + + + DOI + + + + + + + + + + LANL EDS arXiv arXiv - + CERN ; - + CERN , , , - + + + - + + + - + + + + + giva a faire accepted@appear@press@publ@review@submitted"> - + false arXiv DELETED SzGeCERN + + + + + + false arXiv eng - - TH- + TH- CERN Library PH-TH + + OA + + + CERN-TH + - - PH-EP- + PH-EP- CERN Library PH-EP http://arxiv.org/pdf/.pdf - - - + + p mult. 
p Comments: LANL EDS - + - - - - - - + ARTICLE ARTICLE 13 Thesis THESIS THESIS 14 PREPRINT PREPRINT 11 diff --git a/invenio/modules/formatter/format_elements/bfe_authority_institution.py b/invenio/modules/formatter/format_elements/bfe_authority_institution.py index 05b90478b..b3550736e 100644 --- a/invenio/modules/formatter/format_elements/bfe_authority_institution.py +++ b/invenio/modules/formatter/format_elements/bfe_authority_institution.py @@ -1,205 +1,205 @@ # -*- coding: utf-8 -*- ## ## This file is part of Invenio. -## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 CERN. +## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. -"""BibFormat element - Prints institution data from an Authority Record. +"""BibFormat element - Prints institute data from an Authority Record. """ __revision__ = "$Id$" from invenio.config import CFG_SITE_URL from invenio.legacy.bibauthority.config import \ CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD, \ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME from invenio.legacy.bibauthority.engine import \ get_control_nos_from_recID, \ guess_main_name_from_authority_recID from invenio.legacy.search_engine import \ perform_request_search, \ get_record def format_element(bfo, detail='no'): - """ Prints the data of an institution authority record in HTML. By default prints + """ Prints the data of an institute authority record in HTML. By default prints brief version. @param detail: whether the 'detailed' rather than the 'brief' format @type detail: 'yes' or 'no' """ from invenio.base.i18n import gettext_set_language _ = gettext_set_language(bfo.lang) # load the right message language # return value out = "" # brief main_dicts = bfo.fields('110%%') if len(main_dicts): main = main_dicts[0].get('a') or "" out += "

" + "" + _("Main %(x_name)s name", x_name=_("institution")).encode('utf8') + "" + ": " + main + "

" # detail if detail.lower() == "yes": sees = [see_dict['a'] for see_dict in bfo.fields('410%%') if 'a' in see_dict] sees = filter(None, sees) # fastest way to remove empty ""s if len(sees): out += "

" + "" + _("Variant(s)") + "" + ": " + ", ".join(sees) + "

" see_also_dicts = bfo.fields('510%%') cc_val = CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME - c_val = "Authority Institution" + c_val = "Institutes" record_url_pattern = "/record/" + "%s" search_url_pattern = "/search?" + \ "cc=" + "%s" + \ "&c=" + "%s" + \ "&p=" + "%s" + \ "&sc=" + "%s" link_pattern = "" + '%s' + "" # populate the first 3 lists parent_htmls, predecessor_htmls, successor_htmls = \ get_main_htmls(see_also_dicts, cc_val, c_val, record_url_pattern, search_url_pattern, link_pattern) # populate the list of children child_htmls = \ get_child_htmls(bfo.recID, cc_val, c_val, record_url_pattern, link_pattern) # put it all together if len(parent_htmls): out += "

" + "" + _("Parent") + "" + ": " + ", ".join(parent_htmls) + "

" if len(child_htmls): out += "

" + "" + _("Children") + "" + ": " + ", ".join(child_htmls) + "

" if len(predecessor_htmls): out += "

" + "" + _("Predecessor") + "" + ": " + ", ".join(predecessor_htmls) + "

" if len(successor_htmls): out += "

" + "" + _("Successor") + "" + ": " + ", ".join(successor_htmls) + "

" # return return out def get_main_htmls(see_also_dicts, cc_val, c_val, record_url_pattern, search_url_pattern, link_pattern): """parent_htmls, predecessor_htmls, successor_htmls can all be deduced directly from the metadata of the record""" # reusable vars f_val = CFG_BIBAUTHORITY_RECORD_CONTROL_NUMBER_FIELD sc_val = "1" parent_htmls = [] predecessor_htmls = [] successor_htmls = [] # start processing for see_also_dict in see_also_dicts: if 'w' in see_also_dict: # $w contains 'a' for predecessor, 'b' for successor, etc. w_subfield = see_also_dict.get('w') # $4 contains control_no of linked authority record _4_subfield = see_also_dict.get('4') - # $a contains the name of the linked institution + # $a contains the name of the linked institute out_string = see_also_dict.get('a') or _4_subfield # if we have something to display if out_string: url = '' # if we have a control number if _4_subfield: p_val = _4_subfield # if CFG_BIBAUTHORITY_PREFIX_SEP in _4_subfield: # unused, p_val = _4_subfield.split(CFG_BIBAUTHORITY_PREFIX_SEP); recIDs = perform_request_search(cc=cc_val, c=c_val, p=p_val, f=f_val) if len(recIDs) == 1: url = record_url_pattern % (recIDs[0]) elif len(recIDs) > 1: p_val = "recid:" + \ " or recid:".join([str(r) for r in recIDs]) url = search_url_pattern % (cc_val, c_val, p_val, sc_val) # if we found one or multiple records for the control_no, # make the out_string a clickable url towards those records if url: out_string = link_pattern % (url, out_string) # add the out_string to the appropriate list if w_subfield == 't': parent_htmls.append(out_string) elif w_subfield == 'a': predecessor_htmls.append(out_string) elif w_subfield == 'b': successor_htmls.append(out_string) # return return parent_htmls, predecessor_htmls, successor_htmls def get_child_htmls(this_recID, cc_val, c_val, record_url_pattern, link_pattern): """children aren'r referenced by parents, so we need special treatment to find them""" control_nos = get_control_nos_from_recID(this_recID) for control_no in control_nos: url = '' p_val = '510%4:"' + control_no + '" and 510%w:t' # find a first, fuzzy result set # narrowing down on a few possible recIDs recIDs = perform_request_search(cc=cc_val, c=c_val, p=p_val) # now filter to find the ones where the subfield conditions of p_val # are both true within the exact same field sf_req = [('w', 't'), ('4', control_no)] recIDs = filter(lambda x: match_all_subfields_for_tag(x, '510', sf_req), recIDs) # proceed with assembling the html link child_htmls = [] for recID in recIDs: url = record_url_pattern % str(recID) display = guess_main_name_from_authority_recID(recID) or str(recID) out_html = link_pattern % (url, display) child_htmls.append(out_html) return child_htmls def match_all_subfields_for_tag(recID, field_tag, subfields_required=[]): """ Tests whether the record with recID has at least one field with 'field_tag' where all of the required subfields in subfields_required match a subfield in the given field both in code and value @param recID: record ID @type recID: int @param field_tag: a 3 digit code for the field tag code @type field_tag: string @param subfields_required: a list of subfield code/value tuples @type subfields_required: list of tuples of strings. same format as in get_record(): e.g. 
[('w', 't'), ('4', 'XYZ123')] @return: boolean """ rec = get_record(recID) for field in rec[field_tag]: subfields_present = field[0] intersection = set(subfields_present) & set(subfields_required) if set(subfields_required) == intersection: return True return False def escape_values(bfo): """ Called by BibFormat in order to check if output of this element should be escaped. """ return 0 diff --git a/invenio/modules/formatter/format_elements/bfe_authors.py b/invenio/modules/formatter/format_elements/bfe_authors.py index cb6b4add7..0b8337369 100644 --- a/invenio/modules/formatter/format_elements/bfe_authors.py +++ b/invenio/modules/formatter/format_elements/bfe_authors.py @@ -1,207 +1,207 @@ # -*- coding: utf-8 -*- ## ## This file is part of Invenio. ## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. """BibFormat element - Prints authors """ __revision__ = "$Id$" import re import six from urllib import quote from cgi import escape from invenio.base.globals import cfg from invenio.base.i18n import gettext_set_language from invenio.legacy.bibauthority.config import \ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME, \ CFG_BIBAUTHORITY_TYPE_NAMES, \ CFG_BIBAUTHORITY_PREFIX_SEP from invenio.legacy.bibauthority.engine import \ get_low_level_recIDs_from_control_no def format_element(bfo, limit, separator=' ; ', extension='[...]', print_links="yes", print_affiliations='no', affiliation_prefix=' (', affiliation_suffix=')', interactive="no", highlight="no", link_author_pages="no", link_mobile_pages="no", relator_code_pattern=None): """ Prints the list of authors of a record. @param limit: the maximum number of authors to display @param separator: the separator between authors. @param extension: a text printed if more authors than 'limit' exist @param print_links: if yes, prints the authors as HTML link to their publications @param print_affiliations: if yes, make each author name followed by its affiliation @param affiliation_prefix: prefix printed before each affiliation @param affiliation_suffix: suffix printed after each affiliation @param interactive: if yes, enable user to show/hide authors when there are too many (html + javascript) @param highlight: highlights authors corresponding to search query if set to 'yes' @param link_author_pages: should we link to author pages if print_links in on? @param link_mobile_pages: should we link to mobile app pages if print_links in on? 
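(Note: match_all_subfields_for_tag above indexes rec[field_tag] directly, which raises KeyError for records lacking that tag. A defensive sketch against the same bibrecord field layout, where each field tuple carries its subfield list first; the helper name is hypothetical:

    def match_all_subfields_sketch(rec, field_tag, subfields_required):
        """True if some field of `field_tag` carries every required
        (code, value) pair; `rec` is a bibrecord-style dict in which
        rec[tag] is a list of field tuples, subfield list first."""
        for field in rec.get(field_tag, []):  # no KeyError on a missing tag
            subfields_present = field[0]
            if all(req in subfields_present for req in subfields_required):
                return True
        return False

End of note.)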
@param relator_code_pattern: a regular expression to filter authors based on subfield $4 (relator code) """ CFG_SITE_URL = cfg['CFG_SITE_URL'] if isinstance(CFG_SITE_URL, six.text_type): CFG_SITE_URL = CFG_SITE_URL.encode('utf8') _ = gettext_set_language(bfo.lang) # load the right message language authors = [] authors_1 = bfo.fields('100__', repeatable_subfields_p=True) authors_2 = bfo.fields('700__', repeatable_subfields_p=True) authors.extend(authors_1) authors.extend(authors_2) # make unique string per key for author in authors: if 'a' in author: author['a'] = author['a'][0] if 'u' in author: author['u'] = author['u'][0] pattern = '%s' + CFG_BIBAUTHORITY_PREFIX_SEP + "(" for control_no in author.get('0', []): - if pattern % (CFG_BIBAUTHORITY_TYPE_NAMES["INSTITUTION"]) in control_no: + if pattern % (CFG_BIBAUTHORITY_TYPE_NAMES["INSTITUTE"]) in control_no: author['u0'] = control_no # overwrite if multiples elif pattern % (CFG_BIBAUTHORITY_TYPE_NAMES["AUTHOR"]) in control_no: author['a0'] = control_no # overwrite if multiples if relator_code_pattern: p = re.compile(relator_code_pattern) authors = filter(lambda x: p.match(x.get('4', '')), authors) nb_authors = len(authors) bibrec_id = bfo.control_field("001") # Process authors to add link, highlight and format affiliation for author in authors: if 'a' in author: if highlight == 'yes': from invenio.modules.formatter import utils as bibformat_utils author['a'] = bibformat_utils.highlight(author['a'], bfo.search_pattern) if print_links.lower() == "yes": if link_author_pages == "yes": author['a'] = '' elif link_mobile_pages == 'yes': author['a'] = '' + escape(author['a']) + '' else: auth_coll_param = '' if 'a0' in author: recIDs = get_low_level_recIDs_from_control_no(author['a0']) if len(recIDs): auth_coll_param = '&c=' + \ CFG_BIBAUTHORITY_AUTHORITY_COLLECTION_NAME author['a'] = '' + escape(author['a']) + '' if 'u' in author: if print_affiliations == "yes": if 'u0' in author: recIDs = get_low_level_recIDs_from_control_no(author['u0']) # if there is more than 1 recID, clicking on link and # thus displaying the authority record's page should # contain a warning that there are multiple authority # records with the same control number if len(recIDs): author['u'] = '' + author['u'] + '' author['u'] = affiliation_prefix + author['u'] + \ affiliation_suffix # Flatten author instances if print_affiliations == 'yes': authors = [author.get('a', '') + author.get('u', '') for author in authors] else: authors = [author.get('a', '') for author in authors] if limit.isdigit() and nb_authors > int(limit) and interactive != "yes": return separator.join(authors[:int(limit)]) + extension elif limit.isdigit() and nb_authors > int(limit) and interactive == "yes": out = '' out += separator.join(authors[:int(limit)]) out += '' % bibrec_id + separator + \ separator.join(authors[int(limit):]) + '' out += ' ' % bibrec_id out += ' ' % bibrec_id out += ''' ''' % {'show_less':_("Hide"), 'show_more':_("Show all %(x_num)i authors", x_num=nb_authors), 'extension':extension, 'recid': bibrec_id} out += '' % bibrec_id return out elif nb_authors > 0: return separator.join(authors) def escape_values(bfo): """ Called by BibFormat in order to check if output of this element should be escaped. 
""" return 0 diff --git a/invenio/modules/formatter/format_templates/Authority_HTML_brief.bft b/invenio/modules/formatter/format_templates/Authority_HTML_brief.bft index 578f80105..50c65852f 100755 --- a/invenio/modules/formatter/format_templates/Authority_HTML_brief.bft +++ b/invenio/modules/formatter/format_templates/Authority_HTML_brief.bft @@ -1,7 +1,7 @@ Default HTML brief Brief Authority HTML format. - + \ No newline at end of file diff --git a/invenio/modules/formatter/format_templates/Authority_HTML_detailed.bft b/invenio/modules/formatter/format_templates/Authority_HTML_detailed.bft index 944ce7396..4649b4af7 100755 --- a/invenio/modules/formatter/format_templates/Authority_HTML_detailed.bft +++ b/invenio/modules/formatter/format_templates/Authority_HTML_detailed.bft @@ -1,15 +1,15 @@ Authority HTML detailed Detailed Authority HTML format.

Authority Record

- +
\ No newline at end of file diff --git a/invenio/modules/indexer/fixtures.py b/invenio/modules/indexer/fixtures.py index a742bc56b..dc28751fc 100644 --- a/invenio/modules/indexer/fixtures.py +++ b/invenio/modules/indexer/fixtures.py @@ -1,568 +1,568 @@ # -*- coding: utf-8 -*- # ## This file is part of Invenio. ## Copyright (C) 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. from fixture import DataSet from invenio.modules.search.fixtures import FieldData class IdxINDEXData(DataSet): class IdxINDEX_1: last_updated = None description = u'This index contains words/phrases from global fields.' stemming_language = u'' id = 1 indexer = u'native' name = u'global' synonym_kbrs = u'INDEX-SYNONYM-TITLE,exact' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_2: last_updated = None description = u'This index contains words/phrases from collection identifiers fields.' stemming_language = u'' id = 2 indexer = u'native' name = u'collection' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_3: last_updated = None description = u'This index contains words/phrases from abstract fields.' stemming_language = u'' id = 3 indexer = u'native' name = u'abstract' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_4: last_updated = None description = u'This index contains fuzzy words/phrases from author fields.' stemming_language = u'' id = 4 indexer = u'native' name = u'author' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexAuthorTokenizer' class IdxINDEX_5: last_updated = None description = u'This index contains words/phrases from keyword fields.' stemming_language = u'' id = 5 indexer = u'native' name = u'keyword' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_6: last_updated = None description = u'This index contains words/phrases from references fields.' stemming_language = u'' id = 6 indexer = u'native' name = u'reference' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_7: last_updated = None description = u'This index contains words/phrases from report numbers fields.' stemming_language = u'' id = 7 indexer = u'native' name = u'reportnumber' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_8: last_updated = None description = u'This index contains words/phrases from title fields.' 
stemming_language = u'' id = 8 indexer = u'native' name = u'title' synonym_kbrs = u'INDEX-SYNONYM-TITLE,exact' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_9: last_updated = None description = u'This index contains words/phrases from fulltext fields.' stemming_language = u'' id = 9 indexer = u'native' name = u'fulltext' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexFulltextTokenizer' class IdxINDEX_10: last_updated = None description = u'This index contains words/phrases from year fields.' stemming_language = u'' id = 10 indexer = u'native' name = u'year' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexYearTokenizer' class IdxINDEX_11: last_updated = None description = u'This index contains words/phrases from journal publication information fields.' stemming_language = u'' id = 11 indexer = u'native' name = u'journal' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexJournalTokenizer' class IdxINDEX_12: last_updated = None description = u'This index contains words/phrases from collaboration name fields.' stemming_language = u'' id = 12 indexer = u'native' name = u'collaboration' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_13: last_updated = None - description = u'This index contains words/phrases from institutional affiliation fields.' + description = u'This index contains words/phrases from affiliation fields.' stemming_language = u'' id = 13 indexer = u'native' name = u'affiliation' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_14: last_updated = None description = u'This index contains exact words/phrases from author fields.' stemming_language = u'' id = 14 indexer = u'native' name = u'exactauthor' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_15: last_updated = None description = u'This index contains exact words/phrases from figure captions.' stemming_language = u'' id = 15 indexer = u'native' name = u'caption' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_16: last_updated = None description = u'This index contains fuzzy words/phrases from first author field.' stemming_language = u'' id = 16 indexer = u'native' name = u'firstauthor' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexAuthorTokenizer' class IdxINDEX_17: last_updated = None description = u'This index contains exact words/phrases from first author field.' stemming_language = u'' id = 17 indexer = u'native' name = u'exactfirstauthor' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexExactAuthorTokenizer' class IdxINDEX_18: last_updated = None description = u'This index contains number of authors of the record.' 
stemming_language = u'' id = 18 indexer = u'native' name = u'authorcount' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexAuthorCountTokenizer' class IdxINDEX_19: last_updated = None description = u'This index contains exact words/phrases from title fields.' stemming_language = u'' id = 19 indexer = u'native' name = u'exacttitle' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_20: last_updated = None description = u'This index contains words/phrases from author authority records.' stemming_language = u'' id = 20 indexer = u'native' name = u'authorityauthor' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexAuthorTokenizer' class IdxINDEX_21: last_updated = None - description = u'This index contains words/phrases from institution authority records.' + description = u'This index contains words/phrases from institute authority records.' stemming_language = u'' id = 21 indexer = u'native' - name = u'authorityinstitution' + name = u'authorityinstitute' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_22: last_updated = None description = u'This index contains words/phrases from journal authority records.' stemming_language = u'' id = 22 indexer = u'native' name = u'authorityjournal' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_23: last_updated = None description = u'This index contains words/phrases from subject authority records.' stemming_language = u'' id = 23 indexer = u'native' name = u'authoritysubject' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEX_24: last_updated = None description = u'This index contains number of copies of items in the library.' stemming_language = u'' id = 24 indexer = u'native' name = u'itemcount' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexItemCountTokenizer' class IdxINDEX_25: last_updated = None description = u'This index contains extensions of files connected to records.' stemming_language = u'' id = 25 indexer = u'native' name = u'filetype' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexFiletypeTokenizer' class IdxINDEX_26: last_updated = None description = u'This index contains words/phrases from miscellaneous fields.' 
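(Note for readers unfamiliar with the `fixture` package used in this file: each inner class of a DataSet becomes one database row, and .ref('id') defers foreign-key resolution until both datasets are loaded. A minimal sketch of the pattern, with demo names rather than the fixtures above:

    from fixture import DataSet

    class IdxINDEXDemo(DataSet):
        class word_index:       # one row in the index table
            id = 99
            name = u'demo'

    class IdxINDEXFieldDemo(DataSet):
        class link_row:         # one row linking an index to a field
            # resolved to IdxINDEXDemo.word_index.id at load time
            id_idxINDEX = IdxINDEXDemo.word_index.ref('id')
            id_field = 42       # hypothetical field id

End of note.)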
stemming_language = u'' id = 26 indexer = u'native' name = u'miscellaneous' synonym_kbrs = u'' remove_stopwords = u'No' remove_html_markup = u'No' remove_latex_markup = u'No' tokenizer = u'BibIndexDefaultTokenizer' class IdxINDEXFieldData(DataSet): class IdxINDEXField_10_12: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_10.ref('id') id_field = FieldData.Field_12.ref('id') class IdxINDEXField_11_19: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_11.ref('id') id_field = FieldData.Field_19.ref('id') class IdxINDEXField_12_20: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_12.ref('id') id_field = FieldData.Field_20.ref('id') class IdxINDEXField_13_21: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_13.ref('id') id_field = FieldData.Field_21.ref('id') class IdxINDEXField_14_22: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_14.ref('id') id_field = FieldData.Field_22.ref('id') class IdxINDEXField_15_27: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_15.ref('id') id_field = FieldData.Field_27.ref('id') class IdxINDEXField_16_28: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_16.ref('id') id_field = FieldData.Field_28.ref('id') class IdxINDEXField_17_29: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_17.ref('id') id_field = FieldData.Field_29.ref('id') class IdxINDEXField_18_30: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_18.ref('id') id_field = FieldData.Field_30.ref('id') class IdxINDEXField_19_32: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_19.ref('id') id_field = FieldData.Field_32.ref('id') class IdxINDEXField_1_1: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_1.ref('id') id_field = FieldData.Field_1.ref('id') class IdxINDEXField_2_10: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_2.ref('id') id_field = FieldData.Field_10.ref('id') class IdxINDEXField_3_4: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_3.ref('id') id_field = FieldData.Field_4.ref('id') class IdxINDEXField_4_3: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_4.ref('id') id_field = FieldData.Field_3.ref('id') class IdxINDEXField_5_5: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_5.ref('id') id_field = FieldData.Field_5.ref('id') class IdxINDEXField_6_8: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_6.ref('id') id_field = FieldData.Field_8.ref('id') class IdxINDEXField_7_6: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_7.ref('id') id_field = FieldData.Field_6.ref('id') class IdxINDEXField_8_2: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_8.ref('id') id_field 
= FieldData.Field_2.ref('id') class IdxINDEXField_9_9: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_9.ref('id') id_field = FieldData.Field_9.ref('id') class IdxINDEXField_20_33: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_20.ref('id') id_field = FieldData.Field_33.ref('id') class IdxINDEXField_21_34: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_21.ref('id') id_field = FieldData.Field_34.ref('id') class IdxINDEXField_22_35: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_22.ref('id') id_field = FieldData.Field_35.ref('id') class IdxINDEXField_23_36: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_23.ref('id') id_field = FieldData.Field_36.ref('id') class IdxINDEXField_24_37: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_24.ref('id') id_field = FieldData.Field_37.ref('id') class IdxINDEXField_25_38: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_25.ref('id') id_field = FieldData.Field_38.ref('id') class IdxINDEXField_26_39: regexp_alphanumeric_separators = u'' regexp_punctuation = u'[.,:;?!"]' id_idxINDEX = IdxINDEXData.IdxINDEX_26.ref('id') id_field = FieldData.Field_39.ref('id') class IdxINDEXIdxINDEXData(DataSet): class IdxINDEXIdxINDEX_1_2: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_2.ref('id') class IdxINDEXIdxINDEX_1_3: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_3.ref('id') class IdxINDEXIdxINDEX_1_5: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_5.ref('id') class IdxINDEXIdxINDEX_1_7: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_7.ref('id') class IdxINDEXIdxINDEX_1_8: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_8.ref('id') class IdxINDEXIdxINDEX_1_10: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_10.ref('id') class IdxINDEXIdxINDEX_1_11: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_11.ref('id') class IdxINDEXIdxINDEX_1_12: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_12.ref('id') class IdxINDEXIdxINDEX_1_13: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_13.ref('id') class IdxINDEXIdxINDEX_1_19: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_19.ref('id') class IdxINDEXIdxINDEX_1_26: id_virtual = IdxINDEXData.IdxINDEX_1.ref('id') id_normal = IdxINDEXData.IdxINDEX_26.ref('id') diff --git a/invenio/modules/records/testsuite/fields/atlantis.cfg b/invenio/modules/records/testsuite/fields/atlantis.cfg index 3e065b6c6..6cdf31e3a 100644 --- a/invenio/modules/records/testsuite/fields/atlantis.cfg +++ b/invenio/modules/records/testsuite/fields/atlantis.cfg @@ -1,1083 +1,1083 @@ ############################################################################### ########## ########## ########## Invenio Atlantis Site Bibfield Configuration File ########## ########## ########## ############################################################################### @persistent_identifier(0) recid: """ """ schema: {'recid': {'type':'integer', 
'min': 1, 'required': True}} creator: @legacy(('001', ''), ) @connect('_id') marc, '001', int(value) producer: json_for_marc(), {'001': ''} @extend modification_date: derived: @legacy('marc', ('005', '')) @depends_on('recid') get_modification_date(self.get('recid', -1)) producer: json_for_marc(), {"005": "self.get('modification_date').strftime('%Y%m%d%H%M%S.0')"} @extend creation_date: creator: @parse_first('recid') @only_if('recid' not in self) marc, '005', datetime.datetime(*(time.strptime(value, "%Y%m%d%H%M%S.0")[0:6])) derived: @depends_on('recid') get_creation_date(self.get('recid', -1)) abstract: creator: @legacy((("520", "520__", "520__%"), "abstract", ""), ("520__a", "abstract", "summary"), ("520__b", "expansion"), ("520__9", "number")) marc, "520__", {'summary':value['a'], 'expansion':value['b'], 'number':value['9']} producer: json_for_marc(), {"520__a": "summary", "520__b": "expansion", "520__9": "number"} abstract_french: creator: @legacy((("590", "590__", "590__%"), ""), ("590__a", "summary"), ("590__b", "expansion")) marc, "590__", {'summary':value['a'], 'expansion':value['b']} producer: json_for_marc(), {"590__a": "sumary", "590__b": "expansion"} accelerator_experiment: creator: @legacy((("693", "693__", "693__%"), ""), ("693__a", "accelerator"), ("693__e", "experiment"), ("693__f", "facility")) marc, "693__", {'accelerator':value['a'], 'experiment':value['e'], 'facility':value['f']} producer: json_for_marc(), {"693__a": "accelerator", "693__b": "experiment", "693__f": "facility"} action_note: creator: @legacy((("583", "583__", "583__%"), ""), ("583__a", "action"), ("583__c", "time"), ("583__i", "email"), ("583__z", "note")) marc, "583__", {'action':value['a'], 'time':value['c'], 'email':value['i'], 'note':value['z']} producer: json_for_marc(), {"583__a": "action", "583__c": "time", "583__i": "email", "583__z": "note"} address: creator: @legacy((("270", "270__", "270__%"), ""), ("270__a", "address"), ("270__b", "city"), ("270__d", "country"), ("270__e", "pc"), ("270__k", "telephone"), ("270__l", "fax"), ("270__m", "email"), ("270__p", "contact"), ("270__s", "suffix"), ("270__9", "telex")) marc, "270__", {'address':value['a'], 'city':value['b'], 'country':value['d'], 'pc':value['e'], 'telephone':value['k'], 'fax':value['l'], 'email':value['m'], 'contact':value['p'], 'suffix':value['s'], 'telex':value['9']} producer: json_for_marc(), {"270__a":"address", "270__b":"city", "270__d":"country", "270__e":"pc", "270__k":"telephone", "270__l":"fax", "270__m":"email", "270__p":"contact", "270__s":"suffix", "270__9":"telex"} affiliation: creator: @legacy((("901", "901__", "901__%"), ""), ("901__u", "")) marc, "901__", value['u'] producer: json_for_marc(), {"901__u": ""} agency_code: creator: @legacy(("003", ""), ) marc, "003", value producer: json_for_marc(), {"003": ""} description: """It contains the code for the agency whose system control number is present in field recid""" aleph_linking_page: creator: @legacy((("962", "962__", "962__%"), ""), ("962__a", "type"), ("962__b", "sysno"), ("962__l", "library"), ("962__n", "down_link"), ("962__m", "up_link"), ("962__y", "volume_link"), ("962__p", "part_link"), ("962__i", "issue_link"), ("962__k", "pages"), ("962__t", "base")) marc, "962__", {'type':value['a'], 'sysno':value['b'], 'library':value['l'], 'down_link':value['n'], 'up_link':value['n'], 'volume_link':value['y'], 'part_link':value['p'], 'issue_link':value['i'], 'pages':value['k'], 'base':value['t']} producer: json_for_marc(), {"962__a":"type", "962__b":"sysno", 
"962__l":"library", "962__n":"down_link", "962__m":"up_link", "962__y":"volume_link", "962__p":"part_link", "962__i":"issue_link", "962__k":"pages", "962__t":"base"} _first_author, first_author, creator: creator: @legacy((("100", "100__", "100__%"), ""), ("100__a", "first author name", "full_name"), ("100__e", "relator_name"), ("100__h", "CCID"), ("100__i", "INSPIRE_number"), ("100__u", "first author affiliation", "affiliation")) marc, "100__", { 'full_name':value['a'], 'first_name':util_split(value['a'],',',1), 'last_name':util_split(value['a'],',',0), 'relator_name':value['e'], 'CCID':value['h'], 'INSPIRE_number':value['i'], 'affiliation':value['u'] } producer: json_for_marc(), {"100__a": "full_name", "100__e": "relator_name", "100__h": "CCID", "100__i": "INSPIRE_number", "100__u": "affiliation"} _additional_authors, additional_authors, contributor: schema: {'_additional_authors': {'type': 'list', 'force': True}} creator: @legacy((("700", "700__", "700__%"), ""), ("700__a", "additional author name", "full_name"), ("700__u", "additional author affiliation", "affiliation")) @parse_first('_first_author') marc, "700__", {'full_name': value['a'], 'first_name':util_split(value['a'],',',1), 'last_name':util_split(value['a'],',',0), 'relator_name':value['e'], 'CCID':value['h'], 'INSPIRE_number':value['i'], 'affiliation':value['u'] } producer: json_for_marc(), {"700__a": "full_name", "700__e": "relator_name", "700__h": "CCID", "700__i": "INSPIRE_number", "700__u": "affiliation"} authors: """List with all the authors, connected with main_author and rest_authors""" derived: @parse_first('_first_author', '_additional_authors') @connect('_first_author', sync_authors) @connect('_additional_authors', sync_authors) @only_if('_firs_author' in self or '_additional_authors' in self) util_merge_fields_info_list(self, ['_first_author', '_additional_authors']) author_archive: creator: @legacy((("720", "720__", "720__%"), ""), ("720__a", "")) marc, "720__", value['a'] producer: json_for_marc(), {"720__a": ""} base: creator: @legacy((("960", "960__", "960__%"), ""), ("960__a", "")) marc, "960__", value['a'] producer: json_for_marc(), {"960__a": ""} cataloguer_info: creator: @legacy((("961", "961__", "961__%"), ""), ("961__a", "cataloguer"), ("961__b", "level"), ("961__c", "modification_date"), ("961__l", "library"), ("961__h", "hour"), ("961__x", "creation_date")) marc, "961__", {'cataloguer':value['a'], 'level':value['b'], 'modification_date':value['c'], 'library':value['l'], 'hour':value['h'], 'creation_date':value['x']} producer: json_for_marc(), {"961__a": "cataloguer", "961__b": "level", "961__c": "modification_date", "961__l": "library", "961__h": "hour", "961__x": "creation_date"} classification_terms: schema: {'classification_terms': {'type': 'list', 'force': True}} creator: @legacy((("694", "694__", "694__%"), ""), ("694__a", "term"), ("694__9", "institute")) marc, "694__", {'term':value['a'], 'institute':value['9']} producer: json_for_marc(), {"694__a": "term", "694__9": "institute"} cern_bookshop_statistics: creator: @legacy((("599", "599__", "599__%"), ""), ("599__a", "number_of_books_bought"), ("599__b", "number_of_books_sold"), ("599__c", "relation")) marc, "599__", {'number_of_books_bought':value['a'], 'number_of_books_sold':value['b'], 'relation':value['c']} producer: json_for_marc(), {"599__a":"number_of_books_bought", "599__b":"number_of_books_sold", "599__c":"relation"} code_designation: creator: @legacy((("033", "033__", "033__%"), ""), ("030__a", "coden", "coden"), ("030__9", "source")) 
marc, "030__", {'coden':value['a'], 'source':value['9']} producer: json_for_marc(), {"030__a":"coden", "030__9":"source"} collections: schema: {'collections': {'type': 'list', 'force': True}} creator: @legacy((("980", "980__", "980__%"), ""), ("980__%", "collection identifier", ""), ("980__a", "primary"), ("980__b", "secondary"), ("980__c", "deleted")) marc, "980__", { 'primary':value['a'], 'secondary':value['b'], 'deleted':value['c'] } producer: json_for_marc(), {"980__a":"primary", "980__b":"secondary", "980__c":"deleted"} comment: creator: @legacy((("500", "500__", "500__%"), ""), ("500__a", "comment", "")) marc, "500__", value['a'] producer: json_for_marc(), {"500__a": ""} content_type: creator: @legacy((("336", "336__", "336__%"), ""), ("336__a", "")) marc, "336__", value['a'] producer: json_for_marc(), {"336__a": ""} description: """Note: use for SLIDES""" copyright: creator: @legacy((("598", "598__", "598__%"), ""), ("598__a", "")) marc, "598__", value['a'] producer: json_for_marc(), {"580__a": ""} _first_corporate_name, first_corporate_name: creator: @legacy((("110", "110__", "110__%"), ""), ("110__a", "name"), ("110__b", "subordinate_unit"), ("110__g", "collaboration")) marc, "110__", {'name':value['a'], 'subordinate_unit':value['b'], 'collaboration':value['g']} producer: json_for_marc(), {"110__a":"name", "110__b":"subordinate_unit", "110__":"collaboration"} _additional_corporate_names, additional_corporate_names: schema: {'_additional_corporate_names': {'type': 'list', 'force': True}} creator: @legacy((("710", "710__", "710__%"), ""), ("710__a", "name"), ("710__b", "subordinate_unit"), ("710__g", "collaboration", "collaboration")) marc, "710__", {'name':value['a'], 'subordinate_unit':value['b'], 'collaboration':value['g']} producer: json_for_marc(), {"710__a":"name", "710__b":"subordinate_unit", "710__":"collaboration"} corporate_names: derived: @parse_first('_first_corporate_name', '_additional_corporate_names') @connect('_first_corporate_name', sync_corparate_names) @connect('_additional_corporate_names', sync_corparate_names) @only_if('_first_corporate_name' in self or '_additional_corporate_names' in self) util_merge_fields_info_list(self, ['_first_corporate_name', '_additional_corporate_names']) cumulative_index: creator: @legacy((("555", "555__", "555__%"), ""), ("555__a", "")) marc, "555__", value['a'] producer: json_for_marc(), {"555__a": ""} current_publication_prequency: creator: @legacy((("310", "310__", "310__%"), ""), ("310__a", "")) marc, "310__", value['a'] producer: json_for_marc(), {"310__a": ""} publishing_country: creator: @legacy((("044", "044__", "044__%"), ""), ("044__a", "")) marc, "044__", value['a'] producer: json_for_marc(), {"044__a": ""} coyright: creator: @legacy((("542", "542__", "542__%"), ""), ("542__d", "holder"), ("542__g", "date"), ("542__u", "url"), ("542__e", "holder_contact"), ("542__f", "statement"), ("542__3", "materials"),) marc, "542__", {'holder':value['d'], 'date':value['g'], 'url':value['u'], 'holder_contact':value['e'], 'statement':value['f'], 'materials':value['3']} producer: json_for_marc(), {"542__d": "holder", "542__g": "date", "542__u": "url", "542__e": "holder_contact", "542__f": "statement", "542__3": "materials"} dewey_decimal_classification_number: creator: @legacy((("082", "082__", "082__%"), ""), ("082__a", "")) marc, "082__", value['a'] producer: json_for_marc(), {"082__a": ""} dissertation_note: creator: @legacy((("502", "502__", "502__%"), ""), ("502__a","diploma"), ("502__b","university"), ("502__c","defense_date")) 
marc, "502__", {'diploma':value['a'], 'university':value['b'], 'defense_date':value['c']} producer: json_for_marc(), {"502__a": "diploma", "502__b": "university", "502__b": "defense_date"} @persistent_identifier(3) doi: creator: @legacy((("024", "0247_", "0247_%"), ""), ("0247_a", "")) marc, "0247_", get_doi(value) producer: json_for_marc(), {'0247_2': 'str("DOI")', '0247_a': ''} edition_statement: creator: @legacy((("250", "250__", "250__%"), ""), ("250__a", "")) marc, "250__", value['a'] producer: json_for_marc(), {"250__a": ""} description: """Information relating to the edition of a work as determined by applicable cataloging rules.""" email: creator: @legacy((("856", "8560_", "8560_%"), ""), ("8560_f", "email")) marc, "8560_", value['f'] producer: json_for_marc(), {"8560_f": ""} email_message: creator: @legacy((("859", "859__", "859__%"), ""), ("859__a","contact"), ("859__f","address"), ("859__x","date")) marc, "859__", {'contact':value['a'], 'address':value['f'], 'date':value['x']} producer: json_for_marc(), {"859__a": 'contact',"859__f": 'address', "859__x": 'date'} fft: creator: @legacy((('FFT', 'FFT__', 'FFT__%'), ''), ("FFT__a", "path"), ("FFT__d", "description"), ("FFT__f", "eformat"), ("FFT__i", "temporary_id"), ("FFT__m", "new_name"), ("FFT__o", "flag"), ("FFT__r", "restriction"), ("FFT__s", "timestamp"), ("FFT__t", "docfile_type"), ("FFT__v", "version"), ("FFT__x", "icon_path"), ("FFT__z", "comment"), ("FFT__w", "document_moreinfo"), ("FFT__p", "version_moreinfo"), ("FFT__b", "version_format_moreinfo"), ("FFT__f", "format_moreinfo")) marc, "FFT__", {'path': value['a'], 'description': value['d'], 'eformat': value['f'], 'temporary_id': value['i'], 'new_name': value['m'], 'flag': value['o'], 'restriction': value['r'], 'timestamp': value['s'], 'docfile_type': value['t'], 'version': value['v'], 'icon_path': value['x'], 'comment': value['z'], 'document_moreinfo': value['w'], 'version_moreinfo': value['p'], 'version_format_moreinfo': value['b'], 'format_moreinfo': value['u'] } @only_if_master_value(is_local_url(value['u'])) marc, "8564_", {'hots_name': value['a'], 'access_number': value['b'], 'compression_information': value['c'], 'path':value['d'], 'electronic_name': value['f'], 'request_processor': value['h'], 'institution': value['i'], 'formart': value['q'], 'settings': value['r'], 'file_size': value['s'], 'url': value['u'], 'subformat':value['x'], 'description':value['y'], 'comment':value['z']} producer: json_for_marc(), {"FFT__a": "path", "FFT__d": "description", "FFT__f": "eformat", "FFT__i": "temporary_id", "FFT__m": "new_name", "FFT__o": "flag", "FFT__r": "restriction", "FFT__s": "timestamp", "FFT__t": "docfile_type", "FFT__v": "version", "FFT__x": "icon_path", "FFT__z": "comment", "FFT__w": "document_moreinfo", "FFT__p": "version_moreinfo", "FFT__b": "version_format_moreinfo", "FFT__f": "format_moreinfo"} funding_info: creator: @legacy((("536", "536__", "536__%"), ""), ("536__a", "agency"), ("536__c", "grant_number"), ("536__f", "project_number"), ("536__r", "access_info")) marc, "536__", {'agency':value['a'], 'grant_number':value['c'], 'project_number':value['f'], 'access_info':value['r']} producer: json_for_marc(), {"536__a": "agency", "536__c": "grant_number", "536__f": "project_number", "536__r": "access_info"} imprint: creator: @legacy((("260", "260__", "260__%"), ""), ("260__a", "place"), ("260__b", "publisher_name"), ("260__c", "date"), ("260__g", "reprinted_editions")) marc, "260__", {'place':value['a'], 'publisher_name':value['b'], 'date':value['c'], 
'reprinted_editions':value['g']} producer: json_for_marc(), {"260__a": "place", "260__b": "publisher_name", "260__c": "date", "260__g": "reprinted_editions"} internal_notes: creator: @legacy((("595", "595__", "595__%"), ""), ("595__a", "internal notes", "internal_note"), ("595__d", "control_field"), ("595__i", "inspec_number"), ("595__s", "subject")) marc, "595__", {'internal_note':value['a'], 'control_field':value['d'], 'inspec_number':value['i'], 'subject':value['s']} producer: json_for_marc(), {"595__a": "internal_note", "595__d": "control_field","595__i": "inspec_number", "595__s": "subject"} isbn: creator: @legacy((("020", "020__", "020__%"), ""), ("020__a", "isbn", "isbn"), ("020__u", "medium")) marc, "020__", {'isbn':value['a'], 'medium':value['u']} producer: json_for_marc(), {"020__a": "isbn", "020__u": "medium"} isn: creator: @legacy((("021", "021__", "021__%"), ""), ("021__a", "")) marc, "021__", value['a'] producer: json_for_marc(), {"021__a": ""} issn: creator: @legacy((("022", "022__", "022__%"), ""), ("022__a", "issn", "")) marc, "022__", value['a'] producer: json_for_marc(), {"022__a": ""} item: creator: @legacy((("964", "964__", "964__%"), ""), ("964__a", "")) marc, "964__", value['a'] producer: json_for_marc(), {"964__a": ""} journal_info: creator: @legacy((("909", "909C4", "909C4%"), "journal", ""), ("909C4a", "doi", "doi"), ("909C4c", "journal page", "pagination"), ("909C4d", "date"), ("909C4e", "recid"), ("909C4f", "note"), ("909C4p", "journal title", "title"), ("909C4u", "url"), ("909C4v", "journal volume", "volume"), ("909C4y", "journal year", "year"), ("909C4t", "talk"), ("909C4w", "cnum"), ("909C4x", "reference")) marc, "909C4", {'doi':value['a'], 'pagination':value['c'], 'date':value['d'], 'recid':value['e'], 'note':value['f'], 'title':value['p'], 'url':value['u'], 'volume':value['v'], 'year':value['y'], 'talk':value['t'], 'cnum':value['w'], 'reference':value['x']} producer: json_for_marc(), {"909C4a": "doi","909C4c": "pagination", "909C4d": "date", "909C4e": "recid", "909C4f": "note", "909C4p": "title", "909C4u": "url","909C4v": "volume", "909C4y": "year", "909C4t": "talk", "909C4w": "cnum", "909C4x": "reference"} keywords: schema: {'keywords': {'type': 'list', 'force': True}} creator: @legacy((("653", "6531_", "6531_%"), ""), ("6531_a", "keyword", "term"), ("6531_9", "institute")) marc, "6531_", { 'term': value['a'], 'institute': value['9'] } producer: json_for_marc(), {"6531_a": "term", "6531_9": "institute"} language: creator: @legacy((("041", "041__", "041__%"), ""), ("041__a", "")) marc, "041__", value['a'] producer: json_for_marc(), {"041__a": ""} language_note: creator: @legacy((("546", "546__", "546__%"), ""), ("546__a", "language_note"), ("546__g", "target_language")) marc, "546__", {'language_note':value['a'], 'target_language':value['g']} producer: json_for_marc(), {"546__a": "language_note", "546__g": "target_language"} library_of_congress_call_number: creator: @legacy((("050", "050__", "050__%"), ""), ("050__a", "classification_number"), ("050__b", "item_number")) marc, "050__", {'classification_number':value['a'], 'item_number':value['b']} producer: json_for_marc(), {"050__a": "classification_number", "050__b": "item_number"} license: creator: @legacy((("540", "540__", "540__%"), ""), ("540__a", "license"), ("540__b", "imposing"), ("540__u", "url"), ("540__3", "material")) marc, "540__", {'license':value['a'], 'imposing':value['b'], 'url':value['u'], 'material':value['3'],} producer: json_for_marc(), {"540__a": "license", "540__b": "imposing", 
"540__u": "url", "540__3": "material"} location: creator: @legacy((("852", "852__", "852__%"), ""), ("852__a", "")) marc, "852__", value['a'] producer: json_for_marc(), {"852__a": ""} medium: creator: @legacy((("340", "340__", "340__%"), ""), ("340__a", "material"), ("340__c", "suface"), ("340__d", "recording_technique"), ("340__d", "cd-rom")) marc, "340__", {'material':value['a'], 'surface':value['c'], 'recording_technique':value['d'], 'cd-rom':value['9']} producer: json_for_marc(), {"340__a": "material", "340__c": "suface", "340__d": "recording_technique", "340__d": "cd-rom"} _first_meeting_name: creator: @legacy((("111", "111__", "111__%"), ""), ("111__a", "meeting"), ("111__c", "location"), ("111__d", "date"), ("111__f", "year"), ("111__g", "coference_code"), ("111__n", "number_of_parts"), ("111__w", "country"), ("111__z", "closing_date"), ("111__9", "opening_date")) marc, "111__", {'meeting':value['a'], 'location':value['c'], 'date':value['d'], 'year':value['f'], 'coference_code':value['g'], 'number_of_parts':value['n'], 'country':value['w'], 'closing_date':value['z'], 'opening_date':value['9']} producer: json_for_marc(), {"111__a": "meeting", "111__c": "location", "111__d": "date","111__f": "year", "111__g": "coference_code", "111__n": "number_of_parts", "111__w": "country", "111__z": "closing_date", "111__9": "opening_date"} _additionla_meeting_names: creator: @legacy((("711", "711__", "711__%"), ""), ("711__a", "meeting"), ("711__c", "location"), ("711__d", "date"), ("711__f", "work_date"), ("711__g", "coference_code"), ("711__n", "number_of_parts"), ("711__9", "opening_date")) marc, "711__", {'meeting':value['a'], 'location':value['c'], 'date':value['d'], 'work_date':value['f'], 'coference_code':value['g'], 'number_of_parts':value['n'], 'opening_date':value['9']} producer: json_ffirst_authoror_marc(), {"711__a": "meeting", "711__c": "location", "711__d": "date", "711__f": "work_date", "711__g": "coference_code", "711__n": "number_of_parts", "711__9": "opening_date"} meeting_names: derived: @parse_first('_first_meeting_name', '_additionla_meeting_names') @connect('_first_meeting_name', sync_meeting_names) @connect('_additionla_meeting_names', sync_meeting_names) @only_if('_first_meeting_name' in self or '_additionla_meeting_names' in self) util_merge_fields_info_list(self, ['_first_meeting_name', '_additionla_meeting_names']) @persistent_identifier(4) oai: creator: @legacy((("024", "0248_", "0248_%"), ""), ("0248_a", "oai"), ("0248_p", "indicator")) marc, "0248_", {'value': value['a'], 'indicator': value['p']} producer: json_for_marc(), {"0248_a": "oai", "0248_p": "indicator"} observation: creator: @legacy((("691", "691__", "691__%"), ""), ("691__a", "")) marc, "691__", value['a'] producer: json_for_marc(), {"691__a": ""} observation_french: creator: @legacy((("597", "597__", "597__%"), ""), ("597__a", "")) marc, "597__", value['a'] producer: json_for_marc(), {"597__a": ""} other_report_number: creator: @legacy((("084", "084__", "084__%"), ""), ("084__a", "clasification_number"), ("084__b", "collection_short"), ("084__2", "source_number")) marc, "084__", {'clasification_number':value['a'], 'collection_short':value['b'], 'source_number':value['2'],} producer: json_for_marc(), {"084__a": "clasification_number", "084__b": "collection_short", "084__2": "source_number"} owner: creator: @legacy((("963", "963__", "963__%"), ""), ("963__a","")) marc, "963__", value['a'] producer: json_for_marc(), {"963__a": ""} prepublication: creator: @legacy((("269", "269__", "269__%"), ""), ("269__a", 
"place"), ("269__b", "publisher_name"), ("269__c", "date")) marc, "269__", {'place':value['a'], 'publisher_name': value['b'], 'date':value['c']} producer: json_for_marc(), {"269__a": "place", "269__b": "publisher_name", "269__c": "date"} description: """ note: don't use the following lines for cer base=14,2n,41-45 !! note: don't use for theses """ primary_report_number: creator: @legacy((("037", "037__", "037__%"), ""), ("037__a", "primary report number", ""), ) marc, "037__", value['a'] producer: json_for_marc(), {"037__a": ""} publication_info: creator: @legacy((("773", "773__", "773__%"), ""), ("773__a", "doi"), ("773__c", "pagination"), ("773__d", "date"), ("773__e", "recid"), ("773__f", "note"), ("773__p", "title"), ("773__u", "url"), ("773__v", "volume"), ("773__y", "year"), ("773__t", "talk"), ("773__w", "cnum"), ("773__x", "reference")) marc, "773__", {'doi':value['a'], 'pagination':value['c'], 'date':value['d'], 'recid':value['e'], 'note':value['f'], 'title':value['p'], 'url':value['u'], 'volume':value['v'], 'year':value['y'], 'talk':value['t'], 'cnum':value['w'], 'reference':value['x']} producer: json_for_marc(), {"773__a": "doi", "773__c": "pagination", "773__d": "date", "773__e": "recid", "773__f": "note", "773__p": "title", "773__u": "url", "773__v": "volume", "773__y": "year", "773__t": "talk", "773__w": "cnum", "773__x": "reference"} description: """note: publication_info.doi not to be used, used instead doi""" physical_description: creator: @legacy((("300", "300__", "300__%", "")), ("300__a", "pagination"), ("300__b", "details")) marc, "300__", {'pagination':value['a'], 'details':value['b']} producer: json_for_marc(), {"300__a": "pagination", "300__b": "details"} reference: creator: @legacy((("999", "999C5", "999C5%"), ""), ("999C5", "reference", ""), ("999C5a", "doi"), ("999C5h", "authors"), ("999C5m", "misc"), ("999C5n", "issue_number"), ("999C5o", "order_number"), ("999C5p", "page"), ("999C5r", "report_number"), ("999C5s", "title"), ("999C5u", "url"), ("999C5v", "volume"), ("999C5y", "year"),) marc, "999C5", {'doi':value['a'], 'authors':value['h'], 'misc':value['m'], 'issue_number':value['n'], 'order_number':value['o'], 'page':value['p'], 'report_number':value['r'], 'title':value['s'], 'url':value['u'], 'volume':value['v'], 'year':value['y'],} producer: json_for_marc(), {"999C5a": "doi", "999C5h": "authors", "999C5m": "misc", "999C5n": "issue_number", "999C5o":"order_number", "999C5p":"page", "999C5r":"report_number", "999C5s":"title", "999C5u":"url", "999C5v":"volume", "999C5y": "year"} restriction_access: creator: @legacy((("506", "506__", "506__%"), ""), ("506__a", "terms"), ("506__9", "local_info")) marc, "506__", {'terms':value['a'], 'local_info':value['9']} producer: json_for_marc(), {"506__a": "terms", "506__9": "local_info"} report_number: creator: @legacy((("088", "088__", "088__%"), ""), ("088__a", "additional report number", "report_number"), ("088__9", "internal")) marc, "088__", {'report_number':value['a'], 'internal':value['9']} producer: json_for_marc(), {"088__a": "report_number", "088__9": "internal"} series: creator: @legacy((("490", "490__", "490__%"), ""), ("490__a", "statement"), ("490__v", "volume")) marc, "490__", {'statement':value['a'], 'volume':value['v']} producer: json_for_marc(), {"490__a": "statement", "490__v": "volume"} slac_note: creator: @legacy((("596", "596__", "596__%"), ""), ("596__a", "slac_note", ""), ) marc, "596__", value['a'] producer: json_for_marc(), {"596__a": ""} source_of_acquisition: creator: @legacy((("541", "541__", 
"541__%"), ""), ("541__a","source_of_acquisition"), ("541__d","date"), ("541__e","accession_number"), ("541_f_","owner"), ("541__h","price_paid"), ("541__9","price_user")) marc, "541__", {'source_of_acquisition':value['a'], 'date':value['d'], 'accession_number':value['e'], 'owner':value['f'], 'price_paid':value['h'], 'price_user':value['9']} producer: json_for_marc(), {"541__a": "source_of_acquisition", "541__d": "date", "541__e": "accession_number", "541_f_": "owner", "541__h": "price_paid", "541__9":"price_user"} status_week: creator: @legacy((("916", "916__", "916__%"), ""), ("916__a","acquistion_proceedings"), ("916__d","display_period"), ("916__e","copies_bought"), ("916__s","status"), ("916__w","status_week"), ("916__y","year")) marc, "916__", {'acquistion_proceedings':value['a'], 'display_period':value['d'], 'copies_bought':value['e'], 'status':value['s'], 'status_week':value['w'], 'year':value['y']} producer: json_for_marc(), {"916__a": "acquistion_proceedings", "916__d": "display_period", "916__e": "copies_bought", "916__s": "status", "916__w": "status_week", "916__y":"year"} subject: creator: @legacy((("650", "65017", "65017%"), ""), ("65017a", "main subject", "term"), ("650172", "source"), ("65017e", "relator")) marc, "65017", {'term':value['a'], 'source':value['2'], 'relator':value['e']} producer: json_for_marc(), {"65017a": "term", "650172": "source", "65017e": "relator"} subject_additional: creator: @legacy((("650", "65027", "65027%"), ""), ("65027a", "additional subject", "term"), ("650272", "source"), ("65027e", "relator"), ("65027p", "percentage")) marc, "65027", {'term':value['a'], 'source':value['2'], 'relator':value['e'], 'percentage':value['p']} producer: json_for_marc(), {"65027a": "term", "650272": "source", "65027e": "relator", "65027p": "percentage"} subject_indicator: creator: @legacy((("690", "690__", "690__%"), ""), ("690c_a", "")) marc, "690c_", value['a'] producer: json_for_marc(), {"690c_a": ""} @persistent_identifier(2) system_control_number: creator: @legacy((("035", "035__", "035__%"), ""), ("035__a", "system_control_number"), ("035__9", "institute")) marc, "035__", {'value': value['a'], 'canceled':value['z'], 'linkpage':value['6'], 'institute':value['9']} producer: json_for_marc(), {"035__a": "system_control_number", "035__9": "institute"} @persistent_identifier(1) system_number: creator: @legacy((("970", "970__", "970__%"), ""), ("970__a", "sysno"), ("970__d", "recid")) marc, "970__", {'value':value['a'], 'recid':value['d']} producer: json_for_marc(), {"970__a": "sysno", "970__d": "recid"} thesaurus_terms: creator: @legacy((("695", "695__", "695__%"), ""), ("695__a", "term"), ("695__9", "institute")) marc, "695__", {'term':value['a'], 'institute':value['9']} producer: json_for_marc(), {"695__a": "term", "695__9": "institute"} time_and_place_of_event_note: creator: @legacy((("518", "518__", "518__%"), ""), ("518__d", "date"), ("518__g", "conference_identification"), ("518__h", "starting_time"), ("518__l", "speech_length"), ("518__r", "meeting")) marc, "519__", {'date':value['d'], 'conference_identification':value['g'], 'starting_time':value['h'], 'speech_length':value['l'], 'meeting':value['r']} producer: json_for_marc(), {"518__d": "date", "518__g": "conference_identification", "518__h": "starting_time", "518__l": "speech_length", "518__r": "meeting"} abbreviated_title: creator: @legacy((("210", "210__", "210__%"), ""), ("210__a", "")) marc, "210__", value['a'] producer: json_for_marc(), {"210__a": ""} main_title_statement: creator: @legacy((("145", 
"145__", "145__%"), ""), ("145__a", "title"), ("145__b", "subtitle"),) marc, "145__", {'title':value['a'], 'subtitle':value['b']} producer: json_for_marc(), {"145__a": "title", "145__b": "subtitle"} title_additional: creator: @legacy((("246", "246__", "246__%"), ""), ("246__%", "additional title", ""), ("246__a", "title"), ("246__b", "subtitle"), ("246__g", "misc"), ("246__i", "text"), ("246__n", "part_number"), ("246__p", "part_name")) marc, "246__", { 'title':value['a'], 'subtitle':value['b'], 'misc':value['g'], 'text':value['i'], 'part_number':value['n'], 'part_name':value['p']} producer: json_for_marc(), {"246__a": "title", "246__b": "subtitle", "246__g": "misc", "246__i": "text", "246__n": "part_number", "246__p": "part_name"} title: creator: @legacy((("245", "245__", "245__%"), ""), ("245__%", "main title", ""), ("245__a", "title", "title"), ("245__b", "subtitle"), ("245__n", "volume"), ("245__k", "form")) marc, "245__", { 'title':value['a'], 'subtitle':value['b'], 'volume': value['n'], 'form':value['k'] } producer: json_for_marc(), {"245__a": "title", "245__b": "subtitle", "245__k": "form"} title_key: creator: @legacy((("222", "222__", "222__%"), ""), ("222__a", "")) marc, "222__", value['a'] producer: json_for_marc(), {"222__a": ""} title_other: creator: @legacy((("246", "246_3", "246_3%"), ""), ("246_3a", "title"), ("246_3i", "text"), ("246_39", "sigle")) marc, "246_3", { 'title':value['a'], 'text':value['i'], 'sigle':value['9']} producer: json_for_marc(), {"246_3a": "title", "246_3i": "text", "246_39": "sigle"} title_parallel: creator: @legacy((("246", "246_1", "246_1%"), ""), ("246_1a", "title"), ("246_1i", "text")) marc, "246_1", { 'title':value['a'], 'text':value['i']} producer: json_for_marc(), {"246_1a": "title", "246_1i": "text"} title_translation: creator: @legacy((("242", "242__", "242__%"), ""), ("242__a", "title"), ("242__b", "subtitle"), ("242__y", "language")) marc, "242__", {'title':value['a'], 'subtitle':value['b'], 'language':value['y']} producer: json_for_marc(), {"242__a": "title", "242__b": "subtitle", "242__y": "language"} type: creator: @legacy((("594", "594__", "594__%"), ""), ("594__a", "")) marc, "594__", value['a'] producer: json_for_marc(), {"594__a": ""} udc: creator: @legacy((("080", "080__", "080__%"), ""), ("080__a", "")) marc, "080__", value['a'] producer: json_for_marc(), {"080__a": ""} description: """"universal decimal classification number""" url: creator: @legacy((("856", "8564_", "8564_%"), ""), ("8564_a", "host_name"), ("8564_b", "access_number"), ("8564_c", "compression_information"), ("8564_d", "path"), ("8564_f", "electronic_name"), ("8564_h", "request_processor"), ("8564_i", "institution"), ("8564_q", "eformat"), ("8564_r", "settings"), ("8564_s", "file_size"), ("8564_u", "url", "url"), ("8564_x", "subformat"), ("8564_y", "caption", "description"), ("8564_z", "comment")) @only_if_master_value((not is_local_url(value['u']), )) marc, "8564_", {'host_name': value['a'], 'access_number': value['b'], 'compression_information': value['c'], 'path':value['d'], 'electronic_name': value['f'], 'request_processor': value['h'], 'institution': value['i'], 'eformart': value['q'], 'settings': value['r'], 'size': value['s'], 'url': value['u'], 'subformat':value['x'], 'description':value['y'], 'comment':value['z']} producer: json_for_marc(), {"8564_a": "host_name", "8564_b": "access_number", "8564_c": "compression_information", "8564_d": "path", "8564_f": "electronic_name", "8564_h": "request_processor", "8564_i": "institution", "8564_q": "eformat", 
"8564_r": "settings", "8564_s": "file_size", "8564_u": "url", "8564_x": "subformat", "8564_y": "description", "8564_z": "comment"} ############################################################################### ########## ########## ########## Derived and Calculated Fields Definitions ########## ########## ########## ############################################################################### files: calculated: @legacy('marc', ("8564_z", "comment"), ("8564_y", "caption", "description"), ("8564_q", "eformat"), ("8564_f", "name"), ("8564_s", "size"), ("8564_u", "url", "url") ) @parse_first('recid') @memoize() get_files_from_bibdoc(self.get('recid', -1)) producer: json_for_marc(), {"8564_z": "comment", "8564_y": "description", "8564_q": "eformat", "8564_f": "name", "8564_s": "size", "8564_u": "url"} description: """ Retrieves all the files related with the recid that were passed to the system using the FFT field described above Note: this is a mandatory field and it shouldn't be remove from this configuration - file. On the other hand the function that retrieve the metadata from BibDoc could + file. On the other hand the function that retrieve the metadata from BibDoc could be enrich. """ number_of_authors: """Number of authors""" derived: @depends_on('authors') len(self['authors']) number_of_copies: calculated: @depends_on('recid', 'collections') @only_if('BOOK' in self.get('collections.primary', [])) @memoize() get_number_of_copies(self['recid']) description: """Number of copies""" number_of_reviews: calculated: @parse_first('recid') @memoize() get_number_of_reviews(self.get('recid')) description: """Number of reviews""" number_of_comments: calculated: @parse_first('recid') @memoize(30) get_number_of_comments(self.get('recid')) description: """Number of comments""" cited_by_count: calculated: @parse_first('recid') @memoize() get_cited_by_count(self.get('recid')) description: """How many records cite given record""" diff --git a/invenio/modules/search/fixtures.py b/invenio/modules/search/fixtures.py index f96349ce3..656e52128 100644 --- a/invenio/modules/search/fixtures.py +++ b/invenio/modules/search/fixtures.py @@ -1,2649 +1,2649 @@ # -*- coding: utf-8 -*- # ## This file is part of Invenio. ## Copyright (C) 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
from invenio.config import CFG_SITE_NAME from fixture import DataSet class CollectionData(DataSet): class siteCollection: id = 1 name = CFG_SITE_NAME dbquery = None class FieldData(DataSet): class Field_1: code = u'anyfield' id = 1 name = u'any field' class Field_2: code = u'title' id = 2 name = u'title' class Field_3: code = u'author' id = 3 name = u'author' class Field_4: code = u'abstract' id = 4 name = u'abstract' class Field_5: code = u'keyword' id = 5 name = u'keyword' class Field_6: code = u'reportnumber' id = 6 name = u'report number' class Field_7: code = u'subject' id = 7 name = u'subject' class Field_8: code = u'reference' id = 8 name = u'reference' class Field_9: code = u'fulltext' id = 9 name = u'fulltext' class Field_10: code = u'collection' id = 10 name = u'collection' class Field_11: code = u'division' id = 11 name = u'division' class Field_12: code = u'year' id = 12 name = u'year' class Field_13: code = u'experiment' id = 13 name = u'experiment' class Field_14: code = u'recid' id = 14 name = u'record ID' class Field_15: code = u'isbn' id = 15 name = u'isbn' class Field_16: code = u'issn' id = 16 name = u'issn' class Field_17: code = u'coden' id = 17 name = u'coden' #class Field_18: # code = u'doi' # id = 18 # name = u'doi' class Field_19: code = u'journal' id = 19 name = u'journal' class Field_20: code = u'collaboration' id = 20 name = u'collaboration' class Field_21: code = u'affiliation' id = 21 name = u'affiliation' class Field_22: code = u'exactauthor' id = 22 name = u'exact author' class Field_23: code = u'datecreated' id = 23 name = u'date created' class Field_24: code = u'datemodified' id = 24 name = u'date modified' class Field_25: code = u'refersto' id = 25 name = u'refers to' class Field_26: code = u'citedby' id = 26 name = u'cited by' class Field_27: code = u'caption' id = 27 name = u'caption' class Field_28: code = u'firstauthor' id = 28 name = u'first author' class Field_29: code = u'exactfirstauthor' id = 29 name = u'exact first author' class Field_30: code = u'authorcount' id = 30 name = u'author count' class Field_31: code = u'rawref' id = 31 name = u'reference to' class Field_32: code = u'exacttitle' id = 32 name = u'exact title' class Field_33: code = u'authorityauthor' id = 33 name = u'authority author' class Field_34: - code = u'authorityinstitution' + code = u'authorityinstitute' id = 34 name = u'authority institution' class Field_35: code = u'authorityjournal' id = 35 name = u'authority journal' class Field_36: code = u'authoritysubject' id = 36 name = u'authority subject' class Field_37: code = u'itemcount' id = 37 name = u'item count' class Field_38: code = u'filetype' id = 38 name = u'file type' class Field_39: code = u'miscellaneous' id = 39 name = u'miscellaneous' class Field_40: code = u'tag' id = 40 name = u'tag' class TagData(DataSet): class Tag_1: id = 1 value = u'100__a' name = u'first author name' class Tag_2: id = 2 value = u'700__a' name = u'additional author name' class Tag_3: id = 3 value = u'245__%' name = u'main title' class Tag_4: id = 4 value = u'246__%' name = u'additional title' class Tag_5: id = 5 value = u'520__%' name = u'abstract' class Tag_6: id = 6 value = u'6531_a' name = u'keyword' class Tag_7: id = 7 value = u'037__a' name = u'primary report number' class Tag_8: id = 8 value = u'088__a' name = u'additional report number' class Tag_9: id = 9 value = u'909C0r' name = u'added report number' class Tag_10: id = 10 value = u'999C5%' name = u'reference' class Tag_11: id = 11 value = u'980__%' name = u'collection identifier' 
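# Note: a trailing '%' in a tag value acts as a wildcard over MARC
# indicators/subfields, e.g. u'245__%' covers every subfield of field 245
# (cf. 'main title' above), while u'24%' covers the whole 24x block.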
class Tag_12: id = 12 value = u'65017a' name = u'main subject' class Tag_13: id = 13 value = u'65027a' name = u'additional subject' class Tag_14: id = 14 value = u'909C0p' name = u'division' class Tag_15: id = 15 value = u'909C0y' name = u'year' class Tag_16: id = 16 value = u'00%' name = u'00x' class Tag_17: id = 17 value = u'01%' name = u'01x' class Tag_18: id = 18 value = u'02%' name = u'02x' class Tag_19: id = 19 value = u'03%' name = u'03x' class Tag_20: id = 20 value = u'04%' name = u'lang' class Tag_21: id = 21 value = u'05%' name = u'05x' class Tag_22: id = 22 value = u'06%' name = u'06x' class Tag_23: id = 23 value = u'07%' name = u'07x' class Tag_24: id = 24 value = u'08%' name = u'08x' class Tag_25: id = 25 value = u'09%' name = u'09x' class Tag_26: id = 26 value = u'10%' name = u'10x' class Tag_27: id = 27 value = u'11%' name = u'11x' class Tag_28: id = 28 value = u'12%' name = u'12x' class Tag_29: id = 29 value = u'13%' name = u'13x' class Tag_30: id = 30 value = u'14%' name = u'14x' class Tag_31: id = 31 value = u'15%' name = u'15x' class Tag_32: id = 32 value = u'16%' name = u'16x' class Tag_33: id = 33 value = u'17%' name = u'17x' class Tag_34: id = 34 value = u'18%' name = u'18x' class Tag_35: id = 35 value = u'19%' name = u'19x' class Tag_36: id = 36 value = u'20%' name = u'20x' class Tag_37: id = 37 value = u'21%' name = u'21x' class Tag_38: id = 38 value = u'22%' name = u'22x' class Tag_39: id = 39 value = u'23%' name = u'23x' class Tag_40: id = 40 value = u'24%' name = u'24x' class Tag_41: id = 41 value = u'25%' name = u'25x' class Tag_42: id = 42 value = u'26%' name = u'internal' class Tag_43: id = 43 value = u'27%' name = u'27x' class Tag_44: id = 44 value = u'28%' name = u'28x' class Tag_45: id = 45 value = u'29%' name = u'29x' class Tag_46: id = 46 value = u'30%' name = u'pages' class Tag_47: id = 47 value = u'31%' name = u'31x' class Tag_48: id = 48 value = u'32%' name = u'32x' class Tag_49: id = 49 value = u'33%' name = u'33x' class Tag_50: id = 50 value = u'34%' name = u'34x' class Tag_51: id = 51 value = u'35%' name = u'35x' class Tag_52: id = 52 value = u'36%' name = u'36x' class Tag_53: id = 53 value = u'37%' name = u'37x' class Tag_54: id = 54 value = u'38%' name = u'38x' class Tag_55: id = 55 value = u'39%' name = u'39x' class Tag_56: id = 56 value = u'40%' name = u'40x' class Tag_57: id = 57 value = u'41%' name = u'41x' class Tag_58: id = 58 value = u'42%' name = u'42x' class Tag_59: id = 59 value = u'43%' name = u'43x' class Tag_60: id = 60 value = u'44%' name = u'44x' class Tag_61: id = 61 value = u'45%' name = u'45x' class Tag_62: id = 62 value = u'46%' name = u'46x' class Tag_63: id = 63 value = u'47%' name = u'47x' class Tag_64: id = 64 value = u'48%' name = u'48x' class Tag_65: id = 65 value = u'49%' name = u'series' class Tag_66: id = 66 value = u'50%' name = u'50x' class Tag_67: id = 67 value = u'51%' name = u'51x' class Tag_68: id = 68 value = u'52%' name = u'52x' class Tag_69: id = 69 value = u'53%' name = u'53x' class Tag_70: id = 70 value = u'54%' name = u'54x' class Tag_71: id = 71 value = u'55%' name = u'55x' class Tag_72: id = 72 value = u'56%' name = u'56x' class Tag_73: id = 73 value = u'57%' name = u'57x' class Tag_74: id = 74 value = u'58%' name = u'58x' class Tag_75: id = 75 value = u'59%' name = u'summary' class Tag_76: id = 76 value = u'60%' name = u'60x' class Tag_77: id = 77 value = u'61%' name = u'61x' class Tag_78: id = 78 value = u'62%' name = u'62x' class Tag_79: id = 79 value = u'63%' name = u'63x' class Tag_80: id = 80 value = 
u'64%' name = u'64x' class Tag_81: id = 81 value = u'65%' name = u'65x' class Tag_82: id = 82 value = u'66%' name = u'66x' class Tag_83: id = 83 value = u'67%' name = u'67x' class Tag_84: id = 84 value = u'68%' name = u'68x' class Tag_85: id = 85 value = u'69%' name = u'subject' class Tag_86: id = 86 value = u'70%' name = u'70x' class Tag_87: id = 87 value = u'71%' name = u'71x' class Tag_88: id = 88 value = u'72%' name = u'author-ad' class Tag_89: id = 89 value = u'73%' name = u'73x' class Tag_90: id = 90 value = u'74%' name = u'74x' class Tag_91: id = 91 value = u'75%' name = u'75x' class Tag_92: id = 92 value = u'76%' name = u'76x' class Tag_93: id = 93 value = u'77%' name = u'77x' class Tag_94: id = 94 value = u'78%' name = u'78x' class Tag_95: id = 95 value = u'79%' name = u'79x' class Tag_96: id = 96 value = u'80%' name = u'80x' class Tag_97: id = 97 value = u'81%' name = u'81x' class Tag_98: id = 98 value = u'82%' name = u'82x' class Tag_99: id = 99 value = u'83%' name = u'83x' class Tag_100: id = 100 value = u'84%' name = u'84x' class Tag_101: id = 101 value = u'85%' name = u'electr' class Tag_102: id = 102 value = u'86%' name = u'86x' class Tag_103: id = 103 value = u'87%' name = u'87x' class Tag_104: id = 104 value = u'88%' name = u'88x' class Tag_105: id = 105 value = u'89%' name = u'89x' class Tag_106: id = 106 value = u'90%' name = u'publication' class Tag_107: id = 107 value = u'91%' name = u'pub-conf-cit' class Tag_108: id = 108 value = u'92%' name = u'92x' class Tag_109: id = 109 value = u'93%' name = u'93x' class Tag_110: id = 110 value = u'94%' name = u'94x' class Tag_111: id = 111 value = u'95%' name = u'95x' class Tag_112: id = 112 value = u'96%' name = u'catinfo' class Tag_113: id = 113 value = u'97%' name = u'97x' class Tag_114: id = 114 value = u'98%' name = u'98x' class Tag_115: id = 115 value = u'8564_u' name = u'url' class Tag_116: id = 116 value = u'909C0e' name = u'experiment' class Tag_117: id = 117 value = u'001' name = u'record ID' class Tag_118: id = 118 value = u'020__a' name = u'isbn' class Tag_119: id = 119 value = u'022__a' name = u'issn' class Tag_120: id = 120 value = u'030__a' name = u'coden' class Tag_121: id = 121 value = u'909C4a' name = u'doi' class Tag_122: id = 122 value = u'850%' name = u'850x' class Tag_123: id = 123 value = u'851%' name = u'851x' class Tag_124: id = 124 value = u'852%' name = u'852x' class Tag_125: id = 125 value = u'853%' name = u'853x' class Tag_126: id = 126 value = u'854%' name = u'854x' class Tag_127: id = 127 value = u'855%' name = u'855x' class Tag_128: id = 128 value = u'857%' name = u'857x' class Tag_129: id = 129 value = u'858%' name = u'858x' class Tag_130: id = 130 value = u'859%' name = u'859x' class Tag_131: id = 131 value = u'909C4%' name = u'journal' class Tag_132: id = 132 value = u'710__g' name = u'collaboration' class Tag_133: id = 133 value = u'100__u' name = u'first author affiliation' class Tag_134: id = 134 value = u'700__u' name = u'additional author affiliation' class Tag_135: id = 135 value = u'8564_y' name = u'caption' class Tag_136: id = 136 value = u'909C4c' name = u'journal page' class Tag_137: id = 137 value = u'909C4p' name = u'journal title' class Tag_138: id = 138 value = u'909C4v' name = u'journal volume' class Tag_139: id = 139 value = u'909C4y' name = u'journal year' class Tag_140: id = 140 value = u'500__a' name = u'comment' class Tag_141: id = 141 value = u'245__a' name = u'title' class Tag_142: id = 142 value = u'245__a' name = u'main abstract' class Tag_143: id = 143 value = u'595__a' 
name = u'internal notes' class Tag_144: id = 144 value = u'787%' name = u'other relationship entry' class Tag_146: id = 146 value = u'400__a' name = u'authority: alternative personal name' class Tag_148: id = 148 value = u'110__a' name = u'authority: organization main name' class Tag_149: id = 149 value = u'410__a' name = u'organization alternative name' class Tag_150: id = 150 value = u'510__a' name = u'organization main from other record' class Tag_151: id = 151 value = u'130__a' name = u'authority: uniform title' class Tag_152: id = 152 value = u'430__a' name = u'authority: uniform title alternatives' class Tag_153: id = 153 value = u'530__a' name = u'authority: uniform title from other record' class Tag_154: id = 154 value = u'150__a' name = u'authority: subject from other record' class Tag_155: id = 155 value = u'450__a' name = u'authority: subject alternative name' class Tag_156: id = 156 value = u'450__a' name = u'authority: subject main name' class Tag_157: id = 157 value = u'031%' name = u'031x' class Tag_158: id = 158 value = u'032%' name = u'032x' class Tag_159: id = 159 value = u'033%' name = u'033x' class Tag_160: id = 160 value = u'034%' name = u'034x' class Tag_161: id = 161 value = u'035%' name = u'035x' class Tag_162: id = 162 value = u'036%' name = u'036x' class Tag_163: id = 163 value = u'037%' name = u'037x' class Tag_164: id = 164 value = u'038%' name = u'038x' class Tag_165: id = 165 value = u'080%' name = u'080x' class Tag_166: id = 166 value = u'082%' name = u'082x' class Tag_167: id = 167 value = u'083%' name = u'083x' class Tag_168: id = 168 value = u'084%' name = u'084x' class Tag_169: id = 169 value = u'085%' name = u'085x' class Tag_170: id = 170 value = u'086%' name = u'086x' class Tag_171: id = 171 value = u'240%' name = u'240x' class Tag_172: id = 172 value = u'242%' name = u'242x' class Tag_173: id = 173 value = u'243%' name = u'243x' class Tag_174: id = 174 value = u'244%' name = u'244x' class Tag_175: id = 175 value = u'247%' name = u'247x' class Tag_176: id = 176 value = u'521%' name = u'521x' class Tag_177: id = 177 value = u'522%' name = u'522x' class Tag_178: id = 178 value = u'524%' name = u'524x' class Tag_179: id = 179 value = u'525%' name = u'525x' class Tag_180: id = 180 value = u'526%' name = u'526x' class Tag_181: id = 181 value = u'650%' name = u'650x' class Tag_182: id = 182 value = u'651%' name = u'651x' class Tag_183: id = 183 value = u'6531_v' name = u'6531_v' class Tag_184: id = 184 value = u'6531_y' name = u'6531_y' class Tag_185: id = 185 value = u'6531_9' name = u'6531_9' class Tag_186: id = 186 value = u'654%' name = u'654x' class Tag_187: id = 187 value = u'655%' name = u'655x' class Tag_188: id = 188 value = u'656%' name = u'656x' class Tag_189: id = 189 value = u'657%' name = u'657x' class Tag_190: id = 190 value = u'658%' name = u'658x' class Tag_191: id = 191 value = u'711%' name = u'711x' class Tag_192: id = 192 value = u'900%' name = u'900x' class Tag_193: id = 193 value = u'901%' name = u'901x' class Tag_194: id = 194 value = u'902%' name = u'902x' class Tag_195: id = 195 value = u'903%' name = u'903x' class Tag_196: id = 196 value = u'904%' name = u'904x' class Tag_197: id = 197 value = u'905%' name = u'905x' class Tag_198: id = 198 value = u'906%' name = u'906x' class Tag_199: id = 199 value = u'907%' name = u'907x' class Tag_200: id = 200 value = u'908%' name = u'908x' class Tag_201: id = 201 value = u'909C1%' name = u'909C1x' class Tag_202: id = 202 value = u'909C5%' name = u'909C5x' class Tag_203: id = 203 value = 
u'909CS%' name = u'909CSx' class Tag_204: id = 204 value = u'909CO%' name = u'909COx' class Tag_205: id = 205 value = u'909CK%' name = u'909CKx' class Tag_206: id = 206 value = u'909CP%' name = u'909CPx' class Tag_207: id = 207 value = u'981%' name = u'981x' class Tag_208: id = 208 value = u'982%' name = u'982x' class Tag_209: id = 209 value = u'983%' name = u'983x' class Tag_210: id = 210 value = u'984%' name = u'984x' class Tag_211: id = 211 value = u'985%' name = u'985x' class Tag_212: id = 212 value = u'986%' name = u'986x' class Tag_213: id = 213 value = u'987%' name = u'987x' class Tag_214: id = 214 value = u'988%' name = u'988x' class Tag_215: id = 215 value = u'989%' name = u'989x' class Tag_216: id = 216 value = u'100__0' name = u'author control' class Tag_217: id = 217 value = u'110__0' - name = u'institution control' + name = u'institute control' class Tag_218: id = 218 value = u'130__0' name = u'journal control' class Tag_219: id = 219 value = u'150__0' name = u'subject control' class Tag_220: id = 220 value = u'260__0' - name = u'additional institution control' + name = u'additional institute control' class Tag_221: id = 221 value = u'700__0' name = u'additional author control' class FormatData(DataSet): class Format_1: code = u'hb' last_updated = None description = u'HTML brief output format, used for search results pages.' content_type = u'text/html' id = 1 visibility = 1 name = u'HTML brief' class Format_2: code = u'hd' last_updated = None description = u'HTML detailed output format, used for Detailed record pages.' content_type = u'text/html' id = 2 visibility = 1 name = u'HTML detailed' class Format_3: code = u'hm' last_updated = None description = u'HTML MARC.' content_type = u'text/html' id = 3 visibility = 1 name = u'MARC' class Format_4: code = u'xd' last_updated = None description = u'XML Dublin Core.' content_type = u'text/xml' id = 4 visibility = 1 name = u'Dublin Core' class Format_5: code = u'xm' last_updated = None description = u'XML MARC.' content_type = u'text/xml' id = 5 visibility = 1 name = u'MARCXML' class Format_6: code = u'hp' last_updated = None description = u'HTML portfolio-style output format for photos.' content_type = u'text/html' id = 6 visibility = 1 name = u'portfolio' class Format_7: code = u'hc' last_updated = None description = u'HTML caption-only output format for photos.' content_type = u'text/html' id = 7 visibility = 1 name = u'photo captions only' class Format_8: code = u'hx' last_updated = None description = u'BibTeX.' content_type = u'text/html' id = 8 visibility = 1 name = u'BibTeX' class Format_9: code = u'xe' last_updated = None description = u'XML EndNote.' content_type = u'text/xml' id = 9 visibility = 1 name = u'EndNote' class Format_10: code = u'xn' last_updated = None description = u'XML NLM.' content_type = u'text/xml' id = 10 visibility = 1 name = u'NLM' class Format_11: code = u'excel' last_updated = None description = u'Excel csv output' content_type = u'application/ms-excel' id = 11 visibility = 0 name = u'Excel' class Format_12: code = u'hs' last_updated = None description = u'Very short HTML output for similarity box (people also viewed..).' content_type = u'text/html' id = 12 visibility = 0 name = u'HTML similarity' class Format_13: code = u'xr' last_updated = None description = u'RSS.' content_type = u'text/xml' id = 13 visibility = 0 name = u'RSS' class Format_14: code = u'xoaidc' last_updated = None description = u'OAI DC.' 
content_type = u'text/xml' id = 14 visibility = 0 name = u'OAI DC' class Format_15: code = u'hdfile' last_updated = None description = u'Used to show fulltext files in mini-panel of detailed record pages.' content_type = u'text/html' id = 15 visibility = 0 name = u'File mini-panel' class Format_16: code = u'hdact' last_updated = None description = u'Used to display actions in mini-panel of detailed record pages.' content_type = u'text/html' id = 16 visibility = 0 name = u'Actions mini-panel' class Format_17: code = u'hdref' last_updated = None description = u'Display record references in References tab.' content_type = u'text/html' id = 17 visibility = 0 name = u'References tab' class Format_18: code = u'hcs' last_updated = None description = u'HTML cite summary format, used for search results pages.' content_type = u'text/html' id = 18 visibility = 1 name = u'HTML citesummary' class Format_19: code = u'xw' last_updated = None description = u'RefWorks.' content_type = u'text/xml' id = 19 visibility = 1 name = u'RefWorks' class Format_20: code = u'xo' last_updated = None description = u'Metadata Object Description Schema' content_type = u'application/xml' id = 20 visibility = 1 name = u'MODS' class Format_21: code = u'ha' last_updated = None description = u'Very brief HTML output format for author/paper claiming facility.' content_type = u'text/html' id = 21 visibility = 0 name = u'HTML author claiming' class Format_22: code = u'xp' last_updated = None description = u'Sample format suitable for multimedia feeds, such as podcasts' content_type = u'application/rss+xml' id = 22 visibility = 0 name = u'Podcast' class Format_23: code = u'wapaff' last_updated = None description = u'cPickled dicts' content_type = u'text' id = 23 visibility = 0 name = u'WebAuthorProfile affiliations helper' class Format_24: code = u'xe8x' last_updated = None description = u'XML EndNote (8-X).' content_type = u'text/xml' id = 24 visibility = 1 name = u'EndNote (8-X)' class Format_25: code = u'hcs2' last_updated = None description = u'HTML cite summary format, including self-citations counts.' content_type = u'text/html' id = 25 visibility = 0 name = u'HTML citesummary extended' class Format_26: code = u'dcite' last_updated = None description = u'DataCite XML format.' content_type = u'text/xml' id = 26 visibility = 0 name = u'DataCite' class Format_27: code = u'mobb' last_updated = None description = u'Mobile brief format.' content_type = u'text/html' id = 27 visibility = 0 name = u'Mobile brief' class Format_28: code = u'mobd' last_updated = None description = u'Mobile detailed format.' content_type = u'text/html' id = 28 visibility = 0 name = u'Mobile detailed' class Format_29: code = u'tm' last_updated = None description = u'Text MARC.' 
content_type = u'text/plain' id = 29 visibility = 0 name = u'Text MARC' class FieldTagData(DataSet): class FieldTag_10_11: score = 100 id_tag = TagData.Tag_11.ref('id') id_field = FieldData.Field_10.ref('id') class FieldTag_11_14: score = 100 id_tag = TagData.Tag_14.ref('id') id_field = FieldData.Field_11.ref('id') class FieldTag_12_15: score = 10 id_tag = TagData.Tag_15.ref('id') id_field = FieldData.Field_12.ref('id') class FieldTag_13_116: score = 10 id_tag = TagData.Tag_116.ref('id') id_field = FieldData.Field_13.ref('id') class FieldTag_14_117: score = 100 id_tag = TagData.Tag_117.ref('id') id_field = FieldData.Field_14.ref('id') class FieldTag_15_118: score = 100 id_tag = TagData.Tag_118.ref('id') id_field = FieldData.Field_15.ref('id') class FieldTag_16_119: score = 100 id_tag = TagData.Tag_119.ref('id') id_field = FieldData.Field_16.ref('id') class FieldTag_17_120: score = 100 id_tag = TagData.Tag_120.ref('id') id_field = FieldData.Field_17.ref('id') #class FieldTag_18_120: # score = 100 # id_tag = TagData.Tag_121.ref('id') # id_field = FieldData.Field_18.ref('id') class FieldTag_19_131: score = 100 id_tag = TagData.Tag_131.ref('id') id_field = FieldData.Field_19.ref('id') class FieldTag_20_132: score = 100 id_tag = TagData.Tag_132.ref('id') id_field = FieldData.Field_20.ref('id') class FieldTag_21_133: score = 100 id_tag = TagData.Tag_133.ref('id') id_field = FieldData.Field_21.ref('id') class FieldTag_21_134: score = 90 id_tag = TagData.Tag_134.ref('id') id_field = FieldData.Field_21.ref('id') class FieldTag_22_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_22.ref('id') class FieldTag_22_2: score = 90 id_tag = TagData.Tag_2.ref('id') id_field = FieldData.Field_22.ref('id') class FieldTag_27_135: score = 100 id_tag = TagData.Tag_135.ref('id') id_field = FieldData.Field_27.ref('id') class FieldTag_28_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_28.ref('id') class FieldTag_29_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_29.ref('id') class FieldTag_2_3: score = 100 id_tag = TagData.Tag_3.ref('id') id_field = FieldData.Field_2.ref('id') class FieldTag_2_4: score = 90 id_tag = TagData.Tag_4.ref('id') id_field = FieldData.Field_2.ref('id') class FieldTag_30_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_30.ref('id') class FieldTag_30_2: score = 90 id_tag = TagData.Tag_2.ref('id') id_field = FieldData.Field_30.ref('id') class FieldTag_32_3: score = 100 id_tag = TagData.Tag_3.ref('id') id_field = FieldData.Field_32.ref('id') class FieldTag_32_4: score = 90 id_tag = TagData.Tag_4.ref('id') id_field = FieldData.Field_32.ref('id') class FieldTag_3_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_3.ref('id') class FieldTag_3_2: score = 90 id_tag = TagData.Tag_2.ref('id') id_field = FieldData.Field_3.ref('id') class FieldTag_4_5: score = 100 id_tag = TagData.Tag_5.ref('id') id_field = FieldData.Field_4.ref('id') class FieldTag_5_6: score = 100 id_tag = TagData.Tag_6.ref('id') id_field = FieldData.Field_5.ref('id') class FieldTag_6_7: score = 30 id_tag = TagData.Tag_7.ref('id') id_field = FieldData.Field_6.ref('id') class FieldTag_6_8: score = 10 id_tag = TagData.Tag_8.ref('id') id_field = FieldData.Field_6.ref('id') class FieldTag_6_9: score = 20 id_tag = TagData.Tag_9.ref('id') id_field = FieldData.Field_6.ref('id') class FieldTag_7_12: score = 100 id_tag = TagData.Tag_12.ref('id') id_field = FieldData.Field_7.ref('id') class FieldTag_7_13: score = 90 
id_tag = TagData.Tag_13.ref('id') id_field = FieldData.Field_7.ref('id') class FieldTag_8_10: score = 100 id_tag = TagData.Tag_10.ref('id') id_field = FieldData.Field_8.ref('id') class FieldTag_9_115: score = 100 id_tag = TagData.Tag_115.ref('id') id_field = FieldData.Field_9.ref('id') class FieldTag_33_1: score = 100 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_33.ref('id') class FieldTag_33_146: score = 100 id_tag = TagData.Tag_146.ref('id') id_field = FieldData.Field_33.ref('id') class FieldTag_33_140: score = 100 id_tag = TagData.Tag_140.ref('id') id_field = FieldData.Field_33.ref('id') class FieldTag_34_148: score = 100 id_tag = TagData.Tag_148.ref('id') id_field = FieldData.Field_34.ref('id') class FieldTag_34_149: score = 100 id_tag = TagData.Tag_149.ref('id') id_field = FieldData.Field_34.ref('id') class FieldTag_34_150: score = 100 id_tag = TagData.Tag_150.ref('id') id_field = FieldData.Field_34.ref('id') class FieldTag_35_151: score = 100 id_tag = TagData.Tag_151.ref('id') id_field = FieldData.Field_35.ref('id') class FieldTag_35_152: score = 100 id_tag = TagData.Tag_152.ref('id') id_field = FieldData.Field_35.ref('id') class FieldTag_35_153: score = 100 id_tag = TagData.Tag_153.ref('id') id_field = FieldData.Field_35.ref('id') class FieldTag_36_154: score = 100 id_tag = TagData.Tag_154.ref('id') id_field = FieldData.Field_36.ref('id') class FieldTag_36_155: score = 100 id_tag = TagData.Tag_155.ref('id') id_field = FieldData.Field_36.ref('id') class FieldTag_36_156: score = 100 id_tag = TagData.Tag_156.ref('id') id_field = FieldData.Field_36.ref('id') class FieldTag_39_17: score = 10 id_tag = TagData.Tag_17.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_18: score = 10 id_tag = TagData.Tag_18.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_157: score = 10 id_tag = TagData.Tag_157.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_158: score = 10 id_tag = TagData.Tag_158.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_159: score = 10 id_tag = TagData.Tag_159.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_160: score = 10 id_tag = TagData.Tag_160.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_161: score = 10 id_tag = TagData.Tag_161.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_162: score = 10 id_tag = TagData.Tag_162.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_163: score = 10 id_tag = TagData.Tag_163.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_164: score = 10 id_tag = TagData.Tag_164.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_20: score = 10 id_tag = TagData.Tag_20.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_21: score = 10 id_tag = TagData.Tag_21.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_22: score = 10 id_tag = TagData.Tag_22.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_23: score = 10 id_tag = TagData.Tag_23.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_165: score = 10 id_tag = TagData.Tag_165.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_166: score = 10 id_tag = TagData.Tag_166.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_167: score = 10 id_tag = TagData.Tag_167.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_168: score = 10 id_tag = TagData.Tag_168.ref('id') id_field = FieldData.Field_39.ref('id') 
class FieldTag_39_169: score = 10 id_tag = TagData.Tag_169.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_170: score = 10 id_tag = TagData.Tag_170.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_25: score = 10 id_tag = TagData.Tag_25.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_27: score = 10 id_tag = TagData.Tag_27.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_28: score = 10 id_tag = TagData.Tag_28.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_29: score = 10 id_tag = TagData.Tag_29.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_30: score = 10 id_tag = TagData.Tag_30.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_31: score = 10 id_tag = TagData.Tag_31.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_32: score = 10 id_tag = TagData.Tag_32.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_33: score = 10 id_tag = TagData.Tag_33.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_34: score = 10 id_tag = TagData.Tag_34.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_35: score = 10 id_tag = TagData.Tag_35.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_36: score = 10 id_tag = TagData.Tag_36.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_37: score = 10 id_tag = TagData.Tag_37.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_38: score = 10 id_tag = TagData.Tag_38.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_39: score = 10 id_tag = TagData.Tag_39.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_171: score = 10 id_tag = TagData.Tag_171.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_172: score = 10 id_tag = TagData.Tag_172.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_173: score = 10 id_tag = TagData.Tag_173.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_174: score = 10 id_tag = TagData.Tag_174.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_175: score = 10 id_tag = TagData.Tag_175.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_41: score = 10 id_tag = TagData.Tag_41.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_42: score = 10 id_tag = TagData.Tag_42.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_43: score = 10 id_tag = TagData.Tag_43.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_44: score = 10 id_tag = TagData.Tag_44.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_45: score = 10 id_tag = TagData.Tag_45.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_46: score = 10 id_tag = TagData.Tag_46.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_47: score = 10 id_tag = TagData.Tag_47.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_48: score = 10 id_tag = TagData.Tag_48.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_49: score = 10 id_tag = TagData.Tag_49.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_50: score = 10 id_tag = TagData.Tag_50.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_51: score = 10 id_tag = TagData.Tag_51.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_52: score = 10 id_tag = TagData.Tag_52.ref('id') id_field = FieldData.Field_39.ref('id') class 
FieldTag_39_53: score = 10 id_tag = TagData.Tag_53.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_54: score = 10 id_tag = TagData.Tag_54.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_55: score = 10 id_tag = TagData.Tag_55.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_56: score = 10 id_tag = TagData.Tag_56.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_57: score = 10 id_tag = TagData.Tag_57.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_58: score = 10 id_tag = TagData.Tag_58.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_59: score = 10 id_tag = TagData.Tag_59.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_60: score = 10 id_tag = TagData.Tag_60.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_61: score = 10 id_tag = TagData.Tag_61.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_62: score = 10 id_tag = TagData.Tag_62.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_63: score = 10 id_tag = TagData.Tag_63.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_64: score = 10 id_tag = TagData.Tag_64.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_65: score = 10 id_tag = TagData.Tag_65.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_66: score = 10 id_tag = TagData.Tag_66.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_67: score = 10 id_tag = TagData.Tag_67.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_176: score = 10 id_tag = TagData.Tag_176.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_177: score = 10 id_tag = TagData.Tag_177.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_178: score = 10 id_tag = TagData.Tag_178.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_179: score = 10 id_tag = TagData.Tag_179.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_180: score = 10 id_tag = TagData.Tag_180.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_69: score = 10 id_tag = TagData.Tag_69.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_70: score = 10 id_tag = TagData.Tag_70.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_71: score = 10 id_tag = TagData.Tag_71.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_72: score = 10 id_tag = TagData.Tag_72.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_73: score = 10 id_tag = TagData.Tag_73.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_74: score = 10 id_tag = TagData.Tag_74.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_75: score = 10 id_tag = TagData.Tag_75.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_76: score = 10 id_tag = TagData.Tag_76.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_77: score = 10 id_tag = TagData.Tag_77.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_78: score = 10 id_tag = TagData.Tag_78.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_79: score = 10 id_tag = TagData.Tag_79.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_80: score = 10 id_tag = TagData.Tag_80.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_181: score = 10 id_tag = TagData.Tag_181.ref('id') id_field = FieldData.Field_39.ref('id') class 
FieldTag_39_182: score = 10 id_tag = TagData.Tag_182.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_183: score = 10 id_tag = TagData.Tag_183.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_184: score = 10 id_tag = TagData.Tag_184.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_185: score = 10 id_tag = TagData.Tag_185.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_186: score = 10 id_tag = TagData.Tag_186.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_82: score = 10 id_tag = TagData.Tag_82.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_83: score = 10 id_tag = TagData.Tag_83.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_84: score = 10 id_tag = TagData.Tag_84.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_85: score = 10 id_tag = TagData.Tag_85.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_187: score = 10 id_tag = TagData.Tag_187.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_88: score = 10 id_tag = TagData.Tag_88.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_89: score = 10 id_tag = TagData.Tag_89.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_90: score = 10 id_tag = TagData.Tag_90.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_91: score = 10 id_tag = TagData.Tag_91.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_92: score = 10 id_tag = TagData.Tag_92.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_93: score = 10 id_tag = TagData.Tag_93.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_94: score = 10 id_tag = TagData.Tag_94.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_95: score = 10 id_tag = TagData.Tag_95.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_96: score = 10 id_tag = TagData.Tag_96.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_97: score = 10 id_tag = TagData.Tag_97.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_98: score = 10 id_tag = TagData.Tag_98.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_99: score = 10 id_tag = TagData.Tag_99.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_100: score = 10 id_tag = TagData.Tag_100.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_102: score = 10 id_tag = TagData.Tag_102.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_103: score = 10 id_tag = TagData.Tag_103.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_104: score = 10 id_tag = TagData.Tag_104.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_105: score = 10 id_tag = TagData.Tag_105.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_188: score = 10 id_tag = TagData.Tag_188.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_189: score = 10 id_tag = TagData.Tag_189.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_190: score = 10 id_tag = TagData.Tag_190.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_191: score = 10 id_tag = TagData.Tag_191.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_192: score = 10 id_tag = TagData.Tag_192.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_193: score = 10 id_tag = TagData.Tag_193.ref('id') id_field = 
FieldData.Field_39.ref('id') class FieldTag_39_194: score = 10 id_tag = TagData.Tag_194.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_195: score = 10 id_tag = TagData.Tag_195.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_196: score = 10 id_tag = TagData.Tag_196.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_107: score = 10 id_tag = TagData.Tag_107.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_108: score = 10 id_tag = TagData.Tag_108.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_109: score = 10 id_tag = TagData.Tag_109.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_110: score = 10 id_tag = TagData.Tag_110.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_111: score = 10 id_tag = TagData.Tag_111.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_112: score = 10 id_tag = TagData.Tag_112.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_113: score = 10 id_tag = TagData.Tag_113.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_197: score = 10 id_tag = TagData.Tag_197.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_198: score = 10 id_tag = TagData.Tag_198.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_199: score = 10 id_tag = TagData.Tag_199.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_200: score = 10 id_tag = TagData.Tag_200.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_201: score = 10 id_tag = TagData.Tag_201.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_202: score = 10 id_tag = TagData.Tag_202.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_203: score = 10 id_tag = TagData.Tag_203.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_204: score = 10 id_tag = TagData.Tag_204.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_205: score = 10 id_tag = TagData.Tag_205.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_206: score = 10 id_tag = TagData.Tag_206.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_207: score = 10 id_tag = TagData.Tag_207.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_208: score = 10 id_tag = TagData.Tag_208.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_209: score = 10 id_tag = TagData.Tag_209.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_210: score = 10 id_tag = TagData.Tag_210.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_211: score = 10 id_tag = TagData.Tag_211.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_212: score = 10 id_tag = TagData.Tag_212.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_213: score = 10 id_tag = TagData.Tag_213.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_214: score = 10 id_tag = TagData.Tag_214.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_215: score = 10 id_tag = TagData.Tag_215.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_122: score = 10 id_tag = TagData.Tag_122.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_123: score = 10 id_tag = TagData.Tag_123.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_124: score = 10 id_tag = TagData.Tag_124.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_125: score = 10 
id_tag = TagData.Tag_125.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_126: score = 10 id_tag = TagData.Tag_126.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_127: score = 10 id_tag = TagData.Tag_127.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_128: score = 10 id_tag = TagData.Tag_128.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_129: score = 10 id_tag = TagData.Tag_129.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_130: score = 10 id_tag = TagData.Tag_130.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_1: score = 10 id_tag = TagData.Tag_1.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_2: score = 10 id_tag = TagData.Tag_2.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_216: score = 10 id_tag = TagData.Tag_216.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_217: score = 10 id_tag = TagData.Tag_217.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_218: score = 10 id_tag = TagData.Tag_218.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_219: score = 10 id_tag = TagData.Tag_219.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_220: score = 10 id_tag = TagData.Tag_220.ref('id') id_field = FieldData.Field_39.ref('id') class FieldTag_39_221: score = 10 id_tag = TagData.Tag_221.ref('id') id_field = FieldData.Field_39.ref('id') diff --git a/invenio/modules/search/models.py b/invenio/modules/search/models.py index 623cd7b8c..904d06994 100644 --- a/invenio/modules/search/models.py +++ b/invenio/modules/search/models.py @@ -1,801 +1,850 @@ # -*- coding: utf-8 -*- # ## This file is part of Invenio. ## Copyright (C) 2011, 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. """ WebSearch database models. """ # General imports. import re from flask import g, url_for from intbitset import intbitset from six import iteritems from operator import itemgetter from sqlalchemy.ext.associationproxy import association_proxy from sqlalchemy.ext.orderinglist import ordering_list from sqlalchemy.orm.collections import InstrumentedList from sqlalchemy.orm.collections import attribute_mapped_collection from sqlalchemy.orm.collections import collection from invenio.base.globals import cfg -from invenio.base.i18n import _ +from invenio.base.i18n import _, gettext_set_language from invenio.ext.sqlalchemy import db # Create your models here. 
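# The IntbitsetPickle helper below (de)serializes intbitset objects using
# their fast dump format; a small sketch of the round trip it relies on
# (illustrative only):
#
#   from intbitset import intbitset
#   bs = intbitset([1, 2, 3])
#   dump = bs.fastdump()          # compact binary representation
#   assert intbitset(dump) == bs  # the constructor accepts a fast dump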
from invenio.modules.accounts.models import User from invenio.modules.formatter.models import Format class IntbitsetPickle(object): def dumps(self, obj, protocol=None): if obj is not None: return obj.fastdump() return intbitset([]).fastdump() def loads(self, obj): try: return intbitset(obj) except: return intbitset() def IntbitsetCmp(x, y): if x is None or y is None: return False else: return x == y class OrderedList(InstrumentedList): def append(self, item): if self: s = sorted(self, key=lambda obj: obj.score) item.score = s[-1].score + 1 else: item.score = 1 InstrumentedList.append(self, item) def set(self, item, index=0): if self: s = sorted(self, key=lambda obj: obj.score) if index >= len(s): item.score = s[-1].score + 1 elif index < 0: item.score = s[0].score index = 0 else: item.score = s[index].score + 1 for i, it in enumerate(s[index:]): it.score = item.score + i + 1 #if s[i+1].score more then break else: item.score = index InstrumentedList.append(self, item) def pop(self, item): #FIXME if self: obj_list = sorted(self, key=lambda obj: obj.score) for i, it in enumerate(obj_list): if obj_list[i] == item: return InstrumentedList.pop(self, i) def attribute_multi_dict_collection(creator, key_attr, val_attr): class MultiMappedCollection(dict): def __init__(self, data=None): self._data = data or {} @collection.appender def _append(self, obj): l = self._data.setdefault(key_attr(obj), []) l.append(obj) def __setitem__(self, key, value): self._append(creator(key, value)) def __getitem__(self, key): return tuple(val_attr(obj) for obj in self._data[key]) @collection.remover def _remove(self, obj): self._data[key_attr(obj)].remove(obj) @collection.iterator def _iterator(self): for objs in self._data.itervalues(): for obj in objs: yield obj #@collection.converter #def convert(self, other): # print '===== CONVERT ====' # print other # for k, vals in iteritems(other): # for v in list(vals): # print 'converting: ', k,': ',v # yield creator(k, v) #@collection.internally_instrumented #def extend(self, items): # for k, item in items: # for v in list(item): # print 'setting: ', k,': ',v # self.__setitem__(k,v) def __repr__(self): return '%s(%r)' % (type(self).__name__, self._data) return MultiMappedCollection external_collection_mapper = attribute_multi_dict_collection( creator=lambda k, v: CollectionExternalcollection(type=k, externalcollection=v), key_attr=lambda obj: obj.type, val_attr=lambda obj: obj.externalcollection) class Collection(db.Model): """Represents a Collection record.""" def __repr__(self): return "%s(%s)" % (self.__class__.__name__, self.id) __tablename__ = 'collection' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True) name = db.Column(db.String(255), unique=True, index=True, nullable=False) dbquery = db.Column(db.Text(20), nullable=True, index=True) nbrecs = db.Column(db.Integer(10, unsigned=True), server_default='0') #FIXME read only!!! 
reclist = db.Column(db.PickleType(pickler=IntbitsetPickle(), comparator=IntbitsetCmp)) _names = db.relationship(lambda: Collectionname, backref='collection', collection_class=attribute_mapped_collection('ln_type'), cascade="all, delete, delete-orphan") names = association_proxy('_names', 'value', creator=lambda k, v: Collectionname(ln_type=k, value=v)) _boxes = db.relationship(lambda: Collectionboxname, backref='collection', collection_class=attribute_mapped_collection('ln_type'), cascade="all, delete, delete-orphan") boxes = association_proxy('_boxes', 'value', creator=lambda k, v: Collectionboxname(ln_type=k, value=v)) _formatoptions = association_proxy('formats', 'format') #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def formatoptions(self): if len(self._formatoptions): return [dict(f) for f in self._formatoptions] else: return [{'code': u'hb', 'name': _("HTML %(format)s", format=_("brief")), 'content_type': u'text/html'}] formatoptions = property(formatoptions) _examples_example = association_proxy('_examples', 'example') @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def examples(self): return list(self._examples_example) @property def name_ln(self): from invenio.legacy.search_engine import get_coll_i18nname return get_coll_i18nname(self.name, g.ln).decode('utf-8') # Another possible implementation with cache memoize # @cache.memoize #try: # return db.object_session(self).query(Collectionname).\ # with_parent(self).filter(db.and_(Collectionname.ln==g.ln, # Collectionname.type=='ln')).first().value #except: # return self.name @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def portalboxes_ln(self): return db.object_session(self).query(CollectionPortalbox).\ with_parent(self).\ options(db.joinedload_all(CollectionPortalbox.portalbox)).\ filter(CollectionPortalbox.ln == g.ln).\ order_by(db.desc(CollectionPortalbox.score)).all() @property def most_specific_dad(self): return db.object_session(self).query(Collection).\ join(Collection.sons).\ filter(CollectionCollection.id_son == self.id).\ order_by(db.asc(Collection.nbrecs)).\ first() @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def is_restricted(self): from invenio.legacy.search_engine import collection_restricted_p return collection_restricted_p(self.name) @property def type(self): p = re.compile("\d+:.*") if self.dbquery is not None and \ p.match(self.dbquery.lower()): return 'r' else: return 'v' _collection_children = db.relationship(lambda: CollectionCollection, #collection_class=OrderedList, collection_class=ordering_list('score'), primaryjoin=lambda: Collection.id == CollectionCollection.id_dad, foreign_keys=lambda: CollectionCollection.id_dad, order_by=lambda: db.asc(CollectionCollection.score)) _collection_children_r = db.relationship(lambda: CollectionCollection, #collection_class=OrderedList, collection_class=ordering_list('score'), primaryjoin=lambda: db.and_( Collection.id == CollectionCollection.id_dad, CollectionCollection.type == 'r'), foreign_keys=lambda: CollectionCollection.id_dad, order_by=lambda: db.asc(CollectionCollection.score)) _collection_children_v = db.relationship(lambda: CollectionCollection, #collection_class=OrderedList, collection_class=ordering_list('score'), primaryjoin=lambda: db.and_( Collection.id == CollectionCollection.id_dad, CollectionCollection.type == 'v'), foreign_keys=lambda: CollectionCollection.id_dad, order_by=lambda: db.asc(CollectionCollection.score)) collection_parents = db.relationship(lambda: 
CollectionCollection, #collection_class=OrderedList, collection_class=ordering_list('score'), primaryjoin=lambda: Collection.id == CollectionCollection.id_son, foreign_keys=lambda: CollectionCollection.id_son, order_by=lambda: db.asc(CollectionCollection.score)) collection_children = association_proxy('_collection_children', 'son') collection_children_r = association_proxy('_collection_children_r', 'son', creator=lambda son: CollectionCollection(id_son=son.id, type='r')) collection_children_v = association_proxy('_collection_children_v', 'son', creator=lambda son: CollectionCollection(id_son=son.id, type='v')) # _externalcollections = db.relationship(lambda: CollectionExternalcollection, # backref='collection', cascade="all, delete, delete-orphan") # # externalcollections = association_proxy( # '_externalcollections', # 'externalcollection') def _externalcollections_type(type): return association_proxy( '_externalcollections_' + str(type), 'externalcollection', creator=lambda ext: CollectionExternalcollection( externalcollection=ext, type=type)) externalcollections_0 = _externalcollections_type(0) externalcollections_1 = _externalcollections_type(1) externalcollections_2 = _externalcollections_type(2) externalcollections = db.relationship(lambda: CollectionExternalcollection, #backref='collection', collection_class=external_collection_mapper, cascade="all, delete, delete-orphan") # Search options _make_field_fieldvalue = lambda type: db.relationship( lambda: CollectionFieldFieldvalue, primaryjoin=lambda: db.and_( Collection.id == CollectionFieldFieldvalue.id_collection, CollectionFieldFieldvalue.type == type), order_by=lambda: CollectionFieldFieldvalue.score) _search_within = _make_field_fieldvalue('sew') _search_options = _make_field_fieldvalue('seo') @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def search_within(self): """ Collect search within options. 
""" default = [('', g._('any field'))] found = [(o.field.code, o.field.name_ln) for o in self._search_within] if not found: found = [(f.name.replace(' ', ''), f.name_ln) for f in Field.query.filter(Field.name.in_( cfg['CFG_WEBSEARCH_SEARCH_WITHIN'])).all()] return default + sorted(found, key=itemgetter(1)) @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def search_options(self): return self._search_options @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def ancestors_ids(self): """Get list of parent collection ids.""" output = intbitset([self.id]) for c in self.dads: ancestors = c.dad.ancestors_ids if self.id in ancestors: raise output |= ancestors return output @property #@cache.memoize(make_name=lambda fname: fname + '::' + g.ln) def descendants_ids(self): """Get list of child collection ids.""" output = intbitset([self.id]) for c in self.sons: descendants = c.son.descendants_ids if self.id in descendants: raise output |= descendants return output # Gets the list of localized names as an array collection_names = db.relationship( lambda: Collectionname, primaryjoin=lambda: Collection.id == Collectionname.id_collection, foreign_keys=lambda: Collectionname.id_collection ) - # Gets the translation according to the lang code def translation(self, lang): + """Get the translation according to the language code.""" try: return db.object_session(self).query(Collectionname).\ with_parent(self).filter(db.and_(Collectionname.ln == lang, Collectionname.type == 'ln')).first().value except: return "" + + def get_collectionbox_name(self, ln=None, box_type="r"): + """Return collection-specific labelling subtrees. + + - 'Focus on': regular collection + - 'Narrow by': virtual collection + - 'Latest addition': boxes + + If translation for given language does not exist, use label + for CFG_SITE_LANG. If no custom label is defined for + CFG_SITE_LANG, return default label for the box. 
+ + :param ln: the language of the label + :param box_type: can be 'r' (=Narrow by), 'v' (=Focus on), + 'l' (=Latest additions) + """ + if ln is None: + ln = g.ln + collectionboxnamequery = db.object_session(self).query( + Collectionboxname).with_parent(self) + try: + collectionboxname = collectionboxnamequery.filter(db.and_( + Collectionboxname.ln == ln, + Collectionboxname.type == box_type, + )).one() + except: + # no custom label for ln: fall back to the site language + try: + collectionboxname = collectionboxnamequery.filter(db.and_( + Collectionboxname.ln == cfg['CFG_SITE_LANG'], + Collectionboxname.type == box_type, + )).one() + except: + collectionboxname = None + + if collectionboxname is None: + # load the right message language + _ = gettext_set_language(ln) + return _(Collectionboxname.TYPES.get(box_type, '')) + else: + return collectionboxname.value + + portal_boxes_ln = db.relationship( lambda: CollectionPortalbox, #collection_class=OrderedList, collection_class=ordering_list('score'), primaryjoin=lambda: \ Collection.id == CollectionPortalbox.id_collection, foreign_keys=lambda: CollectionPortalbox.id_collection, order_by=lambda: db.asc(CollectionPortalbox.score)) #@db.hybrid_property #def externalcollections(self): # return self._externalcollections #@externalcollections.setter #def externalcollections(self, data): # if isinstance(data, dict): # for k, vals in iteritems(data): # for v in list(vals): # self._externalcollections[k] = v # else: # self._externalcollections = data def breadcrumbs(self, builder=None, ln=None): """Return breadcrumbs for the collection.""" ln = cfg.get('CFG_SITE_LANG') if ln is None else ln breadcrumbs = [] # Get breadcrumbs for most specific dad if it exists. if self.most_specific_dad is not None: breadcrumbs = self.most_specific_dad.breadcrumbs(builder=builder, ln=ln) if builder is not None: crumb = builder(self) else: crumb = dict( text=self.name_ln, url=url_for('search.collection', name=self.name)) breadcrumbs.append(crumb) return breadcrumbs class Collectionname(db.Model): """Represents a Collectionname record.""" __tablename__ = 'collectionname' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), nullable=False, primary_key=True) ln = db.Column(db.Char(5), nullable=False, primary_key=True, server_default='') type = db.Column(db.Char(3), nullable=False, primary_key=True, server_default='sn') value = db.Column(db.String(255), nullable=False) @db.hybrid_property def ln_type(self): return (self.ln, self.type) @ln_type.setter def set_ln_type(self, value): (self.ln, self.type) = value #from sqlalchemy import event #def collection_append_listener(target, value, initiator): # print "received append event for target: %s" % target.__dict__ # print value.__dict__ # print initiator.__dict__ #event.listen(Collection.names, 'append', collection_append_listener) class Collectionboxname(db.Model): """Represents a Collectionboxname record.""" __tablename__ = 'collectionboxname' + + TYPES = { + 'v': 'Focus on:', + 'r': 'Narrow by collection:', + 'l': 'Latest additions:', + } + id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), nullable=False, primary_key=True) ln = db.Column(db.Char(5), nullable=False, primary_key=True, server_default='') type = db.Column(db.Char(3), nullable=False, primary_key=True, server_default='r') value = db.Column(db.String(255), nullable=False) @db.hybrid_property def ln_type(self): return (self.ln, self.type) @ln_type.setter def set_ln_type(self, value): (self.ln, self.type) = value
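To pin down the fallback chain added above, here is a minimal usage sketch; the collection name, the 'de' language code and the comments' outcomes are illustrative, and CFG_SITE_LANG is assumed to be 'en':

    # Sketch only: resolution order of get_collectionbox_name.
    c = Collection.query.filter_by(name='Articles').one()
    label = c.get_collectionbox_name(ln='de', box_type='v')
    # 1. if a Collectionboxname row (ln='de', type='v') exists,
    #    its custom value is returned as-is;
    # 2. otherwise a row (ln=CFG_SITE_LANG, type='v') is tried;
    # 3. otherwise the default from Collectionboxname.TYPES is
    #    translated on the fly: gettext_set_language('de')('Focus on:').

class Collectiondetailedrecordpagetabs(db.Model):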
"""Represents a Collectiondetailedrecordpagetabs record.""" __tablename__ = 'collectiondetailedrecordpagetabs' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), nullable=False, primary_key=True) tabs = db.Column(db.String(255), nullable=False, server_default='') collection = db.relationship(Collection, backref='collectiondetailedrecordpagetabs') class CollectionCollection(db.Model): """Represents a CollectionCollection record.""" __tablename__ = 'collection_collection' id_dad = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True) id_son = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True) type = db.Column(db.Char(1), nullable=False, server_default='r') score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') son = db.relationship(Collection, primaryjoin=id_son == Collection.id, backref='dads', #FIX collection_class=db.attribute_mapped_collection('score'), order_by=db.asc(score)) dad = db.relationship(Collection, primaryjoin=id_dad == Collection.id, backref='sons', order_by=db.asc(score)) class Example(db.Model): """Represents a Example record.""" __tablename__ = 'example' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True, autoincrement=True) type = db.Column(db.Text, nullable=False) body = db.Column(db.Text, nullable=False) class CollectionExample(db.Model): """Represents a CollectionExample record.""" __tablename__ = 'collection_example' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True) id_example = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Example.id), primary_key=True) score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') collection = db.relationship(Collection, backref='_examples', order_by=score) example = db.relationship(Example, backref='collections', order_by=score) class Portalbox(db.Model): """Represents a Portalbox record.""" __tablename__ = 'portalbox' id = db.Column(db.MediumInteger(9, unsigned=True), autoincrement=True, primary_key=True) title = db.Column(db.Text, nullable=False) body = db.Column(db.Text, nullable=False) def get_pbx_pos(): """Returns a list of all the positions for a portalbox""" position = {} position["rt"] = "Right Top" position["lt"] = "Left Top" position["te"] = "Title Epilog" position["tp"] = "Title Prolog" position["ne"] = "Narrow by coll epilog" position["np"] = "Narrow by coll prolog" return position class CollectionPortalbox(db.Model): """Represents a CollectionPortalbox record.""" __tablename__ = 'collection_portalbox' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True) id_portalbox = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Portalbox.id), primary_key=True) ln = db.Column(db.Char(5), primary_key=True, server_default='', nullable=False) position = db.Column(db.Char(3), nullable=False, server_default='top') score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') collection = db.relationship(Collection, backref='portalboxes', order_by=score) portalbox = db.relationship(Portalbox, backref='collections', order_by=score) class Externalcollection(db.Model): """Represents a Externalcollection record.""" __tablename__ = 'externalcollection' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True) name = db.Column(db.String(255), unique=True, nullable=False, 
server_default='') @property def engine(self): from invenio.legacy.websearch_external_collections.searcher import external_collections_dictionary if self.name in external_collections_dictionary: return external_collections_dictionary[self.name] class CollectionExternalcollection(db.Model): """Represents a CollectionExternalcollection record.""" __tablename__ = 'collection_externalcollection' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True, server_default='0') id_externalcollection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Externalcollection.id), primary_key=True, server_default='0') type = db.Column(db.TinyInteger(4, unsigned=True), server_default='0', nullable=False) def _collection_type(type): return db.relationship(Collection, primaryjoin=lambda: db.and_( CollectionExternalcollection.id_collection == Collection.id, CollectionExternalcollection.type == type), backref='_externalcollections_' + str(type)) collection_0 = _collection_type(0) collection_1 = _collection_type(1) collection_2 = _collection_type(2) externalcollection = db.relationship(Externalcollection) class CollectionFormat(db.Model): """Represents a CollectionFormat record.""" __tablename__ = 'collection_format' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True) id_format = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Format.id), primary_key=True) score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') collection = db.relationship(Collection, backref='formats', order_by=db.desc(score)) format = db.relationship(Format, backref='collections', order_by=db.desc(score)) class Field(db.Model): """Represents a Field record.""" def __repr__(self): return "%s(%s)" % (self.__class__.__name__, self.id) __tablename__ = 'field' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True) name = db.Column(db.String(255), nullable=False) code = db.Column(db.String(255), unique=True, nullable=False) #tags = db.relationship('FieldTag', # collection_class=attribute_mapped_collection('score'), # cascade="all, delete-orphan" # ) #tag_names = association_proxy("tags", "as_tag") @property def name_ln(self): from invenio.legacy.search_engine import get_field_i18nname return get_field_i18nname(self.name, g.ln) #try: # return db.object_session(self).query(Fieldname).\ # with_parent(self).filter(db.and_(Fieldname.ln==g.ln, # Fieldname.type=='ln')).first().value #except: # return self.name class Fieldvalue(db.Model): """Represents a Fieldvalue record.""" def __init__(self): pass __tablename__ = 'fieldvalue' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True, autoincrement=True) name = db.Column(db.String(255), nullable=False) value = db.Column(db.Text, nullable=False) class Fieldname(db.Model): """Represents a Fieldname record.""" __tablename__ = 'fieldname' id_field = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Field.id), primary_key=True) ln = db.Column(db.Char(5), primary_key=True, server_default='') type = db.Column(db.Char(3), primary_key=True, server_default='sn') value = db.Column(db.String(255), nullable=False) field = db.relationship(Field, backref='names') class Tag(db.Model): """Represents a Tag record.""" __tablename__ = 'tag' id = db.Column(db.MediumInteger(9, unsigned=True), primary_key=True) name = db.Column(db.String(255), nullable=False) value = db.Column(db.Char(6), nullable=False) def __init__(self, tup=None, 
*args, **kwargs): if tup is not None and isinstance(tup, tuple): self.name, self.value = tup super(Tag, self).__init__(*args, **kwargs) else: if tup is None: super(Tag, self).__init__(*args, **kwargs) else: super(Tag, self).__init__(tup, *args, **kwargs) @property def as_tag(self): """Returns tuple with name and value.""" return self.name, self.value class FieldTag(db.Model): """Represents a FieldTag record.""" __tablename__ = 'field_tag' id_field = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey('field.id'), nullable=False, primary_key=True) id_tag = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey('tag.id'), nullable=False, primary_key=True) score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') tag = db.relationship(Tag, backref='fields', order_by=score) field = db.relationship(Field, backref='tags', order_by=score) def __init__(self, score=None, tup=None, *args, **kwargs): if score is not None: self.score = score if tup is not None: self.tag = Tag(tup) super(FieldTag, self).__init__(*args, **kwargs) @property def as_tag(self): """Returns Tag record directly.""" return self.tag class WebQuery(db.Model): """Represents a WebQuery record.""" __tablename__ = 'query' id = db.Column(db.Integer(15, unsigned=True), primary_key=True, autoincrement=True) type = db.Column(db.Char(1), nullable=False, server_default='r') urlargs = db.Column(db.Text(100), nullable=False, index=True) class UserQuery(db.Model): """Represents a UserQuery record.""" __tablename__ = 'user_query' id_user = db.Column(db.Integer(15, unsigned=True), db.ForeignKey(User.id), primary_key=True, server_default='0') id_query = db.Column(db.Integer(15, unsigned=True), db.ForeignKey(WebQuery.id), primary_key=True, index=True, server_default='0') hostname = db.Column(db.String(50), nullable=True, server_default='unknown host') date = db.Column(db.DateTime, nullable=True) class CollectionFieldFieldvalue(db.Model): """Represents a CollectionFieldFieldvalue record.""" __tablename__ = 'collection_field_fieldvalue' id_collection = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Collection.id), primary_key=True, nullable=False) id_field = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Field.id), primary_key=True, nullable=False) id_fieldvalue = db.Column(db.MediumInteger(9, unsigned=True), db.ForeignKey(Fieldvalue.id), primary_key=True, nullable=True) type = db.Column(db.Char(3), nullable=False, server_default='src') score = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') score_fieldvalue = db.Column(db.TinyInteger(4, unsigned=True), nullable=False, server_default='0') collection = db.relationship(Collection, backref='field_fieldvalues', order_by=score) field = db.relationship(Field, backref='collection_fieldvalues', lazy='joined') fieldvalue = db.relationship(Fieldvalue, backref='collection_fields', lazy='joined') __all__ = ['Collection', 'Collectionname', 'Collectiondetailedrecordpagetabs', 'CollectionCollection', 'Example', 'CollectionExample', 'Portalbox', 'CollectionPortalbox', 'Externalcollection', 'CollectionExternalcollection', 'CollectionFormat', 'Field', 'Fieldvalue', 'Fieldname', 'Tag', 'FieldTag', 'WebQuery', 'UserQuery', 'CollectionFieldFieldvalue'] diff --git a/invenio/modules/search/templates/search/index_base.html b/invenio/modules/search/templates/search/index_base.html index 6f2405d4e..31d550427 100644 --- a/invenio/modules/search/templates/search/index_base.html +++
b/invenio/modules/search/templates/search/index_base.html @@ -1,182 +1,182 @@ {# ## This file is part of Invenio. ## Copyright (C) 2012, 2013, 2014 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. #} {%- from "_formhelpers.html" import render_filter_form with context -%} {%- from "search/helpers.html" import collection_tree, portalbox_sidebar, search_also, with context -%} {%- extends "search/searchbar_frame.html" %} {% set title = None %} {%- set portalboxes = dict() -%} {%- for k,l in collection.portalboxes_ln|groupby('position') -%} {%- do portalboxes.update({k:l}) -%} {%- endfor -%} {%- macro collection_records(collection) -%}{% block collection_records scoped %} {%- block extra_style -%} {%- endblock extra_style -%} {%- block index -%}
{%- block portalbox_lt -%}{{ portalbox_sidebar(portalboxes.lt, class="col-md-2") }}{%- endblock -%} {%- block index_right -%} {% if collection.collection_children_r %}
- {{ _("Narrow by collection:") }}
+ {{ collection.get_collectionbox_name(box_type='r') }}
{{ collection_tree(collection.collection_children_r, limit=2, class="nav nav-list clearfix") }}
{% else %}
{% if collection.is_restricted %} {{ _('This collection is restricted. If you are authorized to access it, please click on the Search button.') }} {% else %} {% if collection.reclist %} {% for recid in collection.reclist[-10:]|reverse %}
{{ format_record(recid, of, ln=g.ln)|safe }} {{ '<br />'|safe if not loop.last }}
{% endfor %} {% if collection.reclist|length > 10 %} [>> {{ _('more')}}] {% endif %} {% endif %} {% endif %}
{% endif %} {%- endblock index_right -%} {%- block index_left -%} {% if collection.collection_children_v %}
- {{ _("Focus on:") }}
+ {{ collection.get_collectionbox_name(box_type='v') }}
{{ collection_tree(collection.collection_children_v, limit=2, class="nav nav-list clearfix") }} {{ search_also(collection.externalcollections_2) }}
{% elif collection.externalcollections_2 %}
{{ search_also(collection.externalcollections_2) }}
{% endif %} {%- endblock index_left -%} {%- block portalbox_rt -%} {% if collection.externalcollections_2 %} {{ portalbox_sidebar(portalboxes.rt, class="col-md-2") }} {% else %} {{ portalbox_sidebar(portalboxes.rt, class="col-md-offset-1 col-md-3") }} {% endif %} {%- endblock portalbox_rt -%}
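With both headings in this template now delegated to get_collectionbox_name, a deployment can override a heading with data instead of editing the template. A hedged sketch, assuming an existing Collection instance c, an open database session, and an illustrative label text:

    # Sketch only: store a custom 'Narrow by' heading for one collection.
    boxname = Collectionboxname(id_collection=c.id, ln='en',
                                type='r', value='Browse by topic:')
    db.session.add(boxname)
    db.session.commit()
    # The index_right block above would then render 'Browse by topic:'
    # instead of the translated Collectionboxname.TYPES default.
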
{%- endblock index -%} {% endblock collection_records %}{%- endmacro -%} {% block title %} {{ portalboxes.tp }} {{ super() }} {{ portalboxes.te }} {% endblock %} {% block inner_content %} {% if collection.is_restricted %} {{ collection_records(collection) }} {% else %} {# cache 24*60*60, collection.name, g.ln #} {{ collection_records(collection) }} {# endcache #} {% endif %} {% endblock %} {% block javascript %} {{ super() }} {% endblock %} diff --git a/invenio/testsuite/data/demo_record_marc_data.xml b/invenio/testsuite/data/demo_record_marc_data.xml index 828223339..a04271d1f 100644 --- a/invenio/testsuite/data/demo_record_marc_data.xml +++ b/invenio/testsuite/data/demo_record_marc_data.xml @@ -1,26285 +1,26285 @@ CERN-EX-0106015 Photolab ALEPH experiment: Candidate of Higgs boson production Expérience ALEPH: Candidat de la production d'un boson Higgs 14 06 2000 FILM Candidate for the associated production of the Higgs boson and Z boson. Both, the Higgs and Z boson decay into 2 jets each. The green and the yellow jets belong to the Higgs boson. They represent the fragmentation of a bottom andanti-bottom quark. The red and the blue jets stem from the decay of the Z boson into a quark anti-quark pair. Left: View of the event along the beam axis. Bottom right: Zoom around the interaction point at the centre showing detailsof the fragmentation of the bottom and anti-bottom quarks. As expected for b quarks, in each jet the decay of a long-lived B meson is visible. Top right: "World map" showing the spatial distribution of the jets in the event. Press SzGeCERN Experiments and Tracks LEP neil.calder@cern.ch http://invenio-software.org/download/invenio-demo-site-files/0106015_01.jpg restricted_picture http://invenio-software.org/download/invenio-demo-site-files/0106015_01.gif .gif;icon restricted_picture 0003717PHOPHO 2000 81 2001-06-14 50 2001-08-27 CM Bldg. 2 Calder, N n 200231 PICTURE CERN-EX-0104007 Patrice Loïez The first CERN-built module of the barrel section of ATLAS's electromagnetic calorimeter Premier module du tonneau du calorimètre electromagnétique d'ATLAS 10 Apr 2001 DIGITAL Behind the module, left to right Ralf Huber, Andreas Bies and Jorgen Beck Hansen. In front of the module, left to right: Philippe Lançon and Edward Wood. Derrière le module, de gauche à droite: Ralf Huber, Andreas Bies, Jorgen Beck Hansen. Devant le module, de gauche à droite : Philippe Lançon et Edward Wood. CERN EDS SzGeCERN Experiments and Tracks marie.noelle.pages.ribeiro@cern.ch http://invenio-software.org/download/invenio-demo-site-files/0104007_02.jpeg http://invenio-software.org/download/invenio-demo-site-files/0104007_02.gif 0003601PHOPHO 2001 81 2001-04-23 50 2001-06-18 CM 0020699 ADMBUL CERN Bulletin 18/2001 : 30 April 2001 (English) 0020700 ADMBUL CERN Bulletin 18/2001 : 30 avril 2001 (French) Bldg. 184 Fassnacht, P n 200231 PICTURE CERN-HI-6902127 European Molecular Biology Conference Jul 1969 In February, the Agreement establishing the European Molecular Biology Conference was signed at CERN. Willy Spuhler is signing for Switzerland. SzGeCERN Personalities and History of CERN http://invenio-software.org/download/invenio-demo-site-files/6902127.jpeg http://invenio-software.org/download/invenio-demo-site-files/6902127.gif 0002690PHOPHO 1969 81 2000-06-13 50 2000-06-13 CM 127-2-69 n 200024 PICTURE CERN-DI-9906028 J.L. Caron The Twenty Member States of CERN (with dates of accession) on 1 June 1999 Jun 1999 CERN Member States. Les Etats membres du CERN. 
Press SzGeCERN Diagrams and Charts http://invenio-software.org/download/invenio-demo-site-files/9906028_01.jpeg http://invenio-software.org/download/invenio-demo-site-files/9906028_01.gif 0001754PHOPHO 1999 81 1999-06-17 50 2000-10-30 CM n 199924 PICTURE CERN-DI-9905005 High energy cosmic rays striking atoms at the top of the atmosphere give the rise to showers of particles striking the Earth's surface Des rayons cosmiques de haute energie heurtent des atomes dans la haute atmosphere et donnent ainsi naissance a des gerbes de particules projetees sur la surface terrestre 10 May 1999 DIGITAL Press SzGeCERN Diagrams and Charts neil.calder@cern.ch http://invenio-software.org/download/invenio-demo-site-files/9905005_01.jpeg http://invenio-software.org/download/invenio-demo-site-files/9905005_01.gif 0001626PHOPHO 1999 81 1999-05-10 50 2000-09-12 CM Bldg. 60 Calder, N n 200231 PICTURE CERN-HI-6206002 eng At CERN in 1962 eight Nobel prizewinners 1962 jekyll_only In 1962, CERN hosted the 11th International Conference on High Energy Physics. Among the distinguished visitors were eight Nobel prizewinners.Left to right: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang and Robert Hofstadter. En 1962, le CERN est l'hote de la onzieme Conference Internationale de Physique des Hautes Energies. Parmi les visiteurs eminents se trouvaient huit laureats du prix Nobel.De gauche a droite: Cecil F. Powell, Isidor I. Rabi, Werner Heisenberg, Edwin M. McMillan, Emile Segre, Tsung Dao Lee, Chen Ning Yang et Robert Hofstadter. Press SzGeCERN Personalities and History of CERN Nobel laureate http://invenio-software.org/download/invenio-demo-site-files/6206002.jpg http://invenio-software.org/download/invenio-demo-site-files/6206002.gif 0000736PHOPHO 1962 81 1998-07-23 50 2002-07-15 CM http://www.nobel.se/physics/laureates/1950/index.html The Nobel Prize in Physics 1950 : Cecil Frank Powell http://www.nobel.se/physics/laureates/1944/index.html The Nobel Prize in Physics 1944 : Isidor Isaac Rabi http://www.nobel.se/physics/laureates/1932/index.html The Nobel Prize in Physics 1932 : Werner Karl Heisenberg http://www.nobel.se/chemistry/laureates/1951/index.html The Nobel Prize in Chemistry 1951 : Edwin Mattison McMillan http://www.nobel.se/physics/laureates/1959/index.html The Nobel Prize in Physics 1959 : Emilio Gino Segre http://www.nobel.se/physics/laureates/1957/index.html The Nobel Prize in Physics 1957 : Chen Ning Yang and Tsung-Dao Lee http://www.nobel.se/physics/laureates/1961/index.html The Nobel Prize in Physics 1961 : Robert Hofstadter 6206002 (1962) n 199830 PICTURE CERN-GE-9806033 Tim Berners-Lee World-Wide Web inventor 28 Jun 1998 Conference "Internet, Web, What's next?" on 26 June 1998 at CERN : Tim Berners-Lee, inventor of the World-Wide Web and Director of the W3C, explains how the Web came to be and give his views on the future. Conference "Internet, Web, What's next?" le 26 juin 1998 au CERN: Tim Berners-Lee, inventeur du World-Wide Web et directeur du W3C, explique comment le Web est ne, et donne ses opinions sur l'avenir. Press SzGeCERN Life at CERN neil.calder@cern.ch http://invenio-software.org/download/invenio-demo-site-files/9806033.jpeg http://invenio-software.org/download/invenio-demo-site-files/9806033.gif 0000655PHOPHO 1998 81 1998-07-03 50 2001-07-10 CM http://www.cern.ch/CERN/Announcements/1998/WebNext.html "Internet, Web, What's next?" 
26 June 1998 http://Bulletin.cern.ch/9828/art2/Text_E.html CERN Bulletin no 28/98 (6 July 1998) (English) http://Bulletin.cern.ch/9828/art2/Text_F.html CERN Bulletin no 28/98 (6 juillet 1998) (French) http://www.w3.org/People/Berners-Lee/ Biography 0000990 PRSPRS Le Pays Gessien : 3 Jul 1998 0001037 PRSPRS Le Temps : 27 Jun 1998 0000809 PRSPRS La Tribune de Geneve : 27 Jun 1998 Bldg. 60 Calder, N n 199827 PICTURE astro-ph/9812226 eng Efstathiou, G P Cambridge University Constraints on $\Omega_{\Lambda}$ and $\Omega_{m}$from Distant Type 1a Supernovae and Cosmic Microwave Background Anisotropies 14 Dec 1998 6 p We perform a combined likelihood analysis of the latest cosmic microwave background anisotropy data and distant Type 1a Supernova data of Perlmutter etal (1998a). Our analysis is restricted tocosmological models where structure forms from adiabatic initial fluctuations characterised by a power-law spectrum with negligible tensor component. Marginalizing over other parameters, our bestfit solution gives Omega_m = 0.25 (+0.18, -0.12) and Omega_Lambda = 0.63 (+0.17, -0.23) (95 % confidence errors) for the cosmic densities contributed by matter and a cosmological constantrespectively. The results therefore strongly favour a nearly spatially flat Universe with a non-zero cosmological constant. LANL EDS SzGeCERN Astrophysics and Astronomy Lasenby, A N Hobson, M P Ellis, R S Bridle, S L George Efstathiou <gpe@ast.cam.ac.uk> http://invenio-software.org/download/invenio-demo-site-files/9812226.pdf http://invenio-software.org/download/invenio-demo-site-files/9812226.fig1.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/9812226.fig3.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/9812226.fig5.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/9812226.fig6.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/9812226.fig7.ps.gz Additional 1998 11 1998-12-14 50 2001-04-07 BATCH Mon. Not. R. Astron. Soc. SLAC 4162242 CER n 200231 PREPRINT Bond, J.R. 1996, Theory and Observations of the Cosmic Background Radiation, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier SciencePress, Amsterdam, p469 Bond J.R., Efstathiou G., Tegmark M., 1997 L33 Mon. Not. R. Astron. Soc. 291 1997 Mon. Not. R. Astron. Soc. 291 (1997) L33 Bond, J.R., Jaffe, A. 1997, in Proc. XXXI Rencontre de Moriond, ed. F. Bouchet, Edition Fronti eres, in press astro-ph/9610091 Bond J.R., Jaffe A.H. and Knox L.E., 1998 astro-ph/9808264 Astrophys.J. 533 (2000) 19 Burles S., Tytler D., 1998a, to appear in the Proceedings of the Second Oak Ridge Symposium on Atomic & Nuclear Astrophysics, ed. A. Mezzacappa, Institute of Physics, Bristol astro-ph/9803071 Burles S., Tytler D., 1998b, Astrophys. J.in press astro-ph/9712109 Astrophys.J. 507 (1998) 732 Caldwell, R.R., Dave, R., Steinhardt P.J., 1998 1582 Phys. Rev. Lett. 80 1998 Phys. Rev. Lett. 80 (1998) 1582 Carroll S.M., Press W.H., Turner E.L., 1992, Ann. Rev. Astr. Astrophys., 30, 499. Chaboyer B., 1998 astro-ph/9808200 Phys.Rept. 307 (1998) 23 Devlin M.J., De Oliveira-Costa A., Herbig T., Miller A.D., Netterfield C.B., Page L., Tegmark M., 1998, submitted to Astrophys. J astro-ph/9808043 Astrophys. J. 509 (1998) L69-72 Efstathiou G. 1996, Observations of Large-Scale Structure in the Universe, in "Cosmology and Large Scale Structure", Les Houches Session LX, August 1993, eds. R. 
Schaeffer, J. Silk, M. Spiro and J. Zinn-Justin, Elsevier SciencePress, Amsterdam, p135. Efstathiou G., Bond J.R., Mon. Not. R. Astron. Soc.in press astro-ph/9807130 Astrophys. J. 518 (1999) 2-23 Evrard G., 1998, submitted to Mon. Not. R. Astron. Soc astro-ph/9701148 Mon.Not.Roy.Astron.Soc. 292 (1997) 289 Freedman J.B., Mould J.R., Kennicutt R.C., Madore B.F., 1998 astro-ph/9801090 Astrophys. J. 480 (1997) 705 Garnavich P.M. et al. 1998 astro-ph/9806396 Astrophys.J. 509 (1998) 74-79 Goobar A., Perlmutter S., 1995 14 Astrophys. J. 450 1995 Astrophys. J. 450 (1995) 14 Hamuy M., Phillips M.M., Maza J., Suntzeff N.B., Schommer R.A., Aviles R. 1996 2391 Astrophys. J. 112 1996 Astrophys. J. 112 (1996) 2391 Hancock S., Gutierrez C.M., Davies R.D., Lasenby A.N., Rocha G., Rebolo R., Watson R.A., Tegmark M., 1997 505 Mon. Not. R. Astron. Soc. 298 1997 Mon. Not. R. Astron. Soc. 298 (1997) 505 Hancock S., Rocha G., Lasenby A.N., Gutierrez C.M., 1998 L1 Mon. Not. R. Astron. Soc. 294 1998 Mon. Not. R. Astron. Soc. 294 (1998) L1 Herbig T., De Oliveira-Costa A., Devlin M.J., Miller A.D., Page L., Tegmark M., 1998, submitted to Astrophys. J astro-ph/9808044 Astrophys.J. 509 (1998) L73-76 Lineweaver C.H., 1998. Astrophys. J.505, L69. Lineweaver, C.H., Barbosa D., 1998a 624 Astrophys. J. 446 1998 Astrophys. J. 446 (1998) 624 Lineweaver, C.H., Barbosa D., 1998b 799 Astron. Astrophys. 329 1998 Astron. Astrophys. 329 (1998) 799 De Oliveira-Costa A., Devlin M.J., Herbig T., Miller A.D., Netterfield C.B. Page L., Tegmark M., 1998, submitted to Astrophys. J astro-ph/9808045 Astrophys. J. 509 (1998) L77-80 Ostriker J.P., Steinhardt P.J., 1995 600 Nature 377 1995 Nature 377 (1995) 600 Peebles P.J.E., 1993, Principles of Physical Cosmology, Princeton University Press, Princeton, New Jersey. Perlmutter S, et al., 1995, In Presentations at the NATO ASI in Aiguablava, Spain, LBL-38400; also published in Thermonuclear Supernova, P. Ruiz-Lapuente, R. Cana and J. Isern (eds), Dordrecht, Kluwer, 1997, p749. Perlmutter S, et al., 1997 565 Astrophys. J. 483 1997 Astrophys. J. 483 (1997) 565 Perlmutter S. et al., 1998a, Astrophys. J.in press. (P98) astro-ph/9812133 Astrophys. J. 517 (1999) 565-586 Perlmutter S. et al., 1998b, In Presentation at the January 1988 Meeting of the American Astronomical Society, Washington D.C., LBL-42230, available at www-supernova.lbl.gov; B.A.A.S., volume : 29 (1997) 1351Perlmutter S, et al., 1998c 51 Nature 391 1998 Nature 391 (1998) 51 Ratra B., Peebles P.J.E., 1988 3406 Phys. Rev., D 37 1988 Phys. Rev. D 37 (1988) 3406 Riess A. et al. 1998, Astrophys. J.in press astro-ph/9805201 Astron. J. 116 (1998) 1009-1038 Seljak U., Zaldarriaga M. 1996 437 Astrophys. J. 469 1996 Astrophys. J. 469 (1996) 437 Seljak U. & Zaldarriaga M., 1998 astro-ph/9811123 Phys. Rev. D60 (1999) 043504 Tegmark M., 1997 3806 Phys. Rev. Lett. 79 1997 Phys. Rev. Lett. 79 (1997) 3806 Tegmark M. 1998, submitted to Astrophys. J astro-ph/9809201 Astrophys. J. 514 (1999) L69-L72 Tegmark, M., Eisenstein D.J., Hu W., Kron R.G., 1998 astro-ph/9805117 Wambsganss J., Cen R., Ostriker J.P., 1998 29 Astrophys. J. 494 1998 Astrophys. J. 494 (1998) 29 Webster M., Bridle S.L., Hobson M.P., Lasenby A.N., Lahav O., Rocha, G., 1998, Astrophys. J.in press astro-ph/9802109 White M., 1998, Astrophys. J.in press astro-ph/9802295 Astrophys. J. 506 (1998) 495 Zaldarriaga, M., Spergel D.N., Seljak U., 1997 1 Astrophys. J. 488 1997 Astrophys. J. 
488 (1997) 1 eng PRE-25553 RL-82-024 Ellis, J AUTHOR|(SzGeCERN)aaa0005 University of Oxford Grand unification with large supersymmetry breaking Mar 1982 18 p SzGeCERN General Theoretical Physics Ibanez, L E Ross, G G 1982 11 Oxford Univ. Univ. Auton. Madrid Rutherford Lab. 1990-01-28 50 2002-01-04 BATCH h 1982n PREPRINT hep-ex/0201013 eng CERN-EP-2001-094 Heister, A Aachen, Tech. Hochsch. Search for R-Parity Violating Production of Single Sneutrinos in $e^{+}e^{-}$ Collisions at $\sqrt{s}$ = 189-209 GeV Geneva CERN 17 Dec 2001 22 p ALEPH Papers A search for single sneutrino production under the assumption that $R$-parity is violated via a single dominant $LL\bar{E}$ coupling is presented. This search considers the process ${\rm e} \gamma\;{\smash{\mathop{\rightarrow}}}\;\tilde{\nu}\ell$ and is performed using the data collected by the ALEPH detector at centre-of-mass energies from 189\,GeV up to 209\,GeV corresponding to an integrated luminosity of637.1\,$\mathrm{pb}^{-1}$. The numbers of observed candidate events are in agreement with Standard Model expectations and 95\% confidence level upper limits on five of the $LL\bar{E}$ couplings are given as a function of the assumedsneutrino mass. CERN EDS 20011220SLAC giva LANL EDS SzGeCERN Particle Physics - Experimental Results Schael, S Barate, R Bruneliere, R De Bonis, I Decamp, D Goy, C Jezequel, S Lees, J P Martin, F Merle, E Minard, M N Pietrzyk, B Trocme, B Boix, G Bravo, S Casado, M P Chmeissani, M Crespo, J M Fernandez, E Fernandez-Bosman, M Garrido, L Grauges, E Lopez, J Martinez, M Merino, G Miquel, R Mir, L M Pacheco, A Paneque, D Ruiz, H Colaleo, A Creanza, D De Filippis, N De Palma, M Iaselli, G Maggi, G Maggi, M Nuzzo, S Ranieri, A Raso, G Ruggieri, F Selvaggi, G Silvestris, L Tempesta, P Tricomi, A Zito, G Huang, X Lin, J Ouyang, Q Wang, T Xie, Y Xu, R Xue, S Zhang, J Zhang, L Zhao, W Abbaneo, D Azzurri, P Barklow, T Buchmuller, O Cattaneo, M Cerutti, F Clerbaux, B Drevermann, H Forty, R W Frank, M Gianotti, F Greening, T C Hansen, J B Harvey, J Hutchcroft, D E Janot, P Jost, B Kado, M Maley, P Mato, P Moutoussi, A Ranjard, F Rolandi, L Schlatter, D Sguazzoni, G Tejessy, W Teubert, F Valassi, A Videau, I Ward, J J Badaud, F Dessagne, S Falvard, A Fayolle, D Gay, P Jousset, J Michel, B Monteil, S Pallin, D Pascolo, J M Perret, P Hansen, J D Hansen, J R Hansen, P H Nilsson, B S Waananen, A Kyriakis, A Markou, C Simopoulou, E Vayaki, A Zachariadou, K Blondel, A Brient, J C Machefert, F P Rouge, A Swynghedauw, M Tanaka, R Videau, H L Ciulli, V Focardi, E Parrini, G Antonelli, A Antonelli, M Bencivenni, G Bologna, G Bossi, F Campana, P Capon, G Chiarella, V Laurelli, P Mannocchi, G Murtas, F Murtas, G P Passalacqua, L Pepe-Altarelli, M Spagnolo, P Kennedy, J Lynch, J G Negus, P O'Shea, V Smith, D Thompson, A S Wasserbaech, S R Cavanaugh, R Dhamotharan, S Geweniger, C Hanke, P Hepp, V Kluge, E E Leibenguth, G Putzer, A Stenzel, H Tittel, K Werner, S Wunsch, M Beuselinck, R Binnie, D M Cameron, W Davies, G Dornan, P J Girone, M Hill, R D Marinelli, N Nowell, J Przysiezniak, H Rutherford, S A Sedgbeer, J K Thompson, J C White, R Ghete, V M Girtler, P Kneringer, E Kuhn, D Rudolph, G Bouhova-Thacker, E Bowdery, C K Clarke, D P Ellis, G Finch, A J Foster, F Hughes, G Jones, R W L Pearson, M R Robertson, N A Smizanska, M Lemaître, V Blumenschein, U Holldorfer, F Jakobs, K Kayser, F Kleinknecht, K Muller, A S Quast, G Renk, B Sander, H G Schmeling, S Wachsmuth, H Zeitnitz, C Ziegler, T Bonissent, A Carr, J Coyle, P Curtil, C Ealet, A 
Fouchez, D Leroy, O Kachelhoffer, T Payre, P Rousseau, D Tilquin, A Ragusa, F David, A Dietl, H Ganis, G Huttmann, K Lutjens, G Mannert, C Manner, W Moser, H G Settles, R Wolf, G Boucrot, J Callot, O Davier, M Duflot, L Grivaz, J F Heusse, P Jacholkowska, A Loomis, C Serin, L Veillet, J J De Vivie de Regie, J B Yuan, C Bagliesi, G Boccali, T Foà, L Giammanco, A Giassi, A Ligabue, F Messineo, A Palla, F Sanguinetti, G Sciaba, A Tenchini, R Venturi, A Verdini, P G Awunor, O Blair, G A Coles, J Cowan, G García-Bellido, A Green, M G Jones, L T Medcalf, T Misiejuk, A Strong, J A Teixeira-Dias, P Clifft, R W Edgecock, T R Norton, P R Tomalin, I R Bloch-Devaux, B Boumediene, D Colas, P Fabbro, B Lancon, E Lemaire, M C Locci, E Perez, P Rander, J Renardy, J F Rosowsky, A Seager, P Trabelsi, A Tuchming, B Vallage, B Konstantinidis, N P Litke, A M Taylor, G Booth, C N Cartwright, S Combley, F Hodgson, P N Lehto, M H Thompson, L F Affholderbach, K Bohrer, A Brandt, S Grupen, C Hess, J Ngac, A Prange, G Sieler, U Borean, C Giannini, G He, H Putz, J Rothberg, J E Armstrong, S R Berkelman, K Cranmer, K Ferguson, D P S Gao, Y Gonzalez, S Hayes, O J Hu, H Jin, S Kile, J McNamara, P A Nielsen, J Pan, Y B Von Wimmersperg-Toller, J H Wiedenmann, W Wu, J Wu, S L Wu, X Zobernig, G Dissertori, G ALEPH Collaboration valerie.brunner@cern.ch http://invenio-software.org/download/invenio-demo-site-files/ep-2001-094.pdf http://invenio-software.org/download/invenio-demo-site-files/ep-2001-094.ps.gz 2002 ALEPH 11 EP CERN LEP 2001-12-19 50 2002-02-19 BATCH CERN Eur. Phys. J., C SLAC 4823672 oai:cds.cern.ch:CERN-EP-2001-094 CER n 200231 PREPRINT ALEPHPAPER [1] For reviews, see for example: H.P. Nilles 1 Phys. Rep. 110 1984 Phys. Rep. 110 (1984) 1 H.E. Haber and G. L. Kane 75 Phys. Rep. 117 1985 Phys. Rep. 117 (1985) 75 [2] G. Farrar and P. Fayet 575 Phys. Lett., B 76 1978 Phys. Lett. B 76 (1978) 575 [3] S. Weinberg 287 Phys. Rev., B 26 1982 Phys. Rev. B 26 (1982) 287 N. Sakai and T. Yanagida 83 Nucl. Phys., B 197 1982 Nucl. Phys. B 197 (1982) 83 S. Dimopoulos, S. Raby and F. Wilczek 133 Phys. Lett., B 212 1982 Phys. Lett. B 212 (1982) 133 [4] B.C. Allanach, H. Dreiner, P. Morawitz and M.D. Williams, "Single Sneutrino/Slepton Production at LEP2 and the NLC" 307 Phys. Lett., B 420 1998 Phys. Lett. B 420 (1998) 307 [5] ALEPH Collaboration, "Search for R-Parity Violating Decays of Supersymmetric Particles in e+e- Collisions at Centre-of-Mass Energies between s = 189­202 GeV" 415 Eur. Phys. J., C 19 2001 Eur. Phys. J. C 19 (2001) 415 [6] ALEPH Collaboration, "ALEPH: a detector for electron-positron annihilations at LEP", Nucl. Instrum. and Methods. A : 294 (1990) 121 [7] S. Cantini, Yu. L. Dokshitzer, M. Olsson, G. Turnock and B.R. Webber, `New clustering algorithm for multijet cross sections in e+e- annihilation" 432 Phys. Lett., B 269 1991 Phys. Lett. B 269 (1991) 432 [8] ALEPH Collaboration, "Performance of the ALEPH detector at LEP", Nucl. Instrum. and Methods. A : 360 (1995) 481 Nucl. Instrum. and Methods. 360 (1995) 481 [9] S. Katsanevas and P. Morawitz, "SUSYGEN 2.2 - A Monte Carlo Event Generator for MSSM Sparticle Production at e+e- Colliders" 227 Comput. Phys. Commun. 112 1998 Comput. Phys. Commun. 112 (1998) 227 [10] E. Barberio, B. van Eijk and Z. W¸as 115 Comput. Phys. Commun. 66 1991 Comput. Phys. Commun. 66 (1991) 115 [11] S. Jadach and Z. W¸as, R. Decker and J.H. Kühn, "The decay library TAUOLA" 361 Comput. Phys. Commun. 76 1993 Comput. Phys. Commun. 76 (1993) 361 [12] T. 
Sjöstrand, " High-Energy Physics Event Generation with PYTHIA 5.7 and JETSET 7.4" 74 Comput. Phys. Commun. 82 1994 Comput. Phys. Commun. 82 (1994) 74 [13] S. Jadach et al 276 Comput. Phys. Commun. 66 1991 Comput. Phys. Commun. 66 (1991) 276 11 [14] M. Skrzypek, S. Jadach, W. Placzek and Z. Was 216 Comput. Phys. Commun. 94 1996 Comput. Phys. Commun. 94 (1996) 216 [15] S. Jadach et al 298 Phys. Lett., B 390 1997 Phys. Lett. B 390 (1997) 298 [16] J.A.M. Vermaseren, in Proceedings of the IVth International Workshop on Gamma Gamma Interactions, Eds. G. Cochard and P. Kessler, Springer Verlag, 1980 [17] J. -F. Grivaz and F. Le Diberder, "Complementary analyses and acceptance optimization in new particle searches", LAL preprint # 92-37 (1992) [18] ALEPH Collaboration, "Search for Supersymmetry with a dominant R-Parity Violating LL ¯ E Coupling in e+e- Collisions at Centre-of-Mass Energies of 130 GeV to 172 GeV" 433 Eur. Phys. J., C 4 1998 Eur. Phys. J. C 4 (1998) 433 [19] For reviews see for example: H. Dreiner, "An Introduction to Explicit R-parity Violation" hep-ph/9707435 published in Perspectives on Supersymmetry, ed. G.L. Kane, World Scientific, Singapore (1998); G. Bhattacharyya 83 Nucl. Phys. B, Proc. Suppl. 52 1997 Nucl. Phys. B Proc. Suppl. 52 (1997) 83 12 astro-ph/0101431 eng Gray, M E Cambridge University Infrared constraints on the dark mass concentration observed in the cluster Abell 1942 24 Jan 2001 8 p We present a deep H-band image of the region in the vicinity of the cluster Abell 1942 containing the puzzling dark matter concentration detected in an optical weak lensing study by Erben et al. (2000). We demonstrate that ourlimiting magnitude, H=22, would be sufficient to detect clusters of appropriate mass out to redshifts comparable with the mean redshift of the background sources. Despite this, our infrared image reveals no obvious overdensity ofsources at the location of the lensing mass peak, nor an excess of sources in the I-H vs. H colour-magnitude diagram. We use this to further constrain the luminosity and mass-to-light ratio of the putative dark clump as a function ofits redshift. We find that for spatially-flat cosmologies, background lensing clusters with reasonable mass-to-light ratios lying in the redshift range 0<z<1 are strongly excluded, leaving open the possibility that the massconcentration is a new type of truly dark object. LANL EDS SzGeCERN Astrophysics and Astronomy Ellis, R S Lewis, J R McMahon, R G Firth, A E Meghan Gray <meg@ast.cam.ac.uk> http://invenio-software.org/download/invenio-demo-site-files/0101431.pdf http://invenio-software.org/download/invenio-demo-site-files/0101431.ps.gz http://invenio-software.org/download/invenio-demo-site-files/0101431.fig1.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig2.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig3.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig4.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig5a.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig5b.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig6.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0101431.fig7.ps.gz Additional 2001 11 Caltech IoA, Cambridge 2001-01-25 00 2001-11-02 BATCH Gray, Meghan E. Ellis, Richard S. Lewis, James R. Mahon, Richard G. Mc Firth, Andrew E. Mon. 
Not. R. Astron. Soc. n 200104 Allen S.W., Mon. Not. R. Astron. Soc. 296 (1998) 392 Bacon D., Refregier A., Ellis R.S., Mon. Not. R. Astron. Soc. 318 (2000) 625 Beckett M., Mackay C., McMahon R., Parry I., Ellis R.S., Chan S.J., Hoenig M., Proc. SPIE 3354 (1998) 431 Bertin E., Arnouts S., 1996, A&Ann. Sci., 117, 393 Bonnet H., Mellier Y., Fort B., Astrophys. J. 427 (1994) 83 Bower R.G., Lucey J.R., Ellis R.S., Mon. Not. R. Astron. Soc. 254 (1992) 589 Bower R.G., Lucey J.R., Ellis R.S., Mon. Not. R. Astron. Soc. 254 (1992) 601 Clowe D., Luppino G.A., Kaiser N., Henry J.P., Gioia I.M., 1998, Astrophys. J., 497L Couch W.J., Ellis R.S., Malin D.F., MacLaren I., Mon. Not. R. Astron. Soc. 249 (1991) 606 da Costa L., et al., Astron. Astrophys. 343 (1999) 29 Erben T., van Waerbeke L., Mellier Y., Schneider P., Cuillandre J.-C., Castander F.J., Dantel-Fort M., Astron. Astrophys. 355 (2000) 23 Fahlman G., Kaiser N., Squires G., Woods D., Astrophys. J. 437 (1994) 56 Firth, A., 2000, Clustering at High Redshift, Astropart. Phys. Conference Series, Vol. 200, p. 404 Fischer P., Astron. J. 117 (1999) 2024 Gladders M.D., Yee H.K.C., Astron. J. 120 (2000) 2148 Gray M.E., Ellis R.S., Refregier A., Bézecourt J., McMahon R.G., Hoenig M.D : 318 (2000) 573 Hradecky V., Jones C., Donnelly R.H., Djorgovski S.G., Gal R.R., Odeqahn S.C., Astrophys. J. 543 (2000) 521 Kaiser N., Wilson G., Luppino G.A., 2000, astro-ph/0003338 Kaiser N., Squires G., Astrophys. J. 404 (1993) 441 Kneib J.P., Ellis R.S., Smail I., Couch W.J., Sharples R.M., Astrophys. J. 471 (1996) 643 Marzke R., McCarthy P.J., Persson E., et al., 1999, Photometric Redshifts and the Detection of High Redshift Galaxies, Astropart. Phys. Conference Series, Vol. 191, p. 148 Menanteau F., Ellis R.S., Abraham R.G., Barger Astron. J., Cowie L.L., Mon. Not. R. Astron. Soc. 309 (1999) 208 Metzler C.A., White M., Loken C., 2000, Astrophys. J., submitted astro-ph/0005442 Miralda-Escude J., Babul A., Astrophys. J. 449 (1995) 18 Persson S.E., Murphy D.C., Krzeminsky W., Rother M., Rieke M.J., 1998, Astron. J., 116 Refregier A., Heavens A., Heymans C., Mon. Not. R. Astron. Soc. 319 (2000) 649 Schneider P., Mon. Not. R. Astron. Soc. 283 (1996) 837 Smail I., Ellis R.S., Fitchett M.J., Edge A.C., Mon. Not. R. Astron. Soc. 273 (1995) 277 Smail I., Dressler A., Kneib J.-P., Ellis R.S., Couch W.J., Sharples R.M., Oemler Astron. J., Astrophys. J. 469 (1996) 508 Squires G., Neumann D.M., Kaiser N., Arnaud M., Babul A., Boehringer H., Fahlman G., Woods D., Astrophys. J. 482 (1997) 648 van Kampen E., Katgert P., Mon. Not. R. Astron. Soc. 289 (1997) 327 van Waerbeke L, et al., Astron. Astrophys. 538 (2000) 30 Whittman D.M., Tyson J.A., Kirkman D., Dell’Antonio, I., Bertstein G., Nature 405 (2000) 143 PREPRINT hep-ph/0105155 eng CERN-TH-2001-131 Mangano, M L CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 Physics at the front-end of a neutrino factory : a quantitative appraisal Geneva CERN 16 May 2001 - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 1 p We present a quantitative appraisal of the physics potential for neutrino experiments at the front-end of a muon storage ring. We estimate the forseeable accuracy in the determination of several interesting observables, and explorethe consequences of these measurements. We discuss the extraction of individual quark and antiquark densities from polarized and unpolarized deep-inelastic scattering. In particular we study the implications for the undertanding ofthe nucleon spin structure. 
We assess the determination of alpha_s from scaling violation of structure functions, and from sum rules, and the determination of sin^2(theta_W) from elastic nu-e and deep-inelastic nu-p scattering. Wethen consider the production of charmed hadrons, and the measurement of their absolute branching ratios. We study the polarization of Lambda baryons produced in the current and target fragmentation regions. Finally, we discuss thesensitivity to physics beyond the Standard Model. LANL EDS SzGeCERN Particle Physics - Phenomenology Alekhin, S I Anselmino, M Ball, R D Boglione, M D'Alesio, U Davidson, S De Lellis, G Ellis, J AUTHOR|(SzGeCERN)aaa0005 Forte, S Gambino, P Gehrmann, T Kataev, A L Kotzinian, A Kulagin, S A Lehmann-Dronke, B Migliozzi, P Murgia, F Ridolfi, G Michelangelo MANGANO <Michelangelo.Mangano@cern.ch> http://invenio-software.org/download/invenio-demo-site-files/0105155.pdf http://invenio-software.org/download/invenio-demo-site-files/0105155.ps.gz 2001 11 TH CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 nuDIS Working group of the ECFA-CERN Neutrino-Factory Study Group 2001-05-17 50 2001-05-25 MH SLAC 4628020 CER n 200231 PREPRINT [1] S. Geer 6989 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 6989 hep-ph/9712290 Phys. Rev. D 57 (1998) 6989-6997 039903 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 039903 ] [2] The Muon Collider Collab., µ+µ- Collider: a feasibility study, Report BNL-52503, Fermilab-Conf-96/092, LBNL-38946 (1996); B. Autin, A. Blondel and J. Ellis (eds.), Prospective study of muon storage rings at CERN, Report CERN 99-02, ECFA 99-197 (Geneva, 1999) [3] I. Bigi et al., The potential for neutrino physics at muon colliders and dedicated high current muon storage rings, Report BNL-67404 [4] C. Albright et al hep-ex/0008064 [5] R.D. Ball, D.A. Harris and K.S. McFarland hep-ph/0009223 submitted to the Proceedings of the Nufact '00 Workshop, June 2000, Monterey [6] H.L. Lai et al 1280 Phys. Rev., D 55 1997 Phys. Rev. D 55 (1997) 1280 hep-ph/9606399 Phys. Rev. D 55 (1997) 1280-1296 [7] V. Barone, C. Pascaud and F. Zomer 243 Eur. Phys. J., C 12 2000 Eur. Phys. J. C 12 (2000) 243 hep-ph/9907512 Eur. Phys. J. C 12 (2000) 243-262 [8] S. I. Alekhin 094022 Phys. Rev., D 63 2001 Phys. Rev. D 63 (2001) 094022 hep-ph/0011002 Phys. Rev. D 63 (2001) 094022 65 [9] G. Ridolfi 278 Nucl. Phys., A 666 2000 Nucl. Phys. A 666 (2000) 278 R.D. Ball and H.A.M. Tallini 1327 J. Phys., G 25 1999 J. Phys. G 25 (1999) 1327 S. Forte hep-ph/9409416 and hep-ph/9610238 [10] S. Forte, M.L. Mangano and G. Ridolfi hep-ph/0101192 Nucl. Phys. B 602 (2001) 585-621 to appear in Nucl. Phys., B [11] J. Blümlein and N. Kochelev 296 Phys. Lett., B 381 1996 Phys. Lett. B 381 (1996) 296 and 285 Nucl. Phys., B 498 1997 Nucl. Phys. B 498 (1997) 285 [12] D.A. Dicus 1637 Phys. Rev., D 5 1972 Phys. Rev. D 5 (1972) 1637 [13] M. Anselmino, P. Gambino and J. Kalinowski 267 Z. Phys., C 64 1994 Z. Phys. C 64 (1994) 267 M. Maul et al 443 Z. Phys., A 356 1997 Z. Phys. A 356 (1997) 443 J. Blümlein and N. Kochelev 285 Nucl. Phys., B 498 1997 Nucl. Phys. B 498 (1997) 285 V. Ravishankar 309 Nucl. Phys., B 374 1992 Nucl. Phys. B 374 (1992) 309 [14] B. Ehrnsperger and A. Schäfer 619 Phys. Lett., B 348 1995 Phys. Lett. B 348 (1995) 619 J. Lichtenstadt and H.J. Lipkin 119 Phys. Lett., B 353 1995 Phys. Lett. B 353 (1995) 119 J. Dai et al 273 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 273 P.G. Ratcliffe 383 Phys. Lett., B 365 1996 Phys. Lett. B 365 (1996) 383 N.W. Park, J. Schechter and H. Weigel 420 Phys. 
Lett., B 228 1989 Phys. Lett. B 228 (1989) 420 [15] A.O. Bazarko et al 189 Z. Phys., C 65 1989 Z. Phys. C 65 (1989) 189 [16] R. Mertig and W.L. van Neerven 637 Z. Phys., C 70 1996 Z. Phys. C 70 (1996) 637 W. Vogelsang 2023 Phys. Rev., D 54 1996 Phys. Rev. D 54 (1996) 2023 [17] D. de Florian and R. Sassot 6052 Phys. Rev., D 51 1995 Phys. Rev. D 51 (1995) 6052 [18] R.D. Ball, S. Forte and G. Ridolfi 255 Phys. Lett., B 378 1996 Phys. Lett. B 378 (1996) 255 [19] G. Altarelli, S. Forte and G. Ridolfi 277 Nucl. Phys., B 534 1998 Nucl. Phys. B 534 (1998) 277 and 138 Nucl. Phys. B, Proc. Suppl. 74 1999 Nucl. Phys. B Proc. Suppl. 74 (1999) 138 [20] G. Altarelli, R.D. Ball, S. Forte and G. Ridolfi 1145 Acta Phys. Pol., B 29 1998 Acta Phys. Pol. B 29 (1998) 1145 hep-ph/9803237 Acta Phys. Pol. B 29 (1998) 1145-1173 [21] H.L. Lai et al. (CTEQ Collab.) 375 Eur. Phys. J., C 12 2000 Eur. Phys. J. C 12 (2000) 375 hep-ph/9903282 Eur. Phys. J. C 12 (2000) 375-392 [22] G. Altarelli, R.D. Ball, S. Forte and G. Ridolfi 337 Nucl. Phys., B 496 1997 Nucl. Phys. B 496 (1997) 337 and 1145 Acta Phys. Pol., B 29 1998 Acta Phys. Pol. B 29 (1998) 1145 [23] G. Altarelli and G.G. Ross 391 Phys. Lett., B 212 1988 Phys. Lett. B 212 (1988) 391 A. V. Efremov and O. V. Teryaev, JINR-E2-88-287, in Proceedings of Symposium on Hadron Interactions-Theory and Phenomenology, Bechyne, June 26- July 1, 1988; ed. by J. Fischer et al (Czech. Acad. ScienceInst. Phys., 1988) p.432; R.D. Carlitz, J.C. Collins and A.H. Mueller 229 Phys. Lett., B 214 1988 Phys. Lett. B 214 (1988) 229 G. Altarelli and B. Lampe 315 Z. Phys. C 47 1990 Z. Phys. C 47 (1990) 315 W. Vogelsang 275 Z. Phys., C 50 1991 Z. Phys. C 50 (1991) 275 [24] G.M. Shore and G. Veneziano 75 Phys. Lett., B 244 1990 Phys. Lett. B 244 (1990) 75 and 23 Nucl. Phys., B 381 1992 Nucl. Phys. B 381 (1992) 23 see also G. M. Shore hep-ph/9812355 [25] S. Forte 189 Phys. Lett., B 224 1989 Phys. Lett. B 224 (1989) 189 and 1 Nucl. Phys., B 331 1990 Nucl. Phys. B 331 (1990) 1 S. Forte and E.V. Shuryak 153 Nucl. Phys., B 357 1991 Nucl. Phys. B 357 (1991) 153 [26] S.J. Brodsky and B.-Q. Ma 317 Phys. Lett., B 381 1996 Phys. Lett. B 381 (1996) 317 66 [27] S.J. Brodsky, J. Ellis and M. Karliner 309 Phys. Lett., B 206 1988 Phys. Lett. B 206 (1988) 309 J. Ellis and M. Karliner hep-ph/9601280 [28] M. Glück et al hep-ph/0011215 Phys.Rev. D63 (2001) 094005 [29] D. Adams et al. (Spin Muon Collab.) 23 Nucl. Instrum. Methods Phys. Res., A 437 1999 Nucl. Instrum. Methods Phys. Res. A 437 (1999) 23 [30] B. Adeva et al. (SMC Collab.) 112001 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 112001 P. L. Anthony et al. (E155 Collab.) 19 Phys. Lett., B 493 2000 Phys. Lett. B 493 (2000) 19 [31] R.M. Barnett 1163 Phys. Rev. Lett. 36 1976 Phys. Rev. Lett. 36 (1976) 1163 [32] M.A. Aivazis, J.C. Collins, F.I. Olness and W. Tung 3102 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 3102 [33] T. Gehrmann and W.J. Stirling 6100 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 6100 [34] M. Glück, E. Reya, M. Stratmann and W. Vogelsang 4775 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 4775 [35] D.J. Gross and C.H. Llewellyn Smith 337 Nucl. Phys., B 14 1969 Nucl. Phys. B 14 (1969) 337 [36] R. D. Ball and S. Forte 365 Phys. Lett., B 358 1995 Phys. Lett. B 358 (1995) 365 hep-ph/9506233 Phys.Lett. B358 (1995) 365-378 and hep-ph/9607289 [37] J. Santiago and F.J. Yndurain 45 Nucl. Phys., B 563 1999 Nucl. Phys. B 563 (1999) 45 hep-ph/9904344 Nucl.Phys. B563 (1999) 45-62 [38] V.S. Fadin and L.N. Lipatov 127 Phys. Lett., B 429 1998 Phys. Lett. 
B 429 (1998) 127 M. Ciafaloni, D. Colferai and G. Salam 114036 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 114036 G. Altarelli, R.D. Ball and S. Forte hep-ph/0011270 Nucl.Phys. B599 (2001) 383-423 [39] W.G. Seligman et al 1213 Phys. Rev. Lett. 79 1997 Phys. Rev. Lett. 79 (1997) 1213 [40] A. L. Kataev, G. Parente and A.V. Sidorov 405 Nucl. Phys., B 573 2000 Nucl. Phys. B 573 (2000) 405 hep-ph/9905310 Nucl.Phys. B573 (2000) 405-433 [41] A.L. Kataev, G. Parente and A.V. Sidorov, preprint CERN-TH/2000-343 hep-ph/0012014 and work in progress [42] S.I. Alekhin and A.L. Kataev 402 Phys. Lett., B 452 1999 Phys. Lett. B 452 (1999) 402 hep-ph/9812348 Phys.Lett. B452 (1999) 402-408 [43] S. Bethke R27 J. Phys., G 26 2000 J. Phys. G 26 (2000) R27 hep-ex/0004021 J.Phys. G26 (2000) R27 [44] I. Hinchliffe and A.V. Manohar 643 Annu. Rev. Nucl. Part. Sci. 50 2000 Annu. Rev. Nucl. Part. Sci. 50 (2000) 643 hep-ph/0004186 Ann.Rev.Nucl.Part.Sci. 50 (2000) 643-678 [45] H. Georgi and H.D. Politzer 1829 Phys. Rev., D 14 1976 Phys. Rev. D 14 (1976) 1829 [46] E.B. Zijlstra and W.L. van Neerven 377 Phys. Lett., B 297 1992 Phys. Lett. B 297 (1992) 377 [47] W.L. van Neerven and A. Vogt 263 Nucl. Phys., B 568 2000 Nucl. Phys. B 568 (2000) 263 hep-ph/9907472 Nucl.Phys. B568 (2000) 263-286 and hep-ph/0103123 Nucl.Phys. B603 (2001) 42-68 [48] W.L. van Neerven and A. Vogt 111 Phys. Lett., B 490 2000 Phys. Lett. B 490 (2000) 111 hep-ph/0007362 Phys.Lett. B490 (2000) 111-118 [49] S.A. Larin, T. van Ritbergen and J.A. Vermaseren 41 Nucl. Phys., B 427 1994 Nucl. Phys. B 427 (1994) 41 [50] S.A. Larin, P. Nogueira, T. van Ritbergen and J.A. Vermaseren 338 Nucl. Phys., B 492 1997 Nucl. Phys. B 492 (1997) 338 hep-ph/9605317 Nucl.Phys. B492 (1997) 338-378 [51] A. Retey and J.A. Vermaseren, preprint TTP00-13, NIKHEF-2000-018 hep-ph/0007294 Nucl.Phys. B604 (2001) 281-311 [52] J.A. Gracey 141 Phys. Lett., B 322 1994 Phys. Lett. B 322 (1994) 141 hep-ph/9401214 Phys.Lett. B322 (1994) 141-146 67 [53] J. Blümlein and A. Vogt 149 Phys. Lett., B 370 1996 Phys. Lett. B 370 (1996) 149 hep-ph/9510410 Phys.Lett. B370 (1996) 149-155 [54] S. Catani et al., preprint CERN-TH/2000-131 hep-ph/0005025 in Standard model physics (and more) at the LHC, eds. G. Altarelli and M. Mangano, Report CERN 2000-004 (Geneva, 2000) [55] A.L. Kataev, A.V. Kotikov, G. Parente and A.V. Sidorov 374 Phys. Lett. B 417 1998 Phys. Lett. B 417 (1998) 374 hep-ph/9706534 Phys.Lett. B417 (1998) 374-384 [56] M. Beneke 1 Phys. Rep. 317 1999 Phys. Rep. 317 (1999) 1 hep-ph/9807443 Phys.Rept. 317 (1999) 1-142 [57] M. Beneke and V.M. Braun hep-ph/0010208 [58] M. Dasgupta and B.R. Webber 273 Phys. Lett., B 382 1996 Phys. Lett. B 382 (1996) 273 hep-ph/9604388 [59] M. Maul, E. Stein, A. Schafer and L. Mankiewicz 100 Phys. Lett., B 401 1997 Phys. Lett. B 401 (1997) 100 hep-ph/9612300 [60] A.V. Sidorov et al. (IHEP­JINR Neutrino Detector Collab.) 405 Eur. Phys. J., C 10 1999 Eur. Phys. J. C 10 (1999) 405 hep-ex/9905038 [61] S.I. Alekhin et al. (IHEP-JINR Neutrino Detector Collab), preprint IHEP-01-18 (2001) hep-ex/0104013 [62] C. Adloff et al. (H1 Collab.) hep-ex/0012052 [63] A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne 117 Eur. Phys. J., C 18 2000 Eur. Phys. J. C 18 (2000) 117 hep-ph/0007099 [64] E.B. Zijlstra and W.L. van Neerven 525 Nucl. Phys., B 383 1992 Nucl. Phys. B 383 (1992) 525 [65] S.G. Gorishny and S.A. Larin 109 Phys. Lett., B 172 1986 Phys. Lett. B 172 (1986) 109 S.A. Larin and J.A.M. Vermaseren 345 Phys. Lett., B 259 1991 Phys. Lett. B 259 (1991) 345 [66] A.L. 
Kataev and A.V. Sidorov, preprint CERN-TH/7235-94 hep-ph/9405254 in Proceedings of Rencontre de Moriond - Hadronic session of `QCD and high energy hadronic interactions', Méribel-les-Allues, 1994, ed. J. Trân Thanh Vân (Editions Frontières, Gif-sur-Yvette, 1995), p. 189 [67] J.H. Kim et al 3595 Phys. Rev. Lett. 81 1998 Phys. Rev. Lett. 81 (1998) 3595 hep-ex/9808015 [68] J. Chyla and A.L. Kataev 385 Phys. Lett., B 297 1992 Phys. Lett. B 297 (1992) 385 hep-ph/9209213 [69] A.L. Kataev and A.V. Sidorov 179 Phys. Lett., B 331 1994 Phys. Lett. B 331 (1994) 179 hep-ph/9402342 [70] J. Blümlein and W. L. van Neerven 417 Phys. Lett., B 450 1999 Phys. Lett. B 450 (1999) 417 hep-ph/9811351 [71] A.L. Kataev and V.V. Starshenko 235 Mod. Phys. Lett., A 10 1995 Mod. Phys. Lett. A 10 (1995) 235 hep-ph/9502348 M.A. Samuel, J. Ellis and M. Karliner 4380 Phys. Rev. Lett. 74 1995 Phys. Rev. Lett. 74 (1995) 4380 hep-ph/9503411 [72] W. Bernreuther and W. Wetzel 228 Nucl. Phys., B 197 1982 Nucl. Phys. B 197 (1982) 228 [Erratum 758 Nucl. Phys., B 513 1998 Nucl. Phys. B 513 (1998) 758 ]; S.A. Larin, T. van Ritbergen and J.A. Vermaseren 278 Nucl. Phys., B 438 1995 Nucl. Phys. B 438 (1995) 278 hep-ph/9411260 K.G. Chetyrkin, B.A. Kniehl and M. Steinhauser 2184 Phys. Rev. Lett. 79 1997 Phys. Rev. Lett. 79 (1997) 2184 hep-ph/9706430 [73] E.V. Shuryak and A.I. Vainshtein 451 Nucl. Phys. B 199 1982 Nucl. Phys. B 199 (1982) 451 [74] M.A. Shifman, A.I. Vainshtein and V.I. Zakharov 385 Nucl. Phys., B 147 1979 Nucl. Phys. B 147 (1979) 385 [75] V.M. Braun and A.V. Kolesnichenko 723 Nucl. Phys., B 283 1987 Nucl. Phys. B 283 (1987) 723 68 [76] G.G. Ross and R.G. Roberts 425 Phys. Lett., B 322 1994 Phys. Lett. B 322 (1994) 425 hep-ph/9312237 [77] J. Balla, M.V. Polyakov and C. Weiss 327 Nucl. Phys., B 510 1998 Nucl. Phys. B 510 (1998) 327 hep-ph/9707515 [78] R.G. Oldeman (CHORUS Collab.) 96 Nucl. Phys. B, Proc. Suppl. 79 1999 Nucl. Phys. B, Proc. Suppl. 79 (1999) 96 R.G. Oldeman, PhD Thesis, Amsterdam University, June 2000 (unpublished) [79] U.K. Yang et al. (CCFR-NuTeV Collab.) hep-ex/0010001 [80] J.D. Bjorken 1767 Phys. Rev. 163 1967 Phys. Rev. 163 (1967) 1767 [81] W.A. Bardeen, A.J. Buras, D.W. Duke and T. Muta 3998 Phys. Rev., D 18 1978 Phys. Rev. D 18 (1978) 3998 G. Altarelli, R.K. Ellis and G. Martinelli 521 Nucl. Phys., B 143 1978 Nucl. Phys. B 143 (1978) 521 [82] K.G. Chetyrkin, S.G. Gorishny, S.A. Larin and F.V. Tkachov 230 Phys. Lett., B 137 1984 Phys. Lett. B 137 (1984) 230 [83] S.A. Larin, F.V. Tkachov and J.A. Vermaseren 862 Phys. Rev. Lett. 66 1991 Phys. Rev. Lett. 66 (1991) 862 [84] M. Arneodo 301 Phys. Rep. 240 1994 Phys. Rep. 240 (1994) 301 [85] G. Piller and W. Weise 1 Phys. Rep. 330 2000 Phys. Rep. 330 (2000) 1 hep-ph/9908230 [86] P. Amaudruz et al 3 Nucl. Phys., B 441 1995 Nucl. Phys. B 441 (1995) 3 M. Arneodo et al 12 Nucl. Phys., B 441 1995 Nucl. Phys. B 441 (1995) 12 [87] A.C. Benvenuti et al. (BCDMS Collab.) 483 Phys. Lett., B 189 1987 Phys. Lett. B 189 (1987) 483 [88] J. Gomez et al 4348 Phys. Rev., D 49 1994 Phys. Rev. D 49 (1994) 4348 [89] M.R. Adams et al. (E665 Collab.) 403 Z. Phys., C 67 1995 Z. Phys. C 67 (1995) 403 hep-ex/9505006 [90] S.L. Adler 963 Phys. Rev., B 135 1964 Phys. Rev. B 135 (1964) 963 [91] J.S. Bell 57 Phys. Rev. Lett. 13 1964 Phys. Rev. Lett. 13 (1964) 57 [92] C.A. Piketti and L. Stodolsky 571 Nucl. Phys., B 15 1970 Nucl. Phys. B 15 (1970) 571 [93] B.Z. Kopeliovich and P. Marage 1513 Int. J. Mod. Phys., A 8 1993 Int. J. Mod. Phys. A 8 (1993) 1513 [94] P.P. Allport et al.
(BEBC WA59 Collab.) 417 Phys. Lett., B 232 1989 Phys. Lett. B 232 (1989) 417 [95] C. Boros, J.T. Londergan and A.W. Thomas 114030 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 114030 hep-ph/9804410 [96] U.K. Yang et al. (CCFR­NuTeV Collab.) hep-ex/0009041 [97] M.A. Aivazis, F.I. Olness and W. Tung 2339 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 2339 V. Barone, M. Genovese, N.N. Nikolaev, E. Predazzi and B. Zakharov 279 Phys. Lett., B 268 1991 Phys. Lett. B 268 (1991) 279 and 83 Z. Phys., C 70 1996 Z. Phys. C 70 (1996) 83 hep-ph/9505343 [98] R.S. Thorne and R.G. Roberts 303 Phys. Lett., B 421 1998 Phys. Lett. B 421 (1998) 303 hep-ph/9711223 [99] A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne 463 Eur. Phys. J., C 4 1998 Eur. Phys. J. C 4 (1998) 463 hep-ph/9803445 [100] L.L. Frankfurt, M.I. Strikman and S. Liuti 1725 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 1725 [101] R. Kobayashi, S. Kumano and M. Miyama 465 Phys. Lett., B 354 1995 Phys. Lett. B 354 (1995) 465 hep-ph/9501313 [102] K.J. Eskola, V.J. Kolhinen and P.V. Ruuskanen 351 Nucl. Phys., B 535 1998 Nucl. Phys. B 535 (1998) 351 hep-ph/9802350 69 [103] S.A. Kulagin hep-ph/9812532 [104] P.V. Landshoff, J.C. Polkinghorne and R.D. Short 225 Nucl. Phys., B 28 1971 Nucl. Phys. B 28 (1971) 225 [105] S.A. Kulagin, G. Piller and W. Weise 1154 Phys. Rev., C 50 1994 Phys. Rev. C 50 (1994) 1154 nucl-th/9402015 [106] S.V. Akulinichev, S.A. Kulagin and G.M. Vagradov 485 Phys. Lett., B 158 1985 Phys. Lett. B 158 (1985) 485 [107] S.A. Kulagin 653 Nucl. Phys., A 500 1989 Nucl. Phys. A 500 (1989) 653 [108] G.B. West, Ann. Phys.NY : 74 (1972) 464 [109] S.A. Kulagin and A.V. Sidorov 261 Eur. Phys. J., A 9 2000 Eur. Phys. J. A 9 (2000) 261 hep-ph/0009150 [110] A.C. Benvenuti et al. (BCDMS Collab.) 29 Z. Phys., C 63 1994 Z. Phys. C 63 (1994) 29 [111] M. Vakili et al. (CCFR Collab.) 052003 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 052003 hep-ex/9905052 [112] S.A. Kulagin 435 Nucl. Phys., A 640 1998 Nucl. Phys. A 640 (1998) 435 nucl-th/9801039 [113] I.R. Afnan, F. Bissey, J. Gomez, A.T. Katramatou, W. Melnitchouk, G.G. Petratos and A.W. Thomas nucl-th/0006003 [114] V. Guzey et al hep-ph/0102133 [115] S. Sarantakos, A. Sirlin and W.J. Marciano 84 Nucl. Phys., B 217 1983 Nucl. Phys. B 217 (1983) 84 D.Y. Bardin and V.A. Dokuchaeva 975 Sov. J. Nucl. Phys. 43 1986 Sov. J. Nucl. Phys. 43 (1986) 975 D.Y. Bardin and V.A. Dokuchaeva 839 Nucl. Phys., B 287 1987 Nucl. Phys. B 287 (1987) 839 [116] G. Degrassi et al. Phys. Lett., B350 (95) 75; G. Degrassi and P. Gambino 3 Nucl. Phys., B 567 2000 Nucl. Phys. B 567 (2000) 3 [117] J.N. Bahcall, M. Kamionkowski and A. Sirlin 6146 Phys. Rev., D 51 1995 Phys. Rev. D 51 (1995) 6146 astro-ph/9502003 [118] See F. Jegerlehner hep-ph/9901386 and references therein [119] K.S. McFarland et al. (NuTeV Collab.) hep-ex/9806013 in Proceedings 33rd Rencontres de Moriond on Electroweak Interactions and Unified Theories, Les Arcs, 1998 [120] M. E. Peskin and T. Takeuchi 381 Phys. Rev., D 46 1992 Phys. Rev. D 46 (1992) 381 W. J. Marciano and J. L. Rosner 2963 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 2963 [Erratum 2963 Phys. Rev. Lett. 68 1990 Phys. Rev. Lett. 68 (1990) 2963 ] [121] G. Altarelli and R. Barbieri 161 Phys. Lett., B 253 1991 Phys. Lett. B 253 (1991) 161 D. C. Kennedy and P. Langacker 2967 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 2967 [Erratum 2967 Phys. Rev. Lett. 66 1990 Phys. Rev. Lett. 66 (1990) 2967 ] [122] D.E. Groom et al, Particle Data Group 1 Eur. Phys. J. 15 2000 Eur. 
Phys. J. 15 (2000) 1 [123] P. Migliozzi et al 217 Phys. Lett., B 462 1999 Phys. Lett. B 462 (1999) 217 [124] J. Finjord and F. Ravndal 61 Phys. Lett., B 58 1975 Phys. Lett. B 58 (1975) 61 [125] R.E. Shrock and B.W. Lee 2539 Phys. Rev., D 13 1976 Phys. Rev. D 13 (1976) 2539 [126] C. Avilez et al 149 Phys. Lett., B 66 1977 Phys. Lett. B 66 (1977) 149 [127] C. Avilez and T. Kobayashi 3448 Phys. Rev., D 19 1979 Phys. Rev. D 19 (1979) 3448 [128] C. Avilez et al 709 Phys. Rev., D 17 1978 Phys. Rev. D 17 (1978) 709 70 [129] A. Amer et al 48 Phys. Lett., B 81 1979 Phys. Lett. B 81 (1979) 48 [130] S.G. Kovalenko 934 Sov. J. Nucl. Phys. 52 1990 Sov. J. Nucl. Phys. 52 (1990) 934 [131] G.T. Jones et al. (WA21 Collab.) 593 Z. Phys., C 36 1987 Z. Phys. C 36 (1987) 593 [132] V.V. Ammosov et al 247 JETP Lett. 58 1993 JETP Lett. 58 (1993) 247 [133] D. Son et al 2129 Phys. Rev., D 28 1983 Phys. Rev. D 28 (1983) 2129 [134] N. Ushida et al. (E531 Collab.) 375 Phys. Lett., B 206 1988 Phys. Lett. B 206 (1988) 375 [135] N. Armenise et al 409 Phys. Lett., B 104 1981 Phys. Lett. B 104 (1981) 409 [136] G. De Lellis, P. Migliozzi and P. Zucchelli 7 Phys. Lett., B 507 2001 Phys. Lett. B 507 (2001) 7 hep-ph/0104066 [137] G. Corcella et al hep-ph/0011363 [138] T. Sjöstrand, report LU-TP-95-20 hep-ph/9508391 [139] G. Ingelman et al 108 Comput. Phys. Commun. 101 1997 Comput. Phys. Commun. 101 (1997) 108 [140] T. Bolton hep-ex/9708014 [141] P. Annis et al. (CHORUS Collab.) 458 Phys. Lett., B 435 1998 Phys. Lett. B 435 (1998) 458 [142] J. Conrad et al 1341 Rev. Mod. Phys. 70 1998 Rev. Mod. Phys. 70 (1998) 1341 [143] T. Adams et al. (NuTeV Collab.) 092001 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 092001 [144] A.E. Asratian et al. (BBCN Collab.) 55 Z. Phys., C 58 1993 Z. Phys. C 58 (1993) 55 [145] J.D. Richman and P.R. Burchat 893 Rev. Mod. Phys. 67 1995 Rev. Mod. Phys. 67 (1995) 893 [146] J. Collins, L. Frankfurt and M. Strikman 2982 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 2982 [147] A.V. Radyushkin 5524 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 5524 [148] S.J. Brodsky, L. Frankfurt, J.F. Gunion, A.H. Mueller and M. Strikman 3134 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 3134 A. V. Radyushkin 333 Phys. Lett., B 385 1996 Phys. Lett. B 385 (1996) 333 L. Mankiewicz, G. Piller and T. Weigl 119 Eur. Phys. J., C 5 1998 Eur. Phys. J. C 5 (1998) 119 and 017501 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 017501 M. Vanderhaeghen, P.A.M. Guichon and M. Guidal 5064 Phys. Rev. Lett. 80 1998 Phys. Rev. Lett. 80 (1998) 5064 [149] B. Lehmann-Dronke, P.V. Pobylitsa, M.V. Polyakov, A. Schäfer and K. Goeke 147 Phys. Lett., B 475 2000 Phys. Lett. B 475 (2000) 147 B. Lehmann-Dronke, M.V. Polyakov, A. Schäfer and K. Goeke 114001 Phys. Rev., D 63 2001 Phys. Rev. D 63 (2001) 114001 hep-ph/0012108 [150] M. Wirbel, B. Stech and M. Bauer 637 Z. Phys., C 29 1985 Z. Phys. C 29 (1985) 637 M. Bauer and M. Wirbel 671 Z. Phys. 42 1989 Z. Phys. 42 (1989) 671 [151] H-n. Li and B. Meli´c 695 Eur. Phys. J., C 11 1999 Eur. Phys. J. C 11 (1999) 695 [152] A. Abada et al 268 Nucl. Phys. B, Proc. Suppl. 83 2000 Nucl. Phys. B, Proc. Suppl. 83 (2000) 268 D. Becirevic et al hep-lat/0002025 A. Ali Khan et al hep-lat/0010009 A. S. Kronfeld hep-ph/0010074 L. Lellouch and C.J.D. Lin (UKQCD Collab.) hep-ph/0011086 71 [153] A.V. Radyushkin 014030 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 014030 [154] A.D. Martin, R.G. Roberts and W.J. Stirling 155 Phys. Lett., B 354 1995 Phys. Lett. B 354 (1995) 155 [155] J.T. Jones et al. (WA21 Collab.) 23 Z. 
Phys., C 28 1987 Z. Phys. C 28 (1987) 23 [156] S. Willocq et al. (WA59 Collab.) 207 Z. Phys., C 53 1992 Z. Phys. C 53 (1992) 207 [157] D. DeProspo et al. (E632 Collab.) 6691 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 6691 [158] P. Astier et al. (NOMAD Collab.) 3 Nucl. Phys., B 588 2000 Nucl. Phys. B 588 (2000) 3 [159] L. Trentadue and G. Veneziano 201 Phys. Lett., B 323 1994 Phys. Lett. B 323 (1994) 201 [160] M. Anselmino, M. Boglione, J. Hansson, and F. Murgia 828 Phys. Rev., D 54 1996 Phys. Rev. D 54 (1996) 828 [161] R.L. Jaffe 6581 Phys. Rev., D 54 1996 Phys. Rev. D 54 (1996) 6581 [162] J. Ellis, D.E. Kharzeev and A. Kotzinian 467 Z. Phys., C 69 1996 Z. Phys. C 69 (1996) 467 [163] D. de Florian, M. Stratmann, and W. Vogelsang 5811 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 5811 [164] A. Kotzinian, A. Bravar and D. von Harrach 329 Eur. Phys. J., C 2 1998 Eur. Phys. J. C 2 (1998) 329 [165] A. Kotzinian hep-ph/9709259 [166] S.L. Belostotski 526 Nucl. Phys. B, Proc. Suppl. 79 1999 Nucl. Phys. B, Proc. Suppl. 79 (1999) 526 [167] D. Boer, R. Jakob, and P.J. Mulders 471 Nucl. Phys., B 564 2000 Nucl. Phys. B 564 (2000) 471 [168] C. Boros, J.T. Londergan and A.W. Thomas 014007 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 014007 and D : 62 (2000) 014021 [169] D. Ashery and H.J. Lipkin 263 Phys. Lett., B 469 1999 Phys. Lett. B 469 (1999) 263 [170] B-Q. Ma, I. Schmidt, J. Soffer, and J-Y. Yang 657 Eur. Phys. J., C 16 2000 Eur. Phys. J. C 16 (2000) 657 114009 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 114009 [171] M. Anselmino, M. Boglione, and F. Murgia 253 Phys. Lett., B 481 2000 Phys. Lett. B 481 (2000) 253 [172] M. Anselmino, D. Boer, U. D'Alesio, and F. Murgia 054029 Phys. Rev., D 63 2001 Phys. Rev. D 63 (2001) 054029 [173] D. Indumathi, H.S. Mani and A. Rastogi 094014 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 094014 [174] M. Burkardt and R.L. Jaffe 2537 Phys. Rev. Lett. 70 1993 Phys. Rev. Lett. 70 (1993) 2537 [175] I.I. Bigi 43 Nuovo Cimento 41 1977 Nuovo Cimento 41 (1977) 43 and 581 [176] W. Melnitchouk and A.W. Thomas 311 Z. Phys., A 353 1996 Z. Phys. A 353 (1996) 311 [177] J. Ellis, M. Karliner, D.E. Kharzeev and M.G. Sapozhnikov 256 Nucl. Phys., A 673 2000 Nucl. Phys. A 673 (2000) 256 [178] R. Carlitz and M. Kislinger 336 Phys. Rev., D 2 1970 Phys. Rev. D 2 (1970) 336 [179] D. Naumov hep-ph/0101355 [180] P. Migliozzi et al 19 Phys. Lett., B 494 2000 Phys. Lett. B 494 (2000) 19 [181] A. Alton et al hep-ex/0008068 72 [182] Y. Grossman 141 Phys. Lett., B 359 1995 Phys. Lett. B 359 (1995) 141 [183] P. Langacker, M. Luo and A. Mann 87 Rev. Mod. Phys. 64 1992 Rev. Mod. Phys. 64 (1992) 87 [184] F. Cuypers and S. Davidson 503 Eur. Phys. J., C 2 1998 Eur. Phys. J. C 2 (1998) 503 S. Davidson, D. Bailey and B.A. Campbell 613 Z. Phys., C 61 1994 Z. Phys. C 61 (1994) 613 [185] A. Leike 143 Phys. Rep. 317 1999 Phys. Rep. 317 (1999) 143 [186] A. Datta, R. Gandhi, B. Mukhopadhyaya and P. Mehta hep-ph/0011375 [187] G. Giudice et al., Report of the Stopped-Muon Working Group, to appear. 73 eng BNL-40718 FERMILAB-Pub-87-222-T Nason, P Brookhaven Nat. Lab. The total cross section for the production of heavy quarks in hadronic collisions Upton, IL Brookhaven Nat. Lab. 23 Dec 1987 42 p SzGeCERN Particle Physics - Phenomenology Dawson, S Ellis, R K 11 1987 1990-01-29 50 2002-01-04 BATCH SLAC 1773607 h 198804n PREPRINT eng CERN-PRE-82-006 Ellis, J AUTHOR|(SzGeCERN)aaa0005 CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 From the standard model to grand unification Geneva CERN 1982 mult. 
p SzGeCERN General Theoretical Physics 1982 11 TH 1990-01-28 50 2001-09-15 BATCH 820332 oai:cds.cern.ch:CERN-PRE-82-006 cern:theory h 1982n PREPRINT astro-ph/0104076 eng Dev, A Delhi University Cosmic equation of state, Gravitational Lensing Statistics and Merging of Galaxies 4 Apr 2001 28 p In this paper we investigate observational constraints on the cosmic equation of state of dark energy ($p = w \rho$) using gravitational lensing statistics. We carry out likelihood analysis of the lens surveys to constrain the cosmological parameters $\Omega_{m}$ and $w$. We start by constraining $\Omega_{m}$ and $w$ in the no-evolution model of galaxies where the comoving number density of galaxies is constant. We extend our study to evolutionary models of galaxies - Volmerange $&$ Guiderdoni Model and Fast-Merging Model (of Broadhurst, Ellis $&$ Glazebrook). For the no-evolution model we get $w \leq -0.24$ and $\Omega_{m}\leq 0.48$ at $1\sigma$ (68% confidence level). For the Volmerange $&$ Guiderdoni Model we have $w \leq -0.2$ and $\Omega_{m} \leq 0.58$ at $1 \sigma$, and for the Fast Merging Model we get $w \leq -0.02$ and $\Omega_{m} \leq 0.93$ at $1\sigma$. For the case of constant $\Lambda$ ($w=-1$), all the models permit $\Omega_{m} = 0.3$ with 68% CL. We observe that the constraints on $w$ and $\Omega_{m}$ (and on $\Omega_{m}$ in the case of $w = -1$) obtained in the case of evolutionary models are weaker than those obtained in the case of the no-evolution model. LANL EDS SzGeCERN Astrophysics and Astronomy Jain, D Panchapakesan, N Mahajan, S Bhatia, V B Deepak Jain <deepak@physics.du.ac.in> http://invenio-software.org/download/invenio-demo-site-files/0104076.pdf http://invenio-software.org/download/invenio-demo-site-files/0104076.ps.gz 2001 10 Delhi University 2001-04-05 00 2001-04-10 BATCH Dev, Abha Jain, Deepak CER n 200231 PREPRINT [1] S. Perlmutter et al 565 Astrophys. J. 517 1999 Astrophys. J. 517 (1999) 565 [2] S. Perlmutter et al., Phys. Rev. Lett.: 83 (1999) 670 [3] A. G. Riess et al 1009 Astron. J. 116 1998 Astron. J. 116 (1998) 1009 [4] P. de Bernardis et al 955 Nature 404 2000 Nature 404 (2000) 955 [5] J. P. Ostriker & P. J. Steinhardt 600 Nature 377 1995 Nature 377 (1995) 600 [6] V. Sahni & Alexei Starobinsky, IJMP, D : 9 (2000) 373 [7] L. F. Bloomfield Torres & I. Waga 712 Mon. Not. R. Astron. Soc. 279 1996 Mon. Not. R. Astron. Soc. 279 (1996) 712 [8] V. Silveira & I. Waga 4890 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 4890 [9] V. Silveira & I. Waga 4625 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 4625 [10] I. Waga & Ana P. M. R. Miceli, Phys. Rev. D : 59 (1999) 103507 [11] M. S. Turner & M. White, Phys. Rev. D : 56 (1997) 4439 [12] D. Huterer & M. S. Turner, Phys. Rev. D : 60 (1999) 081301 [13] T. Chiba, N. Sugiyama & T. Nakamura, Mon. Not. R. Astron. Soc.: 289 (1997) L5 [14] T. Chiba, N. Sugiyama & T. Nakamura, Mon. Not. R. Astron. Soc.: 301 (1998) 72 [15] P. J. E. Peebles 439 Astrophys. J. 284 1984 Astrophys. J. 284 (1984) 439 [16] B. Ratra & P. J. E. Peebles, Phys. Rev. D : 37 (1988) 3406 [17] R. R. Caldwell, R. Dave & P. J. Steinhardt, Phys. Rev. Lett.: 80 (1998) 1582 [18] G. Efstathiou astro-ph/9904356 (1999) [19] J. A. S. Lima & J. S. Alcaniz 893 Mon. Not. R. Astron. Soc. 317 2000 Mon. Not. R. Astron. Soc. 317 (2000) 893 [20] Wang et al 17 Astrophys. J. 530 2000 Astrophys. J. 530 (2000) 17 [21] H. W. Rix, D. Maoz, E. Turner & M. Fukugita 49 Astrophys. J. 435 1994 Astrophys. J. 435 (1994) 49 [22] S. Mao & C. S. Kochanek 569 Mon. Not. R. Astron. Soc. 268 1994 Mon. Not. R.
Astron. Soc. 268 (1994) 569 [23] D. Jain, N. Panchapakesan, S. Mahajan & V. B. Bhatia, MPLA : 15 (2000) 41 [24] T. Broadhurst, R. Ellis & K. Glazebrook 55 Nature 355 1992 Nature 355 (1992) 55 [BEG] [25] B. Rocca-Volmerange & B. Guiderdoni, Mon. Not. R. Astron. Soc.: 247 (1990) 166 [26] A. Toomre, in The Evolution of Galaxies and Stellar Populations eds: B. M. Tinsley & R. B. Larson (Yale Univ. Observatory), p. 401 (1977) [27] F. Schweizer 109 Astron. J. 111 1996 Astron. J. 111 (1996) 109 [28] O. J. Eggen, D. Lynden-Bell & A. R. Sandage 748 Astrophys. J. 136 1962 Astrophys. J. 136 (1962) 748 [29] R. B. Partridge & P. J. E. Peebles 868 Astrophys. J. 147 1967 Astrophys. J. 147 (1967) 868 [30] S. P. Driver et al L23 Astrophys. J. 449 1995 Astrophys. J. 449 (1995) L23 [31] J. M. Burkey et al L13 Astrophys. J. 429 1994 Astrophys. J. 429 (1994) L13 [32] S. M. Faber et al 668 Astrophys. J. 204 1976 Astrophys. J. 204 (1976) 668 [33] R. G. Carlberg et al 540 Astrophys. J. 435 1994 Astrophys. J. 435 (1994) 540 [34] S. E. Zepf 377 Nature 390 1997 Nature 390 (1997) 377 [35] K. Glazebrook et al 157 Mon. Not. R. Astron. Soc. 273 1995 Mon. Not. R. Astron. Soc. 273 (1995) 157 [36] S. J. Lilly et al 108 Astrophys. J. 455 1995 Astrophys. J. 455 (1995) 108 [37] R. S. Ellis et al 235 Mon. Not. R. Astron. Soc. 280 1996 Mon. Not. R. Astron. Soc. 280 (1996) 235 [38] R. S. Ellis, Ann. Rev 389 Astron. Astrophys. 35 1997 Astron. Astrophys. 35 (1997) 389 [39] B. Guiderdoni & B. Rocca-Volmerange 435 Astron. Astrophys. 252 1991 Astron. Astrophys. 252 (1991) 435 [40] S. E. Zepf, & D. C. Koo 34 Astrophys. J. 337 1989 Astrophys. J. 337 (1989) 34 [41] H. K. C. Yee & E. Ellingson 37 Astrophys. J. 445 1995 Astrophys. J. 445 (1995) 37 [42] S. Cole et al 781 Mon. Not. R. Astron. Soc. 271 1994 Mon. Not. R. Astron. Soc. 271 (1994) 781 [43] C. M. Baugh, S. Cole & C. S. Frenk L27 Mon. Not. R. Astron. Soc. 282 1996 Mon. Not. R. Astron. Soc. 282 (1996) L27 [44] C. M. Baugh, S. Cole & C. S. Frenk 1361 Mon. Not. R. Astron. Soc. 283 1996 Mon. Not. R. Astron. Soc. 283 (1996) 1361 [45] C. M. Baugh et al 504 Astrophys. J. 498 1998 Astrophys. J. 498 (1998) 504 [46] P. Schechter 297 Astrophys. J. 203 1976 Astrophys. J. 203 (1976) 297 [47] W. H. Press & P. Schechter 487 Astrophys. J. 187 1974 Astrophys. J. 187 (1974) 487 [48] J. E. Gunn & J. R. Gott 1 Astrophys. J. 176 1972 Astrophys. J. 176 (1972) 1 [49] J. Loveday, B. A. Peterson, G. Efstathiou & S. J. Maddox 338 Astrophys. J. 390 1994 Astrophys. J. 390 (1994) 338 [50] E. L. Turner, J. P. Ostriker & J. R. Gott III 1 Astrophys. J. 284 1984 Astrophys. J. 284 (1984) 1 [51] E. L. Turner L43 Astrophys. J. 365 1990 Astrophys. J. 365 (1990) L43 [52] M. Fukugita & E. L. Turner 99 Mon. Not. R. Astron. Soc. 253 1991 Mon. Not. R. Astron. Soc. 253 (1991) 99 [53] M. Fukugita, T. Futamase, M. Kasai & E. L. Turner, Astrophys. J.: 393 (1992) 3 [54] C. S. Kochanek 12 Astrophys. J. 419 1993 Astrophys. J. 419 (1993) 12 [55] C. S. Kochanek 638 Astrophys. J. 466 1996 Astrophys. J. 466 (1996) 638 [56] F. D. A. Hartwick & D. Schade, Ann. Rev. Astron. Astrophys.: 28 (1990) 437 [57] Yu-Chung N. Cheng & L. M. Krauss 697 Int. J. Mod. Phys., A 15 2000 Int. J. Mod. Phys. A 15 (2000) 697 [58] J. N. Bahcall et al 56 Astrophys. J. 387 1992 Astrophys. J. 387 (1992) 56 [59] P. C. Hewett et al., Astron. J. 109, 1498 (LBQS) (1995) [60] D. Maoz et al., Astrophys. J. 409, 28 (Snapshot) (1993) [61] D. Crampton, R. D. McClure & J. M. Fletcher 23 Astrophys. J. 392 1992 Astrophys. J. 392 (1992) 23 [62] H. K. C. Yee, A. V.
Filippenko & D. Tang 7 Astron. J. 105 1993 Astron. J. 105 (1993) 7 [63] J. Surdej et al 2064 Astron. J. 105 1993 Astron. J. 105 (1993) 2064 [64] D. Jain, N. Panchapakesan, S. Mahajan & V. B. Bhatia astro-ph/9807129 (1998) [65] D. Jain, N. Panchapakesan, S. Mahajan & V. B. Bhatia, IJMP, A : 13 (1998) 4227 [66] M. lampton, B. Margon & S. Bowyer 177 Astrophys. J. 208 1976 Astrophys. J. 208 (1976) 177 eng DOE-ER-40048-24-P-4 Abbott, R B Washington U. Seattle Cosmological perturbations in Kaluza-Klein models Washington, DC US. Dept. Energy. Office Adm. Serv. Nov 1985 26 p SzGeCERN General Theoretical Physics Bednarz, B F Ellis, S D 1985 11 1990-01-29 50 2002-01-04 BATCH h 198608n PREPRINT eng CERN-PPE-92-085 HEPHY-PUB-568 Albajar, C CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 Multifractal analysis of minimum bias events in \Sqrt s = 630 GeV $\overline{p}$p collisions Geneva CERN 1 Jun 1992 27 p SzGeCERN Particle Physics - Experimental Results Allkofer, O C Apsimon, R J Bartha, S Bezaguet, A Bohrer, A Buschbeck, B Cennini, P Cittolin, S Clayton, E Coughlan, J A Dau, D Della Negra, M Demoulin, M Dibon, H Dowell, J D Eggert, K Eisenhandler, E F Ellis, N Faissner, H Fensome, I F Ferrando, A Garvey, J Geiser, A Givernaud, A Gonidec, A Jank, W Jorat, G Josa-Mutuberria, I Kalmus, P I P Karimaki, V Kenyon, I R Kinnunen, R Krammer, M Lammel, S Landon, M Levegrun, S Lipa, P Markou, C Markytan, M Maurin, G McMahon, S Meyer, T Moers, T Morsch, A Moulin, A Naumann, L Neumeister, N Norton, A Pancheri, G Pauss, F Pietarinen, E Pimia, M Placci, A Porte, J P Priem, R Prosi, R Radermacher, E Rauschkolb, M Reithler, H Revol, J P Robinson, D Rubbia, C Salicio, J M Samyn, D Schinzel, D Schleichert, R Seez, C Shah, T P Sphicas, P Sumorok, K Szoncso, F Tan, C H Taurok, A Taylor, L Tether, S Teykal, H F Thompson, G Terrente-Lujan, E Tuchscherer, H Tuominiemi, J Virdee, T S von Schlippe, W Vuillemin, V Wacker, K Wagner, H Walzel, G Weselka, D Wulz, C E AACHEN - BIRMINGHAM - CERN - HELSINKI - KIEL - IMP. COLL. LONDON - QUEEN MARY COLL. LONDON - MADRID CIEMAT - MIT - RUTHERFORD APPLETON LAB. - VIENNA Collaboration 1992 13 UA1 PPE P00003707 CERN SPS CERN 1992-06-16 50 2001-04-12 BATCH 37-46 Z. Phys., C 56 1992 SLAC 2576562 oai:cds.cern.ch:CERN-PPE-92-085 cern:experiment n 199226 a1992 ARTICLE eng CERN-TH-4036 Ellis, J AUTHOR|(SzGeCERN)aaa0005 CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 Non-compact supergravity solves problems Geneva CERN Oct 1984 15 p SzGeCERN General Theoretical Physics Kahler manifolds gravitinos axions constraints noscale Enqvist, K Nanopoulos, D V 1985 13 TH CERN 1990-01-29 50 2001-09-15 BATCH 357-362 Phys. Lett., B 151 1985 oai:cds.cern.ch:CERN-TH-4036 cern:theory h 198451 a1985 ARTICLE eng STAN-CS-81-898-MF Whang, K Stanford University Separability as a physical database design methodology Stanford, CA Stanford Univ. Comput. Sci. Dept. Oct 1981 60 p Ordered for J Blake/DD SzGeCERN Computing and Computers Wiederhold, G Sagalowicz, D 1981 19 Stanford Univ. 1990-01-28 50 2002-01-04 BATCH n 198238n REPORT eng JYFL-RR-82-7 Arje, J University of Jyvaskyla Charge creation and reset mechanisms in an ion guide isotope separator (IGIS) Jyvaskyla Finland Univ. Dept. Phys. Jul 1982 18 p SzGeCERN Detectors and Experimental Techniques 1982 19 Jyväsklä Univ. 1990-01-28 50 2002-01-04 BATCH n 198238n REPORT 0898710022 eng 519.2 Lindley, Dennis Victor University College London Bayesian statistics a review Philadelphia, PA SIAM 1972 88 p CBMS-NSF Reg. Conf. Ser. Appl. 
Math. 2 Society for Industrial and Applied Mathematics. Philadelphia 1972 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK 0844621951 eng 621.396.615 621.385.3 Hamilton, Donald R MIT Klystrons and microwave triodes New York, NY McGraw-Hill 1948 547 p M.I.T. Radiat. Lab. 7 Knipp, Julian K Kuper, J B Horner 1948 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK eng 621.313 621.382.333.33 Draper, Alec Electrical machines 2nd ed London Longmans 1967 404 p Electrical engineering series 1967 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK 1563964554 eng 539.1.078 539.143.44 621.384.8 Quadrupole mass spectrometry and its applications Amsterdam North-Holland 1976 ed. Dawson, Peter H 368 p 1976 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK 2225350574 fre 518.5:62.01 Dasse, Michel Analyse informatique t.1 Les preliminaires Paris Masson 1972 Informatique 1972 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK 2225350574 fre 518.5:62.01 Dasse, Michel Analyse informatique t.2 L'accomplissement Paris Masson 1972 Informatique 1972 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK 0023506709 eng 519.2 Harshbarger, Thad R Introductory statistics a decision map 2nd ed New York, NY Macmillan 1977 597 p 1977 21 1990-01-27 00 2002-04-12 BATCH m 198604 BOOK eng 519.2 Fry, Thornton C Bell Teleph Labs Probability and its engineering uses Princeton, NJ Van Nostrand 1928 490 p Bell Teleph Lab. Ser. 1928 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK 0720421039 eng 517.11 Kleene, Stephen Cole University of Wisconsin Introduction to metamathematics Amsterdam North-Holland 1952 (repr.1964.) 560 p Bibl. Matematica 1 1952 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK eng 621.38 Hughes, Robert James Introduction to electronics London English Univ. Press 1962 432 p 65/0938, Blair, W/PE, pp Pipe, Peter 1962 21 1990-01-27 50 2002-04-12 BATCH m 198606 BOOK eng 519.2 518.5:519.2 Burford, Roger L Indiana University Statistics a computer approach Columbus, OH Merrill 1968 814 p 1968 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK 0471155039 eng 539.1.075 Chiang, Hai Hung Basic nuclear electronics New York, NY Wiley 1969 354 p 1969 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK eng 621-5 Dransfield, Peter Engineering systems and automatic control Englewood Cliffs, N.J. Prentice-Hall 1968 432 p 1968 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK 0387940758 eng 537.52 Electrical breakdown in gases London Macmillan 1973 ed. Rees, J A 303 p 1973 21 1990-01-27 00 2002-04-12 BATCH m 198606 BOOK eng Tavanapong, W University of Central Florida A High-performance Video Browsing System Orlando, FL Central Florida Univ. 1999 dir. Hua, K A 172 p No fulltext Not held by the library Ph.D. : Univ. Central Florida : 1999 Recent advances in multimedia processing technologies, internetworking technologies, and the World Wide Web phenomenon have resulted in a vast creation and use of digital videos in all kinds of applications ranging from entertainment, business solutions, to education. Designing efficient techniques for searching and retrieving videos over the networks becomes increasingly more important as future applications will include a huge volume of multimedia content. One practical approach to search for a video segment is as follows. Step 1: Apply an initial search to determine the set of candidate videos. Step 2: Browse the candidates to identify the relevant videos. Step 3: Search within the relevant videos for interesting video segments.
In practice, a user might have to iterate through these steps multiple times in order to locate the desired video segments. Independently, database researchers have been investigating techniques for the initial search in Step 1. Multimedia researchers have proposed several techniques for video browsing in Step 2. Computer communications researchers have been investigating video delivery techniques. I identify that searching for video data is an interactive process which involves the transmission of video data. Developing techniques for each step independently could result in a system with less performance. In this dissertation, I present a unified approach taking into account all fundamental characteristics of multimedia data. I evaluated the proposed techniques through both simulation and system implementation. The resulting system is less expensive and offers better performance. The simulation results demonstrate that the proposed technique can offer video browsing and search operations with little delay and with minimum storage overhead at the server. Client machines can handle their search operations without involving the server, making the design more scalable, which is vital for large systems deployed over the Internet. The implemented system shows that the visual quality of the browsing and the search operations is excellent. PROQUEST200009 SzGeCERN Computing and Computers THESIS notheld 1999 14 2000-09-22 00 2002-02-22 BATCH PROQUEST 9923724 PROQUEST DAI-B60/03p1177Sep1999 n 200034 THESIS eng Teshome, D California State Univ Neural Networks For Speech Recognition Of A Phonetic Language Long Beach, CA Calif. State Univ. 1999 55 p No fulltext Not held by the library Ms : California State Univ. : 1999 The goal of this thesis is to explore a possibility for a viable alternative/replacement to the Amharic typewriter. Amharic is the national language of Ethiopia. It is one of the oldest languages in the world. Actually, the root-language of Amharic, called Geez, is a descendant of Sabean, which is the direct ancestor of all Semitic languages including English. A phonetic language with 276 phonemes/characters, Amharic has posed quite a challenge to those who, like the author of this thesis, have attempted to design an easy-to-use word processor that interfaces with the conventional keyboard. With current Amharic word processing software, each character requires an average of three keystrokes, thus making typing Amharic literature quite a task. This thesis researches the feasibility of developing a PC-based speech recognition system to recognize the spoken phonemes of the Amharic language. Artificial Neural Networks are used for the recognition of spoken alphabets that form Amharic words. A neural network with feed-forward architecture is trained with a series of alphabets and is evaluated on its ability to recognize subsequent test data. The neural network used in this project is a static classification network; that is, it focuses on the frequency domain of speech while making no attempt to process temporal information. The network training procedure uses the generalized Delta Rule. The recognition system developed in this project is an Isolated Speech Recognition System. The approach taken is to recognize the spoken word character by character. This approach is expected to work well due to the phonetic nature of Amharic.
PROQUEST200009 SzGeCERN Computing and Computers THESIS notheld 1999 14 2000-09-22 00 2002-02-22 BATCH PROQUEST 1397120 PROQUEST MAI38/02p448Apr2000 n 200034 THESIS eng Topcuoglu, H R Syracuse Univ. Scheduling Task Graphs In Heterogeneous Computing Environments Syracuse, NY Syracuse Univ. 1999 dir. Hariri, S 126 p No fulltext Not held by the library Ph.D. : Syracuse Univ. : 1999 Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. An application is represented by a directed acyclic graph (DAG) whose nodes represent tasks and whose edges represent communication messages and precedence constraints among the tasks. The general task-scheduling problem maps the tasks of an application on processors and orders their execution so that task precedence requirements are satisfied and a minimum schedule length is obtained. The task-scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Although a large number of scheduling heuristics are presented in the literature, most of them target homogeneous processors. Existing algorithms for heterogeneous processors are not generally efficient because of their high complexity and the quality of their results. This thesis studies the scheduling of DAG-structured application tasks on heterogeneous domains. We develop two novel low-complexity and efficient scheduling algorithms for a bounded number of heterogeneous processors, the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The experimental work presented in this thesis shows that these algorithms significantly surpass previous approaches in terms of performance (schedule length ratio, speed-up, and frequency of best results) and cost (running time and time complexity). Our experimental work includes randomly generated graphs and graphs deduced from real applications. As part of the comparison study, a parametric graph generator is introduced to generate graphs with various characteristics. We also present a further optimization of the HEFT Algorithm by introducing alternative methods for task prioritizing and processor selection phases. A novel processor selection policy based on the earliest finish time of the critical child task improves the performance of the HEFT algorithm. Several strategies for selecting the critical child task of a given task are presented. This thesis addresses embedding the task scheduling algorithms into an application-development environment for distributed resources. An analytical model is introduced for setting the computation costs of tasks and communication costs of edges of a graph. As part of the design framework of our application development environment, a novel, two-phase, distributed scheduling algorithm is presented for scheduling an application over wide-area distributed resources. PROQUEST200009 SzGeCERN Computing and Computers THESIS notheld 1999 14 2000-09-22 00 2002-02-08 BATCH PROQUEST 9946509 PROQUEST DAI-B60/09p4718Mar2000 n 200034 THESIS spa Trespalacios-Mantilla, J H Puerto Rico Univ. Software De Apoyo Educativo Al Concepto De Funcion En Precalculo I (spanish Text) Rio Piedras Puerto Rico Univ. 1999 dir. Monroy, H 64 p No fulltext Not held by the library Ms : Univ. Puerto Rico : 1999 This thesis reports on the evaluation of the use of educational software designed to improve students' learning of the concept of mathematical function.
The students in the study were registered in Precalculus I at the University of Puerto Rico, Mayaguez Campus. The educational software allows the practice of changing the representation of a function among tabular, analytic, and graphical representations. To carry out the evaluation, 59 students were selected and divided into two groups: control and experimental. Both groups received the 'traditional' classroom lectures on the topic. The experimental group, in addition, was allowed to practice with the educational software. To measure their performance and the effect of the educational software, two tests were given: a pre-test and a post-test. The results of this study show that the experimental group improved significantly more than the control group, thus demonstrating the validity of the educational software in the learning of the concept of mathematical function. PROQUEST200009 SzGeCERN Computing and Computers THESIS notheld 1999 14 2000-09-22 00 2002-02-08 BATCH PROQUEST 1395476 PROQUEST MAI37/06p1890Dec1999 n 200034 THESIS 0612382052 fre Troudi, N Laval Univ. Systeme Multiagent Pour Les Environnements Riches En Informations (french Text) Laval Laval Univ. 1999 dir. Chaib-Draa, B 101 p No fulltext Not held by the library Msc : Universite Laval : 1999 La croissance du Web est spectaculaire, puisqu'on estime aujourd'hui a plus de 50 millions, le nombre de pages sur le Web qui ne demandent qu'a etre consultees. Un simple calcul montre qu'en consacrant ne serait-ce qu'une minute par page, il faudrait environ 95 ans pour consulter toutes ces pages. L'utilisation d'une strategie de recherche est donc vitale. Dans ce cadre, de nombreux outils de recherche ont ete proposes. Ces outils appeles souvent moteurs de recherche se sont averes aujourd'hui incapables de fournir de l'aide aux utilisateurs. Les raisons principales a cela sont les suivantes: (1) La nature ouverte de l'Internet: aucune supervision centrale ne s'applique quant au developpement d'Internet, puisque toute personne qui desire l'utiliser et/ou offrir des informations est libre de le faire; (2) La nature dynamique des informations: les informations qui ne sont pas disponibles aujourd'hui peuvent l'etre demain et inversement; (3) La nature heterogene de l'information: l'information est offerte sous plusieurs formats et de plusieurs facons, compliquant ainsi la recherche automatique de l'information. Devant ce constat, il semble important de chercher de nouvelles solutions pour aider l'utilisateur dans sa recherche d'informations. (Abstract shortened by UMI.) PROQUEST200009 SzGeCERN Computing and Computers THESIS notheld 1999 14 2000-09-22 00 2002-02-08 BATCH PROQUEST MQ38205 PROQUEST MAI37/06p1890Dec1999 n 200034 THESIS eng LBL-22304 Manes, J L Calif. Univ. Berkeley Anomalies in quantum field theory and differential geometry Berkeley, CA Lawrence Berkeley Nat. Lab. Apr 1986 76 p Thesis : Calif. Univ. Berkeley SzGeCERN General Theoretical Physics bibliography REPORT THESIS 1986 14 1990-01-29 50 2002-03-22 BATCH SLAC 1594192 h 198650n THESIS eng LBL-21916 Ingermanson, R Calif. Univ. Berkeley Accelerating the loop expansion Berkeley, CA Lawrence Berkeley Nat. Lab. Jul 1986 96 p Thesis : Calif. Univ. Berkeley montague_only SzGeCERN General Theoretical Physics bibliography REPORT THESIS 1986 14 1990-01-29 50 2002-03-22 BATCH SLAC 1594184 h 198650n THESIS eng LBL-28106 Bertsche, K J Calif. Univ. Berkeley A small low energy cyclotron for radioisotope measurements Berkeley, CA Lawrence Berkeley Nat. Lab. Nov 1989 155 p Thesis : Calif. Univ.
Berkeley SzGeCERN Accelerators and Storage Rings bibliography REPORT THESIS 14 1989 1990-02-28 50 2002-03-22 BATCH h 199010n THESIS gr-qc/0204045 eng Khalatnikov, I M L D Landau Institute for Theoretical Physics of Russian Academy of Sciences Comment about quasiisotropic solution of Einstein equations near cosmological singularity 12 Apr 2002 7 p We generalize for the case of arbitrary hydrodynamical matter the quasiisotropic solution of Einstein equations near cosmological singularity, found by Lifshitz and Khalatnikov in 1960 for the case of a radiation-dominated universe. It is shown that this solution always exists, but the dependence of terms in the quasiisotropic expansion acquires a more complicated form. LANL EDS SzGeCERN General Relativity and Cosmology Kamenshchik, A Y Alexander Kamenshchik <sasha.kamenshchik@centrovolta.it> http://invenio-software.org/download/invenio-demo-site-files/0204045.pdf http://invenio-software.org/download/invenio-demo-site-files/0204045.ps.gz CER n 200231 2002 11 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] Lifshitz E M and Khalatnikov I M 1960 149 Zh. Eksp. Teor. Fiz. 39 1960 Zh. Eksp. Teor. Fiz. 39 (1960) 149 [2] Lifshitz E M and Khalatnikov I M 1964 Sov. Phys. Uspekhi 6 495 [3] Landau L D and Lifshitz E M 1979 The Classical Theory of Fields (Pergamon Press) [4] Starobinsky A A 1986 Stochastic De Sitter (inflationary stage) in the early universe in Field Theory, Quantum Gravity and Strings, (Eds. H.J. De Vega and N. Sanchez, Springer-Verlag, Berlin) 107; Linde A D 1990 Particle Physics and Inflationary Cosmology (Harwood Academic Publishers, New York) [5] Banks T and Fischler W 2001 M theory observables for cosmological space-times hep-th/0102077 An Holographic Cosmology hep-th/0111142 [6] Perlmutter S J et al 1999 565 Astrophys. J. 517 1999 Astrophys. J. 517 (1999) 565 Riess A et al 1998 1009 Astron. J. 116 1998 Astron. J. 116 (1998) 1009 [7] Sahni V and Starobinsky A A 2000 373 Int. J. Mod. Phys., D 9 2000 Int. J. Mod. Phys. D 9 (2000) 373 gr-qc/0204046 eng Bento, M C CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 Supergravity Inflation on the Brane 12 Apr 2002 5 p We study N=1 Supergravity inflation in the context of the braneworld scenario. Particular attention is paid to the problem of the onset of inflation at sub-Planckian field values and the ensued inflationary observables. We find that the so-called $\eta$-problem encountered in supergravity inspired inflationary models can be solved in the context of the braneworld scenario, for some range of the parameters involved. Furthermore, we obtain an upper bound on the scale of the fifth dimension, $M_5 \lsim 10^{-3} M_P$, in case the inflationary potential is quadratic in the inflaton field, $\phi$. If the inflationary potential is cubic in $\phi$, consistency with observational data requires that $M_5 \simeq 9.2 \times 10^{-4} M_P$. LANL EDS SzGeCERN General Relativity and Cosmology Bertolami, O Sen, A A Maria da Conceicao Bento <bento@sirius.ist.utl.pt> http://invenio-software.org/download/invenio-demo-site-files/0204046.pdf http://invenio-software.org/download/invenio-demo-site-files/0204046.ps.gz 2002 11 2002-04-15 00 2002-04-15 BATCH n 200216 [1] R. Maartens, D. Wands, B.A. Bassett, I.P.C. Heard, Phys. Rev. D 62 (2000) 041301 [2] M.C. Bento, O. Bertolami, Phys. Rev. D 65 (2002) 063513 [3] O. Bertolami, G.G. Ross, Phys. Lett. B 183 (1987) 163 [4] J. McDonald, "F-term Hybrid Inflation, η-Problem and Extra Dimensions", hep-ph/0201016 [5] G. Dvali, Q. Shafi, R. Schaefer, Phys. Rev.
Lett. 73 (1994) 1886 [6] A.D. Linde, Phys. Lett. B 259 (1991) 38 [6] M.C. Bento, O. Bertolami, P.M. Sá, Phys. Lett. B 262 (1991) 11 [6] Mod. Phys. Lett. A 7 (1992) 911 [6] A.D. Linde, Phys. Rev. D 49 (1994) 748 [7] E.J. Copeland, A.R. Liddle, D.H. Lyth, E.D. Stewart, D. Wands, Phys. Rev. D 49 (1994) 6410 [8] L.E. Mendes, A.R. Liddle, Phys. Rev. D 62 (2000) 103511 [9] J.A. Adams, G.G. Ross, S. Sarkar, Phys. Lett. B 391 (1997) 271 [10] T. Shiromizu, K. Maeda, M. Sasaki, Phys. Rev. D 62 (2000) 024012 [11] P. Binétruy, C. Deffayet, U. Ellwanger, D. Langlois, Phys. Lett. B 477 (2000) 285 [11] E.E. Flanagan, S.H. Tye, I. Wasserman, Phys. Rev. D 62 (2000) 044039 [12] D. Langlois, R. Maartens, D. Wands, Phys. Lett. B 489 (2000) 259 [13] C.B. Netterfield, et al., "A Measurement by BOOMERANG of multiple peaks in the angular power spectrum of the cosmic microwave background", astro-ph/0104460. [14] A.T. Lee, et al., "A High Spatial Resolution Analysis of the MAXIMA-1 Cosmic Microwave Background Anisotropy Data", astro-ph/0104459. [15] C. Pryke, et al., "Cosmological Parameter Extraction from the First Season of Observations with DASI", astro-ph/0104490. [16] M.C. Bento, O. Bertolami, Phys. Lett. B 384 (1996) 98 [17] G.G. Ross, S. Sarkar, Nucl. Phys. B 461 (1995) 597 PREPRINT hep-th/0204098 eng Alhaidari, A D King Fahd University Reply to 'Comment on "Solution of the Relativistic Dirac-Morse Problem"' 11 Apr 2002 This combines a reply to the Comment [hep-th/0203067 v1] by A. N. Vaidya and R. de L. Rodrigues with an erratum to our Letter [Phys. Rev. Lett. 87, 210405 (2001)] LANL EDS SzGeCERN Particle Physics - Theory A. D. Alhaidari <haidari@kfupm.edu.sa> http://invenio-software.org/download/invenio-demo-site-files/0204098.pdf http://invenio-software.org/download/invenio-demo-site-files/0204098.ps.gz 2002 11 2002-04-15 00 2002-04-15 BATCH n 200216 [1] A. D. Alhaidari, Phys. Rev. Lett. 87 (2001) 210405 [2] A. N. Vaidya and R. de L. Rodrigues, hep-th/0203067 [3] A. D. Alhaidari, J. Phys. A 34 (2001) 9827 [3] J. Phys. A 35 (2002) 3143 [4] See for example G. A. Natanzon, Teor. Mat. Fiz. 38 (1979) 146 [4] L. E. Gendenshtein, JETP Lett. 38 (1983) 356 [4] F. Cooper, J. N. Ginocchio, and A. Khare, Phys. Rev. D 36 (1987) 2438 [4] R. Dutt, A. Khare, and U. P. Sukhatme, Am. J. Phys. 56 (1988) 163 [4] Am. J. Phys. 59 (1991) 723 [4] G. Lévai, J. Phys. A 22 (1989) 689 [4] J. Phys. A 27 (1994) 3809 [4] R. De, R. Dutt, and U. Sukhatme, J. Phys. A 25 (1992) L843 [5] See for example M. F. Manning, Phys. Rev. 48 (1935) 161 [5] A. Bhattacharjie and E. C. G. Sudarshan, Nuovo Cimento 25 (1962) 864 [5] N. K. Pak and I. Sökmen, Phys. Lett. A 103 (1984) 298 [5] H. G. Goldstein, Classical Mechanics (Addison-Wesley, Reading-MA 1986); R. Montemayor, Phys. Rev. A 36 (1987) 1562 [5] G. Junker, J. Phys. A 23 (1990) L881 [6] A. D. Alhaidari, Phys. Rev. A 65 (2002) 042109 PREPRINT hep-th/0204099 eng CU-TP-1043 Easther, R Columbia Univ. Cosmological String Gas on Orbifolds Irvington-on-Hudson, NY Columbia Univ. Dept. Phys. 12 Apr 2002 14 p It has long been known that strings wound around incontractible cycles can play a vital role in cosmology. In particular, in a spacetime with toroidal spatial hypersurfaces, the dynamics of the winding modes may help yield three large spatial dimensions. However, toroidal compactifications are phenomenologically unrealistic.
In this paper we therefore take a first step toward extending these cosmological considerations to $D$-dimensional toroidal orbifolds. We use numerical simulation to study the timescales over which "pseudo-wound" strings unwind on these orbifolds with trivial fundamental group. We show that pseudo-wound strings can persist for many ``Hubble times'' in some of these spaces, suggesting that they may affect the dynamics in the same way as genuinely wound strings. We also outline some possible extensions that include higher-dimensional wrapped branes. LANL EDS SzGeCERN Particle Physics - Theory Greene, B R Jackson, M G M. G. Jackson <markj@phys.columbia.edu> http://invenio-software.org/download/invenio-demo-site-files/0204099.pdf http://invenio-software.org/download/invenio-demo-site-files/0204099.ps.gz CER n 200231 2002 11 Easther, Richard Greene, Brian R. Jackson, Mark G. 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] R. Brandenberger and C. Vafa 391 Nucl. Phys., B 316 1989 Nucl. Phys. B 316 (1989) 391 [2] A. A. Tseytlin and C. Vafa 443 Nucl. Phys., B 372 1992 Nucl. Phys. B 372 (1992) 443 hep-th/9109048 [3] M. Sakellariadou 319 Nucl. Phys., B 468 1996 Nucl. Phys. B 468 (1996) 319 hep-th/9511075 [4] A. G. Smith and A. Vilenkin 990 Phys. Rev., D 36 1987 Phys. Rev. D 36 (1987) 990 [5] R. Brandenberger, D. A. Easson and D. Kimberly 421 Nucl. Phys., B 623 2002 Nucl. Phys. B 623 (2002) 421 hep-th/0109165 [6] B. R. Greene, A. D. Shapere, C. Vafa, and S. T. Yau 1 Nucl. Phys., B 337 1990 Nucl. Phys. B 337 (1990) 1 [7] L. Dixon, J. Harvey, C. Vafa and E. Witten 678 Nucl. Phys., B 261 1985 Nucl. Phys. B 261 (1985) 678 L. Dixon, J. Harvey, C. Vafa and E. Witten 285 Nucl. Phys., B 274 1986 Nucl. Phys. B 274 (1986) 285 [8] M. Sakellariadou and A. Vilenkin 885 Phys. Rev., D 37 1988 Phys. Rev. D 37 (1988) 885 [9] J. J. Atick and E. Witten 291 Nucl. Phys., B 310 1988 Nucl. Phys. B 310 (1988) 291 [10] D. Mitchell and N. Turok 1577 Phys. Rev. Lett. 58 1987 Phys. Rev. Lett. 58 (1987) 1577 Imperial College report, 1987 (unpublished) [11] S. Alexander, R. Brandenberger, and D. Easson 103509 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 103509 hep-th/0005212 [12] D. Easson hep-th/0110225 hep-ph/0204132 eng NUC-MINN-02-3-T Shovkovy, I A Minnesota Univ. Thermal conductivity of dense quark matter and cooling of stars Minneapolis, MN Minnesota Univ. 11 Apr 2002 9 p The thermal conductivity of the color-flavor locked phase of dense quark matter is calculated. The dominant contribution to the conductivity comes from photons and Nambu-Goldstone bosons associated with breaking of baryon number, which are trapped in the quark core. Because of their very large mean free path the conductivity is also very large. The cooling of the quark core arises mostly from the heat flux across the surface of direct contact with the nuclear matter. As the thermal conductivity of the neighboring layer is also high, the whole interior of the star should be nearly isothermal. Our results imply that the cooling time of compact stars with color-flavor locked quark cores is similar to that of ordinary neutron stars. LANL EDS SzGeCERN Particle Physics - Phenomenology Ellis, P J Igor Shovkovy <shovkovy@physics.umn.edu> http://invenio-software.org/download/invenio-demo-site-files/0204132.pdf http://invenio-software.org/download/invenio-demo-site-files/0204132.ps.gz CER n 200231 2002 11 Shovkovy, Igor A. Ellis, Paul J. 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] J.C. Collins and M.J. Perry 1353 Phys. Rev. Lett. 34 1975 Phys. Rev. Lett. 34 (1975) 1353 [2] B.
C. Barrois 390 Nucl. Phys., B 129 1977 Nucl. Phys. B 129 (1977) 390 S. C. Frautschi, in "Hadronic matter at extreme energy density", edited by N. Cabibbo and L. Sertorio (Plenum Press, 1980); D. Bailin and A. Love 325 Phys. Rep. 107 1984 Phys. Rep. 107 (1984) 325 [3] M. G. Alford, K. Rajagopal and F. Wilczek 247 Phys. Lett., B 422 1998 Phys. Lett. B 422 (1998) 247 R. Rapp, T. Schäfer, E. V. Shuryak and M. Velkovsky 53 Phys. Rev. Lett. 81 1998 Phys. Rev. Lett. 81 (1998) 53 [4] D. T. Son 094019 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 094019 R. D. Pisarski and D. H. Rischke 37 Phys. Rev. Lett. 83 1999 Phys. Rev. Lett. 83 (1999) 37 [5] T. Schafer and F. Wilczek 114033 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 114033 D. K. Hong, V. A. Miransky, I. A. Shovkovy and L. C. R. Wijewardhana 056001 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 056001 erratum 059903 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 059903 R. D. Pisarski and D. H. Rischke 051501 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 051501 [6] S. D. Hsu and M. Schwetz 211 Nucl. Phys., B 572 2000 Nucl. Phys. B 572 (2000) 211 W. E. Brown, J. T. Liu and H. C. Ren 114012 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 114012 [7] I. A. Shovkovy and L. C. R. Wijewardhana 189 Phys. Lett., B 470 1999 Phys. Lett. B 470 (1999) 189 T. Schäfer 269 Nucl. Phys., B 575 2000 Nucl. Phys. B 575 (2000) 269 [8] K. Rajagopal and F. Wilczek, arXiv hep-ph/0011333 M. G. Alford 131 Annu. Rev. Nucl. Part. Sci. 51 2001 Annu. Rev. Nucl. Part. Sci. 51 (2001) 131 [9] M. Alford and K. Rajagopal, arXiv hep-ph/0204001 [10] M. Alford, K. Rajagopal and F. Wilczek 443 Nucl. Phys., B 537 1999 Nucl. Phys. B 537 (1999) 443 [11] R. Casalbuoni and R. Gatto 111 Phys. Lett., B 464 1999 Phys. Lett. B 464 (1999) 111 [12] D. T. Son and M. A. Stephanov 074012 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 074012 erratum 059902 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 059902 [13] P. F. Bedaque and T. Schäfer 802 Nucl. Phys., A 697 2002 Nucl. Phys. A 697 (2002) 802 [14] V. A. Miransky and I. A. Shovkovy 111601 Phys. Rev. Lett. 88 2002 Phys. Rev. Lett. 88 (2002) 111601 [arXiv hep-ph/0108178 T. Schafer, D. T. Son, M. A. Stephanov, D. Toublan and J. J. Ver-baarschot 67 Phys. Lett., B 522 2001 Phys. Lett. B 522 (2001) 67 [arXiv hep-ph/0108210 [15] D. T. Son, arXiv hep-ph/0108260 [16] P. Jaikumar, M. Prakash and T. Schäfer, arXiv astro-ph/0203088 [17] E. J. Ferrer, V. P. Gusynin and V. de la Incera, arXiv: cond-matt/0203217 [18] I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series and Products (Academic, New York, 1965) 3.252.9 [19] J. J. Freeman and A. C. Anderson 5684 Phys. Rev., B 34 1986 Phys. Rev. B 34 (1986) 5684 [20] C. Kittel, Introduction to Solid State Phys.(John Wi-ley & Sons, Inc., 1960) p. 139 [21] V. P. Gusynin and I. A. Shovkovy 577 Nucl. Phys., A 700 2002 Nucl. Phys. A 700 (2002) 577 [22] I. M. Khalatnikov, An introduction to the theory of su-perfluidity, (Addison-Wesley Pub. Co., 1989) [23] P. A. Sturrock, Plasma Physics, (Cambridge University Press, 1994) [24] J. M. Lattimer, K. A. Van Riper, M. Prakash and M. Prakash 802 Astrophys. J. 425 1994 Astrophys. J. 425 (1994) 802 [25] M. G. Alford, K. Rajagopal, S. Reddy and F. Wilczek 074017 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 074017 [26] G. W. Carter and S. Reddy 103002 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 103002 [27] A. W. Steiner, M. Prakash and J. M. Lattimer 10 Phys. Lett., B 509 2001 Phys. Lett. B 509 (2001) 10 [arXiv astro-ph/0101566 [28] S. Reddy, M. Sadzikowski and M. 
Tachibana, arXiv nucl-th/0203011 [29] M. Prakash, J. M. Lattimer, J. A. Pons, A. W. Steiner and S. Reddy 364 Lect. Notes Phys. 578 2001 Lect. Notes Phys. 578 (2001) 364 [30] S. L. Shapiro and S. A. Teukolsky, Black holes, white dwarfs, and neutron stars: the physics of compact objects, (John Wiley & Sons, 1983) [31] D. Blaschke, H. Grigorian and D. N. Voskresensky, Astron. Astrophys. : 368 (2001) 561 [32] D. Page, M. Prakash, J. M. Lattimer and A. W. Steiner 2048 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 2048 [33] K. Rajagopal and F. Wilczek 3492 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 3492 [34] J. I. Kapusta, Finite-temperature field theory, (Cambridge University Press, 1989) [35] S. M. Johns, P. J. Ellis and J. M. Lattimer 1020 Astrophys. J. 473 1996 Astrophys. J. 473 (1996) 1020 hep-ph/0204133 eng Gomez, M E CFIF Lepton-Flavour Violation in SUSY with and without R-parity 12 Apr 2002 11 p We study whether the individual violation of the lepton numbers L_{e,mu,tau} in the charged sector can lead to measurable rates for BR(mu->e gamma) and BR(tau->mu gamma). We consider three different scenarios, the first one corresponds to the Minimal Supersymmetric Standard Model with non-universal soft terms. In the other two cases the violation of flavor in the leptonic charged sector is associated to the neutrino problem in models with a see-saw mechanism and with R-parity violation respectively. LANL EDS SzGeCERN Particle Physics - Phenomenology Carvalho, D F Mario E. Gomez <mgomez@gtae3.ist.utl.pt> http://invenio-software.org/download/invenio-demo-site-files/0204133.pdf http://invenio-software.org/download/invenio-demo-site-files/0204133.ps.gz CER n 200231 2002 11 2002-04-15 00 2002-04-15 BATCH TALK GIVEN BY M E G AT THE CORFU SUMMER INSTITUTE ON ELEMENTARY PARTICLE PHYSICS CORFU 2001 11 PAGES 5 FIGURES PREPRINT [1] Y. Fukuda et al., Super-Kamiokande collaboration 9 Phys. Lett., B 433 1998 Phys. Lett. B 433 (1998) 9 33 Phys. Lett., B 436 1998 Phys. Lett. B 436 (1998) 33 1562 Phys. Rev. Lett. 81 1998 Phys. Rev. Lett. 81 (1998) 1562 [2] M. Apollonio et al., Chooz collaboration 397 Phys. Lett. B 420 1998 Phys. Lett. B 420 (1998) 397 [3] H. N. Brown et al. [Muon g-2 Collaboration] 2227 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 2227 hep-ex/0102017 [4] Review of Particle Physics, D. E. Groom et al 1 Eur. Phys. J., C 15 2000 Eur. Phys. J. C 15 (2000) 1 [5] D. F. Carvalho, M. E. Gomez and S. Khalil 001 J. High Energy Phys. 0107 2001 J. High Energy Phys. 0107 (2001) 001 hep-ph/0104292 [6] D. F. Carvalho, J. R. Ellis, M. Gomez and S. Lola 323 Phys. Lett., B 515 2001 Phys. Lett. B 515 (2001) 323 [7] D. F. Carvalho, M. E. Gomez and J. C. Romao hep-ph/0202054 (to appear in Phys. Rev. D) [8] J. R. Ellis, M. E. Gomez, G. K. Leontaris, S. Lola and D. V. Nanopoulos 319 Eur. Phys. J., C 14 2000 Eur. Phys. J. C 14 (2000) 319 [9] A. Belyaev et al 715 Eur. Phys. J., C 22 2002 Eur. Phys. J. C 22 (2002) 715 A. Belyaev et al. [Kaon Physics Working Group Collaboration] hep-ph/0107046 [10] J. Hisano, T. Moroi, K. Tobe and M. Yamaguchi 2442 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 2442 J. Hisano and D. Nomura 116005 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 116005 [11] R. Barbieri and L.J. Hall 212 Phys. Lett., B 338 1994 Phys. Lett. B 338 (1994) 212 R. Barbieri et al 219 Nucl. Phys., B 445 1995 Nucl. Phys. B 445 (1995) 219 Nima Arkani-Hamed, Hsin-Chia Cheng and L.J. Hall 413 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 413 P. Ciafaloni, A. Romanino and A. Strumia 3 Nucl.
Phys., B 458 1996 Nucl. Phys. B 458 (1996) 3 M. E. Gomez and H. Goldberg 5244 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 5244 J. Hisano, D. Nomura, Y. Okada, Y. Shimizu and M. Tanaka 116010 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 116010 [12] M. E. Gomez, G. K. Leontaris, S. Lola and J. D. Vergados 116009 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 116009 G.K. Leontaris and N.D. Tracas 90 Phys. Lett., B 431 1998 Phys. Lett. B 431 (1998) 90 W. Buchmuller, D. Delepine and F. Vissani 171 Phys. Lett., B 459 1999 Phys. Lett. B 459 (1999) 171 W. Buchmuller, D. Delepine and L. T. Handoko 445 Nucl. Phys., B 576 2000 Nucl. Phys. B 576 (2000) 445 Q. Shafi and Z. Tavartkiladze 145 Phys. Lett., B 473 2000 Phys. Lett. B 473 (2000) 145 J. L. Feng, Y. Nir and Y. Shadmi 113005 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 113005 [13] A. Brignole, L. E. Ibáñez and C. Muñoz 125 Nucl. Phys., B 422 1994 Nucl. Phys. B 422 (1994) 125 [Erratum 747 Nucl. Phys., B 436 1994 Nucl. Phys. B 436 (1994) 747 ] [14] L. Ibáñez and G.G. Ross 100 Phys. Lett., B 332 1994 Phys. Lett. B 332 (1994) 100 G.K. Leontaris, S. Lola and G.G. Ross 25 Nucl. Phys., B 454 1995 Nucl. Phys. B 454 (1995) 25 S. Lola and G.G. Ross 81 Nucl. Phys., B 553 1999 Nucl. Phys. B 553 (1999) 81 [15] M. Gell-Mann, P. Ramond and R. Slansky, Proceedings of the Stony Brook Supergravity Workshop, New York, 1979, eds. P. Van Nieuwenhuizen and D. Freedman (North-Holland, Amsterdam) [16] J. A. Casas and A. Ibarra 171 Nucl. Phys., B 618 2001 Nucl. Phys. B 618 (2001) 171 S. Lavignac, I. Masina and C. A. Savoy 269 Phys. Lett., B 520 2001 Phys. Lett. B 520 (2001) 269 and hep-ph/0202086 [17] J. R. Ellis, D. V. Nanopoulos and K. A. Olive 65 Phys. Lett., B 508 2001 Phys. Lett. B 508 (2001) 65 [18] J. C. Romao, M. A. Diaz, M. Hirsch, W. Porod and J. W. Valle 071703 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 071703 113008 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 113008 [19] M. E. Gomez and K. Tamvakis 057701 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 057701 [20] M. Hirsch, W. Porod, J. W. F. Valle and J. C. Romão hep-ph/0202149 [21] L. M. Barkov et al., Research Proposal to PSI, 1999 http://www.icepp.s.u-tokyo.ac.jp/meg [22] The homepage of the PRISM project http://www-prism.kek.jp/ Y. Kuno, Lepton Flavor Violation Experiments at KEK/JAERI Joint Project of High Intensity Proton Machine, in Proceedings of Workshop of "LOWNU/NOON 2000", Tokyo, December 4-8, 2000 [23] W. Porod, M. Hirsch, J. Romão and J. W. Valle 115004 Phys. Rev., D 63 2001 Phys. Rev. D 63 (2001) 115004 hep-ph/0204134 eng Dzuba, V A University of New South Wales Precise calculation of parity nonconservation in cesium and test of the standard model 12 Apr 2002 24 p We have calculated the 6s-7s parity nonconserving (PNC) E1 transition amplitude, E_{PNC}, in cesium. We have used an improved all-order technique in the calculation of the correlations and have included all significant contributions to E_{PNC}. Our final value E_{PNC} = 0.904 (1 +/- 0.5%) \times 10^{-11} i e a_{B} (-Q_{W}/N) has half the uncertainty claimed in old calculations used for the interpretation of Cs PNC experiments. The resulting nuclear weak charge Q_{W} for Cs deviates by about 2 standard deviations from the value predicted by the standard model. LANL EDS SzGeCERN Particle Physics - Phenomenology Flambaum, V V Ginges, J S M "Jacinda S.M.
GINGES" <ginges@phys.unsw.edu.au> http://invenio-software.org/download/invenio-demo-site-files/0204134.pdf http://invenio-software.org/download/invenio-demo-site-files/0204134.ps.gz CER n 200231 2002 11 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] I.B. Khriplovich, Parity Nonconservation in Atomic Phenomena (Gordon and Breach, Philadelphia, 1991) [2] M.-A. Bouchiat and C. Bouchiat 1351 Rep. Prog. Phys. 60 1997 Rep. Prog. Phys. 60 (1997) 1351 [3] C.S. Wood et al 1759 Science 275 1997 Science 275 (1997) 1759 [4] V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov 147 Phys. Lett., A 141 1989 Phys. Lett. A 141 (1989) 147 [5] S.A. Blundell, W.R. Johnson, and J. Sapirstein 1411 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 1411 S.A. Blundell, J. Sapirstein, and W.R. Johnson 1602 Phys. Rev., D 45 1992 Phys. Rev. D 45 (1992) 1602 [6] R.J. Rafac, and C.E. Tanner, Phys. Rev., A58 1087 (1998); R.J. Rafac, C.E. Tanner, A.E. Livingston, and H.G. Berry, Phys. Rev., A60 3648 (1999) [7] S.C. Bennett, J.L. Roberts, and C.E. Wieman R16 Phys. Rev., A 59 1999 Phys. Rev. A 59 (1999) R16 [8] S.C. Bennett and C.E. Wieman 2484 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 2484 82, 4153(E) (1999); 83, 889(E) (1999) [9] R. Casalbuoni, S. De Curtis, D. Dominici, and R. Gatto 135 Phys. Lett., B 460 1999 Phys. Lett. B 460 (1999) 135 [10] J. L. Rosner 016006 Phys. Rev., D 61 1999 Phys. Rev. D 61 (1999) 016006 [11] J. Erler and P. Langacker 212 Phys. Rev. Lett. 84 2000 Phys. Rev. Lett. 84 (2000) 212 [12] A. Derevianko 1618 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 1618 [13] V.A. Dzuba, C. Harabati, W.R. Johnson, and M.S. Safronova 044103 Phys. Rev., A 63 2001 Phys. Rev. A 63 (2001) 044103 [14] M.G. Kozlov, S.G. Porsev, and I.I. Tupitsyn 3260 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 3260 [15] W.J. Marciano and A. Sirlin 552 Phys. Rev., D 27 1983 Phys. Rev. D 27 (1983) 552 W.J. Marciano and J.L. Rosner 2963 Phys. Rev. Lett. 65 1990 Phys. Rev. Lett. 65 (1990) 2963 [16] B.W. Lynn and P.G.H. Sandars 1469 J. Phys., B 27 1994 J. Phys. B 27 (1994) 1469 I. Bednyakov et al 012103 Phys. Rev., A 61 1999 Phys. Rev. A 61 (1999) 012103 [17] A.I. Milstein and O.P. Sushkov, e-print hep-ph/0109257 [18] W.R. Johnson, I. Bednyakov, and G. Soff 233001 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 233001 [19] A. Derevianko 012106 Phys. Rev., A 65 2002 Phys. Rev. A 65 (2002) 012106 [20] V.A. Dzuba and V.V. Flambaum 052101 Phys. Rev., A 62 2000 Phys. Rev. A 62 (2000) 052101 [21] V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov R4357 Phys. Rev., A 56 1997 Phys. Rev. A 56 (1997) R4357 [22] D.E. Groom et al., Euro. Phys. J. C : 15 (2000) 1 [23] V.A. Dzuba, V.V. Flambaum, P.G. Silvestrov, and O.P. Sushkov 1399 J. Phys., B 20 1987 J. Phys. B 20 (1987) 1399 [24] V.A. Dzuba, V.V. Flambaum, and O.P. Sushkov 493 Phys. Lett., A 140 1989 Phys. Lett. A 140 (1989) 493 [25] V.A. Dzuba, V.V. Flambaum, A.Ya. Kraftmakher, and O.P. Sushkov 373 Phys. Lett., A 142 1989 Phys. Lett. A 142 (1989) 373 [26] G. Fricke et al 177 At. Data Nucl. Data Tables 60 1995 At. Data Nucl. Data Tables 60 (1995) 177 [27] A. Trzci´nska et al 082501 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 082501 [28] V.B. Berestetskii, E.M. Lifshitz, and L.P. Pitaevskii, Relativistic Quantum Theory (Pergamon Press, Oxford, 1982) [29] P.J. Mohr and Y.-K. Kim 2727 Phys. Rev., A 45 1992 Phys. Rev. A 45 (1992) 2727 P.J. Mohr 4421 Phys. Rev., A 46 1992 Phys. Rev. A 46 (1992) 4421 [30] L.W. Fullerton and G.A. Rinker, Jr 1283 Phys. Rev., A 13 1976 Phys. Rev. 
A 13 (1976) 1283 [31] E.H. Wichmann and N.M. Kroll 343 Phys. Rev. 101 1956 Phys. Rev. 101 (1956) 343 [32] A.I. Milstein and V.M. Strakhovenko 1247 Zh. Eksp. Teor. Fiz. 84 1983 Zh. Eksp. Teor. Fiz. 84 (1983) 1247 [33] V.V. Flambaum and V.G. Zelevinsky 3108 Phys. Rev. Lett. 83 1999 Phys. Rev. Lett. 83 (1999) 3108 [34] C.E. Moore, Natl. Stand. Ref. Data Ser. (U.S., Natl. Bur. Stand.), 3 (1971) [35] R.J. Rafac, C.E. Tanner, A.E. Livingston, and H.G. Berry 3648 Phys. Rev., A 60 1999 Phys. Rev. A 60 (1999) 3648 [36] M.-A. Bouchiat, J. Guéna, and L. Pottier, J. Phys.(France) Lett. : 45 (1984) L523 [37] E. Arimondo, M. Inguscio, and P. Violino 31 Rev. Mod. Phys. 49 1977 Rev. Mod. Phys. 49 (1977) 31 [38] S.L. Gilbert, R.N. Watts, and C.E. Wieman 581 Phys. Rev., A 27 1983 Phys. Rev. A 27 (1983) 581 [39] R.J. Rafac and C.E. Tanner 1027 Phys. Rev., A 56 1997 Phys. Rev. A 56 (1997) 1027 [40] M.-A. Bouchiat and J. Guéna, J. Phys.(France) : 49 (1988) 2037 [41] D. Cho et al 1007 Phys. Rev., A 55 1997 Phys. Rev. A 55 (1997) 1007 [42] A.A. Vasilyev, I.M. Savukov, M.S. Safronova, and H.G. Berry, e-print physics/0112071 hep-ph/0204135 eng Bertin, V Universite Blaise Pascal Neutrino Indirect Detection of Neutralino Dark Matter in the CMSSM 12 Apr 2002 16 p We study potential signals of neutralino dark matter indirect detection by neutrino telescopes in a wide range of CMSSM parameters. We also compare with direct detection potential signals taking into account in both cases present and future experiment sensitivities. Only models with neutralino annihilation into gauge bosons can satisfy cosmological constraints and current neutrino indirect detection sensitivities. For both direct and indirect detection, only next generation experiments will be able to really test this kind of model. LANL EDS SzGeCERN Particle Physics - Phenomenology Nezri, E Orloff, J Jean Orloff <orloff@in2p3.fr> http://invenio-software.org/download/invenio-demo-site-files/0204135.pdf http://invenio-software.org/download/invenio-demo-site-files/0204135.ps.gz 2002 11 2002-04-15 00 2002-04-15 BATCH n 200216 PREPRINT hep-ph/0204136 eng FISIST-14-2001-CFIF IPPP-01-58 DCPT-01-114 Branco, G C CFIF Supersymmetry and a rationale for small CP violating phases 12 Apr 2002 28 p We analyse the CP problem in the context of a supersymmetric extension of the standard model with universal strength of Yukawa couplings. A salient feature of these models is that the CP phases are constrained to be very small by the hierarchy of the quark masses, and the pattern of CKM mixing angles. This leads to a small amount of CP violation from the usual KM mechanism and a significant contribution from supersymmetry is required. Due to the large generation mixing in some of the supersymmetric interactions, the electric dipole moments impose severe constraints on the parameter space, forcing the trilinear couplings to be factorizable in matrix form. We find that the LL mass insertions give the dominant gluino contribution to saturate epsilon_K. The chargino contributions to epsilon'/epsilon are significant and can accommodate the experimental results. In this framework, the standard model gives a negligible contribution to the CP asymmetry in B-meson decay, a_{J/\psi K_s}. However, due to supersymmetric contributions to B_d-\bar{B}_d mixing, the recent large value of a_{J/\psi K_s} can be accommodated.
LANL EDS SzGeCERN Particle Physics - Phenomenology Gomez, M E Khalil, S Teixeira, A M Shaaban Khalil <shaaban.khalil@durham.ac.uk> http://invenio-software.org/download/invenio-demo-site-files/0204136.pdf http://invenio-software.org/download/invenio-demo-site-files/0204136.ps.gz CER n 200231 2002 11 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] A.G. Cohen, D.B. Kaplan and A.E. Nelson 27 Annu. Rev. Nucl. Part. Sci. 43 1993 Annu. Rev. Nucl. Part. Sci. 43 (1993) 27 M.B. Gavela, P. Hernandez, J. Orloff, O. Pène and C. Quimbay 345 Nucl. Phys., B 430 1994 Nucl. Phys. B 430 (1994) 345 382 Nucl. Phys., B 430 1994 Nucl. Phys. B 430 (1994) 382 A.D. Dolgov hep-ph/9707419 V.A. Rubakov and M.E. Shaposhnikov, Usp. Fiz. Nauk : 166 (1996) 493 [ 461 Phys. Usp. 39 1996 Phys. Usp. 39 (1996) 461 ] [2] S. Abel, S. Khalil and O. Lebedev 151 Nucl. Phys., B 606 2001 Nucl. Phys. B 606 (2001) 151 [3] S. Pokorski, J. Rosiek and C. A. Savoy 81 Nucl. Phys., B 570 2000 Nucl. Phys. B 570 (2000) 81 [4] Recent Developments in Gauge Theories, Proceedings of NATO Advanced Study Institute (Cargèse, 1979), edited by G. 't Hooft et al., Plenum, New York (1980) [5] G. C. Branco, J. I. Silva-Marcos and M. N. Rebelo 446 Phys. Lett., B 237 1990 Phys. Lett. B 237 (1990) 446 G. C. Branco, D. Emmanuel-Costa and J. I. Silva-Marcos 107 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 107 [6] P. M. Fishbane and P. Q. Hung 2743 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 2743 [7] P. Q. Hung and M. Seco hep-ph/0111013 [8] G. C. Branco and J. I. Silva-Marcos 166 Phys. Lett., B 359 1995 Phys. Lett. B 359 (1995) 166 [9] M. V. Romalis, W. C. Griffith and E. N. Fortson 2505 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 2505 J. P. Jacobs et al 3782 Phys. Rev. Lett. 71 1993 Phys. Rev. Lett. 71 (1993) 3782 [10] BABAR Collaboration, B. Aubert et al 091801 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 091801 [11] BELLE Collaboration, K. Abe et al 091802 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 091802 [12] G. Eyal and Y. Nir 21 Nucl. Phys., B 528 1998 Nucl. Phys. B 528 (1998) 21 and references therein [13] G. C. Branco, F. Cagarrinho and F. Krüger 224 Phys. Lett., B 459 1999 Phys. Lett. B 459 (1999) 224 [14] H. Fritzsch and J. Plankl 584 Phys. Rev., D 49 1994 Phys. Rev. D 49 (1994) 584 H. Fritzsch and P. Minkowski 393 Nuovo Cimento 30 1975 Nuovo Cimento 30 (1975) 393 H. Fritzsch and D. Jackson 365 Phys. Lett., B 66 1977 Phys. Lett. B 66 (1977) 365 P. Kaus and S. Meshkov 1863 Phys. Rev., D 42 1990 Phys. Rev. D 42 (1990) 1863 [15] H. Fusaoka and Y. Koide 3986 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 3986 [16] See for example, V. Barger, M. S. Berger and P. Ohmann 1093 Phys. Rev., D 47 1993 Phys. Rev. D 47 (1993) 1093 4908 Phys. Rev., D 49 1994 Phys. Rev. D 49 (1994) 4908 [17] G. C. Branco and J. I. Silva-Marcos 390 Phys. Lett., B 331 1994 Phys. Lett. B 331 (1994) 390 [18] Particle Data Group 1 Eur. Phys. J., C 15 2000 Eur. Phys. J. C 15 (2000) 1 [19] C. Jarlskog 1039 Phys. Rev. Lett. 55 1985 Phys. Rev. Lett. 55 (1985) 1039 491 Z. Phys., C 29 1985 Z. Phys. C 29 (1985) 491 [20] M. Dugan, B. Grinstein and L. J. Hall 413 Nucl. Phys., B 255 1985 Nucl. Phys. B 255 (1985) 413 [21] D. A. Demir, A. Masiero and O. Vives 230 Phys. Lett., B 479 2000 Phys. Lett. B 479 (2000) 230 S. M. Barr and S. Khalil 035005 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 035005 [22] S. A. Abel and J. M. Frère 1632 Phys. Rev., D 55 1997 Phys. Rev. D 55 (1997) 1632 S. Khalil, T. Kobayashi and A. Masiero 075003 Phys. Rev., D 60 1999 Phys. Rev.
D 60 (1999) 075003 S. Khalil and T. Kobayashi 341 Phys. Lett., B 460 1999 Phys. Lett. B 460 (1999) 341 [23] S. Khalil, T. Kobayashi and O. Vives 275 Nucl. Phys., B 580 2000 Nucl. Phys. B 580 (2000) 275 T. Kobayashi and O. Vives 323 Phys. Lett., B 406 2001 Phys. Lett. B 406 (2001) 323 [24] S. Abel, D. Bailin, S. Khalil and O. Lebedev 241 Phys. Lett., B 504 2001 Phys. Lett. B 504 (2001) 241 [25] A. Masiero, M. Piai, A. Romanino and L. Silvestrini 075005 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 075005 and references therein [26] P. G. Harris et al 904 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 904 [27] M. Ciuchini et al 008 J. High Energy Phys. 10 1998 J. High Energy Phys. 10 (1998) 008 [28] S. Khalil and O. Lebedev 387 Phys. Lett., B 515 2001 Phys. Lett. B 515 (2001) 387 [29] A. J. Buras hep-ph/0101336 [30] See, for example, G. C. Branco, L. Lavoura and J. P. Silva, CP Violation, International Series of Monographs on Physics (103), Oxford University Press, Clarendon (1999) [31] F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini 321 Nucl. Phys., B 477 1996 Nucl. Phys. B 477 (1996) 321 [32] V. Fanti et al 335 Phys. Lett., B 465 1999 Phys. Lett. B 465 (1999) 335 [33] T. Gershon (NA48) hep-ex/0101034 [34] A. J. Buras, M. Jamin and M. E. Lautenbacher 209 Nucl. Phys., B 408 1993 Nucl. Phys. B 408 (1993) 209 [35] S. Bertolini, M. Fabbrichesi and E. Gabrielli 136 Phys. Lett., B 327 1994 Phys. Lett. B 327 (1994) 136 [36] G. Colangelo and G. Isidori 009 J. High Energy Phys. 09 1998 J. High Energy Phys. 09 (1998) 009 A. Buras, G. Colangelo, G. Isidori, A. Romanino and L. Silvestrini 3 Nucl. Phys., B 566 2000 Nucl. Phys. B 566 (2000) 3 [37] OPAL Collaboration, K. Ackerstaff et al 379 Eur. Phys. J., C 5 1998 Eur. Phys. J. C 5 (1998) 379 CDF Collaboration, T. Affolder et al 072005 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 072005 CDF Collaboration, C. A. Blocker, Proceedings of 3rd Workshop on Physics and Detectors for DAPHNE (DAPHNE 99), Frascati, Italy, 16-19 Nov 1999; ALEPH Collaboration, R. Barate et al 259 Phys. Lett., B 492 2000 Phys. Lett. B 492 (2000) 259 [38] S. Bertolini, F. Borzumati, A. Masiero and G. Ridolfi 591 Nucl. Phys., B 353 1991 Nucl. Phys. B 353 (1991) 591 [39] CLEO Collaboration, S. Ahmed et al, CLEO-CONF-99-10 hep-ex/9908022 [40] E. Gabrielli, S. Khalil and E. Torrente-Lujan 3 Nucl. Phys., B 594 2001 Nucl. Phys. B 594 (2001) 3 hep-ph/0204137 eng DO-TH-02-05 Paschos, E A Univ. Dortmund Leptogenesis with Majorana neutrinos Dortmund Dortmund Univ. Inst. Phys. 12 Apr 2002 6 p I review the origin of the lepton asymmetry which is converted to a baryon excess at the electroweak scale. This scenario becomes more attractive if we can relate it to other physical phenomena. For this reason I elaborate on the conditions of the early universe which lead to a sizable lepton asymmetry. Then I describe methods and models which relate the low energy parameters of neutrinos to the high energy (cosmological) CP-violation and to neutrinoless double beta-decay. LANL EDS SzGeCERN Particle Physics - Phenomenology Emmanuel A. Paschos <paschos@hal1.physik.uni-dortmund.de> http://invenio-software.org/download/invenio-demo-site-files/0204137.pdf http://invenio-software.org/download/invenio-demo-site-files/0204137.ps.gz CER n 200231 11 2002 2002-04-15 00 2002-04-15 BATCH CONTRIBUTED TO 1ST WORKSHOP ON NEUTRINO - NUCLEUS INTERACTIONS IN THE FEW GEV REGION (NUINT01) TSUKUBA JAPAN 13-16 DEC 2001 6 PAGES 6 FIGURES PREPRINT 1. Fukugita and Yanagida 45 Phys. Lett., B 174 1986 Phys. Lett.
B 174 (1986) 45 2. M. Flanz, E.A. Paschos and U. Sarkar 248 Phys. Lett., B 345 1995 Phys. Lett. B 345 (1995) 248 3. M. Luty 445 Phys. Rev., D 45 1992 Phys. Rev. D 45 (1992) 445 4. M. Flanz, E.A. Paschos, U. Sarkar and J. Weiss 693 Phys. Lett., B 389 1996 Phys. Lett. B 389 (1996) 693 M. Flanz and E.A. Paschos 113009 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 113009 5. A. Pilaftsis 5431 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 5431 6. W. Buchmüller and M. Plümacher 354 Phys. Lett., B 431 1998 Phys. Lett. B 431 (1998) 354 7. L. Covi, E. Roulet and F. Vissani 169 Phys. Lett., B 384 1996 Phys. Lett. B 384 (1996) 169 8. E.K. Akhmedov, V.A. Rubakov and A.Y. Smirnov 1359 Phys. Rev. Lett. 81 1998 Phys. Rev. Lett. 81 (1998) 1359 9. S.Y. Khlebnikov and M.E. Shaposhnikov 885 Nucl. Phys., B 308 1988 Nucl. Phys. B 308 (1988) 885 and references therein 10. J. Ellis, S. Lola and D.V. Nanopoulos 87 Phys. Lett., B 452 1999 Phys. Lett. B 452 (1999) 87 11. G. Lazarides and N. Vlachos 482 Phys. Lett., B 459 1999 Phys. Lett. B 459 (1999) 482 12. M.S. Berger and B. Brahmachari 073009 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 073009 13. K. Kang, S.K. Kang and U. Sarkar 391 Phys. Lett., B 486 2000 Phys. Lett. B 486 (2000) 391 14. H. Goldberg 389 Phys. Lett., B 474 2000 Phys. Lett. B 474 (2000) 389 15. M. Hirsch and S.F. King 113005 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 113005 16. H. Nielsen and Y. Takanishi 241 Phys. Lett., B 507 2001 Phys. Lett. B 507 (2001) 241 17. W. Buchmüller and D. Wyler 291 Phys. Lett., B 521 2001 Phys. Lett. B 521 (2001) 291 18. Falcone and Tramontano 1 Phys. Lett., B 506 2001 Phys. Lett. B 506 (2001) 1 F. Buccella et al 241 Phys. Lett., B 524 2002 Phys. Lett. B 524 (2002) 241 19. G.C. Branco et al., Nucl. Phys., B617 (2001) 475 20. A.S. Joshipura, E.A. Paschos and W. Rodejohann 227 Nucl. Phys., B 611 2001 Nucl. Phys. B 611 (2001) 227 and 29 J. High Energy Phys. 0108 2001 J. High Energy Phys. 0108 (2001) 29 21. mentioned by H.V. Klapdor-Kleingrothaus, in hep-ph/0103062 hep-ph/0204138 eng DO-TH-02-06 Paschos, E A Univ. Dortmund Neutrino Interactions at Low and Medium Energies Dortmund Dortmund Univ. Inst. Phys. 12 Apr 2002 9 p We discuss the calculations for several neutrino induced reactions from low energies to the GeV region. Special attention is paid to nuclear corrections when the targets are medium or heavy nuclei. Finally, we present several ratios of neutral to charged current reactions whose values on isoscalar targets can be estimated accurately. The ratios are useful for investigating neutrino oscillations in Long Baseline experiments. LANL EDS SzGeCERN Particle Physics - Phenomenology Emmanuel A. Paschos <paschos@hal1.physik.uni-dortmund.de> http://invenio-software.org/download/invenio-demo-site-files/0204138.pdf http://invenio-software.org/download/invenio-demo-site-files/0204138.ps.gz CER n 200231 11 2002 2002-04-15 00 2002-04-15 BATCH CONTRIBUTED TO 1ST WORKSHOP ON NEUTRINO - NUCLEUS INTERACTIONS IN THE FEW GEV REGION (NUINT01) TSUKUBA JAPAN 13-16 DEC 2001 9 PAGES 15 FIGURES PREPRINT 1. E. A. Paschos, L. Pasquali and J. Y. Yu 263 Nucl. Phys., B 588 2000 Nucl. Phys. B 588 (2000) 263 2. E. A. Paschos and J. Y. Yu 033002 Phys. Rev., D 65 2002 Phys. Rev. D 65 (2002) 033002 3. C. Albright and C. Jarlskog 467 Nucl. Phys., B 84 1975 Nucl. Phys. B 84 (1975) 467 4. N. J. Baker et al 617 Phys. Rev., D 25 1982 Phys. Rev. D 25 (1982) 617 5. M. Hirai, S. Kumano and M. Miyama 034003 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 034003 6. K. J. Eskola, V. J. Kolhinen and P.
V. Ruuskanen 351 Nucl. Phys., B 535 1998 Nucl. Phys. B 535 (1998) 351 K. J. Eskola, V. J. Kolhinen, P. V. Ruuskanen and C. A. Salgado 645 Nucl. Phys., A 661 1999 Nucl. Phys. A 661 (1999) 645 7. See Figure 1 in Ref. [2] 8. P. A. Schreiner and F. V. von Hippel 333 Nucl. Phys., B 58 1973 Nucl. Phys. B 58 (1973) 333 9. S. L. Adler, S. Nussinov and E. A. Paschos 2125 Phys. Rev., D 9 1974 Phys. Rev. D 9 (1974) 2125 10. S. L. Adler 2644 Phys. Rev., D 12 1975 Phys. Rev. D 12 (1975) 2644 11. P. Musset and J. P. Vialle 1 Phys. Rep. 39 1978 Phys. Rep. 39 (1978) 1 12. H. Kluttig, J. G. Morfin and W. Van Dominick 446 Phys. Lett., B 71 1977 Phys. Lett. B 71 (1977) 446 13. R. Merenyi et al 743 Phys. Rev., D 45 1982 Phys. Rev. D 45 (1982) 743 14. S. K. Singh, M. T. Vicente-Vacas and E. Oset 23 Phys. Lett., B 416 1998 Phys. Lett. B 416 (1998) 23 15. E. A. Paschos and L. Wolfenstein 91 Phys. Rev., D 7 1973 Phys. Rev. D 7 (1973) 91 see equation (15) 16. E. A. Paschos, Precise Ratios for Neutrino-Nucleon and Neutrino-Nucleus Interactions, Dortmund preprint DO-TH 02/02; hep-ph/0204090 17. G. Gounaris, E. A. Paschos and P. Porfyriadis 63 Phys. Lett., B 525 2002 Phys. Lett. B 525 (2002) 63 18. J. Bouchez and I. Giomataris, CEA/Saclay internal note, DAPNIA/01-07, June 2001 hep-ph/0204139 eng Van Beveren, E University of Coimbra Remarks on the f_0(400-1200) scalar meson as the dynamically generated chiral partner of the pion 12 Apr 2002 15 p The quark-level linear sigma model is revisited, in particular concerning the identification of the f_0(400-1200) (or \sigma(600)) scalar meson as the chiral partner of the pion. We demonstrate the predictive power of the linear sigma model through the pi-pi and pi-N s-wave scattering lengths, as well as several electromagnetic, weak, and strong decays of pseudoscalar and vector mesons. The ease with which the data for these observables are reproduced in the linear sigma model lends credit to the necessity to include the sigma as a fundamental q\bar{q} degree of freedom, to be contrasted with approaches like chiral perturbation theory or the confining NJL model of Shakin and Wang. LANL EDS SzGeCERN Particle Physics - Phenomenology Kleefeld, F Rupp, G Scadron, M D George Rupp <george@ajax.ist.utl.pt> http://invenio-software.org/download/invenio-demo-site-files/0204139.pdf http://invenio-software.org/download/invenio-demo-site-files/0204139.ps.gz CER n 200231 2002 11 Beveren, Eef van Kleefeld, Frieder Rupp, George Scadron, Michael D. 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] D. E. Groom et al. [Particle Data Group Collaboration] 1 Eur. Phys. J., C 15 2000 Eur. Phys. J. C 15 (2000) 1 [2] N. Isgur and J. Speth 2332 Phys. Rev. Lett. 77 1996 Phys. Rev. Lett. 77 (1996) 2332 [3] N. A. Törnqvist and M. Roos 1575 Phys. Rev. Lett. 76 1996 Phys. Rev. Lett. 76 (1996) 1575 hep-ph/9511210 [4] M. Harada, F. Sannino, and J. Schechter 1603 Phys. Rev. Lett. 78 1997 Phys. Rev. Lett. 78 (1997) 1603 hep-ph/9609428 [5] E. van Beveren, T. A. Rijken, K. Metzger, C. Dullemond, G. Rupp, and J. E. Ribeiro 615 Z. Phys., C 30 1986 Z. Phys. C 30 (1986) 615 Eef van Beveren and George Rupp 469 Eur. Phys. J., C 10 1999 Eur. Phys. J. C 10 (1999) 469 hep-ph/9806246 [6] M. Boglione and M. R. Pennington hep-ph/0203149 [7] M. Gell-Mann and M. Lévy 705 Nuovo Cimento 16 1960 Nuovo Cimento 16 (1960) 705 also see V. de Alfaro, S. Fubini, G. Furlan, and C. Rossetti, in Currents in Hadron Physics, North-Holland Publ., Amsterdam, Chap. 5 (1973) [8] R. Delbourgo and M. D. Scadron 251 Mod. Phys. Lett., A 10 1995 Mod.
Phys. Lett. A 10 (1995) 251 hep-ph/9910242 657 Int. J. Mod. Phys., A 13 1998 Int. J. Mod. Phys. A 13 (1998) 657 hep-ph/9807504 [9] Y. Nambu and G. Jona-Lasinio 345 Phys. Rev. 122 1961 Phys. Rev. 122 (1961) 345 [10] C. M. Shakin and Huangsheng Wang 094020 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 094020 [11] C. M. Shakin and Huangsheng Wang 014019 Phys. Rev., D 63 2000 Phys. Rev. D 63 (2000) 014019 [12] G. Rupp, E. van Beveren, and M. D. Scadron 078501 Phys. Rev., D 65 2002 Phys. Rev. D 65 (2002) 078501 hep-ph/0104087 [13] Eef van Beveren, George Rupp, and Michael D. Scadron 300 Phys. Lett., B 495 2000 Phys. Lett. B 495 (2000) 300 [Erratum 365 Phys. Lett., B 509 2000 Phys. Lett. B 509 (2000) 365 ] hep-ph/0009265 Frieder Kleefeld, Eef van Beveren, George Rupp, and Michael D. Scadron hep-ph/0109158 [14] M. D. Scadron 239 Phys. Rev., D 26 1982 Phys. Rev. D 26 (1982) 239 2076 Phys. Rev., D 29 1984 Phys. Rev. D 29 (1984) 2076 669 Mod. Phys. Lett., A 7 1992 Mod. Phys. Lett. A 7 (1992) 669 [15] M. Lévy 23 Nuovo Cimento, A 52 1967 Nuovo Cimento, A 52 (1967) 23 S. Gasiorowicz and D. A. Geffen 531 Rev. Mod. Phys. 41 1969 Rev. Mod. Phys. 41 (1969) 531 J. Schechter and Y. Ueda 2874 Phys. Rev., D 3 1971 Phys. Rev. D 3 (1971) 2874 [Erratum 987 Phys. Rev., D 8 1973 Phys. Rev. D 8 (1973) 987 ] [16] T. Eguchi 2755 Phys. Rev., D 14 1976 Phys. Rev. D 14 (1976) 2755 T. Eguchi 611 Phys. Rev., D 17 1978 Phys. Rev. D 17 (1978) 611 [17] The once-subtracted dispersion-relation result hep-ph/0204140 eng CERN-TH-2002-069 RM3-TH-02-4 Aglietti, U CERN - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 A new model-independent way of extracting |V_ub/V_cb| Geneva CERN 12 Apr 2002 20 p The ratio between the photon spectrum in B -> X_s gamma and the differential semileptonic rate wrt the hadronic variable M_X/E_X is a short-distance quantity calculable in perturbation theory and independent of the Fermi motion of the b quark in the B meson. We present an NLO analysis of this ratio and show how it can be used to determine |V_ub/V_cb| independently of any model for the shape function. We also discuss how this relation can be used to test the validity of the shape-function theory on the data. LANL EDS SzGeCERN Particle Physics - Phenomenology Ciuchini, M Gambino, P Paolo Gambino <paolo.gambino@cern.ch> http://invenio-software.org/download/invenio-demo-site-files/0204140.pdf http://invenio-software.org/download/invenio-demo-site-files/0204140.ps.gz CER n 200231 2002 11 TH CERN 2002-04-15 00 2002-04-15 BATCH Aglietti, Ugo Ciuchini, Marco Gambino, Paolo PREPRINT [1] I. I. Bigi, M. A. Shifman, N. G. Uraltsev and A. I. Vainshtein 496 Phys. Rev. Lett. 71 1993 Phys. Rev. Lett. 71 (1993) 496 [arXiv hep-ph/9304225 and 2467 Int. J. Mod. Phys., A 9 1994 Int. J. Mod. Phys. A 9 (1994) 2467 [arXiv hep-ph/9312359 [2] M. Neubert 4623 Phys. Rev., D 49 1994 Phys. Rev. D 49 (1994) 4623 [arXiv hep-ph/9312311 [3] R. Akhoury and I. Z. Rothstein 2349 Phys. Rev., D 54 1996 Phys. Rev. D 54 (1996) 2349 [arXiv hep-ph/9512303 A. K. Leibovich and I. Z. Rothstein 074006 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 074006 [arXiv hep-ph/9907391 A. K. Leibovich, I. Low and I. Z. Rothstein 053006 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 053006 [arXiv hep-ph/9909404 A. K. Leibovich, I. Low and I. Z. Rothstein 86 Phys. Lett., B 486 2000 Phys. Lett. B 486 (2000) 86 [arXiv hep-ph/0005124 M. Neubert 88 Phys. Lett., B 513 2001 Phys. Lett. B 513 (2001) 88 [arXiv hep-ph/0104280 A. K. Leibovich, I. Low and I. Z. Rothstein 83 Phys.
Lett., B 513 2001 Phys. Lett. B 513 (2001) 83 [arXiv hep-ph/0105066 [4] V. D. Barger, C. S. Kim and R. J. Phillips 629 Phys. Lett., B 251 1990 Phys. Lett. B 251 (1990) 629 A. F. Falk, Z. Ligeti and M. B. Wise 225 Phys. Lett., B 406 1997 Phys. Lett. B 406 (1997) 225 [arXiv hep-ph/9705235 I. I. Bigi, R. D. Dikeman and N. Uraltsev 453 Eur. Phys. J., C 4 1998 Eur. Phys. J. C 4 (1998) 453 [arXiv hep-ph/9706520 [5] R. Barate et al. (ALEPH Coll.) 555 Eur. Phys. J., C 6 1999 Eur. Phys. J. C 6 (1999) 555 M. Acciarri et al. (L3 Coll.), Phys. Lett., B436 (1998); P. Abreu et al. (DELPHI Coll.) 14 Phys. Lett., B 478 2000 Phys. Lett. B 478 (2000) 14 G. Abbiendi et al. (OPAL Coll.) 399 Eur. Phys. J., C 21 2001 Eur. Phys. J. C 21 (2001) 399 [6] A. Bornheim [CLEO Coll.], arXiv hep-ex/0202019 [7] C. W. Bauer, Z. Ligeti and M. E. Luke 395 Phys. Lett., B 479 2000 Phys. Lett. B 479 (2000) 395 [arXiv hep-ph/0002161 [8] C. W. Bauer, Z. Ligeti and M. E. Luke 113004 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 113004 [arXiv hep-ph/0107074 [9] U. Aglietti, arXiv hep-ph/0010251 [10] U. Aglietti 308 Phys. Lett., B 515 2001 Phys. Lett. B 515 (2001) 308 [arXiv hep-ph/0103002 [11] U. Aglietti 293 Nucl. Phys., B 610 2001 Nucl. Phys. B 610 (2001) 293 [arXiv hep-ph/0104020 [12] A. Ali and E. Pietarinen 519 Nucl. Phys., B 154 1979 Nucl. Phys. B 154 (1979) 519 [13] G. Altarelli, N. Cabibbo, G. Corbò, L. Maiani and G. Martinelli 365 Nucl. Phys., B 208 1982 Nucl. Phys. B 208 (1982) 365 [14] R. L. Jaffe and L. Randall 79 Nucl. Phys., B 412 1994 Nucl. Phys. B 412 (1994) 79 [arXiv hep-ph/9306201 [15] M. Neubert 3392 Phys. Rev., D 49 1994 Phys. Rev. D 49 (1994) 3392 [arXiv hep-ph/9311325 [16] U. Aglietti, M. Ciuchini, G. Corbò, E. Franco, G. Martinelli and L. Silvestrini 411 Phys. Lett., B 432 1998 Phys. Lett. B 432 (1998) 411 [arXiv hep-ph/9804416 [17] S. Catani, L. Trentadue, G. Turnock and B. R. Webber 3 Nucl. Phys., B 407 1993 Nucl. Phys. B 407 (1993) 3 [18] V. Lubicz 116 Nucl. Phys. B, Proc. Suppl. 94 2001 Nucl. Phys. B, Proc. Suppl. 94 (2001) 116 [arXiv hep-lat/0012003 [19] A. J. Buras, M. Jamin, M. E. Lautenbacher and P. H. Weisz 37 Nucl. Phys., B 400 1993 Nucl. Phys. B 400 (1993) 37 [arXiv hep-ph/9211304 M. Ciuchini, E. Franco, G. Martinelli and L. Reina 403 Nucl. Phys., B 415 1994 Nucl. Phys. B 415 (1994) 403 [arXiv hep-ph/9304257 [20] K. Chetyrkin, M. Misiak and M. Munz 206 Phys. Lett., B 400 1997 Phys. Lett. B 400 (1997) 206 [Erratum 414 Phys. Lett., B 425 1997 Phys. Lett. B 425 (1997) 414 ] [arXiv hep-ph/9612313 and refs. therein [21] P. Gambino and M. Misiak 338 Nucl. Phys., B 611 2001 Nucl. Phys. B 611 (2001) 338 [arXiv hep-ph/0104034 [22] M.B. Voloshin 275 Phys. Lett., B 397 1997 Phys. Lett. B 397 (1997) 275 A. Khodjamirian et al 167 Phys. Lett., B 402 1997 Phys. Lett. B 402 (1997) 167 Z. Ligeti, L. Randall and M.B. Wise 178 Phys. Lett., B 402 1997 Phys. Lett. B 402 (1997) 178 A.K. Grant, A.G. Morgan, S. Nussinov and R.D. Peccei 3151 Phys. Rev., D 56 1997 Phys. Rev. D 56 (1997) 3151 G. Buchalla, G. Isidori and S.J. Rey 594 Nucl. Phys., B 511 1998 Nucl. Phys. B 511 (1998) 594 [23] P. Gambino and U. Haisch 020 J. High Energy Phys. 0110 2001 J. High Energy Phys. 0110 (2001) 020 [arXiv hep-ph/0109058 and 001 J. High Energy Phys. 0009 2000 J. High Energy Phys. 0009 (2000) 001 [arXiv hep-ph/0007259 [24] F. De Fazio and M. Neubert 017 J. High Energy Phys. 9906 1999 J. High Energy Phys. 9906 (1999) 017 [arXiv hep-ph/9905351 [25] U.
Aglietti, arXiv hep-ph/0105168 to appear in the Proceedings of "XIII Convegno sulla Fisica al LEP (LEPTRE 2001)", Rome (Italy), 18-20 April 2001 [26] T. van Ritbergen 353 Phys. Lett., B 454 1999 Phys. Lett. B 454 (1999) 353 [27] C. W. Bauer, M. E. Luke and T. Mannel, arXiv hep-ph/0102089 [28] The Particle Data Group 1 Eur. Phys. J., C 15 2000 Eur. Phys. J. C 15 (2000) 1 [29] M. Ciuchini et al 013 J. High Energy Phys. 0107 2001 J. High Energy Phys. 0107 (2001) 013 [arXiv hep-ph/0012308 [30] S. Chen et al., CLEO Coll 251807 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 251807 [31] N. Pott 938 Phys. Rev. D 54 1996 Phys. Rev., D 54 (1996) 938 [arXiv hep-ph/9512252 [32] C. Greub, T. Hurth and D. Wyler 3350 Phys. Rev. D 54 1996 Phys. Rev., D 54 (1996) 3350 [arXiv hep-ph/9603404 A. J. Buras, A. Czarnecki, M. Misiak and J. Urban 488 Nucl. Phys. B 611 2001 Nucl. Phys., B 611 (2001) 488 [arXiv hep-ph/0105160 [33] A. J. Buras, A. Czarnecki, M. Misiak and J. Urban, arXiv hep-ph/0203135 hep-ph/0204141 eng Appelquist, T Yale University Neutrino Masses in Theories with Dynamical Electroweak Symmetry Breaking 12 Apr 2002 4 p We address the problem of accounting for light neutrino masses in theories with dynamical electroweak symmetry breaking. As a possible solution, we embed (extended) technicolor in a theory in which a $|\Delta L|=2$ neutrino condensate forms at a scale $\Lambda_N \gsim 10^{11}$ GeV, and produces acceptably small (Majorana) neutrino masses. We present an explicit model illustrating this mechanism. LANL EDS SzGeCERN Particle Physics - Phenomenology Shrock, R Robert Shrock <shrock@insti.physics.sunysb.edu> http://invenio-software.org/download/invenio-demo-site-files/0204141.pdf http://invenio-software.org/download/invenio-demo-site-files/0204141.ps.gz CER n 200231 2002 11 Appelquist, Thomas Shrock, Robert 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] S. Fukuda et al 5651 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 5651 S. Fukuda et al. ibid, 5656 (2001) (SuperK) and Q.R. Ahmad et al 071301 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 071301 (SNO). Other data is from the Homestake, Kamiokande, GALLEX, and SAGE experiments [2] Y. Fukuda et al 9 Phys. Lett., B 433 1998 Phys. Lett. B 433 (1998) 9 1562 Phys. Rev. Lett. 81 1998 Phys. Rev. Lett. 81 (1998) 1562 2644 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 2644 185 Phys. Lett., B 467 1999 Phys. Lett. B 467 (1999) 185 (SuperK) and data from Kamiokande, IMB, Soudan-2, and MACRO experiments. The data, which is consistent with results from K2K, indicates that |m(\nu_3)^2 - m(\nu_2)^2| \simeq |m(\nu_3)^2 - m(\nu_1)^2| \simeq 2.5 \times 10^{-3} eV^2. With a hierarchical mass assumption, one infers m(\nu_3) \simeq \sqrt{\Delta m^2_{32}} \simeq 0.05 eV [3] S. Weinberg 1277 Phys. Rev., D 19 1979 Phys. Rev. D 19 (1979) 1277 L. Susskind 2619 Phys. Rev., D 20 1979 Phys. Rev. D 20 (1979) 2619 E. Eichten and K. Lane 125 Phys. Lett., B 90 1980 Phys. Lett. B 90 (1980) 125 [4] P. Sikivie, L. Susskind, M. Voloshin, and V. Zakharov 189 Nucl. Phys., B 173 1980 Nucl. Phys. B 173 (1980) 189 [5] B. Holdom 301 Phys. Lett., B 150 1985 Phys. Lett. B 150 (1985) 301 K. Yamawaki, M. Bando, and K. Matumoto 1335 Phys. Rev. Lett. 56 1986 Phys. Rev. Lett. 56 (1986) 1335 T. Appelquist, D. Karabali, and L. Wijewardhana 957 Phys. Rev. Lett. 57 1986 Phys. Rev. Lett. 57 (1986) 957 T. Appelquist and L.C.R. Wijewardhana 774 Phys. Rev., D 35 1987 Phys. Rev. D 35 (1987) 774 568 Phys. Rev., D 36 1987 Phys. Rev. D 36 (1987) 568 [6] B. Holdom 1637 Phys. Rev., D 23 1981 Phys. Rev. D 23 (1981) 1637 169 Phys. Lett., B 246 1990 Phys.
Lett. B 246 (1990) 169 [7] T. Appelquist and J. Terning 139 Phys. Lett., B 315 1993 Phys. Lett. B 315 (1993) 139 T. Appelquist, J. Terning, and L. Wijewardhana 1214 Phys. Rev. Lett. 77 1996 Phys. Rev. Lett. 77 (1996) 1214 2767 Phys. Rev. Lett. 79 1997 Phys. Rev. Lett. 79 (1997) 2767 T. Appelquist, N. Evans, S. Selipsky 145 Phys. Lett., B 374 1996 Phys. Lett. B 374 (1996) 145 T. Appelquist and S. Selipsky 364 Phys. Lett., B 400 1997 Phys. Lett. B 400 (1997) 364 [8] T. Appelquist, J. Terning 2116 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 2116 [9] Recent reviews include R. Chivukula hep-ph/0011264 K. Lane hep-ph/0202255 C. Hill and E. Simmons hep-ph/0203079 [10] M. Gell-Mann, P. Ramond, R. Slansky, in Supergravity (North Holland, Amsterdam, 1979), p. 315; T. Yanagida in proceedings of Workshop on Unified Theory and Baryon Number in the Universe, KEK, 1979 [11] Although we require our model to yield a small S, a re-analysis of precision electroweak data is called for in view of the value of \sin^2\theta_W reported in G. Zeller et al 091802 Phys. Rev. Lett. 88 2002 Phys. Rev. Lett. 88 (2002) 091802 [12] For a vectorial SU(N) theory with N_f fermions in the fundamental representation, an IRFP occurs if N_f > N_{f,min,IR}, where, perturbatively, N_{f,min,IR} \simeq 34N^3/(13N^2 - 3). At this IRFP, using the criticality condition [13], the theory is expected to exist in a confining phase with chiral symmetry breaking if N_{f,min,IR} < N_f < N_{f,con}, where N_{f,con} \simeq (2/5)N(50N^2 - 33)/(5N^2 - 3), and in a conformal phase if N_{f,con} < N_f < 11N/2. For N = 2 we have N_{f,min,IR} \simeq 5 and N_{f,con} \simeq 8, respectively. For attempts at lattice measurements, see R. Mawhinney 57 Nucl. Phys. B, Proc. Suppl. 83 2000 Nucl. Phys. B, Proc. Suppl. 83 (2000) 57 [13] In the approximation of a single-gauge-boson exchange, the critical coupling for the condensation of fermion representations R_1 \times R_2 \to R_c is (3/(2\pi)) \alpha \Delta C_2 = 1, where \Delta C_2 = [C_2(R_1) + C_2(R_2) - C_2(R_c)], and C_2(R) is the quadratic Casimir invariant. Instanton contributions are also important [7] [14] J. Gasser, H. Leutwyler 77 Phys. Rep. 87 1982 Phys. Rep. 87 (1982) 77 H. Leutwyler, in 108 Nucl. Phys. B, Proc. Suppl. 94 2001 Nucl. Phys. B, Proc. Suppl. 94 (2001) 108 [15] A. Ali Khan et al 4674 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 4674 M. Wingate et al., Int. J. Mod. Phys. A 16 S1B (2001) 585 [16] Here \eta_a = exp[\int_{f_F}^{\Lambda_{ETC,a}} (d\mu/\mu) \gamma(\alpha(\mu))], and in walking TC theories the anomalous dimension \gamma \simeq 1 so \eta_a \simeq \Lambda_{ETC,a}/f_F [17] By convention, we write SM-singlet neutrinos as right-handed fields \nu_{j,R}. These are assigned lepton number 1. Thus, in writing SU(4)_{PS} \supset SU(3)_c \times U(1), the U(1) is not U(1)_{B-L} since some neutrinos in the model are SU(4)_{PS}-singlet states [18] Z. Maki, M. Nakagawa, S. Sakata 870 Prog. Theor. Phys. 28 1962 Prog. Theor. Phys. 28 (1962) 870 (2 \times 2 matrix); B. W. Lee, S. Pakvasa, R. Shrock, and H. Sugawara 937 Phys. Rev. Lett. 38 1977 Phys. Rev. Lett. 38 (1977) 937 (3 \times 3 matrix) [19] T. Appelquist and R. Shrock, to appear [20] K. Dienes, E. Dudas, T. Gherghetta 25 Nucl. Phys., B 557 1999 Nucl. Phys. B 557 (1999) 25 N. Arkani-Hamed, S. Dimopoulos, G. Dvali, and J. March-Russell hep-ph/9811448 T. Appelquist, B. Dobrescu, E. Ponton, and H.-U. Yee hep-ph/0201131 hep-th/0204100 eng LBNL-50097 UCB-PTH-02-14 Gaillard, M K University of California, Berkeley Modular Invariant Anomalous U(1) Breaking Berkeley, CA Lawrence Berkeley Nat. Lab.
11 Apr 2002 19 p We describe the effective supergravity theory present below the scale of spontaneous gauge symmetry breaking due to an anomalous U(1), obtained by integrating out tree-level interactions of massive modes. A simple case is examined in some detail. We find that the effective theory can be expressed in the linear multiplet formulation, with some interesting consequences. Among them, the modified linearity conditions lead to new interactions not present in the theory without an anomalous U(1). These additional interactions are compactly expressed through a superfield functional. LANL EDS SzGeCERN Particle Physics - Theory Giedt, J Mary K Gaillard <gaillard@thsrv.lbl.gov> http://invenio-software.org/download/invenio-demo-site-files/0204100.pdf http://invenio-software.org/download/invenio-demo-site-files/0204100.ps.gz CER n 200231 2002 11 Gaillard, Mary K. Giedt, Joel 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] J. Giedt 1 Ann. Phys. (N.Y.) 297 2002 Ann. Phys. (N.Y.) 297 (2002) 1 hep-th/0108244 [2] M. Dine, N. Seiberg and E. Witten 585 Nucl. Phys., B 289 1987 Nucl. Phys. B 289 (1987) 585 J. J. Atick, L. Dixon and A. Sen 109 Nucl. Phys., B 292 1987 Nucl. Phys. B 292 (1987) 109 M. Dine, I. Ichinose and N. Seiberg 253 Nucl. Phys., B 293 1987 Nucl. Phys. B 293 (1987) 253 [3] M. B. Green and J. H. Schwarz 117 Phys. Lett., B 149 1984 Phys. Lett. B 149 (1984) 117 [4] P. Binétruy, G. Girardi and R. Grimm 111 Phys. Lett., B 265 1991 Phys. Lett. B 265 (1991) 111 [5] M. Müller 292 Nucl. Phys., B 264 1986 Nucl. Phys. B 264 (1986) 292 P. Binétruy, G. Girardi, R. Grimm and M. Müller 389 Phys. Lett., B 189 1987 Phys. Lett. B 189 (1987) 389 [6] P. Binétruy, G. Girardi and R. Grimm 255 Phys. Rep. 343 2001 Phys. Rep. 343 (2001) 255 [7] G. Girardi and R. Grimm 49 Ann. Phys. (N.Y.) 272 1999 Ann. Phys. (N.Y.) 272 (1999) 49 [8] P. Binétruy, M. K. Gaillard and Y.-Y. Wu 109 Nucl. Phys., B 481 1996 Nucl. Phys. B 481 (1996) 109 [9] P. Binétruy, M. K. Gaillard and Y.-Y. Wu 27 Nucl. Phys., B 493 1997 Nucl. Phys. B 493 (1997) 27 P. Binétruy, M. K. Gaillard and Y.-Y. Wu 288 Phys. Lett., B 412 1997 Phys. Lett. B 412 (1997) 288 [10] M. K. Gaillard, B. Nelson and Y.-Y. Wu 549 Phys. Lett., B 459 1999 Phys. Lett. B 459 (1999) 549 [11] M. K. Gaillard and B. Nelson 3 Nucl. Phys., B 571 2000 Nucl. Phys. B 571 (2000) 3 [12] S. Ferrara, C. Kounnas and M. Porrati 263 Phys. Lett., B 181 1986 Phys. Lett. B 181 (1986) 263 [13] M. Cvetić, J. Louis and B. A. Ovrut 227 Phys. Lett., B 206 1988 Phys. Lett. B 206 (1988) 227 L. E. Ibáñez and D. Lüst 305 Nucl. Phys., B 382 1992 Nucl. Phys. B 382 (1992) 305 [14] M. K. Gaillard 125 Phys. Lett., B 342 1995 Phys. Lett. B 342 (1995) 125 105027 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 105027 Phys. Rev. D 61 (2000) 084028 [15] E. Witten 151 Phys. Lett., B 155 1985 Phys. Lett. B 155 (1985) 151 [16] L. J. Dixon, V. S. Kaplunovsky and J. Louis 27 Nucl. Phys., B 329 1990 Nucl. Phys. B 329 (1990) 27 [17] S.J. Gates, M. Grisaru, M. Roček and W. Siegel, Superspace (Benjamin/Cummings, 1983) [18] M.K. Gaillard and T.R. Taylor 577 Nucl. Phys., B 381 1992 Nucl. Phys. B 381 (1992) 577 [19] J. Wess and J. Bagger, Supersymmetry and supergravity (Princeton, 1992) [20] P. Binétruy, C. Deffayet and P. Peter 163 Phys. Lett., B 441 1998 Phys. Lett. B 441 (1998) 163 [21] M. K. Gaillard and J. Giedt, in progress hep-ph/0204142 eng Chacko, Z University of California, Berkeley Fine Structure Constant Variation from a Late Phase Transition Berkeley, CA Lawrence Berkeley Nat. Lab.
12 Apr 2002 9 p Recent experimental data indicates that the fine structure constant alpha may be varying on cosmological time scales. We consider the possibility that such a variation could be induced by a second order phase transition which occurs at late times (z ~ 1 - 3) and involves a change in the vacuum expectation value (vev) of a scalar with milli-eV mass. Such light scalars are natural in supersymmetric theories with low SUSY breaking scale. If the vev of this scalar contributes to masses of electrically charged fields, the low-energy value of alpha changes during the phase transition. The observational predictions of this scenario include isotope-dependent deviations from Newtonian gravity at sub-millimeter distances, and (if the phase transition is a sharp event on cosmological time scales) the presence of a well-defined step-like feature in the alpha(z) plot. The relation between the fractional changes in alpha and the QCD confinement scale is highly model dependent, and even in grand unified theories the change in alpha does not need to be accompanied by a large shift in nucleon masses. LANL EDS SzGeCERN Particle Physics - Phenomenology Grojean, C Perelstein, M Maxim Perelstein <meperelstein@lbl.gov> http://invenio-software.org/download/invenio-demo-site-files/0204142.pdf http://invenio-software.org/download/invenio-demo-site-files/0204142.ps.gz CER n 200231 2002 11 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] J. K. Webb, M. T. Murphy, V. V. Flambaum, V. A. Dzuba, J. D. Barrow, C. W. Churchill, J. X. Prochaska and A. M. Wolfe 091301 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 091301 astro-ph/0012539 see also J. K. Webb, V. V. Flambaum, C. W. Churchill, M. J. Drinkwater and J. D. Barrow 884 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 884 astro-ph/9803165 V. A. Dzuba, V. V. Flambaum, and J. K. Webb 888 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 888 [2] P. A. Dirac 323 Nature 139 1937 Nature 139 (1937) 323 for a historical perspective, see F. Dyson, "The fundamental constants and their time variation", in Aspects of Quantum Theory, eds A. Salam and E. Wigner [3] T. Damour gr-qc/0109063 [4] J. D. Bekenstein 1527 Phys. Rev., D 25 1982 Phys. Rev. D 25 (1982) 1527 [5] G. R. Dvali and M. Zaldarriaga 091303 Phys. Rev. Lett. 88 2002 Phys. Rev. Lett. 88 (2002) 091303 hep-ph/0108217 [6] K. A. Olive and M. Pospelov 085044 Phys. Rev., D 65 2002 Phys. Rev. D 65 (2002) 085044 hep-ph/0110377 [7] T. Banks, M. Dine and M. R. Douglas 131301 Phys. Rev. Lett. 88 2002 Phys. Rev. Lett. 88 (2002) 131301 hep-ph/0112059 [8] P. Langacker, G. Segrè and M. J. Strassler 121 Phys. Lett., B 528 2002 Phys. Lett. B 528 (2002) 121 hep-ph/0112233 [9] A. Y. Potekhin, A. V. Ivanchik, D. A. Varshalovich, K. M. Lanzetta, J. A. Baldwin, G. M. Williger and R. F. Carswell 523 Astrophys. J. 505 1998 Astrophys. J. 505 (1998) 523 astro-ph/9804116 [10] S. Weinberg 3357 Phys. Rev., D 9 1974 Phys. Rev. D 9 (1974) 3357 L. Dolan and R. Jackiw 3320 Phys. Rev., D 9 1974 Phys. Rev. D 9 (1974) 3320 [11] N. Arkani-Hamed, L. J. Hall, C. Kolda and H. Murayama 4434 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 4434 astro-ph/0005111 [12] M. Dine, W. Fischler and M. Srednicki 575 Nucl. Phys., B 189 1981 Nucl. Phys. B 189 (1981) 575 S. Dimopoulos and S. Raby 353 Nucl. Phys., B 192 1981 Nucl. Phys. B 192 (1981) 353 L. Alvarez-Gaumé, M. Claudson and M. B. Wise 96 Nucl. Phys., B 207 1982 Nucl. Phys. B 207 (1982) 96 M. Dine and A. E. Nelson 1277 Phys. Rev., D 48 1993 Phys. Rev. D 48 (1993) 1277 hep-ph/9303230 M.
Dine, A. E. Nelson and Y. Shirman 1362 Phys. Rev., D 51 1995 Phys. Rev. D 51 (1995) 1362 hep-ph/9408384 M. Dine, A. E. Nelson, Y. Nir and Y. Shirman 2658 Phys. Rev., D 53 1996 Phys. Rev. D 53 (1996) 2658 hep-ph/9507378 [13] N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum 193 Phys. Lett., B 480 2000 Phys. Lett. B 480 (2000) 193 hep-th/0001197 S. Kachru, M. Schulz and E. Silverstein 045021 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 045021 hep-th/0001206 C. Csáki, J. Erlich and C. Grojean 312 Nucl. Phys., B 604 2001 Nucl. Phys. B 604 (2001) 312 hep-th/0012143 [14] X. Calmet and H. Fritzsch hep-ph/0112110 H. Fritzsch hep-ph/0201198 [15] G. R. Dvali and S. Pokorski 126 Phys. Lett., B 379 1996 Phys. Lett. B 379 (1996) 126 hep-ph/9601358 [16] Z. Chacko and R. N. Mohapatra 2836 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 2836 hep-ph/9810315 [17] J. P. Turneaure, C. M. Will, B. F. Farrell, E. M. Mattison and R. F. C. Vessot 1705 Phys. Rev., D 27 1983 Phys. Rev. D 27 (1983) 1705 J. D. Prestage, R. L. Tjoelker and L. Maleki 3511 Phys. Rev. Lett. 74 1995 Phys. Rev. Lett. 74 (1995) 3511 [18] A. I. Shlyakhter 340 Nature 264 1976 Nature 264 (1976) 340 T. Damour and F. Dyson 37 Nucl. Phys., B 480 1996 Nucl. Phys. B 480 (1996) 37 hep-ph/9606486 Y. Fujii, A. Iwamoto, T. Fukahori, T. Ohnuki, M. Nakagawa, H. Hidaka, Y. Oura, P. Möller 377 Nucl. Phys., B 573 2000 Nucl. Phys. B 573 (2000) 377 hep-ph/9809549 [19] E. W. Kolb, M. J. Perry and T. P. Walker 869 Phys. Rev., D 33 1986 Phys. Rev. D 33 (1986) 869 B. A. Campbell and K. A. Olive 429 Phys. Lett., B 345 1995 Phys. Lett. B 345 (1995) 429 hep-ph/9411272 L. Bergström, S. Iguri and H. Rubinstein 045005 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 045005 astro-ph/9902157 P. P. Avelino et al 103505 Phys. Rev., D 64 2001 Phys. Rev. D 64 (2001) 103505 astro-ph/0102144 [20] S. Hannestad 023515 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 023515 astro-ph/9810102 M. Kaplinghat, R. J. Scherrer and M. S. Turner 023516 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 023516 astro-ph/9810133 P. P. Avelino, C. J. Martins, G. Rocha and P. Viana 123508 Phys. Rev., D 62 2000 Phys. Rev. D 62 (2000) 123508 astro-ph/0008446 [21] C. D. Hoyle, U. Schmidt, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, D. J. Kapner and H. E. Swanson 1418 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 1418 hep-ph/0011014 E. G. Adelberger [EOT-WASH Group Collaboration] hep-ex/0202008 [22] S. Coleman, Aspects of symmetry (Cambridge Univ. Press, 1985) hep-ph/0204143 eng Domin, P Comenius University Phenomenological Study of Solar-Neutrino Induced Double Beta Decay of Mo100 12 Apr 2002 8 p The detection of solar-neutrinos of different origin via induced beta beta process of Mo100 is investigated. The particular counting rates and energy distributions of emitted electrons are presented. A discussion with respect to a solar-neutrino detector consisting of 10 tonnes of Mo100 is included. Both the cases of the standard solar model and neutrino oscillation scenarios are analyzed. Moreover, new beta^- beta^+ and beta^-/EC channels of the double-beta process are introduced and possibilities of their experimental observation are addressed. LANL EDS SzGeCERN Particle Physics - Phenomenology Simkovic, F Semenov, S V Gaponov, Y V Pavol Domin <domin@chavena.dnp.fmph.uniba.sk> http://invenio-software.org/download/invenio-demo-site-files/0204143.pdf http://invenio-software.org/download/invenio-demo-site-files/0204143.ps.gz CER n 200231 2002 11 Gaponov, Yu. V.
2002-04-15 00 2002-04-15 BATCH 8 PAGES LATEX 2 POSTSCRIPT FIGURES TALK PRESENTED BY P DOMIN ON THE WORKSHOP MEDEX'01 (PRAGUE JUNE 2001) TO APPEAR IN CZECH J PHYS 52 (2002) PREPRINT [1] S. M. Bilenky, C. Giunti and W. Grimus 1 Prog. Part. Nucl. Phys. 45 1999 Prog. Part. Nucl. Phys. 45 (1999) 1 [2] J. N. Bahcall, S. Basu and M. H. Pinsonneault 1 Phys. Lett., B 433 1998 Phys. Lett. B 433 (1998) 1 [3] R. Davis Jr 13 Prog. Part. Nucl. Phys. 32 1994 Prog. Part. Nucl. Phys. 32 (1994) 13 [4] Kamiokande Collaboration, Y Fukuda et al 1683 Phys. Rev. Lett. 77 1996 Phys. Rev. Lett. 77 (1996) 1683 [5] SAGE collaboration, A. I. Abazov et al 3332 Phys. Rev. Lett. 67 1991 Phys. Rev. Lett. 67 (1991) 3332 D. N. Abdurashitov et al 4708 Phys. Rev. Lett. 77 1996 Phys. Rev. Lett. 77 (1996) 4708 [6] GALLEX collaboration, P. Anselmann et al 376 Phys. Lett., B 285 1992 Phys. Lett. B 285 (1992) 376 W. Hampel et al 384 Phys. Lett., B 388 1996 Phys. Lett. B 388 (1996) 384 [7] Super-Kamiokande Coll., S. Fukuda et al 5651 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 5651 [8] SNO Collaboration, Q.R. Ahmad et al 071301 Phys. Rev. Lett. 87 2001 Phys. Rev. Lett. 87 (2001) 071301 [9] H. Ejiri et al., Phys. Rev. Lett. 85 (2000) 2917; H. Ejiri 265 Phys. Rep. 338 2000 Phys. Rep. 338 (2000) 265 [10] S. V. Semenov, Yu. V. Gaponov and R. U. Khafizov 1379 Yad. Fiz. 61 1998 Yad. Fiz. 61 (1998) 1379 [11] L. V. Inzhechik, Yu. V. Gaponov and S. V. Semenov 1384 Yad. Fiz. 61 1998 Yad. Fiz. 61 (1998) 1384 [12] http://www.sns.ias.edu/~jnb [13] B. Singh et al 478 Nucl. Data Sheets 84 1998 Nucl. Data Sheets 84 (1998) 478 [14] H. Akimune et al 23 Phys. Lett., B 394 1997 Phys. Lett. B 394 (1997) 23 [15] J. N. Bahcall, P. I. Krastev, and A. Yu. Smirnov 096016 Phys. Rev., D 58 1998 Phys. Rev. D 58 (1998) 096016 hep-th/0204101 eng CSULB-PA-02-2 Nishino, H California State University Axisymmetric Gravitational Solutions as Possible Classical Backgrounds around Closed String Mass Distributions 12 Apr 2002 15 p By studying singularities in stationary axisymmetric Kerr and Tomimatsu-Sato solutions with distortion parameter \delta = 2, 3, ... in general relativity, we conclude that these singularities can be regarded as nothing other than closed string-like circular mass distributions. We use two different regularizations to identify \delta-function type singularities in the energy-momentum tensor for these solutions, realizing a regulator independent result. This result gives supporting evidence that these axisymmetric exact solutions may well be the classical solutions around closed string-like mass distributions, just like the Schwarzschild solution corresponding to a point mass distribution. In other words, these axisymmetric exact solutions may well provide the classical backgrounds around closed strings. LANL EDS SzGeCERN Particle Physics - Theory Rajpoot, S Hitoshi Nishino <hnishino@csulb.edu> http://invenio-software.org/download/invenio-demo-site-files/0204101.pdf http://invenio-software.org/download/invenio-demo-site-files/0204101.ps.gz CER n 200231 2002 11 Nishino, Hitoshi Rajpoot, Subhash 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] K. Schwarzschild, Sitzungsberichte Preuss. Akad. Wiss., 424 (1916) [2] M. Green, J.H. Schwarz and E. Witten, `Superstring Theory', Vols. I and II, Cambridge University Press (1987) [3] J. Chazy, Bull. Soc. Math. France : 52 (1924) 17; H.E.J. Curzon, Proc. London Math. Soc. : 23 (1924) 477 [4] P. Hořava and E. Witten 506 Nucl. Phys., B 460 1996 Nucl. Phys. B 460 (1996) 506 94 Nucl.
Phys., B 475 1996 Nucl. Phys. B 475 (1996) 94 [5] N. Arkani-Hamed, S. Dimopoulos and G. Dvali 263 Phys. Lett. 429 1998 Phys. Lett. 429 (1998) 263 I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali 257 Phys. Lett. 436 1998 Phys. Lett. 436 (1998) 257 [6] L. Randall and R. Sundrum 3370 Phys. Rev. Lett. 83 1999 Phys. Rev. Lett. 83 (1999) 3370 4690 Phys. Rev. Lett. 83 1999 Phys. Rev. Lett. 83 (1999) 4690 [7] R.P. Kerr 237 Phys. Rev. Lett. 11 1963 Phys. Rev. Lett. 11 (1963) 237 [8] A. Ya Burinskii 441 Phys. Lett., A 185 1994 Phys. Lett. A 185 (1994) 441 `Complex String as Source of Kerr Geometry' hep-th/9503094 2392 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 2392 `Structure of Spinning Particle Suggested by Gravity, Supergravity & Low-Energy String Theory' hep-th/9910045 Czech. J. Phys. 50 S1 (2000) 201 [9] See, e.g., A. Sen 2081 Mod. Phys. Lett., A 10 1995 Mod. Phys. Lett. A 10 (1995) 2081 P.H. Frampton and T.W. Kephart 2571 Mod. Phys. Lett., A 10 1995 Mod. Phys. Lett. A 10 (1995) 2571 A. Strominger and C. Vafa 99 Phys. Lett. 379 1996 Phys. Lett. 379 (1996) 99 K. Behrndt 188 Nucl. Phys., B 455 1995 Nucl. Phys. B 455 (1995) 188 J.C. Breckenridge, D.A. Lowe, R.C. Myers, A.W. Peet, A. Strominger and C. Vafa 423 Phys. Lett., B 381 1996 Phys. Lett. B 381 (1996) 423 C. Callan and J. Maldacena 591 Nucl. Phys., B 472 1996 Nucl. Phys. B 472 (1996) 591 G. Horowitz and A. Strominger 2368 Phys. Rev. Lett. 77 1996 Phys. Rev. Lett. 77 (1996) 2368 J.M. Maldacena, `Black Holes in String Theory', Ph.D. Thesis hep-th/9607235 A. Dabholkar and J.A. Harvey 478 Phys. Rev. Lett. 63 1989 Phys. Rev. Lett. 63 (1989) 478 A. Dabholkar, G.W. Gibbons, J.A. Harvey and F. Ruiz Ruiz 33 Nucl. Phys., B 340 1990 Nucl. Phys. B 340 (1990) 33 C.G. Callan, Jr., J.M. Maldacena, A.W. Peet 645 Nucl. Phys. B 475 1996 Nucl. Phys. B 475 (1996) 645 [10] A. Tomimatsu and H. Sato 95 Prog. Theor. Phys. 50 1973 Prog. Theor. Phys. 50 (1973) 95 [11] M. Yamazaki and S. Hori 696 Prog. Theor. Phys. 57 1977 Prog. Theor. Phys. 57 (1977) 696 erratum 1248 Prog. Theor. Phys. 60 1978 Prog. Theor. Phys. 60 (1978) 1248 S. Hori 1870 Prog. Theor. Phys. 59 1978 Prog. Theor. Phys. 59 (1978) 1870 erratum 365 Prog. Theor. Phys. 61 1979 Prog. Theor. Phys. 61 (1979) 365 [12] H. Nishino 77 Phys. Lett. 359 1995 Phys. Lett. 359 (1995) 77 [13] H. Weyl, Ann. de Phys. : 54 (1917) 117 [14] J.M. Bardeen, Astrophys. Jour. : 162 (1970) 71 [15] D. Kramer, H. Stephani, E. Herlt and M. MacCallum, `Exact Solutions of Einstein's Field Equations', Cambridge University Press (1980) [16] R. Arnowitt, S. Deser and C. Misner, in `Gravitation': `An Introduction to Current Research', ed. L. Witten (New York, Wiley, 1962) hep-th/0204102 eng Bo-Yu, H Northwest University, China Soliton on Noncommutative Orbifold $ T^2/Z_k $ 12 Apr 2002 13 p Following the construction of the projection operators on $ T^2 $ presented by Gopakumar, Headrick and Spradlin, we construct the projection operators on the integral noncommutative orbifold $ T^2/G (G=Z_k, k=2, 3, 4, 6)$. Such operators are expressed by a function on this orbifold. So it provides a complete set of projection operators upon the moduli space $T^2 \times K/Z_k$. All these operators have the same trace 1/A ($A$ is an integer). Since the projection operators correspond to solitons in noncommutative string field theory, we obtain the explicit expression of all the soliton solutions on $ T^2/Z_k $.
LANL EDS SzGeCERN Particle Physics - Theory Kangjie, S Zhan-ying, Y Zhanying Yang <yzy@phy.nwu.edu.cn> http://invenio-software.org/download/invenio-demo-site-files/0204102.pdf http://invenio-software.org/download/invenio-demo-site-files/0204102.ps.gz CER n 200231 2002 11 Bo-yu, Hou Kangjie, Shi Zhan-ying, Yang 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] A. Connes, Non-commutative Geometry, Academic Press, 1994 [2] G. Landi, "An introduction to non-commutative spaces and their geometry" hep-th/9701078 J. Varilly, "An introduction to non-commutative Geometry" physics/9709045 [3] J. Madore, "An introduction to non-commutative Differential Geometry and its physical Applications", Cambridge University Press, 2nd edition, 1999 [4] A. Connes, M. Douglas, A. Schwartz, Matrix theory compactification on Tori 003 J. High Energy Phys. 9802 1998 J. High Energy Phys. 9802 (1998) 003 hep-th/9711162 M. Douglas, C. Hull 008 J. High Energy Phys. 9802 1998 J. High Energy Phys. 9802 (1998) 008 hep-th/9711165 [5] Nathan Seiberg and Edward Witten, "String theory and non-commutative geometry" 032 J. High Energy Phys. 9909 1999 J. High Energy Phys. 9909 (1999) 032 hep-th/9908142 V. Schomerus, "D-branes and Deformation Quantization" 030 J. High Energy Phys. 9906 1999 J. High Energy Phys. 9906 (1999) 030 [6] E. Witten, "Noncommutative Geometry and String Field Theory" 253 Nucl. Phys., B 268 1986 Nucl. Phys. B 268 (1986) 253 [7] R. B. Laughlin, "The quantum Hall Effect", edited by R. Prange and S. Girvin, p. 233 [8] L. Susskind hep-th/0101029 J. P. Hu and S. C. Zhang cond-mat/0112432 [9] R. Gopakumar, M. Headrick, M. Spradlin, "On Noncommutative Multi-solitons" hep-th/0103256 [10] E. J. Martinec and G. Moore, "Noncommutative Solitons on Orbifolds" hep-th/0101199 [11] D. J. Gross and N. A. Nekrasov, "Solitons in noncommutative Gauge Theory" hep-th/0010090 M. R. Douglas and N. A. Nekrasov, "Noncommutative Field Theory" hep-th/0106048 [12] R. Gopakumar, S. Minwalla and A. Strominger, "Noncommutative Solitons" 048 J. High Energy Phys. 005 2000 J. High Energy Phys. 005 (2000) 048 hep-th/0003160 [13] J. Harvey, "Komaba Lectures on Noncommutative Solitons and D-branes" hep-th/0102076 J. A. Harvey, P. Kraus and F. Larsen, J. High Energy Phys. 0012 (2000) 024 hep-th/0010060 [14] A. Konechny and A. Schwarz, "Compactification of M(atrix) theory on noncommutative toroidal orbifolds" 667 Nucl. Phys., B 591 2000 Nucl. Phys. B 591 (2000) 667 hep-th/9912185 "Moduli spaces of maximally supersymmetric solutions on noncommutative tori and noncommutative orbifolds", J. High Energy Phys. 0009 (2000) 005 hep-th/0005167 [15] S. Walters, "Projective modules over noncommutative sphere", J. London Math. Soc. : 51 (1995) 589; "Chern characters of Fourier modules", Can. J. Math. : 52 (2000) 633 [16] M. Rieffel, Pacific J. Math. : 93 (1981) 415 [17] F. P. Boca 325 Commun. Math. Phys. 202 1999 Commun. Math. Phys. 202 (1999) 325 [18] H. Bacry, A. Grossman and J. Zak 1118 Phys. Rev., B 12 1975 Phys. Rev. B 12 (1975) 1118 [19] J. Zak, in Solid State Physics, edited by H. Ehrenreich, F. Seitz and D. Turnbull (Academic, New York, 1972), Vol. 27 nucl-th/0204031 eng LA-UR-02-2040 Page, P R Los Alamos Sci. Lab. Hybrid Baryons Los Alamos, NM Los Alamos Sci. Lab. 11 Apr 2002 12 p We review the status of hybrid baryons. The only known way to study hybrids rigorously is via excited adiabatic potentials. Hybrids can be modelled by both the bag and flux-tube models. The low-lying hybrid baryon is N 1/2^+ with a mass of 1.5-1.8 GeV.
Hybrid baryons can be produced in the glue-rich processes of diffractive gamma N and pi N production, Psi decays and p pbar annihilation. LANL EDS SzGeCERN Nuclear Physics "Philip R. Page" <prp@t16prp.lanl.gov> http://invenio-software.org/download/invenio-demo-site-files/0204031.pdf http://invenio-software.org/download/invenio-demo-site-files/0204031.ps.gz 11 2002 Page, Philip R. 2002-04-15 00 2002-04-15 BATCH INVITED PLENARY TALK PRESENTED AT THE ``9TH INTERNATIONAL CONFERENCE ON THE STRUCTURE OF BARYONS'' (BARYONS 2002) 3-8 MARCH NEWPORT NEWS VA USA 12 PAGES 7 ENCAPSULATED POSTSCRIPT FIGURES LATEX n 200216 1. T. Barnes, contribution at the COSY Workshop on Baryon Excitations (May 2000, Jülich, Germany), nucl-th/0009011 2. E.I. Ivanov et al., Phys. Rev. Lett. 86 (2001) 3977 3. T.T. Takahashi, H. Matsufuru, Y. Nemoto and H. Suganuma, Phys. Rev. Lett. 86 (2001) 18; ibid., Proc. of "Int. Symp. on Hadron and Nuclei" (February 2001, Seoul, Korea), published by Institute of Physics and Applied Physics (2001), ed. Dr. T.K. Choi, p. 341; ibid., T. Umeda, Nucl. Phys. Proc. Suppl. 94 (2001) 554. 4. Yu. A. Simonov, these proceedings; D.S. Kuzmenko and Yu. A. Simonov, hep-ph/0202277 5. S. Capstick and N. Isgur, Phys. Rev. D 34 (1986) 2809 6. C. Alexandrou, Ph. de Forcrand and A. Tsapalis, Phys. Rev. D 65 (2002) 054503 7. C.E. Carlson and N.C. Mukhopadhyay, Phys. Rev. Lett. 67 (1991) 3745 8. C.-K. Chow, D. Pirjol and T.-M. Yan, Phys. Rev. D 59 (1999) 056002 9. T. Barnes, Ph. D. thesis, California Institute of Technology, 1977; T. Barnes and F.E. Close, Phys. Lett. B 123 (1983) 89 10. E. Golowich, E. Haqq and G. Karl, Phys. Rev. D 28 (1983) 160 11. C.E. Carlson, Proc. of the 7th Int. Conf. on the Structure of Baryons (October 1995, Santa Fe, NM), p. 461, eds. B. F. Gibson et al. (World Scientific, Singapore, 1996). 12. C.E. Carlson and T.H. Hansson, Phys. Lett. B 128 (1983) 95 13. I. Duck and E. Umland, Phys. Lett. B 128 (1983) 221 14. P.R. Page, Proc. of "The Physics of Excited Nucleons" (NSTAR2000) (February 2000, Newport News, VA). 15. J. Merlin and J. Paton, J. Phys. G 11 (1985) 439 16. K.J. Juge, J. Kuti and C.J. Morningstar, Nucl. Phys. Proc. Suppl. 63 (1998) 543. 17. E.S. Swanson and A.P. Szczepaniak, Phys. Rev. D 59 (1999) 014035 18. T.J. Allen, M.G. Olsson and S. Veseli, Phys. Lett. B 434 (1998) 110 19. S. Capstick and P.R. Page, Phys. Rev. D 60 (1999) 111501 20. L.S. Kisslinger et al., Phys. Rev. D 51 (1995) 5986; Nucl. Phys. A 629 (1998) 30c; A.P. Martynenko, Sov. J. Nucl. Phys. 54 (1991) 488 21. S.M. Gerasyuta and V.I. Kochkin, hep-ph/0203104 22. T.D. Cohen and L.Ya. Glozman, Phys. Rev. D 65 (2002) 016006 23. E. Klempt, these proceedings. 24. A.M. Zaitsev (VES Collab.), Proc. of ICHEP'96 (Warsaw, 1996). 25. D.E. Groom et al. (Particle Data Group), Eur. Phys. J. C 15 (2000) 1 26. H. Li (BES Collab.), Nucl. Phys. A 675 (2000) 189c; B.-S. Zou et al., hep-ph/9909204 27. L.S. Kisslinger and Z.-P. Li, Phys. Lett. B 445 (1999) 271 28. N. Isgur, Phys. Rev. D 60 (1999) 114016 29. Z.-P. Li et al., Phys. Rev. D 44 (1991) 2841; Phys. Rev. D 46 (1992) 70 30. T. Barnes and F.E. Close, Phys. Lett. B 128 (1983) 277 31. O. Kittel and G.F. Farrar, hep-ph/0010186 32. P.R. Page, Proc. of "3rd Int. Conf. on Quark Confinement and Hadron Spectrum" (Confinement III), (June 1998, Newport News, VA). 33. K.J. Juge, J. Kuti and C.J. Morningstar, Nucl. Phys. Proc. Suppl. 63 (1998) 326.
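On the hybrid-baryon record above: "excited adiabatic potentials" refers to the standard Born-Oppenheimer treatment in which the gluonic field of static quarks is solved first, and its excitation energy then serves as a potential for the slow quark motion. A minimal sketch in LaTeX, schematically reduced to a two-body problem with generic notation ($\mu$ a reduced mass, $r$ a quark separation); the specific potentials used in the talk are not reproduced here:

\begin{equation}
  \left[ -\frac{1}{2\mu}\nabla^2 + V_\Lambda(r) \right] \psi(r) \;=\; E\,\psi(r),
\end{equation}

where $\Lambda$ labels the gluonic excitation: the lowest potential yields conventional hadrons, while solving in an excited $V_\Lambda$ yields the hybrid spectrum.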
PREPRINT nucl-th/0204032 eng Amos, K The University of Melbourne A simple functional form for proton-nucleus total reaction cross sections 12 Apr 2002 13 p A simple functional form has been found that gives a good representation of the total reaction cross sections for the scattering of protons from (15) nuclei spanning the mass range ${}^{9}$Be to ${}^{238}$U and for proton energies ranging from 20 to 300 MeV. LANL EDS SzGeCERN Nuclear Physics Deb, P K Ken Amos <amos@physics.unimelb.edu.au> http://invenio-software.org/download/invenio-demo-site-files/0204032.pdf http://invenio-software.org/download/invenio-demo-site-files/0204032.ps.gz 2002 11 2002-04-15 00 2002-04-15 BATCH n 200216 PREPRINT nucl-th/0204033 eng Oyamatsu, K Aichi Shukutoku Univ Saturation of nuclear matter and radii of unstable nuclei 12 Apr 2002 26 p We examine relations among the parameters characterizing the phenomenological equation of state (EOS) of nearly symmetric, uniform nuclear matter near the saturation density by comparing macroscopic calculations of radii and masses of stable nuclei with the experimental data. The EOS parameters of interest here are the symmetry energy S_0, the symmetry energy density-derivative coefficient L and the incompressibility K_0 at the normal nuclear density. We find a constraint on the relation between K_0 and L from the empirically allowed values of the slope of the saturation line (the line joining the saturation points of nuclear matter at finite neutron excess), together with a strong correlation between S_0 and L. In the light of the uncertainties in the values of K_0 and L, we macroscopically calculate radii of unstable nuclei expected to be produced in future facilities. We find that the matter radii depend strongly on L while being almost independent of K_0, a feature that will help to determine the L value via systematic measurements of nuclear size. LANL EDS SzGeCERN Nuclear Physics Iida, K Kei Iida <keiiida@postman.riken.go.jp> http://invenio-software.org/download/invenio-demo-site-files/0204033.pdf http://invenio-software.org/download/invenio-demo-site-files/0204033.ps.gz CER n 200231 2002 11 Oyamatsu, Kazuhiro Iida, Kei 2002-04-15 00 2002-04-15 BATCH PREPRINT [1] J.M. Blatt and V.F. Weisskopf, Theoretical Nuclear Physics, Wiley, New York, 1952 [2] H. Heiselberg, V.R. Pandharipande 481 Annu. Rev. Nucl. Part. Sci. 50 2000 Annu. Rev. Nucl. Part. Sci. 50 (2000) 481 [3] K. Oyamatsu, I. Tanihata, Y. Sugahara, K. Sumiyoshi, H. Toki 3 Nucl. Phys., A 634 1998 Nucl. Phys. A 634 (1998) 3 [4] B.A. Brown 5296 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 5296 [5] K.C. Chung, C.S. Wang, A.J. Santiago nucl-th/0102017 [6] B.A. Li 4221 Phys. Rev. Lett. 85 2000 Phys. Rev. Lett. 85 (2000) 4221 [7] C. Sturm et al 39 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 39 [8] C. Fuchs, A. Faessler, E. Zabrodin, Y.M. Zheng 1974 Phys. Rev. Lett. 86 2001 Phys. Rev. Lett. 86 (2001) 1974 [9] P. Danielewicz, in: Proc. Int. Symp. on Non-Equilibrium and Nonlinear Dynamics in Nuclear and Other Finite Systems, Beijing, 2001 nucl-th/0112006 [10] D.H. Youngblood, H.L. Clark, Y.-W. Lui 691 Phys. Rev. Lett. 82 1999 Phys. Rev. Lett. 82 (1999) 691 [11] J.A. Pons, F.M. Walter, J.M. Lattimer, M. Prakash, R. Neuhaeuser, P. An 981 Astrophys. J. 564 2002 Astrophys. J. 564 (2002) 981 [12] J.M. Lattimer 337 Annu. Rev. Nucl. Part. Sci. 31 1981 Annu. Rev. Nucl. Part. Sci. 31 (1981) 337 [13] K. Oyamatsu 431 Nucl. Phys., A 561 1993 Nucl. Phys. A 561 (1993) 431 [14] L.R.B. Elton, A. Swift 52 Nucl.
Phys., A 94 1967 Nucl. Phys. A 94 (1967) 52 [15] M. Yamada 512 Prog. Theor. Phys. 32 1964 Prog. Theor. Phys. 32 (1964) 512 [16] H. de Vries, C.W. de Jager, C. de Vries 495 At. Data Nucl. Data Tables 36 1987 At. Data Nucl. Data Tables 36 (1987) 495 [17] G. Audi, A.H. Wapstra 409 Nucl. Phys., A 595 1995 Nucl. Phys. A 595 (1995) 409 [18] S. Goriely, F. Tondeur, J.M. Pearson 311 At. Data Nucl. Data Tables 77 2001 At. Data Nucl. Data Tables 77 (2001) 311 [19] M. Samyn, S. Goriely, P.-H. Heenen, J.M. Pearson, F. Tondeur 142 Nucl. Phys., A 700 2002 Nucl. Phys. A 700 (2002) 142 [20] E. Chabanat, P. Bonche, P. Haensel, J. Meyer, R. Schaeffer 231 Nucl. Phys., A 635 1998 Nucl. Phys. A 635 (1998) 231 [21] Y. Sugahara, H. Toki 557 Nucl. Phys., A 579 1994 Nucl. Phys. A 579 (1994) 557 [22] A. Ozawa, T. Suzuki, I. Tanihata 32 Nucl. Phys., A 693 2001 Nucl. Phys. A 693 (2001) 32 [23] C.J. Batty, E. Friedman, H.J. Gils, H. Rebel 1 Adv. Nucl. Phys. 19 1989 Adv. Nucl. Phys. 19 (1989) 1 [24] G. Fricke, C. Bernhardt, K. Heilig, L.A. Schaller, L. Schellenberg, E.B. Shera, C.W. de Jager 177 At. Data Nucl. Data Tables 60 1995 At. Data Nucl. Data Tables 60 (1995) 177 [25] G. Huber et al 2342 Phys. Rev., C 18 1978 Phys. Rev. C 18 (1978) 2342 [26] L. Ray, G.W. Hoffmann, W.R. Coker 223 Phys. Rep. 212 1992 Phys. Rep. 212 (1992) 223 [27] S. Yoshida, H. Sagawa, N. Takigawa 2796 Phys. Rev., C 58 1998 Phys. Rev. C 58 (1998) 2796 [28] C.J. Pethick, D.G. Ravenhall 173 Nucl. Phys., A 606 1996 Nucl. Phys. A 606 (1996) 173 [29] K. Iida, K. Oyamatsu, unpublished [30] J.P. Blaizot, J.F. Berger, J. Dechargé, M. Girod 435 Nucl. Phys., A 591 1995 Nucl. Phys. A 591 (1995) 435 nucl-th/0204034 eng Bozek, P Institute of Nuclear Physics, Cracow, Poland Nuclear matter with off-shell propagation 12 Apr 2002 Symmetric nuclear matter is studied within the conserving, self-consistent T-matrix approximation. This approach involves off-shell propagation of nucleons in the ladder diagrams. The binding energy receives contributions from the background part of the spectral function, away from the quasiparticle peak. The Fermi energy at the saturation point fulfills the Hugenholtz-Van Hove relation. In comparison to the Brueckner-Hartree-Fock approach, the binding energy is reduced and the equation of state is harder. LANL EDS SzGeCERN Nuclear Physics Bozek <bozek@sothis.ifj.edu.pl> http://invenio-software.org/download/invenio-demo-site-files/0204034.pdf http://invenio-software.org/download/invenio-demo-site-files/0204034.ps.gz 2002 11 2002-04-15 00 2002-04-15 BATCH n 200216 PREPRINT SCAN-9605068 eng McGILL-96-15 Contogouris, A P University of Athens One loop corrections for certain reactions initiated by 5-parton subprocesses via helicity amplitudes Montreal McGill Univ. Phys. Dept. Apr 1996? 28 p UNC9808 SzGeCERN Particle Physics - Phenomenology Merebashvili, Z V Lebessis, F Veropoulos, G http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-9605068.pdf http://invenio-software.org/download/invenio-demo-site-files/SCAN-9605068.tif 13 1996 1996-05-08 50 2001-12-14 BATCH 4234-4243 7 Phys. Rev., D 54 1996 h 199620 ARTICLE eng TRI-PP-86-73 Bryman, D A University of British Columbia Exotic muon decay mu --> e + x Burnaby, BC TRIUMF Aug 1986 8 p jv200203 SzGeCERN Particle Physics - Experimental Results Clifford, E T H 13 1986 1990-01-29 50 2002-03-26 BATCH 2787-88 22 Phys. Rev. Lett.
57 1986 SLAC 1594699 h 198648n ARTICLE hep-th/0003289 eng PUPT-1926 Costa, M S Princeton University A Test of the AdS/CFT Duality on the Coulomb Branch Princeton, NJ Princeton Univ. Joseph-Henry Lab. Phys. 31 Mar 2000 11 p We consider the N=4 SU(N) Super Yang Mills theory on the Coulomb branch with gauge symmetry broken to S(U(N_1)*U(N_2)). By integrating the W particles, the effective action near the IR SU(N_i) conformal fixed points is seen to be a deformation of the Super Yang Mills theory by a non-renormalized, irrelevant, dimension 8 operator. The correction to the two-point function of the dilaton field dual operator near the IR is related to a three-point function of chiral primary operators at the conformal fixed points and agrees with the classical gravity prediction, including the numerical factor. LANL EDS LANLPUBL200104 SzGeCERN Particle Physics - Theory Miguel S Costa <miguel@feynman.princeton.edu> http://invenio-software.org/download/invenio-demo-site-files/0003289.pdf http://invenio-software.org/download/invenio-demo-site-files/0003289.ps.gz 2000 13 Princeton University 2000-04-03 50 2001-11-09 BATCH Costa, Miguel S. 287-292 Phys. Lett., B 482 2000 SLAC 4356110 n 200014 ARTICLE [1] J.M. Maldacena, The Large N Limit of Superconformal Field Theories and Supergravity 231 Adv. Theor. Math. Phys. 2 1998 Adv. Theor. Math. Phys. 2 (1998) 231 hep-th/9711200 [2] S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge Theory Correlators from Non-Critical String Theory 105 Phys. Lett., B 428 1998 Phys. Lett. B 428 (1998) 105 hep-th/9802109 [3] E. Witten, Anti De Sitter Space And Holography 253 Adv. Theor. Math. Phys. 2 1998 Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [4] O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, Large N Field Theories, String Theory and Gravity 183 Phys. Rep. 323 2000 Phys. Rep. 323 (2000) 183 hep-th/9905111 [5] J.A. Minahan and N.P. Warner, Quark Potentials in the Higgs Phase of Large N Supersymmetric Yang-Mills Theories 005 J. High Energy Phys. 06 1998 J. High Energy Phys. 06 (1998) 005 hep-th/9805104 [6] M.R. Douglas and W. Taylor, Branes in the bulk of Anti-de Sitter space hep-th/9807225 [7] A.A. Tseytlin and S. Yankielowicz, Free energy of N=4 super Yang-Mills in Higgs phase and non-extremal D3-brane interactions 145 Nucl. Phys., B 541 1999 Nucl. Phys. B 541 (1999) 145 hep-th/9809032 [8] Y. Wu, A Note on AdS/SYM Correspondence on the Coulomb Branch hep-th/9809055 [9] P. Kraus, F. Larsen, S. Trivedi, The Coulomb Branch of Gauge Theory from Rotating Branes 003 J. High Energy Phys. 03 1999 J. High Energy Phys. 03 (1999) 003 hep-th/9811120 [10] I.R. Klebanov and E. Witten, AdS/CFT Correspondence and Symmetry Breaking 89 Nucl. Phys., B 556 1999 Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [11] D.Z. Freedman, S.S. Gubser, K. Pilch and N.P. Warner, Continuous distributions of D3-branes and gauged supergravity hep-th/9906194 [12] A. Brandhuber and K. Sfetsos, Wilson loops from multicentre and rotating branes, mass gaps and phase structure in gauge theories hep-th/9906201 [13] I. Chepelev and R. Roiban, A note on correlation functions in AdS5/SYM4 correspondence on the Coulomb branch 74 Phys. Lett., B 462 1999 Phys. Lett. B 462 (1999) 74 hep-th/9906224 [14] S.B. Giddings and S.F. Ross, D3-brane shells to black branes on the Coulomb branch 024036 Phys. Rev., D 61 2000 Phys. Rev. D 61 (2000) 024036 hep-th/9907204 [15] M. Cvetic, S.S. Gubser, H. Lu and C.N.
Pope, Symmetric Potentials of Gauged Supergravities in Diverse Dimensions and Coulomb Branch of Gauge Theories hep-th/9909121 [16] R.C. Rashkov and K.S. Viswanathan, Correlation functions in the Coulomb branch of N=4 SYM from AdS/CFT correspondence hep-th/9911160 [17] M.S. Costa, Absorption by Double-Centered D3-Branes and the Coulomb Branch of N = 4 SYM Theory hep-th/9912073 [18] Y.S. Myung, G. Kang and H.W. Lee, Greybody factor for D3-branes in B field hep-th/9911193 S-wave absorption of scalars by noncommutative D3-branes hep-th/9912288 [19] R. Manvelyan, H.J.W. Mueller-Kirsten, J.-Q. Liang, Y. Zhang, Absorption Cross Section of Scalar Field in Supergravity Background hep-th/0001179 [20] S.S. Gubser and I.R. Klebanov, Absorption by Branes and Schwinger Terms in the World Volume Theory 41 Phys. Lett., B 413 1997 Phys. Lett. B 413 (1997) 41 hep-th/9708005 [21] K. Intriligator, Maximally Supersymmetric RG Flows and AdS Duality hep-th/9909082 [22] S.S. Gubser, A. Hashimoto, I.R. Klebanov and M. Krasnitz, Scalar Absorption and the Breaking of the World Volume Conformal Invariance 393 Nucl. Phys., B 526 1998 Nucl. Phys. B 526 (1998) 393 hep-th/9803023 [23] S. Lee, S. Minwalla, M. Rangamani and N. Seiberg, Three-Point Functions of Chiral Operators in D=4, N = 4 SYM at Large N 697 Adv. Theor. Math. Phys. 2 1998 Adv. Theor. Math. Phys. 2 (1998) 697 hep-th/9806074 [24] E. D'Hoker, D.Z. Freedman and W. Skiba, Field Theory Tests for Correlators in the AdS/CFT Correspondence 045008 Phys. Rev., D 59 1999 Phys. Rev. D 59 (1999) 045008 hep-th/9807098 [25] F. Gonzalez-Rey, B. Kulik and I.Y. Park, Non-renormalization of two and three Point Correlators of N=4 SYM in N=1 Superspace 164 Phys. Lett., B 455 1999 Phys. Lett. B 455 (1999) 164 hep-th/9903094 [26] K. Intriligator, Bonus Symmetries of N=4 Super-Yang-Mills Correlation Functions via AdS Duality 575 Nucl. Phys., B 551 1999 Nucl. Phys. B 551 (1999) 575 hep-th/9811047 K. Intriligator and W. Skiba, Bonus Symmetry and the Operator Product Expansion of N=4 Super-Yang-Mills 165 Nucl. Phys., B 559 1999 Nucl. Phys. B 559 (1999) 165 hep-th/9905020 [27] B. Eden, P.S. Howe and P.C. West, Nilpotent invariants in N=4 SYM 19 Phys. Lett., B 463 1999 Phys. Lett. B 463 (1999) 19 hep-th/9905085 P.S. Howe, C. Schubert, E. Sokatchev and P.C. West, Explicit construction of nilpotent covariants in N=4 SYM hep-th/9910011 [28] A. Petkou and K. Skenderis, A non-renormalization theorem for conformal anomalies 100 Nucl. Phys., B 561 1999 Nucl. Phys. B 561 (1999) 100 hep-th/9906030 [29] M.R. Douglas, D. Kabat, P. Pouliot and S.H. Shenker, D-branes and Short Distances in String Theory 85 Nucl. Phys., B 485 1997 Nucl. Phys. B 485 (1997) 85 hep-th/9608024 [30] G. Lifschytz and S.D. Mathur, Supersymmetry and Membrane Interactions in M(atrix) Theory 621 Nucl. Phys., B 507 1997 Nucl. Phys. B 507 (1997) 621 hep-th/9612087 [31] J. Maldacena, Probing Near Extremal Black Holes with D-branes 3736 Phys. Rev., D 57 1998 Phys. Rev. D 57 (1998) 3736 hep-th/9705053 Branes probing black holes 17 Nucl. Phys. B, Proc. Suppl. 68 1998 Nucl. Phys. B, Proc. Suppl. 68 (1998) 17 hep-th/9709099 [32] I. Chepelev and A.A. Tseytlin, Interactions of type IIB D-branes from D-instanton matrix model 629 Nucl. Phys., B 511 1998 Nucl. Phys. B 511 (1998) 629 hep-th/9705120 Long-distance interactions of branes: correspondence between supergravity and super Yang-Mills descriptions 73 Nucl. Phys., B 515 1998 Nucl. Phys. B 515 (1998) 73 hep-th/9709087 A.A.
Tseytlin, Interactions Between Branes and Matrix Theories 99 Nucl. Phys. B, Proc. Suppl. 68 1998 Nucl. Phys. B, Proc. Suppl. 68 (1998) 99 hep-th/9709123 [33] M. Dine and N. Seiberg, Comments on Higher Derivative Operators in Some SUSY Field Theories 239 Phys. Lett., B 409 1997 Phys. Lett. B 409 (1997) 239 hep-th/9705057 [34] A.A. Tseytlin, On non-abelian generalisation of Born-Infeld action in string theory 41 Nucl. Phys., B 501 1997 Nucl. Phys. B 501 (1997) 41 hep-th/9701125 [35] S.S. Gubser and A. Hashimoto, Exact absorption probabilities for the D3-brane, Commun. Math. Phys. 203 (1999) 325 hep-th/9805140 [36] S.S. Gubser, Non-conformal examples of AdS/CFT 1081 Class. Quantum Gravity 17 2000 Class. Quantum Gravity 17 (2000) 1081 hep-th/9910117 eng Bollen, G Institut für Physik, Universität Mainz ISOLTRAP : a tandem Penning trap system for accurate on-line mass determination of short-lived isotopes SzGeCERN Detectors and Experimental Techniques Becker, S Kluge, H J Konig, M Moore, M Otto, T Raimbault-Hartmann, H Savard, G Schweikhard, L Stolzenberg, H ISOLDE Collaboration 1996 13 IS302 ISOLDE PPE CERN PS 1996-05-08 50 2001-12-14 BATCH 675-697 Nucl. Instrum. Methods Phys. Res., A 368 1996 n 199600 a1996 ARTICLE ISOLDEPAPER hep-th/0003291 eng McInnes, B National University of Singapore AdS/CFT For Non-Boundary Manifolds In its Euclidean formulation, the AdS/CFT correspondence begins as a study of Yang-Mills conformal field theories on the sphere, S^4. It has been successfully extended, however, to S^1 X S^3 and to the torus T^4. It is natural to hope that it can be made to work for any manifold on which it is possible to define a stable Yang-Mills conformal field theory. We consider a possible classification of such manifolds, and show how to deal with the most obvious objection: the existence of manifolds which cannot be represented as boundaries. We confirm Witten's suggestion that this can be done with the help of a brane in the bulk. LANL EDS SzGeCERN Particle Physics - Theory Brett McInnes <matmcinn@nus.edu.sg> http://invenio-software.org/download/invenio-demo-site-files/0003291.pdf http://invenio-software.org/download/invenio-demo-site-files/0003291.ps.gz 2000 13 McInnes, Brett 2000-04-03 50 2001-11-09 BATCH 025 J. High Energy Phys. 05 2000 SLAC 4356136 n 200014 ARTICLE SCAN-9605071 eng KEK-Preprint-95-196 TUAT-HEP-96-1 DPNU-96-04 Emi, K KEK Study of a dE/dx measurement and the gas-gain saturation by a prototype drift chamber for the BELLE-CDC Tsukuba KEK Jan 1996 20 p UNC9806 SzGeCERN Detectors and Experimental Techniques Tsukamoto, T Hirano, H Mamada, H Sakai, Y Uno, S Itami, S Kajikawa, R Nitoh, O Ohishi, N Sugiyama, A Suzuki, S Takahashi, T Tamagawa, Y Tomoto, M Yamaki, T library@kekvax.kek.jp http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-9605071.pdf http://invenio-software.org/download/invenio-demo-site-files/SCAN-9605071.tif 1996 13 1996-05-08 50 2001-12-14 BATCH 225 2 Nucl. Instrum. Methods Phys. Res., A 379 1996 SLAC 3328660 h 199620 ARTICLE hep-th/0003293 eng Smailagic, A University of Osijek Higher Dimensional Schwinger-like Anomalous Effective Action We construct the explicit form of the anomalous effective action, in arbitrary even dimension, for Abelian vector and axial gauge fields coupled to Dirac fermions. It turns out to be a surprisingly simple extension of the 2D Schwinger model effective action.
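The 2D benchmark that the abstract above refers to is the Schwinger model result that integrating out a massless Dirac fermion coupled to an Abelian gauge field produces a nonlocal but one-loop-exact effective action. A standard form in LaTeX (conventions, in particular the overall sign, depend on the metric signature):

\begin{equation}
  \Gamma[A] \;=\; \frac{e^2}{2\pi} \int d^2x\; A_\mu
  \left( g^{\mu\nu} - \frac{\partial^\mu \partial^\nu}{\Box} \right) A_\nu ,
\end{equation}

which is transverse, as gauge invariance requires, and is equivalent to a photon mass term with $m^2 = e^2/\pi$.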
LANL EDS LANLPUBL200104 SzGeCERN Particle Physics - Theory Spallucci, E spallucci@ts.infn.it http://invenio-software.org/download/invenio-demo-site-files/0003293.pdf http://invenio-software.org/download/invenio-demo-site-files/0003293.ps.gz CER n 200231 2000 13 2000-04-03 50 2001-11-09 BATCH 045010 Phys. Rev., D 62 2000 SLAC 4356152 ARTICLE [1] S.L. Adler 2426 Phys. Rev. 177 1969 Phys. Rev. 177 (1969) 2426 [2] S.E. Treiman, R. Jackiw, D.J. Gross, "Lectures on Current Algebra and its Applications", Princeton UP, Princeton NJ, (1972) [3] T. Berger, "Fermions in two (1+1)-dimensional anomalous gauge theories: the chiral Schwinger model and the chiral quantum gravity", Hamburg U. DESY-90-084, July 1990 [4] L. Rosenberg, Phys. Rev. 129 (1963) 2786 [5] R. Jackiw, "Topological Investigations of Quantized Gauge Theories" in Relativity, Groups and Topology, eds. B. deWitt and R. Stora (Elsevier, Amsterdam 1984) [6] M.T. Grisaru, N.K. Nielsen, W. Siegel, D. Zanon 157 Nucl. Phys., B 247 1984 Nucl. Phys. B 247 (1984) 157 [7] A.M. Polyakov 207 Phys. Lett., B 103 1981 Phys. Lett. B 103 (1981) 207 A.M. Polyakov 893 Mod. Phys. Lett., A 2 1987 Mod. Phys. Lett. A 2 (1987) 893 [8] R.J. Riegert 56 Phys. Lett. 134 1984 Phys. Lett. 134 (1984) 56 [9] K. Fujikawa 1195 Phys. Rev. Lett. 42 1979 Phys. Rev. Lett. 42 (1979) 1195 [10] B. deWitt, Relativity, Groups and Topology, Paris (1963); A.O. Barvinsky, G.A. Vilkovisky 1 Phys. Rep. 119 1985 Phys. Rep. 119 (1985) 1 [11] P.H. Frampton, T.W. Kephart 1343 Phys. Rev. Lett. 50 1983 Phys. Rev. Lett. 50 (1983) 1343 L. Alvarez-Gaume, E. Witten 269 Nucl. Phys. 234 1983 Nucl. Phys. 234 (1983) 269 [12] A. Smailagic, R.E. Gamboa-Saravi 145 Phys. Lett. 192 1987 Phys. Lett. 192 (1987) 145 A. Smailagic, E. Spallucci 17 Phys. Lett. 284 1992 Phys. Lett. 284 (1992) 17 hep-th/0003294 eng Matsubara, K Uppsala University Restrictions on Gauge Groups in Noncommutative Gauge Theory We show that the gauge groups SU(N), SO(N) and Sp(N) cannot be realized on a flat noncommutative manifold, while it is possible for U(N). LANL EDS LANLPUBL200104 SzGeCERN Particle Physics - Theory Keizo Matsubara <keizo.matsubara@teorfys.uu.se> http://invenio-software.org/download/invenio-demo-site-files/0003294.pdf http://invenio-software.org/download/invenio-demo-site-files/0003294.ps.gz CER n 200231 2000 13 Matsubara, Keizo 2000-04-03 50 2001-11-09 BATCH 417-419 Phys. Lett., B 482 2000 SLAC 4356160 ARTICLE [1] J. Polchinski, TASI Lectures on D-branes hep-th/9611050 [2] M.R. Douglas and C. Hull, D-branes and the Noncommutative torus 8 J. High Energy Phys. 2 1998 J. High Energy Phys. 2 (1998) 8 hep-th/9711165 [3] V. Schomerus, D-branes and Deformation Quantization hep-th/9903205 [4] N. Seiberg and E. Witten, String Theory and Noncommutative Geometry hep-th/9908142 hep-th/0003295 eng Wang, B Fudan University Quasinormal modes of Reissner-Nordstrom Anti-de Sitter Black Holes Complex frequencies associated with quasinormal modes for large Reissner-Nordstr$\ddot{o}$m Anti-de Sitter black holes have been computed. These frequencies have close relation to the black hole charge and do not linearly scale with the black hole temperature as in the Schwarzschild Anti-de Sitter case. In terms of AdS/CFT correspondence, we found that the bigger the black hole charge is, the quicker the approach to thermal equilibrium in the CFT. The properties of quasinormal modes for $l>0$ have also been studied.
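For the quasinormal-mode record above, the computation being summarized is the standard AdS boundary-value problem of the Horowitz-Hubeny type (reference [6] in the list that follows): a perturbation $\Psi \sim e^{-i\omega t}\psi(r)$ is required to be purely ingoing at the horizon and to vanish at the AdS boundary, which selects a discrete set of complex frequencies. Schematically, in LaTeX:

\begin{equation}
  \omega_n \;=\; \omega_R \;-\; \frac{i}{\tau_n},
\end{equation}

where $\tau_n = 1/|\mathrm{Im}\,\omega_n|$ sets the timescale on which the perturbation decays, that is, the approach to thermal equilibrium in the dual CFT that the abstract discusses.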
LANL EDS LANLPUBL200104 SzGeCERN Particle Physics - Theory Lin, C Y Abdalla, E Elcio Abdalla <eabdalla@fma.if.usp.br> http://invenio-software.org/download/invenio-demo-site-files/0003295.pdf http://invenio-software.org/download/invenio-demo-site-files/0003295.ps.gz http://invenio-software.org/download/invenio-demo-site-files/0003295.fig1.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig2.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig3.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig4.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig5.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig6a.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig6b.ps.gz Additional http://invenio-software.org/download/invenio-demo-site-files/0003295.fig7.ps.gz Additional CER n 200231 2000 13 Wang, Bin Lin, Chi-Yong Abdalla, Elcio 2000-04-03 50 2001-11-09 BATCH 79-88 Phys. Lett., B 481 2000 SLAC 4356179 ARTICLE [1] K. D. Kokkotas, B. G. Schmidt gr-qc/9909058 and references therein [2] W. Krivan 101501 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 101501 [3] S. Hod gr-qc/9902072 [4] P. R. Brady, C. M. Chambers, W. G. Laarakkers and E. Poisson 064003 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 064003 [5] P. R. Brady, C. M. Chambers, W. Krivan and P. Laguna 7538 Phys. Rev., D 55 1997 Phys. Rev. D 55 (1997) 7538 [6] G. T. Horowitz and V. E. Hubeny hep-th/9909056 G. T. Horowitz hep-th/9910082 [7] E. S. C. Ching, P. T. Leung, W. M. Suen and K. Young 2118 Phys. Rev., D 52 1995 Phys. Rev. D 52 (1995) 2118 [8] J. M. Maldacena 231 Adv. Theor. Math. Phys. 2 1998 Adv. Theor. Math. Phys. 2 (1998) 231 [9] E. Witten 253 Adv. Theor. Math. Phys. 2 1998 Adv. Theor. Math. Phys. 2 (1998) 253 [10] S. S. Gubser, I. R. Klebanov and A. M. Polyakov 105 Phys. Lett., B 428 1998 Phys. Lett. B 428 (1998) 105 [11] A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers 064018 Phys. Rev., D 60 1999 Phys. Rev. D 60 (1999) 064018 [12] E. W. Leaver 1238 J. Math. Phys. 27 1986 J. Math. Phys. 27 (1986) 1238 [13] E. W. Leaver 2986 Phys. Rev., D 41 1990 Phys. Rev. D 41 (1990) 2986 [14] C. O. Lousto 1733 Phys. Rev., D 51 1995 Phys. Rev. D 51 (1995) 1733 [15] O. Kaburaki 316 Phys. Lett., A 217 1996 Phys. Lett. A 217 (1996) 316 [16] R. K. Su, R. G. Cai and P. K. N. Yu 2932 Phys. Rev., D 50 1994 Phys. Rev. D 50 (1994) 2932 3473 Phys. Rev., D 48 1993 Phys. Rev. D 48 (1993) 3473 6186 Phys. Rev., D 52 1995 Phys. Rev. D 52 (1995) 6186 B. Wang, J. M. Zhu 1269 Mod. Phys. Lett., A 10 1995 Mod. Phys. Lett. A 10 (1995) 1269 [17] A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev. D 60 (1999) 104026 rus Пушкин, А С Медный всадник <!--HTML-->На берегу пустынных волн, <br /> Стоял он, дум великих полн, <br /> И вдаль глядел. Пред ним широко<br /> Река неслася; бедный чёлн<br /> По ней стремился одиноко. <br /> По мшистым, топким берегам<br /> Чернели избы здесь и там, <br /> Приют убогого чухонца; <br /> И лес, неведомый лучам<br /> В тумане спрятанного солнца, <br /> Кругом шумел. 1833 1990-01-27 00 2002-04-12 BATCH POETRY gre Καβάφης, Κ Π Ιθάκη <!--HTML-->Σα βγεις στον πηγαιμό για την Ιθάκη, <br /> να εύχεσαι νάναι μακρύς ο δρόμος, <br /> γεμάτος περιπέτειες, γεμάτος γνώσεις.
<br /> Τους Λαιστρυγόνας και τους Κύκλωπας, <br /> τον θυμωμένο Ποσειδώνα μη φοβάσαι, <br /> τέτοια στον δρόμο σου ποτέ σου δεν θα βρείς, <br /> αν μέν' η σκέψις σου υψηλή, αν εκλεκτή<br /> συγκίνησις το πνεύμα και το σώμα σου αγγίζει. <br /> Τους Λαιστρυγόνας και τους Κύκλωπας, <br /> τον άγριο Ποσειδώνα δεν θα συναντήσεις, <br /> αν δεν τους κουβανείς μες στην ψυχή σου, <br /> αν η ψυχή σου δεν τους στήνει εμπρός σου. <br /> <br /> Να εύχεσαι νάναι μακρύς ο δρόμος. <br /> Πολλά τα καλοκαιρινά πρωϊά να είναι<br /> που με τι ευχαρίστησι, με τι χαρά<br /> θα μπαίνεις σε λιμένας πρωτοειδωμένους· <br /> να σταματήσεις σ' εμπορεία Φοινικικά, <br /> και τες καλές πραγμάτειες ν' αποκτήσεις, <br /> σεντέφια και κοράλλια, κεχριμπάρια κ' έβενους, <br /> και ηδονικά μυρωδικά κάθε λογής, <br /> όσο μπορείς πιο άφθονα ηδονικά μυρωδικά· <br /> σε πόλεις Αιγυπτιακές πολλές να πας, <br /> να μάθεις και να μάθεις απ' τους σπουδασμένους. <br /> <br /> Πάντα στον νου σου νάχεις την Ιθάκη. <br /> Το φθάσιμον εκεί είν' ο προορισμός σου. <br /> Αλλά μη βιάζεις το ταξίδι διόλου. <br /> Καλλίτερα χρόνια πολλά να διαρκέσει· <br /> και γέρος πια ν' αράξεις στο νησί, <br /> πλούσιος με όσα κέρδισες στον δρόμο, <br /> μη προσδοκώντας πλούτη να σε δώσει η Ιθάκη. <br /> <br /> Η Ιθάκη σ' έδωσε το ωραίο ταξίδι. <br /> Χωρίς αυτήν δεν θάβγαινες στον δρόμο. <br /> Αλλο δεν έχει να σε δώσει πια. <br /> <br /> Κι αν πτωχική την βρεις, η Ιθάκη δεν σε γέλασε. <br /> Ετσι σοφός που έγινες, με τόση πείρα, <br /> ήδη θα το κατάλαβες οι Ιθάκες τι σημαίνουν. 1911 2005-03-02 00 2005-03-02 BATCH POETRY SzGeCERN 2345180CERCER SLAC 5278333 hep-th/0210114 eng Klebanov, Igor R Princeton University AdS Dual of the Critical O(N) Vector Model 2002 11 Oct 2002 11 p We suggest a general relation between theories of an infinite number of higher-spin massless gauge fields in $AdS_{d+1}$ and large $N$ conformal theories in $d$ dimensions containing $N$-component vector fields. In particular, we propose that the singlet sector of the well-known critical 3-d O(N) model with the $(\phi^a \phi^a)^2$ interaction is dual, in the large $N$ limit, to the minimal bosonic theory in $AdS_4$ containing massless gauge fields of even spin. LANL EDS SIS LANLPUBL2003 SIS:2003 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Polyakov, A M 213-219 Phys. Lett. B 550 2002 http://invenio-software.org/download/invenio-demo-site-files/0210114.pdf http://invenio-software.org/download/invenio-demo-site-files/0210114.ps.gz klebanov@feynman.princeton.edu n 200242 13 20060826 0012 CER01 20021014 PUBLIC 002345180CER ARTICLE DRAFT [1] G. 't Hooft, "A planar diagram theory for strong interactions," Nucl. Phys. B 72 (1974) 461 [2] A.M. Polyakov, "String theory and quark confinement," Nucl. Phys. B, Proc. Suppl. 68 (1998) 1 hep-th/9711002; "The wall of the cave," hep-th/9809057 [3] J. Maldacena, "The large N limit of superconformal field theories and supergravity," Adv. Theor. Math. Phys. 2 (1998) 231 hep-th/9711200 [4] S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, "Gauge theory correlators from non-critical string theory," Phys. Lett. B 428 (1998) 105 hep-th/9802109 [5] E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [6] E. Brezin, D.J. Wallace, Phys. Rev. B 7 (1973) 1976 [7] K.G. Wilson and J. Kogut, "The Renormalization Group and the Epsilon Expansion," Phys. Rep. 12 (1974) 75 [8] C. Fronsdal, Phys. Rev. D 18 (1978) 3624 [9] E. Fradkin and M. Vasiliev, Phys. Lett.
B 189 (1987) 89; Nucl. Phys. B 291 (1987) 141 [10] M.A. Vasiliev, "Higher Spin Gauge Theories: Star Product and AdS Space," hep-th/9910096 [11] A. M. Polyakov, "Gauge fields and space-time," hep-th/0110196 [12] P. Haggi-Mani and B. Sundborg, "Free Large N Supersymmetric Yang-Mills Theory as a String Theory," hep-th/0002189; B. Sundborg, "Stringy Gravity, Interacting Tensionless Strings and Massless Higher Spins," hep-th/0103247 [13] E. Witten, Talk at the John Schwarz 60-th Birthday Symposium, http://theory.caltech.edu/jhs60/witten/1.html [14] E. Sezgin and P. Sundell, "Doubletons and 5D Higher Spin Gauge Theory," hep-th/0105001 [15] A. Mikhailov, "Notes On Higher Spin Symmetries," hep-th/0201019 [16] E. Sezgin and P. Sundell, "Analysis of Higher Spin Field Equations in Four Dimensions," hep-th/0205132; J. Engquist, E. Sezgin, P. Sundell, "On N=1,2,4 Higher Spin Gauge Theories in Four Dimensions," hep-th/0207101 [17] M. Vasiliev, "Higher Spin Gauge Theories in Four, Three and Two Dimensions," Int. J. Mod. Phys. D 5 (1996) 763 hep-th/9611024 [18] I.R. Klebanov and E. Witten, "AdS/CFT correspondence and Symmetry Breaking," Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [19] O. Aharony, M. Berkooz, E. Silverstein, "Multiple Trace Operators and Nonlocal String Theories," J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [20] E. Witten, "Multi-Trace Operators, Boundary Conditions, And AdS/CFT Correspondence," hep-th/0112258 [21] M. Berkooz, A. Sever and A. Shomer, "Double Trace Deformations, Boundary Conditions and Space-time Singularities," J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [22] S.S. Gubser and I. Mitra, "Double-Trace Operators and One-Loop Vacuum Energy in AdS/CFT," hep-th/0210093 [23] I.R. Klebanov, "Touching Random Surfaces and Liouville Gravity," Phys. Rev. D 51 (1995) 1836 hep-th/9407167; I.R. Klebanov and A. Hashimoto, "Non-Perturbative Solution of Matrix Models Modified by Trace-Squared Terms," Nucl. Phys. B 434 (1995) 264 hep-th/9409064 [24] A.M. Polyakov, "Non-Hamiltonian Approach to Quantum Field Theory at Small Distances," Zh. Eksp. Teor. Fiz. 66 (1974) 23 [25] E. D'Hoker, D. Z. Freedman, S. Mathur, A. Matusis and L. Rastelli, "Graviton exchange and complete 4-point functions in the AdS/CFT correspondence," hep-th/9903196 [26] For a review with a comprehensive set of references, see E. D'Hoker and D.Z. Freedman, "Supersymmetric Gauge Theories and the AdS/CFT Correspondence," hep-th/0201253 2292727CERCER SLAC 4828445 UNCOVER 1021768628 hep-th/0201100 eng DSF-2002-2 Mück, W INFN Università di Napoli An improved correspondence formula for AdS/CFT with multi-trace operators 2002 Napoli Napoli Univ. 15 Jan 2002 6 p An improved correspondence formula is proposed for the calculation of correlation functions of a conformal field theory perturbed by multi-trace operators from the analysis of the dynamics of the dual field theory in Anti-de Sitter space. The formula reduces to the usual AdS/CFT correspondence formula in the case of single-trace perturbations. LANL EDS SIS ING2002 SIS:2002 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory 301-304 3-4 Phys. Lett. B 531 2002 http://invenio-software.org/download/invenio-demo-site-files/0201100.pdf http://invenio-software.org/download/invenio-demo-site-files/0201100.ps.gz wolfgang.mueck@na.infn.it n 200204 13 20060826 0008 CER01 20020128 PUBLIC 002292727CER ARTICLE [1] E. Witten, hep-th/0112258 [2] S. S. Gubser, I. R.
Klebanov and A. M. Polyakov, Phys. Lett. B 428 (1998) 105 hep-th/9802109 [3] E. Witten, Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [4] P. Breitenlohner and D. Z. Freedman, Ann. Phys. (San Diego) 144 (1982) 249 [5] I. R. Klebanov and E. Witten, Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [6] W. Mück and K. S. Viswanathan, Phys. Rev. D 60 (1999) 081901 hep-th/9906155 [7] W. Mück, Nucl. Phys. B 620 (2002) 477 hep-th/0105270 [8] M. Bianchi, D. Z. Freedman and K. Skenderis, J. High Energy Phys. 08 (2001) 041 hep-th/0105276 SzGeCERN 2307939CERCER SLAC 4923022 hep-th/0205061 eng DSF-2002-11 QMUL-PH-2002-11 Martelli, D University of London Holographic Renormalization and Ward Identities with the Hamilton-Jacobi Method 2003 Napoli Napoli Univ. 7 May 2002 31 p A systematic procedure for performing holographic renormalization, which makes use of the Hamilton-Jacobi method, is proposed and applied to a bulk theory of gravity interacting with a scalar field and a U(1) gauge field in the Stueckelberg formalism. We describe how the power divergences are obtained as solutions of a set of "descent equations" stemming from the radial Hamiltonian constraint of the theory. In addition, we isolate the logarithmic divergences, which are closely related to anomalies. The method also allows one to determine the exact one-point functions of the dual field theory. Using the other Hamiltonian constraints of the bulk theory, we derive the Ward identities for diffeomorphisms and gauge invariance. In particular, we demonstrate the breaking of U(1)_R current conservation, recovering the holographic chiral anomaly recently discussed in hep-th/0112119 and hep-th/0202056. LANL EDS SIS LANLPUBL2004 SIS:2004 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Mück, W Martelli, Dario Mueck, Wolfgang 248-276 Nucl. Phys. B 654 2003 http://invenio-software.org/download/invenio-demo-site-files/0205061.pdf http://invenio-software.org/download/invenio-demo-site-files/0205061.ps.gz d.martelli@qmul.ac.uk n 200219 13 20060823 0005 CER01 20020508 PUBLIC 002307939CER ARTICLE [1] J. M. Maldacena, Adv. Theor. Math. Phys. 2 (1998) 231 hep-th/9711200 [2] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B 428 (1998) 105 hep-th/9802109 [3] E. Witten, Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [4] E. D'Hoker and D. Z. Freedman, hep-th/0201253 [5] W. Mück and K. S. Viswanathan, Phys. Rev. D 58 (1998) 041901 hep-th/9804035 [6] D. Z. Freedman, S. D. Mathur, A. Matusis and L. Rastelli, Nucl. Phys. B 546 (1998) 96 hep-th/9812032 [7] H. Liu and A. A. Tseytlin, Nucl. Phys. B 533 (1998) 88 hep-th/9804083 [8] M. Henningson and K. Skenderis, J. High Energy Phys. 07 (1998) 023 hep-th/9806087 [9] J. D. Brown and J. W. York, Phys. Rev. D 47 (1993) 1407 [10] V. Balasubramanian and P. Kraus, Commun. Math. Phys. 208 (1999) 413 hep-th/9902121 [11] R. C. Myers, Phys. Rev. D 60 (1999) 046002 hep-th/9903203 [12] R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev. D 60 (1999) 104001 hep-th/9903238 [13] S. de Haro, K. Skenderis and S. N. Solodukhin, Commun. Math. Phys. 217 (2000) 595 hep-th/0002230 [14] M. Bianchi, D. Z. Freedman and K. Skenderis, hep-th/0112119 [15] M. Bianchi, D. Z. Freedman and K. Skenderis, J. High Energy Phys. 08 (2001) 041 hep-th/0105276 [16] J. de Boer, E. Verlinde and H. Verlinde, J. High Energy Phys. 08 (2000) 003 hep-th/9912012 [17] J. Kalkkinen, D. Martelli and W. Mück, J. High Energy Phys. 04 (2001) 036 hep-th/0103111 [18] S. Corley, Phys. Lett.
B 484 (2000) 141 hep-th/0004030 [19] J. Kalkkinen and D. Martelli, Nucl. Phys. B 596 (2001) 415 hep-th/0007234 [20] M. Bianchi, O. DeWolfe, D. Z. Freedman and K. Pilch, J. High Energy Phys. 01 (2001) 021 hep-th/0009156 [21] I. R. Klebanov, P. Ouyang and E. Witten, Phys. Rev. D 65 (2002) 105007 hep-th/0202056 [22] C. Fefferman and C. R. Graham, in Elie Cartan et les Mathématiques d'aujourd'hui, Astérisque, p. 95 (1985). [23] D. Martelli and A. Miemiec, J. High Energy Phys. 04 (2002) 027 hep-th/0112150 [24] S. Ferrara and A. Zaffaroni, hep-th/9908163 [25] J. Parry, D. S. Salopek and J. M. Stewart, Phys. Rev. D 49 (1994) 2872 gr-qc/9310020 [26] B. Darian, Class. Quantum Gravity 15 (1998) 143 gr-qc/9707046 [27] V. L. Campos, G. Ferretti, H. Larsson, D. Martelli and B. E. W. Nilsson, J. High Energy Phys. 0006 (2000) 023 hep-th/0003151 [28] L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, Nucl. Phys. B 569 (2000) 451 hep-th/9909047 [29] W. Mück, hep-th/0201100 [30] W. Mück and K. S. Viswanathan, Phys. Rev. D 58 (1998) 106006 hep-th/9805145 [31] M. M. Taylor-Robinson, hep-th/0002125 [32] C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, Freeman, San Francisco (1973). SzGeCERN 2327507CERCER SLAC 5004500 hep-th/0207111 eng BROWN-HEP-1309 Ramgoolam, S Brown University Higher dimensional geometries related to fuzzy odd-dimensional spheres 2002 Providence, RI Brown Univ. 11 Jul 2002 32 p We study $SO(m)$ covariant Matrix realizations of $ \sum_{i=1}^{m} X_i^2 = 1 $ for even $m$ as candidate fuzzy odd spheres following hep-th/0101001. As for the fuzzy four sphere, these Matrix algebras contain more degrees of freedom than the sphere itself and the full set of variables has a geometrical description in terms of a higher dimensional coset. The fuzzy $S^{2k-1} $ is related to a higher dimensional coset $ {SO(2k) \over U(1) \times U(k-1)}$. These cosets are bundles where base and fibre are Hermitian symmetric spaces. The detailed form of the generators and relations for the Matrix algebras related to the fuzzy three-spheres suggests Matrix actions which admit the fuzzy spheres as solutions. These Matrix actions are compared with the BFSS, IKKT and BMN Matrix models as well as some others. The geometry and combinatorics of fuzzy odd spheres lead to some remarks on the transverse five-brane problem of Matrix theories and the exotic scaling of the entropy of 5-branes with the brane number. LANL EDS SIS:2003 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Ramgoolam, Sanjaye 064 J. High Energy Phys. 10 2002 http://invenio-software.org/download/invenio-demo-site-files/0207111.pdf http://invenio-software.org/download/invenio-demo-site-files/0207111.ps.gz ramgosk@het.brown.edu n 200228 13 20070205 2036 CER01 20020712 PUBLIC 002327507CER ARTICLE [1] D. Kabat and W. Taylor, "Spherical membranes in Matrix theory," Adv. Theor. Math. Phys. 2 (1998) 181 hep-th/9711078 [2] J. Castelino, S. Lee and W. Taylor IV, "Longitudinal Five-Branes as Four Spheres in Matrix Theory," Nucl. Phys. B 526 (1998) 334 hep-th/9712105 [3] R. Myers, "Dielectric-Branes," hep-th/9910053 [4] N. Constable, R. Myers, O. Tafjord, "Non-abelian Brane intersections," J. High Energy Phys. 0106 (2001) 023 hep-th/0102080 [5] D. Berenstein, J. Maldacena, H. Nastase, "Strings in flat space and pp waves from N = 4 Super Yang Mills," J. High Energy Phys. 0204 (2002) 013 hep-th/0202021 [6] J. Maldacena, A. Strominger, "AdS3 Black Holes and a Stringy Exclusion Principle," hep-th/9804085, J.
High Energy Phys. 9812 (1998) 005 [7] A. Jevicki, S. Ramgoolam, "Non commutative gravity from the AdS/CFT correspondence," J. High Energy Phys. 9904 (1999) 032 hep-th/9902059 [8] P.M. Ho, M. Li, "Fuzzy Spheres in AdS/CFT Correspondence and Holography from Noncommutativity," Nucl. Phys. B 596 (2001) 259 hep-th/0004072 [9] M. Berkooz, H. Verlinde, "Matrix Theory, AdS/CFT and Higgs-Coulomb Equivalence," J. High Energy Phys. 9911 (1999) 037 hep-th/9907100 [10] Z. Guralnik, S. Ramgoolam, "On the Polarization of Unstable D0-Branes into Non-Commutative Odd Spheres," J. High Energy Phys. 0102 (2001) 032 hep-th/0101001 [11] S. Ramgoolam, "On spherical harmonics for fuzzy spheres in diverse dimensions," Nucl. Phys. B 610 (2001) 461 hep-th/0105006 [12] P.M. Ho, S. Ramgoolam, "Higher dimensional geometries from matrix brane constructions," Nucl. Phys. B 627 (2002) 266 hep-th/0111278 [13] Y. Kimura, "Noncommutative Gauge Theory on Fuzzy Four-Sphere and Matrix Model," hep-th/0204256 [14] S.C. Zhang, J. Hu, "A Four Dimensional Generalization of the Quantum Hall Effect," Science 294 (2001) 823 cond-mat/0110572 [15] M. Fabinger, "Higher-Dimensional Quantum Hall Effect in String Theory," J. High Energy Phys. 0205 (2002) 037 hep-th/0201016 [16] A.P. Balachandran, "Quantum Spacetimes in the Year 1," hep-th/0203259 [17] A. Salam, J. Strathdee, "On Kaluza-Klein Theory," Ann. Phys. 141 (1982) 216 [18] N.L. Wallach, "Harmonic Analysis on homogeneous spaces," M. Dekker Inc., NY, 1973 [19] Y. Kazama, H. Suzuki, "New N = 2 superconformal field theories and superstring compactification," Nucl. Phys. B 321 (1989) 232 [20] M. Kramer, "Some remarks suggesting an interesting theory of harmonic functions on SU(2n + 1)/Sp(n) and SO(2n + 1)/U(n)," Arch. Math. 33 (1979/80), 76-79. [21] P.M. Ho, "Fuzzy sphere from Matrix model," J. High Energy Phys. 0012 (2000) 015 hep-th/0110165 [22] T. Banks, W. Fischler, S. Shenker, L. Susskind, "M-Theory as a Matrix model: A conjecture," Phys. Rev. D 55 (1997) 5112 hep-th/9610043 [23] N. Ishibashi, H. Kawai, Y. Kitazawa, A. Tsuchiya, "A large-N reduced model as Superstring," Nucl. Phys. B 498 (1997) 467 [24] V. Periwal, "Matrices on a point as the theory of everything," Phys. Rev. D 55 (1997) 1711 [25] S. Chaudhuri, "Bosonic Matrix Theory and D-branes," hep-th/0205306 [26] M. Bagnoud, L. Carlevaro, A. Bilal, "Supermatrix models for M-theory based on osp(1|32,R)," hep-th/0201183 [27] L. Smolin, "M theory as a matrix extension of Chern Simons theory," Nucl. Phys. B 591 (2000) 227 hep-th/0002009 [28] I. Bandos, J. Lukierski, "New superparticle models outside the HLS supersymmetry scheme," hep-th/9812074 [29] S. Iso, Y. Kimura, K. Tanaka, K. Wakatsuki, "Noncommutative Gauge Theory on Fuzzy Sphere from Matrix Model," hep-th/0101102 [30] W. Fulton and G. Harris, "Representation theory," Springer Verlag 1991. [31] M. Atiyah and E. Witten, "M-Theory Dynamics On A Manifold Of G2 Holonomy," hep-th/0107177 [32] S. Ramgoolam, D. Waldram, "Zero branes on a compact orbifold," J. High Energy Phys. 9807 (1998) 009 hep-th/9805191 [33] Brian R. Greene, C.I. Lazaroiu, Piljin Yi, "D Particles on T4/Z(N) Orbifolds and their resolutions," Nucl. Phys. B 539 (1999) 135 hep-th/9807040 [34] I. Klebanov, A. Tseytlin, "Entropy of Near-Extremal Black p-branes," Nucl. Phys. B 475 (1996) 164 hep-th/9604089 SzGeCERN 2341644CERCER SLAC 5208424 hep-th/0209226 eng PUTP-2002-48 SLAC-PUB-9504 SU-ITP-2002-36 Adams, A Stanford University Decapitating Tadpoles 2002 Beijing Beijing Univ. Dept. Phys.
26 Sep 2002 31 p We argue that perturbative quantum field theory and string theory can be consistently modified in the infrared to eliminate, in a radiatively stable manner, tadpole instabilities that arise after supersymmetry breaking. This is achieved by deforming the propagators of classically massless scalar fields and the graviton so as to cancel the contribution of their zero modes. In string theory, this modification of propagators is accomplished by perturbatively deforming the world-sheet action with bi-local operators similar to those that arise in double-trace deformations of AdS/CFT. This results in a perturbatively finite and unitary S-matrix (in the case of string theory, this claim depends on standard assumptions about unitarity in covariant string diagrammatics). The S-matrix is parameterized by arbitrary scalar VEVs, which exacerbates the vacuum degeneracy problem. However, for generic values of these parameters, quantum effects produce masses for the nonzero modes of the scalars, lifting the fluctuating components of the moduli. LANL EDS SzGeCERN Particle Physics - Theory PREPRINT LANL EDS High Energy Physics - Theory McGreevy, J Silverstein, E Adams, Allan McGreevy, John Silverstein, Eva http://invenio-software.org/download/invenio-demo-site-files/0209226.pdf http://invenio-software.org/download/invenio-demo-site-files/0209226.ps.gz evas@slac.stanford.edu n 200239 11 20060218 0013 CER01 20020927 PUBLIC 002341644CER PREPRINT [1] W. Fischler and L. Susskind, "Dilaton Tadpoles, String Condensates And Scale Invariance," Phys. Lett. B 171 (1986) 383 [2] W. Fischler and L. Susskind, "Dilaton Tadpoles, String Condensates And Scale Invariance. 2," Phys. Lett. B 173 (1986) 262 [3] C. G. Callan, C. Lovelace, C. R. Nappi and S. A. Yost, "Loop Corrections To Superstring Equations Of Motion," Nucl. Phys. B 308 (1988) 221 [4] H. Ooguri and N. Sakai, "String Multiloop Corrections To Equations Of Motion," Nucl. Phys. B 312 (1989) 435 [5] J. Polchinski, "Factorization Of Bosonic String Amplitudes," Nucl. Phys. B 307 (1988) 61 [6] H. La and P. Nelson, "Effective Field Equations For Fermionic Strings," Nucl. Phys. B 332 (1990) 83 [7] O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace operators and non-local string theories," J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [8] O. Aharony, M. Berkooz and E. Silverstein, "Non-local string theories on AdS3 × S3 and stable non-supersymmetric backgrounds," Phys. Rev. D 65 (2002) 106007 hep-th/0112178 [9] N. Arkani-Hamed, S. Dimopoulos, G. Dvali, G. Gabadadze, to appear. [10] E. Witten, "Strong Coupling Expansion Of Calabi-Yau Compactification," Nucl. Phys. B 471 (1996) 135 hep-th/9602070 [11] O. Aharony and T. Banks, "Note on the Quantum Mechanics of M theory," J. High Energy Phys. 9903 (1999) 016 hep-th/9812237 [12] T. Banks, "On isolated vacua and background independence," arXiv:hep-th/0011255 [13] R. Bousso and J. Polchinski, "Quantization of four-form fluxes and dynamical neutralization of the cosmological constant," J. High Energy Phys. 0006 (2000) 006 hep-th/0004134 [14] S. B. Giddings, S. Kachru and J. Polchinski, "Hierarchies from fluxes in string compactifications," arXiv:hep-th/0105097 [15] A. Maloney, E. Silverstein and A. Strominger, "De Sitter space in noncritical string theory," arXiv:hep-th/0205316 [16] S. Kachru and E. Silverstein, "4d conformal theories and strings on orbifolds," Phys. Rev. Lett. 80 (1998) 4855 hep-th/9802183 [17] A. E. Lawrence, N. Nekrasov and C.
Vafa, "On conformal field theories in four di-mensions," Nucl. Phys. B 533 (1998) 199 hep-th/9803015 [18] M. Bershadsky, Z. Kakushadze and C. Vafa, "String expansion Ann. Sci. large N expansion of gauge theories," Nucl. Phys. B 523 (1998) 59 hep-th/9803076 [19] I. R. Klebanov and Astron. Astrophys. Tseytlin, "A non-supersymmetric large N CFT from type 0 string theory," J. High Energy Phys. 9903 (1999) 015 hep-th/9901101 [20] E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspon-dence," arXiv hep-th/0112258 [21] M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary condi-tions and spacetime singularities," J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [22] A. Adams and E. Silverstein, "Closed string tachyons, AdS/CFT, and large N QCD," Phys. Rev. D 64 (2001) 086001 hep-th/0103220 [23] Astron. Astrophys. Tseytlin and K. Zarembo, "Effective potential in non-supersymmetric SU(N) x SU(N) gauge theory and interactions of type 0 D3-branes," Phys. Lett. B 457 (1999) 77 hep-th/9902095 [24] M. Strassler, to appear [25] V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity," Commun. Math. Phys. 208 (1999) 413 hep-th/9902121 [26] S. Thomas, in progress. [27] O. Aharony, M. Fabinger, G. T. Horowitz and E. Silverstein, "Clean time-dependent string backgrounds from bubble baths," J. High Energy Phys. 0207 (2002) 007 hep-th/0204158 [28] G. Dvali, G. Gabadadze and M. Shifman, "Diluting cosmological constant in infinite volume extra dimensions," arXiv hep-th/0202174 [29] D. Friedan, "A tentative theory of large distance Physics," arXiv hep-th/0204131 [30] G. Dvali, G. Gabadadze and M. Shifman, "Diluting cosmological constant via large distance modification of gravity," arXiv hep-th/0208096 [31] J. W. Moffat, arXiv hep-th/0207198 [32] Astron. Astrophys. Tseytlin, "On ’Macroscopic String’ Approximation In String Theory," Phys. Lett. B 251 (1990) 530 [33] B. Zwiebach, "Closed string field theory Quantum action and the B-V master equa-tion," Nucl. Phys. B 390 (1993) 33 hep-th/9206084 [34] J. Polchinski, "String Theory. Vol. 1 An Introduction To The Bosonic String," Cam-bridge, UK Univ. Phys. Rev. (1998) 402 p. [35] S. Kachru, X. Liu, M. B. Schulz and S. P. Trivedi, "Supersymmetry changing bubbles in string theory," arXiv hep-th/0205108 [36] A. R. Frey and J. Polchinski, "N = 3 warped compactifications," Phys. Rev. D 65 (2002) 126009 hep-th/0201029 [37] A. Adams, O. Aharony, J. McGreevy, E. Silverstein,..., work in progress SzGeCERN 2342206CERCER SLAC 5224543 hep-th/0209257 eng Berkooz, M The Weizmann Inst. of Science Double Trace Deformations, Infinite Extra Dimensions and Supersymmetry Breaking 2002 29 Sep 2002 22 p It was recently shown how to break supersymmetry in certain $AdS_3$ spaces, without destabilizing the background, by using a ``double trace'' deformation which localizes on the boundary of space-time. By viewing spatial sections of $AdS_3$ as a compactification space, one can convert this into a SUSY breaking mechanism which exists uniformly throughout a large 3+1 dimensional space-time, without generating any dangerous tadpoles. This is a generalization of a Visser type infinite extra dimensions compactification. Although the model is not Lorentz invariant, the dispersion relation is relativistic at high enough momenta, and it can be arranged such that at the same kinematical regime the energy difference between between former members of a SUSY multiplet is large. 
LANL EDS SzGeCERN Particle Physics - Theory PREPRINT LANL EDS High Energy Physics - Theory Berkooz, Micha http://invenio-software.org/download/invenio-demo-site-files/0209257.pdf http://invenio-software.org/download/invenio-demo-site-files/0209257.ps.gz berkooz@wisemail.weizmann.ac.il n 200240 11 20060603 0013 CER01 20021001 PUBLIC 002342206CER PREPRINT [1] T. Banks, "Cosmological Breaking of Supersymmetry? Or Little Lambda Goes Back to the Future 2.", hep-th/0007146 [2] J. Brown and C. Teitelboim, Phys. Lett. B 195 (1987) 177; Nucl. Phys. B 297 (1988) 787; R. Bousso and J. Polchinski, "Quantization of Four-form Fluxes and Dynamical Neutralization of the Cosmological Constant", J. High Energy Phys. 0006 (2000) 006 hep-th/0004134; J.L. Feng, J. March-Russell, S. Sethi and F. Wilczek, "Saltatory Relaxation of the Cosmological Constant", Nucl. Phys. B 602 (2001) 307 hep-th/0005276 [3] E. Witten, "Strong Coupling and the Cosmological Constant", Mod. Phys. Lett. A 10 (1995) 2153 hep-th/9506101 [4] A. Maloney, E. Silverstein and A. Strominger, "de Sitter Space in Non-Critical String Theory", hep-th/0205316, Hawking Festschrift. [5] S. Kachru and E. Silverstein, "On Vanishing Two Loop Cosmological Constant in Nonsupersymmetric Strings", J. High Energy Phys. 9901 (1999) 004 hep-th/9810129 [6] S. Kachru, M. Schulz and E. Silverstein, "Self-tuning flat domain walls in 5d gravity and string theory", Phys. Rev. D 62 (2000) 045021 hep-th/0001206 [7] V.A. Rubakov and M.E. Shaposhnikov, "Extra Space-time Dimensions: Towards a Solution to the Cosmological Constant Problem", Phys. Lett. B 125 (1983) 139 [8] G. Dvali, G. Gabadadze and M. Shifman, "Diluting Cosmological Constant In Infinite Volume Extra", hep-th/0202174 [9] G.W. Moore, "Atkin-Lehner Symmetry", Nucl. Phys. B 293 (1987) 139; Erratum: Nucl. Phys. B 299 (1988) 847 [10] K. Akama, "Pregeometry", Lect. Notes Phys. 176 (1982) 267 hep-th/0001113; also in *Nara 1982, Proceedings, Gauge Theory and Gravitation*, 267-271. [11] M. Visser, "An Exotic Class of Kaluza-Klein Models", Phys. Lett. B 159 (1985) 22 hep-th/9910093 [12] L. Randall and R. Sundrum, "An Alternative to Compactification", Phys. Rev. Lett. 83 (1999) 4690 hep-th/9906064 [13] A. Adams and E. Silverstein, "Closed String Tachyons, AdS/CFT and Large N QCD", Phys. Rev. D 64 (2001) 086001 hep-th/0103220 [14] O. Aharony, M. Berkooz and E. Silverstein, "Multiple-Trace Operators and Non-Local String Theories", J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [15] O. Aharony, M. Berkooz and E. Silverstein, "Non-local String Theories on AdS3 × S3 and non-supersymmetric backgrounds", Phys. Rev. D 65 (2002) 106007 hep-th/0112178 [16] M. Berkooz, A. Sever and A. Shomer, "Double-trace Deformations, Boundary Conditions and Space-time Singularities", J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [17] E. Witten, "Multi-Trace Operators, Boundary Conditions, And AdS/CFT Correspondence", hep-th/0112258 [18] A. Sever and A. Shomer, "A Note on Multi-trace Deformations and AdS/CFT", J. High Energy Phys. 0207 (2002) 027 hep-th/0203168 [19] J. Maldacena, "The large N limit of superconformal field theories and supergravity," Adv. Theor. Math. Phys. 2 (1998) 231 hep-th/9711200; Int. J. Theor. Phys. 38 (1998) 1113 [20] E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [21] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge theory correlators from non-critical string theory," hep-th/9802109, Phys. Lett.
B 428 (1998) 105 [22] O. Aharony, S.S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, "Large N Field Theories, String Theory and Gravity", Phys. Rep. 323 (2000) 183 hep-th/9905111 [23] A. Giveon, D. Kutasov and N. Seiberg, "Comments on string theory on AdS3," hep-th/9806194, Adv. Theor. Math. Phys. 2 (1998) 733 [24] J. Maldacena, J. Michelson and A. Strominger, "Anti-de Sitter Fragmentation", J. High Energy Phys. 9902 (1999) 011 hep-th/9812073 [25] N. Seiberg and E. Witten, "The D1/D5 System And Singular CFT", J. High Energy Phys. 9904 (1999) 017 hep-th/9903224 [26] J. Maldacena and H. Ooguri, "Strings in AdS3 and the SL(2, R) WZW Model. Part 1: The Spectrum", J. Math. Phys. 42 (2001) 2929 hep-th/0001053 [27] I. R. Klebanov and E. Witten, "AdS/CFT correspondence and symmetry breaking," Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [28] R. Kallosh, A.D. Linde, S. Prokushkin and M. Shmakova, "Gauged Supergravities, de Sitter space and Cosmology", Phys. Rev. D 65 (2002) 105016 hep-th/0110089 [29] R. Kallosh, "Supergravity, M-Theory and Cosmology", hep-th/0205315 [30] R. Kallosh, A.D. Linde, S. Prokushkin and M. Shmakova, "Supergravity, Dark Energy and the Fate of the Universe", hep-th/0208156 [31] C.M. Hull and N.P. Warner, "Non-compact Gauging from Higher Dimensions", Class. Quantum Gravity 5 (1988) 1517 [32] P. Kraus and E.T. Tomboulis, "Photons and Gravitons as Goldstone Bosons, and the Cosmological Constant", Phys. Rev. D 66 (2002) 045015 hep-th/0203221 [33] M. Berkooz and S.-J. Rey, "Non-Supersymmetric Stable Vacua of M-Theory", J. High Energy Phys. 9901 (1999) 014 hep-th/9807200 [34] A. Adams, J. McGreevy and E. Silverstein, "Decapitating Tadpoles", hep-th/0209226 [35] N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, "Non-Local Modification of Gravity and the Cosmological Constant Problem", hep-th/0209227 [36] V. Balasubramanian, P. Kraus and A. Lawrence, "Bulk vs. Boundary Dynamics in Anti-de Sitter Space-time", Phys. Rev. D 59 (1999) 046003 hep-th/9805171 [37] H. Verlinde, "Holography and Compactification", Nucl. Phys. B 580 (2000) 264 hep-th/9906182 [38] S.B. Giddings, S. Kachru and J. Polchinski, "Hierarchies from Fluxes in String Compactifications", hep-th/0105097 [39] G. Dvali, G. Gabadadze and M. Shifman, "Diluting Cosmological Constant via Large Distance Modification of Gravity", hep-th/0208096 [40] D. Gepner, "Lectures on N=2 String Theory", in Superstrings 89, The Trieste Spring School, 1989. [41] R.G. Leigh, "Dirac-Born-Infeld Action from Dirichlet Sigma Model", Mod. Phys. Lett. A 4 (1989) 2767 [42] J. Bagger and A. Galperin, "Linear and Non-linear Supersymmetries", hep-th/9810109 [42] , *Dubna 1997, Supersymmetries and quantum symmetries* 3-20. [43] A. Giveon and M. Rocek, "Supersymmetric String Vacua on AdS3 × N ", hep-th/9904024 [44] E.J. Martinec and W. McElgin, "String Theory on AdS Orbifolds", J. High Energy Phys. 0204 (2002) 029 hep-th/0106171 [45] E.J. Martinec and W. McElgin, "Exciting AdS Orbifolds", hep-th/0206175 [46] V. Balasubramanian, J. de Boer, E. Keski-Vakkuri and S.F. Ross, "Supersymmetric Conical Defects", Phys. Rev. D 64 (2001) 064011 hep-th/0011217 SzGeCERN CERCER 2344398 SLAC 5256739 hep-th/0210075 eng SISSA-2002-64-EP Borunda, M INFN On the quantum stability of IIB orbifolds and orientifolds with Scherk-Schwarz SUSY breaking 2003 Trieste Scuola Int. Sup. Studi Avan.
8 Oct 2002 26 p We study the quantum stability of Type IIB orbifold and orientifold string models in various dimensions, including Melvin backgrounds, where supersymmetry (SUSY) is broken {\it \`a la} Scherk-Schwarz (SS) by twisting periodicity conditions along a circle of radius R. In particular, we compute the R-dependence of the one-loop induced vacuum energy density $\rho(R)$, or cosmological constant. For SS twists different from Z2 we always find, for both orbifolds and orientifolds, a monotonic $\rho(R)<0$, eventually driving the system to a tachyonic instability. For Z2 twists, orientifold models can have a different behavior, leading either to a runaway decompactification limit or to a negative minimum at a finite value R_0. The last possibility is obtained for a 4D chiral orientifold model where a more accurate but yet preliminary analysis seems to indicate that $R_0$ is driven either to infinity or towards the tachyonic instability, as the dependence on the other geometric moduli is included. LANL EDS SIS LANLPUBL2003 SIS:2003 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Serone, M Trapletti, M 85-108 Nucl. Phys. B 653 2003 http://invenio-software.org/download/invenio-demo-site-files/0210075.pdf http://invenio-software.org/download/invenio-demo-site-files/0210075.ps.gz serone@he.sissa.it n 200241 13 20060823 0006 CER01 20021009 PUBLIC 002344398CER ARTICLE [1] J. Scherk and J. H. Schwarz, Phys. Lett. B 82 (1979) 60 [1] Nucl. Phys. B 153 (1979) 61 [2] R. Rohm, Nucl. Phys. B 237 (1984) 553 [3] H. Itoyama and T.R. Taylor, Phys. Lett. B 186 (1987) 129 [4] C. Kounnas and M. Porrati, Nucl. Phys. B 310 (1988) 355 [4] S. Ferrara, C. Kounnas, M. Porrati and F. Zwirner, Nucl. Phys. B 318 (1989) 75 [4] C. Kounnas and B. Rostand, Nucl. Phys. B 341 (1990) 641 [4] I. Antoniadis and C. Kounnas, Phys. Lett. B 261 (1991) 369 [4] E. Kiritsis and C. Kounnas, Nucl. Phys. B 503 (1997) 117 hep-th/9703059 [5] I. Antoniadis, Phys. Lett. B 246 (1990) 377 [6] C. A. Scrucca and M. Serone, J. High Energy Phys. 0110 (2001) 017 hep-th/0107159 [7] I. Antoniadis, E. Dudas and A. Sagnotti, Nucl. Phys. B 544 (1999) 469 hep-th/9807011 [8] I. Antoniadis, G. D'Appollonio, E. Dudas and A. Sagnotti, Nucl. Phys. B 553 (1999) 133 hep-th/9812118 [8] Nucl. Phys. B 565 (2000) 123 hep-th/9907184 [8] I. Antoniadis, K. Benakli and A. Laugier, hep-th/0111209 [9] C. A. Scrucca, M. Serone and M. Trapletti, Nucl. Phys. B 635 (2002) 33 hep-th/0203190 [10] J. D. Blum and K. R. Dienes, Nucl. Phys. B 516 (1998) 83 hep-th/9707160 [11] M. Fabinger and P. Horava, Nucl. Phys. B 580 (2000) 243 hep-th/0002073 [12] P. Ginsparg and C. Vafa, Nucl. Phys. B 289 (1987) 414 [13] M. A. Melvin, Phys. Lett. 8 (1964) 65 [13] G. W. Gibbons and K. i. Maeda, Nucl. Phys. B 298 (1988) 741 [13] F. Dowker, J. P. Gauntlett, D. A. Kastor and J. Traschen, Phys. Rev. D 49 (1994) 2909 hep-th/9309075 [14] A. Adams, J. Polchinski and E. Silverstein, J. High Energy Phys. 0110 (2001) 029 hep-th/0108075 [15] J. R. David, M. Gutperle, M. Headrick and S. Minwalla, J. High Energy Phys. 0202 (2002) 041 hep-th/0111212 [16] T. Suyama, J. High Energy Phys. 0207 (2002) 015 hep-th/0110077 [17] C. Vafa, arXiv:hep-th/0111051 [18] G. Aldazabal, A. Font, L. E. Ibanez and G. Violero, Nucl. Phys. B 536 (1998) 29 hep-th/9804026 [19] K. H. O'Brien and C. I. Tan, Phys. Rev. D 36 (1987) 1184 [20] J. Polchinski, Commun. Math. Phys. 104 (1986) 37 [21] D. M. Ghilencea, H. P. Nilles and S. Stieberger, arXiv:hep-th/0108183 [22] P. Mayr and S. Stieberger, Nucl.
Phys. B 407 (1993) 725 hep-th/9303017 [23] E. Alvarez, Nucl. Phys. B 269 (1986) 596 [24] J. G. Russo and A. A. Tseytlin, J. High Energy Phys. 0111 (2001) 065 hep-th/0110107 [24] Nucl. Phys. B 611 (2001) 93 hep-th/0104238 [24] A. Dabholkar, Nucl. Phys. B 639 (2002) 331 hep-th/0109019 [24] M. Gutperle and A. Strominger, J. High Energy Phys. 0106 (2001) 035 hep-th/0104136 [24] M. S. Costa and M. Gutperle, J. High Energy Phys. 0103 (2001) 027 hep-th/0012072 [25] E. Dudas and J. Mourad, Nucl. Phys. B 622 (2002) 46 hep-th/0110186 [25] T. Takayanagi and T. Uesugi, J. High Energy Phys. 0111 (2001) 036 hep-th/0110200 [25] Phys. Lett. B 528 (2002) 156 hep-th/0112199 [25] C. Angelantonj, E. Dudas and J. Mourad, Nucl. Phys. B 637 (2002) 59 hep-th/0205096 [26] M. Trapletti, in preparation. [27] A. Adams, J. McGreevy and E. Silverstein, arXiv:hep-th/0209226 [28] E. Witten, Nucl. Phys. B 195 (1982) 481 SzGeCERN 2355566CERCER SLAC 5419166 hep-th/0212138 eng PUPT-2069 Gubser, S S Princeton University A universal result on central charges in the presence of double-trace deformations 2003 Princeton, NJ Princeton Univ. Joseph-Henry Lab. Phys. 12 Dec 2002 15 p We study large N conformal field theories perturbed by relevant double-trace deformations. Using the auxiliary field trick, or Hubbard-Stratonovich transformation, we show that in the infrared the theory flows to another CFT. The generating functionals of planar correlators in the ultraviolet and infrared CFT's are shown to be related by a Legendre transform. Our main result is a universal expression for the difference of the scale anomalies between the ultraviolet and infrared fixed points, which is of order 1 in the large N expansion. Our computations are entirely field theoretic, and the results are shown to agree with predictions from AdS/CFT. We also remark that a certain two-point function can be computed for all energy scales on both sides of the duality, with full agreement between the two and no scheme dependence. LANL EDS SIS LANLPUBL2004 SIS:2004 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Klebanov, Igor R Gubser, Steven S. Klebanov, Igor R. 23-36 Nucl. Phys. B 656 2003 http://invenio-software.org/download/invenio-demo-site-files/0212138.ps.gz http://invenio-software.org/download/invenio-demo-site-files/0212138.pdf ssgubser@Princeton.EDU n 200250 13 20060823 0007 CER01 20021213 PUBLIC 002355566CER ARTICLE SzGeCERN 2356302CERCER SLAC 5423422 hep-th/0212181 eng Girardello, L INFN Universita di Milano-Bicocca 3-D Interacting CFTs and Generalized Higgs Phenomenon in Higher Spin Theories on AdS 2003 16 Dec 2002 8 p We study a duality, recently conjectured by Klebanov and Polyakov, between higher-spin theories on AdS_4 and O(N) vector models in 3-d. These theories are free in the UV and interacting in the IR. At the UV fixed point, the O(N) model has an infinite number of higher-spin conserved currents. In the IR, these currents are no longer conserved for spin s>2. In this paper, we show that the dual interpretation of this fact is that all fields of spin s>2 in AdS_4 become massive by a Higgs mechanism that leaves the spin-2 field massless. We identify the Higgs field and show how it relates to the RG flow connecting the two CFTs, which is induced by a double trace deformation. LANL EDS SIS LANLPUBL2004 SIS:2004 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Porrati, Massimo Zaffaroni, A 289-293 Phys. Lett.
B 561 2003 http://invenio-software.org/download/invenio-demo-site-files/0212181.pdf http://invenio-software.org/download/invenio-demo-site-files/0212181.ps.gz alberto.zaffaroni@mib.infn.it n 200251 13 20060823 0007 CER01 20021217 PUBLIC 002356302CER ARTICLE [1] D. Francia and A. Sagnotti, Phys. Lett. B 543 (2002) 303 hep-th/0207002 [1] P. Haggi-Mani and B. Sundborg, J. High Energy Phys. 0004 (2000) 031 hep-th/0002189 [1] B. Sundborg, Nucl. Phys. B, Proc. Suppl. 102 (2001) 113 hep-th/0103247 [1] E. Sezgin and P. Sundell, J. High Energy Phys. 0109 (2001) 036 hep-th/0105001 [1] A. Mikhailov, hep-th/0201019 [1] E. Sezgin and P. Sundell, Nucl. Phys. B 644 (2002) 303 hep-th/0205131 [1] E. Sezgin and P. Sundell, J. High Energy Phys. 0207 (2002) 055 hep-th/0205132 [1] J. Engquist, E. Sezgin and P. Sundell, Class. Quantum Gravity 19 (2002) 6175 hep-th/0207101 [1] M. A. Vasiliev, Int. J. Mod. Phys. D 5 (1996) 763 hep-th/9611024 [1] D. Anselmi, Nucl. Phys. B 541 (1999) 323 hep-th/9808004 [1] D. Anselmi, Class. Quantum Gravity 17 (2000) 1383 hep-th/9906167 [2] E. S. Fradkin and M. A. Vasiliev, Nucl. Phys. B 291 (1987) 141 [2] E. S. Fradkin and M. A. Vasiliev, Phys. Lett. B 189 (1987) 89 [3] I. R. Klebanov and A. M. Polyakov, Phys. Lett. B 550 (2002) 213 hep-th/0210114 [4] M. A. Vasiliev, hep-th/9910096 [5] T. Leonhardt, A. Meziane and W. Ruhl, hep-th/0211092 [6] O. Aharony, M. Berkooz and E. Silverstein, J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [7] E. Witten, hep-th/0112258 [8] M. Berkooz, A. Sever and A. Shomer, J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [9] S. S. Gubser and I. Mitra, hep-th/0210093 [10] S. S. Gubser and I. R. Klebanov, hep-th/0212138 [11] M. Porrati, J. High Energy Phys. 0204 (2002) 058 hep-th/0112166 [12] K. G. Wilson and J. B. Kogut, Phys. Rep. 12 (1974) 75 [13] I. R. Klebanov and E. Witten, Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [14] W. Heidenreich, J. Math. Phys. 22 (1981) 1566 [15] D. Anselmi, hep-th/0210123 SzGeCERN 20041129103619.0 2357700CERCER SLAC 5435544 hep-th/0212314 eng KUNS-1817 YITP-2002-73 TAUP-2719 Fukuma, M Kyoto University Holographic Renormalization Group 2003 Kyoto Kyoto Univ. 26 Dec 2002 90 p The holographic renormalization group (RG) is reviewed in a self-contained manner. The holographic RG is based on the idea that the radial coordinate of a space-time with asymptotically AdS geometry can be identified with the RG flow parameter of the boundary field theory. After briefly discussing basic aspects of the AdS/CFT correspondence, we explain how the notion of the holographic RG emerges from the AdS/CFT correspondence. We formulate the holographic RG based on the Hamilton-Jacobi equations for bulk systems of gravity and scalar fields, as introduced by de Boer, Verlinde and Verlinde. We then show that the equations can be solved with a derivative expansion by carefully extracting local counterterms from the generating functional of the boundary field theory. The calculational methods to obtain the Weyl anomaly and scaling dimensions are presented and applied to the RG flow from the N=4 SYM to an N=1 superconformal fixed point discovered by Leigh and Strassler. We further discuss a relation between the holographic RG and the noncritical string theory, and show that the structure of the holographic RG should persist beyond the supergravity approximation as a consequence of the renormalizability of the nonlinear sigma model action of noncritical strings.
As a check, we investigate the holographic RG structure of higher-derivative gravity systems, and show that such systems can also be analyzed based on the Hamilton-Jacobi equations, and that the behaviour of bulk fields is determined solely by their boundary values. We also point out that higher-derivative gravity systems give rise to new multicritical points in the parameter space of the boundary field theories. LANL EDS SIS INIS2004 SIS LANLPUBL2004 SIS:2004 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Matsuura, S Sakai, T Fukuma, Masafumi Matsuura, So Sakai, Tadakatsu 489-562 Prog. Theor. Phys. 109 2003 http://invenio-software.org/download/invenio-demo-site-files/0212314.pdf http://invenio-software.org/download/invenio-demo-site-files/0212314.ps.gz matsu@yukawa.kyoto-u.ac.jp n 200201 13 20051024 1938 CER01 20021230 PUBLIC 002357700CER ARTICLE [1] Y. Nambu, in Symmetries and quark models, ed. R. Chand (Gordon and Breach 1970), p 269; H. Nielsen, in the 15th International Conference on High Energy Physics (Kiev 1970); L. Susskind, Nuovo Cimento A 69 (1970) 457 [2] G. 't Hooft, "A Planar Diagram Theory For Strong Interactions," Nucl. Phys. B 72 (1974) 461 [3] K. G. Wilson, "Confinement of Quarks," Phys. Rev. D 10 (1974) 2445 [4] R. Gopakumar and C. Vafa, "On the gauge theory/geometry correspondence," Adv. Theor. Math. Phys. 3 (1999) 1415 hep-th/9811131 [5] J. Maldacena, "The large N limit of superconformal field theories and supergravity," Adv. Theor. Math. Phys. 2 (1998) 231 hep-th/9711200 [6] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge Theory Correlators from Non-Critical String Theory," Phys. Lett. B 428 (1998) 105 hep-th/9802109 [7] E. Witten, "Anti De Sitter Space And Holography," Adv. Theor. Math. Phys. 2 (1998) 253 hep-th/9802150 [8] O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, "Large N Field Theories, String Theory and Gravity," hep-th/9905111 [8] , and references therein. [9] G. T. Horowitz and A. Strominger, "Black Strings And P-Branes," Nucl. Phys. B 360 (1991) 197 [10] L. Susskind and E. Witten, "The holographic bound in anti-de Sitter space," hep-th/9805114 [11] E. T. Akhmedov, "A remark on the AdS/CFT correspondence and the renormalization group flow," Phys. Lett. B 442 (1998) 152 hep-th/9806217 [12] E. Alvarez and C. Gomez, "Geometric Holography, the Renormalization Group and the c-Theorem," Nucl. Phys. B 541 (1999) 441 hep-th/9807226 [13] L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, "Novel Local CFT and Exact Results on Perturbations of N=4 Super Yang Mills from AdS Dynamics," J. High Energy Phys. 12 (1998) 022 hep-th/9810126 [14] M. Porrati and A. Starinets, "RG Fixed Points in Supergravity Duals of 4-d Field Theory and Asymptotically AdS Spaces," Phys. Lett. B 454 (1999) 77 hep-th/9903085 [15] V. Balasubramanian and P. Kraus, "Spacetime and the Holographic Renormalization Group," Phys. Rev. Lett. 83 (1999) 3605 hep-th/9903190 [16] D. Z. Freedman, S. S. Gubser, K. Pilch and N. P. Warner, "Renormalization group flows from holography supersymmetry and a c-theorem," Adv. Theor. Math. Phys. 3 (1999) 363 hep-th/9904017 [17] L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, "The Supergravity Dual of N=1 Super Yang-Mills Theory," Nucl. Phys. B 569 (2000) 451 hep-th/9909047 [18] K. Skenderis and P. K. Townsend, "Gravitational Stability and Renormalization-Group Flow," Phys. Lett. B 468 (1999) 46 hep-th/9909070 [19] O. DeWolfe, D. Z. Freedman, S. S. Gubser and A.
Karch, "Modeling the fifth dimen-sion with scalars and gravity," Phys. Rev. D 62 (2000) 046008 hep-th/9909134 [20] V. Sahakian, "Holography, a covariant c-function and the geometry of the renormal-ization group," Phys. Rev. D 62 (2000) 126011 hep-th/9910099 [21] E. Alvarez and C. Gomez, "A comment on the holographic renormalization group and the soft dilaton theorem," Phys. Lett. B 476 (2000) 411 hep-th/0001016 [22] S. Nojiri, S. D. Odintsov and S. Zerbini, "Quantum (in)stability of dilatonic AdS backgrounds and holographic renormalization group with gravity," Phys. Rev. D 62 (2000) 064006 hep-th/0001192 [23] M. Li, "A note on relation between holographic RG equation and Polchinski’s RG equation," Nucl. Phys. B 579 (2000) 525 hep-th/0001193 [24] V. Sahakian, "Comments on D branes and the renormalization group," J. High Energy Phys. 0005 (2000) 011 hep-th/0002126 [25] O. DeWolfe and D. Z. Freedman, "Notes on fluctuations and correlation functions in holographic renormalization group flows," hep-th/0002226 [26] V. Balasubramanian, E. G. Gimon and D. Minic, "Consistency conditions for holo-graphic duality," J. High Energy Phys. 0005 (2000) 014 hep-th/0003147 [27] C. V. Johnson, K. J. Lovis and D. C. Page, "Probing some N = 1 AdS/CFT RG flows," J. High Energy Phys. 0105 (2001) 036 hep-th/0011166 [28] J. Erdmenger, "A field-theoretical interpretation of the holographic renormalization group," Phys. Rev. D 64 (2001) 085012 hep-th/0103219 [29] S. Yamaguchi, "Holographic RG flow on the defect and g-theorem," J. High Energy Phys. 0210 (2002) 002 hep-th/0207171 [30] J. de Boer, E. Verlinde and H. Verlinde, "On the Holographic Renormalization Group," hep-th/9912012 [31] M. Henningson and K. Skenderis, "The Holographic Weyl anomaly," J. High Energy Phys. 07 (1998) 023 hep-th/9806087 [32] V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity," Commun. Math. Phys. 208 (1999) 413 hep-th/9902121 [33] S. de Haro, K. Skenderis and S. Solodukhin, "Holographic Reconstruction of Space-time and Renormalization in the AdS/CFT Correspondence," hep-th/0002230 [34] M. J. Duff, "Twenty Years of the Weyl Anomaly," Class. Quantum Gravity 11 (1994) 1387 hep-th/9308075 [35] M. Fukuma, S. Matsuura and T. Sakai, "A note on the Weyl anomaly in the holographic renormalization group," Prog. Theor. Phys. 104 (2000) 1089 hep-th/0007062 [36] M. Fukuma and T. Sakai, "Comment on ambiguities in the holographic Weyl anomaly," Mod. Phys. Lett. A 15 (2000) 1703 hep-th/0007200 [37] M. Fukuma, S. Matsuura and T. Sakai, "Higher-Derivative Gravity and the AdS/CFT Correspondence," Prog. Theor. Phys. 105 (2001) 1017 hep-th/0103187 [38] M. Fukuma and S. Matsuura, "Holographic renormalization group structure in higher-derivative gravity," Prog. Theor. Phys. 107 (2002) 1085 hep-th/0112037 [39] A. Fayyazuddin and M. Spalinski "Large N Superconformal Gauge Theories and Supergravity Orientifolds," Nucl. Phys. B 535 (1998) 219 hep-th/9805096 [39] O. Aharony, A. Fayyazuddin and J. Maldacena, "The Large N Limit of N = 1, 2 Field Theories from Three Branes in F-theory," J. High Energy Phys. 9807 (1998) 013 hep-th/9806159 [40] M. Blau, K. S. Narain and E. Gava "On Subleading Contributions to the AdS/CFT Trace Anomaly," J. High Energy Phys. 9909 (1999) 018 hep-th/9904179 [41] O. Aharony, J. Pawelczyk, S. Theisen and S. Yankielowicz, "A Note on Anomalies in the AdS/CFT correspondence," Phys. Rev. D 60 (1999) 066001 hep-th/9901134 [42] S. Corley, "A Note on Holographic Ward Identities," Phys. Lett. 
B 484 (2000) 141 hep-th/0004030 [43] J. Kalkkinen and D. Martelli, "Holographic renormalization group with fermions and form fields," Nucl. Phys. B 596 (2001) 415 hep-th/0007234 [44] S. Nojiri, S. D. Odintsov and S. Ogushi, "Scheme-dependence of holographic conformal anomaly in d5 gauged supergravity with non-trivial bulk potential," Phys. Lett. B 494 (2000) 318 hep-th/0009015 [45] N. Hambli, "On the holographic RG-flow and the low energy, strong coupling, large N limit," Phys. Rev. D 64 (2001) 024001 hep-th/0010054 [46] S. Nojiri, S. D. Odintsov and S. Ogushi, "Holographic renormalization group and conformal anomaly for AdS(9)/CFT(8) correspondence," Phys. Lett. B 500 (2001) 199 hep-th/0011182 [47] J. de Boer, "The holographic renormalization group," Fortschr. Phys. 49 (2001) 339 hep-th/0101026 [48] J. Kalkkinen, D. Martelli and W. Muck, "Holographic renormalisation and anomalies," J. High Energy Phys. 0104 (2001) 036 hep-th/0103111 [49] S. Nojiri and S. D. Odintsov, "Conformal anomaly from dS/CFT correspondence," Phys. Lett. B 519 (2001) 145 hep-th/0106191 [50] S. Nojiri and S. D. Odintsov, "Asymptotically de Sitter dilatonic space-time, holographic RG flow and conformal anomaly from (dilatonic) dS/CFT correspondence," Phys. Lett. B 531 (2002) 143 hep-th/0201210 [51] R. G. Leigh and M. J. Strassler, "Exactly marginal operators and duality in four-dimensional N=1 supersymmetric gauge theory," Nucl. Phys. B 447 (1995) 95 hep-th/9503121 [52] S. Ferrara, C. Fronsdal and A. Zaffaroni, "On N = 8 supergravity on AdS(5) and N = 4 superconformal Yang-Mills theory," Nucl. Phys. B 532 (1998) 153 hep-th/9802203 [53] L. Andrianopoli and S. Ferrara, "K-K excitations on AdS(5) x S(5) as N = 4 *primary* superfields," Phys. Lett. B 430 (1998) 248 hep-th/9803171 [54] S. Ferrara, M. A. Lledo and A. Zaffaroni, "Born-Infeld corrections to D3 brane action in AdS(5) x S(5) and N = 4, d = 4 primary superfields," Phys. Rev. D 58 (1998) 105029 hep-th/9805082 [55] M. F. Sohnius, "Introducing Supersymmetry," Phys. Rep. 128 (1985) 39 [56] O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace operators and non-local string theories," J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [57] E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence," hep-th/0112258 [58] M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities," J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [59] S. Minwalla, "Restrictions imposed by superconformal invariance on quantum field theories," Adv. Theor. Math. Phys. 2 (1998) 781 hep-th/9712074 [60] M. Gunaydin, D. Minic, and M. Zagermann, "Novel supermultiplets of SU(2, 2|4) and the AdS5 / CFT4 duality," hep-th/9810226 [61] L. Andrianopoli and S. Ferrara, "K-K Excitations on AdS5 × S5 as N = 4 'Primary' Superfields," Phys. Lett. B 430 (1998) 248 hep-th/9803171 [62] L. Andrianopoli and S. Ferrara, "'Nonchiral' Primary Superfields in the AdSd+1 / CFTd Correspondence," Lett. Math. Phys. 46 (1998) 265 hep-th/9807150 [63] S. Ferrara and A. Zaffaroni, "Bulk gauge fields in AdS supergravity and supersingletons," hep-th/9807090 [64] M. Gunaydin, D. Minic, and M. Zagermann, "4-D doubleton conformal theories, CPT and IIB string on AdS5 × S5," Nucl. Phys. B 534 (1998) 96 hep-th/9806042 [65] L. Andrianopoli and S. Ferrara, "On Short and Long SU(2, 2/4) Multiplets in the AdS/CFT Correspondence," hep-th/9812067 [66] P. S. Howe, K. S. Stelle and P. K. Townsend, "Supercurrents," Nucl. Phys.
B 192 (1981) 332 [67] P. S. Howe and P. C. West, "Operator product expansions in four-dimensional superconformal field theories," Phys. Lett. B 389 (1996) 273 hep-th/9607060 [67] "Is N = 4 Yang-Mills theory soluble?," hep-th/9611074 [67] "Superconformal invariants and extended supersymmetry," Phys. Lett. B 400 (1997) 307 hep-th/9611075 [68] H. J. Kim, L. J. Romans and P. van Nieuwenhuizen, "The Mass Spectrum Of Chiral N=2 D = 10 Supergravity On S**5," Phys. Rev. D 32 (1985) 389 [69] M. Günaydin and N. Marcus, "The Spectrum Of The S**5 Compactification Of The Chiral N=2, D=10 Supergravity And The Unitary Supermultiplets Of U(2, 2/4)," Class. Quantum Gravity 2 (1985) L11 [70] V. A. Novikov, M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, "Exact Gell-Mann-Low Function Of Supersymmetric Yang-Mills Theories From Instanton Calculus," Nucl. Phys. B 229 (1983) 381 [71] D. Anselmi, D. Z. Freedman, M. T. Grisaru and A. A. Johansen, "Nonperturbative formulas for central functions of supersymmetric gauge theories," Nucl. Phys. B 526 (1998) 543 hep-th/9708042 [72] A. Khavaev, K. Pilch and N. P. Warner, "New vacua of gauged N = 8 supergravity in five dimensions," Phys. Lett. B 487 (2000) 14 hep-th/9812035 [73] K. Pilch and N. P. Warner, "N = 1 supersymmetric renormalization group flows from IIB supergravity," Adv. Theor. Math. Phys. 4 (2002) 627 hep-th/0006066 [74] D. Berenstein, J. M. Maldacena and H. Nastase, "Strings in flat space and pp waves from N = 4 super Yang Mills," J. High Energy Phys. 0204 (2002) 013 hep-th/0202021 [75] M. Blau, J. Figueroa-O'Farrill, C. Hull and G. Papadopoulos, "A new maximally supersymmetric background of IIB superstring theory," J. High Energy Phys. 0201 (2002) 047 hep-th/0110242 [75] M. Blau, J. Figueroa-O'Farrill, C. Hull and G. Papadopoulos, "Penrose limits and maximal supersymmetry," Class. Quantum Gravity 19 (2002) L87 hep-th/0201081 [75] M. Blau, J. Figueroa-O'Farrill and G. Papadopoulos, "Penrose limits, supergravity and brane dynamics," Class. Quantum Gravity 19 (2002) 4753 hep-th/0202111 [76] R. R. Metsaev, "Type IIB Green-Schwarz superstring in plane wave Ramond-Ramond background," Nucl. Phys. B 625 (2002) 70 hep-th/0112044 [77] R. Corrado, N. Halmagyi, K. D. Kennaway and N. P. Warner, "Penrose limits of RG fixed points and pp-waves with background fluxes," hep-th/0205314 [77] E. G. Gimon, L. A. Pando Zayas and J. Sonnenschein, "Penrose limits and RG flows," hep-th/0206033 [77] D. Brecher, C. V. Johnson, K. J. Lovis and R. C. Myers, "Penrose limits, deformed pp-waves and the string duals of N = 1 large N gauge theory," J. High Energy Phys. 0210 (2002) 008 hep-th/0206045 [78] Y. Oz and T. Sakai, "Penrose limit and six-dimensional gauge theories," Phys. Lett. B 544 (2002) 321 hep-th/0207223 [78] ; etc. [79] A. B. Zamolodchikov, "'Irreversibility' Of The Flux Of The Renormalization Group In A 2-D Field Theory," JETP Lett. 43 (1986) 730 [79] [Pis'ma Zh. Eksp. Teor. Fiz. 43 (1986) 565]. [80] D. Anselmi, "Anomalies, unitarity, and quantum irreversibility," Ann. Phys. 276 (1999) 361 hep-th/9903059 [81] G. W. Gibbons and S. W. Hawking, "Action Integrals and Partition Functions in Quantum Gravity," Phys. Rev. D 15 (1977) 2752 [82] C. R. Graham and J. M. Lee, "Einstein Metrics with Prescribed Conformal Infinity on the Ball," Adv. Math. 87 (1991) 186 [83] M. Green, J. Schwarz and E. Witten, "Superstring Theory," Cambridge University Press, New York, 1987. [84] S. Nojiri and S.
Odintsov, "Conformal Anomaly for Dilaton Coupled Theories from AdS/CFT Correspondence," Phys. Lett. B 444 (1998) 92 hep-th/9810008 [84] S. Nojiri, S. Odintsov and S. Ogushi, "Conformal Anomaly from d5 Gauged Super-gravity and c-function Away from Conformity," hep-th/9912191 [84] "Finite Action in d5 Gauged Supergravity and Dilatonic Conformal Anomaly for Dual Quantum Field Theory," hep-th/0001122 [85] A. Polyakov, Phys. Lett. B 103 (1981) 207 [85] 211; V. Knizhnik, A. Polyakov and A. Zamolodchikov, Mod. Phys. Lett. A 3 (1988) 819 [86] F. David, Mod. Phys. Lett. A 3 (1988) 1651 [86] J. Distler and H. Kawai, Nucl. Phys. B 321 (1989) 509 [87] N. Seiberg, "Notes on quantum Liouville theory and quantum gravity," Prog. Theor. Phys. Suppl. 102 (1990) 319 [88] R. Myer, Phys. Lett. B 199 (1987) 371 [89] A. Dhar and S. Wadia, "Noncritical strings, RG flows and holography," Nucl. Phys. B 590 (2000) 261 hep-th/0006043 [90] S. Nojiri and S. D. Odintsov, "Brane World Inflation Induced by Quantum Effects," Phys. Lett. B 484 (2000) 119 hep-th/0004097 [91] R. C. Myers, "Higher-derivative gravity, surface terms, and string theory," Phys. Rev. D 36 (1987) 392 [92] S. Nojiri and S. D. Odintsov, "Brane-World Cosmology in Higher Derivative Gravity or Warped Compactification in the Next-to-leading Order of AdS/CFT Correspon-dence," J. High Energy Phys. 0007 (2000) 049 hep-th/0006232 [92] S. Nojiri, S. D. Odintsov and S. Ogushi, "Dynamical Branes from Gravitational Dual of N = 2 Sp(N) Superconformal Field Theory," hep-th/0010004 [92] "Holographic Europhys. Newstropy and brane FRW-dynamics from AdS black hole in d5 higher derivative gravity," hep-th/0105117 [93] S. Nojiri and S. D. Odintsov, "On the conformal anomaly from higher derivative grav-ity in AdS/CFT correspondence," Int. J. Mod. Phys. A 15 (2000) 413 hep-th/9903033 [93] S. Nojiri and S. D. Odintsov, "Finite gravitational action for higher derivative and stringy gravity," Phys. Rev. D 62 (2000) 064018 hep-th/9911152 [94] J. Polchinski, "String Theory," Vol. II, Cambridge University Press, 1998. [95] S. Kamefuchi, L. O’Raifeartaigh and A. Salam, "Change Of Variables And Equiva-lence Theorems In Quantum Field Theories," Nucl. Phys. 28 (1961) 529 [96] D. J. Gross and E. Witten, "Superstring Modifications Of Einstein’s Equations," Nucl. Phys. B 277 (1986) 1 [97] J. I. Latorre and T. R. Morris, "Exact scheme independence," J. High Energy Phys. 0011 (2000) 004 hep-th/0008123 [98] M. Fukuma and S. Matsuura, "Comment on field redefinitions in the AdS/CFT cor-respondence," Prog. Theor. Phys. 108 (2002) 375 hep-th/0204257 [99] I. R. Klebanov and A. M. Polyakov, "AdS dual of the critical O(N) vector model," Phys. Lett. B 550 (2002) 213 hep-th/0210114 [100] A. M. Polyakov, "Gauge Fields Ann. Sci. Rings Of Glue," Nucl. Phys. B 164 (1980) 171 [101] Y. Makeenko and Astron. Astrophys. Migdal, "Quantum Chromodynamics Ann. Sci. Dynamics Of Loops," Nucl. Phys. B 188 (1981) 269 [102] A. M. Polyakov, "Confining strings," Nucl. Phys. B 486 (1997) 23 hep-th/9607049 [103] A. M. Polyakov, "String theory and quark confinement," Nucl. Phys. B, Proc. Suppl. 68 (1998) 1 hep-th/9711002 [104] A. M. Polyakov, "The wall of the cave," Int. J. Mod. Phys. A 14 (1999) 645 hep-th/9809057 [105] A. M. Polyakov and V. S. Rychkov, "Gauge fields - strings duality and the loop equation," Nucl. Phys. B 581 (2000) 116 hep-th/0002106 [106] A. M. Polyakov and V. S. Rychkov, "Loop dynamics and AdS/CFT correspondence," Nucl. Phys. B 594 (2001) 272 hep-th/0005173 [107] A. M. Polyakov, "String theory Ann. 
as a universal language," Phys. At. Nucl. 64 (2001) 540 hep-th/0006132 [108] A. M. Polyakov, "Gauge fields and space-time," Int. J. Mod. Phys. A 17S1 (2002) 119 hep-th/0110196 [109] J. B. Kogut and L. Susskind, "Hamiltonian Formulation Of Wilson's Lattice Gauge Theories," Phys. Rev. D 11 (1975) 395 [110] A. Santambrogio and D. Zanon, "Exact anomalous dimensions of N = 4 Yang-Mills operators with large R charge," Phys. Lett. B 545 (2002) 425 hep-th/0206079 [111] Y. Oz and T. Sakai, "Exact anomalous dimensions for N = 2 ADE SCFTs," hep-th/0208078 [112] S. R. Das, C. Gomez and S. J. Rey, "Penrose limit, spontaneous symmetry breaking and holography in pp-wave background," Phys. Rev. D 66 (2002) 046002 hep-th/0203164 [113] R. G. Leigh, K. Okuyama and M. Rozali, "PP-waves and holography," Phys. Rev. D 66 (2002) 046004 hep-th/0204026 [114] D. Berenstein and H. Nastase, "On lightcone string field theory from super Yang-Mills and holography," hep-th/0205048 SzGeCERN 2373792CERCER hep-th/0304229 eng Barvinsky, A O Lebedev Physics Institute Nonlocal action for long-distance modifications of gravity theory 2003 28 Apr 2003 9 p We construct the covariant nonlocal action for recently suggested long-distance modifications of gravity theory motivated by the cosmological constant and cosmological acceleration problems. This construction is based on a special nonlocal form of the Einstein-Hilbert action, which explicitly reveals the fact that, within the covariant curvature expansion, this action begins with curvature-squared terms. LANL EDS SIS LANLPUBL2004 SIS:2004 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory 109-116 Phys. Lett. B 572 2003 http://invenio-software.org/download/invenio-demo-site-files/0304229.pdf http://invenio-software.org/download/invenio-demo-site-files/0304229.ps.gz barvin@lpi.ru n 200318 13 20060826 0015 CER01 20030429 PUBLIC 002373792CER ARTICLE [1] N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, Nonlocal modification of gravity and the cosmological constant problem, hep-th/0209227 [2] S. Weinberg, Rev. Mod. Phys. 61 (1989) 1 [3] M. K. Parikh and S. N. Solodukhin, Phys. Lett. B 503 (2001) 384 hep-th/0012231 [4] A. O. Barvinsky and G. A. Vilkovisky, Nucl. Phys. B 282 (1987) 163 [5] A. O. Barvinsky and G. A. Vilkovisky, Nucl. Phys. B 333 (1990) 471 [6] A. O. Barvinsky, Yu. V. Gusev, G. A. Vilkovisky and V. V. Zhytnikov, J. Math. Phys. 35 (1994) 3525 [6] J. Math. Phys. 35 (1994) 3543 [7] A. Adams, J. McGreevy and E. Silverstein, hep-th/0209226 [8] R. Gregory, V. A. Rubakov and S. M. Sibiryakov, Phys. Rev. Lett. 84 (2000) 5928 hep-th/0002072 [9] G. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B 485 (2000) 208 hep-th/0005016 [10] S. L. Dubovsky and V. A. Rubakov, Phys. Rev. D 67 (2003) 104014 hep-th/0212222 [11] A. O. Barvinsky, Phys. Rev. D 65 (2002) 062003 hep-th/0107244 [12] A. O. Barvinsky, A. Yu. Kamenshchik, A. Rathke and C. Kiefer, Phys. Rev. D 67 (2003) 023513 hep-th/0206188 [13] E. S. Fradkin and A. A. Tseytlin, Phys. Lett. B 104 (1981) 377 [13] A. O. Barvinsky and I. G. Avramidi, Phys. Lett. B 159 (1985) 269 [14] A. O. Barvinsky, A. Yu. Kamenshchik and I. P. Karmazin, Phys. Rev. D 48 (1993) 3677 gr-qc/9302007 [15] E. V. Gorbar and I. L. Shapiro, J. High Energy Phys. 0302 (2003) 021 [16] M. Porrati, Phys. Lett. B 534 (2002) 209 hep-th/0203014 [17] H. van Dam and M. J. Veltman, Nucl. Phys. B 22 (1970) 397; V. I. Zakharov, JETP Lett. 12 (1970) 312 [17] M. Porrati, Phys. Lett.
B 498 (2001) 92 hep-th/0011152 [18] A. O. Barvinsky, Yu. V. Gusev, V. F. Mukhanov and D. V. Nesterov, Nonperturbative late time asymptotics for heat kernel in gravity theory, hep-th/0306052 [19] A. Strominger, J. High Energy Phys. 0110 (2001) 034 hep-th/0106113 [19] J. High Energy Phys. 0111 (2001) 049 hep-th/0110087 [20] J. Schwinger, J. Math. Phys. 2 (1961) 407 [20] J. L. Buchbinder, E. S. Fradkin and D. M. Gitman, Fortschr. Phys. 29 (1981) 187 [20] R. D. Jordan, Phys. Rev. D 33 (1986) 444 [21] C. Deffayet, G. Dvali and G. Gabadadze, Phys. Rev. D 65 (2002) 044023 astro-ph/0105068 [22] G. Dvali, A. Gruzinov and M. Zaldarriaga, The accelerated Universe and the Moon, hep-ph/0212069 [23] M. E. Soussa and R. P. Woodard, A nonlocal metric formulation of MOND, astro-ph/0302030 [24] M. Milgrom, Astrophys. J. 270 (1983) 365 [24] Astrophys. J. 270 (1983) 371 [24] J. Bekenstein and M. Milgrom, Astrophys. J. 286 (1984) 7 [25] L. R. Abramo and R. P. Woodard, Phys. Rev. D 65 (2002) 063516 [25] V. K. Onemli and R. P. Woodard, Class. Quantum Gravity 19 (2002) 4607 gr-qc/0204065 SzGeCERN hep-th/0307041 eng Witten, Edward Princeton University SL(2,Z) Action On Three-Dimensional Conformal Field Theories With Abelian Symmetry 2003 3 Jul 2003 24 p On the space of three-dimensional conformal field theories with U(1) symmetry and a chosen coupling to a background gauge field, there is a natural action of the group SL(2,Z). The generator S of SL(2,Z) acts by letting the background gauge field become dynamical, an operation considered recently by Kapustin and Strassler. The other generator T acts by shifting the Chern-Simons coupling of the background field. This SL(2,Z) action in three dimensions is related by the AdS/CFT correspondence to SL(2,Z) duality of low energy U(1) gauge fields in four dimensions. LANL EDS SzGeCERN Particle Physics - Theory PREPRINT LANL EDS High Energy Physics - Theory Witten, Edward http://invenio-software.org/download/invenio-demo-site-files/0307041.pdf http://invenio-software.org/download/invenio-demo-site-files/0307041.ps.gz witten@ias.edu n 200327 11 20061123 0917 CER01 20030704 PUBLIC 002385282CER PREPRINT [1] C. Burgess and B. P. Dolan, "Particle Vortex Duality And The Modular Group: Applications To The Quantum Hall Effect And Other 2-D Systems," hep-th/0010246 [2] A. Shapere and F. Wilczek, "Self-Dual Models With Theta Terms," Nucl. Phys. B 320 (1989) 669 [3] S. J. Rey and A. Zee, "Self-Duality Of Three-Dimensional Chern-Simons Theory," Nucl. Phys. B 352 (1991) 897 [4] C. A. Lutken and G. G. Ross, "Duality In The Quantum Hall System," Phys. Rev. B 45 (1992) 11837 [4] Phys. Rev. B 48 (1993) 2500 [5] D.-H. Lee, S. Kivelson, and S.-C. Zhang, Phys. Rev. Lett. 68 (1992) 2386 [5] Phys. Rev. B 46 (1992) 2223 [6] C. A. Lutken, "Geometry Of Renormalization Group Flows Constrained By Discrete Global Symmetries," Nucl. Phys. B 396 (1993) 670 [7] B. P. Dolan, "Duality And The Modular Group In The Quantum Hall Effect," J. Phys. A 32 (1999) L243 cond-mat/9805171 [8] C. P. Burgess, R. Dib, and B. P. Dolan, Phys. Rev. B 62 (2000) 15359 cond-mat/9911476 [9] A. Zee, "Quantum Hall Fluids," cond-mat/9501022 [10] A. Kapustin and M. Strassler, "On Mirror Symmetry In Three Dimensional Abelian Gauge Theories," hep-th/9902033 [11] K. Intriligator and N. Seiberg, "Mirror Symmetry In Three-Dimensional Gauge Theories," Phys. Lett. B 387 (1996) 512 hep-th/9607207 [12] J. Cardy and E. Rabinovici, "Phase Structure Of Z(p) Models In The Presence Of A Theta Parameter," Nucl. Phys. B 205 (1982) 1 [12] J.
Cardy, "Duality And The Theta Parameter In Abelian Lattice Models," Nucl. Phys. B 205 (1982) 17 [13] C. Vafa and E. Witten, "A Strong Coupling Test Of S-Duality," Nucl. Phys. B 431 (1994) 3 hep-th/9408074 [14] E. Witten, "On S Duality In Abelian Gauge Theory," Selecta Mathematica : 1 (1995) 383, hep-th/9505186 [15] S. Deser, R. Jackiw, and S. Templeton, "Topologically Massive Gauge Theories," Ann. Phys. 140 (1982) 372 [16] E. Guadagnini, M. Martinelli, and M. Mintchev, "Scale-Invariant SIGMA, Symmetry Integrability Geom. Methods Appl. Models On Homogeneous Spaces," Phys. Lett. B 194 (1987) 69 [17] K. Bardacki, E. Rabinovici, and B. Saring, Nucl. Phys. B 299 (1988) 157 [18] D. Karabali, Q.-H. Park, H. J. Schnitzer, and Z. Yang, Phys. Lett. B 216 (1989) 307 [18] H. J. Schnitzer, Nucl. Phys. B 324 (1989) 412 [18] D. Karabali and H. J. Schnitzer, Nucl. Phys. B 329 (1990) 649 [19] T. Appelquist and R. D. Pisarski, "Hot Yang-Mills Theories And Three-Dimensional QCD," Phys. Rev. D 23 (1981) 2305 [20] R. Jackiw and S. Templeton, "How Superrenormalizable Interactions Cure Their In-frared Divergences," Phys. Rev. D 23 (1981) 2291 [21] S. Templeton, "Summation Of Dominant Coupling Constant Logarithms In QED In Three Dimensions," Phys. Lett. B 103 (1981) 134 [21] "Summation Of Coupling Constant Logarithms In QED In Three Dimensions," Phys. Rev. D 24 (1981) 3134 [22] T. Appelquist and U. W. Heinz,"Three-Dimensional O(N) Theories At Large Dis-tances," Phys. Rev. D 24 (1981) 2169 [23] D. Anselmi, "Large N Expansion, Conformal Field Theory, And Renormalization Group Flows In Three Dimensions," J. High Energy Phys. 0006 (2000) 042 hep-th/0005261 [24] V. Borokhov, A. Kapustin, and X. Wu, "Topological Disorder Operators In Three-Dimensional Conformal Field Theory," hep-th/0206054 [25] V. Borokhov, A. Kapustin, and X. Wu, "Monopole Operators And Mirror Symmetry In Three Dimensions," J. High Energy Phys. 0212 (2002) 044 hep-th/0207074 [26] P. Breitenlohner and D. Z. Freedman, "Stability In Gauged Extended Supergravity," Ann. Phys. 144 (1982) 249 [27] I. R. Klebanov and E. Witten, "AdS/CFT Correspondence And Symmetry Breaking," Nucl. Phys. B 536 (1998) 199 hep-th/9905104 [28] R. Jackiw, "Topological Investigations Of Quantized Gauge Theories," in Current Algebra And Anomalies, ed. S. B. Treiman et. al. (World-Scientific, 1985). [29] A. Schwarz, "The Partition Function Of A Degenerate Functional," Commun. Math. Phys. 67 (1979) 1 [30] M. Rocek and E. Verlinde, "Duality, Quotients, and Currents," Nucl. Phys. B 373 (1992) 630 hep-th/9110053 [31] S. Elitzur, G. Moore, A. Schwimmer, and N. Seiberg, "Remarks On The Canonical Quantization Of The Chern-Simons-Witten Theory," Nucl. Phys. B 326 (1989) 108 [32] E. Witten, "Quantum Field Theory And The Jones Polynomial," Commun. Math. Phys. 121 (1989) 351 [33] N. Redlich, "Parity Violation And Gauge Non-Invariance Of The Effective Gauge Field Action In Three Dimensions," Phys. Rev. D 29 (1984) 2366 [34] E. Witten, "Multi-Trace Operators, Boundary Conditions, and AdS/CFT Correspon-dence," hep-th/0112258 [35] M. Berkooz, A. Sever, and A. Shomer, "Double-trace Deformations, Boundary Con-ditions, and Spacetime Singularities," J. High Energy Phys. 05 (2002) 034 hep-th/0112264 [36] P. Minces, "Multi-trace Operators And The Generalized AdS/CFT Prescription," hep-th/0201172 [37] O. Aharony, M. Berkooz, and E. Silverstein, "Multiple Trace Operators And Non-Local String Theories," J. High Energy Phys. 08 (2001) 006 [38] V. K. 
Dobrev, "Intertwining Operator Realization Of The AdS/CFT Correspon-dence," Nucl. Phys. B 553 (1999) 559 hep-th/9812194 [39] I. R. Klebanov, "Touching Random Surfaces And Liouville Theory," Phys. Rev. D 51 (1995) 1836 hep-th/9407167 [39] I. R. Klebanov and A. Hashimoto, "Non-perturbative Solution Of Matrix Models Modified By Trace Squared Terms," Nucl. Phys. B 434 (1995) 264 hep-th/9409064 [40] S. Gubser and I. Mitra, "Double-trace Operators And One-Loop Vacuum Energy In AdS/CFT," hep-th/0210093 Phys.Rev. D67 (2003) 064018 [41] S. Gubser and I. R. Klebanov, "A Universal Result On Central Charges In The Presence Of Double-Trace Deformations," Nucl.Phys. B656 (2003) 23 hep-th/0212138 SzGeCERN 20070403111954.0 hep-th/0402130 eng NYU-TH-2004-02-17 Dvali, G New York University Filtering Gravity: Modification at Large Distances? Infrared Modification of Gravity Preprint title 2005 New York, NY New York Univ. Dept. Phys. 17 Feb 2004 18 p In this lecture I address the issue of possible large distance modification of gravity and its observational consequences. Although, for the illustrative purposes we focus on a particular simple generally-covariant example, our conclusions are rather general and apply to large class of theories in which, already at the Newtonian level, gravity changes the regime at a certain very large crossover distance $r_c$. In such theories the cosmological evolution gets dramatically modified at the crossover scale, usually exhibiting a "self-accelerated" expansion, which can be differentiated from more conventional "dark energy" scenarios by precision cosmology. However, unlike the latter scenarios, theories of modified-gravity are extremely constrained (and potentially testable) by the precision gravitational measurements at much shorter scales. Despite the presence of extra polarizations of graviton, the theory is compatible with observations, since the naive perturbative expansion in Newton's constant breaks down at a certain intermediate scale. This happens because the extra polarizations have couplings singular in $1/r_c$. However, the correctly resummed non-linear solutions are regular and exhibit continuous Einsteinian limit. Contrary to the naive expectation, explicit examples indicate that the resummed solutions remain valid after the ultraviolet completion of the theory, with the loop corrections taken into account. LANL EDS SIS:200704 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Dvali, Gia 92-98 Phys. Scr. Top. Issues T117 2005 http://invenio-software.org/download/invenio-demo-site-files/0402130.pdf n 200408 13 20070425 1019 CER01 20040218 002414101 92-98 sigtuna20030814 PUBLIC 002426503CER ARTICLE [1] G. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B 485 (2000) 208 hep-th/0005016 [1] G. R. Dvali and G. Gabadadze, Phys. Rev. D 63 (2001) 065007 hep-th/0008054 [2] G. Dvali, G. Gabadadze, M. Kolanovic and F. Nitti, Phys. Rev. D 65 (2002) 024031 hep-ph/0106058 [3] G. Dvali, G. Gabadadze, M. Kolanovic and F. Nitti, Phys. Rev. D 64 (2001) 084004 hep-ph/0102216 [4] C. Deffayet, G. Dvali and G. Gabadadze, Phys. Rev. D 65 (2002) 044023 astro-ph/0105068 [5] C. Deffayet, Phys. Lett. B 502 (2001) 199 hep-th/0010186 [6] A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116 (1998) 1009 astro-ph/9805201 [6] S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517 (1999) 565 astro-ph/9812133 [7] G. Dvali and M. Turner, astro-ph/0301510 [8] H. van Dam and M. Veltman, Nucl. Phys. 
B 22 (1970) 397 [9] V. I. Zakharov, JETP Lett. 12 (1970) 312 [10] A. I. Vainshtein, Phys. Lett. B 39 (1972) 393 [11] C. Deffayet, G. Dvali, G. Gabadadze and A. I. Vainshtein, Phys. Rev. D 65 (2002) 044026 hep-th/0106001 [12] N. Arkani-Hamed, H. Georgi and M. D. Schwartz, Ann. Phys. 305 (2003) 96 hep-th/0210184 [13] D. G. Boulware and S. Deser, Phys. Rev. D 6 (1972) 3368 [14] G. Gabadadze and A. Gruzinov, Phys. Rev. D 72 (2005) 124007 hep-th/0312074 [15] M. A. Luty, M. Porrati and R. Rattazzi, J. High Energy Phys. 0309 (2003) 029 hep-th/0303116 [16] A. Lue, Phys. Rev. D 66 (2002) 043509 hep-th/0111168 [17] A. Gruzinov, New Astron. 10 (2005) 311 astro-ph/0112246 [18] S. Corley, D. A. Lowe and S. Ramgoolam, J. High Energy Phys. 0107 (2001) 030 hep-th/0106067 [19] I. Antoniadis, R. Minasian and P. Vanhove, Nucl. Phys. B 648 (2003) 69 hep-th/0209030 [20] R. L. Davis, Phys. Rev. D 35 (1987) 3705 [21] G. Dvali, A. Gruzinov and M. Zaldarriaga, Phys. Rev. D 68 (2003) 024012 hep-ph/0212069 [22] A. Lue and G. Starkman, Phys. Rev. D 67 (2003) 064002 astro-ph/0212083 [23] E. Adelberger (2002). Private communication. [24] T. Damour, I. I. Kogan, A. Papazoglou, Phys. Rev. D 66 (2002) 104025 hep-th/0206044 [25] G. Dvali, G. Gabadadze and M. Shifman, Phys. Rev. D 67 (2003) 044020 hep-th/0202174 [26] A. Adams, J. McGreevy and E. Silverstein, hep-th/0209226 [27] N. Arkani-Hamed, S. Dimopoulos, G. Dvali and G. Gabadadze, hep-th/0209227 [28] S. M. Carroll, V. Duvvuri, M. Trodden and M. S. Turner, Phys. Rev. D 70 (2004) 043528 astro-ph/0306438 [29] G. Gabadadze and M. Shifman, Phys. Rev. D 69 (2004) 124032 hep-th/0312289 [30] M. Porrati and G. W. Rombouts, Phys. Rev. D 69 (2004) 122003 hep-th/0401211 SzGeCERN 20060124104603.0 hep-th/0501145 eng Durin, B LPTHE Closed strings in Misner space: a toy model for a Big Bounce? 2005 19 Jan 2005 Misner space, also known as the Lorentzian orbifold $R^{1,1}/boost$, is one of the simplest examples of a cosmological singularity in string theory. In this lecture, we review the semi-classical propagation of closed strings in this background, with a particular emphasis on the twisted sectors of the orbifold. Tree-level scattering amplitudes and the one-loop vacuum amplitude are also discussed. LANL EDS SIS LANLPUBL2006 SIS:2006 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Pioline, B Durin, Bruno Pioline, Boris http://invenio-software.org/download/invenio-demo-site-files/0501145.pdf LPTHE LPTHE, Lptens n 200503 13 20061202 0008 CER01 20050120 002424942 177 cargese20040607 PUBLIC 002503681CER ARTICLE [1] S. Lem, "The Seventh Voyage", in The Star Diaries, Warsaw 1971, English translation New York, 1976. [2] A. Borde and A. Vilenkin, "Eternal inflation and the initial singularity," Phys. Rev. Lett. 72 (1994) 3305 gr-qc/9312022 [3] C. W. Misner, in Relativity Theory and Astrophysics I: Relativity and Cosmology, edited by J. Ehlers, Lectures in Applied Mathematics, Vol. 8 (American Mathematical Society, Providence, 1967), p. 160. [4] M. Berkooz and B. Pioline, "Strings in an electric field, and the Milne universe," J. Cosmol. Astropart. Phys. 0311 (2003) 007 hep-th/0307280 [5] M. Berkooz, B. Pioline and M. Rozali, "Closed strings in Misner space: Cosmological production of winding strings," J. Cosmol. Astropart. Phys. 07 (2004) 003 hep-th/0405126 [6] M. Berkooz, B. Durin, B. Pioline and D. Reichmann, "Closed strings in Misner space: Stringy fuzziness with a twist," J. Cosmol. Astropart. Phys. 0410 (2004) 002 hep-th/0407216 [7] G. T.
Horowitz and A. R. Steif, "Singular String Solutions With Nonsingular Initial Data," Phys. Lett. B 258 (1991) 91 [8] J. Khoury, B. A. Ovrut, N. Seiberg, P. J. Steinhardt and N. Turok, "From big crunch to big bang," Phys. Rev. D 65 (2002) 086007 hep-th/0108187 [9] N. A. Nekrasov, "Milne universe, tachyons, and quantum group," Surveys High Energ. Phys. 17 (2002) 115 hep-th/0203112 [10] V. Balasubramanian, S. F. Hassan, E. Keski-Vakkuri and A. Naqvi, "A space-time orbifold: A toy model for a cosmological singularity," Phys. Rev. D 67 (2003) 026003 hep-th/0202187 [10] R. Biswas, E. Keski-Vakkuri, R. G. Leigh, S. Nowling and E. Sharpe, "The taming of closed time-like curves," J. High Energy Phys. 0401 (2004) 064 hep-th/0304241 [11] I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "Cosmological String Theories And Discrete Inflation," Phys. Lett. B 211 (1988) 393 [11] I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "An Expanding Universe In String Theory," Nucl. Phys. B 328 (1989) 117 [11] I. Antoniadis, C. Bachas, J. R. Ellis and D. V. Nanopoulos, "Comments On Cosmological String Solutions," Phys. Lett. B 257 (1991) 278 [12] C. R. Nappi and E. Witten, "A Closed, expanding universe in string theory," Phys. Lett. B 293 (1992) 309 hep-th/9206078 [13] C. Kounnas and D. Lust, "Cosmological string backgrounds from gauged WZW models," Phys. Lett. B 289 (1992) 56 hep-th/9205046 [14] E. Kiritsis and C. Kounnas, "Dynamical Topology change in string theory," Phys. Lett. B 331 (1994) 51 hep-th/9404092 [15] S. Elitzur, A. Giveon, D. Kutasov and E. Rabinovici, "From big bang to big crunch and beyond," J. High Energy Phys. 0206 (2002) 017 hep-th/0204189 [15] S. Elitzur, A. Giveon and E. Rabinovici, "Removing singularities," J. High Energy Phys. 0301 (2003) 017 hep-th/0212242 [16] L. Cornalba and M. S. Costa, "A New Cosmological Scenario in String Theory," Phys. Rev. D 66 (2002) 066001 hep-th/0203031 [16] L. Cornalba, M. S. Costa and C. Kounnas, "A resolution of the cosmological singularity with orientifolds," Nucl. Phys. B 637 (2002) 378 hep-th/0204261 [16] L. Cornalba and M. S. Costa, "On the classical stability of orientifold cosmologies," Class. Quantum Gravity 20 (2003) 3969 hep-th/0302137 [17] B. Craps, D. Kutasov and G. Rajesh, "String propagation in the presence of cosmological singularities," J. High Energy Phys. 0206 (2002) 053 hep-th/0205101 [17] B. Craps and B. A. Ovrut, "Global fluctuation spectra in big crunch / big bang string vacua," Phys. Rev. D 69 (2004) 066001 hep-th/0308057 [18] E. Dudas, J. Mourad and C. Timirgaziu, "Time and space dependent backgrounds from nonsupersymmetric strings," Nucl. Phys. B 660 (2003) 3 hep-th/0209176 [19] L. Cornalba and M. S. Costa, "Time-dependent orbifolds and string cosmology," Fortschr. Phys. 52 (2004) 145 hep-th/0310099 [20] C. V. Johnson and H. G. Svendsen, "An exact string theory model of closed time-like curves and cosmological singularities," Phys. Rev. D 70 (2004) 126011 hep-th/0405141 [21] N. Toumbas and J. Troost, "A time-dependent brane in a cosmological background," J. High Energy Phys. 0411 (2004) 032 hep-th/0410007 [22] W. A. Hiscock and D. A. Konkowski, "Quantum Vacuum Energy In Taub - Nut (Newman-Unti-Tamburino) Type Cosmologies," Phys. Rev. D 26 (1982) 1225 [23] A. H. Taub, "Empty Space-Times Admitting A Three Parameter Group Of Motions," Ann. Math. 53 (1951) 472 [23] E. Newman, L. Tamburino and T. Unti, "Empty Space Generalization Of The Schwarzschild Metric," J. Math. Phys. 4 (1963) 915 [24] J. G.
Russo, "Cosmological string models from Milne spaces and SL(2,Z) orbifold," arXiv hep-th/0305032 [25] Mod.Phys.Lett. A19 (2004) 421 J. R. I. Gott, "Closed Timelike Curves Produced By Pairs Of Mov-ing Cosmic Strings Exact Solutions," Phys. Rev. Lett. 66 (1991) 1126 [25] J. D. Grant, "Cosmic strings and chronology protection," Phys. Rev. D 47 (1993) 2388 hep-th/9209102 [26] S. W. Hawking, "The Chronology protection conjecture," Phys. Rev. D 46 (1992) 603 [27] Commun.Math.Phys. 256 (2005) 491 D. Kutasov, J. Marklof and G. W. Moore, "Melvin Models and Diophantine Approximation," arXiv hep-th/0407150 [28] C. Gabriel and P. Spindel, "Quantum charged fields in Rindler space," Ann. Phys. 284 (2000) 263 gr-qc/9912016 [29] N. Turok, M. Perry and P. J. Steinhardt, "M theory model of a big crunch / big bang transition," Phys. Rev. D 70 (2004) 106004 hep-th/0408083 [30] C. Bachas and M. Porrati, "Pair Creation Of Open Strings In An Electric Field," Phys. Lett. B 296 (1992) 77 hep-th/9209032 [31] J. M. Maldacena, H. Ooguri and J. Son, "Strings in AdS(3) and the SL(2,R) WZW model. II Euclidean black hole," J. Math. Phys. 42 (2001) 2961 hep-th/0005183 [32] M. Berkooz, B. Craps, D. Kutasov and G. Rajesh, "Comments on cosmological singularities in string theory," arXiv hep-th/0212215 [33] D. J. Gross and P. F. Mende, "The High-Energy Behavior Of String Scattering Amplitudes," Phys. Lett. B 197 (1987) 129 [34] H. Liu, G. Moore and N. Seiberg, "Strings in a time-dependent orbifold," J. High Energy Phys. 0206 (2002) 045 hep-th/0204168 [34] H. Liu, G. Moore and N. Seiberg, "Strings in time-dependent orbifolds," J. High Energy Phys. 0210 (2002) 031 hep-th/0206182 [35] D. Amati, M. Ciafaloni and G. Veneziano, "Class. Quantum Gravity Effects From Planckian Energy Superstring Collisions," Int. J. Mod. Phys. A 3 (1988) 1615 [36] G. T. Horowitz and J. Polchinski, "Instability of spacelike and null orbifold singularities," Phys. Rev. D 66 (2002) 103512 hep-th/0206228 [37] C. R. Nappi and E. Witten, "A WZW model based on a non-semisimple group," Phys. Rev. Lett. 71 (1993) 3751 hep-th/9310112 [38] D. I. Olive, E. Rabinovici and A. Schwimmer, "A Class of string backgrounds Ann. Sci. a semiclassical limit of WZW models," Phys. Lett. B 321 (1994) 361 hep-th/9311081 [39] E. Kiritsis and C. Kounnas, "String Propagation In Gravitational Wave Backgrounds," Phys. Lett. B 320 (1994) 264 [39] [Addendum- Phys. Lett. B 325 (1994) 536 hep-th/9310202 [39] E. Kiritsis, C. Koun-nas and D. Lust, "Superstring gravitational wave backgrounds with space-time supersymmetry," Phys. Lett. B 331 (1994) 321 hep-th/9404114 [40] E. Kiritsis and B. Pioline, "Strings in homogeneous gravitational waves and null holography," J. High Energy Phys. 0208 (2002) 048 hep-th/0204004 [41] Nucl.Phys. B674 (2003) 80 G. D’Appollonio and E. Kiritsis, "String interactions in gravita-tional wave backgrounds," arXiv hep-th/0305081 [42] Y. K. Cheung, L. Freidel and K. Savvidy, "Strings in gravimagnetic fields," J. High Energy Phys. 0402 (2004) 054 hep-th/0309005 [43] O. Aharony, M. Berkooz and E. Silverstein, "Multiple-trace op-erators and non-local string theories," J. High Energy Phys. 0108 (2001) 006 hep-th/0105309 [43] M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities," J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [43] E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence," arXiv hep-th/0112258 [44] T. Damour, M. Henneaux and H. 
Nicolai, "Cosmological billiards," Class. Quantum Gravity 20 (2003) R145 hep-th/0212256 SzGeCERN 20060713170102.0 hep-th/0606038 eng DESY-06-083 DESY-2006-083 Papadimitriou, I DESY Non-Supersymmetric Membrane Flows from Fake Supergravity and Multi-Trace Deformations 2007 Hamburg DESY 5 Jun 2006 45 p We use fake supergravity as a solution generating technique to obtain a continuum of non-supersymmetric asymptotically $AdS_4\times S^7$ domain wall solutions of eleven-dimensional supergravity with non-trivial scalars in the $SL(8,\mathbb{R})/SO(8)$ coset. These solutions are continuously connected to the supersymmetric domain walls describing a uniform sector of the Coulomb branch of the $M2$-brane theory. We also provide a general argument that identifies the fake superpotential with the exact large-N quantum effective potential of the dual theory, thus arriving at a very general description of multi-trace deformations in the AdS/CFT correspondence, which strongly motivates further study of fake supergravity as a solution generating method. This identification allows us to interpret our non-supersymmetric solutions as a family of marginal triple-trace deformations of the Coulomb branch that completely break supersymmetry and to calculate the exact large-N anomalous dimensions of the operators involved. The holographic one- and two-point functions for these solutions are also computed. LANL EDS SIS JHEP2007 SIS:200703 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Papadimitriou, Ioannis 008 J. High Energy Phys. 02 2007 http://invenio-software.org/download/invenio-demo-site-files/0606038.pdf n 200623 13 20070307 2032 CER01 20060607 PUBLIC 002623855CER ARTICLE [1] M. Cvetic and H. H. Soleng, "Supergravity domain walls," Phys. Rep. 282 (1997) 159 hep-th/9604090 [2] D. Z. Freedman, C. Nunez, M. Schnabl and K. Skenderis, "Fake supergravity and domain wall stability," Phys. Rev. D 69 (2004) 104027 hep-th/0312055 [3] A. Celi, A. Ceresole, G. Dall’Agata, A. Van Proeyen and M. Zagermann, "On the fakeness of fake supergravity," Phys. Rev. D 71 (2005) 045009 hep-th/0410126 [4] K. Skenderis and P. K. Townsend, "Gravitational stability and renormalization-group flow," Phys. Lett. B 468 (1999) 46 hep-th/9909070 [5] I. Bakas, A. Brandhuber and K. Sfetsos, "Domain walls of gauged supergravity, M-branes, and algebraic curves," Adv. Theor. Math. Phys. 3 (1999) 1657 hep-th/9912132 [6] M. Zagermann, "N = 4 fake supergravity," Phys. Rev. D 71 (2005) 125007 hep-th/0412081 [7] K. Skenderis and P. K. Townsend, "Hidden supersymmetry of domain walls and cosmologies," arXiv hep-th/0602260 [8] K. Skenderis, private communication. [9] P. K. Townsend, "Positive Energy And The Scalar Potential In Higher Dimensional (Super)Gravity Theories," Phys. Lett. B 148 (1984) 55 [10] O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, "Modeling the fifth dimension with scalars and gravity," Phys. Rev. D 62 (2000) 046008 hep-th/9909134 [11] S. S. Gubser, "Curvature singularities The good, the bad, and the naked," Adv. Theor. Math. Phys. 4 (2002) 679 hep-th/0002160 [12] I. Papadimitriou and K. Skenderis, "AdS / CFT correspondence and geometry," arXiv hep-th/0404176 [13] V. L. Campos, G. Ferretti, H. Larsson, D. Martelli and B. E. W. Nilsson, "A study of holographic renormalization group flows in d = 6 and d = 3," J. High Energy Phys. 0006 (2000) 023 hep-th/0003151 [14] M. Cvetic, S. S. Gubser, H. Lu and C. N. 
Pope, "Symmetric potentials of gauged supergravities in diverse dimensions and Coulomb branch of gauge theories," Phys. Rev. D 62 (2000) 086003 hep-th/9909121 [15] M. Cvetic, H. Lu, C. N. Pope and A. Sadrzadeh, "Consistency of Kaluza-Klein sphere reductions of symmetric potentials," Phys. Rev. D 62 (2000) 046005 hep-th/0002056 [16] P. Kraus, F. Larsen and S. P. Trivedi, "The Coulomb branch of gauge theory from rotating branes," J. High Energy Phys. 9903 (1999) 003 hep-th/9811120 [17] D. Z. Freedman, S. S. Gubser, K. Pilch and N. P. Warner, "Continuous distributions of D3-branes and gauged supergravity," J. High Energy Phys. 0007 (2000) 038 hep-th/9906194 [18] I. Bakas and K. Sfetsos, "States and curves of five-dimensional gauged supergravity," Nucl. Phys. B 573 (2000) 768 hep-th/9909041 [19] C. Martinez, R. Troncoso and J. Zanelli, "Exact black hole solution with a minimally coupled scalar field," Phys. Rev. D 70 (2004) 084035 hep-th/0406111 [20] B. de Wit and H. Nicolai, "The Consistency Of The S7 Truncation In D = 11 Supergravity," Nucl. Phys. B 281 (1987) 211 [21] H. Nastase, D. Vaman and P. van Nieuwenhuizen, "Consistent Nonlinearity K K reduction of 11d supergravity on AdS7 × S4 and self-duality in odd dimensions," Phys. Lett. B 469 (1999) 96 hep-th/9905075 [22] H. Nastase, D. Vaman and P. van Nieuwenhuizen, "Consistency of the AdS7 × S4 reduction and the origin of self-duality in odd dimensions," Nucl. Phys. B 581 (2000) 179 hep-th/9911238 [23] P. Breitenlohner and D. Z. Freedman, "Stability In Gauged Extended Supergravity," Ann. Phys. 144 (1982) 249 [24] I. R. Klebanov and E. Witten, "AdS/CFT correspondence and Symmetry breaking," Nucl. Phys. B 556 (1999) 89 hep-th/9905104 [25] Dr. E. Kamke, Differentialgleichungen Lösungsmethoden und Lösungen, Chelsea Publishing Company, 1971. [26] E. S. Cheb-Terrab and A. D. Roche, "Abel ODEs Equivalence and Integrable Classes," Comput. Phys. Commun. 130, Issues 1- : 2 (2000) 204 [arXiv math-ph/0001037 [26] E. S. Cheb-Terrab and A. D. Roche, "An Abel ordinary differential equation class generalizing known integrable classes," European J. Appl. Math. 14 (2003) 217 math.GM/0002059 [26] V. M. Boyko, "Symmetry, Equivalence and Integrable Classes of Abel’s Equations," Proceedings of the Institute of Mathematics of the NAS of Ukraine 50, Part : 1 (2004) 47 [arXiv nlin.SI/0404020 [27] M. J. Duff and J. T. Liu, "Anti-de Sitter black holes in gauged N = 8 supergravity," Nucl. Phys. B 554 (1999) 237 hep-th/9901149 [28] M. Cvetic et al., "Embedding AdS black holes in ten and eleven dimensions," Nucl. Phys. B 558 (1999) 96 hep-th/9903214 [29] J. de Boer, E. P. Verlinde and H. L. Verlinde, "On the holographic renormalization group," J. High Energy Phys. 0008 (2000) 003 hep-th/9912012 [30] M. Bianchi, D. Z. Freedman and K. Skenderis, "How to go with an RG flow," J. High Energy Phys. 0108 (2001) 041 hep-th/0105276 [31] I. Papadimitriou and K. Skenderis, "Correlation functions in holographic RG flows," J. High Energy Phys. 0410 (2004) 075 hep-th/0407071 [32] M. Henningson and K. Skenderis, "The holographic Weyl anomaly," J. High Energy Phys. 9807 (1998) 023 hep-th/9806087 [33] V. Balasubramanian and P. Kraus, "A stress tensor for anti-de Sitter gravity," Commun. Math. Phys. 208 (1999) 413 hep-th/9902121 [34] P. Kraus, F. Larsen and R. Siebelink, "The gravitational action in asymptotically AdS and flat spacetimes," Nucl. Phys. B 563 (1999) 259 hep-th/9906127 [35] S. de Haro, S. N. Solodukhin and K. 
Skenderis, "Holographic reconstruction of spacetime and renormalization in the AdS/CFT correspondence," Commun. Math. Phys. 217 (2001) 595 hep-th/0002230 [36] M. Bianchi, D. Z. Freedman and K. Skenderis, "Holographic renormalization," Nucl. Phys. B 631 (2002) 159 hep-th/0112119 [37] D. Martelli and W. Muck, "Holographic renormalization and Ward identities with the Hamilton-Jacobi method," Nucl. Phys. B 654 (2003) 248 hep-th/0205061 [38] K. Skenderis, "Lecture notes on holographic renormalization," Class. Quantum Gravity 19 (2002) 5849 hep-th/0209067 [39] D. Z. Freedman, S. D. Mathur, A. Matusis and L. Rastelli, "Correlation functions in the CFT(d)/AdS(d + 1) correspondence," Nucl. Phys. B 546 (1999) 96 hep-th/9804058 [40] O. DeWolfe and D. Z. Freedman, "Notes on fluctuations and correlation functions in holographic renormalization group flows," arXiv hep-th/0002226 [41] W. Muck, "Correlation functions in holographic renormalization group flows," Nucl. Phys. B 620 (2002) 477 hep-th/0105270 [42] M. Bianchi, M. Prisco and W. Muck, "New results on holographic three-point functions," J. High Energy Phys. 0311 (2003) 052 hep-th/0310129 [43] E. Witten, "Multi-trace operators, boundary conditions, and AdS/CFT correspondence," arXiv hep-th/0112258 [44] M. Berkooz, A. Sever and A. Shomer, "Double-trace deformations, boundary conditions and spacetime singularities," J. High Energy Phys. 0205 (2002) 034 hep-th/0112264 [45] W. Muck, "An improved correspondence formula for AdS/CFT with multi-trace operators," Phys. Lett. B 531 (2002) 301 hep-th/0201100 [46] P. Minces, "Multi-trace operators and the generalized AdS/CFT prescription," Phys. Rev. D 68 (2003) 024027 hep-th/0201172 [47] A. Sever and A. Shomer, "A note on multi-trace deformations and AdS/CFT," J. High Energy Phys. 0207 (2002) 027 hep-th/0203168 [48] S. S. Gubser and I. R. Klebanov, "A universal result on central charges in the presence of double-trace deformations," Nucl. Phys. B 656 (2003) 23 hep-th/0212138 [49] O. Aharony, M. Berkooz and B. Katz, "Non-local effects of multi-trace deformations in the AdS/CFT correspondence," J. High Energy Phys. 0510 (2005) 097 hep-th/0504177 [50] S. Elitzur, A. Giveon, M. Porrati and E. Rabinovici, "Multitrace deformations of vector and adjoint theories and their holographic duals," J. High Energy Phys. 0602 (2006) 006 hep-th/0511061 [51] R. Corrado, K. Pilch and N. P. Warner, "An N = 2 supersymmetric membrane flow," Nucl. Phys. B 629 (2002) 74 hep-th/0107220 [52] T. Hertog and K. Maeda, "Black holes with scalar hair and asymptotics in N = 8 supergravity," J. High Energy Phys. 0407 (2004) 051 hep-th/0404261 [53] T. Hertog and G. T. Horowitz, "Towards a big crunch dual," J. High Energy Phys. 0407 (2004) 073 hep-th/0406134 [54] T. Hertog and G. T. Horowitz, "Designer gravity and field theory effective potentials," Phys. Rev. Lett. 94 (2005) 221301 hep-th/0412169 [55] T. Hertog and G. T. Horowitz, "Holographic description of AdS cosmologies," J. High Energy Phys. 0504 (2005) 005 hep-th/0503071 [56] S. de Haro, I. Papadimitriou and A. C. Petkou, "Conformally coupled scalars, instantons and Vacuum instability in AdS(4)," [arXiv hep-th/0611315 SzGeCERN 20060616163757.0 hep-th/0606096 eng UTHET-2006-05-01 Koutsoumbas, G National Technical University of Athens Quasi-normal Modes of Electromagnetic Perturbations of Four-Dimensional Topological Black Holes with Scalar Hair 2006 10 Jun 2006 17 p We study the perturbative behaviour of topological black holes with scalar hair. 
We calculate both analytically and numerically the quasi-normal modes of the electromagnetic perturbations. In the case of small black holes we find clear evidence of a second-order phase transition of a topological black hole to a hairy configuration. We also find evidence of a second-order phase transition of the AdS vacuum solution to a topological black hole. LANL EDS SIS JHEP2007 SIS:200702 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE LANL EDS High Energy Physics - Theory Musiri, S Papantonopoulos, E Siopsis, G Koutsoumbas, George Musiri, Suphot Papantonopoulos, Eleftherios Siopsis, George 006 J. High Energy Phys. 10 2006 http://invenio-software.org/download/invenio-demo-site-files/0606096.pdf n 200624 13 20070425 1021 CER01 20060613 PUBLIC 002628325CER ARTICLE [1] K. D. Kokkotas and B. G. Schmidt, Living Rev. Relativ. 2 (1999) 2 gr-qc/9909058 [2] H.-P. Nollert, Class. Quantum Gravity 16 (1999) R159 [3] J. S. F. Chan and R. B. Mann, Phys. Rev. D 55 (1997) 7546 gr-qc/9612026 [3] Phys. Rev. D 59 (1999) 064025 [4] G. T. Horowitz and V. E. Hubeny, Phys. Rev. D 62 (2000) 024027 hep-th/9909056 [5] V. Cardoso and J. P. S. Lemos, Phys. Rev. D 64 (2001) 084017 gr-qc/0105103 [6] B. Wang, C. Y. Lin and E. Abdalla, Phys. Lett. B 481 (2000) 79 hep-th/0003295 [7] E. Berti and K. D. Kokkotas, Phys. Rev. D 67 (2003) 064020 gr-qc/0301052 [8] F. Mellor and I. Moss, Phys. Rev. D 41 (1990) 403 [9] C. Martinez and J. Zanelli, Phys. Rev. D 54 (1996) 3830 gr-qc/9604021 [10] M. Henneaux, C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D 65 (2002) 104007 hep-th/0201170 [11] C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D 67 (2003) 024008 hep-th/0205319 [12] N. Bocharova, K. Bronnikov and V. Melnikov, Vestn. Mosk. Univ. Fiz. Astron. 6 (1970) 706 [12] J. D. Bekenstein, Ann. Phys. 82 (1974) 535 [12] Ann. Phys. 91 (1975) 75 [13] T. Torii, K. Maeda and M. Narita, Phys. Rev. D 64 (2001) 044007 [14] E. Winstanley, Found. Phys. 33 (2003) 111 gr-qc/0205092 [15] T. Hertog and K. Maeda, J. High Energy Phys. 0407 (2004) 051 hep-th/0404261 [16] J. P. S. Lemos, Phys. Lett. B 353 (1995) 46 gr-qc/9404041 [17] R. B. Mann, Class. Quantum Gravity 14 (1997) L109 gr-qc/9607071 [17] R. B. Mann, Nucl. Phys. B 516 (1998) 357 hep-th/9705223 [18] L. Vanzo, Phys. Rev. D 56 (1997) 6475 gr-qc/9705004 [19] D. R. Brill, J. Louko and P. Peldan, Phys. Rev. D 56 (1997) 3600 gr-qc/9705012 [20] D. Birmingham, Class. Quantum Gravity 16 (1999) 1197 hep-th/9808032 [21] R. G. Cai and K. S. Soh, Phys. Rev. D 59 (1999) 044013 gr-qc/9808067 [22] B. Wang, E. Abdalla and R. B. Mann, Phys. Rev. D 65 (2002) 084006 hep-th/0107243 [23] R. B. Mann, arXiv gr-qc/9709039 [24] J. Crisostomo, R. Troncoso and J. Zanelli, Phys. Rev. D 62 (2000) 084013 hep-th/0003271 [25] R. Aros, R. Troncoso and J. Zanelli, Phys. Rev. D 63 (2001) 084015 hep-th/0011097 [26] R. G. Cai, Y. S. Myung and Y. Z. Zhang, Phys. Rev. D 65 (2002) 084019 hep-th/0110234 [27] M. H. Dehghani, Phys. Rev. D 70 (2004) 064019 hep-th/0405206 [28] C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D 70 (2004) 084035 hep-th/0406111 [29] C. Martinez, J. P. Staforelli and R. Troncoso, Phys. Rev. D 74 (2006) 044028 hep-th/0512022 [29] C. Martinez and R. Troncoso, Phys. Rev. D 74 (2006) 064007 hep-th/0606130 [30] E. Winstanley, Class. Quantum Gravity 22 (2005) 2233 gr-qc/0501096 [30] E. Radu and E. Winstanley, Phys. Rev. D 72 (2005) 024017 gr-qc/0503095 [30] A. M. Barlow, D. Doherty and E. Winstanley, Phys. Rev. 
D 72 (2005) 024008 gr-qc/0504087 [31] I. Papadimitriou, J. High Energy Phys. 0702 (2007) 008 hep-th/0606038 [32] P. Breitenlohner and D. Z. Freedman, Phys. Lett. B 115 (1982) 197 [32] Ann. Phys. 144 (1982) 249 [33] L. Mezincescu and P. K. Townsend, Ann. Phys. 160 (1985) 406 [34] V. Cardoso, J. Natario and R. Schiappa, J. Math. Phys. 45 (2004) 4698 hep-th/0403132 [35] J. Natario and R. Schiappa, Adv. Theor. Math. Phys. 8 (2004) 1001 hep-th/0411267 [36] S. Musiri, S. Ness and G. Siopsis, Phys. Rev. D 73 (2006) 064001 hep-th/0511113 [37] L. Motl and A. Neitzke, Adv. Theor. Math. Phys. 7 (2003) 307 hep-th/0301173 [38] A. J. M. Medved, D. Martin and M. Visser, Class. Quantum Gravity 21 (2004) 2393 gr-qc/0310097 [39] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery in Numerical Recipes (Cambridge University Press, Cambridge, England, 1992). [40] G. Koutsoumbas, S. Musiri, E. Papantonopoulos and G. Siopsis, in preparation. SzGeCERN hep-th/0703265 eng IGPG-07-3-4 Alexander, S The Pennsylvania State University A new PPN parameter to test Chern-Simons gravity 2007 28 Mar 2007 4 p We study Chern-Simons (CS) gravity in the parameterized post-Newtonian (PPN) framework through weak-field solutions of the modified field equations for a perfect fluid source. We discover that CS gravity possesses the same PPN parameters as general relativity, except for the inclusion of a new term, proportional both to the CS coupling parameter and the curl of the PPN vector potentials. This new term encodes the key physical effect of CS gravity in the weak-field limit, leading to a modification of frame dragging and, thus, the Lense-Thirring contribution to gyroscopic precession. We provide a physical interpretation for the new term, as well as an estimate of the size of this effect relative to the general relativistic Lense-Thirring prediction. This correction to frame dragging might be used in experiments, such as Gravity Probe B and lunar ranging, to place bounds on the CS coupling parameter, as well as other intrinsic parameters of string theory. LANL EDS SzGeCERN Particle Physics - Theory PREPRINT LANL EDS High Energy Physics - Theory Yunes, N Alexander, Stephon Yunes, Nicolas Phys. Rev. Lett. http://invenio-software.org/download/invenio-demo-site-files/0703265.pdf yunes@gravity.psu.edu Uploader Engine uploader@sundh99.cern.ch n 200713 11 20070417 2012 CER01 20070330 PUBLIC 002685163CER PREPRINT [1] J. Polchinski, String theory. Vol. 2: Superstring theory and beyond (Cambridge University Press, Cambridge, UK, 1998). [2] S. H. S. Alexander, M. E. Peskin, and M. M. Sheikh-Jabbari, Phys. Rev. Lett. 96 (2006) 081301 [3] A. Lue, L.-M. Wang, and M. Kamionkowski, Phys. Rev. Lett. 83 (1999) 1506 astro-ph/9812088 [4] C. M. Will, Theory and experiment in gravitational physics (Cambridge Univ. Press, Cambridge, UK, 1993). [5] C. M. Will, Phys. Rev. D 57 (1998) 2061 gr-qc/9709011 [6] C. M. Will and N. Yunes, Class. Quantum Gravity 21 (2004) 4367 [7] E. Berti, A. Buonanno, and C. M. Will, Phys. Rev. D 71 (2005) 084025 [8] A discussion of the history, technology and physics of Gravity Probe B can be found at http://einstein.stanford.edu [9] T. W. Murphy, Jr., K. Nordtvedt, and S. G. Turyshev, Phys. Rev. Lett. 98 (2007) 071102 gr-qc/0702028 [10] R. Jackiw and S. Y. Pi, Phys. Rev. D 68 (2003) 104012 [11] D. Guarrera and A. J. Hariton, Phys. Rev. D 76 (2007) 044011 gr-qc/0702029 [12] S. Alexander and J. Martin, Phys. Rev. D 71 (2005) 063526 hep-th/0410230 [13] R. J. Gleiser and C. 
N. Kozameh, Phys. Rev. D 64 (2001) 083007 gr-qc/0102093 [14] R. H. Brandenberger and C. Vafa, Nucl. Phys. B 316 (1989) 391 [15] L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 4690 hep-th/9906064 [16] S. Alexander and N. Yunes (2007), in progress. [17] L. Blanchet, Living Rev. Relativ. 9 (2006) 4, and references therein, gr-qc/0202016 [18] S. Alexander, L. S. Finn, and N. Yunes, in progress (2007). SzGeCERN 0237765CERCER SLAC 3455840 hep-th/9611103 eng PUPT-1665 Periwal, V Princeton University Matrices on a point as the theory of everything 1997 Princeton, NJ Princeton Univ. Joseph-Henry Lab. Phys. 14 Nov 1996 5 p It is shown that the world-line can be eliminated in the matrix quantum mechanics conjectured by Banks, Fischler, Shenker and Susskind to describe the light-cone physics of M theory. The resulting matrix model has a form that suggests origins in the reduction to a point of a Yang-Mills theory. The reduction of the Nishino-Sezgin $10+2$ dimensional supersymmetric Yang-Mills theory to a point gives a matrix model with the appropriate features: Lorentz invariance in $9+1$ dimensions, supersymmetry, and the correct number of physical degrees of freedom. SIS UNC98 LANL EDS SzGeCERN Particle Physics - Theory ARTICLE Periwal, Vipul 1711 4 Phys. Rev. D 55 1997 http://invenio-software.org/download/invenio-demo-site-files/9611103.pdf vipul@viper.princeton.edu n 199648 13 20070310 0012 CER01 19961115 PUBLIC 000237765CER ARTICLE 1. T. Banks, W. Fischler, S. Shenker and L. Susskind hep-th/9610043 2. E. Witten Nucl. Phys. B 460 (1995) 335 3. B. de Wit, J. Hoppe and H. Nicolai Nucl. Phys. B 305 (1988) 545 4. M. Berkooz and M. Douglas hep-th/9610236 5. H. Nishino and E. Sezgin hep-th/9607185 6. M. Blencowe and M. Duff Nucl. Phys. B 310 (1988) 387 7. C. Vafa Nucl. Phys. B 469 (1996) 403 8. C. Hull hep-th/9512181 9. D. Kutasov and E. Martinec hep-th/9602049 10. I. Bars hep-th/9607112 11. For some background on the choice of 10+2, see e.g. L. Castellani, P. Fré, F. Giani, K. Pilch and P. van Nieuwenhuizen Phys. Rev. D 26 (1982) 1481 12. A. Connes Non-commutative Geometry, Academic Press (San Diego, 1994) 0289446CERCER SLAC 3838510 hep-th/9809057 eng Polyakov, A M Princeton University The wall of the cave 1999 In this article old and new relations between gauge fields and strings are discussed. We add new arguments that the Yang-Mills theories must be described by non-critical strings in the five-dimensional curved space. The physical meaning of the fifth dimension is that of the renormalization scale represented by the Liouville field. We analyze the meaning of the zigzag symmetry and show that it is likely to be present if there is a minimal supersymmetry on the world sheet. We also present the new string backgrounds which may be relevant for the description of the ordinary bosonic Yang-Mills theories. The article is written on the occasion of the 40th anniversary of the IHES. 
SIS LANLPUBL2001 LANL EDS SIS:2001 PR/LKR added SzGeCERN Particle Physics - Theory ARTICLE 645-658 Int. J. Mod. Phys. A 14 1999 polyakov@puhep1.princeton.edu n 199837 13 20060916 0007 CER01 19980910 PUBLIC 000289446CER ARTICLE http://invenio-software.org/download/invenio-demo-site-files/9809057.pdf [1] K. Wilson Phys. Rev. D 10 (1974) 2445 [2] A. Polyakov Phys. Lett. B 59 (1975) 82 [3] A. Polyakov Nucl. Phys. B 120 (1977) 429 [4] S. Mandelstam Phys. Rep., C 23 (1976) 245 [5] G. ’t Hooft in High Energy Phys., Zichichi editor, Bologna (1976) [6] A. Polyakov hep-th/9711002 [7] I. Klebanov Nucl. Phys. B 496 (1997) 231 [8] J. Maldacena hep-th/9711200 [9] S. Gubser, I. Klebanov, A. Polyakov hep-th/9802109 [10] E. Witten hep-th/9802150 [11] L. Brink, P. di Vecchia, P. Howe Phys. Lett. B 63 (1976) 471 [12] S. Deser, B. Zumino Phys. Lett. B 65 (1976) 369 [13] A. Polyakov Phys. Lett. B 103 (1981) 207 [14] T. Curtright, C. Thorn Phys. Rev. Lett. 48 (1982) 1309 [15] J. Gervais, A. Neveu Nucl. Phys. B 199 (1982) 59 [16] J. Polchinski Nucl. Phys. B 346 (1990) 253 [17] C. Callan, E. Martinec, M. Perry, D. Friedan Nucl. Phys. B 262 (1985) 593 [18] A. Polyakov Proceedings of Les Houches (1992) [19] A. Polyakov Nucl. Phys. B 164 (1980) 171 [20] Y. Makeenko, A. Migdal Nucl. Phys. B 188 (1981) 269 [21] H. Verlinde hep-th/9705029 [22] A. Migdal Nucl. Phys. B, Proc. Suppl. 41 (1995) 51 [23] J. Maldacena hep-th/9803002 [24] G. Horowitz, A. Strominger Nucl. Phys. B 360 (1991) 197 [25] A. Lukas, B. Ovrut, D. Waldram hep-th/9802041 [26] K. Wilson Phys. Rev. 179 (1969) 1499 [27] A. Polyakov Zh. Eksp. Teor. Fiz. 59 (1970) 542 [27] Pis'ma Zh. Eksp. Teor. Fiz. 12 (1970) 538 [28] A. Peet, J. Polchinski hep-th/9809022 [29] E. Fradkin, A. Tseytlin Phys. Lett. B 178 (1986) 34 [30] S. Gubser, I. Klebanov hep-th/9708005 SzGeCERN 2174811CERCER SLAC 4308492 hep-ph/0002060 eng ACT-2000-1 CTP-TAMU-2000-2 OUTP-2000-03-P TPI-MINN-2000-6 Cleaver, G B Non-Abelian Flat Directions in a Minimal Superstring Standard Model 2000 Houston, TX Houston Univ. Adv. Res. Cent. The Woodlands 4 Feb 2000 14 p Recently, by studying exact flat directions of non-Abelian singlet fields, we demonstrated the existence of free fermionic heterotic-string models in which the SU(3)_C x SU(2)_L x U(1)_Y-charged matter spectrum, just below the string scale, consists solely of the MSSM spectrum. In this paper we generalize the analysis to include VEVs of non-Abelian fields. We find several, MSSM-producing, exact non-Abelian flat directions, which are the first such examples in the literature. We examine the possibility that hidden sector condensates lift the flat directions. LANL EDS SIS LANLPUBL2001 SIS:2001 PR/LKR added SzGeCERN Particle Physics - Phenomenology ARTICLE Faraggi, A E Nanopoulos, Dimitri V Walker, J W Walker, Joel W. 10.1142/S0217732300001444 1191-1202 Mod. Phys. Lett. A 15 2000 gcleaver@rainbow.physics.tamu.edu n 200006 13 20070425 1017 CER01 20000207 PUBLIC 002174811CER ARTICLE [1] A.E. Faraggi and D.V. Nanopoulos and L. Yuan Nucl. Phys. B 335 (1990) 347 [2] I. Antoniadis and J. Ellis and J. Hagelin and D.V. Nanopoulos Phys. Lett. B 213 (1989) 65 [3] I. Antoniadis and C. Bachas and C. Kounnas Nucl. Phys. B 289 (1987) 87 [4] A.E. Faraggi and D.V. Nanopoulos Phys. Rev. D 48 (1993) 3288 [5] G.B. Cleaver and A.E. Faraggi and D.V. Nanopoulos and L. Yuan Phys. Lett. B 455 (1999) 135 [6] hep-ph/9904301 [7] hep-ph/9910230 [8] Phys. Lett. B 256 (1991) 150 [10] hep-ph/9511426 [12] J. Ellis, K. Enqvist, D.V. Nanopoulos Phys. Lett. B 151 (1985) 357 [13] P. Horava Phys. Rev. 
D 54 (1996) 7561 http://invenio-software.org/download/invenio-demo-site-files/0002060.pdf http://invenio-software.org/download/invenio-demo-site-files/0002060.ps.gz SzGeCERN 20060914104330.0 INIS 34038281 UNCOVER 251,129,189,013 eng SCAN-0005061 TESLA-FEL-99-07 Treusch, R Development of photon beam diagnostics for VUV radiation from a SASE FEL 2000 Hamburg DESY Dec 1999 For the proof-of-principle experiment of self-amplified spontaneous emission (SASE) at short wavelengths on the VUV FEL at DESY a multi-facetted photon beam diagnostics experiment has been developed employing new detection concepts to measure all SASE specific properties on a single pulse basis. The present setup includes instrumentation for the measurement of the energy and the angular and spectral distribution of individual photon pulses. Different types of photon detectors such as PtSi-photodiodes and fast thermoelectric detectors based on YBaCuO-films are used to cover some five orders of magnitude of intensity from the level of spontaneous emission to FEL radiation at saturation. A 1 m normal incidence monochromator in combination with a fast intensified CCD camera makes it possible to select single photon pulses and to record the full spectrum at high resolution, resolving the fine structure due to the start-up from noise. SIS INIS2004 SIS UNC2002 Development of photon beam diagnostics for VUV radiation from a SASE FEL SzGeCERN Accelerators and Storage Rings ARTICLE INIS Particle accelerators INIS ceramics- INIS desy- INIS far-ultraviolet-radiation INIS free-electron-lasers INIS photodiodes- INIS photon-beams INIS superradiance- INIS thin-films INIS x-ray-detection INIS x-ray-sources INIS accelerators- INIS beams- INIS cyclic-accelerators INIS detection- INIS electromagnetic-radiation INIS emission- INIS energy-level-transitions INIS films- INIS lasers- INIS photon-emission INIS radiation-detection INIS radiation-sources INIS radiations- INIS semiconductor-devices INIS semiconductor-diodes INIS stimulated-emission INIS synchrotrons- INIS ultraviolet-radiation Lokajczyk, T Xu, W Jastrow, U Hahn, U Bittner, L Feldhaus, J 456-462 1-3 Nucl. Instrum. Methods Phys. Res., A 445 2000 n 200430 13 20061230 0016 CER01 20040727 000289917 456-462 hamburg990823 PUBLIC 002471378CER ARTICLE http://invenio-software.org/download/invenio-demo-site-files/convert_SCAN-0005061.pdf SzGeCERN 20070110102840.0 0008580CERCER eng SCAN-9709037 UCRL-8417 Orear, J Notes on statistics for physicists Statistics for physicists 1958 Berkeley, CA Lawrence Berkeley Nat. Lab. 13 Aug 1958 34 p SzGeCERN Mathematical Physics and Mathematics PREPRINT oai:cds.cern.ch:SCAN-9709037 cerncds:SCAN cerncds:FULLTEXT h 199700 11 20070110 1028 CER01 19900127 PUBLIC PREPRINT DRAFT 0001 000008580CER http://invenio-software.org/download/invenio-demo-site-files/9709037.pdf BUL-NEWS-2009-001 eng Charles Darwin A naturalist's voyage around the world <!--HTML--><p class="articleHeader">After having been twice driven back by heavy south-western gales, Her Majesty's ship "Beagle," a ten-gun brig, under the command of Captain Fitz Roy, R.N., sailed from Devonport on the 27th of December, 1831. 
The object of the expedition was to complete the survey of Patagonia and Tierra del Fuego, commenced under Captain King in 1826 to 1830--to survey the shores of Chile, Peru, and of some islands in the Pacific--and to carry a chain of chronometrical measurements round the World.</p> <div class="phwithcaption"> <div class="imageScale"><img alt="" src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_hms_beagle_image.gif" /></div> <p>H.M.S. Beagle</p> </div> <p>On the 6th of January we reached Teneriffe, but were prevented landing, by fears of our bringing the cholera: the next morning we saw the sun rise behind the rugged outline of the Grand Canary Island, and suddenly illumine the Peak of Teneriffe, whilst the lower parts were veiled in fleecy clouds. This was the first of many delightful days never to be forgotten. On the 16th of January 1832 we anchored at Porto Praya, in St. Jago, the chief island of the Cape de Verd archipelago.</p> <p>The neighbourhood of Porto Praya, viewed from the sea, wears a desolate aspect. The volcanic fires of a past age, and the scorching heat of a tropical sun, have in most places rendered the soil unfit for vegetation. The country rises in successive steps of table-land, interspersed with some truncate conical hills, and the horizon is bounded by an irregular chain of more lofty mountains. The scene, as beheld through the hazy atmosphere of this climate, is one of great interest; if, indeed, a person, fresh from sea, and who has just walked, for the first time, in a grove of cocoa-nut trees, can be a judge of anything but his own happiness. The island would generally be considered as very uninteresting, but to any one accustomed only to an English landscape, the novel aspect of an utterly sterile land possesses a grandeur which more vegetation might spoil. A single green leaf can scarcely be discovered over wide tracts of the lava plains; yet flocks of goats, together with a few cows, contrive to exist. It rains very seldom, but during a short portion of the year heavy torrents fall, and immediately afterwards a light vegetation springs out of every crevice. This soon withers; and upon such naturally formed hay the animals live. It had not now rained for an entire year. When the island was discovered, the immediate neighbourhood of Porto Praya was clothed with trees,1 the reckless destruction of which has caused here, as at St. Helena, and at some of the Canary islands, almost entire sterility. The broad, flat-bottomed valleys, many of which serve during a few days only in the season as watercourses, are clothed with thickets of leafless bushes. Few living creatures inhabit these valleys. The commonest bird is a kingfisher (Dacelo Iagoensis), which tamely sits on the branches of the castor-oil plant, and thence darts on grasshoppers and lizards. It is brightly coloured, but not so beautiful as the European species: in its flight, manners, and place of habitation, which is generally in the driest valley, there is also a wide difference. One day, two of the officers and myself rode to Ribeira Grande, a village a few miles eastward of Porto Praya. Until we reached the valley of St. Martin, the country presented its usual dull brown appearance; but here, a very small rill of water produces a most refreshing margin of luxuriant vegetation. In the course of an hour we arrived at Ribeira Grande, and were surprised at the sight of a large ruined fort and cathedral. 
This little town, before its harbour was filled up, was the principal place in the island: it now presents a melancholy, but very picturesque appearance. Having procured a black Padre for a guide, and a Spaniard who had served in the Peninsular war as an interpreter, we visited a collection of buildings, of which an ancient church formed the principal part. It is here the governors and captain-generals of the islands have been buried. Some of the tombstones recorded dates of the sixteenth century.1 The heraldic ornaments were the only things in this retired place that reminded us of Europe. The church or chapel formed one side of a quadrangle, in the middle of which a large clump of bananas were growing. On another side was a hospital, containing about a dozen miserable-looking inmates.</p> <p>We returned to the Vênda to eat our dinners. A considerable number of men, women, and children, all as black as jet, collected to watch us. Our companions were extremely merry; and everything we said or did was followed by their hearty laughter. Before leaving the town we visited the cathedral. It does not appear so rich as the smaller church, but boasts of a little organ, which sent forth singularly inharmonious cries. We presented the black priest with a few shillings, and the Spaniard, patting him on the head, said, with much candour, he thought his colour made no great difference. We then returned, as fast as the ponies would go, to Porto Praya.</p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 1, By Charles Darwin) <!--HTML--><br /> 3 02/2009 Atlantis Times 3 03/2009 Atlantis Times ATLANTISTIMESNEWS http://invenio-software.org/download/invenio-demo-site-files/journal_hms_beagle_image.gif http://invenio-software.org/download/invenio-demo-site-files/icon-journal_hms_beagle_image.gif BUL-NEWS-2009-002 eng Plato Atlantis (Critias) <!--HTML--><p class="articleHeader">I have before remarked in speaking of the allotments of the gods, that they distributed the whole earth into portions differing in extent, and made for themselves temples and instituted sacrifices. And Poseidon, receiving for his lot the island of Atlantis, begat children by a mortal woman, and settled them in a part of the island, which I will describe.</p> <p>Looking towards the sea, but in the centre of the whole island, there was a plain which is said to have been the fairest of all plains and very fertile. Near the plain again, and also in the centre of the island at a distance of about fifty stadia, there was a mountain not very high on any side. In this mountain there dwelt one of the earth-born primeval men of that country, whose name was Evenor, and he had a wife named Leucippe, and they had an only daughter who was called Cleito. The maiden had already reached womanhood, when her father and mother died; Poseidon fell in love with her and had intercourse with her, and breaking the ground, inclosed the hill in which she dwelt all round, making alternate zones of sea and land larger and smaller, encircling one another; there were two of land and three of water, which he turned as with a lathe, each having its circumference equidistant every way from the centre, so that no man could get to the island, for ships and voyages were not as yet. He himself, being a god, found no difficulty in making special arrangements for the centre island, bringing up two springs of water from beneath the earth, one of warm water and the other of cold, and making every variety of food to spring up abundantly from the soil. 
He also begat and brought up five pairs of twin male children; and dividing the island of Atlantis into ten portions, he gave to the first-born of the eldest pair his mother's dwelling and the surrounding allotment, which was the largest and best, and made him king over the rest; the others he made princes, and gave them rule over many men, and a large territory. And he named them all; the eldest, who was the first king, he named Atlas, and after him the whole island and the ocean were called Atlantic. To his twin brother, who was born after him, and obtained as his lot the extremity of the island towards the pillars of Heracles, facing the country which is now called the region of Gades in that part of the world, he gave the name which in the Hellenic language is Eumelus, in the language of the country which is named after him, Gadeirus. Of the second pair of twins he called one Ampheres, and the other Evaemon. To the elder of the third pair of twins he gave the name Mneseus, and Autochthon to the one who followed him. Of the fourth pair of twins he called the elder Elasippus, and the younger Mestor. And of the fifth pair he gave to the elder the name of Azaes, and to the younger that of Diaprepes. All these and their descendants for many generations were the inhabitants and rulers of divers islands in the open sea; and also, as has been already said, they held sway in our direction over the country within the pillars as far as Egypt and Tyrrhenia. Now Atlas had a numerous and honourable family, and they retained the kingdom, the eldest son handing it on to his eldest for many generations; and they had such an amount of wealth as was never before possessed by kings and potentates, and is not likely ever to be again, and they were furnished with everything which they needed, both in the city and country. For because of the greatness of their empire many things were brought to them from foreign countries, and the island itself provided most of what was required by them for the uses of life. In the first place, they dug out of the earth whatever was to be found there, solid as well as fusile, and that which is now only a name and was then something more than a name, orichalcum, was dug out of the earth in many parts of the island, being more precious in those days than anything except gold. There was an abundance of wood for carpenter's work, and sufficient maintenance for tame and wild animals. Moreover, there were a great number of elephants in the island; for as there was provision for all other sorts of animals, both for those which live in lakes and marshes and rivers, and also for those which live in mountains and on plains, so there was for the animal which is the largest and most voracious of all. Also whatever fragrant things there now are in the earth, whether roots, or herbage, or woods, or essences which distil from fruit and flower, grew and thrived in that land; also the fruit which admits of cultivation, both the dry sort, which is given us for nourishment and any other which we use for food&mdash;we call them all by the common name of pulse, and the fruits having a hard rind, affording drinks and meats and ointments, and good store of chestnuts and the like, which furnish pleasure and amusement, and are fruits which spoil with keeping, and the pleasant kinds of dessert, with which we console ourselves after dinner, when we are tired of eating&mdash;all these that sacred island which then beheld the light of the sun, brought forth fair and wondrous and in infinite abundance. 
With such blessings the earth freely furnished them; meanwhile they went on constructing their temples and palaces and harbours and docks.</p> (Excerpt from CRITIAS, By Plato, translated By Jowett, Benjamin) <!--HTML--><br /> 2 02/2009 Atlantis Times 2 03/2009 Atlantis Times ATLANTISTIMESNEWS BUL-NEWS-2009-003 eng Plato Atlantis (Timaeus) <!--HTML--><p class="articleHeader">This great island lay over against the Pillars of Heracles, in extent greater than Libya and Asia put together, and was the passage to other islands and to a great ocean of which the Mediterranean sea was only the harbour; and within the Pillars the empire of Atlantis reached in Europe to Tyrrhenia and in Libya to Egypt.</p> <p>This mighty power was arrayed against Egypt and Hellas and all the countries</p> <div class="phrwithcaption"> <div class="imageScale"><img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_Athanasius_Kircher_Atlantis_image.gif" alt="" /></div> <p>Representation of Atlantis by Athanasius Kircher (1669)</p> </div> bordering on the Mediterranean. Then your city did bravely, and won renown over the whole earth. For at the peril of her own existence, and when the other Hellenes had deserted her, she repelled the invader, and of her own accord gave liberty to all the nations within the Pillars. A little while afterwards there were great earthquakes and floods, and your warrior race all sank into the earth; and the great island of Atlantis also disappeared in the sea. This is the explanation of the shallows which are found in that part of the Atlantic ocean. <p> </p> (Excerpt from TIMAEUS, By Plato, translated By Jowett, Benjamin)<br /> <!--HTML--><br /> 1 02/2009 Atlantis Times 1 03/2009 Atlantis Times 1 04/2009 Atlantis Times ATLANTISTIMESNEWS http://invenio-software.org/download/invenio-demo-site-files/journal_Athanasius_Kircher_Atlantis_image.gif http://invenio-software.org/download/invenio-demo-site-files/icon-journal_Athanasius_Kircher_Atlantis_image.gif BUL-SCIENCE-2009-001 eng Charles Darwin The order Rodentia in South America <!--HTML--><p>The order Rodentia is here very numerous in species: of mice alone I obtained no less than eight kinds. <sup><a name="note1" href="#footnote1">1</a></sup>The largest gnawing animal in the world, the Hydrochærus capybara (the water-hog), is here also common. One which I shot at Monte Video weighed ninety-eight pounds: its length, from the end of the snout to the stump-like tail, was three feet two inches; and its girth three feet eight. These great Rodents occasionally frequent the islands in the mouth of the Plata, where the water is quite salt, but are far more abundant on the borders of fresh-water lakes and rivers. Near Maldonado three or four generally live together. In the daytime they either lie among the aquatic plants, or openly feed on the turf plain.<sup><a name="note2" href="#footnote2">2</a></sup></p> <p> <div class="phlwithcaption"> <div class="imageScale"><img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_water_dog_image.gif" alt="" /></div> <p>Hydrochærus capybara or Water-hog</p> </div> When viewed at a distance, from their manner of walking and colour they resemble pigs: but when seated on their haunches, and attentively watching any object with one eye, they reassume the appearance of their congeners, cavies and rabbits. Both the front and side view of their head has quite a ludicrous aspect, from the great depth of their jaw. 
These animals, at Maldonado, were very tame; by cautiously walking, I approached within three yards of four old ones. This tameness may probably be accounted for, by the Jaguar having been banished for some years, and by the Gaucho not thinking it worth his while to hunt them. As I approached nearer and nearer they frequently made their peculiar noise, which is a low abrupt grunt, not having much actual sound, but rather arising from the sudden expulsion of air: the only noise I know at all like it, is the first hoarse bark of a large dog. Having watched the four from almost within arm's length (and they me) for several minutes, they rushed into the water at full gallop with the greatest impetuosity, and emitted at the same time their bark. After diving a short distance they came again to the surface, but only just showed the upper part of their heads. When the female is swimming in the water, and has young ones, they are said to sit on her back. These animals are easily killed in numbers; but their skins are of trifling value, and the meat is very indifferent. On the islands in the Rio Parana they are exceedingly abundant, and afford the ordinary prey to the Jaguar.</p> <p><small><sup><a name="footnote1" href="#note1">1</a></sup>. In South America I collected altogether twenty-seven species of mice, and thirteen more are known from the works of Azara and other authors. Those collected by myself have been named and described by Mr. Waterhouse at the meetings of the Zoological Society. I must be allowed to take this opportunity of returning my cordial thanks to Mr. Waterhouse, and to the other gentlemen attached to that Society, for their kind and most liberal assistance on all occasions.</small></p> <p><small><sup><a name="footnote2" href="#note2">2</a></sup>. In the stomach and duodenum of a capybara which I opened, I found a very large quantity of a thin yellowish fluid, in which scarcely a fibre could be distinguished. Mr. Owen informs me that a part of the oesophagus is so constructed that nothing much larger than a crow-quill can be passed down. Certainly the broad teeth and strong jaws of this animal are well fitted to grind into pulp the aquatic plants on which it feeds.</small></p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 3, By Charles Darwin) <!--HTML--><br />test fr 1 02/2009 Atlantis Times 1 03/2009 Atlantis Times ATLANTISTIMESSCIENCE http://invenio-software.org/download/invenio-demo-site-files/journal_water_dog_image.gif http://invenio-software.org/download/invenio-demo-site-files/icon-journal_water_dog_image.gif BUL-NEWS-2009-004 eng Charles Darwin Rio Macâe <!--HTML--><p class="articleHeader">April 14th, 1832.—Leaving Socêgo, we rode to another estate on the Rio Macâe, which was the last patch of cultivated ground in that direction. The estate was two and a half miles long, and the owner had forgotten how many broad.</p> <p> <div class="phlwithcaption"> <div class="imageScale"><img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_virgin_forest_image.gif" alt="" /></div> <p>Virgin Forest</p> </div> Only a very small piece had been cleared, yet almost every acre was capable of yielding all the various rich productions of a tropical land. Considering the enormous area of Brazil, the proportion of cultivated ground can scarcely be considered as anything compared to that which is left in the state of nature: at some future age, how vast a population it will support! 
During the second day's journey we found the road so shut up that it was necessary that a man should go ahead with a sword to cut away the creepers. The forest abounded with beautiful objects; among which the tree ferns, though not large, were, from their bright green foliage, and the elegant curvature of their fronds, most worthy of admiration. In the evening it rained very heavily, and although the thermometer stood at 65°, I felt very cold. As soon as the rain ceased, it was curious to observe the extraordinary evaporation which commenced over the whole extent of the forest. At the height of a hundred feet the hills were buried in a dense white vapour, which rose like columns of smoke from the most thickly-wooded parts, and especially from the valleys. I observed this phenomenon on several occasions: I suppose it is owing to the large surface of foliage, previously heated by the sun's rays.</p> <p>While staying at this estate, I was very nearly being an eye-witness to one of those atrocious acts which can only take place in a slave country. Owing to a quarrel and a lawsuit, the owner was on the point of taking all the women and children from the male slaves, and selling them separately at the public auction at Rio. Interest, and not any feeling of compassion, prevented this act. Indeed, I do not believe the inhumanity of separating thirty families, who had lived together for many years, even occurred to the owner. Yet I will pledge myself, that in humanity and good feeling he was superior to the common run of men. It may be said there exists no limit to the blindness of interest and selfish habit. I may mention one very trifling anecdote, which at the time struck me more forcibly than any story of cruelty. I was crossing a ferry with a negro who was uncommonly stupid. In endeavouring to make him understand, I talked loud, and made signs, in doing which I passed my hand near his face. He, I suppose, thought I was in a passion, and was going to strike him; for instantly, with a frightened look and half-shut eyes, he dropped his hands. I shall never forget my feelings of surprise, disgust, and shame, at seeing a great powerful man afraid even to ward off a blow, directed, as he thought, at his face. This man had been trained to a degradation lower than the slavery of the most helpless animal.</p> (Excerpt from A NATURALIST'S VOYAGE ROUND THE WORLD Chapter 2, By Charles Darwin) 1 03/2009 Atlantis Times ATLANTISTIMESNEWS http://invenio-software.org/download/invenio-demo-site-files/journal_virgin_forest_image.gif http://invenio-software.org/download/invenio-demo-site-files/icon-journal_virgin_forest_image.gif zho 李白 Li Bai Alone Looking at the Mountain eng 敬亭獨坐 <!--HTML-->眾鳥高飛盡<br /> 孤雲去獨閒<br /> 相看兩不厭<br /> 唯有敬亭山 <!--HTML-->All the birds have flown up and gone;<br /> A lonely cloud floats leisurely by.<br /> We never tire of looking at each other -<br /> Only the mountain and I. 701-762 2009-09-16 00 2009-09-16 BATCH POETRY 20110118111428.0 Inspire 882629 SPIRES 8921016 eng CERN-THESIS-99-074 Goodsir, S M Imperial Coll., London A W mass measurement with the ALEPH detector London Imperial Coll. 1999 PhD London U. 1999 No fulltext CERN EDS SzGeCERN Detectors and Experimental Techniques THESIS CERN ALEPH CERN LHC ALEPH h 201103 14 20110128 1717 CER01 20110118 PUBLIC 002943225CER ALEPHTHESIS THESIS DRAFT oai:cds.cern.ch:1322667 cerncds:CERN SzGeCERN CDS INIS 33028075 eng Hodgson, P A measurement of the di-jet cross-sections in two photon physics at LEP 2 Sheffield Sheffield Univ. 2001 mult. 
p PhD Sheffield Univ. 2001 This thesis presents a study of di-jet production in untagged two photon events in the ALEPH detector at LEP with $\sqrt{s} = 183$ GeV. A low background sample of 145146 untagged gamma gamma events is obtained and from this 2346 di-jet events are found. A clustering algorithm, KTCLUS, is used to reconstruct the jet momenta. The data are corrected back to hadron level using the PHOJET Monte Carlo and this sample is compared with two independent NLO QCD calculations. Good agreement is seen except at the lowest jet $P_T$, where the calculations overshoot the data; however, it should be noted that perturbative QCD is less reliable at low $P_T$. SIS INIS2004 SzGeCERN Particle Physics THESIS ALEPH CERN LEP ALEPH INIS Physics of elementary particles and fields INIS accelerators- INIS bosons- INIS computer-codes INIS cyclic-accelerators INIS elementary-particles INIS energy-range INIS field-theories INIS gev-range INIS jet-model INIS k-codes INIS lep-storage-rings INIS linear-momentum INIS massless-particles INIS mathematical-models INIS multiple-production INIS particle-discrimination INIS particle-identification INIS particle-models INIS particle-production INIS photons- INIS quantum-chromodynamics INIS quantum-field-theory INIS storage-rings INIS synchrotrons- INIS transverse-momentum n 200431 14 20100419 1112 CER01 20040730 PUBLIC 002474361CER ALEPHTHESIS SzGeCERN 20080521084337.0 SPIRES 4066995 CERN-EP-99-060 eng CERN Library EP-1999-060 SCAN-9910048 CERN-L3-175 CERN. Geneva Limits on Higgs boson masses from combining the data of the four LEP experiments at $\sqrt{s} \leq 183$ GeV 1999 Geneva CERN 26 Apr 1999 18 p ALEPH Papers Preprint not submitted to publication No authors CERN-EP OA SIS:200740 PR/LKR not found (from SLAC, INSPEC) SzGeCERN Particle Physics - Experiment CERN PREPRINT CERN LEP ALEPH CERN LEP DELPHI CERN LEP L3 CERN LEP OPAL MEDLINE searches Higgs bosons LexiHiggs EP ALEPH Collaboration DELPHI Collaboration L3 Collaboration LEP Working Group for Higgs Boson Searches OPAL Collaboration CERN h 199941 11 PUBLIC 000330309CER ARTICLE SzGeCERN 20090128145544.0 CERN-ALEPH-ARCH-DATA-2009-004 eng Beddall, Andrew Gaziantep U. Residual Bose-Einstein Correlations and the Söding Model 2008 Geneva CERN 19 Jan 2009 8 p Bose-Einstein correlations between identical pions close in phase-space are thought to be responsible for the observed distortion in mass spectra of non-identical pions. For example in the decays $\rho^0 \rightarrow \pi^+\pi^-$ and $\rho^\pm \rightarrow \pi^\pm\pi^0$, such distortions are a residual effect where the pions from the $\rho$ decay interact with other identical pions that are close in phase-space. Such interactions can be significant in, for example, hadronic decays of Z bosons where pion multiplicities are high, and resonances such as $\rho$ mesons decay with a very short lifetime thereby creating pions that are close to prompt pions created directly. We present the Söding model and show that it has been used successfully to model distortions in $\pi^\pm\pi^0$ mass spectra in hadronic Z decays recorded by ALEPH. CERN EDS SzGeCERN Particle Physics - Experiment CERN LEP ALEPH Beddall, Ayda Gaziantep U. Bingül, Ahmet Gaziantep U. PH-EP 173-180 1 Acta Phys. Pol. B 39 2008 nathalie.grub@cern.ch n 200904 13 20100205 1021 CER01 20090119 PUBLIC 000701647CER ARTICLE ALEPHPAPER SzGeCERN 20080909102446.0 INTINT 0000990 WAI01 000004764 eng CERN-ALEPH-95-089 CERN-ALEPH-PHYSIC-95-083 CERN-SL-Note-95-77-BI SL-Note-95-77-BI CERN. 
Geneva LEP Center-of-Mass Energies in Presence of Opposite Sign Vertical Dispersion in Bunch-Train Operation 1995 Geneva CERN 17 Jul 1995 14 p SIS ALEPH2004 SIS SLNOTE2003 SzGeCERN Accelerators and Storage Rings CERN CERN LEP CERN beam-energy CERN bunch-trains LEP = Large Electron Positron Collider CERN LEP ALEPH CERN PHYSIC (PHYSICs) LEP Energy Working Group SL The LEP energy working group DD506 n 200350 04 PUBLIC 000415594CER NOTE ALEPHNOTE SzGeCERN 20040304154728.0 0335074CERCER eng CERN-PS-PA-Note-93-04 CERN-PS-PA-Note-93-04-PPC Geneva CERN 1993 207 p gift: Bouthéon, Marcel SzGeCERN Accelerators and Storage Rings SzGeCERN Miscellaneous CERN East Hall CERN ISOLDE CERN antiprotons CERN heavy ions CERN proton beams CERN CONFERENCE PROCEEDINGS Manglunki, Django ed. PS PPD '93 CERN d h 199936 y1999 42 20091104 2203 CER01 19991117 PUBLIC PROCEEDINGS 0001 000335074CER ISOLDENOTE BUL-SCIENCE-2009-002 eng Charles Darwin Scissor-beak <!--HTML--><p class="articleHeader"> <i>October 15th.</i>&mdash;We got under way and passed Punta Gorda, where there is a colony of tame Indians from the province of Missiones. We sailed rapidly down the current, but before sunset, from a silly fear of bad weather, we brought-to in a narrow arm of the river. I took the boat and rowed some distance up this creek. It was very narrow, winding, and deep; on each side a wall thirty or forty feet high, formed by trees intwined with creepers, gave to the canal a singularly gloomy appearance. I here saw a very extraordinary bird, called the Scissor-beak (Rhynchops nigra). It has short legs, web feet, extremely long-pointed wings, and is of about the size of a tern.</p> <div class="phrwithcaption"> <div class="imageScale"> <img src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg" /></div> </div> <p> The beak is flattened laterally, that is, in a plane at right angles to that of a spoonbill or duck. It is as flat and elastic as an ivory paper-cutter, and the lower mandible, differently from every other bird, is an inch and a half longer than the upper. In a lake near Maldonado, from which the water had been nearly drained, and which, in consequence, swarmed with small fry, I saw several of these birds, generally in small flocks, flying rapidly backwards and forwards close to the surface of the lake. They kept their bills wide open, and the lower mandible half buried in the water. Thus skimming the surface, they ploughed it in their course: the water was quite smooth, and it formed a most curious spectacle to behold a flock, each bird leaving its narrow wake on the mirror-like surface. In their flight they frequently twist about with extreme quickness, and dexterously manage with their projecting lower mandible to plough up small fish, which are secured by the upper and shorter half of their scissor-like bills. This fact I repeatedly saw, as, like swallows, they continued to fly backwards and forwards close before me. Occasionally when leaving the surface of the water their flight was wild, irregular, and rapid; they then uttered loud harsh cries. When these birds are fishing, the advantage of the long primary feathers of their wings, in keeping them dry, is very evident. When thus employed, their forms resemble the symbol by which many artists represent marine birds. 
Their tails are much used in steering their irregular course.</p> <p> These birds are common far inland along the course of the Rio Parana; it is said that they remain here during the whole year, and breed in the marshes. During the day they rest in flocks on the grassy plains, at some distance from the water. Being at anchor, as I have said, in one of the deep creeks between the islands of the Parana, as the evening drew to a close, one of these scissor-beaks suddenly appeared. The water was quite still, and many little fish were rising. The bird continued for a long time to skim the surface, flying in its wild and irregular manner up and down the narrow canal, now dark with the growing night and the shadows of the overhanging trees. At Monte Video, I observed that some large flocks during the day remained on the mud-banks at the head of the harbour, in the same manner as on the grassy plains near the Parana; and every evening they took flight seaward. From these facts I suspect that the Rhynchops generally fishes by night, at which time many of the lower animals come most abundantly to the surface. M. Lesson states that he has seen these birds opening the shells of the mactr&aelig; buried in the sand-banks on the coast of Chile: from their weak bills, with the lower mandible so much projecting, their short legs and long wings, it is very improbable that this can be a general habit.</p> DRAFT 2 03/2009 Atlantis Times 2 04/2009 Atlantis Times ATLANTISTIMESSCIENCEDRAFT http://invenio-software.org/download/invenio-demo-site-files/journal_scissor_beak.jpg http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg http://invenio-software.org/download/invenio-demo-site-files/journal_scissor_beak.jpg http://invenio-software.org/download/invenio-demo-site-files/icon-journal_scissor_beak.jpg restricted-journal_scissor_beak.jpg restricted BUL-NEWS-2009-005 Charles Darwin Galapagos Archipelago <!--HTML--><p class="articleHeader"><i>September 15th.</i>&mdash;This archipelago consists of ten principal islands, of which five exceed the others in size. They are situated under the Equator, and between five and six hundred miles westward of the coast of America. They are all formed of volcanic rocks; a few fragments of granite curiously glazed and altered by the heat can hardly be considered as an exception.</p> <p> Some of the craters surmounting the larger islands are of immense size, and they rise to a height of between three and four thousand feet. Their flanks are studded by innumerable smaller orifices. I scarcely hesitate to affirm that there must be in the whole archipelago at least two thousand craters. These consist either of lava and scori&aelig;, or of finely-stratified, sandstone-like tuff. Most of the latter are beautifully symmetrical; they owe their origin to eruptions of volcanic mud without any lava: it is a remarkable circumstance that every one of the twenty-eight tuff-craters which were examined had their southern sides either much lower than the other sides, or quite broken down and removed. 
As all these craters apparently have been formed when standing in the sea, and as the waves from the trade wind and the swell from the open Pacific here unite their forces on the southern coasts of all the islands, this singular uniformity in the broken state of the craters, composed of the soft and yielding tuff, is easily explained.</p> <p style="text-align: center;"> <img alt="" class="ph" src="http://invenio-software.org/download/invenio-demo-site-files/icon-journal_galapagos_archipelago.jpg" style="width: 300px; height: 294px;" /></p> <p> Considering that these islands are placed directly under the equator, the climate is far from being excessively hot; this seems chiefly caused by the singularly low temperature of the surrounding water, brought here by the great southern Polar current. Excepting during one short season very little rain falls, and even then it is irregular; but the clouds generally hang low. Hence, whilst the lower parts of the islands are very sterile, the upper parts, at a height of a thousand feet and upwards, possess a damp climate and a tolerably luxuriant vegetation. This is especially the case on the windward sides of the islands, which first receive and condense the moisture from the atmosphere.</p> DRAFT 1 06/2009 Atlantis Times 1 07/2009 Atlantis Times ATLANTISTIMESNEWSDRAFT http://invenio-software.org/download/invenio-demo-site-files/journal_galapagos_archipelago.jpg http://invenio-software.org/download/invenio-demo-site-files/icon-journal_galapagos_archipelago.jpg CERN-MOVIE-2010-075 silent CMS team Produced by CMS animation of the high-energy collisions at 7 TeV on 30th March 2010 2010 CERN Copyright 2010-03-30 10 sec 720x576 16/9, 25 UNKNOWN PAL CERN EDS LHC CERN CMS CERN LHCfirstphysics CERN publvideomovie 2010 VIDEO Run 132440 - Event 2732271 CERN 2010 http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_POSTER.jpg CERN-MOVIE-2010-075_POSTER jpg POSTER POSTER http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.mpg;master CERN-MOVIE-2010-075 mpg;master MASTER MASTER http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.webm;480p CERN-MOVIE-2010-075 webm;480p WEBM_480P WEBM_480P http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075.webm;720p CERN-MOVIE-2010-075 webm;720p WEBM_720P WEBM_720P http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_01.jpg;big CERN-MOVIE-2010-075_01 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_01.jpg;small CERN-MOVIE-2010-075_01 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_02.jpg;big CERN-MOVIE-2010-075_02 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_02.jpg;small CERN-MOVIE-2010-075_02 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_03.jpg;big CERN-MOVIE-2010-075_03 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_03.jpg;small CERN-MOVIE-2010-075_03 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_04.jpg;big CERN-MOVIE-2010-075_04 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_04.jpg;small CERN-MOVIE-2010-075_04 jpg;small SMALLTHUMB SMALLTHUMB 
http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_05.jpg;big CERN-MOVIE-2010-075_05 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_05.jpg;small CERN-MOVIE-2010-075_05 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_06.jpg;big CERN-MOVIE-2010-075_06 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_06.jpg;small CERN-MOVIE-2010-075_06 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_07.jpg;big CERN-MOVIE-2010-075_07 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_07.jpg;small CERN-MOVIE-2010-075_07 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_08.jpg;big CERN-MOVIE-2010-075_08 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_08.jpg;small CERN-MOVIE-2010-075_08 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_09.jpg;big CERN-MOVIE-2010-075_09 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_09.jpg;small CERN-MOVIE-2010-075_09 jpg;small SMALLTHUMB SMALLTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_10.jpg;big CERN-MOVIE-2010-075_10 jpg;big BIGTHUMB BIGTHUMB http://invenio-software.org/download/invenio-demo-site-files/CERN-MOVIE-2010-075_10.jpg;small CERN-MOVIE-2010-075_10 jpg;small SMALLTHUMB SMALLTHUMB DLC 19951121053638.0 (OCoLC)oca00230701 AUTHOR|(SzGeCERN)aaa0001 Mann, Thomas 1875-1955 Man, Tomas 1875-1955 Mann, Tomas 1875-1955 Mān, Tūmās 1875-1955 Mann, Paul Thomas 1875-1955 Thomas, Paul 1875-1955 Mani, Tʿomas 1875-1955 Man, Tʿomasŭ 1875-1955 Mann, Tomasz 1875-1955 His Königliche Hoheit, 1909. Volgina, A.A. Tomas Mann--biobibliogr. ukazatelʹ, 1979: t.p. (Tomas Mann) Najīb, N. Qiṣṣat al-ajyāl bayna Tūmās Mān wa-Najīb Maħfūẓ, 1982: t.p. (Tūmās Mān) Vaget, H.R. Thomas Mann-Kommentar zu sämtlichen Erzählungen, c1984: t.p. (Thomas Mann) p. 13, etc. (b. 6-6-1875 in Lübeck as Paul Thomas Mann; used pseud. Paul Thomas as co-editor of student newspaper in 1893; d. 8-12-1955) Kakabadze, N. Tomas Mann, 1985: t.p. (Tomas Mann) added t.p. (Tʿomas Mani) Chʿoe, S.B. Tʿomasŭ Man yŏnʾgu, 1981: t.p. (Tʿomasŭ Man) Łukosz, J. Terapia jako duchowa forma życia, 1990: t.p. (Tomasza Manna) AUTHORITY AUTHOR DLC 19991204070327.0 AUTHOR|(OCoLC)oca00955355 AUTHOR|(SzGeCERN)aaa0002 Bach, Johann Sebastian 1685-1750. Es erhub sich ein Streit Bach, Johann Sebastian 1685-1750. See how fiercely they fight Bach, Johann Sebastian 1685-1750. There uprose a great strife Bach, Johann Sebastian 1685-1750. Cantatas, BWV 19 Bach, Johann Sebastian 1685-1750. Cantatas, no. 19 Bach, Johann Sebastian 1685-1750. There arose a great strife Bach, Johann Sebastian 1685-1750. Kantate am Michaelisfest BWV 19 Bach, Johann Sebastian 1685-1750. Festo Michaelis BWV 19 Bach, Johann Sebastian 1685-1750. Kantate zum Michaelistag BWV 19 Bach, Johann Sebastian 1685-1750. Cantata for Michaelmas Day BWV 19 Bach, Johann Sebastian 1685-1750. Cantate am Michaelisfeste BWV 19 Bach, J.S. BWV 19, Es erhub sich ein Streit [SR] p1988: label (BWV 19, Es erhub sich ein Streit = There arose a great strife) Schmieder, 1990 (19. 
Es erhub sich ein Streit; Kantate am Michaelisfest (Festo Michaelis)) AUTHORITY AUTHOR DLC 19850502074119.0 AUTHOR|(SzGeCERN)aaa0003 Bach, Johann Sebastian 1685-1750. Keyboard music. Selections (Bach Guild) Bach, Johann Sebastian 1685-1750. Historical anthology of music. V, Baroque (late). F, Johann Sebastian Bach. 1, Works for keyboard Historical anthology of music. V, Baroque (late). F, Johann Sebastian Bach. 1, Works for keyboard nnaa Historical anthology of music period V, category F, sub-category 1 Johann Sebastian Bach. 1 Works for keyboard Bach, Johann Sebastian 1685-1750. Johann Sebastian Bach. 1, Works for keyboard Bach, Johann Sebastian 1685-1750. Works for keyboard New York, NY Bach Guild Bach, J.S. Organ works [SR] c1981. AUTHORITY AUTHOR 19960528091722.0 Solar energy technology handbook, c1980- (a.e.) v. 1, t.p. (William C. Dickinson) pub. info sheet (William Clarence Dickinson, b. 3/15/22) Dickinson, William C. 1922- AUTHOR|(DLC)n 80007472 AUTHOR|(SzGeCERN)aaa0004 AUTHORITY AUTHOR Europhysics Study Conference on Unification of the Fundamental Particle Interactions, Erice, Italy, 1980. Unification of the fundamental particle interactions, 1980 (a.e.) t.p. (John Ellis) Supersymmetry and supergravity, c1986: CIP t.p. (J. Ellis) Quantum reflections, 2000: CIP t.p. (John Ellis) data sht. (b. July 1, 1946) pub. info. (Jonathan Richard Ellis) Ellis, J. 1946- (John), Ellis, Jonathan Richard 1946- Ellis, John 1946- AUTHOR|(DLC)n 80141717 AUTHOR|(SzGeCERN)aaa0005 AUTHORITY AUTHOR EllisDeleted, JohnDeleted 1946- AUTHOR|(SzGeCERN)aaa0006 AUTHORITY AUTHOR DELETED - + Stanford Linear Accelerator Center SLAC SLAC STANFORD DESY_AFF DOE HEP200 LABLIST PDGLIST PPF SLUO TOP050 TOP100 TOP200 TOP500 WEB http://www.slac.stanford.edu/ 2011-01-21 1989-07-18 INST-6300 Accel. Ctr. Stanford Linear Center SLAC Stanford DESY CORE - INSTITUTION|(SzGeCERN)iii0001 + INSTITUTE|(SzGeCERN)iii0001 94025 US SLAC National Accelerator Laboratory SLAC National Accelerator Laboratory 2575 Sand Hill Road 2575 Sand Hill Road Menlo Park, CA 94025-7090 Menlo Park, CA 94025-7090 USA USA CA Menlo Park AUTHORITY - INSTITUTION + INSTITUTE http://www.cern.ch 2011-01-21 1989-07-16 INST-1147 Conseil européen pour la Recherche Nucléaire (1952-1954) Organisation européenne pour la Recherche nucléaire (1954-now) Sub title: Laboratoire européen pour la Physique des Particules (1984-now) Sub title: European Laboratory for Particle Physics (1984-now) HEP200 LABLIST PDGLIST PPF SLUO TOP050 TOP100 TOP200 TOP500 WEB DESY CERN Geneva Centre Européen de Recherches Nucléaires center European Organization for Nuclear Research CERN CERN CERN CH-1211 Genève 23 Geneva Switzerland 1211 CH Research centre 2nd address: Organisation Européenne pour la Recherche Nucléaire (CERN), F-01631 Prévessin Cedex, France CORE - INSTITUTION|(SzGeCERN)iii0002 + INSTITUTE|(SzGeCERN)iii0002 AUTHORITY - INSTITUTION + INSTITUTE 19891121083347.0 SUBJECT|(DLC)sh 85101653 SUBJECT|(SzGeCERN)sss0001 Physics Natural philosophy Philosophy, Natural g Physical sciences Dynamics AUTHORITY SUBJECT DLC 20010904160459.0 SUBJECT|(SzGeCERN)sss0003 Computer crimes Computer fraud Computers Law and legislation Criminal provisions Computers and crime Cyber crimes Cybercrimes Electronic crimes (Computer crimes) g Crime Privacy, Right of 00351496: Adamski, A. Prawo karne komputerowe, c2000. 
Excite WWW directory of subjects, July 10, 2001 (cybercrimes; subcategory under Criminal, Branches of law, Law, Education) Electronic crime needs assessment for state and local law enforcement, 2001: glossary, p. 41 (electronic crime includes but is not limited to fraud, theft, forgery, child pornography or exploitation, stalking, traditional white-collar crimes, privacy violations, illegal drug transactions, espionage, computer intrusions; no synonyms given) AUTHORITY SUBJECT DLC 20010904162409.0 SUBJECT|(SzGeCERN)sss0004 Embellishment (Music) Diminution (Music) Ornamentation (Music) Ornaments (Music) g Music Performance g Performance practice (Music) Musical notation Variation (Music) Heim, N.M. Ornamentation for the clarinetist, c1993. Massin. De la variation, c2000. AUTHORITY SUBJECT DLC 20010904162503.0 SUBJECT|(SzGeCERN)sss0005 Embellishment (Vocal music) Colorature Fioriture g Embellishment (Music) g Vocal music History and criticism Massin. De la variation, c2000. AUTHORITY SUBJECT PER:749 Kilian, K. 0 fzj Other ways to make polarized antiproton beams 2010 107 6697 Verhandlungen der Deutschen Physikalischen Gesellschaft (Reihe 06) 2 Journal Article PER:513 Grzonka, D. 1 fzj PER:1182 Oelert, W. 2 fzj --NOT MATCHED-- Vol. 2 2:< 2 2010 GRANT:413 P53 Struktur der Materie der Materie Physik der Hadronen und Kerne 2010 - INSTITUTION|(DE-Juel1)795 + INSTITUTE|(DE-Juel1)795 IKP IKP-1 Experimentelle Hadronstruktur VDB:126525 VDB DOI 10.1063/1.2737136 VDB VDB:88636 WOS WOS:000246413400056 ISSN 0003-6951 inh:9498831 inh English Inspec Atom-, molecule-, and ion-surface impact and interactions Inspec Pulsed laser deposition Inspec Stoichiometry and homogeneity Inspec Thin film growth, structure, and epitaxy Inspec Vacuum deposition 39744 Heeg, T. 0 fzj Epitaxially stabilized growth of orthorhombic LuScO3 thin films 2007 192901-1 - 192901-3 562 Applied Physics Letters 90 0003-6951 Journal Article Metastable lutetium scandate (LuScO3) thin films with an orthorhombic perovskite structure have been prepared by molecular-beam epitaxy and pulsed-laser deposition on NdGaO3(110) and DyScO3(110) substrates. Stoichiometry and crystallinity were investigated using Rutherford backscattering spectrometry/channeling, x-ray diffraction, and transmission electron microscopy. The results indicate that LuScO3, which normally only exists as a solid solution of Sc2O3 and Lu2O3 with the cubic bixbyite structure, can be grown in the orthorhombically distorted perovskite structure. Rocking curves as narrow as 0.05deg were achieved. A critical film thickness of approximately 200 nm for the epitaxially stabilized perovskite polymorph of LuScO3 on NdGaO3(110) substrates was determined. Enriched from Web of Science, Inspec Inspec DyScO3 Inspec DyScO3(110) substrates Inspec Fuel cells Inspec LuScO3 Inspec NdGaO3 Inspec NdGaO3(110) substrates Inspec Rutherford backscattering Inspec Rutherford backscattering channeling Inspec Rutherford backscattering spectrometry Inspec X-ray diffraction Inspec critical film thickness Inspec crystallinity Inspec epitaxial layers Inspec epitaxially stabilized growth Inspec epitaxially stabilized perovskite polymorph Inspec lutetium compounds Inspec metastable lutetium scandate thin films Inspec molecular beam epitaxial growth Inspec molecular beam epitaxy Inspec orthorhombically distorted perovskite structure Inspec polymorphism Inspec pulsed laser deposition Inspec rocking curves Inspec stoichiometry Inspec transmission electron microscopy PER:64142 Roeckerath, M. 
1 fzj PER:5409 Schubert, J. 2 fzj PER:5482 Zander, W. 3 fzj PER:14557 Buchal, Ch. 4 fzj PER:60616 Chen, H. Y. 5 fzj PER:5020 Jia, C. L. 6 fzj PER:45799 Jia, Y. 7 fzj PER:65921 Adamo, C. 8 PER:15220 Schlom, D. G. 9 ZDBID:2265524-4 10.1063/1.2737136 Vol. 90, no. 19, p. 192901 19 90:19<192901 Applied physics reviews 90 0003-6951 2007 http://dx.doi.org/10.1063/1.2737136 GRANT:412 P42 Schlüsseltechnologien Grundlagen für zukünftige Informationstechnologien 2007 - INSTITUTION|(DE-Juel1)381 + INSTITUTE|(DE-Juel1)381 14.09.2008 CNI CNI Center of Nanoelectronic Systems for Information Technology Zusammenschluss der am FE-Vorhaben I01 beteiligten Institute: IFF-TH-I, IFF-TH-II, IFF-IEM, IFF-IMF, IFF-IEE, ISG-1, ISG-2, ISG-3 - INSTITUTION|(DE-Juel1)788 + INSTITUTE|(DE-Juel1)788 IFF IFF-8 Mikrostrukturforschung - INSTITUTION|(DE-Juel1)799 + INSTITUTE|(DE-Juel1)799 IBN IBN-1 Halbleiter-Nanoelektronik VDB:88636 VDB ARTICLE DOI 10.4028/www.scientific.net/MSF.638-642.1098 VDB VDB:125298 ISSN 0255-5476 inh:11707323 inh English Inspec Fuel cells PER:96536 Menzler, N.H. 0 fzj Influence of processing parameters on the manufacturing of anode-supported solid oxide fuel cells by different wet chemical routes 2010 4206 Materials Science Forum 638-642 0255-5476 1098 - 1105 Journal Article Anode-supported solid oxide fuel cells (SOFC) are manufactured at Forschungszentrum Jülich by different wet chemical powder processes and subsequent sintering at high temperatures. Recently, the warm pressing of Coat-Mix powders has been replaced by tape casting as the shaping technology for the NiO/8YSZ-containing substrate in order to decrease the demand for raw materials due to lower substrate thickness and in order to increase reproducibility and fabrication capacities (scalable process). Different processing routes for the substrates require the adjustment of process parameters for further coating with functional layers. Therefore, mainly thermal treatment steps have to be adapted to the properties of the new substrate types in order to obtain high-performance cells with minimum curvature (for stack assembly). In this presentation, the influence of selected process parameters during cell manufacturing will be characterized with respect to the resulting physical parameters such as slurry viscosity, green tape thickness, relative density, substrate strength, electrical conductivity, and shrinkage of the different newly developed substrate types. The influencing factors during manufacturing and the resulting characteristics will be presented and possible applications for the various substrates identified. Enriched from Inspec Inspec anode-supported solid oxide fuel cells Inspec coating Inspec electrical conductivity Inspec green tape thickness Inspec powders Inspec processing parameters Inspec shrinkage Inspec sintering Inspec slurry viscosity Inspec solid oxide fuel cells Inspec tape casting Inspec thermal treatment Inspec viscosity Inspec warm pressing Inspec wet chemical powder processes PER:76694 Schafbauer, W. 1 fzj PER:96316 Buchkremer, H.P. 2 fzj ZDBID:2047372-2 10.4028/www.scientific.net/MSF.638-642.1098 Vol. 638-642, p. 
1098 - 1105 638-642:<1098 - 1105 Materials science forum 638-642 0255-5476 2010 http://dx.doi.org/10.4028/www.scientific.net/MSF.638-642.1098 GRANT:402 P12 Energie Rationelle Energieumwandlung 2010 - INSTITUTION|(DE-Juel1)1130 + INSTITUTE|(DE-Juel1)1130 IEK IEK-1 Werkstoffsynthese und Herstellverfahren VDB:125298 VDB ARTICLE - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 Forschungszentrum Jülich Energieforschungszentrum Jülich KFA Research Centre Jülich KFA Jülich FZJ a Kernforschungsanlage Jülich - INSTITUTION + INSTITUTE AUTHORITY - INSTITUTION|(DE-Juel1)301 + INSTITUTE|(DE-Juel1)301 INST - INSTITUTION|(DE-Juel1)301 + INSTITUTE|(DE-Juel1)301 Institut für Kernphysik 31.12.2000 IKP d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)301 + INSTITUTE|(DE-Juel1)301 INST IKP g - INSTITUTION|(DE-Juel1)301 + INSTITUTE|(DE-Juel1)301 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)795 INST - INSTITUTION|(DE-Juel1)795 + INSTITUTE|(DE-Juel1)795 Experimentelle Hadronstruktur IKP-1 d - INSTITUTION|(DE-Juel1)301 + INSTITUTE|(DE-Juel1)301 INST IKP g - INSTITUTION|(DE-Juel1)221 + INSTITUTE|(DE-Juel1)221 INST Institut 1 (Experimentelle Kernphysik I) a (DE-Juel1)795 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)241 INST - INSTITUTION|(DE-Juel1)241 + INSTITUTE|(DE-Juel1)241 Institut für Festkörperforschung IFF d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)241 + INSTITUTE|(DE-Juel1)241 INST IFF g (DE-Juel1)241 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)1107 INST - INSTITUTION|(DE-Juel1)1107 + INSTITUTE|(DE-Juel1)1107 Institut für Bio- und Nanosysteme IBN d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)1107 + INSTITUTE|(DE-Juel1)1107 INST IBN g (DE-Juel1)1107 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)1125 INST - INSTITUTION|(DE-Juel1)1125 + INSTITUTE|(DE-Juel1)1125 Institut für Energie- und Klimaforschung IEK d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)1125 + INSTITUTE|(DE-Juel1)1125 INST IEK g (DE-Juel1)1125 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)1115 INST - INSTITUTION|(DE-Juel1)1115 + INSTITUTE|(DE-Juel1)1115 Institut für Energieforschung IEF d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)1115 + INSTITUTE|(DE-Juel1)1115 INST IEF g (DE-Juel1)1115 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)381 INST - INSTITUTION|(DE-Juel1)381 + INSTITUTE|(DE-Juel1)381 Center of Nanoelectronic Systems for Information Technology 14.09.2008 CNI d - INSTITUTION|(DE-Juel1)5008462-8 + INSTITUTE|(DE-Juel1)5008462-8 GND Forschungszentrum Jülich t - INSTITUTION|(DE-Juel1)381 + INSTITUTE|(DE-Juel1)381 INST CNI g (DE-Juel1)381 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)788 INST - INSTITUTION|(DE-Juel1)788 + INSTITUTE|(DE-Juel1)788 Mikrostrukturforschung IFF-8 d - INSTITUTION|(DE-Juel1)241 + INSTITUTE|(DE-Juel1)241 INST IFF g - INSTITUTION|(DE-Juel1)37 + INSTITUTE|(DE-Juel1)37 INST Mikrostrukturforschung a (DE-Juel1)788 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)861 INST - INSTITUTION|(DE-Juel1)861 + INSTITUTE|(DE-Juel1)861 Institut für Halbleiterschichten und Bauelemente 30.09.2007 IBN-1 d - INSTITUTION|(DE-Juel1)1107 + INSTITUTE|(DE-Juel1)1107 INST IBN g - INSTITUTION|(DE-Juel1)41 + INSTITUTE|(DE-Juel1)41 INST Institut für 
Halbleiterschichten und Bauelemente a - INSTITUTION|(DE-Juel1)799 + INSTITUTE|(DE-Juel1)799 INST Halbleiter-Nanoelektronik b (DE-Juel1)861 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)799 INST - INSTITUTION|(DE-Juel1)799 + INSTITUTE|(DE-Juel1)799 Halbleiter-Nanoelektronik IBN-1 d - INSTITUTION|(DE-Juel1)1107 + INSTITUTE|(DE-Juel1)1107 INST IBN g - INSTITUTION|(DE-Juel1)861 + INSTITUTE|(DE-Juel1)861 INST Institut für Halbleiterschichten und Bauelemente a (DE-Juel1)799 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)1130 INST - INSTITUTION|(DE-Juel1)1130 + INSTITUTE|(DE-Juel1)1130 Werkstoffsynthese und Herstellverfahren IEK-1 d - INSTITUTION|(DE-Juel1)1125 + INSTITUTE|(DE-Juel1)1125 INST IEK g - INSTITUTION|(DE-Juel1)809 + INSTITUTE|(DE-Juel1)809 INST Werkstoffsynthese und Herstellungsverfahren a (DE-Juel1)1130 - INSTITUTION + INSTITUTE AUTHORITY (DE-Juel1)809 INST - INSTITUTION|(DE-Juel1)809 + INSTITUTE|(DE-Juel1)809 Werkstoffsynthese und Herstellungsverfahren 30.09.2010 IEF-1 d - INSTITUTION|(DE-Juel1)1115 + INSTITUTE|(DE-Juel1)1115 INST IEF g - INSTITUTION|(DE-Juel1)5 + INSTITUTE|(DE-Juel1)5 INST Werkstoffsynthese und Herstellungsverfahren a - INSTITUTION|(DE-Juel1)1130 + INSTITUTE|(DE-Juel1)1130 INST Werkstoffsynthese und Herstellverfahren b (DE-Juel1)809 - INSTITUTION + INSTITUTE AUTHORITY diff --git a/po/POTFILES.in b/po/POTFILES.in index 54de8d5b6..beec9b0d4 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -1,296 +1,296 @@ ## This file is part of Invenio. ## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. # # List of source files which contain translatable strings. 
# modules/bibauthorid/lib/bibauthorid_templates.py modules/bibauthorid/lib/bibauthorid_webinterface.py modules/bibcatalog/lib/bibcatalog_templates.py modules/bibcheck/web/admin/bibcheckadmin.py modules/bibcirculation/lib/bibcirculation.py modules/bibcirculation/lib/bibcirculation_templates.py modules/bibcirculation/lib/bibcirculation_utils.py modules/bibcirculation/lib/bibcirculation_webinterface.py modules/bibcirculation/lib/bibcirculationadminlib.py modules/bibclassify/doc/admin/bibclassify-admin-guide.webdoc modules/bibclassify/lib/bibclassify_templates.py modules/bibclassify/lib/bibclassify_webinterface.py modules/bibconvert/bin/bibconvert.in modules/bibconvert/doc/admin/bibconvert-admin-guide.webdoc modules/bibconvert/lib/bibconvert.py modules/bibconvert/lib/bibconvert_unit_tests.py modules/bibdocfile/lib/bibdocfile_managedocfiles.py modules/bibdocfile/lib/bibdocfile_templates.py modules/bibdocfile/lib/bibdocfile_webinterface.py modules/bibdocfile/lib/file.py modules/bibedit/doc/admin/bibedit-admin-guide.webdoc modules/bibedit/lib/bibedit_engine.py modules/bibedit/lib/bibedit_templates.py modules/bibedit/lib/bibedit_utils.py modules/bibedit/lib/bibedit_webinterface.py modules/bibedit/lib/bibeditmulti_templates.py modules/bibedit/lib/bibeditmulti_webinterface.py modules/bibencode/lib/bibencode_batch_engine.py modules/bibexport/lib/bibexport_method_fieldexporter.py modules/bibexport/lib/bibexport_method_fieldexporter_templates.py modules/bibexport/lib/bibexport_method_fieldexporter_webinterface.py modules/bibformat/bin/bibreformat.in modules/bibformat/doc/admin/bibformat-admin-guide.webdoc modules/bibformat/etc/format_templates/Default_HTML_actions.bft modules/bibformat/etc/format_templates/Default_HTML_files.bft modules/bibformat/lib/bibformat_bfx_engine.py modules/bibformat/lib/bibformat_engine.py modules/bibformat/lib/bibformat_engine_unit_tests.py modules/bibformat/lib/bibformat_templates.py modules/bibformat/lib/bibformatadminlib.py modules/bibformat/lib/elements/bfe_aid_authors.py modules/bibformat/lib/elements/bfe_authority_author.py modules/bibformat/lib/elements/bfe_authority_control_no.py -modules/bibformat/lib/elements/bfe_authority_institution.py +modules/bibformat/lib/elements/bfe_authority_institute.py modules/bibformat/lib/elements/bfe_authority_journal.py modules/bibformat/lib/elements/bfe_authority_subject.py modules/bibformat/lib/elements/bfe_authors.py modules/bibformat/lib/elements/bfe_edit_files.py modules/bibformat/lib/elements/bfe_edit_record.py modules/bibformat/lib/elements/bfe_fulltext.py modules/bibformat/lib/elements/bfe_fulltext_mini.py modules/bibformat/lib/elements/bfe_sciencewise.py modules/bibformat/web/admin/bibformatadmin.py modules/bibindex/bin/bibindex.in modules/bibindex/bin/bibstat.in modules/bibindex/doc/admin/bibindex-admin-guide.webdoc modules/bibindex/lib/bibindex_engine.py modules/bibindex/lib/bibindex_engine_config.py modules/bibindex/lib/bibindex_engine_stemmer.py modules/bibindex/lib/bibindex_engine_stemmer_unit_tests.py modules/bibindex/lib/bibindex_engine_stopwords.py modules/bibindex/lib/bibindex_engine_unit_tests.py modules/bibindex/lib/bibindexadminlib.py modules/bibindex/web/admin/bibindexadmin.py modules/bibknowledge/lib/bibknowledge_templates.py modules/bibknowledge/lib/bibknowledgeadmin.py modules/bibmatch/doc/admin/bibmatch-admin-guide.webdoc modules/bibrank/bin/bibrank.in modules/bibrank/bin/bibrankgkb.in modules/bibrank/doc/admin/bibrank-admin-guide.webdoc modules/bibrank/lib/bibrank_citation_grapher.py 
modules/bibrank/lib/bibrank_citation_indexer.py modules/bibrank/lib/bibrank_citation_indexer_regression_tests.py modules/bibrank/lib/bibrank_citation_searcher.py modules/bibrank/lib/bibrank_citation_searcher_unit_tests.py modules/bibrank/lib/bibrank_downloads_grapher.py modules/bibrank/lib/bibrank_downloads_indexer.py modules/bibrank/lib/bibrank_downloads_indexer_unit_tests.py modules/bibrank/lib/bibrank_downloads_similarity.py modules/bibrank/lib/bibrank_grapher.py modules/bibrank/lib/bibrank_record_sorter.py modules/bibrank/lib/bibrank_record_sorter_unit_tests.py modules/bibrank/lib/bibrank_tag_based_indexer.py modules/bibrank/lib/bibrank_tag_based_indexer_unit_tests.py modules/bibrank/lib/bibrank_word_indexer.py modules/bibrank/lib/bibrankadminlib.py modules/bibrank/web/admin/bibrankadmin.py modules/bibrecord/bin/xmlmarclint.in modules/bibrecord/lib/bibrecord.py modules/bibrecord/lib/bibrecord_config.py modules/bibrecord/lib/bibrecord_unit_tests.py modules/bibsched/bin/bibsched.in modules/bibsched/bin/bibtaskex.in modules/bibsched/doc/admin/bibsched-admin-guide.webdoc modules/bibsort/lib/bibsortadminlib.py modules/bibsword/lib/bibsword_webinterface.py modules/bibupload/doc/admin/bibupload-admin-guide.webdoc modules/bibupload/lib/batchuploader_engine.py modules/bibupload/lib/batchuploader_templates.py modules/bibupload/lib/batchuploader_webinterface.py modules/docextract/bin/refextract.in modules/docextract/lib/refextract.py modules/docextract/lib/refextract_config.py modules/elmsubmit/bin/elmsubmit.in modules/elmsubmit/doc/admin/elmsubmit-admin-guide.webdoc modules/elmsubmit/lib/elmsubmit.py modules/elmsubmit/lib/elmsubmit_EZArchive.py modules/elmsubmit/lib/elmsubmit_EZEmail.py modules/elmsubmit/lib/elmsubmit_enriched2txt.py modules/elmsubmit/lib/elmsubmit_field_validation.py modules/elmsubmit/lib/elmsubmit_filename_generator.py modules/elmsubmit/lib/elmsubmit_html2txt.py modules/elmsubmit/lib/elmsubmit_misc.py modules/elmsubmit/lib/elmsubmit_richtext2txt.py modules/elmsubmit/lib/elmsubmit_submission_parser.py modules/elmsubmit/lib/htmlentitydefs.py modules/miscutil/bin/dbexec.in modules/miscutil/lib/__init__.py modules/miscutil/lib/dateutils.py modules/miscutil/lib/errorlib.py modules/miscutil/lib/errorlib_unit_tests.py modules/miscutil/lib/errorlib_webinterface.py modules/miscutil/lib/inveniocfg.py modules/miscutil/lib/mailutils.py modules/miscutil/lib/messages.py modules/miscutil/lib/miscutil_config.py modules/miscutil/lib/textutils.py modules/oaiharvest/bin/oaiharvest.in modules/oaiharvest/doc/admin/oaiharvest-admin-guide.webdoc modules/oaiharvest/lib/oai_harvest_admin.py modules/oaiharvest/lib/oai_harvest_templates.py modules/oairepository/lib/oai_repository_admin.py modules/webaccess/bin/authaction.in modules/webaccess/bin/webaccessadmin.in modules/webaccess/lib/access_control_admin.py modules/webaccess/lib/access_control_config.py modules/webaccess/lib/access_control_engine.py modules/webaccess/lib/external_authentication.py modules/webaccess/lib/webaccessadmin_lib.py modules/webaccess/web/admin/webaccessadmin.py modules/webalert/bin/alertengine.in modules/webalert/doc/admin/webalert-admin-guide.webdoc modules/webalert/lib/alert_engine.py modules/webalert/lib/htmlparser.py modules/webalert/lib/webalert.py modules/webalert/lib/webalert_templates.py modules/webalert/lib/webalert_webinterface.py modules/webauthorprofile/lib/webauthorprofile_templates.py modules/webbasket/doc/admin/webbasket-admin-guide.webdoc modules/webbasket/lib/webbasket.py 
modules/webbasket/lib/webbasket_config.py modules/webbasket/lib/webbasket_templates.py modules/webbasket/lib/webbasket_webinterface.py modules/webcomment/doc/admin/webcomment-admin-guide.webdoc modules/webcomment/lib/webcomment.py modules/webcomment/lib/webcomment_config.py modules/webcomment/lib/webcomment_templates.py modules/webcomment/lib/webcomment_unit_tests.py modules/webcomment/lib/webcomment_webinterface.py modules/webcomment/lib/webcommentadminlib.py modules/webcomment/web/admin/webcommentadmin.py modules/webhelp/web/admin/admin.webdoc modules/webhelp/web/help-central.webdoc modules/webjournal/lib/elements/bfe_webjournal_archive.py modules/webjournal/lib/elements/bfe_webjournal_article_author.py modules/webjournal/lib/elements/bfe_webjournal_article_body.py modules/webjournal/lib/elements/bfe_webjournal_imprint.py modules/webjournal/lib/elements/bfe_webjournal_main_navigation.py modules/webjournal/lib/elements/bfe_webjournal_rss.py modules/webjournal/lib/webjournal_config.py modules/webjournal/lib/webjournal_templates.py modules/webjournal/lib/webjournal_utils.py modules/webjournal/lib/webjournaladminlib.py modules/webjournal/lib/widgets/bfe_webjournal_widget_seminars.py modules/webjournal/lib/widgets/bfe_webjournal_widget_weather.py modules/webjournal/lib/widgets/bfe_webjournal_widget_whatsNew.py modules/webjournal/web/admin/webjournaladmin.py modules/weblinkback/lib/weblinkback_templates.py modules/weblinkback/lib/weblinkbackadminlib.py modules/weblinkback/web/admin/weblinkbackadmin.py modules/webmessage/bin/webmessageadmin.in modules/webmessage/doc/admin/webmessage-admin-guide.webdoc modules/webmessage/lib/webmessage.py modules/webmessage/lib/webmessage_config.py modules/webmessage/lib/webmessage_dblayer.py modules/webmessage/lib/webmessage_mailutils.py modules/webmessage/lib/webmessage_templates.py modules/webmessage/lib/webmessage_webinterface.py modules/websearch/bin/webcoll.in modules/websearch/doc/admin/websearch-admin-guide.webdoc modules/websearch/doc/hacking/search-services.webdoc modules/websearch/doc/search-guide.webdoc modules/websearch/doc/search-tips.webdoc modules/websearch/lib/search_engine.py modules/websearch/lib/search_engine_config.py modules/websearch/lib/search_engine_unit_tests.py modules/websearch/lib/services/CollectionNameSearchService.py modules/websearch/lib/services/FAQKBService.py modules/websearch/lib/services/SubmissionNameSearchService.py modules/websearch/lib/services/WeatherService.py modules/websearch/lib/websearch_external_collections.py modules/websearch/lib/websearch_external_collections_templates.py modules/websearch/lib/websearch_templates.py modules/websearch/lib/websearch_webcoll.py modules/websearch/lib/websearch_webinterface.py modules/websearch/lib/websearchadminlib.py modules/websearch/web/admin/websearchadmin.py modules/websession/bin/inveniogc.in modules/websession/doc/admin/websession-admin-guide.webdoc modules/websession/lib/session.py modules/websession/lib/webaccount.py modules/websession/lib/webgroup.py modules/websession/lib/webgroup_dblayer.py modules/websession/lib/websession_config.py modules/websession/lib/websession_templates.py modules/websession/lib/websession_webinterface.py modules/websession/lib/webuser.py modules/webstat/bin/webstat.in modules/webstat/doc/admin/webstat-admin-guide.webdoc modules/webstyle/doc/admin/webstyle-admin-guide.webdoc modules/webstyle/doc/hacking/webstyle-webdoc-syntax.webdoc modules/webstyle/lib/template.py modules/webstyle/lib/webdoc.py modules/webstyle/lib/webdoc_unit_tests.py 
modules/webstyle/lib/webdoc_webinterface.py modules/webstyle/lib/webpage.py modules/webstyle/lib/webstyle_templates.py modules/websubmit/doc/admin/websubmit-admin-guide.webdoc modules/websubmit/doc/submit-guide.webdoc modules/websubmit/lib/functions/Add_Files.py modules/websubmit/lib/functions/CaseEDS.py modules/websubmit/lib/functions/Create_Modify_Interface.py modules/websubmit/lib/functions/Create_Recid.py modules/websubmit/lib/functions/Create_Upload_Files_Interface.py modules/websubmit/lib/functions/Finish_Submission.py modules/websubmit/lib/functions/Format_Record.py modules/websubmit/lib/functions/Get_Info.py modules/websubmit/lib/functions/Get_Report_Number.py modules/websubmit/lib/functions/Get_Sysno.py modules/websubmit/lib/functions/Insert_Modify_Record.py modules/websubmit/lib/functions/Insert_Record.py modules/websubmit/lib/functions/Is_Original_Submitter.py modules/websubmit/lib/functions/Is_Referee.py modules/websubmit/lib/functions/Mail_Submitter.py modules/websubmit/lib/functions/Make_Modify_Record.py modules/websubmit/lib/functions/Make_Record.py modules/websubmit/lib/functions/Move_Files_Archive.py modules/websubmit/lib/functions/Move_From_Pending.py modules/websubmit/lib/functions/Move_to_Done.py modules/websubmit/lib/functions/Move_to_Pending.py modules/websubmit/lib/functions/Print_Success.py modules/websubmit/lib/functions/Print_Success_APP.py modules/websubmit/lib/functions/Print_Success_DEL.py modules/websubmit/lib/functions/Print_Success_MBI.py modules/websubmit/lib/functions/Print_Success_SRV.py modules/websubmit/lib/functions/Report_Number_Generation.py modules/websubmit/lib/functions/Retrieve_Data.py modules/websubmit/lib/functions/Send_APP_Mail.py modules/websubmit/lib/functions/Send_Approval_Request.py modules/websubmit/lib/functions/Send_Modify_Mail.py modules/websubmit/lib/functions/Send_SRV_Mail.py modules/websubmit/lib/functions/Shared_Functions.py modules/websubmit/lib/functions/Test_Status.py modules/websubmit/lib/functions/Update_Approval_DB.py modules/websubmit/lib/websubmit_config.py modules/websubmit/lib/websubmit_engine.py modules/websubmit/lib/websubmit_regression_tests.py modules/websubmit/lib/websubmit_templates.py modules/websubmit/lib/websubmit_webinterface.py modules/websubmit/lib/websubmitadmin_config.py modules/websubmit/lib/websubmitadmin_engine.py modules/websubmit/web/admin/referees.py modules/websubmit/web/admin/websubmitadmin.py modules/websubmit/web/approve.py modules/websubmit/web/publiline.py modules/websubmit/web/yourapprovals.py modules/websubmit/web/yoursubmissions.py diff --git a/scripts/kwalitee.py b/scripts/kwalitee.py index 6bf891b9a..80836a31b 100644 --- a/scripts/kwalitee.py +++ b/scripts/kwalitee.py @@ -1,993 +1,996 @@ ## This file is part of Invenio. ## Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011, 2013 CERN. ## ## Invenio is free software; you can redistribute it and/or ## modify it under the terms of the GNU General Public License as ## published by the Free Software Foundation; either version 2 of the ## License, or (at your option) any later version. ## ## Invenio is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ## General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Invenio; if not, write to the Free Software Foundation, Inc., ## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
from __future__ import print_function """ Code kwalitee checking tools for Invenio Python code. Q: What is kwalitee? A: Usage: python kwalitee.py [options] General options:: -h, --help print this help -V, --version print version number -q, --quiet be quiet, print only warnings Check options:: --stats generate kwalitee summary stats --check-all perform all checks listed below --check-some perform some (important) checks only [default] --check-errors check Python errors --check-variables check Python variables --check-indentation check Python code indentation --check-whitespace check trailing whitespace --check-docstrings check Python docstrings compliance --check-pep8 check PEP8 compliance --check-sql check SQL queries --check-eval check Python eval calls --check-html-options check