Open Password – Monday, December 6, 2021
#1007
Future of information science – Scientometrics – Christian Krattenthaler – h-index – Probability theory – German Association of Mathematicians – Knowledge transfer – Jorge Eduardo Hirsch – Citation distributions – Partitions – Young diagram – Ferrers diagram – Durfee square – Brownian motion – Limit shapes – HNV Temperley – AM Vershik – B. Fristedt – Uniform distribution model – Natural sciences – Web of Science – Google Scholar – Scopus – MathSciNet – Zentralblatt – Ilka Agricola – Joachim Escher – g-Index – Leo Egghe
Darmstadt University of Applied Sciences – Volunteer program – Geribert Jakob – Media documentalists – ARD – ZDF – Deutsche Welle – German Broadcasting Archive – FAZ – RTL – Machine learning – Semantic systems – Agile methods – Annual projects – Problem/project-based learning – Information Science – h_da Symposium on scientific and media documentation – ZB MED – LIVIVO – Ulrike Ostrzinski
I. Title
Future of Information Science – The h-index, “a useless bibliometric index” – By Christian Krattenthaler
II. Darmstadt University of Applied Sciences
26 successful graduates in the sixth volunteer program – By Geribert Jakob
III. ZB MED:
Making the most of LIVIVO – By Ulrike Ostrzinski
Future of Information Science:
Scientometrics (1)
The h-index – “a useless
bibliometric index”
Christian Krattenthaler
The mathematician Christian Krattenthaler has published an article in the communications of the German Association of Mathematicians (DMV) that is likely to be of considerable importance for information science, and for scientometrics in particular (“What the h-index really says”, 2021, issue 3, pages 124–128 – DOI 10.1515/dmmv-2021-0050). Krattenthaler’s core thesis is that the h-index, which has generated some euphoria since its introduction in 2005 and is now used in many natural sciences, but also in information and library science, “with the greatest self-evidence and conviction as a measure of the influence and relevance of an author”, “is a useless bibliometric index”. He justifies this with a theorem from probability theory and with the fact that “it can be mathematically proven that the h-index does not keep what it promises. On the contrary, it is essentially equivalent to another (simpler) bibliometric measure of an author”.
Elsewhere, Krattenthaler explains his thesis as follows: “I think that the facts presented in this note should be better known. If other sciences believe in such “hocus-pocus” (I once tried to convince a geoscientist that the h-index is nonsense; without success), then at least we mathematicians should know – and spread the word – that the h-index is one of the most foolish of the various bibliometric indices, in the sense that it promises something it does not deliver and is in fact essentially equivalent to a much simpler index (the number of citations).”
The communications of the German Association of Mathematicians are probably hardly read in information and library science circles. Open Password therefore contributes to the culture of debate and to knowledge transfer, or at least to its acceleration, by republishing Krattenthaler’s contribution below. We thank the German Association of Mathematicians very much for permission to do so.
Future of Information Science:
Scientometrics (2)
What the h-index really says
By Christian Krattenthaler
This note explains that the so-called h-index (Hirsch’s bibliometric index) essentially reflects the same information as the total number of citations of an author’s publications, and is therefore a useless bibliometric index. This is based on a fascinating theorem of probability theory, which is also explained here.
_____________________________________________________
- Preamble
_____________________________________________________
In the article [6], published in 2005, the theoretical physicist Jorge Eduardo Hirsch proposed a new bibliometric index, since known as the h-index, which is used in many natural sciences with the greatest self-evidence and conviction as a measure of the influence and relevance of an author’s publication output. According to Hirsch, his index is easy to calculate (that is true), it (supposedly) avoids the problems of other bibliometric indices, and in particular it (supposedly) allows a well-founded comparison of authors who have very different total numbers of publications or very different total numbers of citations.
I will explain below that it can be proven mathematically that the h-index does not do what it promises. On the contrary, it is essentially equivalent to another (simpler) bibliometric measure, namely the total number of citations of an author’s publications, which will be referred to below as NZit.
To state the core result right away: roughly speaking, it is a mathematical theorem that the h-index of an author is (approximately) equal to 0.54 × √NZit! See the more precise formulation in Corollary 3 in Section 3.
The motivation for this note came from a (belated) reading of the DMV’s position paper on the use of bibliometric data [3]. A separate paragraph there is dedicated to the h-index. Everything said there is correct. But: no justification is given for many of the shortcomings of the h-index mentioned there. This is particularly unfortunate because the justification is mathematical! And it is also a shame because – with this justification – the criticism can even be strengthened (1).
_____________________________________________________
- What is the h-index?
_____________________________________________________
Here is the definition of the h-index.
Definition 1. We imagine that an author’s publications receive N1, N2, . . . citations, where N1 ≥ N2 ≥ ···. In other words, we order the author’s publications in descending order of the number of citations they received, so that the i-th publication received Ni citations. The h-index is then the maximum k such that k ≤ Nk.
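Definition 1 translates directly into a few lines of code. The following sketch (Python; the sample citation list is the one used in the 11-article example discussed below) computes the h-index of an arbitrary list of citation counts:

```python
def h_index(citations):
    """Maximum k such that the k-th most cited publication
    (1-indexed) has at least k citations (Definition 1)."""
    ns = sorted(citations, reverse=True)  # N1 >= N2 >= ...
    h = 0
    for k, n in enumerate(ns, start=1):
        if n >= k:
            h = k
        else:
            break  # the Nk are non-increasing, so no later k can qualify
    return h

# Citation numbers of the 11-article example below:
print(h_index([16, 8, 7, 6, 3, 3, 3, 1, 1, 1, 1]))  # -> 4
```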
Although this yields a simple algorithm for determining the h-index, the definition alone may not convey much intuition. There is, however, a visual approach to the h-index that makes it immediately clear – at least to me – what it is about.
To illustrate this, I will choose an example: let us assume we are talking about an author who has published 11 articles. Furthermore, the citation numbers of the individual articles are N1 = 16, N2 = 8, N3 = 7, N4 = 6, N5 = 3, N6 = 3, N7 = 3, N8 = 1, N9 = 1, N10 = 1, N11 = 1. We now plot these citation numbers in a bar chart as we know it from school; see Figure 1, left. In the right part of the figure, the vertical dividing lines have been “forgotten”. (The reader should ignore the dashed square for the moment.)
We now “squeeze” the largest possible square between the top/right boundary of the bar chart and the coordinate axes; see the dashed square in the right part of Figure 1. The side length of this square is the desired h-index! (In the example of Figure 1 it is 4.)
Such constructions are well known to combinatorialists (like me). In the example,
NZit = 50 = 16 + 8 + 7 + 6 + 3 + 3 + 3 + 1 + 1 + 1 + 1.
We call such a representation of a given number (here 50) as a sum whose summands are arranged in (weakly) descending order a (number) partition of NZit. The diagram representation as in the right part of Figure 1 is called the Young diagram or Ferrers diagram of the partition. Finally, the largest square that can be squeezed in, as in the right part of Figure 1, is called the Durfee square (2).
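As a quick check of the equivalence just described, the following sketch computes the Durfee-square side length directly from a partition; for the partition of 50 from Figure 1 it returns exactly the h-index, 4:

```python
def durfee_side(partition):
    """Side length of the Durfee square: the largest k such that
    the k-th largest part is at least k (parts in decreasing order)."""
    return max((k for k, part in enumerate(partition, start=1) if part >= k),
               default=0)

parts = [16, 8, 7, 6, 3, 3, 3, 1, 1, 1, 1]  # the partition of 50 from Figure 1
assert sum(parts) == 50
print(durfee_side(parts))  # -> 4
```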
___________________________________________________
- Concentration of the distribution: the “formula” for the h-index
_____________________________________________________
We now come to the (mathematical) core of this note: a limit theorem.
Fix a value of NZit. We then choose a partition at random from all possible partitions of NZit (= distributions of the total of NZit citations over the individual publications of an author), considering all such partitions to be equally likely. The question we ask ourselves is: what does such a randomly chosen partition, appropriately scaled, look like when NZit is large?
At first glance the question may seem nonsensical, but it is a standard question in probability theory. It is true, however, that it is not clear from the outset whether it has a sensible answer.
Nevertheless, we all know an example of such a question that does have a sensible answer. Take random walks on the integers that in each time step jump either to the next larger or the next smaller integer according to given probabilities. If one takes N such steps and rescales by N^(1/2), one obtains a (one-dimensional) Brownian motion as N → ∞. The details of the discrete process (the random walks) are not that important; in the limit one always obtains the universal Brownian motion. This is itself a random process (3).
Sometimes, however, the limit process is deterministic. One then speaks of limit shapes. And this fascinating phenomenon occurs with randomly chosen partitions (4).
The following theorem makes the preceding remarks precise. Roughly speaking, it states that the Young diagram of a partition of NZit, when scaled in the x and y directions by NZit^(1/2) (meaning that both the x and the y coordinates of all points are divided by NZit^(1/2)), is very likely practically indistinguishable from the curve in (3.1).
Theorem 2. Let γ be the curve

γ = { (x, y) : x, y > 0 and e^(−πx/√6) + e^(−πy/√6) = 1 },   (3.1)

and let ε > 0 be given. Then the probability that the step function represented by the Young diagram of a randomly chosen partition of NZit (with respect to the uniform distribution), scaled by NZit^(1/2) in the x and y directions, remains within an ε-neighborhood of γ tends to 1 as NZit → ∞.
Figure 2 illustrates this theorem. The curve γ (blue) is shown together with the scaled citation distribution/partition (yellow) from Figure 1 (i.e., the x and y coordinates of all points were divided by 50^(1/2)). It should be revealed at this point that I created the partition in Figure 1 with the input RandomPartition[50] in Mathematica (using the Combinatorics package). One sees that the step function follows γ relatively closely. The theorem says that this is no coincidence.
I will say something about the history of this theorem below. But first let us return to the h-index. For if a scaled random partition lies close to the curve γ, then the h-index of the partition must also be close to the (rescaled) “h-index” of the curve γ. Clearly, one obtains the “h-index” of γ by setting x = y in (3.1): this gives x = y = √6 log(2)/π.
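That this point really lies on the curve is easy to verify numerically; the following sketch assumes nothing beyond the curve equation (3.1):

```python
import math

# Intersection of the limit curve (3.1) with the diagonal x = y:
x = math.sqrt(6) * math.log(2) / math.pi

# Substituting x = y into e^(-pi*x/sqrt(6)) + e^(-pi*y/sqrt(6)) reduces
# the left-hand side to 2 * e^(-log 2) = 1, as the curve equation requires.
lhs = 2 * math.exp(-math.pi * x / math.sqrt(6))
print(round(x, 6))    # 0.540445
print(round(lhs, 6))  # 1.0
```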
Corollary 3. Let ε > 0 be given. Then the probability that the h-index of a randomly chosen partition of NZit (with respect to the uniform distribution) deviates by less than ε percent from

(√6 · log 2 / π) · √NZit    (3.2)

tends to 1 as NZit → ∞. Here √6 · log 2 / π = 0.540445… .
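Formula (3.2) is trivial to evaluate. The following sketch reproduces the “expected” h-index values quoted for the examples later in this note (3.82 for NZit = 50, about 17.1 for NZit = 1000):

```python
import math

def expected_h(n_cit):
    """'Expected' h-index according to (3.2)."""
    return math.sqrt(6) * math.log(2) / math.pi * math.sqrt(n_cit)

for n in (50, 1000):
    print(n, round(expected_h(n), 2))
# 50   -> 3.82
# 1000 -> 17.09
```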
The (obvious) conclusions from the corollary are discussed in the next section.
Now to the history of Theorem 2. As [2, Sec. 12.1] so beautifully puts it: “It is difficult to credit limit shape results like ⟨···⟩ precisely . . .”. The difficulty is to find, on the one hand, a result that is as strong as Theorem 2 and, on the other hand, a proof that meets today’s demands of rigor. (Authors who address such questions often have a physics background.) As stated in [2, Sec. 12.1], such a result goes back to Temperley [8], but based only on heuristic arguments. Later, Vershik and Kerov (see [9] and the references given there) also speak of limit shapes of partitions in their work, but seem to rigorously prove only weaker results. It is perhaps Fristedt [5, Theorem 2.9] who was the first to rigorously prove a result from which Theorem 2 easily follows. A source in which Theorem 2 is formulated exactly as above (with a stronger statement about the speed of convergence) is [7, Theorem 1].
____________________________________________________
- Conclusions
____________________________________________________
Put bluntly, Corollary 3 says that the h-index contains exactly the same information (yes, of course, with probability close to 1) as the total number of citations of an author. (Potential objections to this statement are discussed in the following section and – argumentatively – swept off the table.) So it does not do what Hirsch claims it does, namely allow comparisons between authors who have different numbers of citations, are at different stages of their careers, and so on. (It reminds me of cryptography, where one devises something clever to be immune to a particular decryption attack, only to fall into a different trap – in this case, the concentration of the distribution.)
So it makes no sense to compare the h-indices of different researchers. If anything, one has to compare a researcher’s h-index with the “expected” value according to (3.2). If the h-index is approximately equal to this value, that means nothing at all: it was to be expected. Only if the h-index deviates significantly could that mean anything. But what?
With more established researchers one can often observe that the h-index is significantly smaller than formula (3.2) predicts. But there is a very simple explanation: such authors usually have one, two or three extremely widely cited publications (usually books or review articles). If one removes these from the count, everything is “correct” again (i.e., the “reduced” h-index is close to the formula).
If, on the other hand, the h-index is significantly larger than the “expected” value in (3.2), then it is probably the case that this researcher has few truly highly cited publications, while the others are more or less equally (moderately?) cited.
Should it then, if anything, be assessed positively when the h-index lies significantly below the value of formula (3.2)?
One already notices that the level of the discussion is steadily slipping. Instead of reading all sorts of things into the h-index, it probably makes more sense to look at an author’s entire publication profile. (And it is even better to actually look at the content of the publications . . . But that is another discussion.)
_____________________________________________________
- Objections
_____________________________________________________
There are two potential objections to the conclusions presented in the previous section:
- Corollary 3 is an asymptotic result. In practice, when calculating the h-indices of authors, we are dealing with “very finite” numbers NZit.
- Is the assumption that all partitions (= citation distributions) are equally likely realistic?
Regarding the first objection: it is a fact that the concentration of the distribution around the expected value given in formula (3.2) is very strong, even for small NZit. This objection is therefore irrelevant. Here are two concrete examples: for NZit = 50 there are 204,226 partitions, and the “formula” (3.2) gives 3.82 for the “expected” h-index. A not-too-difficult calculation (5) shows that 77% of the partitions of 50 have an h-index of 3 or 4, and that 97% have an h-index of 3, 4 or 5. If, on the other hand, we take NZit = 1000, then there are approximately 24 × 10^30 partitions of 1,000, and the “expected” h-index from (3.2) is 17.1. Here it turns out that 88% of the partitions of 1000 have an h-index between 15 and 19, and that 97% have an h-index between 15 and 20.
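The concentration claimed for NZit = 50 can also be checked by brute force: the 204,226 partitions of 50 are few enough to enumerate completely. A sketch (runtime a few seconds):

```python
from collections import Counter

def partitions(n, max_part=None):
    """Generate all partitions of n, parts in weakly decreasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def h_of(parts):
    """h-index / Durfee-square side of a partition."""
    return max((k for k, p in enumerate(parts, 1) if p >= k), default=0)

counts = Counter(h_of(p) for p in partitions(50))
total = sum(counts.values())
print(total)  # 204226 = p(50)
print(round((counts[3] + counts[4]) / total, 2))              # share with h in {3, 4}
print(round((counts[3] + counts[4] + counts[5]) / total, 2))  # share with h in {3, 4, 5}
```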
Now to the second objection: is the uniform distribution a valid model? On this point one can of course only argue empirically, not mathematically. I argue that the uniform distribution model is very plausible at least for mathematics. One of the essential characteristics of mathematical publishing practice is that older (and old) publications continue to be cited, because they do not lose their validity, and also because mathematicians have (or at least should have) the ambition to cite the original source of a result (as long as it is not basic knowledge or a “folklore” result). The effect this has on the citation numbers of an author’s publications is that there will normally be a few that have caused a stir and continue to receive citations, and many that are received moderately or not at all and consequently receive few (or no . . . ) citations, with not many more added over time. The resulting profile of the bar chart of such a citation distribution corresponds exactly to the profile of a random partition, which is precisely what Theorem 2 captures. Confidence becomes near certainty when one looks at examples; see Section 6.
I am convinced that the uniform distribution model is also very suitable for some other natural sciences (theoretical physics, for instance, but not only). Because of its concentration property, it is very robust against distortions that are not too large. However, if, owing to the practices of a discipline, publications “necessarily” become obsolete after a certain (short) time because they are inevitably “overtaken” by newer (technical) developments, then the uniform distribution is probably no longer a suitable model. One then has to deal with citation distributions that are somewhat “deformed”, “thinner”, so that there are many more moderately cited publications but still a few that are cited particularly frequently. One would have to take a closer look at such a discipline in order to develop a suitable partition model for it; not all partitions would then be equally likely. But it is not as if such scenarios had not also been investigated; see (2) and the references given there. It is not surprising that for these scenarios there are almost always limit theorems of a similar character to Theorem 2. Then again there is an “expected” h-index of the form c · √NZit, with strong concentration, only the constant c is not the one in (3.2).
Where the uniform distribution assumption is certainly misplaced is with authors who have not published for a long time. In that situation the existing publications continue to receive citations, but no new publications (which have very low citation numbers at the beginning – 0, in fact) are added. This is not compatible with the uniform distribution profile from Theorem 2.
_____________________________________________________
- Some examples for illustration
_____________________________________________________
I will start with myself. What does my h-index look like? First, a point I have not yet touched on because it is irrelevant to the conclusions of this note: if you use different databases (e.g. Web of Science, Google Scholar, Scopus, MathSciNet, Zentralblatt, . . . ), then you get very different citation counts each time, since each database collects things in different ways (and actually collects different things . . . ), and so the h-index differs each time as well. In any case, I use MathSciNet here. For me it shows (as of July 20, 2021) NZit = 1990, and the h-index turns out to be 21. With NZit = 1990, the “formula” (3.2) gives the value 24.09. I would say: fits pretty well. But it is actually even better. If you look at my two most cited publications, you will see that they are review articles, not genuine research articles. They are, so to speak, cited “disproportionately”. So we should take them out of the “count”. Then 1,600 citations remain, and the “reduced” h-index is 20. Inserting NZit = 1600 into (3.2) gives 21.60.
I also took the liberty of looking up the NZit and h-index of the president and vice president of the DMV (again using MathSciNet). I hope they will forgive me. In any case, for Ilka Agricola, NZit = 414, and her h-index is 12. Substituting NZit = 414 into (3.2) gives 10.99. Here, too, one would actually have to remove the most cited article because it is a review article. The reduced number of citations is then NZit = 342, the reduced h-index is 10, and the “formula” (3.2) now gives 9.99. For Joachim Escher, MathSciNet shows NZit = 5900 and an h-index of 38. With NZit = 5900, (3.2) gives the value 41.48. This is also relatively close to the actual value of the h-index. Here it is not the case that the most cited articles are books or review articles; rather, the most cited article is an evidently particularly fundamental article on wave breaking, which is cited particularly often for that reason. If one takes it out of the “count”, then the reduced number of citations is NZit = 5113, the reduced h-index is 37, and the “formula” (3.2) yields 38.61.
I would like to point out that the article [10] contains numerous additional data and examples, in particular an evaluation and comparison with (3.2) of the h-indices of Abel Prize recipients, of members of the US National Academy of Sciences, and of associate professors of three mathematics departments at American research universities. All these data confirm the conclusions presented in Section 4.
I invite you to “check” your h-index and compare it with the “formula” (3.2)!
_____________________________________________________
- Concluding remarks
_____________________________________________________
The “euphoria” surrounding the invention of the h-index inspired the invention of numerous other such indices, likewise with claimed significance for the influence and relevance of authors’ publications. An example is the g-index of Leo Egghe [4], which – according to its inventor – is supposed to be superior to the h-index. Using the notation and assumptions of Definition 1, an author’s g-index is by definition the maximum k such that N1 + N2 + ··· + Nk (the number of citations of the author’s k most cited articles) is at least as large as k². Like all other indices that try to read something out of the citation distribution, it too fails, because by Theorem 2 the (scaled) citation distribution is – essentially – “deterministic”, and hence so is the corresponding index. …
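The definition of the g-index also translates directly into code. The sketch below (the sample list is the citation data from Figure 1) shows that the same citation list can carry quite different h- and g-values:

```python
from itertools import accumulate

def g_index(citations):
    """Maximum k such that the k most cited papers together
    have at least k^2 citations (Egghe's g-index)."""
    ns = sorted(citations, reverse=True)
    g = 0
    for k, cum in enumerate(accumulate(ns), start=1):
        if cum >= k * k:
            g = k
    return g

parts = [16, 8, 7, 6, 3, 3, 3, 1, 1, 1, 1]  # the example from Figure 1
print(g_index(parts))  # -> 6 (while the h-index of the same list is 4)
```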
Corollary 4. Let ε > 0 be given. Then the probability that the g-index of a randomly chosen partition of NZit (with respect to the uniform distribution) deviates by less than ε percent from

g · √NZit    (7.2)

tends to 1 as NZit → ∞, where g is the positive solution of the corresponding equation. Here g = 0.88699… .
To “support” this with data: my g-index is 37, and the “formula” (7.2) delivers the value 39.70 with NZit = 1990. Ilka Agricola’s g-index is 18, while the “formula” (7.2) gives 18.10 for NZit = 414. Finally, Joachim Escher’s g-index is 73, compared with 68.36 obtained from the “formula” (7.2) with NZit = 5900.
Hirsch’s article [6] is written in a very unmathematical way. In fact, Hirsch is aware of a correlation between the h-index and the total number of citations. In [6, Eq. (1)] he assumes the relation h = c · √NZit (6) and hypothesizes that 1/c² lies between 3 and 5, relying on empirical data. At this point (we are on the first page of the article) he already makes the fundamental mistake that invalidates everything else in the article: it does not occur to him that there could be a “fixed” relationship between the h-index and NZit (as expressed in Corollary 3), and that the variation in the constant c could be due to statistical fluctuations and other effects. … Since any meaningfulness of the h-index depends on the constant c not being “fixed”, everything else in the article [6] (which largely consists of somewhat naive heuristic assumptions and arguments and calculations derived from them) is rubbish.
I believe that the facts set out in this note should be better known. If other sciences already believe in such “hocus-pocus” (I once tried to convince a geoscientist that the h-index is nonsense; without success), then at least we mathematicians should know – and spread the word – that the h-index is one of the most foolish of the various bibliometric indices, in the sense that it promises something it does not deliver and is in fact essentially equivalent to a much simpler index (the number of citations).
Remarks
- A note with similar content appeared in English a few years ago in [10]. However, it is based on a weaker result and is therefore formulated more cautiously.
- The classic [1] is recommended to interested readers who want to learn more about the (extremely rich) theory of (number) partitions.
- There are numerous other such (universal) limit processes in probability theory, such as Aldous’s “continuum random tree”, the Brownian map (a “random surface”), or the Brownian snake of Marckert and Mokkadem.
- Other examples of limit shapes are the limit surfaces of so-called “plane partitions” (two-dimensional generalizations of partitions) and, more generally, the height functions of randomly chosen perfect matchings of periodic bipartite graphs.
- One uses generating functions, see [1, Sec. 3].
- Equation (1) in [6] actually reads NZit = a h², where h denotes the h-index. This translates into a = 1/c². Hirsch empirically determines a range of 3 to 5 for this proportionality factor a.
Literature
[1] G. E. Andrews, The Theory of Partitions, Encyclopedia of Mathematics and its Applications, vol. 2, Addison-Wesley, Reading, 1976.
[2] S. DeSalvo and I. Pak, Limit shapes via bijections, Combin. Prob. Comput. 28 (2019), 187-240.
[3] German Association of Mathematicians, position paper on the use of bibliometric data, communications German. Math.-Ver. 27 (2019), 212-217.
[4] L. Egghe, Theory and practice of the g-index, Scientometrics 69 (2006), 131-152.
[5] B. Fristedt, The structure of random partitions of large integers, Trans. Amer. Math. Soc. 337 (1993), 703-735.
[6] JE Hirsch, An index to quantify an individual’s scientific research output, Proc. Natl. Acad. Sci. USA 102 (2005), 16569-16572.
[7] F. Petrov, Two elementary approaches to the limit shapes of Young diagrams, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov 370 (2009), Kraevye Zadachi Matematicheskoi Fiziki i Smezhnye Voprosy Teorii Funktsii. 40, 111-131, 221; English translation in J. Math. Sci. (NY) 166 (2010), 63-74.
[8] HNV Temperley, Statistical mechanics and the partition of numbers, II. The form of crystal surfaces, Proc. Cambridge Philos. Soc. 48 (1952), 683-697.
[9] AM Vershik, Statistical mechanics of combinatorial partitions, and their limit configurations, Functional. Anal. i Prilozhen. 30 (1996), 19-39, 96; English translation in Funct. Anal. Appl. 30 (1996), 90-105.
[10] A. Yong, Critique of Hirsch’s citation index: a combinatorial Fermi problem, Notices Amer. Math. Soc. 61 (2014), 1040-1050.
Prof. Dr. Christian Krattenthaler, Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna – christian.krattenthaler@univie.ac.at
Darmstadt University of Applied Sciences
26 successful graduates
in the sixth volunteer program
By Prof. Geribert Jakob
The postgraduate, cooperative traineeship program at Darmstadt University of Applied Sciences (h_da) has 26 successful graduates in its sixth year, including 24 media documentalists from the partner institutions – the ARD state broadcasters, ZDF, Deutsche Welle, the German Broadcasting Archive and the FAZ (RTL will again participate next year) – and two documentalists in research data management for partners from the Leibniz Association. They have all completed a two-year program: the first year imparts qualifications through a practical internship, which continues into the second year and is complemented by an academic phase focusing on information science; project, process, quality and requirements management; information law; and business administration for documentation. Apart from basic topics such as metadata, indexing and information retrieval, two focal points of this year’s training were machine learning/semantic systems and agile methods.
The culmination of the program in terms of content came with the public presentations, held at the university, of the annual projects carried out individually by the graduates. A special feature of these projects is that their results are implemented directly as business, organizational or technical solutions in the documentation departments, information systems and archives of the graduates’ respective houses. Another pleasing aspect of the last five years was winning a total of six science prizes.
The study program is very demanding and usually requires a Magister or master’s degree for participation. In exceptional cases, particularly talented bachelor’s degree holders can also be admitted, similar to doctoral programs. This year there were also three participants with doctorates. Didactically, we follow a problem- and project-based learning approach (PBL/ProjBL) with flipped sessions, for which the participants come prepared – that is, the acquisition of factual knowledge is shifted to the self-study phase – and which make it possible to gain experience during the academic working weeks through discussion and working on problems. In addition to some colleagues from information science in the h_da Media department, more than twenty lecturers, highly qualified in their specialist areas, teach in the program; because of the resulting practical relevance, it is very popular with its participants. In addition to the supervisors, each partner has at least one coach supporting the participants in the program.
Another highlight is the “h_da Symposium on Scientific and Media Documentation” with its cutting-edge topics, which is integrated into the training, takes place regularly in November, and can be attended free of charge by the professional public. This year we had over 60 external guests at the virtual event. Interested parties are always welcome. Further information is presented in detail on the website http://mediendocumentation.h-da.de/doku.php?id=wd:start .
The current program was recently extended until 2024 by h_da and its partners SWR/SR, BR, WDR, hr, rbb, RadioBremen, DRA, Deutschlandradio, Deutsche Welle, ZDF, RTL and FAZ. An appointment procedure is currently under way in the Information Science degree programs; the position to be filled is intended to ensure the continuation of the program from 2025, as the current program head, Prof. Geribert Jakob, retires at the beginning of 2025.
The program is open to further partners and is all the more relevant as there will be a massive need for young professionals over the next five years, because at least 40% of qualified documentalists are retiring. Those interested in the training should note that a traineeship contract with one of our partners is a mandatory requirement.
The next cohort is just around the corner and will be supervised starting next week.
ZB MED
Optimal use of LIVIVO
Ladies and Gentlemen,
I would like to draw your attention to a virtual hands-on workshop taking place on December 7, 2021. The workshop provides knowledge about the functionalities, services and features of LIVIVO, the ZB MED search portal for the life sciences.
The aim of the workshop is to provide an overview of the LIVIVO user interface. You will acquire the skills to use the search portal optimally for your needs and to pass the knowledge on to your own users. There will also be ample room for the exchange of experiences.
The workshop is aimed at employees of academic and public libraries on the one hand, and at researchers and students from all disciplines of the life sciences on the other. You are welcome to circulate the invitation in your institutions and networks.
All information about the workshop and registration at https://www.zbmed.de/ueber-uns/presse/neuigkeiten-aus-zb-med/artikel/einladen-livivo-workshop-7-december-2021/
Kind regards Ulrike Ostrzinski, ZB MED – Information Center for Life Sciences
Open Password
Forum and news
for the information industry
in German-speaking countries
New editions of Open Password appear three times a week.
If you would like to subscribe to the email service free of charge, please register at www.password-online.de.
The current edition of Open Password can be accessed on the web immediately after publication at www.password-online.de/archiv. This also applies to all previously published editions.
International Cooperation Partner:
Outsell (London)
Business Industry Information Association/BIIA (Hong Kong)
Open Password Archive – Publications