LIS Links

First and Largest Academic Social Network of LIS Professionals in India


1. Gray literature (or grey literature) is a field in library and information science. The term is used variably by the intellectual community, librarians, and medical and research professionals to refer to a body of materials that cannot be found easily through conventional channels such as publishers, "but which is frequently original and usually recent" in the words of M.C. Debachere. Examples of grey literature include technical reports from government agencies or scientific research groups, working papers from research groups or committees, white papers, or preprints. The term grey literature is often employed exclusively with scientific research in mind. Nevertheless, grey literature is not a specific genre of document, but a specific, non-commercial means of disseminating information.
The identification and acquisition of grey literature poses difficulties for librarians and other information professionals for several reasons. Generally, grey literature lacks strict bibliographic control, meaning that basic information such as author, publication date or publishing body may not be easily discerned. Similarly, non-professional layouts and formats and low print runs of grey literature make the organized collection of such publications challenging compared to more traditional published media such as journals and books.
Information and research professionals generally draw a distinction between ephemera and grey literature. However, there are certain overlaps between the two media, and they certainly share common frustrations such as bibliographic control issues.

2. An institutional repository is an online locus for collecting, preserving, and disseminating, in digital form, the intellectual output of an institution, particularly a research institution.
For a university, this would include materials such as research journal articles, before (preprints) and after (postprints) undergoing peer review, and digital versions of theses and dissertations, but it might also include other digital assets generated by normal academic life, such as administrative documents, course notes, or learning objects.
The four main objectives for having an institutional repository are:
• to provide open access to institutional research output by self-archiving it;
• to create global visibility for an institution's scholarly research;
• to collect content in a single location;
• to store and preserve other institutional digital assets, including unpublished or otherwise easily lost ("grey") literature (e.g., theses or technical reports).
Features and Benefits of an Institutional Repository
According to the Directory of Open Access Repositories (OpenDOAR) data [6] and the Repository 66 map as of December 2010,[7] the majority of IRs are built using Open Source software.
While the most popular Open Source and hosted applications share the advantages that IRs bring to institutions, such as increased visibility and impact of research output, interoperability and availability of technical support, IR advocates tend to favour Open Source solutions for the reason that they are by their nature more compatible with the ideology of the freedom and independence of the internet from commercial interests. On the other hand, some institutions opt for outsourced commercial solutions.
In her briefing paper[8] on open access repositories, advocate Alma Swan lists the following as the benefits that repositories bring to institutions:
• Opening up outputs of the institution to a worldwide audience;
• Maximizing the visibility and impact of these outputs as a result;
• Showcasing the institution to interested constituencies – prospective staff, prospective students and other stakeholders;
• Collecting and curating digital output;
• Managing and measuring research and teaching activities;
• Providing a workspace for work-in-progress, and for collaborative or large-scale projects;
• Enabling and encouraging interdisciplinary approaches to research;
• Facilitating the development and sharing of digital teaching materials and aids, and
• Supporting student endeavours, providing access to theses and dissertations and a location for the development of e-portfolios.
Repository Software
There are a number of open-source software packages for running a repository including:
• DSpace
• Eprints
• Fedora
There are also hosted (proprietary) software services, including:
• Digital Commons
• SimpleDL
There is a mashup indicating the worldwide locations of open access digital repositories. This project is called Repository 66[1] and is based on data provided by ROAR and the OpenDOAR service developed by SHERPA.
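
The repository platforms listed above (DSpace, EPrints, Fedora) typically expose their metadata through the OAI-PMH protocol. Below is a minimal Python sketch of harvesting record titles from such a repository; the endpoint URL is a hypothetical placeholder, and oai_dc (Dublin Core) is the metadata format that OAI-PMH repositories are required to support.

# Minimal OAI-PMH harvesting sketch (the endpoint URL below is a placeholder).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def harvest_titles(base_url, metadata_prefix="oai_dc"):
    """Fetch one page of ListRecords and return the Dublin Core titles."""
    query = urllib.parse.urlencode({"verb": "ListRecords",
                                    "metadataPrefix": metadata_prefix})
    with urllib.request.urlopen(base_url + "?" + query) as response:
        tree = ET.parse(response)
    return [element.text for element in tree.iter(DC_TITLE)]

if __name__ == "__main__":
    # Hypothetical endpoint; substitute the repository's real OAI-PMH base URL.
    for title in harvest_titles("http://repository.example.org/oai/request"):
        print(title)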

3. Ontologies in Information Retrieval Systems
The use of ontologies to overcome the limitations of keyword-based search has been put forward as one of the motivations of the Semantic Web since its emergence in the
late 90’s. While there have been contributions in this direction in the last few years,
most achievements so far either make partial use of the full expressive power of an
ontology-based knowledge representation, or are based on boolean retrieval models,
and therefore lack an appropriate ranking model needed for scaling up to massive
information sources.

In the former case, ontologies provide a shallow representation of the information
space, equivalent in essence to the taxonomies and thesauri used before the Semantic
Web was envisioned [3,6,7,15]. Rather than an instrument for building knowledge
bases, these light-weight ontologies provide controlled vocabularies for the classification
of content, and rarely surpass several KBs in size. This approach has brought
improvements over classic keyword-based search through e.g. query expansion based
on class hierarchies and rules on relationships, or multifaceted searching and browsing.
It is not clear though that these techniques alone really take advantage of the full potential
of an ontological language, beyond those that could be reduced to conventional
classification schemes.
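
As an illustration of the kind of hierarchy-based query expansion mentioned above, the sketch below (with an invented toy taxonomy and document set) expands a query term with its subclasses before running an ordinary keyword match.

# Toy sketch of class-hierarchy query expansion (taxonomy and documents are invented).
TAXONOMY = {
    "publication": ["journal", "report", "thesis"],
    "report": ["technical report", "white paper"],
}

def expand(term, taxonomy):
    """Return the term together with all of its descendants in the hierarchy."""
    terms = [term]
    for child in taxonomy.get(term, []):
        terms.extend(expand(child, taxonomy))
    return terms

def search(query_term, documents, taxonomy):
    """Keyword search in which the query term is expanded with its subclasses."""
    expanded = expand(query_term, taxonomy)
    return [doc for doc in documents if any(t in doc.lower() for t in expanded)]

documents = ["A technical report on grey literature",
             "A journal article on semantic search",
             "Minutes of the library committee meeting"]

print(search("publication", documents, TAXONOMY))
# A plain keyword match for "publication" would return nothing; the expanded
# query matches the technical report and the journal article.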

Other semantic search techniques have been developed that do exploit large knowledge bases in the order of GBs or TBs, consisting of thousands of ontology instances, classes and relations of arbitrary complexity [1,2,4,12]. These techniques typically use boolean search models, based on an ideal view of the information space as consisting of non-ambiguous, non-redundant, formal pieces of ontological knowledge. In this view, the information retrieval problem is reduced to a data retrieval task: search results are assumed to be always 100% precise, and there is no notion of an approximate answer to an information need. This model makes sense when the whole information corpus can be fully represented as an ontology-driven knowledge base, so that search results consist of ontology entities.
However, there are limits to the extent to which knowledge can or should be formalized
in this way. First, because of the huge amount of information currently available
to information systems worldwide in the form of unstructured text and media documents,
converting this volume of information into formal ontological knowledge at an
affordable cost is currently an unsolved problem in general.
Second, documents hold a value of their own, and are not equivalent to the sum of
their pieces, no matter how well formalized and interlinked. The replacement of a
document by a bag of information atoms inevitably implies a loss of information value:
the thread of thought behind the order of the sentences in free text, the choice of the
words, etc., are a valuable, relevant, and necessary part of the conveyed message.
Therefore, although it is useful to break documents down into smaller information
units that can be reused and reassembled to serve different purposes, it is yet often
appropriate to keep the original documents in the system.
Third, wherever ontology values carry free text, boolean semantic search systems
do a full-text search within the string values. In fact, if the string values hold long
pieces of free text, a form of keyword-based search is taking place in practice beneath
the ontology-based query model since, in a way, unstructured documents are hidden
within ontology values, whereby the “perfect match” assumption starts to become
arguable, and search results may start to grow in size. While this may be manageable
and sufficient for small knowledge bases, the boolean model does not scale properly
for massive document repositories, where searches typically return hundreds or thousands of results. Boolean search does not provide clear ranking criteria, without which the search system may become useless if the search space is too big.
In this paper we propose an ontology-based retrieval model meant for the exploitation
of full-fledged domain ontologies and knowledge bases, to support semantic
search in document repositories. In contrast to boolean semantic search systems, in our
perspective full documents, rather than specific ontology values from a KB, are returned
in response to user information needs. The search system takes advantage of
both detailed instance-level knowledge available in the KB, and topic taxonomies for
classification. To cope with large-scale information sources, we propose an adaptation
of the classic vector-space model [16], suitable for an ontology-based representation,
upon which a ranking algorithm is defined.
The performance of our proposed model is in direct relation with the amount and quality of information within the KB it runs upon. The latest advances in automating ontology population and text annotation are promising [5,9,11,14]. Until, if ever, ontologies and metadata (and the Semantic Web itself) become a worldwide commodity, the lack or incompleteness of available ontologies and KBs is a limitation we shall likely have to live with in the mid term. In consequence, tolerance to incomplete KBs has been set as an important requirement in our proposal. This means that the recall and precision of keyword-based search shall be retained when ontology information is not available or incomplete.
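
To make the idea of an ontology-adapted vector-space model concrete, the following simplified sketch represents each document by the ontology entities it is annotated with, weights them in a TF-IDF-like manner, and ranks documents by cosine similarity to the query. The annotations and weighting are invented for illustration and are not the authors' actual algorithm.

# Simplified illustration of annotation-based vector-space ranking (toy data).
import math
from collections import Counter

# Each document is annotated with the ontology entities it mentions.
ANNOTATIONS = {
    "doc1": ["InstitutionalRepository", "OpenAccess", "DSpace"],
    "doc2": ["OpenAccess", "GreyLiterature"],
    "doc3": ["PERT", "CriticalPath"],
}

def idf(entity):
    """Inverse document frequency of an ontology entity across the corpus."""
    df = sum(1 for entities in ANNOTATIONS.values() if entity in entities)
    return math.log(len(ANNOTATIONS) / df) if df else 0.0

def vector(entities):
    """Weight each entity by its occurrence count times its IDF."""
    counts = Counter(entities)
    return {e: counts[e] * idf(e) for e in counts}

def cosine(v1, v2):
    dot = sum(w * v2.get(e, 0.0) for e, w in v1.items())
    norm = math.sqrt(sum(w * w for w in v1.values())) * \
           math.sqrt(sum(w * w for w in v2.values()))
    return dot / norm if norm else 0.0

def rank(query_entities):
    query_vector = vector(query_entities)
    scores = {doc: cosine(query_vector, vector(entities))
              for doc, entities in ANNOTATIONS.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank(["OpenAccess", "GreyLiterature"]))   # doc2 ranks first, then doc1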

4. What is Impact factor?
The impact factor is a measure reflecting the average number of citations to articles published in science and social science journals. It is frequently used as a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed to be more important than those with lower ones. The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI), now part of Thomson Reuters. Impact factors are calculated yearly for those journals that are indexed in the Thomson Reuters Journal Citation Reports. For example, a journal's 2008 impact factor is calculated as follows:
A = the number of times articles published in 2006 and 2007 were cited by indexed journals during 2008.
B = the total number of "citable items" published by that journal in 2006 and 2007. ("Citable items" are usually articles, reviews, proceedings, or notes; not editorials or Letters-to-the-Editor.)
2008 impact factor = A/B.
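For example, with hypothetical figures: if articles a journal published in 2006 and 2007 were cited 450 times by indexed journals during 2008 (A = 450), and the journal published 180 citable items in 2006 and 2007 (B = 180), its 2008 impact factor would be 450 / 180 = 2.5.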
Content analysis or textual analysis is a methodology in the social sciences for studying the content of communication. Earl Babbie defines it as "the study of recorded human communications, such as books, websites, paintings and laws."
According to Dr. Farooq Joubish, content analysis is considered a scholarly methodology in the humanities by which texts are studied as to authorship, authenticity, or meaning. This latter subject includes philology, hermeneutics, and semiotics.
Harold Lasswell formulated the core questions of content analysis: "Who says what, to whom, why, to what extent and with what effect?" Ole Holsti (1969) offers a broad definition of content analysis as "any technique for making inferences by objectively and systematically identifying specified characteristics of messages." Kimberly A. Neuendorf (2002, p. 10) offers a six-part definition of content analysis:
"Content analysis is a summarising, quantitative analysis of messages that relies on the scientific method (including attention to objectivity, intersubjectivity, a priori design, reliability, validity, generalisability, replicability, and hypothesis testing) and is not limited as to the types of variables that may be measured or the context in which the messages are created or presented."

5. SIX SIGMA in Libraries
Six Sigma is a business management strategy developed by the Motorola company in the USA in 1986. It seeks to improve the quality of process outputs by identifying and removing the causes of defects and minimising variability in manufacturing and business processes. It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization.
Application in libraries:
1.) Developing specialized pool of library professionals
2.) Developing quality services
3.) Judicious budget allocation
(OR)
The purpose of any library is to satisfy the needs of all its users. Users are the backbone of the library, and they are the best judges of the quality of its services. In general, however, it is difficult to satisfy every user's needs completely. To evaluate user satisfaction and to improve the quality of the library, it is necessary to bring in new innovations. To develop the library and to provide maximum user satisfaction, Six Sigma can be implemented in libraries. The theory of Six Sigma has been implemented in the manufacturing sector to eliminate waste and customer complaints and thereby satisfy clients. In the same way, Six Sigma can be applied in the library field to maximize user satisfaction by eliminating their complaints and problems. Accordingly, this study aims to implement Six Sigma to provide better service and full satisfaction to library users.

6. Web 2.0
The term Web 2.0 is associated with web applications that facilitate participatory information sharing, interoperability, user-centered design,[1] and collaboration on the World Wide Web. A Web 2.0 site allows users to interact and collaborate with each other in a social media dialogue as creators (prosumers) of user-generated content in a virtual community, in contrast to websites where users (consumers) are limited to the passive viewing of content that was created for them. Examples of Web 2.0 include social networking sites, blogs, wikis, video sharing sites, hosted services, web applications, mashups and folksonomies.
The term is closely associated with Tim O'Reilly because of the O'Reilly Media Web 2.0 conference in late 2004.[2][3] Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specification, but rather to cumulative changes in the ways software developers and end-users use the Web. Whether Web 2.0 is qualitatively different from prior web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who called the term a "piece of jargon", precisely because he intended the Web in his vision as "a collaborative medium, a place where we [could] all meet and read and write". He called it the "Read/Write Web".

7. Hypertext and Hypermedia & Hyperlinks
Hypertext and hypermedia refer to Web pages and other kinds of on-screen content that employ hyperlinks. Hyperlinks give us choices when we look for information, listen to music, purchase products, and engage in similar activities. They take the form of buttons, underlined words and phrases, and other “hot” areas on the screen.
Hypertext refers to the use of hyperlinks (or simply “links”) to present text and static graphics. Many websites are entirely or largely hypertexts. Hypermedia refers to the presentation of video, animation, and audio, which are often referred to as “dynamic” or “time-based” content or as “multimedia”.

8. What is the difference between open-source software (OSS) and commercial software?
Open source software is computer software that is available in source code form: the source code and certain other rights normally reserved for copyright holders are provided under a software license that permits users to study, change, improve and at times also to distribute the software.
Open source software is very often developed in a public, collaborative manner. Open-source software is the most prominent example of open-source development and often compared to (technically defined) user-generated content or (legally defined) open content movements.[1]
A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers.
The free software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open source software (OSS) as an expression which is less ambiguous and more comfortable for the corporate world.[4] Software developers may want to publish their software with an open source license, so that anybody may also develop the same software or understand its internal functioning. Open source software generally allows anyone to create modifications of the software, port it to new operating systems and processor architectures, share it with others or, in some cases, market it. Scholars Casson and Ryan have pointed out several policy-based reasons for adoption of open source, in particular, the heightened value proposition from open source (when compared to most proprietary formats) in the following categories:
• Security
• Affordability
• Transparency
• Perpetuity
• Interoperability
• Localization.[5]
The Open Source Initiative's definition is widely recognized as the standard or de facto definition.
The Open Source Initiative (OSI) was formed in February 1998 by Raymond and Perens. With about 20 years of evidence from case histories of closed and open development already provided by the Internet, the OSI continued to present the 'open source' case to commercial businesses. They sought to bring a higher profile to the practical benefits of freely available source code, and wanted to bring major software businesses and other high-tech industries into open source. Perens adapted the Debian Free Software Guidelines to make The Open Source Definition.[10]
Example: the Koha library management software.
Widely used open source products
Open source software (OSS) projects are built and maintained by a network of volunteer programmers. Prime examples of open source products are the Apache HTTP Server, the e-commerce platform osCommerce and the internet browser Mozilla Firefox. One of the most successful open source products is the GNU/Linux operating system, an open source Unix-like operating system, and its derivative Android, an operating system for mobile devices.[17][18] In some fields, open source software is the norm, as in voice over IP applications with Asterisk (PBX). Open standards are not, however, limited to open-source software. For example, Microsoft has also joined the open-source discussion with their adoption of the OpenDocument format[5] as well as by creating another open standard, the Office Open XML formats.

9. What are resource sharing, networking and consortia? Give the names and details of science and social science consortia in India.

Electronic publishing has been revolutionizing the format of recorded knowledge. Electronic information services are attracting readers' attention in today's network environment. This changing scenario in the library environment has given rise to the need for and use of e-journals alongside print versions. Electronic journals (e-journals) bring new challenges for library and information professionals in giving their end users full-text access to scholarly publications in both print and electronic versions. The aim of this paper is to identify the various issues relating to access and bibliographic control of e-journals, access management problems, policy issues, and the development of an e-journal consortium approach to subscribing to scholarly peer-reviewed journals for library users in a network environment.
Concept and Definition of E-Journals
The concept of e-journals emerged from the 1980s onward; they were initially made available in CD-ROM format, and then the advent of the WWW and the Internet accelerated the publication of electronic versions of print journals, whose number has been increasing by leaps and bounds. According to the statistics published in the seventh edition of the Directory of Electronic Journals, Newsletters and Academic Discussion Lists in 1997, 1,049 e-journals were listed, a number which rose to 3,915 in the 2000 edition. By now the number may well have crossed 10,000. Owing to convenience of access and cost-effectiveness in publication and distribution, most publishers have started publishing e-versions of their print journals.
Problems Faced by Librarians in Acquiring E-Journals
For accessing e-journals, the concerned libraries have to sign an agreement with the publishers of the particular journals and obtain a login and password. Payment has to be made in foreign currency if e-journals are purchased directly from the publishers. At the same time, one has to keep a record of the logins and passwords provided by the individual publishers for accessing e-journals. Therefore there is a need to purchase e-journals from a vendor or e-journal aggregator who can arrange to supply the required titles of any publisher at one point, either on a login-and-password basis or by IP address. Since e-journals are costly, library and information centres with common interests may come together, form consortia and negotiate with the publishers or aggregators for access to e-journals for their users. The consortium approach to acquiring e-journals is very popular in the USA, the UK and many western countries, and it is now coming up in India with the formation of the INDEST, FORSA, CSIR, IIM
and UGC-INFONET e-journal consortia. The consortia like INDEST, FORSA and CSIR are running successfully, while the UGC-INFONET consortium for e-journal access, managed by INFLIBNET, is under experiment and trial.
Challenges of Managing E-Journals
At present, selected university libraries in India (approximately 150 universities) are getting the benefit of access to e-journals, which will certainly boost the quality of research in terms of originality and currency. A few years back it was very difficult to obtain a copy or reprint of a published journal article in any discipline; now the situation has changed. Most of the renowned foreign publishers of scholarly journals, which earlier published only print versions, have also started publishing journals in electronic or digital form. The e-versions of some society publications can be accessed over the Internet free of cost, but for most journals published by commercial publishers, such as SpringerLink, Kluwer Online, Elsevier Science, IEE and IEEE, the ACM Digital Library, Emerald, etc., access is restricted to subscribing institutions or individual subscribers only. The UGC, INFLIBNET and MHRD have taken initiatives to give the academic community of India access to scholarly online journals by establishing e-journal consortia, viz., the INDEST Consortium and UGC-INFONET.

10. Informetrics & Bibliometrics & Cybermetrics & Webometrics & Scientometrics

Before defining the relationship, it is essential to define all the terms. Here the terms are defined briefly (in stub form).
1. Informetrics: the study of quantitative aspects of information in any form.
2. Bibliometrics: the study of quantitative aspects of the production, dissemination and use of recorded information.
3. Cybermetrics: the study of quantitative aspects of the Internet as a whole.
4. Scientometrics: the study of quantitative aspects of science as a discipline or economic activity.
5. Webometrics: the study of quantitative aspects of the web / web sites.

Figure- Relationship diagram of 5 Metrics
In the diagram, the circle of informetrics covers all the other metrics circles because, according to the definitions above, it concerns quantitative aspects of information of any type.

The part of the scientometrics circle that overlaps the circle of bibliometrics shows the politico-economic aspects of scientometrics. The economic aspect of science shows the impact of scientific research on society.

Bjorneborn & Ingwersen have proposed a differentiated terminology distinguishing between studies of the web and studies of all Internet applications. They use 'webometrics' for the study of the web and 'cybermetrics' for the study of Internet applications.

Some part of the cybermetrics ellipse lies outside bibliometrics. This is because some activities in cybermetrics are normally not recorded, but communicated synchronously, as in chat rooms.
In the diagram, the circle of webometrics overlaps the circle of bibliometrics but stays within the boundaries of cybermetrics. The webometrics circle cannot extend beyond the circle of cybermetrics, because the web is a part of cyberspace. However, part of the webometrics ellipse lies outside bibliometrics, because some aspects of webometrics (link structure, technologies and so on) are not included in bibliometrics, i.e. they lie beyond its boundaries. These points help in understanding the relationship between bibliometrics and webometrics.

11. Information seeking behaviour
12. What was the contribution of Melvil Dewey to libraries in the USA, and why is he called the father of the American library?

Melville Louis Kossuth (Melvil) Dewey (December 10, 1851 – December 26, 1931) was an American librarian and educator, inventor of the Dewey Decimal system of library classification, and a founder of the Lake Placid Club.
Dewey was a pioneer of American librarianship and an influential factor in the development of libraries in America at the beginning of the 20th century. He is best known for the decimal classification system that is used in most public and school libraries, but the decimal system was just one of a long list of innovations. Among them was the idea of the state library as controller of school and public library services within a state. Dewey is also known for the creation of hanging vertical files, which he first introduced at the Columbian Exposition of 1893 in Chicago. In Boston, Massachusetts, he founded the Library Bureau, a private company "for the definite purpose of furnishing libraries with equipment and supplies of unvarying correctness and reliability."
Discussion Groups
An informal and voluntary gathering of individuals (in person, through a conference call, or on a website) to exchange ideas, information, and suggestions on needs, problems, subjects, etc., of mutual interest. Discussion groups are one of the mainstays of the popularity of the Internet.

Social network
A social network is a social structure made up of individuals (or organizations) called "nodes", which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, common interest, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige.

Really Simple Syndication
RSS (originally RDF Site Summary, often dubbed Really Simple Syndication) is a family of web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format.[2] An RSS document (which is called a "feed", "web feed",[3] or "channel") includes full or summarized text, plus metadata such as publishing dates and authorship.
RSS feeds benefit publishers by letting them syndicate content automatically. A standardized XML file format allows the information to be published once and viewed by many different programs. They benefit readers who want to subscribe to timely updates from favored websites or to aggregate feeds from many sites into one place.
RSS feeds can be read using software called an "RSS reader", "feed reader", or "aggregator", which can be web-based, desktop-based, or mobile-device-based. The user subscribes to a feed by entering into the reader the feed's URI or by clicking a feed icon in a web browser that initiates the subscription process. The RSS reader checks the user's subscribed feeds regularly for new work, downloads any updates that it finds, and provides a user interface to monitor and read the feeds. RSS allows users to avoid manually inspecting all of the websites they are interested in, and instead subscribe to websites such that all new content is pushed onto their browsers when it becomes available.
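
As an illustration of what a feed reader does when it checks a subscription, the sketch below fetches an RSS 2.0 feed with the Python standard library and lists the item titles and links; the feed URL is a hypothetical placeholder.

# Minimal RSS 2.0 reader sketch (the feed URL below is a placeholder).
import urllib.request
import xml.etree.ElementTree as ET

def read_feed(url):
    """Return (title, link) pairs for the items in an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    items = []
    for item in root.findall("./channel/item"):   # RSS 2.0 puts <item> inside <channel>
        title = item.findtext("title", default="(no title)")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

if __name__ == "__main__":
    for title, link in read_feed("https://example.org/feed.rss"):
        print(title, "-", link)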

Wikis and Wikipedia
Wikipedia is the free encyclopedia that anyone can edit online. A wiki is a website that allows the creation and editing of any number of interlinked web pages via a web browser using a simplified markup language or a WYSIWYG text editor.[1][2][3] Wikis are typically powered by wiki software and are often used collaboratively by multiple users. Examples include community websites, corporate intranets, knowledge management systems, and note services. The software can also be used for personal notetaking.
Wikis may serve many different purposes. Some permit control over different functions (levels of access). For example, editing rights may permit changing, adding or removing material. Others may permit access without enforcing access control. Other rules may also be imposed for organizing content.
Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described it as "the simplest online database that could possibly work."[4] "Wiki" (pronounced [ˈwiki] or [ˈviki]) is a Hawaiian word meaning "fast" or "quick".

About NLIST
Background
The Project entitled "National Library and Information Services Infrastructure for Scholarly Content (N-LIST)", being jointly executed by the UGC-INFONET Digital Library Consortium, INFLIBNET Centre and the INDEST-AICTE Consortium, IIT Delhi, provides for
i) cross-subscription to e-resources subscribed by the two Consortia, i.e. subscription to INDEST-AICTE resources for universities and UGC-INFONET resources for technical institutions; and
ii) access to selected e-resources to colleges. The N-LIST project provides access to e-resources to students, researchers and faculty from colleges and other beneficiary institutions through server(s) installed at the INFLIBNET Centre. The authorized users from colleges can now access e-resources and download articles required by them directly from the publisher's website once they are duly authenticated as authorized users through servers deployed at the INFLIBNET Centre.
N-LIST: Four Components
The project has four distinct components, i.e. i) to subscribe and provide access to selected UGC-INFONET e-resources to technical institutions (IITs, IISc, IISERs and NITs) and monitor their usage; ii) to subscribe and provide access to selected INDEST e-resources to selected universities and monitor their usage; iii) to subscribe and provide access to selected e-resources to 6,000 Govt./Govt.-aided colleges and monitor their usage; and iv) to act as a Monitoring Agency for colleges and evaluate, promote, impart training and monitor all activities involved in the process of providing effective and efficient access to e-resources to colleges.
The INDEST and UGC-INFONET consortia are jointly responsible for the activities listed at i) and ii) above. The INFLIBNET Centre, Ahmedabad is responsible for the activities listed at iii) and iv) above. The INFLIBNET Centre is also responsible for developing and deploying appropriate software tools and techniques for authenticating authorized users.

Current Status

As of 23 November 2011, a total of 2,120 colleges had registered themselves with the N-LIST programme, including 1,942 Govt./Govt.-aided colleges covered under Sections 12B/2F of the UGC Act as well as non-aided colleges. Log-in IDs and passwords for accessing e-resources have been sent to the authorized users from these 1,942 colleges. All e-resources subscribed for colleges under the N-LIST Project are now accessible to these 1,942 colleges through the N-LIST website (http://nlist.inflibnet.ac.in).

Resource Description and Access (RDA)
RDA is the title of the third revision of the Anglo-American Cataloguing Rules, a library cataloguing standard used to support the discovery, identification and employment of information resources.
The new cataloguing code, Resource Description and Access (RDA), was published in June 2010 and has been undergoing tests at selected libraries. RDA is a departure from its predecessor, the Anglo-American Cataloguing Rules, second edition (AACR2), in that it was designed for the online environment, is more principles-based, and better accommodates formats other than print.

National Knowledge Network (NKN)
The NKN is a state-of-the-art multi-gigabit pan-India network for providing a unified high speed network backbone for all knowledge related institutions in the country. The purpose of such a knowledge network goes to the very core of the country's quest for building quality institutions with requisite research facilities and creating a pool of highly trained professionals. The NKN will enable scientists, researchers and students from different backgrounds and diverse geographies to work closely for advancing human development in critical and emerging areas.

NATIONAL MISSION ON EDUCATION THROUGH INFORMATION AND COMMUNICATION TECHNOLOGY (NMEICT)
Web: http://sakshat.ac.in
NPTEL provides e-learning through online web and video courses in engineering, science and humanities streams. The mission of NPTEL is to enhance the quality of engineering education in the country by providing free online courseware.
Founded by the Ministry of HRD, Government of India.
1. What is NPTEL?
NPTEL is an acronym for National Programme on Technology Enhanced Learning which is an initiative by seven Indian Institutes of Technology (IIT Bombay, Delhi, Guwahati, Kanpur, Kharagpur, Madras and Roorkee) and Indian Institute of Science (IISc) for creating course contents in engineering and science.
NPTEL as a project originated from many deliberations between IITs, Indian Institutes of Management (IIMs) and Carnegie Mellon University (CMU) during the years 1999-2003. A proposal was jointly put forward by five IITs (Bombay, Delhi, Kanpur, Kharagpur and Madras) and IISc for creating contents for 100 courses as web based supplements and 100 complete video courses, for forty hours of duration per course. Web supplements were expected to cover materials that could be delivered in approximately forty hours. Five engineering branches (Civil, Computer Science, Electrical, Electronics and Communication and Mechanical) and core science programmes that all engineering students are required to take in their undergraduate engineering programme in India were chosen initially. Contents for the above courses were based on the model curriculum suggested by All India Council for Technical Education (AICTE) and the syllabi of major affiliating Universities in India.

Digital Library Of The Commons Repository
The Digital Library of the Commons (DLC) is a gateway to the international literature on the commons. The DLC provides free and open access to full-text articles, papers, and dissertations. This site contains an author-submission portal; an Image Database; the Comprehensive Bibliography of the Commons; a Keyword Thesaurus, and links to relevant reference sources on the study of the commons.
The UGC is the apex body of the Govt. of India
After Independence, the University Education Commission was set up in 1948 under the Chairmanship of Dr. S Radhakrishnan "to report on Indian university education and suggest improvements and extensions that might be desirable to suit the present and future needs and aspirations of the country". It recommended that the University Grants Committee be reconstituted on the general model of the University Grants Commission of the United Kingdom with a full-time Chairman and other members to be appointed from amongst educationists of repute.
In 1952, the Union Government decided that all cases pertaining to the allocation of grants-in-aid from public funds to the Central Universities and other Universities and Institutions of higher learning might be referred to the University Grants Commission. Consequently, the University Grants Commission (UGC) was formally inaugurated by late Shri Maulana Abul Kalam Azad, the then Minister of Education, Natural Resources and Scientific Research on 28 December 1953.
The UGC, however, was formally established only in November 1956 as a statutory body of the Government of India through an Act of Parliament for the coordination, determination and maintenance of standards of university education in India. In order to ensure effective region-wise coverage throughout the country, the UGC has decentralised its operations by setting up six regional centres at Pune, Hyderabad, Kolkata, Bhopal, Guwahati and Bangalore. The head office of the UGC is located at Bahadur Shah Zafar Marg in New Delhi, with two additional bureaus operating from 35, Feroze Shah Road and the South Campus of University of Delhi as well.
The UGC has the unique distinction of being the only grant-giving agency in the country which has been vested with two responsibilities: that of providing funds and that of coordination, determination and maintenance of standards in institutions of higher education.
The UGC's mandate includes:
• Promoting and coordinating university education.
• Determining and maintaining standards of teaching, examination and research in universities.
• Framing regulations on minimum standards of education.
• Monitoring developments in the field of collegiate and university education; disbursing grants to the universities and colleges.
• Serving as a vital link between the Union and state governments and institutions of higher learning.
• Advising the Central and State governments on the measures necessary for improvement of university education.
Inter University Centres
NSC
IUCAA
IUC-DAE
CEC
INFLIBNET
IUC-IS
NAAC
National Facilities
Academic Staff Colleges
Information and Library Network INFLIBNET
Information and Library Network Centre (INFLIBNET)
(established as a project in 1991 and incorporated as a Society in 1996)
INFLIBNET
Near Gujarat University Guest House,
Post Box No. 4116,
Navrangpura Ahmedabad - 380 009
An Inter-University Centre of the UGC, INFLIBNET works towards the modernization of libraries, serves as an information centre for the transfer of and access to information, and supports scholarship, learning and academic pursuits through a national network of libraries in around 264 universities, colleges and R&D institutions across the country.

National Assessment and Accreditation Council (NAAC)
National Assessment and Accreditation Council (NAAC)
P.O. Box. No. 1075, Nagarbhavi
Bangalore - 560 072.
National Assessment and Accreditation Council (NAAC) was established by the UGC in September 1994 at Bangalore for evaluating the performance of the Universities and Colleges in the Country. NAAC's mandate includes the task of performance evaluation, assessment and accreditation of universities and colleges in the country. The philosophy of NAAC is based on objective and continuous improvement rather than being punitive or judgmental, so that all institutions of higher learning are empowered to maximize their resources, opportunities and capabilities. Assessment is a performance evaluation of an institution and /or its units and is accomplished through a process based on self-study and peer review using defined criteria. Accreditation refers to the certification given by NAAC which is valid for a period of five years. At present the Assessment and Accreditation by NAAC is done on a voluntary basis.

HISTORICAL BACKGROUND

India has witnessed a slow and steady growth of Library and Information Science (LIS) education. The foundation of LIS education in India dates back to 1911, when W.A. Borden (1853-1931), an American disciple of Melvil Dewey, for the first time started a short-term training programme in library science at Baroda under the patronage of Maharaja Sayajirao III, Gaekwad of Baroda (1862-1939). Four years later, in 1915, another American student of Dewey, Asa Don Dickinson (1876-1960), the then librarian of Punjab University, Lahore (now in Pakistan), started a three-month apprentice training programme for working librarians (Satija, 1993, p.37). Before independence, only five universities (Andhra, Banaras, Bombay, Calcutta and Madras) were offering the
diploma course in library science. After independence, new colleges, universities, educational institutions and
learned societies were emerging and the need for professionally qualified personnel to manage their libraries was realized. As a result, the number of library science schools started to increase. Library associations which exist at various places started providing training courses. Dr S.R. Ranganathan started a certificate course at Madras Library Association in 1929 which was taken over by the University of Madras, and in 1937 the course was converted into Postgraduate (PG) Diploma in Library Science. This was the first diploma programme in Library Science in India. University of Delhi was the first university to establish a full-fledged Department of Library Science just before independence in 1946, and started admitting students to the PG Diploma in
1947. In 1951, the diploma was changed to Master in Library Science (M.Lib.Sc). Later, between 1956 and 1959, six new LIS departments were established (Mangla, 1998, p.287) at Aligarh Muslim University, M.S. University of Baroda, Nagpur University, Osmania University, Pune University and Vikram University. Since the 1960s, the number of LIS departments established has continued to increase.

During this period, several institutions played important roles in the development of LIS education. The University of Madras started the first PG Diploma in library science. The University of Delhi contributed many firsts, such as the starting of the Master in Library Science in 1951, which in 1972, on account of a major course revision, was renamed Master in Library and Information Science (MLIS). The department name was also changed to Department of Library & Information Science. The course on 'Computer Applications in Libraries' was introduced for the first time in the MLIS programme in 1972. The M.Phil programme started in 1978. The first Ph.D. was awarded to D B Krishna Rao in 1957, under the guidance of Dr. Ranganathan. At that time it was the only university in the whole of the British Commonwealth conducting a Ph.D. programme in LIS.

THE PRESENT
Over a period of time, LIS has grown and developed into a full-fledged discipline; courses are imparted by university departments, institutions, library associations and specialized institutions (Appendix 1). Data about these institutions were gathered from published sources (Association of Indian Universities, 2003; Dutta and Das, 2001; Patel and Krishan Kumar, 2001; UGC Model Curriculum, 2001). Tables 1 to 3 (in Appendix 1) show the current status of these courses. Analysis of the data reveals that 85 universities and 32 colleges and institutions affiliated to universities are offering regular courses, whereas 27 universities are conducting these courses through distance education. However, certificate and diploma courses are not taken into account. The number of universities (including distance education) offering LIS programmes is as follows: 120 universities offer a bachelor's degree, 78 offer a master's degree, 21 offer a two-year integrated course, 16 universities offer an M.Phil degree, and 63 offer a Ph.D. degree. In addition, NISCAIR (formerly INSDOC), New Delhi, and DRTC, Bangalore, offer a two-year Associateship in Information Science, which is recognized by some universities as equivalent to a master's degree. India maintains its Third World leadership in library education, research and literature (Satija, 1998, p.21). The University Grants Commission (UGC) and the Indian Council of Social Science Research (ICSSR) are promoting LIS research programmes by awarding scholarships to doctoral students. The National Commission on Science and Technology (NCST), New Delhi, the Raja Rammohun Roy Library Foundation (RRLF), Calcutta, and the ICSSR also provide research grants for non-doctoral research. The Defence Scientific Information and Documentation Centre (DESIDOC), Delhi, provides a Junior Research Fellowship (JRF) in LIS (Dutta and Das, 2001, p.26).

LIS EDUCATION IN INDIA

In India, Library and Information Science (LIS) education began with the introduction of a training course in 1911 in the erstwhile State of Baroda. The real beginning of systematic education in LIS can be traced to the initiatives of Dr. S.R. Ranganathan during the period 1926-1931 at the Madras University Library, in association with the Madras Library Association. The summer school leading to a certificate in library science was continued by Madras University under the stewardship of Dr. S.R. Ranganathan till 1937. Later, Andhra University, Banaras Hindu University, Bombay University, Calcutta University and Delhi University introduced Post-Graduate Diploma courses in Library Science in the years 1935, 1941, 1944, 1946 and 1948 respectively. Apart from these universities, DRTC in Bangalore and NISCAIR in New Delhi started library science education programmes. During 1947, altogether 27 universities were offering diploma courses in Library Science. In 1957, for the first time in the country, Aligarh Muslim University started the B.L.Sc course. The courses were offered at different levels such as Certificate, Diploma, Bachelor's, PG Diploma, Master's and research degree programmes (i.e. M.Phil and Ph.D.) under different modes (regular/on-campus, distance/off-campus, or sometimes both) and schemes (annual or semester).

PLANNING, PROGRAMMING AND BUDGETING SYSTEM MODELS (PPBS)
The PPBS is a formal, systematic structure for making decisions on policy, strategy, and the development of forces and capabilities to accomplish anticipated missions. The PPBS is a cyclic process containing three distinct but interrelated phases.
Introduction: In the 1980s and early 1990s, the PPBS model was in favour in many institutions of higher education. It is based on an intensive planning process that defines all activities within the unit and provides an analysis of the cost effectiveness of those activities.
PPBS is concerned with how resources are to be used to achieve the various objectives of the organization (for example, the care of the elderly). Once the objectives have been established, programmes are identified to meet those objectives and the costs and benefits of alternative programmes are assessed.
The planning, programming and budgeting system (PPBS) is a middle type of budget between the traditional character-and-object budget on the one hand and the performance budget on the other. The major contribution of PPBS lies in the planning process, i.e. the process of making programme policy decisions that lead to a specific budget and specific multi-year plans.
The preferred programmes form, in effect, a long-term plan to be pursued over a number of years; each programme budget will disclose the cost of providing a service to satisfy an objective, broken down into time periods. It therefore informs management in a manner that allows them to make judgments about effectiveness that would not be possible if programmes were fragmented across the departmental budgets concerned.
In the defence context, planning produces the Defense Planning Guidance (DPG); programming produces approved Program Objective Memorandums (POMs) for the Military Departments and Defense Agencies; and budgeting produces a budget in which expenditures are based primarily on programmes of work and secondarily on character and object.
PERT
Complex projects require a series of activities, some of which must be performed sequentially and others that can be performed in parallel with other activities. This collection of series and parallel tasks can be modeled as a network.
In 1957 the Critical Path Method (CPM) was developed as a network model for project management. CPM is a deterministic method that uses a fixed time estimate for each activity. While CPM is easy to understand and use, it does not consider the time variations that can have a great impact on the completion time of a complex project.
The Program Evaluation and Review Technique (PERT) is a network model that allows for randomness in activity completion times. PERT was developed in the late 1950's for the U.S. Navy's Polaris project having thousands of contractors. It has the potential to reduce both the time and cost required to complete a project.
The Network Diagram
In a project, an activity is a task that must be performed and an event is a milestone marking the completion of one or more activities. Before an activity can begin, all of its predecessor activities must be completed. Project network models represent activities and milestones by arcs and nodes. PERT originally was an activity on arc network, in which the activities are represented on the lines and milestones on the nodes. Over time, some people began to use PERT as an activity on node network. For this discussion, we will use the original form of activity on arc.
The PERT chart may have multiple pages with many sub-tasks. The following is a very simple example of a PERT diagram:
PERT Chart

The milestones generally are numbered so that the ending node of an activity has a higher number than the beginning node. Incrementing the numbers by 10 allows for new ones to be inserted without modifying the numbering of the entire diagram. The activities in the above diagram are labeled with letters along with the expected time required to complete the activity.
Steps in the PERT Planning Process
PERT planning involves the following steps:
1. Identify the specific activities and milestones.
2. Determine the proper sequence of the activities.
3. Construct a network diagram.
4. Estimate the time required for each activity.
5. Determine the critical path.
6. Update the PERT chart as the project progresses.

1. Identify Activities and Milestones
The activities are the tasks required to complete the project. The milestones are the events marking the beginning and end of one or more activities. It is helpful to list the tasks in a table that in later steps can be expanded to include information on sequence and duration.
2. Determine Activity Sequence
This step may be combined with the activity identification step since the activity sequence is evident for some tasks. Other tasks may require more analysis to determine the exact order in which they must be performed.
3. Construct the Network Diagram
Using the activity sequence information, a network diagram can be drawn showing the sequence of the serial and parallel activities. For the original activity-on-arc model, the activities are depicted by arrowed lines and milestones are depicted by circles or "bubbles".
If done manually, several drafts may be required to correctly portray the relationships among activities. Software packages simplify this step by automatically converting tabular activity information into a network diagram.
4. Estimate Activity Times
Weeks are a commonly used unit of time for activity completion, but any consistent unit of time can be used.
A distinguishing feature of PERT is its ability to deal with uncertainty in activity completion times. For each activity, the model usually includes three time estimates:
• Optimistic time - generally the shortest time in which the activity can be completed. It is common practice to specify optimistic times to be three standard deviations from the mean so that there is approximately a 1% chance that the activity will be completed within the optimistic time.
• Most likely time - the completion time having the highest probability. Note that this time is different from the expected time.
• Pessimistic time - the longest time that an activity might require. Three standard deviations from the mean is commonly used for the pessimistic time.
PERT assumes a beta probability distribution for the time estimates. For a beta distribution, the expected time for each activity can be approximated using the following weighted average:
Expected time = ( Optimistic + 4 x Most likely + Pessimistic ) / 6
This expected time may be displayed on the network diagram.
To calculate the variance for each activity completion time, if three standard deviation times were selected for the optimistic and pessimistic times, then there are six standard deviations between them, so the variance is given by:
Variance = [ (Pessimistic - Optimistic) / 6 ]^2
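
As a small worked sketch of these two formulas (with invented estimates of 2, 4 and 8 weeks for the optimistic, most likely and pessimistic times of a single activity):

# PERT expected time and variance for one activity (illustrative estimates).
def pert_expected(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_variance(optimistic, pessimistic):
    return ((pessimistic - optimistic) / 6) ** 2

print(pert_expected(2, 4, 8))   # 4.33 weeks
print(pert_variance(2, 8))      # 1.0 (weeks squared)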

5. Determine the Critical Path
The critical path is determined by adding the times for the activities in each sequence and determining the longest path in the project. The critical path determines the total calendar time required for the project. If activities outside the critical path speed up or slow down (within limits), the total project time does not change. The amount of time that a non-critical path activity can be delayed without delaying the project is referred to as slack time.
If the critical path is not immediately obvious, it may be helpful to determine the following four quantities for each activity:
• ES - Earliest Start time
• EF - Earliest Finish time
• LS - Latest Start time
• LF - Latest Finish time
These times are calculated using the expected time for the relevant activities. The earliest start and finish times of each activity are determined by working forward through the network and determining the earliest time at which an activity can start and finish considering its predecessor activities. The latest start and finish times are the latest times that an activity can start and finish without delaying the project. LS and LF are found by working backward through the network. The difference in the latest and earliest finish of each activity is that activity's slack. The critical path then is the path through the network in which none of the activities have slack.
The variance in the project completion time can be calculated by summing the variances in the completion times of the activities in the critical path. Given this variance, one can calculate the probability that the project will be completed by a certain date assuming a normal probability distribution for the critical path. The normal distribution assumption holds if the number of activities in the path is large enough for the central limit theorem to be applied.
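With hypothetical numbers, the probability calculation looks like this; the mean is the sum of the expected times on the critical path and the variance is the sum of those activities' variances.

# Probability of finishing by a due date, assuming a normal distribution
# for the critical-path length (hypothetical figures).
from statistics import NormalDist
from math import sqrt

expected_length = 10.0       # sum of expected times on the critical path (weeks)
path_variance = 1.0 + 0.25   # sum of the critical-path activities' variances

due_date = 12.0              # target completion time (weeks)
prob = NormalDist(mu=expected_length, sigma=sqrt(path_variance)).cdf(due_date)
print(round(prob, 3))        # about 0.963: chance of finishing within 12 weeks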
Since the critical path determines the completion date of the project, the project can be accelerated by adding the resources required to decrease the time for the activities in the critical path. Such a shortening of the project sometimes is referred to as project crashing.
6. Update as Project Progresses
Make adjustments in the PERT chart as the project progresses. As the project unfolds, the estimated times can be replaced with actual times. In cases where there are delays, additional resources may be needed to stay on schedule and the PERT chart may be modified to reflect the new situation.

Benefits of PERT
PERT is useful because it provides the following information:
• Expected project completion time.
• Probability of completion before a specified date.
• The critical path activities that directly impact the completion time.
• The activities that have slack time and that can lend resources to critical path activities.
• Activity start and end dates.

Limitations
The following are some of PERT's weaknesses:
• The activity time estimates are somewhat subjective and depend on judgement. In cases where there is little experience in performing an activity, the numbers may be only a guess. In other cases, if the person or group performing the activity estimates the time there may be bias in the estimate.
• Even if the activity times are well-estimated, PERT assumes a beta distribution for these time estimates, but the actual distribution may be different.
• Even if the beta distribution assumption holds, PERT assumes that the probability distribution of the project completion time is the same as that of the critical path. Because other paths can become the critical path if their associated activities are delayed, PERT consistently underestimates the expected project completion time.
The underestimation of the project completion time due to alternate paths becoming critical is perhaps the most serious of these issues. To overcome this limitation, Monte Carlo simulations can be performed on the network to eliminate this optimistic bias in the expected project completion time.
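As a rough illustration of that Monte Carlo idea, the sketch below samples every activity's duration many times and records the project finish time in each sample. The activity names, the three-point estimates, and the use of a triangular distribution as a simple stand-in for the beta distribution are all assumptions made for the example.

import random

estimates = {                       # (optimistic, most likely, pessimistic)
    "A": (2, 3, 6), "B": (3, 4, 7), "C": (1, 2, 4),
    "D": (4, 5, 9), "E": (1, 1, 2),
}
predecessors = {"A": [], "B": [], "C": ["A"], "D": ["B"], "E": ["C", "D"]}
order = ["A", "B", "C", "D", "E"]   # dependency order

def simulate_once():
    finish = {}
    for a in order:
        start = max((finish[p] for p in predecessors[a]), default=0.0)
        o, m, p = estimates[a]
        finish[a] = start + random.triangular(o, p, m)  # sampled duration
    return max(finish.values())     # completion time of this simulated run

runs = [simulate_once() for _ in range(10_000)]
print(sum(runs) / len(runs))        # simulated mean project completion time

Because each run can have a different longest path, the simulated mean is typically somewhat higher than the single-path PERT estimate, which is exactly the optimistic bias being corrected.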
CPM - Critical Path Method

In 1957, DuPont developed a project management method designed to address the challenge of shutting down chemical plants for maintenance and then restarting the plants once the maintenance had been completed. Given the complexity of the process, they developed the Critical Path Method (CPM) for managing such projects.
CPM provides the following benefits:
• Provides a graphical view of the project.
• Predicts the time required to complete the project.
• Shows which activities are critical to maintaining the schedule and which are not.
CPM models the activities and events of a project as a network. Activities are depicted as nodes on the network and events that signify the beginning or ending of activities are depicted as arcs or lines between the nodes. The following is an example of a CPM network diagram:
[Figure: example CPM network diagram]

Steps in CPM Project Planning
1. Specify the individual activities.
2. Determine the sequence of those activities.
3. Draw a network diagram.
4. Estimate the completion time for each activity.
5. Identify the critical path (the longest path through the network).
6. Update the CPM diagram as the project progresses.
1. Specify the Individual Activities
From the work breakdown structure, a listing can be made of all the activities in the project. This listing can be used as the basis for adding sequence and duration information in later steps.
2. Determine the Sequence of the Activities
Some activities are dependent on the completion of others. A listing of the immediate predecessors of each activity is useful for constructing the CPM network diagram.
3. Draw the Network Diagram
Once the activities and their sequencing have been defined, the CPM diagram can be drawn. CPM originally was developed as an activity on node (AON) network, but some project planners prefer to specify the activities on the arcs.
4. Estimate Activity Completion Time
The time required to complete each activity can be estimated using past experience or the estimates of knowledgeable persons. CPM is a deterministic model that does not take into account variation in the completion time, so only one number is used for an activity's time estimate.
5. Identify the Critical Path
The critical path is the longest-duration path through the network. The significance of the critical path is that the activities that lie on it cannot be delayed without delaying the project. Because of its impact on the entire project, critical path analysis is an important aspect of project planning.
The critical path can be identified by determining the following four parameters for each activity:
• ES - earliest start time: the earliest time at which the activity can start given that its precedent activities must be completed first.
• EF - earliest finish time, equal to the earliest start time for the activity plus the time required to complete the activity.
• LF - latest finish time: the latest time at which the activity can be completed without delaying the project.
• LS - latest start time, equal to the latest finish time minus the time required to complete the activity.
The slack time for an activity is the time between its earliest and latest start time, or between its earliest and latest finish time. Slack is the amount of time that an activity can be delayed past its earliest start or earliest finish without delaying the project.
The critical path is the path through the project network in which none of the activities have slack, that is, the path for which ES=LS and EF=LF for all activities in the path. A delay in the critical path delays the project. Similarly, to accelerate the project it is necessary to reduce the total time required for the activities in the critical path.
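A minimal, self-contained sketch of that ES=LS test, using hypothetical activities and single (deterministic) durations as CPM requires:

duration = {"Design": 4, "Build": 6, "Document": 3, "Test": 2}
predecessors = {"Design": [], "Build": ["Design"],
                "Document": ["Design"], "Test": ["Build", "Document"]}
order = ["Design", "Build", "Document", "Test"]   # dependency order

ES, EF = {}, {}
for a in order:                                   # forward pass
    ES[a] = max((EF[p] for p in predecessors[a]), default=0)
    EF[a] = ES[a] + duration[a]

finish = max(EF.values())                         # 12: project duration
successors = {a: [b for b in order if a in predecessors[b]] for a in order}
LF, LS = {}, {}
for a in reversed(order):                         # backward pass
    LF[a] = min((LS[s] for s in successors[a]), default=finish)
    LS[a] = LF[a] - duration[a]

critical = [a for a in order if ES[a] == LS[a]]   # zero-slack activities
print(finish, critical)   # 12 ['Design', 'Build', 'Test']; 'Document' has 3 units of slack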
6. Update CPM Diagram
As the project progresses, the actual task completion times will be known and the network diagram can be updated to include this information. A new critical path may emerge, and structural changes may be made in the network if project requirements change.
CPM Limitations
CPM was developed for complex but fairly routine projects with minimal uncertainty in the project completion times. For less routine projects there is more uncertainty in the completion times, and this uncertainty limits the usefulness of the deterministic CPM model. An alternative to CPM is the PERT project planning model, which allows a range of durations to be specified for each activity

International Federation of Library Associations and Institutions (IFLA). President, 2009 to 2011: Ellen Tise (South Africa)
The International Federation of Library Associations and Institutions (IFLA) is the leading international body representing the interests of library and information services and their users. It is the global voice of the library and information profession.
IFLA was founded in Edinburgh, Scotland, at an international conference in 1927, and we celebrated our 75th birthday at our conference in Glasgow, Scotland, in 2002. We now have over 1600 Members in approximately 150 countries around the world. IFLA was registered in the Netherlands in 1971. The Royal Library, the national library of the Netherlands, in The Hague, generously provides the facilities for our headquarters.
Aims
IFLA is an independent, international, non-governmental, not-for-profit organization. Our aims are to:
• Promote high standards of provision and delivery of library and information services
• Encourage widespread understanding of the value of good library & information services
• Represent the interests of our members throughout the world.
Core Values
In pursuing these aims IFLA embraces the following core values:
1. the endorsement of the principles of freedom of access to information, ideas and works of imagination and freedom of expression embodied in Article 19 of the Universal Declaration of Human Rights
2. the belief that people, communities and organizations need universal and equitable access to information, ideas and works of imagination for their social, educational, cultural, democratic and economic well-being
3. the conviction that delivery of high quality library and information services helps guarantee that access
4. the commitment to enable all Members of the Federation to engage in, and benefit from, its activities without regard to citizenship, disability, ethnic origin, gender, geographical location, language, political philosophy, race or religion.
Membership
We have two main categories of voting members: Association Members and Institutional Members. Associations of library and information professionals, of library and information services and of educational and research institutes, within the broad field of library and information science, are all welcome as Association Members. Institutional Membership is designed for individual library and information services, and all kinds of organizations in the library and information sector. International organizations within our sphere of interest may join as International Association Members.
National Association Members, International Association Members and Institutional Members have voting rights in elections and meetings. They are entitled to nominate candidates for the post of IFLA President. Individual practitioners in the field of library and information science may join as Personal Affiliates. They do not have voting rights, but they provide invaluable contributions to the work of IFLA, by serving on committees and contributing to professional programmes.
More information on IFLA Membership and joining IFLA can be found on the IFLA website.
Corporate Partners
More than 25 corporations in the information industry have formed a working relationship with IFLA under our Corporate Partners scheme. In return for financial and 'in kind' support they receive a range of benefits including opportunities to present their products and services to our worldwide membership.
Relations with Other Bodies
We have established good working relations with a variety of other bodies with similar interests, providing an opportunity for a regular exchange of information and views on issues of mutual concern. We have Formal Associate Relations with UNESCO, observer status with the United Nations, associate status with the International Council of Scientific Unions (ICSU) and observer status with the World Intellectual Property Organization (WIPO) and the International Organization for Standardization (ISO). In 1999, we established observer status with the World Trade Organization (WTO).
In turn, we have offered consultative status to a number of non-governmental organizations operating in related fields, including the International Publishers Association (IPA). We are members, along with the International Council on Archives (ICA), International Council of Museums (ICOM) and the International Council on Monuments and Sites (ICOMOS), of the International Committee of the Blue Shield (ICBS). The mission of ICBS is to collect and disseminate information and to co-ordinate action in situations when cultural property is at risk.
World Library and Information Congress: IFLA General Conference and Assembly
Our conference is held in August or early September in a different city each year. More than three thousand delegates meet to exchange experience, debate professional issues, see the latest products of the information industry, conduct the business of IFLA and experience something of the culture of the host country.
More information on the current conference and on past and future conferences can be found on the IFLA website.
Regional Meetings
A range of professional meetings, seminars and workshops are held around the world by our professional groups and Core Activities. Use the IFLA website and IFLA Journal to find out what is going on when and where.
Governance
The governing structure of IFLA was revised, and the new structure came into force in 2008. The revision was necessary in order to reflect the opportunities presented by our increasingly global membership and the greater ease of worldwide communications. For a description of the governing structure, please consult the IFLA Statutes.
Assembly
The General Assembly of Members is the supreme governing body, consisting of delegates of voting Members. It normally meets every year during the annual conference. It elects the President and members of the Governing Board. It also considers general and professional resolutions which, if approved, are usually passed to the Executive Committee and the Professional Committee for action as appropriate.
Governing Board
The Governing Board is responsible for the managerial and professional direction of IFLA within guidelines approved by the Assembly. The Board consists of the President, the President-elect, 10 directly elected Members (by postal and/or electronic ballot, every 2 years) and 6 indirectly elected members of the Professional Committee (by the professional groups through the sections and divisions, and the Chair of the Management of Library Associations Section); up to 3 Members may be co-opted.
The Governing Board meets at least twice per year, once at the time and place of the annual World Library and Information Congress.
Executive Committee
The Executive Committee has executive responsibility delegated by the Governing Board to oversee the direction of IFLA between meetings of this Board within the policies established by the Board. The Committee consists of the President, President-elect, the Treasurer, the Chair of the Professional Committee, 2 members of the Governing Board, elected every 2 years by members of the Board from among its elected members, and IFLA's Secretary General, ex-officio.
Professional Committee
It is the duty of the Professional Committee to ensure coordination of the work of all the IFLA units responsible for professional activities, policies and programmes. The Committee consists of a chair, elected by the outgoing Committee, the chair of each of IFLA's 5 Divisions plus 2 members of the Governing Board, elected by that Board from among its members, the President-elect, and the Chairs of the FAIFE and CLM committees; an additional member may be co-opted.
The Professional Committee meets at least twice per year, once at the time and place of the annual IFLA General Conference.
Core Activities
Issues common to library and information services around the world are the concern of the IFLA Core Activities. Directed by the Professional Committee, the objectives and projects of the Core Activities relate to the Federation's Programme and the priorities of the Divisions and Sections. ALP (Action for Development through Libraries Programme) has a very wide scope, concentrating on the broad range of concerns specific to the developing world. The others cover current, internationally important issues: Preservation and Conservation (PAC), the IFLA-CDNL Alliance for Digital Strategies (ICADS) and IFLA UNIMARC. Core Activities are each managed by a Director, who reports to the Professional Committee and Governing Board. IFLA is grateful to the following libraries and their librarians for generously hosting these Core Activities: Bibliothèque Nationale de France (PAC), Biblioteca Nacional, Portugal (UNIMARC) and the British Library, United Kingdom (ICADS).
The Action for Development through Libraries Programme (ALP), Free Access to Information and Freedom of Expression (FAIFE), and Committee on Copyright and other Legal Matters (CLM) Core Activities, committees and programmes are managed by the IFLA Senior Policy Advisor. These committees report to the Governing Board.
Divisions and Sections
Sections are the primary focus for the Federation's work in a particular type of library and information service, in an aspect of library and information science or in a region. All IFLA Members are entitled to register for Sections of their choice. Once registered, voting Members have the right to nominate specialists for the Standing Committee of the Sections for which they are registered. The Standing Committee is the key group of professionals who develop and monitor the programme of the Section. Sections are grouped into five Divisions.
Regional Activities
Three Regional Sections (Africa, Asia and Oceania, and Latin America and the Caribbean) make up the Division of Regional Activities (Division 5). They are concerned with all aspects of library and information services in their regions. They promote IFLA activities and work closely with the IFLA Regional Offices, located in Pretoria, South Africa; Singapore and Rio de Janeiro, Brazil.
Special Interest Groups
Special Interest Groups may be set up, on a temporary and informal basis, to enable groups of Members to discuss specific professional, or social and cultural, issues relating to the profession. Discussion Groups may be established for two years, renewable once, and must be sponsored by a Section.
Publications
The results of the programmes developed by IFLA's professional groups are recorded and disseminated in our publications.
• IFLA Journal is published four times a year. Each issue covers news of current IFLA activities and articles, selected to reflect the variety of the international information profession, on topics ranging from freedom of information and preservation to services to the visually impaired and intellectual property.
• The Annual Report records IFLA's achievements during the previous year.
• The IFLA publications series, published by IFLA's publisher, De Gruyter Saur in Berlin, Germany, includes such titles as: Digital Library Futures: User perspectives and institutional strategies, Strategies for Regenerating the Library and Information Profession, and the 3rd edition of The World Guide to Library, Archive and Information Science Associations.
• The IFLA Professional Reports series features reports of professional meetings and guidelines to best practice. Recent reports include: Using research to promote literacy and reading in libraries: Guidelines for librarians, Mobile Library Guidelines, and International Resource Book for Libraries Serving Disadvantaged Persons: 2001-2008.
Resources
Many librarians and information professionals throughout the world, who contribute their time, expertise and financial resources, make our achievements possible. Approximately 60% of our income is derived from membership fees.
Other sources of income include sales of publications, contributions in cash and in kind from our corporate partners, and grants from foundations and government agencies.
Our Core Activities programme is supported by grants from international funding agencies and by generous donations and in-kind contributions from national and university libraries and national associations.

OCLC in the Middle East
6565 Kilgour Place
Dublin, Ohio 43017 USA
About OCLC
Connecting people to knowledge through library cooperation
OCLC is a worldwide library cooperative, owned, governed and sustained by members since 1967. Our public purpose is a statement of commitment to each other—that we will work together to improve access to the information held in libraries around the globe, and find ways to reduce costs for libraries through collaboration.
Our public purpose is to establish, maintain and operate a computerized library network and to promote the evolution of library use, of libraries themselves and of librarianship, and to provide processes and products for the benefit of library users and libraries, including such objectives as increasing availability of library resources to individual library patrons and reducing the rate-of-rise of library per-unit costs, all for the fundamental public purpose of furthering ease of access to and use of the ever-expanding body of worldwide scientific, literary and educational knowledge and information.
The Dewey Decimal Classification (DDC)
The DDC system is the world's most widely used library classification system. The 22nd edition was published in 2003; the 23rd edition (released in mid-2011) enhances the efficiency and accuracy of your classification work in ways no previous edition has.
You can use the DDC in several convenient formats. The four-volume print edition includes thousands of updates added to the system over the past seven years. The electronic version, WebDewey, enhances the print updates with online delivery that is updated continuously. And the Abridged Edition 14, also available in print and online, is a simplified version perfect for smaller collections. Whether you choose the print or electronic format (or both), DDC 23 makes it easier than ever to organize your library collections.
Sears List of Subject Headings is also available on WilsonWeb, with the added advantage of regular updates and the versatility of the WilsonWeb interface.
Sears List of Subject Headings®, 20th Edition
Delivering a core list of key headings, together with patterns and examples to guide the cataloger in creating further headings as required, Sears List of Subject Headings has been the standard thesaurus of subject terminology for small and medium-sized libraries since 1923.

An example entry from the Sears List (UF = Used For; BT = Broader Term; NT = Narrower Term):

Indian drama  891.4
    UF  East Indian drama
        Indic drama
    BT  Drama
        Indian literature
    NT  Hindi drama
        Indian drama (English)

Jay Jordan, President and Chief Executive Officer, OCLC

OCLC is a nonprofit, membership, computer library service and research organization dedicated to the public purposes of furthering access to the world’s information and reducing information costs. More than 72,000 libraries in 170 countries and territories around the world have used OCLC services to locate, acquire, catalog, lend and preserve library materials.
A unique cooperative venture
In 1967, a small group founded the Ohio College Library Center, which later became OCLC. They began with the idea of combining computer technology with library cooperation to reduce costs and improve services through shared, online cataloging.
The shared cataloging service is among the busiest in the world, enabling libraries each year to catalog more than 235 million items. Cooperative advances have expanded to help libraries better manage workflows, collection management, reference services, resource sharing and digital materials. And tomorrow, new Web-scale services will amplify library cooperation even further.
Libraries and OCLC will continue to find innovative ways to reinforce traditional values of library cooperation, working together for the common good. It is this approach that has kept our unique, member-owned and member-managed enterprise viable for more than 40 years.
WorldCat is a global network of library-management and user-facing services built upon cooperatively-maintained databases of bibliographic and institutional metadata. WorldCat enhances productivity across the full range of library workflows—from cataloging to resource sharing to discovery and delivery—by intelligently reusing contributed data, and makes library resources more visible on the Internet by distributing data across a growing number of partner services and Web technologies.
WorldCat, the record of human knowledge the community has built, contains the world’s great library collections merged electronically into a database that can be tailored for and linked to local, regional and global levels. It contains records that represent more than 470 languages, and more than half of the records represent materials that are non-English. The OCLC system supports 12 language scripts. A database synchronization program enables national libraries to automatically keep their union catalogs in synch with WorldCat. And new batchloading capabilities and metadata harvesting tools enable libraries to share and expose their resources much more quickly and efficiently.
Mission “Connecting people to knowledge through library cooperation”
Vision “The world's libraries. Connected”
Quality policy
OCLC will continually improve the processes used to deliver its products and services to achieve the OCLC Vision
Products:
• WorldCat
• DDC, 23rd edition (2011, 4 volumes)
• MARC format cataloging
• Z39.50 standard for search and retrieval (used in copy cataloging)

OCLC Publications
NextSpace is OCLC’s magazine for our members and information managers. NextSpace analyzes industry trends and technology developments as well as feature news about OCLC. Our goal is to help you stay informed and make key decisions. View back issues of NextSpace. (NextSpace replaced the OCLC Newsletter.)
Regional OCLC newsletters provide focused, local information about what's happening in libraries in different parts of the world. From interviews and feature stories to training schedules and special offers, regional newsletters keep you informed about what's happening near you.
The OCLC Annual Report is much more than a presentation of the previous fiscal year’s product introductions, service levels and member activities. It also includes snapshots of member libraries worldwide, the President’s message to the cooperative, information about the Board of Trustees and various management groups, and other information that gives a broad perspective of the organization. View recent editions of the Annual Report.
OCLC Financial Statements provide consolidated balance sheets and related statements of revenues, expenses and corporate equity and of cash flows. View recent OCLC Financial Statements.
OCLC Reports communicate the findings or results of OCLC initiatives. These in-depth studies and topical surveys will help you understand the issues and trends that affect librarianship. View OCLC Reports.
Products and services:
Cataloging and Metadata
Dewey® Services
Dewey Decimal Classification for use with OCLC's online cataloging services
Batchload
Automatically add, change and cancel holdings information on large numbers of catalog records
Connexion
A full-service online cataloging tool

Content and Collections
CAMIO
Catalog of Art Museum Images Online
FirstSearch
Online reference

Digital Collection Services
CONTENTdm
Digital Collection Management Software
Digital Archive
Secure, managed storage for digital preservation

Web-scale Management Services
The first cooperative management service for libraries
License Manager
Manage your licensed and electronic resources
CBS
Metadata management solution
LBS
From cataloging to ordering and from request to return, in an integrated local library management system.
WorldCat Collection Analysis
Resource evaluation, comparison and planning

Resource Sharing and Delivery
VDX
A resource sharing option for library groups
WorldCat Resource Sharing
Borrow and lend materials with libraries down the road or around the world

Web Services
WorldCat Search API
WorldCat access through library Web sites

Replies to This Forum

Reallyyyy

superb

Thank you, sir, for sharing this.

Thank you, sir, for providing the link to the answers for the Dec. 2011 question paper.

It's nice.

Superb

thanks

Thanks

You are a true follower of S.R. Ranganathan. The information provided satisfies four of the five laws... Keep this sharing alive and up.

Thank you very much, sir.

Thank you very much, sir.

Thank you, sir, for sharing this important material.
Thanks a lot again.

Thanks a lot.
