Internet and World Wide Web - Knowledge Hypermarket

What is the World Wide Web?

A web, or "web", is a collection of interconnected pages of specific information. Each such page can contain text, images, video, audio and various other objects. But apart from this, there are so-called hyperlinks on the web pages. Each such link points to a different page, which is located on some other computer on the Internet.

Information resources that are interconnected by means of telecommunications and based on a hypertext representation of data form the World Wide Web (WWW).

Hyperlinks connect pages stored on computers in different parts of the globe. The vast number of computers united in one network is the Internet, while the "World Wide Web" is the vast collection of web pages hosted on the computers in that network.

Every web page on the Internet has an address, a URL (Uniform Resource Locator). It is by this address that any page can be found.

How was the World Wide Web created?

On March 12, 1989, Tim Berners-Lee presented to the CERN leadership a project for a unified system of organizing, storing and providing shared access to information, which was intended to solve the problem of exchanging knowledge and experience between the Center's employees. Berners-Lee proposed to solve the problem of access to information on employees' different computers with the help of browser programs that provide access to a server computer where hypertext information is stored. After the successful implementation of the project, Berners-Lee was able to convince the rest of the world to use uniform standards for Internet communication, based on the hypertext transfer protocol (HTTP) and the hypertext markup language (HTML).

It should be noted that Tim Berners-Lee was not the first creator of the Internet. The first system of protocols for transferring data between networked computers was developed by Vinton Cerf and Robert Kahn, employees of the United States Defense Advanced Research Projects Agency (DARPA), in the late 1960s and early 1970s. Berners-Lee only proposed using the capabilities of computer networks to create a new system for organizing information and accessing it.

What was the prototype of the World Wide Web?

Back in the 1960s, the US Department of Defense set the task of developing a reliable system for transmitting information in case of war. The US Advanced Research Projects Agency (ARPA) proposed developing a computer network for this purpose. It was named ARPANET (Advanced Research Projects Agency Network). The project brought together four academic institutions: the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah. All work was funded by the US Department of Defense.

The first data transmission over a computer network took place in 1969. A professor at the Los Angeles university and his students tried to log in to a Stanford computer and transmit the word "login". Only the first two letters, L and O, were transmitted successfully; when they typed the letter G, the communication system failed. Nevertheless, the Internet revolution had begun.

By 1971, a network of 23 users had been established in the United States, and the first program for sending e-mail over the network was developed. In 1973, University College London and government agencies in Norway joined, and the network became international. In 1977 the number of Internet users reached 100, in 1984 it reached 1,000, in 1986 there were already more than 5,000, and in 1989 more than 100,000. In 1991 the World Wide Web (WWW) project was implemented at CERN. In 1997 there were already 19.5 million Internet users.

Some sources give the date of the World Wide Web's appearance as one day later, March 13, 1989.

World Wide Web (WWW)

The World Wide Web is a distributed system that provides access to interconnected documents located on various computers connected to the Internet. The words "the Web" (from the English web, "spider web") and the abbreviation WWW are also used to refer to the World Wide Web. The World Wide Web is the world's largest multilingual electronic repository of information: tens of millions of interconnected documents located on computers around the globe. It is considered the most popular and interesting service on the Internet, allowing access to information regardless of its location. To find out the news, learn something or simply have fun, people watch television, listen to the radio, and read newspapers, magazines and books. The World Wide Web also offers its users radio broadcasts, video, the press and books, with the difference that all of this can be obtained without leaving home. It does not matter in what form the information you are interested in is presented (a text document, a photograph, a video or a sound fragment) or where it is located geographically (in Russia, Australia or the Ivory Coast): you will receive it on your computer within minutes.

The World Wide Web is formed by hundreds of millions of web servers. Most of the resources of the World Wide Web are hypertext. Hypertext documents posted on the World Wide Web are called web pages. Several web pages united by a common theme and design, linked to one another, and usually located on the same web server are called a website. To download and view web pages, special programs called browsers are used. The World Wide Web has caused a real revolution in information technology and a boom in the development of the Internet. Often, when people talk about the Internet, they mean the World Wide Web, but it is important to understand that they are not the same thing.

History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the author of the HTTP, URI/URL and HTML technologies. In 1980 he worked at the European Council for Nuclear Research (French: Conseil Européen pour la Recherche Nucléaire, CERN) as a software consultant. It was there, in Geneva, Switzerland, that he wrote the Enquire program for his own needs; it used random associations to store data and laid the conceptual foundation for the World Wide Web.

In 1989, while working at CERN on the organization's internal network, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project involved the publication of hypertext documents linked by hyperlinks, which would make it easier for CERN scientists to find and consolidate information. To carry out the project, Tim Berners-Lee (together with his assistants) invented URIs, the HTTP protocol and the HTML language - technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993 Berners-Lee refined the technical specifications of these standards and published them. Nevertheless, 1989 is officially considered the year the World Wide Web was born.

As part of the project, Berners-Lee wrote the world's first httpd web server and the world's first hypertext web browser called WorldWideWeb. This browser was simultaneously a WYSIWYG editor (abbreviated from the English What You See Is What You Get - what you see is what you get), its development was started in October 1990, and finished in December of the same year. The program ran in the NeXTStep environment and began to spread over the Internet in the summer of 1991.

The world's first website was hosted by Berners-Lee on August 6, 1991, on the first web server available at http://info.cern.ch/. The resource defined the concept of the World Wide Web, contained instructions for setting up a web server, using a browser, etc. This site was also the world's first Internet directory, because later Tim Berners-Lee posted and maintained a list of links to other sites there.

Since 1994, the main work on the development of the World Wide Web has been carried out by the World Wide Web Consortium (W3C), founded and still headed by Tim Berners-Lee. This consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is: "To unleash the full potential of the World Wide Web by creating protocols and principles to ensure the long-term development of the Web." Two other major tasks of the consortium are to ensure the full "internationalization of the Web" and to make the Web accessible to people with disabilities.

The W3C develops uniform principles and standards for the Internet (called "W3C Recommendations"), which are then implemented by software and hardware manufacturers. Thus, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more perfect, universal and convenient. All recommendations of the World Wide Web consortium are open, that is, they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

The structure and principles of the World Wide Web

The World Wide Web is formed by millions of Internet web servers located around the world. A web server is a program that runs on a computer connected to the network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard disk and sends it over the network to the requesting computer. More sophisticated web servers are capable of dynamically generating documents using templates and scripts in response to an HTTP request.
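For illustration, here is a minimal sketch of such a server in Python, using only the standard library; the port number and the directory with HTML files are, of course, arbitrary:

```python
# A minimal static web server: it accepts HTTP GET requests and returns
# files from the given directory, as described above.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="./site")  # folder with HTML files
server = HTTPServer(("0.0.0.0", 8000), handler)                   # listen on port 8000

if __name__ == "__main__":
    print("Serving ./site at http://localhost:8000/")
    server.serve_forever()
```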

To view the information received from the web server, a special program is used on the client computer - a web browser. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked to the concepts of hypertext and hyperlinks. Most of the information on the Web is precisely hypertext.

To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language) is traditionally used. The work of creating (marking up) hypertext documents is called layout; it is done by a webmaster or by a separate markup specialist, a layout designer. After HTML markup the resulting document is saved to a file, and such HTML files are the main type of resource on the World Wide Web. Once an HTML file becomes available to a web server, it is referred to as a "web page". A collection of web pages forms a website.
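As a simple illustration, the following sketch creates such an HTML file; the file name and the page content are made up:

```python
# Creating the simplest HTML document and saving it to a file.
# As soon as a web server can serve this file, it becomes a "web page".
page = """<!DOCTYPE html>
<html>
  <head><title>Example page</title></head>
  <body>
    <h1>Hello, World Wide Web</h1>
    <p>A paragraph with a <a href="https://www.w3.org/">hyperlink</a>.</p>
  </body>
</html>
"""

with open("index.html", "w", encoding="utf-8") as f:
    f.write(page)
```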

The hypertext of web pages contains hyperlinks. Hyperlinks help users of the World Wide Web to navigate easily between resources (files), regardless of whether the resources are located on the local computer or on a remote server. Uniform Resource Locators (URLs) are used to locate resources on the World Wide Web. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Home_page. Such URL locators combine the URI identification technology (Uniform Resource Identifier) and the Domain Name System (DNS). The domain name (in this case, ru.wikipedia.org) in the URL designates the computer (more precisely, one of its network interfaces) that executes the code of the required web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers prefer to show only the domain name of the current site.
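The structure of a URL is easy to see programmatically. A small sketch with the Python standard library, using the example URL above:

```python
# Splitting a URL into its components.
from urllib.parse import urlsplit

parts = urlsplit("http://ru.wikipedia.org/wiki/Home_page")

print(parts.scheme)    # 'http'             - the protocol to be used
print(parts.hostname)  # 'ru.wikipedia.org' - the domain name, resolved through DNS
print(parts.path)      # '/wiki/Home_page'  - the resource on that web server
```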

World Wide Web Technologies

To improve the visual perception of the web, CSS technology has become widely used, which allows you to set uniform styles for multiple web pages. Another innovation worth paying attention to is the URN (Uniform Resource Name) resource designation system.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web designed to make information posted on the network more understandable to computers. It is the concept of a web in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web provides access to well-structured information for any application, regardless of platform and programming language. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on those conclusions. If widely adopted and properly implemented, the Semantic Web could revolutionize the Internet. To create computer-understandable descriptions of resources, the Semantic Web uses the Resource Description Framework (RDF) format, which is based on XML syntax and uses URIs to denote resources. New developments in this area are RDFS (RDF Schema) and SPARQL (SPARQL Protocol and RDF Query Language, pronounced "sparkle"), a new query language for fast access to RDF data.
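To make the idea more concrete, here is a small sketch of an RDF description and a SPARQL query over it. It assumes the third-party Python package rdflib is installed; the resources and properties in it are purely illustrative:

```python
# A tiny RDF graph (in Turtle syntax) and a SPARQL query over it.
# Requires the rdflib package; the data below is made up for illustration.
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:page1 ex:author "Tim Berners-Lee" ;
         ex:topic  "World Wide Web" .
ex:page2 ex:author "Robert Cailliau" ;
         ex:topic  "Hypertext" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

query = """
SELECT ?page ?author WHERE {
    ?page <http://example.org/topic>  "World Wide Web" ;
          <http://example.org/author> ?author .
}
"""

for row in g.query(query):
    print(row.page, row.author)   # which pages are about the World Wide Web, and by whom
```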

Basic terms of the World Wide Web

Working with the browser

Today, ten years after the invention of the HTTP protocol that forms the basis of the World Wide Web, the browser is a highly sophisticated piece of software that combines ease of use with a wealth of features.
The browser not only opens up the world of the World Wide Web's hypertext resources to the user. It can also work with other Internet services such as FTP, Gopher and WAIS. Along with the browser, a program for using the e-mail and news services is usually installed on the computer. In essence, the browser is the main program for accessing Internet services. Through it you can access almost any Internet service, even if the browser does not support that service directly. For this purpose, specially programmed web servers are used that connect the World Wide Web with the given service. An example of this kind of web server is the numerous free mail servers with a web interface (see http://www.mail.ru).
Today there are many browser programs created by various companies. The most widespread and recognized are browsers such as Netscape Navigator and Internet Explorer. These browsers are each other's main competitors, although the programs are similar in many ways. This is understandable, since they work to the same standards - the standards of the Internet.
Working with the browser begins with the user typing the URL of the resource he wants to access into the address bar and pressing the Enter key.

The browser sends a request to the specified web server. As the elements of the requested web page arrive from the server, the page gradually appears in the browser's working window. The process of receiving page elements from the server is displayed in the browser's status bar at the bottom of the window.

Text hyperlinks in the resulting web page are usually highlighted in a color different from the rest of the document's text and underlined. Links pointing to resources the user has not yet viewed and links to resources already visited usually have different colors. Images can also serve as hyperlinks. Regardless of whether a link is textual or graphic, the mouse cursor changes shape when you move it over the link, and the address the link points to appears in the browser's status bar.

When you click on a hyperlink, the browser opens in its working window the resource the link points to, unloading the previous resource. The browser keeps a list of viewed pages, and the user can, if necessary, go back along the chain of viewed pages. To do this, click the "Back" button in the browser menu, and the browser returns to the page you were viewing before you opened the current document.
Each time you click this button, the browser goes back one document in the list of visited documents. If you have gone back too far, use the browser's "Forward" button; it moves you forward through the list of documents.
The "Stop" button will stop loading the document. The "Reload" button allows you to reload the current document from the server.
The browser can display only one document in its window: to display another document, it unloads the previous one. It is much more convenient to work in several browser windows at the same time. A new window can be opened using the menu: File - New - Window (or by pressing Ctrl + N).

Working with a document

The browser allows you to perform a set of standard operations on a document. The web page loaded into it can be printed (in Internet Explorer this is done with the "Print" button or from the menu: File - Print...) or saved to disk (menu: File - Save As...). You can also search the loaded page for a piece of text you are interested in; to do this, use the menu: Edit - Find on this page.... And if you are interested in how the document looks in the original hypertext processed by the browser, select from the menu: View - As HTML.
When, in the process of surfing the Internet, a user finds a page of particular interest to him, he uses the ability provided in browsers to set bookmarks (by analogy with bookmarks marking interesting places in a book).
This is done through the menu: Favorites - Add to favorites. After that, the new bookmark appears in the list of bookmarks, which can be viewed by clicking the "Favorites" button on the browser panel or through the Favorites menu.
Existing bookmarks can be deleted, changed, organized into folders using the menu: Favorites - Organize favorites.

Work through a proxy server

Netscape Navigator and Microsoft Internet Explorer also provide a mechanism for embedding additional capabilities created by independent manufacturers. Modules that extend the capabilities of the browser are called plug-ins.
Browsers run on computers with a wide variety of operating systems. This provides grounds for speaking of the World Wide Web's independence from the type of computer and operating system the user works with.

Finding information on the Internet

Recently, the World Wide Web has come to be seen as a new, powerful mass medium whose audience is the most active and educated part of the world's population. This view matches the real state of affairs. On days of significant events and upheavals, the load on news sites increases dramatically, and in response to reader demand resources dedicated to an event that has just occurred appear almost instantly. Thus, during the August 1998 crisis, news appeared on the CNN website (http://www.cnn.com) well before the Russian media reported it. At the same time, the server of RIA RosBusinessConsulting (http://www.rbc.ru) became widely known, providing fresh information from the financial markets and the latest news. Many Americans followed the impeachment vote on US President Bill Clinton on the Web rather than on television. The development of the war in Yugoslavia was likewise instantly reflected in a variety of publications presenting very different points of view on the conflict.
Many people who know the Internet mostly by hearsay believe that any information can be found there. This is true in the sense that you can come across resources most unexpected in form and content. Indeed, the modern Web can offer its user a great deal of information of very different profiles: you can catch up on the news, spend time pleasantly, and get access to a wide range of reference, encyclopedic and educational information. It must be emphasized, however, that although the overall informational value of the Internet is very high, the information space itself is not homogeneous in quality, since resources are often created in a hurry. When a paper publication is prepared, its text is usually read by several reviewers and corrections are made; on the Web this stage of the publishing process is usually absent. So, in general, information gleaned from the Internet should be treated with somewhat more caution than information found in a printed publication.
However, the abundance of information has a downside: as the amount of information grows, it becomes harder and harder to find the information needed at a given moment. Therefore, the main problem when working with the Network is finding the required information quickly, making sense of it, and assessing the informational value of a given resource for one's own purposes.

To solve the problem of finding the necessary information on the Internet, there is a separate type of network service: search engines, or search servers.
Search engines are numerous and varied. It is customary to distinguish between search indexes and directories.
Index servers work as follows: they regularly read the content of most web pages on the Web ("indexing" them) and place it, in whole or in part, in a common database. Users of the search engine can query this database using keywords related to the topic of interest. The search results usually consist of excerpts from the recommended pages and their addresses (URLs), formatted as hyperlinks. Working with search engines of this type is convenient if you have a clear idea of the subject of your search.
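The core of such a server is an inverted index: a table that maps each word to the pages that contain it. A toy sketch in Python (the "crawled" pages are made up):

```python
# A toy inverted index: each word points to the set of page URLs containing it.
from collections import defaultdict

pages = {  # illustrative "crawled" pages
    "http://example.org/a": "buy a new computer today",
    "http://example.org/b": "computer modifications and characteristics",
    "http://example.org/c": "washing machine modifications",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query: str) -> set:
    """Return the pages that contain every keyword of the query."""
    sets = [index[word] for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(search("computer modifications"))  # -> {'http://example.org/b'}
```
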
Directory servers, in essence, offer a multi-level classification of links built on the principle of "from the general to the specific." Sometimes links are accompanied by a short description of the resource. As a rule, it is possible to search within the names of headings (categories) and resource descriptions by keyword. Directories are used when the user does not know exactly what he is looking for: moving from the most general categories to more specific ones, you can determine which web resource to consult. Search directories can be compared to thematic library catalogs or classifiers. Their maintenance is partially automated, but the classification of resources is still carried out mainly by hand.
Search directories can be general-purpose or specialized. General-purpose directories include resources of the most varied profiles. Specialized directories combine only resources devoted to a particular topic; they often achieve better coverage of resources in their area and build a more adequate classification.
Recently, general-purpose search directories and indexing search engines have been intensively integrated, successfully combining their advantages. Search technologies do not stand still either. Traditional indexing servers search the database for documents containing the keywords of the query; with this approach it is very difficult to assess the value and quality of the resource offered to the user. An alternative approach is to look for web pages that other resources on the topic refer to: the more links to a page there are on the Web, the more likely you are to find it. This kind of search is carried out by the Google search server (http://www.google.com/), which appeared quite recently but has already proven itself well.
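The idea of link-based ranking can be sketched in a few lines: count how many pages link to each page and show the most-linked pages first. The link structure below is made up, and real systems (such as PageRank) weight links rather than simply counting them:

```python
# Ranking pages by the number of inbound links - the simplest form of link analysis.
links = {  # illustrative structure: page -> pages it links to
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "d.html": ["c.html", "b.html"],
}

inbound = {}
for source, targets in links.items():
    for target in targets:
        inbound[target] = inbound.get(target, 0) + 1

# Pages with more inbound links are listed first.
for page, count in sorted(inbound.items(), key=lambda item: item[1], reverse=True):
    print(page, count)   # c.html 3, then b.html 2
```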

Working with search engines

Working with search engines is not difficult. Type the search engine's address into the browser's address bar, then enter in the query line the keywords or a phrase, in the appropriate language, corresponding to the resource or resources you want to find. Click the "Search" button, and the first page of search results loads into the browser window.

Typically, a search engine returns results in small portions, for example 10 per results page, so they often span more than one page. In that case, below the list of recommended links there is a link leading to the next "portion" of search results (see fig.).

Ideally, the search engine will place the resource you are looking for on the first page of search results, and you will immediately recognize the desired link by its short description. However, it is often necessary to browse several resources before finding a suitable one. Typically, the user views them in new browser windows without closing the search results browser window. Sometimes the search and viewing of found resources is carried out in the same browser window.
The success of the search for information directly depends on how competently you have made a search query.
Let's look at a simple example. Suppose you want to buy a computer but do not know what modifications exist today or what their characteristics are. To get the required information, you can ask a search engine. If we enter the word "computer" in the search line, the result will be more than 6 million (!) links. Naturally, among them there are pages that meet our requirements, but it is impossible to find them among such a number.
If you write "what modifications of computers exist today," then the search server will offer you to view about two hundred pages, but none of them will strictly match the request. In other words, they contain individual words from your query, but this may not be about computers at all, but, say, about existing modifications of washing machines or about the number of computers available in the warehouse of a company for that day.
In general, it is not always possible to successfully ask a question to a search server the first time. If the query is short and contains only frequently used words, a lot of documents can be found, hundreds of thousands and millions. On the contrary, if your request turns out to be too detailed or very rare words are used in it, you will see a message stating that no resources matching your request were found in the server database.
Gradually narrowing or expanding the focus of your search by increasing or decreasing the list of keywords, replacing unsuccessful search terms with more successful ones will help you improve your search results.
In addition to the number of words, their content plays an important role in the query. The keywords that make up a search query are usually simply separated by spaces. It should be remembered that different search engines interpret this differently. Some of them select for such a query only documents containing all of the keywords, that is, they treat a space in the query as the logical connective "and". Others interpret a space as a logical "or" and look for documents containing at least one of the keywords.
When forming a search query, most servers allow you to explicitly specify the logical connectives that combine the keywords and to set some other search parameters. Logical connectives are usually denoted by the English words "AND", "OR" and "NOT". Different search servers use different syntax for forming an extended query - the so-called query language. Using the query language, you can specify which words must appear in the document, which must not, and which are desirable (that is, may or may not be present).
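How such connectives work can be shown on the inverted index from the earlier sketch: AND intersects the sets of pages, OR unites them, NOT subtracts. The query syntax here is deliberately simplified and does not correspond to any particular search server:

```python
# Evaluating simplified AND / OR / NOT connectives over an inverted index
# (a dict mapping each word to the set of pages that contain it).
def evaluate(index: dict, tokens: list) -> set:
    """Evaluate a query such as ["computer", "AND", "modifications", "NOT", "washing"]."""
    result = set(index.get(tokens[0], set()))
    for op, word in zip(tokens[1::2], tokens[2::2]):
        pages = index.get(word, set())
        if op == "AND":
            result &= pages
        elif op == "OR":
            result |= pages
        elif op == "NOT":
            result -= pages
    return result
```
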
As a rule, modern search engines take into account all possible word forms of the query words. That is, regardless of the form in which you used a word in the query, the search considers all of its forms according to the rules of the language: for example, if the query is "go", the search will also find links to documents containing the words "goes", "going", "went", and so on.
Usually the search server's home page has a "Help" link through which the user can learn the search rules and the query language used on that server.
Another very important point is the choice of a search engine suitable for your tasks. If you are looking for a specific file, then it is better to use a specialized search engine that indexes not web pages, but file archives on the Internet. An example of such search servers is FTP Search (http://ftpsearch.lycos.com), and to search for files in Russian archives it is better to use the Russian analogue - http://www.filesearch.ru.
To search for software, use software archives such as http://www.tucows.com/, http://www.windows95.com, http://www.freeware.ru.
If the web page you are looking for is located in the Russian part of the Web, it may be worth using Russian search engines. They handle Russian-language search queries better and provide an interface in Russian.
Table 1 lists some of the more well-known general purpose search engines. All of these servers now offer both full-text search and category search, thus combining the advantages of an index server and a directory server.

One direction of development is a new version of HTTP that would allow maintaining a long-term connection, transferring data in several streams, and distributing and managing data transmission channels. If it is implemented and supported by standard WWW software, it will remove the disadvantages mentioned above. Another way is to use navigators that can run programs in interpreted languages locally, such as Sun Microsystems' Java project. A further way to solve this problem is to use AJAX technology, based on XML and JavaScript; it allows additional data to be received from the server after the WWW page has already been loaded.

Currently, there are two trends in the development of the World Wide Web: the Semantic Web and the Social Web.

There is also the popular concept of Web 2.0, which summarizes several directions of the development of the World Wide Web.

Web 2.0

The WWW has developed to a significant degree through the active introduction of new principles and technologies that have received the collective name Web 2.0. The term Web 2.0 itself first appeared in 2004 and is intended to illustrate the qualitative changes in the WWW in the second decade of its existence. Web 2.0 is a logical development of the Web. Its main feature is the improvement and acceleration of the interaction between websites and their users, which has led to a rapid increase in user activity. This has manifested itself in:

  • participation in Internet communities (in particular, in forums);
  • posting comments on sites;
  • maintaining personal journals (blogs);
  • placing links on the WWW.

Web 2.0 introduced active data exchange, in particular:

  • export of news between sites;
  • active aggregation of information from sites;
  • use of APIs to separate a site's data from the site itself.

From the point of view of site implementation, Web 2.0 raises the requirements for simplicity and convenience of sites for ordinary users and anticipates a rapid fall in users' average skill level in the near future. The emphasis is on compliance with W3C standards and recommendations. These are, in particular:

  • standards of visual design and functionality of sites;
  • typical requirements (SEO) of search engines;
  • XML standards and open information exchange.

On the other hand, the following declined in importance with Web 2.0:

  • requirements for "brightness" and "creativity" of design and content;
  • the need for complex websites (Internet portals);
  • the value of offline advertising;
  • business interest in large projects.

Thus, Web 2.0 marked the WWW's transition from isolated, expensive, complex solutions to highly standardized, cheap, easy-to-use sites with the ability to exchange information efficiently. The main reasons for this transition were:

  • critical lack of quality content;
  • the need for active self-expression of the user in the WWW;
  • development of technologies for searching and aggregating information in the WWW.

The transition to the Web 2.0 complex of technologies has the following consequences for the global WWW information space:

  • the success of the project is determined by the level of active communication of project users and the level of quality of the content;
  • sites can achieve high performance and profitability without large capital investments due to successful positioning on the WWW;
  • individual WWW users can achieve significant success in implementing their business and creative plans on the WWW without having their own sites;
  • the concept of a personal site gives way to the concept of "blog", "author's heading";
  • fundamentally new roles of an active WWW user appear (forum moderator, authoritative forum member, blogger).

Web 2.0 Examples
Here are a few examples of sites illustrating Web 2.0 technologies that have actually changed the WWW environment. These are in particular:

In addition to these projects, there are other projects that form a modern global environment and are based on the activity of their users. Sites, the content and popularity of which are formed, first of all, not by the efforts and resources of their owners, but by the community of users interested in the development of the site, constitute a new class of services that determine the rules of the global WWW environment.

The Internet is a communication system and at the same time an information system, an environment for human communication. There are currently many definitions of this concept. In our opinion, one definition of the Internet that most fully characterizes the information interaction of the world's population is the following: "The Internet is a complex transport and information system of mushroom-like (dipole) structures, the head of each of which (the dipole proper) is the brain of a person sitting at a computer, together with the computer itself, which is, as it were, an artificial extension of the brain, while the legs are, for example, the telephone network connecting the computers, or the ether through which radio waves are transmitted."

The emergence of the Internet gave impetus to the development of new information technologies, leading to changes not only in people's consciousness but in the world as a whole. The worldwide computer network, however, was not the first such discovery. Today the Internet is developing in the same way as its predecessors - the telegraph, the telephone and radio - but, unlike them, it combined their merits: it became not only a useful means of communication between people but also a publicly accessible means of receiving and exchanging information. It should be added that the capabilities of not only fixed but also mobile television have come to be fully used on the Internet.

The history of the Internet begins in the 1960s.

The first documented description of the social interaction that networking would make possible was a series of memos written by J. Licklider, which discussed the concept of a "Galactic Network". The author foresaw the creation of a global network of interconnected computers through which everyone could quickly access data and programs located on any computer. In spirit, this concept is very close to the current state of the Internet.

Leonard Kleinrock published the first paper on packet-switching theory in July 1961. In it he presented the advantages of his theory over the existing principle of data transmission, circuit switching. What is the difference between these concepts? With packet switching there is no dedicated physical connection between the two terminal devices (computers); the data to be transmitted is divided into parts, and a header is appended to each part containing full information about delivering the packet to its destination. With circuit switching, the two computers are physically connected "to each other" for the duration of the transmission, and the entire volume of information is transmitted over that connection, which is maintained until the transfer is complete - just as with the analog systems that used connection switching. In this case the utilization of the communication channel is minimal.
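The difference is easy to illustrate with a toy sketch: a message is cut into packets, each carrying a header with the destination and a sequence number, so the packets can travel independently (and even arrive out of order) and still be reassembled at the far end. The message and destination below are, of course, invented:

```python
# Toy illustration of packet switching.
import random

def to_packets(message: str, destination: str, size: int = 4) -> list:
    """Cut a message into packets; each header holds the destination and an offset."""
    return [
        {"dst": destination, "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets: list) -> str:
    """Put the packets back in order and join their data."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("login", "stanford.example")
random.shuffle(packets)        # packets may travel by different routes and arrive out of order
print(reassemble(packets))     # -> 'login'
```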

To test the concept of packet switching, Lawrence Roberts and Thomas Merrill in 1965 connected a TX-2 computer in Massachusetts to a Q-32 computer in California using low-speed dial-up telephone lines. Thus, the first in history (albeit small) non-local computer network was created. The result of the experiment was the understanding that time-sharing computers can work together successfully, executing programs and fetching data on a remote machine. It also became clear that the circuit-switched telephone system was absolutely unsuitable for building a computer network.

In 1969, the American agency ARPA (Advanced Research Projects Agency) began research on creating an experimental packet-switched network. This network was created and received the name ARPANET, i.e. the network of the Advanced Research Projects Agency. A sketch of the ARPANET, consisting of four nodes - the embryo of the Internet - is shown in Fig. 6.1.

At this early stage, research was conducted on both network infrastructure and network applications. At the same time, work was underway to create a functionally complete protocol for intercomputer interaction and other network software.

In December 1970, the Network Working Group (NWG), led by S. Crocker, completed work on the first version of the protocol, called the Network Control Protocol (NCP). After NCP was implemented on the ARPANET nodes in 1971-1972, network users were finally able to start developing applications.

In 1972, the first application appeared - e-mail.

In March 1972, Ray Tomlinson wrote the basic programs for sending and reading electronic messages. In July of the same year, Roberts added to these programs the ability to list messages, selectively read, save to a file, forward and prepare a response.

Since then, e-mail has become the largest network application. For its time, e-mail was what the World Wide Web is today: an extremely powerful catalyst for the growth of all kinds of interpersonal data exchange.

In 1974, the Internet Network Working Group (INWG) introduced the universal protocol for data transmission and networking, TCP / IP. This is the protocol that is used on the Internet today.

However, the ARPANET transition from NCP to TCP/IP only took place on January 1, 1983. It was a "Day X"-style transition that required simultaneous changes on all computers. The transition had been carefully planned by all stakeholders over the previous several years and went surprisingly smoothly (it did, however, lead to the proliferation of the "I survived the transition to TCP/IP" badge). In 1983, the transfer of ARPANET from NCP to TCP/IP made it possible to split the network into MILNET, the network proper for military needs, and ARPANET, which was used for research purposes.

Another important event took place in the same year. Paul Mockapetris developed the Domain Name System (DNS). This system allowed the creation of a scalable distributed mechanism for mapping hierarchical computer names (eg www.acm.org) to Internet addresses.

In the same year, 1983, a Domain Name Server (DNS) was created at the University of Wisconsin. Such a server automatically and invisibly to the user translates a site's textual name into an IP address.
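The same translation can be requested directly on any modern system. A short sketch in Python (the host name is illustrative, and the answer depends on the current DNS data):

```python
# Asking DNS to translate a host name into an IP address -
# the same lookup a browser performs before connecting to a web server.
import socket

host = "info.cern.ch"                     # illustrative host name
print(host, "->", socket.gethostbyname(host))
```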

With the spread of the Network beyond the United States, national top-level domains such as ru, uk and ua appeared.

In 1985, the National Science Foundation (NSF) took part in creating its own network, NSFNet, which was soon connected to the Internet. Initially NSFNet linked five supercomputer centers - fewer nodes than ARPANET - and the data transfer rate in its channels did not exceed 56 kbit/s. Nevertheless, the creation of NSFNet was a significant contribution to the development of the Internet, because it offered a new way of looking at how the Internet could be used. The Foundation set the goal of having every scientist and every engineer in the United States "connected" to a single network, and therefore began to create a network with faster channels that would unite numerous regional and local networks.

On the basis of ARPANET technology, the NSFNET (National Science Foundation NETwork) was created in 1986 with the direct participation of NASA and the Department of Energy. Six large research centers equipped with the latest supercomputers, located in different regions of the United States, were connected. The main purpose of this network was to provide US research centers with access to supercomputers over an interregional backbone network. The network operated at a base speed of 56 kbit/s. When creating the network, it became obvious that it was not even worth trying to connect all universities and research organizations directly to the centers, since laying such an amount of cable would be not only very expensive but practically impossible. Therefore it was decided to create networks on a regional basis: in every part of the country, interested institutions connected to their closest neighbors. The resulting chains were connected to the supercomputer centers through one of their nodes, so that the supercomputer centers were connected together. With this design, any computer could communicate with any other by passing messages through its neighbors.

One of the problems of that time was that early networks (including ARPANET) were built deliberately in the interests of a narrow circle of organizations. They were to be used by a closed community of specialists, and as a rule the work of the networks was limited to this. There was no particular need for networks to interoperate, and hence no interoperability. At the same time, alternative technologies began to appear in the commercial sector, such as XNS from Xerox, DECNet, and SNA from IBM. Therefore, within the NSFNET effort, the DARPA-sponsored Internet Engineering and Architecture Task Forces and members of NSF's Network Technical Advisory Group worked out requirements for Internet gateways. These requirements formally ensured interoperability between the parts of the Internet maintained by DARPA and NSF. In addition to choosing TCP/IP as the foundation of NSFNet, US federal agencies adopted and implemented a number of additional principles and rules that shaped the modern face of the Internet. Most importantly, NSFNET had a policy of "universal and equal access to the Internet." Indeed, for an American university to obtain NSF funds for an Internet connection, it, as recorded in the NSFNet program, "must make this connection available to all trained users on campus."

NSFNET worked quite well at first, but the time came when it could no longer cope with the increased demands. The network created for the use of supercomputers also allowed the connected organizations to exchange a great deal of information not related to supercomputers. Network users in research centers, universities, schools and elsewhere realized that they now had access to a wealth of information and direct access to their colleagues. The flow of messages on the network grew faster and faster until it eventually overwhelmed the computers managing the network and the telephone lines connecting them.

In 1987 NSF transferred to Merit Network Inc. a contract under which Merit, with the participation of IBM and MCI, was supposed to provide management of the NSFNET backbone, move to faster T-1 channels and continue its development. The growing backbone already had over 10 nodes.

In 1990, the concepts of ARPANET, NFSNET, MILNET, etc. finally disappeared from the scene, giving way to the concept of the Internet.

The scale of the NSFNET network, combined with the quality of its protocols, led to the fact that by 1990, when ARPANET was finally dismantled, the TCP/IP family had displaced or significantly marginalized most other wide-area network protocols worldwide, and IP was confidently becoming the dominant data transport service in the global information infrastructure.

In 1990, the European Organization for Nuclear Research established the largest Internet site in Europe and provided the Old World with access to the Internet. To help promote the concept of distributed computing over the Internet, Tim Berners-Lee at CERN (Geneva, Switzerland) developed the technology of hypertext documents - the World Wide Web (WWW) - which allows users to access any information available on the Internet on computers around the world.

The WWW technology is based on the URL (Uniform Resource Locator), HTTP (HyperText Transfer Protocol) and HTML (HyperText Markup Language) specifications. Text can be marked up in HTML using any text editor. A page with HTML markup is often referred to as a Web page. To view a Web page, a client application, a Web browser, is used.
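The three specifications come together in every request a browser makes: the URL names the resource, HTTP transfers it, and the response body is HTML to be rendered. A short sketch with the Python standard library (the URL is illustrative and the request needs a working Internet connection):

```python
# One HTTP request: URL -> HTTP GET -> HTML.
from urllib.request import urlopen

url = "http://info.cern.ch/"                   # illustrative URL
with urlopen(url) as response:                 # sends an HTTP GET request
    print(response.status)                     # e.g. 200 if the request succeeded
    html = response.read().decode("utf-8", errors="replace")

print(html[:200])                              # the beginning of the HTML markup
```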

In 1994, the W3C consortium (W3 Consortium) was formed, bringing together scientists from different universities and companies (including Netscape and Microsoft). From that time on, the consortium has dealt with the standards of the Internet world. The organization's first step was to develop the HTML 2.0 specification, in which it became possible to transfer information from the user's computer to the server using forms. The next step was the HTML 3 project, work on which began in 1995. It was here that the CSS system (Cascading Style Sheets) was first introduced; CSS allows text to be formatted without breaking the logical and structural markup. The HTML 3 standard was never approved; instead, HTML 3.2 was created and adopted in January 1997. Already in December 1997, the W3C adopted the HTML 4.0 standard, which divides tags into logical and visual ones.

By 1995, the growth of the Internet had shown that regulating connectivity and funding could not remain in the hands of NSF alone. In 1995, the fees for connecting numerous private networks to the national backbone were transferred to the regional networks.

The Internet grew far beyond what had been envisioned and designed; it outgrew the agencies and organizations that created it, and they could no longer play a dominant role in its growth. The Internet has grown exponentially since 1983, and hardly a single detail of those days has survived unchanged - except that the Internet is still based on the TCP/IP suite of protocols.

If the term "Internet" was originally used to describe a network built on the basis of the Internet - IP protocol, now this word has acquired a global meaning and is only sometimes used as a name for a set of interconnected networks. Strictly speaking, the Internet is any set of physically separate networks that are interconnected by a single IP protocol, which allows us to speak of them as one logical network. The rapid growth of the Internet, aroused an increased interest in the TCP / IP protocols, as a result, specialists and companies appeared who found a number of other applications for it. This protocol began to be used to build local area networks (LAN - Local Area Network) even when their connection to the Internet was not provided. In addition, TCP / IP began to be used in the creation of corporate networks, which have adopted Internet technologies, including the WWW (World Wide Web) - the world wide web, in order to establish an effective exchange of internal corporate information. These corporate networks are called "Intranets" and can either connect or not to the Internet.

The inventor of the World Wide Web is Tim Berners-Lee, the author of the HTTP, URI/URL and HTML technologies. In 1980, for his own needs, he wrote the program "Enquire" ("Investigator"), which used random associations to store data and laid the conceptual foundation for the World Wide Web. In 1989, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project involved the publication of hypertext documents linked by hyperlinks, which would make it easier for scientists to find and consolidate information. To carry out the project he invented URIs, the HTTP protocol and the HTML language - technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. He wrote the world's first web server, "httpd", and the world's first hypertext web browser, called "WorldWideWeb". This browser was also a WYSIWYG editor (What You See Is What You Get); its development began in October 1990 and was finished in December of the same year. The program worked in the NeXTStep environment and began to spread over the Internet in the summer of 1991. Berners-Lee created the world's first Web site at http://info.cern.ch/ (the site is now archived); it went online on August 6, 1991. The site described what the World Wide Web is, how to set up a Web server, how to use a browser, and so on. It was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained there a list of links to other sites.

Since 1994, the World Wide Web Consortium (W3C), founded by Tim Berners-Lee, took over the main work on the development of the World Wide Web. This Consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. W3C Mission: "Unleash the full potential of the World Wide Web by creating protocols and principles to ensure the long-term development of the Web." Two other major tasks of the Consortium are to ensure the full “internationalization of the Web” and to make the Web accessible to people with disabilities.

The W3C develops uniform principles and standards for the Internet (called "W3C Recommendations"), which are then implemented by software and hardware manufacturers. Thus, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more perfect, universal and convenient. All Recommendations of the World Wide Web Consortium are open, that is, they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Currently, the World Wide Web is formed by millions of Internet Web servers located around the world. A web server is a program that runs on a computer connected to the network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard disk and sends it over the network to the requesting computer. More sophisticated Web servers are capable of generating resources dynamically in response to an HTTP request. Uniform Resource Identifiers (URIs) are used to identify resources (often files or parts of them) on the World Wide Web, and Uniform Resource Locators (URLs) are used to locate resources on the network. URL locators combine URI identification technology with the Domain Name System (DNS): the domain name (or the IP address directly, in numeric form) is the part of the URL that designates the computer (more precisely, one of its network interfaces) executing the code of the desired Web server.

To view the information received from the Web server, a special program, the Web browser, is used on the client computer. The main function of the Web browser is to display hypertext. The World Wide Web is inextricably linked to the concepts of hypertext and hyperlinks; most of the information on the Web is hypertext. To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language), a hypertext markup language, is traditionally used. The work of marking up hypertext is called layout; markup specialists are called webmasters. After HTML markup, the resulting hypertext is placed in a file, and such HTML files are the most widespread resources on the World Wide Web. Once an HTML file is made available to a web server, it is referred to as a "web page". A collection of web pages forms a website. Hyperlinks are added to the hypertext of web pages. Hyperlinks help users of the World Wide Web to navigate easily between resources (files), regardless of whether the resources are located on the local computer or on a remote server. "Web" hyperlinks are based on URL technology.

In general, we can conclude that the World Wide Web is based on "three pillars": HTTP, HTML and URL. Although recently HTML has begun to lose ground and give way to more modern markup technologies: XHTML and XML. XML (English eXtensible Markup Language) is positioned as the foundation for other markup languages. To improve the visual perception of the web, CSS technology has become widely used, which allows you to set uniform styles for multiple web pages. Another innovation worth paying attention to is the URN (Uniform Resource Name) resource designation system.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web, which is designed to make information posted on the network more understandable for computers. The Semantic Web is a concept of a web in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web provides access to well-structured information for any application, regardless of platform and regardless of programming languages. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on these conclusions. If widely distributed and properly implemented, the Semantic Web can revolutionize the Internet. To create a computer-understandable description of a resource, the Semantic Web uses the Resource Description Framework (RDF) format, which is based on XML syntax and uses URIs to denote resources. New additions in this area are RDFS (RDF Schema) and SPARQL (SPARQL Protocol and RDF Query Language), a new query language for quickly accessing RDF data.

Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web. The Semantic Web envisions improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats. The Social Web relies on the work of organizing the information on the Web, done by the Web users themselves. In the second direction, developments that are part of the Semantic Web are actively used as tools (RSS and other web feed formats, OPML, XHTML microformats).

Internet telephony has become one of the most modern and economical types of communication. Its birthday can be considered February 15, 1995, when VocalTec released its first softphone, a program for exchanging voice over IP. Microsoft then released the first version of NetMeeting in October 1996, and by 1997 connections over the Internet between two ordinary telephone subscribers located in completely different parts of the planet had become quite common.

Why are regular long-distance and international telephone calls so expensive? Because during a conversation the subscriber occupies an entire communication channel, not only when speaking or listening to the other party but also when silent or distracted from the conversation. This is what happens when voice is transmitted over the telephone in the usual analog way.

With the digital method, information can be transmitted not continuously, but in separate "packets". Then, through one communication channel, information can be sent simultaneously from many subscribers. This principle of packet transmission of information is similar to the transport of many letters with different addresses in the same mail wagon. After all, one mail wagon is not “chased” to transport each letter separately! Such temporary "burst multiplexing" makes it possible to use the existing communication channels much more efficiently, to "compress" them. At one end of the communication channel, information is divided into packets, each of which, like a letter, is supplied with its own individual address. Through the communication channel, packets of many subscribers are transmitted "alternately". At the other end of the link, packets with the same address are merged again and sent to their destination. This packet principle is widely used on the Internet.

With a personal computer, a sound card, a compatible microphone and headphones (or speakers), a subscriber can use Internet telephony to call any subscriber who has an ordinary landline telephone. In such a conversation he also pays only for using the Internet. Before using Internet telephony, a subscriber who owns a personal computer needs to install a special program on it.

It is not necessary to have a personal computer to use Internet telephony services: an ordinary telephone with tone dialing is enough. In tone mode each dialed digit goes into the line not as a varying number of electrical pulses, as with a rotary dial, but as a combination of alternating currents of different frequencies; most modern telephones support this mode. To use Internet telephony from such a telephone, you buy a calling card and dial the powerful central server at the number indicated on the card. A voice menu on the server (in Russian or English, as you choose) then tells you what to do: enter the card's serial number and key using the telephone's buttons, then dial the country code and the number of your future interlocutor. The server converts the analog signal to digital and sends it to the server in the other city, which converts the digital signal back to analog and delivers it to the desired subscriber. The parties talk as on an ordinary telephone, although a slight delay in the reply (a split second) is sometimes noticeable. Recall that, to save communication channels, voice is transmitted in "packets" of digital data: the voice information is cut into segments and carried as packets using the Internet Protocol (IP).

In 2003 the Skype program (www.skype.com) was created. It is completely free and requires practically no knowledge from the user either to install or to use. It allows video conversations with interlocutors sitting at their computers in different parts of the world; for the interlocutors to see each other, each computer must be equipped with a web camera.

Such is the long road that humanity has travelled in the development of means of communication: from signal fires and drums to the mobile phone, which allows two people in almost any parts of our planet to contact each other nearly instantly. Despite the distance, the subscribers have a feeling of personal communication.

The Internet occupies an ever larger place in our lives. No technology created by man has gained such widespread popularity. The Internet is the World Wide Web, which covers the entire globe, enveloping it like a web. It began to gain popularity back in the relatively distant 1990s. In this article we will discuss where it came from and why it became so popular.

The Internet as the World Wide Web

This second name was given for a reason: the Internet unites many users around the world and, like a spider's web, envelops the entire globe with its threads. And this is not just a metaphor; it really is so. The Internet is made of wires and wireless links, the latter invisible to us.

But that is a lyrical digression; in fact the Internet is associated with the World Wide Web (WWW). It covers all the computers connected to the network. On remote servers users store the information they need and can also communicate on the Web. This name is often understood as the World Wide, or Global, Network.

It is based on several critical protocols, such as TCP/IP. It is over the Internet that the World Wide Web (WWW) carries out its activity, that is, transmits and receives data.
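As a minimal illustration of how data is requested over these protocols, the sketch below fetches a web page over HTTP (which itself runs on top of TCP/IP) using only Python's standard library; "example.com" is simply a placeholder host used for the example.

```python
# A minimal sketch of a Web client requesting data over HTTP on top of TCP/IP,
# using only the standard library; "example.com" is a placeholder host.
from http.client import HTTPConnection

conn = HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")              # ask the server for the resource "/"
response = conn.getresponse()         # the server replies with a status and body
print(response.status, response.reason)
print(response.read(200))             # first bytes of the returned HTML page
conn.close()
```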

Number of users

At the end of 2015 a study was conducted which produced the following figures: the number of Internet users worldwide was 3.3 billion, almost 50% of the total population of our planet.

Such strong figures have been achieved thanks to the spread of 3G and high-speed 4G cellular networks. Providers have also played an important role: with the mass introduction of Internet technologies, the costs of maintaining servers and producing fibre-optic cable have fallen. Most European countries have faster Internet speeds than African countries, which is explained by the technical lag of the latter and the lower demand for the service.

Why is the Internet called the World Wide Web?

Paradoxically, many users are sure that the term above and the Internet are one and the same. This deep misconception, widespread among users, is caused by the similarity of the concepts. Let us now sort out what is what.

The Internet is often confused with the similar phrase "World Wide Web". The latter is a body of information built on top of Internet technology.

History of the World Wide Web

By the end of the 1980s, NSFNET had finally displaced ARPANET technology. Oddly enough, both networks grew out of research work in the same country. ARPANET was developed by order of the US Department of Defense; indeed, the first people to use the Internet were the military. NSFNET, by contrast, was developed by the National Science Foundation independently of the defense establishment, largely out of pure enthusiasm.

It was the competition between the two developments that drove their further evolution and mass introduction around the world. The World Wide Web became available to the general public in 1991. The network needed a way to work, and Tim Berners-Lee took on the development of such a system for the Internet. In about two years of successful work he created the HyperText Transfer Protocol (HTTP), the famous markup language HTML and the URL addressing scheme. We need not go into detail here, because today we encounter them simply as the ordinary links and addresses of websites.

Information space

First of all, it is an information space that is accessed through the Internet. It gives the user access to the data stored on servers. To use a visual image: the Internet is the vessel, and the World Wide Web is what fills it.

Through a program called a browser, the user gets onto the Internet to surf the Web. The Web consists of an innumerable number of sites hosted on servers; these are computers connected to the network that are responsible for storing, serving and displaying the data.

Spider webs and modern man

Today, people in developed countries are almost completely integrated with the World Wide Web. We are not talking about our grandfathers and grandmothers, or about remote villages where some may not even have heard of the Internet.

Previously, a person in search of information went straight to the library. Often the book he needed was not there, and he had to go on to other institutions with archives. Now the need for such manipulations has disappeared.

In biology, the full name of a subspecies consists of three words; modern man, for example, is Homo sapiens sapiens. We could now jokingly add a fourth word: internetiys.

The Internet is taking over the minds of humanity

Admit it: we now get almost all of our information from the Internet, and tons of information are at our fingertips. Tell one of our ancestors about this, and he would eagerly bury himself in the monitor screen and spend all his free time there searching for information.

It is the Internet that has brought humanity to a fundamentally new level and is helping to create a new, mixed or "multi" culture. Representatives of different nations imitate and adapt to one another, as if merging their customs in one cauldron, and this is where the final product comes from.

It is especially useful for scientists: there is no longer any need to gather at councils in a country a thousand kilometres away from your own. Experience can be exchanged without a personal meeting, for example through instant messengers or social networks, and if an important issue needs to be discussed, it can be done via Skype.

Conclusion

The World Wide Web is a constituent part of the Internet. Its operation is ensured by storage servers, which provide information to the user on request. The Network itself was developed thanks to scientists from the United States and their enthusiasm.

The structure and principles of the World Wide Web


The World Wide Web is formed by millions of web servers located around the world. A web server is a program running on a computer connected to the network that uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a particular resource over the network, finds the corresponding file on the local hard disk and sends it over the network to the requesting computer. More sophisticated web servers can generate resources dynamically in response to an HTTP request. To identify resources (often files or parts of them) on the World Wide Web, Uniform Resource Identifiers (URIs) are used. Uniform Resource Locators (URLs) are used to locate resources on the network. URLs combine URI identification with the Domain Name System (DNS): the domain name (or an IP address in numeric notation) is included in the URL to designate the computer (more precisely, one of its network interfaces) that runs the code of the required web server.
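A minimal sketch of the behaviour just described, using Python's standard http.server module: the handler receives an HTTP GET request, looks for the corresponding file in the current directory and sends it back, or returns a 404 error. Real web servers are far more elaborate; the port and file layout here are arbitrary choices for illustration.

```python
# A minimal web server sketch: receive an HTTP request, find the file on the
# local disk, send it back. Standard library only; not production-ready.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

class TinyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map the requested path onto a file in the current directory.
        name = self.path.lstrip("/") or "index.html"
        file = Path(name)
        if file.is_file():
            body = file.read_bytes()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)        # send the file to the requesting client
        else:
            self.send_error(404, "Resource not found")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TinyHandler).serve_forever()
```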

To view the information received from a web server, a special program, a web browser, is used on the client computer. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked to the concepts of hypertext and hyperlinks; most of the information on the Web is hypertext. To facilitate the creation, storage and display of hypertext on the World Wide Web, the HyperText Markup Language (HTML) is traditionally used. The work of marking up hypertext is called layout, and a markup specialist is called a webmaster. After HTML markup, the resulting hypertext is placed in a file; such an HTML file is the main resource of the World Wide Web. Once an HTML file is made available to a web server, it is referred to as a "web page". A collection of web pages forms a website. Hyperlinks are added to the hypertext of web pages; they help users of the World Wide Web navigate easily between resources (files), regardless of whether the resources are located on the local computer or on a remote server. Web hyperlinks are based on URL technology.
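To show hypertext and hyperlinks in practice, the sketch below feeds a small invented HTML page to Python's standard html.parser and extracts the URLs that its hyperlinks point to; the page content and links are chosen purely for illustration.

```python
# A small sketch of hypertext in practice: an invented HTML page contains
# hyperlinks, and a standard-library parser extracts the URLs they point to.
from html.parser import HTMLParser

page = """
<html>
  <body>
    <h1>A web page</h1>
    <p>See the <a href="http://info.cern.ch/">first website</a> and the
       <a href="https://www.w3.org/">W3C</a>.</p>
  </body>
</html>
"""

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Every <a href="..."> tag is a hyperlink to another resource.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

collector = LinkCollector()
collector.feed(page)
print(collector.links)   # ['http://info.cern.ch/', 'https://www.w3.org/']
```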

World Wide Web Technologies

To improve the visual presentation of the Web, CSS technology has come into wide use; it allows uniform display styles to be set for many web pages. Another innovation worth noting is the Uniform Resource Name (URN) naming system.


History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the author of the HTTP, URI/URL and HTML technologies. In 1980 he worked at the European Council for Nuclear Research (fr. Conseil Européen pour la Recherche Nucléaire, CERN) as a software consultant. It was there, in Geneva, Switzerland, that he wrote for his own needs the program Enquire (loosely translatable as "Investigator"), which used random associations to store data and laid the conceptual foundation for the World Wide Web.

The world's first website was put online by Berners-Lee on 6 August 1991 on the first web server, available at http://info.cern.ch/. The resource defined the concept of the World Wide Web and contained instructions for setting up a web server, using a browser and so on. The site was also the world's first Internet directory, since Tim Berners-Lee later posted and maintained a list of links to other sites there.

The first photograph on the World Wide Web showed the parody filk band Les Horribles Cernettes. Tim Berners-Lee asked the band's leader for scanned pictures after the CERN Hardronic Festival.

Yet the theoretical foundations of the Web were laid much earlier than Berners-Lee's work. As far back as 1945, Vannevar Bush developed the concept of the Memex, an auxiliary mechanical device for "expanding human memory". The Memex is a device in which a person would store all his books and records (and, ideally, all of his knowledge that can be formally described) and which would produce the needed information with sufficient speed and flexibility; it is an extension and supplement to human memory. Bush also predicted the comprehensive indexing of texts and multimedia resources with the ability to quickly find the necessary information. The next significant step toward the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

Two main directions are currently seen in the development of the World Wide Web:
  • The Semantic Web aims to improve the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats.
  • The Social Web relies on the work of organizing the information available on the Web that is done by Web users themselves. Within this second direction, developments belonging to the Semantic Web are actively used as tools (RSS and other web-feed formats, OPML, XHTML microformats). The partially semantic sections of the Wikipedia category tree help users move through the information space more deliberately, although the very loose requirements for subcategories give little reason to expect such sections to expand. In this respect, attempts to compile knowledge atlases may be of interest.

There is also the popular concept of Web 2.0, which summarizes several directions of the development of the World Wide Web.

Ways to actively display information on the World Wide Web

Information on the Web can be displayed either passively (that is, the user can only read it) or actively, in which case the user can add to and edit the information. The methods of actively displaying information on the World Wide Web include guest books, forums, chats, blogs, wiki projects and content management systems.

It should be noted that this division is rather arbitrary: a blog or a guest book, say, can be viewed as a special case of a forum, which in turn is a special case of a content management system. Usually the difference shows up in the purpose, approach and positioning of a particular product.

Information on websites can also, in part, be accessed through speech. In India, testing has already begun of a system that makes the text content of pages accessible even to people who cannot read or write.

The World Wide Web is sometimes ironically called the Wild Wild Web, by analogy with the title of the film Wild Wild West.

Links

  • Official site of the World Wide Web Consortium (W3C)
  • Tim Berners-Lee, Mark Fischetti. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York: HarperCollins Publishers. 256 p. ISBN 0-06-251587-X, ISBN 978-0-06-251587-2.