Network technologies of local computer networks. Concept of network technologies

Today, networks and network technologies connect people in every corner of the world and provide them with access to the greatest luxury in the world - human communication. People can communicate and play with friends in other parts of the world without interference.

The events taking place become known in all countries of the world in a matter of seconds. Everyone is able to connect to the Internet and post their piece of information.

Network information technologies: the roots of their origin

In the second half of the last century, human civilization formed its two most important scientific and technical branches: computer technology and telecommunications. For about a quarter of a century the two branches developed independently, and within them computer networks and telecommunication networks, respectively, were created. In the last quarter of the twentieth century, however, as a result of the evolution and interpenetration of these two branches of knowledge, there arose what we now call "network technology", a subsection of the more general concept of "information technology".

Their appearance brought about a new technological revolution. Just as several decades earlier the land surface had been covered with a network of expressways, at the end of the last century all countries, cities and villages, enterprises and organizations, and even individual homes found themselves connected by "information highways". At the same time, all of them became elements of various networks that transfer data between computers using particular information transfer technologies.

Network technology: concept and content

Network technology is a sufficient set of rules for representing and transmitting information, implemented in the form of so-called standard protocols, together with the hardware and software that support them: network adapters and their drivers, cables and fiber-optic lines, and various connectors.

"Sufficiency" of this set of tools means that it is minimal while still allowing an efficient network to be built. It should also leave room for improvement — for example, by creating subnets within it, which requires protocols of other levels and special communication devices usually called routers. After such improvement the network becomes more reliable and faster, but at the cost of add-ons built on top of the basic network technology that forms its foundation.

The term "network technology" is most often used in the narrow sense described above; however, it is sometimes interpreted broadly, as any set of tools and rules for building networks of a certain type, for example "local area network technology".

Prototype of network technology

The first prototype of a computer network, though not yet a network itself, was the multi-terminal system of the 1960s-1980s. A terminal — essentially a monitor and keyboard located at a considerable distance from a large computer and connected to it via telephone modems or dedicated channels — allowed work to leave the premises of the computing center and be dispersed throughout the building.

At the same time, in addition to the operator at the computing center, all terminal users could enter their jobs from the keyboard, observe their execution on the monitor and perform some job-management operations. Such systems, implementing both time-sharing and batch-processing algorithms, were called remote job entry systems.

Global networks

Following multi-terminal systems, in the late 1960s the first type of networks was created: global computer networks, or wide area networks (WANs). They connected supercomputers, which existed in single copies and stored unique data and software, with mainframe computers located up to many thousands of kilometers away, over telephone networks and modems. This network technology had previously been tested in multi-terminal systems.

The first such network, ARPANET, was created in 1969; it operated in the US Department of Defense and united computers of different types running different operating systems. They were equipped with additional modules implementing communication protocols common to all the computers on the network. It was on ARPANET that the foundations of the network technologies still in use today were developed.

The first example of the convergence of computer and telecommunications networks

WANs inherited their communication lines from older and even more global networks — telephone networks — because laying new long-distance lines was very expensive. For many years they therefore used analog telephone channels, each capable of carrying only one conversation at a time. Digital data was transmitted over them at very low speeds (tens of kbit/s), and the capabilities were limited to the transfer of data files and email.

However, while inheriting the telephone network's communication lines, WANs did not adopt its basic technology, which was based on circuit switching: for each pair of subscribers a channel of constant speed was allocated for the entire duration of the communication session. Instead, WANs used new computer network technologies based on packet switching, in which data is split into small portions — packets — that are injected into the unswitched network and delivered to their recipients using address codes carried in the packet headers.
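To make the contrast with circuit switching concrete, here is a minimal, purely illustrative Python sketch (all names, field layouts and sizes are invented for the example, not taken from any real protocol) of how a message can be split into addressed packets and reassembled at the destination from the information in their headers:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # sender address carried in the header
    dst: str        # recipient address carried in the header
    seq: int        # sequence number, so the message can be reassembled
    payload: bytes  # a small portion of the original message

def packetize(message: bytes, src: str, dst: str, size: int = 4) -> list:
    """Split a message into fixed-size packets, each with its own header."""
    return [Packet(src, dst, i, message[off:off + size])
            for i, off in enumerate(range(0, len(message), size))]

def reassemble(packets: list, dst: str) -> bytes:
    """The recipient picks out packets addressed to it and restores the order."""
    own = [p for p in packets if p.dst == dst]
    return b"".join(p.payload for p in sorted(own, key=lambda p: p.seq))

packets = packetize(b"HELLO, ARPANET!", src="A", dst="B")
assert reassemble(packets, dst="B") == b"HELLO, ARPANET!"
```

The key point the sketch illustrates is that no channel is reserved: each packet carries enough addressing information to find its own way to the recipient.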

Predecessors of local area networks

The appearance of large-scale integrated circuits (LSI) in the late 1970s led to the creation of minicomputers that combined low cost with rich functionality. They began to compete seriously with large computers.

Minicomputers of the PDP-11 family gained particularly wide popularity. They began to be installed in even very small production units to control technical processes and individual installations, as well as in enterprise management departments to perform office tasks.

The concept of computer resources distributed throughout the enterprise emerged, although all minicomputers still operated autonomously.

The emergence of LAN networks

By the mid-1980s, technologies for combining minicomputers into networks had been introduced, based on packet switching, as in WANs.

They turned the construction of a single enterprise network, called a local area network (LAN), into an almost trivial task. To create one, it was enough to buy network adapters for the chosen LAN technology (for example, Ethernet) and a standard cable system, fit connectors onto the cables, and use them to connect the adapters to the minicomputers and to each other. One of the operating systems intended for organizing a LAN was then installed on the server. After that the network was up and running, and connecting each new minicomputer caused no problems.

The inevitability of the Internet

If the advent of minicomputers made it possible to distribute computing resources evenly across the territory of an enterprise, then the appearance of the personal computer in the early 1990s led to computers gradually appearing first at the workplace of every knowledge worker and then in private homes.

The relative cheapness and high reliability of PCs first gave a powerful impetus to the development of LAN networks, and then led to the emergence of a global computer network - the Internet, which today covers all countries of the world.

The size of the Internet is growing by 7-10% every month. It represents the core that connects various local and global networks of enterprises and institutions around the world with each other.

If at the first stage data files and email messages were mainly transmitted via the Internet, today it mainly provides remote access to distributed information resources and electronic archives, to commercial and non-commercial information services in many countries. Its freely accessible archives contain information on almost all areas of knowledge and human activity - from new trends in science to weather forecasts.

Basic network technologies of LAN networks

Among them, the basic technologies stand out — those on which the foundation of any specific network can be built. Examples include such well-known LAN technologies as Ethernet (1980), Token Ring (1985) and FDDI (late 1980s).

By the end of the 1990s Ethernet had become the leading LAN technology, combining its classic version (up to 10 Mbit/s) with Fast Ethernet (up to 100 Mbit/s) and Gigabit Ethernet (up to 1000 Mbit/s). All Ethernet variants share similar operating principles, which simplifies their maintenance and the integration of LANs built on them.

During the same period, network functions implementing the above network information technologies began to be built into the kernels of almost all computer operating systems. Specialized communication operating systems, such as IOS from Cisco Systems, also appeared.

How WAN technologies developed

WAN technologies that ran over analog telephone channels were distinguished by complex error-control and data-recovery algorithms, owing to the high level of distortion in those channels. An example is X.25 technology, developed in the early 1970s. More modern WAN technologies are Frame Relay, ISDN and ATM.

ISDN is an acronym that stands for Integrated Services Digital Network and allows remote video conferencing. Remote access is ensured by installing ISDN adapters in PCs, which work many times faster than any modems. There is also special software that allows popular operating systems and browsers to work with ISDN. But the high cost of equipment and the need to lay special communication lines hinder the development of this technology.

WAN technologies have progressed along with telephone networks. After the advent of digital telephony, a special technology, Plesiochronous Digital Hierarchy (PDH), was developed, supporting speeds of up to 140 Mbit/s and used by enterprises to create their own networks.

The newer Synchronous Digital Hierarchy (SDH) technology of the late 1980s expanded the throughput of digital telephone channels to 10 Gbit/s, and Dense Wavelength Division Multiplexing (DWDM) technology raised it to hundreds of Gbit/s and even several Tbit/s.

Internet technologies

Internet network technologies are based on the hypertext markup language (HTML) — a special markup language consisting of an ordered set of attributes (tags) that website developers embed in each of their pages. In this case we are not talking about text or graphic documents (photos, pictures) that the user has already downloaded from the Internet, that sit in the memory of his PC and are viewed with text or image viewers; we are talking about so-called web pages viewed through browser programs.

Developers create Internet sites in HTML (many tools and technologies, collectively called "website layout", now exist for this work) as sets of web pages, and site owners place them on Internet servers, renting storage from the servers' owners (so-called "hosting"). These servers work on the Internet around the clock, servicing users' requests to view the web pages loaded onto them.

A browser on the user's PC, having gained access through its Internet provider's server to the specific server whose address is contained in the name of the requested Internet site, gains access to that site. Then, by analyzing the HTML tags of each page being viewed, the browser forms its image on the monitor screen the way the site developer intended it — with all the headings, font and background colors, and various inserts in the form of photos, diagrams, pictures and so on.
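As a rough illustration of this request-response exchange between a browser and a web server, here is a minimal sketch using Python's standard http.client module; the host example.com is just a placeholder, and a real browser of course does far more (parses the HTML, fetches images, renders the page):

```python
import http.client

# Open a TCP connection to the web server named in the site address.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)

# Ask for the page "/" the same way a browser would: an HTTP GET request.
conn.request("GET", "/", headers={"Host": "example.com",
                                  "User-Agent": "tiny-demo-browser"})

response = conn.getresponse()
html = response.read().decode("utf-8", errors="replace")

print(response.status, response.reason)   # e.g. "200 OK"
print(html[:200])                         # the beginning of the HTML with its tags
conn.close()
```

Everything after this point — turning the received tags into headings, colors and pictures on the screen — is the browser's rendering work described above.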

Ethernet Technology

Ethernet is the most common local network standard today.

Ethernet is a networking standard based on the experimental Ethernet Network, which Xerox developed and implemented in 1975.

In 1980, DEC, Intel, and Xerox jointly developed and published the Ethernet version II standard for coaxial cable networks, which became the final version of the proprietary Ethernet standard. Therefore, the proprietary version of the Ethernet standard is called the Ethernet DIX standard, or Ethernet II, on the basis of which the IEEE 802.3 standard was developed.

Based on the Ethernet standard, additional standards were adopted: Fast Ethernet in 1995 (an addition to IEEE 802.3) and Gigabit Ethernet in 1998 (section 802.3z of the main document); in many respects these are not independent standards.

For all variants of the Ethernet physical layer that provide a throughput of 10 Mbit/s, binary information is transmitted over the cable using the Manchester code (Fig. 3.9).

The Manchester code uses a potential transition — the edge of a pulse — to encode ones and zeros. With Manchester encoding, each bit interval is divided into two parts, and information is encoded by the transition that occurs in the middle of each interval. A one is encoded by a transition from a low signal level to a high one (a rising edge), and a zero by the reverse transition (a falling edge).

Fig. 3.9. Manchester coding
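A small, purely illustrative Python sketch of the rule just described (the function names are invented for the example): each bit occupies one clock interval split into two half-intervals, and the value is carried by the transition in the middle — low-to-high for a one, high-to-low for a zero.

```python
def manchester_encode(bits: str) -> list:
    """Return (first half, second half) signal levels for each bit.

    A '1' is a low-to-high transition in the middle of the interval,
    a '0' is a high-to-low transition, as described in the text above.
    """
    return [(0, 1) if b == "1" else (1, 0) for b in bits]

def manchester_decode(halves: list) -> str:
    return "".join("1" if first < second else "0" for first, second in halves)

signal = manchester_encode("1011001")
assert manchester_decode(signal) == "1011001"
print(signal)  # [(0, 1), (1, 0), (0, 1), (0, 1), (1, 0), (1, 0), (0, 1)]
```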

The Ethernet standard (including Fast Ethernet and Gigabit Ethernet) uses the same method of sharing the data transmission medium — the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) method.

Each PC on an Ethernet operates according to the principle: "Listen to the channel before transmitting; keep listening while you transmit; if interference appears, stop and try again later."

This principle can be deciphered (explained) as follows:

1. No one is allowed to send messages while someone else is already doing so (listen before you send).

2. If two or more senders start sending messages at approximately the same time, sooner or later their messages will “collide” with each other in the communication channel, which is called a collision.

Collisions are not difficult to recognize, because they always produce an interference signal that does not resemble a valid message. Ethernet detects this interference and forces the sender to pause the transmission and wait a random amount of time before resending the message.
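The following simplified Python sketch is not a faithful model of the standard, only an illustration of the listen-transmit-back-off cycle; the callables channel_busy and collision_detected stand in for what a network adapter actually senses on the shared medium, and the random pause imitates binary exponential backoff.

```python
import random

SLOT_TIME = 1          # abstract time unit, standing in for 512 bit times

def send_frame(channel_busy, collision_detected, max_attempts=16):
    """Illustrative CSMA/CD loop: listen, transmit, back off on collision."""
    for attempt in range(1, max_attempts + 1):
        while channel_busy():          # 1. listen before sending
            pass                       #    wait until the medium is free
        if not collision_detected():   # 2. listen while sending
            return True                #    the frame went through
        # 3. collision: wait a random number of slot times and retry
        k = min(attempt, 10)
        pause = random.randint(0, 2 ** k - 1) * SLOT_TIME
        print(f"collision on attempt {attempt}, backing off for {pause} slots")
    return False                       # give up after too many collisions

# Example run: the medium is free, the first attempt collides, the second succeeds.
flags = iter([True, False])
print(send_frame(channel_busy=lambda: False,
                 collision_detected=lambda: next(flags)))
```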

Reasons for the widespread use and popularity of Ethernet (advantages):

1. Low cost.

2. Extensive operational experience.

3. Continued innovation.

4. Wide choice of equipment. Many manufacturers offer networking equipment based on Ethernet.

Disadvantages of Ethernet:

1. Possibility of message collisions (collisions, interference).

2. If the network is heavily loaded, the message transmission time is unpredictable.

Token Ring Technology

Token Ring networks, like Ethernet networks, are characterized by a shared data transmission medium, which consists of cable segments connecting all network stations into a ring. The ring is considered as a common shared resource, and access to it requires not a random algorithm, as in Ethernet networks, but a deterministic one, based on transferring the right to use the ring to stations in a certain order. This right is conveyed using a special format frame called a token.

Token Ring technology was developed by IBM in 1984 and then submitted as a draft standard to the IEEE 802 committee, which adopted the 802.5 standard based on it in 1985.

Each PC in a Token Ring network operates according to the principle: "Wait for the token; if you need to send a message, attach it to the token as it passes by; if the passing token carries a message addressed to you, take the message off and send the token further."
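Here is a deliberately simplified Python sketch of that rule (the class and function names are invented for the example): the token travels around a list of stations, a station with a pending frame attaches it to the token, and the addressee takes the frame off as the token passes. Real Token Ring is considerably more involved — the sender removes its own frame after a full circle, and there are priorities and an active monitor — so this is only an illustration of the principle.

```python
class Station:
    def __init__(self, name):
        self.name = name
        self.outbox = []      # frames waiting to be sent: (destination, data)
        self.inbox = []       # frames received from the ring

def rotate_token(ring, laps=2):
    """Pass the token around the ring; only the token holder may transmit."""
    token = {"frame": None}                           # a free token carries no frame
    for _ in range(laps):
        for station in ring:
            frame = token["frame"]
            if frame and frame[0] == station.name:    # frame addressed to this station
                station.inbox.append(frame[1])        # take it off the token
                token["frame"] = None
            if token["frame"] is None and station.outbox:
                token["frame"] = station.outbox.pop(0)  # attach our frame to the token

ring = [Station("A"), Station("B"), Station("C")]
ring[0].outbox.append(("C", "hello from A"))
rotate_token(ring)
print(ring[2].inbox)   # ['hello from A']
```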

Token Ring networks operate at two bit rates - 4 and 16 Mbit/s. Mixing stations operating at different speeds in one ring is not allowed.

Token Ring is a more complex technology than Ethernet. It has fault-tolerance properties: the network defines operation-control procedures that use the feedback inherent in a ring structure — a sent frame always returns to the sending station.

Fig. 3.10. The principle of Token Ring technology

In some cases, detected errors in the network are eliminated automatically, for example, a lost token can be restored. In other cases, errors are only recorded, and their elimination is carried out manually by maintenance personnel.

To monitor the network, one of the stations acts as a so-called active monitor. The active monitor is selected during ring initialization as the station with the highest MAC address value. If the active monitor fails, the ring initialization procedure is repeated and a new active monitor is selected. A Token Ring network can include up to 260 nodes.

A Token Ring hub (MSAU, Multistation Access Unit) can be active or passive. A passive hub simply interconnects its ports so that the stations connected to them form a ring; a passive MSAU performs neither signal amplification nor resynchronization.

An active hub performs signal regeneration functions and is therefore sometimes called a repeater, as in the Ethernet standard.

In general, the Token Ring network has a combined star-ring configuration. End nodes are connected to the MSAU in a star topology, and the MSAUs themselves are combined through special Ring In (RI) and Ring Out (RO) ports to form a backbone physical ring.

All stations in the ring must operate at the same speed, either 4 Mbit/s or 16 Mbit/s. The cables connecting the station to the hub are called lobe cables, and the cables connecting the hubs are called trunk cables.

Token Ring technology allows you to use different types of cable to connect end stations and hubs:

– STP Type 1 — shielded twisted pair (Shielded Twisted Pair). Up to 260 stations can be combined into a ring, with branch (lobe) cable lengths of up to 100 meters;

– UTP Type 3 and UTP Type 6 — unshielded twisted pair (Unshielded Twisted Pair). The maximum number of stations is reduced to 72, with branch cable lengths of up to 45 meters;

– fiber optic cable.

The distance between passive MSAUs can reach 100 m when using STP Type 1 cable and 45 m when using UTP Type 3 cable. Between active MSAUs, the maximum distance increases accordingly to 730 m or 365 m, depending on the cable type.

The maximum ring length of a Token Ring is 4000 m. The restrictions on the maximum ring length and the number of stations in a ring are not as strict in Token Ring technology as in Ethernet; here they are mainly related to the time it takes the token to travel around the ring.

All timeout values ​​in the network adapters of the Token Ring network nodes are configurable, so you can build a Token Ring network with more stations and a longer ring length.

Advantages of Token Ring technology:

· guaranteed delivery of messages;

· high data transfer speed (up to 160% of classic Ethernet's).

Disadvantages of Token Ring technology:

· expensive media access devices are required;

· technology is more complex to implement;

· 2 cables are needed (to increase reliability): one incoming, the other outgoing from the computer to the hub;

· high cost (160-200% of Ethernet).

FDDI Technology

FDDI (Fiber Distributed Data Interface) technology - fiber-optic distributed data interface - is the first local network technology in which the data transmission medium is a fiber-optic cable. The technology appeared in the mid-80s.

FDDI technology is largely based on Token Ring technology, supporting a token-passing access method.

The FDDI network is built on two fiber-optic rings, which form the main and backup data transfer paths between network nodes. Having two rings is the primary way of increasing fault tolerance in an FDDI network, and nodes that want to take advantage of this increased reliability must be connected to both rings.

In normal network operation mode, data passes through all nodes and all cable sections of the Primary ring only; this mode is called the Thru mode - “end-to-end”, or “transit”. The Secondary ring is not used in this mode.

In the event of a failure in which part of the primary ring cannot transmit data (for example, a broken cable or a node failure), the primary ring is merged with the secondary ring, again forming a single ring. This mode of network operation is called Wrap, that is, a "folding" of the rings. The wrap operation is performed by FDDI hubs and/or network adapters.

Fig. 3.11. A network with two rings in emergency (Wrap) mode

To simplify this procedure, data on the primary ring is always transmitted in one direction (in the diagrams this direction is shown counterclockwise), and on the secondary ring it is always transmitted in the opposite direction (shown clockwise). Therefore, when a common ring of two rings is formed, the transmitters of the stations still remain connected to the receivers of neighboring stations, which allows information to be correctly transmitted and received by neighboring stations.

The FDDI network can fully restore its functionality in the event of single failures of its elements. When there are multiple failures, the network splits into several unconnected networks.

Rings in FDDI networks are considered as a common shared data transmission medium, so a special access method is defined for it. This method is very close to the access method of Token Ring networks and is also called the token ring method.

The difference in the access method is that the token retention time in an FDDI network is not constant. It depends on the load on the ring: at low load it increases, and under heavy overload it can drop to zero. These changes in the access method affect only asynchronous traffic, which is not critical to small delays in frame transmission; for synchronous traffic the token hold time remains a fixed value.
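This load-dependent behaviour can be illustrated with a simplified calculation built around the idea of a target token rotation time (TTRT): a station may hold the token for asynchronous traffic only for the time by which the previous rotation came in under the target, so under heavy load that allowance shrinks to zero. The sketch below illustrates this idea only, not the exact FDDI state machine; the numbers are arbitrary.

```python
def async_token_holding_time(ttrt_ms: float, last_rotation_ms: float) -> float:
    """Time a station may spend sending asynchronous frames on this turn.

    ttrt_ms          -- target token rotation time agreed for the ring
    last_rotation_ms -- how long the token actually took on its last lap
    """
    return max(0.0, ttrt_ms - last_rotation_ms)

# Lightly loaded ring: the token came back quickly, so plenty of time remains.
print(async_token_holding_time(ttrt_ms=8.0, last_rotation_ms=1.5))   # 6.5
# Heavily loaded ring: the rotation already used up the target, nothing is left.
print(async_token_holding_time(ttrt_ms=8.0, last_rotation_ms=9.0))   # 0.0
```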

FDDI technology currently supports cable types:

– fiber optic cable;

– category 5 unshielded twisted pair. This standard appeared later than the optical one and is called TP-PMD (Twisted Pair Physical Medium Dependent).

Fiber optic technology provides the necessary means to transmit data from one station to another over optical fiber and defines:

Using 62.5/125 µm multimode fiber optic cable as the main physical medium;

Requirements for optical signal power and maximum attenuation between network nodes. For standard multimode cable, these requirements lead to a maximum distance between nodes of 2 km, and for single-mode cable the distance increases to 10–40 km depending on the quality of the cable;

Requirements for optical bypass switches and optical transceivers;

Parameters of optical connectors MIC (Media Interface Connector), their markings;

The use of light with a wavelength of 1.3 µm (1300 nm) for transmission;

The maximum total length of the FDDI ring is 100 kilometers, the maximum number of dual-connected stations in the ring is 500.

FDDI technology was developed for use in critical parts of networks — on backbone connections between large networks, such as building networks, and for connecting high-performance servers to the network. The developers' main requirements (which became the technology's advantages) were therefore:

- ensuring high data transfer speed,

- fault tolerance at the protocol level;

- long distances between network nodes and a large number of connected stations.

All these goals were achieved. As a result, FDDI technology turned out to be high-quality but very expensive (its main disadvantage). Even the introduction of the cheaper twisted-pair option did not significantly reduce the cost of connecting a single node to an FDDI network. Practice has therefore shown that the main area of application of FDDI has become the backbones of networks spanning several buildings, as well as networks on the scale of a large city, that is, networks of the MAN class.

Fast Ethernet Technology

The need for a high-speed yet inexpensive technology for connecting powerful workstations to a network led in the early 1990s to the creation of an initiative group that began searching for a new Ethernet — a technology just as simple and effective, but operating at a speed of 100 Mbit/s.

Experts split into two camps, which ultimately led to two standards being adopted in the fall of 1995: the 802.3 committee approved the Fast Ethernet standard, which almost completely preserves 10 Mbit/s Ethernet technology, while the competing proposal became the 802.12 (100VG-AnyLAN) standard.

Fast Ethernet technology kept the CSMA/CD access method intact, with the same algorithm and the same timing parameters expressed in bit intervals (the bit interval itself became 10 times shorter). All differences between Fast Ethernet and Ethernet appear at the physical layer.

The Fast Ethernet standard defines three physical layer specifications:

- 100Base-TX for two pairs of category 5 UTP or two pairs of STP Type 1 (4B/5B encoding method — see the sketch below);

- 100Base-FX for multimode fiber-optic cable with two optical fibers (4B/5B encoding method);

- 100Base-T4, operating on 4 UTP category 3 pairs, but using only three pairs simultaneously for transmission, and the remaining one for collision detection (8B/6T encoding method).

The 100Base-TX/FX standards can operate in full-duplex mode.
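The 4B/5B code mentioned above replaces every 4 data bits with a 5-bit symbol chosen so that the line signal never goes too long without transitions. The Python sketch below is illustrative only: it shows just a few entries of the code table, and they should be checked against the standard before being relied on.

```python
# A few entries of the 4B/5B code table (an illustrative subset; the full table
# defines all 16 data symbols plus control symbols).
FOUR_TO_FIVE = {
    "0000": "11110",
    "0001": "01001",
    "0010": "10100",
    "1111": "11101",
}

def encode_4b5b(bits: str) -> str:
    """Encode a bit string (a multiple of 4 bits) nibble by nibble."""
    assert len(bits) % 4 == 0
    return "".join(FOUR_TO_FIVE[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(encode_4b5b("00001111"))   # '1111011101' -- 8 data bits become 10 line bits
```

The price of the guaranteed transitions is visible in the example: every 8 data bits become 10 bits on the line, which is why 100Base-TX actually signals at 125 Mbaud.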

The maximum diameter of a Fast Ethernet network is approximately 200 m, with more precise values ​​depending on the physical media specification. In the Fast Ethernet collision domain, no more than one Class I repeater is allowed (allowing translation of 4B/5B codes to 8B/6T codes and vice versa) and no more than two Class II repeaters (not allowing translation of codes).

When operating over twisted pair, Fast Ethernet technology allows two ports, through the auto-negotiation procedure, to select the most efficient common operating mode — a speed of 10 or 100 Mbit/s and half-duplex or full-duplex operation — as the sketch below illustrates.
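The selection itself can be thought of as both ports advertising the modes they support and then picking the highest-priority mode they have in common. The sketch below illustrates only that idea; the priority ordering shown (full duplex above half duplex, 100 Mbit/s above 10 Mbit/s) follows the usual descriptions of the procedure, while the real negotiation is performed with link pulses defined in the standard.

```python
# Candidate modes from the most to the least preferable (illustrative ordering).
PRIORITY = [
    "100Base-TX full-duplex",
    "100Base-T4",
    "100Base-TX half-duplex",
    "10Base-T full-duplex",
    "10Base-T half-duplex",
]

def negotiate(port_a, port_b):
    """Return the best mode supported by both ports, or None if none match."""
    common = set(port_a) & set(port_b)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None

print(negotiate({"100Base-TX full-duplex", "100Base-TX half-duplex", "10Base-T half-duplex"},
                {"100Base-TX half-duplex", "10Base-T half-duplex"}))
# -> '100Base-TX half-duplex'
```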

Gigabit Ethernet technology

Gigabit Ethernet technology adds a new 1000 Mbps step in the speed hierarchy of the Ethernet family. This stage allows you to effectively build large local networks, in which powerful servers and backbones of the lower levels of the network operate at a speed of 100 Mbit/s, and a Gigabit Ethernet backbone connects them, providing a sufficiently large reserve of bandwidth.

The developers of Gigabit Ethernet technology maintained a large degree of continuity with Ethernet and Fast Ethernet. Gigabit Ethernet uses the same frame formats as previous versions of Ethernet, operates in both full-duplex and half-duplex modes, and supports the same CSMA/CD access method on a shared medium, with minimal changes.

To keep an acceptable maximum network diameter of about 200 m in half-duplex mode, the developers increased the minimum frame size by a factor of 8 (from 64 to 512 bytes). A station is also allowed to transmit several frames in a row without releasing the medium, within a burst of up to 8096 bytes; in that case the frames after the first do not have to be padded to 512 bytes. The remaining parameters of the access method and the maximum frame size remained unchanged.
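The factor of 8 can be checked with a simple back-of-the-envelope calculation: for collision detection to work, the minimum frame must last at least as long as a signal needs for a round trip across the network. The sketch below uses a rounded propagation speed of roughly 5 microseconds per kilometre and ignores exact repeater delays, so the figures are an estimate, not values from the standard.

```python
BIT_RATE = 1_000_000_000          # Gigabit Ethernet, bits per second
MIN_FRAME_BITS = 512 * 8          # 512 bytes, the enlarged minimum frame

frame_time_us = MIN_FRAME_BITS / BIT_RATE * 1e6
print(f"minimum frame lasts {frame_time_us:.3f} microseconds")       # ~4.1 us

# Propagation in copper/fiber is roughly 5 us per kilometre, so a 200 m network
# gives a round trip of about 0.2 km * 2 * 5 = 2 us, leaving the remaining time
# as a margin for repeater and adapter delays.
round_trip_us = 0.2 * 2 * 5
print(f"round trip over 200 m is about {round_trip_us:.1f} microseconds")

# With the old 64-byte minimum the frame would last only ~0.5 us at 1 Gbit/s,
# shorter than the round trip - hence the eightfold increase.
old_frame_time_us = 64 * 8 / BIT_RATE * 1e6
print(f"a 64-byte frame would last only {old_frame_time_us:.2f} microseconds")
```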

In the summer of 1998, the 802.3z standard was adopted, which defines the use of three types of cable as the physical medium:

- multimode fiber optic (distance up to 500 m),

- single-mode fiber optic (distance up to 5000 m),

- double coaxial (twinax), through which data is transmitted simultaneously over two shielded copper conductors over a distance of up to 25 m.

To develop a variant of Gigabit Ethernet over category 5 UTP, a special group, 802.3ab, was created; it has already produced a draft standard for operation over four pairs of category 5 UTP. Adoption of this standard is expected in the near future.

Introduction

Chapter 1. Forms of using network technologies in education

1.1 Network technologies in education

1.2 Email

1.3 World Wide Web (WWW) technology

1.4 Search engines and Internet directories

1.5 Computer teleconferencing

1.6 Digital libraries

Chapter 2. Educational network technologies and resources proper

2.1 Desirable components of a network education system

2.2 Educational portals and distance education

Conclusion


Introduction

Education should be ahead of life. This is an axiom that has long become commonplace, but still remains (at least in Russia) a pure declaration. How can education be ahead of life? It is clear that it is impossible to teach something that does not yet exist. But it is possible to give a student the most up-to-date knowledge, while at the same time guiding him towards solving fundamental, conceptual issues. It is the conceptualization of education in the field of specific implementations that stimulates the search for new, more advanced, more daring solutions.

Informatization is an objective process in all spheres of human activity, including education. The goal of informatization of education is the global intensification of intellectual activity through the use of new information technologies.

The information saturation of modern society and its functionality at a decent level today require such speeds of information movement that only computer networks integrated into the global information space can provide.

Thus, the purpose of this work is to consider the problems of introducing into education and the educational process modern forms and methods of teaching based on the achievements of computer and communication technology, in connection with the growing globalization of all areas of society, including pedagogical science and practice.

In accordance with the purpose, object and subject of the study, the following tasks were set: identifying the main problems and prospects for introducing informatization into education; consideration of forms of use of network technologies in education; review of network technologies in Russian education.


Chapter 1. Forms of using network technologies in education

1.1 Network technologies in education

The rapid development in recent years of telecommunication technologies — in particular the Internet — and of multimedia has not only increased interest in using computers in the educational process, but has also led to the emergence of a new generation of education: computer-based distance education, as the diagram below illustrates.

Scheme 1 - Education of the new generation - computer-based distance education

Network education, as one of the types of distance learning, is a rapidly changing and still largely hypothetical area of ​​socio-economic development, difficult to predict, which suggests the importance of evaluating alternative technologies and all kinds of “warming up” the interest of the public and specialists in this area.

The main issues of network education include the development of new technological schemes, modernization of methodological resources and infrastructure development. Consideration of current problems of network education takes place against the backdrop of the process of job reductions that has continued in recent years in almost all developed countries, the acceleration of modernization under the influence of environmental restrictions on the content of many professions, on the one hand, and, on the other, due to the ongoing technological development of mankind.

All this shortens the life cycle of knowledge and skills, transforming the educational function from a one-time event (as at the beginning of the century) or a recurring one (in the middle of the century) into a regular one. The most striking example is information technology, whose software and hardware platforms change every one and a half to two years. Under these conditions the classical form of full-time education becomes only a part — and an ever smaller part — of the general educational toolkit. Outwardly imperceptible but continuously growing is the indirect participation in the educational process of the electronic mass media — primarily television and, in recent years, public computer networks.

1.2 Email

The most popular "carrier" technology in distance education today is ordinary email, based on the TCP/IP protocol stack. It is often convenient for students to separate the moment of receiving and comprehending educational material from the moment of sending a response, which may contain additional questions to the teacher or answers to the test questions and tasks contained in the received material.

Email supports the other basic functions of the educational process equally well. The attractiveness of the email-based technological scheme, rooted in its relative accessibility and low cost, will apparently persist for "correspondence students" for decades.

Recently, more and more attention has been paid to real-time technologies, including, first of all, the World Wide Web technology.

1.3 World Wide Web (WWW) technology

The Internet technology called the World Wide Web (WWW, or W3) is one of the most popular and interesting Internet services today, and a convenient means of working with information. Very often the concepts of the WWW and the Internet are even treated as identical.

The system rests on two pillars: the Hypertext Transfer Protocol (HTTP), which is used to transfer documents, and the Hypertext Markup Language (HTML), which uses hypertext links to define objects inside document files. The WWW is an information system that is difficult to define precisely; some of the epithets that describe it are hypertextual, distributed, integrating and global.

The WWW works on the client-server principle — more precisely, on a client and many servers: numerous servers, at a client's request, return a hypermedia document, that is, a document consisting of parts with diverse representations of information (text, sound, graphics, three-dimensional objects and so on), in which each element can be a link to another document or to part of one. WWW links point not only to documents specific to the WWW itself, but also to other services and information resources on the Internet. Moreover, most WWW client programs (browsers, navigators) not only understand such links but also act as clients for the corresponding services: FTP, Gopher, Usenet news, email and so on. Thus WWW software tools are universal with respect to various Internet services, and the WWW information system itself plays an integrating role.

Finally, the WWW is a direct-access service: it requires a full Internet connection, and often fast communication lines as well, if the documents being read contain a lot of graphics or other non-text information.

Web technology, developed in 1989 in Geneva at the Particle Physics Laboratory of the European Center for Nuclear Research (CERN) by Tim Berners-Lee and his fellow programmers, was first aimed at creating a unified network for scientific workers, involved in high energy physics. However, this technology soon found much wider application. The first programs demonstrating the operation of the system were completed in 1992, and since then the WWW has been the most dynamic and rapidly developing part of the Internet.

The WWW system is easy to use, which predetermined its success. Before the World Wide Web, the Internet was accessible only to qualified computer users; now even people without much computer experience can use it with ease.

1.4 Search engines and Internet directories

On the Internet you can find any information that has been published there; the Internet is a giant library, and as with any library you need to know how to search it. A printed catalog of the information and services available on the Internet via the WWW would even today occupy dozens of volumes. The problem of finding the necessary information therefore comes to the fore, and specialized search engines help solve it.

Perhaps the most useful feature of the Internet is the presence of search engines. These are dedicated computers that automatically scan all the Internet resources they can find and index their content. One can then send such a server a phrase or a set of keywords describing the topic of interest, and the server will return a list of resources matching the request.

Today's search engines maintain indexes that cover a very significant portion of Internet resources. There are quite a lot of such servers, and together they cover almost all available resources. If information of interest to a student exists on the Internet, it can almost certainly be found with search servers; they are the most powerful means of finding resources on the network.

Internet directories store thematically systematized collections of links to various network resources, primarily World Wide Web documents. Links are entered into such directories not automatically but by their administrators, and the people involved try to make their collections as comprehensive as possible, including all available resources on each topic. As a result, the user does not need to gather all the links on the question that interests him; it is enough to find that question in the catalog — the work of searching for and systematizing links has already been done.


1.5 Computer teleconferencing

The global Internet also supports such an important mode of communication as teleconferencing. A computer teleconference is a specially organized memory area on a computer that supports the operation of a telecommunications system. All subscribers who have access to this memory area (to the teleconference) can either receive on their own computer all the text already accumulated in it, or add their own text to it. As participants' texts and remarks are added, the overall text becomes more and more like the transcript of an ordinary conference — hence the name teleconference.

There are many types of teleconferencing, differing in the way participants interact with the computer (the user interface) and in the way teleconference sections are organized. The differences are determined by the software that the communication system uses to implement the teleconference mode.

However, despite the differences between teleconferences, they all have the same structure. The conference begins with some text that sets its topic. Next, each participant has the opportunity to add his own replica to this text. All replicas are arranged sequentially as they are received and are available, along with the original text, to all participants in the teleconference. On subsequent calls, you can receive either the entire text or only new text fragments. Each teleconference participant has the opportunity to work at a time convenient for him.

Conference participants may be divided into groups to develop specific topics, and their access to specific topics may be limited. The teacher can ask leading questions, pose new problems, and address individual participants personally. In general, teleconferencing provides ample opportunities for organizing the educational process. Whatever the purpose of a conference, however, it is a special kind of collective activity: its participants do not see each other and may never meet in person. Their work in a teleconference is spread out over time and usually takes place against the background of their main activity, which may have nothing to do with the material being studied. Be that as it may, the behavior of teleconference participants follows certain patterns, and knowing these patterns makes it possible to influence the success of the teleconference itself and, as a consequence, the success of learning the material to which it is devoted.

In addition, conferences can be classified by access method, by method of participation, and by the way the goal is achieved, as shown in the following diagram.

Scheme 2 - Conference methods

1.6 Digital libraries

The forms in which network technologies are used in education can vary. In principle, storing documents in electronic form on a medium accessible from the network, in a format that any reasonably common user software package can interpret, is already an educational network technology. We are talking about so-called electronic libraries. These may even be file storages accessible only via FTP, in which documents are sorted into directories by topic, chronology or format, and each directory is accompanied by a description file (file_id.diz, descript.ion, files.bbs, read.me, etc.). Network libraries organized this way, although they still exist today, are certainly no longer the most widespread kind. And to call such a file storage a library would not be entirely correct — it is more like a home bookshelf.

In the era of hypertext and organized databases, the interface of a network library is more typically characterized by a hypertext main (title) page and an electronic catalog accessible from it, built on some fairly powerful DBMS (database management system; today it is most often MySQL), with the ability to search for documents (entries) by various keys (author, title, subject, context of the bibliographic entry, any word in the text, etc.) and to sort by various criteria.
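As an illustration of such a catalog, here is a minimal sketch using Python's built-in sqlite3 module in place of a full DBMS such as MySQL; the table layout and the entries are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE catalog (
                    author  TEXT,
                    title   TEXT,
                    subject TEXT,
                    url     TEXT)""")
conn.executemany(
    "INSERT INTO catalog VALUES (?, ?, ?, ?)",
    [("A. S. Pushkin", "Eugene Onegin", "poetry", "http://lib.example/onegin"),
     ("N. V. Gogol", "Dead Souls", "prose", "http://lib.example/dead-souls")])

# Search by one of the keys mentioned above (here a fragment of the title),
# sorted by author - the kind of query a library's web catalog runs for a reader.
for row in conn.execute(
        "SELECT author, title, url FROM catalog "
        "WHERE title LIKE ? ORDER BY author", ("%Dead%",)):
    print(row)
```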

Defining the actual keys and sorting characteristics — that is, the classification of the storage units — is a very important part of organizing a network library. Most of the current Russian-language network libraries were created by amateurs, and the classification of the texts stored in them leaves much to be desired.

We can say that Russian Internet librarianship is in its infancy, which is not surprising: the Russian segment of the Internet recently turned only ten years old.

Russian users' use of foreign network storages is often hampered by insufficient knowledge of English and by the absence on many Russian workstations of programs capable of interpreting the PostScript and TeX/LaTeX formats.

Chapter 2. Educational network technologies and resources proper

2.1 Desirable components of a network education system

Information repositories themselves, even if equipped with a sufficiently user-friendly interface and publicly available, can be considered educational portals only with a certain stretch. For information to serve education, several more elements are desirable in addition to the information itself: a program and methods for assimilating it; a mentor; a system for testing the acquired knowledge; and a method of certifying the qualifications acquired during the educational process. The diagram illustrates these provisions.

Scheme 3 - Components of a network education system


2.2 Educational portals and distance education

Today, a new term has been introduced into speech for education received via the network - distance learning. Distance education differs from traditional correspondence education in that the recipient, as a rule, does not have full verbal and visual contact with the teacher (teachers), even occasionally. He does not go to orientation and examination sessions, and is not personally present at lectures and examination tests. Training comes down to students receiving programs, methods, assignments and special texts via the network, answering (via the network) control questions and tests, and completing and sending some final work to the distance education institution.

Real control over the student’s work is virtually reduced to zero, and therefore it is not surprising that the prestige of distance education today is very low - even in comparison with the prestige of correspondence education. Of course, the same should be said about its quality.

One way or another, distance education cannot be the main thing today. At least in Russia, where the era of highly specialized specialists will probably not come soon - due to the specifics of the national-historical situation.

This is due to the current level of technological development. It is likely that when the speed of data exchange and the quality of data presentation on the user terminal increase enough to create at least a minimal effect of presence, the quality — and, accordingly, the prestige — of distance education will approach that of face-to-face education, because it will become possible to conduct fully fledged remote lectures, conferences and exams.

To some extent this is already possible today, with the help of webcams and programs like NetMeeting. But webcams are still too expensive to be present at the workstations of a sufficient number of students, and the network connection speed of most ordinary workstations is so low — while affordable, budget-friendly connections are so limited — that it is often difficult for a student to receive a high-quality video and audio stream. Hence the simple (and virtually anonymous) exchange of texts and check marks when answering tests.

Conclusion

A scientific approach to the problems of informatization of education sets the immediate goal of students mastering a complex of knowledge, skills and abilities, and of developing personality qualities that can ensure successful performance of professional tasks and a comfortable existence in the information society.

The technological orientation of education lies in the following directions of its implementation:

introduction of scientific and technological information technologies into the educational process;

increasing the level of computer (information) training of participants in the educational process;

system integration of information technologies in education that support learning processes;

construction and development of a unified educational information space.

Scientific research conducted at the Russian Research Institute of System Integration of the Ministry of Education of the Russian Federation has made it possible to identify a number of information and telecommunication technologies relevant to secondary and higher education in Russia, among them: 1. Electronic textbooks; 2. Multimedia systems; 3. Expert systems; 4. Computer-aided design systems; 5. Electronic library catalogues; 6. Databases; 7. Local and distributed (global) computing systems; 8. Email; 9. Voice email; 10. Electronic bulletin boards; 11. Teleconference systems; 12. Desktop electronic publishing.

Accessibility is achieved through the opportunity for various segments of the population to receive education; in different geographical regions; on various technical platforms; in various languages; in various educational institutions.

There is no doubt that the comprehensive and full use of the advantages of network learning will allow us to raise education to a qualitatively new level that meets the ever-growing needs of the “information” society.


In order to understand how a local network works, it is necessary to understand the concept of network technology.

Network technology consists of two components: network protocols and the hardware that makes these protocols work. A protocol, in turn, is a set of "rules" by which computers on a network connect to each other and exchange information. Thanks to network technologies we have the Internet and the local connections between computers in your home. Such network technologies are often called basic, and they also have another, more elegant name: network architectures.

Network architectures define several network parameters that you need at least a basic idea of in order to understand the structure of a local network:

1) Data transfer speed. Determines how much information, usually measured in bits, can be transmitted over the network in a given time.

2) Network frame format. Information transmitted through the network exists in the form of so-called frames — packets of information. Frames in different network technologies have different formats.

3) Type of signal coding. Determines how, using electrical impulses, information is encoded in the network.

4) Transmission medium. This is the material (usually a cable) through which the flow of information passes — the same information that ultimately appears on our monitor screens.

5) Network topology. This is the layout of the network, in which the "edges" are cables and the "vertices" are the computers to which these cables run. Three main network topologies are common: ring, bus and star.

6) Method of access to the data transmission medium. Three methods of access to the network medium are used: the deterministic method, the random access method and priority transmission. The most common is the deterministic method, in which a special algorithm divides the time of use of the transmission medium among all the computers attached to it. With random access, computers compete for the medium; this method has a number of disadvantages, one of which is the loss of part of the transmitted information due to collisions of packets in the network. Priority access, accordingly, gives the most transmission capacity to the station assigned the highest priority.

Together, this set of parameters defines a network technology, as the sketch below illustrates.
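To tie the six parameters together, here is a small descriptive sketch; the structure itself is just an illustration, not the API of any real networking library, and the values are the ones quoted for classic Ethernet elsewhere in this text.

```python
from dataclasses import dataclass

@dataclass
class NetworkTechnology:
    name: str
    data_rate_mbit: int      # 1) data transfer speed
    frame_format: str        # 2) network frame format
    signal_coding: str       # 3) type of signal coding
    medium: str              # 4) transmission medium
    topology: str            # 5) network topology
    access_method: str       # 6) method of access to the medium

classic_ethernet = NetworkTechnology(
    name="IEEE 802.3/Ethernet",
    data_rate_mbit=10,
    frame_format="Ethernet II / 802.3 frame",
    signal_coding="Manchester code",
    medium="coaxial cable or twisted pair",
    topology="bus or star",
    access_method="CSMA/CD (random access)",
)
print(classic_ethernet)
```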

The most widespread network technology today is IEEE 802.3/Ethernet. It owes its popularity to its simple and inexpensive technology and to the fact that such networks are easier to service. The topology of Ethernet networks is usually a "star" or a "bus". As transmission media, such networks use thin and thick coaxial cable as well as twisted pair and fiber-optic cables. The length of Ethernet networks typically ranges from 100 to 2000 meters, and the data transfer rate of classic Ethernet is about 10 Mbit/s. Ethernet networks use the CSMA/CD access method, which belongs to the decentralized random access methods.

There are also high-speed Ethernet variants: IEEE 802.3u/Fast Ethernet and IEEE 802.3z/Gigabit Ethernet, providing data transfer rates of up to 100 Mbit/s and up to 1000 Mbit/s, respectively. In these networks the transmission medium is predominantly fiber optic or shielded twisted pair.

There are also less common, but still widely used network technologies.

The IEEE 802.5/Token Ring network technology is characterized by the fact that all nodes (computers) are united into a ring and use the token method of network access; shielded and unshielded twisted pair as well as fiber optics can serve as the transmission medium. The speed of a Token Ring network is up to 16 Mbit/s, the maximum number of nodes in a ring is 260, and the length of the entire network can reach 4000 meters.


The IEEE 802.4/ArcNet local network technology is special in that it uses a token-passing access method for data transfer. It is one of the oldest and formerly most popular networks in the world, a popularity owed to its reliability and low cost. Nowadays this network technology is less common, since its speed is quite low — about 2.5 Mbit/s. Like most other networks, it uses shielded and unshielded twisted pair and fiber-optic cables as transmission media; a network can be up to 6000 meters long and include up to 255 subscribers.

The FDDI (Fiber Distributed Data Interface) network architecture is based on token-passing access, like Token Ring, and is very popular due to its high reliability. This network technology includes two fiber-optic rings with a length of up to 100 km. It also provides a high data transfer rate — about 100 Mbit/s. The point of having two rings is that one of them serves as a redundant (backup) data path, which reduces the chance of losing transmitted information. Such a network can have up to 500 subscribers, which is also an advantage over other network technologies.

A local computer network is a collection of computers connected by communication lines that gives network users the potential ability to share the resources of all the computers. Put more simply, a computer network is a collection of computers and various devices that provide information exchange between computers without the use of any intermediate storage media.

The main purpose of computer networks is the sharing of resources and interactive communication both within one company and beyond its borders. Resources are data, applications and peripheral devices such as an external drive, printer, mouse, modem or joystick.

Computers on the network perform the following functions:

  • - organizing access to the network
  • - information transfer management
  • - providing computing resources and services to network users.

Currently, local computer networks (LANs) have become very widespread. This is due to several reasons:

  • * connecting computers into a network allows significant savings by reducing the cost of maintaining the computers (it is enough to have a certain amount of disk space on the file server — the main computer of the network — with installed software products used by several workstations);
  • * local networks, through a mailbox mechanism, allow messages to be transferred to other computers, so documents can be passed from one computer to another in a short time;
  • * local networks, with special software, are used to organize the sharing of files (for example, accountants on several machines can process the transactions of the same ledger).

Among other things, in some areas of activity it is simply impossible to do without a LAN. These areas include: banking, warehouse operations of large companies, electronic archives of libraries, etc. In these areas, each individual workstation, in principle, cannot store all the information (mainly due to its too large volume).

Wide Area Network

Internet is a global computer network covering the whole world.

The Internet, which once served exclusively research and academic groups whose interests extended to access to supercomputers, is becoming increasingly popular in the business world.

Companies are attracted by its speed, cheap global communications, ease of collaborative work, available programs, and the Internet's unique databases. They view the global network as a complement to their own local networks.

Based on the method of organization, networks are divided into real and artificial.

Artificial networks (pseudo-networks) allow computers to be linked directly via serial or parallel ports without any additional devices. Communication in such a network is sometimes called null-modem communication (no modem is used), and the connection itself a null-modem connection. Artificial networks are used when information must be transferred from one computer to another; MS-DOS and Windows include special programs for implementing a null-modem connection.

Real networks connect computers using special switching devices and a physical data transmission medium.

According to the territorial distribution, networks can be local, global, regional and urban.

A local area network (LAN) is a group (communication system) of a relatively small number of computers united by a shared data transmission medium, located within a small, limited area — one or several nearby buildings (usually within a radius of no more than 1-2 km) — for the purpose of sharing the resources of all the computers.

A global (wide area) network connects computers that are geographically distant from each other. It differs from a local network in using more extensive communications (satellite, cable, etc.). A global network ties local networks together.

A metropolitan area network (MAN) is a network that serves the information needs of a large city.

A regional network is located on the territory of a city or region.

Recently, experts have also identified such a type of network as the banking network, which is a special case of the corporate network of a large company. Obviously, the specifics of banking impose strict requirements on the information security systems of a bank's computer networks. An equally important role in building a corporate network is played by the need to ensure trouble-free and uninterrupted operation, since even a short-term failure can lead to enormous losses.

By affiliation, departmental and state networks are distinguished.

Departmental networks belong to one organization and are located on its territory.

Government networks are networks used in government agencies.

Based on the speed of information transfer, computer networks are divided into low-, medium- and high-speed.

Low-speed (up to 10 Mbit/s);

Medium-speed (up to 100 Mbit/s);

High-speed (over 100 Mbit/s).
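
To make these thresholds concrete, here is a tiny helper (not from the source text) that assigns a network to one of the three categories by its transfer rate in Mbit/s.

```python
# Classify a network by the speed thresholds given above
# (up to 10 Mbit/s, up to 100 Mbit/s, over 100 Mbit/s).
def classify_speed(mbit_per_s: float) -> str:
    if mbit_per_s <= 10:
        return "low-speed"
    if mbit_per_s <= 100:
        return "medium-speed"
    return "high-speed"

print(classify_speed(10))    # low-speed
print(classify_speed(54))    # medium-speed
print(classify_speed(1000))  # high-speed
```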

Depending on their purpose and technical solutions, networks can have different configurations (also called architectures, or topologies).

In a ring topology, information is transmitted over a closed channel. Each subscriber is directly connected to its two nearest neighbors, although in principle it can reach any subscriber on the network.

In a star (radial) topology, a central control computer in the center communicates with the subscribers in turn and connects them with one another.

In a bus configuration, the computers are connected to a common channel (bus), through which they can exchange messages.

In a tree topology, there is a “main” computer to which the computers of the next level are subordinate, and so on.

In addition, configurations without a pronounced pattern of connections are possible; the limiting case is the fully connected configuration, in which every computer on the network is directly connected to every other computer.
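
As a purely illustrative sketch, the topologies described above can be modelled as adjacency lists; this makes it easy to compare, for example, how many links each configuration needs for the same number of computers. The functions and numbers below are assumptions made for the example, not part of the source text.

```python
# The topologies described above expressed as adjacency lists for n computers,
# so the number of links in each configuration can be compared.
def ring(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def star(n):
    # node 0 plays the role of the central control computer
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

def bus(n):
    # every computer attaches to one shared channel, modelled here as node "bus"
    return {"bus": list(range(n)), **{i: ["bus"] for i in range(n)}}

def fully_connected(n):
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def link_count(graph):
    return sum(len(neigh) for neigh in graph.values()) // 2

for name, topo in [("ring", ring(6)), ("star", star(6)),
                   ("bus", bus(6)), ("full mesh", fully_connected(6))]:
    print(name, link_count(topo))
# A full mesh of 6 computers needs 15 links, a ring or a bus 6, a star only 5.
```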

From the point of view of how the computers interact, networks are divided into peer-to-peer networks (Peer-to-Peer Network) and dedicated-server networks (Dedicated Server Network).

All computers in a peer-to-peer network have equal rights. Any network user can access data stored on any computer.

Peer-to-peer networks can be organized using operating systems such as LANtastic, Windows 3.11, and Novell NetWare Lite. These programs work with both DOS and Windows. Peer-to-peer networks can also be organized on the basis of all modern 32-bit operating systems (Windows 9x/ME/2000, the workstation versions of Windows NT, OS/2) and some others.

Advantages of peer-to-peer networks:

  • 1) they are the easiest to install and operate;
  • 2) the DOS and Windows operating systems have all the functions needed to build a peer-to-peer network.

The disadvantage of peer-to-peer networks is that it is difficult to resolve information security issues. Therefore, this method of organizing a network is used for networks with a small number of computers and where the issue of data protection is not fundamental.

In a hierarchical network, when the network is installed, one or more computers are pre-allocated to manage data exchange over the network and resource distribution. Such a computer is called a server.

Any computer that has access to the server's services is called a network client or workstation.

A server in hierarchical networks is a permanent storage of shared resources. The server itself can only be a client of a server at a higher hierarchy level. Therefore, hierarchical networks are sometimes called dedicated server networks.
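
A minimal sketch of the dedicated-server idea, assuming Python's standard socket module: the server holds a small "shared resource" and a workstation (client) asks for an item by name. The host, port, and resource contents are invented for the example.

```python
# Dedicated-server model in miniature: the server owns a shared resource
# (here just a dictionary) and clients (workstations) query it over TCP.
import socket

SHARED_RESOURCE = {"report.txt": "quarterly figures", "ledger.db": "accounting data"}

def run_server(host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        conn, _addr = srv.accept()          # serve a single client for brevity
        with conn:
            name = conn.recv(1024).decode()
            reply = SHARED_RESOURCE.get(name, "not found")
            conn.sendall(reply.encode())

def run_client(name, host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(name.encode())
        return cli.recv(1024).decode()

# On the server machine: run_server()
# On a workstation:     print(run_client("ledger.db"))
```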

Servers are usually high-performance computers, possibly with several parallel processors, high-capacity hard drives, and a high-speed network card (100 Mbit/s or more).

The hierarchical network model is the most preferable, as it allows you to create the most stable network structure and more rationally distribute resources.

Another advantage of a hierarchical network is a higher level of data protection.

The disadvantages of a hierarchical network, compared to peer-to-peer networks, include:

  • 1) the need for an additional OS for the server.
  • 2) higher complexity of network installation and upgrade.
  • 3) the need to dedicate a separate computer as the server.

Local area networks (LANs) unite a relatively small number of computers (usually from 10 to 100, although much larger ones are sometimes found) within one room (an educational computer classroom), building, or institution (for example, a university). The traditional name - local area network (LAN) - is rather a tribute to the times when networks were mainly used to solve computing problems; today, in 99% of cases, we are talking exclusively about the exchange of information in the form of text, graphics and video images, and numerical arrays. The usefulness of LANs is explained by the fact that between 60% and 90% of the information an institution needs circulates within it, without needing to go outside.

The creation of automated enterprise management systems (ACS) had a great influence on the development of LANs. An ACS includes several automated workstations (AWS), measuring systems, and control points. Another important field in which LANs have proven their effectiveness is the creation of educational computer technology classes (ECCT).

Due to the relatively short length of the communication lines (usually no more than 300 meters), information can be transmitted over a LAN in digital form at a high transfer rate. Over long distances this transmission method is unacceptable because of the inevitable attenuation of high-frequency signals; in those cases it is necessary to resort to additional hardware (digital-to-analog conversion) and software (error-correction protocols, etc.) solutions.

A characteristic feature of the LAN is the presence of a high-speed communication channel connecting all subscribers for transmitting information in digital form.

There are wired and wireless channels. Each of them is characterized by certain parameter values that are essential from the point of view of organizing a LAN:

  • - data transfer speed;
  • - maximum line length;
  • - noise immunity;
  • - mechanical strength;
  • - convenience and ease of installation;
  • - cost.
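
One way to work with such a parameter list is to record it per channel type and then pick a medium that satisfies a requirement. The sketch below does this; the figures are rough, commonly cited values supplied only for illustration and are not taken from the text.

```python
# Capture the channel parameters listed above and compare candidate media.
# The example values are approximate and purely illustrative.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    speed_mbit_s: float      # data transfer rate
    max_length_m: float      # maximum segment length
    noise_immunity: str      # qualitative rating
    cost: str                # qualitative rating

CHANNELS = [
    Channel("twisted pair",      1000,   100, "medium",    "low"),
    Channel("coaxial cable",       10,   500, "good",      "medium"),
    Channel("optical fiber",    10000, 10000, "excellent", "high"),
    Channel("Wi-Fi (wireless)",   300,    50, "low",       "low"),
]

def best_for_distance(distance_m: float):
    """Pick the fastest channel that still covers the required distance."""
    suitable = [c for c in CHANNELS if c.max_length_m >= distance_m]
    return max(suitable, key=lambda c: c.speed_mbit_s) if suitable else None

print(best_for_distance(300).name)   # optical fiber
```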

If, for example, two protocols break data into packets and add information (about packet sequencing, timing, and error checking) in different ways, then a computer running one of those protocols will not be able to communicate successfully with a computer running the other protocol.
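
To make this concrete, the toy sketch below breaks a message into packets, adds a sequence number and a CRC-32 checksum to each, and reassembles them on the receiving side. The header layout is invented for the example and does not correspond to any real protocol; it only shows why both sides must agree on the same rules.

```python
# Toy packetization: split data into chunks, attach sequencing and
# error-checking information, then verify and reassemble in order.
import zlib

PAYLOAD_SIZE = 8  # deliberately tiny so the example is easy to follow

def to_packets(data: bytes):
    packets = []
    for seq, start in enumerate(range(0, len(data), PAYLOAD_SIZE)):
        payload = data[start:start + PAYLOAD_SIZE]
        packets.append({"seq": seq, "crc": zlib.crc32(payload), "payload": payload})
    return packets

def from_packets(packets):
    data = b""
    for pkt in sorted(packets, key=lambda p: p["seq"]):   # restore original order
        if zlib.crc32(pkt["payload"]) != pkt["crc"]:
            raise ValueError(f"packet {pkt['seq']} corrupted in transit")
        data += pkt["payload"]
    return data

packets = to_packets(b"a message split into several packets")
assert from_packets(packets) == b"a message split into several packets"
```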

Until the mid-80s, most local networks were isolated. They served individual companies and were rarely combined into large systems. However, when local networks reached a high level of development and the volume of information they transmitted increased, they became components of large networks. Data transmitted from one local network to another along one of the possible routes is called routed data. Protocols that support the transfer of data between networks over multiple routes are called routed protocols.
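
The following sketch shows, in simplified form, how a router might choose among possible routes by searching a routing table for the longest prefix that matches the destination address. The networks and next hops are made-up examples, not taken from the text.

```python
# Longest-prefix-match lookup in a toy routing table, using the standard
# ipaddress module. Addresses and next hops are illustrative.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):     "router-A",
    ipaddress.ip_network("10.1.0.0/16"):    "router-B",
    ipaddress.ip_network("192.168.0.0/24"): "local delivery",
    ipaddress.ip_network("0.0.0.0/0"):      "default gateway",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return ROUTING_TABLE[best]

print(next_hop("10.1.2.3"))   # router-B (more specific than 10.0.0.0/8)
print(next_hop("8.8.8.8"))    # default gateway
```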

Among the many protocols, the most common are the following:

  • NetBEUI;
  • XNS;
  • IPX/SPX and NWLink;
  • the OSI protocol suite.

Wide Area Network (WAN) - a network connecting computers that are geographically distant from each other. It differs from a local network in its more extensive communications (satellite, cable, etc.). A global network interconnects local networks.

A WAN (Wide Area Network) is a global network covering large geographic regions and including both local networks and other telecommunications networks and devices. An example of a WAN is a packet-switching network (Frame Relay), through which various computer networks can “talk” to each other.

Today, when the geographical boundaries of networks are expanding to connect users from different cities and countries, LANs are turning into global computer networks (WANs), and the number of computers on a network can already vary from tens to several thousand.

The Internet is a global computer network covering the whole world. Today the Internet has about 15 million subscribers in more than 150 countries, and the network grows by 7-10% every month. The Internet forms a kind of core that connects the various information networks belonging to different institutions around the world with one another.

Whereas previously the network was used exclusively as a medium for transferring files and e-mail messages, today more complex tasks of distributed access to resources are being solved. About three years ago, shells were created that support network search functions and access to distributed information resources and electronic archives.

The Internet, which once served exclusively research and teaching groups whose interests extended to access to supercomputers, is becoming increasingly popular in the business world.

Currently, the Internet uses almost all known communication lines, from low-speed telephone lines to high-speed digital satellite channels.

In fact, the Internet consists of many local and global networks belonging to various companies and enterprises, interconnected by various communication lines. The Internet can be imagined as a mosaic made up of small networks of different sizes, which actively interact with one another, sending files, messages, etc.

A computer network is an association of several computers for the joint solution of information, computing, educational and other problems.

The main purpose of computer networks is the sharing of resources and the implementation of interactive communications both within one company and outside it.