Standards for presenting dynamic content data. Advantages of raster graphics

ANNOTATION FOR THE WORK PROGRAM PM.01 PROCESSING OF INDUSTRY INFORMATION

1.1. Scope of the program. The work program of the professional module "Processing of Industry Information" is part of the main professional educational program in accordance with the Federal State Educational Standard for the SVE specialty 09.02.05 Applied Informatics (by industry), basic training, in terms of mastering the main type of professional activity and the corresponding professional competencies (PC):

PC 1.1. Process static information content.
PC 1.2. Process dynamic information content.
PC 1.3. Prepare equipment for operation.
PC 1.4. Set up and work with industry-specific equipment for processing information content.
PC 1.5. Monitor the operation of computers, peripheral devices and telecommunication systems, and ensure their correct operation.

1.2. The place of the professional module in the structure of the main professional educational program: the module is included in the professional cycle of the compulsory part.

1.3. Goals and objectives of the professional module - requirements for the results of mastering the professional module. In order to master the specified type of professional activity and the corresponding professional competencies, while studying the professional module the student must:

have practical experience of:
1. processing static information content;
2. processing dynamic information content;
3. editing dynamic information content;
4. working with industry equipment for processing information content;
5. monitoring the operation of computers, peripheral devices and telecommunication systems, and ensuring their correct operation;
6. preparing equipment for operation;

be able to:
1. carry out the process of pre-press preparation of information content;
2. install and work with specialized application software;
3. work in a graphic editor;
4. process raster and vector images;
5. work with application packages for text layout;
6. prepare original layouts;
7. work with application packages for processing industry information;
8. work with presentation preparation programs;
9. install and work with application software for processing dynamic information content;
10. work with application software for processing economic information;
11. convert analog forms of dynamic information content into digital ones;
12. record dynamic information content in a given format;
13. install and work with specialized application software for editing dynamic information content;
14. select editing tools for dynamic content;
15. carry out event-oriented editing of dynamic content;
16. work with specialized equipment for processing static and dynamic information content;
17. choose equipment to solve the task at hand;
18. install and configure application software;
19. diagnose equipment malfunctions using hardware and software;
20. monitor the operating parameters of the equipment;
21. eliminate minor malfunctions in the operation of equipment;
22. carry out equipment maintenance at the user level;
23. prepare error reports;
24. switch industry-specific hardware systems;
25. carry out commissioning of industry-specific equipment;
26. carry out testing of industry-specific equipment;
27. install and configure system software;

know:
1. basics of information technology;
2. technologies for working with static information content;
3. standards for formats for presenting static information content;
4. standards for formats for presenting graphical data;
5. computer terminology;
6. standards for the preparation of technical documentation;
7. sequence and rules of pre-press preparation;
8. rules for preparing and designing presentations;
9. software for processing information content;
10. basics of ergonomics;
11. mathematical methods of information processing;
12. information technologies for working with dynamic content;
13. standards for dynamic data presentation formats;
14. terminology in the field of dynamic information content;
15. software for processing information content;
16. principles of linear and non-linear editing of dynamic content;
17. rules for constructing dynamic information content;
18. rules for preparing dynamic information content for editing;
19. technical means of collecting, processing, storing and displaying static and dynamic content;
20. principles of operation of specialized equipment;
21. operating modes of computer and peripheral devices;
22. principles of building computer and peripheral equipment;
23. equipment maintenance rules;
24. equipment maintenance regulations;
25. kinds and types of test checks;
26. ranges of permissible operational characteristics of equipment;
27. principles of switching industry-specific hardware systems;
28. operational characteristics of industry equipment;
29. operating principles of system software.

1.4. Recommended number of hours for mastering the professional module program: maximum student workload of 745 hours, including:
• mandatory classroom workload of 394 hours;
• independent work of 197 hours;
• educational practice of 78 hours;
• industrial practice of 76 hours.

1.5. Forms of intermediate certification: differentiated tests, exam, qualifying exam.

1.6. Contents of the professional module

Section 1. Processing of static information content
Topic 1.1. Fundamentals of information technology
Topic 1.2. Static information content
Topic 1.3. Computer graphics content
Topic 1.4. Theory of computer graphics
Topic 1.5. Photo processing
Topic 1.6. Basic parameters of a vector contour
Topic 1.7. Processing of raster images
Topic 1.8. Development of design and construction documentation

Section 2. Processing of dynamic information content
Topic 2.1. The process of planning a layout and working with a printing house
Topic 2.2. Basic techniques for creating original layouts of various printed publications, taking into account the features of the modern printing base and paper type
Topic 2.3. Technologies of the printing process
Topic 2.4. Basics of typography
Topic 2.5. Equipment for the designer's work
Topic 2.6. Creation of PS files and preparation of the original layout for transfer to the printing house for subsequent color separation on a phototypesetting machine

Section 3. Preparing equipment for work
Topic 3.1. Presentation preparation standard
Topic 3.2. Presentation forms
Topic 3.3. Presentation effects
Topic 3.4. Preparation of presentations

Section 4. Information technologies for working with economic information
Topic 4.1. General information and interface of the Mathcad program
Topic 4.2. Exact calculations in Mathcad
Topic 4.3. Numerical methods in Mathcad

Section 5. Information technologies for working with sound
Topic 5.1. Forms of presentation of audio information
Topic 5.2. The Adobe Audition program
Topic 5.3. Working in single-track mode (Edit View). Working in multi-track mode
Topic 5.4. Working with loop and wave files
Topic 5.5. Using noise reduction filters
Topic 5.6. Editing voices
Topic 5.7. Using the channel mixer and real-time effects of the Audition program
Topic 5.8. Batch processing and scripting
Topic 5.9. Optimization of sound files for the Internet
Topic 5.10. Importing audio data from a CD and creating a new CD

Section 6. Video processing
Topic 6.1. Methods of creating digital video images. Types of digital video
Topic 6.2. Basic concepts of Adobe Premiere. Program interface. The Project, Source and Program windows
Topic 6.3. Importing and exporting files

Section 7. Creating simple animation
Topic 7.1. Methods of creating animation. Types of animation. Simple GIF animation. Flash animation
Topic 7.2. The Adobe Flash program. Program interface and capabilities
Topic 7.3. Tools of the Adobe Flash program
Topic 7.4. Fills. Combining contours. The Lasso tool. Working with text

Section 8. Editing dynamic information content
Topic 8.1. The concept of editing
Topic 8.2. Basic rules for shooting video materials
Topic 8.3. Video editing. Film editing
Topic 8.4. Video editing. Basics of working in the Adobe Premiere Pro application and editing in it
Topic 8.5. Video editing. Basic editing tools in the Program, Source and Timeline windows
Topic 8.6. Video editing. Video and audio transitions
Topic 8.7. Video editing. Transparency of video clips. Movement and scaling of clips
Topic 8.8. Video editing. Video effects
Topic 8.9. Video editing. Sound in a film
Topic 8.10. Computer animation: technology for creating an animated film
Topic 8.11. Computer animation: working with color. Types of fills and their application
Topic 8.12. Computer animation: shape animation. Tracing raster images
Topic 8.13. Computer animation: motion animation
Topic 8.14. Computer animation: symbols. Complex animation
Topic 8.15. Computer animation: library samples and their instances
Topic 8.16. Computer animation: animating a nested instance
Topic 8.17. Computer animation: layer masks. Masking layers
Topic 8.18. Computer animation: sound. Saving, exporting, publishing

Section 9. Technical means of collecting, storing and displaying static content
Topic 9.1. The camera and its equipment
Topic 9.2. Graphics tablet
Topic 9.3. Scanners
Topic 9.4. Printers
Topic 9.5. Plotters
Topic 9.6. Risograph
Topic 9.7. Cutter and laminator
Topic 9.8. Stapler and booklet maker

Section 10. Technical means of collecting, processing, storing and demonstrating dynamic content
Topic 10.1. The video camera and its equipment
Topic 10.2. Equipment for recording sound

Section 11. Technical means of processing and storing content
Topic 11.1. Processor
Topic 11.2. Motherboard
Topic 11.3. Video card
Topic 11.4. Sound card
Topic 11.5. Video capture card
Topic 11.6. Equipment for storing information

Topic 1.2. Processing information content using graphic editors

Lecture 1. Introduction to computer graphics

Classification of computer graphics

CG can be classified according to the following criteria:

Depending on the organization of the graphics system

1. Passive (non-interactive) graphics is an organization of the graphics system in which the display is used only to show images under program control, without user intervention. Once produced, a graphical representation cannot be changed.

2. Active (interactive, dynamic) graphics is the reproduction of images on the screen under user control.

Depending on the image formation method

1. Raster graphics is graphics in which an image is represented by a two-dimensional array of points that are elements of a raster. A raster is a two-dimensional array of dots (pixels) arranged in rows and columns, designed to represent an image by coloring each dot a specific color.

2. Vector graphics – an imaging method that uses mathematical descriptions to determine the position, length, and coordinates of lines to be drawn.

3. Fractal graphics is closely related to vector graphics. Like vector images, fractal images are computed, but they differ in that no objects as such are stored in the computer's memory.

4. 3D graphics.

Depending on the color gamut, black-and-white and color graphics are distinguished.

Depending on the image display methods

1. illustrative graphics – a method of depicting graphic material.

2. demonstrative graphics – associated with dynamic objects.



Technologies for depicting dynamic objects. Three main methods are used:

1. drawing and erasing;

2. frame replacement;

3. dynamic images.

Tools for creating and processing display graphics are divided into animation (two-dimensional and three-dimensional), processing and output of live video, and a variety of special video processors.

Depending on application methods

1. scientific graphics – displaying graphs on a plane and in space, solving systems of equations, graphic interpretation (MathCAD).

2. engineering graphics (computer-aided design systems) – various applications in mechanical engineering, printed circuit board design, architecture, etc.

3. business graphics – building graphs and charts, creating commercials and demonstration materials.

Business graphics

The concept of business graphics includes methods and means of graphic interpretation of scientific and business information: tables, charts, diagrams, illustrations, drawings.

Among CG software tools, business graphics tools occupy a special place. They are intended for creating illustrations when preparing reporting documentation, statistical summaries and other illustrative materials. Business graphics software is included in word processors and spreadsheet processors.

The MS Office environment has built-in tools for creating business graphics: a graphics (paint) editor, the MS Graph tool, and MS Excel charts.

Types of computer graphics

Despite the fact that there are many classes of software for working with CG, there are only three types of CG: raster, vector and fractal graphics. They differ in the principles of image formation when displayed on a monitor screen or when printed on paper.

Raster graphics is used in the development of electronic and printed publications.

Illustrations made using raster graphics are rarely created manually in computer programs. More often, illustrations prepared by an artist on paper, or photographs, are scanned for this purpose. Recently, digital photo and video cameras have come into widespread use for inputting raster images into a computer. Accordingly, most graphic editors designed for working with raster illustrations are focused not so much on creating images as on processing them. On the Internet, mainly raster illustrations are used.

Software tools for working with vector graphics, on the contrary, are intended primarily for creating illustrations and, to a lesser extent, for processing them. Such tools are widely used in advertising agencies, design bureaus, editorial offices and publishing houses. Design tasks based on the use of fonts and simple geometric elements are much easier to solve using vector graphics. There are examples of highly artistic works created with vector graphics, but they are the exception rather than the rule, since the artistic preparation of illustrations using vector graphics is extremely complex.

Software tools for working with fractal graphics are designed for automatic image generation by mathematical calculations. Creating a fractal artistic composition is not about drawing or design, but about programming. Fractal graphics are rarely used to create printed or electronic documents, but they are often used in entertainment programs.

Raster graphics. The main element of a bitmap is a point. If the image is displayed on screen, this point is called a pixel. The distinctive features of a pixel are its homogeneity (all pixels are the same size) and indivisibility (a pixel does not contain smaller pixels). Depending on the graphics resolution configured in the computer's operating system, the screen can display images of 640x480, 800x600, 1024x768 or more pixels.

The size of the image is directly related to its resolution. This parameter is measured in dots per inch (dpi). For a 15-inch diagonal monitor, the image size on the screen is approximately 28x21 cm. Knowing that there are 25.4 mm in 1 inch, we can calculate that when the monitor operates in 800x600 pixel mode, the screen image resolution is 72 dpi.

When printing, the resolution must be much higher. Polygraphic printing of a full-color image requires a resolution of at least 300 dpi. A standard photograph measuring 10x15 cm should contain approximately 1000x1500 pixels.
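A minimal sketch of this arithmetic, assuming a screen about 28 cm wide as in the example above (all other values are illustrative):

```python
# Minimal sketch of the dpi arithmetic described above (values are illustrative).
MM_PER_INCH = 25.4

# On-screen resolution: an 800 x 600 mode on a screen about 28 cm (280 mm) wide.
screen_width_mm = 280
screen_dpi = 800 / (screen_width_mm / MM_PER_INCH)
print(f"screen resolution: {screen_dpi:.0f} dpi")    # ~73 dpi, close to the 72 dpi above

# Print size: a 10 x 15 cm photograph printed at 300 dpi.
width_px = round(100 / MM_PER_INCH * 300)             # 10 cm side -> ~1181 px
height_px = round(150 / MM_PER_INCH * 300)            # 15 cm side -> ~1772 px
print(f"print size at 300 dpi: {width_px} x {height_px} pixels")
# The "approximately 1000 x 1500 pixels" quoted above corresponds to roughly 250 dpi.
```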

The color of each pixel in a raster image is stored in the computer as a combination of bits. The more bits, the more shades of color can be obtained. The number of bits the computer uses for a given pixel is called the pixel bit depth. The simplest raster images, consisting of pixels of only two colors (black and white), are called one-bit images. The number of available colors or gradations of gray equals 2 raised to the power of the number of bits per pixel. Colors described with 24 bits provide over 16 million available colors and are called natural colors (True Color).

Raster images have many characteristics that must be organized and recorded by the computer. The dimensions of an image and the arrangement of its pixels are two of the main characteristics that a raster image file must store in order to recreate the picture. Even if the information about the color of some pixels or other characteristics is corrupted, the computer will still be able to recreate a version of the drawing if it knows how all its pixels are arranged. A pixel itself has no size; it is just an area of computer memory that stores color information, so the rectangularity coefficient of the image (which determines the number of pixels of the image matrix horizontally and vertically) does not correspond to any real dimension. Only by knowing the rectangularity coefficient of the image together with a certain resolution can one determine the real dimensions of the picture.

Raster resolution is simply the number of elements (pixels) per unit area (per inch). Raster graphics files take up a large amount of computer memory. Three factors have the greatest influence on the amount of memory:

1. image size;

2. color bit depth;

3. file format used to store the image.
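As a rough sketch of how the first two factors combine (before any file-format compression), the example below estimates the uncompressed size of a bitmap; the concrete dimensions are assumed purely for illustration:

```python
# Minimal sketch: estimating the uncompressed size of a raster image (illustrative values).
def raster_size_bytes(width_px: int, height_px: int, bits_per_pixel: int) -> int:
    """Raw pixel data size: width x height x bit depth, converted to bytes."""
    return width_px * height_px * bits_per_pixel // 8

# A 1024 x 768 image at the bit depths discussed above.
for bpp, name in [(1, "1-bit black and white"), (8, "8-bit, 256 colors"), (24, "24-bit True Color")]:
    size = raster_size_bytes(1024, 768, bpp)
    print(f"{name:22s}: {size / 1024:6.0f} KB, {2 ** bpp} colors")
```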

Advantages of raster graphics:

1. hardware feasibility;

2. software independence (the file formats intended for saving bitmaps are standardized, so it does not matter in which graphic editor a particular image was created);

3. photorealistic images.

Disadvantages of raster graphics:

1. a significant file size, determined by the product of the image area, the resolution and the color depth (if they are reduced to consistent units);

2. fundamental difficulties of transforming pixel images;

3. pixelation effect - associated with the inability to enlarge the image to examine details. Since the image is made up of dots, magnification causes the dots to become larger. It is not possible to see any additional details when enlarging a raster image, and increasing the raster dots visually distorts the illustration and makes it rough;

4. hardware dependence is the cause of many errors;

5. lack of objects.

Vector graphics. If in raster graphics the main element of the image is a point, then in vector graphics it is a line (it does not matter whether it is a straight line or a curve).

Of course, there are also lines in raster graphics, but there they are considered as combinations of points. For each line point in raster graphics, one or more memory cells are allocated (the more colors the points can have, the more cells are allocated to them). Accordingly, the longer the raster line, the more memory it takes up. In vector graphics, the amount of memory occupied by a line does not depend on the size of the line, since it is represented as a formula, or more precisely, in the form of several parameters. Whatever we do with this line, only its parameters stored in memory cells change. The number of cells remains unchanged for any line.

A line is the elementary object of vector graphics. Everything in a vector illustration is made up of lines. The simplest objects are combined into more complex ones (for example, a quadrilateral object can be thought of as four connected lines, and a cube object is even more complex: it can be considered either as 12 connected lines or as 6 connected quadrilaterals). Because of this approach, vector graphics is often called object-oriented graphics.

Example. In general, the equation of a third-order curve can be written as

x³ + a₁y³ + a₂x²y + a₃xy² + a₄x² + a₅y² + a₆xy + a₇x + a₈y + a₉ = 0.

It can be seen that nine parameters are sufficient for recording. To specify a third-order curve segment, you need to have two more parameters. If we add to them parameters expressing line properties such as thickness, color, character, etc., then 20-30 bytes of RAM will be enough to store one object. Quite complex compositions, numbering thousands of objects, consume only tens and hundreds of kilobytes.
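To make this memory estimate concrete, the sketch below stores one third-order curve segment as a plain record of its parameters; the field layout and byte counts are assumptions for illustration, not a real vector file format.

```python
# Minimal sketch: a vector object stored as a handful of parameters (illustrative layout).
from dataclasses import dataclass
import struct

@dataclass
class CurveSegment:
    coeffs: tuple       # a1..a9 of the third-order curve equation above
    t_start: float      # two extra parameters delimiting the segment
    t_end: float
    width: float        # a line property: thickness
    color: int          # a line property: packed RGB color

seg = CurveSegment(coeffs=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9),
                   t_start=0.0, t_end=1.0, width=1.5, color=0x336699)

# Packing everything as 32-bit values gives 12 floats + 1 integer, i.e. 52 bytes per object -
# the same order of magnitude as the 20-30 bytes mentioned above, regardless of the line's size.
packed = struct.pack("<12fI", *seg.coeffs, seg.t_start, seg.t_end, seg.width, seg.color)
print(len(packed), "bytes per object")   # 52
```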

Like all objects, lines have properties: shape, thickness, color, character (solid, dotted, etc.). Closed lines have a fill property: the inner area of a closed contour can be filled with a color, a texture or a map. The simplest line, if it is not closed, has two vertices, which are called nodes. Nodes also have properties that determine how the vertex of a line looks and how two lines connect to each other.

Note that vector graphics objects are stored in memory as a set of parameters, but all images are still displayed on the screen as dots (simply because the screen is designed that way). Before displaying each object on the screen, the program calculates the coordinates of screen points in the object’s image, which is why vector graphics are sometimes called computed graphics. Similar calculations are made when outputting objects to a printer.

Basic concepts of CG

Raster concept

The emergence and wide use of the raster is based on the property of human vision to perceive an image consisting of individual dots as a single whole. This feature of vision has long been used by artists. Printing technology is also based on it.

The image is projected onto a light-sensitive plate through glass on which an opaque raster grid is uniformly applied. As a result, the continuous halftone image is broken into individual cells, which are called raster elements. The raster has become widespread in the production of various types of printed products: newspapers, magazines, books.

The concept of a continuous halftone image comes from photography. In fact, a photographic print, when viewed through an optical device with very high magnification, also consists of individual elementary dots. However, they are so small that they are indistinguishable to the naked eye.

Other methods of presenting images - typographic printing, printing on a printer, displaying on a monitor - use relatively large raster elements.

Light and color

Light as a physical phenomenon is a stream of electromagnetic waves of various lengths and amplitudes. The human eye, being a complex optical system, perceives these waves in the range of wavelengths from approximately 350 to 780 nm. Light is perceived either directly from a source, such as a lighting fixture, or as reflected from the surfaces of objects or refracted when passing through transparent and translucent objects. Color is a characteristic of the eye's perception of electromagnetic waves of different lengths, since it is the wavelength that determines the visible color for the eye. The amplitude, which determines the energy of the wave (proportional to the square of the amplitude), is responsible for the brightness of the color. Thus, the very concept of color is a feature of the human “vision” of the environment.

Fig. 1. The human eye

Fig. 1 schematically shows the human eye. Photoreceptors located on the surface of the retina act as light receivers. The lens is a kind of lens that forms the image, and the iris acts as a diaphragm, regulating the amount of light admitted into the eye. Sensitive cells in the eye respond differently to waves of different lengths. Light intensity is a measure of the energy of light reaching the eye, and brightness is a measure of the eye's perception of this impact. The integral curve of the spectral sensitivity of the eye is shown in Fig. 2; this is the standard curve of the International Commission on Illumination (CIE - Commission Internationale de l'Eclairage).

Photoreceptors are divided into two types: rods and cones. Rods are highly sensitive and work in low-light conditions. They are insensitive to wavelength and therefore do not "distinguish" colors. Cones, on the other hand, have narrow spectral sensitivity curves and do "distinguish" colors. There is only one type of rod, while cones are divided into three types, each sensitive to a certain range of wavelengths (long, medium or short). Their sensitivity also differs.

Fig. 3 shows the sensitivity curves of all three types of cones. It can be seen that the cones that perceive the colors of the green part of the spectrum have the greatest sensitivity, the "red" cones are slightly weaker, and the "blue" cones are significantly weaker.

Fig. 2. Integral curve of the spectral sensitivity of the eye

Fig. 3. Sensitivity curves for the different receptors

Basics of Color Theory

When working with color, we use the concepts of color resolution (also called color depth) and color model. Color resolution determines how color information is encoded and how many colors a screen can display at once. To encode a two-color (black-and-white) image, it is enough to allocate one bit to represent the color of each pixel. Allocating one byte allows 256 different colors to be encoded. Two bytes (16 bits) allow 65,536 different colors to be defined; this mode is called High Color. If three bytes (24 bits) are used to encode color, about 16.7 million colors can be displayed simultaneously; this mode is called True Color.

Colors in nature are rarely simple. Most color shades are formed by mixing primary colors. The method of dividing a color shade into its components is called a color model. There are many different types of color models, but in computer graphics, as a rule, no more than three are used. These models are known as RGB, CMYK and HSB.

Color is one of the factors in our perception of light radiation. The following attributes are used to characterize color:

Color tone. Can be determined by the predominant wavelength in the radiation spectrum. Hue allows you to distinguish one color from another, for example, green from red, yellow and others.

Brightness. Determined by energy, intensity of light radiation. Expresses the amount of light perceived.

Saturation or purity of tone. Expressed as the proportion of white present. In an ideally pure color there is no white admixture. If, for example, white color is added to a pure red color in a certain proportion (artists call this whitening), the result will be a light, pale red color.

These three attributes allow you to describe all colors and shades. The fact that there are exactly three attributes is one of the manifestations of the three-dimensional properties of color.

The science that studies color and its measurements is called colorimetry. It describes the general patterns of human color perception of light.

Among the basic laws of colorimetry are the laws of color mixing. These laws were formulated in their most complete form in 1853 by the German mathematician Hermann Grassmann:

1. Color is three-dimensional - three components are needed to describe it. Any four colors are linearly related, although there are an unlimited number of linearly independent sets of three colors.

In other words, for any specified color C we can write the following color equation, expressing the linear dependence of colors:

C = k1 C1 + k2 C2 + k3 C3,

where C1, C2, C3 are some basic, linearly independent colors, and the coefficients k1, k2 and k3 are the amounts of the corresponding colors in the mixture. The linear independence of the colors C1, C2, C3 means that none of them can be expressed as a weighted sum (linear combination) of the other two.
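As an illustration of the color equation, the sketch below finds the coefficients k1, k2, k3 for a color expressed through three linearly independent basis colors; the particular basis and target color are assumptions chosen only to show the arithmetic.

```python
# Minimal sketch: solving C = k1*C1 + k2*C2 + k3*C3 for the mixing coefficients.
import numpy as np

# Three linearly independent basis colors (as RGB vectors), placed in the columns of a matrix.
C1, C2, C3 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]), np.array([1.0, 1.0, 1.0])
basis = np.column_stack([C1, C2, C3])

C = np.array([0.8, 0.5, 0.2])     # the color to be matched
k = np.linalg.solve(basis, C)     # unique solution because C1, C2, C3 are linearly independent
print(k)                          # [0.3 0.3 0.2] -> amounts of C1, C2, C3 in the mixture
```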

The first law can be interpreted in a broader sense, namely in the sense of the three-dimensionality of color. It is not necessary to use a mixture of other colors to describe a color; other quantities can be used, but there must be three of them.

2. If in a mixture of three color components one changes continuously while the other two remain constant, the color of the mixture also changes continuously.

3. The color of the mixture depends only on the colors of the components being mixed and does not depend on their spectral compositions.

The meaning of the third law becomes clearer if we consider that the same color (including the color of a mixed component) can be obtained in different ways. For example, a component to be mixed may in turn be obtained by mixing other components.

Table of values of some colors in the RGB numerical model

HSV color model

The HSB model (Hue, Saturation, Brightness) is based on the subjective perception of color by a person. It was proposed in 1978. This model is also based on the colors of the RGB model, but any color in it is defined by its hue, saturation (i.e. the addition of white paint) and brightness (i.e. the addition of black paint). Virtually any color is obtained from a spectral color by adding gray paint. This model is hardware-dependent and does not fully correspond to the perception of the human eye, since the eye perceives spectral colors as colors of different brightness (blue appears darker than red), whereas in the HSB model they are all assigned a brightness of 100%.

Fig. 5. The HSB and HSV models

H specifies the hue of the light and takes a value from 0 to 360 degrees.

V or B: V (value) takes values from 0 to 1, and B (brightness) determines the level of white light and takes values from 0 to 100%. This parameter corresponds to the height of the cone.

S determines the color saturation. Its value corresponds to the radius of the cone.

Fig. 6. Color wheel at S=1 and V=1 (B=100%)

In the HSV model (Fig. 5), color is described by the following parameters: hue H (Hue), saturation S (Saturation), and value (brightness) V (Value). The H value is measured in degrees from 0 to 360, because the colors of the rainbow are arranged here in a circle in the following order: red, orange, yellow, green, blue, indigo, violet. The S and V values lie in the range (0…1).

Here are examples of color coding in the HSV model. At S=0 (i.e. on the V axis) there are gray tones. V=0 corresponds to black. White is coded as S=0, V=1. Colors located opposite each other on the circle, i.e. differing in H by 180º, are complementary. Specifying color using HSV parameters is used quite often in graphics systems, and usually an unrolled view of the cone is shown.

The HSV color model is convenient for use in graphic editors that are focused not on processing ready-made images but on creating them by hand. There are programs that allow you to simulate various artist tools (brushes, pens, felt-tip pens, pencils), paint materials (watercolor, gouache, oil, ink, charcoal, pastel) and canvas materials (canvas, cardboard, rice paper, etc.). When creating your own artwork, it is convenient to work in the HSV model, and once the work is finished it can be converted to the RGB or CMYK model, depending on whether it will be used as a screen or a printed illustration.
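A minimal sketch of such a conversion using Python's standard colorsys module (which expresses H, S and V as values in the 0...1 range rather than degrees and percent):

```python
# Minimal sketch: converting between the RGB and HSV models with the standard library.
import colorsys

r, g, b = 0.2, 0.6, 0.9            # an RGB color with components in the 0..1 range
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"H = {h * 360:.0f} deg, S = {s:.2f}, V = {v:.2f}")   # hue rescaled to degrees

# The inverse conversion restores the original components.
print(colorsys.hsv_to_rgb(h, s, v))  # approximately (0.2, 0.6, 0.9)
```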

There are other color models built similarly to HSV, for example HLS (Hue, Lightness, Saturation), which also uses a color cone.

Lab color model

The Lab model is a hardware-independent model, which distinguishes it from those described above. It has been experimentally shown that color perception depends on the observer (recall color-blind people, age-related differences in color perception, etc.) and on the observation conditions (everything looks gray in the dark). In 1931, scientists from the International Commission on Illumination (CIE - Commission Internationale de l'Eclairage) standardized the conditions for observing colors and studied color perception in a large group of people. As a result, the basic components of the new XYZ color model were determined experimentally. This model is hardware-independent, since it describes colors as they are perceived by a person, or more precisely by a "standard CIE observer", and it was accepted as the standard. The Lab color model used in computer graphics is derived from the XYZ model. It got its name from its basic components L, a and b. The component L carries information about the brightness of the image, and the components a and b about its colors (i.e. a and b are the chromatic components). The component a varies from green to red, and b from blue to yellow. Brightness in this model is separated from color, which is convenient for adjusting contrast, sharpness, etc. However, being abstract and highly mathematical, this model remains inconvenient for practical work.

Since all color models are mathematical, they are easily converted from one to another according to simple formulas. Such converters are built into all “decent” graphics programs.

Color profiles

The theories of color perception and reproduction outlined above are applied in practice with serious amendments. The International Color Consortium (ICC), formed in 1993, developed and standardized color management systems (CMS). Such systems are designed to ensure color consistency at all stages of work on any device, taking into account the peculiarities of specific devices when reproducing color.

In reality, there are no devices whose color gamut completely matches the RGB, CMYK, CIE or any other model. Therefore, to bring the capabilities of devices to a common denominator, color profiles were developed.

A color profile is a means of describing color reproduction parameters.

In computer graphics, all work begins in RGB space because the monitor physically emits these colors. At the initiative of Microsoft and Hewlett Packard, the standard sRGB model was adopted, corresponding to the color gamut of an average quality monitor. In this color space, graphics should be reproduced without problems on most computers. But this model is very simplified, and its color gamut is significantly narrower than that of high-quality monitors.

Currently, color profiles created in accordance with ICC requirements have become an almost universal standard. The main content of such a profile consists of tables (matrices) of color correspondence for various transformations.

Even the most ordinary monitor profile should contain at least matrices for the CIE-to-RGB conversion and a table for the inverse conversion, white point parameters and gradation characteristics (the gamma parameter).

The main feature of the ICC profile of a printing device is the need to take into account the mutual influence of colors. Whereas on a monitor the phosphor dots emit almost independently, during printing the inks are superimposed on the paper and on each other. Therefore, the profiles of printing devices contain huge matrices for recalculating mutual transformations of the XYZ and Lab spaces, and mathematical models of various variants of such transformations.

Color coding. Palette

Color coding

In order for a computer to be able to work with color images, it is necessary to represent colors in the form of numbers - color encoding. The encoding method depends on the color model and numeric data format in the computer.

For the RGB model, each of the components can be represented by numbers limited to a certain range, for example fractional numbers from zero to one or integers from zero to some maximum value. The most common color representation scheme for video devices is the so-called RGB representation, in which any color is represented as the sum of three primary colors - red, green and blue - with given intensities. The entire possible color space is a unit cube, and each color is defined by a triple of numbers (r, g, b) - (red, green, blue). For example, yellow is specified as (1, 1, 0) and magenta as (1, 0, 1); white corresponds to (1, 1, 1) and black to (0, 0, 0).

Typically, a fixed number n of memory bits is allocated for storing each color component. Therefore, the acceptable range of values for a color component is usually considered to be not [0, 1] but [0, 2^n - 1].

Almost any video adapter can display a significantly larger number of colors than is determined by the amount of video memory allocated per pixel. To exploit this capability, the concept of a palette is introduced.

A palette is an array in which each possible pixel value is associated with a color value (r, g, b). The size of the palette and its organization depend on the type of video adapter used.

The simplest palette organization is that of the EGA adapter. Each of the 16 possible logical colors (pixel values) is allocated 6 bits, 2 bits per color component. The color in the palette is specified by a byte of the form 00rgbRGB, where r, g, b, R, G and B each take the value 0 or 1. Thus, each of the 16 logical colors can be set to any of the 64 possible physical colors.
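A small sketch of packing and unpacking such a 00rgbRGB palette byte; the helper names are invented for illustration:

```python
# Minimal sketch: the EGA palette byte 00rgbRGB (r, g, b - low bits, R, G, B - high bits).
def ega_pack(r, g, b, R, G, B):
    """Pack six 0/1 bits into the 00rgbRGB palette byte."""
    return (r << 5) | (g << 4) | (b << 3) | (R << 2) | (G << 1) | B

def ega_unpack(byte):
    """Return the (r, g, b, R, G, B) bits of a palette byte."""
    return tuple((byte >> shift) & 1 for shift in (5, 4, 3, 2, 1, 0))

value = ega_pack(0, 1, 0, 1, 1, 0)        # one of the 64 possible physical colors
print(f"{value:08b}", ega_unpack(value))  # 00010110 (0, 1, 0, 1, 1, 0)
```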

The 16-color standard palette for EGA and VGA video modes. Implementing a palette for the 16-color modes of VGA adapters is much more complex. In addition to supporting the EGA palette, the video adapter additionally contains 256 special DAC registers, each of which stores an 18-bit representation of a color (6 bits per component). In this case a value from 0 to 63 is associated, as before, with the original logical color number using the 6-bit registers of the EGA palette, but it is no longer an RGB decomposition of the color; it is the number of the DAC register containing the physical color.

The 256-color mode for VGA. In 256-color VGA modes, the pixel value is used directly as an index into the DAC register array.

Currently, the True Color format is quite common, in which each component is represented as a byte, which gives 256 gradations of brightness for each component: R = 0...255, G = 0...255, B = 0...255. The number of colors is 256x256x256 = 16.7 million (2^24).

This coding method can be called component coding. In a computer, True Color image codes are represented as triplets of bytes or packed into one long (four-byte) integer, as is done, for example, in the Windows API:

C = bbbbbbbb gggggggg rrrrrrrr.
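A minimal sketch of this packing, with blue in the most significant of the three used bytes and red in the least significant, as in the layout above:

```python
# Minimal sketch: packing True Color components into one integer of the form 0x00BBGGRR.
def pack_rgb(r: int, g: int, b: int) -> int:
    """Each component is one byte (0..255); blue ends up in the most significant of the three bytes."""
    return (b << 16) | (g << 8) | r

def unpack_rgb(c: int) -> tuple:
    return (c & 0xFF, (c >> 8) & 0xFF, (c >> 16) & 0xFF)   # (r, g, b)

c = pack_rgb(255, 128, 0)       # an orange tone
print(hex(c))                   # 0x80ff -> bytes bb=00, gg=80, rr=ff
print(unpack_rgb(c))            # (255, 128, 0)
```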

Index palettes

When working with images in computer graphics systems, you often have to strike a compromise between image quality (you want as many colors as possible) and the resources required to store and reproduce the image, measured, for example, in memory capacity (you want to reduce the number of bytes per pixel). Additionally, a given image may itself use only a limited number of colors: for a line drawing two colors may be enough, for a human face shades of pink, yellow, purple, red and green are important, and for the sky shades of blue and gray. In these cases, using full-color (True Color) coding is redundant.

When limiting the number of colors, use a palette that provides a set of colors that are important for a given image. A palette can be thought of as a table of colors. The palette establishes the relationship between the color code and its components in the selected color model.

Computer video systems usually provide the ability for the programmer to set their own color palette. Each color shade is represented by a single number, and this number does not express the color of the pixel, but the color index (its number). The color itself is searched for by this number in the accompanying color palette attached to the file. These color palettes are called index palettes.

An index palette is a data table that stores information about which code corresponds to which color. This table is created and stored together with the graphics file.
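A minimal sketch of an indexed image: pixel values store only indices, and the colors themselves are looked up in a table stored with the file (the palette contents here are an arbitrary example).

```python
# Minimal sketch: an indexed image - pixels hold palette indices, the palette holds the colors.
palette = {
    0: (0, 0, 0),        # black
    1: (255, 255, 255),  # white
    64: (0, 128, 0),     # in this file, index 64 happens to mean green
}

pixels = [
    [1, 1, 64],
    [1, 64, 64],
]

# Reproducing the image means replacing every index with the color from *this file's* palette;
# using another file's palette for the same indices would change the colors (the "pink tree" effect).
rgb_rows = [[palette[i] for i in row] for row in pixels]
print(rgb_rows[0])   # [(255, 255, 255), (255, 255, 255), (0, 128, 0)]
```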

Different images may have different color palettes. For example, in one image green may be encoded with index 64, while in another that index may be assigned to pink. If an image is reproduced with a "foreign" color palette, a green tree on the screen may turn out to be pink.

Fixed palette

In cases where the color of an image is encoded in two bytes (High Color mode), 65 thousand colors can be displayed on the screen. Of course, these are not all possible colors, but only a 1/256 part of the continuous spectrum of colors available in True Color mode. In such an image, each two-byte code also expresses some color from the overall spectrum. But in this case it is impossible to attach an index palette to the file recording which code corresponds to which color, since such a table would have 65 thousand entries and its size would be hundreds of thousands of bytes. It hardly makes sense to attach to a file a table that may be larger than the file itself. In this case, the concept of a fixed palette is used. It does not need to be included with the file, since in any graphics file that uses 16-bit color encoding the same code always expresses the same color.

Safe palette

The term safe palette is used in Web graphics. Since data transfer speeds on the Internet still leave much to be desired, graphics with color coding higher than 8-bit are not used to design Web pages.

In this case, a problem arises because the creator of a Web page has no idea on what model of computer and under what programs his work will be viewed. He cannot be sure that his "green tree" will not turn red or orange on users' screens.

In this regard, the following decision was made. All the most popular programs for viewing Web pages (browsers) are pre-configured for a certain fixed palette. If a Web page developer uses only this palette, then he can be sure that users around the world will see the drawing correctly. This palette does not have 256 colors, as one might expect, but only 216. This is due to the fact that not all computers connected to the Internet are capable of reproducing 256 colors.

Such a palette, which strictly defines the indices for encoding 216 colors, is called a safe palette.
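A small sketch that generates this 216-color palette; it relies on the commonly used scheme of six evenly spaced levels (00, 33, 66, 99, CC, FF hexadecimal) per RGB component, which is an assumption not spelled out in the text above.

```python
# Minimal sketch: building the 216-color "safe" web palette from six levels per component.
levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]   # six evenly spaced values per channel

safe_palette = [(r, g, b) for r in levels for g in levels for b in levels]
print(len(safe_palette))                   # 6 * 6 * 6 = 216 colors
print(safe_palette[0], safe_palette[-1])   # (0, 0, 0) ... (255, 255, 255)
```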

Graphical interfaces and programming standards for computer graphics

Standardization in computer graphics is aimed at ensuring the mobility and portability of application programs, unifying interaction with graphics devices and making it possible to exchange graphic information between different subsystems. The use of standards reduces the development time of graphics systems and extends their life cycle. Today, a large number of standards differing in purpose and functionality are used in CG practice. They have varying degrees of formality, from de facto standards to international standards.

The year 1976 should be considered the starting point of work on the standardization of graphic tools. It was then that the first meeting to discuss graphic standards took place in the French town of Seillac. Since then, graphic standards have been dealt with by various national and international standards organizations.


Informational content is information that is useful to the reader.


This is the data for which the user opens a search engine. A man wants to buy a cat. He goes to Yandex and enters: “Cat care.” In the search results, he sees your website, where there are detailed articles about care. And then he finds out that you also have an online store with elite cat food.

Your articles are informational content. It indirectly increases your sales.

Quality of information content

It is better to talk about what information content should be in comparison with other types of materials.

  • Official - accompanying service information, navigation tips, etc.
  • Selling - direct advertisements about goods or services.
  • Entertaining - to improve your mood.

We all come home from work in the evenings. We are tired, we want to sleep. And many people simply don’t have the strength to read anything serious about nuclear physics. We want to watch an excerpt from KVN while we eat, smile and go to bed.

Therefore, almost all the content people consume is entertaining. It should make a person smile, laugh, and share the picture with friends.

Information material is serious content. It must tell a person something new. Give knowledge. It’s as if the reader was at a university lecture. However, even the most serious material can be presented in such a way that a person will read it with a smile and interest. And then he will definitely send the post to his friends.

Information content:

  • is beneficial for the reader;
  • helps satisfy needs;
  • helps achieve the author's goals;

And for this, the material must have the following characteristics:

  • Informativeness.
  • Literacy.
  • Topical focus.
  • Logical structure.
  • Relevance.

The material should answer a question that worries a person.

Why does your website or public page need information content?

Let's remember the definition of marketing. This is an increase in company profits by satisfying consumer needs.

Answering a user's question is the first step towards sales. This is the satisfaction of a person’s current need - the need for information, for an answer to his question. And people love those who give them what they want.

Types of information content

Dynamic content is information that changes; for example, user-generated content is dynamic.

Static content is material that does not change. It is published once and remains in that form.

If we talk about the type of published materials, they can be very different:

  • text;
  • video;
  • podcasts;
  • white papers.

The main thing is the message that is contained in them. This is the answer to the question that your article, your video, your audio recording gives.

How to create informational content

Create it yourself. That is the answer we give to any question about how to make good material.

Find out what knowledge your audience lacks. Focus on the theme of the portal.

Use the Udemy.com service. This is a learning platform for online courses. Choose a topic that is relevant to your content and see what is included in the course program - for example, the curriculum of a web design course.

See what is said about the subject. Write material - you can make a series of educational articles.

But you write about something you know a lot about, right? Talk about your personal experience and give examples from it. Describe how you yourself solved similar problems in practice.

The main criterion for quality information material is usefulness. Will the reader be able to put what they have read into practice immediately after reading?

Informational content will allow you to gain the trust of users. It will attract new visitors. And it will turn your site from just another Internet resource into an educational portal that attracts new customers.

1. Prepare a video report about the organization (the report must include video and audio materials and have a logical structure, a plot and credits). The report should reflect general information about the organization, interviews with employees and the specifics of the work of individual specialists; the duration of the material is no more than 10 minutes.

2. Development stages:

Creating a plot;

Storyboard (preferably);

Recording video material;

Recording audio material (interviews with employees);

Processing and editing;

Adding titles and credits.

ATTENTION!!!

All types of materials are collected only with the permission of the organization's management; they must not contain confidential information and must not violate the laws of the Russian Federation in any way.

Task 3. Complete the following work and describe the procedure for its implementation (based on the organization's profile):

Install and work with specialized application software;

Install and work with application software;

Diagnose equipment malfunctions using hardware and software;

Monitor operating parameters of equipment;

Eliminate minor malfunctions in equipment operation;

Perform equipment maintenance at the user level;

Prepare error reports;

Carry out commissioning of industry equipment;

Test industry equipment;

Install and configure system software.

Task 4. Create a standard form and calculation of an employee’s salary at the enterprise (where the internship takes place). Take any work position as an example.

1. The development must be an external program containing tabular data, graphical data, and control elements. The program should generate one type of report - “employee salary for six months.”

Task 5. Provide information on these issues based on the industry focus of the enterprise:

1. Operating principles of specialized equipment;

2. Operating modes of computer and peripheral devices;

3. Principles of constructing computer and peripheral equipment;

4. Equipment maintenance rules;

5. Equipment maintenance regulations;

6. Kinds and types of test checks;

7. Ranges of permissible operational characteristics of equipment;

8. Performance characteristics of industry-specific equipment;

9. Principles of switching industry-specific hardware systems;

10. Operating principles of system software.



Task 6. Create a presentation using MS PowerPoint (or any other presentation tool) presenting information on the following topics:

Topic 1. Static information content

Technologies for working with static information content;

Standards for graphic data presentation formats;

Standards for presentation formats for static information content;

Rules for constructing static information content;

Technical means for collecting, processing, storing and displaying static content.

Topic 2. Dynamic information content

Technologies for working with dynamic information content;

Standards for dynamic data presentation formats;

Standards for formats for presenting dynamic information content;

Information content processing software;

Rules for constructing dynamic information content;

Principles of linear and non-linear editing of dynamic content;

Rules for preparing dynamic information content for editing;

Technical means for collecting, processing, storing and displaying dynamic content.

CREATION AND EDITING OF VIDEO FILMS USING THE NON-LINEAR VIDEO EDITING PROGRAM PINNACLE STUDIO

The final qualifying work is completed in the form of a thesis

student of group 43 Alina Igorevna Tatarintseva

Basic professional educational program by specialty

09.02.05 Applied informatics (by industry)

Full-time form of education

Head: teacher I. V. Krapivina

Reviewer:

Job protected

________________

with a rating of _______

Chairman of the Commission

____________________

Valuiki 2017

Introduction
Chapter 1. Theoretical foundations of working with dynamic information content
1.1. Basics of video editing
1.2. Methods for processing video information
1.3. Programs for editing and processing video information
Chapter 2.
2.1. Technical specifications
2.2. Practical development of a video using the Pinnacle Studio software
Conclusion
Bibliography
Appendices


Introduction

On the Internet today, video clips make up the majority of all content. Such popular video services as Youtube, Rutube, and many others have popularized the creation of video clips.

Creating a commercial using modern technologies is a fun and fairly simple process. By using specialized programs you can create absolutely anything - from a five-minute video telling about a new product released on the market, to a real full-length film presentation of a car.
To produce high-quality advertising video, you need to understand how digital video is shot and the technological process as a whole.
Another driver of progress in this area has been advanced software. Every year more and more advanced programs for creating video clips appear. Most of them are quite complex and cumbersome programs. But there are also simpler programs that are not difficult to understand.

One such professional program for creating serious commercials and even entire films is Pinnacle Studio. Pinnacle Studio is a professional video editing program that has all the modern capabilities and tools for non-linear video editing: a convenient customizable interface, functional tools for editing audio and video tracks, the ability to apply a variety of effects and filters, technologies that increase video processing speed, and many other functions. Pinnacle Studio is the undisputed leader among video editing programs.
Currently, multimedia technologies are widely used in education, in particular for advertising and popularization of educational services provided by educational institutions. Promotional videos have become popular lately.

Today, with rapidly growing computer performance and ever-faster Internet access, you can watch and create videos on almost any computer with a hard drive of sufficient capacity. On a more or less modern computer with the appropriate hardware, you can build a home video studio with which you can record video from TV programs, a camcorder or a VCR, process it and publish it on the Internet. As a result, many programs for working with video have appeared on the software market, allowing you to create full-fledged video clips.

The relevance of the final qualifying work is due to insufficient knowledge of the theoretical and methodological foundations of producing commercials using professional video editing programs.

Research problem: the lack of a career guidance video for the specialty "Teaching in primary grades".

Purpose of the study: creating and editing videos using the non-linear video editing program Pinnacle Studio.

Subject of study: a set of theoretical and practical aspects of creating a career guidance video using a computer.
Object of study: the professional video editing program Pinnacle Studio.

Research hypothesis: A video on the specialty “Teaching in primary school” will be informative and meaningful if:

– existing information resources on creating videos are researched and systematized;

– requirements for the video are drawn up;

– the structure of the video is developed;

– the video is created using modern software.

To achieve the goal, taking into account the identified problem and the formed hypothesis, the following research objectives were identified:

– research and systematize available information resources for creating videos;

– create requirements for a video about the specialty "Teaching in primary grades";

– develop the structure of a video about the educational institution;

– create a video about the specialty “Teaching in primary grades” in an educational institution based on modern software.

Research methods:

Theoretical analysis;

Empirical method;

Analytical method;

Design method.

The theoretical significance of the study is that modern technologies for creating videos were analyzed and summarized.

The practical significance of the study lies in developing and creating a video about the specialty "Teaching in primary grades" that helps popularize the specialty in educational institutions.

The work consists of an introduction, two chapters, a conclusion, and a list of references.

Chapter 1. Theoretical foundations of working with dynamic information content

Video Editing Basics

Video (from the Latin video - I look, I see) is a range of technologies for recording, processing, transmitting, storing and playing back visual or audiovisual material, as well as a common name for the video material itself, a television signal or a film, including material recorded on a physical medium (video cassette, video disc, etc.).

Video information, strictly speaking, is an image recorded on magnetic tape, film, photographic material or an optical disc, from which it can be reproduced.

Basic video signal parameters:

Number (frequency) of frames per second (the number of still images that replace each other when showing 1 second of video material and creating the effect of moving objects on the screen);

Interlace scanning;

Resolution;

Screen aspect ratio;

Number of colors and color resolution;

Bitrate or width of the video stream (for digital video).

Now, when the scope of use of personal computers is expanding, the idea arises to create a home video studio based on a computer. However, when working with a digital video signal, there is a need to process and store very large amounts of information, for example, one minute of a digital video signal with SIF resolution (comparable to VHS) and truecolor color rendering (millions of colors) will take:

(288 x 358) pixels x 24 bits x 25 fps x 60 s = 442 MB,

that is, on the media used in modern PCs, such as a CD (CD-ROM, about 650 MB) or a hard drive (several gigabytes), it will not be possible to save a full-length video recorded in this format. With MPEG compression, the amount of video data can be reduced many times over without noticeable image degradation.
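The sketch below reproduces this arithmetic with the figures given in the text (a 288 x 358 SIF-like frame, 24-bit color, 25 frames per second) and shows how little of a 650 MB CD one minute of uncompressed video would leave free.

```python
# Minimal sketch: one minute of uncompressed digital video, using the figures from the text.
width, height = 358, 288          # frame size in pixels (as given above)
bits_per_pixel = 24               # True Color
fps, seconds = 25, 60

total_bytes = width * height * bits_per_pixel // 8 * fps * seconds
print(f"{total_bytes / 2**20:.0f} MB per minute")    # ~442 MB

cd_capacity_mb = 650
print(f"a {cd_capacity_mb} MB CD holds about {cd_capacity_mb / (total_bytes / 2**20):.1f} minutes")
```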

MPEG is an acronym for Moving Picture Experts Group. This expert group works under the joint leadership of two organizations - ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission). The official name of the group is ISO/IEC JTC1 SC29 WG11. Its task is to develop uniform standards for coding audio and video signals. MPEG standards are used in CD-i and CD-Video technologies, are part of the DVD standard, and are actively used in digital broadcasting, cable and satellite TV, Internet radio, multimedia computer products, communications over ISDN channels and many other electronic information systems. The acronym MPEG is often used to refer to the standards developed by this group. The following are currently known:

MPEG-1 is designed for recording synchronized video (usually in SIF format, 288 × 358) and audio on CD-ROM, assuming a maximum reading speed of about 1.5 Mbit/s.

The quality parameters of video data processed by MPEG-1 are in many ways similar to conventional VHS video, so this format is used primarily in areas where it is inconvenient or impractical to use standard analog video media.
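As a rough, purely illustrative estimate (ignoring file-system and container overhead), the same kind of arithmetic shows approximately how much MPEG-1 material a 650 MB CD-ROM can hold at the 1.5 Mbit/s reading speed mentioned above:

# Rough estimate of MPEG-1 playing time on a 650 MB CD-ROM at ~1.5 Mbit/s
# (combined video + audio stream); overhead is deliberately ignored.
stream_rate_bit_s = 1.5 * 1_000_000
cd_capacity_bits = 650 * 1024 * 1024 * 8

seconds = cd_capacity_bits / stream_rate_bit_s
print(f"Approximate playing time: {seconds / 60:.0f} minutes")

The result is on the order of an hour of material per disc, which is why MPEG-1 was practical for CD-based video where uncompressed video was not.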

MPEG-2 is designed for processing video images comparable in quality to television with a data transmission system capacity ranging from 3 to 15 Mbit/s; professionals also use large streams up to 50 Mbit/s. Many television channels are switching to technologies based on MPEG-2; a signal compressed in accordance with this standard is broadcast via television satellites and is used to archive large volumes of video material.

MPEG-3 was intended for use in high-definition television (HDTV) systems with data rates of 20–40 Mbit/s, but it later became part of the MPEG-2 standard and is no longer mentioned separately. Incidentally, the MP3 format, which is sometimes confused with MPEG-3, is intended only for audio compression; the full name of MP3 is MPEG Audio Layer III.

MPEG-4 defines the principles for working with the digital representation of media data for three areas: interactive multimedia (including products distributed on optical disks and via the Internet), graphics applications (synthetic content) and digital television.

History of video editing

The history of digital non-linear video editing goes back more than 20 years. The earliest systems could process video files at 160 × 200 resolution with 150:1 compression and supported only one channel of 22 kHz audio. The available disk capacity allowed only a short video to be assembled in rough form, using straight cuts alone.

1989 was marked by the release of the first version of Avid Media Composer, and non-linear video editing systems acquired a modern look, with an interface similar to today's: a timeline, two monitors, and a bin with source material.

Video editing systems were very expensive and inaccessible to many users. The situation changed in 1996 thanks to a German company that introduced the new Fast 601 system (later known as Avid Liquid). It proved comparatively inexpensive and worked according to new video editing rules. It supported different formats, used MPEG-2 compression and, most importantly, for the first time allowed the project “masters” to be output to various formats (analog, digital, DVD). Since then, a modern video editing system has been expected to import, export and transcode video and audio in the formats used on the Internet and in home video. Video editing became accessible to everyone.

In 2008, editing systems for stereoscopic (3D) films appeared. Stereoscopic cinema began to captivate viewers and became an integral part of the film industry, and video editors began studying how to convey a sense of space on the screen.

Processing of video information includes a number of stages: digitization, creation of videos or video clips, and their subsequent playback.

Digitizing a video, unlike playing it back, is not performed in real time; nevertheless, here too, much depends on the technologies used and on the software that supports them.

In the simplest implementation of the video digitization procedure, a video camera connected to a computer is used. The camcorder is switched to playback mode. Digitization is performed by one of the video capture programs, for example Pro Multimedia, which creates an AVI file on the hard drive; a suitable name and the expected file size are specified for this file. Launching the program at the same moment the camcorder starts playback begins the process of digitizing the video data. To reduce the size of the video file, the same program can convert it to MPEG format, which shrinks it considerably (for example, from 4 GB to 300 MB). The resulting video can then be played back with a standard Windows application such as Media Player.
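The conversion of a captured AVI file to MPEG described above can be reproduced today with freely available tools. The following Python sketch is only an illustration (it is not the Pro Multimedia program mentioned in the text) and assumes that the ffmpeg utility is installed; the file names and bitrates are hypothetical.

# Minimal sketch: re-encode a digitized AVI capture to MPEG-1 to reduce its
# size, by calling the ffmpeg command-line tool from Python.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "capture.avi",     # digitized source file (hypothetical name)
        "-c:v", "mpeg1video",    # MPEG-1 video encoder
        "-b:v", "1500k",         # target video bitrate
        "-c:a", "mp2",           # MPEG-1 Audio Layer II
        "-b:a", "192k",
        "capture.mpg",           # output file (hypothetical name)
    ],
    check=True,
)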

In more complex cases, the video clip is edited in accordance with a developed script. This involves working with individual frames or sequences of frames. Today, either linear or non-linear editing can be used.

In linear editing of video information, the source material resides on a video cassette. To reach a specific place on the tape, it must be rewound repeatedly in search of the required frame. Special editing equipment is used for this purpose.

Currently, when creating electronic publications, technologies for performing video editing and editing digitized video material inside a computer have become widespread. This technology was called non-linear editing, since it provided operators with direct access to the necessary frames or video fragments recorded on the computer’s hard drive. This made it possible to avoid the tedious process of constantly (linearly) rewinding the video tape back and forth when viewing and searching for these fragments.

In non-linear editing, all material is digitized in advance and stored on disk (a hard drive), which provides instant random access to any required frame.

A standard digital system, similar to an analog editing complex, is built on a single-stream architecture. This means that only one copy of the original video (AVI file) is used in the calculations.

In the case of more complex operations on video material, it becomes necessary to create and use a second copy of the digital video (or part of it). For example, to create a mixing transition or effect between two clips, the computer's RAM must simultaneously hold frames of both the ending clip and the beginning clip, loading them sequentially from the hard drive, decoding (decompressing) them and computing the new frames of the resulting clip. The data is then compressed again and written to disk. This process is called rendering.
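To illustrate the rendering step just described, the following Python sketch (using NumPy, with hypothetical clip data) computes the frames of a simple dissolve between an ending clip and a beginning clip; in a real system the frames would be decoded from and re-encoded to compressed video.

# Illustrative rendering of a mixing (dissolve) transition between two clips.
import numpy as np

def render_dissolve(clip_a_frames, clip_b_frames):
    """Blend two equally long frame sequences into a cross-dissolve."""
    n = len(clip_a_frames)
    result = []
    for i, (a, b) in enumerate(zip(clip_a_frames, clip_b_frames)):
        alpha = i / (n - 1) if n > 1 else 1.0   # 0.0 = only clip A, 1.0 = only clip B
        frame = (1.0 - alpha) * a.astype(np.float32) + alpha * b.astype(np.float32)
        result.append(frame.astype(np.uint8))
    return result

# Hypothetical 25-frame transition between two 288 x 358 true-color clips.
clip_a = [np.full((288, 358, 3), 200, dtype=np.uint8) for _ in range(25)]
clip_b = [np.zeros((288, 358, 3), dtype=np.uint8) for _ in range(25)]
transition = render_dissolve(clip_a, clip_b)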

Real-time non-linear editing systems use a two-stream video compression and decompression board and an additional digital effects board. A chipset for performing certain mixing effects in real time can also be installed directly on the compression board (for example, Pinnacle Systems ReelTime performs more than 130 two-dimensional effects in real time). In addition, an extra board can be used to expand the range of hardware-accelerated effects (for example, Pinnacle Systems ReelTime NITRO, which combines ReelTime with Genie).

Operating with two streams, such digital systems can also perform other functions inherent in classic analog editing and mixing systems, such as titling and various types of overlay effects (keying, transparency-based overlays, etc.).

Processing video information demands high performance from the computing hardware used. In practice, such calculations require billions of specialized operations on image pixels, so the speed of their execution depends significantly on processor speed.

Standard PCs are general-purpose machines and therefore turn out to be relatively slow at this task. For example, a 150 MHz Pentium can perform only about 50 million operations per second, and these are shared among various tasks. As a result, calculating even relatively simple effects and transitions takes tens or hundreds of times longer than their actual playback time. Various hardware and software tools are therefore used to accelerate video image processing; for example, modern non-linear editing boards (miroVideo DC30plus for the PC or VLab Motion for the Amiga) include chips for compressing and decompressing video information. These chips speed up rendering, but they do not make it real-time.
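A back-of-the-envelope estimate makes this bottleneck concrete. The “operations per pixel” figure in the sketch below is an assumption chosen only for illustration; real effects, together with decompression and recompression of the streams, raise the cost further.

# Rough estimate of why software-only effect rendering was far from real time
# on mid-1990s hardware. The per-pixel cost is an assumed, illustrative value.
width, height, fps = 358, 288, 25
ops_per_pixel = 200                 # assumed cost of a non-trivial effect
cpu_ops_per_second = 50_000_000     # ~50 million operations/s (figure from the text)

ops_needed = width * height * fps * ops_per_pixel
print(f"Required: {ops_needed / 1e6:.0f} million ops/s "
      f"(about {ops_needed / cpu_ops_per_second:.0f}x the CPU's budget)")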

Digitized video fragments are compressed into MPEG format before being recorded on disk. Because this compression is lossy, saving the material may result in some loss of information.

If, after editing is completed, it is necessary to record a finished video fragment on a videotape, then the above-mentioned video input/output card is required. Today there is a wide variety of such cards.

Devices for working with video signals on IBM PC computers include devices for video capture and playback, frame grabbers, TV tuners, VGA-to-TV signal converters and MPEG players. It should be noted that their functionality extends far beyond the needs of electronic publications.

Video information can be played back by programs such as Media Player, together with sound. For editing, as a rule, programs are used that provide integrated processing of both video and audio data. Such software includes Adobe Premiere, Ulead MediaStudio Pro and others.