What is "structured data" and why you should use it. Concept of structured data

If the pages on your site are marked up in a special way, Google Search can display rich descriptions and other useful information for them. For example, a rich result for a restaurant website might include a summary of reviews and pricing information. The data on a page is structured using the schema.org vocabulary in formats such as microdata, RDFa, and microformats. Search Console also provides the Data Highlighter tool for this purpose.

In Search Console, on the Structured Data page, you can view the relevant information about your site collected by Googlebot. It also includes information about any markup errors that prevent rich snippets or other useful information from being displayed in search results.

The Structured Data page lists all types of structured data found on your site, along with information about any errors in them.

Only top-level items found on the pages are listed. For example, if a page contains a schema.org/Event item that includes schema.org/Place data, only the Event item is counted.

If the list is missing structured data that you added to a page using microformats, microdata, or RDFa, use the Structured Data Testing Tool. It lets you check whether Google can access the information on a page and recognize the markup.

Diagnosing and fixing markup errors

1. Find out which types of structured data have errors

Statistics for each data type are shown in the table below the graph. For clarity, all types are sorted by the number of errors. Note that the word "item" in this table means one marked-up HTML element in a page's source code. So if, for example, the site has a "Movies" data type with errors in 3,000 items and a "Places" type with errors in 42 items, start by fixing the Movies errors.

2. Determine the type of errors

Click a structured data type in the table to see a detailed list of all problematic items of that type. A list of up to 10,000 URLs will appear, showing the number of errors and their type for each page. Click a URL to see the detected markup, including item types and properties.

There are two types of errors in structured data:

  • Missing fields
    For example, the rich snippet of an event web page indicates the location and the performer but does not indicate the date of the event (see the sketch after this list).
  • Missing minimum or maximum rating
    For example, a product is rated on a five-point scale, but the bestRating (5) or worstRating (1) property is not marked up.
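To make the first error type concrete, here is a minimal, hypothetical microdata sketch of an event page that would trigger a "missing fields" warning; the event and venue names are invented, and the commented-out line shows the missing startDate property that would fix it:

    <div itemscope itemtype="https://schema.org/Event">
      <span itemprop="name">Jazz Evening</span>
      <span itemprop="location" itemscope itemtype="https://schema.org/Place">
        <span itemprop="name">City Concert Hall</span>
      </span>
      <!-- missing: <time itemprop="startDate" datetime="2017-05-20T19:00">20 May, 19:00</time> -->
    </div>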

3. Correct the markup on the site

Start with the examples provided in the Structured Data section. What corrective measures to take depends entirely on how the markup is implemented on your site. For example, if it is generated by a content management system (CMS), you will likely need to adjust its settings.

Informatics 2017

Teacher: Makhno K.V.

Lesson topic: “Files and their processing. Structured data"

Objective: to introduce students to the concept of an array and to consider techniques for working with arrays.

Lesson objectives:

    Character-building – development of cognitive interest and logical thinking.

    Educational – to introduce the concept of an array, to learn and consolidate basic skills in working with arrays.

    Developmental – development of logical thinking, memory, attentiveness, and broadening of horizons.

Lesson type: learning new material.

Form: lecture.

Technology: problem-based research.

Equipment: posters showing array syntax in all three programming languages used, an interactive whiteboard, a projector.

Lesson Plan

    Organizational stage.

    Studying a new topic.

    Generalization and systematization of knowledge; consolidation of what has been learned.

    Summing up, homework.

Lesson progress

Today in this lesson we must build a complete picture of the data types of the Pascal language. Prepare to take in the information carefully. During the lecture, a presentation will be shown that highlights the important points of the topic. You need to write them down in your notebook.

The functioning of any program is associated with data processing. The data intended for processing is called input data and is usually specified at the beginning of the program. During execution, the program may request missing input data.

During program execution, input data is converted into results.

Every data element used in a program is a constant or a variable.

Structured data types define an ordered set of scalar variables and are characterized by the type of their components.

Structured data types, unlike simple ones, define sets of complex values under one common name. We can say that structured types determine a way of forming new types from existing ones.

There are several structuring methods. Based on the method of organization and the type of components, the following varieties of complex data types are distinguished: the regular type (arrays); the combined type (records); the file type (files); the set type (sets); the string type (strings); and, in Turbo Pascal versions 6.0 and later, the object type (objects).

Unlike simple data types, data of a structured type is characterized by the multiplicity of the elements that form the type: a variable or constant of a structured type always has several components. Each component, in turn, can itself belong to a structured type, i.e. nesting of types is possible.

All structured data types require separate consideration and will be studied in detail later; today we will only define them.

Strings. A string is a sequence of characters from the personal computer's code table. The number of characters in a string can vary from 0 to 255.
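A minimal sketch of declaring and using a string in Turbo Pascal (the variable name and length are chosen for the example):

    var
      name: string[20];   { a string of at most 20 characters }
    begin
      name := 'Pascal';
      writeln(name[1]);   { each character is accessible by index: prints P }
    end.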

Arrays. Simple types define sets of indivisible values. In contrast, structured types define sets of complex values, each of which forms a collection of several values of another type. Among structured types, the regular type (arrays) is distinguished. Arrays are called the regular type because they combine elements of the same kind, ordered by indices that determine the position of each element in the array.
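A minimal sketch of a regular type (the array name and bounds are chosen for the example):

    var
      temp: array [1..7] of real;  { seven readings of the same type }
      i: integer;
    begin
      for i := 1 to 7 do
        temp[i] := 0.0;            { the index determines each element's position }
    end.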

Sets. A set is a structured data type: a collection of objects interconnected by some characteristic or group of characteristics, which can be considered as a single whole. Each object in a set is called an element of the set. All elements of a set must belong to one of the scalar types other than real.
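A minimal sketch of a set type (the type and variable names are chosen for the example):

    type
      TDigits = set of 0..9;
    var
      odds: TDigits;
    begin
      odds := [1, 3, 5, 7, 9];
      if 3 in odds then
        writeln('3 is odd');   { the in operator tests membership }
    end.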

Records. To record a combination of objects of different types, Pascal uses a combined data type: the record. For example, a product in a warehouse is described by the following quantities: name, quantity, price, availability of a quality certificate, etc. In this example the name is a string value, the quantity is an integer, the price is real, and the presence of a certificate is boolean.

A record is the most general and flexible structured data type, since it can be formed from heterogeneous components and it explicitly expresses the relationship between the data elements that characterize a real object.
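A sketch of a record for the warehouse product described above (the type and field names are chosen for the example):

    type
      TProduct = record
        name: string[40];      { string value }
        quantity: integer;     { integer value }
        price: real;           { real value }
        certified: boolean;    { presence of a quality certificate }
      end;
    var
      goods: TProduct;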

Files. It is convenient to store large sets of data in external memory as a sequence of values. For these purposes Pascal provides special objects: files. A file is a collection of data recorded in external memory under a specific name.
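A minimal sketch of declaring a file and writing to it (the external file name is chosen for the example):

    var
      f: file of integer;
      n: integer;
    begin
      assign(f, 'data.dat');   { bind the file variable to an external file }
      rewrite(f);              { create the file and open it for writing }
      for n := 1 to 10 do
        write(f, n);
      close(f);
    end.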

Let us consider problems on declaring variables and constants of scalar types.

When starting to solve problems on declaring scalar data, remember that:

· each program variable must be declared;

· variable declarations are placed in a section that begins with the word var; constant declarations are placed in a section that begins with the word const; variables of user-defined types (enumerated and interval) are declared according to a special scheme;

· a variable name can use letters of the Latin alphabet and digits (the first character must be a letter);

· a constant declaration looks like this: ConstantName = ConstantValue;

Example:

    const
      min = 1;      { minimum value }
      max = 54;     { maximum value }

    { a variable declaration looks like this: VariableName: type; }
    var
      k1: integer;  { number of notebooks }
      k2: byte;     { number of pencils }
      c1: real;     { price of one notebook }

    { interval-type variable declarations are placed in two sections, type and var: }
    type
      days = 1..31;          { days of the month }
    var
      workday: days;         { working day }
      vihodday: days;        { day off }

    { enumerated-type variable declarations are also placed in two sections, type and var: }
    type
      days = (monday, tuesday, wednesday, thursday, friday, saturday, sunday);  { days }
    var
      day: days;                     { day of the week }
      season: (may, april, june);    { vacation months }

Homework:

Prepare a short report about any application program.

Structured types are characterized by the multiplicity of the elements that form the type, i.e. they have several components. Each component, in turn, can belong to a structured type, i.e. nesting of types is allowed.

Arrays are a formal union of several objects of the same type (numbers, symbols, strings, etc.) considered as a single whole. All components of an array are data of one type.

The general form of an array definition:

    type A = array [index type] of [component type];

For example (the index range below is assumed; the bounds were lost in the original):

    type M1 = array [1..100] of real;

Strings. A string is an array of characters whose length can vary: it is treated as a chain of characters of arbitrary length, up to a maximum of 255 characters. Each character in a string has its own index (number).

Records. A record is a data structure consisting of a fixed number of components called record fields. Unlike the components of an array, the fields of a record can be of different types. Records allow you to combine values of different types.

    { the enclosing declaration was lost; a record with these fields might look like this: }
    type
      TDate = record
        Month: (Jan, Feb, Mar, Apr, May, Jun, July, Aug, Sept, Oct, Nov, Dec);
        Year: 2000..2050;
      end;

Sets are collections of similar objects that are logically connected to each other. The number of elements in a set can vary from 0 to 256. It is this variability in the number of their elements that distinguishes sets from arrays and records.

    type Digits = set of 1..5;

A file is a named area of external memory. A file contains components of one type, which can be any type other than a file (i.e., you cannot create a "file of files"). The length of a file is not declared and is limited only by the capacity of external memory devices.

    var F: file of Integer;

We will become more familiar with structured types as we further study the language.

      1. Pointer (reference type)

A pointer contains the address of a memory byte that holds a data value of a certain type. This type is also called a reference type. The declaration uses the ^ character and a type identifier, for example:

    type P = ^integer;

Pointers are a flexible means of controlling dynamic memory and make it possible to process large data arrays.
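A minimal sketch of using a pointer with dynamic memory (the type and variable names are chosen for the example):

    type
      PInt = ^integer;
    var
      p: PInt;
    begin
      new(p);        { allocate dynamic memory }
      p^ := 42;      { access the value through the pointer }
      writeln(p^);
      dispose(p);    { release the memory }
    end.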

    1. Constants

A constant is a quantity whose value does not change during program execution.

    Numerical constants are used to write numbers. The following types are distinguished:

Integers are written with a + or - sign, or without a sign, according to the usual arithmetic rules: -10 +5 5

Real numbers can be written in one of two forms:

ordinary notation: 2.5 -3.14 2. — note that the integer part is separated from the fractional part by a dot;

exponential form: in this notation, a real number is represented as m·10^p, where m is the mantissa, 0.1 ≤ |m| < 1, and p is the exponent (order), an integer constant. Indeed, any real number can be represented in exponential form:

    153.5  = 0.1535·10^3
    99.005 = 0.99005·10^2

All IBM-compatible computers store real numbers as a combination of a mantissa and an exponent, which makes it possible to simplify operations on them using special arithmetic that handles the mantissa and the exponent separately. When writing numbers in exponential form in a program, the Latin letter E or e is used instead of "multiply by 10 to the power":

    153.5  = 0.1535·10^3  = 0.1535E3   or 1.535E02
    99.005 = 0.99005·10^2 = 0.99005E+2 or 9.9005e+01

Without special measures, a Pascal program will display real numbers on the screen and printer in exactly this form. In addition, this form is convenient for writing very small and very large numbers.

Since the amount of memory allocated for the mantissa and the exponent is limited, real numbers are always represented in computer memory with some error. For example, the simple fraction 2/3 gives 0.666666... in decimal representation, and no matter how much memory is allocated to store the number, it is impossible to store all of its digits in the fractional part. Taking possible errors into account when working with real numbers is one of the typical programming problems.

Hexadecimal numbers consist of hexadecimal digits preceded by a $ sign. The range of hexadecimal numbers is from $00000000 to $FFFFFFFF.

In addition to numerical constants, there are other types of constants:

    Logical (boolean) constants.

They serve to check the truth or falsity of certain conditions in the program and can take only one of two values: the reserved word true denotes truth, and false denotes falsehood;

    Character constants.

A character constant can take the value of any printable character and is written as that character enclosed in apostrophes ("single quotes"), for example 'A' or ' '; in the latter case the value of the character constant is the space character. If you need to write the apostrophe character itself as a character constant, it is doubled inside the outer apostrophes: ''''

Character constants also include constants of the form #X, where X is a numeric value from 0 to 255 inclusive, representing the decimal ASCII code of a character. Tables of the ASCII codes used by the DOS and Windows operating systems are given in Appendix 1. For example, #65 corresponds to the Latin letter 'A'.

    String constants.

These are any sequences of characters enclosed in apostrophes. As a rule, string constants are used to write prompts for data input, to display diagnostic messages, etc.:

    'Enter X value:'

If it is necessary to write the apostrophe character itself in a string constant, this is done in the same way as for character constants.

Constants in Turbo Pascal can be named. Unnamed constants are used, for example, when displaying the message text in the previous example. Named constants are declared in the program's declaration section by a construct of the following form:

    const
      Name1 = Value1;
      Name2 = Value2;
      ...
      NameN = ValueN;

Here the const keyword marks the beginning of the named-constant declaration section. Clearly, it is often more convenient to refer to a constant by name than to write out its numeric or string value each time. An example of a constants section:

    const
      e = 2.7182818285;
      lang = 'Turbo Pascal 7.1';

This declares a numeric constant e whose value is the base of the natural logarithm, and a string constant named lang containing the string 'Turbo Pascal 7.1'.

Each name given by the programmer must be unique within one program. If we include this section in our program, we will no longer be able to create other objects named e and lang in it.

Every enterprise has many different databases that are replenished from structured data sources. Structured data is data entered into databases in a specific form, for example Excel tables with strictly defined fields. In English-language literature, the collection of an enterprise's databases is called an Enterprise Data Warehouse (EDW), literally a "data warehouse". I have not yet come across an equivalent of this term in Russian-language literature, so let us call it an "enterprise data warehouse" and, for elegance, use the English abbreviation EDW.

Structured data sources are applications that capture data from various transactions: for example, CDRs in an operator's network, network failure notifications (trouble tickets), financial transactions on bank accounts, data from an ERP (Enterprise Resource Planning) system, data from application programs, etc.

Business Intelligence (BI) is the data-processing component: the various applications, tools, and utilities that allow you to analyze the data collected in the EDW and make decisions based on it. These include systems for generating operational reports, ad hoc queries, OLAP (On-Line Analytical Processing) applications, so-called "discovery analytics", predictive analytics, and data visualization systems. Simply put, a manager must see the business process in an easy-to-read form, preferably graphic and animated, in order to make optimal decisions quickly. The first law of business: the right decision is a decision made on time. If the right decision for yesterday is only made today, it is not necessarily still right.

But what if the data sources are unstructured and heterogeneous, obtained from different places? How will analytical systems work with them? Try selecting several cells of data in an Excel spreadsheet with the mouse and pasting them into a simple text editor (for example, Notepad), and you will see what "unstructured data" is. Examples of unstructured data: email, information from social networks, XML data, video, audio and image files, GPS data, satellite images, sensor data, web logs, mobile subscriber movement data during handover, RFID tags, PDF documents...

To store such information in data processing centers (data centers), the Hadoop Distributed File System (HDFS) is used. HDFS can store all types of data: structured, unstructured, and semi-structured.

Big Data applications for business intelligence are a component for processing not only structured data but unstructured data as well. They include applications, tools, and utilities that help analyze large volumes of data and make decisions based on the data in Hadoop and other non-relational storage systems. They do not include traditional BI analytics applications, nor extension tools for Hadoop itself.

In addition, an important component of Hadoop is the MapReduce system, which organizes data processing in Hadoop, including in geographically distributed data centers. A MapReduce job consists of two main phases: Map, which distributes the computation across the nodes where the data blocks are stored and produces intermediate key-value results in parallel, and Reduce, which collects and aggregates those intermediate results into the final answer. MapReduce is notable for processing data where it is stored (i.e., in HDFS), instead of moving it somewhere for processing and then writing the results somewhere else, as is usually done in a conventional EDW. Fault tolerance is provided together with HDFS, which keeps duplicate copies of each data block on different nodes: if one storage node fails, the system always knows where to find a copy of the lost data.

Although MapReduce processes data an order of magnitude faster than traditional methods that extract data before processing it, the volumes involved are incomparably larger (that is why it is Big Data), so MapReduce usually runs as parallel batch processing of data streams. Since Hadoop 2.0, resource management has been split into a separate component (called YARN), so MapReduce is no longer a bottleneck in Big Data.

The transition to Big Data systems does not mean that traditional EDW should be scrapped. Instead, they can be used together to take advantage of both and extract new business value through their synergies.

What is all this for?

There is a widespread opinion among consumers of IT and telecom equipment that all these spectacular foreign words and letter combinations - Cloud Computing, Big Data, and various IMSes with softswitches - have been invented by cunning equipment suppliers to maintain their margins: to sell, sell, and sell new developments, because otherwise the sales plan will not be fulfilled, Bill-Jobs-Chambers will say "ah-ah-ah", and the quarterly bonus is gone.

Therefore, let us talk about why all this is actually needed, and about the trends.

Probably many have not yet forgotten the frightening H1N1 flu virus. There were fears that it could be even deadlier than the Spanish flu of 1918, whose victims numbered in the tens of millions. Although doctors were supposed to report rising case numbers regularly (and they did report them), the analysis of this information was delayed by 1-2 weeks. And people themselves usually sought medical help 3-5 days after the onset of the disease. That is, measures were taken, by and large, after the fact.

The dependence of the value of information on time usually takes the form of a U-shaped curve.

Information is most valuable either immediately after it is received (for making operational decisions) or after some time has passed (for trend analysis).

Google, which stores many years of search history, decided to analyze the 50 million most popular queries from the areas of previous influenza epidemics and compare them with medical statistics from those epidemics. A system was developed to establish the correlation between the frequency of certain queries and the spread of the flu, and 40-50 typical queries were found. The correlation coefficient reached 97%.

In 2009, serious consequences of the H1N1 epidemic were avoided precisely because the data was obtained immediately, not 1-2 weeks later, when the clinics in the affected areas would already have been overcrowded. This was perhaps the very first use of Big Data technology, although at the time it was not yet called that.

It is well known that the price of an air ticket is very unpredictable and depends on many factors. I recently found myself in a situation where I could buy the same economy-class ticket from the same airline to the same city in two ways: a ticket for a flight leaving that evening, in three hours, cost 12 thousand rubles, while one for early the next morning cost 1,500 rubles. I repeat: the airline was the same, and even the aircraft on both flights were of the same type. Typically a ticket becomes more expensive as the departure time approaches. There are many other factors that influence the price: a booking agent once explained to me the essence of this host of fares, but I still understood nothing. There are also cases when the price of a ticket falls instead: if many seats remain unsold as the departure date approaches, during promotions, and so on.

Once upon a time Oren Etzioni, director of an artificial intelligence program at the University of Washington, was getting ready to fly to his brother's wedding. Since weddings are usually planned in advance, he bought his ticket immediately, long before departure. The ticket was indeed inexpensive, much cheaper than usual when he bought tickets for urgent business trips. During the flight he boasted to his neighbor about how cheaply he had managed to buy it. It turned out that the neighbor's ticket was even cheaper, and he had bought it later. Out of frustration, Etzioni conducted an impromptu survey right in the cabin about ticket prices and purchase dates. Most passengers had paid less than he had, and almost all had bought their tickets later. It was very strange, and Etzioni, as a professional, decided to tackle the problem.

Having obtained a sample of 12 thousand transactions from the website of one travel agency, he created a model for predicting air ticket prices. The system analyzed only prices and dates, without taking any other factors into account: only "what" and "how much", without analyzing "why". The output was the predicted probability of a decrease or increase in the price of a flight, based on the history of price changes on other flights. As a result, the scientist founded a small consulting firm called Farecast (a play on words: fare and forecast) to predict air ticket prices based on a large database of flight bookings. It did not give 100% accuracy (as stated in the user agreement), but with a reasonable degree of probability it could answer the question of whether to buy a ticket right now or wait. To further protect itself against lawsuits, the system also provided a confidence score, something like: "There is an 83.65% chance that the ticket price will be lower in three days."

Farecast was later bought by Microsoft, which integrated its model into the Bing search engine. (And, as most often happens with Microsoft, nothing more has been heard of this functionality: few people use Bing, and those who do know nothing about this feature.)

These two examples show how Big Data analytics can bring both social and economic benefits.

What exactly is Big Data?

There is no strict definition of "big data". As technologies emerged for working with volumes of data so large that the memory of a single computer was no longer enough and the data had to be stored elsewhere (MapReduce, Apache Hadoop), it became possible to operate on much larger volumes of data than before. In this case, the data may be unstructured.

This makes it possible to abandon the restrictions of so-called "representative samples", from which broader conclusions are drawn. The analysis of causality is replaced by the analysis of simple correlations: what is analyzed is not "why" but "what" and "how much". This fundamentally changes established approaches to making decisions and analyzing a situation.

On stock markets, tens of billions of transactions occur every day, and about two-thirds of the trades are executed by computer algorithms based on mathematical models that use huge amounts of data.

Back in 2000, digitized information accounted for only 25% of all the information in the world. Currently, the amount of information stored in the world is on the order of zettabytes, of which non-digital information accounts for less than 2%.

According to historians, from 1453 to 1503 (over 50 years) about 8 million books were printed - more than all the handwritten books produced by scribes since the Nativity of Christ. In other words, it took 50 years for the stock of information to roughly double. Today this happens every three days.

To understand the value of "big data" and how it works, let us give a simple example. Before the invention of photography, drawing a person's portrait took from several hours to several days or even weeks. The artist made a certain number of strokes, which (to achieve a "portrait likeness") can be measured in the hundreds and thousands. What mattered was HOW to draw: how to apply the paint, how to shade, and so on. With the invention of photography, the number of "grains" in analog photography, or "pixels" in digital photography, changed by several orders of magnitude, and HOW to arrange them no longer matters to us: the camera does that for us.

The result, however, is essentially the same - an image of a person. But there are differences. In a hand-drawn portrait, the accuracy of the likeness is very relative and depends on the "vision" of the artist; distortions of proportions and added shades and details that do not exist in the "original", i.e. in the human face, are inevitable. A photograph accurately and scrupulously conveys the "WHAT", leaving the "HOW" in the background.

With some license, we can say that photography is Big Data relative to the hand-drawn portrait.

Now let us record every human movement at strictly defined and fairly small time intervals. The result is a movie: film is "big data" in relation to photography. We increased the amount of data and processed it accordingly, and we obtained a new quality - a moving image. By changing the quantity and adding a processing algorithm, we obtain a new quality.

Now video images themselves serve as food for Big Data computer systems.

As the scale of the processed data increases, new opportunities appear that are not available at smaller volumes. Google predicts flu epidemics no worse, and much faster, than official medical statistics. To do this, it thoroughly analyzes hundreds of billions of source records, which allows it to provide an answer much faster than official sources.

Well, briefly about two more aspects of big data.

Accuracy.

Big Data systems can analyze huge amounts of data, and in some cases all the data, NOT samples. Using all the data, we get a more accurate result and can see nuances unavailable with a limited sample. However, we must be content with a general picture rather than an understanding of the phenomenon down to the smallest detail. Still, inaccuracies at the micro level do not prevent large quantities of data from yielding discoveries at the macro level.

Causality.

We are used to looking for causes in everything; this, in fact, is what scientific analysis is based on. In the world of big data, causality is less important. More important are the correlations between data, which can provide the necessary knowledge. Correlations cannot answer the question "why", but they are good at predicting "what" will happen when certain correlations are observed. And most often this is exactly what is required.

***

Topic 4.7

Programming algorithms for generating and processing one-dimensional arrays

Structured Data

Often it is necessary to process not single data items but a collection of data of the same type. Consider, for example, the task of tabulating a function, which consists of obtaining a sequence of values of a given function for several values of its argument. To store each computed value separately, you would need to declare a separate variable with a unique name.

Referring to each variable of the sequence by name turns into a long string of similar operations on each variable. The program code becomes hard to read, and the program requires a lot of memory.

To solve these problems, algorithmic languages use structured data. The simplest structured data type is the array.

An array is a set of variables of the same type (array elements). All the variables have the same name, and to access a specific element of the array an additional identifier is used: its ordinal number (index), which starts from 0.

In addition to arrays, other standard data structures, such as stacks, queues, and linked lists, can be used in programming to build efficient algorithms.

Along with standard data structures, user-defined data structures can be used. These data structures are defined with object-oriented programming tools using classes.

4.7.2. Tools for describing and working with one-dimensional arrays

An array is a sequence of variables of the same type united by a common name. For example, the one-dimensional array a(9) consists of 10 elements with the common name a: a(0), a(1), a(2), a(3), ..., a(9), ordered by the index i, which takes values from 0 to 9.


An array is declared in a VB program in the same way that simple variables are declared. If an array is declared as local, it can be used only in the procedure in which it is declared. If an array is declared as global, it can be used anywhere in the program.

When declaring an array, the declaration statement must include the following information:

· array name – the name (identifier) used to represent the array in the program;

· data type – the data type of the array elements;

· dimension (rank) – the number of dimensions of the declared array (i.e. the number of indexes in the declaration; one-dimensional arrays have one dimension);

· number of elements – the number of elements that the array will contain.

Let's look at examples of some array descriptions:
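The example listing itself did not survive; VB declarations matching the descriptions that follow might look like this (a reconstruction, not the original code):

    Dim d(30) As Integer
    Dim a(10) As Double
    Dim b(13, 10) As Single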

In these examples the following arrays are declared:

· one-dimensional array d, consisting of 31 elements of type Integer with indexes from 0 to 30;

· one-dimensional array a, consisting of 11 elements of type Double with indices from 0 to 10;

· two-dimensional array b, consisting of 14x11=154 elements of type Single, with row indexes from 0 to 13 and column indexes from 0 to 10.

Please note that in VB the lower bound of an array can only be 0.

Thus, an array consists of elements that can be accessed using indexes. When accessing array elements, the indexes are written after the name in parentheses and can be any valid integer expression, for example d(24), a(2*i+1).

Note that the number of indexes indicates the dimension of the array. Thus, in the example above, the array a(10) has dimension one, and the array b(2,3) has dimension 2.

Unlike the dimension, the array size is the number of elements in the array. In our example, the size of the array a(10) is 11.

Before an array is used in a program, it must be declared using the Dim operator, and specific values must be assigned to its elements. The Dim operator allocates computer memory to hold the array elements, zeroes the elements of numeric arrays, and fills the elements of string arrays with empty strings ("").

As with simple data types, when declaring arrays (which are structured data types) there are two ways to allocate memory: static, at the compilation stage before the program is executed, and dynamic, during program execution. By default, an array whose bounds are specified by constant expressions is considered static. Memory for such an array is allocated at the compilation stage and is retained for the entire execution period.

You can fill array elements with specific values by inputting them, by using the assignment operator, or by initializing the array elements.

Initialization of array elements is an element-by-element assignment of values in the array declaration statement. In this case, the size of the array is not specified in the parentheses after the array name; it is determined implicitly by the size of the list of values. The list of values starts with the element at index 0 and is enclosed in curly brackets, for example:
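The original example did not survive; a hypothetical initialization matching this description might be:

    Dim d() As Integer = {3, 7, 1, 9}   ' size 4, indexes 0 to 3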

It should be noted that, regardless of the specific task, algorithms for generating and processing arrays are usually built using regular cyclic (loop) structures, as in the sketch below:
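A minimal sketch of such a regular loop structure (the array name and fill rule are invented for the illustration):

    Dim a(10) As Double
    Dim i As Integer
    For i = 0 To 10          ' one pass per array element
        a(i) = i * 0.5       ' generation: fill the element
    Next i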

To make it easier to work with arrays in procedures, the built-in function UBound(ArrayName) is used to determine the upper bound of an array.

This function returns (determines) the index of the last element of the array and allows you to process arrays in procedures without passing the number of array elements as a parameter.

Alternatively, you can use the GetUpperBound() method to determine the upper bound of a one-dimensional array. Since the array is one-dimensional, the value 0 is specified in the parentheses, as in the sketch below.
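A sketch showing both ways of obtaining the upper bound (the array a and the summing loop are invented for the illustration; a one-dimensional array of Double is assumed):

    Dim s As Double = 0
    Dim i As Integer
    For i = 0 To UBound(a)              ' built-in function UBound
        s = s + a(i)
    Next i

    For i = 0 To a.GetUpperBound(0)     ' equivalent method call
        s = s + a(i)
    Next i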

In addition, the keyword ByVal specifies that an array argument is passed by value, and the keyword ByRef indicates that the array argument is passed by reference. Note that if the keywords ByVal and ByRef are omitted, the array argument is passed by reference.

Thus, when describing the formal parameters of any procedure, you must always include empty parentheses after the array name, because they indicate that this parameter is a one-dimensional array.

Note that there are no parentheses after the name of an array that is passed as an actual parameter.

As you know, passing arguments by value (using the ByVal keyword) causes VB to pass a copy of the data to the procedure. Therefore, you should not pass arrays by value unless it is really necessary.