Remote control of a robot. A robot can be controlled in many different ways

Hello, Habrahabr! On the evening of June 11 I was sitting and watching a film when, out of the blue, I discovered that a woman I had never met had written to me with an offer to build a robot for their new quest room. The idea of the quest is that you solve puzzles, search hiding places, apply hints correctly, use the objects at hand, and ultimately get the keys and open the doors. What they needed from me was a robot controlled from a computer by a separate program. I had doubts about a few points, for example: would I have enough time, and how exactly would I do the wireless data transfer (before this I had only done wireless transfer on the NXT)? After weighing the pros and cons, I agreed. Then I started thinking about the data link. Since the robot had to be built quickly, there was no time to brush up on and master, say, Delphi, so the idea arose to build a module that would relay commands: the computer simply has to send data to a COM port. The approach is odd, but it is the fastest, and it is what I want to describe here. I am also attaching three programs that will help you build a radio-controlled car.
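To make the "just send data to the COM port" idea concrete, here is a minimal sketch of the computer side in plain Java using the third-party jSerialComm library. It is not part of the original project (my actual control program, shown below, is written in Processing), and the port name is an assumption you will need to adjust:

import com.fazecast.jSerialComm.SerialPort;

public class SendCommand {
    public static void main(String[] args) throws InterruptedException {
        // "COM3" is a placeholder - pick your FTDI port from SerialPort.getCommPorts()
        SerialPort port = SerialPort.getCommPort("COM3");
        port.setBaudRate(19200); // must match the transmitter sketch
        if (!port.openPort()) {
            System.err.println("Could not open the port");
            return;
        }
        // '2' means "go straight" in the command protocol described below
        port.writeBytes(new byte[] { '2' }, 1);
        Thread.sleep(100); // give the byte time to leave the buffer
        port.closePort();
    }
}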
Transmitter assembly and its program.
I built the computer-side module from an FTDI Basic Breakout 5/3.3V from DFRobot, a fairly common ATmega328P-PU microcontroller with an Arduino bootloader, and a radio module based on the nRF24L01 chip. Essentially it is just an Arduino Uno with a radio module. The radio module has a quirk that I did not notice right away: its supply voltage must be in the range of 3 to 3.6 volts (applying 5 volts to it will not kill it, but it will not work either), while its logic inputs tolerate up to 5 V. This means that to connect the radio module to the ATmega you do not need a level converter between 3.3 V and 5 V, but you do need a 3.3 V regulator. The FTDI board has one built in, and I powered the radio module from it.

This is what the module itself looks like (inside and assembled):

The program consists of initialization, a start message and processing of the commands coming from the control program - at least that is how it worked in my case. The basic commands of the Mirf library:

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfSpiDriver.h>
#include <MirfHardwareSpiDriver.h>
These libraries are needed for the radio module to work.

Mirf.csnPin = 4 - sets the pin responsible for "permission to communicate" between the radio module and the MCU
Mirf.cePin = 6 - sets the pin responsible for the radio module's operating mode (receiver/transmitter)
Mirf.spi = &MirfHardwareSpi - configures the SPI interface
Mirf.init() - initializes the radio module
Mirf.payload = 1 - size of one message in bytes (default 16, maximum 32)
Mirf.channel = 19 - sets the channel (0-127, default 0)
Mirf.config() - applies the transfer parameters


Mirf.setTADDR((byte *)"serv1") - sets the address of the receiver the module will transmit to
Mirf.setRADDR((byte *)"serv1") - sets the module's own receive address

Mirf.send(data) - sends a byte array
Mirf.dataReady() - reports that a received packet is ready to be read
Mirf.getData(data) - writes the received data into the data array

I am attaching the code for the transmitter program.

Transmitter program

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfSpiDriver.h>
#include <MirfHardwareSpiDriver.h>

char active;
byte data;

void setup()
{
  Serial.begin(19200);

  Mirf.csnPin = 4;
  Mirf.cePin = 6;
  Mirf.spi = &MirfHardwareSpi; // select the SPI driver for the library

  Mirf.init();
  Mirf.payload = 1;
  Mirf.channel = 19;
  Mirf.config();

  Mirf.setTADDR((byte *)"serv1");

  // signal message about the start of work
  data = 7;
  Mirf.send(&data); // send() takes a pointer to the byte array
  delay(200);
}

void loop()
{
  if (Serial.available()) // if a byte has arrived from the computer
  {
    active = Serial.read(); // remember the last command character
  }

  if (active == '1') data = 1; // turn left
  if (active == '2') data = 2; // go straight
  if (active == '3') data = 3; // turn right
  if (active == '4') data = 4; // go back
  if (active == '5') data = 5;
  if (active == '6') data = 6; // stop

  Mirf.send(&data);         // send the command
  while (Mirf.isSending()); // wait until the data is sent
}

Control program.

There is an interesting thing called Processing. Its syntax is the same as Arduino's, except that instead of void loop() there is void draw(). Even more interesting for my situation is Processing's Serial library, which lets you work with a serial port. After reading the tutorials on the SparkFun website, I played around with blinking an LED on an Arduino connected to the computer at the click of a mouse. After that, I wrote a program to control the robot from the keyboard. I am attaching the arrow-key control code; in principle, there is nothing unusual in it.
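For reference, that first mouse-click experiment can look like the minimal Processing sketch below. It assumes the Arduino is the first port in the list and runs its own simple sketch (not shown) that turns the pin 13 LED on for '1' and off for '0':

import processing.serial.*;

Serial ledPort;

void setup() {
  size(200, 200);
  ledPort = new Serial(this, Serial.list()[0], 9600); // pick your port from Serial.list()
}

void draw() {
  // nothing to draw - we only react to mouse clicks
}

void mousePressed() {
  ledPort.write('1'); // LED on
}

void mouseReleased() {
  ledPort.write('0'); // LED off
}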

Machine control program

import processing.serial.*;

Serial myPort;
PFont f;

void setup()
{
  size(360, 160);
  stroke(255);
  background(0);
  // fonts must be created after size(); a .vlw file would normally be loaded with loadFont()
  f = createFont("LetterGothicStd-32.vlw", 24);
  textFont(f);

  String portName = "XXXX"; // here you need to write the name of your port
  myPort = new Serial(this, portName, 19200);
}

void draw()
{
  if (keyPressed == false) // no key is held down - send the "stop" command
  {
    clear();
    myPort.write("6");
    println("6");
  }
}

void keyPressed()
{
  // 10 - enter
  // 32 - space
  // 37/38/39/40 - arrow keys
  clear();

  fill(255);
  textAlign(CENTER);
  //text(keyCode, 180, 80);

  switch (keyCode)
  {
  case 37:
    text("Going left", 180, 80);
    myPort.write("1");
    break;

  case 38:
    text("Going straight", 180, 80);
    myPort.write("2");
    break;

  case 39:
    text("Going right", 180, 80);
    myPort.write("3");
    break;

  case 40:
    text("Going back", 180, 80);
    myPort.write("4");
    break;

  default:
    text("No such command", 180, 80);
    myPort.write("6");
    break;
  }
}

Receiver program.

The initialization in this program differs from the transmitter's by just one line. The key command in the endless loop is Mirf.getData(data). The received command is then compared against the numbers corresponding to the robot's actions, and the robot acts exactly on those commands. I am attaching the code of the machine's receiver program.

Machine program

#include <SPI.h>
#include <Mirf.h>
#include <nRF24L01.h>
#include <MirfSpiDriver.h>
#include <MirfHardwareSpiDriver.h>

byte data = 6; // last received command; 6 means "stand still"

void setup()
{
  Serial.begin(9600);

  pinMode(13, OUTPUT); // LED

  Mirf.csnPin = 10;
  Mirf.cePin = 9;
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  Mirf.payload = 1;
  Mirf.channel = 19;
  Mirf.config();
  Mirf.setRADDR((byte *)"serv1");
}

// motors(left, right) is the author's helper that drives the two motor channels;
// its implementation depends on the motor driver and is not shown here
void loop()
{
  if (!Mirf.isSending() && Mirf.dataReady())
  {
    Mirf.getData(&data); // getData() takes a pointer to the receive buffer
    Serial.println(data);
  }

  switch (data)
  {
  case 1:
    motors(-100, 100); // turn left
    break;

  case 2:
    motors(100, 100); // go straight
    break;

  case 3:
    motors(100, -100); // turn right
    break;

  case 4:
    motors(-100, -100); // go back
    break;

  default:
    motors(0, 0); // stand still
    break;
  }

  delay(50);
}

Conclusion.

What came out of all this:

I made this robot for Claustrophobia, who run real-life quest rooms in various cities, and for one of those quests the organizers needed a radio-controlled sapper robot. I like how it turned out. Of course, the approach looks clumsy compared to control via the wireless hardware built into a laptop, but it was done in its own way, done very quickly, and it worked without problems. I hope this article helps you build something similar, or maybe even more complex. Take from it whatever you find useful.


The main module of the Lego Mindstorms EV3 construction set can run the leJOS firmware, which makes it possible to execute Java applications. Specifically for it, Oracle has released and supports a separate version of the full-fledged Java SE.

The full JVM allowed me to use its built-in Java Management Extensions (JMX) protocol to implement remote control of the manipulator arm. To bring together the controls, the sensor readings and the video from the IP cameras mounted on the robot, a mimic diagram built on the AggreGate platform is used.


The robot itself consists of two main parts: a chassis and a manipulator arm. They are controlled by two completely independent EV3 computers, and all coordination goes through the control server. There is no direct connection between the computers.

Both computers are connected to the room's IP network via NETGEAR WNA1100 Wi-Fi adapters. The robot is driven by eight Mindstorms motors: four "large" and four "small". Also installed are infrared and ultrasonic sensors for stopping automatically at an obstacle when reversing, two touch sensors that stop the rotation of the manipulator when it hits an obstacle, and a gyroscopic sensor that helps the operator stay oriented by visualizing the position of the shoulder.

The chassis has two motors, each of which transmits power to a pair of tracked drives. Another motor rotates the entire robotic arm 360 degrees.

In the manipulator itself, two motors are responsible for raising and lowering the "shoulder" and the "forearm". Three more motors handle raising/lowering the hand, rotating it 360 degrees, and clenching/unclenching the "fingers".

The most complex mechanical unit is the "hand". Because three heavy motors had to be moved back toward the "elbow", the design turned out to be rather intricate.

In general, everything looks like this (it was hard to find a matchbox for scale):

To transmit the image, two cameras are installed:

  • A regular Android smartphone with the IP Webcam app installed, for a general overview (an HTC One is pictured)
  • An autonomous AI-Ball Wi-Fi micro-camera mounted directly on the "hand" of the manipulator, which helps when grabbing objects of complex shape

EV3 Programming

The software of the robot itself turned out to be as simple as possible. The programs on the two computers are very similar: they start a JMX server, register the MBeans corresponding to the motors and sensors, and go to sleep waiting for JMX operations.

Code of the main classes of the robotic arm software

public class Arm {
    public static void main(String[] args) {
        try {
            EV3Helper.printOnLCD("Starting...");
            EV3Helper.startJMXServer("192.168.1.8", 9000);
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

            EV3LargeRegulatedMotor motor = new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("A"));
            LargeMotorMXBean m = new LargeMotorController(motor);
            // the ObjectName is reconstructed - the original registration names are not shown
            mbs.registerMBean(m, new ObjectName("robot:name=LargeMotorA"));
            // Registering other motors here

            EV3TouchSensor touchSensor = new EV3TouchSensor(SensorPort.S1);
            TouchSensorMXBean tos = new TouchSensorController(touchSensor);
            mbs.registerMBean(tos, new ObjectName("robot:name=TouchSensor1"));
            // Registering other sensors here

            EV3Helper.printOnLCD("Running");
            Thread.sleep(Integer.MAX_VALUE);
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}

public class EV3Helper {

    static void startJMXServer(String address, int port) {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            java.rmi.registry.LocateRegistry.createRegistry(port);
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + address + ":" + String.valueOf(port) + "/server");

            Map<String, String> props = new HashMap<String, String>();
            props.put("com.sun.management.jmxremote.authenticate", "false");

            // reconstructed: creating and starting the connector server is the standard remaining step
            JMXConnectorServer connectorServer = JMXConnectorServerFactory.newJMXConnectorServer(url, props, server);
            connectorServer.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // printOnLCD() and other helpers skipped
}

public interface LargeMotorMXBean {
    public abstract void forward();
    public abstract void backward();
    public abstract boolean suspendRegulation();
    public abstract int getTachoCount();
    public abstract float getPosition();
    public abstract void flt();
    public abstract void flt(boolean immediateReturn);
    public abstract void stop(boolean immediateReturn);
    public abstract void waitComplete();
    public abstract void rotateTo(int limitAngle, boolean immediateReturn);
    public abstract int getLimitAngle();
    public abstract void resetTachoCount();
    public abstract void rotate(int angle, boolean immediateReturn);
    public abstract void rotate(int angle);
    public abstract void rotateTo(int limitAngle);
    public abstract void setStallThreshold(int error, int time);
    public abstract int getRotationSpeed();
    public abstract float getMaxSpeed();
    public abstract void stop();
    public abstract int getSpeed();
    public abstract void setSpeed(int speed);
}

Example MBean implementation code

public class LargeMotorController implements LargeMotorMXBean {

    final EV3LargeRegulatedMotor motor;

    public LargeMotorController(EV3LargeRegulatedMotor motor) {
        this.motor = motor;
    }

    @Override
    public void forward() {
        motor.forward();
    }

    @Override
    public boolean suspendRegulation() {
        return motor.suspendRegulation();
    }

    @Override
    public int getTachoCount() {
        return motor.getTachoCount();
    }

    @Override
    public float getPosition() {
        return motor.getPosition();
    }

    @Override
    public void flt() {
        motor.flt();
    }

    @Override
    public void flt(boolean immediateReturn) {
        motor.flt(immediateReturn);
    }

    // Similar delegating methods skipped
}

Oddly enough, the programming ended there. Not a single line of code was written on the server side or operator workstation.
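AggreGate is not the only possible client, though. While this project needed no client code at all, readers without AggreGate can talk to the robot with a few lines of standard Java. The IP address and port below match the listing above; the ObjectName is my assumption, since the actual registration names are not shown:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClientDemo {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.1.8:9000/server");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // "robot:name=LargeMotorA" is an assumed name - use the one the robot registers
            LargeMotorMXBean motor = JMX.newMXBeanProxy(
                    mbsc, new ObjectName("robot:name=LargeMotorA"), LargeMotorMXBean.class);
            motor.setSpeed(180); // degrees per second in leJOS
            motor.forward();
            Thread.sleep(1000);  // run for a second
            motor.stop();
        }
    }
}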

Connecting to the server


After that, select the assets (i.e. MBeans in this case) that will be added to the device profile:

Selecting MBeans


And after a few seconds we can view and change the current values of all the polled properties of the MBeans:

Device snapshot


You can also test various operations by manually calling MBean methods, such as forward() and stop().

List of operations


Next, we set up the polling periods for the sensors. A high polling frequency (100 times per second) is used because the control server sits on the same local network as the robot, and it is the server that decides, for example, to stop the rotation when the arm hits an obstacle. The solution is certainly not industrial-grade, but on a well-functioning Wi-Fi network within one apartment it proved quite adequate.
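As a rough sketch of such a server-side rule (AggreGate implements this declaratively; the code below is only an illustration, and the isPressed() accessor is hypothetical, since the TouchSensorMXBean interface is not shown in this article):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ObstacleGuard {
    private final TouchSensorMXBean touch;        // proxies obtained via JMX.newMXBeanProxy()
    private final LargeMotorMXBean rotationMotor;

    public ObstacleGuard(TouchSensorMXBean touch, LargeMotorMXBean rotationMotor) {
        this.touch = touch;
        this.rotationMotor = rotationMotor;
    }

    public void start() {
        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
        // poll every 10 ms - the same 100 polls per second as in the article
        poller.scheduleAtFixedRate(() -> {
            if (touch.isPressed()) {   // hypothetical accessor
                rotationMotor.stop();  // halt the arm's rotation at the obstacle
            }
        }, 0, 10, TimeUnit.MILLISECONDS);
    }
}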

Polling periods


Operator Interface

Now let's move on to creating the operator interface. To do this, we first create a new widget and add the necessary components to it. In the final working version it looks like this:

In fact, the entire interface consists of several panels with buttons, sliders and indicators, grouped in various grid layouts, and two large video players that broadcast images from cameras.

View from inside the interface editor

Whole form:

View with container panels shown:


Now, as industrial automation specialists say, all that remains is to "bring the mimic diagram to life". For this purpose, so-called bindings are used, which connect the properties and methods of the GUI components with the properties and methods of the server objects. Since the EV3 computers are already connected to the server, the MBeans of our robot can act as server objects too.

The entire operator interface contains about 120 bindings, most of which are of the same type:

Half of these same-type bindings implement control by clicking the buttons on the mimic diagram. This is pretty and convenient for testing, but completely unsuitable for actually driving the robot and moving loads. The bindings in this group are triggered by the mousePressed and mouseReleased events of the various buttons.

The other half of the bindings make it possible to control the robot from the keyboard after first pressing the Keyboard Control button. These bindings react to the keyPressed and keyReleased events, and the condition of each binding specifies which key code it responds to.

All control bindings call the forward(), backward() and stop() methods of the various MBeans. Since event delivery is asynchronous, it is important that the forward()/backward() calls and the subsequent stop() calls do not get reordered. To achieve this, all bindings that call methods of the same MBean are added to a single queue.
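The article does not show how these queues look inside AggreGate, but the ordering idea itself is easy to illustrate in plain Java: a single-threaded executor per MBean plays the role of the queue, so a stop() submitted after a forward() can never overtake it:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MotorCommandQueue {
    // one single-threaded executor per MBean = one command queue per motor
    private final ExecutorService queue = Executors.newSingleThreadExecutor();
    private final LargeMotorMXBean motor;

    public MotorCommandQueue(LargeMotorMXBean motor) {
        this.motor = motor;
    }

    public void onKeyPressed()  { queue.submit(motor::forward); } // e.g. bound to keyPressed
    public void onKeyReleased() { queue.submit(motor::stop); }    // e.g. bound to keyReleased
}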

Two separate groups of bindings set the initial speeds and accelerations of the motors (currently this is implemented on the server side using a model, so these bindings are disabled) and change the speeds/accelerations when the Speed and Acceleration sliders are moved.

In my last blog post, I mentioned that the widely available Wii Remote, the wireless controller for the Nintendo Wii, can be used to remotely control a robot's arms. Now I want to continue this topic and give a brief overview of remote control methods...

There are, broadly speaking, two widely used and generally accepted methods for remotely controlling autonomous and semi-autonomous devices:

  1. Control using infrared signals from the remote control (same as changing TV channels)
  2. Radio control

As applied to controlling a robot, the first method is implemented with a simple circuit, which even I, no fan of the soldering iron, managed to solder in half an hour, plus the WinLIRC program, which is essentially a Windows driver for this control scheme (details are on my website, in the Robot sensors section).

Radio control is a long-established practice; you can pull such a control module out of any radio-controlled toy, or find a circuit in any magazine for radio amateurs.

Recently, other methods of wireless control have become increasingly widespread. Of course, we are talking about Bluetooth and Wi-Fi technologies, which are currently used almost everywhere in computers, PDAs, communicators, mobile phones...

The robot control model with Wi-Fi or Bluetooth is basically this: a mobile phone or PDA is attached directly to the robot and, through a simple home-made circuit, sends control signals to the robot and reads its sensors. The main "brain" activity takes place on the main, stationary computer (sometimes even on a distributed network of servers). This approach can sometimes reduce the robot's weight and power consumption several times over.

Incidentally, there is a known case where, at one of a robot's big public presentations, it suddenly froze in place for several minutes. This happened precisely because of overload in the Wi-Fi network of the building where the presentation took place.

Another way to control a robot is visual. In the simplest version, the robot simply moves toward the light; the classic line-following task can be considered a variation of this method. Of course, such visual control is neither very functional nor very interactive. More complex options involve a webcam mounted on the robot and analysis of the images coming from it - this, for example, is how robots are taught to recognize human facial expressions. To implement control using a webcam, it is convenient to use the RoboRealm software, which I have already discussed.

Voice control is a fairly standard feature by now; the speech recognition built into the ordinary Windows Vista OS is enough to implement it.

Incidentally, there are now sensors that implement an artificial sense of smell (you can read, in English, about the use of artificial smell in space), materials have long existed that make touch-sensitive skin possible (even the keyboard of my old Palm m505 is made of a uniform touch-sensitive material), and robots can be given a sense of taste as well...

In conclusion: remote control is needed by almost any robot, no matter how autonomous it is. So when designing your own robot, take this question seriously, choose the most accessible option and build on it, so that you don't have to start all over again later...