
Agile Automation, Inc.

PO BOX 336405

North Las Vegas, NV 89033-6405

office (702) 489-8490

fax (702) 489-9203

Device Control

Agile Automation knows device control. Every agile application will control some type of hardware device. Agile Automation also develops relationships with many hardware manufacturers. Some manufacturers will modify their drivers, and sometimes their devices, to improve functionality and to fit into an agile architecture.

Agile Automation can include any type of hardware device in an agile application. This includes, but is not limited to, multi-axis robots, single-axis actuators, microscope stages, analog cameras, digital cameras, measurement equipment, stimulus equipment, on/off devices, analog devices, and digital devices. As a bonus, users will enjoy the agile design as they configure their application for the actual devices they purchased.

Agile Automation also has plenty of experience synchronizing devices with each other. Applications can trigger devices when a switch toggles, when a robot reaches a specific location, before a camera starts acquiring an image, and so on. Our event-driven architecture keeps software-added delay small compared to the latencies of the devices themselves: the software typically adds only a few microseconds of latency, depending on available bandwidth. The systems engineer can meet short latency requirements by selecting the proper devices and computer hardware, and Agile Automation will be happy to help identify those devices.
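The event-driven triggering described above can be sketched as a simple publish/subscribe dispatcher. This is an illustrative sketch only; the class and event names below are hypothetical, not the actual Agile Automation API.

```python
# Minimal event-driven trigger sketch (hypothetical names): device actions
# subscribe to named events, and the software's added latency is just the
# cost of one dispatch call when an event is published.

class EventBus:
    def __init__(self):
        self._subscribers = {}  # event name -> list of callbacks

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, *args):
        for callback in self._subscribers.get(event, []):
            callback(*args)

bus = EventBus()
log = []

# Switch a (simulated) light on when the robot reports a specific location.
bus.subscribe("robot.at_position", lambda pos: log.append(f"light on at {pos}"))
bus.publish("robot.at_position", (10.0, 5.0))
```

In a real system the callback would command hardware rather than append to a list, but the shape of the mechanism is the same.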

Motion Control

Motion control includes any device that moves: pneumatic actuators, servo motor actuators, stepper motor actuators, and anything else that moves. The software controls single-axis actuators just as they appear in the system; in other words, when the user wants the payload to move from point A to point B, the software simply commands the actuator to move from point A to point B. The movement of the payload on a multi-axis stage or robot, on the other hand, can be completely different from the movement of the axes. The user still wants the payload to move from point A to point B, but that requires a coordinated movement of all the axes, depending on the device's configuration.

We as humans normally think of motion in one, two, or three dimensions: a straight line, a plane, or space. The single-axis actuator allows motion along the straight line, as mentioned above. A simple two-axis device is composed of two actuators configured so that each axis allows motion along one dimension of a plane. Moving a payload attached to such a device is as simple as moving each actuator the proper distance along each dimension. It becomes a little more difficult, however, when the user expects the payload to move along a straight line; this requires coordinated motion in each dimension and the ability to control both actuators simultaneously. A simple three-axis device is composed of three actuators configured so that each axis allows motion along one dimension of space. Just like the two-axis device, these devices move their payload by moving each actuator the proper distance along each dimension, and moving the payload in a straight line again requires coordinated motion in each dimension and simultaneous control of each actuator.
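The coordinated straight-line motion described above can be sketched as simple per-axis interpolation: the path from A to B is split into steps, and every axis is commanded its proportional share of each step so the payload stays on the line. This is an illustrative sketch, not production motion-control code; the function name is hypothetical.

```python
# Coordinated straight-line motion on a simple multi-axis stage: divide the
# move A -> B into equal steps and advance every axis proportionally, so the
# payload tracks a straight line rather than moving one axis at a time.

def linear_waypoints(a, b, steps):
    """Return per-axis positions that carry a payload from a to b in a line."""
    return [
        tuple(a[i] + (b[i] - a[i]) * s / steps for i in range(len(a)))
        for s in range(1, steps + 1)
    ]

path = linear_waypoints((0.0, 0.0), (10.0, 5.0), steps=5)
# Every waypoint advances both axes proportionally, so each intermediate
# position lies on the line y = 0.5 * x between the two endpoints.
```

Sending one axis its full distance before the other would trace an L-shaped path instead; the coordinated version is what "moving along a straight line" requires.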

Many other available configurations complicate the software even more. A single-axis actuator could be rotational and used to position one of many payloads into a workspace. Considerations for these devices include whether the device can rotate endlessly in the same direction or must rotate in both directions when positioning its payloads.

Some two-axis devices are rotational as well, where one actuator usually rotates about an axis orthogonal to the other actuator's axis of rotation (such as azimuth and elevation). A third actuator could rotate about an axis orthogonal to the other two. These devices normally position their payload to some orientation rather than to a location. Considerations again include whether each axis can rotate continuously or must rotate in both directions as required. Since the target is normally an orientation, the software can simply rotate each axis to the proper angle.

Now consider an articulated robotic arm composed of multiple actuators capable of moving the payload to a location in 3D space. There is no longer a simple relationship between the desired location and the position of each actuator; there are actually multiple combinations of actuator positions that place the payload at some locations, and the payload may travel along multiple paths depending on its current location. The actuator positions and the path used to reach a location depend on a few factors. One factor is how accurate and repeatable the location must be. Another is what obstacles the payload will encounter along its path. Another is what the payload will be doing while traveling along that path. There can be numerous factors depending on the system's parameters.

Since actuators have a backlash component, their accuracy and repeatability depend on direction and velocity while approaching their final position. Therefore, the final payload location is more accurate and repeatable when all actuators move in the same manner every time the payload moves to the same location. Some systems choose to position the payload near the final location and then approach it from the same direction and at the same velocity every time. This requires overshooting the position in some actuators and undershooting it in others, depending on the final positions required. The technique reduces the effects of backlash, but it also takes longer to achieve. Other systems choose to position the robot to the same locations in the same order every time. This minimizes backlash issues and allows higher speed, but it does not provide flexibility.
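The unidirectional-approach technique above, for a single axis, can be sketched as follows. The function and the overshoot distance are hypothetical, chosen only to illustrate the idea.

```python
# Backlash-reducing unidirectional approach: if a move would arrive at the
# target from the "wrong" direction, first overshoot past the target, then
# approach it from the preferred direction, so backlash is taken up the same
# way on every move to that position.

APPROACH_OVERSHOOT = 0.5  # assumed overshoot distance, in axis units

def approach_moves(current, target, preferred_direction=+1):
    """Return the positions to command so the final approach to `target`
    always happens in `preferred_direction` (+1 or -1)."""
    moves = []
    if (target - current) * preferred_direction > 0:
        # Already approaching from the preferred side: move directly.
        moves.append(target)
    else:
        # Overshoot past the target, then come back the preferred way.
        moves.append(target - preferred_direction * APPROACH_OVERSHOOT)
        moves.append(target)
    return moves
```

Note the trade-off stated above: the overshoot-and-return case takes two moves instead of one, which is exactly why this technique is slower but more repeatable.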

Some systems will require the payload to travel through a specific path when moving from point A to point B, such as a straight line, an arc, or some other curve. Some may require the velocity of the payload to remain constant through the entire path. This requires the software to calculate multiple coordinated moves for every actuator, with varying direction and speed changes throughout the entire path. The software can command the robot through multiple short move commands or through multiple velocity control commands. The latter usually accomplishes much smoother motion but requires a tight control loop between the software and the motion control device. Some motion controllers allow the software to program the velocity commands ahead of the actual control logic while the motion controller processes the tight control loop.
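The look-ahead velocity-command approach mentioned above can be sketched as a queue that the application fills ahead of time and the motion controller drains in its own control loop. All names here are hypothetical; real controllers expose vendor-specific interfaces for this.

```python
# Look-ahead velocity programming: the application queues per-axis velocity
# commands in advance, and the motion controller's tight control loop consumes
# them one at a time, so smooth motion does not depend on software timing.

from collections import deque

class VelocityQueue:
    """Buffer of (vx, vy, duration) commands programmed ahead of execution."""

    def __init__(self):
        self._commands = deque()

    def program(self, vx, vy, duration):
        # Called by the application, ahead of time.
        self._commands.append((vx, vy, duration))

    def next_command(self):
        # Called by the controller's control loop; an empty queue means stop.
        return self._commands.popleft() if self._commands else (0.0, 0.0, 0.0)

q = VelocityQueue()
# Move along X first, then a coordinated diagonal segment.
q.program(5.0, 0.0, 0.1)
q.program(5.0, 5.0, 0.1)
```

Because the queue is filled ahead of the control loop, brief delays on the software side do not starve the controller mid-path.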

Agile Automation has experience with various motion control devices and designs its motion control components to hide many of the necessary details. This allows the application developer to simply deal with the motion devices from the user's perspective, that is, how the user would accomplish the task. The agile components hide the details of how the software actually commands the actuators. This allows the system architect to select the actuators and motion controllers that best suit the application. The application is never designed around any specific device, thereby keeping it as agile as possible.
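The kind of abstraction described above can be sketched with a generic device interface. The class names below are hypothetical illustrations, not the actual agile component API.

```python
# Device abstraction sketch: the application is written against a generic
# MotionDevice interface, and vendor-specific details live in subclasses,
# so swapping hardware never means rewriting the application.

from abc import ABC, abstractmethod

class MotionDevice(ABC):
    @abstractmethod
    def move_to(self, position):
        """Move the payload to `position` in user-space coordinates."""

class SimulatedStage(MotionDevice):
    """Stand-in for a vendor-specific driver; records commanded positions."""

    def __init__(self):
        self.history = []

    def move_to(self, position):
        self.history.append(position)

def run_job(device: MotionDevice):
    # The application sees only MotionDevice, never the vendor driver.
    device.move_to((0.0, 0.0))
    device.move_to((10.0, 5.0))

stage = SimulatedStage()
run_job(stage)
```

A real subclass would translate `move_to` into coordinated actuator commands for its particular controller; the application code above would not change.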

Imaging Control

Imaging control includes any device that produces images, such as a camera. Imaging devices collect light and build their images on a special sensor placed inside the device. The camera then transfers the image to a buffer inside the device or to computer memory while it integrates the next image on the sensor. This allows a continuous stream of images with very little missed data, but it also means the imaging device is always ahead of the computer. That can cause issues if the application needs to switch a light on or off, open a shutter, or perform some other synchronized task that affects the image as it integrates. Controlling an imaging device is a completely different problem from controlling other devices, and any control loop that needs to synchronize imaging with other devices must consider this issue.
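The "stay ahead of the camera" requirement can be sketched as a schedule that pairs stimulus settings with future frames before those frames integrate. This is an illustrative sketch with hypothetical names, not the actual imaging application described below.

```python
# Staying ahead of a free-running camera: the application programs the light
# state for future frames in advance, so when each exposure actually begins
# the correct stimulus is already known. Arriving at a frame with nothing
# programmed means the software fell behind the camera.

from collections import deque

class StimulusSchedule:
    """Light on/off settings queued ahead of the frames they apply to."""

    def __init__(self):
        self._pending = deque()

    def program(self, frame_index, light_on):
        self._pending.append((frame_index, light_on))

    def on_frame_start(self, frame_index):
        # Called when the camera begins integrating `frame_index`.
        if self._pending and self._pending[0][0] == frame_index:
            _, light_on = self._pending.popleft()
            return light_on
        raise RuntimeError(f"no stimulus programmed for frame {frame_index}")

sched = StimulusSchedule()
for i, on in enumerate([True, False, True]):
    sched.program(i, on)
```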

Agile Automation has developed a sophisticated imaging application capable of synchronizing various devices with the imaging device and routing images to multiple pipes based on the stimuli applied to each image. This required a control loop capable of staying ahead of the imaging device in order to switch other devices on or off at the correct time without losing any data. These agile techniques are now part of the agile development concept.

Simple Devices

Software either reads information from simple input devices or writes information to simple output devices. Input devices turn one or more signal lines on or off, or they apply a voltage or current to a signal line. Output devices react to similar signal lines as the software turns them on or off or applies a voltage or current. Turning signals on or off refers to digital devices, while voltage or current stimuli refer to analog devices. These devices are the simplest to communicate with and probably represent the largest share of devices on the market. Controlling them, however, requires another hardware device capable of converting input signals into a form readable by the software and converting commands from the software into the output signals necessary to control a device. These devices are known as input/output (or I/O) devices, and there are many to choose from.
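The digital/analog split above maps naturally onto a small I/O interface: digital lines carry on/off states, analog lines carry voltage levels. The class and line names below are hypothetical, standing in for whatever real I/O board a system uses.

```python
# Generic I/O layer sketch: the application reads and writes named lines
# through one interface, regardless of which I/O hardware sits behind it.

class SimulatedIoDevice:
    """Stand-in for a real I/O board; stores line states in dictionaries."""

    def __init__(self):
        self._digital = {}
        self._analog = {}

    def write_digital(self, line, on):
        self._digital[line] = bool(on)     # digital output: on or off

    def read_digital(self, line):
        return self._digital.get(line, False)

    def write_analog(self, line, volts):
        self._analog[line] = float(volts)  # analog output: a voltage level

    def read_analog(self, line):
        return self._analog.get(line, 0.0)

io = SimulatedIoDevice()
io.write_digital("valve_1", True)    # e.g. open a pneumatic valve
io.write_analog("heater_set", 2.75)  # e.g. set a heater control voltage
```

A driver for a bus-attached or Ethernet-attached I/O device would implement the same four operations against real hardware.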

The I/O devices selected for any system will depend on what the system needs to control. Selecting the correct I/O devices also requires understanding all the specifications, especially when it comes to analog devices. Some analog devices require very high resolution and accuracy because they may respond to very small differences in voltage levels. The I/O device selected, therefore, must be capable of making small changes (or reading small changes) reliably and accurately. Sometimes this will not be necessary, however, so the I/O device selected will not be as critical.

Synchronized control is another consideration when selecting the I/O device. Synchronization requirements often demand a very short delay between the trigger event and the response at the device. Software adds its own latency, but that latency is usually very short with today's technology. Some I/O devices connect directly to the computer's I/O bus; these normally provide the shortest latency since they communicate at very fast bus speeds. Other devices connect through serial, Ethernet, or USB ports; these normally have longer latencies since they add another layer of communication. The I/O device selected must stay within the acceptable latency even in its worst-case scenario. (Total latency changes depending on various timing issues between the layers of communication; the worst-case scenario includes the longest possible delay between each layer.)

Agile Automation designs software to allow the system architect to choose the best I/O devices for their application. We never design software for any specific I/O device, making applications as agile as possible.

Data Signals

Data travels through an agile application to and from its devices using data signals. Data signals are objects that contain the actual data and send it between each other through the event-driven mechanism. Using data signals to communicate with devices helps decouple the application from the actual device. Data objects can be reduced to a few data types, thereby keeping the number of interfaces required by any application to a minimum. Each data signal provides an interface for accessing the data in the units required by the device, which allows one device to acquire data in one unit and another device to process it in another.
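The unit-independent access described above can be sketched as a signal that stores its value once, in a base unit, and converts on access. The class, units, and conversion table below are illustrative assumptions, not the actual data signal interfaces.

```python
# Data signal sketch: the position is stored once in a base unit (millimeters
# here), and each reader asks for the units it needs, so the producing device
# and the consuming device never have to agree on units.

class PositionSignal:
    """Stores a position in millimeters; readers choose their own units."""

    _to_mm = {"mm": 1.0, "um": 1e-3, "in": 25.4}  # assumed conversion table

    def __init__(self):
        self._mm = 0.0

    def set(self, value, unit="mm"):
        self._mm = value * self._to_mm[unit]

    def get(self, unit="mm"):
        return self._mm / self._to_mm[unit]

signal = PositionSignal()
signal.set(25.4, unit="mm")     # a stage driver reports in millimeters
inches = signal.get(unit="in")  # an inspection step reads the same value in inches
```

A handful of such signal types (position, voltage, temperature, image, and so on) keeps the number of interfaces in the application small, as the text above notes.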

Synchronized Devices

Many simple applications can control devices in a serial fashion to accomplish their intended job. The devices in these applications can perform their functions without worrying about the state of the other devices in the system. Many systems, however, have devices that must wait for other devices to reach a specific state before they can continue. Applications can control this serially as well: the application commands a device to perform a function, waits for the device to complete, commands the next device, and so on until the job is complete. This can add a lot of unnecessary idle time when commanding devices that require many seconds, or minutes, to complete their task. It is also seldom the case that all devices must wait for all other devices to be in a proper state before beginning their function. The application can actually improve throughput by commanding non-dependent devices to function simultaneously.

Agile Automation developed a synchronization mechanism that allows the application to command devices in parallel, ahead of when they should perform their task. Each command can be associated with a trigger that fires the command when a specific event occurs. Various events can trigger device commands: a motion device reaching a specific location, an imaging device beginning to integrate a specific image, a user toggling a switch or clicking a button on the GUI, and so on.
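The arm-ahead-then-fire pattern above can be sketched as follows. The names are hypothetical; the point is that commands are programmed in parallel and each one runs only when its own trigger event occurs.

```python
# Triggered parallel commands: each device command is armed against the event
# that should fire it, so the application can program several devices up front
# and none of them runs until its trigger actually occurs.

class TriggeredCommands:
    def __init__(self):
        self._armed = {}  # event name -> list of commands to run once

    def arm(self, event, command):
        self._armed.setdefault(event, []).append(command)

    def fire(self, event):
        # One-shot semantics: armed commands run once and are cleared.
        for command in self._armed.pop(event, []):
            command()

fired = []
cmds = TriggeredCommands()
# Program two devices ahead of time, in parallel.
cmds.arm("robot.at_load_position", lambda: fired.append("open gripper"))
cmds.arm("camera.frame_start", lambda: fired.append("strobe on"))
# Only the camera trigger has occurred so far; the gripper stays armed.
cmds.fire("camera.frame_start")
```

This is the parallel counterpart to the serial command-and-wait loop described above: the application spends no idle time waiting, because the triggers do the waiting.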