Conveyor Tracking General

From SoftMC-Wiki


Introduction

A moving frame is a user-defined variable data type, created as a separate unit (object), that can be linked to one or more robots. The data type has its own properties and definitions. The moving frame represents a coordinate system relative to the robot’s world coordinate system. The user defines a limited area (referred to below as the window) within which the robot tracks. This area is the operation region, or working frame. Before the entrance to this region there is a sensor that marks objects entering it. Tracking starts when a trigger occurs and the procedure is enabled. During tracking, the robot can be moved with both absolute and relative movements. Absolute movements are shifted by the tracking process in the same direction and by the same distance as the moving object. Relative movements are commanded with respect to the current robot position and are superposed on the movements caused by tracking the moving frame. Tracking lasts until the object exits the working frame; then the robot starts tracking the next item on the frame.


You can classify objects in the working frame and track only certain types of objects. Each moving frame can have several master sources (external sources of the moving position); the number of sources equals the number of degrees of freedom of the moving frame. Several robots can track the same operating region, but the region must then be presented as a separate moving frame object for each robot. The robot must match the world frame of the object in the tracking process (dimension and robot type). The process is completed in one of two ways: disable tracking, or send no new triggers.

There are two different concepts of NDOFs: for the robot it means the number of motors driving it; for the moving frame it means the number of external position sources assigned to it. These two numbers can differ. Only the object type (dimension and type) of the moving frame must match the robot's world frame.

Window Declaration

Limit the region by one or more pairs of points, depending on the number of degrees of freedom of the element to be tracked. For every degree of freedom, a pair of points defines its region limits: the upstream (lower limit) and the downstream (upper limit). Both limits are referenced to the trigger point, with the lower limit closer to the trigger point than the upper limit, so the object normally moves from the upstream limit toward the downstream limit. While the object is inside the working frame, every change of conveyor movement direction is tracked. The only limitation is that the region limits must be relative to the trigger position, with the upstream limit closest to the trigger. An object might instead enter from the downstream limit; to handle this case, add a direction flag to the trigger to indicate that the region limits are inverted.
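As an illustration of the window logic described above, here is a minimal Python sketch that decides whether a triggered object lies inside the window, including the direction flag for objects entering from the downstream limit. Function and parameter names are hypothetical, not softMC identifiers:

```python
def in_window(master_pos, trigger_pos, upstream, downstream, inverse=False):
    """Return True while a triggered object is inside the working window.

    upstream/downstream are the region limits measured as distances from
    the trigger point; `inverse` plays the role of the direction flag for
    objects that enter from the downstream limit.
    """
    # Travel of the master source since the trigger fired
    offset = master_pos - trigger_pos
    lo, hi = ((-downstream, -upstream) if inverse else (upstream, downstream))
    return lo <= offset <= hi
```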

The moving frame is divided into components; for example, an XY table is tracked as two axes, X and Y. The coordinate translation is done automatically when the robot coordinate system is used as the base for the position calculation, since the moving frame coordinate system is the same as that of the tracking robot. The master source is independent of the moving frame type; the group master is used as the input position to follow.

Tracking

Tracking starts when the tracking process flag is enabled, which assumes that the object has passed the sensor. Tracking actually begins when the relevant object enters the operation region and ends when it leaves the frame. While the object moves through the frame, the moving frame position is recalculated every sample, based on the frame boundary limits and the scaling ratio in moving frame units. To this basic formula a correction term is added, compensating for a delay of 1-2 samples between the trigger from the sensor and the start of the operation; due to the discretization of the SERCOS samples, this delay would otherwise cause a gap in the position.
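A rough Python sketch of this per-sample calculation follows. All names and the exact form of the correction term are illustrative assumptions, not the softMC implementation:

```python
def frame_position(master_pos, trigger_pos, scale, delay_samples,
                   master_vel, sample_time):
    """Per-sample moving-frame position in moving-frame units.

    base:       master travel since the trigger, scaled into frame units
    correction: compensates the 1-2 sample delay between the sensor
                trigger and the start of the operation
    """
    base = (master_pos - trigger_pos) * scale
    correction = delay_samples * master_vel * sample_time * scale
    return base + correction
```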

Moving Frame

The input position is the current moving frame position. The input position (master source) is usually taken from an external source (external encoder), a simulated axis, or even another moving frame position.

If the master source is another moving frame (e.g., TCP coordinates of another kinematics with more than 2 axes), the offset/transformation between the tracking kinematics and the tracked kinematics is taken into account. In the current implementation state of a group as a master source (phase 3), the offset must be accounted for in TRIGGER or by an alternative means.

The tracking element and the moving frame must have the same kinematics system. It is impossible to track a different kinematics system.

The orientation is composed of two components: the master source (if there is one) and the orientation change due to movement. On the path, if the new position command is known, the new orientation can be calculated.

Tracking Process

The tracking process consists of two phases: approach (where the robot position catches up with the moving frame position) and track (when both positions are synchronized: <robot>.HERE and <moving frame>.HERE). The two phases are differentiated by the value of ISMOVINGFRAMESYNCHRONIZED. These are purely internal process phases over which you have no control; they are described here to provide a better understanding of the implemented algorithms.

The robot motion is determined by modal values of velocity, acceleration and jerk, for both rotation and translation (ATRAN, VTRAN, DTRAN, JTRAN, VROT, AROT, DROT, JROT). These values are sampled during moving frame assignment, when <element>.MASTERFRAME is assigned; changing them later does not affect the approach process. For groups that do not have a kinematics model, the usual kinematics values are used: VCRUISE, ACC and JERK. This phase is complete when the differences in both velocity and position drop below certain thresholds. Higher values of these parameters give faster approaches. Both the robot’s position and orientation coordinates are taken into account.

The system offers two different approach algorithms: the absolute tracking algorithm and the relative tracking algorithm. Both are based on an internal, closed prediction-PD position loop using position and velocity differences as success criteria. Use the absolute tracking algorithm for simple, slow-motion tracking with a minimal initial distance between the robot and the tracked object; use the relative tracking algorithm in all other cases. Besides the differences in the approach algorithms themselves, the two algorithms are also used differently.

Absolute Approach

Absolute tracking algorithm (slave = 3)

The system automatically moves the robot position (<robot>.SETPOINT) toward the tracking object position (<moving frame>.HERE). Once synchronization is achieved, both positions are equal.

In this algorithm, two parameters influence the approaching behavior: <moving frame>.FILTERFACTOR and <moving frame>.DAMPINGFACTOR.

NOTE-Info.svgNOTE
The requirement for entering the tracking phase is that the position error between the moving frame and the robot is less than half the product of the acceleration and the square of the sampling period (1/2·A·T²).
NOTE-Info.svgNOTE
FILTERFACTOR significantly influences the synchronization (approach phase) behavior. Use small values in the beginning and increase them step by step until the desired results are obtained. High values of FILTERFACTOR lead to robot instability!
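The sync-entry condition from the note above can be computed directly (a sketch, not softMC code):

```python
def sync_position_threshold(acceleration, sample_time):
    """Position-error bound for entering the tracking phase: 1/2 * A * T**2."""
    return 0.5 * acceleration * sample_time ** 2
```

For example, A = 1000 mm/s² with a 1 ms sample period gives a bound of 0.0005 mm, which shows how tight the synchronization criterion is.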

Relative Approach

Relative tracking algorithm (slave = 5)

The system moves the initial (pre-tracking) robot position (<robot>.SETPOINT) in the same direction as the tracking object position. Once synchronization is achieved, the robot moves synchronously, keeping the same relative position to the moving object that it had at the moment the approach started.

It is your responsibility to add the appropriate movement and bring the robot position to the object. In this algorithm, the two parameters are used differently.

FILTERFACTOR defines the threshold needed to end the approach phase and switch to the track phase. The bigger this parameter, the shorter the approach phase, but the jerkier the transition moment.

DAMPINGFACTOR tunes the prediction part of the algorithm: larger values correspond to longer approach phases, smaller values to shorter ones.

NOTE-Info.svgNOTE
Due to non-linear effects (kinematics, velocity limitations, etc.), the algorithm becomes unstable with small DAMPINGFACTOR values. Low values of DAMPINGFACTOR lead to robot instability!
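The qualitative behavior of the prediction-PD approach loop can be illustrated with a toy simulation. The gains, tolerances and loop structure below are simplified assumptions; the real softMC behavior depends on FILTERFACTOR/DAMPINGFACTOR and the kinematics:

```python
def simulate_approach(gain, v_master=50.0, dt=0.001,
                      pos_tol=0.01, vel_tol=0.01, max_steps=100000):
    """Toy closed position loop: the robot setpoint chases a constant-
    velocity master; the approach ends when both position and velocity
    errors drop below their thresholds. Returns the number of samples."""
    robot_pos, master_pos = 0.0, 100.0   # 100 mm initial gap
    for step in range(max_steps):
        pos_err = master_pos - robot_pos
        robot_vel = gain * pos_err + v_master   # P term + velocity prediction
        if abs(pos_err) < pos_tol and abs(v_master - robot_vel) < vel_tol:
            return step                          # "synchronized"
        robot_pos += robot_vel * dt
        master_pos += v_master * dt
    return None
```

A larger gain (analogous to a larger FILTERFACTOR) shortens the approach phase; in a real system a value that is too large destabilizes the loop, which is what the notes above warn about.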

Functional Changes

Since Version 4.2.8, the following improvements have been implemented:

1.) The old algorithm is no longer available; the same value of the "<robot>.slave" property (5) is now used for the new algorithm.

2.) A new set of parameters is defined.

Parameters defining sync & desync behavior:

<Group>.VelocitySyncTran

<Group>.AccelerationSyncTran

<Group>.JerkSyncTran

<Group>.VelocitySyncRot

<Group>.AccelerationSyncRot

<Group>.JerkSyncRot


<Group>.VelocityDeSyncTran

<Group>.AccelerationDeSyncTran

<Group>.JerkDeSyncTran

<Group>.VelocityDeSyncRot

<Group>.AccelerationDeSyncRot

<Group>.JerkDeSyncRot


These are all modal-only properties of double format, whose default values are set by the ConfigGroup command to the initial values of the corresponding robot maximum properties (VMTran, VMRot, ...). The de-sync parameters define the slope of de-syncing the robot from the conveyor. The de-sync profile is always a sine-acceleration profile.


All of the above properties are limited by the robot rate and max values in the same way as all other motion parameters:


{vel|acc|jerk}{sync|desync}{tran|rot} = min({vel|acc|jerk}max{tran|rot},{vel|acc|jerk}rate)


Note that the arate parameter influences the syncing process, while the drate parameter influences the de-syncing process.
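The limiting rule above can be read as: each sync/de-sync parameter is clipped by the corresponding robot maximum and rate value. A one-line Python sketch (a hypothetical helper, not a softMC API):

```python
def effective_sync_param(requested, robot_max, rate_limit):
    """Sync/de-sync parameter after limiting: the minimum of the requested
    value, the robot max property (e.g. VMTran) and the rate (arate/drate)."""
    return min(requested, robot_max, rate_limit)
```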


The Moving Frame parameters ensure the validity of robot motion: they take the moving-frame velocity into account so that the combined motion of the robot and the Moving Frame does not exceed the robot joint velocity limits. They are also used for monitoring the external source velocities and accelerations; if the limits are exceeded, an error is returned. The Moving Frame parameters are:


<Moving Frame>.VelocityMaxTrans – default value: 1000 mm/sec

<Moving Frame>.AccelerationMaxTrans – default value: 1000 mm/sec²

<Moving Frame>.JerkMaxTrans – default value: 1000 mm/sec³

<Moving Frame>.VelocityMaxRot – default value: 100 deg/sec

<Moving Frame>.AccelerationMaxRot – default value: 100 deg/sec²

<Moving Frame>.JerkMaxRot – default value: 100 deg/sec³


Increasing these values will effectively decrease the velocity of the superimposed robot motion while the robot is synchronized to the Moving Frame (PHASE 2).


The other function of these variables is to monitor the source velocity and acceleration and to trigger an error if either exceeds the given value.
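The monitoring side can be sketched as numerical differentiation of consecutive master-source samples, reporting the first violated limit. This is illustrative only; softMC performs the check internally every sample:

```python
def check_source(samples, dt, v_max, a_max):
    """Differentiate consecutive master-source samples and report the
    first violated Moving Frame limit, or "ok" if none is exceeded."""
    prev_v = None
    for i in range(1, len(samples)):
        v = (samples[i] - samples[i - 1]) / dt
        if abs(v) > v_max:
            return "velocity limit exceeded"
        if prev_v is not None and abs((v - prev_v) / dt) > a_max:
            return "acceleration limit exceeded"
        prev_v = v
    return "ok"
```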


3.) Buffering of trigger values. Trigger values are stored in a ring buffer as before, but now each entry carries additional information indicating its relevant Moving Frame. The ring buffer belongs to the robot.


4.) The Trigger and NextItem commands have an additional (optional) parameter specifying which of the Moving Frames is addressed (for cases where a robot is engaged with two conveyors, which normally happens only during transitions):


TRIGGER <robot> {NDOF = <index> Value = <value>} {MasterFrame = <MovingFrame>}

NEXTITEM <robot> {MasterFrame = <MovingFrame>}


The ring buffer of trigger positions is kept per robot.

If the user does not specify the Moving Frame and the robot is engaged with several Moving Frames at the same time, an error is returned.

If the user specifies a Moving Frame that is not currently linked to the robot, the specific item is still handled in the robot's buffer (added or deleted), marked with the given Moving Frame. The NEXTITEM command thus works only on items marked with the given Moving Frame; all others are skipped.

As before, “slave=0” will delete all items from the buffer.
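The per-robot buffering described in points 3 and 4 can be sketched in Python as a ring buffer of (value, frame) pairs. Class and method names are illustrative, not softMC syntax:

```python
from collections import deque

class TriggerBuffer:
    """Per-robot ring buffer of trigger values tagged with a Moving Frame."""

    def __init__(self, size=32):
        self.items = deque(maxlen=size)

    def trigger(self, value, frame):
        """Store a trigger value together with its Moving Frame tag."""
        self.items.append((value, frame))

    def next_item(self, frame):
        """Return the oldest item for `frame`, skipping other frames."""
        for i, (value, f) in enumerate(self.items):
            if f == frame:
                del self.items[i]
                return value
        return None

    def clear(self):
        """slave = 0 deletes all items from the buffer."""
        self.items.clear()
```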


5.) De-sync process. The de-sync process (disconnecting the robot from the MasterFrame) is done in a controlled fashion by a sine-acceleration stopping profile defined by the robot's de-sync parameters. The process is "on the path": from the moment of disengagement, the real motion source is exchanged for a virtual one with the same initial velocity and zero initial acceleration, which reduces the initial velocity to zero along the same direction as the Moving Frame.

The process is initiated with the same command as in previous versions ("slave=0") or by assigning a different MasterFrame.
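A sine-acceleration stopping profile of the kind described (starting at the current velocity with zero acceleration and reaching zero velocity at the end of the de-sync time) can be written as v(t) = v0·(1 + cos(πt/T))/2. This is one common form of such a profile; the exact softMC profile and its duration are not specified here:

```python
import math

def desync_velocity(t, v0, T):
    """Sine-acceleration stopping profile: v(0) = v0 with zero initial
    acceleration, v(T) = 0. Used here only to illustrate the shape."""
    if t >= T:
        return 0.0
    return v0 * (1.0 + math.cos(math.pi * t / T)) / 2.0
```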

6.) The de-syncing process will not stop any currently executing motion issued before or after the moment of de-syncing.

7.) When a movement is entered during the de-sync process, its target point is the same point it would have if it were entered on a free, non-slaved robot.

8.) Absolute target points of motions initiated while the robot was synchronized with the Moving Frame will end, after de-syncing, at a point that combines the Moving Frame's virtual stopping position and the target of the movement. This point cannot be predicted in advance.

9.) A new feature is the ability to change the MasterFrame on the fly; the MasterFrame property is now available both modally and nodally. The behavior of MasterFrame is defined by:

9.1) Setting MasterFrame to "none" initiates the de-sync process and, at its end, sets the "slave" property to zero.

9.2) Setting MasterFrame to a Moving Frame different from the one currently used initiates both de-syncing from the current motion source and syncing to the newly given one. Both processes are active at the same time.

9.3) If the robot is following two Moving Frames, one in the de-sync and the other in the sync phase, adding a third will be rejected with an error.

9.4) Issuing "<robot>.MasterFrame = mf1" to a robot that is already engaged with mf1 is ignored. If the robot is de-syncing from mf1, it is not possible to engage mf1 to this or any other robot until the de-sync is finished.

9.5) "STOP <robot>", "<robot>.Slave = 0" and "<robot>.MasterFrame = none" are equivalent and start de-syncing on all active Moving Frames attached to the robot.

9.6) Setting MasterFrame nodally initiates the sync/de-sync action at the moment of motion start.

10.) Blending/superposition of motions is not affected by the syncing or de-syncing phases.

11.) The "Item missed..." error (30238) starts the de-sync process, but does not stop or cancel any motion entered before the error occurred. This means that if the user catches error 30238 with a user error handler (onError) and continues the task, a motion originally issued for a synchronized robot is executed "as is", i.e., with points totally different from those originally intended.


Functionality Changes


The IsMovingFrameSynchronized flag (IMFS) values have changed; there is now an extended set of values:

(0) - not synchronized: the robot is either not trying to catch the moving frame (slave = 0 or iiw = 0), or it is trying to catch the moving frame but has not yet succeeded, i.e., it is in the sync process (slave = 5 and iiw = 1).

(1) - synchronized: the robot is directly following the moving frame.

(-1) - the robot is de-syncing from the moving frame.

(-2) - the robot is de-syncing from one moving frame while trying to catch a second moving frame.


Important consequence:

the IMFS flag is no longer a binary flag; therefore, code that previously tested the flag as a boolean (treating any non-zero value as "synchronized") must be changed to compare the flag explicitly against 1.
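The original MC-Basic snippets are not reproduced on this page; the required change can be illustrated in Python (treat this purely as logic, not softMC syntax):

```python
def is_synchronized(imfs):
    """Correct test: IMFS must be compared against 1 explicitly."""
    return imfs == 1

# Old boolean-style tests are wrong with the extended value set:
# a de-syncing robot reports IMFS = -1, which is truthy.
assert bool(-1) is True
assert is_synchronized(-1) is False
```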


Syntax changes

Here is a list of syntax changes accompanying this feature upgrade:


New properties:

parameters defining sync & desync behavior:


<Group>.VelocitySyncTran

<Group>.AccelerationSyncTran

<Group>.JerkSyncTran

<Group>.VelocitySyncRot

<Group>.AccelerationSyncRot

<Group>.JerkSyncRot


<Group>.VelocityDeSyncTran

<Group>.AccelerationDeSyncTran

<Group>.JerkDeSyncTran

<Group>.VelocityDeSyncRot

<Group>.AccelerationDeSyncRot

<Group>.JerkDeSyncRot


Parameters for describing Moving Frame.


<Moving Frame>.VelocityMaxTrans

<Moving Frame>.AccelerationMaxTrans

<Moving Frame>.JerkMaxTrans

<Moving Frame>.VelocityMaxRot

<Moving Frame>.AccelerationMaxRot

<Moving Frame>.JerkMaxRot


Trigger commands (additional Moving Frame specification):


TRIGGER <robot> {NDOF = <index> Value = <value>} {MasterFrame = <MovingFrame>}

NEXTITEM <robot> {MasterFrame = <MovingFrame>}

Theory of operation

Trajectory

A new type of moving frame has been added to the system: the rotary moving frame, which represents a circular (arc) movement. This Moving Frame is defined by only one independent variable (NOF = 1), representing the angular increment along the arc. The user gives the arc trajectory of the rotary moving frame with three points:


Location MasterSource
UpStream[1] UpMaster[1]
ArcPoint <none>
DownStream[1] DownMaster[1]

The position part of these three points defines the arc trajectory and the arc angle (α). The arc angle runs from 0 to α. The arc trajectory is defined by:

center – center point of the circle

start – beginning of the arc (unit vector)

normal – auxiliary unit vector normal to the start vector defining the arc-trajectory as:

MF(t).pos = (R+dR)*(start*cos(x) + normal*sin(x)) + center

where x(t) runs between 0 and α.

From here it is clear that:

UpStream[1].pos = center + R*start

and

DownStream[1].pos = center + R*(start*cos(α) + normal*sin(α))
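These relations can be checked numerically. The sketch below evaluates the arc position using the convention that the start vector marks x = 0 (vectors as plain tuples; purely illustrative):

```python
import math

def arc_position(x, center, start, normal, R, dR=0.0):
    """Point on the rotary moving-frame arc at running angle x (radians):
    (R + dR) * (start*cos(x) + normal*sin(x)) + center, componentwise."""
    r = R + dR
    return tuple(c + r * (s * math.cos(x) + n * math.sin(x))
                 for c, s, n in zip(center, start, normal))
```

At x = 0 this reproduces UpStream[1].pos = center + R*start.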


The orientational (rotation) part of the Moving Frame is defined as:

Orientation(t) = Q(x(t) + Δ, v)

where

v is the vector defined by: v = normal x start and

Q is the quaternion defined by the rotation vector and rotation angle about it.

Δ is the initial orientation angle, obtained from the solution of the equation UpStream[1].ori = Q(Δ, v), where .ori is the orientational part of the location as a quaternion.


Only the orientation of UpStream[1] influences the rotary conveyor orientation; the orientation coordinates of DownStream[1] and ArcPoint are ignored.

The running angle x is computed according to:


Tracking

The tracking behavior is the same as for a linear conveyor: conveyor tracking is initiated by "slave=5" or "MasterFrame=<new frame>", with dR defined by the radius offset of the current trigger value. The offset values of the conveyor angle are defined by the trigger value of the master source. At this moment the circle pre-calculation is also activated (computing the circle center and the start and normal vectors).

Therefore, if the arc is given by an invalid set of points (all three lying on the same line), this is detected at one of the following places:

querying <MF>.center while MF is not linked to any robot

assigning <MF> to <robot>.MasterFrame

The tracking process itself begins in the same way as with linear conveyors: the real robot position is obtained by adding the moving-frame running position to the position of a virtual robot. The virtual robot is the position of the robot before tracking started, plus all additional robot movements entered afterwards. Expressed as an equation:

Robotreal = Robotvirtual ⊕ [MF(t) ⊖ MF(0)]


where MF(0) ≡ <MF>.zero and the operators ⊕ and ⊖ are defined by:

C = A ⊕ B:  C.pos = A.pos + B.pos,  C.ori = A.ori * B.ori

C = A ⊖ B:  C.pos = A.pos - B.pos,  C.ori = B.ori⁻¹ * A.ori

where:

.pos – the position part of the location

.ori – the orientation part of the location (quaternion)

+/- – regular three-dimensional vector addition/subtraction

* – quaternion multiplication

⁻¹ – quaternion inverse


Illustration 1: Catching the item on a rotary conveyor "MOVES CNV.ZERO"


Proof of concept

The basic idea behind choosing the above ⊕/⊖ operators instead of a compound transformation is to stay always on the path of the moving frame. For example, if the tracking formula were:

Robotreal = Robotvirtual : [MF(t) : MF(0)⁻¹]


this would in effect move the robot end position according to the virtual robot orientation; in all cases of non-zero orientation the robot end point would deviate from the conveyor path.

The issue of the robot Tool/Base frame is also completely addressed in the tracking scheme used, which then looks as follows:

Robotreal = base⁻¹:(base:Robotvirtual:tool ⊕ [MF(t) ⊖ MF(0)]):tool⁻¹


Robotvirtual := base⁻¹:cmd:tool⁻¹

cmd := target = MF(0) (in case MOVES to cnv.zero was given)

setpoint := base:Robotreal:tool


therefore:

setpoint := base:Robotreal:tool

= base:(base⁻¹:(base:Robotvirtual:tool ⊕ [MF(t) ⊖ MF(0)]):tool⁻¹):tool

= ((base:Robotvirtual:tool) ⊕ [MF(t) ⊖ MF(0)])

= ((base:(base⁻¹:cmd:tool⁻¹):tool) ⊕ [MF(t) ⊖ MF(0)])

= (cmd) ⊕ [MF(t) ⊖ MF(0)]

= (MF(0)) ⊕ [MF(t) ⊖ MF(0)]

= MF(t) ≡ <MF>.here


Q.E.D.

New properties and commands

<Moving Frame>.type

Short Form: none

Syntax: ?<Moving Frame>.Type

Availability: ...

Description: This property defines the moving frame type according to:

0 – linear (default)

1 – rotary

2 – rotary decoupled (same as rotary except that the orientation angle does not change with the master-source)

If the Moving Frame is engaged (linked to a robot by assignment of <robot>.MasterFrame), setting this property returns an error.

Only moving frames with NOF = 1 allow the type property to be set to 1 (rotary).

Type: Long

Range: 0-2

Units: none

Default: 0

Scope: Task or Terminal

Limitations: Read-Write, Modal only

Example: ?CNV.type

See Also