This chapter describes the Trial Control functions of the EthoVision XT Base
version only. For a detailed overview of conditions, creating sub-rules and
controlling hardware devices, see the EthoVision XT Trial and Hardware Control
Manual which you can find on your installation DVD.
7.1 Introduction to Trial Control
why use trial control?
Trial Control allows you to automate your experiment. For example:
 You want to set a maximum duration for your trials.
 See page 182
 You want to automate the start and/or stop of data acquisition.
A few examples:
- Start recording when the rat is first detected in the open field.
- Stop recording when the rat has reached the platform in the Morris water maze.
- Start recording at exactly 12:30:00.
- Stop recording after the animal has been in the closed arms of the plus maze for 5
minutes.
 See page 185
To use Trial Control:
1. Open the Trial Control screen (see page 163).
2. Define the conditions that, when met during your trial, trigger specific actions. Organize
conditions and actions in a sequence (see page 171).
3. Before starting data acquisition, make sure that those Trial Control Settings are active.
See also page 663 for instructions on how to manage Trial Control Settings.
Your EthoVision XT license and Trial Control
Your EthoVision XT license determines which type of Trial Control you can use.
 EthoVision XT Base license – You can define a rule to start and stop data recording (Start-
Stop trial rule; see page 185). You cannot control hardware devices.
 EthoVision XT Base + Trial and Hardware Control Module – You can define a Start/Stop trial rule and, in addition, sub-rules. Moreover, you can control hardware devices. To
acquire data in an experiment made with the Trial and Hardware Control Module, you
must have a hardware key enabled for Trial and Hardware Control plugged in your
computer.
The EthoVision Trial and Hardware Control Manual, which you can find on your
installation DVD, includes extensive information on the functions available with
the Trial and Hardware Control Module.
conditions and actions
A Condition is a statement that EthoVision evaluates. An Action is a command executed on a
variable or a hardware device. You can therefore control your experiment by linking
conditions with actions.
 Example – In a Morris water maze test, stop tracking when the rat is detected on the
platform (provided that the platform has been defined as a zone).
The action is Stop tracking and the condition is Rat detected on the platform.
You define and link conditions with actions in a graphical form. The example above can be represented by the following:
Figure 7.1 A condition is followed by an action. The condition checks that the animal is in the zone named "Platform". The action "Stop track" is taken when the condition is met.
the start-stop trial rule
Conditions and actions are organized in a logical sequence called the Start-Stop trial
rule. This can be viewed as a set of instructions executed for starting and stopping data
recording.
For more information on the Start-Stop trial rule, see page 185.
The Trial Control function also allows you to analyze events that occurred during the trial, or the time between two specific events, for example the time from condition A being activated to action X being taken. For the detailed procedure, see page 193.
With the Trial and Hardware Control add-on, you can also define subroutines
called Sub-rules. The sub-rules are meant to carry out specific actions. They can
start at specific times and be repeated according to user-specified conditions.
For more information, see the EthoVision XT Trial and Hardware Control Manual
on the EthoVision XT installation DVD.
how trial control instructions are executed
The instructions contained in the Trial Control Settings are carried out from the moment you
start a trial, to the moment the trial is stopped. Only the instructions in the Trial Control
Settings currently active (that is, highlighted in blue in the Experiment Explorer) are carried
out.
The program evaluates the Trial Control sequence at each sample time. The rate at which this
happens depends on your chosen sample rate, not on the video frame rate.
The program remembers which Trial Control box was evaluated (active) in the previous
sample. Depending on the type of this box:
 For a Condition box – EthoVision XT checks whether the condition is met. If it is not, the
condition becomes false. The program waits until the condition is met. When this
happens (condition becomes true - see 3 in Figure 7.2), the program passes control to the
next box in the sequence. The condition then becomes inactive (see 4 in Figure 7.2).
 For an Action box – EthoVision XT carries out the action (see 4 in Figure 7.2), and passes
control to the next box, which becomes active. Then, the Action box becomes inactive
(see 5 in Figure 7.2).
 For Sub-rules and their References, see the EthoVision XT Trial and Hardware Control
Manual.
When a box becomes active, the previous one becomes inactive.
 Boxes combined in parallel using operators (see page 178) are evaluated at the same
time, in unspecified order. This means that one cannot establish which condition is
evaluated/which action is taken first.
 Actions on Trial Control variables are executed immediately. Actions on hardware devices
are executed when all boxes that must be evaluated at that sample time have been
evaluated.
 If a box being evaluated contains a condition that is immediately true, the program
passes control to the next box. Therefore, within one sample time the program can pass
control to two or more boxes to the right.
 When you stop the trial or the Maximum trial duration has been reached, all Trial Control
boxes are deactivated.
 When the Rule End box of the Start/Stop trial rule is evaluated, data recording stops. From that moment, Trial Control is deactivated, even in those sub-rules that were still ongoing.
Figure 7.2 Schematic representation of how Trial Control instructions are executed. The scheme shows an example of a Start-Stop trial rule (see page 185).
1 - Tracking starts, either manually or because a previous condition has been met.
2 - Control passes to a Condition box (for example, "Is mouse on top of Shelter?"), which becomes active. The condition is evaluated. Since the condition is not met immediately, it becomes false.
3 - The condition is met.
4 - Control passes to the next box. In this case, it is an Action. Actions are taken immediately.
5 - The Action box becomes inactive, and the next box becomes active.
For clarity, steps 3 and 4 have been placed separately. In reality, when a condition is met it becomes inactive at the same time, and control passes to the next box.
Hatched outlines - Condition box becomes active. Dark outlines - Condition becomes true or Action is taken. Pale outlines - Box becomes inactive.
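For illustration only, the evaluation cycle described above can be modeled in a few lines of Python. The Box class, the evaluate_sample function and the sample dictionaries are invented names for this sketch; they are not part of EthoVision XT.

class Box:
    """One Trial Control box (hypothetical model)."""
    def __init__(self, name, kind, test=None, action=None, next_boxes=()):
        self.name = name
        self.kind = kind                     # "condition" or "action"
        self.test = test                     # condition: callable, True when met
        self.action = action                 # action: callable, carried out immediately
        self.next_boxes = list(next_boxes)

def evaluate_sample(active_boxes, sample):
    """Evaluate all active boxes once per sample time. If a condition is
    immediately true, control cascades further within the same sample."""
    pending = list(active_boxes)
    next_active = []
    while pending:
        box = pending.pop(0)
        if box.kind == "condition":
            if box.test(sample):                 # condition becomes true...
                pending.extend(box.next_boxes)   # ...control passes on at once
            else:
                next_active.append(box)          # still waiting; stays active
        else:                                    # action box
            box.action(sample)                   # actions are taken immediately
            pending.extend(box.next_boxes)
    return next_active

# Example: "Is the animal on the platform?" followed by "Stop track".
stop = Box("Stop track", "action", action=lambda s: print("stop tracking"))
cond = Box("In Platform", "condition", test=lambda s: s["in_platform"], next_boxes=[stop])
active = [cond]
for sample in ({"in_platform": False}, {"in_platform": True}):
    active = evaluate_sample(active, sample)     # prints "stop tracking" at sample 2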
Trial Control in multiple arenas
If your experimental setup includes two or more arenas, Trial Control is applied to each arena
separately. This means that, if a condition is met in one arena, EthoVision XT takes the
corresponding action in that arena, not the others.
In the following example, a setup includes four cages, each defined as an arena. A Trial Control In zone condition (see page 173) has been defined so that tracking starts when the animal is first detected in the arena. When you first put an animal in Arena 2, the condition is met in this arena and tracking starts for that arena. When you release the second animal in Arena 4, 2 seconds later, tracking in that arena starts 2 seconds later than in Arena 2 (see Figure 7.3).
The advantage of Trial Control in multiple arenas is that you can put one animal at a time
into the arenas, and EthoVision XT will start tracking in each arena at the appropriate
moment.
If your setup includes multiple arenas, you cannot define a condition/action specific to one arena. This means that the zone on which a condition is based must be present in all arenas, and have the same name.
 If a zone is not present in an arena, and a condition is based on that zone, Trial Control
cannot progress for that arena. Therefore, tracking does not stop unless you set a
Maximum trial duration or tracking reaches the end of the video.
 At any sample time, Trial Control carries out the instructions for each arena. However,
you cannot establish in which order the arenas are evaluated at a specific sample time.
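Continuing the illustrative Python sketch above (evaluate_sample and all other names are invented), per-arena Trial Control can be pictured as one independent copy of the rule state per arena:

arenas = ["Arena 1", "Arena 2", "Arena 3", "Arena 4"]
initial_active_boxes = []        # e.g. [rule_begin_box] in a real sequence
# Each arena gets its own copy, so conditions progress independently per arena.
arena_state = {a: list(initial_active_boxes) for a in arenas}

def evaluate_all_arenas(samples_by_arena):
    # The order in which the arenas are evaluated at a sample time is unspecified.
    for arena in arena_state:
        arena_state[arena] = evaluate_sample(arena_state[arena],
                                             samples_by_arena[arena])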
Figure 7.3 Trial Control in multiple arenas. The time values displayed on the monitor are the times elapsed since the start of tracking in a particular arena. Tracking started earlier in Arena 2 than in Arena 4 (see text), therefore at any time the Elapsed time (duration of tracking) is longer in Arena 2 than in Arena 4.
7.2 The Trial Control screen
To access the Trial Control screen, click Trial Control Settings 1 in the Experiment Explorer, or
from the Setup menu, select Trial Control Settings, then click Open and select Trial Control
Settings 1. Next, click OK. The Trial Control screen appears, showing the default Trial Control
settings.
To access the Trial Control screen, you can also create a new Trial Control Settings profile, or open one other than Trial Control Settings 1 (see page 663).
The Trial Control screen contains the following (see Figure 7.4):
 The Components pane, listing the conditions on which you can base your actions and the
operators which you can use to combine conditions. See the next page.
 The Trial Control Settings window, showing the Trial Control Settings that are active. It
contains a sequence of boxes connected by arrows. See page 166.
 The Maximum trial duration pane that enables you to define a maximum duration of the
trial. See page 171.
Figure 7.4 The Trial Control Settings screen. A - Components pane. B - Maximum Trial duration pane.
C - Trial Control Settings window.
You can show/hide the Components pane and the Maximum trial duration pane by clicking
the Show/Hide button on the component tool bar and selecting/deselecting the
corresponding option in the menu.
the components pane
With the Components pane (see Figure 7.5) you choose the blocks that build up your trial control rules. Depending on which EthoVision XT license you have on your computer (see page 158), some of the components listed below may not be available on your screen.
If you do not see the Components pane, click the Show/Hide button on the
components tool bar and select Components.
Figure 7.5 The Components pane for Trial Control.
 Structures
- Sub-rule – To define a subroutine that can be called from a specific point of the Trial
Control sequence.
- Reference – To insert a call to a sub-rule within a sequence of instructions.
- Operator – To combine two or more conditions in such a way that an action is taken
when All, Any or "N of All" conditions are met. See page 178.
 Conditions (see page 172):
- Time – To define a condition based on time.
- Time interval – To define a condition based on a time interval.
- Trial Control variable – To define a condition based on a Trial control variable.
- Dependent variables – To define a condition based on a variable that describes the animal's behavior, for example velocity, presence in a zone, movement, etc.
Under Dependent variables, you can view the list of variables available.
- Hardware – To define a condition based on the state of a hardware device (only with
the Trial and Hardware Control add-on).
 Actions
- Trial Control variable – To define an action on a Trial control variable. See page 175.
- Hardware – To define an action on a hardware device (only with the Trial and
Hardware Control add-on).
- External command – To control external applications. With an External command
action you can, for example, start an external application or run a batch file.
How to use the Components pane
To define a sub-rule, condition, action or operator:
 Double-click its name.
 Click the button next to it.
 Drag the name from the Components pane to the Trial Control window.
A new Trial Control box appears in the top-left corner of the Trial Control window. Insert the
new box in the sequence of boxes (see page 169).
For the complete procedure for Programming Trial Control, see page 171.
For more information on Sub-rules, References to sub-rules and hardware
devices, see the EthoVision XT Trial and Hardware Control Manual that you can
find on your installation DVD.
the trial control settings window
The Trial Control Settings window contains the sequences of instructions (rules) currently
present in the Trial Control Settings. When you create a new Trial Control Settings profile, the
Trial Control window contains the default Start-Stop trial rule (see page 185).
You can then define your own conditions in the Start-Stop trial rule that determine the start
and stop of data recording.
For more information:
 About programming Trial Control – See page 171.
 About the Start-Stop trial rule – See page 185.
Grid
The trial control boxes automatically snap to a grid. You can change this by clicking the
Show/Hide button on the component tool bar and selecting/deselecting the two Grid
options (Snap to Grid and Show Grid).
Zoom
The component tool bar of the Trial Control Settings shows three zoom icons:
 Zoom in – You can keep zooming in until the trial control boxes have reached their
original full size.
 Zoom out – You can keep zooming out until all trial control boxes fit in the window.
Figure 7.6 The Trial Control window, with the default Start-Stop trial rule.
 Fit all – Clicking this button fits all trial control boxes into the window.
working with trial control boxes
A Trial Control box contains the following information:
 A - Type of control (Rule Begin/End, Action, Condition, Operator, Reference). You cannot
change this text.
 B - Name – Text describing the control. To change this text, click the Settings button and
enter the text under Name, for example Drop one food item. You can also add a longer
description under Comment (this is not shown).
Names of Trial Control boxes must be unique, unless you make a copy of an existing box
(see page 180).
 C - Properties – Depending on the type of control, this contains the option chosen, the formula or the command to be given, or the sub-rule that the reference refers to.
The Trial Control window is ‘dynamic’: this means that it expands when you
move trial control boxes to the right. In this case, you can navigate ‘from left to
right’ in the Trial Control window by using the scroll bar at the bottom. Use the
Zoom to fit button in the component tool bar to make all trial control boxes
visible.
Figure 7.7 An example of a Trial Control box.
Colors
Trial control boxes have different colors:
 Blue - for the Start-Stop trial rule, sub-rules and sub-rule references.
 Olive green – for conditions.
 Light green – for actions.
 Grey – for operators.
Moving a box
1. Hover the mouse on the margin or the colored area of the box. The mouse cursor
changes to a four-headed arrow.
2. Drag the box to the position you require.
Moving a group of boxes
1. Draw a box around the boxes you want to move (see figure below) or click on the boxes
you want to select while holding the Ctrl key.
As a result, the selected boxes get a dark gray border.
2. Hover the mouse on the margin or the colored area of one of the selected boxes. The
mouse cursor changes to a four-headed arrow.
3. Drag the group of boxes to the position you require.
Inserting a box in a sequence
1. Drag the Trial Control box between two boxes until the connecting arrow turns white.
2. Release the mouse button. The new box is inserted.
Connecting two boxes
1. Point the mouse to the center of the first box, press and hold the left mouse button and
drag toward the center of the other box.
2. Release the mouse button when the pointer has reached the center of the other box. The
two boxes are connected.
- You cannot create connections from the Rule End box to any other box, nor from any
box to the Rule Begin box.
- Operator boxes can have one, two or more input arrows; all other boxes have no more
than one input arrow.
- All boxes can have one or more output arrows, pointing to different boxes.
- You cannot create a circular sequence of Trial Control boxes.
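These connection rules amount to constraints on a directed graph without cycles. The following Python sketch shows how such rules could be checked; it is purely illustrative and all names are invented, since EthoVision XT performs these checks internally.

def validate_connections(boxes, edges):
    """boxes: dict mapping box name to kind ("rule_begin", "rule_end",
    "operator", "condition", "action"); edges: list of (source, destination)."""
    inputs = {}
    for src, dst in edges:
        if boxes[src] == "rule_end":
            raise ValueError("no connections may leave the Rule End box")
        if boxes[dst] == "rule_begin":
            raise ValueError("no connections may enter the Rule Begin box")
        inputs.setdefault(dst, []).append(src)
    for dst, srcs in inputs.items():
        # Only Operator boxes may have more than one input arrow.
        if boxes[dst] != "operator" and len(srcs) > 1:
            raise ValueError(f"{dst}: only Operator boxes may have several inputs")
    # Reject circular sequences with a depth-first search.
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    visiting, done = set(), set()
    def visit(node):
        if node in visiting:
            raise ValueError("circular sequence of Trial Control boxes")
        if node not in done:
            visiting.add(node)
            for nxt in adjacency.get(node, []):
                visit(nxt)
            visiting.remove(node)
            done.add(node)
    for name in boxes:
        visit(name)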
Modifying the settings in a box
Follow the instructions below when you have inserted a Trial Control box, and you want to
modify the properties of that box.
1. Locate the Trial Control box that specifies the condition or operator you want to change.
You can find the name of the condition/operator in the upper green/grey area of the box.
2. Click the Settings button in the lower part of the box.
3. Make the appropriate settings in the window that appears (see the corresponding section
above for defining conditions and operators).
Deleting a box
1. Click the title of the box. The box border is highlighted.
2. Press Delete.
Deleting a group of boxes
1. Draw a box around the boxes you want to delete or click on the boxes you want to select
while holding the Ctrl key.
2. Press Delete.
You cannot delete the Rule Begin box, the Rule End box, the Start track box or the Stop track box.
Deleting a connecting arrow
1. Click the arrow you want to delete. The arrow turns bold to show it is selected.
2. Press Delete.
You cannot delete the arrow connecting the Stop track box and the Rule End box.
Exporting Trial Control Settings
You can export an image of the Trial Control Settings:
1. Click the Export image button in the component tool bar.
2. Select a location to save the image to, type in the File name or accept the default one and
select an image type from the Save as type list.
3. Click Save.
The complete Trial Control window is exported, irrespective of the zoom factor.
maximum trial duration pane
In the Maximum Trial Duration pane you define the maximum duration of the trials. For
further information, see page 182.
If you do not see this pane, click the Show/Hide button on the component tool bar and select Maximum Trial Duration. If the text in this pane is greyed out, the Trial Control Settings are read-only.
Figure 7.8 The Maximum Trial Duration pane.
If you just want to record data for a specific time, you can do so by setting the Maximum trial duration (page 27).
7.3 Programming Trial Control
procedure
1. Before defining Trial Control in the program, it is helpful to draw your experimental
procedure as a flow diagram, where each block represents an action or a condition
which, when met, triggers other actions or conditions.
2. From the Setup menu, select Trial Control Settings, select New, enter a name of the new
Trial Control Settings or accept the suggested one, and click OK. The default Start/Stop
trial rule appears on the screen.
3. Build the Trial Control sequence outlined in step 1, using the components available.
- To define a Condition, click one of the buttons under Conditions.
 See page 172
- To define an Action, click the button under Actions.
 See page 174
Insert the box in the appropriate place in the sequence.
4. Test the Trial Control sequence.
 See page 183
5. Apply Trial Control to your trials.
 See page 183
 When you create a new action or condition, and another of the same type has already
been defined in this or other Trial Control Settings, a message appears asking you
whether you want to create a new element or make a copy of the existing element. For
more information, see page 180.
 You can also combine multiple conditions. To combine multiple boxes, see page 178.
using conditions
A Condition is a statement that EthoVision checks during the trial. When the Condition is
met (True), the program evaluates the next Trial Control element (another condition, an
action or a reference to a sub-rule).
Examples of conditions (in italics):
 When the rat reaches the platform, stop tracking.
 When the mouse is detected in the open field, start tracking.
 When the animal has visited zone A ten times, stop tracking.
How to define a condition
1. In the Components pane under Conditions, locate the type of condition you want to
define.
2. Double-click the condition name or click the button next to it.
3. If the Add a condition window appears, it means that there is at least one condition of the same type in your experiment. You are asked to choose between creating a new condition and re-using an existing one (see page 180). Choose the option you require and click OK. If this window does not appear, skip this step.
4. Next to Condition name, type in the name you want to give to the condition, or accept the
default name.
5. Specify the condition properties.
6. Enter a Comment (optional), then click OK.
7. Insert the condition box in the sequence.
 If the condition is complex (for example, "stop the trial either if the rat has reached the
platform or it has been swimming for 60 seconds"), then you must define separate
conditions and combine them (see page 178).
 See also the examples on page 189.
 For a detailed overview of conditions, see the EthoVision XT Trial and Hardware Control
Manual on your installation DVD.
Types of condition
 Time – Helps you define a time interval that must elapse before an action is taken.
Example – Start tracking after a delay of 2 seconds, or start tracking at 12h00.
 Time interval – This condition makes sense when it is combined with another condition.
Example: Stop tracking when the animal is found in Zone A (In zone condition) between 5
and 10 minutes (Time interval condition).
 Trial Control variable – Helps you make a comparison between a Trial Control variable
and a value, another variable or a formula at the time the condition becomes active (for
the meaning of becomes active, see page 160).
Example – Stop tracking when the variable Counter has reached 10.
 Dependent variables – To define a condition based on the behavior of the subject.
Choose one of the dependent variables to create the condition.
Example 1 – Stop tracking when the subject has visited the Target zone 10 times (In zone condition).
Example 2 – Stop tracking when the subject has been walking for more than 5 minutes
(Movement condition).
Note — You cannot create a Trial Control condition based on one of the behaviors
detected with the Automatic Behavior Recognition function.
 Hardware – To define a condition based on the signal given by a hardware device. To use hardware devices with EthoVision, you must have the Trial and Hardware Control add-on.
See the EthoVision XT Trial and Hardware Control Manual on your installation DVD.
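As an illustration of how a condition based on a dependent variable works, the following Python sketch models an In zone condition with a Frequency or Cumulative duration statistic. The class and its fields are invented for this example and are not EthoVision XT code.

class InZoneCondition:
    """Hypothetical model of an In zone condition with a statistic."""
    def __init__(self, zone, statistic, threshold):
        self.zone = zone
        self.statistic = statistic       # "frequency" or "cumulative_duration"
        self.threshold = threshold
        self.frequency = 0               # number of visits so far
        self.cumulative = 0.0            # seconds spent in the zone so far
        self.was_in_zone = False

    def update(self, in_zone, dt):
        """Feed one sample: whether the subject is in the zone, and the sample
        interval dt in seconds. Returns True once the condition is met."""
        if in_zone:
            self.cumulative += dt
            if not self.was_in_zone:
                self.frequency += 1      # a new visit starts
        self.was_in_zone = in_zone
        value = self.frequency if self.statistic == "frequency" else self.cumulative
        return value >= self.threshold

# Example: the default Start track condition, met after a cumulative
# 1 second of detection in the arena.
start_condition = InZoneCondition("Arena", "cumulative_duration", 1.0)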
using actions
An Action is a command that EthoVision carries out during acquisition and that influences
the trial.
Examples of actions (in italics):
 When the animal is detected in the arena, start tracking.
This is an example of a system action (start tracking and stop tracking are system actions).
 When the animal enters the maze's left arm, do C= C+1.
This is an example of an action taken on a Trial Control variable. See page 175.
 When the animal comes out of the shelter, start video recording with Media Recorder.
The actions Start tracking and Stop tracking are already defined in the Start-Stop trial rule.
Besides these, you can define actions on Trial Control variables.
 You cannot create additional actions of the Start track and Stop track type, nor can you
delete the existing ones.
 If your EthoVision license includes the Trial and Hardware Control add-on module, you
can also define actions on hardware devices. See the EthoVision XT Trial and Hardware
Control Manual on your installation DVD.
How to define a Trial Control variable
1. In the Components pane, click the button next to Trial Control variable under Conditions
or Actions. Next, click the Variables button.
2. The Trial Control Variables window lists the variables currently in the experiment (also
those defined in other Trial Control Settings). To add a new variable, click Add variable.
If you have inserted Condition boxes based on Activity continuous in your Trial
Control rule and then deselect Activity analysis in the Experiment settings (see
page 100), your rule becomes invalid. The Condition boxes based on Activity
continuous are removed from your sequence and the connecting arrows are
removed. Redesign your Trial Control rule and connect the arrows between the
boxes (see page 169).
For a detailed overview of conditions, see “Overview of conditions” in the
EthoVision XT Trial and Hardware Control Manual, which you can find on your
installation DVD.
3. A new row is appended to the table. Under Name, type in the name you want to give to
the variable. Under Initial Value, enter the value of this variable at the start of the trial
(default: 0).
4. Click OK. In the TC-variable action/condition window, define the action or condition you
require. Click Cancel if you do not want to create a condition or action based on this
variable at this point.
 To delete a variable, click the variable name in the Trial Control Variables window and
click the Delete variable button.
 To rename a variable, click the variable name in the Trial Control Variables window and
edit this name.
 The default name of a new Trial Control variable is VarN, where N is a sequential number.
 The variable name cannot contain blank spaces.
How to define an Action on a Trial Control variable
1. In the Components pane, under Actions click the button next to Trial Control variable.
2. If the Add an action window appears, it means that there is at least one action of the same type in your experiment. You are asked to choose between creating a new action and re-using an existing one (see page 180).
3. Next to Action Name, enter the name of the action (for example, Increment Counter) or
accept the default name.
4. Under Action to perform, select the variable from the list. You can also create the variable
by clicking Variables if you have not yet done so.
5. Next to the = symbol, do one of the following:
- To assign the same value of another variable (for example A = B), select the other
variable (B) from the second list.
- To enter a formula, click the double-arrow button.
Select the operator from the list and specify the formula in the second and third lists.
For example, A= A + 1.
- To assign a random value, select Random from the second list, and select the Minimum and Maximum limits (integer values only, from 0 up to 999) within which the random value must lie.
6. Enter a Comment (optional), then click OK.
7. Insert the resulting Action box in the Trial Control rule.
Notes
 If your setup includes multiple arenas, each arena receives an instance of the variable.
Thus, a variable can have different values in different arenas.
 You cannot combine Random with a formula (for example, to compute A = Random + 1). The equivalent solution is the following: first define an action B = Random, and then another action A = B + 1. Place the two resulting Action boxes in sequence.
 To generate a random value, the maximum limit must be greater than the minimum.
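A minimal Python sketch of these rules (illustrative only; the variable store and function are invented), including the two-step workaround for combining Random with a formula:

import random

# Each arena holds its own instance of every variable (see the notes above).
variables = {arena: {"A": 0, "B": 0} for arena in ("Arena 1", "Arena 2")}

def do_action(arena, name, value):
    variables[arena][name] = value

# A = Random + 1 is not allowed as a single action; use two actions in sequence:
for arena in variables:
    do_action(arena, "B", random.randint(0, 999))     # action 1: B = Random
    do_action(arena, "A", variables[arena]["B"] + 1)  # action 2: A = B + 1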
How to define an External command
1. In the Components pane, under Actions click the button next to External command.
2. Next to Action Name, enter the name of the action (for example, start recording) or accept
the default name.
3. Under Actions to perform, select which file you want to run by clicking the ellipsis button.
4. Next, select one of the file types from the list:
- Executables (*.exe).
- Batch Files (*.bat).
- All Files (*.*).
5. Locate the file and click Open.
6. Optionally, enter a Command line option.
Example - You carry out live tracking during a 24-hour period and you want to make a
recording in Media Recorder but only when the animal leaves the shelter (defined as a
Hidden Zone, where it spends most of its time). First, start up Media Recorder using an
External command box: select MRCmd.exe as the Executable to run and enter /E as a
Command line option to start Media Recorder. Next, insert a Condition Out of shelter and
combine this with a Time condition to make sure that Media Recorder is started before
recording starts (see Figure 7.9 for an example). Then, insert an External command box:
select MRCmd.exe as the Executable to run and enter /R as a Command line option to
start recording with Media Recorder. Similarly, you can stop recording (Command line
option: /S) when the animal enters the shelter again.
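For comparison, the following Python sketch shows what the three External command actions in this example amount to. The MRCmd.exe options /E, /R and /S are taken from the example above; the installation path is an assumption, and this is not how EthoVision XT itself launches the command.

import subprocess

# The installation path below is an assumption; adjust it to your system.
MRCMD = r"C:\Program Files\Noldus\Media Recorder\MRCmd.exe"

subprocess.Popen([MRCMD, "/E"])   # start up Media Recorder
# ... when the animal has left the shelter and the Time condition is met:
subprocess.Popen([MRCMD, "/R"])   # start recording
# ... when the animal enters the shelter again:
subprocess.Popen([MRCMD, "/S"])   # stop recording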
Click the Information button to get additional information about defining an
External command.
There may be a delay between the command Start Recording and the moment Media Recorder actually starts recording. Run a test recording to find out how long this delay is.
Figure 7.9 Example of the External command action to start a recording with Media Recorder when the animal leaves a shelter. The left Start MR action box starts up Media Recorder. The Start recording MR action box on the right starts the recording when both the Out of Shelter and Time(1) conditions are true, that is, the center-point of the animal has left the shelter at least 5 seconds after Media Recorder was started.
using operators
The Operators help you combine actions, conditions and sub-rules in various ways. For
example:
 When at least one of the two conditions A and B is met, then do …
This is an example of conditions combined by an operator of the "Any" type (OR logic).
 When two conditions are met at the same time, then do …
This is an example of conditions combined by an operator of the "All" type (AND logic).
 When at least/at most/exactly 4 of 8 conditions are met, then do …
This is an example of conditions combined by an operator of the "N of All" type.
To combine conditions/actions/rules:
1. Define the conditions/actions/rules that you want to combine. Place them in your Trial
Control sequence as parallel branches. The connecting arrows must originate from the
condition/action that precedes the combination of elements you want to define.
2. In the Components pane under Structures, double-click Operator or click the button next
to it.
3. If the Add an operator window appears, it means that there is at least one operator of the same type in your experiment. You are asked to choose between creating a new operator and re-using an existing one. If this window does not appear, skip this step.
- Create a new operator – A new operator is created.
- Reuse an existing operator – Select the name of the operator already present in your
experiment. See page 180 for more information.
Click OK. The Operator window appears.
4. Under Name, enter the Operator name or accept the default name Operator (n), where n is a sequential number.
5. Under Operator triggers when, select the option that applies:
- Any (at least one) of the inputs is 'true'.
- All inputs are simultaneously 'true'.
- N of All inputs are simultaneously 'true'.
Where 'true' means a condition met, an action carried out, or a sub-rule finished
(depending on the elements you want to combine).
- If you choose the third option, specify how many inputs must be 'true': = (exactly equal
to), not= (not equal to), >= (at least), <= (a maximum of), etc. Specify the number in the
box.
6. Enter a Comment (optional) to describe this operator, and click OK.
7. A new Operator box appears in the Trial Control. Place the box right of the elements
defined in step 1, and connect each element (or ending element, in the case of a sequence)
to the operator.
8. Connect the operator to the next element that should be activated.
 Names of operators must be unique in your experiment. You cannot define two
operators with the same Operator name, even if these are defined in two different Trial
Control Settings.
 An Operator can also have just one input box. In that case the operator is of no use,
because control passes immediately to the next box as soon as the input condition
becomes true or the input action is carried out. EthoVision informs you about this.
re-using trial control elements
All elements of Trial Control (conditions, actions, operators, sub-rules and sub-rule references) that you have defined in other Trial Control Settings can be duplicated and re-used in your current Trial Control Settings, reducing the time you spend editing.
To re-use all the elements defined in your current Trial Control Settings profile, make a copy
of it: right-click the profile in the Experiment Explorer and select Duplicate.
How to re-use a Trial Control element
1. Click the button next to the category of element that you want to re-use.
2. The Add window appears. Select Reuse an existing condition/action.
This window does not appear when the experiment contains only one Trial Control Settings profile, or when the experiment contains more Trial Control Settings profiles but none of them contains an element of the same type as the one you have chosen.
3. Select the name of the existing element from the list next to the option.
The second list shows the Trial Control Settings profile that contains that element. If the
element is present in multiple Trial Control Settings, choose the appropriate one from
the list.
4. Click OK.
5. A window appears for the type of element chosen. The Name and settings specified here
are the same as in the element chosen in step 3.
- To create an identical copy of the element, click OK and go to step 7.
- In all other cases, edit the settings and click OK, then go to step 6.
6. If you have changed any property of the new element (including name and comment), a
window appears showing two options:
- Apply the new settings only in the current trial control profile.
- Apply the new settings in all writable Trial Control profiles.
The program asks you whether you want to apply the properties only to the new copy, or
to extend those changes to the original elements in all Trial Control Settings that are
writable (that means, not locked after acquisition). Choose the option you require and
click OK.
7. Insert the resulting box in the Trial Control sequence.
 If you choose the option Apply the new settings in all writable trial control profiles,
changes are not made in those profiles made read-only after data acquisition.
 You cannot re-use a Trial Control element from the same Trial Control Settings. This is
because the Trial Control elements must be unique in order for correct analysis to be
done.
defining a maximum trial duration
If the conditions to stop the trial (see page 185) are never met, EthoVision XT waits indefinitely and the trial never ends. To prevent this from happening, you can define a maximum trial duration. For example, in a novel object test, if you define the condition 'stop the track when the mouse enters the zone with the familiar object', it may happen that the mouse completely ignores the familiar object and only pays attention to the novel object.
 Use a maximum trial duration – Select this check box to define a maximum trial duration
and enter the maximum duration of the trial (in hours, minutes or seconds).
When you set a Maximum trial duration, the trial stops when that time has been
reached, regardless of whether one or more rules are being evaluated.
Instead of using a Maximum trial duration, you can also define a condition based on
time and place it immediately before the Stop track box (see page 185). However, there
are two important differences:
 If you use Maximum trial duration, the program counts the time from the start of the
trial (this is indicated by the Start-Stop trial box). Instead, a condition placed
immediately before the Stop track box considers the time from the start of data
recording (this is indicated by the Start track box). The two starting points may not be
the same if you have a condition between Start-Stop trial and Start track that makes
data recording start some time later than the trial.
 With a multi-arena setup, a Maximum trial duration stops the trial (and thus data
recording) in all the arenas simultaneously, even when data recording had started at
different times. Instead, a time condition placed between the Start track and the Stop
track box stops data recording in one arena when the condition is met in that arena. This
means that you can have data recording stop at different times in different arenas.
For example, you set data recording to start when the animal is detected for the first time (In zone condition). Next, you define a delay condition of 5 minutes immediately before the Stop track box. If the animals are detected for the first time at different times in different arenas, data recording also stops at different times, because the delay is the same for all arenas. The trial ends when the recording stops in the last arena.
testing the trial control sequence
It is not easy to make a complex Trial Control sequence work right the first time. To check that Trial Control works as expected, see "Testing the trial control sequence" in the EthoVision XT Trial and Hardware Control Manual on your installation DVD.
applying trial control to your trials
To apply Trial Control to your trials, make sure that the appropriate Trial Control Settings
profile is highlighted in blue in the Experiment Explorer.
Test your setup thoroughly before carrying out the actual trials (see above).
 For setups with multiple arenas – Trial Control is applied to each arena independently.
 For batch data acquisition – In the Trial List, you can specify which Trial Control Settings
you want to use for a specific trial. For more information, see page 270.
 Locked Trial Control Settings – When a Trial Control Settings profile is used for acquiring
at least one trial, it becomes locked. Locked settings are indicated by a lock symbol in the
Experiment Explorer, and cannot be edited. To edit a locked Trial Control Settings profile,
make a copy of it and edit this copy. See page 663.
 Tracking from video files – When you track from video files, Trial Control checks conditions using video time instead of real time.
- Conditions based on Delays – If you select the Detection Determines Speed option,
Trial Control is carried out at the speed set by EthoVision in order not to skip video
images (see page 280). This results in the video playing faster or slower than normal
(1x), depending on the processor load necessary to detect subjects. For example, if
detection requires little processor work, the program tracks the subject faster than
normal. A Delay condition (for example, Delay 60 s) is therefore met earlier than at real
time.
- Using Clock time – If you define a condition based on clock time, or schedule a sub-rule with Clock time, this is translated into the video start time, that is, the date and time the video file used for tracking was created (see the sketch after this list).
Example 1 – You set a Time condition to start tracking After clock time 11:30. The video
file was created on March 6, 2008 at 11:00. Once you start the trial, the condition is met half an hour later in the video.
If you had set to start tracking After clock time 10:30, tracking would start immediately
after starting the trial.
Example 2 – You set a sub-rule to start at 10:00 (1st day). The video file was created on
March 6, 2008 at 11:00. Once you start the trial, the sub-rule never starts, because the
planned start occurs before the initial time of the video. To make a sub-rule start when
tracking from that video, set the start time between 11:00 and the video end time.
 Recording video, then tracking – If you choose to record video first and then acquire data
from the resulting video file (see page 297):
- When recording video only, Trial Control is turned off. You get an appropriate message
when selecting the Save video file only option in the Acquisition window.
- When you track from that video, Trial Control for Start-Stop is activated, but you
cannot control hardware devices.
 Re-doing a trial – For video files recorded with EthoVision, you can re-do the
corresponding trial (see Redo trials in Chapter 9). However, if you re-do a trial the Trial
Control log files recorded with the previous instance of the trial are deleted.
 Stopping a trial – When you stop the trial, all rules active in the Trial Control Settings are
ended immediately, and hardware devices are reset.
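The clock-time translation in the two examples above is plain time arithmetic, as this illustrative Python sketch shows:

from datetime import datetime

video_start = datetime(2008, 3, 6, 11, 0)       # when the video file was created
clock_condition = datetime(2008, 3, 6, 11, 30)  # "After clock time 11:30"

offset_s = (clock_condition - video_start).total_seconds()
if offset_s <= 0:
    print("condition met as soon as the trial starts")          # the 10:30 case
else:
    print(f"condition met {offset_s / 60:.0f} minutes into the video")  # 30 minutes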
7.4 The Start-Stop trial rule
The Start-Stop trial rule is displayed on your screen when you create or open Trial Control
Settings. With this rule, you control the start and stop of data acquisition (tracking). You can
only modify the initial Start-Stop trial rule.
the default start-stop trial rule
The default Start-Stop trial rule is a sequence of six boxes (but see the exceptions described on page 186):
Figure 7.10 The default Start-Stop trial rule. See explanation in the text.
 Rule Begin - Start-Stop trial – Activated when you start the trial (from the Acquisition
menu, select Start Trial, or click the Start Trial button, or press Ctrl+F5).
Once you start the trial, control passes to the next box.
 Condition - In Zone - Cumulative duration >= 1.00 s When Center-Point is in Arena – This is the default Start track condition. It is fulfilled when the center-point of the subject (or of any of the subjects, in the case of an arena with multiple subjects) has been detected in the arena for 1 second after you started the trial.
If you start the trial and the animal is not detected yet, the program waits until it has detected the animal for 1 second, then starts tracking.
The condition is applied separately to each arena. This means that tracking can start at different times in different arenas in the same trial.
 Action - Start track – Activated when the condition on its left side is met. Once this box is
activated, data recording (tracking) starts. If the condition placed between the Start-Stop
trial box and this box is not met immediately, tracking starts later than the time you
start the trial.
 Condition - Time - Infinite delay (condition never met) – This is the default Stop track
condition. This condition is never met. The trial stops when you give the Stop command
or the time exceeds the Maximum trial duration (when this has been set).
 Action - Stop track – Marks the end of all tracks (and trial).
 Rule End - Start-Stop trial – This box is just the delimiter of the rule; it does not take any action.
Trial Control with Activity analysis
If you selected Activity analysis in the Experiment Settings, the Condition - In zone box is
removed from the default Start-Stop rule. To carry out tracking and activity analysis
simultaneously, and start tracking when your subject is detected in the arena for a specific
time, insert a new In Zone condition box in the Start-Stop rule. For more information on
Activity analysis, see page 100.
Note: If you also select Behavior recognition in the Experiment Settings, the Start-Stop rule is
as described below.
Trial Control with Rat behavior recognition
When you select Behavior recognition under Analysis Options in the Experiment Settings
(page 101), a Time condition is added between the Condition - In zone box and the Action
Start Track box. This means that EthoVision XT waits 20 seconds after detecting the animal
for the first time, before starting actual tracking. This is done because the behavior
recognition algorithms need a number of video frames equivalent to about 18 seconds
before the current frame to recognize behavior.
If this additional condition were absent, the first 18 seconds of the track would contain no behavior data (see Figure 5.3 on page 104).
Figure 7.11 Part of the Start-Stop trial rule of the Trial Control Settings when selecting Behavior recognition
in the Experiment Settings.
The condition “After a delay of 20 seconds” is removed automatically from a Trial
Control rule if you de-select Behavior recognition in the Experiment Settings.
An important distinction: Trial vs. track
 Trial – A Trial can be viewed as a container for the data collected in one recording session.
It starts when you give the Start command in acquisition and stops when the tracks for
all arenas and subjects have stopped.
 Track – A Track corresponds to the actual recording of a subject's position and behavior.
The start of a track may or may not coincide with the start of the trial. This depends on
your Trial Control Settings. If you use the default Trial Control Settings, the track starts 1
second after the animal has been detected in the arena and stops when you stop the
trial.
A Trial may contain one or more tracks. For example, if you track two subjects
simultaneously, each trial includes two tracks, one per subject. Similarly, if your setup
contains four arenas with two subjects each, each trial includes 4 arenas x 2 subjects = 8
tracks.
In a multiple-arena setup, the end of a track does not necessarily mean the end of the trial.
The trial ends when all tracks come to an end.
customizing the start-stop trial rule
Note that you cannot delete the Rule Begin, Rule End, Start track and Stop track boxes.
Furthermore, you cannot define an additional Start-Stop trial rule in the same Trial Control
Settings. To create a new rule, create new Trial Control Settings (see page 171).
Modifying the Start track condition
The default Start track condition is an In zone condition.
 To modify that condition, click the Settings button.
In the window that appears:
- Click Settings and specify the zone in which the animal should be.
- From the Statistic list, specify the time the animal should be in the zone (Cumulative
Duration), or how many times it should visit the zone (Frequency) in order for
EthoVision XT to start tracking.
 To use another condition (for example, start recording exactly 1 minute after starting the trial), first delete the current condition (click that box and press Delete) and insert the new one.
 To start recording as soon as you start the trial, delete the Start track condition: Click the
box immediately before the Start track box and press Delete.
 For an overview of conditions, see page 173.
Modifying the Stop track condition
The default Stop track condition is a Time condition.
 To modify that condition, click the Settings button, and choose the option you require.
 To use another condition, first delete the current condition (click that box and press Delete), insert the new one (see page 172) and re-connect all the boxes (page 169).
7.5 Examples of Start-Stop trial rules
general
Starting data recording at a specific time
You want to start recording at a time you are not in the lab, for example at 23:00 h.
Delete the default Start track condition (see page 187). Define a Time condition (see page 172). Select After clock time and enter 23:00:00. Click OK and place the resulting box before the Start track box.
Before leaving the lab, click the green button to start the trial. The program waits till 23:00 to
start data recording.
If you want to stop tracking when a specific time has elapsed, see page 27.
Keep at least one condition between Start track and Stop track. If you do not do
this, tracking stops immediately after tracking starts, resulting in no data.
If you have inserted Condition boxes based on Activity continuous in your Trial
Control rule and then deselect Activity analysis in the Experiment settings (see
page 100), your rule becomes invalid. The Condition boxes based on Activity
continuous are removed from your sequence and the connecting arrows are
removed. Redesign your Trial Control rule and connect the arrows between the
boxes (see page 169).
For more information on conditions, see Overview of conditions in the
EthoVision XT Trial and Hardware Control Manual.
Stopping data recording after the maximum time has elapsed
Click Settings in the Condition box immediately before the Stop track box. Select After a
delay of and enter the maximum time.
Instead of using a Time condition, you can also use the Maximum trial duration option (see
page 182).
open field (multiple arenas)
Starting data recording when the animal has been detected in the open field. The start
command is given to each arena independently.
In this setup, four open fields are treated as separate arenas. You want to start acquisition
when the animal is detected in the open field independent of what happens in other arenas.
This can be achieved by using the default Start-Stop trial rule. As soon as the subject is detected in an arena, tracking starts for that arena, not the others. This way you do not have
to release all the animals at the same time.
morris water maze
Stopping the trial when the animal has found the platform
In the Arena Settings, make sure that the platform has been defined as a zone. In the Trial
Control Settings, delete the default Stop track condition (see page 187). Next, define an In
Zone condition (see page 172).
 If you want the program to stop recording as soon as the animal is over the platform,
select Frequency as Statistic and choose >= 1. Click Settings and select the platform zone.
 Sometimes the animal swims over the platform, but it does not stop there. In such cases
the program would stop recording while the animal has not ‘found’ the platform. Instead
of selecting Frequency, choose Current duration and set the minimum time the animal must stay on the platform (for example, 3 s). Click Settings and select the platform zone.
Click OK and place the resulting box before the Stop track box.
Stopping the trial either when the rat has found the platform, or when it has been
swimming in the water maze for 60 seconds.
The Arena Settings and the condition "the rat has found the platform" are similar to those in
the example above. The condition "rat swimming for 60 s" can be translated to "delay from the start of tracking >= 60 s".
The track stops when either condition is met. The two conditions are combined with OR logic
(see Figure 7.12).
This solution results in tracks of different duration: less than 60 s for the animals that found
the platform, and 60 s for the others.
Instead of two Condition boxes in the example above, you can also define the In zone condition box and set a Maximum Trial duration (see page 182).
Figure 7.12 Example of a Start-Stop trial rule for a water maze. The trial stops when the animal has been in the platform zone for at least 3 s without a break, or the time since the start of tracking is 60 s.
A - In zone condition that specifies that the animal must be over the Platform zone for at least 3 seconds. Select Current duration >= 3 s. B - Time condition that specifies a delay of 60 s since the track started. C - 'Any' operator box.
eight-arm radial maze
Stopping the trial when the animal has been in four arms within 10 minutes.
This can be done by combining eight In zone conditions (one for each arm) and specifying that at least four of them must be met, no matter which arms the animal visits.
1. Create an In zone condition (see page 173) and specify that the Frequency for Arm 1 must
be >=1. That is, the animal must have visited Arm 1 at least once. Do the same for each of
the other arms.
2. Connect the resulting eight condition boxes in parallel using the N of All operator (see
Figure 7.13).
3. Set the Maximum trial duration (see page 182) to 10 minutes, to stop tracking in case the animal fails to visit four arms within that time.
For more information on "N of All" operators, see page 178.
Figure 7.13 Trial Control sequence for an eight-arm radial maze. The trial must stop when the animal has visited four of the arms at least once.
1, 2, ... 8 - In zone condition boxes for Arm 1, 2, ... 8 respectively. A condition is met when the Frequency of In zone for that arm is greater than or equal to 1. A - Operator that checks that at least four of the eight conditions are met. B - Stop track box. When four conditions are met, the trial is stopped.
7.6 Analysis of Trial Control data
With the EthoVision analysis function you can analyze the events that occur during a trial by
means of statistics or time plots.
 Trial Control events – For example, when exactly does a condition become true?
 Trial Control states – To analyze the time between two Trial Control events. For example,
how much time elapsed from the moment a condition became active to when the
condition became true?
Analysis of Trial Control data is generally carried out for testing purposes, or to analyze the
subject's response to presentation of stimuli (for instance, in conditioning tests).
To analyze Trial Control data, in the Analysis Profile choose Trial Control event to analyze
simple events, or Trial Control state to analyze time intervals between specific events. Next,
calculate statistics (from the Analyze menu select Calculate Statistics) or visualize the data
(from the Visualize menu select Plot Integrated Data).
 If you want to analyze the behavior of your subjects, see Chapter 14.
 If you want to calculate statistics/visualize data of dependent variables in portions of a
track defined by Trial Control events, then you must first define the Nesting intervals in
the Data Profile. See page 473.
Exporting Trial Control data
You can export Trial Control events (for example, Action becomes active, or Condition becomes
true) and Trial Control states (for example, From Action becomes active To Condition becomes
true). For more information, see page 654.
For more information on analysis of trial control data, see “Analysis of Trial
Control data” in the EthoVision XT Trial and Hardware Control Manual, which you
can find on your installation DVD.
 
Chapter 8
Configuring Detection Settings
8.1 Why configure detection settings..................................................... 196
Short introduction to the Detection settings.
8.2 General procedure ............................................................................. 198
8.3 Method settings ................................................................................ 201
To specify how EthoVision XT detects the subject(s) and body points.
8.4 Subject Identification settings .......................................................... 203
To specify how EthoVision recognizes color-marked individuals.
8.5 Video settings .................................................................................... 208
Sample rate, image adjustments and Activity analysis settings.
8.6 Detection settings (detection methods)........................................... 219
Specify how EthoVision separates the subject from the background.
8.7 Subject contour settings.................................................................... 234
Pixel erosion and dilation to smooth the subject contour.
8.8 Subject size settings .......................................................................... 237
To specify the apparent size of the subjects. Includes settings for rat
behavior recognition.
8.9 Working with Nose-tail base detection............................................ 241
To optimize detection of nose-point and tail-base of rodents.
8.10 Detection settings for Rat behavior recognition.............................. 246
8.11 Customizing the Detection Settings screen ..................................... 249
See also Managing Settings and Profiles (page 663).
8.1 Why configure detection settings
EthoVision XT needs a few criteria to track moving subjects.
For example, you need to specify how different the subject is from the background in terms of gray scale or color values, select a method to distinguish the subject from the background, specify how many images per second you want EthoVision XT to analyze, and set the average subject size. Such criteria make up your Detection Settings.
You can define different Detection settings in the same experiment. For example, you can
have one set for detecting white animals, and another to detect dark ones. For more
information, see page 663.
Which settings are available in the Detection Settings window first of all depends on the
version of EthoVision XT:
 EthoVision XT Base version – In this version, you can track the center-point of the body of
a single animal. For the detection of the animal's body, four detection methods are
available. The base version also allows tracking of a color marker on a single animal; in
this case the color marker is treated as the center-point of the animal.
 Multiple Body Points module – With this add-on module, you can track the center-point,
the nose-point and the tail-base of a single animal. For the detection of multiple body
points, three detection methods are available.
 Social Interaction module – This add-on allows you to track two or more animals in one
arena. You can use Color marker tracking or Marker assisted tracking. You can use this
add-on in combination with the Multiple Body Points module to study social interactions
in detail.
 Rat Behavior Recognition module – For detecting a number of behaviors automatically,
including rearing, grooming and sniffing. In the Detection Settings, the Behavior
Settings are enabled.
Tracking multiple subjects requires that you carefully adjust the Detection
Settings. Make sure you follow the General procedure of configuring Detection
Settings in the order described below (see General procedure on page 198).
We recommend tracking from video files only if you use the Multiple
Body Points module in combination with the Social Interaction module.
opening the detection settings
Before opening the Detection Settings, make sure that you have valid Arena Settings.
To open the Detection Settings, do one of the following:
 In the Experiment Explorer, click the folder Detection Settings to expand it and click on
one of the Detection Settings to open the Detection Settings screen.
 From the Setup menu, select Detection Settings. Select Open, select one of the Detection
Settings from the list and click OK.
Result – The Detection Settings screen opens. By default, the Detection Settings,
Video Source and Playback Control windows are displayed. You can use the Show/
Hide button on the component tool bar to change the view settings.
The Detection Settings window
Depending on the number of subjects per arena and the tracked features selected in the
Experiment Settings (see page 91), the layout of the Detection Settings window differs.
Figure 8.1 The Detection Settings window. See the text for an explanation of the letters.
The Detection Settings window contains the following sections (see also Figure 8.1):
 Method (A) – This section contains the methods for subject detection, nose-tail base
detection (if applicable), and options to use a scan window and to apply marker-assisted
tracking.
 Detection (B) – In this section you configure the Subject Detection settings.
 Subject Identification (C) – This section is only available when you have multiple animals.
 Video (D) – In this section you can select your video if you track from video, adjust video
settings if you track live, set the Sample rate and Smoothing settings, and select settings
for Activity analysis.
 Subject Size (E) – In this section you set the subject size for one or more animals. You also
set important parameters for rat behavior recognition (when enabled).
 Subject Contour (F) – In this section you can erode and dilate the detected body to
optimize detection.
8.2 General procedure
Subject detection works well if there is good contrast between the subject and the
background in the video image, and for the whole duration of the trials. Increasing the
contrast (for example, by changing the background so it differs as much as possible in color
from the subject) is far more effective than any detection setting.
Experiment Settings
In the Experiment Settings window (see also page 91):
1. Select the Number of Subjects per Arena.
You can use a pre-defined template to automatically configure detection
settings for commonly used experimental setups (see “Creating a new
experiment based on a pre-defined template” on page 90). After you have done
this, you must still adjust the detection settings (as described in this chapter)
before you can track any animal correctly.
Make sure you carefully follow the order of steps as described below. If a
particular step does not apply to your setup, proceed to the next step.
2. Select one of the options from Detected features.
Method section - 1
Which methods and options are available in the Method section, depends on the Experiment
Settings.
3. Make the following selection:
- Use scan window – Make sure this option is NOT selected while you are configuring
Detection Settings.
- Marker assisted tracking – Select this option when you want to track more than one
animal in the same arena. In all other cases go to step 5.
see page 202
Subject Identification section
4. You can use Subject Identification, if you have multiple subjects per arena and you have
either selected Color marker tracking (treat marker as center-point) in the Experiment
Settings or Marker assisted tracking in the Detection Settings.
see page 203
Video section
5. In the Video section, you can have the following options:
- Select video (only if you track from video) - Click this button and browse to your video
if it is not automatically selected.
- Image (only if you track live). Click this button to adjust the settings of your camera.
Dependent on your camera or frame grabber board, some options may be greyed out.
- Sample rate – The sample rate is the number of video images per second you want
EthoVision XT to analyze among those available.
- Smoothing – Select the option you require.
- Activity (only if you selected Activity analysis in the Experiment Settings – Click this
button to create and view settings for Activity analysis.
see page 208
Method section - 2
Which methods and options are available in the Method section, depends on the Experiment
Settings.
6. Select one of the following:
- Method – You must always select one of these subject detection methods (Gray scaling:
page 220, Static subtraction: page 221, Dynamic subtraction: page 226, Differencing:
page 230).
- Nose-Tail detection – These nose-tail detection methods (Shape-based (XT4), Model-based
(XT5), Advanced Model-based (XT6)) are only available when you have selected
Center-point, nose-point and tail-base detection for a single animal in the Experiment
Settings.
see page 219 for Detection methods and
page 241 for Nose-tail detection
methods
Detection section
7. In the Detection section, you can configure the subject detection method (Gray scaling:
page 220, Static subtraction: page 221, Dynamic subtraction: page 226 and Differencing:
page 230) you selected in the previous step.
see page 220
Subject Contour
8. In the Subject Contour section, set the level of Erosion and Dilation.
 see page 234
Subject Size
9. In the Subject Size section, click the Edit button to set:
- Detected subject size – Here you can set the Minimum and Maximum subject size.
- Modeled subject size – Here you model the subject size when you have multiple
subjects or when you use the Nose-tail detection method Advanced Model-based
(XT6) for one or more subjects.
- Advanced Subject Size settings – Here you can set Maximum noise size, Shape stability
and Modelling effort in case you have multiple subjects or when you use the Nose-tail
detection method Advanced Model-based (XT6) for one or more subjects.
Click the Behavior button (when present) to acquire the size and shape parameters for
rat behavior recognition.
10. Once the subject is detected well, in the Method section, select Use scan window (see
page 202) and click OK.
see page 237
If you select Center-point, nose-point and tail-base detection with 2 or
more Subjects per Arena in the Experiment Settings, the Nose-Tail
detection in the Detection Settings is automatically set to Advanced
Model-based (XT6) and therefore the Nose-tail detection methods are not
displayed.
You are now ready to acquire data (see Chapter 9).
Notes
 Every time you apply changes in the Detection Settings window, you can see the
consequences in the Video Source window.
 To save the detection settings, click the Save Changes button at the bottom of the
window. If you have made more changes and you want to return to the last saved
settings, click the Undo Changes button.
 EthoVision XT offers a number of real-time statistics on the quality of detection that you
can check while you adjust detection settings.
 Keep in mind that detection in the Detection Settings window runs in real time, whereas
during acquisition with Detection determines speed (page 280) the quality of detection
can be better!
8.3 Method settings
For the detection methods, see page 219.
marker assisted tracking
When do I use Marker assisted tracking?
You use marker assisted tracking when you have more than one subject per arena and when
you have NOT selected Color marker tracking in the Experiment Settings (see page 91).
Marker assisted tracking is optimized for use with rodents.
How to use Marker assisted tracking?
In the Method section of the Detection Settings window, select the Marker assisted tracking
check box. The Identification button in the Subject Identification section now becomes
enabled.
Follow the steps in the Subject Identification section below to set up Marker assisted
tracking.
See also Tips for marker tracking on page 207.
What is the difference between Marker assisted tracking and Color marker tracking?
 With Marker assisted tracking, EthoVision tracks the animal's body and uses the marker
to determine the animal's identity. When you use Color marker tracking, EthoVision
tracks just the marker.
 With Color marker tracking, you can track any species (that can be marked) whereas
Marker assisted tracking is optimized for rodents only. With color marker tracking, only
the position of the marker is recorded. The actual shape and size of the animal is ignored.
To use color marker tracking, select Color marker tracking (treat marker as center-point)
in the Experiment Settings (see page 100). Next, in the Detection Settings window adjust
the Subject Identification and Video settings (page 203 and page 208).
See also Tips for marker tracking on page 207.
use scan window
When Use scan window is selected, EthoVision XT finds the subject, 'follows' it and searches
only the area immediately around it in the following video image. Therefore, the scan
window moves with the subject.
Why use a scan window?
Use a scan window for two purposes:
When you do NOT select the Marker assisted tracking check box, you will carry
out unmarked tracking. You can carry out unmarked tracking when you analyze
the variables on a group level (so the identity of the animals is not important) or
when the animals cannot touch.
Only select Scan Window after you have finished configuring the Detection
Settings. Scan window should not be selected while you configure the Detection
Settings.
 To reduce problems with reflections – If a reflection occurs outside the scan window (for
example, waves in a water maze), this is ignored, resulting in fewer detection errors.
However, make an effort to improve lighting to eliminate reflections (see page 58).
 To increase the sample rate without missed samples – With a scan window, your
computer processes data from a small proportion of the video image. This reduces the
average processor load, so you can increase the sample rate, if necessary, without missed
samples (remember that the higher the processor load, the more likely samples are
skipped).
Losing the subject – When the subject disappears from the scan window, EthoVision XT
scans the whole arena to find the subject again, and then re-positions the scan window over
that new location.
For users of previous EthoVision versions – The size of the scan window is automatically
determined by the program and changes during acquisition according to the subject size.
Therefore, you do not need to specify it.
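The following Python sketch (not EthoVision XT code; the frame size, position and the fixed window size are made-up assumptions, since EthoVision XT sizes the window automatically) illustrates why a scan window reduces processor load: only a small crop of each video image has to be analyzed.

import numpy as np

# Illustrative only: search for the subject in a crop around its last known
# position instead of in the full frame.
def scan_region(frame, last_xy, half_size=40):
    x, y = last_xy
    top, bottom = max(0, y - half_size), min(frame.shape[0], y + half_size)
    left, right = max(0, x - half_size), min(frame.shape[1], x + half_size)
    return frame[top:bottom, left:right]

frame = np.zeros((480, 640), dtype=np.uint8)   # one hypothetical video image
window = scan_region(frame, last_xy=(300, 240))
print(window.shape)                            # (80, 80): a fraction of 480x640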
8.4 Subject Identification settings
subject identification
You carry out the procedure described below for either Marker assisted tracking or Color
marker tracking.
1. Put the marked animals in the arena or play the video. Optimize the camera setup (see
page 55), lighting conditions (see page 58) and marker characteristics (page 207).
Make sure you select a point in the video where the animals do not touch each
other!
If you use multiple body point detection, it is normal that the nose is not
correctly identified at this point.
2. In the Subject Identification section, click the name of one of the subjects and click the
Identification button.
Result – The Identification of Subject # window and the Marker detection window open.
You should enlarge the Marker Detection window by dragging its bottom-right corner.
3. Move the mouse pointer to the Marker Detection window so the pointer becomes an
eyedropper.
4. Move the eyedropper on top of the color marker of the subject you want to identify (see
the figure below) and click the left mouse button.
The Identification window now displays the color you just picked and the pixels with the
initial color are highlighted in the Marker Detection window. In the Identification
window, you can change the following (see also Figure 8.2):
- Hue – Hue is the predominant wavelength of the marker color and represents what is
usually referred to as color in everyday life (red, green, blue, etc.). The range of values
for Hue of the picked color is shown and this range is represented by the box on the
vertical color bar on the right.
- Saturation – Saturation represents the purity of a color. Saturation decreases when a
pure color is mixed with white; "red" is saturated, "pink" is less saturated. The range of
values for Saturation is shown and this range is represented by the width of the box
on the Color map.
- Brightness – Brightness (or Intensity) represents the amount of light reflected by the
colored surface. The range of values for Brightness is shown and this range is
represented by the height of the box on the Color map. If you set this range too broad,
you will not be able to separate the colors well.
If the marker is not detected completely or not detected in all areas of the arena, expand
the range of Hue, Saturation and Brightness slightly.
The detected marker can be eroded or dilated to compensate for specific situations. For
example, you can dilate the marker if it is partly masked by cage bars, or you can erode
the marker to make it rounder, which prevents the center-point from jittering.
See Fine-tuning color settings on page 205.
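Conceptually, marker detection selects the pixels whose Hue, Saturation and Brightness all fall within the ranges you set. The Python sketch below illustrates this principle only; the ranges and the random image are hypothetical stand-ins, not values from EthoVision XT.

import numpy as np

# Illustration of range-based marker detection; all values are invented.
rng = np.random.default_rng(2)
hue, sat, bri = rng.integers(0, 256, size=(3, 480, 640), dtype=np.uint8)

in_hue = (hue >= 100) & (hue <= 140)    # e.g. a blue marker
in_sat = (sat >= 120) & (sat <= 255)
in_bri = (bri >= 60) & (bri <= 220)
marker_mask = in_hue & in_sat & in_bri  # pixels matching all three ranges
print(marker_mask.sum(), "pixels match the marker color")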
Fine-tuning color settings
When you first pick a marker color in the Marker Detection window, EthoVision selects all
pixels in the video image with the same initial color. Groups of pixels with this initial color
are highlighted by an outline with the opposite color. Because a marker in the video image
can consist of different shades of the same color, it is possible that initially not the complete
marker is selected (see Figure 8.3).
Figure 8.2 The Identification window and its relation with the HSI color model. A = Color bar: the box
represents Hue which corresponds to an angle on the circle in the HSI color model (for example, 0 degrees
means red, 240 degrees means blue). B = Color map: the height of the box represents the Brightness (or
Intensity) range which corresponds to the vertical position of the color circle. The width of the box
represents the Saturation range which corresponds to the horizontal position on the circle between the
center and the edge.
Figure 8.3 shows part of the Marker Detection window and part of the Identification
window.
You can fine-tune the color settings by adjusting the Hue, Saturation and Brightness in the
Identification window.
5. Change the range of color settings by changing the numbers or by resizing the Hue box on
the vertical color bar, or resizing/moving the box in the color map (horizontally to adjust
Saturation, vertically to adjust Brightness). As a result, the outline covers (almost) the
complete marker (see Figure 8.4).
Figure 8.3 The initial color that is picked in the Marker Detection window (left picture) and the
corresponding range for color settings Hue, Saturation and Brightness in the Identification
window. The arrows indicate how changing the boxes changes the corresponding color setting
6. Next, play the video to see in the Marker Detection window whether the marker is
detected correctly in different parts of the arena.
If the marker 'dances' then your color settings are too sensitive. Go back to step 5 and
make the box larger.
7. Continue with setting the following:
- Marker erosion – Set the number of pixels to erode. By selecting Erode first, then
dilate, you can make the marker more round to prevent the center-point of the marker
from jittering.
- Marker dilation – Set the number of pixels to dilate. By selecting Dilate first, then
erode, you can prevent the marker from being masked or divided into two separate
markers by, for instance, a grid on top of the arena.
- Minimal marker size – Set the Minimal marker size to prevent noise from being detected
as the marker. First, increase the Minimal marker size until noise is no longer
detected. Next, enter a value for the Minimal marker size that is somewhere in
between this lower threshold and the value of the Current marker size.
- Marker pointer – Select a Marker pointer from the list. With relatively small markers it
is useful to select Cross lines.
8. Click OK when you are done.
Repeat steps 2-8 for all subjects you want to identify.
tips for marker tracking
Color characteristics
 Use a color scale (for example from a paint company) to find out which colors are most
easily recognized by EthoVision in your setup and lighting conditions. Do this before
applying color markers to your animals.
 Use colors that have different hue values. For example, use red and green and not red
and orange.
 It may be wise to avoid using red for marking, since it looks like blood.
 Note that marking your animals may stress them, and therefore affect their behavior. If
necessary, ensure that you select a marking method that lasts for a longer period of time.
Figure 8.4 The color of the marker after fine-tuning the color settings. Most of the marker is
now selected as indicated by the white outline (see also Figure 8.3)
Marker characteristics
 Make sure that the marker is as round as possible; this ensures that the relative
movement of the center of gravity of the marker is the same in all directions when the
edges of the marker change due to posture changes or otherwise. For color marker
tracking, this helps to prevent jitter of the marker.
 When you use marker assisted tracking, make sure the marker is not too big; the marker
can interfere with proper detection of the body contour. For example, make sure that a
dark marker on a white animal does not cover the complete width of the animal because
it can cause the body to be split in two.
Lighting conditions
 Use a sensitive camera if possible. A low light intensity makes it difficult to separate
different colors. When it is not possible to use a sensitive camera or strong illumination
in your setup, try using fluorescent marker colors with UV lighting.
 For optimal color separation, illuminate your setup with lamps that approximate daylight
in color temperature, that is, lamps with a wide spectral range.
Subject roles
The names under Subjects in the Subject Identification section are the Subject roles entered
in the Experiment Settings (see page 91). You can use the Subject roles "Control" and
"Treated", for instance, if you plan to give the control animals a blue marker in some trials
and the treated animals the blue marker in other trials. To do this, define multiple sets of
Detection Settings, one for each combination of marker color and treatment level. Before
acquiring the data, make sure that you use the Detection Settings that correspond to the
current animals.
8.5 Video settings
sample rate
The Sample rate is the rate at which EthoVision analyzes the images to find the subject. It is
expressed in samples per second.
Selecting a certain sample rate does not mean that the program can always analyze data at
that rate. If the computer processor load is too high, EthoVision XT may skip a sample and
analyze the next one. Skipped samples result in missed samples (see below).
The maximum sample rate is the frame rate set by the TV standard of your video. For PAL
video, the frame rate is 25 frames/s, therefore the maximum sample rate is 25 samples per
second. For NTSC video, the maximum sample rate is 29.97 samples per second.
The sample rate you set in EthoVision XT can only be the frame rate divided by an integer. For
example, for PAL video it is 25, 12.5, 8.33, etc.
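As an illustration, the short Python sketch below (not part of EthoVision XT) lists the sample rates that follow from this rule:

# The available sample rates are the frame rate divided by an integer.
def valid_sample_rates(frame_rate, max_divisor=5):
    return [round(frame_rate / n, 2) for n in range(1, max_divisor + 1)]

print(valid_sample_rates(25.0))   # PAL:  [25.0, 12.5, 8.33, 6.25, 5.0]
print(valid_sample_rates(29.97))  # NTSC: [29.97, 14.98, 9.99, 7.49, 5.99]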
What is the optimal sample rate?
Setting the correct sample rate is very important. If the rate is too high, the noise caused by
small movements of your animal will be picked up and give an overestimate of dependent
variables such as the distance moved. If the sample rate is too low, you will lose data and
get an underestimate of the distance moved.
The table below gives some general recommendations taken from the published literature.
These sample rates have successfully been used to track animals with previous EthoVision
versions. However, we strongly recommend that you determine the optimum sample rate for
your specific setup and animals (see below). Note that if, for instance, your treatment causes
hyperactivity, you will need a higher sample rate for hyperactive animals than somnolent
animals.
Some digital cameras support very high frame rates. However, this requires a lot
of processor capacity. To prevent EthoVision XT from discarding samples while tracking
live, do not set the frame rate and sample rate too high. After tracking, check the
percentage of missed samples in the Trial list (see page 263) to make sure that
EthoVision XT can handle the selected frame rate.
If you selected both Nose-tail tracking and Marker assisted tracking, we
recommend a sample rate of 12.5 samples per second.
For Rat behavior recognition, select a sample rate between 25 and 31 frames per
second.
Animal Sample rate (samples/second)
Damselfish 5
Goldfish 0.5
Zebrafish larvae (analog camera) 25
Zebrafish larvae (FireWire camera) 30 or 60*
Mites 1
Mouse 12
Parasitic wasps 2
Rat 5
Rodent's nose 25 (PAL), 30 (NTSC)
Tick 3
Tree-shrew (Tupaia) 6-12
* For rapid movements you may want to track with a higher sample rate. It depends on the number
of tracked subjects, the video resolution, the camera settings and the processor speed of your
computer whether that is possible.
The optimal sample rate is the minimum sample rate that provides an accurate estimation of
the dependent variables (distance, velocity, etc.) without including the redundant
information due to phenomena other than the 'real' locomotion. For example, for an animal
walking in a straight line the data points will never be in a straight line because the center-point
of the subject shifts laterally with each step. In order to distinguish between 'real'
movement and effects like the one described above, you can calculate dependent variables
like distance moved using the maximum or a lower sample rate.
1. Create new Detection Settings (see page 198) and specify the maximum sample rate (25
or 29.97, depending on your TV standard). With a FireWire camera this sample rate may
be higher. However, whether this is possible depends on the performance of your
computer, the number of animals you track, and the video resolution.
2. Start Acquisition and acquire data with those Detection Settings (see Chapter 10).
3. Calculate the dependent variable you are interested in (see Chapter 19). Export the data
for example to Excel (see page 653) and plot the dependent variable values against the
sample rate. In the example below, distance moved is used.
4. Repeat steps 1 to 3 by selecting smaller sample rates.
Once the data are plotted as in Figure 8.5, there should be a range of sample rates for which
the dependent variable value does not change much (plateau). This means that slight
changes in the sample rate do not result in loss of information, or addition of redundant
information (noise and movements like body wobble).
Low sample rates result in loss of useful information, because the sinuosity of the original
path is removed. Therefore, the total distance moved is usually decreased (see figure below).
High sample rates result in acquisition of redundant information. In the case of body
wobbling, and assuming that the animal is moving along a straight line, the lateral shift of
the body center causes the total distance moved to be longer than the 'real' one.
With Track Smoothing (see page 401) you can filter out 'noise' as a result of body wobble.
Figure 8.5 Detecting optimal sample rate from a collection of distance moved recorded with different
sample rates.
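The kind of comparison shown in Figure 8.5 can be reproduced with a short Python sketch. The random-walk track below is purely hypothetical; with real data you would use the x-y coordinates exported from your own trials.

import numpy as np

# Hypothetical track sampled at 25 samples/s; subsampling it mimics tracking
# at lower sample rates, so the resulting distance moved can be compared.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0.0, 1.0, size=(2500, 2)), axis=0)

def distance_moved(points):
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

for divisor in (1, 2, 5, 10, 25):      # 25, 12.5, 5, 2.5 and 1 samples/s
    print(25 / divisor, "samples/s:", round(distance_moved(track[::divisor])))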
Missed samples
The actual sample rate may be lower than the maximum you set, because an image cannot
be captured until the previous one is processed. If the sample rate you define is too high,
EthoVision will miss samples (up to 1% is acceptable) and the processor load will be high. The
percentage of missed samples is shown in the Analysis Results and Scoring pane (see
page 251) and in the Trial List as a System Variable (page 262). You can calculate the number of
missed samples in acquired tracks with the Number statistic of continuous variables (e.g.,
velocity). If your processor load is larger than 100, and there are large amounts of missed
values, you will have to lower the sample rate. The following factors may cause the processor
load to be too high:
 Computer memory, processor speed and video card capacity – See the system
requirements on page 38. In general, using a computer with a dual-core CPU helps you to
work with higher sample rates than normal computers do.
 Other programs installed – Do not install other video software (for example, video
editing programs, DVD burning software), because this can interfere with EthoVision's
video processing and cause a reduction in performance.
 Other programs are running – Make sure you shut down all other programs, including
those running in the background such as e-mail programs and virus scanners. These are
usually shown in the System Tray in the bottom-right corner of your screen.
 Windows Classic – Performance will increase considerably if you set the Windows
Theme to Windows Classic when using Windows 7.
 Image resolution – For live video tracking, you can choose the resolution for your video
image in the Experiment Settings (see page 95).
 Size of arenas – Make arenas as small as possible (but including the entire area the
animal can be in).
 Number of arenas – If you track live and use more than four arenas in a trial, check first
that no samples are missed. If the number of missed samples is too high, first make an
MPEG-4 file (provided that you have the Picolo Diligent board installed on your PC), then
track from that. More generally, if you track from video files the number of arenas is
never a problem as long as you select Detection determines speed (see page 280).
When making detection settings, you could start by making an arena definition with
only one arena, which speeds up the detection process. After you have finished configuring
detection settings for one arena, you can add the others to the arena definition.
 Display options – You can decrease processor load by minimizing the number of Track
Features to be displayed (see page 250) and by closing the Analysis Results and Scoring
pane (see page 285).
 Real-time analysis – Hiding the Analysis Results and Scoring pane saves processor
power.
 Detection method – If possible, use the Gray scaling method which requires less
processor load than Static subtraction. Static subtraction requires less processor load
than Dynamic subtraction and Differencing.
 Area to search for subjects – If you cannot achieve the optimum sample rate, make sure
that you select Use scan window (see page 202), but only after you are finished
configuring the detection settings.
Tracking from video files
You can switch the speed at which EthoVision acquires data from real time (1x) to the highest
achievable by the computer, by selecting Detection determines speed (see page 280). This
option allows you to:
 Ensure that you do not lose any frames when the video frame rate is faster than your
processor can handle. The video is played slower than real time, without missed samples.
 Acquire data faster than in real time when the video frame rate is slower than the
processor can handle.
select video
If you track from video, you may want to acquire data from a video that differs from the one
you used to create the Arena Settings. By default, the Detection Settings uses the video you
grabbed a background image from in the Arena Settings. If you want to track from another
video file, click Select Video under Video. Browse to the location of your video and click Open.
This option is only available if you chose track from video files in the Experiment Settings.
image settings
If you track live, you can adjust the live video signal before EthoVision XT analyzes it for
detection. For example, you can adjust contrast and brightness.
Click the Image button under Video. In the window that appears, adjust the properties you
require. Contrast enhances the lighter and darker parts of the image, Brightness makes the
image lighter, Saturation increases the color intensity. The Image Settings also affect the
image that you can save to a video file (see page 213). If you click the Default button, the
settings are reset to the defaults of the camera driver.
After acquisition you can see the proportion of missed samples in the Trial list
(see Chapter 9) as one of the System Variables.
The Image button is only available if your experiment is set to Live tracking. Depending on
the camera, some settings may be greyed out.
Always try adjusting the lighting and camera aperture settings before changing the
Video Adjustment Settings. If you change settings, you need to redefine your detection
thresholds and make a new reference image.
smoothing
In some cases you may want to adjust the quality of the video image before acquiring data. If
your video contains fine-grained noise, this may be improved by using Video pixel
smoothing. If the detected body contour is ‘flickering’, using Track noise reduction may
improve the quality of the track. Click the Smoothing button and adjust one of the options
below.
Video pixel smoothing
Select a Video pixel smoothing value to reduce the influence of fine-grained noise on
detection. Because of fine-grained noise, adjacent pixels that are expected to have the same
(or similar) gray scale value may have very different values. In such cases, EthoVision XT may
occasionally detect groups of pixels as irrelevant subjects.
The Video pixel smoothing option reduces the difference between adjacent pixels prior to
detection, by smudging the image, that is, replacing the gray scale value of each pixel with
the median of the surrounding pixels.
Pixel smoothing does not affect Color marker tracking. It does affect detecting the body
contour in Marker assisted tracking.
Choose one of the values:
 None (default) – No pixel smoothing. The video image is analyzed for subject detection
as it is.
 Low – Each pixel is blended with the 8 nearest pixels (pixel distance =1).
 Medium – Each pixel is blended with the 24 nearest pixels (pixel distance 1 or 2).
 High – Each pixel is blended with the 48 nearest pixels (pixel distance 1, 2 or 3).
Example – A bright pixel (gray value = 240) is surrounded by dark pixels.
If you select Video pixel smoothing = Low, that pixel gets the median value calculated among
the 8 nearest pixels plus that pixel itself. In that case the median is 150, so that pixel will look
darker. If you specify Video pixel smoothing = Medium, the median is calculated over the 24
nearest pixels plus the pixel itself. If you specify Video pixel smoothing = High, an even
bigger group of surrounding pixels is considered.
A high Video Pixel smoothing level requires a significant amount of processor capacity.
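A median filter of this kind is a standard image-processing operation. The sketch below reproduces the idea with SciPy; the kernel sizes are assumptions chosen to match the neighborhood distances described above, not EthoVision XT internals.

import numpy as np
from scipy.ndimage import median_filter

frame = np.full((5, 5), 100, dtype=np.uint8)
frame[2, 2] = 240                      # one bright noise pixel

low = median_filter(frame, size=3)     # 3x3: pixel + 8 nearest pixels
medium = median_filter(frame, size=5)  # 5x5: pixel + 24 nearest pixels
high = median_filter(frame, size=7)    # 7x7: pixel + 48 nearest pixels
print(low[2, 2])                       # 100: the noise pixel is smoothed away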
Why use the Video pixel smoothing option?
 Select a moderate Video pixel smoothing value, or leave None selected, if adjacent pixels
in the background are relatively constant. Using more surrounding pixels for the
smoothing effect does not give better results.
 Select a high Video pixel smoothing value if adjacent pixels in the background are on
average very different. For example, when the cage's bedding material looks grainy. In
such cases you need to smooth each pixel using more surrounding pixels to compensate
for this variation.
Track noise reduction
If the detected center point of your animal is continuously moving, while in fact your animal
is sitting still, the total distance moved will be overestimated. You can use track smoothing
to correct for this after you have acquired your data (see page 401 for more information).
In some cases better quality tracking can be obtained by reducing track noise during
acquisition. This may especially be the case if you use Trial and Hardware Control. As an
example, if the center point of an animal is detected in a zone, you want the pellet dispenser
to drop a pellet. If the detected center point is moving rapidly because of noise, this may
result in a number of consecutive pellets being dropped, every time the center point crosses
the border of the zone. Track noise reduction may solve this problem.
With Track noise reduction, rapid changes in the distance moved will be compensated for
and the path will be smoothed. Using Track noise reduction in the Detection Settings
influences the acquired track, and therefore it is not possible to change it back after
acquisition. This is in contrast to post-acquisition smoothing (see page 401), where you can
use profiles to calculate analysis results with and without those filters applied. Also, do not
use Track noise reduction if you are particularly interested in rapid movements of your
animal, for example, if you study the startle response of zebra fish larvae.
Figure 8.6 shows the effect of Track noise reduction on the walking path of a subject. In this
example the effect on the X-coordinates of the animal is shown.
Using Video Pixel smoothing may result in the loss of information in the video image
that is important for detection, for example, sharp borders of subjects.
Figure 8.6 The effect of Track noise reduction on the walking path of a subject.
Track noise reduction makes use of the Gaussian Process Regression method. Track noise
reduction is applied during acquisition. Hence, it alters the acquired tracks, which cannot be
undone afterwards.
With Gaussian Process Regression, the sample points are smoothed, using the x-y
coordinates of the previous 12 sample points. This differs from the Lowess post-acquisition
smoothing method (see page 403) that uses samples before and after the sample point to be
smoothed. This is not possible during acquisition, because the x-y coordinates of future
samples are not yet known.
If you use nose-tail tracking, the paths of the nose-point and tail-base are smoothed
independently of the path of the center-point.
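EthoVision XT's actual Gaussian Process Regression is more involved than can be shown here, but the Python sketch below illustrates the one property described above: the filter is causal, so each smoothed sample is computed from the current sample and up to 12 previous ones, never from future samples. The weighting scheme is invented for illustration.

import numpy as np

def causal_smooth(coords, window=12):
    # Each output sample uses only the current and previous samples.
    coords = np.asarray(coords, dtype=float)
    smoothed = coords.copy()
    for n in range(len(coords)):
        past = coords[max(0, n - window):n + 1]
        weights = np.linspace(0.2, 1.0, len(past))   # newer samples weigh more
        smoothed[n] = np.average(past, weights=weights)
    return smoothed

noisy_x = [0.0, 0.4, -0.2, 0.3, 5.0, 0.1, -0.3, 0.2]  # one jittery outlier
print(causal_smooth(noisy_x).round(2))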
activity settings
If you selected Activity analysis in the Experiment Settings, you must create settings for this
analysis. To make it easier to judge whether the settings are correct, make sure the detected
Body fill of your subject is not shown in the video window. Click the Show/Hide button in the
top-right corner of your window, select Detection Features and de-select Body fill and Noise.
Then select Activity. Close the Detection Features window and play the video. The detected
pixel change between samples is shown in purple.
Click the Activity button in the Detection Settings window. The Activity Settings window
opens (see Figure 8.7).
Figure 8.7 The Activity Settings window.
 Activity threshold – This value gives the threshold for the difference in gray scale values
between a sample and the previous sample.
 Background noise filter – Use this filter to remove noise in the video, or camera image.
With the background noise filter, a pixel change is only counted as a change, if the
surrounding pixels also have changed. The pixels that are not fully surrounded with
changed pixels are removed and around the remaining pixels a layer of changed pixels is
added. The higher the setting for the background noise filter, the more surrounding
pixels are used. See Figure 8.8 for an explanation.
 Compression artifacts filter – Use the compression artifacts filter to compensate for
video artifacts that are regularly recurring. With the compression artifacts filter, only the
changes that occur in a number of consecutive frames are taken into account. If you
track live, we recommend that you leave this setting on the default value Off. Set it to On
if you track from video, or if you select Redo tracking. However, if you are interested in
very brief or fast-occurring changes, leave the Compression artifacts filter set to Off.
Create settings in such a way that all activity of your animal is detected and some noise is
left. Also check whether lowering the sample rate (see page 208) or using Video Pixel
Smoothing (see page 214) improves Activity detection. Then, click the Show/Hide button
once more and select Detection Features. De-select Activity and select Body fill. Then create
detection settings for your subject. Or, if you need different sample rates for activity analysis
and tracking, create separate detection settings for tracking.
It is also possible to only carry out activity analysis and not create detection settings for
tracking. However, if you do so, EthoVision XT may have so much difficulty detecting the
animal that the performance of acquisition decreases. This may result in many missed
samples. Therefore, while creating activity settings, check that the proportion of missed
samples does not become too high (see also “Missed samples” on page 212).
Figure 8.8 Background noise filter with the value 1. The black squares represent pixels that have
changed in two consecutive samples. First, all pixels that are not completely surrounded by one layer of
changed pixels are removed (red squares). Then, one layer of changed pixels is added around the
remaining pixels. The thin red hairline shows the original changed pixels.
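In image-processing terms, the filter in Figure 8.8 behaves like a binary erosion followed by a dilation. The SciPy sketch below reproduces this behavior under that assumption; the pixel grid is invented.

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

changed = np.zeros((9, 9), dtype=bool)
changed[3:7, 3:7] = True               # a genuine blob of changed pixels
changed[0, 0] = True                   # an isolated noise pixel

square = np.ones((3, 3), dtype=bool)   # "fully surrounded" neighborhood
filtered = binary_dilation(binary_erosion(changed, square), square)
print(filtered.sum(), filtered[0, 0])  # 16 pixels kept; noise pixel removed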
8.6 Detection settings (detection methods)
which detection method should i use?
There are four methods available to distinguish the animal from the background:
Use Gray scaling when:
 The animal's grayness differs from the background in all places that can be visited.
 The background cannot change during a trial.
 Lighting is even (minimal shadows and reflections) during the trial.
Example – tracking a white rat in a uniform black open field with no bright objects.
Use Static Subtraction when:
 The Gray scaling method does not work (because other objects in the arena have a
similar color as the animal).
 The background does not change in time.
 The light is constant during the trial.
Example – Tracking a white rat in an open field with unavoidable reflections or bright
objects.
Use Dynamic Subtraction when:
During trials light conditions gradually change or the background changes (bedding
material is kicked around, food pellets are dropped, droppings appear etc.).
Example – Tracking a mouse in a home cage provided with bedding material. The
activity of the mouse causes the bedding to change appearance in the video image.
Use Differencing when:
There is a lot of variation in contrast between a subject and the background within an
arena. Variation in contrast can be caused, for example, by a gradient in light intensity in
the arena or in the fur of the animal, e.g. hooded rats.
detection method: gray scaling
How does the Gray scaling method work?
The video image is converted to monochrome. Each pixel in the image has a gray scale value,
ranging from 0 (black) to 255 (white). With Gray scaling, you define which range of gray scale
values should be considered as the subject. The remaining gray scale values are considered
as background.
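In essence, Gray scaling is a double threshold on the gray scale image. A minimal Python sketch, in which the range values and the random image are hypothetical:

import numpy as np

gray_low, gray_high = 180, 255        # e.g. a white rat on a dark background
frame = np.random.default_rng(1).integers(0, 256, size=(480, 640),
                                          dtype=np.uint8)

# Pixels inside the range count as subject, everything else as background.
subject_mask = (frame >= gray_low) & (frame <= gray_high)
print(subject_mask.sum(), "pixels classified as subject")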
Procedure
1. Select Gray scaling in the Methods section of the Detection Settings window.
2. Insert the subject in the arena, or position the media file at a point where the subject is
moving.
With the Gray scaling method selected in the Detection settings window, it is not possible to
grab a frame or to select another video file because the Gray scaling method does not use a
Reference image.
3. In the Detection section, move the two sliders next to Select range or type the values in
the corresponding fields to define the lower and higher limits of gray scale values (range
from 0 = black to 255 = white) of the animal. The background must not contain gray scale
values within these limits.
4. Check on the Video window the quality of detection resulting from the current gray scale
range. The detected subject shows the features and colors you have chosen in the Track
Features window (see page 250).
- If the detected area is too small relative to the real subject, you need to increase the
range (at least in one direction - brighter or darker).
- Areas marked as Noise (by default, these are shown in orange; see page 250), indicate
that the gray scale range is too wide – you need to narrow it in at least one direction.
5. Move the sliders until the subject (or the part which is of interest) is detected fully, and the
noise is minimized. Check that the subject is properly detected in all parts of the arena by
moving the video slider, or by waiting for the live animal to move.
detection method: static subtraction
How does the Static subtraction method work?
The video image is converted to monochrome. Each pixel in the image has a gray scale value,
ranging from 0 (black) to 255 (white). With the Static subtraction method, you choose an
image of the arena without the subject, named Reference Image. When analyzing the
images, EthoVision XT subtracts the gray scale value of each pixel in the reference image
from the gray scale value of the corresponding pixel in the current image (live or from video).
The pixels with non-zero difference are considered the subject.
You can remove small non-zero differences by defining the contrast between current image
and background that must be considered as the subject (see the procedure below). The
remaining pixels are considered as the background (see Figure 8.9).
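The sketch below illustrates the principle in Python; the reference image, subject position and contrast thresholds are invented, and real detection involves more than this single comparison.

import numpy as np

reference = np.full((480, 640), 120, dtype=np.int16)  # arena without animal
current = reference.copy()
current[200:220, 300:330] = 230                       # a bright subject

diff = current - reference                            # signed difference
bright_contrast, dark_contrast = 30, 30               # slider values (0-255)
subject_mask = (diff > bright_contrast) | (diff < -dark_contrast)
print(subject_mask.sum(), "subject pixels")           # 600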
Procedure
1. Select Static subtraction in the Method section of the Detection Settings window.
It is important that the complete animal's body is detected for optimal
tracking. Proceed with the Contour adjustments (see page 234) to optimize
body detection.
Figure 8.9 An example of how the Static subtraction detection method works. The gray scale value of each
pixel of the reference image is subtracted from the gray scale value of the corresponding pixel in the live
image. Where the two images are identical, the result is 0; pixels for which the difference is greater than 0
and within the contrast range you have set are considered to be the subject. So, with this method your task
is to specify the contrast that optimizes the detection of the subject.
2. Under Detection, click the Settings button next to Reference Image. The image on the left
is the Reference Image that is used at the start of the track. The options on the right of this
window are greyed out.
The aim is to obtain a reference image that does not contain images of the animals you want
to track. To do so, follow the instructions below in consecutive order. If A fails, move on to B,
if that fails move on to C.
1. Grab Current (A) - Scroll through the video until you find an image without animals. If
you track live, make sure that there are no animals in the arena. Click Grab Current (A).
This image will be the initial reference image. Skip steps 2 and 3 and click Close.
If your video does not contain images without animals, continue with option 2. Also
continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from other (B)- You may have a video with an identical background as the one you
use for tracking, but without animals. Or you may have an image of a background without
animals. If this is the case, click Grab from Other and select this video file or image file. If
you select a video file, the first frame of this file will be used as an initial reference image.
If you select an image file, this has to have the same resolution as the video file you use for
tracking. Browse to this file and click Open. Skip step 3 and click Close. If you do not have
such video or image, proceed with option 3.
Figure 8.10 The Reference Image window of static subtraction and live tracking. If you track from video
file, the text in this window is slightly different but the options are the same. Follow the procedure in
consecutive order until the left image is without animals.
By default, the reference images are stored in the folder Bitmap Files of your experiment. If
the background has not changed, you can use these images as reference images in other
experiments.
3. Start learning (C) - With this option an average image of the entire video will be made. If
the animals are moving, learning will average out the pixels of the animals, resulting in an
initial reference image without animals.
If you track live, you have to click Start Learning, and subsequently click Stop Learning as
soon as you have obtained an initial reference image without animals.
4. Click Close when you are finished grabbing a Reference Image.
5. From the Subject is … than background list, select one of the following, depending on the
color of the subject you want to track:
- Brighter than background – For example, to track a Wistar rat in a black open field.
- Darker than background – For example, to track a C57BL6 mouse in an open field with
white bedding.
- Brighter and darker than background – For example, to track a DBA2 mouse in a home
cage with white background and a black shelter, or a hooded (black and white) rat in a
uniform gray open field.
Result – Depending on the selection above, different contrast sliders become available:
- For Brighter than background – Bright Contrast slider.
- For Darker than background – Dark Contrast slider.
- For Brighter and darker than background – Both sliders.
For each slider, the contrast varies from 0 (no contrast) to 255 (full contrast).
Unlike with Gray scaling, the values selected with the sliders represent the difference
between the current and the reference image, not absolute gray scale values.
Figure 8.11 The Learning process in the Reference Image window. A-The video image in which the
animal is in the view at all times, B-The result of applying Learn: the moving animal is removed from
the background.
When the subject is brighter and darker than the background, detection only works well
when there is enough contrast between the areas of different brightness and the
background. For example, tracking a hooded rat works well when the background is
intermediate between black and white.
6. Release the subject in the arena, or position the media file at a point where the subject is
moving.
7. Move the appropriate slider or type the values in the corresponding fields to define the
lower and higher limits of the contrast that corresponds to the subject.
In the Video window, check the quality of detection.
Example 1 – The subject is brighter than the background. Only the whiter area of the
subject is detected.
 Move the Bright Contrast slider to the left to increase the range towards values of
lower contrast between subject and background.
Example 2 – The subject is darker than the background. Its body is detected only partially in
the area of lower contrast.
 Move the Dark Contrast slider to the left to increase the range towards lower values of
contrast between subject and background.
Example 3 – The subject is brighter and darker than the background. Only the darker areas
of the black fur are detected.
 Move the Bright Contrast slider to the left to increase the range towards less contrast
between the subject's white areas and the gray background. Then, move the Dark
Contrast slider to the left to increase the range towards less contrast between the
subject's black areas and the background.
8. Move the sliders until the subject (or the part which is of interest) is detected fully, and
the noise is minimized. Check that the subject is properly detected in all parts of the arena
by playing back different parts of the video file, or by waiting for the live animal to move.
It is important that the complete animal's body is detected for optimal
tracking. Proceed with the Contour adjustments (see page 234) to optimize
body detection.
detection method: dynamic subtraction
How does the Dynamic subtraction method work?
Like with Static subtraction (see page 221), the program compares each sampled image with
a reference image, with the important difference that the reference image is updated
regularly. This compensates for temporal changes in the background.
With Dynamic subtraction, the reference image is updated at every sample. You specify the
percentage contribution of the current video image to reference image.
Procedure
1. In the Method section of Detection Settings window, select Dynamic subtraction.
2. In the Detection section, click the Reference Image Settings button. Create reference
images without animals, following the procedure under “Reference image” on page 228.
3. From the Subject is … than background list, select one of the options from the list,
depending on the color of the subject you want to track (see step 5 on page 223 for details).
4. Move the slider next to Current frame weight or enter the value in the appropriate field to
specify how the reference image is updated (range 0-100%):
- In typical situations, a value between 1 and 5 gives a good result.
- Select a low value if you want to have a large number of past images to contribute to
each reference image. As a result, changes in the background are diluted over many
images. Choose a low value when the background changes slowly.
- Select a high value if you want to have a small number of past images to contribute to
each reference image. As a result, changes in the background are captured over short
time. Choose a high value when the background changes rapidly, for example, when
the subject is very active and moves the bedding material around.
- If you select 0, the reference image is not updated. This is the same as using Static
Subtraction.
- If you select 100, each sample gets its own reference image with no contribution by
the past images.
- Changing the Current frame weight does not affect the processor load significantly.
Figure 8.12 In the Dynamic subtraction detection method, the Reference image is updated at each sample.
The starting reference image is the one you specify by clicking the Grab from Video, Grab from Camera, or
Grab from Other button in the Reference Image window (see page 228), otherwise it is the first frame
analyzed (not shown in the picture). For the general sample n, the reference image is obtained by summing
the reference image of the previous sample n–1 and the current image n where the area around the subject
estimated from the previous sample has been removed. The current image with subject removed is given
the weight α that you specify (see the procedure), while the previous reference image is given the weight
(1 – α). Because of the way it is determined, each reference image contains information on a number of past
images, depending on the value of α. See the text for more information.
To find the optimal Current frame weight, set a value and carry out one or
more trials. Evaluate if the tracking was satisfactory. If not, increase or
decrease the setting by 20% and try again.
It is important that as much of the animal's body as possible is detected for good
tracking. Proceed with the Contour adjustments (see page 234) to optimize
body detection.
Reference image
Under Detection, click the Settings button next to Reference Image. You now see two video
images. The image on the left is the Reference Image that is used at the start of the track.
The image on the right is the Reference Image that is continuously updated during tracking.
The aim is to obtain reference images that do not contain images of the animals you want to
track. To do so, follow the instructions below in consecutive order. If A fails, move on to B, if
that fails move on to C etc.
1. Grab Current (A) - Scroll through the video until you find an image without animals. If
you track live, make sure that there are no animals in the arena. Click Grab Current (A).
This image will be the initial reference image. Skip Steps 2-4 and click Close.
If your video does not contain images without animals, continue with option 2. Also
continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from other (B)- You may have a video with an identical background as the one in the
video you track from, but without animals. Or you may have an image of a background
without animals. If this is the case, click Grab from Other and select this video file or
image file. If you select a video file, the first frame of this file will be used as an initial
reference image. If you select an image file, this has to have the same resolution as the
video file you use for tracking. Browse to this file and click Open. Skip steps 3 and 4 and
click Close. If you do not have such video or image, proceed with option 3.
Figure 8.13 The Reference Image window for dynamic subtraction and tracking from video file. If you track
live, the text in this window is slightly different but the options are the same. Follow the procedure in
consecutive order until both images are without animals.
By default, the reference images are stored in the folder Bitmap Files of your experiment. If
the background has not changed, you can use these images as reference images in other
experiments.
3. Start learning (C) - With this option an average image of the entire video will be made. If
the animals are moving, learning will average out the pixels of the animals, resulting in an
initial reference image without animals.
If you track live, you have to click Start Learning, and subsequently click Stop Learning as
soon as you have obtained an initial reference image without animals.
If this step results in a satisfying initial reference image, skip step 4 and click Close. If not,
proceed with step 4.
4. Grab Dynamic Image (D) - If options 1 to 3 do not result in a satisfying initial reference
image, using the current updated reference image as the initial reference image may
solve the problem. Click Grab Dynamic Image (D) below the dynamic reference image.
Acquisition settings - If you run a number of consecutive trials, you may want to choose
which image to use as initial reference image.
 Use saved reference image - Use this option if the background remains constant
between the different trials.
 Use dynamic reference image - Use this option if the background changes between the
different trials.
Grabbing the reference image is optional with the Dynamic Subtraction method. If you do
not grab one, EthoVision XT takes the first available sample or video frame as the initial
reference image.
If you are tracking from video files, you must play the video forward whilst making the
dynamic subtraction settings, because the program needs to update the reference image. Do
not skip through the file, otherwise the reference image will not be built correctly.
How is the reference image updated?
A video stream is composed of a number of video images (frames). During data acquisition,
EthoVision XT analyzes one in every x images, according to the sample rate specified (see
page 208). When analyzing sample (image) n, the reference image is obtained by
combining the gray scale values of each pixel from two images:
 The previous reference image, whose pixels hold an average value over past images.
 The current image, where a square area around the subject detected in the previous
sample has been removed. This provides a rough estimate of the current background.
The gray scale values are combined according to the formula:
Reference(i,n) = (1 - α) * Reference(i,n-1) + α * Current(i,n)
for each pixel i, where:
 Reference(i,n) = gray scale value of pixel i in the reference image of sample n.
 Reference(i,n-1) = gray scale value of pixel i in the reference image of sample n-1.
 Current(i,n) = gray scale value of pixel i in sample n, where a square area around the
subject previously detected has been removed.
 α = Current Frame weight.
The Current Frame weight determines the relative weight of the two components of the new
reference image.
Because the above formula is recursive, that is, each value of Reference(i,n) is also a function of
the previous sample, the value of α determines the number of past images that contribute to
the reference image for sample n. The lower α, the more past images contribute at least
partially to the current reference image.
The extent to which each past image contributes to the current reference image is a power
function of 1-α. The older an image relative to the current one, the smaller its contribution to
the reference image.
 Example – If α = 20%, then 1-α = 80%. The first video image contributes 80% to the
reference image of the second sample, 80%² = 64% to that of the third sample, 80%³ ≈ 51%
to that of the fourth sample, and so on. From the 22nd sample onward, the contribution of
the first image is below 1%.
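For illustration, the recursive update amounts to an exponentially weighted moving average. A minimal NumPy sketch of the formula above (the function name, array layout and α value are ours; this is not EthoVision XT code):

```python
import numpy as np

def update_reference(reference, current, alpha=0.20):
    """One update step of the running reference image.

    reference -- reference image of the previous sample (2-D float array)
    current   -- current image, with a square area around the previously
                 detected subject already masked out
    alpha     -- the Current Frame weight
    """
    return (1.0 - alpha) * reference + alpha * current

# The contribution of an image that is k samples old decays as (1 - alpha)**k:
alpha = 0.20
for k in (1, 2, 3, 21):
    print(k, round((1 - alpha) ** k * 100, 1))  # 80.0, 64.0, 51.2, 0.9 (%)
```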
detection method: differencing
How does the Differencing method work?
As with Dynamic subtraction, the Differencing method updates the reference image over
time. Differencing makes a statistical (probabilistic) comparison between each pixel in the
reference image and the pixels of the current image. The statistical comparison uses the
variance in the contrast between the current and reference image to calculate the
probability that each pixel is the subject.
In most cases, the Differencing method detects the subject better than the other two
subtraction methods.
The Differencing method places a higher load on the processor than the subtraction
methods. Therefore, when using Differencing, make sure your computer meets the system
requirements as specified on page 38.
Procedure
1. In the Method section of the Detection Settings window, select Differencing.
2. In the Detection section, click the Reference Image Settings button. Create reference
images without animals, following the procedure under Reference image on page 231.
3. From the Subject is … than background list, select the option that matches the color of the
subject you want to track (see step 6 at page 223 for details).
4. Next, if necessary, adjust the position of the Sensitivity slider and change the option
selected in the Background Changes list.
The Sensitivity slider determines what difference in contrast from the background is
seen as the animal. For an image with good contrast, there is no need to change the
slider. For images with less contrast, adjust the position of the slider to the right or the
left until the subject is properly detected.
In the Background Changes list you can select options that reflect how fast the
background changes. For example, a cage with bedding might change a lot because the
animals kick around the bedding material. If this is the case, select 'Medium fast' or faster
to prevent background changes from interfering with detection. Usually, 'Medium
slow' works just fine.
Reference image
Under Detection, click the Settings button next to Reference Image. You now see two video
images. The image on the left is the Reference Image that is used at the start of the track.
The image on the right is the Reference Image that is continuously updated during tracking.
The aim is to obtain reference images that do not contain images of the animals you want to
track. To do so, follow the instructions below in consecutive order: if option A fails, move on to
option B; if that fails, move on to option C, and so on.
It is important that as much as possible of the animal's body is detected for
good tracking. Adjust the Subject Contour settings (see page 234) to optimize
body detection.
1. Grab Current (A) - Scroll through the video until you find an image without animals. If
you track live, make sure that there are no animals in the arena. Click Grab Current (A).
This image will be the initial reference image. Skip Steps 2-4 and click Close.
If your video does not contain images without animals, continue with option 2. Also
continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from other (B) - You may have a video with the same background as the one in the
video you track from, but without animals. Or you may have an image of a background
without animals. If this is the case, click Grab from Other and select this video file or
image file. If you select a video file, the first image of this file will be used as an initial
reference image. If you select an image file, this has to have the same resolution as the
video file you use for tracking. Browse to this file and click Open. Skip steps 3 and 4 and
click Close. If you do not have such a video or image, proceed with option 3.
By default, the reference images are stored in the folder Bitmap Files of your experiment.
If the background has not changed, you can use these images as reference images in
other experiments.
Figure 8.14 The Reference Image window for differencing and tracking from video file. If you track live, the
text in this window is slightly different but the options are the same. Follow the procedure in consecutive
order until both images are without animals.
3. Start learning (C) - With this option an average image of the entire video will be made. If
the animals are moving, learning will average out the pixels of the animals, resulting in an
initial reference image without animals.
If you track live, you have to click Start Learning, and subsequently click Stop Learning as
soon as you have obtained an initial reference image without animals.
If this step results in a satisfactory initial reference image, skip step 4 and click Close. If not,
proceed with step 4.
4. Grab Dynamic Image (D) - If options 1 to 3 do not result in a satisfactory initial reference
image, using the current updated reference image as the initial reference image may
solve the problem. Click Grab Dynamic Image (D) below the dynamic reference image.
Acquisition settings
If you run a number of consecutive trials, you may want to choose which image to use as
initial reference image.
 Use saved reference image - Use this option if the background remains constant
between the different trials.
 Use dynamic reference image - Use this option if the background changes between the
different trials.
Grabbing the reference image is optional with the Differencing method. If you do not grab
one, EthoVision XT takes the first available sample or video frame as the initial reference
image.
If you are tracking from video files, you must play the video forward whilst making the
differencing settings, because the program needs to update the reference image. Do
not skip through the file, otherwise the reference image will not be built correctly.
How is the reference image updated?
The Differencing method uses a Gaussian distribution of all pixels in a frame. EthoVision XT
keeps a running average of the mean μ and the variance σ² of the gray value of each pixel to
detect unlikely pixels. These pixels are considered to be the subject.
The mean of the gray values is updated according to the same formula as for Dynamic
subtraction (see page 229).
The variance of the gray values is updated according to the following formula:
Variance(i,n) = (1 - α) * Variance(i,n-1) + α * (Current(i,n) - Reference(i,n))²
for each pixel i, where:
 Variance(i,n) = variance of the gray scale value of pixel i in the reference image of sample n.
 Variance(i,n-1) = variance of the gray scale value of pixel i in the reference image of sample n-1.
 Current(i,n) = mean gray scale value of pixel i in sample n, where a square area around the
subject previously detected has been removed.
 Reference(i,n) = mean gray scale value of pixel i in the reference image of sample n.
 α = Current Frame weight.
The Current Frame weight determines the relative weight of the two components of the new
reference image (see the example on page 230).
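As an illustration of the two update formulas, here is a minimal NumPy sketch of the running mean and variance per pixel, followed by a plausible 'unlikely pixel' test. The threshold of k standard deviations is our assumption for how a Sensitivity-like setting could act; it is not documented EthoVision XT behavior:

```python
import numpy as np

def update_stats(mean, var, current, alpha=0.20):
    """One update step of the running mean and variance of each pixel's
    gray value (all arguments are 2-D float arrays; `current` has the
    area around the previously detected subject masked out)."""
    mean = (1.0 - alpha) * mean + alpha * current               # Reference(i,n)
    var = (1.0 - alpha) * var + alpha * (current - mean) ** 2   # Variance(i,n)
    return mean, var

def unlikely_pixels(mean, var, current, k=3.0):
    """Flag pixels whose gray value deviates more than k standard
    deviations from the reference mean; these are candidate subject
    pixels (k is our stand-in for a sensitivity setting)."""
    return np.abs(current - mean) > k * np.sqrt(var + 1e-9)
```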
8.7 Subject contour settings
contour erosion and dilation
Before you start setting the Contour adjustments
It is important that the complete body of the animal is detected (indicated by the 'noise'
color in the video window). If even after setting the Contour adjustments you do not achieve
this, go back to the appropriate Detection method and adjust the contrast to improve body
detection.
Figure 8.15 The picture on the left shows a sub-optimal result of body detection (part of the right side of the
body is not detected). The picture on the right shows the result when the contrast settings are optimized;
now the complete body is detected. The color of the body contour at this stage is orange (=noise) because
the model parameters have not been configured yet.
Why use Contour Adjustments?
 To give a smooth contour for accurate modeling and to remove individual pixels of noise
– For this purpose, Erode first, then dilate is selected by default.
 To eliminate the detection of thin objects such as the rat's tail – Select Erode first, then
dilate. One reason to eliminate the animal's tail is that when the animal sits still and
only its tail moves, the tail movement adds to the distance moved.
 To remove indentations in the shape of the subject, such as those caused by the cage
bars, or to 'join up' the stripes on the animal's body (for wasps, fish etc.) – Select Dilation
and Erosion, and Dilate first, then erode. This removes indentations in the shape of the
subject, giving a smoother outline, or ensures that EthoVision XT detects the striped body
as one animal.
 To deal with occlusions of the animal's body – If you use nose-tail tracking (Advanced
Model-based) with rodents, optimize the Shape Stability (see page 243).
 To deal with two animals touching – When two animals touch, EthoVision loses the
separate shapes. By optimizing the Modelling effort (see page 243), EthoVision can
determine which part of the large body fill belongs to which animal.
Figure 8.16 A – An example of a rat detected by EthoVision XT without any filtering applied.
B – The same animal, after applying the Erosion filter. C – The layer of pixels removed by Erosion. D –
The same animal when first Erosion and then Dilation are applied. E – The net result of Erode first, then
dilate: the pixels corresponding to the rat's tail are removed.
Contour erosion
The Contour erosion function reduces the subject's area by setting the contour pixels of the
subject to the background value. The detected subject appears smaller in the Video window.
To apply erosion, select Contour erosion and from the list select the thickness of the layer of
pixels to be removed, expressed in number of pixels (Minimum =1, Maximum =10).
Figure 8.16A shows the subject as detected by EthoVision with no filtering. After applying
erosion, a layer of pixels is removed from the contour (Figure 8.16B). Figure 8.16C shows the
pixels that were removed.
Contour dilation
The Contour dilation function increases the subject's surface area by setting the
background pixels adjacent to the subject's contour to the subject value. Therefore, the
detected subject appears larger in the Video window.
To apply dilation, select Contour dilation and from the list select the thickness of the layer of
pixels to be added, expressed in number of pixels (Minimum =1, Maximum =10).
Figure 8.16A shows the subject as detected by EthoVision XT with no filtering. After removing
the rat's tail with the erosion function (Figure 8.16B), a layer of pixels is added back using
dilation (Figure 8.16D), restoring the original size of the subject.
Combining dilation and erosion
Select both Dilation and Erosion if you want to apply the two filters together. From the Order
list, select one of the following:
 Erode first, then dilate – A layer of pixels is removed, then added to the contour.
 Dilate first, then erode – A layer of pixels is added, then removed.
Use Erode first, then dilate when you use either the Model-based (XT 5) or the
Advanced Model-based (XT 6) nose-tail tracking method, because in this case the
tail can negatively affect tracking. When you use the Shape-based (XT 4) method,
make sure the tail is fully detected as part of the subject.
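In standard image-processing terms, Erode first, then dilate is a morphological 'opening', and Dilate first, then erode is a 'closing'. A minimal sketch with SciPy on a boolean subject mask (our illustration, not EthoVision XT code; `pixels` corresponds to the layer thickness selected from the list):

```python
import numpy as np
from scipy import ndimage

def erode_then_dilate(mask, pixels=2):
    """Opening: removes thin structures (such as the tail) and isolated
    noise pixels, then restores the remaining body to its original size."""
    eroded = ndimage.binary_erosion(mask, iterations=pixels)
    return ndimage.binary_dilation(eroded, iterations=pixels)

def dilate_then_erode(mask, pixels=2):
    """Closing: fills indentations (for example, those caused by cage
    bars) and joins nearby parts such as stripes, then shrinks the
    outline back."""
    dilated = ndimage.binary_dilation(mask, iterations=pixels)
    return ndimage.binary_erosion(dilated, iterations=pixels)
```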
8.8 Subject size settings
subject size
The Subject size settings use the result of the body detection to model the body size of the
animals. This prevents objects like droppings or large reflections from being detected during
tracking. Please note that the term size here means surface area in video pixels, not length or
screen pixels. Enlarging the Video window does not change the subject's size in video pixels.
Setting the Subject size for a single animal
 Set the Detected subject size using the Minimum and Maximum subject size when you
want to carry out Center-point detection, or Nose-tail detection with either the Shape-based
(XT 4) or Model-based (XT 5) detection method. The Detected subject size sets the
absolute limits of the size that can be detected as a subject.
 Set the Modeled subject size when you want to carry out Nose-tail detection using the
Advanced Model-based (XT6) detection method. The Modeled subject size is the size of
the model that the program will try to fit to the detected subject.
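Conceptually, the Detected subject size acts as an area filter on the connected blobs in the current image: only a blob whose surface area lies between the two limits can be accepted as the subject. A minimal sketch of that idea (our illustration, assuming a boolean detection mask; not EthoVision XT code):

```python
import numpy as np
from scipy import ndimage

def filter_by_subject_size(mask, min_size, max_size):
    """Keep only connected blobs whose area (in pixels) lies between
    min_size and max_size; smaller blobs (droppings, sawdust) and larger
    ones (an experimenter's arm) are treated as noise."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if min_size <= a <= max_size]
    return np.isin(labels, keep)
```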
Setting the Subject size for multiple animals
Set the Modeled subject size when you want to track multiple animals. To set the Subject
size:
1. In the Subject size section, click the Edit button.
In the Subject Size window, in the figure at the top, the thin red contour represents the
current size of what EthoVision XT assumes is the animal shape.
 If you want to set the Detected subject size, proceed with step 2.
 If you want to set the Modeled subject size, proceed with step 3.
Before you set the Subject size, make sure all animal body contours are detected
properly and, for multiple animals, the animals do not touch each other.
If you selected Behavior recognition in the Experiment Settings, see page 246.
Click the info button for more information about setting the subject size.
2. Set the Minimum and Maximum subject size (represented by a green contour):
- Maximum subject size – The largest surface area (in pixels) that is detected as the
subject. Objects bigger than the Maximum subject size, for example, the
experimenter's arm, are detected as noise and not tracked. Decrease the Maximum
subject size until its thick green contour surrounds the thin red contour by a fair
margin.
- Minimum subject size – The smallest surface area (in pixels) that is detected as the
subject. Objects smaller than the Minimum subject size, such as droppings or
disturbed sawdust, are detected as noise and not tracked. Increase the Minimum
subject size until its thick green contour is smaller than the thin red contour by a fair
margin.
The two sliders are interdependent. So, after you have set the Minimum subject size, when
you next change the Maximum subject size, the slider for the Minimum subject size also
moves (although the size in pixels stays the same).
Figure 8.17 The Subject size window with the current detected subject size, Minimum and
Maximum subject size.
3. In the Modeled subject size group, select Apply settings to all subjects if your multiple
animals have similar sizes.
The Modeled subject size settings are only available when you use multiple subjects or the
Advanced Model-based (XT6) nose-tail detection.
4. Select one of the subjects to model the subject size for, by clicking the name of the
subject.
5. Next, adjust the modeled subject size (under Average - pixels) to the detected subject size
(under Current - pixels):
You do this by clicking the Grab button. Keep clicking the Grab button until the modeled
(Average) subject size equals the detected (Current) subject size.
When the modeled (Average) subject size equals the detected (Current) subject size, this
becomes visible:
- In the Modeled subject size group: the Average subject size now equals or is larger
than the Current subject size (see the table in Figure 8.19).
- In the Video window: the modeled subject size now completely overlaps with the
current subject size (see the Video window in Figure 8.19).
Figure 8.18 Part of the Modeled subject size group in the Subject size window (left) and the Video
window. In the table, Current shows the current detected subject size in pixels, Average shows the
modeled subject size in pixels. The arrows point to the visual feedback you get about the current and
average subject size in the Video window.
- In the picture at the top of the Subject size window: the bold yellow contour
represents the modeled subject size. This now coincides with the detected subject size
indicated by the thin red contour (see Figure 8.17).
6. You can now set the Tolerance. Click the corresponding cell and enter a value.
The Tolerance determines the allowed deviation from the average subject size. When the
current detected size deviates more from the Average subject size than the Tolerance, the
object is no longer considered to be the subject, and EthoVision starts making an
educated statistical guess about the body contour of the animal (see the sketch after this
list).
This is visible in the video window as a wobbling marker-colored area. If this happens
while the animals are not touching, you should increase the Tolerance.
7. Select the Fix check box for each subject.
8. You can now proceed to set the Maximum noise size, Shape stability and Modeling effort.
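A sketch of the tolerance test as described in step 6 (our reading of the rule; whether the Tolerance is expressed in pixels or as a fraction is internal to the program, here it is taken as pixels):

```python
def is_still_subject(current_size, average_size, tolerance):
    """True if the detected blob size is close enough to the modeled
    (Average) subject size for the blob to still count as the subject."""
    return abs(current_size - average_size) <= tolerance

print(is_still_subject(current_size=950, average_size=1000, tolerance=100))  # True
```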
Tips for setting the Subject Size
 Make sure you do not set the Tolerance too small; it is better to get a wrong body size/
shape than a wrong location of the animal.
 It is better to set your Average subject size slightly bigger than the actual subject size,
especially when you carry out nose-tail tracking.
 If you want to carry out Live tracking with multiple similarly-sized animals, it is
recommended to first introduce one animal into the arena and make the Subject Size
settings for this animal.
 If the subject size changes a lot between trials, it is recommended to create new
Detection Settings for this new size.
Figure 8.19 Part of the Modeled subject size group in the Subject size window (left) and the Video
window. The modeled (Average) subject size is now adjusted to the detected (Current) subject size.
Compare the table and video window in this figure with those in Figure 8.18.
8.9 Working with Nose-tail base detection
overview
When you set up an experiment for Nose-tail base detection, EthoVision XT analyzes the
contour of the area detected as subject at each sample, and assigns the Nose-point and Tail-base
to two specific pixels of the contour. Furthermore, it determines the direction in which the animal
is pointing (Head direction).
 Nose- and tail-base points – The two points are detected independently through one of
two complex algorithms. The nose-point is found in all cases, except when the center-point
is not found either. The tail-base may not be found in a few cases, even when detection is
good.
Note:
- You can have EthoVision detect the nose- and tail-base points of your subjects when
you have upgraded to the Multiple Body Point Module. To do so, upgrade your
hardware key (see page 51). To set an experiment to Nose-tail base detection, in the
Experiment Settings select Center-point, nose-point and tail-base detection (see
page 100).
- Reliable tracking of nose and tail-base is limited by the size of the video image. You can
mix four camera images, as in the case of a group of PhenoTypers, with good results.
Mixing 16 camera images makes the subjects too small for reliable nose and tail-base
tracking.
 Head direction – Once the nose-point has been found, two points are determined along
the contour, lying at a specific distance from the nose-point. The Head direction is the
line that equally divides the angle formed at the center-point by those two additional points.
The Head direction to zone is quantified as a dependent variable, and is expressed in
units of rotation (see page 610).
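The bisector construction described above can be sketched geometrically as follows (our illustration of the idea, not the actual EthoVision XT implementation):

```python
import numpy as np

def head_direction_deg(center, contour_pt1, contour_pt2):
    """Direction (in degrees) of the line that equally divides the angle
    formed at the center-point by two contour points lying at a fixed
    distance from the nose-point."""
    c = np.asarray(center, dtype=float)
    p1 = np.asarray(contour_pt1, dtype=float)
    p2 = np.asarray(contour_pt2, dtype=float)
    u = (p1 - c) / np.linalg.norm(p1 - c)   # unit vector center -> point 1
    v = (p2 - c) / np.linalg.norm(p2 - c)   # unit vector center -> point 2
    b = u + v                               # bisector of the angle at the center
    return np.degrees(np.arctan2(b[1], b[0]))
```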
methods of nose-tail detection
In EthoVision XT, three methods for nose-tail base detection are available:
 Shape-based (XT 4) – This detection method analyzes the contour of the area detected as
subject at each sample to assign the nose-point and tail-base. Make sure in the detection
settings that the tail is fully detected. With this method it may be possible to track 'non-rodent'
shapes, but the method is not designed for that.
 Model-based (XT 5) – This detection method analyzes the varying shape of the contour of
the area detected as subject and builds up a 'rodent model'. It is more robust than the
Shape-based method because it does not require the nose and tail to be visible: it can
'predict' the position of the nose and the tail based on previous samples. Make sure in
the detection settings that the tail is removed from the body contour with Erode and
Dilate (see page 236).
 Advanced Model-based (XT 6) – This detection method learns the animal's shape and
how it moves during the first 15 frames and continually updates its statistics. Therefore, it can
handle severe shape distortions, for example when the animal's body is
occluded or when multiple animals touch. However, it requires a lot of computer
performance.
This is the only method available when you track multiple animals with nose-tail
detection. It is the preferred method for rodents. Make sure in the detection settings
that the tail is removed from the body contour with Erode and Dilate (see page 236).
Which of the three methods should I use?
 When you want to track animals other than rodents, we recommend you use the Shape-based
(XT 4) method.
 When you want to track a single rodent without occlusions or difficult tracking
conditions, we recommend you use the Model-based (XT 5) method.
 When you track rodents that can be occluded, for example, by bars or other objects in the
cage, we recommend you use the Advanced Model-based method and to track From
video file.
 When you want to track multiple rodents using Marker assisted tracking, EthoVision
automatically selects the Advanced Model-based (XT 6) method. In this case, we
recommend you track From video file.
maximum noise size
Maximum noise size is only available if you have chosen the Advanced Model-based (XT6)
nose-tail detection method.
You set the Maximum noise size in the Subject size window:
1. Go to the Advanced section by clicking the little down-arrow at the bottom-right of the
Subject size window.
2. Set the Maximum subject noise. The value should be lower than the minimum subject
size, but high enough to remove noise from the video image.
shape stability
The Shape stability setting is only available if you have chosen the Advanced Model-based
(XT6) nose-tail detection method.
The Shape stability setting is used when you track animals whose body can be occluded by,
for example, cage bars or part of the body of another animal. When this happens, the
detected body consists of two separate objects that are close together.
You set the Shape stability in the Subject size window:
1. Go to the Advanced section by clicking the little down-arrow at the bottom-right of the
Subject size window.
2. The Shape stability optimized for slider has two extreme settings:
- Occlusions – When you set the slider close to Occlusions, EthoVision considers
separate objects that are close together part of one animal.
- Noise – When you set the slider close to Noise, EthoVision does not consider separate
smaller parts to be part of the animal.
The figure below shows the animal model as a result of applying the two extreme Shape
stability settings.
If you are not sure which setting to select, leave Shape stability at the default value of 620.
modeling effort
The Modeling effort setting is used when two animals touch and EthoVision loses the
separate shapes. At this point, EthoVision tries to determine which part of the big 'merged'
body fill belongs to which animal. This requires a lot of processing.
The Modeling effort optimized for slider has two extreme settings:
Figure 8.20 An example of the result of the two extreme Shape
stability settings. 'Noise' shows that the front of the animal, on
the other side of the bar, is not considered to be part of the
animal. 'Occlusion' displays the animal body as a whole.
 Performance – When you set the slider close to Performance, EthoVision is only allowed a
short time to determine which part of the 'merged' body fill belongs to which animal.
Therefore, Modeling quality is low.
 Modelling – When you set the slider to Modelling, EthoVision is allowed a longer time
per frame to determine which part of the 'merged' body fill belongs to which animal.
Therefore, Modelling quality is good, but this costs a lot of processor load.
We recommend selecting Modelling only when you have a computer that exceeds the
minimum system requirements.
When you are not sure which setting to select, leave Modeling effort at the default value of
‘500’.
how to optimize nose-tail detection
Because of the way the nose- and tail-base points are found, nose-tail base detection depends
heavily on the quality of the video image and the experimental setup. Before using this
feature, please check the following guidelines:
Conditions related to the Arenas
 Light – Light conditions must be equal across the arena. Try to remove shadows, light spots
and reflections. For proper detection, the subject's body contour must be kept as
constant as possible across the whole arena.
 Subject/background contrast – The color of the subject and of the background must be
contrasting enough to get a full and clear body contour.
 Video quality – Noise and interference reduce the proportion of samples which are
correctly detected.
 Noise reduction – The Video Pixel smoothing function (see page 214) can sometimes help
in getting a more appropriate body contour. However, this is of little use if the video has too
much noise or too little contrast.
 Areas hidden to the camera view – When the animal enters or exits areas hidden to the
camera (for instance, a shelter), nose-point and tail-base are wrongly assigned.
 Number of arenas – Reliable tracking of nose and tail-base is limited by the size of the
video image. You can mix at most four camera images, as in the case of a group of
PhenoTypers, with good results.
Conditions related to the Subjects
 Subject's apparent size – The subject must be large enough to get a constant body
contour. Small animals and large arenas pose detection problems for the nose- and tail-base
points. When you mix the image of multiple cameras with a quad unit, as in the
case of a group of PhenoTypers, a group of 4 cameras gives good results. When mixing 16
PhenoTypers, the apparent size of the subject is generally too small.
 Subject's color variation – For hooded rats, the light flanks and dark head must contrast
with the background, otherwise detection of body contour is sub-optimal, although the
Differencing detection method (see page 230) can help.
 Water maze – Tracking nose- and tail-base points in a water maze is impossible because
the tail-base is under the water, and it is not possible to obtain a proper body contour.
 Subject's behavior – Immobile animals are hard to track because their body contour
differs from that of a mobile animal. Nose-points are therefore hard to detect.
Experiment Settings
 Detection methods – We recommend tracking from video files if you use the Advanced
Model-based (XT 6) method.
 Sample rate – As high as possible (25 or 29.97 samples/s). For Nose-tail tracking in
combination with Marker assisted tracking, you should use a sample rate of 12.5 or 14.98
samples/s.
 Tracking live – When tracking requires high processor load, it may result in many missing
points. Tracking from video files is preferred (see below), especially when you use the
Advanced Model-based (XT6) method.
 Tracking from video files – Keep the Detection Determines speed option selected.
 Missing tail-base points – A high percentage of missing tail-base points is an
indication of poor detection. The higher this percentage, the greater the probability that
the nose-point is not placed in the correct location. To estimate the proportion of missing
tail-base points, run some test trials and visualize the Sample list (see Chapter 12). You
can quantify this by selecting Number of samples as a statistic for a dependent variable
such as Velocity for the nose point.
In practice…
The contour of the blob detected as subject is crucial for proper detection of the nose- and tail-base
points. If only part of the subject is detected, EthoVision may swap the pixels assigned
as nose-point and tail-base, or the nose-point may not be placed on the subject's nose tip (for
clarity, the nose point is shown together with the Head direction; see page 250 for how to
view this on the screen):
Select a wider range of gray scale values (see page 220 or page 224) or adjust the sensitivity
(see page 231) to increase the number of pixels detected as subject. As a result, the nose- and
tail-base points are detected correctly:
 When you use the Shape-based (XT 4) method, make sure that the tail is fully detected.
 When you use the Model-based (XT 5) or the Advanced Model-based (XT 6) method,
remove the tail from the detected subject using the Erode and Dilate filters (see
page 234).
8.10 Detection settings for Rat behavior
recognition
Nose-tail detection method
Rat behavior recognition works when nose-tail base detection is enabled.
In the Detection Settings window, under Method select:
 Model-based (XT5) (default) — This is selected automatically when you select Rat
behavior recognition under Analysis Options in the Experiment Settings.
 Advanced Model-based (XT6) — Use only when there are occlusions in the arena that
make the subject’s apparent size smaller, or when using the Model-based (XT5) detection
method does not provide good results.
Sample rate settings
In the Detection Settings window, under Video, select a sample rate between 25 and
31 samples per second.
Subject size settings
In the Detection Settings window, under Subject Size:
1. Click the Behavior button. The Behavior Recognition Settings window opens.
2. If you work with video, play the video up to a frame where:
- The subject is walking normally, and its hind limbs can be partially seen; see the figure
below. It is important that the animal’s body is not contracted or stretched.
- Nose- and tail-base points are correctly detected.
If you track live, wait until the animal shows a posture like in the figure above.
3. In the Behavior Recognition Settings window, click the Grab button.
4. In the Behavior Recognition Settings window a still image appears showing the detected
subject’s contour and the detected body points.
Figure 8.21 Play the video until the subject walks
normally, and nose- and tail-base points are
correctly detected.
Figure 8.22 The Behavior Settings window.
You can update the grabbed image at any time:
- If you track from video files, position the video at another frame, and click Grab.
- If you track live, wait until the posture of the animal is optimal and click Grab.
EthoVision XT only stores the image that you grabbed last.
5. In the Behavior Recognition Settings window, make sure that the calculated Subject
length is greater than 60 pixels, and that the Posture index is between 70 and 90.
If the Subject length is smaller than 60 pixels, move the camera closer to the animal, or
use a higher video resolution.
6. Click OK to close the Behavior Recognition Settings window.
Entering specific size values — If you know specific size values (for example, from a previous
experiment using the same animal size, camera, lighting, camera-arena distance and the
same calibration), click Manual in the Behavior Recognition Settings window and in the
Manual Settings window enter the following values (see the picture below for explanation):
 Subject area (in distance unit square)
 Center-nose length (in distance unit)
 Center-tail length (in distance unit)
 Posture (between 70-90).
Then click OK. The Behavior Recognition Settings window says No image saved: Size settings
were manually set.
 Subject size is expressed in the unit selected in the Experiment Settings.
 The value of Subject length (min. 60 pixels) in the Behavior Recognition Settings
window is the sum of Center-nose length and Center-tail length, expressed in pixels. If
this value is lower than 60, when opening the Data acquisition screen an error message
appears. To increase subject length, move the camera closer to the animal, or use a
higher video resolution.
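The relationship between these values can be expressed as a small check (a hypothetical helper for illustration; EthoVision XT performs these checks itself):

```python
def check_size_settings(center_nose_px, center_tail_px, posture_index):
    """Validate grabbed size values against the thresholds in this section."""
    subject_length = center_nose_px + center_tail_px  # Subject length in pixels
    if subject_length < 60:
        print(f"Subject length {subject_length} px is below 60: move the "
              "camera closer or use a higher video resolution.")
    if not 70 <= posture_index <= 90:
        print(f"Posture index {posture_index} is outside 70-90: grab an "
              "image with a better posture.")

check_size_settings(center_nose_px=25, center_tail_px=30, posture_index=82)
# -> Subject length 55 px is below 60: ...
```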
Making size-dependent detection settings
Accurate recognition of behavior is based on the subject size settings. Since apparent size
increases with the subject's age, all else being equal, we advise you to create detection settings
specific to a certain age class. Each Detection Settings profile can only be used for a limited
time. For example, for Wistar rats, create one Detection Settings profile for rats that are 3-5 weeks old,
which can be used for about one week, and one Detection Settings profile for rats older than 5
weeks, which can be used for about two weeks.
Subjects should not vary in size by more than 10%. If they do, create more Detection
Settings (for example, one for smaller animals and one for larger animals).
Subject contour settings
For optimal results, we recommend selecting Erode first, then Dilate (see page 234 for details)
to remove the tail from the detected body.
Warnings
EthoVision XT shows a warning message in the following cases:
 When the sample rate set is lower than 25 or higher than 31 samples/s.
 When the Subject length is smaller than 60 pixels.
 When the animal is larger than the arena.
8.11 Customizing the Detection Settings screen
To achieve optimal subject detection, you need proper feedback about the effect of your
settings on the quality of detection. EthoVision offers you a number of statistics for this
purpose.
Customizing the detection features
1. Open the Detection Settings (see page 197).
2. Click the Show/Hide button on the component tool bar and select Track Features.
3. Select View for the feature you want to view. Choose the color and (for body points) the
trail for the features you want to view.
- Nose-point – To check that the nose tip is detected correctly (see page 241 for details).
- Center-point – To check that the center-point of the subject is detected correctly.
The center-point is the point whose X,Y coordinates are the arithmetic mean of the X,Y
coordinates of all pixels detected as subject. For more information on how the nose- and
tail-base points are calculated, see page 241.
- Tail-base – To check that the base of the tail is detected correctly (see page 241 for
details).
- Head direction – To estimate what the subject is sniffing at. Select this option
especially with novel object and orientation tests.
- Body contour – To check that the subject's contour (or the part which should be found)
is detected.
- Body fill – To check that the subject's body (or part of it) is detected. For example, in a
test where it is important to measure the change in the animal's shape to estimate its
mobility.
If you do not select a color for Body fill, the body contour will be shown as noise.
- Noise – To view the pixels that match the criteria for subject detection (depending on
the detection method), other than those detected as the subject.
We recommend keeping Noise selected. This way you can see which parts of the video
image have gray scale values similar to those of the subject(s) to be detected.
- Activity - To view the pixels that match the criteria for activity detection (see page 217).
This setting is only available if you selected Activity analysis in the Experiment
Settings.
Some of the options above are not available if your experiment is set to Only center-point
detection in the Experiment Settings (see page 91).
4. If you have selected to view the body points' trail, choose the number of Samples you
want to be shown at a time.
5. Check in the Video window the appearance of the detection features. When you are
satisfied with the options selected, close the Detection Features window. Next, continue
with the procedure below.
Displaying detection features can use a lot of processor power and reduce the
maximum possible sample rate if you are tracking live.
viewing the detection statistics
The detection statistics are displayed in the Analysis Results and Scoring pane, which is, by
default, displayed at the bottom of the screen. If the Analysis Results and Scoring pane is not
displayed, click the Show/Hide button on the component tool bar and select Analysis Results
and Scoring.
The Trial Status tab shows immediate feedback when you change detection settings. The
tabs Independent Variables, Dependent Variables and Manual Scoring show no feedback in
the Detection Settings, but they do when you acquire tracks (see page 285 and page 314).
Detection statistics
 Missed samples – The percentage and number of samples that were skipped due to lack
of processor time. This information is useful to check whether the sample rate specified
(see page 208) can be handled by your computer. See page 212 for tips on how to increase
the maximum sample rate handled by the PC. When you select another video file, or click
Save changes in the Detection Settings window, the value for Missed samples is reset to
zero.
 Subject not found – The percentage and number of samples in which the subject was not
found. This information is useful to check the quality of detection in general. When a
subject is not found, it means that EthoVision XT processed the image, but did not find
anything matching the current Detection Settings. Use Subject not found to assess the
quality of tracking. When you select another video file, or click Save changes, the value
for Subject not found is reset to zero.
Warning thresholds
The percentages of missed samples and samples where the subject is not found are usually
displayed in green for each arena and subject. When the values are above the set threshold,
they are highlighted in red.
To change the thresholds, click the button under Missed samples or Subject not found and
change the value next to ‘Missed samples’ alert above.
After acquisition you can view the proportion of missed samples and samples in
which the subject was not found in different parts of the software.
In the Trial list, click Show/Hide on the tool bar, select Variables, and then select
Missed samples and/or Subject not found.
In the Statistics and Charts screen, click Show/Hide on the tool bar, select
Independent Variables, and then select Missed samples and/or Subject not found.
In the Track Visualization or the Heatmaps screen, click Show/Hide on the tool
bar and select Layout. Under Available, drag Missed samples and/or Subject not
found to the On Columns or On Rows box.