TICSQServer Reference Manual
TICSQServer is the main application used to fill the
quality database. It uses several auxiliary applications, which are described separately.
TICSQServer is meant to be run regularly, e.g., on a
daily (nightly) basis.
The steps of a
TICSQServer run are described below. A full
overview of all possible command line options is also given.
Steps of the Quality Database Update Process
The following steps are performed when updating the quality
database. Not all steps are always performed. Some steps are optional
and can be configured via command line options. Other steps are only
performed under certain conditions. The order of the steps is fixed
since there are causal dependencies between steps.
- Create a backup of the database. This step is optional and can
be suppressed via a command line option.
- Traverse the file system, starting at the root of a branch for the given
project. The root is set in PROJECTS.txt, stored in the quality database,
and/or passed as a command line option. Directories and files matching the
expressions in the ARCHIVE file are accepted and collected for further
processing.
- Update the file information in the database.
All new and modified files get the status 'not checked'. This means
they must be (re)checked later in the process.
- New files are added.
- Removed files are marked "deleted".
- Modified files (determined by a checksum based on the contents of
a file) are updated. Updating a file means appending a new instance of
the file to its life cycle.
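The three update cases above (new, deleted, and modified files detected via a content checksum) can be sketched in Python. The function names, the use of SHA-256, and the snapshot representation are illustrative assumptions, not the actual implementation:

```python
import hashlib

def file_checksum(path):
    """Hash file contents, so a touched timestamp alone does not
    count as a modification."""
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

def classify(previous, current):
    """Compare two {path: checksum} snapshots of the archive.

    Returns (new, deleted, modified) path sets, mirroring the three
    update cases described above.
    """
    new = set(current) - set(previous)
    deleted = set(previous) - set(current)
    modified = {p for p in set(previous) & set(current)
                if previous[p] != current[p]}
    return new, deleted, modified
```

A modified file would then get a new instance appended to its life cycle rather than an in-place overwrite.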
- Calculate the build relations between source files and
build files (e.g., makefiles). This is done for new or changed build files only.
First, the build file relations of changed build files are removed.
Next, for each build file all filenames within the build file are
collected and the relation is stored in the database.
- Calculate the include relations between source files (C and C++ only).
First, all the include relations of the changed source files
are removed. Next, each source file is scanned for include files. Using
the build file options, the actual included files (mostly header files)
are found and the relations are stored in the database. This process is
done recursively on all the included files (registering each relation
only once to speed up the process and to avoid circular includes).
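The recursive scan with once-only registration described above might look like the following Python sketch. Representing the include structure as a prebuilt dictionary is a simplifying assumption; in reality the includes are resolved using the build file options:

```python
def include_relations(includes, root):
    """Collect (includer, included) pairs reachable from `root`.

    `includes` maps each file to the headers it directly includes.
    A visited set registers every file only once, which both speeds
    up the traversal and guards against circular includes.
    """
    relations = []
    visited = set()

    def scan(path):
        if path in visited:
            return
        visited.add(path)
        for header in includes.get(path, []):
            relations.append((path, header))
            scan(header)

    scan(root)
    return relations
```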
- Calculate the Lines Of Code (LOC) of all files that are not deleted
and for which the LOC is not yet stored in the database. By default,
Lines Of Code are calculated by simply counting the actual number of
lines, excluding lines that are considered "generated", as can be
specified in the TICS LANGUAGES section. Since
version 6.0, Effective Lines Of Code are calculated as well. Both
LOC definitions (LOC and ELOC) may be overridden by a user-supplied
custom LOC counter, by placing appropriate Perl modules that count the
number of lines in the configuration directory. Since version 6.3, the
number of lines including generated lines of code is also calculated.
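The default LOC count (total lines minus lines matched by the "generated" expressions) could be sketched as follows. The marker pattern here is a made-up stand-in for the expressions configured in the TICS LANGUAGES section:

```python
import re

def count_loc(lines, generated_pattern=r"//\s*GENERATED"):
    """Count lines of code, skipping lines marked as generated.

    The real marker expressions come from the TICS LANGUAGES
    configuration; the default pattern is purely illustrative.
    """
    marker = re.compile(generated_pattern)
    return sum(1 for line in lines if not marker.search(line))
```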
- Calculate the change rate of source code lines. This includes
the number of lines added, removed or changed since the previous
measurement. An accumulative value is also calculated, showing the
change rate over the project's lifetime.
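One way to derive the added/removed/changed counts is a line-based diff between two measurements. This Python sketch uses difflib and is only an approximation of whatever TICSQServer computes internally:

```python
import difflib

def change_rate(old_lines, new_lines):
    """Count lines added, removed, and changed between two measurements."""
    added = removed = changed = 0
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "insert":
            added += j2 - j1
        elif op == "delete":
            removed += i2 - i1
        elif op == "replace":
            changed += max(i2 - i1, j2 - j1)
    return added, removed, changed
```

The accumulative value would then be the running total of these per-run counts over the project's lifetime.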
- Calculate the test coverage of unit tests, based on data
generated by external tools.
- Calculate the coding standard violations of each changed, new
or previously failed file. All files marked as not checked are analyzed.
These can be newly added files, changed files whose contents have been
modified, or files that failed in the previous run. A file can fail to be
analyzed for various reasons: it could not be compiled, it was not
referenced in a build file, or internal analyzer problems stopped the
analysis. If a file is analyzed successfully, all violations found
are stored in the database.
- Calculate the compiler warnings. This takes warning output
from the compiler normally used by the build process and turns it
into violations. The violations are aggregated and can be shown as
totals, per level, or per category, just as for coding standard violations.
- Calculate the abstract interpretation. This takes all files
in the archive into account; not just the changed files. This is because
this analysis exceeds file boundaries and analyzes inter-file relationships.
- Calculate the cyclomatic complexity of each file. The cyclomatic
complexity of a file is defined as the sum of the cyclomatic complexities
of all functions/methods defined in the file divided by the number of
functions/methods in the file.
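The file-level definition above amounts to a simple average over the per-function values. A minimal sketch, taking the per-function complexities as given:

```python
def file_cyclomatic_complexity(function_complexities):
    """File-level cyclomatic complexity: the mean of the cyclomatic
    complexities of the functions/methods defined in the file."""
    if not function_complexities:
        return 0.0
    return sum(function_complexities) / len(function_complexities)
```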
- Calculate the fan-out of each file. Fan-out is the dependency on
other coding units.
- Calculate dead code in the archive. This takes all files in the
archive into account. Dead code analysis attempts to find all functions
that are not used and all files that are not buildable.
- Calculate duplicated code in the archive. This takes all files
in the archive into account. Duplicated code attempts to find all code
fragments that are shared between at least two separate locations.
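A common way to find fragments shared between at least two locations is to hash fixed-size windows of consecutive lines. This is only a sketch of the general technique; real clone detectors (and presumably TICS) also normalize whitespace and identifiers:

```python
from collections import defaultdict

def duplicated_fragments(files, window=3):
    """Find fragments of `window` consecutive lines that occur in at
    least two distinct locations across the given files.

    `files` maps a filename to its list of source lines. Raw lines
    are compared as-is, which is a simplification.
    """
    seen = defaultdict(list)
    for name, lines in files.items():
        for i in range(len(lines) - window + 1):
            fragment = tuple(lines[i:i + window])
            seen[fragment].append((name, i))
    return {frag: locs for frag, locs in seen.items() if len(locs) >= 2}
```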
- Calculate the fix rate for each file. Fix rate analysis attempts to
correlate problem reports from an issue tracker to a file. It tracks which
files are changed to solve certain issues. This data is also calculated
accumulatively, tracking all issues over the file's lifetime.
- Aggregate elementary data upwards in the directory hierarchy and
aggregate data upwards in a user defined structure, called the
Organizational View. The aggregated data is stored in temporary
tables that are recomputed at the end of each TICSQServer run.
Aggregation is performed to speed up data access when using the TICS
viewer to browse the data.
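Aggregating elementary data upwards in the directory hierarchy can be sketched as summing each file's metric into every ancestor directory. The path handling and the additive metric are illustrative assumptions:

```python
from collections import defaultdict
from pathlib import PurePosixPath

def aggregate_up(file_metrics):
    """Sum a per-file metric into every ancestor directory.

    `file_metrics` maps file paths to a numeric value (e.g. LOC).
    The result resembles the temporary aggregation tables: each
    directory holds the total of everything beneath it.
    """
    totals = defaultdict(int)
    for path, value in file_metrics.items():
        for parent in PurePosixPath(path).parents:
            totals[str(parent)] += value
    return dict(totals)
```

Aggregation over the Organizational View would work the same way, but along a user-defined hierarchy instead of the file system.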
TICSQServer is the main program to fill the quality database.
TICSQServer runs for the project specified on the command line via the
-project option, or for each project specified in the
PROJECTS.txt configuration file in case of
-allprojects. Projects are processed according to their last
modification times. The oldest project is processed first, the most recent
project last. Per project, new and changed files are processed first, followed
by any files that failed to be correctly processed the last time.
TICSQServer processes files in this order for the following reason.
When the time allotted for the run is not sufficient to analyze
both the new and the previously failed files, this arrangement ensures that
all new files are analyzed first. Since files that failed the previous run
are likely to fail again (unless proper action has been taken),
processing such files mostly consumes time without providing new data.
Before starting the update of the quality database, TICSQServer
optionally performs the following steps (in this order).
- If the AUTOUPDATE
property in the project configuration of the SERVER.txt is set,
TICSQServer synchronizes the file system with the SCM tool by
performing an SCM update, checking out all files from the SCM repository
onto the locally accessible file system.
- If the property PREPAREQDB
is set in the SERVER.txt file, TICSQServer will run the given executable
script or program. TICSQServer passes the project name and the view name as its
arguments to the PREPAREQDB script. This script can be used for instance to run
a build, set necessary environment variables, etc.
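A PREPAREQDB script receives the project name and the view name as its two arguments. The following Python stub is a hypothetical example of such a script; what it actually does (running a build, exporting environment variables) is entirely project-specific:

```python
import sys

def prepare(project, view):
    """Project-specific preparation would go here, e.g. running a
    build or setting environment variables. This stub only reports
    what it was asked to prepare and signals success."""
    print(f"Preparing project {project!r}, view {view!r}")
    return 0

# TICSQServer would invoke this script with two arguments.
if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(prepare(sys.argv[1], sys.argv[2]))
```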
TICSQServer [option...] [inputfile...]
TICSQServer accepts a list of inputfiles. An
inputfile is a file or directory that
- possibly contains wildcards, or
- is prefixed with '@' to denote a project file.
Note that all specified files must be within one of the specified views
of the project (in the SERVER.txt project configuration).
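The input handling described above could be sketched as follows. The expansion order and the injected file reader are assumptions made to keep the sketch self-contained; only the wildcard and '@' conventions come from the text:

```python
import glob

def expand_inputs(args, read_lines):
    """Expand TICSQServer-style input arguments.

    Arguments may contain wildcards, or be prefixed with '@' to name
    a project file whose lines are themselves input arguments.
    `read_lines` is injected so the sketch is testable without disk I/O.
    """
    files = []
    for arg in args:
        if arg.startswith("@"):
            files.extend(expand_inputs(read_lines(arg[1:]), read_lines))
        elif any(ch in arg for ch in "*?["):
            files.extend(sorted(glob.glob(arg)))
        else:
            files.append(arg)
    return files
```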
The following TICSQServer options are allowed.
- -allprojects
- update all configured quality databases
- -branchname branch name
- calculate only the branch with the given branch name
- -calc metric
- calculate the specified (comma separated) metric type(s) [default: on]
- -config string
- use the given compiler configuration
- show only new violations relative to the database
- dump the internal module dependencies (internal use only)
- -exitsqa
- use the QA acceptance (yes/no) as exit code
- finalize the quality database for the viewer [default: on]
- show this help info
- -language language
- only analyze files of the
given (comma separated) languages [default: all languages]
- -level int
- show violations up to the specified level
- -log int
- show diagnostic messages up to the specified log level
- -logdir dir
- use the specified directory for server log files
- only check files that are new to the archive
- do not backup the current version of the database
- do not overwrite the global log file (but append to it instead)
- do not show deltas
- do not finalize the quality database for the viewer
- suppress TICS logo output
- skip the preparation phase
- do not perform sanity checks
- suppress all warnings
- show violation overview tables [default: on]
- automatically perform SCM update and build steps [default: on]
- -project string
- quality database to update
- -recalc metric
- recalculate the specified (comma separated) metric type(s) for unchanged files
- show violation messages [default: on]
- -setbaseline name[,delta:(0|1)][,plotline:(0|1)]
- set baseline name for a project. Defaults: delta:0,plotline:0
- show suppressed violations in violation overview
- show rule synopsis in violation overview [default: on]
- dump stack trace in case of errors
- show timing information on individual process stages [default: on]
- -tmpdir dir
- use the specified directory for intermediate files
- -today yyyy-mm-dd
- pretend that today is the given date
- show cumulative violation overview tables [default: on]
- show version info and exit
TICSQServer provides information about the status of the run as follows. In
case of a successful run the exit status is
0. In case of an
unsuccessful run, the exit code is an integer unequal to 0. The
exact value of a non-zero exit code is subject to the OS and the shell
interpreter in which the TICSQServer command is invoked.
Note that the semantics of the exit status changes when running with the
-exitsqa option. With this option, the exit value may be non-zero even in
case of a "successful" run, namely when the run does not satisfy the QA
criteria (i.e., fails to meet the required QA targets).
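A caller interpreting the exit status therefore has to know whether -exitsqa was used. This sketch captures that distinction; note that from the exit code alone, a QA failure and a genuinely failed run cannot be told apart:

```python
def interpret_exit_status(code, exitsqa=False):
    """Interpret a TICSQServer exit status as described above.

    Without -exitsqa, non-zero simply means the run failed. With
    -exitsqa, non-zero may also mean the run completed but the QA
    criteria were not met.
    """
    if code == 0:
        return "success"
    return "qa criteria not met or run failed" if exitsqa else "run failed"
```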