You can debug system issues in many different ways.

Each of the services maintains its own log file, which helps with finding the source of an error.

Additionally, systemd helps by automatically restarting services that fail. The following sections also explain a few basic operating system commands that can be used to check whether the base system is running smoothly.

Log files

The majority of the log files can be found under the /var/log/ directory. The following table shows the relevant log files and their usage.

| Log File | Service | Additional Notes |
| --- | --- | --- |
| /var/log/squirro/SERVICE_NAME/SERVICE_NAME.log | Squirro services | Detailed log file for each service. |
| /var/log/squirro/SERVICE_NAME/stdout.log | Squirro services | Messages sent to the standard output stream of the service and not logged in the main service log file. |
| /var/log/squirro/SERVICE_NAME/stderr.log | Squirro services | Messages sent to the standard error stream of the service (typically error messages) and not logged in the main service log file. Can contain useful information when a service is unable to boot up. |
| /var/log/nginx/access.log | Nginx (Squirro services) | Every request to the web services is recorded in this log file, one line per request. |
| /var/log/nginx/error.log | Nginx (Squirro services) | Records errors on the HTTP level. When a service is stopped, errors may show up here indicating that the service is not reachable. |
|  |  | Update log for the Squirro Cluster/Storage node. |
| /var/log/messages |  | General system log. Serious system failures are recorded here. |
| /var/log/elasticsearch/ | Elasticsearch | ES_CLUSTER_NAME.log records cluster information and major failures. ES_CLUSTER_NAME__index_indexing_slowlog.log contains the logs about the indexing performed by the system. ES_CLUSTER_NAME__index_search_slowlog.log contains the logs about the queries asked of the system. |
|  | RHEL/CentOS 6/7 | Used to debug connection issues. |

Additionally, the /var/lib/squirro/ directory contains the following log directories:

| Directory | Additional Notes |
| --- | --- |
| datasource (sqdatasourced) | Contains rotated log files for the created data sources. Logs from the initial phase of creating a source and loading data into the system are found here (the dataloader logs, i.e. before items are transformed in the pipeline; for the pipeline itself the ingester logs are relevant). |
| machinelearning (sqmachinelearningd) | Contains log files for the machine learning jobs that run on the server. Each machine learning job uses its own log file, and any output during its execution is logged there (for example, output during the training of a model). |

The log level can be changed for each service. Such changes can be made within /etc/squirro/ in the ini file corresponding to each service.

For any of the services, the following can be added to the ini files to adjust the log level:

level = INFO
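For example, to get more verbose output from a single service, its level can be raised to DEBUG. The file name topic.ini below is a hypothetical example; use the ini file that matches your service:

```ini
# /etc/squirro/topic.ini -- hypothetical example; pick the ini file of your service
level = DEBUG
```

A change to the log level typically only takes effect after the corresponding service has been restarted.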

Service monitoring with systemctl (RHEL/CentOS 7)

On CentOS 7, we rely on systemd to control and manage the Squirro services.

To list all services, use the following command:

systemctl list-units --type service --all

If you want to inspect a single service, you can use:

systemctl status SERVICE_NAME

This last command also returns some fundamental information about the service (current status, PID, …), and if you call it with root permissions you also see the last lines of the logs.

Should you wish to restart a particular service, the following command can be run:

systemctl restart SERVICE_NAME

It is important to reiterate that when Squirro services go down, the systemd daemon automatically attempts to restart them. Should the service still be inactive, the server administrator should inspect the logs belonging to that service. These log files are:

  • /var/log/squirro/SERVICE_NAME/SERVICE_NAME.log

  • /var/log/squirro/SERVICE_NAME/stderr.log

Monitoring Services from the Web Interface

Within Squirro, server administrators are also able to inspect the status of the current services from the web interface.

This feature is available as a plugin within the Server space.


System commands

The Squirro services are standard Unix daemons. Standard Linux utilities can be used to debug any issues that may arise.

Processor usage

The current processor usage can be consulted with two standard commands: uptime and top.


Next to some uptime information, the uptime command outputs the load average for the past 1, 5 and 15 minutes. The load average is a simple metric showing how many processes had to wait for processing. It should usually be close to or below 1.0. If it goes above 5.0 the load is quite high; values above that are unusual.
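On Linux, the same numbers can also be read straight from the kernel, which is handy in scripts; /proc/loadavg is the data source that uptime and top themselves use:

```shell
# The first three fields of /proc/loadavg are the 1-, 5- and
# 15-minute load averages reported by uptime and top.
cut -d ' ' -f 1-3 /proc/loadavg
```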

When the load average is high, the top command will usually show the processes that are generating load. But when the CPU usage shown by top is low despite a high load average, that may indicate an I/O issue, such as poor disk performance.


The command top shows a list of all processes on the system, sorted by current CPU usage. Pressing M on the keyboard (upper case, so use Shift+m) will sort the list by memory usage.

Memory usage

Memory usage of individual processes can be debugged with the top command above. To see memory usage of the system as a whole, use free.


The free command outputs statistics on how much RAM is being used by the system. The most useful values to consider are the used and free figures on the “-/+ buffers/cache” line, as they account for memory the system is committed to using and cannot easily free. (Newer versions of free report this as the “available” column instead.)

By default free outputs all values in kibibytes. By calling it with the -m parameter (free -m) all values are output in megabytes instead.
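On recent Linux kernels (3.14 and later), the same information is exposed through /proc/meminfo; the MemAvailable field is the kernel's own estimate of how much memory new processes can use without pushing the system into swap:

```shell
# MemAvailable estimates the memory available for new workloads
# without swapping (reported in kibibytes).
grep MemAvailable /proc/meminfo
```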

When free memory is very low, the system may be running into memory pressure. In extreme cases the kernel's out-of-memory (OOM) killer terminates processes to make space. Those instances can be seen in the standard system log /var/log/messages as lines such as “Out of memory: kill process 23123”.

Disk usage

A full disk will prevent the system from working. The df command can help with finding those issues.


Use the df command to see a list of all partitions and their disk usage. The “Use%” column shows the usage as a percentage. Anything above 95% is considered full and will usually prevent the system from working properly.
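df also accepts the -h flag, which prints sizes in human-readable units (G, M) instead of 1K blocks and makes the output much easier to scan:

```shell
# List all mounted filesystems with human-readable sizes.
df -h
```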

When you are experiencing full disks, consider enlarging the corresponding disk, or contact Squirro Support for ways to remove extra data.

Following log files


A lot of information is captured in log files. These files can be followed with the tail command, specifically by using its -f parameter to follow all updates to a file.

For example:

tail -f /var/log/squirro/topic/topic.log

This shows a real-time view of what is written into the topic service log file.

tail also accepts multiple file names or even wildcards. So all Squirro service log files can be monitored as follows:

tail -f /var/log/squirro/*/*.log


The grep command searches files for occurrences of a specific text. For example, if Squirro is reporting errors but you are unsure where they are coming from, the following command helps pin down the responsible service:

grep ERROR /var/log/squirro/*/*.log

This will output a list of all Squirro log files that contain the text “ERROR” together with the lines that contain this text.
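As a self-contained illustration of how grep reports matches (using a throwaway file in /tmp instead of a real Squirro log):

```shell
# Create a stand-in log file, then search it for ERROR lines.
printf 'INFO started\nERROR connection refused\n' > /tmp/demo.log
grep ERROR /tmp/demo.log
# -> ERROR connection refused
```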

Squirro Logs

All the Squirro logs are stored in /var/log/squirro. Each service has its own directory, which can be queried as follows:

tail -f /var/log/squirro/SERVICE/*.log

For instance, if we are interested in the topic service:

tail -f /var/log/squirro/topic/*.log

In case we want to check the logs of all services at once:

tail -f /var/log/squirro/*/*.log

The ingester service

Due to its complexity, the ingester service has a different log structure. To do its job, the service manages a set of processes (named processor_X, where X ranges from 1 to N). Each process maintains its own log file in its own directory. The easiest way to debug is to merge their content with the following command:

tail -f /var/log/squirro/ingester/processor_*/*.log