
LogPoint Search Query Language
V 6.12.0 (latest)

October 1, 2021
CONTENTS

1 Search Query Language

2 Simple Search
  2.1 Single word
  2.2 Multiple words
  2.3 Phrases
  2.4 Field values
  2.5 Logical operators
  2.6 Parentheses
  2.7 Wildcards
  2.8 Step
  2.9 Lower and Upper
  2.10 Time Functions
  2.11 List
  2.12 Table

3 Aggregators
  3.1 chart
  3.2 timechart
  3.3 count()
  3.4 distinct_count()
  3.5 sum()
  3.6 max() and min()
  3.7 avg()
  3.8 var()
  3.9 distinct_list()

4 One-to-One Commands
  4.1 rex
  4.2 norm
  4.3 fields
  4.4 rename

5 Process Commands
  5.1 String Concat
  5.2 Domain Lookup
  5.3 Difference
  5.4 Summation
  5.5 Experimental Median Quartile Quantile
  5.6 Process lookup
  5.7 GEOIP
  5.8 Codec
  5.9 InRange
  5.10 Regex
  5.11 DNS Process
  5.12 Compare
  5.13 IP Lookup
  5.14 Compare Network
  5.15 Clean Char
  5.16 Current Time
  5.17 Count Char
  5.18 DNS Cleanup
  5.19 Grok
  5.20 AsciiConverter
  5.21 WhoIsLookup
  5.22 Eval
  5.23 toList
  5.24 toTable

6 Filtering Commands
  6.1 search
  6.2 filter
  6.3 latest
  6.4 order by
  6.5 limit <number>

7 Pattern Finding
  7.1 Single Stream
  7.2 Multiple Streams

8 Chaining of commands

9 Additional Notes
  9.1 Process or Count
  9.2 Conditional Expression
  9.3 Forward Slash Expression
  9.4 norm
  9.5 timechart
  9.6 Capturing normalized field values
  9.7 Grok Patterns
CHAPTER

ONE

SEARCH QUERY LANGUAGE

LogPoint’s Query Language is extensive, intuitive, and user-friendly. It covers all the
search commands, functions, arguments, and clauses. You can search the log messages
in various formats depending on the query you use.
LogPoint also supports chaining of commands and multi-line queries. Use a pipe (|) to
chain the commands and press Shift + Enter to add a new line in the query. The search
keywords are not case-sensitive.

Note: The examples of some search queries provided in this section may not yield any
result as the relevant logs may not be available in your system.

This guide provides the following information that you need to use the LogPoint Query
Language:

• Learn about the types of simple queries to familiarize yourself with the LogPoint
Query Language. Refer to Simple Search.

• Learn how to aggregate fields with chart and timechart commands. Refer to
Aggregators.

• Learn about the one-to-one commands. Refer to One-to-One Commands.

• Learn about the process commands. Refer to Process Commands.

• Learn how to filter the search results. Refer to Filtering Commands.

• Learn how to find one or multiple streams and patterns of data to correlate a
particular event. Refer to Pattern Finding.

• Learn how to chain multiple commands into a single query. Refer to Chaining of
commands.

CHAPTER

TWO

SIMPLE SEARCH

You can use the following types of simple queries to familiarize yourself with the
LogPoint Query Language.

2.1 Single word


Single word search is the most basic search that you can run in LogPoint. Enter a single
word in the Query Bar to retrieve the logs containing the word.

login

This query searches for all the logs containing the word login in the message.

2.2 Multiple words


Searching with multiple words lets you search the original logs using a combination of
words. For searches with multiple words, only the logs containing all the words are
displayed.

Note: The order of the words is not important.

account locked

This query searches for all the logs containing both the search terms account and locked
in the message.

2.3 Phrases
Phrase Search lets you search for an exact phrase in the logs. You must enclose the
phrase in double quotes (" ").

Search Query Language Documentation, Release latest

Note: The order of the words is important.

"account locked"

This query searches for all the logs containing the exact phrase account locked.

2.4 Field values


The normalized logs contain information in key-value pairs. You can use these pairs
directly in the log search. To see all the logs from the user Bob, use the following query:

user = Bob

This query searches for all the logs from the user Bob.

device_ip = 192.168.2.1

This query searches for all the logs coming from the device with the IP Address
192.168.2.1.
You can combine multiple field value pairs as:

device_ip = 192.168.2.1 sig_id = 10051

You can also combine this with a simple query as:

login device_ip = 192.168.2.1 sig_id = 10051

2.5 Logical operators


You can use various keywords to perform logical operations in the LogPoint search
query.

2.5.1 And
Use the logical operator and to search for the logs containing both of the specified
parameters.

login and successful

This query searches for all the messages containing the word login and the word
successful.


The and operator can also be used for key-value search queries as follows:

login and device_ip=192.168.2.2

2.5.2 Or
Use the logical operator or to search for the logs containing either of the specified
parameters.

login or logout

This query searches for all the messages containing either the word login or the word
logout.
This operator can also be used with the key-value search query as follows:

device_ip = 192.168.2.1 or device_ip = 127.0.0.1

2.5.3 Not
You can use the hyphen (-) symbol for the logical negation in your searches.

login -Bob

This query searches for the log messages containing the word login but not the word
Bob.

-device_ip = 192.168.2.243

This query returns the logs containing all the device_ips except 192.168.2.243.

Note:

• While searching with field-names, you can also use != and NOT to denote negation.

device_ip != 192.168.2.243

NOT device_ip = 192.168.2.243

• By default, the or operator binds more tightly than the and operator. Therefore, for the
query login or logout and MSWinEventLog, LogPoint returns the log messages
containing either login or logout that also contain MSWinEventLog.
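Because this precedence is the opposite of most programming languages, an explicit grouping can help. Below is a minimal Python sketch (not LogPoint code; the message strings are invented) showing how the example query groups:

```python
def matches(msg):
    # LogPoint's default: `or` binds more tightly than `and`, so
    # `login or logout and MSWinEventLog` groups as
    # (login OR logout) AND MSWinEventLog
    return ("login" in msg or "logout" in msg) and "MSWinEventLog" in msg

assert matches("MSWinEventLog ... user logout")
assert not matches("user login only")  # matches login, but lacks MSWinEventLog
```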


2.6 Parentheses
In LogPoint, the or operator has a higher precedence by default. You can use
parentheses to override the default binding behavior when using the logical operators
in the search query.

"login failed" or (denied and locked)

This query returns the log messages containing login failed or both denied and locked.

2.7 Wildcards
You can use wildcards as replacements for a part of the query string. Use the following
characters as wildcards:

• ? - Replacement for single character.

• * - Replacement for multiple characters.

If you want all the log messages containing the word login or logon, use the following:

log?n

Note: This query also searches for the log messages containing other variations such
as logan, logbn, and logcn.

log*

This query returns the logs containing the words starting with log such as logpoint,
logout, and login.

Note: You can also use Wildcards while forming a search query with field names. To
get all the usernames that end in t, use the following.

username = *t
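The ? and * wildcards behave like shell-style globbing. As a rough Python sketch (not LogPoint code; the word list is made up), the standard fnmatch module implements the same two wildcards:

```python
from fnmatch import fnmatchcase

words = ["login", "logon", "logan", "logout", "logpoint"]

# `log?n` matches exactly one character in place of `?`
assert [w for w in words if fnmatchcase(w, "log?n")] == ["login", "logon", "logan"]

# `log*` matches any run of characters after `log`
assert [w for w in words if fnmatchcase(w, "log*")] == words
```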

2.8 Step
You can use the step function to group field values into buckets. To see the log
messages with destination_port grouped in steps of 100, use the following query:

step(destination_port, 100) = 0 | chart count() by destination_port

This query searches for all the log messages containing the field destination_port and
groups the values in steps of 100, producing results such as:

destination_port   count
0 - 100            50
100 - 200          32

The value at the end of the query specifies the starting value of destination_port for
the grouping.

Note: You can use the step to group using multiple field names.
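Under the hood, stepping is just integer bucketing. A minimal Python sketch of the grouping idea (assumed port values, not LogPoint code):

```python
from collections import Counter

ports = [22, 80, 80, 135, 143, 443]

# group each destination_port into a step of width 100, starting
# from 0, and count the members of each bucket
buckets = Counter((p // 100) * 100 for p in ports)

assert buckets == {0: 3, 100: 2, 400: 1}
```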

2.9 Lower and Upper


You can change the case of your field values. Use the lower function to convert the
values to lower case and the upper function to convert them to upper case. Converting
the values to a consistent case lets you observe consistent results.
Use the upper and lower functions with the chart and timechart commands.

| chart count() by upper(action)

| timechart count() by lower(action)

2.10 Time Functions


The Time Functions extract specified values from a time-based field. The following time
functions are supported in the Simple Search Query:

• second

• minute

• hour

• day

• day_of_week

• month


These functions take a field as an argument and parse its numeric value as a Unix
timestamp.

Note: Unix time is a system for describing instants in time, defined as the number of
seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday,
1 January 1970, not counting leap seconds. It is used widely in Unix-like and many other
operating systems and file formats.
Example: 1384409898 is the Unix time equivalent of Thu, 14 Nov 2013 06:18:18 GMT

In LogPoint, col_ts and log_ts carry Unix timestamps. However, you can create your
own fields which contain the Unix timestamps using the rex or norm commands.
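The conversion in the note's example can be checked directly. A small Python sketch (assuming the system locale produces English day and month names):

```python
from datetime import datetime, timezone

# 1384409898 seconds after the Unix epoch, interpreted in UTC
dt = datetime.fromtimestamp(1384409898, tz=timezone.utc)

assert dt.strftime("%a, %d %b %Y %H:%M:%S") == "Thu, 14 Nov 2013 06:18:18"
```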

2.10.1 second
You can use the second function to search for the logs generated or collected at a
particular second.
The generic syntax for second is:

second(field) = value

The value for second ranges from 0 to 59.

second(log_ts) = 23

This query searches for the logs generated during the twenty-third second.

2.10.2 minute
You can use the minute function to search for the logs generated or collected at a
particular minute. The value for minute ranges from 0 to 59.

minute(col_ts) = 2

This query searches for the logs generated during the second minute.
minute() can also be used in aggregation functions.

2.10.3 hour
You can use the hour function to search for the logs generated or collected at a
particular hour. The value for hour ranges from 0 to 23.
Example:


hour(col_ts) = 1

This query displays the logs generated during the first hour.

2.10.4 day
You can use the day function to search for the logs generated or collected on a particular day of the month.
Example:

day(col_ts) = 4

This query displays the logs of the 4th day.


2.10.5 day of week
You can use the day_of_week function to search the logs for a specific day of the week.
The value for day_of_week ranges from 1 (Sunday) to 7 (Saturday).
Example:

day_of_week(col_ts) = 7 OR day_of_week(col_ts) = 1

This query displays the logs for the off days, i.e., Saturday and Sunday.

2.10.6 month
You can use the month function to search for the logs generated or collected in a particular month.
The value of month ranges from 1 (January) to 12 (December).
Example:

month(col_ts) = 6

This query displays the log activity for June.

Note: You can use the relational operators (>, <, = and !=) with the time commands
to create a sensible time-range for your search queries.

The following table summarizes the time functions:


Time function    Working Example             Value Range
second           second(col_ts) = 20         0 - 59
minute           minute(col_ts) = 18         0 - 59
hour             hour(col_ts) = 6            0 - 23
day              day(col_ts) = 14            1 - 31
day_of_week      day_of_week(col_ts) = 5     1 - 7 (Sun - Sat)
month            month(col_ts) = 11          1 - 12 (Jan - Dec)

2.11 List
You can create a static list with a number of values, and use this list in the search query
instead of keying in all the values.
For example, if you create a list EMPLOYEES with the names of all the employees in
a company, you can check whether any of them has logged into the system using the
following query:

user in EMPLOYEES action=login

The search query matches the value of the field user with all the values in the
EMPLOYEES list.

Warning: The name of the list must be provided in uppercase.

You can also use an Inline List while executing a search query.
The generic syntax for inline list is:

field in [value1, value2,....]

which is equivalent to field = value1 OR field = value2.


Example:

source_port in [21, 53, 88, 123]

When values in an inline list contain multiple words, enclose them in quotation marks
as shown below.

event in ["Process completed", "Process accomplished"]
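The equivalence between an inline list and chained or conditions can be sketched in Python (the field values are invented):

```python
log = {"source_port": 53}

# `source_port in [21, 53, 88, 123]` ...
in_list = log["source_port"] in (21, 53, 88, 123)

# ... is equivalent to chaining `=` filters with `or`
chained = (log["source_port"] == 21 or log["source_port"] == 53
           or log["source_port"] == 88 or log["source_port"] == 123)

assert in_list and chained
```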


2.12 Table
Tables are external data sources containing information you may choose to associate
with a search result. The supported table sources are CSV, ODBC, LDAP, and Threat
Intelligence. The information obtained is prefixed with the table alias in the log
messages.
For example:
IPList is a CSV table containing fields such as Address, IP, Name, and SN. To view the
content of this external CSV table, use the following query:

table "IPList"

The content of the CSV table is then displayed in the search results.

To view all student entries in a table called studentResult, which contains student_name,
student_roll, and percentage as fields, use:

table "studentResult"

To search for all the student entries in the table studentResult who have passed with
distinction:

table "studentResult" percentage >= 80

To search for all the student entries in the table studentResult who have failed:

table "studentResult" percentage < 40


Note: In the Data Privacy Module enabled systems, when you use the table query, you
can only see the values of the search results in the encrypted form. You cannot request
a decryption for these values.

CHAPTER

THREE

AGGREGATORS

Aggregation functions are used with the chart and the timechart commands to
aggregate field values. The search results can be formatted using the fields, chart, or
timechart commands.

Note: An aggregation function in a search query returns at most 10000 results by
default. Use the limit <number> command with a higher limit to display more results.

3.1 chart
With the chart command, you can display log messages in chart form. If you want to
see all the messages containing login grouped by device_ip, use the following query:

login device_ip = * | chart count() by device_ip

This query searches for all the log messages containing the word login, and groups them
by device_ip. It then displays the number of log messages for each device_ip.
You can also count by multiple fields. The log message count is then displayed for each
combination of the fields.

login | chart count() by destination_address, destination_port

In this case, the log messages are grouped by every combination of
destination_address and destination_port, and the corresponding count is shown.
You can use other aggregation functions such as max and min in place of count.

connection | chart max(datasize) by source_address


datasize=* | chart max(datasize) as mx, min(datasize) as mn, sum(datasize) as sm by source_address limit 15

You can also display the chart in different forms such as Column, Bar, Line, and Area.

Fig. 1: Column Chart

Fig. 2: Bar Chart


Fig. 3: Line Chart

Fig. 4: Area Chart

You can also apply filters inside aggregation functions as follows:


object = connection | chart count(action=permitted) by source_address

In this query, only the log messages containing action=permitted are counted. You can
write the same query as:


action = permitted object = connection | chart count() by source_address

Multiple counts or other aggregators can be used in a single query string.

object = connection | chart count(action=permitted), count(action=blocked) by source_address

This query displays two columns: the count of the connections with the permitted
action and the count of the connections with the blocked action.

3.2 timechart
You can use the timechart command to chart the log messages as time series data. The
logs are plotted against either the collection timestamp (col_ts) or the log generation
time (log_ts), as selected in the system.
The terms log_ts and col_ts differ as follows:

log_ts    Denotes the time present in the log message itself.
col_ts    Denotes the time when the log was actually collected in LogPoint.

For example, you can timechart all the messages containing login as shown below.

login | timechart count()

This plots the count of all the messages containing the word login into a graph with the
horizontal axis as time. The total time-span is the time selected for the search query.

| timechart on log_ts count()

This query plots the count of the logs based on the log_ts field.


You can also use the timechart command to plot the data at a fixed time interval. To
have a timechart with bars for every 20 minutes, use the following query:

login | timechart count() every 20 minutes

You can use every x minutes, every x hours, or every x days with the timechart command.
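Fixed-interval plotting amounts to bucketing timestamps by the chosen interval. A rough Python sketch of 20-minute buckets over Unix timestamps (the sample values are invented, not LogPoint code):

```python
from collections import Counter

INTERVAL = 20 * 60  # every 20 minutes, expressed in seconds
timestamps = [0, 600, 1200, 1250, 2500]

# floor each timestamp to the start of its 20-minute bucket,
# then count the logs falling into each bucket (one bar per bucket)
bars = Counter((t // INTERVAL) * INTERVAL for t in timestamps)

assert bars == {0: 2, 1200: 2, 2400: 1}
```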

Note: When the limit of timechart() is not specified, the number of bars in the
timechart depends on the nature of the query.

• If the time-range is less than 30 units, the number of bars is always 30. For
example, for a time span of 10 minutes, LogPoint displays 30 bars, each spanning
20 seconds.
• If the time-range is greater than 30 units, the number of bars is equal to the
time-range. This holds true until the upper limit on the number of bars is reached,
which is 59.
• There are also some special cases. For a span specified in seconds, the number of
bars is equal to the number of seconds, and a time span of 1 day displays 24 bars,
each spanning one hour.

Aggregation functions are used with the chart and the timechart commands by joining
them with the | symbol.
The following aggregators are available in LogPoint:


• count()

• distinct_count()

• sum()

• min()

• max()

• avg()

• var()

Note: The aggregators are pluggable from LogPoint 5.2.4. This means LogPoint can
create such functions on request.

3.3 count()
You can use the count function to get the total number of logs in the search results. For
example,

| chart count()

This query displays the total number of log messages in the search results.

login | chart count() by device_ip

This query searches for all the log messages containing the word login. It then groups
the logs by their device_ips and shows the count of the log messages for each of the
Device IP.
You can also give filters to the count() function as shown below.

login | chart count(event_id = 528) by device_ip

This query looks for all the log messages containing the word login. It then groups
them by their device_ips and shows the count of the messages containing the field
value event_id = 528.

3.4 distinct_count()
You can use the distinct_count() function to get the count of distinct values of a
field. For example,


| chart distinct_count(destination_port) by destination_address

In this case, though different ports may have multiple counts, distinct_count() returns
the count of the distinct ports for every destination address.
If the search results for a particular destination address had the following data:

port count
21 20
25 30
901 15

Here, distinct_count() returns 3, since there are three distinct ports (21, 25, and 901),
whereas count() returns 65.
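The distinction can be reproduced with the sample data above. A Python sketch:

```python
# 20 logs on port 21, 30 logs on port 25, 15 logs on port 901
ports = [21] * 20 + [25] * 30 + [901] * 15

assert len(ports) == 65       # count(): every log is counted
assert len(set(ports)) == 3   # distinct_count(): unique ports only
```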

3.5 sum()
You can use the sum() function to sum the values of the specified fields.

| chart sum(datasize) by device_ip

This query displays the sum of all the datasize fields for each device_ip.
You can also give filters to the sum() function.

| chart sum(datasize, datasize > 500)

This query sums a datasize value only if it is greater than 500. The filter expression can
be any valid query string but must not contain any view modifiers.

3.6 max() and min()


These functions can be used to find the maximum or minimum value of the specified
field.

| chart max(severity) by device_ip

This query displays the maximum severity value in each of the device_ip.

login | chart count(), max(col_ts) by device_ip, col_type

This query looks for all the log messages containing the word login. Then, it groups
the search results by their device_ips and the col_type and shows the count of the log
messages and the latest col_ts for each of the groups.


The max() and min() functions also support filter expressions as:

| chart max(severity, severity < 5)

This query shows the maximum severity that is less than 5.

3.7 avg()
You can use the avg() function to calculate the average of all the values of the specified
field.

| chart count(), avg(response_time, response_time=*)

This query calculates the average response_time.

3.8 var()
You can use the var() function to calculate the variance of the field values. Variance
describes how far the values are spread out from the mean value.
Execute the following query to visualize how the data fluctuates around the average
value.

severity = * | chart count(),avg(severity),var(severity) by device_ip
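As a reference for what var() computes, here is a plain Python sketch of population variance, one common convention; whether LogPoint uses the population or the sample formula is not stated here:

```python
def variance(values):
    # population variance: the mean of squared deviations from the mean
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# mean of this sample is 5; squared deviations sum to 32; 32 / 8 = 4.0
assert variance([2, 4, 4, 4, 5, 5, 7, 9]) == 4.0
```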

Note: You can use +, -, *, /, and ^ to add, subtract, multiply, divide, and to raise the
power in the min(), max(), sum(), avg(), and var() functions.

Example:

avg(field1/field2^2+field3)

Warning: While using functions such as avg() and min(), use a proper filter to discard
the log messages that do not contain the specified fields.

3.9 distinct_list()
You can use the distinct_list() function to return the list of all the distinct values of the
field. For example, if you want to view all the distinct values of the field action in the
system, you can use the following query:


| chart distinct_list(action)

Fig. 5: Example of distinct_list

You can use a grouping parameter to group the distinct list. For example:

| chart distinct_list(action) as actions by user

The above query returns the list of every distinct value of the field action in the actions
column grouped by the grouping parameter user. You can use this example to view all
the actions performed and machines used by every user in the system.

Fig. 6: Example of distinct_list

You can also use this function with other aggregation functions. For example:

user=Jolly | chart distinct_list(action) as actions, distinct_count(action) as actions_count by user

The above query returns the list of all the distinct actions with their counts for the user
Jolly.


Fig. 7: Example of distinct_list

CHAPTER

FOUR

ONE-TO-ONE COMMANDS

The One-to-one commands take one value as input and provide one output.
For example, you can use the rex and the norm commands to extract specific parts
of the log messages into an ad-hoc field name. This is equivalent to normalizing log
messages during the search. However, the extracted values are not saved.
The rex and norm commands do not filter the log messages. They list all the log
messages returned by the query and add the specified ad-hoc key-value pairs if possible.

4.1 rex
You can use the rex command to recognize regex patterns in the re2 format. The
extracted variable is retained only for the current search scope. The result also shows
the log messages that are not matched by the rex expression.
Example Log:

Oct 15 20:33:02 WIN-J2OVISWBB31.immuneaps.nfsserver.com MSWinEventLog 1 Security 169978 Sat Oct 15 20:33:01 2011 5156 Microsoft-Windows-Security-Auditing N/A N/A Success Audit WIN-J2OVISWBB31.immuneaps.nfsserver.com Filtering Platform Connection The Windows Filtering Platform has allowed a connection. Application Information: Process ID: 4 Application Name: System Network Information: Direction: Inbound Source Address: 192.168.2.255 Source Port: 138 Destination Address: 192.168.2.221 Destination Port: 138 Protocol: 17 Filter Information: Filter Run-Time ID: 67524 Layer Name: Receive/Accept Layer Run-Time ID: 44 169765

You can use the rex command to extract the protocol id into a field protocol_id with the
following syntax:

| rex Protocol:\s*(?P<protocol_id>\d+)

The query format is similar to the following:


| rex any regular expression:\s+(?P<field_name>expression to capture to field)

Warning: The (?P< >) expression is part of the rex syntax to specify the field name.

You can also extract multiple fields from a single rex operation as shown below.

| rex Source Address:\s*(?P<src_address>\d+\.\d+\.\d+\.\d+)

The extracted values can be used to chart your results. For example,

| rex Protocol:\s+(?P<protocol_id>\d+) | chart count() by protocol_id

Since the rex command acts on the search results, you can add it to a query string as
shown below:

Windows Filtering AND allowed | rex Protocol:\s+(?P<protocol_id>\d+)

user=* | rex on user:\s+(?P<account>\S+)@(?P<domain>\S+) | chart count() by account, domain | search account=*
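The (?P<name>...) named-group syntax shown above is also valid in Python's re module (Python re is not re2, but this particular syntax is shared), so an extraction can be prototyped there before writing a rex query. The log snippet is abbreviated from the example above:

```python
import re

log = "Destination Port: 138 Protocol: 17 Filter Information: ..."

# same named-group pattern as the rex example
match = re.search(r"Protocol:\s*(?P<protocol_id>\d+)", log)
assert match and match.group("protocol_id") == "17"
```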

Note: Use single quotes around the pattern for inline normalization when it contains
square brackets. For example:
This syntax works: | norm on user <my_user:\S+> | chart count() by my_user.
But this does not: | norm on user <my_user:[A-Z]+> | chart count() by my_user.
If you use the box brackets ( [, ] ), single quotes ('') are necessary in the syntax.

4.2 norm
You can use the norm command to extract variables from the search results into a
field. The difference between the rex command and the norm command is that norm
supports both the LogPoint normalization syntax and the re2 syntax, whereas the rex
command only supports the re2 syntax.
Example Log:

Dec 17 05:00:14 ubuntu sshd[7596]: Invalid user Bob from 110.44.116.194

To extract the value of the user into the field user, use the following syntax:


| norm Invalid user <user:word>

You can also chart the extracted field as follows:

| norm Invalid user <user:word> | chart count() by user

You can also use the norm command to extract multiple key-value pairs as shown below:

| norm Invalid user <user:word> from <source_ip:ip>
| chart count() by user, source_ip | search user=*
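The word and ip definers are simplified regular expressions. A rough Python equivalent of the multi-field extraction above (the exact definer regexes are LogPoint-internal, so \w+ and the dotted-quad pattern here are approximations):

```python
import re

log = "Dec 17 05:00:14 ubuntu sshd[7596]: Invalid user Bob from 110.44.116.194"

# Approximate the definers: <user:word> -> a word, <source_ip:ip> -> dotted quad.
pattern = r"Invalid user (?P<user>\w+) from (?P<source_ip>\d+\.\d+\.\d+\.\d+)"
m = re.search(pattern, log)
print(m.group("user"))       # Bob
print(m.group("source_ip"))  # 110.44.116.194
```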

Note:

• For the list of definers (simplified regular expressions), refer to the List of Definers.

• Use single quotes to escape square brackets in inline normalization. For
example:
This syntax works: | norm on user <my_user:\S+> | chart count() by my_user.
But this does not: | norm on user <my_user:[A-Z]+> | chart count() by my_user.
If you use the box brackets ( [, ] ), wrap them in single quotes, for example <my_user:'['A-Z']'+>.

4.3 fields
You can use the fields command to display the search results in a tabular form. The
table is constructed with headers according to the field-names you specify. LogPoint
returns null if the logs do not contain the specified fields.


Fig. 1: Count of logs using fields command

| fields source_address, source_port, destination_address, destination_port

Here, the fields source_address, source_port, destination_address, and
destination_port are displayed in a tabular form as shown above.
Any log message without the field destination_port has a corresponding row with the
destination_port column value as -N/A-.

4.4 rename
You can use the rename command to rename the original field names.
Example:

| rename device_ip as host

When multiple fields of a log are renamed as the same name, the rightmost field takes
precedence over others and only that field is renamed.
Example:

| rename source_address as ip, destination_address as ip

Here, if both the source_address and destination_address fields are present in a log,
only the destination_address field is renamed as ip in search results.
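The rightmost-wins behavior can be sketched as a small transformation over a log's field map (a Python sketch; field names and values from the example above, with sample addresses invented):

```python
def rename(log, mapping):
    """Apply renames; when several fields map to the same new name,
    only the rightmost one in the query is renamed, and the others
    keep their original names."""
    out = dict(log)
    winners = {}
    for old, new in mapping:      # later entries overwrite earlier ones
        winners[new] = old        # so the rightmost old field wins per new name
    for new, old in winners.items():
        if old in out:
            out[new] = out.pop(old)
    return out

log = {"source_address": "10.0.0.1", "destination_address": "10.0.0.2"}
print(rename(log, [("source_address", "ip"), ("destination_address", "ip")]))
# source_address keeps its name; destination_address becomes ip
```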


The log messages after normalization can have different field-names for information
carrying similar values. For example, different logs may have name, username, u_name,
or user_name as keys for the same field username. To aggregate all the results and
analyze them properly, you can use the rename command.

| rename target_user as user, caller_user as user | chart count() by user

In some cases, the field names can be more informative with the use of rename command
as below:

label = Attack | rename source_address as attacking_ip
| chart count() by attacking_ip

5 Process Commands

You can use the process command to execute different one-to-one functions, each of
which produces one output for one input.
Some default process commands available in LogPoint are:

5.1 String Concat


This process command lets you join multiple field values of the search results.
Syntax:

| process concat(fieldname1, fieldname2, ...., fieldnameN) as string

Example:

| process concat(city, country) as geo_address

5.2 Domain Lookup


This process command provides the domain name from a URL.
Syntax:

| process domain(url) as domain_name

Example:


url=* | process domain(url) as domain_name
| chart count() by domain_name, url

5.3 Difference
This process command calculates the difference between two numerical field values of
a search.
Syntax:

| process diff(fieldname1,fieldname2) as string

Example:

| process diff(sent_datasize,received_datasize) as difference
| chart count() by sent_datasize, received_datasize, difference


5.4 Summation
This process command calculates the sum of a numerical field's values across the
search results.
Syntax:

| chart sum(fieldname)

Example:

label = Memory | chart sum(used) as Memory_Used by col_ts


5.5 Experimental Median Quartile Quantile


The Experimental Median Quartile Quantile process command performs statistical
analysis (median, quartile, and quantile) of events based on fields. All these commands
take numerical field values as input.
Median
Syntax:

| chart median(fieldname) as string

Example:

doable_mps=* |chart median(doable_mps)

Quartile
Syntax:

| chart quartile(fieldname) as string1, string2, string3

Example:

doable_mps=* |chart quartile(doable_mps)

Quantile
Syntax:


| process quantile(fieldname)

Example:
doable_mps=* | process quantile(doable_mps)
| search quantile>0.99
| chart count() by doable_mps order by doable_mps desc

5.6 Process lookup


This process command looks up related data from a user-defined table.
Syntax:
| process lookup(table,field)

Example:
| process lookup(lookup_table, device_ip)

5.7 GEOIP
This process command gives the geographical information of a public IP address. For
private IP addresses (per RFC 1918, Address Allocation for Private Internets), it adds
the value "internal" to all the generated fields.
Syntax:


| process geoip (fieldname)

Example:

| process geoip (source_address)

For the Private IP:

For the Public IP:

5.8 Codec
The Codec process command encodes the field values to ASCII characters or decodes
the ASCII characters to their text value.
Syntax:

| process codec(<encode/decode function>, <field to be encoded/decoded>) as <attribute_name>
Example:

| process codec(encode, name) as encoded_name
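Assuming encode maps each character to its ASCII code point and decode reverses it (the space-separated output format below is an assumption for illustration), the transformation can be sketched as:

```python
def codec_encode(value):
    # One ASCII code per character, space-separated (assumed output format).
    return " ".join(str(ord(c)) for c in value)

def codec_decode(codes):
    # Reverse: parse each code back into its character.
    return "".join(chr(int(c)) for c in codes.split())

encoded = codec_encode("abc")
print(encoded)                # 97 98 99
print(codec_decode(encoded))  # abc
```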


5.9 InRange
The InRange process command determines whether a certain field-value falls within the
range of two given values. The processed query returns TRUE if the value is in the range.
Syntax:

| process in_range(endpoint1, endpoint2, field, result, inclusion)

where,

endpoint1 and endpoint2 are the endpoint fields of the range,
field is the field name whose value is checked against the range,
result is the user-provided field to hold the result (TRUE or FALSE),
inclusion specifies whether the range is inclusive or exclusive of the
given endpoint values. When this parameter is TRUE, the endpoints are included;
when it is FALSE, the endpoints are excluded.

Example:

| process in_range(datasize, sig_id, duration, Result, True)
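How the inclusion parameter interacts with the endpoints can be sketched in a few lines of Python (a sketch of the described semantics, not the actual implementation):

```python
def in_range(endpoint1, endpoint2, value, inclusion):
    """Return True if value lies between the two endpoints.
    inclusion=True includes the endpoints; False excludes them."""
    low, high = sorted((endpoint1, endpoint2))
    if inclusion:
        return low <= value <= high
    return low < value < high

print(in_range(100, 2000, 100, True))   # True  (endpoint included)
print(in_range(100, 2000, 100, False))  # False (endpoint excluded)
```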

5.10 Regex
The Regex process command extracts specific parts of the log messages into custom
field names.
Syntax:


| process regex("_regexpattern", _fieldname)


| process regex("_regexpattern", "_fieldname")

Both syntaxes are valid.

Example:

| process regex("(?P<type>\S*)",msg)

5.11 DNS Process


This process command returns the domain name assigned to an IP address and
vice versa. It takes an IP address or a domain name along with a field name as input.
The plugin verifies the value of the field: if the input is an IP address, it resolves
it to a hostname, and if the input is a domain name, it resolves it to an IP address.
The output value is stored in the field name provided.
Syntax:

| process dns(IP Address or Hostname)

Example:

destination_address=* | process dns(destination_address) as domain
| chart count() by domain


5.12 Compare
This process command compares two values to check if they match or not.
Syntax:

| process compare(fieldname1,fieldname2) as string

Example:

| process compare(source_address, destination_address) as match
| chart count() by match, source_address, destination_address

5.13 IP Lookup
This process command enriches the log messages with the Classless Inter-Domain
Routing (CIDR) address details. A list of CIDRs is uploaded in the CSV format during
the configuration of the plugin. For any IP Address type within the log messages, it
matches the IP with the content of the user-defined Lookup table and then enriches the
search results by adding the CIDR details.
Syntax:

| process ip_lookup(IP_lookup_table, column, fieldname)


where IP_lookup_table is the lookup table configured in the plugin, and
column is the column name of the table which is matched
against the fieldname of the log message.

Example:

| process ip_lookup(lookup_table_A, IP, device_ip)

This command compares the IP column of the lookup_table_A with the device_ip field
of the log and if matched, the search result is enriched.


5.14 Compare Network


This process command takes a list of IP addresses as inputs and checks if they are from
the same network or different ones. It also checks whether the networks are public or
private. The comparison is carried out using either the default or the customized CIDR
values.
Syntax:

| process compare_network(fieldname1,fieldname2)

Example: (Using default CIDR value)

source_address=* destination_address=*
| process compare_network (source_address, destination_address)
| chart count() by source_address_public, destination_address_public,
same_network, source_address, destination_address
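The public/private and same-network checks can be sketched with the ipaddress module (the /24 default prefix here is an assumption standing in for the plugin's default CIDR value):

```python
import ipaddress

def compare_network(ip1, ip2, prefix=24):
    """Report whether each address is public and whether both fall in
    the same network. The /24 prefix is an assumed default CIDR."""
    a, b = ipaddress.ip_address(ip1), ipaddress.ip_address(ip2)
    same = (ipaddress.ip_network(f"{ip1}/{prefix}", strict=False) ==
            ipaddress.ip_network(f"{ip2}/{prefix}", strict=False))
    return {"ip1_public": not a.is_private,
            "ip2_public": not b.is_private,
            "same_network": same}

print(compare_network("192.168.2.221", "192.168.2.255"))
# both private (RFC 1918), and both in 192.168.2.0/24
```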

5.15 Clean Char


This process command removes all the alphanumeric characters present in a field-value.


Syntax:

| process clean_char(<field_name>) as <string_1>, <string_2>

Example:

| process clean_char(msg) as special, characters
| chart count() by special, characters

5.16 Current Time


This process command adds the current time as a new field to all the logs. You can
use this field to compare, compute, and operate on the timestamp fields in the log
message.
Syntax:

| process current_time(a) as string

Example:

source_address=* | process current_time(a) as time_ts
| chart count() by time_ts, log_ts, source_address


5.17 Count Char


This process command counts the number of characters present in a field-value.
Syntax:

| process count_char(fieldname) as int

Example:

| process count_char(msg) as total_chars
| search total_chars >= 100

5.18 DNS Cleanup


This process command converts a DNS from an unreadable format to a readable format.
Syntax:

| process dns_cleanup(fieldname) as string

Example:


col_type=syslog | norm dns=<DNS:string> | search DNS=*
| process dns_cleanup(DNS) as cleaned_dns
| norm on cleaned_dns .<dns:.*>.
| chart count() by DNS, cleaned_dns, dns

5.19 Grok
The grok process command enables you to extract key-value pairs from logs during
query runtime using Grok patterns. Grok patterns are named regular expressions
that match words, numbers, IP addresses, and other data formats.
Refer to Grok Patterns for a list of all the Grok patterns and their corresponding
regular expressions.
Syntax:
| process grok("<signature>")

A signature can contain one or more Grok patterns.


Example:
To extract the IP address, method, and URL from the log message:
192.168.3.10 GET /index.html

Use the command:


| process grok("%{IP:ip_address_in_log} %{WORD:method_in_log} %{URIPATHPARAM:url_in_log}")

Using this command adds the ip_address_in_log, method_in_log, and url_in_log fields
and their respective values to the log if it matches the signature pattern.
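The expansion of %{PATTERN:field} placeholders into named capture groups can be sketched in Python (the pattern regexes here are simplified stand-ins for the real Grok definitions listed later in this document):

```python
import re

# Simplified stand-ins for a few Grok patterns.
GROK = {
    "IP":           r"\d+\.\d+\.\d+\.\d+",
    "WORD":         r"\b\w+\b",
    "URIPATHPARAM": r"\S+",
}

def grok_to_regex(signature):
    """Turn %{PATTERN:field} placeholders into named capture groups."""
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: f"(?P<{m.group(2)}>{GROK[m.group(1)]})",
        signature,
    )

sig = "%{IP:ip_address_in_log} %{WORD:method_in_log} %{URIPATHPARAM:url_in_log}"
m = re.match(grok_to_regex(sig), "192.168.3.10 GET /index.html")
print(m.groupdict())
```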


5.20 AsciiConverter
This process command converts hexadecimal (hex) value and decimal (dec) value of
various keys to their corresponding readable ASCII values. The application supports
the Extended ASCII Table for processing decimal values.
Hexadecimal to ASCII
Syntax:
| process ascii_converter(fieldname,hex) as string

Example:
| process ascii_converter(sig_id,hex) as alias_name

Decimal to ASCII
Syntax:
| process ascii_converter(fieldname,dec) as string

Example:
| process ascii_converter(sig_id,dec) as alias_name
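The hex and dec conversions can be sketched in Python (the space-separated multi-code input format is an assumption for illustration):

```python
def ascii_converter(value, base):
    """Convert space-separated hex or dec character codes to ASCII text."""
    radix = 16 if base == "hex" else 10
    return "".join(chr(int(code, radix)) for code in value.split())

print(ascii_converter("48 65 6c 6c 6f", "hex"))      # Hello
print(ascii_converter("72 101 108 108 111", "dec"))  # Hello
```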

5.21 WhoIsLookup
The whoislookup process command enriches the search result with information
related to the given field name from the WHOIS database. The WHOIS database
contains information about the registered users of an Internet resource, such as the
registrar, IP address, registry expiry date, updated date, and name server information.
If the specified field name and its corresponding value match the equivalent field
values in the WHOIS database, the process command enriches the search result.
Note, however, that the extracted values are not saved.
Syntax:

| process whoislookup(field_name)

Example:

domain =* | process whoislookup(domain)

5.22 Eval
This process command evaluates mathematical, boolean and string expressions. It
places the result of the evaluation in an identifier as a new field.
Syntax:

| process eval("identifier=expression")

Example:

| process eval("Revenue=unit_sold*Selling_price")


Note: Refer to the Evaluation Process Plugin Manual for more details.

5.23 toList
This process command populates the dynamic list with the field values of the search
result.
Syntax:

| process toList (list_name, field_name)

Example:

device_ip=* | process toList(device_ip_list, device_ip)

Note: Refer to the Dynamic List for more details.

5.24 toTable
This process command populates the dynamic table with the fields and field values of
the search result.
Syntax:

| process toTable (table_name, field_name1, field_name2,...., field_name9)

Example:

device_ip=* | process toTable(device_ip_table, device_name, device_ip, action)


Note: Refer to the Dynamic Table for more details.

6 Filtering Commands

Filtering commands help you filter the search results.

6.1 search
Using the search command, you can conduct searches on the search results. The
LogPoint search query searches on dynamic fields returned from the norm, rex, and
the table commands.
To search for users who have logged in more than 5 times:

login user = * | chart count() as count_user by user | search count_user > 5

Suppose you create a dynamic field new_field using the norm command:

| norm actual_mps = <new_field:int>

To view the logs in which new_field has the value 100, use the search command as:

| norm actual_mps = <new_field:int> | search new_field = 100

We recommend using the search command only in the following cases:

• When you need to filter the results for simple search (non key-value search).
For example:

| search error

• When you need to filter the results using the or logical operator.
For example:


| search device_name=localhost or col_type=filesystem

Note: Avoid using the search command unless absolutely necessary, because it is
resource-intensive. Whenever possible, apply filtering before using the search
command.

6.2 filter
The filter command lets you further filter the logs retrieved in the search results.
Syntax:

<search query> | filter <condition>

For example, if you want to display only the domains that have more than 10 events
associated with them in the search results, use the following query:

norm_id=*Firewall url=* | process domain(url) as domain | chart count() as events by domain | filter events>10

The query searches for all the logs containing the fields url and norm_id with the value
of norm_id having Firewall at the end. It then adds a new field domain to the logs
based on the respective URLs and groups the results by their domains. Finally, the
filter command limits the results to only those domains that have more than 10 events
associated with them.
The filter command does not index the intermediate fields and is thus computationally
more efficient than the search command. LogPoint therefore uses the filter command
to drill down on the search results, which significantly speeds up the drill-down process.

Note:

• The filter command filters the results based on dynamic fields returned from the
norm, rex, and table commands as well.

• The filter command only works with expressions having the =, >, <, >=, and <=
operators.

• To filter the results with more than one condition, you must chain multiple filter
expressions.


6.3 latest
The latest command finds the most recent log messages for every unique combination
of provided field values.

| latest by device_ip | timechart count() by device_ip

This query searches for the latest logs of all the devices.

status = down port = 80 | latest on log_ts by device_ip

Based on the log_ts field, this query finds the latest log for each device whose web
server running on port 80 is down.
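Keeping only the most recent log per unique field value can be sketched in Python (sample logs invented; log_ts simplified to an integer timestamp):

```python
# Keep only the most recent log per device_ip, keyed on log_ts.
logs = [
    {"device_ip": "10.0.0.1", "log_ts": 100, "status": "up"},
    {"device_ip": "10.0.0.1", "log_ts": 250, "status": "down"},
    {"device_ip": "10.0.0.2", "log_ts": 180, "status": "up"},
]

latest = {}
for log in logs:
    key = log["device_ip"]
    if key not in latest or log["log_ts"] > latest[key]["log_ts"]:
        latest[key] = log

print(sorted((d, l["log_ts"]) for d, l in latest.items()))
# one row per device, holding its newest timestamp
```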

6.4 order by
Use the order by command to sort the search results based on a numeric field. You
can sort the results in either ascending or descending order.
Examples:

device_name= "John Doe" and col_type="syslog" | order by col_ts asc

This query searches for all the syslog messages generated from the device named John
Doe and sorts them in the ascending order of their col_ts values.

device_name=* | order by log_ts desc

This query searches for the logs from all the devices in the system and sorts them in the
descending order of their log_ts values.

Note: The sorting order of the search results is inconsistent when a search query
does not contain an order sorting command. Use the order by command to make it
consistent.

6.5 limit <number>


Use the limit <number> command to limit the number of results displayed.
Additionally, you can add the other keyword at the end of the query to display the
aggregation of the rest of the results.

Note:


• The feature to display the Top-10 and the Rest graphs is supported for the
aggregation queries.
• While using the limit <number> command to retrieve a large volume of logs, make
sure that your system has enough resources to load and render the data.

Example:

destination_address = * | chart count() by source_address limit 10 other

This query searches for all the logs having a destination address, displays the top 10
results by their source address, and rolls up all the remaining results into an eleventh
row whose source_address field displays the word other.

Some other working examples:

device_ip=*| chart count() by action, source_address limit 5 other



| chart sum(actual_mps) by service limit 20 other

| chart count() by action limit 10 other
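The limit ... other roll-up can be sketched as a top-N aggregation (counts invented; limit 2 used to keep the example small):

```python
from collections import Counter

counts = Counter({"10.0.0.1": 40, "10.0.0.2": 25, "10.0.0.3": 9, "10.0.0.4": 4})
limit = 2

top = counts.most_common(limit)                      # top-N rows
other = sum(counts.values()) - sum(c for _, c in top)  # everything else rolled up
rows = top + [("other", other)]
print(rows)  # [('10.0.0.1', 40), ('10.0.0.2', 25), ('other', 13)]
```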

7 Pattern Finding

Pattern finding is a method of finding one or multiple streams and patterns of data to
correlate a particular event. For example: five failed logins, followed by a successful
login. It can be performed on the basis of the count and the time of occurrence of
the stream. Use the Pattern Finding rules to detect complex event patterns in a large
number of logs.
Correlation is the ability to track multiple types of logs and deduce meanings from
them. It lets you look for a collection of events that make up a suspicious behavior
and investigate further.

7.1 Single Stream


A stream consists of a count or occurrence of a query. The query can be a simple search
query or an aggregating query. A stream can include a having same or a within
expression, and it carries a notion of time.

Syntax        Description
[]            For single streams, square brackets contain a stream of events.
within        Keyword to denote the notion of time frame
having same   Keyword to group events sharing the same field value

Following are the working examples for pattern finding using single stream:
To find 5 login attempts:

[5 action = "logged on"]

[5 login]

To find 5 login attempts within a timeframe of 2 minutes:


[5 action = "logged on" within 2 minutes]

[5 login within 2 minutes]

To find 5 login attempts by the same user:

[5 action = "logged on" having same user]

[5 login having same user]

To find 10 login attempts by the same user from the same source_address (multiple
fields) within 5 minutes:

[10 action = "logged on" having same user, source_address within 5 minutes]

The time units for specifying a timeframe are second(s), minute(s), hour(s), and day(s).
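The "N events having same field within T" pattern amounts to a sliding-window count per field value, which can be sketched in Python (sample events invented; timestamps in seconds):

```python
from collections import defaultdict, deque

def detect(events, count, window_seconds):
    """Flag when `count` events by the same user occur within the window.
    `events` is a time-sorted list of (timestamp, user) pairs."""
    recent = defaultdict(deque)   # user -> timestamps inside the window
    hits = []
    for ts, user in events:
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > window_seconds:
            q.popleft()           # drop events that fell out of the window
        if len(q) >= count:
            hits.append((ts, user))
    return hits

events = [(0, "bob"), (10, "bob"), (20, "bob"), (30, "bob"), (40, "bob"), (200, "eve")]
print(detect(events, 5, 120))  # bob trips the 5-within-2-minutes pattern
```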

[error] as E

This query finds the logs with errors. It then aliases the result as E and displays the fields
prefixed with E such as E.severity, and E.device_ip. You can then use the aliased fields
as shown below:

[error] as E | rename E.device_ip as DIP | search DIP = "127.0.0.1"


Pattern finding queries for different conditions:


10 logins to localhost (source_address) by the same user within the last 15 minutes:

[10 login source_address = 127.0.0.1 having same user_name within 15 minutes]

Matching on a field extracted with the norm command:

[2 login | norm <username:word> login successful having same username within 10 seconds]

7.2 Multiple Streams


You can join multiple patterns by using Pattern Finding by Joining Streams and Pattern
Finding by Following Streams.

7.2.1 Left Join


You can use a left join to return all the values from the table or stream on the left, and
only the common values from the table or stream on the right.


Example:

[table event_prob] as s1
left join [event = * | chart count() by event] as s2
on s1.event = s2.event

7.2.2 Right Join


You can use a right join to return all the values from the table or stream on the right and
only the common values from the table or stream on the left.
Example:

[5 transaction error having same user within 30 seconds] as s1
right join [transaction successful] as s2
on s1.user=s2.user

7.2.3 Join
Join queries are used to link the results from different sources. The link between two
streams must have an on condition. The link between two lookup sources, or between a
lookup source and a stream, does not require a time-range. Join, as a part of a search
string, can link one data-set to another based on one or more common fields. For
instance, two completely different data-sets can be linked together based on a
username or event ID field present in both data-sets.
The syntax for joining multiple patterns is as follows:
[stream 1] <aliased as s1> <JOIN> [stream 2] <aliased as s2> on <Join_conditions> |
additional filter query.

[action = locked] as locked
join
[action = unlocked] as unlocked
on
locked.target_user = unlocked.target_user
| chart count() by locked.target_user, locked.caller_computer, unlocked.caller_user

[login] as l join [table User] as u on l.user = u.user

To find the events where a reserved port of an Operating System (inside the
PORT_MACHINE table) is equal to the blocked port (inside the BLOCKED_PORT table):

[table PORT_MACHINE port<1024] as s1 join [table BLOCKED_PORT] as s2
on s1.port=s2.port
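The on condition of a join can be pictured as matching rows on a shared field. A minimal Python sketch of the PORT_MACHINE/BLOCKED_PORT example above (table contents invented for illustration):

```python
# Two hypothetical tables joined on their common `port` field.
port_machine = [{"port": 22, "service": "ssh"}, {"port": 80, "service": "http"}]
blocked_port = [{"port": 80, "reason": "policy"}, {"port": 8080, "reason": "policy"}]

joined = [
    {**s1, **s2}                       # merged row enriched from both sides
    for s1 in port_machine
    for s2 in blocked_port
    if s1["port"] == s2["port"]        # the `on` condition
]
print(joined)  # only port 80 appears in both tables
```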


To find 5 login attempts by the same user within 1 minute followed by 5 failed login
attempts by the same user within 1 minute
[5 login having same user within 1 minute] as s1
followed by
[5 failed having same user within 1 minute]

To find 5 login attempts by the same user within 1 minute followed by 5 failed attempts
by the same user within 1 minute and users from both result are same
[5 login having same user within 1 minute] as s1
followed by
[5 failed having same username within 1 minute] as s2
on
s1.username = s2.username

7.2.4 Followed by
Pattern Finding by followed by is useful when two sequential streams are connected to
an action.
For example:
[2 login success having same user] AS stream1
followed by
[login failure] as stream2
ON
stream1.user = stream2.user

Here,

Syntax                        Description
[ ] AS stream1                A simple pattern finding query aliased as stream1
followed by                   Keyword
[ ] AS stream2                A simple search aliased as stream2
ON                            Keyword
stream1.user = stream2.user   Matching field from the 2 streams

The syntax for joining multiple patterns is as follows:

• [stream 1] <aliased as s1> <followed by> [stream 2] <aliased as s2> <within time
limit> on <Join_conditions>| additional filter query.
• [stream 1] as s1 followed by [stream2] as s2 within time_interval on s1.field = s2.field
• [stream 1] as s1 followed by [stream2] as s2 on s1.field = s2.field


• [stream 1] as s1 followed by [stream2] as s2 within time_interval

The inference derived from the above queries:

• Streams can be labeled using alias. Here, the first stream is labeled as s1. This
labeling is useful while setting the join conditions in the join query.

• The operation between multiple streams is carried out using “followed by” or
“join”.

• Use the followed by keyword to connect two sequential streams anticipating an
action, e.g., multiple login attempts followed by a successful login.

• Use the join keyword to view additional information in the final search. The join
syntax is mostly used with tables for enriching the data.

• Time limit for occurrence can also be specified.

• If you use the join keyword, then specify the on condition.

• Join conditions are simple mathematical operations between the data-sets of two
streams.

• Use additional filter query to mitigate false positives which are generally created
while joining a stream and a table. Searching the query with a distinct key from the
table displays an error-less result.

[| chart count() by device_ip] AS lookup
JOIN
[device_ip=*] AS log ON lookup.device_ip = log.device_ip

This query does not display a histogram but displays the log table.

[device_ip=*] as log join [| chart count() by device_ip] as lookup on
log.device_ip=lookup.device_ip

This query displays both the histogram and the log table.

Note:

• The latest command is supported in pattern finding queries.

• All the reserved keywords such as on, join, as, and chart are not case-sensitive.

• If you want to use reserved keywords in simple search or some other contexts, put
them in quotes.


login | chart count() by device_ip | search "count()" > 5

8 Chaining of Commands

You can chain multiple commands into a single query by using the pipe (|) character.
Any command except fields can appear before or after any other command. The fields
command must always appear at the end of the command chain.
Example:

| chart count() as cnt by device_name | search cnt > 1000

This query displays the device_name values that appear in more than 1000 logs.

(label = logoff) AND hour(log_ts) > 8 AND hour(log_ts) < 16
| latest by user
| timechart count() by user

This query captures all the log messages labeled as logoff and those collected between
8 AM and 4 PM. It then displays the timechart of the recent users for the selected
time-frame.

9 Additional Notes

9.1 Process or Count


Since count and process are keywords, they must be enclosed within double quotes.

MsWinEventLog product=* | chart count() as "Count" by product
order by count() desc limit 10

Similarly,

MsWinEventLog product=* "process"=* action=*
| fields product, "process", action, object

9.2 Conditional Expression


Conditional expressions within parentheses ( ) must be separated explicitly by or.

| chart count(label = delete or label = remove) as remove

9.3 Forward Slash Expression


Any expression containing a forward slash must also be enclosed within double quotes.

source_name = "/opt/immune/var/log/audit/webserver.log"
| chart count() by source_address

9.4 norm


| norm doable_mps=<dmps:'['0-9']'+>

| norm <:'\['><my_field:word><:'\]'> | chart count() by my_field

9.5 timechart
The limit command does not work with timechart.

| timechart count() by col_type

9.6 Capturing normalized field values


You can use the norm on command to capture the normalized field value in the log
search result.

Suppose the log search result contains the field-value pair:

source_name = /opt/immune/var/log/benchmarker

To capture the first two components of the path, write the query as follows:

| norm on source_name <capture:'\/opt\/immune'>

This feature works well with the rex command too.

user=* | rex on user:\s+(?P<account>\S+)@(?P<domain>\S+)
| chart count() by account, domain | search account=*

In the example above, the rex command is used on a field which captures email
addresses. The email address is then broken into account and domain using the
corresponding regex.

9.7 Grok Patterns


The LogPoint search recognizes the following Grok patterns.
General Patterns


Pattern name Regular expression


USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
DOMAINTLD [a-zA-Z]+
EMAIL %{NOTSPACE}@%{WORD}\.%{DOMAINTLD}
QS %{QUOTEDSTRING}

Networking-related Patterns

Pattern name Regular expression


MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}

Path-related patterns


Pattern name Regular expression


PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
URIPATH (?:/[A-Za-z0-9$.+!*’(){},~:;=@#%_-]*)+
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?
(?:%{URIPATHPARAM})?

Date and time patterns

Pattern name Regular expression


MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?
|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?
|Sat(?:urday)?|Sun(?:day)?)
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}


Syslog patterns

Pattern name Regular expression


SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
SYSLOGHOST %{IPORHOST}

Log formats

Pattern name Regular expression


SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\]
"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})"
%{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
