Create a Source
Sources define the location of your entity's logs and how to enrich the log entries. To start continuous log collection through the OCI management agents, a source needs to be associated with one or more entities.
For more specific steps:
- To ingest application, infrastructure, database, and other generic logs, see Ingest Application, Infrastructure, Database and Other Generic Logs.
- To create a syslog source, see Set Up Syslog Monitoring.
- To create database instance log sources, see Set Up Database Instance Monitoring.
- To collect logs through REST API, see Set Up REST API Log Collection.
- To monitor Windows events, see Set Up Windows Event Monitoring.
- To ingest logs of Oracle Diagnostic Logging format, see Ingest Logs of Oracle Diagnostic Logging (ODL) Format.
Additional Topics:
Use Data Filters in Sources
Oracle Logging Analytics lets you mask and hide sensitive information from your log entries as well as hide entire log entries before the log data is uploaded to the cloud.
Using the Data Filters tab when editing or creating a source, you can mask IP addresses, user ID, host name, and other sensitive information with replacement strings, drop specific keywords and values from a log entry, and also hide an entire log entry.
You can add data filters when creating a log source, or when editing an existing source. See Customize an Oracle-Defined Source to learn about editing existing log sources.
If the log data is sent to Oracle Logging Analytics using On-demand Upload or collection from object store, then the masking will happen on the cloud side before the data is indexed. If you are collecting logs using the Management Agent, then the logs are masked before the content leaves your premises.
Topics:
Masking Log Data
Masking is the process of taking a set of existing text and replacing it with other static text to hide the original content.
If you want to mask any information such as the user name and the host name from the log entries:
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
-
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
-
Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.
-
Click the Data Filters tab and click Add.
-
Enter the mask Name, select Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.
The Find Expression value can be a plain text search or a standard regular expression. The value that will be replaced with the Replace Expression must be surrounded by parentheses ( ).

Name | Find Expression | Replace Expression |
---|---|---|
mask username | User=(\S+)\s+ | confidential |
mask host | Host=(\S+)\s+ | mask_host |

Note
The syntax of the replace string should match the syntax of the string that’s being replaced. For example, a number shouldn’t be replaced with a string. An IP address of the form 123.45.67.89 should be replaced with 000.000.000.000 and not with 000.000. If the syntaxes don’t match, then the parsers may break.
-
Click Save.
When you view the masked log entries for this log source, you’ll find that Oracle Logging Analytics has masked the values of the fields that you’ve specified.
-
User = confidential
-
Host = mask_host
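As a rough illustration, the two masks above behave like this Python sketch. The sample log line and the local `re.sub` emulation are invented for illustration; this is not the agent's actual implementation.

```python
import re

# Each mask pairs a Find Expression (with the value to replace in
# parentheses) with a Replace Expression, mirroring the table above.
MASKS = [
    (r"User=(\S+)\s+", "confidential"),
    (r"Host=(\S+)\s+", "mask_host"),
]

def apply_masks(entry: str) -> str:
    for find, replacement in MASKS:
        # Only the text captured by the parenthesized group is replaced,
        # not the rest of the match.
        entry = re.sub(
            find,
            lambda m, r=replacement: m.group(0).replace(m.group(1), r, 1),
            entry,
        )
    return entry

print(apply_masks("User=jsmith Host=db01.example.com status=0"))
```

Running this replaces only the captured values, leaving the surrounding keys and spacing intact.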
Hash Masking the Log Data
When you mask the log data using the mask as described in the previous section, the masked information is replaced by the static string provided in the Replace Expression. For example, when the user name is masked with the string confidential, then the user name is replaced with the expression confidential in the log records for every occurrence. By using a hash mask, you can instead hash the found value with a unique hash. For example, if the log records contain multiple user names, then each user name is hashed to a unique value. So, if the string user1 is replaced with the text hash ebdkromluceaqie for every occurrence, then the hash can still be used to identify that these log entries are for the same user. However, the actual user name will not be visible.

Risk Associated: Because this is a hash, there is no way to recover the actual value of the masked original text. However, hashing any given string yields the same hash every time. Ensure that you consider this risk while hash masking the log data. For example, the string oracle has the md5 hash a189c633d9995e11bf8607170ec9a4b8. Every time someone creates an md5 hash of the string oracle, it will always be the same value. Although you cannot take this md5 hash and reverse it back to get the original string oracle, if someone tries to guess and forward hash the value oracle, they will see that the hash matches the one in the log entry.
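The determinism described above can be sketched as follows. md5 is used only because the text cites an md5 example; the hashing scheme Oracle Logging Analytics actually applies is not specified here.

```python
import hashlib

def hash_mask(value: str) -> str:
    # One-way digest: the original value cannot be recovered from it.
    return hashlib.md5(value.encode()).hexdigest()

# The same input always produces the same hash, so log entries for the
# same user remain correlatable without exposing the user name - and a
# correct guess of the input can be confirmed by forward hashing it.
print(hash_mask("oracle"))
print(hash_mask("user1") == hash_mask("user1"))
```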
To apply the hash mask data filter on your log data:
-
Go to Create Source page. For steps, see Create a Source.
-
You can also edit a source that already exists. For steps to open an Edit Source page, see Edit Source.
-
Click the Data Filters tab and click Add.
-
Enter the mask Name, select Hash Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.
Name | Find Expression | Replace Expression |
---|---|---|
Mask User Name | User=(\S+)\s+ | Text Hash |
Mask Port | Port=(\d+)\s+ | Numeric Hash |
-
Click Save.
If you want to use a hash mask on a string-based field, you can use either Text or Numeric hash. But if your data field is numeric, such as an integer, long, or floating point, then you must use Numeric hash. Otherwise, the replacement text will break any regular expressions that expect this value to be a number, and the value will not be stored.
This replacement happens before the data is parsed. Typically, when the data must be masked, it's not clear if it is always numeric. Therefore, you must decide the type of hash while creating the mask definition.
As the result of the above example hash masking, each user name is replaced by a unique text hash, and each port number is replaced by a unique numeric hash.
You can utilize the hash mask when filtering or analyzing your log data. See Filter Logs by Hash Mask.
Dropping Specific Keywords or Values from Your Log Records
Oracle Logging Analytics lets you search for a specific keyword or value in log records and drop the matched keyword or value if that keyword exists in the log records.
Consider the following log record:
ns5xt_119131: NetScreen device_id=ns5xt_119131
[Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03"
duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst
zone=mail_servers action=Permit sent=756 rcvd=756 src=192.0.2.1 dst=203.0.113.1
src_port=44796 dst_port=25 src-xlated ip=192.0.2.1 port=44796 dst-xlated
ip=203.0.113.1 port=25 session_id=18738
If you want to hide the keyword device_id and its value from the log record:
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
-
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
-
Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.
-
Click the Data Filters tab and click Add.
-
Enter the filter Name, select Drop String as the Type, and enter the Find Expression value, such as device_id=\S*
-
Click Save.
When you view the log records for this source, you’ll find that Oracle Logging Analytics has dropped the keywords or values that you’ve specified.
Ensure that your parser regular expression matches the log record pattern, otherwise Oracle Logging Analytics may not parse the records properly after dropping the keyword.
Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.
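The effect of the Drop String filter above can be previewed with a small Python sketch; the `re.sub` call is a local approximation of the collection-side behavior, applied to a shortened version of the sample record.

```python
import re

record = ("ns5xt_119131: NetScreen device_id=ns5xt_119131 "
          "[Root]system-notification-00257(traffic): duration=4 policy_id=2")

# Drop String: every match of the find expression is removed from the
# log record before it is parsed.
dropped = re.sub(r"device_id=\S*", "", record)
print(dropped)
```

The rest of the record survives unchanged, which is why the parser's regular expression must still match the remaining pattern.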
Dropping an Entire Log Entry Based on Specific Keywords
Oracle Logging Analytics lets you search for a specific keyword or value in log records and drop an entire log entry in a log record if that keyword exists.
Consider the following log record:
ns5xt_119131: NetScreen device_id=ns5xt_119131
[Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03"
duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst
zone=mail_servers action=Permit sent=756 rcvd=756 src=198.51.100.1
dst=203.0.113.254 src_port=44796 dst_port=25 src-xlated ip=198.51.100.1
port=44796 dst-xlated ip=203.0.113.254 port=25 session_id=18738
Let’s say that you want to drop the entire log entry if the keyword device_id exists in it:
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
-
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
-
Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.
-
Click the Data Filters tab and click Add.
-
Enter the filter Name, select Drop Log Entry as the Type, and enter the Find Expression value such as .*device_id=.*
It is important that the regular expression matches the entire log entry. Using .* in front of and at the end of the regular expression ensures that it matches all other text in the log entry. -
Click Save.
When you view the log entries for this log source, you’ll find that Oracle Logging Analytics has dropped all the log entries that contain the string device_id.
Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.
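The Drop Log Entry behavior can be sketched as follows; the second sample line is invented, and the full-entry match is the key point.

```python
import re

# Drop Log Entry: the expression must match the whole entry, so .* is
# needed on both sides of the keyword. Entries that match are discarded.
pattern = re.compile(r".*device_id=.*")

entries = [
    "ns5xt_119131: NetScreen device_id=ns5xt_119131 action=Permit",
    "sshd[2211]: Accepted publickey for opc from 10.0.0.5",  # invented line
]
kept = [e for e in entries if not pattern.fullmatch(e)]
print(kept)
```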
Use Extended Fields in Sources
The Extended Fields feature in Oracle Logging Analytics lets you extract additional fields from a log record in addition to any fields that the parser parsed.
In the source definition, a parser is chosen that can break a log file into log entries and each log entry into a set of base fields. These base fields would need to be consistent across all log entries. A base parser extracts common fields from a log record. However, if you have a requirement to extract additional fields from the log entry content, then you can use the extended fields definition. For example, the parser may be defined so that all the text at the end of the common fields of a log entry are parsed and stored into a field named Message.
When you search for logs using the updated source, values of the extended fields are displayed along with the fields extracted by the base parser.
To add the Log Group as the input field, provide its OCID for the value instead of the name.
If you use Automatic parse time only option in your source definition instead of creating a parser, then the only field that will be available for creating Extended Field Definitions will be the Original Log Content field since no other fields will be populated by the parser. See Use the Automatic Time Parser.
Oracle Logging Analytics enables you to search for the extended fields that you’re looking for. You can search based on the how it was created, the type of base field, or with some example content of the field. Enter the example content in the Search field, or click the down arrow for the search dialog box. In the search dialog box, under Creation Type, select if the extended fields that you’re looking for are Oracle-defined or user-defined. Under Base Field, you can select from the options available. You can also specify the example content or the extraction field expression that can be used for the search. Click Apply Filters.
Table 8-1 Sample Example Content and Extended Field Extraction Expression
Description | Base Field | Example Content | Extended Field Extraction Expression |
---|---|---|---|
To extract the endpoint file extension from the URI field of a Fusion Middleware Access log file | | | This will extract the file suffix such as jpg or html and store the value into the field Content Type. It will only extract for suffixes listed in the expression. |
To extract the user name from the file path of a log entity | | | |
To extract the start time from the Message field. Note: Event Start Time is a Timestamp data type field. If this were a numeric data type field, then the Start Time would be stored simply as a number, and not as a timestamp. | | | |
Configure Field Enrichment Options
Oracle Logging Analytics lets you configure Field Enrichment options so you can further extract and display meaningful information from your extended fields data.
One of the Field Enrichment options is Geolocation, which converts IP addresses or location coordinates present in the log records to a country or country code. This can be used in log sources like Web Access Logs that have external client IP addresses.
Using the Lookup Field Enrichment option, you can match field-value combinations from logs to an external lookup table.
Include additional information in your log entries by using the Additional Fields option. This information gets added to each log entry at processing time.
To replace a string/expression in a field with an alternative expression and store the result in an output field, use the Substitution option.
-
For a source, you can define a maximum of three field enrichments, each of a different type.
-
To add the Log Group as the input field, provide its OCID for the value instead of the name.
Use Ingest-Time Lookups in the Source
You can add data from multiple lookups by setting up the Lookup Field Enrichment option multiple times. Lookup Field Enrichments are processed in the order in which they are created. So, if you have related lookups where the keys overlap and each lookup's processing adds further enrichment, then make sure to include the overlapping keys in the input and output selections of the Lookup Field Enrichment definition. For an example of using multiple related lookups to enrich log data, see Example of Adding Multiple Lookup Field Enrichments.
Steps to Add Lookup Field Enrichment
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
The Sources page opens. Click Create Source.
Alternatively, click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.
Note
Make sure that a parser is selected in the source definition page to have the Add button enabled for field enrichment.
-
Click the Field Enrichment tab and then click Add.
The Add Field Enrichment dialog box opens.
-
In the Add Field Enrichment dialog box,
- Select Lookup as the Function.
- Select the Lookup Table Name from the drop down menu.
- Under Input Fields, select the Lookup Table Column and the Log Source Field to which it must be mapped. This maps the key from the lookup table to a field that is populated by your parser in Log Source Field. For example, the errid column in the lookup table can be mapped to the Error ID field in the logs. The list for the input fields in Log Source Field is limited to the fields that your log source populates.
- Under Actions, select the new log source field and the field value in the lookup table column to which it must be mapped. When a matching record is found in the specified lookup table based on the input mapping above, the output field specified in the Log Source Field is added to the log with the value of the output lookup column specified in Field value. For example, the erraction column in the lookup table can be mapped to the Action field. Optionally, click + Another item to map more output fields.
- Click Add field enrichment.
The lookup is now added to the Field Enrichment table.
-
Keep the Enabled check box selected.
-
To add more lookups, repeat steps 3 and 4.
When you display the log records of the log source for which you created the ingest-time lookup field enrichment, you can see that the Output Field displays values that are populated against the log entries because of the lookup table reference you used in creating the field enrichment. See Manage Lookups.
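The errid-to-Error ID example in the steps above amounts to the following sketch; the lookup rows are invented, and the field names follow the example in the text.

```python
# Hypothetical lookup table keyed on the errid column.
LOOKUP = {
    "ORA-00600": {"erraction": "Contact support"},
    "ORA-01555": {"erraction": "Increase undo retention"},
}

def enrich(record: dict) -> dict:
    # Input mapping: lookup key column errid <- parsed field "Error ID".
    row = LOOKUP.get(record.get("Error ID"))
    if row:
        # Action: output field "Action" <- lookup column erraction.
        record["Action"] = row["erraction"]
    return record

print(enrich({"Error ID": "ORA-01555", "Message": "snapshot too old"}))
```

Records whose key has no matching lookup row are left unchanged.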
Example of Adding Multiple Lookup Field Enrichments
You can add up to three Lookup Field Enrichments to a source. The individual lookups may or may not be related to one another.
The following example illustrates how three related lookups can be set up such that the log data can be enriched with information from all three lookups. Consider the following three related lookups that have information about multiple hosts:
Lookup1: SystemConfigLookup
Serial Number | Manufacturer | Operating System | Memory | Processor Type | Disk Drive | Host ID |
---|---|---|---|---|---|---|
SER-NUM-01 | Manuf1 | OS1 | 256TB | Proc1 | Hard Drive | 1001 |
SER-NUM-02 | Manuf2 | OS2 | 7.5TB | Proc3 | Solid State Drive | 1002 |
SER-NUM-03 | Manuf2 | OS3 | 16TB | Proc2 | Solid State Drive | 1003 |
SER-NUM-04 | Manuf3 | OS1 | 512TB | Proc5 | Hard Drive | 1004 |
SER-NUM-05 | Manuf1 | OS1 | 128TB | Proc4 | Hard Drive | 1001 |
Lookup2: GeneralHostConfigLookup
Host ID | Host Owner | Host Location | Host Description | Host IP Address |
---|---|---|---|---|
1001 | Jack | San Francisco | Description for Jack host | 192.0.2.76 |
1002 | Alexis | Denver | Description for Alexis host | 203.0.113.58 |
1003 | John | Seattle | Description for John host | 198.51.100.11 |
1004 | Jane | San Jose | Description for Jane host | 198.51.100.164 |
Lookup3: NetworkConfigLookup
IP Address | Subnet Mask | Gateway | DNS Server |
---|---|---|---|
192.0.2.76 | 255.255.255.252 | 192.0.2.1 | Recursive server |
203.0.113.58 | 255.255.255.0 | 203.0.113.1 | Authoritative server |
198.51.100.11 | 255.255.255.224 | 198.51.100.1 | Root server |
198.51.100.164 | 255.255.255.192 | 198.51.100.1 | Recursive server |
Between the lookups Lookup1 and Lookup2, Host ID is the common key: it can be selected as the output in the first lookup field enrichment and as the input in the second. Similarly, between the lookups Lookup2 and Lookup3, IP Address is the common key: it can be selected as the output in the second lookup field enrichment and as the input in the third.
With the above setting, let the lookup field enrichments be configured in the order 1, 2, and 3:
Lookup Field Enrichment | Lookup Table Name | Input Fields | Actions |
---|---|---|---|
1 | SystemConfigLookup | | |
2 | GeneralHostConfigLookup | | |
3 | NetworkConfigLookup | | |
After the above enrichment configuration is complete, when the Serial Number field is detected in the log data, it is further enriched with Operating System, Memory, Host ID, Host Owner, Host IP Address, Gateway, and DNS Server from the three lookups. So, for the serial number SER-NUM-01 detected in the log, it is enriched with the additional information OS1, 256TB, 1001, Jack, 192.0.2.76, 192.0.2.1, and Recursive server.
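The chaining described above can be sketched with plain dictionaries. This is a local approximation; the point is that each lookup's output supplies the key the next lookup consumes, which is why processing order matters.

```python
# Subsets of the three lookup tables from the example.
SYSTEM = {"SER-NUM-01": {"Operating System": "OS1", "Memory": "256TB",
                         "Host ID": "1001"}}
HOSTS = {"1001": {"Host Owner": "Jack", "Host IP Address": "192.0.2.76"}}
NETWORK = {"192.0.2.76": {"Gateway": "192.0.2.1",
                          "DNS Server": "Recursive server"}}

def enrich(record: dict) -> dict:
    record.update(SYSTEM.get(record.get("Serial Number"), {}))     # lookup 1
    record.update(HOSTS.get(record.get("Host ID"), {}))            # lookup 2
    record.update(NETWORK.get(record.get("Host IP Address"), {}))  # lookup 3
    return record

print(enrich({"Serial Number": "SER-NUM-01"}))
```

If the lookups were applied in the reverse order, Host ID and Host IP Address would not yet exist when lookups 2 and 3 ran, and no enrichment would occur.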
Use the Geolocation Field for Grouping Logs
After you set up the Geolocation field enrichment, you can view log records grouped by country or country code. This is useful when you analyze logs that have crucial location information such as IP address or location coordinates, for example, access logs, trace logs, or application transport logs.
Add More Data to Your Log Entries at Processing Time
You might want to include more information in each of your entries as additional metadata. This information is not part of the log entry but is added at processing time, for example, Container ID, Node. For an example of adding metadata while uploading logs on demand, see Upload Logs on Demand.
The information thus added might not be directly visible in the Log Explorer. Complete the following steps to make it visible in the Log Explorer for your log analysis:
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
The Sources page opens. Click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.
Note
Make sure that a parser is selected in the source definition page to have the Add button enabled for field enrichment.
-
Click the Field Enrichment tab and then click Add.
The Add Field Enrichment dialog box opens.
-
In the Add Field Enrichment dialog box,
- Select Additional Fields as the Function.
- Under Map Fields, select the fields that you want to map to the source. The fields that are selected in the parsers associated with this source are not available here.
- Click Add.
After you specify the additional fields, they are visible in the Log Explorer for log analysis. They can also be selected while configuring the Extended Fields or Labels for sources.
Use Substitution Function to Replace an Expression in a Field
During log processing, if you want to replace a part of the field value with an alternative string or expression, then use the substitution function and store the resulting expression of the field in another output field.
Consider the scenario where you want to capture all the log records that have the field URI with content of the format http://www.example.com/forum/books?<ISBN>, where the value of ISBN varies with each log record. In such cases, you can substitute the value of ISBN in the field of each log record with a string allExampleBooks and store the result in a field modified_URI. As a result, all the log records captured with URI in the above format will also have the field modified_URI with the value http://www.example.com/forum/books?allExampleBooks. You can now use the field modified_URI in your search query to filter those logs for further analysis in the Log Explorer.
Additionally, use the option Substitute all matches to replace all the occurrences of the value in the field. For example, if the field Original log content has multiple occurrences of an IP address that you want to replace with a string, then you can use this option. The result can be saved in a field, for example Altered log content. You can then use the field Altered log content in the query to filter all the log records that have IP addresses in the field Original log content.
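Both substitution modes can be previewed in Python. The ISBN value is a made-up placeholder, and the IP-matching expression is a simplified illustration rather than the expression the service would require.

```python
import re

# Single substitution into a new output field, as in the modified_URI example.
uri = "http://www.example.com/forum/books?9781402894626"
modified_uri = re.sub(r"books\?\S+", "books?allExampleBooks", uri)
print(modified_uri)

# Substitute all matches: re.sub replaces every occurrence by default,
# here masking both IP-like tokens in one pass.
original = "src=192.0.2.1 dst=203.0.113.1"
altered = re.sub(r"\d{1,3}(?:\.\d{1,3}){3}", "<ip>", original)
print(altered)
```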
-
Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.
The administration resources are listed in the left hand navigation pane under Resources. Click Sources.
The Sources page opens. Click Create Source.
Alternatively, click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.
-
Enter a name for the source, a suitable description, and select the source type. Select a parser that must be used to parse the logs. These selections will determine the fields that are available for field enrichment.
-
Click the Field Enrichment tab and then click Add field enrichment.
-
In the Add field enrichment dialog box, select Substitution as the Function.
-
In the Input Fields section:
-
Select the Log Source Field that has values that you want to replace, for example, URI.
-
Under Expression to match, provide the regex expression to match for the string in the field that must be replaced.
-
Specify the Replacement string/expression that must be substituted in place of the original value of the input field.
-
If the field has multiple occurrences of the string that you want replaced, then enable Substitute all matches check box.
-
-
Under Output Field section, select the field that must store the new value of the input field after the original value is replaced with the substitution value.
-
Click Add field enrichment.
Use Labels in Sources
Oracle Logging Analytics lets you add labels or tags to log records, based on defined conditions.
When a log entry matches the condition that you have defined, a label is populated with that log entry. That label is available in your log explorer visualizations as well as for searching and filtering log entries.
You can use Oracle-defined or user created labels in the sources. To create a custom label to tag a specific log entry, see Create a Label.
-
To use labels in an existing source, edit that source. For steps to open an Edit Source page, see Edit Source.
-
Click the Labels tab.
-
To add a conditional label, click Add conditional label.
In the Conditions section:
-
Select the log field on which you want to apply the condition from the Input Field list.
-
Select the operator from the Operator list.
-
In the Condition Value field, specify the value of the condition to be matched for applying the label.
Note
To add the Log Group as the input field, provide its OCID for the value instead of the name.
-
To add more conditions, click the Add Condition icon, and repeat steps 3a through 3c. Select the logical operation to apply to the multiple conditions: AND, OR, NOT AND, or NOT OR.
To add a group of conditions, click the Group Condition icon, and repeat steps 3a through 3c to add each condition. A group of conditions must have more than one condition. Select the logical operation to apply to the group of conditions: AND, OR, NOT AND, or NOT OR.
To remove a condition, click the Remove Condition icon.
To view the list of conditions in the form of a statement, click Show Condition Summary.
-
-
Under Actions, select from the already available Oracle-defined or user created labels. If required, you can create a new label by clicking Create Label.
Select the Enabled check box.
-
Click Add.
Oracle Logging Analytics enables you to search for the labels that you’re looking for in the Log Explorer. You can search based on any of the parameters defined for the labels. Enter the search string in the Search field. You can specify the search criteria in the search dialog box. Under Creation Type, select if the labels that you’re looking for are Oracle-defined or user-defined. Under the fields Input Field, Operator, and Output Field, you can select from the options available. You can also specify the condition value or the output value that can be used for the search. Click Apply Filters.
You can now search log data based on the labels that you’ve created. See Filter Logs by Labels.
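The conditional-label evaluation described above can be sketched as follows; the operator set, field names, and label are illustrative, not the service's actual condition engine.

```python
import operator

# Each condition names an input field, an operator, and a condition
# value; a logical operation (here AND/OR) combines the results.
OPS = {"=": operator.eq, "!=": operator.ne,
       "contains": lambda field, value: value in field}

def evaluate(record, conditions, logic="AND"):
    results = [OPS[op](record.get(field, ""), value)
               for field, op, value in conditions]
    return all(results) if logic == "AND" else any(results)

record = {"Severity": "ERROR", "Message": "connection refused"}
conditions = [("Severity", "=", "ERROR"),
              ("Message", "contains", "refused")]
# When the combined condition matches, the label is attached to the entry.
labels = ["Connection Error"] if evaluate(record, conditions, "AND") else []
print(labels)
```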
Use the Conditional Fields to Enrich the Data Set
Optionally, if you want to select an arbitrary field and write a value to it, you can use conditional fields. Populating a value in an arbitrary field using conditional fields is very similar to using Lookups. However, conditional fields provide more flexibility in your matching conditions and are ideal when dealing with a small number of condition-to-field population definitions. For example, if you have only a few conditions that populate a field, then you can avoid creating and managing a lookup by using conditional fields.
The steps to add the conditional fields are similar to those in the workflow above for adding conditional labels.
-
In the step 3, instead of clicking Add conditional label, click Add conditional field. The rest of the step 3 to select the conditions remains the same as the above workflow.
-
In step 4 above,
-
For the Output Field, select from the already available Oracle-defined or user created fields from the menu. If required, you can create a new field by clicking Create New Field.
-
Enter an Output Value to write to the output field when the input condition is true.
For example, the source can be configured to attach the authentication.login output value to the Security Category output field when the log record contains the input field Method set to the value CONNECT.
-
Select the Enabled check box.
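The CONNECT example above amounts to the following sketch; the field names and values follow the example, and the function is a local illustration rather than the service's implementation.

```python
def apply_conditional_field(record: dict) -> dict:
    # Condition: input field Method equals CONNECT.
    if record.get("Method") == "CONNECT":
        # Action: write the output value into the output field.
        record["Security Category"] = "authentication.login"
    return record

print(apply_conditional_field({"Method": "CONNECT", "URI": "/proxy"}))
```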
Use the Automatic Time Parser
Oracle Logging Analytics lets you configure your source to use a generic parser instead of creating a parser for your logs. When doing this, your logs will only have the log time parsed from the log entries if the time can be identified by Oracle Logging Analytics.
This is particularly helpful when you’re not sure about how to parse your logs or how to write regular expressions to parse your logs, and you just want to pass the raw log data to perform analysis. Typically, a parser defines how the fields are extracted from a log entry for a given type of log file. However, the generic parser in Oracle Logging Analytics can:
-
Detect the time stamp and the time zone from log entries.
-
Create a time stamp using the current time if the log entries don’t have any time stamp.
-
Detect whether the log entries are multiple lined or single lined.
-
Time stamp:
-
When a log entry doesn’t have a time stamp, then the generic parser creates and displays the time stamp based on the time when the log data was collected.
-
When a log record contains a time stamp, but the time zone isn’t defined, then the generic parser uses the management agent’s time zone.
When using Management Agent, if the timezone is not detected properly, then you can manually set the timezone in the agent configuration files. See Manually Specify Time Zone and Character Encoding for Files.
When uploading logs using on-demand upload, you can specify the timezone along with your upload to force the timezone if we cannot properly detect it. If you're using CLI, see Command Line Reference: Logging Analytics - Upload. If you're using REST API, then see Logging Analytics API - Upload.
-
When a log file has log records with multiple time zones, the generic parser can support up to 11 time zones.
-
When a log file displays some log entries with a time zone and some without, then the generic parser uses the previously found timezone for the ones missing a timezone.
-
When you ingest logs using the management agent, if the time zone or the time zone offset is not indicated in the log records, then Oracle Logging Analytics compares the file's last modified time from the OS with the timestamp of the last log entry to determine the proper time zone.
-
-
Multiple lines: When a log entry spans multiple lines, the generic parser captures the multiline content correctly.
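The timestamp fallback behavior described above can be sketched as follows; a single format probe stands in for the many formats the generic parser actually recognizes.

```python
from datetime import datetime, timezone

def entry_time(entry: str) -> datetime:
    # Probe for an ISO-like timestamp at the start of the entry.
    try:
        return datetime.strptime(entry[:19], "%Y-%m-%d %H:%M:%S")
    except ValueError:
        # No recognizable timestamp: stamp with the collection time.
        return datetime.now(timezone.utc)

print(entry_time("2017-02-07 05:00:03 service started"))
```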