It seems that scheduler.log events all come ready-made for parsing:
04-09-2018 23:35:04.548 +0000 ERROR SavedSplunker - **savedsearch_id**="nobody;my_lookups;Unix DHCP Refresh", **message**="Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available.". No actions executed
Yet, etc/apps/search/default/props.conf insists on overwriting that, extracting EVERYTHING after "SavedSplunker - " into the 'message' field.
[scheduler]
EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<message>.+)
So, now instead of message being: **Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available**
it's expanded to:
savedsearch_id="nobody;my_lookups;Unix DHCP Refresh", **message**="Error in 'lookup' command: Lookups: The lookup table 'dhcp_lookup' does not exist or is not available.". No actions executed
So the question is, why? And why choose the same fieldname that's already been used in the event itself?
It seems that it would have been much more logical to have chosen a different fieldname than 'message'.
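(For anyone wanting to work around it, a minimal sketch of a local override, assuming the default stanza quoted above; the group name scheduler_message is hypothetical, and a local props.conf takes precedence over default for the same attribute:
# etc/apps/search/local/props.conf
[scheduler]
# same pattern as the default, but captured into a field that does not
# shadow the event's own message="..." key-value pair
EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<scheduler_message>.+)
This leaves the automatic extraction of the inline message="..." pair intact.)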
Thank you
↧
Question for Engineering - Wondering why Splunk overwrites the 'message' field in scheduler.log events.
↧
How can I get more than 10,000 lines into a single event?
I want more than 10,000 lines to be merged and shown as a single event.
[tally_nightly_prd]
SHOULD_LINEMERGE=true
NO_BINARY_CHECK=true
CHARSET=UTF-8
TRUNCATE=0
disabled=false
BREAK_ONLY_BEFORE=\*\*\*\*\*\*\*\*\*\*\*\*\snightlyProcess\sStarted
MAX_EVENTS=90000
TIME_FORMAT=%+
TIME_PREFIX=\*\*\*\*\*\*\*\*\*\*\*\*\snightlyProcess\sStarted
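For context (a hedged note, since only part of the setup is shown): with SHOULD_LINEMERGE=true, MAX_EVENTS caps how many lines get merged into one event (default 256), and TRUNCATE caps the event size in bytes (default 10000; 0 disables the cap), so those two overrides are the ones doing the heavy lifting here:
[tally_nightly_prd]
SHOULD_LINEMERGE = true
# default is 256 merged lines; raise it above the largest expected event
MAX_EVENTS = 90000
# 0 removes the default 10000-byte cap on event size
TRUNCATE = 0
These are parse-time settings, so they must live on the first heavy forwarder or indexer that handles the data, not on a universal forwarder.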
↧
↧
How to use regex to send events to NullQueue?
Hi,
How can I use a regex to send all events with fw_rule=0 from sensor=abcd-f01 to the null queue?
sample event:
rec_type=71 app_proto=Unknown client_app=Unknown client_version="" connection_id=0 dest_autonomous_system=0 dest_bytes=0 dest_ip=1.2.3.4 dest_ip_country=0 dest_mask=0 dest_pkts=0 dest_port=80 dest_tos=0 dns_query="" dns_rec_id=0 dns_resp_id=0 dns_ttl=0 event_desc="Flow Statistics" event_sec=1523461342 event_subtype=1 event_type=1003 event_usec=0 file_count=0 first_pkt_sec=1523461341 fw_policy=F01_POLICY fw_rule=0 fw_rule_action=Fastpath fw_rule_reason=N/A has_ipv6=1 http_referrer="" http_response=0 iface_egress=1.2 iface_ingress=P2.3 instance_id=0 ip_layer=0 ip_proto=TCP ips_count=0 last_pkt_sec=0 legacy_ip_address=0.0.0.0 mac_address=0:0:0:0:0:0 monitor_rule_1=N/A monitor_rule_2=N/A monitor_rule_3=N/A monitor_rule_4=N/A monitor_rule_5=N/A monitor_rule_6=N/A monitor_rule_7=N/A monitor_rule_8=0 netbios_domain="" netflow_src=00000000-0000-0000-0000-000000000000 num_ioc=0 rec_type_desc="Connection Statistics" rec_type_simple=RNA referenced_host="" sec_intel_event=No sec_intel_ip=N/A sec_zone_egress=F01_OUTSIDE sec_zone_ingress=F01_INSIDE security_context=00000000000000000000000000000000 sensor=abcd-f01 sinkhole_uuid=00000000-0000-0000-0000-000000000000 snmp_in=0 snmp_out=0 src_autonomous_system=0 src_bytes=0 src_ip=22.33.44.55 src_ip_country=unknown src_mask=0 src_pkts=0 src_port=4382 src_tos=0 ssl_actual_action=Unknown ssl_cert_fingerprint=0000000000000000000000000000000000000000 ssl_cipher_suite=TLS_NULL_WITH_NULL_NULL ssl_expected_action=Unknown ssl_flow_error=0 ssl_flow_flags=0 ssl_flow_messages=0 ssl_flow_status=Unknown ssl_policy_id=00000000000000000000000000000000 ssl_rule_id=0 ssl_server_cert_status="Not Checked" ssl_server_name="" ssl_session_id=0000000000000000000000000000000000000000000000000000000000000000 ssl_ticket_id=0000000000000000000000000000000000000000 ssl_url_category=0 ssl_version=Unknown tcp_flags=0 url="" url_category=Unknown url_reputation="Risk unknown" user="No Authentication Required" user_agent="" vlan_id=0 web_app=Unknown
I tried the below config, but it's not working. Is there anything I have to add?
props.conf
[cisco:estreamer:data]
TRANSFORMS-null = setnull
transforms.conf
[setnull]
SOURCE_KEY=_raw
REGEX = (fw_rule=0.*sensor=abcd-f01)
DEST_KEY = queue
FORMAT = nullQueue
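A hedged sketch of a tighter pattern, assuming the two fields always appear in this order and are space-delimited (stanza names kept from above):
[setnull]
# the \s guard keeps fw_rule=0 from also matching fw_rule=01, fw_rule=02, ...
REGEX = fw_rule=0\s.*sensor=abcd-f01(\s|$)
DEST_KEY = queue
FORMAT = nullQueue
Also worth checking: index-time TRANSFORMS only run on the first full Splunk Enterprise instance that parses the data (indexer or heavy forwarder); they have no effect on a universal forwarder or on data that has already been indexed.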
↧
↧
Is it possible to do a sequential double override of index and sourcetype coming from 1 source?
Previous related question:
What adverse results can occur if using an override index and override sourcetype at the same time?
I have been unable to get a double override to work. At this point I just wanted to see if anyone else has done this or if it's impossible.
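For reference, a minimal sketch of what a sequential double override might look like, with hypothetical stanza and target names, both transforms keyed off the raw event:
props.conf
[source::/path/to/source]
TRANSFORMS-overrides = override_index, override_sourcetype
transforms.conf
[override_index]
REGEX = some_pattern
DEST_KEY = _MetaData:Index
FORMAT = target_index
[override_sourcetype]
REGEX = some_pattern
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::target_sourcetype
In principle the two transforms run in the order listed and write to different metadata keys, so they should not conflict; note, however, that a parse-time sourcetype override happens too late to re-trigger props.conf rules keyed on the new sourcetype.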
Thank you
↧
REST API modular input xml data parsing
Hello Splunk,
I have a question about field extraction by specifying a sourcetype in props.conf.
I want to parse the XML data received in the response, but the data is not being parsed.
This is my development environment:
1. REST API
- uses the REST API modular input
- HTTPS call every 300 seconds
- index=main sourcetype=ex_st
- installed on a heavy forwarder
2. Cluster
- 1 search head (the heavy forwarder and indexer cluster master run on this search head)
- 2 indexers (clustered)
3. props.conf
[xmlData]
category = Custom
SHOULD_LINEMERGE = true
KV_MODE = xml
disabled = false
TRUNCATE = 0
BREAK_ONLY_BEFORE =
NO_BINARY_CHECK = true
description = XML Data Field extract
4. Received XML data sample:
(attached screenshot: sample of the received XML data, 240587-그림1.jpg)
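One thing that stands out, hedged since only part of the config is shown: the input tags the data as sourcetype=ex_st, but the props stanza is named [xmlData], so none of these settings would ever apply. A minimal sketch of a stanza keyed to the actual sourcetype:
[ex_st]
# search-time XML field extraction; this must be on the search head
KV_MODE = xml
Line-breaking settings such as TRUNCATE and BREAK_ONLY_BEFORE, by contrast, are parse-time and belong on the heavy forwarder where the modular input runs.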
↧
↧
How to use spath command in props.conf or transforms.conf?
Hello,
I have a question.
I have data in which access_log data and JSON data are mixed, and I need to extract fields from this mixed data.
Splunk automatically extracts fields from the key=value portion of the data,
so I receive an extracted JSON field (field name is "json") like this:
{"key":"value", "key":"value", "key":"value", "key":"value", "key":"value" ....(skip)}
Searching the json field returns the data, but I want the fields inside the JSON extracted so that I can do:
| fields key, key2, key3, key4 ....(skip)
Finally, I found the spath command, and it got me the results that I wanted.
I then tried to modify props.conf to extract the fields from the JSON automatically, but it is not working.
What should I do?
PS: I tried to use the eval command like this:
| eval example = spath(json, 'json.key')
And like this:
| eval example = json.key
But that is also not working.
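For reference, two hedged search-time sketches. spath can be told which field to read:
| spath input=json
or, as an eval function, where the path is relative to the field's contents (so no json. prefix):
| eval example = spath(json, "key")
Also note that KV_MODE = json in props.conf only auto-extracts when the entire event is valid JSON, which is presumably why it does not fire on a mixed access_log-plus-JSON event.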
↧
Help with Line Break for log
04/19/18 12:32:17.398524 - IQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(Hos
tLoanExists).DATETIME(4/19/2018 12:32:17 PM)~A0~BLMS~DCARD~F??????????????~HL
OAN~JID=ALL
04/19/18 12:32:17.398907 - RSIQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(H
ostLoanExists).DATETIME(4/19/2018 12:32:17 PM)~K1:SymConnect is off host
04/19/18 12:33:26.915422 - IQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(Ho
stLoanExists).DATETIME(4/19/2018 12:33:26 PM)~A0~BLMS~DCARD~F??????????????~H
LOAN~JID=ALL
04/19/18 12:33:26.930871 - RSIQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(
HostLoanExists).DATETIME(4/19/2018 12:33:26 PM)~K0~JID=01~JID=02~JID=03~JID=0
4~JID=05~JID=06~JID=07~JID=08~JID=09~JID=10~JID=80~JID=81~JID=82
04/19/18 12:33:39.309465 - IQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(Hos
tLoanExists).DATETIME(4/19/2018 12:33:39 PM)~A0~BLMS~DCARD~F??????????????~HL
OAN~JID=ALL
04/19/18 12:33:39.310146 - RSIQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(H
ostLoanExists).DATETIME(4/19/2018 12:33:39 PM)~K0~JID=01~JID=02~JID=80~JID=81
04/19/18 12:34:05.371520 - IQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(Hos
tLoanExists).DATETIME(4/19/2018 12:34:05 PM)~A0~BLMS~DCARD~F??????????????~HL
OAN~JID=ALL
04/19/18 12:34:05.372184 - RSIQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(H
ostLoanExists).DATETIME(4/19/2018 12:34:05 PM)~K0~JID=80
04/19/18 12:34:57.811784 - IQ~MSG.ACCTNUM(XXXX).FUNCTION(Inquiry).CALLER(Host
LoanExists).DATETIME(4/19/2018 12:34:57 PM)~A0~BLMS~DCARD~F??????????????~HLO
AN~JID=ALL
A new entry always begins with a date of the form 00/00/00.
What would the LINE_BREAKER be for my props?
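A hedged sketch, assuming every new event starts with a MM/DD/YY HH:MM:SS timestamp at the beginning of a line (the stanza name is hypothetical):
[your_sourcetype]
SHOULD_LINEMERGE = false
# capture group 1 is the break; an event starts where a new line opens with the date
LINE_BREAKER = ([\r\n]+)\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2}\.\d+
TIME_FORMAT = %m/%d/%y %H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 25
The wrapped continuation lines never start with a date, so they stay glued to their event.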
↧
Why am I still seeing debug logs in the Splunk heavy forwarder filtering?
I have set the following in transforms.conf and props.conf, but I still see DEBUG logs in my search. What did I miss?
transforms.conf
#Remove: DEBUG
[null_kube_DEBUG]
REGEX = (DEBUG)
DEST_KEY=queue
FORMAT=nullQueue
props.conf
#### kube ################################
[source::kube.var.log.containers.*]
TRANSFORMS-null = null_kube_DEBUG
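Two hedged things to check, since the full pipeline isn't shown: nullQueue filtering only runs where events are parsed, so if another heavy forwarder or an indexer parses the data before this one, the stanza never sees it; and [source::...] stanzas must match the source field exactly as Splunk records it (wildcards included). A quick verification search such as
index=* DEBUG | stats count by source
would confirm whether the surviving events really carry a source matching kube.var.log.containers.*.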
↧
Should props.conf match inputs.conf?
Does the props.conf file on an indexer have the same contents as the inputs.conf file on the forwarder from which it is receiving the data?
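For context, a hedged sketch of how the two files usually relate (the path and the name myapp are hypothetical): inputs.conf on the forwarder decides what to collect and how to tag it, while props.conf on the indexer decides how to parse what arrives; the only link between them is the sourcetype (or source/host) name:
# forwarder: inputs.conf
[monitor:///var/log/myapp.log]
sourcetype = myapp
# indexer: props.conf
[myapp]
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
So the contents are not the same, but the stanza names must line up.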
↧
↧
line break help with incoming logging data
I've made several attempts at fixing this incoming line data. It looks pretty straightforward, but Splunk is ingesting these lines as a single event rather than breaking them up into individual events. The data looks pretty clean, but perhaps the timestamp isn't being picked up as a separator? I need some props.conf/regex help to get this to split correctly.
Thanks in advance!!!
Joe
sourcetype=release_mcbatch <--UNIQUE
Sample Logging (should be 10 events):
13:15:24.817 INFO com.dstoutput.dp.util.DPConnection - DataBase Connection created.
13:15:24.933 INFO c.d.d.p.RelMgrDocUpdateProcessor - JOB :: no more pending relMgrDocUpdate records in RecordProcessQueue table.
13:15:24.933 INFO c.d.d.p.RelMgrDocUpdateProcessor - JOB :: release manager document update processor successfully completed.
13:15:24.935 INFO c.d.dp.processor.DPProcessor - DataBase Connection closed.
13:30:15.730 INFO c.d.d.p.RelMgrDocUpdateProcessor - JOB :: release manager document update processor started.
13:30:15.764 INFO c.d.d.p.RelMgrDocUpdateProcessor - fetching all pending relMgrDocUpdate records from RecordProcessQueue table
13:30:16.496 INFO com.dstoutput.dp.util.DPConnection - DataBase Connection created.
13:30:16.801 INFO c.d.d.p.RelMgrDocUpdateProcessor - JOB :: no more pending relMgrDocUpdate records in RecordProcessQueue table.
13:30:16.801 INFO c.d.d.p.RelMgrDocUpdateProcessor - JOB :: release manager document update processor successfully completed.
13:30:16.803 INFO c.d.dp.processor.DPProcessor - DataBase Connection closed.
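A hedged sketch, assuming each event begins with HH:MM:SS.mmm at the start of a line (since the log carries no date, Splunk will have to take the date from context such as the file's modification time):
[release_mcbatch]
SHOULD_LINEMERGE = false
# capture group 1 is the break; an event starts where a new line opens with a timestamp
LINE_BREAKER = ([\r\n]+)\d{2}:\d{2}:\d{2}\.\d{3}\s
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
As a parse-time change, this only affects newly indexed data.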
↧
Wrong time format in Microsoft DNS TA
Hi guys,
I've posted a sample DNS log with some random data:
02/05/2018 14:15:24 1264 PACKET 0000008AE9170080 UDP Rcv 0.0.0.0 7c4d Q [0001 D NOERROR] SRV (16)_kerberos-master(4)_tcp(3)RANDOM(3)CA0)
We're trying to ingest these logs into Splunk, which has been partially successful: the data is coming in, but the timestamps are parsed incorrectly.
As we are in the UK, 02/05/2018 should be the 2nd of May 2018, but it's only searchable as the 5th of February 2018. We are using the Microsoft DNS TA on a UF, sending the data via a heavy forwarder and then on to the indexers. I've tried adding the below stanza to props.conf on the UF and the heavy forwarder, but the date still shows in the American format. Anyone got any ideas on how to fix this?
[MSAD:NT6:DNS]
TIME_FORMAT = %d/%m/%Y
The default props.conf in the DNS TA doesn't have anything relating to timestamps, which is also very confusing.
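For what it's worth, a hedged sketch of a fuller stanza: the format string should cover the whole timestamp, not just the date, and it has to sit on the first full instance that parses the data, which here is the heavy forwarder, not the UF:
[MSAD:NT6:DNS]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
Timestamps are assigned at parse time, so this only corrects events indexed after the change (assuming the sourcetype really is MSAD:NT6:DNS; verify what the TA assigns).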
Any advice would be appreciated.
Cheers!
↧
Eval and rex to props.conf
Just looking for some help constructing a props file for the below search statement.
| rex max_match=10 field=violation_details "(?[^<]+)" | base64 field=cookie_value action=decode |
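A hedged sketch of the search-time equivalent, with placeholder names since the capture-group name was lost in the post above (the base64 decode has no props.conf equivalent; it comes from a custom search command):
transforms.conf
[violation_fields]
SOURCE_KEY = violation_details
REGEX = (?<violation>[^<]+)
MV_ADD = true
props.conf
[your_sourcetype]
REPORT-violations = violation_fields
With MV_ADD = true, repeated matches land in one multivalued field, which is roughly what rex max_match=10 gives at search time.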
↧
How to configure props and transforms.conf based on the rex extractions in my sample search?
I need to construct props and transforms for the below sample search.
index=blaa sourcetype=my_source | rex field=X__Edgescape "lat=(?P<lat>(.*?)),"
| rex field=X__Edgescape "long=(?P<long>(.*?)),"
| rex field=X__Edgescape "continent=(?P<continent>(.*?)),"
| rex field=X__Edgescape "country_code=(?P<country_code>(.*?)),"
| table
I constructed the below and it seems to work, but I just thought I'd take expert advice.
transforms.conf
[latitude1]
SOURCE_KEY = X__Edgescape
REGEX = lat=(?P<lat>(.*?)),
MV_ADD = true
[longitude1]
SOURCE_KEY = X__Edgescape
REGEX = long=(?P<long>(.*?)),
MV_ADD = true
props.conf
[my_source]
REPORT-fields = latitude1, longitude1
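For completeness, hedged stanzas for the remaining two fields in the same style (note that transforms.conf REGEX values are written without surrounding quotes):
[continent1]
SOURCE_KEY = X__Edgescape
REGEX = continent=(?P<continent>(.*?)),
MV_ADD = true
[country_code1]
SOURCE_KEY = X__Edgescape
REGEX = country_code=(?P<country_code>(.*?)),
MV_ADD = true
and then in props.conf, under the real sourcetype name from the search:
[my_source]
REPORT-fields = latitude1, longitude1, continent1, country_code1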
↧
↧
Keep specific events and discard the rest
How to filter sets of monitored logs with HF?
Hi,
I have a number of log files monitored by UFs and sent to auto-load-balanced HFs (which send downstream to the indexers).
I want to apply a regex filter to all the sources so that I only keep logs that contain "error" or "warn".
Example search:
index=common_index sourcetype=my_app_logs source=/temp/my_app/*
This will return a number of "source" field values where source =
/temp/my_app/logs/my_app.log
/temp/my_app/logs/my_app_gc.log
/temp/my_app/logs/my_app_user.log
/temp/my_app/logs/my_app_bootstart.log
On the UFs I have:
inputs.conf
[monitor:///temp/my_app/logs/*]
disabled = 0
index = common_index
sourcetype = my_app_logs
outputs.conf
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 0
[tcpout-server://x.x.x.1:9997]
[tcpout:default-autolb-group]
disabled = false
server = x.x.x.1:9997,y.y.y.2:9997
[tcpout-server://y.y.y.2:9997]
On the HFs x.x.x.1, and y.y.y.2 I have:
outputs.conf
[tcpout]
defaultGroup = default-autolb-group
forwardedindex.filter.disable = true
indexAndForward = 0
maxQueueSize = 7MB
[tcpout-server://i.i.i.1:9997]
[tcpout-server://i.i.i.2:9997]
[tcpout:default-autolb-group]
disabled = false
server = i.i.i.1:9997,i.i.i.2:9997
Referencing [http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest]
It says to create "queue" stanzas in props and transforms (I am inferring on the HFs); this is where I need guidance.
For example, would I just add the following stanzas to the HFs?
in props.conf
[source::/temp/my_app/logs/*] <------- this is for applying the REGEX filter to all monitored logs
TRANSFORMS-set=setnull_myApp, setparsing_myApp
in transforms.conf
[setnull_myApp]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing_myApp]
REGEX = error, warn <------ ERROR OR WARN??
DEST_KEY = queue
FORMAT = indexQueue
My other question: if I want to selectively apply filters to some (not all) source files, would I create individual source stanzas?
For example:
[source::/temp/my_app/logs/my_app.log]
and would I need to change props and transforms individually as well?
For example:
[setnull_myApp.log]
[setparsing_myApp.log]
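For what it's worth, a hedged sketch of how the regex side of this is usually written: the keep-filter needs an alternation rather than a comma list, and the null transform must come first in the TRANSFORMS- list so the keep rule wins for matching events:
[setnull_myApp]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing_myApp]
# case-insensitive "error" OR "warn"
REGEX = (?i)(error|warn)
DEST_KEY = queue
FORMAT = indexQueue
On the per-file question: yes, a narrower [source::/temp/my_app/logs/my_app.log] props stanza scopes the filter to that one file, but transforms stanza names are just labels; they do not need to contain the file name, only the TRANSFORMS- line in the matching props stanza has to reference them.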
Thank you for your guidance!
↧
props.conf hostname with hyphen
We are able to match entries in props.conf using the hostname... unless that hostname has a hyphen. Then, for whatever reason, it never matches. See example below:
[host::server1.example.com]
TRANSFORMS-server1 = route_to_idx1
### This works
[host::server-2.example.com]
TRANSFORMS-server2 = route_to_idx2
### This doesn't work
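A hedged check worth running: props host:: stanzas match against the host field exactly as Splunk recorded it at input time, which is not always the FQDN (forwarders often report a short hostname). Something like
index=* | stats count by host | search host=*server-2*
would confirm what the events actually carry; a wildcard stanza such as [host::server-2*] can also bridge naming differences while debugging.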
↧
I want to set a time zone different from the time zone in the log
If there is a log like the one below with time zone "+00:00", how can I set a different time zone?
2018/5/9 3:00:00+00:00 this is test
If I ingest this log, the system time zone of my Splunk server is `JST`, and the difference between UTC and JST is +9 hours, so normally `_time` will be `2018/5/9 12:00:00`.
But I want `splunk` to recognize this log's time zone as JST, so I configured `TZ = Asia/Tokyo` in props.conf on the `HF`.
(In my environment there are an `HF` and an `Indexer`.)
I expected that `_time` would then be `2018/5/9 3:00:00`, but `_time` didn't change.
Did I set `TZ` in wrong place?
Or if the time zone is described in the log, does it override the `TZ` setting and there is no way to avoid it?
Could someone please advise?
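For reference, a hedged sketch of one common workaround: a time zone written in the event itself normally wins over TZ, so TZ only takes effect if the timestamp parser is told not to consume the +00:00 suffix. Assuming the events really look like the sample (the stanza name is hypothetical):
[your_sourcetype]
TIME_PREFIX = ^
# no %z here, so the trailing +00:00 is ignored and TZ applies instead
TIME_FORMAT = %Y/%m/%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = Asia/Tokyo
These are parse-time settings, so they belong on the HF (the first full instance that parses the data) and only affect newly indexed events.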
↧