Channel: Questions in topic: "props.conf"

Can props.conf and transforms.conf be created in the Deployment Server GUI to push to indexers?

Do props.conf and transforms.conf need to be created in $SPLUNK_HOME/etc/deployment-apps/YOURAPP/local, or can they be created in the deployment server GUI and pushed to indexers?
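In case a concrete layout helps, here is a minimal sketch of the file-based approach; the server class name indexer_group and the whitelist pattern are placeholders, not anything from the original post:

# $SPLUNK_HOME/etc/deployment-apps/YOURAPP/local/props.conf and transforms.conf
# hold the actual settings; serverclass.conf on the deployment server maps the
# app to the indexers.
[serverClass:indexer_group]
whitelist.0 = idx*.example.com

[serverClass:indexer_group:app:YOURAPP]
restartSplunkd = true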

Why is WebLogic server.out parsing not working?

Hi all, I have a Splunk installation here with lots of Oracle WebLogic logging. Everything except the *server.out files is working great, but whatever I do I cannot get it to work properly for those files. My current props.conf is:

LINE_BREAKER=([\n\r]+)<\w{3}\s\d{1,2},\s\d{4}\s\d{1,2}:\d{1,2}:\d{1,2}\s\w{2}\s\w+>
MAX_TIMESTAMP_LOOKAHEAD=30
SHOULD_LINEMERGE=false
TIME_FORMAT=%b %d, %Y %I:%M:%S %P %Z
TIME_PREFIX=^<
TRUNCATE=999999

This works fine when I manually use the data upload via the web UI, but it is not working from the UF. Any advice on this? How does WebLogic write data to this file? My next step is looking at time_before_close, but that is a last resort.
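One detail worth spelling out: when events arrive from a universal forwarder, line-breaking and timestamp settings are applied on the first parsing instance (indexer or heavy forwarder), not on the UF. A minimal sketch of where the same stanza would live in that case; the sourcetype name weblogic:server_out is a placeholder and would have to match whatever inputs.conf assigns:

# props.conf on the indexer / heavy forwarder, not the UF
[weblogic:server_out]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\n\r]+)<\w{3}\s\d{1,2},\s\d{4}\s\d{1,2}:\d{1,2}:\d{1,2}\s\w{2}\s\w+>
TIME_PREFIX = ^<
TIME_FORMAT = %b %d, %Y %I:%M:%S %P %Z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 999999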

Why are my configurations not working even after a reboot?

The log files I'm working with use the log4j syntax, and I'm loading them into Splunk through the GUI (not real-time monitoring), so I don't need to update the inputs.conf file. I have customized the following configuration files:

indexes.conf:

[index_infodebug]
homePath = $SPLUNK_DB/$_index_infodebug/db
coldPath = $SPLUNK_DB/$_index_infodebug/colddb
thawedPath = $SPLUNK_DB/$_index_infodebug/thaweddb
frozenTimePeriodInSecs = 2628000 #1 month #logs to be erased

[index_testconf]
homePath = $SPLUNK_DB/$_index_testconf/db
coldPath = $SPLUNK_DB/$_index_testconf/colddb
thawedPath = $SPLUNK_DB/$_index_testconf/thaweddb
frozenTimePeriodInSecs = 2628000 #1 month
coldToFrozenDir = my/archive/directory #logs to be retained

transforms.conf:

[infodebug_logs]
REGEX = \d{3}\s*(INFO|DEBUG)\s*[[]
DEST_KEY = _MetaData:Index
FORMAT = index_infodebug

[short_source]
SOURCE_KEY = Metadata:Source
REGEX = Windchill_\d{4}-\d\d-\d\d_\d+_\d+\.tgz:\.\/Windchill_\d{4}-\d\d-\d\d_\d+_\d+\/(?[0-9a-zA-Z._-]+log)   (ignore the characters that were rendered in italics)
DEST_KEY = MetaData:Source

props.conf:

[testconf_sourcetype]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE = \d\d?d\d:\d\d
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRUNCATE = 10000
category = Application
description = Output produced by any Java 2 Enterprise Edition (J2EE) application server using log4j
detect_trailing_nulls = false
maxDist = 75
pulldown_type = true
TRANSFORMS-index = infodebug_logs
TRANSFORMS-source = short_source

Both regexes are working: the first routes INFO and DEBUG events to the appropriate index, which is configured to erase them after one month (while other logs are archived); the second extracts more readable source names. I've tested them with the REGEX command, so I know they fit my data. After restarting the Splunk server, I loaded my data into Splunk. My problem is that NEITHER of the transforms NOR the archiving part is working. I've tried with 60 seconds for the test and nothing happened. The events are only parsed the right way, as I specified in props.conf. I would be glad if someone could help me with these issues, thanks!
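For reference, here is a sketch of the source-rewrite transform with the metadata keys written the way the Splunk documentation spells them (MetaData:Source, capital D) and with an explicit FORMAT; the capture-group name short_src is a placeholder, since the original name was lost to the forum's italics formatting, and this is only a sketch of the expected shape, not a confirmed fix:

[short_source]
SOURCE_KEY = MetaData:Source
REGEX = Windchill_\d{4}-\d\d-\d\d_\d+_\d+\.tgz:\.\/Windchill_\d{4}-\d\d-\d\d_\d+_\d+\/(?<short_src>[0-9a-zA-Z._-]+log)
DEST_KEY = MetaData:Source
FORMAT = source::$1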

Conditional execution of transforms based on property of event

Does anyone know of a way to control execution of transforms based on a non-metadata property of an event? I have a collection of 100+ regular expressions which extract values using strongly typed field names within Windows security logs. I don't want Splunk to have to execute 100+ regular expressions against every event. Ideally, I would like to control Splunk's behavior such that only certain extraction rules are executed against events having certain patterns (in my case, EventCodes) within the sourcetype. Is there a way to do this? Here is an example of the sort of logic I would like to be able to apply.

props.conf:

[source::WinEventLog:Security]
REPORT-wineventlog_security_subject_extractions_1 = if(match(EventCode,(4624|4624),"wineventlog_security_subject_extractions_1",noop)

transforms.conf:

[wineventlog_security_subject_extractions_1]
SOURCE_KEY = _raw
REGEX = Subject :\s+Security ID:\s+(.*)\s+Account Name:\s+(.*)\s+Account Domain:\s+(.*)\s+Logon ID:\s+(.*)
FORMAT = Subject_SID::$1 Subject_Account_Name::$2 Subject_Account_Domain::$3 Subject_Logon_ID::$4
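One pattern that is sometimes used instead of conditional REPORT logic (a sketch only, and an assumption about the raw event text rather than a claim about REPORT syntax): fold the EventCode test into the extraction regex itself, so events without the relevant code fail the match almost immediately:

[wineventlog_security_subject_extractions_4624]
SOURCE_KEY = _raw
REGEX = EventCode=4624[\s\S]*?Subject :\s+Security ID:\s+(.*)\s+Account Name:\s+(.*)\s+Account Domain:\s+(.*)\s+Logon ID:\s+(.*)
FORMAT = Subject_SID::$1 Subject_Account_Name::$2 Subject_Account_Domain::$3 Subject_Logon_ID::$4

The EventCode=... prefix here assumes the code appears in that key=value form in _raw; it would need to match the actual WinEventLog format in use.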

How to edit my props and transforms to filter out certain phrases in ASA logs?

Hi, I've read a few articles on filtering data inputs. Basically I have a noisy ASA that I'm logging, and I want to discard anything with these phrases in it. They are eating up my licensing quota and I don't need them:

Built outbound TCP connection
Teardown TCP connection
Teardown UDP connection
Built dynamic TCP translation
Teardown dynamic TCP translation

I have the following set up in my configuration files, but it's not working. Can someone help me out? I'm not good at regex :/

transforms.conf:

[geo_us_states]
external_type = geo
filename = geo_us_states.kmz

[geo_countries]
external_type = geo
filename = geo_countries.kmz

[geo_attr_us_states]
filename = geo_attr_us_states.csv

[geo_attr_countries]
filename = geo_attr_countries.csv

[discard_events]
REGEX = Built outbound TCP connection
DEST_KEY = queue
FORMAT = nullQueue

[discard_events]
REGEX = Teardown dynamic TCP translation
DEST_KEY = queue
FORMAT = nullQueue

[discard_events]
REGEX = Teardown TCP connection
DEST_KEY = queue
FORMAT = nullQueue

[discard_events]
REGEX = Teardown UDP connection
DEST_KEY = queue
FORMAT = nullQueue

[discard_events]
REGEX = Built dynamic TCP translation
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

# Version 7.1.1
[sendmail]
FIELDALIAS-msgid = mid AS msgid
EXTRACT-qid = sendmail[^\]]*\]:\s+(?[^:]*):

[source::/var/log/foo]
# Transforms must be applied in this order
# to make sure events are dropped on the
# floor prior to making their way to the
# index processor
TRANSFORMS-set = discard_events
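For what it's worth, stanza names in transforms.conf must be unique (when a name is repeated, only one definition survives), so one common way to sketch this is a single transform with the phrases joined by alternation, attached to the stanza that actually matches the ASA events. The sourcetype cisco:asa below is an assumption; use whatever sourcetype or source:: path the ASA data really arrives with:

transforms.conf:

[discard_asa_noise]
REGEX = Built outbound TCP connection|Teardown TCP connection|Teardown UDP connection|Built dynamic TCP translation|Teardown dynamic TCP translation
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[cisco:asa]
TRANSFORMS-set = discard_asa_noise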

Unable to drop event

I'm trying to drop some failed messages from an imported txt file, but they continue to show up in the sourcetype. The regex works on regex101.com and looks OK against the Splunk docs, but the events won't be dropped. Can somebody check my syntax please? Below are copies of transforms.conf and props.conf.

[splunk@linux1 local]$ cat props.conf
[sslah]
LINE_BREAKER = \d(4)-\d(2)-\d(2)\s\d(2):\d(2):\d(2)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
SHOULD_LINEMERGE = False
MAX_TIMESTAMP_LOOKAHEAD = 19
TRUNCATE = 2000
TRANSFORMS-null = setsslah

[splunk@linux1 local]$ cat transforms.conf
[setsslah]
REGEX = \d+.\d+.\d+.\d+\sFAILED
DEST_KEY = queue
FORMAT = nullQueue
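A couple of things stand out in the regexes: the (4) and (2) groups in LINE_BREAKER look like they were meant to be the repetition counts {4} and {2}, LINE_BREAKER needs a capture group around the text to discard between events, and the unescaped dots in the IP pattern match any character. A sketch under the assumption that each event really starts with a yyyy-mm-dd HH:MM:SS timestamp:

[sslah]
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
SHOULD_LINEMERGE = False
MAX_TIMESTAMP_LOOKAHEAD = 19
TRUNCATE = 2000
TRANSFORMS-null = setsslah

[setsslah]
REGEX = \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\sFAILED
DEST_KEY = queue
FORMAT = nullQueue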

Using regex, how to exclude any events in the host field and keep the rest?

I have one big syslog file that I need to index (monitor) daily. Many hosts log to this syslog file. I want to exclude any events that contain 'server1' in the host field, and keep the rest. On the receiving indexer, the following is in /opt/splunk/etc/system/local:

**props.conf:**

[source::/syslog/Security/*.log]
TRANSFORMS-set = setnull, setparsing

**transforms.conf:**

[setnull]
REGEX = server1
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue

Where might I have gone wrong? This does not seem to work.
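One detail that may matter: by default a transform's REGEX runs against _raw, not against the host field. To match the host metadata specifically, SOURCE_KEY can point the transform at that key. A sketch, assuming the host value really contains "server1" by the time the transform runs and that the [source::/syslog/Security/*.log] stanza matches the monitored path:

[setnull]
SOURCE_KEY = MetaData:Host
REGEX = server1
DEST_KEY = queue
FORMAT = nullQueue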

Change Index and Sourcetype

I have a set of data where I want to send events with a 404 error code to a different index, and, after processing the records, I also want to set a final, different sourcetype. Neither is working. Please advise...

props.conf:

[weblogs]
SHOULD_LINEMERGE = false
LINE_BREAKER = (&&&)(?=\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b)
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
TRANSFORMS-1 = notfound
TRANSFORMS-2 = setsourcetype
disabled = false

transforms.conf:

[notfound]
REGEX = 404
DEST_KEY = _MetaData:Index
FORMAT = notfoundindex

[setsourcetype]
SOURCE_KEY = _raw
REGEX = ^.
DEST_KEY = Metadata:Sourcetype
FORMAT = sourcetype::access_combined
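Two things worth checking against the docs: the sourcetype metadata key is normally spelled MetaData:Sourcetype (capital D), and REGEX = 404 will match "404" anywhere in the raw event, not just in the status field. A sketch that anchors on a space-delimited status code after the quoted request string; this assumes a combined-access-style layout, which the post doesn't confirm:

[notfound]
REGEX = "\s404\s
DEST_KEY = _MetaData:Index
FORMAT = notfoundindex

[setsourcetype]
SOURCE_KEY = _raw
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::access_combined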

How to override the props source?

I have the below configured, but the source is not being overwritten. I am trying to wildcard anything after Windows in the path.

props.conf:

[source::/var/log/Windows/*]
TRANSFORMS-changesource=changesource

transforms.conf:

[changesource]
FORMAT= source::WinEventLog:Security
DEST_KEY=MetaData:Source

I have it deployed to my heavy forwarder, and it's not working there. I am also curious whether I can deploy something like this to a Universal Forwarder once I am sure the syntax is correct?
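One thing that stands out: an index-time transform needs a REGEX to decide which events it applies to, and the usual idiom for an unconditional rewrite is REGEX = . (Also, index-time transforms run on parsing instances such as heavy forwarders and indexers, not on universal forwarders.) A sketch of the transform with that added:

[changesource]
REGEX = .
DEST_KEY = MetaData:Source
FORMAT = source::WinEventLog:Security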

TIMESTAMP_FIELDS setting in props.conf is not taken into account

I have the issue that the TIMESTAMP_FIELDS setting in props.conf on the Universal Forwarder is not taken into account. It seems like the field _time is filled in with the time the line is being indexed and not taken from the log line itself.

**Splunk Enterprise:** VERSION=6.6.3 BUILD=e21ee54bc796 PRODUCT=splunk PLATFORM=Linux-x86_64

**Splunk Universal Forwarder:** VERSION=6.6.3 BUILD=e21ee54bc796 PRODUCT=splunk PLATFORM=Linux-x86_64

**Log line example:**

{"Application":"CNIP","CallStatus":"OK","CallType":"TERM-RP","Called":"xxxxxxxxx","Calling":"xxxxxxxxx","Clir":"false","DelayTime":"161","Error":"","ErrorBy":"","ErrorSeverity":"","Name":"xxxxxxxxx","NameBy":"DisDB","OverwriteCli":"","Protocol":"SIPPROXY","SessionId":"xxxxxxxxx","StartTime":"2018-06-20T08:36:00Z","StopTime":"2018-06-20T08:36:00Z","logLevel":1}

**How it is seen on Splunk:**

![alt text][1]

[1]: /storage/temp/252009-2018-06-20-10-59-20-search-splunk-663.png

As you can see, the times are not taken from the "StartTime" field in the log line. Here is the config on the forwarder:

**inputs.conf**

[monitor:///locationOnServer/LogFile]
index=csdp_prod_services
source=CNIPService
sourcetype=CnipCallLog.log
ignoreOlderThan=1d

**props.conf**

[CNIPService]
SHOULD_LINEMERGE=false
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Structured
disabled=false
TIMESTAMP_FIELDS=StartTime
TZ = UTC #I tried with and without this field, same behavior
TIME_FORMAT=%FT%TZ #I tried with and without this field, same behavior

What am I missing here to make this work? I want the _time field to be filled in based on the "StartTime" field in the log lines.
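One possible cause to rule out (an assumption, not a confirmed diagnosis): a bare stanza name in props.conf is matched against the sourcetype, and inputs.conf assigns sourcetype=CnipCallLog.log while the props stanza is named after the source (CNIPService), so the settings may simply never apply. A sketch keyed by the sourcetype instead:

[CnipCallLog.log]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
category = Structured
TIMESTAMP_FIELDS = StartTime
TIME_FORMAT = %FT%TZ
TZ = UTC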

Applying different extractions to the same source from different hosts

I have two groups of servers that are both running haproxy, and the logs are in the same location (e.g. /var/log/haproxy.log). The log format is different for each of them. I'd like to specify to apply one REPORT transform to haproxy logs coming from \*.foo.com and a different REPORT transform to haproxy logs coming from \*.bar.com (search-time field extractions). I can't change the sourcetype name itself, it's expected to be 'haproxy' in both cases. I can't determine any easy way to have a props.conf stanza that matches both a source AND a host, but it seems wasteful and slow to run two giant regular expressions for every line when only one needs to be applied. Are there any config tricks that can be used here?
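One option worth sketching (assuming the forwarding hosts really resolve to \*.foo.com and \*.bar.com): props.conf stanzas can also be keyed by host, and a host-keyed REPORT setting layers on top of the source/sourcetype settings, so each group only runs its own extraction. The transform names below are placeholders for the two existing REPORT transforms:

[host::*.foo.com]
REPORT-haproxy_fields = haproxy_foo_extractions

[host::*.bar.com]
REPORT-haproxy_fields = haproxy_bar_extractions

Using the same class name (REPORT-haproxy_fields) in both stanzas means that for any given event only the matching host's extraction is applied.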

How to use spath command in props.conf or transforms.conf in Splunk?

Hello, I used the spath command to extract fields from JSON data: {"key":"value", "key":"value", "key":"value", "key":"value", "key":"value" ...} and I got the results that I wanted (.. | spath input = json). My question is: how can I get the same results in props.conf or transforms.conf? Thank you for your answers. M&A
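spath itself is a search command, but automatic JSON extraction can be configured in props.conf. A minimal sketch, where the sourcetype name my_json is a placeholder:

# search-time extraction (props.conf on the search head)
[my_json]
KV_MODE = json

# or index-time extraction (props.conf on the instance that reads the file)
# [my_json]
# INDEXED_EXTRACTIONS = json
# KV_MODE = none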

How would I filter out fields via Props.conf?

I am forwarding Windows events from Graylog to a UF, and then from the UF to an indexer. I have a props.conf to create field aliases from the Graylog fields. Once I have these, I want to eliminate the Graylog fields from being indexed. Here is props.conf:

FIELDALIAS-winlogbeat_as_host = winlogbeat_fields_collector_node_id as host
FIELDALIAS-winlogbeat_as_eventid = winlogbeat_event_id as EventCode
FIELDALIAS-winlogbeat_as_processname = winlogbeat_event_data_ProcessName as Process_Name
FIELDALIAS-winlogbeat_as_logonid = winlogbeat_event_data_TargetLogonId as Logon_ID
FIELDALIAS-winlogbeat_as_user = winlogbeat_event_data_TargetUserName as user
FIELDALIAS-winlogbeat_as_src_user = user as src_user
FIELDALIAS-winlogbeat_as_action = winlogbeat_keywords as action
FIELDALIAS-winlogbeat_as_security_id = winlogbeat_event_data_TargetUserSid as Security_ID
FIELDALIAS-winlogbeat_as_account_domain = winlogbeat_event_data_TargetDomainName as Account_Domain
FIELDALIAS-winlogbeat_as_logontype = winlogbeat_event_data_LogonType as Logon_Type
FIELDALIAS-winlogbeat_as_srcip = winlogbeat_event_data_IpAddress as src_ip
FIELDALIAS-winlogbeat_as_src = winlogbeat_computer_name as src
FIELDALIAS-winlogbeat_as_destip = src_ip as dest_ip

How would I eliminate the winlogbeat fields from being indexed? Thanks!
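Worth noting: FIELDALIAS is a search-time operation, so the winlogbeat_ keys are still part of _raw when the event is indexed; removing them at index time means rewriting _raw on a parsing instance, for example with a SEDCMD. A sketch only, assuming the Graylog output is flat key=value text (which the post doesn't confirm; if it is JSON, the expression would need to be different), and the sourcetype name is a placeholder:

[your_graylog_sourcetype]
SEDCMD-drop_winlogbeat = s/winlogbeat_\S+=\S+\s*//g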

MAX_EVENTS in props.conf not working

Hi everyone, we have the following Splunk configuration:

- Splunk Cloud instance (managed)
- Universal Forwarder
- Monitored log

We need to index event logs with more than 256 lines. props.conf (located at SplunkUniversalForwarder\etc\system\local) has the following configuration:

[esb]
disabled = false
TRUNCATE = 0
LINE_BREAKER = ^.{4}-.{2}-.{2}\s.*
SHOULD_LINEMERGE = true
MAX_EVENTS = 100000

At search time, events appear truncated at a maximum of 257 lines, so I suppose the MAX_EVENTS setting in props.conf isn't working. How can I solve this issue? Thanks
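Two things worth checking: line-merging settings such as SHOULD_LINEMERGE and MAX_EVENTS are applied on the parsing tier (the Splunk Cloud indexers or a heavy forwarder), not on a universal forwarder, and LINE_BREAKER expects a capturing group around the text to discard between events. A sketch of a stanza that would live on the parsing tier; the timestamp shape is an assumption based on the posted LINE_BREAKER:

[esb]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s)
TRUNCATE = 0
MAX_EVENTS = 100000

With SHOULD_LINEMERGE = false and a LINE_BREAKER that only splits before a new timestamp, MAX_EVENTS (which caps the number of merged lines) should stop being the limiting factor.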

Unable to divert events at index time to different sourcetypes from single source

The log contains a hodgepodge of different logger events. Using transforms and props, I am still unable to assign sourcetypes to the events based on a simple regex. This is just to test the ability to do this, but nothing I have tried is working. Everything is going into the default declared augustus sourcetype. This is all being run on a standalone Enterprise test machine.

**props.conf**

[sourcetype::augustus]
TRANSFORMS-change_sourcetype = mapped_events

[sourcetype::augustus_mapped]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1

**transforms.conf**

[mapped_events]
REGEX = :\s
FORMAT = sourcetype::augustus_mapped
DEST_KEY = MetaData:Sourcetype
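One likely culprit: props.conf refers to a sourcetype by its bare name in the stanza header (the sourcetype:: prefix belongs in FORMAT values, not in props stanza names), so the [sourcetype::augustus] stanza may never match. A sketch of the same test with plain stanza names:

[augustus]
TRANSFORMS-change_sourcetype = mapped_events

[augustus_mapped]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1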

Logs from Microsoft Azure Active Directory Reporting Add-on for Splunk are in Chinese — how can I fix the encoding?

Logs from the Microsoft Azure Active Directory Reporting Add-on for Splunk are showing up in Chinese. I tried re-encoding them by setting the CHARSET value to AUTO as well as GB18030 in props.conf, but the issue remains. Any thoughts on what might resolve this?

[ms:aad:signin]
CHARSET=GB18030

[ms:aad:audit]
CHARSET=GB18030

How to LINEMERGE sourcetype="mscs:storage:blob" for Azure blob storage files?

I have installed the Splunk_TA_microsoft-cloudservices app on a heavy forwarder which routes to multiple indexers in a "distributed search peer" configuration. The app is working and I can find the events in my search head, for example: `index=azure sourcetype = mscs:storage:blob`. The files that are sent to the blob storage are in JSON format. The problem is that Splunk is not parsing it correctly. I believe I need to add SHOULD_LINEMERGE somewhere. I was thinking that I need to add it to props.conf on the forwarder in /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local. When I look in .../default I can see a props.conf, but there is no stanza for `sourcetype = mscs:storage:blob`. Do I create a props.conf in .../local and enter a stanza for `sourcetype = mscs:storage:blob` and add SHOULD_LINEMERGE = true? Ideally I want the breaks to occur on the timestamps. Please advise. Thank you
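A sketch of what such a local stanza could look like, assuming each record in the blob starts with a "time" key (which the JSON in the follow-up question below suggests, but which should be verified against the actual files):

# /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local/props.conf
[mscs:storage:blob]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n,]+)(?=\s*\{?\s*"time")
TRUNCATE = 0
TIME_PREFIX = "time"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json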

Why won't Splunk parse my multi-line event properly?

I am currently unable to parse my multi-line event properly using Splunk. Here is an example from the start of the event (the XML tags have been stripped by the forum rendering, so only the element contents are visible): {31B2F340-016D-11D2-945F-00C04FB984F9}options-it.comDefault Domain Policytrue2002-09-17T07:41:342018-05-03T13:58:322018-07-09T04:00:36.6876121ZO:DAG:DAD:PAI(OA;CI;CR;edacfd8f-ffb3-11d1-b41d-00a0c968f939;;AU)(A;;CCDCLCSWRPWPDTLOSDRCWDWO;;;DA)(A;CI;LCRPRC;;;S-1-5-21-1060284298-1275210071-1417001333-95787)(A;CI;LCRPRC;;;S-1-5-21-1060284298-1275210071-1417001333-12472)(A;CI;CCDCLCRPWPSDRCWDWO;;;S-1-5-21-1060284298-1275210071-1417001333-95786)(A;CI;CCDCLCRPWPSDRCWDWO;;;S-1-5-21-1060284298-1275210071-1417001333-22697)(A;CI;CCDCLCSWRPWPDTLOSDRCWDWO;;;DA)(A;CI;CCDCLCSWRPWPDTLOSDRCWDWO;;;S-1-5-21-1060284298-1275210071-1417001333-519)(A;;LCRPLORC;;;ED)(A;CI;LCRPLORC;;;AU)(A;CI;CCDCLCSWRPWPDTLOSDRCWDWO;;;SY)(A;CIIO;CCDCLCSWRPWPDTLOSDRCWDWO;;;CO)S:AI(AU;CIIDSA;CCDCSWWPDTLOCRSDWDWO;;;WD)(AU;CIIDFA;CCDCSWWPDTCRSDWDWO;;;WD)S-1-5-21-1060284298-1275210071-1417001333-512OPTIONS-IT\Domain Admins

I am trying to get it to split the events properly, where each event starts with the `<GPO xmlns:xsd=...>` line (that tag was also stripped from the post). These are the props settings I'm trying:

BREAK_ONLY_BEFORE=.+GPO\sxmlns:xsd.+
CHARSET=UTF-16LE
SHOULD_LINEMERGE=false
disabled=false
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
TIME_PREFIX=.+
MAX_TIMESTAMP=18
LINE_BREAKER=.+GPO\sxmlns:xsd.+
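For comparison, a minimal sketch under two assumptions: that each GPO report begins on its own line with <GPO xmlns:xsd=...>, and that the intended setting was MAX_TIMESTAMP_LOOKAHEAD (the usual name for limiting how far Splunk searches for a timestamp). LINE_BREAKER needs a capturing group for the text discarded between events, so a lookahead keeps the GPO tag itself intact:

LINE_BREAKER = ([\r\n]+)(?=<GPO\s+xmlns:xsd)
SHOULD_LINEMERGE = false
CHARSET = UTF-16LE
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
# TIME_PREFIX would need to point at whichever element holds the timestamp of
# interest; the element names are not visible in the stripped example above.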

How to parse JSON (blob) data by time stamp AND clean up the trailing brackets and commas?

So I am using this stanza in my .../local props.conf:

[mscs:storage:blob]
SHOULD_LINEMERGE = true
LINE_BREAKER: (,[\r\n]+\s+){
TRUNCATE = 0
KV_MODE = json

This works fine to parse my events like this:

"time": "2018-07-09T16:14:21.3832528Z",
"resourceId": "blah",
"operationName": "blah blah",
"category": "blah",
"resultType": "blah",
"resultSignature": "blah",
"durationMs": 1234,
"callerIpAddress": "x.x.x.x",
"correlationId": "x-x-x-x-x",
"identity": {"blah blah":{"blah"}},
"level": "blah",
"location": "blah",
"properties": {"statusCode":"OK","serviceRequestId":"x-x-x"}
}
]
}

OR

"time": "2018-07-09T16:14:21.3832528Z",
"resourceId": "blah",
"operationName": "blah blah",
"category": "blah",
"resultType": "blah",
"resultSignature": "blah",
"durationMs": 1234,
"callerIpAddress": "x.x.x.x",
"correlationId": "x-x-x-x-x",
"identity": {"blah blah":{"blah"}},
"level": "blah",
"location": "blah",
"properties": {"statusCode":"OK","serviceRequestId":"x-x-x"}
}
,
{

Does anyone know how to drop the brackets and commas? Thank you
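One hedged option for the leftovers: SEDCMD rewrites _raw at index time on the parsing instance, so a pair of SEDCMDs in the same stanza could strip the two observed tails ("] }" and ", {") while leaving the record's own closing brace in place. A sketch only, assuming the junk only ever appears at the very end of each broken event:

[mscs:storage:blob]
# ...existing settings from above...
SEDCMD-drop_array_tail = s/\s*\]\s*\}\s*$//
SEDCMD-drop_record_sep = s/\s*,\s*\{\s*$//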