Channel: Questions in topic: "props.conf"
Viewing all 1485 articles

How to create a props.conf file for time format

My timestamp appears as: 2019-12-10T18:13:42-05:00

My props.conf looks like this:

```
TIME_FORMAT=%Y-%m-%dT%H:%M:%S-%:z
```

Is this correct? Some of my indexes are presenting a really strange date compared to the timestamp, but I think the devices they are coming from may be in a different TZ. My next question would be how to convert to EST.
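For what it's worth, a hedged note on the format string: in strptime notation `%:z` already consumes the sign of the UTC offset, so the literal `-` in front of it prevents the `-05:00` suffix from ever matching. A minimal sketch of a corrected stanza (the sourcetype name is a placeholder):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Because each event carries its own offset, Splunk normalizes `_time` to UTC at indexing; converting the display to EST is a per-user timezone preference (or a `TZ` override), not something done in `TIME_FORMAT`.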

Strange data comes out as field values. Same as the source path

I have a strange problem. On the search head, the value of the field created in props.conf is the same as the source path.

SH props.conf:

```
LOOKUP-center_tag_dest = ldcc_portal_asset IP AS dest OUTPUT group_id AS tag_dest
```

result ![alt text][1] Normal values also appear, but some events show this problem. I want to know the cause and how to solve it. [1]: /storage/temp/277753-캡처.png

Parsing Meraki CMX API data on props.conf

I'm trying to parse data coming from Cisco Meraki CMX using HTTP Event Collector. The raw data is JSON from the API. I'm using this props.conf:

```
[http_json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
KV_MODE = json
NO_BINARY_CHECK = 1
TIME_PREFIX = "seenTime": "
TIME_FORMAT = %Y-%m-%d
MAX_DAYS_AGO = 5000
```

but the data in Splunk is parsed into only one event. Do I need to edit transforms.conf for this kind of data, or is there a mistake in my props.conf? Thank you. ![alt text][1] This is the sample JSON raw data that I'm trying to parse:

```
{
  "version": "2.0",
  "secret": "splunkmci",
  "type": "DevicesSeen",
  "data": {
    "apMac": "00:18:0a:13:dd:b0",
    "apFloors": [],
    "apTags": ["dev", "home", "test"],
    "observations": [
      {
        "ipv4": "/192.168.0.56",
        "location": {"lat": 51.5355157, "lng": -0.06990350000000944, "unc": 0.20755340376944298, "x": [], "y": []},
        "seenTime": "2016-09-24T00:06:14Z",
        "ssid": ".interwebs",
        "os": null,
        "clientMac": "18:fe:34:d7:7c:26",
        "seenEpoch": 1474675574,
        "rssi": 56,
        "ipv6": null,
        "manufacturer": "Espressif"
      },
      {
        "ipv4": null,
        "location": {"lat": 51.5355157, "lng": -0.06990350000000944, "unc": 2.355135751488216, "x": [], "y": []},
        "seenTime": "2016-09-24T00:06:28Z",
        "ssid": null,
        "os": "Android",
        "clientMac": "98:f1:70:34:1c:3e",
        "seenEpoch": 1474675588,
        "rssi": 43,
        "ipv6": null,
        "manufacturer": "Murata Manufacturing"
      },
      { ...
```

[1]: /storage/temp/277774-annotation-2019-12-12-174609.jpg
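If the goal is one Splunk event per element of the `observations` array, here is a hedged sketch (the regex depends on the exact raw payload, and this assumes the data arrives via the HEC /raw endpoint, since the /event endpoint stores each POSTed event as-is without props-based line breaking): drop `INDEXED_EXTRACTIONS`, break between array elements, and let search-time `KV_MODE = json` attempt extraction (fragments that are no longer well-formed JSON may need a `REPORT-` regex instead):

```
[http_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,)\s*\{\s*"ipv4"
KV_MODE = json
NO_BINARY_CHECK = 1
TIME_PREFIX = "seenTime":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = UTC
MAX_DAYS_AGO = 5000
```

Separately, the original `TIME_FORMAT = %Y-%m-%d` stops at the date, so every observation in a day would get the same timestamp; the fuller pattern above parses the time as well.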

Help modifying timezone in props.conf

I need to change the timezone for a host sending logs to our production instance. I have set up a free test instance of Splunk to try this out before making any changes and have been unsuccessful. I am sending syslog via port 514 and monitoring the /var/log/auth.log file on this test instance. Can anyone offer me guidance on what I am doing wrong? My changes to props.conf are below. ![alt text][1] ![alt text][2]

/opt/splunk/etc/system/local/props.conf:

```
[host::127.0.0.1]
TZ = Americas/Los_Angeles

[host::cb-mint]
TZ = Americas/Los_Angeles
```

[1]: /storage/temp/277796-splunkexample1.png [2]: /storage/temp/277797-splunkexample2.png
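One hedged observation: the IANA timezone database has no `Americas/` region; the prefix is singular, `America`. A sketch of the corrected stanzas:

```
[host::127.0.0.1]
TZ = America/Los_Angeles

[host::cb-mint]
TZ = America/Los_Angeles
```

Also worth noting: `TZ` takes effect on the instance that first parses the event, and only when the timestamp in the event does not itself carry an explicit offset.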

JSON input not splitting into individual events

I am using an API to fetch JSON logs and sending the JSON output to Splunk. Props.conf is on the search head. I am seeing intermittent issues where the JSON logs are not split, even though I am sending the JSON objects one by one via scripting.

JSON payload:

```
{"test": "emailid", "remote": "13.17.14.2", "guide": "05773-56-C2-E9", "test1": "testing", "date": "2019-12-13T19:05:03.836+00:00", "sessionID": "abc1"}
{"remote": "13.7.4.28", "guide": "05773-56-C2-E9", "test1": "testing", "date": "2019-12-13T19:05:03.836+00:00", "sessionID": "abc1"}
```

Props.conf is:

```
INDEXED_EXTRACTIONS = JSON
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = (\{\"|\"\})
MUST_BREAK_AFTER = \"\}
```

Please guide. I tried including `SHOULD_LINEMERGE = false` but it didn't work.
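A hedged observation on placement first: `INDEXED_EXTRACTIONS` and event breaking are applied where the data is first ingested (the forwarder or indexer), so a props.conf that lives only on the search head will not affect event splitting at all. Also, `BREAK_ONLY_BEFORE` and `MUST_BREAK_AFTER` are line-merging settings that only apply when `SHOULD_LINEMERGE = true`, so they conflict with the `SHOULD_LINEMERGE = false` that was tried. A minimal sketch for one-object-per-line JSON, placed on the ingesting instance (the sourcetype name is a placeholder):

```
[my_json_sourcetype]
INDEXED_EXTRACTIONS = JSON
SHOULD_LINEMERGE = false
TIME_PREFIX = "date":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
```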

Best timestamp format

Hello guys, could you confirm which timestamp format Splunk handles best, for instance MM/DD/YYYY or YYYY/MM/DD, where the month precedes the day? Thanks.

Universal Forwarder props.conf and transforms.conf settings

I am trying to get the output from a Python script to the indexer, so I added transforms.conf and props.conf under C:\Program Files\SplunkUniversalForwarder\etc\system\local.

transforms.conf:

```
[myexternaltable]
REGEX = (.)
external_cmd = addnum.py $1
DEST_KEY = queue
FORMAT = indexQueue
```

props.conf:

```
[sitescope_daily2_log]
TRANSFORMS-runscript = myexternaltable
```

But it's not working. Can anyone please help me with the correct settings needed on the UF? Thanks, Niloo

syslog-ng props and transforms conf for ingesting data

Hi! I'm trying to ingest metric data from a Linux virtual machine using syslog-ng and the Splunk Universal Forwarder. It's for an application, so on my Windows box I'm creating the transforms and props configuration files in the /etc/apps/app_name/local directory. It's currently working for another box with rsyslog instead of syslog-ng; for some reason it isn't with syslog-ng.

transforms.conf:

```
[syslog-ng_stats]
INGEST_EVAL = metric_name=Metric

[object_extraction-ng]
DELIMS = ";"
FIELDS = Date,Hostname,Object,Id,Instance,Status,Type,Metric
WRITE_META = true

[metric-schema:extract_stats_metrics-ng]
METRIC-SCHEMA-MEASURES-ngstats = Hostname,Object,Id,Instance,Status,Type,Metric
```

props.conf:

```
[syslog-ng-ctl]
TRANSFORMS-fieldvalue = field_extraction
TRANSFORMS-metricslog = syslog-ng_stats
TRANSFORMS-object = object_extraction-ng
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_stats_metrics-ng
```

I'm pretty sure these are the reason it isn't working, but I don't know what I've done wrong. I hope I explained this properly; if you need more information, let me know. I would greatly appreciate some help on this, I'm stuck.

How to set a large log to ingest as one single event?

Been working on this for a week... hence my question now. I have a log that can be anywhere between 3,000 and 20,000+ lines. It's the output of a script that takes around 1 minute to complete. It writes to the log as the script progresses, and I want that entire log (start to finish) ingested into Splunk as a single event. You can see that it's ingesting the log as multiple events (1st event at the bottom, last event at the top). Tried many combinations of props.conf (see below for current) as well as "interval" in my inputs.conf (hence removed).

**Known log formatting:** Always starts with "Job ammolock submitted by teamA123 starting". Always ends with "+ exit" as the last line of the log file.

```
6:03:33.000 AM  + ls -l j3483458_* ... + exit                    (Show all 381 lines)
6:03:31.000 AM  + cd /aa/6/prod/something/custom/spool ...       (Show all 27 lines)
6:03:29.000 AM  + export RETURN_CODE=0 ...                       (Show all 357 lines)
6:03:23.000 AM  ***** TOTAL RECORDS READ FROM TABLE = 000120882 ***** ... (Show all 60 lines)
6:02:45.000 AM  Job ammolock submitted by teamA123 starting 2019/12/19 06:02:45 AM ... (Show all 2269 lines)
```

inputs.conf:

```
[monitor:///blah/blah/blah/blah.log]
index = blah
sourcetype = blah:logs
disabled = 0
```

props.conf:

```
[blah:logs]
MAX_EVENTS=40000
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+\+\sexit)
TIME_PREFIX=Job\s\w+\ssubmitted\sby\s\w+\sstarting\s
MAX_TIMESTAMP_LOOKAHEAD = 22
TRUNCATE=99999999
```

What am I missing, folks? Joe
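A hedged sketch of one approach (assuming a universal forwarder tailing the file and an indexer or heavy forwarder holding the props, which is where LINE_BREAKER actually fires): keep the breaker anchored on the closing `+ exit`, disable truncation, and give the input time to see the whole file before flushing, since the script writes for about a minute:

inputs.conf on the forwarder:

```
[monitor:///blah/blah/blah/blah.log]
index = blah
sourcetype = blah:logs
disabled = 0
time_before_close = 120
```

props.conf on the parsing tier:

```
[blah:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\+\sexit[\r\n]*)
TRUNCATE = 0
MAX_EVENTS = 40000
TIME_PREFIX = Job\s\w+\ssubmitted\sby\s\w+\sstarting\s
MAX_TIMESTAMP_LOOKAHEAD = 22
```

`time_before_close` is a commonly suggested knob for slow-writing logs: with its small default, the tailing processor can ship the file in several chunks that each end up as separate events.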

Regex / Transforms issue.

Hi Regexian Splunkers, I have an event that looks like so:

```
2020-02-20 20:22:02.202020 test:>"value" test1:>"value1" test2:>"this is a \"test\"" test3:>"this is \"a test\" ok" test4:>"this is \"another\" test ok?"
```

I want the fields extracted to look like so:

```
test    value
test1   value1
test2   this is a \"test\"
test3   this is \"a test\" ok
test4   this is \"another\" test ok?
```

In words... I want a field name, which can contain letters, numbers, and underscores, followed by `:>`, followed by double quotes, then the field value (which can contain escaped quotes), followed by closing double quotes. My configs look like:

props.conf:

```
[mydumbsourcetype]
TRANSFORMS-01-Pull_Colon_Angle_Events = Pull_Colon_Angle_Events
```

transforms.conf:

```
[Pull_Colon_Angle_Events]
disabled = false
SOURCE_KEY = _raw
REGEX = ([\w\_]+)\=\"((?:[^"\\]|\\.)*)\"
FORMAT = $1::$2
```

My regex works on regex101.com ( https://regex101.com/r/LUb4km/1 ), but in Splunk I am getting no fields. Any help would be lovely. ./D
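Two hedged observations on why no fields appear: the regex in the transform matches `=` between name and value while the events use `:>`, and the stanza is wired as an index-time transform (`TRANSFORMS-`) without `WRITE_META = true`, whereas a dynamic name::value extraction like this is normally a search-time `REPORT-`. A sketch of the search-time variant:

props.conf:

```
[mydumbsourcetype]
REPORT-pull_colon_angle = Pull_Colon_Angle_Events
```

transforms.conf:

```
[Pull_Colon_Angle_Events]
REGEX = (\w+):>\"((?:[^"\\]|\\.)*)\"
FORMAT = $1::$2
```

The `$1::$2` form is what lets one transform emit several differently named fields, and search-time `REPORT-` transforms repeat the match across the event automatically.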

Validating timestamp extraction after an update

Hi, I have updated all my instances by updating the datetime.xml file as described here: https://docs.splunk.com/Documentation/Splunk/8.0.0/ReleaseNotes/FixDatetimexml2020#Download_and_deploy_an_app_that_temporarily_replaces_the_defective_datetime.xml_file_with_the_fixed_file

Now I'm trying to validate the fix by following the suggested procedure, i.e.:

**1. Paste the following text into a text editor:**

```
date,message
19-12-31 23:58:44,Test Message - datetime.xml testing - override - puppet managed forced restart
20-01-02 23:58:54,Test Message - datetime.xml testing - override - puppet managed forced restart
```

**2. Save the text as a text file, for example test_file.csv, to a place that is accessible from all of your Splunk platform instances.**

**3. On the Splunk platform instance that you want to validate, adjust the MAX_DAYS_HENCE setting for the [default] stanza in the $SPLUNK_HOME/etc/system/local/props.conf configuration file:**

```
[default]
MAX_DAYS_HENCE = 40
```

**4. Restart the Splunk platform.**

**5. Using the Splunk CLI, add the text file you saved earlier as a oneshot monitor to the Splunk platform instance that you want to validate:**

```
$SPLUNK_HOME/bin/splunk add oneshot -source test_file.csv -sourcetype csv -index main
```

**6. Perform a search on the text in Step 1. The text with the two-digit "20" should have a timestamp with the correct year of 2020.**

Now I'm stuck at step 3: I do not have a props.conf file in /etc/system/local/ on any of the instances. Furthermore, I have lots of custom apps that have their own `props.conf` within their respective /apps/[appname] directory. I'm not sure how to validate the fix in this scenario. I was able to validate it on a single-instance test server by just copying `/opt/splunk/etc/system/default/props.conf` to `/opt/splunk/etc/system/local` and editing the MAX_DAYS_HENCE value, but in this production environment I'm not sure how to go about it. If I create a props.conf under `/opt/splunk/etc/system/local/`, would it override all the other `props.conf` files and break things? Any suggestions? Thanks.
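On the last question, a hedged note on precedence: Splunk merges props.conf files setting by setting, with $SPLUNK_HOME/etc/system/local taking the highest precedence, so a file there containing only the one setting overrides only that setting; everything else in the app-level props.conf files still applies. A minimal sketch:

```
[default]
# Only MAX_DAYS_HENCE is overridden; all other props.conf
# settings from apps and system defaults merge in as usual.
MAX_DAYS_HENCE = 40
```

Deleting the file (and restarting) after validation returns the instance to its previous behavior, so there is no need to copy the entire default props.conf.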

Is it better to specify TIME_FORMAT or let splunk automatically determine time format?

Hey, I am currently doing cleanup work on some of the in-house TAs built for our environment. We are getting timestamps in a more consistent way on some sources, and I was wondering what would improve indexer performance. We previously had the following time-related settings in our props.conf:

```
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

I want to remove the TIME_PREFIX and TIME_FORMAT settings and just leave MAX_TIMESTAMP_LOOKAHEAD. Does this even matter?
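A hedged note for anyone weighing the same trade-off: explicit settings generally help rather than hurt, since `TIME_PREFIX` plus `TIME_FORMAT` gives the indexer a single anchored strptime attempt per event, while removing them forces automatic format detection against many candidate patterns, bounded only by the lookahead. If the goal is instead to skip timestamp extraction entirely for sources where ingest time is acceptable, that is a separate setting:

```
[my_sourcetype]
# Sketch only - my_sourcetype is a placeholder.
# Use the current index time instead of parsing the event.
DATETIME_CONFIG = CURRENT
```

Keeping all three original settings is usually the cheapest option for the indexer.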


Regex for CIDR exclusion

Hi, I need some help getting a correct regex for CIDR exclusion. (This is an example, not the real IP range.) I am trying to exclude events that have IPs in the 79.40.96.0/22 range. The following is the regex I was able to create using https://regex101.com/. It matches when I test with IPs from the range on regex101.com; however, it doesn't seem to work when I test it against events in Splunk:

```
^79\.40\.(96\.([1-9]|[1-9]\d|[12]\d\d)|99\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-4])|9[78]\.([1-9]?\d|[12]\d\d))$
```

A sample event would start with the IP: 79.40.96.12 ....... Thanks, AKN
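A hedged guess at the discrepancy: on regex101 the pattern is tested against the bare IP, but in Splunk a transform's regex runs against the whole `_raw` event (or whatever `SOURCE_KEY` names), and since more text follows the IP the trailing `$` can never match. A sketch of a nullQueue filter with the end anchor replaced by a word boundary (illustrative stanza name; note that 79.40.96.0/22 spans 79.40.96.0 through 79.40.99.255, so the third octet is simply `9[6-9]`):

```
[drop_79_40_96_slash_22]
REGEX = ^79\.40\.9[6-9]\.\d{1,3}\b
DEST_KEY = queue
FORMAT = nullQueue
```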

Help with props and regex for index time extraction and adjustment of time zone

A typical event (which has no line breaks):

```
HOSTVULN: HOST_ID=109436564, IP="10.1.40.106", TRACKING_METHOD="AGENT", OS="Windows 10 Enterprise 64 bit Edition Version 1803", DNS="410-dt-12345-04", NETBIOS="410-DT-12345-04", LAST_SCAN_DATETIME="2020-01-09T18:06:05Z", LAST_VM_SCANNED_DATE="2020-01-09T17:59:24Z", SEVERITY=4, QID="372286", TYPE="CONFIRMED", SSL="0", STATUS="FIXED", FIRST_FOUND_DATETIME="2019-12-14T02:23:09Z", LAST_FOUND_DATETIME="2019-12-19T20:16:45Z", TIMES_FOUND="36", LAST_TEST_DATETIME="2020-01-09T17:59:24Z", LAST_UPDATE_DATETIME="2020-01-09T18:06:05Z", LAST_FIXED_DATETIME="2019-12-20T00:39:31Z", IS_IGNORED="0", IS_DISABLED="0"
```

Splunk is currently extracting the event timestamp from **LAST_SCAN_DATETIME="2020-01-09T18:06:05Z"**. I assume this is because it is the first date/time in the event. Fair enough. I have two issues to fix:

1. I would prefer Splunk to extract the timestamp from the *second* date/time, **LAST_VM_SCANNED_DATE="2020-01-09T17:59:24Z"**, so I have written a regex for props.conf to account for this, destined for the index cluster search peers.

2. All of the times in the events are GMT (my local time is Pacific), and the events are currently being indexed 8 hours "into the future". I want the events indexed with the correct time. Again, I have tried to correct for this in props.conf, destined for the index cluster search peers.

My overall problem is that, although the props.conf is successfully pushed to the index cluster search peers (via a cluster bundle), the configuration is being completely ignored by Splunk. I'm unsure whether the props.conf configuration is invalid, or it's in the wrong location, or whatever.

Here is the props.conf that is on the indexers:

```
[qualys:hostDetection]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = ^.+LAST_VM_SCANNED_DATE="
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 22
category = Custom
pulldown_type = 1
```

I was particularly concerned about the line **TIME_PREFIX = ^.+LAST_VM_SCANNED_DATE="** and whether either of the last two characters needed to be escaped with a \, but no combination I tried has worked. Advice would be much appreciated. Thank you.
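Two hedged observations on the stanza above. First, the empty `DATETIME_CONFIG =` line is worth removing: the documented values are a file path, `CURRENT`, or `NONE`, and a blank value is not the same as omitting the setting. Second, `TIME_PREFIX` is an unanchored regex, so neither the `^.+` prefix nor any escaping of `="` is needed. A sketch:

```
[qualys:hostDetection]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TIME_PREFIX = LAST_VM_SCANNED_DATE="
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 22
```

If the settings still appear to be ignored, `splunk btool props list qualys:hostDetection --debug` on an indexer shows which files actually contribute each value; an 8-hour shift despite an explicit `Z`-suffixed timestamp usually means another stanza, or a different parsing tier (e.g. a heavy forwarder upstream of the indexers), is winning.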

Forward filtered logs to indexer and full logs to third party syslog server

Hello, I am currently forwarding logs from UF to HF to IDX. What I am trying to achieve is to drop Windows events with event code 4674, for example, from being sent to the IDX, while at the same time forwarding all the logs, including events with event code 4674, from the HF to a third-party syslog server. While configuring the HF, I found:

1. There is a delay in sending the logs to the syslog server. I noticed that I received them roughly 15 minutes after the event time.

2. I configured event code 4674 to be sent to nullQueue. By doing so, it dropped the event from being sent to the IDX; however, the event is also no longer forwarded to the third-party syslog server. I need it to drop the events only from the IDX path and still send them to the syslog server.

Not sure what is wrong... hope someone can advise. Below is my outputs.conf:

```
[tcpout]
defaultGroup = primary_indexers

[syslog]
defaultGroup = EverythingtoSSB

[syslog:EverythingtoSSB]
#sendCookedData=false
server=1.2.3.4:514
type=udp
maxEventSize=5096
```

Below is props.conf:

```
[WinEventLog:Security]
TRANSFORMS-set=wineventlog-setnull,routeAlltoSSB
```

Below is transforms.conf:

```
[routeAlltoSSB]
REGEX=(.)
DEST_KEY=_SYSLOG_ROUTING
FORMAT=EverythingtoSSB

[wineventlog-setnull]
REGEX=(?m)^EventCode=(4674)
DEST_KEY = queue
FORMAT = nullQueue
```

Filter events from UF based on source + sourcetype or host

Hello, is it possible to filter events based on sourcetype + (host OR source) with props.conf/transforms.conf on the indexers? Filtering data based only on sourcetype or source could be too broad. Thanks.

Understanding the relationship between props.conf and transforms.conf

I am setting up a specific forwarder to monitor a log file that generates logs for multiple cases, but I only care about one, so I decided to filter the logs when they reach the server by utilizing the combination of nullQueue and indexQueue, in that order, to retain only the logs that matter to me. What I don't understand in props.conf is the purpose of entries like `[source::<source>] TRANSFORMS-<class> = <transform_stanza>`. What is the `<class>` in that entry? Can it be anything I want, should it describe what the transform will do, or does it have to be related to the stanzas I'll be calling? I read the "Route and filter data" page and I get the general idea; I just don't understand how to tie the configuration set in transforms.conf into props.conf.
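A hedged sketch of how the two files tie together (the stanza names and path below are arbitrary placeholders): the `<class>` after `TRANSFORMS-` is just a unique label you choose; what links props.conf to transforms.conf is the value on the right-hand side, which must match a stanza name in transforms.conf. Transforms run left to right, which is why the blanket null comes first in the classic keep-only-some-events pattern:

props.conf:

```
[source:///var/log/myapp.log]
# "filterlogs" is an arbitrary class name; the values name
# transforms.conf stanzas, evaluated in the order listed.
TRANSFORMS-filterlogs = drop_everything, keep_interesting
```

transforms.conf:

```
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_interesting]
REGEX = CASE_I_CARE_ABOUT
DEST_KEY = queue
FORMAT = indexQueue
```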

How to stop processing properties if a condition is met

Is it possible to stop processing properties in props.conf if a condition is met? I've been running a lot of tests with props.conf/transforms.conf and how the indexer performs under different conditions and settings, and I was wondering whether there is a way to stop processing the transforms assigned under a props.conf stanza once one transform has matched successfully.

Changing the sourcetype to remove spaces

I'm working on a TA to process Venafi messages brought in via REST API. When I was testing, I used the hostname in props.conf to call the transform that changes the sourcetype. I can't do that in production because the production Windows servers send logs via the UF. I tried this yesterday in test.

props.conf:

```
[source::Venafi\sTrust\sProtection\sPlatform]
TZ = US/Pacific
TRANSFORMS-venafi = venafi_sourcetype_rename
```

transforms.conf:

```
[venafi_sourcetype_rename]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::venafi_tpp
REGEX = (.)
```

According to the Splunk documentation, it is a source-matching pattern: "source::<source>, where <source> is the source, or source-matching pattern, for an event." This is what I have to work with:

```
source = Venafi Trust Protection Platform
sourcetype = Venafi TPP Log Event
```

Any ideas on how I can use source to reset the sourcetype? TIA, Joe
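A hedged sketch of one likely fix: `source::` stanza headers are matched as wildcard-style patterns, not full regular expressions, so `\s` is not interpreted as whitespace there; literal spaces in the stanza name should match the source shown above. Only the header changes:

```
[source::Venafi Trust Protection Platform]
TZ = US/Pacific
TRANSFORMS-venafi = venafi_sourcetype_rename
```

Since rewriting `MetaData:Sourcetype` is an index-time operation, this props.conf/transforms.conf pair must live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on the universal forwarder.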

