Channel: Questions in topic: "props.conf"

Dynamically assigning index based on event size

Hi everyone, I would like to send events to different indexes based on their size. I'm currently using props.conf and transforms.conf, but unfortunately it doesn't work as it should. My goal is that all events with a length of 2048 characters or less go into the index `normal`, and all events larger than that are stored in the index `big`. I suspect that my REGEX is wrong.

This is my props.conf:

    [big_events]
    TRUNCATE = 0
    SHOULD_LINEMERGE = TRUE
    TRANSFORMS-1 = big_index
    TRANSFORMS-2 = normal_index

And my transforms.conf:

    [big_index]
    REGEX = ^.{2049,}$
    SOURCE_KEY = _raw
    DEST_KEY = _MetaData:Index
    FORMAT = big

    [normal_index]
    SOURCE_KEY = _raw
    REGEX = ^.{1,2048}$
    DEST_KEY = _MetaData:Index
    FORMAT = normal

Currently, all events end up either in the index whose transform runs first or in main. If I list big_index first, the events are distributed randomly between big and main; the other way around, they are split between normal and main. Does anyone know why? Thank you very much.

P.S. TRUNCATE = 0 is necessary because the events are very large (>5 MB).
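One possible direction (a sketch, not a verified fix): the default regex `.` does not match newlines, and both transforms rewrite the same index key, so a single transform that only matches oversized events is often simpler. Events that don't match keep whatever index the input assigned. The stanza and index names below reuse the ones from the question; the `index = normal` default in inputs.conf is an assumption.

    # transforms.conf (sketch)
    [big_index]
    SOURCE_KEY = _raw
    # match any event longer than 2048 characters, newlines included
    REGEX = [\s\S]{2049,}
    DEST_KEY = _MetaData:Index
    FORMAT = big

    # props.conf (sketch)
    [big_events]
    TRUNCATE = 0
    TRANSFORMS-size_routing = big_index

With this layout the input provides the default index (e.g. `index = normal` in inputs.conf), and only events over 2048 characters are rerouted to `big`.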

Why is the search filter returning no events?

Hi all, I have an index with a custom field. Here is an extract of my props.conf:

    [source::/test/.../*_Splunk_Telemetry.(txt|log)]
    sourcetype = script_windows_log
    # Field extractors
    EXTRACT-script_name = ^.*\/[0-9]*_(?P<script_name>.*)\.[a-z]{3}$ in source

When I launch a generic search I get this result:

![alt text][1]

For example, for the script name Microlise_Splunk_telemetry I have more than 2500 events (I stopped the search before it finished). But if I filter on the script_name field:

![alt text][2]

there are no results. Why is that? I tried with "where" and got the same result; with double quotes and single quotes, same result. Does anybody have any idea? Thanks for your help.

[1]: /storage/temp/230770-capture-decran-2018-03-22-144642.png
[2]: /storage/temp/230771-capture-decran-2018-03-22-144901.png

How to set up TIME_FORMAT when the time and date are in separate locations?

I'm having trouble setting up TIME_FORMAT. The documentation only helps if the date and the time are together in the event. In my case the log file is generated from 00:00 to 23:59, and the date (e.g. 032318) is only in the filename. Each line in the log file has a timestamp with the time but no date. I need a way to combine the date from the filename with the time extracted from each line, so that every line is indexed with the correct date and time.

Sample filename: xxxx.020918_00004.log — here we have the date only.

The lines start like this:

    13:00:11.588 [5636.5636] ......

These are the timestamps from 00:00 to 23:59 for each day; there is no date in the file.

How do I need to define TIME_FORMAT in props.conf for this case? TIME_FORMAT = %H:%M:%S only covers the time — the date is still missing for correct indexing.

This question is about the Veritas NetBackup application and its logs: for the files in /usr/openv/netbackup/logs the date is in the log filename and the time is in the log file; for the files in /usr/openv/logs we have Unix time (date and time) in the log file, so those are no problem.

Thanks in advance, Darius

How to merge XML rows in one event through props.conf?

Hi to all, I've got a log file that contains many XML messages. A single log message is split across many rows (as you can see from the example below), but I have to merge those rows into a single Splunk event. I'm on a Splunk Enterprise cluster environment 6.6.2; these logs are provided by many universal forwarders which send them to two heavy forwarders 6.6.1 (HF), which in turn send the logs to the indexer cluster (IDX). I've tried many props.conf configurations on the HF (BREAK_ONLY_BEFORE, MUST_NOT_BREAK_AFTER, DATETIME_CONFIG, etc.), and also on the IDX, but Splunk keeps splitting the event on the tag "" whenever it finds a timestamp.

== props.conf (on HF and IDX) ==

    [my_sourcetype]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE_DATE = false
    BREAK_ONLY_BEFORE = \d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{4}
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    MAX_TIMESTAMP_LOOKAHEAD = 26
    MUST_NOT_BREAK_AFTER = \s*(http://tempuri.org/Service/tag_afalsetrue999999999ffffffffff99ffffffffffffffff999999999fffffff9,99ffffftrueffffffffffffffffffffffffff22/03/2018fffff999992018-02-20T20:31:20.097ffXMLfalse }

Have you got any ideas how to fix this behavior? Also, do I have to configure only the HF props.conf, only the IDX props.conf, or both?
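For what it's worth, a common alternative to SHOULD_LINEMERGE for multiline XML is an explicit LINE_BREAKER keyed on the timestamp that starts each message. The sketch below only reuses the timestamp pattern already shown in the question, and it would have to live on the heavy forwarders, since they are the first full Splunk instance that parses this data (cooked data is not re-parsed on the indexers).

    [my_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{4})
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    MAX_TIMESTAMP_LOOKAHEAD = 26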

Is there a way to replace _raw at search time with EXTRACT or REPORT?

Hi all, currently on 6.5.2, but hopefully upgrading to 7.x in the next few months. I have some data that is basically a syslog-style prefix followed by a JSON payload (see the sample event below). I'm wondering if there is a way that I can replace _raw with just the JSON at search time. I know I can do it with EVAL/replace in props, but I'm hoping to do it before that. The end goal is to have the entire event be JSON by the time auto KV runs, so that Splunk will parse out all of the JSON fields.

I have no problem extracting the JSON payload, but I'm struggling to find a way to actually replace _raw with EXTRACT or REPORT (both run before auto KV). If I use EXTRACT with a named capture group of _raw, then Splunk renames it to just raw, so that doesn't help. With REPORT, DEST_KEY isn't available at search time, so I can't use that. I've tried using a named capture group there too with ***CLEAN_KEYS = false***, but nothing gets extracted in that case. I tried capturing the JSON and then using ***FORMAT = _raw::$1***, but that didn't seem to work either. Am I missing something obvious? Is this possible?

Worst case, I'll try to extract the JSON fields with REPORT, but unless the JSON is pretty flat, the extracted fields aren't the easiest to work with (parent/child relationships get lost). Or maybe I'll just do it at index time and extract the component then too. Maybe someday Splunk will be able to automatically parse JSON/XML/etc. when the structured data is only part of an event (not the entire event)?

**EDIT** Adding a sample event.

    Mar 26 13:44:57 myserver java {"userName":null,"enterpriseName":null,"type":"UPDATE","entityType":"letters","entityVersion":null,"assoicatedEvent":true,"eventReceivedTime":1522071897164,"entities":[{"children":null,"parentType":"letters","entityScope":"GLOBAL","lastUpdatedBy":"GUID","lastUpdatedDate":1522071896000,"creationDate":1518127649000,"alreadyMarkedForUnavailable":false,"unavailableTimestamp":0,"letters":[],"name":"server","location":null,"description":null,"productVersion":"version","status":"UP","lastStateChange":1521206768829,"disks":[{"children":null,"parentType":"letters","entityScope":"GLOBAL","name":"name:","unit":"GB","used":0.10546875,"available":0.90234375,"ID":"GUID","parentID":"GUID","externalID":null,"size":1.0087890625}],"owner":"GUID","ID":"GUID","parentID":"GUID","externalID":null,"address":IP","managementIP":"IP","BGPPeers":null,"messages":null,"peakMemoryUsage":74.54,"averageMemoryUsage":74.54,"currentMemoryUsage":74.54,"peakCPUUsage":37.79,"averageCPUUsage":25.45,"currentCPUUsage":25.57}],"diffMap":{"EntityKey ( id = GUID entityClassType = letters )":{"added":null,"deleted":null,"modified":{"currentCPUUsage":{"oldValue":"25.26","newValue":"25.57"},"peakCPUUsage":{"oldValue":"37.78","newValue":"37.79"}}}},"ignoreDiffInMediationEvents":false,"updateMechanism":"DEFAULT","requestID":null,"sourceEnterpriseID":null}
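As a point of comparison, one index-time workaround (not the search-time replacement asked for) is a SEDCMD that strips everything before the first `{`, so the stored _raw is pure JSON and auto KV can parse it. This is only a sketch: the sourcetype name is hypothetical, and the regex assumes the JSON payload always begins at the first opening brace.

    # props.conf on the indexer / heavy forwarder (sketch)
    [my_syslog_json]
    SEDCMD-strip_prefix = s/^[^{]+//
    KV_MODE = json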

How to route a subset of data to a local splunk indexer and all the data to a 3rd party system?

Scenario: I am trying to send all Windows forwarded events to a 3rd-party appliance, and send only the forwarded events containing the words "Avecto Defendpoint Service" to the on-premise Splunk indexer. I have been following http://docs.splunk.com/Documentation/Splunk/7.0.2/Forwarding/Routeandfilterdatad and have the following config files.

**Inputs.conf**

    [default]
    host = xxx-xxx-xxx

    [WinEventLog://ForwardedEvents]
    disabled = 0

**Props.conf**

    [WinEventLog:ForwardedEvents]
    TRANSFORMS-routing = routeAll, routeSubset

**Transforms.conf**

    [routeAll]
    REGEX=(.)
    DEST_KEY=_TCP_ROUTING
    FORMAT=3rdpartyappliance

    [routeSubset]
    REGEX=(Avecto)
    DEST_KEY=_TCP_ROUTING
    FORMAT=indexers,3rdpartyappliance

**Outputs.conf**

    [tcpout]
    defaultGroup = nothing

    [tcpout:3rdpartyappliance]
    server = xxx.xxx.xxx.xxx
    # sets the output to raw format.
    sendCookedData = false

    #### Outputs to Splunk Indexer ####
    [tcpout:indexers]
    server = xxx.xxx.xxx.xxx

**Results**

If I remove defaultGroup = nothing from outputs.conf, every event is duplicated to both the internal and external indexer, but I don't think I have the right REGEX to select only the events that I want to send internally.
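A small sketch of one possible adjustment, reusing the names from the question: transforms listed in TRANSFORMS-routing are applied in order, so routeSubset can overwrite the _TCP_ROUTING value that routeAll set, and the regex can be made more specific to the exact service name (assuming that string appears verbatim in the raw event).

    [routeAll]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = 3rdpartyappliance

    [routeSubset]
    REGEX = Avecto Defendpoint Service
    DEST_KEY = _TCP_ROUTING
    FORMAT = indexers,3rdpartyappliance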

Can we change index in the indexer?

Our heavy forwarder is forwarding logs to the indexer, and all the logs are going to the main index. At the indexer level, is it possible to change props.conf and transforms.conf to send the logs to a different (alternate) index based on the host? I put the changes below in the indexer's conf files, but it is not working. The same config works from the heavy forwarder, but I have a different issue when I put it there: from the heavy forwarder it duplicates the messages into both main and NewIndex. So I am trying to put it directly on the indexer, which is not working :(

**in props.conf**

    [host::IPaddress]
    TRANSFORMS = rewrite-DPindex

**in transforms.conf**

    [rewrite-DPindex]
    REGEX = .*
    DEST_KEY = _MetaData:Index
    FORMAT = NewIndex
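For reference, a minimal sketch of the host-based override; the host value and index name are placeholders, and the target index must already exist on the indexer. One hedged observation, since the full topology isn't shown: if the heavy forwarder sends already-parsed (cooked) data, the indexer will not run these index-time transforms again, which could explain why the same config behaves differently on the two tiers.

    # props.conf (sketch; replace 10.0.0.1 with the real host value)
    [host::10.0.0.1]
    TRANSFORMS-dpindex = rewrite-DPindex

    # transforms.conf (sketch)
    [rewrite-DPindex]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = newindex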

How to onboard csv files using Monitor with specific requirements?

I have a .csv file which will be onboarded into Splunk using a monitor input. It has two specific requirements:

1. The first 16 lines must be ignored.
2. The file is refreshed once a day: the whole file is rewritten, but the refreshed file contains only a few new or modified records. Only those records should be uploaded into Splunk each day.

So, can anyone please help me with:

1. how to write the inputs.conf, props.conf and transforms.conf (if required)?
2. how to write the field extraction on the search head?

Thanks in advance.
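A partial sketch for the header-skipping part, assuming the file is read as structured CSV on the forwarder and that the real header row is line 17 (both assumptions); the monitor path and sourcetype name are placeholders. The "index only new or modified records" requirement is a separate problem and is not addressed here.

    # inputs.conf on the forwarder (sketch)
    [monitor:///data/reports/daily.csv]
    sourcetype = my_daily_csv

    # props.conf on the same forwarder (sketch)
    [my_daily_csv]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 17

With INDEXED_EXTRACTIONS the fields are extracted at input time, so no separate field extractor should be needed on the search head for this sourcetype.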

How to assign a field alias to a json field?

We have the following in props.conf:

    FIELDALIAS-alias1 = apiRequest.apiInfo.clientID AS clientID

It doesn't seem to do what we had in mind, which is to alias `apiRequest.apiInfo.clientID` as `clientID`.
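One commonly suggested tweak (a sketch, assuming the dotted field is already extracted by the JSON auto-extraction): quote the source field name, since it contains dots.

    FIELDALIAS-alias1 = "apiRequest.apiInfo.clientID" AS clientID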

Is there a way to use some sort of regular expression with field aliases?

Is there a way to simplify the `props.conf` configuration and do the following in one statement?

    FIELDALIAS-alias01 = "apiRequest.reqHeader.appContext" AS appContext
    FIELDALIAS-alias02 = "apiRequest.reqHeader.dob" AS dob
    FIELDALIAS-alias03 = "apiRequest.reqHeader.encryptedUserName" AS encryptedUserName
    FIELDALIAS-alias04 = "apiRequest.reqHeader.firstName" AS firstName
    FIELDALIAS-alias05 = "apiRequest.reqHeader.lastName" AS lastName

Why is the Check Point OPSEC LEA app not fetching audit logs?

I might be wrong here, because when I check the props.conf file I don't find any stanza for audit logs. How can I proceed further with audit logs?

Are there inconsistencies in behavior with the need for INDEXED_EXTRACTIONS?

The admin class (lab) says that for JSON we need the following in the `props.conf` of the **forwarder**:

    INDEXED_EXTRACTIONS = json

However, I know that JSON works fine even if `INDEXED_EXTRACTIONS=json` is **only** at the indexer level, and maybe even that is not needed. Recently, at [Why does the csv sourcetype work for upload but not via the forwarder?][1], we realized that `INDEXED_EXTRACTIONS = csv` is absolutely needed at the forwarder level. Why is that? It seems that not all pre-defined sourcetypes are treated equally.

[1]: https://answers.splunk.com/answers/627788/why-does-the-csv-sourcetype-work-for-upload-but-no.html

How to create an If Statement in Props.conf?

I need to set a value based on another value. How would I do this:

    if severity = 1 then severity = high

One of my customers thought I could use a lookup table? I know I can do this in a search, but I'm not sure how to do it in props.conf.
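One way this is often handled is a search-time calculated field in props.conf. This is only a sketch: the sourcetype name is a placeholder, the severity values shown are assumptions, and it writes to a new field (severity_label) rather than overwriting the original.

    [my_sourcetype]
    EVAL-severity_label = case(severity=="1", "high", severity=="2", "medium", severity=="3", "low", true(), severity)

A lookup table (a LOOKUP- setting in props.conf plus a CSV) is the other standard option when the mapping has many values.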

Why is props.conf using the indexed time as _time?

Hi, I have a big ask to solve. I am making an API call to Redmine to pull data and then bring it into Splunk. My issue is that Splunk is using the "pulled" date/time (the time the data is pulled by the API/cron job) as _time, rather than using the "Created" time of each record. So when I search, for example, for all issues for March 2018, I get the issues that were pulled on those dates, not the issues created on those dates. I am looking for a way to deal with this, because it results in the wrong results being displayed.

I know that props.conf is one way of potentially dealing with this issue, but it is tricky and I am wondering if there is another way around it. The cron job produces a CSV which is pulled into Splunk; it has about 30 comma-separated columns. The field that I want to be _time, "Created", is the 20th column (as seen in an Excel spreadsheet), and an example value is "04/03/2018 12:33 PM". There are no other unique identifiers, and there are other date/time fields such as Updated and Start date besides the "Created" column.

So how can I use props.conf (or something else) to get this data in correctly? It is scheduled to be pulled into Splunk every morning. Thanks a million, this is a big one to solve so I really appreciate any support. Colin
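A possible props.conf sketch, assuming the CSV is read as a structured input on the forwarder that runs the cron job; the sourcetype name is a placeholder, and the TIME_FORMAT assumes "04/03/2018" is day/month/year (swap %d and %m if it is month first).

    [redmine_csv]
    INDEXED_EXTRACTIONS = csv
    TIMESTAMP_FIELDS = Created
    TIME_FORMAT = %d/%m/%Y %I:%M %p

TIMESTAMP_FIELDS tells Splunk to take _time from the named column instead of the time the file was read; it only affects events indexed after the change.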

What is the precedence of stanzas and options in props.conf?

Hi all, I'm monitoring a set of logs using a Splunk input. By default they have the sourcetype "others-sourcetype", and using a transform I generate, for some of them, the sourcetype "my-sourcetype" dynamically. Splunk reads the "others-sourcetype" props.conf stanza and applies **only** "TRANSFORMS-sourcetype_override", which correctly changes the sourcetype from "others-sourcetype" to "my-sourcetype", but it ignores all the other settings: LINE_BREAKER, SEDCMD-blfRemover and so on. Also, even though Splunk changes "others-sourcetype" into "my-sourcetype", it does not go back to props.conf to read the "my-sourcetype" stanza, so it doesn't apply any of that stanza's event-manipulation options either.

Can you help me understand the precedence of props.conf stanzas and options?

props.conf:

    [others-sourcetype]
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true
    CHARSET = UTF-8
    disabled = false
    TRUNCATE = 100000
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+\s+\w+)
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    TIME_PREFIX = ^
    MAX_TIMESTAMP_LOOKAHEAD = 24
    SEDCMD-blfRemover = s/\x0A//g
    SEDCMD-acrRemover = s/\x0D//g
    TRANSFORMS-giorno = add_giorno
    TRANSFORMS-sourcetype_override = others-sourcetype_logs_override
    category = Custom
    pulldown_type = 1

    [my-sourcetype]
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true
    CHARSET = UTF-8
    disabled = false
    TRUNCATE = 100000
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+\s+\w+)
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    TIME_PREFIX = ^
    MAX_TIMESTAMP_LOOKAHEAD = 24
    SEDCMD-blfRemover = s/\x0A//g
    SEDCMD-acrRemover = s/\x0D//g
    TRANSFORMS-anno = anno

transforms.conf:

    [others-sourcetype_logs_override]
    REGEX = \/home\/user\/Desktop\/(.*)\/.*.log
    SOURCE_KEY = MetaData:Source
    FORMAT = sourcetype::$1
    DEST_KEY = MetaData:Sourcetype

    [add_giorno]
    REGEX = (?<giorno>^\d\d\d\d-\d\d-\d\d)
    FORMAT = giorno::$1
    WRITE_META = true

    [anno]
    REGEX = (?<anno>^\d\d\d\d)
    FORMAT = anno::$1
    WRITE_META = true

    [mese]
    REGEX = (?<mese>^\d\d\d\d-\d\d)
    FORMAT = mese::$1
    WRITE_META = true
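One hedged observation that may explain part of this: a sourcetype rewrite done by an index-time transform happens after line breaking and timestamping, so parse-time settings under the new sourcetype's stanza are not applied to that data; they are only consulted at search time. If the goal is to make sure the parse-time settings always apply regardless of the rewrite, one option is a source:: stanza keyed on the monitored path (the path here is taken from the transform's regex and is therefore an assumption about the actual layout).

    [source::/home/user/Desktop/*/*.log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+\s+\w+)
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    MAX_TIMESTAMP_LOOKAHEAD = 24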


Why can't I set a new timestamp via props.conf?

Hi! I have the following JSON and I would like to set the field "Date" as the timestamp. Splunk is currently using the date and time at which I index the data.

JSON:

    {"Date":"2018-02-26","Id commande":"L4512XXX","Type":"A","Quantité vendue":"1000","Support de vente":"Livre","Code pays":"FR","Référence":"REFXXX"}

In order to set the field Date as the timestamp, here is my configuration file, props.conf:

    [json_sourcetype]
    KV_MODE = json
    NO_BINARY_CHECK = 1
    SHOULD_LINEMERGE = true
    TRUNCATE = 0
    TIME_PREFIX = Date":"
    MAX_TIMESTAMP_LOOKAHEAD = 200
    TIME_FORMAT = %Y-%m-%d

But it is not working at all; it still uses the indexing date, as if this config were not taken into account. Do you know where it could come from? Thanks!
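For comparison, a slightly tightened sketch of the same stanza. The settings themselves look plausible, so the usual suspects are where the stanza is deployed (timestamp extraction happens on the indexer or heavy forwarder that first parses the data, not on the search head) and the fact that only newly indexed events pick up the change. The sourcetype name is the one from the question.

    [json_sourcetype]
    KV_MODE = json
    SHOULD_LINEMERGE = false
    TIME_PREFIX = \"Date\":\"
    TIME_FORMAT = %Y-%m-%d
    MAX_TIMESTAMP_LOOKAHEAD = 10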

Enabling millisecond timestamps in Splunk

I have multiple log files from different sources which log the timestamp in different formats, as shown below. In one of the cases (Case 2) I am losing millisecond precision in Splunk. Can someone tell me which configuration needs to be changed here, and whether it needs to be changed at the sourcetype level or whether some universal setting will handle all the cases?

Case 1 (millisecond timestamp is preserved)

    Raw log:         [2018-04-04 00:26:10,649][ERROR][shield.action ] *********
    Splunk timestamp: 4/4/18 12:26:10.649 AM

Case 2 (millisecond timestamp lost)

    Raw log:         2018-04-04 00:29:02,183 INFO response - 2018-04-04 00:29:02,183 *******
    Splunk timestamp: 4/4/18 12:29:02.000 AM

Case 3 (millisecond timestamp is preserved)

    Raw log:         2018-04-04 00:31:09.118 c.e.f.f.c.p.B**************
    Splunk timestamp: 4/4/18 12:31:09.118 AM
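A hedged guess for Case 2: if that sourcetype relies on automatic timestamp recognition, an explicit TIME_FORMAT that includes the comma-separated milliseconds should preserve them. This has to be set per sourcetype; there is no single universal switch that safely covers all three formats. The sourcetype name below is a placeholder.

    [case2_sourcetype]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
    MAX_TIMESTAMP_LOOKAHEAD = 23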

Route and Filter Data from syslog (and syslog-ng is NOT an immediate option)

I have a typical scenario that could be resolved with a UF on syslog-ng; however, that is a future resolution. At the moment, I have two data sources (A and B) coming in on a common port (e.g. TCP 666). Each data source needs its own sourcetype name and must reside in a different index. So far, I have been trying to follow the docs and I created the following in .../opt/splunk/etc/system/local:

Inputs.conf

    [TCP://666]              <------------------- this is the original data source stanza
    Disabled = 0
    index = indx_A
    sourcetype = st_A

Props.conf

    [sourcetype_B]           <--------------------- I added this to accommodate data source B
    TRANSFORMS-routing = sourcetype_B_routing

Transforms.conf

    [sourcetype_B_routing]   <--------------------- I added this to accommodate data source B
    REGEX=|String_between_pipes|
    DEST_KEY=_TCP_ROUTING
    FORMAT=Everything, Subsidiary

Outputs.conf (I used two existing stanzas to direct the data to two different locations.)

    [tcpout:Everything]      <----- Splunk destination
    disabled = false
    server = x.x.x.x, x.x.x.x    <--------------------- I have multiple destinations
    autoLB = true

    [tcpout:Subsidiary]      <----------- 3rd-party destination, data to be sent raw
    disabled = false
    sendCookedData = false
    server = x.x.x.x:port

I need some guidance on connecting the inputs to the props > transforms > outputs. How do I edit the inputs.conf stanza (or another location) to define index_B and sourcetype_B, and tie the inputs.conf to the other .confs? Thank you
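A sketch of one way to tie these together, reusing the names from the question (the regex assumes the literal marker string really does distinguish source B): key the props.conf stanza on st_A, since that is the sourcetype the data arrives with from [TCP://666], and have transforms rewrite the sourcetype and index for matching events, alongside the existing _TCP_ROUTING transform.

    # props.conf (sketch)
    [st_A]
    TRANSFORMS-routing = set_sourcetype_B, set_index_B, sourcetype_B_routing

    # transforms.conf (sketch; the pipes are escaped so they match literally)
    [set_sourcetype_B]
    REGEX = \|String_between_pipes\|
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::sourcetype_B

    [set_index_B]
    REGEX = \|String_between_pipes\|
    DEST_KEY = _MetaData:Index
    FORMAT = indx_B

Events that don't match keep index = indx_A and sourcetype = st_A from inputs.conf; the index names (indx_A, indx_B) must exist on the indexers.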

MAX_EVENTS in props.conf issue

Hello everyone, I have a problem with props.conf. My props.conf:

    [test_cx1]
    BREAK_ONLY_BEFORE = \
    NO_BINARY_CHECK = true
    TRUNCATE = 0
    category = Custom
    description = Checkmarx
    disabled = false
    pulldown_type = 1

And I have an XML file: I want a single event to contain all of the CxXMLResults. How can I do this? What am I doing wrong?

![alt text][1]

[1]: /storage/temp/235666-linelimit.png

As you can see, the event breaks after every 257 lines, but I set MAX_EVENTS to 99999. Thanks for your answers.
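For reference, a sketch built around the detail that the break happens after 257 lines, which matches the default MAX_EVENTS of 256 (the maximum number of lines SHOULD_LINEMERGE will join into one event); the BREAK_ONLY_BEFORE pattern is an assumption that each report starts with an XML declaration.

    [test_cx1]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = <\?xml
    MAX_EVENTS = 100000
    TRUNCATE = 0

MAX_EVENTS must be set in the props.conf used by the instance that parses the file (indexer or heavy forwarder), and the file has to be re-indexed for the change to take effect.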