Questions in topic: "props.conf"

Making a delimited multivalue field using props and transforms

Hi guys, I'm having some issues making my multivalue field work as I would like. My "cve" field has values like:

```
CVE-2017-3003, CVE-2017-3002, CVE-2017-3001, CVE-2017-3000, CVE-2017-2999, CVE-2017-2998, CVE-2017-2997
CVE-2013-1346
CVE-2015-2808, CVE-2013-2566
```

My goal is that when I search for a single CVE (such as CVE-2017-3000), hosts with that CVE will be shown. Currently, however, a host is only shown if my search matches ALL of the CVE values for that host. I want to use transforms and props to configure this delimited extraction at search time, splitting on the "," that separates my CVEs. I've been trying for a while now to no avail; could someone please show me what my transforms and props need? Currently I have:

props.conf

```
REPORT-type = cve_separated
```

transforms.conf

```
[cve_separated]
DELIMS = ","
FIELDS = cve
MV_ADD = true
```

Any help would be greatly appreciated, cheers.
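
For reference, a minimal sketch of how this is often wired up — the sourcetype name `my_scanner` below is an assumption, but the stanza header itself (missing in the props.conf above) is required for the REPORT to apply at all:

```
# props.conf -- REPORT must sit under a sourcetype (or source::/host::) stanza
[my_scanner]
REPORT-cve = cve_separated

# transforms.conf -- match each CVE individually; with MV_ADD = true every
# match appends another value to the multivalue "cve" field
[cve_separated]
REGEX = (CVE-\d{4}-\d+)
FORMAT = cve::$1
MV_ADD = true
```

With each CVE stored as its own value, a search like `cve=CVE-2017-3000` matches hosts that carry that CVE among many.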

Extract fields from JSON array with multivalue and sub-array

Here is my sample data:

```json
{ "applications": [
  { "id": 2537302, "name": "addressdb_prod", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:52+00:00", "application_summary": { "response_time": 5.04, "throughput": 32.3, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 2, "instance_count": 2 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 2536708, 2561561 ], "servers": [ 25610567, 22686416 ], "application_hosts": [ 2536699, 2561553 ] } },
  { "id": 2537304, "name": "addressdb_trn", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:47+00:00", "application_summary": { "response_time": 4.93, "throughput": 32.0, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 2, "instance_count": 2 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 2536709, 2561556 ], "servers": [ 25610567, 22686416 ], "application_hosts": [ 2536699, 2561553 ] } },
  { "id": 2268424, "name": "All unnamed", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:52+00:00", "application_summary": { "response_time": 5.31, "throughput": 276.0, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 7, "instance_count": 17 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 38654227, 15703550, 47088155, 45753664, 19172730, 2561560, 19335897, 3249555, 14322182, 14322163, 3249553, 2536710, 2561555, 32941739, 32941738 ], "servers": [ 14652052, 13742329, 47086419, 45753649, 22686416, 2512533, 25610567 ], "application_hosts": [ 19172728, 2545902, 47088154, 45753663, 2561553, 2474911, 2492596, 2536699 ] } },
  { "id": 2569403, "name": "allaboutme_dev", "language": "dotnet", "health_status": "gray", "reporting": false, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [], "servers": [], "application_hosts": [] } },
  { "id": 17088365, "name": "api_cop_dev_emos", "language": "dotnet", "health_status": "gray", "reporting": false, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [], "servers": [], "application_hosts": [] } }
], "links": { "application.servers": "/v2/servers?ids={server_ids}", "application.server": "/v2/servers/{server_id}", "application.application_hosts": "/v2/application/{application_id}/hosts?ids={host_ids}", "application.application_host": "/v2/application/{application_id}/hosts/{host_id}", "application.application_instances": "/v2/application/{application_id}/instances?ids={instance_ids}", "application.application_instance": "/v2/application/{application_id}/instances/{instance_id}", "application.alert_policy": "/v2/alert_policies/{alert_policy_id}" } }
```

I tried to follow this answer: https://answers.splunk.com/answers/289520/how-to-split-a-json-array-into-multiple-events-wit.html with these configs in my props.conf file (the LINE_BREAKER value was cut off in the original post):

```
[json_no_timestamp:nrc]
SHOULD_LINEMERGE=false
LINE_BREAKER=((?
```
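
Since the LINE_BREAKER was cut off, here is an untested sketch of the approach from that linked answer applied to this sample — the regexes and SEDCMDs below are assumptions read off the data above, not a confirmed config:

```
# props.conf -- break the "applications" array into one event per element;
# the capture group (the comma) is consumed as the event boundary, so each
# element keeps its own braces
[json_no_timestamp:nrc]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{\s*"id"
# strip the array header from the first event and the trailing "links"
# object from the last one (untested assumptions)
SEDCMD-strip_header = s/^\{\s*"applications":\s*\[\s*//
SEDCMD-strip_footer = s/\]\s*,\s*"links":.*$//
TIME_PREFIX = "last_reported_at":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
KV_MODE = json
```

Note the "gray" applications have no last_reported_at, so those events would fall back to the default timestamp behavior.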

Making a delimited multivalue field using props and transforms - Qualys TA

Hi guys, I'm using the Qualys TA to extract VM data, and I'm having some issues making a multivalue field out of the "cve" field Qualys uses. My "cve" field has values like:

```
CVE-2017-3003, CVE-2017-3002, CVE-2017-3001, CVE-2017-3000, CVE-2017-2999, CVE-2017-2998, CVE-2017-2997
CVE-2013-1346
CVE-2015-2808, CVE-2013-2566
```

My goal is that when I search for a single CVE (such as CVE-2017-3000), hosts with that CVE will be shown. Currently, however, a host is only shown if my search matches the full value of the cve field, which can contain over 20 CVE values. (Sorry if that doesn't make sense; it's quite hard to explain without visualisations, but I've not got enough karma to upload pictures.) I want to use transforms and props to configure this extraction at search time. I have my regex working to capture each individual CVE, but I've been trying for a while now to no avail; could someone please show me what my transforms and props need? Currently I have:

props.conf

```
[qualys:hostDetection]
REPORT-cve = mv-cve
```

transforms.conf

```
[mv-cve]
REGEX = cve=(?<cve>CVE-\d+\-\d+)
FORMAT = cve::$1
MV_ADD = true
```

(I've tried the REGEX line without the "cve=", but the docs show an example with it. I also don't fully understand what the FORMAT line is doing; I don't think the docs are very clear.) Any help would be greatly appreciated, cheers.
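
For what it's worth, FORMAT is usually read as `<fieldname>::<value>`, where `$1` refers to the first capture group — so `cve::$1` writes each match into a field named cve. Assuming the raw events contain bare CVE IDs rather than `cve=` pairs, the common shape is:

```
# transforms.conf -- each regex match becomes one value of the multivalue
# "cve" field; FORMAT maps <fieldname>::<capture group>
[mv-cve]
REGEX = (CVE-\d{4}-\d+)
FORMAT = cve::$1
MV_ADD = true
```

With a named capture group such as `(?<cve>...)` you can omit FORMAT entirely; the group name becomes the field name.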

Making a multivalue field from a value obtained in a lookup

Hi guys, I'm not sure if this is possible or not, but it would be good to get it cleared up so I know for the future. I'm wondering if I can use props and transforms (and maybe fields.conf) to make a multivalue field at search time. However, the field I want to make multivalue isn't indexed; it comes from a lookup that's configured in props.conf to automatically enrich my data at search time. Is this possible? I'm having doubts because I think props/transforms/fields extractions are applied before my automatic lookup runs? Any help would be appreciated! Cheers
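
Your doubt matches the documented search-time operations order (extractions, aliases, and calculated fields run before automatic lookups), so one common workaround is to split the lookup field in the search itself. A sketch, with hypothetical lookup and field names:

```
... | lookup my_lookup host OUTPUT cve_list
    | eval cve=split(cve_list, ",")
```

split() produces a multivalue field you can then filter on, e.g. `| search cve="CVE-2017-3000"`.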

Cisco Networks Add-on for Splunk Enterprise: modifying sourcetype for all "syslog" sourcetypes

I'm about to install the Cisco Networks App and Add-on into our environment, and I'm a bit new to Splunk. What has me a bit concerned are these two stanzas in the props.conf:

```
[syslog]
TRANSFORMS-force_sourcetype_for_cisco_ios = force_sourcetype_for_cisco_ios, force_sourcetype_for_cisco_ios-xr, force_sourcetype_for_cisco_ios-xe

# VERY experimental for RFC5424 support
[rfc5424_syslog]
TRANSFORMS-force_sourcetype_for_cisco_ios = force_sourcetype_for_cisco_ios-rfc5424
```

Do I have it right that these will perform index-time changes to any records with the "syslog" and "rfc5424_syslog" sourcetypes, changing them to "cisco:asa"? I'm not certain that I won't end up with "syslog" records which **aren't** related to Cisco IOS. Could I instead just set the sourcetype manually for the input, and then remove the above from the config?
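
Two hedged notes: add-on transforms like these normally rewrite the sourcetype only for events whose contents match the add-on's Cisco-specific regexes (check its transforms.conf to confirm), so non-Cisco syslog records are usually left alone. And if the Cisco events arrive on a dedicated input, setting the sourcetype there is indeed an option — a sketch with a hypothetical monitor path:

```
# inputs.conf -- hypothetical path for a dedicated Cisco syslog feed
[monitor:///var/log/network/cisco/*.log]
sourcetype = cisco:ios
index = network
disabled = 0
```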

Problem filtering with props.conf and transform.conf

Hi, I've read a lot of posts on Splunk Answers, but I still have a problem filtering a specific sourcetype. Here is the log line I want to trash:

```
Sep 11 16:16:08 192.168.24.35 ROOT_FW_2: NetScreen device_id=ROOT_FW_2 [Root]system-notification-00257(traffic): start_time="2017-09-11 16:15:51" duration=16 policy_id=86 service=smtp (tcp) proto=6 src zone=zone_in dst zone=zone_out action=Permit sent=22056 rcvd=1284 src=192.168.1.1 dst=192.168.10.10 src_port=40049 dst_port=80 src-xlated ip=192.168.100.5 port=40049 dst-xlated ip=10.25.23.55 port=80 session_id=1015055 reason=Close - TCP FIN
```

On my forwarder, $SPLUNK_HOME/etc/system/local/props.conf:

```
# Filter out everything except the accepts
[JuniperFW]
TRANSFORMS-Juniper-null = remove_juniper_permit
```

$SPLUNK_HOME/etc/system/local/transform.conf:

```
# Accept everything except the Permits
[remove_juniper_permit]
REGEX = action=Permit
DEST_KEY = queue
FORMAT = nullQueue
```

Splunkd was restarted with no stanza errors. My Splunk version is 6.6.3. Does anyone have a clue or a way to debug this? Thanks a lot.
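
Two things worth ruling out, sketched below as likely culprits rather than a confirmed diagnosis: queue-routing transforms are parsing-time settings, which a universal forwarder ignores (they must live on the first heavy forwarder or indexer in the path), and the file has to be named transforms.conf — with an "s":

```
# On the indexer or heavy forwarder -- a universal forwarder does not
# apply parsing-time TRANSFORMS

# props.conf
[JuniperFW]
TRANSFORMS-juniper-null = remove_juniper_permit

# transforms.conf  (plural -- a file named "transform.conf" is not read)
[remove_juniper_permit]
REGEX = action=Permit
DEST_KEY = queue
FORMAT = nullQueue
```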

Where does props.conf need to exist in a distributed deployment?

I think I need to push this from the deployment server to each device, or at least to the forwarder and the search head. I have 5 servers making up my Splunk Enterprise deployment: 1 search head, 1 forwarder, 1 deployment server, and 2 indexers. My props.conf on the forwarder has this configuration for one data source:

```
FIELDALIAS-severity_as_id = severity as severity_id
FIELDALIAS-dst_as_dest = dst as dest
EVAL-app = netwitness
EXTRACT-subject = CEF\:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)
```

When I search, I am not seeing the 'subject' field. Does this need to be pushed to the search head? How about the other devices? I am trying to understand this. Thanks!
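
A sketch of where these usually live: FIELDALIAS, EVAL, and EXTRACT are all search-time settings, so they only take effect on the search head — a universal forwarder never uses them. Assuming your data's sourcetype stanza, something like:

```
# props.conf on the search head -- search-time settings; note that EVAL-
# values are expressions, so a literal string needs quotes
[my_cef_sourcetype]
FIELDALIAS-severity_as_id = severity AS severity_id
FIELDALIAS-dst_as_dest = dst AS dest
EVAL-app = "netwitness"
EXTRACT-subject = CEF:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)
```

Unquoted, `EVAL-app = netwitness` evaluates a (probably nonexistent) field named netwitness, which is another reason the field may come back empty.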

Trouble setting nullQueue format in transforms.conf

Hi, I'm trying to remove the part of my WatchGuard logs that just reports session timeouts. Here is what I'm trying:

props.conf

```
[watchguard]
TRANSFORMS-watchguard = remove-session-timeout
```

transforms.conf

```
[remove-session-timeout]
REGEX=^[^\)\n]*\)\s+(?P\w+\[\d+\]:\s+\w+\s+\w+\s+\w+:)
DEST_KEY = queue
FORMAT = nullQueue
```

I'm doing a general search via index="watchguard" and am still seeing the idle timeout logs. Do I need specific nomenclature for this to work correctly? I'm assuming I can call the props.conf [ ] stanza any name I want. Also, should I be opening and closing these with code/code tags? Thanks.
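
A couple of sketched assumptions that often explain this symptom: the props stanza name is not arbitrary — it must match the events' sourcetype (or be a source::/host:: pattern) — and queue-routing transforms run only at parsing time on an indexer or heavy forwarder, and only affect newly arriving events, never data already in the index. Matching a distinctive literal from the unwanted events is usually more robust than a long positional regex; the phrase below is a guess at your log text:

```
# transforms.conf -- minimal sketch; "session timeout" is an assumption
# about the literal text in the events to drop
[remove-session-timeout]
REGEX = session\s+timeout
DEST_KEY = queue
FORMAT = nullQueue
```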

How to extract nested key value pairs from a specific JSON string field using spath and kvdelim?

I have JSON that looks like this. Within the "message" field, there can be one or more key-value pairs. How can I extract the key-value pairs that are inside the "message" field?

```json
{
  "severity":"INFO",
  "logger":"controllers.offers.OfferController",
  "thread":"application-akka.actor.default-dispatcher-297",
  "message":"2017-09-14 15:12:56,980 [I] c.o.OfferController h5FCZGLPj95A7DPq 67b33d676699b9cab76c7f86 \/offers\/private\/offer\/saveOffer\/25 POST Successfully saved offerId=69 for productId=3 ",
  "properties":{
    "path":"\/offers\/private\/offer\/saveOffer\/25",
    "http_method":"POST",
    "request_id":"xxxxxGLPj95xxxxx",
    "client_id":"xxxxxd676699b9cab76xxxxx"
  }
}
```

I've tried this, but it doesn't work:

```
index=xyz | spath input=message | extract kvdelim="=" pairdelim=" " | table offerId, productId
```

I need to be able to do this at search time, since it's not possible for me to modify props.conf.
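
For reference, a sketch of one search-time approach: the `extract` command only operates on _raw, which may be why the pipeline above comes back empty, whereas `rex field=...` can target the extracted message field directly. Field names here are taken from the sample:

```
index=xyz
| spath
| rex field=message "offerId=(?<offerId>\d+)"
| rex field=message "productId=(?<productId>\d+)"
| table offerId, productId
```

If the key names vary from event to event, `rex field=message max_match=0 "(?<key>\w+)=(?<value>\S+)"` plus `mvzip`/`mvexpand` is a common, if clunkier, alternative.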

Having trouble extracting a timestamp

Hello all, I'm having an issue with my environment while trying to index a set of logs I get from a file nightly. What is happening is that Splunk is not finding the timestamp and is setting it to either the file mod time or the index time. I do not have this issue with the other logs sent from this same server (a syslog server sending many logs). At the bottom are three log lines as an example. I'm trying to extract the epoch timestamp from the start of the line: `AV - Alert - "**1504324797**"`. I'm not seeing any "failed to parse timestamp" errors, so I'm confused as to why this is being bypassed and falling back to the file mod time or index time.

The input stanza:

```
[monitor:///apps/alienvault/ossec-alerts-*.log]
whitelist=ossec-alerts
index = test
sourcetype = alienv
disabled = 0
```

props.conf (I've commented out the field extractions to make sure they aren't the issue):

```
[alienv]
TIME_PREFIX = ^\w+\W+\w+\W+
TIME_FORMAT = %s
TZ = UTC
#REPORT-alienv = av-syslog-hdr, av-syslog-user, av-syslog-srcip, av-syslog-location1, av-syslog-location2, av-syslog-message
#REPORT-alienv-loc = av-syslog-location1, av-syslog-location2
#FIELDALIAS-signature = action as signature
#FIELDALIAS-src = src_ip as src
#TRANSFORMS-sev = av-syslog-sev
#TRANSFORMS-suppressions = av-win-suppress-detail
```

For TIME_PREFIX I've also tried `AV - Alert - "`, `\-\s\"`, no prefix at all, and others; TIME_FORMAT = %s is for the 10-digit epoch format. I have a distributed environment, so I've placed the props.conf/transforms.conf on the indexers and search heads for search-time field extractions. The indexers and search heads are version 6.5; the server I'm forwarding from uses a universal forwarder, version 6.4.1.

Log line examples:

```
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFE96B24E23} Service Information: Service Name: krbtgt Service ID: S-1-5-21-2277870611-162051517-1830794436-502 Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65168 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFxxxxxxxxx} Service Information: Service Name: service$ Service ID: S-1-5-21-2277870611-162051517-1830794436-1296 Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65170 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFxxxxxxxxx} Service Information: Service Name: service$ Service ID: S-1-5-21-2277870611-162051517-183079xxxx-xxxx Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65169 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
```
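
A sketch of settings that usually make this deterministic — the values below are assumptions read off the sample lines, and note that timestamp recognition is an index-time step, so the stanza must be active on the indexers (or a heavy forwarder), not just present for search time:

```
# props.conf on the indexers -- timestamping happens at parse time
[alienv]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^AV - Alert - "
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
TZ = UTC
```

Without MAX_TIMESTAMP_LOOKAHEAD, Splunk may keep scanning and latch onto the later `2017 Sep 02 00:00:02` string inside the EVENT text; pinning the lookahead to the 10-digit epoch right after the prefix avoids that.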

What are the capabilities of the "force_local_processing" setting?

Does anyone know the full effects of the new "force_local_processing" option? How does it change the following information: https://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F What are the aggregator and regex replacement processors?
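
For context, a sketch of how the setting is declared, based on my reading of the props.conf spec (worth verifying against your version's spec file): it is a per-sourcetype boolean for universal forwarders that forces events through the UF's local parsing pipeline — including the aggregator and regex replacement processors the question mentions — rather than deferring that work to the indexers:

```
# props.conf on a universal forwarder -- hypothetical sourcetype name
[my_sourcetype]
force_local_processing = true
```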

How to extract date field from the filename in Splunk

Hello all, can anyone please help me extract the date from the filename in Splunk? Below is my existing configuration, but I am not able to get the filename's date used as the event date in Splunk.

inputs.conf

```
[monitor://C:\sampletestfile\*]
index=test
sourcetype=largefile
crcSalt = datetime.xml
```

props.conf

```
[largefile]
DATETIME_CONFIG = /etc/apps/myapp/local/datetime.xml
```

My filename is my_20170606.txt. Thanks in advance.
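
A community-documented sketch of a custom datetime.xml that pulls the date out of the source path — untested here, and the regex is an assumption based on the `my_YYYYMMDD.txt` naming above:

```xml
<!-- $SPLUNK_HOME/etc/apps/myapp/local/datetime.xml -- the source:: anchor
     matches against the monitored file's path (assumption per community
     answers; verify on your version) -->
<datetime>
  <define name="file_date" extract="year, month, day">
    <text><![CDATA[source::.*?my_(\d{4})(\d{2})(\d{2})\.txt]]></text>
  </define>
  <timePatterns>
    <use name="file_date"/>
  </timePatterns>
  <datePatterns>
    <use name="file_date"/>
  </datePatterns>
</datetime>
```

Separately, crcSalt is usually set to `<SOURCE>`; setting it to the literal string "datetime.xml" only salts the file checksum with that text and has nothing to do with timestamp parsing.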

splunk btool returns many duplicate events for props

Hi guys, I am experiencing odd behavior when using btool to troubleshoot some issues. When I run btool to get the list of props.conf settings in my instance, I get lots of duplicates, and I don't know why this is happening, nor whether it is normal/expected. Any ideas or explanations?

Example:

```
$ splunk btool props --debug list | grep send_to_nullqueue
/opt/splunk/etc/apps/test/default/props.conf TRANSFORMS = send_to_nullqueue
/opt/splunk/etc/apps/test/default/props.conf TRANSFORMS = send_to_nullqueue
/opt/splunk/etc/apps/test/default/props.conf TRANSFORMS = send_to_nullqueue
/opt/splunk/etc/apps/test/default/props.conf TRANSFORMS = send_to_nullqueue
... (repeated many more times)
```
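
One hedged explanation worth ruling out: btool prints an attribute once for every stanza that carries it, and the grep above strips the stanza headers that would reveal this — so one TRANSFORMS line defined under many stanzas (or many source::/host:: patterns) looks like duplication. Keeping the stanza headers in view makes it visible:

```
# keep the stanza header lines alongside the attribute to see which
# stanzas repeat it
splunk btool props list --debug | grep -E '\[|send_to_nullqueue' | less
```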

Can I have two apps that have two different indexers and indexes for the SAME Windows Event Log monitor stanza?

I have an app with an inputs.conf that has a stanza for [WinEventLog://Microsoft-Security-Logs] pointing to an index, and it uses _TCP_ROUTING to make sure the events go to the correct indexer. I have a group that runs their own Splunk environment and wants this data sent to their own index/indexers, but I still need it as well. I would like to create a second app with another [WinEventLog://Microsoft-Security-Logs] stanza that sends the same information to their servers too. I don't see any facility for having two identical inputs.conf stanzas, even in two different apps: it seems the configurations are merged and the last value read takes precedence. Is there a way to do this?
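
You're right that identical stanza names merge across apps, but _TCP_ROUTING accepts a comma-separated list of output groups, so a single stanza can feed both environments. A sketch with hypothetical group names and servers:

```
# inputs.conf -- one stanza, routed to two tcpout groups
[WinEventLog://Microsoft-Security-Logs]
index = wineventlog
_TCP_ROUTING = our_indexers, their_indexers

# outputs.conf -- hypothetical destinations
[tcpout:our_indexers]
server = idx1.example.com:9997

[tcpout:their_indexers]
server = idx.othergroup.example.com:9997
```

One caveat to verify: the index setting travels with the events to both destinations, so the other group may need an index of the same name or their own routing/transforms on receipt.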

How to customize logging using transforms.conf and props.conf? Where do we configure these?

How do I customize log events using transforms.conf and props.conf? And where do we configure these files: on the Splunk forwarder, the search head cluster, or the indexer?
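
As a rule of thumb: index-time operations (line breaking, timestamping, rewriting or dropping events) go on the indexers or a heavy forwarder, while search-time field extractions go on the search heads; universal forwarders generally need neither. A sketch of a typical index-time customization — masking a value before it is written to the index, with a hypothetical sourcetype:

```
# props.conf (indexer / heavy forwarder)
[my_sourcetype]
TRANSFORMS-mask_ssn = mask_ssn

# transforms.conf -- rewrite _raw, masking the first part of an SSN
[mask_ssn]
REGEX = (.*)\d{3}-\d{2}-(\d{4})(.*)
FORMAT = $1XXX-XX-$2$3
DEST_KEY = _raw
```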


How to edit props.conf to cope with two different time values in log file

Hi all, I have created an index and sourcetype for two log files. I have set up my props.conf to extract the date/time and break each entry onto its own event; however, one of my logs has a colon after the time, and it is not separating out correctly. See below:

```
19/09/2017 13:34:51.438   2017-09-19 13:34:51.438683 [ptp1:pps--phc1(ens1f0/ens1f1)], last: 0, mean: 0, min: 2147483647, max: -2147483647, bad-period: 0, overflows: 0
19/09/2017 13:34:51.437   2017-09-19 13:34:51.437853: warning: ptp ptp1: failed to receive Announce within 12.000 seconds 2017-09-19 13:34:51.437898: debug: ptp ptp1: state PTP_LISTENING 2017-09-19 13:34:51.437911: debug: netRefreshIGMP
19/09/2017 13:34:50.823   2017-09-19 13:34:50.823439 [phc0(ens1f0/ens1f1)->system], offset: -8.875, freq-adj: -42949.984, in-sync: 1
```

My props.conf:

```
[ptp_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{6}\s
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_PREFIX = ^
```

If I put a colon into the regex, it will miss the other log file. Is the only way to do this two sourcetypes? Thanks,
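
Not necessarily — a character class at the end of the pattern can accept either terminator. A sketch, assuming the only difference between the two files is the space-versus-colon after the microseconds:

```
[ptp_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
# [\s:] matches either a space (first file) or a colon (second file)
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{6}[\s:]
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_PREFIX = ^
```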

Can I make a search time field extraction from a piece of the file/source?

I need to create a field in Splunk that is a portion of the file path. Do I need to do that at index time, or can I do it at search time? I know the regex; I just don't know how to turn a portion of source into a field on the event. Thanks in advance!
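
Search time is fine — props.conf extractions can target fields other than _raw with the `in <field>` clause. A sketch with a hypothetical path layout and field name:

```
# props.conf on the search head -- pull the directory after /var/log/apps/
# into a field called app_dir (path and names are assumptions)
[my_sourcetype]
EXTRACT-app_dir = ^/var/log/apps/(?<app_dir>[^/]+)/ in source
```

Or ad hoc in a search: `... | rex field=source "^/var/log/apps/(?<app_dir>[^/]+)/"`.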

Need help with regex in props.conf

Hi all, here is how my raw logs look. I need help with props.conf so that I can index by the second time field (`timestamp=...`) instead of the first one:

```
Sep 19 12:45:19 129.106.x.x fdbsyslog: timestamp=2017.09.19 - 12:25:16.056 devname=123 device_id=123 type=alert
```

Thanks in advance.
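
A sketch for anchoring the timestamp recognizer on that second field — the format string matches the `2017.09.19 - 12:25:16.056` value in the sample, and the stanza name is whatever sourcetype these events actually use:

```
# props.conf on the indexer / heavy forwarder (timestamping is index time)
[my_fw_sourcetype]
TIME_PREFIX = timestamp=
TIME_FORMAT = %Y.%m.%d - %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```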

JSON file getting truncated

Below is my input file:

```json
{ "Count": 2, "Items": [ { "total_time": { "S": "0.000s" }, "start_date_time": { "S": "2017-09-19 05:00:43" }, "bad_records": { "N": "0" }, "successful_records": { "N": "0" }, "source": { "S": "mps_dnc" }, "end_date_time": { "S": "2017-09-19 05:00:43" }, "file_name": { "S": "No File" }, "total_records": { "N": "0" }, "job_name": { "S": "mps_dnc_out" } }, { "total_time": { "S": "12.783s" }, "start_date_time": { "S": "2017-09-19 11:42:21" }, "bad_records": { "N": "0" }, "successful_records": { "N": "12094" }, "source": { "S": "mps_dnc" }, "end_date_time": { "S": "2017-09-19 11:42:34" }, "file_name": { "S": "do_not_contact_list_2017-09-19T11_42_20.581Z.txt" }, "total_records": { "N": "12094" }, "job_name": { "S": "mps_dnc_out" } } ], "ScannedCount": 2, "ConsumedCapacity": null }
```

Below are my props.conf and limits.conf:

limits.conf

```
[spath]
# number of characters to read from an XML or JSON event when auto extracting
extraction_cutoff = 10000
```

props.conf

```
[dynamoout]
TRUNCATE = 0
KV_MODE = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]*)
DATETIME_CONFIG = CURRENT

[source::/script_logs_mps/*.*]
CHECK_METHOD=entire_md5
```

Still, in Splunk I can see only 8 lines.
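
A hedged guess at the cause, with a sketch: with SHOULD_LINEMERGE=false and a LINE_BREAKER that matches every newline, a pretty-printed JSON file is indexed as one event per line rather than as one JSON event — which would look like "truncation". If each file should become a single event, a never-matching LINE_BREAKER is a common trick:

```
[dynamoout]
SHOULD_LINEMERGE = false
# ((?!)) can never match, so the entire file is kept as one event
LINE_BREAKER = ((?!))
TRUNCATE = 0
KV_MODE = json
NO_BINARY_CHECK = true
DATETIME_CONFIG = CURRENT
```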