Channel: Questions in topic: "props.conf"

props.conf Line Breaking

Hello everyone, I have several events with different timestamps that I'm trying to break up. The props file I'm using is as follows:

    sourcetype=applogs
    SHOULD_LINEMERGE=true
    NO_BINARY_CHECK=false
    BREAK_ONLY_BEFORE = ^(\d{2}\:\d{2}\:\d{2}\||\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}|\d\/\d{2}\/\d{4}\s+?\d{2}:\d{2}:\d{2}:)
    disabled = false
    pulldown_type = true

Alternatively, I also use:

    sourcetype=applogs
    SHOULD_LINEMERGE=false
    LINE_BREAKER = ([\r\n]+)( \d{2}\:\d{2}\:\d{2}\||\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}|\d\/\d{2}\/\d{4}\s+?\d{2}:\d{2}:\d{2}:)
    disabled = false
    pulldown_type = true

The logs look like this:

    14:49:27| stuff
    14:49:27|stuff
    14:49:28|stuff

Another log looks like this:

    7/17/2017 17:02:12:stuff
    7/17/2017 17:02:12:stuff
    7/17/2017 17:02:12:stuff

And yet another log looks like this:

    7/17/2017 17:01:58.345, stuff
    7/17/2017 17:01:58.355, stuff
    7/17/2017 17:01:58.376, stuff

Could someone let me know what I'm doing wrong in my props.conf files? For some reason they're not breaking up the file.
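A likely cause, offered as a guess rather than a confirmed diagnosis: props.conf stanzas need a bracketed header such as [applogs]; a bare sourcetype=applogs line is not a stanza header, so neither version of the settings ever takes effect. The settings also have to live where parsing happens (an indexer or heavy forwarder), not only on a universal forwarder. A minimal sketch along the lines of the second attempt, assuming the sourcetype really is called applogs and that the three timestamp shapes shown are the only possible line starts:

    [applogs]
    SHOULD_LINEMERGE = false
    # Break on newlines only when the next line starts with one of the three timestamp shapes:
    # HH:MM:SS| , M/DD/YYYY HH:MM:SS: , or M/DD/YYYY HH:MM:SS.mmm,
    LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\||\d{1,2}\/\d{1,2}\/\d{4}\s+\d{1,2}:\d{2}:\d{2}[.:])

With LINE_BREAKER doing the work, BREAK_ONLY_BEFORE is unnecessary and SHOULD_LINEMERGE can stay false.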

Aruba ClearPass App for Splunk Enterprise: Why is data not readable?

I have configured the Aruba ClearPass App for Splunk Enterprise on a forwarder and an indexer. I made sure data is coming in on TCP 1514, enabled the port on both the forwarder and the indexer, and configured it to go to the aruba index. I am getting data into Splunk, but it is not readable. Below is the raw data:

    \xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD
    \xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD
    \xFF\xF4\xFF\xFD\xFF\xF4\xFF\xFD

Below are the props for this data:

    [Aruba:CPPM:Syslog]
    SHOULD_LINEMERGE = false
    TIME_PREFIX = timestamp=
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N%Z
    MAX_TIMESTAMP_LOOKAHEAD = 30

Cisco Networks Add-on for Splunk Enterprise: Why are events not line breaking?

Hello World, I've done a pretty straightforward setup of the Cisco Networks App and Add-on for Splunk Enterprise. We've got a distributed environment with indexers and search heads. The input is configured like this:

    [tcp://8372]
    connection_host = dns
    sourcetype = Cisco:SmartCallHome
    index = main

We configured our switches like this:

    call-home
    contact-email-addr Moes-bar@company.de
    site-id "onemorebeer"
    profile "Splunk"
    destination transport-method http
    destination address http http://happyserver:8372
    subscribe-to-alert-group diagnostic severity debug
    subscribe-to-alert-group environment severity debug
    subscribe-to-alert-group inventory
    subscribe-to-alert-group inventory periodic daily 1:00

/opt/splunk/etc/apps/TA-cisco_ios/default/props.conf is in default. The Call Home events come in, but they don't get extracted. They all look like this: ![alt text][1] Does anybody know more than me? [1]: /storage/temp/206890-events.png
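Since this is a distributed environment, the first thing worth checking is where the add-on's props.conf actually lives: line breaking, timestamping, and other parsing settings only take effect on the instances that parse the data (the indexers, or a heavy forwarder in front of them), not on the search heads. One way to confirm, run on an indexer (btool is a standard Splunk CLI tool; the stanza name is the sourcetype used in the input above):

    $SPLUNK_HOME/bin/splunk btool props list Cisco:SmartCallHome --debug

If that returns nothing, the TA-cisco_ios props are not deployed to the indexer tier, which would explain events arriving un-broken and un-extracted.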

Structured data (TSV) configured on Universal Forwarder with transform applied on Indexer

I have some TSV files that I am forwarding with a universal forwarder. I have props.conf configured on the UF with the following for the sourcetype:

    FIELD_DELIMITER = \t
    HEADER_FIELD_LINE_NUMBER = 1

That has worked great. But now I need to drop some events so they do not get indexed. On the indexer I have configured the following for the sourcetype in props.conf:

    []
    TRANSFORMS-null = drop_batchrequests

...and in transforms.conf:

    [drop_batchrequests]
    REGEX = batchRequest
    DEST_KEY = queue
    FORMAT = nullQueue

At first it was not working; I was still getting events that contain batchRequest. So I temporarily removed the structured data configuration on the universal forwarder (shown above) and the transform worked as desired: batchRequest events were no longer indexed. But now the TSV format and field recognition were gone. So I tried to configure everything in one place. On the indexer I specified the structured data config in props.conf using FIELD_DELIMITER and FIELD_NAMES (since I can't use HEADER_FIELD_LINE_NUMBER on the indexer). The result of that was that the batchRequest events were not indexed, but the fields (from the header row) still were not extracted. Am I doing something wrong? Or is there some reason why these configurations (TSV/structured data field recognition and dropping certain events to the nullQueue) on the same sourcetype will not work together? I can get each to work independently, but not together.
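This combination is a known rough edge: when a universal forwarder does structured-data parsing (FIELD_DELIMITER, HEADER_FIELD_LINE_NUMBER, INDEXED_EXTRACTIONS), the data arrives at the indexer already parsed and skips the indexer's parsing pipeline, so index-time TRANSFORMS there never see it. The usual suggestion is to keep the structured-data settings and do the nullQueue filtering on the forwarder itself. A sketch, with your_tsv_sourcetype standing in for the real sourcetype name:

    # props.conf on the universal forwarder
    [your_tsv_sourcetype]
    FIELD_DELIMITER = \t
    HEADER_FIELD_LINE_NUMBER = 1
    TRANSFORMS-null = drop_batchrequests

    # transforms.conf on the universal forwarder
    [drop_batchrequests]
    REGEX = batchRequest
    DEST_KEY = queue
    FORMAT = nullQueue

This is offered as the commonly reported workaround rather than a guarantee; it is worth verifying against the structured-data documentation for your Splunk version.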

Linux Auditd: How to override the default configurations for props.conf?

When the Linux Auditd app is installed on a Splunk Enterprise indexer, does the props.conf in TA_linux-auditd/default/props.conf override anything by default? I am confused about how overriding works. The Splunk documentation says the following:

Note: If you forward data, and you want to assign a source type for a source, you must assign the source type in props.conf ***on the forwarder***. If you do it in props.conf on the receiver, the override has no effect.

So if I have the Linux Auditd app installed on an indexer and I have a universal forwarder sending audit log data to my indexer, will any configuration I add in TA_linux-auditd/local be applied to data received from forwarders, or to data that my indexer itself is forwarding? The note above makes it sound like I need to install the Linux Auditd app on my forwarder, not just my indexer.
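As a general rule (not specific to this app): sourcetype assignment and anything else that affects how raw data is read has to be configured where the data enters Splunk, which for a universal forwarder means its inputs.conf; parsing-time props (line breaking, timestamp extraction) run on the indexer; and search-time props (field extractions, aliases, tags) run wherever the search executes. So a TA placed only on the indexer can still contribute parsing- and search-time behavior for data arriving from UFs, but it cannot assign the sourcetype for them. A sketch of the forwarder side, with the path and sourcetype name as placeholders that should be matched to whatever TA_linux-auditd's props.conf expects:

    # inputs.conf on the universal forwarder (illustrative values only)
    [monitor:///var/log/audit/audit.log]
    sourcetype = linux_audit
    index = os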

Splunk Add-on Builder: Why is my regular expression to remove first line of text file not working?

I used the Splunk Add-on Builder to export an add-on that I'm working on, but I needed to tweak props.conf and transforms.conf in order to remove the first line of multiple source types (text files) that I'm monitoring in inputs.conf. However, I am getting no results and am unsure why. These are my .conf stanzas:

props.conf:

    [assignment_fact]
    NO_BINARY_CHECK = true
    category = Splunk App Add-on Builder
    disabled = false
    SHOULD_LINEMERGE = 0
    pulldown_type = true
    TRANSFORMS-removeheader = removeheaderrow
    REPORT-ta_builder_internal_use_table_format_results_for_assignment_fact = ta_builder_internal_use_table_format_results_for_assignment_fact

transforms.conf:

    [removeheaderrow]
    REGEX = \A[^\n]*\n
    DEST_KEY = queue
    FORMAT = nullQueue
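Two things worth checking, offered as hypotheses rather than a confirmed diagnosis. First, index-time transforms run against one event at a time (SOURCE_KEY defaults to _raw), not against the whole file, and single-line events usually carry no trailing newline for \A[^\n]*\n to match; also, nullQueue drops the entire matching event, so the regex should identify the header event by its content. Second, if the sourcetype can use structured-data parsing, HEADER_FIELD_LINE_NUMBER handles the header row without any transform. A sketch of the first option, where the literal field names are placeholders for whatever the real header row contains:

    # transforms.conf
    [removeheaderrow]
    # match an event that IS the header row; the field list below is hypothetical
    REGEX = ^assignment_id,student_id,score
    DEST_KEY = queue
    FORMAT = nullQueue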

How to edit props.conf to line merge a set of results?

Hello, I have the below set of line events (repeating) that I want to convert to a single event. For every 6 events I want to produce 1 event, viz. below:

    07/24/2017 16:16:31 host=myhost SMB1Enabled=0x0
    07/24/2017 16:16:31 host=host=myhost PSExecEnabled=0x0
    07/24/2017 16:16:31 host=host=myhost SymantecDefsDate=2017-07-24
    07/24/2017 16:16:31 host=host=myhost SystemInfected=0x0
    07/24/2017 16:16:31 host=host=myhost SymantecRunningStatus=0x1
    07/24/2017 16:16:31 host=host=myhost SymantecProductionVersion=12.1.5337.5000

To go ahead with a multi-line merge, should I use the following config change in props.conf?

    [my_sourcetype]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE=([\r\n]+)(?=\d{1,2}\/\d{1,2}\/\d{4}\s\d{1,2}:\d{1,2}:\d{1,2}\shost=/^\w{1,20}$/\sSMB1Enabled)
    MAX_EVENTS=6

But it does not seem to work. Help would be appreciated @lguinn @bwooden @kristian.kolb @woodcock. Regards, Sayanta B
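The BREAK_ONLY_BEFORE value above mixes in pieces that belong elsewhere: ([\r\n]+) is LINE_BREAKER syntax, and /^\w{1,20}$/ is a literal that will never match inside the host= value. BREAK_ONLY_BEFORE just needs a regex that matches the first line of each block. A sketch, assuming every 6-line group starts with the SMB1Enabled line as in the sample:

    [my_sourcetype]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = ^\d{2}\/\d{2}\/\d{4}\s+\d{2}:\d{2}:\d{2}\s+host=\S+\s+SMB1Enabled=
    MAX_EVENTS = 6
    TIME_FORMAT = %m/%d/%Y %H:%M:%S

As with any parsing change, this has to be deployed to the indexer or heavy forwarder and only affects newly indexed data.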

Sourcetype Assignment

Hello All, I have two servers with hostnames H1 and H2; both have the same log file named "/apps/logs/log.log". I have set the line breaking based on the source file name in my props.conf, for example:

    [source::///apps/logs/log.log]

But the log.log files available on H1 and H2 are in different time zones. Even though I use separate sourcetypes for H1 and H2 in inputs.conf, the default source-file configuration is applied based on props.conf. How can I overcome this conflict? In the example I have quoted just two hosts, but in our environment we have 100 such servers. Regards, BK
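One pattern that may help here: props.conf stanzas can be scoped by host as well as by source, and TZ is a per-stanza setting, so the shared source stanza can keep the line-breaking rules while each host (or host pattern) gets its own time zone. A sketch, with the host names and zones as placeholders:

    # props.conf
    # keep the existing [source::...] stanza for the shared line-breaking settings,
    # then add per-host TZ stanzas (host names and zones below are placeholders)
    [host::H1]
    TZ = America/New_York

    [host::H2]
    TZ = Europe/London

For 100+ servers, wildcard host stanzas such as [host::nyc*] (grouping hosts by naming convention) keep this manageable.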

How do I adjust forwarded Windows Event Log fields to standardize the host and ComputerName (without FQDN) so they work properly in Splunk?

Hello, I am hoping someone from this awesome community can help me out with a Windows event log forwarding issue I am having at index time.

Background: I have a WEC server that is receiving Windows event logs from a large number of servers (300+). These logs are forwarded to the WEC server with native Windows forwarding from each server. The WEC server (with a universal forwarder installed on it) then forwards the forwarded event logs to my Splunk heavy forwarder, where I have props.conf and transforms.conf set up to adjust the fields and to remove traces of the WEC server forwarding the events (to normalize the events for add-ons, etc.). I would like the events to be populated as though each machine had the universal forwarder installed on it, so add-ons and Splunk work properly with the information and conflicts are prevented. To do this I updated my props and transforms files to:

1. Create and set a forwarder field with the host value of the WEC server (takes the original host value and places it into a new field called forwarder). This tells me the origin is the WEC, with the idea being that everything else looks exactly the same as if a universal forwarder on that specific machine had sent the log itself.
2. Remove the fully qualified domain name from the computer name and list only the computer name.
3. Set the host field by reading the ComputerName field (minus the FQDN, as it was removed in the previous step).
4. Set the sourcetype by log type to change WinForwardedEvents to WinEventLog:Security (working and functioning great).

Here is a copy of the props.conf and transforms.conf additions.

transforms.conf:

    # Set original host (forwarder) as forwarder
    [WinEventLog-SetForwarderName]
    REGEX = (?m)host::(.*)$
    SOURCE_KEY = MetaData:Host
    FORMAT = forwarder::$1
    WRITE_META = true

    # Remove ComputerName FQDN
    [RemoveFQDN-ComputerName]
    REGEX = (?m)ComputerName=([a-zA-Z0-9-_]+)?\b\.\w+\.\w+$
    FORMAT = ComputerName::$1
    DEST_KEY = MetaData:ComputerName

    # Set host to ComputerName (Windows event logs)
    [Set-Host-By-ComputerName]
    REGEX = (?m)ComputerName=([a-zA-Z0-9-_]+)?\b\.\w+\.\w+$
    FORMAT = host::$1
    DEST_KEY = MetaData:Host

    # Set sourcetype by log name
    [Set-Sourcetype-By-LogName]
    REGEX = (?m)LogName=(.*)?\b
    FORMAT = sourcetype::WinEventLog:$1
    DEST_KEY = MetaData:Sourcetype

props.conf:

    # Windows Event Log forwarded events
    [WinEventLog:ForwardedEvents]
    TRANSFORMS-001-sethost_sourcetype = WinEventLog-SetForwarderName, RemoveFQDN-ComputerName, Set-Host-By-ComputerName, Set-Sourcetype-By-LogName

Thus my question is three-fold:

1. All of the host fields are showing up with the host name of the WEC server. Why are the hosts showing up as the WEC server?
2. The ComputerName field is still populated with the fully qualified domain name. Why is the FQDN not being removed?
3. Are there any additional steps I am missing to normalize this data so it is the same as if it were loaded via a universal forwarder (and can be processed by the Windows TA and other add-ons)?

**I would guess I probably went wrong somewhere on the regex and that is what is causing this not to work; however, help in fixing this is most definitely appreciated.** Thank you!

Here is the log output from this setup, in which I can see the forwarder field being added as well as the sourcetype being changed; however, the FQDN is not being removed and the host is not showing up as the ComputerName:

    07/26/2017 04:41:54 PM
    LogName=Security
    SourceName=Microsoft Windows security auditing.
    EventCode=4624
    EventType=0
    Type=Information
    ComputerName=EXCHMB01.Computer.Access.Login
    TaskCategory=Logon
    OpCode=Info
    RecordNumber=1102979350
    Keywords=Audit Success
    Message=An account was successfully logged on. Subject: Security ID: S-1-0-0 Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Impersonation Level: Impersonation New Logon: Security ID: S-1-5-18 Account Name: EXCHMB01$ Account Domain: ACCESSLOGIN Logon ID: 0x7CC8B84E Logon GUID: {2906A915-43AE-E3E1-6663-03E8D5AF05C6} Process Information: Process ID: 0x0 Process Name: - Network Information: Workstation Name: - Source Network Address: removed/excluded Source Port: 61782 Detailed Authentication Information: Logon Process: Kerberos Authentication Package: Kerberos Transited Services: - Package Name (NTLM only): - Key Length: 0

The extracted fields on this event confirm the behavior: ComputerName = EXCHMB01.Computer.Access.Login (FQDN still present), host = wecprd01, forwarder = wecprd01, dvc = wecprd01, dvc_nt_host = wecprd01, sourcetype = WinEventLog:Security, source = WinEventLog:ForwardedEvents, index = main, linecount = 63, splunk_server = SPLUNKPRD01.
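A guess at the root cause, based only on the sample event: the FQDN-stripping regex requires exactly two dot-separated labels after the short name (\.\w+\.\w+$), but ComputerName=EXCHMB01.Computer.Access.Login has three, so neither RemoveFQDN-ComputerName nor Set-Host-By-ComputerName ever matches; that would leave the FQDN intact and leave host as the WEC server. A more tolerant version that captures everything up to the first dot, sketched rather than tested:

    # transforms.conf
    [Set-Host-By-ComputerName]
    # capture the short name regardless of how many domain labels follow
    REGEX = (?m)^ComputerName=([^\.\r\n]+)\.
    FORMAT = host::$1
    DEST_KEY = MetaData:Host

The same pattern change would apply to RemoveFQDN-ComputerName; note, though, that MetaData:ComputerName is not one of the standard DEST_KEY targets, so rewriting the ComputerName text inside the event itself may need a different approach (for example a SEDCMD in props.conf) if that part still matters after the host is corrected.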

Change field to arbitrary value following a regex match using props.conf and transforms.conf

I have two firewall devices that log their activities in different formats. I'm trying to create CIM-compliant logs. I want to have a field labeled "action" and I want to set that field to a value of "drop", "reject", or "accept" based on some regex. The values drop, reject, or accept are not found in the event itself. This is what I have so far, and it is not creating a field named action with the desired value.

props.conf:

    [host::192.168.1.1]
    TRANSFORMS-junos = junos_action_deny,junos_action_allow

transforms.conf:

    [junos_action_deny]
    REGEX = RT_FLOW_SESSION_DENY
    FORMAT = action::drop

    [junos_action_allow]
    REGEX = RT_FLOW_SESSION_CREATE
    FORMAT = action::allow

Example event:

    Jul 26 18:17:57 192.168.1.1 1 2017-07-26T18:17:57.909-07:00 fw3 RT_FLOW - RT_FLOW_SESSION_DENY [junos@2636.1.1.1.2.40 source-address="184.105.139.110" source-port="44147" destination-address="192.168.1.10" destination-port="69" service-name="junos-tftp" protocol-id="17" icmp-type="0" policy-name="default-deny" source-zone-name="dmz" destination-zone-name="clients" application="UNKNOWN" nested-application="UNKNOWN" username="N/A" roles="N/A" packet-incoming-interface="reth0.1311" encrypted="UNKNOWN" reason="policy deny"] session denied 184.105.139.110/44147->192.168.1.10/69 junos-tftp 17(0) default-deny dmz clients UNKNOWN UNKNOWN N/A(N/A) reth0.1311 UNKNOWN policy deny
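As written, the two transforms have a FORMAT but nowhere to write it: an index-time transform that creates a new indexed field needs WRITE_META = true. A sketch of that fix, plus a search-time alternative that avoids reindexing, both under the assumption that an indexed action field is really what is wanted (for CIM, a search-time calculated field or lookup is the more usual route):

    # transforms.conf: index-time version (runs on the indexer/heavy forwarder, new data only)
    [junos_action_deny]
    REGEX = RT_FLOW_SESSION_DENY
    FORMAT = action::drop
    WRITE_META = true

    [junos_action_allow]
    REGEX = RT_FLOW_SESSION_CREATE
    FORMAT = action::allow
    WRITE_META = true

    # props.conf: search-time alternative
    [host::192.168.1.1]
    EVAL-action = case(match(_raw, "RT_FLOW_SESSION_DENY"), "drop", match(_raw, "RT_FLOW_SESSION_CREATE"), "allow")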

Time Zone issue

Hi All, we have application logs configured to go to Splunk. When I search over the last 15 minutes there are no results, but when I search over the last hour it returns many events. Upon checking, I found that there is a time zone issue, and I have configured time zone settings for these logs in props.conf. Even after that, the issue persists. Could you please help me check what other issues there might be in this case and what configuration should be deployed? Thank you in advance.
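Two things usually explain this pattern: the TZ setting only applies to events indexed after the change (already-indexed events keep their old _time), and it only takes effect if it is deployed to the instance that parses the data (indexer or heavy forwarder) for the exact sourcetype, host, or source the events actually carry. It is also worth checking whether the raw timestamp carries its own offset, because a TIME_FORMAT with %z wins over TZ. A minimal sketch, with the stanza name and zone as placeholders:

    # props.conf on the indexer or heavy forwarder (illustrative values)
    [my_app_logs]
    TZ = Asia/Kolkata

Comparing _time with _indextime on a few fresh events (for example, | eval lag=_indextime-_time) is a quick way to confirm whether the offset matches the expected time zone difference.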

Setting up props.conf and transforms.conf log filtering in Splunk Web. Can you point me to the relevant documentation?

I apologize if this is a very obvious question, but I'm completely lost. A project I am working on is to filter the logs coming in so they are never indexed in the first place. Using a question here, I was able to make these props.conf and transforms.conf files:

    # props
    [sourcetype]
    TRANSFORMS-set=setnull,setparsing

    # transforms
    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = regex_to_match_at_front_(code1|code2|code3)
    DSET_KEY = queue
    FORMAT = indexQueue

Here is the question I got this from: https://answers.splunk.com/answers/185495/how-to-configure-propsconf-and-transformsconf-to-s-1.html

Now I need to test this. Since I have to do this via Splunk Web, I'm trying to set up these options in a new sourcetype via the advanced options. I know I can set up a sourcetype to properly parse the events, but whenever I add some of this code as new settings, Splunk automatically deletes them (as with TRANSFORMS-set=setnull,setparsing) or replaces them (I obviously cannot have two REGEX settings). Basically, is it even possible to set this up in Splunk Web? If so, can you link me to the relevant documentation? I don't have access to the backend, so I need to decide how to proceed. Thank you. P.S. If anyone has another way of whitelisting events, I'd like to hear about it.
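Independent of the Splunk Web question, one detail in the quoted transforms.conf stands out: DSET_KEY in the setparsing stanza should be DEST_KEY, otherwise nothing is ever routed back to the indexQueue. The corrected pair, kept otherwise identical to the original:

    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = regex_to_match_at_front_(code1|code2|code3)
    DEST_KEY = queue
    FORMAT = indexQueue

Order also matters: in props.conf, setnull must be listed before setparsing (as it is above) so that the whitelist match wins for the events you want to keep.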

How to exclude Windows events for Splunk processes before indexing?

Hi, I see a lot of events in the Windows logs with processes such as splunk-regmon, powershell, etc. Is there a way to exclude these processes before indexing? The message contains:

    C:\Program Files\SplunkUniversalForwarder\bin\*
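One way to do this at the parsing layer is a nullQueue transform keyed on the process path; the sourcetype stanza below assumes these are standard Windows security events, so adjust it to whatever sourcetype the events actually carry:

    # props.conf on the indexer or heavy forwarder
    [WinEventLog:Security]
    TRANSFORMS-dropsplunkbin = drop_splunk_bin

    # transforms.conf
    [drop_splunk_bin]
    REGEX = (?m)C:\\Program Files\\SplunkUniversalForwarder\\bin\\
    DEST_KEY = queue
    FORMAT = nullQueue

Depending on the Splunk version, the Windows event log input also supports blacklisting in inputs.conf on the forwarder, which keeps the events from ever leaving the host; check the inputs.conf documentation for the WinEventLog blacklist syntax before relying on it.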

I want to make a field the index time

I have a CSV file that contains a timestamp; name, create_date, duration, and distance are the field names, and the sourcetype is example. I want to make that field the indexing time. What changes do I have to make in the config files?
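If the goal is to have the event timestamp (_time) taken from one of the CSV columns, the structured-data settings in props.conf handle that; the sketch below assumes create_date is the column that should drive _time and that its format is known (both assumptions):

    # props.conf on the instance that reads the file
    [example]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 1
    TIMESTAMP_FIELDS = create_date
    # adjust to the actual format of create_date
    TIME_FORMAT = %Y-%m-%d %H:%M:%S

Because structured-data parsing happens on the forwarder, this stanza belongs on the universal forwarder (or wherever the file is read), not only on the indexer.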

How to prevent Splunk from merging multiple JSON strings into a single event?

Example raw data:

    {"field1": "value1", "field2": "value2", ..., "string": "1" }
    {"field1": "value1", "field2": "value2", ... ,"string":"2"}
    {"field1": "value1", "field2": "value2", ..., "string":"3" }
    {"field1": "value1", "field2": "value2", ..., "string":"4" }

Splunk merges several of the raw data strings into a single event; as a result you get 2 events.

Event 1:

    {"field1": "value1", "field2": "value2", ..., "string": "1" }
    {"field1": "value1", "field2": "value2", ... ,"string":"2"}
    {"field1": "value1", "field2": "value2", ..., "string":"3" }

Event 2:

    { [-] field1: value1 field2: value2 ... string: 4 }

So about 80% of events look like event 1 in the example, but some events arrive as a single row and are parsed as JSON. We are using an SHC and splunkforwarder for data delivery, both version 6.5.5. I have tried to set up props.conf on the splunkforwarder (in the app that handles the JSON log files) and tried different LINE_BREAKER values:

    1. (\})
    2. \}
    3. "(^)\{"

Current props.conf:

    [json-logs]
    SHOULD_LINEMERGE = false
    KV_MODE = json
    LINE_BREAKER = (\})
    TIME_PREFIX = \"time\": \"

I have the same problem not only with JSON-format logs; it looks like the props.conf line-breaking options do not work at all. What am I doing wrong?
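The most common reason line-breaking props appear to be ignored in this kind of setup is that they are deployed only to the universal forwarder: a UF does not parse events (outside of structured-data/INDEXED_EXTRACTIONS inputs), so LINE_BREAKER, SHOULD_LINEMERGE, and TIME_PREFIX need to live on the indexers (or a heavy forwarder in the path), while KV_MODE = json belongs on the search heads. A sketch of the indexer-side stanza, assuming each JSON object starts on its own line as in the sample:

    # props.conf on the indexers
    [json-logs]
    SHOULD_LINEMERGE = false
    # break at newlines that are immediately followed by the next JSON object
    LINE_BREAKER = ([\r\n]+)(?=\{)
    TIME_PREFIX = \"time\": \"

Only data indexed after the change will be affected; already-merged events stay as they are.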

How to fix my universal forwarder configurations so that Splunk only forwards the data I want to monitor to a third-party system?

I am trying to forward to a third-party system from a universal forwarder. I have tried two approaches. In both cases I am receiving a lot of unnecessary data on the third-party end. It looks like Splunk is forwarding not only the file that I am monitoring but internal logs as well. What can I do to fix this? I am attaching conf files for both.

**Approach 1:** use props, transforms, and outputs

**props.conf**

    [source::/home/abc/splunk-test/test.txt]
    TRANSFORMS-routing=monitoring

**transforms.conf**

    [monitoring]
    REGEX=.
    DEST_KEY=_TCP_ROUTING
    FORMAT=monitoring_tcp

**outputs.conf**

    [tcpout]
    defaultGroup=group_a

    [tcpout:group_a]
    disabled=true

    [tcpout:monitoring_tcp]
    sendCookedData=false
    server=x.x.x.x:514

**Approach 2:** use inputs.conf and outputs.conf (I deleted everything from props and transforms)

**inputs.conf**

    [default]
    host=abc

    [monitor:///home/abc/splunk-test/test.txt]
    _TCP_ROUTING=monitoring_tcp

**outputs.conf**

    [tcpout]
    defaultGroup=group_a

    [tcpout:group_a]
    disabled=true

    [tcpout:monitoring_tcp]
    sendCookedData=false
    server=x.x.x.x:514
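For what it is worth, Approach 1 cannot work on a universal forwarder, because a UF does not run index-time TRANSFORMS (it does not parse events), so the _TCP_ROUTING override in transforms.conf is never applied; Approach 2 (routing in inputs.conf) is the supported mechanism. The extra traffic is most likely the forwarder's own logs from $SPLUNK_HOME/var/log/splunk, which are monitored and forwarded by default. A sketch of a tighter configuration, under the assumption (worth verifying against the outputs.conf documentation for your version) that data with no default group and no explicit route is simply not forwarded:

    # outputs.conf: no defaultGroup, so only explicitly routed data should leave
    [tcpout:monitoring_tcp]
    sendCookedData = false
    server = x.x.x.x:514

    # inputs.conf
    [monitor:///home/abc/splunk-test/test.txt]
    _TCP_ROUTING = monitoring_tcp

The forwardedindex.* whitelist/blacklist settings under the global [tcpout] stanza are the other lever for keeping internal indexes from being forwarded; see outputs.conf.spec for their defaults.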

How can I search all the XML nested data?

Dear all, I need to search all XML tagged data, including nested data, but I only get the first value with my search command. Please help me understand how I can search all the XML data. The Splunk version is 6.5.3. Here are the details: (1) source XML file, (2) props.conf, (3) transforms.conf, (4) search SPL, (5) search result (screenshot).

(1) Source XML file (Samle_XML_for_answers.xml):

    12345ABC1234567890C12345Contract800Product0Base4500Exchange1000Cabling10400

(2) props.conf:

    [(mysourcetype)]
    CHARSET = SJIS
    NO_BINARY_CHECK = 1
    pulldown_type = 1
    REPORT-xmlkv = xmlkv-sample

(3) transforms.conf:

    [xmlkv-sample]
    REGEX = <([^\s\>]*)[^\>]*\>([^<]*)\<\/\1\>
    FORMAT = $1::$2

(4) Search SPL:

    source="Samle_XML_for_answers.xml"
    | table Businessname, Contractcd, ReceptNumber, Ratingname, Aprice, Count, PreviousALService
    | transpose

(5) Search result (screenshot): ![alt text][1]

[1]: /storage/temp/208747-search-result.png
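With a REPORT-based transform like this, a repeated tag only yields its first value unless the transform is told to keep every match; MV_ADD turns the field into a multivalue field containing all matches. A sketch, identical to the transform above apart from that one line:

    # transforms.conf
    [xmlkv-sample]
    REGEX = <([^\s\>]*)[^\>]*\>([^<]*)\<\/\1\>
    FORMAT = $1::$2
    MV_ADD = true

Setting KV_MODE = xml in the props.conf stanza is another option worth trying for XML events.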

Not able to extract _raw data using props.conf and transforms.conf

Hello Splunk Gurus, I'm extracting data from a database input (using Splunk DBX 3.1.0) and sending it to the index "my_index". When I search from Splunk I see the following output.

**Splunk search:**

    index=my_index sourcetype=my_dbx_st source=test_tbl_dbx31_input | table _raw

**Output:**

    2017-08-01 11:01:01.509, access_time="2017-03-30 6:44:16.0", process_id="PROC7678", internal_id="2436", internal_name="Test_Reports", user_id="487657"

    access_time = access_time="2017-03-30 6:44:16.0"
    process_id = process_id="PROC7678"
    internal_id = internal_id="2436"
    internal_name = internal_name="Test_Reports"
    user_id = user_id="487657"

**props.conf**

    [my_audit]
    SHOULD_LINE_MERGE=false
    KV_MODE=auto
    REPORT-my_audit_extract=my_audit_extractions

**transforms.conf**

    [my_audit_extractions]
    DELIMS = ","
    FIELDS = default_time, access_time, process_id, internal_id, internal_name, user_id

**Problem:** I'm not able to extract the _raw data properly using props.conf and transforms.conf here. Also, you will notice that I had to use an extra field "default_time" in the FIELDS section because that field value gets populated automatically as the first system field. Am I missing any key properties in props.conf or transforms.conf to get my fields extracted properly as follows?

    access_time="2017-03-30 6:44:16.0" process_id="PROC7678" internal_id="2436" internal_name="Test_Reports" user_id="487657"

Thanks for your help!
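Two details in the quoted props.conf stand out, offered as likely suspects rather than certainties: the stanza name [my_audit] does not match the sourcetype used in the search (my_dbx_st), so none of those settings apply to these events, and SHOULD_LINE_MERGE is not a valid setting name (it is SHOULD_LINEMERGE). Also, since the payload is already key="value" pairs, KV_MODE = auto on its own will normally extract them, so the DELIMS-based transform (which is what forces the extra leading default_time column) may not be needed at all. A sketch:

    # props.conf (KV_MODE is evaluated at search time; SHOULD_LINEMERGE at parsing time)
    [my_dbx_st]
    SHOULD_LINEMERGE = false
    KV_MODE = auto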

How to reset 'props.conf' to defaults w/o reinstalling Splunk?

Hi all, I have Splunk on Windows 10. I recently fiddled with some properties in "C:\Program Files\Splunk\etc\system\default\props.conf" but later restored them to the defaults (or so I thought; I may well have forgotten to reset some changes). Now I have a problem with the File Integrity Check: the check result for props.conf is "differs". I'd like to reset the file to defaults, ideally without reinstalling Splunk, but maybe I should just ignore the notification? Many thanks, Tim
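For what it's worth, files under etc\system\default are shipped with Splunk and tracked by the integrity check, so the safe pattern is never to edit them at all; overrides go into etc\system\local (or an app's local directory), which the integrity check does not flag. A minimal sketch of that layout, with a hypothetical sourcetype name:

    # C:\Program Files\Splunk\etc\system\local\props.conf : overrides live here,
    # never in ...\etc\system\default\props.conf (my_sourcetype is just a placeholder)
    [my_sourcetype]
    TIME_FORMAT = %Y-%m-%d %H:%M:%S

To get the default file back to stock without a full reinstall, one option is to copy the pristine props.conf from the installation package (or from another untouched installation) of the exact same Splunk version over the edited file.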

Extract fields from JSON array with multivalue and sub-array

Here is my sample data { "applications": [ { "id": 2537302, "name": "addressdb_prod", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:52+00:00", "application_summary": { "response_time": 5.04, "throughput": 32.3, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 2, "instance_count": 2 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 2536708, 2561561 ], "servers": [ 25610567, 22686416 ], "application_hosts": [ 2536699, 2561553 ] } }, { "id": 2537304, "name": "addressdb_trn", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:47+00:00", "application_summary": { "response_time": 4.93, "throughput": 32.0, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 2, "instance_count": 2 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 2536709, 2561556 ], "servers": [ 25610567, 22686416 ], "application_hosts": [ 2536699, 2561553 ] } }, { "id": 2268424, "name": "All unnamed", "language": "dotnet", "health_status": "unknown", "reporting": true, "last_reported_at": "2017-08-03T10:41:52+00:00", "application_summary": { "response_time": 5.31, "throughput": 276.0, "error_rate": 0.0, "apdex_target": 0.5, "apdex_score": 1.0, "host_count": 7, "instance_count": 17 }, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [ 38654227, 15703550, 47088155, 45753664, 19172730, 2561560, 19335897, 3249555, 14322182, 14322163, 3249553, 2536710, 2561555, 32941739, 32941738 ], "servers": [ 14652052, 13742329, 47086419, 45753649, 22686416, 2512533, 25610567 ], "application_hosts": [ 19172728, 2545902, 47088154, 45753663, 2561553, 2474911, 2492596, 2536699 ] } }, { "id": 2569403, "name": "allaboutme_dev", "language": "dotnet", "health_status": "gray", "reporting": false, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [], "servers": [], "application_hosts": [] } }, { "id": 17088365, "name": "api_cop_dev_emos", "language": "dotnet", "health_status": "gray", "reporting": false, "settings": { "app_apdex_threshold": 0.5, "end_user_apdex_threshold": 7.0, "enable_real_user_monitoring": true, "use_server_side_config": false }, "links": { "application_instances": [], "servers": [], "application_hosts": [] } } ], "links": { "application.servers": "/v2/servers?ids={server_ids}", "application.server": "/v2/servers/{server_id}", "application.application_hosts": "/v2/application/{application_id}/hosts?ids={host_ids}", "application.application_host": "/v2/application/{application_id}/hosts/{host_id}", "application.application_instances": "/v2/application/{application_id}/instances?ids={instance_ids}", "application.application_instance": "/v2/application/{application_id}/instances/{instance_id}", "application.alert_policy": "/v2/alert_policies/{alert_policy_id}" } } I tried to follow the link [https://answers.splunk.com/answers/289520/how-to-split-a-json-array-into-multiple-events-wit.html][1] [1]: 
https://answers.splunk.com/answers/289520/how-to-split-a-json-array-into-multiple-events-wit.html

with these configs in my props.conf file:

    [json_no_timestamp:nrc]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=((?
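For reference, the general approach in the linked answer boils down to breaking the applications array into one event per element at parsing time. A sketch of that idea, not a tested configuration (the envelope text before the first element and the trailing links object would still need separate handling, for example with a SEDCMD):

    # props.conf on the indexer or heavy forwarder
    [json_no_timestamp:nrc]
    SHOULD_LINEMERGE = false
    # end an event at the "}," between array elements and start the next one at the following "{"
    LINE_BREAKER = \}(,)\s*\{\s*"id"
    TRUNCATE = 0

Because these are parsing-time settings, they have to live on the indexers or a heavy forwarder, not on the universal forwarder.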

