I'm trying to index a CSV file, but it's a bit messy. I need to remove some entries that aren't formatted correctly, delete the original header row, and replace it with my own field names (hence `FIELD_NAMES`). The data sits on a UF and goes to my IDX. I'm not using `INDEXED_EXTRACTIONS` on the UF because the .csv file isn't clean/properly formatted, so I have the IDX doing the work.
*working*
- event breaking
- removing improperly formatted entries
- removed original header
*not working*
- my field names (nothing is parsed at search time; the fields from `FIELD_NAMES` are missing).
[Based on this][1], I'm thinking the old header isn't stripped until it reaches the typingQueue (where `TRANSFORMS` run), but my `FIELD_NAMES` is trying to be applied earlier at the aggQueue, so it never takes effect...but I'm not sure. How do I fix this?
**UF inputs**
```
[monitor://C:\test\testfile_*.csv]
index = main
sourcetype = test
crcSalt =
queue = parsingQueue
disabled = 0
```
**IDX props**
```
[test]
SHOULD_LINEMERGE = false
FIELD_NAMES = contentID,moduleName,levelName,date,loginID,last,first,var1,var2,var3,var4
FIELD_DELIMITER = ,
TIME_FORMAT = %F %T.%3Q
TZ = UTC
TRANSFORMS-null_hdr_and_nonevt = del_hdr,del_nonevt
```
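For reference, this is the mapping I expect `FIELD_NAMES` and `FIELD_DELIMITER` to produce at search time. A quick Python sketch of the intended behavior (the sample row is made up; my real data just follows this shape):

```python
import csv
import io

# Field names copied from my props.conf stanza.
FIELD_NAMES = ["contentID", "moduleName", "levelName", "date", "loginID",
               "last", "first", "var1", "var2", "var3", "var4"]

# Hypothetical event line, matching the column order above.
sample = "101,ModuleA,Level1,2024-01-15 09:30:00.123,jdoe,Doe,Jane,1,2,3,4\n"

# Map the comma-delimited values onto my field names.
reader = csv.DictReader(io.StringIO(sample), fieldnames=FIELD_NAMES, delimiter=",")
row = next(reader)
print(row["moduleName"])  # -> ModuleA
print(row["date"])        # -> 2024-01-15 09:30:00.123
```

Right now I get none of these fields on indexed events.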
**IDX transforms**
```
[del_hdr]
REGEX = ^ContentID.*
DEST_KEY = queue
FORMAT = nullQueue

[del_nonevt]
REGEX = ^(?!\d+,).*
DEST_KEY = queue
FORMAT = nullQueue
```
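To sanity-check the two regexes, here's a small Python sketch against made-up sample lines. This matches what I observe: the header and malformed lines are dropped, and real events survive (so the transforms themselves seem fine).

```python
import re

# Same patterns as in transforms.conf.
del_hdr = re.compile(r"^ContentID.*")    # matches the original header line
del_nonevt = re.compile(r"^(?!\d+,).*")  # matches lines NOT starting with digits + comma

# Hypothetical input lines for illustration.
lines = [
    "ContentID,ModuleName,LevelName",  # original header -> nullQueue
    "101,ModuleA,Level1",              # well-formed event -> kept
    "garbage line with no id",         # malformed entry -> nullQueue
]

# Keep only lines that neither transform would route to nullQueue.
kept = [l for l in lines if not (del_hdr.match(l) or del_nonevt.match(l))]
print(kept)  # -> ['101,ModuleA,Level1']
```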
[1]: https://wiki.splunk.com/Community:HowIndexingWorks