Firstly, I do not have a v.3.4.x instance as this version is no longer supported (see the support lifecycle page). You should upgrade to a supported version at the earliest opportunity.
The data flow logic can leverage Python code, but it is more involved than using a single node. For example:

The 'Go' Create Data node is configured to output the 'Fieldxyz' field.

The 'Test Data' node represents your source data (per the input.xlsx file). The Aggregate node is configured to count the number of input data records:

The Lookup node is left at its default configuration - meaning the fields from the two inputs will be merged.
The 'Add _IsHeader field' Transform node is configured to add the '_IsHeader' field to the input data set.
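The node's configuration is not shown here, but the tagging logic can be sketched in plain Python (the field names and values below are assumptions for illustration, not taken from the attached flow):

```python
# Illustrative sketch only - not the Transform node's actual script.
# Data rows are tagged _IsHeader = False; the separately generated
# header rows would carry _IsHeader = True, so the two record sets
# can still be told apart after the Cat node unions them.

data_rows = [
    {"Name": "Alpha", "Value": 1},   # hypothetical source records
    {"Name": "Beta", "Value": 2},
]

tagged_rows = [dict(row, _IsHeader=False) for row in data_rows]
```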

The 'Generate Header rows' Transform node is configured to create the first two rows in the output file:


The Cat node is configured to produce the union of the two input data sets:


The 'Create Output Records' Transform node is configured to generate the data that will be written to the output file:
#### ConfigureFields Script

#### ProcessRecords Script

The above two scripts are included in the 'Create Output Records Transform Node Scripts.txt' file below.
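The attached scripts are not reproduced here, but the general idea - collapsing each record into a single output field so that the real delimiters are embedded in the data itself - can be sketched in plain Python (the field names and header text below are hypothetical):

```python
# Illustrative sketch only - not the attached ConfigureFields/ProcessRecords
# scripts. Each record is collapsed into one output field: header rows pass
# through verbatim, while data rows are joined with the intended delimiter
# (a comma here) so the finished line can be written as a single field.

def build_output_record(record):
    if record["_IsHeader"]:
        return record["HeaderText"]                     # pre-built header line
    return ",".join(str(v) for v in record["values"])   # data line

rows = [
    {"_IsHeader": True,  "HeaderText": "My Report Title", "values": None},
    {"_IsHeader": True,  "HeaderText": "ColA,ColB,ColC",  "values": None},
    {"_IsHeader": False, "HeaderText": None, "values": ["a", "b", "c"]},
]

output_lines = [build_output_record(r) for r in rows]
# output_lines == ["My Report Title", "ColA,ColB,ColC", "a,b,c"]
```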
The 'Write Output File' Output CSV/Delimited node writes the output data. It is configured with the output Filename.

The node is also configured with a custom FieldDelimiter - here it is set to the hex code for the ASCII 'BEL' character (0x07), but you could use any character that is guaranteed not to be present in the input data. The HeaderMode property is set to 'None' to suppress the writing of the field header record to the output file. The FileExistsBehavior property can also be set to 'Overwrite' if required.
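The effect of this single-field, BEL-delimited configuration can be approximated in plain Python using the csv module (the file name and data below are illustrative):

```python
import csv

# Illustrative sketch only - not the Output CSV/Delimited node itself.
# Because every record holds exactly one field (the fully formatted line),
# the BEL field delimiter (0x07) is never actually emitted, and the commas
# embedded in the data pass through untouched.
lines = ["My Report Title", "ColA,ColB,ColC", "1,2,3"]

with open("output.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\x07", quoting=csv.QUOTE_NONE,
                        lineterminator="\n")
    for line in lines:
        writer.writerow([line])  # single-field row -> no delimiter written
```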

The output file then contains the following when viewed in Notepad++:

For users of Analyze v.3.6.x / v.3.8.x, see the attached example data flow.
Attached files
Write_Custom_Delimited_File - 16 Dec 2021.lna
Create Output Records Transform Node Scripts.txt