You could perhaps try using the Execute Data Flow node.
Your Data360 Metadata for Salesforce node and Transform node would be in the 'parent' data flow, and the Data360 Get for Salesforce node would be in the 'child' data flow. The Transform node would output one record per query, with a field holding the query text, and that output would be connected to the input of the Execute Data Flow node.
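For example, the Transform node's script could look something like the rough sketch below. The field names (ObjectName, FieldList, Query) are just placeholders for whatever your Data360 Metadata for Salesforce node actually outputs, and it assumes the metadata has already been collapsed to one record per object:

```python
# ConfigureFields property (sketch)
out1 += in1             # carry the metadata fields through to the output
out1.Query = unicode    # new field: one SOQL query per output record (use str on Python 3 based versions)

# ProcessRecords property (sketch)
out1 += in1
# 'ObjectName' and 'FieldList' are placeholder input field names
out1.Query = u"SELECT " + in1.FieldList + u" FROM " + in1.ObjectName
```

Each output record then drives one run of the child data flow, with the Query value being what the child's Data360 Get for Salesforce node picks up (see below).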
The Execute Data Flow node will pass any unused input field values into the child data flow as data driven run properties (see the 'Running with data driven run properties' section of the node's documentation).
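In other words, any field on the input that you don't otherwise use should appear in the child run as a run property of the same name. If I remember the property reference syntax correctly, the child's Data360 Get for Salesforce node could then pick up the query by putting a reference like the one below into whichever of its properties takes the SOQL query (do check the exact run property name and reference syntax against the documentation):

```
{{^Query^}}
```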
The Execute Data Flow node would also be configured to clean up the child runs that failed (the default clean up setting only removes the run data for successful runs).
Your main data flow could instantiate a .brd file (with the required metadata but no records) to hold the results of the queries and pass the filename into the child data flow. If the query is successful, the child data flow could then append the results of its run to the .brd file. Downstream logic can be made to wait until all of the records have been processed by adding a run dependency on the Execute Data Flow node.
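The filename can be passed the same way as the query: add another field in the parent Transform node and let it flow through as a data driven run property. Something like the sketch below; the path is purely illustrative, and in practice you would probably build it from a run property or a temporary directory rather than hard-coding it:

```python
# ConfigureFields (sketch) - in addition to the Query field above
out1.ResultsFile = unicode

# ProcessRecords (sketch)
out1.ResultsFile = u"C:/temp/salesforce_results.brd"   # same shared file for every child run
```

The child data flow would then reference the corresponding run property in whichever node it uses to append the results to the .brd file.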
The downside of this approach may be performance: you are running a separate child data flow for each record in the input data set, and there is some system overhead in starting each child run.