Salesforce CLI has command options to export/import data hierarchically. For example, you can export a few accounts plus their respective opportunities in a tree structure and import them that way without having to manually rewire them with the new IDs in the target org.
Here are 4 SFDX commands you can run on Terminal/Command prompt to do just that:
sfdx force:auth:web:login -d -a myProductionOrg
sfdx force:data:tree:export -q "SELECT Id, Name, \
    ( SELECT Id, Name, StageName, AccountId, \
      CloseDate, Amount FROM Opportunities ) \
    FROM Account WHERE Name IN ( \
      'Nice Account with oppties to test with', \
      'Another good account' )" \
  -u myProductionOrg --outputdir ./TestDataFromProd --plan
sfdx force:auth:web:login -d -a mySandbox -r https://test.salesforce.com
sfdx force:data:tree:import -u mySandbox \
  --plan ./TestDataFromProd/Account-Opportunity-plan.json
The first command (sfdx force:auth:web:login) opens the browser for you to log into Production and authorize SFDX.
The second command (sfdx force:data:tree:export) exports the query result into files in the TestDataFromProd folder. It will create these 3 files:
Account-Opportunity-plan.json
Accounts.json
Opportunitys.json
SFDX will export up to 2000 records in one file per object plus a plan file to link accounts and opportunities. When importing, there is a limit of 200 records per file so you may have to split files (more on that later).
The third command (sfdx force:auth:web:login) opens the browser for you to log into the sandbox and authorize SFDX there.
The last command (sfdx force:data:tree:import) imports the files from the TestDataFromProd folder into the sandbox and wires up accounts with respective opportunities at the same time.
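The wiring works through reference IDs in the generated files. For this example, the generated plan file should look roughly like this (the exact contents depend on your data):

```json
[
  {
    "sobject": "Account",
    "saveRefs": true,
    "resolveRefs": false,
    "files": [ "Accounts.json" ]
  },
  {
    "sobject": "Opportunity",
    "saveRefs": false,
    "resolveRefs": true,
    "files": [ "Opportunitys.json" ]
  }
]
```

Inside Accounts.json, each record carries a referenceId attribute (e.g. AccountRef1), and each record in Opportunitys.json points back with "AccountId": "@AccountRef1". On import, SFDX saves the accounts first, remembers the new IDs (saveRefs), and then resolves the @ references in the opportunities (resolveRefs).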
In case you need to split the JSON file, I’ve adapted the Node script below:
if ( process.argv.length < 3 ) {
    console.log( 'Target JSON file is required.' );
    process.exit( 1 );
}

var fs = require( 'fs' );

var target = process.argv[ 2 ];
console.log( 'File: ' + target );

var newFileName = target.replace( '.json', '' );

fs.readFile( target, function ( err, data ) {
    if ( err ) throw err;

    // the export file has the shape { "records": [ ... ] }
    var records = JSON.parse( data ).records;
    var i = 1;

    // peel off up to 200 records at a time into numbered files
    while ( records.length !== 0 ) {
        var fileName = newFileName + '-' + i + '.json';
        fs.writeFileSync( fileName, JSON.stringify( { records: records.splice( 0, 200 ) } ) );
        console.log( fileName );
        i++;
    }
} );
You can run it like this on OS X:
node ./split-json.js Opportunitys.json
It will split Opportunitys.json into numbered files: Opportunitys-1.json, Opportunitys-2.json, Opportunitys-3.json, … each having up to 200 records.
Before importing, edit the plan file (Account-Opportunity-plan.json) to add the names of the new split files in place of the original Opportunitys.json file:
{
  "sobject": "Opportunity",
  "saveRefs": false,
  "resolveRefs": true,
  "files": [
    "Opportunitys-1.json",
    "Opportunitys-2.json",
    "Opportunitys-3.json", …
The structure of SObject trees accommodates up to 5 levels of nesting, so in principle we should be able to import accounts with opportunities and their line items.
However, a SOQL query can only bring one level of nested child records, that is, accounts and their opportunities, but not the line items in each opportunity, so that part still needs to be researched.
It is possible to create plugins for SFDX, so writing one looks like a viable way to build deeper SObject trees for import:
https://github.com/forcedotcom/sfdx-plugin-generate
Another possibility is to run a second query export (say, opportunities and their line items) and edit the first plan file to add more steps. The challenge there is writing both queries so they return the opportunities in the same order.
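Untested, but a hand-edited plan for that second approach might chain the extra files as additional steps. The Opportunity entries would need saveRefs set to true so that line items can resolve @ references back to them (file names below are hypothetical):

```json
[
  { "sobject": "Account",             "saveRefs": true,  "resolveRefs": false, "files": [ "Accounts.json" ] },
  { "sobject": "Opportunity",         "saveRefs": true,  "resolveRefs": true,  "files": [ "Opportunitys.json" ] },
  { "sobject": "OpportunityLineItem", "saveRefs": false, "resolveRefs": true,  "files": [ "OpportunityLineItems.json" ] }
]
```

The catch is that the line items from the second export reference that export's own opportunity reference IDs, which is exactly the ordering/matching problem mentioned above.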