Every new data source you load requires some amount of configuration, even if it is only to register the data source. You will need to ssh into the senzing/sshd container and run G2ConfigTool.py, located in the /opt/senzing/g2/python folder.
Adding your own data sources
Let's say you want to load some data you have mapped with the data source code "FOO". Here is how you would add it in the G2ConfigTool ...
- execute the G2ConfigTool.py program from /opt/senzing/g2/python
- execute the addDataSource command with the parameter FOO
- save the configuration
- quit out of the G2ConfigTool
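The steps above look something like this as an interactive session (the prompt and exact responses are illustrative and may differ by Senzing version):

```
cd /opt/senzing/g2/python
./G2ConfigTool.py

(g2cfg) addDataSource FOO
(g2cfg) save
(g2cfg) quit
```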
Within a few minutes, the AWS CloudFormation stack will recognize that the configuration has been updated, and you can begin loading data again.
Adding data from Senzing mappers on GitHub
If you are loading data with one of the Senzing mappers on GitHub, you will need to apply the associated configuration file. For example ...
- if you are loading Dun & Bradstreet data, you will need to execute all the commands located in https://github.com/Senzing/mapper-dnb/blob/master/dnb_config_updates.json
- if you are loading Dow Jones data, you will need to execute all the commands located in https://github.com/Senzing/mapper-dowjones/blob/master/dj_config_updates.json
You can either scp these config files into the container or cut and paste their contents into a file you create there. In either case, apply the config by typing ...
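The scp route might look like this (the user, host, and destination path here are placeholders, not values from this document):

```
scp dj_config_updates.json senzing@<your-sshd-host>:/tmp/
```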
./G2ConfigTool.py -f /<somepath>/dj_config_updates.json
This executes all the configuration commands, then saves and exits when done. You will notice that these files do more than just add data source codes. For instance, they add new features and attributes used by these more complicated data sources.
It is rare that you will need to add features and attributes for your own data sources. However, if you do, you can use the examples in the pre-built mappers on GitHub as a guide.
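To give a rough idea of what these config files contain: they are essentially lists of G2ConfigTool commands, applied in order. A hypothetical excerpt (the data source code, feature name, and JSON details below are illustrative, not taken from the actual mapper files):

```
addDataSource OWNERSHIP
addFeature {"feature": "COMPANY_ID", "behavior": "F1", "comparison": "EXACT_COMP"}
addAttribute {"attribute": "COMPANY_ID", "class": "IDENTIFIER", "feature": "COMPANY_ID"}
save
```

Running `./G2ConfigTool.py -f <file>` processes each command in sequence, so a single file can register data sources and define any extra features and attributes in one pass.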