Traceability has many driving factors. It is often discussed in relation to sustainability – helping organisations understand the impact of their initiatives and meet consumer demand for ethically sourced products. But traceability can also offer your organisation an operational advantage. It allows you to see your supply chain in more detail, better understand your counterparties, identify bottlenecks, manage risk, and satisfy the requirements of external stakeholders including legislators and trade finance partners.

Many commodity organisations are therefore investing in a variety of projects to improve traceability within their supply chains. In part one, we discussed why any commodity traceability project should start with an internal data audit. Today, we take the next step in this process, asking how you will manage the nuances of getting new data into your system.

Validity and verification

Checking data validity is the first step in any new data import and an important consideration when looking at new data sources. This includes ensuring the data is in a format that can be read, shared, and analysed, as well as confirming that the data itself is correct.

Most systems include some automated validity checks – we have probably all seen an error when copying percentages, dates, and times into Excel, or been frustrated when a webform wouldn’t accept special characters or values that are out of range.

But there is good reason for these data formats, and for keeping each piece of information in its own field: it allows you to process and report on the data accurately. User annoyance often arises because the system has not been set up with intuitive data formats – an important point to consider when defining them. The validity check is therefore an important stage in ensuring the data arrives in the format your system expects, so it can be processed without creating errors.
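As a rough illustration (the field names and rules here are invented, not taken from any particular system), a validity check of this kind might confirm that each field parses in the expected format and falls within range before a record is accepted:

```python
from datetime import datetime

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is valid."""
    errors = []
    # Dates must parse in the format the system expects
    try:
        datetime.strptime(record.get("delivery_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("delivery_date must be YYYY-MM-DD")
    # Percentages must be numeric and within range
    try:
        moisture = float(record.get("moisture_pct", ""))
        if not 0 <= moisture <= 100:
            errors.append("moisture_pct out of range 0-100")
    except ValueError:
        errors.append("moisture_pct must be a number")
    return errors

print(validate_record({"delivery_date": "2023-05-01", "moisture_pct": "12.5"}))  # []
```

Copying and pasting between spreadsheets bypasses checks like these, which is why validation at the point of import matters.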

Different commodity management systems offer different ways to bring external data in, and some provide better validation than others. For example, if your people are copying and pasting, or typing information between documents, systems, or spreadsheets, validation is lost. The only way to then ensure that the data logged matches the data you received is to have somebody else manually check it, which still does not remove the risk of errors, to say nothing of being tedious.

Uploading commodity data

Another option is to upload Excel files directly into the system. This ensures that the data you are using exactly matches the data you received, and can allow you to process much more data than would be practical otherwise. These uploads can take different forms in different commodity management systems – if they are available at all. At Gen10 we map the client’s file uploads to a template so that when they receive data from a counterparty, they simply select the correct template, and the data is uploaded to the correct fields for the relevant contract and virtual lot. Data validation is automatic – CommOS will stop the upload and notify the user if the file does not match the template’s data fields.
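The general pattern – this is an illustrative sketch, not CommOS's actual implementation – is to compare the uploaded file's columns against the fields defined in the chosen template, and block the import if they do not match:

```python
# Hypothetical template definitions: each template lists the data
# fields an uploaded file must contain to be accepted.
TEMPLATES = {
    "inspection_report": {"lot_id", "contract_ref", "moisture_pct", "inspection_date"},
}

def check_upload(template_name: str, file_columns: list[str]) -> tuple[bool, set[str]]:
    """Return (ok, problems): missing or unexpected columns block the upload."""
    expected = TEMPLATES[template_name]
    actual = set(file_columns)
    problems = (expected - actual) | (actual - expected)
    return (not problems, problems)

ok, problems = check_upload(
    "inspection_report",
    ["lot_id", "contract_ref", "moisture_pct", "inspection_date"],
)
print(ok)  # True
```

A mismatched file would return the offending column names in `problems`, so the user can be told exactly which fields to fix rather than discovering errors downstream.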

Discover how Gen10 use this data upload approach to manage complex HVI requirements in cotton trading.

APIs and traceability

An even more straightforward way to bring external data into your system is to use APIs. An API is an application programming interface, and it is used to connect different software systems so that data can automatically flow between them.
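As an illustration (the endpoint and payload shape below are invented), an API exchange typically returns structured data such as JSON, which the receiving system parses directly – no manual re-keying and no copy-paste errors:

```python
import json

# In practice this payload would arrive over HTTP from the counterparty's
# API, e.g. a GET request to a hypothetical https://api.example.com/shipments/SH-123
response_body = '{"shipment_id": "SH-123", "origin": "Abidjan", "net_weight_kg": 25000}'

shipment = json.loads(response_body)
print(shipment["origin"])  # Abidjan
```

Because both systems agree on the payload structure in advance, the data flows between them automatically and stays consistent on each side.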

Our daily online life is made easier by these APIs without us even noticing. Some examples include logging into an app or website using your Google or Facebook account, seeing the weather on Google or your PC, paying with PayPal, or using a price comparison site. And if you are using a good commodity management ecosystem, your CTRM and ERP are most likely connected by API too.

Whilst APIs are the most straightforward way for users to access data from other systems, they require the most technical knowledge and time to set up, and they need both parties to work together to solve the data challenge. They are ideal where data needs to be shared in or close to real time and updates occur regularly, such as between CTRM and ERP systems. Using an API in this situation improves how both systems operate, so software providers have an incentive to incorporate APIs.

Commodity traceability data depends on the use case

For traceability data, whether APIs are worthwhile depends on the individual use case. It may be worth the investment to gain live data from your biggest counterparties or inspection companies, or you may discover that you would need to connect to too many smaller systems, and it would take too long to realise a return on the investment. And since APIs require collaboration, counterparties also need to agree to the project.

A good commodity management system provider should be able to act as an advisory partner when you are looking to incorporate better traceability and more data management into your processes. They should be able to explain your options in detail. This should include any technical requirements and advice on the best course of action for importing new traceability data into the system, as well as improving traceability in the data you process.

Finding the most appropriate way to get data into your commodity management system can be one of the most difficult parts of the process, and will almost always take time. But as we will see in part three, it can lead to significant benefits – both in sustainability and operationally.
