RPA Gets Smarter – Ethics and Transparency Should Be Top of Mind

The first incarnations of robotic process automation (RPA) technologies followed simple, fixed rules. These systems were similar to user-interface testing tools: instead of a human operator clicking on areas of the screen, the software (or ‘robot’, as it became known) would do so in their place. This freed users from time spent on very low-level tasks such as extracting content from the screen, copying and pasting, and so on.

While basic in functionality, these early RPA implementations brought clear speed and efficiency benefits. The tools evolved to encompass basic workflow automation in subsequent years, but the process was rigid and had limited applicability across the enterprise.

Shortly after 2000, automation companies such as UiPath, Automation Anywhere, and Blue Prism (some under different names in their initial incarnations) were founded. With a clear focus on automation, these companies began to make significant advances in the enterprise automation space.

RPA gets smarter

Over the years, the functionality of RPA systems has grown significantly. They are no longer the rigid tools of their early incarnations, but offer much smarter process automation. UiPath, for example, lists 20 automation products on its website in groups like Discover, Build, Manage, Run & Engage. Its competitors have similarly full offerings.

The use cases for robotic process automation are now wide and varied. For example, with built-in smart technology, instead of simply clicking on screen regions, systems can now automatically extract the content of invoices (or other customer-submitted documents) and convert it into structured, database-ready data. These smart features may well be powered by forms of artificial intelligence, albeit hidden under the hood of the RPA app itself. Automation Anywhere has a good example of this exact use case.
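To make that extraction step concrete, here is a minimal sketch of turning semi-structured invoice text into a structured record. The field names and regex patterns are invented for illustration only; a real RPA product would typically use OCR plus trained extraction models rather than hand-written patterns like these.

```python
import re
from dataclasses import dataclass

# Hypothetical illustration only: real RPA tools typically use OCR plus
# trained extraction models, not hand-written patterns like these.

@dataclass
class InvoiceRecord:
    invoice_number: str
    total: float

def extract_invoice(ocr_text: str) -> InvoiceRecord:
    """Pull structured fields out of raw invoice text with regexes."""
    number = re.search(r"Invoice\s*#?\s*([A-Z0-9-]+)", ocr_text)
    total = re.search(r"Total[:\s]*\$?([\d,]+\.\d{2})", ocr_text)
    if not number or not total:
        raise ValueError("could not locate expected invoice fields")
    return InvoiceRecord(
        invoice_number=number.group(1),
        total=float(total.group(1).replace(",", "")),
    )

print(extract_invoice("Invoice #INV-1042 ... Total: $1,250.00"))
```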

Given the breadth of use cases that RPA technologies now address in enterprise organizations, it is difficult to see a product development and expansion path that does not add more artificial intelligence capabilities to the RPA tools themselves. While still delivered in the robotic process automation software package, this functionality is likely to go from being hidden under the hood and driving specific use cases (such as content extraction) to being exposed in its own right, in a user-friendly way.

Blurring of AI and RPA

RPA vendors will compete with artificial intelligence vendors that sell automated machine learning software to the business. These tools, called AutoML, allow users with little or no data science experience (often referred to as citizen data scientists) to create custom AI models with their own data. These models are not restricted to specifically defined use cases, but can be anything that business users want (and have the supporting data) to build.
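AutoML products differ widely, but the core idea is an automated search over candidate models and hyperparameters. Since the article names no specific tool, the sketch below uses scikit-learn’s grid search over a single model family purely as a stand-in for that idea.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a dataset a citizen data scientist might upload.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# An AutoML product automates the search over model families and
# hyperparameters; a grid search over one family is the simplest analogue.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```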

Using our example above, once the invoice data has been extracted, why not let the customer create a custom AI model that ranks those invoices by priority, without integrating or connecting to an additional third-party AI tool? This is the next logical step in the RPA market; some leaders in the space already offer some of these features.

This blurring of the lines between robotic process automation and artificial intelligence is particularly topical right now because, alongside the specialized RPA vendors, established tech companies like Microsoft are bringing their own low-code RPA solutions to market. Taking Microsoft as an example, it has a long history with artificial intelligence. Through Azure, it offers many different AI tools, including tools for creating custom AI models and a dedicated AutoML solution. Most relevant is its drive to combine its products into unique value propositions. In our context here, that means Azure AI and low-code RPA technologies are likely to be closely aligned.

The evolving debate on the ethics of AI

Evolving at the same time as RPA and AI technologies are the discussions of, and in some jurisdictions regulations on, the ethics of AI systems. Valid concerns are being raised about the ethics of AI and the diversity of the organizations that build it.

In general, these discussions and regulations are intended to ensure that artificial intelligence systems are built, implemented and used in a fair, transparent and responsible manner. There are critical organizational and ethical reasons to ensure your AI systems behave ethically.

When building systems that operate on data that represents people (such as in human resources, finance, healthcare, insurance, etc.), the systems must be transparent and unbiased. Even beyond use cases built on people data, organizations now demand transparency in their AI so that they can effectively assess the operational risks of deploying that AI in their business.

A typical approach is to define the company’s ethical principles, create or adopt an AI ethics framework, and continually evaluate the company’s AI systems against that framework and those principles.

As with RPA, AI model development can be outsourced to third-party companies. Evaluating the transparency and ethics of these systems then becomes even more important, given the lack of information about how they were built.

However, most public and organizational discussions on ethics tend to take place only in the context of artificial intelligence (which is often the focus of media headlines). For this reason, developers and users of RPA systems may feel that these ethical concerns do not apply to them, as they “only” work with process automation software.

Automation can affect people’s lives

If we go back to our invoice processing example used earlier, we saw the potential for a custom AI model within the RPA software to automatically prioritize invoices for payment. Only a minor technological change would be needed to shift this use case into healthcare, prioritizing health insurance claims rather than invoices.

RPA technology could still extract data from claim documents automatically and translate it into a structured format. The company could then train a custom classification model (using historical claim data) to prioritize payments or, conversely, flag payments to be suspended pending review.
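A minimal sketch of such a claims model follows. It uses synthetic stand-ins for historical claim features and past pay/hold decisions; every name and threshold here is hypothetical, chosen only to show the shape of the decision being automated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for historical claim features (e.g. claim amount,
# days since service) and past pay/hold decisions; all invented for the sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Each score below is a decision that touches a real person's claim,
# which is exactly why review thresholds and audits matter here.
hold_probability = model.predict_proba(X_test)[:, 1]
flag_for_review = hold_probability > 0.8  # illustrative threshold
print(f"{flag_for_review.mean():.0%} of claims flagged for manual review")
```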

Here, however, the ethical concerns should be very apparent. The decisions made by this model, embedded in the RPA software, will directly affect people’s health and finances.

As this example shows, what may appear to be relatively benign automation software is actually evolving to reduce (or remove completely) the human in the loop for critical decisions that affect people’s lives. The technology may or may not be explicitly labeled and sold as artificial intelligence; either way, notions of ethics should still be a priority.

We need a different lens

It may be best to view these ethical concerns not through the lens of artificial intelligence, but through a lens focused on automated algorithmic decision-making.

The reality is that it is not just AI technology making decisions that should be of concern, but any automated approach that lacks sufficient human oversight, whether it is powered by a rules-based system, robotic process automation, shallow machine learning, or complex deep learning, for example.

In fact, if you look at the UK’s recently announced Ethics, Transparency and Accountability Framework for Automated Decision-Making, which is aimed at the public sector, you will see that it focuses on automated decision-making. From the guidance document: “Automated decision-making refers to both solely automated decisions (no human judgement) and automated assisted decision-making (assisting human judgement).”

Similarly, the GDPR has been in force in the European Union for some time, establishing clear provisions on the rights of individuals in relation to automated individual decision-making. The European Commission gives the following definition: “Decision-making based solely on automated means happens when decisions are taken about you by technological means and without any human involvement.”

Finally, in 2020 the state of California proposed the Automated Decision Systems Accountability Act, with similar goals and definitions. Within this act, artificial intelligence (though not, explicitly, robotic process automation) is covered: “‘Automated decision system’ or ‘ADS’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts persons,” with evaluation of accuracy, fairness, bias, discrimination, privacy, and security. It is therefore clear that the principle of a more general lens is recognized in public policy making.

Companies should also apply governance to RPA

As organizations implement teams, processes, and technologies to govern the development and use of AI, these must be extended to include all automated decision-making systems. To reduce the burden and allow operation at scale within large organizations, there should not be one set of processes and tools for RPA and another for AI (or indeed, one for every AI model).

This would lead to a huge manual process to collect the relevant information, make this information comparable, and map it to the chosen process framework. Instead, a unified approach should allow for a common set of controls leading to informed decision-making and approvals.
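To illustrate what a common set of controls might look like, here is a hypothetical sketch of a single registry schema covering RPA bots and AI models alike. All field names and rules are invented for this example rather than drawn from any real governance product.

```python
from dataclasses import dataclass

# Hypothetical schema: every field name and rule below is invented to
# illustrate one registry spanning RPA bots, rules engines, and AI models.

@dataclass
class DecisionSystemRecord:
    name: str
    system_type: str        # "rpa_bot", "rules_engine", "ml_model", ...
    decision_impact: str    # plain-language statement of who is affected
    human_oversight: str    # "none", "review_on_flag", "full_review"
    bias_assessment_done: bool

registry = [
    DecisionSystemRecord("invoice-extractor", "rpa_bot",
                         "affects supplier payments", "review_on_flag", True),
    DecisionSystemRecord("claims-prioritizer", "ml_model",
                         "affects patient reimbursements", "none", False),
]

# One common control applied uniformly, regardless of system type.
for record in registry:
    compliant = record.bias_assessment_done and record.human_oversight != "none"
    print(f"{record.name}: {'cleared' if compliant else 'needs review'}")
```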

This should not be at odds with the adoption of RPA or AI either; clear guidelines and approvals allow teams to move forward with implementation, knowing the limits within which they can operate. When the more general lens is used, rather than one aimed only at AI, the implication becomes clear: ethics must be a priority for developers and users of all automated decision-making systems, including robotic process automation, not just AI.

Image credit: Pixabay; Pexels

Stuart Battersby

CTO of Chatterbox Labs

Dr Stuart Battersby is a technology leader and CTO of Chatterbox Labs. With a PhD in Cognitive Sciences from Queen Mary University of London, Stuart now leads all research and technical development for Chatterbox’s AIMI ethical AI platform.
