In 2017, Ed Felten, a professor of computer science and public affairs at Princeton, drew attention to the intellectual property dimension of explainable AI, wherein companies or governments may preclude the reverse engineering or disclosure of their algorithms, whether to maintain security, to preserve the exclusivity of a valuable property, or for reasons of national security.
Indeed, in seeking a balance between openness and security, AI research is sometimes ushered back into IP silos and away from useful public and peer scrutiny.
A review by the UK's Committee on Standards in Public Life contends that 'it may not be necessary or desirable to publish the source code for an AI system'. The review adds that '[the] continuous refinement of AI systems could also be a problem if the system is deployed in an environment where the user can alter its performance and does so maliciously'. The committee also found that the UK government is not adequately open about the extent to which machine learning is involved in public sector decisions.
In 2019, the research organisation OpenAI ironically decided not to release the full trained model of its text-generation system GPT-2, citing its ability to mimic human writing patterns as 'too dangerous'.
The impetus to withhold machine learning source code is at odds with the drive toward open standards and peer review in public–private collaboration, and it presents a further obstacle to explainable AI.
Additionally, publishing the algorithms obtained through machine learning limits AI deconstruction to a forensic, post facto approach, since such an algorithm is only the output of a machine learning process, not the neural network architecture or training process that produced it.