You may have heard about the recent Executive Order on Artificial Intelligence and the creation of the AI.gov website. These actions by the Biden administration focus on AI as an overall technology. In this overview, we look at what they could mean for the security industry specifically.

The wording of this EO and related statements focuses mostly on generative AI; the term "AI-generated content" is used only twice, as is the term "surveillance". Further, the stated intent is primarily "to protect Americans from the potential risks of AI systems". To that end, the bulk of what is laid out in this EO and the related actions should have minimal impact on the security industry specifically, though a few areas are of particular interest, as outlined below.

In the section titled "Protecting Americans’ Privacy", the EO mentions federal support to accelerate the development and use of privacy-preserving techniques, though it gives no specifics on what that support might actually look like. The intent, though, is to preserve the privacy of training data. This could potentially impact AI developers of security products who use training video from public scenes or cameras, or training data that might otherwise contain enough detail to recognize specific persons. For the most part, this level of detail is not needed: most systems are trained to detect persons in a scene with limited detail, so a major impact would not be expected.
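As a rough illustration of why identifying detail is rarely needed in this kind of training data, the sketch below (Python with OpenCV) pixelates faces before frames would enter a training set. The detector choice and the function itself are purely illustrative assumptions, not any vendor's actual pipeline; the point is simply that a person-detection model can be trained on frames where facial detail has been destroyed up front.

```python
# A minimal sketch of one privacy-preserving preprocessing step: pixelating
# faces in video frames before they are added to a training set. Uses the
# Haar cascade bundled with OpenCV; a production pipeline would likely use
# a stronger detector, but the principle is the same.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame, block_size=8):
    """Return the frame with every detected face pixelated in place."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face = frame[y:y + h, x:x + w]
        # Downscale then upscale: fine detail is gone, coarse shape remains,
        # which is all a person detector typically needs.
        small = cv2.resize(face, (block_size, block_size),
                           interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```

Whether done this way, via blurring, or via synthetic data, the common thread is that the identifying detail can be removed before training ever starts.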

Under the "Advancing Equity and Civil Rights" section, the EO mostly focuses on applications of AI tools and technology as it relates to criminal justice, policing, benefits programs, and similar scenarios. While this shouldn't apply directly to security, the undertone is ensuring that AI products do not unfairly discriminate, exclude, or single out minorities or specifc groups of people.

AI in general has suffered from training datasets having inherent bias, in the sense of not representing factors such as race, gender, or other criteria equally across the dataset. This has typically not been a deliberate act, but rather a general challenge in the availability of training data.

By definition, a "minority" is a group that is statistically smaller, so a dataset of "people" is likely to contain fewer appearances of minorities. However, if we want all kinds of people to be detected with similar accuracy, we need to build datasets where minorities appear as often as "majorities", which can be costly and challenging in some cases. Stronger incentives for companies to ensure their training data is not introducing unintended biases should only be a good thing overall.
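To make that concrete, a naive version of the rebalancing idea might look like the sketch below (Python; the sample list and grouping function are hypothetical placeholders). Oversampling with replacement is only one option, and it has its own pitfalls compared with collecting new data, augmentation, or loss weighting, but it shows why a balanced dataset takes deliberate extra work.

```python
# A minimal sketch of naive rebalancing by oversampling: every group is
# resampled (with replacement) up to the size of the largest group.
import random
from collections import defaultdict

def oversample_to_balance(samples, group_of, seed=0):
    """samples: list of training examples; group_of: sample -> group label."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for s in samples:
        by_group[group_of(s)].append(s)
    target = max(len(group) for group in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        # Top up smaller groups by resampling with replacement.
        balanced.extend(rng.choices(group, k=target - len(group)))
    rng.shuffle(balanced)
    return balanced
```

Note that duplicating underrepresented samples does not add new information, which is exactly why building genuinely representative datasets tends to be expensive.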

Other sections focus on the use of AI in healthcare or education. While these are important topics in general, they have no direct bearing on security applications, beyond factors that already exist in products today.

The EO also calls for NIST-produced standards to test AI products and algorithms, and tasks DHS and DOE with evaluating AI threats to critical infrastructure, including through a new AI Safety and Security Board. Again, this relates very little to security and surveillance applications, where AI is primarily being used to detect the unauthorized presence of persons or vehicles.

Overall, the bulk of this EO will likely have little direct impact on the security industry as a whole. However, like many government directives concerning emerging technology, an area governments are usually not well suited to regulate, it will likely reach the security industry in the form of sporadic, unpredictable, and quasi-random requirements and disclosure requests from government-backed security project administrators.

It would not be surprising, for example, for a surveillance project being deployed in a government housing development to require that vendors provide proof of compliance with these directives, or a summary of test results against the NIST standards, which may not even exist yet or relate directly to security products.

One upside could be a reduction in the number of companies using the term "AI" to describe products or functionality whose actual use of AI is minimal or nonexistent. Companies heavily branded around "AI" may want to reconsider that approach.

Edge-based devices may have a slight advantage here, in the sense that they are typically marketed primarily as advanced detectors. With no need for cloud connectivity, and with the devices primarily acting as stand-alone units, there may be fewer concerns around data leakage, whether for privacy in general or for use in ongoing training. Cloud-based AI systems could eventually face heavier scrutiny, with vendors asked to submit statements or technical documents on exactly how incoming data is used for AI. To be clear, this is only speculation, but it would make sense for companies with cloud-based products to pay particularly close attention as this EO moves toward actual implementation and enforcement.

Lastly, some online discussions of this new EO have raised concerns that the testing and compliance criteria, which were undoubtedly influenced by some of the larger generative AI companies, are structured to make it harder for new AI companies to come to market. Putting together more representative training data and submitting to extensive testing could be costly or generally discouraging. These concerns seem reasonable, and it remains to be seen how this might impact new startups in this space. Again, though, this is unlikely to have a large impact on security applications directly, unless such a company is planning to finally make the Minority Report movie a reality.

The EO addresses working with "allies and partners" globally, listing some countries the Biden administration claims to already be working with. Notably absent, of course, are Russia and China, two countries that have their own bases of AI technology development. If these policies become overly limiting to startups in the US or its allies, making it difficult to compete with products coming out of China, for example, we may see them significantly watered down or abandoned altogether.

Overall, this should have minimal impact on security products and applications today and in the short term. The longer-term impact of this EO remains to be seen, and will most likely be heavily influenced by many factors outside the control of the US Government or its regulations.