NVIDIA’s Larry Brown on Keys to Federal AI Operationalization


While most federal agencies are at least dipping their toes into the artificial intelligence (AI) pool, new MeriTalk research reveals that some are struggling to integrate the technology more broadly into operations.

Recent recommendations from the National Security Commission on Artificial Intelligence (NSCAI) underscore the importance of AI to government, urging federal leaders to double spending on AI research and development each year – reaching $32 billion by fiscal year 2026.

MeriTalk recently connected with Larry Brown, Ph.D., Head of Solutions Architecture, Public Sector, NVIDIA, to discuss how agencies can build on AI momentum to operationalize the technology, especially at the edge; what federal AI leaders are doing differently; and what it takes to move from proof of concept to large-scale deployment.

MeriTalk: Almost 90% of survey respondents said operationalizing federal AI is the cornerstone of digitally driven government. What makes AI so important?

Larry Brown: AI has become a spectacular enabling technology for so many core capabilities. The fundamental problem is that we don’t have enough people to do the critical analysis that is so important to federal missions – whether it’s analyzing full-motion video, monitoring cyber intrusions, or detecting fraud.

AI gives our software faster and more advanced reasoning capabilities. We can do things we couldn’t do before, such as dealing with cyber threats and taking action to prevent breaches in a timely manner.

MeriTalk: Almost two-thirds of those surveyed said their agency is struggling to take localized AI pilots and integrate them into overall IT operations. What are the biggest challenges associated with growing beyond the pilot phase, and how can agencies overcome these challenges?

Brown: There is a lot of discussion about how best to evolve pilots, so I’m not surprised by that finding. Teams need to be tactical when setting goals for an AI initiative.

The most successful projects are those with a very narrow scope and a clearly defined set of inputs, outputs, and measures of success. When planning a pilot, teams should include input from mission partners, infrastructure specialists, application software support teams, and people with data science expertise.

There are several other keys to operationalizing AI. Management support is very important, on both the business side and the IT side. And you need a technical champion – a senior leader who takes charge of any required technology transformation. AI often requires new expertise, with a strong focus on data science. We also generally see a requirement for new infrastructure capabilities that extend beyond traditional IT platforms.

MeriTalk: What steps should agencies consider when working to lay the groundwork for widespread AI integration?

Brown: There are several steps to consider. First, assess AI maturity: Are we doing traditional machine learning or advanced analytics? Do we have enough data scientists on staff? What AI, machine learning, and data science tools do we use today? Next, what types of applications are we working on? Are we working with video, audio, or cyber analytics?

Finally, agencies should assess their compute infrastructure for AI. This is probably the most overlooked requirement and consideration – it’s not just the application side of the equation. If engineers and data scientists are working on laptops from 10 years ago, they don’t have the right infrastructure. And if an organization has centralized IT resources, but those resources are a few years old and were designed to run web servers, that won’t work either.

Having high-performance computing capacity is one of the legs of the stool organizations must build to enable data scientists. That said, organizations early in their journey don’t necessarily need a full high-performance computing environment; they can instead grow accelerated computing capacity in step with their overall AI maturity.

MeriTalk: Almost half of those polled said they are doing AI at the edge – and the vast majority said the government should be doing more AI at the edge. Why is AI at the edge so important?

Brown: For the most part, the edge is the farthest place you could imagine deploying computing capability. The environment is often non-traditional (think of an airplane or a small vehicle) and/or rugged. There are generally constraints on space, power, and internet access.

The challenge for government is operating at the edge, in harsh and less-than-ideal environments. Examples of missions include providing healthcare to isolated communities, delivering humanitarian assistance, conducting covert operations, and much more.

The federal community is excited about AI because of its applications at the edge. But we must remember that significant development and R&D are required. AI at the edge depends on substantial back-end compute infrastructure in the data center core. Agencies need engineers, data scientists, and developers to build the initial algorithms, as well as cybersecurity staff and systems to keep data safe.

In my mind, organizations never really have a pure edge scenario or a pure data center scenario, at least not in the public sector. There is still a heavy reliance on round trips between the data center and the edge.

MeriTalk: Some of the biggest challenges with AI at the edge are data center security, power consumption/availability, and systems management expertise. How do you see agencies working to overcome these challenges?

Brown: There are a number of technologies around trust and authentication that are important to developing and deploying software at the edge. A root of trust, for example, anchors a chain of control between the data center and edge environments. A number of solutions from Dell Technologies and NVIDIA help agencies address these considerations and more.
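To make that concrete, here is a minimal sketch of one link in such a chain of trust: the edge device verifies that an artifact pushed from the data center is authentic before deploying it. Real systems use hardware-backed keys and full digital signature schemes; the shared HMAC key and artifact below are illustrative assumptions, not details of any NVIDIA or Dell Technologies product.

```python
# Minimal sketch: verify an artifact's integrity and origin before
# deployment at the edge. The shared key and payload are hypothetical.
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-device-enrollment"  # assumption: pre-shared key

def sign_artifact(payload: bytes) -> str:
    """Data center side: compute an HMAC-SHA256 tag over the artifact."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_artifact(payload: bytes, tag: str) -> bool:
    """Edge side: recompute the tag and compare in constant time,
    refusing to deploy anything that fails verification."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

model_blob = b"...model weights..."      # stand-in for a real artifact
tag = sign_artifact(model_blob)          # computed centrally
assert verify_artifact(model_blob, tag)  # checked on the device
```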

Organizations can now get high-performance computing capacity compressed into credit card-sized devices. NVIDIA manufactures Jetson GPU modules with extensive integrated security; these units enable secure AI and advanced computing at the edge. NVIDIA EGX is an accelerated computing platform that enables agencies to deliver end-to-end performance, management, and software-defined infrastructure on NVIDIA-Certified servers deployed in data centers, in the cloud, and at the edge.

For example, in collaboration with Dell Technologies, we built a centralized GPU cluster for the United States Postal Service (USPS). USPS developers are creating new algorithms for various purposes. Once the team develops the algorithms, they distribute them to 400 post offices across the continental United States. The GPUs are centrally managed, so no one has to visit the edge sites to perform updates – USPS can seamlessly roll out newer software versions.
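That workflow – develop centrally, distribute broadly, update without touching the sites – can be sketched as a small polling loop running at each edge site. Everything here (the registry URL, manifest format, and file layout) is a hypothetical illustration of the pattern, not actual USPS or NVIDIA tooling.

```python
# Hypothetical edge-site updater: poll a central registry and swap in
# newer model versions without an on-site visit.
import json
import shutil
import urllib.request
from pathlib import Path

REGISTRY = "https://models.example.gov/manifest.json"  # hypothetical URL
LOCAL_DIR = Path("/opt/edge-models")                   # hypothetical layout

def current_version() -> str:
    marker = LOCAL_DIR / "VERSION"
    return marker.read_text().strip() if marker.exists() else "0"

def check_and_update() -> bool:
    """Fetch the central manifest; download and activate the published
    model only when it is newer than the local copy."""
    with urllib.request.urlopen(REGISTRY) as resp:
        manifest = json.load(resp)
    # Naive string compare for brevity; a real scheme would parse versions.
    if manifest["version"] <= current_version():
        return False  # already up to date
    with urllib.request.urlopen(manifest["url"]) as src, \
            open(LOCAL_DIR / "model.new", "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Rename so inference code never sees a half-written file.
    (LOCAL_DIR / "model.new").replace(LOCAL_DIR / "model.bin")
    (LOCAL_DIR / "VERSION").write_text(manifest["version"])
    return True
```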

MeriTalk: Can you provide examples of how the USPS is working towards widespread AI integration?

Brown: One of our first projects with the USPS was to detect dangerous mail packages. The USPS has learned over time that if one “bad” package is discovered, there are often more.

USPS images the packages. If the team identifies a suspicious package, they can quickly compare an image of that package to all the other packages in the system. The team can identify and extract similar packages for further investigation.

We helped USPS create an AI infrastructure solution that enables fast and accurate image comparison. With the new system, the USPS can perform searches in hours rather than days.
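A common way to build this kind of capability is to embed each package image as a vector and rank stored images by similarity to the query. The sketch below shows the general technique with an off-the-shelf pretrained model; the model choice, file paths, and brute-force ranking are illustrative assumptions, not the USPS production system.

```python
# Sketch of embedding-based image similarity search. Model choice and
# paths are illustrative; a production system would index millions of
# images on GPU infrastructure.
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained CNN with the classifier removed, used as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> np.ndarray:
    """Map one image to an L2-normalized feature vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    v = backbone(x).squeeze(0).numpy()
    return v / np.linalg.norm(v)

def most_similar(query_path: str, index: dict, k: int = 5):
    """Rank indexed images by cosine similarity to the query image."""
    q = embed(query_path)
    scores = {name: float(vec @ q) for name, vec in index.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

# Usage (hypothetical paths): index a corpus, then query a suspicious image.
# index = {p: embed(p) for p in ["pkg_001.jpg", "pkg_002.jpg"]}
# print(most_similar("suspicious.jpg", index, k=3))
```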

NVIDIA is now working with USPS on other AI initiatives, including identifying and analyzing ZIP code boundaries, improving delivery route and logistics efficiency, detecting fraud, and more.

MeriTalk: What are federal AI leaders doing differently from agencies that might struggle with AI?

Brown: Agencies that are making progress in adopting AI are moving fast. There is a lot available in terms of software tools and expertise. If you don’t have the expertise in-house, that’s not a barrier – support is out there.

Look for software that can speed up implementation. NVIDIA, for example, offers a rich collection of solutions that provide easy-to-digest modules for software developers.

We are also seeing successful agencies get creative with data acquisition. Federated learning is a way for different agencies and groups to train shared AI models on anonymized or masked datasets. An organization can contribute data for AI training without having to expose the underlying values in those fields.
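As a concrete illustration, the sketch below implements federated averaging (FedAvg), a standard federated learning recipe: each participant trains on its own data locally and shares only model weights with a coordinator. The linear model and synthetic data are toys chosen for brevity, not any agency’s actual setup.

```python
# Toy federated averaging (FedAvg): raw data never leaves a participant;
# only model weights are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One participant's local training: gradient descent on a linear
    model, using data that stays on-premises."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, participants):
    """One federation round: collect locally trained weights and average
    them, weighted by each participant's sample count."""
    updates = [local_update(global_w, X, y) for X, y in participants]
    sizes = np.array([len(y) for _, y in participants], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Synthetic data for two "agencies" drawn from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
participants = []
for n in (100, 300):
    X = rng.normal(size=(n, 2))
    participants.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = fed_avg(w, participants)
print(w)  # converges toward true_w without pooling any raw data
```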

From a leadership perspective, if you give an IT team a well-defined, tactical, actionable AI proof of concept, there’s no reason the project should take six months or a year. The IT team should be able to get results in a matter of weeks – that’s very possible today.

