Top 10 Needs to Deploy XR Solutions in Aerospace Manufacturing Environments

Kellin Bershinsky

Senior Engineer, XR Group Leader at Ball

Learning Objectives

This presentation outlines 10 capability gaps that must be closed for virtual and augmented reality solutions to reach their potential within aerospace manufacturing environments. These needs range from contamination risk mitigation to overcoming hardware limitations and security concerns. The goal of this presentation is to find partners willing to help define the scope of the problems presented and aid in prioritizing solutions.


Key Takeaways:



  • A large portion of aerospace manufacturing efforts cannot use wireless technology

  • A large portion of aerospace manufacturing cannot use commercial cloud services

  • ESD and silicone contamination present major risks to electronics, optics, and bonding operations


"Right now, XR is making some revolutionary changes in the aerospace industry."

Kellin Bershinsky

Senior Engineer, XR Group Leader at Ball

Transcript

Welcome, everyone. I’m Kellin Bershinsky. I’m a Senior Engineer who leads the XR Group at Ball Aerospace. The XR Group, or XRG, as we like to call ourselves, has a charter to enhance program execution and new business efforts by providing innovative virtual and augmented reality solutions.


Today, I’ll be talking to you about the top 10 needs to deploy XR solutions in aerospace manufacturing environments. I’ll start off this talk with a brief intro to XR, or Extended Reality. I’ll talk through some of the misconceptions around the technology and why it’s important. Then, I’ll jump into the top 10 needs for really standing up these capabilities in our production pipelines, and finish up with some key takeaways.


What is Extended Reality, or XR? XR encompasses a spectrum of technologies, ranging from augmented reality to virtual reality, and also includes many different types of human-computer interfaces that are rolled into them. If you imagine this spectrum here, the reality-virtuality continuum, as they like to call it, it starts at reality, just experiencing the world as we know it, and introduces more and more layers of computer-generated digital content that can be overlaid on the world, as in augmented reality. Everything in green in these three images on the bottom represents digital content.


So, AR, or augmented reality, in its pure form is just a heads-up display showing information overlaid on top of your vision. It doesn’t track with your surroundings, so it’s static: wherever you look, it’s going to be overlaid there, covering up part of your vision. On the other end of the spectrum, you have virtual reality, which takes you into a completely computer-generated world. Everywhere you look is digital content, and you’re unable to see the real world around you, as most of these headsets enclose your entire field of vision, so that’s all you can see.


However, there are many different things you can do in between virtual and augmented reality, and that’s where mixed reality comes into play. This is also synonymous with certain AR applications. Mixed reality, I think, is a good way of distinguishing this idea of mixing digital content into your world. In this image, you can see there’s a person standing there, and there’s this computer-generated dolphin jumping out of the floor. It’s being occluded by the person standing there, while occluding the couch behind it. It’s blending the real and virtual worlds together to create this fully immersive experience, which is really cool.
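
As a concrete illustration of that occlusion effect, here is a minimal sketch of how it is commonly enabled in Unity; this assumes Unity’s AR Foundation package (the talk does not name a toolkit), which uses a live depth map of the real scene to hide virtual content behind physical objects and people:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical component: turns on environment-depth occlusion so virtual
// objects (like the dolphin) are hidden behind real people and furniture.
public class EnableOcclusion : MonoBehaviour
{
    [SerializeField] private AROcclusionManager occlusionManager;

    void Start()
    {
        // Request the highest-quality depth map the device supports.
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best;
        // Prefer occluding with people specifically when the platform offers it.
        occlusionManager.requestedHumanStencilMode = HumanSegmentationStencilMode.Best;
    }
}
```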


Then, I mentioned these next generation human computer interfaces. I stole a little bit of this from Michael Abrash, from Facebook Reality Labs. He’s the Chief Technologist there. At Oculus Connect 6, he talked about this idea of human computer interface bandwidth, and how we have all these different ways of perceiving the world around us, not just sight and hearing, but touch and smell. We also have different ways of interacting with our world that make use of our full dynamic range of motion, our ability to interact in a variety of different ways with the world around us.


Where that comes into play with XR is that, when you look at our traditional computer interfaces, the range of input gets pretty bottlenecked when you consider what is possible with a keyboard and mouse. They have their uses, and they’re very helpful in our current work, but when you try to expand to the full range of human capabilities and experience, it’s a severe bottleneck.


When you start looking at this XR hardware, it can track not only your head movements but also hand gestures; it can use voice recognition as well as eye tracking. You start to layer in all of these different ways of reading inputs from the user, while also tracking head movement to visualize objects onto the world around you, objects that can interact with that world, which is really big.
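
As a rough sketch of what layering in those inputs looks like in code, here is a hedged example using Unity’s generic XR input API (the engine choice is my assumption; the talk doesn’t prescribe one) that polls head pose and eye-gaze fixation each frame:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Hypothetical sketch: poll head pose and eye tracking through Unity's
// generic XR input layer. Hand and voice input would layer in similarly.
public class MultimodalInput : MonoBehaviour
{
    void Update()
    {
        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 headPos) &&
            head.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion headRot))
        {
            // Head pose lets us anchor content to where the user is looking from.
            Debug.Log($"Head at {headPos}, facing {headRot * Vector3.forward}");
        }

        InputDevice eyes = InputDevices.GetDeviceAtXRNode(XRNode.CenterEye);
        if (eyes.TryGetFeatureValue(CommonUsages.eyesData, out Eyes eyeData) &&
            eyeData.TryGetFixationPoint(out Vector3 fixation))
        {
            // Eye tracking tells us what the user is actually attending to.
            Debug.Log($"User is fixating at {fixation}");
        }
    }
}
```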


What industries are currently being targeted for XR use cases? At Ball Aerospace, we have four divisions. The one you’re probably most familiar with is our Civil Space Group, which does a lot of work with NASA, building instruments for major space applications. We also have a Tactical Solutions Group, which makes antennas and cameras for military applications, air, land, and sea. We also have a National Defense Group, which is focused on national defense space efforts. Then, we have our Systems Engineering Solutions Group, which focuses on software solutions.


At the top here are what we’re currently focused on: being able to use XR for 3D visualization and CAD reviews, and for collaboration and communication, enhancing the way we show these ideas to customers, but also how we work with different roles within our own teams and communicate exactly what we’re trying to do. We also have really advanced manufacturing capabilities; we do manufacture a lot of our own hardware, so industrial manufacturing use cases are also interesting to us. Training and remote support are going to be big for us. But then, there’s a whole host of other industries here.


Healthcare: being able to take scans of patients and overlay a scan on top of the patient before the surgeon makes a cut is really big. Education: you can imagine the tactile aspects of being able to perform the lessons, and really having that knowledge retention and ability to learn quicker, is huge. Military: I’ll talk a little bit more about this later; the military is investing pretty heavily in using this technology for a variety of applications and next-generation user interfaces. Emergency response: imagine I know where something’s located, but I just need to figure out how to get there; having a pair of glasses that can overlay that position with respect to my current location will make that so much easier. Marketing and advertising: really harnessing the wow factor of bringing customers into this digital world and making it a more magical experience. Then finally, retail: being able to try it before you buy it is becoming more and more popular.


The next slide covers XR misconceptions and defense spending. The first misconception: “Nobody’s using VR or AR.” That’s definitely not the case, especially in the industries I just pointed to. At least in my own experience in aerospace, every big player has some group dedicated to figuring out which use cases fit and exploiting those use cases in their own production pipelines.


This is what I hear from time to time: “It’s all a fad, it will be gone next year.” It’s just not the case. People have been saying that since the dawn of AR and VR. In 2016, with the release of consumer-grade headsets, there was a big explosion, and really massive optimism about how fast this industry was going to grow. It ended up not meeting those wild expectations, but it has continued to grow. It’s continuing to progress. These numbers here show the latest and greatest projections: $42.55 billion in 2020, up to $333.16 billion by 2025. Still maybe a little optimistic, but the market is projected to grow, not shrink, anytime soon.


A recent advancement: Microsoft won a contract with the US Army for their IVAS headset for $21.9 billion over 10 years, which is really big. This is a follow-on to an $80 million contract that they won to implement the technology into warfighter helmets. The devices are very cool. They’re modified HoloLens devices with a much wider field of view, they’ve got a variety of different sensors on them, and they’re ruggedized, which is pretty amazing. I’m excited to see what comes out of that for the commercial sector. Then, capital spending continues to be a thing: through 2019, billions were spent on new startups, which is great.


On top of this, and some of it’s not reported here, is what companies are spending to advance the technology. I know that Facebook is putting upwards of 10,000 engineers on advancing the technology and realizing some of the use cases there. Apple is reported to have up to 1,000 engineers working on AR capabilities. Pretty exciting. I think it’s going to change quite a bit in the next 2 to 5 years.


Another one is, “It’s too expensive, too complicated, requires too much space.” Maybe from a consumer standpoint that was true: you needed a pretty high-end computer to drive the first-gen headsets. That’s not the case anymore. You can get an Oculus Quest 2 for $300, completely standalone and easy to get up and running. It’s really made things simpler and really driven down the price point. Then the last one: requires too much space. It’s better if you’re not running into things; we usually recommend staying at least arm’s width away from everything and trying to keep people centered. But we’re starting to see some innovation, like being able to see your keyboard and to map out your furniture better, so that you’re not punching your monitors or breaking things. That space requirement will continue to shrink, and it really depends on the use case. I think there are some really nice elements of being able to get up and walk around things; it just makes the work we do a little more physically active and engaging, and I think better overall anyway.


This next one is pretty funny: “I’ve already tried VR.” I’ve heard this quite a few times going to different trade shows, offering the latest and greatest experiences for people to try out: “Oh, no, I’m good. I’ve already tried VR before.” What’s funny to me is, it’s kind of like saying, “I’ve tried movies. Oh, yeah, no, I don’t need to see any more movies. I’ve seen a movie before. I know what movies are.” So try to keep that in mind if you ever catch yourself saying something like that. It’s a platform. Any type of application is possible, and things are rapidly changing. Thinking you’ve seen everything in VR is like saying you’ve seen every movie out there after a single experience. Don’t let yourself get caught saying that.


Then, this last one is more on the optimistic side: “XR will replace computers.” I myself was guilty of this once I drank the XR Kool-Aid, thinking that this stuff is really going to completely revolutionize the way we do work and interface with computers, which I still wholeheartedly believe. But there’s just so much infrastructure built up around computers and the software applications we use from an engineering perspective that, unless those applications have plugins for VR, it’s not going to happen anytime soon. On top of that, having gathered a number of user experiences since our group started in 2016, we don’t have a ton of data on AR and VR, but enough to see some general trends.


There are a lot of people out there who just can’t use the tech. It makes some people motion sick, or simulator sick, I should say; they may have some type of headache issue that comes up; they may be claustrophobic and not want to be in a virtually confined space; there are just so many different things. So it’s about finding the right group of people to leverage this technology, the right use cases, and maybe a different headset. Eventually, I see AR and VR merging into more all-encompassing headsets that do everything. For the time being, finding the right equipment for the right task is key to getting more adoption.


Then, this chart on the right goes back to “It’s all a fad, it’ll be gone next year.” At least on the defense side, this is pulled from an archive of government contracting activity, which tracks what proposals are out there and what awards are being won. Just doing a quick search for how many defense contracts have some AR/VR component to them, the trend is increasing exponentially, and this area is going to be rapidly advancing in the near future; it already is.


This next slide talks a little bit about why this technology is important. When you start looking at what the future of engineering looks like, there’s this idea of the digital twin. I’ll read this off real quick: a digital twin is a digital representation of a physical product, including product specifications, geometry models, material properties, simulations, and related information. This is really about taking everything that goes into the design and synchronizing it together into one model that people can access, the latest and greatest, up-to-date, single source of truth. When you look at that over the course of a program’s lifecycle, that’s where the digital thread element comes into play: being able to see exactly what’s happening to that digital twin from cradle to grave.


Where does XR fit into this? A lot of the digital twin relies on what’s known as a model-driven approach, or model-based systems engineering, where you’re able to better organize the information and link it together so that if things change, you understand the impacts across the entire project, and it’s able to synchronize with multiple software platforms. If I look at a particular piece, I get all the information I need right there: lead times, design constraints, how it fits into the big picture, all of it.
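
As a hypothetical illustration of what that linked, revision-synchronized part information could look like as a data structure (my own sketch, not Ball’s actual model; every name here is invented):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical digital-twin record: one entry per component, so a change
// to any field bumps the revision and every consumer sees the same truth.
public class PartTwin
{
    public string PartNumber;
    public int Revision;                    // single source of truth: bump on any change
    public string GeometryRef;              // link into the CAD/PLM system
    public int LeadTimeDays;                // supply-chain data, synchronized in
    public List<string> DesignConstraints = new List<string>();
    public List<string> UsedInAssemblies = new List<string>();  // "big picture" links

    public void Update(Action<PartTwin> change)
    {
        change(this);
        Revision++;   // downstream tools (XR viewers included) re-sync on revision change
    }
}
```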


Whenever you’re going to be in a situation where it’s helpful to leverage that information and collaborate, that’s where XR comes in. Being able to take a digital twin, bring it into a virtual collaborative environment, and share that information with people that are not only in the same room, but spread out across different rooms, buildings, areas of the country, areas of the world, that’s where we really see this technology shining. Just taking our remote collaboration to the next level, and bringing in all of the features that computers currently give us.


You can imagine being in a virtual collaborative space, being able to bring up any information from the web, from your own servers, models, animations, and taking whiteboarding and brainstorming to the next level by creating 3D models and animating them or adding physics constraints. There’s just so much that can be done there. So, that’s really exciting.
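
As a toy example of that kind of in-headset brainstorming, here is a hedged Unity sketch that spawns a primitive in front of the user and gives it physics; the keypress is a stand-in for whatever controller gesture a real app would use:

```csharp
using UnityEngine;

// Hypothetical sketch: press a key (stand-in for a controller gesture) to
// spawn a cube in front of the user's head with physics enabled.
public class WhiteboardSpawner : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Transform head = Camera.main.transform;
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = head.position + head.forward * 1.5f;
            cube.transform.localScale = Vector3.one * 0.2f;
            cube.AddComponent<Rigidbody>();   // physics constraint: it now falls and collides
        }
    }
}
```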


Spatial.io, which this image is taken from, has done a great job of visualizing that vision; definitely worth checking out. Unfortunately, at least in its current state, it won’t work for us. I’ll get into why in a little bit.


Jumping into the top 10 needs. Taking a step back, I’m trying to express some problems I’ve been seeing within our industry, hoping that other groups, either in the same industry or in others, are seeing the same issues. We’re pretty small fish, a medium-sized engineering company, roughly 6,000 strong, and we’re just getting started trying to shape what impact these needs will have on the future of hardware and to get these feature requests out there. I think we’re too small to do that alone, so I’m hoping that the more of us saying the same thing, the more the solution providers will bring those features to the table, knowing it’s a priority.


First one: the headsets themselves are not ideal. These are the things needed to really get this to take off in the aerospace manufacturing sector. I call these time factors because I think they’ll be solved; it’s just a matter of time. There are already companies working on these, so there aren’t any new opportunities here that I’m really going to showcase. Whenever we’re doing roadmapping, or trying to figure out which use cases to focus on when and where, keeping in mind the current state and what we think the ideal state will be later on is pretty key. Head-mounted displays, current status: a lot of them are too bulky and too heavy to be worn by everybody. My wife can only wear a headset for 30 to 45 minutes before it starts hurting her neck. So, the form factor is going to have to shrink quite a bit to make wearing one for the majority of the day realistic.


Eyestrain is another one. There are things you can do to try to alleviate this, but with the vergence-accommodation conflict, seeing virtual content up really close is a problem right now. HoloLens will actually clip the content so that anything that comes within a certain distance, you can’t see. There is some really interesting technology coming down the pipeline that solves this and will hopefully resolve some of the eyestrain that comes along with it.
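
One common mitigation, and what the HoloLens clipping behavior described above amounts to, is refusing to render content nearer than a comfort threshold. A minimal Unity sketch; the 0.4 m value is my assumption, so check your device’s comfort guidance:

```csharp
using UnityEngine;

// Hypothetical mitigation sketch: clip anything nearer than a comfort
// threshold so users can't hold content in the vergence-accommodation
// conflict zone. Threshold value is an assumption; consult device guidance.
public class ComfortNearClip : MonoBehaviour
{
    [SerializeField] private float nearClipMeters = 0.4f;

    void Start()
    {
        Camera.main.nearClipPlane = nearClipMeters;
    }
}
```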


Another big one is simulator sickness. I haven’t seen any great solutions for this. I’ve seen things ranging from keeping a level reference field, almost like a foveated-rendering-style vignette that encompasses your eyes so you can only see a limited field of view, to providing static reference frames that you’re looking through, like you’re inside of a spaceship. But even then, if best practices aren’t followed, there’s a chance you’re going to make people sick, which is not good, and that’s a problem. As far as AR technology is concerned, a lot of the waveguide displays have a fairly limited field of view. I think that’ll change in the near future. Between simulator sickness and the solutions that limit the amount of content the user can see, it’s problematic, especially when you’re trying to use the technology to make big decisions, say in a brainstorming session; it’s not natural.
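
One of the mitigations mentioned, restricting the visible field during motion, can be sketched as a vignette whose strength tracks head angular velocity. This is a hypothetical illustration; the vignette material and its _Intensity shader property are assumed, not from the talk:

```csharp
using UnityEngine;

// Hypothetical comfort vignette: the faster the head turns, the more we
// narrow the visible field of view, a common simulator-sickness mitigation.
public class ComfortVignette : MonoBehaviour
{
    [SerializeField] private Material vignetteMaterial;  // assumed full-screen overlay
    [SerializeField] private float degreesPerSecAtMax = 120f;

    private Quaternion lastRotation;

    void Start() => lastRotation = Camera.main.transform.rotation;

    void LateUpdate()
    {
        Quaternion current = Camera.main.transform.rotation;
        float angularSpeed = Quaternion.Angle(lastRotation, current) / Time.deltaTime;
        lastRotation = current;

        // Map angular speed to vignette strength in [0, 1].
        float intensity = Mathf.Clamp01(angularSpeed / degreesPerSecAtMax);
        vignetteMaterial.SetFloat("_Intensity", intensity);  // assumed shader property
    }
}
```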


Tethered versus wireless. [Unintelligible] headsets are tethered, which means I’ve got to have them attached to a computer. That’s going to limit mobility; however, you get all the processing power of those computers, which is great. If I have some super advanced model, an entire spacecraft that I want to bring in, that’s really the only way to do it, unless you have some type of cloud rendering capability. Wireless is great, and streaming technology is coming along, but that has security risks. Also, a lot of the devices that are wireless are also mobile, so they’re processor-limited and [inaudible] not able to get the type of rendering fidelity that you could from a tethered headset.


Then, not fit for the environment. I’ll get into this in a little bit, in a couple of the other bullets. There are some particular issues, especially in aerospace manufacturing, that a lot of the headsets have that are not ideal.


Then, on the other side of this chart, I’ve got dynamic object tracking. In a lot of the use cases I’ve seen, especially in augmented reality, you have to synchronize the digital content with physical objects if you want to display information that’s relevant to the object and things on the object. That works great with a lot of the techniques I’ve seen: fiducial markers, or some type of room synchronization that identifies key features in the room and synchronizes with what you’re looking at. But whenever you pick up an object and move it, that’s going to destroy the synchronization. Maybe you put fiducial markers on the object and track it that way, but tacking a sticker like that onto thousands of components is not realistic. So, being able to track objects somehow is going to be important. This is another one of those things that’s definitely coming down the pipeline.


Right now, what we’re doing is just trying to keep settings static. You start with the chassis, you mount the chassis, and then for everything else, you’re not tracking the nuts and bolts, just animating parts and showing the assembly sequence. There are some workarounds. Hopefully, someday, you’ll be able to take these high-resolution tracking mechanisms and use them for tolerance tracking and object identification. I think a lot of that also comes with how good our depth sensors and spatial maps are. There are a lot of exciting possibilities out there once we’re able to track object poses and everything else.
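
Here is roughly what that animate-the-parts-against-a-static-chassis workaround looks like in code, a hedged Unity sketch where each part lerps from a staging offset into its mounted pose in sequence (part lists, offsets, and timings are illustrative):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical assembly-sequence player: the chassis stays static and each
// part animates from a staged offset into its mounted position, in order.
public class AssemblySequence : MonoBehaviour
{
    [SerializeField] private List<Transform> partsInOrder;     // authored step order
    [SerializeField] private float secondsPerStep = 2f;
    [SerializeField] private Vector3 stagingOffset = new Vector3(0f, 0.3f, 0f);

    IEnumerator Start()
    {
        foreach (Transform part in partsInOrder)
        {
            Vector3 mounted = part.localPosition;              // final pose from CAD
            part.localPosition = mounted + stagingOffset;      // start staged above

            for (float t = 0f; t < 1f; t += Time.deltaTime / secondsPerStep)
            {
                part.localPosition = Vector3.Lerp(mounted + stagingOffset, mounted, t);
                yield return null;
            }
            part.localPosition = mounted;
        }
    }
}
```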


Internal risks and concerns. This is specific to, at least, the work we do, but I’m guessing there’s a lot of this across other companies. The big one is risk to intellectual property. You can imagine, as a government contractor, it would be a huge deal if we were to leak proprietary information, so introducing any risk of leaks is not ideal. The technology comes with cameras, it’s got depth sensors, it’s got wireless data-streaming capabilities, and the devices are mobile; what are you going to do if one of these walks off site or gets lost? And they’re essentially standalone computers, so they’re vulnerable to the same security and hacking concerns that normal computers are. There’s just so much here; some way to make these as secure as possible, while still keeping them useful, is the main opportunity there.


I know there’s a lot going into making these things as secure as possible, but there’s one issue we constantly run up against, along with some other concerns. What I’ve found is that, since this is an emerging technology, a lot of the processes required to validate or verify that it’s safe to use just don’t exist. So we’re going to have to make them up as we go, and that’s not ideal, but that’s where we’re at. Data security and IT interfaces: being able to integrate these devices with our own infrastructure. How do you add two-factor authentication when it’s not compatible with these types of devices? How do you get through firewall issues and Active Directory authentication? There’s just a lot we’ve been running into there that is not ideal.


I mentioned employee health and safety; eyestrain and simulator sickness are the main concerns there. Whenever you’re seeing things moving but not experiencing that acceleration, that’s where simulator sickness comes into play. It’s kind of the opposite of motion sickness: imagine riding in a car. If you’re not looking out the window to see where you’re going, you might be feeling those G-forces, but your eyes don’t agree, and that’s where you get motion sick. Simulator sickness is the opposite of that.


We’ve also seen a need to leverage or upgrade infrastructure. Starting at the bottom: if we’re going to be using this for streaming of some kind, do our network access points provide enough bandwidth for multiple devices to be streaming, as well as anybody else who needs to connect to the network wirelessly? Right now, we’re in a big upgrade of our PLM, our product lifecycle management system. There are features in there that will allow us to leverage some of these capabilities, but they may not be as mature as some of the other elements, and they all come with their own individual price tags. It’s kind of a mishmash, and not all of them are compatible with each other. We’re trying to find a balance there of what to use.
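
The access-point question reduces to simple arithmetic. A hedged back-of-the-envelope check, where every number is an assumed placeholder rather than a measured value:

```csharp
using System;

// Hypothetical capacity check: does one access point have headroom for a
// fleet of streaming headsets plus everyone else on the wireless network?
class BandwidthCheck
{
    static void Main()
    {
        double apUsableMbps     = 400.0;  // assumed real-world AP throughput
        double perHeadsetMbps   = 50.0;   // assumed per-device streaming bitrate
        int    headsets         = 6;
        double otherTrafficMbps = 80.0;   // assumed laptops, phones, etc.

        double required = headsets * perHeadsetMbps + otherTrafficMbps;
        Console.WriteLine($"Required: {required} Mbps of {apUsableMbps} Mbps " +
                          $"({required / apUsableMbps:P0} utilization)");
        Console.WriteLine(required <= apUsableMbps
            ? "Fits, with headroom."
            : "Over capacity: upgrade APs or cap concurrent streams.");
    }
}
```

At placeholder numbers like these, a handful of streaming headsets already consume most of a single access point, which is why the upgrade question comes up at all.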


Then, I’ve seen a lot of solutions out there where what they’re actually providing is a simple solution, but they’ve gone out and built their own configuration and data management system that allows you to do version control. That’s nice, but not necessary. Actually, unless you’re providing the end-all-be-all solution, it’s not ideal, because if we have to go out and buy different pieces of software from every vendor, and they each have their own version-control software, it just doesn’t stack very well. Trying to chain these things together is not going to work. They need to leverage our existing systems, if they can, and have plugins there to make that work.


Here are some big things I wanted to get out there. These are some logos of solution providers I’ve seen that are making use of commercial cloud services, which is awesome. I agree that the cloud has so many advantages, especially when you start thinking about remote rendering and hosting, and offloading some of the processing into the cloud; you can get even smaller form-factor AR devices. I can picture it being used for virtual collaboration with the same kind of remote-rendering interface, taking advantage of AI services and 3D rendering; in those virtual collaborative environments, you can have really high-fidelity models. Unfortunately, we can’t access commercial cloud services. We’ve recently been leveraging Azure GCC High, which is great, but there aren’t a lot of solution providers that have deployed their applications to that environment. The opportunity there is for people to move existing applications to secure cloud networks, or to go through the verification process of getting the government on board with using existing cloud networks over secure channels.


The other one is that we can’t use wireless technology in probably 80% of the work we do. These are secure environments with proprietary networks that don’t have any wireless networking capabilities, which is kind of a drag, because all these commercial cloud services can then only be accessed through a tethered type of interface. We can’t use motion controllers. If we have smart tools, IoT, anything that’s wireless, we can’t connect to it. This is like a fairytale dream, but if there were anything out there like a super secure wireless communication that the government would allow in secure areas, that would be amazing. If you know of anything like that out there, or coming down the pipeline, I would love to hear about it. My contact info is at the bottom, so we’ll be able to chat. That’d be a dream come true.


Here are some other potential problems and opportunities. This is something you may not be aware of; if you’re in the manufacturing business and you deal with electrostatic-sensitive components, then you probably are. With a lot of the headsets I’ve seen, going through our testing, we’ve determined that most of them are made, at least their surfaces, of some type of non-conductive material, as most plastics are. The problem with that is charge builds up over time. Even if you don’t directly touch the headset to the hardware, static electricity can jump pretty far; even getting near it poses a pretty serious risk to the hardware. You can zap it and fry transistors, which would be horrible.


We have some ways of working around that with ESD blowers, wrapping the headset in conductive material, and trying to bleed off charge. But the electrostatic discharge ion blowers have a tendency to dry people’s eyes as the air rushes across them, which isn’t ideal. And for the transparent portions of AR glasses, there’s not much we can do; it always ends up being a risk.


The big opportunity there is to either make the hardware out of some type of static-dissipative material, which doesn’t have to be fully conductive, it just has to bleed off charge, with some way for us to touch a ground strap to it so we can do that continuously, or to offer some type of wrap that can go over the entire thing. It gets difficult, as I mentioned, with AR glasses, but that’d be great. We’d love to see something like that in the near future.


Then, this other one is silicone sensitivity. The majority of mobile devices, including a lot of the AR devices we’ve tested, are partly made of a soft, rubbery material, which is silicone. The problem with that is silicone never completely cures. Uncured silicone can creep across surfaces, and if it creeps across the optics of something you’ve launched into space, it’s not like you can just go up there and wipe off the mirror. That’s incredibly dangerous to the hardware. On top of that, silicone also causes bonds to fail. Anything you’re gluing or adhering together to make it more stable during launch, those bonds can fail, which would be disastrous if the thing were to fall apart on launch. Just terrible. So we end up having very silicone-averse environments. The opportunity there is to make these devices out of non-silicone-based materials, or to create some type of protective cover that doesn’t contain silicone, or that contains the silicone of the device; if it were also static-dissipative, that would be fantastic.


Final needs. I’m going to start on the right: mitigating risk aversion and resistance to change. I kind of pictured this being one of the enablers of the technology, and it’s been a little slow. What we really need is a method of kickstarting adoption. I’ve heard excuses across the board: people are too busy, or extremely risk averse, not wanting to bring in something that could possibly make them miss schedule, even though all signs and pilot studies out there show that, for certain use cases, you can really drive down the amount of time it takes to do things, which would actually help them.


A lot of people are just unable to see the value in this technology. They give you the runaround; they can see it working in everybody else’s group, but not their own, which is always interesting. Or they may be really excited about it, but go completely out of scope, thrilled about some idea they had that is completely impossible for the technology to do, like automatically fixing a particular problem for them. Maybe you need some artificial intelligence; there are ways to possibly go down that route, but I’ve seen multiple cases where it gets carried away and you end up trying to pursue something that’s impossible.


Then, on the other side, these would be really nice to have. Just making it easier to integrate this technology: I mentioned some of the issues with connecting to IT infrastructure. Making it very simple to deal with firewall issues, to connect to existing proprietary networks, and to work through the authentication issues would be amazing. I’ve seen issues connecting into the Azure Gov cloud. It just seems like, every step of the way, we’re running into barriers. Eliminating as much of that as possible would be key.


I’ve also seen quite a bit of limited functionality in solutions. People are picking a use case and then building an entire product on top of that one use case. For this to really take off, you can imagine, if I want to use this on the floor for hands-on training, being able to go into a virtual collaborative environment with somebody and walk through the process is going to be key. Having those as separate products, paying separate software licenses, and having those capabilities not talk to one another doesn’t work. Then, the licenses themselves can be extremely expensive. Trying to daisy-chain them together just isn’t realistic.


Most recently, I’ve seen wild swings in cost, with vendors wanting to charge a fortune for something just because it’s new, even though the capability itself is relatively simple. I’ve seen a lot of aerospace companies, including our own, just develop their own technologies, since Unity is a pretty good prototyping application. We have so many varied use cases outside the scope of these generic applications that it just makes sense to build in our own features as needed, instead of waiting for them.


The last one is, if you are going to provide a solution, you’ll have to make sure that it integrates with other capabilities that are out there, or provide hooks, so that if we have use cases outside of it, we can add features to make it usable. Providing an API that allows us to build on top of that core platform is going to be huge.
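
To make the hooks idea concrete, here is a hypothetical sketch of the kind of extension API a vendor could expose; every name in it is invented for illustration:

```csharp
using System;

// Hypothetical extension API: a vendor exposes stable hooks so customers can
// add their own use cases without waiting on the product roadmap.
public interface IXrPlatformHooks
{
    // Pull model/work-instruction data from the customer's own PLM instead
    // of the vendor's bundled data store.
    void RegisterModelSource(Func<string, byte[]> loadModelByPartNumber);

    // Let customer code react to platform events (step completed, session joined).
    void Subscribe(string eventName, Action<string> handler);
}

public class CustomTorqueStep
{
    public void Install(IXrPlatformHooks hooks)
    {
        hooks.Subscribe("step.completed", payload =>
            Console.WriteLine($"Log torque record to our MES: {payload}"));
    }
}
```

The design point is that the customer’s code plugs into the vendor’s platform, rather than the vendor shipping yet another siloed data-management system.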


Finishing up with some key takeaways. Right now, XR is making some revolutionary changes in the aerospace industry. I’ve seen some great use cases out there, ranging from harness installation routing, placing fasteners, and torquing operations, to really simplifying the instruction sets given to technicians and taking manufacturing capabilities to the next level. The technology itself is rapidly advancing. I’ve been seeing improvements across the board every year: resolutions getting better, cloud capabilities getting better. There’s just so much coming to fruition, and so much potential for the technology as more and more comes online.


I talked through a number of aerospace challenges that we have to overcome. There’s a lot out there; this is going to be pretty tricky to make mainstream, so we’re looking for partners to help alleviate some of those issues. Then, I listed some opportunities to increase the speed of XR adoption in industry. If you have any clues as to how we can overcome those, I would love to hear them.


The last takeaway is that XR will converge with other emerging technologies, including human-computer interfaces. I didn’t talk much about this, but it’s one of the most exciting aspects of XR. Being able to use it for IoT is huge, but so is leveraging all the capabilities in the cloud. Once you bring in AI assistants, being able to take advantage of contextually aware warnings, or to bring up lessons learned relevant to what you’re currently working on, will just be huge.


It looks like my email got pulled off here, but if you want to reach out to me, my name is Kellin Bershinsky. My email is KBershin@ball.com. Feel free to reach out. I’m also on LinkedIn; if you reach out there, we can connect. Anyway, thank you so much for taking the time to watch this. I hope you got something out of it. If you want to follow up, I’d love to hear from you. Take care.

