Cars – Dataconomy
https://dataconomy.ru — Bridging the gap between technology and business

The reason behind the Mercedes-Benz recall
https://dataconomy.ru/2024/02/20/mercedes-benz-recall/
Tue, 20 Feb 2024

Mercedes-Benz has recently announced a recall affecting about 250,000 vehicles worldwide. The recall stems from components in these cars that may not be as safe as they should be; German authorities warn of possible engine trouble or fires.

The cars being recalled are from different models made in 2023, like the AMG GT, S-Class, and others. In Germany alone, more than 37,000 cars might have problems. This shows how serious the situation is.

Mercedes-Benz recall: The German Federal Motor Transport Authority (KBA) has highlighted the seriousness of the issue, prompting Mercedes-Benz to take proactive measures (Image credit)

Mercedes-Benz wants to make sure its customers are safe, so it is fixing the problems for free. The repairs won’t take long, and the company is working to make the process easy for car owners. If you have one of these cars, it’s important to get in touch with Mercedes-Benz to have it fixed. Keep reading to learn why the recall is happening, which models are included, and what it means for car owners.




Mercedes-Benz recall explained

Mercedes-Benz, a stalwart in the automotive industry, has recently initiated a global recall affecting approximately 250,000 vehicles, drawing attention to potential safety issues.

At the heart of the issue lie certain fuses within the vehicles that have failed to meet the stringent requirements mandated by regulatory authorities. The German Federal Motor Transport Authority (KBA) has identified potential consequences, ranging from engine malfunctions to an escalated risk of fire. Recognizing the severity of these implications, Mercedes-Benz has proactively taken steps to address the concerns and safeguard its customers.

The affected vehicles span various models from the 2023 lineup, including the AMG GT, C-Class, CLE, E-Class, EQE, EQS, GLC, S-Class, and SL. Notably, Germany is expected to bear the brunt of this recall, with over 37,000 vehicles likely to be impacted. This widespread measure underscores the gravity of the situation and the necessity for prompt action.

The Mercedes-Benz recall addresses potential issues with certain fuses that may lead to engine malfunctions or an increased risk of fires

A spokesperson for Mercedes-Benz reiterated the company’s steadfast commitment to prioritizing consumer welfare. To mitigate any potential risks, a comprehensive plan has been devised to replace specific components within the affected vehicles. Importantly, this repair initiative will be undertaken at no cost to the owners.

The repair process, estimated to last a few hours, will be executed to minimize disruption for customers. Mercedes-Benz has assured owners of the impacted vehicles that every effort will be made to expedite the process without compromising on quality or safety standards.




Owners of the affected vehicles are encouraged to promptly contact authorized Mercedes-Benz service centers to schedule the necessary repairs. By adhering to the recall instructions, customers can uphold the integrity of their vehicles and ensure their continued safety on the road.

Featured image credit: Eray Eliaçık/Bing

How Data Science Is Driving The Driverless Car
https://dataconomy.ru/2015/12/21/how-data-science-is-driving-the-driverless-car/
Mon, 21 Dec 2015

Google may be the first name associated with driverless cars, but they’re not the only ones interested in the technology. More importantly, it seems people have wildly different opinions on exactly how the story will play out. How much data will these cars produce? What kind of legal quandaries will we face? While the details are hazy, there is no doubt these machines will not only be available in the near future, they will soak up data like a sponge, and put that information to use both inside the car and out.

Where does all this data come from?

When a passenger loves their car very much, they go on trips. Of course, for a driverless car, it can be a little hard to see the road. This is why these cars are loaded with sensors. They may be equipped with LIDAR (Light Detection and Ranging), as Google’s cars are, which is used to build a 3D map of the surroundings. The car can see lines on the road to know where the lane is, or notice that a light has turned red. Now the car can see, but what about those other cars zooming about? How can a driverless car avoid a fender bender? Radar units let the car measure just how fast the surrounding objects are moving.

There are three main hardware components in the driverless car model: sensors, processors, and actuators. Images and information gleaned from sensors travel through the processor, which effectively tells the car what to do via actuators—tools that allow a computer to control physical components like the brakes or steering wheel.
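This sense-plan-act pipeline can be sketched as a simple control loop. The following is a toy illustration, not any manufacturer's actual stack; the `Actuation` class, the `plan` function, and the braking policy are all invented for this example:

```python
# Toy sense-plan-act loop; every name and threshold here is hypothetical.
from dataclasses import dataclass

@dataclass
class Actuation:
    brake: float      # 0.0 (off) to 1.0 (full)
    steering: float   # radians; negative = left

def plan(sensor_frame: dict) -> Actuation:
    """Processor step: turn one frame of sensor readings into actuator commands."""
    obstacle_m = sensor_frame.get("lidar_min_distance_m", float("inf"))
    # Brake harder the closer the nearest obstacle is (a deliberately simplistic policy).
    brake = 1.0 if obstacle_m < 5 else max(0.0, (20 - obstacle_m) / 20)
    return Actuation(brake=round(brake, 2), steering=0.0)

print(plan({"lidar_min_distance_m": 12.0}))  # Actuation(brake=0.4, steering=0.0)
```

In a real vehicle this loop runs many times per second, fusing LIDAR, camera, and radar inputs rather than a single distance reading.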

Certain kinds of data can also come from other cars. Your little car is not alone on this journey from toddler-brained automobile to futuristic sci-fi car. Data from other cars is also used to better map environments. If cars were to process the image of a new stop sign put in place, that data would eventually become part of the overall mapping system. Thanks to cloud access, the car is always connected and, ideally, up-to-date.

These cars aren’t just lumps of smart technology; they’re the result of learning algorithms and catalogued information based on previous experience. It’s the incredible software in place that models responses and behaviors in real time. The more the car, or computer, drives, the more it knows. This doesn’t mean you have to educate your car. Rather, the problem is in the very specific split-second decisions that drivers make every day. Scientists can’t possibly program a car to recognize every little object and how to act in every situation. A car may not inherently know the difference between a glass bottle and a newspaper, but by absorbing more data through driving and real experience, it may learn. It may learn that items in the road will cause other drivers to swerve, or how to anticipate when a driver will change lanes.

By turning driver experiences into programmable information, scientists can also make driverless cars much more practical. The ability to relay all kinds of data at once doesn’t necessarily equal the perfect car. There don’t need to be flashing lights every time another car brakes; it shouldn’t brake every time a bird is in the street. The results would be overwhelming. There is a multitude of background noise that drivers tune out to focus on what is vital. That’s why data science is necessary to determine what is and isn’t important. These cars use predictive and prescriptive models to deal with the influx of sensory information in a practical way.
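To make the filtering idea concrete, here is a deliberately tiny sketch of separating vital events from background noise; the event types and the priority set are invented for illustration, not drawn from any real system:

```python
# Toy event filter: keep only the sensor events that should trigger a response.
events = [
    {"type": "car_braking", "distance_m": 8},
    {"type": "bird_in_road", "distance_m": 3},   # background noise: ignored
    {"type": "pedestrian", "distance_m": 15},
]
RESPOND_TO = {"car_braking", "pedestrian"}       # assumed priority classes

important = [e for e in events if e["type"] in RESPOND_TO]
print([e["type"] for e in important])  # ['car_braking', 'pedestrian']
```

A production system would learn these priorities from data rather than hard-code them, which is exactly where the predictive and prescriptive models come in.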

How do we manage that data?

While scientists continue to argue over the exact number, some strategists argue that driverless cars could create up to 1GB of data per second. Given the number of sensors, and the fact they must constantly transmit information, it isn’t hard to believe such a high number. Theoretically that would equal petabytes of data a year, just from one measly car. The counterargument often concerns what actually happens to this data. Obviously, not all data from every car will be considered important. Rather, only important information would actually be stored. As most data is simply used for active driving, not for detailed analysis, those petabytes of data wouldn’t end up on the cloud. Much like unnecessary food photos, no one wants to see the 5,155th photo from your rear bumper—and no one likely will.
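The article's figures are easy to sanity-check with back-of-envelope arithmetic; the hours-driven-per-day value below is an assumption chosen for illustration, not a measurement:

```python
# Back-of-envelope: how much data does a 1 GB/s car generate in a year?
GB_PER_SECOND = 1                  # the estimate cited above
HOURS_DRIVEN_PER_DAY = 2           # assumed modest daily driving

seconds_per_year = HOURS_DRIVEN_PER_DAY * 3600 * 365
total_gb = GB_PER_SECOND * seconds_per_year
print(f"{total_gb / 1_000_000:.1f} PB per year")  # 2.6 PB per year
```

Even at two hours a day, a single car lands in petabyte territory, which is why almost none of it can be shipped to the cloud.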

There are many concerns about how the data from driverless cars will be put to use, but the fact is that, in order to get where they are now, designers and manufacturers have already absorbed exorbitant amounts of data. The Google Self-Driving Car can maneuver the roughly 2,000 miles of road it operates on only because that area is entirely pre-mapped. Every drive along these streets creates more data, making the car safer and more reliable. What happens if you drive outside of that small pre-mapped area? Frankly, your car might not know what to do. There is no taking your unmanned car for a spin through uncharted territory. It will take time and the sharing of data to create bigger and better maps. No one company will chart the entire globe alone. The more companies experiment and create, the faster they can absorb and analyze the mountains of data standing between the cars of today and the cars of the future. Moreover, many speculate that the development of better techniques will make a lot of data more accessible to those in other fields. While not all car, tech, or logistics companies can afford to invest in driverless cars, they too may benefit in the long run.

Apart from problems of public acceptance and legal complications, autonomous cars also carry security and personal-data risks never before experienced on the road. While the data might be perfect, such cars are hackable—as was recently proven when two hackers took out a Jeep on a highway (purely for science, thankfully). Even if you can keep your car safe from hackers, you may still have to worry about your personal data. The fear that advertisers will track consumers and stream commercials directly to the dashboard is a real one that will be hard to soothe. There is money in data, and stopping that kind of innovation might be an uphill battle.

Nonetheless, the mixture of complexity and common sense that is driving automated cars is breathtaking. The magic isn’t just in images of driverless, hovering cars (obviously, all futuristic cars can hover), but the ways that scientists are teaching computers to learn. No, they might never be capable of split-second decisions quite like a human, but with the right data, teachers and algorithms, robots can mimic the human mind very well.


image credit: The Economist

 

Ford Focuses on Big Data Ambitions with the New Silicon Valley Research Centre
https://dataconomy.ru/2015/01/26/ford-focuses-on-big-data-ambitions-with-the-new-silicon-valley-research-centre/
Mon, 26 Jan 2015

Ford have been at the forefront of top-down big data analytics from the get-go. In our interview with Ford’s Chief Data Scientist Mike Cavaretta last year, he alluded to the opening of a research centre to propel Ford’s big data research to dizzying new heights. Now, this plan has come to fruition: Ford’s Silicon Valley Research Center had its grand opening last week. The research center aims to drive innovation in connectivity, mobility, and autonomous vehicles.

Mark Fields, Ford’s President and CEO, hopes the centre will keep Ford at the cutting edge of innovation. “This new research center shows Ford’s commitment to be part of the Silicon Valley innovation ecosystem – anticipating customers’ wants and needs, especially on connectivity, mobility and autonomous vehicles,” Fields stated. “We are working to make these new technologies accessible to everyone, not just luxury customers.”

The announcement outlined that:

  1. Ford and Stanford have started an alliance to deliver a Fusion Hybrid Autonomous Research Vehicle to university engineers for the next phase of testing.
  2. Dragos Maciuca, an experienced Silicon Valley engineer, joins Ford from Apple to serve as senior technical leader at Research and Innovation Center Palo Alto; additional hiring plans will support Ford having one of the largest automotive research teams in Silicon Valley.

This facility is the latest in Ford’s global network of research and innovation centers, including its location in Dearborn, Michigan, which focuses on advanced electronics, human-machine interface, materials science, big data and analytics; and Aachen, Germany, which focuses on next-generation powertrain research, driver-assist technologies and active safety systems, reports their press release.

Situated in the Stanford Research Park, the facility will accommodate 125 researchers, engineers and scientists.

Some of its projects in key areas include:

  1. Connectivity: Ford is integrating with the Nest application programming interface, targeting home energy and emergency system management while on the road through a series of research experiments.
  2. Mobility: As the next phase in Ford’s Remote Repositioning mobility experiment, the Palo Alto team is now testing the ability to drive vehicles located on Georgia Institute of Technology’s campus in Atlanta from the Bay Area to prove out the new technology.
  3. Autonomous vehicles: While Ford’s research and development in autonomous vehicles is a global effort, including ongoing work with University of Michigan and Massachusetts Institute of Technology, the Palo Alto team will expand collaboration with Stanford University that kicked off in 2013.
  4. Customer experience
  5. Big data and analytics: Ford is leveraging its OpenXC platform to help learn how customers are using their vehicles, and is conducting analytics to detect patterns and learnings that can lead to product improvements or new mobility services.

The opening of this centre marks the beginning of an exciting new chapter in Ford’s development, as well as the development of the automotive industry as a whole.


(Image credit: Marco Ely, via Flickr)

Uber Plans to Set “New Standard for the Future Development of Our Cities”
https://dataconomy.ru/2015/01/14/uber-plans-to-set-new-standard-for-the-future-development-of-our-cities/
Wed, 14 Jan 2015

Uber, the app-based transportation network and taxi service provider, has revealed a data-sharing alliance with Boston authorities, touting it as a “first-of-its-kind partnership”.

The insights gleaned from Uber’s vast trove of data will assist in developing strategies for the urban sprawl: relieving traffic congestion, expanding public transportation, and reducing greenhouse gas emissions.

Boston’s Mayor, Martin J. Walsh, said in this regard: “In Boston, data is driving our conversations, our policy making and how we envision the future of our city. We are using data to change the way we deliver services and we welcome the opportunity to add to our resources. This will help us reach our transportation goals, improve the quality of our neighbourhoods and allow us to think smarter, finding more innovative and creative solutions to some of our most pressing challenges.”

Uber will provide Boston with anonymized trip-level data by ZIP Code Tabulation Area (ZCTA), i.e., data with time stamp on individual trips, distances, durations as well as support for Vision Zero and other transportation safety initiatives, explains an Uber blog post.

However, critics have found the ride-sharing app’s intentions murky, owing to recent incidents involving the misuse of rider data. How this partnership works out, only time will tell.


(Image credit: Uber)

How Ford Uses Data Science: Past, Present and Future
https://dataconomy.ru/2014/11/18/how-ford-uses-data-science-past-present-and-future/
Tue, 18 Nov 2014

Success stories of how data-driven practices can revitalise businesses are rife today, but there are few as compelling as the story of Ford. In 2006, the legendary car manufacturers were in trouble; they closed the year with a $12.6 billion loss, the largest in the company’s history. As we reported earlier in the year, through implementing a top-down data-driven culture and using innovative data science techniques, Ford was able to start turning profits again in just three years. I was recently lucky enough to speak with Mike Cavaretta, Ford’s Chief Data Scientist, who divulged the inside story of how data saved one of the world’s largest automobile manufacturers, as well as discussing how Ford will use data in the future.


As an overview, how do Ford use data science?
So at the moment, we’re primarily trying to break down our data silos. We have a number of projects that are using Hadoop, and we’re actually setting up our Big Data Analytics Lab, where we can run our own experiments and have a look at some of the more research questions.

Back in 2006/07, Ford was having a downturn. Since then, it’s dramatically turned things around. What role did data science play in this?

Thanks for that question, and thanks so much for phrasing it as “data science” and not “big data”. I think at this point in time, “big data” has come to mean so many things to so many people, I think it’s better to focus on the analytical techniques, and I think data science does a pretty good job of narrowing in on that.

So back to 2006-2007- that was around the time Alan Mulally was brought on. He brought with him this idea that important decisions within the company had to be based on data. He forged that from the very beginning, and from the top down. It really didn’t take a long time for people to realize that if the new CEO is asking, “Hey where is the data you are basing your decision on?”, you’d better go out and find the data, and have a good reason why that data matters to this particular decision.

So, it became apparent quickly that we needed people who could manipulate the data. We didn’t call it “data science” at the time, but being able to bring data to bear against different problems became of primary importance.

The idea was that the roadmap really needed to be based on the best data that we had at that time, and the focus was not only good data and analysis, but also being able to react to that analysis fast.

So an 80% solution would allow us to move quickly, and benefit the business more than a 95% solution where we missed the decision point. I think there were a lot of benefits to being able to bring these methods, ideas and data-driven decisions using good statistical techniques. This approach helps to build your credibility, as you’re able to bring great results with good timing- it just worked out well.

What technologies were you using?

At the time, the primary technologies we were using were really more on the statistical side, so none of the big data stuff- we were not using Hadoop. The primary database technologies were SQL-driven. Ford has a mix of a lot of different technologies from a lot of different companies- Microsoft, Teradata, Oracle… The database technologies allowed us to go to our IT partners and say “This is the data that is important, we need to be able to make a decision based on this analysis”- and we could do it. On the statistical side, we did a lot of stuff in R. We did some stuff with SAS. But it was much more focused on the statistical analysis stuff.

What technologies have you since added?

So I think the biggest change from our perspective is a recognition that the visualization tools have got much better. We are big fans of Tableau and big fans of Qlikview, and those are the two primary ones we use at Ford.

We’ve done a lot more with R and we’re currently evaluating Pentaho. So we’ve really moved from more point solutions for solving particular problems, to more of a framework and understanding different needs in different areas. For example, there may be certain times when SAS is great for analysis because we already have implementations, and it’s easier to get that into production. There are other times when R is a better choice because it’s got certain packages that makes that analysis a lot easier, so we’re working on trying to put all that together.


You’ve now begun to collect data from the cars themselves- what insights has this yielded?

So there’s a good amount of analytics that can be done on the data we collect. It’s all opt-in data- it’s all data that the customers have agreed to share with us. Primarily, they opt-in to find charging stations, and to better understand how their electric vehicles are working. A lot of the stuff we are looking at has to do with how people are using their vehicles, and how to make sure that the features are working correctly.


Ford use text mining and sentiment analysis to gauge public opinion of certain features and models; tell us more about that.

So a lot of the work that we’ve done to support the development of different features, and to figure out what features should go on certain vehicles, is based on what we call very targeted social media. Our internal marketing customers will come to us and ask us, “We’re thinking about using this particular feature, and putting it on a vehicle”- the power liftgate of the Ford Escape is a good example, the three-blink turn signal on the Ford Fiesta is another one. In those circumstances, we will take a look at what most people think about the features on similar vehicles. What are they saying about what they would like to see? But we don’t pull in terabytes of Twitter and we don’t use Facebook- we go to other sources that we have found to be good indicators of what customers like. It’s not shotgun blasts, so to speak; it’s more like very specific rifle shots. This gives us not only quantitative understanding- this customer likes it and this customer doesn’t- but also stories that we can put against it. And these stories are usually when the customers are talking with each other.

One great story is for the three-blink turn signal, when one customer was describing, “So I got the vehicle. I got the three-blink turn signal and I’m not sure whether I like it or not.” And other people were chiming in saying “You know what, I kind of got the same impression, give it another couple of weeks and just think about how you’re using it on the highway and if you give it a couple of weeks you’ll like it.”

The first person signed back on a few days later and said “You know you what, you were right, now that I understand how it works and where it should be used- I think I like it now!” It was actually kind of beautiful, and that story we can put in front of people and say “This is the way people are using it, these are the some of things they’re talking about”. So now, we’re not only getting the numbers, but also the story behind it. Which I think is very important.
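The quantitative side of this kind of analysis can be as simple as scoring comments against word lists. The sketch below is a toy lexicon-based sentiment scorer, not Ford's actual pipeline; the word lists and function name are invented for illustration:

```python
# Toy lexicon-based sentiment scorer; the word lists are invented examples.
POSITIVE = {"like", "love", "great", "useful"}
NEGATIVE = {"hate", "annoying", "confusing", "broken"}

def sentiment(comment: str) -> int:
    """Score a comment: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("now that i understand how it works i think i like it"))  # 1
print(sentiment("this feature is confusing and annoying"))                # -2
```

Real systems go well beyond bag-of-words counting, but the principle of turning free-text customer chatter into a number a product team can track is the same.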

What can we expect from Ford in the future?

I think the position that we’re in right now is really looking at instantiating the experiments we want to do in the analytics space, linking up the different analytics groups, and really focusing on the way that big data technologies allow us to break down data silos.

This company’s been around for over 100 years, and there’s data in different areas that we’ve used for different purposes. So we’ll start looking at that- and start providing value across the different spaces. We’ve put some good effort into that space and got some good traction on it. I can see that as an area that’s going to grow in volume and in importance in the future.


(Featured Image Credit: Hèctor Clivillé)

Formula One Wins Riding on Big Data Analysis
https://dataconomy.ru/2014/11/18/formula-one-wins-riding-on-big-data-analysis/
Tue, 18 Nov 2014

Formula One race predictions are increasingly relying on Big Data analysis. With nearly 243 terabytes of data (a little more than the holdings of the US Library of Congress) generated by the race teams combined at the 2014 U.S. Grand Prix in Austin, TX, dozens of engineers work on remotely tracking the data from as far away as the U.K. in near-real time.

Formula One cars are now equipped with hundreds of sensors which generate data across a multitude of parameters, ranging from tyre pressure to fuel-burn efficiency. This data is transferred thousands of miles over fibre-optic cables on AT&T’s backbone to fully staffed operations at the team headquarters in the U.K. Predictions are made even before the race starts. “We’re looking two hours into the future and trying to predict where we’ll finish the race before it starts,” said Al Peasland, head of technical partnerships at Infiniti Red Bull Racing.

Tony Jardine, currently a Formula One analyst for Sky Sports in the U.K., has almost 20 years of experience in the field, including stints as an engineer and a team manager. He has seen Formula One technology evolve from drawing boards, to basic computers in the 1990s, to today’s massive data collection and analysis. “Data is so good for drivers that the pit crew could literally coach a driver to the most optimal racing conditions based on a live data feed of everything imaginable,” Jardine said.

(Infographic source: Forbes)

One example comes from 2012, when Red Bull Racing driver Sebastian Vettel was clipped from behind in the first lap, damaging his car. Despite the accident, engineers were able to get him back into the race by his first scheduled pit stop, just 10 laps into a 71-lap race, and devise a strategy good enough to secure him the world championship. “To win with such a fine margin and to have an accident like that at the very beginning and for the whole team to come together, it’s the race that films are made of,” said Peasland. “It’s about making educated decisions efficiently,” he added. “And that comes from measuring the right information.”

When the winning margin is in the range of .600 seconds, the stakes are high. Matt Cadieux, chief technology officer for Infiniti Red Bull Racing, remarked, “It’s a data-driven business and always has been. But the sophistication and speed of race simulations have improved just in the last few years, whereas races used to rely more on gut feel and experience.” Data analysis and processing have changed dramatically over the years: simulations that would once take weeks now get done in hours, while simpler simulations take minutes or seconds.



(Image credit: NRMA Services)
