ISS deorbit mission goes to SpaceX for $843M (June 27, 2024)

The ISS deorbit mission, awarded to SpaceX for $843 million, represents a pivotal moment in space exploration in the context of the NASA ISS project.

The International Space Station (ISS), a symbol of international cooperation and scientific progress since its launch in 1998, is nearing the end of its operational life. As such, NASA has initiated plans to safely deorbit the ISS by 2030. This mission is critical for ensuring that the ISS reenters Earth’s atmosphere safely, without posing risks to populated areas. The selection of SpaceX for this task highlights the company’s expertise and reliability in handling complex space missions.

SpaceX’s role in the ISS deorbit mission involves developing a new spacecraft, the U.S. Deorbit Vehicle (USDV), specifically designed to manage the safe deorbiting of the ISS. This spacecraft will guide the ISS into a controlled descent, ensuring a safe reentry and breakup over remote areas. Unlike the Dragon capsule, which SpaceX uses for transporting cargo and crew, the USDV will be uniquely tailored for the deorbit mission. NASA’s confidence in SpaceX underscores the company’s pivotal role in current and future space operations.

The ISS deorbit mission is not just about the safe disposal of the space station; it is also a huge step towards future commercial space stations. As the ISS has served as a platform for thousands of scientific experiments, its deorbiting marks the transition to new commercial ventures in low Earth orbit (LEO). SpaceX’s involvement in this mission sets the stage for the next phase of space exploration, ensuring continuity in space-based research and international collaboration.

SpaceX will develop a new spacecraft, the U.S. Deorbit Vehicle (USDV), specifically for this mission (Image credit)

SpaceX to develop the ISS deorbit vehicle

NASA’s decision to select SpaceX for the ISS deorbit mission is a testament to the company’s capabilities in space technology. The $843 million contract involves the development of the U.S. Deorbit Vehicle (USDV), a spacecraft designed to deorbit the ISS safely. This mission is crucial as it ensures the ISS reenters Earth’s atmosphere without endangering populated areas. The ISS deorbit process requires precise calculations and advanced technology, making SpaceX a suitable candidate for this task.

The USDV will be responsible for guiding the ISS into a controlled descent, ensuring it breaks up safely upon reentry. This task is complex due to the ISS’s large structure and the need to minimize risks to populated regions.




The USDV will incorporate advanced technologies to manage the ISS’s descent, making it distinct from other spacecraft like the Dragon capsule. Once SpaceX has developed the USDV, NASA will take ownership of the spacecraft and operate it throughout the deorbit mission.

The choice to develop a new spacecraft for the ISS deorbit mission, rather than relying on existing vehicles like the Russian Progress spacecraft, highlights the unique challenges involved. The USDV will feature cutting-edge technology to ensure a controlled and safe reentry of the ISS. This decision underscores the complexity and importance of the ISS deorbit mission, which is critical for safe and responsible space operations.

Why is the ISS being retired?

The International Space Station (ISS) is being retired due to a combination of factors:
  1. Age and wear: The ISS has been in operation since 1998, and much of its equipment and structure is aging. This has led to increased maintenance costs and potential safety concerns.

  2. Cost: Maintaining and operating the ISS is expensive, costing billions of dollars each year.

  3. Transition to commercial space stations: NASA is shifting its focus towards supporting the development of commercial space stations in low Earth orbit, which will take over some of the roles currently filled by the ISS.

  4. Safety concerns: With the aging structure, there are concerns about the long-term safety of the ISS. A controlled deorbit mission ensures a safe re-entry and minimizes risks to populated areas.

It’s important to note that the retirement of the ISS does not mark the end of space exploration. Rather, it represents a transition towards a new era of commercial space stations and continued research in low Earth orbit.

The international collaboration behind the NASA ISS project

The ISS has been a model of international collaboration since its launch, involving five major space agencies: NASA, the Canadian Space Agency (CSA), the European Space Agency (ESA), the Japan Aerospace Exploration Agency (JAXA), and the State Space Corporation Roscosmos. The ISS deorbit mission continues this legacy of cooperation, with NASA leading the effort by awarding the contract to SpaceX. The safe deorbit of the ISS is a responsibility shared by all participating agencies, ensuring a coordinated approach to this critical task.

The ISS has facilitated groundbreaking scientific research, allowing experiments in microgravity that are impossible on Earth. As we approach the end of its operational life, the collaborative spirit that built the ISS remains essential in planning its deorbit. The ISS deorbit mission reflects the continued commitment of the participating space agencies to manage the ISS’s end-of-life phase responsibly.

The ISS deorbit mission continues this legacy of cooperation, with NASA leading the effort by awarding the contract to SpaceX (Image credit)

Each space agency has played a vital role in the ISS’s operations, contributing to its maintenance and scientific research. The ISS deorbit mission, now entrusted to SpaceX, symbolizes the culmination of over two decades of international cooperation and scientific achievement. This mission ensures the safe and controlled reentry of the ISS, preventing risks to populated areas and marking the transition to new space endeavors.

The ISS deorbit mission is not just about ending a chapter; it is about laying the groundwork for future space endeavors. As SpaceX takes on this historic task, the world watches with anticipation, knowing that this mission will pave the way for the next phase of space exploration and innovation.


Featured image credit: SpaceX/Unsplash

Big Data Plays Surprising Role In Fight Against Climate Change (June 2, 2016)

Global warming. For a topic as massive, important, and (somehow) controversial, big data is a clear option for sorting through the muck. What information is reliable? What solutions are realistic? When a global issue pits the future of humanity against, well, everything humanity is accustomed to, facts are vital. Big data doesn’t just give the public pretty visualizations; it reveals facts about climate change that make a difference. Instead of knowing only that the climate is changing, data studies are showing how fast, where, and which industries are making it worse. How is data being used in the fight against climate change, and will it work?

Mapping the Past to See the Future

The first step for data is to do what it does best: offer a thorough, easy-to-grasp glimpse into the realities of climate change. If Google Maps looks massively detailed, check out Landsat, NASA’s record of the state of the global land surface. It’s the longest and most complete record of its kind in existence, and it has proved indispensable for studying the impacts of climate change. Even NASA has said such data is essential for monitoring how humans are specifically changing the planet and affecting climate change. The US Environmental Protection Agency has also used data to show where change is happening. Now, it’s clear the largest source of greenhouse emissions is electricity and heat production, though agriculture, forestry and land use is only one percent behind.

EPA data makes it clear that China represents the world’s single largest source of CO2 emissions, with the US in second. The next question is: how can we use that information to make a difference? One great strength of data science is predictive modeling, and that will prove vital in upcoming initiatives. NASA uses data to answer questions about the future of the planet. For example, what will the Earth look like in 2100? By integrating actual measurements with climate simulation data, they created realistic expectations for the years to come, as well as a variety of possibilities dependent on a given greenhouse gas scenario.
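The idea behind scenario-based projection is simple to sketch in code. The toy Python example below fans a single observed baseline out under two assumed emissions pathways; the baseline and warming rates are invented placeholders for illustration and have nothing to do with NASA’s actual climate models.

```python
# Toy illustration of scenario-based projection -- NOT NASA's model.
# The baseline and per-year warming rates are made-up placeholders,
# chosen only to show how one observed starting point fans out into
# different futures under different emissions assumptions.
import numpy as np

years = np.arange(2016, 2101)
baseline_anomaly = 1.0  # assumed observed warming (deg C) in the start year

scenarios = {
    "business_as_usual": 0.035,  # hypothetical deg C of extra warming per year
    "strong_mitigation": 0.010,
}

for name, rate in scenarios.items():
    anomaly = baseline_anomaly + rate * (years - years[0])
    print(f"{name}: projected anomaly in 2100 = {anomaly[-1]:.2f} deg C")
```

Real projections replace those constant rates with physical climate models, but the shape of the output is the same: one trajectory per emissions scenario.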

Ellen Stofan, NASA chief scientist, notes that, “with this new global dataset, people around the world have a valuable new tool to use in planning how to cope with a warming planet.” While NASA won’t necessarily be offering consumer products or passing bills to regulate businesses, their data gives those in power everything they need to make better decisions—and even spend less money.

Creating Smarter, Less Costly Solutions

For cultures and governments that want to stop climate change while spending the least money and giving up the least comfort, accurate predictions remove a lot of pressure. Data means that solutions can be carefully evaluated, simulated and implemented. And that may make governments more open to giving them a try. One of the most popular examples is the Ronald Reagan Building in Washington, D.C., where such solutions saved both energy and $800,000. And that is only one of several examples. Consumers, on the other hand, are less than excited about giving up their daily comforts. But big data solutions are offering consumers the chance to save money by wasting less energy. Fortunately, it doesn’t matter whether people are using less energy for the environment or for themselves.

Of course, solutions aren’t always fun consumer products—sometimes they have much more sinister implications. The effects of climate change are already being felt. Predictive data is also being used to combat the rise in weather disasters, for example by showing where flooding is most likely to happen, or where sea walls would be the most effective. One study by Data-Pop Alliance shows that data’s role in the climate change discussion goes hand-in-hand with disaster relief. As climate change continues to increase the frequency and scale of weather-related hazards, like cyclones, floods and tsunamis, that data will prove invaluable on a regular basis.

Don’t worry—there are also more optimistic possibilities. The U.N.’s Global Pulse initiative started the Big Data Climate Change Challenge in 2014 out of a powerful need to strengthen “the economic case for action on climate change to show where such action is feasible, affordable and effective.” Again, one of data’s big roles is visually showing that change is both possible and economically plausible. The results of the challenge included a forest monitoring system, along with data and computational tools for building low-carbon, sustainable systems.

Spreading and Studying Awareness

Big data is doing plenty of work in the field of climate change, though the ordinary person may never see it. That may be simply because it’s a complicated and, oftentimes, entirely overwhelming subject. That’s why data is, once again, also being leveraged as a tool to get the public involved. NASA, the EPA, and Global Pulse don’t just gather and analyze data, they share it. Data solutions can also be used to analyze what the public knows and cares about: namely, social media analysis. To this end, Global Pulse assembled an incredible map of how the world tweets about climate change.

They show not only the when and where of tweets, but the topic. Is the public more interested in energy, climate change politics, or the state of the oceans? Analyzing public opinion and understanding is often associated with marketing campaigns: when a company wants to sell something, it analyzes its audience. Likewise, those trying to inform and engage the public on a major issue like climate change simply must know what people are thinking. It is important both to create informative, shareable data visualizations, and to register the public response.

There are plenty of shortcomings when it comes to using data to combat climate change. Given the amount of energy information technology uses, it seems a particularly bizarre place to start. A study from IBM even pinpoints several problem areas, including the use of historically shallow data and trying to map complex, nonlinear dynamics. Of course, despite all of these drawbacks, virtually no one in the discussion considers big data anything less than a godsend.


Not Just For Earthlings: NASA Has Big Uses for Big Data (April 1, 2016)

NASA is often on the front end of tech trends. They are constantly coming out with new tools, releasing incredible footage and data, and even supporting open source communities. Given how much information they can and often need to receive from their space equipment, it is no surprise that they need better ways to harness big data. Furthermore, their need to store and quickly receive data means that the technology needs to progress faster and faster to keep up with them.

Deep space yields vast amounts of data

The need for better storage and faster movement is a problem that fundamentally plagues big data. One driverless car could create 1GB of data per second. A space probe could create far more. The amount of data available on the solar system completely overshadows that found on Earth, even though it has barely been explored. There are currently two walls for data in space: transferring and storing. Many missions use radio frequency to transfer information, meaning data usually moves at megabytes per second, not gigabytes per second. This could change in the future and become much faster, as work with optical communications continues to progress. In this method, data is modulated onto and transmitted by laser beams. Such an upgrade could increase data rates to 1,000 times their current levels. NASA is planning missions that would stream over 24 terabytes of data daily—roughly 2.4 times the entire Library of Congress, daily—and that would greatly benefit from new developments like these. Once the data is transmitted, the question of storage must also be addressed.
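A quick back-of-the-envelope check shows why those planned missions strain radio links: streaming 24 terabytes per day implies a demanding sustained rate.

```python
# Back-of-the-envelope check of the figure quoted above: what sustained
# link rate does streaming 24 TB per day imply?
TB = 1e12                 # bytes, using decimal terabytes
seconds_per_day = 24 * 3600

daily_volume = 24 * TB
avg_rate_bytes = daily_volume / seconds_per_day   # bytes per second
avg_rate_gbit = avg_rate_bytes * 8 / 1e9          # gigabits per second

print(f"Average rate: {avg_rate_bytes / 1e6:.0f} MB/s "
      f"(~{avg_rate_gbit:.2f} Gbit/s, sustained around the clock)")
```

That works out to roughly 278 MB/s (about 2.2 Gbit/s) without pause, well beyond the megabyte-per-second radio rates described above.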

NASA missions involve hundreds of terabytes being gathered every hour—one terabyte, as they colorfully describe it, is the equivalent of the information printed on 50,000 trees’ worth of paper. Moreover, that data is not only large, it is highly complex. According to NASA, their data sets include such large amounts of significant metadata that it “challenges current and future data management practice.” Their plan, however, is not to start from the ground up. Automated programs to extract data and better utilization of the cloud could allow them to simply store data better. One principal investigator for the Jet Propulsion Laboratory’s big data initiative, Chris Mattmann, plans to “modify open-source computer codes to create faster, cheaper solutions.” As can always be expected from NASA, they intend to return those modified codes to the open source community for others to use. Another team, working with the Curiosity rover, found their own solution: by downloading raw images and telemetry directly from the rover, they could move them into Amazon S3 storage buckets. This allowed them to upload, process, store and deliver every image from the cloud.
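That download-then-upload pattern is easy to sketch with standard tools. The snippet below is a minimal, hypothetical version using Python’s boto3 library; the source URL, bucket name, and object key are placeholders, not the mission’s real endpoints.

```python
# Sketch of the pattern described above: pull a raw file down, then push
# it into an S3 bucket for cloud-side processing and delivery.
# The URL, bucket, and key are hypothetical placeholders.
import urllib.request

import boto3

SOURCE_URL = "https://example.com/raw/curiosity/image_0001.jpg"  # placeholder
BUCKET = "my-mission-raw-images"                                 # placeholder
KEY = "curiosity/image_0001.jpg"

local_path = "/tmp/image_0001.jpg"
urllib.request.urlretrieve(SOURCE_URL, local_path)   # download the raw image

s3 = boto3.client("s3")                  # uses local AWS credentials
s3.upload_file(local_path, BUCKET, KEY)  # push it into the bucket
print(f"uploaded s3://{BUCKET}/{KEY}")
```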

What does space data look like?

On Earth, NASA’s data helps deal with disasters in real time, predict ecosystem behavior, and even offer educational tools to students and teachers. But what about in space? Could all that data really be worth it? Scientists need big data not only for Earth, but for analyzing real-time solar plasma ejections, monitoring ice caps on Mars, or searching for distant galaxies. “We are the keepers of the data,” says Eric De Jong of JPL, “and the users are the astronomers and scientists who need images, mosaics, maps and movies to find patterns and verify theories.”

Another point of focus for NASA is sharing data with the public. Their data visualizations have made waves both internally and around the globe. One data-based visualization, titled Perpetual Ocean, even went viral, though viewers would never know how much work went into it. In order to visualize Earth’s wind, data from over 30 years of satellite observations and modeling had to be unified. The project also drew into question the way data was visualized: rather than just showing temperature data, they opted to show the flow and unseen forces behind those changes. These visualizations, however, are very different from what people are used to. This data can be used to generate models of the real world, and even some of the simplest visualizations include incredible amounts of data. The image below was based on 5 million gigabytes of data—though not all of that even made it into the final visualization.

Image credit: NASA Visualization Explorer
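The “show the flow, not just the values” idea is easy to experiment with. The sketch below renders a synthetic 2-D vector field as streamlines with matplotlib; the field is made up, standing in for real wind or current data.

```python
# Illustrative only: render a synthetic vector field as flow lines,
# the same presentation choice (motion over raw values) described above.
import matplotlib.pyplot as plt
import numpy as np

y, x = np.mgrid[-3:3:100j, -3:3:100j]   # regular grid
u = -y                                  # synthetic "wind" components,
v = x + 0.3 * np.sin(2 * x)             # stand-ins for real measurements

speed = np.hypot(u, v)                  # color the lines by flow speed
plt.streamplot(x, y, u, v, color=speed, cmap="viridis", density=1.5)
plt.title("Flow-style rendering of a synthetic vector field")
plt.savefig("flow.png")
```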

The Scientific Visualization Studio (SVS) is one division in NASA that focuses solely on visualizing data, and they have the tools and team to do amazing work. Director Dr. Horace Mitchell told Mashable that they use a lot of the same software as Pixar, which also reveals the mindset behind the program. NASA has gone so far as to release their own data visualization app in the Apple App Store. Again, they are not only telling data stories for internal scientists, but also helping the general public better understand and see what is going on around them.

#NASADatanauts

NASA is far from finished in the big data field. Predictive analytics and further solutions are still out there, and NASA is officially taking the plunge to address the issues head on. They are pioneering a user community of “Datanauts” who will share experiences, identify high value datasets, and find applications that could benefit the community. It is also worth noting that NASA recently stated an intention to focus on women in the data science field and, in line with that goal, the first class of Datanauts will be women from a variety of backgrounds.

The future of big data is very important for NASA. The fact that they must process so much data that it “challenges current and future data management practice” is not to be taken lightly. Data storage solutions may move at lightning speed, but so does data collection. Once all of the data is secured, there will no doubt be even more stunning and informative visualizations, challenging the ideas of what makes up a proper visualization. There will surely be plenty more viral space data stories in the future.


The Power of Data to Know the World, to Improve the World, and to Change the World (October 14, 2015)

Dr. Kirk Borne is a data scientist and an astrophysicist. He has been Principal Data Scientist in the Strategic Innovation Group at Booz Allen Hamilton since 2015. He was Professor of Astrophysics and Computational Science in the George Mason University (GMU) School of Physics, Astronomy, and Computational Sciences during 2003-2015. He served as undergraduate advisor for the GMU Data Science program and graduate advisor to students in the Computational Science and Informatics PhD program. Prior to that, he spent nearly 20 years supporting NASA projects, including NASA’s Hubble Space Telescope as Data Archive Project Scientist, NASA’s Astronomy Data Center, and NASA’s Space Science Data Operations Office. He has extensive experience in large scientific databases and information systems, including expertise in scientific data mining. He contributed to the design and development of the new Large Synoptic Survey Telescope (LSST), in the areas of science data management, informatics and statistical science research, galaxies research, and education and public outreach.

We are proud to have Kirk presenting at Data Natives 2015!


How did you make the jump from astronomy to data science?

The jump was a gradual one (a lifelong evolution) for me. As an astronomer, I was always working with data from telescopes of all sizes. My experience with these various scientific instruments and observatories in different parts of the world (including telescopes in the USA, in Chile, and at the German-Spanish Calar Alto Observatory in Spain) led me to work as a research scientist for the Hubble Space Telescope in the 1980’s. My service work at the Space Telescope Science Institute included database design, development and report generation, which ultimately led to my appointment as NASA’s Hubble Data Archive Project Scientist. We created one of the world’s first major public (open) data repositories for scientific researchers: open access, user-friendly search, web-based interfaces, and more.

This work eventually opened up new opportunities for me at NASA, and I became a contract manager in 1995 for NASA’s Astrophysics Data Facility and Astronomy Data Center, within the USA’s National Space Science Data Center (NSSDC). As I worked with data more and more in my daily professional life, developing better tools for management, indexing, search, access, analysis, visualization, and discovery, it was inevitable that a transition in my research and professional interests would migrate toward data science.

Can you describe the journey that led to where you are today, and your major influences along the way?

Usama Fayyad, the first Chief Data Officer

When I was at the NSSDC in the 1990’s, we catalogued, archived, curated, and provided access to over 15,000 space science datasets from many thousands of space instruments. It was during this period that I began to notice a dramatic increase in the size of the datasets that we were ingesting. The biggest jump occurred in 1998 when we were asked to ingest one single experiment’s data, whose total volume of 2 Terabytes more than doubled the total data volume (1 Terabyte) of the other 15,000 datasets combined! It was then that I realized that things were dramatically changing in the expanding “data universe”. I began to explore the power of data mining and machine learning to make discoveries in massive data – I was fascinated with finding the “unknown unknowns” in large data collections. I was influenced greatly by several groups, including: (1) the work at IBM (on the Advanced Scout project that mined the play-by-play databases for professional basketball games); (2) the customer-based data mining and marketing efforts at Capital One Credit Card Company (which was described in a business magazine that I found in the office snack room one day); and (3) Usama Fayyad (who was working at NASA’s Jet Propulsion Lab with astronomers to mine large galaxy databases). That was the conspiracy of events and inspirational persons that influenced me the most at the beginning of my transition from traditional astronomical research (studying colliding and merging galaxies) to multidisciplinary data-driven research in numerous disciplines, organizations, and industries. Those influences led me to data science and into becoming the data scientist that I am today.

The full transformation for me occurred in 2003 when I left NASA (after 18 years) to become a faculty member at George Mason University in the Computational Science and Informatics (Data Science) PhD program. Ultimately, my colleagues and I launched the world’s first undergraduate Data Science degree program in 2007. The world was taking note of big data and data science, and we were right there in the middle of it. After teaching, advising, and doing research in data science at the academic level for 12 years, I was hungry to do more, in a broader context, for more organizations, in a variety of industries where data analytics is changing the world. When Booz Allen Hamilton offered me that opportunity as their Principal Data Scientist in the NextGen Analytics and Data Science group (consisting of more than 500 data scientists), I left the university and joined this remarkable team in May 2015.

What major milestones or landmark events have stood out to you during your time in the industry? What are you still waiting for?

The first milestone goes back over 30 years, when professional organizations and societies formed initiatives around data mining and knowledge discovery from data. The creation of the KDnuggets newsletter by Gregory Piatetsky-Shapiro in 1993 was a definite landmark event in the history of data science. The birth of Google is another big one – they set out to do more than be a search engine company: to organize all of the world’s information and to make it universally accessible and useful, which they accomplish with some amazing mathematics (linear algebra, the one university class that I took 40 years ago that is now at the top of my recommendations when new students ask me how to get into data science). Then there was a series of events that brought out the importance of data mining (and data science): the terror attacks in the USA on September 11, 2001; the Washington DC sniper case of 2002; and the now-famous Walmart strawberry pop-tarts data mining story of 2004. Those isolated incidents were not isolated in my mind or in the minds of data scientists – the power of data to know the world, to improve the world, and to change the world was increasingly more visible to everyone.


Internet of Things: One of the three major changes coming for data science.

Then, in 2011-2012, there were three landmark events that changed everything for us: (a) the publication of the McKinsey research report “Big data: The next frontier for innovation, competition, and productivity” in 2011 emphasized the dramatic shortage in data professionals that the workforce would be facing within the next few years; (b) the President of the USA announced the National Big Data Initiative (including several hundred million dollars of research investments); and (c) the publication of an article in the prestigious Harvard Business Review with the title “Data Scientist: The Sexiest Job of the 21st Century.” From my perspective, those three events launched the Data Science revolution that we are now experiencing. The next big changes will include the internet of things (with ubiquitous sensors everywhere, collecting massive streams of data on everything, all the time), also faster analytics (in-memory, on-the-chip, cleverer algorithms, machine learning on clusters and in the cloud, quantum machine learning, and more), and greater conversion of all businesses into data businesses.

In the 30 years since you joined the Space Telescope Science Institute, what have been the most significant challenges you have faced?

One of the biggest challenges that everyone has faced in this field is cultural inertia, specifically resistance to change. Whether we look at academic institutions, government agencies, or commercial businesses, there have been a lot of folks who criticized, minimized, or otherwise ignored the revolution that was growing around them. Trying to get organizations, industries, or professions to change requires years of patience, persistence, and perspiration. Those of us who lived through those challenges are seeing the fruits of our labors now, in fact it is not just fruits but entire forests of opportunities! I learned through the years that the best way to bring about big changes fast is to go right to the top – so I was lead author on two position papers in 2009 that were submitted to the USA’s National Research Council of the National Academies of Science. One paper was focused on the transformation of my field (Astronomy) into a data-oriented data science research discipline (Astroinformatics), and the second paper was focused on changing the education system (not just in astronomy, but in all aspects of school-based learning and lifelong learning) by incorporating “Data Science for the Masses” everywhere in all learning settings. Those papers got noticed by significant persons, and the transformations are now well underway.


The challenges that we previously faced have not completely evaporated, but there is hope that they are fading. We are now seeing that the resistance comes not so much from the leadership within organizations, but from the mid-level workforce – they haven’t entirely embraced the changes that a data-driven business requires, but they are moving in the right direction. The leaders of organizations are now encouraging, sponsoring, and rewarding such transformations in people, processes, and products. That is exactly what my company Booz Allen Hamilton is doing, and it is a wonderful thing to be part of.

Of your achievements and accolades so far, of which are you most proud?

One of the many projects Kirk has contributed to: the Hubble Space Telescope

That is a tough question, but I presume that you are not referring to my wonderful family, children, and grandchildren. I am most proud of my humility. (Hint: that was a small joke.) Seriously, I am humbled by the opportunities, talents, and aptitudes that I have been given. So, when I say that “I am proud of…”, what I really mean to say is that “I am humbled by…”. So, here we go… I am proud that I survived a very tough undergraduate education in Physics. I am proud that I completed my doctoral degree in one of the world’s top astronomy programs (Caltech). I am proud of the awards that I won for my work on the Hubble Space Telescope project. I am proud of the innovations that my group at the NSSDC created around the use and exploration of large datasets. I am proud of co-creating the world’s first undergraduate Data Science degree program. I am proud of my PhD students who have produced some incredible doctoral dissertations. I am proud of the Faculty Impact Award that I received from the Dean of George Mason University’s College of Science. I am proud to be among the worldwide top influencers in big data and data science. I am proud to be a member of the awesome Booz Allen Hamilton data science team. And I am proud to be a part of the Data Natives community of data-driven world-changing innovators. (As you can see, I have a lot to be grateful for – hence the humility!)

What kind of problems are you aiming to tackle in your role at Booz Allen Hamilton?

I get great pleasure in finding solutions through data. The scientist in me loves to explore data (evidence), find new discoveries, develop a hypothesis to explain it, and then test those theories. At Booz Allen Hamilton I am privileged to exercise that Data Scientist role across numerous internal and external projects: human resources, organizational change, training and mentoring, marketing, customer engagement, behavior analytics, risk mitigation, novelty discovery (including fraud detection, anomaly detection, surprise discovery), thought leadership, socialization of data science across organizations and industries, data technologies (including machine learning, data management innovations, graph analytics), predictive and prescriptive analytics, geospatial-temporal modeling, and much more. I feel like a child in a candy store most of the time, and I have to exercise some good judgement as to where to get involved. It is tempting to get involved in too many things.

What advice would you give to the Kirk Borne of 30 years ago? Anything you would have done differently?

I wrote a blog for MapR about my “growth hacker’s journey”. My message throughout that self-history was that I was often in the right place at the right time. I usually didn’t recognize that reality when I was experiencing each of those career moments. It took many years of hindsight to see the growth trajectory, how it was born, and how it took shape. So, I guess I wouldn’t have done things too differently (maybe on the micro level, but definitely not on the macro level). I would tell myself of 30 years ago the same things that I would tell a young person today at the start of their career:

(1) Don’t over-plan (over-specify) your career, since it will evolve in unexpected ways.
(2) Expect to find the essential value and meaning of your work later in life for the things that you are doing now, and that’s okay.
(3) Be ready to make big changes when the opportunities present themselves (just as I left the great Hubble Space Telescope project to pursue bigger opportunities, then I left NASA after 18 years to create a new academic discipline, and then I left a tenured Full Professorship in an innovative university in order to join a brilliant revolutionary business).
(4) Trust your training – I am amazed at how all sorts of things that I learned and experienced have become relevant and useful at different stages of my life.
(5) Be more tolerant of your own mistakes – remember this saying: “Good judgement comes from experience, and experience comes from bad judgement.”
(6) Listen to, but don’t react to the naysayers – do what you know is the right thing for you.
(7) Your aptitudes will be more valuable to you in the long term than your skills (aptitudes include the 7 C’s: cool under the pressure of hard work, courageous problem-solver, curious, creative, communicative, collaborative, and commitment to lifelong learning).
(8) And finally, don’t confuse your job with your career – I had many jobs, but I have had only one career: being a scientist who loves to make discoveries from data. When your lifelong passion becomes your career, hold on to that very tightly.

Which companies or individuals inspire you, and keep you motivated to achieve great things?

Of course, I have to mention my current employer Booz Allen Hamilton, which is making all of the right moves in the area of data analytics and data science. I also am inspired by the top big data and data science influencers – some of them are younger than my own children, and that excites me to see that the next generation is embracing the power of data to transform the world. I am inspired by organizations who apply data for social good, which includes Booz Allen, DataKind, Bayes Impact, Kaggle, and many others. I am motivated by the awesome and fast accomplishments, discoveries, and innovations that are occurring all around us in the data analytics world: in government, businesses, and academia. I have shared the stories of so many of those companies and organizations on Twitter, I cannot begin to keep count of how many — though maybe my 44,000 tweets are a good estimate of the number of data stories that have been worth sharing with the data science social community. Those are the people (my faithful and fearless followers who track my Twitter firehose of data stories) who are definitely the individuals who keep me motivated! The flood of new and interesting data sources everyday motivates all of us data natives to achieve great things, because “Data is what we do”. This is a great time to be doing data!


(image credit: Evan Bench, CC2.0)

Global Climate Change Data Competition (July 9, 2015)

On June 6th, 2015, Big Data Utah and the Boulder/Denver Big Data Users Group (BDBDUG) kicked off the Global Data Competition, an inaugural event focused on climate change and split into 22 regions around the world.

The competition has two phases, each with a scoring system that allows solutions to be compared within the region in which they are submitted. Competitors can compete at the individual, group (2-10 people), or organization (10+) level.

Phase 1, centered on Learning Image Classification, is now underway. Participants are tasked with taking a set of 4,766 images of Mars and predicting whether or not a volcano is depicted in each image. All the data and tips necessary to get started can be found at http://www.global-data-competition.com/code-and-submissions/.
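Phase 1 is, in effect, a standard supervised image-classification task. Below is a minimal baseline sketch in Python with scikit-learn; the file paths and label file are assumptions about how a participant might lay out the downloaded data (and assume a labeled training subset is available), not the competition’s actual layout.

```python
# Baseline sketch for Phase 1: binary classification (volcano / no volcano)
# of Mars images. Paths, file names, and the label file are placeholders.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def load_image(path, size=(64, 64)):
    """Load one image as a flattened grayscale feature vector."""
    return np.asarray(Image.open(path).convert("L").resize(size)).ravel()

paths = [f"images/mars_{i:04d}.png" for i in range(4766)]  # placeholder names
labels = np.loadtxt("labels.csv", dtype=int)               # 1 = volcano present

X = np.stack([load_image(p) for p in paths])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```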

Beginning in late August, Phase 2 will include sessions, videos and tutorials on getting started with data science, reviewing exploratory data analysis techniques, data visualization and web scraping. The competition during this phase will tie the training into acquiring data from a variety of sources, including NASA’s newly released 11 Terabytes of data.

Visit www.global-data-competition.com for more information including registration, code and data.

(image credit: OliBac)
NASA’s Big Data Climate Change Model (June 17, 2015)

When both NASA and the Pope are speaking out about climate change, you know something is up. Yesterday the NASA Earth Exchange (NEX) unveiled a public data set showing how rainfall, temperature and CO2 levels will change over the next 85 years.

The high-resolution data, which is as granular as looking at individual towns changing on a daily basis, will help scientists predict catastrophic environmental events such as floods and droughts. These insights will be particularly valuable to the agricultural industry, where they will help to optimize crop yield and prevent losses.

“NASA is in the business of taking what we’ve learned about our planet from space and creating new products that help us all safeguard our future,” said Ellen Stofan, NASA chief scientist. “With this new global dataset, people around the world have a valuable new tool to use in planning how to cope with a warming planet.”

This NASA dataset integrates actual measurements from around the world with data from climate simulations created under the international Coupled Model Intercomparison Project Phase 5 (CMIP5). These climate simulations used the best physical models of the climate system available to provide forecasts of what the global climate might look like under two different greenhouse gas emissions scenarios: a “business as usual” scenario based on current trends and an “extreme case” with a significant increase in emissions.
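For readers who download the projections, comparing the two scenarios is straightforward with standard tools. The sketch below uses Python’s xarray library; the file names, the variable name (“tasmax”, daily maximum temperature), and the mapping of files to scenarios are assumptions to be checked against the actual file listing linked below.

```python
# Sketch: compare end-of-century temperatures under the two emissions
# scenarios. File names and the "tasmax" variable are assumptions about
# the NetCDF layout -- verify against the real dataset listing.
import xarray as xr

lower = xr.open_dataset("tasmax_scenario_lower_2100.nc")    # placeholder file
higher = xr.open_dataset("tasmax_scenario_higher_2100.nc")  # placeholder file

# Global, annual-mean spread between the two scenarios.
diff = (higher["tasmax"] - lower["tasmax"]).mean(dim=("time", "lat", "lon"))
print(f"scenario spread in 2100: {float(diff):.2f} K")
```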

Additional information about the new NASA climate projection dataset is available at:

https://nex.nasa.gov/nex/projects/1356/

The dataset is available for download at:

https://cds.nccs.nasa.gov/nex-gddp/

OpenNEX information and training materials are available at:

http://nex.nasa.gov/opennex

For more information about NASA’s Earth science activities, visit:

http://www.nasa.gov/earth

[NASA]

Astronomers Harness Machine Learning to Better Understand the Universe (January 12, 2015)

As the scope of machine learning gradually grows in enterprises and scientific fields, another field it has penetrated is astronomy, helping astronomers understand the properties of large numbers of stars.

“It’s like video-streaming services not only predicting what you would like to watch in the future, but also your current age, based on your viewing preferences,” quips Adam Miller of NASA’s Jet Propulsion Laboratory in Pasadena, California.

Miller is the lead author of a new report on the findings, which he presented at the annual American Astronomical Society meeting in Seattle last week and which also appears in the Astrophysical Journal. “We are predicting fundamental properties of the stars,” he said.

Utilizing this branch of artificial intelligence, scientists are sorting through thousands of stars in our galaxy and learning their sizes, compositions and other basic traits from sky survey images.

A news release explains that, using the new technique, computer algorithms comb through available stacks of images, identifying patterns that reveal a star’s properties and gathering data on billions of stars with comparatively little time and expense. Normally, these kinds of details require a spectrum, which is a detailed sifting of the starlight into different wavelengths.

The machines first went through a “training period”: Miller and his colleagues started with 9,000 stars as their training set. Spectra for these stars revealed several of their basic properties, such as size, temperature and the amount of heavy elements like iron. The varying brightness of these same stars had been recorded by the Sloan Digital Sky Survey, producing plots called light curves; both sets were fed into the machine to help it make associations between them.

After the training period, the computer was able to make predictions about other stars on its own by analyzing only their light curves, processing volumes of data that humans alone never could.
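In outline, that workflow maps onto standard machine-learning tooling. The sketch below is illustrative only, using randomly generated placeholder data rather than the Sloan measurements: summarize each light curve into a few features, train a regressor on stars whose temperatures are known from spectra, then predict for a star that has only a light curve.

```python
# Illustrative sketch of the train-on-spectra, predict-from-light-curves
# workflow described above. All data here is randomly generated placeholder
# material, not the Sloan Digital Sky Survey measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def featurize(light_curve):
    """Collapse a brightness time series into a few summary statistics."""
    lc = np.asarray(light_curve)
    return [lc.mean(), lc.std(), lc.max() - lc.min(),
            np.median(np.abs(np.diff(lc)))]

rng = np.random.default_rng(0)
train_curves = [rng.normal(15.0, 0.1, 300) for _ in range(9000)]  # placeholders
train_temps = rng.uniform(4000, 8000, 9000)  # Kelvin, "known from spectra"

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit([featurize(c) for c in train_curves], train_temps)

new_star = rng.normal(15.0, 0.1, 300)        # a star with no spectrum
prediction = model.predict([featurize(new_star)])[0]
print(f"predicted temperature: {prediction:.0f} K")
```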



(Image credit: NASA)

 

Visual Cues in Big Data for Analytics and Discovery (November 12, 2014)

One of the most fun outcomes that you can achieve with your data is to discover new and interesting things. Sometimes, the most interesting thing is the detection of a novel, unexpected, surprising object, event, or behavior – i.e., the outlier, the thing that falls outside the bounds of your original expectations, the thing that signals something new about your data domain (a new class of behavior, an anomaly in the data processing pipeline, or an error in the data collection activity). The more quickly you can find the interesting features and characteristics within your data collection, the more likely you are to improve decision-making and responsiveness in your data-driven workflows.

Tapping into the human natural cognitive ability to see patterns quickly and to detect anomalies readily is powerful medicine for big data analytics headaches.  That’s where data visualization shines most brightly in the big data firmament!  One could even say that visualization is an efficiency amplification methodology for discovery from data.  But visualization contributes to more than just discovery – it is an analytics ally.

The phrase “A picture is worth a thousand words” is a very common expression. In the modern computing era (where one word is equal to 4 bytes), we should say that a picture is worth 4 Kbytes. However, the rich complexity (high dimensionality and variety) of big data calls for a richer visual experience – perhaps encoding Megabytes (not Kbytes) of information in a single display.  With such capabilities, we can exploit human visual cognitive abilities more effectively.  In particular, visualization is especially useful and powerful for seeing patterns (trends) in your data and for seeing things that break the pattern (outliers). When used in this way (for description, discovery, prediction, and insights) within large datasets, visualization moves into the scientific realm of visual analytics.  The significance of this potential analytics application explains the recent rapid growth in research and development of visualization tools for visual storytelling with data.

One of the best new tools in the visualization universe comes from VisualCue Technologies. They were recently named a winner of the 2014 Ventana Research Business Intelligence Innovation award.

VisualCue uses semantic cues in the form of glyphs (icons, symbols) within the visualization. This is doubly powerful in that it not only exploits visual cognition, but it employs semantics in the presentation and display of the information content within a big data stream. Semantic technologies in general are the future of big data discovery and analytics – going beyond the bits and the bytes, and delivering more than content, the semantics of the data reveal to us what the data means and what is its context: not only the “what”, but the “why”. One of the first examples of glyph-enabled data visualization was a NASA project called ViSBARD (Visual System for Browsing, Analysis, and Retrieval of Data). This ground-breaking system was specifically designed for use with space physics data from spacecraft in the interplanetary environment. ViSBARD displayed six or more dimensions of scientific data simultaneously, enabling discovery of patterns and correlations across multiple parameters at once, but it did not provide any semantic elements. VisualCue is now making major advancements and significant contributions in the direction of visual data semantics.

The semantic visual cue contains an iconic representation of the meaning of a particular data element in the visualization. For example, if you are displaying international shipping manifests, the icons would include an iconic image of a ship, which can be color-coded according to some metric (e.g., time to delivery, load, or country of origin).  The visual cue tile can also include smaller embedded icons representing key performance indicators (KPIs) related to the business intelligence questions that the analyst needs to track and monitor for insights and data-driven business decisions based on a dynamic, evolving, complex (high-dimensional) database.


VisualCue allows a single display of multiple visual cues (arranged in tiles on an end-user’s dashboard) to simultaneously present and track numerous KPIs, processes, assets, events, clients, suppliers, customers, transactions, etc., using color-coded alerts (green, yellow, red) to signal the spots in the data stream that need the most immediate attention, or where something novel (interesting) is occurring.  The arrangement, grouping, and dashboard layout of the tiles is completely user-configurable.  In fact, if a dashboard is not your thing, you can display the tiles on a map – oh yes, geospatial location-based business intelligence is hot stuff, and now becoming even hotter!  You can also display the tiles in a diagram of your own choice: a floorplan, blueprints, a schematic, a workflow, or whatever graphic display empowers your business decision-making. The sky is the limit! In fact, I hope to try this out on the sky –i.e., putting astronomical event data on a VisualCue-enabled sky map.
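Stripped of the visuals, the green/yellow/red alerting described above reduces to per-KPI threshold rules. The following is a generic sketch of that logic in Python, not VisualCue’s actual product or API; the KPI names and thresholds are invented for illustration.

```python
# Generic sketch of color-coded KPI alerting -- not VisualCue's API.
# KPI names and threshold values are invented placeholders.
KPI_THRESHOLDS = {
    # kpi: (green if value <= first, yellow if <= second, else red)
    "time_to_delivery_days": (3, 7),
    "open_tickets": (10, 25),
}

def tile_color(kpi, value):
    """Map a KPI value to the tile's alert color."""
    green_max, yellow_max = KPI_THRESHOLDS[kpi]
    if value <= green_max:
        return "green"
    if value <= yellow_max:
        return "yellow"
    return "red"

for kpi, value in [("time_to_delivery_days", 9), ("open_tickets", 12)]:
    print(f"{kpi} = {value} -> {tile_color(kpi, value)}")
```

A real tile aggregates many such KPI states into one glyph, which is what lets an analyst scan hundreds of tiles on a dashboard and spot the red ones at a glance.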

As is the case with the most powerful data exploration and visualization tools, VisualCue’s tiles are just the icing on the big data cake. In other words, a user can click on a tile and drill down deeper into the data, for further discovery and analytics. In this way, the analyst can more effectively and efficiently determine the root cause of specific visual cues that were displayed in a particular tile.

The VisualCue Technologies solution is easily configurable and extensible to many use cases and business domains. It has a drag-and-drop capability, including a growing library of tiles that you can use “out of the box”. You can also channel your inner “data artist” and use their tile builder tool in order to design and create your own personalized cues and tiles. Some of the application domains that are already supported in their tile library include IT system administration, call center, agent performance, health, education, logistics, vehicle fleet management, supply chain, delivery routing, business process management, and more.

Business intelligence is clearly a driver in the development of the new visual languages that enable efficient and effective big data discovery and analytics.  VisualCue Technologies is an emerging leader in this field. Take a cue from me – check them out. You will be glad you did.



Kirk is a data scientist, top big data influencer, and professor of astrophysics and computational science at George Mason University. He spent nearly 20 years supporting NASA projects, including NASA’s Hubble Space Telescope as Data Archive Project Scientist, NASA’s Astronomy Data Center, and NASA’s Space Science Data Operations Office. He has extensive experience in large scientific databases and information systems, including expertise in scientific data mining. He is currently working on the design and development of the proposed Large Synoptic Survey Telescope (LSST), for which he is contributing in the areas of science data management, informatics and statistical science research, galaxies research, and education and public outreach. His writing and data reflections can be found at Rocket-Powered Data Science.


(Image Credit: VisualCue)

The Data Behind Deep Space Exploration (October 30, 2014)

The space race began decades ago, but it has now reached unprecedented levels, with new frontiers scaled on a near-daily basis.

Back then, the race was about nations getting into space at all. Now, it is about how far into space nations can go. With each nation’s every attempt (successful or otherwise) to cross newer, farther frontiers in space come larger and larger volumes of data. The number of Earth-gazing satellites has nearly doubled in 2014 alone.

Open Data: The Final Frontier?

Certainly, the instruments used to record this data are incredibly advanced. But the fact remains that more information is sent than machines have the capacity to interpret. Openly sourcing this data might open it up to new developers and professionals who could potentially create solutions for dealing with it. It is not clear whether supercomputers currently in existence, as equipped and formidable as they are, would be able to effectively process and interpret this data.

But the issues with making this data open are manifold. First, there is the sheer size of the data: superfast connections that can transfer such massive volumes are available to governments alone. A single developer, group or collective may not have the infrastructural wherewithal even to access this data, which, moreover, would not arrive in any single, standard format.

Relatedly, the formats in which this big data is downloaded, stored, accessed and interpreted may not be obvious, apparent or easily understood by an untrained eye.

A possible workaround to this might be the initiation of training programs for interested developers, ones that are constantly updated and reconstituted to deal with the burgeoning amounts of data and expanding frontiers explored.

This, however, could have problems of its own, closely interrelated with those of making collected data open source. In an ideal world, interstellar exploration would be a collaborative operation that benefits the entire planet. This is not often the case: a significant portion of the ‘space race’, though advantageous to humanity as a whole, is played out as a competition among a select few nations.

Space exploration, the world round, is as of now carried out exclusively by national governments, and this data is hosted on government servers. With this comes the very real risk of cyber espionage, which both state agencies the world over and private organisations and people increasingly have the means and skills to carry out. The smallest of vulnerabilities in a massive system dealing with this volume of big data could have significant repercussions for national security, especially in politically charged times such as these.

With a largely non-collaborative space race, data protection remains nationalised.

Selling or outsourcing this data would mean the privatisation of integral cogs of a nation’s space operations, which would echo the concerns outlined previously. There are myriad DBMS (Database Management Systems) currently available, many of which are open source; however, the massive amount of data collected and generated by interstellar vehicles, satellites and rovers is not only extremely intricate but also interrelated. Even if this data were broken down into more basic, less tangled and potentially less complex chunks or packets, it would still need to be coordinated, relayed and interpreted, which brings us back to the initial problem: terrestrial data transfer is not equipped to handle it.

What’s Next for Big Data and Space

Big data transferred by satellites is not only utilised for the space race, however. Finance, insurance, agriculture, forestry, fishing, mapping, manufacturing, shipping, mining, sustainable industries, and energy enterprises are using eyes and ears in the sky to make better decisions and improve operational efficiency with actionable data sent, ultimately, from space. The consequences of utilising big data reach the very grassroots of a nation, which is especially true for agrarian economies such as those of South-East Asia.

With the current volumes of big data, the resources available are also unfairly skewed. Of the ten supercomputers available in the world that could handle these volumes of data, six are in the United States and owned by its government.

Plans are currently in the works to build the world’s largest radio telescope, the Square Kilometre Array (SKA). Although this will be a worldwide collaborative effort, the telescope itself will be sited in South Africa and Australia. Work on the SKA, which will become the largest radio telescope ever built, is due to begin in 2018.

The data generated by the SKA will amount to about 700 terabytes per second, “equivalent to all the data flowing on the Internet every two days”. According to statements by NASA, scientists are currently using cloud computing to store this kind of data, but vulnerabilities in the cloud, which will no doubt be intensely fortified, could lead to a data leak in the style of recent happenings, except with repercussions that are far more serious. Existing data management firms such as Oracle could, of course, be sought out to help find management solutions.

In addition to management, the visualisation and interpretation of this data is important; solutions must concurrently be arrived at for this problem as well. Sorting through the kind of big data that is received and sent by newer and larger satellites each day is a gargantuan task that cannot be catalogued or organised by a human alone, a fact NASA itself has acknowledged.

Sponsorship for space-data collection platforms (satellites, interstellar vehicles, et al.) has now begun to shift from government to more privatised, commercial funding.

Skybox Imaging, a satellite-operating firm that monitors terrestrial surfaces to track change, built and launched the world’s smallest high-resolution imaging satellite. In August 2014, Skybox Imaging was acquired by Google Inc. for US$500 million.

In 2013, Monsanto, the American multinational agrochemical and agricultural biotechnology corporation, acquired The Climate Corporation, which records geoclimatic data and relays that information back to earth.

Although interplanetary and interstellar exploration has thus far remained government-owned, funded and regulated, the realm of space, satellites and space-related devices is now becoming increasingly privatised, with increasingly detailed research into climatology, topography and tomography, and an increased ability to predict tectonic and climatic events.

These satellites, like the increasingly advanced telescopes recording information, data and images from outer space, now record more layered, complex information that needs real-time collection and interpretation. The buck, however, does not stop with the management and sorting of this data, which, finally, needs to be made both accessible and understandable to the public via programs, graphics, maps and interactive user interfaces such as Google Maps or its more detailed terrestrial brother, Google Earth. This could result in a definitive boost to the software sector, spurring it on fairly quickly.

Space-related big data is not only all-encompassing in what it records; it also holds endless possibilities for humanity, in terms of both scientific advancement and economic activity.



Writer and communications professional by day, musician by night, Anuradha Santhanam is a former social scientist at the LSE. Her writing focuses on human rights, socioeconomics, technology, innovation and space, world politics and culture. A programmer herself, Anuradha has spent the past year studying and researching, among other things, data and technological governance. An amateur astronomer, she is also passionate about motorsport.
More of her writing is available here and she can be found on Twitter at @anumccartney.


 

(Image credit: NASA’s Marshall Space Flight Center)

 
