HP – Dataconomy https://dataconomy.ru Bridging the gap between technology and business

Ken Jee explains how to build a career as a data scientist https://dataconomy.ru/2022/03/22/ken-jee-explains-how-to-build-a-career-as-a-data-scientist/ Tue, 22 Mar 2022 11:47:59 +0000

There’s no doubt that data scientists are in high demand right now. Companies are looking for people who can help them make sense of all the data they’re collecting and use it to make better decisions.

Being a data scientist is a great way to start or further your career. It’s a field on the rise, and there are many opportunities for those with the right skills. We talked with Ken Jee, Head of Data Science at Scouts Consulting Group, about how to build a career in data science.

With a goal-oriented approach to problem solving, data science evangelist Ken Jee is admired for his work in the field. He is the Head of Data Science at Scouts Consulting Group, and creates and shares content via his podcast, website, YouTube channel, and 365 DataScience course offering; frequently contributes to Kaggle; and is a Z by HP global ambassador. He recently helped design data science challenges featured in “Unlocked,” an interactive film from Z by HP. The film and companion website present data scientists with the opportunity to participate in a series of problem-solving challenges, wrapped in a compelling narrative that showcases the value of data science to non-technical stakeholders. We spoke with Jee about how he built a successful career as a data scientist.

What is your background and how did you get started in data science?

All my life, I played competitive sports, and I played golf in college. One of the ways that I found I could create a competitive edge was by analyzing my data and looking at the efficiencies that could be created by better understanding my game, and allocating time towards practice more effectively. Over time I became interested in the data of professional athletes, particularly golfers, so I started to analyze their performance to predict the outcome of events. I tried to play golf professionally for a bit, but it turns out I am better at analyzing data than playing the game itself.

What advice do you give young people starting out or wanting to get into the field?

If they’re just starting out learning data science, I recommend that they just choose a path and stick to it. A lot of times people get really wrapped up in whether they’re taking the right course and end up spinning their wheels. Their time would be better spent just learning, whatever path they take. I will also say that the best way to land a job and get opportunities is by creating a portfolio by doing data science. Create or find data, whether it’s on Kaggle or from somewhere else, like the “Unlocked” challenge, show your work to the world, get feedback and use that to improve your skills.

“Unlocked” is a short film that presents viewers with a series of data science challenges that I, along with other Z by HP Data Science Global Ambassadors, helped design. There are challenges that involve data visualization using environmental data; natural language processing or text analysis using a lot of synthesized blog posts and internet data; signal processing of audio information; and computer vision to analyze pictures, along with accompanying tutorials and sample data sets. We wanted to highlight a variety of things that we thought were very exciting within the domain.

There’s a lot of fun in each of these challenges. We’re just really excited to be able to showcase it in such a high production value way. I also think that the film itself shows the essence of data science. A lot of people’s eyes glaze over when they hear about big data, algorithms and coding. I jump out of bed in the morning happy to do this work because we see the tangible impact of the change that we’re creating, and in “Unlocked,” you’re able to follow along in an exciting story. You also get to directly see how the data that you’re analyzing is integrated into the solutions that the characters are creating.

How has technology opened doors for you in your career?

I would argue that technology built my entire career, particularly machine learning and AI tech. This space has given me plenty to talk about in the content that I create, but it has also helped to perpetuate my content and my brand. If you think about it, the big social media companies including YouTube all leverage the most powerful machine learning models to put the right content in front of the right people. If I produce content, these algorithms find a home for it. This technology has helped me to build a community and grow by just producing content that I’m passionate about. It is a bit meta that machine learning models perpetuate my machine learning and data science content. This brand growth through technology has also opened the door for opportunities like a partnership with Z by HP as a global data science ambassador. This role gives me access to and the ability to provide feedback on the development of their line of workstations specifically tuned to data science applications–complete with a fully loaded software stack of the tools that my colleagues and I rely on to do the work we do. Working with their hardware, I’ve been able to save time and expand my capabilities to produce even more!

What educational background is best suited for a career in data science?

I think you have to be able to code, and have an understanding of math and programming, but you don’t need a formal background in those areas. The idea that someone needs a master’s degree in computer science, data science or math is completely overblown. You need to learn those skills in some way, but rather than looking at degrees or certificates, I evaluate candidates on their ability to problem solve and think.

One of the beautiful things about data scientists is that they come from almost every discipline. I’ve met data scientists from backgrounds in psychology, chemistry, finance, etc. The core of data science is problem solving, and I think that’s also the goal in every single educational discipline. The difference is that data scientists use significantly more math and programming tools, and then there’s a bit of business knowledge or subject area expertise sprinkled in. I think a unique combination of skills is what makes data science such an integral aspect of businesses these days. At this point, every business is a technology company in some respect, and every company should be collecting large volumes of data, whether they plan to use it or not. There’s so much insight to be found in data, and with it, avenues for monetization. The point is to find new opportunities.

What’s an easy way to describe how data science delivers value to businesses?

At a high level, the most relevant metric for data science in the short term is cost savings. If you’re better able to estimate how many resources you’ll use, you can buy a more accurate number of those resources and ultimately save money. For example, if you own a restaurant and need a set amount of perishable goods per day, you don’t want excess inventory at the end of the week. Data science can be used to predict very accurately the right quantity to buy to satisfy demand and minimize waste, and this can be ongoing and adjusted for new parameters. Appropriate resourcing is immensely important, because if you have too much, you’ll have spoilage, and too little, you’ll have unhappy customers. It’s a simple example, but when your sales forecasts are more accurate, even by a small percentage, those savings compound. At the same time, the data science models get better, the logic improves, and all these analytics can be used for the benefit of the business and its profitability.
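To make the restaurant example concrete, here is a toy sketch in Python of forecasting tomorrow’s demand for a perishable item from recent sales and ordering just enough to cover it. All numbers and the simple moving-average heuristic are purely illustrative:

```python
# Toy sketch of the restaurant example: forecast tomorrow's demand for a
# perishable item from the last week of sales, then order just enough to
# cover it plus a small buffer. All numbers are illustrative.
daily_sales = [42, 38, 45, 40, 44, 39, 41]  # units sold, last seven days

forecast = sum(daily_sales) / len(daily_sales)  # naive moving average
safety_margin = 0.10                            # 10% buffer against stockouts
order_qty = round(forecast * (1 + safety_margin))

print(f"Forecast: {forecast:.1f} units; order {order_qty} to limit waste")
```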

Is being a data scientist applicable across industries?

You can have success as a data scientist generalist, where you bounce across different subject area expertise and industries, like finance, biomedical, etc.; you just have to be able to pick up those domains relatively quickly. I also think that if you’re looking to break into data science from another field, the easiest path would be to do data science in that field. It all depends on the nature of the problems you would like to solve. There are verticals where subject area expertise is more important, maybe even more so than data skills- sports, for example, where you need to understand a specific problem. But generally, someone could switch between roles.

Any final notes?

I’m a huge believer in setting goals and accountability. A good goal is measurable, within your control, and time-constrained. Once you’ve set your goal, write it down or tell people about it. Also, never forget that learning is a forever journey.

Don’t Believe the Anti-Hype- Big Data is Still on the Rise https://dataconomy.ru/2015/03/24/dont-believe-the-anti-hype-big-data-is-still-on-the-rise/ Tue, 24 Mar 2015 11:56:59 +0000

Research and Markets recently published their report on the Global HDaaS (Hadoop-as-a-Service) Market 2015-2019. The market, they believe, will continue to grow at a CAGR of 84.81 percent over the period 2015-2019.

Although many have been quick to claim the death of “big data” and the demise of Hadoop, adoption continues to rise. M&R analysts attribute this dissonance to one key factor- although the big players may be moving on, SMEs are flocking to data services and HDaaS in droves.

“One key trend emerging in this market is the increased adoption of HDaaS by small and medium enterprises. The market has witnessed that SMEs are among the earliest adopters of cloud computing and big data technology. Because this end-user segment is already aware of the potential benefits of cloud computing, HDaaS vendors are looking to capitalize on the increase in demand for HDaaS solutions from this segment,” notes an analyst from the research team.

To calculate the market size, the report considers revenue generated from Hadoop analytics solutions, Hadoop software, applications, services, support, and maintenance.

Some of the key findings of the report point out:

  1. The key vendors in the Global HDaaS Market are Amazon Web Services, EMC², IBM and Microsoft.

    Other Prominent Vendors in the market are: Altiscale, Cask Data, Cloudera, Google, Hortonworks, HP, Infochimps, Karmasphere, MapR Technologies, Mortar Data, Pentaho and Teradata.

  2. Cost-effective solutions for big data management are in growing demand and drive the market. Cost-effective cloud computing technology, with capabilities to manage and analyze large amounts of data through Hadoop, enables enterprises to manage big data of any size economically, the report states.
  3. On the other hand, “lack of awareness about or unfamiliarity with the Hadoop technology” is a drawback affecting the market.

    “The lack of awareness about the Hadoop technology and the lack of trained professionals are two of the major issues that have prevented enterprises from investing in Hadoop-based big data solutions, which is hindering the growth of the market.”

The report contains a comprehensive market and vendor landscape in addition to a SWOT analysis of the key vendors. The detailed report is available here.

Photo credit: Udri / Foter / CC BY-NC-SA

Kaggle’s March Machine Learning Madness is Back! https://dataconomy.ru/2015/02/07/kaggles-march-madness-machine-learning-mania-is-back/ Sat, 07 Feb 2015 09:36:41 +0000

With the upcoming NCAA tournament, office pools and pundit prognoses are starting to gain momentum. For the last few years, the stakes have been rising, and the amount of predictive data available has grown with them. Betting has been taken to an all-new level, with data scientists using analytics to predict bids and companies sponsoring competitions.

With millions filling out brackets, coin flipping is no longer an option. The odds of making correct predictions improve dramatically with the right analysis of data collected throughout the season, as well as previous years’ data, which includes player statistics, tournament seeds, geographical factors and social media.

Kaggle HQ has yet again rolled out its annual March Data Madness competition, which pits you against the millions of sports fans and office-pool bandwagoners hoping to win big by correctly predicting the outcome of the men’s NCAA basketball tournament. Presented by HP Software’s industry-leading Big Data group and the HP Haven Big Data platform, this competition will test how well predictions based on data stack up against a (jump) shot in the dark.

The competition doesn’t just hone one’s analytical skills; Kaggle is also offering a hefty $15,000 cash prize to the team with the closest prediction. The cash prize and opportunity for glory are drawing an increasingly large field of competitors: 114 teams, 139 individual players, and 649 entries.

“The response is healthy so far and I’d expect many more to jump in, now that there’s a prize on offer,” says Will Cukierski, a Kaggle data scientist. “The make-or-break on our expectations will happen after the 2014 madness starts. We’re really excited to see if people can beat the traditional rankings and experts and seed-based predictions.”

In stage one of this two-stage competition, participants will build and test their models against the previous four tournaments. In the second stage, participants will predict the outcome of the 2015 tournament. While participation in the first stage is not necessary to enter the second, the first stage exists to incentivize model building and provide a means to score predictions. The real competition is forecasting the 2015 results, for which you’ll predict winning percentages for the likelihood of each possible matchup, not just a traditional bracket.
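For readers unfamiliar with the format, a submission is essentially a table of matchup identifiers and win probabilities. The Python sketch below builds one from a naive seed-based heuristic; the team IDs, seed values and the three-points-per-seed rule are illustrative assumptions, not part of the official contest kit:

```python
# A sketch of the submission format: one row per possible matchup, with a
# predicted probability that the lower-numbered team wins. The team IDs,
# seeds, and seed-gap heuristic are illustrative assumptions.
import csv
from itertools import combinations

seeds = {1112: 1, 1181: 4, 1234: 9, 1300: 16}  # hypothetical 2015 field

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "pred"])
    for t1, t2 in combinations(sorted(seeds), 2):
        # Naive heuristic: each seed of advantage is worth ~3 percentage points.
        p = 0.5 + 0.03 * (seeds[t2] - seeds[t1])
        writer.writerow([f"2015_{t1}_{t2}", round(min(max(p, 0.05), 0.95), 3)])
```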

As well as sponsoring the event, HP are offering use of their Haven technology to fuel competitors’ algorithms. According to their blogpost:

You will have access to key HP Haven technologies, including HP Vertica Distributed R to accelerate your machine learning by running your R models across multiple nodes to vastly reduce execution time and analyze much larger data sets.

The competition started on Monday 2 February 2015 UTC and ends on Saturday 14 March 2015 UTC (40 total days).

This isn’t the first example of big data being used for prediction. More than a decade ago, professors Jay Coleman of the University of North Florida in Jacksonville, Allen Lynch of Mercer University in Macon, Georgia, and Mike DuMond of Charles River Associates and Florida State University in Tallahassee created the Dance Card – a formula designed to predict which teams will receive at-large bids to the NCAA Tournament (aka the Big Dance). For the 2014 bids announced recently, the Dance Card formula correctly predicted 35 of the 36 at-large bids. The model is a combined 108 of 110 over the last three years.

Big Data is also being used in a huge way by teams to improve performances. “Sports teams are using new analytical capabilities to improve their team personnel and on-court performance,” says Davis, vice president of Intel’s Data Center Group. “As an example, teams are using emerging technologies such as multi-view cameras that can measure the tendencies of players in very specific situations to improve performance.”

Full contest details can be found over at Kaggle.

HP’s Information Governance Portfolio Leverages HP HAVEn to Double Down on Cloud https://dataconomy.ru/2015/02/03/hps-information-governance-portfolio-leverages-hp-haven-to-double-down-on-cloud/ Tue, 03 Feb 2015 09:57:24 +0000

HP has announced an update to its information governance portfolio, which includes its eDiscovery, ECM, and Information Archiving products. The portfolio, developed with HP HAVEn technology at its core, was designed to help clients understand, organise and analyse more data than ever before.

The announcement, which came yesterday, claims that the latest upgrade will help users to “proactively prepare for litigation, improve the productivity of their legal professionals, better manage risk and respond to evolving requirements for how enterprise information is governed.”

“Today’s evolving litigation landscape is forcing organizations to look for a more complete, integrated information governance solution that allows them to solve a wider set of information compliance problems,” explained David Jones, general manager of Big Data Solutions, HP Software.

“The continued integration of, and enhancements to, HP’s portfolio will deliver that complete solution, which improves information lifecycle management, streamlines legal workflow and helps our customers address all of these emerging trends,” he added.

Salient enhancements include:

  • Extension of HP eDiscovery OnDemand with increased load file ingestion capability, more-granular security and simplified user interfaces.
  • A new UK data center is now available to expand ECM cloud offerings, including HP WorkSite and HP LinkSite, to the European market.
  • HP Consolidated Archive, with improved user experience and support for mobile devices.
  • HP Digital Safe has enhanced eDiscovery support and workflow, in addition to improved retention management for better information governance.
  • HP WorkSite: New support of Office 365 for advanced desktop users, plus iOS mobility enhancements, security improvements and support for additional MDM (Mobile Device Management) vendors.
  • HP LinkSite: Enriched workflow that automatically prompts users to email secure links instead of large attachments, by leveraging HP’s secure cloud file-sharing service, to streamline collaboration and reduce mailbox size.
  • HP Universal Search: Augmented search, performance and preview functionality to help users pinpoint highly complex search terms within compound content like emails with zip file attachments.

(Image credit: Windell Oskay, via Flickr)

Meet HP Haven OnDemand- HP’s Big Data Analytics in the Cloud https://dataconomy.ru/2014/12/03/meet-hp-haven-ondemand-hps-big-data-analytics-in-the-cloud/ Wed, 03 Dec 2014 10:19:10 +0000

IT giant HP has released HP Haven OnDemand, its Big Data platform that runs on the HP Helion cloud and enables analysis of business and machine data as well as unstructured, human information.

“To succeed in today’s marketplace, businesses must be able to leverage all forms of data, at high speed and in context, in order to capitalize on emerging opportunities and manage risk and costs,” explained Robert Youngjohns, GM and EVP, HP Software. “With today’s announcement, we are making our unique Big Data platform more accessible and adaptable than ever before, giving customers, partners, and developers an unmatched set of assets that can help them create winning, data-driven businesses.”

As part of its Big Data strategy, HP will with this release integrate Haven assets within the HP Software application portfolio, adding new offerings to help businesses mine Big Data analytics, according to the statement announcing the launch earlier this week.

Key components of the HP Haven Enterprise platform are:

  • HP Vertica OnDemand delivers enterprise-class Big Data analytics via the cloud, with built-in, flexible analytic capabilities.
  • HP IDOL OnDemand provides industry-standard Big Data web services that developers, partners, and customers can use to build next-generation applications, enabling analysis of a broad spectrum of data types, including images, social media, text, video and more.
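As a rough illustration of the kind of web service the IDOL OnDemand bullet describes, calling a text-analysis endpoint from Python might look like the sketch below. The endpoint path and parameter names are assumptions for illustration rather than confirmed API details, and a valid API key is required:

```python
# Hedged sketch of calling a text-analysis web service of this kind.
# The endpoint path and parameter names are assumptions for illustration,
# not confirmed API details; substitute your own API key.
import requests

API_KEY = "your-api-key"  # placeholder
ENDPOINT = "https://api.idolondemand.com/1/api/sync/analyzesentiment/v1"  # assumed

resp = requests.get(
    ENDPOINT,
    params={"apikey": API_KEY, "text": "HP Haven OnDemand makes big data easy."},
)
resp.raise_for_status()
print(resp.json())  # sentiment scores for the submitted text
```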

With this release HP Software also unveiled several innovations across its portfolio that leverage HP Haven Big Data analytics:

  • IT Operations Management – New IT Operations Management solutions harnessing Haven analytics enable organizations to automate and optimize IT application and system management, for faster time to market, cost savings and exceptional customer experiences.
  • Security – A new version of HP ArcSight ESM, a security information and event management (SIEM) solution, was also introduced.
  • Intelligent Retention and Content Management – Also announced was a new solution that combines HP hardware and software technologies to allow organizations to intelligently manage data throughout its lifecycle. The solution brings together HP StoreAll, HP ControlPoint, HP Records Manager and HP Haven analytics to reinvent end-to-end information lifecycle management.

Most Haven components will be available by January 2015, and the remainder by the end of calendar Q1 2015.


(Image credit: Don Debold)

Democratising the Cloud Marketplace with UberCloud https://dataconomy.ru/2014/11/27/democratising-the-cloud-marketplace-with-ubercloud/ Thu, 27 Nov 2014 11:46:14 +0000

Two years ago, Wolfgang Gentzsch and Burak Yenier co-founded UberCloud, a cloud marketplace where engineers and scientists can discover, try and buy a range of cloud computing solutions. As well as improving the ease of access and the variety of options available to scientists and engineers, the platform also gives a level playing field to industry titans and smaller, niche vendors alike. We caught up with Gentzsch to discuss the intricacies of his product, and why, two years down the line, a direct competitor to UberCloud has yet to emerge.


Can you tell us more about yourself and your work?

I’m a veteran in high performance computing with a strong focus on applications, applying high performance and technical computing technologies and tools to solve engineering and scientific grand challenge problems. Grand challenge here means the engineer at his desktop faces the grand challenge of designing and developing a next-generation product. The scientist might have a grand challenge in researching climate change, pollution or new materials. So when I say high performance computing, this is relative; it means different things to different people.

I started solving problems as a scientist and a researcher at the Max-Planck Institute, applying numerical methods and physics to solving problems for plasma fusion reactors. In my next position, leading the German aerospace and aeronautics department for computational fluid dynamics, we solved air flow around airplane wings and flow through water turbines – really nice and challenging examples. Later on, I founded a few companies while I was a professor of computer science and applied mathematics here in Regensburg. These startup companies all dealt with one or another kind of distributed, parallel, and grid computing, all applied to solving real engineering and scientific problems. One of my companies – Gridware – was acquired by Sun Microsystems in 2000.

To bridge the gap between big Sun Microsystems and our small software company, I moved to California. Here, we integrated our distributed computing technology, which is basically a load manager for distributed computing resources across networked processors, workstations, compute nodes; whatever was sitting in the network. On top of this, we developed what was later called the Sun Grid Engine, the engine to load, manage and monitor the different loads / computer jobs. After that I ran my own consultancy firm, dealing with governments and large companies around the world and their computing challenges.

Finally, two years ago, my friend Burak Yenier in Silicon Valley and I, in Regensburg, Germany, founded the UberCloud, and as you can imagine this is now again a really exciting time for us.

What does UberCloud do?

The UberCloud is a community and marketplace for engineers and scientists to discover, try and buy computing resources on demand in the cloud.  It is not just one cloud, but it is a marketplace of clouds where end-users are matched automatically with specific cloud computing resource and software providers, according to their specific application requirements.

Intel sponsor UberCloud; what do you think attracts such big names to your company?

All of our sponsors are convinced that the UberCloud online marketplace platform brings end-users together with service providers in an easy and organic fashion. We are focusing on being broadly horizontal, by reaching out into every computing and big data vertical, like finance, oil and gas, computer aided engineering, computational biology, pharma, life sciences, material computation, chemistry, and data analytics.

It would be very challenging for the average service provider to focus on such a wide spectrum of activities. UberCloud is targeting the whole spectrum, including the long tail of small engineering companies. It would be too expensive for any resource and software provider to try to reach very small companies, which are often part of the supply chain of a big car company, for example. This is exactly the strength of the UberCloud marketplace- it’s for everybody. The UberCloud marketplace is democratizing high performance technical computing, bringing it to the end user who has just a laptop or a workstation on his desk- which, according to IDC, is 95% of engineers & scientists in the world.

Our current target market includes all those who feel every day the limitations of their desktop computing power- our marketplace connects them to one of our 54 cloud providers. To put a final answer on why Intel is a big sponsor of UberCloud: UberCloud’s aim is to help democratize the high performance computing market, from workstations, to servers, to supercomputers, and thus drive more computing needs and facilitate them. Enabling computing easily at the engineer’s fingertips, on demand, will create additional requests for computing and thus, in the end, additional demand for computing technology, which then might come from Intel.

Why do you think there are no direct competitors to UberCloud?

UberCloud is not competing with anybody but it is embracing all service providers and users alike.  We are here to simplify the choice for the engineer and scientist, and the access and usage of these computing resources around the world.  We are a vendor-agnostic marketplace.

Many of the vendors these days are building their own clouds and calling them a “marketplace”, like Amazon, IBM, HP, and Dell, but then it is just an Amazon marketplace, an IBM marketplace, an HP marketplace, and a Dell marketplace,  mainly for their own customers.  These marketplaces are hardware oriented. So HP’s built up their Helion cloud, SGI for quite a while offered their Cyclone cloud. IBM acquired the Softlayer cloud for IBM customers. These big guys usually only work together with big software providers- but UberCloud can serve any small company within the software provider community. We democratize that market and accelerate technical computing usage into even the smallest company out there.

In the engineer’s and scientist’s view, the most important service he or she needs is the application software. This software is the most difficult part in an engineer’s job- dealing with fluid dynamics, dealing with mechanical software, dealing with electromagnetic software.  It takes many years of education and training to finally master this kind of software, but this is also one reason why it is so challenging to build a marketplace for it. Not everyone can build a marketplace because you need to understand the engineers and scientist’s culture, their applications, their requirements, their work environment, and how they are under deadline pressure every day.

Say I’m the end user and my computer is not as quick as I need it to be. What would the steps be for me to be connected with the right provider through UberCloud?

You go to the UberCloud Marketplace and you will see the individual stores and their products. For example, there is OpenFOAM on Amazon AWS. OpenFOAM is fluid dynamics software which is in the public domain and free; it’s great software and there is an ecosystem of companies who now support it. You see OpenFOAM on Amazon, let’s say for 24 hours on 32 cores for $199.00. It is all pre-packaged, bundled and ready for you. You click on the offer, move it to your shopping cart, and pay by credit card. At that moment you will receive an email which says ‘Hello Furhaad, welcome. Here is your access to the Cloud’. In this email you will see a link and a password. You click on the link, and on your workstation you will see a screen that says ‘This is your Amazon Cloud’ or ‘This is the HP Cloud and here is your password’. You type in your password and then you see your application waiting for your data, in the cloud. Now you will need to submit your input data, like the geometry and the physics of your design, and after you send it you just push the “Run” button. Then your job is running as if it were running on your local workstation. No difference, everything is prepared; it’s pre-packaged and ready to run, and the software license is sitting there for 24 hours on 32 cores. The result is that instead of one or two weeks to get your application up and running in the cloud, it now takes only one or two minutes to get up to speed.



Wolfgang Gentzsch is president and co-founder of the UberCloud Community and Marketplace for engineers and scientists to discover, try, and buy computing on demand, in the cloud. And he is the Chairman of the International ISC Cloud Conference series. Previously, he was Advisor to the EU projects EUDAT and DEISA, directed the German D-Grid Initiative, and was a member of the Board of Directors of the Open Grid Forum, and of the US President’s Council of Advisors for Science and Technology, PCAST.


 

(Image credit: UberCloud)

HP Vertica Offers Analytics Platform for SQL on Hadoop Data https://dataconomy.ru/2014/11/19/hp-vertica-offers-analytics-platform-for-sql-on-hadoop-data/ Wed, 19 Nov 2014 09:39:45 +0000

HP announced earlier this week that organizations can now harness the value of Hadoop together with the power of the HP Vertica Analytics Platform. HP Vertica for SQL on Hadoop simplifies data exploration and is delivered in an economical way – all while running natively on an organization’s preferred Hadoop distribution and leveraging certification with leading visualization tools.

Organisations have preferred Hadoop as a way to cost-effectively store large volumes of unstructured data in what is referred to as a “data lake.” However, exploring HDFS-based data to unearth value for analytic insight has, to date, proven challenging.

“With up to 25,000 job postings updates, over 400,000 active postings and over 1,000,000 unique visitors on our site every day, there is tremendous potential insight in all that data,” said Robert Fehrmann, Data Architect at Snagajob, in an HP press release. “HP Vertica for SQL on Hadoop combines the power, speed, and scalability of HP Vertica with the ease and effectiveness of Hadoop and gives us an incredibly robust analytics tool to help understand and act on our information assets.”

HP Vertica for SQL on Hadoop dramatically reduces the complexity of multiple data architectures with a proven query engine, backed by HP’s first-class support, service, and expertise. Organisations can now seamlessly tap into the value of data stored on the Hadoop Distributed File System (HDFS).

Shilpa Lawande, GM Platform, HP Software Big Data Business Unit, stated: “With HP Vertica for SQL on Hadoop, we’re combining the best of both worlds – the industry’s enterprise-proven HP Vertica Analytics Platform with the broadly accessible Hadoop ecosystem.” She added: “Now, organizations can store data in HP Vertica or any Hadoop distribution, explore data in place using HP Vertica for SQL for Hadoop, and serve the highest-performance analytic needs for mixed workloads with HP Vertica Enterprise.”

Comprehensive SQL Analytics by HP Vertica on Hadoop will reduce switching costs and learning curves for users of ANSI SQL syntax. Moreover, data analysts can continue using familiar BI/analytics tools that visualize and auto-generate ANSI SQL code to interact with a wide variety of Hadoop distributions. Users can also tap into rich, built-in analytic functions that support JOINs, complex data types, and other capabilities only available in HP Vertica for SQL on Hadoop.
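As an illustration of that continuity for SQL users, the sketch below queries an HDFS-backed table through Vertica’s SQL interface from Python using the open-source vertica_python driver. The connection details, table name and query are placeholders, and the exact external-table setup that maps HDFS files into Vertica depends on the Vertica version and Hadoop connector in use:

```python
# Sketch of running ANSI SQL over HDFS-resident data through Vertica from
# Python, using the open-source vertica_python driver. Connection details
# and the table name are placeholders.
import vertica_python

conn_info = {
    "host": "vertica.example.com",  # placeholder
    "port": 5433,
    "user": "dbadmin",
    "password": "secret",
    "database": "analytics",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(
        "SELECT category, COUNT(*) AS hits FROM web_logs_hdfs "
        "GROUP BY category ORDER BY hits DESC LIMIT 10"
    )
    for category, hits in cur.fetchall():
        print(category, hits)
```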

HP is focused on deep integration of its Hadoop SQL engine with any distribution of Hadoop. HP has partnered with Hortonworks, Cloudera, and MapR to introduce a flexible analytics platform that performs seamlessly across all of your Hadoop distributions. With its experience in petabyte-scale deployments and a world-class enterprise support and services organization, HP is ‘Enterprise Ready’.

HP Vertica for SQL on Hadoop is now available across the globe and is priced per node. It will be available via direct and indirect channels. HP and prominent Hadoop distribution partners will shed more light on this announcement at HP Discover in Barcelona in December.


(Image credit: HP Vertica)

Microsoft Azure Will Kill the Traditional Datacenter: Three Observations https://dataconomy.ru/2014/11/07/microsoft-azure-will-kill-the-traditional-datacenter-three-observations/ Fri, 07 Nov 2014 10:14:31 +0000

Last week I attended a specialized training/go-to-market program in Seattle hosted by Microsoft’s Azure team. It was a week full of clam chowder, excellent crab cakes, ridiculous discounts at the Microsoft store, and technical training on Azure, Microsoft’s massive and fully loaded penetration point into the world of cloud computing. Although late to the game (a troubling problem characteristic of this company), Microsoft has put hundreds of millions of dollars, man hours, and its corporate reputation behind Azure. However, it must be admitted that Microsoft is a good target for technology-oriented derision and evil empire monikers. After all, they are the ones that released “Windows Me” and pioneered the use of independent consultants in order to reduce employee costs. And does anyone remember Clippy? Holy hell, what’s not to hate?

Azure, if I have anything to say about it. And after this deep dive into the technology behind the user interface, I have come to the conclusion that this is the beginning of the end for compute/network/storage vendors that do not dramatically change their business model going forward.  However, this is somewhat of a complicated story.  Let’s start at the top.

Here are my three top takeaways from the training:

1) Microsoft has a cost profile that is an order of magnitude cheaper than anything anyone can develop in house for large databases and unstructured data. As consumption increases, price per terabyte declines. In addition, commodity hardware dramatically drives the price down; they are not using anything other than JBOD and white-box high-density servers. Given the size and scale of Microsoft’s Azure platform, it is unlikely that any company (other than another cloud provider) could replicate the cost advantages associated with buying in the quantity that Microsoft does. The “magic sauce” is the software and orchestration services that Microsoft layers on top of this massive compute farm; they ensure availability, resiliency, and elasticity. This in-house IP takes the place of the specialized hardware and exorbitantly expensive licensing costs associated with capabilities like data stream multi-pathing, replication technologies, business continuity, and other must-haves for a data center.

2) Microsoft is working with a huge number of niche open-source players. What this translates into is an ability to take complex requirements from clients and craft a customized approach for platform integrity, data ingestion, and data manipulation. This opens the door for large and composite workload processing in the cloud. There is a high level of complexity associated with acquiring/ingesting/cleansing data in the oil/gas vertical: companies are not sure how to source the data, how to organize it in such a way that it can be aggregated, and how to ensure that the data conforms to an MDM model. Microsoft and Azure can make all of this much simpler. A uniform point in the cloud to upload data creates the opportunity for a “data lake,” an uncurated aggregation of disparate streams of data that can be tied together as needed by Hive and other map/reduce translators. Ingestion of data can be further simplified (for the client, anyway) using Storm, Sqoop, Flume, and other tools supported by Microsoft for a client-centric ingestion architecture. Cleansing can be approximated by statistical probabilistic smoothing; given the size of the data sets that we will work with, this should be a reasonable (and cheaper) mechanism for cleansing data than a full-blown MDM architecture and associated data curation.
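As a minimal sketch of the “data lake” ingestion idea, the snippet below lands disparate raw feeds in a single cloud storage container, leaving schema to be applied at read time. It uses the current azure-storage-blob Python SDK, which postdates this article; the connection string and file names are placeholders:

```python
# Minimal sketch of "data lake" ingestion: land each raw feed as-is in one
# storage container and apply schema later, at read time. Uses the current
# azure-storage-blob SDK (newer than this article); the connection string
# and file names are placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."  # placeholder
service = BlobServiceClient.from_connection_string(conn_str)
lake = service.get_container_client("data-lake")

for path in ["well_sensors.csv", "drilling_log.json", "seismic_notes.txt"]:
    with open(path, "rb") as f:
        # Uncurated drop-off: no cleansing or MDM conformance at write time.
        lake.upload_blob(name=f"raw/{path}", data=f, overwrite=True)
```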

3) The machine learning practice (real-time analytics for big data streams) is a technological priority for Microsoft. They are putting immense dollars and resources into both self-service big data analytics (which are admittedly still somewhat clunky and immature) as well as true statistical-model analysis for data streams. Microsoft has a pretty robust team of data scientists in the United States and across the globe that can construct the appropriate mathematical regression models that will be needed to establish causation between disparate data elements coming from unrelated sources. Of course, an integration partner is still needed to act as a domain expert to capture and manage workflow, organizational dynamics, data element characterization, and other efforts. However, Microsoft has the muscle to work with these integration partners to understand how to map the data streams and data elements together in a way that will generate the highest value knowledge for the client.

So why do I think this means the end of the traditional data center? Simply put, there will not be a viable space for major players like NetApp, HP, or Cisco going forward.  The era of corporate owned data centers and associated compute resources is coming to a close; the advent of cloud computing is the beginning of the end for these hardware system manufacturers, at least in their current form.  Yes, there is a lot of noise when it comes to security, platform interdependence, legislative mandates, and even the mechanics related to data ingestion and portability.  However, these issues will resolve themselves in the next six to twelve years, and enterprise clients will find the cost model of public cloud irresistible as they evaluate their year over year capital expenditures.  Companies will be forced to adapt, and those that lack the agility to do so will find themselves cornered into niche markets or buried in the graveyard of those corporations that they once supplanted.  It will be interesting to see who makes it.



Jamal is a regular commentator on the Big Data industry. He is an executive and entrepreneur with over 15 years of experience driving strategy for Fortune 500 companies. In addition to technology strategy, his concentrations include digital oil fields, the geo-mechanics of multilateral drilling, well-site operations and completions, integrated workflows, reservoir stimulation, and extraction techniques. He has held leadership positions in Technology, Sales and Marketing, R&D, and M&A in some of the largest corporations in the world. He is currently a senior manager at Wipro where he focuses on emerging technologies.


(Image Credit: Rainer Stropek)

GoodData’s BI Platform Rakes in $25.7M in Series E with Intel to Market Product and Expand https://dataconomy.ru/2014/10/03/gooddatas-bi-platform-rakes-in-25-7m-in-series-e-with-intel-tomarket-product-and-expand/ Fri, 03 Oct 2014 08:33:37 +0000

GoodData, a cloud-based Business Intelligence startup, landed $25.7 million in Series E funding on Thursday, led by Intel Capital with participation from existing investors Andreessen Horowitz, General Catalyst, Tenaya Capital, TOTVS, Next World Capital, Windcrest, and Pharus Capital. The total funding received so far now stands at $101.2 million.

Founder and CEO Roman Stanek told VentureBeat in an interview: “This place is getting serious, so we need to have some real cash to actually be a player.”

According to Ben Kepes, contributor at Forbes, their analytics platform “allows companies to manage, analyze and visualize” data all from one environment. He adds that “this approach really democratizes data, allowing managers and business people to generate their own insights rather than having to wait for IT departments and data scientists to run queries for them.”

Stanek intends to use the funds for marketing and expansion purposes. Intel has been helping the startup meet customers’ privacy needs and comply with regulations. The companies are also collaborating to develop more efficient data-storage techniques, Stanek told VentureBeat.

Based in San Francisco, the company was started in 2007. It has a workforce of 300 people, approaching 400 within a year, and Stanek claims more than 40,000 customers, including Bonobos, Hootsuite, HP, Virgin America, and Zendesk.

Read more here

(Image Credit: knowmadic news)

HP Acquires Eucalyptus Systems to Gain Stronger Foothold in Cloud https://dataconomy.ru/2014/09/15/hp-acquires-eucalyptus-systems-to-gain-stonger-foothold-in-cloud/ Mon, 15 Sep 2014 08:38:11 +0000

In a move to bolster its cloud service prowess, HP acquired open source cloud software startup Eucalyptus Systems last week.

Essentially, HP hired Eucalyptus’ entire workforce (fewer than 100 people), along with CEO Marten Mickos, who becomes Senior VP and head of HP’s cloud computing wing. Mickos will work directly with HP CEO Meg Whitman, according to a statement on the HP website.

“The addition of Marten to HP’s world-class Cloud leadership team will strengthen and accelerate the strategy we’ve had in place for more than three years, which is to help businesses build, consume and manage open source hybrid clouds,” explains HP CEO Whitman. “Marten will enhance HP’s outstanding bench of Cloud executives and expand HP Helion capabilities, giving customers more choice and greater control of private and hybrid cloud solutions.”

Although the Forrester Wave report for Private Cloud Solutions ranked HP as the leader, Helion still falls behind AWS and IBM as far as revenues go.

“We are absolutely the experts on compatibility between clouds — private clouds, public clouds and hybrid clouds,” points out Marten Mickos in an interview. “We have developed open source software that is used to move computing workloads between each kind of cloud.” Mickos is expected to lead the HP Cloud organization and help HP Helion grow, based on OpenStack® technology.

Read more here.

(Image credit: Gary Sauer-Thompson)

Betting Big on Hadoop: HP Invests $50 Million in Hortonworks https://dataconomy.ru/2014/07/25/betting-big-on-hadoop-hp-invests-50-million-in-hortonworks/ Fri, 25 Jul 2014 08:41:51 +0000

Hewlett-Packard yesterday announced a $50 million investment in Hortonworks, a big data company that provides an enterprise version of Apache Hadoop for big data analytics applications. The announcement comes only a few months after BlackRock and Passport Capital co-led an investment round of $100 million.

The partnership will see HP and Hortonworks integrate their engineering strategies, enabling HP customers to deploy the Hortonworks Data Platform as the Hadoop component of HP HAVEn (HP’s data processing stack). In addition to the investment, Martin Fink, HP’s Chief Technology Officer, will join Hortonworks’ board of directors.

“The ability to understand data and put it to effective use is now more crucial than ever,” said Colin Mahony, general manager, HP Vertica. “Hortonworks has demonstrated outstanding dedication and expertise in addressing the business and technology needs of its customers within this new era of information and data, and we look forward to partnering with the Hortonworks team to deliver innovative big data solutions to our customers.”

The deal between the two companies brings the total investment in Hadoop distributors to nearly $1.2 billion — Intel invested $900 million in Cloudera this March, and in June Google pumped $110 million into MapR.

Existing investors in Hortonworks include Tenaya Capital, Index Ventures, Benchmark, Dragoneer Investment Group and Yahoo.

Read more here


(Image Credit: David)

Understanding Big Data: Cross Infrastructure & Analytics https://dataconomy.ru/2014/07/11/understanding-big-data-cross-infrastructure-analytics/ Fri, 11 Jul 2014 14:59:13 +0000

In our cartographic overview of the big data ecosystem, we stated that a Big Data environment should allow you to store, process, analyse and visualise data. Thus far in the “Understanding Big Data” series, we’ve been breaking down the ecosystem into composite parts, focusing on software which specifically targets one or two of these objectives. In this edition, we’ll be examining the big data offerings of household names and global leaders, most of whom offer an end-to-end solution for storing, processing, analysing and visualising data.

Google

As part of its range of cloud computing products and services, Google offers a big data analysis solution, Google BigQuery. The “big” in the title isn’t misleading- they allow you to process the first terabyte of data for free. The queries are run in an SQL-like language, and can be run on a browser tool, command-line tool or through the BigQuery REST API using a variety of client libraries such as Java, PHP or Python. It’s integrated with a variety of third-party analytics and visualisation tools, including Tableau, Jaspersoft and Qlikview, as well as cloud connectors for services such as Talend, Informatica and Pervasive. We recently reported on how Google developers were using BigQuery to map the notability gender gap in Freebase, which you can read here.
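As a quick illustration, a BigQuery query can be issued in a few lines of Python. The sketch below uses the modern google-cloud-bigquery client library and a Google-hosted public dataset, both assumptions relative to the tooling described here; GCP credentials are assumed to be configured in the environment:

```python
# Sketch of issuing a BigQuery query from Python. Uses the modern
# google-cloud-bigquery client and a Google-hosted public dataset.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""
for row in client.query(query).result():  # runs the job and waits for rows
    print(row.name, row.total)
```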

SAP

Architecture of SAP HANA; source

SAP’s big data product line revolves around SAP HANA. When speaking with Dataconomy recently about their partnership with SAP HANA, Birst’s VP of Product Strategy said: “when you get to really large datasets, it can have response times that business users are not willing to wait for. What HANA represents is a world class, in-memory database.”
Speed is what HANA does best, with claimed speeds between 10,000 and 100,000 times faster than your current data platform. As well as lightning-fast speeds, SAP HANA can be integrated with Hadoop and SAP IQ, the company’s column-oriented, grid-based, massively parallel processing database. SAP HANA users include the NBA, eBay, P&G, Lenovo and Pacific Drilling.

SAP also offer a range of products for analytics, visualisation, text analytics and business intelligence, as well as applications specifically geared towards fraud detection, customer intelligence and equipment operations.

IBM

IBM has a whole portfolio of products for big data management. The core of this portfolio is IBM InfoSphere, a solution covering data integration, data warehousing, master data management, big data and information governance. This includes InfoSphere Streams, a stream computing solution for real-time analytics which can handle very high throughput, some components of which are open sourced. They also offer InfoSphere BigInsights, which uses Hadoop as a basis for processing vast amounts of structured and unstructured data.

They also offer the IBM Watson Explorer, fuelled by the technology of IBM Watson, which offers search, navigation and discovery of data sources. On the analytics side of things, they offer the IBM Smart Analytics System, an end-to-end analytics solution. Last week, they also announced they were taking their Big Data technology to the cloud with IBM Navigator on Cloud, which you can read more about here.

Microsoft

Microsoft Azure demo; source

Microsoft’s big data offering is its Azure platform. The Azure portfolio includes HDInsight, a 100% Hadoop-based service in the cloud. It’s run on a pay-for-what-you-use basis, allows you to develop in Java and .NET, and lets you visualise using Microsoft Excel. They also offer Azure SQL Database, a scalable, self-managed relational database-as-a-service, used by customers such as Samsung and easyJet. Azure’s storage facility is durable cloud storage, which offers several solutions for integrating existing data and works with unstructured text or binary data such as video, audio and images. Last month they also announced Azure ML, a machine learning component which will allow users to build big data-based apps and APIs to predict future events.

Amazon

Amazon Web Services offers solutions for every stage of big data management; a short sketch of two of these stages follows the list below. They allow you to:

  • Collect- AWS Direct Connect and their Import/Export service allow you to move data in and out of the cloud quickly. Inbound data traffic is free.
  • Stream- Amazon Kinesis is their real-time big data streaming solution
  • Store- Amazon Simple Storage Service (S3) is pay-for-what-you-use cloud storage
  • Process- They have NoSQL (Amazon DynamoDB), RDBMS (Amazon RDS) and Hadoop (Amazon Elastic MapReduce) offerings.
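As promised above, here is a minimal boto3 sketch (AWS’s Python SDK) covering the Store and Process stages; the bucket name, table name and record are placeholders, and the DynamoDB table is assumed to already exist:

```python
# Sketch of the Store and Process stages with boto3, AWS's Python SDK.
# Bucket name, table name, and the record are placeholders.
import boto3

# Store: push a raw file into S3's pay-for-what-you-use storage.
s3 = boto3.client("s3")
s3.upload_file("events.csv", "my-data-bucket", "raw/events.csv")

# Process: write a record into DynamoDB, the NoSQL offering.
dynamodb = boto3.resource("dynamodb")
dynamodb.Table("events").put_item(
    Item={"event_id": "e-1001", "kind": "click", "count": 3}
)
```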

They also have an AWS Marketplace, which is essentially a giant catalogue of big data tools all in one location.

HP

An overview of HP HAVEn; source

HP’s big data offering is called HAVEn. It stands for Hadoop/HDFS, Autonomy IDOL (which processes and indexes information), Vertica (for real-time analytics), Enterprise Security and nApps (apps, with an n on the front- we don’t get it either). Vertica, acquired by HP in 2011, promises (when partnered with the HP ConvergedSystem 300) between 50 and 1,000 times the performance and 70% cost savings, and takes only days to deploy. Partners of the HAVEn platform include Accenture Analytics, Deloitte Consulting LLP and Capgemini.

Of course, solutions from such established names come at a premium. Although many have pay-as-you-use models, some of the technologies mentioned can cost $300,000 for hardware, software and services. On the other end of the financial spectrum, in the next edition of Understanding Big Data we’ll be looking at open source solutions- what technologies are available on the open source model, what opportunities they offer and how much you can actually get without a price tag attached.

(Featured image credit: Microsoft Azure)



Eileen McNulty-Holmes – Editor


Eileen has five years’ experience in journalism and editing for a range of online publications. She has a degree in English Literature from the University of Exeter, and is particularly interested in big data’s application in humanities. She is a native of Shropshire, United Kingdom.

Email: eileen@dataconomy.ru


HP Rolls Out BI and Analytics Software Bundle https://dataconomy.ru/2014/06/10/hp-rolls-bi-analytics-software-bundle/ Tue, 10 Jun 2014 08:48:52 +0000

HP have unveiled a batch of big data analytics programmes targeted at the business intelligence market. Uniting themes across the line of products include lowering IT costs, processing data from multiple channels and minimising service downtime.

“Organizations are looking for solutions that let them aggregate data from multiple sources—such as social media and machine-generated data— to draw meaningful insights to increase the speed of service delivery and improve end-user satisfaction while reducing potential business disruptions and service outages,” states Roy Ritthaler, an HP vice president for IT operations management.

Products in the new batch include:

  • HP Autonomy IDOL- Aimed at allowing businesses to process unstructured data (video, social media feeds) as well as structured data. Traditional legacy data warehouses and analytics packages can answer the question “What happened?”- HP claim IDOL examines the intricate relationships between data to answer the much more crucial question “Why has this happened?”
  • HP Operations Analytics 2.1- Uses “expert sourcing” analytics to track down the source of IT problems and failures; it is claimed fault detection time can be reduced from days to hours. It also “learns” what anomalous behaviour within the system looks like, to flag up future problems before failures occur
  • HP Operations Log Intelligence 1.0- Intelligent log analytics system for uncovering hidden business insights and reducing operational costs
  • AppPulse- SaaS tool aimed at app developers and owners, which provides analytics on key user metrics
  • HP Propel- Marketed as a “one-stop shop” for IT services, offering integration with currently-implemented systems

IDOL, Operations Log Intelligence, Propel and AppPulse are available worldwide now. The global release of Operations Analytics 2.1 is expected later this month.

Read more here.



(Photo credit: HP website)

Facility-as-a-Service: Customers Can Now Rent HP’s New Modular Data Center Solution https://dataconomy.ru/2014/05/07/facility-service-customers-can-now-rent-hps-new-modular-data-center-solution/ Wed, 07 May 2014 10:20:26 +0000

HP has presented a new product, called Facility-as-a-Service, which provides a variety of modular data center solutions under five-year service agreements. Customers can rent custom-sized, scalable data center facilities, taking full advantage of HP products such as Flexible DC, which scales in 500 kW, 750 kW, 900 kW and 1,500 kW increments, and 20-foot and 40-foot PODs, as well as custom modular options.

HP’s vice-president of technology services and data center sourcing strategy, Rick Einhorn, made clear what separates the company’s new service from the competition: “Now, with HP Facility-as-a-Service, a new option is available that enables an organization’s CFO to switch costs from a capital to an operating expense and provides the CIO with their own operated data center which has the flexibility to expand as the business grows.”

The idea behind HP’s new product is to move the cost of data center infrastructure from “being a capital expense to an operation one for the customer,” for which HP will collect monthly maintenance payments. The service competes with big names in data center colocation, the likes of Arizona-based IO and Luxembourg’s Colt Technology Services — unlike HP, however, both providers only offer standardised modules.

Read more on the story here

(Image Credit: Bob Mical)
