Agile 2: A New Hope

On February 11-13, 2001, a group of senior programmers, project managers, and technical analysts met at the Snowbird ski resort in Utah. Their mission, sparked by numerous conversations among the members over chat and at conferences, was to define a single movement that would end up radically shaping how software was developed. The result of their efforts was a short statement of four values, backed by twelve principles, known as The Agile Manifesto.

Twenty years have passed since that manifesto was first published. It is worth putting some context around what the world of programming looked like in 2001. The dot-com era was in full swing, and everyone was racing to build websites that could scale. Amazon was still a few years out from showing a profit, and Google was replacing AltaVista as the go-to search engine on the web.

Agile, as the movement became known, focused on the software development process, challenging what had been the dominant Waterfall paradigm with its long cycles and extensive planning stages. Instead, it promoted the idea that software development is an iterative process, one where the precise goals are usually not known ahead of time. Consequently, an iterative approach with short cycles and development goals that could be accomplished within those cycles would generally provide better feedback, making it easier for the client to adjust their thinking as the product moved towards its final state.

As Agile expanded its influence, creating offshoots such as Kanban, Extreme Programming, Scrum, and so forth, the core ideas gained traction in the C-suite, and a cottage industry of consultants, scrum masters, and certified Agile gurus emerged along with it. Yet even as it did so, Agile faced increasing criticism for the areas where it failed (sometimes spectacularly). Two of the biggest failings came primarily in areas that had received comparatively little coverage in the manifesto: the role of design and the role of data.

One of the characteristics that differentiated Agile was the belief that design is an emergent property. A frequent criticism has been that, all too often, this approach leads to design paralysis, as the definition of the product remains too fluid to program against cogently, and to integration challenges, as multiple teams working along parallel paths fail to produce common interfaces. This, in turn, raises fundamental questions about the notion of a team of equals and the corresponding dangers inherent in a lack of leadership, both of which the original manifesto tends to play down.

The second problematic area for Agile has been the shifting roles of programmers and engineers, who tend to focus on the creation of frameworks and processing algorithms, versus data analysts. In the early 2010s, the Big Data era heralded this shift, as the emphasis went from building scalable solutions to handle high-volume, high-velocity data (an engineering problem) to using that data to drive both data analytics and machine learning (AI) solutions. The data analyst may use programming tools, but their role is different: the tools are simply a means towards better understanding the inherent, relevant signals coming out of the noise of the data. AI specialists, in turn, increasingly use the data produced by data engineers to create models, often with thousands, millions, or even billions of parameters, so that machines determine their own behavior. These are tool users, relying on the power of high-speed systems to do most of the modeling and development of “software” computationally, and their needs and priorities are different.

In that scheme of things, especially in conjunction with more than thirty years of open-source development, the role of programmers has gone from being central to all aspects of computing to being needed primarily as tool builders when existing tools aren’t sufficient to the task. Agile 2000 is simply less relevant today because its emphasis, the tool builders, has receded as the importance of data and machine learning models has grown.

These were some of the considerations that Clifford Berg, and the community he gathered around him, faced when examining what “Agile” really means today. Data Agility was a big part of it: the recognition that the data lifecycle today is not the software lifecycle of two decades ago. In many respects, it is more pipeline-oriented: data acquisition, data cleansing and curation, metadata capture, data harmonization, data analysis, data presentation, and data contextualization.

This last step usually doesn’t appear in data lifecycle charts. Still, it’s one of the most important: contextualization takes what is learned from the analysis and uses it to tune the upstream processes, making acquisition easier, harmonization more precise, analysis more meaningful, and presentation more robust. It is the process equivalent of back-propagation in machine learning models, and it is fundamental to becoming a data-driven organization.
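To make the feedback loop concrete, here is a minimal, purely illustrative sketch of such a pipeline in Python. Nothing in it comes from Agile 2 or from any particular tool; every stage name, parameter, and threshold is hypothetical, and a real pipeline would do far more at each step.

```python
# Illustrative sketch: a data pipeline whose final "contextualization" step
# feeds what was learned back into the earlier stages. All stage names,
# parameters, and thresholds are hypothetical.

def acquire(params):
    # Pull raw records; a real implementation would read from named sources.
    return [{"value": v} for v in range(params["batch_size"])]

def harmonize(records, params):
    # Normalize records to a common shape; the tolerance is tuned by feedback.
    return [r for r in records if r["value"] >= params["tolerance"]]

def analyze(records):
    # Produce simple findings from the harmonized data.
    values = [r["value"] for r in records]
    mean = sum(values) / len(values) if values else 0.0
    return {"count": len(values), "mean": mean}

def contextualize(findings, params):
    # The feedback step: use what was learned to adjust upstream behavior,
    # analogous to back-propagation in a machine learning model.
    if findings["count"] < 5:
        params["batch_size"] *= 2    # acquire more data next cycle
    if findings["mean"] > 5:
        params["tolerance"] += 0.5   # harmonize more aggressively
    return params

params = {"batch_size": 10, "tolerance": 1.0}
for cycle in range(3):
    findings = analyze(harmonize(acquire(params), params))
    params = contextualize(findings, params)
    print(f"cycle {cycle}: {findings} -> next params {params}")
```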

Agility plays a role here, but it’s one where leadership comes down to determining what the organization, as an organism, should learn. The new agility is the recognition that communication is no longer only face-to-face. Instead, it is multimodal: synchronous and asynchronous, augmented by artificial intelligence, which increasingly becomes a team member itself. It is the awareness that there are times when you need the iterative innovation cycles that Agile offers to refine a package, but also times when you need to establish stakes in the ground, working with dedicated creators to help articulate the true intent of a product.

Put another way, Agile in 2020 must see Agile 2000 as a component, something useful for shaping raw concepts into usable forms, but something that also needs to fit in with the broader data fabric being created. We are entering a profound new era in which the process of creating anything, whether a book, a piece of software, an engine for a car or aircraft, or a wedding cake, will ultimately come down to creating a virtual twin of it first, then “instantiating” it through increasingly complex printers. Some of those printers are matter manipulators, some are models for decision making, and some will take pieces of media and composite them into whole new forms. This ultimately will feed back into the digital twin: the virtual learns from the physical and vice versa.

So where does this leave a newer Agile, an Agile 2 if you will? It is an agility that recognizes DevOps and continuous integration, Zoom meetings spanning ten time zones, git repositories representing the working state of living projects, and collaboration in all its many forms. It sees nothing as permanent but instead treats “products” as snapshot artifacts of processes over time.

It is arguable whether Agile 2 can be boiled down to a twelve-point manifesto, and perhaps that’s just as well. Manifestos tend to be rigid and unbending around the edges. Agile 2 is adaptive, fluid, and multifaceted, and like the facets of a crystal, it will tend to look different depending on the angle from which it’s viewed. That’s okay. Clifford Berg and the Agile 2 movement have taken significant strides toward articulating it. What will emerge is likely a new school of thought, a new set of best practices, about what a data-driven organization ultimately looks and acts like.

How Agile Cloud Solutions Can Upgrade On-Prem Performance: The BI Business Case

Moving IT to the cloud is one of the main objectives for many companies. But once they achieve it, they find managing data in a hybrid environment a challenge. This is when metadata management becomes more important than ever.

Cloud vs. on-premises? It’s one of those topics that you hear broached at tech conferences, business meetings, even cocktail parties (but very boring ones). There are positives and negatives to either: cost vs. control, perceived higher security vs. easier deployment, and so on. The arguments are well-known and well-worn, but more companies of all sizes are making the move all the time.

Indeed, for many firms a move to a fully cloud-based IT environment seems all but inevitable. According to a survey of enterprises large and small by cloud company Denodo, moving to the cloud is not just about saving money, even though companies can significantly reduce costs by eschewing huge investments in computers and servers when they adopt cloud solutions.

According to Forrester, for example, it’s the need for advanced technologies and access to services that drives many to move from on-premises to the cloud. In its report on digital transformation in 2018, Forrester says that for many firms it’s not a question of whether to move to the cloud, but when. According to the report, more than half of global enterprises in 2018 were set to adopt “at least one public cloud platform to drive digital transformation and delight customers. This is a ‘magic threshold’ signifying the imminent ubiquity of cloud computing and the future of doing business in today’s digital economy.”

Digital Transformation as a Primary Motivator for Moving to the Cloud

For many, it’s about digital transformation – the need to keep up with the latest IT advances and not miss out, in order to ensure that they are as agile as possible in a hyper-competitive environment. “Cloud computing is just a part of digital transformation—the landscape is immense and there’s no one-size-fits-all solution—but it’s a rapidly growing part that decision-makers can’t afford to ignore,” the Forrester report says, adding that it expects the total global public cloud market to continue to grow in the coming years at an annual 22% compound rate.

And cloud service providers – naturally – have been doing all they can to encourage that move, providing all the services, software, security measures, and systems companies might need. Tried-and-true tools that have been in use for years in the office, in their on-premises forms, are now available online. For example, Microsoft’s BI stack, including SSIS, MS-SQL, Tabular, OLAP, SSRS, and PowerBI, is available to Azure customers. Google, IBM, Oracle, and all the others have been doing the same thing. The idea is to make companies as comfortable as possible with a move to the cloud.

But for many companies, some pieces of the cloud puzzle are still missing – such as the management part of things. Tools are just that – and to get their full value, you have to use them effectively. Legacy, on-premises systems are connected by IT personnel who manage the workflow and ensure that everything melds together for smooth operation. Firms will want the same experience when they move their IT to the cloud.

The BI Team’s Dilemma

One area that concerns companies considering a cloud move is data management, which is difficult enough to do in-house, much less in the cloud. Business intelligence teams in charge of data management need to ferret out data that could be stored in many different places – departmental databases, organization-wide databases, data stores for social media information, reporting tools, ETL tools, analysis tools, corporate documents, customer records, and so on – and provide answers based on that data in order to develop reports, forecasts, and statements, and to comply with regulations.

Finding that data quickly may be crucial – such as when GDPR regulators come calling and demand that a company show it can locate its data and comply with “right to be forgotten” rules, as the regulation requires. It’s not enough to comply with those rules when a request comes in; under the regulation, authorities can require a company to show that it can document its data’s provenance – where it was located, what database it was stored in, which tables or cells are involved, and so on. Failure to do so could result in EU sanctions, even if no actual request to remove the data ever came in.
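As a rough illustration of what documenting provenance can look like in code, the sketch below keeps a simple ledger of where each data subject’s information lives, so an audit or deletion request can be answered quickly. It is not modeled on any specific GDPR tool or vendor API; every class, field, and source name is hypothetical.

```python
# Hypothetical provenance ledger: for each piece of personal data, record
# where it is stored so an audit or right-to-be-forgotten request can be
# answered quickly. All system and column names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceRecord:
    subject_id: str   # the person the data relates to
    system: str       # e.g. "crm_db" (hypothetical source system)
    database: str
    table: str
    column: str

@dataclass
class ProvenanceLedger:
    records: List[ProvenanceRecord] = field(default_factory=list)

    def register(self, rec: ProvenanceRecord) -> None:
        self.records.append(rec)

    def locate(self, subject_id: str) -> List[ProvenanceRecord]:
        # Answer "where is this person's data stored?"
        return [r for r in self.records if r.subject_id == subject_id]

ledger = ProvenanceLedger()
ledger.register(ProvenanceRecord("cust-42", "crm_db", "sales", "customers", "email"))
ledger.register(ProvenanceRecord("cust-42", "marketing_dw", "campaigns", "contacts", "email"))
for rec in ledger.locate("cust-42"):
    print(f"{rec.system}.{rec.database}.{rec.table}.{rec.column}")
```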

It’s also crucial that companies be able to account for inaccuracies that could skew their data, and to ensure that data integrity and accuracy are preserved. Organizations need to be able to find, understand, and trust their data, as well as trace the root of errors when they occur. With so many different systems to go through, and with inconsistent associated metadata, BI teams often have to search manually for the data they require, which can be extremely time-consuming.

Customer information, for example, could appear in numerous locations, but the labels for the relevant data – type of business, location, sales, and so on – could differ, making a simple search across databases impossible. Even the on-premises data tools from Microsoft et al. don’t include the means to automate and perform that work easily, and neither do the online versions.

Metadata Tools to the Rescue

Offering tools to do that – cloud services that make it easy to answer data legacy questions by resolving metadata issues, with automated and easily constructed searches – would make life simpler for firms seeking to move operations online, and would close the gap between the need for better data management and the benefits of cloud-based IT. The sketch below gives a sense of the kind of mapping such a service might maintain.
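As a small, purely illustrative example (all source names, column names, and values are invented), a metadata map can reconcile differently labeled fields so that one logical query spans several sources:

```python
# Illustrative only: two sources label the same customer attributes
# differently. A small metadata map translates a canonical field name into
# each source's local column name so one query can cover both.

sources = {
    "sales_db": [
        {"cust_name": "Acme", "region": "EU", "rev": 1200},
    ],
    "support_db": [
        {"customer": "Acme", "location": "EU", "annual_revenue": 1150},
    ],
}

# Canonical field -> local column name per source (hypothetical mapping).
metadata_map = {
    "customer_name": {"sales_db": "cust_name", "support_db": "customer"},
    "region":        {"sales_db": "region",    "support_db": "location"},
    "revenue":       {"sales_db": "rev",       "support_db": "annual_revenue"},
}

def query(field_name, source_name):
    # Resolve the canonical field to the source's local label, then read it.
    local = metadata_map[field_name][source_name]
    return [row[local] for row in sources[source_name]]

# One logical question, answered across differently labeled systems.
for src in sources:
    print(src, query("revenue", src))
```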

Such tools would give BI teams the power to get things done more quickly: instead of spending days or even weeks searching out relevant data, they could use cloud tools to resolve metadata issues easily. That alone would settle any “butterflies in the stomach” among IT staff, BI teams, or, for that matter, C-suite executives. Fortunately, there are firms that offer these services, even where the tools offered by the big cloud providers fall short.

That the cloud offers a new world of efficiency, agility – and greater profits – is pretty clear at this point. Resolving data legacy issues will encourage even more companies to “reach for the clouds.”
