Search Results for “hackathon” – Dataconomy (https://dataconomy.ru)

Future insights and challenges in data analytics with Aksinia Chumachenko
https://dataconomy.ru/2024/09/27/future-insights-and-challenges-in-data-analytics-with-aksinia-chumachenko/ (Fri, 27 Sep 2024)

The global data analytics market is forecast to grow by USD 234.4 billion between 2023 and 2028, a surge that will accelerate job growth in the field.

To learn more about trends in data analytics, its prospects, and its challenges, we talked to Aksinia Chumachenko, Product Analytics Team Lead at Simpals, Moldova’s leading digital company. In this interview, Aksinia shares her journey, her approach to leadership and mentorship, and her vision for the future of this rapidly evolving field.

Your journey from a university student to a Product Analytics Team Lead is inspiring. Could you share the key milestones that have shaped your career in data analytics?

My journey began at NUST MISiS, where I studied Computer Science and Engineering. I studied hard and was a very active student, which made me eligible for an exchange program at Häme University of Applied Sciences (HAMK) in Finland. This experience led to my first real IT job ― an internship at Renault in 2019 and my first role as a data analyst. It helped me become familiar with popular tools such as Excel and SQL and develop my analytical thinking.

The time I spent at Renault helped me realize that data analytics is something I would be interested in pursuing as a full-time career. After my time at Renault, I joined Sberbank, one of the largest banks in Eastern Europe, as an intern analyst through their highly competitive Sberseasons program. The competition was intense, with over 50 applicants per position. However, three different teams within the bank were interested in hiring me, and I ultimately chose to work with Sberbank CIB, which is responsible for the corporate investment business.

At Sberbank, I worked as an analyst for major B2B clients. This experience helped me to improve my Python skills and get more practical experience working with big data.

In 2020, I transitioned to product analytics at OZON Fintech ― part of one of the leading marketplaces in Russia. This pivotal role allowed me to double my salary and gain extensive experience working on fintech products. At OZON, I worked with four financial products, and through my data-driven research, we significantly increased key metrics such as usage, number of new customers, returns, and revenue.

In November 2020, BCS Investments, named “Investment Company of the Year” by an authoritative financial online platform, approached me. They were looking to hire their first product analyst and build a new department from scratch. That opportunity fit my goals, as I wanted to gain new leadership skills. During my time there, I implemented numerous impactful initiatives, the most significant being an A/B testing process introduced from scratch. Thanks to its company-wide implementation, we increased the onboarding conversion rate in our app by several percentage points, which ultimately grew the number of customers using the app and, consequently, our revenue.
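As a rough illustration of the statistical check at the heart of such a process, here is how an onboarding A/B test might be evaluated in Python. This is a minimal sketch with made-up numbers, not the actual BCS pipeline:

```python
# Minimal A/B significance check for an onboarding funnel.
# The counts below are illustrative, not real product data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1320, 1460]   # users who finished onboarding: control, variant
exposures = [10000, 10000]   # users shown each experience

stat, p_value = proportions_ztest(conversions, exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"Absolute lift: {lift:.2%}, p-value: {p_value:.4f}")
# Ship the variant only if the lift is practically meaningful and the
# p-value clears the threshold agreed on before the test (e.g. 0.05).
```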

About a year later, I transitioned to Simpals in Moldova, where I still work as a Product Analytics Team Lead. I manage a team of top-notch data analytics experts and work on one of the most visited websites in Moldova.

Recently, I have been highly involved in giving back to the community. I organized a meetup in Moldova in 2023 and was also a speaker there. One of the other speakers was a colleague whom I mentored from scratch ― it was a huge pleasure to see how quickly she has grown.

I am also a judge in several international hackathons, including the United Nations Big Data Hackathon, where I evaluated 18 different teams based on their solutions’ innovation, quality, and applicability.

I have also been invited as an expert to other hackathons, including the MLH Web3Apps Hackathon and MLH Data Hackfest.

As a leader in your field, how do you approach mentoring your team members, and what impact do you hope to have on their careers?

I started mentoring as soon as I had my own team. Today, I mentor not just within Simpals but also through external organizations such as Women in Tech and Women in Big Data. These are free international programs that help women progress in their careers. As a mentor, I’ve helped several women achieve significant success by leveling up or starting a new career.

Every mentee is different, which is why I create individual development plans based on their goals, strengths, and weaknesses. We also meet regularly for one-on-one meetings to discuss how things are going.

Seeing my impact on colleagues is very rewarding. Moreover, by helping others, I also help myself grow as a professional and a human being.

Aksinia, as the Product Analytics Team Lead at Simpals, a company that has a significant impact on Moldova’s digital ecosystem, what role does data analytics play in the success of digital platforms like 999.md?

999.md is visited by more than 2 million unique users every month, giving us a wealth of data to work with. I was responsible for building a team from scratch and leading it to ensure the growth of key metrics and optimize existing processes. Thanks to adjustments to key features, we have achieved a 13% revenue increase.

Thanks to our work, the platform can gain more revenue and reduce spending where possible. This is what analytics does: not only does it help to make more money, but it also prevents unnecessary spending, which, for large projects like this, can be significant.

The field of data analytics is constantly evolving. What are the biggest challenges facing product and data analytics today?

Data accumulates fast, and collecting and analyzing it is challenging. Even more importantly, the insights generated need to be aligned with the company’s overall strategy and goals. Ask yourself: will completing this task drive you toward your business goals? Sometimes, data analysts forget to ask themselves this question. But I think it’s crucial to have a business mindset.

Also, many IT professionals find it hard to stay up-to-date with rapidly changing technologies. To stay up-to-date, I regularly attend conferences (sometimes as a speaker). My mentor also helps me constantly grow and explore new things.

You mentioned the importance of aligning data analytics with business strategy. Please give us an example of how this alignment has worked in your role at Simpals.

My team’s task was optimizing the user experience on 999.md. We needed to increase user engagement and conversion rates by making the platform more intuitive and user-friendly. Here is what we did:

  • identified pain points in the user journey;
  • used user segmentation to better understand how different groups use the platform (a minimal sketch of this step follows the list);
  • conducted A/B testing to compare different platform versions and see which changes led to better outcomes.
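
Here is a minimal sketch of the segmentation step. The event log, segments, and column names are invented for illustration; the real 999.md data is of course far richer:

```python
# Sketch: comparing funnel behavior across user segments with pandas.
# The events DataFrame and its column names are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4, 5, 5],
    "segment": ["new", "new", "returning", "new", "new",
                "returning", "returning", "returning"],
    "event":   ["view_listing", "contact_seller", "view_listing",
                "view_listing", "contact_seller", "view_listing",
                "view_listing", "contact_seller"],
})

# For each segment: share of users who performed each funnel step
funnel = (
    events.pivot_table(index=["segment", "user_id"], columns="event",
                       aggfunc="size", fill_value=0)
    .gt(0)                 # did the user perform the step at least once?
    .groupby("segment")    # group by the segment index level
    .mean()                # fraction of users per segment and step
)
print(funnel)
```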

I discussed earlier how important it is to align data analytics with business goals. The insights we gained allowed us to increase revenue and boost customer satisfaction.

The integration of AI and machine learning into analytics is a hot topic right now. How do you see these technologies shaping the future of data analytics?

AI and machine learning are basically omnipresent. There isn’t a single field where these technologies aren’t used. These technologies also allow us to automate complex data processes. This saves time on ‘manual labor’ and allows us to dedicate more time to problem-solving and creativity.

In the future, we’ll see AI and machine learning becoming even more integral to data analytics, with more sophisticated models and tools that can handle increasingly complex tasks. These technologies work best in synergy with human creativity, not as a replacement. A deep understanding of the data and the business context is still essential for making the most of what AI and machine learning offer.

Given your experience and recognition in the field, including judging international hackathons and the UN Big Data Datathon, how do you see the global data analytics landscape evolving in the coming years?

The role of analysts will gradually change and expand. For example, a trend I see on the market right now is that analysts must have product management skills: they need both a deep understanding of the data and product knowledge to make decisions.

Another important change is that the new technologies are greatly accelerating the work with data. What used to take days or weeks can now be done in a few hours. For example, Google’s BigQuery cloud data warehouse, which many companies use, is already introducing new tools that make life easier for analysts, such as searching for insights based on a specific table and monitoring data quality.
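
As a hedged illustration of the kind of automated check this enables, the sketch below probes a table’s freshness and null rate with the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical, and it assumes default application credentials:

```python
# Sketch: a daily data-quality probe on a BigQuery table.
# Table and column names are invented for illustration.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      COUNT(*) AS row_count,
      SAFE_DIVIDE(COUNTIF(user_id IS NULL), COUNT(*)) AS null_user_id_rate,
      MAX(event_timestamp) AS latest_event
    FROM `my_project.analytics.events`
    WHERE DATE(event_timestamp) = CURRENT_DATE()
"""

row = next(iter(client.query(query).result()))
print(row.row_count, row.null_user_id_rate, row.latest_event)
# A scheduler could run this hourly and alert when the null rate
# spikes or the latest event starts lagging behind the clock.
```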

However, it is important to realize that AI will not replace analysts completely. On the contrary, it will become a powerful tool that will allow you to focus on more complex and strategic tasks. The role of humans in analytics is still very important. Soft skills such as critical thinking and the ability to communicate and negotiate with different people are some crucial things that AI can’t replace.

Are Microsoft’s AI-powered robots a solution to e-waste or more waste?
https://dataconomy.ru/2024/09/17/microsoft-secure-and-sustainable-disposal-of-hard-disks-project/ (Tue, 17 Sep 2024)

Microsoft’s push for sustainability has taken a technological turn with the use of AI-powered robots to dismantle and destroy hard drives in its data centers.

As part of its Secure and Sustainable Disposal of Hard Disks project, this initiative aims to tackle the growing e-waste problem while ensuring data security.

But is this high-tech solution really helping reduce waste, or is it merely creating more waste in the process?

Creating more waste to stop the waste?

The project, which originated during the 2022 Microsoft Hackathon, builds on the company’s Circular Centers initiative. These centers are designed to recycle and repurpose hardware used in Microsoft’s data centers, such as servers and hard drives.

The goal is to help Microsoft achieve its ambitious environmental commitments: becoming carbon-negative by 2050 and producing zero waste by 2030. The AI-powered robots are now part of this larger effort, decommissioning old hardware more efficiently while ensuring that the valuable materials within are not wasted.

Historically, hard drives have been shredded to protect sensitive data, a process that not only destroys the media but also leads to the loss of valuable materials like neodymium. Every year, millions of hard drives are shredded worldwide, leading to significant waste, especially of rare metals.

Microsoft’s new approach, dubbed #NoShred, uses AI and robotics to dismantle hard drives, ensuring the sensitive data is destroyed while allowing the reuse or recycling of the remaining components.

By using computer vision and robotic arms, the system is able to sort and disassemble hard drives in a secure manner, preserving key materials for recycling. The project aims to achieve a 90% reuse and recycle rate for all hard disks by 2025. A pilot run in Amsterdam showed promising results, reducing downtime and increasing the availability of recycled parts.

While Microsoft’s AI-powered solution offers a more sustainable approach to data center waste, it raises a broader debate:

Are we simply creating more waste to stop the waste?

The development of these AI-powered robots requires significant amounts of energy, rare metals, and resources to build and operate. This energy consumption is expected to rise, especially as AI becomes more integrated into processes worldwide. Data centers already account for about 1-1.3% of global electricity demand, and this is projected to double by 2026.

With AI driving more data processing and higher energy usage, are we offsetting the environmental benefits of recycling with the carbon footprint of these AI systems?

On one hand, the project seems like a step forward. By avoiding the destruction of rare materials, Microsoft is addressing the growing scarcity of components like neodymium, a critical element in technology production.

But on the other hand, the rising demand for AI and the infrastructure required to maintain it may ultimately create new sustainability challenges.

How about a happy medium?

Microsoft’s AI-powered robots represent a new frontier in sustainable tech, but they also highlight the inherent tensions in using advanced technology to solve environmental problems. While the robots are helping Microsoft reach its recycling goals, the long-term sustainability of this approach remains uncertain.

In the race to reduce e-waste, companies like Microsoft must ensure that the solutions they implement do not inadvertently create new problems.

Balancing the benefits of AI with its environmental impact will be key to determining whether initiatives like #NoShred are a true solution—or just another layer of complexity in the fight against waste.


Featured image credit: Emre Çıtak/Ideogram AI

AI Assistants for software engineers
https://dataconomy.ru/2024/08/23/ai-assistants-for-software-engineers/ (Fri, 23 Aug 2024)

In the rapidly evolving landscape of software development, AI assistants have emerged as game-changing tools, empowering engineers to write code more efficiently than ever before. To gain insights into this transformation, we spoke with Ilia Zadiabin, a mobile developer, about the impact of AI assistants on the software development process in 2024.

Ilia Zadiabin is a prominent figure in the global tech ecosystem, renowned for his expertise in mobile app development and AI-driven solutions. As the founder of Slai, an innovative AI-powered language learning platform, he gained international recognition by successfully competing with industry giants.

His influence in the software development sphere is further amplified by his articles on TechBullion, a leading tech news platform, where he offers valuable perspectives on cutting-edge development practices and industry trends.

Ilia’s expertise has led to his selection as a judge for several high-profile tech events, including the Business Intelligence Group, the Global Startup Awards Africa, and Geekle’s hackathon. In the healthtech and fintech sectors, his work has set new industry standards, consistently earning praise from users and experts alike.

In general, software developers have looked favorably upon AI assistants, expecting that the new technology can improve productivity and smooth their workflow. As an expert, could you tell us what exactly AI assistants do?

AI assistants are transforming the code writing process, acting as intelligent companions that enhance productivity and code quality. These tools provide real-time code suggestions and completions, often generating entire functions or code blocks based on context and intent.

A key strength of these AI tools is their ability to suggest alternative solutions to already solved tasks, encouraging developers to consider different approaches and potentially find more efficient or readable solutions. Even when AI suggestions are incorrect, they can be valuable by sparking new ideas or leading developers to better solutions they might not have considered.

By handling routine coding work and offering diverse perspectives, these tools allow developers to focus on higher-level problem-solving and creativity. In essence, AI assistants serve as collaborative partners, augmenting human capabilities in software development.

What AI assistant tools are used in the development workflow? Which features do you believe an AI assistant needs to work effectively for software engineers?

AI assistants have become crucial tools in modern software development workflows. Key examples include GitHub Copilot, GitHub Copilot Chat, JetBrains AI, and Google Gemini for Android Studio. These tools offer features like code generation, real-time suggestions, and debugging support.

For more personalized support, developers can use tools like llama code assistant, Continue.dev, and Supermaven. An interesting feature is Claude Projects, which allows using multiple files as context for the AI assistant.

Effective AI assistants for software engineers should offer:

  1. Accurate code generation and completion
  2. Context-awareness across multiple files
  3. Multi-language support
  4. Integration with development workflows

I see. Could you provide more details on how they help improve productivity in your field?

A Microsoft study showed that developers using Copilot completed tasks 55% faster and had a higher task completion rate (78% vs. 70%). An Accenture experiment demonstrated an 84% to 107% increase in successful builds with AI assistance.

Moreover, AI tools automate many of the mundane, repetitive tasks, allowing developers to focus on higher-level design and problem-solving, reducing stress and mistakes, and thereby enhancing productivity.

Can you name a good example of a project where an AI assistant has dramatically improved the result?

Research suggests that AI assistants can increase development speed by up to 50%, benefiting most projects. However, AI tools are particularly effective for certain types of tasks, especially those that are large and repetitive.

Writing tests is an excellent example of where AI assistants excel. They can efficiently generate comprehensive test coverage for an entire project – a task that developers often find tedious but is crucial for software quality. AI assistants are also highly effective at writing comments and documentation for technical projects, rarely missing important details.
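
To make this concrete, here is a sketch of the sort of test suite an assistant typically drafts. The parse_price function is hypothetical, invented purely for the example:

```python
# Sketch: AI assistants are good at drafting parametrized tests like these.
# `parse_price` is a made-up function used only for illustration.
import pytest

def parse_price(text: str) -> float:
    """Parse strings like '$1,299.99' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

@pytest.mark.parametrize("raw, expected", [
    ("$1,299.99", 1299.99),
    ("42", 42.0),
    ("  $0.50 ", 0.5),
])
def test_parse_price_valid(raw, expected):
    assert parse_price(raw) == expected

def test_parse_price_rejects_empty():
    with pytest.raises(ValueError):
        parse_price("   ")
```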

A concrete example of AI’s impact is Duolingo’s adoption of GitHub Copilot. The language-learning platform reported a 25% increase in developer productivity after implementing the AI tool. This case demonstrates how AI assistants can significantly enhance development efficiency in real-world scenarios, particularly for companies with large codebases and complex software systems.

What problems are encountered while working with AI Assistants?

When working with AI assistants in software development, two main concerns arise. First is the issue of data privacy and potential code leakage. Developers worry about proprietary code being exposed to third parties through cloud-based AI models. Some companies address this by offering on-premises solutions, but for individual developers using public AI services, the risk remains.

The second concern involves AI mistakes and hallucinations, though this is less problematic in software development than in other fields. AI coding assistants typically generate small code snippets, making errors easier to spot and correct. The structured nature of programming languages, with strict syntax rules, helps in quick error detection. Unlike in natural language processing, code provides immediate feedback through compiler errors or runtime issues.

In practice, the hallucination problem common in AI chatbots is less severe in coding contexts. The rigid structure of programming languages naturally constrains the AI, reducing nonsensical outputs. Developers can easily identify and fix AI-generated errors, such as incorrect method names or syntax.

You mentioned earlier that AI assistants can dramatically improve productivity. Do you have any concrete data or research findings to support this claim?

GitHub, a leading platform in the software development space, conducted extensive research on the impact of their AI assistant, GitHub Copilot. Their findings, published in May 2024, provide compelling evidence of the benefits of AI assistants in software development.

Regarding productivity in terms of speed, GitHub’s controlled experiment with 95 professional developers yielded significant results. Developers using Copilot completed a specific coding task 55% faster than those without it. On average, Copilot users finished in 1 hour and 11 minutes, while non-users took 2 hours and 41 minutes. This represents a substantial time saving.

However, as mentioned earlier, productivity extends beyond mere speed. The research demonstrated improvements in various other areas as well. Developers using Copilot showed a higher task completion rate, with 78% finishing the task compared to 70% of non-Copilot users.

In terms of job satisfaction, a majority of users reported feeling more fulfilled with their work, experiencing less frustration when coding, and being able to focus on more satisfying tasks when using Copilot. The AI assistant also proved beneficial for maintaining focus and preserving mental energy. A significant portion of developers stated that Copilot helped them stay in the flow and conserve mental effort during repetitive tasks.

Efficiency in handling repetitive work was another area where Copilot showed strong benefits. An overwhelming majority of developers reported completing such tasks faster with the assistance of Copilot.

Research regarding productivity:

  • https://dl.acm.org/doi/10.1145/3520312.3534864
  • https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321
  • Research: quantifying GitHub Copilot’s impact on developer productivity and happiness (https://mit-genai.pubpub.org/pub/v5iixksv/release/2)

How do you integrate AI Assistants with other Development tools and platforms?

AI assistants for software development generally fall into two categories: those integrated into commercial development platforms and more personalized AI tools for individual developers.

The first category includes AI-powered features in platforms like Sentry, GitLab, and Eraser.io, as well as server-side code analyzers such as Snyk and SonarQube. These tools use AI to enhance specific workflows within their platforms. For example, Sentry suggests solutions to observed issues, while Snyk analyzes code and provides security-focused suggestions. Due to the unique nature of each product, it’s challenging to generalize about their AI enhancements.

The second category comprises “personal” AI assistants like GitHub Copilot, Supermaven, and Continue. These tools integrate directly into developers’ IDEs, primarily focusing on enhancing code completion. They aim to predict and generate code based on the developer’s intent. Some, like Copilot Chat, can even answer development questions by analyzing the entire project context.

It’s worth noting that some companies hesitate to adopt AI assistants due to concerns about data privacy, as these tools may potentially send codebase information to third parties.

How do you cope with situations when the AI assistant gives the wrong or misleading information?

As a frequent user of AI assistants, I encounter this issue regularly. Fortunately, AI hallucinations or errors in code completion are typically easy to spot and correct. Since these tools usually autocomplete only a few lines of code at a time, experienced developers can quickly identify and fix any mistakes.

For AI features in SaaS solutions, errors are generally less impactful as they often come in the form of suggestions rather than direct code changes. Overall, dealing with AI errors is manageable and, interestingly, gives developers confidence that they won’t be easily replaced by AI.

However, I do monitor trends in developer frustration with specific AI autocomplete tools. Persistent issues often lead to developers switching to alternative solutions or occasionally abstaining from AI assistance altogether.

Is it possible to create your own AI assistant?

Yes, you can create your own AI assistant. There are multiple approaches, ranging from complex to more straightforward.

The most challenging approach involves building an AI assistant from scratch using Python, PyTorch, and TensorFlow. However, this path is typically reserved for large companies and dedicated enthusiasts due to its complexity and resource requirements.

A more accessible approach for most developers is to leverage existing Large Language Models (LLMs) and either use them as-is or fine-tune them for specific needs. This method significantly reduces the technical barriers and development time.

To start using local LLMs for code assistance, you’ll need two main components:

  1. An extension for your Integrated Development Environment (IDE). One popular option is Continue (https://www.continue.dev/), which integrates well with various LLMs.
  2. A way to run LLMs locally. Ollama (https://ollama.com/) is a popular tool for downloading and running various LLM models on your local machine.

Other popular solutions in this space include llama-coder, Cody, and Tabby. These tools offer different features and integration options, allowing you to choose the one that best fits your workflow and requirements.
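
As a small sketch of how this fits together, the snippet below asks a locally running Ollama server for a code suggestion over its documented /api/generate endpoint. It assumes Ollama is serving on its default port and that a code model (codellama here) has already been pulled:

```python
# Sketch: querying a local Ollama server for a code completion.
# Assumes `ollama` is running locally with the codellama model pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,   # return a single JSON object, not a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the model's generated code
```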

What place do you think AI assistants will take within the area of software development in a few years?

Even now, a developer working with AI is faster than a developer working alone.

In a few years, I believe AI assistants will be core components of software development. As their functionality improves, they will support more sophisticated coding and hopefully will be integrated with compilers to eliminate possible errors.

My expectation is that every developer will use one of the AI assistants to some degree, and I suggest they do so immediately.

These tools improve not only the efficiency of coding but also enable developers to focus on higher-order tasks. In general, AI assistants are likely to expand the role of developers, fostering a collaborative environment in which coding becomes accessible to a wider audience.

Flow Studio: FlowGPT’s rich library now has a text-to-film tool
https://dataconomy.ru/2024/07/19/what-is-flowgpt-flow-studio-text-to-film/ (Fri, 19 Jul 2024)

FlowGPT lets you craft and share AI prompts to generate text and images, create chatbots, and even generate films from text with its latest tool, Flow Studio. Of course, it is not perfect yet, but it is quite impressive.

What is FlowGPT?

FlowGPT is an innovative platform designed for anyone interested in harnessing the power of AI. It focuses on AI prompts—specific instructions you give to AI models like ChatGPT to generate text, translate languages, create content, or generate images. There are also purpose-built chatbots created from prompts that you can interact with, much like Character AI.

You can create your own prompt, too. FlowGPT provides a form where you can write out the instructions you want to give the AI. Once you’ve created a prompt, you can share it with others on the platform. This helps build a growing library of prompts that everyone can use.

FlowGPT also encourages users to interact with each other. You can discuss prompts, give feedback, and share ideas with other users. This community aspect helps improve the quality of prompts and makes the platform more useful for everyone.

The platform is designed to be easy to use. Its interface is simple and straightforward, making it easy to find and use prompts, create new ones, and connect with the community.

When you use a prompt on FlowGPT, the AI model takes the instructions from the prompt and generates a response based on those instructions. This means you can get tailored responses for creative writing, technical help, translations, and more. Moreover, now it has a text-to-film tool!

What is Flow Studio?

Flow Studio is a tool designed for creating short videos or films up to three minutes long. It simplifies the video creation process by offering a range of features that help you produce engaging content quickly and easily. Here’s a breakdown of what Flow Studio offers:

  • Video creation: Flow Studio allows you to make videos that can last up to three minutes. This is perfect for creating concise, engaging content that holds viewers’ attention.
  • Automatic plot generation: The tool can automatically generate creative plots for your videos. These plots come with unexpected twists, making your content more interesting and entertaining.
  • Character consistency: You can choose a specific character to be the main actor in your video. This character’s appearance, voice, and characteristics remain consistent throughout the video, ensuring a coherent and polished result.
  • Automated sound matching: Flow Studio handles the background music, sound effects, dialogue, and voiceovers automatically. This means you don’t need to worry about post-editing for sound, as everything is taken care of for you.
  • Variety of styles and genres: The tool supports various video styles, including anime, realistic, 3D, and more. It also offers different genres like horror, romance, and comedy, allowing you to tailor your videos to different tastes and themes.

Sounds good? Here is how to use Flow Studio

  • Creating a video: To get started, you first need to create an account on Flow Studio, which you can do through Google, Discord, X, LinkedIn, or email. Once logged in, you can select a character for your video. This character should be mentioned in the story you write. Flow Studio will then use your story as the basis for generating the video, keeping the character’s traits and style consistent throughout.
  • Story input: You can write a story for your video, whether it’s a single line or a detailed paragraph. If you provide a brief story, Flow Studio will expand on it to create a full video. For longer stories, the tool will aim to include the key details in the generated video.


  • Finding and sharing videos: Once your video is created, you can find it under “My Creation” on Flow Studio. You have the option to download your video or share it directly to social media with a single click.

Each video costs 100 Flux, which you can buy or earn by completing tasks and participating in events on FlowGPT. Flux can also be earned through daily video challenges and bounty events.

Flow Studio is planning to introduce higher quality video generation, support for more original characters, and host a video hackathon to further enhance its offerings.

In summary, Flow Studio is a powerful yet user-friendly tool for creating short, engaging videos with minimal effort. It automates many aspects of video production, from plot creation to sound matching, and offers a variety of styles and genres to fit your needs.


Featured image credit: FlowGPT

What do data scientists do, and how to become one?
https://dataconomy.ru/2024/06/18/what-do-data-scientists-do-how-become/ (Tue, 18 Jun 2024)

What do data scientists do? Let’s find out! A data scientist is a professional who combines math, programming skills, and expertise in fields like finance or healthcare to uncover valuable insights from large sets of data. They clean and analyze data to find patterns and trends, using tools like machine learning to build models that predict outcomes or solve problems. This process is also closely related to artificial intelligence, as data scientists use AI algorithms to automate tasks and make sense of complex information. Their work helps businesses make informed decisions, improve operations, and innovate across industries, from finance and healthcare to retail and beyond. That’s why you are not the first one to wonder about this:

What do data scientists do?

Data scientists specialize in extracting insights and valuable information from large amounts of data. Their primary tasks include:

  • Data cleaning and preparation: They clean and organize raw data to ensure it is accurate and ready for analysis.


  • Exploratory Data Analysis (EDA): They explore data using statistical methods and visualization techniques to understand patterns, trends, and relationships within the data.
  • Feature engineering: They create new features or variables from existing data that can improve the performance of machine learning models.
  • Machine learning modeling: They apply machine learning algorithms to build predictive models or classification systems that can make forecasts or categorize data (see the sketch after this list).
  • Evaluation and optimization: They assess the performance of models, fine-tune parameters, and optimize algorithms to achieve better results.
  • Data visualization and reporting: They present their findings through visualizations, dashboards, and reports, making complex data accessible and understandable to stakeholders.
  • Collaboration and communication: They collaborate with teams across different departments, communicating insights and recommendations to help guide strategic decisions and actions.
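
The sketch below compresses several of these steps (cleaning, feature engineering, modeling, evaluation) into a toy Python example; the dataset and column names are invented:

```python
# Toy end-to-end sketch of the workflow above, on invented data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "age": [23, 45, 31, 52, 29, 40, 60, 35],
    "monthly_spend": [120.0, 80.5, None, 200.0, 95.0, 150.0, 60.0, 110.0],
    "churned": [0, 1, 0, 0, 1, 0, 1, 0],
})

# Data cleaning: fill the missing spend value with the column median
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())

# Feature engineering: a simple derived feature
df["spend_per_year_of_age"] = df["monthly_spend"] / df["age"]

# Modeling and evaluation
X = df[["age", "monthly_spend", "spend_per_year_of_age"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```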

Data scientists play a crucial role in various industries, including AI, leveraging their expertise to solve complex problems, improve efficiency, and drive innovation through data-driven decision-making processes.

How to become a data scientist?

Becoming a data scientist typically involves a combination of education, practical experience, and developing specific skills. Here’s a step-by-step roadmap on this career path:

  • Educational foundation:
    • Bachelor’s Degree: Start with a bachelor’s degree in a relevant field such as Computer Science, Mathematics, Statistics, Data Science, or a related discipline. This provides a solid foundation in programming, statistics, and data analysis.
    • Advanced Degrees (Optional): Consider pursuing a master’s degree or even a Ph.D. in Data Science, Statistics, Computer Science, or a related field. Advanced degrees can provide deeper knowledge and specialization, though they are not always required for entry-level positions.
  • Technical skills:
    • Programming languages: Learn programming languages commonly used in data science such as Python and R. These languages are essential for data manipulation, statistical analysis, and building machine learning models.
    • Data manipulation and analysis: Familiarize yourself with tools and libraries for data manipulation (e.g., pandas, NumPy) and statistical analysis (e.g., scipy, StatsModels).
    • Machine learning: Gain proficiency in machine learning techniques such as supervised and unsupervised learning, regression, classification, clustering, and natural language processing (NLP). Libraries like scikit-learn, TensorFlow, and PyTorch are commonly used for these tasks.
    • Data visualization: Learn how to create visual representations of data using tools like Matplotlib, Seaborn, or Tableau. Data visualization is crucial for communicating insights effectively.
  • Practical experience:
    • Internships and projects: Seek internships or work on projects that involve real-world data. This hands-on experience helps you apply theoretical knowledge, develop problem-solving skills, and build a portfolio of projects to showcase your abilities.
    • Kaggle competitions and open-source contributions: Participate in data science competitions on platforms like Kaggle or contribute to open-source projects. These activities provide exposure to diverse datasets and different problem-solving approaches.
  • Soft skills:
    • Develop strong communication skills to effectively present and explain complex technical findings to non-technical stakeholders.
    • Cultivate a mindset for analyzing data-driven problems, identifying patterns, and generating actionable insights.
  • Networking and continuous learning:
    • Connect with professionals in the data science field through meetups, conferences, online forums, and LinkedIn. Networking can provide valuable insights, mentorship opportunities, and potential job leads.
    • Stay updated with the latest trends, techniques, and advancements in data science through online courses, workshops, webinars, and reading research papers.
  • Job search and career growth:
    • Apply for entry-level positions: Start applying for entry-level data scientist positions or related roles (e.g., data analyst, junior data scientist) that align with your skills and interests.
    • Career development: Once employed, continue to learn and grow professionally. Seek opportunities for specialization in areas such as AI, big data technologies, or specific industry domains.

Becoming a data scientist is a journey that requires dedication, continuous learning, and a passion for solving complex problems using data-driven approaches. By building a strong foundation of technical skills, gaining practical experience, and cultivating essential soft skills, you can position yourself for a rewarding career in this dynamic and rapidly evolving field.

Data scientist salary for freshers

The salary for freshers in the field of data science can vary depending on factors like location, educational background, skills, and the specific industry or company.

In the United States, for example, the average starting salary for entry-level data scientists can range from approximately $60,000 to $90,000 per year. This can vary significantly based on the cost of living in the region and the demand for data science professionals in that area.


In other countries or regions, such as Europe or Asia, entry-level salaries for data scientists may be lower on average compared to the United States but can still be competitive based on local economic conditions and demand for data science skills.

How long does it take to become a data scientist?

The time it takes to become a data scientist varies based on your background and goals. With a bachelor’s degree in a field like computer science or statistics, you can become a data scientist in about two years by completing a master’s in data science. If you lack a related degree, you can enter the field through boot camps or online courses, provided you have strong math skills and self-motivation. Regardless of the route, gaining experience through projects, hackathons, and volunteering is crucial. Typically, the path includes: bachelor’s degree (0-2 years), master’s degree (2-3 years), gaining experience (3-5 years), and building a portfolio for job applications (5+ years).

Now you know what data scientists do and the road ahead!


Featured image credit: John Schnobrich/Unsplash

Talent magnet: How Hackathons help attract new stars?
https://dataconomy.ru/2024/04/19/talent-magnet-how-hackathons-help-attract-new-stars/ (Fri, 19 Apr 2024)

Hackathons have emerged as dynamic catalysts in the rapidly evolving world of technology, igniting creativity, collaboration, and innovation across diverse industries. What began as exclusive battlegrounds for IT coders has evolved into inclusive gatherings, attracting various talents from banking, retail, pharmaceuticals, and beyond. No longer confined to the realm of programmers, hackathons now attract economists, designers, journalists, and professionals with expertise that transcends conventional boundaries.

Traditionally, companies leverage hackathons to address product-related challenges, from exploring new applications for existing services to integrating cutting-edge technologies like machine learning, bots, and blockchain. However, these events serve a dual purpose; beyond solving immediate problems, they function as talent hubs, offering a fertile ground for HR professionals to unearth gems amidst a sea of innovative thinkers.

Enter the Hackathon Raptors—a non-profit community of talented developers who have taken the initiative to organize socially impactful hackathons and reached remarkable heights. Their journey unveils hackathons’ pivotal role in attracting new talent, fostering innovation, and building a vibrant community of developers.

In the world of tech innovation, Maksim Muravev stands out as a driving force behind the Hackathon Raptors community. His firsthand experiences highlight these events’ unique environment, challenging participants to think creatively and collaborate under pressure, ultimately honing their skills and fostering diverse, impactful solutions. Beyond mere competition, Maksim emphasizes hackathons’ vital role in networking and community building within the tech industry. He values the connections forged during these events, which often evolve into lasting friendships and productive collaborations, reflecting his belief in their power to foster a vibrant community of developers eager to learn, share, and grow together.


Similarly, Dmitry Brazhenko, an ML engineer at Microsoft Copilot, champions hackathons as pivotal for innovation in machine learning and AI. Drawing from his background as a data analytics tutor and contributions to open-source libraries, Dmitry views hackathons as unique platforms for experimenting with new algorithms and models collaboratively. His work, including developing SharpToken and insightful articles on Habr, demonstrates the practical impact of machine learning innovations from these competitive settings. Dmitry also stresses the importance of hackathons in making technology education more accessible, bridging academic learning and real-world application, democratizing technology education, and fostering a forward-thinking approach to future technologies.

Alim Shogenov, an exceptional software engineer renowned for his groundbreaking work across multiple sectors, including education, finance, and travel technology, offers another perspective on the transformative power of hackathons. His innovative project, “Digital Future of Education”, earned him prestigious accolades for its transformative impact on document management in educational institutions, slashing processing time by 16% while enhancing user accessibility. Alim highlights hackathons’ unique environment, pivotal for rapid innovation in turning concepts into working prototypes. His expertise and interdisciplinary collaboration underscore hackathons’ significant role in fostering personal and professional growth within a supportive community.

Dmitrii Starikov is an incredibly talented web developer with a wealth of experience in creating high-load systems for world-renowned exhibitions, libraries, and archives. He has also made significant contributions to projects of national significance that preserve the world’s cultural and historical heritage. Dmitrii firmly believes that hackathons give developers unique opportunities to push the boundaries of their professionalism and solve real-world problems. He is thrilled by the distinctive challenge hackathons present: participants get to apply their skills in novel ways, which highlights the soft skills gained through participation, such as enhanced communication and effective presentation of ideas. He is also a big fan of the community aspect of hackathons, where connections with like-minded individuals can lead to collaborations and opportunities beyond the event itself.


Oleg Mikhelson, an outstanding tech expert with decades of experience in technology infrastructure, brings a perspective to the discussion. For Oleg, hackathons are instrumental in driving innovation in systems development, testing, and refining infrastructure solutions under pressure akin to real-world challenges. He values the mentorship aspect of hackathons, seeing them as opportunities for tech professionals to exchange knowledge and mentor up-and-coming talent, fostering a supportive environment where learning and mutual support flourish among enthusiasts. Oleg’s insights underscore the multifaceted benefits of hackathons, from driving technological advancements to building a vibrant, collaborative community that transcends individual events.

But how does one organize a hackathon? Here’s a guide to getting started:

  1. Set your goals: Before diving in, figuring out what you want to achieve is crucial. Whether sparking innovation in a specific industry, tackling a social issue, or simply bringing the developer community closer together, having a clear goal will guide you every step. For the Hackathon Raptors, it’s always been about creating a welcoming space where developers can learn, collaborate, and make a difference.
  2. Pick a theme that speaks to you: Choosing the right theme can make or break your hackathon. It should be broad enough to inspire creativity but focused enough to provide direction. The Hackathon Raptors have organized events around themes like AI for Humanity and Web Accessibility, drawing in a diverse crowd of developers passionate about making a positive impact.
  3. Build your dream team: Organizing a hackathon is no small feat—it takes a dedicated team with a variety of skills. From event planning and marketing to technical expertise and community engagement, having the right mix of people is essential. The Hackathon Raptors thrive thanks to their diverse organizing team, bringing together different perspectives and talents to ensure their events run smoothly.
  4. Find support and partnerships: Sponsors and partners can provide the resources needed to make your hackathon a success. This includes everything from funding and technology to mentorship and prizes. The Hackathon Raptors have teamed up with companies and organizations that share their values, ensuring their events are well-supported and aligned with their community’s goals.
  5. Spread the word: Getting the word out is key to attracting participants. Utilize social media, online forums, and good old-fashioned word of mouth to generate buzz around your event. The Hackathon Raptors excel at drumming up excitement, using engaging content and personal stories from past participants to inspire others to join the fun.
  6. Create a collaborative atmosphere: Building an environment that fosters collaboration and innovation is essential for a successful hackathon. Offer resources like workshops, mentorship, and networking opportunities to support participants every step of the way. The Hackathon Raptors strongly emphasize inclusivity and support, ensuring everyone feels valued and welcome.
  7. Celebrate success: At the end of the day, it’s important to celebrate all participants’ hard work and achievements. Hand out prizes, showcase projects, and allow teams to continue working on their ideas. The Hackathon Raptors community is about sustainable development and innovation, encouraging teams to keep pushing forward even after the event.

But it’s not just about organizing—hackathons have birthed remarkable projects. Take the MSQRD app, conceived by just two developers, which aimed to revolutionize modern messaging for mobile devices. Although the hackathon originally called for four-person teams, the pair hastily joined forces on its eve. MSQRD swiftly gained traction among celebrities flaunting its masks on social media, particularly in Europe and the US.

Intrigued by MSQRD’s technical prowess and user engagement, Facebook struck a deal with the developers, granting them access to its vast user base. With plans for future projects in the entertainment and social sectors, MSQRD now seamlessly integrates its features across platforms like Instagram and WhatsApp.

Hackathons have emerged as transformative forces in the tech world, transcending traditional boundaries to become inclusive platforms for innovation, collaboration, and community building. The stories of Maksim, Dmitry, Alim, Dmitrii, and Oleg exemplify individuals’ diverse perspectives and invaluable contributions to these events. From pushing the boundaries of technology to fostering personal and professional growth, hackathons continue to play a pivotal role in shaping the future of technology and empowering individuals to make a difference. As we look ahead, the Hackathon Raptors and similar communities stand as beacons of inspiration, driving positive change and helping us dream of what’s possible through the power of collaboration and innovation.


Featured image credit: Freepik

BananaConf Tallinn: Where Web3 is Peeled Open for All
https://dataconomy.ru/2024/04/16/bananaconf-tallinn-web3-2024/ (Tue, 16 Apr 2024)

If there’s a single trend dominating the tech conversation in 2024, it’s Web3’s surging momentum. While still a nascent concept to many, Web3 is rapidly gaining traction, backed by a burgeoning market expected to reach $5.5 billion by 2030, with a growth rate of 44.9%. BananaConf, formerly NFT Tallinn, is riding this wave with its 2024 conference, positioning the Nordic & Baltic region as a hub for Web3 exploration and development. 

From April 22 to 24, expect hard-hitting discussions, showcases of bleeding-edge applications, and a glimpse into the internet’s decentralized future.


Dataconomy readers! Receive a 40% community discount on your BananaConf tickets.

Web3 is More Than a Buzzword

Let’s be clear: Web3 isn’t your average tech hype cycle. It’s a foundational change in how we think about the internet. “Web3 breaks the mold of centralized corporate control,” Sander Gansen, cofounder and organizer at BananaConf, told me. “It’s about putting power back into the hands of users, giving them real ownership of their data and a say in how digital platforms are run. That’s going to upend the status quo.”

Drivers include the rising need for decentralized technology and applications, which offer solutions to pressing digital challenges like data breaches and privacy infringements. Additionally, Web 3.0 fosters trust and user control over data through features like decentralization and transparency. 

However, challenges include regulatory scrutiny and the evolving nature of regulations, which can create uncertainty for businesses. The scalability of blockchain technology is another restraint, as current infrastructure struggles to handle the demands of a global decentralized ecosystem. 

Yet, there are opportunities to address these challenges, and the event aims to discuss possible solutions and foster a community of movers and shakers that can peel back the sometimes ugly skin and help reveal the fruits within.

BananaConf: It’s Not Just Crypto Anymore

BananaConf won’t shy away from crypto’s role in Web3, but it aims for a bigger picture. Attendees will see how blockchain’s unique characteristics are reshaping everything from finance and supply chains to the booming creator economy. “Web3 use cases are expanding at breakneck speed now,” Gansen said. “The most successful players won’t just ‘do blockchain’ – they’ll use it to solve problems, improve experiences, and create fairer outcomes for businesses and users alike.”

BananaConf’s ‘five stages’ format will be graced by over 100 speakers, offering attendees everything from showcases to thought leadership, ROI-focused talks to discussions on ethics and regulation, and practical masterclasses to developer workshops.

Attendees will also enjoy a list of side events to suit every palate, from the ETHTallinn hackathon and policymakers roundtable to a VIP yacht party and sauna, and (as is expected of all tech conferences) an after-party or two.

Web3: A “C” of Change

Web3 is all about community, co-creation, collaboration, and direct communication, with the technologies of today and tomorrow serving as tools to deliver an antidote to the transactional nature of Web2. The organizers recognize that it takes a village to create the biggest Web3 event in the Nordics and Baltics.

“BananaConf is being supported by both sponsors and grassroots networks behind its success,” Gansen said. “The mix includes heavy hitters like VISA and Binance sharing stage time with NFT collectives Bored Ape Yacht Club and World of Women, representing Web3’s unique blend of disruptive energy and big-picture investment.”

Whether you’re a startup founder, tech veteran, or curious newcomer, BananaConf in Tallinn promises to be a mind-expanding experience with tangible takeaways.

Remember, Dataconomy readers receive a 40% community discount on your BananaConf tickets!

React Native: Everything you need to know
https://dataconomy.ru/2023/09/05/react-native-everything-you-need-to-know/ (Tue, 05 Sep 2023)

Oh, you’re curious about React Native? Excellent choice! We get it; the mobile development landscape feels like a digital jungle sometimes, but React Native shines like a tech beacon of hope in that chaos. Here’s the lowdown.

React Native is like the Swiss Army knife of mobile development, letting you build apps for both Android and iOS using one codebase. Yup, you heard that right—one codebase, two platforms. Giants like Uber, Microsoft, and even its creator, Facebook, are hitching their wagons to this star across various sectors. But hold your horses; before diving head-first, let’s make sure it’s the right pond for you.

What is React Native?

Okay, so what’s this all about? React Native, often shortened to RN, is your go-to framework for building mobile apps using JavaScript. Think of it as your magic wand that lets you conjure up apps that feel native on both Android and iOS.

Back in 2015, Facebook decided they might as well share the wealth and launched React Native as an open-source project. Fast forward a few years, and it’s dominating the charts like a ’90s boy band. Major players like Instagram and Skype wouldn’t be what they are today without a sprinkle of that React Native magic. We’ll delve deeper into some famous RN-powered apps a little later.
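
If you’ve never seen one, here’s roughly what a React Native screen looks like. This is a minimal, illustrative sketch in TypeScript; the component and style names are ours, not from any particular app:

```tsx
// A tiny React Native screen. The same component renders as native
// UI widgets on both Android and iOS.
import React from 'react';
import { SafeAreaView, StyleSheet, Text } from 'react-native';

export default function HelloScreen(): JSX.Element {
  return (
    <SafeAreaView style={styles.container}>
      <Text style={styles.greeting}>Hello from React Native!</Text>
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
  greeting: { fontSize: 20, fontWeight: 'bold' },
});
```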

Why the rave reviews?

So what makes React Native the Beyoncé of mobile frameworks? A few things, actually.

First off, imagine writing your app’s code just once and then—voila!—it works on both Android and iOS. Yeah, it’s that good. A real time-saver, and let’s be honest, who doesn’t love saving time and money?


Next, it’s built on React, which was already a celebrity in the JavaScript world. If you’re wondering how they differ, stick around; we’ve got a React vs. React Native showdown coming up.

And let’s not forget, it turned frontend developers into superheroes overnight. They went from web wizards to mobile mavens, able to craft high-quality apps for various platforms.

Fun fact alert!

Here’s something you may not have known: React Native was born out of a monumental tech blunder. Yep, sometimes mistakes give rise to revolutions, and React Native is living proof.

So, there you have it. If you’re considering jumping on the React Native train, it’s a ride we wholeheartedly recommend.

The history of React Native

Facebook released React Native in 2015 and has been maintaining it ever since. In 2018, React Native had the second-highest number of contributors of any repository on GitHub. Today, React Native is supported by contributions from individuals and companies around the world, including Callstack, Expo, Infinite Red, Microsoft, and Software Mansion.


The HTML5 experiment

In the early days of mobile expansion, Facebook opted for an HTML5-based mobile site rather than building native apps. This decision, unfortunately, didn’t pan out as planned, leading to subpar performance and user experience issues. In a candid moment in 2012, Mark Zuckerberg admitted, “the biggest mistake we made as a company was betting too much on HTML5 as opposed to native.”

Facebook released React Native in 2015 (Image: Kerem Gülen/Midjourney)

A turning point with JavaScript

The tide began to turn in 2013 when Facebook developer Jordan Walke discovered a method for generating UI components for iOS through JavaScript. This breakthrough led to an internal hackathon at Facebook aimed at exploring how much could be accomplished with JavaScript, a language until then used predominantly for web development.

The emergence of React Native

The result of this exploration was React Native, initially rolled out for iOS. Recognizing its potential, Facebook quickly extended support to Android and made it an open-source project in 2015.

Climbing the ranks

The framework didn’t take long to gain traction. Within just three years of its release, it became the second most contributed-to project on GitHub. By 2019, it maintained its position in the top ten, standing strong at sixth place with over 9,100 contributors.

So there you have it, the journey of React Native from a makeshift solution to a leading mobile development framework. Whether you’re considering it for your next project or thinking of contributing to its growth, it’s a platform we highly recommend.

What is the difference between React and React Native?

Ah, the age-old question: What’s the difference between React and React Native? Though they might seem like siblings in the Facebook family of technologies, they have different talents and career paths, so to speak.

React vs React Native

React, sometimes called ReactJS, is the elder in this tech family. Its playground is the web, helping developers create dynamic and interactive user interfaces for websites. React Native, on the other hand, took that legacy, powered it up with React under the hood, and moved it into the mobile arena. It allows you to build apps for both Android and iOS platforms.

Tech ingredients

Both of them like to play with JavaScript and use JSX (JavaScript XML) for constructing UI elements. But here’s where they part ways—while React incorporates HTML and CSS for styling, React Native skips the web entirely and goes native, using platform-specific mobile UI elements.

Both React and React Native like to play with JavaScript and use JSX (Image: Kerem Gülen/Midjourney)
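
To make the contrast concrete, here’s an illustrative sketch. In real projects the two components would live in separate web and mobile codebases; they share one file here purely for comparison, and all names are invented:

```tsx
import React from 'react';
import { Text } from 'react-native';

// React on the web: HTML tags, styled via CSS classes.
export function WebGreeting(): JSX.Element {
  return <h1 className="greeting">Hello, web!</h1>;
}

// React Native: platform primitives like <Text> instead of HTML,
// styled with JavaScript objects rather than CSS files.
export function NativeGreeting(): JSX.Element {
  return <Text style={{ fontSize: 24 }}>Hello, mobile!</Text>;
}
```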

Different roles for different goals

So, though they share DNA, they’re cut out for different roles. React dresses up websites, while React Native takes charge of mobile app attire. Having a handle on React can be a great starting point, but to be a mobile app dev maestro, you’ll need to get acquainted with the nuances of React Native.

Before we dive into the pros and cons of using this framework, it’s essential to grasp the concept of cross-platform development. After all, that’s what React Native is championing in the mobile development space.

Understanding cross-platform development

So, what’s all the fuss about cross-platform development? In the simplest terms, it’s like a universal translator for software, enabling you to build applications that can talk to multiple operating systems—think Microsoft Windows, Linux, macOS, and so on. Imagine creating a web browser or software like Adobe Flash that offers the same experience regardless of the device you’re using. That’s cross-platform development in action.

Why is it the Holy Grail? It’s every developer’s dream scenario—you write the code once, and it runs everywhere. No need to be a polyglot coder fluent in Java, Swift, Objective-C, and whatnot. This approach is not only a boon for developers but also for businesses. It cuts down the time to market and development costs by almost half. Now, let’s explore some popular cross-platform frameworks that have made this dream a reality.

In the simplest terms, cross-platform is like a universal translator for software (Image: Kerem Gülen/Midjourney)
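
In React Native specifically, “write once” doesn’t mean you can never branch per platform. Here’s a minimal sketch using the real Platform.select helper to apply per-platform values from a single codebase (the padding numbers are made up for the example):

```tsx
import { Platform, StyleSheet } from 'react-native';

// One stylesheet, shared by iOS and Android, with a per-platform tweak.
export const styles = StyleSheet.create({
  header: {
    paddingTop: Platform.select({ ios: 44, android: 24, default: 0 }),
  },
});
```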

Frameworks in the spotlight

  • React Native: Created by Facebook in 2015, React Native is all about bringing the power of React to mobile development. The winning point? You can code in JavaScript without becoming a linguist in platform-specific languages. React Native is particularly strong in crafting high-responsive, intuitive mobile experiences.
  • Ionic: Introduced in 2013 by Drifty, Ionic is your go-to for hybrid mobile development. It relies on familiar friends—HTML, CSS, and JavaScript—and platforms like PhoneGap and Cordova to offer a native-like feel. Built atop Angular, Ionic is packed with built-in components, speeding up the development process. While it may lag behind React Native in performance due to its WebView-based architecture, the good news is you can test the code easily in any browser.
  • Flutter: Hailing from Google’s stable in 2017, Flutter extends its reach beyond just mobile to include other platforms. It’s the ideal playground for developers who love to tinker with new features or fix pesky bugs, thanks to its hot reload feature, which lets you see changes instantly without restarting the app.
  • Xamarin: Developed by Microsoft, Xamarin allows for a high degree of code reuse—somewhere between 75-90%. But there’s a catch—it’s written in C#, so mastery of the language is a must. Interestingly, Microsoft itself has been leaning towards React Native, with 38 of its iOS and Android apps using the framework as of 2019.

Whether you’re a developer looking to broaden your horizons or a business aiming to make a splash in the digital pond, understanding the ins and outs of these cross-platform frameworks can help you make an informed decision. We recommend exploring them to find the best fit for your specific needs.

The magic behind React Native: How it works

So, you’re keen on knowing what makes React Native tick. Even if you’re not a tech wiz, hang tight—we’ve got this explained in the simplest way possible.

What’s in the mix?

React Native is a cocktail of JavaScript and JSX (JavaScript XML), which looks a lot like XML. Now, in a typical mobile app, you’ve got two universes: the JavaScript world and the native world (that’s where the iOS and Android magic happens). This framework can live and operate in both.

The “bridge” connection

Imagine you speak English and your friend speaks French. If you both want to understand each other, you’d need a translator, right? In the React Native world, that translator is called the “bridge.” This feature allows the JavaScript code and the native code (which are like two people speaking different languages) to have a meaningful conversation.

React Native is a cocktail of JavaScript and JSX (JavaScript XML), which looks a lot like XML (Image: Kerem Gülen/Midjourney)

What does this mean for you?

If you already have an app built for iOS or Android, fret not. You don’t have to build everything from scratch when you decide to switch lanes to React Native. The bridge lets you borrow parts from your existing app and integrate them into a new React Native environment.
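
As a rough illustration, here’s how JavaScript might call existing native code over the bridge. BatteryModule and getBatteryLevel are hypothetical names; you would implement and register the module on the iOS and Android side yourself:

```ts
import { NativeModules } from 'react-native';

// Assumed native module, registered separately in Swift/Objective-C and
// Java/Kotlin. The bridge serializes the call, hands it to native code,
// and resolves the promise with the result.
const { BatteryModule } = NativeModules;

export async function showBatteryLevel(): Promise<void> {
  const level: number = await BatteryModule.getBatteryLevel();
  console.log(`Battery level: ${Math.round(level * 100)}%`);
}
```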

In a nutshell, this framework is your flexible friend, bridging the gap between two different coding realms and giving you the freedom to combine the best of both worlds. We think that’s pretty empowering, don’t you?

Benefits of React Native

Ever wondered why this framework has been the talk of the tech town? Spoiler alert: It’s not just hype. Here’s why it’s worth considering for your next app project:

Community to the rescue!

One of the stellar things about React Native is its vast developer community. Think of it as the friendliest, nerdiest neighborhood you could possibly live in. Stuck on a bug? Don’t worry, someone among the nearly 50,000 active contributors on Stack Overflow has got your back. Plus, the community vibe is a skill-booster, leveling up your coding chops.

The gift of code reusability

Imagine being able to write once and run anywhere—well, that’s the React Native promise for you. Even if you have a web application already developed in React, guess what? You can reuse a good chunk of that code for your mobile app. It’s like having one-size-fits-all magic pants. It speeds up the dev process and comes with a treasure trove of pre-built components. What’s not to love?
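
For instance, a plain TypeScript module with no UI dependencies, like this invented sketch, can be imported unchanged by both a React web app and a React Native app:

```ts
// shared/cart.ts: pure business logic, with no DOM and no native widgets,
// so web and mobile apps can both import it as-is.
export interface CartItem {
  name: string;
  unitPrice: number;
  quantity: number;
}

export function cartTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);
}
```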

Your wallet will thank you

Building for both iOS and Android usually means two development teams, which doubles your costs. But with React Native’s cross-platform feature, you can have a single team that rolls out apps for both ecosystems. Your wallet just sighed in relief.

Slick and snappy UI

React Native focuses on providing a buttery-smooth user experience. Whether you’re building a basic app or a complex one with bells and whistles, React Native’s got you covered. The UI responds like a dream and loads in a flash. We think that’s user-centricity at its best.

React Native stands out for its ability to offer both cost-efficiency and high performance (Image: Kerem Gülen/Midjourney)

Instant gratification with fast refresh

Updating your app is a breeze with React Native’s Fast Refresh feature. Picture this: you make a code change and—voila!—it’s live in the app instantly. No more waiting for what feels like eons to see your edits come to life. It’s like having a shortcut to productivity.

Ready for tomorrow, today

React Native is like that cool kid in school who was ahead of trends. It’s future-proof. Despite some hiccups, which we’ll get into later, its rapid market adoption and problem-solving capabilities suggest that it’s here to stay.

Speed that’s easy on the eyes

Concerned about performance? While it’s true that React Native may not be as lightning-quick as native code, the difference is often imperceptible to users. So, you get nearly native performance without the native hassle.

React Native ensures that you’re never in a jam without support (Image: Kerem Gülen/Midjourney)

Bottom line

React Native stands out for its ability to offer both cost-efficiency and high performance. With its thriving developer community, the framework ensures that you’re never in a jam without support. Its key selling point—code reusability—gives it a unique edge, enabling a single team of developers to roll out apps for both iOS and Android.

This not only expedites the time to market but also significantly reduces costs. React Native’s focus on delivering a seamless user experience is a big win, while its Fast Refresh feature adds to development agility.

Despite minor performance trade-offs compared to native solutions, the framework’s advantages overwhelmingly tip the scales in its favor. Whether you’re an entrepreneur looking to disrupt the market or a developer wanting to upskill, this framework is a future-proof choice that offers both versatility and reliability.

]]>
Life of modern-day alchemists: What does a data scientist do? https://dataconomy.ru/2023/08/16/what-does-a-data-scientist-do/ Wed, 16 Aug 2023 14:54:28 +0000 https://dataconomy.ru/?p=40291 Today’s question is, “What does a data scientist do.” Step into the realm of data science, where numbers dance like fireflies and patterns emerge from the chaos of information. In this blog post, we’re embarking on a thrilling expedition to demystify the enigmatic role of data scientists. Think of them as modern-day detectives, archeologists, and alchemists […]]]>

Today’s question is, “What does a data scientist do?” Step into the realm of data science, where numbers dance like fireflies and patterns emerge from the chaos of information. In this blog post, we’re embarking on a thrilling expedition to demystify the enigmatic role of data scientists. Think of them as modern-day detectives, archeologists, and alchemists combined, all working their magic to decipher the language of data and unearth the gems hidden within.

Imagine a locked door behind which lies a wealth of secrets waiting to be discovered. Data scientists are the master keyholders, unlocking this portal to reveal the mysteries within. They wield algorithms like ancient incantations, summoning patterns from the chaos and crafting narratives from raw numbers. With a blend of technical prowess and analytical acumen, they unravel the most intricate puzzles hidden within the data landscape.

But make no mistake; data science is not a solitary endeavor; it’s a ballet of complexities and creativity. Data scientists waltz through intricate datasets, twirling with statistical tools and machine learning techniques. They craft models that predict the future, using their intuition as partners in this elegant dance of prediction and possibility.

Exploring the question, “What does a data scientist do?” reveals their role as information alchemists, turning data into gold (Image credit: Eray Eliaçık/Wombo)

Prepare to be amazed as we unravel the mysteries and unveil the fascinating world of data science, where data isn’t just numbers; it’s a portal to a universe of insights and possibilities. Keep reading to learn everything you need to answer the million-dollar question: What does a data scientist do?

What is a data scientist?

At its core, a data scientist is a skilled professional who extracts meaningful insights and knowledge from complex and often large datasets. They bridge the gap between raw data and valuable insights, using a blend of technical skills, domain knowledge, and analytical expertise. Imagine data scientists as modern-day detectives who sift through a sea of information to uncover hidden patterns, trends, and correlations that can inform decision-making and drive innovation.

Data scientists utilize a diverse toolbox of techniques, including statistical analysis, machine learning, data visualization, and programming, to tackle a wide range of challenges across various industries. They possess a unique ability to transform data into actionable insights, helping organizations make informed choices, solve complex problems, and predict future outcomes.

What does a data scientist do? They embark on a quest to decipher data’s hidden language, transforming raw numbers into actionable insights (Image credit)

In a nutshell, a data scientist is:

  • A problem solver: Data scientists tackle real-world problems by designing and implementing data-driven solutions. Whether it’s predicting customer behavior, optimizing supply chains, or improving healthcare outcomes, they apply their expertise to solve diverse challenges.
  • A data explorer: Much like explorers of old, data scientists venture into the unknown territories of data. They dive deep into datasets, discovering hidden treasures of information that might not be apparent to the untrained eye.
  • A model builder: Data scientists create models that simulate real-world processes. These models can predict future events, classify data into categories, or uncover relationships between variables, enabling better decision-making.
  • An analyst: Data scientists meticulously analyze data to extract meaningful insights. They identify trends, anomalies, and outliers that can provide valuable information to guide business strategies.
  • A storyteller: Data scientists don’t just crunch numbers; they are skilled storytellers. They convey their findings through compelling visualizations, reports, and presentations that resonate with both technical and non-technical audiences.
  • An innovator: In a rapidly evolving technological landscape, data scientists continuously seek new ways to harness data for innovation. They keep up with the latest advancements in their field and adapt their skills to suit the ever-changing data landscape.

Data scientists play a pivotal role in transforming raw data into actionable knowledge, shaping industries, and guiding organizations toward data-driven success. As the digital world continues to expand, the demand for data scientists is only expected to grow, making them a crucial driving force behind the future of innovation and decision-making.

Wondering, “What does a data scientist do?” Look no further – they manipulate data, build models, and drive informed decisions.

What does a data scientist do: Responsibilities and duties

“What does a data scientist do?” The answer encompasses data exploration, feature engineering, and model refinement. In the grand performance of data science, data scientists don multiple hats, each with a unique flair that contributes to the harmonious masterpiece.

At the heart of the matter lies the query, “What does a data scientist do?” The answer: they craft predictive models that illuminate the future (Image credit)
  • Data collection and cleaning: Data scientists kick off their journey by embarking on a digital excavation, unearthing raw data from the digital landscape. Just like sifting through ancient artifacts, they meticulously clean and refine the data, preparing it for the grand unveiling.
  • Exploratory Data Analysis (EDA): Like intrepid explorers wandering through an uncharted forest, data scientists traverse the terrain of data with curiosity. They create visualizations that resemble vibrant treasure maps, unveiling trends, anomalies, and secrets hidden within the data’s labyrinth.
  • Model development: Crafting magic from algorithms! Picture data scientists as wizards conjuring spells from algorithms. They build models that can predict the future, classify the unknown, and even find patterns in the seemingly chaotic.
  • Feature engineering: In the alchemical process of data science, data scientists are the masters of distillation. They transform raw ingredients (data) into refined essences (features) that fuel their predictive concoctions.
  • Machine learning and AI: Ready to cast predictive spells? Enter the realm of enchantment where data scientists train machine learning models. It’s a bit like teaching a dragon to dance – a careful choreography of parameters and data to breathe life into these models.
  • Evaluation and optimization: Data scientists embark on a quest to fine-tune their creations. It’s a journey of trial and error, with the goal of crafting models that are as accurate as a marksman’s arrow (see the toy sketch after this list).
  • Communication and visualization: Data scientists don’t just crunch numbers; they weave tales. Like master storytellers, they craft visualizations and reports that captivate the minds of decision-makers and stakeholders.
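
To ground the evaluation step a little, here’s a toy sketch, not tied to any particular library, of holding out test data and scoring a model on it:

```ts
type Example = { features: number[]; label: number };

// Fisher-Yates shuffle, so the held-out set is a random sample.
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Reserve a fraction of the labeled data for testing.
function trainTestSplit(data: Example[], testRatio = 0.2) {
  const shuffled = shuffle(data);
  const cut = Math.floor(data.length * (1 - testRatio));
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}

// Fraction of held-out examples the model labels correctly.
function accuracy(predict: (features: number[]) => number, test: Example[]): number {
  const correct = test.filter(e => predict(e.features) === e.label).length;
  return correct / test.length;
}
```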

At the nexus of technology and analysis, the solution to “What does a data scientist do?” becomes clear: they wield data as a compass.


What does a data scientist do: The impact on industries

The impact of data scientists extends far and wide, like ripples from a stone cast into a pond.

Delving into the depths of data, we uncover the myriad tasks that constitute the answer to “What does a data scientist do?” (Image credit)

Let’s explore the realms they conquer:

  • Healthcare: Data scientists are like healers armed with foresight in healthcare. They predict disease outbreaks, patient outcomes, and medical trends, aiding doctors in delivering timely interventions.
  • Finance: Imagine data scientists as financial wizards, foreseeing market trends and curating investment strategies that seem almost magical in their precision.
  • Retail and e-commerce: In the world of retail, data scientists craft potions of customer satisfaction. They analyze buying behaviors and concoct personalized recommendations that leave shoppers spellbound.
  • Manufacturing: In manufacturing, data scientists work like production sorcerers, optimizing processes, reducing defects, and ensuring every cog in the machinery dances to the tune of efficiency.
  • Social sciences: Data scientists also play modern-day Sherlock Holmes, helping social scientists unravel the mysteries of human behavior, from sentiment analysis to demographic shifts.

Exploring the multifaceted answer to “What does a data scientist do?” reveals their pivotal role in turning data into informed decisions.

What is a data scientist salary?

The salary of a data scientist varies depending on their experience, skills, and location. In the United States, the average salary for a data scientist is $152,260 per year. However, salaries can range from $99,455 to $237,702 per year.

“What does a data scientist do?” you may ask. They curate, clean, and analyze data, unveiling valuable gems of information (Image credit)

Glimpsing into their world, the response to “What does a data scientist do?” unfolds as a blend of data exploration and storytelling. Here is a breakdown of the average salary for data scientists in different industries:

  • Technology: $157,970 per year
  • Finance: $156,390 per year
  • Healthcare: $147,460 per year
  • Retail: $139,170 per year
  • Government: $136,020 per year

Data scientists in large cities tend to earn higher salaries than those in smaller cities. For example, the average salary for a data scientist in San Francisco is $165,991 per year, while the average salary for a data scientist in Austin, Texas, is $129,617 per year.

When pondering, “What does a data scientist do?” remember their art of turning data chaos into strategic clarity.

Where do data scientists work?

Data scientists work in a variety of industries, including:

  • Technology: Technology companies are always looking for data scientists to help them develop new products and services. Some of the biggest tech companies that hire data scientists include Google, Facebook, Amazon, and Microsoft.
  • Finance: Financial institutions use data scientists to analyze market data, predict trends, and make investment decisions. Some of the biggest financial institutions that hire data scientists include Goldman Sachs, Morgan Stanley, and JP Morgan Chase.
  • Healthcare: Healthcare organizations use data scientists to improve patient care, develop new treatments, and reduce costs. Some of the biggest healthcare organizations that hire data scientists include Kaiser Permanente, Mayo Clinic, and Johns Hopkins Hospital.
  • Retail: Retail companies use data scientists to understand customer behavior, optimize inventory, and personalize marketing campaigns. Some of the biggest retail companies that hire data scientists include Walmart, Amazon, and Target.
  • Government: Government agencies use data scientists to analyze data, make policy decisions, and fight crime. Some of the biggest government agencies that hire data scientists include the Department of Defense, the Department of Homeland Security, and the National Security Agency.

In addition to these industries, data scientists can also work in a variety of other sectors, such as education, manufacturing, and transportation. The demand for data scientists is growing rapidly, so there are many opportunities to find a job in this field.

The question of “What does a data scientist do?” leads us to their role in shaping business strategies through data-driven insights (Image credit: Eray Eliaçık/Wombo)

Here are some specific examples of companies that hire data scientists:

  • Google: Google is one of the biggest tech companies in the world, and they hire data scientists to work on a variety of projects, such as developing new search algorithms, improving the accuracy of Google Maps, and creating personalized advertising campaigns.
  • Facebook: Facebook is another big tech company that hires data scientists. Data scientists at Facebook work on projects such as developing new ways to recommend friends, predicting what content users will like, and preventing the spread of misinformation.
  • Amazon: Amazon is a major e-commerce company that hires data scientists to work on projects such as improving the accuracy of product recommendations, optimizing the shipping process, and predicting customer demand.
  • Microsoft: Microsoft is a software company that hires data scientists to work on projects such as developing new artificial intelligence (AI) technologies, improving the security of Microsoft products, and analyzing customer data.
  • Walmart: Walmart is a major retailer that hires data scientists to work on projects such as optimizing inventory, reducing food waste, and personalizing marketing campaigns.

These are just a few examples of companies that hire data scientists. As the demand for data scientists continues to grow, there will be even more opportunities to find a job in this field.

At the heart of the question, “What does a data scientist do?” lies their ability to craft algorithms that illuminate trends.

Data scientist vs data analyst: A needed comparison

The differences between these two terms, which are often confused, are as follows:

  • Role: A data scientist solves complex problems and forecasts future trends using advanced statistical techniques and predictive modeling, while a data analyst interprets data to uncover actionable insights that guide business decisions.
  • Skills: A data scientist possesses a broad skill set including Python, R, machine learning, and data visualization; a data analyst utilizes tools like SQL and Excel for data manipulation and report creation.
  • Work: A data scientist works with larger, more complex data sets; a data analyst works with smaller ones.
  • Education: A data scientist often holds a higher degree (Master’s or PhD); a data analyst may only require a Bachelor’s degree.

How long does it take to become a data scientist?

The amount of time it takes to become a data scientist varies depending on your educational background, prior experience, and the skills you want to learn. If you have a bachelor’s degree in a related field, such as computer science, mathematics, or statistics, you can become a data scientist in about two years by completing a master’s degree in data science or a related field.

If you don’t have a bachelor’s degree in a related field, you can still become a data scientist by completing a boot camp or an online course. However, you will need to be self-motivated and have a strong foundation in mathematics and statistics.

No matter what path you choose, gaining experience in data science by working on projects, participating in hackathons, and volunteering is important.

As we ponder “What does a data scientist do?” we find they are data storytellers, transforming numbers into compelling narratives (Image credit)

Here is a general timeline for becoming a data scientist:

  • 0-2 years: Complete a bachelor’s degree in a related field.
  • 2-3 years: Complete a master’s degree in data science or a related field.
  • 3-5 years: Gain experience in data science by working on projects, participating in hackathons, and volunteering.
  • 5+ years: Build your portfolio and apply for data science jobs.

Of course, this is just a general timeline. The time it takes to become a data scientist will vary depending on your circumstances. However, if you are passionate about data science and willing to work hard, you can become a data scientist in 2-5 years.

If you want to learn how to become a data scientist, visit the related article and explore! The magic of “What does a data scientist do?” is in their ability to transform raw data into strategic wisdom.

Shaping tomorrow’s horizons

At its core, the answer to “What does a data scientist do?” revolves around transforming data into a strategic asset.

As we conclude our journey through the captivating landscape of data science, remember that data scientists are the architects of insights, the conjurers of predictions, and the artists of transformation. They wield algorithms like wands, uncovering the extraordinary within the ordinary. The future lies in the hands of these modern explorers, charting uncharted territories and sculpting a world where data illuminates the path ahead.

So, the next time you encounter a data scientist, remember they are not just crunching numbers – they are painting the canvas of our data-driven future with strokes of innovation and brilliance!

Featured image credit: ThisIsEngineering/Pexels

]]>
15 must-try open source BI software for enhanced data insights https://dataconomy.ru/2023/05/10/open-source-business-intelligence-software/ Wed, 10 May 2023 10:00:58 +0000 https://dataconomy.ru/?p=35573 Open source business intelligence software is a game-changer in the world of data analysis and decision-making. It has revolutionized the way businesses approach data analytics by providing cost-effective and customizable solutions that are tailored to specific business needs. With open source BI software, businesses no longer need to rely on expensive proprietary software solutions that […]]]>

Open source business intelligence software is a game-changer in the world of data analysis and decision-making. It has revolutionized the way businesses approach data analytics by providing cost-effective and customizable solutions that are tailored to specific business needs. With open source BI software, businesses no longer need to rely on expensive proprietary software solutions that can be inflexible and difficult to integrate with existing systems.

Instead, open source BI software offers a range of powerful tools and features that can be customized and integrated seamlessly into existing workflows, making it easier than ever for businesses to unlock valuable insights and drive informed decision-making.

What is open source business intelligence?

Open-source business intelligence (OSBI) software is business intelligence software whose source code is freely available rather than sold under traditional software licensing agreements. It is an alternative for businesses that want to aggregate more data from data-mining processes without buying fee-based products.

What are the features of an open source business intelligence software?

Open source business intelligence software provides a cost-effective and flexible way for businesses to access and analyze their data. Here are some of the key features of open source BI software:

  • Data integration: Open source BI software can pull data from various sources, such as databases, spreadsheets, and cloud services, and integrate it into a single location for analysis (see the sketch after this list).
  • Data visualization: Open source BI software offers a range of visualization options, including charts, graphs, and dashboards, to help businesses understand their data and make informed decisions.
  • Report generation: Open source BI software enables businesses to create customized reports that can be shared with team members and stakeholders to communicate insights and findings.
  • Predictive analytics: Open source BI software can use algorithms and machine learning to analyze historical data and identify patterns that can be used to predict future trends and outcomes.
  • Collaboration: Open source BI software allows team members to work together on data analysis and share insights with each other, improving collaboration and decision-making across the organization.
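
As a toy illustration of the data integration point above, here’s a sketch that normalizes records from two assumed sources into one shape for analysis; every field name is invented:

```ts
interface Sale { region: string; amount: number; source: string }

// Records as they might arrive from a spreadsheet export and a cloud API.
const fromSpreadsheet = [{ Region: 'EU', Total: '1200' }];
const fromApi = [{ region: 'US', amount: 950 }];

// Normalize both feeds into a single dataset ready for reporting.
const unified: Sale[] = [
  ...fromSpreadsheet.map(r => ({
    region: r.Region,
    amount: Number(r.Total),
    source: 'spreadsheet',
  })),
  ...fromApi.map(r => ({ ...r, source: 'api' })),
];

console.log(unified);
```
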
Open source business intelligence software has made it easier than ever for businesses to integrate data analytics into their workflows

How to select the right business intelligence software?

Selecting the right open source business intelligence software can be a challenging task, as there are many options available in the market. Here are some factors to consider when selecting the right BI software for your business:

  • It’s important to identify the specific business needs that the BI software should address. Consider the types of data you want to analyze, the frequency of reporting, and the number of users who will need access to the software.
  • Look for BI software that can integrate data from different sources, such as databases, spreadsheets, and cloud services. This ensures that all data is available for analysis in one central location.
  • BI software should be easy to use and have a user-friendly interface. This allows users to quickly analyze data and generate reports without needing extensive training.
  • BI software should allow for customization of reports and dashboards. This allows users to tailor the software to their specific needs and preferences.
  • Ensure that the BI software has robust security features to protect sensitive data. Look for software that supports role-based access control, data encryption, and secure user authentication.
  • Consider the future growth of your business and ensure that the BI software can scale to meet your future needs.
  • Consider the cost of the software and any associated licensing fees or maintenance costs. Open source BI software can be a cost-effective option as it is typically free to use and has a large community of developers who provide support.

Why not opt for a paid version instead?

While open source business intelligence software is a great option for many businesses, there are also some benefits to using a paid version. Here are some reasons why businesses may want to consider a paid BI software:

  • Paid BI software often comes with more advanced features, such as predictive analytics and machine learning, that can provide deeper insights into data.
  • Paid BI software often comes with dedicated technical support, which can help businesses troubleshoot any issues and ensure that the software is running smoothly.
  • Paid BI software often provides more robust security features, such as data encryption and secure user authentication, to protect sensitive data.
  • Paid BI software often integrates with other tools, such as customer relationship management (CRM) or enterprise resource planning (ERP) software, which can provide a more comprehensive view of business operations.
  • Paid BI software often allows for greater customization, allowing businesses to tailor the software to their specific needs and preferences.
  • Paid BI software often offers more scalability options, allowing businesses to easily scale up or down as needed to meet changing business needs.

15 open source business intelligence software (free)

It’s important to note that the following list of 15 open source business intelligence software tools is not ranked in any particular order. Each of these software solutions has its own unique features and capabilities that are tailored to different business needs. Therefore, businesses should carefully evaluate their specific requirements before choosing a tool that best fits their needs.

ClicData

ClicData provides a range of dashboard software solutions, including ClicData Personal, which is available free of cost and provides users with 1 GB of data storage capacity along with unlimited dashboards for a single user. Alternatively, the premium version of ClicData offers more extensive features, including a greater number of data connectors, the ability to automate data refreshes, and advanced sharing capabilities for multi-user access.

JasperReports Server

JasperReports Server is a versatile reporting and analytics software that can be seamlessly integrated into web and mobile applications, and used as a reliable data repository that can deliver real-time or scheduled data analysis. The software is open source, and also has the capability to manage the Jaspersoft paid BI reporting and analytics platform.

The flexibility and scalability of open source business intelligence software make it an attractive option for businesses of all sizes

Preset

Preset is a comprehensive business intelligence software designed to work with Apache Superset, an open-source software application for data visualization and exploration that can manage data at the scale of petabytes. Preset provides a fully hosted solution for Apache Superset, which was originally developed as a hackathon project at Airbnb in the summer of 2015.


Helical Insight

Helical Insight is an open-source business intelligence software that offers a wide range of features, including e-mail scheduling, visualization, exporting, multi-tenancy, and user role management. The framework is API-driven, allowing users to seamlessly incorporate any additional functionality they may require. The Instant BI feature of Helical Insight facilitates a user-friendly experience, with a Google-like interface that enables users to ask questions and receive relevant reports and charts in real-time.

Open source business intelligence software has disrupted the traditional market for proprietary software solutions

Lightdash

Lightdash is a recently developed open-source business intelligence software solution that can connect with a user’s dbt project, and enable the addition of metrics directly in the data transformation layer. This allows users to create and share insights with the entire team, promoting collaboration and informed decision-making.

KNIME

KNIME is a powerful open-source platform for data analysis that features over 1,000 modules, an extensive library of algorithms, and hundreds of pre-built examples of analyses. The software also offers a suite of integrated tools, making it an all-in-one solution for data scientists and BI executives. With its broad range of features and capabilities, KNIME has become a popular choice for data analysis across a variety of industries.

The open source nature of business intelligence software fosters a community of collaboration and innovation

Abixen

Abixen is a software platform that is based on microservices architecture, and is primarily designed to facilitate the creation of enterprise-level applications. The platform empowers users to implement new functionalities by creating new, separate microservices. Abixen’s organizational structure is divided into pages and modules, with one of the modules dedicated to Business Intelligence services. This module enables businesses to leverage sophisticated data analysis tools and techniques to gain meaningful insights into their operations and drive informed decision-making.


Microsoft Power BI

Microsoft Power BI offers a free version of their platform, which comes with a 1 GB per user data capacity limit and a once-per-day data-refresh schedule. The platform’s dashboards allow users to present insights from a range of third-party platforms, including Salesforce and Google Analytics, on both desktop and mobile devices. Additionally, Power BI provides users with the ability to query the software using natural language, which enables users to enter plain English queries and receive meaningful results.

With a range of powerful tools and features, open source business intelligence software can be tailored to meet specific business needs

ReportServer

ReportServer is a versatile open source business intelligence software solution that integrates various reporting engines into a single user interface, enabling users to access the right analytics tool for the right purpose at the right time. The software is available in both a free community tier and an enterprise tier, and offers a range of features and capabilities, including the ability to generate ad-hoc list-like reports through its Dynamic List feature. This functionality empowers users to quickly generate customized reports based on their specific needs, promoting informed decision-making across the organization.

SpagoBI / Knowage

SpagoBI is a comprehensive open-source business intelligence suite that comprises various tools for reporting, charting, and data-mining. The software is developed by the Open Source Competency Center of Engineering Group, which is a prominent Italian software and services company that provides a range of professional services, including user support, maintenance, consultancy, and training. The SpagoBI team has now rebranded the software under the Knowage brand, which continues to offer the same suite of powerful BI tools and features.

Open source business intelligence software empowers businesses to unlock valuable insights and make data-driven decisions

Helical Insight

Helical Insight is an innovative open-source BI tool that adopts a unique approach to self-service analytics. The software provides a BI platform that enables end-users to seamlessly incorporate any additional functionality that they may require by leveraging the platform’s API. This enables businesses to customize the BI tool to their specific needs, and to promote informed decision-making based on meaningful insights.


Jaspersoft

Jaspersoft is a versatile and highly customizable Business Intelligence platform that is developer-friendly, and allows developers to create analytics solutions that are tailored to the specific needs of their business. The platform is highly regarded by many users for its extensive customization options, and is particularly favored by Java developers. However, some users have noted certain weaknesses of the platform, such as a lack of support in the community for specific problems, as well as an unintuitive design interface. Nonetheless, Jaspersoft remains a popular choice for businesses that require a flexible and developer-friendly BI platform.

Many businesses are now adopting open source business intelligence software to leverage its cost-effective and customizable features

Tableau Public

Tableau Public is a free, powerful BI software that empowers users to create interactive charts and live dashboards, and publish them on the internet, embed them on a website, or share them on social media. The software provides a range of customization options that enable users to optimize the display of their content across various platforms, including desktop, tablet, and mobile devices. Additionally, Tableau Public can connect to Google Sheets, and data can be auto-refreshed once per day, ensuring that users always have access to the most up-to-date information. Overall, Tableau Public is an excellent choice for anyone who wants to create and share compelling data visualizations.

BIRT

BIRT (Business Intelligence Reporting Tool) is an open source business intelligence software project that has achieved top-level status within the Eclipse Foundation. The software is designed to pull data from various data sources, enabling users to generate powerful reports and visualizations that support informed decision-making. With its flexible architecture and extensive set of features, BIRT is a popular choice for businesses and organizations that require a reliable and versatile BI tool.

Open source business intelligence software has revolutionized the way businesses approach data analytics

Zoho Reports

Zoho Reports is a powerful BI platform that enables users to connect to almost any data source and generate visual reports and dashboards for analysis. The software is equipped with a robust analytics engine that can process hundreds of millions of records and return relevant insights in a matter of seconds. With its extensive range of features, Zoho Reports is a popular choice for businesses that require a reliable and versatile BI tool. The software also offers a free version that allows for up to two users, making it a cost-effective option for smaller organizations or teams.

Final words

Open source business intelligence software has become an essential tool for businesses looking to make data-driven decisions. The benefits of open source BI software are clear: cost-effectiveness, customization, flexibility, and scalability. With a wide range of tools and features available, businesses can easily adapt open source BI software to their specific needs, and leverage powerful analytics tools to gain meaningful insights into their operations. By embracing open source BI software, businesses can stay ahead of the competition, make informed decisions, and drive growth and success.


FAQ

What are the benefits of using open source business intelligence software?

The benefits of using open source business intelligence software include cost savings, customization capabilities, and community support. Open source business intelligence software can provide organizations with the tools they need to analyze data, create reports, and make informed business decisions.

How do I choose the right open source business intelligence software for my organization?

When choosing the right open source business intelligence software for your organization, consider factors such as features, data sources, user interface, customization options, and community support.

How do I integrate open source business intelligence software with other systems?

Integrating open source business intelligence software with other systems can be done using APIs or connectors. Choose compatible systems and test the integration to ensure that it is working correctly.

How can I ensure the security of my open source business intelligence software?

Implement access controls, encryption, and keep the software up-to-date with the latest security patches and updates. Use strong passwords and two-factor authentication to provide an extra layer of security.

]]>
Privacy Lost: Can Decentralized Data Exchanges and MPC Provide the Cure? https://dataconomy.ru/2022/10/25/privacy-decentralized-data-exchanges-mpc/ https://dataconomy.ru/2022/10/25/privacy-decentralized-data-exchanges-mpc/#respond Tue, 25 Oct 2022 15:49:02 +0000 https://dataconomy.ru/?p=30987 Data privacy has rapidly become the number one concern among Internet users. With recent scandals involving data breaches, lack of user control, and third-party exploitation, it’s easy to understand why. However, this doesn’t mean that all hope is lost. There are various ways we can leverage blockchain technology to solve the data privacy dilemma through […]]]>

Data privacy has rapidly become the number one concern among Internet users. With recent scandals involving data breaches, lack of user control, and third-party exploitation, it’s easy to understand why. However, this doesn’t mean that all hope is lost. There are various ways we can leverage blockchain technology to solve the data privacy dilemma through decentralized data exchanges, multi-party computation (MPC), and more.

After all, what good is decentralization if we can’t use it to protect our private information? Let’s take a closer look.

Blockchain Data Privacy Solutions

There are a few ways that blockchain can help protect your data privacy. The most obvious solution is to limit who can access your data in the first place. While centralized databases are wide open for the taking, companies and individuals can only access blockchain-based data with your permission.

Another way that blockchain can protect your data privacy is by letting you decide how long you want to keep your information on the blockchain. Centralized databases like Gmail and Facebook typically keep your information indefinitely. However, blockchain technology lets you set a deadline for when your data expires and gets deleted from the network.

Decentralized Storage

One of the biggest challenges in data privacy is centralized data storage on a third-party server. After all, if a malicious hacker gains access to that third-party server, they can steal a wide variety of your personal information. This data storage model creates a centralized channel that malicious actors can exploit. Thus, it’s crucial to devise ways to decentralize data storage.

Thankfully, we have blockchain to help us decentralize data storage. By creating a blockchain-based decentralized storage network, we can ensure that your data is safely stored and encrypted locally. With decentralized storage, you’ll have greater control over your data and its use.
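
As a rough sketch of what “encrypted locally” can mean in practice, here’s client-side encryption with Node’s built-in crypto module before any byte leaves your machine. Key management is deliberately left out:

```ts
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Encrypt locally with AES-256-GCM; only the ciphertext, IV, and auth tag
// would ever be uploaded to the storage network.
export function encrypt(plaintext: Buffer, key: Buffer) {
  const iv = randomBytes(12); // unique per message
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decryption requires the locally held key; the network never sees it.
export function decrypt(key: Buffer, iv: Buffer, ciphertext: Buffer, tag: Buffer): Buffer {
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}
```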

Blockchain-Based Identification

Another way that blockchain can protect your data privacy is by providing you with a blockchain-based identification. You can use this identification method to prove your identity and access different goods, services, and websites.

As opposed to the current identification system that uses government-issued identification, blockchain-based identification can be accessed with your unique, encrypted “fingerprint.” You can generate this identification fingerprint by encrypting your personal information, such as your name and address, with a blockchain public key.

This blockchain-based identification method is highly secure and incredibly difficult to hack. Even if someone were to steal your fingerprint, they would still be unable to decrypt your personal information. This makes blockchain-based identification an optimal solution for protecting your data privacy.
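
As a simplified illustration of that one-way “fingerprint” property (real decentralized-identity systems are considerably more involved and typically use key pairs rather than a bare hash), consider this sketch:

```ts
import { createHash, randomBytes } from 'crypto';

// Derive a fingerprint from personal details plus a user-held salt.
// The hash is easy to verify but infeasible to reverse.
export function makeFingerprint(name: string, address: string, salt: Buffer): string {
  return createHash('sha256').update(salt).update(name).update(address).digest('hex');
}

const salt = randomBytes(16); // stays with the user, like a private key
console.log(makeFingerprint('Ada Lovelace', '12 Example Street', salt));
```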

Blockchain-based dApps and MPC for Enhanced Data Protection

Another way that blockchain can protect your data privacy is through decentralized apps (dApps). Developers can build these dApps on the blockchain to ensure the security of your data. Decentralized apps are hosted on decentralized networks and use smart contracts to enforce the rules of engagement.

This means that decentralized apps have a level of security that far exceeds that of traditional apps. Decentralized apps also allow you to retain complete control over your data. Unlike conventional apps hosted on centralized servers, decentralized apps are hosted on distributed networks.

In addition, multi-party computation (MPC) adds another layer of privacy and security. The protocol protects confidential data when appropriate and when the user chooses, helping give users back control.

MPC works by splitting the traditional private keys into multiple pieces and distributing them in numerous places to ensure no one person has full access to the traditional private key. The major advantage here is that the private key is always used in a distributed manner, which makes it more secure.

In other words, MPC technology makes it much harder for potential hackers to gain control over a user’s wallet. To do so, they now need to attack multiple parties across different operating platforms at different locations simultaneously.
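
Here’s a toy n-of-n secret split in that spirit: additive XOR sharing, where no single share reveals anything about the key, but all shares together reconstruct it. Production MPC and threshold-signature schemes are far more sophisticated than this sketch:

```ts
import { randomBytes } from 'crypto';

// Split a secret into n shares: n-1 random buffers, plus one final share
// chosen so that XOR-ing all n shares yields the original secret.
export function splitSecret(secret: Buffer, n: number): Buffer[] {
  const shares = Array.from({ length: n - 1 }, () => randomBytes(secret.length));
  const last = Buffer.from(secret);
  for (const share of shares) {
    for (let i = 0; i < last.length; i++) last[i] ^= share[i];
  }
  return [...shares, last];
}

// Recombine: XOR every share together to recover the secret.
export function combineShares(shares: Buffer[]): Buffer {
  const secret = Buffer.alloc(shares[0].length);
  for (const share of shares) {
    for (let i = 0; i < secret.length; i++) secret[i] ^= share[i];
  }
  return secret;
}
```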

Are you inspired to design the future?

Decentralized storage, blockchain-based identification, and decentralized apps for enhanced data protection are all ways that blockchain can help protect your data privacy.

Between October 25 and November 14, Partisia Blockchain is running a hackathon focusing on finding solutions to problems like this. With two tracks – DeFi Beyond Crypto and Data Economy – hackathon participants will benefit from mentorship and workshops, meet fellow hackers and bring their web3 ideas to life on the MPC privacy-preserving blockchain.

Register now, and you could not only find a solution to the issue of DeFi and carbon credits but also land among the top 30 teams invited to the on-site event in Paris. You may even walk away with a share of an incredible grant pool, including a whopping $125k and $100k for the two best projects, respectively.

]]>
https://dataconomy.ru/2022/10/25/privacy-decentralized-data-exchanges-mpc/feed/ 0
DeFi and the carbon credits market are a match made in heaven, so why has it stalled? https://dataconomy.ru/2022/10/18/defi-carbon-credits-market-match-stalled/ https://dataconomy.ru/2022/10/18/defi-carbon-credits-market-match-stalled/#respond Tue, 18 Oct 2022 13:25:15 +0000 https://dataconomy.ru/?p=30654 Many startups have tried, and are trying, to use blockchain and – lately – decentralized finance (DeFi) for good. One of the most promising areas is the carbon credits market, where blockchain, transparent distributed ledgers, and smart contracts appear to be a natural fit. But things haven’t progressed as planned. What are carbon credits? Carbon […]]]>

Many startups have tried, and are trying, to use blockchain and – lately – decentralized finance (DeFi) for good. One of the most promising areas is the carbon credits market, where blockchain, transparent distributed ledgers, and smart contracts appear to be a natural fit. But things haven’t progressed as planned.

What are carbon credits?

Carbon credits are a financial instrument used in carbon trading. The carbon credits market, born in Kyoto in 1997, allows businesses and countries to buy and sell permits that limit them to producing a specific amount of carbon dioxide. So those permits help the environment, but how can they help the bottom line, the drive of so many business decisions?

The general idea behind carbon trading is that companies or countries that can reduce emissions below their permit allowance are able to sell those reductions as credits. Those entities are then incentivized to invest in technology and other methods of reducing their emissions to make more money.

In theory, it’s a great way for everyone involved to save money by reducing their emissions while simultaneously creating new markets for innovation. As you might imagine, it’s not always easy to get these markets off the ground – and that’s where DeFi has promised (and so far largely failed) to step in.

Why has DeFi not revolutionized the carbon credit market yet?

Some startups felt they could turbocharge the climate fight with crypto-economics by pushing carbon markets onto the blockchain. They would provide a global infrastructure data layer and force polluting companies to pay higher prices for carbon credits or seek more environmentally friendly approaches to their businesses.

Ultimately, these DeFi startups attempted, in many cases, to upend the system by creating infrastructure to make it easy for people to buy carbon credits, which would then be retired and replaced with new tokens on the blockchain. As a result, the tokens would be stored publicly and safely, able to be bought and traded like any other crypto asset, attracting prospective buyers who previously had no interest in carbon credits.
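
A minimal sketch of that retire-and-tokenize flow might look like the toy registry below; in a real system this logic would live in a smart contract, and the class and credit ID here are purely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CarbonRegistry:
    """Toy model of retiring an off-chain credit and minting a token for it."""
    retired: set = field(default_factory=set)   # retired off-chain credit IDs
    tokens: dict = field(default_factory=dict)  # token ID -> current owner

    def tokenize(self, credit_id: str, buyer: str) -> str:
        if credit_id in self.retired:
            raise ValueError("credit already retired")
        self.retired.add(credit_id)     # retire the off-chain credit...
        token_id = f"TCO2-{credit_id}"  # ...and mint a matching on-chain token
        self.tokens[token_id] = buyer
        return token_id

registry = CarbonRegistry()
token = registry.tokenize("VCS-981-2014", "alice")  # hypothetical credit ID
```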

Campaigns by crypto environmental groups led to millions of carbon credits arriving on chain. According to some scientists and watchdogs, many of the credits were tied to low-quality, long-dormant projects that didn’t improve the environment. The result? Market prices fluctuated wildly, causing mild panic among traditional carbon-credit buyers and issuers. 

Even though these projects are still ongoing, it’s clear that crypto markets are unprepared to handle such a challenge. It’s not about DeFi or blockchain technology but how we think about using blockchain for social good. That’s the real issue here.

The first rule of blockchain club? Don’t talk about blockchain club

Focusing on telling everyone about blockchain might be holding us back. When people think of blockchains, they often envision Bitcoin, meme coins like Doge, massive crypto exchanges, data breaches, and projects that never come to fruition. DeFi has so much money flowing through it that it makes up an outsized portion of what many people see as blockchain activity, which also makes it a target for those with malicious intent. So, as with Kotlin, JavaScript, and other development platforms that we don’t shout about from the rooftops, staying quiet and using the most appropriate technology for the job reduces risk and, if the solution works well, keeps opinion positive.

Why is DeFi the most appropriate technology for carbon credits?

In reality, blockchains are a fundamental innovation that could support untold numbers of use cases – not all of which revolve around cryptocurrencies. So when we think about blockchains for social good, let’s do it in terms of applications that achieve specific goals and are tied directly to real-world problems. In other words, instead of blockchain initiatives just being lists of features and use cases, we should think about them in terms of outcomes achieved and how they can improve people’s lives and help solve critical issues.

That’s a critical distinction, especially when we look at climate change. Climate change is perhaps one of humanity’s most significant challenges. It’s not just a problem. It’s an existential threat that requires drastic action and innovation to address it in any meaningful way. DeFi could be part of that solution; as one Harvard economist said, there is a lot of value here if they can do it right. But they will have a hard time stopping overzealous buyers looking to make a quick buck by grabbing up low-value credits for cheap. A solution that can close off that opportunity might be the way to go.

In other words, it’s not a question of whether we can develop blockchain solutions for climate change – it’s a question of when, and of how we incentivize good behavior and limit abuse of the solution.
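
One practical reading of “limit the abuse” is a quality gate at the point where credits come on chain. The sketch below (thresholds and field names invented for illustration) rejects low-quality or long-dormant credits before they can be tokenized:

```python
from datetime import date

MIN_QUALITY_SCORE = 0.7    # illustrative rating threshold
MAX_VINTAGE_AGE_YEARS = 5  # reject credits from long-dormant projects

def accept_credit(quality_score: float, vintage_year: int) -> bool:
    """Gate a credit before tokenization on quality and freshness."""
    age = date.today().year - vintage_year
    return quality_score >= MIN_QUALITY_SCORE and age <= MAX_VINTAGE_AGE_YEARS

print(accept_credit(0.9, 2021))  # True: recent, well-rated project
print(accept_credit(0.4, 2008))  # False: low quality and long dormant
```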

It’s easy to see why multi-party computation (MPC) is so promising in this field. The protocol enables frictionless lending of digital assets, and it’s fast, offering the highest levels of security for individuals concerned about safety as well as for companies, banks, organizations, and governments.

DeFi and the carbon credit market – along with a focus on incentivizing the correct user behavior and the security that comes with MPC – could be the combination of technologies to realize the potential.

Do you have the answer?

Between October 25 and November 14, Partisia Blockchain is running a hackathon focusing on finding solutions to problems like this. With two tracks – DeFi Beyond Crypto and Data Economy – hackathon participants will benefit from mentorship and workshops, meet fellow hackers and bring their web3 ideas to life on the MPC privacy-preserving blockchain.

Register now, and you could not only find the solution to the issue of DeFi and carbon credits but also be among the top 30 teams invited to the on-site event in Paris. You may even walk away with a share of an incredible grant pool, with a whopping $125k and $100k going to the two best projects.

Critically, you might be instrumental in applying blockchain to real-world problems, finding the correct answers, and making life better for all.

Join the Partisia Blockchain hackathon – https://dataconomy.ru/2022/10/07/join-the-partisia-blockchain-hackathon/ – Fri, 07 Oct 2022 13:11:12 +0000

Partisia Blockchain and Data Natives are partnering to bring hackers worldwide a great hackathon experience happening online from Sep 28 to Nov 14. Code and network with like-minded Rust and Python developers, and benefit from free 1:1 mentoring and interesting workshops at this hackathon. As a bonus, between Nov 15-17, the 30 strongest teams will be invited to join, free of charge, an extra on-site event in Paris from Dec 1-3. To participate in the competition, developers must submit their code solution built on Partisia Blockchain by Nov 14.

The main focus verticals of the #PartiHack will be DeFi Beyond Crypto and Data Economy.

Partisia aspires to create blockchain-based distributed trust technologies that promote the growth of our society. This mission can only be accomplished with the help of a thriving ecosystem of developers and communities who can show the value of MPC on blockchain and how privacy-preserving principles must be a cornerstone of any decentralized infrastructure solution for the future digital economy.

The Partisia Blockchain underpins blockchain-based privacy-protecting and governance solutions that show the capability of multi-party computation (MPC) to address societal concerns related to economic growth, healthcare, and well-being while advancing the UN’s sustainability goals.

When will it take place?

● Online: Sep 29 – Nov 14

● Offline: December 1-3

On November 15, when both the offline event registration period and the online activity period come to an end, the final teams will be chosen. There is no cap on the number of teams that can participate in the hackathon; however, Partisia will choose the top 30 teams to present live on stage.

The second part of the event will be held in Paris between December 1-3. The top 30 teams will come together in a face-to-face event and awards ceremony.

How and where to apply?

In order to join the #PartiHack in Paris, teams need to fill out the registration form and submit their code on GitHub. It is not necessary to attend the Paris event to be selected as the winner, but you wouldn’t want to miss the chance to work with founders and senior architects, which will improve your prospects.

You can check the hackathon page here.

Prize pool

● $125K: Become the Parti Hack champion

● $100K: Introducing the “Ivan Damgard MPC Award”

● $75K: Parti Hack second prize

● $25K: Parti Hack third prize

● $25K: Partisia Community Award

Watch their announcement at Token2049 in Singapore here:

Top developers will code for a more sustainable world at the Metaverse Masters hackathon – https://dataconomy.ru/2022/09/23/global-devslam-metaverse-masters-hackathon/ – Fri, 23 Sep 2022 14:00:07 +0000

The world’s biggest coding and development networking event, Global DevSlam, invites the top 200 developers to build the metaverse with the Metaverse Masters hackathon. The Microsoft-sponsored hackathon will be hosted in collaboration with HACKMASTERS.

Reimagining a more sustainable world

The Metaverse Masters hackathon challenges developers to reinvent a more sustainable world for future generations using Microsoft Azure and metaverse technologies.

Building mixed reality experiences to reinvent MENA as a sustainable region!

The top 200 developers will try to figure out how to create new planet-friendly solutions using Azure and metaverse technologies to reinvent and enhance the following sectors: tourism, food, and smart cities by reducing carbon footprints and improving quality of life.

Challenge 1: Tourism

In this challenge, developers will use Azure and metaverse technologies to find ways to redefine the tourist experience of visiting Dubai by maximizing visitor enjoyment while minimizing carbon footprint.

Sign up as a team or individually here.

Challenge 2: Food

Developers will need to find a way to reduce food waste by creating a solution that provides consumers with contextual information about the nutrient content and environmental impact of the food they are about to purchase.

Sign up as a team or individually here.

Challenge 3: Virtual Spaces

Developers will reimagine the virtual spaces for novel forms of human interaction to reduce the carbon footprints of energy use, transportation, and education.

Sign up as a team or individually here.

Launch your idea with Microsoft’s support

Only three winners will emerge from the Metaverse Masters hackathon, but no developer will return home empty-handed. The overall winner will get mentorship opportunities with the Microsoft product team and exclusive Microsoft gear. Also, some surprise prizes for the winning team will be announced soon.

HACKMASTERS will invite the developers of the most creative concept to their London Studio, all expenses paid. Microsoft will reward the best HoloLens design with the opportunity to join Founders Hub, Microsoft’s new innovative platform offering support at every stage of idea development.

The prizes are not limited to these. All hackathon participants will receive free entry to Global DevSlam, complimentary Azure credits, exclusive access to Microsoft training materials, and one-to-one mentoring sessions.

More prizes will be announced as the hacking day approaches.

The timeline

Free registrations will be followed by confirmation as a hacker and team matching on October 2. The hackathon’s online ideation and preparation period will start on October 4. Developers will start ideation and building their team’s concept for the hack, interact with exclusive mentors, and submit a one-minute video of their team’s concept by October 10. The actual hacking will take place on October 11 with physical attendance at Global DevSlam at Dubai World Trade Center.

Metaverse Masters hackathon in 5 Questions

1. When and where does the Metaverse Masters hackathon take place?

You’ll hack online from October 4 through October 10, utilizing the tools provided and interacting with the instructors the entire time. On October 11, you’ll present your submitted solution and its supporting presentation to the mentors in person at the Dubai World Trade Center. You will be able to present your hack to the judges and other attendees if you advance to the final round of presentations.

2. Who can participate? 

All developers interested in Azure and Microsoft technologies are welcome to join Metaverse Masters. There is a place for you as long as you’re eager to learn about Microsoft technologies! There is no age limit, but to be considered a legitimate participant, you must be able to attend the actual hackathon on October 11.

3. Can I participate alone?

You can participate unassisted, but it can be challenging to devise a solution in time. The Metaverse Masters teams advise finding a team of 2-5 individuals with various skills. As an example, one person programs, one understands design, and one organizes the team’s work and displays the outcomes. Don’t worry if you don’t already have a team in mind when you sign up. Metaverse Masters will have a team-building opportunity. There won’t be anyone left behind.

4. Can I take my idea further after the hackathon?

Your concept and solution remain yours and your team’s, as Metaverse Masters does not own the intellectual property created during the hackathon. If you bring home a reward in one of the prize categories, you may have the opportunity to develop your solution further.

5. How can I benefit from mentors’ expertise?

The Metaverse Masters team will organize mentor conversations so that each team has the chance to talk about and receive feedback on their concepts. Schedule time with mentors if you want to take advantage of this when the appointments open up. The Metaverse Masters Discord server will be used to discuss mentoring sessions.

You can find all the other details about the Metaverse Masters hackathon and submit your application here. Only the first 50 participants will be able to guarantee their place in the hackathon. There are only 200 spaces available, so secure yours now!

The world’s best developers will change the future of finance at the FutureHack Hackathon – https://dataconomy.ru/2022/09/22/the-worlds-best-developers-will-change-the-future-of-finance-at-the-futurehack-hackathon/ – Thu, 22 Sep 2022 14:21:50 +0000

The Global DevSlam will host the FutureHack Hackathon, held by FutureLab in collaboration with Hackmasters, where the world’s top 200 developers will compete.

The FutureHack Hackathon welcomes developers to participate individually or as a team, but only the best of the best will get the opportunity, limited to 200 bright minds.

The FutureHack Hackathon is sponsored by FutureLab, Emirates NBD Bank’s strategic think tank and testing center dedicated to creating new solutions that can benefit its clients and respond to their always-evolving needs. The think tank focuses on creating seamless smart payments, simplifying business administration around banking for SMEs, and enabling customers to monetize their data. Developers will try to tackle these challenges with their innovative ideas.

Challenges to Tackle

Challenge 1: Seamless Smart Payments

“How to create a new and innovative payment experience using open banking APIs that streamlines payments for daily users?”

In this challenge, developers will create a solution that benefits users in their daily payment activities, such as innovative, easy-to-use digital payment solutions that replace cash and context-aware retail payments that replace the physical act of payment. Utilizing innovative channels like face, palm, senses, state of mind, voice, QR codes, IoT, cars, and others is also encouraged.

Sign up as a team or individually here.

Challenge 2: SME Simplified

“How can we develop a value-added service for SMEs that simplifies their business administration around banking?”

In this challenge, the developers’ task will be to create a solution that simplifies banking administration for SMEs. Solutions will need to provide SMEs with a better view of their business while they spend less time on banking administration, simplify the management of transactions for SMEs to increase international business trade, or improve lending criteria for SMEs and micro-SMEs that may not have an extensive credit history.

Sign up as a team or individually here.

Challenge 3: Monetize Your Data

“How can we create a new asset class—’Personal Data’—that can be monetized by all?”

Developers will race against time to create a new asset class — “Personal Data” — with a business model that enables monetization. In order to complete this challenge, developers must design a new asset class, smart contracts to authenticate individuals, goods, and personal data, as well as a monetization platform to make personal data an asset available to all citizens.
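
To ground the challenge, here is one possible shape of such an asset in code: a minimal, purely hypothetical sketch of a consent-gated personal data record, not an Emirates NBD specification:

```python
from dataclasses import dataclass

@dataclass
class PersonalDataAsset:
    """Toy 'personal data as an asset' record; all names are illustrative."""
    owner: str
    category: str    # e.g. "spending-patterns"
    price_usd: float
    consented: bool = False

    def grant_consent(self) -> None:
        self.consented = True  # the owner opts in to monetization

    def purchase(self, buyer: str) -> str:
        if not self.consented:
            raise PermissionError("owner has not consented to monetization")
        return f"{buyer} licensed {self.category} data from {self.owner}"

asset = PersonalDataAsset("user-42", "spending-patterns", 3.50)
asset.grant_consent()
print(asset.purchase("acme-bank"))  # hypothetical buyer
```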

Sign up as a team or individually here.

Incubation opportunities with Emirates NBD

Free registrations will be followed by selection and team matching on September 28. The hackathon’s online ideation and preparation period will start on October 5. Developers will start ideation and building their team’s concept for the hack, interact with exclusive mentors, and submit a one-minute video of their team’s concept by October 11. The actual hacking will occur on October 12 with physical attendance at Dubai World Trade Center.

Winners will begin incubation opportunities with Emirates NBD after the event and receive details about an exclusive trip to London with Hackmasters. Also, all participants will have the opportunity to meet recruiters from Emirates NBD and explore current career development opportunities and vacancies.

Great prizes for winners and participants

No bright mind will return empty-handed from FutureHack. The team that comes up with the best idea and MVP will get $10,000 worth of prizes and goodies, along with an opportunity to finesse their solution with Emirates NBD subject matter experts, as well as a fast track to DevSlam Talent – an opportunity to present to potential hirers and be considered for full-time positions.

The creators of the most impactful idea will get an Oculus VR Set and goodies, plus $5000 worth of prizes and a fast track to DevSlam Talent. The winners of the best technical design will also be given a fast track to the DevSlam Talent competition, as well as prizes totaling $3,000 and an Oculus VR set. Also, Hackmasters will invite the team that created the most creative concept to their London Studios, all expenses paid.

All hackathon participants will receive free entry to Global DevSlam on the 10th, 11th, and 13th; they will gain exclusive access to the region’s largest API Souq, 1-on-1 meeting chances with inspiring mentors, and the opportunity to meet recruiters from Emirates NBD and explore current career development paths.

FutureHack Hackathon in 5 Questions

Here are the answers to some of the questions you might have about the FutureHack Hackathon:

1. When and where does FutureHack take place?

You’ll hack online from October 5 through October 11, utilizing the tools provided and interacting with the instructors the entire time. On October 12, you’ll either present your submitted solution and its supporting presentation to the mentors virtually via the FutureHack Discord server or in person at the Dubai World Trade Center. You will be able to present your hack to the judges and other attendees if you advance to the final round of presentations.

2. Who can participate? 

Everyone is welcome at FutureHack, regardless of their level of expertise in coding, design, ideas, or public speaking. Only passion is required! FutureHack has limited space but no age restriction, so sign up as soon as possible, because only the first 50 registrations will be assured a spot.

3. Can I participate alone?

You can participate unassisted, but it can be challenging to devise a solution in time. The FutureHack teams advise finding a team of 4-6 individuals with various skills. As an example, one person programs, one understands design, and one organizes the team’s work and displays the outcomes. Don’t worry if you don’t already have a team in mind when you sign up. FutureHack will have a team-building activity. There won’t be anyone left behind.

4. Can I take my idea further after the hackathon?

The development team that created the project during the hackathon is the rightful owner of its intellectual property. The owners and inventors of the tools and techniques that developers utilize are the rightful owners of those resources.

5. How can I benefit from mentors’ expertise?

The FutureHack team will organize mentor conversations so that each team has the chance to talk about and receive feedback on their concepts. Schedule time with mentors if you want to take advantage of this when the appointments open up. The FutureHack Discord server will be used to discuss mentoring sessions.

You can find all the details about the FutureHack Hackathon and submit your application here.

5 reasons why you should not miss the Global DevSlam – https://dataconomy.ru/2022/09/15/5-reasons-why-you-should-not-miss-the-global-devslam/ – Thu, 15 Sep 2022 09:37:05 +0000

The Global DevSlam is counting down the days to its grand launch. The world’s biggest coding and development networking event for learning, skills, and talent acquisition will be held for four days, from 10 to 13 October, at the Dubai World Trade Center to celebrate all things code!

The Global DevSlam will bring together an influential ecosystem of top software engineers, data scientists, coders, and developers from all over the world, hosting the most active and strongest community network of technical minds, skills, and talents. Professionals, creators of popular software, libraries, and frameworks, and the radical superstars who have left their mark on the coding world will take the stage at the Global DevSlam.

The first mega event in its region

The Global DevSlam will be the first event in the region where public and private businesses with ambitious transformation projects can interact, find, and hire top developers and coders worldwide, enabling the largest coding recruitment drive in the region.

The event’s four-day agenda is live and full of exciting conferences, training sessions, networking opportunities, and hackathons. The Global DevSlam will also host advanced Red Hat, Microsoft, Google, and InterSystems workshops. Participants will have the chance to experience the exhibitions of dozens of leading companies, such as Zoom, VMware, Autodesk, and many others.

Follow disruptive tech and trends shaping tomorrow

Global DevSlam’s power-packed conference agenda will include industry-leading conversations on Python, artificial intelligence, machine learning, blockchain, DevOps, Javatalks, metaverse, mobility, NFTs, gaming, quantum computing, cloud, Kubernetes, and many more disruptive technologies and trends shaping tomorrow.

The world’s top talents and teams will come together for DevSlam Hack Challenges, the largest hackathon in the region with over 3,000 participants, to create innovative solutions to address real-world problems with inspiring ideas.

Launched with the support of Coders HQ, a transformational project by the UAE government to help coders become strategic enablers of the UAE digital economy, Global DevSlam is expected to host over 15,000 visitors from more than 170 countries all over the world. The event will present great networking, discovery, hiring, and career opportunities.

5 reasons not to miss the Global DevSlam

There are countless reasons not to miss the Global DevSlam, the largest coding and developer community meetup in the Middle East and the world. The five most important of these are as follows:

1. The world’s biggest meetup for the developer community 

More than 15,000 participants from 170 countries, including prominent talents, decision-makers, and visionaries from the world of coding, will come together. The event presents great opportunities for networking, inspiration, hiring, and career. Conferences will take on the subjects trending in the coding world, including but not limited to artificial intelligence, machine learning, Blockchain, DevOps, Javatalks, metaverse, mobility, NFTs, gaming, quantum computing, cloud, Kubernetes, and others.

2. The world’s largest Python conference, PyCon MEA now in Dubai

The Global DevSlam makes a great breakthrough, bringing the world’s largest Python conference to the region. PyCon has hosted smash-hit Python community conferences in over 50 countries, and it is now making its first appearance in the region as Global DevSlam presents PyCon MEA in collaboration with the Python Software Foundation. PyCon MEA will present 80 speakers and more than 100 hours of interactive learning opportunities.

3. Exciting series of hackathons

The DevSlam_Hack Challenges will gather the most talented individuals and teams from around the globe to develop innovative solutions to thematic challenges. Over 800 programmers will have four days to address real-world problems with their creative solutions:

The Global DevSlam teams up with Microsoft and Hackmasters to gather the best of the best to hack and showcase futuristic innovations that define an internet for sustainability.

In a one-day hackathon, Emirates NBD challenges the best tech minds in the MENA area to develop creative solutions to three distinct challenges: payments, SME lending, and personal data economy.

A-nex Korea invites hackers to its annual Ko-World hackathon to test, evaluate, and expand their cryptography exploitation skills.

4. Certified workshops

The Global DevSlam will host two certified workshops to take your coding skills to the next level:

Le Wagon

Le Wagon is one of the world’s most renowned coding boot camps. It helps students reinvent their careers through technology by offering immersive web development and data science boot camps. Le Wagon will run the following workshops at the event:

Blockchain Council

The Blockchain Council offers a variety of certificates that are tailored to suit Web3 aficionados of diverse backgrounds. Blockchain Council will run the following workshops at the event:

5. Career opportunities from global organizations

Industry titans in the UAE are embarking on a major tech expansion mission, fueling an increase in the need for programmers. Take advantage of career-changing opportunities as the Global DevSlam is expected to host the largest hiring event for the coding world. 1000+ career opportunities from global organizations will be waiting to be seized by the right talents.

Get involved now!

Global DevSlam will be the first-of-its-kind event in the region, dedicated to the community of coding and development. You can find detailed information about the Global DevSlam and get your pass here.

Data Natives, Europe’s largest Data Science and AI conference, makes its big on-site comeback in Berlin – https://dataconomy.ru/2022/09/02/data-natives-europes-largest-data-science-and-ai-conference-makes-its-big-on-site-comeback-in-berlin/ – Fri, 02 Sep 2022 15:14:55 +0000

Dataconomy, Europe’s leading media and events platform for the data-driven generation, hosted the 8th edition of Data Natives (DN22), which was a resounding success, welcoming over 1,000 on-site visitors, with thousands more participating via social media. From August 31st to September 2nd, Europe’s largest tech and Artificial Intelligence conference showcased the newest data innovation across three days of panels, interactive talks, and demonstrations on five stages with over 200 speakers. The presence of 146 nationalities gave the event both academic credence and cosmopolitan flair, and many attendees were esteemed members of Data Natives’ world-beating 183,000-strong online community of data experts.

Data Natives, an intergalactic thought leaders’ experience

Amongst the big names and corporate innovators and leaders present at Data Natives 2022 were:

Dee Wood (Head at Twitter Next Lab, EMEA); Mina Saidze (Lead Data, Axel Springer, Founder, Inclusive Tech); Mike Butcher (Editor at large at TechCrunch); Tina Kluewer (Director, AI Berlin Centre of Artificial Intelligence); Franziska Heine (Deputy Managing Director, Wikimedia Germany); Carlos Ahumada (Data for Good at Meta); Ait Si Abbou (Director Client Engineering DACH at IBM); Kenza Nicole Büttner (CEO, Merantix Labs); Katherine Townsend (Director for Policy at World Wide Web Foundation, Executive Director at Open Data Collaboratives); Lubomila Jordanova (CEO & Co-Founder, Plan A, & Co-Founder at Greentech Alliance); Julien Carbonnell (Data Scientist); Will Hurley aka whurley (CEO of Strangeworks); Johannes Steger (MO at Tagesspiegel); Iliana Portugues Peters (Futurist at Siemens Energy); Clara Rodriguez Fernandez (Deeptech Reporter at Sifted).

In addition, Dr. Ralph Kleindiek, Chief Digital Officer (CDO) of Berlin, delivered a keynote and took the time to meet with many young entrepreneurs from startups in all sectors. 

The 8th edition of Data Natives was greatly enhanced by its post-pandemic return to form as an entirely in-person event.

The conference’s theme was naturally future-forward; “Guide to the Galaxy” not only encapsulated the stratospheric tech advancements anticipated in a world so altered by COVID-19 but also shone a necessary spotlight upon up-and-coming B2C developments.

Reflecting Data Natives’ commitment to accessible innovation, Blockchain and Web3 were highlighted as primary topics of discussion throughout the three-day event: a Web3 hackathon enabled visitors to enter the Web3 space with ease, and the final day of the conference was devoted to demystifying the often intimidating world of the Metaverse, Blockchain, decentralized finance, and NFTs.

DN22 dared to look to the future for inspiration and also served as an opportunity to examine the human edge of tech. This momentum was reinforced by the sense of community established throughout the conference, with over 20 satellite events taking place during the event itself and a mentorship program pairing tomorrow’s pioneers with current big names in tech. The powerful lineup of female speakers included tech icons and influencers in the Women in Tech community, like Mina Saidze, founder and CEO of Inclusive Tech, Europe’s first consulting organization for diversity in tech.

On August 31st, the conference entered its first day in typically pioneering fashion, concentrating this year on the increasing prevalence of Artificial Intelligence in society. With many distinguished faces from the world of AI speaking on the practical application of human-taught tech, DN22 offered curious, if sobering, insight into the future of Artificial Intelligence from the standpoint of climate change, sustainability, and the use of AI to avoid human errors in the recruitment process.

Day 2 focused primarily on data programming, offering tech know-how and innovative instruction on how to develop Machine Learning responsibly. A pioneering analysis of AI’s place in the future of international banking, healthcare, and academic research served as a suitably space-age backdrop to the practical data application demonstrations given by top names in tech throughout the day. 

The conference’s final day truly delivered on its promise to provide visitors with a revolutionary “Guide to the Galaxy”. Exploring the far-flung corners of Planet Tech, DN22 presented an agenda capitalizing on the latest developments in Web3, Blockchain, NFTs, and cryptocurrency, with attendees enjoying a cutting-edge forum of participation and discussion around often inaccessible virtual spaces like the Metaverse. Paula Schwarz’s NFT4refugees art project showcased how the intersection of art and tech can impact migrant crisis relief projects.

“We at Dataconomy and Data Natives are happy that DN22 was such a staggering success; it’s great to know that Europe’s leading conference for tech and AI remains THE calendar event for tech experts and data innovators worldwide. After hosting the event digitally for the past two years, we guided our community on an interstellar on-site journey to explore the galaxy from the vantage point of data science. Data Natives is all about the interaction of human experience and data. So, beyond the latest trends in Data Science and Artificial Intelligence, DN 22 focused heavily on Sustainability, healthcare, social causes, and data ethics. We are an open and inclusive forum for inspired and sensitive discussion, collaboration, and conversation-led innovation”, said Elena Poughia, founder and CEO of Dataconomy and Data Natives.

Breaking news!

Data Natives is thrilled about the announced support of the Berlin Senate for 2023. Dr Ralph Kleindiek, CDO of Berlin, delivered a compelling keynote and took the time to meet with young startup entrepreneurs.

“Elena Poughia founded the Data Natives Conference seven years ago in Berlin, and she worked so hard to enlarge, support and interconnect the data-driven community. Her work and her story show us that even a country like Germany and a city like Berlin can be leading in the field of digitisation when the mindset is right and the spirit high.”

With great thanks to the commitment of Data Natives’ sponsors and community partners

Data Natives shares the success of this 8th edition with its major partners and would like to thank them for their commitment:

Diamond partners

Astar, Siemens Energy, Applied Data Incubator, Paretos, Quix

Gold partners

SAP, Big Bang Food, Metro Digital

Partners

Amplitude, Alteryx, Neti, Schwarz, SMP, PopSink, IBM, Merantix, AI Campus Berlin, Partner AI Berlin, HowtoHealth, OpenFabric, TheMamaAI, OXREF, Tilo

Community partners

Factory Berlin, HandsonData, Omdena, ICT Spring Sesamers, SigmaSquared, Uhlala Group, Vojvodina ICT, Cluster, Women Authors of Achievement, Women in Data, Helsinki Data Science Meetup, Developer Nation, LeWagon, Stte IASI AI, Contextual Solutions, Hackers&Founders, MindSpace, Nectios, Data Fest, Tbilisi, Tiloers Data Science, Retreat Female, Digitale Academy, AsiaBerlinSummit, KIEZ TechSpace, GirlsinTech, Trust in Soda, MindtheGap, Inclusive Tech, TOA LatBan, Female Founders, Tilores

About Data Natives

Founded in 2015, Data Natives is Europe’s leading events platform for the data-driven generation. Aiming to educate and connect their 183,000-strong community of data enthusiasts via interactive events, they spark innovation across industries and throughout their vast network of entrepreneurs, researchers, and students. They also share cutting-edge research and thought-provoking content through their media platform, Dataconomy. Created to facilitate collaboration between communities excited by the boom in data science, Data Natives inspires tech pioneers worldwide through their 50+ hubs in regional tech ecosystems around the globe. Into its eighth year, their flagship conference unites 5000+ data-driven professionals and multinational companies, bringing together the best minds in tech for up-to-the-minute discussion and demonstrations from the worlds of data, tech, AI, corporate ethics, diversity, and Sustainability. 

DN22 Conference: Don’t miss your chance to be part of Europe’s biggest data science and AI event – https://dataconomy.ru/2022/08/16/dn22-conference-dont-miss-your-chance-to-be-part-of-europes-biggest-data-science-and-ai-event/ – Tue, 16 Aug 2022 10:09:47 +0000

The Data Natives Conference 2022 (DN22 Conference) will bring together data and technology experts, creators, and thinkers to share their perspectives on future tech trends, developments, and moving stories. 

What’s in it?

DN22 Conference promises to be particularly lively, since a hackathon and a “matchathon” will take place alongside the conference. The hybrid conference will host offline and online events, activities, and networking sessions.

The DN Matchathon will match the winning solutions from the DN Hackathon with job seekers, startups, and companies, bringing together the best tools, people, recruiters, and corporates to explore collaboration opportunities. You can find detailed information about the DN Hackathon in our article here.

This year’s DN22 Conference will feature over 120 speakers on five stages, ranging from climate change activists to industry titans bringing data science to healthcare. Check out the extensive list of speakers, including visionaries such as Strangeworks CEO whurley, Sola Osinoiki from Prosus Naspers, what3words’ Clare Jones, and Jessica Graves from Sefleuria.

The agenda for the three-day event is packed with inspiring sessions on the future of society, data economy, data science, entrepreneurship, Blockchain, and Web 3. 

Participants will also have access to exclusive matchmaking events. Those interested in following the event online can move around in a virtual space and exchange ideas and business cards with data science companies, tech geeks, startups, hackers, recruiters, and job seekers from over 37 countries.

When and where?

The DN22 Conference will take place in Kühlhaus, Berlin from August 31 to September 2, with offline and online events.

How to buy tickets?

You can get your ticket here. The various tickets grant access to Data Natives online and offline activities, community membership, and Data Natives 2022 Conference registration.

Tech-savvy coders are expected at the DN22 Polkadot Hybrid Hackathon challenge! – https://dataconomy.ru/2022/08/16/dn22-polkadot-hybrid-hackathon/ – Tue, 16 Aug 2022 10:05:16 +0000

The DN22 Polkadot Hybrid Hackathon features three days of non-stop technical sessions, seminars, competitions, and coding. Around 1,000 participants can enjoy, for 60 straight hours, top-notch presentations, demos, community events, and its traditional 48-hour coding competition. This year, Astar Network and KILT Protocol have set some fierce challenges for those who want to win over $30,000 in prizes.

DN22 Polkadot Hybrid Hackathon is geared toward creative thinkers with a hacker mindset. Anyone with a technological background is encouraged to apply.

The curators of the DN22 Polkadot Hybrid Hackathon seek to choose the best, most competitive, and most enthusiastic programmers and creatives. This is based on what candidates fill out in their profiles (pitch and personal bios) as well as evidence of participation in past editions (if applicable). Whatever your level of experience, what matters is your potential and love for technology.

Data Natives 2022 will be the biggest AI conference of the year

Since 2015, Data Natives has been gathering data-driven professionals to create a link between communities all over the world mesmerized by the rise of data science, machine learning, and other data-infused technologies.

This year, DN22 will connect attendees both offline and online. The technological solutions provided by Data Natives will establish an environment conducive to conferences on-site and enable interaction among all attendees. You will be able to roam about in a virtual environment while exchanging business cards and ideas with data science firms, tech enthusiasts, startups, hackers, recruiters, and job seekers from more than 37 countries. Face-to-face activities will be held in Berlin, starting from August 26th.

Challenges

Below you can see the challenges that Astar Network and KILT Protocol have set up for the competitors:

  • PSP34 NFT Minting Site
  • On-chain Identity Scoring System for EVM
  • Shiden OG NFT using RMRK
  • DeFi Dashboard for Astar Network
  • Token Bridge Contract Between WASM and EVM
  • KILT Event Mobile Wallet

Activities

There will be a series of online and offline activities until September 2nd. You can register right away by entering this link and catch up on the upcoming online workshops:

  • August 15 – Mobile Event Wallet by KILT Protocol
  • August 16 – PSP22 & PSP34 by Astar Network
  • August 17 – Swanky CLI updates by Astar Networks
  • August 19 – Subsquid – WASM indexing by Astar Networks

Full schedule

You can also follow the full schedule below:

  • Online Hackathon Kick-off – 08.08.2022 11:00
  • Online Workshop with Dudley from KILT Protocol – 15.08.2022 19:00
  • Online Workshop: PSP22 & PSP34 by Astar Network – 16.08.2022 19:00
  • Online Workshop: Swanky CLI updates by Saša from Astar Networks – 17.08.2022 19:00
  • Online Workshop: Subsquid – WASM indexing by Pierre Ossun from Astar Networks – 19.08.2022 19:00
  • Meet your peers @Mindspace (Berlin) – 26.08.2022 18:30
  • User Centric Design Workshop @Mindspace (Berlin) – 27.08.2022 13:00
  • How to Validate Your Idea Workshop @Mindspace (Berlin) – 27.08.2022 17:00
  • Storytelling 101 / Pitch Deck 101 Workshop @Mindspace (Berlin) – 28.08.2022 13:00
  • Pitch Coaching (Performance) Workshop @Mindspace (Berlin) – 28.08.2022 17:00
  • Celebratory toast @Mindspace (Berlin) – 28.08.2022 18:00
  • Project Submission Deadline – 29.08.2022 13:00
  • Finalists Pre-Selection – 29.08.2022 14:00
  • Finalists Announcement – 31.08.2022 14:00
  • Demo Day from the DN22 Conference @Kühlhaus (Berlin) – 02.09.2022 15:55

Register here

Register for the DN22 Polkadot Hybrid Hackathon!

Everything you need to know about computer science major (2022 Edition) – https://dataconomy.ru/2022/07/28/is-computer-science-a-good-major/ – Thu, 28 Jul 2022 15:04:16 +0000

For a long time, computer science has been one of the first academic fields that come to mind for those who want to step into the world of technology, but the question still lingers: is computer science a good major?

Almost every sector now relies heavily on technology, necessitating working with trained individuals who can build and manage systems and software. The strong demand for computer science degrees and their rise in popularity among college students have both been influenced by this trend.

Is computer science a good major?

Computer science is not for everyone, but it is one of the best majors you can choose, and it offers greater employment prospects than most alternatives.

Numerous businesses are having trouble filling positions due to the intense demand for these highly qualified employees. Fewer than 160,000 computer science graduates were available to fill more than 567,755 computing job openings in 2021.

Computer science is one of the most sought-after college degrees due to the lack of experienced professionals in the sector and the outstanding return on investment.

From entry-level help desk agents to computer information research scientists, the field of computer science encompasses a wide range of professions. You might be qualified for more career options in this profession by furthering your education or earning one or more certifications.

Computational systems, computer engineering, and data science are all included in the field of computer science. Many computer science specialists work as software developers, IT system administrators, or security analysts for sophisticated digital networks.

First, we need to understand what computer science is to answer the question: “Is computer science a good major?”

What is computer science?

The study of computing devices and systems is known as computer science. In contrast to electrical and computer engineers, computer scientists focus primarily on software and software systems, including their theory, design, development, and application.

Artificial intelligence, computer systems and networks, security, database systems, human-computer interaction, vision and graphics, numerical analysis, programming languages, software engineering, bioinformatics, and computing theory are some of the main fields of study in computer science.

Even while programming is a requirement for studying computer science, it is simply one aspect of the subject. Computer scientists investigate the performance of computer hardware and software, designing and analyzing algorithms to solve problems.

Computer scientists face challenges from the abstract—determining which issues can be solved by computers and the complexity of the algorithms—to the concrete—creating programs that run smoothly on mobile devices, are user-friendly, and adhere to security protocols.
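
For a small taste of that kind of analysis, compare a linear scan with a binary search over the same sorted data; the example below is a generic illustration rather than coursework from any particular program:

```python
from bisect import bisect_left

def linear_search(items: list, target: int) -> int:
    """O(n): may check every element before finding the target."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items: list, target: int) -> int:
    """O(log n): halves the search space each step (input must be sorted)."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# Both find the same index; binary search needs ~20 comparisons
# instead of up to 1,000,000.
assert linear_search(data, 999_999) == binary_search(data, 999_999)
```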

Computer science skills

Problem-solving skills are essential when working in a profession undergoing constant change and adaptation. There can be instances when your code’s unit test fails, or you have a long sprint before achieving your next objective. Your adaptability and capacity for solving a pressing issue will be useful in these circumstances.

Critical thinking is usually one of the first things that come to mind when considering skills for professions with a computer science focus. Many roles in this industry involve a lot of screen time, data crunching, and code testing. To succeed in this sector, a person must comprehend all aspects of a problem and evaluate and update data mechanically.

However, having creativity will also enable you to embrace your position wholeheartedly. A great computer science job requires the desire to develop novel, fascinating methods of doing things once you’ve mastered the analytical side of the equation.

Is computer science hard?

Again, computer science is not for everyone, and it is considered “hard.” Because there are so many core principles regarding computer software, hardware, and theory to study, earning a computer science degree has been known to include a more demanding workload than you might face with other disciplines. You may need to practice a lot as part of that learning, usually in your free time.

Some students find it hard to learn a programming language and other computer science subjects.

You must be at ease using technology to succeed in computer science. A combination of patience, inventiveness, and problem-solving is also necessary for this subject.

The major of computer science is typically regarded as challenging and competitive. You should be prepared to put in a lot of time studying new ideas and applying them to your coding projects, and it can be difficult to keep up with your peers.

But in the end, you can succeed as long as you’re enthusiastic about the subject and prepared to put the necessary time and energy into your study.

Pros and cons of computer science

Working a job like this has advantages and disadvantages, just like everything else in the world.

Is computer science a good major? Continue reading to find out if that is the career for you.

Computer science pros

  • Wide range of jobs: Holders of computer science degrees have transferable abilities that enable them to explore various opportunities across numerous industries.
  • Growing sector: Computer science is one of the most practical degree selections because the number of jobs in the computer and math fields is increasing considerably more quickly than the average.
  • High-paying: Graduates from the field of computer science start with some of the best starting wages. A computer science major ought to be near the top of your choice if the salary is essential to you.
  • Easy employment: Companies are currently experiencing trouble filling computing positions because the number of computer science graduates entering the workforce is far less than the industry demands.

Computer science cons

  • Hard to learn: Since the theory is extensively emphasized in most computer science courses, you generally won’t enjoy a computer science program if you’re not interested in abstract thought. Students who are more knowledgeable about computer systems may want to consider a concentration like software engineering that emphasizes practical application.
  • Limited classrooms: It is getting harder to get into a computer science class as more undergraduates choose to study the subject, and institutions are struggling to meet the demand for these courses.
  • Heavy competition: Many students choose to major in computer science for its financial rewards, which makes classes and entry-level roles more competitive.
  • Mathematics (if you don’t like it): This degree might not be for you if you have trouble with math and problem-solving.
  • High cost: Many professional programmers and coders complete coding boot camps that cost a fraction of what a four-year institution would charge to acquire certifications. However, you’ll probably need at least a bachelor’s degree in computer science if you want to work for a leading tech firm like Apple, Google, or Amazon.

Computer science salary in 2022

According to the US Bureau of Labor Statistics (BLS), the median annual compensation for computer scientists is $131,490, with the lowest 10% of workers earning $74,210 and the highest 10% earning $208,000.

A computer scientist’s base income in the US ranges from $88,000 to $192,000, according to Glassdoor, with an average of $106,012.

Other computer science-related jobs and their average salaries are covered in the “Computer science jobs” section below.

Jobs for data scientists alone are expected to grow by 22% between 2020 and 2030, and the same is true for software engineers. Computer scientists will likely continue to have good pay prospects as employers seek to locate skilled staff.

Should I major in computer science if I have no experience?

You can pick a major if you’re excited about the opportunities it will open up for you once you graduate. Given the increased need for programmers, many wonder if studying the subject is okay even without prior knowledge.

You can major in computer science if you are interested in doing so. There is no need for prior experience; just be curious and open to learning. After all, in the current technological era, it is one of the most in-demand professions and can pay among the top salaries. However, since this type of employment frequently involves long, frustrating hours, be sure you are interested in it for more than just financial gain.

Attend a hackathon, a coding meetup, or have a conversation with someone who works in the area to see whether you are passionate about it. Ask any last-minute questions you may have, and if it’s a meetup or hackathon, observe how they resolve issues.

Is majoring in computer science worth it?

Students with a talent for math and science will probably find a computer science job very satisfying. Two compelling arguments favor majoring in computer science: the high earning potential and the expected industry expansion.

However, this degree might not be a good fit for you if you’re uninterested in computer technology or have trouble with math and technology.

You must consider various aspects and look beyond wage possibilities, just like with any college degree. Your decision-making process should be heavily affected by your strengths and interests.

Computer science jobs

Check some of the best jobs for computer science majors and anyone passionate about the field.

Software developer

Software developers make and develop websites, programs, and other applications that operate on computers or other devices.

Skills: A solid foundation in computer programming is strongly advised for these roles. Being detail-oriented and having good interpersonal skills, which enable collaborative work on projects, are also highly valued.

According to the U.S. Bureau of Labor Statistics, the average salary is about $105,000.

Web developer

Web developers focus on coding, designing, and developing a website’s layout.

Skills: JavaScript, HTML/CSS, and other programming languages are necessary for this position. Working on projects with other designers also requires collaboration skills and some expertise in graphic design.

According to the U.S. Bureau of Labor Statistics, the average salary is about $69,000.

UX designer

UX designers are responsible for developing meaningful, relevant experiences for the users of a specific platform or product. They are a big part of why you love your favorite apps and their user interfaces.

Skills: Proficiency with computer systems and programming will help greatly, as will the ability to communicate your design ideas to your team. Additionally, a strong sense of user empathy makes it easier to perceive the product through the eyes of potential customers and make adjustments.

According to the U.S. Bureau of Labor Statistics, the average salary is about $74,000.

Mobile app developer

Like web developers, mobile app developers specialize in designing, building, and testing mobile applications.

Skills: It helps to know the main platform languages, Java (for Android devices) and Objective-C (for iPhones), and to have strong analytical abilities and solid coding fundamentals.

According to the U.S. Bureau of Labor Statistics, the average salary is about $69,000.

IT project manager

The planning, budgeting, and general management of an organization’s IT objectives and efforts are within the purview of IT project managers.

Skills: Strong leadership abilities are essential for this position. As an IT project manager, your duties will include managing a team and directing, selecting, and making decisions in the best interests of everyone involved.

According to the U.S. Bureau of Labor Statistics, the average salary is about $124,000.

Information security analyst

Information security analysts are responsible for implementing safety measures and safeguarding a business’s computer networks.

Skills: Being careful and detail-oriented in your job is essential for success in this role because the entire organization’s security is at stake. Predicting outcomes and modifying security as necessary is also essential.

According to the U.S. Bureau of Labor Statistics, the average salary is about $98,000.

Systems architect

Systems architects comprehensively analyze a business to determine how to implement the optimal IT strategy for the objectives of their department. To produce the best experience possible, they define and design the system’s architecture.

Skills: Success as a systems architect requires the ability to critically assess a business's goals and determine, from all angles, the volume of resources it will need. It is also beneficial to be able to diagnose, evaluate, and translate customer needs.

According to the U.S. Bureau of Labor Statistics, the average salary is about $109,000.

To find the ideal career, you may also look at blockchain engineer skills, artificial intelligence careers, machine learning engineer, data architect, cloud computing, and data engineer jobs.

How do I know if computer science is for me?

Have you been considering a computer science degree for a while but don't know whether it's the right fit? Don't worry; answer these questions:

  • Do you love solving puzzles?
  • Did you study humanities or have a two-year technical degree?
  • Are you organized and detail-oriented?
  • Are you musically talented or have some other “master hobby?”
  • Do you love math and computers?

If you answered “yes” to most of these questions, computer science could suit you.

Conclusion

Majoring in computer science is a wise decision because it offers outstanding long-term career prospects, growth pathways, and high-paying entry-level opportunities.

If I had to give a degree in computer science a difficulty rating, I'd give it a 4 out of 5. The field is by no means simple. However, if you have the desire to learn and an interest in programming, the degree is a worthwhile investment in your future.

Students who earn a degree in computer science will have some of the best prospects of any graduates, and they can go on to seek lucrative, in-demand employment in industries like software engineering, artificial intelligence, data science, and more.

]]>
https://dataconomy.ru/2022/07/28/is-computer-science-a-good-major/feed/ 0
CSI: Data https://dataconomy.ru/2022/05/25/what-is-cyber-forensics-computer-forensics/ https://dataconomy.ru/2022/05/25/what-is-cyber-forensics-computer-forensics/#respond Wed, 25 May 2022 13:36:13 +0000 https://dataconomy.ru/?p=24393 Cyber forensics can be described as the science of crime scene investigation for data devices and services. It is a well-defined and strictly regulated (by law) applied science that collects data as proof of an unlawful activity that involves electronic devices and services, using established investigation standards to capture the culprit by presenting the evidence […]]]>

Cyber forensics can be described as the science of crime scene investigation for data devices and services. It is a well-defined and strictly regulated (by law) applied science that collects data as proof of an unlawful activity that involves electronic devices and services, using established investigation standards to capture the culprit by presenting the evidence to the court or the board of directors.

What is cyber forensics?

Cyber forensics, sometimes known as computer forensics, conducts a methodical inquiry and keeps a traceable chain of evidence to identify what occurred on a computing device and who was responsible for it.

Cyber forensics has grown in popularity over the last two decades because computer and portable media devices, such as smartphones, have been increasingly utilized in criminal behavior. As a result, these gadgets are frequently packed with critical evidence, including usernames, phone logs, location data, text messages, emails, images, and recordings. Cyber forensics experts can recover deleted logs such as files, calls, and messages; get audio records of phone conversations, and identify detailed system user actions to present them in a court of law or internal investigations.

The terms digital forensics and cyber forensics are sometimes used interchangeably with computer forensics.

The cyber forensics process simplified

The first stage of digital forensics is collecting digital data in a way that retains its integrity. Next, investigators evaluate the data or system to see if it was altered, how it was modified, and who made the changes. Computer forensics isn’t always used in the context of a crime. The forensic process is also utilized in data recovery procedures to recover data from a malfunctioning server, destroyed drive, reformatted OS, or other cause of system failure.

Why is cyber forensics important?

Integrating technology and forensics allows for more efficient investigations and precise findings. Cyber forensics is a specialized field that aids in collecting critical digital evidence to trace criminals.

Electronic equipment collects a large amount of data that the average person would overlook. For example, smart home devices capture data about the words we say, and cars record when we hit the brakes. This data is very valuable to cyber forensics as tangible proof, and it has even proven many people's innocence.

Cyber forensics is used to solve digital and real-world crimes, such as theft and murder. Businesses benefit from cyber forensics by tracking system breaches and identifying the attackers.

How do businesses utilize cyber forensics?

Businesses employ cyber forensics to investigate a system or network breach, which might be used to identify and prosecute cyber attackers. In the event of a system or network failure caused by natural or other disasters, businesses utilize digital forensic specialists and procedures to assist them with data recovery.

How does cyber forensics work?

The first stage of cyber forensics is determining what the evidence is, where it’s being kept, and how it is stored. The next step is to keep the data secure so that no one else can tamper with it.

After collecting the data, the next step is to evaluate it. A specialist recovers erased files and checks for evidence of a criminal's attempts to delete secret files. This procedure might require many stages before a conclusion is reached.

A record is then generated containing all of the recovered and available information, which aids in reconstructing and reviewing the crime scene. The last step involves presenting the analyzed data before a court or committee to resolve the case.

Cyber forensics techniques 

A forensics investigator makes a copy of a compromised device and examines it using a variety of approaches and unique forensic tools. For instance, they look for copies of deleted, encrypted, or damaged files in hidden directories and unallocated disk space. In preparation for legal proceedings that include discovery, depositions, or genuine litigation, any evidence discovered on the digital copy is meticulously recorded in a finding report and verified with the actual device.

A cyber forensics investigation might employ a variety of methods and specialist expertise. One of them is reverse steganography. Steganography is the covert embedding of information within a digital file, communication, or data stream. By analyzing a file's data hash, computer forensics specialists can detect such hidden content and undo it. If a cybercriminal hides critical information within an image or other digital file, it may appear the same before and after to the untrained eye, but the underlying hash or string of data will prove otherwise.
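
To make the hash comparison concrete, here is a minimal Python sketch (the file names are hypothetical evidence files, not from any real case) of how an examiner might check whether two visually identical images differ at the byte level:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Two images that look identical on screen...
original = sha256_of("vacation_photo_original.jpg")
suspect = sha256_of("vacation_photo_suspect.jpg")

# ...but differing digests mean the suspect copy's bytes were altered,
# for example by steganographically embedded data.
if original != suspect:
    print("Byte-level difference detected - flag file for deeper analysis")
```

Real investigations use far more sophisticated statistical and tool-assisted analysis, but this before-and-after comparison is the core of the idea.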

Stochastic forensics is a computer science technique that extracts and analyzes data without using digital artifacts. Digital processes result in unintended changes to data, which are known as artifacts. Clues related to a digital crime, such as modifications to file attributes during data theft, are included in the term artifact. Stochastic forensics is frequently used in data breach investigations to determine the perpetrator’s identity when it’s believed that the intruder is an insider.

The cross-drive analysis technique combines and cross-references data discovered on multiple computer drives to look for, evaluate, and archive data relevant to an inquiry. Events that arouse suspicion are compared with information from other drives to find similarities and provide context. Anomaly detection is another name for this process.
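
As a toy illustration of the cross-referencing step – the drives and artifacts below are entirely made up – identifiers extracted from several drives can be tallied to surface the ones that appear in more than one place:

```python
from collections import Counter

# Hypothetical artifacts (e.g., email addresses) extracted from each drive.
drive_artifacts = {
    "drive_A": {"alice@example.com", "bob@example.com"},
    "drive_B": {"bob@example.com", "carol@example.com"},
    "drive_C": {"bob@example.com", "dave@example.com"},
}

# Count how many drives each identifier appears on.
counts = Counter(
    artifact for artifacts in drive_artifacts.values() for artifact in artifacts
)

# Identifiers found on multiple drives are the anomalies worth investigating.
for artifact, n in counts.most_common():
    if n > 1:
        print(f"{artifact} appears on {n} drives")
```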

The live analysis approach examines a computer while operating using system tools on the machine. The examination looks at volatile data, usually kept in cache or RAM. To maintain the integrity of a chain of evidence, many instruments for extracting volatile data need the computer to be sent to a forensics lab.

When a file is deleted from a computer system, its information remains in certain areas of the machine’s memory. The deleted file recovery technique involves searching for fragments of files that were partially erased in one location but still leave traces elsewhere on the system. This is also known as file carving or data carving.
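
The signature-scanning idea behind file carving can be sketched in a few lines of Python. This is a simplified illustration rather than a production tool – it assumes a raw image file at a hypothetical path and relies on the fact that JPEG data begins with the bytes FF D8 FF and ends with FF D9:

```python
JPEG_START = b"\xff\xd8\xff"
JPEG_END = b"\xff\xd9"

def carve_jpegs(image_path: str):
    """Yield (offset, bytes) for JPEG-like runs found in a raw disk image."""
    with open(image_path, "rb") as f:
        data = f.read()  # fine for a small demo image; real tools stream
    pos = 0
    while True:
        start = data.find(JPEG_START, pos)
        if start == -1:
            return
        end = data.find(JPEG_END, start)
        if end == -1:
            return
        yield start, data[start : end + 2]  # include the end marker
        pos = end + 2

# Write each recovered fragment out, named by its offset in the image.
for offset, blob in carve_jpegs("disk.img"):  # "disk.img" is hypothetical
    with open(f"carved_{offset:#x}.jpg", "wb") as out:
        out.write(blob)
```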

Types of cyber forensics

Cyber forensics investigates IT infrastructures, devices, and software to find the clues and evidence it seeks. Using network forensics, investigators monitor and evaluate the criminal’s network traffic. Network intrusion detection systems and other automated tools are used here. In email forensics, experts examine the criminal’s emails and recover deleted email threads, allowing them to extract critical information regarding the case.

Hacking-related offenses are the focus of malware forensics. Here, a forensic expert examines the malware, such as trojans, to figure out who was behind the attack. Memory forensics is the practice of analyzing data stored in memory (such as cache and RAM) and extracting information from it.

Mobile forensics focuses on mobile devices, examining and analyzing data from smartphones, tablets, and GPS units. Disk forensics examines in detail the data recovered from hard drives and cloud platforms, extracting data from storage media by searching changed, active, or deleted files.

Cyber forensics prevents trade secret theft

Cyber forensics has been used as evidence by law enforcement agencies and in criminal and civil law since the 1980s. But lately, it has resolved some notable trade secret theft cases.

Apple's autonomous car division saw the resignation of a software engineer named Xiaolang Zhang, who said he would be returning to China to look after his ailing mother. He informed his superiors he intended to work for an electric-vehicle manufacturer in China, which aroused suspicion. According to an FBI statement, Apple's security staff reviewed Zhang's activity on the company network and discovered he had taken trade secrets from internal company databases to which he had access in the weeks before his resignation. He was indicted in 2018.

In another case, cyber forensics helped prove a man's innocence. Anthony Scott Levandowski, formerly an executive at both Uber and Google, was indicted in 2019 on 33 counts of trade secret theft. From 2009 to 2016, he worked on Google's self-driving car project, where he downloaded thousands of files from a password-protected corporate server. After leaving Google, he founded Otto, a self-driving truck startup, which Uber bought in 2016.

Levandowski was arrested in late 2017 and charged with theft of trade secrets as part of the FBI's widening investigation into Uber. He was indicted by a federal grand jury on October 28, 2018, on one count of trade secret theft and one count of conspiracy to commit fraud. Levandowski was sentenced to 18 months in prison and $851,499 in fines and restitution. However, after a cyber forensics investigation, Levandowski was proven innocent and received a presidential pardon in 2021.

Another famous case that cyber forensics solved was investigating a death, not a trade secret theft. Metadata and medical data from Michael Jackson’s doctor’s iPhone showed that Conrad Murray had given lethal dosages of drugs to the King of Pop, who died in 2009.

]]>
https://dataconomy.ru/2022/05/25/what-is-cyber-forensics-computer-forensics/feed/ 0
Announcing the winner of the Applied Data Hackathon https://dataconomy.ru/2022/03/17/announcing-winner-applied-data-hackathon/ https://dataconomy.ru/2022/03/17/announcing-winner-applied-data-hackathon/#respond Thu, 17 Mar 2022 14:58:25 +0000 https://dataconomy.ru/?p=22716 From February 28 to March 7, 2022, the Applied Data Hackathon welcomed data scientists with entrepreneurial spirits from multiple backgrounds. Participants tackled three challenges or brought their own, intending to solve some of the most pressing real-world problems we face today. The satisfaction of coming up with original and inspiring solutions was incentive enough, but […]]]>

From February 28 to March 7, 2022, the Applied Data Hackathon welcomed data scientists with entrepreneurial spirits from multiple backgrounds. Participants tackled three challenges or brought their own, intending to solve some of the most pressing real-world problems we face today.

The satisfaction of coming up with original and inspiring solutions was incentive enough, but to add to the available rewards, the winner and runners-up had the chance to share over €120,000 in prizes and a place at the Applied Data Incubator in Berlin.

Challengers formed 17 teams, creating their solutions throughout the weekend, culminating in a Pitch Day event hosted online and at The Drivery Berlin on March 7.

An all-star judging panel

Joining the event both virtually and in-person, the judges brought a huge amount of experience and knowledge to the table. Selecting the winners were:

  • Carla Penedo of Celfocus
  • Michael Durst of ITONICS GmbH
  • Claudia Pohlink of Deutsche Bahn
  • Michael Leyendecker of VITRONIC Machine Vision
  • Peter Ummenhofer of GO Consulting GmbH 
  • Thomas Brüse of QuickMove GmbH
  • Simon Mayer of University of St. Gallen
  • Timon Rupp of The Drivery Berlin
  • Judith Wiesinger, DeepTech Entrepreneur
  • Norbert Herrmann of Berliner Senatsverwaltung für Wirtschaft, Energie und Betriebe 
  • Maren Lesche of Applied Data Incubator and Startup Colors

“We already have an incubator in the healthcare field, and I’ve seen the impact that it creates,” Lesche, Founder at Applied Data Incubator said. “There is so much unstructured data around, and we produce, produce, produce, and we don’t even know what to do with it. So I hope that we can empower entrepreneurs and teams to use the available data. This is the reason why I wanted to do it; these hundred hackers that registered for the hackathon can help to shape the future.”

Pitch it to win it

Of the 17 teams that entered, nine presented their solutions during Pitch Day, intending to win those coveted places at the Applied Data Incubator.

The first pitch tackled a problem that will become significant as more electric trucks take to the roads.

Hyperfleet supports logistics with a data-driven multi-variant decision model for order taking and fleet route optimization, helping organizations make decisions that improve their total cost of ownership and the environment.

Three projects tackled an expensive problem. AI Anomaly Detector, Archimedes, and MoveQuickly each created an intelligent anomaly detector for industrial applications, assisting with the predictive maintenance of costly and critical machinery to ensure it stays up and running.

Panos.AI is a digital advisor that helps companies identify, manage and scale their process automation initiatives more successfully, powered by data-driven, self-learning algorithms.

Hyperspace analyzes scientific papers and news articles to extract insights about emerging technology milestones and breakthroughs.

Composite Vision is an automated system for detecting particular types of defects in data acquired by non-destructive testing, such as ultrasound, x-ray, and more.

ClearCO2 maps the cause and effect of carbon emissions in food production and logistics to reverse climate change.

Kapsola empowers health tech companies to label data for use in their AI applications, providing them with services like image classification, object detection, semantic segmentation, and more.

And the winner is?

After a great deal of discussion, the judges selected their winners, with the results appearing for all participants and interested attendees on the Applied Data Hackathon portal, powered by Taikai.

Hyperfleet, Composite Vision, Panos.AI, ClearCO2, Kapsola, Hyperspace, and AI Anomaly Detector all won the opportunity to go through the eligibility criteria process and join Applied Data Incubator either in April or October 2022.

And the winner of the €500 Conference Voucher is MoveQuickly, with Archimedes taking home four hours of special coaching, worth €400.

It was a challenging but fantastic and inspiring week. And it was wonderful to see so many participants, both in-person in Berlin and online.

Congratulations to all the hackathon participants and the partners, mentors, judges, and organizing team for making it all possible. 

]]>
https://dataconomy.ru/2022/03/17/announcing-winner-applied-data-hackathon/feed/ 0
Hackathon success stories are rare, so the Applied Data Hackathon is doing things differently https://dataconomy.ru/2022/02/24/hackathon-success-applied-data-hackathon/ https://dataconomy.ru/2022/02/24/hackathon-success-applied-data-hackathon/#respond Thu, 24 Feb 2022 10:24:44 +0000 https://dataconomy.ru/?p=22598 What connects Zapier, GroupMe, CloudMine, Zaarly, LaunchRock, and Foodspotting? These successful startups were created during weekend hackathon events and, combined, have gone on to raise $51.4m in funding. However, when you search for “hackathon success stories,” you’ll see that these examples are somewhat rare. Partly that’s because the teams involved don’t stay together after the […]]]>

What connects Zapier, GroupMe, CloudMine, Zaarly, LaunchRock, and Foodspotting? These successful startups were created during weekend hackathon events and, combined, have gone on to raise $51.4m in funding.

However, when you search for “hackathon success stories,” you’ll see that these examples are somewhat rare. Partly that’s because the teams involved don’t stay together after the weekend (no matter how unique the ideas are or how well they worked together). Partly it’s because they don’t get the ongoing support necessary to take the concept further.

The Applied Data Hackathon solves those two issues and will run from February 28 to March 7. 

Included in the €120,000+ total prize package is a place at the Applied Data Incubator in Berlin for the winning team. Its six-month startup program supports first-time founders from Europe in launching sustainable data startups based on validated, industry-relevant business solutions.

So what are the challenges being set during the Applied Data Hackathon?

Predictive Maintenance

Powered by hackathon partner QuickMOVE GmbH – an expert in machinery and plant engineering, including innovative conveyor technologies – this challenge focuses on machine data and machine learning.

For example, in logistics and manufacturing, customers expect that conveyor systems have a regular availability rate of 99.7% during their productive hours. Current systems work on the basis that maintenance is called for in a crisis. Future maintenance must be predictable, optimized, and done outside of regular working hours. 

This challenge aims to develop predictive analysis tools to solve those existing and costly problems. QuickMOVE will support the challenge with actual customer data readings from machine controls.
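
As a rough sketch of what such a tool might look like – the sensor readings below are simulated stand-ins, not QuickMOVE's actual machine-control data – an off-the-shelf anomaly detector can flag readings that drift outside a conveyor's normal operating envelope:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated machine-control readings: motor current (A) and vibration (mm/s).
healthy = rng.normal(loc=[12.0, 1.5], scale=[0.5, 0.2], size=(1000, 2))
degraded = rng.normal(loc=[15.0, 3.5], scale=[0.5, 0.2], size=(10, 2))

# Fit the detector on historical readings from normal operation only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# predict() returns 1 for inliers and -1 for outliers; an outlier is a
# reading that may precede a breakdown, letting maintenance be scheduled
# outside productive hours instead of during a crisis.
print(detector.predict(np.vstack([healthy[:5], degraded[:5]])))
```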

“QuickMOVE regards access to data-driven analytics as very important for future technology development, maintenance, and life cycle management,” Thomas Brüse, Managing Partner at QuickMOVE, said. “Data is the new gold. Applying data-driven solutions will help significantly improve technology in the near future. Startups will generate dynamic fresh eye views on the available technologies and help improve these rapidly.”

Object Classification in Real Environments

The University of St.Gallen and IntellIoT provide this challenge in manufacturing and logistics.

The problem? How to reliably classify objects in real industry environments. Participants are provided with a data set of RGB and low-quality depth images of an industrial scene. While the background in these pictures is static, there is not enough control over the environment to standardize the background, lighting, and other artifacts that occur.

Intrepid hackathon participants will need to develop innovative solutions for such environments.
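
One hedged starting point, assuming the RGB and depth images are pixel-aligned (the input shapes and class count below are purely illustrative): stack the depth map as a fourth input channel and train a small convolutional classifier on the result.

```python
import torch
import torch.nn as nn

class RGBDClassifier(nn.Module):
    """Tiny CNN that consumes 4-channel RGB-D input."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3, H, W); depth: (N, 1, H, W), aligned to the RGB frame.
        x = torch.cat([rgb, depth], dim=1)
        return self.head(self.features(x).flatten(1))

model = RGBDClassifier()
rgb = torch.rand(2, 3, 64, 64)    # stand-in for the provided RGB images
depth = torch.rand(2, 1, 64, 64)  # stand-in for the low-quality depth maps
print(model(rgb, depth).shape)    # torch.Size([2, 5])
```

Because the depth channel is low quality and the lighting uncontrolled, a real solution would likely lean heavily on augmentation, which is exactly what this challenge is probing.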

Alerting for Emerging Topics

Hackathon partner ITONICS – a software platform that helps organizations around the globe identify emerging technologies, trends, and market potentials and translate them into powerful growth strategies – posed an intriguing challenge. 

When scouting for disruptive information and scanning the corporate environment, it is paramount to act promptly on recent developments. ITONICS has built data ingestion and enrichment pipelines that store documents such as news, patents, publications, and named entities, allowing its users to search and discover them. However, there is too much information to sift through manually to identify whether changes are happening. 

Hackathon participants will research, build, and validate a proof-of-concept prototype that uses the search fields a customer has defined and automatically alerts them if significant changes occur within those fields.
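
A minimal sketch of one way such an alert could work – the saved search and its daily counts are fabricated for illustration: track how many documents match a customer's search field each day, and fire when the latest volume deviates sharply from the recent baseline.

```python
from statistics import mean, stdev

def should_alert(daily_counts: list[int], threshold: float = 3.0) -> bool:
    """Alert when the latest day's document count is more than `threshold`
    standard deviations away from the mean of the preceding days."""
    *history, today = daily_counts
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Daily match counts for a hypothetical saved search, e.g. "solid-state batteries".
counts = [14, 12, 15, 13, 14, 16, 41]
if should_alert(counts):
    print("Significant change detected - notify the scouting team")
```

A production version would need per-field baselines, seasonality handling, and deduplication, but a simple volume monitor like this is a reasonable seed for a proof of concept.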

“Our vision at ITONICS is to empower everybody to innovate,” Moritz Kobler, Senior Product Manager, Cloud at ITONICS, said. “In line with this vision, we aim to offer solutions that will propel innovation management through data-driven processes. Aside from sharing our insights with the hackathon teams and supporting them in creating something innovative, the Applied Data Hackathon provides us with a fresh lens on how we might address authentic customer challenges.”

BYOC – Bring Your Own Challenge

Do you have another real-world issue that you’re burning to solve? Good news! You can bring your own to the table.

The Applied Data Hackathon is looking for data-driven solutions, including data analytics, embedded systems, digital twin applications, processing and analysis models, and AI technologies.

The start of a life-changing weekend

The hackathon, taking place both online and in person at The Drivery in Berlin – also the location of the winners' free co-working space – could be the first step for the next startup weekend success story.

“The Drivery is proud to be a partner of the Applied Data Hackathon,” Timon Rupp, Founder and CEO at Drivery, said. “We are a marketplace for innovation that provides the perfect infrastructure for the hackathon. With our onsite GPU farm and our coding area, and the Algorithm Farm, we contribute to the hackathon’s success.”

Budding hackathon challengers have formed teams and are ready to get started. 16 teams will be working throughout the weekend of March 5-6, 2022 to deliver solutions to both the big challenges our partners have set and those they brought to the table.

The hackathon culminates with Pitch Day on March 7, at 4 PM CET, and you are invited to witness their unique and inspiring solutions. Whether you need new ideas, want to discover the startup stars of tomorrow, or invest in the future, this event is for you!

Register for the Grand Finale and maybe you’ll be there when the next Zapier or GroupMe is created.

]]>
https://dataconomy.ru/2022/02/24/hackathon-success-applied-data-hackathon/feed/ 0
Data Natives 2022: Back in Berlin, in-person, and as big as ever! https://dataconomy.ru/2021/08/27/data-natives-2022-berlin/ https://dataconomy.ru/2021/08/27/data-natives-2022-berlin/#respond Fri, 27 Aug 2021 11:40:03 +0000 https://dataconomy.ru/?p=22255 Data Natives is back! Europe’s biggest data science and AI event will take place in Berlin, for real and in person, because online conferences are so 2020. Mark your diaries – we’ll be bringing you the best speakers, networking events, hackathons, open forums, and more between March 10 to 24, 2022. Get your tickets now! Data […]]]>

Data Natives is back! Europe’s biggest data science and AI event will take place in Berlin, for real and in person, because online conferences are so 2020. Mark your diaries – we’ll be bringing you the best speakers, networking events, hackathons, open forums, and more between March 10 to 24, 2022. Get your tickets now!

Data Natives Unlimited 2020 was a huge success. We brought together 5,000 attendees, over 150 speakers, incredible partners, and fantastic volunteers. Delivered by a team that put their heart and soul into the event, we set out to create something unique: an online conference that felt every bit as good as the offline events of previous years.

As part of Data Natives Unlimited 2020, our DN Unlimited Hackathon brought over 500 hackers, 15 mentors, and the support of companies like Bayer, Flying Health, and Vision Health Pioneers together to create new and innovative solutions in social impact, data protection and cybersecurity, healthcare and data accessibility, and more.

And our Open Forums with the likes of Berlin Partner and Alibaba delivered unparalleled information and takeaways for all our attendees, inspiring the community to take action and innovate.

Finally, Data Natives x VentureBeat Transform saw two flagship AI events come together in July 2021. During the week-long VentureBeat event, hosted by our own Stewart Rogers, we delivered a half-day Data Natives segment that kicked off with a thought-provoking opening speech from our CEO and founder, Elena Poughia. We continued with remarkable insights from Shamala Hinrichsen, Toby Walsh, Kaoutar Chennaf, Thorsten Dittmar, Juan Carlos Medina Serrano, Bart de Witte, Caroline Lair, Taryn Andersen, and Whurley.

The Data Natives We Know and Love

But nothing beats the authentic, in-person Data Natives we know and love, so we've moved from our regular November date to March next year. The shift to 2022 allows us – with exemplary levels of safety for all – to return to an in-person conference as well as provide an outstanding online experience.

We’ll see you in Berlin once more for 14 days of dynamic, educational, and inspirational content!

“We can’t wait to bring you Data Natives 2022 – to see each other and hug each other again,” CEO and founder at Dataconomy and Data Natives, Elena Poughia, said. “Most importantly, in a post-pandemic world, we know the event has to be hybrid, and that’s why we’re going to have both in-person and virtual sessions. The reason we have moved it to March is to ensure we can provide this hybrid option. Data Natives has always been multi-faceted, so delivering the conference, hackathon, open forums, matchathon, and more – both online and offline – will allow us to create an environment where everyone can learn and connect.”

We start with the Data Natives Hackathon and Demo Day between March 10-13, 2022, bringing together the smartest in data, tech, entrepreneurship, activism, and policy-making to solve critical challenges.

Throughout March, we’ll be holding our weekly Open Forum sessions. A weekly portion of live debates with experts in data and tech, these online sessions are the perfect build-up to the big event. From online masterclasses to unconference-style gatherings, we will bring together global audiences to learn.

And we’ll crown the month off with the Data Natives conference and Matchathon, March 22-24, 2022. The Matchathon is a global matchmaking session with no borders – companies, startups, hackers, recruiters, and job seekers are all brought together to create the next big thing. And, as ever, the Data Natives conference will bring thinkers, innovators, and creators in data, AI, and emerging technology under one roof.

Get your Tickets Now!

More details will be announced soon, including our first speakers, and for 2022, we’ve got some exciting cards up our sleeve to reveal. You can buy your tickets for the online sessions right now, and soon you’ll be able to upgrade those tickets to the in-person, physical event in Berlin! Existing ticket holders don’t need to do anything – you already have access to the online segments of the event.

Want to reach our community of 106,000+ data enthusiasts at Data Natives 2022? Request our event brochure now for all the ways we can work together to deliver an incredible experience for you and our attendees.

]]>
https://dataconomy.ru/2021/08/27/data-natives-2022-berlin/feed/ 0
Calling All Coders: Bridging the Blockchain Talent Gap https://dataconomy.ru/2021/06/21/calling-coders-blockchain-talent-gap/ https://dataconomy.ru/2021/06/21/calling-coders-blockchain-talent-gap/#respond Mon, 21 Jun 2021 11:31:35 +0000 https://dataconomy.ru/?p=22090 Last week, Dataconomy’s Stewart Rogers served as master of ceremonies for a CogX Open Web Forum panel (curated by Fabric Ventures) I participated in along with Muneeb Ali (Stacks) and Josh Tobkin (Supra-Oracles). We were grappling with the question of how we inspire best-in-breed technical stars to join Open Web companies, particularly in the domain […]]]>

Last week, Dataconomy’s Stewart Rogers served as master of ceremonies for a CogX Open Web Forum panel (curated by Fabric Ventures) I participated in along with Muneeb Ali (Stacks) and Josh Tobkin (Supra-Oracles). We were grappling with the question of how we inspire best-in-breed technical stars to join Open Web companies, particularly in the domain of distributed ledger technology (DLT) and blockchain talent.

Though unemployment rates are high globally in the wake of the pandemic, labor shortages in areas such as blockchain and cryptocurrency remain endemic. Labor supply is constrained, resulting in shortages likely to persist for the rest of 2021 and well into 2022. Although crypto's $1.3+ trillion market cap has attracted a great deal of attention, it has perhaps not attracted as many of the top developers as the industry would like.

One CEO I know has all of the ingredients to move forward – key stakeholders aligned, the regulatory environment sorted out, initial customers and investors ready to go – but lacks the right CTO and engineering team to drive the project. Another CEO is just getting the project on its feet, with three of the world's largest financial institutions prepared to use the platform, which does not yet exist because it requires the right technical team to build the prototype. In both cases, they are grappling with a question that I have seen come up time and again with my thousands of online students in blockchain and fintech: how do you bridge the blockchain talent gap?

Blockchain talent essentials

We covered a good deal of ground in our fast-paced CogX discussion, more than can be summarized here. I can, however, extract for you three essential tips for how founders can attract the brightest minds to work on their blockchain projects:

  1. Having exciting problems for engineers to work on. Top developers are attracted to interesting, knotty problems like moths to a candle flame. Are you addressing a fascinating area of computer science or a notable social issue? Highlighting this can draw in the very best.
  2. Connecting with the employees. At the very earliest stages of a company's development, the personality of the founder is paramount. Investors, employees, and early adopters are making a bet on the founder, so heightening your awareness of this will let you hone your connection with people. They are joining you and buying into your vision.
  3. Inspiring people with narrative. You can accelerate your ability to get the best talent, raise money, attract users, sell to customers, and achieve your goals with your blockchain project if you find your essential, authentic story and learn how to communicate it with passion and conviction. Well-crafted narratives build an emotional connection with people, and whether or not they want to admit it, emotion drives decisions more than facts do. Find your story, learn how to tell it, and you will have a better chance of enticing and enlisting the top developers.

Expanding the pool

Bonus round: another way to get top talent for your blockchain project is to invest in developing it. Courses like Oxford Blockchain Strategy provide people with the foundations for applying DLT in an innovation context. Every great business starts with a great idea, grounded in the art of the possible and an understanding of the impossible (or how to make the impossible possible). The Oxford program and ones like it serve as a jumping-off point for launching a new project, on the one hand, and as a vehicle for establishing a common vocabulary and framework for action, on the other.

As you progress from strategy to build-out, you want the right technology applied to the right problem sets. An array of excellent boot camps has emerged to develop the more specific technical capabilities needed to build blockchain systems. A rigorously grounded team will get you there faster and more effectively. You will want to carefully evaluate which boot camps provide the right mix of tools and techniques, and you may even find them fertile recruiting grounds if you build relationships with the host organizations.

The bright future

Of course, these are still early days in the evolution of blockchain. I see even larger-scale problems being solved and even more exciting opportunities for developers in converging areas, where we bring technologies like artificial intelligence together with blockchain to solve critical problems in domains like digital identity and financial inclusion (helping over 3.5 billion people), health crisis data management (what happens when the next pandemic strikes), and supply chain.

]]>
https://dataconomy.ru/2021/06/21/calling-coders-blockchain-talent-gap/feed/ 0
How EWOR Created a Formula for Disruptive Innovation https://dataconomy.ru/2021/03/04/how-ewor-created-formula-disruptive-innovation/ https://dataconomy.ru/2021/03/04/how-ewor-created-formula-disruptive-innovation/#respond Thu, 04 Mar 2021 12:51:16 +0000 https://dataconomy.ru/?p=21775 On 1 March 2021, EWOR kicked off off another round of its prestigious EWOR Fellowship program.  “The concept,” its founder Daniel Dippold explains, “is uniquely designed to help corporations realize disruptive innovation and individuals to start ventures of global scale.” This March, an exciting mix of people, including former Miss India UK, who holds a […]]]>

On 1 March 2021, EWOR kicked off another round of its prestigious EWOR Fellowship program.

“The concept,” its founder Daniel Dippold explains, “is uniquely designed to help corporations realize disruptive innovation and individuals to start ventures of global scale.” This March, an exciting mix of people – including a former Miss India UK who holds a Summa Cum Laude degree in law, and a former Swiss math champion – join EWOR and a German real-estate fund with €3 billion in assets under management to create outstanding ventures. We were interested in finding out what makes this concept so promising and how it attracted so many exceptional individuals.

EWOR was founded to empower individuals all over the globe to build outstanding ventures and innovations. Its fellowship is a blend of venture building, education, and networking. Corporations are joined by a series of innovators with backgrounds in science, business, and computer science. EWOR is responsible for creating an ecosystem and educational approach in which all stakeholders thrive to build ventures of a global scale. Every year, an EWOR fellowship is awarded to approximately 10-30 individuals across Europe.

From our interview with its founder, we were amazed by two critical components of the program.

Machine Learning to Accelerate Talent Discovery

Firstly, machine learning played a role in this process. Daniel Dippold wrote a thesis titled ‘A Machine Learning Analysis of the Impact on Cognitive Dimensions on Success in Entrepreneurship,’ part of which was an algorithm that filtered non-successful entrepreneurs correctly in 97.1% of all cases. The thesis was awarded a distinction from Cambridge University. The founder explains that he could use a variety of factors that machine learning can turn into an ‘entrepreneurial potential map.’ 

He explains further that, as opposed to standard IQ tests, many of the factors identified by EWOR's algorithm are malleable. They help identify where an individual needs to develop and are by no means a fixed number, such as IQ, that cannot be altered over time. EWOR has built an entire concept around this notion of malleable entrepreneurial potential, incorporated in its platform ewor.io.

“The gist is,” Daniel explains, “that school teaches us only one important dimension of intelligence and neglects others that are essential for starting a successful business.” At the University of Cambridge, Daniel researched what are called “environments of unknown unknowns.” Such environments are so complex that it is impossible to factor in every component necessary for making a good decision. There are not only factors one knows are unknown but also factors that one isn’t even aware of. In such environments, things work differently. That is why EWOR has developed a new educational concept that discards concepts essential to ordinary education, such as having a curriculum. 

A Win-Win-Win Situation

Secondly, the EWOR concept is designed to create a win-win-win situation for EWOR fellows, EWOR corporations, and EWOR itself. Partnership companies, which offer their expertise, office space, and other resources to EWOR fellows, benefit from being exposed to innovations they could not possibly create in-house.

Having conceptualized both incubator and accelerator programs before, Daniel is convinced that neither concept is ideal for corporate innovation. Instead, an approach needs to be chosen that allows corporations to innovate around their core business while preserving the entrepreneurial spirit of founders. This is a common problem with incubators, where corporations exercise too much control – or are forced to do so for compliance reasons – which ultimately stymies entrepreneurs.

An Outstanding Network of Advisors

A look at the company's advisory board supports our confidence in the program. Personalities such as Alexander Grots, the former Managing Director of IDEO Europe – the world's largest innovation agency, which coined the well-known term 'design thinking' – helped co-design much of EWOR's education platform. Chris Coleridge, Professor at the University of Cambridge, founder of multiple accelerator programs, and founder of Europe's first Master's degree in entrepreneurship, helped design EWOR's learning map.

Individuals such as Daniel Marasch, former Executive Board at Lidl, and Alex Schwörer, Owner and former Executive Board of 6000-employee firm PERI, represent large corporations’ voices in EWOR’s programs. Finally, serial inventor and unicorn founder Mattias Bergstrom co-developed and challenged many of their assumptions around building global-scale ventures. 

The Partnership Company

The partnership company, Project Gruppe, is a Germany-based real estate fund and developer with over 25 years of track record in the industry. Project manages over €3 billion of assets and owns the entire value chain, from raising capital to developing and selling properties. The exclusive partnership between the fund manager and real estate developer allows Project Gruppe to operate efficiently and thus deliver a high financial return to its investors. Christian Grall, CEO of Project Vermittlungs GmbH, states that he is 'excited to see the EWOR fellows challenge their status quo.'

The Fellows

Lastly, we were impressed by the talent EWOR was able to attract with its program. This year, the following candidates have been awarded an EWOR fellowship:

Suhani Gandhi

Suhani Gandhi holds a Summa Cum Laude Bachelor’s degree in Law and is a scholarship awardee at Imperial College Business School, where she is currently pursuing a Master’s Degree in Strategic Marketing. Suhani founded the first-ever online Hindi school, Holiday Hindi, facilitating language learning through Indian arts and culture. She is an award-winning actress who has appeared in projects on Netflix and Amazon Prime. Suhani was titled Miss India UK in 2014 and nominated by Times Now as NRI of the Year for her contribution to the arts sector. 

Yannick Müller

Yannick Müller is pursuing a Bachelor's degree in Computer Science at ETH Zurich. He has participated in over 10 hackathons and won several of them. Yannick developed AI solutions to classify web attacks and conceptualized a solar house with a rooftop that lets solar beams enter during winter. He is a Swiss math champion and is currently preparing for an Ironman.

Ayoub Boukir

Ayoub Boukir has a Summa Cum Laude Bachelor's degree in Business Administration and is enrolled in a Master's degree in Finance and Accounting at the University of St. Gallen. Ayoub founded a project that used drones to map agricultural land. He is the VP of Finance and Fundraising at World Merit Morocco and has consulted for companies and funds investing in Africa.

Clarissa Heidenreich

Clarissa Heidenreich built up Afya Nutrition, a social startup that tackles malnutrition in African countries with the microalgae Spirulina. She has a degree in Business Law and is pursuing a second degree in Corporate Law. Clarissa received multiple recognitions, such as being picked as a McKinsey Firsthand, an EY Future Female Talent, and more. In 2018, she won the National Championship with Enactus Mannheim. Clarissa received several scholarships and awards for academic excellence and has gathered practical business expertise as CEO of Begapinol Dr. Schmidt GmbH. 

William McSweeney

William McSweeney holds degrees in History and Human Rights. In his work at a LawTech company, he has automated a data protection e-learning solution, which netted £200k within 18 months. William led a research project and developed a solution to reduce barriers to access digitized legal advice. His work focuses on bridging the gap between technology and law and increasing access to justice through innovation.

Gatsby Fitzgerald

Gatsby Fitzgerald holds a Bachelor's degree in Medicinal Chemistry from Imperial College London and is now enrolled in a Joint Honours Management program at Imperial College Business School. Gatsby developed a technology to recycle Li-ion batteries during a 12-week competition and frequently competes in marathons and Ironman races.

Sebastian Rappen

Sebastian Rappen focused on Cultural Studies of the Middle East and Philosophy at Eberhard Karls University Tübingen, studied design thinking at Stanford in cooperation with the SUGAR network, and graduated with an interdisciplinary Master in Organisational Design from the University of St.Gallen. Sebastian is the former Content Lead Design Thinking for EY and is a Lecturer for Digital Company Culture at BVS St.Gallen. He is a consultant for human-centered design in the health care and finance sector.

Jivan Navani

Jivan Navani is currently pursuing a Master’s in Entrepreneurship at the University of Cambridge and holds a Bachelor’s in Management from the London School of Economics. At the LSE, he served on the court of governors and was the President of the Investment Society. Jivan has founded numerous successful businesses and has been part of the 2019 cohort of the Y Combinator Startup School. Additionally, he is the former Head of Operations at London Blockchain Labs and former Head of Venture Investment at European Student Startups.

Melanie Preen

Melanie Preen is a first-year student at The University of Manchester, where she is enrolled in the Information Technology Management honors program. She co-founded a 3D-printing prosthetics club and was recognized at the International School Awards 2021 for building a prosthetic hand for a girl born with amniotic band syndrome. She was the demo day innovation winner of the LaunchX Entrepreneurship Program in South East Asia, where her team ideated a gamified, remote-controlled biomimicry fish that consumed plastic. She has fundraised successfully for her Cambodian school via VR and has curated and co-hosted the first public TEDx event in Phuket to give a voice to the community.

Aldiyar Semedyarov

Aldiyar Semedyarov is pursuing a Master’s degree in Electrical Engineering and Information Technologies from ETH Zurich. He co-founded the startup Qoqys to address the waste management challenges in Kazakhstan. He won the ABC Incubation program by Nazarbayev University Research and Innovation System and the Fostering Research and Innovation Potential program by Nazarbayev University Young Researchers Alliance. Aldiyar has received a certificate of appreciation from the mayor of Aktau city for the city’s socio-economic development and is a gold and silver medal winner of Kazakhstan’s national physics olympiads. He co-authored two publications that were presented at esteemed international scientific conferences.

]]>
https://dataconomy.ru/2021/03/04/how-ewor-created-formula-disruptive-innovation/feed/ 0
Data-driven journalism, AI ethics, deep fakes, and more – here’s how DN Unlimited ended the year with a bang https://dataconomy.ru/2020/12/09/data-driven-journalism-ai-ethics-deep-fakes/ https://dataconomy.ru/2020/12/09/data-driven-journalism-ai-ethics-deep-fakes/#respond Wed, 09 Dec 2020 11:30:00 +0000 https://dataconomy.ru/?p=21591 Data Natives Unlimited – Europe’s biggest data science and AI event – was forced out of its regular, “so Berlin” home this year thanks to the Covid-19 pandemic. What followed was an endeavor that exceeded our expectations. With 5,000 attendees, over 150 speakers, incredible partners, fantastic volunteers, and a team that put their heart and […]]]>

Data Natives Unlimited – Europe’s biggest data science and AI event – was forced out of its regular, “so Berlin” home this year thanks to the Covid-19 pandemic. What followed was an endeavor that exceeded our expectations.

With 5,000 attendees, over 150 speakers, incredible partners, fantastic volunteers, and a team that put their heart and soul into the event, we set out to create something unique: an online conference that felt every bit as good as the offline events of last year.

“2020 has been the year where an unprecedented event changed our lives, and we had to change the way we work, live, and communicate,” CEO and founder at Dataconomy and Data Natives, Elena Poughia, said. “Digital transformation happened faster and accelerated more than ever expected, and similarly Data Natives Unlimited was the experience that we put together as a reaction to the situation.”

And the journey didn’t just start with the conference.

“It was more than an event,” Poughia said. “It was an experience that started from September, with a hackathon, which we found was the best way to find quick, innovative solutions to societal problems and challenges, and then it continued with discussion roundtables tackling important topics before coming to an end with the DN Unlimited conference.” 

And while we’re a little biased, we think we achieved exactly what we set out to do. Over three days, we brought you keynotes, discussions, networking, community features, and exclusive access that came as close to an in-person event as is possible across a screen. 

Opening the event on day one, Poughia set the scene with a talk that sparked some interesting discussions. Looking at a year we lived online, for the most part, Poughia’s keynote turned to the privacy, security, and transparency of our data, a commodity that is now more valuable than ever.

“The world produced an immense amount of data over the past months. It is our responsibility to handle this data with care – staying both private and transparent, sharing our data while protecting it, and always keeping in mind that impact is the new money,” Poughia said.

Chris Wiggins of the New York Times then explained the ins and outs of data-driven journalism and how the world’s oldest newspaper became the forerunner in the media industry by developing a data strategy for its core activities.

Speaking of news, two themes emerged across the first two days of DN Unlimited. Fake news and deep fakes; both of great concern to many.

Juan Carlos Medina Serrano presented his research on TikTok as the new gatekeeper of political information and social media's power to increase societal polarization. Weifeng Zhong fascinated us with NLP applications that predict major political moves by analyzing propaganda content. And we heard from the likes of Thorsten Dittmar, Kathrin Steinbichler, and Alexandra Garatzogianni on deep fakes and fake news too.

“Based on the lessons learned in the last five years of tackling fake news, we can design the proper policy response to deepfakes, but we need to spot the risks early on,” said Areeq Chowdhury of WebRoots Democracy.

Of course, the battle for the presidency in the United States made its way into the conversation, especially given it was happening at the same time as the conference.

“Take the US election; 98% percent of disinformation didn’t require any AI at all to create a very compelling conspiracy. We need to learn to think critically about the media messages we are receiving,” added Kathryn Harrison of FixFake.

There is some hope for the future, however, as we start to see more data sovereignty solutions come to fruition worldwide.

“We are entering the third phase of internet development, where citizens create control over their data. This year is all about personal privacy and internet reliability,” said John Graham-Cumming.

In addition to the discussions around how our data is used, we dove deep into another topic that keeps AI and machine learning advocates awake at night; ethics and data bias.

The opening keynote from Mia Shah-Dand, CEO at Lighthouse3 and Founder at Women in AI Ethics, talked about the crisis of ethics and diversity in AI.

“We can draw a direct line between a lack of diversity and AI bias. The questions we should ask ourselves before implementing algorithms are who participated, who was harmed, and who benefitted from our solutions,” Shah-Dand said.

Listening to Jessica Graves of Sefleuria, we realized that there is no technical reason algorithms can’t eventually learn to generate creative output if we give them access to the same inputs and feedback as humans.

The conversation around data also extended to governmental and regulatory policy. Anu Bradford, Alexander Juengling, and Sebastien Toupy joined us on the main stage to talk about the “Brussels effect” – the EU's unique power to influence global corporations and set the rules of the game while acting alone. We found out that neither a ‘hard’ nor a ‘soft’ Brexit will liberate the UK from the EU's regulatory reach.

One of our main partners, IBM, had a stage that was booming with insightful talks. Noel Yuhanna and Kip Yego discussed how trust in AI should start with the concrete data foundation, and Jennifer Sukis and Dr. Robin Langerak walked us through the AI lifecycle.

“I have never been prouder when it comes to our content,” Poughia said. “We really managed to get very high-level, quality speakers, focused on a lot of interesting topics such as the pandemic, data monopolies, deep fakes, fake news, and other areas that impact our lives.” 

In addition to the conference content, we announced the winners of the DN Unlimited Hackathon we ran in September. Three winners created solutions to help bring adaptive learning to places where internet connectivity is weak, to accelerate precision medicine, and to help people measure their environmental impact.

Our EUvsVirus colleagues Michael Ionita, Urska Jez & Jesus del Valle concluded that realizing our wildest dreams is possible not only through hackathons but entirely online. The trend is here to stay.

Speaking of online, our attendees took full advantage of speaker Ask Me Anything (AMA) sessions, where participants could meet and greet our distinguished experts. Our Slack channels were buzzing with activity, and the various networking tools on offer helped to connect the masses.

“I’m glad we had a way to bring everyone together and to be connected on different mediums and formats,” Poughia said. “And it’s ‘DN Unlimited’ because there was no limit to the communication, and the connections that were made. We really were crossing all borders.”

We would love to bring Data Natives back in an offline capacity for 2021, and pandemic permitting, we’ll make that a reality. And while COVID-19 forced our hand, we couldn’t have wished for a better online event this year.

Thank you to everyone who participated, in whatever capacity – you made it special. We’ll see you all, in either two or three dimensions, next year.

Europe’s largest data science community launches the digital network platform for this year’s conference https://dataconomy.ru/2020/10/30/europes-largest-data-science-community-launches-the-digital-network-platform-for-this-years-conference/ Fri, 30 Oct 2020 10:25:30 +0000
  • The DN Unlimited Conference will take place online for the first time this year
  • More than 100 speakers from the fields of AI, machine learning, data science, and technology for social impact, including from The New York Times, IBM, Bayer, and Alibaba Cloud
  • Fully remote networking opportunities via a virtual hub

The Data Natives Conference, Europe’s biggest data science gathering, will take place virtually and invite data scientists, entrepreneurs, corporates, academia, and business innovation leaders to connect on November 18-20, 2020.

The conference’s mission is to connect data experts, inspire them, and let people become part of the equation again. With its digital networking platform, DN Unlimited expects to reach a new record high of 5,000+ participants. Visitors can expect keynotes and panels from industry experts and a unique opportunity to start new collaborations during networking and matchmaking sessions.

In 2019, the sold-out Data Natives conference gathered over 3,000 data and technology professionals and decision-makers from over 30 countries, including 29 sponsors, 45 community and media partners, and 176 speakers. The narrative of DN Unlimited Conference 2020 focuses on assisting the digital transformation of businesses, governments, and communities by offering a fresh perspective on data technologies – from empowering organizations to revamp their business models to shedding light on social inequalities and challenges like climate change and healthcare accessibility.

Data science, new business models and the future of our society

In spring 2020, the Data Natives community of 80,000 data scientists mobilised to tackle the challenges brought by the pandemic – from the shortage of medical equipment to remote care – in a series of Hackcorona and EUvsVirus hackathons. Through the collaboration of governments such as the Greek Ministry for Digital Governance, institutions such as the Charité, and experts from all over Europe, over 80 data-driven solutions have been developed. The DN Unlimited conference will continue to facilitate similar cooperation.

The current crisis demonstrates that only through collaboration can businesses thrive. While social isolation may be limiting traditional networking opportunities, we are more equipped than ever before to make connections online.

The ability to connect to people and information instantly is so common now. It’s just the beginning of an era of even more profound transformation. We’re living in a time of monumental change. And as the cloud becomes ubiquitous, it’s literally rewriting entire industries

Gretchen O’Hara, Microsoft VP; DN Unlimited & HumanAIze Open Forum speaker.

The crisis has called for a digital realignment from both companies and institutions. Elena Poughia, the Founder of Data Natives, perceives the transformation as follows:

It’s not about deploying new spaces via data or technology – it’s about amplifying human strengths. That’s why we need to continue to connect with each other to pivot and co-create the solutions to the challenges we’re facing. These connections will help us move forward

Elena Poughia, the Founder of Data Natives

The DN Unlimited Conference will bring together data & technology leaders from across the globe – Christopher Wiggins (Chief Data Scientist, The New York Times), Lubomila Jordanova (CEO & Founder, Plan A), Angeli Moeller (Head of Global Data Assets, Bayer AG), Jessica Graves (Founder & Chief Data Officer, Sefleuria) and many more will take to the virtual stages to talk about the growing urge for global data literacy, resources for improving social inequality and building a data culture for agile business development.

Online events for Data Scientists that you can’t miss this autumn https://dataconomy.ru/2020/08/20/online-events-for-data-scientists-that-you-cant-miss-this-autumn/ Thu, 20 Aug 2020 14:15:12 +0000

It looks like the digital form of communication is here to stay for a while. Still, there are lots of opportunities for Data Scientists and data-driven professionals to meet, collaborate, network, and think about their next career move.

Here is a list of online events worth attending to catch up on industry trends, network, and get your problem-solving skills up to speed.

For those who are skeptical about virtual events, you might want to change your opinion. Here’s why:

  • You can still get loads of insightful content and catch up on the latest trends, with the option to replay your favorite sessions;
  • You can network from wherever you are in the world. Being able to schedule online meetings and network from the comfort of your own home is a game-changer for many (and great news for introverts, who make up 25 to 40% of the population);
  • You can meet people whom you wouldn’t necessarily encounter at physical events. Anybody can join a virtual event – that’s thousands of people to connect with!
  • You are more likely to establish connections with high-level experts – everyone is equal in a Zoom room, and there’s no barrier to connecting with high-level speakers digitally during an online event.
  • Last but not least – climate change is real, and conferences are significant contributors to CO2e emissions. A single attendee’s footprint can reach 2,000 pounds of CO2e when extensive travel is involved.

Upcoming virtual events for data-driven professionals:

1. Open Data Science Conferences: ODSC usually hosts conferences for professional data scientists in the EU, Asia, and the US.

This year, all gatherings are accessible online, with a plethora of activities that will interest data scientists, such as workshops, on-demand talks, a three-day virtual bootcamp, a career lab, an expo, and more.

Tickets range from 129 EUR for general admission to 749 EUR for full access.

2. DN Unlimited: the EU’s biggest community of data scientists is going fully digital this fall and offers a whole new virtual experience from September 10th to November 20th, including:

  • A hackathon where data scientists and tech professionals will solve challenges around data privacy & healthcare, sustainable & green businesses, and social impact & data protection.
  • The Data Natives 2020 flagship conference, fully accessible online. This year’s focus is on future societies, the data economy and all things data science (latest trends, education & more), plus matchmaking to connect attendees in a virtual space.
  • The newest addition to 2020’s activities: Open Forum virtual events happening every week in the lead-up to the conference, discussing topics like deep fakes, NLP, cybersecurity & more (free for all ticket holders).

Ticket prices range from 0 to 190 EUR for full VIP access.

3. Data & Analytics Digital Summit aims to connect business and innovation digitally and focuses on senior business professionals who are excited about the opportunities tech provides for solving business challenges. The summit offers downloadable and actionable takeaways, workshops, interactive panels, and more.

Seats can be reserved via the registration form on their website.

4. AI & Big Data Expo happens in hybrid form, with AI, big data, IoT and blockchain sessions, matchmaking, and networking parties.

Key topics include demystifying AI, creating AI-powered organizations, machine learning, decision science, RPA and automation, infrastructure, platforms, ethics, and data analytics.

Ticket prices range from 0 to around 700 EUR, depending on the package.

How can governments harness the power of volunteering https://dataconomy.ru/2020/07/07/how-can-governments-harness-the-power-of-volunteering/ Tue, 07 Jul 2020 15:16:04 +0000

Assistant Volunteer, a project of Nable Solutions, was born during the HackCoronaGreece online hackathon to better coordinate the efforts of volunteers. Today, Assistant Volunteer’s platform is part of the Greek Ministry of Health’s official response to the pandemic.

This year, online hackathons have proven to be a great source of ideation for easily scalable solutions during crises. From a shortage of medical equipment to caring for patients remotely, solutions to better manage the COVID-19 outbreak flourished globally. However, it remained to be seen whether these solutions could be developed into mature products able to be integrated into official response programs.

On April 7th-13th, in Berlin and Athens, the global tech community tackled the most pressing problems Greece faced due to the COVID-19 outbreak during the HackCoronaGreece online hackathon organized by Data Natives, eHealth Forum and GFOSS with the support of GreeceVsVirus (an initiative by the Greek Ministry of Digital Governance, Ministry of Health & Ministry of Research & Innovation). Just two months later, Assistant Volunteer matured its solution to the final stages of development and was selected by the Greek Ministry of Health to officially contribute to managing the COVID-19 pandemic in Greece.

The era of volunteering

COVID-19 paved the way to a new era of volunteerism in response to the crisis. Even though isolated from each other, volunteer movements across the globe found ways to dedicate their time and efforts to help the ones in need and introduce innovative and effective ways of helping humanity.

According to the United Nations, in Europe and Central Asia the volunteer movement has been officially recognized by some governments for the services volunteers provided during the COVID-19 pandemic. That’s exactly the case with HackCoronaGreece and the solutions created by its diverse communities.

One such solution, Assistant Volunteer, addresses the problem of coordination – when thousands of people gather for a good cause, their efforts deserve outstanding management to maximize positive effects.

What’s Assistant Volunteer

Assistant Volunteer was developed as part of the HackCoronaGreece hackathon by Nable Solutions, an award-winning startup providing software solutions with a social cause. Assistant Volunteer is an easy-to-use volunteer management software platform for organizations and government agencies. It can be configured to help organizations of all types and sizes modernize and upgrade their operations seamlessly within their existing workflow. Thanks to its modular architecture, organisations can coordinate volunteers through a web app and a mobile app.

Any organization can register, create a profile, define the actions needed, engage with the volunteer database, track performance, and measure impact.

“We would like to thank all the mentors, judges, and organizers of the HackCoronaGreece who are constantly supporting us to move our project forward.”

Sotirios Metaxas, Assistant Volunteer team

Assistant Volunteer competed with 14 other teams to be selected for the finale of the HackCoronaGreece hackathon and continue the development of its idea. The solution was recognized by the Greek Ministry of Health and selected for assistance in further development.

“Amidst all this chaos caused by the pandemic, one thing that made us feel more positive and optimistic was all these amazing initiatives and ideas from the Greek youth and beyond.”

Panagiotis Prezerakos, General Secretary of the Ministry of Health

Multinational pharma giant MSD supports the project

Another influential supporter of the project is MSD – a pharmaceutical multinational that contributed an award for Assistant Volunteer: a monetary prize of 7,000 EUR.

Previously, MSD Greece donated 100,000 euros to the Ministry of Health “to strengthen the national Greek health system and to protect its citizens”.

MSD also donated 800,000 masks to New York and New Jersey. Working with the Bill and Melinda Gates Foundation and other healthcare companies, MSD is contributing to the development of vaccines, diagnostic tools, and treatments for COVID-19 as quickly as possible.

“MSD has the motto “inventing for life”, focusing a lot on innovation, and has been trying these last years to help boost innovation in Greece by providing several funding opportunities for the youth of Greece involved in innovation and technology. We are so pleased that the outcome of this hackathon was the creation of tangible solutions that will help society as a whole”

Antonis Karokis, External Affairs Director, MSD

The Greek Ministry of Health included Assistant Volunteer in its official efforts to fight the pandemic and facilitated the population of the platform with 10,000 volunteer profiles. Now, organizations can take the next steps in coordinating the volunteer movement in Greece and, potentially, beyond.

How healthcare institutions and hackers cooperate https://dataconomy.ru/2020/04/17/how-healthcare-institutions-and-hackers-cooperate/ Fri, 17 Apr 2020 15:12:59 +0000

Hacking Health Berlin, Charité University Hospital Berlin, the Berlin Institute of Health, Data Natives and Vision Health Pioneers brought together 164 participants and 20 hacking teams that produced 14 community-sourced tech solutions to meet the COVID-19-related needs of Germany’s leading clinicians and researchers. Let’s take a look at the results.

“The Second World War showed us that you can train doctors in four years’ time, not seven… The coronavirus crisis is showing us it might be possible to have a vaccination in half the time. This is the time to rethink the entire bureaucracy, the protocols around health,” said health innovator and entrepreneur Roi Shternin in his interview for Dataconomy.

Indeed, necessity is the mother of invention, and the crisis is a boost for change. COVID-19 is changing the face of healthcare globally, with ever more diverse technological solutions, experiments and open-source collaborations flowing into the industry. Delivering better health outcomes at a lower cost and supporting those on the frontline has been recognized by global health institutions as crucial to stopping the spread and managing the COVID-19 outbreak – for example, the FDA decided to allow anti-malaria drugs to be tested as treatments for COVID-19 patients.

In Europe, leading health institutions such as Charité University Hospital Berlin, the Berlin Institute of Health and the Diabetes Center Berne, Switzerland partnered with the health innovation movement Hacking Health Berlin, the global data science community Data Natives, and Vision Health Pioneers, a Berlin-based healthcare startup incubator, to bring together multidisciplinary professionals for an online hackathon addressing the real needs of the healthcare system.

The EasterHack took place over the weekend of April 10th and concluded on April 13th with a closing ceremony where the five smartest hacking solutions were selected. Clinicians and experts fighting at the forefront at the Charité University Hospital Berlin joined the hackathon as mentors to support the teams in developing their solutions.

The challenges, all based on the actual needs of leading clinicians and researchers from Charité University Hospital Berlin, the Berlin Institute of Health and the Diabetes Center Berne, Switzerland, focused on supporting and protecting the high-risk population, protecting medical staff from infection, building on top of open-source solutions (such as CoEpi.org and COVID-watch.org) to allow privacy-first contact tracing, improving intensive care while reducing long-term lung damage, supporting ICU patients psychologically, and protecting healthcare workers’ and patients’ mental health.

Over 200 enthusiasts stood up to these challenges; in total, 164 participants of 26 nationalities formed 20 teams to spend their Easter holidays hacking intensively from home. They were supported by 35 mentors who worked tirelessly to direct the formation of innovative healthcare ideas and help turn them into actionable solutions – all through online video calls.

“Online mentoring allowed me to support a team in the Balkans. I could not have done that being based in Berlin without extensive travel”, says Maren Lesche, Head of Incubation at Vision Health Pioneers, EasterHack partner and mentor.

At the end of the hacking sessions, 14 pitches were presented to a committee of 10 jurors – healthcare innovators, thinkers, researchers, and experts. Finally, 5 winners emerged.

THM – The Health Manager team tackled the dilemma of knowledge transfer during and after the corona crisis. There is a huge amount of research out there on topics that will become relevant for post-coronavirus crisis management: the restoration and innovation of healthcare systems, preparation for future pandemics, and more. A more direct transfer between researchers and practitioners is necessary to get a clearer picture of the status quo. The THM team decided to pack 10,000 research papers and articles directly into the pockets of GPs, the public and hospital workers, allowing them to get their relevant questions answered in the format of short video snippets and visual graphics – all done with the help of a hybrid of curated information and a deep-learning engine.

“We came with an idea, started by pivoting it and now we moved 500% forward. We are so impressed by the quality of the participants and the mentors”, says Daniela Marzavan from ‘The Health Manager’ team.

Team mAIndcraft created a video-based hotline to provide instant psychological support for healthcare professionals exposed to stress and moral dilemmas during the COVID-19 pandemic. During the pandemic, healthcare professionals are exposed not only to a high risk of infection but also to high mental distress due to uncertainty and moral dilemmas. With heavy workloads and restricted time, they lack the capacity to reach out to existing psychosocial support structures. By bringing psychological support to healthcare professionals directly via a video chat hotline, instant help was made possible by the voluntary commitment of a broad network of psychotherapists.

The MyCare team developed a platform where all frontline healthcare workers can do regular health self-checks and get the support of qualified personnel to ensure that their overall health is prioritized. While using the platform, healthcare workers answer a set of self-assessment questions. Their answers are then assessed by a qualified team of experts, who determine the health state of the HCW and recommend timely interventions, including psychologist visits (in person or digital) and scheduled breaks. This data, if an HCW allows it to be shared, will serve as a tool for assessing the impact of pandemics on the overall health of HCWs and will allow the creation of well-informed preventative measures.

The contribute2.us team tackled the problem of distributing critical resources and services. Coronavirus has disrupted supply chains around the world. In some cases, there are still plenty of resources available, but they are not arriving at the locations that need them the most. Many people have critical resources sitting on their shelves at home but have no clear idea of where to send them, so the resources remain unused. How did contribute2.us solve it? By creating a platform where people can easily indicate where to send the resources (address), what is needed (e.g. “We need 100 n95 masks”) and track the progress of the collection (e.g. “We have received 37 of the 100 requested n95 masks. Thank you so much!”). The platform also allows others to discover these requests based on the supplies they already have.
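
To make the mechanics concrete, here is a minimal sketch – with hypothetical names, not the contribute2.us codebase – of the request-and-progress model described above:

```python
# Illustrative only: models a resource request with progress tracking,
# mirroring the example strings quoted in the article.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    address: str           # where to send the resources
    item: str              # what is needed, e.g. "n95 mask"
    quantity_needed: int
    quantity_received: int = 0

    def receive(self, n: int) -> None:
        """Record newly arrived supplies, capped at the requested amount."""
        self.quantity_received = min(self.quantity_needed,
                                     self.quantity_received + n)

    def status(self) -> str:
        return (f"We have received {self.quantity_received} of the "
                f"{self.quantity_needed} requested {self.item}s. "
                f"Thank you so much!")

# Example: a hospital asks for 100 masks and 37 arrive.
req = ResourceRequest("General Hospital, Athens", "n95 mask", 100)
req.receive(37)
print(req.status())
```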

Last but not least, the reCOVer team focused on the privacy-first contact tracing and digital epidemiology challenge by creating a multi-service platform connecting patients to self-assessment tools, dashboards, and personal analytics. Their vision is to enable patients to answer questions about their symptoms and location and see recommended actions. Users will be able to easily track their log entries on their phones and see how their symptoms are changing. Secure and anonymous contact tracing will allow patients to stay informed about their area and avoid potential high-impact areas. Finally, information like location, age, and test results will be collected and anonymized into a large database to better understand the effects of the pandemic.
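
As an illustration of the kind of anonymization the team describes – a generic privacy pattern, not reCOVer’s actual pipeline – direct identifiers can be replaced with salted hashes and locations coarsened before records enter a shared database:

```python
# Illustrative only: pseudonymize the user id and coarsen the location
# before a self-report is stored for epidemiological analysis.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept server-side, never published

def pseudonymize(user_id: str) -> str:
    """Deterministic pseudonym: a user's reports link up without exposing the id."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

def coarsen(lat: float, lon: float, places: int = 2) -> tuple:
    """Round coordinates (roughly 1 km at two decimals) so exact addresses aren't stored."""
    return (round(lat, places), round(lon, places))

record = {
    "user": pseudonymize("maria@example.com"),  # hypothetical user
    "area": coarsen(37.98381, 23.72754),        # central Athens
    "age_band": "30-39",                        # banded, not exact
    "test_result": "negative",
}
print(record)
```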

Even though it is difficult to think of the pandemic bringing good into our lives, one thing is clear – more efficient and effective ways to deliver healthcare worldwide are being implemented as we speak, diverse communities and leading medical institutions are working together, and it’s not going away anytime soon.

How community-sourced tech targets coronavirus in Greece https://dataconomy.ru/2020/04/14/how-community-sourced-tech-targets-coronavirus-in-greece/ Tue, 14 Apr 2020 14:48:43 +0000

From a shortage of medical equipment to caring for patients remotely, Data Natives’ global community tackled the most pressing problems Greece is facing due to COVID-19 during HackCoronaGreece online hackathon.

Some weeks ago, here in Europe, we were looking at the news headlines about the coronavirus outbreak in China and did not expect it to affect our daily lives. Right now, there is no community in the world that hasn’t been impacted by the coronavirus crisis in one way or another.

The rapid spread of coronavirus ravaged healthcare systems, economies and social lives beyond health concerns – think layoffs, data privacy and fears of global surveillance.

However, it also paved the way for community-sourced solutions to flourish with individuals choosing to spend their isolation time differently – teaming up with like-minded people and fighting the crisis together with healthcare professionals and governments.

An example of such a “citizen-to-government” project was the HackCoronaGreece online hackathon (April 2nd-13th, 2020) initiated by Data Natives with the support of the Greek Ministry of Digital Governance. The hackathon quickly gathered over 2,000 enthusiasts, among whom 400 selected data scientists, developers, project managers, designers, healthcare experts, and educators formed 54 teams to produce immediately available digital solutions tackling the main pain points the Greek healthcare and social systems are facing due to the COVID-19 outbreak.

These teams worked tirelessly to create systems able to track capacity, supply chains and demand for hospitals, support call centers and hotlines, help doctors care for patients remotely with telemedicine solutions, analyze the pandemic and create forecast systems, and build open-source hardware tools for medical personnel. By the end of this high-intensity, low-sleep marathon, the evaluation committee and hacking teams gathered online to announce the 8 winners of the HackCoronaGreece hackathon.

“This hackathon is an opportunity to modernize the Greek Health System. Taking this step could improve the everyday lives of our citizens.”

Vassilis Kontozamanis, Deputy Minister at the Greek Ministry of Health

The evaluation criteria were based on the innovative potential of the proposed solution, the positive impact it can have on society, the overall applicability and feasibility of the solution as well as the quality of the technical implementation.

Solutions for symptom monitoring and GDPR-compliant data gathering – usable for forecasts, real-time tracking, and even long-term COVID-19 recovery research – thrived during the hackathon.

Hackit-19 team (1st place) created an easy-to-use app to help individuals, families, and decision-makers to navigate through daily life based on self-reported symptoms and the COVID-19 heat map.

“Greek leadership in this crisis demonstrated the ability of aggregating the best characteristics of our culture”

Charis Lambropoulos, Head of Athens Incubator for Startups

Second place went to team SMARTY, which developed an intelligent decision support tool (a web-based app) designed to detect undiagnosed cases of COVID-19. Team Survivors (4th place) came up with a reporting app for patients recovering from COVID-19 to self-report on their recovery process (full recovery can take years for patients with severe cases). The data gathered will lay the foundation for monitoring, research and policymaking. Keeping data privacy concerns in mind, a GDPR-proof GPS data tracking mobile solution was offered by the ΓΝΩΜΩΝ (Gnomon) team (6th place).

Another HackCoronaGreece team, Assistant Volunteer (3rd place), focused on facilitating community support and created a volunteer resource management system designed to support citizens in need and allow for planning volunteer missions and operations, registry management and volunteer communication.

“This is a great chance for people who have exciting and innovative ideas to get in touch with the Greek government and collaborate to transform these ideas into tangible solutions”

Elpida Kyriakou, IT manager, EVAGGELISMOS Hospital

Other winning teams focused their efforts on helping healthcare workers stay protected with an open-source protective face shield that can be promptly mass-produced in large quantities (the COVID-19 Response Greece – Face Shield team, 5th place), enhancing COVID-19 drug safety via a knowledge graph (OpenPVSignal COVID19 Knowledge Graph, 5th place) and working on a telemedicine solution, delivered via text message, that is applicable to 90% of telemedicine systems worldwide (Emergency Solution, 8th place).

The foundation for action was laid, and the Greek Ministry of Digital Governance is continuing to work with communities via its digital innovation initiatives – yet more proof that “times of crisis can bring people together quickly to create immediate solutions in an agile manner, focus on what matters most and push the innovation forward”, as noted by Elena Poughia, MD at Dataconomy and CEO at Data Natives.

About the HackCorona initiative:

HackCorona hackathons are an initiative of Data Natives and Dataconomy, a community of 78,000+ tech professionals, data enthusiasts, entrepreneurs and activists. After gathering over 1,700 hackers and producing 23 digital solutions to help the world fight the COVID-19 outbreak during the 48-hour virtual hackathon on March 20th-22nd, Data Natives & Dataconomy brought the concept to Greece.

For more information: 

Hackathons and action groups: how tech is responding to the COVID-19 pandemic https://dataconomy.ru/2020/04/09/hackathons-and-action-groups-how-tech-is-responding-to-the-covid-19-pandemic/ Thu, 09 Apr 2020 11:00:18 +0000

The global COVID-19 pandemic has generated a wide variety of responses from citizens, governments, charities, organizations, and the startup community worldwide. At the time of writing, the number of confirmed cases has now exceeded 1,000,000, affecting 204 countries and territories.

From mandated lockdowns to applauding health workers from balconies, a significant number of people are taking this as an opportunity to step up and help in any way they see fit. And this is true of the various tech ecosystems too.

And while some are repurposing their existing startups and businesses to assist with the pandemic response, others are joining an ever-expanding number of hackathons across the globe to come up with fresh ideas and feasible solutions.

One such hackathon, #HackCorona, gathered over 1,700 people, and during the course of the 48-hour long online event, 300 people delivered 23 digital solutions to help the world fight the outbreak. Organized by Data Natives and Hacking Health Berlin, the event was created in record time, a hallmark of people’s response to the situation. There really is no time to waste.

Attracting hackers from 41 countries, the teams worked tirelessly to produce solutions that were useful, viable, and immediately available to help in a multitude of areas affected by the spread of the novel coronavirus. Mentors and jurors from Bayer Pharmaceuticals, Flixbus, MotionLab.Berlin, T-Systems International, Fraunhofer, and more both assisted the teams with their applications and decided which would win a number of useful prizes.

“We are happy to have created a new community of inspired, talented, and creative people from so many different backgrounds and countries eager to change the course of this critical situation,” said Elena Poughia, CEO at Data Natives. “This is exactly the reason why we, at Data Natives, are building and nurturing data and tech communities.”

Distrik5, born from members of the CODE University of Applied Sciences in Berlin, developed a digital currency that is earned when one of its users provides assistance to the elderly, those at the highest risk of dying from COVID-19 and its associated complications. The team won a fast track to join the current incubator cohort at Vision Health Pioneers.

Homenauts created a participatory directory of resources to help maintain strong mental health while isolating. Polypoly.eu developed Covid Encounters, a mobile app to track exposure and alert citizens without compromising privacy. HacKIT_19 created a solution that uses data to help you make better decisions with self-reported symptoms. 

In total, eight teams created winning solutions that are viable and instantly applicable to the crisis. And #HackCorona is just one of many such examples around the world.

“The solutions created were a good mixture of ‘citizen first’ solutions with the aim to assist people with limited technology,” Poughia said. “However, what really stood out to me was that we need more data scientists working closely with epidemiologists to predict and understand the current outbreak.”

Poughia warns that we mustn’t slow down now, or become complacent.

“I think it is admirable to see institutions, academic universities, incubators, and accelerators joining in to support the projects,” Poughia said.

“What we need is immediate action and immediate support to keep the momentum going. Volunteers should continue to come together to help but we also need the support of governments, companies, startups, and corporations, so that we can accelerate and find immediate solutions.”

Data Natives is now bringing the #HackCorona concept to Greece. With the support of the Greek Ministry of Digital Governance, Hellenic eHealth and innovation ecosystems and co-organised by GFOSS and eHealthForum, the second edition of HackCorona aims to find creative, easily scalable, and marketable digital solutions. Its aim is to help hospitals manage the supply and demand chain, provide real-time information for coronavirus hotlines, offer telehealth solutions allowing doctors to care for patients remotely, use data to create an extensive mapping, create symptom checkers, and more. 

HackCoronaGreece is currently gathering teams of data scientists, entrepreneurs, technology experts, designers, healthcare professionals, psychologists, and everyone who is interested in contributing for a weekend-long hacking marathon which will conclude on Monday, April 13th with a closing ceremony. Applications are closing on April 10th at 23:59 Central European Time.

Juliana Geller, Head of Marketing for TechBBQ and co-organizer of Hack the Crisis DK, explained the motivation behind creating hackathons in times of need.

“It’s the potential of getting people of all walks of life together to create solutions to a problem that affects all of us,” Geller said. “By doing that for this particular challenge, we can prove it is possible to do it for all the other challenges we face as a society.”

Hack the Crisis is, in fact, not one hackathon but an entire series set up to find solutions pertaining to COVID-19. Hack the Crisis Norway ran for 48 hours on March 27, 2020, and was won by a team that used 3D printing technology to put visors in the hands of medical staff on site, saving time and dramatically shortening the supply chain.

Of course, bringing people together to create apps, products, and services is one thing, but getting to market quickly enough to make a difference is an entirely different proposition. Almost every hackathon I looked at when researching this article has built deliverability into the judging criteria, so that those who can put the solution into the hands of those who need it are rewarded.

“One of our judging criteria is actually that the solution is delivered as an MVP by the end of the hackathon and has the potential to be developed into a go-to-market product quickly,” Geller said. “Besides the ‘saving lives’ solutions, which are obviously the most urgent, we want to see ideas to help the community and help businesses, and it is already clear that those will be affected for a much longer period. So we are positive that the solutions will indeed make a difference.”

Hack the Crisis was originally created by Garage48, AccelerateEstonia, and other Estonian startups, but it has become an entire hackathon community, determined not only to support the efforts against the novel coronavirus but also to support other hackathon creators.

Anyone can organize a hackathon and post it on the Hack the Crisis website, which at the time of writing lists 46 hackathons in over 20 countries. Geography, of course, is not important at this time, since every hackathon is being run remotely, but it does illustrate how global the response is, and how everyone, everywhere, is looking to solve the biggest COVID-19 challenges.

“It is a worldwide movement,” Geller said. “And on April 9-12, 2020, there’ll be a Global Hack. But that is not where it stops, absolutely not. We want to generate solutions that will have value after this crisis, that can actually become a startup and keep benefiting the community later on.”

But there are also groups forgoing the traditional hackathon format and coming up with solutions created in WhatsApp, Telegram, and Facebook Messenger group chats. One such chat was created by Paula Schwarz, founder of the Cloud Nation and of Datanomy.Today.

By bringing together like-minded people, and through constant curation of the chat and calls to action to incentivize members to come up with solutions, Schwarz has created a pseudo-hackathon that never ends.

One such solution is Meditainer, which helps get important supplies to those in need. It’s a simple solution, but one that was created quickly and effectively. 

Meditainer is a project very close to Schwarz’ heart. “My grandfathers started a medical company shortly after the second world war,” she said. “This is why I have very good connections in the pharmaceutical sector.”

“Since I had mandates from the United Nations to organize the data of 25 cities and I watched the supply chains of the United Nations fall apart, I realized that right now is the time to leverage my network and the background of my family, together with sophisticated usage of data in order to provide next-level healthcare innovation for the people,” Schwarz said.

So how does it work? 

“Meditainer works directly with governments and strong institutional partners inside and around the United Nations to close supply gaps in healthcare through our effective public-private partnerships,” Schwarz said. “It operates as a distributor of thermometers, smart corona tests and apps that will hopefully help to reduce the spread of the virus.”

So whether you organize a hackathon, participate in one, or create your own “mastermind group” on a messaging platform, one thing is for sure – you’re making a difference and you’re aiding those in need when they need it the most.

The benefits for society are obvious, and the growth you’ll witness by getting involved in some way is also extremely apparent.

“I’m grateful to be working with so many active masterminds and I look forward to getting to know key players in the industry even better,” Schwarz said.

The startup industry, and those connected to it, have really stepped up at a time when it is needed the most, and long may that spirit continue.

#HackCorona 2.0: Open-source hardware, telehealth & pandemic forecasts https://dataconomy.ru/2020/04/06/hackcorona-2-0-open-source-hardware-telehealth-pandemic-forecasts/ Mon, 06 Apr 2020 14:18:40 +0000

According to the WHO, it took more than 3 months to reach the first 100,000 confirmed cases of coronavirus worldwide, but only 12 days to reach 200,000, 4 days to reach 300,000, 3 days to reach 400,000, and another 5 to reach 700,000. China’s cases rocketed in the early weeks of the outbreak but were curbed before other significant outbreaks happened. Now, the US has the highest number of confirmed cases. Can we predict where the virus hits next?
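
As a back-of-the-envelope illustration – assuming simple exponential growth, which real epidemics only approximate – the doubling times implied by those WHO milestones can be computed in a few lines of Python:

```python
import math

# (cumulative confirmed cases, day the milestone was reached)
milestones = [(100_000, 0), (200_000, 12), (300_000, 16),
              (400_000, 19), (700_000, 24)]

for (c1, d1), (c2, d2) in zip(milestones, milestones[1:]):
    # Under exponential growth, doubling time t_d = dt * ln(2) / ln(c2 / c1).
    t_d = (d2 - d1) * math.log(2) / math.log(c2 / c1)
    print(f"{c1:,} -> {c2:,} cases: doubling roughly every {t_d:.1f} days")
```

The implied doubling time shrinks from roughly 12 days to under a week – exactly what made early forecasting feel so urgent.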

Greece currently faces over 1,800 coronavirus cases. As of now, over 200 people have recovered, but 81 people have died. With the disciplined approach the Greek Government is implementing, the outbreak could well be brought under control. However, based on the experience of other European countries, and taking into consideration the threat of an economic crisis, easily implementable solutions will help manage the current state of affairs better, avoid overloading the healthcare system, and ease the burden on the Greek economy.

#HackCoronaGreece: Supported by the Greek Ministry of Digital Governance

HackCorona hackathons are initiated by Data Natives, a community of 78,000+ tech professionals, data enthusiasts, entrepreneurs and activists. After gathering over 1,700 hackers and producing 23 digital solutions to help the world fight the COVID-19 outbreak during the 48-hour virtual hackathon on March 20th-22nd, Data Natives is bringing the concept to Greece.

With the support of the Greek Ministry of Digital Governance and the Hellenic eHealth and innovation ecosystems, and co-organised by GFOSS and eHealthForum, the second edition of HackCorona aims to find creative, easily scalable and marketable digital solutions to tackle the main pain points the Greek healthcare and social systems are facing due to the COVID-19 outbreak.

HackCoronaGreece is currently gathering teams of data scientists, entrepreneurs, technology experts, designers, healthcare professionals, psychologists, and everyone who is interested in contributing for a weekend-long hacking marathon which will conclude on Monday, April 13th with a closing ceremony. Applications are closing on April 10th at 23:59 Central European Time – hurry up!

Proposals gathered throughout the hackathon will provide solutions directly applicable to the challenges posed and will be evaluated and awarded by a renowned committee of jurors, including scientists and entrepreneurs.

Hackers will deep-dive into an intensive collaboration to come up with working digital prototypes to:

  • help hospitals manage the demand and supply chain
  • provide real-time information for coronavirus hotlines
  • offer telehealth solutions allowing doctors to care for patients remotely
  • use data to create an extensive mapping, symptom checkers and pandemic forecasts & more
  • come up with fast and easy hardware solutions that can be produced to solve problems defined by hospitals and other healthcare providers

The participation application is available here.

Application deadline: April 10th, 23:59 CET

The event launch, demo sessions and workshops will be streamed here.

Official #HackCoronaGreece website: http://hackcorona.world/gr

Slack group for hackers:  http://hackcorona.world/slack

Become a mentor here.

Join as a journalist here.

How to advance in your data science career – AMA with Elena Poughia https://dataconomy.ru/2020/03/26/how-to-advance-in-your-data-science-career-ama-with-elena-poughia/ Thu, 26 Mar 2020 11:13:39 +0000

On March 18th at 6 PM CET, Elena Poughia, Data Natives’ CEO and curator, shared her tips on how to advance in your data science career during a live Ask Me Anything session available via DN Club. Here is a recap of the AMA session with selected Q&As.

In January we started our online community club: datanatives.club. The timing couldn’t be more right. Throughout our self-quarantines because of COVID-19, it is important to stay connected. Luckily, you have 78,000+ fellow Data Natives out there to mingle with online.

We also want to give you the opportunity these weeks at home to refresh your brain with new ideas and knowledge. One of those ways is through ‘Ask me Anything’ sessions, where you can ask all the questions you ever had about how to advance your career in Data Science. Because this might just be the right time for you to step back and think about the future. 

During the first AMA online session, we talked with our founder & CEO Elena Poughia. Having run a popular data brand for the past few years, managed a diverse tech team, and being as connected as she is, she is just the right data boss to get inspired by. You asked a lot of questions via the Typeform, as well as in the chat during the session.

If you missed the session, here is a summary of some of the most insightful questions and answers: 

What resources and training routine do you recommend for interviews in data science, especially with the large tech companies?

Before you even start to search for a role, it’s important to know which data science path is right for you – analytics, engineering, or machine learning. The questions you’ll be asked will vary, because they will be specific to your chosen field.

But despite differences in type, there will always be a similar interview loop. For example, they will ask you what kind of programming languages you are familiar with. Python and R are the most popular ones in the data science space; C/C++, Java and Scala are common too.

What other technicalities do I need to prepare?

Big Data technologies are a little hard to follow, considering new tools are being developed all the time. However, we would recommend learning Spark, because it is very common.
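
For readers who want a first taste, here is a minimal PySpark sketch – assuming a local Spark installation and a hypothetical events.csv of user events – of the kind of aggregation that interview exercises often build on:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("interview-practice").getOrCreate()

# Load a (hypothetical) CSV of user events and infer the column types.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# Count events per user and keep the ten most active users.
(df.groupBy("user_id")
   .agg(F.count("*").alias("n_events"))
   .orderBy(F.desc("n_events"))
   .limit(10)
   .show())

spark.stop()
```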

Of course, you need to prepare for questions around data analysis, data collection, data cleaning, and feature engineering. I would also like to highlight that it is important for you to think about machine learning models – for example, what kinds of models you can train, supervised or unsupervised.
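
To make that distinction concrete, here is a minimal scikit-learn sketch (our illustration, not material from the session) contrasting a supervised and an unsupervised model on a toy dataset:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: learn from labelled examples, then score on held-out data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: look for structure without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
```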

Find at the end of this article a list of resources we recommend for you to practice. 

How can I best present myself?

When you are applying, think about how interested you are in the company and in the role. You need to proactively show that you are interested in the project that you will be building together. You need to show that you really want to be working with that team. The bottom line – it’s also about the culture of the company. 

Right now, many of us are working remotely – connecting becomes more important because of that. You want to get the feeling that you are being seen, understood and supported by the company because you will get into many situations where good communication is the key. Working remotely is only possible when both sides provide enough information, so there is an understanding of what everyone is working on. In this way, it will feel good to participate and build that project together. 

Another thing to think about: how well does your skill set match the job requirements? You also shouldn’t back off when the job ad doesn’t exactly match your profile – you can grow into the position. But when you read the job ad, you do need to get the feeling that it’s you they are looking for. And you should feel close to the topic. Again, it’s important for you and for them to get a good fit when it comes to company culture.

To advance in a data science career, what is better to improve: statistics skills or programming & development skills?

What skills you need to improve really depends on your career goals and your general interests. Therefore, it’s hard to say whether you need to develop programming or statistical skills. 

I would say one advantage of being in Data Science is that it’s such a new field – it’s always changing and improving. A lot of Data Scientists who started working didn’t consider themselves as such, because the title wasn’t available back then to describe the profession. Eventually, a lot of resources and tools become available as you go.

Programming is important in landing the first job, so you do need to be able to program. There are also easy programming languages to start with. A popular choice is Python; it’s quite common in the data science space.

But you don’t need to put yourself under a lot of pressure – you can always become more skilled and experienced in broader topics and skillsets. I would really emphasize here that this is life-long learning on the job. Especially now that we are all at home more, this is an opportunity for you to advance your career and learn new things.

How can I gain experience? 

Some people say that two years is the maximum you should spend focusing on your studies and training, but you can also enter the workforce before that. If you are switching careers, don’t take too long to educate yourself, but jump in and use the knowledge you gained in your previous background. 

It would be best to reserve at least half of the week to develop your skills. Right now, I would say take online courses that focus on the skills you want to learn. These don’t even have to be related to data science – a programming course in a relevant language, for example, works too. It is also good to educate yourself on data science through sources like Data Science Central and Dataconomy.

Also, at Data Natives we organize projects where you can gain experience. Recently we organized a #HackCorona Hackathon, where we scouted 23 digital solutions to challenges in the coronavirus crisis. Keep an eye on our channels for more!

What about connections, how important is it to network? 

It is good to be connected as much as possible to a community. For example, there are a lot of Python data communities around the world that you can be connected to. Try to meet like-minded people, so you can exchange resources. Essentially your network will be the way to advance your career and these communities will help you with problems you encounter. 

I’m in the process of wrapping up my master’s in mathematics. Should I wait to finish or apply? 

No, don’t wait, go for it now! Don’t even think about it. Go and apply as much as you can. When you apply you can say that you are still enrolled as a student. In fact, I don’t know where you are based, but in Germany, if you are a student, you start as a working student (Werkstudent) and that’s a really good way to enter the job market. It’s like an internship, but you get paid. Then in many cases, you get hired full time after you finish your studies. This is actually one of the best ways to get a job. 

What is your background, Elena?

Well, that’s the funny thing. My background is in economics and arts, so very different. But I fell in love with data science five years ago, because I think it’s such an enriching and multifaceted field and it really helps us to advance research. 

Right now, as we hold this online session together, so many terabytes of data are being processed. This I find very fascinating. I really want to support data scientists and hence we are doing these online sessions. We want to answer all your questions so you can advance your career. If we can help you find the right path for you and give you the right resources to reach your goals, our mission is accomplished!

That was it, dear Data Natives. We’ll be back soon with a new AMA session, featuring some of the most interesting data scientists out there.

Finally, some resources we recommend for you:

Glassdoor to assess companies offering jobs in data.

Leetcode to practice SQL questions.

Data Science Interview – free collection of data science interview questions and answers.

The DS interview for real interview questions. 

Dataquest for key concepts and to quiz yourself on everything from Python to SQL to machine learning.

Acing AI Interviews for articles with data science interview questions from big companies.

HackerRank for coding challenges you can work through.

Codewars to test your skills.

HackCorona: 300 participants, 41 nationalities, 23 solutions to fight COVID-19 outbreak https://dataconomy.ru/2020/03/23/hackcorona-300-participants-41-nationalities-23-solutions-to-fight-covid-19-outbreak/ Mon, 23 Mar 2020 17:45:11 +0000

In just one day, the HackCorona initiative gathered over 1700 people and 300 selected hackers came up with 23 digital solutions to help the world fight the COVID-19 outbreak during the 48-hour long virtual hackathon by Data Natives and Hacking Health. Here are the results.

HackCorona was created on March 17th in order to find digital solutions for the most pressing problems of the COVID-19 outbreak within a short period of time. In just one day, the initiative gathered over 1,700 people. 300 selected data scientists, developers, project managers, designers, healthcare experts and psychologists of 41 nationalities formed 30 teams that collaborated intensively throughout the weekend to come up with working prototypes for the selected challenges:

  • The “Protecting the Elderly” challenge focused on finding digital solutions for a voluntary care network for the elderly population, supported by young and healthy people.
  • The “Open-Source Childcare” challenge aimed at creating digital solutions for open-source childcare networks.
  • The “Self-Diagnosis” challenge targeted the development of online COVID-19 self-diagnosis solutions that would allow users to input symptoms and suggest the next steps to take.
  • The “Open Source Hardware Solutions” challenge intended to build fast and easy medical devices that can be produced to solve problems defined by hospitals and other healthcare providers.
  • The “open challenge” allowed participants to suggest and work on a challenge of their own choice.

HackCorona hackers were joined by renowned jurors and mentors such as Max Wegner, Head of Regulatory Affairs for Bayer Pharmaceuticals, Thorsten Goltsche, Senior Strategic Consultant at Bayer Business Services, Sabine Seymour, Founder SUPA + MOONDIAL, Dr. Alexander Stage, Vice President Data at FlixBus, Tayla Sheldrake, Operational Project Leader at MotionLab.Berlin, Dandan Wang, Data Scientist at T-Systems International GmbH, Mike Richardson, Deep Technology Entrepreneur & Guest Researcher at Fraunhofer, and more.

I encountered some very committed people, who presented amazing analyses. I really hope that they can actually use their solutions to fight the virus.

Max Wegner, Regulatory Affairs at Bayer Pharmaceuticals.

Hacking teams focused on creating easily marketable solutions: connecting volunteers to the high-risk population, encouraging people to volunteer, low-cost wearables tracking vital signs, assisting parents in dealing with anxiety, helping authorities better manage the lockdown, and many more.

[Image: Some of the participants of the HackCorona Online Hackathon]

From a community currency to incentivize volunteering to drug screening using quantum calculations

Eight winners were selected to receive prizes provided by the HackCorona partners Hacking Health, Bayer, Vision Health Pioneers, Motion Lab and Fraunhofer.

  • Distrik5 team from the CODE University of Applied Sciences in Berlin developed a community currency to incentivize people to volunteer and help the elderly with their needs by rewarding their time via digital currency. The team won a fast track to join the current batch of incubation at Vision Health Pioneers.
  • Team Homenauts created a directory of resources to help people stay at home and take care of their mental health. Homenauts introduced a participatory platform with ideas on how to better cope with isolation where users can submit useful resources. The team won a prize of connections from the Data Natives team, who will support the development of the platform by connecting Homenauts with marketing and development experts. 
  • DIY Ventilator Scout team created a chatbot (currently available on Telegram) to help engineers build a DIY ventilator by giving instructions and data regarding the availability of components needed to build a ventilator. The team received a prize from Fraunhofer to use the DIY Ventilator Scout system to guide Fraunhofer’s engineers who are currently working on the hardware.

What a fantastic event with incredible outcomes! … We at MotionLab.Berlin absolutely loved the motivation and enthusiasm. Your energy was felt and we could not be prouder to have been part of such a positive and community building initiative. Thank you DataNatives and all those involved for making this happen.

Tayla Sheldrake, Operational Project Leader at MotionLab.Berlin
  • Covid Encounters team by Polypoly.eu developed a mobile app for tracking exposure and alerting citizens without compromising their privacy. The app notifies users of any encounters carrying a possibility of infection through a public alert service that sends a notification to all connected devices. The team won a prize of connections from the Data Natives team, who will support the development of the app by introducing the team to relevant stakeholders.
  • HacKIT_19 team developed an easy-to-use app to help individuals, families, and decision-makers to make better decisions based on self-reported symptoms and real-time data. The team won a prize of connections from the Data Natives team.

Best way to spend a Sunday afternoon! I am just listening to the pitches of the #HackCorona teams. Some of them like the team from Anne Bruinsma just came together 48h ago to fight coronavirus. Hands up for the 140 entrepreneurs that spent their precious time to come up with new ideas!

Maren Lesche, Founder at Startup Colors, Head of Incubation at Vision Health Pioneers
  • Quantum Drug Screening team developed an algorithm for drug screening that uses quantum calculations to describe drug molecules that have already been approved and so can be adopted in therapy faster. Drug discovery for virus infections usually takes a lot of time and manpower and consumes over 15% of pharmaceutical company revenue. The faster way is to use computer simulations to target viruses with an array of available drug molecules and look at hundreds of thousands of possible drug solutions in a short time. The team won a prize of connections from the Data Natives team and further support of the project from Bayer.
  • BioReactors team developed a small data AI-powered tool for the optimization of bioreactor settings and nutrition mixtures based on their existing xT smart_DoE solution to scale the vaccine production much faster than usual. The team received a prize from MotionLab Berlin and got access to their facility infrastructure of 4000 square meters to help with the project development.
  • “Our Team” focused on creating prediction models for the COVID-19 outbreak based on a machine learning algorithm, with an option to change the parameters and view results. The team won a prize of connections from the Data Natives team and will be introduced to the relevant network stakeholders to push the project further.

CEO of Data Natives, Elena Poughia, said:

We are happy to have created a new community of inspired, talented and creative people from so many different backgrounds and countries eager to change the course of this critical situation – this is exactly the reason why we, at Data Natives, are building and nurturing data and tech communities.

HackCorona initiative was just the beginning. While the winning teams are continuing to work on their solutions, Data Natives is looking to build on the success and bring more bottom-up community-driven hacks to solve current and future problems collectively.

Sponsors & supporters:

Sponsors: Hacking Health, Bayer, Vision Health Pioneers, Motion Lab

Supporters: Fraunhofer, Enpact, gig, INAM, Photonic Insights, SIBB, Unicorns in Tech, StartUp Asia Berlin, Start-A-Factory

Pitching session recording is available via this link.

Winning ceremony recording is available here.

Calling the global data science community to #HACKCORONA
https://dataconomy.ru/2020/03/17/calling-the-global-data-science-community-to-hackcorona/ | Tue, 17 Mar 2020 17:02:04 +0000

COVID-19 is still spreading exponentially throughout the world. Some people who get it require hospitalization for respiratory failure for multiple weeks. The hardship falls on elderly people, medical personnel as well as the healthcare system in general. 

Identifying the main pain points in the current health crisis situation, Data Natives is activating its 78000+ community of data scientists, entrepreneurs, researchers, designers, and tech professionals to come up with solutions for:

  1. Protecting the elderly and medical workers
  2. Offloading the healthcare system with faster diagnosis options and medical equipment delivery and/or assembly
  3. Assisting with daily responsibilities for those who need it the most (e.g. childcare, grocery shopping, and medication)

This Friday, March 20th, we are kicking off an online hackathon #HackCorona to find creative solutions for given problems within a short period of time. We’re encouraging data scientists, entrepreneurs, social workers, designers, engineers (and everyone!) to join us.

Join our Slack channel where we’ll share more info very soon.

Let’s #HACKCORONA together!

To participate, apply here.

Picks on AI trends from Data Natives 2019
https://dataconomy.ru/2019/12/19/picks-on-ai-trends-from-data-natives-2019/ | Thu, 19 Dec 2019 18:12:31 +0000

A sneak-peek into a few AI trends we picked for you from Data Natives 2019 – Europe’s coolest Data Science gathering.

We are about to enter 2020, a new decade in which Artificial Intelligence is expected to dominate almost all aspects of our lives: the way we live, the way we communicate, how we sleep, what we do at work and more. You may say it already does, and that is true. But I expect this dominance to magnify in the coming decade, and humans will become even more conscious of technology affecting their lives and of the fact that AI now lives with them as part of their everyday existence. McKinsey estimates AI techniques have the potential to create between $3.5T and $5.8T in value annually across nine business functions in 19 industries. The study equates this value-add to approximately 40% of the overall $9.5T to $15.4T annual impact that could be enabled by all analytical techniques. One way or another, each of us is part of this huge wave in the tech industry, even if we don’t realize it. Hence, the question we asked this year at Data Natives 2019, our yearly conference, was “What makes us Tech?” – consciously or subconsciously.

Elena Poughia, Founder and Head Curator at Data Natives and Managing Director of Dataconomy Media, defines this move towards the future in a line:

“We are on a mission to make Data Science accessible, open, transparent and inclusive.”  

It is certainly difficult to capture the excitement and talks of this year’s Data Natives in a single piece, as it included 7 days of 25+ satellite events, 8.5 hours of workshops, 8 hours of inspiring keynotes, 10 hours of panels on five stages, a 48-hour hackathon, over 3500 data enthusiasts and 182+ speakers. Hence, I decided to pick a few major discussions and talks from Data Natives 2019 that define critical trends in AI for this year and the coming decade. Here is a look:

How will human intelligence rescue AI?

In the world of data scientists, it is now fashionable to call AI stupid: unable to adapt to change, unaware of itself and its actions, a mere executor of algorithms created by the human hand, and above all supposedly unfit to reproduce the functioning of a human brain. According to Dr. Fanny Nusbaum, Associate Researcher in Psychology and Neuroscience, there is a form of condescension, of snobbery, in these allegations.

“Insulting a machine is obviously not a problem. More seriously, this is an insult to some human beings. To understand, we must ask ourselves: what is intelligence?”

Fanny Nusbaum explains that intelligence is indeed a capacity for adaptation, but adaptation can take many forms. There is a global intelligence, based on awareness, that allows adaptation to new situations and an understanding of the world. Among the individuals demonstrating optimal adaptation in this global thinking, one can find the great thinkers, philosophers or visionaries, called the “Philocognitives”.

But there is also a specific intelligence, with adaptation through the execution of a task, whose most zealous representatives, the “Ultracognitives”, can be high-level athletes, painters or musicians. This specific intelligence looks strangely like what AI does: a narrow lane, admittedly, with little ability to adapt to change, but the task is usually accomplished in a masterful way. Thus, rather than parading questionable scientific claims about what intelligence is, perhaps to become heroes to an AI-frightened population, some experts would be better off seeking the convergence between human and artificial intelligence that can certainly work miracles hand in hand.

The role of AI in the Industrial Revolution

Alistair Nolan, a Senior Policy Analyst at the OECD, spoke about AI in the manufacturing sector. He emphasized that AI is now used in all phases of production, from industrial design to research. However, the rate of adoption of AI among manufacturers is low. This is a particular concern in a context where OECD economies have experienced a decline in the rate of labor productivity growth for some decades. Among other constraints, AI skills are everywhere scarce, and increasing the supply of skills should be a main public-sector goal. 

“All countries have a range of institutions that aim to accelerate technology diffusion, such as Fraunhofer in Germany, which operates applied technology centers that help test and prototype technologies. It is important that such institutions cater to the specific needs of firms that wish to adopt AI. Data policies, for instance, linking firms with data that they don’t know how to use to expertise that can create value from data is also important. This can be facilitated through voluntary data-sharing agreements that governments can help to broker. Policies that restrict cross-border flows of data should generally be avoided. And governments must ensure the right digital infrastructure, such as fiber-based broadband,” he said.

AI, its bias and the mainstream use

The AI Revolution is powerful, unstoppable, and affects every aspect of our lives.  It is fueled by data, and powered by AI practitioners. With great power comes great responsibility to bring trust, sustainability, and impact through AI.   

AI needs to be explainable, able to detect and fix bias, secure against malicious attacks, and traceable: where did the data come from, how is it being used?  The root cause of biased AI is often biased human decisions infused into historic data – we need to build diverse human teams to build and curate unbiased data.

Leading AI platforms offer capabilities for trust & security, low-code build-and-deploy, and co-creation, also lowering the barrier to entry with tools like AutoAI. Design Thinking, visualization, and data journalism are staples of successful AI teams. Dr. Susara van den Heever, Executive Decision Scientist and Program Director, IBM Data Science Elite, said that her team used these techniques to help James Fisher create a data strategy for offshore wind farming, and to convince stakeholders of the value of AI.

“AI will have a massive impact on building a sustainable world.  The team at IBM tackled emissions from the transport industry in a co-creation project with Siemens.  If each AI practitioner focuses some of their human intelligence on AI for Good, we will soon see the massive impact,” she says. 

The use of Data and AI in Healthcare 

Before we talk about how AI is changing healthcare, it is important to discuss the relevance of data in the healthcare industry. Bart De Witte, Founder HIPPO AI Foundation and a digital healthcare expert rightly says,

“Data isn’t a commodity, as data is people, and data reflects human life. Data monetization in healthcare will not only allow surveillance capitalism to enter into an even deeper layer of our lives. If future digital medicine is built on data monetization, this will be equivalent to the dispossession of the self. “

He mentioned that this can be the beginning of an unequal new social order, a social order incompatible with human freedom and autonomy. This approach forces the weakest people to involuntarily participate in a human experiment that is not based on consensus. In the long run, this could lead to a highly unequal balance of power between individuals or groups and corporations, or even between citizens and their governments. 

One might have reservations about the use of data in healthcare, but we cannot deny the contribution of AI to this industry. Tjasa Zajc, Business Development and Communications Manager at Better, emphasized “AI for increased equality between the sick and the healthy” in her talk. She noted that researchers are experimenting with AI software that is increasingly able to tell whether you suffer from Parkinson’s disease, schizophrenia, depression, or other types of mental disorders, simply from watching the way you type. AI-supported voice technologies are detecting our mood and helping with psychological disorders, and machine vision technologies are recognizing what’s invisible to the human eye. The artificial pancreas, a closed-loop system automatically measuring glucose levels and regulating insulin delivery, is making diabetes an increasingly easier condition to manage.

“While a lot of problems plague healthcare, at the same time, many technological innovations are improving the situation for doctors and patients. We are in dire need of that because the need for healthcare is rising, and the shortage of healthcare workers is increasing,” she said.

The Future of AI in Europe 

According to McKinsey, Europe’s potential to deliver on AI and catch up with the most AI-ready countries, such as the United States, and emerging leaders like China is large. If Europe on average develops and diffuses AI according to its current assets and digital position relative to the world, it could add some €2.7 trillion, or 20 percent, to its combined economic output by 2030. If Europe were to catch up with the US AI frontier, a total of €3.6 trillion could be added to collective GDP in this period.

Why are some companies absorbing AI technologies while most others are not? Among the factors that stand out are their existing digital tools and capabilities and whether their workforce has the right skills to interact with AI and machines. Only 23 percent of European firms report that AI diffusion is independent of both previous digital technologies and the capabilities required to operate with those digital technologies; 64 percent report that AI adoption must be tied to digital capabilities, and 58 percent to digital tools. McKinsey reports that the two biggest barriers to AI adoption in European companies are linked to having the right workforce in place. 

The European Commission has identified Artificial Intelligence as an area of strategic importance for the digital economy, citing its cross-cutting applications to robotics, cognitive systems, and big data analytics. In an effort to support this, the Commission’s Horizon 2020 programme includes considerable funding for AI, allocating €700M of EU funding specifically. The “Future of AI in Europe” panel was one of the most sought-after at the conference, featuring Eduard Lebedyuk, Sales Engineer at InterSystems; Alistair Nolan of the OECD; Nasir Zubairi, CEO at The LHoFT – Luxembourg House of Financial Technology; Taryn Andersen, President & co-founder at Impulse4women and jury member at the EIC SME Innovation Funding Instrument; and Dr. Fanny Nusbaum, Founder and Director of the PSYRENE Center (PSYchology, REsearch, NEurosciences), moderated by Elena Poughia, Founder & CEO of Data Natives.

AI and Ethics. Why all the fuss? 

Amidst all these innovations in AI affecting all sectors of the economy, the aspect that cannot and should not be forgotten is ethics in AI. A talk by Dr. Toby Walsh, Professor of AI at TU Berlin, emphasized the need to call out bad behavior when it comes to ethics and wrongs in the world of AI. The most fascinating statement of his talk was that the definition of “fair” itself is questionable: there are 21 definitions of fairness, and most are mutually incompatible unless the predictions are 100 percent accurate or the groups are identical. In artificial intelligence, maximizing profit will, again, give you a completely different solution, one that is unlikely to be seen as fair. Hence, while AI does jobs for us, it is important to question what is “fair” and how we define it at every step.

(The views expressed by the speakers at Data Natives 2019 are their own and the content of this article is inspired by their talks) 

Read a full event report on Data Natives 2019 here. 

“With the proliferation of digitized technologies, the public is becoming aware of data-collecting sensors & the concerns around them”
https://dataconomy.ru/2019/05/06/with-the-proliferation-of-digitized-technologies-the-public-has-become-increasingly-aware-of-the-omnipresence-of-data-collecting-sensors-its-concerns-%ef%bb%bf/ | Mon, 06 May 2019 13:06:40 +0000

How to implement an anonymous data collection scheme that allows the manufacturer to anonymously collect data from its end devices without knowing exactly which device it came from? Yes, this is one of the challenges for the second Blockchain Hackathon (part of LongHash Cryptocon Vol2) in Berlin on May 18-19 this year. More details here.

As an advantage for all developers, blockchain enthusiasts and crypto geeks who are aching to solve this challenge, here is an interview with Steven Pu, Founder of Taraxa, which defines anonymous data collection in detail, weighs its benefits (or lack thereof), and offers a few tips that might help in cracking this challenge.

Taraxa is a fast, scalable, and device-friendly public ledger designed to help IoT ecosystems become more trusted, autonomous, and valuable, with the mission to build IoT’s trust anchor. Taraxa is built from the ground up by a team of accomplished engineers and academics headquartered in Silicon Valley, hailing from prestigious academic institutions such as Stanford, Princeton, Brown, and Berkeley, all with a passion for enabling the machine-to-machine (M2M) economy of the future. Edited excerpts of the interview:

Blockchain sounds like a generic infrastructural technology, what about Taraxa? Is it IoT-specific?

A large subset of IoT applications are stateless. Data anchoring, for example, where IoT devices make periodic commitments into the blockchain about the data they’ve collected, is a stateless operation. This means each transaction is not dependent on past or future transactions. Our protocol has two layers, the top DAG layer and the bottom finalization layer. The DAG layer gives information about transaction inclusion, which means for an IoT device performing a stateless transaction, as long as the transaction is included, it is fine, since such transactions are not impacted by ordering. Our unique design allows IoT devices to obtain this information much faster and earlier than other protocols, in which inclusion, finalization and execution are all tightly bounded into a single step.
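To make the stateless-anchoring idea concrete, here is a minimal sketch in Go of a data-anchoring commitment; the struct layout, field names and the `NewAnchorTx` helper are illustrative assumptions, not Taraxa’s actual transaction format.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// AnchorTx is a hypothetical stateless anchoring transaction: the device
// commits only a digest of its collected data, so the transaction does not
// depend on any prior or future state.
type AnchorTx struct {
	DeviceID  string
	Timestamp int64
	DataHash  string // SHA-256 commitment to the raw sensor batch
}

// NewAnchorTx hashes a batch of sensor readings into a commitment.
// The raw data stays on the device; only the digest goes on-chain.
func NewAnchorTx(deviceID string, batch []byte) AnchorTx {
	sum := sha256.Sum256(batch)
	return AnchorTx{
		DeviceID:  deviceID,
		Timestamp: time.Now().Unix(),
		DataHash:  hex.EncodeToString(sum[:]),
	}
}

func main() {
	readings := []byte(`{"temp":[21.4,21.6,21.5]}`)
	tx := NewAnchorTx("sensor-42", readings)
	fmt.Printf("anchor: %+v\n", tx)
}
```

Because only the digest is committed and the transaction is independent of any prior state, inclusion is all the device needs to know, which is exactly the property the DAG layer exploits.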

Many IoT devices are resource-constrained and cannot run full nodes. That means they cannot independently store the entirety of the blockchain’s history, and they lack the computational resources to verify transactions or the bandwidth to synchronize with the network. These are usually called light nodes. Current light node designs rely completely on a full node to update their state, giving that specific full node the opportunity to deceive or corrupt any light node connected to it. Taraxa has a built-in mechanism whereby a light node can query a random subset of the network for a re-validation of what it has been told by the full node it usually communicates with, allowing it to remain far more independent and trustless than current conventional designs allow.
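A rough sketch of that random-subset re-validation, again in Go; the `FullNode` interface, the sample size `k` and the majority rule are assumptions made for illustration, not Taraxa’s actual protocol.

```go
package main

import (
	"fmt"
	"math/rand"
)

// FullNode abstracts any peer that can confirm whether a transaction hash
// is included in the ledger. Hypothetical interface for this sketch.
type FullNode interface {
	Confirms(txHash string) bool
}

// Revalidate asks k randomly chosen full nodes whether they agree with the
// answer the light node got from its usual peer, and accepts that answer
// only if a strict majority of the sampled nodes confirm it.
func Revalidate(peers []FullNode, txHash string, k int) bool {
	if k > len(peers) {
		k = len(peers)
	}
	agree := 0
	for _, i := range rand.Perm(len(peers))[:k] {
		if peers[i].Confirms(txHash) {
			agree++
		}
	}
	return agree*2 > k
}

// mockNode is a stand-in peer used only to exercise the sketch.
type mockNode struct{ honest bool }

func (m mockNode) Confirms(string) bool { return m.honest }

func main() {
	peers := []FullNode{mockNode{true}, mockNode{true}, mockNode{false}, mockNode{true}}
	fmt.Println("accepted:", Revalidate(peers, "0xabc", 3))
}
```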

Why is there a need for anonymized data collection?

With the rapid proliferation of digitized technologies, the public at large has become increasingly aware of the omnipresence of data-collecting sensors as well as concerned about how they’re being used. Recent scandals involving Facebook’s and Google’s mishandling of user data sparked concerns worldwide among the public as well as regulators. The EU’s General Data Protection Regulation (GDPR), which came into effect in May 2018, further placed privacy and data ownership at the center of civil discourse. These regulatory trends, however, are still extremely limited in scope in that they mostly require user consent upon visiting websites, which only acknowledges the problem without fundamentally solving it. These concerns are especially thorny in the case of IoT devices, as they have increasingly become embedded directly into our environments without our knowledge, tracking everything from location and movement to voice and video. Much of this also happens with numerous third parties whose involvement and activities are difficult to track, as well as across political jurisdictions, each with its own regulatory requirements, further complicating social concerns.

If IoT as a technology is to continue proliferation, it must address data privacy concerns head-on and provide socially-acceptable solutions to guarantee secure data ownership and usage without triggering innovation-killing regulatory backlashes.

Are there any successful machine learning applications for anonymized data collection?

The short answer is yes: any machine learning application can run on this type of data, since the data itself is in plain text.

There are two types of anonymity – the anonymization of data source, and the anonymization of the data itself. This challenge is the former, but I will talk about both.

Anonymizing the data source means exactly that – you don’t know where the data came from, but you know that it is real, valid data. In this case, it is simply raw data like anything else, and you may run any machine learning algorithms over them to build applications.

Anonymizing the data itself is much more complex. It usually is done through two methods, via software or hardware. The software method involves what’s called homomorphic encryption, which allows an algorithm to perform arbitrary operations directly on encrypted data, without knowing what that data is. Fully homomorphic encryption is incredibly slow, roughly 50,000 – 100,000 times slower than normal execution. The hardware solution involves trusted execution environment (TEE), which cordons off a section of a processor that requires specific permissions (via cryptographic signatures) to access, effectively preventing unauthorized or malicious programs from accessing restricted memory. Much of the key storage, signing & validation processes are also hard-wired into the hardware so that process is impossible to hack.


What are some of the examples of devices the manufacturer produces? Why do they have to be cryptographically-guaranteed?

Any device that generates data which may be sensitive.

A consumer example would be smart speakers that respond to your voice commands. One persistent concern is whether companies like Google or Amazon are recording all our conversations. They tell us no, but it’s difficult to tell for sure, and the machines often misinterpret conversations as commands, which results in large segments of these conversations being sent to central servers. While companies need to collect data in order to offer us services, that data does not need to tie directly into our personal identities. It’s OK to know that “user X just asked about ways to cure an STD”; it is not OK to know that “user X is John Smith living at 123 Main Street”. The membership proof ensures that companies can collect the necessary data to offer the right service while being unable to associate that data with a person or entity.

How can the end user stop even the anonymized data collection on demand?

This could easily be done if the manufacturers build in such functionality, which they will do if users become privacy-conscious enough to effect regulations that require such functionality to be built in. We are already seeing this happening across major software platforms; it is only a matter of time (a very brief amount of time) before hardware platforms come under the same regulatory standards.

How is the user informed about the data collection?

The device manufacturer needs to build in functionality that allows the users to monitor such data collection (see previous question).

What could also be done independently of the manufacturer is to use packet-sniffing software that analyzes real-time network traffic to understand what types of data are being sent and received. These types of software are usually used by network administrators or security professionals to protect their systems.

Does this challenge interest you? Apply here for the Hackathon before the 12th of May – the event is free for all developers who apply. Also, there is more: if you are a developer or aspiring entrepreneur in the blockchain/crypto space and want to know about the investment perspectives of top Asian & European funds in the blockchain segment, or about business use cases in real-world adoption, get your free tickets for Hash Talk, an afternoon-long summit focused on discussions and insights on investment, business, and tech in blockchain, curated and brought to you by LongHash Germany. More details here.

Call to all developers, programmers, entrepreneurs: Three challenges await you
https://dataconomy.ru/2019/05/02/call-to-all-developers-programmers-entrepreneurs-three-challenges-await-you%ef%bb%bf/ | Thu, 02 May 2019 12:30:03 +0000

Meet investors, Blockchain and crypto enthusiasts, a talent pool of developers and programmers  as they solve three Blockchain challenges over two days in Berlin. Here is why you should be a part of LongHash Cryptocon Vol2.

Berlin has been recognised as the cryptocurrency capital of Europe for more than half a decade. The city emerged as one of the first in Europe to accept digital currencies back in 2013 and the crypto revolution is now backed by over 100 blockchain companies based in Berlin. Jasmine Zhang, CEO, LongHash Germany, which is organising its second Blockchain Hackathon (part of LongHash Cryptocon Vol2) in the city on May 18-19 this year, rightly puts this in perspective, “Berlin, as many people have commented already, is a great place with infrastructure and talented, international people. We would like to leverage the strength and expertise we have from the East, and bridge with the West to make a positive impact on blockchain ecosystem. Our aim is to further accelerate the understanding and development of blockchain technology globally.”

LongHash is a platform for accelerating the development and understanding of blockchain technology. LongHash incubators provide a full range of support for start-ups working on blockchain-related projects.

As an early-stage blockchain investment and incubation firm, LongHash supports its portfolio companies long-term. Zhang says, “We are hosting different events, including hackathons, worldwide – in Germany, Japan and Vietnam since last year – to help their ecosystems grow. This edition’s three projects come from the U.S., China and Germany with big potential, and a healthy, strong developer community is what they are seeking at the moment; this belongs exactly to the post-investment management that LongHash is providing.”

Back in Berlin:  With more challenges and ETH prizes for developers and Blockchain Geeks

The first edition of the Hackathon took place during The LongHash Crypto Festival Berlin, between October 26 and October 29 last year, and promoted innovation among programmers, attracting participants from Asia, Eastern Europe and the US. This being the second edition, the competition will be more challenging yet more rewarding at the same time. Winners of the second edition of the hackathon have an opportunity to win up to 30 ETH in equivalent prizes. Here is a look at the categories:

  • Cybex Prize: 5 Eth
  • MXC Prize: 5 Eth equivalent amount of MXC Token
  • Taraxa Prize: 5 Eth

On top of this, one chosen winner will be awarded a €2,000-equivalent amount of VET powered by VeChain, and more prizes are to be announced soon!

The challenges have been carefully designed considering the needs of the Blockchain ecosystem and where the innovation is most desired. Here is a look:

Challenge 1: How to implement an Algo Order in Cybex Dex?

Cybex.io is a blockchain-based decentralized exchange that supports crypto trading. When a user intends to perform a large trade, it is useful to have an algorithm split the order into smaller slices and trade it over a longer period. This feature is referred to as an ‘Algo Order’ and is widely adopted in regular exchanges.

In a decentralized exchange, each sliced order must be signed by the user’s private key. This poses a new challenge for algo orders. In order to place orders automatically while keeping the private key safe, a user typically has to write their own program and run it on their own machine. This makes it difficult for normal users to use algo orders due to the lack of programming skills.

Design a solution that allows a normal user to execute and manage algo orders (a minimal order-slicing sketch follows the grading criteria below).

The following are some basic Algo order types:

The solution should use the Cybex API, which is available at the following locations:

Solutions will be graded on:

  • User interface friendliness – the ideal solution should be easy enough to attract people without programming skills.
  • Security – as trading involves using a private key, the management and storage of the key is a crucial consideration.
  • Framework coding quality.
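As the order-slicing sketch referenced above, here is a minimal Go example of a TWAP (time-weighted average price) schedule, one of the classic algo order types; `PlaceOrder` is a hypothetical stand-in for a signed Cybex order submission, which a real solution would have to perform securely.

```go
package main

import (
	"fmt"
	"time"
)

// PlaceOrder stands in for a signed Cybex order submission; in a real
// solution each slice would be signed with the user's private key before
// being sent to the exchange. Hypothetical function for this sketch.
func PlaceOrder(symbol string, qty float64) {
	fmt.Printf("%s placing %s slice of %.2f\n",
		time.Now().Format("15:04:05"), symbol, qty)
}

// TWAP splits totalQty into n equal child slices and submits one slice
// every interval: a classic time-weighted average price schedule.
func TWAP(symbol string, totalQty float64, n int, interval time.Duration) {
	slice := totalQty / float64(n)
	for i := 0; i < n; i++ {
		PlaceOrder(symbol, slice)
		if i < n-1 {
			time.Sleep(interval)
		}
	}
}

func main() {
	// Work a 1000-unit parent order as 10 slices, one every 200ms.
	TWAP("CYB/ETH", 1000, 10, 200*time.Millisecond)
}
```

A real entry would replace the fixed schedule with live market data and sign each slice with the user’s key, which is exactly the security consideration the grading criteria emphasize.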

Challenge 2: How do we automate the Smart Machine Bidding procedure for the LPWAN devices in order to reduce the costs of an IoT network?

The MXC Foundation focuses on connecting Low Power Wide Area Network (LPWAN) technology with the blockchain as an infrastructure for the Internet of Things (IoT). MXC automates machine-to-machine (M2M) transactions and provides a device data economy. The pricing policies for data transmissions through gateways in LPWAN are determined by MXC Smart Machine Bidding (SMB). In the SMB, the payments for using downlink/uplink LPWAN resources are determined by the bidding strategies provided by the device owners and the gateway owners. The following parameters are set by the device in the bidding strategies of the SMB:

  • max_bid: the maximum bidding price defined by the device owner; it is the upper payment threshold of the device (in MXC tokens) for the downlink request.
  • max_delay: the maximum acceptable delay (in seconds) for the packet to be sent. If max_delay is reached, the packet will not be sent and the cloud will notify the client about the rejection of the downlink request.
  • accepted_delay: the tolerable delay defined by the client (or device owner) to indicate the time period a packet is willing to wait for the lowest possible price.
  • The lowest possible bidding price is the current lowest bid of the available gateways for the device.

Each gateway puts a value on using its resources, called min_bid. In order to use a gateway’s downlink resource, the device should bid at least the min_bid value. If multiple devices want to use a gateway’s downlink resource at the same time, the one that defines the higher max_bid will be the winner. More details about the bidding procedure are provided in the MXC Smart Machine Bidding white paper (available in the repository stated below and on the MXC website). Based on the downlink/uplink data flow of the device owners and their requests, the MXC cloud can provide data-driven automated smart machine bidding. The max_delay parameter is mainly related to the application and the priority of the data, which is known by the device owner/client and defined by the requirements of the application the device provides.

On the other hand, the accepted_delay and max_bid parameters should be provided by the device owner (or the client) in some way that balances the priority of the related uplink/downlink data against the corresponding data transmission cost. These two parameters (accepted_delay and max_bid) can be set automatically for the device owner to strike this balance. Your task is to develop an automated solution (e.g. based on machine learning methods, dynamic algorithms or a greedy algorithm) which provides near-optimum values for the accepted_delay and max_bid parameters to reduce the total cost of the LPWAN for the user; a toy policy sketch follows the repository link below.

In the input file, you will receive max_delay, the payment limit the data owner wants to pay in total (for all of the transactions), and the downlink resource usage history of the device and other devices. Your program (preferably in Go) will be evaluated on output efficiency (based on test cases from an LPWAN data simulation) and on the explanation of your solution (provided in your documentation). Note that you can provide multiple solutions and implement as much as you want/can. A sample input file and its details are provided in the repository below:

https://gitlab.com/mxc-hackathons/smb
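To make the interplay of max_bid, accepted_delay and max_delay concrete, here is a toy online policy in Go; the quote format and the policy itself are illustrative assumptions, not the SMB reference algorithm or the expected challenge solution.

```go
package main

import "fmt"

// quote is the lowest gateway min_bid visible at a given second.
type quote struct {
	second int
	minBid float64 // in MXC tokens
}

// sendAt decides tick by tick when to transmit: before accepted_delay it
// only transmits when the current price is the best seen so far; after
// accepted_delay it takes the first price under max_bid; past max_delay
// the downlink request is rejected.
func sendAt(quotes []quote, maxBid float64, acceptedDelay, maxDelay int) (quote, error) {
	lowestSeen := maxBid + 1
	for _, q := range quotes {
		if q.second > maxDelay {
			break
		}
		affordable := q.minBid <= maxBid
		if affordable && (q.second >= acceptedDelay || q.minBid <= lowestSeen) {
			return q, nil
		}
		if q.minBid < lowestSeen {
			lowestSeen = q.minBid
		}
	}
	return quote{}, fmt.Errorf("no affordable gateway before max_delay")
}

func main() {
	quotes := []quote{{0, 1.2}, {1, 0.9}, {2, 1.1}, {3, 0.8}}
	// max_bid 1.0 MXC, accepted_delay 2s, max_delay 3s.
	fmt.Println(sendAt(quotes, 1.0, 2, 3))
}
```

An actual submission would tune accepted_delay and max_bid themselves from the usage history in the sample input, rather than hard-coding them as here.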

Challenge 3: How to implement an anonymous data collection scheme that allows the manufacturer to anonymously collect data from its end devices without knowing exactly which device it came from?

Data privacy and security have become an increasingly urgent concern worldwide. Large corporations cannot simply collect data from their end users without their knowledge or explicit consent. However, it would be nice if a manufacturer could still collect data generated by its devices without user consent, but do so in a way that is cryptographically guaranteed to be anonymous. In this scenario, the manufacturer would like to collect data from anonymous devices, but it would want to be sure the data it is receiving is not garbage and is guaranteed to have come from a device it has manufactured. The end user would not mind that their device’s data is being harvested as long as it is impossible to trace that device’s data directly to the end user’s identity. This problem is broadly defined as direct anonymous attestation, and more narrowly defined as a membership proof. An earlier paper (https://infoscience.epfl.ch/record/128718/files/CCS08.pdf) has been published, with an open-source implementation available (https://github.com/ing-bank/zkproofs).

Assumptions: We assume that the manufacturer has embedded within each device it makes a pair of asymmetric encryption keys, and that hacking a device on-premise to obtain these keys is prohibitively expensive. We further assume that the manufacturer is willing to disclose openly the public keys associated with all of its products.

The challenge: HOW to implement an anonymous data collection scheme that allows the manufacturer to anonymously collect data from its end devices without knowing exactly which device it came from? (A message-flow sketch follows the bonus items below.)

  • A device can prove to the manufacturer that it is indeed one of the devices it has created
  • The manufacturer will construct a temporary X.509 certificate so that a proof does not need to be provided every time – temporary because the end user might want to stop even anonymous data collection.

Bonus:

  • Manufacturer & device both anchor the challenge & proof onto the blockchain
  • Device anchors its data transmissions onto the blockchain with the temporary certificate
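As a rough picture of the message flow such a scheme could take, here is a Go skeleton with the zero-knowledge membership proof deliberately stubbed out; all type and function names are hypothetical, and the cryptographic core (e.g. the zkproofs construction linked above) is precisely the part the challenge asks you to supply.

```go
package main

import "fmt"

// Challenge is a fresh nonce from the manufacturer, preventing replay.
type Challenge struct{ Nonce []byte }

// Proof carries an opaque ZK proof that the signer's key belongs to the
// manufacturer's published key set, without revealing which key it is.
type Proof struct {
	Nonce []byte
	Blob  []byte
}

// ProveMembership is a placeholder: a real device would produce a
// zero-knowledge membership proof over its embedded private key.
func ProveMembership(c Challenge, devicePriv []byte) Proof {
	return Proof{Nonce: c.Nonce, Blob: []byte("zk-proof-placeholder")}
}

// VerifyMembership is the manufacturer-side counterpart; on success it
// would issue the short-lived X.509 certificate described above, so the
// proof is not re-run on every data upload.
func VerifyMembership(p Proof, publishedKeys [][]byte) bool {
	return len(p.Blob) > 0 // placeholder check only
}

func main() {
	c := Challenge{Nonce: []byte("random-nonce")}
	p := ProveMembership(c, []byte("device-secret"))
	fmt.Println("accepted, issue temp cert:", VerifyMembership(p, nil))
}
```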

Does this interest you as well? Apply here for the Hackathon before the 12th of May – the event is free for all developers. Also, there is more: if you are a developer or aspiring entrepreneur in the blockchain/crypto space and want to know about the investment perspectives of top Asian & European funds in the blockchain segment, or about business use cases in real-world adoption, get your free tickets for Hash Talk, an afternoon-long summit focused on discussions and insights on investment, business, and tech in blockchain, curated and brought to you by LongHash Germany. More details here.

“LPWAN can provide a cost-effective network for IoT”
https://dataconomy.ru/2019/05/02/lpwan-can-provide-a-cost-effective%e2%80%8b-network-for-iot/ | Thu, 02 May 2019 10:31:31 +0000

How do we automate the Smart Machine Bidding procedure for the LPWAN devices in order to reduce the costs of an IoT network? Yes, this is one of the challenges for the second Blockchain Hackathon (part of LongHash Cryptocon Vol2) in Berlin on May 18-19 this year. More details here.

As an advantage for all developers, blockchain enthusiasts and crypto geeks who are aching to solve this challenge, here is an interview with Aslan Mehrabi, Data Scientist at MXC Foundation, which defines LPWAN in detail and the IoT devices that operate through it, and maybe offers a few tips that might help in cracking this challenge.

The MXC Foundation focuses on connecting Low Power Wide Area Network (LPWAN) technology with the blockchain as an infrastructure for the Internet of Things (IoT). MXC automates machine-to-machine (M2M) transactions and provides a device data economy. The pricing policies for data transmissions through gateways in LPWAN are determined by MXC Smart Machine Bidding (SMB). In the SMB, the payments for using downlink/uplink LPWAN resources are determined by the bidding strategies provided by the device owners and the gateway owners. Your task is to develop an automated solution (e.g. based on machine learning methods, dynamic algorithms or a greedy algorithm) which provides near-optimum values for the accepted_delay and max_bid parameters to reduce the total cost of the LPWAN for the user. Edited excerpts of the interview:

Please share some background on the MXC Foundation and its focus.

MXC is creating a global data highway, which automates machine to machine (M2M) transactions, decentralizes big data and enables a device data economy. With the introduction of the Machine Xchange Coin (MXC), adopters of LPWAN data technologies trade data access or sensor data for MXC.

The MXC global data highway is automated using smart contracts running on the Machine Xchange Protocol (MXProtocol). Based in Berlin, MXC is a non-profit foundation promoting the global adoption and implementation of LPWAN data technology.

At MXC, we believe that MXC, paired with LPWAN, is the next step in the fourth industrial revolution: we’re actively enabling smart cities and providing public access to big data. By introducing the Machine Exchange Coin and the Machine Exchange Protocol, MXC gives everyone a chance to profit from a more balanced and intelligent infrastructure data network. This is why MXC is the future of IoT.

What is a “Low Power Wide Area Network”?

LPWAN stands for ‘Low Power, Wide Area Network’. It can be used to realize the Internet of Things (IoT). LPWAN is a type of wide area network that allows radio-equipped devices to communicate. WANs are simply telecommunications networks: the system of cell phone towers and 5G you rely on every day is a WAN, and so is the internet, if you want to get technical. You could form an Internet of Things WAN using 5G technology, or even landline broadband. However, unless your device has a mains plug, you’re going to run out of battery power very fast that way. Instead, the future of large-scale, low-maintenance, widely dispersed IoT applications will be found in LOW POWER wireless WANs – LPWANs.

What are the examples of IoT devices that operate through MXC or LPWAN?

Temperature sensors, smart locks, movement sensors, fire alarms, etc.

Where will the data come from?

In an LPWAN-based IoT network, the devices (also known as sensors or nodes) send the data they have produced to their corresponding server and receive commands from it. This creates the flow of data that LPWAN makes possible.

How are the prices determined?

By the bidding procedure defined in the MXC SMB white paper.

Any examples of device owners/ gateway owners?

If I have an LPWAN device (a temperature sensor, smart lock, movement sensor, etc.), I am a device owner. LPWAN gateways are needed in order to send data to and receive data from the LPWAN devices. Gateway owners are the people or companies who own and maintain gateways.

How fast can the machine operate? How much data can it process and transmit?

It depends on what you mean by “the machine”. For LPWAN devices, different processing and data transmission speeds can be provided based on the application and the firmware.

What are the main industries where such a technology is applicable?

Smart cities, smart homes and, in general, the future world will use it. For more information, take a look at https://www.matchx.io/solutions/

Why focus on Go as the programming language?

Golang is the preferable language. It’s convenient, fast, and secure to write code with Golang, and it provides cross-platform support. Golang is currently one of the fastest-growing programming languages in the software industry. Its speed, simplicity, and reliability make it the perfect choice for all kinds of development.

How is cost being defined exactly if the value of the data is set by the owner?

In this task (and in the SMB generally), we are investigating the data transmission cost, which should be paid by the device owner to the LPWAN resource providers (e.g. gateway owners). The value of the data is set by the data owner in the MXC data marketplace, which is managed in other ways and is not related to the SMB.

Tell us a few more applications of the LPWAN?

LPWAN can provide a cost-effective network for IoT. The battery usage of LPWAN devices is super low: the devices are able to send and receive data for several years on a single battery. These are just a few of the reasons to say LPWAN empowers the IoT.

Any extra tips for the developers who are working on this task?

A really good implementation of this task is very important because it will help to optimize expenses for end-users.

Does this challenge interest you? Apply here for the Hackathon before the 12th of May – the event is free for all developers who apply. Also, there is more: if you are a developer or aspiring entrepreneur in the blockchain/crypto space and want to know about the investment perspectives of top Asian & European funds in the blockchain segment, or about business use cases in real-world adoption, get your free tickets for Hash Talk, an afternoon-long summit focused on discussions and insights on investment, business, and tech in blockchain, curated and brought to you by LongHash Germany. More details here.

“With Algo-Trading, the market will have good liquidity & higher profits for users from the trading process.”
https://dataconomy.ru/2019/05/01/if-more-normal-users-get-into-algo-trading-the-market-will-have-good-liquidity-higher-profits-for-users-from-the-trading-process/ | Wed, 01 May 2019 09:29:49 +0000

How to implement an Algo Order in Cybex Dex? Yes, this is one of the challenges for the second Blockchain Hackathon (part of LongHash Cryptocon Vol2) in Berlin on May 18-19 this year. More details here.

As an advantage for all developers, blockchain enthusiasts and crypto geeks who are aching to solve this challenge, here is an interview with YanFeng Chen, Co-Founder of Cybex.io, where he defines algo trading in detail and its benefits, and shares a few tips that might help in cracking this challenge.

Cybex.io is a blockchain-based decentralized exchange that supports crypto trading. When a user intends to perform a large trade, it is useful to have an algorithm split the order into smaller slices and trade it over a longer period. This feature is referred to as an ‘Algo Order’ and is widely adopted in regular exchanges. In a decentralized exchange, each sliced order must be signed by the user’s private key. This poses a new challenge for algo orders. In order to place orders automatically while keeping the private key safe, a user typically has to write their own program and run it on their own machine. This makes it difficult for normal users to use algo orders due to the lack of programming skills. The challenge expects you to “Design a solution that allows a normal user to execute and manage Algo orders.” Below are edited excerpts from the interview:

What is an Algo Order and why do you use it? Examples of regular exchanges?

Algorithmic trading uses a computer program that follows a defined set of instructions (an algorithm) to place a trade. The trade, in theory, can generate profits at a speed and frequency that is impossible for a human trader. We call these kinds of orders ‘Algo Orders’.

Algo-trading provides the following benefits:

  • Trades are executed at the best possible prices.
  • Trade order placement is instant and accurate (there is a high chance of execution at the desired levels).
  • Trades are timed correctly and instantly to avoid significant price changes.
  • Reduced transaction costs.
  • Simultaneous automated checks on multiple market conditions.
  • Reduced risk of manual errors when placing trades.
  • Algo-trading can be backtested using available historical and real-time data to see if it is a viable trading strategy.
  • Reduced possibility of mistakes by human traders based on emotional and psychological factors.

An example for Algo Trading in regular exchange:

Suppose a trader follows these simple trade criteria: 

  • Buy 50 shares of a stock when its 50-day moving average goes above the 200-day moving average. (A moving average is an average of past data points that smooths out day-to-day price fluctuations and thereby identifies trends.)  
  • Sell shares of the stock when its 50-day moving average goes below the 200-day moving average.

Using these two simple instructions, a computer program will automatically monitor the stock price (and the moving average indicators) and place the buy and sell orders when the defined conditions are met. The trader no longer needs to monitor live prices and graphs or put in the orders manually. The algorithmic trading system does this automatically by correctly identifying the trading opportunity.
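For illustration, here is a minimal Go version of that crossover rule; the `sma` and `signal` helpers and the toy price series are assumptions for the demo, not Cybex code, and a real strategy would use 50- and 200-day windows over live market data.

```go
package main

import "fmt"

// sma returns the simple moving average of the last n prices; callers must
// ensure the series has at least n points.
func sma(prices []float64, n int) float64 {
	sum := 0.0
	for _, p := range prices[len(prices)-n:] {
		sum += p
	}
	return sum / float64(n)
}

// signal applies the crossover rule from the example above: "buy" when the
// fast average crosses above the slow one, "sell" on the opposite cross,
// "hold" otherwise.
func signal(prices []float64, fast, slow int) string {
	prev := prices[:len(prices)-1]
	prevFast, prevSlow := sma(prev, fast), sma(prev, slow)
	curFast, curSlow := sma(prices, fast), sma(prices, slow)
	switch {
	case prevFast <= prevSlow && curFast > curSlow:
		return "buy"
	case prevFast >= prevSlow && curFast < curSlow:
		return "sell"
	default:
		return "hold"
	}
}

func main() {
	// Tiny windows (2 and 4) keep the demo data short; the example in the
	// text would use 50- and 200-day windows.
	prices := []float64{12, 11, 10, 9, 10, 12}
	fmt.Println(signal(prices, 2, 4)) // prints "buy": the fast average just crossed above
}
```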

Why did you choose Cybex.io as the blockchain base?

At present, more and more decentralized exchanges are coming out. How should a trader choose a trading platform? Normally we should consider:

  • The security level of assets.
  • Whether the liquidity is good enough.
  • What kind of rewards there are for market makers.
  • Whether there is a set of easy-to-use APIs.

Cybex is positioned as ‘a highly efficient decentralized trading system’ that provides organizations, teams, and individual users with various easy-to-use trading scenarios and business logic implementation.

Cybex also includes a special real-time order-matching engine. It can provide real-time market data, the fastest placement and cancellation of orders and instant notice of confirmed transactions, achieving the highest trading performance in the industry, which catches up with that of centralized exchanges.

Why do you want normal users to use Algo orders?

Algo trading can give users many more benefits than trading manually. If we can provide suitable tools for normal users, we can attract more users into the ecosystem.

Does this challenge require front-end or back-end developers?

It may need both front-end and back-end developers: we may need to provide user-friendly front-end tools, or develop components that implement some typical algorithms on different development platforms.

What is the current status quo of Algo trading? Are there no user-friendly solutions?

There are centralized solutions, i.e., the centralized exchange takes control of your money (including the private key), but very few decentralized solutions – hence the challenge.

What programming skills are required for Algo orders?

The programming skill is not the most important part; any program is fine. The key problem to solve is how to come up with a program that can send orders on behalf of the user while keeping the private key safe. See the ‘security requirements’ answer below.

Why is Longhash interested in this challenge?

LongHash is a platform for accelerating the development and understanding of blockchain technology. We are interested in all new technologies and solutions in the blockchain area. We think that if there is a good way to invite normal users into the algo-trading area, it will push the whole industry’s development forward tremendously: the market will have good liquidity, and normal users may gain more profit from the trading process.

What are the security requirements?

We’d like to have a program that is only accessible to the user. As the program owns the private key of the user, it is critical that no one else can access it. For example, a straightforward solution involves running a program on a Linux server, but that normally involves a system admin and is therefore not ideal.

What is an example of good framework coding quality, and why?

  • Configurability – key params are configurable.
  • Modularity – the separation of sub-modules.
  • Extensibility – new modules can be added with few changes to the core part.
  • Security – keep the private key safe.

Does this challenge interest you? Apply here for the Hackathon before the 12th of May – the event is free for all developers who apply. Also, there is more: if you are a developer or aspiring entrepreneur in the blockchain/crypto space and want to know about the investment perspectives of top Asian & European funds in the blockchain segment, or about business use cases in real-world adoption, get your free tickets for Hash Talk, an afternoon-long summit focused on discussions and insights on investment, business, and tech in blockchain, curated and brought to you by LongHash Germany. More details here.

15 GLOBAL INFLUENCERS IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
https://dataconomy.ru/2018/09/07/15-global-influencers-in-artificial-intelligence-and-machine-learning/ | Fri, 07 Sep 2018 09:58:45 +0000

It’s easier to get a job if you are an Artificial Intelligence (AI) or Machine Learning (ML) ‘expert’, and investors want to bet on companies which use the words AI or ML in their mission statements. And why should that not be the case? McKinsey estimates AI techniques have the potential to create between $3.5T and $5.8T in value annually across nine business functions in 19 industries. The study equates this value-add to approximately 40% of the overall $9.5T to $15.4T annual impact that could be enabled by all analytical techniques.

In the race for fame, are you coming across too many self-proclaimed influencers in the field of AI and ML lately? Don’t worry, you are not alone in this. The internet is full of noise when it comes to information on AI and ML. I have a hard time choosing which blog posts to read, which Twitter feeds to pay attention to and which Artificial Intelligence or Machine Learning white paper to download for my next read. I have lost count of the number of article submissions we at Dataconomy Media get on these hot topics.

The line between the two terms AI and ML is blurring and they are used interchangeably today. It’s hard for one to exist without the other in some cases, though the technical definition of the two is different. Machine Learning is more specific in nature and is a branch or subset of AI. Machine Learning is based on the idea that we can build machines to process data and learn on their own, without our constant supervision. On the other hand, professor John McCarthy defines Artificial Intelligence as the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

I compiled a list of top influencers in the fields of AI and ML whom I recommend following, not just to stay updated and informed about the two fields, but also to learn about AI and ML. I included academics, researchers, entrepreneurs, investors and authors. There are a few parameters I kept in mind. Firstly, these are people with a huge social media following whose independent voice is not directly linked to the companies they work in or founded. There are indeed experts in AI and ML who are not all over social media, but that requires a separate list from us. Secondly, I looked at the consistency of their social media presence over the last three years and tried to keep the list geographically well-spread, because each part of the world is contributing to the growth of AI and ML. The order of listing does not represent a rank. Here we go:

Andrew Ng

Andrew Yan-Tak Ng is a British-born Chinese-American computer scientist and entrepreneur. Ng co-founded and led Google Brain and was formerly VP & Chief Scientist at Baidu, building the company’s Artificial Intelligence Group into a team of several thousand people. He is an adjunct professor (formerly associate professor and Director of the AI Lab) at Stanford University. Ng is also an early pioneer in online learning, which led to his co-founding of Coursera.

Twitter: @AndrewYNg


Fei-Fei Li

Fei-Fei Li, who publishes under the name Li Fei-Fei, is an Associate Professor of Computer Science at Stanford University. She is the director of the Stanford Artificial Intelligence Lab (SAIL) and the Stanford Vision Lab. In 2017, she co-founded AI4ALL, a nonprofit working to increase diversity and inclusion in artificial intelligence. She works in the areas of computer vision and cognitive neuroscience.


Twitter: @drfeifei


Martin Ford

Martin Ford is a futurist and author focusing on the impact of Artificial Intelligence and robotics on society and the economy.  He has written two books on technology. His most recent book, Rise of the Robots: Technology and the Threat of a Jobless Future (2015), was a New York Times bestseller and won the £30,000 Financial Times and McKinsey Business Book of the Year Award in 2015. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009) also dealt with the effects of automation and the potential for structural unemployment and increasing inequality.

Twitter: @MFordFuture


Nathan Benaich

Nathan is an investor focused on intelligent systems and data-driven companies. He runs the Research and Applied AI Summit and London.AI, which accelerate the science and applications of AI. Nathan is also an Advisor at TwentyBN, a video understanding company. He earned a PhD at the intersection of computational and experimental cancer biology from the University of Cambridge.


Twitter: @NathanBenaich


Joanna Bryson

Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath. She has broad academic interests in the structure and utility of intelligence, both natural and artificial.  Venues for her research range from Reddit to Science.  She is best known for her work in systems AI and AI ethics, both of which she began during her PhD in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics.  At Bath, she founded the Intelligent Systems research group and heads their Artificial Models of Natural Intelligence.

Twitter: @j2bryson


Soumith Chintala

Soumith Chintala is a Researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games and large-scale high-performance deep learning. Prior to joining Facebook in August 2014, he worked at MuseAmi, where he built deep learning models for music and vision targeted at mobile devices. He holds a Masters in CS from NYU, and spent time in Yann LeCun’s NYU lab building deep learning models for pedestrian detection, natural image OCR, and depth images, among others.

Twitter: @soumithchintala


The CyberCode Twins

America and Penelope Lopez are known as “The CyberCode Twins”. As Latina twin sisters born and raised in East Los Angeles, they have travelled to many cities and received various awards in tech competitions and hackathons such as the NASA International SpaceApps Challenge, AT&T Developer Summit, HackForLA, IBM Global Mobile Innovators Challenge and many more. Now, they are on a mission to make communities safer through wearable tech and mobile apps.

Twitter: @cybercodetwins


David Kenny

David Kenny is the Senior Vice President of IBM’s Watson & Cloud platform. He was formerly the CEO of The Weather Company, which was acquired by IBM in 2016. Kenny replaced Mike Kelly at the Weather Company in January 2012. He was also the Chairman of the board. He was the president of Akamai Technologies and resigned from this position on October 26, 2011. Before joining Akamai, Kenny worked in digital advertising for Publicis Groupe S.A.’s VivaKi.

Twitter: @davidwkenny


Elon Musk

Elon Reeve Musk is a business magnate, investor and engineer. He is the founder, CEO, and lead designer of SpaceX, CEO, and product architect of Tesla, Inc.; co-founder and CEO of Neuralink; and co-founder of PayPal. In December 2016, he was ranked 21st on the Forbes list of The World’s Most Powerful People. As of August 2018, he has a net worth of $20.2 billion and is listed by Forbes as the 46th-richest person in the world.

Twitter: @elonmusk


Gary Marcus

Gary F. Marcus is a scientist, author, and entrepreneur. His research focuses on natural and artificial intelligence. Marcus is a Professor in the Department of Psychology at New York University and was Founder and CEO of Geometric Intelligence, a machine learning company later acquired by Uber. As an author, his books include Guitar Zero, which appeared on the New York Times Bestseller list, and Kluge: The Haphazard Construction of the Human Mind, a New York Times Editors’ Choice.

Twitter: @GaryMarcus


Mike Tamir

Mike serves as Head of Data Science at Uber ATG and lecturer for the UC Berkeley iSchool Data Science master’s program. Mike has led several teams of data scientists in the Bay Area as Chief Data Scientist for InterTrust and Takt and Director of Data Sciences for MetaScale; as Chief Science Officer for Galvanize, he oversaw all data science product development and created the MS in Data Science program in partnership with UNH.

Twitter: @MikeTamir


Chris Messina

Christopher Reaves Messina is an American technology evangelist and an advocate for open source and open standards. Messina is best known for proposing the use of the hash character (#) on Twitter as a way of grouping messages, inspired by the use of the hashtag in Internet Relay Chat (IRC). He was Developer Experience Lead at Uber from 2016 to 2017. Messina is also known for his involvement in helping to create the BarCamp, Spread Firefox, and coworking movements, and he is an active proponent of microformats and OAuth.

Twitter: @chrismessina


Kirk Borne

Dr. Kirk Borne is the Principal Data Scientist and an Executive Advisor at global technology and consulting firm Booz Allen Hamilton based in McLean, Virginia USA (since 2015). In those roles, he focuses on applications of data science, data analytics, data mining, machine learning, machine intelligence, and modeling across a wide variety of disciplines. He also provides leadership and mentoring to multi-disciplinary teams of scientists, modelers, and data scientists; and he consults with numerous external organizations, industries, agencies, and partners in the use of large data repositories and machine learning for discovery, decision support, and innovation. 

Twitter: @KirkDBorne


Alex Champandard

Alex is an Artificial Intelligence expert who has worked in the field for almost 20 years, on both the design and technical sides. He is currently a co-founder and managing director at creative.ai, a company building better tools for visual designers and artists. He has worked in the simulation and computer entertainment industries for many years, applying classical AI algorithms to interactive domains, making tools so that designers are able to use them, and optimizing them to run in real time on consumer hardware. He has also co-organized the largest conference worldwide dedicated to AI in the Creative Industries (nucl.ai) and over the years has interviewed hundreds of experts in the field live, as part of a continuous online learning community (pre-MOOCs).

Twitter: @alexjc


Spiros Margaris

Spiros Margaris is a venture capitalist and founder of Margaris Ventures. He is the only person to be ranked the global No. 1 FinTech, Artificial Intelligence (AI) and Blockchain influencer by Onalytica (05/2018) in all three categories. He is a speaker at international FinTech and InsurTech conferences, and he publishes articles on his innovation proposals and thought leadership. He published an AI white paper, “Machine learning in financial services: Changing the rules of the game,” for the enterprise software vendor SAP. He is a senior advisor at wefox Group (wefox & ONE), SparkLabs Group, Arbidex, Lodex, BlockLoan, Datametrex AI, Yield Growth Corp., kapilendo, moneymeets and at the F10 Fintech Incubator and Accelerator.

Twitter: @SpirosMargaris


]]>
https://dataconomy.ru/2018/09/07/15-global-influencers-in-artificial-intelligence-and-machine-learning/feed/ 0
What Does Trust Mean in IoT? – IoT-EPI Challenge https://dataconomy.ru/2017/03/22/trust-mean-iot-iot-epi-challenge/ https://dataconomy.ru/2017/03/22/trust-mean-iot-iot-epi-challenge/#respond Wed, 22 Mar 2017 17:20:35 +0000 https://dataconomy.ru/?p=17596 On a sunny Friday morning, the IoT-EPI Challenge started bright and early at 9 AM (which is quite early, for Berlin standards) with an introductory talk for the participants and the media to get the rundown of the day. This wasn’t a typical idea-hackathon. Collaboration was key. The goal was to work on challenges in […]]]>

On a sunny Friday morning, the IoT-EPI Challenge started bright and early at 9 AM (which is quite early by Berlin standards) with an introductory talk for the participants and the media to get the rundown of the day.

This wasn’t a typical idea-hackathon. Collaboration was key. The goal was to work on challenges in the fields that the projects are already active in, with an emphasis on integrating their existing technology into the solutions, and exchanging ideas with the mentors (who were IoT-EPI members). Around 30 participants from 12 teams had a go at one of three challenges – ‘Trust’, ‘Mobility’, and ‘Retail’. The Trust challenge was run by INTERIoT and bIoTope; Mobility was run by symbIoTe; and Retail by BIGIoT, TagITSmart!, and AGILE.

The challenge wasn’t so much about developing technical frameworks, because the projects were building and providing those themselves. The purpose of the challenge was motivating teams to create feasible, applicable, business ideas that could turn into real-world solutions. How did the judges pick the winners? Teams could get a maximum of 9 points, divided into three categories:

  1. Feasibility
  2. Team dynamic
  3. Potential for future collaboration with IoT EPI platforms

Trust

InterIoT and bIoTope were the mentors of the 6 teams that participated in this challenge. It was the most sought-after challenge, with the highest number of applicants. In it, the teams were confronted with an issue that might sound abstract at first: what does ‘trust’ mean for IoT? Trusting platforms? Trusting partners? Trusting technologies? Trusting data? Trusting people? In this challenge, participants could choose between two scenarios:

  • ‘Port’, where 2 teams were tasked to find solutions for trust issues among different IoT platforms responsible for handling different day-to-day elements of port functioning e.g. port authority, cargo loaders, docks, etc;
  • and ‘University’, where 4 additional teams worked on the same task, but applied to different day-to-day elements of identity-based university benefits, e.g. library access, food court access, etc.

Mobility

The Mobility challenge was unlike the Trust challenge, in that it gave participants some more freedom to experiment, since there were no defined scenarios within the smart mobility topic. SymbIoTe mentored three teams, two of which were already established startups – Innroute (Spain), and Inovatica (Poland).

Retail

The Retail challenge had the most mentoring projects (BIGIoT, TagITSmart!, and AGILE) and was similar to the Mobility challenge in its experimental freedom. It was also the one where the jury found the winner of the overall IoT-EPI Challenge. One team, led by Rahul Tomar, focused on developing a solution to urban food waste, and the other, thingk-design, built an IoT solution for tool sharing.

To sum up the day’s work, each team had 3 minutes to pitch in front of an audience of other teams, mentors, project leaders, media, and IoT-EPI. The only rule – no self-promotion. This was all about the problems and their solutions.

“The challenge wasn’t only exhilarating because of the time limit. It was exciting because of the fact that we get to work with something new, and with great technology”.

Since a lion’s share of participants have experience (or are active) in startups and pitching, the round was very entertaining, lightning fast, and had some truly excellent speakers. This, of course, did not make the jury’s decision any easier. Once the pitch round was done, it was time for the judges to decide who won each category, and which team was the best overall.


In the end, the winners were the proposals that made the smartest use of IoT to solve the task at hand; these were the projects the judges found to have the highest potential for ‘real-life’ success.

If you’ve been reading carefully, you’ll know by now that thingk-design was the overall winner of the IoT-EPI Challenge 2017! They are developing toolstation, a smart sharing system which provides professional-grade tools for everyone, round the clock. It provides an ecosystem including manufacturers of tools and wear parts; a social network for DIY instructions with crowd-sourced material such as videos, manuals, etc.; a cross-selling platform for construction materials; as well as a smartphone app for end users. This was the team that, on the one hand, could leverage the most synergies with existing IoT-EPI projects and, on the other, offered to seamlessly integrate BIGIoT‘s, TagITSmart!‘s, and AGILE‘s technology.

Open doors for future collaboration

The work of other teams did not go unnoticed. Several runners-up across categories were encouraged to further develop their projects and make use of the resources that are available through the various Open Calls throughout the year.

“We’ve had a great time in Berlin, gathering feedback for our projects, making new connections, and getting such amazing mentorship from the IoT-EPI projects. It’s really exciting to be challenged in this way.”

IoT brings accessible solutions to day-to-day problems, bringing high-tech to the home and to society in general. The efforts to make the IoT-connected world a reality are multilateral and therefore based on collaboration and trust, which is the basis of every long-lasting, fruitful, and productive relationship.

The latter is both what the IoT-EPI Challenge (and the IoT-EPI Week as a whole) is all about, and exactly what came out of it: the successful implementation of a “from-lab-to-market” approach that enabled collaboration amongst the IoT-EPI projects, as well as future collaboration with ecosystem partners such as entrepreneurs, developers, corporates, VCs and other multipliers in the field of IoT. The projects’ technologies, paired with the teams’ curiosity and willingness to join forces to build viable solutions, were, in the end, the driving forces behind the event’s atmosphere of trust, collaboration, and boundless potential for open innovation.

Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2017/03/22/trust-mean-iot-iot-epi-challenge/feed/ 0
Data Around the World – Part VIII: The End of the Tour https://dataconomy.ru/2016/11/04/data-around-world-part-viii-end-tour/ https://dataconomy.ru/2016/11/04/data-around-world-part-viii-end-tour/#respond Fri, 04 Nov 2016 08:00:31 +0000 https://dataconomy.ru/?p=16801 The STORM team is back home. In 80 days, they have traveled the earth, showing that electric driving is possible – and very cool. They showed that a strong team can accomplish amazing things and change the world, by building their own bike and traveling around the world with it. Digital transformation The STORM team has […]]]>

The STORM team is back home. In 80 days, they have traveled the earth, showing that electric driving is possible – and very cool. They showed that a strong team can accomplish amazing things and change the world, by building their own bike and traveling around the world with it.

Digital transformation

The STORM team has made digital transformation come to life. They used multiple digital media channels to reach out to an ever-growing community: via daily vlogs, tweets, Facebook posts and paper.li they shared what they were doing each day. And the app www.follow.storm-eindhoven.com allowed anyone, anywhere, to follow the trip. Via a combination of data sources (the electric bike, the whereabouts of the team via a GPS tracker, the social media activity) and visualizations of that data, everyone was able to see where the team was and what they were doing.

Hence: digital transformation – using digital tools to create an experience. It was not just about the product “electric bike”, it was about the experience “the journey”; and everyone everywhere could participate and travel along.

Data facts of the tour in an infographic

While the STORM team was traveling the last stage on November 2, we assembled a set of facts on their 80-day tour.

The Trip

The STORM team visited an amazing 17 countries in those 80 days. But the number of countries where people followed the experience was much larger: of the 194 countries in the world, people from 103 followed the STORM tour (see heatmap).


The Bike

In 80 days, the motorcycle used the rear brake more than the front brake, and went slightly more often to the right than to the left (according to the blinkers). As we saw in the last hackathon, speed was higher when moving eastwards than in other directions. Luckily, the tour went mainly east (8,780 of 13,720 total driving minutes).


The App

The app was visited more than 33,000 times, and many visitors were returning visitors, checking the whereabouts of the STORM team on a regular basis (some even daily! We heard ‘complaints’ such as “so what do I read now in the morning with my first cup of coffee…?”).

The team held 83 events where they shared the electric driving experience and the fact that a team of students can accomplish the impossible if they work cleverly together: at companies, at universities, but maybe most importantly at schools, where they inspired quite a few kids to go to university and change the world.


A team of 12 data scientists also assembled 147 data facts from the various data sources. That was sometimes hard, for example when the team was driving through countries with no internet or 3G, and hence no motorcycle data. Luckily, there were always tweets to analyze (more than 5,000 in total during the tour), whether on positive or negative sentiment, on hashtags used, or on the country of origin of the tweeting persons.
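
To give a flavor of that tweet analysis, here is a minimal hashtag tally; the sample tweets are invented placeholders, not the actual STORM dataset.

```python
# Minimal sketch of one such data fact: tallying hashtags across the
# collected tweets. The two sample tweets are invented placeholders.
import re
from collections import Counter

tweets = [
    "Go @STORM_eindhoven! #STORMwave #electric",
    "Spotted the bike in Istanbul today #STORMwave",
]

hashtags = Counter(
    tag.lower() for text in tweets for tag in re.findall(r"#\w+", text)
)
print(hashtags.most_common(3))  # [('#stormwave', 2), ('#electric', 1)]
```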

An amazing journey, a true experience, and hopefully a big step towards widespread adoption of electric driving.



Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2016/11/04/data-around-world-part-viii-end-tour/feed/ 0
Data Around The World – Part VII: Hacking at ITility https://dataconomy.ru/2016/11/01/data-around-world-part-vii-hacking-itility/ https://dataconomy.ru/2016/11/01/data-around-world-part-vii-hacking-itility/#respond Tue, 01 Nov 2016 15:45:26 +0000 https://dataconomy.ru/?p=16780 The last stage of the STORM 80-day tour will be driven on November 2 – from Paris to Eindhoven, accompanied by many electrical vehicles. Hacking at ITility In the meantime, we used the results from the hackathon at Stanford University, to dig deeper into that same data set during the second Hackathon at our Itility […]]]>

The last stage of the STORM 80-day tour will be driven on November 2 – from Paris to Eindhoven, accompanied by many electric vehicles.

Hacking at ITility

In the meantime, we used the results from the hackathon at Stanford University to dig deeper into the same data set during the second hackathon, at our Itility office in the Netherlands.

What was the hackathon about? Two main topics: digging into the electrical spike patterns, and digging into the environmental influences on driving behavior.

A quick recap of the data used: 28 days of data from the bike – 150 million events, which could be anything from a battery error or a temperature measurement to the left blinker being turned on.

The data came from 3 main sources:

  1. The battery management system (BMS), containing a lot of detailed data from the bike’s complex battery system
  2. The body controller (BC), containing all sorts of data from the controls of the bike (from horn to cruise-control)
  3. The built-in 3G/GPS module, which sends real-time GPS coordinates and speed of the bike.

Results

After 3 hours of digging into the data, each team was called forward to give a 2-minute pitch on their findings.

The first group (Kevin and Sateh) visualized the patterns in the module temperature. They started with a scatter plot where module temperature is plotted against speed (divided into classes of 10 km/h).



Next, they used anomaly detection to dig further into the data, and found two spikes in BMS master temperature in cartridge 5. Digging further into this cartridge, they found errors in the logging that occurred just prior to the temperature spikes. The next step is to use the events happening just before the error appeared to build a model that predicts when this temperature error will happen.
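
The article does not say which anomaly-detection method the team used; one simple approach that would surface such spikes is a rolling z-score over the temperature series, sketched below on synthetic data since the real BMS logs are not public.

```python
# Sketch of spike detection on a temperature series via rolling z-scores.
# The data below is synthetic; the real cartridge logs are not public.
import numpy as np
import pandas as pd

def find_spikes(temps: pd.Series, window: int = 60, threshold: float = 4.0) -> pd.Series:
    """Flag samples more than `threshold` rolling standard deviations
    above the rolling mean over the last `window` samples."""
    rolling = temps.rolling(window, min_periods=window // 2)
    zscore = (temps - rolling.mean()) / rolling.std()
    return temps[zscore > threshold]

rng = np.random.default_rng(0)
temps = pd.Series(30 + rng.normal(0, 0.2, 500))  # steady around 30 degC
temps.iloc[400] = 45.0                           # injected spike
print(find_spikes(temps))                        # flags the sample at index 400
```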


The next group (Donald, Arati and Mike) dove deep into one specific day of data (September 7) and looked at the standard deviation of the module temperature, which turned out to be quite low. They also noticed that chip 1 showed many errors; the next step is to dig into the root cause of those errors.
They also found that speed was correlated with the heading of the motorcycle: it is, on average, driving faster when going east and more slowly when driving west, as you can see in the graph below.


It may have to do with the fact that most of the time the motorbike is driving east (often on a highway), while going west happens only in small time windows (probably not on highways).

Lastly, they investigated whether the bike was driving above the Dutch speed limit of 120 km/h and plotted the results on a world map (yellow means driving faster than 120 km/h). Apparently, in Austria a lot of speeding tickets could have been given to the STORM team, had a Dutch police officer been on duty there.

Group 3 (Dannes, Cristin and Sander) checked the bike specifications. Based on these specs, they defined which actions would count as a “dangerous violation of the specs”. They actually found examples of these dangerous violations in the data, and digging further they saw that after such a violation occurred, the motor stopped moving.

See the graph: the power (watt) is calculated by multiplying the current (ampere) by the voltage (volt). The voltage slowly decreases over time, causing the power to follow the graph of the current.

Using the specifications of the battery, it could be concluded that the power drawn from the battery was too high just before 8:50 AM. If the stop at that time was due to a technical malfunction, this could be the root cause.
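
That power check is straightforward to reproduce. The sketch below computes power from current and voltage and flags samples above a spec limit; the 15 kW limit and the column names are assumptions for illustration, not the bike’s real specifications.

```python
# Sketch of the power check: P (watt) = I (ampere) * U (volt), flagging
# samples above a spec limit. The 15 kW limit and columns are assumptions.
import pandas as pd

MAX_POWER_W = 15_000  # hypothetical battery spec limit

def flag_violations(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["power_w"] = df["current_a"] * df["voltage_v"]
    df["over_spec"] = df["power_w"] > MAX_POWER_W
    return df

log = pd.DataFrame({"current_a": [110, 160, 155], "voltage_v": [96, 95, 94]})
print(flag_violations(log))  # only the 160 A sample exceeds 15 kW
```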


The last group (Tim, Robert and Mark) used both Splunk and Power BI to visualize various parts of the trip. Some detailed research resulted in a visualization of the measured voltage of the module, for example the decrease in voltage per cartridge and the moment a cartridge was replaced. The figure shows several steep upward slopes, which indicate a replacement of the cartridge, and an average voltage range between 20 and 25 volts. The next step would be to correlate the decrease in voltage with other factors during a specific time period, such as speed, weather and the type of environment the bike is driving in.


A metadata analysis, visualizing the maximum speed and module temperature per day, was combined with these findings. One hypothesis was that the maximum speed and module temperature would be related during the trip.


Follow the STORM bike live on November 2! If you own an electric vehicle, please join the team in their last kilometers towards Eindhoven: https://youtu.be/wg5spx_UZKo


Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2016/11/01/data-around-world-part-vii-hacking-itility/feed/ 0
Data Around The World Part VI: Hackathon at Stanford https://dataconomy.ru/2016/10/17/data-around-world-part-vi-hackathon-stanford/ https://dataconomy.ru/2016/10/17/data-around-world-part-vi-hackathon-stanford/#respond Mon, 17 Oct 2016 11:09:37 +0000 https://dataconomy.ru/?p=16712 As of October 4th, the STORM team is traveling the US, with exciting events such as a visit to Tesla and Google, driving on the Golden Gate bridge – and a hackathon on the STORM data at Stanford University. What was the hackathon about? A simple task: let’s crunch some data and see if we […]]]>

As of October 4th, the STORM team is traveling the US, with exciting events such as visits to Tesla and Google, driving on the Golden Gate Bridge – and a hackathon on the STORM data at Stanford University.


What was the hackathon about? A simple task: let’s crunch some data and see if we can help the STORM team!

The hackathon started (after taking pictures of the bike) with a short presentation by STORM on their tour, how the bike was set up and where they had been driving.


Next step: explaining the data. There was a lot of data available, so each team could go their own way. So far, we have gathered 28 days of data from the bike: a stunning total of 150 million events, which could be anything from a battery error or a temperature measurement to the left blinker being turned on.


The data came from 3 main sources:

  1. The battery management system (BMS), containing a lot of detailed data from the bike’s complex battery system
  2. The body controller (BC), containing all sorts of data from the controls of the bike (from horn to cruise-control)
  3. The built-in 3G/GPS module, which sends real-time GPS coordinates and speed of the bike.

Data cleaning

Raw logs generated by the bike are formatted as hexadecimal byte arrays. While this makes the logging very efficient in terms of size, it makes interpreting the logs less easy. So, in the days before the hackathon, we pre-processed and cleaned all the internal logging of the bike, to get the teams up and running as quickly as possible. It still took the teams some time to get used to the data, but luckily the whole STORM team was present to answer all kinds of questions.
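
To give an idea of what that pre-processing involves, here is a toy decoder for one hex-encoded payload; the frame layout (two big-endian unsigned 16-bit fields) is invented for illustration and is not the STORM bike’s actual format.

```python
# Toy decoder for a hex-encoded CAN payload. The layout (two big-endian
# unsigned 16-bit fields: voltage in mV, temperature in 0.1 degC) is
# invented for illustration, not the STORM bike's real frame format.
import struct

def decode_frame(hex_payload: str) -> dict:
    raw = bytes.fromhex(hex_payload)
    voltage_mv, temp_decideg = struct.unpack(">HH", raw[:4])
    return {"voltage_v": voltage_mv / 1000, "temp_c": temp_decideg / 10}

print(decode_frame("5DC000FA"))  # {'voltage_v': 24.0, 'temp_c': 25.0}
```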

Results

After two hours, each team was called forward to give a 2-minute pitch about what they were working on and their most interesting findings so far. It was immediately clear that most groups had focused on the battery system, which was to be expected.

The first group (“The Electricians”, bottom left of the photo) noticed some spiking in the current of some battery cells. They were trying to figure out whether this could lead to an overall efficiency loss in the batteries, since the cells are all linked together. The team focused on battery anomalies, tracking battery spikes per cartridge and for the total set. Next step: evaluate the spikes and their correlation with temperature and terrain.

Similarly, one group (“The 4 guys”, top right of the photo) found consistent temperature differences between cells, and related these to their position in the bike. Knowing that batteries become less efficient at higher temperatures, they looked into optimizing battery placement to increase efficiency.

Not all groups jumped on the battery data: the winning group (Robert and Shreyas, photo below) related the information from the body controller (front brake, rear brake, blinker) and tried to find out what kind of environment the bike was driving in, based on the way it was handled. For example, a lot of braking and active blinkers could point to an urban environment, while rarely used brakes and blinkers would make it likely that the motorcycle was driving on the highway.
Once the environment is known, the team will use that information to determine battery and motor efficiency, and to see how the environment affects all the other motorcycle CAN data.
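
A crude version of that idea can be written as a simple rule over control-event rates in a time window; the thresholds below are invented for illustration and are not the winning team’s actual model.

```python
# Crude sketch of the winning team's idea: guess the driving environment
# from control-event rates in a time window. All thresholds are invented.
def classify_environment(brakes_per_min: float, blinkers_per_min: float) -> str:
    if brakes_per_min > 2.0 or blinkers_per_min > 1.0:
        return "urban"    # frequent braking/turning suggests city traffic
    if brakes_per_min < 0.2 and blinkers_per_min < 0.1:
        return "highway"  # long stretches without control inputs
    return "rural"        # everything in between

print(classify_environment(brakes_per_min=3.1, blinkers_per_min=1.4))  # 'urban'
print(classify_environment(brakes_per_min=0.1, blinkers_per_min=0.0))  # 'highway'
```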


Another team (“The Chinese Ladies”, photo below) focused on the correlation between speed and temperature and did a high-current/low-voltage analysis, plotting the current by cartridge per position. They also created a nice moving map showing speed and location.

Speed data was also used by team “Chinese Boy and Girl”, who plotted speed per day to find correlations between speed and temperature, and between temperature and battery. So did team “Pink Jing-solo”, who created a model to predict the impact of temperature and location on speed.

The next hackathon on the same data set will be in the Netherlands. Check it out here.


Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2016/10/17/data-around-world-part-vi-hackathon-stanford/feed/ 0
Data Around The World – Part II: First Data From The Itility Hackathon https://dataconomy.ru/2016/09/02/data-around-world-part-ii-first-data-itility-hackathon/ https://dataconomy.ru/2016/09/02/data-around-world-part-ii-first-data-itility-hackathon/#respond Fri, 02 Sep 2016 08:00:23 +0000 https://dataconomy.ru/?p=16404 This is the second instalment of our series with Itility. We will cover the journey of the STORM Wave, the world’s first electric touring motorcycle, as it goes around the world in 80 days. The Eindhoven University of Technology team will complete the 23,000 km tour using only the world’s existing electricity grid. Read the whole series here. The […]]]>

This is the second instalment of our series with Itility. We will cover the journey of the STORM Wave, the world’s first electric touring motorcycle, as it goes around the world in 80 days. The Eindhoven University of Technology team will complete the 23,000 km tour using only the world’s existing electricity grid. Read the whole series here.


The STORM embarked on the 80-day tour around the world on August 14. That means that, up to now, the first battery-powered touring motorcycle has successfully completed 17 stages.

The student team from the Eindhoven University of Technology (TU/e) has seen it all by now: heavy rains, steep mountains, rocky roads and many more situations they had never encountered with this motorcycle before, in training or in tests. Time to take a look at the data and find out how the motorcycle has been performing in the real world.

The data is collected from the motorcycle via the CAN bus, a messaging network used by all components of the motorcycle to exchange measurements, events and orders. In the first couple of days, we collected over 30 million messages from about 25 components of the motorcycle, along with GPS measurements, weather information and social media data.


[Figure: CAN data over time]

During the Itility Hackathon, an event for data scientists of all kinds of backgrounds and experience levels, we gave this dataset to participants and asked them: “How can we help STORM in the next 70 days?” In teams of 3, we spent several hours digging through the data, translating binary and hexadecimal code, calculating distances and correlating metrics; at the end of the evening, the teams shared their experiences and conclusions.

Most teams started out with descriptive statistics to understand the data:

  • “How is the data distributed over time?”
  • “What components of the motorcycle send out the most messages?”
  • “Which errors occur most?” (see graph below; the codes are errors)

Most teams were able to answer these kinds of questions rather quickly.


The next step for most teams was to find out why errors occurred, and if certain errors were correlated with speed, or with voltage and current in the battery pack.

One of the conclusions was that temperature has almost no effect on either the voltage in the battery or on the “under voltage” error that occurs when the battery is almost empty.
The voltage decreases steadily and is only influenced (a little) by the speed. The steady decrease makes it possible to predict when the battery will be too empty to continue driving; we might even be able to predict the best place to stop for a battery swap.
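
Because the decrease is so steady, even a straight-line fit gives a usable estimate of when the battery will hit a cutoff voltage. A minimal sketch, with an assumed cutoff and made-up sample data:

```python
# Minimal sketch: fit the steady voltage decrease with a line and solve for
# the time it crosses a cutoff. Cutoff and sample data are illustrative.
import numpy as np

minutes = np.array([0, 10, 20, 30, 40])
voltage = np.array([25.0, 24.6, 24.1, 23.7, 23.2])
CUTOFF_V = 20.0  # hypothetical "too empty to continue" threshold

slope, intercept = np.polyfit(minutes, voltage, 1)
minutes_to_empty = (CUTOFF_V - intercept) / slope
print(f"Estimated battery cutoff in ~{minutes_to_empty:.0f} minutes")
```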

The graph below shows both the speed and the state of charge of the battery. You can clearly see how kinetic energy from slowing down immediately charges the battery.



This next graph shows the relationship between the voltage errors and the voltage in the battery. Also note the little bit of recharging just after 2 PM: the motorcycle recharges as it brakes.


Unfortunately, we did not have enough time during this first hackathon to get to the real “machine learning stage”. Nevertheless, the results were interesting and could really help the STORM team on their journey. Here at Itility, we keep crunching the numbers and sharing our findings via follow.storm-eindhoven.nl

Make sure to join our next hackathon where we’ll try to use machine learning on an even larger set (STORM data over a period of about 60 days).


Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2016/09/02/data-around-world-part-ii-first-data-itility-hackathon/feed/ 0
“Security is the big issue to solve around IoT.” – Interview with Cesanta’s Anatoly Lebedev https://dataconomy.ru/2016/08/17/security-big-issue-to-solve-in-iot-interview-anatoly-lebedev/ https://dataconomy.ru/2016/08/17/security-big-issue-to-solve-in-iot-interview-anatoly-lebedev/#respond Wed, 17 Aug 2016 08:00:31 +0000 https://dataconomy.ru/?p=16305 Anatoly Lebedev is the CEO and Co-Founder of Irish company Cesanta. Together with his team, he helps define the future of embedded communication technologies. He believes that if we want to get to 20 billion connected devices by 2020, then IoT integration needs to be made simple, secure and fast. Anatoly ensures that Cesanta products […]]]>


Anatoly Lebedev is the CEO and Co-Founder of Irish company Cesanta. Together with his team, he helps define the future of embedded communication technologies. He believes that if we want to get to 20 billion connected devices by 2020, then IoT integration needs to be made simple, secure and fast. Anatoly ensures that Cesanta products match this vision. Cesanta has been named as the ‘One to Watch’ by Business & Finance Magazine and is the 2015 winner of the Web Summit’s ‘GoGlobal’ competition. Previously, Anatoly shaped strategic partnerships in Europe, Middle East and Africa in his 8-year tenure at Google. He is heavily involved in the Irish startup scene and can be found as a mentor at hackathons and startup weekends. When he’s not driving the business side of Cesanta forward, you can find the racing enthusiast driving through the Irish countryside.


A little bit about you and your company

My name is Anatoly Lebedev and I’m the CEO of Cesanta, an Irish technology startup working in the Internet of Things field. We are on a mission to bring all devices online. This means we apply connectivity and networking solutions by bringing devices and equipment to the Internet.

I spent 8 years at Google before founding this company. I started in the tech field, then moved to the business side and then to strategic partnerships. I was doing multimedia, hardware distribution, data acquisition and so on. When we decided to start Cesanta, a bunch of other engineers joined us; 70% of the company is made up of ex-Google employees. The mission itself is quite challenging. How do you bring all those devices online? How do you actually program them physically? That’s why we created the Mongoose IoT Platform and made it easier for people with limited skills to actually prototype and build these connected things.

What was the reason for starting this company?

We have a different product called the Mongoose Web Server Library, which is widely used by companies like Intel, HP, Dell, Samsung, even NASA; Mongoose is now on the International Space Station. What we noticed was that Mongoose was being used as part of systems that companies built in-house to achieve IoT connectivity. While those guys I mentioned might have deep pockets and a lot of resources to do that, the majority of companies developing IoT-enabled products are smaller and don’t have those budgets. These smaller companies often stumble over problems that lead to insecure and unstable products. So what we said was: why don’t we just provide them with a platform to create simple, secure IoT connectivity? We know how to build it, and that’s how we decided to build the platform.

More and more companies are going to bring their products online, and all of them will be struggling with infrastructure. But for most of them, their core business is the device and what it does for the consumer, not the connectivity piece. That’s where we enable them. We’ve taken away a big, very specific problem in regards to infrastructure, connectivity and security, and we’re giving them more time to concentrate on what they do best – product development.

What do you think is the benefit of using data science in IoT?

Big Data has been ‘the’ big topic, right? I think one of the reasons is that big data probably didn’t succeed as much as expected: there was hype, but then it dropped, because there was not enough Big Data. What IoT effectively creates is simply a huge amount of data coming in. So, in a nutshell, IoT is just an enabler for Big Data. We see ourselves as an intermediary between the businesses that will create tons of data and the solutions and data scientists who slice and dice that data, providing the actual intelligence for these businesses.

Why did you choose Ireland for your HQ?

We have a lot of diversity inside the company. Most of the people came to Ireland for work, and the majority of our staff worked for Google. What’s beautiful about Ireland is that it’s part of Europe, it’s a relatively inexpensive place to live, it’s in the Eurozone, and it’s a short 3–4 hour flight from anywhere in Europe. It sits between the US and Europe (flying to NY is 6 hours), and it has a small market. What you see here in Germany [at CeBIT] is that most of the promotional material is in German. Unfortunately, I don’t speak German, so I can’t understand what they’re talking about, and this [CeBIT] is an international event with 200,000 people coming from all over the world. Plus, Germany is a big market in which companies can produce mainly for Germany. The diversity that can be achieved in Ireland is not needed if you aim at one large market.

We had a chance to move to Silicon Valley, for example. But at that moment, economically, it made more sense to stay in Ireland, and we achieved much more. In Silicon Valley everything is more expensive. There’s a lot of talent, but the talent is hopping from job to job because there’s a large variety of jobs.

Ireland also has pretty good conditions for startups now. It’s actually great when the government helps you as a company.

What are the other significant shifts you see in IoT?

Everyone tries to play into IoT. Every week you see a huge announcement of a big company going into IoT, pouring tens if not hundreds of millions of dollars into development, saying they are going to be the next big player. And it’s great, because it creates more awareness and brings more opportunities to the market. So, we definitely see more businesses entering the market, because they have figured out that an existing or new product that is IoT-enabled will be much more sellable and will bring in more revenue. To businesses it’s a no-brainer that IoT is positive; it’s not fake hype. If you’re, for example, British Gas and you install a connected thermostat, customers are more interested, and their electricity bill will decrease because heating is only used when needed, not according to preset timers. Or take connected cars: you won’t have to bring your car to a service station to install a new feature; it can be pushed over the air. Or in health: what we have now are simple trackers, but they will evolve into solutions like nano robots in your bloodstream that tell you ‘oh, this fella is about to have a heart attack’. This is not very far away.

By bringing all devices online, we create additional value not only for businesses but for people. Things are going to be more secure and safer, because you can prevent a lot of problems before they happen. When you have things that talk to each other, it’ll be easier to prevent issues.

A lot of problems in the world right now exist because people or things don’t communicate in the right manner. Apart from the golden billion, there are an additional six billion people on this planet who will leapfrog. Take parts of Africa, where they never had landlines: they leapfrogged directly into mobile phones.

I told you about this chip, right? It’s actually a Chinese company producing that chip; it costs about 3 dollars and has an MCU with enough memory, and it acts as a WiFi antenna. It works over a distance of up to 350 meters, and there is enough capability to embed it into pretty much anything and make that thing connected. At that price point, you can put it pretty much everywhere. Take trackable clothes: clothes producers are already thinking about how to track shorts and trousers, etc.

In five years we’re going to live in a whole different world. But you need security, sensibility, data protection and privacy. Ten years ago we had no phones in our pockets; now everyone has at least one, and we do much more with them than just phone people. We share everything that’s happening in our lives with our technology.

I watched a talk by Eugene Kaspersky, CEO of Kaspersky Antivirus, and he said the biggest threat to the user is the user himself, because the amount of information we share about ourselves is enormous. A lot of people post so much on Facebook that they don’t even realize that people outside have access to it: how old you are, where you live, when and where you are going to travel. So having connected devices send data somewhere can actually be more sensible, because you can create hard rules that do exactly what you need to do. Security is the big issue to solve around IoT.

If you could tackle any technology-solvable problem existing today, what would it be and why?

One of the biggest challenges for the IoT is its diversity. Companies create a lot of different things that don’t talk to one another. Ideally, I would like to see everything in our lives (which is connected) be able to talk to each other without us. It’s a long shot, but when things start talking to each other, life will be way easier. Let’s say you arrive at your house in your smart car. As you approach, the gates open; you stop in your driveway, walk out, and the car parks itself in the garage. You come to the door and it opens because it knows it’s you; you don’t need keys. Once you enter (by the way, your heater knew you were coming), your system knows that it is Thursday, and usually on Thursday night you have a glass of wine. The fridge knew this too, but it also knew that you had run out. It ordered wine for you, and it’s on the way before you even arrive home. These are small things, but imagine how much time we’ll free up!

Like this article? Subscribe to our weekly newsletter to never miss out!

Image: Michael Davis-Burchat

]]>
https://dataconomy.ru/2016/08/17/security-big-issue-to-solve-in-iot-interview-anatoly-lebedev/feed/ 0
Data Scientists…futureproof yourselves! https://dataconomy.ru/2016/06/10/data-scientists-futureproof/ https://dataconomy.ru/2016/06/10/data-scientists-futureproof/#comments Fri, 10 Jun 2016 08:00:54 +0000 https://dataconomy.ru/?p=15915 You know who you are. You’re a Data Scientist. In interview rooms across the world, fat cat executives are pushing contracts under your noses with extortionately high numbers on them. What a time to be one of a few doing backstrokes at the world’s most exclusive pool party, right!? What happens when other bright academics, […]]]>

You know who you are. You’re a Data Scientist.

In interview rooms across the world, fat cat executives are pushing contracts under your noses with extortionately high numbers on them. What a time to be one of a few doing backstrokes at the world’s most exclusive pool party, right!?

What happens when other bright academics, who stand outside with noses pressed up against the glass, get bored, rush the gates and start doing huge running pool bombs? One only has to look at the growing number of governments and businesses funding universities to teach Machine Learning, or at the accessibility of online courses and competitions, to see how this is becoming a more accessible and inclusive marketplace.

Economist Tyler Cowen once said: “Food is a product of supply and demand, so try to figure out where the supplies are fresh, the suppliers are creative, and the demanders are informed.”

The same can be said about Data Scientists. Businesses are becoming more informed about the potential of Machine Learning. When these businesses demand the skill, the supply must be met at any cost. Many businesses are smart, and they’re flooding billions of dollars into closing the skills gap, so it’s only a matter of time before the next generation of Data Scientists emerges.

So, how do you ensure you remain afloat on your inflatable crocodile with your Piña Colada intact? Simple really: you have to diversify… or drown. Make yourself futureproof now. Continue pushing yourself in new ways, to ensure you’re not left behind.

Here are a few suggestions:

Become a He-Man/She-Ra Coder

One of the current trends we’re seeing is the growing demand for Data Scientists with production coding experience. For some businesses, proof-of-concept code is great. For others, the ability to write the code that takes it into production is even better! Businesses that don’t have engineering teams tend to prefer this option; others just like to kill two birds with one stone made out of budget. It’s a great blend of experience to have, and one that would put you in the “Unicorn” category in the eyes of some hiring managers.

Be a Business Brain

What we see time and time again is Data Scientists who are employed to solve specific problems in the business, unbeknownst to them (and their managers) that there are countless other problems they could solve too. The most successful commercial Data Scientists are the ones who understand the rhythm of the business around them: how it works, why it isn’t working, and who take ownership to solve the problem. You need to be able to proactively approach stakeholders within the business and confidently challenge them on the shortfalls of their department and on how an application like Machine Learning can help the company be more successful. Engage yourself with the business. Have one eye on the project in front of you, but be able to identify where the next opportunity is coming from. Sell Machine Learning to everyone.

Become famous!

When we started Big Cloud, we were amazed at how online the Data Science community was. Compared to “old school” industries, it’s crazy how quickly you can make a name for yourself. We are constantly being asked by hiring managers to find them the best Kaggle Masters. Because of the accessibility of Kaggle, it has become a platform on which someone can gain notoriety very quickly. Another reason managers like people who compete on Kaggle or partake in other extracurricular activities such as hackathons is that it shows a genuine passion for the subject. How many hot shot lawyers finish a tough day in court, go home, log on and start solving cases in their spare time? How many firemen go out looking for fires to put out when they’re not on watch? Data Science is an industry of discovery, and people who are inquisitive and push themselves outside of work, in their own private research, to better themselves are being prioritised by more and more companies.

Look for the pool parties abroad

While there are many opportunities in the United States and the UK, as well as the rest of Western Europe, there are far more businesses in other parts of the world looking for Data Scientists. With lower tax brackets than the West and a less saturated job market, you can expect to live very comfortably in, say, Bangkok or Kuala Lumpur for a fraction of the cost of London or San Francisco. If nothing else, it offers the opportunity to diversify your CV and solve problems that will benefit people in completely different parts of the world… how cool would that be! Just as importantly, it’s also a chance to fulfil a desire for adventure.

Mo Money, Mo Problems

One of the biggest reasons we see offers rejected at the final stage is a change in salary expectations. It’s easy to increase your expectations at the last minute, particularly when the recruiting company has rolled out the red carpet and expedited its recruitment process from 4 weeks to 1 day just to accommodate you. However, before you decide to hike your demands up at the last minute, consider your next move. It’s true: we see some candidates who not only price themselves out of the job, but sometimes out of the market entirely! It makes your job search that little bit trickier next time (who wants to earn less?), but it also means you’ll sometimes have to say goodbye to the businesses solving the most interesting problems. Strike while the iron is hot, but don’t get caught out in the abyss when equally skilled and cheaper rival applicants begin to emerge.

Get promoted

If there is an end point to your research days (a day when you’ve written your last line of code, perhaps a day when cleaning data is just too much of a pain), there is always a position upstairs. You will always find solace (and safety) in positions of management. Mentor and guide teams and become the parental figure. However, be prepared for the politics that can come with this gig, as you will fight a daily battle with people in other areas of the business who sometimes don’t have a clue what you do or why you’re telling them what they should be doing.

Start-up!

You don’t want to be replaced? Upstaged by a snotty-nosed kid? Left to rot on the scrap heap? Simple: go into stealth mode and start up your own business! It seems to be the most popular destination for hardcore Machine Learning practitioners who have a great idea and the balls to live it out. Build a proof of concept, showcase it, win over a load of cash-rich investors and don’t look back. How hard can it be?…

As Data Scientists who are always seeking to perfect and optimise your models and frameworks, it’s critical that you take the same approach to who you are and what you’re offering to the world around you. When you’re in the here and now, it’s sometimes very easy to lose sight of what can and will be. One thing is for sure: with all the hype and attention Data Science is receiving in the press right now, this party is going to get a whole lot busier! Grab your shades, grip on tight and prepare for the swell!

image credit: Kajoaaa

Like this article? Subscribe to our weekly newsletter to never miss out!

]]>
https://dataconomy.ru/2016/06/10/data-scientists-futureproof/feed/ 1
Data Science Leveraged to Stop Human Trafficking https://dataconomy.ru/2016/05/23/data-science-leveraged-stop-human-trafficking/ https://dataconomy.ru/2016/05/23/data-science-leveraged-stop-human-trafficking/#respond Mon, 23 May 2016 08:00:22 +0000 https://dataconomy.ru/?p=15787 Finding missing children and unraveling the complex web of human trafficking is no easy task. The relevant datasets are massive and often unstandardized. It can be difficult to find the right data at all, as it often disappears from websites and pages on a regular basis. When data is hard enough for scientists to capture […]]]>

Finding missing children and unraveling the complex web of human trafficking is no easy task. The relevant datasets are massive and often unstandardized. It can be difficult to find the right data at all, as it often disappears from websites and pages on a regular basis. When data is hard enough for scientists to capture and evaluate, how can law enforcement agencies even begin to get a handle on it? These agencies, with little funding or know-how, need real help if they want to leverage big data and get a grip on human trafficking.

Many of the efforts to solve crimes with data are actually coming from outside law enforcement. From community efforts to non-profits and even full business solutions, the world of data science is actively using its skills for good. More importantly, these data solutions stand in stark contrast to the more general and vague job of crime prediction, which is becoming more and more common. Many departments already use data to target trouble areas, but for crimes that involve huge rings and layers of corruption, there's a lot more work to be done.

The companies using data science to stop human trafficking often use several methods and mimic what regular law enforcement agencies might do on their own. The “Science Against Slavery” Hackathon was an all-day event aimed at sharing ideas and creating science-based solutions to the problem of human trafficking. Data scientists, students and hackers homed in on data that district attorneys would otherwise never find. Many focused on automating processes so agencies could use the technology with little guidance. Some focused primarily on generating data that could lead to a conviction, which is much easier said than done. One effort, from EPIK Project founder Tom Perez, involved creating fake listings; the team could then gather information on respondents, including real-world coordinates. Other plans compared photos mined from escort ads and sites to those from missing-person reports. Web crawling could eventually lead to geocoding phone numbers or understanding the distribution of buyers and sellers, as well as social network analysis.

Turning Big Data Into Real World Information

Perhaps one of the more famous initiatives comes from the Polaris Project, started in 2002 and revitalized in 2012 through the use of data science. When the organization heard a talk from the CEO of Palantir, a software and data analysis company, it was clear that the fight against human trafficking needed an upgrade, and a big one. With some help from Palantir, Polaris was soon armed with new technology and engineers. They began consolidating data from phone calls, company contacts, legal service providers, and every other part of their organization into one simple platform.

Palantir has helped other organizations, like the National Center for Missing and Exploited Children (NCMEC), in a similar fashion. By combining data from public and private sources, the organization pinpointed 170 different quantitative and qualitative variables per case record. Advanced analytics were required to evaluate tips, of which 31,945 came by phone, 1,669 through online submission, and 787 by SMS. The project also aimed to digitize old records spanning several decades and import them into a single searchable, analyzable structure. All of this data is powerful, but the final step was making it easily accessible. By importing the numerous formats and levels of information into one database, what once took several weeks, or was simply impossible, can now be done in an instant.

The story of one missing 17-year-old girl in California has since become the shining example of data triumphing in the world of human trafficking. Using data science, analysts were able to find multiple online posts advertising the missing girl for sex. By analyzing over 50 ads featuring nine different women across five states, analysts didn't just find the girl; they saw the larger ring and were able to link the pimp to other crimes and victims.

Visualizations and Easy Solutions for Law Enforcement

The BBC has reported on the amount of data available, and how those terabytes aren’t as immediately helpful as the public would like to think. Child sex abuse raids tend to lead to unbelievable amounts of data. Image forensic specialist Johann Hoffman laments, “the problem is, how as a police officer do you go through that huge amount of data? When you are dealing with terabytes there’s no way a human could ever go through it all.” Using analytics, however, has given them an entirely new approach to data. Friendly data platforms and visualizations help generate a larger story that doesn’t require a master’s degree to understand.

There are several more examples, but one particularly interesting area is data solutions marketed toward law enforcement. One Y Combinator startup wants to act as a paid service for law enforcement. It may feel a tad weird to read a tagline like “the right data at the right time can make or break your prosecution,” but these external companies offer expertise that law enforcement employees likely won't otherwise have access to. Plus, to make the entire concept a bit more palatable, this particular startup, Rescue Forensics, only registers official law enforcement agencies, as opposed to just anyone willing to pay. Most escort advertisements disappear after a few days, making them incredibly difficult to track. Companies like these, which focus entirely on data tracking, analysis and storage, can keep otherwise lost information alive for those who need it.

The splintered nature of the entire field might also be one of its biggest assets, for the time being. While splintering in some sectors causes huge problems, and ultimately holds users back from progress, the array of approaches in this area is due to just how many people are interested in creating solutions. These different companies come with different backgrounds and goals and will ultimately lead to new and exciting possibilities. Many operate on open-source platforms, meaning we can expect the number of solutions to continue to skyrocket.


Big Data’s Next Big Impact? Oil & Gas https://dataconomy.ru/2016/04/18/big-datas-next-big-impact-oil-gas/ https://dataconomy.ru/2016/04/18/big-datas-next-big-impact-oil-gas/#comments Mon, 18 Apr 2016 08:00:16 +0000 https://dataconomy.ru/?p=15271 ‘Disruption’ is a pretty well-worn term these days, mentioned at least three times in the first five sentences of any tech blog and thrown around liberally at any good Silicon Valley hackathon. Up until relatively recently, the efforts of data-led disruption were directed squarely at consumers and changing the way they work, rest and play. […]]]>

‘Disruption’ is a pretty well-worn term these days, mentioned at least three times in the first five sentences of any tech blog and thrown around liberally at any good Silicon Valley hackathon. Up until relatively recently, the efforts of data-led disruption were directed squarely at consumers and changing the way they work, rest and play.

Big Players in Big Data

Wearable technology has undoubtedly become very big business. From smartwatches to fitness trackers, smart clothing and intelligent safety wear, innovation in the space has been rampant.

An estimated 4.9 billion sensors are already connected to the internet, and forecasts expect that number to rocket to somewhere between 38 and 50 billion within just five years. The Google-owned Nest thermostat is a great example of the technology already making its way into our homes. It learns when you like your environment warm or cold and autonomously adjusts the heating to match your specific lifestyle while also maintaining peak efficiency.

Old Industries, New Integration

Beyond the consumer-focused smartwatches, remotely controllable kettles and connected garden appliances, Big Data has also set its sights on a number of new fields of opportunity, such as healthcare and agriculture, where there is real potential to have a significant impact and tackle some of the biggest global issues.

Innovators have turned their hand to Big Data solutions within the healthcare space – predicted to be worth $117 billion by 2020 – improving medical research and drug management, and enabling remote monitoring of patient recovery. Goldman Sachs estimates a future potential to save billions of dollars in asthma care alone.

Within agriculture, Big Data is also having a significant impact on the way food producers are planting, growing and harvesting the world's food supply. Machinery, climatology and agronomy data are all being successfully combined and leveraged to increase productivity and reduce labour costs.

Big Data’s Next Big Industry. Oil.

Up until recently, data-driven solutions developed for oil and gas had been negligible. However, recent volatility in global energy markets has led to plummeting oil prices, subsequently creating a stronger demand than ever for new solutions able to deliver increased levels of operational efficiency and automation. Numerous Big Data-led innovations are sitting at the heart of this disruption.

Producing More with Less

Technology-based optimization of oil well performance is not entirely new, and solutions which utilized some level of data analysis have existed for several years, developed and implemented primarily within North America. However, with significant installation costs, these products were aimed squarely at the top-tier, high-performing wells and were simply uneconomical to apply to the aging, less prolific wells which make up around 80% of the overall market.

Today, through the use of lightweight, pump-mounted sensors and secure wireless networks, rich data from ‘chatty’ onsite machinery can be collected at a fraction of what it once cost. Complex algorithms turn this big data into insightful data. Leveraging knowledge of fundamental physical properties, equipment operations, production trends and reservoir dynamics, data analysis can deliver recommendations such as speed changes to a pump and optimal chemical injection rates, as well as initiate autonomous micro-adjustments to individual pump strokes in order to maintain optimal production and efficiency.

On top of this, captured data can be used to track wear on machinery, helping to predict asset failure and alerting operators to pending disruption, thereby minimizing machine downtime and increasing site safety. As the software ingests more data via its machine learning capabilities, it becomes increasingly intelligent and valuable to oil and gas producers.
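
To make the idea concrete, here is a deliberately simple sketch in Python of the kind of rule such monitoring software might encode (entirely hypothetical on our part; vendors' actual wear models are proprietary and far richer): flag any sensor reading that strays several standard deviations from its recent baseline.

 from collections import deque
 from statistics import mean, stdev

 def wear_alerts(readings, window=50, threshold=3.0):
     """Yield timestamps whose sensor value strays more than `threshold`
     standard deviations from the trailing `window` baseline -- a crude
     stand-in for the richer proprietary wear models described above."""
     recent = deque(maxlen=window)
     for timestamp, value in readings:
         if len(recent) == window:
             mu, sigma = mean(recent), stdev(recent)
             if sigma > 0 and abs(value - mu) / sigma > threshold:
                 yield timestamp
         recent.append(value)

Fed a stream of (timestamp, vibration) pairs from a pump, a rule like this surfaces the outliers an operator should inspect before they turn into downtime.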

Data-driven intelligence can also be used to ensure wells are able to adapt to their environment, operating at peak intensity during times of lower energy cost and adjusting operations based on the known below-ground dynamics associated with specific drilling locations.

End-to-end Efficiency

For the largest organizations in the business, such as Shell, which are involved in every aspect of the lifecycle of energy resources, from drilling through to refining and finally retailing to consumers as fuel for their cars or homes, big data is being leveraged in varying ways throughout the process to provide increased insight and operational efficiency.

In the initial phases of surveying sites for resource deposits, sensors are used to monitor natural seismic waves below ground to gauge whether they've passed through oil deposits. Choosing to drill one location over another once relied on data from a few thousand readings. However, with the advancement of both data collection techniques and data analysis tools, operators are now able to crunch data from more than a million readings, giving a far more detailed view of what's below ground and making for better-informed decisions.

Once the oil is extracted, limited refinery capacity means fuel needs to be produced as close as possible to its point of end use to minimize transportation costs. Complex algorithms are enlisted to overlay production cost data with relevant economic indicators and weather patterns to determine likely demand, allocate relevant resources and even set prices at the pumps.

With cost barriers to entry being lowered by continuing technological innovation, and data intelligence within the oil field increasing rapidly, big data solutions have the potential to become ubiquitous across the oil industry and change its business model altogether. By not only enabling operators to produce more with less, but having this optimization carried out autonomously, with intelligence that grows alongside ever-expanding data sets, connected devices are beginning to pave the way for a next-generation energy industry able to maintain sustainability and profitability in even the toughest markets.


Start-ups Pay Huge Salaries to Attract Talented Data Scientists, says Report https://dataconomy.ru/2016/02/24/start-ups-pay-huge-salaries-to-attract-talented-data-scientists-says-report/ https://dataconomy.ru/2016/02/24/start-ups-pay-huge-salaries-to-attract-talented-data-scientists-says-report/#comments Wed, 24 Feb 2016 10:49:06 +0000 https://dataconomy.ru/?p=15048 Jigsaw Academy and Analytics Vidhya launch The Analytics and Big Data Salary Report 2016 Press Release: Unlike large companies, which pay around 9.6 lakhs, Indian startups are willing to pay over Rs.10.8 lakhs per annum to attract the best talent in the analytics industry, according to the Analytics and Big Data Salary Report 2016 by […]]]>

Jigsaw Academy and Analytics Vidhya launch The Analytics and Big Data Salary Report 2016

Press Release: Unlike large companies, which pay around Rs.9.6 lakhs, Indian startups are willing to pay over Rs.10.8 lakhs per annum to attract the best talent in the analytics industry, according to the Analytics and Big Data Salary Report 2016 by Jigsaw Academy and Analytics Vidhya. The report's findings were based on information collected from more than 60,000 analytics professionals with advanced analytics / data science skills.

According to the report, analysts experience the biggest jump in salaries once they have clocked 5 years in the industry, and can expect a raise of up to 70% with an average pay of Rs.12.3 lakhs p.a.

The Analytics & Big Data sector has seen consistent growth over the last five years despite an increasingly uncertain global outlook. The market for advanced analytics is expected to grow at a CAGR of 33.2% and Big Data at a CAGR of 26.4%, almost six to eight times that of the overall IT market.

Commenting on the report findings, CEO of Jigsaw Academy Gaurav Vohra observed that, “The demand for data professionals has grown but a corresponding surge in supply has failed to happen. Experts estimate a shortfall of approximately 200,000 data analysts in India by 2018. The extremely competitive pay scales reflect this incongruity. In 2005 entry-level salaries were around 2-4 lakhs per year but today, pay scales have gone up phenomenally. Demand for big data and analytics professionals is rising because both domestic and international companies are relying upon India for the right talent. With data being generated at such a furious pace, I don’t see the demand for big data analysis—or analysts—slowing down any time soon.”

According to Gaurav Vohra, the start-up ecosystem is responsible for the creation of over 30,000 analytics & Big Data jobs every year across India. Being the startup capital of India, Bangalore sees at least 10,000 jobs being created in the data analytics sector annually, followed by Delhi with around 7,000 jobs and Mumbai with approximately 4,000 jobs.

Puneet Gambhir, a key member of the Analytics Leadership at Flipkart, said “We are assiduously building up the analytics talent pool within Flipkart. Analytics enables us to create better and differentiated experiences for our customers at all touch-points of our business and hence the need to shore up on such skillsets.”

Commenting on the report Kunal Jain, Founder & CEO of Analytics Vidhya had this to say:

This is one of the most exciting times to be alive for data science professionals. We are standing at an inflection point in history, after which analytics and data science will become an integral part of any product or service available. Our community comprises thousands of data scientists and we regularly look for trends in the industry. We are excited to release these findings for the benefit of a larger audience and hope that this helps people make the right career choices.

T.V. Mohandas Pai, Chairman of Manipal Global Education Services said that “Analytics is one of the most important skills required in today’s professionals. Having analytics as a part of their skill portfolio will allow these professionals to easily scoop up the most lucrative jobs, as the market is hungry for such trained talent.”

Other key highlights of the report:

  • Kolkata seems to be a clear winner for analysts, earnings-wise, when their salaries are adjusted to the cost of living. They enjoy a better quality of life here than in other cities. The average pay for analysts in Kolkata is projected to be Rs.9.35 lakhs per annum
  • Companies today are looking for employees who have knowledge of multiple tools. The highest recorded salary for 2015-2016 is Rs.12.75 lakhs p.a., the recipients of which are analysts with knowledge of more than one of these tools: SPSS, SAS, R & Python
  • Companies are keen to hire individuals with business acumen and experience, which means that an MBA alone won’t cut it. Candidates are also expected to know analytics to successfully land analytics/Big Data jobs
  • R remains the front-runner in the analytics race with salary packages of Rs.10.2 lakhs per annum. Python, however, is hot on its heels
  • Analysts who have Big Data and Data Science skillsets are paid 26% more than analysts with a knowledge of just data science or Big Data
  • Startups seem to be cashing in on professionals with R & SQL skills whereas larger organizations still seem to favour SAS, since they can afford to buy expensive proprietary software

The research methodology used to source data for the Analytics & Big Data Salary Report 2016

The data points include people who came into contact with Analytics Vidhya through their website, job platform, hiring competitions and other sources. This number also includes, but is not limited to, people applying for jobs on the Analytics Vidhya website, confidential searches, hackathons, and tie-ups with skill-enhancement partners.

People with only advanced analytics / data science skills were considered for the study. For example, people who were only experienced in MIS have been excluded from the study. Once filtered, the skill sets of the survey participants were appropriately categorized in order to gain insights into the data.

The findings are open to biases arising from the nature of the jobs / competitions hosted on Analytics Vidhya’s website. However, we believe that even with this bias, this first-of-its-kind study of the Indian analytics industry and the thousands of analytics professionals it employs reveals many fascinating ground realities.


About Jigsaw Academy
Jigsaw Academy, the online school of analytics, has trained over 40,000 students across 30+ countries in the most widely used industry-relevant data analytics tools and techniques. Jigsaw Academy’s founders, Gaurav Vohra and Sarita Digumarti, have over 25 years of combined experience in consulting and analytics across multiple industry verticals in India and the United States.

About Analytics Vidhya
Analytics Vidhya is one of the world’s largest and fastest growing analytics communities. They aim to make data science knowledge free and accessible to people across the globe. Analytics Vidhya entertains close to 450,000 visits from data science professionals across the globe, who access the organisation’s blog, discussion portal, hackathons, meetups and webinars.


Accelerator and Incubator Alternatives https://dataconomy.ru/2015/12/25/accelerator-and-incubator-alternatives/ https://dataconomy.ru/2015/12/25/accelerator-and-incubator-alternatives/#respond Fri, 25 Dec 2015 09:30:13 +0000 https://dataconomy.ru/?p=14643 Not all accelerators are equal, and not all start-ups will benefit from their rigid programs. Here’s a list of alternatives to help grow your start-up without handing out shares left and right. There are a truly absurd amount of accelerators and incubators around the world. For all the good they have done, they can also […]]]>

Not all accelerators are equal, and not all start-ups will benefit from their rigid programs. Here’s a list of alternatives to help grow your start-up without handing out shares left and right.

There is a truly absurd number of accelerators and incubators around the world. For all the good they have done, they can also do plenty of damage. They come and go, and some leave several dissatisfied start-ups in their wake. In fact, so many come and go that it is nearly impossible to keep an accurate, up-to-date list of them. For start-ups that have tried the traditional accelerator/incubator route, or are wary of it altogether, there are other options available.

Maker / Hackerspaces

For many, makerspaces are the grassroots ideal for start-up creation. These spaces have sophisticated equipment that some start-ups desperately require. Paying a membership fee means access to all the tools a young start-up might be itching for. Plus, as makerspaces become more prominent and popular, more companies are investing in them. Governments, corporations and even universities may offer funding or space. If you are near a big city, chances are you have a makerspace or hackerspace nearby.

The real power of these spaces doesn’t end at equipment. They also tend to have a very energetic atmosphere. Surrounded by similar makers and creators, there is never a shortage of creativity or input. Another great advantage is how they lend credibility to an otherwise small or unknown start-up. Much like an endorsement from an accelerator or big company, having a real space to work in lends traction to a new name. Start-up founder Tom Panzarella, who works out of the makerspace NextFab Studio, explains: “You’re not these two guys in a garage building a robot, right. You have your 21,000-square-foot production space; the boardroom here is really nice if we need to have meetings.”

Ideal for: Access to expensive equipment
Examples: Sudo Room (USA), London Hackerspace, TechShop

Co-working Spaces of Varying Style

The beauty of co-working spaces is their unbelievable variety. From strict and quiet to friendly innovation bonanzas, they offer all kinds of support. Some are more involved than others, offering perks that range far beyond a coffee machine. Many are run by like-minded founders and their experienced entourage. They may listen to pitches, share duties or offer helpful (totally optional) seminars. They might also just offer you a place to hang your hat and do your work. These are ideal spaces for enjoying an accelerator-like environment while keeping full control of your start-up and avoiding the tight grip of rules or sponsors.

Ideal for: Making connections, getting outside help
Examples: Fishburners (Australia), Tech Liminal (USA), AfriLabs (Africa)

Research Institutes

For the founder without a startup. Companies like these take great ideas and turn them into start-ups. To do so, they seek out founders. It sounds almost counterintuitive to start-up culture, but for the right mind these scenarios might be ideal. This is for the entrepreneur who wants to be part of the culture but doesn’t have the “golden ticket” idea yet. Much like a carefully constructed boy band, these institutes want to create a team that will flourish.

Ideal for: Entrepreneurs
Examples: NoveLook (Israel)

Meetup Platforms

Hackathons and similar short events sound, at first, like what nerds might do on any given weekend. These platforms, however, thrive on a vital part of innovation: speed. Technology, more than almost any other field, becomes irrelevant fast. Start-ups that take too much time to create, develop, or pitch are left in the dust. Events like Startup Weekend push individuals together with only the common goal of creating. They are a whirlwind education in how to become a successful start-up. Going into these events with a bit of forethought may also yield extra possibilities. Winners don’t just get a medal; they may get written up in VentureBeat or TechCrunch. Even the BBC and CNN cover such events.

Ideal for: Mingling, Exposure and Experience
Examples: Startup Bus, Startup Weekend, Oxygen

University Accelerators

Young people are becoming more and more comfortable with the idea of starting their own business. Multiple studies have shown that today’s students are deeply imbued with an entrepreneurial spirit. They don’t want to work at corporations; they want to create something of their own. This is likely why university accelerators are becoming so widespread. Different programs have different requirements. While some are rather unusual (for example, requiring that a founder be enrolled in a certain program at the school), others accept founders who simply have a connection to the university, whether as alumni or even as residents of the town. These programs also vary wildly in usefulness, and the most successful are tied to the big, obvious names like Harvard, MIT and Northeastern. They do not necessarily offer the same wealth of resources that a full-blown accelerator might: there are no outside mentors, and fewer big connections.

Ideal for: Young folks, getting your feet wet and staying local
Examples: Stanford StarX, Accelerate Cambridge

Startup-In-Residence

Another version of the city-funded program is the start-up residency. Many of these want start-ups to focus on certain local issues or on ways to give back to their community. Of course, that can mean a lot of things: fostering community, analyzing local data or helping the local economy. While many people may not think of themselves as the philanthropist type, many new apps and technologies could bring great changes to communities. Don’t forget, sometimes these residency programs come from big companies looking to improve their own business.

Ideal for: Developing start-ups looking for outlets
Examples: San Francisco Entrepreneurship in Residence, Amsterdam Entrepreneurship in Residence, Dell Startup-in-Residence

Public Programs

These are programs designed or heavily backed by cities, counties or towns specifically to retain talent and start-ups in the area. They look a lot like incubators and accelerators, but with a very different goal. While they do want their start-ups to succeed, the overall goal is to help local companies thrive and stay in the area. As a result, they want to keep their start-ups happy. They generally offer services similar to those of a traditional incubator, but somewhat scaled down. Granting space and mentorship, they might be a more reasonable support system for the everyday startup.

Ideal for: Getting an accelerator experience
Examples: International Labs Madrid, Welcome City Lab (France, tourism-specific)

Given how many accelerators will fail their start-ups, and how many start-ups are unprepared to be properly accelerated, these alternatives are always a step in the right direction. If organic acceleration just won’t cut it, or your start-up has snagged a spot at the famed TechStars accelerator, then the more traditional programs may be worth a try. Just don’t forget that it’s all a business. Not just any old building with a “start-up accelerator” sign will do the trick.


image credit: impact hub global network

“10% inspiration, 90% perspiration” – Interview with Splunk’s Philipp Drieger https://dataconomy.ru/2015/11/06/10-inspiration-90-perspiration-interview-with-splunks-philipp-drieger/ https://dataconomy.ru/2015/11/06/10-inspiration-90-perspiration-interview-with-splunks-philipp-drieger/#respond Fri, 06 Nov 2015 09:07:03 +0000 https://dataconomy.ru/?p=14435 Philipp works as a Sales Engineer at Splunk. His background is in data visualization and analytics with experience in automotive, transportation and software industries. Philipp’s focus is to leverage Splunk as a data platform for analytics and visualization. He recently won the Deutsche Bahn Hackathon analyzing a 10GB data set around railway infrastructure in 24h […]]]>

Philipp works as a Sales Engineer at Splunk. His background is in data visualization and analytics, with experience in the automotive, transportation and software industries. Philipp’s focus is to leverage Splunk as a data platform for analytics and visualization. He recently won the Deutsche Bahn Hackathon, analyzing a 10GB data set around railway infrastructure in 24 hours (read more: http://blogs.splunk.com/2015/06/08/splunk-team-wins-db-infrastructure-data-challenge-in-24h-iot-hackathon/). In collaboration with Robotron, Philipp is working on data mining approaches for IoT and industrial data to optimize business processes.

We are proud to have Philipp presenting at Data Natives 2015!


Can you describe your professional journey up to the point of joining Splunk?

First of all thanks for this interview and I’m looking forward to Data Natives 2015! Before Splunk I worked as a freelance software developer and consultant on many interesting projects – mainly in the automotive and transportation industries. I was focused on real-time 3D visualization to make heterogeneous data accessible and meaningful – regardless of whether it’s car data or complex infrastructure planning projects. As well as that I was researching visual text analytics and published two papers. For the creative part I’ve been active in digital arts for years and realized many audiovisual and interactive art projects like http://www.cubeflow.de.

What kind of problems do you aim to solve as a Sales Engineer at Splunk?

With the variety of ways that Splunk software is used by our customers, I deal with many interesting and different use cases covering IT operations, security, application delivery and business analytics. Recently I’m working on more and more projects in the area of industrial data and the Internet of Things.

How has the field of business intelligence evolved over the last few years with the rise of ‘Big Data’?

I think there is a continued shift in BI since the rise of ‘Big Data’ as new data sources and types following the ‘four v’s’ principle (volume, variety, velocity and veracity) come into play and add substantial value to existing data. Technically, it can be challenging to get insights from such a changing data landscape quickly. This is where Splunk makes the difference: as a universal platform for machine data Splunk provides this flexibility due to late binding and a powerful search language to correlate and analyze heterogeneous data at large scale – including both real-time and historical information.

If you could apply the technology being developed at Splunk to any real world problem, which would it be and why?

That’s a funny question because in fact all the technology developed at Splunk is used to tackle real world problems: to prevent and detect cyber threats and fraud, proactively monitor IT infrastructures to find and fix errors quickly, analyze and visualize business processes, machine data and sensor data to get new insights. For Example Splunk software is used at the Police station in Chandler, in the USA: They evaluate data to support officers on patrol and monitor problematic neighborhoods. And the IT departments of financial institutions use the software to secure online payments.

What are the key lessons you’ve learned in your career? Biggest ‘Ah ha!’ moments or mistakes?

Expressing it for a real data native: It’s about 10% inspiration and 90% perspiration. If you want to achieve 100% you need to combine 90% and 10% wisely.

What advice would you give to technically minded youths looking to get their career started?

Keep an open mind and learn about different technologies to connect the dots.

Which companies and individuals inspire you, and keep you motivated to achieve great things?

There have been many companies and individuals that inspired me in different parts of my career, but one of my great inspirations is people in history: philosophers, inventors, artists. Right now I enjoy working with brilliant people here at Splunk.

Processing Big Data Using 1.5KB https://dataconomy.ru/2015/06/22/processing-big-data-using-1-5kb/ https://dataconomy.ru/2015/06/22/processing-big-data-using-1-5kb/#comments Mon, 22 Jun 2015 10:36:09 +0000 https://dataconomy.ru/?p=12956 During one of our data-munging sessions here at Coralogix, we found ourselves needing to assess the cardinality of large data sets. Getting the accurate result is seemingly trivial: you simply iterate over the data, and count the number of unique elements. In reality, however, the task is more troublesome, mainly due to three constraints: process […]]]>

During one of our data-munging sessions here at Coralogix, we found ourselves needing to assess the cardinality of large data sets. Getting the accurate result is seemingly trivial: you simply iterate over the data, and count the number of unique elements. In reality, however, the task is more troublesome, mainly due to three constraints: process time-complexity, process space-complexity and distributed processing. Surprising, isn’t it?

Many real-time algorithms require the execution of a great deal of queries every second, and are thus tightly time-bound. In these cases, even dealing with small data-sets can become a problem – as iterative counting may slow the algorithm down too much for real-time purposes.

On the other hand, when dealing with Big Data, issues arise from the space-bounded hardware. The data cannot be completely loaded into memory, limiting performance to reading from the hard drive, which is unacceptable.

Another problem comes to mind when dealing with Big Data: scaling out. In this case, reaching consensus among different machines is necessary without replicating all the data in one place for checking uniqueness.

One solution for these problems is to give up precision. However, this is not easily done. In this blog post we’ll explore an intriguing cardinality estimation algorithm: HyperLogLog.

HyperLogLog is an ingenious solution for Big Data, estimating the number of unique elements in huge datasets with 98% precision, with a memory footprint of only ~1.5kB!

I know, that sounds crazy. But trust me, the math works.

The idea behind HyperLogLog is to use the statistical properties of hash functions, while avoiding the need to keep the hashes themselves. Briefly, each element is inserted into a hash function, and the number of sequential trailing 1’s (or 0’s) is counted. This integer is all you need.

A good hash function assures us that each bit of the output sequence is independent of the others, thus having an equal probability (50%) of being set to 1 – similar to tossing a coin. The probability of randomly stumbling upon a sequence of $n$ trailing 1’s is thus $1/2^n$. Accordingly, looking at it the other way around, we would need to hash about $2^n$ unique elements in order to stumble upon a sequence of $n$ trailing 1’s in one of the hashes. A naïve algorithm would simply return this figure as an estimation for the number of unique elements in a list. However, using only one hash function is prone to a lot of error – it’s enough for one element to incidentally receive a hash value with many trailing 1’s for the estimation to become extremely biased.
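
To make the naïve algorithm concrete, here is a minimal sketch (our own illustration, not code from the original post; the MD5-based hash and the function names are our choices):

 import hashlib

 def trailing_ones(h):
     """Length of the run of 1-bits at the low end of integer h."""
     n = 0
     while h & 1:
         n += 1
         h >>= 1
     return n

 def naive_cardinality(items):
     """Single-register estimator: returns ~2^(longest trailing-1 run seen).
     This is exactly the high-variance scheme described above."""
     longest = 0
     for item in items:
         h = int.from_bytes(hashlib.md5(str(item).encode()).digest()[:8], "big")
         longest = max(longest, trailing_ones(h))
     return 2 ** longest

Because the result can only ever be a power of two, and a single lucky hash inflates it wildly, the estimate is very coarse; the refinements below fix exactly this.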

On the other hand, utilizing many hash functions is costly – as each hash calculation requires an O(n) operation. Additionally, finding a family of several hundred or even thousand good distinct hash functions (for increased accuracy) is also a difficult task.

To solve these problems, HyperLogLog utilizes stochastic averaging:

Instead of keeping a single integer for the estimation (“1 bucket”), HyperLogLog introduces many ($2^k$) registers, or “buckets”, which will be averaged when an estimation is requested.

How are these buckets updated? During insertion, each element is run through one good hash function, and the resulting hash value is broken into two pieces. The first $k$ bits are converted into the integer index of the target bucket among our $2^k$ buckets. On the remaining bits of the hash value we count $T$ – the length of the run of trailing 1’s. If this $T$ is bigger than the current register value, that register is updated accordingly; otherwise we move on to the next element. This way we get an ensemble of $2^k$ estimators for the cardinality of our data set.

(Neustar Research has built an awesome online demo of HyperLogLog which makes all this much more comprehensible: http://content.research.neustar.biz/blog/hll.html)

 

When an estimation is requested, we just average the register values,

$$\hat{T} = \frac{1}{m}\sum_{j=1}^{m} T_j \qquad (m = 2^k \text{ registers})$$

and output the estimation as before, up to a small bias-correction constant:

$$E \approx m \cdot 2^{\hat{T}}$$

 

However, the arithmetic mean $\hat{T}$ is sensitive to extreme values, and may introduce higher estimation error. This issue is tackled by using the harmonic mean instead, which is much better when dealing with such values. These 5 simple lines of Python code emphasize this nicely:

 >>> from scipy import mean
 >>> from scipy.stats import hmean as harmonic_mean
 >>> values = [1]*999 + [100000]
 >>> mean(values)
 100.999
 >>> harmonic_mean(values)
 1.0010009909809712

Of course, the more registers you have, the more accurate your result can get. To show this, we ran a small data set with about 250,000 unique elements through HyperLogLogs of different sizes. The figure below demonstrates how much more effective it is:

 

HyperLogLog Results

 

As you can see, the dark line, which represents an HLL with $2^{16}$ registers, is closest to the zero-error line, with very little error variance and an impressive average of 0.000223% error across the board!

Another edge this algorithm brings is a straightforward solution for distributed processing: when combining results from several machines, each running HyperLogLog with the same hash function, all that is needed is to keep the maximum value of each pair of matching registers to obtain the cardinality of the whole system!
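
Putting the pieces together, here is a toy HyperLogLog in Python (our own sketch for illustration; it uses the standard HyperLogLog bias-correction constant, which strictly speaking was derived for the leading-zero register variant rather than the trailing-1’s description above):

 import hashlib

 class HLL:
     """Toy HyperLogLog with 2**k registers. Illustrative, not production code."""

     def __init__(self, k=14):
         self.k = k
         self.m = 1 << k              # m = 2^k registers ("buckets")
         self.registers = [0] * self.m

     @staticmethod
     def _hash64(item):
         # 64 bits from MD5; a fast non-cryptographic hash works just as well
         return int.from_bytes(hashlib.md5(str(item).encode()).digest()[:8], "big")

     def add(self, item):
         h = self._hash64(item)
         bucket = h >> (64 - self.k)            # first k bits pick the bucket
         rest = h & ((1 << (64 - self.k)) - 1)  # remaining bits
         t = 0                                  # T = run of trailing 1's
         while rest & 1:
             t += 1
             rest >>= 1
         if t > self.registers[bucket]:
             self.registers[bucket] = t

     def estimate(self):
         # harmonic mean of per-register estimates, bias-corrected
         # (real implementations add small- and large-range corrections)
         alpha = 0.7213 / (1 + 1.079 / self.m)
         z = sum(2.0 ** -r for r in self.registers)
         return alpha * self.m * self.m / z

     def merge(self, other):
         # distributed counting: keep the max of each matching register pair
         assert self.k == other.k, "sketches must use the same bucket count"
         self.registers = [max(a, b) for a, b in zip(self.registers, other.registers)]

Each machine builds a sketch over its own share of the data and ships only the $2^k$ small registers; merging them with merge() and calling estimate() then yields the cardinality of the union, exactly as described above.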

Summing up, cardinality estimation with HyperLogLog is extremely efficient when dealing with Big Data sets – keeping the error within a few percent of the true value. The process can run on distant machines, requiring minimal coordination or data exchange. And all this is done while typically using little more than a kilobyte of state data per machine.

Truly state of the art.

Side note on Python’s hash() function

Python has a nice built-in function called hash(), which is seemingly appropriate for hashing purposes. However, it is not considered a good hash function, and it is not even intended for real-world hashing at all!

For example, Python hashes every integer into itself, violating an important property where similar, but not identical, inputs hash into dissimilar outputs:

 >>> hash(1135235235)
 1135235235
 >>> hash(-135235)
 -135235
 >>> hash(0)
 0
 >>> hash(1)
 1

Additionally, the underlying CPython implementation uses (-1) as an error code. Accordingly, no input can get this hash value, returning instead (-2):

 >>> hash(-2)
 -2
 >>> hash(-1)
 -2
 >>> id(-2) == id(-1)
 False

Lastly, since Python 2.7.3/Python 3.2.3, each Python process can initialize the hash() function with a different seed (and does so by default since Python 3.3), resulting in a totally different hash value for identical string inputs across runs:

 >>> hash("different results on every run") # python process #1
 4996284735471201714
 >>> hash("different results on every run") # python process #2
 5097553022219378483

This is a security feature, to prevent an attacker from designing a set of inputs which would always collide in internal hash tables, intentionally resulting in very long processing times (an actual denial-of-service attack).

For these reasons I’d recommend using a non-cryptographic and fast hash function, such as Murmur3 or the like.

Further reading:

The original HyperLogLog article, by P. Flajolet – HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf


Lior Redlus is Co-Founder and Chief Scientist at Coralogix. He holds an MSc degree in Neuroscience and information processing from Bar-Ilan University. He has been a statistics advisor for a variety of scientific research papers, and won 1st place in Israel’s ‘BrainHack’ hackathon 2013 for the invention of the Miyagi neuro-helmet Learning Enhancer.


(Header Image Credit: Thomas Hawk / Python / CC BY-NC 2.0 )

Why We Work in FinTech: Industry Insiders Weigh In https://dataconomy.ru/2015/04/27/why-we-work-in-fintech-industry-insiders-weigh-in/ https://dataconomy.ru/2015/04/27/why-we-work-in-fintech-industry-insiders-weigh-in/#comments Mon, 27 Apr 2015 15:55:58 +0000 http://ftjournal.com/?p=1258 “FinTech is a mission to change the finance sector from its current state as a monopoly, to a more democratic, transparent sector that really serves customers. I wouldn’t want to be anywhere else, least of all, the traditional finance sector!” CEO of Kantox, Philippe Gelis, expresses a kind of FinTech manifesto that’s sweeping through tech […]]]>

“FinTech is a mission to change the finance sector from its current state as a monopoly, to a more democratic, transparent sector that really serves customers. I wouldn’t want to be anywhere else, least of all, the traditional finance sector!”

CEO of Kantox, Philippe Gelis, expresses a kind of FinTech manifesto that’s sweeping through tech and finance. It can be heard echoing through the halls at young startups, filling headlines in the media, and even being whispered behind closed doors at major banks and investment firms (Gelis confirms that plenty of young people from Goldman Sachs and more have contacted him about job opportunities). The FinTech forces are alive and strong, and many former banking officials are leaving steady salaries and flocking toward the fledgling startup life – in the name of innovation (or so we think). But what’s really behind these forces, and do they have the power to last for years to come?

We recently polled our audience in hopes of gaining a better understanding of the FinTech allure – as in, why people are entering into FinTech, and how they feel the growing industry is perceived. We asked:

  • What was the main motivation for you to join or found a FinTech startup?
  • What do you think is the primary motivator for others to join or found a FinTech startup?
  • How do you feel the FinTech industry is perceived by outsiders?

FinTech: freedom, change, and opportunity

A 50% majority of respondents named opportunity to impact an emerging industry as their highest motivation for working in the sector. This was followed by build and own a financial product (17%), enable or develop technological advancements (13%), high potential financial return (13%), more positive outlook for the job market (1%), and other (1%).


When asked why they believe others are getting into FinTech, poll respondents selected more selfish reasons overall. The majority chose build and own a financial product, and high potential financial return ranked second.


Nevertheless, in both polls, the top two results were opportunity to impact an emerging industry and build and own a financial product – both of which signal an important bottom line: the FinTech space offers freedom and potential that the traditional finance sector does not. Co-founder and managing director at FinTech startup Savedo, Steffen Wachenfeld, views FinTech as a dynamic realm with multiple attractive qualities for top talent. “Individuals have a substantial impact, can see the big picture, and personally benefit from the success of their ventures,” he says.

Renowned finance reporter and FinTech expert Elizabeth Lumley was first drawn to FinTech’s potential to change the industry after being invited to a payments hackathon at the Google Campus. “I went along. I paid for my beer, via a Tweet. I saw people I knew, people from banks – who’d paid for their beer, with a Tweet. I thought ‘something is changing’ and it has very little to do with anger over the 2008 crisis. It is deeper than that. And I want to be a part of it…So I started writing blogs,” she says. She’s had a hand in spreading knowledge within the field ever since.

Varying perceptions of FinTech

Although new ideas and a revolutionary mindset are giving FinTech important momentum, the movement’s foundation isn’t without its weak points. The majority of poll respondents believe that outsiders view FinTech as innovative, much needed modernization for an archaic industry (39%), but a good chunk of them think the rest of the world views it as overhyped, not a “revolution” (23%). Part of the latter might be attributed to entrepreneurs hastily approaching an industry about which they have almost zero knowledge – which can be a recipe for failure.


Lumley never quite bought into the disruption narrative taken on by startups within the sphere for this very reason. How, she wondered, could founders sink the banking battleship without even understanding what it was made of? Now, she sees the potential for FinTech, but strongly advocates founders to develop a solid knowledge of how banking and financial services operate. “Those are the people I think will survive and have companies that will last past the seed funding stage,” she says.

To others, FinTech is simply the means to provide the financial sector with much-needed digitization. “While most industries are pretty much done with the digital revolution (music – Spotify, ebooks – Amazon, dating – Tinder, etc.) banking and financial services still have a huge potential to offer digitized services,” says Anna Friedrich, Head of Communications at Kreditech. In this way, the FinTech manifesto turns less anti-bank and more pro-consumer. Instead of being a competitor or enemy of banks, it’s generating positive interest across the board.

“In the end,” says Friedrich, “FinTech is to banks what Spotify was to the music industry in 2007.”

Why are you in FinTech? Let us know in a tweet or comment.

(Photo source: Heisenberg Media via Creative Commons)

“New services will put us in the driver’s seat.” – Lars Markull of Figo Talks FinTech and the Future https://dataconomy.ru/2015/04/24/new-services-will-put-us-in-the-drivers-seat-lars-markull-of-figo-talks-fintech-and-the-future/ https://dataconomy.ru/2015/04/24/new-services-will-put-us-in-the-drivers-seat-lars-markull-of-figo-talks-fintech-and-the-future/#respond Fri, 24 Apr 2015 14:30:56 +0000 http://ftjournal.com/?p=1263 Lars Markull is all about FinTech. Currently working in business development for German banking API figo, he’s both a catalyst and an observer of the financial sector’s digitization. We caught up with Markull to discuss the FinTech community, a digital future, and the figo API. What is your background, and what drew you to working […]]]>

Lars Markull is all about FinTech. Currently working in business development for German banking API figo, he’s both a catalyst and an observer of the financial sector’s digitization.

We caught up with Markull to discuss the FinTech community, a digital future, and the figo API.

What is your background, and what drew you to working in FinTech?

Actually I always wanted to work in a bank and never really considered working in a startup. However, while finishing my undergraduate degree in finance in 2012, I had my first contact points with FinTech through an internship at a venture capital company with a FinTech investment (SumUp). It made me realize that the industry is at the edge of disruption and that mobile POS is just the beginning. I wanted to be part of this change, and joining a startup instead of a bank was a purely rational decision: it was obvious that new players like FinTech startups – not incumbents – would bring disruption and cause an ongoing change to the industry.

2014 saw incredible growth for FinTech. Why do you think it took so long for the ‘revolution’ to begin?

Personally, I don’t think it is a surprise that we haven’t seen disruption earlier in this industry. Regulation, power and credibility of banks and many other reasons caused entry barriers for new players – and not just for startups but for everyone. Consequently, it was logical for industry outsiders to disrupt “easier industries” first. But I think we can all agree, that time has come now for the financial service industry.

Figo provides access for startups to integrate with banks, creating many opportunities. What are some of the most interesting services you’ve seen driven by APIs such as these?

With figo’s banking API, our partners can integrate banks and many other financial sources into their own services. This service is definitely of high interest for FinTech startups, but banks can also benefit from such multi-banking functions.

As of today, we have in total more than 180 developers who are accessing our API in different ways; nevertheless, the main benefit is the same for all: speed! The industry is growing at a rapid pace and a banking API enables our partners to focus on their core product instead of “backend functions.”

We have many interesting solutions on our figo API, but Auxmoney is a very well-known player and an interesting case as well. With our figo API, the P2P crowd-lending platform Auxmoney is finally able to score loan applicants in real time. The loan applicant can connect his bank account through figo to Auxmoney, and Auxmoney is able to score the customer faster and more accurately than before. Additionally, there are many more banking API use cases for Auxmoney.

A second interesting banking API example is from the US, called Digit. The service is a small add-on for a bank account and would probably never have been developed without the help of a banking API. The user connects his bank account to Digit and the service automatically analyses the income and expenses on this bank account. Based on these transactions, Digit automatically calculates an appropriate savings amount for the user and transfers this amount to a separate savings account. This is a small but great example of how a banking API can foster new services in the FinTech space.

Chris Skinner recently commented on the real opportunity being FinTech banks, rather than integration with traditional banks. How does that impact figo’s mission in the future, if you agree with the statement?

I totally agree with Philippe Gelis’ outlook for the FinTech industry. He describes a future where traditional banks will be replaced by FinTech banks. The key advantage of these FinTech banks is their smart APIs, which will allow any third-party service to easily access customers’ financial data.

Even though I agree with the statement, it is a big question when the majority of banking customers will switch from traditional banks to these FinTech banks. This will surely not happen in the next five years.

With respect to figo, I do not think that such a development will make a banking API redundant. It might be easier for third parties to connect to these FinTech banks, but the financial industry as a whole will still be fragmented and financial data will be spread across many sources. A banking API will still add value for companies in this fast-paced environment.

Are the biggest challenges you face to do with market education, technical issues, or regulation?

These three are surely the most important challenges for us. It is hard to say which one is the biggest challenge, and they also differ from country to country. In general you can link regulatory and technical issues together. Currently, the European Union is discussing PSD2, which covers “access to bank accounts.” It is expected that the EU will rule that banks cannot forbid their customers to use financial data in third-party services. Consequently, our regulatory challenges will hopefully decrease. This will reduce technical challenges as well, since banks would not be allowed to block third-party services anymore but would rather have to open up to them (though we might need to pay banks for accessing the data).

This leaves market education as the last variable. This is, and will definitely be, an ongoing challenge for us in the next couple of years, but fortunately, we are not the only one educating the customer about new possibilities in the financial service industry. Every FinTech startup and innovative bank is, as well.

How are you leveraging the FinTech community to drive interest in your API?

The growing FinTech developer community is indeed a very important resource for us. In order to increase interaction between figo and our developers we are planning different events – most importantly a FinTech hackathon (Bankathon) in Frankfurt next week. We are hosting this two-and-a-half-day hackathon with our friends from Gini and can happily announce that the event is fully booked. The event will be very different from a typical hackathon, since many of the participants already have strong FinTech experience: there will be many active figo API developers, and several existing FinTech startups (Number26, Vaamo, FinLeap and many more) have answered our call and want to “code the future of FinTech” with us. The participants will not only benefit from the creative and intensive experience but also from some very helpful connections. We will have representatives from our sponsors (e.g. Deutsche Bank, HypoVereinsbank and biw Bank) at the event who will be able to provide helpful banking insights, and the impressive jury (Orange Growth Capital, Accel Partners, SBT Venture Capital and Anthemis Group) will at the very end decide who wins the first prize of the Bankathon 2015.

Do you have any final comments about figo and the future of financial services?

Financial data is both the most intimate and most powerful data we have about ourselves. So far this data was always kept in silos – in our online banking service. At figo, we are seeing more and more newly developed services which are about to change that and will use financial data in a completely new way.

I personally expect that sharing personal financial data with third-party services will become common and many people will happily agree to share it. This may sound frightening at first. Nevertheless, people will agree to share their financial data with a certain service because they will get something in return. This could be a discount on a financial product, a better offering tailored to their needs, or access to a certain financial tool.

Right now we are used to the fact that our financial data can only be used in online banking solutions; in the future, however, regulation and new services will put us in the driver’s seat. Eventually, every one of us will be able to decide where he would like to use his financial data and who should have the right to access and work with it.

The Digital Future of Media & Journalism https://dataconomy.ru/2015/02/17/the-digital-future-of-media-journalism/ https://dataconomy.ru/2015/02/17/the-digital-future-of-media-journalism/#respond Tue, 17 Feb 2015 14:57:02 +0000 https://dataconomy.ru/?p=12084 Elina Makri is the co-founder of oikomedia.com, a networked digital platform designed to trace and connect journalists, fixers and media professionals around the globe. She’s also the Greek editor of dialoggers.eu (a Greek-German collaborative journalism project, and the founder of the Youth Investigative Journalism prize, which aims to train and reward budding data journalists. Thus, […]]]>

Elina Makri is the co-founder of oikomedia.com, a networked digital platform designed to trace and connect journalists, fixers and media professionals around the globe. She’s also the Greek editor of dialoggers.eu (a Greek-German collaborative journalism project) and the founder of the Youth Investigative Journalism prize, which aims to train and reward budding data journalists. Thus, she’s uniquely placed to discuss the future of journalism, which, from her perspective, is digitised, data-driven and transnational. We spoke to Elina recently about her work and her thoughts on the future of the ever-evolving media industry.


To begin with, tell us a little more about Oikomedia.

Oikomedia.com is a very targeted social network for media professionals: a platform that helps journalists trace other journalists, media fixers, photographers, cameramen, sound engineers and so on around the world, in order to collaborate or exchange views, ideas and expertise.

The idea behind Oikomedia is the following: media companies and professionals (local journalists, cameramen, fixers, photographers, etc.) need to be able to trace other professionals quickly, but also with a degree of accuracy (based on location, speciality and previous experience).

Think about a journalist in Oslo who has to leave today because of an uprising in Lebanon. He has to quickly find someone trustworthy who can help him, once in Beirut, get a general overview of the uprising, contact people, translate interviews and find interesting stories.

With Oikomedia, he just has to log in (in all probability through his mobile phone), run an advanced search (=> ‘beirut’, => ‘fixer’, => ‘english speaking’, => ‘previous demonstrations covered’), have a look at the portfolios of five or six local journalists, contact a couple of them directly and wait for an answer.
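Under the hood, this kind of lookup is essentially a multi-field filter over member profiles. As a rough illustration only — the field names and records below are invented for the sketch, not Oikomedia’s actual schema — such a search might look like this in Python:

```python
# Hypothetical sketch of a multi-field profile search, in the spirit of the
# advanced search described above; field names and data are invented.
professionals = [
    {"name": "Rana", "city": "beirut", "role": "fixer",
     "languages": ["arabic", "english"], "covered": ["demonstrations", "elections"]},
    {"name": "Karim", "city": "beirut", "role": "photographer",
     "languages": ["arabic", "french"], "covered": ["sports"]},
]

def advanced_search(profiles, city, role, language, experience):
    """Return every profile that satisfies all four filters."""
    return [
        p for p in profiles
        if p["city"] == city
        and p["role"] == role
        and language in p["languages"]
        and experience in p["covered"]
    ]

for match in advanced_search(professionals, "beirut", "fixer", "english", "demonstrations"):
    print(match["name"])  # -> Rana
```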


Why do you think there’s such a demand for services like Oikomedia?

I connected with the other Oikomedia founders precisely because there was such a demand. I met Gianluca, my Italian partner, because he was a fixer for quite dangerous reporting in Italy. Without him, foreign media could not “penetrate” the situation. When his reporting was published, Coca-Cola cancelled all its contracts in Southern Italy. There are so many media freelancers out there who need to find partners and set up projects. Plus, we take no commissions from those collaborations or any by-products; we just offer the platform. Moreover, we will soon release virtual bureaus: why maintain an expensive local newsroom or bureau when you can have a global, digital, cost-effective and customized network of media professionals whenever and wherever you need them?

Oikomedia is the answer to an increasing demand. This demand is a direct consequence of the global financial crisis, as well as of the challenges presented by new models of reporting and the struggle of big news agencies and broadcast providers to find business models that pay for the news. Newspapers are cutting more and more of their foreign correspondent offices and slashing foreign coverage at an alarming rate, despite the fact that readers’ demand for news is exploding.

To take a pragmatic view, media companies (MARKET) very often have no extra money or time to send their staff to foreign countries (NEED). They don’t even have time to search Internet directories in vain in order to get stories and ideas in unknown languages (NEED).

You’ve also established hackathons and an award for data journalism; why do you think data journalism has become so important?

Four words: explosion of available data. This data gives context to journalists’ stories. Data married to narrative structure and expert human knowledge can tell us a lot about our ever-changing world, and can provide checks and balances to a democratic society. I have also kept in mind the words of David Livingstone, the director of the New Jersey Trauma Center at University Hospital in Newark (from a ProPublica story): “In the absence of real data, politicians and policy makers can do what the hell they want.”

So, data journalism can be a powerful tool for:
1. Data control
2. Access to and analysis of information
3. New kinds of reporting with citizens’ participation – a method that can actually build the next generation of civic infrastructure by empowering citizens.

What do you consider to be the biggest changes the digital age has brought about in the media?

1. Tectonic shifts on the business side – we’ve all felt the importance of this. It’s actually a matter of life and death for a media organization. We are no longer sure if the news industry, as such, exists. Print media bleeds red ink.
2. New ways of storytelling: very, very compelling multimedia storytelling. Should I refer to the post-“Snow Fall” era?
3. Greater accountability within journalism. Many journalists are afraid of robowriting. I am not. I definitely believe that the work of the journalist has been upgraded after the digital tsunami.
4. More freedom: on-demand access to content anytime, anywhere, interactive user feedback, and the “democratization” of the creation, publishing, distribution and consumption of content. Paradise!

What do you foresee in the future of media and journalism in the digital age?

Absolutely better journalism made by new means… for the ones who will survive. Data will become – if it’s not already – a strategic resource for media. The digital age has provided tools for people (not only journalists) to hold authorities to account, has forced governments to adopt “open data by default” acts, as is the case for the Greek government, and has provided metrics for impact. I wouldn’t worry much about the business models. On the other hand, we should be cautious about surveillance mechanisms: who has access to our data, and for what reason?


(Image credit: Galymzhan Abdugalimov, via Unsplash)

]]>
https://dataconomy.ru/2015/02/17/the-digital-future-of-media-journalism/feed/ 0
28th February- 1st March, 2015- THack @ ITB Berlin https://dataconomy.ru/2015/02/09/28th-february-1st-march-2015-thack-itb-berlin/ https://dataconomy.ru/2015/02/09/28th-february-1st-march-2015-thack-itb-berlin/#respond Mon, 09 Feb 2015 14:23:37 +0000 https://dataconomy.ru/?p=11937 Do you have an idea for a digital innovation in travel? Can you code a working mobile or web application in 24 hours that helps travelers discover, enhance or share their journey? Does your original idea have the “wow factor” to win a share of Euro 3,000 in cash prizes … or potentially grow into […]]]>

Do you have an idea for a digital innovation in travel? Can you code a working mobile or web application in 24 hours that helps travelers discover, enhance or share their journey?

Does your original idea have the “wow factor” to win a share of Euro 3,000 in cash prizes … or potentially grow into a stand-alone business?

Then you – and your developer team of up to five hackers – have the chance to disrupt travel at THack @ ITB Berlin, a weekend hackathon February 28 and March 1.

THack is presented by Tnooz, the leading source of global news and analysis about travel tech. Our partners at ITB Berlin produce the world’s largest travel trade show, held annually in March.

THack @ ITB brings together more than 100 developers and leading travel and travel tech companies for a creative competition that seeks solutions to these four travel challenges:

  • Inspiration, discovery, memories – Create an application that helps travelers decide where to go and how to share their experiences and journeys
  • Local concierge – Make a product that explores options for tours, activities and services in-destination
  • Accommodation finder – Hotel? Hostel? Back-country camping? Home-sharing? Build a recommendation engine for personalized accommodations
  • Route planner – Create a travel application for transportation choices that are time-efficient, cost-effective or creatively scenic … or all of the above.

Tnooz’s Tier 1 sponsoring companies (including Lufthansa, GetYourGuide and others to be announced) will provide access to their proprietary APIs to use in programming solutions to the four challenges.

Sponsors will open their APIs to registered developers approximately one week before the event, plus provide on-site or online support for their APIs.

In addition to the cash prizes for winning hacks, a developer or team will be selected to attend a one-week mentoring camp at 33entrepreneurs start-up accelerator in Bordeaux, France.

More details here.

Sponsorship opportunities available.


(Image credit: Tnooz)

 

]]>
https://dataconomy.ru/2015/02/09/28th-february-1st-march-2015-thack-itb-berlin/feed/ 0
22 – 23 April, 2015 – Apps World Germany https://dataconomy.ru/2015/01/26/22-23-april-2015-apps-world-germany/ https://dataconomy.ru/2015/01/26/22-23-april-2015-apps-world-germany/#respond Mon, 26 Jan 2015 15:31:27 +0000 https://dataconomy.ru/?p=11688 Developer Event, Berlin, Germany: Apps World; the leading global event series in its 6th year that has grown to be the leading set of events in the app industry is launching a new show in Berlin – the city fast being recognised as a thriving hub for technology and start-ups, this April. Apps World Germany will […]]]>

Developer Event, Berlin, Germany: Apps World, the leading global event series in the app industry, now in its sixth year, is launching a new show this April in Berlin – a city fast being recognised as a thriving hub for technology and start-ups. Apps World Germany will bring together 150+ exhibitors and 6,000+ attendees from across the app ecosystem in Germany, Eastern Europe and beyond. The event will attract developers, mobile marketers, mobile operators, device manufacturers, platform owners and industry professionals for two days of high-level insight and discussion. The exhibition floor will be buzzing with some of the most innovative new start-ups showcasing alongside industry giants, live hackathons and interactive workshop sessions. Don’t miss your chance to be a part of it – secure your pass today!

See more details here!

]]>
https://dataconomy.ru/2015/01/26/22-23-april-2015-apps-world-germany/feed/ 0
10 Big Data Stories You Shouldn’t Miss this Week https://dataconomy.ru/2015/01/23/10-big-data-stories-you-shouldnt-miss-this-week-10/ https://dataconomy.ru/2015/01/23/10-big-data-stories-you-shouldnt-miss-this-week-10/#comments Fri, 23 Jan 2015 14:34:38 +0000 https://dataconomy.ru/?p=11640 This week, a wealth of industry experts shared their insights into the changing landscape of big data with us. On Monday, Chairman of MBN Solutions Paul Forrest shared his thoughts on how big data can become “the bridge” to success. On Tuesday, Jamal Khawaja informed us why we’ve never been more vulnerable to data hacks […]]]>

This week, a wealth of industry experts shared their insights into the changing landscape of big data with us. On Monday, Chairman of MBN Solutions Paul Forrest shared his thoughts on how big data can become “the bridge” to success. On Tuesday, Jamal Khawaja explained why we’ve never been more vulnerable to data hacks and breaches, and what we can do about it. We also spoke to one of the co-founders of Mutinerie about the fast-paced life of coworking spaces. On Wednesday, Philip Berliner shared with us his incendiary and insightful polemic “Social Media is Dead. Big Data is on Life Support.” Here are our picks of the best big data stories of the week:

TOP DATACONOMY ARTICLES

How Facebook Deal With Their Masses of User-Generated Data

For decades, companies have lived by the mantra “the customer is king”. But in the age of the Internet – when users generate hordes of data, not all of which is useful or accurate – the rules of the game have changed. We recently spoke to Tye Rattenbury, Trifacta’s lead Data Scientist, about how he dealt with masses of user-generated data in his previous role at Facebook, as well as in his current role with Trifacta.

The Most Interesting Man in Data Science

From apple grower to fine arts student, from software developer to machine learning PhD – Jose Quesada has done it all. Now, he’s established Data Science Retreat, a course to help people who share his passion for growth and development delve into the world of data science. We recently spoke to Jose about his remarkable story, the Data Science Retreat experience, and why so-called “soft skills” are often the making of future data scientists.

How We Can Use Data Mining to Fight Corruption

“Last year, Transparency International Georgia launched an open-source procurement monitoring and analytics portal, which extracts data from the government’s central e-procurement website and repackages it into user-friendly formats. Users can now generate profiles of procurement transactions made by government agencies, profiles of companies bidding for contracts, & search aggregate statistical data on government spending.”

TOP DATACONOMY NEWS

Mario Gets Self-Aware with Application of Artificial Intelligence

Researchers at the Cognitive Modelling Group at Germany’s University of Tübingen have developed the Mario AI Project, featuring a self-aware Mario who makes decisions based on what he learns through spoken instructions and by exploring his environment.

Stack Exchange Gain $40m to Become the Sole Platform That Matters for Dev Hiring Companies

Stack Exchange, the startup behind the popular Q&A platform for professional and enthusiast programmers, Stack Overflow, has secured $40 million in investment in a Series D round of funding, it revealed earlier this week.

Facebook Open Sources Deep Learning and AI Tools on Torch

“Facebook in an unprecedented move has open-sourced some of its machine learning tools with the scientific computing framework, Torch. The announcement came last week on Friday, through the Facebook AI Research (FAIR) blog.”

TOP UPCOMING EVENTS

2-3 February, 2015 – 14th Wearable Technologies Conference, Munich

“The world’s most profound event for wearables will once again gather all important players of the wearable tech ecosystem at the 14th WT | Wearable Technologies Conference in Munich on February 2 and 3.”  

11-12 February, 2015 – Big Data & Analytics Summit, Melbourne

“Big Data & Analytics Innovation is back in Australia for two days of inspiring, insightful & educational presentations, panel sessions, interactive discussions and world-class networking. Big Data & Analytics Innovation will bring you right up to speed to assist you with your every need covering an array of topics, themes and problem points.”

TOP DATACONOMY JOBS

Big Data Solutions Architect, adsquare

This is truly the chance of a lifetime. At adsquare you will be part of a rapidly growing ad tech startup that will add a totally new dimension to the world of mobile advertising. You will work hand in hand with the adsquare team on understanding the real-time, real-world user context. If you are enthusiastic about BIG DATA processing, think analytically and love distributed backend systems with state-of-the-art frameworks, you shouldn’t miss out on this opportunity.

Physicist / Mathematician / Computer Scientist as Data Scientist, Blue Yonder

If you would like to be part of a highly innovative, challenging and extremely future-oriented software market, and a young and highly motivated team, then please send us your detailed application.

]]>
https://dataconomy.ru/2015/01/23/10-big-data-stories-you-shouldnt-miss-this-week-10/feed/ 2
Hackathons, Smart Cities and Startups: Life Inside a Coworking Space https://dataconomy.ru/2015/01/20/hackathons-smart-cities-and-startups-life-inside-a-coworking-space/ https://dataconomy.ru/2015/01/20/hackathons-smart-cities-and-startups-life-inside-a-coworking-space/#comments Tue, 20 Jan 2015 15:07:43 +0000 https://dataconomy.ru/?p=11519 So you’ve got your Next Great Idea for a data science startup that’s going to change the world. There’s a diverse number of paths you could take- incubators, accelerators, hiring a cheap office space- but one option we hear less about is coworking. We recently spoke to William van den Broek, one of the Co-Founders […]]]>


So you’ve got your Next Great Idea for a data science startup that’s going to change the world. There are a number of paths you could take – incubators, accelerators, hiring a cheap office space – but one option we hear less about is coworking. We recently spoke to William van den Broek, one of the co-founders of the Mutinerie coworking spaces in Paris, about why he sees coworking as “the future of work”.


1. Tell us more about you and your company.
Mutinerie is a community of independent workers – entrepreneurs, startuppers and freelancers – based in Paris. Our headquarters is a coworking space of 400 square meters in the north-east of the city.
We have about 160 active members. Some of them work in our space almost every day, and others only come in a few days per month. Our patrons are developers, designers, architects, journalists, translators, consultants, artists – the list goes on. Mutinerie is glad to have a huge diversity of skills. Sharing values is the most important part: being able to share common goals and principles with others, trusting and respecting each other, are the keys to building the micro-society in which good things can bloom.
We also host and organize events, workshops and celebrations. Mutinerie has many goals and no limits!
2. Why do you think coworking spaces have become so popular in recent years?
Because the way we work is changing at a very rapid pace, and coworking spaces represent the future of work.
When Mutinerie started three years ago, there were about 1,000 coworking spaces all over the world. Today, that number is closer to 6,000. The number of coworking spaces has nearly doubled year-on-year for the past six years, and we don’t foresee that slowing down in the coming years.
Some studies have predicted that in most Western countries, the number of self-employed workers will exceed the number of employees within a decade or so. It is the beginning of a revolution in the way we work and organize our societies. But I’m not describing a shining future where everyone is free, wealthy and happy. Being independent is an exciting but dangerous adventure. Big companies are still able to offer protection and security in terms of revenue and social interactions. Coworking prevents the dangers of a society of the self-employed. It enables people to be independent and free, yet not isolated.
3. What do you think are the main benefits of using coworking spaces over traditional office spaces for new tech startups, and freelance tech professionals?
In most coworking spaces, you’ll have both peers (people sharing similar skills and goals with you) and people with complementary profiles around you. Startups are always seeking skills and talent – they need developers, designers, translators… It’s easy to find freelancers with these skills in the coworking environment. And it’s not merely “contact” – it’s seeing how these people work every day, and building trust with them.
Coworking spaces allow startuppers to be very flexible, which is good because you don’t really know if, tomorrow, you’ll need to employ three more people or if your project will collapse. Coworking spaces transform fixed costs into variable costs.
4. What differentiates your space from other coworking spaces?
We don’t try so hard to be different, we just try to be good!
The first thing was to find good people and to work hard to deliver a good quality of service. But at the same time, we have high expectations of what a community of coworkers should be. Coworkers are not in Mutinerie to show off or promote their own stuff. There is an authentic atmosphere, with authentic people who trust each other.
We managed to find a good balance between core coworkers, more infrequent patrons and travelers. In doing so, Mutinerie remains an open community with strong relations between members.
At Mutinerie, you will always have someone welcoming you and explaining how things work within the community. The result is that ideas and relationships can grow quick and strong.
5. Tell us about some of your patrons working in the data science/ tech field.
We have seen coworkers using data to rethink the way we do architecture and the way we organize our cities. I’ve been introduced to parametric architecture and smart cities, using our new knowledge of flows (of people, goods, cars, etc.) to optimize everything.
6. You’re organising a hackathon in March; tell us more about that.
Yes, we are organizing our third Hack/Make the Bank event on the 27th of March (the 10th edition of the Hack the Bank events). Developers, designers and members of the financial services industry will come together for 48 hours of brainstorming and software creation. We are very excited to see what attendees can come up with.
7. Have you previously held any other hackathons? What problems/challenges did they address?
We have organized and hosted quite a lot of hackathons. It’s the kind of format we believe in a lot, because it is very action-oriented, it gathers smart people and it focuses on the general interest.
You find the people first and the projects come later. That’s the opposite of a normal company, where the project comes before the people… And the outcome is high.
After a three-day hackathon, you can have the basis of an intelligent, open-source project able to solve problems. And maybe you’ll meet other people sharing your interests and skills, with whom you can later start a long-term collaboration.
8. What’s in store for Mutinerie in 2015?
The opening of another space in the south of Paris! We found another great spot to dwell in: a magnificent 800-square-meter building in the 14th arrondissement of Paris. It will open at the beginning of June.
We are already planning a lot of events, training sessions and celebrations in Paris, as well as in Mutinerie Village, a rural coworking space with accommodation in the countryside.
You can follow our adventure on Facebook, Twitter and our weekly newsletter. And of course, feel free to visit us in Paris if you need a place to work for a few days.


(Image credit: Mutinerie)

]]>
https://dataconomy.ru/2015/01/20/hackathons-smart-cities-and-startups-life-inside-a-coworking-space/feed/ 5
How We Can Use Data Mining to Fight Corruption https://dataconomy.ru/2015/01/19/how-we-can-use-data-mining-to-fight-corruption/ https://dataconomy.ru/2015/01/19/how-we-can-use-data-mining-to-fight-corruption/#comments Mon, 19 Jan 2015 10:10:46 +0000 https://dataconomy.ru/?p=11477 Data Mining is an analytic process designed to explore data (usually large amounts of data – typically business or market related – also known as “big data”) in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns… Two centuries ago, coal mining spurred the […]]]>

Data Mining is an analytic process designed to explore data (usually large amounts of data – typically business or market related – also known as “big data”) in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns…

Two centuries ago, coal mining spurred the European continent’s Industrial Revolution. Today, data mining is fueling the data revolution brought about by exploding streams of data. Using data mining techniques to profile customer preferences and predict purchasing patterns has become common practice in the private sector. But can data mining also be used to fight corruption? And if so, how?

Last year, Transparency International Georgia launched an open-source procurement monitoring and analytics portal, which extracts data from the government’s central e-procurement website and repackages it into user-friendly formats. Users can now generate profiles of procurement transactions made by government agencies, profiles of companies bidding for public contracts, and search aggregate statistical data on government spending. If citizens suspect law violations in electronic tender processes, they can submit an online report, which a Dispute Resolution Board reviews within ten working days.

Data mining’s potential to spot inadequacies in processes involving elected authorities and public money can be taken even further.

The European Commission, in cooperation with Transparency International, developed ARACHNE, a data analytics tool that cross-checks data from various public and private institutions and helps to identify projects susceptible to risks of fraud, conflicts of interest or irregularities.

Researchers from the Corruption Research Center Budapest have examined massive data sets of public procurement procedures from European Union countries, searching for abnormal patterns such as exceptionally short bidding periods or unusual outcomes (e.g. no competition for the winning bid, or bids repeatedly won by the same company). Using inferential statistics – analysis that draws conclusions beyond what the data directly captures – they identified corrupt behavior based on deviations from ordinary patterns.
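To make the idea concrete, here is a minimal sketch of that kind of red-flag screening. The fields, thresholds and toy records are illustrative assumptions for the sketch, not the researchers’ actual model:

```python
from collections import Counter

# Toy procurement records; fields and values are invented for illustration.
tenders = [
    {"id": "T1", "bidding_days": 30, "num_bidders": 5, "winner": "Acme Ltd"},
    {"id": "T2", "bidding_days": 4,  "num_bidders": 1, "winner": "FastWin GmbH"},
    {"id": "T3", "bidding_days": 28, "num_bidders": 1, "winner": "FastWin GmbH"},
]

wins = Counter(t["winner"] for t in tenders)

def red_flags(tender, min_days=10, repeat_wins=2):
    """Collect the warning signs described above for a single tender."""
    flags = []
    if tender["bidding_days"] < min_days:
        flags.append("exceptionally short bidding period")
    if tender["num_bidders"] == 1:
        flags.append("no competition for the winning bid")
    if wins[tender["winner"]] >= repeat_wins:
        flags.append("bids repeatedly won by the same company")
    return flags

for t in tenders:
    flags = red_flags(t)
    if flags:
        print(t["id"], "->", "; ".join(flags))
```

A production system would of course calibrate such thresholds statistically against the whole corpus rather than hard-coding them, but the deviation-from-the-ordinary logic is the same.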

Data mining can also be used to detect tax fraud and improve taxpayer compliance. In the aftermath of LuxLeaks, when a whistleblower released reams of data about tax evasion schemes in Luxembourg, the data mining techniques employed by New York City’s former finance commissioner David Frankel may provide some inspiration: by “identifying individuals who had businesses similar to others but who stood out as outliers on taxes paid”, the auditing team improved the efficiency of its investigations into companies suspected of underpaying taxes.

Similarly, data mining could be employed to fight money laundering: an algorithm reviewing banking data and comparing it with known financial crime data points may, for example, help reveal illicit financial flows – an issue that ranks high on Transparency International’s agenda.

The wealth of data that can nowadays be gathered through remote sensing, crowd-sourced citizen reports, news media, census data, cell phone activity, social networking sites and more, combined with traditional indicators, makes for seemingly endless opportunities. Do you want to identify issues of conflict of interest and/or revolving doors? Do you want to know what people are thinking about corruption in a specific country? Text mining techniques analysing social media noise during a given period of time may provide you with an answer.

There are many ways non-profits and civil society organisations can benefit from data mining on a pro-bono basis. These include hackathons and advocating for the replication of tools and platforms, which not only render data public but make it relatively easy to organize and process.

The European Citadel on the Move website, for instance, allows users to upload data sets and to create personalized applications, even for people with little experience of data management. One of the most transparent and user-friendly initiatives at the local level is the website Checkbook NYC 2.0, which provides access to New York City government’s US$70 billion annual budget. It details the way money is spent, including specific information on contracts, payments, revenues, budget reports, and audits. It features an application programming interface that lets third parties choose the data they want and then use it for their own purposes.
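The pattern such an interface enables is simple: request just the slice of spending data you need and analyse it locally. The endpoint and field names below are purely hypothetical stand-ins, not Checkbook NYC’s actual API:

```python
import requests

# Hypothetical open-data endpoint and parameters -- invented stand-ins,
# not the real Checkbook NYC interface.
resp = requests.get(
    "https://api.example-city.gov/spending",
    params={"agency": "parks", "year": 2014, "format": "json"},
    timeout=10,
)
resp.raise_for_status()

payments = resp.json()  # assume a list of {"vendor": ..., "amount": ...} records
total = sum(p["amount"] for p in payments)
print(f"{len(payments)} payments totalling ${total:,.2f}")
```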

Data mining’s nimble and purpose-oriented character can do a lot to dispel the fog in which the public sector operates. But more efforts are needed to exploit its potential to the full and make it available to the widest audience.



Dominic Ienco’s career has spanned over 30 years, combining a strong immersion in music and entertainment with corporate advisory and business development services. He has assisted numerous emerging companies with raising capital and, acting as a roving advisor, has been particularly successful in procuring international business partners for expanding corporations.


(Image credit: European Commission)

]]>
https://dataconomy.ru/2015/01/19/how-we-can-use-data-mining-to-fight-corruption/feed/ 2
Jumping from PhD to Data Scientist: 3 Tips for Success https://dataconomy.ru/2014/12/18/jumping-from-phd-to-data-scientist-3-tips-for-success/ https://dataconomy.ru/2014/12/18/jumping-from-phd-to-data-scientist-3-tips-for-success/#comments Thu, 18 Dec 2014 15:59:01 +0000 https://dataconomy.ru/?p=11099 William is a Data scientist at Quora, interested in data-driven decision making to improve both product and business. Always interested in learning new things and exploring the ubiquity of data in everyday life. The Interviewees DJ Patil is VP of Product at RelateIQ (acquired by Salesforce). He’s held many other roles, including former Chief Scientist at […]]]>

William is a data scientist at Quora, interested in data-driven decision making to improve both product and business. He is always interested in learning new things and exploring the ubiquity of data in everyday life.


The Interviewees

DJ Patil is VP of Product at RelateIQ (acquired by Salesforce). He’s held many other roles, including Chief Scientist at LinkedIn. DJ co-coined the term “Data Scientist” and co-authored Data Scientist: The Sexiest Job of the 21st Century. DJ transitioned into data science following a research scientist position at the University of Maryland. Follow him at @dpatil (Twitter) or at DJ Patil (Quora).

Michelangelo D’Agostino is currently Senior Data Scientist at Civis Analytics. Formerly, he was head of data science at Braintree and a senior analyst on the 2012 Obama campaign’s analytics team. He transitioned into data science following a PhD in Astrophysics from Berkeley. Follow him at @MichelangeloDA (Twitter) or at Michelangelo D’Agostino (Quora).

1. Seek fast, collaborative environments

During graduate school, Michelangelo did his PhD on IceCube, a neutrino physics experiment at the South Pole that measures cosmic neutrinos via sensors buried in the polar ice cap. One big transition for him as a physicist was the opportunity to learn in a fast, collaborative environment. Michelangelo explains:

All of a sudden, I was working with a couple hundred people all around the world, half in Europe, half in the US in all these different time zones.

It felt like I wasn’t working on something by myself. I was working on really interesting problems with other smart people and doing really hard work. I think that was what kept me in grad school – knowing that I was working with other smart people in a collaborative environment.

Michelangelo goes on to explain that fast, collaborative environments are what distinguish doing data science in industry from doing research in academia. After working for the 2012 Obama re-election campaign, Michelangelo briefly contemplated going back to finish his post-doc, but decided to stay in data science because the environment suited him better.

I like working with people a lot more than I like working by myself.  I like to work on things that have more impact.  You see a lot more of it in industry, in data science, than you do in research.

I like the pace a lot more.  I think research can often be very slow, especially particle physics.  It takes 10 years to build an experiment now.  You have to have a monastic personality to be a physicist nowadays.

Unfortunately, this kind of environment can be rare for PhD students doing academic research. DJ Patil explains the culture shock that many of them get when they take their first job in data science:

In academia, the first thing you do is sit at your desk and then close the door. There’s no door anywhere in Silicon Valley; you’re out on the open floor. These people are very much culture shocked when people tell them, “No you must be working, collaborating, engaging, fighting, debating, rather than hiding behind the desk and the door.”

I think that’s just lacking in the training, and where academia fails people. They don’t get a chance to work in teams; they don’t work in groups.

Ultimately, DJ says that forgetting that data science is collaborative is a common mistake people make when considering jumping into data science.

People make a mistake by forgetting that Data Science is a team sport. People might point to people like me or Hammerbacher or Hilary or Peter Norvig and they say, oh look at these people! It’s false, it’s totally false, there’s not one single data scientist that does it all on their own.

Data science is a team sport, somebody has to bring the data together, somebody has to move it, someone needs to analyze it, someone needs to be there to bounce ideas around.

It’s a common trap during one’s PhD to end up only focusing on one’s dissertation and research. Seek out opportunities to further your research in fast, collaborative environments. This includes getting involved with large collaborative projects in your department, team-based competitions in your field, working with others on side or related projects, actively speaking about your research, and attending various conferences, events, and activities!

2. Delve deeply into hard, dirty problems

Not everything you learn in graduate school is specialized domain knowledge. In fact, the experience of working on difficult problems and the strategies that you use to approach them is one of the most valuable skills that Michelangelo picked up during his astrophysics PhD. To get that experience that will ultimately become relevant to data science, Michelangelo suggests:

Work on a hard problem for a long time and figure out how to push through and not be frustrated when something doesn’t work, because things just don’t work most of the time. You just have to keep trying and keep having faith that you can get a project to work in the end. Even if you try many, many things that don’t work, you can find all the bugs, all the mistakes in your reasoning and logic and push through to a working solution in the end.

Specifically, you should be always looking for applications of your research on real, live datasets. This gives you the wisdom of all the nuances when dealing with large, messy datasets, and allows you to understand much more than just the theory of your research. Michelangelo explains:

You can read about it, and people can teach you techniques, but until you’ve actually dealt with a nasty data set that has a formatting issue or other problems, you don’t really appreciate what it’s like when you have to merge a bunch of data sets together or make a bunch of graphs to sanity check something and all of a sudden nothing makes sense in your distributions and you have to figure out what’s going on.

3. Do things beyond your academic specialty

During graduate school, the research that you are doing with your advisor might seem all-consuming. However, it is useful to step back, look at the bigger picture, and pursue the other skills that may serve to augment your experience as a PhD student. DJ offers a reminder:

Many people who come out of academia are very one-dimensional. They haven’t proven that they can make anything, all they’ve proven is that they can study something that nobody (except maybe your advisor and your advisor’s past two students) cares about. That’s a mistake in my opinion.

During that time, you can solve that hard PhD caliber problem AND develop other skills. For example, giving talks, coding in hackathons, etc. Do things in parallel and you’ll get much more out of your academic experience.

A traditional academic curriculum is actually lacking in teaching all of the skills one needs to even become a data scientist. Michelangelo notes this in his interview and says:

You can’t finish a degree and know all the things you need to know to be a data scientist. You have to be willing to constantly teach yourself new techniques.

Michelangelo elaborates that not constantly teaching yourself new things sends a negative signal to companies looking to hire data scientists.

From a hiring perspective, when I talk to PhD students who say they want to be data scientists, I become skeptical if they haven’t taken any active steps.

“Hey, I participated in these Coursera courses or these Kaggle competitions.” or “I’ve gone to the Open Government Meetup and have done these data visualizations.”

Things like that demonstrate that you can work on problems outside your academic specialty, and they show that you really have initiative.

One of the largest dangers of coming out of academia is that you constrain yourself to an environment that rewards an intensely narrow focus on one thing. To expand and to be able to tackle the challenges of becoming a data scientist, you must continue developing your other skills in parallel, and always be on the lookout for new challenges and opportunities.

More Resources

The Data Science Handbook features interviews from 25 amazing data scientists, including DJ and Michelangelo. Sign up at Data Science Handbook to get 3 free interviews (including the full interviews with DJ and Michelangelo).

For more concrete advice on making the transition from academia to data science, check out the answers at

Subscribe to Storytelling with Statistics to get updates on more posts like these!

(This post was originally published on Quora)

(Image Credit: Eric Fischer)

]]>
https://dataconomy.ru/2014/12/18/jumping-from-phd-to-data-scientist-3-tips-for-success/feed/ 2
Data Activists Set Up Database to Counter Mafia In Italy https://dataconomy.ru/2014/12/01/data-activists-set-up-database-to-counter-mafia-in-italy/ https://dataconomy.ru/2014/12/01/data-activists-set-up-database-to-counter-mafia-in-italy/#respond Mon, 01 Dec 2014 09:54:46 +0000 https://dataconomy.ru/?p=10715 Organised crime in Italy has now found a formidable opponent in the Spaghetti Op​en Data. Data activists – hackers, engineers and front end-developers – have come together under to structure existing data into a database that could prove an ominous foe for the Italian Mafia. It is quite evident through the websites of various government institutes […]]]>

Organised crime in Italy has now found a formidable opponent in Spaghetti Open Data. Data activists – hackers, engineers and front-end developers – have come together to structure existing data into a database that could prove an ominous foe for the Italian Mafia.

It is quite evident from the websites of various government institutions that Italian authorities have had a tough time collating the trove of data they have collected concerning Mafia dealings, points out Alberto Mucci, writing for Motherboard.

“Sometimes you have this paradoxical situation where, for example, the anti-Mafia police (DIA) in Palermo do not have immediate access to the information they need on another part of the country,” explains Andrea Borruso, a member of Spaghetti Open Data, to Mucci over a Skype call. “It’s ridiculous and totally inefficient.”

Last year, Spaghetti Open Data and DataNinja, another group of Italian data enthusiasts, found out that the authorities had won a €7.5 million grant from the EU in 2013 for a database of properties confiscated from the Mafia. Yet they also found that no such database had since been established.

Their solution? A four-day hackathon in Bologna, which gave birth to the “ConfiscatiBene” (Italian for “seized goods”) project. ConfiscatiBene is “a national database able to gather and organize with clarity and in a single place (this might seem obvious, but it’s really not the case in Italy) a list of all goods confiscated by the Italian authorities from the Mafia,” explains Mucci.

Mucci further highlights that ease of access to the right data is imperative to curbing the influence of the organized crime network. “Having a central database would also help in the effort to revitalize former Mafia strongholds by granting former Mafia-owned buildings to entrepreneurs, artists, and activists,” he added.

Read more here.


(Image credit: Eric Erxon)

 

]]>
https://dataconomy.ru/2014/12/01/data-activists-set-up-database-to-counter-mafia-in-italy/feed/ 0
Telit Launches First Big Data Challenge in Collaboration with Google Cloud Platform https://dataconomy.ru/2014/11/25/telit-launches-first-big-data-challenge-in-collaboration-with-google-cloud-platform/ https://dataconomy.ru/2014/11/25/telit-launches-first-big-data-challenge-in-collaboration-with-google-cloud-platform/#respond Tue, 25 Nov 2014 08:10:42 +0000 https://dataconomy.ru/?p=10594 Telit Wireless Solutions, an enabler of the global machine-to-machine (m2m) movement has announced the loT Big Data challenge in collaboration with Google Cloud Platform. It is aimed at promoting and accelerating innovation around the Internet of Things. Developers are invited to create web-based and mobile applications based on data from a wide variety of connected […]]]>

Telit Wireless Solutions, an enabler of the global machine-to-machine (M2M) movement, has announced the IoT Big Data Challenge in collaboration with Google Cloud Platform. It is aimed at promoting and accelerating innovation around the Internet of Things. Developers are invited to create web-based and mobile applications based on data from a wide variety of connected sensors and devices, delivered to the cloud by Telit’s m2mAIR Cloud application enablement platform, powered by deviceWISE.

Participants will use Google Cloud Platform to harvest the IoT data, illustrating the power and simplicity of developing compelling IoT solutions that change the way people live, work and play. The IoT Big Data Challenge encourages developers to submit innovative applications with the broadest possible scope and purpose – from the practical, to the clever, to the totally unexpected.
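In spirit, the developer-facing side of such a pipeline is straightforward: a connected device pushes readings to a cloud endpoint, and an application later pulls and aggregates them. A minimal sketch, assuming a hypothetical HTTP ingestion endpoint rather than Telit’s or Google’s actual APIs:

```python
import time
import requests

# Hypothetical ingestion endpoint and payload schema -- invented placeholders,
# not Telit's m2mAIR or Google Cloud Platform's real interfaces.
reading = {
    "device_id": "sensor-42",
    "metric": "temperature_c",
    "value": 21.7,
    "timestamp": int(time.time()),
}

resp = requests.post("https://iot.example.com/ingest", json=reading, timeout=10)
resp.raise_for_status()
print("reading stored, HTTP", resp.status_code)
```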

Categories include business transformation, healthy and happy living, and the good of mankind. Participants will be judged on innovation; potential commercial impact and scalability; best use of Telit data and GCP developer resources; and look, feel and entertainment value.

Registration for the competition will open on December 2nd with a kick-off webinar, and submissions are due in January. Winners for the Americas and Asia will be announced at the deviceWISE Global Summit in Miami on January 27th, and winners for Europe, the Middle East and Africa at the Google IoT Hackathon in London on February 3rd.

Telit delivers its extensive portfolio of cellular, short-range, and positioning products in over 80 countries. By making these products business scalable, interchangeable across families, technologies, and generations, and with high quality standards, Telit keeps development cycles short and cost optimized, protects customer design investments, and removes technology risk.

Read more here.


(Image Credit: Sebastiaan ter Burg)

]]>
https://dataconomy.ru/2014/11/25/telit-launches-first-big-data-challenge-in-collaboration-with-google-cloud-platform/feed/ 0
10 Big Data Stories You Shouldn’t Miss this Week https://dataconomy.ru/2014/11/21/10-big-data-stories-you-shouldnt-miss-this-week-5/ https://dataconomy.ru/2014/11/21/10-big-data-stories-you-shouldnt-miss-this-week-5/#respond Fri, 21 Nov 2014 14:25:54 +0000 https://dataconomy.ru/?p=10529 “Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” There has been a lot of media attention on Artificial Intelligence this week, with leading influencers taking various positions on the future of its development. […]]]>

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”

There has been a lot of media attention on Artificial Intelligence this week, with leading influencers taking various positions on the future of its development. Elon Musk, on the one hand, commented on an article published by Edge that AI poses a real existential threat to humanity. On the contrary, Steve Ballmer, the ex-CEO of Microsoft, argued that Artificial Intelligence and Machine Learning are the next frontier of Computer Science. It was also revealed that the School of Engineering and Applied Sciences at Harvard is set to benefit from an undisclosed donation from Ballmer – reaffirming his belief that Computer Science still requires the research and monetary commitment to realise its potential. One thing is for sure, however: whether you’re worried about AI or not, its development will continue to see investment by companies across the world – just take a look at tech giants like Facebook and Google.

Aside from AI, there has been some exciting news, and there have been interesting articles, in other parts of data science too. Below you’ll find a selection of our favourite pieces. We hope you enjoy!

TOP DATACONOMY ARTICLES

How Ford Uses Data Science: Past, Present and Future

Success stories of how data-driven practices can revitalise businesses are rife today, but there are few as compelling as the story of Ford. In 2006, the legendary car manufacturer was in trouble; it closed the year with a $12.6 billion loss, the largest in the company’s history. As we reported earlier in the year, by implementing a top-down data-driven culture and using innovative data science techniques, Ford was able to start turning profits again in just three years. I was recently lucky enough to speak with Mike Cavaretta, Ford’s Chief Data Scientist, who divulged the inside story of how data saved one of the world’s largest automobile manufacturers, as well as discussing how Ford will use data in the future.

A Team of Researchers Has Found a Way to Predict the Stock Market Using Search Terms

Weeks before the release of their paper, “Quantifying the semantics of search behavior before stock market moves”, Dataconomy met with Dr. Suzy Moat and Dr. Tobias Preis to discuss their research on predicting the stock market by analyzing Google and Wikipedia searches. The initial two studies — which asked the question “Is there a relationship between what people are looking for on Google and Wikipedia and subsequent stock market moves?” — were released in April 2013, and received considerable media attention. Now, the two researchers, along with H. Eugene Stanley and Chester Curme, have come out with a follow-up study that looks at their original findings from a different angle — essentially asking which particular topics (politics, food, sports) might have a relationship to stock market moves.

Measuring the Mobile App Economy

If there was ever a fascinating time to be immersed in the app economy, it is now. After experiencing a huge and flourishing boom, the industry now faces a period of change, maturity, or decline, depending on who you’re talking to. Berlin-based Priori Data are in a unique position to comment upon the changing app landscape; their platform offers users the ability to track and benchmark over 2.5 million apps. We recently sat down with Director of Data Science Michelle Tran and Director of Content Natasha Yarotskaya to discuss what insights into the market their data has uncovered, as well as what to expect from the latest version of their platform, which is released today.

TOP DATACONOMY NEWS

Data Scientists Tackle High-Impact Social Problems at Bayes Impact Hackathon

At a hackathon hosted by Bayes Impact, data scientists were pitted against each other to develop data-driven and implementable software solutions for high-impact social problems. The winning hack will become a Bayes Impact project, staffed with full-time data scientists and engineers who will work with their partners to bring the hack to life.

Steve Ballmer Advocates Machine Learning as the Next Era of Computer Science

Steve Ballmer, former Microsoft CEO and Harvard alumnus, who announced a significant donation to the Computer Science Department at Harvard last week, is advocating machine learning as the next era of computer science. Ballmer expressed his excitement about the ability of computers and IT to process huge amounts of data, not only to see patterns but to suggest actions and understand human intent.

Teradata and MapR Partnership Expands Hadoop Choices within Teradata’s Unified Data Architecture

Teradata, the big data analytics and marketing applications company, and MapR Technologies, Inc., a provider of Apache™ Hadoop, today announced an expanded partnership that covers technology integration, road map alignment, and a unified go-to-market offering. The two companies are optimizing integration of the MapR Distribution within the Teradata® Unified Data Architecture™, providing customers more choices when combining Teradata’s best-in-class data warehousing with Hadoop, discovery platforms, and NoSQL options to drive analytic innovation.

TOP UPCOMING EVENTS

28-30 November, 2014 – 5th International Conference on the Digital Home, Guangzhou, China

The development of digital home technology is part of a broader trend of information penetrating and fusing into everyday life. Numerous digital media devices and content types have found their way into the home. The growth of media content and broadband bandwidth raises challenges in producing, editing, transmitting and displaying that content, bringing enormous opportunities as well as challenges for researchers and application developers.

26 November, 2014 – Big Data eXchange, London

We’re creating and collecting more data each year than ever before, but how do we derive value from it? Few of us have first-hand experience of mining insight out of our data, and yet increasingly our markets demand such insight, often in real time. There are practitioners out there in technology and allied fields who have been data-centric for years; they have the skills and insight to adopt modern big data technology and use it to great effect. Those are the people we now look to to help bridge the looming gap between data and insight.

TOP DATACONOMY JOBS

Pricing Manager / Analyst, Wayfair

As a Pricing Analyst you will be responsible for pricing every product that appears on the website. You will manage the daily operational pricing functions while continually seeking to optimize procedures and test strategies to increase gross profit. If you love diving into deep data sets to identify areas for improvement, and are even more enthusiastic about solving those problems, then do not hesitate to apply!

Database Developer NoSQL (Hadoop/Big Data), GameDuell

Work in Berlin for one of the world’s largest games websites and excite more than 125 million users. As a Database Developer at GameDuell, you will be responsible for our Hadoop/Big Data infrastructure and will serve as an interface between different teams.

]]>
https://dataconomy.ru/2014/11/21/10-big-data-stories-you-shouldnt-miss-this-week-5/feed/ 0
Data Scientists Tackle High-Impact Social Problems at Bayes Impact Hackathon https://dataconomy.ru/2014/11/19/data-scientists-tackle-high-impact-social-problems-at-bayes-impact-hackathon/ https://dataconomy.ru/2014/11/19/data-scientists-tackle-high-impact-social-problems-at-bayes-impact-hackathon/#respond Wed, 19 Nov 2014 10:00:14 +0000 https://dataconomy.ru/?p=10456 At a hackathon hosted by Bayes Impact, data scientists pitted against each other to develop data driven and implementable software solutions for high impact social problems. The winning hack will become a Bayes Impact project, staffed with full-time data scientists and engineers who will work with their partners to bring the hack to life. Bayes […]]]>

At a hackathon hosted by Bayes Impact, data scientists were pitted against each other to develop data-driven and implementable software solutions for high-impact social problems. The winning hack will become a Bayes Impact project, staffed with full-time data scientists and engineers who will work with their partners to bring the hack to life.

Bayes Impact is a Y Combinator-backed nonprofit that deploys data scientists to solve big social problems with civic and nonprofit organizations. It runs a 12-month full-time fellowship for leading data scientists and works with civic and nonprofit organizations such as the Gates Foundation, Johns Hopkins and the White House.

Co-hosted by Jerry Yang’s AME Cloud Ventures and Joe Lonsdale’s Formation, the event gave more than 100 data scientists, engineers and designers at the OpenDNS headquarters 24 hours and access to a large amount of data to create an app for a social cause.

The winning hack was a web app that detects prostitution rings by monitoring where adult ads are posted. The team used data from Thorn, a nonprofit that aims to fight child sex trafficking with technology. In second place was a web app that used medical data from the Food and Drug Administration to predict interactions between prescription drugs, even if the combinations had never actually been tested.

A team of Facebook employees, who called their project Out For Justice, developed an app that used crime data to better prioritize 911 calls and place patrol cars for the San Francisco Police Department. Their project won third prize. (source: Mashable)

Challenge-specific winners for the event were: Donors Choose project – Team Insight; High Point Police Department project – Team Out for Justice; White House Department of Labor project – Team Mine Risk Evaluator.

Facebook behavioral economist Alex Peysakhovich said: “It’s very impressive that people took this huge data set and did something crazy with it and said ‘here’s something useful you might want to know.'”

Jeff Chung, a principal at AME Cloud Ventures who was on the judges’ panel, said he thought data projects come out of the competition more fully realized than those at a conventional hackathon. Chung said the projects showed a surprising amount of range, given that many were based around the same sets of data.

“The data hackathon may seem like it may restrict the products that come out of it because you are setting a foundation, but it’s really interesting to see how people see things differently,” Chung said.

Andrew Jiang, Paul Duan and Eric Liu, the three founders of Bayes Impact, are entrepreneurs who come from diverse backgrounds. The engineer, data scientist and investor had previously worked on, or analyzed investments in, some of the toughest problems facing all technology companies in the Internet era — how to get more people to click on more stuff.

But in recent years they’d turned their attention away from advertising click-throughs and towards other issues, like criminal justice reform, fraud detection on micro-lending platforms, and better and cheaper research into some of the potential causes of Parkinson’s disease. Jiang said the founders first discussed the idea behind the nonprofit at another hackathon-for-social-good hosted by PayPal in April, and that they saw an unrealized potential for data scientists to use their skills to improve nonprofit efforts.

It was their belief that the data science which allowed them to move more units could be focused on pressing problems facing civil society, and as with click-through rates, someone would hit on a solution.

“We don’t do this as a service for this or that organization,” says Duan. “We focus on one area and come together to say how can we make this issue better as a whole.”

Read more here.


(Image credit: Bayes Hackathon 2014)

]]>
https://dataconomy.ru/2014/11/19/data-scientists-tackle-high-impact-social-problems-at-bayes-impact-hackathon/feed/ 0
9 Big Data Stories You Shouldn’t Miss this Week https://dataconomy.ru/2014/11/14/9-big-data-stories-you-shouldnt-miss-this-week/ https://dataconomy.ru/2014/11/14/9-big-data-stories-you-shouldnt-miss-this-week/#respond Fri, 14 Nov 2014 11:44:29 +0000 https://dataconomy.ru/?p=10400 TOP DATACONOMY ARTICLES How Big Data Is Changing The Insurance Industry Forever Our guest contributor this week, Bernard Marr, looks at how big data could impact the insurance industry. He concludes that big data in insurance will mean insurers combining the data already available to them will be able to build up a more accurate picture of […]]]>

TOP DATACONOMY ARTICLES

How Big Data Is Changing The Insurance Industry Forever

Our guest contributor this week, Bernard Marr, looks at how big data could impact the insurance industry. He concludes that big data in insurance will mean insurers, by combining the data already available to them, will be able to build up a more accurate picture of who we are and how safe a bet they are placing by offering us insurance. Some very interesting concerns are raised, too!

7 Big Data Funding Stories You Might Have Missed this Year

This year has seen an incredible amount of Big Data funding stories. However, there are a host of interesting companies that received funding this year that you may have forgotten about. We chose our top 7!

TOP DATACONOMY NEWS

Hadoop Vendor Hortonworks Has Filed For an IPO

Hortonworks, the data platform that delivers Enterprise Apache Hadoop, integrated with existing systems to create an efficient and scalable way to manage enterprise data, has filed for an Initial Public Offering. The number of shares to be sold and the price range for the proposed offering are yet to be determined.

Cortical.io Gain $1.25 Million in New Venture Capital, Share Grand Plans for The Future

Cortical.io, an Austrian startup whose tech mimics brain function to process language more accurately and natively, have just announced an impressive new funding round. Reventon (NL) is the venture capital firm responsible for this considerable boost to Cortical.io's coffers. With the new funding, Cortical.io already have grand plans on how to bring their game-changing technology to a wider audience.

Upcoming Events

19–21 November, 2014 – Strata + Hadoop World, Barcelona

The best minds in data will gather in Barcelona this November for Strata + Hadoop World to learn, connect, and explore the complex issues and exciting opportunities brought to business by big data, data science, and pervasive computing.

17-21 November, 2014 – ApacheCon Europe, Budapest

This three-day technical conference will bring together 500+ attendees and offer over 100 conference sessions across a variety of open source topics covering all Apache projects, as well as visionary keynotes, lightning talks, hackathons, meetups and more.

17-21 November, 2014 – Apachecon Europe, Budapest https://dataconomy.ru/2014/11/13/17-21-november-2014-apachecon-europe-budapest/ Thu, 13 Nov 2014 13:45:56 +0000

Apache products power over half the Internet, petabytes of data, teraflops of operations, billions of objects, and enhance the lives of countless users and developers. ApacheCon brings developers and users together to explore key issues in building Open Source solutions “The Apache Way”. With hundreds of thousands of applications deploying ASF products and code contributions by more than 3,500 Committers from around the world, the Apache community is recognized as among the most robust, successful, and respected in Open Source.

Apache products power more than 400 million Websites and countless mission-critical applications worldwide, from financial services to publishing to radio astronomy to social networking to biomedical research data stores to mobile medical applications. More than a dozen Apache projects form the foundation of today’s Cloud computing, and Apache software continues to play a key role in the evolution of Big Data. Five of the top 10 Open Source downloads are Apache projects: understanding their breadth and capabilities has never been more important in today’s marketplace.

ApacheCon brings together the open source community to learn about and collaborate on the technologies and projects driving the future of open source, big data and cloud computing. Apache projects have been, and continue to be, hugely influential in the innovation and development of software across a plethora of categories, from content, databases and servers to big data, cloud, mobile and virtual machines. The developers, programmers, committers and users driving this innovation and utilizing these tools will meet in Budapest, November 17-21, for collaboration, education and community building.

First held in 1999 for developers and users of the Apache Server to meet face-to-face, ApacheCon is the official conference, trainings, and expo series of The Apache Software Foundation (ASF), and is the public showcase for Apache innovations. This three-day technical conference will bring together 500+ attendees and offer over 100 conference sessions across a variety of open source topics covering all Apache projects, as well as visionary keynotes, lightning talks, hackathons, meetups and more.

You can find the full events schedule here.

(Image Credit: Peter)

Top Tips for Implementing a Big Data Strategy https://dataconomy.ru/2014/11/04/top-tips-for-implementing-a-big-data-strategy/ Tue, 04 Nov 2014 10:21:31 +0000

Ali Rebaie

Ali Rebaie is a Big Data & Analytics industry analyst and consultant of Rebaie Analytics Group. He provides organizations with a vendor-neutral selection of business intelligence & big data technologies and advice on big data and information management strategy and architecture. Ali also appeared in several lists of “Who’s Who in Big Data” and as one of the top big data influencers worldwide. He has led and developed several technology projects in business intelligence and analytics and worked for clients in the Fortune 500. Ali is a member of the internationally renowned Boulder BI Brain Trust (BBBT) in Boulder, USA, a membership-only consortium of leading independent BI and Big Data experts and analysts worldwide.


How is big data evolving in the Middle East?

Big Data has a long history here; it was effectively born 5,000 years ago in Mesopotamia, where data was stored on clay tablets. Later, the royal Library of Ashurbanipal was built and collected thousands of clay tablets and fragments, written and gathered by scribes. In the era of big data, we have all become data scribes, because we leave traces of our activity in the distance we walk, the calories we consume, the music we listen to, the places we go shopping, and so on. These traces are collected by the digital devices around us and by credit bureau agencies, telecom operators, weather streams, sensors, and social media. Once we mash up all these traces and draw them into a pattern, we can reveal interesting insights about our health, business, economy, transportation and more.

I have a BI background and am based in Dubai. I got started in the big data scene in 2010, when I began researching, writing and speaking about the big data market in general. At that moment, there was no scene in the Middle East and the market was completely immature.

However, at the beginning of 2013, the big data scene started picking up. We had our first event in Dubai that year, where I presented mostly to banks and retailers. The banks were predominantly interested in traditional business intelligence technologies, but we covered some social media analytics topics too.

Now, in 2014, there’s a lot happening. For example, we have had more than six events this year, where attendance has been particularly strong. From the first event I attended to the most recent, there has been a dramatic change. The government has been keen on data initiatives like open data, and also smart enablement. I am also having discussions with different clients who have started to set budgets for advanced analytics, and I am helping others set their big data strategies.

In general, the ones implementing big data are telecoms and transport companies. Retailers are slow adopters, but are moving in that direction. The big problem people are facing in the Gulf concerns data scientists – the main question I get asked is whether they can be outsourced.

Do you think companies will start moving in this direction – outsourcing data scientists?

I think this is a great business model! I’ve met some of the largest entertainment and media companies in Dubai, and both have told me that they are in desperate need of data scientists and have asked whether outsourcing is a viable option. Also, one of the major companies here has just implemented Hadoop, but it’s being used by IT people for performance and indexing rather than business applications.

So the problem with companies that are using some big data infrastructures, like Hadoop, is that they do not have the data science skills to employ a company wide big data strategy. It will take time for these companies to actually have big data strategies because IT are the only people using it.

Interestingly, the early adopters of Big Data technologies will probably be the public sector, unlike in Europe. This is a big priority for the government in Dubai, especially because of the Expo2020 and the vast amount of data the country is sitting on.

In my opinion, whenever clients ask me this, my main advice is to say that you need to build a core data science team.

Let’s move our focus to BI. I want to hear your opinion on why there is such a huge explosion in BI vendors today.

You’re correct, the early stages of BI were very different from now. Experts and analysts improved the way we store data by introducing the data warehouse. Back then, the main concern was around data warehousing and the different architectures supporting it. Mostly huge enterprises would go on to implement this, and these BI solutions were IT-driven rather than tailored to business users.

Now, I think Big Data is driving BI, because technology has become significantly cheaper and the disruption in the market is bringing more technology options. The data warehouse is not dead; it is now part of an extended architecture with far more options for different purposes and use cases. Crucially, a huge community has emerged around big data – in the early stages of BI, data science was not yet a buzz field, and there were very few hackathons, or even data enthusiasts and experts in the field. The collaboration of different fields – data science, big data, social science, open data, machine learning, open source, and so on – has contributed a lot to the development of the industry.

The main reason for this explosion of start-ups and vendors is that data has become the backbone of modern business. What business leaders are demanding now is access to that data, to shift the power from IT and democratise the data. As soon as this became a priority, a market emerged and vendors started popping up. It really is a simple case of economics: the demand and interest were there, so the vendors started supplying.

Top tips for implementing a Big Data strategy?

The first question that companies need to address is which data sources are already leveraged within the business, and which other internal or external data sources they can add, classify and mash up. They need to figure out how current data sources are stored, processed and used.

The second important question is which data source type and which analytics method can help answer a specific business problem. For example, if I need to figure out customer loyalty or find influencers, the question I need to ask is: which big data types (sentiment analysis, network analysis, sensor data, etc.) suit this particular problem? Once the data source type has been matched with the business problem, companies can move forward. Then you can develop strategic big data use cases which can be presented and delivered by champions or influencers within your organization.

I have been helping companies set their Big Data strategy and architecture, and there are different challenges we usually face. Public sector clients, for instance, have huge concerns when it comes to data access and security, and their data is usually siloed across different data centers within the organization. Data integration methods like data virtualization can help in such cases. Another key message is that one size does not fit all anymore: whenever you are extending and unifying your current architecture, you need to fit the right tools to it.

Thirdly, when implementing a big data strategy, it is crucial not to focus only on the technical aspect. My consultancy, for example, focuses a lot on this. We have the technical aspects like governance and infrastructure, but we also focus on the culture. Having the technology implemented is only one part of the story; without a data-driven culture, it will go to waste.

(Image Credit: *Crazy Diamond*)

10 Big Data Stories You Shouldn’t Miss this Week https://dataconomy.ru/2014/10/24/10-big-data-stories-you-shouldnt-miss-this-week-2/ Fri, 24 Oct 2014 15:51:50 +0000

This week has been a week of conferences in the realm of big data. Our news roster has been dominated this week by a whole host of announcements from the Strata + Hadoop world conference. Highlights include a raft of new integrations on Microsoft’s Azure platform, Waterline announcing the release of their Waterline Data Inventory, and GraphLab making their signature product available to the public.

Here in Europe, the team behind disease-monitoring app Infected Flight scooped the Grand Hackathon Prize at TechCrunch Disrupt London. This week also saw the Gartner BI Conference get underway in Munich; our correspondent on the ground, Furhaad Shah, has been enthusiastically tweeting all of the major announcements over on our Twitter channel. We also hosted the second installment of our Big Data, Berlin event series this week. For readers who couldn’t make it, we have further events coming up in Munich and London, and will be announcing a raft of other locations early next year – stay tuned!

TOP DATACONOMY ARTICLES

6 Stumbling Blocks for Marketing With Data

Touted as a silver marketing bullet, data and scientific thinking will guide creativity in an evolving social and mobile universe. This is the rationale underlying the launch of OgilvyAmp, essentially an aggregation and rebranding of the data wonks buried among Ogilvy’s global offices.

Exasol: Building the Fastest Database in The World

Exasol was founded 14 years ago with a mission in mind: to build an ultra-fast, highly scalable database for analytics. We recently caught up with Graham Mossman, Exasol’s Senior Solutions Architect to discuss Exasol’s development and his upcoming talk at Data Enthusiasts London.

TOP DATACONOMY NEWS

ThoughtSpot Combine Enterprise-Class Features With Consumer-Class Usability on 1 BI Platform

As every industry becomes more data-driven, it becomes increasingly crucial that everyone in an organisation – from the Heads of IT to the least tech-savvy business person – has access to the key information. But of course, the requirements of these personnel differ wildly, and there has yet to emerge a single technology which combines the enterprise-class infrastructure needed for big data analytics with usability features that open the data to every person on the payroll. This is where ThoughtSpot come in.

84% of Businesses Think Big Data Analytics Will Change Their Competitive Landscapes

The Industrial Internet (a combination of Big Data analytics and the Internet of Things) holds great potential, if executives in the healthcare and industrial sectors are to be believed, reveals research published by GE and Accenture. To put the economic potential of the Industrial Internet in perspective, worldwide spending on it is estimated to reach $500 billion by 2020, pointing to a mammoth $15 trillion contribution to global GDP by 2030, according to Wikibon and GE.

The Fight Against Ebola May Have An Ally in Data Science, Believe Experts

Jai Vijayan, writing for InformationWeek, explains that unlikely channels are opening up to fight the spread. WHO’s predictions for this week – of 70% fatality rates and 1,000 new infections per week – come from data about people who have died from the disease or are symptomatic, drawn from facilities across Sierra Leone, Liberia, Guinea, and Nigeria, and collated with data gathered from medical diagnostic facilities and burial grounds in the affected region.

Augmented Reality Development Gets Fuel as Google Leads Investment of Record $542M in Secretive Magic Leap

Magic Leap, an augmented reality startup which everyone seems to find mysterious, has gained some more mystique with the record $542 million Series B financing, led by Google (and not Google Ventures), that it announced on Tuesday. There was participation from Qualcomm Ventures, Legendary Entertainment (including a personal investment from its CEO Thomas Tull), KKR, Vulcan Capital, Kleiner Perkins Caufield & Byers, Andreessen Horowitz, Obvious Ventures, and other investors.

Upcoming Events

27-28 October, 2014 – Chief Data Officer European Leadership Forum, London

In recent years, there has been a significant rise in the appointments of Chief Data Officers (CDOs), in both the public and private sector. This is a result of data becoming an increasingly important strategic asset to businesses and public authorities.

Data Science Day Wants to Hear from You!
The 7th Data Science Day is hitting Berlin on the 30th October, hosted by Zalando and The Unbelievable Machine Company. The event is entitled “What´s next in Data Science? – or: just a bunch of really hot stuff!!!” If that amount of exclamation marks isn’t enough to get you excited, I don’t know what is.

Disease Monitoring App Infected Flights Bags TechCrunch Disrupt Europe 2014 Hackathon Grand Prize https://dataconomy.ru/2014/10/21/disease-monitoring-app-infected-flights-bags-techcrunch-disrupt-europe-2014-hackathon-grand-prize/ Tue, 21 Oct 2014 09:38:36 +0000

The Disrupt Europe 2014 Hackathon just happened, and the Old Billingsgate in London witnessed 89 teams coming up with “neat, funny and smart” hacks that might somehow effect change across the tech industry, with the big prize winners having a lot to do with healthcare.

The limelight belonged to Infected Flight, which won the Disrupt Europe 2014 Hackathon grand prize, followed by Appilepsy and Seeusoon as runners-up.

Infected Flight is a cross-platform web app that models the spread of diseases. Leveraging differential equations, it separates a population into four groups: susceptible, exposed, infected or recovered. By providing the requisite parameters, a user can simulate an outbreak over time.

“Behind the scene, Infected Flight analyzes real flight path data (source city, source airport, flight time, destination airport and destination city) to figure out whether your country is greatly affected by a disease or not,” writes Romain Dillet of TechCrunch.
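
The susceptible/exposed/infected/recovered split described above is the classic SEIR compartmental model. Purely as an illustration of how such a simulation works – this is not Infected Flight’s actual code, and the parameter values below are invented – a minimal sketch in Python:

```python
# Minimal SEIR sketch; beta, sigma and gamma are illustrative values only.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    s, e, i, r = y                     # fractions of the population
    ds = -beta * s * i                 # susceptibles exposed through contact
    de = beta * s * i - sigma * e      # exposed become infectious at rate sigma
    di = sigma * e - gamma * i         # infectious recover at rate gamma
    dr = gamma * i
    return [ds, de, di, dr]

y0 = [0.999, 0.0, 0.001, 0.0]              # start with 0.1% infected
t = np.linspace(0, 160, 400)               # days
beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10  # contact, incubation, recovery rates

s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print(f"peak infected fraction: {i.max():.1%}")
```

Feeding in real flight-path data would then amount to coupling several such models, one per city, with travel terms moving people between them.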

Appilepsy, on the other hand, is a mobile app that detects the occurrence of convulsive epileptic seizures using a proprietary algorithm. Analysing accelerometer data in real time, the app sends a text message to, and calls, a previously stored emergency contact when it detects a seizure.
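
Appilepsy’s algorithm is proprietary, so the following is only a naive sketch of the general idea – flagging convulsions when the moving variance of the acceleration magnitude stays high – with an invented window size and threshold:

```python
# Naive convulsion detector: NOT Appilepsy's algorithm, just an illustration
# of thresholding real-time accelerometer data.
from collections import deque
import statistics

WINDOW = 50       # samples, roughly one second at an assumed 50 Hz
THRESHOLD = 4.0   # variance of magnitude in (m/s^2)^2; purely illustrative

window = deque(maxlen=WINDOW)

def on_sample(ax: float, ay: float, az: float) -> bool:
    """Feed one accelerometer sample; return True if a seizure is suspected."""
    window.append((ax**2 + ay**2 + az**2) ** 0.5)
    if len(window) == WINDOW and statistics.variance(window) > THRESHOLD:
        return True   # the real app would now text and call the stored contact
    return False
```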

Seeusoon, the other runner-up, monitors flights and helps long-distance couples meet in the same city for a romantic weekend, with the option to buy tickets integrated within the app. Its user experience has also been appreciated.

iComic received an honorable mention for its work. The three teams get to showcase their apps on the Disrupt stage in London on Tuesday. The prize money for the winner is £3,000, while other participants received prizes from the event’s API sponsors.

Judges for this event included Government Digital Service engineer Camille Baldock, Startupbootcamp Partner & Program Specialist Eric Brotto, Virgin Management investor Claudia De Antoni, Techstars Director Tak Lo, and FutureLearn developer Melinda Seckington.

Read more here.

(Image Credit: TechCrunch)

Microsoft Offers Sneak-Peek of the New Automatic Tagging App for Windows 8.1 https://dataconomy.ru/2014/10/21/microsoft-offers-sneak-peek-of-the-new-automatic-tagging-app-for-windows-8-1/ Tue, 21 Oct 2014 08:51:06 +0000

Microsoft has quietly released a new Windows 8.1 app that helps the user tag their personal photos by utilizing the photos tagged in their Facebook account.

Dubbed AutoTag‘n Search My Photos, it learns and recognises the facial structure of friends by leveraging Facebook, and then tags photos in the Pictures Library on Windows, reports VentureBeat. The original AutoTag video and accompanying details, posted on the Microsoft website this weekend, have since been removed.

Once people have been tagged, the user can search for them in the collection (which can include photos from OneDrive and pictures taken with a Windows phone). The tagging accuracy gets better with usage, and the user can also share photos on Facebook with tags included, to either a new or an existing album.

“The idea is that tagging people in the app is less of a hassle since there are fewer people left to tag,” informs Emil Protalinski of VentureBeat. Adding to that, a user can also merge a Facebook profile or a user-created profile with tags detected on photos tagged with other photo applications, such as OneDrive.

However, the app was not yet available for download in the Windows Store.

Touted by some as an “experiment,” Microsoft Answers explains that the app is in beta and points out some issues and their solutions. It was developed by the Microsoft Garage team, a group that runs side projects, hackathons, science fairs, and “general tinkering” in Redmond.

Read more here.

(Image credit: Mike Mozart)

Berlin Buzzwords is Back – Our Pick of the Events https://dataconomy.ru/2014/05/23/berlin-buzzwords-back-pick-events/ Fri, 23 May 2014 13:49:58 +0000

Berlin Buzzwords, ‘Germany’s most exciting conference on storing, processing and searching large amounts of digital data’, is back for a fifth year. The conference will take place on May 25-28 at Kulturbrauerei Berlin. It will feature presentations on large-scale computing projects, ranging from beginner-friendly talks to in-depth technical sessions about various technologies. Here is our pick of some of Buzzwords’ events:

Hitfox (and Dataconomy) meet Berlin Buzzwords
An obvious highlight of Berlin Buzzwords will be the meetup organised by us and held in our Dataconomy HQ.
“Peter Grosskopf, Chief Development Officer at HitFox Group, is going to welcome everyone and show the way we approach Big Data at HitFox. Thorsten Bleich, Chief Technology Officer at HitFox’s mobile targeting venture, Datamonk, will follow with his talk on how Datamonk provides targeting solutions in mobile real-time advertising by collecting, transforming, and feeding data into the mobile ecosystem with their own cutting-edge technology.”
See you there!

SHARK ATTACK on SQL-On-Hadoop
This talk gives a quick intro to Apache Spark and its SQL query engine, Shark. Additionally, Shark is compared to other SQL-on-Hadoop tools from the ecosystem, like Impala and Hive, including a live “usage demo”.
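
Shark was retired later in 2014 in favour of Spark SQL, but the kind of query demoed in such a talk translates directly. A minimal sketch using the PySpark API (the table name and sample data are invented for illustration):

```python
# Run plain SQL on Spark's distributed engine, much as Shark did on Hive data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-spark-demo").getOrCreate()

logs = spark.createDataFrame(
    [("2014-05-25", "search", 3),
     ("2014-05-25", "click", 7),
     ("2014-05-26", "search", 5)],
    ["day", "event", "n"],
)
logs.createOrReplaceTempView("logs")   # expose the DataFrame to SQL

spark.sql("""
    SELECT day, SUM(n) AS events
    FROM logs
    GROUP BY day
    ORDER BY day
""").show()

spark.stop()
```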

Staying ahead of Users & Time – two use cases of scaling data with Elasticsearch
People typically choose Elasticsearch for its horizontal scaling capabilities and ease of use. We will architect two solutions that both scale well and do it in a way that still allows for change – whether it is data changes, growth rates or resources.
The talk will be accessible both to people who know Elasticsearch (fairly) well and to those who have never used it. If you know Elasticsearch and have used it before, you will learn how to put some of its more advanced data management APIs to good use. If you have only heard of Elasticsearch (or even if not), you will get an impression of why people choose it to index ever-growing amounts of data.
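
One common way to stay ahead of growing data with Elasticsearch is to write into time-based indices and read through an alias, so old indices can be dropped without touching application code. A minimal sketch with the official Python client (v8-style calls; the index and alias names are our own invention, not the speakers’ solution):

```python
# Time-based indices behind a read alias; assumes a local Elasticsearch node.
from datetime import date
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
today_index = f"events-{date.today():%Y.%m.%d}"

# Index into today's index; refresh=True makes it searchable at once (demo only).
es.index(index=today_index, document={"msg": "hello"}, refresh=True)

# Atomically point the read alias at every daily index.
es.indices.update_aliases(actions=[
    {"add": {"index": "events-*", "alias": "events-read"}},
])

# Application code only ever queries the alias.
hits = es.search(index="events-read", query={"match_all": {}})
print(hits["hits"]["total"])
```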

What’s new in MongoDB 2.6.
A short talk about what’s new in our biggest release ever! We’ve changed up to 80% of our codebase and added major value to the database.
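
One widely noted 2.6 change was text search graduating into the standard query language via the $text operator; a tiny PyMongo sketch (database and collection names invented):

```python
# Text index plus $text query, a standard operator since MongoDB 2.6.
from pymongo import MongoClient, TEXT

db = MongoClient("mongodb://localhost:27017")["demo"]
db.articles.create_index([("body", TEXT)])

db.articles.insert_one({"body": "Hadoop and Spark at Berlin Buzzwords"})
for doc in db.articles.find({"$text": {"$search": "hadoop"}}):
    print(doc["body"])
```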

Modern Cassandra
Cassandra continues to be the weapon of choice for developers dealing with performance at scale. Whether in social networking (Instagram), scientific computing (SPring-8), or retail (eBay), Cassandra continues to deliver. This talk will look at new features in Cassandra 2.x and the upcoming 3.0, such as lightweight transactions, virtual nodes, a new data model and query language, and more.
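
Of those features, lightweight transactions are the easiest to show in miniature: a conditional write that goes through Paxos and only applies if its condition holds. A small sketch with the DataStax Python driver (the keyspace, table and contact point are assumptions):

```python
# Lightweight transaction: only one concurrent writer can claim a username.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_keyspace")   # keyspace assumed to exist

result = session.execute(
    "INSERT INTO users (username, email) VALUES (%s, %s) IF NOT EXISTS",
    ("ada", "ada@example.com"),
)
print("claimed" if result.was_applied else "username already taken")

cluster.shutdown()
```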

Big Data Week Events — 5th May https://dataconomy.ru/2014/05/05/big-data-week-events-5th-may/ Mon, 05 May 2014 08:53:43 +0000

This week, Dataconomy will be covering the events, meetups and talks around the world for Big Data Week. Below are the events happening in your city today!

May 5, 2014 – Nova Scotia
Kentville Agricultural Centre Seminar Series

The Kentville Agricultural Centre Seminar Series is presented to increase awareness of research conducted by scientists at the Atlantic Food and Horticulture Research Centre, and their colleagues.
May 5, 2014 – Montreal
#BDW14 MTL Hackathon

If you are an experienced data scientist, or a Big Data newbie who wants to learn more about data, join us to solve Big Data challenges, and learn about popular platforms, tools and technologies to pe . . .
May 5, 2014 – Montreal
#BDW14 MTL Kickoff with Datacratic & Alistair Croll

Let’s kick off Big Data Week with some of the biggest big data folks in Montreal! Awesome talks by Datacratic & Alistair Croll, & lots of beer of course.
May 5, 2014 – Missoula
Mayoral Proclamation of Big Data Week Missoula 2014

Missoula Mayor John Engen will proclaim the city’s first-ever Big Data Week May 5 – 11, 2014, and the lineup of events will be announced during the regular City Council meeting at 7 p.m., 140 W. Pine . . .
May 5, 2014 – Chicago
Open Software Integrators Presents a Two-day Intro to Hadoop Training

This is a two-day training designed to give you an introduction to the Hadoop ecosystem. You will learn the essentials of the Hadoop Distributed File System (HDFS) and the MapReduce framework.
May 5, 2014 – Perth
Scitech – Big Data and Radio Astronomy in WA

Join Scitech to learn all about Big Data and Radio Astronomy in Western Australia, including an opening address by Executive Producer + Co-Founder of Big Data Week, Andrew Gregson.
May 5, 2014 – Madrid
Data Science Spain Meetup

The top Data Science community in Spain (Data Science Spain) will hold a meeting open to the public during Big Data Week in Madrid to talk about Big Data Science.
May 5, 2014 – Kuala Lumpur
Get Started Guide with R!

To provide the audience with an understanding of big data and analytics from an infrastructure-neutral perspective. Covering ALL the layers from hardware to databases to analytics engines and presentation . . .
May 5, 2014 – Perth
WAIMOS Data Mining Workshop: ABC it’s easy as 1-2-3

This practical, hands-on, small group workshop will explore the new look IMOS Ocean Portal to encourage data discovery and instil confidence in the mining of physical, chemical and biological variable . . .
May 5, 2014 – Madrid
Exhibition of big Big Data Companies & Networking Session

Do you want to talk to the main actors of the Big Data scene? Don’t miss this exhibition, where the “big Big Data Companies” will be presenting their latest products and solutions.
May 5, 2014 – Kuala Lumpur
Teradata-MMU-MDeC Big Data Discovery Hackathon

Big Data can be a big advantage when you’re able to take data from disparate silos and place it in the hands of decision-makers – when they need it. Be a part of the very first Teradata Big Data D . . .
May 5, 2014 – Madrid
Opening session of the Big Data Week in Madrid: Big Data Industry

Opening of the Big Data Week in Madrid with the main actors of the industry: companies providing data and Big Data services will meet the end users.
May 5, 2014 – Toronto
Viafoura Big Data Week Tech Talk 1: Kick-off

Big Data Week 2014 kicks off in Toronto! Join us at the Viafoura office as we talk about the big picture with big data from industry executives. Stick around, ask questions, and make new friends.
May 5, 2014 – Wallonia
Discover how “big data” can revolutionise the management of your business!

In any business, whatever its size or sector of activity, and equally in associations or public administrations, the quantity of documents, and therefore of information, generated and received is . . .
May 5, 2014 – Wallonia
Place for big data: immersion in and visualisation of the big data universe

Artists in residence exhibit their vision of the data universe throughout Big Data Week in Mons.
May 5, 2014 – Barcelona
Opening Session: #opendata – open for whom?

An introduction to how the data collected by the public administration is opened up and how it is used.
May 5, 2014 – Atlanta
#BDW14 Kickoff Keynote Panel w/ Delta, Home Depot, Bloomberg + More

Join fellow Atlantans as we kick off Big Data Week 2014 and learn from high-profile thinkers and innovators how Big Data is transforming the future of business.
May 5, 2014 – Berlin
Data meets nice people II (Big Data Week edition)

This non-IT focused group is made up of individuals with a mathematical background who are interested in finding answers to business questions with maths and algorithms. Down to earth yet totally nerd . . .

Title image from Big Data Week.
