FISA Section 702 is threatening your privacy with unchecked surveillance

Last week, the US Congress narrowly avoided blocking the renewal of FISA Section 702 amidst concerns that it would perpetuate the warrantless monitoring of American citizens. Despite efforts, a proposal to mandate warrants did not succeed.

FISA Section 702, a provision of the Foreign Intelligence Surveillance Act, has historically sparked debate due to its allowance for indirect surveillance of US citizens without a warrant. This led a coalition of Republican dissidents to ally with Democrats, preventing the bill that would extend this provision from reaching a vote on the House floor.

The debate over the reauthorization of FISA Section 702 highlights a critical tension between national security and individual privacy rights. By allowing warrantless surveillance of Americans, the legislation raises significant concerns about government overreach and the erosion of civil liberties. The failure to pass amendments requiring warrants underscores the ongoing challenge of balancing security needs with constitutional protections, shaping the future of surveillance policy and citizens’ rights.

What is FISA Section 702?

FISA Section 702 is “a key provision of the FISA Amendments Act of 2008 that permits the government to conduct targeted surveillance of foreign persons located outside the United States,” reads the PDF shared by the Office of the Director of National Intelligence. This provision necessitates the cooperation of electronic communication service providers to gather foreign intelligence information. The intelligence gathered under Section 702 is utilized to safeguard the United States and its allies against hostile foreign entities such as terrorists, proliferators, and spies, and it supports cybersecurity initiatives.

Under Section 702, the Intelligence Community (IC) is authorized to target non-U.S. persons who are outside the United States and are believed to hold, receive, or transmit foreign intelligence information. This targeting is crucial for national security purposes and does not extend to U.S. persons, the document reads.

The enactment of Section 702 by Congress was motivated by the need to bridge a gap in intelligence collection that emerged with advances in technology after the original FISA legislation in 1978. By the mid-2000s, terrorists and other foreign adversaries increasingly used email services provided by U.S. companies, complicating surveillance efforts. Previously, the government needed individual court orders based on probable cause to access the communications of non-U.S. persons abroad. This process was not only resource-intensive but also challenging due to the stringent probable cause standard intended to protect U.S. persons and individuals within the U.S.

As for restrictions, the IC cannot use Section 702 to target U.S. persons, regardless of their location, nor anyone located within the United States. It also cannot target a foreign individual abroad when the purpose is to indirectly target a U.S. person or someone within the U.S., a practice often referred to as “reverse targeting.” These safeguards are designed to protect the privacy and rights of U.S. nationals and residents.


About the reauthorization act

The central issue with the FISA Section 702 reauthorization act is that it formally permits the US to monitor foreign threats overseas. However, if these foreign individuals interact with Americans, the US citizens’ electronic communications may also be collected for intelligence purposes.

In a bid for reform, a bipartisan group called for changing the reauthorization act to require warrants before collecting data on Americans. An amendment proposed early Friday initially gained support but ultimately failed, allowing the Section 702 reauthorization to proceed to a vote in the House.

The proposed amendment by Andy Biggs (R-AZ), aimed at prohibiting warrantless surveillance of US persons, was not adopted following a tied vote of 212-212. This outcome ensures that US citizens communicating with foreign targets under surveillance will remain subject to warrantless surveillance. Other amendments referenced in the attached PDF, however, were approved.

The comprehensive bill to renew Section 702 surveillance, pushed swiftly through the House to avert expiration on April 19, gained approval with bipartisan backing despite widespread calls to terminate warrantless surveillance. The Senate now faces the task of passing the bill before the impending deadline of April 19, providing them with the entirety of this week to accomplish it.


Image credits: Kerem Gülen/Midjourney

Computer vision’s quest to teach machines to see like humans

Computer vision is a discipline of artificial intelligence that allows machines to see their environment like humans. Looking and seeing are different actions. The act of seeing also includes perceiving and understanding the image. The purpose is not just to receive the light reflected from the objects. This is the job of the eye. The brain’s occipital lobe, responsible for visual processing, processes and makes sense of the objects seen. Machines use cameras as their eyes. Computer vision models process thousands of pixels in images to perform the task of the occipital lobe. In short, the discipline of computer vision allows machines to understand what they see.

What is computer vision?

Computer Vision (CV) is an artificial intelligence discipline that develops techniques that enable machines to see and understand the world around them.

Computer vision is critical to innovations in many technology areas, including autonomous cars, facial recognition, and augmented reality. Computer vision has turned into one of the most important AI disciplines today due to the rapid increase in digital visual data. The increase in visual data also makes it easier to train computer vision algorithms.

Visual perception is the result of millions of years of evolution and is one of humans’ most reliable abilities. It is what allows a 5-year-old child to look at a table set with various objects and immediately understand that it is a dinner table. Giving machines this ability is an enormous challenge, and that is exactly what computer vision tries to do.

Computer vision also plays a critical role in the pursuit of artificial general intelligence, which aims to give machines all the abilities of humans and more. That goal would not be achievable without another important feature of our intelligence: the ability to understand the objects around us, identify them quickly, and respond to them correctly.

How does computer vision work?

Computer vision processes images from small to large: it starts by detecting and analyzing simple features such as pixels and colors, then moves on to more complex features such as lines and objects.

Understanding the emotion and context of images is simple for humans but very difficult for machines. Imagine looking at a photo of people running. Although the photo is a static image, you can immediately tell that the people are running. For machines, images are just collections of pixels. Unlike humans, machines cannot understand the context of an image and can only perceive pixels. Computer vision tries to bridge this semantic gap.
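To make that concrete, here is a minimal Python sketch (using the Pillow and NumPy libraries; the file name is a hypothetical example) of what a machine actually receives from a photo: nothing but an array of numbers.

```python
# A minimal sketch of what a machine "sees": an image is only an array of
# pixel intensities. The file name is an illustrative assumption.
from PIL import Image
import numpy as np

img = Image.open("runners.jpg").convert("RGB")   # hypothetical photo of people running
pixels = np.asarray(img)                         # shape: (height, width, 3)

print(pixels.shape)   # e.g. (720, 1280, 3)
print(pixels[0, 0])   # the top-left pixel: just three numbers, e.g. [142  97  61]
```

Nothing in this array says "running", "person", or even "photo"; turning those numbers into meaning is the semantic gap computer vision has to bridge.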

When light rays hit the retina of our eye, cells called photoreceptors convert the light into electrical signals. These electrical signals are then sent to the brain via the optic nerve. The brain converts these signals into images. This process continues to process electrical signals reaching the brain until the images become clear enough. How exactly the brain processes these signals and turns them into images is not yet fully understood. Moreover, how the brain performs many other functions remains a big question mark.


Computer vision works with neural networks and other machine learning algorithms that emulate the human brain. Researchers have become remarkably good at modeling algorithms on the human brain, and they are sometimes surprised by the unexpected behavior of the very algorithms they created.

What we do know is that computer vision is largely about pattern recognition. Algorithms using machine learning techniques such as unsupervised learning are trained to recognize patterns in visual data, and training them requires feeding the algorithm a large number of images.

Let’s say you want an algorithm to identify dogs in photos. With the unsupervised learning technique, no photos need to be labeled beforehand. Instead, the machine learns the defining characteristics of dogs after analyzing thousands or millions of images. It still wouldn’t know the thing’s name, but it could determine whether an unlabeled image contains it; you can then tell it that what it has learned to recognize is a dog, and that a dog is an animal. Supervised learning speeds up training: images are labeled in advance, so the machine also learns the name of what it recognizes.
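As a rough illustration of the supervised approach just described, the sketch below trains a deliberately tiny convolutional network on a folder of labeled images using PyTorch. The directory layout (a "data" folder with "dog" and "not_dog" subfolders), the image size, and the training settings are assumptions made for the example, not a recommended production setup.

```python
# A hedged sketch of supervised image classification: labels come from the
# (assumed) folder names data/dog and data/not_dog.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)  # each subfolder is a class
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = nn.Sequential(                       # deliberately small; real models are much deeper
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),              # two classes: dog / not dog
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                       # a handful of passes over the labeled images
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because every image arrives with its label, the network learns both the visual pattern and the name attached to it, which is exactly why supervised training converges faster than the unlabeled alternative.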

Computer vision techniques

A simple image recognition application may use just one of the following techniques, while an advanced application such as a self-driving car may use several of them simultaneously:

  • The object identification technique detects a specific object in an image. Advanced versions can identify multiple objects in a single image.
  • Image classification is the technique of grouping images into categories. It is also called the process of assigning labels to images.
  • Image segmentation is a technique used to examine an image separately by breaking it into parts.
  • Pattern detection identifies patterns and continuities in visual data.
  • Edge detection locates the boundaries of objects in an image so that its components can be identified more precisely (a short sketch follows this list).
  • Feature matching is a pattern detection technique that matches similarities in images for classification purposes.
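As a small example of one of these techniques, here is a hedged edge-detection sketch using OpenCV’s Canny detector; the file name and threshold values are illustrative assumptions.

```python
# A minimal edge-detection sketch. The input file and thresholds are
# illustrative assumptions, not recommended defaults for every scene.
import cv2

image = cv2.imread("street.jpg", cv2.IMREAD_GRAYSCALE)  # edges are found on intensity, not color
blurred = cv2.GaussianBlur(image, (5, 5), 0)            # smooth out noise before detecting edges
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)

cv2.imwrite("street_edges.png", edges)                  # white pixels trace object boundaries
```

The output is a binary image in which white pixels trace object outlines, a representation that later stages such as object identification or segmentation can build on.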

Computer vision use cases

Computer vision is used in many industries today. Using this technology, Instagram can automatically tag people in the photo, Apple groups photos, and Adobe improves the quality of zoomed images. While these are digital examples, there are many application examples of computer vision in the physical world. Let’s take a closer look at some real-world computer vision applications you may have come across:

Facial recognition

Some of the best use cases of computer vision are seen in facial recognition. Facial recognition, which became popular with the iPhone X model released by Apple in 2017, has turned into a standard feature on most smartphones today.

Facial recognition technology is used to identify people, as on Facebook, in addition to authenticating users on smartphones. Law enforcement agencies worldwide also use facial recognition technology to detect lawbreakers in video feeds.


Autonomous vehicles

Autonomous vehicles use computer vision for real-time image analysis. This technology helps self-driving cars make sense of their environment. Autonomous driving technologies are still in their infancy, and more R&D is needed to get them on the road safely.

Self-driving cars cannot operate without computer vision, which allows them to process visual data in real time. In autonomous vehicles, computer vision builds 3D maps of the surroundings and handles object identification and classification.

Other important computer vision use cases in this area are vehicle detection, lane line detection, and free space detection. As the name suggests, free space detection identifies open areas around the car; it is useful when a driverless car approaches a slow-moving vehicle and needs to change lanes.

Medical imaging

Computer vision is used in the health sector to make faster and more accurate diagnoses and to monitor disease progression. Using pattern detection models, doctors can spot early symptoms of diseases that are difficult to see, such as cancer.

Medical imaging analysis with computer vision shortens the time for medical professionals to analyze images. Endoscopy, X-ray radiography, ultrasound, and magnetic resonance imaging (MRI) are some of the medical imaging disciplines that use computer vision.

By pairing convolutional neural networks with medical imaging, medical professionals can observe internal organs, detect abnormalities, and understand the cause and effect of certain diseases. It also helps doctors monitor the development of diseases and the progress of treatments.

Content control

Social media networks need to review millions of new posts every day. It is no longer practical to have a content moderation team reviewing every image or video posted, so computer vision systems are used to automate the process. Computer vision helps social media platforms analyze uploaded content and flag those that contain objectionable images. Companies also use deep learning algorithms for text analysis to identify and block posts containing offensive text.

Surveillance

Video feeds are a solid form of evidence. They help detect lawbreakers and help security experts act before minor concerns turn into disasters. It’s nearly impossible for humans to track surveillance footage from multiple sources, but it’s an easy task for computer vision. Computer vision-assisted surveillance systems can scan live images and detect suspicious behavior.

Facial recognition can be used to identify wanted criminals and thus prevent crimes. The object identification technique we mentioned above can detect people carrying dangerous objects in crowded areas. This technique is also used to determine the number of available parking spaces in smart cities.

Storage for video surveillance: keep it simple

This year a significant event will take place: somewhere in the world, the billionth CCTV camera will be installed. This means that a camera already monitors every seventh person on the planet. And in some cities, more than a million cameras are already in use, making the ratio even more impressive.

That’s a great deal of surveillance. But cameras are used for more than just security. They also help businesses ensure quality control of processes, improve logistics, get better product placement, recognize privileged customers the moment they enter the sales area, and so on.


RAIDIX sees the use of video analytics tools for enterprise tasks as an appealing challenge and has developed a line of solutions based on:

  • a scalable video archive with an architecture that has no single point of failure and the most reliable RAID in the industry;
  • high-performance storage system, which will significantly increase the speed of training models;
  • high-performance solutions for edge infrastructures;
  • mini-hyperconverged solution.

RAIDIX offers three types of solutions that can be used in high-performance infrastructures:

  • a centralized solution based on the high-performance RAIDIX ERA engine, NVMe drives, and a high-performance network from NVIDIA (illustrated by an AFA built on the AIC HA202-PV platform and an AFA built on a Supermicro server platform with Western Digital EBOF);
  • a centralized solution for creating video archives that provides the highest access speed and availability for large amounts of data (illustrated by a basic scheme of a video archive and a data storage system built on a Supermicro server platform with Western Digital EBOF);
  • a RAIDIX ERA-based solution for edge infrastructures;
  • a mini-hyperconverged platform for smaller projects.

Below is a closer look at implementing a video archive in modern installations.

Industry Challenges and Storage Requirements

Video surveillance projects face new challenges at the data storage level: not only greater demands on bandwidth and storage capacity, but also changes in the type of load placed on the storage system.

Now, most of the workload falls on these tasks and processes:

  • continuous random write operations from multiple cameras and video servers;
  • unpredictable random read operations of the video archive on demand;
  • high transactional load on databases;
  • high-speed work with memory for analytics.

In addition to managing the variety and intensity of these storage workloads, scalability is critical to accommodating new cameras and continually increasing resolutions. Also, to meet the growing needs of video surveillance, companies need high-performance, reliable, and efficient storage systems.

Solution: NAS and…?

Large video surveillance projects go well beyond network video recorders and storage on video surveillance servers.

Modern VSS requires an enterprise-grade infrastructure with separate servers and storage units. The layered approach allows for increased processing power, faster I/O processing, and increased throughput and capacity.

With these requirements in mind, enterprise storage systems are dominated by two architectures:

  • NAS: stores data as files and presents these files to the application as a network folder;
  • SAN: looks like local storage, allowing the operating system to manage the disk.

In the context of video surveillance applications, these two approaches are polar opposites.

Recently, SAN has become the preferred option for enterprise VSS. Sure, NAS technology does a good job for many tasks, but multi-camera recording, database, and analytics workloads demand performance levels that call for a direct-attached or SAN approach. IHS forecasts show that the SAN market will grow by more than 15% in 2020-2022, while the NAS segment’s annual growth will drop from 5% to about 2%.

For this reason, video surveillance software vendors recommend local or SAN-attached storage.

Also, many video surveillance projects operate in virtual environments. In these cases, each virtual video surveillance server requires high-performance storage not only for its video content, but also for the operating system, applications, and databases.

Make it VSS (Viable Simple Storage) 

Both SAN and NAS are easy to use, and their deployment steps are almost the same: both architectures typically rely on Ethernet-based connectivity (although SANs can use other media such as FC) so that files and directories can be accessed from multiple systems. Such shared solutions must use file locking to prevent multiple systems from modifying the same files at the same time.

Since many video surveillance systems do not require common video sharing, all this file locking and the complexity of the shared file system is unnecessary overhead that limits performance and adds complexity to maintenance and protection.

Deduplication and compression, also offered by many NAS and SAN systems, are unnecessary for video surveillance solutions. Choosing a solution with these features incurs additional costs for unused technologies. These useless features built into the software negatively impact overall performance and require maintenance to ensure safety and reliability.

Storing data at different tiers can be useful when deploying video surveillance. However, video surveillance software already knows how to manage this, as it can create separate storage for databases, real-time recording, and archives. As long as the data is managed by the video surveillance software, there is no need for the storage system itself to move data between tiers dynamically. Consequently, data tiering or automated data management is not required as a storage function; it only adds risk and complexity.

Why SAN is effective

Most scalable file systems require multiple servers to function, and multi-server solutions in turn require an internal network, which can create the following problems:

  • each write operation creates a series of data transfers over the internal network, which limits performance;
  • peer-to-peer connections create more potential points of failure, which can make it harder to increase storage or replace equipment;
  • while achieving the same redundancy levels as a SAN, scalable file systems provide less bandwidth.

SAN solutions for VSS are also offered by RAIDIX. These solutions are based on software RAID, capable of performing checksum calculations faster than any similar solution in the industry. Also, RAIDIX supports various SAN protocols (iSCSI, FC, iSER, SRP), which help to achieve a number of goals:

  • providing high bandwidth (up to 22GB/s) to work with thousands of high-resolution cameras that can be connected through dozens of video servers;
  • cost-effective maintenance with an increase in the number of cameras and in archive depth: due to the use of proprietary RAID-array technologies, fewer disks are required to obtain the required storage volume and performance;
  • vertical scalability up to 11PB per storage system, thanks to the ability to work with large RAID groups of up to 64 disks that tolerate the failure of two or more disks (when using RAID 7.3 / N+M) and to combine these groups into a single volume;
  • high reliability of data storage when using RAID 7.3 or RAID N+M, the most fault-tolerant RAID arrays on the market, which makes it possible to use large disks (up to 18-20TB) without compromising data safety. As disks grow in capacity and number within a RAID array, the likelihood of data loss rises sharply, since the reliability of individual disks does not improve to match. Thus, the probability of data loss for a RAID 6 array of 24 18TB disks after one year in operation is 1%, while for RAID 7.3 it is only 0.001% (a rough illustrative simulation of this effect follows after this list);
  • stability of operation during sudden increases in workload due to sufficient performance headroom, even in situations where drive failure coincides with peaks of intensive work of the video surveillance system. This is achieved thanks to unique technologies of proactive and partial reconstruction;
  • the high performance of RAIDIX storage system does not limit the capabilities of analytical software for video surveillance. Face recognition, motion capture, and other video analytics functions will work without downtime and with minimal latency;
  • the possibility of using the collected video surveillance data not only for security tasks but also, simultaneously, for business analytics. No additional copying to analytical systems is required, and smart prioritization based on QoSmic technology prevents these extra storage tasks from affecting the primary recording function;
  • building an enterprise-level architecture without a single point of failure: RAIDIX 5.X supports dual-controller operation with possible replication to remote systems.
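To illustrate the intuition behind those reliability figures, here is a rough Monte Carlo sketch in Python. It assumes a constant annual failure rate per disk, a fixed rebuild window, and independent failures; it is not the model behind the 1% / 0.001% numbers quoted above (real models also account for factors such as unrecoverable read errors during rebuild). It only shows how tolerating one additional concurrent failure cuts the probability of data loss by orders of magnitude.

```python
# A rough Monte Carlo sketch of why extra parity matters. All parameters
# (annual failure rate, rebuild window, disk count, trial count) are
# illustrative assumptions.
import random

HOURS_PER_YEAR = 24 * 365

def yearly_loss_probability(disks=24, parity=2, afr=0.03,
                            rebuild_hours=96, trials=200_000):
    """Estimate P(data loss within one year) for an array that survives up to
    `parity` concurrent disk failures; a failed disk stays down for
    rebuild_hours after it fails."""
    rate = afr / HOURS_PER_YEAR                  # per-disk failures per hour
    losses = 0
    for _ in range(trials):
        # failure time of each disk this year (most disks never fail)
        failures = sorted(t for t in (random.expovariate(rate) for _ in range(disks))
                          if t < HOURS_PER_YEAR)
        # data is lost if parity + 1 failures land inside one rebuild window
        for i in range(len(failures) - parity):
            if failures[i + parity] - failures[i] < rebuild_hours:
                losses += 1
                break
    return losses / trials

# double parity (RAID 6-like) vs. triple parity (RAID 7.3-like); the triple-parity
# estimate is often 0.0 even over many trials, which is exactly the point
print("double parity:", yearly_loss_probability(parity=2))
print("triple parity:", yearly_loss_probability(parity=3))
```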

Where to start choosing an archive storage system?

When calculating and selecting an archived data storage system, the following parameters should be considered:

  • type of cameras and their number;
  • archive depth in days;
  • additional retention requirements (if any);
  • the intensity of movement in the frame, its distribution over the time of day or depending on events;
  • type of network infrastructure, its need for updates;
  • how the video analytics software is deployed;
  • whether it is required to use the resources of cloud infrastructures;
  • when and what kind of upgrade is expected (type and number of cameras, list of services, depth of the archive, etc.)

For a basic calculation, one can use the calculators available at specialized software vendors’ websites. For a more accurate calculation in complex projects, the participation of professionals will be required.
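For a sense of what such a calculator does with the parameters listed above, here is a back-of-the-envelope sketch in Python; the bitrate, motion ratio, and headroom factor are illustrative assumptions rather than figures from any particular vendor or calculator.

```python
# A rough capacity estimate for a video archive. All default values are
# illustrative assumptions; a real sizing exercise must also consider codec,
# resolution, frame rate, retention tiers, and RAID overhead.
def archive_capacity_tb(cameras, bitrate_mbps, archive_days,
                        motion_ratio=1.0, headroom=1.3):
    """Rough usable capacity needed for a video archive, in terabytes.

    cameras       -- number of cameras recording to this archive
    bitrate_mbps  -- average stream bitrate per camera, in megabits per second
    archive_days  -- required archive depth, in days
    motion_ratio  -- fraction of time cameras actually record (1.0 = continuous)
    headroom      -- margin for failures, load peaks, and future growth
    """
    seconds = archive_days * 24 * 3600
    total_bits = cameras * bitrate_mbps * 1e6 * seconds * motion_ratio
    total_tb = total_bits / 8 / 1e12            # bits -> bytes -> terabytes
    return total_tb * headroom

# e.g. 500 cameras at 4 Mbit/s, 30-day archive, motion-triggered ~60% of the time
print(f"{archive_capacity_tb(500, 4, 30, motion_ratio=0.6):.1f} TB")
```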

In addition, there are two important points to consider when calculating.

Firstly, the desired characteristics of the data storage system should be calculated with the worst-case scenario in mind: the maximum load combined with the failure of storage system components such as controllers and drives. Unfortunately, this is what usually happens in real life: as the load increases, physical components begin to fail as they reach the limits of their capabilities.

Secondly, drive capacities keep growing while drive performance stays roughly the same, and classic RAID simply cannot keep up. Technologies are needed that ensure the availability of large data volumes over the long term; however, with the mass adoption of dual-actuator drives, this may soon change.

Thus, the elements of a modern video archive are:

  • large-volume drives (16-18TB);
  • two or more controllers;
  • high-performance access interfaces (FC > 16Gbps, Eth > 10Gbps);
  • controller software that allows easy scaling of volume without service downtime, can survive the failure of multiple drives and of at least one storage controller without losing performance, and is adapted to continuous recording.

Conclusion

The demand for video surveillance projects is steadily growing, and with it the demand for solutions that create fault-tolerant storage systems. The two main approaches to media storage in the enterprise segment are NAS and SAN. The second type of configuration is the better fit for video surveillance projects because of its higher performance, its ability to function in different environments, and its capacity to serve a large number of video servers. For customers looking for high performance and fault tolerance, RAIDIX provides advanced SAN storage solutions based on fast software RAID.

In general, modern data storage systems offer a great number of options, and the user’s task is to determine what is important so as to avoid overpaying and placing unnecessary load on the system. For example, video surveillance does not actually require storage tiering or automatic management as a storage function. At the same time, this does not mean that choosing a data storage system is a piece of cake: there are about a dozen software- and hardware-related factors to pay attention to. Also, when calculating the performance and fault tolerance of a future storage system, one should always focus on the worst possible scenario, which is the maximum load combined with the failure of storage system components.

The Growing use of Big Data at Intelligence Agencies

The typical Hollywood portrayal of spywork makes the field appear a lot more glamorous than it really is. For better or worse, intelligence work really involves a lot of sorting through files, documents, numbers, and other data, most of it done in office buildings by employees hunched over computers. While the work may seem mundane on the surface, that in no way takes away from its importance. Intelligence agencies face a tremendous challenge as they attempt to identify criminals and possible terrorists before something catastrophic happens. Finding these persons of interest takes a great deal of effort, but many intelligence agencies may see a boost as they slowly adopt the latest big data analytics technologies. It’s this gradual adoption of big data that can make them more effective in detecting threats and finding culprits.

Paper to Digital

By their very nature, intelligence agencies have always had to deal with data. For most of their existence, this data came in the form of good old-fashioned paper. Files were scrutinized, sorted, and deciphered at a meticulous level, and simply getting hold of this information could require plenty of time and resources. However, a revolutionary transformation has occurred in the past two decades as data around the world has migrated to the digital realm. Big data is defined by the sheer size of data sets, not to mention the frequency with which it is collected and the variety of sources it comes from. It’s now easier than ever to gather this data, and while that part of the job has been alleviated, intelligence agencies now have a lot more information to sift through.

Privacy Concerns

Of course this isn’t without its own brand of controversy. Government surveillance of its citizens strikes at the very heart of every debate surrounding privacy and national security. Many privacy advocates wonder if it’s worth it for intelligence agencies to monitor millions of innocent citizens just to catch a few who may be guilty or who may commit crimes in the future. Obviously, this debate will remain a heated contest for many years to come. However one may feel about how intelligence agencies get their data, what can’t be denied is the important role big data is now playing as these agencies attempt to reach their goals.

Data Volume

The sheer amount of data that agencies have access to is almost incomprehensible. Recently released documents from the UK, for example, reveal that intelligence agencies can gather data such as medical records, travel records, commercial information, and financial records, in addition to data about internet activity and phone information. Given the volume of data now being collected, big data analytics has become an absolute must. That also means many intelligence agencies are turning to data scientists to use the latest big data tools like Hadoop in the cloud to find hidden insights from all the data gathered. It’s this use of the latest technology that will make big data a lot more manageable. Speed and agility are needed as part of a high performance analytics strategy. Intelligence agencies can use these tools in order to discard information deemed irrelevant while concentrating on the data sets that truly matter.
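As a purely illustrative sketch of that kind of triage (the paths and column names are invented for the example and do not describe any agency’s actual pipeline), here is how such a filter over a large dataset might look in PySpark, one of the tools built on the Hadoop ecosystem:

```python
# A hypothetical example: keep only recent records from flagged source
# categories and write the much smaller subset out for deeper analysis.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("relevance-filter").getOrCreate()

records = spark.read.parquet("hdfs:///data/raw_records/")          # assumed input path

relevant = (records
            .filter(F.col("collected_at") >= "2016-01-01")          # assumed column names
            .filter(F.col("source_category").isin("priority", "flagged")))

relevant.write.mode("overwrite").parquet("hdfs:///data/relevant_records/")
```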

Predicting Crime

In much the same way businesses use big data as part of a predictive analytics strategy, intelligence agencies try to predict future events, only in this case they’re dealing with future crimes. As just one example, the CIA recently launched a new department called the Directorate of Digital Innovation (DDI). This directorate aims to use big data for “anticipatory intelligence”, taking the data the CIA collects and using it to discover insights pointing to future events that may need CIA involvement. With the right talent and technology, the CIA will likely be able to achieve that goal to at least some extent.

Obstacles and arguments will likely still be an issue as intelligence agencies become more ensconced in data. Even so, these agencies have realized that in order to function and actually respond to the current threats facing the world, big data adoption is an absolute must. It may not involve the same kind of excitement you’ll see in the next James Bond movie, but the work done at intelligence agencies with big data can end up being just as effective.

