ACR 2 Cybersecurity Risk Management System™ (ACRMS™)
The highly automated ACR 2 Cybersecurity Risk Management System™ (ACRMS™) brings small Federal contractors into compliance with unclassified-data cybersecurity requirements with limited need for scarce and expensive cybersecurity experts. Extensive auditing in the long-regulated medical industry shows that most cybersecurity program audit failures stem from three issues: lack of a cybersecurity risk assessment, lack of cybersecurity policy safeguards (more than 60% of HIPAA breaches are due to human error), and lack of user training in those safeguards. The ACRMS™ integrates these elements with a user-friendly task management system to provide a cybersecurity capability that greatly reduces the need for expert services. Its core technology is an expert-system implementation of the NIST 800-30 cybersecurity risk assessment protocol that has been tested in over 200 cybersecurity audits since 2006. Integrated with video-assisted policy creation, online user education, and task management modules, the ACRMS™ has been demonstrated to rapidly and inexpensively bring small contractors into compliance with Federal cybersecurity standards. Given the expected near-term shortfall of cybersecurity experts, the ACRMS™ uniquely enables the small contractors who handle over 20% of Federal spending to meet ever-increasing cybersecurity demands efficiently.
OneLynk - A holistic, integrated back office solution
Most Enterprise Resource Planning systems take months or years to implement and cost thousands of dollars per employee. AtWork Systems' OneLynk business operating system disrupts the market by providing a holistic, integrated solution that small to mid-sized companies can implement in as little as one week at a fraction of the cost of existing systems. Companies that adopt OneLynk will differentiate themselves through more competitive rates, comprehensive workflows, and visible performance data, resulting in increased revenue.
Tech to help entrepreneurs build IP cost-effectively and grow the business
We apply deep learning to help inventors draft high-quality patent applications. Machine drafting of documents improves how ideas are documented and cuts costs. The Foundation will provide low-cost DIY patent filing software. An inventor's ideas can be uniquely secured on the blockchain, powered by the Inventiv cryptocurrency, or token, which serves as a medium of exchange that uses strong cryptography to record new ideas on the blockchain, control the creation of additional ideas from the crowd, and verify the transfer of ideas as intellectual property (IP) assets. The system provides Context-Sensitive Examples so you can see how others describe similar subject matter, making provisional preparation easier. The software includes a natural language understanding tool that catches inconsistencies in the filing so you can file without mistakes. Using natural language processing (NLP), it checks the consistency of terms and number references in the application and whether the claims are supported by your description. It checks claims for proper antecedent basis and shows how claim elements are described in the description. An editable claim tree makes it easy to manipulate claim concepts.
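The product's models and rules are not published here; as a rough illustration of the kind of antecedent-basis check described above, the sketch below flags claim elements referenced with "the" or "said" before they are introduced with "a" or "an". The regex and the sample claim are purely illustrative, not the vendor's implementation.

```python
import re

def check_antecedent_basis(claim: str):
    """Very simplified check: an element must be introduced with 'a'/'an'
    before it can be referenced with 'the'/'said'."""
    introduced = set()
    issues = []
    for m in re.finditer(r"\b(a|an|the|said)\s+(\w+)", claim, re.I):
        article, element = m.group(1).lower(), m.group(2).lower()
        if article in ("a", "an"):
            if element in introduced:
                issues.append(f"'{element}' is introduced more than once")
            introduced.add(element)
        elif element not in introduced:
            issues.append(f"'{element}' lacks antecedent basis")
    return issues

claim = ("1. A widget comprising: a housing; a sensor mounted in the housing; "
         "and the controller coupled to the sensor.")
for issue in check_antecedent_basis(claim):
    print(issue)   # -> 'controller' lacks antecedent basis
```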
Bonita, CA
www.dtanalysts.com and http://www.astralaunch.com
Booth: 10T
AstraLaunch
AstraLaunch provides technology assessment, commercialization-potential analysis, and market intelligence, and helps prioritize research and technology portfolios. It does this by merging an existing technology assessment program developed at the Muenster University of Applied Sciences in Germany with the Bintel Intelligence Platform. The prototype first assesses the technology against 43 researched criteria, then uses Bintel to collect, process, analyze, and store the source abstracts, news, and information in an enriched SQL database hosted on AWS. The system allows researchers, technology transfer offices, and portfolio managers to quickly develop a comprehensive understanding of any given topic by collecting, processing, and aggregating market information related to that topic and presenting it in intuitive visualizations. Its benefit is keeping the researcher up to date in a fast-changing and increasingly complex world. The system delivers user-curated updates continuously through the online platform, allowing the end user to quickly spot changes in trends, new developments, or shifts in strategy from competitors or incumbents. This gives the end user a significant informational advantage: a superior top-down view of specific topics or groups of topics of interest, supporting more informed bottom-up analyses or integrated products.
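The 43 criteria and the Bintel schema are not public; the sketch below only illustrates the general pattern of scoring a technology against weighted criteria and persisting the enriched record in a SQL store (SQLite stands in for the AWS-hosted database, and all field names and weights are assumptions).

```python
import sqlite3

# Illustrative subset of weighted assessment criteria (the real system uses 43).
CRITERIA = {"technology_readiness": 0.4, "market_size": 0.35, "ip_strength": 0.25}

def assess(scores: dict) -> float:
    """Weighted overall score; each input score is expected in [0, 1]."""
    return sum(CRITERIA[name] * scores[name] for name in CRITERIA)

conn = sqlite3.connect(":memory:")   # stand-in for the AWS-hosted SQL database
conn.execute("CREATE TABLE assessments (technology TEXT, overall REAL, source_abstract TEXT)")

record = {
    "technology": "Example battery coating",
    "scores": {"technology_readiness": 0.6, "market_size": 0.8, "ip_strength": 0.7},
    "source_abstract": "Abstract text collected by the intelligence platform ...",
}
conn.execute(
    "INSERT INTO assessments VALUES (?, ?, ?)",
    (record["technology"], assess(record["scores"]), record["source_abstract"]),
)
for row in conn.execute("SELECT technology, round(overall, 2) FROM assessments"):
    print(row)
```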
Geoanalytics Platform for Special Operations Mission Readiness
EPIC Ready objectively measures the tasks, conditions, and standards assigned to the Joint Mission Essential Tasks (JMETs) associated with a unit's objectives. This data, which helps determine training effectiveness and mission readiness, can be easily exported to JTIMS, and it improves rapid field reviews. Daily objectives reports, which can be completed in minutes, capture whether training objectives are being met and, if not, provide the instant information needed to adjust on the fly. From exercise planners to unit commanders to the joint staff, leadership at all levels receives the exact information needed to determine success and, if shortfalls exist, help determine why. In addition to better information, PAS also reduces the time required for after-action reporting from months to days. Furthermore, you can create immediate lessons learned to enhance planning for the next joint exercise life cycle. Intuitive and simple to use, PAS has the flexibility needed to handle today's changing requirements, and it provides the unit performance data needed to ensure force readiness and increase return on investment (ROI) in training. Spanning the joint exercise life cycle, PAS helps plan and run exercises, measure results, fine-tune training, quantify force readiness, and improve ROI.
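The PAS data model and the JTIMS export format are not public; the sketch below only illustrates one way a daily objectives record tied to a JMET could be structured and exported in a machine-readable form. Every field name is hypothetical.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ObjectiveAssessment:
    """Hypothetical daily record tying an observed result to a JMET standard."""
    jmet_id: str        # Joint Mission Essential Task identifier
    task: str
    condition: str
    standard: str       # measurable criterion
    met: bool
    notes: str = ""

daily_report = [
    ObjectiveAssessment(
        jmet_id="OP 1.1", task="Conduct mission planning",
        condition="Degraded communications",
        standard="Plan issued within 6 hours", met=False,
        notes="Plan issued in 7.5 hours; comms relay delay."),
]

# Export for downstream reporting or import into other systems.
print(json.dumps([asdict(r) for r in daily_report], indent=2))
```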
Social Media Environment and Internet Replication (SMEIR)
SMEIR delivers the ability to conduct unrestricted offensive and defensive cyber and information operations training while replicating realistic internet scenarios in closed network environments. Content is flexible and customizable, covering everything from social media analysis and engagement to network mapping. SMEIR addresses the experience gap warfighters encounter between classroom training and being tasked to attack and execute in live environments. SMEIR comprises five core capabilities: website and social media replication; an exercise development and management tool; malicious traffic generation; information operations (IO) with extensive cyber/content libraries; and software-defined infrastructure.
Berkeley, CA
www.lbl.gov/ https://ipo.lbl.gov/for-industry/tech-index/
Booth: 303
Optimized Energy Efficient Prefetcher Hardware Architecture
With rapidly increasing parallelism, DRAM performance and power have surfaced as primary constraints, from consumer electronics to high-performance computing (HPC), for bulk-synchronous data-parallel applications that are key drivers of multi-core designs, e.g., image processing, climate modeling, physics simulation, gaming, and face recognition. Lawrence Berkeley National Laboratory's optimized energy-efficient prefetcher hardware architecture, a purely hardware last-level cache prefetcher, exploits the highly correlated prefetch patterns of data-parallel algorithms that prefetchers oblivious to data parallelism do not recognize. The technology generates prefetches on behalf of multiple cores in memory address order to maximize DRAM efficiency and bandwidth, and it can prefetch from multiple memory pages without expensive translations. Compared to other prefetchers, the LBNL technology improves execution time by 5.5% on average (10% maximum), increases DRAM bandwidth by 9% to 18%, decreases DRAM rank energy by 6%, produces 27% more timely prefetches, and increases coverage by at least 25%.
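The prefetcher itself is hardware and its microarchitecture is not detailed here; the toy model below only illustrates the stated idea of merging per-core miss streams and emitting prefetch candidates in ascending memory-address order so DRAM sees near-sequential traffic. Cache-line size and lookahead depth are arbitrary assumptions.

```python
CACHE_LINE = 64   # bytes; illustrative
LOOKAHEAD = 4     # lines prefetched ahead of each detected stream

def address_ordered_prefetches(per_core_misses: dict) -> list:
    """Toy model: each core's recent last-level-cache misses define a stride-1
    stream; candidates from all cores are merged, de-duplicated, and emitted
    in ascending address order."""
    candidates = set()
    for core, misses in per_core_misses.items():
        if len(misses) < 2:
            continue                       # no stream detected yet
        last = misses[-1]
        for k in range(1, LOOKAHEAD + 1):
            candidates.add(last + k * CACHE_LINE)
    return sorted(candidates)              # address order for DRAM efficiency

# Two cores walking neighbouring regions of the same data-parallel array.
misses = {0: [0x1000, 0x1040, 0x1080], 1: [0x2000, 0x2040]}
print([hex(a) for a in address_ordered_prefetches(misses)])
```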
Berkeley, CA
www.lbl.gov/ https://ipo.lbl.gov/for-industry/tech-index/
Booth: 303
IDEALM: Efficient Data Reduction with Locally Exchangeable Measures
LBNL has developed IDEALM, a dynamic sampling algorithm that reduces large streaming data while still providing accurate information about the data for analysis. IDEALM could prove beneficial to network routers, for use in network monitoring mechanisms; to facilities that generate large amounts of data, as a means of reducing data volume; and to social networks, among other applications. IDEALM can be used for high-frequency streaming data as well as stored data. Large streaming data are an essential part of computational modeling and network communications, yet such data are generally intractable to store, compute, search, and retrieve. This dynamic data reduction algorithm detects redundant patterns and reduces data size up to 100 times by exploiting the exchangeability of measurements; it exploits both redundancy within a time series and redundancy in the data distribution. Today's common techniques for reducing the size of collected monitoring measurements, such as storing a random sample or applying spectral analysis, are impractical for high-frequency streaming data; IDEALM resolves these issues.
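IDEALM's exact algorithm is not given in this summary; the sketch below only illustrates the underlying idea of exchangeability: a block of measurements whose empirical distribution is statistically indistinguishable from an earlier block can be stored as a reference rather than verbatim. The block size, test, and threshold are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

BLOCK = 256    # measurements per block; illustrative
ALPHA = 0.05   # significance level for "same distribution"

def reduce_stream(values: np.ndarray):
    """Keep a block verbatim only if no previously kept block has an
    indistinguishable distribution (two-sample KS test); otherwise store
    just a reference to the matching block."""
    kept, encoded = [], []
    for start in range(0, len(values) - BLOCK + 1, BLOCK):
        block = values[start:start + BLOCK]
        match = next((i for i, ref in enumerate(kept)
                      if ks_2samp(block, ref).pvalue > ALPHA), None)
        if match is None:
            kept.append(block)
            encoded.append(("raw", len(kept) - 1))
        else:
            encoded.append(("ref", match))
    return kept, encoded

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 4 * BLOCK),   # steady traffic
                         rng.normal(5, 1, 2 * BLOCK)])  # regime change
kept, encoded = reduce_stream(stream)
print(f"{len(encoded)} blocks encoded, {len(kept)} stored verbatim")
```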
rasdaman
Datacubes are enablers for analysis-ready, user-centric Big Data services in science and engineering, covering sensor data, imagery, image timeseries, simulations (such as weather), and statistics data. rasdaman, the pioneer datacube engine, is regarded by ESA and other experts as the worldwide leader, with many imitators attempting to copy it. rasdaman stands out through its performance, scalability, flexibility, security, and standards support, plus its capability for planetary-scale peer federations, demonstrated on Petabyte-scale satellite and climate data with more than 1000x cloud parallelization. In 2018, the Research Data Alliance attested that rasdaman can be 300x faster than other tools. Further, rasdaman is the official datacube reference implementation and blueprint for the OGC and ISO datacube standards. In summary, rasdaman heralds a new generation of services on massive, distributed spatio-temporal data, standing out through its flexibility (any query, any time), performance and scalability (2.5+ PB, 1000x parallelization), security (access control down to the single-pixel level), and open standards (as reference implementation).
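For readers unfamiliar with datacubes, the sketch below uses plain NumPy to show, conceptually, what a spatio-temporal datacube query does (slice a timestep, trim a spatial window, aggregate). It is not rasdaman code; a real deployment evaluates such operations server-side, at archive scale, through the OGC query standards.

```python
import numpy as np

# Conceptual stand-in for a spatio-temporal datacube: time x latitude x longitude.
rng = np.random.default_rng(1)
cube = rng.random((365, 180, 360))          # one year of daily global grids

day = cube[200]                             # slice along the time axis
window = day[40:60, 100:140]                # trim to a spatial sub-region
print("regional mean for day 200:", round(float(window.mean()), 3))

# Timeseries of the regional mean across the year: a typical datacube query.
series = cube[:, 40:60, 100:140].mean(axis=(1, 2))
print("annual series length:", series.shape[0])
```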
Artificial intelligence, machine learning, machine vision for infrastructure and road surface assessments
RoadBotics AI is unlike anything else that exists today for assessing infrastructure. RoadBotics is an infrastructure technology company that uses AI to revolutionize how governments and engineering firms make data-driven pavement management decisions. The company uses cutting-edge deep learning and a simple smartphone to disrupt the highly subjective and expensive pavement inspection process. The technology began as a project in Carnegie Mellon University's Robotics Institute, with the original intention of contributing to the autonomous vehicle revolution. With substantial mathematical effort and additional training, the original algorithms were commercialized to assess infrastructure. RoadBotics' algorithms have been trained to read images and automatically make decisions about road conditions, the result of a machine-learning technique called deep learning used to train neural networks. Neural networks are the core machinery of deep learning and aim to find patterns in data. More specifically, RoadBotics uses computer vision and trains its algorithms through deep learning. The primary training method was (and is) supervised learning, which involves feeding labeled data to the equations that make up the algorithms.
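RoadBotics has not published its network architecture; the sketch below is only a minimal example of the supervised-learning pattern described above, training a small convolutional classifier on labeled road-surface images. Random tensors stand in for the real imagery, and the five-class condition rating is an assumption for illustration.

```python
import torch
from torch import nn

# Stand-in for labeled road imagery: 64x64 RGB patches with a condition
# rating from 0 (good) to 4 (severely distressed).
images = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, 5, (32,))

model = nn.Sequential(                      # small convolutional classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 5),             # five condition classes
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):                      # supervised training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```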
Cloud Hypervisor Forensics and Incident Response Platform (CHIRP)
More than $1 trillion in IT spending will be affected by the shift to the Cloud over the next five years. This shift to Infrastructure-as-a-Service (IaaS) platforms has created challenges for cyber Incident Response (IR) and forensic teams investigating not only breaches and leaks but also cyber-crime, due to the ephemerality, location, and ownership of the data, disks, and technology provided by Cloud Service Providers (CSPs). Our Cloud Forensics Platform introduces a novel approach that uses Virtual Machine Introspection (VMI) to provide intelligence and forensic artifacts from active VMs in cloud systems. Each IaaS leverages a VM Monitor, or hypervisor, to service VMs in the Cloud, but most hypervisors do not expose a useful Application Programming Interface (API) for the customizable, contextual introspection an analyst needs to conduct an investigation. We have developed scalable, in-depth VM instrumentation and introspection that allows fast handling of events, as well as direct access to VM state (memory), in a safe, stable fashion.
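The CHIRP APIs are not public, so the sketch below defines a purely hypothetical interface to show the general VMI pattern: reconstructing a forensic artifact (here, a process list) from a running VM without installing an agent inside the guest. A real implementation would map guest memory and walk the guest kernel's data structures; the canned data here is only a placeholder.

```python
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    name: str

class HypotheticalVMI:
    """Hypothetical stand-in for a hypervisor introspection API."""
    def __init__(self, vm_name: str):
        self.vm_name = vm_name
        # Placeholder for processes parsed out of guest kernel memory.
        self._guest_processes = [Process(1, "init"), Process(4242, "sshd")]

    def list_processes(self):
        """Return the guest's process list as seen from outside the VM."""
        return list(self._guest_processes)

# Agentless collection of a forensic artifact from a running VM.
vmi = HypotheticalVMI("suspect-vm-01")
for proc in vmi.list_processes():
    print(f"{proc.pid:>6}  {proc.name}")
```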
HADES - High-Fidelity Adaptive Deception & Emulation System
The HADES platform is a deception environment that utilizes Software Defined Networks (SDN), cloud computing, dynamic deception, and agentless Virtual Machine Introspection (VMI). These elements fuse to not only create complex, high-fidelity deception networks but also provide mechanisms to directly interact with the adversary, something current deception products do not facilitate. At the onset of an attack, adversaries are migrated into an emulated deception environment, where they are able to carry out their attacks without any indication that they have been detected or are being observed. HADES then allows the defender to react to adversarial attacks in a methodical and proactive manner by modifying the environment, host attributes, files, and the network itself in real time. Through a rich set of data and analytics, cybersecurity practitioners gain valuable information about the tools and techniques used by their adversaries, which can then be fed back to the network defender as threat intelligence.
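HADES internals are not described here; the toy sketch below only illustrates the migration idea with a hypothetical SDN-style flow table that rewrites a suspected attacker's traffic toward an emulated twin of the targeted host, so the session continues inside the deception environment. All addresses, rule fields, and function names are invented for the example.

```python
# Hypothetical flow-table model of migrating an attacker into a deception network.
flow_table = []   # each rule: (match, action)

def migrate_to_deception(attacker_ip: str, real_host: str, decoy_host: str):
    """Install a rule so the attacker's traffic to the real host is
    transparently rewritten toward its high-fidelity emulated twin."""
    rule = ({"src_ip": attacker_ip, "dst_ip": real_host},
            {"rewrite_dst_ip": decoy_host, "mirror_to": "analytics"})
    flow_table.append(rule)
    return rule

def forward(packet: dict) -> dict:
    """Apply the first matching rule to a packet (toy switch behaviour)."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            packet = dict(packet, dst_ip=action["rewrite_dst_ip"])
            break
    return packet

migrate_to_deception("203.0.113.7", "10.0.0.20", "10.99.0.20")
print(forward({"src_ip": "203.0.113.7", "dst_ip": "10.0.0.20", "payload": "scan"}))
```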
Skylight - A Big Data Platform for Buildings
Site 1001 was spun out of a large construction company, where the solution was invented to fill a hole in the software currently available for buildings. Once the solution was created, its true value was discovered not just in performing simple facilities management and maintenance processes but in truly optimizing the performance of the entire building. This new platform is Skylight, a big data platform for buildings. Competitors include old-guard facility management and maintenance companies as well as some newer firms focused on building analytics and performance. Our difference is that we bring expertise from both the construction (non-technical) and software (technical) worlds, giving us much better knowledge and insight into how buildings are designed and how they can be enhanced for performance. By incorporating an AI element into building operations, Site 1001's Skylight can build correlations between multiple data sources, ensuring not only that buildings run better than they did previously but also that the people within them are empowered. Building professionals use Skylight to create better places through smarter building management and community connection. Skylight translates core building and operational data into information that maximizes satisfaction, lowers operational costs, increases asset longevity, and enhances property value.
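Skylight's data model is proprietary; the sketch below only illustrates the kind of cross-source correlation the paragraph describes, relating hourly occupancy counts to HVAC energy use with pandas. The data are synthetic and the column names are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
hours = pd.date_range("2024-03-04", periods=24 * 7, freq="h")

# Synthetic stand-ins for two building data sources over one week.
occupancy = pd.Series(
    rng.poisson(lam=np.tile([2] * 8 + [60] * 10 + [5] * 6, 7)), index=hours)
hvac_kwh = pd.Series(
    20 + 0.8 * occupancy + rng.normal(0, 5, len(hours)), index=hours)

# Correlate the two sources to surface an operational relationship.
building = pd.DataFrame({"occupancy": occupancy, "hvac_kwh": hvac_kwh})
print("occupancy vs HVAC energy correlation:",
      round(building["occupancy"].corr(building["hvac_kwh"]), 2))
```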