ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) is used in everyday ways on devices in your home, on your phone, and in the workplace. AI can be loosely defined as applying acquired knowledge to make decisions, in contrast to following explicitly programmed logic.
AI raises unique quality issues, and verifying the behaviour of AI through testing is challenging. A major challenge for the technology sector is convincing people that AI systems can be trusted with important decisions, while handling the growing relevance of societal concerns in technology implementations.
CONFORMANCE & BEST PRACTICE ADVISORY
The EU’s proposed AI Act is set to be agreed in 2023 and take effect in January 2025. Because of the way the conformance requirements are being prepared, it is unlikely they will be published until late 2024. This leaves little time for businesses to prepare.
New AI codes of conduct and guidelines are published almost monthly, usually operating at a very high level. We save you the time of consuming them all and help you implement technical and management best practice where it matters.
SOLUTION
Our extensive work to set technical AI industry standards with UK and international standards development organisations such as BSI, ISO/IEC and CEN/CENELEC means we have detailed insight into the direction being taken. We can advise on management systems, technical quality management systems, and specific performance requirements.
We are also a partner with ForHumanity, and can audit systems against their scheme criteria.
By 2025, we expect to be an accredited conformity assessment provider for the proposed EU AI Act’s requirements.
Our clients range from SMEs to government bodies, and we provide them with the latest insight on the development of the EU’s framework.
We can advise you on management and technical processes for your AI systems in the following areas:
Risk management
Management systems and governance implications
Technical quality management
Data quality management and data audits
Detection and treatment of unwanted bias
Cybersecurity
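As an illustrative sketch of the kind of check a data audit for unwanted bias can include (this is a simplified example, not a description of our audit methodology), the widely used "four-fifths" disparate-impact ratio compares positive-outcome rates between a protected group and a reference group:

```python
# Illustrative bias check: the "four-fifths" disparate-impact ratio.
# A ratio below 0.8 is a common threshold for flagging a decision
# process for further review. Data and group labels are invented.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs reference group."""
    def rate(group):
        picked = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(picked) / len(picked)
    return rate(protected) / rate(reference)

# 1 = favourable decision. Group "b" is approved far less often than "a".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
assert ratio < 0.8  # below the four-fifths threshold: flag for review
```

A real audit would go well beyond a single ratio, but metrics of this shape are where the "detection" half of detection and treatment typically starts.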
AI TEST STRATEGY, DESIGN AND DELIVERY
There are a number of core problems that make testing AI difficult:
Functional correctness is not absolute
Functional adaptability has unintended side effects
Data is often incorrect or inconsistent
Data and concepts drift
Data is always biased
Ground truth is often unknown
The more we anthropomorphize, the less we expect to specify about quality
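To make the drift problem concrete, here is a minimal sketch (illustrative only, not our production tooling) of detecting distribution drift in a single feature by comparing a reference window against a live window with the two-sample Kolmogorov-Smirnov statistic:

```python
# Data-drift sketch: the two-sample Kolmogorov-Smirnov statistic is the
# maximum distance between two empirical CDFs. A large value means the
# live data no longer looks like the data the model was trained on.
import bisect

def ks_statistic(sample_a, sample_b):
    """Max absolute distance between the empirical CDFs of two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sorted_sample, v):
        # fraction of points <= v
        return bisect.bisect_right(sorted_sample, v) / len(sorted_sample)
    values = sorted(set(a) | set(b))
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in values)

reference = [0.1 * i for i in range(100)]        # training-time distribution
drifted   = [0.1 * i + 4.0 for i in range(100)]  # shifted live data
stable    = [0.1 * i for i in range(100)]        # unchanged live data

assert ks_statistic(reference, drifted) > 0.3    # large distance: drift
assert ks_statistic(reference, stable) == 0.0    # identical: no drift
```

In practice the threshold, window sizes, and choice of test are tuned per feature, and concept drift (the input-output relationship changing) needs additional monitoring of model outputs.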
SOLUTION
Dragonfly’s team has deep expertise in the testing of AI systems. We can help by:
Constructing test strategies that mitigate risks and comply with industry best practice
Testing and monitoring input data for data quality characteristics
Testing of model performance, including for unwanted bias and robustness
Implementing specific testing types unique to AI, such as metamorphic testing
Coaching and assisting existing team members, including accredited formal training
Conducting a test process review to examine current practices, and recommend improvements
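As a flavour of what metamorphic testing looks like (a simplified sketch with an invented stand-in model, not client code): when ground truth is unknown, we can still test a relation between outputs. Here the relation is that scaling every input by a positive constant must not change the predicted label of a sign-based classifier:

```python
# Metamorphic testing sketch: instead of comparing outputs against
# (often unknown) ground truth, verify a relation between the output
# for a source input and the output for a transformed follow-up input.
# The model here is a hypothetical stand-in: a linear scorer whose
# label depends only on the sign of a weighted sum, so multiplying
# all inputs by a positive factor must leave the label unchanged.

def predict(features, weights=(0.4, -0.2, 0.7)):
    """Toy classifier: returns 1 if the weighted sum is positive, else 0."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def check_scale_invariance(model, inputs, factor=3.5):
    """Metamorphic relation: label(x) == label(factor * x) for factor > 0."""
    failures = []
    for x in inputs:
        follow_up = [factor * v for v in x]
        if model(x) != model(follow_up):
            failures.append(x)
    return failures

source_tests = [(1.0, 2.0, 0.5), (-3.0, 1.0, -2.0), (0.2, 0.2, 0.2)]
assert check_scale_invariance(predict, source_tests) == []  # relation holds
```

The same pattern applies to real models: pick relations the system should satisfy (paraphrase invariance, rotation invariance, monotonicity) and generate follow-up tests automatically from existing inputs.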
TRAINING
Dragonfly’s experts have been working on testing AI systems for several years. Their work has directly contributed to A4Q and ISTQB’s training courses and certifications, and we can offer this training to companies who need to upskill their teams.
In addition to accredited courses, we can also offer bespoke training customised to your organisation’s needs. This can cover specific vertical use-cases for AI in depth, or upcoming regulatory frameworks.
Contact us at training@wearedragonfly.co for more information.