How does the sustainable design of an AI service make it more secure?
Feb 3, 2026
Since 2024, Aguaro has been a member of the ‘Institut du Numérique Responsable’ (INR, Responsible Digital Institute), a think and do tank created in 2018 and stemming from the ‘Club Green IT’. The INR addresses three key issues in sustainable digital technology:
reducing the economic, social and environmental footprint of digital technology
reducing humanity's footprint through digital technology (IT for Green / IT for Sustainability)
creating sustainable value and responsible innovation through digital technology
It does so by developing codes of conduct, labels, standards, guides, MOOCs and publications, to which Aguaro, among others, contributes.
At a time when the uses – and associated impacts – of Artificial Intelligence are booming, the INR publishes a monthly newsletter entitled ‘L'INR décode l'IA’ (The INR decodes AI), providing analysis and best practices for more responsible AI.
For its latest edition, Rémy Marrone, an expert and journalist specialising in responsible digital technology, interviewed Boris Bailly, co-founder of Aguaro and former ADEME researcher, and Anaëlle Monti, R&D Engineer, on a subject that has long been a blind spot in responsible digital technology: cybersecurity, and more specifically how the responsible design of an AI service makes it more 'secure by design'.
As the interview is in French, you can find the full translation in this article.
Cybersecurity has long been a blind spot in sustainable digital technology. With the advent of AI, this issue is increasingly coming to the fore. And sustainable digital technology offers plenty of arguments showing how it can serve the interests of cybersecurity.
What can help cybersecurity? While it is possible to eco-design cybersecurity itself, it is also interesting to see how the responsible design of an AI service makes it more 'secure by design'. How? And where does RIA31, the ethical and responsible AI framework established by the Institute (available in French only for the moment), come into play on this issue?
To answer these questions, we met with Aguaro, a well-known player in sustainable IT and contributor to RIA31, via one of its co-founders, Boris Bailly.
1/4 - Reducing the attack surface: questioning the need
"One technical risk we identify is related to the fact that new AI services stack layers on top of existing digital infrastructure. Each additional layer, whether immaterial or material, generates an additional risk and increases the potential attack surface. The complex nature of an AI service creates more entry points."
"If we develop a set of criteria in advance to question each project and each business need, we can then scrutinise the AI-based solution, which is often chosen for ideological rather than practical reasons."
2/4 - Controlling data
"We need to be cautious with AI, because it increases information risk by default. Data transmission is easier and less controlled than in a traditional digital service. Data can be captured in bulk much more quickly. Controlling the volume and quality of data becomes key."
3/4 - Betting on greater sovereignty
"Sovereignty is an argument that works well today with decision-making teams and IT departments. It is a good asset both for cybersecurity and for reducing the organisation's footprint: hosting in France goes hand in hand with a much less carbon-intensive energy mix."
4/4 - Raising cost issues
"The temptation in cybersecurity is to put a safe around each service. In this case, cybersecurity and frugality cannot coexist. It may be appealing for security reasons to equip each service with its own infrastructure. By siloing in this way, we limit the risk of an attack spreading to another service. In practice, however, this results in an oversized server for each service."
"This is a viable approach in a world of unlimited funds and resources. Today, economic considerations prevail and resources are under pressure, so we have an opportunity to show businesses the benefits of doing things differently."
Five best practices from RIA31 that promote cybersecurity
STRATEGY - STR-1_03-1: Has the need been validated (useful, meeting a real need aligned with one of the UN Sustainable Development Goals, vs. futile, secondary, with no added value, or achievable without AI)?
STRATEGY - STR-1_06: Has a red team (intrusion testers) been set up to study emerging behaviours and unknown effects of the models?
STRATEGY - STR-2_01: Is data governance in place to address risks related to data security and confidentiality, and ethical use of data processed by AI, throughout the entire life cycle, including end of life?
TECHNOLOGICAL FOUNDATION - BCK-1_01: Are developers provided with tools and methods for assessing the security and robustness of AI applications against threats? For example, ART (Adversarial Robustness Toolbox).
TECHNOLOGY PLATFORM - BCK-5_01: Are the mapping and minimisation of useful and necessary data throughout the entire life cycle implemented to limit their transfer and optimise their storage and retention (data mesh, etc.)?
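As an illustration of the kind of robustness check criterion BCK-1_01 points to, here is a minimal sketch of an FGSM-style adversarial test on a toy logistic classifier. It does not use ART itself; the model, weights and inputs are purely illustrative, and real assessments would run a toolbox like ART against the production model.

```python
import math

# Toy logistic classifier: p(y=1|x) = sigmoid(w·x + b).
# Weights are illustrative, not taken from any real model.
W = [2.0, -1.5]
B = 0.3

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm_perturb(x, y_true, eps=0.25):
    """One FGSM step: nudge each feature by eps in the direction
    that increases the loss (sign of the input gradient)."""
    p = predict(x)
    # For binary cross-entropy, d(loss)/d(x_i) = (p - y) * w_i.
    grad = [(p - y_true) * wi for wi in W]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

x = [0.4, -0.2]                     # clean input, confidently positive
x_adv = fgsm_perturb(x, y_true=1.0)  # small crafted perturbation

# A robust model should not lose much confidence under such a small change.
print(round(predict(x), 3), round(predict(x_adv), 3))
```

The gap between the two printed probabilities is a crude robustness signal: the larger the drop for a fixed perturbation budget `eps`, the more fragile the model.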
RIA31 and beyond
Anaëlle Monti, R&D Engineer in Sustainable IT at Aguaro, explains the approach adopted internally with regard to standards:
"RIA31 is a very strong foundation. We refer our customers to this standard. In terms of cybersecurity, we also rely on specialised standards."
Anaëlle Monti cites the ISO 27001 standard and the ANSSI reference framework, "which presents several attack scenarios and recommendations for the different phases of the AI life cycle". She concludes: "We have also tried to go beyond French standards. For example, there is the Voluntary AI Safety Standard, a very relevant Australian standard."
Many thanks to INR and Rémy Marrone for this invitation and this opportunity to share!