
Semi-Automatic Visual Assistant

  • The ABILITY Team
  • Oct 31
  • 1 min read


Great news!

We’re happy to share that our journal paper is now published in IEEE Access and available online.

In this work, we explore a semi-automatic, human-centred AI approach to conversational image descriptions that truly fit their target users: Blind and Visually Impaired (BVI) people.


Here is the idea in a nutshell:

LLMs generate initial image-conversation drafts → BVI experts refine what matters most → the AI learns from those refinements to produce better, more relevant conversations → BVI end-users validate the gains in usefulness and satisfaction.


It is a practical recipe for AI that adapts to people, not the other way around: scaling inclusive, BVI-centred training data and improving real accessibility outcomes.


One more promising step toward the vision of "Symbiotic Intelligence" — where humans and AI learn from each other and co-adapt for the benefit of all.



Human Centred, Multisensory Device Creation

Contact us

If you want to contact the ABILITY Coordination Team, please send an email to:

sabrina.paneels@cea.fr

In addition, we encourage you to meet the ABILITY Management Board so you can direct your question accordingly.


This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement Nº 101070396.
