It is an interesting idea. It may be possible to have an OLE-like component in the document that we know is a magic Web tool of yours - and to which we can send data encapsulated in the document, and from which we can fetch it again.
I believe that actually what I need is simpler than that. But maybe I’m wrong.
So everything that we embed into narratives has a URI. Currently we represent a narrative with HTML, so an embedded image resource looks like this:
<embed src="https://latehokusai.researchspace.org/resource/EX_Digital_Image/d57cd4ad-c6fa-43c1-b99f-3a8d62d0c941" type="researchspace/resource" template="image"/>
or like this:
<embed src="https://latehokusai.researchspace.org/container/ontodiaDiagramContainer/Taira_no_Kanemori_Knowledge_Map_Exploration" type="researchspace/resource" template="knowledge-map"/>
For the actual text editor this block is a black box: it can be resized, moved, and deleted, but there is no communication between the document and the content of the embedded block. When it comes to collaborative editing it can be treated more or less like an image — one can move it around or delete it, and that is it.
For offline use it is just an image with a caption. In our use case, in addition to that, we can also have a link to the interactive web version.
I’m looking into the ODT spec, and it seems the correct representation of this concept is either draw:object-ole or draw:plugin. So when a user inserts a resource into a document, we create a draw:frame containing a draw:plugin (or draw:object-ole) plus a draw:image as the static image representation of that block. The main trick is then to find out how to overlay custom HTML over the canvas for the given frame.
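A minimal sketch of what such a frame might look like in ODF markup — the frame name, size, and picture path are illustrative, and the mime type reuses the one from the HTML embeds above; ODF lets a draw:frame hold multiple representations in order of preference, so a consumer that can't handle the plugin falls back to the image:

```xml
<draw:frame draw:name="EX_Digital_Image" text:anchor-type="paragraph"
            svg:width="12cm" svg:height="8cm">
  <!-- Interactive representation: the resource URI, with our custom mime type -->
  <draw:plugin xlink:type="simple"
               xlink:href="https://latehokusai.researchspace.org/resource/EX_Digital_Image/d57cd4ad-c6fa-43c1-b99f-3a8d62d0c941"
               draw:mime-type="researchspace/resource"/>
  <!-- Fallback representation: a static image bundled in the ODT package -->
  <draw:image xlink:type="simple" xlink:show="embed" xlink:actuate="onLoad"
              xlink:href="Pictures/EX_Digital_Image.png"/>
  <svg:desc>Resource rendered with the "image" template</svg:desc>
</draw:frame>
```

An offline editor that ignores draw:plugin would simply show the draw:image with its caption, which matches the behavior described above.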