Is it possible to extend the Collabora user interface for custom object handling?

Hi there!

My name is Artem, and I’m working on an open-source project in the Cultural Heritage domain: GitHub - researchspace/researchspace: ResearchSpace Platform.

Our current text editor implementation is a fairly simple one based on slate.js, and we are looking for a proper text editor.

Our editor provides a way to embed interactive components in addition to plain text. For example, if an object is an image that we know is accessible via the IIIF API, we can show an IIIF Viewer so users can zoom in; see Late Hokusai.

I wonder if there is a way to extend the Collabora web UI with logic for handling custom objects in an odt document. The idea is to use draw:object with an image representation and have a custom web component that can make it interactive when the document is opened in a web browser through Collabora.

It would be great to know if someone has tried to implement something similar. Any pointers to relevant Collabora source code are appreciated!

Thanks,

Artem

This is a really good question. Of course - what we would really love to see is the ability to import, view, explore & print IIIF files in LibreOffice as well as COOL (rather than a web-only solution).
Would you be interested in working on a dialog to handle that around our insert->image functionality? If so, I think the team at LibreOffice would love that.
I would suggest asking on the LibreOffice development list for some help on how to get started there. Check out: Developers | LibreOffice

IIIF was only an example; the idea is to be able to handle custom objects with an interactive web UI component (and we already have a set of components for embedding into narratives - Semantic tools in ResearchSpace).

The idea is to use ResearchSpace to produce such an interactive narrative with Collabora/LibreOffice and represent it as an odt with custom objects that have an image representation. So when it is opened in a desktop editor or exported to PDF it still makes sense, but when it is opened through ResearchSpace it can become interactive.

It is an interesting idea. It may be possible to have an OLE-like component in the document that we know is a magic Web tool of yours - and to which we can send data encapsulated in the document, and from which we can fetch it again.

When you edit an OLE object like a chart - it greys out the rest of the document and allows you to edit just that; and it might be possible to add a mode for this.

Against that - it’s not totally clear how to add a collaborative editing layer on top of that: we rely on a single consistent server-side model for simplicity, and of course on the APIs to interact with other objects. Beyond that, the component would need to save a WMF/EMF preview of itself to store in the document for those who can’t activate the component (think offline / PC users) - and perhaps in the fullness of time some magic for each platform to allow it to be edited inside a captive / embedded browser (or something).

It’s not impossible - but it’s quite a big project. If you have the resources to take that on, we can perhaps invest some time in a design & code pointers, but it is going to be many person-months of skilled developer time, I think.

It is an interesting idea. It may be possible to have an OLE-like component in the document that we know is a magic Web tool of yours - and to which we can send data encapsulated in the document, and from which we can fetch it again.

I believe that actually what I need is simpler than that. But maybe I’m wrong.

So everything that we are embedding into narratives has a URI. Currently we represent a narrative with HTML, so an embedded image resource looks like this:

<embed src="https://latehokusai.researchspace.org/resource/EX_Digital_Image/d57cd4ad-c6fa-43c1-b99f-3a8d62d0c941" type="researchspace/resource" template="image"/>

or this

<embed src="https://latehokusai.researchspace.org/container/ontodiaDiagramContainer/Taira_no_Kanemori_Knowledge_Map_Exploration" type="researchspace/resource" template="knowledge-map"/>

For the actual text editor this block is a black box: it can be resized, moved, and deleted, but there is no communication between the document and the content of the embedded block. When it comes to collaborative editing it can be treated more or less like an image - one can move it around or delete it, and that is it.

For offline use it is just an image with a caption. In our use case, in addition to that, we can also have a link to the interactive web version.

I’m looking into the ODT spec, and it seems the correct representation of this concept is either draw:object-ole or draw:plugin. So when a user inserts a resource into a document, we create a draw:frame with a draw:plugin/draw:object-ole and a draw:image as the image representation of that block. The main trick is to find out how to overlay custom HTML over the canvas for the given frame.
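To make that concrete, this is roughly the frame structure I have in mind (just a sketch based on my reading of the ODF spec; the picture path is a placeholder, and the exact choice between draw:plugin and draw:object-ole, as well as the ordering of the renditions, is still open):

<draw:frame draw:name="rs-embed-1" text:anchor-type="paragraph" svg:width="12cm" svg:height="8cm">
  <!-- interactive rendition: the resource URI that ResearchSpace resolves to a web component -->
  <draw:plugin xlink:type="simple" draw:mime-type="researchspace/resource"
               xlink:href="https://latehokusai.researchspace.org/resource/EX_Digital_Image/d57cd4ad-c6fa-43c1-b99f-3a8d62d0c941"/>
  <!-- static rendition: the image fallback used by desktop editors and PDF export -->
  <draw:image xlink:type="simple" xlink:href="Pictures/rs-embed-1.png"/>
</draw:frame>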

It is an interesting idea; as you say we could use an OLE-object draw frame. We would then need to send co-ordinates of that to the client as/when it intersects with the visible area, or (perhaps better?) give co-ordinates of all such frames in the document at open time and/or when one moves on re-layout.

Then we could draw the overlays on top, as you say, inside the view. Might be a useful feature for e.g. YouTube video embedding, which is an odd (but perhaps useful) use-case.

If you can do the work in the JS client, and come up with a simple API for that - I can find someone to implement it on the backend if you’d find that useful =)
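For what it is worth, here is a rough TypeScript sketch of the shape such a client-side API could take, assuming a hypothetical message from the server that lists the geometry of each embedded frame in document coordinates (twips). None of these names exist in Collabora Online today; the EmbeddedFrame payload, the update() call and the twips-to-pixel conversion are all assumptions for illustration.

// Hypothetical payload describing one embedded object frame, as the server
// could report it on open, scroll or re-layout (not an existing COOL message).
interface EmbeddedFrame {
  id: string;       // draw:name of the frame
  uri: string;      // resource URI stored in the draw:plugin / draw:object-ole
  template: string; // e.g. "image" or "knowledge-map"
  x: number;        // position and size in twips, document coordinates
  y: number;
  width: number;
  height: number;
}

// Assumed conversion at 96 dpi and 100% zoom; real code would track the view's zoom.
const TWIPS_PER_PIXEL = 15;

class EmbedOverlayLayer {
  private overlays = new Map<string, HTMLElement>();

  // container is expected to be an absolutely positioned layer over the document canvas.
  constructor(private container: HTMLElement) {}

  // Called whenever frame geometry arrives from the server.
  update(frames: EmbeddedFrame[], viewOrigin: { x: number; y: number }, zoom: number): void {
    const scale = zoom / TWIPS_PER_PIXEL; // twips -> CSS pixels at the current zoom
    for (const frame of frames) {
      let el = this.overlays.get(frame.id);
      if (!el) {
        el = this.createOverlay(frame);
        this.overlays.set(frame.id, el);
        this.container.appendChild(el);
      }
      el.style.left = `${(frame.x - viewOrigin.x) * scale}px`;
      el.style.top = `${(frame.y - viewOrigin.y) * scale}px`;
      el.style.width = `${frame.width * scale}px`;
      el.style.height = `${frame.height * scale}px`;
    }
  }

  private createOverlay(frame: EmbeddedFrame): HTMLElement {
    // ResearchSpace would instantiate the matching interactive component here;
    // an iframe pointed at the resource URI is the simplest stand-in.
    const el = document.createElement('iframe');
    el.src = frame.uri;
    el.style.position = 'absolute';
    el.style.border = 'none';
    return el;
  }
}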