Our requirement is to persist a large number of objects (~1,000,000 records) into the Ivy repo within 5 hours.
Does anyone know the optimal way to do that?
Currently, I loop over the 1,000,000 records and call `Ivy.repo().save(obj)` for each one.
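One generic way to make a record-at-a-time loop cheaper is to process the records in fixed-size batches, so that memory use and transaction size stay bounded. The sketch below is not Ivy-specific: the partitioning helper is plain Java, the batch size of 500 is an arbitrary assumption, and the place where `Ivy.repo().save(obj)` would be called is only marked in a comment (that call needs a running Ivy engine).

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSaver {
    // Split a large list into fixed-size batches so each save pass
    // only touches a bounded number of objects at a time.
    static <T> List<List<T>> partition(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            batches.add(records.subList(i, Math.min(i + batchSize, records.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            records.add(i);
        }
        int saved = 0;
        for (List<Integer> batch : partition(records, 500)) {
            // Inside Ivy, each object of the batch would be saved here,
            // e.g. Ivy.repo().save(obj) -- placement is an assumption,
            // the Ivy call is not runnable in this standalone sketch.
            saved += batch.size();
        }
        System.out.println("saved " + saved + " records in "
            + partition(records, 500).size() + " batches");
    }
}
```

Batching also gives you natural checkpoints: if the import fails after N batches, you can resume from batch N+1 instead of restarting all 1,000,000 records.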
May I ask some questions?
I can see some ways to do this, but you would need a static port for Elasticsearch, so that you can send bulk-insert requests directly to Ivy's Elasticsearch instance.
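For reference, Elasticsearch's standard `_bulk` API expects an NDJSON body: one action line followed by one document line, each terminated by a newline. The sketch below only builds that body; the index name `ivy.businessdata` and the target URL in the comment are assumptions for illustration, not Ivy's actual index name.

```java
import java.util.List;

public class BulkPayload {
    // Build the NDJSON body for Elasticsearch's _bulk API:
    // one {"index": ...} action line, then the document's JSON source line.
    static String build(String index, List<String> docsAsJson) {
        StringBuilder body = new StringBuilder();
        for (String doc : docsAsJson) {
            body.append("{\"index\":{\"_index\":\"").append(index).append("\"}}\n");
            body.append(doc).append("\n"); // every line must end with \n
        }
        return body.toString();
    }

    public static void main(String[] args) {
        String body = build("ivy.businessdata", List.of(
            "{\"name\":\"record-1\"}",
            "{\"name\":\"record-2\"}"));
        System.out.print(body);
        // This body would then be POSTed to http://<host>:<port>/_bulk
        // with header Content-Type: application/x-ndjson.
    }
}
```

Note that writing to Ivy's index behind the repo API's back may bypass Ivy's own bookkeeping, so this is only worth exploring if the normal `save` path really cannot meet the 5-hour budget.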
We have RAM problems with 20-50 documents in the repo. What do you want to save in the repo? Rich text? Or also documents?
I want to save rich text only, no documents.
I think 50 documents in my case amount to roughly 2,000,000 characters. That means you will have the same problems with RAM. I think at the moment we have a small performance problem in Designer 7.0.x.
And maybe you should save them asynchronously?
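The asynchronous idea can be sketched with a plain worker pool: submit each batch as a task so several batches are persisted in parallel instead of one after another. The sketch below is generic Java, not Ivy code; `Ivy.repo().save(obj)` needs Ivy's session context, so a real implementation would have to use Ivy's own background-task facilities, and the counter here merely stands in for the save call.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncSaver {
    // Persist `batches` batches of `batchSize` records each on a small
    // worker pool, and return the total number of records "saved".
    static int runAsync(int batches, int batchSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger saved = new AtomicInteger();
        for (int b = 0; b < batches; b++) {
            pool.submit(() -> {
                // A real task would save each object of its batch here,
                // e.g. via Ivy.repo().save(obj) inside an Ivy context
                // (assumption; not runnable in this standalone sketch).
                saved.addAndGet(batchSize);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return saved.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("saved " + runAsync(20, 500) + " records asynchronously");
    }
}
```

Parallelism helps throughput, but it does not reduce peak memory use, so it only addresses the 5-hour budget, not the RAM concern raised above.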
Have you tried an external Elasticsearch yet? I think it's better to have only one Elasticsearch for multiple engines.
At the moment, if we have 7 engines, then we have 7 Elasticsearch processes running.