In the distributed Core Data chapter, there is a comment just before the -createObject Implementation heading (page 213, though I may have a slightly dated PDF) which says:
“If our requirements involve data of this size, then we should consider other options. One that has met great success is to keep a local copy of the entire repository on each machine and when they sync to merely pass deltas back and forth instead of a true client-server environment.”
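To make my question more concrete, here is roughly what I picture "passing deltas" to mean in practice. This is pure guesswork on my part; the handler name and the dictionary keys are my own invention, not anything from the book:

```objc
// Registered elsewhere for NSManagedObjectContextWillSaveNotification;
// the dictionary format here is just my guess at what a "delta" holds.
- (void)contextWillSave:(NSNotification *)notification
{
    NSManagedObjectContext *moc = [notification object];
    NSMutableArray *delta = [NSMutableArray array];
    for (NSManagedObject *object in [moc updatedObjects]) {
        [delta addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            [[[object objectID] URIRepresentation] absoluteString], @"objectURI",
            [[object entity] name], @"entity",
            [object changedValues], @"changes", // only the attributes that moved
            nil]];
    }
    // ...presumably the same again for -insertedObjects and -deletedObjects,
    // and then the array (plist-friendly, if the attributes are) gets shipped
    // to the other machines somehow. Over DO? Over HTTP? That is my question.
}
```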
I am wondering if you could give a little bit more information on this methodology, for example:
- how the initial data is passed back and forth: serialized somehow, or still using DO (Distributed Objects)?
- how it is parsed (e.g. how the creation and update process integrates with Core Data)? My guess at this is sketched below.
- how each client is notified when an update has taken place somewhere else?
- is the server an HTTP server, or is there a “master” Mac which vends Core Data objects?
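Here is the receiving-side sketch I mentioned above; again, applyDelta:inContext: and the dictionary keys are names I made up, not anything from the book:

```objc
// Turn each URI back into an NSManagedObjectID and apply the changed values.
- (void)applyDelta:(NSArray *)delta inContext:(NSManagedObjectContext *)moc
{
    NSPersistentStoreCoordinator *psc = [moc persistentStoreCoordinator];
    for (NSDictionary *change in delta) {
        NSURL *uri = [NSURL URLWithString:[change objectForKey:@"objectURI"]];
        NSManagedObjectID *objectID = [psc managedObjectIDForURIRepresentation:uri];
        if (objectID == nil) {
            continue; // not in the local store yet, so presumably an insert to create
        }
        NSManagedObject *object = [moc objectWithID:objectID];
        [object setValuesForKeysWithDictionary:[change objectForKey:@"changes"]];
    }
    // ...and then save, though I do not see how to stop that save from being
    // rebroadcast as a fresh delta of its own.
}
```

If that is even roughly the right shape, the rebroadcast problem is exactly the kind of gotcha I am hoping you can comment on.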
A high-level view of the technologies and processes (and gotchas!) associated with Core Data is what I am interested in, rather than lower-level details, so some hints on how to explore this methodology myself would be great.
Thanks very much! John