
19 Mar 2010, 08:57
John Shea (7 posts)

Hi Marcus,

In the Distributed Core Data chapter, there is a comment just before the -createObject implementation heading (page 213, though I may have a slightly dated PDF) which says:

“If our requirements involve data of this size, then we should consider other options. One that has met great success is to keep a local copy of the entire repository on each machine and when they sync to merely pass deltas back and forth instead of a true client-server environment.”

I am wondering if you could give a little bit more information on this methodology, for example:

  • how is the initial data passed back and forth: serialized somehow, or still using DO?
  • how is it parsed (e.g., how do the creation and update processes integrate with Core Data)?
  • how is each client notified when an update has taken place somewhere else?
  • is the server an HTTP server, or is there a “master” Mac which vends Core Data objects?

A high-level view of the technologies and processes (and gotchas!) associated with Core Data is what I am interested in, rather than the lower-level details, so some hints on how to explore this methodology myself would be great.

Thanks very much! John

19 Mar 2010, 20:16
Marcus S. Zarra (284 posts)

When you are dealing with deltas, I would play with the idea of pushing the SQLite file as a single file transfer between the machines for the initial load. From there I would translate the data to an intermediate format like JSON so that you can relink things on the other side.
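To make the JSON half of that a bit more concrete, here is a rough sketch of building a delta payload. This is just an illustration, not code from the book: it assumes each entity carries a uuid string attribute that is stable across machines (Core Data object IDs are store-specific, so they cannot be used to relink objects on another machine), and it uses NSJSONSerialization for brevity; on systems where that is not available you would substitute a JSON library such as SBJSON.

    #import <CoreData/CoreData.h>

    @interface DeltaBuilder : NSObject
    // Call just before -[NSManagedObjectContext save:] so that objects
    // marked for deletion can still be asked for their uuid.
    - (NSData *)deltaForContext:(NSManagedObjectContext *)context
                          error:(NSError **)error;
    @end

    @implementation DeltaBuilder

    // One managed object -> a JSON-safe dictionary. Attributes go in by
    // name; relationships are flattened to the uuid (or array of uuids)
    // of the destination objects so the receiving side can relink them.
    - (NSDictionary *)dictionaryForObject:(NSManagedObject *)object
    {
        NSEntityDescription *entity = [object entity];
        NSMutableDictionary *dict = [NSMutableDictionary dictionary];
        [dict setObject:[entity name] forKey:@"entity"];

        for (NSString *key in [[entity attributesByName] allKeys]) {
            id value = [object valueForKey:key];
            if (!value) continue;
            if ([value isKindOfClass:[NSString class]] ||
                [value isKindOfClass:[NSNumber class]]) {
                [dict setObject:value forKey:key];
            } else {
                // Dates, binary data, etc. are stringified here purely
                // for brevity; a real sync format would encode them properly.
                [dict setObject:[value description] forKey:key];
            }
        }

        NSDictionary *relationships = [entity relationshipsByName];
        for (NSString *key in [relationships allKeys]) {
            if ([[relationships objectForKey:key] isToMany]) {
                NSMutableArray *uuids = [NSMutableArray array];
                for (NSManagedObject *dest in [object valueForKey:key]) {
                    [uuids addObject:[dest valueForKey:@"uuid"]];
                }
                [dict setObject:uuids forKey:key];
            } else {
                NSManagedObject *dest = [object valueForKey:key];
                if (dest) [dict setObject:[dest valueForKey:@"uuid"] forKey:key];
            }
        }
        return dict;
    }

    - (NSData *)deltaForContext:(NSManagedObjectContext *)context
                          error:(NSError **)error
    {
        // Inserts and updates travel as full dictionaries; deletes travel
        // as bare uuids.
        NSMutableArray *changed = [NSMutableArray array];
        NSMutableArray *deleted = [NSMutableArray array];

        NSMutableSet *touched = [NSMutableSet setWithSet:[context insertedObjects]];
        [touched unionSet:[context updatedObjects]];
        for (NSManagedObject *object in touched) {
            [changed addObject:[self dictionaryForObject:object]];
        }
        for (NSManagedObject *object in [context deletedObjects]) {
            [deleted addObject:[object valueForKey:@"uuid"]];
        }

        NSDictionary *payload = [NSDictionary dictionaryWithObjectsAndKeys:
                                 changed, @"changed", deleted, @"deleted", nil];
        return [NSJSONSerialization dataWithJSONObject:payload options:0 error:error];
    }

    @end

On the receiving side you would do the inverse in two passes: first fetch-or-insert each object by its uuid and set its plain attributes, then walk the payload again to resolve the stored uuids back into managed objects and rewire the relationships. How each client learns that a new delta exists is a separate question; that is where the choice between an HTTP server and a vending “master” machine comes back in.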
