Dear All,
On Wed, 19 Sep 2007, Brian Smith wrote:
> On Tue, 18 Sep 2007, Rasmus Fogh wrote:
>
> > 1) Undo would indeed be a great help. That is why we have had it on our
> > todo list for so long. Unfortunately it would require a *major* change in
> > the entire API, with some tricky problems on top of that. Not for a long
> > time I am afraid, unless somebody can tell us what we should postpone to
> > make place for it.
>
> Given the interconnectedness of the data model I don't really see how this
> can ever be achieved. Basically you're asking for a complete copy of the
> data model to be saved between each user operation, which would have major
> performance issues, innit? Surely the way to go is to make sure that the
> programs all have a regular & frequent (15 min?) backup enabled _by
> default_ and that the backups rotate for a user specified (but say default
> 4) number of copies in a non-trampling and easily retrievable (and
> documented) way. Hopefully the paths stuff you're doing for 1.1 will make
> all this easier/more transparent.
I like the idea of improved back-ups. It would be easier to achieve.
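To make the rotation scheme concrete, here is a rough Python sketch of the kind of "non-trampling" rotating backup Brian describes (the function name and layout are invented for illustration, not anything we have implemented):

```python
import shutil
from pathlib import Path

def rotate_backups(project_path, backup_dir, n_copies=4):
    """Copy project_path into backup_dir, keeping the n_copies most recent.

    Copies are named backup.1 (newest) .. backup.N (oldest). Each new
    backup shifts the older ones down and drops the last, so no copy is
    ever overwritten in place and old versions stay easy to retrieve.
    """
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    # Shift existing copies down: backup.3 -> backup.4, backup.2 -> backup.3, ...
    for i in range(n_copies - 1, 0, -1):
        src = backup_dir / f"backup.{i}"
        dst = backup_dir / f"backup.{i + 1}"
        if src.exists():
            if dst.exists():
                shutil.rmtree(dst)  # the oldest copy falls off the end
            src.rename(dst)
    # The newest copy always lands in backup.1
    shutil.copytree(project_path, backup_dir / "backup.1")
```

Hooking something like this to a 15-minute timer in each program would give the regular, on-by-default backups suggested above.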
There just might be a way to handle the 'undo'. Basically, every time an
API function modifies the data, you store the information to undo it. For
a 'set' you store the object, the set function, and the old value. For a
'create' you store the object to delete. For a delete you store the
objects deleted. Then you keep all these reverse operations on a circular
buffer. There are many problems, though. This code would have to run right
through the entire API. Functions that bypassed the API (there are some) would
not be undoable, and neither would saves. You might need to undo a lot of
API operations to undo a single operation in e.g. Analysis. And you would
also have to undo the internal state of the program calling the API (again,
e.g. Analysis). This would be a lot of work for what might be a so-so
result, so we are not eager to start any time soon.
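In outline, the circular buffer of reverse operations could look something like this (a minimal sketch; the names in the comments, such as setHeight, are made up and not real CCPN API calls):

```python
from collections import deque

class UndoStack:
    """Circular buffer of reverse operations.

    Every API call that modifies the data registers a callable that
    undoes the change. When the buffer is full, the oldest entry drops
    off automatically, bounding memory use.
    """
    def __init__(self, max_ops=100):
        self._ops = deque(maxlen=max_ops)  # oldest entries fall off the end

    def push(self, undo_func, *args):
        """Record how to reverse the modification just made."""
        self._ops.append((undo_func, args))

    def undo(self):
        """Reverse the most recent modification; False if nothing to undo."""
        if not self._ops:
            return False
        func, args = self._ops.pop()
        func(*args)
        return True

# How the three cases in the text might register their inverses
# (all names below are illustrative stand-ins):
#   set:    stack.push(peak.setHeight, oldHeight)       # restore the old value
#   create: stack.push(peak.delete)                     # undo a create by deleting
#   delete: stack.push(peakList.recreatePeak, oldState) # recreate from saved state
```

This shows the bookkeeping is simple in itself; the difficulty is that every modifying function in the API would need to make a push like this, which is exactly the pervasive change described above.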
It might actually be easier to wait till we have a Python/database
implementation and make an undo with the database rollback facility -
though that would be a lot of work as well.
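For what it is worth, the rollback idea maps directly onto what SQL databases already provide. A toy illustration with Python's built-in sqlite3 (the 'peaks' table is invented for illustration):

```python
import sqlite3

# Hypothetical miniature schema, purely for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE peaks (id INTEGER PRIMARY KEY, height REAL)")
conn.execute("INSERT INTO peaks (height) VALUES (1.0)")
conn.commit()  # the committed state is the point we can undo back to

# A batch of modifications runs inside one open transaction...
conn.execute("UPDATE peaks SET height = 2.0 WHERE id = 1")
conn.execute("INSERT INTO peaks (height) VALUES (5.0)")

# ...and a single rollback undoes all of it, back to the last commit
conn.rollback()
rows = conn.execute("SELECT height FROM peaks").fetchall()
```

The catch, as noted above, is that this only undoes the stored data, not the internal state of the program sitting on top of it.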
>
> > 2) DataShifter. It might be possible to let DataShifter be called from
> > inside Analysis (I am neither the one who knows nor the one who would have
> > to do it). We take note. But it would still work the same way. The problem
> > about 'importing native format data' is that the CCPN objects are all
> > linked together. Importing e.g. a peaklist means you must decide how to
> > handle all the links that go from the peaks to something else. That is
often complicated, and it requires special-purpose handwritten code for
> > every single case that you decide to support.
>
> DataShifter & formatConverter are important for the whole CCPN effort
> (which is not just analysis) and are an area that certainly deserves some
> more attention. As well as being the means by which existing users
> transfer data between projects, they are the route in for new users - we
> need them to be slick & documented to the point where users feel
> really comfortable using them. NB this needs detailed and constructive
> feedback from the users reading this on what functionality is
> broken/missing and to help with documentation for specific cases.
>
> Brian
>
> --
> Dr. Brian O. Smith ---------------------- B Smith at bio gla ac uk
> Division of Biochemistry & Molecular Biology,
> Institute Biomedical & Life Sciences,
> Joseph Black Building, University of Glasgow, Glasgow G12 8QQ, UK.
> Tel: 0141 330 5167/6459/3089 Fax: 0141 330 8640
>