[Rear-users] Symlinks and maintainability

Schlomo Schapiro schlomo at schapiro.org
Tue Sep 28 15:33:35 CEST 2010


some other thoughts...

On 28.09.2010 14:15, Jeroen Hoekx wrote:
> Hello,
> In our project, we are currently using:
> NETFS_URL=tape:///dev/nst0
> To implement this, we had to add symlinks to all files of the
> OUTPUT=ISO target in the $stage/OBDR/ directory structure. In our
> opinion this will quickly become unmaintainable. When the ISO creation

Yes, this is a real problem, but ReaR apparently has to find a sensible
balance between over-engineered flexibility and modularity vs. ugly
hacks around artificial limitations.

> code gains new functionality, care must be taken that it is added to
> OBDR mode and that there aren't any conflicts. We have also had many
> problems when forgetting to add symlinks in other parts of the code
> and upstream ReaR was also not symlink trouble free :-) Another
> downside is that they don't get along with subversion very well.

Yes. OTOH I prefer symlinks over .sh files that contain a single line of source.

> We would like to discuss what you think about these issues and how we
> can go forward. In its simplest form, it comes down to this: we want
> to include both the ISO and OBDR code when running rear without having
> to maintain symlinks from OBDR/ to ISO/*.
> One solution we have thought of would lead to a configuration of:
> NETFS_URL=obdr:///dev/nst0
> Extra code would be imported by observing the URI scheme of NETFS_URL.

In my world OBDR is much more an aspect of other (existing)
output/backup methods than a target of its own. I believe that seeing it
this way would allow more code sharing than having OUTPUT=OBDR.
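Just to make sure we talk about the same thing: the scheme dispatch you propose could boil down to something like the following sketch. The variable contents and the dispatch targets are made up for illustration; this is not actual ReaR code.

```shell
# Hypothetical sketch of the proposed dispatch: derive the scheme from
# NETFS_URL with plain parameter expansion, then decide which extra
# script directories to include. Directory names are illustrative only.
NETFS_URL="obdr:///dev/nst0"

scheme="${NETFS_URL%%:*}"        # everything before the first ':'
path="${NETFS_URL#*://}"         # everything after '://'

case "$scheme" in
    obdr)  extra_dirs="ISO OBDR" ;;  # OBDR would pull in the ISO code too
    tape)  extra_dirs="OBDR" ;;
    *)     extra_dirs="" ;;
esac

echo "scheme=$scheme path=$path extra=$extra_dirs"
```

That keeps the user-facing configuration to a single URL while the include logic stays in one place.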

> Currently, the directories where scripts are loaded from are
> hardcoded. This should be made dynamic.

Could you please be more specific, maybe with a few examples of what you
mean or what you would like to change? So far it has always been enough
to plug a new script in somewhere (and yes, sometimes one has to think a
lot about where to plug it), and the general structure sufficed. Did you
see that we actually also look for $BACKUP/ and $OUTPUT/ and
$OUTPUT/$BACKUP scripts?


I think it would be no problem to also add $BACKUP/$OUTPUT if it would
help you...
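To illustrate what I mean, here is a rough sketch (NOT ReaR's actual code) of how a stage could pull in scripts from such method-specific subdirectories; SHARE_DIR, BACKUP, OUTPUT and the file names stand in for the real things:

```shell
# Rough sketch of directory-based script inclusion. A stage sources
# scripts from default/ plus the $BACKUP, $OUTPUT and $OUTPUT/$BACKUP
# subdirectories, ordered by their numeric file-name prefix.
SHARE_DIR=$(mktemp -d)
BACKUP=NETFS
OUTPUT=ISO
stage=prep

mkdir -p "$SHARE_DIR/$stage"/{default,$BACKUP,$OUTPUT,$OUTPUT/$BACKUP}
echo 'sourced="$sourced default"' > "$SHARE_DIR/$stage/default/10_example.sh"
echo 'sourced="$sourced ISO"'     > "$SHARE_DIR/$stage/$OUTPUT/20_example.sh"

sourced=""
# Decorate each path with its basename so scripts run in numeric-prefix
# order regardless of which subdirectory they live in.
scripts=$(for f in "$SHARE_DIR/$stage"/{default,$BACKUP,$OUTPUT,$OUTPUT/$BACKUP}/*.sh; do
              [ -f "$f" ] && echo "${f##*/} $f"
          done | sort | cut -d' ' -f2)
for script in $scripts; do
    . "$script"
done
echo "sourced scripts:$sourced"
```

Adding one more pattern like $BACKUP/$OUTPUT to that list would be a one-line change.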

Making this dynamic by what aspect? Do you think about an ordered list
of include stages that will be modified by the included scripts, like a
self-modifying program execution plan? While I really appreciate such
cool concepts, I am afraid of a debugging hell with very little benefit.

Please write your ideas in more detail before my fantasy goes wild :-)

> Another issue is that the code to decide which directories to scan
> needs to run after the configuration is known, but before the first
> stage is sourced. There are already ad-hoc provisions in the ReaR code
> to run code before and after the recovery phase. Maybe it's good to
> generalize this idea to other moments, so code could run at
> "post_config" (which we need), "pre_recovery" or "post_recovery"
> (which is implemented already).

I would prefer to add more stages and keep the include system simple and
straightforward. Basically, the numbers are also intended to help put
things within a stage into a defined order, and you could consider the
following convention:

01-10 -> pre_*
11-89 -> main part
90-99 -> post_*

Thus you get your pre and post hooks without adding anything new to the
ReaR framework.
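A tiny demonstration of what I mean, with made-up script names: sorting by file name alone already yields the hook-like ordering, so no extra hook mechanism is needed.

```shell
# Illustration of the prefix convention: 01-10 act as "pre", 11-89 as
# the main part, 90-99 as "post". The glob expands in lexical order,
# which for zero-padded prefixes is also numeric order.
dir=$(mktemp -d)
echo 'order="$order pre"'  > "$dir/05_prepare.sh"   # 01-10: pre_*
echo 'order="$order main"' > "$dir/50_work.sh"      # 11-89: main part
echo 'order="$order post"' > "$dir/95_cleanup.sh"   # 90-99: post_*

order=""
for script in "$dir"/*.sh; do   # lexical order: 05_, 50_, 95_
    . "$script"
done
echo "execution order:$order"
```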

> I think such an "event" or "hook" mechanism fits in well with ReaR's
> modular approach. It avoids maintenance of 20-something symlinks for
> only a few additional lines of code. Your views?

So: fight the symlinks, but keep things really simple (OK, IMHO the ReaR
modularisation is simple; I have heard other opinions...).

Thanks for posting these questions to the list,
