[Rear-users] Symlinks and maintainability

Schlomo Schapiro schlomo at schapiro.org
Wed Sep 29 16:50:20 CEST 2010


Hi,

Thanks for this suggestion; I'll be able to take a closer look next week,
as I am away for a few days now...

Kind Regards,
Schlomo

Am 29.09.2010 08:07, schrieb Jeroen Hoekx:
> Hello,
> 
> On 28 September 2010 15:33, Schlomo Schapiro <schlomo at schapiro.org> wrote:
>> Hi,
>>
>> some other thoughts...
>>
>> Am 28.09.2010 14:15, schrieb Jeroen Hoekx:
>>> Hello,
>>>
>>> In our project, we are currently using:
>>> OUTPUT=OBDR
>>> BACKUP=NETFS
>>> NETFS_URL=tape:///dev/nst0
>>>
>>> To implement this, we had to add symlinks to all files of the
>>> OUTPUT=ISO target in the $stage/OBDR/ directory structure. In our
>>> opinion this will quickly become unmaintainable. When the ISO creation
>>
>> Yes, this is a real problem, but ReaR apparently has to find a sensible
>> balance between over-engineered flexibility and modularity vs. ugly
>> hacks around artificial limitations.
>>
>>> code gains new functionality, care must be taken that it is added to
>>> OBDR mode and that there aren't any conflicts. We have also had many
>>> problems when forgetting to add symlinks in other parts of the code
>>> and upstream ReaR was also not free of symlink trouble :-) Another
>>> downside is that symlinks don't get along with subversion very well.
>>
>> Yes, OTOH I prefer symlinks over .sh files that contain 1 line of source
>> ../../some/other/file.sh
> 
> Agreed, symlinks make things explicit.
> 
>>>
>>> We would like to discuss what you think about these issues and how we
>>> can go forward. In its simplest form, it comes down to this: we want
>>> to include both the ISO and OBDR code when running rear without having
>>> to maintain symlinks from OBDR/ to ISO/*.
>>>
>>> One solution we have thought of would lead to a configuration of:
>>> OUTPUT=ISO
>>> BACKUP=NETFS
>>> NETFS_URL=obdr:///dev/nst0
>>>
>>> Extra code would be imported by observing the URI scheme of NETFS_URL.
>>
>> In my world OBDR is much more an aspect of other (existing)
>> output/backup methods than a target of its own. I believe that seeing
>> it this way would allow more shared code than having OUTPUT=OBDR.
> 
> Note that with what we're looking for here in this thread, we have
> OUTPUT=ISO and only load the OBDR specific code when the NETFS_URL
> scheme is obdr.
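That scheme check could look roughly like this. This is only a sketch: the variable handling in real ReaR differs, and the sourcing step is just indicated in a comment.

```shell
# Hypothetical sketch: take the scheme off the front of NETFS_URL and only
# include the OBDR-specific scripts when it is "obdr".
NETFS_URL="obdr:///dev/nst0"
scheme="${NETFS_URL%%:*}"   # everything before the first ":" -> "obdr"

if [ "$scheme" = "obdr" ] ; then
    # real code would source the extra OBDR scripts here
    echo "would source the extra OBDR scripts here"
fi
```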
> 
> As an aside, what we ultimately want to avoid is something like this:
> 
> backup/NETFS/default/10_mount_NETFS_path.sh:
> # don't mount anything for tape backups
> if test "$NETFS_PROTO" = "tape" ; then
>     return 0
> fi
> 
> Some branching is unavoidable for abstracting the differences between
> OSes, but I didn't like coding these particular lines. They could have
> been avoided if creating the backup itself were separated from the NFS
> and CIFS code.
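One way to make such a branch unnecessary (a sketch under an invented layout, not actual ReaR code) is a directory per protocol: a protocol that needs no mount step simply ships no mount script, so nothing has to be skipped.

```shell
# Demo of directory-per-protocol dispatch; all paths here are invented.
dir=$(mktemp -d)
mkdir -p "$dir/nfs" "$dir/tape"
echo 'echo "mounting NFS share"' > "$dir/nfs/10_mount.sh"
# tape/ deliberately ships no mount script

for proto in nfs tape ; do
    s="$dir/$proto/10_mount.sh"
    [ -f "$s" ] && . "$s"    # absent script means nothing runs; no branch
done
rm -rf "$dir"
```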
> 
> When we include all OBDR code in the same folder as the ISO creation
> code, we actually make things much more complicated. ISO and OBDR would
> effectively become one monolithic piece of code with multiple branches
> (look at mkcdrec), but spread across almost arbitrary files. I don't
> like too much branching in the code. ReaR's explicit folder-driven
> modularity has allowed us to code the OBDR support rather quickly (with
> most of the time spent fighting our tape drive), without fear of
> breaking things. If we had to make more edits in the ISO and NETFS
> creation code, there would certainly have been unforeseen side effects.
> 
>>>
>>> Currently, the directories where scripts are loaded from are
>>> hardcoded. This should be made dynamic.
>>
>> Could you please be more specific, maybe with a few examples what you
>> mean or what you would like to change? So far it always has been enough
>> to plug a new script somewhere (and yes, sometimes one has to think
>> hard about where to plug the script) but the general structure was
>> enough. Did you see that we actually also look for $BACKUP/ and
>> $OUTPUT/ and $OUTPUT/$BACKUP scripts?
>>
>> {default,"$ARCH","$OS","$OS_VENDOR","$OS_VENDOR_ARCH","$OS_VENDOR_VERSION"}/*.sh \
>>  "$BACKUP"/{default,"$ARCH","$OS","$OS_VENDOR","$OS_VENDOR_ARCH","$OS_VENDOR_VERSION"}/*.sh \
>>  "$OUTPUT"/{default,"$ARCH","$OS","$OS_VENDOR","$OS_VENDOR_ARCH","$OS_VENDOR_VERSION"}/*.sh \
>>  "$OUTPUT"/"$BACKUP"/{default,"$ARCH","$OS","$OS_VENDOR","$OS_VENDOR_ARCH","$OS_VENDOR_VERSION"}/*.sh
>>
>> I think it would be no problem to also add $BACKUP/$OUTPUT if it would
>> help you...
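A stripped-down demo of that lookup, using only the default/ variant and an invented stage directory (the real list above also mixes in the $ARCH/$OS/vendor dirs):

```shell
# Build a throwaway stage tree and source scripts in the same order as
# the real lookup list; directory names and script contents are invented.
stage=$(mktemp -d)
BACKUP=NETFS OUTPUT=ISO
mkdir -p "$stage/default" "$stage/$OUTPUT/$BACKUP/default"
echo 'echo "common step"'    > "$stage/default/10_common.sh"
echo 'echo "ISO+NETFS step"' > "$stage/$OUTPUT/$BACKUP/default/20_combo.sh"

for s in "$stage"/default/*.sh \
         "$stage/$BACKUP"/default/*.sh \
         "$stage/$OUTPUT"/default/*.sh \
         "$stage/$OUTPUT/$BACKUP"/default/*.sh ; do
    [ -f "$s" ] && . "$s"
done
rm -rf "$stage"
```

Missing combinations simply match nothing, which is why adding another pattern like $BACKUP/$OUTPUT would cost nothing for setups that don't use it.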
> 
> I know that code. We have modified it a bit to support loading from
> arbitrary directories, but following the same principle as earlier,
> with a "target" and "environment".
> 
>> Making this dynamic by what aspect? Do you think about an ordered list
>> of include stages that will be modified by the included scripts, like a
>> self-modifying program execution plan? While I really appreciate such
>> cool concepts, I am afraid of a debugging hell with very little
>> reproducibility...
> 
> That's why we want to do this outside of workflows. A simulated run
> should always show the files included in the real run.
> 
>> Please write your ideas in more details before my fantasy goes wild :-)
> 
> Ideas attached to this message :-) It's more or less inspired by
> mkinitcpio. We don't have it in our main branch yet, since we wanted
> to know what you thought about it.
> 
> Greetings,
> 
> Jeroen
> 
>>
>>>
>>> Another issue is that the code to decide which directories to scan
>>> needs to run after the configuration is known, but before the first
>>> stage is sourced. There are already ad-hoc provisions in the ReaR code
>>> to run code before and after the recovery phase. Maybe it's good to
>>> generalize this idea to other moments, so code could run at
>>> "post_config" (which we need), "pre_recovery" or "post_recovery"
>>> (which is implemented already).
>>
>> I would prefer to add more stages and keep the include system simple
>> and straightforward. Basically we also intend the numbers to help put
>> things within a stage into a defined order, and you could consider the
>> following:
>>
>> 01-10 -> pre_*
>> 11-89 -> main part
>> 90-99 -> post_*
>>
>> Thus you get your pre and post without adding anything new to the ReaR
>> structure.
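Since the scripts of a stage are sourced in lexical order of their numeric prefixes, that convention alone gives pre/main/post semantics. The file names below are invented for illustration:

```shell
# Sourcing order is just a lexical sort of the prefixes:
printf '%s\n' 95_eject_tape.sh 05_check_tape_drive.sh 40_create_iso.sh | sort
# prints:
# 05_check_tape_drive.sh
# 40_create_iso.sh
# 95_eject_tape.sh
```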
>>
>>>
>>> I think such an "event" or "hook" mechanism fits in well with ReaR's
>>> modular approach. It avoids maintenance of 20-something symlinks for
>>> only a few additional lines of code. Your views?
>>
>> Fight symlinks but keep things really simple (OK, IMHO the ReaR
>> modularisation is simple, I have heard other opinions...)
>>
>> Thanks for posting these questions to the list,
>> Schlomo
>>
>> ------------------------------------------------------------------------------
>> Start uncovering the many advantages of virtual appliances
>> and start using them to simplify application deployment and
>> accelerate your shift to cloud computing.
>> http://p.sf.net/sfu/novell-sfdev2dev
>> _______________________________________________
>> Rear-users mailing list
>> Rear-users at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/rear-users



