Patched BIND has an API for database back-ends. Bind-dyndb-ldap re-implements a big part of this API, but all functions required for DNSSEC support are missing and the overall functionality is limited.
BIND's native database implementation is called RBTDB (Red-Black Tree Database). RBTDB implements the whole API and supports DNSSEC, IXFR etc.
The plan is to drop most of the code from our database implementation and re-use RBTDB as much as possible.
Discussion:
We will get support for features such as DNSSEC and IXFR 'for free'.
For each LDAP DB maintained by bind-dyndb-ldap, create an internal RBTDB instance and hide it inside the LDAP DB instance. E.g.:
typedef struct {
	dns_db_t		common;
	isc_refcount_t		refs;
	ldap_instance_t		*ldap_inst;
+	dns_db_t		*rbtdb;
} ldapdb_t;
Remove our implementation of all functions in ldap_driver.c and turn most of the functions into thin wrappers around RBTDB:
static isc_result_t
allrdatasets(dns_db_t *db, dns_dbnode_t *node, dns_dbversion_t *version,
	     isc_stdtime_t now, dns_rdatasetiter_t **iteratorp)
{
	ldapdb_t *ldapdb = (ldapdb_t *) db;

	REQUIRE(VALID_LDAPDB(ldapdb));

+	return dns_db_allrdatasets(ldapdb->rbtdb, node, version, now, iteratorp);
-	[our implementation]
}
Block diagram follows. Blue parts are controlled by bind-dyndb-ldap:
The problem is how to dump data from LDAP to the internal/hidden RBTDB instance and how to maintain consistency when changes in LDAP are made. There are several problems with this.
Fortunately, the 389 DS team decided to support RFC 4533 (so-called syncrepl; see 389 DS ticket #47388). This will save us a lot of headaches caused by persistent search deficiencies.
The current plan is to use the refreshAndPersist mode from RFC 4533.
This allows us to store the syncCookie returned by the LDAP server and to resume the synchronization process after a restart, re-connection etc. As a result, we don't need to dump the content of the whole database during each BIND restart.
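Persisting the cookie can be sketched as follows. Note that `save_cookie`, `load_cookie` and the cookie file path are hypothetical names used only for illustration, not part of the actual plugin:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helpers: persist the RFC 4533 syncCookie so that a
 * refreshAndPersist session can resume after a restart/re-connection. */

/* Write the opaque cookie to a file; returns 0 on success. */
static int
save_cookie(const char *path, const char *cookie)
{
	FILE *f = fopen(path, "w");
	if (f == NULL)
		return -1;
	int rv = (fputs(cookie, f) >= 0) ? 0 : -1;
	fclose(f);
	return rv;
}

/* Read the cookie back; returns 0 on success, -1 if no cookie is stored
 * (in which case a full refresh of the database content is needed). */
static int
load_cookie(const char *path, char *buf, size_t buflen)
{
	FILE *f = fopen(path, "r");
	if (f == NULL)
		return -1;
	int rv = (fgets(buf, (int)buflen, f) != NULL) ? 0 : -1;
	fclose(f);
	return rv;
}
```

If `load_cookie()` fails at startup, the plugin has to fall back to a full refresh of the database content.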
Syncrepl puts a new requirement on the LDAP client: bind-dyndb-ldap has to be able to map an entryUUID to the associated entry in RBTDB. We can create an auxiliary RBTDB and store the entryUUID => DNS name mapping inside it. This RBTDB will be stored to and loaded from the filesystem like any other RBTDB.
| Ticket | Summary |
|---|---|
| #123 | LDAP MODRDN (rename) on records is not supported |
The SyncRepl protocol may represent a MODRDN operation as a modification of the 'DN' attribute while preserving the LDAP object's UUID. The only way to find out the old name of the renamed entry is to store the LDAP UUID along with the entry.
The entry was renamed if the received change notification contains an entryUUID and some DN, but that entryUUID is already mapped to a DNS name which doesn't match the name derived directly from the DN.
In that case, the old name will be deleted from RBTDB completely and the new entry will be filled with the data.
This feature depends on ticket:151. The following information needs to be stored inside MetaDB:

- LDAP UUID -> (DNS zone name, DNS FQDN) mapping

The condition for LDAP MODRDN detection is:
if (LDAP UUID is in MetaDB &&
    (dn_to_dnsname(LDAP DN) != DNS names in MetaDB))
{
	LDAP MODRDN detected:
	delete old DNS names
	create new DNS names
}
else
{
	ordinary LDAP ADD/MOD/DEL detected
}
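The detection condition above can be sketched in C. The `metadb_entry` structure and `metadb_get_name()` lookup are illustrative stand-ins for the real MetaDB, not the plugin's actual API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative MetaDB record: maps an LDAP entryUUID to the DNS name
 * currently associated with it. */
struct metadb_entry {
	const char *uuid;
	const char *dnsname;
};

/* Hypothetical lookup: return the DNS name stored for the given UUID,
 * or NULL if the UUID is not present in MetaDB. */
static const char *
metadb_get_name(const struct metadb_entry *db, size_t count, const char *uuid)
{
	for (size_t i = 0; i < count; i++)
		if (strcmp(db[i].uuid, uuid) == 0)
			return db[i].dnsname;
	return NULL;
}

/* MODRDN is detected when the UUID is already known but the DNS name
 * derived from the received DN differs from the name stored in MetaDB. */
static bool
is_modrdn(const struct metadb_entry *db, size_t count,
	  const char *uuid, const char *name_from_dn)
{
	const char *stored = metadb_get_name(db, count, uuid);
	return stored != NULL && strcmp(stored, name_from_dn) != 0;
}
```

If `is_modrdn()` returns true, the old name is deleted from RBTDB completely and the entry is re-added under the new name; otherwise the change is handled as an ordinary ADD/MOD/DEL.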
The content of the changed LDAP entry is received by the plugin via syncrepl. The plugin has to synchronize records in RBTDB with the received entry.
We can intercept calls to dns_db_addrdataset() and dns_db_deleterdataset(), modify the LDAP DB and then modify the RBT DB. The entry change notification (ECN) from LDAP will be propagated back to BIND via persistent search and then applied again (usually with no effect).
There is a potential for race conditions, e.g. multiple successive changes to a single entry (i.e. DNS name) done by BIND.
The other option is to not write directly to RBTDB, but this approach has another problem:
Update filtering based on the modifiersName attribute is not feasible, because modifiersName is not updated on delete.
| Ticket | Summary |
|---|---|
| #125 | Re-synchronization is not implemented |
During the initial discussion we decided to implement periodical LDAP->RBTDB re-synchronization. It should ensure that all discrepancies between LDAP and RBTDB are resolved eventually.
We likely need the re-synchronization mechanism itself even if it is not run periodically, because a re-connection to LDAP can require re-synchronization if the SyncRepl content update fails with an e-syncRefreshRequired error.
- Issue a cn=dns sub-tree LDAP search for all objects in the DNS tree.
- Delete all objects whose generation number < the current generation number.
- Corner case: an object with UUID1 and name name.example. was deleted, but another object with UUID2 and name name.example. was added. This has to be detected to prevent deletion of the DNS object equivalent to UUID2.

Provide resync_interval_min and resync_interval_max configuration options. Start with some initial value (= minimal?) and double the interval if no discrepancies were found. Divide the interval by 2 in case of any error. The new value has to stay within the interval [resync_interval_min, resync_interval_max].
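The interval adaptation described above can be sketched as a small helper; the function name and signature are illustrative only:

```c
#include <stdbool.h>

/* Adapt the re-synchronization interval: double it after a clean run,
 * halve it after an error, and clamp the result to the configured
 * [resync_interval_min, resync_interval_max] range. */
static unsigned int
next_resync_interval(unsigned int current, bool had_error,
		     unsigned int min, unsigned int max)
{
	unsigned int next = had_error ? current / 2 : current * 2;

	if (next < min)
		next = min;
	if (next > max)
		next = max;
	return next;
}
```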
Initial implementation has some limitations:
LDAP MODRDN (rename) on records is not supported
Startup with big amount of data in LDAP is slow
Re-synchronization is not implemented
Support per-server _location records for FreeIPA sites
Zones enabled at run-time are not loaded properly
Records deleted when connection to LDAP is down are not refreshed properly
Child DNS zone is corrupted if parent zone is hosted on the same server
This feature doesn't require special management. The options directory, resync_interval_min and resync_interval_max are provided for special cases. Default values should work for all users.
New options in /etc/named.conf:

- directory specifies a filesystem path where cached zones are stored.
- resync_interval_min and resync_interval_max control periodical re-synchronization as described above.
- The existing SOA expiry field in each zone specifies the longest time interval during which data from the cache can be served to clients even if the connection to LDAP is down.
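A hypothetical configuration fragment illustrating the new options (the dynamic-db block layout follows bind-dyndb-ldap conventions; the URI, base DN and all values shown are examples only):

```
dynamic-db "ipa" {
	library "ldap.so";
	arg "uri ldapi://%2fvar%2frun%2fslapd-EXAMPLE-COM.socket";
	arg "base cn=dns,dc=example,dc=com";
	arg "directory /var/named/dyndb-ldap/ipa";
	arg "resync_interval_min 30";
	arg "resync_interval_max 3600";
};
```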
No impact on replication.
No impact on updates and upgrades.
This feature depends on 389 DS with support for RFC 4533 (so-called syncrepl). See 389 DS ticket #47388.
No impact on other development teams and components.
The path specified by the directory option has to exist and be writable by named. It is not necessary to back up the content of the cache.
Test scenarios that will be transformed to test cases for FreeIPA Continuous Integration during implementation or review phase.
Petr Spacek <pspacek@…