[isf-wifidog] Re: [wirelesstoronto-discuss] code contribution to wifidog auth server

Rob Janes janes.rob at gmail.com
Wed 16 Nov 18:25:39 EST 2005

ok, lots of resistance to my ideas about databases.

i guess the thing to do is make a "rob" branch on the repository and put 
my changes on it, so you can see them?

i looked over the cvs logs and diffs for AbstractDb*.php.  with the 
exception of blob support, the only changes in the last year and a half 
have been to comments and debug prints.  so, i guess you're saying: if 
it works, why change it.

as to whether my stuff is a heavyweight abstraction - well, my code, 
DBO*.php, is 720 lines, and AbstractDb*.php is 750 lines.  oh, and 180 
lines for my AbstractDb.php disguise.  i think i managed that by 
resolving all code duplication using inheritance and wrapping.  in terms 
of call stack depth, mine is a couple of levels deep, vs 1 level for 
AbstractDb.

i was originally of the impression that mysql support was an objective, 
but i see a lot of reliance on postgres-only things that would make it 
difficult to support mysql.  it looks to me like the number of them is 
not contained, in other words it's increasing.  that's all fine by me, 
postgres is great, it's just not what i'm getting from comments and 
postings on the mailing list.  looking at the code and the cvs logs, i'd 
say that the mysql support was forked off a year ago and left to languish.

my abstraction revisits mysql and attempts to smooth out the differences 
between postgres and mysql.  well, ok, at this point it's more like a 
roadmap than a homecoming.  i hit a roadblock - i'm pretty sure the 
interval sql stuff is postgres-only.  mysql has intervals, but not done 
the same way.  to smooth out that difference some poor coder would have 
to write some date-transform code, so when i saw that i stopped thinking 
too hard about mysql support.
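
to illustrate, the difference looks roughly like this (sketched in 
python for brevity; the real code is php, and the helper name here is 
made up):

```python
# Hypothetical helper that papers over the interval-syntax difference
# between postgres and mysql.  All names here are illustrative only.

def add_minutes_sql(column: str, minutes: int, driver: str) -> str:
    """Return a driver-specific SQL expression for column + N minutes."""
    if driver == "postgres":
        # postgres: timestamp + INTERVAL 'N minutes'
        return f"{column} + INTERVAL '{minutes} minutes'"
    elif driver == "mysql":
        # mysql: DATE_ADD(timestamp, INTERVAL N MINUTE)
        return f"DATE_ADD({column}, INTERVAL {minutes} MINUTE)"
    raise ValueError(f"unknown driver: {driver}")
```

so every query that does date arithmetic would need to go through a 
helper like that, which is the date-transform work i mean.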

isf is in the enviable position of having hands-on developers making bug 
fixes at all times.  we at wireless toronto grab a snapshot maybe twice 
a year.  to upgrade the schema by 15 points gives you a lot of sql on 
your screen, which is maybe upsetting if you're a poor user.  me, i just 
said holy cow.  plus, it might not work, because maybe the snapshot we 
got had a bug, or because maybe the postgres user i used didn't have 
alter or create.  unfortunately, the error would be on the user's 
screen.  i also see that only MainUI invokes the schema upgrade, which 
means that users coming in on auth or ping may potentially use the old 
schema.  i assume your maintenance regimen is to run cvs update (or 
copy) and then hurry to a login screen to upgrade the database.  perhaps 
we can set this up as a configurable option, like how it's done with the 
cron cleanup, either online or offline.


Benoit Grégoire wrote:

>>>At first glance, I don't get it.  We already have a custom DB abstraction
>>>layer.  We are open to replacing it with an off the shelf one or improving
>>>it, but I don't see what replacing it with different but still custom one
>>>would buy anyone?  But I guess I'll understand when I see the code.
>1> * printing obscure error messages in the middle of the html page
>2> * throwing errors that aren't necessarily caught
>3> * gets a new (albeit hopefully cached) db connection with every sql
>4> * printing debug messages in the middle of the html page
>5> * profiling info not collected so much as debugged
>6> * not extensible by inheritance or wrapping
>7> * too much cutting and pasting of code, the big methods are mostly
>>identical but for a couple of lines.
>8> * no objective wrapping of the result set, so all calls must return the
>>entire result set in an array
>1, 2 were unavoidable before we had a proper logging and error display 
>abstraction, they have nothing to do with the db abstraction class design.
>3 was done this way because that is how the postgres PHP driver works (it 
>returns the same connection if the parameters are the same, so there was 
>simply no reason to manage keeping a pointer to the connection).
oddly enough, i found it to be returning multiple connection handles.  
perhaps that's an artifact of my environment though.

>4- If you found a way to implement it so that it doesn't print in the middle 
>of the page but also works everywhere and has no UI dependencies, I'm all for it.
yes.  the EventLogging i put together will capture and hold all debug or 
logging messages.  call flush when you want them echoed.  but - you have 
to use elLogMessage("debug ...") and not echo for your debug messages.
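
roughly, the capture-and-flush idea works like this (python sketch with 
made-up names; the real EventLogging code is php and elLogMessage is its 
entry point):

```python
# Minimal sketch of a capture-and-flush message buffer, in the spirit
# of the EventLogging described above.  Names are illustrative only.

class MessageBuffer:
    def __init__(self):
        self._messages = []

    def log(self, msg: str) -> None:
        # Hold the message instead of echoing it mid-page.
        self._messages.append(msg)

    def flush(self) -> str:
        # Emit everything captured so far, then clear the buffer.
        out = "\n".join(self._messages)
        self._messages.clear()
        return out
```

the point is that nothing hits the page until flush is called, so the 
UI decides where (and whether) the messages appear.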

>5: Sorry, I don't understand your statement.
other profilers i've used record the timings for reporting on later, 
when it's appropriate for display.  the timing code in abstractdb shows 
the timings front and center for each sql statement as it's executed.

>6: I think it appears so because the really abstract AbstractDb class was 
>pulled from wifidog because at one point there was a transition from MySql 
>and we needed both to be distinct, but that's just a historical artifact.
looks like a bit of a quick fix to me.  now you've got two abstractdb 
files that are and will have to be complete copies of each other, but 
for the nitty gritty mysql and postgres bits.

my code has a top level, DBObject, which is what you get when you open a 
connection.  DBObject is mostly a switch which calls in the desired 
database driver.  the DBObject file defines an abstract base class, 
DBDriver.  DBDriver_postgres and DBDriver_mysql inherit from DBDriver.  
The low-level drivers are supposed to be slim, basically just 1-line 
functions to call pg_fetch_all or mysql_fetch_all.  They also include 
bootstrap code so that if the driver is not built in to php they fail 
gracefully.  i.e., top-level code can catch the error and do something 
appropriate with it.
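
the layering looks something like this (python sketch; class and 
function names are illustrative, not the actual DBO*.php code):

```python
# Sketch of the DBObject/DBDriver layering described above: a thin
# dispatcher over per-database driver classes that fail gracefully
# when the underlying extension is missing.  All names illustrative.

class DriverUnavailable(Exception):
    """Raised when the requested database extension is not available."""

class DBDriver:
    """Abstract base class; concrete drivers are slim wrappers."""
    def fetch_all(self, result):
        raise NotImplementedError

class DBDriverPostgres(DBDriver):
    def __init__(self):
        # Bootstrap check: in PHP this would test that the pgsql
        # extension is loaded; here a module import stands in for it.
        try:
            import psycopg2  # noqa: F401  (stand-in for the pgsql extension)
        except ImportError:
            raise DriverUnavailable("postgres driver not available")

def open_connection(driver_name: str) -> DBDriver:
    # DBObject is "mostly a switch" that calls in the desired driver.
    drivers = {"postgres": DBDriverPostgres}
    if driver_name not in drivers:
        raise DriverUnavailable(f"no such driver: {driver_name}")
    return drivers[driver_name]()
```

top-level code catches DriverUnavailable and can show a sensible error 
instead of a fatal "undefined function" mid-page.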

>7:  Agreed, but why not just fix it.
yes, but i think the fix i did is different from the fix you would have 
done.

>8:  True, but that was by design.  We were not trying to re-create one of the 
>heavyweight abstraction layers, just to simplify the calling conventions and 
>add easy to use debugging and profiling information.  Web applications seldom 
>operate on resultsets large enough that there is significant overhead in 
>pulling it all at once (doesn't mean the queries cannot be super-costly, but 
>it's very rare to pull megabytes of data in an array).  Supporting it meant 
>the db calls were no longer identical.
yes.  my approach was to use a result set object which smoothed out the 
differences.  although - the interface includes the ability to get the 
entire result set in one call.  i used a design similar to perl dbi and 
java jdbc.
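
the result-set object works roughly like this (python sketch with 
illustrative names; the real interface is php, modeled on dbi/jdbc):

```python
# Sketch of a DBI/JDBC-style result-set wrapper: rows can be pulled
# one at a time or all at once through the same object.

class ResultSet:
    def __init__(self, rows):
        self._rows = rows
        self._pos = 0

    def fetch_row(self):
        # Return the next row, or None when exhausted (DBI-style).
        if self._pos >= len(self._rows):
            return None
        row = self._rows[self._pos]
        self._pos += 1
        return row

    def fetch_all(self):
        # Convenience: the remaining rows in one array, which covers
        # the one-shot calling convention AbstractDb already offers.
        rest = self._rows[self._pos:]
        self._pos = len(self._rows)
        return rest
```

so code that wants everything at once still gets it in one call, but 
code that wants to stream rows can.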

fyi, one portability problem is that rows in hash-array form are 
upper/lower case sensitive, while the database engine itself may not 
be.  i.e., the field is called UserName in the schema, but the hash 
always has "username" in it.
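
one possible way to smooth that out is to fold hash keys to lower case 
on the way in (python sketch; this is an illustration, not the actual 
wifidog code):

```python
# Sketch of a row mapping that folds keys to lower case, so the schema
# column "UserName" and the lookup key "username" agree.

class Row(dict):
    def __init__(self, data):
        # Normalize all keys at construction time.
        super().__init__({k.lower(): v for k, v in data.items()})

    def __getitem__(self, key):
        # Lookups are case-insensitive too.
        return super().__getitem__(key.lower())
```

that way callers can use whatever casing the schema uses and still get 
a hit.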

i forgot to mention:

9. error encapsulation, instead of having to use pg_last_error() or 
mysql_error() directly

>>>It's seems unlikely that the only changes needed in the db abstraction
>>>layer are in common.php, unless you kept the exact same API (but then why
>>>change the abstraction layer?).
>>as to why, see above.  as to how, I made AbstractDb into a wrapper
>>around my stuff.  If the method isn't found it passes the call to my stuff.
>Well, now we have an extra indirection layer around code that is still custom, 
>why not just fix what is already there?  The people who wrote the current 
>code spent a lot of time on it.  I'm sure they won't mind having their code 
>replaced by a standard and externally maintained db abstraction layer.  Or 
>even have their code partly rewritten to be better, or completely replaced by 
>something that does stuff that neither the current abstraction nor a standard 
>one could ever do.  But replacing all that code by a still-custom 
>abstraction layer with an extra indirection to apparently do the exact same 
>thing with the same API as the current class is a little hard to justify.
>I'd much rather we merge your code in the current one.
i think it's an either or thing.  my abstraction is modeled on perl dbi 
and java jdbc, but abstractdb just isn't.

>>>>if you folks on the other hand find that these changes are not useful,
>>>>then i'll have to find a way to use wifidog-auth's UI from the install
>>>>script without making changes to wifidog-auth's internals.
>>>See above
>>>>however, i've made some security improvements to the database stuff i
>>>>hope you'll accept -
>>>>* database connections using the unix socket (more secure)
>>>?!?!? That was already supported and documented in the INSTALL file.
>>I didn't find anything when I started this.  i just grepped the entire
>>tree for unix (case independent) and found nothing relevant.  it's great
>>if it's in there, but it would be nice if that piece of help was easier
>>to find.
>Sorry, apparently the person who did the install script deleted it from the 
>documentation and didn't move it into the install script.
>>the pg_connect call made in AbstractDbPostgres.php only supports the
>>unix socket if you supply the full slash path to the postgres unix
>>socket as the hostname.  AbstractDbPostgres will not omit the "host=xxx"
>>keyword from the connection string and so you cannot use the server's
>>unix socket by omitting the "host=xxx" keyword which is what I'd prefer
>>to do, not really knowing where the unix socket is.
>Well, I can assure you it used to work, you just had to leave 
>CONF_DATABASE_HOST blank.  In any case, that is not a DB abstraction layer 
>issue.
we had some problems with this initially.  it's a problem in a fresh 
installation without a proper pg_hba.conf.
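
the behaviour i'd like is simply to omit host= when nothing is 
configured, so the server's default unix socket gets used.  something 
like this (python sketch; the real call is pg_connect in php, and the 
function name here is hypothetical):

```python
# Sketch of building a libpq-style connection string that omits the
# host= keyword when no host is configured.  With no host= at all,
# libpq connects via the server's default unix socket.

def build_conninfo(dbname: str, user: str, host: str = "") -> str:
    parts = [f"dbname={dbname}", f"user={user}"]
    if host:
        # Explicit host: a TCP hostname, or a slash-path to a
        # unix-socket directory.
        parts.append(f"host={host}")
    return " ".join(parts)
```

that avoids having to know where the socket actually lives on a given 
install.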

>>>Well, I may be missing something, but I find what you propose extremely
>>>dangerous.  When you upgrade a live server, the schema update must happen
>>>as soon as the code is updated, no matter what.
>>>Otherwise, you have new code running that may or may not work with the old
>>>schema (and may even accidentally corrupt data in a way that will break
>>>the assumptions of the upgrade script).  If you instead check out in a
>>>sandbox (but with your real database) to have your schema updated before
>>>updating the code on the live server, you have the opposite problem:  old
>>>code running that may or may not be compatible with the new schema. 
>>>That's why schema validate is always called (not just from the install
>>>script), and why the wifidog user must have table creation privileges.
>>shouldn't updates to the code (via cvs update, or copy) be done while
>>the server is offline?  otherwise, a user could get in there while the
>>update is still progressing.  who knows what would happen.  suppose they
>>got in before schema_validate was updated but after all the other code
>>was updated?  or vice versa?
>>from later postings to this thread i see that Max has already started on
>>separating the roles of maintainer from user, which is i think exactly
>>what i'm calling for.
>All I can say is that ISF strongly disagrees.  
>First we have hotspots that are open (and have people using them) 24h a day.  
>The only semi-reasonable maintenance window would be between 4 and 5 am, which 
>is a lot to ask from volunteers.  
>Second, creating an offline mode would be very complicated.  The amount of 
>special treatment while offline and after coming back online is probably 
>quite a bit larger than what you think.  As an example, you'd have to do 
>special treatment when the server comes back online to not send bogus emails 
>to hotspot tech support that their hotspot went down, but you can't just 
>reset the last successful heartbeat, else you will disrupt the notification 
>mechanism for hotspots that actually are down.  So you'd have to keep a log in 
>the database of your maintenance windows, so you can do all your special 
>treatment.
unless your server never goes down, you probably should have a cool-off 
period after a restart during which no such messages are sent out.  that 
should give the hotspots time to re-establish their heartbeat connection.
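
something like this (python sketch; the grace period value and all the 
names are made up for illustration):

```python
# Sketch of a post-restart cool-off: suppress "hotspot down"
# notifications for a grace period after the server comes back up,
# giving hotspots time to re-establish their heartbeat.

GRACE_SECONDS = 600  # assumed 10-minute cool-off window

def should_notify(now: float, server_started_at: float,
                  last_heartbeat_at: float, timeout: float) -> bool:
    if now - server_started_at < GRACE_SECONDS:
        # Still in the post-restart cool-off; stay quiet.
        return False
    # After the window, the usual heartbeat-timeout rule applies.
    return now - last_heartbeat_at > timeout
```

that way there's no need to reset or log heartbeats across a 
maintenance window, which avoids the bookkeeping described above.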

>Third, we've fixed bugs or even added features during daytime very frequently 
>in the past.  We would now lose this ability.  Taking the whole network down 
>to fix a minor bug just seems unreasonable, even more so now that many groups 
>use it.  Unexpected problems may sometimes appear because one group made 
>well-tested changes but introduced a bug in a code path they never use.
i'm just talking about small bugs that require schema changes, not all 
small bugs.

you upgrade a live server?  bad bad bad!  i have never ever done such a 
thing ;-)

>The only valid reason I see to do this change is that indeed, code updates 
>can't be done entirely atomically and it's possible that a request could 
>corrupt data in a specific record in the few seconds that pass.  But that's 
>not very likely, and if it does happen the consequences are far less than 
>interrupting all active user sessions and taking 75 nodes instantly offline 
>for everyone.  Not to mention the cost of having to do it at night on a 
>weekly basis and incurring the overhead of coding, testing, maintaining and 
>documenting an offline mode.  This is one of the reasons I'd personally prefer 
>to drop future support for MySql (I have doubts that people are ever going 
>to step up to finish it and support it and that all developers will 
>systematically test their changes with both databases).  Not maintaining 
>complete database independence would allow us to have the schema completely 
>enforce its semantics through triggers.  That way just about any request that 
>would corrupt data would simply fail.
are you saying that if the auth server goes down, existing wireless 
users will lose their internet connection?  or are you saying that 
people with a browser parked on auth will not get a refresh screen for 
a while?

you might consider stored procedures rather than triggers.  probably 
more portable and just as apropos.

we don't upgrade weekly.  we're so far behind we're going to swap 
servers.  definitely there will be down time.  a maintenance mode would 
be very nice for our situation.

>Having end user applications using databases may not have revolutionized how 
>databases work.  But they taught us a few tricks along the way on how they 
>can be administered.  One of which is that transparent schema updates work 
>just fine.
>If you are not willing to take the risk and consider the risk of a bug or 
>human error in the offline mode to actually be lower, you are welcome to 
>implement it in the mainline, as long as it's an optional process.
given that we upgrade once in a blue moon, i think a maintenance mode 
thing gives me a certain comfort level.

i don't really see our upgrade schedule changing soon, given our 
existing customizations.

>WiFiDog mailing list
>WiFiDog at listes.ilesansfil.org

More information about the WiFiDog mailing list