And of course he
wanted it all tied together on a network.
About that time
(1989) I was experimenting with RDBMS software (R-Base) to put together a
simple system for recording student assignments and scores. The college had
installed a couple of local area networks (Netware 2.0a on 286's, ARCNET and
IBM baseband), and was starting the process of replacing an IBM System/36 and
RPG with Netware, DOS and Ansa Paradox 2.0. He saw my prototype and offered me
a move from teaching to full-time IT - in a department of one. Me.
So I built a shiny
new campus-wide routed ARCNET
network, built a couple of Netware 2.0a servers, and started writing the
software that would execute his vision using the multi-user DOS-based Paradox 3.0 as the
RDBMS development platform.
Within a few years
we had a fully networked campus with nearly every computer and lab on the
ARCNET network, multiple Novell servers, and real-time relational database
software that covered most of the administrative and academic computing for the
college.
I learned the
fundamentals of managing relational data, normalization, foreign keys, indexes,
etc. The college ended up with desktop software that managed student
registration, fees, payments; quiz, assignment and test scores; course grades
and academic transcripts. By the early '90's, students could walk up to
specially configured PC's at any time, look up their grades, and see exactly
what they needed to do to complete a course, program, degree, or certificate and graduate.
Everything was
multi-user and real-time. If an instructor entered a test score, the student
would see the score in real time. If the score was the final score needed to
complete a credit, course or entire program, the student would see the final
credit, course or program transcript seconds later.
No batch jobs. All
DOS based and Netware networked.
One of the barriers
in the pre-386 days was DOS's inability to multitask. That made some aspects of
the software pretty difficult to use, and any task that required significant
compute could not be backgrounded. That meant, for example, that a registrar who
needed to run a report for a student would have their PC tied up waiting for the report to complete. That slowed down
the registration process significantly. I scratched my head a bit -
multitasking OS's were not readily available. Quarterdeck DESQview and QEMM
were fairly usable but affected desktop performance. Instead I came up with a
novel solution that allowed background processing without affecting the
performance of the user's desktop.
I created a small
multi-user relational database that acted as a job queue. When the real-time
users executed functions that would likely take more than a couple of seconds,
the application would insert the function & parameters into the 'job'
database. A rack of old 286's was left running and logged into an app that
would scan the job database. These 'workers' would scan the database, and if a
record was unlocked, would lock the record, execute the function, return the
results to either the student record RDBMS, a temporary table, or directly to
the Registrar's printer, then unlock and archive the job record. Then during peak
processing times (registration week) I simply dusted off more old 286's and
net-booted them into the worker app, where they would share the background
processing load. That made the whole thing somewhat scalable.
I think it was
called 'Distributed Network Computing' or something like that. But I used RDBMS
record locks as the semaphore and database records as the messaging path.
After the student
records database was roughed in and working, the President had me look at the
purchasing system. We were using an archaic green-screen app hosted on a
mainframe run by a service provider. The app was obviously ported from punch
cards - the green screen input had to be column aligned, the results of a
'submit' would show up minutes or hours later, and any errors caused an entire
batch to be rejected.
And of course all
that did was update the chart of accounts. The actual PO's were done on a
typewriter.
Brutal, considering
what we had on the student records side.
We took a look at
the 'modern' version of that provider's software - a PC application that put the
80 column, position sensitive batch processing on a DOS green screen with a
modem to submit the job to the mainframe. Errors were returned the next time
you dialed up.
Brutal, considering
what we had on the student records side.
The President asked
me how long it would take to rough in something using Paradox RDBMS. I
committed to a prototype in 3 weeks. In a handful of months we had a fully relational
multi-user real-time purchase order and account management system where
departments could key in their own PO's, accounting could approve and print
them, chart of accounts and budgets would automatically be encumbered, and the
service provider's mainframe would be updated at the end of the day. Departments
could look up their budget at any time and see the balance and every
transaction affecting their accounts.
And of course the
President had a small app that let him monitor budgets in real time, with
recent encumbrances bubbled to the top and highlighted in red. He watched the budget like a hawk.
I eventually
integrated our Financial Aid software - a commercial dBase-based package. That integration allowed us to automatically determine how much of each student's financial aid was surplus to
their tuition and fees and automatically cut financial aid checks. Of course
the first time we did the automatic check printing, we messed
it up.
We also struggled with room scheduling, so I built a really simple room scheduling application using Paradox. The core of the application was a single table with building, room and 10-minute time block as a composite primary key. Checking room availability was a simple query, and Paradox's built-in key and record management prevented duplicate events.
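That single-table design is easy to illustrate. Here is a minimal Python/SQLite sketch with hypothetical names (the original was a keyed Paradox table, where the key and record management did this work automatically): the composite primary key on building, room and time block is what rejects double bookings, and an availability check is a single keyed lookup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookings (
        building TEXT,
        room     TEXT,
        block    TEXT,   -- a 10-minute time block, e.g. '09:10'
        event    TEXT,
        PRIMARY KEY (building, room, block)   -- one event per room per block
    )""")

def is_free(building, room, block):
    # Availability check: a single lookup on the composite key.
    return conn.execute(
        "SELECT 1 FROM bookings WHERE building = ? AND room = ? AND block = ?",
        (building, room, block)).fetchone() is None

def book(building, room, blocks, event):
    # The key does the conflict detection: any double booking violates
    # the primary key and the whole request is rolled back.
    try:
        with conn:   # commit on success, roll back on error
            conn.executemany(
                "INSERT INTO bookings VALUES (?, ?, ?, ?)",
                [(building, room, b, event) for b in blocks])
        return True
    except sqlite3.IntegrityError:
        return False

print(book("Main", "101", ["09:00", "09:10", "09:20"], "CS lecture"))   # True
print(book("Main", "101", ["09:10"], "Staff meeting"))                  # False
```

A nice side effect of leaning on the key: booking a multi-block event is all-or-nothing for free, since one conflicting block rolls back the entire insert.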
By 1992-1993 I had automated most of the mundane paper-generating processes and several data-entry jobs - which we eliminated through attrition.