That's short-sighted. It doesn't matter where data lives as long as the application keeps everything separate, and this only "helps" administrative stuff if you have to work within the database directly (SQL queries and such). But multiple databases like this are almost always a bad idea in the long run.
The current devs stated that they designed the DB process this way so that each client's data would be completely separate from the other clients' data, and so that DB administrative efforts would be less hectic and more efficient.
One database is how it should be.
Not really. What matters is how much data there is and how it's organized in the tables - multiple databases won't cause a performance problem simply because there's more than one.
Q: Won't having that number of individual DBs - each literally an exact replica of the others - actually create more (unnecessary) latency/load on the DB server and its resources than just having ONE 'master' database housing ALL the clients' data in that one optimized database? (Of course indexing is crucial to keep that DB optimized to handle the total load of all client data; with proper indexing it's the same as having multiple DBs, in my view - just a more efficient design.)
That is actually an appropriate concern. Distributing the data itself doesn't matter (in fact that generally makes things more awkward to work with), but distributing the load is reasonable - if you're at the point where multiple servers are worth the investment, of course.
Distribute the pertinent data sets - individually, based on their role/service - across separate database servers, so that load/latency issues are minimized for each DB server itself.
You could have one lower-spec server handling data that's used less frequently or less aggressively (e.g., big complicated queries) and another higher-spec server handling the data that gets processed more heavily.
For example: all email marketing (results) data would be housed on db1 server whereas all client specific data would be housed on db2 server.
Reason: the bulk email marketing application demands a tremendous amount of resources and disk space, so isolating those services/operations removes contention between the two DBs (yes/no?).
And moving data across servers makes that harder, though IIRC SQL Server has native functionality for that. But if you can JOIN tables across servers in a query, I would expect that the processing has to happen on one server - it's not like the query is split in half and each piece executed by the server housing the associated data. So simply moving the data for the sake of moving it might not actually do you any good if you're querying both servers' data together frequently.
Q: How would the separate sets of data (client or email data) be hooked together when required?
Answer: the SQL queries would just JOIN/UNION data sets from the separate specified MS-SQL DB servers, based on the required data, to produce the results (JOIN/UNION is already the method the current BI queries use; I'm just suggesting the data be segmented onto different DB servers based on their role/service to the application).
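To make the "joining data that lives in two databases" point concrete, here's a minimal sketch using SQLite's ATTACH as a stand-in for MS-SQL linked servers (the table and column names are made up for illustration). Note the key detail from the reply above: even though the data sits in two databases, one engine does all the join work.

```python
import sqlite3

# Two physically separate databases, one query engine - a hypothetical
# stand-in for querying across MS-SQL servers. Names are illustrative.
con = sqlite3.connect(":memory:")           # "db2": client data
con.execute("ATTACH DATABASE ':memory:' AS email")  # "db1": email data

con.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE email.results (client_id INTEGER, opens INTEGER)")
con.execute("INSERT INTO clients VALUES (1, 'Acme')")
con.execute("INSERT INTO email.results VALUES (1, 42)")

# The JOIN spans both databases, but all processing happens in this one
# engine - the work is not split between the data's two homes.
rows = con.execute("""
    SELECT c.name, r.opens
    FROM clients AS c
    JOIN email.results AS r ON r.client_id = c.id
""").fetchall()
print(rows)  # [('Acme', 42)]
con.close()
```

In MS-SQL the equivalent cross-server query still gets planned and assembled on one server, which is why splitting data that's frequently queried together can backfire.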
No. Not to me. If you have a particular client or three putting an exorbitant load on your server, then you need to do something about getting them a dedicated instance - as in a separate copy of the application and database on separate servers. But in general it should be the same database/server for everyone.
Q: Does it make sense to have multiple copies of the same exact DB schema for client data vs. one 'master' DB schema that can house ALL the data, so that there is (first) less DB administrative work and (second) less actual load on the one server itself?
Meanwhile, I can't help too much with your thoughts about migration and the distributed architecture.
Make sure you have proper indexing before thinking about this. Indexes are almost always enough to do the job without separating data.
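A quick way to sanity-check the "index first" advice is to look at the query plan before splitting anything up. This sketch uses SQLite from Python purely as an assumed stand-in (the principle carries over to MS-SQL, where you'd inspect the execution plan instead); the table, column, and index names are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (client_id INTEGER, sent_at TEXT, opens INTEGER)")
con.executemany(
    "INSERT INTO results VALUES (?, ?, ?)",
    [(i % 50, "2024-01-01", i) for i in range(1000)],
)

# Without an index this query scans the whole table; with one it seeks
# straight to the matching rows.
con.execute("CREATE INDEX idx_results_client ON results (client_id)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM results WHERE client_id = 7"
).fetchall()
print(plan)  # the plan row mentions idx_results_client rather than a full scan
```

If the plan already shows index seeks rather than full scans, splitting the data across servers is unlikely to be the fix.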
Answer: by partitioning the table(s) by week, month, etc., you reduce search latency by limiting the amount of data that has to be searched/sorted to produce the query results.