foundherent Posted March 19, 2019 (edited)

Good afternoon, team. I did something incredibly dumb and am essentially starting from scratch, so I'm hoping to build a stronger foundation this time around.

My goal is to create a simple table like so:

client_id | client_name | timestamp   (not sure the timestamp is even necessary?)

The client_id is unique and never changes. The client_name, however, does change. I want to set this up in a way that lets me query the client_name for all timestamps before yyyy-mm-dd versus after yyyy-mm-dd.

My previous structure (go easy on me!) was:

t1: client_id | client_name_assignment_id | timestamp
t2: client_name_assignment_id | client_id | client_name | timestamp

I'd then query the client_name based on the client_id and isolate the latest timestamp for said campus_id and its respective client_name. All help appreciated!
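For context, this is roughly the kind of lookup I was running under that old structure (just a sketch; the table name client_name_assignment and the example client_id are assumptions standing in for my real schema):

-- Sketch: latest client_name per client under the old two-table layout
-- (table/column names are assumptions based on the description above).
SELECT a.client_id, a.client_name, a.`timestamp`
FROM client_name_assignment a
JOIN (
    SELECT client_id, MAX(`timestamp`) AS latest
    FROM client_name_assignment
    GROUP BY client_id
) m ON m.client_id = a.client_id
   AND m.latest = a.`timestamp`
WHERE a.client_id = 42;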
foundherent Posted March 19, 2019 (Author)

34 minutes ago, foundherent said:
    I'd then query the client_name based on the client_id and isolate the latest timestamp for said campus_id and its respective client_name. All help appreciated!

Apologies for any confusion - campus_id is from a separate table that I'm hoping will work similarly, but for this example we would be querying the client_id for the latest client_name.
Barand Posted March 19, 2019

28 minutes ago, foundherent said:
    and isolate the latest timestamp for said campus_id

That would be easier if you had "campus_id" in your data.

The usual way for data that changes over time (e.g. product price, employee salary, etc.) is

+-----------+------------------+----------------+-----------------+
| client_id | name             | valid_from     | valid_until     |
+-----------+------------------+----------------+-----------------+
| 1         | Name 1           | 2000-01-01     | 2019-03-18      |  ) valid until yesterday
| 1         | Name 2           | 2019-03-19     | 9999-12-31      |  ) new name, valid until further notice
| 2         | Name 3           | 2000-01-01     | 9999-12-31      |  Has not changed yet
+-----------+------------------+----------------+-----------------+
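A point-in-time lookup against that layout is then a simple range test (a sketch, assuming a table called client_name with the columns shown above):

-- Name that was in effect on a given date (sketch; table/column names as in the table above).
SELECT name
FROM client_name
WHERE client_id = 1
  AND '2019-03-10' BETWEEN valid_from AND valid_until;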
foundherent Posted March 19, 2019 (Author)

1 minute ago, Barand said:
    That would be easier if you had "campus_id" in your data. The usual way for data that changes over time (eg product price, employee salary etc) is [table above]

Apologies - campus_id is a similar table, but I wanted to keep it simple. What you're defining helps me significantly, thank you.

Forgive my ignorance, but this creates a table that does not necessarily have a unique id in the instance of names being identical (which may be a possibility for my table). Is this OK for this use case?
Barand Posted March 19, 2019

You could add an auto_increment id field as the primary key, or you could use

PRIMARY KEY (client_id, valid_from)
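For example, a minimal sketch of that second option (column types here are assumptions; adjust them to your data):

-- Sketch: name history keyed on (client_id, valid_from); types are assumptions.
CREATE TABLE client_name (
    client_id   INT          NOT NULL,
    name        VARCHAR(100) NOT NULL,
    valid_from  DATE         NOT NULL,
    valid_until DATE         NOT NULL DEFAULT '9999-12-31',
    PRIMARY KEY (client_id, valid_from)
);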
foundherent Posted March 19, 2019 (Author)

1 minute ago, Barand said:
    You could add an auto_increment id field as primary key or you could use PRIMARY KEY(client_id, valid_from)

Perfect, thank you. My campus table is a bit more robust because it also has an address that can change, and the name change may take place at a different time from the address change. The table you gave me earlier will still accomplish this for all intents and purposes, but it seems counterproductive to the idea of normalizing the data, since the name and address would theoretically be duplicated. Should I split this data into two tables for the sake of normalization, or am I overthinking it?
Barand Posted March 19, 2019

You can still preserve normalization:

A campus has many names (over time).
A campus has many addresses (over time).

campus_name
+-----------+------------------+----------------+-----------------+
| campus_id | name             | valid_from     | valid_until     |
+-----------+------------------+----------------+-----------------+
| 1         | Name 1           | 2000-01-01     | 2019-02-18      |
| 1         | Name 2           | 2019-02-19     | 2019-03-03      |
| 1         | Name 3           | 2019-03-04     | 9999-12-31      |  current
+-----------+------------------+----------------+-----------------+

campus_address
+-----------+------------------+----------------+-----------------+
| campus_id | address          | valid_from     | valid_until     |
+-----------+------------------+----------------+-----------------+
| 1         | Address 1        | 2000-01-01     | 2019-02-28      |
| 1         | Address 2        | 2019-03-01     | 9999-12-31      |  current
+-----------+------------------+----------------+-----------------+
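To pull the name and address that were both in effect on a given date, join the two history tables on the campus and test the date against each validity range (a sketch, assuming the table and column names shown above):

-- Campus name and address in effect on a given date (sketch; names as in the tables above).
SELECT n.campus_id, n.name, a.address
FROM campus_name n
JOIN campus_address a
  ON a.campus_id = n.campus_id
 AND '2019-03-02' BETWEEN a.valid_from AND a.valid_until
WHERE n.campus_id = 1
  AND '2019-03-02' BETWEEN n.valid_from AND n.valid_until;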