
Query: retrieve NULL values


dtyson2000


Morning all!

 

I have a query question (is that redundant?). I'm trying to retrieve the values from two tables that include common data but I'd also like to retrieve the uncommon data. The users table contains user data. The login table contains login data. Some users have logged in. Others have not.

 

I'd like to calculate the number of days that it has been since each user has logged in, which I have achieved with the following query. However, I would also like to retrieve the names of users who have NOT yet logged in, thus those who have no data in the login table.

 

It's probably something simple that I'm overlooking but I've hit the wall and wanted to throw it out there for your consumption. If someone could point me in a direction, that would be great! I hope I've described it decently enough. Here's what I have:

 

users

id | userid | name
 1 |    123 | dave
 2 |    456 | sue
 3 |    789 | fido
 4 |    112 | zippy


login

id | tuserid | logindate
 1 |     123 | 2011-07-04
 2 |     456 | 2011-05-30
 3 |     789 | 2011-06-10

SELECT * FROM users,
(SELECT DISTINCT tuserid, MAX(logindate), DATEDIFF(CURDATE(), MAX(logindate)) days_since_login
FROM login
GROUP BY tuserid) login_days
WHERE (userid = tuserid || tuserid IS NULL) AND days_since_login > 10

 

I'd like to return:

 

resulting table

id | userid | days_since_login
 1 |    456 | 41
 2 |    789 | 30
 3 |    112 | NULL

 

Any thoughts?


A suggestion: do not subquery, then take the Cartesian product (CROSS JOIN), then restrict (WHERE).

 

Instead:

 

Step 1.) LEFT JOIN the two tables on user id.

Step 2.) GROUP BY user id, summarizing by latest date logged in.

 

Also, avoid using NULLs (and LEFT JOINs), as they were not originally part of the relational model.

 

 

Hope it helps.


Yes, I am seriously suggesting that.

 

The Relational Model is based on classical 2-valued logic (TRUE and FALSE). NULLs (and LEFT JOINs) make it 3-valued logic.

 

Research the consequences of such a change; I can't cover them in detail here, for this is not the place, and besides there are lots of sites on the net that discuss the issue.

 

Well, for starters:

 

SELECT CASE WHEN NULL = NULL THEN 1 ELSE 0 END

That will give you 0, which means NULL is not even equal to itself! An "object" not equal to itself? Nonsense.

One of the three "laws of thought" dictates: an object is equal to itself.
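That result is easy to reproduce; as a quick sanity check, here is the same CASE expression run through Python's built-in sqlite3 module (SQLite follows the same three-valued comparison rule as MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL = NULL evaluates to NULL (UNKNOWN), which is not TRUE,
# so the CASE expression falls through to the ELSE branch.
(result,) = conn.execute(
    "SELECT CASE WHEN NULL = NULL THEN 1 ELSE 0 END"
).fetchone()
print(result)  # 0
```

The same expression returns 0 in MySQL for the same reason: the WHEN condition is UNKNOWN, not TRUE.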

 

Wow, I just can't comprehend that fact.

 

But I'm forgiving and live in a non-ideal world, so I try to avoid NULLs as much as possible. If I use a LEFT JOIN, it is only in cases of GROUPing.

 

Besides, there are no NULLS in the real world. Only N/A values, which can be simulated by "special values."

 

Even ANSI "represents" NULLs using a non-NULL value.

 

That clearly indicates the contradiction on using NULL.


I think the query is simpler than you think:

 

SELECT u.id, u.userid, MAX(l.logindate) AS last_login, DATEDIFF(CURDATE(), MAX(l.logindate)) AS days_since_login
FROM users AS u LEFT JOIN login AS l ON u.userid = l.tuserid
GROUP BY u.id

should do it (I think); I don't have a test environment here. Using a LEFT JOIN (with the ON clause instead of a WHERE) should result in any reference to fields in login being NULL if there is no matching record. You could do it with a WHERE, but it is uglier ...

SELECT u.id, u.userid, MAX(l.logindate), DATEDIFF(CURDATE(), MAX(logindate)) as days_since_login
FROM users AS u, login as l 
WHERE (u.userid = l.tuserid OR l.tuserid IS NULL)
GROUP BY u.id
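Since neither of us has a test environment handy, here is the LEFT JOIN version checked against an in-memory SQLite database via Python's sqlite3, using the sample data from the first post. SQLite has no DATEDIFF() or CURDATE(), so a julianday() difference stands in for them (an SQLite-specific substitution); the LEFT JOIN / GROUP BY shape is the same as the MySQL query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, userid INTEGER, name TEXT);
    INSERT INTO users VALUES (1, 123, 'dave'), (2, 456, 'sue'),
                             (3, 789, 'fido'), (4, 112, 'zippy');
    CREATE TABLE login (id INTEGER, tuserid INTEGER, logindate TEXT);
    INSERT INTO login VALUES (1, 123, '2011-07-04'), (2, 456, '2011-05-30'),
                             (3, 789, '2011-06-10');
""")

# julianday() differences stand in for DATEDIFF(CURDATE(), ...) here.
rows = conn.execute("""
    SELECT u.userid,
           CAST(julianday('now') - julianday(MAX(l.logindate)) AS INTEGER)
               AS days_since_login
    FROM users AS u LEFT JOIN login AS l ON u.userid = l.tuserid
    GROUP BY u.id
""").fetchall()

days = dict(rows)
print(days[112])  # None: zippy has no login rows, so the joined columns are NULL
```

All four users come back, and the user with no login rows gets NULL (Python's None) for days_since_login, which is exactly the resulting table the original post asked for.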

 

 

@ebmigue

NULL means the value IS NOT KNOWN. Which is a very real possibility in the REAL WORLD as well. Since I do not know your favorite color and I do not know fenway's favorite color; both of those values are NULL to me. It is NOT possible to say that ebmigue.favorite_color = fenway.favorite_color -- it may turn out that the values are equal once we learn what they are, but until then, we cannot say they are equal -- we cannot say that one UNKNOWN is the same as another UNKNOWN.

 

Representing a NULL in some other way, does not yield accurate results. If we choose to use an empty string instead of NULL, then ebmigue.favorite_color = fenway.favorite_color would evaluate to true. And a query of the database to group people by favorite_color, would yield incorrect results. Perhaps it is not that critical with colors, but what if we were grouping by gender or political affiliation?

 

I really don't see how you can say the Relational Model is based on "2-valued logic". The whole idea is that we can relate multiple values to any one or more elements.

 


@ebmigue : DBs, at least the relational ones, are based on set theory -- nothing else.

Set theory AND first-order predicate logic, to be precise. In predicate logic there is no NULL. You cannot argue that.

From wikipedia:

 

The relational model for database management is a database model based on first-order predicate logic, first formulated and proposed in 1969 by E.F. Codd.

 

While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification.

A predicate resembles a function that returns either True or False. (http://en.wikipedia.org/wiki/Relational_model)

 

There is no NULL!

 

To claim that NULL has no use is just preposterous.

No it is not. A claim is only preposterous if it has no rigorous basis (e.g., "eval is evil", "DISTINCT is evil", etc., etc.)

 

I think such a claim has its basis, in the same respect that 2-valued predicate logic has its reasons for not allowing NULL into its system.

 

 

I admit, though (and this is just speculation), that in practice there might be cases that predicate logic is silent about, which gives rise to the suspicion that NULL, after all, is useful.

 

IMO, however, that is just hindsight. NULL can be done away with and is needless, for the same reason that 2-valued predicate logic, from Aristotle to modern symbolic logic, has survived for 2000 years without introducing NULL into its system.

 

Again, this claim that NULL is needless is not, so to speak, a "religious and dogmatic" claim like "NULL is evil"; on the contrary, it is a scientific, logical claim.

 

I have the backing of the leading authors of relational db theory (read the works of Date, Darwen, et al.).


@DavidAM

 

I will try, as far as the place and appropriateness allow, to explain point by point the incoherence of NULL, which you claim otherwise.

 

 

NULL means the value IS NOT KNOWN.

 

NULL is not even a VALUE!

 

IS NOT KNOWN (or UNKNOWN)  is a value and is therefore not the same thing as NULL.

 

UNKNOWN VALUE != NULL.

 

Proof:

 

UNKNOWN VALUE == UNKNOWN VALUE //returns TRUE

 

NULL == NULL // returns NULL

 

UNKNOWN VALUE == NULL //returns NULL

(or: TRUE == NULL //returns NULL)

 

So how can they be the same?

 

Representing a NULL in some other way, does not yield accurate results.

That is what I'm saying. NULL introduces "holes" in a two-valued (TRUE and FALSE) system.

The point is, if you want to use NULL then don't expect the precision of the two-valued system.

 

Since people, I assume, want precision in their programs (why wouldn't we?), I am suggesting avoiding NULL.

 

My intent is clear.

 

If we choose to use an empty string instead of NULL, then ebmigue.favorite_color = fenway.favorite_color would evaluate to true.

Yes, you are correct.

But make "favorite_color" a NULLable attribute, and comparisons become 3-valued (T, F, NULL) instead of 2-valued (T, F). Which is simpler and has a rich scientific tradition? The 2-valued system.

 

And a query of the database to group people by favorite_color, would yield incorrect results.

 

Why? Using empty strings instead of NULL is the correct way. What is your proof that using empty strings would yield incorrect results?

 

I really don't see how you can say the Relational Model is based on "2-valued logic".

 

Any standard textbook on relational databases would say that it is based on 2-valued predicate logic and set theory. Look it up on Wikipedia, or buy a book by Date at your local bookstore.

 

The whole idea is that we can relate multiple values to any one or more elements.

That is achieved even without NULL.

 

A bit of history.

 

When Codd introduced the Relational Model (RM) in 1969, there was no NULL. When he revised the RM in the early '80s, he introduced it, and from then on he met criticism. For 10 years (1969-1980), the RM managed without NULL. Of course, by then the '80s saw SQL heavily used in industry, and NULL came with SQL.

 

As I've said, look up the leading authors of the Relational Model. Chris Date and Hugh Darwen are the best in the field. They have worked closely with Codd.


If not NULLs, how can I achieve the end? I'm kind of lost now.

 

Yes, the solution posted by DavidAM is appropriate for you. That was also what I suggested, though I didn't provide the actual code, only the "pseudo-code."

 

 

But I would advise you to transform your attributes to non-NULLable attributes. Avoid LEFT JOINs as much as possible. Use them only when GROUP BYing.

 

As to why I am advising that, look for the books by Chris Date and Hugh Darwen. There you will find the reasons -- the conceptual and theoretical reasons.

 

People will advise you otherwise, for performance reasons. They will say, "Ah don't listen to them; their queries are slow and impractical."

 

But I would say, at least they are precise. Before you launch your product, you want it precise and reliable. A fast program that is imprecise and unreliable is useless, utterly useless.

 

Of course the ideal is to have a program that is both precise and fast. I do not argue that.


Please ignore the "proof" I gave in the post at http://www.phpfreaks.com/forums/index.php?topic=338280.msg1595479#msg1595479.

 

That "proof" is ambiguous, not exact. But except such "proof", I stand by everything that I've said regarding NULLs.

 

I gave that proof in an attempt to explain to DavidAM (and others) what NULLs are.

-----------

 

Using DavidAM's "objections"/"observations", here is my take on NULL (which I hope is clear enough and final):

 

NULL means the value IS NOT KNOWN.

No, I don't agree. Such a statement confuses NULL with the UNKNOWN truth value.

 

Now, what is a value? A value is "an individual constant (say, the boolean TRUE)."

 

What is NULL? NULL is difficult to define. What is certain is that NULL is not a value. It does not belong to some type (say, integer). NULL is conventionally understood as a "special mark/flag" that indicates that "some value is unknown," but only as that -- not as a value.

 

Clearly, if "A" is a table attribute declared of type INT, then the possible values of "A", formally and physically speaking, are the integers supported by the computer system in question (say the integers 0 to (2^32)-1). Formally, NULL is not one of them.

 

Now, when we introduce NULL into an existing 2-valued logical system, the consequence is the so-called 3-valued logic: there are then three truth values: TRUE, FALSE, and UNKNOWN.

 

DECLARE A INT;
A = NULL;

A == 1;     // evaluates to the UNKNOWN truth value
A == A;     // evaluates to the UNKNOWN truth value
NULL == A;  // evaluates to the UNKNOWN truth value

DECLARE B BOOL;
B = (NULL == 1);   // initialised to the UNKNOWN truth value
B == (NULL == 1);  // evaluates to TRUE since both values have the UNKNOWN truth-value

 

NULL is not a truth-value, by definition; while UNKNOWN is a truth-value. UNKNOWN is merely a consequence of the usage of a 'flag' called NULL.
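For comparison, SQL itself, which conflates the NULL marker with the UNKNOWN truth value, surfaces every one of these comparisons simply as NULL. A quick check through Python's sqlite3 (SQLite's comparison semantics match standard SQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In SQL, any direct comparison involving NULL yields NULL (seen as None
# from Python), never TRUE or FALSE; only IS NULL gives a definite answer.
row = conn.execute(
    "SELECT NULL = 1, NULL = NULL, (NULL = 1) = (NULL = 1), NULL IS NULL"
).fetchone()
print(row)  # (None, None, None, 1)
```

Note that even the last pseudocode comparison above, which a language distinguishing NULL from UNKNOWN might call TRUE, comes back NULL in SQL, since SQL has no separate UNKNOWN constant to compare.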

 

UNKNOWN is a possible value of the type BOOLEAN (in 3-valued logic), while NULL does not even belong to a type.

 

Clearly, the two concepts, NULL and UNKNOWN, "are not the same thing." "But SQL thinks they are," which makes matters worse.

 

Since I do not know your favorite color and I do not know fenway's favorite color; both of those values are NULL to me.

Yes, you are correct. They are NULL to you. You flag them as NULL.

 

It is NOT possible to say that ebmigue.favorite_color = fenway.favorite_color -- it may turn out that the values are equal once we learn what they are, but until then, we cannot say they are equal --

Therein lies the problem! The Relational Model's intention is to be precise. There is no "maybe": either we know it or not. Period.

There is no "maybe we know it, maybe not." Precision is achieved by two-valued logic already; why shift to three-valued logic, where uncertainty is introduced?

 

we cannot say that one UNKNOWN is the same as another UNKNOWN.

If "UKNOWN" is taken to mean here as NULL, w/c I defined above as a mere flag, yes you are correct.

 

The context here is like this: you have two "things" A and B, and you declare/flag both of them as UNKNOWN/NULL; obviously, if you test their equality, you cannot say TRUE or FALSE. What you will know is that the result of your comparison is UNKNOWN. What that means is this: "it is known that the result of your comparison is UNKNOWN".

 

Obviously this result is no longer the NULL we are talking about. Now we know the value (i.e., that the comparison yields UNKNOWN); whereas before (when we assumed that A and B were NULL) there was no such knowledge of some value.

 

The result of comparing variables (or attributes) that are NULL is a BOOLEAN value called UNKNOWN, and is a valid value in 3-valued logic.

 

Representing a NULL in some other way, does not yield accurate results. If we choose to use an empty string instead of NULL, then ebmigue.favorite_color = fenway.favorite_color would evaluate to true.

(Note: Using the term "a NULL" is incorrect. There is only one NULL "value.")

 

Well, using the explanation above, the comparison will not return TRUE, but rather UNKNOWN.

So I don't see any problem there.

 

Now, if you want to retrieve "those that have the same fav colors", obviously ebmigue's and fenway's will not be in the result, since their favorite colors are marked as NULL. Or will they? They could have the same favorite colors, too (since they are unknown).

 

So that is the resulting problem. There is no straightforward way of answering simple queries.

 

But I see the dilemma here. This is the case of "missing information", which is encountered in the real world.

 

So you use empty strings, for instance, instead of NULL. But the comparison above will return TRUE! So the problem is not solved.

 

This is the consequence of the failure to support user-defined types and operator overloading.

 

Remember that equality testing is dependent on the type of the operands being tested as equal. "1" == 1 must return FALSE because the operands are of different types.

 

If only in SQL (MySQL, in particular) we could define our own types, say COLOR, and then override the equality operator ("=") for type COLOR, the problem would be solved.

 

First we declare a type called COLOR, defined to have "the usual ROYGBIV colors" and the empty string (as a value in case there is missing information).

 

We could then freely "customise" the equality operator for type COLOR.

 

In the implementation of the overloaded equality operator, we could code that if we encounter empty strings for both operands (our special value for missing information), the comparison results in UNKNOWN.

 

It is also possible to return the FALSE value when the missing color value is encountered during equality testing, in order to avoid problems involving the UNKNOWN truth-value.

 

We might want to document such decisions. In the documentation, a truth table for comparisons of values of type COLOR might be necessary.

 

Basically, what we are doing is utilising special values for missing information while avoiding NULL and the UNKNOWN value, and thus avoiding three-valued logic.
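This special-value strategy can be sketched in a few lines of Python. The Color class, the empty-string marker, and the choice to have missing values compare FALSE (the second option mentioned above, which keeps everything two-valued) are all just the hypothetical example from the discussion, not a real DBMS feature:

```python
class Color:
    """Hypothetical user-defined type: the usual color values plus one
    special value (the empty string) marking missing information."""
    MISSING = ""

    def __init__(self, value: str):
        self.value = value

    def __eq__(self, other):
        if not isinstance(other, Color):
            return NotImplemented
        # Overloaded equality: if either operand carries the special
        # missing-information value, report False instead of introducing
        # a third UNKNOWN truth value, so comparisons stay two-valued.
        # (This deliberately sacrifices reflexivity for missing values.)
        if Color.MISSING in (self.value, other.value):
            return False
        return self.value == other.value


red_a, red_b = Color("red"), Color("red")
missing_a, missing_b = Color(""), Color("")
print(red_a == red_b)          # True
print(missing_a == missing_b)  # False: two missing values never match
```

With this overload, two rows with missing colors never compare equal, so they are never grouped together as if they shared a color, and no third truth value ever appears.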

 

That is what I meant by saying that in the real world there is no NULL. Special values already capture what NULL intended to achieve.

 

If type definitions and operator overloading were supported in major DBMS products, then using NULL would become needless. Problem solved.

 

In fact, there are many more methods, aside from the "special value strategy", to deal with missing information without using NULL that are theoretically sound.

 

But this is not yet the reality of existing major DBMS products. That is why NULL and 3-valued logic, with their imprecision, are here to stay.

 

But further, the nagging question is: Do relations, by formal definition, have NULL attribute values?

 

The answer is no. By definition, NULL is not a value. But relational attributes necessarily contain a value.

 

So if we "combine" both definitions, a contradiction arises.

 

So this "dilemma" is impossible in the first place.

 

And besides, if we encounter missing information, chances are it is due to bad database design, which can be resolved by an alternative, better database design.

 

That better database design will no longer need NULL, which is consistent with my claim that NULL is needless and not really useful in the real world.

 

NULL means the value IS NOT KNOWN. Which is a very real possibility in the REAL WORLD as well.

 

Of course, in the real world there is missing information. But when we encounter such instances, we do not use the concept of NULL. Instead we use special values that indicate that there is missing info (say, N/A, or TBA (to be announced), etc.).

 

If we have an attribute named WEIGHT_OF_PRODUCT, and supposing we could define our own types, we could create a user-defined type which we might call WEIGHT.

 

If we represent the type WEIGHT with integers, then the possible values of type WEIGHT are the integers plus the special value representing missing information. And NULL is no longer necessary.

 

And a query of the database to group people by favorite_color, would yield incorrect results. Perhaps it is not that critical with colors, but what if we were grouping by gender or political affiliation?

 

Yep. If your data/program is critical, then use precise methods. Use science, mathematics, and logic. That is the only way.

--------------

 

I will try not to post again on this thread.  IMO, enough is said on my part.

 

If you do not agree with, or are doubtful about, such claims, there is nothing more I can do about it.

 

What I can do is refer you to the leading authors on these matters (e.g., Date, Darwen, McGoveran, Pascal, et al.). They can explain better. (You can find a wealth of materials here: http://thethirdmanifesto.com/)

 

Refer to them to relieve all doubts and possible counter-arguments lingering in your mind. Research and critical study is the only way to clarify what you already know.

 

Thank you, and I hope it helps.


Theory has no place in the real world -- even Codd knew that, and amended his 2-valued system.

 

Besides, while you can get away with '' (blank) for strings, you can't get away with "0" (zero) for integers.  "No apples" isn't equivalent to "indeterminate number of apples".

 

But pretending that indeterminate values (a) don't exist, (b) don't need to be stored, and (c) have no place in the real world is just short-sighted.

 

Your "proofs" and "correctness" are ill-suited, where actual problems need to be solve, not addressed on a theoretical level and no further.

 

But if you're an experienced programmer, you know all of this, so why bother fighting?


I will pretend that I agreed to your claims (why would I bother understanding such nonsense?).

 

But this I cannot approve:

Theory has no place in the real world -- even Codd knew that, and amended his 2-valued system.

 

Computer Science is theory.

 

You wouldn't even have a profession - all of us would not have a profession - if not for theory.

 

Airplanes are impossible without the science of physics, which is theory.

 

Automobiles are impossible without the science of mechanics, which is theory.

 

Synthetic medicines are impossible without the science of chemistry, which is theory.

 

And you say theory has no place in the real world?

 

Wow.


You've got the science thing backwards -- with one exception of course -- science always comes second. 

 

Newton got hit by an apple before he studied gravity.

 

Computers were designed and built long before there was any computer science.

 

I can find the area under a curve without calculus -- because calculus came as a shortcut after-the-fact.  "First principles" was the observation of something that wasn't calculus until after the observation.

 

Machines flew before we knew how they flew.  Actually, it took NASA an additional ten years to figure out how a plane might fly in the Martian atmosphere -- because all of the science of the last 100 years of flight never considered variables for different gravity and different air.

 

Medicine is the exception, but only because we don't allow inventors to test medicine before a lot of science has been done.  That's different in developing worlds where witch-doctors have been prescribing herbs and roots and vegetation to fix all sorts of things, all very much without science.

 

Blood-letting.

 

Theory comes way after practice.  Theory is most definitely used to guide further development of practice.  But practice still comes first, and it also comes last.

 

More importantly, scientists, especially mathematicians, and unfortunately debaters/lawyers/logicians as well, get cause and effect totally backwards -- much as you have.

 

When practice conflicts with theory -- when observations conflict with equations -- either the observation is incorrect, or the equation is incorrect.  But in no circumstance is the object being observed incorrect.

 

Math, and formal logic, are merely languages, no different than English, used to efficiently predict what's being observed.  When the prediction doesn't match the description, the prediction is wrong -- every time. 

 

The real world doesn't act according to the laws of physics.  The laws of physics describe what we've seen.  It's a huge difference that the world of formal academia never understands.

 

But hey, as with all business ventures, I give you the same advice.  Use a database to achieve all of these scenarios when someone's paying you cold hard cash that you intend to use to buy a new fancy sportscar. 

 

I guarantee you that before you buy your car, you'll give up on your academic nonsense.


You've got the science thing backwards -- with one exception of course -- science always comes second. 

You cannot prove that.

Newton got hit by an apple before he studied gravity.

You cannot prove that either. That story is usually regarded as an anecdote.

 

Computers were designed and built long before there was any computer science.

But how did Pascal create it? He used mathematics, and mathematics = theory. Of course, hundreds of years after Pascal did it, developments in science took place. After the Second World War, the stage was set for a science of computers.

 

I can find the area under a curve without calculus -- because calculus came as a shortcut after-the-fact.  "First principles" was the observation of something that wasn't calculus until after the observation.

True. But how did we become so sure of such methods? Science.

 

Machines flew before we knew how they flew.  Actually, it took NASA an additional ten years to figure out how a plane might fly in the Martian atmosphere -- because all of the science of the last 100 years of flight never considered variables for different gravity and different air.

How did they realise that?

Because people at NASA KNEW THEORY.

 

Medicine is the exception, but only because we don't allow inventors to test medicine before a lot of science has been done.  That's different in developing worlds where witch-doctors have been prescribing herbs and roots and vegetation to fix all sorts of things, all very much without science.

 

Blood-letting.

You cannot prove that.

 

Theory comes way after practice.  Theory is most definitely used to guide further development of practice.  But practice still comes first, and it also comes last.

You use your assumption as your conclusion -- fallacious.

 

More importantly, scientists, especially mathematicians, and unfortunately debaters/lawyers/logicians as well, get cause and effect totally backwards -- much as you have.

You cannot prove that.

 

When practice conflicts with theory -- when observations conflict with equations -- either the observation is incorrect, or the equation is incorrect.  But in no circumstance is the object being observed incorrect.

That is true. So why bash theory, when practice could also be bashed?

 

Math, and formal logic, are merely languages, no different than English, used to efficiently predict what's being observed.  When the prediction doesn't match the description, the prediction is wrong -- every time. 

That is true. Let's forget all science because they can be wrong. Let's start tomorrow.

 

The real world doesn't act according to the laws of physics.  The laws of physics describe what we've seen.  It's a huge difference that the world of formal academia never understands.

That cannot be proved.

 

But hey, as with all business ventures, I give you the same advice.  Use a database to achieve all of these scenarios when someone's paying you cold hard cash that you intend to use to buy a new fancy sportscar. 

 

I guarantee you that before you buy your car, you'll give up on your academic nonsense.

So that's why. Material gain over knowledge. Hey, I'm not a monk. I understand what you mean.

The whole point of what I'm 'ranting' about is this: by using theory, we could have earned more. =)


You've got the science thing backwards -- with one exception of course -- science always comes second. 

 

Of course, observation is first, then science is second. That's the whole point of empirical knowledge.

 

But there are other forms of knowledge that are not empirical - they call it rational/deductive knowledge.

 

Mathematics and logic (and thus relational databases) are of this kind. (Read epistemology to know more about this.)

 

But come on. Did you really think that the field of computing is still at the "observation" stage? For around 80 years, people have been developing programs. For 40 years, people have been creating relational databases.

 

Surely their knowledge accumulated over time, which made theorizing possible. Rather than go through the whole 40 years again, save your time by learning theory.

 

That's what I meant by saying that theory is important.

 

By "theory" I mean not the speculative kind. Not the "Big Bang Theory" or "Relativity Theory".

 

What I mean is the fundamentals, the basic axioms of your field.

 

If you're a mathematician, surely you must be knowledgeable of the fundamental principles of mathematics.

 

If you're a database professional, surely you must be knowledgeable of the fundamental principles of databases and computer science.

 

Otherwise, one's output could not be as precise as one thinks.


I don't ever need to care about the fundamentals because I don't ever need my output to be precise.  That's what you're missing.  I need my output to be consistent, not precise.  If it's consistently incorrect, then I can build a system where 1 + 1 = 3.  The fact that 1+1=2 is totally arbitrary.  As long as the function is consistent, I can chain it with something else that's consistent, and that's all that is needed to build a machine -- computer, logic, or any other type of machine.

 

You spend a lot of time saying that others "can't prove" whatever they are saying.  That's where you've hit your own fallacious assumption as conclusion rubbish.  I don't need to prove it.  It doesn't matter if I can prove it. 

 

What matters is that I can depend on it.  I've observed it, time and time again, and I can depend on my observing it again.

 

It doesn't need to be proven.  It also doesn't need to be always true.  It just needs to be consistent.

 

Theory doesn't help me, or anyone else, earn more.  The academic world doesn't earn much of anything -- respect included.

 

Practice earns more.  Because the only way to prove that something will work, to another human being with something to lose, is to have them believe that you've done it many times before. 

 

What you miss with theory, that you absolutely never miss with practice, is the incomplete and unforeseen and chaotic and unpredictable elements.  You don't know what you don't know.  I know that I've done it.  So I know that it can be done.  Maybe not always, but at least once.  Deductive reason is way more absolute than inductive reason.

 

And if I've done it time and time again, then I know that it can be done time and time again.  That's it.  That's the whole thing.

 

No amount of theory will prove to anyone, including to yourself, that something will continue to work, because it'll never convince anyone that you haven't missed some little but significant thing.

 

Enough practice will convince everybody that it'll happen when you do it again -- to an acceptable level of certainty.  And that's all that counts when it comes to convincing someone else to risk losing something on your behalf.


I don't ever need to care about the fundamentals because I don't ever need my output to be precise.

I hope your clients will not read this. :D

 

That's what you're missing.  I need my output to be consistent, not precise.

Consistency and precision go "hand in hand", so to speak. Of course, a "consistently imprecise" system is a different kind; I won't call it consistent in the strictest sense of the term "consistent."

 

If it's consistently incorrect, then I can build a system where 1 + 1 = 3.  The fact that 1+1=2 is totally arbitrary.  As long as the function is consistent, I can chain it with something else that's consistent, and that's all that is needed to build a machine -- computer, logic, or any other type of machine.

That's good. If we live in a totally different universe. Wow, now who's grounded on reality?

 

You spend a lot of time saying that others "can't prove" whatever they are saying.  That's where you've hit your own fallacious assumption as conclusion rubbish.  I don't need to prove it.  It doesn't matter if I can prove it. 

Yes, certainly you don't "need" to prove it. You're not bound by contract. But then again, what is to be done? Nothing? Guys, let's stop working!

 

What matters is that I can depend on it.  I've observed it, time and time again, and I can depend on my observing it again.

As a man of science, I'd say: something that is not proven cannot be depended upon.

 

It doesn't need to be proven.  It also doesn't need to be always true.  It just needs to be consistent.

Certainly. Not all consistent systems are true (say, novels). But all true systems are consistent.

 

Theory doesn't help me, or anyone else, earn more. 

Stop buying anything "modern" then.

 

The academic world doesn't earn much of anything -- respect included.

Why educate your kid?

 

Practice earns more.  Because the only way to prove that something will work, to another human being with something to lose, is to have them believe that you've done it many times before. 

Yes, that's true. I won't argue. I can read novels too. They may "sound true," but are they, really?

 

What you miss with theory, that you absolutely never miss with practice, is the incomplete and unforeseen and chaotic and unpredictable elements.

Practice not grounded on theory is, let us say, a dangerous enterprise. Theory not grounded on practice is a luxury, but a good investment (for knowledge's sake, as they say).

 

You don't know what you don't know.  I know that I've done it.  So I know that it can be done.  Maybe not always, but at least once.  Deductive reason is way more absolute than inductive reason. 

 

And if I've done it time and time again, then I know that it can be done time and time again.  That's it.  That's the whole thing.

 

I don't understand that nonsense.

 

 

No amount of theory will prove to anyone, including to yourself, that something will continue to work, because it'll never convince anyone that you haven't missed some little but significant thing.

I agree. That's why we need to learn more, study more, research more - in short, theorize. =)

 

Enough practice will convince everybody that it'll happen when you do it again -- to an acceptable level of certainty.  And that's all that counts when it comes to convincing someone else to risk losing something on your behalf.

I agree. Clients won't care whether or not you know something "theoretical". As long as you deliver. I get it.

 

The question is, can you be so sure of what you deliver? If yes, well, that is your opinion.

If you don't care of the certainty of the products you deliver, well, then again, that is your professional decision.

Link to comment
Share on other sites

Are you seriously trying to tell me that if I memorize that 5x5=25, with no underlying knowledge about "theory", I'm not allowed to use 25 as output in my application???

 

Sure, it helps to know where it comes from... but it's purely academic.  I don't need to know how the registers of the CPU work in order to program a computer to do my bidding.  I'd wager that 99% of the world has no clue.  Same goes for a car.

 

I'm not saying that the underlying theory isn't important in order to make huge gains or improvements.

 

But if I'm happy with what I've got -- in this case, a NULL value -- you can't tell me that it's fundamentally flawed just because it violates some arbitrary theory.
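For concreteness, the NULL in question is the kind the thread's original problem produces: users with no login rows. A minimal sketch of the LEFT JOIN approach suggested earlier (assuming MySQL and the users/login tables from the opening post):

```sql
-- Users joined to their logins; users with no login rows come back with NULLs.
-- Assumes MySQL and the users/login tables shown at the start of the thread.
SELECT u.id,
       u.userid,
       DATEDIFF(CURDATE(), MAX(l.logindate)) AS days_since_login
FROM users u
LEFT JOIN login l ON l.tuserid = u.userid
GROUP BY u.id, u.userid
HAVING days_since_login > 10
    OR days_since_login IS NULL;   -- keeps the users who have never logged in
```

Users with no login rows appear with days_since_login = NULL, which is exactly the "zippy | NULL" row the opening post asked for.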

Link to comment
Share on other sites

Are you seriously trying to tell me that if I memorize that 5x5=25, with no underlying knowledge about "theory", I'm not allowed to use 25 as output in my application???

Where did you get that? Such a conclusion cannot be inferred from my posts.

 

Sure, it helps to know where it comes from... but it's purely academic.

Sure. Not all "academic" things can be of use - at least on our time. That is for history to tell.

But this - the relational model - is one of them that can be used - relational theory.

 

 

I don't need to know how the registers of the CPU work in order to program a computer to do my bidding.  I'd wager that 99% of the world has no clue.  Same goes for a car.

Precisely! I'm not telling people to know every bit and detail of every DBMS. If one did, one could no longer get any work done for clients.

 

I'm saying that, if we want to judge the usefulness, features, capabilities, etc. of a given DBMS, our only basis is its underlying theory, namely, the relational model. No other. That's what makes theory important. It becomes the yardstick for our judgement of a certain DBMS.

 

And people say theory has no place in the real world? Now that's preposterous.

 

Certainly, it is helpful to know implementation details, such as a DBMS's optimizer, parser, etc. But all such knowledge is ephemeral. Come the next version of a DBMS, things might get changed "under the hood." Not so for theory. Yes, there are developments in theory - changes, evolutions. But their authors are careful to ground their conclusions in the scientific tradition.

 

I'm not saying that the underlying theory isn't important in order to make huge gains or improvements.

I'm glad we agree on this.

 

But if I'm happy with what I've got -- in this case, a NULL value -- you can't tell me that it's fundamentally flawed just because it violates some arbitrary theory.

Of course. That is your decision.

 

But:

The Relational Model is no "arbitrary theory" in the field of databases. In fact, it is the basis of all the leading DBMSs (Oracle, MySQL, etc.).

 

If it's fundamentally flawed, then it is fundamentally flawed. Your feelings about that fact won't matter.

 

The fact that it is fundamentally flawed should be enough reason to question the status quo.
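The "fundamental flaw" being argued over is SQL's three-valued logic, which a few lines make concrete (a sketch, assuming MySQL, where TRUE displays as 1):

```sql
SELECT NULL = NULL;    -- NULL (unknown), not 1 (true)
SELECT NULL IS NULL;   -- 1: IS NULL is the correct test for missing data

-- A WHERE clause keeps only rows whose predicate is TRUE, so an equality
-- comparison with NULL filters out every row, NULL-valued ones included:
SELECT u.name
FROM users u
LEFT JOIN login l ON l.tuserid = u.userid
WHERE l.logindate = NULL;    -- always returns no rows; use IS NULL instead
```

This UNKNOWN truth value, propagating through comparisons and WHERE clauses, is what the three-valued-logic objection refers to.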

Link to comment
Share on other sites

Ever heard of an erline table?  100% practice, 0% theory.  And they are considered gospel by many industries, markets, and algorithms.

 

Life is like an erline table; someone's already gotten what you're gonna get.

 

Please keep this discussion on topic and sensible.

 

Refrain from such irrelevant remarks.

Link to comment
Share on other sites

I'm saying that, if we want to judge the usefulness, features, capabilities, etc. of a given DBMS, our only basis is its underlying theory, namely, the relational model. No other. That's what makes theory important. It becomes the yardstick for our judgement of a certain DBMS.

Sure -- but I'm not trying to judge it at all.  I'm trying to help people USE it.  That's where "ephemeral" knowledge becomes vitally important.

Link to comment
Share on other sites

You know what an erline table is, or you wouldn't think it irrelevant.  They fly in the face of your theory concept, as they are nothing more than real-world look-up tables derived purely from observational experience, summarizing data that can't be predicted by any known algorithm.

 

And certainly, we don't ever want to "judge the usefulness" of anything.  We want to demonstrate the usefulness. Two very different things.

 

But we strayed off topic long ago.

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
