
Check if a variable is a two-element non-associative array


NotionCommotion


Not a recommendation, but another way to skin the cat:

function check($a, int $z)
{
   // Must be an array with exactly $z elements
   if (!is_array($a) || count($a) != $z) {
       return false;
   }
   // First non-integer key fails the whole thing
   // (a while loop that stops at the first failure would also work)
   foreach ($a as $k => $v) {
       if (!is_int($k)) {
           return false;
       }
   }
   return true;
}

I think your test arrays will always be non-associative as you are using a short syntax listing only values.

 

Also, there is the possibility that the integer indices will not be zero and one.

Link to comment
Share on other sites

Validation of data received via JSON.

 

What is the source of the data? Why are you concerned that the returned data may have indexes other than 0 and 1? And, if it does get returned with indexes other than 0/1 is there a possibility it is correct, yet the process changed to send back different indexes? Why would you want your application to break if that happens? Why aren't you more concerned with the values in the array?

 

FYI: There is no reason to use a strict comparison for the array length, count($a) === 2. It's impossible for count() to return the string "2".
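A quick demonstration (sample values made up) that count() always returns an integer, so the loose comparison suffices:

```php
$a = ["x", "y"];
var_dump(count($a));       // int(2) -- always an integer, never a string
var_dump(count($a) == 2);  // bool(true)
var_dump(count($a) === 2); // bool(true) -- strict also passes, just unnecessary
```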

 

Just check that the value is a two-element array and force the indexes. Then, more importantly, validate the values in the array before you use them. Plus, you can make the length configurable so you can re-purpose the function if needed.

 

 

function checkArray($arr, $len = 2)
{
    if (!is_array($arr) || count($arr) != $len) {
        return false;
    }
    // Validate the values . . .
    return true;
}
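The "force the indexes" step mentioned above can be done with array_values(), which reindexes any array to 0..n-1 so positional access is safe afterwards (the sample keys below are made up for illustration):

```php
// Whatever the incoming keys were, reindex to 0, 1, 2, ...
$raw  = [3 => 123.41, 7 => 4113.231];
$data = array_values($raw);

var_dump($data[0]); // float(123.41)
var_dump($data[1]); // float(4113.231)
```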

 

What is the source of the data? Why are you concerned that the returned data may have indexes other than 0 and 1? And, if it does get returned with indexes other than 0/1 is there a possibility it is correct, yet the process changed to send back different indexes? Why would you want your application to break if that happens? Why aren't you more concerned with the values in the array?

 

It is just a homemade JSON protocol transferred via sockets between two machines, both of which I control.  I wish to limit the amount of network traffic by reducing the JSON string size.  I plan on transferring something like the following, and need to retrieve the three values in the d arrays and interpret them based on their position in the array.  I'm also concerned with their values, but need to first understand their meaning by their position.  Make sense?  Thanks

[
  {"t": 123321, "d": [123.41,4113.231,45234.123]},
  {"t": 123324, "d": [143.41,4213.231,44234.123]},
  {"t": 123326, "d": [142.41,4413.231,42234.123]},
  {"t": 123329, "d": [153.41,4313.231,43234.123]}
]
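A sketch of how the receiving side might decode and read those positional values; the meanings of positions 0-2 are not stated in the thread, so the names here are purely hypothetical:

```php
$json = '[
  {"t": 123321, "d": [123.41, 4113.231, 45234.123]},
  {"t": 123324, "d": [143.41, 4213.231, 44234.123]}
]';

$messages = json_decode($json, true);

foreach ($messages as $msg) {
    // Guard against malformed records before trusting positions
    if (!isset($msg['d']) || !is_array($msg['d']) || count($msg['d']) != 3) {
        continue;
    }
    // Meaning comes from position (these variable names are hypothetical)
    list($first, $second, $third) = $msg['d'];
    echo $msg['t'], ': ', $first, "\n";
}
```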

It is just a homemade JSON protocol

 

Looks like this is the real problem of all your recent threads.

 

 

 

I wish to limit the amount of network traffic by reducing the JSON string size.

 

Of course. ::)

 

Ever heard of compression? BSON? Or even crazier: Measuring to see if the problem is real or just imaginary?
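For what it's worth, gzip usually shrinks repetitive JSON far more than abbreviating key names does. A rough sketch using PHP's zlib functions (sample data made up):

```php
$rows = [];
for ($i = 0; $i < 100; $i++) {
    $rows[] = ["timestamp" => 123321 + $i, "data" => ["value" => 133.3, "units" => "lbs"]];
}
$json = json_encode($rows);

$compressed = gzencode($json, 9); // maximum compression level
$restored   = gzdecode($compressed);

printf("plain: %d bytes, gzipped: %d bytes\n", strlen($json), strlen($compressed));
var_dump($restored === $json);    // bool(true) -- lossless round trip
```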


Ever heard of compression? BSON? Or even crazier: Measuring to see if the problem is real or just imaginary?

 

No, I've never dealt with BSON.  It seems to have been originally developed for MongoDB, but maybe has applications outside of it?  http://php.net/manual/en/function.bson-decode.php seems to be a bit premature.


Also, if this is a home-spun solution, is there a reason you've determined that the best way to understand the meaning of the 'd' array values is by position instead of named index? I know it's done a lot, but if you're creating this yourself why not make it a bit easier on everyone now and in the future by using logically named indexes?


Originally, I used indexes, and my JSON looked like:

[
  {"timestamp":123321,"data":{"value":133.3,"units":"lbs"}},
  {"timestamp":123324,"data":{"value":123.2,"units":"lbs"}},
  {"timestamp":123327,"data":{"value":113.4,"units":"lbs"}},
  ....
  {"timestamp":124329,"data":{"value":153.1,"units":"lbs"}}
]

It just seemed like a lot of extra fluff when I could instead do:

[
  {"t":123321,"d":[133.3,"lbs"]},
  {"t":123324,"d":[123.2,"lbs"]},
  {"t":123327,"d":[113.4,"lbs"]},
  ....
  {"t":124329,"d":[153.1,"lbs"]}
]

Maybe it isn't an issue, and I might as well improve the readability and go back to the first approach.  But then again, if user requirements change in the future, it would be nice to be dealing with less data.
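One way to settle it is to measure. The figures below depend entirely on the sample record, so treat them as illustrative only:

```php
$verbose = json_encode([
    ["timestamp" => 123321, "data" => ["value" => 133.3, "units" => "lbs"]],
]);
$compact = json_encode([
    ["t" => 123321, "d" => [133.3, "lbs"]],
]);

printf("verbose: %d bytes\ncompact: %d bytes\n", strlen($verbose), strlen($compact));
// The absolute saving per record is a few dozen bytes at most; whether that
// matters is exactly the question worth measuring against real traffic.
```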

 


You really, really need to develop a more systematic and professional approach to problem solving. This is way too much randomly-fiddling-with-stuff-and-hoping-it-somehow-helps.

 

Is there a problem right now? Not in some hypothetical future. Now. If there isn't, stop those nano-optimization attempts that don't do anything but keep you busy for yet another week.

 

[...] I might as well improve the readability and go back to the first approach.

 

That's the right solution for the entire thread.

 

 

 

But then again, if user requirements change in the future, maybe it would be nice to have to deal with less data.

 

If the user requirements change in the future, you need to systematically identify the problems and solve those specific problems with the right tools.

 

Adding random “optimizations” just-in-case not only makes no sense. It's objectively counter-productive. By trying to solve your imaginary traffic problems, you've created real problems: Validation has become so obscure that you need multiple posts to even explain it. The data format is close to unreadable. And worst of all, you're wasting your time instead of spending it on useful features, code quality etc.

 

If there was a real problem and a specific goal (e.g. “My customer is sending 10 MiB of JSON data per second and needs to reduce traffic by 50%”), I would happily discuss all kinds of compression methods, alternative formats, custom protocols and whatnot to help you reach your 50% goal. But there's neither a problem nor a goal. It's all just made up.


I would also add to Jacques' last post that readability of code is not a trivial concern. Using good practices when writing code saves many, many hours of wasted effort in varying ways. From least to most beneficial, I would rank it as follows:

 

1. When actively writing new code you don't have to try and "think up" new acronyms to use for variable/function names that don't conflict with others that may already exist that you can't remember. If I need to define a variable for the first day of the current month for a reporting module, using $fom is a bad choice, whereas $firstDayOfCurrentMonth is a much better choice. The amount of extra disk space or execution time is too small to even consider.

 

2. If you need to revise or debug existing code you will spend an inordinate amount of time trying to understand what you wrote before because the names are not intuitive. If you have a problem with a function with parameters such as $ac, $fuj, $tor, you will have to go back to where the function is called to understand what the values are that are passed to the function - which may require you to go back even further to assess the logic on how the values were derived.

 

3. Most importantly, when anyone else has to review/revise your code they won't spend hours/days/weeks trying to figure things out. Heck, if you are properly naming and using comments you will be able to copy/paste a section of code into this forum with a description of what your problem is and what you want to achieve and get a good answer quickly. I.e. no need for multiple back-and-forth posts to try and figure out what the heck your code is trying to do.

 

Lastly, don't be afraid to use comments liberally. If some code you write was not very simple and took some thought to develop, chances are you won't remember how or why you ended up with the final code. If for example, you have to fix a bug where a division by zero error occurs - then spell it out where you implement that fix.

