Everything posted by xylex
-
If you're talking about writing a script that takes a site's intellectual property (IP) and redisplays it on your own site without any permission from the IP's owner, you're definitely at least in a gray area, or way over the line for what's considered "Fair Use" if you're grabbing more than a couple of sentences to quote in a review. And the cryptic "like an ISBN of a book" - but not that - is raising all kinds of red flags for me. If you're over the line there, it's not a matter of being flagged as a bot, but of waiting around for the cease and desist letter and/or lawsuit, regardless of how you're gathering that data.
-
Isn't complaining about the customer service at a $3/month host kind of like complaining that the cashier at Taco Bell could barely speak English?
-
Xignite has metal prices. I've used them for stock quotes before, pretty easy to work with.
-
You didn't specify a country, but I'm assuming a US focus based on your e-mail address. The ones I know of: KBB, NADA, and Ward's Auto.
-
Do you have Robots.txt instead of robots.txt? That would do it.
-
Totally unrelated to the question you asked, but from a marketing perspective, I'd be concerned about the length of your splash video. It's nearly 4 minutes long, and you don't even start presenting Crambu until after the first minute. I doubt many people are even getting to this point in the video (you could track this and get the real stats, might be interesting to find out). I'm not a huge fan of splash videos, but if you're going to do one, you might want to cut a condensed version that gets to the point quicker.
-
I think that the method that chaseman posted is the right way to be doing this. With KingPhilip's second suggestion, the use of variables would either be far more restrictive in what you could modify, or extremely confusing to read and use if everything were variables and you didn't know which classes used which variables. For example, if you have a few widgets that should all be styled similarly and they all reference the widget class, and you later decide you want to add a gradient or rounded corners to those widgets, you can easily do this in one class. CSS variables wouldn't help at all with this type of situation, and I don't really advocate adding a feature to a language to support bad use of the language (I know, this philosophy puts me at odds with PHP - goto, for example...).
-
Isn't that what CSS inheritance and multiple class assignment is for? If you have the same value in a bunch of different classes, chances are the problem is in your design rather than the design of CSS itself.
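As a sketch of that multiple-class approach (the class names here are made up for illustration):

```css
/* One base class carries the shared look; a modifier class carries only the differences. */
.widget {
    border: 1px solid #ccc;
    border-radius: 4px;   /* add rounded corners here once and every widget gets them */
}
.widget-featured {
    background: #ffffe0;  /* only what's different goes in the modifier */
}
```

An element then opts into both with multiple class assignment: <div class="widget widget-featured">.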
-
The founder's local to me and he pitched PHP Fog a few months ago at our PHP meetup. What you're saying about other hosting providers works for small-scale sites - for shared hosting, as long as you're not consuming more resources than a hosting provider's entire server, you can just go with a bigger package. PHP Fog is more geared towards larger-scale deployments that spike to more than a single server's resources - when you have those spikes, you can easily scale up to add more VPS instances, and requests will be load balanced across those apps. The majority of webhosts don't even offer this service, and no one else that I know of is currently offering a web interface to automatically do this deployment. As for the overall appeal of PHP Fog, I think its biggest shortcoming is the lack of easy database scaling. In your typical LAMP application, unless you're doing a lot of image or video manipulation, or your script has coding issues, you'll bottleneck at the database well before you bottleneck at the Apache/PHP side of things, so scaling across more Apache/PHP instances isn't going to get you any performance gains.
-
First off, wrong forum to be asking for bids. Secondly, you're making the same mistake you made the first time around. Take a note from cunoodle's post and first sit down and write out everything you think a registration should do, not just say it should be "fully comprehensive" and expect that to mean anything to anyone. And if you're really concerned about maintainability and security, expect to spend a fair amount of money sourcing a reputable developer/firm with references if you're expecting to totally trust their judgement with these aspects.
-
What Makes a Niche Website Profit Generating & How to Get Visitors?
xylex replied to chaseman's topic in Miscellaneous
Stop linking "going viral" to generating a profit. The two rarely have anything to do with each other. A great presentation about making a profitable dot-com -
-
Version control systems like SVN are good for managing development, but I don't really like having all the .svn files on a production box. I usually deploy with a bash script for simple deployments, and a deployment tool like ANT or Phing for anything more complex than a simple copy. If you have credit card data on your system, this process would violate at least 4 of the 12 requirements for PCI compliance, and probably more in practice. The same goes for any other regulated industry, like medical or financial.
-
Using a logout script (user has to click the logout link twice)
xylex replied to j.smith1981's topic in PHP Coding Help
You're doing the login check and outputting the logout link before you're logging the user out, so they are being logged out on the first click; the display just isn't reflecting that until the reload. Move the logout piece up to the top, right after the session_start() call.
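A minimal sketch of that ordering (the action parameter and session key are hypothetical stand-ins for yours):

```php
<?php
session_start();

// Handle the logout FIRST, before any login check or output.
if (isset($_GET['action']) && $_GET['action'] === 'logout') {
    $_SESSION = array();
    session_destroy();
}

// Only now decide what to display.
if (isset($_SESSION['user_id'])) {
    echo '<a href="?action=logout">Log out</a>';
} else {
    echo 'You are logged out.';
}
```
-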
A salt in this case would just be a random set of characters you append to the password before you do the sha1 hashing. The idea is that even if someone got the password value from the DB, whether they tried to brute force a short password or had a rainbow table of matches, they wouldn't be able to recover the original password because of the salt. Just make sure you use a long enough and random enough string to do this (i.e. don't make your salt 'salt', which I have seen). Two other security pieces for you: Add a call to session_regenerate_id() after your session_start() call; this will take care of session fixation attacks. And don't directly echo $username; echo htmlspecialchars($username, ENT_QUOTES); instead. Otherwise, you're putting an XSS vulnerability on your page.
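A rough sketch of all three points together (the salt value here is only an illustration; generate your own long random string per user):

```php
<?php
// 1) Salted hash: append a long random string to the password before hashing.
$salt = 'x9K#2mQv7!pR4zTw';             // hypothetical; store it alongside the hash
$password = 'secret123';
$hash = sha1($salt . $password);        // this is what goes in the DB

// 2) Session fixation: regenerate the session id right after starting the session.
session_start();
session_regenerate_id();

// 3) XSS: escape anything user-supplied before echoing it.
$username = '<script>alert(1)</script>';
echo htmlspecialchars($username, ENT_QUOTES);
```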
-
Twitter's proposed revenue model is a combination of promoted tweets that show up first in search results, and targeted advertising of tweets that show up in users' streams regardless of whether or not they searched for anything. The promoted tweets showing up first is basically the same model as AdWords, any of the other search model equivalents, or something like YouTube's promoted videos. The second, more invasive method can be compared to models like AdSense ads or embedded text links in email newsletters. Using those models as a comparison, you can calculate how big the potential advertiser pool is, how much advertisers will pay per impression, and how many impressions you will get with an established user base. The reason for the high valuation is the user base, since most numbers you would get from those models times Twitter's 250 million users leave you with a pretty big number.

Twitter also has an untapped potential revenue stream of embedding ads directly in tweets. The 140 character limit was originally designed to potentially leave the extra 28 characters in an SMS message for embedded ads, and their user agreement still gives them permission to do this. They haven't done this, or announced plans to, since there is a limit to how much change and intrusiveness your user base will take before they jump ship (think MySpace), but the potential is there, and that would be a huge amount of revenue with 100 million tweets going out each day. And compared to YouTube, the operating costs of Twitter are nothing - you're sending 140 characters down to someone instead of an HD video, so virtually any amount of revenue per tweet easily offsets the cost of delivering that tweet.
-
Two major factors that differentiate today's dot-coms from the 1999 dot-coms are public investment in the companies, and the demonstration of how to generate positive revenue from a dot-com. For it to be a bubble like the 1999 dot-com bubble, there has to be significant money attached to the overinflated values. In the previous bubble, the inflated values of companies were attached to IPOs that pushed those values up even higher, and those values were backed by the inflated stock price. In today's overvaluations, for the most part it's just a number on paper that some exec is tossing around - there's little or no actual money attached to that valuation, so if the real valuation is far less, no loss has taken place other than to the owners'/co-founders' bragging rights. As some of the bigger social media companies start going public, this might change a little for a handful of companies, but the 90's idea that attaching .com to your corporate name meant you could instantly raise millions in an IPO is long gone.

On the other point, in the first dot-com bubble, no one had demonstrated how to make a profit using the internet. So, like the Guardian article says, everything was measured some "new way." Today, there are hundreds if not thousands of profitable multi-million dollar tech companies, which gives you something to compare newer startups against, as well as a proven roadmap to go from something like a Twitter, with a billion users and little to no revenue, into a profitable internet monster. So yes, we're not valuing companies with the traditional annual profit / current interest rates formula, but there are models that can be used to value these companies with some degree of accuracy.
-
You summarize what you can produce in 3 sentences, then have a whole paragraph about how little you charge in which you start cheap, talk yourself down 25%, and then add in that you'll go even lower if someone asks you to. Trying to land clients definitely isn't all about price, but that seems to be the biggest focus of your site.
-
It'd be a bit of work to make the changes, and fsockopen() still blocks per connection so you'll have some delay there, but then all the servers can be polled at once. If you want to get really fancy, you can also just keep the script running and the sockets open, if your host and servers allow you to, which is how those bigger sites keep live data on hundreds of servers.

Basically, what you'd have to do is start by changing the rcon class to static so that you can pool the connections outside of it, and then make the methods take a $socket parameter that would replace the references to $this->_socket. In init(), after $this->_socket is created, make it non-blocking with stream_set_blocking(), and have init() return the socket object. That class also has two sets of send/receive methods that you use, Auth() and sendRconCommand(). You would need to split those into 4 methods, one each for sending and one each for receiving. I'll call them sendAuth()/receiveAuth() and sendRconCommand()/receiveRconCommand().

In your code, you would then loop through all the servers calling rcon::init(), and create an array of the objects' socket connections with the server IP and port as the key. I'll call this one $rconsSockets. You would loop through $rconsSockets and fire rcon::sendAuth() on each one; this will immediately send the auth command to all the servers. Use stream_select() and process those responses as they become available. If you were going to keep the script and those sockets alive, this next part would be in a while(true) loop or something like that. Repeat the process of sending the status command to all the servers, and then read and process the responses as they become available.
The code to do this would look something like:

    $numOfSockets = count($rconsSockets);
    foreach ($rconsSockets as $socket) {
        rcon::sendRconCommand("status", $socket);
    }

    $responses = array();
    $pending = $rconsSockets;
    while (count($responses) < $numOfSockets) {
        $read = $pending;          // stream_select() modifies this array in place
        $write = array();
        $except = array();
        if (stream_select($read, $write, $except, 5) === false) {
            break;                 // select error - bail out
        }
        foreach ($read as $stream) {
            $key = array_search($stream, $rconsSockets, true);
            $responses[$key] = rcon::receiveRconCommand($stream);
            unset($pending[array_search($stream, $pending, true)]);
        }
    }

What that code does is grab the responses as they become available from each server and throw them into that $responses array, which you can either use later or run the getServerInformation() processing on immediately. How that's set up will block until the slowest server responds, but so does the current code, and that does it sequentially. It's a bit of work, could probably be architected better than what I'm proposing, and would need some better error handling, but this gives you an idea of how to do what you're trying to do very quickly and efficiently. If you end up doing it, I'm sure the rcon community would appreciate it being kicked back to them as well.
-
What do you mean by socket connections? Are you actually connecting with the socket library, or are you hitting the servers via CURL or SOAP or something? And can you provide a little info about the exchange that's taking place and what you're doing with the response, so I can suggest how to do it in parallel?
-
Do you have a broken custom session handler, or can the script not write to the configured session directory? If you're on PHP 5.3 or greater, var_dump() the return value of session_start() - if it's false, something's blocking the session from being created.
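A quick way to check both possibilities:

```php
<?php
var_dump(session_save_path());   // is this directory writable by the web server?
var_dump(session_start());       // bool(false) means the session couldn't be created
```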
-
Why not pull from the servers in parallel and pull them down all at once?
-
You're calling session_start() after you're already using $_SESSION['id']. Move the session_start() line up to the top of your script.
-
On the question of efficiency between doing the join and the subquery - most modern SQL query planners are really good at figuring out what you're looking for, whether you're selecting from multiple tables, doing a join, or using a subquery. If you run an EXPLAIN for either kickstart's or jesirose's query, you'll likely (though not always) see that they both follow the same query plan, making them identical in performance. When the plans do differ, it's usually a toss-up whether the subquery or the join is more efficient. This wasn't always the case - earlier versions of MySQL isolated subqueries and came up with a separate plan, hence the oft-repeated advice to use a join instead of a subquery whenever possible.
-
Execute the insert query before you get the last insert id, not after.
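A sketch of the right order, using an in-memory SQLite database so it runs standalone (the table is made up; the same ordering applies with MySQL via mysql_insert_id() or PDO::lastInsertId()):

```php
<?php
// In-memory SQLite stand-in for the real database.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

$db->exec("INSERT INTO users (name) VALUES ('bob')");  // run the INSERT first...
$newId = $db->lastInsertId();                          // ...THEN read the new id
echo $newId;                                           // 1
```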
-
Hit it with this - http://php.net/manual/en/function.json-decode.php
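For example (the JSON payload here is made up):

```php
<?php
$json = '{"name":"widget","price":9.99,"tags":["a","b"]}';

$obj = json_decode($json);         // stdClass object by default
echo $obj->name;                   // widget

$arr = json_decode($json, true);   // pass true to get an associative array
echo $arr['tags'][0];              // a
```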