What you describe is a common problem set, typically encountered in data mart/data warehouse building, and it even has an industry-standard acronym: ETL (Extract, Transform, Load).
Although there are ETL products available out there, if this is a small-scale project they may not be economically or logistically viable for you; at least you know they exist and can google and investigate some. Note, though, that one of the functions of an ETL tool is to fix/transform/conform data as it is loaded, and you already have all the data loaded into your RDBMS tables.
Depending on the database involved, ETL processes can be written in a number of ways. Typically people use stored procedures/triggers and/or procedural ETL scripts in the language of their choice. What you do depends very much on the quantity of data involved and how much work it takes to get it into the normalized structure you want.
For example, many years ago I worked on a project where we needed to load a large amount of hierarchical data into a relational database. We started by loading it into tables that matched the incoming data, and then wrote stored procedures that transformed the data and loaded it into our relational structure. Logically it worked, but it was far too slow, due to the transactional overhead in the database and the cost of looking up related tables as new rows were inserted.
We solved this by writing a Perl script that took the data and used gobs of memory via Perl's associative arrays. We preloaded all the lookup tables into these arrays, did all the relating in Perl rather than via SQL queries, and saved the transformed rows to disk. We then bulk-copied all the data into the destination tables, which bypassed the need for transactions. A process that could not be completed in several weeks was accomplished in several hours through this Perl-based processing, saving to files, and bulk-copy import.
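The same idea translates to any scripting language. Here's a minimal sketch in Python (the table names, keys, and file format are hypothetical, just to illustrate the pattern): preload the lookup data into a dictionary, resolve foreign keys in memory instead of per-row queries, and write a flat file for your database's bulk loader.

```python
import csv

# Hypothetical lookup table, preloaded once into a dict (the equivalent
# of the Perl associative arrays): natural key -> surrogate key.
# In a real job this would be read from the database at startup.
customer_ids = {"ACME": 1, "Initech": 2}

# Hypothetical raw rows as loaded from the source system.
raw_rows = [
    {"customer": "ACME", "amount": "19.99"},
    {"customer": "Initech", "amount": "5.00"},
]

# Do all the relating in memory: each foreign key is resolved by a dict
# lookup instead of a per-row query against the database.
transformed = [
    {"customer_id": customer_ids[r["customer"]], "amount": r["amount"]}
    for r in raw_rows
]

# Save the transformed rows to a flat file suitable for a bulk loader
# (e.g. LOAD DATA INFILE or COPY), bypassing per-row transactions.
with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_id", "amount"])
    writer.writeheader()
    writer.writerows(transformed)
```

The trade-off is exactly the one described above: you spend memory (the lookup dicts) to avoid the per-row query and transaction cost.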
I can only give you a general answer, but I would start with what the database can do natively (stored procedures), with the caveat that the volume of data can be a limiter. You can also look at PHP, Ruby, Python, or whatever other scripting language you like; most of these are well suited to this type of project and have similar features.
Obviously you need to build in scheduling for this activity, assuming it happens on a recurring basis.
That's about all the general advice I can think of, off the top of my head.
You set up a proxy server on some machine in your house?
The one you used is PHPProxy, a really old and dead project that implemented an HTTP proxy server and hasn't been updated in 8 years. For some reason you did *something* to Apache, although why you did whatever you did is unclear.
People use proxy servers for a few reasons:
To cache commonly accessed "cache friendly" content in order to save on bandwidth
To control access to the internet, for administrative or security reasons
To circumvent the blocking of an existing web filter
If you're running this on your home network, there's no potential for circumvention, and it's unclear why you would want to circumvent anything. If this is running on a server on the public internet -- well, you didn't provide any information about the problem, the goal, or your network topology.
It's not clear from your post what you are using a proxy server for or why you want to use one.
The sites you mention may not be proxy-friendly, the really old proxy server may have issues with certain sites and techniques, or you may have misconfiguration issues.
There is no magic "fix proxy server" button. You need to look at things like logs, and debug from the client using a tool like Firebug.
You state you're interested in computer science. If that's the case, you need to start to learn how to debug problems, and learning how to investigate server processes through logs or debugging tools is an important step in that pursuit.
Yes, there is a simple way, and it's called a "join" -- "joining" the tables together.
There are several syntaxes you can use to do the same thing. In this case, all your joins are going to be inner joins (there's no such thing as a "left inner" join; LEFT makes a join an outer join), so what you want is to use INNER joins from your user_student table back to user and student, respectively.
SELECT u.name AS user_name, s.name AS student_name
FROM user_student us
INNER JOIN user u ON u.id = us.user_id
INNER JOIN student s ON s.id = us.student_id
Someone created a nice site that really breaks down and makes clear the syntax and the different variations you can use: http://mysqljoin.com/
They're attempting a remote-execution exploit, basically to see if you are vulnerable. If the exploit worked, your server would run the code hosted at that mako dot com dot ua server. That code doesn't do anything other than echo a string, but I'm guessing their spider would then go on to attempt further exploits, should it actually execute.