Monday, April 14, 2008
How To Hack A Windows XP Password
1. Start cmd.
2. Run 'net user': it will list the users/accounts of the computer, for example 'Administrator', 'cesar', etc.
3. Run 'net user' with the account name to reset that account's password.
4. Enter the new password.
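A rough sketch of the commands involved, assuming the standard Windows 'net user' syntax and an administrator command prompt ('cesar' is just a placeholder account name):

    rem List the local accounts on this machine
    net user

    rem Prompt for a new password for the chosen account
    net user cesar *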
Should MySQL and Web Server share the same box ?
Posted By peter On October 16, 2006 @ 6:09 am In lamp, production, tips | 7 Comments
This is an interesting question which I thought would be good to write about. There are obviously benefits and drawbacks to each approach.
Smaller applications usually start with a single server which runs both MySQL and the Web server. In that case there is not usually a question, but once the application grows larger and you need multiple servers, you have to decide whether to grow the system in MySQL+Apache pairs or to split MySQL and the Web server onto different boxes.
Generally, using separate boxes for MySQL and Web servers is good practice.
It is more secure - Compromising your web server does not directly give access to your database, even though most applications have enough database access permissions to allow an intruder to trash or dump data.
It is easier to analyze - Troubleshooting bottlenecks on shared boxes is more complicated than on systems running only MySQL or only a Web server. With dedicated boxes you already know which component is the troublemaker simply by looking at system-wide stats.
It is easier to maintain - Likewise, if a box runs multiple things it is harder to maintain, though I would not call the difference significant in this case.
It is easier to balance - Let's say you have a Web application and just added some new feature, e.g. a chat application, which increases load on your web servers but does not really affect the database part of the load. If you can operate the database farm and web server farm separately, you can simply increase the number of web servers.
It is less expensive - You typically want database boxes to be secure: good hardware with ECC memory to avoid database corruption, RAID to avoid losing the database to any single hard drive failure, etc. Database boxes also generally require more monitoring and maintenance, such as backups, so you end up using serious hardware for these boxes to keep their number manageable. With Web boxes it is different - you're quite OK using crappy hardware for them, as all you need is CPU power. If a box starts to misbehave it is easy to shut it down without affecting site operations. Also, you rarely get data corruption due to a web box memory failure; more likely you'll have web server crashes and that sort of thing. You can either clone web servers from a template hard drive or even have them boot diskless over NFS.
So if using dedicated boxes is so great, why think about sharing MySQL and the Web server at all? Well, mostly it is for the cheap guys.
In many applications you will find database servers to be IO bound, so the CPUs are doing virtually nothing and you're wasting resources. This is the reason some cheap environments put Web servers on the database boxes as well, perhaps handling only part of the load, etc.
I would however only use this as a last resort - placing some data crunching scripts on the database server is often a better use of its free CPU time.
The second thing you may feel bad about is Web server memory. Getting a certain amount of memory is pretty cheap, e.g. 4GB of memory per box costs very close to 2GB, while the jump from 16GB to 32GB may be much more expensive (even in price per GB).
So you can get Web boxes with plenty of memory relatively cheaply, but unless you're running 500 Apache children with mod_PHP/Perl/Python per box (which is probably a bad idea anyway), you will have memory to spare.
A good use for such extra memory is caching - Web page caching, if you do not have a separate layer for it, whether in local memory or in a shared caching layer (depending on your application needs), is a very good idea.
One more benefit of local access to MySQL is latency. This was a problem many years ago with 10Mbit networks, but with 1Gbit networks being commodity these days you should not worry too much about it, unless each page is generated by 1000+ queries, which is a bad idea already.
One case I should mention where shared MySQL and Web servers make sense is a Web Services architecture, where certain boxes provide some simple "Services" - these could be small enough to be a single shared box (or a pair of shared boxes for HA). In such cases I think of the Web server mainly as a provider of a different protocol to access your data - it is typically simple and does not require much CPU or other resources itself.
For example you can see this "shared" kind of architecture in CNET systems using the ATOMICS component to talk to MySQL over HTTP (not that I'm a great fan of this idea, though).
MySQL 4 to MySQL 5 Upgrade performance regressions
This week I have already had two serious performance regression cases when upgrading from MySQL 4.0 and 4.1 to MySQL 5.0. By serious I mean a several-times performance difference, not just the 5-10% you often see for simple queries due to generally fatter code.
The problem in both cases was the broken group commit bug in MySQL 5.0.
First I should note I am extremely unhappy with how MySQL handled this problem. While working for MySQL we spotted this problem early in the MySQL 5.0 release cycle, as it was introduced, and reported it to everyone we could inside the company - this was over 2 years ago. A few months later I created a bug report for this issue to get more public attention to the problem and give MySQL extra motivation to fix it. A few months later still I blogged about this problem with more performance results, but as we can see the bug is still in the Verified stage and there is no indication that any work is going to be done to fix it.
I can agree this may be a fundamental issue which is not easy to fix, but why is it not mentioned in the MySQL 4.1 to 5.0 upgrade notes?
Furthermore, if there were no good ideas on how to make XA work with group commit, why not keep the old working code path for when XA is disabled? Many customers do not flush the binary log anyway and use a single transactional storage engine, so they would not care.
Anyway, enough complaints. We have this problem and we have to live with it; most likely MySQL 5.0 and 5.1 will not get any fixes for it, so let's see who is affected, how to check whether you're affected, and how to fix it.
Who is affected? The good thing is that only the cheap guys who care about their data are typically affected: you have to have innodb_flush_log_at_trx_commit=1 so transactions are truly durable, you have to have log-bin enabled for replication or point in time recovery, but at the same time you have no hardware RAID, or have one without a battery backed up cache (BBU unit). I guess this is one of the reasons why this bug did not get much traction inside MySQL - paying customers would normally have enough money to get a BBU unit, which is great for performance anyway. Of course you also have to have plenty of concurrent transactions, so that group commit would trigger in MySQL 4.0, and a large number of transactions in total, so that serializing them makes MySQL unable to keep up. Disks can do 80-150 single page fsyncs per second, to give you an idea of the numbers.
How to spot whether you're affected? This one is interesting. If you have an update-heavy load you will see very strange behavior on MySQL 5.0: it is slow, yet only a few queries are "inside InnoDB" and the queue may even be empty. This is because the bottleneck is in the commit phase, which is not counted as "inside InnoDB". I wish there were a stat for the number of queries waiting to be committed, but there is no easily readable one. You can see it from other symptoms though. You will see queries in SHOW PROCESSLIST stuck in the "end" stage, or "commit" queries in the processlist for multi-statement transactions. Looking at SHOW INNODB STATUS you will notice a large number of log writes and fsyncs per second, matching your hard drive capacity, and you will normally see a single outstanding log write at all times. There are other ways to spot the problem as well, but these are probably the most obvious and easiest to use.
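The checks described above boil down to two statements (SHOW ENGINE INNODB STATUS is the longer spelling of SHOW INNODB STATUS):

    -- Look for statements stuck in the "end" stage, or long-lived "commit" entries
    -- from multi-statement transactions
    SHOW PROCESSLIST;

    -- In the LOG section, look for log writes/fsyncs per second close to what the
    -- disk can do, with a pending log write outstanding most of the time
    SHOW ENGINE INNODB STATUS;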
How to fix the problem? Assuming going back to MySQL 4.1 is not an option, you can do one of three things to get some of your performance back (a configuration sketch follows the list). XA support has its own overhead anyway, so you may not get the same performance as with MySQL 4.1.
- Disable binary logging. This gets group commit back, but obviously you lose point in time recovery and replication.
- Use innodb_flush_log_at_trx_commit=2. This is probably the best solution. In many cases this is a good change to make with MySQL 4 as well, because 100% durable transactions are often not required and it gives you some extra speed. It is often left at the default value without a good reason.
- Get a BBU. If you can't use either of the first two workarounds, you had better get a battery backed up cache unit and make sure you set your RAID cache policy to "write back". One of the customers I worked with did have a battery backed up cache on their system... it was just in the "write through" cache policy, so it was effectively disabled. Note that getting a BBU is often a good idea anyway, so you can use it together with the other workarounds. It is also worth mentioning that a BBU does not fix the problem; it dramatically raises the number of update transactions per second needed to trigger it. Without a BBU 200 per second may well be enough; with a BBU you may only see it at 2000 update transactions per second or so, which few people reach.
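For reference, the first two workarounds are plain my.cnf changes; this is only a sketch, and the "mysql-bin" log name shown is a placeholder:

    [mysqld]
    # Workaround 1: disabling binary logging (removing the log-bin line) gets group
    # commit back, at the cost of replication and point in time recovery
    # log-bin = mysql-bin

    # Workaround 2: flush the InnoDB log to the OS on every commit but fsync only
    # about once per second; a crash can lose roughly the last second of transactions
    innodb_flush_log_at_trx_commit = 2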
The other way to approach the problem is of course to work on the application - in a large number of cases the problem arises because there are way too many updates running outside of transactions in auto_commit mode. Wrap them in transactions and reduce the number of commits if you can; it is a great optimization anyway.
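As a tiny sketch of that idea (the table and column names are made up), turning a batch of auto-committed updates into a single transaction looks like this:

    -- Before: each statement is its own transaction, so each one waits for a log fsync
    UPDATE counters SET hits = hits + 1 WHERE page_id = 1;
    UPDATE counters SET hits = hits + 1 WHERE page_id = 2;

    -- After: one commit (and one fsync) for the whole batch
    BEGIN;
    UPDATE counters SET hits = hits + 1 WHERE page_id = 1;
    UPDATE counters SET hits = hits + 1 WHERE page_id = 2;
    COMMIT;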
Also, please do not read this post as saying MySQL 5.0 is junk in terms of performance and you should stay on MySQL 4.1 until MySQL takes it away from you, as already happened with MySQL 4.0. MySQL 5.0 can offer substantial performance improvements in a variety of cases and has other benefits. This is simply an important regression which you had better know about.
VOIP Advice
Posted By peter On September 10, 2007 @ 8:27 am In site | 9 Comments
As one of my last posts, about [1] issues with our hosting provider, got a great response and we received a lot of good advice and offers, I decided to ask for advice on another problem we have as we're growing [2] our company - organizing good phone communications.
Our goals are rather simple, though the fact that we're globally distributed may put us a bit outside typical small business needs.
Hosting Advice
Posted By peter On August 31, 2007 @ 5:36 pm In site
During the last year and a half we had a pretty good track record with MySQL Performance Blog - there were times when the site was slow (especially when a backup was running), but I do not remember significant downtime, until today, when we went down for a few hours.
All this time the site was running on a dedicated server which I rented from [1] APLUS about 3 years ago. It is a rather slow Celeron box with a single disk and 512MB of RAM running Fedora Core 2. Despite its age (it was a used "Value" server even when I got it) the server had a very good track record with basically zero failures during this time - there were some network disruptions at Aplus, but that is about all the problems we had.
Query Profiling with MySQL: Bypassing caches
Quite frequently I run into a question like this: "I'm using SQL_NO_CACHE but my query is still much faster the second time I run it - why is that?"
The answer to this question is simple - SQL_NO_CACHE only bypasses the query cache (a small example follows the list below); it has no effect on the other caches, which are:
MySQL Caches - The InnoDB Buffer Pool and Key Buffer are the best examples, though Falcon, PBXT and other storage engines have similar buffers. There is also the table cache, both the MySQL-side one and the internal InnoDB one, which can affect query execution speed.
OS Caches - Operating systems typically cache file IO unless you explicitly bypass it by using the O_DIRECT flag or mounting the file system in direct IO mode.
Hardware Caches - The state of the CPU cache may affect query execution speed, but only slightly; the hardware IO cache, however, can cause a dramatic difference. The hardware RAID cache is one, but more important are SAN caches, which can be pretty big.
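As a small example of the point above (the table name is made up), SQL_NO_CACHE only skips the query cache:

    -- The second run is still fast because the InnoDB buffer pool / key buffer,
    -- the OS page cache and any hardware caches are already warm
    SELECT SQL_NO_CACHE COUNT(*) FROM orders WHERE customer_id = 42;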
So How can you bypass these caches?
For MySQL caches, you can restart MySQL - this is the only way to clean all of them. You can do FLUSH TABLES to clean the MySQL table cache (but not InnoDB table metadata), or you can do "set global key_buffer_size=0; set global key_buffer_size=DEFAULT" to zero out the key buffer, but there is no way to clean the InnoDB Buffer Pool without a restart.
For OS caches on Linux you can use the drop caches control available in newer Linux kernels. You could also remount the file system in question, and the safest thing is of course to reboot.
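The drop caches control mentioned above is a /proc file (available since roughly Linux 2.6.16); a sketch, to be run as root:

    # Flush dirty pages first so they are not simply left in the cache
    sync
    # 1 = page cache, 2 = dentries and inodes, 3 = both
    echo 3 > /proc/sys/vm/drop_caches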
For hardware caches it is more hardware specific. Typically doing some IO will flush the caches, but you can't be sure, as you do not know exactly what policies they employ. For RAID hardware caches a reboot of the box is also enough; SAN caches, however, may survive longer - though few of us have a SAN available for performance benchmarking.
MySQL Performance - eliminating ORDER BY function
Posted By peter On October 17, 2007 @ 5:24 am In optimizer | 5 Comments
One of the first rules you learn about MySQL performance optimization is to avoid using functions on a column when comparing against constants or in ORDER BY. I.e. indexed_col=N is good; function(indexed_col)=N is bad, because MySQL will typically be unable to use an index on the column even if the function is very simple, such as an arithmetic operation. The same applies to ORDER BY, if you would like it to use the index for sorting. There are however some interesting exceptions.
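A minimal sketch of the rule (table, index and column names are made up; the exceptions mentioned are not shown here):

    -- The index on indexed_col can be used both for the range and for sorting
    SELECT * FROM t WHERE indexed_col > 10 ORDER BY indexed_col;

    -- Wrapping the column in even a trivial expression usually prevents index use,
    -- both for filtering and for sorting
    SELECT * FROM t WHERE indexed_col + 0 > 10 ORDER BY ABS(indexed_col);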
7 Comments To "Should MySQL and Web Server share the same box ?"
#1 Comment By John Latham On October 16, 2006 @ 7:58 am
Special case for shared box: circular multi-master replication topology, with each node running web & db, datasources point to localhost. This will (in principle) scale linearly with number of boxes, until propagation delays become problematic (but less of an issue if using sticky sessions). Useful for read-heavy apps.
#2 Comment By peter On October 16, 2006 @ 8:18 am
John,
Thank you for your comment.
It scales linearly only at first glance. In reality it has problems scaling writes (you mention it already), and second, as the database size grows the workload may change from CPU bound to IO bound, which slows things down dramatically.
This is not to mention conflicting updates and complicated failure recovery for circular replication.
In general I can only see it used when conflicting updates are not an issue and the application can't be made aware of a multi-server configuration.
#3 Comment By Michael On October 16, 2006 @ 2:43 pm
Do you have any comments on using VMWare to partition your web servers / databases as virtual machines on one or multiple (physical) boxes? To me, a separate physical web server on one box and the database on another is a better idea, but some people keep on recommending this to me.
#4 Comment By peter On October 17, 2006 @ 1:14 am
Michael,
I think using VMWare and other virtualization techniques is good for two cases - testing, and sharing the same server among different people (in which case not VMWare but other techniques should be used, of course).
Some people also use virtualization to ease cloning as well as moving a configuration to another server - I think that is easy enough to do the standard way.
Also, sharing in any way limits you to the resources of a single server - dedicated physical web and database boxes will surely have more power.
#5 Pingback By Zedomax Server Upgrade Complete! | zedomax.com - blog about DIYs and Review on reviews of gadgets and technologies… On February 22, 2007 @ 5:55 pm
[…] If you want to know about running a more efficient web server, check out this article on mysql and web server on different boxes. […]
#6 Pingback By smalls blogger » Blog Archive » links for 2007-07-12 On July 11, 2007 @ 6:11 pm
[…] MySQL Performance Blog » Should MySQL and Web Server share the same box ? Should MySQL and Web Server share the same box ? (tags: mysql apache server performance scaling web architecture php) […]
#7 Comment By Dedicated Hosting Provider On March 24, 2008 @ 10:27 am
I had an infrastructure class during my undergraduate studies and we used VMWare for the entire course. VMWare was very good for simulation of different types of issues but also had a lot of problems. It is hard to simulate real systems using virtual machines and virtual machines are very easily corrupted so you need to ensure you backup your information very frequently.