Showing posts with label Database.

innodb_file_per_table

Source: http://capttofu.livejournal.com/11791.html

In a previous post, I was trying to figure out the best way to switch from two large InnoDB tablespace files to using innodb_file_per_table, to take advantage of some of the benefits of this setting. I had one part of it solved: stop MySQL, add innodb_file_per_table to the my.cnf, restart, then perform a "no-op" alter of "ALTER TABLE t1 ENGINE=InnoDB", which causes the table to be re-created in its own .ibd file. The remaining problem was how to shrink the huge shared tablespace files back down after converting all the tables (in my case, from 10GB to 10MB).

Someone suggested a better way:

1. Alter all InnoDB tables to MyISAM
2. Stop the server
3. Add innodb_file_per_table to my.cnf
4. Change innodb_data_file_path to the new settings (10MB tablespaces) in my.cnf
5. Move all InnoDB files (logs, data) to a backup directory
6. Restart MySQL
7. Alter all the tables converted to MyISAM back to InnoDB

This works great! I think using ALTER rather than dumping and reloading the data is the fastest way, at least for me. A rough sketch of the whole procedure follows.
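
Here it is, assuming a single InnoDB table named t1 (illustrative only; a real run loops over every InnoDB table, and step 5 happens at the shell while the server is stopped):

-- step 1: convert each InnoDB table to MyISAM
ALTER TABLE t1 ENGINE=MyISAM;

-- steps 3 and 4: with the server stopped, edit my.cnf:
--   innodb_file_per_table
--   innodb_data_file_path = ibdata1:10M:autoextend

-- step 5 (shell): move the old ibdata* and ib_logfile* files to a backup directory

-- step 7: after restarting, convert back; each table is re-created in its own .ibd file
ALTER TABLE t1 ENGINE=InnoDB;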
0

InnoDB Tablespace Size

Source: http://optimmysql.blogspot.com/2007/07/innodb-tablespace-size.html

Unlike MyISAM, where the data for individual tables is stored in their respective files, InnoDB stores data in a tablespace. By default there is one single tablespace, and the data of all databases is stored in one file. This file contains the data dictionary, tables, and indexes. A global parameter, innodb_data_file_path, defines this tablespace file. Its syntax looks like ibdata1:256M:autoextend, meaning a file of 256 MB is created at the beginning, and whenever the data size exceeds it, the file is auto-extended. The innodb_autoextend_increment variable defines, in MB, how big each increment should be. A my.cnf sketch is shown below.
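
For concreteness, a minimal my.cnf sketch of these two settings (the sizes are example values only):

[mysqld]
# one shared tablespace file, starting at 256 MB and growing as needed
innodb_data_file_path = ibdata1:256M:autoextend
# extend the file 16 MB at a time
innodb_autoextend_increment = 16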

Let's see how well can we play around:

  1. Inserts: Suppose you have too many inserts and InnoDB is extending the file too frequently. It makes sense to increase the value of innodb_autoextend_increment. Say we increase it from 8 MB to 16 MB: the number of autoextend operations comes down by a factor of 2, and hence performance improves. But beware of taking it too easy and increasing the value too much; there is a big trap, and we will come to it shortly.
  2. Deletes: Here is the trap. You have a 10 GB tablespace (after many autoextends), you delete some 5 GB of data (data + indexes), and you assume the tablespace is now 5 GB. Wrong: InnoDB has no notion of giving space back to the file system, though it will make sure to reuse the freed space for further inserts. So deleting data cannot directly free disk space; if you have data you can get rid of, get rid of it before the next autoextend happens. One thing that can be done to reclaim space is to run OPTIMIZE TABLE frequently enough on tables that have a high volume of inserts and deletes. But remember, MySQL locks a table while OPTIMIZE TABLE is running. Another gotcha, right? OPTIMIZE TABLE does several other things that make it worth running, though not that frequently; I will be posting a blog on it soon.
  3. Separate Files per Table: InnoDB provides an option, the global variable innodb_file_per_table, to store each table's data (data + indexes) in a separate file. A shared tablespace is still created for the likes of the data dictionary, but this approach makes sense: having data in small chunks (separate files) makes it easier to manage and may improve performance in general.
  4. Fixed Tablespace size: One way to work around tablespace file size problems is to fix the tablespace size (remove autoextend) at an extrapolated value; see the my.cnf sketch after this list. When you hit the limit, you know it is time to clean up. This is not viable for every application, as extrapolation is not always possible, and it increases the complexity of the application, which then needs to handle all such error conditions without losing data.
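
A my.cnf sketch of the fixed-size variant from point 4, with autoextend removed (the two 5 GB files stand in for whatever limit you extrapolate):

[mysqld]
# fixed total size: no autoextend, so hitting the limit is the signal to clean up
innodb_data_file_path = ibdata1:5120M;ibdata2:5120M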

So, where does this end? You need to figure out what your data is, how critical it is, what you want to do with it, and what you want your data to do for you. Then take some of the following steps.

  1. Move to MyISAM: For tables (or even whole databases) whose data is not critical enough to need transactions et al, move them to MyISAM. For the problem we can't solve completely, we destroy the problem.
  2. Separate Tablespaces: It's a lot easier to maintain 10 small problems than a single big one.
  3. Delete data/OPTIMIZE TABLE: Figure out how soon you can get rid of data. You don't actually need to delete it outright: transfer it to a MyISAM table, compress the file, archive it somewhere else, and then delete the rows from the main table (a sketch follows this list). There are many ways to do it. Run OPTIMIZE TABLE frequently enough, but in a way that doesn't disturb your reads and writes too much or take too long to run.
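
A rough SQL sketch of the archive-then-delete idea from step 3, assuming a hypothetical InnoDB table logs with a created_at column (the names and cutoff date are illustrative; compressing and archiving the resulting MyISAM files happens at the shell):

-- copy the old rows into a MyISAM table whose files can be compressed and archived
CREATE TABLE logs_archive ENGINE=MyISAM
  SELECT * FROM logs WHERE created_at < '2007-01-01';

-- remove the same rows from the main table, then compact it
DELETE FROM logs WHERE created_at < '2007-01-01';
OPTIMIZE TABLE logs;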

Hope this post was helpful to you. Keep posting your comments.


LIFO Alarm Processing

I'm currently building alarm processing to support LIFO data transactions. This means alarms are recorded regardless of the time they were sent. So every time data is sent to the server, we need to process it and compare the received data against the alarms already in the database.

It took quite a bit of thinking, but I've found a way. I'll update this post later with how I did it.

A week later ....

I've finished building this LIFO alarm processing.

Say we have an alarm table with the following fields:
  • alarmid (auto increment)
  • alarmcode
  • alarmdatetime
  • rtuid
  • alarmofftime
The processing flow goes like this:
We take one alarm from before and one from after the time the data was received.

Before  After  Offtime set?  Current = alarm?  Result
X       X      X             1                 Create a new alarm
X       1      X             1                 Update the after-alarm: alarmdatetime = current
X       1      1             1                 Update the after-alarm: alarmdatetime = current
1       X      X             X                 Update the before-alarm: alarmdatetime = current
1       X      1             X                 Update the before-alarm; if current > offtime, create a new alarm
1       1      1             X                 Update the before-alarm: offtime = current, if current < offtime
1       1      1             1                 Update the after-alarm: alarmdatetime = current
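
A minimal SQL sketch of the "one alarm before, one after" lookup, assuming the alarm table above and an incoming event whose RTU, code, and receive time are in the variables @rtuid, @alarmcode, and @current (the matching columns are my assumption about how alarms are correlated):

-- nearest alarm recorded before the received time
SELECT * FROM alarm
WHERE rtuid = @rtuid AND alarmcode = @alarmcode
  AND alarmdatetime <= @current
ORDER BY alarmdatetime DESC LIMIT 1;

-- nearest alarm recorded after the received time
SELECT * FROM alarm
WHERE rtuid = @rtuid AND alarmcode = @alarmcode
  AND alarmdatetime > @current
ORDER BY alarmdatetime ASC LIMIT 1;

The decision table above is then applied to these two rows, together with whether an offtime is set, to decide whether to create a new alarm or update the before- or after-alarm.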

MySQL multi keywords search with priority

Someone asked me about searching names using more than one keyword, with the results sorted according to the priority of those keywords.

Say we search for the name manaf osman.

The names that should come out:

manaf bin osman
manaf abdul osman
manaf gazali osman
manaf hashim osman
abdul manaf
daud manaf
osman manf

So I tried to build a query to sort that search. The idea:

1. Find the location where the match occurs for each keyword.
2. If the location is 0 (not found), assign it the highest value.
3. Left-pad each result with zeros; I use a maximum of 7 digits.
4. CONCAT all the results.
5. CAST the CONCATed result to an integer.
6. Sort by the value from step 5.

Here is an example of the query I made.

SELECT * FROM (
  -- steps 3-5: CAST the CONCATed, zero-padded positions to an integer
  SELECT CAST(CONCAT(LPAD(BBPos1,7,'0'),LPAD(BBPos2,7,'0')) AS UNSIGNED) AS Position, CC.* FROM (
    -- step 2: a location of 0 (not found) becomes the highest value
    SELECT IF(Pos1=0,999999,Pos1) AS BBPos1, IF(Pos2=0,999999,Pos2) AS BBPos2, BB.* FROM (
      -- step 1: find where each keyword occurs in the full name
      SELECT LOCATE('mohd',namapenuh) AS Pos1, LOCATE('HJ',namapenuh) AS Pos2, AA.* FROM (
        SELECT * FROM ahli WHERE namapenuh LIKE '%mohd%'
        UNION
        SELECT * FROM ahli WHERE namapenuh LIKE '%HJ%'
      ) AS AA
    ) AS BB
  ) AS CC
-- step 6: sort by the combined position, then by name
) AS DD ORDER BY Position, namapenuh

The math and logic here need to be pretty strong. Like a ninja.

Nin.. Nin..
Hemo... Hemo..

PHP Web Service, .NET Remote MySQL Dataset

My first open source project on Google Code.
Most web hosts do not allow external MySQL connections, as a security policy. Usually you manage the database through phpMyAdmin, but once the database gets big, dumping the data takes forever, made worse by a slow internet connection.

So I created a web service in PHP that returns data as a .NET dataset, to make it easier for me to process the data. I hope this project is useful to other people too.

Nin...nin..

http://code.google.com/p/phpremotemysqldataset/downloads/list

SQLite for ADO.NET 2.0

While I was relaxing, I remembered I wanted to look for a library for developing a PDA system using SQLite as the database. I have used SQLite before for a desktop application, and it is pretty good: great as a small embedded database, and it can even be used from a CD. This library also includes binaries for the Compact Framework.

Here is the link to its website.
http://sourceforge.net/project/showfiles.php?group_id=132486
 
Copyright © peyotest