We are working on a reporting application that connects to many external APIs such as AdWords, Bing, Yahoo, and Facebook.
Table 1: adwords_data
Similarly, we have separate tables for storing the Bing/Facebook/Yahoo API data, and the storage engine for all of these tables is MyISAM.
Our application automatically fetches the API data every hour and updates all of these tables. We receive the data as CSV and import it using the LOAD DATA LOCAL INFILE statement. The problem we are facing is high CPU consumption, because the data for all clients is inserted into and read from one table only.

We use beanstalkd queues so that the data is loaded in parallel for all clients. For example, if we have 100 clients, then every hour the system adds 100 jobs to the queue, and each job makes an API call and inserts data into the adwords_data table. While these updates are in progress, the CPU consumption of the RDS instance regularly goes above 70%, and we need a way to bring it down.
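For reference, the import step in each job looks roughly like this (a minimal sketch; the file path and column list are illustrative placeholders, since the exact schema is not shown above):

    -- Sketch of the hourly import step; the CSV path and column names
    -- are illustrative placeholders, not our exact schema.
    LOAD DATA LOCAL INFILE '/tmp/adwords_client_1.csv'
    INTO TABLE adwords_data
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES  -- skip the CSV header row
    (reporting_client_id, campaign_id, report_date, impressions, clicks, cost);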
So I need some suggestions: what is the recommended approach?

I am thinking of two solutions:
1) Create a separate database for each client

The idea is to create a separate database for each client, keyed by client id. For client id 1, for example, we would create a dedicated database and store that client's API data there only. That way the pressure is not on a single table; instead, insert/read operations are spread across each client's own database table.
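A rough sketch of what option 1 would look like (the database, table, and column names below are illustrative, since the actual schema is not shown above):

    -- Option 1 sketch: one database per client; all names and columns
    -- are illustrative assumptions, not our exact schema.
    CREATE DATABASE IF NOT EXISTS client_1;

    CREATE TABLE client_1.adwords_data (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT,
      campaign_id BIGINT UNSIGNED NOT NULL,
      report_date DATE NOT NULL,
      impressions INT UNSIGNED NOT NULL DEFAULT 0,
      clicks INT UNSIGNED NOT NULL DEFAULT 0,
      cost DECIMAL(12,2) NOT NULL DEFAULT 0,
      PRIMARY KEY (id),
      KEY idx_report_date (report_date)
    ) ENGINE=MyISAM;

    -- Each hourly job would then load into its own client's database:
    -- LOAD DATA LOCAL INFILE ... INTO TABLE client_1.adwords_data ...

With this layout the per-client table would no longer need a reporting_client_id column, since the database name itself identifies the client.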
2) Create partitions

The second idea is to partition the current schema, i.e. partition the table by HASH of reporting_client_id.
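A sketch of the DDL for option 2 (the partition count of 16 is an arbitrary example; note that MySQL requires the partitioning column to be part of every unique key on the table, including the primary key):

    -- Option 2 sketch: hash-partition the existing table on the client id.
    -- 16 partitions is an arbitrary example, not a recommendation; any
    -- PRIMARY/UNIQUE key on the table must include reporting_client_id.
    ALTER TABLE adwords_data
        PARTITION BY HASH (reporting_client_id)
        PARTITIONS 16;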
Please suggest!