We have just got an Amazon EC2 instance up and running and have already deployed our web application on it; the app is built with the Laravel 5.0 framework. However, we are facing some performance issues and need some suggestions/input. We have an AWS RDS instance, and our table structure is as below:
CREATE TABLE IF NOT EXISTS `adwords_data` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`client_id` bigint(20) NOT NULL,
`date_preset` enum('YESTERDAY','LAST_7_DAYS','LAST_14_DAYS','LAST_30_DAYS','ALL_TIME','7_DAYS_PRIOR_YESTERDAY','30_DAYS_PRIOR_YESTERDAY','60_DAYS_PRIOR_YESTERDAY','90_DAYS_PRIOR_YESTERDAY') COLLATE utf8_unicode_ci NOT NULL,
`CampaignId` bigint(20) NOT NULL,
`CampaignName` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`AdgroupId` bigint(20) NOT NULL,
`AdgroupName` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`Clicks` int(11) NOT NULL,
`Impressions` int(11) NOT NULL,
`ConvertedClicks` int(11) NOT NULL,
`Conversions` int(11) NOT NULL,
`Cost` decimal(10,2) NOT NULL,
`AveragePosition` decimal(10,2) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `adwords_data_client_id_index` (`client_id`),
KEY `adwords_data_date_preset_index` (`date_preset`),
KEY `adwords_data_campaignid_index` (`CampaignId`),
KEY `adwords_data_adgroupid_index` (`AdgroupId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1 ;
Like the above, I have 6 more tables, but their fields are a bit different.
Here is how my.cnf looks:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://ift.tt/1l3nI6x
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
loose-local-infile = 1
port = 3306
socket = /var/run/mysqld/mysqld.sock
# Here is entries for some specific programs
# The following values assume you have at least 32M ram
# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
#
# * Basic Settings
#
local-infile = 1
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
lower_case_table_names=1
table_open_cache=512
query_alloc_block_size=16384
thread_concurrency=8
join_buffer_size=2M
read_buffer_size=4M
tmp_table_size=1024M
max_heap_table_size=1024M # added
innodb_lock_wait_timeout=50
#table_cache=4960
query_cache_size=99M # changed from 1024
query_cache_type=ON # added
myisam_sort_buffer_size=64M
symbolic-links=0
thread_cache_size=16
read_rnd_buffer_size=32M
query_prealloc_size=16384
key_buffer_size=1024M
sort_buffer_size=2M
tmpdir="/tmp"
ft_min_word_len=3
default-storage-engine=MyISAM
innodb_file_per_table=1
innodb_flush_log_at_trx_commit=2
max_allowed_packet=16M
open_files_limit=49996
user = mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 3000
#table_cache = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
local-infile = 1
#no-auto-rehash # faster start of mysql but no tab completition
[isamchk]
key_buffer = 16M
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
Our application makes calls to the Google AdWords API for different date ranges and stores the returned data in a CSV file. We process this CSV file and store the data in a MySQL table (around 200,000 records; a rough sketch of this import step is shown below). We then need to process these 200,000 records: we scan each of the 200K records and apply our business logic (our internal logic, which mainly means applying MySQL aggregations such as SUM, AVG, etc.). Processing these 200K records currently takes around 2 hours, and we need to reduce this processing time.
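For reference, the import step from CSV into MySQL is essentially a LOAD DATA LOCAL INFILE (which is what local-infile is enabled for in the my.cnf above). The snippet below is only a sketch: the file path, column order and field/line terminators are illustrative, not our exact code, and LOAD DATA LOCAL also requires the PDO MYSQL_ATTR_LOCAL_INFILE option on the connection.
// Rough sketch of the CSV import step (path and column order are assumptions).
$csvPath = storage_path('app/adwords_report.csv'); // hypothetical location

\DB::statement("
    LOAD DATA LOCAL INFILE '{$csvPath}'
    INTO TABLE adwords_data
    FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
    (client_id, date_preset, CampaignId, CampaignName, AdgroupId, AdgroupName,
     Clicks, Impressions, ConvertedClicks, Conversions, Cost, AveragePosition)
");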
Below is the code we are using to process these 200K records. We have used Laravel's chunk method to process the data in batches:
\DB::table('adwords_data')
    ->select('CampaignId', 'AdgroupId')
    ->where('client_id', '=', $client_id)
    ->groupBy('CampaignId', 'AdgroupId')
    ->chunk(500, function ($adwords_data) {
        foreach ($adwords_data as $data) {
            // process each AdgroupId by checking the summation of its conversion stats and applying our conditions
        }
    });
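Inside that callback, the per-AdgroupId processing boils down to aggregation queries along the following lines. This is a simplified sketch, not our exact code: $client_id is assumed to be captured by the closure via use(), and the final condition is a placeholder for our real business rules.
foreach ($adwords_data as $data) {
    // Aggregate the conversion stats for this campaign/ad group combination.
    $stats = \DB::table('adwords_data')
        ->where('client_id', $client_id) // assumed captured via use($client_id)
        ->where('CampaignId', $data->CampaignId)
        ->where('AdgroupId', $data->AdgroupId)
        ->selectRaw('SUM(Conversions) AS total_conversions, SUM(Cost) AS total_cost, AVG(AveragePosition) AS avg_position')
        ->first();

    // Placeholder condition: the real checks are our internal business logic.
    if ($stats->total_conversions > 0) {
        // ... act on this AdgroupId
    }
}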
After completing this process for the adwords_data table, we repeat the same process on the other 6 tables. The performance is still not good enough, and we need to make the execution faster.
Can anyone suggest the best way to improve the performance?
Thanks!