MySQL query performance limit?


We have a home-grown document management system, and it is running slow, particularly on search. It worked fine at first, but it has gotten progressively slower over time, now taking anywhere from 30 to 150 seconds to return results depending on the criteria. Below is our search query. We've been staring at the thing left and right and can't see anywhere else to tune it. All of the joined fields are indexed on their respective tables.

select distinct f.*, ts.*, fo.*, ft.*, p.*, u.*, c.*, co.*, ct.*, fs.*, fd.*, r.*, rt.*, si.*, s.*
from (
    select distinct f.*
    from files f
    join folders fo on (fo.id = f.belongs_to_folder_id)
    join projects p on (p.id = f.belongs_to_project_id)
    left outer join file_statuses fs on (fs.id = f.file_status_id)
    left outer join submittal_items_files sif on (sif.file_id = f.id)
    left outer join submittal_items si on (si.id = sif.submittal_item_id)
    left outer join submittals s on (s.id = si.belongs_to_submittal_id)
    left outer join record_types rt on (rt.id = f.record_type_id)
    left outer join companies co on (co.id = f.company_id)
    left join folders_actions_groups ag on (
        f.belongs_to_folder_id = ag.folder_id
        and ag.action_id = 10010
    )
    left join files_actions_groups fg on (fg.file_id = f.id)
    join users_groups ug on (
        (ug.group_id = ag.group_id or ug.group_id = fg.group_id)
        and ug.user_id = 411
    )
    where (
        (f.file_generated_name like concat('%', 'the', '%')) or
        (f.record_id like concat('%', 'the', '%')) or
        (f.record_title like concat('%', 'the', '%')) or
        (f.additional_info like concat('%', 'the', '%')) or
        (si.item_number like concat('%', 'the', '%')) or
        (s.element_number like concat('%', 'the', '%'))
    )
    and f.path like concat('some text', '%')
    and f.file_status_id = 3
    and f.file_revision = 1
    and f.discipline_id = 1
    and f.record_type_id = 2
    and f.triage_status_id = 2
    and f.deleted = 0
    order by f.created desc, f.id desc
    limit 100
) f
left outer join users u on (f.created_by_user_id = u.id)
left outer join contacts c on (c.user_id = u.id)
left outer join companies co on (co.id = f.company_id)
left outer join company_types ct on (ct.id = co.company_type_id)
join triage_statuses ts on (f.triage_status_id = ts.id)
join folders fo on (fo.id = f.belongs_to_folder_id)
join folder_types ft on (ft.id = fo.folder_type_id)
join projects p on (p.id = f.belongs_to_project_id)
left outer join file_statuses fs on (fs.id = f.file_status_id)
left outer join file_disciplines fd on (fd.id = f.discipline_id)
left outer join revisions r on (r.id = f.file_revision)
left outer join record_types rt on (rt.id = f.record_type_id)
left outer join submittal_items_files sif on (sif.file_id = f.id)
left outer join submittal_items si on (si.id = sif.submittal_item_id)
left outer join submittals s on (s.id = si.belongs_to_submittal_id)
left outer join files_actions_groups ffg on (ffg.file_id = f.id)
left outer join groups g on (g.id = ffg.group_id)
order by f.created desc, f.id desc

This might be an obvious answer, but have you indexed the database? If you're new to indexing, here's a pretty good rule of thumb: put a unique index on columns named "id", such as folders.id or projects.id, and put a standard index on columns that reference a foreign id, such as folder.belongs_to_folder_id or folder.record_type_id.
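
As a sketch, using the table and column names from the posted query (the index names are placeholders, and if the id columns are already primary keys the unique indexes are redundant), that rule of thumb might translate to something like:

-- unique indexes on the "id" columns (skip if they are already primary keys)
alter table folders  add unique index idx_folders_id (id);
alter table projects add unique index idx_projects_id (id);

-- standard (non-unique) indexes on the foreign-id columns used in the joins
alter table files add index idx_files_belongs_to_folder_id (belongs_to_folder_id);
alter table files add index idx_files_record_type_id (record_type_id);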

Another thing to change and try: select only the columns you actually use, rather than the huge list of f.*, ts.*, fo.*, ft.*, p.*, u.*, c.*, co.*, ct.*, fs.*, etc.
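
For illustration, a trimmed-down select might look like the sketch below; the column list is a placeholder (p.name in particular is assumed, not taken from the posted query), since only the application knows which fields it really reads:

-- hypothetical column list; keep only what the result set really needs
select f.id,
       f.file_generated_name,
       f.record_title,
       f.created,
       p.name as project_name
from files f
join projects p on (p.id = f.belongs_to_project_id)
where f.deleted = 0
order by f.created desc
limit 100;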

You also have tons of joins, which are expensive in terms of processing time. Do you really need all of the joined tables?
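
One way to answer that is to run EXPLAIN against the query (a trimmed fragment is sketched below, not the full statement) and look at which joins touch the most rows; joins that neither filter the result nor supply selected columns are candidates to drop:

-- EXPLAIN shows the join order, index usage, and estimated rows per table
explain
select f.id, f.created
from files f
join folders fo on (fo.id = f.belongs_to_folder_id)
left outer join file_statuses fs on (fs.id = f.file_status_id)
where f.deleted = 0
order by f.created desc
limit 100;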

