Local Community

Uh-oh! Unable to import site - Error message when using WP Engine import

Issue Summary

Hi, I am experiencing an issue when trying to import one of our sites from WP Engine using Local's 'Pull from WP Engine' functionality.

[Screenshot attached: Screenshot 2021-06-09 at 9.00.01 am]

Error message: "Error: Command failed: /resourcesPath/lightning-services/mysql-8.0.16+6/bin/darwin/bin/mysql local -B --disable-column-names -e SELECT option_value FROM wp_options WHERE option_name = 'siteurl'; ERROR 1146 (42S02) at line 1: Table 'local.wp_options' doesn't exist"

I have had a look at the forums and found a similar-ish issue here: SQL missing table error prevent importing site. However, in that instance the user was importing a site via the zip method, and in any case the links to the screenshots no longer work, so I am unable to see how it was resolved.

Troubleshooting Questions

  • Does this happen for all sites in Local, or just one in particular?

This is only happening for one particular site. The production version of the site imports correctly; however, the staging version results in the error above.


Replication is as simple as pulling the site from the WPE server.

System Details

  • Which version of Local is being used? 5.10.5+5403

  • What Operating System (OS) and OS version is being used? MacOS 11.4

  • Attach the Local Log. See this Community Forum post for instructions on how to do so:
Log attached: local-lightning.log (794.9 KB)

I’m not seeing anything else that seems odd in the Local Log, and that error is a generic one that can be caused by a few different underlying issues. The gist is that Local can’t find the wp_options table.

Some things you might try to zero in on what’s going on:

  1. If you manually download a backup of the remote site and drag-and-drop the zip onto Local, does that correctly import the site? If it does, then Local should be able to use Connect after this.

  2. Does this happen on an already connected site – for example, you’ve pulled the production site down and then pulled staging down? Is there any difference when pulling the staging environment down to a new site in Local?

  3. Since production pulls but staging doesn’t, is there something different about the staging db? The main thing that comes to mind is whether there are different table prefixes between staging and production.
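As a rough sketch of that third check: the prefix WordPress expects lives in wp-config.php, so you can compare it against the tables that actually got imported. The sample file below is illustrative, not taken from your site:

```shell
# Illustrative stand-in for the site's real wp-config.php; on a Local
# site this normally lives under the site's app/public folder.
cat > wp-config-sample.php <<'EOF'
<?php
$table_prefix = 'wp_';
EOF

# Extract the prefix WordPress expects (the text between the quotes).
prefix=$(grep "table_prefix" wp-config-sample.php | cut -d"'" -f2)
echo "expected prefix: $prefix"

# On a real site, compare against the tables Local actually imported
# (needs a running site's database, so commented out here):
# mysql local -e "SHOW TABLES LIKE '${prefix}options';"
```

If the prefix in wp-config.php doesn’t match the tables in the database, you’d see exactly this kind of "table doesn’t exist" error.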

Hi Ben,

  1. If I download a backup for the site from WPE and import it into Local, the site works absolutely fine. However, if you then connect that site to the WPE server and pull the same version down, it displays the error message.
  2. Correct, if production is currently pulled down and working, and I then pull staging, it will break the site/display the error.
  3. So far as I can see, no, there aren’t any differences, and as I stated in answer 1, if you use the zip method the site works fine, so if there were any differences in the db, surely the zip import wouldn’t work either.

I did reach out to WPE support before coming here and they did have a quick look at the db for me, but couldn’t see anything on their end, so recommended coming here.

That’s definitely odd!

I took a closer look at the original Local log, and a few dozen entries before the error we’ve been looking at, it mentions:

  "thread": "main",
  "class": "DevKitService",
  "message": "PHP Fatal error:  Allowed memory size of 1310720000 bytes exhausted (tried to allocate 1071644704 bytes) in phar:///usr/local/bin/wp/vendor/wp-cli/wp-cli/php/utils.php on line 546\n",
  "level": "warn",
  "timestamp": "2021-06-09T07:59:13.841Z"
  "thread": "main",
  "class": "DevKitService",
  "level": "info",
  "message": "Database downloaded to ./_wpeprivate/autoload.sql\n",
  "timestamp": "2021-06-09T07:59:14.192Z"

Note that PHP ran out of memory and then Local started to download the sql file. I wonder if there’s some sort of setting or plugin that is active on staging that is using too much memory, and as a result the sql dump is corrupted or incomplete?
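One way to test that "incomplete dump" theory: a mysqldump-style export normally ends with a "Dump completed" comment, so if the process died mid-export that trailer will be missing. A rough sketch, using a stand-in file in place of the real ./_wpeprivate/autoload.sql:

```shell
# Stand-in for the downloaded dump; on a real site, point tail at the
# actual exported .sql file instead.
cat > autoload-sample.sql <<'EOF'
-- MySQL dump 10.13
CREATE TABLE `wp_options` (`option_id` bigint unsigned NOT NULL);
-- Dump completed on 2021-06-09  7:59:14
EOF

# A dump that died mid-export is missing the trailing comment.
if tail -n 2 autoload-sample.sql | grep -q "Dump completed"; then
  echo "dump looks complete"
else
  echo "dump looks truncated"
fi
```

This assumes the export tool writes that trailer (mysqldump does); if WPE’s exporter formats its dumps differently, checking whether the file ends mid-statement is the equivalent test.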

As for why creating a backup and importing works and doing a pull doesn’t – my only theory would be that creating a backup on the remote system might be using something like mysqldump instead of PHP. Just a guess though.

A couple of things you might try:

  1. Temporarily increase the memory allocated to the staging site? I’m not sure how much control you have there. Also, if something is eating up more than a GB of memory, that’s not an ideal long-term solution.

  2. Try disabling any plugins that are only installed on the staging site before pulling.

  3. If you have access to the raw PHP logs for the remote site, there might be some additional logging info about what is eating up that much memory, which should help you zero in on what’s going wrong.

Hope that helps give a few extra pointers for things to look into!

Thanks for looking into this further. I had a look at the staging site on WPE this morning, and upon further investigation I believe this may be due to the overall database size for the staging site. For whatever reason, the db is being reported as 4.27 GB on staging, whereas the prod database is 830 MB.

I need to chat to the lead developer for the site in question to find out what is going on with the staging site for its db to be so large, and once that is resolved, I’d imagine the site should pull correctly.
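For anyone else who lands here: a rough way to see which table is bloating a dump, without a live database, is to total up the INSERT statements per table in the exported .sql file. The sample dump and table names below are made up for illustration:

```shell
# Made-up sample dump; run the awk against the real exported .sql file.
cat > dump-sample.sql <<'EOF'
INSERT INTO `wp_options` VALUES (1,'siteurl','https://example.test');
INSERT INTO `wp_actionscheduler_logs` VALUES (1,'a very long log entry');
INSERT INTO `wp_actionscheduler_logs` VALUES (2,'another long log entry');
EOF

# Sum the byte length of INSERT statements per table (the table name is
# the text between the first pair of backticks on each line), largest first.
awk -F'`' '/^INSERT INTO/ { bytes[$2] += length($0) }
           END { for (t in bytes) print bytes[t], t }' dump-sample.sql | sort -rn
```

Whichever table tops the list is where the 4 GB is likely hiding, which should give the lead developer a concrete place to start.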

I’ll close this ticket down as, based on your last answer, I believe this is a memory-related issue.



This topic was automatically closed 36 hours after the last reply. New replies are no longer allowed.