      Understanding and Working with Files in Laravel

      Introduction

File uploads are one of the most commonly used features on the web. From uploading avatars and family pictures to sending documents via email, we can’t do without files on the web.

In today’s article, we will cover all the ways to handle files in Laravel. After reading the article, if we left something out, please let us know in the comments and we’ll update the post accordingly.

      Handling files is another thing Laravel has simplified in its ecosystem. Before we get started, we’ll need a few things. First, a Laravel project. There are a few ways to create a new Laravel project, but let’s stick to composer for now.

      1. composer create-project --prefer-dist laravel/laravel files

Where files is the name of our project. After installing the app, we’ll need a few packages installed, so let’s get them out of the way. Note that these packages are only necessary if you intend to save images to Amazon S3 or manipulate images (cropping, filters, etc.).

      1. composer require league/flysystem-aws-s3-v3:~1.0 intervention/image:~2.4

      After installing the dependencies, the final one is Mailtrap. Mailtrap is a fake SMTP server for development teams to test, view, and share emails sent from the development and staging environments without spamming real customers. So head over to Mailtrap and create a new inbox for testing.

      Then, in welcome.blade.php update the head tag to:

      <meta charset="utf-8">
      <meta http-equiv="X-UA-Compatible" content="IE=edge">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <title>File uploads</title>
      <style>
        * {
          font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
              "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji",
              "Segoe UI Emoji", "Segoe UI Symbol";
        }
      </style>
      

      Modify the body contents to:

      <form action="/process" enctype="multipart/form-data" method="POST">
          <p>
              <label for="photo">
                  <input type="file" name="photo" id="photo">
              </label>
          </p>
          <button>Upload</button>
          {{ csrf_field() }}
      </form>
      

      For the file upload form, the enctype="multipart/form-data" and method="POST" are extremely important as the browser will know how to properly format the request. {{ csrf_field() }} is Laravel specific and will generate a hidden input field with a token that Laravel can use to verify the form submission is legit.

If the CSRF token does not exist on the page, Laravel will show a “The page has expired due to inactivity” page.

      Now that we have our dependencies out of the way, let’s get started.

Development, as we know it in 2018, is growing fast, and in most cases there are many solutions to one problem. Take file hosting, for example: we now have a sheer number of options to store files, ranging from self-hosted to FTP to cloud storage to GFS and many others.

Since Laravel is a framework that encourages flexibility, it has a native way to handle the many file structures. Be it local, Amazon S3, or Google Cloud, Laravel has you covered.

Laravel’s solution to this problem is to call them disks. It makes sense: any file storage system you can think of can be labeled as a disk in Laravel. In this regard, Laravel comes with native support for some providers (disks): local, public, s3, Rackspace, FTP, etc. All this is possible because of Flysystem.

If you open config/filesystems.php you’ll see the available disks and their respective configuration.

      From the introduction section above, we have a form with a file input ready to be processed. We can see that the form is pointed to /process. In routes/web.php, we define a new POST /process route.

      use Illuminate\Http\Request;
      
      Route::post('process', function (Request $request) {
          $path = $request->file('photo')->store('photos');
      
          dd($path);
      });
      

      What the above code does is grab the photo field from the request and save it to the photos folder. dd() is a Laravel function that kills the running script and dumps the argument to the page. For me, the file was saved to photos/3hcX8yrOs2NYhpadt4Eacq4TFtpVYUCw6VTRJhfn.png. To find this file on the file system, navigate to storage/app and you’ll find the uploaded file.

      If you don’t like the default naming pattern provided by Laravel, you can provide yours using the storeAs method.

      Route::post('process', function (Request $request) {
          
          $file = $request->file('photo');
          
          
          $filename = 'profile-photo-' . time() . '.' . $file->getClientOriginalExtension();
          
          
          $path = $file->storeAs('photos', $filename);
          
          dd($path);
      });
      

      After running the above code, I got photos/profile-photo-1517311378.png.

      In config/filesystems.php you can see the disks local and public defined. By default, Laravel uses the local disk configuration. The major difference between local and the public disk is that local is private and cannot be accessed from the browser while public can be accessed from the browser.

      Since the public disk is in storage/app/public and Laravel’s server root is in public you need to link storage/app/public to Laravel’s public folder. We can do that with our trusty artisan by running php artisan storage:link.
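
As a quick sketch of what that enables (the route name here is illustrative, not from the original app), you can store a file on the public disk and hand back a browser-accessible URL:

use Illuminate\Http\Request;

Route::post('process-public', function (Request $request) {
    // Store on the "public" disk instead of the default "local" disk.
    $path = $request->file('photo')->store('photos', 'public');

    // After running `php artisan storage:link`, the file is reachable under /storage.
    return asset('storage/' . $path);
});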

      Since Laravel doesn’t provide a function to upload multiple files, we need to do that ourselves. It’s not much different from what we’ve been doing so far, we just need a loop.

      First, let’s update our file upload input to accept multiple files.

      <input type="file" name="photos[]" id="photo" multiple>
      

When we try to process this, $request->file('photos') is now an array of UploadedFile instances, so we need to loop through the array and save each file.

      Route::post('process', function (Request $request) {
          $photos = $request->file('photos');
          $paths  = [];
      
          foreach ($photos as $photo) {
              $extension = $photo->getClientOriginalExtension();
              $filename  = 'profile-photo-' . time() . '.' . $extension;
              $paths[]   = $photo->storeAs('photos', $filename);
          }
      
          dd($paths);
      });
      

      After running this, I got the following array, since I uploaded a GIF and a PNG:

array:2 [
  0 => "photos/profile-photo-1517315875.gif"
  1 => "photos/profile-photo-1517315875.png"
]
      

Validation for file uploads is extremely important, not just for preventing users from uploading the wrong file types but also for security. Let me give an example regarding security. There’s a PHP configuration option cgi.fix_pathinfo=1. With it enabled, when PHP encounters a URL like https://example.com/images/evil.jpg/nonexistent.php, it assumes nonexistent.php is a PHP file and tries to run it. When it discovers that nonexistent.php doesn’t exist, PHP will be like “I need to fix this ASAP” and fall back to executing evil.jpg (a PHP script disguised as a JPEG) instead. Because evil.jpg wasn’t validated when it was uploaded, a hacker now has a script they can freely run live on your server… Not… good.

There are many ways to validate files in Laravel, but let’s stick to controller validation.

      Route::post('process', function (Request $request) {
          
          $validation = $request->validate([
              'photo' => 'required|file|image|mimes:jpeg,png,gif,webp|max:2048'
              
              
          ]);
          $file      = $validation['photo']; 
          $extension = $file->getClientOriginalExtension();
          $filename  = 'profile-photo-' . time() . '.' . $extension;
          $path      = $file->storeAs('photos', $filename);
      
          dd($path);
      });
      

For the above snippet, we told Laravel to make sure the photo field is required, is a successfully uploaded file, is an image, has one of the defined MIME types, and is at most 2048 kilobytes (about 2 megabytes).

Now, when a malicious user uploads a disguised file, the file will fail validation, and even if for some weird reason you leave cgi.fix_pathinfo on, this particular trick is no longer a way to get PWNED!!!

      If you head over to Laravel’s validation page you’ll see a whole bunch of validation rules.

      Okay, your site is now an adult, it has many visitors and you decide it’s time to move to the cloud. Or maybe from the beginning, you decided your files will live on a separate server. The good news is Laravel comes with support for many cloud providers, but, for this tutorial, let’s stick with Amazon.

Earlier we installed league/flysystem-aws-s3-v3 through composer. Laravel will automatically look for it if you choose to use Amazon S3, and throw an exception if it’s missing.

      To upload files to the cloud, just use:

      $request->file('photo')->store('photos', 's3');
      

      For multiple file uploads:

      foreach ($photos as $photo) {
          $extension = $photo->getClientOriginalExtension();
          $filename  = 'profile-photo-' . time() . '.' . $extension;
          $paths[]   = $photo->storeAs('photos', $filename, 's3');
      }
      

Users may have already uploaded files before you decide to switch to a cloud provider. Check the upcoming sections for what to do when files already exist.

      Note: You’ll have to configure your Amazon S3 credentials in config/filesystems.php.
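
For reference, the s3 disk entry in config/filesystems.php looks roughly like the sketch below; the credentials themselves live in .env, and the exact environment variable names vary slightly between Laravel versions, so treat this as an assumption to check against your install:

's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
],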

Before we look at sending file attachments, let’s quickly configure our mail environment. In the .env file you will see this section:

      MAIL_DRIVER=smtp
      MAIL_HOST=smtp.mailtrap.io
      MAIL_PORT=2525
      MAIL_USERNAME=null
      MAIL_PASSWORD=null
      MAIL_ENCRYPTION=null
      

      We need a username and password which we can get at Mailtrap.io. Mailtrap is really good for testing emails during development as you don’t have to crowd your email with spam. You can also share inboxes with team members or create separate inboxes.

      First, create an account and login:

      1. Create a new inbox
      2. Click to open inbox
      3. Copy username and password under SMTP section

      After copying credentials, we can modify .env to:

      MAIL_DRIVER=smtp
      MAIL_HOST=smtp.mailtrap.io
      MAIL_PORT=2525
      MAIL_USERNAME=USERNAME
      MAIL_PASSWORD=PASSWORD
      MAIL_ENCRYPTION=null
      

      Don’t bother using mine, I deleted it.

      Create your mailable

      1. php artisan make:mail FileDownloaded

      Then, edit its build method and change it to:

      public function build()
      {
          return $this->from('[email protected]')
              ->view('emails.files_downloaded')
              ->attach(storage_path('app/file.txt'), [
                  'as' => 'secret.txt'
              ]);
      }
      

      As you can see from the method above, we pass the absolute file path to the attach() method and pass an optional array where we can change the name of the attachment or even add custom headers. Next, we need to create our email view.

      Create a new view file in resources/views/emails/files_downloaded.blade.php and place the content below.

      <h1>Only you can stop forest fires</h1>
      <p>Lorem, ipsum dolor sit amet consectetur adipisicing elit. Labore at reiciendis consequatur, ea culpa molestiae ad minima est quibusdam ducimus laboriosam dolorem, quasi sequi! Atque dolore ullam nisi accusantium. Tenetur!</p>
      

      Now, in routes/web.php we can create a new route and trigger a mail when we visit it.

      use App\Mail\FileDownloaded;
      
      Route::get('mail', function () {
          $email = '[email protected]';
      
          Mail::to($email)->send(new FileDownloaded);
      
          dd('done');
      });
      

      If you head over to Mailtrap, you should see this.

In an application, we don’t process files through uploads every time. Sometimes we decide to defer cloud file uploads until a certain user action is complete. Other times we have some files on disk before switching to a cloud provider. For times like this, Laravel provides a convenient Storage facade. For those who don’t know, facades in Laravel are class aliases: instead of typing something like Symfony\File\Whatever\Long\Namespace\UploadedFile, we can simply use Storage.

You can choose the disk to work with explicitly. If no disk is specified, Laravel looks in config/filesystems.php and uses the default disk.

      Storage::disk('local')->exists('file.txt');
      

      Use default cloud provider:

      
      Storage::cloud()->exists('file.txt');
      

      Create a new file with contents:

      Storage::put('file.txt', 'Contents');
      

      Prepend to file:

      Storage::prepend('file.txt', 'Prepended Text');
      

      Append to file:

Storage::append('file.txt', 'Appended Text');
      

      Get file contents:

Storage::get('file.txt');
      

      Check if file exists:

Storage::exists('file.txt');
      

      Force file download:

      Storage::download('file.txt', $name, $headers); 
      

      Generate publicly accessible URL:

      Storage::url('file.txt');
      

      Generate a temporary public URL (i.e., files that won’t exist after a set time). This will only work for cloud providers as Laravel doesn’t yet know how to handle the generation of temporary URLs for the local disk.

Storage::temporaryUrl('file.txt', now()->addMinutes(10));
      

      Get file size:

      Storage::size('file.txt');
      

      Last modified date:

Storage::lastModified('file.txt');
      

      Copy files:

      Storage::copy('file.txt', 'shared/file.txt');
      

      Move files:

      Storage::move('file.txt', 'secret/file.txt');
      

      Delete files:

      Storage::delete('file.txt');
      

      To delete multiple files:

      Storage::delete(['file1.txt', 'file2.txt']);
      

Resizing images, adding filters, etc. is where Laravel needs external help. Adding this feature natively would only bloat the framework, since not every install needs it. We need a package called intervention/image. We already installed this package earlier, but for reference:

      1. composer require intervention/image

Since Laravel can automatically detect packages, we don’t need to register anything. If you are using a version of Laravel earlier than 5.5, read this.

      To resize an image

      $image = Image::make(storage_path('app/public/profile.jpg'))->resize(300, 200);
      

      Even Laravel’s packages are fluent.

      You can head over to their website and see all the fancy effects and filters you can add to your image.
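
As a slightly fuller sketch (the file names are illustrative), you can chain a constrained resize and save the result back to disk with Intervention Image’s fluent API:

use Intervention\Image\Facades\Image;

$image = Image::make(storage_path('app/public/profile.jpg'))
    ->resize(300, null, function ($constraint) {
        // Keep the aspect ratio when only a width is given.
        $constraint->aspectRatio();
    })
    ->save(storage_path('app/public/profile-300.jpg'));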

Laravel also provides handy helpers for working with directories. They are all based on PHP iterators, so they perform efficiently even on large directories.

      To get all files:

Storage::files($directory_name);
      

      To get all files in a directory including files in sub-folders

      Storage::allFiles($directory_name);
      

      To get all directories within a directory

      Storage::directories($directory_name);
      

To get all directories within a directory, including directories in sub-directories

      Storage::allDirectories($directory_name);
      

      Make a directory

      Storage::makeDirectory($directory_name);
      

      Delete a directory

      Storage::deleteDirectory($directory_name);
      

If we left anything out, please let us know down in the comments. Also, check out Mailtrap; they are really good and will help you sail through the development phase with regards to debugging emails.

      Working with CORS Policies on Linode Object Storage


      Linode Object Storage offers a globally-available, S3-compatible storage solution. Whether you are storing critical backup files or data for a static website, S3 object storage can efficiently answer the call.

      To make the most of object storage, you may need to access the data from other domains. For instance, your dynamic applications may opt to use S3 for static file storage.

      This leaves you dealing with Cross-Origin Resource Sharing, or CORS. However, it’s often not clear how to effectively navigate CORS policies or deal with issues as they come up.

      This tutorial aims to clarify how to work with CORS and S3. It covers tools and approaches for effectively reviewing and managing CORS policies for Linode Object Storage or most other S3-compatible storage solutions.

      CORS and S3 Storage – What you Need to Know

Linode Object Storage is an S3, which stands for Simple Storage Service. With S3, data gets stored as objects in “buckets.” This gives S3 a flat approach to storage, in contrast to hierarchical and logistically more complicated structures like traditional file systems. Objects stored in S3 can also be given rich metadata.

      CORS defines how clients and servers from different domains may share resources. Generally, CORS policies restrict access to resources to requests from the same domain. By managing your CORS policies, you can open up services to requests from specified origin domains, or from any domains whatsoever.

      An S3 like Linode Object Storage can provide excellent storage for applications. However, you also want to keep your data as secure as possible while also allowing your applications the access they need.

      This is where managing CORS policies on your object storage service becomes imperative. Applications and other tools often need to access stored resources from particular domains. Implementing specific CORS policies controls what kinds of requests, and responses, each origin domain is allowed.

      Working with CORS Policies on Linode Object Storage

One of the best tools for managing policies on your S3, including Linode Object Storage, is s3cmd. Follow along with our guide Using S3cmd with Object Storage to:

      1. Install s3cmd on your system. The installation takes place on the system from which you intend to manage your S3 instance.

      2. Configure s3cmd for your Linode Object Storage instance. This includes indicating the instance’s access key, endpoint, etc.

      You can verify the connection to your object storage instance with the command to list your buckets. This example lists the one bucket used for this tutorial, example-cors-bucket:

      s3cmd ls
      
      2022-09-24 16:13  s3://example-cors-bucket

      Once you have s3cmd set up for your S3 instance, use it to follow along with the upcoming sections of this tutorial. These show you how to use the tool to review and deploy CORS policies.

      Reviewing CORS Policies for Linode Object Storage

You can get the current CORS policies for your S3 bucket using s3cmd’s info command, which provides general information on the designated bucket, including its policies:

      s3cmd info s3://example-cors-bucket
      
      s3://example-cors-bucket/ (bucket):
         Location:  default
         Payer:     BucketOwner
         Expiration Rule: none
         Policy:    none
         CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>*</AllowedOrigin><AllowedHeader>*</AllowedHeader></CORSRule></CORSConfiguration>
         ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL

      This bucket already has a CORS policy in place. This is because it was set up with the CORS Enabled setting using the Linode Cloud Manager web interface.

      The basic CORS policy above is fairly permissive, allowing access for any request method from any domain. Keep reading to see how you can fine-tune such policies to better fit your particular needs.

      Deploying CORS Policies on Linode Object Storage

      As you can see above, the Linode Cloud Manager can set up a general CORS policy for your bucket. However, if you need more fine-grained control, you need to deploy custom CORS policies.

Creating CORS policies follows a similar methodology to the one outlined in our Define Access and Permissions using Bucket Policies tutorial.

      These next sections break down the particular fields needed for CORS policies and how each affects your bucket’s availability.

      Configuring Policies

      The overall structure for CORS policies on S3 looks like the following. While policies on your object storage instance can generally be set with JSON or XML, CORS policies must use the XML format:

      File: cors_policies.xml
      
      <CORSConfiguration>
        <CORSRule>
          <AllowedHeader>*</AllowedHeader>
      
          <AllowedMethod>GET</AllowedMethod>
          <AllowedMethod>PUT</AllowedMethod>
          <AllowedMethod>POST</AllowedMethod>
          <AllowedMethod>DELETE</AllowedMethod>
          <AllowedMethod>HEAD</AllowedMethod>
      
          <AllowedOrigin>*</AllowedOrigin>
      
          <ExposeHeader>*</ExposeHeader>
      
          <MaxAgeSeconds>3000</MaxAgeSeconds>
        </CORSRule>
      </CORSConfiguration>

      To break this structure down:

      • The policy introduces a list of one or more <CORSRule> elements within a <CORSConfiguration> element. Each <CORSRule> element contains policy details.

      • Policies tend to have some combination of the five types of elements shown in the example above.

        The <AllowedHeader>, <AllowedMethod>, and <AllowedOrigin> elements are almost always present. Further, there may be multiple of these elements within a single <CORSRule>.

        The other two elements, <ExposeHeader> and <MaxAgeSeconds>, are optional. There can be multiple <ExposeHeader> elements, but only one <MaxAgeSeconds>.

• <AllowedHeader> lets you specify request headers allowed for the given policy. You can find a list of commonly used request headers in AWS’s Common Request Headers documentation.

      • <AllowedMethod> lets you specify request methods that the given policy applies to. The full range of supported HTTP request methods is shown in the example above.

      • <AllowedOrigin> lets you specify request origins for the policy. These are the domains from which cross-origin requests can be made.

• <ExposeHeader> can specify which response headers the policy allows to be exposed. You can find a list of commonly used response headers in AWS’s Common Response Headers documentation.

      • <MaxAgeSeconds> can specify the amount of time, in seconds, that browsers are allowed to cache the response to preflight requests. Having this cache allows the browser to repeat the original requests without having to send another preflight request.
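
To see these elements from a browser’s perspective, you can simulate a preflight request with cURL (the bucket URL below is illustrative); the Access-Control-* response headers should mirror the policy’s allowed methods, origins, and max age:

curl -i -X OPTIONS \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: GET" \
  "https://example-cors-bucket.us-southeast-1.linodeobjects.com/example.txt"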

      Example CORS Policies

      To give more concrete ideas of how you can work with CORS policies, the following are two additional example policies. One provides another simple, but more limited, policy, while the other presents a more complicated set of two policies.

      • First, a public access read-only policy. This lets any origin, with any request headers, make GET and HEAD requests to the bucket. However, the policy does not expose custom response headers.

        File: cors_policies.xml
        
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>*</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>*</AllowedOrigin>
          </CORSRule>
        </CORSConfiguration>
            
      • Next, a set of policies for fine control over requests from example.com. The <AllowedOrigin> elements specify the range of possible example.com domains. The two policies distinguish the kinds of headers allowed based on the kinds of request methods.

        File: cors_policies.xml
        
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
            <AllowedHeader>Origin</AllowedHeader>
            <AllowedHeader>Content-*</AllowedHeader>
        
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>ETag</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
        </CORSConfiguration>
            

      Deploying Policies

      The next step is to actually deploy your CORS policies. Once you do, your S3 bucket starts following them to determine what origins to allow and what request and response information to permit.

      Follow these steps to put your CORS policies into practice on your S3 instance.

1. Save your CORS policy into an XML file. This example uses a file named cors_policies.xml which contains the second example policy XML above.

2. Use s3cmd’s setcors command to deploy the CORS policies to the bucket. This command takes the policy XML file and the bucket identifier as arguments:

        s3cmd setcors cors_policies.xml s3://example-cors-bucket
        
      3. Verify the new CORS policies using the info command as shown earlier in this tutorial:

        s3cmd info s3://example-cors-bucket
        
        s3://example-cors-bucket/ (bucket):
           Location:  default
           Payer:     BucketOwner
           Expiration Rule: none
           Policy:    none
           CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>Access-Control-Allow-Origin</ExposeHeader></CORSRule><CORSRule><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><AllowedHeader>Content-*</AllowedHeader><AllowedHeader>Origin</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>ETag</ExposeHeader></CORSRule></CORSConfiguration>
           ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL

      Troubleshooting Common CORS Errors

      Having CORS-related issues on your S3 instance? Take these steps to help narrow down the issue and figure out the kind of policy change needed to resolve it.

      1. Review your instance’s CORS policies using s3cmd:

        s3cmd info s3://example-cors-bucket
        

        This can give you a concrete reference for what policies are in place and the specific details of each, like header and origin information.

      2. Review the request and response data. This can give you insights on any possible inconsistencies between existing CORS policies and the actual requests and responses.

        You can use a tool like cURL for this. First, use s3cmd to create a signed URL to an object on your storage instance. This example command creates a URL for an example.txt object and makes the URL last 300 seconds:

        s3cmd signurl s3://example-cors-bucket/example.txt +300
        

        Now, until the URL expires, you can use a cURL command like this one to send a request for the object:

curl -v "http://example-cors-bucket.us-southeast-1.linodeobjects.com/example.txt?AWSAccessKeyId=example-access-key&Expires=1664121793&Signature=example-signature"
        

        The -v option gives you verbose results, outputting more details to help you dissect any request and response issues.

      3. Compare the results of the cURL request to the CORS policy on your instance.

      Conclusion

      This covers the tools and approaches you need to start managing CORS for your Linode Object Storage or other S3 instance. Once you have these, addressing CORS issues is a matter of reviewing and adjusting policies against desired origins and request types.

Keep improving your resources for managing your S3 through our collection of object storage guides. These cover a range of topics to help you with S3 generally, and Linode Object Storage in particular.

Have more questions or want some help getting started? Feel free to reach out to our Support team.


      Working with JSON in MySQL


      Introduction

      SQL databases tend to be rigid.

If you have worked with them, you would agree that database design, though it seems easy, is a lot trickier in practice. SQL databases believe in structure; that is why it’s called Structured Query Language.

      On the other side of the horizon, we have the NoSQL databases, also called schema-less databases that encourage flexibility. In schema-less databases, there is no imposed structural restriction, only data to be saved.

Though every tool has its use case, sometimes things call for a hybrid approach.

      What if you could structure some parts of your database and leave others to be flexible?

MySQL version 5.7.8 introduced a JSON data type that allows you to accomplish exactly that.

In this tutorial, you are going to learn:

      1. How to design your database tables using JSON fields.
2. The various JSON-based functions available in MySQL to create, read, update, and delete rows.
      3. How to work with JSON fields using the Eloquent ORM in Laravel.

      Why Use JSON

      At this moment, you are probably asking yourself why would you want to use JSON when MySQL has been catering to a wide variety of database needs even before it introduced a JSON data type.

The answer lies in the use cases where you would otherwise reach for a makeshift approach.

      Let me explain with an example.

      Suppose you are building a web application where you have to save a user’s configuration/preferences in the database.

      Generally, you can create a separate database table with the id, user_id, key, and value fields or save it as a formatted string that you can parse at runtime.

      However, this works well for a small number of users. If you have about a thousand users and five configuration keys, you are looking at a table with five thousand records that addresses a very small feature of your application.

Or, if you are taking the formatted-string route, you are looking at extraneous parsing code that only compounds your server load.

Using a JSON field to save a user’s configuration in such a scenario can spare you an entire database table and bring the number of records down to the number of users.

      And you get the added benefit of not having to write any JSON parsing code, the ORM or the language runtime takes care of it.
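
To make the scenario concrete, here is a minimal sketch (the table and key names are illustrative) of a single JSON column replacing a separate key/value table:

/* One JSON column per user instead of five key/value rows per user. */
CREATE TABLE `users` (
    `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(250) NOT NULL,
    `preferences` JSON NOT NULL,
    PRIMARY KEY (`id`)
);

INSERT INTO `users` (`name`, `preferences`)
VALUES ('jdoe', '{"theme": "dark", "locale": "en", "notifications": true}');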

      The Schema

      Before we dive into using all the cool JSON stuff in MySQL, we are going to need a sample database to play with.

      So, let’s get our database schema out of the way first.

      We are going to consider the use case of an online store that houses multiple brands and a variety of electronics.

Since different electronics have different attributes (compare a MacBook with a vacuum cleaner) that buyers are interested in, the Entity-Attribute-Value (EAV) pattern is typically used.

      However, since we now have the option to use a JSON data type, we are going to drop EAV.

For a start, our database will be named e_store and will have only three tables: brands, categories, and products.

      Our brands and categories tables will be pretty similar, each having an id and a name field.

      CREATE DATABASE IF NOT EXISTS `e_store`
      DEFAULT CHARACTER SET utf8
      DEFAULT COLLATE utf8_general_ci;
      
      SET default_storage_engine = INNODB;
      
      CREATE TABLE `e_store`.`brands`(
          `id` INT UNSIGNED NOT NULL auto_increment ,
          `name` VARCHAR(250) NOT NULL ,
          PRIMARY KEY(`id`)
      );
      
      CREATE TABLE `e_store`.`categories`(
          `id` INT UNSIGNED NOT NULL auto_increment ,
          `name` VARCHAR(250) NOT NULL ,
          PRIMARY KEY(`id`)
      );
      

      The objective of these two tables will be to house the product categories and the brands that provide these products.

      While we are at it, let us go ahead and seed some data into these tables to use later.

      /* Brands */
      INSERT INTO `e_store`.`brands`(`name`)
      VALUES
          ('Samsung');
      
      INSERT INTO `e_store`.`brands`(`name`)
      VALUES
          ('Nokia');
      
      INSERT INTO `e_store`.`brands`(`name`)
      VALUES
          ('Canon');
      
      /* Types of electronic device */
      INSERT INTO `e_store`.`categories`(`name`)
      VALUES
          ('Television');
      
      INSERT INTO `e_store`.`categories`(`name`)
      VALUES
          ('Mobilephone');
      
      INSERT INTO `e_store`.`categories`(`name`)
      VALUES
          ('Camera');
      

      The brands table

      The categories table

      Next, is the business area of this tutorial.

      We are going to create a products table with the id, name, brand_id, category_id, and attributes fields.

      CREATE TABLE `e_store`.`products`(
          `id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `name` VARCHAR(250) NOT NULL ,
          `brand_id` INT UNSIGNED NOT NULL ,
          `category_id` INT UNSIGNED NOT NULL ,
          `attributes` JSON NOT NULL ,
          PRIMARY KEY(`id`) ,
          INDEX `CATEGORY_ID`(`category_id` ASC) ,
          INDEX `BRAND_ID`(`brand_id` ASC) ,
          CONSTRAINT `brand_id` FOREIGN KEY(`brand_id`) REFERENCES `e_store`.`brands`(`id`) ON DELETE RESTRICT ON UPDATE CASCADE ,
          CONSTRAINT `category_id` FOREIGN KEY(`category_id`) REFERENCES `e_store`.`categories`(`id`) ON DELETE RESTRICT ON UPDATE CASCADE
      );
      

      Our table definition specifies foreign key constraints for the brand_id and category_id fields, specifying that they reference the brands and categories table respectively. We have also specified that the referenced rows should not be allowed to delete and if updated, the changes should reflect in the references as well.

      The attributes field’s column type has been declared to be JSON which is the native data type now available in MySQL. This allows us to use the various JSON related constructs in MySQL on our attributes field.

      Here is an entity relationship diagram of our created database.

      The e_store database

      Our database design is not the best in terms of efficiency and accuracy. There is no price column in the products table and we could do with putting a product into multiple categories. However, the purpose of this tutorial is not to teach database design but rather how to model objects of different nature in a single table using MySQL’s JSON features.

      The CRUD Operations

      Let us look at how to create, read, update, and delete data in a JSON field.

      Create

      Creating a record in the database with a JSON field is pretty simple.

      All you need to do is add valid JSON as the field value in your insert statement.

      /* Let's sell some televisions */
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Prime' ,
          '1' ,
          '1' ,
          '{"screen": "50 inch", "resolution": "2048 x 1152 pixels", "ports": {"hdmi": 1, "usb": 3}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Octoview' ,
          '1' ,
          '1' ,
          '{"screen": "40 inch", "resolution": "1920 x 1080 pixels", "ports": {"hdmi": 1, "usb": 2}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Dreamer' ,
          '1' ,
          '1' ,
          '{"screen": "30 inch", "resolution": "1600 x 900 pixles", "ports": {"hdmi": 1, "usb": 1}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Bravia' ,
          '1' ,
          '1' ,
          '{"screen": "25 inch", "resolution": "1366 x 768 pixels", "ports": {"hdmi": 1, "usb": 0}, "speakers": {"left": "5 watt", "right": "5 watt"}}'
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Proton' ,
          '1' ,
          '1' ,
          '{"screen": "20 inch", "resolution": "1280 x 720 pixels", "ports": {"hdmi": 0, "usb": 0}, "speakers": {"left": "5 watt", "right": "5 watt"}}'
      );
      

      The products table after adding televisions

      Instead of laying out the JSON object yourself, you can also use the built-in JSON_OBJECT function.

      The JSON_OBJECT function accepts a list of key/value pairs in the form JSON_OBJECT(key1, value1, key2, value2, ... key(n), value(n)) and returns a JSON object.

      /* Let's sell some mobilephones */
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Desire' ,
          '2' ,
          '2' ,
          JSON_OBJECT(
              "network" ,
              JSON_ARRAY("GSM" , "CDMA" , "HSPA" , "EVDO") ,
              "body" ,
              "5.11 x 2.59 x 0.46 inches" ,
              "weight" ,
              "143 grams" ,
              "sim" ,
              "Micro-SIM" ,
              "display" ,
              "4.5 inches" ,
              "resolution" ,
              "720 x 1280 pixels" ,
              "os" ,
              "Android Jellybean v4.3"
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Passion' ,
          '2' ,
          '2' ,
          JSON_OBJECT(
              "network" ,
              JSON_ARRAY("GSM" , "CDMA" , "HSPA") ,
              "body" ,
              "6.11 x 3.59 x 0.46 inches" ,
              "weight" ,
              "145 grams" ,
              "sim" ,
              "Micro-SIM" ,
              "display" ,
              "4.5 inches" ,
              "resolution" ,
              "720 x 1280 pixels" ,
              "os" ,
              "Android Jellybean v4.3"
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Emotion' ,
          '2' ,
          '2' ,
          JSON_OBJECT(
              "network" ,
              JSON_ARRAY("GSM" , "CDMA" , "EVDO") ,
              "body" ,
              "5.50 x 2.50 x 0.50 inches" ,
              "weight" ,
              "125 grams" ,
              "sim" ,
              "Micro-SIM" ,
              "display" ,
              "5.00 inches" ,
              "resolution" ,
              "720 x 1280 pixels" ,
              "os" ,
              "Android KitKat v4.3"
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Sensation' ,
          '2' ,
          '2' ,
          JSON_OBJECT(
              "network" ,
              JSON_ARRAY("GSM" , "HSPA" , "EVDO") ,
              "body" ,
              "4.00 x 2.00 x 0.75 inches" ,
              "weight" ,
              "150 grams" ,
              "sim" ,
              "Micro-SIM" ,
              "display" ,
              "3.5 inches" ,
              "resolution" ,
              "720 x 1280 pixels" ,
              "os" ,
              "Android Lollypop v4.3"
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Joy' ,
          '2' ,
          '2' ,
          JSON_OBJECT(
              "network" ,
              JSON_ARRAY("CDMA" , "HSPA" , "EVDO") ,
              "body" ,
              "7.00 x 3.50 x 0.25 inches" ,
              "weight" ,
              "250 grams" ,
              "sim" ,
              "Micro-SIM" ,
              "display" ,
              "6.5 inches" ,
              "resolution" ,
              "1920 x 1080 pixels" ,
              "os" ,
              "Android Marshmallow v4.3"
          )
      );
      

      The products table after adding mobilephones

      Notice the JSON_ARRAY function which returns a JSON array when passed a set of values.

      If you specify a single key multiple times, only the first key/value pair will be retained. This is called normalizing the JSON in MySQL’s terms. Also, as part of normalization, the object keys are sorted and the extra white-space between key/value pairs is removed.
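
For instance, on MySQL 5.7 the first duplicate wins (note that MySQL 8.0.3 and later keep the last duplicate instead):

/* output: {"a": 1} */
SELECT JSON_OBJECT('a', 1, 'a', 2);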

      Another function that we can use to create JSON objects is the JSON_MERGE function.

      The JSON_MERGE function takes multiple JSON objects and produces a single, aggregate object.

      /* Let's sell some cameras */
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Explorer' ,
          '3' ,
          '3' ,
          JSON_MERGE(
              '{"sensor_type": "CMOS"}' ,
              '{"processor": "Digic DV III"}' ,
              '{"scanning_system": "progressive"}' ,
              '{"mount_type": "PL"}' ,
              '{"monitor_type": "LCD"}'
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Runner' ,
          '3' ,
          '3' ,
          JSON_MERGE(
              JSON_OBJECT("sensor_type" , "CMOS") ,
              JSON_OBJECT("processor" , "Digic DV II") ,
              JSON_OBJECT("scanning_system" , "progressive") ,
              JSON_OBJECT("mount_type" , "PL") ,
              JSON_OBJECT("monitor_type" , "LED")
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Traveler' ,
          '3' ,
          '3' ,
          JSON_MERGE(
              JSON_OBJECT("sensor_type" , "CMOS") ,
              '{"processor": "Digic DV II"}' ,
              '{"scanning_system": "progressive"}' ,
              '{"mount_type": "PL"}' ,
              '{"monitor_type": "LCD"}'
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Walker' ,
          '3' ,
          '3' ,
          JSON_MERGE(
              '{"sensor_type": "CMOS"}' ,
              '{"processor": "Digic DV I"}' ,
              '{"scanning_system": "progressive"}' ,
              '{"mount_type": "PL"}' ,
              '{"monitor_type": "LED"}'
          )
      );
      
      INSERT INTO `e_store`.`products`(
          `name` ,
          `brand_id` ,
          `category_id` ,
          `attributes`
      )
      VALUES(
          'Jumper' ,
          '3' ,
          '3' ,
          JSON_MERGE(
              '{"sensor_type": "CMOS"}' ,
              '{"processor": "Digic DV I"}' ,
              '{"scanning_system": "progressive"}' ,
              '{"mount_type": "PL"}' ,
              '{"monitor_type": "LCD"}'
          )
      );
      

      The products table after adding cameras

      There is a lot happening in these insert statements and it can get a bit confusing. However, it is pretty simple.

      We are only passing objects to the JSON_MERGE function. Some of them have been constructed using the JSON_OBJECT function we saw previously whereas others have been passed as valid JSON strings.

In the case of the JSON_MERGE function, if a key is repeated multiple times, its value is retained as an array in the output.

      A proof of concept is in order I suppose.

      /* output: {"network": ["GSM", "CDMA", "HSPA", "EVDO"]} */
      SELECT JSON_MERGE(
          '{"network": "GSM"}' ,
          '{"network": "CDMA"}' ,
          '{"network": "HSPA"}' ,
          '{"network": "EVDO"}'
      );
      

      We can confirm all our queries were run successfully using the JSON_TYPE function which gives us the field value type.

      /* output: OBJECT */
      SELECT JSON_TYPE(attributes) FROM `e_store`.`products`;
      

All attributes are JSON objects

      Read

      Right, we have a few products in our database to work with.

For typical MySQL values that are not of type JSON, a where clause is pretty straightforward. Just specify the column, an operator, and the values you need to work with.

However, when working with JSON columns, this does not work.

      /* It's not that simple */
      SELECT
          *
      FROM
          `e_store`.`products`
      WHERE
    attributes = '{"ports": {"usb": 3, "hdmi": 1}, "screen": "50 inch", "speakers": {"left": "10 watt", "right": "10 watt"}, "resolution": "2048 x 1152 pixels"}';
      

      When you wish to narrow down rows using a JSON field, you should be familiar with the concept of a path expression.

The simplest definition of a path expression (think jQuery selectors) is that it’s used to specify which parts of the JSON document to work with.

      The second piece of the puzzle is the JSON_EXTRACT function which accepts a path expression to navigate through JSON.

Let us say we are interested in the range of televisions that have at least a single USB and HDMI port.

      SELECT
          *
      FROM
          `e_store`.`products`
      WHERE
          `category_id` = 1
      AND JSON_EXTRACT(`attributes` , '$.ports.usb') > 0
      AND JSON_EXTRACT(`attributes` , '$.ports.hdmi') > 0;
      

      Selecting records by JSON attributes

The first argument to the JSON_EXTRACT function is the JSON to apply the path expression to, which is the attributes column. The $ symbol refers to the root of the JSON document. The $.ports.usb and $.ports.hdmi path expressions translate to “take the usb key under ports” and “take the hdmi key under ports” respectively.

      Once we have extracted the keys we are interested in, it is pretty simple to use the MySQL operators such as > on them.

      Also, the JSON_EXTRACT function has the alias -> that you can use to make your queries more readable.

      Revising our previous query.

      SELECT
          *
      FROM
          `e_store`.`products`
      WHERE
          `category_id` = 1
      AND `attributes` -> '$.ports.usb' > 0
      AND `attributes` -> '$.ports.hdmi' > 0;
      

      Update

      In order to update JSON values, we are going to use the JSON_INSERT, JSON_REPLACE, and JSON_SET functions. These functions also require a path expression to specify which parts of the JSON object to modify.

      The output of these functions is a valid JSON object with the changes applied.

      Let us modify all mobilephones to have a chipset property as well.

      UPDATE `e_store`.`products`
      SET `attributes` = JSON_INSERT(
          `attributes` ,
          '$.chipset' ,
          'Qualcomm'
      )
      WHERE
          `category_id` = 2;
      

      Updated mobilephones

      The $.chipset path expression identifies the position of the chipset property to be at the root of the object.

      Let us update the chipset property to be more descriptive using the JSON_REPLACE function.

      UPDATE `e_store`.`products`
      SET `attributes` = JSON_REPLACE(
          `attributes` ,
          '$.chipset' ,
          'Qualcomm Snapdragon'
      )
      WHERE
          `category_id` = 2;
      

      Updated mobilephones

      Easy peasy!

      Lastly, we have the JSON_SET function which we will use to specify our televisions are pretty colorful.

      UPDATE `e_store`.`products`
      SET `attributes` = JSON_SET(
          `attributes` ,
          '$.body_color' ,
          'red'
      )
      WHERE
          `category_id` = 1;
      

      Updated televisions

      All of these functions seem identical but there is a difference in the way they behave.

The JSON_INSERT function will only add the property to the object if it does not exist already.

      The JSON_REPLACE function substitutes the property only if it is found.

      The JSON_SET function will add the property if it is not found else replace it.
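
A small demonstration makes the difference concrete; each query below starts from the same {"a": 1} object:

/* JSON_INSERT leaves the existing key untouched: {"a": 1} */
SELECT JSON_INSERT('{"a": 1}', '$.a', 99);

/* JSON_REPLACE substitutes the existing key: {"a": 99} */
SELECT JSON_REPLACE('{"a": 1}', '$.a', 99);

/* JSON_SET inserts missing keys and replaces existing ones: {"a": 99, "b": 2} */
SELECT JSON_SET('{"a": 1}', '$.a', 99, '$.b', 2);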

      Delete

      There are two parts to deleting that we will look at.

      The first is to delete a certain key/value from your JSON columns whereas the second is to delete rows using a JSON column.

      Let us say we are no longer providing the mount_type information for cameras and wish to remove it for all cameras.

      We will do it using the JSON_REMOVE function which returns the updated JSON after removing the specified key based on the path expression.

      UPDATE `e_store`.`products`
      SET `attributes` = JSON_REMOVE(`attributes` , '$.mount_type')
      WHERE
          `category_id` = 3;
      

      Cameras after removing mount_type property

For the second case, suppose we also no longer carry mobilephones that have the Jellybean version of the Android OS.

      DELETE FROM `e_store`.`products`
      WHERE `category_id` = 2
      AND JSON_EXTRACT(`attributes` , '$.os') LIKE '%Jellybean%';
      

      We do not sell Jellybeans anymore!

As stated previously, working with a specific attribute requires the use of the JSON_EXTRACT function, so in order to apply the LIKE operator, we first extracted the os property of mobilephones (with the help of category_id) and deleted all records that contain the string Jellybean.

      A Primer for Web Applications

      The old days of directly working with a database are way behind us.

      These days, frameworks insulate developers from lower-level operations and it almost feels alien for a framework fanatic not to be able to translate his/her database knowledge into an object relational mapper.

      For the purpose of not leaving such developers heartbroken and wondering about their existence and purpose in the universe, we are going to look at how to go about the business of JSON columns in the Laravel framework.

      We will only be focusing on the parts that overlap with our subject matter which deals with JSON columns. An in-depth tutorial on the Laravel framework is beyond the scope of this piece.

      Creating the Migrations

      Make sure to configure your Laravel application to use a MySQL database.

      We are going to create three migrations for brands, categories, and products respectively.

      $ php artisan make:migration create_brands
      $ php artisan make:migration create_categories
      $ php artisan make:migration create_products
      

The create_brands and create_categories migrations are pretty similar, and routine for Laravel developers.

      /* database/migrations/create_brands.php */
      
      <?php
      
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
      
      class CreateBrands extends Migration
      {
          /**
           * Run the migrations.
           *
           * @return void
           */
          public function up()
          {
              Schema::create('brands', function(Blueprint $table){
                  $table->engine="InnoDB";
                  $table->increments('id');
                  $table->string('name');
                  $table->timestamps();
              });
          }
      
          /**
           * Reverse the migrations.
           *
           * @return void
           */
          public function down()
          {
              Schema::drop('brands');
          }
      }
      
      /* database/migrations/create_categories.php */
      
      <?php
      
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
      
      class CreateCategories extends Migration
      {
          /**
           * Run the migrations.
           *
           * @return void
           */
          public function up()
          {
              Schema::create('categories', function(Blueprint $table){
                  $table->engine="InnoDB";
                  $table->increments('id');
                  $table->string('name');
                  $table->timestamps();
              });
          }
      
          /**
           * Reverse the migrations.
           *
           * @return void
           */
          public function down()
          {
              Schema::drop('categories');
          }
      }
      
      

      The create_products migration will also have the directives for indexes and foreign keys.

      /* database/migrations/create_products */
      
      <?php
      
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
      
      class CreateProducts extends Migration
      {
          /**
           * Run the migrations.
           *
           * @return void
           */
          public function up()
          {
              Schema::create('products', function(Blueprint $table){
                  $table->engine="InnoDB";
                  $table->increments('id');
                  $table->string('name');
                  $table->unsignedInteger('brand_id');
                  $table->unsignedInteger('category_id');
                  $table->json('attributes');
                  $table->timestamps();
                  // foreign key constraints
                  $table->foreign('brand_id')->references('id')->on('brands')->onDelete('restrict')->onUpdate('cascade');
                  $table->foreign('category_id')->references('id')->on('categories')->onDelete('restrict')->onUpdate('cascade');
                  // indexes
                  $table->index('brand_id');
                  $table->index('category_id');
              });
          }
      
          /**
           * Reverse the migrations.
           *
           * @return void
           */
          public function down()
          {
              Schema::drop('products');
          }
      }
      

      Pay attention to the $table->json('attributes'); statement in the migration.

      Just like creating any other table field using the appropriate data type named method, we have created a JSON column using the json method with the name attributes.

Also, this only works for database engines that support a native JSON data type.

Engines such as MySQL versions older than 5.7.8 will not be able to carry out these migrations.
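
With the schema defined, apply the migrations with Artisan:

$ php artisan migrate
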

      Creating the Models

      Other than associations, there is not much needed to set up our models so let’s run through them quickly.

      /* app/Brand.php */
      
      <?php
      
      namespace App;
      
use Illuminate\Database\Eloquent\Model;
      
      class Brand extends Model
      {
          // A brand has many products
          public function products(){
        return $this->hasMany('App\Product');
          }
      }
      
      /* app/Category.php */
      
      <?php
      
      namespace App;
      
use Illuminate\Database\Eloquent\Model;
      
      class Category extends Model
      {
          // A category has many products
          public function products(){
        return $this->hasMany('App\Product');
          }
      }
      
      /* app/Product.php */
      
      <?php
      
      namespace App;
      
use Illuminate\Database\Eloquent\Model;
      
      class Product extends Model
      {
          // Cast attributes JSON to array
          protected $casts = [
              'attributes' => 'array'
          ];
      
          // Each product has a brand
          public function brand(){
        return $this->belongsTo('App\Brand');
          }
      
          // Each product has a category
          public function category(){
        return $this->belongsTo('App\Category');
          }
      }
      

      Again, our Product model needs a special mention.

The $casts array, which has the attributes key set to array, ensures that whenever a product is fetched from the database, its attributes JSON is decoded into an associative array.

We will see later in the tutorial how this makes it easier to update records from our controller actions.
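
For instance, with the cast in place, the JSON column reads like any other PHP array (the value shown is hypothetical):

// the 'array' cast decodes the JSON automatically on fetch
$product = \App\Product::first();
echo $product->attributes['processor']; // e.g. "EXPEED 5"
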

      Resource Operations

      Creating a Product

Speaking of the admin panel, the parameters to create a product may come in through different routes, since we have a number of product categories. You may also have different views to create, edit, show, and delete a product.

For example, a form to add a camera requires different input fields than a form to add a mobile phone, so the two warrant separate views.

Moreover, once you have the user input, you will most probably run it through a request validator, one for the camera and one for the mobile phone.
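
As an illustration, here is a minimal sketch of validating the camera input inline before creating the product; the field names and rules are assumptions mirroring the attributes we store below, and a dedicated form request class would work just as well.

// a sketch of inline validation for the camera form;
// field names and rules are illustrative assumptions
$request->validate([
    'name' => 'required|string|max:255',
    'brand_id' => 'required|exists:brands,id',
    'category_id' => 'required|exists:categories,id',
    'processor' => 'required|string',
    'sensor_type' => 'required|string',
    'monitor_type' => 'required|string',
    'scanning_system' => 'required|string',
]);
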

      The final step would be to create the product through Eloquent.

We will be focusing on the camera resource for the rest of this tutorial. Other products can be handled with similar code.

      Assuming we are saving a camera and the form fields are named as the respective camera attributes, here is the controller action.

// creates product in database
// using form fields
public function store(Request $request){
    // create object and set properties
    $camera = new \App\Product();
    $camera->name = $request->name;
    $camera->brand_id = $request->brand_id;
    $camera->category_id = $request->category_id;
    // assign a plain array: the 'array' cast on the model
    // serializes it to JSON on save, so calling json_encode()
    // here would double-encode the value
    $camera->attributes = [
        'processor' => $request->processor,
        'sensor_type' => $request->sensor_type,
        'monitor_type' => $request->monitor_type,
        'scanning_system' => $request->scanning_system,
    ];
    // save to database
    $camera->save();
    // show the created camera
    return view('product.camera.show', ['camera' => $camera]);
}
      

      Fetching Products

      Recall the $casts array we declared earlier in the Product model. It will help us read and edit a product by treating attributes as an associative array.

      // fetches a single product
      // from database
      public function show($id){
    $camera = \App\Product::find($id);
          return view('product.camera.show', ['camera' => $camera]);
      }
      

      Your view would use the $camera variable in the following manner.

      <table>
          <tr>
              <td>Name</td>
              <td>{{ $camera->name }}</td>
          </tr>
          <tr>
              <td>Brand ID</td>
              <td>{{ $camera->brand_id }}</td>
          </tr>
          <tr>
              <td>Category ID</td>
              <td>{{ $camera->category_id }}</td>
          </tr>
          <tr>
              <td>Processor</td>
              <td>{{ $camera->attributes['processor'] }}</td>
          </tr>
          <tr>
              <td>Sensor Type</td>
              <td>{{ $camera->attributes['sensor_type'] }}</td>
          </tr>
          <tr>
              <td>Monitor Type</td>
              <td>{{ $camera->attributes['monitor_type'] }}</td>
          </tr>
          <tr>
              <td>Scanning System</td>
              <td>{{ $camera->attributes['scanning_system'] }}</td>
          </tr>
      </table>
      

      Editing a Product

      As shown in the previous section, you can easily fetch a product and pass it to the view, which in this case would be the edit view.

      You can use the product variable to pre-populate form fields on the edit page.

Updating the product from the user input is pretty similar to the store action we saw earlier, except that instead of creating a new product, you first fetch it from the database before updating it.
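
Here is a minimal sketch of such an update action, assuming the same form field names as the store action. Note that an attribute cast to array must be reassigned as a whole; Eloquent does not support modifying a single key of a cast array in place.

// updates an existing camera from form fields;
// field names mirror the store action above
public function update(Request $request, $id){
    $camera = \App\Product::findOrFail($id);
    $camera->name = $request->name;
    // reassign the entire array rather than editing one key in place
    $camera->attributes = [
        'processor' => $request->processor,
        'sensor_type' => $request->sensor_type,
        'monitor_type' => $request->monitor_type,
        'scanning_system' => $request->scanning_system,
    ];
    $camera->save();
    return view('product.camera.show', ['camera' => $camera]);
}
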

      Searching Based on JSON Attributes

      The last piece of the puzzle that remains to discuss is querying JSON columns using the Eloquent ORM.

      If you have a search page that allows cameras to be searched based on their specifications provided by the user, you can do so with the following code.

// searches cameras by user-provided specifications
public function search(Request $request){
    // note: without % wildcards, 'like' behaves as an exact match;
    // wrap inputs, e.g. '%'.$request->processor.'%', for partial matching
    $cameras = \App\Product::where([
        ['attributes->processor', 'like', $request->processor],
        ['attributes->sensor_type', 'like', $request->sensor_type],
        ['attributes->monitor_type', 'like', $request->monitor_type],
        ['attributes->scanning_system', 'like', $request->scanning_system]
    ])->get();
    return view('product.camera.search', ['cameras' => $cameras]);
}
      

The retrieved records will now be available to the product.camera.search view as a $cameras collection.
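
In that view, the collection can be looped over like any other; a small hypothetical Blade snippet:

@foreach ($cameras as $camera)
    <p>{{ $camera->name }}: {{ $camera->attributes['sensor_type'] }}</p>
@endforeach
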

      Deleting a Product

Using a non-JSON column, you can delete products by specifying a where clause and then calling the delete method.

For example, deleting by ID:

\App\Product::where('id', $id)->delete();
      

      For JSON columns, specify a where clause using a single or multiple attributes and then call the delete method.

// deletes all cameras with the sensor_type attribute set to CMOS
\App\Product::where('attributes->sensor_type', 'CMOS')->delete();
      

      Curtains

      We have barely scratched the surface when it comes to using JSON columns in MySQL.

Whenever you need to save data as key/value pairs in a separate table or work with flexible attributes for an entity, consider using a JSON column instead, as it can considerably simplify your database design.

If you are interested in diving deeper, the MySQL documentation is a great resource for exploring JSON concepts further.

I hope you found this tutorial interesting and informative. Until my next piece, happy coding!


