Now that we've had our first look at AWS Glue DataBrew, it's time to try it out with a real data preparation activity. After nearly a year of COVID-19, and several rounds of financial relief, one interesting dataset is the loan data from the SBA's Paycheck Protection Program (PPP). As with all government programs, there is a great deal of interest in how the money was allocated. The PPP was a decentralized program, with local banks approving and disbursing funds. The loan data for the program was released in several batches, and early indications are that the data is a bit of a mess, making it difficult for groups without a data prep organization to analyze it. As data professionals, we can help with that.

The link to the most recent PPP data is found at. I downloaded the dataset and uploaded the files to an S3 bucket. Our work with DataBrew begins, as many things in AWS do, by creating a service-level IAM role and granting it permission to our data, as documented at.

After we've uploaded our data and given DataBrew permission, it's time to create a Dataset, which is basically a pointer to the data files we'll be using. We'll need a Dataset for every different batch of data we want to use.

The first thing I like to do when I get an unknown dataset is profile as much of the data as I can. With DataBrew, I can easily set up a Profile Job to gather statistics about the entire dataset. To start, we navigate Jobs > Profile Jobs > Create Job. The configuration looks like the image below.

![]()

The profiling job takes a little over one minute to run, since DataBrew profiles a maximum of 20,000 rows even if you select "Full dataset" (you can request a limit increase). Once complete, we can choose "View data profile", then the "Column Statistics" tab to check for completeness, type, and validity.

![]()
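Everything we've just done in the console can also be scripted. Below is a minimal boto3 sketch of the Dataset and Profile Job steps, assuming hypothetical names throughout: the bucket, keys, job name, and role ARN are placeholders, not values from this walkthrough.

```python
import boto3

databrew = boto3.client("databrew")

# A Dataset is just a pointer to files in S3 (bucket/key are placeholders).
databrew.create_dataset(
    Name="ppp-loans",
    Format="CSV",
    Input={"S3InputDefinition": {"Bucket": "my-ppp-bucket", "Key": "ppp/"}},
)

# The profile job needs an output location for its statistics and a role
# that can read the data; this ARN stands in for the service role
# created earlier.
databrew.create_profile_job(
    Name="ppp-loans-profile",
    DatasetName="ppp-loans",
    OutputLocation={"Bucket": "my-ppp-bucket", "Key": "profile-output/"},
    RoleArn="arn:aws:iam::123456789012:role/service-role/AWSGlueDataBrewServiceRole",
)

# Kick off the profiling run.
databrew.start_job_run(Name="ppp-loans-profile")
```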
Most of the columns are 100% valid, which would be fantastic if true, although I suspect unknown values may be represented by a value which DataBrew does not recognize as "unknown" or "invalid". Also, ZIP Code was identified as a numeric column, which is a very common mistake made by data profilers. Many US ZIP Codes start with zero, and need to be treated as strings in order to retain that leading zero. State claims to be 100% valid, so let's take a look at the values. Of the 20,000 records profiled, all were in Kansas. We're going to need to try a random sample somehow.

Cities are where the fun usually begins. Looking at the Top 50 values, we see that there is inconsistent casing, and DataBrew treats "OLATHE" and "Olathe" differently. We see the same treatment with "LAWRENCE" and "Lawrence", too. That's something we can try to fix in our data prep. Trivia note: "Pittsburg" is spelled correctly here; only Pittsburgh, PA has the "h" at the end. That's a good start, but let's see what else we can find with a random sample.

To do a random sample, we need to create a project using the same dataset and configure the sampling to be 5,000 random rows. After the sampling is complete, we're taken to the Projects tab, where we can review the sampled data.

![]()

Right away we can see that "Not Available" and "N/A" are very common answers, and we need to work with our business partners to decide if these are values we want to count, values we want to convert to a different value, or values we want to treat as invalid results.

Looking at some of the ZIP Codes, we can see that the column was profiled as a number, and some of the MA ZIP Codes lost their leading 0. We'll need to change the column type and put the leading 0 back using a transformation. Looking at the State column, the random sampling did improve the sample somewhat: we now have 5 states represented instead of just one.

![]()
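The sampling project can be scripted, too. A DataBrew project has to be attached to a recipe, so this sketch creates a one-step recipe first; the step is the casing fix we noted for the City column. The operation name and parameters come from my reading of the DataBrew recipe reference, so treat them as an assumption rather than verified syntax.

```python
import boto3

databrew = boto3.client("databrew")

# One recipe step: upper-case the City column so "Olathe" and "OLATHE"
# collapse to one value. The Operation/Parameters syntax is an assumption.
databrew.create_recipe(
    Name="ppp-loans-recipe",
    Steps=[
        {"Action": {"Operation": "UPPER_CASE",
                    "Parameters": {"sourceColumn": "City"}}}
    ],
)

# Work on 5,000 randomly sampled rows instead of the default first-N rows.
databrew.create_project(
    Name="ppp-loans-project",
    DatasetName="ppp-loans",
    RecipeName="ppp-loans-recipe",
    Sample={"Type": "RANDOM", "Size": 5000},
    RoleArn="arn:aws:iam::123456789012:role/service-role/AWSGlueDataBrewServiceRole",
)
```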
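The leading-zero repair is also easy to prototype outside DataBrew before committing it to a recipe. Here is a minimal pandas sketch, assuming a hypothetical ppp_loans.csv with a ZipCode column:

```python
import pandas as pd

# Read ZIP Codes as strings so pandas never strips a leading zero.
df = pd.read_csv("ppp_loans.csv", dtype={"ZipCode": "string"})

# If the values already lost their leading zero upstream, left-padding
# to 5 digits restores it ("2134" becomes "02134"); 9-digit ZIP+4
# values are longer than 5 characters and pass through unchanged.
df["ZipCode"] = df["ZipCode"].str.zfill(5)

print(df["ZipCode"].head())
```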
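Finally, before deciding with our business partners what to do with the "Not Available" and "N/A" answers, we can measure how common they are. Another pandas sketch under the same assumptions; the list of placeholder values is mine, based only on what the sample showed:

```python
import pandas as pd

# Placeholder answers spotted in the sampled data; extend as needed.
UNKNOWN_VALUES = ["Not Available", "N/A"]

df = pd.read_csv("ppp_loans.csv", dtype="string")

# Count the placeholders per column so the decision (keep, convert,
# or invalidate) is made with actual numbers.
counts = df.isin(UNKNOWN_VALUES).sum().sort_values(ascending=False)
print(counts.head(10))

# One possible choice: convert the placeholders to real missing values.
df = df.replace(UNKNOWN_VALUES, pd.NA)
```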