OLD - Te Hiringa - Reo Booklet - Project Summary
Te Hiringa wants us to build the "Reo Booklet". This year it will be an online job title directory, but they want to add new batches of words year after year to build up a dictionary of sorts. There are hundreds of job titles they would like to include and keep adding to over time. Each English word will have a few te reo versions of the job title, and each te reo word will have an audio button.
The directory will be filterable and searchable in English and te reo.
Glossary:
- simple layout and design as per the last sections in the booklet (see screenshot)
- Maybe more than one translation for one English word - need the ability to accommodate this
- Filterable by category, e.g. Government Ministries, Government Agencies, Government Ministers
- audio file button for each translation
Uploading:
- ability to add in bulk from spreadsheet (hundreds at a time) - want to add 200 new words/year
Dev Solution:
- We will have a blog "job_titles" holding all the data.
- within the blog we will create a blog post for each English term
- within the blog post content we will store the te reo terms and audio file URLs using some sort of convention (generated via the API after CSV upload)
Data Mapping:
We will need a convention to store the following buckets:
- English term
- Te Reo term
- Te Reo audio file
\
Unfortunately we cannot upload the audio file as an attachment (not supported by NB). ~~- so we will need to have these uploaded somewhere and instead we will store the URL.~~
We will upload them to an Amazon S3 bucket/folder for hosting and store the URL somewhere on the blog post.\
\
On each blog post:
- blog post headline = English term
- blog post content = Te reo term(s) and audio file URL(s) (comma-separated, piped, or some other convention)
- maybe store the URL in the page excerpt or something - depends on where John can stash URLs via the API.
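To make the convention concrete, here is a rough sketch of what a piped format and the front-end parsing could look like - the "::" separator, the example bucket URL, and the placeholder terms are all illustrative only, nothing is decided yet:

```javascript
// Hypothetical convention (not final): each translation is "TE_REO_TERM::AUDIO_URL",
// with multiple translations separated by a pipe.
const postContent =
  "TE REO TERM ONE::https://example-bucket.s3.amazonaws.com/te-reo-term-one.mp3" +
  "|TE REO TERM TWO::https://example-bucket.s3.amazonaws.com/te-reo-term-two.mp3";

// Split the stored content back into { term, audioUrl } pairs for rendering.
const translations = postContent.split("|").map((segment) => {
  const [term, audioUrl] = segment.split("::");
  return { term: (term || "").trim(), audioUrl: (audioUrl || "").trim() };
});
// => [{ term: "TE REO TERM ONE", audioUrl: "https://.../te-reo-term-one.mp3" }, ...]
```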
————————————————————————————————————
We will load all entries initially - we shouldn't need to use the API with only a couple of hundred entries for now - maybe in a couple of years (once they get to 1,000 or more) we can introduce API loading.
API for loading/searching the listings:
If we only have ~200 entries at the moment, I don't think we will need the API yet. Eventually we may want it, so I'm not sure whether to include it now or add it at a later date. For example, I just checked and was able to load 1,000 blog posts at once, but the page took a couple of seconds to load. So we may be able to get better performance (quicker initial page load) by using the API to load entries in as the user scrolls or clicks "Load more" when viewing the unfiltered list. We will need the whole data set available when they search/filter, so we might need a loading screen while we fetch all the results.
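If we do go down the API route later, a minimal "Load more" sketch could look something like the below. The /api/job_titles endpoint, the page parameter, and the response shape are assumptions for illustration only - the real calls would go through whatever NationBuilder endpoint (or proxy) John sets up:

```javascript
// Minimal "Load more" sketch. Endpoint name, paging param, and response
// shape ({ results: [...], nextPage: n | null }) are assumptions.
let nextPage = 1;

async function loadMore(listEl) {
  if (nextPage === null) return; // nothing left to load
  const res = await fetch(`/api/job_titles?page=${nextPage}`);
  const data = await res.json();

  data.results.forEach((entry) => {
    const li = document.createElement("li");
    li.textContent = entry.english; // English term shown in the listing
    li.dataset.search = `${entry.english} ${entry.teReo}`.toLowerCase();
    listEl.appendChild(li);
  });

  nextPage = data.nextPage; // null when there are no more pages
}

document.querySelector("#load-more")
  .addEventListener("click", () => loadMore(document.querySelector("#job-list")));
```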
\
Javascript text field search:
- Text field where users can type to filter (similar to FASD-CAN, ACE, Translator Register)
- This will check a data attribute on each entry and show/hide it depending on whether there is a string match.
We usually have these search automatically on each keystroke - but if the result set is really large and/or performance is jerky, we could implement a "Search" button to trigger the filtering, with a loading screen while it runs.
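A minimal sketch of the keystroke search, assuming each entry carries a data-search attribute holding its English and te reo text (the attribute name and input ID are assumptions):

```javascript
// Show/hide entries based on a substring match against data-search.
const searchInput = document.querySelector("#directory-search");

searchInput.addEventListener("input", () => {
  const query = searchInput.value.trim().toLowerCase();
  document.querySelectorAll("[data-search]").forEach((entry) => {
    const match = query === "" || entry.dataset.search.includes(query);
    entry.style.display = match ? "" : "none";
  });
});
```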
\
Filters:
- Similar to how we do grids elsewhere - we would handle these with tags on the blog posts.
- What type of filters would be used here?
- (dropdown selects, toggles)
- What filter name/categories do they want?
- Government Ministries, Government Agencies, Government Ministers
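Rough sketch of a dropdown category filter, assuming each entry lists its tags in a data-tags attribute and the dropdown option values match the tag names (both assumptions):

```javascript
// Filter entries by the category tag selected in a dropdown.
const categorySelect = document.querySelector("#category-filter");

categorySelect.addEventListener("change", () => {
  const selected = categorySelect.value; // "" means "All categories"
  document.querySelectorAll("[data-tags]").forEach((entry) => {
    const tags = entry.dataset.tags.split(",").map((t) => t.trim());
    const match = selected === "" || tags.includes(selected);
    entry.style.display = match ? "" : "none";
  });
});
```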
\
Uploading entries to the directory:
- We could either write a script to process a CSV, or
- we could create a front-end admin page where they can upload CSV data and have it processed that way (rough sketch below)
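Rough sketch of the admin-page option - just reading the uploaded CSV in the browser and turning rows into objects. The element ID and column order are assumptions, a real version would need a proper CSV parser for quoted cells, and actually creating the blog posts would happen via whatever API access we set up:

```javascript
// Read an uploaded CSV (english,te_reo,audio_url per row) and hand each row off.
document.querySelector("#csv-upload").addEventListener("change", async (event) => {
  const file = event.target.files[0];
  if (!file) return;

  const text = await file.text();
  const rows = text.trim().split("\n").slice(1); // skip the header row

  for (const row of rows) {
    // Naive split - assumes no commas inside cells; use a CSV library for real data.
    const [english, teReo, audioUrl] = row.split(",").map((cell) => cell.trim());
    console.log({ english, teReo, audioUrl }); // post creation via the API would go here
  }
});
```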
\
CSV format:
- Columns for English, te reo, and the audio file URL - see the sample after this list.
- Potentially a column for each optional filter (true/false)? (which would add a tag)
- alternatively they can apply filters after the entries are uploaded into NB
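A sample of what that CSV could look like - the column names, filter columns, bucket URL, and placeholder values are illustrative only:

```csv
english,te_reo,audio_url,government_ministries,government_agencies,government_ministers
Chief Executive,TE REO TERM,https://example-bucket.s3.amazonaws.com/te-reo-term.mp3,false,true,false
```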
\
Audio hosting:
- Unfortunately NB does not accommodate audio files as page attachments, so they will need to provide the URL of the file in the CSV.
- Do we need to figure out where they are uploading these files? An AWS bucket or something? Do we want them to upload the files themselves, or do we need to create a front-end uploader for them in the site which returns the URL?
- We will upload for now - if they want an uploader we can build that in the future, but there won't be enough time to provide that before mid-September.
————————————————————————————————————
Rough Dev Estimate:
Styling the Front-end - 25 hours
- Inflated rough estimate based on our recent changes to the translator grid, allowing for styling and potential revisions
Search and filters - 15 hours
- generate toggle and dropdown filters - show/hide results depending on selections
- text field search - show/hide results depending on if there is a string match
CSV Uploader - 15 hours
- A simple admin page where they can upload a CSV in a predetermined format containing the English terms, Te Reo terms, audio file URLs, and any filter data
Audio file hosting - 5 hours
- We cannot host audio files in NationBuilder as page attachments, so we will want to spin up an AWS S3 bucket (or similar) to store these files.
- For simplicity's sake, it would be good if they could provide us with these audio files under a specific naming convention, maybe something like "TE_REO_TERM.xxx", and then we can upload them to the bucket ourselves.
- After uploading the audio files we could provide the client with a CSV file of all the URLs, and they could use this as the starting point for filling in all the other data they need to add (English term, Te Reo term, filter data, etc.) - see the sketch after this list.
- Store the URL in the excerpt field (or something similar), and block robots via a meta tag added through the layout/template.
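To illustrate the handover, a small Node.js sketch that turns a folder of audio files into the starter CSV of URLs - the bucket URL, the .mp3 extension, and the column layout are assumptions:

```javascript
// Node.js sketch: list local audio files and emit a starter CSV of their
// eventual S3 URLs. Bucket URL and column layout are assumptions.
const fs = require("fs");
const path = require("path");

const BUCKET_URL = "https://example-bucket.s3.amazonaws.com/reo-booklet-audio";
const audioDir = "./audio";

const rows = fs.readdirSync(audioDir)
  .filter((name) => path.extname(name) === ".mp3")
  .map((name) => {
    // File names follow the agreed convention, e.g. "TE_REO_TERM.mp3".
    const teReoTerm = path.basename(name, ".mp3").replace(/_/g, " ");
    return `"","${teReoTerm}","${BUCKET_URL}/${encodeURIComponent(name)}"`;
  });

fs.writeFileSync(
  "audio-urls.csv",
  ["english,te_reo,audio_url", ...rows].join("\n")
);
```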
Total Rough Dev estimate: 60 hours\
Also might be worth mentioning - after we build out all this - future batches should only take a couple of hours to get uploaded.