One way to lend dramatic narrative and an element of fun to learning about Case Law is to pick out choice quotes from Judges that encapsulate the drama of the matter at hand. This is what our team intended to do with a section of the website that would offer a pull quote from one of the Judges explaining why they ruled a certain way. This offered up the chance to humanize and personalize the drama of the courtroom, in addition to identifying the quirks of each of the Judges. I’m unsure if this part of the website has been implemented due to time constraints, but in the above image one can see text for the landmark cases, which I pulled from the “Decision” data from Joanne and then pared down to a section of under 350 characters that would encapsulate the case. Some examples follow: “Brennan found that it was implicit in the history of the First Amendment that obscenity, matter that was utterly without redeeming social importance, should be restrained.” “Justice Douglas wrote the majority opinion and held the postal law unconstitutional because it required an official act (returning the reply card) as a limitation on the unfettered exercise of the addressee’s First Amendment rights.”
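A minimal sketch of that 350-character selection step, assuming the decision text is plain prose (the function name and the sentence-splitting heuristic are mine for illustration, not part of our actual workflow, which was manual):

```python
import re

MAX_LEN = 350  # the character limit we used for pull quotes

def pull_quote(decision_text, max_len=MAX_LEN):
    """Return the longest run of leading sentences that fits under max_len."""
    # Naive sentence split: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", decision_text.strip())
    quote = ""
    for sentence in sentences:
        candidate = f"{quote} {sentence}".strip()
        if len(candidate) > max_len:
            break
        quote = candidate
    return quote
```

In practice I read each Decision and chose the passage by hand; a heuristic like this would only be a starting point for a human editor.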
Down the line, if someone were able to spend the time and draw direct quotes from the courtroom, much like this website does (https://www.mtsu.edu/first-amendment/article/305/texas-v-johnson), it would give each landmark case the attention it deserves. There is human drama in addition to all the technical text, and one hallmark of our project was highlighting these elements.
Another part of our “nice to haves” that would be very interesting to implement down the line is a way to track the changing positions of Judges across time. As far as I know, there aren’t other websites that do this. But if one were able to highlight the political position of Judges across a spectrum of cases and decades, it would be noteworthy. As far as charting the prevailing politics of the time, it would be useful for students of law to navigate legal theory by how it applied to different Judges, “textualist vs. originalist,” for example. On the other hand, it might only be seen as necessary to the fine-grained scholarship that very few actively pursue.
Near the end of the project I was charged with identifying how the landmark cases should be grouped according to “Eras” and then creating the description text to give them a thematic edge. I opted to bundle these by decade because it just makes more intuitive sense to study history this way.
To some degree this was done to coincide with World Wars and socio-political events in the U.S. Finding a narrative arc was a fun project, but so was really homing in on these landmark cases and finalizing which Topic they belonged in. I wish I could have done the same with the total number of our cases, but alas, that would have taken too long. What I gained from this project was a clearer picture of the evolution of cases, and it closely linked the actions of individuals with the course of legal history. For example, the burning of draft cards or the use of obscene language on clothing to protest the draft are not only substantial acts of protest, but stand out as landmark cases which have moved forward the way we think about the destruction of government property and what we are allowed to wear in public. It’s the double or even triple duty of these cases that fascinates me.
It should be noted, however, that even if campaign finance reform became a central theme in the 2000s, it doesn’t mean that the race and civil rights issues of the ’60s have been solved. That’s the problem with focusing on a single theme per decade: as a pedagogical tool it can sometimes obscure the legal struggles that are not garnering the most attention. What would be useful for further study would be to show “cases in progress” with the Supreme Court, to see how long and drawn out some of these struggles really are.
Earlier on in the project, we had to define which of the cases in American Legal History dealt with Freedom of Speech. It was from here that the rest of the project followed. Joanne provided a list of cases, and it was my job to go back over them and manually check whether we should include them in the project.
The first part of this was double-checking whether each was a landmark case or not. We eventually ended up with around 65. Then we had to see if Freedom of Speech was really at issue in these cases. Again, this is a subjective, manual process, and anyone else who undertook this project could have ended up with a relatively different list. We had to make some hard decisions about what to include in our final list. Was a freedom of religion case an FOS case? What if it involved talking about religion in public, or handing out pamphlets in school? Essentially it came down to identifying how the Judge ruled on a certain case. If FOS was at issue, no matter the subject, we included it. If it was clearly Freedom of Religion or Freedom of Press, we gave it a second look.
Because we were short of time, we relied heavily on the frequency of the phrase “freedom of speech” to identify cases to include. There were a number of bundled topics through which we sifted for FOS cases; the following are a few examples: “legislative investigations: concerning internal security only,” “federal or state internal security legislation: Smith, Internal Security, and related federal statutes,” and “loyalty oath or non-Communist affidavit (other than bar applicants, government employees, political party, or teacher).” These to some degree mirrored our Topic Modeling results.
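That frequency check could be sketched like this (a toy version; the threshold, function name, and data shapes are my own assumptions, not the exact process we used, which involved human judgment case by case):

```python
PHRASE = "freedom of speech"
THRESHOLD = 3  # hypothetical cutoff; in practice this was a judgment call

def fos_candidates(cases, phrase=PHRASE, threshold=THRESHOLD):
    """cases: dict mapping case name -> full opinion text.

    Return the names of cases whose text mentions the phrase at
    least `threshold` times (case-insensitive)."""
    return [
        name for name, text in cases.items()
        if text.lower().count(phrase) >= threshold
    ]
```

A pass like this only flags candidates; each flagged case still needed the manual review described above before making the final list.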
The following three posts will detail the back-end work and data sorting I helped out with in the late stages of the project.
The Primary Category designation for ~500 FOS cases was necessary for creating the “Explore by Topic” page of our website. Eva conducted a Topic Modeling survey of all of the cases, matching them up with 20 different topics. Each of the topics contained a set of words which established a theme. It would take keen subject-area knowledge to really get this right from the beginning, but I feel we were able to group all the cases in some logical and thematic sense.
Each of the cases was given a probability with which it belonged to each category. For the most part, if a case had a probability above 50%, I tossed it into that category. There were a few categories which struck me as containing a lot of formal language, and for cases which hovered around 50%–60% and below in those categories, I took their next-highest category and put them there.
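In code, that assignment rule might look like this (the topic names, the 60% cutoff, and the set of “formal language” topics are illustrative stand-ins, not our exact values):

```python
# Hypothetical set of topics that read as mostly procedural/formal language.
FORMAL_TOPICS = {"procedural_language"}

def assign_topic(topic_probs, formal=FORMAL_TOPICS, cutoff=0.6):
    """topic_probs: dict mapping topic name -> probability for one case.

    Pick the highest-probability topic; but if that topic is a
    'formal language' topic sitting below the cutoff, fall back to
    the next-highest topic instead."""
    ranked = sorted(topic_probs.items(), key=lambda kv: kv[1], reverse=True)
    top, p = ranked[0]
    if top in formal and p < cutoff and len(ranked) > 1:
        return ranked[1][0]
    return top
```

The actual sorting was done by hand in a spreadsheet, informed by reading each case; this just captures the rule of thumb I applied.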
To determine which category a case belonged to, I conducted a quick survey of the case’s text by looking at the syllabus and summaries available online from websites such as https://www.mtsu.edu/first-amendment/encyclopedia/. This process is quick and dirty, and not always of the highest accuracy, but it’s important for bundling these in conjunction with input from the Topic Modeling survey. Naturally, topics like “school.religious.schools.student.establishment.religion.students.forum.program.university” lend themselves to clearer categories at first glance than do ones like “solicitation.charitable.fraud.paid.fee.organizations.requirement.telemarketers.circulators.north,” but this issue spoke to my general unfamiliarity with the wide variety of cases. It turned out there were indeed a large number of these kinds of cases, unbeknownst to me.
Spring break was not exactly a break (at least on my end). I continued refining the site/archive, installing and configuring plugins. Once we figured out our metadata and configured our Contributions forms, we entered data as a team: each fridge represented as a collection, and each item contributed for that fridge as an item in that fridge’s collection. We entered all the fridges in our database, representing all 5 boroughs. We prepared tutorials for how to contribute to the site. Next, we started getting contributions from our audience. It was fascinating that our site was functioning both technically and as intended – our archive kept growing, with our fridges now mapped out, represented via photos and stories.
In order to also host oral history items on our archive, I installed/configured the OHMS plugin suite and a new template for our site, as required by OHMS. We presented our site to class, refined our presentation, and finally presented at the GC Showcase. One of the best things about building projects (or any creative work, really) is sharing them – thanks so much to our classmates (fun and very constructive feedback) and everyone at the showcase! Thanks also to Dr. Maney, to Micki Kaufman, and to Stefano Morello, who were critical contributors to our project. And of course, to our fridge community!
Of course, props to my fantastic teammates. It was a pleasure to work with everyone – I looked forward to all our meetings! It was a lot of hard, yet very rewarding and enjoyable, work!
Week 9 – for several weeks now, we had been having meetings with fridge organizers. I installed Omeka Classic on Reclaim – the third install and the last, and the smoothest! At this point, I was very familiar with Omeka in general and its plugins, so everything went smoothly. Because we no longer had the server issues, the plugins also worked without any problems. Reclaim was also very responsive (thank you GC!! and thank you Reclaim!) and helpful with any questions I had during configuration or about the backend. The internet also was my friend while figuring things out 🙂
We also had meetings deciding how we would use Dublin Core metadata for our archive. While this may seem like a simple or straightforward task, it certainly was not. It took a lot of brainstorming and thinking through – yet it was also enjoyable (for me, at least). Thus we refined the metadata piece – what we had more broadly defined in our data management plan.
Week 8 – we worked on our Release/consent forms for contributions and on our logo. In preparation for our presentation and dissemination, I built our landing page on the Commons: https://nyccommunityfridgearchive.commons.gc.cuny.edu
Having gained access to our Reclaim account, I set up an email for us, configured our account, and installed Omeka S on Reclaim (due to a misunderstanding on my part – I thought we were only able to install Omeka S, not Omeka Classic). Omeka S is similar to Omeka Classic but not as sophisticated, and it has a completely different user interface. While Omeka Classic makes use of plugins, Omeka S has modules – and several of the plugins I had installed were not available as modules. In addition, the Collecting module, which replaces the Contributions plugin, for example, was not as functional as the latter. The Omeka S interface was more similar to a library database (which requires a certain familiarity to navigate), whereas with Omeka Classic, the interface is just like any website – which means anyone who lands on the page can easily navigate it and not be put off by the specific terminology/format of a library/catalog/archive/database. This may seem like yet another mishap, and in terms of all the extra time and effort I had to put into it, it somewhat is, but I did appreciate having learned more about Omeka through all this hands-on experience. On a very positive note, I did figure out that we were in fact going to be able to install Omeka Classic on Reclaim.
Week 7 – At this point, to resolve some of the issues I encountered with the plugins, I had quite a few exchanges with HostGator’s technical team (my website’s host). They did not prove very helpful, as the errors were related partly to Omeka but also partly to the server – one of the technical assistants told me that it was in fact a server issue. I also scheduled appointments with Stefano Morello, a Digital Fellow who is very knowledgeable about Omeka (interestingly, this coincided with our class recommendation of meeting/getting advice from Digital Fellows). He was super helpful. We were unable to solve the issue, though – as I’ve said, the issue was server related.
Many, many hours of work – web searches, consultations, etc. – and the Contributions plugin, which is so critical to our project, was not working! Just as I was getting extremely frustrated, an email arrived from Dr. Maney giving us the great news: the Reclaim accounts were here. If you know me, you know exactly the language I used when I responded right away – excitement and exclamations!
On the flip side, I learned a lot about Omeka during this process and felt readier for the next stage than I was for the first.
Week 6, we finished our Data Management Plan. It was a lot of intense work – we had a very productive Zoom meeting writing it up. The lecture and advice from Steve Zweibel proved very useful. I presented our DMP during class.
Because ours is an archive that uses metadata, we had to think about data management on two levels: management of the data we collect, and configuring metadata for the archive.
At this point, I had started looking into Dublin Core, so we tried to think of ways to align our naming practices and identifiers with DC. It became clear to me that with our Data Management Plan, we would be submitting a rough version of our metadata, and that we would be refining it further along. It also became clear that planning it out this way, although rough at that point, would be critical in guiding our teamwork.
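As an illustration, a contributed fridge item’s record might map onto Dublin Core elements like this (the element names come from the DC element set; every value here is invented, not from our actual archive):

```python
# A hypothetical Dublin Core record for one contributed fridge item.
# Element names follow the Dublin Core Metadata Element Set; the
# values are invented stand-ins for illustration only.
fridge_item = {
    "Title": "Community fridge photo, March 2021",
    "Creator": "Anonymous contributor",
    "Subject": "community fridge; mutual aid",
    "Description": "Photo of a stocked community fridge.",
    "Date": "2021-03-15",
    "Coverage": "Brooklyn, NY",  # spatial coverage: one of the 5 boroughs
    "Rights": "Shared with contributor consent (release form on file)",
}
```

Much of our brainstorming was exactly about decisions like these: which elements to require, how to name fridges consistently, and where borough/location information should live.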
Week 5, we finished our Work Plan. I wrote the Omeka/technical part of the document. We also started working on our Data Management Plan / Data Abstract & Dictionary. We met with Micki Kaufman and got great advice on our work/process.
Since we had decided not to pursue the New Media Lab hosting option, I suggested that, at least for the time being, we could host our archive on my personal site. By this time, Dr. Maney had informed us that CUNY would provide us with a Reclaim hosting option, though at that point the timeline was unclear. As we were (or I was) eager to start building the site, I installed Omeka on my site. I learned about all the plugin options and made suggestions to my team members as to which plugins might be useful to install. I installed and started configuring several plugins and setting up the website. I encountered several glitches and tried to figure out how to resolve them.