9 Natural Language Processing Trends in 2023

What is semantic analysis

SAP HANA Sentiment Analysis is ideal for analyzing business data and handling large volumes of customer feedback, support tickets, and internal communications alongside other SAP systems. This platform also provides real-time decision-making, which allows businesses to back their decision processes and strategies with robust data and incorporate them into specific actions within the SAP ecosystem. The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. Next, we ran experiments varying different hyperparameters until we obtained a better-performing model, in line with previous works. During experimentation, we used techniques such as early stopping and dropout to prevent overfitting. The models used in this experiment were LSTM, GRU, Bi-LSTM, and CNN-Bi-LSTM with Word2vec, GloVe, and FastText.

Sentiment analysis is a Natural Language Processing field that increasingly attracts researchers, government authorities, business owners, service providers, and companies seeking to improve products, services, and research. However, research on sentiment analysis of YouTube comments related to military events is limited, as current studies focus on different platforms and topics, making it challenging to understand public opinion. As a result, we used deep learning techniques to design and develop a YouTube user sentiment analysis of the Hamas-Israel war. To that end, we collected comments about the Hamas-Israel conflict from YouTube news channels. Next, significant NLP preprocessing operations were carried out to enhance our classification model, followed by experiments with DL algorithms.

Additionally, SAP HANA has upgraded its capabilities for storing, processing, and analyzing data through built-in tools like graphs, spatial functions, documents, machine learning, and predictive analytics features. Talkwalker helps users access actionable social data with its comprehensive yet easy-to-use social monitoring tools. For instance, users can define their data segmentation in plain language, which gives a better experience even for beginners. Talkwalker also goes beyond text analysis on social media platforms, diving into lesser-known forums, new mentions, and even image recognition to give users a complete picture of their online brand perception. The following table provides an at-a-glance summary of the essential features and pricing plans of the top sentiment analysis tools. On a theoretical level, sentiment analysis's innate subjectivity and context dependence pose considerable obstacles.

Similarly, for offensive language identification, the states include not-offensive, offensive untargeted, offensive targeted insult (group), offensive targeted insult (individual), and offensive targeted insult (other). Finally, the results are classified into their respective states, and the models are evaluated using performance metrics like precision, recall, accuracy, and F1-score. Sentiment analysis is a process in Natural Language Processing that involves detecting and classifying emotions in texts.

Sentiment analysis approaches

Hence, it is critical to identify which meaning suits the word depending on its usage. These tools can pull information from multiple sources and employ techniques like linear regression to detect fraud and authenticate data. They also run on proprietary AI technology, which makes them powerful, flexible, and scalable for all kinds of businesses.

This model surpasses benchmarks by a large margin, earning a global F1-score of 76% on coarse-grained classification, 51% on fine-grained classification, and 73% on implicit and explicit classification. Work on identification of offensive language using transfer learning contributed results to the Offensive Language Identification shared task at EACL 2021. Pretrained models such as CNN + Bi-LSTM, mBERT, DistilmBERT, ALBERT, XLM-RoBERTa, and ULMFiT were used for classifying offensive language in Tamil, Kannada, and Malayalam code-mixed datasets. Even without text preprocessing, ULMFiT achieved strong F1-scores of 0.96 and 0.78 on Malayalam and Tamil, and the DistilmBERT model achieved 0.72 on Kannada15. While previous works have explored sentiment analysis in Amharic, the application of deep learning techniques represents a novel advancement. By leveraging the power of deep learning, this research goes beyond traditional methods to better capture Amharic political sentiment.

Figure 9 shows that the difference between the training and validation accuracy is nominal, indicating that the model is not overfitted and is hence capable of generalizing to previously unknown data in the real world. To get the model to this ideal state, the researcher employed regularization approaches like dropout, as discussed above. From Fig. 9, it can also be seen that after adding MIBE neologism recognition to the model in Fig. 7, the performance of each model improves; in particular, the accuracy and F1 value of RoBERTa-FF-BiLSTM, RoBERTa-FF-LSTM, and RoBERTa-FF-RNN increase by about 0.2%. This also demonstrates that danmaku text contains a large number of non-standard, creative web-popular neologisms, which can negatively affect the model's semantic comprehension and sentiment categorization ability if they are not recognized.

We hope that future work will enable the media embedding to directly explain what a topic exactly means and which topics a media outlet is most interested in, thus helping us understand media bias better. Second, since there is no absolute, independent ground truth on which events have occurred and should have been covered, the aforementioned media selection bias, strictly speaking, should be understood as relative topic coverage, which is a narrower notion. Third, for topics involving more complex semantic relationships, estimating media bias using scales based on antonym pairs and the Semantic Differential theory may not be feasible, which needs further investigation in the future. Sentiment analysis tools show the organization what it needs to watch for in customer text, including interactions or social media. Patterns of speech emerge in individual customers over time, and surface within like-minded groups — such as online consumer forums where people gather to discuss products or services.

  • When these are multiplied by the u column vector for that latent concept, it will effectively weigh that vector.
  • Potential strategies include the utilization of domain-specific lexicons, training data curated for the specific cultural context, or applying machine learning models tailored to accommodate cultural differences.
  • Finally, dropouts are used as a regularization method at the softmax layer28,29.

Figure 14 provides the confusion matrix for CNN-Bi-LSTM; each entry in a confusion matrix denotes the number of predictions where the model classified a class correctly or incorrectly. Out of the 500 sentences available for testing, CNN-Bi-LSTM correctly predicted the sentiment of 458. The misclassification rate, also known as classification error, shows the fraction of predictions that were incorrect. These Internet buzzwords contain rich semantic and emotional information, but are difficult for general-purpose lexical tools to recognize.
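As a quick sketch of how these metrics fall out of a confusion matrix: the per-class counts below are made-up toy numbers (the paper does not give its full matrix here), though the diagonal total of 458 out of 500 matches the figures quoted above.

```python
def confusion_metrics(cm):
    """cm[i][j] = number of instances of true class i predicted as class j."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    accuracy = correct / total
    return {"accuracy": accuracy, "misclassification_rate": 1 - accuracy}

# Toy 3-class matrix (positive / neutral / negative); 458 of 500 on the diagonal.
cm = [[150,   5,   3],
      [  6, 152,   4],
      [ 10,  14, 156]]
print(confusion_metrics(cm))  # accuracy 0.916, misclassification rate ~0.084
```

Accuracy and misclassification rate always sum to one, which is why papers usually report only one of the two.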


The Quartet on the Middle East mediates negotiations, and the Palestinian side is divided between Hamas and Fatah7. These technologies not only help to optimise the email channel but also have applications in the entire digital communication such as content summarisation, smart database, etc. And most probably, more use cases will appear and reinvent the customer-bank relationship soon.

To understand how social media listening can transform your strategy, check out Sprout’s social media listening map. It will show you how to use social listening for org-wide benefits, staying ahead of the competition and making meaningful audience connections. Social sentiment analytics help pinpoint when and how to engage with your customers effectively.

Section “Conclusion and recommendation” concludes the paper and outlines future work. Sentiment analysis, a crucial natural language processing task, involves the automated detection of emotions expressed in text, distinguishing between positive, negative, or neutral sentiments. Nonetheless, conducting sentiment analysis in foreign languages, particularly without annotated data, presents complex challenges9. While traditional approaches have relied on multilingual pre-trained models for transfer learning, limited research has explored the possibility of leveraging translation to conduct sentiment analysis in foreign languages.

While natural language processors are able to analyze large sources of data, they are unable to differentiate between positive, negative, or neutral speech. Moreover, when support agents interact with customers, they are able to adapt their conversation based on the customers’ emotional state which typical NLP models neglect. Therefore, startups are creating NLP models that understand the emotional or sentimental aspect of text data along with its context. Such NLP models improve customer loyalty and retention by delivering better services and customer experiences. Latent Semantic Analysis (LSA (Deerwester et al. 1990)) is a well-established technique for uncovering the topic-based semantic relationships between text documents and words.

Furthermore, its algorithms for event extraction and categorization cannot always perfectly capture the nuanced context and meaning of each event, which might lead to potential misinterpretations. By scraping movie reviews, they ended up with a total of 10,662 sentences, half negative and half positive. After converting all of the text to lowercase and removing non-English sentences, they used the Stanford Parser to split sentences into phrases, ending up with a total of 215,154 phrases. To classify sentiment, we remove the neutral score 3, then group scores 4 and 5 as positive (1) and scores 1 and 2 as negative (0). With the data as is, without any resampling, we can see that the precision is higher than the recall. If you want to know more about precision and recall, you can check my old post, "Another Twitter sentiment analysis with Python — Part4".
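The 1-to-5 score binning described above, plus the precision/recall comparison, can be sketched as follows (a toy illustration, not the authors' exact code; the sample scores are invented):

```python
def label_from_score(score):
    """Map a 1-5 review score to a binary sentiment label; neutral (3) is dropped."""
    if score == 3:
        return None          # removed from the dataset
    return 1 if score >= 4 else 0

scores = [1, 5, 3, 4, 2, 5]
labels = [lab for lab in map(label_from_score, scores) if lab is not None]
# the score of 3 is discarded, leaving five binary labels

def precision_recall(y_true, y_pred):
    """Precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)
```

Precision exceeding recall, as reported above, means the model makes few false-positive errors relative to the positives it misses.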

Brands like MoonPie have found success by engaging in humorous and snarky interactions, increasing their positive mentions and building buzz. By analyzing how users interact with your content, you can refine your brand messaging to better resonate with your audience. Understanding how people feel about your business is crucial, but knowing their sentiment toward your competitors can provide a competitive edge. Social media sentiment analysis can help you understand why customers might prefer a competitor’s product over yours, allowing you to identify gaps and opportunities in your offerings. For example, Sprout users with the Advanced Plan can use AI-powered sentiment analysis in the Smart Inbox and Reviews Feed. This feature automatically categorizes posts as positive, neutral, negative or unclassified, simplifying sorting messages and setting automated rules based on sentiment.

TextBlob returns polarity and subjectivity of a sentence, with a Polarity range of negative to positive. The library’s semantic labels help with analysis, including emoticons, exclamation marks, emojis, and more. Hannah Macready is a freelance writer with 12 years of experience in social media and digital marketing. Her work has appeared in publications such as Fast Company and The Globe & Mail, and has been used in global social media campaigns for brands like Grosvenor Americas and Intuit Mailchimp. In her spare time, Hannah likes exploring the outdoors with her two dogs, Soup and Salad.


In 2021, the focus has shifted to understanding intent and behavior, and the context – semantics – behind them. The first generation of Semantic Web tools required deep expertise in ontologies and knowledge representation. As a result, the primary use has been adding better metadata to websites to describe the things on a page. It requires the extra step of filling in the metadata when adding or changing a page. Several vendors, including Bentley and Siemens, are developing connected semantic webs for industry and infrastructure that they call the industrial metaverse.

This step gradually labels the instances in a workload in order of increasing hardness. GML fulfills gradual learning by iterative factor inference over a factor graph consisting of the labeled and unlabeled instances and their common features. At each iteration, it typically labels the unlabeled instance with the highest degree of evidential certainty. Sentiment analysis is a highly powerful tool that is increasingly being deployed by all types of businesses, and there are several Python libraries that can help carry out this process. Or Duolingo, which, once it learned its audience valued funny content, doubled down on its humorous tone and went fully unhinged.
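The gradual-labeling loop can be caricatured in a few lines of standard-library Python. This is not the actual GML factor-graph inference, just a toy in which "evidence" for an instance is a signed vote from every labeled instance sharing features with it, and the instance the evidence is most certain about is committed first:

```python
def gradual_label(instances, seeds):
    """instances: {id: set of features}; seeds: {id: +1 or -1}."""
    labels = dict(seeds)
    unlabeled = [i for i in instances if i not in labels]

    def evidence(i):
        # signed vote from every labeled instance sharing a feature with i
        return sum(labels[j] * len(instances[i] & instances[j]) for j in labels)

    while unlabeled:
        # commit the label of the instance we are currently most certain about
        best = max(unlabeled, key=lambda i: abs(evidence(i)))
        labels[best] = 1 if evidence(best) >= 0 else -1
        unlabeled.remove(best)
    return labels

instances = {
    "r1": {"great", "fun"},
    "r2": {"boring", "awful"},
    "r3": {"fun"},
    "r4": {"boring"},
}
labels = gradual_label(instances, seeds={"r1": 1, "r2": -1})
```

Each newly labeled instance becomes evidence for the remaining ones, which is the "gradual" part: easy instances are resolved first and then support the harder ones.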

The cross-entropy loss function is utilized for back-propagation training, and accuracy is employed to demonstrate the model's classification ability. Ghorbani et al.10 introduced an integrated architecture of CNN and Bidirectional Long Short-Term Memory (Bi-LSTM) to assess word polarity. Despite initial setbacks, performance improved to 89.02% when Bidirectional LSTM replaced Bidirectional GRU. Mohammed and Kora11 tackled sentiment analysis for Arabic, a complex and resource-scarce language, creating a dataset of 40,000 annotated tweets.

Multi-task learning models now effectively juggle multiple ABSA subtasks, showing resilience when certain data aspects are absent. Pre-trained models like RoBERTa have been adapted to better capture sentiment-related syntactic nuances across languages. Interactive networks bridge aspect extraction with sentiment classification, offering more complex sentiment insights. Additionally, novel end-to-end methods for pairing aspect and opinion terms have moved beyond sequence tagging to refine ABSA further. These strides are streamlining sentiment analysis and deepening our comprehension of sentiment expression in text55,56,57,58,59. This feature refers to a sentiment analysis tool’s capability to analyze text in multiple languages.

Sentiment analysis can highlight what works and doesn’t work for your workforce. With the help of artificial intelligence, text and human language from all these channels can be combined to provide real-time insights into various aspects of your business. These insights can lead to more knowledgeable workers and the ability to address specific situations more effectively.

When the organization determines how to detect positive and negative sentiment in customer expressions, it can improve its interactions with the customer. By exploring historical data on customer interaction and experience, the company can predict future customer actions and behaviors, and work toward making those actions and behaviors positive. Another reason behind the sentiment complexity of a text is to express different emotions about different aspects of the subject so that one could not grasp the general sentiment of the text. An instance is review #21581 that has the highest S3 in the group of high sentiment complexity. Overall the film is 8/10, in the reviewer’s opinion, and the model managed to predict this positive sentiment despite all the complex emotions expressed in this short text.

By undertaking rigorous quality assessment measures, the potential biases or errors introduced during the translation process can be effectively mitigated, enhancing the reliability and accuracy of sentiment analysis outcomes. One potential solution to address the challenge of inaccurate translations entails leveraging human translation or a hybrid approach that combines machine and human translation. Human translation offers a more nuanced and precise rendition of the source text by considering contextual factors, idiomatic expressions, and cultural disparities that machine translation may overlook. However, it is essential to note that this approach can be resource-intensive in terms of time and cost.

The class labels of sentiment analysis are positive, negative, mixed-feelings, and unknown state. Two researchers previously attempted to design a deep learning model for Amharic sentiment analysis. The CNN model designed by Alemu and Getachew8 was overfitted and did not generalize well from training data to unseen data. This problem was solved in this research by adjusting the model's hyperparameters, shifting it from overfitted to a fit that generalizes well to unseen data. The CNN-Bi-LSTM model designed in this study outperforms the LSTM model of Fikre19 with a 5% increase in performance. This work makes a major contribution by updating the state of the art in Amharic sentiment analysis with improved performance.

In the end, the GRU model converged to the solution faster, without large numbers of iterations to arrive at those optimal values. In summary, the GRU model for the Amharic sentiment dataset achieved an accuracy of 88.99%, precision of 90.61%, and recall of 89.67%. It indicates that the introduction of the jieba lexicon can cut Chinese danmaku text into more reasonable words, reduce noise and ambiguity, and improve the quality of word embedding. Framework diagram of the danmaku sentiment analysis method based on MIBE-RoBERTa-FF-BiLSTM.

Logistic regression is a classification technique and it is far more straightforward to apply than other approaches, specifically in the area of machine learning. In 2020, over 3.9 billion people worldwide used social media, a 7% increase from January. While there are many factors contributing to this user growth, the global penetration of smartphones is the most evident one1. Some instances of social media interaction include comments, likes, and shares that express people’s opinions. This enormous amount of unstructured data gives data scientists and information scientists the ability to look at social interactions at an unprecedented scale and at a level of detail that has never been imagined previously2. Analysis and evaluation of the information are becoming more complicated as the number of people using social networking sites grows.
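To make the logistic-regression claim concrete, here is a from-scratch toy trainer using only the standard library (an illustrative sketch with invented 1-D data, not any tool's actual implementation):

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression trained by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Toy separable data: one feature, boundary around x = 2.5
X = [[1.0], [2.0], [3.0], [4.0]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
```

The learned weight is positive and the bias negative, placing the decision boundary between the two classes; this simplicity is exactly why logistic regression is often the first classifier tried.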

Uber can thus analyze such Tweets and act upon them to improve the service quality. In the era of information explosion, news media play a crucial role in delivering information to people and shaping their minds. Unfortunately, media bias, also called slanted news coverage, can heavily influence readers’ perceptions of news and result in a skewing of public opinion (Gentzkow et al. 2015; Puglisi and Snyder Jr, 2015b; Sunstein, 2002).

what is semantic analysis

The blue dotted line's ordinate represents the median similarity to Ukrainian media. Constructing evaluation dimensions using antonym pairs in Semantic Differential is a reliable idea that aligns with how people generally evaluate things. For example, when imagining the gender-related characteristics of an occupation (e.g., nurse), individuals usually weigh between "man" and "woman", both of which are antonyms regarding gender. Likewise, when it comes to giving an impression of the income level of the Asian race, people tend to weigh between "rich" (high income) and "poor" (low income), which are antonyms related to income.
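The antonym-pair idea can be sketched with word embeddings: score a target word by how much closer it sits to one pole than the other. The vectors below are made-up toy embeddings, not the study's actual media embeddings.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (dot(u, u) ** 0.5 * dot(v, v) ** 0.5)

def semantic_differential(word_vec, pos_pole, neg_pole):
    """Score in [-2, 2]: positive means closer to pos_pole, negative to neg_pole."""
    return cosine(word_vec, pos_pole) - cosine(word_vec, neg_pole)

# Toy 2-D embeddings for the "rich" vs "poor" axis
rich = [0.9, 0.1]
poor = [-0.8, 0.2]
wealthy = [0.85, 0.05]
score = semantic_differential(wealthy, rich, poor)  # positive: leans "rich"
```

Averaging such scores over many words associated with a topic gives the kind of scale-based bias estimate described above.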

SE-GCN also emerged as a top performer, particularly excelling in F1-scores, which suggests its efficiency in dealing with the complex challenges of sentiment analysis. Sentiment analysis uses machine learning techniques like natural language processing (NLP) and other calculations such as biometrics to determine if specific data is positive, negative or neutral. The goal of sentiment analysis is to help departments attach metrics and measurable statistics to pieces of data so they can leverage the sentiment in their everyday roles and responsibilities. With the rise of artificial intelligence (AI) and machine learning, social media sentiment analysis tools have become even more sophisticated and accurate.

10 Best Python Libraries for Sentiment Analysis (2024) – Unite.AI

Posted: Tue, 16 Jan 2024 08:00:00 GMT [source]

Different machine learning and deep learning models are used to perform sentiment analysis and offensive language identification. Preprocessing steps include removing stop words, changing text to lowercase, and removing emojis. Embeddings are used to represent words and work better for pretrained deep learning models. Embeddings encode the meaning of a word such that words that are close in the vector space are expected to have similar meanings. Training the models produces accurate classifications, while validation prevents the models from overfitting; this is performed by dividing the dataset into train, test, and validation sets.
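The three-way split mentioned above can be sketched in a few lines (a generic illustration with arbitrary fractions, not the study's exact protocol):

```python
import random

def train_val_test_split(data, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle and divide a dataset into train / validation / test partitions."""
    items = list(data)
    random.Random(seed).shuffle(items)   # seeded for reproducibility
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
# an 80 / 10 / 10 split of 100 examples
```

The validation set steers choices like early stopping, while the test set is touched only once, for the final reported metrics.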


From the embedding layer, the input value is passed to the convolutional layer with 64 filters and a kernel size of 3, with a ReLU activation function. After the convolutional layer, there is a 1D max-pooling layer with a pool size of 4. The output from this layer is passed into the bidirectional layer with 64 units. The output was then passed into the fully connected layer with sigmoid as the binary classifier. Adam was used as the optimizer and binary cross-entropy as the loss function.
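To make the convolution, ReLU, and max-pooling steps concrete, here is a standard-library toy of those three operations on a single feature channel (illustration only; the actual model applies them with 64 learned filters via a deep learning framework):

```python
def conv1d(xs, kernel):
    """Valid 1D convolution (really cross-correlation, as in DL frameworks)."""
    k = len(kernel)
    return [sum(kernel[j] * xs[i + j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def maxpool1d(xs, size):
    """Non-overlapping max pooling; drops any ragged tail."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# Toy input sequence and a kernel-size-3 "edge detector" filter
signal = [0.0, 1.0, 3.0, 2.0, 0.0, 1.0, 4.0, 1.0]
feat = maxpool1d(relu(conv1d(signal, [1.0, 0.0, -1.0])), 4)
```

The pipeline shrinks the sequence at each stage, which is why the pooled features feeding the Bi-LSTM are much shorter than the original comment.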

We determined weighted subcriteria for each category and assigned scores from zero to five. Finally, we totaled the scores to determine the winners for each criterion and their respective use cases. We then applied three different text vectorization techniques, FastText, Word2vec, and GloVe, to the cleaned dataset obtained after finishing the preprocessing steps. The process of converting preprocessed textual data to a format that the machine can understand is called word representation or text vectorization. The architecture in Fig. 2 involves using LSTM, GRU, Bi-LSTM, and CNN-Bi-LSTM for sentiment analysis of YouTube comments.
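Before embedding lookup, each comment must become a fixed-length sequence of integer token IDs. A minimal sketch of that vectorization step (the special tokens and corpus below are invented for illustration; the study layers Word2vec/GloVe/FastText embeddings on top of such indices):

```python
def build_vocab(texts):
    """Assign an integer ID to every token; 0 = padding, 1 = unknown."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for text in texts:
        for token in text.split():
            vocab.setdefault(token, len(vocab))
    return vocab

def vectorize(text, vocab, maxlen=6):
    """Convert a text to a fixed-length sequence of token IDs."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in text.split()]
    ids = ids[:maxlen]                       # truncate long comments
    return ids + [0] * (maxlen - len(ids))   # pad short ones

corpus = ["the service was great", "the war news was sad"]
vocab = build_vocab(corpus)
vec = vectorize("the service was bad", vocab)  # "bad" is unseen -> <unk>
```

These integer sequences are what the embedding layer of the LSTM/GRU/Bi-LSTM/CNN-Bi-LSTM models consumes.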

As each dataset contains slightly different topics and keywords, it would be interesting to assess whether a combination of three different datasets could help to improve the prediction of our model. To evaluate time-lag correlations between sentiment (again, from the headlines) and stock market returns we computed cross-correlation using a time lag of 1 day. The results indicate that there is no statistically significant correlation between sentiment scores and market returns next day. However, there is weak positive correlation between negative sentiment at day t and the volatility of the next day. R-value of 0.24 and p-value below 0.05 indicate that the two variables (negative sentiment and volatility) move in tandem.
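The lag-1 cross-correlation check described above reduces to a Pearson correlation between day-t sentiment and the day-(t+1) series. A stdlib sketch with illustrative toy numbers (not the study's data, which reported r = 0.24 for negative sentiment vs. next-day volatility):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_corr(sentiment, series, lag=1):
    """Correlate sentiment at day t with series at day t + lag."""
    return pearson(sentiment[:-lag], series[lag:])

neg_sentiment = [0.2, 0.5, 0.3, 0.8, 0.4, 0.7]
volatility =    [1.0, 1.1, 1.6, 1.2, 2.0, 1.3]
r = lagged_corr(neg_sentiment, volatility)  # positive in this toy series
```

Shifting one series against the other by the lag is all "cross-correlation at lag 1" means here; significance testing of r would be a separate step.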

The given reading looks into how the 'live' part of live coding is defined and executed. It was particularly interesting to see such a spectrum of liveliness, as it was something we also talked about in class.

I remember in class we talked about how much of the code should be prepared or not, in order to balance making sure that the audience is constantly entertained and that not too much of the performance is preplanned. The reading mentions both styles of live coding, implying it depends on the style, the performer, the performance, and what definition of liveliness they wish to hold.

For such decision, I think it’s important to remember that “within live coding, the performer consciously adopts a medial position, actively maintaining the conditions that will keep the action dynamic.”

It was interesting to see the background / history of how the definition or range of art and music were pushed further. I think the part that stood out most to me was the turning point where art schools provided the environment in which students could try new things, becoming “the point of origin for interdisciplinary and multidisciplinary work.”

It’s also interesting to see how pop started, with “the principle that a good punk song only needed three chords applied just as much as the do-it-yourself attitude.” I think this is a mindset that still prevails, even in our course right now. It’s noteworthy that the combination of a good environment and a shift in people’s perspectives opened the field to new definitions of musicians and artists, and what they could do.

At some point, the author describes a live coder as somebody "concurrently playing and composing." The complexity of live coding is much better conveyed by this description, as it captures the essence of having to make decisions on the composition and arrangement AND the actual playing of the components at the same time. Based on this, the author goes on to explain the importance of notation in live coding, and how these notations serve as a way live coders can manipulate different aspects of the music. Reading this, I finally understood why the concept of 'live coding' itself gave me a lot of pressure. I think it may be because there are tools readily available, opening up so many possibilities. However, such a large number of options means that it's going to be hard to make a decision. But, in live coding, decisions have to be made quickly, on the spot, during the performance.

I also like how he wraps up saying that live coding “treads on uncommon path between oral and written culture, improvisation versus the composed, and the spontaneous versus the arranged.” This again holds the essence of having the minimum amount of plan done before the performance, and having to do both the arranging and the playing at the same time during the performance.

<How it started>

There was no end goal at first. I was trying combinations of different sounds, thinking about how they would sound if they worked as my intro, etc. Playing around, I came up with two combinations that I liked. They gave a vibe of somebody maybe being chased, or maybe being high. Then a game I watched a video of years ago popped into my head, so I decided to try making a visual similar to it. (It's an educational game, and this project, too, is far from promoting drug abuse.)

<Performance>

For the performance, I realized that it would be chaotic to go back and forth between the music and the visuals. At the same time, I wanted some aspect of live coding to be there. To get around this, I made the music as a complete piece, allowing myself to focus on the visuals and evaluating the codes according to the preset music.

I could not do any type of screen recording because whenever I tried, the Hydra code lagged so much that it completely stopped the video in the background and froze the screen. Because of that, some sounds of the music sound a little different in the video above.

<Tidal Cycles Code>


intro = stack [slow 2 $ s "[ ~ cp]*4", slow 2 $ s "mash*4"]
baseOne = stack [s "bd bd ht lt cp cp sd:2 sd:2", struct "<t(4,8) t(3,8,1)>" $ s "ifdrums" # note "c'maj f'min" # room 0.4]
baseTwo = stack[slow 2 $ s "cosmicg:3 silence silence silence" # gain 0.7, slow 2 $ s "gab" <| n (run 10)]
baseThree = stack[s "haw" <| n (run 4),  s "blip" <| n (run 13) # gain (range 0.9 1.2 rand)]
buildHigh = palindrome $ s "arpy*8" # note ((scale "augmented" "7 3 1 6 5") + 6) # room 0.4
buildMidOne = sound "bd bd ht lt cp cp sd:2 sd:2"
buildMidTwo = stack[s "<feel(5,8,1)>" # room 0.95 # gain 1.3 # up "1", s "feel" # room 0.95 # gain 1.3 # up "-2"]
buildLow = s "f" # stretch 2 # gain 0.75
explosion = slow 2 $ s "sundance:1" # gain 1.2
mainHigh = s "hh*2!4"
mainMid = stack[fast 1.5 $ s "bd hh [cp ~] [~ mt]", fast 1.5 $ s "<mash(3,8) mash(5,8,1)>" # speed "2 1" # squiz 1.1, s "circus:1 ~ ~ ~ "]
endBeep = s "cosmicg:3 silence silence silence" # gain 0.7

-- midi
midiIntro = ccv "2 4 -2 1" # ccn 0 # s "midi"
midiEnd = ccv "2" # ccn 0 # s "midi"
midiBase = ccv "127 0 0 64 127 0" # ccn 1 # s "midi"
midiBuild = ccv "40 10" # ccn 2 # s "midi"
midiMain = ccv "2" # ccn 2 # s "midi"
midiSlow = ccv "20 16 12 8 4" # ccn 3 # s "midi"

playMusic = do {
  d2 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 6, intro),
    (0, 42, fast 4 midiIntro),
    (6, 12, intro # gain 0.8)
  ];
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (6, 50, midiBase),
    (6, 42, baseOne),
    (12, 42, baseTwo),
    (18, 22, baseThree),
    (22, 26, baseThree # up 4),
    (26, 30, baseThree),
    (30, 34, baseThree # up 4),
    (34, 42, degradeBy(0.5) baseThree # gain 0.8),
    (42, 46, degradeBy(0.5) baseThree # gain 0.65),
    (46, 50, degradeBy(0.5) baseThree # gain 0.45)
  ];
  d4 $ qtrigger $ filterWhen (>=0) $ seqP [
    (42, 58, buildHigh),
    (46, 58, buildMidOne # gain 1.1),
    (50, 58, buildMidTwo),
    (50, 60, fast 6 midiBuild),
    (50, 58, buildLow),
    (58, 59, explosion)
  ];
  d5 $ qtrigger $ filterWhen (>=0) $ seqP [
    (60, 62, mainHigh),
    (60, 86, midiEnd),
    (60, 86, midiMain),
    (60, 86, midiSlow),
    (62, 84, mainMid)
  ];
  d6 $ qtrigger $ filterWhen (>=0) $ seqP [
    (68, 76, baseOne # gain 0.5),
    (68, 80, baseTwo # gain 0.5),
    (68, 68, baseThree # gain 0.5),
    (76, 86, midiEnd),
    (76, 78, slow 2 endBeep),
    (78, 82, degradeBy(0.7) $ slow 2 endBeep # gain 0.6),
    (82, 86, degradeBy(0.5) $ slow 3 endBeep # gain 0.5)
  ]
}

playMusic

hush

<Hydra Code>


s0.initVideo("/Users/mayalee/Desktop/Spring\ 2024/Live\ Coding/classPerformance/composition/runningVideo.mp4")

src(s0)
  .out(o0)
render(o0)

src(s0)
  .add(
    osc(200,1,0.9)
      .saturate(()=>cc[0])
      .modulate(noise(()=>Math.sin(cc[1] / 100 * time) * 2 + 1))
      .add(shape(30).scale(()=>cc[2]*20).rotate(2))
  )
  .out(o0)
render(o0)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate (()=>cc[0]*5)
  .modulate(noise(()=>Math.sin(0.01 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o0)
osc(0.001, 900, 0.8)
  .diff(o0)
  .scale(()=>cc[1] /3000 - 100)
  .modulate(o1, 0.1)
  .out(o2)
render(o2)

osc()
  .shift(0.1,0.9,0.3)
  .out(o2)
render(o2)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate (1)
  .modulate(noise(()=>Math.sin(0.001 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o2)
osc(0.001, 900, 0.8)
  .diff(o2)
  .rotate(()=>Math.sin(0.1 * time) * 2 + 1, ()=>cc[3] / 10)
  .scale(0.3)
  .modulate(o1, 0.1)
  .out(o1)
render(o1)

hush()

<Future Improvements>

When I started working on the visuals, I actually thought it would be a good idea to have the beat of the visuals slightly off from the music to create that psycho mood. Looking at the end result, I'm not sure if it was implemented in the way I intended. I think the idea was okay, but making something off in a harmonized way was not such an easy job. I also think there could've been more filler visuals. There are a part or two where the visuals are repetitive for a while; making more visuals for these parts, I think, would have made it more intense to watch.

I love how the author explains that in live coding, each of the programmer and the program becomes “mutually part of an open-ended and contingent process of both problem generation and problem solving.” Until now, although I had the understanding that live coding is all about showing the process, there was some pressure that the approach that will be shown ‘has to make sense’ and ‘build up’ towards something. The author’s word created a shift in my perspective that the beauty of live coding performances lies in that the process is not just about problem solving and making things better, but also about creating or running into problems and showing how the programmer deals with them through trials and errors.

Expanding on my initial thought that a live coding performance still has to 'build up' to create something fancy, I think I limited myself to thinking that there has to be an ultimate product I should be aiming for. Then I read the philosopher Paolo Virno's saying that "what characterizes the work of performing artists is that their actions have no extrinsic goal" and that "the purpose of their activity coincides entirely with its own execution." My understanding is that yes, the performance should show progress in some way, but the focus is not necessarily the end goal; it is more about making the best (or most interesting) moves for that instance and seeing where the performer can go from there.

Why I chose FoxDot:

I honestly started with clive because I'm a bit interested in the C language and wanted to explore something with it. However, there was a lack of information on clive. The documentation on installing it and composing music with it was not well done, making it hard to explore. So, I started looking at other options and found FoxDot, which is quite similar to TidalCycles. It also had better documentation to help beginners get started. (Although I had to change from clive to FoxDot, I still think this exploration was meaningful, as it highlights the importance of documentation, especially in less common fields such as live coding.) Also, FoxDot uses SuperCollider to output sound, which made it more familiar for me.

What is FoxDot:

FoxDot is an open-source live coding environment, designed specifically for live coding. It's written in Python, making it more intuitive for users. (At the same time, after looking at other environments our classmates have been exploring, I think there are many other options that are easier and more intuitive.)

Many live coders liked FoxDot mostly because it is written in Python. However, it has been explicitly announced that there will be no further developments/updates: only minor changes when issues are found.

Advantages and Disadvantages of FoxDot:

Personally, I think live coding environments that involve higher technology and reflect physical music composing more closely are harder for people like me (who don't have experience composing music) to understand and play around with. As somebody with a background in computer science, environments like FoxDot, where composition can be done in a more code-like manner, are much more approachable. While I was not able to get to the level of using it, FoxDot can be used along with Sonic Pi, making it more attractive for live coders.

The downside is that since it's not being updated, there are many issues with installing and running it. There were many clashes with Python versions, and I had to manually edit multiple files to get it running. Also, there's a limited number of sounds in FoxDot's sound library, which may limit the mood and style of music live coders can create.

Example Performance:

Here is a video of a live coding performance with FoxDot done in Momentum Tech Conference 2019, Karachi.

Short Performance:

Below is the code I prepared to show for our in class performance. I left out some parts by mistake, so hopefully this sounds better.

p1 >> play("x-o[--]")

p2 >> jbass(dur=3, sus=1)

progression = [0, 7, 2, 4]
rhythm = P[0.5, 0.5, 0.25, 0.25, 0.25, 0.25]
bassline = P[progression].stutter(len(rhythm))

d1 >> pluck(bassline, dur=rhythm, amp=0.5)

d2 >> pluck(bassline, dur=rhythm, amp=0.8, echo=0.5, room=0.5)

melody = P[0,2,4,5,7]

d3 >> pads(melody, dur=0.5, oct=(4,5))

d4 >> scatter(melody)

d5 >> arpy(bassline, dur=rhythm)

p1.stop()
p2.stop()
d1.stop()
d2.stop()
d3.stop()
d4.stop()
d5.stop()