Using asynchronous communication to support virtual faculty learning communities

Participants of the Workshop for New Physics and Astronomy Faculty (NFW) are likely to try evidence-based teaching practices, but they often face barriers to innovation that cause them to revert to traditional instruction. It is known that in-person Faculty Learning Communities (FLCs) can support faculty to successfully change their instruction. However, with NFW participants spread across the country, in-person FLCs are not possible. Faculty Online Learning Communities (FOLCs) are year-long, virtual communities that provide cohorts of NFW participants with a community of peers and ongoing support to help them overcome these barriers. One important FOLC communication channel is a private, Facebook-like platform that allows participants to share their struggles, ask questions, and support each other’s growth as teachers. We analyze one cohort’s interactions on this platform to better understand how it contributes to the achievement of the goals of the FOLC. We conclude that it is possible to create successful FLC interactions in an online environment.


I. INTRODUCTION
The Workshop for New Physics and Astronomy Faculty (NFW) [1] is designed to encourage recently hired faculty members to adopt research-based instructional strategies (RBISs) in their teaching. Previous research has shown that the NFW is successful at increasing knowledge of RBISs and that many faculty change their teaching because of the NFW [2]. However, those who implement reforms often do so with significant modifications to suggested best practices and/or revert to traditional instruction over time due to numerous barriers they face at their home institutions [2-4].
To better achieve effective, sustained implementation of RBISs, we offer NFW participants the opportunity to participate in a Faculty Online Learning Community (FOLC) [5] for a year following their NFW. FOLCs are virtual communities of NFW participants that are led by a facilitator. They are modeled after Faculty Learning Communities (FLCs) [6], which have been shown to promote successful research-based teaching changes among their participants [7]. FOLCs attempt to increase effective, sustained use of RBISs by nurturing an ongoing community centered on improving participants' teaching. For one year following their NFW, a FOLC cohort meets every other week for 90 minutes through video conferencing technology and interacts between meetings through a Facebook-like platform called Socialcast [8].
In general, it is challenging to create virtual communities that communicate as effectively as in-person communities [9]. We discuss our initial analysis of the Socialcast interactions of one FOLC cohort and argue that the FOLC participants do form a vibrant community in which they support each other to become better teachers.

II. SOCIALCAST FACILITATION AND QUESTIONS
Socialcast is a central component of the FOLC experience. It was implemented to provide FOLC participants with a means of asynchronous communication between virtual synchronous meetings. Our hope was that participants would use this platform to interact with each other around their teaching by asking questions, sharing concerns, providing resources, and celebrating successes. We also hoped that regular communication via Socialcast would help the participants build a feeling of mutual trust, respect, and community.
The FOLC facilitator used several techniques to encourage productive discussion on Socialcast:
Meeting preparation: Before each virtual meeting, the facilitator asked participants to post questions they had about the meeting's topic or (if applicable) that they would like addressed by a guest presenter.
Meeting follow-up: At the end of each meeting, the facilitator encouraged participants to follow up on the message board to share resources that they mentioned during the meeting, follow up on unanswered questions, and so on.
Encouraging updates: Between meetings, the facilitator encouraged participants to make posts about aspects of their teaching that they were excited or concerned about, especially with respect to something coming up that week.
Modeling behavior: The facilitator posted about his own struggles with teaching, new activities he was trying in the classroom, and so on. This provided participants with examples of how to share and be vulnerable with each other while positioning the facilitator more as a co-participant than as an expert with all the answers.
Sufficient posting: The facilitator tried to post enough to remain a genuine participant without posting so much that he discouraged participants from taking ownership of the discussions.
In our analysis, we look for evidence that these techniques were successful. We also investigate whether the participants 1. used Socialcast consistently during the FOLC, 2. used Socialcast to get help to improve their teaching, 3. received regular support from each other, and 4. formed a strong community.

III. COHORT, DATA SOURCE, AND METHODS
For this study, we analyze data from the first full-year FOLC cohort, which ran during the 2015-2016 academic year. This cohort had nine participants: five men and four women. All of them had five or fewer years of teaching experience when they joined the FOLC (five of them had two years or fewer). All were untenured faculty members from a wide variety of institutions, including two Doctoral Universities (Carnegie Classification R2), two Master's Colleges and Universities (M1 or M2), four Baccalaureate Colleges, and one Mixed Baccalaureate/Associate's College. Three of these institutions are public and six are private; they range in size from 700 to 36,000 students. The FOLC was facilitated by a tenured, male physics faculty member from a private, mid-sized Master's-granting university (author AR).
This FOLC cohort used Socialcast for asynchronous communication. Socialcast is a Facebook-like platform in which the FOLC research team created a private group accessible only to the nine participants, the facilitator, and the researchers (although the researchers never posted to the group). Users of the group (i.e., the participants and the facilitator) could create new posts, comment on posts, and "like" posts or comments. They could also upload files or link to external resources in their posts and comments. The data that we analyzed for this study include the content of all posts, comments, and likes made by this cohort on Socialcast.
To analyze the content of posts and comments, we developed a preliminary coding scheme, summarized in Table I. In developing the codes, we had some a priori expectations of what we would find (e.g., that participants would ask and answer questions) and of what would be important to the development of community (e.g., expressing concerns and engaging in social interactions). Based on these expectations, initial codes and their definitions emerged through discussions between two members of our research team (authors JCC and MHD) about the first month of Socialcast data. Once preliminary definitions were established, the researchers independently coded the data and met to reconcile their differences, which led to the scheme summarized in Table I. Note that the Collaboration code was added late in this process because collaboration did not appear in the data until the middle of the second semester (during this semester, participants engaged in teaching projects, and several of them chose to work on joint projects). Overall, our intention was to capture the breadth of ways that users might interact via Socialcast in a relatively small number of codes, so some of the codes (e.g., Share or Social) are broad and could be broken into subcodes; we defer more nuanced analysis to later studies.
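The reconciliation step described above can be illustrated with a short sketch: two coders' independent code assignments are compared, and any posts they tagged differently are surfaced for a reconciliation discussion. The code names follow Table I, but the post IDs and the particular labelings below are hypothetical illustrations, not actual study data.

```python
# Hypothetical sketch of surfacing coding disagreements for reconciliation.
# Each coder maps a post ID to the set of codes they assigned to that post.

def find_disagreements(coder_a, coder_b):
    """Return {post_id: (codes_a, codes_b)} for posts the coders tagged differently."""
    disagreements = {}
    for post_id in coder_a.keys() | coder_b.keys():
        codes_a = coder_a.get(post_id, set())
        codes_b = coder_b.get(post_id, set())
        if codes_a != codes_b:
            disagreements[post_id] = (codes_a, codes_b)
    return disagreements

# Invented example: the coders agree on post2 but differ on post1.
coder_a = {"post1": {"Question", "Concern"}, "post2": {"Share"}}
coder_b = {"post1": {"Question"}, "post2": {"Share"}}
print(find_disagreements(coder_a, coder_b))
```

In this invented example, only post1 would be brought to the reconciliation meeting, since the coders disagree on whether it also expresses a Concern.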

IV. RESULTS
In total, the Socialcast data comprised 230 posts, 777 comments, and 188 likes (1195 "interactions"), which were created between August 4, 2015 and May 27, 2016 (297 days). To better compare the posting patterns of the facilitator and the participants, we remove posts coded Meta (64 of the 71 of which were created by the facilitator, and virtually all of which involved logistical matters like scheduling meetings and speakers) and their associated comments and likes; doing so leaves 159 posts, 604 comments, and 163 likes. Overall, the facilitator made about 3.5 times as many posts and 1.6 times as many comments as the average participant. In total, the facilitator averaged about 3.3 non-Meta interactions per week, and participants averaged about 2.1.
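The per-user-week rates quoted above can be recovered from the totals with a small sketch. The non-Meta totals and the 297-day span are taken from this section; the split of the 926 interactions between the facilitator and the nine participants (140 vs. 786) is an assumed round number for illustration, chosen to reproduce the reported averages, not the actual Table II value.

```python
# Sketch of the interactions-per-user-week computation. The totals (159 posts,
# 604 comments, 163 likes over 297 days) come from the text; the facilitator/
# participant split below is an assumed illustration.

DAYS = 297
WEEKS = DAYS / 7  # about 42.4 weeks

def rate_per_user_week(total_interactions, n_users):
    """Average number of interactions per user per week over the study period."""
    return total_interactions / n_users / WEEKS

total = 159 + 604 + 163                        # 926 non-Meta interactions
facilitator_total = 140                        # assumed for illustration
participant_total = total - facilitator_total  # 786, shared by nine participants

print(round(rate_per_user_week(facilitator_total, 1), 1))   # ~3.3 per week
print(round(rate_per_user_week(participant_total, 9), 1))   # ~2.1 per user-week
```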
Table II summarizes the average numbers of interactions per user and per user-week.
Fig. 1 includes six plots which summarize the results of our analysis. Figs. 1a and 1b break the data down by user: the facilitator (Fac), the five male participants (M1-M5), and the four female participants (F1-F4). Fig. 1a shows the number of interactions initiated by each user, and Fig. 1b shows the average number of comments and likes made to posts initiated by each user. There was significant variation in the number of interactions initiated by each participant: 6-24 posts, 24-135 comments, and 0-85 likes. About half of the total likes in the dataset were made by F4. While the average participant made fewer posts and comments than the facilitator, F4 made almost 50% more comments than the facilitator. Finally, the facilitator received about half as many comments on his posts as the average participant. Figs. 1c and 1d compare the posts and comments made by the facilitator to those made by the participants. Fig. 1c indicates the fraction of posts that received a certain number of comments. A full half of the posts made by the facilitator received no comments, even though they were qualitatively similar to posts made by participants. About 15% of participant posts received seven or more comments; this fraction is roughly evenly distributed between seven and 22 comments. On average, the facilitator received 2.2 comments on his posts while the participants received 4.2. Fig. 1d indicates the number of posts tagged with each code (see Table I) for the facilitator and the average participant. The facilitator made about twice as many posts and comments involving each of Share, Answer, and Information; in each case, this was greater than one standard deviation above the average participant. In all other codes, the facilitator was within one standard deviation of the average participant. Reflection and Concern were the codes for which the average participant made more posts than the facilitator.
Figs. 1e and 1f show the Socialcast data over time. Fig. 1e compares the different interaction types made by the facilitator and the average participant over the course of the FOLC. The highest concentration of activity was in the middle of the first semester of the FOLC, and activity died down significantly over winter and spring breaks. Outside of these periods, activity from all of the users is fairly consistent. Fig. 1f displays the number of posts and comments with different codes that were made over the course of the FOLC. Here again there is fairly consistent use of all the codes throughout the FOLC. The only exception is in the second half of the spring semester. During this semester, participants engaged in teaching projects, which resulted in Collaboration interactions that were not present in the first semester. Because five other codes cannot be co-coded with Collaboration (see the Col definition in Table I), these other codes show up less frequently when Collaboration codes are present.

V. DISCUSSION & FUTURE WORK
These results are just a small slice of the wealth of information contained in the roughly 1000 posts and comments generated by this FOLC cohort. Nevertheless, we can draw some conclusions with respect to the issues raised in Sec. II.
Considering just the participants, we see clear evidence that they used Socialcast consistently over the course of the FOLC (Fig. 1e). They regularly asked for help, expressed concerns, and reflected on their teaching, which suggests that they sought help for improving their teaching; they also regularly answered questions, shared experiences, and provided information, which suggests that they offered help to each other (Figs. 1d and 1f). Thus, it is likely that Socialcast interactions supported the participants in improving their teaching.
There is also evidence that the participants formed a strong community through Socialcast. Many of the participants' posts and comments are coded as Social (Fig. 1d), which indicates exchanges of affirmation, empathy, and humor. Moreover, the extent of Collaboration interactions is surprising because the second-semester teaching projects were not framed by the facilitator as group projects; the participants themselves decided spontaneously to team up. Finally, participants received much more feedback on their posts than the facilitator did (Figs. 1b and 1c), and most of this feedback came from other participants.
We can also compare the participants to the facilitator. In Sec. II, one of the facilitator's goals was to encourage the use of Socialcast; this clearly happened. The facilitator also intended to model the kind of posting behavior that he hoped the participants would engage in. Accordingly, he posted and commented more than the majority of the participants (Fig. 1a). The distribution of kinds of posts and comments that he made is similar to that of the participants, but also differs in key ways (Fig. 1d). Because he is a more experienced teacher than any of the participants, he spent more of his time sharing experience, answering questions, and providing information than the participants did; similarly, he spent less time discussing concerns or reflecting. Thus, he seems to have successfully navigated the boundary between interacting as a participant and bringing his expertise in teaching to the group. Finally, the fact that participants ignored the facilitator's posts much more often than each other's (Figs. 1b and 1c) indicates that the participants believed it to be more important to respond to each other than to him. This is a positive outcome, because it suggests that the participants recognized the value of supporting each other as novice teachers rather than relying on the facilitator as a veteran teacher.
There are still many unanswered questions about the use of Socialcast in the FOLC. Further analysis of this dataset will focus on the content of the posts; in coding the data, we noticed that topics other than teaching were discussed among the participants, although they mostly centered on issues relevant to being a new faculty member. What topics are discussed, and how often do they come up? We also plan to conduct a similar analysis on Socialcast data from other cohorts. This will help us understand, e.g., how the patterns that we have noticed here change with different facilitators. Finally, while we expect that the creation of strong FOLC communities will lead participants to successfully maintain their use of RBISs, we will have to wait several years to discover whether this is actually the case.
Based on our analysis, we are confident that the participants of this FOLC took good advantage of the communication channels available to them to form a strong, supportive community, despite the physical distance separating them. We hope to reproduce the positive outcomes experienced by this cohort in others, both for the benefit of those participants and to help others trying to create productive, virtual communities.

FIG. 1. (Color online) Plots of the non-Meta Socialcast data. 1a: Number of interactions broken out by user (Fac is the facilitator, M1-M5 are the male participants, and F1-F4 are the female participants). The horizontal lines indicate the average number of interactions per user across the participants (see Table II). 1b: The average number of comments and likes made to posts initiated by each user. The horizontal lines indicate average values across the participants. 1c: Fraction of posts that received a certain number of comments, made by the facilitator and by all of the participants combined. The vertical lines indicate the average number of comments received by the facilitator and by all participants. 1d: Number of posts and comments with particular codes (see Table I) made by the facilitator and the average participant. Error bars indicate the standard deviation for each code across the nine participants. Codes are sorted in descending order of participant use. 1e: Number of posts (P), comments (C), and likes (L) made by the facilitator (Fac.) and the average participant (Part.) as a function of time over the course of the FOLC. Each horizontal segment represents a two-week interval. 1f: Number of posts and comments with particular codes (see Table I) as a function of time over the course of the FOLC. Each horizontal segment represents a two-week interval.

TABLE I. Codes and brief definitions used in analyzing Socialcast posts and comments.


TABLE II. Number, number/user, and number/user-week of non-Meta posts, comments, likes, and total interactions made by the facilitator and the participants of the FOLC over the course of 297 days.